\section{Introduction}\label{introduction}
Let $G$ be a reductive group defined over a number field
and let $P$ be a maximal parabolic subgroup of $G$ with Levi decomposition $P=MN$.
Given an automorphic representation $\pi$ of $M({\mathbb{A}})$, the adelic points of $M$, and a vector $f_\pi$ in the space of $\pi$,
one may construct an Eisenstein series $E(g;s,f_\pi)$ on $G({\mathbb{A}})$.
By the work of Langlands, this series, originally defined for $\Re(s)\gg 0$, has analytic continuation and functional equation.
If $\pi$ is generic then the Whittaker coefficients of $E$ may be expressed in terms of Langlands $L$-functions
for $\pi$, and the continuation and functional equation of these $L$-functions may be obtained from the study
of Eisenstein series---the Langlands-Shahidi method. If the parabolic subgroup $P$ is not maximal then a similar statement
is true; the Eisenstein series in that case is a function of more than one complex variable.
Now suppose instead that $\widetilde{G}$ is a metaplectic cover of degree $n$ of $G({\mathbb{A}})$. Such a cover exists
only when the base field contains enough roots of unity. By the work of M\oe glin and Waldspurger~\cite{MW}, the Eisenstein
series on $\widetilde{G}$ also have analytic continuation and functional equation. Hence so do their Whittaker
coefficients. However, the Whittaker functional is not unique and the coefficients may not typically be expressed
in terms of $L$-functions, even in the simplest case of the Eisenstein series induced from the Borel subgroup
(``Borel Eisenstein series''). For example, for the $n$-fold cover of $GL(2)$, each Whittaker coefficient is an infinite
sum of $n$-th order Gauss sums. These series, first studied by Kubota \cite{Ku},
thus have meromorphic continuation and
functional equation even though they are not (for $n>2$) Eulerian. This continuation implies that the
arguments of $n$-th order Gauss sums at prime arguments are uniformly distributed on the unit circle
\cite{Pat}.
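For the reader's convenience, the following small numerical sketch (an editorial toy computation over ${\mathbb{Q}}$, with a standard normalization of Gauss sums that need not agree with the one used below; the function names are ours) illustrates the $n$-th order Gauss sums in question and the absolute value $\sqrt{p}$ underlying the equidistribution statement.

```python
import cmath

def is_primitive_root(g, p):
    # g generates (Z/pZ)^x iff its powers hit all p-1 nonzero classes
    seen, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1

def gauss_sum(p, n):
    """g(chi) = sum_{a mod p} chi(a) e^{2 pi i a / p} for a character chi
    of exact order n modulo the prime p (requires n | p - 1)."""
    assert (p - 1) % n == 0
    g = next(a for a in range(2, p) if is_primitive_root(a, p))
    dlog, x = {}, 1
    for k in range(p - 1):      # discrete logarithms to base g
        dlog[x] = k
        x = x * g % p
    return sum(cmath.exp(2j * cmath.pi * (dlog[a] / n + a / p))
               for a in range(1, p))

# |g(chi)| = sqrt(p) for every nontrivial character mod p; it is the
# argument of the cubic (n = 3) sums that equidistributes as p varies.
for p in (7, 13, 31):
    assert abs(abs(gauss_sum(p, 3)) - p ** 0.5) < 1e-9
```

Only the arguments of these sums are subtle; their absolute values are forced, which is why the equidistribution statement concerns angles alone.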
When the $n$-fold cover of $GL(2)$ is replaced by the $n$-fold cover of $GL(r)$ the Whittaker coefficients
of the Borel Eisenstein series again involve infinite sums of number theoretic
interest, but the structure of the coefficients is considerably more subtle \cite{BBF5}. Each Whittaker
coefficient is a multiple Dirichlet series whose general coefficient
may be determined from the prime power coefficients by a `twisted' multiplicativity, similarly to the way that a
Gauss sum modulo a composite modulus may be expressed in terms of Gauss sums modulo prime powers.
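The classical prototype of this twisted multiplicativity can be checked directly: for coprime odd moduli and quadratic characters, the Gauss sum modulo the product is the product of the Gauss sums modulo the factors, twisted by residue symbols. The sketch below (our own toy verification over ${\mathbb{Z}}$, i.e.\ the case $n=2$, not the $\mathfrak{o}_S$ setting of the paper) makes this concrete.

```python
import cmath

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def tau(N):
    # Gauss sum of the quadratic character: sum_{a mod N} (a/N) e^{2 pi i a/N}
    return sum(jacobi(a, N) * cmath.exp(2j * cmath.pi * a / N)
               for a in range(N))

# twisted multiplicativity: tau(m m') = (m'/m)(m/m') tau(m) tau(m')
m, mp = 3, 5
lhs = tau(m * mp)
rhs = jacobi(mp, m) * jacobi(m, mp) * tau(m) * tau(mp)
assert abs(lhs - rhs) < 1e-9
```

The residue-symbol twist $(m'/m)(m/m')$ is the simplest instance of the twists that govern the general coefficients of the multiple Dirichlet series.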
Moreover, for any good prime $p$, each coefficient at powers of $p$ is a sum of products
of (sometimes degenerate) $n$-th order Gauss sums modulo powers of $p$,
which may be described by using {\sl crystal graphs}. These graphs are attached to representations
of the quantum group $U_q(\mathfrak{gl}_{r}({\mathbb{C}}))$. The description using crystal
graphs is uniform in both $p$ and $n$. We shall describe this in more detail below.
When $n$ is sufficiently large many of the Gauss sums become zero and an easier Lie theoretic
description that does not require crystal graphs
may also be given. In that case the number of nonzero terms at
powers of a fixed prime $p$ is exactly the order of the Weyl group, and there
is a natural correspondence between these terms and the elements of the Weyl group.
This description was first conjectured by Brubaker, Bump and Friedberg \cite{BBF1}; its
proof in this case follows from \cite{BBF5} using \cite{BBF2}.
It is natural to seek the Whittaker coefficients of Borel
Eisenstein series on covers of groups of other Cartan types, and to establish a relation between
these coefficients and crystal graphs. This work gives the
first general result. We treat the case of covers of split orthogonal groups
of Cartan type B.
For $n$, the degree of the cover, sufficiently large, a conjectural
Lie theoretic description of the Whittaker coefficients of Borel
Eisenstein series was presented for all Cartan types by Brubaker, Bump and
Friedberg \cite{BBF1, BBF2}.\footnote{The functional equations for the series were proved, but the
connection to Eisenstein series was only conjectured.}
A new feature arises when there are two root lengths: there are
two different Gauss sums that appear. When $n$ is odd these are Gauss sums
attached to two Galois-conjugate $n$-th order residue symbols, but when
$n$ is even these are Gauss sums of orders $n$ and $n/2$. Once again for $n$ sufficiently
large the nonzero terms at powers of a fixed prime $p$
should correspond naturally to the elements of the Weyl group, and there is a conjectural Lie theoretic
description. However, for lower degree covers additional terms should arise, which
are in the convex hull of the lattice points giving the powers of $p$ corresponding to the Weyl group elements.
For Eisenstein series on odd degree covers of a split orthogonal group of Cartan type B,
a conjectural description of these terms using crystal graphs was given by
Beineke, Brubaker and Frechette \cite{BeBrF1, BeBrF2}. The quantum group that
appears should be regarded as attached to the quantized Lie algebra
of the $L$-group. By a well-known principle going back to Savin, the analogue of the $L$-group for
odd degree covers matches that of the degree one cover, although for even degree covers
the situation is more ambiguous (see Section 4.4 of \cite{BBCF}).
Hence the conjectural description of Beineke, Brubaker and Frechette involved crystal
graphs of type C.
Beineke, Brubaker and Frechette also proved that their
description was compatible with the Lie theoretic description in \cite{BBF1, BBF2}.
So for odd degree covers there is a conjectured description of the Whittaker coefficients that is uniform in $n$.
However, for $n$ even, no parallel description had been given, even conjecturally.
In this paper we establish the conjecture of Beineke, Brubaker and Frechette
for Borel Eisenstein series on odd degree covers of split orthogonal groups of type B.
That is, we show that their Whittaker coefficients may be described in terms of
multiple Dirichlet series whose $p$-parts are calculated (in a specific way)
using crystal graphs of type C. In doing so,
we thus also confirm that the conjectured Lie theoretic description of Brubaker, Bump and Friedberg \cite{BBF1, BBF2}
holds for $n$ sufficiently large, $n$ odd. In addition,
we give a different formula for the case of even degree covers. The coefficients are again expressed
in terms of the combinatorial objects associated to crystal graphs of type C, but with a
different assignment of number-theoretic data to these objects. In fact, this second, new, description is
uniform in $n$.
We use this formula to confirm the conjectured Lie theoretic description for $n$ sufficiently large, $n$ even,
as well.
There is something surprising in our proof of the conjecture of Beineke, Brubaker and
Frechette, and to explain it we must go into more detail about how
crystal graphs are used in describing the Whittaker coefficients.
For type A, there are in fact two different descriptions of the $p$-part in terms of crystal graphs.
These descriptions are based on two different factorizations of the long element of the Weyl group
into simple reflections. The factorization is used to determine a path from each vertex of the crystal
graph to the lowest weight vector (the edges of the graph indicate the action of the Kashiwara operators
attached to each simple root, and the factorization determines which edges to use in forming the path).
Then the lengths of the path segments are used to determine a collection
of Gauss sums, which also depend on how the path fits in to the rest of the crystal graph. See \cite{BBF4},
Chs.\ 2 and 3,
for more details. For two factorizations which are as far apart as possible, the crystal graph description
is valid. Moreover, the equality of the terms arising from the two different factorizations is not a formal equality of summands, but
is much more intricate. In fact, this equality is enough to imply the functional equation
for the multiple Dirichlet series \cite{BBF3}.
A direct proof of the equality of these two different type A crystal graph descriptions is given in \cite{BBF4}.
It turns out that many terms in the two descriptions
may be identified after applying the Sch\"utzenberger involution. However,
the difficult ones are not identified; in fact the number of summands on the two sides
may be different, and the equality requires identities among Gauss sums. These terms are in a suitable
sense boundary terms for the polytope which is the convex hull of the lattice points describing
all possible paths. Moreover, the equality
is true for all $n$, but with different reasons for different $n$. To establish it,
one reduces inductively to
objects called {\it short patterns}. These patterns are then classified.
The most difficult terms are exactly the terms that come from the {\it totally resonant case}---see \cite{BBF4}, (6.14)
and following. These terms must be handled by a subtle combinatorial argument.
Returning to our situation, the Whittaker coefficients are computed inductively, viewing the Eisenstein
series on $SO_{2r+1}$ as induced from an Eisenstein series on $SO_{2r-1}$. This yields a complicated inductive
formula for each Whittaker coefficient as an exponential sum involving lower rank coefficients
(Theorem~\ref{inductivesum}).
There is also an inductive formula obtained from Beineke, Brubaker and Frechette's crystal graph description
(Lemma~\ref{ind-lemma}).
However, these formulae involve {\it different} number-theoretic weight factors, a situation
reminiscent of the two type A descriptions. Remarkably, this is not
merely an analogy. The equality of the two type C (on the $L$-group side) expressions is
then established
{\it using} both the equality of the two different type A crystal graph descriptions and
a classification of type C combinatorial structures (which we again call short patterns) that
is analogous to the type A classification. (In particular, there is a totally resonant
case that requires special effort.) This is a new phenomenon.
In type A, the induction-in-stages method matched the inductive crystal description exactly
(\cite{BBF5}, Sect.~8), and there was no need to
establish an equality to bridge the Eisenstein series
and the crystal descriptions. The need to use the two different type A
descriptions in studying other Cartan types
was also not expected.
Our inductive computation of the Whittaker coefficients works for covers of all degrees.
However, the identities involving Gauss sums are different in the cases of even
and of odd degree covers, due to the differing
orders of the multiplicative character in
the Gauss sums that arise. For $n$ odd, the conjecture of Beineke, Brubaker and Frechette prescribes
specific rules for attaching number-theoretic quantities to path lengths on the crystal graph.
These rules are in fact closely related to the type~A rules, and have a description in terms
of the maximality of the path segments with respect to the Kashiwara operators.
As they observe, these rules do not apply when $n$ is even, and indeed our analysis establishing
their conjecture makes use of the assumption that $n$ is odd. To address the
case $n$ even, we also present a {\it different}
set of rules for attaching number-theoretic quantities to path lengths that is uniform in the degree
of the cover, and we show that this different rule also matches the inductive formula from the Eisenstein series.
The rule we present is more combinatorially complicated, but does simplify for $n$ sufficiently large,
where it yields the conjectured Lie theoretic description of Brubaker, Bump and Friedberg.
Though our approach is global, working over a number field,
our results at primes $p$ may also be interpreted locally. Indeed, working adelically,
the Whittaker integral of a Borel Eisenstein series may be unfolded to an integral over the adelic
points of the unipotent radical $U$ of the Borel subgroup; however the section
involved is not factorizable, but an infinite sum of factorizable sections.
Following Brubaker, Friedberg and McNamara
(in preparation), one may use this to prove twisted multiplicativity, and to relate the global Whittaker
coefficients to local Whittaker integrals. As McNamara has proved \cite{McN}, over a local field $F_v$ the unipotent
subgroup $U(F_v)$ may be decomposed
into cells on which the integrand in the Whittaker integral is constant, and these cells may be given
the combinatorial structure of the crystal $B(-\infty)$. Thus the Whittaker integral may be evaluated
in terms of a sum over an (infinite) crystal. However, our result does more: we reduce the support
from $B(-\infty)$ to the finite crystal attached to a representation of a specific highest weight, and we show that
values are number theoretic and may be computed for $n$ odd using the crystal graph recipe of
\cite{BeBrF1}.
In Section~\ref{sect2} we fix the notation, and review some facts about metaplectic groups. The metaplectic
Eisenstein series are defined in Section~\ref{sect3}. In Section~\ref{sect4} we
compute the Whittaker coefficients of these Eisenstein series, using an induction-in-stages argument.
Each Whittaker coefficient is written as a Dirichlet series in several complex variables whose coefficients
satisfy an inductive relation giving the rank $r$ coefficients as sums of rank $r-1$ coefficients (Theorem~\ref{inductivesum}).
The twisted multiplicativity of these coefficients is established in Section~\ref{sect5}; this allows one to
reduce to the study of the coefficients indexed by powers of a single prime.
Section~\ref{sect6} has two parts. In Section~\ref{sect61} we
present a combinatorial version of the inductive formula of Section~\ref{sect4} for the $p$-power coefficients.
Then in Section~\ref{sect62} we review the conjectured description of Beineke, Brubaker and Frechette~\cite{BeBrF1,
BeBrF2} which uses crystal graphs, and we recast it in a similar inductive format. In Section~\ref{sec:p} we establish
their conjecture (Theorem~\ref{thm:main}) by the argument described above. And in Section~\ref{sec:even},
we establish a second inductive formula that applies in the even cover case as well.
This formula is then used in Section~\ref{sec9} to prove the
conjectured Lie theoretic description for $n$ sufficiently large.
The authors express their appreciation to Benjamin Brubaker and Daniel Bump for helpful discussions.
The first named author was supported by grants from the NSF (DMS-1001326) and NSA (H98230-10-1-0183).
\section{Metaplectic Groups}\label{sect2}
We begin by fixing the notation and constructing the $n$-fold cover
of $SO_{2r+1}$. Fix an integer $n\geq 1$. Let $F$ be a number field such that the group $\mu_{4n}$ of $4n$-th roots
of unity in $F$ has order $4n$.
(The requirement that $F$ contain the full group of $2n$-th roots of unity is necessary; containing the
full group of $4n$-th roots of unity is an assumption made for convenience. See McNamara~\cite{Mc}, Section~13.1,
for more information.)
Fix a finite set of places $S$ of $F$ containing all
archimedean places, all places ramified over ${\mathbb{Q}}$ (in particular
those dividing $n$), and which is sufficiently large that the ring $\mathfrak{o}_S$
of $S$-integers of $F$ is a principal ideal domain and the residue field at $v$ has at least
4 elements for all $v\not\in S$. For each $v\in S$ let $F_v$ denote the completion of
$F$ at $v$, and let $F_S = \prod_{v \in S} F_v$.
Let $(~,~)_v\in\mu_{2n}$ denote the $2n$-th order Hilbert symbol in $F_v$
and let $(~,~)_S\in\mu_{2n}$ denote the
product over $S$ of these local Hilbert symbols: $(~,~)_S=\prod_{v\in S}(~,~)_v$.
The metaplectic groups are constructed using a two-cocycle, as in Matsumoto \cite{Mat}.
In \cite{BBF5}, Brubaker, Bump and Friedberg, following earlier authors, described a specific cocycle
$\sigma_0$ in $H^{2}({\rm GL}_{2r+1}(F_{S}),\mu_{2n})$. Its
calculation involves the arithmetic of $F$ and is expressed in terms of products of Hilbert symbols
$(~,~)_S$. We will work with this cocycle composed with an inner automorphism of ${\rm GL}_{2r+1}$:
$$
g\to wgw^{-1},\qquad
w=\begin{pmatrix}
I_{r+1}&&&\\&&&1\\&&\iddots&\\&1&&
\end{pmatrix}.
$$
Let $\sigma\colon {\rm GL}_{2r+1}(F_{S})\times {\rm GL}_{2r+1}(F_{S})\to \mu_{2n}$ be the two-cocycle given by
$$\sigma(g_1,g_2)=\sigma_0(wg_1w^{-1},wg_2w^{-1}).$$ For example, on the torus one has
\begin{equation}\label{torus-cocycle}
\sigma\ppair{w^{-1}\begin{pmatrix}
x_{1}&&\\&\ddots&\\&&x_{2r+1}
\end{pmatrix}w, w^{-1}\begin{pmatrix}
y_{1}&&\\&\ddots&\\&&y_{2r+1}
\end{pmatrix}w}
=\prod_{i>j}(x_{i},y_{j})_{S}.
\end{equation}
Let $\widetilde{GL}_{2r+1}(F_S)$ denote the $2n$-fold cover of the group $GL_{2r+1}(F_S)$
corresponding to $\sigma$.
Thus the elements of $\widetilde{GL}_{2r+1}(F_S)$ are ordered pairs $(g,\zeta)$ with
$g\in GL_{2r+1}(F_S)$ and $\zeta\in \mu_{2n}$, and the multiplication in $\widetilde{GL}_{2r+1}(F_S)$ is given by
$$(g_1,\zeta_1)(g_2,\zeta_2)=(g_1g_2,\sigma(g_1,g_2)\zeta_1\zeta_2).$$
Let $p:\widetilde{GL}_{2r+1}(F_S)\to GL_{2r+1}(F_S)$
denote the projection $p((g,\zeta))=g$.
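As a consistency check on this construction, associativity of the product $(g_1,\zeta_1)(g_2,\zeta_2)$ amounts exactly to the 2-cocycle identity $\sigma(g_1,g_2)\sigma(g_1g_2,g_3)=\sigma(g_1,g_2g_3)\sigma(g_2,g_3)$. The toy model below (ours, not part of the paper's setup) verifies this on diagonal elements, replacing the product of local symbols $(~,~)_S$ in (\ref{torus-cocycle}) by the single real Hilbert symbol, which like $(~,~)_S$ is bimultiplicative.

```python
from itertools import product

def hilb(a, b):
    # real Hilbert symbol (a, b): -1 iff both a and b are negative
    return -1 if (a < 0 and b < 0) else 1

def sigma(x, y):
    # torus cocycle prod_{i > j} (x_i, y_j) on diagonal tuples
    val = 1
    for i in range(len(x)):
        for j in range(len(y)):
            if i > j:
                val *= hilb(x[i], y[j])
    return val

def mult(g1, g2):
    # cover multiplication: (x, z1)(y, z2) = (xy, sigma(x, y) z1 z2)
    (x, z1), (y, z2) = g1, g2
    xy = tuple(a * b for a, b in zip(x, y))
    return (xy, sigma(x, y) * z1 * z2)

# associativity holds because bimultiplicativity of the symbol gives
# sigma(x, y) sigma(xy, z) = sigma(x, yz) sigma(y, z)
vals = [(1, -1), (-1, 1), (-1, -1)]
for x, y, z in product(vals, repeat=3):
    assert mult(mult((x, 1), (y, 1)), (z, 1)) == \
           mult((x, 1), mult((y, 1), (z, 1)))
```

The same bimultiplicativity argument applies verbatim with $(~,~)_S$ in place of the single real symbol.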
To work with the $n$-fold metaplectic cover of the split special orthogonal group we restrict the $2n$-fold cover of the general linear group.
This shift in the degree of the cover is due to the cover-doubling phenomenon described in Bump, Friedberg and Ginzburg \cite{BFG}, Section 2.
We shall see it reflected in Lemma~\ref{cover} below, in which all $2n$-fold residue symbols appear as squares.
Let $Q:F_S^{2r+1}\times F_S^{2r+1}\to F_S$ be the quadratic form
$$Q(x,y)=\sum_{i=1}^{2r+1}x_i\, y_{2r+2-i} -\frac{3}{2} x_{r+1}y_{r+1}.$$
Then $Q(x,y)$ is represented by the matrix
$$
J=\begin{pmatrix}&&&&&&1\\&&&&&\iddots&\\&&&&1&&\\&&&-1/2&&&\\&&1&&&&\\&\iddots&&&&&\\1&&&&&& \end{pmatrix}.
$$
Let $G$ or $SO_{2r+1}$ denote the split special orthogonal group which is
the stabilizer in $SL_{2r+1}$ of the quadratic form
$Q(x,y)$. Thus
$$G_{F_S}=\{g\in SL_{2r+1}(F_S)\mid~ ^t \!g J g=J\}.$$
Let $\widetilde{G}_{F_S}=p^{-1}(G_{F_S})$.
Similarly define $G_{F_v}$ and $\widetilde{G}_{F_v}$ for the completions
$F_v$ of $F$ at places $v$ of $F$. Then $\widetilde{G}_{F_S}$ is isomorphic to the quotient of the direct product
$\prod_{v\in S} \widetilde{G}_{F_v}$ by the equivalence relation
identifying the copies of $\mu_{2n}$ at each $v\in S$.
\begin{remark}{\rm The algebraic
group $SO_{2r+1}$ is not simply connected but has a two-fold covering, namely
the spin group $\rm{Spin}_{2r+1}$. So
there are two two-fold covers naturally attached to $SO_{2r+1}(F_v)$, namely
$\rm{Spin}_{2r+1}(F_v)$ and $\widetilde{G}_{F_v}$. These covers are not isomorphic.
Indeed, the spin group is an algebraic group
while the cocycle used to define the two-fold covering $\widetilde{G}_{F_v}$ depends on the arithmetic of the
field $F_v$. Moreover, though $\rm{Spin}_{2r+1}$ is a cover of the algebraic group $SO_{2r+1}$,
this cover does not descend to the $F_v$ points, so that in fact $\rm{Spin}_{2r+1}(F_v)$
is only a cover of the subgroup of $SO_{2r+1}(F_v)$ consisting of elements whose spinor norm is a square.
Let $\underline{G}$ be a semisimple connected
simply-connected split algebraic group defined over $F_v$. Then
Matsumoto \cite{Mat} defined an $n$-fold covering of $\underline{G}(F_v)$. To explain the relation between Matsumoto's covers and the ones used here, fix $n$ as above.
On the one hand, Matsumoto's result gives
a two-cocycle $\sigma_{M}$ on ${\rm Spin}_{2r+1}(F_{v})$ with values in $\mu_n$. On the other,
let $\delta$ be the spinor norm from $G_{F_{v}}$ to $F_{v}^{\times}/(F_{v}^{\times})^{2}$ and denote by $G'_{F_{v}}$ the kernel of $\delta$. There is an exact sequence
$$
1\longrightarrow \{\pm 1\}\longrightarrow {\rm Spin}_{2r+1}(F_{v})\stackrel{{\rm pr}}{ \longrightarrow} G'_{F_{v}}\longrightarrow 1.
$$
Then restricting the cocycle $\sigma$ defined above to $G'_{F_v}$
and then pulling it back, one obtains a cocycle
$\sigma_{res}$ on ${\rm Spin}_{2r+1}(F_{v})$: $\sigma_{res}(g,h):=\sigma({\rm pr}(g),{\rm pr}(h))$.
Referring to Matsumoto's results and using the notation in \cite{Mat}, in order to specify $\sigma_{M}$, we
choose $c_{\alpha}(s,t)=(s,t)^{-\|\alpha^{\vee}\|}_{n}$ where $\alpha$ is in the set $\Phi({\rm Spin}_{2r+1})$ of roots of ${\rm Spin}_{2r+1}$,
the short coroots have length 1, and $c_{\alpha}$ is defined on \cite{Mat}, p.\ 25.
Using the assumption that $-1\in (F^{\times}_{v})^{2n}$ and Matsumoto's relations on $\sigma_{M}$ in \cite{Mat}, it is
not difficult to check that the two cocycles $\sigma_{M}$ and $\sigma_{res}$ are in fact equal.
So the covers $\widetilde{G}_{F_v}$ obtained by embedding into a cover of a general linear group essentially
coincide with those constructed by Matsumoto.}
\end{remark}
\smallskip
If $c,d\in\mathfrak{o}_S$ are relatively prime, for $m$ a divisor of $2n$ let
$\left(\frac{c}{d}\right)_{m}$ denote the $m$-th power residue symbol on $\mathfrak{o}_S$.
Here the Hilbert and power residue symbols are normalized as in \cite{BBF5}, Sections 3 and 4.
(The $m$-th power residue symbols depend on the choice of $S$ but we suppress this from the notation.)
Then
we have the reciprocity law
\begin{equation}\label{reciprocity-law}
\left(\frac{c}{d}\right)_{2n}=(d,c)_S\left(\frac{d}{c}\right)_{2n}.
\end{equation}
Also,
\begin{equation}\label{2n-squared}
\left(\frac{c}{d}\right)_{2n}^2=\left(\frac{c}{d}\right)_{n}.
\end{equation}
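In the simplest case over ${\mathbb{Q}}$ with $2n=2$, where the power residue symbol becomes the Jacobi symbol, the reciprocity law (\ref{reciprocity-law}) reduces to classical quadratic reciprocity, the Hilbert-symbol factor $(d,c)_S$ playing the role of the sign $(-1)^{\frac{c-1}{2}\frac{d-1}{2}}$. This can be checked numerically (an editorial toy verification, not the paper's normalization):

```python
from math import gcd

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# (c/d)(d/c) = (-1)^{(c-1)/2 * (d-1)/2} for odd coprime positive c, d
for c in range(3, 40, 2):
    for d in range(3, 40, 2):
        if gcd(c, d) == 1:
            twist = (-1) ** (((c - 1) // 2) * ((d - 1) // 2))
            assert jacobi(c, d) == twist * jacobi(d, c)
```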
We embed $F$ (and in particular $\mathfrak{o}_S$)
into $F_S = \prod_{v \in S} F_v$ diagonally. We similarly embed $SL_{2r+1}(F)$ into $SL_{2r+1}(F_S)$ diagonally.
Then there is an embedding $g\mapsto (g,\kappa(g))$ of
$SL_{2r+1}(\mathfrak{o}_S)$ into $\widetilde{SL}_{2r+1}(F_S):=p^{-1}(SL_{2r+1}(F_S))$
(see \cite{BBF5}, Section 4). This embedding has the property that if $\beta$ is any positive
root of $SL_{2r+1}$ with respect to the standard Borel subgroup then
\begin{equation}\label{kubota-hom}
\kappa\left(w^{-1}i_\beta(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})w\right)=
\begin{cases}\left(\frac{d}{c}\right)_{2n}&\text{if $c\neq0$}\\1&\text{if $c=0$.}\end{cases}
\end{equation}
The embedding restricts to give an embedding of $G(\mathfrak{o}_S)$ into $\widetilde{G}_{F_S}$.
Let $\Phi$ denote the set of positive roots of $SO_{2r+1}$ with respect
to the standard Borel subgroup $B$ of upper triangular matrices of $SO_{2r+1}$. In
standard notation, $\Phi=\cpair{e_{i}\pm e_{j}, e_{i}\mid i<j},$ where $e_i$ denotes
the $i$-th standard unit vector in ${\mathbb R}^r$. For $\alpha\in \Phi$,
let $i_\alpha$ denote the canonical embedding of $SL_2$ into $SO_{2r+1}$
corresponding to $\alpha$. Then the cover doubling phenomenon noted above is
reflected in the following Lemma.
\begin{lemma}\label{cover}
Let $\alpha\in\Phi$. Then
\begin{equation}\label{cover-doubling}
\kappa\left(i_\alpha(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})\right)=\begin{cases}
{\left(\frac{d}{c}\right)_{2n}\left(\frac{a}{b}\right)_{2n}}&\text{if $\alpha=e_{i}-e_{j}$ for $i<j$}, \medskip \\
\left(\frac{d}{c}\right)_{n}& \text{if $\alpha=e_{i}+e_{j}$ for $i<j$}, \medskip \\
\left(\frac{d}{c}\right)_n^2&\text{if $\alpha$ is short.}
\end{cases}
\end{equation}
\end{lemma}
Indeed, if $\alpha\in \Phi$ is a long root, then we may write $i_\alpha=i_\beta\,i_{\beta'}$ where $\beta,\beta'$
are the positive roots of $SL_{2r+1}$ which restrict to $\alpha$ on the diagonal torus of $SO_{2r+1}\subset SL_{2r+1}$.
Since $\kappa$ is a homomorphism, combining
(\ref{kubota-hom}) and (\ref{2n-squared}) we find that (\ref{cover-doubling}) holds. If instead $\alpha$ is a short root,
then $i_\alpha$ is obtained from the symmetric square map. Indeed, for the short simple root $\alpha$ we have
$$i_\alpha(\gamma)=\begin{pmatrix}
I_{r-1}&&\\&M(\gamma)&\\&&I_{r-1}\end{pmatrix}\qquad \text{with}\qquad
M((\smallmatrix a&b\\c&d\endsmallmatrix))=
\begin{pmatrix} a^2&ab&b^{2}\\2ac&bc+ad&2bd\\c^2&cd&d^{2}\end{pmatrix};$$
for the other short roots the rows and columns are moved accordingly.
From the block-compatibility of the metaplectic cocycle established by Banks, Levy and Sepanski \cite{BLS} (or from the original construction of $\kappa$
by Bass, Milnor and Serre \cite{BMS}) it follows that $\kappa(i_\alpha(\gamma))=\kappa(M(\gamma))$.
It may be checked that
$$\kappa\left(M((\smallmatrix a&b\\c&d\endsmallmatrix))\right)=\left(\frac{d^2}{c^2}\right)_{2n}=\left(\frac{d}{c}\right)_n^2.$$
Indeed, this follows from \cite{BBFH1}, Eqn.\ (17), when $n=3$ by an easy computation (after taking into
account that the normalization of $\kappa$ there uses $\left(\tfrac{c}{d}\right)_n$ in place of
$\left(\tfrac{d}{c}\right)_n$), and a similar formula pertains in general
(see \cite{BBFH2}, Section 1).
Thus the Lemma holds in all cases.
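The key point used above is that $\gamma\mapsto M(\gamma)$ is a homomorphism: it is the symmetric square representation of $SL_2$, written in a rescaled basis. A quick machine check on sample elements of $SL_2({\mathbb{Z}})$ (our own sketch, transcribing the matrix $M$ displayed above):

```python
def sym_square(a, b, c, d):
    # the 3x3 matrix M((a b; c d)) displayed in the text
    return [
        [a * a, a * b, b * b],
        [2 * a * c, b * c + a * d, 2 * b * d],
        [c * c, c * d, d * d],
    ]

def mat_mul(X, Y):
    # product of 3x3 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_mul2(g, h):
    # product of 2x2 matrices given as nested tuples
    (a, b), (c, d) = g
    (e, f), (gg, hh) = h
    return ((a * e + b * gg, a * f + b * hh),
            (c * e + d * gg, c * f + d * hh))

# homomorphism property M(gh) = M(g) M(h) on determinant-one samples
g = ((1, 2), (1, 3))
h = ((2, 3), (3, 5))
(a, b), (c, d) = mat_mul2(g, h)
assert sym_square(a, b, c, d) == mat_mul(sym_square(1, 2, 1, 3),
                                         sym_square(2, 3, 3, 5))
```

Since $M$ is a homomorphism, $\kappa\circ M$ may be computed on generators, which is how the displayed value of $\kappa(M(\gamma))$ is checked.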
\section{Eisenstein Series}\label{sect3}
In this Section we define the Eisenstein series on $\widetilde{G}_{F_S}$.
Let $B=TU$ be the standard Borel subgroup of $G=SO_{2r+1}$; the torus $T$ is given by
$$T=\{{\rm diag}(t_{1},t_{2},\dots,t_{r},1,t^{-1}_{r},\dots,t_{2}^{-1},t_{1}^{-1})\}$$
and the subgroup $U$ consists of the upper triangular unipotent matrices in $G$.
The cocycle restricted to $T_{F_S}$ is given by (\ref{torus-cocycle}). It is easy to check that $\widetilde{T}_{F_S}$,
the pullback of $T_{F_S}$ to $\widetilde{G}_{F_S}$, is not abelian. However, the subgroup
$\Omega= {\mathfrak o}^{\times}_{S}(F^{\times}_{S})^{n}$ is a maximal isotropic subgroup of $F^{\times}_{S}$ with respect to $(~,~)_S$,
and accordingly
the subgroup
$\widetilde{T}_{\Omega}$ consisting of $t$ as above such that $t_{i}\in \Omega$
for all $i$, $1\leq i\leq r$, is an abelian subgroup of $\widetilde{G}_{F_S}$ (and is in fact a maximal abelian
subgroup of $\widetilde{T}_{F_S}$).
Let $\mathbf s=(s_1,\dots,s_r)\in {\mathbb{C}}^r$. For $x\in F_S$, let $|x|=\prod_{v\in S}|x_v|_v$.
For \break $t={\rm diag}(t_{1},t_{2},\dots,t_{r},1,t^{-1}_{r},\dots,t^{-1}_{2},t^{-1}_{1})\in T_{F_S}$,
let
$$
{\mathfrak J}(t)=(\prod^{r-1}_{i=1}\prod^{i}_{j=1} |t_{j}|^{2s_{i}})\prod^{r}_{i=1}|t_{i}|^{s_{r}}.
$$
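Concretely, ${\mathfrak J}$ is multiplicative in $t$, since each factor $|t_j|$ is. The following direct transcription (ours, with ordinary real absolute values standing in for $|\cdot|$, checked in the case $r=2$) records the definition:

```python
def J_factor(t, s):
    # J(t) = (prod_{i=1}^{r-1} prod_{j=1}^{i} |t_j|^{2 s_i}) * prod_{i=1}^{r} |t_i|^{s_r}
    r = len(t)
    val = 1.0
    for i in range(r - 1):
        for j in range(i + 1):
            val *= abs(t[j]) ** (2 * s[i])
    for i in range(r):
        val *= abs(t[i]) ** s[r - 1]
    return val

# multiplicativity: J(t t') = J(t) J(t')
t1, t2, s = (2.0, 3.0), (5.0, 7.0), (1.0, 2.0)
t12 = tuple(a * b for a, b in zip(t1, t2))
assert abs(J_factor(t12, s) - J_factor(t1, s) * J_factor(t2, s)) < 1e-6
```

This multiplicativity is what makes the induced space $I({\bf s})$ defined just below well behaved under the abelian subgroup $\widetilde{T}_{\Omega}$.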
Let $\underline{\text{s}}:G_{F_S}\to\widetilde{G}_{F_S}$ be the trivial section $\underline{\text{s}}(g)=(g,1)$.
A function $f\colon \widetilde{G}_{F_S}\rightarrow {\mathbb{C}}$ is {\it genuine} if
$f((g,\zeta))=\zeta f({\underline{\text{s}}}(g))$ for all $\zeta\in \mu_{2n}$.
Let $I({\bf s})$ be the space consisting of all smooth genuine complex-valued functions $f$ on $\widetilde{G}_{F_{S}}$ such that
$$
f({\underline{\text{s}}}(tu)g)={\mathfrak J}(t)f(g),
$$
for all $t\in \widetilde{T}_{\Omega}$, $u$ in $U(F_S)$ and $g\in \widetilde{G}_{F_S}$.
Let $\widetilde{G}_{F_{S}}$ act on the space $I({\bf s})$ by right translation. Then $I({\bf s})$ is a representation of $\widetilde{G}_{F_{S}}$.
By Lemma 2 in McNamara \cite{Mc}, $I({\bf s})$ contains a
$G(\mathfrak{o}_{S})$-invariant function, and this function is unique up to a constant. However, we shall consider the full
induced space below and keep track of the dependence
on the inducing data.
Suppose $f\in I({\bf s})$ and that $\Re(s_i)$ is sufficiently large. Then $f(bg)=f(g)$ for all $b\in B(\mathfrak{o}_S)$ and $g\in \widetilde{G}_{F_S}$. Define the Eisenstein series
$$E_f(g,\mathbf s)=\sum_{\gamma\in B(\mathfrak{o}_S)\backslash G(\mathfrak{o}_S)} f(\gamma g), \qquad g\in \widetilde{G}_{F_S}.$$
In this formula, $\gamma\in G(\mathfrak{o}_S)$ is embedded in $\widetilde{G}_{F_S}$ via the map
$\gamma\mapsto (\gamma,\kappa(\gamma))$ as described
above. Then this series converges absolutely and (for a given flat section)
has analytic continuation to all ${\bf s}\in{\mathbb{C}}^r$ (\cite{MW}).
\section{An Inductive Formula for the Whittaker Coefficients}\label{sect4}
The goal of this Section is to establish an inductive formula for the Whittaker coefficients of the Borel Eisenstein
series on the $n$-fold cover of $SO_{2r+1}$. This formula expresses the coefficients in terms of similar coefficients
but with $r$ replaced by $r-1$.
Let $\omega_r$ be the $(2r+1)\times(2r+1)$ matrix with alternating $1$ and $-1$ on the anti-diagonal and zero elsewhere:
$$
\omega_r=\begin{pmatrix}
&&&&1\\&&&-1&\\&&1&&\\&\iddots&&&\\
1&&&&
\end{pmatrix}.
$$
Then $\omega_r$ is a representative for the longest Weyl element in $G_F$.
The Whittaker coefficients of concern are described as follows. Let $\psi$ be a complex valued additive character
on $F_S$ whose conductor is exactly $\mathfrak{o}_S$.
(For the existence of $\psi$, see Brubaker and Bump \cite{BB}, Lemma 1.) Recall that $U$ denotes
the unipotent radical of the Borel subgroup $B$ of $G$; the group $U(F_S)$ embeds in $\widetilde{G}_{F_S}$
under the trivial section $\underline{\text{s}}(u)=(u,1)$. Let $\mathbf m=(m_1,\dots,m_r)$ be an $r$-tuple of nonzero elements
of $\mathfrak{o}_S$, and let
$$\psi_{\mathbf m}(u)=\psi\left(m_1u_{1,2}+m_2 u_{2,3}+\dots +m_{r-1} u_{r-1,r}+m_r u_{r,r+1}\right).$$
Then our goal is to compute the integral
\begin{equation}\label{Whit-coeff}
W_{\mathbf m}(f,\mathbf s)=\int_{U(\mathfrak{o}_S)\backslash U(F_S)} E_f(\underline{\text{s}}(u),\mathbf s)\,\psi_{\mathbf m}(u)\,\mathrm{d} u.
\end{equation}
For later use, for $f\in I({\bf s})$ as above, we also introduce the Whittaker functionals
for sufficiently large $\Re(s_{i})$,
\begin{align*}
{\Lambda}_{{\bf m},t}(f)&={\mathfrak J}(t)^{-1}\int_{U(F_S)}f({\underline{\text{s}}}(t){\underline{\text{s}}}(\omega_{r}){\underline{\text{s}}}(u))\,\psi_{\mathbf m}(u)\,\mathrm{d} u.
\end{align*}
For $0\neq C\in \mathfrak{o}_S$, let $|C|$ denote the cardinality of $\mathfrak{o}_S/C\mathfrak{o}_S$.
Also for $1\leq i\leq r$ let $\alpha_i$ denote the $i$-th simple root of (the connected component of) the Langlands
dual group $^LG$, with the long root being $\alpha_r$,
and let $\|\alpha_i\|$ denote the length of $\alpha_i$, normalized so that the short roots have length $1$.
Let $\varepsilon_i=\|\alpha_i\|^2$, so $\varepsilon_{i}=1$ for $1\leq i\leq r-1$ and $\varepsilon_{r}=2$.
We will prove the following result.
\begin{theorem}\label{inductivesum}
$$W_{\mathbf m}(f,\mathbf s)=\sum_{0\neq C_i\in \mathfrak{o}_S/\mathfrak{o}_S^\times}
\frac{H(C_1,\dots,C_r;\mathbf m)}{|C_1|^{2s_1}\cdots |C_r|^{2s_r}}\,
\Lambda_{\mathbf m, \mathbf C}(f),$$
where $\Lambda_{\mathbf m, \mathbf C}(f)$ is defined as above and
$$\mathbf C={\rm diag}(C^{-1}_{1},C^{-1}_{2}C_{1},\dots,C^{-1}_{r-1}C_{r-2},C^{-2}_{r}C_{r-1},1,C^{2}_{r}C^{-1}_{r-1},\dots,C_{1}).$$
The coefficients $H$ satisfy the inductive relation
\begin{align}\label{thesum}
&H(C_{1},\dots,C_{r};m_{1},\dots,m_{r})=\\
&\sum_{\substack{0\neq D_{i}\in {\mathfrak{o}}_{S}/{\mathfrak{o}}^{\times}_{S}\\0\neq d_{i}\in {\mathfrak{o}}_{S}/{\mathfrak{o}}^{\times}_{S}}}
\sum_{\substack{c_{i}\bmod{d_{i}}\\(c_i,d_i)=1}}\,\,\prod_{k=1}^r \left(\frac{c_k}{d_k}\right)_n
\left(\frac{c_{2r-k}}{d_{2r-k}}\right)_n\cdot
\prod^{r-1}_{i=1}\prod^{2r-1}_{j=2r-i}(d_{2r-i}/d_{i},d_{j})_{S}\cdot \prod^{r}_{i=2}({\mathfrak d}_{i-1}/{\mathfrak d}_{i},D_{i})^{-\varepsilon_{i}}_{S}
\nonumber \\
&\times\psi\left(m_{1}\frac{c_{1}}{d_{1}}+
\sum_{j=1}^{r-1} m_{j+1}\left(\frac{u_{j}c_{j+1}}{d_{j+1}}+(-1)^{\varepsilon_{j+1}}\frac{d_{j}c_{2r-j}u_{2r-1-j}}{d_{j+1}d_{2r-j}}\right)\right)
|d_{r}|^{2r-2}\prod_{j=1}^{r-1} \left|d^{j-1}_{j}d^{r+j-2}_{r+j}\right|
\nonumber\\
&\times H\left(D_{2},\dots,D_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2}d_{2r-1}},\frac{m_{3}d_{2}d_{2r-3}}{d_{3}d_{2r-2}},\dots,\frac{m_{r-1}d_{r-2}d_{r+1}}{d_{r-1}d_{r+2}},\frac{m_{r}d_{r-1}}{d_{r+1}}\right)\nonumber
\end{align}
where the outer sum is over $D_i$, $2\leq i\leq r$, and $d_i$, $1\leq i\leq 2r-1$, such that (modulo units)
$C_{i}={\mathfrak d}_{i}D_{i}$, where $D_{1}=1$ for convenience and
\begin{equation}\label{eq:C}
\begin{array}{rl}
{\mathfrak d}_{1}&=d_{1}d_{2}\cdots d_{r-1} d_{r}^{2}d_{r+1} \cdots d_{2r-1},\\
{\mathfrak d}_j&=d_jd_{j+1}\cdots d_{r-1}d_r^2d_{r+1}\cdots d_{2r-j}d_{2r-j+1}^2\dots d_{2r-1}^2\qquad 2\leq j\leq r-1,\\
{\mathfrak d}_{r}&=d_{r}d_{r+1}\cdots d_{2r-1},\\
\end{array}
\end{equation}
and such that the following divisibility conditions hold in $\mathfrak{o}_S$:
\begin{align}
d_{j+1}&\vert m_{j+1} d_{j}~\text{for}~1\leq j\leq r-1 \label{eq:div-d} \\
d_{j+1}d_{2r-j}&\vert m_{j+1} d_{j}d_{2r-j-1}~\text{for}~1\leq j\leq r-2,\,\,\ d_{r+1}\vert m_{r}d_{r-1}. \label{eq:div-d-2}
\end{align}
The integers $u_i$, $1\leq i\leq 2r-2$, are chosen so that $c_iu_i\equiv 1 \bmod d_i$ (and the
sum is independent of these choices due to the divisibility conditions).
\end{theorem}
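For orientation, we record the base case $r=1$ of this formula, in which the coefficient is a single Gauss sum: $H(C_{1};m_{1})=g_{2}(m_{1},C_{1})$, as noted in the proof of Theorem~\ref{thm:twist-m} below. Comparing with the inner sum of \eqref{thesum}, where $\varepsilon_{1}=2$, this Gauss sum is formed with the square of the $n$-th power residue symbol; the display below is a sketch, up to the normalization of $g_{2}$ fixed in Sect.~\ref{sect6}:

```latex
% Base case r = 1 (up to the normalization of g_2 fixed in Sect. 6):
W_{m_{1}}(f,s_{1})
  =\sum_{0\neq C_{1}\in\mathfrak{o}_S/\mathfrak{o}_S^{\times}}
   \frac{g_{2}(m_{1},C_{1})}{|C_{1}|^{2s_{1}}}\,\Lambda_{m_{1},C_{1}}(f),
\qquad
g_{2}(m,C)=\sum_{\substack{c\bmod C\\ \gcd(c,C)=1}}
   \left(\frac{c}{C}\right)_{\!n}^{2}\psi\!\left(\frac{mc}{C}\right).
```

Thus the first Whittaker coefficient is a Dirichlet series in Gauss sums, of the Kubota type recalled in the introduction.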
We turn to the proof of the Theorem.
To begin, let us rewrite the Eisenstein series as a maximal parabolic Eisenstein series.
(Representation-theoretically, this step is transitivity of induction.) Let $P$ be the maximal
parabolic subgroup of $SO_{2r+1}$ consisting of matrices of the form
$$\begin{pmatrix} *&*&*\\&*&*\\&&*\end{pmatrix}$$
where the middle block is $2r-1$ by $2r-1$.
Then
$$E_f(g,s)=\sum_{\gamma\in P(\mathfrak{o}_S)\backslash G(\mathfrak{o}_S)}
\Theta_f(\gamma g)\qquad g\in \widetilde{G}_{F_S}$$
where
$$\Theta_f(g)=\sum_{\gamma\in B(\mathfrak{o}_S)\backslash P(\mathfrak{o}_S)} f(\gamma g).$$
Notice that we may identify $B(\mathfrak{o}_S)\backslash P(\mathfrak{o}_S)$ with
$B'(\mathfrak{o}_S)\backslash G'(\mathfrak{o}_S)$, where the primes denote the corresponding groups of
rank one lower. Moreover, we may index the cosets in
$P(\mathfrak{o}_S)\backslash G(\mathfrak{o}_S)$ by the bottom rows of coset representatives, modulo units.
The Eisenstein series is given as a sum over $\gamma\in P(\mathfrak{o}_S) \backslash G(\mathfrak{o}_S)$.
We must find coset representatives and compute their contributions.
Though it is not difficult to exhibit coset representatives,
the computation is complicated by the fact that $G(\mathfrak{o}_S)$ embeds in $\widetilde{G}_{F_S}$
via the map $\kappa$, which is not easy to compute.
To exhibit coset representatives for this quotient and compute $\kappa$ on them,
we shall make use of the calculation for type A
by Brubaker, Bump and Friedberg \cite{BBF5}, Section 5. We show that it is possible to construct
coset representatives from products of three matrices:
two coming from embedded copies of $SL_r$ and one from an embedded $SL_2$ corresponding
to a short root. We then combine the calculations of \cite{BBF5} with various Bruhat decompositions
to obtain an integral expression
for the Whittaker coefficient. This may be evaluated, and leads to the inductive formula of the Theorem.
We begin by exhibiting the coset representatives. (We work in $G(\mathfrak{o}_S)$ and
then move to the cover later.) To parametrize the cosets, we use the map
$G(\mathfrak{o}_S)\to \mathfrak{o}_S^\times\backslash \mathfrak{o}_S^{2r+1}$ which takes
a matrix $\gamma\in G(\mathfrak{o}_S)$ to its last row modulo multiplication of the
entire row by units. This map factors through $P(\mathfrak{o}_S)\backslash G(\mathfrak{o}_S)$;
a coset is uniquely determined by its image, and the image is contained in the set of
isotropic vectors with respect to the quadratic form
corresponding to $J^{-1}$ whose entries have gcd 1, again modulo units.
In fact, the last row map establishes a one-to-one correspondence to this subset. Indeed,
consider the three embeddings $i_1:SL_r\to G$, $i_2:SL_2\to G$, $i_3:SL_r\to G$
given as follows. The embedding $i_1$ is described via blocks:
$$i_1:\begin{pmatrix}g&v\\w&a \end{pmatrix} \hookrightarrow
\begin{pmatrix}a'&&&w'&\\&g&&&v\\&&1&&\\v'&&&g'&\\&w&&&a \end{pmatrix},
$$
where $g, g'\in M_{(r-1)\times(r-1)}$, $v, v', w^{t}, {w'}^{t}\in M_{(r-1)\times 1}$, and $a, a'\in M_{1\times 1}$;
the primed entries are uniquely determined so that the matrix is in $G$. The embedding $i_2$ is the
composition of the symmetric square map $M$ above with an embedding of $SO_3$ into $SO_{2r+1}$:
$$i_2:\begin{pmatrix}a&b\\c&d\end{pmatrix}\hookrightarrow
\begin{pmatrix} a^2&&ab&&b^2\\&I_{r-1}&&&\\2ac&&ad+bc&&2bd\\&&&I_{r-1}&\\c^2&&cd&&d^2\end{pmatrix}.
$$
The embedding $i_3$ maps to the Levi subgroup of the Siegel parabolic of $G$:
$$i_3: h\hookrightarrow \begin{pmatrix}h'&&\\&1&\\&&h\end{pmatrix},$$
where once again the primed matrix $h'$ is uniquely determined by the requirement that
the image of $i_3$ be in $G$.
\begin{lemma}Let $\mathbf L = (L_1,\dots,L_{2r+1})\in\mathfrak{o}_S^{2r+1}$ be an isotropic
vector with respect to the quadratic form corresponding to $J^{-1}$,
and suppose that $\gcd(L_1,\dots,L_{2r+1})=1$.
Then there exist $g_1,g_3\in SL_r(\mathfrak{o}_S)$
and $g_2\in SL_2(\mathfrak{o}_S)$ such that $i_1(g_1)\,i_2(g_2)\,i_3(g_3)$ has bottom row $\mathbf L$.
\end{lemma}
\begin{proof}
Given a matrix $\gamma\in G(\mathfrak{o}_S)$, there is a $g_3\in SL_r(\mathfrak{o}_S)$ such
that $\gamma\, i_3(g_3)$ has bottom row $\mathbf L$ with $L_{r+2}=\dots=L_{2r}=0$. So it is sufficient to show that
every $\mathbf L$ such that $L_{r+2}=\dots=L_{2r}=0$ is the bottom row of a matrix of the form $i_1(g_1)\,i_2(g_2)$.
But the bottom row of $i_1\left(\left(\begin{smallmatrix}g&v\\w&a\end{smallmatrix}\right)\right)
i_2\left(\left(\begin{smallmatrix}a_2&b_2\\c_2&d_2\end{smallmatrix}\right)\right)$ (with the blocks for $i_1$ as above) is
$$\begin{pmatrix}ac_2^2&w&ac_2d_2&0_{1\times(r-1)}&ad_2^2\end{pmatrix}.$$
Let us choose
$$\begin{array}{cccc}
w=(L_2,\dots,L_{r}),&c_2=\frac{L_1}{\gcd(L_1,L_{r+1})},&d_2=\frac{L_{r+1}}{\gcd(L_1,L_{r+1})},&
a=\gcd(L_1,L_{2r+1}).
\end{array}$$
Then it is easy to check that the product has the desired last row. (Here the greatest common divisors
are determined up to units which are adjusted to obtain the exact equality of bottom rows.) Let us
remark that since the vector $\mathbf L$ is isotropic, it satisfies the equation
$L_1L_{2r+1}=L_{r+1}^2$, and this implies, for example, that $c_2$ is also given by
$c_2=\frac{L_{r+1}}{\gcd(L_{r+1},L_{2r+1})}.$
\end{proof}
Let $P_r$ denote the standard parabolic subgroup of $SL_r$ of type $(r-1,1)$. Then we have
\begin{lemma} \label{lm:g1g3}
A complete set of coset representatives for $P(\mathfrak{o}_S)\backslash G(\mathfrak{o}_S)$
is given by $i_1(g_1)\,i_2(g_2)\,i_3(g_3)$ where $g_1,g_3\in P_r(\mathfrak{o}_S)\backslash SL_r(\mathfrak{o}_S)$
and $g_2\in P_2(\mathfrak{o}_S)\backslash SL_2(\mathfrak{o}_S)$.
\end{lemma}
Indeed, in the proof above we see that, modulo units, the bottom rows of $g_1$ and $g_2$
are determined by $\mathbf L$. Also, right-multiplying $\gamma\in G(\mathfrak{o}_S)$
by $i_3(P_r(\mathfrak{o}_S))$ does not change its bottom row (modulo units). Since the bottom row modulo units
parametrizes the cosets, we have constructed a full set of coset representatives.
Let $G_{\omega_{r}}$ denote the big Bruhat cell
$G_{\omega_{r}} =B\omega_{r} B$ of $G$. We will focus on $\gamma$ such that $P(\mathfrak{o}_S)\gamma \in
P(\mathfrak{o}_S) \backslash (G(\mathfrak{o}_S)\cap G_{\omega_{r}}(F_S))$
since, in the standard way, the other cells do not contribute to the coefficients
attached to $\psi_{\mathbf m}$ as it is a generic additive character of the unipotent subgroup $U$ of $B$.
For each such coset, we will work with $\gamma'=\gamma\omega_r^{-1}$ and factor this.
We now proceed to use the type A computation of \cite{BBF5}, Section 5 (especially (25) there), and the Bruhat decomposition.
The type A computation gives, by writing representatives as products of embedded $SL_2$'s
and using the Bruhat decomposition on each factor, factorizations for
$i_1(g_1)$ and $i_3(g_3)$. We may similarly carry out a Bruhat decomposition of $i_2(g_2)$.
Each of these factorizations is indexed by parameters $d_i$, one for each simple root used
(these are the $d_i$ which appear in Theorem~\ref{inductivesum}). We will denote the parameters
for $g_1$ by $d_{r+1},\dots,d_{2r-1}$, the parameter for $g_2$ by $d_r$
(this is simply the lower right entry of $g_2$), and the parameters for
$g_3$ by $d_1,\dots,d_{r-1}$.
Suppose $\gamma'=i_1(g_1)\,i_2(g_2)\,i_3(g_3)$. Then the above ingredients give a factorization
$$
\gamma' ={\mathfrak U}^{+}_{1}{\mathfrak D}_{1}{\mathfrak U}^{-}_{1}{\mathfrak U}^{+}_{2}{\mathfrak D}_{2}{\mathfrak U}^{-}_{2}{\mathfrak U}^{+}_{3}{\mathfrak D}_{3}{\mathfrak U}^{-}_{3}.
$$
Here the ${\mathfrak U}^{+}$ are upper triangular unipotent, the ${\mathfrak U}^{-}$ are lower triangular unipotent, and the
${\mathfrak D}$ are diagonal. We record
the matrices.
First, we have the diagonal matrices ${\mathfrak D}_i$, $i=1,2,3$, as well as their product ${\mathfrak D}$ which we will require later.
\begin{align*}
{\mathfrak D}_{1}&={\rm diag}\left((\prod^{r-1}_{i=1}d_{r+i})^{-1},d^{-1}_{2r-1},d^{-1}_{2r-2},\dots,d^{-1}_{r+1},1,d_{r+1},\dots, d_{2r-1},\prod^{r-1}_{i=1}d_{r+i}\right);\\
{\mathfrak D}_{2}&={\rm diag}\left(d^{-2}_{r},I_{2r-1},d^{2}_{r}\right);\\
{\mathfrak D}_{3}&={\rm diag}\left((\prod^{r-1}_{i=1}d_{i})^{-1},d_{1},d_{2},\dots,d_{r-1},1,d^{-1}_{r-1},\dots,d^{-1}_{1},\prod^{r-1}_{i=1}d_{i}\right);\\
{\mathfrak D}&={\mathfrak D}_{1}{\mathfrak D}_{2}{\mathfrak D}_{3}\\&=
{\rm diag}\left((d_{r}\prod^{2r-1}_{i=1}d_{i})^{-1}, \frac{d_{1}}{d_{2r-1}},\frac{d_{2}}{d_{2r-2}},\dots,\frac{d_{r-1}}{d_{r+1}},1,\frac{d_{r+1}}{d_{r-1}},\dots,\frac{d_{2r-1}}{d_{1}},d_{r}\prod^{2r-1}_{i=1}d_{i}\right).
\end{align*}
Second, we record the lower triangular unipotents:
$$
{\mathfrak U}^{-}_{1}=\begin{pmatrix} 1&&&&\\&V_{1}&&&\\&&1&&\\w'&&&V'_{1}&\\&w&&&1 \end{pmatrix},
\qquad
{\mathfrak U}^{-}_{2}=\begin{pmatrix}1&&&&\\ &I_{r-1}&&&\\\frac{2c_{r}}{d_{r}}&&1&&\\&&&I_{r-1}&\\ \frac{c^{2}_{r}}{d^{2}_{r}}&&\frac{c_{r}}{d_{r}}&&1 \end{pmatrix},
$$
and
$$
{\mathfrak U}^{-}_{3}=\begin{pmatrix}V'_{3}&&\\&1&\\&&V_{3} \end{pmatrix}
~\text{with}~
V_{3}=\begin{pmatrix}1&&&&&&\\
\frac{u_{r-2}c_{r-1}}{d_{r-1}}&1&&&&&\\
*&\frac{u_{r-3}c_{r-2}}{d_{r-2}}&1&&&&\\
\vdots&\vdots&\vdots&\ddots&&&\\
*&*&*&\dots&1&&\\
*&*&*&\dots&\frac{u_{1}c_{2}}{d_{2}}&1&\\
*&*&*&\dots&*&\frac{c_{1}}{d_{1}}&1 \end{pmatrix}.
$$
In these matrices, the $c_i$ and $d_i$ are coprime and $c_iu_i\equiv 1\bmod d_i$. The formula for $V_1$ is similar
to that for $V_3$ but with all indices shifted up by $r$. The matrices $V'_i$ and $w'$ are determined so that the
unipotents are in $G$.
Third, we have the following upper triangular unipotents:
$$
{\mathfrak U}^{+}_{2}=\begin{pmatrix}1&&-\frac{u_{r}}{d_{r}}&&\frac{u^{2}_{r}}{d^{2}_{r}}\\&I_{r-1}&&&\\&&1&&-\frac{2u_{r}}{d_{r}}\\&&&I_{r-1}&\\&&&&1 \end{pmatrix}
$$
$$
{\mathfrak U}^{+}_{3}=\begin{pmatrix}U'_{3}&&\\&1&\\&&U_{3} \end{pmatrix}
~~\text{where}~~
U_{3}=\begin{pmatrix} 1&&&&-\frac{u_{r-1}}{d_{r-1}}\\&1&&&-\frac{u_{r-2}}{d_{r-1}d_{r-2}}\\&&\ddots&&\vdots\\&&&1&-\frac{u_{1}}{\prod^{r-1}_{i=1}d_{i}}\\&&&&1\end{pmatrix}.
$$
We will not need ${\mathfrak U}^+_1$ (it is in the unipotent radical of $P$).
After multiplying and rearranging, one finds that
$
\gamma'={\mathfrak U}^{+}{\mathfrak D}{\mathfrak U}^{-},
$
where ${\mathfrak U}^{+}$ is in the unipotent radical of $P$ and
$$\psi_{\mathbf m}(\omega_{r}^{-1} {\mathfrak U}^- \omega_{r})=
\psi\left(m_{1}\frac{c_{1}}{d_{1}}+
\sum_{j=1}^{r-1} m_{j+1}\left(\frac{u_{j}c_{j+1}}{d_{j+1}}+(-1)^{\varepsilon_{j+1}}\frac{d_{j}c_{2r-j}u_{2r-1-j}}{d_{j+1}d_{2r-j}}\right)\right).
$$
We now proceed to compute the Whittaker coefficients (compare with \cite{BBF5}, Section 5).
The above factorization lifts from $G$ to $\widetilde{G}_{F_S}$ but at the expense of some power residue
and Hilbert symbols.
Indeed, computations similar to those of \cite{BBF5}, pg.\ 1098, using Lemma~\ref{cover}, show that in $\widetilde{G}_{F_S}$, writing $\gamma_{j}=i_{j}(g_{j})$,
\begin{align*}
&(\gamma_{1},\kappa(\gamma_1))=\prod^{2r-1}_{k=r+1} \ppair{\frac{c_{k}}{d_{k}}}_{n}\,\,
\prod^{2r-1}_{i=r+2}\prod^{i}_{j=r+1}(d_{i},d_{j})^{-1}_{S}{\underline{\text{s}}}({\mathfrak U}^{+}_{1}){\underline{\text{s}}}({\mathfrak D}_{1}){\underline{\text{s}}}({\mathfrak U}^{-}_{1});\\
&(\gamma_2,\kappa(\gamma_2))=\left(\frac{d_r}{c_r}\right)^2_n(d_{r},c_{r})_S^{2}\,{\underline{\text{s}}}({\mathfrak U}^{+}_{2}){\underline{\text{s}}}({\mathfrak D}_{2}){\underline{\text{s}}}({\mathfrak U}^{-}_{2});\\
&(\gamma_{3},\kappa(\gamma_3))=\prod^{r-1}_{k=1}\left(\frac{d_k}{c_k}\right)_n(d_{k},c_{k})_{S}\,{\underline{\text{s}}}({\mathfrak U}^{+}_{3}){\underline{\text{s}}}({\mathfrak D}_{3}){\underline{\text{s}}}({\mathfrak U}^{-}_{3}).
\end{align*}
Indeed, the factor $\left(\tfrac{a}{b}\right)_{2n}$ of Lemma~\ref{cover} is multiplied by the Hilbert symbol $(b,d)_S$ in carrying
out this decomposition. But we have
$$\left(\frac{a}{b}\right)_{2n}(b,d)_S=\left(\frac{d}{b}\right)_{2n}^{-1}(b,d)_S=\left(\frac{b}{d}\right)_{2n}^{-1}
=\left(\frac{c}{d}\right)_{2n},$$
where these equalities are obtained by applying basic properties of the power-residue symbol along with the reciprocity law
(\ref{reciprocity-law}). Thus we obtain two factors of $\left(\frac{c_k}{d_k}\right)_{2n}$ for each $k$ which give
the factors of $\left(\frac{c_k}{d_k}\right)_{n}$.
Also a computation using (\ref{torus-cocycle}) yields
$$
\prod^{2r-1}_{i=r+2}\prod^{i}_{j=r+1}(d_{i},d_{j})^{-1}_{S}\prod^{3}_{i=1}{\underline{\text{s}}}({\mathfrak U}^{+}_{i}){\underline{\text{s}}}({\mathfrak D}_{i}){\underline{\text{s}}}({\mathfrak U}^{-}_{i})=\prod^{r-1}_{i=1}\prod^{2r-1}_{j=2r-i}(d_{2r-i}/d_{i},d_{j})_{S}\, {\underline{\text{s}}}({\mathfrak U}^{+}){\underline{\text{s}}}({\mathfrak D}){\underline{\text{s}}}({\mathfrak U}^{-}).
$$
Let $U_P$ denote the unipotent radical of the parabolic subgroup $P$; this is the group of upper triangular unipotent matrices whose off-diagonal entries outside of the first row and last column are all zero. Then we have a decomposition $U=U_PU_P'$
where $U_P'$ is the complementary subgroup. Now it is easy to check that $U_P(\mathfrak{o}_S)$
acts properly on the right on the set of cosets
in $P(\mathfrak{o}_S)\backslash (G(\mathfrak{o}_S)\cap G_{\omega_{r}}(F_S))$.
We use this to
replace the integration in (\ref{Whit-coeff}) with a double integration over $U_P(F_S)$ and $U_P'(\mathfrak{o}_S)
\backslash U_P'(F_S)$:
\begin{multline*}
\sum_{\gamma \in
P(\mathfrak{o}_S) \backslash (G(\mathfrak{o}_S)\cap G_{\omega_{r}}(F_S))}\int_{U(\mathfrak{o}_S)\backslash U(F_S)}
=\\
\sum_{\gamma \in
P(\mathfrak{o}_S) \backslash (G(\mathfrak{o}_S)\cap G_{\omega_{r}}(F_S))/ U_P(\mathfrak{o}_S)}
\int_{U_P(F_S)}\int_{U_P'(\mathfrak{o}_S)\backslash U_P'(F_S)}.
\end{multline*}
To carry this out with the cosets parametrized above, note that if $\gamma=\gamma'\omega_r$ where $\gamma'$
has bottom row $\mathbf{L}$, then modding out by $U_P(\mathfrak{o}_S)$ on the right corresponds to
taking all $L_k$ modulo $L_{2r+1}$ (which is nonzero since $\gamma$ is in the big cell). With the
parametrization above, $L_{2r+1}=d_r\prod_{j=1}^{2r-1}d_j$, and this corresponds to taking
$c_k$ modulo $\tilde{d}_k$ and prime to $d_k$, where
$\tilde{d}_k=\prod^{k}_{j=1}d_{j}$ for $k\leq r$ and $\tilde{d}_k=d_{r}d^{-1}_{2r-k}\prod^{k}_{j=1}d_{j}$
for $r<k\leq 2r-1$.
Putting all this together, we have
\begin{align}
W_{\mathbf m}&(f,\mathbf s) =\sum_{d_{k}}
\prod^{r-1}_{i=1}\prod^{2r-1}_{j=2r-i}(d_{2r-i}d_{i}^{-1},d_{j})_{S}\nonumber \\
&\sum_{\substack{c_{k}\bmod \tilde{d}_{k}\\ \gcd(c_k,d_k)=1}}\ppair{\frac{c_{k}}{d_{k}}}_n^{\varepsilon_k}
\psi\left(m_{1}\frac{c_{1}}{d_{1}}+
\sum_{j=1}^{r-1} m_{j+1}\left(\frac{u_{j}c_{j+1}}{d_{j+1}}+(-1)^{\varepsilon_{j+1}}\frac{d_{j}c_{2r-j}u_{2r-1-j}}{d_{j+1}d_{2r-j}}\right)\right)
\nonumber \\
&\int_{U_P(F_S)}\int_{U_P'(\mathfrak{o}_S)\backslash U_P'(F_S)}
\Theta\left({\underline{\text{s}}}({\mathfrak D})\,{\underline{\text{s}}}(\omega_{r})\,{\underline{\text{s}}}(u_P')\,{\underline{\text{s}}}(u_P)\right)\,\psi\left(\sum^{r}_{i=1} m_{i}u_{i,i+1}\right)\,\mathrm{d} u_P'\,\mathrm{d} u_P. \label{eq:W1}
\end{align}
Here the $u_k$ satisfy $c_ku_k\equiv 1 \bmod {d}_k$.
Now the function $\Theta$ is invariant under the group of lower-triangular unipotent
matrices in $P(\mathfrak{o}_S)$. Putting such matrices
into the integral in \eqref{eq:W1}, moving them rightwards and changing variables, one sees that the integral vanishes unless the divisibility
condition \eqref{eq:div-d-2} holds. Moreover, replacing $u_j$ by $u_{j}+t_jd_{j}$, $1\leq j\leq r-1$,
and summing over $t_j$ modulo $d_j$, it follows that the expression is zero unless
the divisibility condition \eqref{eq:div-d} holds as well.
Since $\kappa(\gamma)$ depends only on $c_{k}$ modulo $d_{k}$ for $1\leq k\leq 2r-1$, summing over $c_{k}\bmod d_{k}$ and then multiplying the result by $|\prod^{k-1}_{j=1}d_{j}|$ when $k\leq r$, and by $|d_{r}d^{-1}_{2r-k}\prod^{k-1}_{j=1}d_{j}|$ when $r<k$, has the same effect as summing over $c_{k}$ modulo $\tilde{d}_{k}$, that is, as summing over $L_{i}\bmod L_{2r+1}$. In doing so
we obtain the factor $|d_{r}|^{2r-2}\prod^{r-1}_{k=1}|d_{k}|^{2r-2-k}\, |d_{r+k}|^{r-k-1}.$
Next we make use of the factorization
$$\omega_r=\begin{pmatrix}1&&\\&\omega_{r-1}&\\&&1
\end{pmatrix}\begin{pmatrix} &&1\\&-I_{2r-1}&\\1&& \end{pmatrix}.$$
To do so, for $g'\in \widetilde{{\rm SO}}_{2r-1}(F_{S})$, denote
$$
f'_{r-1}(g')=\int_{U_P(F_S)}f\left(i(g')\,{\mathfrak D}'\, {\underline{\text{s}}}\begin{pmatrix} &&1\\&-I_{2r-1}&\\1&& \end{pmatrix} {\underline{\text{s}}}(u_P)\right)\psi(m_{1}{(u_P)}_{1,2})\,\mathrm{d} u_P,
$$
where $i\colon \widetilde{{\rm SO}}_{2r-1}\to \widetilde{{\rm SO}}_{2r+1}$ is the embedding in the Levi subgroup of $P$ and
\begin{align*}
{\mathfrak D}'&=\begin{pmatrix}1&&\\&\omega_{r-1}&\\&&1\end{pmatrix}^{-1}{\mathfrak D}\begin{pmatrix}1&&\\&\omega_{r-1}&\\&&1\end{pmatrix}
\\
&={\rm diag}\left((d_{r}\!\prod^{2r-1}_{i=1} d_{i})^{-1}, \frac{d_{2r-1}}{d_{1}},\frac{d_{2r-2}}{d_{2}},\dots, \frac{d_{r+1}}{d_{r-1}},1,\frac{d_{r-1}}{d_{r+1}},\dots, \frac{d_{2}}{d_{2r-2}},\frac{d_{1}}{d_{2r-1}}, d_{r}\!\prod^{2r-1}_{i=1} d_{i}\right).
\end{align*}
(The function $f'_{r-1}$ of course depends on ${\mathfrak D}'$ but we suppress this dependence for notational convenience.)
The function $f'_{r-1}$ is in $I(\mathbf{s}')$ where $\mathbf{s}'=(s_2,\dots,s_r)$.
Then a straightforward calculation shows that
\begin{multline}\label{mess1}
\int_{U_P(F_S)}\!\!\Theta\left({\underline{\text{s}}}({\mathfrak D}) {\underline{\text{s}}}(\omega_{r}){\underline{\text{s}}}(u'_P){\underline{\text{s}}}(u_P)\right)\,
\psi(m_{1}{(u_P)}_{1,2})\,du_P \\
=\sigma(i(\omega_{r-1}),{\mathfrak D}')^{-1} \!E_{f'_{r-1}}\left({{\underline{\text{s}}}(\omega_{r-1}){\underline{\text{s}}}(u')},\mathbf{s}'\right)
\end{multline}
where $u'$ is the unipotent element in ${SO}_{2r-1}$ such that $i(u')={\mathfrak D}'u_P'{\mathfrak D}'^{-1}$.
We may drop the ${\underline{\text{s}}}(\omega_{r-1})$ on the right hand side of (\ref{mess1}) since $E_{f'_{r-1}}$ is automorphic.
The double integral in (\ref{eq:W1}) may then be simplified by changing $u_P'$ to ${\mathfrak D}'u_P'{\mathfrak D}'^{-1}$.
This variable change has no effect on the measure of compact quotient
$U_P'(\mathfrak{o}_S)\backslash U_P'(F_S)$. However, it changes the locations that support the character (corresponding to the simple roots), sending
\begin{multline*}
\cpair{u_{2,3},u_{3,4},\dots,u_{r-1,r},u_{r,r+1}}
\mapsto\\ \cpair{\frac{d_{2r-1}d_{2}}{d_{1}d_{2r-2}}u_{2,3},\frac{d_{2r-2}d_{3}}{d_{2}d_{2r-3}}u_{3,4},\dots,\frac{d_{r+2}d_{r-1}}{d_{r-2}d_{r+1}}u_{r-1,r},\frac{d_{r+1}}{d_{r-1}}u_{r,r+1}}.
\end{multline*}
These steps give the following evaluation of the double integral in \eqref{eq:W1}:
\begin{align}
&\sigma(i(\omega_{r-1}),{\mathfrak D}')^{-1} \int_{U'(\mathfrak{o}_S)\backslash U'(F_S)}E_{f'_{r-1}}({\underline{\text{s}}}(u),\mathbf{s}')
\nonumber\\
&\psi\ppair{\frac{m_{2}d_{1}d_{2r-2}}{d_{2r-1}d_{2}}x_{2,3}+\frac{m_{3}d_{2}d_{2r-3}}{d_{2r-2}d_{3}}x_{3,4}+\cdots+\frac{m_{r-1}d_{r-2}d_{r+1}}{d_{r+2}d_{r-1}}x_{r-1,r}+\frac{m_{r}d_{r-1}}{d_{r+1}}x_{r,r+1}}\,du.\label{eq:cocycle-d}
\end{align}
Here $U'$ is the unipotent radical of $SO_{2r-1}$; for convenience we write $u\in U'$ as $u=(x_{i,j})$ with $2\leq i,j\leq 2r$.
Now we may use the induction hypothesis and write the integral in (\ref{eq:cocycle-d}) as
\begin{align*}
&\sum_{0\neq D_{i}\in {\mathfrak o}_{S}/{\mathfrak o}^{\times}_{S}}H(D_{2},\dots,D_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2r-1}d_{2}},\dots,\frac{m_{r}d_{r-1}}{d_{r+1}})\prod^{r}_{i=2}|D_{i}|^{-2s_{i}} {\Lambda}_{{\bf m}',D}(f'_{r-1}),
\end{align*}
where
$$D={\rm diag}(D^{-1}_{2},D^{-1}_{3}D_{2},\dots,D^{-1}_{r-1}D_{r-2},D^{-2}_{r}D_{r-1},1,D^{2}_{r}D^{-1}_{r-1},\dots,D_{2})
$$
and ${\bf m}'=(\frac{m_{2}d_{1}d_{2r-2}}{d_{2r-1}d_{2}},\dots,\frac{m_{r}d_{r-1}}{d_{r+1}})$.
Let $C_{i}={\mathfrak d}_{i}D_{i}$, where the ${\mathfrak d}_{i}$ are defined in \eqref{eq:C} and $D_{1}=1$.
Then substituting in the definitions and comparing, one sees that
\begin{align}
{\mathfrak J}(D){\Lambda}_{{\bf m'},D}(f'_{r-1})&
=\sigma(i(\omega_{r-1}),{\mathfrak D}')\prod^{r}_{i=2}({\mathfrak d}_{i-1}/{\mathfrak d}_{i},D_{i})^{-\varepsilon_i}_{S}
\nonumber\\
&\left(\left\vert \frac{d_{1}}{d_{2r-1}}\right\vert^{2r-3}\left\vert\frac{d_{2}}{d_{2r-2}}\right\vert^{2r-5}\cdots
\left\vert\frac{d_{r-2}}{d_{r+2}}\right\vert^{3}\left\vert\frac{d_{r-1}}{d_{r+1}}\right\vert\right)^{-1}
{\mathfrak J}(C){\Lambda}_{{\bf m},C}(f).\label{eq:cocycle-D}
\end{align}
Combining Equations (\ref{eq:W1}), (\ref{mess1}), (\ref{eq:cocycle-d}) and (\ref{eq:cocycle-D}), Theorem~\ref{inductivesum} follows.
\qed
\smallskip
The functionals $\Lambda_{\mathbf m, \mathbf C}(f)$ may be analyzed as in \cite{BBF5},
Section 6. Since the arguments there apply with only minimal changes, we do not carry this out here.
\section{Twisted Multiplicativity}\label{sect5}
In this Section we establish the twisted multiplicativity properties of the coefficients $H$. This allows us to recover
all coefficients from those indexed by parameters which are all powers of a single prime.
See also \cite{BBF1}. Recall $\varepsilon_i=\|\alpha_i\|^2$.
\begin{theorem}\label{thm:twist-m}
If vectors ${\bf m}$, ${\bf m'}$ and ${\bf C}\in ({\mathfrak o}_{S}-\{0\})^{r}$ satisfy $\gcd(m'_{1}\cdots m'_{r}, C_{1}\cdots C_{r})=1$, then
\begin{equation}\label{eq:twist-m}
H({\bf C}; m_{1}m'_{1},\dots,m_{r}m'_{r})=\prod^{r}_{i=1}\ppair{\frac{m'_{i}}{C_{i}}}_n^{-\varepsilon_i}H({\bf C}; {\bf m}).
\end{equation}
\end{theorem}
\begin{proof}
The proof is by induction on $r$. For $r=1$, $H(C_{1};m_{1}m'_{1})=g_{2}(m_{1}m'_{1},C_{1})$, where $g_2$ is the Gauss
sum defined in Sect.~\ref{sect6} below. Equation~\eqref{eq:twist-m} follows by the usual properties of Gauss sums.
For general $r$, since all $d_i$ divide $C_1$, we have $\gcd(d_i,m'_j)=1$ for all $i,j$. Thus, the divisibility conditions~\eqref{eq:div-d}
for $(m_1m_1',\dots,m_r m'_r)$,
namely $d_{j+1}\vert m_{j+1}m'_{j+1} d_{j}$ for $1\leq j\leq r-1$, hold if and only if
$d_{j+1}\vert m_{j+1} d_{j}~\text{for}~1\leq j\leq r-1$. Similarly, the divisibility conditions~\eqref{eq:div-d-2} for $(m_1m_1',\dots,m_r m'_r)$
hold if and only if $d_{j+1}d_{2r-j}\vert m_{j+1} d_{j}d_{2r-j-1}$ for $1\leq j\leq r-2$ and $d_{r+1}\vert m_{r} d_{r-1}$.
In the inner sum in (\ref{thesum}), make the variable change $c_{i}\rightarrow (\prod_{j=1}^{i}m'_{j})^{-1}c_{i}$ if $1\leq i\leq r$, and $c_{i}\rightarrow (\prod_{\ell=1}^{r}m'_{\ell}\prod_{j=2r+1-i}^{r}m'_{j})^{-1}c_{i}$ for $r< i\leq 2r-1$.
This variable change removes all $m'_{i}$'s from the sum and contributes the factor
\begin{equation}\label{eq:factor1}
\prod^{r-1}_{i=1}\ppair{\frac{\prod^{i}_{j=1}m'_{j}}{d_{i}}}_n^{-1}
\ppair{\frac{\prod^{r}_{i=1}m'_{i}}{d_{r}}}_n^{-2}
\prod^{r}_{i=2}\ppair{\frac{\prod^{r}_{\ell=1}m'_{\ell}\prod^{r}_{j=i}m'_{j}}{d_{2r+1-i}}}_n^{-1}.
\end{equation}
In addition, by induction (note $\gcd(m'_{2}\cdots m'_{r},D_{2}\cdots D_{r})=1$), the $H$ on the right of (\ref{thesum}) contributes a factor of
\begin{equation}\label{eq:factor2}
\prod^{r}_{i=2}\ppair{\frac{m'_{i}}{D_{i}}}_n^{-\varepsilon_i}.
\end{equation}
Multiplying \eqref{eq:factor1} and \eqref{eq:factor2} and making use of \eqref{eq:C}, one obtains Theorem~\ref{thm:twist-m}.
\end{proof}
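The base case above admits a simple numerical illustration. The following sketch is not part of the argument: we take the degenerate case $n=1$, where every power residue symbol is trivial, so that the Gauss sum collapses to a classical Ramanujan sum (the helper \texttt{gauss\_sum} below is this degenerate sum, not the normalized $g_2$ of Sect.~\ref{sect6}). The assertions check the $r=1$ instances of the two twisted multiplicativity statements: invariance of $H(C_{1};m_{1})$ under coprime twists of $m_{1}$, and multiplicativity in the modulus for coprime moduli, as in Theorem~\ref{twmu2} below (for $n=1$ the root of unity $\mu$ is trivial).

```python
from math import gcd
import cmath

def gauss_sum(m, C):
    # Degenerate (n = 1) Gauss sum: the power residue symbol is trivial,
    # so this is the classical Ramanujan sum
    #   c_C(m) = sum over c mod C with gcd(c, C) = 1 of exp(2*pi*i*m*c/C),
    # which is a rational integer.
    total = sum(cmath.exp(2j * cmath.pi * m * c / C)
                for c in range(1, C + 1) if gcd(c, C) == 1)
    return round(total.real)

# r = 1 case of Theorem twmu2 for n = 1 (mu is trivial):
# H(C C'; m) = H(C; m) H(C'; m) whenever gcd(C, C') = 1.
for m in range(1, 6):
    for C in (3, 4, 5):
        for Cp in (7, 11, 13):
            assert gauss_sum(m, C * Cp) == gauss_sum(m, C) * gauss_sum(m, Cp)

# r = 1 case of Theorem thm:twist-m for n = 1:
# H(C; m m') = H(C; m) whenever gcd(m', C) = 1.
for C in (4, 5, 9):
    for m in (1, 2, 3):
        for mp in (1, 7, 11):
            assert gauss_sum(m * mp, C) == gauss_sum(m, C)
```

For general $n$ the symbols are nontrivial and the factor $\mu({\bf C},{\bf C'})$ of Theorem~\ref{twmu2} below appears.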
Let $\apair{\cdot,\cdot}$ be the standard Euclidean inner product, normalized so that the short roots have length $1$.
\begin{theorem}\label{twmu2}
If vectors ${\bf C}$ and ${\bf C'}$ in $({\mathfrak o}_{S}-\{0\})^{r}$ satisfy $\gcd(C_{1}\cdots C_{r}, C'_{1}\dots C'_{r})=1$, then
\begin{equation}\label{eq:twist-c}
H(C_{1}C'_{1},\dots,C_{r}C'_{r};{\bf m})=\mu({\bf C},{\bf C'})H({\bf C}; {\bf m}) H({\bf C'}; {\bf m}),
\end{equation}
where $\mu({\bf C},{\bf C'})$ is an $n$-th root of unity depending on ${\bf C}$ and ${\bf C'}$, given by
$$
\mu({\bf C},{\bf C'})=\prod^{r}_{i=1}\ppair{\frac{C_{i}}{C'_{i}}}_n^{\varepsilon_i}\ppair{\frac{C'_{i}}{C_{i}}}_n^{\varepsilon_i}\,\prod_{j<i}\ppair{\frac{C_{i}}{C'_{j}}}_n^{2\apair{\alpha_{i},\alpha_{j}}}
\ppair{\frac{C'_{i}}{C_{j}}}_n^{2\apair{\alpha_{i},\alpha_{j}}}.
$$
\end{theorem}
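For $r=1$ the product over $j<i$ is empty and $\varepsilon_{1}=2$, so the theorem reduces to the classical multiplicativity of Gauss sums. Indeed, writing $c\equiv c_{1}C'_{1}+c'_{1}C_{1}$ by the Chinese remainder theorem, the symbol and the additive character split as follows (a sketch, with the normalization of $g_{2}$ from Sect.~\ref{sect6}):

```latex
% r = 1: mu(C_1, C_1') = (C_1/C_1')_n^2 (C_1'/C_1)_n^2, since
\left(\frac{c}{C_{1}C'_{1}}\right)_{\!n}^{2}
 =\left(\frac{c_{1}C'_{1}}{C_{1}}\right)_{\!n}^{2}
  \left(\frac{c'_{1}C_{1}}{C'_{1}}\right)_{\!n}^{2}
 =\left(\frac{C'_{1}}{C_{1}}\right)_{\!n}^{2}
  \left(\frac{C_{1}}{C'_{1}}\right)_{\!n}^{2}
  \left(\frac{c_{1}}{C_{1}}\right)_{\!n}^{2}
  \left(\frac{c'_{1}}{C'_{1}}\right)_{\!n}^{2},
\qquad
\psi\!\left(\frac{mc}{C_{1}C'_{1}}\right)
 =\psi\!\left(\frac{mc_{1}}{C_{1}}\right)\psi\!\left(\frac{mc'_{1}}{C'_{1}}\right).
```

The splitting of the $d_{i}$ into products $e_{i}e'_{i}$ in the proof below generalizes this computation.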
\begin{proof}
Since $\gcd(C_{1}\cdots C_{r}, C'_{1}\dots C'_{r})=1$, there is a one-to-one correspondence between the $d_{i}$ satisfying
\begin{equation*}
\begin{array}{ll}
d_{r}\prod^{2r-1}_{i=1}d_{i}=C'_{1}C_{1};&\\
(d_{r}\prod^{2r-1}_{j=i}d_{j}\prod^{2r-1}_{j=2r+1-i}d_{j})\vert C_{i}C_i'&\text{for $1< i<r$};\\
(\prod^{2r-1}_{j=r}d_{j})\vert C_{r}C_r';&
\end{array}
\end{equation*}
and the divisibility conditions~\eqref{eq:div-d} and \eqref{eq:div-d-2}, and the pairs $e_{i}$, $e'_{i}$ of elements of ${\mathfrak o}_{S}$ with
$d_{i}=e_{i}e'_{i}$ for all $i$, satisfying
\begin{equation}
\begin{array}{l}
e_{r}\prod^{2r-1}_{i=1}e_{i}=C_{1}\text{ and }
e'_{r}\prod^{2r-1}_{i=1}e'_{i}=C'_{1};\\
(e_{r}\prod^{2r-1}_{j=i}e_{j}\prod^{2r-1}_{j=2r+1-i}e_{j})\vert C_{i} \text{ for } 1<i<r \text{ and }
(\prod^{2r-1}_{j=r}e_{j})\vert C_{r};\\
(e'_{r}\prod^{2r-1}_{j=i}e'_{j}\prod^{2r-1}_{j=2r+1-i}e'_{j})\vert C'_{i} \text{ for } 1<i<r\text{ and }
(\prod^{2r-1}_{j=r}e'_{j})\vert C'_{r};\\
e_{j+1}\vert m_{j+1} e_{j}~\text{for}~1\leq j\leq r-1;\\
e'_{j+1}\vert m_{j+1} e'_{j}~\text{for}~1\leq j\leq r-1;\\
e_{j+1}e_{2r-j}\vert m_{j+1} e_{j}e_{2r-j-1} \text{ for } 1\leq j\leq r-2 \text{ and } e_{r+1}\vert m_{r}e_{r-1};\\
e'_{j+1}e'_{2r-j}\vert m_{j+1} e'_{j}e'_{2r-j-1} \text{ for } 1\leq j\leq r-2 \text{ and }e'_{r+1}\vert m_{r}e'_{r-1}.\\
\end{array}
\end{equation}
Thus, we can split the sum over the $d_{i}$ into sums over $e_{i}$ and $e'_{i}$.
Let
\begin{equation}
c_i=\begin{cases}
x'_{i}\prod^{i}_{j=1}e_{j}+x_{i}\prod^{i}_{j=1} e'_{j} &\text{ for } 1\leq i\leq r,\\
x'_{i}e_{r}e^{-1}_{2r-i}\prod^{i}_{j=1}e_{j}+x_{i}e'_{r}e'^{-1}_{2r-i}\prod^{i}_{j=1}e'_{j} &\text{ for } r+1\leq i\leq 2r-1.
\end{cases}
\end{equation}
Since $\prod^{i-1}_{j=1}e_{j}$ is a unit modulo $e'_{i}$ and $\prod^{i-1}_{j=1}e'_{j}$ is a unit modulo $e_{i}$, as $x'_{i}$ varies modulo $e'_{i}$ and $x_{i}$ varies modulo $e_{i}$, the element $c_{i}$ varies over the residues modulo $d_{i}$, for $1\leq i\leq r$; the same holds for $r<i\leq 2r-1$. With this parametrization, the $c_{i}$ that are invertible modulo $d_{i}$ are those with $\gcd(x_{i},e_{i})=\gcd(x'_{i},e'_{i})=1$, and for such $c_{i}$, $u_{i}$ is determined by the equations
$$
\begin{array}{l}
x_{i}u_{i}\prod^{i}_{j=1}e'_{j}\equiv 1\bmod e_{i}
\text{ and } x'_{i}u_{i}\prod^{i}_{j=1}e_{j}\equiv 1\bmod e'_{i} \text{ for } 1\leq i\leq r;\\
x_{i}u_{i}e'_{r}e'^{-1}_{2r-i}\prod^{i}_{j=1}e'_{j}\equiv 1\bmod e_{i}\text{ and }
x'_{i}u_{i}e_{r}e^{-1}_{2r-i}\prod^{i}_{j=1}e_{j}\equiv 1\bmod e'_{i}\text{ for } r< i< 2r.
\end{array}
$$
Let $v_{i}$ modulo $e_{i}$ (resp. $v'_{i}$ modulo $e'_{i}$) satisfy $v_{i}x_{i}\equiv 1\bmod e_{i}$ (resp. $v'_{i}x'_{i}\equiv 1\bmod e'_{i}$).
Let $u_0=v_0=v_0'=1$. Substituting in and simplifying, one sees that
\begin{equation*}
\begin{array}{lll}
\psi\left(\frac{m_{i}u_{i-1}c_{i}}{d_{i}}\right)&=\psi\left(\frac{m_{i}v'_{i-1}x'_{i}}{e'_{i}}\right)
\psi\left(\frac{m_{i}v_{i-1}x_{i}}{e_{i}}\right)&\text{for $1\leq i\leq r$}\\
\psi\ppair{\frac{m_{i}d_{i-1}c_{2r+1-i}u_{2r-i}}{d_{i}d_{2r+1-i}}}&=
\psi\ppair{\frac{m_{i}e'_{i-1}x'_{2r+1-i}v'_{2r-i}}{e'_{i}e'_{2r+1-i}}}\psi\ppair{\frac{m_{i}e_{i-1}x_{2r+1-i}v_{2r-i}}{e_{i}e_{2r+1-i}}}
&\text{for $2\leq i\leq r$.}
\end{array}
\end{equation*}
Therefore, the exponential sum in Theorem~\ref{inductivesum} factors into two sums with similar divisibility conditions and similar exponentials.
To carry out the proof by induction, we must analyze the power residue symbols.
Below, we will work with pairs of numbers of the form $A$, $A'$ and $B$, $B'$ such that $\gcd(A,A')=\gcd(B,B')=1$.
For convenience, let $h(A,B)=\ppair{\frac{A}{B}}_n\ppair{\frac{A'}{B'}}_n$. Then
$h(A_{1}A_{2},B)=h(A_{1},B)h(A_{2},B),$ $h(A^{-1},B)=h(A,B)^{-1}$ and, by reciprocity,
$(A,B')_{S}(A',B)_{S}h(A,B)=h(B,A)$.
With this notation, we have
\begin{equation}
\ppair{\frac{c_{i}}{d_{i}}}_n=\ppair{\frac{x'_{i}}{e'_{i}}}_n\ppair{\frac{x_{i}}{e_{i}}}_n\cdot
\begin{cases}
h(e_{1}\cdots e_{i},e_{i})& \text{ if }1\leq i\leq r,\\
h(e_{r}e^{-1}_{2r-i}\prod^{i}_{j=1}e_{j},e_{i})& \text{ if } r<i\leq 2r-1.
\end{cases}\label{eq:twist-c/d}
\end{equation}
The factorization $d_i=e_ie_i'$ leads to factorizations of the $D_i$ in Equation~\eqref{eq:C} for $2\leq i\leq r$.
Rather than introducing a new letter, we denote the factors $D_i$, $D_i'$; we have
$\gcd(D_{2}\cdots D_{r},D'_{2}\cdots D'_{r})=1$.
Then by induction we have
\begin{align}
&H(D_{2}D'_{2},\dots,D_{r}D'_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2}d_{2r-1}},\dots,\frac{m_{r-1}d_{r-2}d_{r+1}}{d_{r-1}d_{r+2}},\frac{m_{r}d_{r-1}}{d_{r+1}})
\nonumber\\
=&\prod^{r}_{i=2}h(D_{i},D_{i})^{\varepsilon_i}\prod_{i<j}h(D_{j},D_{i})^{2\apair{\alpha_{i},\alpha_{j}}}H(D_{2},\dots,D_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2}d_{2r-1}},\dots,\frac{m_{r}d_{r-1}}{d_{r+1}})
\nonumber\\
&\times H(D'_{2},\dots,D'_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2}d_{2r-1}},\dots,\frac{m_{r}d_{r-1}}{d_{r+1}}). \label{eq:twist-D}
\end{align}
Let $m_{i}=n_{i}n'_{i}$ such that $n_{i}e_{i-1}e_{2r-i}/e_{i}e_{2r-i+1}$ and
$n'_{i}e'_{i-1}e'_{2r-i}/e'_{i}e'_{2r-i+1}$ are integral and $\gcd(n_{i},C'_{1})=\gcd(n'_{i},C_{1})=1$ for all $i$.
Then $\gcd(n_{i},D'_{j})=\gcd(n'_{i},D_{j})=1$ for all $i$, $j$. By Theorem~\ref{thm:twist-m},
\begin{align*}
&H(D_{2},\dots,D_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2r-1}d_{2}},\dots,\frac{m_{r}d_{r-1}}{d_{r+1}})\\
=&\prod^{r}_{i=2}\ppair{\frac{n'_{i}e'_{i-1}e'_{2r-i}/(e'_{i}e'_{2r-i+1})}{D_{i}}}_n^{-\varepsilon_i}
H(D_{2},\dots,D_{r};\frac{n_{2}e_{1}e_{2r-2}}{e_{2}e_{2r-1}},\dots,\frac{n_{r}e_{r-1}}{e_{r+1}}).
\end{align*}
Note that $\frac{n'_{i}e'_{i-1}e'_{2r-i}}{e'_{i}e'_{2r-i+1}}=n'_{r}e'_{r-1}/e'_{r+1}$ when $i=r$.
Moreover, for $2\leq i\leq r$,
$$
\ppair{\frac{n'_{i}e'_{i-1}e'_{2r-i}/e'_{i}e'_{2r-i+1}}{D_{i}}}_n^{-1}=\ppair{\frac{n'_{i}}{D_{i}}}_n^{-1}
\ppair{\frac{e'_{i-1}/e'_{2r-i+1}}{D_{i}}}_n^{-1}
\ppair{\frac{e'_{i}/e'_{2r-i}}{D_{i}}}_n.
$$
Since $\gcd(n'_{i},D_{i})=1$ for all $i$, we may then put the $n_{i}$ back into the coefficients $H$
by Theorem~\ref{thm:twist-m}. Doing so, and using a similar argument with the last factor in~\eqref{eq:twist-D}, we obtain
\begin{multline}
H(D_{2}D'_{2},\dots,D_{r}D'_{r};\frac{m_{2}d_{1}d_{2r-2}}{d_{2}d_{2r-1}},\dots,\frac{m_{r-1}d_{r-2}d_{r+1}}{d_{r-1}d_{r+2}},\frac{m_{r}d_{r-1}}{d_{r+1}})\\
=\prod^{r}_{i=2}\ppair{h(D_{i},D_{i})
h(e_{i-1}e^{-1}_{2r-i+1},D_{i})^{-1}h(e_{i}e^{-1}_{2r-i},D_{i})}^{\varepsilon_i}
\prod^{r-1}_{i=1}h(D_{i+1},D_{i})^{-\varepsilon_{i+1}}\\
H(D_{2},\dots,D_{r};\frac{m_{2}e_{1}e_{2r-2}}{e_{2}e_{2r-1}},\dots,\frac{m_{r}e_{r-1}}{e_{r+1}})
H(D'_{2},\dots,D'_{r};\frac{m_{2}e'_{1}e'_{2r-2}}{e'_{2}e'_{2r-1}},\dots,\frac{m_{r}e'_{r-1}}{e'_{r+1}}). \label{eq:twist-DD'}
\end{multline}
By contrast,
$$
\mu({\bf C},{\bf C'})=\prod^{r}_{i=1}h(C_{i},C_{i})^{\varepsilon_i}\prod^{r-1}_{i=1}h(C_{i+1},C_{i})^{-\varepsilon_{i+1}}.
$$
Substituting in the expressions from (\ref{eq:C}) and using the properties of the function $h(A,B)$ noted above, one finds that
\begin{align}
\mu({\bf C},{\bf C'})&=
\prod^{r}_{i=2}h(D_{i},D_{i})^{\varepsilon_i}h(e_{i}e^{-1}_{2r-i},D_{i})\prod^{r-1}_{i=1}h(D_{i+1},D_{i})^{-\varepsilon_{i+1}}
\nonumber\\
&\prod^{r}_{i=2}({\mathfrak d}_{i-1}{\mathfrak d}^{-1}_{i},D'_{i})_S^{-\varepsilon_i}({\mathfrak d}'_{i-1}{{\mathfrak d}'_{i}}^{-1},D_{i})_S^{-\varepsilon_i}
h({\mathfrak d}_{i-1}{\mathfrak d}^{-\varepsilon_i}_{i},D_i^{\varepsilon_i})^{-1}
\nonumber\\
&\prod^{r}_{i=1}\prod^{i}_{j=1}h(e_{j},e_{i})^{\varepsilon_i}\prod^{2r-1}_{i=r+1}\prod^{i}_{j=1}h(e_{r}e^{-1}_{2r-i}e_{j},e_{i})
\nonumber\\
&\prod^{r-1}_{i=1}\prod^{2r-1}_{j=2r-i}(e_{2r-i}e^{-1}_{i},e'_{j})_{S}\prod^{r-1}_{i=1}\prod^{2r-1}_{j=2r-i}(e'_{2r-i}{e'_{i}}^{-1},e_{j})_{S}. \label{eq:twist-d}
\end{align}
Finally, we collect the residue symbols arising in~\eqref{eq:twist-c/d} and~\eqref{eq:twist-DD'} and the Hilbert symbols arising in~\eqref{eq:cocycle-D} and~\eqref{eq:cocycle-d}. These match~\eqref{eq:twist-d}, and the theorem is proved.
\end{proof}
The proof above is similar to the proof for type A in \cite{BBF5}, Theorem 3. However, the formulae for
twisted multiplicativity in this Section involve the root system of the Langlands dual group
$^LG$, a phenomenon predicted in \cite{BBF1}.
Theorem~\ref{twmu2} agrees with \cite{BeBrF1} after noting that in
formula (20) of that work the order of
the simple roots $\alpha_i$ is the reverse of ours.
\section{Combinatorial Descriptions of the Coefficients at Powers of $p$}\label{sect6}
By twisted multiplicativity, the coefficients $H({\bf C};{\bf m})$ for all $\bf C$ and $\bf m$ (with $m_i\neq0$ for all $i$)
are determined from the coefficients in which $C_i$ and $m_i$ are powers of a prime. Let $p$ be a fixed prime.
In the rest of this paper, we will study
these $p$-power coefficients, and we shall show that they may be evaluated using crystal graphs
of type C. Such a graph is attached
to a given dominant weight $\eta$ for the Langlands dual group $^LG$
and describes a representation of the quantized universal enveloping algebra with
$\eta$ as highest weight.
See for example \cite{BBF4}, Chapter 2, for a brief summary and references.
For the $\bf m$-th Whittaker coefficient, $\eta$ will be related to the ${\rm{ord}}_p(m_i)$.
Littelmann~\cite{Litt}
described the set of path lengths attached to such a crystal graph as one traverses
from each vertex to the lowest weight vector,
following a prescribed order (coming from a certain factorization of the long
element of the Weyl group into simple reflections). Assembling these path lengths into a vector,
one obtains the lattice points of a certain polytope.
Each lattice point is called a Berenstein-Zelevinsky-Littelmann pattern (or BZL-pattern for short).
A type C BZL-pattern may be rewritten as a triangular array of non-negative integers $c_{i,j}$, with $1\leq i\leq r$
and $i\leq j\leq 2r-i$, satisfying certain inequalities. See \cite{Litt}, Theorem 6.1 and Corollary 6.1 or
\cite{BeBrF1}, Section 2.2.
Following Littelmann, write $\overline{c}_{i,j}=c_{i,2r-j}$ for $i\leq j\leq r$.
In particular, the top row of this array is given by
$$\left(\begin{matrix}
c_{1,1}&c_{1,2}&\dots&c_{1,r}&\overline{c}_{1,r-1}&\dots&\overline{c}_{1,1}
\end{matrix}\right).$$
In this Section we present two combinatorial descriptions related to crystal graphs. First, in Subsection~\ref{sect61}
we shall rewrite the inductive formula of Theorem~\ref{inductivesum} in the $p$-power case as a sum over
such top rows. Second, in Subsection~\ref{sect62} we shall recall the conjectural formula of Beineke,
Brubaker and Frechette \cite{BeBrF1}, which constructs certain coefficients $H_{BZL}$ directly from
the crystal graph. In Section~\ref{sec:p} we shall show that these coefficients in fact match the coefficients
$H$ of the Eisenstein series, as conjectured.
\subsection{A Combinatorial Version of the Inductive Formula}\label{sect61}
Let ${\bf m}$ be an $r$-tuple of non-negative integers (its components are the base-$p$ logarithms of the components of $\bf m$
in the prior sections, but for convenience we keep the same letter),
and let $\mu$ be the $r$-tuple of positive integers given by $\mu_{j}=m_{r+1-j}+1$.
Define $CQ_{1}(\mu)$ to be the set of $(2r-1)$-tuples $(d_{1},\dots,d_{2r-1})$ of non-negative integers satisfying the inequalities
\begin{equation}\label{ineq:CQ-d}
\begin{cases}
d_{j}\leq \mu_{r+1-j} & 1\leq j\leq r,\\
d_{j+1}+d_{2r-j}\leq \mu_{r-j}+d_{j}& 1\leq j\leq r-1.
\end{cases}
\end{equation}
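The set $CQ_{1}(\mu)$ is finite, since the inequalities bound each coordinate, and it is easily enumerated by brute force. The following Python sketch (an illustration only, not part of the argument; all names are ours) encodes the inequalities~\eqref{ineq:CQ-d} directly.

```python
from itertools import product

def in_CQ1(d, mu):
    """Membership test for CQ_1(mu); d = (d_1,...,d_{2r-1}), mu = (mu_1,...,mu_r).

    Encodes the inequalities (with the text's 1-based indices):
        d_j <= mu_{r+1-j}                      for 1 <= j <= r,
        d_{j+1} + d_{2r-j} <= mu_{r-j} + d_j   for 1 <= j <= r-1.
    """
    r = len(mu)
    D = {j: d[j - 1] for j in range(1, 2 * r)}
    M = {j: mu[j - 1] for j in range(1, r + 1)}
    return (all(D[j] <= M[r + 1 - j] for j in range(1, r + 1))
            and all(D[j + 1] + D[2 * r - j] <= M[r - j] + D[j]
                    for j in range(1, r)))

def CQ1(mu, bound):
    """Enumerate CQ_1(mu) inside the box [0, bound]^{2r-1}.
    Taking bound >= 2*max(mu) captures the whole (finite) set."""
    r = len(mu)
    return [d for d in product(range(bound + 1), repeat=2 * r - 1)
            if in_CQ1(d, mu)]
```

For instance, $CQ_{1}((1,1))$ (that is, $r=2$ and ${\bf m}=(0,0)$) consists of $8$ short patterns.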
Let ${\mathfrak t}=(d_{1},d_{2},\dots,d_{2r-1})$ be a $(2r-1)$-tuple of non-negative integers. The {\it weight vector} $k({\mathfrak t})$ of ${\mathfrak t}$ is
the vector
$$
k({\mathfrak t})=(k_{1}({\mathfrak t}),k_{2}({\mathfrak t}),\dots,k_{r}({\mathfrak t}))
$$
with
\begin{equation}\label{eq:k}
k_{r}({\mathfrak t})=\sum^{r}_{j=1} d_{2r-j} \text{ and }
k_{i}({\mathfrak t})=\sum^{2r-1}_{j=i}d_{j}+d_{r}+\sum^{i-1}_{j=1}d_{2r-j}, \text{ for } 1\leq i<r.
\end{equation}
(In Sect.~\ref{sec:p} and \ref{sec:even} we will also apply similar formulae to other vectors of odd length $2r'-1$ for various $r'$, replacing
$r$ by $r'$.)
When the choice of ${\mathfrak t}$ is clear, we write $k_{i}$ in place of $k_{i}({\mathfrak t})$.
If ${\mathfrak t}\in CQ_{1}(\mu)$, the weight $k({\mathfrak t})$ is called {\it strict} if
\begin{equation}\label{strict}
k_{i+1}-k_{i+2}< \mu_{r-i}+k_{i}-k_{i+1}\text{ for $1\leq i\leq r-2$, and } 0<\mu_{1}+k_{r-1}-2k_{r}.
\end{equation}
These inequalities are equivalent to
\begin{equation}\label{strictness-ineqs}
d_{i+1}<\mu_{r-i}+d_{i}-d_{2r-i}+d_{2r-i-1} \text{ for $1\leq i\leq r-2$, and } 0<\mu_{1}+d_{r-1}-d_{r+1}.
\end{equation}
Analogous to the top row of the $BZL$-patterns, define
\begin{equation}\label{eq:Fc}
\bar{{\mathfrak c}}_{1,j}=\sum^{j}_{i=1}d_{2r-i},~{\mathfrak c}_{1,r}=\sum^{r}_{i=1}d_{2r-i}+d_{r}, \text{ and } {\mathfrak c}_{1,j}={\mathfrak c}_{1,r}+\sum^{r-1}_{i=j}d_{i}, \text{ for } 1\leq j<r.
\end{equation}
Then we have
\begin{equation}\label{eq:delta-circle}
{\mathfrak c}_{1,j}-{\mathfrak c}_{1,j+1}=d_{j}~\text{for $1\leq j\leq 2r-1$ and $j\neq r$,
~and~}{\mathfrak c}_{1,r}-\bar{{\mathfrak c}}_{1,r-1}=2d_{r}.
\end{equation}
For convenience in the discussion below, we will write ${\mathfrak c}_{1,j}=\bar{{\mathfrak c}}_{1,2r-j}$ for $1\leq j\leq 2r-1$.
We need some facts about Gauss sums. Let $m,c\in \mathfrak{o}_S$, $c\neq0$, and $t\in{\mathbb Z}$. Let
$$g_{t}(m,c)=\sum_{\substack{d\bmod c\\ \gcd(c,d)=1}}\ppair{\frac{d}{c}}^{t}_{n}\psi\ppair{\frac{md}{c}}.$$
If $t=1$ we write simply $g(m,c)$. We recall the following properties of these Gauss sums at prime powers. Let $q=|p|$.
\begin{itemize}
\item Given a fixed prime $p$, for any integer $t\not\equiv 0\pmod n$ one has
$$
g_{t}(p^{k},p^{l})=\begin{cases}
q^{k}g_{tl}(1,p) &\text{ if $l=k+1$,}\\
\phi(p^{l})&\text{ if $l\leq k$ and $n\mid lt$,}\\
0&\text{ otherwise,}
\end{cases}
$$
where $\phi(p^{l})$ denotes Euler's totient function for ${\mathfrak o}_{S}/p^{l}{\mathfrak o}_{S}$.
\item For any integer $t\not\equiv0\pmod n$, $g_{t}(1,p)g_{n-t}(1,p)=q$.
\end{itemize}
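These prime-power evaluations are standard and can be checked numerically. The Python sketch below (an illustration only) does so for $n=2$ over ${\mathbb Z}$, where the power residue symbol becomes the Jacobi symbol, with $p=5$; the text's symbol lives on ${\mathfrak o}_{S}$, with $n$ odd in the cases treated here, but the evaluations above take the same shape.

```python
import cmath
from math import gcd

def legendre(d, p):
    """Quadratic residue symbol (d/p), p an odd prime, gcd(d, p) = 1."""
    return 1 if pow(d, (p - 1) // 2, p) == 1 else -1

def g(t, m, p, l):
    """g_t(m, p^l): sum over units d mod p^l of (d/p^l)^t e^{2 pi i m d / p^l},
    where (d/p^l) = (d/p)^l is the Jacobi symbol for the modulus p^l."""
    c = p ** l
    return sum(legendre(d, p) ** (l * t) * cmath.exp(2j * cmath.pi * m * d / c)
               for d in range(1, c) if gcd(d, c) == 1)

# Spot checks of the bulleted evaluations (n = 2, p = q = 5):
#   g_1(5, 5^2)  = 5 * g_2(1, 5) = -5        (case l = k+1),
#   g_1(5^2, 5^2) = phi(25) = 20             (case l <= k, n | lt),
#   g_1(5, 5) = 0, g_1(1, 5^2) = 0           (remaining cases).
```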
The Whittaker coefficients will be described below using {\it decorated} BZL-patterns. For type A,
see \cite{BBF5}, pg.\ 1087; for type C see \cite{BeBrF1}, pg.\ 50. The decorations correspond
to certain conditions on the root strings with regard to the Kashiwara (or root) operators;
see \cite{BBF5}, Sect.~2 and \cite{BBF4}, Chs.~2 and 3.
We rewrite the inductive formula using a similar idea.
Given ${\mathfrak t}\in CQ_{1}(\mu)$, define an array $\Delta({\mathfrak t})=({\mathfrak c}_{1,1},{\mathfrak c}_{1,2},\dots,\bar{{\mathfrak c}}_{1,1})$.
We decorate $\Delta({\mathfrak t})$ as follows:
\begin{enumerate}
\item The entry ${\mathfrak c}_{1,j+1}$ is circled if ${\mathfrak c}_{1,j}={\mathfrak c}_{1,j+1}$ or ${\mathfrak c}_{1,j}=0$.
\item The entry ${\mathfrak c}_{1,j}$ is boxed if equality holds in the upper-bound inequality~\eqref{ineq:CQ-d}.
\end{enumerate}
See also Section~\ref{sec:p} below.
To each entry $c={\mathfrak c}_{1,j}$ in a decorated array $\Delta({\mathfrak t})$, we associate the quantity
\begin{equation}\label{dec:Delta}
\gamma_{\Delta}(c)=\begin{cases}
q^{c} & \text{ if the entry $c$ is circled but not boxed,}\\
g(p^{c-1},p^{c}) & \text{ if the entry $c$ is boxed but not circled,}\\
q^c(1-q^{-1}) & \text{ if the entry $c$ is neither circled nor boxed and $n\mid c$,}\\
0& \text{ otherwise.}
\end{cases}
\end{equation}
Note that in particular $\gamma_\Delta(c)=0$ if the entry $c$ is both circled and boxed.
Compare \cite{BBF5}, Eqn.~(7).
Define
$G_{\Delta}({\mathfrak t})=q^{-\sum^{r}_{i=1}d_{2r-i}}\prod^{2r-1}_{j=1}\gamma_{\Delta}({\mathfrak c}_{1,j}).$
We continue to use multi-index notation. In particular,
if ${\bf k}=(k_1,\dots,k_r)$ and ${\bf m}=(m_1,\dots,m_r)$ are $r$-tuples of non-negative integers, then
we write $H(p^{\bf k};p^{\bf m})$ for $H(p^{k_{1}},\dots,p^{k_{r}};p^{m_{1}},\dots,p^{m_{r}})$.
A straightforward calculation using Theorem~\ref{inductivesum} establishes
the following Corollary.
\begin{corollary}\label{thm:H} Let $\bf k$ and $\bf m$ be $r$-tuples of non-negative integers. Then
\begin{equation}\label{ind-equation-Delta}
H(p^{\bf k};p^{\bf m})=\sum_{\substack{{\bf k}', {\bf k}''\\ {\bf k}'+(0,{\bf k}'')={\bf k}}}\,\,
\sum_{\substack{{\mathfrak t}\in CQ_{1}(\mu)\\ k({\mathfrak t})={\bf k}'}}G_{\Delta}({\mathfrak t})
H(p^{{\bf k}''};p^{{\bf m}'})
\end{equation}
where the outer sum is over tuples ${\bf k'}=(k_1',\dots,k_r')$ and ${\bf k''}=(k''_1,\dots,k_{r-1}'')$ of non-negative integers
such that ${\bf k}'+(0,{\bf k}'')={\bf k}$ and where
$${\bf m}'=(m_{2}+k'_{1}+k'_{3}-2k'_{2},\dots,m_{r-1}+k'_{r-2}+2k'_{r}-2k'_{r-1},m_{r}+k'_{r-1}-2k'_{r}).$$
\end{corollary}
\subsection{Coefficients Constructed From Type C Crystal Graphs}\label{sect62}
We now recall the definition for the $p$-power coefficients
$H_{BZL}(p^{\kappa};p^{\ell})$ defined by Beineke, Brubaker and Frechette in
\cite{BeBrF1}. Here $\kappa=(\kappa_{1},\dots,\kappa_{r})$ and ${\ell}=(l_{1},\dots,l_{r})$ are $r$-tuples of non-negative integers.
These coefficients will be given as weighted sums over certain $BZL$-patterns.
We will write these coefficients in an inductive form similar to that given in Corollary~\ref{thm:H} above.
(The difference in our choice of the ordering of the simple roots $\alpha_i$ as compared to \cite{BeBrF1}
is reflected in the indexing below.)
Let $\epsilon_{i}$ be the fundamental dominant weights of the dual group ${\rm Sp}_{2r}({\mathbb{C}})$, and let $\rho=\sum_i \epsilon_i$
be the Weyl vector. Given $\ell$, let ${\lambda}=\sum^{r}_{i=1}l_{i}\epsilon_{i}$ and let $\mu=\lambda+\rho$.
We will work on the crystal graph of highest weight $\mu$.
Recall that each BZL-pattern has a weight coming from the projection map from the vertices of the crystal graph to the weight lattice.
The contributions to $H_{BZL}(p^{\kappa};p^{\ell})$ come from BZL-patterns in
a single weight space determined by $\kappa$ in the crystal graph of highest weight $\mu$.
Denote the set of all BZL-patterns for this crystal graph as $BZL(\mu)$.
To give this precisely, let $\Gamma=\Gamma(c_{i,j})\in BZL(\mu)$ be a BZL-pattern (the notation
follows \cite{BeBrF1}). As explained above,
the entries $c_{i,j}$ may be put into a triangular array; then the entries in the rows are non-negative and weakly decreasing, and the entries $c_{i,j}$ satisfy certain inequalities. Specifically,
the entries $c_{1,j}$ in the top row satisfy the {\it upper-bound inequalities}
\begin{equation}\label{ineq:c}
\begin{cases}
\bar{c}_{1,j}\leq \mu_{r-j+1}+\bar{c}_{1,j-1}, &\\
c_{1,j}\leq \mu_{r-j+1}+\bar{c}_{1,j-1}-2\bar{c}_{1,j}+c_{1,j+1}+\bar{c}_{1,j+1},&
\end{cases}
\end{equation}
for all $1\leq j\leq r$, where $\bar{c}_{1,0}=0$.
Define the weight vector $\kappa(\Gamma)$ attached to the BZL-pattern $\Gamma$ by
$$
\kappa(\Gamma)=(\kappa_{1}(\Gamma),\kappa_{2}(\Gamma),\dots,\kappa_{r}(\Gamma))
$$
with
$$
\kappa_{1}(\Gamma)=\sum^{r}_{i=1}c_{i,r} \text{ and } \kappa_{j}(\Gamma)=\sum^{r+1-j}_{i=1} (c_{i,r+1-j}+\bar{c}_{i,r+1-j}), \text{ for } 1<j\leq r.
$$
Recall the {\it decoration rules} for the $BZL$-patterns given in \cite{BeBrF1}:
\begin{enumerate}
\item The entry $c_{i,j}$, $j<2r-i$, is circled if $c_{i,j}=c_{i,j+1}$. The right-most entry in a row, $c_{i,2r-i}$, is circled if it equals 0.
\item The entry $c_{i,j}$ is boxed if equality holds in the upper-bound inequalities. (See \cite{Litt}, Theorem 6.1 and Corollary 6.1 or
\cite{BeBrF1}, Section 2.2.)
\end{enumerate}
To each entry $c_{i,j}$ in a decorated BZL-pattern $\Gamma$, we associate the quantity
\begin{equation}\label{dec:Gamma}
\gamma_{\Gamma}(c_{i,j})=\begin{cases}
q^{c_{i,j}} & \text{ if $c_{i,j}$ is circled but not boxed,}\\
g(p^{c_{i,j}-1},p^{c_{i,j}})& \text{ if $c_{i,j}$ is boxed but not circled, and $j\neq r$},\\
g_{2}(p^{c_{i,j}-1},p^{c_{i,j}})& \text{ if $c_{i,j}$ is boxed but not circled, and $j=r$,}\\
q^{c_{i,j}}(1-q^{-1})
& \text{ if $c_{i,j}$ is neither circled nor boxed and $n\mid c_{i,j}$,}\\
0& \text{ otherwise.}
\end{cases}
\end{equation}
(The above definition corrects a misprint in \cite{BeBrF1} by adding the condition $n\mid c_{i,j}$ in
the case that $c_{i,j}$ is undecorated.)
In particular, $\gamma_\Gamma(c_{i,j})=0$
if $c_{i,j}$ is both circled and boxed.
For $\Gamma=\Gamma(c_{i,j})$ a decorated BZL-pattern, let $G(\Gamma)=\prod_{i,j}\gamma_\Gamma(c_{i,j})$.
Then the definition of the $p$-coefficients constructed in \cite{BeBrF1} by crystal graphs is:
\begin{equation}\label{crystal-def}
H_{BZL}(p^{\kappa};p^{\ell})=\sum_{\substack{\Gamma\in BZL(\mu)\\ \kappa(\Gamma)=\kappa}}G(\Gamma).
\end{equation}
We rewrite this definition inductively.
Let $BZL_{1}(\mu)$ denote the set of $(2r-1)$-tuples ${\mathfrak t}=(d_{1},\dots,d_{2r-1})\in {\mathbb Z}^{2r-1}_{\geq 0}$
which satisfy the inequalities
\begin{equation}\label{ineq:BZL-d}
\begin{cases}
d'_{j}\leq \mu_{r-j+1}+d'_{j-1}-d_{2r-j+1} &\text{for $1\leq j\leq r$,}\\
d_{j}+d'_{j}\leq \mu_{r-j+1}+d'_{j-1}-d_{2r-j+1}+d_{2r-j} &\text{for $1\leq j\leq r-1$,}\\
\end{cases}
\end{equation}
where $d_{0}=d_{2r}=0$ and
$d'_{i}=\min\left(d_{i},d_{2r-i}\right)$. Let
\begin{equation}\label{eq:c}
\bar{c}_{1,j}= \sum^{j-1}_{i=1} d_{2r-i}+d'_{j},~c_{1,r}=\sum^{r}_{i=1}d_{2r-i}, \text{ and }c_{1,j}=k_{j}({\mathfrak t})-\bar{c}_{1,j} \text{~for } 1\leq j<r.
\end{equation}
Here $k_j({\mathfrak t})$ is given by (\ref{eq:k}).
Note that for $2\leq j\leq r$,
\begin{equation}\label{eq:circle}
\bar{c}_{1,j}-\bar{c}_{1,j-1}=d'_{j}+d_{2r-j+1}-d'_{j-1} \text{ and }
c_{1,j-1}-c_{1,j}=d'_{j}+d_{j-1}-d'_{j-1}.
\end{equation}
The vector $(c_{1,1},c_{1,2},\dots,c_{1,2r-1})$ is weakly decreasing.
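Both the difference formulas~\eqref{eq:circle} and this weak decrease can be verified mechanically from~\eqref{eq:k} and~\eqref{eq:c}. The Python sketch below (an illustration only, with our own variable names) carries out this check for $r=3$.

```python
from itertools import product

def c_array(d, r):
    """From t = (d_1,...,d_{2r-1}) build the entries cbar_{1,j} (j < r) and
    c_{1,j} (j <= r) of (eq:c), using k(t) from (eq:k) and
    d'_j = min(d_j, d_{2r-j})."""
    D = {j: d[j - 1] for j in range(1, 2 * r)}
    dp = {j: min(D[j], D[2 * r - j]) for j in range(1, r)}
    dp[r] = D[r]                                  # d'_r = d_r
    # weight vector (eq:k)
    k = {r: sum(D[2 * r - j] for j in range(1, r + 1))}
    for i in range(1, r):
        k[i] = (sum(D[j] for j in range(i, 2 * r)) + D[r]
                + sum(D[2 * r - j] for j in range(1, i)))
    # entries (eq:c)
    cbar = {j: sum(D[2 * r - i] for i in range(1, j)) + dp[j]
            for j in range(1, r)}
    c = {r: sum(D[2 * r - i] for i in range(1, r + 1))}
    for j in range(1, r):
        c[j] = k[j] - cbar[j]
    return c, cbar, dp
```

Note that the weak decrease holds for every $(2r-1)$-tuple of non-negative integers, without imposing~\eqref{ineq:BZL-d}.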
If $\Gamma({\mathfrak t})=(c_{1,1},c_{1,2},\dots,c_{1,2r-1})$ is a one-row array decorated using the rules above,
set $G_{\Gamma}({\mathfrak t})=\prod_{j}\gamma_{\Gamma}(c_{1,j})$.
Then the following Lemma is a direct consequence of (\ref{crystal-def}).
\begin{lemma}\label{ind-lemma} Let $\kappa$ and $\ell$ be $r$-tuples of non-negative integers. Then
\begin{equation}\label{ind-equation-Gamma}
H_{BZL}(p^{\kappa};p^{\ell})=\sum_{\substack{\kappa',\kappa''\\ \kappa'+(\kappa'',0)=\kappa}}\,\,\sum_{\substack{{\mathfrak t}\in BZL_{1}({\lambda}+\rho)\\ k_{i}({\mathfrak t})=\kappa'_{r+1-i} \forall i}}G_{\Gamma}({\mathfrak t})\,\,H_{BZL}(p^{\kappa''};p^{{\ell}'}),
\end{equation}
where the outer sum is over tuples ${\kappa'}=(\kappa_1',\dots,\kappa_r')$ and
${\kappa''}=(\kappa''_1,\dots,\kappa_{r-1}'')$ of non-negative integers
such that ${\kappa}'+({\kappa}'',0)={\kappa}$ and where
$${\ell}'=(l_{1}+\kappa'_{2}-2\kappa'_{1},l_{2}+\kappa'_{3}+2\kappa'_{1}-2\kappa'_{2},\dots,l_{r-1}+\kappa'_{r}+\kappa'_{r-2}-2\kappa'_{r-1}).$$
\end{lemma}
\section{Evaluation of the $p$-parts and $BZL$ descriptions}\label{sec:p}
In this Section, we will prove the conjectural description of the Eisenstein series coefficients
constructed using crystal graphs by
Beineke, Brubaker and Frechette \cite{BeBrF1, BeBrF2}. We establish the following identity.
\begin{theorem}\label{thm:main}
Let $\kappa=(\kappa_1,\dots,\kappa_r)$ and $\ell=(l_1,\dots,l_r)$ be $r$-tuples of non-negative integers. Then
\begin{equation}\label{eq:H-CQ=BZL}
H(p^{\bf k};p^{\bf m})=H_{BZL}(p^{\kappa};p^{\ell}),
\end{equation}
where $m_{i}=l_{r+1-i}$ and $k_{i}=\kappa_{r+1-i}$ for all $1\leq i\leq r$.
\end{theorem}
We prove this theorem by induction. In the rest of this paper, we assume that $m_{i}=l_{r+1-i}$ for all $i$. First, we consider the top rows in the definitions of these two terms. Let $\mu$ be as in Section~\ref{sect6}.
Let
$$
H_{\Gamma}(p^{\kappa}; p^{\ell})=\sum_{\substack{{\mathfrak t}\in BZL_{1}(\mu)\\ k_{i}({\mathfrak t})=\kappa_{r+1-i}\, \forall i}}G_{\Gamma}({\mathfrak t}),
\qquad
H_{\Delta}(p^{\bf k}; p^{\bf m})=\sum_{\substack{{\mathfrak t}\in CQ_{1}(\mu)\\ k({\mathfrak t})={\bf k}}}G_{\Delta}({\mathfrak t}).
$$
{\bf Statement A.} {\it Given a fixed strict weight vector ${\bf k}=k({\mathfrak t})$, let $\kappa$ be the vector with components $\kappa_{i}=k_{r+1-i}$. Then}
\begin{equation}\label{eq:H-gamma-delta}
H_{\Gamma}(p^{\kappa};p^{\ell})=H_{\Delta}(p^{\bf k};p^{\bf m}).
\end{equation}
By Corollary~\ref{thm:H} and Lemma~\ref{ind-lemma}, Statement~A implies Theorem~\ref{thm:main}. Indeed,
if the weight $k({\mathfrak t})$ is non-strict, then in (\ref{ind-equation-Gamma}) $H_{BZL}(p^{\kappa''};p^{\ell'})=0$ since some entry is both
boxed and circled, while in (\ref{ind-equation-Delta}) $G_\Delta({\mathfrak t})=0$ as some entry in ${\mathfrak t}$ is both boxed and circled.
Hence both contributions are zero. The remaining contributions are matched by Statement A together
with the inductive hypothesis. The rest of this paper will be occupied with the proof of Statement~A.
Let us compare this to the analogous step for type A. In type A, at this point the top row equality follows by a term-by-term matching
of the two sides. (See \cite{BBF5}, Sect.\ 8.) However, that is not the case here. Indeed, Statement~A
is comparable to Statement~B of \cite{BBF4}, Ch.\ 6. In \cite{BBF4}, the authors compare two different descriptions of
the coefficients
(labelled by $\Gamma$ and $\Delta$) corresponding to {\it two different factorizations of the long element into
simple reflections}. The comparison is highly non-trivial and is a mix of combinatorial geometry and number-theoretic
facts about Gauss sums. Once we recognize that Statement A of this paper presents an analogous combinatorial
problem, it is natural to establish
the comparison by adapting the methods of \cite{BBF4} to the present case. We shall do this below.
We now give an equivalent description of the
decoration rules for $\Gamma({\mathfrak t})$ and $\Delta({\mathfrak t})$ in Sections~\ref{sect62} (resp.\ \ref{sect61})
for ${\mathfrak t}$ as in Statement A, and
then introduce two new
decorated arrays $\Gamma^{\iota}({\mathfrak t})$ and $\Delta^{\iota}({\mathfrak t})$.
\begin{lemma}
\begin{enumerate}
\item
Let ${\mathfrak t}\in BZL_1(\mu)$. Then
in $\Gamma({\mathfrak t})$, the entries are decorated as follows:
\begin{enumerate}
\item The entry $c_{1,j}$ for $j\leq r$ is boxed if $d_{j}+d'_{j}=\mu_{r+1-j}+d'_{j-1}-d_{2r+1-j}+d_{2r-j}$. The entry $\bar{c}_{1,j}$ for $j<r$ is boxed if $d'_{j}=\mu_{r+1-j}+d'_{j-1}-d_{2r+1-j}$.
\item The entry $c_{1,j}$ for $j< r$ is circled if $d_{j}=d'_{j}$ and $d'_{j+1}=0$. The entry $\bar{c}_{1,j}$ for $j\leq r$ is circled if $d'_{j}=0$ and $d_{2r+1-j}=d'_{j-1}$.
\end{enumerate}
\item
Let ${\mathfrak t}\in CQ_1(\mu)$. Then in $\Delta({\mathfrak t})$, the entries are decorated as follows:
\begin{enumerate}
\item The entry ${\mathfrak c}_{1,j}$ for $j\leq r$ is boxed if $d_{j}=\mu_{r+1-j}$. The entry $\bar{{\mathfrak c}}_{1,j}$ for $j<r$ is boxed if $d_{j+1}=\mu_{r-j}+d_{j}-d_{2r-j}$.
\item The entry ${\mathfrak c}_{1,j}$ is circled if $d_{j-1}=0$ for $j>1$ or ${\mathfrak c}_{1,j}=0$.
\end{enumerate}
\end{enumerate}
\end{lemma}
This Lemma is immediate from the decoration rules in Section~\ref{sect6} in view of
Eqns.\ (\ref{eq:c}) and (\ref{eq:Fc}).
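The circling claims of part (1) can also be confirmed numerically against the adjacency rules of Section~\ref{sect62} (an entry is circled when it equals its right neighbour, the right-most entry when it vanishes). The Python sketch below (an illustration only; the equivalence needs no constraint from~\eqref{ineq:BZL-d}) rebuilds the one-row array from~\eqref{eq:k} and~\eqref{eq:c} and compares the two descriptions.

```python
from itertools import product

def one_row(d, r):
    """Rebuild (c_{1,1},...,c_{1,r},cbar_{1,r-1},...,cbar_{1,1}) from (eq:c),
    together with the d_j (with d_0 = d_{2r} = 0) and d'_j."""
    D = {j: d[j - 1] for j in range(1, 2 * r)}
    D[0] = D[2 * r] = 0
    dp = {j: min(D[j], D[2 * r - j]) for j in range(0, r)}
    dp[r] = D[r]
    k = {r: sum(D[2 * r - j] for j in range(1, r + 1))}
    for i in range(1, r):
        k[i] = (sum(D[j] for j in range(i, 2 * r)) + D[r]
                + sum(D[2 * r - j] for j in range(1, i)))
    cbar = {j: sum(D[2 * r - i] for i in range(1, j)) + dp[j]
            for j in range(1, r)}
    c = {r: sum(D[2 * r - i] for i in range(1, r + 1))}
    for j in range(1, r):
        c[j] = k[j] - cbar[j]
    row = [c[j] for j in range(1, r + 1)] + [cbar[j] for j in range(r - 1, 0, -1)]
    return row, D, dp

def circled_adjacency(row):
    """Section 6.2 rule: circle an entry equal to its right neighbour;
    circle the right-most entry when it is 0."""
    n = len(row)
    return [row[i] == row[i + 1] if i < n - 1 else row[i] == 0
            for i in range(n)]

def circled_lemma(d, r):
    """Circling rules of the Lemma, stated directly in terms of the d_j."""
    _, D, dp = one_row(d, r)
    out = [D[j] == dp[j] and dp[j + 1] == 0 for j in range(1, r)]   # c_{1,j}, j < r
    out += [dp[j] == 0 and D[2 * r + 1 - j] == dp[j - 1]
            for j in range(r, 0, -1)]                               # cbar_{1,j}, j <= r
    return out
```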
\begin{definition}
\begin{enumerate}
\item
Let $\iota(x)$ be the function $\iota(x)=N-k_{1}({\mathfrak t})+x$ where $N$ is a multiple of $n$ and $N>k_{1}({\mathfrak t})$.
Let $\Gamma^{\iota}$ be the decorated array with entries $(\iota(c_{1,1}),\iota(c_{1,2}),\dots,\iota(c_{1,2r-1}))$.
The entry $\iota(c_{1,j})$ is decorated as follows:
\begin{enumerate}
\item The entry $\iota(c_{1,j})$ for $j\leq r$ is boxed if $d_{j}+d'_{j}=\mu_{r+1-j}+d'_{j-1}-d_{2r+1-j}+d_{2r-j}$. The entry $\iota(\bar{c}_{1,j})$ for $j<r$ is boxed if $d'_{j}=\mu_{r+1-j}+d'_{j-1}-d_{2r+1-j}$.
\item The entry $\iota(c_{1,j})$ for $j\leq r$ is circled if $d_{j-1}=d'_{j-1}$ and $d'_{j}=0$. The entry $\iota(\bar{c}_{1,j})$ for $j<r$ is circled if $d'_{j+1}=0$ and $d_{2r-j}=d'_{j}$.
\end{enumerate}
(These decoration rules are independent of the choice of $N$.)
Let $G_{\Gamma^{\iota}}({\mathfrak t})=\prod^{2r-1}_{j=1}\gamma_{\Gamma}(\iota(c_{1,j}))$, where $\gamma_\Gamma$ is given
by (\ref{dec:Gamma}).
\item
Let $\Delta^{\iota}$ be the decorated array with entries
$({\mathfrak c}(\Delta^{\iota})_{1,1},\dots,{\mathfrak c}(\Delta^{\iota})_{1,2r-1})$, where ${\mathfrak c}(\Delta^{\iota})_{1,j}=\iota({\mathfrak c}_{1,j+1})$ for $1\leq j\leq 2r-2$ and ${\mathfrak c}(\Delta^{\iota})_{1,2r-1}=N-k_{1}$.
The entry ${\mathfrak c}(\Delta^{\iota})_{1,j}$ is decorated as follows:
\begin{enumerate}
\item The entry ${\mathfrak c}(\Delta^{\iota})_{1,j}$ for $j<r$ is boxed if $d_{j+1}=\mu_{r-j}$. The entry $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j}$ for $j\leq r$ is boxed if $d_{j}=\mu_{r+1-j}+d_{j-1}-d_{2r-j+1}$.
\item The entry ${\mathfrak c}(\Delta^{\iota})_{1,j}$ for $1\leq j\leq 2r-1$ is circled if $d_{j}=0$.
\end{enumerate}
Let $G_{\Delta^{\iota}}({\mathfrak t})=q^{\sum^{r}_{i=1}d_{i}}\prod^{2r-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})$,
where $\gamma_\Delta$ is given by (\ref{dec:Delta}).
\end{enumerate}
\end{definition}
We give one example.
\begin{example}\label{ex:r=1}\rm
Suppose $r=1$.
Let $d_{1}\leq \mu$. In $\Gamma({\mathfrak t})$ and $\Delta({\mathfrak t})$, the entries $c_{1,1}=d_{1}$ and ${\mathfrak c}_{1,1}=2d_{1}$ are circled if $d_{1}=0$ and boxed if $d_{1}=\mu$. Hence in all cases (using the properties of Gauss sums and
that $n$ is odd) we have
$$G_{\Gamma}({\mathfrak t})=g_{2}(p^{\mu-1},p^{d_{1}})=q^{-d_{1}}g(p^{\mu+d_1-1},p^{2d_{1}})=G_{\Delta}({\mathfrak t}).$$
In $\Gamma^{\iota}({\mathfrak t})$ and $\Delta^{\iota}({\mathfrak t})$, the entries $\iota(c_{1,1})=N-d_{1}$ and ${\mathfrak c}(\Delta^{\iota})_{1,1}=N-2d_{1}$ are circled if $d_{1}=0$ and boxed if $d_{1}=\mu$. Hence in all cases
$$
G_{\Gamma^{\iota}}({\mathfrak t})=\gamma_{\Gamma}(N-d_{1})=q^{d_{1}}\gamma_{\Delta}(N-2d_{1})=G_{\Delta^{\iota}}({\mathfrak t}).
$$
Note that the quantities $G_{\Gamma^{\iota}}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})$ depend on the choice of $N$ as above,
but they are equal for any such choice.
\end{example}
\begin{lemma}\label{lm:CQ=BZL}
Given $\mu=(\mu_{1},\dots,\mu_{r})$ in ${\mathbb Z}^{r}_{\geq 0}$,
$
CQ_{1}(\mu)=BZL_{1}(\mu).
$
\end{lemma}
\begin{proof}
It is sufficient to show that the inequalities~\eqref{ineq:BZL-d} are equivalent to the inequalities~\eqref{ineq:CQ-d}.
When $r=1$ this is trivial. We prove this Lemma for $r\geq 2$ by induction.
First, when $r=2$, the inequalities~\eqref{ineq:BZL-d} and \eqref{ineq:CQ-d} become
$$
\begin{cases}
d'_{1}\leq \mu_{2}&\\
d_{2}+d_{3}\leq \mu_{1}+d'_{1}&\\
d_{1}+d'_{1}\leq \mu_{2}+d_{3},&
\end{cases}
\text{ and }
\begin{cases}
d_{1}\leq \mu_{2}\\
d_{2}\leq \mu_{1}\\
d_{2}+d_{3}\leq \mu_{1}+d_{1}.
\end{cases}
$$
Since $d_1'=\min(d_1,d_{3})$,
we have $d_{1}\leq \mu_{2}$ if and only if $d'_{1}\leq \mu_{2}$ and $d_{1}+d'_{1}\leq \mu_{2}+d_{3}$.
In addition, $d_{2}\leq \mu_{1}$ and $d_{2}+d_{3}\leq \mu_{1}+d_{1}$ are equivalent to $d_{2}+d_{3}\leq \mu_{1}+d'_{1} $. Therefore, we have $CQ_{1}(\mu)=BZL_{1}(\mu)$.
Similarly, for general $r$ the inequalities $d'_{1}\leq \mu_{r}$ and $d_{1}+d'_{1}\leq \mu_{r}+d_{2r-1}$ are equivalent to $d_{1}\leq \mu_{r}$.
Thus the inequalities \eqref{ineq:BZL-d} are equivalent to
\begin{equation}\label{ineq:d1}
\begin{cases}
d_{1}\leq \mu_{r} &\\
d'_{2}\leq \mu_{r-1}+d'_{1}-d_{2r-1} &\\
d'_{j}\leq \mu_{r-j+1}+d'_{j-1}-d_{2r-j+1}&\text{for $2<j\leq r$}\\
d_{j}+d'_{j}\leq \mu_{r-j+1}+d'_{j-1}-d_{2r-j+1}+d_{2r-j}&\text{for $2<j<r$}\\
d_{2}+d'_{2}\leq \mu_{r-1}+d'_{1}-d_{2r-1}+d_{2r-2}.
\end{cases}
\end{equation}
Let
$$\nu=(\mu_{2},\dots,\mu_{r-2},\mu_{r-1}+d'_{1}-d_{2r-1})\in {\mathbb Z}^{r-1}_{\geq 0}.$$
Then
$(d_{2},\dots,d_{2r-2})$ is in $BZL_{1}(\nu)$.
By induction, we have $BZL_{1}(\nu)=CQ_{1}(\nu)$, and the inequalities~\eqref{ineq:d1} are equivalent to
\begin{equation} \label{ineq:d2}
\begin{cases}
d_{1}\leq \mu_{r} &\\
d_{2}\leq \mu_{r-1}+d'_{1}-d_{2r-1} &\\
d_{j}\leq \mu_{r+1-j}&\text{for $2<j\leq r$}\\
d_{j+1}+d_{2r-j}\leq \mu_{r-j}+d_{j}&\text{for $1<j\leq r-1$.}
\end{cases}
\end{equation}
In addition, $d_{2}\leq \mu_{r-1}+d'_{1}-d_{2r-1}$ is equivalent to the inequalities $d_{2}\leq \mu_{r-1}$ and $d_{2}+d_{2r-1}\leq \mu_{r-1}+d_{1}$.
Thus the inequalities \eqref{ineq:d2} are equivalent to the inequalities \eqref{ineq:CQ-d}. Therefore, we have $BZL_{1}(\mu)=CQ_{1}(\mu)$ for $\mu\in {\mathbb Z}^{r}_{\geq 0}$.
\end{proof}
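The equality $CQ_{1}(\mu)=BZL_{1}(\mu)$ can also be confirmed by brute force for small $r$; the Python sketch below (an illustration only) compares the two systems of inequalities~\eqref{ineq:CQ-d} and~\eqref{ineq:BZL-d} pointwise over a box.

```python
from itertools import product

def in_CQ1(d, mu):
    """The inequalities (ineq:CQ-d)."""
    r = len(mu)
    D = {j: d[j - 1] for j in range(1, 2 * r)}
    M = {j: mu[j - 1] for j in range(1, r + 1)}
    return (all(D[j] <= M[r + 1 - j] for j in range(1, r + 1))
            and all(D[j + 1] + D[2 * r - j] <= M[r - j] + D[j]
                    for j in range(1, r)))

def in_BZL1(d, mu):
    """The inequalities (ineq:BZL-d), with d_0 = d_{2r} = 0 and
    d'_j = min(d_j, d_{2r-j})."""
    r = len(mu)
    D = {j: d[j - 1] for j in range(1, 2 * r)}
    D[0] = D[2 * r] = 0
    M = {j: mu[j - 1] for j in range(1, r + 1)}
    dp = {j: min(D[j], D[2 * r - j]) for j in range(0, r + 1)}
    return (all(dp[j] <= M[r - j + 1] + dp[j - 1] - D[2 * r - j + 1]
                for j in range(1, r + 1))
            and all(D[j] + dp[j] <= M[r - j + 1] + dp[j - 1]
                    - D[2 * r - j + 1] + D[2 * r - j]
                    for j in range(1, r)))
```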
Let $\mu\in{\mathbb Z}^{r}_{\geq1}$. By Lemma~\ref{lm:CQ=BZL}, $CQ_{1}(\mu)$ and $BZL_{1}(\mu)$ define the same set.
A sequence ${\mathfrak t}$ in $CQ_{1}(\mu)$ or $BZL_{1}(\mu)$ is called a {\it short pattern}.
(The usage of this term here is not the same as its usage for the type A case in
\cite{BBF4}. However, it is justified as Statement B there, an equality of two sums over
short pattern prototypes, is comparable to our Statement A.)
A decorated array is {\it strict} if there is no entry which is
both boxed and circled.
Recall that $\varepsilon_2=1$ if $r>2$ and $\varepsilon_2=2$ if $r=2$,
and $k_{1}- \varepsilon_{2}k_{2}\leq \mu_{r}$ by~\eqref{ineq:CQ-d}.
\begin{lemma}\label{lm:strictness} Let $r\geq2$ and let
${\mathfrak t}$ be a short pattern such that $k({\mathfrak t})$ is strict and $k_{1}-\varepsilon_{2}k_{2}<\mu_{r}$. Then
the decorated array $\Gamma({\mathfrak t})$ is strict if and only if $\Gamma^{\iota}({\mathfrak t})$ is strict.
\end{lemma}
\begin{proof}
First, in $\Gamma({\mathfrak t})$ the entry $\bar{c}_{1,j}$ for $j\leq r$ is both boxed and circled if and only if $d'_{j}=0$, $d_{2r+1-j}=d'_{j-1}$ and $\mu_{r+1-j}=0$. Since $\mu\in{\mathbb Z}^{r}_{\geq 1}$, there is no entry $\bar{c}_{1,j}$ for $j\leq r$ which is
both boxed and circled. The entry $c_{1,j}$ for $j<r$ is boxed and circled if and only if $d'_{j+1}=0$,
\begin{equation}\label{eq:lm:gamma-strict}
d_{j}=d'_{j}
\text{ and }
2d_{j}=\mu_{r+1-j}+d'_{j-1}-d_{2r+1-j}+d_{2r-j}.
\end{equation}
By the upper bounds~\eqref{ineq:CQ-d}, $d_{j}\leq \mu_{r-j+1}+d_{j-1}-d_{2r-j+1}$ and $d_{j}\leq \mu_{r-j+1}$.
Since $d_{j-1}'=\min(d_{j-1},d_{2r-j+1})$, Eqn.~\eqref{eq:lm:gamma-strict} is equivalent to $d_{j}=d_{2r-j}=\mu_{r-j+1}+d'_{j-1}-d_{2r-j+1}$.
The decorated array $\Gamma({\mathfrak t})$ is non-strict if and only if $d_{j}=d_{2r-j}=\mu_{r-j+1}+d'_{j-1}-d_{2r-j+1}$ and $d'_{j+1}=0$ for some $j\leq r-1$.
Next, in $\Gamma^{\iota}({\mathfrak t})$, the entry $\iota(c_{1,j})$ for $j\leq r$ is not boxed and circled simultaneously,
since that would imply the equalities
$$d_{j}'=0,~d_{j-1}=d'_{j-1}, \text{ and }
d_j=\mu_{r+1-j}+d_{j-1}-d_{2r-j+1}+d_{2r-j}.
$$
By~(\ref{strictness-ineqs}), this contradicts the strictness of $k({\mathfrak t})$ and $k_{1}-\varepsilon_{2}k_{2}<\mu_{r}$.
The entry $\iota(\bar{c}_{1,j})$ for $j<r$ is boxed and circled if and only if
\begin{equation}\label{eq:lm:iota-strict}
d'_{j+1}=0,~d_{2r-j}=d'_{j} \text{ and }
d_{2r-j}=\mu_{r+1-j}+d'_{j-1}-d_{2r-j+1}.
\end{equation}
By the same upper bounds for $d_{j}$, the entry $\iota(c_{1,j})$ for $j<r$ is both boxed and circled if and only if
$d_{j}=d_{2r-j}=\mu_{r-j+1}+d'_{j-1}-d_{2r-j+1}$ and $d'_{j+1}=0$. The Lemma follows.
\end{proof}
For each short pattern ${\mathfrak t}=(d_1,\dots,d_{2r-1})\in BZL_1(\mu)$,
let $i_{0}({\mathfrak t})$ or simply $i_0$ denote the quantity
$\min\cpair{i\mid 1\leq i<r, d_{i}\neq d_{2r-i}}$ provided this set is non-empty.
As $d_{i}-d_{2r-i}=k_{i}-k_{i+1}$ for $i<r-1$ and $d_{r-1}-d_{r+1}=k_{r-1}-2k_{r}$, the index $i_{0}$ is uniquely determined by $k({\mathfrak t})$.
To establish Statement A, we will consider the following three cases:
\begin{enumerate}
\item The set $\cpair{i\mid 1\leq i<r, d_{i}\neq d_{2r-i}}$ is empty. Such a short pattern will be,
by analogy with \cite{BBF4}, called {\it totally resonant}.
\item The index $i_{0}$ exists and $d_{i_{0}}>d_{2r-i_{0}}$. We say that such a short pattern is in {\it Class I}.
\item The index $i_{0}$ exists and $d_{i_{0}}<d_{2r-i_{0}}$. We say that such a short pattern is in {\it Class II}.
\end{enumerate}
Since this classification of patterns is determined by the weight $k({\mathfrak t})$, we shall similarly call a given weight $\bf k$
totally resonant, Class I, or Class II.
We begin by establishing some results for short patterns which are not totally resonant. From now on, we fix $\mu$ and
suppose that ${\mathfrak t}\in BZL_1(\mu)$ unless otherwise indicated.
\begin{lemma}\label{lm:gamma-circle}
Let ${\mathfrak t}$ be a short pattern that is not totally resonant such that $k({\mathfrak t})$ is strict and such that $\Gamma({\mathfrak t})$ is strict.
Let $c$ be the entry $\bar{c}_{1,j}$ or $c_{1,j}$ of $\Gamma({\mathfrak t})$ (resp.\ $\iota(\bar{c}_{1,j})$ or $\iota(c_{1,j})$ of $\Gamma^{\iota}({\mathfrak t})$) where $1\leq j\leq i_{0}$.
If $c$ is not boxed, then $G_{\Gamma}({\mathfrak t})$ (resp.\ $G_{\Gamma^{\iota}}({\mathfrak t})$) vanishes unless $n$ divides $c$.
\end{lemma}
\begin{proof}
First, suppose that the entry $c=\bar{c}_{1,j}$ of $\Gamma({\mathfrak t})$ is not boxed.
If it is also not circled, then $G_{\Gamma}({\mathfrak t})$ vanishes unless $\bar{c}_{1,j}$ is divisible by $n$,
by definition. If it is circled, then $\bar{c}_{1,j}=\bar{c}_{1,j+1}$,
so we may continue to the right of $\bar{c}_{1,j}$ until we come to the first uncircled entry,
and the same argument applies.
This can only fail if we reach the edge of the pattern; in that case $\bar{c}_{1,j}$ equals 0 and is divisible by $n$. Otherwise, let $\bar{c}_{1,j_{0}}$, $1\leq j_{0}<j$, be this first uncircled entry. Since $\bar{c}_{1,j_{0}+1}$ is circled, we have $d'_{j_{0}+1}=0$, and the entry $c_{1,j_{0}}$ is circled since $c_{1,j_{0}}-c_{1,j_{0}+1}=d'_{j_{0}+1}$. In addition, since $\Gamma({\mathfrak t})$ is strict, the entry $c_{1,j_{0}}$ is not boxed and $d_{j_{0}}\neq \mu_{r+1-j_{0}}$. Hence $\bar{c}_{1,j_{0}}$ is neither boxed nor circled, and $G_{\Gamma}({\mathfrak t})$ vanishes unless $\bar{c}_{1,j_{0}}=\bar{c}_{1,j}$ is divisible by $n$.
Next, suppose that the entry $c=c_{1,j}$ of $\Gamma({\mathfrak t})$ is not boxed. If it is also not circled, then again
$G_{\Gamma}({\mathfrak t})$ vanishes unless $c_{1,j}$ is divisible by $n$, by definition. If $c_{1,j}$ is circled,
we continue to the right of $c_{1,j}$ until we come to the first entry $c_{1,j_0}$ that is not circled.
It is sufficient to show that $c_{1,j_{0}}$ is not boxed. To see this, if $j_{0}\leq i_{0}$, then the entry $c_{1,j_{0}-1}$ is circled and $d_{j_{0}}=0$. Since $d_{j_{0}}\neq \mu_{r-j_{0}+1}$, the entry $c_{1,j_{0}}$ is also not boxed.
If $j_{0}>i_{0}$, then $c_{1,i_{0}}$ is circled and $d'_{i_{0}+1}+d_{i_{0}}-d'_{i_{0}}=0$. Thus,
$d_{i_{0}}<d_{2r-i_{0}}$ and $\bar{c}_{1,i_{0}+1}$ is not circled since
$\bar{c}_{1,i_{0}+1}-\bar{c}_{1,i_{0}}=d_{2r-i_{0}}-d_{i_{0}}>0$.
We only need to consider the case $i_{0}<j_{0}<2r-i_{0}$. When $j_{0}\leq r$, since $c_{1,j_{0}-1}$ is circled, $c_{1,j_{0}-1}=c_{1,j_{0}}$, which implies $d'_{j_{0}}=0$ and $d_{j_{0}-1}=d'_{j_{0}-1}$.
In addition, since $k({\mathfrak t})$ is strict, we have $d_{j_{0}}+d'_{j_{0}}<\mu_{r-j_{0}+1}+d_{j_{0}-1}-d_{2r-j_{0}+1}+d_{2r-j_{0}}$ and hence $c_{1,j_{0}}$ is not boxed.
When $r<j_{0}\leq 2r-i_{0}-1$, the entries $c_{1,\ell}$ for all $i_{0}\leq \ell<j_{0}$ are circled.
It follows that $d_{\ell}=0$ for all $i_{0}\leq \ell\leq j_{0}$. Since $k({\mathfrak t})$ is strict, this
implies that the entry $c_{1,j_{0}}$ is not boxed.
The proof for $\Gamma^{\iota}({\mathfrak t})$ is similar. For this case, however,
we consider the first uncircled
entry $\iota(c_{1,j_0})$ to the {\sl left} of $c$. We omit the details.
\end{proof}
A similar result holds for the quantities $G_{\Delta}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})$.
\begin{lemma}\label{lm:delta-circle}
Let ${\mathfrak t}$ be a short pattern that is not totally resonant and such that $\Delta({\mathfrak t})$ (resp.\ $\Delta^{\iota}({\mathfrak t})$) is strict.
Let $c$ be the entry $\bar{{\mathfrak c}}_{1,j}$ or ${\mathfrak c}_{1,j}$ of $\Delta({\mathfrak t})$
(resp.\ $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j}$ or ${\mathfrak c}(\Delta^{\iota})_{1,j}$ of $\Delta^{\iota}({\mathfrak t})$) where $1\leq j\leq i_{0}$.
If $c$ is not boxed, then $G_{\Delta}({\mathfrak t})$ (resp.\ $G_{\Delta^{\iota}}({\mathfrak t})$) vanishes unless $n$ divides $c$.
\end{lemma}
\begin{proof}
As in Lemma~\ref{lm:gamma-circle}, we show that there exists an entry of the array that equals $c$ and is neither boxed nor circled.
First, suppose that the entry $c={\mathfrak c}_{1,j}$ of $\Delta({\mathfrak t})$ is not boxed. If it is not circled, then using
the definition of $G_{\Delta}({\mathfrak t})$ the result holds.
If it is circled, we continue to the left of ${\mathfrak c}_{1,j}$ until we come to the first uncircled entry, which we denote
${\mathfrak c}_{1,j_0}$.
Since ${\mathfrak c}_{1,1}$ is never circled in a non-totally-resonant pattern, such an entry must exist.
Since ${\mathfrak c}_{1,j_{0}+1}$ is circled, we have $d_{j_{0}}=0<\mu_{r+1-j_{0}}$. Hence, ${\mathfrak c}_{1,j_{0}}$ is not boxed.
Next suppose that the entry $c=\bar{{\mathfrak c}}_{1,j}$ of $\Delta({\mathfrak t})$ is not boxed.
If $\bar{{\mathfrak c}}_{1,j}=0$, then it is divisible by $n$, so we may assume that $\bar{{\mathfrak c}}_{1,j}>0$.
If $c$ is not circled, we are done. If it is circled, we continue to the left of $\bar{{\mathfrak c}}_{1,j}$ until we come to the first entry that is not circled. Let $c'$ be this entry.
If $c'={\mathfrak c}_{1,j_{0}}$ for some $j_{0}\leq r$, then ${\mathfrak c}_{1,j_{0}+1}$ is circled. It follows that $d_{j_{0}}=0$ and ${\mathfrak c}_{1,j_{0}}$ is not boxed. If $c'=\bar{{\mathfrak c}}_{1,j_{0}}$ for some $j<j_{0}<r$, then the entry $\bar{{\mathfrak c}}_{1,j_{0}-1}$ is circled and $d_{2r-j_{0}}=0$. When $d_{j_{0}}>0$, $d_{j_{0}+1}<\mu_{r-j_{0}}+d_{j_{0}}$ and $\bar{{\mathfrak c}}_{1,j_{0}}$ is not boxed. When $d_{j_{0}}=0$, the entry ${\mathfrak c}_{1,j_{0}+1}$ is circled. Since $\Delta({\mathfrak t})$ is strict, ${\mathfrak c}_{1,j_{0}+1}$ is not boxed and $d_{j_{0}+1}<\mu_{r-j_{0}}$. Hence, $\bar{{\mathfrak c}}_{1,j_{0}}$ is not boxed.
The proof for $\Delta^{\iota}({\mathfrak t})$ is similar, and we omit the details.
\end{proof}
If ${\mathfrak t}$ is not totally resonant, recall that $i_0=\min\cpair{i\mid 1\leq i<r, d_{i}\neq d_{2r-i}}$.
Define
\begin{align*}
{\mathfrak t}^{*}&=(d_1, d_2,\dots,d_{i_0-1},d'_{i_0},d_{2r-i_0+1},\dots,d_{2r-2},d_{2r-1})\\
{\mathfrak t}^{\sharp}&=(d_{i_{0}+1},d_{i_{0}+2},\dots,d_{2r-i_{0}-1}).
\end{align*}
Let $a=|d_{i_{0}}-d_{2r-i_{0}}|$,
and define $\mu^{*}=(\mu_{r+1-i_{0}}-a,\mu_{r+2-i_{0}},\dots,\mu_{r})$ and $\mu^{\sharp}=(\mu_{1},\mu_{2},\dots,\mu_{r-i_{0}})$ if ${\mathfrak t}$ is in Class I, and $\mu^{*}=(\mu_{r+1-i_{0}},\mu_{r+2-i_{0}},\dots,\mu_{r})$ and $\mu^{\sharp}=(\mu_{1},\mu_{2},\dots,\mu_{r-i_{0}}-a)$ if ${\mathfrak t}$ is in Class II.
Here in Class I if $i_{0}=1$ then $\mu^{*}=\mu_{r}-a$, and in Class II if $i_{0}=r-1$ then $\mu^{\sharp}=\mu_{1}-a$.
By the bounds~\eqref{ineq:BZL-d}, ${\mathfrak t}^{*}$ and ${\mathfrak t}^{\sharp}$ are in $BZL_{1}(\mu^{*})$ and $BZL_{1}(\mu^{\sharp})$ respectively.
Set $k^{*}=k({\mathfrak t}^{*})$ and $k^{\sharp}=k({\mathfrak t}^{\sharp})$.
By the definition of weight in \eqref{eq:k} (with $r$ in \eqref{eq:k} replaced by $i_0$, resp.\ $r-i_0$), we have
\begin{align}\label{star-sharp1}k^{*}_{i_{0}}&=k^{*}_{j}/2=\sum^{i_{0}-1}_{i=1}d_{i}+d'_{i_{0}}\text{ for $j\leq i_{0}-1$,}\\
\label{star-sharp2}k^{\sharp}_{r-i_{0}}&=\sum^{r-i_{0}}_{i=1}d_{i+i_{0}},\\
\label{star-sharp3}k^{\sharp}_{i}&=\sum^{2(r-i_{0})-1}_{j=i}d_{i_{0}+j}+d_{r}+\sum^{i-1}_{j=1}d_{2r-i_{0}-j}, \text{ for } 1\leq i<r-i_{0}.
\end{align}
Notice that even if $k({\mathfrak t})$ is strict we could have $\mu_{r+1-i_{0}}=a$ in the Class I case or $\mu_{r-i_{0}}=a$ in
the Class II case. If $k({\mathfrak t})$ is strict, $\mu_{r+1-i_{0}}=a$ in the Class I case may occur only when $i_{0}=1$, and
$\mu_{r-i_{0}}=a$ in the Class II case may occur only when $i_{0}\leq r-2$.
A short pattern ${\mathfrak t}$ in $BZL_{1}(\mu)$ is called {\it maximal} if $d_{i}=d_{2r-i}=\mu_{r+1-i}$ for all $i\leq r$,
and {\it non-maximal} otherwise. We have the following criterion for non-vanishing.
\begin{lemma}\label{lm:divisibility}
Suppose that $k({\mathfrak t})$ is strict.
\begin{enumerate}
\item
If ${\mathfrak t}$ is in Class I, then $G_{\Gamma}({\mathfrak t})$ and $G_{\Delta}({\mathfrak t})$ vanish unless $n$ divides $k^{*}_{i_{0}}$,
and $G_{\Gamma^{\iota}}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})$ vanish unless $n$ divides $k_{1}-k^{*}_{i_{0}}$.
\item
If ${\mathfrak t}$ is in Class II, then $G_{\Gamma}({\mathfrak t})$ and $G_{\Delta}({\mathfrak t})$ vanish unless $n$ divides $k_{1}-k^{*}_{i_{0}}$,
and $G_{\Gamma^{\iota}}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})$ vanish unless $n$ divides $k^{*}_{i_{0}}$.
\item
If ${\mathfrak t}$ is in Class I or Class II (that is, ${\mathfrak t}$ is not totally resonant) and ${\mathfrak t}^{*}\in BZL_1(\mu^*)$ is non-maximal,
then $G_{\Gamma}({\mathfrak t})$, $G_{\Delta}({\mathfrak t})$, $G_{\Gamma^{\iota}}({\mathfrak t})$, and $G_{\Delta^{\iota}}({\mathfrak t})$ vanish unless $n$ divides $k_{1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that ${\mathfrak t}$ is in Class I.
In $\Gamma({\mathfrak t})$, $\bar{c}_{1,i_{0}}=k^{*}_{i_{0}}$ is not boxed. Indeed, it would be boxed if and only if
$d_{i_0}'=\mu_{r+1-i_0}$, but the inequality (\ref{ineq:c}) rules this out. Similarly,
in $\Delta({\mathfrak t})$, $\bar{{\mathfrak c}}_{1,i_{0}}=k^{*}_{i_{0}}$ is not boxed.
In $\Gamma^{\iota}({\mathfrak t})$, $\iota(\bar{c}_{1,i_{0}})=N-k_{1}+k^{*}_{i_{0}}$ is not boxed.
In $\Delta^{\iota}({\mathfrak t})$, $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}+1}=\iota(\bar{{\mathfrak c}}_{1,i_{0}})=N-k_{1}+k^{*}_{i_{0}}$ is not boxed.
Hence the desired divisibility properties follow from Lemmas~\ref{lm:gamma-circle} and \ref{lm:delta-circle}.
Suppose instead that ${\mathfrak t}$ is in Class II.
In $\Gamma({\mathfrak t})$, $c_{1,i_{0}}=k_{1}-k^{*}_{i_{0}}$ is not boxed.
In $\Delta({\mathfrak t})$, ${\mathfrak c}_{1,i_{0}+1}=k_{1}-k^{*}_{i_{0}}$ is not boxed.
In $\Gamma^{\iota}({\mathfrak t})$, $\iota(c_{1,i_{0}})=N-k^{*}_{i_{0}}$ is not boxed.
In $\Delta^{\iota}({\mathfrak t})$, ${\mathfrak c}(\Delta^{\iota})_{1,i_{0}}=\iota({\mathfrak c}_{1,i_{0}+1})=N-k^{*}_{i_{0}}$ is not boxed.
Again the desired divisibility properties follow from Lemmas~\ref{lm:gamma-circle} and \ref{lm:delta-circle}.
Last, suppose that ${\mathfrak t}$ is not totally resonant and ${\mathfrak t}^{*}\in BZL_1(\mu^*)$ is non-maximal.
Then there exists an index $j\leq i_0$ such that $d_{j}<\mu^{*}_{i_{0}+1-j}$.
In $\Gamma({\mathfrak t})$, resp.\ $\Gamma^{\iota}({\mathfrak t})$, the entries
$c_{1,j}$, $\bar{c}_{1,j}$, resp.\ $\iota(c_{1,j})$, $\iota(\bar{c}_{1,j})$, are not boxed.
In $\Delta({\mathfrak t})$, resp.\ $\Delta^\iota({\mathfrak t})$, the entries ${\mathfrak c}_{1,j}$,
$\bar{{\mathfrak c}}_{1,j-1}$, resp.\ ${\mathfrak c}(\Delta^{\iota})_{1,j-1}=\iota({\mathfrak c}_{1,j})$, $\bar{{\mathfrak c}}(\Delta^\iota)_{1,j}=\iota(\bar{{\mathfrak c}}_{1,j-1})$, are not boxed. By Lemmas~\ref{lm:gamma-circle} and \ref{lm:delta-circle}, $G_{\Gamma}({\mathfrak t})$
and $G_{\Delta}({\mathfrak t})$, resp.\ $G_{\Gamma^{\iota}}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})$, vanish unless $n$ divides $c_{1,j}+\bar{c}_{1,j}={\mathfrak c}_{1,j}+\bar{{\mathfrak c}}_{1,j-1}=k_{1}$,
resp.\ $\iota(c_{1,j})+\iota(\bar{c}_{1,j})=\iota({\mathfrak c}_{1,j})+\iota(\bar{{\mathfrak c}}_{1,j-1})=2N-k_1$. Thus the Lemma holds.
\end{proof}
Given a fixed weight ${\bf k}$ which is strict with respect to $\mu$, i.e.\ one satisfying (\ref{strict}),
let ${\mathfrak {S}}_{\bf k}(\mu)$ be the set of all short patterns ${\mathfrak t}$ with $k({\mathfrak t})={\bf k}$. The set ${\mathfrak {S}}_{\bf k}(\mu)$ depends on ${\bf k}$ and on $\mu$ but we also write ${\mathfrak {S}}_{\bf k}$ or even ${\mathfrak {S}}$ for convenience.
For later use, we state the following result, which will allow us to carry out an inductive argument.
\begin{lemma}\label{split-reduce} Suppose that $\bf k$ is in Class I or Class II and has index $i_0$. Then
the map ${\mathfrak t}\to ({\mathfrak t}^{*},{\mathfrak t}^{\sharp})$ gives a bijection from
$
{\mathfrak {S}}_{\bf k}(\mu)$
to
$$ \bigcup_{{\bf k^{*}}, {\bf k^{\sharp}}}{\mathfrak {S}}_{\bf k^{*}}(\mu^*)\times {\mathfrak {S}}_{\bf k^{\sharp}}(\mu^{\sharp}),
$$
where ${\bf k^{*}}$ runs over the totally resonant weight vectors of length $i_0$, ${\bf k^{\sharp}}$ runs over
the weight vectors of length
$r-i_0$, and the union is over all pairs of weights satisfying
Eqn.~\eqref{eq:I-k} below in Class I and Eqn.~\eqref{eq:II-k} below in Class II.
\end{lemma}
\begin{proof}
This follows directly from the definitions.
\end{proof}
We now proceed to prove Statement A in each of the three cases enumerated above.
\subsection{The Totally Resonant Case}\label{Tot-Res-Case} \label{sec:resonant}
In this subsection, we consider a short pattern ${\mathfrak t}$ that is totally resonant. Then for $1\leq j\leq r-1$, we have
\begin{equation}\label{eq:resonant-c}
k_{j}=2k_{r}=2c_{1,r}={\mathfrak c}_{1,1},~
\bar{c}_{1,j}=\bar{{\mathfrak c}}_{1,j}=\sum^{j}_{i=1}d_{i},
\text{ and }
{\mathfrak c}_{1,j+1}=c_{1,j}=k_{r}+\sum^{r}_{i=j+1}d_{i}.
\end{equation}
As above, we write $\bar{c}_{1,j}=c_{1,2r-j}$ and $\bar{{\mathfrak c}}_{1,j}={\mathfrak c}_{1,2r-j}$ for $1\leq j\leq r$, for convenience.
We will apply the results concerning totally resonant short pattern prototypes of type A
in \cite{BBF4} to establish Eqn.~\eqref{eq:H-gamma-delta}.
To do so, we assign decorated {\it two-row} arrays, similar to those in \cite{BBF4} Ch.~6 ff., to the short pattern ${\mathfrak t}$.
Let $\Gamma'({\mathfrak t})$ be the array of nonnegative integers
\begin{equation*}
\Gamma'({\mathfrak t})=\cpair{
\begin{matrix}
\bar c_{1,r}&&\bar{c}_{1,r-1}&&\bar{c}_{1,r-2}&&\cdots&&\bar{c}_{1,1}\\
&c_{1,r-1}&&c_{1,r-2}&&\cdots&&c_{1,1}&
\end{matrix}
},
\end{equation*}
where the entries $c_{1,j}$ are defined by Eqn.~\eqref{eq:c} above (with $\bar c_{1,r}=c_{1,r}$),
and decorated as follows. In $\Gamma'({\mathfrak t})$, $\bar{c}_{1,j}$ is circled if $\bar{c}_{1,j}=\bar{c}_{1,j-1}$ and $\bar{c}_{1,j}$ is boxed if $\bar{c}_{1,j}-\bar{c}_{1,j-1}=\mu_{r-j+1}$, where $1\leq j\leq r$. In the bottom row, $c_{1,j}$ is circled if and only if $\bar{c}_{1,j+1}$ is circled, and $c_{1,j}$ is boxed if and only if $\bar{c}_{1,j}$ is boxed.
The array $\Gamma'({\mathfrak t})$ has the property that
each left-diagonal sum $c_{1,i}+\bar{c}_{1,i}$, $1\leq i<r$, equals $2k_r$; however, the middle entry satisfies $\bar{c}_{1,r}=k_r$ rather than $2k_r$.
Define $G_{\Gamma'}({\mathfrak t})=\prod^{2r-1}_{j=1}\gamma_{\Gamma}(c_{1,j})$. Since the rules
are the same, it is immediate that $G_{\Gamma'}({\mathfrak t})=G_{\Gamma}({\mathfrak t})$.
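The left-diagonal property may be checked directly from Eqn.~\eqref{eq:resonant-c}: since in the totally resonant case $k_{r}=\sum_{i=1}^{r}d_{i}$, we have for $1\leq i<r$
$$
c_{1,i}+\bar{c}_{1,i}
=\Bigl(k_{r}+\sum_{j=i+1}^{r}d_{j}\Bigr)+\sum_{j=1}^{i}d_{j}
=k_{r}+\sum_{j=1}^{r}d_{j}=2k_{r}.
$$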
Also let $\Delta'({\mathfrak t})$ be the array
$$
\Delta'({\mathfrak t})=\cpair{
\begin{matrix}
{\mathfrak c}_{1,r}&&{\mathfrak c}_{1,r-1}&&{\mathfrak c}_{1,r-2}&&\cdots&&{\mathfrak c}_{1,1}\\
&\bar{{\mathfrak c}}_{1,r-1}&&\bar{{\mathfrak c}}_{1,r-2}&&\cdots&&\bar{{\mathfrak c}}_{1,1}&
\end{matrix}
},
$$
where the entries ${\mathfrak c}_{1,j}$ are defined in Eqn.~\eqref{eq:Fc} and decorated as follows.
In $\Delta'({\mathfrak t})$, if $1\leq j<r$, then ${\mathfrak c}_{1,j}$ is circled if ${\mathfrak c}_{1,j}={\mathfrak c}_{1,j+1}$, and ${\mathfrak c}_{1,j}$ is boxed if ${\mathfrak c}_{1,j}-{\mathfrak c}_{1,j+1}=\mu_{r-j+1}$. The entry ${\mathfrak c}_{1,r}$ is circled if ${\mathfrak c}_{1,r}=\bar{{\mathfrak c}}_{1,r-1}$
and it is boxed if ${\mathfrak c}_{1,r}-\bar{{\mathfrak c}}_{1,r-1}=2\mu_{1}$. In the bottom row,
$\bar{{\mathfrak c}}_{1,j}$ is circled if and only if ${\mathfrak c}_{1,j}$ is circled, and $\bar{{\mathfrak c}}_{1,j}$ is boxed if and
only if ${\mathfrak c}_{1,j+1}$ is boxed.
The array $\Delta'({\mathfrak t})$ is a $\Delta$-accordion of weight $2k_r$ in the sense of \cite{BBF4}, pg.~43:
each sum of right diagonals ${\mathfrak c}_{1,j}+\bar{{\mathfrak c}}_{1,j-1}$ equals $2k_r$, as does ${{\mathfrak c}}_{1,1}$.
Define $G_{\Delta'}({\mathfrak t})=q^{-\sum^{r}_{i=1}d_{i}}\prod^{2r-1}_{j=1}\gamma_{\Delta}({{\mathfrak c}}_{1,j})$.
Notice that the array $\Delta'({\mathfrak t})$ does {\sl not} have the same circling rule as that for $\Delta({\mathfrak t})$.
We will compare $G_{\Delta}({\mathfrak t})$ and $G_{\Delta'}({\mathfrak t})$ below.
\begin{remark}{\rm
The decoration rules for the arrays
$\Gamma'({\mathfrak t})$ and $\Delta'({\mathfrak t})$ above are the same as the decoration rules for the type A two-row
``accordion" arrays $\Gamma({\mathfrak t})$ and $\Delta({\mathfrak t})$ in the type A totally resonant case
as defined in \cite{BBF4},
Ch.~6.}
\end{remark}
\begin{lemma}\label{lm:Delta=Delta'}
Let ${\mathfrak t}$ be a totally resonant short pattern. Then
$G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})$.
\end{lemma}
\begin{proof}
First, it is easy to check that the following statements are equivalent: (1) $\Delta({\mathfrak t})$ is not strict; (2) $\Delta'({\mathfrak t})$ is not strict; (3) $d_{j-1}=0$ and $d_{j}=\mu_{r+1-j}$ for some $j$ with $2\leq j\leq r$.
If $\Delta({\mathfrak t})$ is not strict, both sides are zero, and the Lemma is trivial.
Next, assume that $\Delta({\mathfrak t})$ is strict. We must keep track of the circling rules
for the two arrays as they are different. Consider any maximal string
of consecutive zeros $\{d_{i}\}_{j_{0}\leq i\leq j_{1}}$ in ${\mathfrak t}$.
That is, $d_{i}=0$ for all $j_{0}\leq i\leq j_{1}$, and $d_{j_{0}-1}\neq 0$, $d_{j_{1}+1}\neq 0$ provided
these quantities are defined.
We analyze the following 4 cases.
Case (1): $j_{0}=1$ and $j_{1}=r$. Then $G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})=1$.
Case (2): $j_{0}=1$ and $j_{1}<r$. We have
$$
{\mathfrak c}_{1,1}={\mathfrak c}_{1,2}=\cdots={\mathfrak c}_{1,j_{1}+1} \text{ and }
\bar{{\mathfrak c}}_{1,j_{1}}=\bar{{\mathfrak c}}_{1,j_{1}-1}=\cdots=\bar{{\mathfrak c}}_{1,1}=0.
$$
In $\Delta({\mathfrak t})$, the entries ${\mathfrak c}_{1,i}$ for $2\leq i\leq j_{1}+1$ and $\bar{{\mathfrak c}}_{1,i}$ for $1\leq i\leq j_{1}$ are circled. Since $d_{1}\neq \mu_{r}$, the entry ${\mathfrak c}_{1,1}$ is neither boxed nor circled. Since $\Delta({\mathfrak t})$ is strict,
keeping track of the circled entries we find that
$\prod^{j_{1}+1}_{i=1}\gamma_{\Delta}({\mathfrak c}_{1,i})\prod^{j_{1}}_{i=1}\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i})=q^{j_{1}{\mathfrak c}_{1,1}}\gamma_{\Delta}({\mathfrak c}_{1,1})$.
In $\Delta'({\mathfrak t})$, the entries ${\mathfrak c}_{1,i}$ for $1\leq i\leq j_{1}$ and $\bar{\mathfrak c}_{1,i}$ for $1\leq i\leq j_{1}$ are circled. Since $0<d_{j_{1}+1}<\mu_{r-j_{1}}$, the entry ${\mathfrak c}_{1,j_{1}+1}$ is neither boxed nor circled.
Since $\Delta({\mathfrak t})$ is strict, we arrive at the equality
$$\prod^{j_{1}+1}_{i=1}\gamma_{\Delta}({\mathfrak c}_{1,i})\prod^{j_{1}}_{i=1}\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i})=\prod^{j_{1}+1}_{i=1}\gamma_{\Delta'}({\mathfrak c}_{1,i})\prod^{j_{1}}_{i=1}\gamma_{\Delta'}(\bar{{\mathfrak c}}_{1,i}).$$
Case (3): $j_{0}>1$ and $j_{1}=r$. We have
$$
{\mathfrak c}_{1,j_{0}}={\mathfrak c}_{1,j_{0}+1}=\cdots={\mathfrak c}_{1,r}=\bar{{\mathfrak c}}_{1,r-1}=\cdots=\bar{{\mathfrak c}}_{1,j_{0}-1}.
$$
In $\Delta({\mathfrak t})$, the entries ${\mathfrak c}_{1,i}$ for $j_{0}+1\leq i\leq 2r+1-j_{0}$ are circled. Since $d_{j_{0}-1}\neq 0$ and $d_{j_{0}}\neq \mu_{r+1-j_{0}}$, the entry ${\mathfrak c}_{1,j_{0}}$ is neither boxed nor circled.
Since $\Delta({\mathfrak t})$ is strict, again keeping track of circled entries gives
$\prod^{2r+1-j_{0}}_{i=j_{0}}\gamma_{\Delta}({\mathfrak c}_{1,i})=q^{(2(r-j_{0})+1){\mathfrak c}_{1,j_0}}\gamma_{\Delta}({\mathfrak c}_{1,j_{0}})$.
In $\Delta'({\mathfrak t})$, the entries ${\mathfrak c}_{1,i}$ for $j_{0}\leq i\leq 2r-j_{0}$ are circled. Since $d_{j_{0}-1}\neq 0$ and $d_{j_{0}}\neq \mu_{r+1-j_{0}}$, the entry $\bar{{\mathfrak c}}_{1,j_{0}-1}$ is neither boxed nor circled. Since $\Delta({\mathfrak t})$ is strict, we arrive
at the equality
$$\prod^{2r+1-j_{0}}_{i=j_{0}}\gamma_{\Delta}({\mathfrak c}_{1,i})=\prod^{2r+1-j_{0}}_{i=j_{0}}\gamma_{\Delta'}({\mathfrak c}_{1,i}).$$
Case (4): $1<j_{0},j_1<r$. We have
\begin{equation}\label{eq:lm-circle}
{\mathfrak c}_{1,j_{0}}={\mathfrak c}_{1,j_{0}+1}=\cdots={\mathfrak c}_{1,j_{1}+1} \text{ and }
\bar{{\mathfrak c}}_{1,j_{1}}=\bar{{\mathfrak c}}_{1,j_{1}-1}=\cdots=\bar{{\mathfrak c}}_{1,j_{0}-1}.
\end{equation}
In $\Delta({\mathfrak t})$, the entries ${\mathfrak c}_{1,i}$ and $\bar{{\mathfrak c}}_{1,i-2}$ for $j_{0}+1\leq i\leq j_{1}+1$ are circled. Since $d_{j_{0}}\neq \mu_{r+1-j_{0}}$ and $d_{j_{0}-1}\neq 0$, the entry ${\mathfrak c}_{1,j_{0}}$ is neither boxed nor circled.
Since $d_{j_{1}+1}\neq 0$ and $d_{j_{1}+1}\neq \mu_{r-j_{1}}$, the entry $\bar{{\mathfrak c}}_{1,j_{1}}$ is neither boxed nor circled. Since $\Delta({\mathfrak t})$ is strict, we obtain
$$\prod^{j_{1}+1}_{i=j_{0}}\gamma_{\Delta}({\mathfrak c}_{1,i})\prod^{j_{1}}_{i=j_{0}-1}\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i})=q^{(j_{1}-j_{0}+1){\mathfrak c}_{1,j_{0}}}\gamma_{\Delta}({\mathfrak c}_{1,j_{0}})q^{(j_{1}-j_{0}+1)\bar{{\mathfrak c}}_{1,j_{1}}}\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,j_{1}}).$$
In $\Delta'({\mathfrak t})$, the entries ${\mathfrak c}_{1,i-1}$ and $\bar{\mathfrak c}_{1,i-1}$ for $j_{0}+1\leq i\leq j_{1}+1$ are circled.
Since $0<d_{j_{1}+1}<\mu_{r-j_{1}}$, the entry ${\mathfrak c}_{1,j_{1}+1}$ is neither boxed nor circled. Since $d_{j_{0}-1}\neq 0$ and $d_{j_{0}}\neq \mu_{r+1-j_{0}}$, the entry $\bar{{\mathfrak c}}_{1,j_{0}-1}$ is neither boxed nor circled. So we find
$$
\prod^{j_{1}+1}_{i=j_{0}}\gamma_{\Delta}({\mathfrak c}_{1,i})\prod^{j_{1}}_{i=j_{0}-1}\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i})=\prod^{j_{1}+1}_{i=j_{0}}\gamma_{\Delta'}({\mathfrak c}_{1,i})\prod^{j_{1}}_{i=j_{0}-1}\gamma_{\Delta'}(\bar{{\mathfrak c}}_{1,i}).
$$
The entries not involving strings of zeroes are identical (including decorations) in the two arrays. Hence the desired equality holds.
\end{proof}
If ${\mathfrak t}$ is maximal, then all entries $c_{1,j}$ and ${\mathfrak c}_{1,j}$ in $\Gamma({\mathfrak t})$ and $\Delta({\mathfrak t})$ are boxed. Since ${\mathfrak c}_{1,1}=2c_{1,r}$, $\gamma_{\Gamma}(c_{1,r})=q^{-k_{r}}\gamma_{\Delta}({\mathfrak c}_{1,1})$. Hence, if ${\mathfrak t}$ is maximal, Statement~A is true.
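Note that maximality pins down the pattern completely: unwinding the definition, the unique maximal pattern in $BZL_{1}(\mu)$ is
$$
{\mathfrak t}=(\mu_{r},\mu_{r-1},\dots,\mu_{1},\mu_{2},\dots,\mu_{r}),
\qquad\text{so that}\qquad
k_{r}=\sum_{i=1}^{r}\mu_{i},
$$
and this pattern is automatically totally resonant, since $d_{i}=d_{2r-i}$ for all $i<r$.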
\begin{lemma}\label{lm:know-c}
Assume that $n\nmid 2k_{r}$. Then either $G_{\Gamma'}({\mathfrak t})=G_{\Delta'}({\mathfrak t})=0$ or ${\mathfrak t}$ is maximal.
\end{lemma}
\begin{proof}
The decoration rules of $\Gamma'({\mathfrak t})$ and $\Delta'({\mathfrak t})$ are the same as the decoration rules of $\Gamma({\mathfrak t})$ and $\Delta({\mathfrak t})$ defined in \cite{BBF4}, Ch.~6.
Thus the proof of Proposition 11.1 in \cite{BBF4} also applies in our situation, and gives this result.
\end{proof}
To prove Statement A for totally resonant short patterns ${\mathfrak t}$, it remains to handle the
case that ${\mathfrak t}$ is non-maximal and $n\mid 2 k_r$.
{\sl Since $n$ is odd}, we reduce to the case that $n \mid k_{r}$,
and we assume this henceforth.
In order to apply the results in \cite{BBF4},
we shift the above arrays
by introducing the decorated arrays of non-negative integers
\begin{equation}\label{Gamma-flat}
\Gamma^{\flat}({\mathfrak t})=\cpair{
\begin{matrix}
\bar{c}_{1,r}&&\bar{c}_{1,r-1}&&\bar{c}_{1,r-2}&&\cdots&&\bar{c}_{1,1}\\
&c_{1,r-1}-k_{r}&&c_{1,r-2}-k_{r}&&\cdots&&c_{1,1}-k_{r}&
\end{matrix}
},
\end{equation}
and
\begin{equation}\label{Delta-flat}
\Delta^{\flat}({\mathfrak t})=\cpair{
\begin{matrix}
{\mathfrak c}_{1,r}-k_{r}&&{\mathfrak c}_{1,r-1}-k_{r}&&{\mathfrak c}_{1,r-2}-k_{r}&&\cdots&&{\mathfrak c}_{1,1}-k_{r}\\
&\bar{{\mathfrak c}}_{1,r-1}&&\bar{{\mathfrak c}}_{1,r-2}&&\cdots&&\bar{{\mathfrak c}}_{1,1}&
\end{matrix}
},
\end{equation}
with boxing and circling rules as above. Observe that $\Gamma^{\flat}({\mathfrak t})$ is a $\Gamma$-accordion of
weight $k_r$ and $\Delta^{\flat}({\mathfrak t})$ is a $\Delta$-accordion of weight $k_r$ in the sense of
\cite{BBF4}, pgs.\ 42-43. Passing to these arrays will allow us to use the results of \cite{BBF4} below.
Define $G_{\Gamma^{\flat}}({\mathfrak t})=\prod^{2r-1}_{j=1}\gamma_{\Gamma}(c(\Gamma^{\flat})_{1,j})$ and $G_{\Delta^{\flat}}({\mathfrak t})=\prod^{2r-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\flat})_{1,j})$.
\begin{proposition}\label{lm:resonant}
Suppose that the weight $\bf k$ is totally resonant. Then
$$
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma}({\mathfrak t})=q^{(r-1)k_{r}}\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma^{\flat}}({\mathfrak t})=q^{(r-1)k_{r}}\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta^{\flat}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta}({\mathfrak t}).
$$
\end{proposition}
\begin{proof}
By the above discussion, we only need to consider the case of $\bf k$ such that $n$ divides $k_{r}$,
and we may limit the sums to be over non-maximal ${\mathfrak t}$. Since $n\mid k_r$, shifting an entry by $k_{r}$
does not affect its divisibility by $n$, and the Gauss sums satisfy
$\gamma_{\Gamma}(c+k_{r})=q^{k_{r}}\gamma_{\Gamma}(c)$ and $\gamma_{\Delta}(c+k_{r})=q^{k_{r}}\gamma_{\Delta}(c)$.
Applying these relations to the shifted entries in \eqref{Gamma-flat} and \eqref{Delta-flat}, we obtain the equalities
$$\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma}({\mathfrak t})=q^{(r-1)k_{r}}\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma^{\flat}}({\mathfrak t})$$
$$\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta}({\mathfrak t})=q^{(r-1)k_{r}}\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta^{\flat}}({\mathfrak t}).$$
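To spell out the bookkeeping behind these equalities: passing from $\Gamma'({\mathfrak t})$ to $\Gamma^{\flat}({\mathfrak t})$ shifts exactly the $r-1$ bottom-row entries down by $k_{r}$, so
$$
\prod^{2r-1}_{j=1}\gamma_{\Gamma}(c_{1,j})
=q^{(r-1)k_{r}}\prod^{r}_{j=1}\gamma_{\Gamma}(\bar{c}_{1,j})\prod^{r-1}_{j=1}\gamma_{\Gamma}(c_{1,j}-k_{r}),
$$
while passing from $\Delta'({\mathfrak t})$ to $\Delta^{\flat}({\mathfrak t})$ shifts the $r$ top-row entries, giving a factor $q^{rk_{r}}$ that the prefactor $q^{-\sum_{i}d_{i}}=q^{-k_{r}}$ in the definition of $G_{\Delta'}({\mathfrak t})$ reduces to $q^{(r-1)k_{r}}$.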
We may now apply Statement C of \cite{BBF4} to conclude
that $\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma^{\flat}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta^{\flat}}({\mathfrak t})$.
(Note that the involution ${\mathfrak t}\mapsto {\mathfrak t}'$ of that result is built into the notation
(\ref{Gamma-flat}) and (\ref{Delta-flat}) above.) This completes
the proof.
\end{proof}
We conclude this subsection by establishing a similar result using the arrays $\Gamma^{\iota}({\mathfrak t})$
and $\Delta^{\iota}({\mathfrak t})$. This will be required to treat the Class I and Class II situations below.
\begin{proposition}\label{pro:resonant}
Suppose that the weight $\bf k$ is totally resonant. Then
$$
\sum_{{\mathfrak t}\in {\mathfrak {S}}_{\bf k}}G_{\Gamma}({\mathfrak t})=q^{(2k_{r}-N)(2r-1)}
\sum_{{\mathfrak t}\in {\mathfrak {S}}_{\bf k}} G_{\Gamma^{\iota}}({\mathfrak t})=
q^{(2k_{r}-N)(2r-1)}\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}} G_{\Delta^{\iota}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta}({\mathfrak t}).
$$
\end{proposition}
\begin{proof}
Since ${\mathfrak t}$ is totally resonant, the entries $c_{1,j}$ and $\iota(c_{1,2r-j})$ have the same decorations, as do
${\mathfrak c}(\Delta')_{1,j}$ and ${\mathfrak c}(\Delta^{\iota})_{1,2r-j}$. In addition,
we have $c_{1,j}+\iota(c_{1,2r-j})={\mathfrak c}(\Delta')_{1,j}+{\mathfrak c}(\Delta^{\iota})_{1,2r-j}=N$
for all $1\leq j\leq 2r-1$. It follows that for $1\leq j\leq r-1$,
$$
\gamma_{\Gamma}(c_{1,j})\gamma_{\Gamma}(\bar{c}_{1,j})=
q^{4k_{r}-2N}\gamma_{\Gamma}(\iota(c_{1,j}))\gamma_{\Gamma}(\iota(\bar{c}_{1,j}))
\text{ and }
\gamma_{\Gamma}(c_{1,r})=q^{2k_{r}-N}\gamma_{\Gamma}(\iota(c_{1,r})),
$$
and similar relations hold for ${\mathfrak c}(\Delta')_{1,j}$ and $\bar{\mathfrak c}(\Delta^{\iota})_{1,j}$. The result then
follows from Proposition~\ref{lm:resonant}.
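For the record, the power of $q$ in the statement arises from these relations: each of the $r-1$ pairs $(c_{1,j},\bar{c}_{1,j})$ with $j<r$ contributes $q^{4k_{r}-2N}$ and the middle entry contributes $q^{2k_{r}-N}$, for a total factor of
$$
q^{(r-1)(4k_{r}-2N)+(2k_{r}-N)}=q^{(2k_{r}-N)(2r-1)}.
$$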
\end{proof}
\subsection{The Class I Case}
In this subsection, we consider the case when $\bf k$ is in Class I with fixed index $i_0$.
Our strategy is as follows: given a corresponding array
\begin{equation}\label{one-row-array}\left(\begin{matrix}
c_{1,1}&c_{1,2}&\dots&c_{1,r}&\overline{c}_{1,r-1}&\dots&\overline{c}_{1,1}
\end{matrix}\right),
\end{equation}
we break this up into the ``totally resonant" piece
$$\left(\begin{matrix}
c_{1,1}&c_{1,2}&\dots&c_{1,i_0}&\overline{c}_{1,i_0-1}&\dots&\overline{c}_{1,1}
\end{matrix}\right),$$
the lower rank piece
$$\left(\begin{matrix}
c_{1,i_0+1}&\dots&c_{1,r}&\dots&\overline{c}_{1,i_0+1}
\end{matrix}\right),$$
and the singleton $\overline{c}_{1,i_0}$.
The totally resonant piece indeed corresponds to a totally resonant short pattern of lower rank,
and may be treated using the results in Subsection~\ref{Tot-Res-Case}. The lower rank piece is handled by
induction, and the singleton will match on both sides of the desired equalities.
Now suppose ${\mathfrak t}$ is a short pattern in Class I with associated array (\ref{one-row-array}). In this case, we have
\begin{equation}
\begin{array}{ll}
k_{i}({\mathfrak t})=2k^{*}_{i_{0}}+a+k^{\sharp}_{1}
&\text{ for }1\leq i\leq i_{0},\\
k_{i}({\mathfrak t})=2k^{*}_{i_{0}}+k^{\sharp}_{i-i_{0}} &\text{ for } i_{0}+1\leq i<r,\\
k_{r}({\mathfrak t})=k^{*}_{i_{0}}+k^{\sharp}_{r-i_{0}},&
\end{array}\label{eq:I-k}
\end{equation}
where the notation is as in (\ref{star-sharp1}), (\ref{star-sharp2}), and (\ref{star-sharp3})
and the immediately preceding paragraph.
The entries of the decorated arrays $\Gamma({\mathfrak t})$ and $\Gamma^{\iota}({\mathfrak t})$ are given as follows.
\begin{equation} \label{eq:c-*-sharp}
\begin{array}{lll}
\bar{c}_{1,j}=\bar{c}^{*}_{1,j} &\iota(\bar{c}_{1,j})=N-k_{1}+\bar{c}^{*}_{1,j}&\text{ for } 1\leq j\leq i_{0},\\
\bar{c}_{1,j}=k^{*}_{i_{0}}+\bar{c}^{\sharp}_{1,j-i_{0}} &\iota(\bar{c}_{1,j})=N-k_{1}+k^{*}_{i_{0}}+\bar{c}^{\sharp}_{1,j-i_{0}} &\text{ for } i_{0}+1\leq j<r,\\
c_{1,j}=k^{*}_{i_{0}}+c^{\sharp}_{1,j-i_{0}} &\iota(c_{1,j})=N-k_{1}+k^{*}_{i_{0}}+c^{\sharp}_{1,j-i_{0}} &\text{ for } i_{0}+1\leq j\leq r,\\
c_{1,j}=c^{*}_{1,j}+a+k^{\sharp}_{1} &\iota(c_{1,j})=N-2k^{*}_{i_{0}}+c^{*}_{1,j}&\text{ for } 1\leq j\leq i_{0}.
\end{array}
\end{equation}
We define new decorated arrays $\Gamma'({\mathfrak t})$ and $\Gamma'^{\iota}({\mathfrak t})$, whose entries
are the same as those of $\Gamma({\mathfrak t})$ and $\Gamma^{\iota}({\mathfrak t})$, respectively.
If $k_{1}-\varepsilon_{2}k_{2}=\mu_{r}$, then the decoration rules for $\Gamma'({\mathfrak t})$ are the same
as those for $\Gamma({\mathfrak t})$. In this case, $\Gamma'^{\iota}({\mathfrak t})$ is by definition decorated to be non-strict
(and so will contribute zero to the coefficients).
(Notice that when $k({\mathfrak t})$ is strict, $k_{1}-\varepsilon_{2}k_{2}=\mu_{r}$ if and only if $\mu_{r+1-i_{0}}=a$.)
If $k_{1}-\varepsilon_{2}k_{2}\ne \mu_{r}$, then the decoration rules
for $\Gamma'({\mathfrak t})$ and $\Gamma'^{\iota}({\mathfrak t})$ are modified from those for
$\Gamma({\mathfrak t})$ and $\Gamma^{\iota}({\mathfrak t})$, resp., as follows.
In $\Gamma'({\mathfrak t})$, the entry $\bar{c}_{1,i_{0}}$ is neither boxed nor circled.
The entry $c_{1,i_{0}}$ is boxed if $d_{2r-i_{0}}=\mu_{r+1-i_{0}}-a$, and circled if $d_{2r-i_{0}}=0$.
The rest of the entries in $\Gamma'({\mathfrak t})$ are given the same decorations as $\Gamma({\mathfrak t})$.
In $\Gamma'^{\iota}({\mathfrak t})$, the entry $\iota(\bar{c}_{1,i_{0}})$ is neither boxed nor circled. The entry $\iota(c_{1,j})$ for $i_{0}<j<2r-i_{0}$ is circled if $\iota(c_{1,j})=\iota(c_{1,j+1})$.
The rest of the entries in $\Gamma'^{\iota}({\mathfrak t})$ are given the same decorations as $\Gamma^{\iota}({\mathfrak t})$.
Note that for $j$ in the interval $i_{0}<j<2r-i_{0}$ the decoration rules for $\Gamma'^{\iota}({\mathfrak t})$ are the same as the decoration rules for $\Gamma({\mathfrak t})$. Also note that
until now, all entries of zero occurring in decorated arrays were always circled, and so
(when not boxed) received weight $q^0=1$.
With this modification of the decoration rules,
we have the possibility of an undecorated zero, which contributes $1-q^{-1}$ since $n$ always divides $0$.
For the convenience of the reader, we give an example.
\begin{example}\label{r=3 example}\rm
Suppose $r=3$, and let ${\bf k}$ be a weight vector with $i_{0}=2$ and $a=k_{2}-2k_{3}>0$. Fix $\mu$. We have
$$
{\mathfrak {S}}_{\bf k}(\mu)=\cpair{{\mathfrak t}=(d_{1},\dots,d_{5})\in {\mathbb Z}^{5}_{\geq 0}\mid d_{1}=d_{5}\leq \mu_{3}, d_{4}\leq \mu_{2}-a, d_{3}\leq \mu_{1},k({\mathfrak t})={\bf k}}.
$$
Also, ${\mathfrak t}^{*}=(d_{1},d_{4},d_{5})$ and ${\mathfrak t}^{\sharp}=(d_{3})$. Then $k^{*}_{1}=2k^{*}_{2}=2(d_{1}+d_{4})$, $k^{\sharp}_{1}=d_{3}$, $k_{1}=k_{2}=2k^{*}_{2}+a+2k^{\sharp}_{1}$ and $k_{3}=k^{\sharp}_{1}+k^{*}_{2}$. The array $\Gamma({\mathfrak t})$ is
\begin{align*}
\Gamma({\mathfrak t})=&(d_{1}+d_{2}+2d_{3}+d_{4}, d_{2}+2d_{3}+d_{1},d_{3}+d_{4}+d_{1},d_{1}+d_{4},d_{1})\\
=&(k^{*}_{2}+d_{4}+a+2k^{\sharp}_{1},k^{*}_{2}+a+2k^{\sharp}_{1},k^{\sharp}_{1}+k^{*}_{2},k^{*}_{2},d_{1}).
\end{align*}
The decoration rules for $\Gamma({\mathfrak t})$ and $\Gamma'({\mathfrak t})$ are given as follows:
$$
\Gamma({\mathfrak t}): \begin{pmatrix}
k^{*}_{2}+d_{4}+a+2k^{\sharp}_{1}&k^{*}_{2}+a+2k^{\sharp}_{1}&k^{\sharp}_{1}+k^{*}_{2}&k^{*}_{2}&d_{1}\\
\fbox{$d_{1}$}\circled{$d_{4}$}&\fbox{$d_{2}$}&\fbox{$d_{3}$}\circled{$d_{3}$}&\circled{$d_{4}$}&\fbox{$d_{1}$}\circled{$d_{1}$}
\end{pmatrix}
$$
$$
\Gamma'({\mathfrak t}): \begin{pmatrix}
k^{*}_{2}+d_{4}+a+2k^{\sharp}_{1}&k^{*}_{2}+a+2k^{\sharp}_{1}&k^{\sharp}_{1}+k^{*}_{2}&k^{*}_{2}&d_{1}\\
\fbox{$d_{1}$}\circled{$d_{4}$}&\fbox{$d_{2}$}\circled{$d_{4}$}&\fbox{$d_{3}$}\circled{$d_{3}$}&&\fbox{$d_{1}$}\circled{$d_{1}$}
\end{pmatrix}.
$$
Here the first rows are the arrays, and the second rows are the decoration rules for each array. The symbol
$\fbox{$d_{i}$}$ means the entry above it is boxed if $d_{i}=\mu_{r+1-i}$. The symbol
$\circled{$d_{i}$}$ means the entry above it is circled if $d_{i}=0$.
\end{example}
Similarly to Lemma~\ref{lm:Delta=Delta'},
these modifications in the decorations do not change the associated products
of Gauss sums. We have:
\begin{lemma}\label{I:gamma=gamma'}
If ${\mathfrak t}$ is a short pattern in Class I and $k({\mathfrak t})$ is strict, then
$G_{\Gamma}({\mathfrak t})=G_{\Gamma'}({\mathfrak t})$ and $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Gamma'^{\iota}}({\mathfrak t})$.
\end{lemma}
\begin{proof}
If $k_{1}-\varepsilon_2 k_{2}=\mu_r$, then
the desired equalities follow immediately from the definitions. Otherwise, since ${\mathfrak t}\in BZL_1(\mu)$, necessarily
$k_{1}-\varepsilon_2 k_{2}<\mu_r$, and we assume this for the remainder of the proof.
First, we will show that $G_{\Gamma'}({\mathfrak t})=G_{\Gamma}({\mathfrak t})$. It is sufficient to show that
\begin{equation}\label{eq:lm:c-0}
\gamma_{\Gamma}(c_{1,i_{0}})\gamma_{\Gamma}(\bar{c}_{1,i_{0}})=\gamma_{\Gamma'}(c_{1,i_{0}})\gamma_{\Gamma'}(\bar{c}_{1,i_{0}}).
\end{equation}
If $d_{2r-i_{0}}\neq 0$, these two entries have the same decorations in $\Gamma({\mathfrak t})$ and $\Gamma'({\mathfrak t})$,
so Eqn.~\eqref{eq:lm:c-0} is true. If $d_{2r-i_{0}}=0$, then ${\mathfrak t}^{*}$ is not maximal.
Thus, $G_{\Gamma}({\mathfrak t})=G_{\Gamma'}({\mathfrak t})=0$ unless $n$ divides $k_{1}$ and $k^{*}_{i_{0}}$ by Lemma~\ref{lm:divisibility}.
If so, $n$ divides $\overline{c}_{1,i_0}=k_{i_0}^*$ and $n$ divides $c_{1,i_0}=k_1-\overline{c}_{1,i_0}$ (as $d_{2r-i_{0}}=0$).
Thus in this case,
$\gamma_{\Gamma}(c_{1,i_{0}})\gamma_{\Gamma}(\bar{c}_{1,i_{0}})=q^{c_{1,i_{0}}}(1-q^{-1})q^{\bar{c}_{1,i_{0}}}$
and $\gamma_{\Gamma'}(c_{1,i_{0}})\gamma_{\Gamma'}(\bar{c}_{1,i_{0}})=q^{\bar{c}_{1,i_{0}}}(1-q^{-1})q^{c_{1,i_{0}}}.$
Eqn.~\eqref{eq:lm:c-0} follows.
Second, we show that $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Gamma'^{\iota}}({\mathfrak t})$.
We only need to consider the entries $\iota(c_{1,j})$ for $i_{0}< j\leq 2r-i_{0}$.
We may suppose that no entry $c_{1,j}$ for $i_{0}<j\leq 2r-i_{0}$ is both boxed and circled in $\Gamma^{\iota}({\mathfrak t})$,
since if one were, by Lemma~\ref{lm:strictness}, an entry in $\Gamma'^{\iota}({\mathfrak t})$ would also have
this property and both sides would be zero.
In $\Gamma^{\iota}({\mathfrak t})$, let $\iota(c_{1,j})$ for $m_{1}\leq j\leq m_{2}$ be a sequence of consecutive circled entries, where $m_{1}> i_{0}$ and $m_{2}\leq 2r-i_{0}$, which is maximal,
i.e.\ such that the entry $\iota(c_{1,m_{1}-1})$ is not circled
and either $\iota(c_{1,m_{2}+1})$ is not circled or $m_{2}=2r-i_{0}$. By the assumption, the entries $\iota(c_{1,j})$ for $m_{1}\leq j\leq m_{2}$ are not boxed. Since $\Gamma({\mathfrak t})$ is strict, the entry $\iota(c_{1,m_{1}-1})$ is not boxed (note $m_{1}>i_{0}+1$ as ${\mathfrak t}$ is in Class I).
On the other hand, in $\Gamma'^{\iota}({\mathfrak t})$, the entries $\iota(c_{1,j})$ for $m_{1}-1\leq j\leq m_{2}-1$ are circled and the entry $\iota(c_{1,m_{2}})$ is unboxed and uncircled. Therefore, $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Gamma'^{\iota}}({\mathfrak t})$.
\end{proof}
We now carry out similar constructions for the $\Delta$ arrays.
In $\Delta({\mathfrak t})$, we have
\begin{equation}\label{eq:fc-*-sharp}
\begin{array}{lll}
\bar{{\mathfrak c}}_{1,j}=\bar{{\mathfrak c}}^{*}_{1,j}=\bar{c}_{1,j} &\iota(\bar{{\mathfrak c}}_{1,j})=N-k_{1}+\bar{{\mathfrak c}}^{*}_{1,j}& \text{ for } 1\leq j\leq i_{0},\\
\bar{{\mathfrak c}}_{1,j}=k^{*}_{i_{0}}+\bar{{\mathfrak c}}^{\sharp}_{1,j-i_{0}} &\iota(\bar{{\mathfrak c}}_{1,j})=N-k_{1}+k^{*}_{i_{0}}+\bar{{\mathfrak c}}^{\sharp}_{1,j-i_{0}}& \text{ for } i_{0}+1\leq j<r,\\
{\mathfrak c}_{1,j}=k^{*}_{i_{0}}+{\mathfrak c}^{\sharp}_{1,j-i_{0}} &\iota({\mathfrak c}_{1,j})=N-k_{1}+k^{*}_{i_{0}}+{\mathfrak c}^{\sharp}_{1,j-i_{0}} &\text{ for } i_{0}+1\leq j\leq r,\\
{\mathfrak c}_{1,j}={\mathfrak c}^{*}_{1,j}+a+k^{\sharp}_{1} &\iota({\mathfrak c}_{1,j})=N-2k^{*}_{i_{0}}+{\mathfrak c}^{*}_{1,j}& \text{ for } 1\leq j\leq i_{0}.
\end{array}
\end{equation}
By Lemma~\ref{lm:delta-circle}, $G_{\Delta}({\mathfrak t})$ vanishes unless $n$ divides $\bar{{\mathfrak c}}_{1,i_{0}}=k^{*}_{i_{0}}$.
We again define new decorated arrays $\Delta'({\mathfrak t})$ and $\Delta'^{\iota}({\mathfrak t})$, whose
entries are the same as the entries of $\Delta({\mathfrak t})$ and $\Delta^{\iota}({\mathfrak t})$, resp., but whose decorations are
(usually) modified.
If $k_{1}-\varepsilon_{2}k_{2}=\mu_{r}$, then the decoration rules for $\Delta'({\mathfrak t})$ are the same
as those for $\Delta({\mathfrak t})$. In this case, $\Delta'^{\iota}({\mathfrak t})$ is decorated so as
to be non-strict. If instead $k_{1}-\varepsilon_{2}k_{2}\ne \mu_{r}$, then
in $\Delta'({\mathfrak t})$ the entry $\bar{{\mathfrak c}}_{1,i_{0}}$ is neither boxed nor circled.
The entry $\bar{{\mathfrak c}}_{1,i_{0}+1}$ is circled if $d_{2r-i_{0}-1}=0$. The
remaining entries of $\Delta'({\mathfrak t})$ are assigned the same decorations as in $\Delta({\mathfrak t})$.
In $\Delta'^{\iota}({\mathfrak t})$, the entry $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}+1}$ is neither boxed nor circled.
The entry $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}+2}$ is circled if $d_{2r-i_{0}-1}=0$.
The remaining entries of $\Delta'^{\iota}({\mathfrak t})$ are assigned the same decorations as in $\Delta^{\iota}({\mathfrak t})$.
\begin{example}\label{r=3 part 2}\rm
We continue Example~\ref{r=3 example} by giving the $\Delta$ and $\Delta'$ arrays, using the same notation.
We have
\begin{align*}
\Delta({\mathfrak t})=&(2d_{1}+d_{2}+2d_{3}+d_{4},d_{2}+2d_{3}+d_{4}+d_{1},2d_{3}+d_{4}+d_{1},d_{4}+d_{1},d_{1})\\
=&(2k^{*}_{2}+a+2k^{\sharp}_{1},k^{*}_{2}+d_{4}+a+2k^{\sharp}_{1},2k^{\sharp}_{1}+k^{*}_{2},k^{*}_{2},d_{1}).
\end{align*}
The decoration rules for $\Delta({\mathfrak t})$ and $\Delta'({\mathfrak t})$ are:
$$
\Delta({\mathfrak t}): \begin{pmatrix}
2k^{*}_{2}+a+2k_1^{\sharp}&k^{*}_{2}+d_{4}+a+2k^{\sharp}_{1}&2k^{\sharp}_{1}+k^{*}_{2}&k^{*}_{2}&d_{1}\\
\fbox{$d_{1}$}&\fbox{$d_{2}$}\circled{$d_{1}$}&\fbox{$d_{3}$}&\circled{$d_{3}$}&\fbox{$d_{2}$}\circled{$d_{4}$}\circled{$d_{1}$}
\end{pmatrix}
$$
$$
\Delta'({\mathfrak t}): \begin{pmatrix}
2k^{*}_{2}+a+2k_1^{\sharp}&k^{*}_{2}+d_{4}+a+2k^{\sharp}_{1}&2k^{\sharp}_{1}+k^{*}_{2}&k^{*}_{2}&d_{1}\\
\fbox{$d_{1}$}&\fbox{$d_{2}$}\circled{$d_{1}$}&\fbox{$d_{3}$}\circled{$d_{3}$}&&\fbox{$d_{2}$}\circled{$d_{4}$}\circled{$d_{1}$}
\end{pmatrix}.
$$
\end{example}
\begin{lemma}\label{I:delta=delta'}
If ${\mathfrak t}$ is a short pattern in Class I and $k({\mathfrak t})$ is strict, then $G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})=G_{\Delta'^{\iota}}({\mathfrak t})$.
\end{lemma}
\begin{proof}
To show that $G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})$, it is sufficient to establish the equality
\begin{equation}\label{eq:lm:fc-0}
\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i_{0}+1})\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i_{0}})=\gamma_{\Delta'}(\bar{{\mathfrak c}}_{1,i_{0}+1})\gamma_{\Delta'}(\bar{{\mathfrak c}}_{1,i_{0}}).
\end{equation}
If $d_{2r-i_{0}-1}\neq 0$, these two entries have the same decorations in $\Delta({\mathfrak t})$ and $\Delta'({\mathfrak t})$,
and Eqn.~\eqref{eq:lm:fc-0} holds. If $d_{2r-i_{0}-1}=0$, then $d_{i_{0}+2}<\mu_{r-i_{0}-2}+d_{i_{0}+1}-d_{2r-i_{0}-1}$ and $\bar{{\mathfrak c}}_{1,i_{0}+1}$ is not boxed, and Eqn.~\eqref{eq:lm:fc-0} again holds.
The proof for the second equality of the Lemma is similar and is omitted.
\end{proof}
\begin{proposition}\label{pro:I}
Suppose that the weight $\bf k$ is in Class I and is strict. Then
$$
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta}({\mathfrak t})
\text{ and }
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma^{\iota}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta^{\iota}}({\mathfrak t}).
$$
\end{proposition}
\begin{proof}
We prove this Proposition by induction on $r$. The base case $r=1$ is established in Example~\ref{ex:r=1}.
Suppose $r>1$.
Suppose first that $k_{1}-\varepsilon_{2}k_{2}\ne \mu_{r}$.
By Lemmas~\ref{I:gamma=gamma'}, \ref{I:delta=delta'} and \ref{lm:divisibility}, we have
$$
G_{\Gamma}({\mathfrak t})=q^{i_{0}(k^{\sharp}_{1}+a)+(2r-2i_{0}-1)k^{*}_{i_{0}}+k^*_{i_0}}(1-q^{-1})G_{\Gamma^{\flat}}({\mathfrak t}^{*})G_{\Gamma}({\mathfrak t}^{\sharp}),
$$
and
$$
G_{\Delta}({\mathfrak t})=q^{(2r-2i_{0}-1)k^{*}_{i_{0}}+i_{0}(a+k^{\sharp}_{1})+k^*_{i_0}}(1-q^{-1})G_{\Delta^{\flat}}({\mathfrak t}^{*})G_{\Delta}({\mathfrak t}^{\sharp}),
$$
when $n$ divides $k^{*}_{i_{0}}$, and these quantities are $0$ otherwise. Since $\mu^*$ is totally resonant,
by Proposition~\ref{lm:resonant}, we have
$$
\sum_{{\mathfrak t}^{*}\in{\mathfrak {S}}_{{\bf k}^*}(\mu^{*})}G_{\Gamma^{\flat}}({\mathfrak t}^{*})=
\sum_{{\mathfrak t}^{*}\in{\mathfrak {S}}_{{\bf k}^*}(\mu^{*})}G_{\Delta^{\flat}}({\mathfrak t}^{*}).
$$
In addition, by induction,
$$
\sum_{{\mathfrak t}^{\sharp}\in{\mathfrak {S}}_{{\bf k}^\sharp}(\mu^{\sharp})}G_{\Gamma}({\mathfrak t}^{\sharp})=
\sum_{{\mathfrak t}^{\sharp}\in{\mathfrak {S}}_{{\bf k}^\sharp}(\mu^{\sharp})}G_{\Delta}({\mathfrak t}^{\sharp}).
$$
Referring to Lemma~\ref{split-reduce}, we have
\begin{multline*}
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma}({\mathfrak t})=\\
\sum_{{\bf k^{*}},{\bf k^{\sharp}}}q^{i_{0}(k^{\sharp}_{1}+a)+(2r-2i_{0}-1)k^{*}_{i_{0}}+k^*_{i_0}}(1-q^{-1})\sum_{{\mathfrak t}^{*}\in{\mathfrak {S}}_{\bf k^{*}}(\mu^*)}G_{\Gamma^{\flat}}({\mathfrak t}^*)\sum_{{\mathfrak t}^{\sharp}\in{\mathfrak {S}}_{\bf k^{\sharp}}(\mu^\sharp)}G_{\Gamma}({\mathfrak t}^{\sharp})
\end{multline*}
and
\begin{multline*}
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta}({\mathfrak t})=\\
\sum_{{\bf k^{*}},{\bf k^{\sharp}}}q^{(2r-2i_{0}-1)k^{*}_{i_{0}}+i_{0}(a+k^{\sharp}_{1})+k^*_{i_0}}(1-q^{-1})
\sum_{{\mathfrak t}^{*}\in{\mathfrak {S}}_{\bf k^{*}}(\mu^*)}G_{\Delta^{\flat}}({\mathfrak t}^{*})
\sum_{{\mathfrak t}^{\sharp}\in{\mathfrak {S}}_{\bf k^{\sharp}}(\mu^\sharp)}G_{\Delta}({\mathfrak t}^{\sharp})
\end{multline*}
where the outer sums over ${\bf k^{*}}$ and ${\bf k^{\sharp}}$ are as given in Lemma~\ref{split-reduce}.
Therefore,
$$\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta}({\mathfrak t}),$$
as claimed.
For the second equality, by Lemma~\ref{lm:divisibility} we may suppose that $n$ divides $k_{1}-k^{*}_{i_{0}}$
since otherwise both sides are zero. When $n$ divides $k_{1}-k^{*}_{i_{0}}$,
$$
G_{\Gamma^{\iota}}({\mathfrak t})=q^{(2r-2i_{0})(N-k_{1}+k^{*}_{i_{0}})}(1-q^{-1})\prod^{i_{0}-1}_{j=1}\gamma_{\Gamma}(\iota(c_{1,j}))\gamma_{\Gamma}(\iota(\bar{c}_{1,j}))\cdot \gamma_{\Gamma}(\iota(c_{1,i_{0}}))\cdot G_{\Gamma}({\mathfrak t}^{\sharp}),
$$
and
\begin{multline*}
G_{\Delta^{\iota}}({\mathfrak t})=q^{k_{1}-k^{*}_{i_{0}}}\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j})\cdot
\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}})\\
\times (1-q^{-1})q^{(2r-2i_{0})(N-k_{1}+k^{*}_{i_{0}})}G_{\Delta}({\mathfrak t}^{\sharp}).
\end{multline*}
If ${\mathfrak t}^{*}$ is maximal, then all the entries are boxed and thus
\begin{multline*}
\prod^{i_{0}-1}_{j=1}\gamma_{\Gamma}(\iota(c_{1,j}))\gamma_{\Gamma}(\iota(\bar{c}_{1,j}))\cdot \gamma_{\Gamma}(\iota(c_{1,i_{0}}))=\\q^{k_{1}-k^{*}_{i_{0}}}\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j})\cdot \gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}}).
\end{multline*}
If ${\mathfrak t}^{*}$ is non-maximal, then
$$
G_{\Gamma^{\iota}}({\mathfrak t})=q^{(2r-2i_{0})(N-k_{1}+k^{*}_{i_{0}})+(i_{0}-1)(N-k_{1})+i_{0}(N-k^{*}_{i_{0}})}(1-q^{-1})G_{\Gamma^{\flat}}({\mathfrak t}^{*})G_{\Gamma}({\mathfrak t}^{\sharp}),
$$
and
$$
G_{\Delta^{\iota}}({\mathfrak t})=q^{(2r-2i_{0})(N-k_{1}+k^{*}_{i_{0}})+(i_{0}-1)(N-k_{1})+i_{0}(N-k^{*}_{i_{0}})}(1-q^{-1})G_{\Delta^{\flat}}({\mathfrak t}^{*})G_{\Delta}({\mathfrak t}^{\sharp}).
$$
Again referring to Lemma~\ref{split-reduce}, we have
\begin{multline*}
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma^{\iota}}({\mathfrak t})=
\sum_{{\bf k^{*}},{\bf k^{\sharp}}}
q^{(2r-2i_{0})(N-k_{1}+k^{*}_{i_{0}})+(i_{0}-1)(N-k_{1})+i_{0}(N-k^{*}_{i_{0}})}(1-q^{-1})\\ \times
\sum_{{\mathfrak t}^{*}\in{\mathfrak {S}}_{\bf k^{*}}(\mu^*)}G_{\Gamma^{\flat}}({\mathfrak t}^{*})
\sum_{{\mathfrak t}^{\sharp}\in{\mathfrak {S}}_{\bf k^{\sharp}}(\mu^\sharp)}G_{\Gamma}({\mathfrak t}^{\sharp})
\end{multline*}
and
\begin{multline*}
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta^{\iota}}({\mathfrak t})=
\sum_{{\bf k^{*}},{\bf k^{\sharp}}}q^{(2r-2i_{0})(N-k_{1}+k^{*}_{i_{0}})+(i_{0}-1)(N-k_{1})+i_{0}(N-k^{*}_{i_{0}})}(1-q^{-1})
\\ \times
\sum_{{\mathfrak t}^{*}\in{\mathfrak {S}}_{\bf k^{*}}(\mu^*)}G_{\Delta^{\flat}}({\mathfrak t}^{*})
\sum_{{\mathfrak t}^{\sharp}\in{\mathfrak {S}}_{\bf k^{\sharp}}(\mu^\sharp)}G_{\Delta}({\mathfrak t}^{\sharp})
\end{multline*}
where the outer sums over ${\bf k^{*}}$ and ${\bf k^{\sharp}}$ are described in Lemma~\ref{split-reduce}.
By induction, we obtain
$\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma^{\iota}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta^{\iota}}({\mathfrak t})$,
as claimed.
Finally, suppose instead that $k_{1}-\varepsilon_{2}k_{2}=\mu_{r}$.
Then a similar argument easily reduces the desired result to the $r-1$ case for $\Gamma({\mathfrak t})$ and $\Delta({\mathfrak t})$ and to the non-strict case for $\Gamma^{\iota}({\mathfrak t})$ and $\Delta^{\iota}({\mathfrak t})$.
\end{proof}
\subsection{The Class II Case}
In this subsection, we consider the case when $\bf k$ is in Class II. The approach to proving
Statement A for such weights is the same as in Class I, but the details require modification.
For ${\mathfrak t}$ in Class II, we have
\begin{equation}\label{eq:II-k}
\begin{array}{ll}
k_{i}({\mathfrak t})=2k^{*}_{i_{0}}+a+k^{\sharp}_{1}
&\text{ for }1\leq i\leq i_{0},\\
k_{i}({\mathfrak t})=2k^{*}_{i_{0}}+2a+k^{\sharp}_{i-i_{0}} &\text{ for } i_{0}+1\leq i<r,\\
k_{r}({\mathfrak t})=k^{*}_{i_{0}}+a+k^{\sharp}_{r-i_{0}}.&
\end{array}
\end{equation}
The entries of $\Gamma({\mathfrak t})$ and $\Gamma^{\iota}({\mathfrak t})$ are given as follows.
\begin{equation}\label{eq:II-c-*-sharp}
\begin{array}{lll}
\bar{c}_{1,j}=\bar{c}^{*}_{1,j} &\iota(\bar{c}_{1,j})=N-k_{1}+\bar{c}^{*}_{1,j}&\text{ for } 1\leq j\leq i_{0},\\
\bar{c}_{1,j}=k^{*}_{i_{0}}+a+\bar{c}^{\sharp}_{1,j-i_{0}} &\iota(\bar{c}_{1,j})=N-k^{*}_{i_{0}}-k^{\sharp}_{1}+\bar{c}^{\sharp}_{1,j-i_{0}}&\text{ for } i_{0}+1\leq j<r,\\
c_{1,j}=k^{*}_{i_{0}}+a+c^{\sharp}_{1,j-i_{0}} &\iota(c_{1,j})=N-k^{*}_{i_{0}}-k^{\sharp}_{1}+c^{\sharp}_{1,j-i_{0}}&\text{ for } i_{0}+1\leq j\leq r,\\
c_{1,j}=c^{*}_{1,j}+a+k^{\sharp}_{1} &\iota(c_{1,j})=N-2k^{*}_{i_{0}}+c^{*}_{1,j}&\text{ for } 1\leq j\leq i_{0}.
\end{array}
\end{equation}
Once again, we define new decorated arrays
$\Gamma'({\mathfrak t})$ and $\Gamma'^{\iota}({\mathfrak t})$, whose entries
are the same as those of $\Gamma({\mathfrak t})$ and $\Gamma^{\iota}({\mathfrak t})$ (resp.), but with some modifications
to the decoration rules.
In $\Gamma'({\mathfrak t})$, the entry $c_{1,j}$ for $i_{0}< j\leq 2r-i_{0}-1$ is circled if $c_{1,j}=c_{1,j-1}$,
and the rest of the decorations for $\Gamma'({\mathfrak t})$ are the same as those of $\Gamma({\mathfrak t})$.
In $\Gamma'^{\iota}({\mathfrak t})$, the decoration rules for the entries $\iota(c_{1,j})$ in $\Gamma'^{\iota}({\mathfrak t})$ are the same as
those for $\iota(c_{1,j})$ in $\Gamma^{\iota}({\mathfrak t})$ except when $j=i_{0}$ or $j=2r-i_{0}$. The entry $\iota(c_{1,i_{0}})$
in $\Gamma'^{\iota}({\mathfrak t})$ is neither boxed nor circled, and the entry $\iota(\bar{c}_{1,i_{0}})$ is boxed if $d_{i_{0}}=\mu_{r+1-i_{0}}$ and circled if $d_{i_{0}}=0$.
\begin{lemma}\label{lm:II-gamma=gamma'}
If ${\mathfrak t}$ is a short pattern in Class II and $k({\mathfrak t})$ is strict, then
$G_{\Gamma}({\mathfrak t})=G_{\Gamma'}({\mathfrak t})$ and $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Gamma'^{\iota}}({\mathfrak t})$.
\end{lemma}
\begin{proof}
We may suppose that no entry $c_{1,j}$ for $i_{0}\leq j\leq 2r-i_{0}$ is both boxed and circled in $\Gamma({\mathfrak t})$,
as otherwise both sides are zero by
Lemma~\ref{lm:strictness}. In $\Gamma({\mathfrak t})$, let $c_{1,j}$ for $m_{1}\leq j\leq m_{2}$ be a sequence of circled entries, where $m_{1}\geq i_{0}$ and $m_{2}< 2r-i_{0}-1$,
which is maximal: the entry $c_{1,m_{2}+1}$ is not circled and either the entry
$c_{1,m_{1}-1}$ is not circled or $m_{1}=i_{0}$.
By strictness, the entries $c_{1,j}$ for $m_{1}\leq j\leq m_{2}$ are not boxed.
Next, let us show that the entry $c_{1,m_{2}+1}$ is not boxed. When $m_{2}<r$, if $c_{1,m_{2}+1}$ is boxed,
then $d_{m_{2}+1}=\mu_{r-m_{2}}+d_{m_{2}}-d_{2r-m_{2}}+d_{2r-m_{2}-1}$ and $d_{m_{2}}=d'_{m_{2}}$,
$d'_{m_{2}+1}=0$
by the circle decoration on $c_{1,m_{2}}$.
Thus, $\Gamma^{\iota}({\mathfrak t})$ is non-strict, contradicting the strictness of $\Gamma({\mathfrak t})$ by Lemma~\ref{lm:strictness}.
Similarly when $m_{2}\geq r$, if $\bar{c}_{1,2r-m_{2}-1}=c_{1,m_{2}+1}$ is boxed, then
$d_{m_{2}+1}=\mu_{m_{2}-r+2}+d'_{2r-m_{2}-2}-d_{m_{2}+2}$ and $d'_{2r-m_{2}}=0$, $d_{m_{2}+1}=d'_{2r-m_{2}-1}$,
contradicting strictness.
Hence, $\prod_{m_{1}\leq j\leq m_{2}+1}\gamma_{\Gamma}(c_{1,j})=(1-q^{-1})q^{\sum_{m_{1}\leq j\leq m_{2}+1}c_{1,j}}$ when $n$ divides $c_{1,m_{2}+1}$ and is zero otherwise.
On the other hand, in $\Gamma'({\mathfrak t})$, the entries $c_{1,j}$ for $m_{1}+1\leq j\leq m_{2}+1$ are circled and the entry $c_{1,m_{1}}$ is neither boxed nor circled. We obtain the same divisibility condition from $c_{1,m_1}$,
and evaluating directly, $\prod_{m_{1}\leq j\leq m_{2}+1}\gamma_{\Gamma}(c_{1,j})= \prod_{m_{1}\leq j\leq m_{2}+1}\gamma_{\Gamma'}(c_{1,j})$. Thus $G_{\Gamma}({\mathfrak t})=G_{\Gamma'}({\mathfrak t})$ as claimed.
Next, we show that $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Gamma'^{\iota}}({\mathfrak t})$. It is sufficient to prove that
\begin{equation}\label{eq:II-gamma-iota=gamma-iota'}
\gamma_{\Gamma^{\iota}}(\iota(c_{1,i_{0}}))\gamma_{\Gamma^{\iota}}(\iota(\bar{c}_{1,i_{0}}))=\gamma_{\Gamma'^{\iota}}(\iota(c_{1,i_{0}}))\gamma_{\Gamma'^{\iota}}(\iota(\bar{c}_{1,i_{0}})).
\end{equation}
If $d_{i_{0}}\neq 0$, these two entries have the same decorations in $\Gamma^{\iota}({\mathfrak t})$ and $\Gamma'^{\iota}({\mathfrak t})$, so Eqn.~\eqref{eq:II-gamma-iota=gamma-iota'} is true.
If $d_{i_{0}}=0$, then ${\mathfrak t}^{*}$ is not maximal.
Thus by Lemma~\ref{lm:gamma-circle} $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Gamma'^{\iota}}({\mathfrak t})=0$ unless $n$ divides $k_{1}$ and $k^{*}_{i_{0}}$. If this divisibility holds, then $n$ divides $\iota(c_{1,i_{0}})$ and $n$ divides $\iota(\bar{c}_{1,i_{0}})$.
Evaluating directly, we find that
$$\gamma_{\Gamma^{\iota}}(\iota(c_{1,i_{0}}))\gamma_{\Gamma^{\iota}}(\iota(\bar{c}_{1,i_{0}}))=
\gamma_{\Gamma'^{\iota}}(\iota(c_{1,i_{0}}))\gamma_{\Gamma'^{\iota}}(\iota(\bar{c}_{1,i_{0}}))
=q^{\iota(c_{1,i_{0}})+
\iota(\bar{c}_{1,i_{0}})} (1-q^{-1}),$$
and Eqn.~\eqref{eq:II-gamma-iota=gamma-iota'} holds.
\end{proof}
We turn to the $\Delta$ arrays.
The entries of $\Delta({\mathfrak t})$ and $\Delta^{\iota}({\mathfrak t})$ are given as follows.
\begin{equation}\label{eq:II-fc-*-sharp}
\begin{array}{lll}
\bar{{\mathfrak c}}_{1,j}=\bar{{\mathfrak c}}^{*}_{1,j}=\bar{c}_{1,j} &\iota(\bar{{\mathfrak c}}_{1,j})=N-k_{1}+\bar{{\mathfrak c}}_{1,j}& \text{ for } 1\leq j<i_{0},\\
\bar{{\mathfrak c}}_{1,j}=k^{*}_{i_{0}}+a+\bar{{\mathfrak c}}^{\sharp}_{1,j-i_{0}} &\iota(\bar{{\mathfrak c}}_{1,j})=N-k^{*}_{i_{0}}-k^{\sharp}_{1}+\bar{{\mathfrak c}}^{\sharp}_{1,j-i_{0}}& \text{ for } i_{0}\leq j<r,\\
{\mathfrak c}_{1,j}=k^{*}_{i_{0}}+a+{\mathfrak c}^{\sharp}_{1,j-i_{0}} &\iota({\mathfrak c}_{1,j})=N-k^{*}_{i_{0}}-k^{\sharp}_{1}+{\mathfrak c}^{\sharp}_{1,j-i_{0}} &\text{ for } i_{0}< j\leq r,\\
{\mathfrak c}_{1,j}={\mathfrak c}^{*}_{1,j}+a+k^{\sharp}_{1} &\iota({\mathfrak c}_{1,j})=N-2k^{*}_{i_{0}}+{\mathfrak c}^{*}_{1,j}& \text{ for } 1\leq j\leq i_{0}.
\end{array}
\end{equation}
Once again we define new arrays
$\Delta'({\mathfrak t})$ and $\Delta'^{\iota}({\mathfrak t})$, whose entries are the same as $\Delta({\mathfrak t})$ and $\Delta^{\iota}({\mathfrak t})$ (resp.)
but with some modifications to the decoration rules.
In $\Delta'({\mathfrak t})$,
the entry ${\mathfrak c}_{1,i_{0}+1}$ is neither boxed nor circled; when $i_{0}>1$, the entry $\bar{{\mathfrak c}}_{1,i_{0}-1}$ is boxed if $d_{i_{0}}=\mu_{r+1-i_{0}}$, and circled if $d_{i_{0}}=0$; when $i_{0}=1$, the entry ${\mathfrak c}_{1,1}$ is circled if $d_{1}=0$. The rest
of the decorations for $\Delta'({\mathfrak t})$ are the same as for $\Delta({\mathfrak t})$.
In $\Delta'^{\iota}({\mathfrak t})$,
the entry ${\mathfrak c}(\Delta^{\iota})_{1,i_{0}}$ is neither boxed nor circled, the entry $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}}$ is boxed if $d_{i_{0}}=\mu_{r+1-i_{0}}$ and circled if $d_{i_{0}}=0$, and the remaining entries in $\Delta'^{\iota}({\mathfrak t})$ are
given the same decorations as in $\Delta^{\iota}({\mathfrak t})$.
\begin{lemma}\label{lm:II-delta=delta'}
If ${\mathfrak t}$ is a short pattern in Class II and $k({\mathfrak t})$ is strict, then $G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})$ and $G_{\Delta^{\iota}}({\mathfrak t})=G_{\Delta'^{\iota}}({\mathfrak t})$.
\end{lemma}
\begin{proof}
In order to prove $G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})$, it is sufficient to show that
\begin{equation}\label{eq:II-delta=delta'}
\begin{array}{lll}
&\gamma_{\Delta}({\mathfrak c}_{1,1})\gamma_{\Delta}({\mathfrak c}_{1,2})=\gamma_{\Delta'}({\mathfrak c}_{1,1})\gamma_{\Delta'}({\mathfrak c}_{1,2})&
(\text{when $i_{0}=1$});\\
&\gamma_{\Delta}({\mathfrak c}_{1,i_{0}+1})\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i_{0}-1})=\gamma_{\Delta'}({\mathfrak c}_{1,i_{0}+1})\gamma_{\Delta'}(\bar{{\mathfrak c}}_{1,i_{0}-1})&(\text{when $ i_{0}>1$}).
\end{array}
\end{equation}
If $d_{i_{0}}\neq 0$, these two entries have the same decorations in $\Delta({\mathfrak t})$ and $\Delta'({\mathfrak t})$, and Eqn.~\eqref{eq:II-delta=delta'} holds. If $d_{i_{0}}=0$ and $i_{0}=1$, then ${\mathfrak c}_{1,1}={\mathfrak c}_{1,2}$ and $\gamma_{\Delta}({\mathfrak c}_{1,1})\gamma_{\Delta}({\mathfrak c}_{1,2})=\gamma_{\Delta'}({\mathfrak c}_{1,1})\gamma_{\Delta'}({\mathfrak c}_{1,2})=q^{2{\mathfrak c}_{1,1}}(1-q^{-1})$
when $n$ divides ${\mathfrak c}_{1,1}$ and is zero otherwise, so again Eqn.~\eqref{eq:II-delta=delta'} is true.
If $d_{i_{0}}=0$ and $i_{0}>1$, then ${\mathfrak t}^{*}$ is not maximal. Thus, $G_{\Delta}({\mathfrak t})=G_{\Delta'}({\mathfrak t})=0$ unless $n$ divides $k_{1}$ and $k^{*}_{i_{0}}$. However, evaluating, we find that
$\gamma_{\Delta}({\mathfrak c}_{1,i_{0}+1})\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,i_{0}-1})=q^{{\mathfrak c}_{1,i_{0}+1}+\bar{{\mathfrak c}}_{1,i_{0}-1}}(1-q^{-1})$ when $n$ divides $\bar{{\mathfrak c}}_{1,i_{0}-1}$ and is zero otherwise,
while $\gamma_{\Delta'}({\mathfrak c}_{1,i_{0}+1})\gamma_{\Delta'}(\bar{{\mathfrak c}}_{1,i_{0}-1})$ gives the
same quantity when $n$ divides ${\mathfrak c}_{1,i_{0}+1}$ and zero otherwise.
Since ${\mathfrak c}_{1,i_{0}+1}=k_{1}-k^{*}_{i_{0}}$ and $\bar{{\mathfrak c}}_{1,i_{0}-1}=k^{*}_{i_{0}}$,
Eqn.~\eqref{eq:II-delta=delta'} follows.
We turn to the equality of
$G_{\Delta^{\iota}}({\mathfrak t})$ and $G_{\Delta'^{\iota}}({\mathfrak t})$. As above, we need only consider the case $d_{i_{0}}=0$. Then,
$\gamma_{\Delta^{\iota}}({\mathfrak c}(\Delta^\iota)_{1,i_0})\gamma_{\Delta^{\iota}}(\bar{{\mathfrak c}}(\Delta^\iota)_{1,i_0})=
q^{\iota({\mathfrak c}_{1,i_{0}+1})+\iota(\bar{{\mathfrak c}}_{1,i_{0}-1})}(1-q^{-1})$ when $n$
divides $\iota(\bar{{\mathfrak c}}_{1,i_{0}-1})$ and is zero otherwise and
$\gamma_{\Delta'^{\iota}}({\mathfrak c}(\Delta^{\iota})_{1,i_0})\gamma_{\Delta'^{\iota}}(\bar{{\mathfrak c}}(\Delta^\iota)_{1,i_0})$
gives the same quantity when $n$ divides $\iota({\mathfrak c}_{1,i_{0}+1})$ and zero otherwise.
Since $d_{i_{0}}=0$, as above $G_{\Delta^{\iota}}({\mathfrak t})=G_{\Delta'^{\iota}}({\mathfrak t})=0$ unless $n$ divides
$\bar{{\mathfrak c}}_{1,i_{0}-1}=k^{*}_{i_{0}}$ and ${\mathfrak c}_{1,i_{0}+1}=k_{1}-k^{*}_{i_{0}}$. In that case,
since $\iota(x)=N-k_{1}+x$, we see that $n$ divides $\iota(\bar{{\mathfrak c}}_{1,i_{0}-1})$ and $\iota({\mathfrak c}_{1,i_{0}+1})$. This implies the
desired equality.
\end{proof}
\begin{proposition}\label{pro:II}
Suppose that the weight $\bf k$ is in Class II and is strict. Then
$$
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta}({\mathfrak t})
\text{ and }
\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Gamma^{\iota}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}}G_{\Delta^{\iota}}({\mathfrak t}).
$$
\end{proposition}
\begin{proof}
By Lemmas~\ref{lm:II-gamma=gamma'}, \ref{lm:II-delta=delta'} and \ref{lm:divisibility}, we have
$$
G_{\Gamma}({\mathfrak t})=q^{i_{0}(k_{1}-k^{*}_{i_{0}})} (1-q^{-1})G_{\Gamma^{\flat}}({\mathfrak t}^{*})G_{\Gamma^{\iota}}({\mathfrak t}^{\sharp}),
$$
and
$$
G_{\Delta}({\mathfrak t})=q^{i_{0}(k_{1}-k^{*}_{i_{0}})} (1-q^{-1})G_{\Delta^{\flat}}({\mathfrak t}^{*})G_{\Delta^{\iota}}({\mathfrak t}^{\sharp}),
$$
when $n$ divides $k_{1}-k^{*}_{i_{0}}$, while both quantities are zero otherwise.
Thus by an inductive step similar to the proof of Proposition~\ref{pro:I}
we obtain $\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta}({\mathfrak t})$.
Next, we show that $\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma^{\iota}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta^{\iota}}({\mathfrak t})$.
When $n$ divides $k^{*}_{i_{0}}$, we have
$$
G_{\Gamma^{\iota}}({\mathfrak t})=\prod^{i_{0}-1}_{j=1}\gamma_{\Gamma}(\iota(c_{1,j}))\gamma_{\Gamma}(\iota(\bar{c}_{1,j}))\cdot \gamma_{\Gamma}(\iota(\bar{c}_{1,i_{0}}))\cdot \phi(p^{N-k^{*}_{i_{0}}})G_{\Gamma^{\iota}}({\mathfrak t}^{\sharp}),
$$
and
$$
G_{\Delta^{\iota}}({\mathfrak t})=q^{k^{*}_{i_{0}}}\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j})\cdot \gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_{0}})\cdot \phi(p^{N-k^{*}_{i_{0}}})G_{\Delta^{\iota}}({\mathfrak t}^{\sharp}),
$$
where the function $\iota$ arising in $G_{\Gamma^{\iota}}({\mathfrak t}^{\sharp})$ and $G_{\Delta^{\iota}}({\mathfrak t}^{\sharp})$ is given by $\iota(x)=N-k^{*}_{i_{0}}-k^{\sharp}_{1}+x$. Both quantities are zero if $n$ does not divide $k^{*}_{i_{0}}$.
If ${\mathfrak t}^{*}$ is maximal and $n$ divides $k^{*}_{i_{0}}$, then
$$
\prod^{i_{0}-1}_{j=1}\gamma_{\Gamma}(\iota(c_{1,j}))\gamma_{\Gamma}(\iota(\bar{c}_{1,j}))\cdot \gamma_{\Gamma}(\iota(\bar{c}_{1,i_{0}}))=q^{k^{*}_{i_{0}}}\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j})\cdot \gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_0}).
$$
If ${\mathfrak t}^{*}$ is non-maximal, then $G_{\Gamma^{\iota}}({\mathfrak t})=G_{\Delta^{\iota}}({\mathfrak t})=0$ unless
$n$ divides $k_{1}$. When $n$ divides $k_{1}$,
since $\iota(c_{1,j})=N-2k^{*}_{i_{0}}+c^{*}_{1,j}$ for $1\leq j<i_{0}$ and
$\iota(\bar{c}_{1,j})=N-k_{1}+\bar{c}_{1,j}$ for $1\leq j\leq i_{0}$, we have
$$
\prod^{i_{0}-1}_{j=1}\gamma_{\Gamma}(\iota(c_{1,j}))=q^{(i_{0}-1)(N-2k^{*}_{i_{0}})}\prod^{i_{0}-1}_{j=1}\gamma_{\Gamma}(c^{*}_{1,j})
\text{ and }
\prod^{i_{0}}_{j=1}\gamma_{\Gamma}(\iota(\bar{c}_{1,j}))=q^{i_{0}(N-k_{1})}\prod^{i_{0}}_{j=1}\gamma_{\Gamma}(\bar{c}^{*}_{1,j}).
$$
Similarly,
since ${\mathfrak c}(\Delta^{\iota})_{1,j}=N-2k^{*}_{i_{0}}+{\mathfrak c}^{*}_{1,j+1}$ for $1\leq j<i_{0}$ and $\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j}=N-k_{1}+\bar{{\mathfrak c}}_{1,j-1}$ for $1\leq j\leq i_{0}$, we obtain
\begin{equation}\label{eq:delta-t*-bar-II}
\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})=q^{(i_{0}-1)(N-2k^{*}_{i_{0}})}\prod^{i_{0}}_{j=2}\gamma_{\Delta}({\mathfrak c}^{*}_{1,j})
\end{equation}
and
\begin{equation}\label{eq:delta-t*-II}
\prod^{i_{0}}_{j=1}\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j})=q^{(i_{0}-1)(N-k_{1})}\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}(\bar{{\mathfrak c}}^{*}_{1,j})\cdot q^{N-k_{1}-2k^{*}_{i_{0}}}\gamma_{\Delta}({\mathfrak c}^{*}_{1,1}).
\end{equation}
By~\eqref{eq:delta-t*-bar-II} and \eqref{eq:delta-t*-II},
\begin{align*}
q^{k^{*}_{i_{0}}}\prod^{i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}(\Delta^{\iota})_{1,j})\gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,j})\cdot
& \gamma_{\Delta}(\bar{{\mathfrak c}}(\Delta^{\iota})_{1,i_0})\\
=&q^{i_{0}(N-k_{1})+(i_{0}-1)(N-2k^{*}_{i_{0}})}\cdot q^{-k^{*}_{i_{0}}}\prod^{2i_{0}-1}_{j=1}\gamma_{\Delta}({\mathfrak c}^{*}_{1,j})\\
=&q^{i_{0}(N-k_{1})+(i_{0}-1)(N-2k^{*}_{i_{0}})}G_{\Delta}({\mathfrak t}^{*}).
\end{align*}
Hence,
$$
G_{\Gamma^{\iota}}({\mathfrak t})=q^{(i_{0}-1)(N-2k^{*}_{i_{0}})+i_{0}(N-k_{1})+(N-k^{*}_{i_{0}})}(1-q^{-1})G_{\Gamma}({\mathfrak t}^{*})\, G_{\Gamma^{\iota}}({\mathfrak t}^{\sharp})
$$
and
$$
G_{\Delta^{\iota}}({\mathfrak t})=q^{(i_{0}-1)(N-2k^{*}_{i_0})+i_{0}(N-k_{1})+(N-k^{*}_{i_{0}})}(1-q^{-1})G_{\Delta}({\mathfrak t}^{*})\,G_{\Delta^{\iota}}({\mathfrak t}^{\sharp}).
$$
Since $G_{\Delta}({\mathfrak t}^{*})=q^{(i_{0}-1)k^{*}_{i_{0}}}G_{\Delta^{\flat}}({\mathfrak t}^{*})$ and $G_{\Gamma}({\mathfrak t}^{*})=q^{(i_{0}-1)k^{*}_{i_{0}}}G_{\Gamma^{\flat}}({\mathfrak t}^{*})$,
we obtain
$$
G_{\Gamma^{\iota}}({\mathfrak t})=q^{i_{0}(2N-k_{1}-k^{*}_{i_{0}})}(1-q^{-1})\,G_{\Gamma^{\flat}}({\mathfrak t}^{*})\,
G_{\Gamma^{\iota}}({\mathfrak t}^{\sharp}),
$$
and
$$
G_{\Delta^{\iota}}({\mathfrak t})=q^{i_{0}(2N-k_{1}-k^{*}_{i_{0}})}(1-q^{-1})\,G_{\Delta^{\flat}}({\mathfrak t}^{*})\,G_{\Delta^{\iota}}({\mathfrak t}^{\sharp}).
$$
Using the above equalities and arguing via induction as in the Class I case,
we have $\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Gamma^{\iota}}({\mathfrak t})=\sum_{{\mathfrak t}\in{\mathfrak {S}}_{\bf k}(\mu)}G_{\Delta^{\iota}}({\mathfrak t})$.
\end{proof}
Combining Propositions \ref{pro:resonant}, \ref{pro:I} and \ref{pro:II}, we conclude that Statement~A is true.
This completes the proof of Theorem~\ref{thm:main}.
\qed
\section{A Crystal Graph Description For Even Degree Covers}\label{sec:even}
In this Section, we will establish an inductive formula for the
contributions at powers of $p$ to the Whittaker coefficients in the case of even degree covers.
The formula is based on sums over certain BZL-patterns as in Section~\ref{sect62}, but with different decoration rules.
This then gives an inductive crystal graph description of the coefficients in the even degree cover case.
Recall that $\varepsilon_{i}=1$ for $1\leq i\leq r-1$ and $\varepsilon_{r}=2$.
Given an $r$-tuple $\bf k$ of non-negative integers,
we associate a graph as follows: Each $k_{i}$ is a vertex; $k_{i}$ and $k_{j}$ are connected if and
only if $j=i+1$ and $\varepsilon_{i}k_{i}=\varepsilon_{j}k_{j}$.
Let ${\mathcal {E}}=(k_{i_{1}},k_{i_{1}+1},\dots,k_{i_{2}})$ be a subsequence of ${\bf k}$,
consisting of all vertices of a connected component of the graph. We call such ${\mathcal {E}}$ a connected component of ${\bf k}$.
Define $\ell_{{\mathcal {E}}}=i_{1}$ and $r_{{\mathcal {E}}}=i_{2}$.
Separating into connected components gives a disjoint partition of ${\bf k}$,
\begin{equation}
{\bf k}=({\mathcal {E}}_{1},{\mathcal {E}}_{2},\cdots,{\mathcal {E}}_{h}),
\end{equation}
ordered by $\ell_{{\mathcal {E}}_{i+1}}=r_{{\mathcal {E}}_{i}}+1$ for $1\leq i<h$.
For each connected component ${\mathcal {E}}$, the entries $k_{i}$
satisfy
$\varepsilon_{i}k_{i}=\varepsilon_{j}k_{j}$ for all $\ell_{{\mathcal {E}}}\leq i,j\leq r_{{\mathcal {E}}}$; either $\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}\ne \varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$ or $\ell_{{\mathcal {E}}}=1$; and either $\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}\ne \varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ or $r_{{\mathcal {E}}}=r$.
If $\ell_{{\mathcal {E}}}=1$, let $a_{{\mathcal {E}}}=0$, and if $r_{{\mathcal {E}}}=r$, let $b_{{\mathcal {E}}}=0$. Otherwise let
$a_{{\mathcal {E}}}=|\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}-\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}|$ and $b_{{\mathcal {E}}}=|\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}|$.
Let $\mu$, as in Section~\ref{sect61}, be the $r$-tuple of positive integers obtained from $\bf m$. Then $\mu$
gives the highest weight of the crystal graph whose BZL patterns we shall employ.
If $r_{{\mathcal {E}}}\neq \ell_{{\mathcal {E}}}$, define $\mu({\mathcal {E}})=(\mu({\mathcal {E}})_{1},\mu({\mathcal {E}})_{2},\dots,\mu({\mathcal {E}})_{r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1})$ by
specifying that $\mu({\mathcal {E}})_{i}=\mu_{r-r_{{\mathcal {E}}}+i}$ for all $1<i< r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1$ and
that
$$
\mu({\mathcal {E}})_1=\begin{cases}\mu_{r+1-r_{{\mathcal {E}}}}-b_{{\mathcal {E}}}&\text{if
$\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}>\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$}\\
\mu_{r+1-r_{{\mathcal {E}}}}&\text{if $\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}<\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$}
\end{cases}
$$
$$\mu({\mathcal {E}})_{r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}=\begin{cases}\mu_{r+1-\ell_{{\mathcal {E}}}}&\text{if
$\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}>\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$}\\
\mu_{r+1-\ell_{{\mathcal {E}}}}-a_{{\mathcal {E}}}&\text{if
$\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}<\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$.}
\end{cases}
$$
When $r_{{\mathcal {E}}}=\ell_{{\mathcal {E}}}$, define
$$
\mu({\mathcal {E}})_{1}=\begin{cases}
\mu_{r+1-r_{{\mathcal {E}}}}-b_{{\mathcal {E}}} &\text{if
$\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}>\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ and $\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}>\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$}\\
\mu_{r+1-r_{{\mathcal {E}}}}-b_{{\mathcal {E}}}-a_{{\mathcal {E}}} &\text{if
$\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}>\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ and $\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}<\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$}\\
\mu_{r+1-r_{{\mathcal {E}}}} &\text{if $\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}<\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ and $\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}>\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$}\\
\mu_{r+1-r_{{\mathcal {E}}}}-a_{{\mathcal {E}}} &\text{if $\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}<\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ and $\varepsilon_{\ell_{{\mathcal {E}}}-1}k_{\ell_{{\mathcal {E}}}-1}<\varepsilon_{\ell_{{\mathcal {E}}}}k_{\ell_{{\mathcal {E}}}}$.}
\end{cases}
$$
The set $BZL_{1}(\mu({\mathcal {E}}))$ consists of short patterns of length $r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1$
satisfying (\ref{ineq:BZL-d}) with $\mu$ replaced by $\mu({\mathcal {E}})$. Given a weight vector ${\bf v}$ in ${\mathbb Z}^{r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}_{\geq 0}$,
the subset ${\mathfrak {S}}_{\bf v}(\mu({\mathcal {E}}))$ of $BZL_{1}(\mu({\mathcal {E}}))$ consists of short patterns of weight $\bf v$.
Set
$$
\Xi_{\bf k}=\cpair{{\bf x}=(x_{1},x_{2},\cdots,x_{h})\in{\mathbb Z}^{h}_{\geq 0}\left.\right|~
\sum^{h}_{i=1}x_{i}+\sum^{h-1}_{i=1}\frac{b_{{\mathcal {E}}_{i}}-\varepsilon_{r_{{\mathcal {E}}_{i}}}k_{r_{{\mathcal {E}}_{i}}}+\varepsilon_{r_{{\mathcal {E}}_{i}}+1}k_{r_{{\mathcal {E}}_{i}}+1}}{2}=k_{r}}.
$$
For ${\bf x}\in \Xi_{\bf k}$ and $1\leq i\leq h$, let ${\bf k}({\mathcal {E}}_{i},{\bf x})=(2x_{i},2x_{i},\dots,2x_{i},x_{i})$ be a totally resonant weight vector in ${\mathbb Z}^{r_{{\mathcal {E}}_{i}}-\ell_{{\mathcal {E}}_{i}}+1}_{\geq 0}$. Define a map
$$
\Psi_{\bf k}\colon {\mathfrak {S}}_{\bf k}\to \bigcup_{{\bf x}\in\Xi_{\bf k}}{\mathfrak {S}}_{{\bf k}({\mathcal {E}}_{1},{\bf x})}(\mu({\mathcal {E}}_{1}))\times
{\mathfrak {S}}_{{\bf k}({\mathcal {E}}_{2},{\bf x})}(\mu({\mathcal {E}}_{2}))\times\cdots\times{\mathfrak {S}}_{{\bf k}({\mathcal {E}}_{h},{\bf x})}(\mu({\mathcal {E}}_{h})),
$$
by $\Psi_{\bf k}({\mathfrak t})=({\mathfrak t}({\mathcal {E}}_{1}),{\mathfrak t}({\mathcal {E}}_{2}),\dots,{\mathfrak t}({\mathcal {E}}_{h}))$, where if ${\mathfrak t}=(d_1,\dots,d_{2r-1})$ then
\begin{equation}
{\mathfrak t}({\mathcal {E}}_{i})=
(d_{\ell_{{\mathcal {E}}_{i}}},d_{\ell_{{\mathcal {E}}_{i}}+1},\dots,d_{r_{{\mathcal {E}}_{i}}-1}, d'_{r_{{\mathcal {E}}_{i}}},d_{2r-r_{{\mathcal {E}}_{i}}+1},\dots,d_{2r-\ell_{{\mathcal {E}}_{i}}-1},d_{2r-\ell_{{\mathcal {E}}_{i}}}).
\end{equation}
For example, if $\ell_{{\mathcal {E}}}=r_{{\mathcal {E}}}=1$, then ${\mathfrak t}({\mathcal {E}})=(d'_{1})$.
We will also write ${\mathfrak t}({\mathcal {E}}_{i})=(d({\mathcal {E}}_{i})_{1},d({\mathcal {E}}_{i})_{2},\dots,d({\mathcal {E}}_{i})_{2(r_{{\mathcal {E}}_{i}}-\ell_{{\mathcal {E}}_{i}})+1})$;
note that ${\mathfrak t}({\mathcal {E}}_i)$ is a totally resonant short pattern in $BZL_{1}(\mu({\mathcal {E}}_{i}))$.
\begin{lemma}
The map $\Psi_{\bf k}$ is a bijection from ${\mathfrak {S}}_{\bf k}$ to
$$
\bigcup_{{\bf x}\in\Xi_{\bf k}}{\mathfrak {S}}_{{\bf k}({\mathcal {E}}_{1},{\bf x})}(\mu({\mathcal {E}}_{1}))\times{\mathfrak {S}}_{{\bf k}({\mathcal {E}}_{2},{\bf x})}(\mu({\mathcal {E}}_{2}))\times
\cdots\times{\mathfrak {S}}_{{\bf k}({\mathcal {E}}_{h},{\bf x})}(\mu({\mathcal {E}}_{h})).
$$
\end{lemma}
\begin{proof}
One may verify that the map $\Psi_{\bf k}$ is bijective directly from the definitions.
\end{proof}
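Though not part of the original argument, the combinatorial shape of the index set $\Xi_{\bf k}$ may be worth making concrete: once the correction terms $(b_{{\mathcal {E}}_{i}}-\varepsilon_{r_{{\mathcal {E}}_{i}}}k_{r_{{\mathcal {E}}_{i}}}+\varepsilon_{r_{{\mathcal {E}}_{i}}+1}k_{r_{{\mathcal {E}}_{i}}+1})/2$ are fixed, $\Xi_{\bf k}$ is simply the set of weak compositions of the remaining total into $h$ parts. The following Python sketch (with hypothetical values standing in for the correction terms) enumerates it.

```python
def weak_compositions(total, parts):
    # All tuples of `parts` non-negative integers summing to `total`.
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in weak_compositions(total - first, parts - 1):
            yield (first,) + rest

# Hypothetical data: h = 3 components, k_r = 4, and correction terms
# summing to 1 (these stand in for the quantities defined in the text).
h, k_r, corrections = 3, 4, [1, 0]
Xi = list(weak_compositions(k_r - sum(corrections), h))
print(len(Xi))  # weak compositions of 3 into 3 parts: C(3+2,2) = 10
```

In particular, $\Xi_{\bf k}$ is always finite, which is implicitly used when $\Psi_{\bf k}$ is defined over the union indexed by $\Xi_{\bf k}$.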
For each ${\mathcal {E}}$, define an ordered subset of $\Gamma({\mathfrak t})$ of length $2(r_{\mathcal {E}}-\ell_{\mathcal {E}})+1$
$$
\Gamma_{{\mathfrak t}}({\mathcal {E}})=(c({\mathcal {E}})_{1,1},c({\mathcal {E}})_{1,2},\dots,c({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1},
\bar{c}({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}},\dots,\bar{c}({\mathcal {E}})_{1,2},\bar{c}({\mathcal {E}})_{1,1})
$$
with
$c({\mathcal {E}})_{1,i}=c_{1,\ell_{{\mathcal {E}}}-1+i}$ for $1\leq i\leq r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}$, $\bar{c}({\mathcal {E}})_{1,i}=\bar{c}_{1,\ell_{{\mathcal {E}}}-1+i}$ for $1\leq i\leq r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}$, and
$$
c({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}=\begin{cases}
c_{1,r_{{\mathcal {E}}}}&\text{ if }\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}>\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1} \text{ and } r_{{\mathcal {E}}}\ne r,\\
\bar{c}_{1,r_{{\mathcal {E}}}}&\text{ if } \varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}<\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}\text{ and } r_{{\mathcal {E}}}\ne r,\\
c_{1,r} & \text{ if } r_{{\mathcal {E}}}=r.
\end{cases}
$$
By convention, we write $c({\mathcal {E}})_{1,2(r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1)-i}=\bar{c}({\mathcal {E}})_{1,i}$ for $1\leq i\leq r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1$.
We shall decorate $\Gamma_{{\mathfrak t}}({\mathcal {E}})$ using the following circling and boxing rules.
If $\mu_{r+1-r_{{\mathcal {E}}}}=\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$
then the entry $c_{1,r_{{\mathcal {E}}}}$ is boxed and $\bar{c}_{1,r_{{\mathcal {E}}}}$ is circled.
Note that in this case, if {\bf k} is strict then $r_{{\mathcal {E}}}=\ell_{{\mathcal {E}}}<r$,
$d_{r_{{\mathcal {E}}}}=\mu_{r+1-r_{{\mathcal {E}}}}$ and $d_{2r-r_{{\mathcal {E}}}}=0$.
If $\mu_{r+1-r_{{\mathcal {E}}}}\ne \varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$,
we decorate $\Gamma_{{\mathfrak t}}({\mathcal {E}})$ as follows.
For $1\leq i\leq r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}$,
the entry $c({\mathcal {E}})_{1,i}$ is circled if $d({\mathcal {E}})_{i+1}=0$, and the entry
$\bar{c}({\mathcal {E}})_{1,i}$ is circled if $d({\mathcal {E}})_{i}=0$.
The entry $c({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}$ is circled if $d({\mathcal {E}})_{1}=0$.
For $1\leq i\leq r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}$,
the entries $c({\mathcal {E}})_{1,i}$ and $\bar{c}({\mathcal {E}})_{1,i}$ are both boxed if $d({\mathcal {E}})_{i+1}=\mu({\mathcal {E}})_{r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1-i}$,
and the entry $c({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}$ is boxed if $d({\mathcal {E}})_{1}=\mu({\mathcal {E}})_{r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}$.
If the graph attached to $\bf k$ earlier in this Section
has $h$ components, then the union of the subsets $\Gamma_{{\mathfrak t}}({\mathcal {E}}_i)$, $1\leq i\leq h$,
omits $h-1$ entries from $\Gamma({\mathfrak t})$. Accordingly,
for ${\mathcal {E}}={\mathcal {E}}_i$ with $r_{{\mathcal {E}}}<r$, define
\begin{equation}\label{eq:c-CE}
c_{{\mathcal {E}}}=\begin{cases}
\bar{c}_{1,r_{{\mathcal {E}}}}&\text{ if }\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}>\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1},\\
c_{1,r_{{\mathcal {E}}}}&\text{ if } \varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}<\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}.
\end{cases}
\end{equation}
No entry $c_{{\mathcal {E}}_{i}}$ is either boxed or circled.
Define $G_{\Gamma_{{\mathfrak t}}({\mathcal {E}})}$ to be
$$
\begin{cases}
\prod^{2(r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}})+1}_{i=1}\gamma_{\Gamma}(c({\mathcal {E}})_{1,i})\cdot q^{c_{{\mathcal {E}}}}(1-q^{-1}) &\text{ if $\mu_{r+1-r_{{\mathcal {E}}}}\ne \varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ and $r_{{\mathcal {E}}}<r$,}\\
\prod^{2(r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}})+1}_{i=1}\gamma_{\Gamma}(c({\mathcal {E}})_{1,i}) &\text{ if $r_{{\mathcal {E}}}=r$,}\\
\gamma_{\Gamma}(c_{1,r_{{\mathcal {E}}}})\gamma_{\Gamma}(\bar{c}_{1,r_{{\mathcal {E}}}})& \text{ if $\mu_{r+1-r_{{\mathcal {E}}}}=\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$.}
\end{cases}
$$
Here the factors $\gamma_{\Gamma}(\cdot)$ are computed using the decoration rule above.
We emphasize that this decoration rule is not the same as that defined in Section~\ref{sect62}.
The contributions for the entries $c_{{\mathcal {E}}_{i}}$ are handled separately as shown.
Using this, we construct a new weight function
$G_\Psi({\mathfrak t})$ by defining
\begin{equation}\label{eq:Psi}
G_{\Psi}({\mathfrak t})=\prod^{h}_{i=1}G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}
\end{equation}
when $k({\mathfrak t})$ is strict and $n$ divides $c_{{\mathcal {E}}_{i}}$ for $1\leq i<h$, and $G_{\Psi}({\mathfrak t})=0$ otherwise.
Now we introduce a new inductive formula for the $p$-power contributions to the Whittaker coefficients.
As we shall see, this formula is valid uniformly for covers of all degrees. If $r=1$, define
$H_{\daleth}(p^{k};p^{m})=H(p^{k};p^{m})$. If $r>1$, define $H_{\daleth}(p^{k};p^{m})$ by the inductive formula
$$
H_{\daleth}(p^{\bf k};p^{\bf m})=\sum_{\substack{{\bf k}', {\bf k}''\\ {\bf k}'+(0,{\bf k}'')={\bf k}}}\,\,
\sum_{\substack{{\mathfrak t}\in BZL_{1}(\mu)\\ k({\mathfrak t})={\bf k}'}}G_{\Psi}({\mathfrak t})\,
H_{\daleth}(p^{{\bf k}''};p^{{\bf m}'}),
$$
where the outer sum is over tuples ${\bf k'}=(k_1',\dots,k_r')$ and ${\bf k''}=(k''_1,\dots,k_{r-1}'')$ of non-negative integers
such that ${\bf k}'+(0,{\bf k}'')={\bf k}$ and where
$${\bf m}'=(m_{2}+k'_{1}+k'_{3}-2k'_{2},\dots,m_{r-1}+k'_{r-2}+2k'_{r}-2k'_{r-1},m_{r}+k'_{r-1}-2k'_{r}).$$
In addition, if ${\bf m}$ is not in ${\mathbb Z}^{r}_{\geq 0}$, we define $H_{\daleth}(p^{\bf k};p^{\bf m})=0$.
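The outer sum in the inductive formula runs over a finite index set whose shape is easy to make explicit. The following Python sketch (illustration only; it enumerates just the splittings ${\bf k}={\bf k}'+(0,{\bf k}'')$, not the weights $G_{\Psi}$) lists them; there are $\prod_{i=2}^{r}(k_{i}+1)$ in all, since $k'_{1}=k_{1}$ is forced and $k''_{i-1}$ ranges over $0,\dots,k_{i}$ for $2\leq i\leq r$.

```python
from itertools import product

def splittings(k):
    # Enumerate pairs (k', k'') of non-negative integer tuples with
    # len(k') = r, len(k'') = r - 1, and k' + (0, k'') = k, i.e.
    # k'_1 = k_1 and k'_i + k''_{i-1} = k_i for 2 <= i <= r.
    r = len(k)
    for tail in product(*[range(k[i] + 1) for i in range(1, r)]):
        kp = (k[0],) + tuple(k[i + 1] - tail[i] for i in range(r - 1))
        yield kp, tail

pairs = list(splittings((2, 1, 3)))
print(len(pairs))  # (1+1)*(3+1) = 8
```

This finiteness is what makes the recursion for $H_{\daleth}$ well defined at each step.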
\begin{theorem}\label{theorem-general} Let $\bf k$ and $\bf m$ be $r$-tuples of non-negative integers. Then
$$
H_{\daleth}(p^{\bf k};p^{\bf m})=H(p^{\bf k};p^{\bf m}).
$$
\end{theorem}
\begin{proof}
In the definition of $H_{\daleth}(p^{\bf k};p^{\bf m})$ and in the expression for
$H(p^{\bf k};p^{\bf m})$ given in Corollary~\ref{thm:H}, if ${\bf k'}$ is non-strict, then $G_{\Delta}({\mathfrak t})$ and $G_{\Psi}({\mathfrak t})$ both vanish, so such $\bf k'$ do not contribute to the coefficients.
Hence, it is sufficient to prove that for a short pattern ${\mathfrak t}$ of strict weight ${\bf k'}=k({\mathfrak t})$, the equality
\begin{equation}\label{eq:Psi=Delta}
G_{\Psi}({\mathfrak t})=G_{\Delta}({\mathfrak t})
\end{equation}
holds.
If $h=1$, the graph of ${\bf k'}$ is connected and the decorated array $\Gamma_{{\mathfrak t}}({\mathcal {E}}_{1})$ is the same as the decorated array $\Delta'({\mathfrak t})$ defined in Section~\ref{sec:resonant}. In this case, Eqn.~\eqref{eq:Psi=Delta} is proved in Lemma~\ref{lm:Delta=Delta'}.
Suppose that $h>1$. As in the earlier constructions, we move from $\Gamma({\mathfrak t})$ to $\Delta({\mathfrak t})$,
and, similarly to the construction earlier in this section, we introduce a new decoration rule for $\Delta({\mathfrak t})$. To this end,
for each ${\mathcal {E}}$, define an ordered subset of $\Delta({\mathfrak t})$ of length $2(r_{\mathcal {E}}-\ell_{\mathcal {E}})+1$
$$
\Delta_{{\mathfrak t}}({\mathcal {E}})=({\mathfrak c}({\mathcal {E}})_{1,1},{\mathfrak c}({\mathcal {E}})_{1,2},\dots,{\mathfrak c}({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}},{\mathfrak c}({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1},\bar{{\mathfrak c}}({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}},\dots,\bar{{\mathfrak c}}({\mathcal {E}})_{1,2},\bar{{\mathfrak c}}({\mathcal {E}})_{1,1}).
$$
Here the entries ${\mathfrak c}({\mathcal {E}})_{1,i}$ are given as follows:
\begin{enumerate}
\item If $\ell_{{\mathcal {E}}}=1$, or if $\ell_{\mathcal {E}}>1$ and $d_{\ell_{{\mathcal {E}}}-1}>d_{2r-\ell_{{\mathcal {E}}}+1}$, then
$$
\Delta_{{\mathfrak t}}({\mathcal {E}})=({\mathfrak c}_{1,\ell_{{\mathcal {E}}}},{\mathfrak c}_{1,\ell_{{\mathcal {E}}}+1},\dots,{\mathfrak c}_{1,r_{{\mathcal {E}}}},\bar{{\mathfrak c}}_{1,r_{{\mathcal {E}}}-1},\dots,\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}+1},\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}});
$$
\item If $\ell_{\mathcal {E}}>1$ and $d_{\ell_{{\mathcal {E}}}-1}<d_{2r-\ell_{{\mathcal {E}}}+1}$, then
$$
\Delta_{{\mathfrak t}}({\mathcal {E}})=({\mathfrak c}_{1,\ell_{{\mathcal {E}}}+1},{\mathfrak c}_{1,\ell_{{\mathcal {E}}}+2},\dots,{\mathfrak c}_{1,r_{{\mathcal {E}}}},\bar{{\mathfrak c}}_{1,r_{{\mathcal {E}}}-1},\dots,\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}},\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}-1}).
$$
\end{enumerate}
For example, if $\ell_{{\mathcal {E}}}=r_{{\mathcal {E}}}=1$, then $\Delta_{{\mathfrak t}}({\mathcal {E}})=({\mathfrak c}_{1,1})$.
We now introduce a new decoration rule for $\Delta_{{\mathfrak t}}({\mathcal {E}})$.
If $r_{{\mathcal {E}}}=\ell_{{\mathcal {E}}}$ and $\mu_{r+1-r_{{\mathcal {E}}}}=\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$,
then the entry ${\mathfrak c}_{1,r_{{\mathcal {E}}}}$ is boxed and the entry
$\bar{{\mathfrak c}}_{1,r_{{\mathcal {E}}}}$ is circled.
If instead ${\mathcal {E}}$ satisfies $r_{{\mathcal {E}}}\ne \ell_{{\mathcal {E}}}$ or $\mu_{r+1-r_{{\mathcal {E}}}}\ne \varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$, the rule is as follows.
For $\ell_{{\mathcal {E}}}\leq i\leq r_{{\mathcal {E}}}-1$,
the entry ${\mathfrak c}_{1,i+1}$ is circled if $d_{i+1}=0$; for $\ell_{{\mathcal {E}}}\leq i\leq r_{{\mathcal {E}}}-1$, the entry $\bar{{\mathfrak c}}_{1,i}$ is circled if
$d_{i}=0$. This specifies the circling criteria for all but one entry
of $\Delta_{{\mathfrak t}}({\mathcal {E}})$; the remaining entry, either ${\mathfrak c}_{1,\ell_{{\mathcal {E}}}}$ or $\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}-1}$,
is circled if $d_{\ell_{{\mathcal {E}}}}=0$. The boxing rules for $\Delta_{{\mathfrak t}}({\mathcal {E}})$
are those already assigned as entries in $\Delta({\mathfrak t})$. Note that we have only changed the circling rules in giving this new decoration.
Similarly to $c_{{\mathcal {E}}}$ defined in~\eqref{eq:c-CE}, we also define, for $\ell_{{\mathcal {E}}}>1$,
$$
{\mathfrak c}_{{\mathcal {E}}}=\begin{cases}
\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}-1}&\text{ if } d_{\ell_{{\mathcal {E}}}-1}>d_{2r-\ell_{{\mathcal {E}}}+1},\\
{\mathfrak c}_{1,\ell_{{\mathcal {E}}}}&\text{ if } d_{\ell_{{\mathcal {E}}}-1}<d_{2r-\ell_{{\mathcal {E}}}+1}.
\end{cases}
$$
Since $r_{{\mathcal {E}}_{i}}=\ell_{{\mathcal {E}}_{i+1}}-1$ for all $1\leq i<h$, using the definitions of ${\mathfrak c}_{1,j}$ and $c_{1,j}$
it is easy to check that $c_{{\mathcal {E}}_{i}}={\mathfrak c}_{{\mathcal {E}}_{i+1}}$. The entries ${\mathfrak c}_{{\mathcal {E}}}$
are neither boxed nor circled.
Using this new decoration rule, define the weight function $G_{\Delta_{{\mathfrak t}}({\mathcal {E}})}$ to be
$$
\begin{cases}
\prod^{2(r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}})+1}_{j=1}\gamma_{\Delta}({\mathfrak c}({\mathcal {E}})_{1,j})
\cdot q^{{\mathfrak c}_{{\mathcal {E}}}}(1-q^{-1})&\text{ if $\mu_{r+1-r_{{\mathcal {E}}}}\ne \varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$ and $r_{{\mathcal {E}}}<r$,}\\
\prod^{2(r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}})+1}_{j=1}\gamma_{\Delta}({\mathfrak c}({\mathcal {E}})_{1,j})&\text{ if $r_{{\mathcal {E}}}=r$,}\\
\gamma_{\Delta}({\mathfrak c}_{1,r_{{\mathcal {E}}}})\gamma_{\Delta}(\bar{{\mathfrak c}}_{1,r_{{\mathcal {E}}}})& \text{ if $\mu_{r+1-r_{{\mathcal {E}}}}=\varepsilon_{r_{{\mathcal {E}}}}k_{r_{{\mathcal {E}}}}-\varepsilon_{r_{{\mathcal {E}}}+1}k_{r_{{\mathcal {E}}}+1}$.}
\end{cases}
$$
Applying an argument similar to the proofs of Lemmas~\ref{I:delta=delta'} and \ref{lm:II-delta=delta'},
one sees that if $k({\mathfrak t})$ is strict then
\begin{equation}\label{eq:Psi-Delta}
G_{\Delta}({\mathfrak t})=q^{-\sum^{2r-1}_{j=r}d_{j}}\prod^{h}_{i=1}G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{i})}
\end{equation}
when $n$ divides $c_{{\mathcal {E}}_{i}}$ for $1\leq i< h$, and $G_{\Delta}({\mathfrak t})=0$ otherwise.
If $\mu_{r+1-r_{{\mathcal {E}}_{i}}}=\varepsilon_{r_{{\mathcal {E}}_{i}}}k_{r_{{\mathcal {E}}_{i}}}-\varepsilon_{r_{{\mathcal {E}}_{i}}+1}k_{r_{{\mathcal {E}}_{i}}+1}$ for some $i$, then $d_{r_{{\mathcal {E}}_{i}}}=\mu_{r+1-r_{{\mathcal {E}}_{i}}}$, $d_{2r-r_{{\mathcal {E}}_{i}}}=0$ and ${\mathfrak c}_{1,r_{{\mathcal {E}}_{i}}}=c_{1,r_{{\mathcal {E}}_{i}}}$, $\bar{{\mathfrak c}}_{1,r_{{\mathcal {E}}_{i}}}=\bar{c}_{1,r_{{\mathcal {E}}_{i}}}$. Thus $G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}=G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{i})}$ and they vanish when $n$ does not divide $\bar{c}_{1,r_{{\mathcal {E}}_{i}}}$.
Recall that $c_{{\mathcal {E}}_{i}}={\mathfrak c}_{{\mathcal {E}}_{i+1}}$ for $1\leq i<h$.
By \eqref{eq:Psi} and \eqref{eq:Psi-Delta}, $G_{\Delta}({\mathfrak t})$ and $G_{\Psi}({\mathfrak t})$ vanish unless $n$ divides $c_{{\mathcal {E}}_{i}}$ for all $1\leq i< h$. Thus, we only need to consider the case that $n$ divides $c_{{\mathcal {E}}_{i}}$ for all $1\leq i< h$.
And in that case, we must establish a relation between the $G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{i})}$
and the $G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}$.
Since $d_{i}=d_{2r-i}$ for all $\ell_{{\mathcal {E}}}\leq i\leq r_{{\mathcal {E}}}-1$, we have $\bar{{\mathfrak c}}_{1,i}=\bar{c}_{1,i}$ and
${\mathfrak c}_{1,i+1}=c_{1,i}$ for $\ell_{{\mathcal {E}}}\leq i\leq r_{{\mathcal {E}}}-1$.
Also, for such $i$
the entries $\bar{{\mathfrak c}}_{1,i}$ and $\bar{c}_{1,i}$, resp.\ ${\mathfrak c}_{1,i+1}$ and $c_{1,i}$, have the
same decorations in the decorated arrays $\Gamma_{{\mathfrak t}}({\mathcal {E}})$ and $\Delta_{{\mathfrak t}}({\mathcal {E}})$.
Hence, there is only one entry left to compare in $\Gamma_{{\mathfrak t}}({\mathcal {E}})$ and $\Delta_{{\mathfrak t}}({\mathcal {E}})$. We
denote this entry by ${\mathfrak c}^{-}_{{\mathcal {E}}}$ and $c^{-}_{{\mathcal {E}}}$, respectively. Specifically,
$$
{\mathfrak c}^{-}_{{\mathcal {E}}}=\begin{cases}
\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}}-1}&\text{ if } d_{\ell_{{\mathcal {E}}}-1}<d_{2r-\ell_{{\mathcal {E}}}+1},\\
{\mathfrak c}_{1,\ell_{{\mathcal {E}}}}&\text{ if $d_{\ell_{{\mathcal {E}}}-1}>d_{2r-\ell_{{\mathcal {E}}}+1}$ or $\ell_{\mathcal {E}}=1$,}
\end{cases}
\text{ and }
c^{-}_{{\mathcal {E}}}=c({\mathcal {E}})_{1,r_{{\mathcal {E}}}-\ell_{{\mathcal {E}}}+1}.
$$
In addition, by definition, for $1<i<h$
$$
{\mathfrak c}_{{\mathcal {E}}_{i}}+{\mathfrak c}^{-}_{{\mathcal {E}}_{i}}={\mathfrak c}_{1,\ell_{{\mathcal {E}}_{i}}}+\bar{{\mathfrak c}}_{1,\ell_{{\mathcal {E}}_{i}}-1}=k_{\ell_{{\mathcal {E}}_{i}}}({\mathfrak t})=c_{1,r_{{\mathcal {E}}_{i}}}+\bar{c}_{1,r_{{\mathcal {E}}_{i}}}=c_{{\mathcal {E}}_{i}}+c^{-}_{{\mathcal {E}}_{i}}=k_{r_{{\mathcal {E}}_{i}}}({\mathfrak t}).
$$
Since we are in the case that $n$ divides all $c_{{\mathcal {E}}_{i}}$,
$n$ divides ${\mathfrak c}^{-}_{{\mathcal {E}}_{i}}-c^{-}_{{\mathcal {E}}_{i}}=c_{{\mathcal {E}}_{i}}-{\mathfrak c}_{{\mathcal {E}}_{i}}$ for each ${\mathcal {E}}_{i}$. Since the decorations of ${\mathfrak c}^{-}_{{\mathcal {E}}_{i}}$ and $c^{-}_{{\mathcal {E}}_{i}}$ are the same, we deduce that
$$
G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{i})}=q^{c_{{\mathcal {E}}_{i}}-{\mathfrak c}_{{\mathcal {E}}_{i}}}\,G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}.
$$
Since $c_{{\mathcal {E}}_{i-1}}={\mathfrak c}_{{\mathcal {E}}_{i}}$,
\begin{equation}\label{eq:delta-t-i}
\prod^{h-1}_{i=2}G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{i})}=
\prod^{h-1}_{i=2}q^{c_{{\mathcal {E}}_{i}}-c_{{\mathcal {E}}_{i-1}}}G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}=q^{c_{{\mathcal {E}}_{h-1}}-c_{{\mathcal {E}}_{1}}}
\prod^{h-1}_{i=2}G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}.
\end{equation}
Moreover,
\begin{equation}\label{eq:delta-r-1}
G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{1})}=q^{c_{{\mathcal {E}}_{1}}}G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{1})}
\text{ and }
G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{h})}=q^{\sum^{2r-1}_{j=r}d_{j}-c_{{\mathcal {E}}_{h-1}}}G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{h})}.
\end{equation}
Multiplying~\eqref{eq:delta-t-i} and \eqref{eq:delta-r-1} together, we obtain
$$
q^{-\sum^{2r-1}_{j=r}d_{j}}\prod^{h}_{i=1} G_{\Delta_{{\mathfrak t}}({\mathcal {E}}_{i})}=\prod^{h}_{i=1}G_{\Gamma_{{\mathfrak t}}({\mathcal {E}}_{i})}.
$$
Then using \eqref{eq:Psi} and \eqref{eq:Psi-Delta}, the equality $G_{\Psi}({\mathfrak t})=G_{\Delta}({\mathfrak t})$ follows.
\end{proof}
\section{The Stable Case}\label{sec9}
In this Section, we establish a conjecture of Brubaker, Bump and Friedberg \cite{BBF1, BBF2} for the prime power
coefficients $H(p^{\bf k};p^\ell)$
when $n$ is sufficiently large, the so-called `stable case.' The formula is Lie-theoretic, and expresses the
non-zero coefficients as products of Gauss sums, with each non-zero coefficient
attached via a bijection to an element of the Weyl group.
To do so, we make use of the
work of Beineke, Brubaker and Frechette \cite{BeBrF2}, who showed that in the stable case such contributions
could also be matched with stable strict generalized Gelfand-Tsetlin patterns (see \cite{BeBrF2}, Section 4).
We recast their results in terms of decorated BZL-patterns and observe
that their description matches the one given in Section~\ref{sec:even} for $n$ sufficiently large (either odd
or even).
Fix $\mu\in {\mathbb Z}^r_{\ge1}$ as above. The condition that $n$ be sufficiently large depends on $\mu$, and is as follows.
{\bf Stability Assumption.} The degree $n$ of the cover satisfies
$$
n\geq \begin{cases}
\mu_{1}+2\sum^{r}_{i=2}\mu_{i} & \text{ if $n$ is odd,}\\
2\mu_{1}+4\sum^{r}_{i=2}\mu_{i} & \text{ if $n$ is even.}\\
\end{cases}
$$
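In code form, the displayed case distinction is a single bound (the function name is ours, chosen for illustration); note that the even-$n$ bound is exactly twice the odd-$n$ one.

```python
def stability_bound(mu, n_even):
    # Minimal cover degree n for which the Stability Assumption holds,
    # given mu = (mu_1, ..., mu_r) as in the displayed inequality.
    base = mu[0] + 2 * sum(mu[1:])
    return 2 * base if n_even else base

print(stability_bound((2, 1, 1), n_even=False))  # 2 + 2*(1+1) = 6
print(stability_bound((2, 1, 1), n_even=True))   # 12
```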
Analogously to the definition of a stable
Gelfand-Tsetlin pattern in \cite{BeBrF2}, Definition 10, we define a stable BZL-pattern.
\begin{definition}
A decorated BZL-pattern $\Gamma\in BZL(\mu)$ is called stable if $G(\Gamma)\ne 0$ for some $n$ satisfying the Stability
Assumption.
\end{definition}
Given a BZL-pattern $\Gamma=\Gamma(c_{i,j})\in BZL(\mu)$, denote the entries of the $i$-th row
$$
(c_{i,i},c_{i,i+1},\dots,c_{i,r-1},c_{i,r},\bar{c}_{i,r-1},\dots,\bar{c}_{i,i+1},\bar{c}_{i,i}).
$$
Similarly to the definitions for the first row $(c_{1,j})$, for the $i$-th row $(c_{i,j})$
define a short pattern ${\mathfrak t}_{i}=(d_{i,i},d_{i,i+1},\dots,d_{i,2r-i})$ and the weight vectors $k_{i,j}$ for ${\mathfrak t}_{i}$ where $i\leq j\leq r$.
Let $\mu_{1,j}=\mu_{j}$. Define $\mu_{i,j}$ inductively by
$$\mu_{i,r+1-j}=\mu_{i-1,j}+k_{i-1,j-1}+\varepsilon_{j+1}k_{i-1,j+1}-2k_{i-1,j}$$ for $i>1$ and $i\leq j\leq r$, where $k_{i,r+1}=0$. Let ${\bf \mu_{i}}$ be the vector $(\mu_{i,1},\mu_{i,2},\dots,\mu_{i,r+1-i})$.
Then ${\mathfrak t}_{i}$ is in $BZL_{1}({\bf \mu_{i}})$. Define $G_{\Psi}(\Gamma)=\prod^{r}_{i=1}G_{\Psi}({\mathfrak t}_{i})$.
One may check the following two facts from the definitions.
First, if a decorated BZL-pattern $\Gamma\in BZL(\mu)$
is not stable then $G_\Psi(\Gamma)=0$ for all $n$ satisfying the Stability Assumption.
This follows since some non-zero entry must be undecorated, but then the divisibility
condition in (\ref{dec:Gamma}) cannot hold. Second, if a decorated
BZL-pattern $\Gamma\in BZL(\mu)$
is stable then $G_{\Psi}(\Gamma)=G(\Gamma).$ Indeed, this holds since the two sides are
identically equal by definition as the decorations are the same.
Let $\Phi(C)$ be the set of all roots of the dual group ${\rm Sp}_{2r}({\mathbb{C}})$, and let
$W$ denote the Weyl group of $\Phi(C)$. Then Proposition~12 of \cite{BeBrF2} and the
following discussion may be recast in the language of
stable BZL-patterns as follows.
\begin{lemma} \label{lm:stable} Suppose that $\Gamma\in BZL(\mu)$ is a stable decorated BZL-pattern.
Then there exists a unique element $w$ in $W$ such that
$$
\mu-w(\mu)=(2k_{r}(\Gamma)-k_{r-1}(\Gamma),k_{r-1}(\Gamma)-k_{r-2}(\Gamma),\dots,k_{1}(\Gamma)).
$$
Each element $w\in W$ is so obtained.
\end{lemma}
Let $\Phi^{+}(C)$ (resp.\ $\Phi^{-}(C)$) denote the sets of positive (resp.\ negative) roots in $\Phi(C)$.
For each $w\in W$, let
$$\Phi_{w}=\{\alpha\in\Phi^{+}(C)\mid w(\alpha)\in \Phi^{-}(C)\}.$$ Recall that given $\mu$, we may
realize it as $\mu=\lambda+\rho$ as in Section~\ref{sect62} above, with $\lambda=\sum l_i\epsilon_i$.
For $\alpha\in \Phi^+(C)$, let $d_\lambda(\alpha)=\frac{2\apair{\mu,\alpha}}{\apair{\alpha,\alpha}}$,
where $\apair{\cdot,\cdot}$ is the standard Euclidean inner product. (The notation $d_\lambda(\alpha)$ is the same as that of
\cite{BBF2}.)
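For orientation, this is a standard computation: writing $\mu=\sum_i \mu_i e_i$ in the usual coordinate realization of $\Phi(C)$, with short roots $e_i\pm e_j$ and long roots $2e_i$ (an assumption about indexing conventions, to be checked against \cite{BBF2}), one finds

```latex
d_\lambda(e_i - e_j) = \mu_i - \mu_j, \qquad
d_\lambda(e_i + e_j) = \mu_i + \mu_j, \qquad
d_\lambda(2e_i) = \mu_i .
```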
Then in view of the remarks above
the following Corollary follows
from Theorem~\ref{theorem-general} above together with Theorem~1 of \cite{BeBrF2}.
\begin{corollary}\label{lm:G-stable} Suppose that the Stability Assumption holds. Let $\Gamma\in BZL(\mu)$ be a stable decorated
BZL-pattern, and let $w$ be the Weyl element associated to $\Gamma$ as in Lemma~\ref{lm:stable}. Then
$$H(p^{\kappa(\Gamma)};p^{\ell})=
G(\Gamma)=\prod_{\alpha\in \Phi_{w}}g_{\|\alpha\|^{2}}(p^{d_{{\lambda}}(\alpha)-1},p^{d_{{\lambda}}(\alpha)}).
$$
Each non-zero $p$-power coefficient is obtained from a Weyl group element $w$.
\end{corollary}
This confirms Conjecture 1.4 of \cite{BBF1} and the conjecture described in Section 1.1
of \cite{BBF2} for root systems of this type.
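Although the precise normalization of the Gauss sums $g_{\|\alpha\|^{2}}(p^{d_{\lambda}(\alpha)-1},p^{d_{\lambda}(\alpha)})$ is as in the body of the paper, the underlying objects are classical $n$-th order Gauss sums, whose basic size property $|G(\chi)|=\sqrt{p}$ for a nontrivial character modulo a prime $p$ is easy to check numerically. The Python sketch below (conventions chosen for illustration only, not the paper's normalization) builds a cubic residue character modulo $13$ from a primitive root and evaluates its Gauss sum.

```python
import cmath
import math

def gauss_sum(chi, p):
    # G(chi) = sum_{x=1}^{p-1} chi(x) * e^{2 pi i x / p}
    return sum(chi(x) * cmath.exp(2j * math.pi * x / p) for x in range(1, p))

p, n = 13, 3  # p = 1 (mod n), so a character of order n exists mod p
g = 2         # 2 is a primitive root modulo 13
dlog = {pow(g, k, p): k for k in range(p - 1)}  # discrete logarithm table
omega = cmath.exp(2j * math.pi / n)

def chi(x):
    # order-n multiplicative character: chi(g^k) = omega^(k mod n)
    return omega ** (dlog[x % p] % n)

G = gauss_sum(chi, p)
print(abs(G))  # = sqrt(13) for any nontrivial character mod 13
```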
It took 30 years for Kac' famous question `Can one hear the shape of a drum?' \cite{Kac}
to find an answer.
Gordon, Webb and Wolpert \cite{GWW} constructed two non-congruent planar domains
whose Laplacians with Dirichlet (or Neumann) boundary conditions are isospectral, that is,
they have the same sequence of eigenvalues, counted with multiplicities.
The standard counterexample takes the form of two polygons obtained by
stitching together seven copies of a given non-equilateral triangle in two different ways.
These domains are manifestations in the plane of a general principle first enunciated by
Sunada \cite{Sun1} and developed by B\'erard \cite{Ber1}, which to the best of our
knowledge accounts for all known isospectral pairs, and which was used in \cite{GWW}.
Namely, if $H$ and $K$ are two subgroups of a finite group $G$, then a unitary
intertwining operator between the spaces $L_2(H\setminus G)$ and $L_2(K\setminus G)$
induces an isometry between appropriate subspaces of any Hilbert space on which $G$
acts unitarily.
Subsequent to the publication of \cite{GWW}, several mathematicians, for example
B\'erard \cite{Ber2}, Buser--Conway--Doyle--Semmler \cite{BCDS} and Chapman
\cite{Cha}, gave simplified and more accessible proofs of the isospectrality of such
domains.
The argument in all these expository proofs consists in
showing that an eigenfunction on the first polygon
can be transposed to an eigenfunction on the second by taking particular linear
combinations of its values on the seven equal constituent triangles, and vice versa.
Following the approach taken by B\'erard \cite{Ber2},
if we consider $L_2(\Omega_1)$ and $L_2(\Omega_2)$ rather as $L_2(T)^7$,
where $T$ is the basic triangle (`brique fondamentale') and $\Omega_1$ and
$\Omega_2$ the polygons (see Figure~\ref{fdrumce1}),
\begin{figure}[ht!]
\vspace*{5mm}
\centering
\epsfig{file=dreiecke2.eps, height=50mm} \hspace{25mm}
\epsfig{file=dreiecke1.eps, height=50mm}
\mbox{}
\makebox[0pt]{\raisebox{10mm}[0pt][0pt]{\mbox{}\hspace*{-210mm}$\Omega_1$ }}
\makebox[0pt]{\raisebox{10mm}[0pt][0pt]{\mbox{}\hspace*{-20mm}$\Omega_2$ }}
\caption{Two isospectral domains composed of seven isometric triangles. These are based
on the `warped propeller' domains of \cite{BCDS}.}
\label{fdrumce1}
\end{figure}
then we can construct an isometry $\Phi$ on $L_2(T)^7$ induced by a $7 \times 7$ matrix
$B$ of scalars acting as a family of Euclidean isometries superimposing the seven triangles.
The core of the argument is that $\Phi$ restricts to an isometry mapping the Sobolev
space $H^1_0(\Omega_1)$ onto $H^1_0(\Omega_2)$, whilst its adjoint $\Phi^*$
maps $H^1_0(\Omega_2)$ onto $H^1_0(\Omega_1)$.
Isospectrality of the Laplacians then follows from the variational characterization of
the eigenvalues.
The later works of Buser {\it et al.} \cite{BCDS} and Chapman \cite{Cha} motivate and
describe in lay terms how $\Phi$ acts purely as a map between eigenfunctions, without
touching upon the concept of isometric Sobolev spaces.
The aim of this paper is to reconsider these arguments from a more
analytical perspective.
Rather than transposing eigenfunctions, we construct $\Phi$ as a similarity
transform intertwining the realizations of the Laplacian with
Neumann (or Dirichlet) boundary conditions,
or equivalently, the semigroups generated by these realizations,
on the respective polygons.
Moreover, we consider the operator-theoretic properties of such a transform $\Phi$
more carefully.
We will give a general characterization of maps $\Phi$ that intertwine any two
operators associated with elliptic forms.
In light of this characterization it looks like a miracle that there exists
a matrix which fulfils the criterion, but the key point is rather that $\Phi$
and its adjoint respect the form domains $H^1(\Omega_1)$ and $H^1(\Omega_2)$.
That the transform $\Phi$ intertwines the elliptic forms implies
that it also intertwines the associated operators and semigroups.
In the case of the Laplacian, this is then equivalent to the isospectral property.
But it is not necessary that we consider only the Laplacian: since the Sobolev spaces
are intertwined, any elliptic operator on $L_2(T)$, even if it is not self-adjoint, will yield
two operators on $L_2(\Omega_1)$ and $L_2(\Omega_2)$ which are similar.
In place of isospectrality, the correct setting is now that of similarity, a stronger
property in the non-self-adjoint case.
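As a finite-dimensional caricature of this point (not from the paper): if $\Phi$ intertwines $A$ and $B$, i.e.\ $B\Phi=\Phi A$ with $\Phi$ invertible, then $A$ and $B$ are similar and hence isospectral, whether or not $B$ is self-adjoint. A short numerical check in plain Python:

```python
import math

def eig2(M):
    # Eigenvalues of a 2x2 matrix via its characteristic polynomial.
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 2.0]]         # self-adjoint stand-in, eigenvalues 1 and 3
Phi = [[1.0, 1.0], [0.0, 1.0]]       # invertible, but not unitary
Phi_inv = [[1.0, -1.0], [0.0, 1.0]]
B = matmul(matmul(Phi, A), Phi_inv)  # B Phi = Phi A, so B is similar to A
print(eig2(A), eig2(B))  # [1.0, 3.0] [1.0, 3.0]
```

Here $B$ is not symmetric, yet its spectrum coincides with that of $A$; this is the finite-dimensional shadow of the fact that similarity is the natural notion once one leaves the self-adjoint setting.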
We can consider the Laplacian with Robin boundary conditions in this setting.
The question as to whether there exist isospectral pairs for the third boundary condition
just as for the first and second seems to be a natural one, and was briefly mentioned in
the survey article \cite{Pro}; but otherwise it
appears to have received little attention, and no answer.
We will show that any operator acting as a family of superimposing
isometries that intertwines the Robin Laplacians on $\Omega_1$ and $\Omega_2$
must also simultaneously intertwine the Dirichlet {\em and} Neumann Laplacians,
which is easily shown to be impossible.
Thus there is no reason to suppose that any known pairs of domains which are
Dirichlet or Neumann isospectral are also Robin isospectral, and it is an open
question as to whether there exists {\em any} noncongruent pair of Robin isospectral domains.
The striking implication is that isospectrality could be essentially related to the
boundary conditions and not the coefficients of the operators being intertwined.
Thus it may well be the case that one can hear the shape of a
drum after all, if one loosens the membrane before striking it.
There is another motivation for studying similarity transforms within our framework.
It has been shown (see \cite{Are3}, and cf.~also \cite{ABE} and \cite{AE4}
for the case of Riemannian manifolds) that two (Lipschitz) domains are
necessarily congruent if there exists an order isomorphism
intertwining the Laplacians.
Thus, in our case, the similarity transform $\Phi$ is not an order
isomorphism, even though, at least in the case of Neumann boundary
conditions, $\Phi$ may be taken as a positive linear map.
What goes wrong is that $\Phi$ is no longer disjointness preserving,
as on each triangle it adds (the function values on) several distinct
triangles together; thus $\Phi$ may be written as a finite sum of order
isomorphisms, and due to this `mixing' property, $\Phi^{-1}$ is not positive.
Understanding and seeking to narrow the operator-theoretic gap between
the characterization of such positive results as in \cite{Are3} and the negative
counterexamples may help us to understand Kac' problem better, as well as offering
an alternative approach to the standard one via heat and wave traces.
In fact, a version of Kac' question is still open.
The results here, just as those of \cite{GWW} and the other expositions,
can be interpreted as saying that these seven triangles can be put together
in two different ways to induce isomorphic Sobolev spaces, which is of course
the essential idea behind B\'erard's version of Sunada's Theorem.
With obvious modifications, the same is true for all
other known counterexamples as presented in \cite{BCDS}, which are all based
on Sunada's Theorem in the same way and can all be analyzed within our
framework.
But the point is that the phenomenon exhibited by these domains is
somehow exceptional and does not really answer Kac' question.
If we interpret the `correct' setting for Kac' question as being
$C^\infty$-domains in the plane, then the question is
still wide open; there is no known counterexample among $C^1$ or
convex planar domains.
In four dimensions there is a counterexample of two non-congruent
convex domains, given by Urakawa \cite{Ura} in 1982, which was in fact
the first Euclidean example.
However, the issue of regularity of the boundary seems to be far more
than a technicality, a point also made in the survey article of Protter
\cite{Pro}.
While it is certainly clear that any counterexamples generated via the
principle of Sunada's Theorem must have corners, there are
also remarkable and profound positive results obtained by
Zelditch \cite{Zel,Zel2}, who proves that, within a certain class of
domains in $\mathds{R}^2$ with analytic boundary and certain symmetry conditions,
any two isospectral domains are congruent.
The presence of corners in a domain also has significant consequences for the
asymptotic behaviour of the eigenvalues; for example, the curvature of
the boundary appears in all terms of the asymptotic expansion of the
heat kernel about $t=0$ (see, e.g., \cite{Wat}).
This article is organized as follows.
We start in Section~\ref{Sdrumce2} by characterizing operators which
intertwine two semigroups generated by sectorial forms, and relating
this to the isospectral property.
We phrase many of our results in the language of semigroups, as this
allows us to work on $L_2$-spaces in place of the more abstruse operator domains.
In Section~\ref{Sdrumce3} we recast the arguments given in \cite{Ber2} and \cite{BCDS}
within this framework, showing first how we can decompose the large domains
$\Omega_1$ and $\Omega_2$ into their constituent triangles, and give conditions
allowing us to merge the associated Sobolev spaces together.
In this setting we then
prove that realizations of the Neumann Laplacian on the two non-congruent polygons
in Figure~\ref{fdrumce1} are similar.
We work principally with the Neumann case as there are fewer conditions on the
Sobolev spaces involved, and as the similarity transform and associated matrix have the
particularly nice property that they may be taken to be positive.
In Section~\ref{Sdrumce4} we discuss properties of the intertwining operator $\Phi$
constructed in Section~\ref{Sdrumce3} from a more analytical, operator-theoretic
perspective.
The results in Section~\ref{Sdrumce3} are extended to more general elliptic
operators in Section~\ref{Sdrumce5}.
We then consider Dirichlet boundary conditions in Section~\ref{Sdrumce6}.
The underlying ideas are the same, but the details of the construction
turn out to be a little more complicated than in the Neumann case.
We therefore limit ourselves to indicating the differences
vis-\`a-vis the Neumann Laplacian.
Finally, in Section~\ref{Sdrumce7}, we show that these arguments cannot be
extended to Robin boundary conditions.
\section{Forms and intertwining operators} \label{Sdrumce2}
We start by introducing some basic terms and results from the theory of
sectorial forms.
The idea is to consider equivalent formulations of isospectrality for the
Dirichlet and Neumann Laplacians which are more suitable for adaptation
to more general operators.
To that end, let $H$ and $V$ be complex Hilbert spaces such that
$V$ is densely embedded in $H$.
Let $a \colon V \times V \to \mathds{C}$ be a continuous sesquilinear form.
Assume that $a$ is {\bf elliptic}, that is, there exist $\omega \in \mathds{R}$
and $\mu > 0$ such that
\begin{equation}
\mathop{\rm Re} a(u,u) + \omega \, \|u\|_H^2
\geq \mu \, \|u\|_V^2
\label{eSdrumce2;1}
\end{equation}
for all $u \in V$.
Denote by $A$ the operator associated with $a$.
That is, the domain of $A$ is given by
\[
D(A) = \{ u \in V : \mbox{there exists an $f \in H$ such that }
a(u,v) = (f,v)_H \mbox{ for all } v \in V \},
\]
and $A u = f$ for all $u \in D(A)$ and $f \in H$ such that
$a(u,v) = (f,v)_H$ for all $v \in V$.
Then $-A$ generates a holomorphic semigroup on $H$.
We are of course particularly interested in the Dirichlet and Neumann Laplacians
on $H = L_2(\Omega)$, where $\Omega \subset \mathds{R}^d$ is an open set with
finite measure.
These are self-adjoint operators with compact resolvent, and can
be characterized as follows.
We omit the standard proof.
\begin{prop} \label{pdrumce200}
Let $A$ be an operator in a separable infinite dimensional Hil\-bert space~$H$.
The following are equivalent.
\begin{tabeleq}
\item \label{pdrumce200-1}
$A$ is self-adjoint, bounded from below and has compact resolvent.
\item \label{pdrumce200-2}
There exist a Hilbert space $V$ which is densely and compactly embedded in
$H$ and a symmetric, continuous elliptic form $a \colon V\times V \to \mathds{C}$ such that
$A$ is associated with $a$.
\item \label{pdrumce200-3}
There exist an orthonormal basis $(e_n)_{n \in \mathds{N}}$ of $H$ and an
increasing sequence of real numbers $(\lambda_n)_{n \in \mathds{N}}$ with $\lim_{n\to\infty}
\lambda_n =\infty$ such that
\[
D(A)=\{u\in H: \sum_{n=1}^\infty |\lambda_n \, (u,e_n)_H|^2 < \infty\}
\]
and $Au = \sum_{n=1}^\infty \lambda_n \, (u,e_n)_H \, e_n$ for all $u \in D(A)$.
\end{tabeleq}
\end{prop}
If $A$ satisfies these equivalent conditions, we call $\lambda_n$ the {\bf $n$-th
eigenvalue of $A$} and $(\lambda_n)_{n \in \mathds{N}}$ the sequence of eigenvalues of $A$,
repeated according to multiplicity.
\medskip
Let us now assume that we have two forms $a_1$ and $a_2$, with dense
form domains $V_1$ and $V_2$ in Hilbert spaces $H_1$ and $H_2$, respectively.
We assume throughout that both $a_1$ and $a_2$ are continuous and elliptic.
Let $A_1$ and $A_2$ be the operators associated with $a_1$ and $a_2$, which are
automatically bounded from below thanks to the ellipticity assumption.
Denote by $S^1$ and $S^2$ the semigroups generated by
$-A_1$ and $-A_2$.
If $A_1$ and $A_2$ also are self-adjoint and have compact resolvent then we call them {\bf isospectral}
if they have the same sequence of eigenvalues.
In this case we will denote by $(e_n)_{n \in \mathds{N}}$ the sequence of (normalized) eigenfunctions
of $A_1$ on $H_1$ and by $(f_n)_{n \in \mathds{N}}$ the similarly normalized eigenfunctions of
$A_2$ on $H_2$.
It is then immediate that there exists a unitary operator $U \in {\cal L}(H_1,H_2)$ such that
\begin{equation} \label{pdrumce201b}
U^{-1} \, S^2_t \, U = S^1_t,
\end{equation}
for all $t>0$.
We may simply choose $U$ such that $U e_n = f_n$ for all $n \in \mathds{N}$.
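In finite dimensions this construction of $U$ from the two eigenbases can be made completely explicit. The following Python sketch is a toy illustration only (the matrices are hypothetical choices, not part of the paper's setting): it builds two isospectral symmetric $2 \times 2$ matrices and checks that the map sending one eigenbasis to the other intertwines the generators, which for matrices is equivalent to intertwining the semigroups.

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

c = s = 1 / math.sqrt(2)
Q = [[c, -s], [s, c]]            # orthogonal change of basis
A2 = [[1.0, 0.0], [0.0, 3.0]]    # diagonal, eigenvalues 1 and 3

# A1 = Q A2 Q^T has the same eigenvalues as A2.
A1 = matmul(matmul(Q, A2), transpose(Q))

# U maps the eigenbasis of A1 to that of A2; here U = Q^T.
U = transpose(Q)

# Intertwining of the generators: U A1 = A2 U.
UA, AU = matmul(U, A1), matmul(A2, U)
err = max(abs(UA[i][j] - AU[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # True
```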
If we assume $H_i = L_2(\Omega_i)$ for some open $\Omega_i \subset \mathds{R}^d$
and $A_i$ is the Dirichlet (or Neumann) Laplacian on $\Omega_i$ for all $i \in \{ 1,2 \} $,
then Kac' question may be phrased as asking whether the existence of an intertwining
operator as in (\ref{pdrumce201b}) implies the existence of an isometry
$\tau \colon \Omega_1 \to \Omega_2$.
However, we wish to consider more general operator-theoretic notions
than isospectrality, in particular allowing for non-self-adjoint operators.
Moreover, the similarity transform $\Phi$ that we construct in Section~\ref{Sdrumce3}
is, in general, not unitary.
See also Section~\ref{Sdrumce4}.
(This assertion is also true for the equivalent constructions in \cite{Ber2} and
\cite{BCDS}, where the mechanism is of course the same.)
The next proposition gives a general characterization of an operator $\Phi \colon H_1 \to H_2$
that intertwines $A_1$ and $A_2$ in terms of the forms $a_1$ and $a_2$.
Of particular interest to us in what follows is Condition~\ref{pdrumce202-3}.
We do not require $A_1$ or $A_2$ to be self-adjoint or have compact resolvent.
\begin{prop} \label{pdrumce202}
Let $\Phi \in {\cal L}(H_1,H_2)$.
Consider the following conditions.
\begin{tabeleq}
\item \label{pdrumce202-1}
$S^2_t \, \Phi = \Phi \, S^1_t$ for all $t > 0$.
\item \label{pdrumce202-2}
$\Phi(D(A_1)) \subset D(A_2)$ and $A_2 \, \Phi u = \Phi \, A_1 u$
for all $u \in D(A_1)$.
\item \label{pdrumce202-3}
$\Phi(V_1) \subset V_2$, $\Phi^*(V_2) \subset V_1$ and
$a_2(\Phi u, v) = a_1(u, \Phi^* v)$ for all $u \in V_1$ and $v \in V_2$.
\end{tabeleq}
Then {\rm \ref{pdrumce202-1}}$\Leftrightarrow${\rm \ref{pdrumce202-2}}$\Leftarrow${\rm \ref{pdrumce202-3}}.
\end{prop}
\begin{proof}
`\ref{pdrumce202-1}$\Rightarrow$\ref{pdrumce202-2}'.
Let $u \in D(A_1)$.
Then
\[
\Phi \, A_1 u
= \lim_{t \downarrow 0} t^{-1} \Phi \, (I - S^1_t) u
= \lim_{t \downarrow 0} t^{-1} (I - S^2_t) \, \Phi u
= A_2 \, \Phi u
. \]
So $\Phi u \in D(A_2)$ and $A_2 \, \Phi u = \Phi \, A_1 u$.
`\ref{pdrumce202-2}$\Rightarrow$\ref{pdrumce202-1}'.
Replacing $A_k$ by $\omega I + A_k$, we may assume that both
$S^1$ and $S^2$ are exponentially decreasing.
Let $u \in D(A_1)$ and $\lambda > 0$.
Then $(\lambda I + A_2) \, \Phi u = \Phi \, (\lambda I + A_1) u$.
Since $\lambda I + A_1$ is surjective, it follows that
$\Phi \, (\lambda I + A_1)^{-1} v = (\lambda I + A_2)^{-1} \, \Phi v$
for all $v \in H_1$.
Hence by iteration,
$\Phi \, (\lambda I + A_1)^{-n} = (\lambda I + A_2)^{-n} \, \Phi$
for all $n \in \mathds{N}$.
Then by (7) in \cite{Yos} Section~IX.7 one deduces \ref{pdrumce202-1}.
`\ref{pdrumce202-3}$\Rightarrow$\ref{pdrumce202-2}'.
Let $u \in D(A_1)$.
Then for all $v \in V_2$ one has $\Phi^* v \in V_1$ and
\[
a_2(\Phi u, v)
= a_1(u, \Phi^* v)
= (A_1 u, \Phi^* v)_{H_1}
= (\Phi \, A_1 u, v)_{H_2}
. \]
Hence $\Phi u \in D(A_2)$ and $A_2 \, \Phi u = \Phi \, A_1 u$.
\end{proof}
We remark that under additional assumptions on the operators $A_1$ and $A_2$ it can be
proved that all three statements in Proposition~\ref{pdrumce202} are equivalent.
It suffices that $a_1$ and $a_2$ have the {\bf square root property}, which
means that for the square root operator $(\omega_k I + A_k)^{1/2}$ of $\omega_k I + A_k$,
defined as in \cite{ABHN}, Section~3.8, where $\omega_k$ is the constant in
(\ref{eSdrumce2;1}), we have $D((\omega_k I + A_k)^{1/2}) = V_k$ for all $k \in \{ 1,2 \} $.
This is not always the case; a counterexample has been given by McIntosh \cite{McI1},
although it is always true if $A_1$ and $A_2$ are self-adjoint.
As we do not need this equivalence in the sequel, we do not go into details.
\medskip
We next assume that the intertwining operator $\Phi \in {\cal L} (H_1,H_2)$ is
invertible and thus an isomorphism between $H_1$ and $H_2$.
\begin{cor} \label{cdrumce203}
Let $\Phi \in {\cal L}(H_1,H_2)$ be invertible.
Consider the following statements.
\begin{tabeleq}
\item \label{cdrumce203-1}
$\Phi(D(A_1)) \subset D(A_2)$ and $A_2 \, \Phi u = \Phi \, A_1 u$
for all $u \in D(A_1)$.
\item \label{cdrumce203-2}
$\Phi(D(A_1)) = D(A_2)$ and $A_2 \, \Phi u = \Phi \, A_1 u$ for all $u \in D(A_1)$.
\item \label{cdrumce203-3}
$\Phi^{-1} \, S^2_t \, \Phi = S^1_t$ for all $t > 0$.
\item \label{cdrumce203-4}
If $u\in D(A_1)$ and $\lambda \in \mathds{R}$ are such that $A_1 u=\lambda u$, then
$\Phi u \in D(A_2)$ and $A_2\Phi u = \lambda \Phi u$.
\end{tabeleq}
Then {\rm \ref{cdrumce203-1}} $\Leftrightarrow$ {\rm \ref{cdrumce203-2}}
$\Leftrightarrow$ {\rm \ref{cdrumce203-3}} $\Rightarrow$ {\rm \ref{cdrumce203-4}}.
If in addition $A_1$ is self-adjoint and has compact resolvent, then all four statements
are equivalent.
\end{cor}
We say that $A_1$ and $A_2$ are {\bf similar}, or equivalently, that the semigroups
$S^1$ and $S^2$ are {\bf similar}, if the equivalent statements
\ref{cdrumce203-1}--\ref{cdrumce203-3} hold.
In the case where $A_1$ and $A_2$ are self-adjoint and have compact resolvent,
we may replace \ref{cdrumce203-2} with the statement `$\Phi(D(A_1)) =
D(A_2)$ and the spectra of $A_1$ and $A_2$ coincide'.
Thus we may regard
similarity as a more general property than isospectrality.
The next result was stated in \cite{Are3} Lemma~1.3 for self-adjoint operators,
but we note that it is also a direct consequence of Proposition~\ref{pdrumce202} and
Corollary~\ref{cdrumce203} without requiring this assumption.
\begin{cor} \label{cdrumce204}
Let $\Phi \in {\cal L}(H_1,H_2)$ be unitary.
Then the following are equivalent.
\begin{tabeleq}
\item \label{cdrumce204-1}
$S^2_t \, \Phi = \Phi \, S^1_t$ for all $t > 0$.
\item \label{cdrumce204-2}
$\Phi(V_1) = V_2$ and
$a_2(\Phi u, \Phi v) = a_1(u, v)$ for all $u,v \in V_1$.
\end{tabeleq}
\end{cor}
We finish this section by pointing out that the existence of a unitary similarity
transform is guaranteed by self-adjointness of the operators alone, and compactness
of the resolvents is not needed.
\begin{prop} \label{pdrumce205}
Let $A_1$ and $A_2$ be two self-adjoint operators on $H_1$ and $H_2$,
respectively.
Assume that the semigroups $S^1$ and $S^2$ are similar.
Then there exists a unitary operator $U \in {\cal L}(H_1,H_2)$ such that
\[
U^{-1} \, S^2_t \, U = S^1_t
\]
for all $t>0$.
\end{prop}
\begin{proof}
By assumption there exists an invertible operator $\Phi \in {\cal L}(H_1,H_2)$ such that
$S^2_t \, \Phi = \Phi \, S^1_t$ for all $t > 0$.
We consider the polar decomposition $\Phi = U \, |\Phi|$, where $U \in {\cal L}(H_1,H_2)$
is unitary and $|\Phi| = (\Phi^* \Phi)^{1/2} \in {\cal L}(H_1)$ is invertible and self-adjoint.
Since
\[
\Phi^* S^2_t = (S^2_t \, \Phi)^* = (\Phi \, S^1_t)^* = S^1_t \, \Phi^*
\]
for all $t>0$, we see that $\Phi^*$ is also an intertwining operator.
Hence $\Phi^* \Phi \, S^1_t = \Phi^* \, S^2_t \, \Phi = S^1_t \, \Phi^* \Phi$, so that
$|\Phi| = (\Phi^* \Phi)^{1/2}$ commutes with $S^1_t$ for all $t>0$, and so
\[
U \, S^1_t
= U \, |\Phi| \, |\Phi|^{-1} \, S^1_t
= \Phi \, S^1_t \, |\Phi|^{-1}
= S^2_t \, \Phi \, |\Phi|^{-1}
= S^2_t \, U
\]
for all $t > 0$.
\end{proof}
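The proof can be illustrated by a finite-dimensional toy computation. The matrices below are hypothetical choices, made only so that $\Phi$ commutes with the self-adjoint generator $A$; for a $2 \times 2$ positive definite block $S$ the square root has the closed form $(S + \sqrt{\det S}\, I)/\sqrt{\operatorname{tr} S + 2\sqrt{\det S}}$, which we use for the polar decomposition.

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

A = [[1, 0, 0], [0, 1, 0], [0, 0, 3]]       # self-adjoint generator (toy)
Phi = [[1, 1, 0], [0, 1, 0], [0, 0, 2]]     # invertible, commutes with A, not unitary

# Phi^T Phi = [[1,1,0],[1,2,0],[0,0,4]]; its upper 2x2 block is S below.
S = [[1.0, 1.0], [1.0, 2.0]]                # det S = 1, tr S = 3
f = math.sqrt(3.0 + 2.0)                    # sqrt(tr S + 2 sqrt(det S))
R = [[(S[0][0] + 1.0) / f, S[0][1] / f],
     [S[1][0] / f, (S[1][1] + 1.0) / f]]    # |Phi| on the upper block, R^2 = S

detR = R[0][0] * R[1][1] - R[0][1] * R[1][0]
Rinv = [[R[1][1] / detR, -R[0][1] / detR],
        [-R[1][0] / detR, R[0][0] / detR]]
absPhiInv = [[Rinv[0][0], Rinv[0][1], 0.0],
             [Rinv[1][0], Rinv[1][1], 0.0],
             [0.0, 0.0, 0.5]]               # |Phi|^{-1}; lower block of |Phi| is 2

U = matmul(Phi, absPhiInv)                  # polar part of Phi
I3 = matmul(transpose(U), U)
UA, AU = matmul(U, A), matmul(A, U)
ok_unitary = all(abs(I3[i][j] - (i == j)) < 1e-12 for i in range(3) for j in range(3))
ok_intertwine = all(abs(UA[i][j] - AU[i][j]) < 1e-12 for i in range(3) for j in range(3))
print(ok_unitary and ok_intertwine)  # True
```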
\section{Isospectral domains for the Neumann Laplacian} \label{Sdrumce3}
For an open polygon $\Omega$ in $\mathds{R}^2$ we denote by $\Delta_\Omega^N$ the
Neumann Laplacian on $L_2(\Omega)$.
This realization of the Laplacian with Neumann boundary conditions is self-adjoint
and has compact resolvent, and its negative $-\Delta_\Omega^N$ is bounded from below,
with eigenvalues $0 = \lambda_0 \leq \lambda_1 \leq \ldots \to \infty$.
We will consider the two (very) warped propeller-like domains from
Figure~\ref{fdrumce1} and show that $\Delta_{\Omega_1}^N$ and
$\Delta_{\Omega_2}^N$ are similar.
This will be done with the help of our form criterion established in
Proposition~\ref{pdrumce202}.
As a corollary we deduce that $\Delta_{\Omega_1}^N$ and $\Delta_{\Omega_2}^N$
are isospectral even though $\Omega_1$ and $\Omega_2$ are obviously not congruent.
Note that $\Omega_1$ and $\Omega_2$ look like propellers if the
constituent triangles are equilateral.
Since we wish to decompose our polygons into their constituent triangles, we need to start
with some basic facts about traces and integration by parts.
We let $\Omega \subset \mathds{R}^2$ be an arbitrary open polygon, although in practice we only
need the following results for our warped propellers.
On the boundary $\Gamma$ of $\Omega$ we let $\sigma$ denote the usual surface measure;
on each straight line segment, $\sigma$ is simply one-dimensional Lebesgue measure.
The Trace Theorem states that there exists a unique bounded operator ${\mathop{\rm Tr \,}} \colon H^1(\Omega)
\to L_2(\Gamma)$ such that ${\mathop{\rm Tr \,}}(u)=u|_\Gamma$ for all $u \in H^1(\Omega) \cap
C(\overline \Omega)$.
Observe that, since $\Omega$ is Lipschitz, the space $H^1(\Omega) \cap C(\overline \Omega)$ is
dense in $H^1(\Omega)$.
By $\nu(z)=(\nu_1(z), \nu_2(z))$ we denote the outer unit normal to $\Omega$ at $z\in
\Gamma$.
Then $\nu(z)$ is constant on each straight line segment of the boundary.
The integration by parts formula states that
\[
-\int_\Omega (\partial_j u) \,v = \int_\Omega u\,\partial_j v -
\int_\Gamma \nu_j \, u \,v
\]
for all $u,v \in H^1(\Omega)$ and $j \in \{ 1,2 \} $.
Here the integral over $\Gamma$ is with respect to $\sigma$, and we have omitted the
trace to simplify notation.
The {\bf Neumann Laplacian} is by definition the operator $\Delta_\Omega^N$ on $L_2(\Omega)$
such that $- \Delta_\Omega^N$ is associated with the form $a \colon H^1(\Omega) \times H^1(\Omega) \to \mathds{C}$ given by
\begin{equation}
a(u,v) = \int_\Omega \nabla u \cdot \overline{\nabla v}.
\label{eSdrumse2;8}
\end{equation}
We denote by $S$ the semigroup generated by $\Delta_\Omega^N$.
If $u \in H^1(\Omega)$ is such that the distributional Laplacian
$\Delta u \in L_2(\Omega)$, then for all
$h \in L_2(\Gamma)$ we say that $\partial_\nu u = h$ if
\begin{equation}
\int_\Omega (\Delta u) \,v + \int_\Omega \nabla u\cdot \nabla v= \int_\Gamma h \,v
\label{eSdrumce3;0}
\end{equation}
for all $v\in H^1(\Omega)$. That is, we define the normal derivative via Green's
formula.
Based on this definition, the operator $\Delta_\Omega^N$ has the domain
\[
D(\Delta_\Omega^N)=\{u\in H^1(\Omega): \Delta u\in L_2(\Omega) \mbox{ and } \partial_\nu u=0\}.
\]
This is valid whenever $\Omega$ is a Lipschitz domain.
Now let $T$ be a fixed scalene triangle whose three different sides are labelled $\Gamma_1$,
$\Gamma_2$ and $\Gamma_3$ as in Figure~\ref{fdrumce2}.
\begin{figure}[ht!]
\centering
\vspace*{1mm}
\epsfig{file=dreieck.eps, height=24mm}
\caption{The triangle $T$.\label{fdrumce2}}
\end{figure}
Thus if $\Omega_1$ and $\Omega_2$ are broken into their seven constituent triangles,
then each is congruent to $T$.
Now let $\Omega$ be either one of $\Omega_1$ and $\Omega_2$ and consider the seven
open disjoint triangles $T_1,\ldots, T_7$ such that
\[
\overline \Omega = \bigcup_{k=1}^7 \overline T_k.
\]
Two triangles $\overline T_k$ and $\overline T_l$ may have a common side; there are six such
sides inside $\Omega$.
If $u\in H^1(\Omega)$, then $u_k:= u|_{T_k} \in H^1(T_k)$ for all $k \in \{ 1,\ldots,7 \} $.
Conversely, the following basic result holds.
\begin{lemma} \label{ldrumce3b03}
Let $u \in L_2(\Omega)$ be such that $u_k:=u|_{T_k} \in H^1(T_k)$ for all $k \in \{ 1,\ldots,7 \} $.
Then $u \in H^1(\Omega)$ if and only if $u_k$ and $u_l$ have the same trace on common
sides of $T_k$ and $T_l$ for all $k,l \in \{ 1,\ldots,7 \} $ with $k \neq l$.
Moreover, if $u \in H^1(\Omega)$ then
$(\partial_j u)|_{T_k} = \partial_j u_k$ on $T_k$ for all
$k \in \{ 1,\ldots,7 \} $ and $j \in \{ 1,2 \} $.
\end{lemma}
\begin{proof}
Since $H^1(\Omega) \cap C(\overline\Omega)$ is dense in $H^1(\Omega)$ the condition
on the traces is clearly necessary.
Assume now that $u$ satisfies this trace condition.
Let $\varphi \in C_c^1(\Omega)$ and $j \in \{1,2\}$. Then
\begin{eqnarray}
-\int_\Omega u \,\partial_j \varphi
& = & -\sum_{k=1}^7 \int_{T_k} u_k \, \partial_j\varphi \nonumber \\
& = & \sum_{k=1}^7 \Big( \int_{T_k} (\partial_j u_k) \, \varphi
- \int_{\partial T_k} \nu_{k,j} \, u_k \, \varphi \Big) \nonumber \\
& = & \sum_{k=1}^7 \int_{T_k} (\partial_j u_k) \, \varphi \label{eldrumce3b03;1} \\
& = & \int_\Omega w \, \varphi, \nonumber
\end{eqnarray}
where $w \in L_2(\Omega)$ is such that $w|_{T_k} = \partial_j u_k$.
Thus $\partial_j u = w$ in $\Omega$ by definition of the weak derivative of a function.
Here $\nu_k = (\nu_{k,1}, \nu_{k,2})$ denotes the outer unit normal to $T_k$ on $\partial T_k$.
In (\ref{eldrumce3b03;1}) we used that
\[
\sum_{k=1}^7 \int_{\partial T_k} \nu_{k,j} \, u_k \, \varphi = 0
. \]
Indeed, $\varphi$ vanishes in a neighbourhood of $\partial \Omega$, while on each common side
$\overline T_k \cap \overline T_l$ with $k \neq l$ one has $\nu_k = - \nu_l$ and, by assumption,
the traces of $u_k$ and $u_l$ coincide.
\end{proof}
If $\tau \colon \mathds{R}^2 \to \mathds{R}^2$ is an isometry,
then the map $U \colon L_2(\tau(\Omega)) \to L_2(\Omega)$ given by $u \mapsto u
\circ \tau$ is unitary; moreover, $U(H^1(\tau(\Omega))) = H^1(\Omega)$ and
the restriction $U|_{H^1(\tau(\Omega))}$ is unitary as well.
Now consider the first warped propeller $\Omega_1$ from Figure~\ref{fdrumce1}.
Denote by $T_1,\ldots, T_7$ the seven disjoint triangles congruent to $T$ such that
$\overline \Omega_1 = \bigcup_{k=1}^7 \overline T_k$, and
for all $k \in \{ 1,\ldots,7 \} $ denote by $\tau_k$ the isometry mapping
$T$ onto $T_k$.
If we define a map
\[
\Phi_1(w) = (w|_{T_1} \circ \tau_1,\ldots, w|_{T_7} \circ \tau_7),
\]
then $\Phi_1 \colon L_2(\Omega_1) \to L_2(T)^7$ is unitary and $\Phi_1(H^1(\Omega_1)) = V_1$,
where
\begin{eqnarray*}
V_1 = \{ (u_1,\ldots,u_7) \in H^1(T)^7
& : & u_1 = u_2 \mbox{ and } u_4 = u_7 \mbox{ on } \Gamma_1 \\
& & u_1 = u_3 \mbox{ and } u_2 = u_5 \mbox{ on } \Gamma_2 \\
& & u_1 = u_4 \mbox{ and } u_3 = u_6 \mbox{ on } \Gamma_3 \}
.
\end{eqnarray*}
Here we mean more precisely that $u_1$ and $u_2$ have the same trace on $\Gamma_1$,
and so on.
Since the trace is a continuous mapping from $H^1(T)$ into $L_2(\partial T)$, the space $V_1$
is closed in $H^1(T)^7$.
We now define a form $\tilde a_1 \colon V_1 \times V_1 \to \mathds{C}$ by
\begin{equation} \label{epdrumce3b01}
\tilde a_1 (u,v) := \sum_{k=1}^7 \int_T \nabla u_k \cdot \overline{\nabla v_k},
\end{equation}
where we have written $u = (u_1,\ldots,u_7)$ and $v = (v_1,\ldots,v_7)$.
Then $\tilde a_1$ is continuous, symmetric and elliptic with respect to $L_2(T)^7$.
We denote by $\widetilde A_1$ the self-adjoint operator on $L_2(T)^7$ associated with
$\tilde a_1$ and by $\widetilde S^1$ the semigroup generated by $-\widetilde A_1$.
We next show that the operators $-\widetilde A_1$ and $\Delta_{\Omega_1}^N$
(and the semigroups they generate) are similar.
Let $S^1$ be the semigroup generated by the Neumann Laplacian on $\Omega_1$.
We denote by $a_1$ the form associated with the semigroup $S^1$
on $\Omega_1$, cf.\ (\ref{eSdrumse2;8}).
\begin{prop} \label{pdrumce3b03}
If $t>0$ then
\[
\Phi^{-1}_1 \, \widetilde S^1_t \, \Phi_1 = S^1_t
. \]
\end{prop}
\begin{proof}
Let $u,v \in H^1(\Omega_1)$ and write $\Phi_1(u) = (u_1,\ldots, u_7)$ and
$\Phi_1(v) = (v_1,\ldots, v_7)$.
Then
\begin{eqnarray}
\tilde a_1 (\Phi_1 u, \Phi_1 v)
& = & \sum_{k=1}^7 \int_T \nabla u_k \cdot \overline{\nabla v_k} \nonumber \\
& = & \sum_{k=1}^7 \int_T \nabla (u\circ \tau_k) \cdot \overline{\nabla(v\circ \tau_k)} \nonumber \\
& = & \sum_{k=1}^7 \int_T (\nabla u) \circ \tau_k \cdot \overline{(\nabla v) \circ \tau_k} \nonumber \\
& = & \sum_{k=1}^7 \int_{T_k} \nabla u \cdot \overline{\nabla v} \nonumber \\
& = & \int_{\Omega_1} \nabla u \cdot \overline{\nabla v}. \label{edrumce3b;1}
\end{eqnarray}
Now the claim follows from Corollary~\ref{cdrumce204}.
\end{proof}
An obvious analogue holds for $\Omega_2$.
Namely, define a unitary map $\Phi_2 \colon L_2(\Omega_2) \to L_2(T)^7$ in the obvious
way, and let
\begin{eqnarray*}
V_2 = \{ (u_1,\ldots,u_7) \in H^1(T)^7
& : & u_1 = u_2 \mbox{ and } u_3 = u_6 \mbox{ on } \Gamma_1 \\
& & u_1 = u_3 \mbox{ and } u_4 = u_7 \mbox{ on } \Gamma_2 \\
& & u_1 = u_4 \mbox{ and } u_2 = u_5 \mbox{ on } \Gamma_3 \},
\end{eqnarray*}
so that $\Phi_2(H^1(\Omega_2)) = V_2$.
We define $\tilde a_2 \colon V_2 \times V_2 \to \mathds{C}$ by
\[
\tilde a_2(u,v) := \sum_{k=1}^7 \int_T \nabla u_k \cdot \overline{\nabla v_k},
\]
where again $u = (u_1,\ldots,u_7)$ and $v = (v_1,\ldots,v_7)$, and denote by
$\widetilde A_2$ the self-adjoint operator on $L_2(T)^7$ associated with $\tilde a_2$.
Let $\widetilde S^2$ be the semigroup generated by $- \widetilde A_2$
and by $S^2$ the semigroup generated by $\Delta_{\Omega_2}^N$.
\begin{prop} \label{pdrumce3b04}
The operators $\widetilde A_2$ on $L_2(T)^7$ and $-\Delta_{\Omega_2}^N$ on
$L_2(\Omega_2)$ are similar.
Precisely,
\[
\Phi_2^{-1} \, \widetilde S^2_t \, \Phi_2 = S^2_t
\]
for all $t > 0$.
\end{prop}
The similarities established so far are quite simple; analogous results hold for any polygon
decomposed into triangles.
But the attraction of this approach is that, to show that $\Delta_{\Omega_1}^N$ and
$\Delta_{\Omega_2}^N$ are similar, it suffices to prove the statement for $\widetilde A_1$
and $\widetilde A_2$, which are both defined as operators on $L_2(T)^7$.
This is exactly what we shall now do, and it is here that the special combinatorial relations
defining $V_1$ and $V_2$ are crucial.
We define a map $B \colon \mathds{R}^7 \to \mathds{R}^7$ by
\[
B = \left( \begin{array}{ccccccc}
0 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0
\end{array} \right)_{\textstyle .}
\]
This is an analogue for our domains of the matrix $T^N$ considered in \cite{Ber2}
(with $a=0$ and $b=1$; see the discussion in Section~\ref{Sdrumce4}).
Moreover, define $\Phi \colon L_2(T)^7 \to L_2(T)^7$ by
\[
(\Phi u)_k = \sum_{l=1}^7 b_{kl} \, u_l
. \]
The adjoint $\Phi^*$ may also be defined directly with respect to $B^*$ by
\[
(\Phi^* u)_l = \sum_{k=1}^7 b_{kl} \, u_k
. \]
It is a simple (but central) calculation to show that
\[
\Phi(V_1) \subset V_2
\quad \mbox{and} \quad \Phi^*(V_2) \subset V_1
. \]
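This calculation can also be checked mechanically. In the following Python sketch (an illustration of the bookkeeping only, not part of the proof), the trace identifications defining $V_1$ and $V_2$ are encoded side by side; for each side of $T$, the components entering $(\Phi u)_k$ are reduced modulo the identifications satisfied by $u$, and equality of the reduced multisets yields the required trace identities.

```python
from collections import Counter

B = [
    [0, 1, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 1],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
]

# Trace identifications (pairs of components) on Gamma_1, Gamma_2, Gamma_3.
V1 = {1: [(1, 2), (4, 7)], 2: [(1, 3), (2, 5)], 3: [(1, 4), (3, 6)]}
V2 = {1: [(1, 2), (3, 6)], 2: [(1, 3), (4, 7)], 3: [(1, 4), (2, 5)]}

def rep_map(pairs):
    # identify the second component of each pair with the first
    rep = {l: l for l in range(1, 8)}
    for a, b in pairs:
        rep[b] = a
    return rep

def maps_into(M, Vin, Vout):
    for side in (1, 2, 3):
        rep = rep_map(Vin[side])
        for a, b in Vout[side]:
            # formal trace of (M u)_k on this side, as a multiset of
            # representatives of the components entering the k-th row
            ta = Counter(rep[l + 1] for l in range(7) if M[a - 1][l])
            tb = Counter(rep[l + 1] for l in range(7) if M[b - 1][l])
            if ta != tb:
                return False
    return True

Bt = [list(r) for r in zip(*B)]
ok_forward = maps_into(B, V1, V2)   # Phi(V_1) contained in V_2
ok_adjoint = maps_into(Bt, V2, V1)  # Phi^*(V_2) contained in V_1
print(ok_forward, ok_adjoint)  # True True
```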
Moreover, for all $u \in V_1$ and $v \in V_2$ one has
\begin{eqnarray}
\tilde a_2(\Phi u, v)
= \sum_{k=1}^7 \int_T \nabla(\Phi u)_k \cdot \overline{\nabla v_k}
& = & \sum_{k=1}^7 \sum_{l=1}^7 b_{kl} \, \int_T \nabla u_l \cdot
\overline{\nabla v_k} \nonumber \\
& = & \sum_{l=1}^7 \int_T \nabla u_l \cdot \overline{\nabla (\Phi^*v)_l}
= \tilde a_1(u, \Phi^* v)
. \label{eSdrumce4;1}
\end{eqnarray}
Using Proposition~\ref{pdrumce202} it follows that $\Phi \, \widetilde S^1_t =
\widetilde S^2_t \, \Phi$ for all $t > 0$.
It is easy to verify that the matrix $B$ is invertible, implying that $\Phi$ is
invertible as an operator on $L_2(T)^7$, and therefore
\[
\widetilde S^1_t = \Phi^{-1} \, \widetilde S^2_t \, \Phi
\]
for all $t > 0$.
So the semigroups are similar.
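The invertibility of $B$ can also be made quantitative: any two distinct rows (or columns) of $B$ share exactly one common entry $1$, so $B B^t = B^t B = 2I + E$, where $E$ denotes the all-ones matrix. In particular $B$ is normal and $(\det B)^2 = \det(2I + E) = 2^6 \cdot 9 \neq 0$. A short Python check of this identity (illustrative only):

```python
B = [
    [0, 1, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 1],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(7)) for j in range(7)]
            for i in range(7)]

Bt = [list(r) for r in zip(*B)]
expected = [[2 * (i == j) + 1 for j in range(7)] for i in range(7)]  # 2I + all-ones
ok_normal = matmul(B, Bt) == expected and matmul(Bt, B) == expected
print(ok_normal)  # True
```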
If we define
\[
U = \Phi_2^{-1} \, \Phi \, \Phi_1
, \]
then $U \colon L_2(\Omega_1) \to L_2(\Omega_2)$ is an isomorphism such that
\[
S^1_t = U^{-1} \, S^2_t \, U
\]
for all $t > 0$.
We have proved the following result.
\begin{thm} \label{tdrumce401}
The semigroups $S^1$ and $S^2$ are similar.
In particular, $\Delta^N_{\Omega_1}$ and $\Delta^N_{\Omega_2}$
are isospectral, i.e.\ they have the same sequence of
eigenvalues, even though $\Omega_1$ and $\Omega_2$ are not congruent.
\end{thm}
\section{Order properties of the similarity transform} \label{Sdrumce4}
We keep the notation of the previous section and consider the intertwining
isomorphism $U \colon L_2(\Omega_1) \to L_2(\Omega_2)$ more closely.
Recall that a linear map $R \colon L_2(\Omega_1) \to L_2(\Omega_2)$
is called {\bf positive} if $f \geq 0$ implies $R f \geq 0$ for all
$f \in L_2(\Omega_1)$.
We then write $R \geq 0$.
One calls $R$ a {\bf lattice homomorphism} if
$R(f \vee g) = Rf \vee Rg$ for all $f,g \in L_2(\Omega_1,\mathds{R})$.
The map $R$ is called {\bf disjointness preserving} if
$f \cdot g = 0$ a.e.\ implies $(Rf) \cdot (Rg) = 0$ a.e.\ for all
$f,g \in L_2(\Omega_1)$.
It is well known that $R$ is a lattice homomorphism if and only if
$R$ is positive and disjointness preserving.
Finally, we call $R$ an {\bf order isomorphism} or a
{\bf lattice isomorphism} if $R$ is bijective and both
$R$ and $R^{-1}$ are positive.
This is equivalent to $R$ being a bijective lattice homomorphism.
We recall from \cite{Are3} Theorem~3.20 the following result.
\begin{thm} \label{tdrumce402}
Let $\Omega_1,\Omega_2 \subset \mathds{R}^d$ be two Lipschitz domains.
If there exists an order isomorphism
$U \colon L_2(\Omega_1) \to L_2(\Omega_2)$ such that
\[
U \, S^1_t
= S^2_t \, U
\]
for all $t > 0$, then $\Omega_1$ and $\Omega_2$ are congruent.
\end{thm}
Here, as before, $S^1$ and $S^2$ are the semigroups generated
by $\Delta_{\Omega_1}^N$ and $\Delta_{\Omega_2}^N$, respectively.
Using Theorem~\ref{tdrumce402} it follows that the similarity transform $U$
in Theorem~\ref{tdrumce401} is not an order isomorphism.
In fact, $U = \Phi_2^{-1} \, \Phi \, \Phi_1$.
Recall that $\Phi \colon L_2(T)^7 \to L_2(T)^7$ is
given by the matrix $B$, which is clearly positive.
Thus $\Phi$ is a positive map.
Since $\Phi_1$ and $\Phi_2$ are order isomorphisms,
$U$ is also positive.
Hence $\Phi^{-1}$, and equivalently also $U^{-1}$, is not positive.
It is easy to see that a map from $L_2(T)^7$ into $L_2(T)^7$
given by a matrix as above is disjointness preserving if and only if
each row in the matrix has at most one nonzero entry.
By way of contrast, our matrix $B$ has three nonzero entries in each row.
It follows that our $\Phi$ is the sum of three lattice homomorphisms.
This shows directly that $\Phi$ and $U$ are not disjointness preserving.
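The decomposition into lattice homomorphisms amounts to writing $B$ as a sum of three permutation matrices; such a decomposition exists since $B$ is a $0$-$1$ matrix with exactly three ones in every row and column (a K\"onig/Birkhoff-type argument). The following Python sketch (illustrative only) exhibits one such decomposition by backtracking:

```python
def peel_permutations(rows, k):
    # Try to peel k systems of distinct representatives (permutations)
    # off the support sets `rows` by backtracking.
    if k == 0:
        return [] if all(not r for r in rows) else None

    def sdr(i, used, choice):
        if i == 7:
            return list(choice)
        for j in sorted(rows[i]):
            if j not in used:
                used.add(j)
                choice.append(j)
                res = sdr(i + 1, used, choice)
                if res is not None:
                    return res
                choice.pop()
                used.discard(j)
        return None

    p = sdr(0, set(), [])
    if p is None:
        return None
    rest = peel_permutations([r - {p[i]} for i, r in enumerate(rows)], k - 1)
    return None if rest is None else [p] + rest

# positions of the ones in each row of B (1-based column indices)
support = [{2, 3, 4}, {1, 3, 7}, {1, 4, 5}, {1, 2, 6},
           {4, 6, 7}, {2, 5, 7}, {3, 5, 6}]
perms = peel_permutations([set(s) for s in support], 3)
ok = perms is not None and all(
    sorted(p[i] for p in perms) == sorted(support[i]) for i in range(7))
print(ok)  # True
```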
Finally, we mention that the intertwining isomorphism $U$ and the
matrix $B$ that induces it are not unique. If we let $\mathds{1}$ denote
the $7 \times 7$ matrix whose $(k,l)$-th entry is $1$ for all $k,l\in\{1,\ldots,7\}$,
and define $\widehat B:= \alpha (\mathds{1}-B) + \gamma B$, then it may be verified that
$\widehat B$ gives rise in the same way to another intertwining
isomorphism $\widehat \Phi$, provided that the coefficients $\alpha,\gamma \in \mathds{R}$
satisfy basic non-degeneracy conditions.
Our original matrix $B$ and the similarity transform $\Phi$ that it induces
are easily seen to be normal but not unitary.
We know from Section~\ref{Sdrumce2} that in such a case one can always find
a unitary transform related to $\Phi$, for example, via the polar decomposition
$\Phi = U \, |\Phi|$.
However, if we choose the coefficients $\alpha$ and $\gamma$ appropriately,
namely, as a pair of simultaneous solutions to $4\alpha^2 + 3\gamma^2 = 1$ and
$2\alpha^2 + 4\alpha\gamma + \gamma^2 = 0$, then it is easy to check that
$(\widehat B)^* \, \widehat B = I$, that is, the matrix $\widehat B$ is unitary.
In this case, the similarity transform associated with $\widehat B$ is also
unitary, and one may check that one of the operators thus obtained coincides
with the $U$ obtained from the polar decomposition of our original transform $\Phi$.
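The existence of real solutions $\alpha,\gamma$ and the unitarity of the resulting $\widehat B$ are easily checked numerically. In the Python sketch below (an illustration only), the root $\gamma/\alpha = -2+\sqrt{2}$ of $2 + 4t + t^2 = 0$ is normalized so that $4\alpha^2 + 3\gamma^2 = 1$.

```python
import math

B = [
    [0, 1, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 1],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
]

# Solve 2a^2 + 4ag + g^2 = 0 for t = g/a, then normalize via 4a^2 + 3g^2 = 1.
t = -2 + math.sqrt(2)                 # a root of t^2 + 4t + 2 = 0
a = 1 / math.sqrt(4 + 3 * t * t)
g = a * t

Bhat = [[a * (1 - B[i][j]) + g * B[i][j] for j in range(7)] for i in range(7)]

# Check (Bhat)^T Bhat = I numerically.
prod = [[sum(Bhat[k][i] * Bhat[k][j] for k in range(7)) for j in range(7)]
        for i in range(7)]
err = max(abs(prod[i][j] - (i == j)) for i in range(7) for j in range(7))
print(err < 1e-12)  # True
```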
We note that B\'erard \cite{Ber2} assumes from the beginning of his construction
that his matrix is orthogonal by imposing a restriction equivalent to the one just
stated for $\alpha$ and $\gamma$.
The cases $\alpha=0$, $\gamma=1$ and $\alpha=\gamma=1$ (for which the
respective matrices are not orthogonal) correspond respectively to the mappings
$T_3$ and $T_4$ considered in \cite{BCDS} Section~2.
\section{Isospectral domains for general elliptic operators} \label{Sdrumce5}
We will now generalize our construction from Section~\ref{Sdrumce3} to allow for
general non-self-adjoint elliptic operators on $L_2(\Omega_1)$ and $L_2(\Omega_2)$.
These operators still have compact resolvent, but are in general not self-adjoint.
Hence the original formulation involving isospectrality is not strong enough.
It turns out, however, that the machinery of the previous sections works in exactly
the same fashion to give the desired similarity of the operators in the general case.
We start with a basic lemma describing how elliptic differential sectorial forms transform
under isometries.
\begin{lemma} \label{ldrumce501}
Let $\Omega \subset \mathds{R}^d$ be an open set,
$C = (c_{ij})_{i,j \in \{1,\ldots,d \} } \colon \Omega \to M_{d \times d}(\mathds{C})$
a bounded measurable map and $\tau$ an isometry.
Define $a \colon H^1(\Omega) \times H^1(\Omega) \to \mathds{C}$ by
\begin{equation}
a(u,v)
= \int_\Omega \sum_{i,j=1}^d c_{ij} \, (\partial_i u) \, \overline{\partial_j v}
\label{ldrumce501;2}
\end{equation}
and $\widehat \Omega = \tau(\Omega)$.
Define the bounded measurable map
$C_\tau = \widehat C = (\hat c_{ij})_{i,j \in \{1,\ldots,d \} } \colon \widehat \Omega \to M_{d \times d}(\mathds{C})$
by
\begin{equation}
C_\tau(y)
= \widehat C(y)
= (D \tau) \, C(\tau^{-1}(y)) \, (D \tau)^{-1}
,
\label{eldrumce501;1}
\end{equation}
where $D \tau$ denotes the derivative of $\tau$.
Define the form $\hat a \colon H^1(\widehat \Omega) \times H^1(\widehat \Omega) \to \mathds{C}$
by
\[
\hat a(u,v)
= \int_{\widehat \Omega} \sum_{i,j=1}^d \hat c_{ij} \, (\partial_i u) \, \overline{\partial_j v}
. \]
Then $\hat a(u,v) = a(u \circ \tau, v \circ \tau)$ for all
$u,v \in H^1(\widehat \Omega)$.
\end{lemma}
\begin{proof}
Denote by $\langle \cdot , \cdot \rangle$ the inner product on $\mathds{C}^d$.
Then
\begin{eqnarray*}
a(u \circ \tau, v \circ \tau)
& = & \int_\Omega \langle C^t \nabla(u \circ \tau), \nabla (v \circ \tau) \rangle \\
& = & \int_\Omega \langle C^t \, (D \tau)^t \, ((\nabla u) \circ \tau),
(D \tau)^t \, ((\nabla v) \circ \tau) \rangle \\
& = & \int_\Omega \langle (D \tau) \, C^t \, (D \tau)^t \, ((\nabla u) \circ \tau),
((\nabla v) \circ \tau) \rangle \\
& = & \int_{\widehat \Omega}
\langle (D \tau) \, (C^t \circ \tau^{-1}) \, (D \tau)^t \, \nabla u, \nabla v \rangle \\
& = & \hat a(u,v)
\end{eqnarray*}
as required.
\end{proof}
Let $\Omega \subset \mathds{R}^d$ be an open set and
$C = (c_{ij})_{i,j \in \{1,\ldots,d \} } \colon \Omega \to M_{d \times d}(\mathds{C})$
a bounded measurable map.
Define $a \colon H^1(\Omega) \times H^1(\Omega) \to \mathds{C}$ by
\[
a(u,v)
= \int_\Omega \sum_{i,j=1}^d c_{ij} \, (\partial_i u) \, \overline{\partial_j v}
. \]
Suppose that there exists a $\mu > 0$ such that
\begin{equation}
\mathop{\rm Re} \sum_{i,j=1}^d c_{ij} \, \xi_i \, \overline{\xi_j}
\geq \mu \, |\xi|^2 \qquad \mbox{ for all } \xi \in \mathds{C}^d
\label{eSdrumce5;1}
\end{equation}
almost everywhere on $\Omega$.
Then the form $a$ is elliptic.
Let $A$ be the operator associated with the form~$a$ on $L_2(\Omega)$.
Note that $A$ is self-adjoint if $c_{ij} = \overline{c_{ji}}$ a.e.\ for all
$i,j \in \{ 1,\ldots,d \} $.
We emphasize that we do not assume this.
If $\Omega$ is bounded and Lipschitz, then by a result of Auscher--Tchamitchian
\cite{AT5} the form $a$ has the square root property on $L_2(\Omega)$.
Also note that if $C$ is the identity matrix, then $A$ is the Neumann Laplacian.
If $\tau$ is an isometry and $\widehat C$ is as in Lemma~\ref{ldrumce501},
then also $\widehat C$ is the identity matrix.
So the Neumann Laplacian is transformed into the Neumann Laplacian, and
the proof of (\ref{edrumce3b;1}) is a special case of the previous lemma.
This is one of the remarkable properties of the Laplacian.
However, if we consider an elliptic operator, then we have to take into
account the conjugation with the derivative of the isometry.
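The following numpy sketch (our own illustration, not needed for the proofs) checks this point for a constant complex coefficient matrix and a rotation: conjugating $C$ with an orthogonal $D\tau$ leaves the ellipticity constant in (\ref{eSdrumce5;1}) unchanged, because the Hermitian part of the coefficient matrix undergoes an orthogonal similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# A complex coefficient matrix; the shift makes Re <C xi, xi> coercive.
C = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) + 3.0 * np.eye(d)

theta = 0.7  # derivative of a rotation: an orthogonal matrix Q
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

C_tau = Q @ C @ np.linalg.inv(Q)  # coefficients transformed as in the lemma

def mu(M):
    """Smallest eigenvalue of the Hermitian part: the ellipticity constant."""
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min()

assert np.isclose(mu(C), mu(C_tau))  # ellipticity constant is preserved
```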
Next, let $C$ be a bounded measurable elliptic matrix valued function on our reference triangle~$T$.
Thus
\[
C = (c_{ij})_{i,j \in \{1,2 \} } \colon T \to M_{2 \times 2}(\mathds{C})
\]
is a bounded measurable map satisfying the ellipticity condition
(\ref{eSdrumce5;1}).
Let $\Omega_1$ and $\Omega_2$ be the two propellers as before.
Define the form $a_1 \colon H^1(\Omega_1) \times H^1(\Omega_1) \to \mathds{C}$ by
\begin{equation}
a_1(u,v) = \sum_{k=1}^7 \sum_{i,j=1}^2 \int_{T_k} (C_{\tau_k})_{ij} \, (\partial_i u) \, \overline{\partial_j v}
, \label{eSdrumce5;6}
\end{equation}
where for all $k \in \{ 1,\ldots,7 \} $ the isometry $\tau_k$ is as in Section~\ref{Sdrumce3}
and $C_{\tau_k}$ is defined in (\ref{eldrumce501;1}).
We define the form $a_2 \colon H^1(\Omega_2) \times H^1(\Omega_2) \to \mathds{C}$
analogously.
Let $n \in \{ 1,2 \} $.
Then $a_n$ is elliptic.
Let $A_n$ be the operator associated with $a_n$ on $L_2(\Omega_n)$ and
let $S^n$ be the semigroup generated by $- A_n$ on $L_2(\Omega_n)$.
Next define the form $\tilde a \colon H^1(T) \times H^1(T) \to \mathds{C}$ by
\begin{equation}
\tilde a(u,v) = \sum_{i,j=1}^2 \int_T c_{ij} \, (\partial_i u) \, \overline{\partial_j v}
. \label{eSdrumce5;5}
\end{equation}
Moreover, define the form $\tilde a_n \colon V_n \times V_n \to \mathds{C}$ by
\begin{equation}
\tilde a_n(u,v)
= \sum_{k=1}^7 \tilde a(u_k,v_k)
. \label{eSdrumce5;7}
\end{equation}
Let $\widetilde A_n$ be the operator associated with $\tilde a_n$ on $L_2(T)^7$
and let $\widetilde S^n$ be the semigroup generated by $- \widetilde A_n$ on $L_2(T)^7$.
Using Lemma~\ref{ldrumce501} it follows as in Section~\ref{Sdrumce3} that
\[
\Phi_n \, S^n_t \, (\Phi_n)^{-1}
= \widetilde S^n_t
\]
for all $t > 0$, where $\Phi_n$ is the {\em same} transform as in
Section~\ref{Sdrumce3}.
Arguing as in Section~\ref{Sdrumce3} one has
\[
\Phi \, \widetilde S^1_t = \widetilde S^2_t \, \Phi
\]
for all $t > 0$, where surprisingly $\Phi$ is, again, the same transform as in
Section~\ref{Sdrumce3}.
Therefore we have proved the following theorem.
\begin{thm} \label{tdrumce502}
Let $U = \Phi_2^{-1} \, \Phi \, \Phi_1$.
Then
\[
S^1_t = U^{-1} \, S^2_t \, U
\]
for all $t > 0$.
In particular, the operators $A_1$ on $L_2(\Omega_1)$ and
$A_2$ on $L_2(\Omega_2)$ are similar even though $\Omega_1$ and $\Omega_2$
are not congruent.
\end{thm}
\section{Isospectral elliptic operators with Dirichlet boundary conditions} \label{Sdrumce6}
In this section we wish to extend Theorem~\ref{tdrumce502} to the case
of Dirichlet boundary conditions.
All the arguments are the same as before, but now we have to impose more
boundary conditions on the Sobolev spaces.
Let $\Omega \subset \mathds{R}^d$ be open and
$C = (c_{ij}) \colon \Omega \to M_{d \times d}(\mathds{C})$
a bounded measurable map.
Assume that $C$ satisfies the ellipticity condition (\ref{eSdrumce5;1}).
Let $a$ be as in (\ref{ldrumce501;2}) and let $a^D = a|_{H^1_0(\Omega) \times H^1_0(\Omega)}$,
where $H^1_0(\Omega)$ is the closure of $C_c^\infty(\Omega)$ in
$H^1(\Omega)$.
Then the operator associated with $a^D$ in $L_2(\Omega)$ is the
corresponding elliptic differential operator with Dirichlet boundary conditions.
For domains with Lipschitz boundary there is a useful characterization
of the Sobolev space $H^1_0(\Omega)$.
\begin{lemma} \label{ldrumce601}
Suppose $\Omega \subset \mathds{R}^d$ is open and $\Omega$ has a Lipschitz boundary.
Then
\[
H^1_0(\Omega)
= \{ u \in H^1(\Omega) : {\mathop{\rm Tr \,}} u = 0 \ \sigma \mbox{-a.e.} \}
. \]
\end{lemma}
\begin{proof}
See \cite{Alt} Lemma~A.6.10.
\end{proof}
Now we return to the two propellers.
Let
\[
C = (c_{ij})_{i,j \in \{1,2 \} } \colon T \to M_{2 \times 2}(\mathds{C})
\]
be a bounded measurable map satisfying the ellipticity condition
(\ref{eSdrumce5;1}).
Let $n \in \{ 1,2 \} $.
Let $a_n \colon H^1(\Omega_n) \times H^1(\Omega_n) \to \mathds{C}$ be as in (\ref{eSdrumce5;6})
and set
\[
a_n^D = a_n|_{H^1_0(\Omega_n) \times H^1_0(\Omega_n)}
. \]
Let $A_n^D$ be the operator associated with $a_n^D$
on $L_2(\Omega_n)$ and let $S^{D,n}$ be the semigroup generated
by $-A_n^D$.
Let $\Phi_n \colon L_2(\Omega_n) \to L_2(T)^7$ be as in Section~\ref{Sdrumce3}.
Define $V_n^D = \Phi_n(H^1_0(\Omega_n))$.
Let $\tilde a \colon H^1(T) \times H^1(T) \to \mathds{C}$ be as in (\ref{eSdrumce5;5}).
Define $\tilde a_n^D \colon V_n^D \times V_n^D \to \mathds{C}$ by
\[
\tilde a_n^D(u,v)
= \sum_{k=1}^7 \tilde a(u_k,v_k)
. \]
So $\tilde a_n^D = \tilde a_n|_{V_n^D \times V_n^D}$,
where $\tilde a_n$ is as in (\ref{eSdrumce5;7}).
Let $\widetilde A_n^D$ be the operator associated with $\tilde a_n^D$
on $L_2(T)^7$ and let $\widetilde S^{D,n}$ be the semigroup
generated by $- \widetilde A_n^D$ on $L_2(T)^7$.
Then as before one has
\[
\Phi_n \, S^{D,n}_t \, \Phi_n^{-1}
= \widetilde S^{D,n}_t
\]
for all $t > 0$.
Next we determine $V_n^D$.
Since Lemma~\ref{ldrumce601} imposes boundary conditions on the
9 parts of the boundary of $\Omega_n$, one has
\begin{eqnarray*}
V_1^D = \{ (u_1,\ldots,u_7) \in H^1(T)^7
& : & u_1 = u_2 \mbox{ and } u_4 = u_7 \mbox{ and } u_3 = u_5 = u_6 = 0 \mbox{ on } \Gamma_1 \\
& & u_1 = u_3 \mbox{ and } u_2 = u_5 \mbox{ and } u_4 = u_6 = u_7 = 0 \mbox{ on } \Gamma_2 \\
& & u_1 = u_4 \mbox{ and } u_3 = u_6 \mbox{ and } u_2 = u_5 = u_7 = 0 \mbox{ on } \Gamma_3 \}
\end{eqnarray*}
and
\begin{eqnarray*}
V_2^D = \{ (u_1,\ldots,u_7) \in H^1(T)^7
& : & u_1 = u_2 \mbox{ and } u_3 = u_6 \mbox{ and } u_4 = u_5 = u_7 = 0 \mbox{ on } \Gamma_1 \\
& & u_1 = u_3 \mbox{ and } u_4 = u_7 \mbox{ and } u_2 = u_5 = u_6 = 0 \mbox{ on } \Gamma_2 \\
& & u_1 = u_4 \mbox{ and } u_2 = u_5 \mbox{ and } u_3 = u_6 = u_7 = 0 \mbox{ on } \Gamma_3 \}
.
\end{eqnarray*}
Now define $B^D = (b^D_{kl})_{k,l \in \{ 1,\ldots,7 \} } \colon \mathds{R}^7 \to \mathds{R}^7$ by
\[
B^D = \left( \begin{array}{ccccccc}
0 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & -1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & -1 & 1 & 0 & 0 \\
1 & -1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & -1 & -1 \\
0 & 1 & 0 & 0 & -1 & 0 & -1 \\
0 & 0 & 1 & 0 & -1 & -1 & 0
\end{array} \right)
\]
and define $\Phi^D \colon L_2(T)^7 \to L_2(T)^7$ by
\[
(\Phi^D u)_k = \sum_{l=1}^7 b^D_{kl} \, u_l
. \]
A surprising but simple calculation shows that
\[
\Phi^D(V^D_1) \subset V^D_2
\quad \mbox{and} \quad (\Phi^D)^*(V^D_2) \subset V^D_1
. \]
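That simple calculation can also be checked mechanically. The Python sketch below (our own illustrative verification; indices are $0$-based in code versus the $1$-based labels in the text) parametrizes the traces of an element of $V_1^D$ on each edge $\Gamma_g$ by two free values and confirms that $\Phi^D$ produces traces satisfying the defining conditions of $V_2^D$.

```python
import numpy as np

# The matrix B^D from the text (rows and columns 0-based here)
B_D = np.array([
    [ 0,  1,  1,  1,  0,  0,  0],
    [ 1,  0, -1,  0,  0,  0,  1],
    [ 1,  0,  0, -1,  1,  0,  0],
    [ 1, -1,  0,  0,  0,  1,  0],
    [ 0,  0,  0,  1,  0, -1, -1],
    [ 0,  1,  0,  0, -1,  0, -1],
    [ 0,  0,  1,  0, -1, -1,  0],
], dtype=float)

a, b = 1.3, -0.7  # free trace values on the edge under consideration

# traces of an element of V_1^D on Gamma_1, Gamma_2, Gamma_3
u_on = {
    1: np.array([a, a, 0, b, 0, 0, b]),  # u1=u2, u4=u7, u3=u5=u6=0
    2: np.array([a, b, a, 0, b, 0, 0]),  # u1=u3, u2=u5, u4=u6=u7=0
    3: np.array([a, 0, b, a, 0, b, 0]),  # u1=u4, u3=u6, u2=u5=u7=0
}

# defining conditions of V_2^D on each edge: (pairs that agree, entries that vanish)
v_cond = {
    1: ([(0, 1), (2, 5)], [3, 4, 6]),  # v1=v2, v3=v6, v4=v5=v7=0
    2: ([(0, 2), (3, 6)], [1, 4, 5]),  # v1=v3, v4=v7, v2=v5=v6=0
    3: ([(0, 3), (1, 4)], [2, 5, 6]),  # v1=v4, v2=v5, v3=v6=v7=0
}

for g in (1, 2, 3):
    v = B_D @ u_on[g]
    pairs, zeros = v_cond[g]
    assert all(abs(v[i] - v[j]) < 1e-12 for i, j in pairs)
    assert all(abs(v[i]) < 1e-12 for i in zeros)
```

The analogous check for $(\Phi^D)^*(V^D_2) \subset V^D_1$, using the transposed matrix, runs the same way.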
Literally the same argument as in (\ref{eSdrumce4;1}) gives
\[
\tilde a^D_2(\Phi^D u, v)
= \tilde a^D_1(u, (\Phi^D)^* v)
\]
for all $u \in V^D_1$ and $v \in V^D_2$.
Therefore Proposition~\ref{pdrumce202} implies that
\[
\Phi^D \, \widetilde S^{D,1}_t = \widetilde S^{D,2}_t \, \Phi^D
\]
for all $t > 0$.
Hence we have extended everything to Dirichlet boundary conditions.
\begin{thm} \label{tdrumce602}
Let $U^D = \Phi_2^{-1} \, \Phi^D \, \Phi_1$.
Then
\[
S^{D,1}_t = (U^D)^{-1} \, S^{D,2}_t \, U^D
\]
for all $t > 0$.
In particular, the operators $A^D_1$ on $L_2(\Omega_1)$ and
$A^D_2$ on $L_2(\Omega_2)$ are isospectral even though $\Omega_1$ and $\Omega_2$
are not congruent.
\end{thm}
\section{Operators with Robin boundary conditions} \label{Sdrumce7}
If we again let $\Omega \subset \mathds{R}^2$ be an arbitrary polygon, or more generally
Lipschitz planar domain, with boundary $\Gamma$, then for all $\beta \in \mathds{R}$
we define a new form
$a^\beta \colon H^1(\Omega) \times H^1(\Omega) \to \mathds{C}$ by
\begin{equation}
a^\beta(u,v) = \int_\Omega \nabla u \cdot \overline{\nabla v}
+ \beta \int_\Gamma u\,\overline{v}.
\label{eSdrumce7;1}
\end{equation}
It follows from the Trace Inequality that the form $a^\beta$ is continuous and
$L_2(\Omega)$-elliptic for all $\beta \in \mathds{R}$.
We denote by $-\Delta^\beta_\Omega$ the operator on $L_2(\Omega)$ associated
with $a^\beta$ and call $\Delta^\beta_\Omega$ the {\bf Robin Laplacian}
with boundary coefficient $\beta$, which has domain given by
\[
D(\Delta^\beta_\Omega) = \{u \in H^1(\Omega):\Delta u\in L_2(\Omega)
\mbox{ and } \partial_\nu u+\beta u=0 \mbox{ on } \Gamma\}
. \]
In the boundary condition $\partial_\nu u+\beta u=0$, the normal derivative
$\partial_\nu u$ is defined by {\rm (\ref{eSdrumce3;0})}, as in the case of the
Neumann Laplacian, and by $u$ we mean the trace of $u$ on $\Gamma$.
As is true of its Dirichlet and Neumann counterparts, the Robin
Laplacian is self-adjoint and has compact resolvent, and its negative is bounded
from below.
We denote by $S^\beta$ the semigroup generated by $\Delta^\beta_\Omega$.
When $\beta=0$ we recover the Neumann Laplacian, and for
$\beta \in (0,\infty)$, the Robin Laplacian `interpolates' between the
Dirichlet and Neumann Laplacians in a strong sense
\cite{AW}.
The boundary condition $\partial_\nu u+\beta u=0$ corresponds to an
`elastically supported membrane'.
So if we interpret the Dirichlet boundary condition as representing a drum
with a taut membrane and the Neumann condition naively as representing
a gong, then the Robin condition describes a drum whose membrane is
not properly attached to the body of the drum, but rather allowed to
move a little as the membrane vibrates.
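A one-dimensional toy computation (entirely illustrative; the discretization and parameters are ours, not from the text) makes this interpolation visible: for $-u''$ on $(0,1)$ with Robin conditions at both endpoints, the first eigenvalue is $0$ at $\beta = 0$ (Neumann) and increases towards the Dirichlet value $\pi^2$ as $\beta \to \infty$.

```python
import numpy as np

def first_robin_eigenvalue(beta, n=200):
    """First eigenvalue of -u'' on (0,1) with d_nu u + beta*u = 0 at both ends,
    computed with P1 finite elements and a lumped (diagonal) mass matrix."""
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    for i in range(n):  # assemble the stiffness matrix of int u'v'
        K[i:i + 2, i:i + 2] += np.array([[1, -1], [-1, 1]]) / h
    K[0, 0] += beta     # Robin boundary term beta * u * v at each endpoint
    K[n, n] += beta
    m = np.full(n + 1, h)
    m[0] = m[n] = h / 2
    w = 1.0 / np.sqrt(m)
    # symmetrized generalized eigenproblem M^{-1/2} K M^{-1/2}
    return np.linalg.eigvalsh(w[:, None] * K * w[None, :]).min()

vals = [first_robin_eigenvalue(b) for b in (0.0, 1.0, 10.0, 1e6)]
# beta = 0 reproduces the Neumann eigenvalue 0; large beta approaches pi^2
```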
Our goal is to show that no operator formed as a sum of superimposing isometries
between component triangles of $\Omega_1$ and $\Omega_2$ (as in the
Neumann and Dirichlet cases) can intertwine the Robin Laplacians
$\Delta^\beta_{\Omega_1}$ and $\Delta^\beta_{\Omega_2}$ for any
$\beta \neq 0$.
We make this statement precise by recalling some notation from
Section~\ref{Sdrumce3}.
If we first consider $\Omega_1$, we recall that $\Phi_1 \colon L_2 (\Omega_1) \to
L_2(T)^7$ is the unitary operator associated with the family of isometries
$\tau_k \colon T \to T_k$, such that
\[
\Phi_1(w) = (w|_{T_1} \circ \tau_1,\ldots, w|_{T_7} \circ \tau_7)
\]
for all $w \in L_2(\Omega_1)$, and moreover $\Phi_1 (H^1(\Omega_1)) = V_1$.
Note that since the Robin Laplacian has the same form domain as the Neumann
Laplacian, $\Phi_1$ is still the correct operator to use in this case.
However, we now wish to consider the image of $\partial\Omega_1$, the
boundary of $\Omega_1$, under the isometries $\tau_k$.
We write
\[
\Gamma^1_k := \tau_k^{-1}(\partial \Omega_1 \cap \overline T_k)
\]
for all $k \in \{ 1,\ldots,7 \} $.
Then $\Gamma^1_k \subset \Gamma_1 \cup \Gamma_2 \cup \Gamma_3$;
for example, $\Gamma^1_1 = \emptyset$ and $\Gamma^1_5 = \Gamma_1
\cup \Gamma_3$.
(Cf. Figures~\ref{fdrumce1} and \ref{fdrumce2}.)
For fixed $\beta \in \mathds{R}$ we now define a form $\tilde a^\beta_1 \colon V_1
\times V_1 \to \mathds{C}$ by
\[
\tilde a^\beta_1 (\Phi_1 u, \Phi_1 v) = a^\beta_1(u,v)
\]
for $u,v \in H^1(\Omega_1)$.
Then
\[
\tilde a^\beta_1 (u,v) = \sum_{k=1}^7 \int_T \nabla u_k \cdot
\overline{\nabla v_k} + \beta \int_{\Gamma^1_k} u_k \,\overline{v_k}
\]
for all $u = (u_1,\ldots, u_7), v = (v_1,\ldots, v_7) \in V_1$.
Note that $\tilde a^0_1$ coincides with the form $\tilde a_1$ introduced in
(\ref{epdrumce3b01}).
We do the same for $\Omega_2$, so that the unitary operator
$\Phi_2 \colon L_2(\Omega_2) \to L_2(T)^7$ intertwines the forms $a^\beta_2$
and $\tilde a^\beta_2 \colon V_2 \times V_2 \to \mathds{C}$ given by
\[
\tilde a^\beta_2 (u,v) = \sum_{k=1}^7 \int_T \nabla u_k \cdot
\overline{\nabla v_k} + \beta \int_{\Gamma^2_k} u_k \,\overline{v_k}.
\]
For an arbitrary invertible matrix $P \colon \mathds{R}^7 \to \mathds{R}^7$ given by
$P=(p_{kl})$ we construct an associated operator $\Phi \colon L_2(T)^7 \to
L_2(T)^7$ by setting
\begin{equation}
(\Phi u)_k = \sum_{l=1}^7 p_{kl} \, u_l \qquad \mbox{and} \quad (\Phi^* u)_l
= \sum_{k=1}^7 p_{kl} \, u_k,
\label{eSdrumce7;2}
\end{equation}
where $(u_1,\ldots,u_7) \in L_2(T)^7$.
We will prove that, for any $\beta \neq 0$, there is no matrix $P$ such that
the associated operator $\Phi \colon L_2(T)^7 \to L_2(T)^7$ satisfies $\Phi(V_1)
\subset V_2$, $\Phi^*(V_2) \subset V_1$ and
\begin{equation}
\tilde a^\beta_2(\Phi u,v) = \tilde a^\beta_1(u, \Phi^* v)
\label{eSdrumce7;3}
\end{equation}
for all $u\in V_1$ and $v\in V_2$.
Since this is equivalent to the non-existence of an operator
$U = \Phi_2^{-1} \, \Phi \, \Phi_1 \colon L_2(\Omega_1) \to L_2(\Omega_2)$ intertwining
$a^\beta_1$ and $a^\beta_2$, the impossibility of (\ref{eSdrumce7;3})
then implies via Proposition~\ref{pdrumce202} that the Robin Laplacians
cannot be intertwined by an operator expressible as a sum of isometries
between the triangles.
To that end, we first show that the question as to whether (\ref{eSdrumce7;3}) holds
is independent of the coefficient $\beta \neq 0$.
\begin{prop} \label{pdrumce701}
Let $\Phi \in {\cal L} (L_2(T)^7, L_2(T)^7)$ be defined by
{\rm (\ref{eSdrumce7;2})} and satisfy $\Phi(V_1) \subset V_2$
and $\Phi^*(V_2) \subset V_1$.
If {\rm (\ref{eSdrumce7;3})}
holds for some $\beta \in \mathds{R} \setminus \{0\}$, then the same is
true for all $\beta \in \mathds{R}$.
\end{prop}
\begin{proof}
This follows easily from the definition (\ref{eSdrumce7;2}) of the operator $\Phi$.
Just as in the Neumann case (cf.~(\ref{eSdrumce4;1})), we have
\begin{equation}
\tilde a^\beta_2(\Phi u, v)= \sum_{k=1}^7 \sum_{l=1}^7 p_{kl}
\Big( \int_T \nabla u_l \cdot \overline{\nabla v_k}
+\beta\int_{\Gamma^2_k}u_l \, \overline{v_k}\Big),
\label{pdrumce701-2}
\end{equation}
while
\begin{equation}
\tilde a^\beta_1(u, \Phi^* v) = \sum_{k=1}^7 \sum_{l=1}^7 p_{kl}
\Big( \int_T \nabla u_l \cdot \overline{\nabla v_k}
+\beta\int_{\Gamma^1_l}u_l \, \overline{v_k}\Big).
\label{pdrumce701-3}
\end{equation}
By assumption, the two are equal, and so
\begin{equation}
\beta \sum_{k=1}^7 \sum_{l=1}^7 p_{kl} \Big(\int_{\Gamma^2_k}u_l \, \overline{v_k}
- \int_{\Gamma^1_l} u_l \, \overline{v_k} \Big)=0.
\label{pdrumce701-4}
\end{equation}
Since $\beta \neq 0$, if we take any other $\beta_0 \in \mathds{R}$ and multiply
(\ref{pdrumce701-4}) by $\beta_0/\beta$, we see from (\ref{pdrumce701-2})
and (\ref{pdrumce701-3}) applied to $\beta_0$ that (\ref{eSdrumce7;3})
must hold for $\beta_0$.
\end{proof}
Our next result, which applies to any bounded Lipschitz domains $\omega_1$ and
$\omega_2$ in $\mathds{R}^d$,
states that if a unitary operator $U$ intertwines two
Robin Laplacians for two separate values of $\beta\in\mathds{R}$, then the {\em same}
operator intertwines the Robin Laplacians for {\em all} values of $\beta\in\mathds{R}$,
including the Neumann Laplacians, as well as the Dirichlet Laplacians, and
also acts isometrically on the traces of functions in $H^1(\omega_1)$.
\begin{prop} \label{pdrumce702}
Let $\omega_1$ and $\omega_2$ be bounded Lipschitz domains in $\mathds{R}^d$.
For all $\beta \in \mathds{R}$ denote by $a^\beta_1$ and $a^\beta_2$ the
forms given by {\rm (\ref{eSdrumce7;1})} on $\omega_1$ and $\omega_2$.
Suppose $U \in {\cal L}(L_2(\omega_1), L_2(\omega_2))$ is unitary, with
$U (H^1(\omega_1)) = H^1(\omega_2)$. The following statements
are equivalent.
\begin{tabeleq}
\item \label{pdrumce702-1}
There exist $\beta_1,\beta_2 \in \mathds{R}$ with
$\beta_1 \neq \beta_2$ such that
\[
a^{\beta_n}_1 (u,v) = a^{\beta_n}_2(U u, U v)
\]
for all $u,v\in H^1(\omega_1)$ and $n \in \{ 1,2 \} $.
\item \label{pdrumce702-2}
The operator $U$ intertwines the Neumann Laplacians on
$\omega_1$ and $\omega_2$. Moreover, if $u,v \in H^1(\omega_1)$, then
\begin{equation}
\int_{\partial\omega_1} u\, \overline v = \int_{\partial\omega_2} (U u) \, \overline{(U v)},
\label{pdrumce702-3}
\end{equation}
where by $u$ we mean the trace of $u$, etc.
\end{tabeleq}
Moreover, if these equivalent conditions are satisfied, then
$U(H^1_0(\omega_1))=H^1_0(\omega_2)$ and $U$ intertwines the
Dirichlet Laplacians on $\omega_1$ and $\omega_2$.
\end{prop}
\begin{proof}
`\ref{pdrumce702-1}$\Rightarrow$\ref{pdrumce702-2}'.
By writing out the form condition in \ref{pdrumce702-1} for $\beta_1$ and
$\beta_2$ and taking the difference of the two expressions, we obtain
directly that
\begin{equation}
\int_{\omega_1} \nabla u \cdot \overline{\nabla v}
= \int_{\omega_2} \nabla (U u)\cdot \overline{\nabla (U v)}
\quad\mbox{and}\quad
\int_{\partial\omega_1}u\, \overline v
= \int_{\partial\omega_2} (U u) \, \overline{(U v)}
\label{pdrumce702-4}
\end{equation}
for all $u,v \in H^1(\omega_1)$.
It follows immediately from Corollary~\ref{cdrumce204} that $U$
intertwines the Neumann Laplacians on $\omega_1$ and $\omega_2$.
`\ref{pdrumce702-2}$\Rightarrow$\ref{pdrumce702-1}'.
Fix $\beta \in \mathds{R}$ and $u,v \in H^1(\omega_1)$.
Since $U$ intertwines the Neumann Laplacians, by Corollary~\ref{cdrumce204}
it intertwines the associated forms.
Therefore
\begin{equation}
\int_{\omega_1} \nabla u \cdot \overline{\nabla v}
= \int_{\omega_2} \nabla (U u) \cdot \overline{\nabla (U v)}.
\label{pdrumce702-5}
\end{equation}
Moreover, since by assumption (\ref{pdrumce702-3}) holds, it follows directly
from the definition (\ref{eSdrumce7;1}) of $a^{\beta_n}$ that
\ref{pdrumce702-1} holds for all $\beta_1,\beta_2 \in \mathds{R}$.
Finally, to prove the last assertion, suppose that $w \in H^1_0(\omega_1)$.
Then (\ref{pdrumce702-4}) applied to $u=v=w$ implies that
\[
\int_{\partial\omega_2} |U w|^2 = \int_{\partial\omega_1} |w|^2 = 0.
\]
By Lemma~\ref{ldrumce601}, it follows that $U w \in H^1_0(\omega_2)$.
Thus $U(H^1_0(\omega_1)) \subset H^1_0(\omega_2)$.
Since $U^{-1} = U^*$ has exactly the same properties as $U$, an identical
argument shows that $U^{-1}(H^1_0(\omega_2)) \subset H^1_0(\omega_1)$
and therefore $U(H^1_0(\omega_1)) = H^1_0(\omega_2)$.
Moreover, it is clear that $U|_{H^1_0(\omega_1)}$ is still a continuous
linear bijection, and since (\ref{pdrumce702-5}) holds for all
$u,v \in H^1_0(\omega_1) \subset H^1(\omega_1)$, by
Corollary~\ref{cdrumce204} this means $U$ intertwines the Dirichlet
Laplacians.
\end{proof}
We now show that no one operator $\Phi$ of the form (\ref{eSdrumce7;2})
can simultaneously intertwine the Dirichlet and Neumann Laplacians,
which is a noteworthy
observation in its own right.
It is also worth noting that it can be proved by observing that the families of
matrices $\widehat B = \alpha \mathds{1}- \gamma B$ and $\widehat B^D := \alpha
\mathds{1}- \gamma B^D$, for nontrivial combinations of $\alpha$ and $\gamma$,
are the only ones giving rise to operators intertwining the Neumann and Dirichlet
Laplacians, respectively, and they have no matrix in common.
However, we give a different proof based on reflections.
The principle is that, if one reflects a triangle $T$ along one of its sides, and
wishes to reflect functions in $H^1(T)$ across to the larger domain, one does
so by taking even reflections along the common line.
But to preserve $H^1_0(T)$ the reflection should be odd.
\begin{prop} \label{pdrumce703}
No invertible operator $\Phi \colon L_2(T)^7 \to L_2(T)^7$ of the form
{\rm (\ref{eSdrumce7;2})} simultaneously satisfies the Neumann condition
\[
\Phi(V_1) \subset V_2 \quad \mbox{and} \quad \Phi^*(V_2) \subset V_1
\]
and the Dirichlet condition
\[
\Phi(V_1^D) \subset V_2^D \quad \mbox{and} \quad \Phi^*(V_2^D) \subset V_1^D.
\]
\end{prop}
\begin{proof}
Assume $\Phi$ is associated with $P = (p_{kl}) \colon \mathds{R}^7 \to \mathds{R}^7$.
Consider $m := p_{12}$.
Let $w \in C_c^\infty(T \cup \Gamma_3)$ be such that $w$ does not vanish
identically on $\Gamma_3$.
If we define $u = (0,w,0,\ldots,0)$, then it is easily checked that $u \in V_1$.
Moreover, we have $(\Phi u)_1 = m \, w$ and $(\Phi u)_4 = p_{42} \, w$, using
the definition {\rm (\ref{eSdrumce7;2})} of $\Phi$.
But since $\Phi u \in V_2$, we must have $(\Phi u)_1 = (\Phi u)_4$ on
$\Gamma_3$ in the sense of traces.
Since $w \not\equiv 0$ on $\Gamma_3$, this means $p_{42}=m$.
Next, choose $v = (w,0,0,w,0,0,0)$.
Then $v \in V_2^D$.
Moreover, $\Phi^* v = (0, 2m \, w, 0,0,0,0,0)$.
But $\Phi^* v \in V_1^D$ by assumption.
So $2 m \, w$ vanishes on $\Gamma_3$.
This implies that $p_{12} = m = 0$.
Arguing similarly, it follows that $p_{kl} = 0$ for all
$(k,l) \in \{ 1,\ldots,7 \} ^2 \setminus S$, where
$S = \{ (1,1), (2,4), (3,2), (4,3), (5,6), (6,7), (7,5) \} $.
Since $P$ is invertible, one has $p_{kl} \neq 0$ for all
$(k,l) \in S$.
Then
\[
\Phi u
= (p_{11} \, u_1,
p_{24} \, u_4,
p_{32} \, u_2,
p_{43} \, u_3,
p_{56} \, u_6,
p_{67} \, u_7,
p_{75} \, u_5)
. \]
If $w$ is as above, but one chooses this time
$u = (w,0,0,w,0,0,0)$, then $u \in V_1$.
So $\Phi u \in V_2$ by assumption.
Hence $(\Phi u)_1 = (\Phi u)_4$ on $\Gamma_3$, which implies that
$p_{11} \, w = 0$ on $\Gamma_3$.
This is a contradiction.
\end{proof}
Note that the same proof also works if $P$ has complex coefficients.
Our main result, that the Robin Laplacians on $\Omega_1$ and $\Omega_2$
are not intertwined by any operator acting as a linear combination of isometries
between triangles, now follows easily.
\begin{thm} \label{tdrumce701}
Suppose $\beta \neq 0$.
Then there does not exist an invertible operator $\Psi \colon L_2(\Omega_1)
\to L_2(\Omega_2)$ of the form $\Psi = \Phi_2^{-1} \, \Phi \, \Phi_1$, where
$\Phi \colon L_2(T)^7 \to L_2(T)^7$ is of the form {\rm (\ref{eSdrumce7;2})},
which intertwines $\Delta^\beta_{\Omega_1}$ and $\Delta^\beta_{\Omega_2}$.
\end{thm}
\begin{proof}
Suppose that there does exist such a $\Psi$, and therefore a $\Phi$ associated with
some invertible operator $P \colon \mathds{R}^7 \to \mathds{R}^7$.
By using the polar decomposition of $P$ (see Section~\ref{Sdrumce4}), we may
assume without loss of generality that $P$ and therefore also $\Phi$ and $\Psi$ are
unitary.
By Proposition~\ref{pdrumce701}, the map $\Phi$ satisfies (\ref{eSdrumce7;3})
for all $\beta \in \mathds{R}$ and therefore $\Psi$ intertwines both the Neumann
and Dirichlet Laplacians on $\Omega_1$ and $\Omega_2$ by Proposition~\ref{pdrumce702}.
But this contradicts Proposition~\ref{pdrumce703}.
\end{proof}
It is clear that the same method of proof works not only for more general elliptic operators,
but also for all known planar counterexamples, and indeed, should still be true for
all pairs of (Dirichlet or Neumann) isospectral domains for which Sunada's principle
applies.
In particular, there are no known pairs of noncongruent domains for which the Robin
Laplacians are isospectral (for any $\beta \neq 0$), and there is no reason
to suppose that any known Dirichlet or Neumann counterexamples
have this property.
\subsection*{Acknowledgements}
The authors wish to thank Moritz Gerlach for providing the pictures.
The second named author is most grateful for the hospitality extended
to him during a fruitful stay at the University of Ulm.
He wishes to thank the University of Ulm for financial support.
Part of this work is supported by the Marsden Fund Council from Government funding,
administered by the Royal Society of New Zealand.
The third named author is supported by a fellowship of the
Alexander von Humboldt Foundation, Germany.
\section{Background and Overview}
\label{sec:background}
\vspace{-0.1mm}
\subsection{Motivation and Problem Scope}
\label{sec:background:motivation}
\vspace{-0.2mm}
The Android platform is increasingly exposed to various malicious threats and attacks. As malware detection for Android systems is a response-sensitive task, our work addresses two primary research challenges -- \textit{inductive capability} and \textit{detection rapidness}. Anomaly identification should allow for predicting new applications that have not been seen before (the so-called \textit{out-of-sample} Apps) and should rapidly keep pace with up-to-date malicious attacks and threats, particularly considering the vast diversity and rapid growth of emerging malicious software.
The detection procedure is typically regarded as binary classification. Formally, we aim to take as input features $\mathcal{X}$ of Android Apps and their previous labels (malicious/benign) $\mathcal{T}$ to predict the type $t$ of any target App, either old or new.
Unfortunately, the existing approaches for malware detection are inadequate for tackling inductive problems, where a new application is arbitrary and unseen beforehand. Most prior work on network embedding~\cite{dong2017metapath2vec, wang2019heterogeneous,zhao2017meta,zhang2018metagraph2vec} is \textit{transductive}, i.e., if a new data point is added to the testing dataset, one has to thoroughly re-train the learning model.
Hence, malware detection is in great need of a generic \textit{inductive} learning model in which any new data can be predicted, based on an observed training set, without the need to re-run the whole learning algorithm from scratch.
\begin{figure}[t]
\centerline{\includegraphics[width=0.345\textwidth]{figs/arch.pdf}}
\vspace{-1.65mm}
\caption{\textsc{Hawk}\xspace architecture overview}
\vspace{-1.2mm}
\label{fig:sysarch}
\end{figure}
\vspace{-1mm}
\subsection{Our Approach of \textsc{Hawk}\xspace}
\label{sec:background:overview}
\vspace{-0.4mm}
\mypara{Key idea.}
We consider this problem as a semi-supervised learning task based on graph embedding. The first innovation of our approach, as a departure from prior work, is to encode the information as a structured heterogeneous information network (\textsc{Hin}\xspace)~\cite{sun2011pathsim}\cite{peng2021lime} wherein nodes depict entities and their characteristics. A \textsc{Hin}\xspace is a graph $G = (\mathcal{V},\mathcal{E},\mathcal{A},\mathcal{R})$ with an entity type mapping $\phi: \mathcal{V}\to\mathcal{A}$ and a relationship type mapping $\psi:\mathcal{E}\to\mathcal{R}$, where $\mathcal{V}$ and $\mathcal{E}$ represent the node and edge sets, respectively, and $\mathcal{A}$ and $\mathcal{R}$ denote the type sets of nodes and edges, with $|\mathcal{A}| + |\mathcal{R}| > 2$. Edges represent the relationships between a pair of entities (e.g., an App \textit{owns} a specific permission, or a permission \textit{belongs to} a permission type).
Since the detection problem is App-entity oriented, it is effective to reduce the information in the self-contained \textsc{Hin}\xspace to homogeneous relational subgraphs that can be directly absorbed by a GNN. As the fundamental requirement of graph embedding is to capture the graph structure, we need to calculate the adjacency matrix from the constructed \textsc{Hin}\xspace -- the best option to reflect the proximity and node connectivity in the graph. GNN models can subsequently be applied to learn the numerical embeddings for in-sample App nodes. To underpin continuous embedding learning for out-of-sample nodes, the learning model should make the best use of the embedding results of the existing in-sample App nodes, in an incremental manner.
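To illustrate how such a homogeneous App--App adjacency arises (a toy example with invented incidence data, not the actual extracted features): for a meta-path such as App$\,\to\,$permission$\,\to\,$App, the adjacency is obtained from the commuting product of the App--permission incidence matrix with its transpose.

```python
import numpy as np

# Toy App-permission incidence matrix (entities invented for illustration):
# rows are Apps, columns are permissions an App owns.
A_P = np.array([
    [1, 1, 0],  # app0 owns perm0, perm1
    [0, 1, 1],  # app1 owns perm1, perm2
    [0, 0, 1],  # app2 owns perm2
])

# Commuting matrix of the meta-path App -> permission -> App:
# entry (i, j) counts the permissions shared by app i and app j.
M = A_P @ A_P.T

# Homogeneous App-App subgraph induced by this meta-path (no self-loops)
adj = ((M > 0) & ~np.eye(len(M), dtype=bool)).astype(int)
# app0 -- app1 (shared perm1) and app1 -- app2 (shared perm2)
```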
\mypara{Architecture Overview.} Fig.~\ref{fig:sysarch} depicts \textsc{Hawk}\xspace's architecture, encompassing \textit{Data Modeller} and \textit{Malware Detector} components.
Specifically, \textit{Relationship Extractor} in \textit{Data Modeller} first extracts Android entities based on feature engineering -- massive numbers of Android Apps are compiled and investigated. There are seven types of nodes (``App'' together with six characteristics) and six types of edges. \textit{\textsc{Hin}\xspace Constructor} then builds up the \textsc{Hin}\xspace by organizing entities and the extracted relationships into nodes and edges of the \textsc{Hin}\xspace (\S~\ref{sec:hinconstr:relationship}).
\textit{App Graph Constructor} is responsible for generating, from the
\textsc{Hin}\xspace, homogeneous relational subgraphs that contain only App entities. This is enabled by employing meta-structures, including both meta-paths~\cite{dong2017metapath2vec} and meta-graphs~\cite{zhang2018metagraph2vec}
(\S~\ref{sec:hinconstr:appgraph}).
\textit{Malware Detector} then involves two distinct representation learning models to numerically embed in-sample and out-of-sample nodes, respectively. These models need to fully exploit node affinities within a given meta-structure and to aggregate the embeddings of the same node under different meta-structures. Specifically, we design separate strategies to learn the embedding:
$\bullet$ To represent in-sample App nodes, the proposed \textsc{MsGAT}\xspace, a meta-structure enabled GAT solution,
first applies an \textit{intra-meta-structure} attention aggregation mechanism to accumulate the embedding of a target node from its neighbor nodes within the graph pertaining to a given meta-structure.
In the second, \textit{inter-meta-structure}, phase, we further fuse the obtained embeddings among different meta-structures so that their semantic meanings are represented in the final embedding (\S~\ref{sec:models:insample}).
$\bullet$ To efficiently tackle the out-of-sample node embedding, we generate the embedding, \textit{incrementally}, for a new node through reusing and aggregating the embedding result of selective in-sample App nodes in close proximity to the target node. This requires the model to ascertain the similarity between existing in-sample App nodes and the target node. Similarly, the embedding is firstly gathered at neighbor node level under a given meta-structure before conducting
the \textit{inter-meta-structure} aggregation (\S~\ref{sec:models:outofsample}).
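The two aggregation phases can be sketched in a few lines of numpy (a HAN-style toy of our own, not the actual \textsc{MsGAT}\xspace implementation; the adjacencies, attention vectors and dimensions are invented): neighbor embeddings are first fused by softmax attention inside each meta-structure graph, and the per-meta-structure results are then fused by semantic-level attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_level_embed(feats, adjs, att, sem):
    """Intra-meta-structure attention, then inter-meta-structure fusion."""
    n = feats.shape[0]
    per_meta = []
    for A in adjs:
        scores = np.full((n, n), -np.inf)  # -inf masks non-neighbors
        for i in range(n):
            for j in range(n):
                if A[i, j]:
                    scores[i, j] = att @ np.concatenate([feats[i], feats[j]])
        per_meta.append(softmax(scores, axis=1) @ feats)  # neighbor fusion
    stack = np.stack(per_meta)                            # (n_meta, n, dim)
    beta = softmax((stack @ sem).mean(axis=1))            # semantic weights
    return np.tensordot(beta, stack, axes=1)              # fused (n, dim)

rng = np.random.default_rng(1)
n, dim = 5, 8
feats = rng.standard_normal((n, dim))
adjs = [np.eye(n), np.eye(n)]        # self-loops under both meta-structures
adjs[0][0, 1] = adjs[0][1, 0] = 1    # extra edge under meta-structure 0
adjs[1][0, 2] = adjs[1][2, 0] = 1    # extra edge under meta-structure 1
emb = two_level_embed(feats, adjs,
                      rng.standard_normal(2 * dim), rng.standard_normal(dim))
```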
The \textit{Malware Classifier} digests the learned vector embeddings to train a classification model that determines whether a given App is malicious or benign, and then validates its effectiveness. General-purpose techniques such as Random Forest, Logistic Regression, and SVM can be adopted as the classifier implementation. We select the training set from the in-sample Apps to train our classifier, whilst using the testing set from the in-sample Apps and all out-of-sample Apps to test the models.
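As a concrete illustration of this interchangeable classifier stage, the sketch below trains a plain logistic-regression model on learned embeddings with batch gradient descent. This is a hypothetical minimal implementation for exposition; in practice an off-the-shelf scikit-learn classifier fills this role.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=200):
    """Fit a logistic-regression classifier on embedding matrix X
    (n_apps x dim) with binary labels y (1 = malicious)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    """Label an App malicious when its predicted probability >= 0.5."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```

Because the classifier only consumes fixed-length vectors, swapping in Random Forest or SVM requires no change to the embedding pipeline.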
\section{Conclusion and future work}
\label{conclusion}
\vspace{-0.3mm}
Malware detection is a critical but non-trivial task particularly in the face of ubiquitous Android applications and the increasingly intricate malware. In this paper, we propose \textsc{Hawk}\xspace, an Android malware detection framework to rapidly and incrementally learn and identify new Android Apps. \textsc{Hawk}\xspace presents the first attempt to marry the \textsc{Hin}\xspace-based embedding model with graph attention network (GAT) to obtain the numerical representation of Android Apps so that any classifier can easily catch the malicious ones.
Particularly, we exploit both meta-path and meta-graph to best capture the implicit higher-order relationships among entities in the \textsc{Hin}\xspace.
Two learning models, \textsc{MsGAT}\xspace and incremental \textsc{MsGAT++}\xspace, are devised to fuse neighbors' embedding within any meta-structure and across different meta-structures and pinpoint the proximity between a new App and existing in-sample Apps. Through the incremental representation learning model, \textsc{Hawk}\xspace can carry out malware detection dynamically for emerging Android Apps. Experiments show \textsc{Hawk}\xspace outperforms all baselines in terms of accuracy and time efficiency.
In the future, we plan to integrate \textsc{Hawk}\xspace into smart mobile devices by devising lightweight and efficient graph convolution models, such as \cite{FGAT,LI2020188}, to replace the existing modules. We also plan to investigate more advanced mechanisms for supporting model evolution in the face of model decay, particularly in federated learning environments.
\section{Experiment Setup}
\label{sec:exp_setup}
\subsection{Methodology}
\mypara{Environment.} \textsc{Hawk}\xspace is evaluated on a 16-node GPU cluster, where each node has a 64-core Intel Xeon CPU E5-2680 v4@2.40GHz with 512GB RAM and 8 NVIDIA Tesla P100 GPUs, Ubuntu 20.04 LTS with Linux kernel v.5.4.0. \textsc{Hawk}\xspace depends upon tensorflow-gpu v1.12.0 and scikit-learn v0.21.3. ApkTool and aapt.exe are used for parsing Apps.
\mypara{Datasets.}
According to the aforementioned discussion of feature engineering in \S\ref{sec:hinconstr:fe}, we decompiled 181,235 APKs in total (i.e., 80,860 malicious Apps and 100,375 benign Apps) from 2013 to 2019. With the help of AndroZoo\footnote{https://androzoo.uni.lu}, benign Apps are primarily collected from the Google Play store while malicious Apps are obtained from VirusShare and CICAndMal.
To validate the compatibility, both forward and backward, of the proposed model in \textsc{Hawk}\xspace, we train our model on Apps released in 2017 (the middle of the seven-year span), and then utilize it to detect Apps published from 2013 to 2019.
Specifically, we extracted 14,000 benign and 9,865 malicious Apps released in 2017, as in-sample Apps, to construct the \textsc{Hin}\xspace and train the detection model. For generating the out-of-sample data, we collected 7 malware subsets (\textit{v2013} to \textit{v2019}), each of which contains roughly 10,000 samples, from VirusShare over seven consecutive years, together with another 2 subsets from CICAndMal, including 242 scareware/adware samples in 2017 (\textit{c2017}) and 253 samples in 2019 (\textit{c2019}). Meanwhile, we extracted the same number of benign Apps to match each subset above.
\mypara{Methodology and Metrics.} The experiments are three-fold: we first evaluate the effectiveness of \textsc{Hawk}\xspace against traditional feature-based ML approaches and numerous baselines in both in-sample and out-of-sample scenarios (\S\ref{sec:exp:effectivenss}). Afterwards, we demonstrate the efficiency of \textsc{Hawk}\xspace by comparing its training time consumption with that of other approaches (\S\ref{sec:exp:efficiency}).
We further conduct several micro-benchmarks, including an ablation analysis of performance gains, an evaluation of meta-structure importance, and the impact of the sampled neighbor number on detection precision (\S\ref{sec:exp:microbenchmark}).
We use the metrics $Precision$, $Recall$, $FP$-$Rate$, $F1$ and $Acc$ to measure effectiveness (see Table~\ref{Metrics}), and use time consumption to measure efficiency. The execution time includes the process of generating embedding vectors and detecting Apps whilst excluding the process of extracting the App relation matrix. We use 5-fold cross validation and calculate the average accuracy to ensure an unbiased and accurate evaluation.
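The effectiveness metrics can be computed directly from the four confusion counts; the helper below follows the standard definitions:

```python
def detection_metrics(tp, tn, fp, fn):
    """Compute Precision, Recall, FP-Rate, F1 and Acc from the
    confusion counts of a malware-detection run."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fp_rate = fp / (fp + tn)
    f1 = 2 * precision * recall / (precision + recall)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall,
            "fp_rate": fp_rate, "f1": f1, "acc": acc}
```

For example, 90 correctly flagged malicious Apps, 80 correctly passed benign Apps, 20 false alarms and 10 misses yield an accuracy of 0.85 and an FP-Rate of 0.2.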
\begin{table}[t]
\centering
\vspace{-0.8em}
\caption{Descriptions of evaluation metrics.}
\label{Metrics}
\renewcommand\arraystretch{1.4}
\vspace{-0.8em}
\scalebox{0.88}{
\begin{tabular}{p{43pt}<{\centering}p{209pt}<{\centering}}\toprule
Metrics&Description\\
\hline
$TP$&The number of malicious Apps that are correctly identified\\
$TN$&The number of benign Apps that are correctly identified\\
$FP$&The number of benign Apps that are mistakenly identified\\
$FN$&The number of malicious Apps that are mistakenly identified\\
$Precision$&$TP/(TP+FP)$\\
$Recall$&$TP/(TP+FN)$\\
$FP$-$Rate$&$FP/(FP+TN)$\\
$F1$&$2*Precision$*$Recall/(Precision+Recall)$\\
$Acc$&$(TP+TN)/(TP+TN+FP+FN)$\\
\bottomrule
\end{tabular}
}
\end{table}
\vspace{-1mm}
\subsection{Baselines}
To evaluate the performance of \textsc{MsGAT}\xspace in \textsc{Hawk}\xspace, the baselines encompass generic models and specific models used by some well-known malware detection systems.
\mypara{Generic models.} We firstly implement the following generic models as comparative approaches:
\noindent $\bullet$ \textbf{Node2Vec} \cite{grover2016node2vec} is a typical model generalized from DeepWalk \cite{perozzi2014deepwalk} based on homogeneous graph network.
\noindent $\bullet$ \textbf{GCN} \cite{kipf2016semi} is a semi-supervised homogeneous graph convolutional network model that retains feature information and structure information of the graph nodes.
\noindent $\bullet$ \textbf{RS-GCN} represents the approach to converting the \textsc{Hin}\xspace into homogeneous graphs, applying native GCN to each graph and reporting the best performance among different graphs.
\noindent $\bullet$ \textbf{GAT} \cite{velivckovic2017graph} is a semi-supervised homogeneous graph model that utilizes attention mechanism for aggregating neighborhood information of graph nodes.
\noindent $\bullet$ \textbf{RS-GAT} denotes the approach to converting the \textsc{Hin}\xspace into homogeneous graphs based on rich semantic meta-structures, applying native GAT to each homogeneous graph and reporting the best performance among different graphs.
\noindent$\bullet$ \textbf{Metapath2Vec} \cite{dong2017metapath2vec} is a heterogeneous graph representation learning model that leverages meta-path based random walk to find neighborhood and uses skip-gram with negative sampling to learn node vectors.
\noindent$\bullet$ \textbf{Metagraph2Vec}~\cite{zhang2018metagraph2vec} is an alternative model to Metapath2Vec; both meta paths and meta graphs are applied to the random walk.
\noindent $\bullet$ \textbf{HAN}~\cite{wang2019heterogeneous} is a heterogeneous graph representation learning model that utilizes predefined meta paths and hierarchical attentions for node vector embedding.
For Node2Vec, GCN and GAT, we treat all the nodes in the \textsc{Hin}\xspace as the same type to obtain a homogeneous graph. Since all these models target static graphs, we compare the out-of-sample detection capability of \textsc{MsGAT++}\xspace against three generic strategies that can be easily adopted in any comparative model:
\noindent $\bullet$ \textbf{Neighbor averaging (NA)} directly averages the vector embedding of the in-sample neighbors pertaining to a given new App as the targeted embedding.
\noindent $\bullet$ \textbf{Sampled neighbor averaging (SNA)} further filters the neighbor range by sampling a fixed number of in-sample neighbors based on the sorted node similarity and simply averaging their embedding as the targeted embedding.
\noindent $\bullet$ \textbf{Re-running (RR)} primarily merges the out-of-sample Apps with in-sample Apps and rebuilds the entire \textsc{Hin}\xspace and the malware detection model.
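The NA and SNA strategies above can be sketched as follows. Note that the cosine similarity over a proxy feature vector for the new App is an illustrative assumption here; the actual node-similarity measure is defined by each comparative model.

```python
import numpy as np

def neighbor_averaging(neighbor_embs):
    """NA: average the embeddings of all in-sample neighbors
    of a new App to obtain its embedding."""
    return np.mean(neighbor_embs, axis=0)

def sampled_neighbor_averaging(new_app_proxy, neighbor_embs, k):
    """SNA: keep only the k in-sample neighbors most similar to the
    new App (cosine similarity on a proxy vector) and average them."""
    sims = [np.dot(new_app_proxy, e) /
            (np.linalg.norm(new_app_proxy) * np.linalg.norm(e))
            for e in neighbor_embs]
    top = np.argsort(sims)[-k:]          # indices of the k most similar
    return np.mean([neighbor_embs[i] for i in top], axis=0)
```

RR needs no such sketch: it simply rebuilds the \textsc{Hin}\xspace and retrains from scratch, which is the most accurate but also the most expensive option.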
\mypara{Specific models derived from specialized systems.} Secondly, we compare our models in \textsc{Hawk}\xspace against the following models used by existing malware detection systems:
\noindent $\bullet$ \textbf{Drebin}~\cite{arp2014drebin} is a framework that inspects a given App by extracting a wide range of features sets from the \texttt{manifest} and \texttt{dex} code and adopts the SVM model in the classifier.
\noindent $\bullet$ \textbf{DroidEvolver}~\cite{xu2019droidevolver}
is a self-evolving detection system to maintain and rely on a model pool of different detection models that are initialized with a set of labeled Apps using various online learning algorithms. It is worth noting that we do not directly compare against MamaDroid~\cite{mariconti2016mamadroid}, because it has been demonstrated less effective than DroidEvolver.
\noindent $\bullet$ \textbf{HinDroid}~\cite{3hou2017hindroid} constructs a heterogeneous graph with entities such as App and API and the rich in-between relationships. It aggregates information from different semantic meta-paths and uses multi-kernel learning to calculate the representations of Apps.
\noindent $\bullet$ \textbf{MatchGNet}~\cite{mgnet} is a graph-based malware detection model that regards each software as a heterogeneous graph and learns its representation. It determines the threat of an unknown software primarily through matching the graph representation of the unknown software and that of benign software.
\noindent $\bullet$ \textbf{Aidroid}~\cite{ye2018aidroid} is among the first attempts to tackle out-of-sample malware representations with a heterogeneous graph model and a CNN network. Following the detailed description in the paper, we utilize one-hop and two-hop neighbors to maximize its model performance.
\mypara{Model parameters}. For Node2Vec and Metapath2Vec, we set the number of walks per node, the max walk length, and the window size to be 10, 100, 8, respectively.
For GCN, GAT and HAN, we set up the parameters suggested by their original papers. For fairness of comparison, each model is trained 200 times. The length of the embedding vectors delivered by these models is set to 128.
\begin{table}[t]
\centering
\caption{The F1 Value and Accuracy of In-sample Apps Detection.}
\label{In-sample}
\vspace{-2mm}
\renewcommand\arraystretch{1.3}
\scalebox{0.94}{
\begin{tabular}{p{14pt}<{\centering}p{65pt}<{\centering}p{22pt}<{\centering}p{22pt}<{\centering}p{22pt}<{\centering}p{22pt}<{\centering}}\toprule
Metrics&Approaches&20\%&40\%&60\%&80\%\\
\midrule
$\multirow{14}{*}{\rotatebox{90}{$F1$}}$
&Node2Vec&0.8355&0.8378&0.8542&0.8601\\
&GCN&0.8653&0.8677&0.8721&0.8763\\
&GAT&0.8435&0.8633&0.8752&0.8801\\
&Metapath2Vec&0.9231&0.9321&0.9328&0.9395\\
&RS-GCN&0.9212&0.9510&0.9515&0.9560\\
&RS-GAT&0.9507&0.9631&0.9653&0.9664\\
&HAN&0.9511&0.9617&0.9671&0.9705\\
&Metagraph2Vec&0.9750&0.9766&0.9764&0.9771\\
&SVM (Drebin)&0.9312&0.9387&0.9446&0.9477\\
&DroidEvolver&0.9412&0.9517&0.9566&0.9605\\
&HinDroid&0.9643&0.9669&0.9684&0.9746\\
&MatchGNet&0.9395&0.9511&0.9604&0.9753\\
&Aidroid&0.9321&0.9399&0.9414&0.9455\\
&\textsc{MsGAT}\xspace (\textsc{Hawk}\xspace) &\textbf{0.9857}&\textbf{0.9859}&\textbf{0.9871}&\textbf{0.9878}\\
\midrule
$\multirow{14}{*}{\rotatebox{90}{$Acc$}}$
&Node2Vec&0.8254&0.8388&0.8405&0.8593\\
&GCN&0.8558&0.8663&0.8630&0.8692\\
&GAT&0.8461&0.8645&0.8758&0.8833\\
&Metapath2Vec&0.9259&0.9321&0.9335&0.9388\\
&RS-GCN&0.9199&0.9494&0.9527&0.9544\\
&RS-GAT&0.9486&0.9620&0.9652&0.9664\\
&HAN&0.9521&0.9657&0.9675&0.9699\\
&Metagraph2Vec&0.9686&0.9698&0.9748&0.9762\\
&SVM (Drebin)&0.9295&0.9356&0.9407&0.9455\\
&DroidEvolver&0.9329&0.9506&0.9557&0.9623\\
&HinDroid&0.9688&0.9698&0.9722&0.9764\\
&MatchGNet&0.9302&0.9508&0.9536&0.9689\\
&Aidroid&0.9227&0.9356&0.9367&0.9437\\
&\textsc{MsGAT}\xspace (\textsc{Hawk}\xspace) &\textbf{0.9843}&\textbf{0.9855}&\textbf{0.9867}&\textbf{0.9854}\\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\centering
\caption{The FP-Rate of In-sample Apps Detection.}
\label{In-sample-FP}
\vspace{-2mm}
\renewcommand\arraystretch{1.3}
\scalebox{0.94}{
\begin{tabular}{p{14pt}<{\centering}p{65pt}<{\centering}p{22pt}<{\centering}p{22pt}<{\centering}p{22pt}<{\centering}p{22pt}<{\centering}}\toprule
Metrics&Approaches&20\%&40\%&60\%&80\%\\
\midrule
$\multirow{14}{*}{\rotatebox{90}{$FP-Rate$}}$
&Node2Vec&0.0425&0.0393&0.0388&0.0342\\
&GCN&0.0350&0.0323&0.0333&0.0318\\
&GAT&0.0343&0.0334&0.0299&0.0268\\
&Metapath2Vec&0.0177&0.0175&0.0169&0.0165\\
&RS-GCN&0.0184&0.0118&0.0109&0.0107\\
&RS-GAT&0.0115&0.0088&0.0079&0.0075\\
&HAN&0.0108&0.0098&0.0085&0.0087\\
&Metagraph2Vec&0.0071&0.0068&0.0059&0.0057\\
&SVM (Drebin)&0.0163&0.0155&0.0135&0.0139\\
&DroidEvolver&0.0154&0.0116&0.0101&0.0108\\
&HinDroid&0.0075&0.0078&0.0071&0.0068\\
&MatchGNet&0.0193&0.0129&0.0122&0.0081\\
&Aidroid&0.0184&0.0171&0.0150&0.0139\\
&\textsc{MsGAT}\xspace (\textsc{Hawk}\xspace) &\textbf{0.0038}&\textbf{0.0034}&\textbf{0.0032}&\textbf{0.0035}\\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table*}[t]
\centering
\caption{The F1 Value of Out-of-sample Apps Detection.}
\vspace{-2mm}
\label{table:out_of_sample_fone}
\renewcommand\arraystretch{1.4}
\footnotesize
\scalebox{0.97}{
\begin{tabular}{p{19pt}<{\centering}p{54pt}<{\centering}p{67pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}}\toprule
Metrics&In-sample Approaches& Out-of-sample Approaches &v2013&v2014&v2015&v2016&v2017&v2018&v2019&c2017&c2019\\
\midrule
$\multirow{30}{*}{\rotatebox{90}{$F1$}}$
&$\multirow{3}{*}{Node2Vec}$&NA&0.5888&0.6746&0.6965&0.6740&0.6811&0.6744&0.6680&0.6533&0.6995\\
&&SNA&0.6541&0.6732&0.6965&0.6935&0.6851&0.6665&0.6685&0.6638&0.6845\\
&&Rerunning&0.7564&0.8102&0.7956&0.8124&0.8236&0.7549&0.7968&0.7765&0.7945\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{GCN}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8637}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8705}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8459}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8496}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8697}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8743}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8637}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8567}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8537}\\
&$\multirow{3}{*}{GAT}$&NA&0.7364&0.7423&0.7153&0.7155&0.7545&0.6225&0.7203&0.6352&0.6442\\
&&SNA&0.7433&0.7521&0.7056&0.6962&0.6842&0.7121&0.6831&0.6720&0.6318\\
&&Rerunning&0.8242&0.8448&0.8531&0.8474&0.8731&0.8595&0.8457&0.8511&0.8476\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{NA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7414}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8424}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7835}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7784}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7537}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8243}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8473}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8160}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8183}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{Metapath2Vec}&\multicolumn{1}{>{\columncolor{mycyan}}c}{SNA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7564}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8531}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7765}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7496}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7365}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8359}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8363}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8242}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8156}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9240}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9321}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9195}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9214}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9342}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9326}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9285}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9094}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9052}\\
&$\multirow{4}{*}{HAN}$&NA&0.7455&0.7405&0.6361&0.7433&0.7292&0.7443&0.7245&0.7101&0.7253\\
&&SNA&0.7593&0.7635&0.7793&0.7723&0.8046&0.7803&0.7566&0.7543&0.7768\\
&&Rerunning&0.9155&0.9626&0.9678&0.9588&0.9758&0.9522&0.9677&0.9482&0.9574\\
&&\textsc{MsGAT++}\xspace&0.8896&0.9611&0.9512&0.9462&0.9466&0.9655&0.9583&0.9358&0.9386\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{RS-GCN}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9532}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9549}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9487}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9499}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9656}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9651}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9745}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9539}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9471}\\
&$\multirow{3}{*}{RS-GAT}$&NA&0.7564&0.9400&0.8104&0.6755&0.7345&0.6423&0.7520&0.6152&0.5931\\
&&SNA&0.7564&0.9400&0.8601&0.6744&0.5290&0.7253&0.7323&0.5807&0.7707\\
&&Rerunning&0.9260&0.9321&0.9428&0.9582&0.9498&0.9392&0.9372&0.9485&0.9593\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{NA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7658}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9763}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8041}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7955}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7693}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8665}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7614}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8267}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8084}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{Metagraph2Vec}&\multicolumn{1}{>{\columncolor{mycyan}}c}{SNA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7672}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7769}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8155}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7996}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7805}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8665}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7628}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8239}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8084}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9533}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9688}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9255}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9382}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9201}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9667}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9718}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9234}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9040}\\
&$\multirow{1}{*}{Drebin}$ & &0.7442&0.7723&0.7856&0.8277&0.9432&0.7761&0.7891&0.7559&0.7413\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{DroidEvolver}&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7972}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8469}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8519}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8996}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9605}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9265}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9028}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8539}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8584}\\
&$\multirow{1}{*}{HinDroid}$ & &0.8946&0.9232&0.9298&0.9277&0.9712&0.9159&0.9466&0.9396&0.9245\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{MatchGNet}&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8981}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8965}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9323}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8833}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9675}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9265}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9053}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9123}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9137}\\
&$\multirow{1}{*}{HGiNE (AiDroid)}$ &HG2Img &0.8842&0.9723&0.9556&0.9272&0.9455&0.8761&0.8991&0.8959&0.9013\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{NA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7693}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7601}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.6465}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7725}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7693}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7741}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7741}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7401}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7454}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textsc{MsGAT}\xspace}&\multicolumn{1}{>{\columncolor{mycyan}}c}{SNA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7795}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7845}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7996}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8058}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8241}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7955}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7832}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.7791}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.8071}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9569}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9824}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9876}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9720}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9769}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9808}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9805}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9621}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.9693}}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textsc{MsGAT++}\xspace}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9007}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9804}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9736}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9687}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9695}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9665}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9658}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9461}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.9393}\\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{table*}[t]
\centering
\caption{The FP-Rate of Out-of-sample Apps Detection.}
\vspace{-2mm}
\label{table:out_of_sample_FPR}
\renewcommand\arraystretch{1.4}
\footnotesize
\scalebox{0.97}{
\begin{tabular}{p{19pt}<{\centering}p{54pt}<{\centering}p{67pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}p{23pt}<{\centering}}\toprule
Metrics&In-sample Approaches& Out-of-sample Approaches &v2013&v2014&v2015&v2016&v2017&v2018&v2019&c2017&c2019\\
\midrule
$\multirow{30}{*}{\rotatebox{90}{$FP-Rate$}}$
&$\multirow{3}{*}{Node2Vec}$&NA&0.1052&0.0846&0.0819&0.0782&0.0776&0.0846&0.0763&0.0971&0.0819\\
&&SNA&0.0968&0.0831&0.0758&0.0811&0.0862&0.0883&0.0852&0.0806&0.0789\\
&&Rerunning&0.0682&0.0531&0.0576&0.0534&0.0508&0.0698&0.0579&0.0643&0.0569\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{GCN}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0377}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0359}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0428}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0412}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0366}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0356}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0374}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0394}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0406}\\
&$\multirow{3}{*}{GAT}$&NA&0.0711&0.0708&0.0754&0.0736&0.0648&0.0981&0.0727&0.0963&0.0911\\
&&SNA&0.0675&0.0655&0.0779&0.0804&0.0836&0.0754&0.0830&0.0859&0.0966\\
&&Rerunning&0.0461&0.0408&0.0387&0.0403&0.0334&0.0370&0.0406&0.0394&0.0403\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{NA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0690}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0419}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0575}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0593}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0655}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0460}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0398}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0474}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0459}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{Metapath2Vec}&\multicolumn{1}{>{\columncolor{mycyan}}c}{SNA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0616}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0371}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0565}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0634}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0667}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0416}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0415}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0455}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0467}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0192}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0173}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0205}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0201}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0167}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0171}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0182}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0230}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0241}\\
&$\multirow{4}{*}{HAN}$&NA&0.0644&0.0657&0.0921&0.0650&0.0686&0.0647&0.0701&0.0737&0.0701\\
&&SNA&0.0614&0.0603&0.0563&0.0581&0.0496&0.0559&0.7566&0.0625&0.0568\\
&&Rerunning&0.0215&0.0094&0.0091&0.0104&0.0061&0.0121&0.0081&0.0131&0.0108\\
&&\textsc{MsGAT++}\xspace&0.0279&0.0098&0.0123&0.0136&0.0135&0.0087&0.0105&0.0162&0.0165\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{RS-GCN}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0119}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0115}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0131}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0127}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0087}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0088}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0065}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0117}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0134}\\
&$\multirow{3}{*}{RS-GAT}$&NA&0.0619&0.0153&0.0484&0.0822&0.0672&0.0906&0.0628&0.0975&0.1039\\
&&SNA&0.0622&0.0153&0.0358&0.0835&0.1203&0.0702&0.0683&0.1071&0.0585\\
&&Rerunning&0.0189&0.0172&0.0145&0.0106&0.0127&0.0154&0.1586&0.0130&0.0106\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{NA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0591}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0059}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0494}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0521}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0586}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0339}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0607}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0441}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0485}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{Metagraph2Vec}&\multicolumn{1}{>{\columncolor{mycyan}}c}{SNA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0591}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0565}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0467}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0507}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0556}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0338}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0599}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0444}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0483}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0117}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0079}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0188}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0156}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0202}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0084}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0071}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0196}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0242}\\
&$\multirow{1}{*}{Drebin}$ & &0.0653&0.0583&0.0547&0.0440&0.0145&0.0572&0.0538&0.0623&0.0653\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{DroidEvolver}&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0517}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0391}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0376}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0255}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0101}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0187}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0248}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0372}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0365}\\
&$\multirow{1}{*}{HinDroid}$ & &0.0241&0.0177&0.0253&0.0157&0.0061&0.0201&0.0149&0.0153&0.0162\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{MatchGNet}&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0257}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0218}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0137}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0236}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0065}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0156}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0201}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0185}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0173}\\
&$\multirow{1}{*}{HGiNE (AiDroid)}$ &HG2Img &0.0295&0.0071&0.0113&0.0185&0.0139&0.0316&0.0257&0.0265&0.0252\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{NA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0589}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0608}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0895}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0576}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0584}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0572}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0577}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0659}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0648}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textsc{MsGAT}\xspace}&\multicolumn{1}{>{\columncolor{mycyan}}c}{SNA}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0561}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0549}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0510}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0494}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0448}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0521}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0552}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0563}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0491}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{Rerunning}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0109}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0044}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0032}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0071}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0058}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0049}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0049}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0097}}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textbf{0.0078}}\\
&\multicolumn{1}{>{\columncolor{mycyan}}c}{}&\multicolumn{1}{>{\columncolor{mycyan}}c}{\textsc{MsGAT++}\xspace}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0232}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0049}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0067}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0079}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0077}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0085}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0086}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0136}&\multicolumn{1}{>{\columncolor{mycyan}}c}{0.0154}\\
\toprule
\end{tabular}
}
\end{table*}
\begin{figure*}[htbp]
\centering
\subfigure[F1 Score] {\label{fig:maF1}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.9\columnwidth]{figs/machinef1.pdf}
\end{minipage}%
}%
\subfigure[Acc Score] {\label{fig:maAcc}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.9\columnwidth]{figs/machineAcc.pdf}
\end{minipage}%
}%
\centering
\caption{Comparisons with Traditional Machine Learning Methods.}
\label{machine}
\end{figure*}
\section{Experiment Results}
\label{sec:exp}
\subsection{Detection Effectiveness}
\label{sec:exp:effectivenss}
\mypara{In-sample malware detection against DL models}. We choose 20\%, 40\%, 60\%, 80\% of the in-sample Apps to train the Logistic Regression model and use the remainder for testing. Table~\ref{In-sample} reports the F1 and Acc scores of each model. In general, \textsc{MsGAT}\xspace achieves competitive classification accuracy when compared with popular malware detectors such as Drebin, DroidEvolver, MatchGNet, HinDroid and AiDroid. Similar observations hold in Table~\ref{In-sample-FP}, which measures the False Positive rate. This is because our graph-based representation learning models fully integrate the feature information of Apps and the implied semantic information between Apps, which improves their expressiveness. In addition, the accuracy of RS-GCN and RS-GAT improves by over 5\% compared with native GCN and GAT. These native approaches convert the original \textsc{Hin}\xspace into a homogeneous graph; the improvement derives from preserving the semantic information in the heterogeneous networks through our proposed semantic meta-structures.
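The evaluation protocol above (embed, split, train a Logistic Regression classifier) can be sketched as follows; the embeddings and labels below are random stand-ins for the \textsc{MsGAT}\xspace outputs, so the printed scores are illustrative only:

```python
# Sketch of the in-sample evaluation protocol: train Logistic Regression on
# 20/40/60/80% of the Apps and test on the remainder. X and y are synthetic
# stand-ins for the learned 128-d node embeddings and malware labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))                              # hypothetical embeddings
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)   # synthetic labels

for frac in (0.2, 0.4, 0.6, 0.8):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=frac, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"train={frac:.0%}  F1={f1_score(y_te, pred):.4f}  "
          f"Acc={accuracy_score(y_te, pred):.4f}")
```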
It is worth noting that Metagraph2Vec and \textsc{MsGAT}\xspace achieve the highest precision, particularly compared against Metapath2Vec and HAN, which only involve meta-paths. The accuracy gain stems from introducing meta-graphs, whose richer semantics mine more complex semantic associations. In addition, \textsc{MsGAT}\xspace outperforms Metagraph2Vec because our models adopt aggregation mechanisms for both inter-meta-structure and intra-meta-structure information, thereby aggregating semantics from far more comprehensive views.
\mypara{Out-of-sample malware detection against DL models}.
Table~\ref{table:out_of_sample_fone} and Table~\ref{table:out_of_sample_FPR} show the F1 score and False Positive rate, respectively, when we adopt different in-sample models and out-of-sample policies. Overall, the NA and SNA policies have the lowest detection accuracy in all cases due to the substantial loss of semantic information: the direct averaging operation ignores the discrepancies among neighbors, thereby reducing the precision of node embedding and the resultant detection effectiveness.
It is also observable that NA and SNA have very similar precision in almost all cases, indicating that sampling a fixed number of neighbor nodes captures approximately the same information as averaging over all neighbor nodes.
Intuitively, the re-running policy delivers the best detection performance over all datasets, since all data, new or old, are involved in the embedding retraining. Metagraph2Vec, RS-GAT and RS-GCN outperform Metapath2Vec, GAT and GCN thanks to their abundant meta-structures.
This improvement again demonstrates that applying abundant semantic meta-structures to embedding models brings a stronger generalization capacity.
As shown in Table~\ref{table:out_of_sample_fone}, \textsc{MsGAT}\xspace, together with the rerunning policy, achieves the best detection effectiveness on 2/3 of the datasets. This can be attributed to the highly rich meta-structures used, which include all possible contributions from both intra- and inter-meta-structure aspects. Nevertheless, rerunning has non-negligible overheads, particularly a long training time (we will quantify the time consumption later). By contrast, \textsc{MsGAT++}\xspace proves to be a competitive compromise: its precision is in close proximity to the rerunning baselines over all datasets.
To demonstrate the generalization, we also implement our \textsc{MsGAT++}\xspace mechanism upon the HAN model. Similarly, the incremental learning scheme yields far better results than native NA and SNA, with only a negligible margin from the rerunning baseline.
HinDroid, MatchGNet, HG2Img and Drebin observably deliver unstable outcomes across different datasets, indicating a limited generalization ability. This is probably because HG2Img and HinDroid are more dependent upon large training samples and thus have lower precision on some specific datasets. MatchGNet's performance may be limited by its neglect of the correlation information between Apps during graph construction. Drebin leverages SVM as its feature-based machine learning technique, making it difficult to deal with malware whose features change rapidly. DroidEvolver is also based on feature engineering but updates its model online according to out-of-sample Apps, leading to a competitive classification accuracy. Nevertheless, purely relying on explicit features is intrinsically deficient compared with semantic-rich approaches.
\mypara{Comparison against traditional feature-based ML models.} We mainly use Random Forest (RF), Logistic Regression (LR), Decision Tree (DT), Gradient Boosting Decision Tree (GBDT) and AdaBoost as comparative baselines. In this experiment, we particularly use \textit{v2017} as the training set to build the \textsc{Hin}\xspace, whilst leveraging the out-of-sample Apps with various release times or sources as the test set. Following the method in \cite{li2018significant}, we extract information from permissions, APIs, class names, interface names and \texttt{.so} files to construct a feature vector with 63,902 dimensions, which is reduced to 128 dimensions via principal component analysis (PCA).
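A minimal sketch of this feature pipeline, assuming the binary indicator vectors have already been extracted (the toy dimensions below stand in for the 63,902-dimensional vectors in the text):

```python
# Sparse binary indicator features over extracted artifacts (permissions,
# APIs, class names, ...), reduced to 128 dimensions with PCA. The number
# of Apps and raw features here are toy stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_apps, n_features = 500, 2000                                 # paper: 63,902 raw dims
raw = (rng.random((n_apps, n_features)) < 0.05).astype(float)  # binary indicators

pca = PCA(n_components=128)
reduced = pca.fit_transform(raw)
print(reduced.shape)  # (500, 128)
```

The reduced vectors are what the traditional ML baselines (RF, LR, DT, GBDT, AdaBoost) consume.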
Fig.~\ref{machine} illustrates the F1 and accuracy scores produced by different models over different test sets. Observably, \textsc{Hawk}\xspace stably outperforms all traditional baselines in all cases of App classification. Traditional ML approaches are competitive (with Acc or F1 scores around 0.95) only when the test set is aligned with the training set (\textit{v2017}), while \textsc{Hawk}\xspace constantly delivers precise results. Interestingly, the performance of traditional approaches is consistently poor over the datasets of some specific years, e.g., \textit{v2014} and \textit{c2019}. After examining the features involved in the PCA, we infer that the root cause is that some features preferred by malicious Apps in those years had not yet been captured in the training set. For example, \texttt{'Ljava/lang/Cloneable'} and the \texttt{.so} file \texttt{'libshunpayarmeabi'} manifest in \textit{v2014} as dominating features in the PCA, but they are far less important among the principal components of \textit{v2017}. Similar observations hold for \textit{c2019}. This is an interesting research finding, but a further in-depth study is beyond the scope of this paper and left for future work.
To sum up, the disparity in precision implies the difficulty of applying traditional ML models -- which merely rely on explicit feature extraction -- to reliable malware detection, considering the explosively growing types and numbers of Apps in the market. In comparison, \textsc{Hawk}\xspace is able to mine the high-order relations between Apps with the help of \textsc{Hin}\xspace, and thus generalizes strongly, i.e., remains highly effective regardless of the type and size of the dataset.
\vspace{-0.6mm}
\subsection{Detection Efficiency}
\label{sec:exp:efficiency}
\vspace{-0.3mm}
\mypara{Time consumption.} In this experiment, we compare the time efficiency of our incremental detection design \textsc{MsGAT++}\xspace against the comparative approaches with acceptable detection accuracy (demonstrated in \S \ref{sec:exp:effectivenss}), i.e., rerunning HAN, rerunning Metagraph2Vec, Drebin, DroidEvolver and HG2Img. It is worth mentioning that we exclude the feature-extraction time from the overall execution time for simplicity, because all approaches in our experiment share the same extraction procedure. In fact, extracting the feature information from an original APK file takes approximately 6.9 seconds per App.
As observed in Fig.~\ref{fig:efficiency}, the execution time of \textsc{MsGAT++}\xspace is much shorter than that of the other approaches: \textsc{MsGAT++}\xspace takes only 3.5 milliseconds on average to detect a single out-of-sample App. This millisecond-level detection illustrates \textsc{Hawk}\xspace's suitability for real-time malware detection at scale.
In particular, \textsc{MsGAT++}\xspace accelerates the training time by 50$\times$ against the native approach that rebuilds the \textsc{Hin}\xspace and reruns \textsc{MsGAT}\xspace. The acceleration primarily derives from our incremental learning design, which makes full use of previously learned information without rerunning the entire model. In addition, \textsc{MsGAT++}\xspace merely selects a fixed number of neighbor nodes to re-calibrate the embedding, so the time consumption only grows linearly with the number of out-of-sample Apps.
By contrast, the other rerunning \textsc{Hin}\xspace-based baselines are predominantly dependent upon updating the embeddings of all nodes based on the original relation matrix. This explains the discrepancies between \textsc{MsGAT++}\xspace and the rerunning-policy baselines when tackling out-of-sample Apps. HG2Img relies on a certain amount of update operations to learn new features, resulting in non-negligible time consumption.
\begin{figure}[t]
\centerline{\includegraphics[width=0.92\columnwidth]{figs/timecost.pdf}}
\vspace{-1.8mm}
\caption{Efficiency comparison of detecting out-of-sample Apps.}
\vspace{-1.2mm}
\label{fig:efficiency}
\end{figure}
\mypara{System overhead.}
Overall, the overheads are generally low, mainly arising from loading model data and carrying out the multi-tiered aggregation operations. Runtime memory consumption is typically determined by the number of nodes and features involved in the model training. The total memory consumption of \textsc{Hawk}\xspace is roughly 330MB on average, far lower than that of the rerunning-based baselines (20.88GB on average). This is because all in-sample and out-of-sample Apps have to be fully loaded into memory and involved in the embedding calculation, while our incremental design significantly reduces such costs. Correspondingly, \textsc{Hawk}\xspace merely uses 3.1\% additional CPU utilization on average, mainly for sorting out the top-$\sigma$ samples. By contrast, CPU utilization reaches up to 76\% in the rerunning baselines, wherein CPU-intensive matrix operations have to be performed. The low system cost also indicates the suitability of applying \textsc{Hawk}\xspace to massive-scale malware detection.
\begin{table}[t]
\centering
\caption{Ablation Analysis}
\label{ablation}
\renewcommand\arraystretch{1.6}
\footnotesize
\scalebox{0.95}{
\begin{tabular}{p{100pt}<{\centering}p{21pt}<{\centering}p{21pt}<{\centering}p{68pt}<{\centering}}\toprule
Model&$Acc$&$F1$&AvgDetectionTime\\
\hline
\textsc{Hawk}\xspace &0.9695&0.9689&3.5ms\\
\textsc{Hawk}\xspace-I (w/o \textsc{MsGAT}\xspace) &0.8731 & 0.8725 & 1.8ms\\
\textsc{Hawk}\xspace-R (w/o \textsc{MsGAT++}\xspace) &0.9769&0.9769&205ms\\
\toprule
\end{tabular}
}
\end{table}
\begin{comment}
\begin{figure}[t]
\centerline{\includegraphics[width=0.82\columnwidth, height=30mm]{figs/ablation.pdf}}
\vspace{-1.8mm}
\caption{Ablation Analysis.}
\vspace{-1.2mm}
\label{fig:ablation}
\end{figure}
\end{comment}
\vspace{-1.5mm}
\subsection{Microbenchmarking}
\label{sec:exp:microbenchmark}
\vspace{-0.6mm}
\mypara{Ablation analysis.} To investigate the impact of each component, we remove one component at a time from our model and study the individual impact on the effectiveness of detecting the out-of-sample Apps.
We identify two tailored subsystems: i) \textsc{Hawk}\xspace-I, which retains only the native GAT model by removing the hierarchical GAT structure from \textsc{Hawk}\xspace, and ii) \textsc{Hawk}\xspace-R, which excludes the incremental design. Table~\ref{ablation} reports their accuracy and average time to detect a single App on \textit{v2017}.
Without the multi-step and hierarchical aggregation within and across meta-structures, \textsc{Hawk}\xspace-I reduces the average detection time to 1.8ms. However, both accuracy and F1 score drop by 9.9\% compared with \textsc{Hawk}\xspace. This demonstrates the accuracy gain stemming from fusing the embedding results under different meta-structures.
\textsc{Hawk}\xspace-R takes a far longer time to detect a malware App, simply because no incremental model is loaded and everything needs to be re-trained from scratch. Although its accuracy increases negligibly thanks to the full data involved in the model training, the detection efficiency of \textsc{Hawk}\xspace-R is still unacceptable given the long execution time. Hence, it is necessary to adopt the incremental \textsc{MsGAT++}\xspace to ensure reliable and rapid malware detection.
\begin{figure}[t]
\centerline{\includegraphics[width=0.915\columnwidth]{figs/meta.pdf}}
\vspace{-1.3mm}
\caption{Model performance under different path combinations.}
\vspace{-1.2mm}
\label{perinsample}
\end{figure}
\begin{comment}
\begin{figure}[t]
\centering
\subfigure[F1 Score.] {\label{fig:F1}
\includegraphics[width=0.466\columnwidth]{figs/f1.pdf}
}
\subfigure[Acc Score.] {\label{fig:Acc}
\includegraphics[width=0.466\columnwidth]{figs/Acc.pdf}
}
\caption{Evaluation of the number of sampling neighbors.}
\label{neighbors}
\end{figure}
\end{comment}
\begin{figure}[t]
\centerline{\includegraphics[width=0.95\columnwidth ]{figs/combine.pdf}}
\vspace{-1.3mm}
\caption{Impact of sampling neighbor number.}
\vspace{-1.2mm}
\label{neighbors}
\end{figure}
\mypara{Importance of meta-structures.}
In our model design, a group of meta-paths and meta-graphs is adopted to represent different semantic information. To ascertain each structure's individual contribution to the detection effectiveness, we select a single meta structure at a time in this experiment.
Fig.~\ref{perinsample} depicts the metric disparities among different meta structures. More specifically, among all meta-paths, $\mathcal{MP}_1$ and $\mathcal{MP}_4$ make the highest and lowest contributions to the detection precision, respectively. In fact, when analyzing the decompiled code, we are able to extract far more API information than \texttt{.so} file information, so the relation matrix $\mathbb{A}$ is denser than $\mathbb{S}$ and thus contains more connection information for node embedding.
Observably, using meta-graphs achieves higher detection precision than purely using meta-paths, since a combination of meta-paths finds neighbors with closer affinity. Likewise, comparing with the results in Table~\ref{In-sample}, \textsc{MsGAT}\xspace, which involves the full set of semantic meta-structures, unsurprisingly outperforms any configuration where only a single semantic meta-structure is adopted. This implies that introducing sophisticated semantics is significantly meaningful for precisely uncovering the hidden associations between entities for better classification.
\mypara{Impact of the sampling neighbor number.} As shown in Fig.~\ref{neighbors}, the precision first improves within a certain range but descends once the number of sampling neighbors grows larger (surpassing four in our experimental setting).
In effect, increasing the number of neighbors provides more relevant and informative embeddings for the reference of new nodes. However, as neighbors accumulate, the noise introduced by irrelevant neighbors will in turn negatively impact the embedding aggregation, i.e., diminish the representation learning effectiveness. This reveals that gauging an appropriate number of neighbors is critical to the holistic performance of embedding incoming Apps and identifying their types. We choose 3 to 4 neighbors, which delivers good enough effectiveness, but one can tune the number either manually according to specific datasets or automatically via reinforcement learning; this is beyond the scope of this paper and left for future work.
\mypara{A case study of False Negative detection.}
The experiments also reveal that false negative results occasionally manifest; in other words, a small minority of malicious Apps may not be correctly identified by our model. For example, \texttt{VirusShare\_ecc4c2e7} and \texttt{VirusShare\_f21ff00cf} in \textit{v2013} bypass our detection. An in-depth investigation shows that the embedding of such a malicious App is assimilated by its benign neighbor nodes, which are overwhelming in the process of \textsc{MsGAT++}\xspace. In fact, since these malicious Apps have far fewer entities (no more than 30) than the others used in training (normally more than 200), the neighbors obtained by \textsc{Hawk}\xspace for these malicious Apps are sparser and tend to be benign Apps, resulting in inaccurate classification. To address this problem, we plan to employ a label-aware neighbor similarity measure based on node attributes to better navigate the neighbor selection and distinguish the malware more efficiently in the future. Nevertheless, \textsc{Hawk}\xspace achieves better detection accuracy than the up-to-date baselines, with far lower time consumption, particularly when detecting out-of-sample Apps.
\vspace{-0.5mm}
\section{Discussion}
\mypara{Interpretability.} \textsc{Hawk}\xspace is a data-driven modeling and detection mechanism based on a Heterogeneous Information Network and a network representation model empowered by Graph Attention Networks (GATs). The model's interpretability is significantly enhanced by the rich semantics, stemming from the combinations of meta-paths and meta-graphs in the \textsc{Hin}\xspace, and by the multi-tiered aggregation of attention from different semantics. Such an approach intrinsically outperforms SVM-based approaches such as Drebin~\cite{arp2014drebin} and Random Forest based approaches such as MaMaDroid~\cite{mariconti2016mamadroid}, which have inadequate interpretability.
\mypara{Scalability.} The current \textsc{Hin}\xspace-based data modeling is scalable and can be easily extended to arbitrary entities and relationships, as long as the semantics can be shown to benefit the detection process, either by domain knowledge or by experimental assessment.
In addition, since our design does not require any model rerun, scalability is inherently guaranteed when coping with sizable samples.
\mypara{Robustness to obfuscation.} The semantic meta-structures based on multiple entities -- including permissions, permission types, classes, interfaces, etc. -- can overcome the inefficiency of API-alone detection approaches and provide a robust and accurate mechanism for detecting potential malware in the face of API obfuscation, packing, or dataset skew (e.g., samples with less visible features, such as \texttt{.so} files in the dataset \textit{v2013}). In particular, the multi-tiered attention aggregation automatically sets the weights of different meta-paths and meta-graphs, thereby substantially reducing the impact of any single factor (e.g., API obfuscation) on the numerical embedding and increasing the capability of generalization over different datasets and scenarios.
\mypara{Model aging and decays.}
Concept drift (a.k.a. model aging or model decay) usually makes trained models fail on new testing samples, primarily due to the changed statistical properties of samples over time.
The existing work \cite{zhang2020enhancing, pendlebury2019tesseract, jordaney2017transcend} measured how a model performs over time in the face of concept drift, pinpointed the root causes of such drift and proposed enhanced approaches to improve model sustainability. However, active learning typically involves massive labeling of tens of thousands of malware samples, usually at a significant cost in human effort. By far, this issue is not the focus and objective of \textsc{Hawk}\xspace; in contrast, \textsc{MsGAT++}\xspace in \textsc{Hawk}\xspace aims to rapidly embed and detect out-of-sample Apps based upon the existing embedding results, assuming relatively stable statistical characteristics of the existing Apps. At present, model evolution is carried out by rerunning \textsc{MsGAT}\xspace, which is demonstrated to be acceptable in terms of accuracy and time consumption (detailed in \S\ref{sec:exp:efficiency}). More advanced mechanisms for improving model evolution are left for future work.
\begin{comment}
\textbf{Evaluation of Different Classifiers.}
In the above experimental tests, we only treat Logistic Regression as the classifier. Now, to evaluate the compatibility of our proposed system, we treat a variety of common models as \textsc{Hawk}\xspace's classifiers. In this part, both training and testing Apps are released in 2017. From the results given in Table~\ref{classifier}, we find that \textsc{Hawk}\xspace can be well combined with various commonly used classifiers, which proves that it has great compatibility.
\begin{table}[htb]
\centering
\caption{Evaluation Result of \textsc{Hawk}\xspace Using Other Common Classifiers.}
\label{classifier}
\renewcommand\arraystretch{1.6}
\footnotesize
\begin{tabular}{p{66pt}<{\centering}p{32pt}<{\centering}p{24pt}<{\centering}p{24pt}<{\centering}p{24pt}<{\centering}}\toprule
Classifier Name&$Precision$&$Recall$&$F1$&$Acc$\\
\hline
Random Forest&0.9827&0.9823&0.9823&0.9823\\
SVM&0.9839&0.9834&0.9834&0.9834\\
Decision Tree&0.9748&0.9748&0.9748&0.9748\\
AdaBoost&0.9816&0.9811&0.9811&0.9811\\
GaussianNB&0.9662&0.9662&0.9662&0.9662\\
Gradient Boosting Decision Tree&0.9827&0.9823&0.9823&0.9822\\
Linear Discriminant Analysis&0.9845&0.9840&0.9840&0.9840\\
\toprule
\end{tabular}
\end{table}
\end{comment}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{W}{ITH} the highest market share worldwide on mobile devices, Android is experiencing unprecedented dependability issues.
Due to Android's extensibility and openness of development, users are put at high risk of a variety of threats and illegal operations from malicious software (i.e., \textit{malware}), including privacy violations, data leakage, advertisement spam, etc.
Common Vulnerabilities and Exposures (CVE) reveals 414 Android vulnerabilities that can be easily exploited in realistic environments.
This phenomenon calls for more reliable and accessible detection techniques.
Conventionally, Android Applications (Apps) are analyzed by either static analysis, through pre-determined signatures/semantic artifacts, or dynamic analysis, through multi-level instrumentation~\cite{ye2017survey}. However, static analysis can be invalidated by simple obfuscation, while dynamic analysis heavily depends on OS versions and the Android runtime, and is inherently costly and time-consuming.
To tackle this, numerous machine-learning based detection techniques~\cite{2mclaughlin2017deep, li2018significant,hou2016droiddelver, dimjavsevic2015android,hou2016deep4maldroid, alzaylaee2020dl,wang2020deep} typically leverage feature engineering to extract key malware features and apply classification algorithms -- each App is represented as a vector -- to distinguish benign software from malicious software. Nevertheless, these approaches often fail to capture emerging malware that either evolves its camouflage and attack types or deliberately hides certain features\footnote{https://www.mcafee.com/blogs/other-blogs/mcafee-labs}.
Hence, it is imperative to build an inductive and rapid mechanism for constantly capturing software evolution and detecting malware without heavily relying on domain-specific feature selection.
Graph neural network (GNN), which models the relationships between entities, is developing rapidly in both theoretical~\cite{kipf2016semi,velivckovic2017graph,PEGNN,Haar} and applied fields~\cite{peng2021streaming,peng2019hierarchical}. Heterogeneous information network (\textsc{Hin}\xspace)~\cite{sun2011pathsim,peng2021lime}, as a special case of graph neural network, has been widely adopted in many areas such as operating systems, Internet of Things and cyber-security by exploiting the abundant node and relational semantic information before embedding it into representation vectors~\cite{fan2018gotcha,3hou2017hindroid,mgnet,wang2019attentional}. More specifically, in the context of malware detection, if $App_1$ and $App_2$ share permission \texttt{SEND\_SMS} while $App_2$ and $App_3$ share permission \texttt{READ\_SMS}, a \textsc{Hin}\xspace is able to capture the implicit semantic relationship between $App_1$ and $App_3$, which can hardly be achieved by feature engineering based approaches. \textsc{Hin}\xspace-based modelling is all the more meaningful because it is extremely difficult for malware developers to hide such implicit relationships~\cite{3hou2017hindroid}. While promising, \textsc{Hin}\xspace is inherently concerned with static networks/graphs~\cite{ye2018aidroid}. The complication, however, is how to efficiently embed the \textit{out-of-sample} nodes (i.e., incoming nodes outside the established \textsc{Hin}\xspace). Considering the continuous software updates and the huge volume of Apps, it is impossible to involve all Apps in the stage of \textsc{Hin}\xspace construction and inefficient to re-construct the entire embedding model whenever new Apps emerge. This drawback impedes the practicality and the scale at which this native technique can operate.
Although AiDroid~\cite{ye2018aidroid} attempts to tackle this problem and represents each out-of-sample App with a convolutional neural network (CNN)~\cite{simonyan2014very}, it requires multiple heavy convolution operations, resulting in non-negligible time inefficiency.
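The \texttt{SEND\_SMS}/\texttt{READ\_SMS} example above can be made concrete with a toy App-Permission incidence matrix (the Apps and permissions follow the text; the matrix itself is illustrative):

```python
# Toy illustration of the implicit relation captured by a HIN: App1 and
# App3 share no permission directly, yet a 2-hop walk along the
# App-Permission-App meta-path still connects them through App2.
import numpy as np

apps = ["App1", "App2", "App3"]
perms = ["SEND_SMS", "READ_SMS"]
# binary App-Permission incidence matrix P (rows: apps, cols: permissions)
P = np.array([[1, 0],     # App1 holds SEND_SMS
              [1, 1],     # App2 holds SEND_SMS and READ_SMS
              [0, 1]])    # App3 holds READ_SMS

# commuting matrix of the App-Permission-App meta-path
APA = P @ P.T
print(APA[0, 2])          # 0: no directly shared permission between App1, App3
print((APA @ APA)[0, 2])  # 1: implicit high-order relation via App2
```

A flat feature vector per App would miss this link, since no single feature is shared between App1 and App3.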
In this paper, we present \textsc{Hawk}\xspace, a novel Android malware detection framework with the aid of network representation learning model and \textsc{Hin}\xspace to explore abundant but hidden semantic information among different Apps. In particular, we extract seven types of Android entities -- including App, permission, permission type, API, class, interface and \texttt{.so} file -- from the decompiled Android application package (APK) files and establish a \textsc{Hin}\xspace mainly through transforming entities and their relationships into nodes and edges, respectively. We exploit rich semantic meta structures as the templates to define relation sequence between two entity types.
This includes both meta-paths~\cite{dong2017metapath2vec} and meta-graphs~\cite{zhang2018metagraph2vec}, which specify the implicit relationships among heterogeneous entities. Each meta structure corresponds to an adjacency matrix associated with a homogeneous graph; the graph only contains App nodes and is the target of the malware detection procedure.
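As an illustration of how a meta structure yields a homogeneous App-App adjacency matrix, the sketch below uses the common commuting-matrix construction for meta-paths and, as an assumed combination rule, an element-wise product for a meta-graph that requires Apps to be related under both constituent paths:

```python
# Turning meta-structures into homogeneous App-App adjacency matrices.
# The incidence matrices and the element-wise combination rule for the
# meta-graph are illustrative assumptions, not the paper's exact recipe.
import numpy as np

rng = np.random.default_rng(3)
A_api  = (rng.random((6, 40)) < 0.2).astype(int)   # App-API incidence
A_perm = (rng.random((6, 10)) < 0.3).astype(int)   # App-Permission incidence

mp_api  = A_api  @ A_api.T     # App-API-App meta-path
mp_perm = A_perm @ A_perm.T    # App-Permission-App meta-path
mg = mp_api * mp_perm          # meta-graph: Apps related under BOTH paths
np.fill_diagonal(mg, 0)        # drop self-loops
print(mg.shape)                # (6, 6) homogeneous App graph
```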
At the core of \textsc{Hawk}\xspace is the numerical embedding of all App entities, which can then be fed into a binary classifier. In particular, \textsc{Hawk}\xspace involves two distinct learning models for in-sample and out-of-sample nodes, respectively. To embed an in-sample App, we propose \textsc{MsGAT}\xspace, a meta structure guided graph attention network mechanism~\cite{vaswani2017attention} that incorporates the embeddings of an App's neighbors within each meta structure and integrates the embedding results of different meta structures into the final node embedding. This design takes into account not only the informative connectivity of neighbor nodes but also the diverse semantic implications of different entity relationships. In addition, to efficiently embed an out-of-sample App, we present \textsc{MsGAT++}\xspace, a new incremental learning model upon \textsc{MsGAT}\xspace that makes good use of the embeddings of certain existing nodes. Given a specific meta structure and its corresponding graph, our model first pinpoints the set of in-sample App nodes most similar to the target new node, and then aggregates their embedding vectors to form the node embedding under this meta structure.
Likewise, we assign particular weights to the individual embedding vectors of each meta structure and aggregate them to obtain the final embedding. This incremental design quickly calculates the embedding based on the established \textsc{Hin}\xspace structures, without re-learning the holistic embedding for all nodes, thereby significantly improving training efficiency and model scalability.
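A hedged sketch of this incremental idea: per meta structure, average the embeddings of the $\sigma$ most similar in-sample Apps, then mix the per-structure vectors with fixed illustrative weights (\textsc{MsGAT++}\xspace learns such weights via attention; the function name and all data below are assumptions):

```python
# Sketch of incremental out-of-sample embedding: per meta-structure, take
# the sigma most similar in-sample Apps and form a similarity-weighted
# average of their embeddings, then mix across meta-structures.
import numpy as np

def incremental_embed(sims_per_ms, embs, ms_weights, sigma=4):
    """sims_per_ms: one (n,) similarity vector per meta-structure;
    embs: (n, d) in-sample embeddings; ms_weights: per-structure weights."""
    parts = []
    for sims in sims_per_ms:
        top = np.argsort(sims)[::-1][:sigma]        # top-sigma in-sample Apps
        w = sims[top] / (sims[top].sum() + 1e-12)   # normalize within structure
        parts.append(w @ embs[top])
    ms_w = np.asarray(ms_weights, float)
    ms_w = ms_w / ms_w.sum()
    return ms_w @ np.vstack(parts)                  # inter-meta-structure mix

rng = np.random.default_rng(4)
embs = rng.normal(size=(50, 8))
sims = [rng.random(50) for _ in range(3)]           # three meta-structures
z = incremental_embed(sims, embs, ms_weights=[0.5, 0.3, 0.2])
print(z.shape)  # (8,)
```

No retraining touches the existing 50 embeddings, which is the source of the speed-up discussed in the evaluation.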
We demonstrate the effectiveness and efficiency of \textsc{Hawk}\xspace based on 80,860 malicious and 100,375 benign Apps collected and decompiled across VirusShare, CICAndMal and Google AppStore. Experiments show that \textsc{Hawk}\xspace outperforms all baselines in terms of accuracy and F1 score, indicating its effectiveness and suitability for malware detection at scale. It takes merely 3.5 milliseconds on average to detect an out-of-sample App, with training time accelerated by 50$\times$ against the native approach that rebuilds the \textsc{Hin}\xspace and reruns \textsc{MsGAT}\xspace. To enable replication and foster research, we make \textsc{Hawk}\xspace publicly available at \texttt{\url{github.com/RingBDStack/HAWK}}. This paper makes the following contributions:
$\bullet$ It examines 200,000+ Android Apps and decompiles 180,000+ APKs, spanning seven years across multiple open repositories. This discloses abundant data sources to establish the \textsc{Hin}\xspace and uncovers the hidden high-order semantic relationships among Apps (\S~\ref{sec:hinconstr}).
$\bullet$ It presents a meta-structure guided attention mechanism based on \textsc{Hin}\xspace for node embedding, fully exploiting neighbor nodes within and across meta structures (\S~\ref{sec:models:insample}). Experiments show that capturing these semantics supports excellent forward- and backward-compatible detection capabilities.
$\bullet$ It proposes an incremental aggregation mechanism for rapidly learning the embeddings of out-of-sample Apps, without compromising the quality of numerical embedding or detection effectiveness (\S~\ref{sec:models:outofsample}).
\mypara{Organization.} \S~\ref{sec:background} depicts the motivation and outlines the system overview. \S~\ref{sec:hinconstr} discusses the procedure of feature engineering and data reshaping by leveraging \textsc{Hin}\xspace while \S~\ref{sec:models} details the core techniques to tackle in-sample and out-of-sample malware detection. Experimental set-up and results are presented in \S~\ref{sec:exp_setup} and \S~\ref{sec:exp}. Related work is discussed in \S~\ref{sec:secRework} before we conclude the paper and discuss the future work.
\begin{comment}
\begin{figure}[t]
\centerline{\includegraphics[width=2.4in]{figs/App-per-App.pdf}}
\caption{A simple Android HIN.}
\label{AppperApp}
\end{figure}
\end{comment}
\section{Node Embedding Models}
\label{sec:models}
\subsection{\textsc{MsGAT}\xspace: In-Sample Node Embedding} \label{sec:models:insample}
\vspace{-0.4mm}
We introduce a series of innovative Graph Attention Network (GAT) optimizations enhanced by meta-structures -- we employ the attention mechanism~\cite{vaswani2017attention} among neighbor nodes within a given meta-structure (\textit{intra-ms}) and coordinate the attention among different meta-structures (\textit{inter-ms}).
Fig.~\ref{fig:models} depicts the flowchart of our models, and the important notations used in them are outlined in Table~\ref{tab:symbols}.
\mypara{Intra-ms aggregation.} Intra-ms aggregation learns how a node pays different attention to its neighbor nodes in a graph pertaining to a meta-structure. Formally, it aggregates the neighbors' representation vectors with weights that account for the feature information of entities and the edge information between them.
To do so, we initially encode each in-sample App as a one-hot vector and concatenate these vectors into a matrix $H$. $H_{i\cdot}$, the $i$th row of $H$, represents the embedding vector of the $i$th App node. Thereafter, we design an edge-weight aware GAT model (EGAT) to combine $H$ and the adjacency matrix pertaining to a given meta-structure $\mathcal{M}_k$.
To implement the EGAT model, feature information and edge weight information are fully utilized to aggregate features from neighbors. More specifically, we first
construct the adjacency matrix $\Psi^{\mathcal{M'}_k}$ with a normalization operation:
\begin{align}
\label{eqmatrix}
\small
\Psi^{\mathcal{M'}_k} = Normalize(H \cdot H^{T} \odot \Psi^{\mathcal{M}_k}),
\end{align}
and elements in $\Psi^{\mathcal{M'}_k}$ that are lower than a pre-defined threshold $\tau$ ($\tau$ is set to 0.1 in our model) will be set to zero.
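As a concrete illustration, the refinement step of Eq.~\ref{eqmatrix} can be sketched in a few lines of NumPy; the row-wise normalization and the zeroing below $\tau$ reflect our reading of the (otherwise unspecified) $Normalize$ operation, and the toy matrices are hypothetical:

```python
import numpy as np

def refine_adjacency(H, Psi, tau=0.1):
    """Sketch of Eq. (eqmatrix): Hadamard-combine the feature affinity
    H @ H.T with the meta-structure adjacency Psi, normalize row-wise
    (an assumption), then zero entries below the threshold tau."""
    raw = (H @ H.T) * Psi                      # Hadamard product
    row_sums = raw.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # guard empty rows
    refined = raw / row_sums                   # assumed normalization
    refined[refined < tau] = 0.0               # prune weak links
    return refined

# toy example: 3 Apps with 2-dimensional (one-hot style) features
H = np.array([[1., 0.], [1., 0.], [0., 1.]])
Psi = np.array([[0., 5., 1.],
                [5., 0., 0.],
                [1., 0., 0.]])
print(refine_adjacency(H, Psi))
```

Note how the App$_1$--App$_3$ edge of $\Psi$ is pruned because their feature vectors disagree, illustrating the feature-aware filtering effect of the Hadamard product.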
Thereafter, we compute $\Phi^{\mathcal{M}_k}$ with the GAT model~\cite{velivckovic2017graph}:
\begin{align}
\label{gat}
\footnotesize
\Phi^{\mathcal{M}_k} = GAT(H; \Psi^{\mathcal{M'}_k}).
\end{align}
Eventually, the low-dimensional vector embedding for all in-sample App nodes, in the form of a matrix $\Phi^{\mathcal{M}_k}$ whose rows are the node vectors, is obtained in this stage.
We then repeat this calculation for all pre-defined meta-structures and obtain a collection of embedding matrices, i.e., [$\Phi^{\mathcal{M}_1}, \dots , \Phi^{\mathcal{M}_K}$], where $K$ is the total number of meta-structures.
Concretely, the embedding matrix $\Phi^{\mathcal{M}_k}$ is of shape $L \times D$, where $L$ denotes the number of in-sample Apps in the \textsc{Hin}\xspace and $D$ denotes the dimension of each App vector. As a result, the embedding of App$_i$ node can be identified as the $i$th row, i.e., $\Phi^{\mathcal{M}_k}_{i\cdot}$.
\mypara{Inter-ms aggregation.} Since each
meta-structure provides an individual semantic view, we propose an \textit{inter-ms} attention aggregation to integrate the embeddings [$\Phi^{\mathcal{M}_1}, \dots , \Phi^{\mathcal{M}_K}$] under different semantics and thus enhance the quality of node embedding.
Specifically, we exploit a multi-layer perceptron (MLP) procedure for learning the weight $\beta^{\mathcal{M}_k}$ of each meta-structure $\mathcal{M}_k$ in the fusion:
\begin{align}
\label{eq3}
\footnotesize
(\beta^{\mathcal{M}_1},\dots, \beta^{\mathcal{M}_K}) = \operatorname{ softmax}(\operatorname{NN}(\Phi^{\mathcal{M}_1}),\dots, \operatorname{NN}(\Phi^{\mathcal{M}_K})),
\end{align}
where $\operatorname{NN}$ is a simple neural network that maps a given matrix to a scalar value.
Consequently, the final embedding for all in-sample App nodes can be obtained through adding up the weighted representation matrices:
\begin{align}
\label{eq4}
\footnotesize
\Phi = \sum_{k=1}^K \beta^{\mathcal{M}_k} \cdot \Phi^{\mathcal{M}_k}.
\end{align}
We then pass $\Phi$ on to another neural network so that the loss between the network's outputs and the ground-truth labels can be minimized via iterative back-propagation.
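The inter-ms fusion of Eqs.~\ref{eq3} and \ref{eq4} can be sketched as follows; scoring each $\Phi^{\mathcal{M}_k}$ with a linear map over its mean-pooled rows is a hypothetical choice of $\operatorname{NN}$, since its architecture is left open above:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def inter_ms_fuse(Phis, w):
    """Sketch of Eqs. (eq3)-(eq4): score each per-meta-structure
    embedding matrix with a tiny NN (here: a linear map over the
    mean-pooled rows -- an assumption), softmax the scores into
    attention weights beta, and return the weighted sum."""
    scores = np.array([float(np.mean(Phi @ w)) for Phi in Phis])
    beta = softmax(scores)                              # Eq. (eq3)
    fused = sum(b * Phi for b, Phi in zip(beta, Phis))  # Eq. (eq4)
    return beta, fused

rng = np.random.default_rng(0)
K, L, D = 3, 4, 8
Phis = [rng.standard_normal((L, D)) for _ in range(K)]
beta, Phi = inter_ms_fuse(Phis, rng.standard_normal(D))
print(beta.sum(), Phi.shape)
```

The attention weights $\beta^{\mathcal{M}_k}$ sum to one, so the fused matrix $\Phi$ keeps the same $L \times D$ shape as each input.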
\subsection{\textsc{MsGAT++}\xspace: Incremental Embedding}
\label{sec:models:outofsample}
To best embed unknown Apps not included in the training procedure, we present \textsc{MsGAT++}\xspace, an incremental learning mechanism that utilizes the in-sample embedding already learned by \textsc{MsGAT}\xspace to rapidly represent out-of-sample Apps. For clarity, we use $v_{out}$ to generically denote any out-of-sample node outside the \textsc{Hin}\xspace.
\mypara{Exploring node similarity.} Pinpointing the underlying connections between new nodes and existing nodes in the \textsc{Hin}\xspace plays a pivotal role in providing rapid numerical representation and cost-effective malware detection. To do so, it is imperative to calculate and accumulate the similarity between $v_{out}$ and existing nodes.
Following the methodology presented in \cite{gao2020hincti}, the node similarity between nodes $v_i$ and $v_j$ under a given meta-path is defined as:
\begin{align}
\label{eq-1}
\footnotesize
Sim^{\mathcal{MP}}(v_i, v_j) = \frac{2*\Psi^{\mathcal{MP}}_{ij}}{\Psi^{\mathcal{MP}}_{ii} + \Psi^{\mathcal{MP}}_{jj}},
\end{align}
where $\Psi^{\mathcal{MP}}_{ij}$ counts the meta-path instances connecting the two nodes, so a higher similarity indicates a tighter association between them. Accordingly, the node similarity between nodes $v_i$ and $v_j$ under a meta-graph $\mathcal{MG}$ is:
\begin{align}
\label{eq0}
\footnotesize
Sim^{\mathcal{MG}}(v_i, v_j) = Sim^{\mathcal{MP}_1}(v_i, v_j) \odot ... \odot Sim^{\mathcal{MP}_m}(v_i, v_j).
\end{align}
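For instance, Eqs.~\ref{eq-1} and \ref{eq0} translate directly into code; the $2\times 2$ adjacency matrix below is hypothetical, with diagonal entries playing the role of the self-pathway counts $\Psi^{\mathcal{MP}}_{ii}$:

```python
import numpy as np

def sim_mp(Psi, i, j):
    """Meta-path similarity of Eq. (eq-1)."""
    return 2.0 * Psi[i, j] / (Psi[i, i] + Psi[j, j])

def sim_mg(Psis, i, j):
    """Meta-graph similarity of Eq. (eq0): element-wise product of
    the per-meta-path similarities."""
    s = 1.0
    for Psi in Psis:
        s *= sim_mp(Psi, i, j)
    return s

Psi = np.array([[4., 2.],
                [2., 6.]])
print(sim_mp(Psi, 0, 1))           # 2*2 / (4+6) = 0.4
print(sim_mg([Psi, Psi], 0, 1))    # 0.4 * 0.4
```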
\mypara{Incremental aggregation for embedding learning}. The initial task is to capture the incremental relationships and construct the graph information. Within a given meta-structure, we aim to only update an adjacency matrix that quantifies the connectivity between the out-of-sample nodes and the existing in-sample App nodes. This should be done in an \textit{incremental} manner to reduce the training cost. In practice, we first repeat the steps described in \S~\ref{sec:hinconstr:relationship} to calculate all relation matrices in Table~\ref{tab:mat}, but only for out-of-sample App nodes. Secondly, we concatenate the relation matrices of new App nodes with those of existing App nodes to form an incremental segment of the node adjacency $\widehat{\Psi}^{\mathcal{M}_k}$ -- a pathway from an in-sample App node to a new node.
Take $\mathcal{MP}_1$ as an example; we first obtain the relation matrix $\mathbb{A}_{out}$ for all new nodes and then generate the matrix by
$ \widehat{\Psi}^{\mathcal{M}_1} = \mathbb{A}_{in} \cdot \mathbb{A}^T_{out}$.
This design ensures the incremental adjacency matrix $\widehat{\Psi}^{\mathcal{M}_k}$ can function independently from the established adjacency matrix ${\Psi}^{\mathcal{M}_k}$ whilst they together serve as the holistic abstract of connectivity among all nodes.
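Taking $\mathcal{MP}_1$ as above, the incremental segment reduces to a single matrix product of App--API incidence matrices (the matrices below are hypothetical toy data):

```python
import numpy as np

# 0/1 relation matrices for [R1] (rows = Apps, cols = APIs)
A_in  = np.array([[1, 1, 0],     # two in-sample Apps over three APIs
                  [0, 1, 1]])
A_out = np.array([[1, 0, 1]])    # one out-of-sample App

# hat{Psi}^{M_1} = A_in . A_out^T: pathways from each in-sample App
# to each new App; it extends Psi^{M_1} without recomputing it.
Psi_hat = A_in @ A_out.T
print(Psi_hat)                   # [[1]
                                 #  [1]]
```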
We propose \textsc{MsGAT++}\xspace to assign numerical embeddings to new nodes whilst calibrating existing nodes' representations. Similar to
\textsc{MsGAT}\xspace, the model consists of two steps: \textit{intra-ms} and \textit{inter-ms} aggregation.
Given a semantic meta-structure $\mathcal{M}_k$, we substitute $\widehat{\Psi}^{\mathcal{M}_k}$
into Eq.~\ref{eq-1} or Eq.~\ref{eq0} to calculate $Sim^{\mathcal{M}_k}(v_j, v_{out})$, the similarity between a new node $v_{out}$ and any in-sample App node $v_j$. Repeating this for all out-of-sample nodes and all in-sample App nodes forms a similarity matrix $\mathbb{X}^{\mathcal{M}_k}$, where a larger value inherently indicates a closer proximity between two nodes. Accordingly, we obtain a collection of similarity matrices for all meta-structures \{$\mathbb{X}^{\mathcal{M}_1},\dots, \mathbb{X}^{\mathcal{M}_K}$\}.
\begin{algorithm}[t]
\small
\caption{Incremental embedding algorithm in \textsc{MsGAT++}\xspace}
\label{alg:msgatplus}
\begin{algorithmic}[1]
\REQUIRE An out-of-sample App $v_{out}$ \\
\ENSURE $v_{out}$'s vector embedding $\widehat{\Phi}_{v_{out}}$ and the updated embedding matrix $\Phi$ for existing in-sample App nodes \\
\FOR {$ k \in \{1, ... , K\}$}
\STATE \textit{// select $\sigma$ in-sample App nodes with the highest similarity}
\STATE $\{v_{n1}, \dots, v_{n\sigma}\}$ $\gets$ {\tt DescendSort}($\mathbb{X}^{\mathcal{M}_k}$).\texttt{topK}($\sigma$)
\STATE \textit{// Calculate the weights}
\STATE \{$\alpha_{v_{n1}}^{\mathcal{M}_k}, \dots, \alpha_{v_{n\sigma}}^{\mathcal{M}_k}$\} $\gets$ Eq.~\ref{eq:plus_weight}
\STATE \textit{// Calculate the embedding of $v_{out}$ under $\mathcal{M}_k$}
\STATE $\widehat{\Phi}_{v_{out}}^{\mathcal{M}_k} \gets$ Eq.\ref{eq:plus_aggr}.
\ENDFOR
\STATE \textit{// Embedding fusion from all meta structures}
\STATE $\widehat{\Phi}_{v_{out}} \gets$ Eq.~\ref{eq:plus_final}
\RETURN $\widehat{\Phi}_{v_{out}}$, $\Phi$
\end{algorithmic}
\end{algorithm}
To better represent the new node as a numerical vector, we should fully aggregate the embeddings of existing nodes in close proximity to it. To this end,
we select the top-$\sigma$ in-sample App nodes ($v_{n1}, \dots, v_{n\sigma}$) based on the similarity matrix $\mathbb{X}^{\mathcal{M}_k}$, and aggregate their vectors into
the embedding of the new node:
\begin{align}
\label{eq:plus_aggr}
\footnotesize
\widehat{\Phi}_{v_{out}}^{\mathcal{M}_k} = \sum_{s=1}^\sigma \alpha_{v_{ns}}^{\mathcal{M}_k} \cdot \Phi^{\mathcal{M}_k}_{v_{ns}},
\end{align}
where $\alpha_{v_{ns}}^{\mathcal{M}_k}$ denotes the weight of the node $v_{ns}$ ($v_{ns} \in \{v_{n1}, \dots, v_{n\sigma}\}$) under $\mathcal{M}_k$ and $\widehat{\Phi}$ denotes the incremental embedding information for the out-of-sample node exclusively. The weight can be easily calculated by:
\begin{align}
\label{eq:plus_weight}
\footnotesize
\alpha_{v_{ns}}^{\mathcal{M}_k} = \frac{Sim^{\mathcal{M}_k}(v_{out}, v_{ns})}{\sum_{t=1}^\sigma Sim^{\mathcal{M}_k}(v_{out}, v_{nt})}.
\end{align}
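Eqs.~\ref{eq:plus_weight} and \ref{eq:plus_aggr} amount to a similarity-weighted average over the top-$\sigma$ neighbours, as in this sketch (the toy numbers are hypothetical):

```python
import numpy as np

def embed_out_of_sample(sim_row, Phi_k, sigma=2):
    """Sketch of Eqs. (eq:plus_weight)-(eq:plus_aggr) for one
    out-of-sample App under one meta-structure M_k.

    sim_row : (N,) similarities to the N in-sample Apps (row of X^{M_k})
    Phi_k   : (N, D) in-sample embedding matrix Phi^{M_k}
    """
    top = np.argsort(sim_row)[::-1][:sigma]       # top-sigma neighbours
    alpha = sim_row[top] / sim_row[top].sum()     # Eq. (eq:plus_weight)
    return alpha @ Phi_k[top]                     # Eq. (eq:plus_aggr)

sim_row = np.array([0.1, 0.6, 0.3])
Phi_k = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
v = embed_out_of_sample(sim_row, Phi_k, sigma=2)
print(v)    # (2/3)*[0,1] + (1/3)*[1,1]
```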
Eventually, we re-calibrate the embedding by conducting \textit{inter-ms} aggregation over $K$ individual representations under all meta-structures:
\begin{align}\label{eq:plus_final}
\footnotesize
\widehat{\Phi}_{v_{out}} = \sum_{k=1}^K \beta^{\mathcal{M}_k} \cdot \widehat{\Phi}_{v_{out}}^{\mathcal{M}_k},
\end{align}
where $\beta^{\mathcal{M}_k}$ can be obtained from Eq.~\ref{eq3} (in practice, these weights are fine-tuned to improve the performance of our model). Alg.~\ref{alg:msgatplus} outlines the whole procedure of our rapid incremental embedding learning for malware detection.
\mypara{Time complexity.} Alg.~\ref{alg:msgatplus} demonstrates a simple but efficient approach with an acceptable complexity. The overall complexity is $\mathcal{O}(KLN\log N)$, where $K$ and $L$ are the number of meta-structures and the number of out-of-sample Apps, respectively, while $N$ represents the number of in-sample Apps.
\section{\textsc{Hin}\xspace Based Data Modelling}
\label{sec:hinconstr}
\vspace{-0.1mm}
\subsection{Feature Engineering}
\label{sec:hinconstr:fe}
An Android application needs to be packaged in the APK (Android application package) format and installed on the Android system. An APK file contains code files, the AndroidManifest.xml configuration file, the signature and verification information, the lib directory (containing platform-dependent compiled code) and other resource files.
To better analyze Android Apps, reverse-engineering tools (e.g., APKTool\footnote{https://ibotpeaches.github.io/Apktool}) are widely leveraged to decompile APK files so that the \texttt{.dex} source file can be decompiled into \texttt{.smali} files. To describe the key characteristics of an App, we extract the following six types of entities:
$\bullet$ \textbf{Permission (P):} The permission determines specific operations that an App can perform. For example, only Apps with the \texttt{READ\_SMS} permission can access the user's SMS messages.
$\bullet$ \textbf{Permission Type (PT):} The permission type \footnote{https://developer.android.google.cn/guide/topics/permissions} describes the category of a given permission. Table~\ref{Table-per} outlines the permission types and representative permissions.
$\bullet$ \textbf{Class (C):} A class is an abstract module in Android code, where APIs and variables can be directly accessed. \textsc{Hawk}\xspace uses the class name in \texttt{.smali} code to represent a class.
$\bullet$ \textbf{API:} Application Programming Interface (API) provisions the callable function in Android development environment.
$\bullet$ \textbf{Interface (I):} The interface refers to an abstract data structure in Java. We extract the name from \texttt{.smali} files.
$\bullet$ \textbf{.so file (S):} A \texttt{.so} file is Android's dynamic link library, which can be extracted from the decompiled lib folder.
\begin{table}[t]
\centering
\caption{Categories of Representative Permissions}
\label{Table-per}
\renewcommand\arraystretch{1.4}
\vspace{-1mm}
\footnotesize
\scalebox{0.94}{
\begin{tabular}{p{20pt}<{\centering}p{208pt}<{\centering}}
\toprule
\textbf{Type}&\textbf{Representative Permissions}\\
\hline
NORMAL&ACCESS\_NETWORK\_STATE, ACCESS\_WIFI\_STATE\\
CONTACTS&WRITE\_CONTACTS, GET\_ACCOUNTS\\
PHONE&READ\_CALL\_LOG, READ\_PHONE\_STATE\\
CALENDAR&READ\_CALENDAR, WRITE\_CALENDAR\\
LOCATION&ACCESS\_FINE\_LOCATION, ACCESS\_COARSE\_LOCATION\\
STORAGE&READ\_EXTERNAL\_STORAGE, WRITE\_EXTERNAL\_STORAGE\\
SMS&READ\_SMS, RECEIVE\_MMS, RECEIVE\_SMS\\
\bottomrule
\end{tabular}
}
\end{table}
Following this methodology, we downloaded over 200,000 APKs from open repositories; after de-duplication and decompilation, 181,235 APKs were retained. 63,902 entities were then selected according to \cite{li2018significant}. This provides abundant data sources for establishing the \textsc{Hin}\xspace and mining intrinsic semantics.
\vspace{-1.5mm}
\subsection{Constructing \textsc{Hin}\xspace}
\vspace{-0.8mm}
\label{sec:hinconstr:relationship}
\mypara{Extracting entity relationships into a \textsc{Hin}\xspace}. A meta-schema is a meta-level template that defines the relationship and type constraints of nodes and edges in the \textsc{Hin}\xspace. As shown in Fig.~\ref{fig:metafig}(a), we design a meta-schema that encodes the necessary relationships between Android entities.
Based on the domain knowledge, we elaborately examine the following inherent semantic relationships:
$\bullet$ \textbf{[R1] App-API} indicates an App \textit{has} a specific API. The relationship between Apps and APIs is effective for uncovering and representing links between two Apps~\cite{3hou2017hindroid}.
$\bullet$ \textbf{[R2] App-Permission} specifies an App \textit{owns} a specific permission. Apps with permissions such as \texttt{READ\_SMS}, \texttt{SEND\_SMS}, \texttt{WRITE\_SMS} are strongly correlative~\cite{li2018significant}. If \texttt{SEND\_SMS} is shared between $App_1$ and $App_2$ and \texttt{READ\_SMS} is shared between $App_2$ and $App_3$, an implicit association between $App_1$ and $App_3$ is highly likely to manifest.
$\bullet$ \textbf{[R3] Permission-PermissionType} describes the permission \textit{belongs to} a specific permission type. Normally, permissions can be categorized into different types \footnote{https://developer.android.google.cn/guide/topics/permissions}.
$\bullet$ \textbf{[R4] App-Class} means
the App \textit{includes} a specific class in the external SDK. A malware tends to generate instances by using classes in a vicious SDK \footnote{https://research.checkpoint.com/2019/simbad-a-rogue-adware-campaign-on-google-play}.
$\bullet$ \textbf{[R5] App-Interface} indicates the App \textit{includes} the specific interface in the external SDK.
$\bullet$ \textbf{[R6] App-.so} denotes the App \textit{has} a specific \texttt{.so} file. \cite{fan2018gotcha} demonstrates the effectiveness of associating dynamic link libraries with software in Windows system.
Fig.~\ref{figmodel} depicts a \textsc{Hin}\xspace that contains two Apps and their semantic relationships. For instance, App$_1$ has the API \texttt{Ljava/net/URL/openConnection}. Both App$_1$ and App$_2$ own the Class \texttt{Ljava/io/PrintStream}.
The permission \texttt{READ\_SMS} belongs to the permission type \texttt{SMS}, etc.
\mypara{Storing entity relationships}. We use a relation matrix to store each relationship individually. For instance,
we generate a matrix $\mathbb{A}$ whose element $\mathbb{A}_{i,j}$ indicates whether App$_i$ contains API$_j$. Intuitively, the transpose of a matrix depicts the backward relationship, e.g., API$_j$ belongs to App$_i$. As summarized in Table~\ref{tab:mat}, six matrices are used to represent and store the relationships \textbf{[R1]} to \textbf{[R6]}.
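Such a relation matrix is straightforward to build from decompiled artifacts; the App and API names in this sketch are made up for illustration:

```python
import numpy as np

def relation_matrix(apps, items):
    """Build a 0/1 relation matrix such as A for [R1] App-API:
    entry (i, j) is 1 iff App i contains item j (cf. Table tab:mat)."""
    M = np.zeros((len(apps), len(items)), dtype=int)
    for i, owned in enumerate(apps):
        for j, item in enumerate(items):
            M[i, j] = int(item in owned)
    return M

apps = [{"openConnection", "sendTextMessage"},   # App_1
        {"openConnection"}]                      # App_2
apis = ["openConnection", "sendTextMessage"]
A = relation_matrix(apps, apis)
print(A)        # [[1 1]
                #  [1 0]]
print(A.T[1])   # backward relationship: Apps owning API_2
```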
Nevertheless, it is necessary to obtain the connectivity between two Apps if there are sophisticated semantic links, i.e., higher-order relationships.
\begin{table}[t]
\centering
\footnotesize
\caption{Descriptions of relation matrices.}
\vspace{-0.8em}
\label{tab:mat}
\renewcommand\arraystretch{1.5}
\scalebox{0.95}{
\begin{tabular}{p{21pt}<{\centering}p{16pt}<{\centering}p{195pt}}\toprule
Relation& Matrix& Description\\
\hline
\textbf{R1}&$\mathbb{A}$ & if App $i$ \textit{contains} the API $j$, $\mathbb{A}_{i,j}$ is 1;
otherwise 0.\\
\textbf{R2}&$\mathbb{P}$ & if App $i$ \textit{has} the permission $j$, $\mathbb{P}_{i,j}$ is 1;
otherwise 0.\\
\textbf{R3}&$\mathbb{T}$ & if the type of permission $i$ is $j$, $\mathbb{T}_{i,j}$ is 1;
otherwise 0.\\
\textbf{R4}&$\mathbb{C}$ & if App $i$ owns the Class $j$, $\mathbb{C}_{i,j}$ is 1;
otherwise 0.\\
\textbf{R5}&$\mathbb{I}$ & if App $i$ uses the interface $j$, $\mathbb{I}_{i,j}$ is 1; otherwise 0.\\
\textbf{R6}&$\mathbb{S}$ & if App $i$ calls the so file $j$, $\mathbb{S}_{i,j}$ is 1;
otherwise 0.\\
\bottomrule
\end{tabular}
}
\end{table}
\vspace{-0.3mm}
\subsection{Constructing App Graph from \textsc{Hin}\xspace}
\vspace{-0.5mm}
\label{sec:hinconstr:appgraph}
To form a homogeneous graph that only contains App nodes, the key step is to incorporate the relationships between the App entity and other entities into the combined connectivity between Apps. To ascertain the hidden higher-order semantics, we calculate Apps' proximity by exploiting a meta-path or meta-graph within a given \textsc{Hin}\xspace and then obtain the node adjacency matrix for the graph. In other words, given a meta-structure, the \textsc{Hin}\xspace can be converted to an exclusive homogeneous graph in which each node has meta-structure specific neighbor nodes.
In fact, a \textit{meta-path} connects a pair of nodes with a semantically meaningful relationship. We enrich the meta-structures further to involve the \textit{meta-graph} -- in the form of directed acyclic graph (DAG) -- that can be used as an extended template to capture arbitrary but meaningful combination of existing relationships between a pair of nodes. In effect,
a meta structure provides a filter view to extract a homogeneous node graph, wherein all nodes satisfy particular complicated semantics.
Arguably, depending upon different meta structures, nodes will be organized distinctly within different graphs. To some extent, each graph can be regarded as a sub-graph of the holistic \textsc{Hin}\xspace under a certain view -- each sub-graph satisfies the semantic constraints given by the meta-structure.
\mypara{Meta structures.} We leverage domain knowledge from system security expertise to pick meta-structures that cover the inherent relationships. We first enumerate all possible meaningful semantic meta-structures, and then carefully select those with sufficient precision through numerous experiments. The detailed procedure is discussed in \S\ref{sec:exp:microbenchmark}. As shown in Fig.~\ref{fig:metafig}(b), we eventually present six meta-paths and three meta-graphs that effectively outline the structural semantics and capture rich relationships between two Android Apps in the \textsc{Hin}\xspace. For example, \texttt{A-P-A} describes the relationship where two Apps have the same permission ($\mathcal{MP}_5$) and \texttt{A-P-PT-P-A} indicates that two Apps co-own the same type of permission ($\mathcal{MP}_6$). $\mathcal{MG}_2$ simultaneously combines \texttt{A-API-A} with \texttt{A-S-A}. Accordingly, the semantic constraints are tightened, i.e., the selected nodes have to satisfy all pre-defined constraints.
Nevertheless, models \cite{yun2019graph,hu2020heterogeneous} that do not require the manual design of meta-structures could also be applied in our scheme.
\mypara{Homogeneous App graph for each meta structure.} Performing a sequence of matrix operations over the modeled relationship matrices, we can precisely calculate the adjacency of nodes within a graph. For a given meta-path $\mathcal{MP}$, $(A_1, \dots, A_{n})$, the adjacency matrix can be calculated by
\begin{align}
\label{eq:adjmat4MP}
\footnotesize
\Psi^{\mathcal{MP}} = R_{A_1A_2} \cdot R_{A_2A_3} \dots \cdot R_{A_{n-1}A_{n}},
\end{align}
where $R_{A_jA_{j+1}}$ is the relation matrix between entity $A_j$ and $A_{j+1}$ (one instance of \textbf{[R1]} to \textbf{[R6]} in Table~\ref{tab:mat}).
For example, the adjacency matrix for the graph under $\mathcal{MP}_1$ (\texttt{A-API-A}) is $\Psi^{\mathcal{MP}_1} = \mathbb{A}\cdot \mathbb{A}^T$. $\Psi^{\mathcal{MP}_1}_{i,j} > 0$ indicates that App$_i$ and App$_j$ are associated with each other, i.e., they are neighbors based on the meta-path $\mathcal{MP}_1$. Specifically, the value represents the count of meta-path instances, i.e., the number of pathways, between nodes $i$ and $j$.
Likewise, for a given meta-graph
$\mathcal{MG}$, a combination of several meta-paths, i.e., $(\mathcal{MP}_1, \dots, \mathcal{MP}_{m})$, the node adjacency matrix is:
\begin{align}
\label{eq:adjmat4MG}
\footnotesize
\Psi^{\mathcal{MG}} = \Psi^{\mathcal{MP}_1} \odot \dots \odot \Psi^{\mathcal{MP}_m},
\end{align}
where $\odot$ is the \textit{Hadamard product}. For instance, for $\mathcal{MG}_2$ the adjacency matrix can be calculated by $\Psi^{\mathcal{MG}_2} = (\mathbb{A}\cdot \mathbb{A}^T) \odot (\mathbb{S}\cdot \mathbb{S}^T)$. By conducting graph modelling for each meta-structure, the original \textsc{Hin}\xspace is converted to multiple homogeneous App graphs, each of which pertains to an adjacency matrix. Given $K$ meta-structures, we have a collection of $K$ adjacency matrices, i.e., \{$\Psi^{\mathcal{M}_1}$, ... , $\Psi^{\mathcal{M}_K}$\}.
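Both constructions reduce to elementary matrix algebra; the following sketch (with hypothetical toy matrices) reproduces $\Psi^{\mathcal{MP}_1}$ from Eq.~\ref{eq:adjmat4MP} and $\Psi^{\mathcal{MG}_2}$ from Eq.~\ref{eq:adjmat4MG}:

```python
import numpy as np

# 0/1 relation matrices: Apps x APIs (A) and Apps x .so files (S)
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
S = np.array([[1, 0],
              [1, 0],
              [0, 1]])

# MP_1 (A-API-A): entry (i, j) counts the APIs shared by App_i, App_j
Psi_mp1 = A @ A.T                        # Eq. (eq:adjmat4MP)

# MG_2 tightens MP_1 with A-S-A via the Hadamard product
Psi_mg2 = (A @ A.T) * (S @ S.T)          # Eq. (eq:adjmat4MG)
print(Psi_mp1[0, 1], Psi_mg2[0, 1], Psi_mg2[0, 2])   # 1 1 0
```

App$_1$ and App$_3$ share an API but no \texttt{.so} file, so their edge survives under $\mathcal{MP}_1$ but is pruned under the stricter $\mathcal{MG}_2$.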
\section{Related Work}
\label{sec:secRework}
\vspace{-0.3mm}
\mypara{Malware detection based on traditional feature engineering.}
Feature engineering and machine learning based malware detection methods fall into two categories: static and dynamic feature analysis.
Static feature analysis approaches \cite{li2018significant,hou2016droiddelver,2mclaughlin2017deep,arp2014drebin,xu2019droidevolver,mariconti2016mamadroid} typically extract features such as permissions, signatures, and API sequences, and directly employ machine learning models such as Random Forest, SVM or CNN for malware detection. However, they inevitably over-assume that all behaviors reflected by features are covered by the model training, and thus have inadequate capability of tackling unknown out-of-sample cases, causing much higher false positive rates~\cite{li2018significant}. Meanwhile, cunning developers can also use obfuscation techniques to hide malicious code \cite{alzaylaee2020dl} or perform repackaging attacks \cite{tian2017detection} to bypass detection. \cite{xu2019droidevolver} can automatically and continually update itself when detecting malware without any human involvement. Nevertheless, this scheme only shows that it can adapt to updates, not that it remains compatible with previous data sets.
In comparison, dynamic feature analysis relies on behavior detection at runtime. Specifically, \cite{dimjavsevic2015android,hou2016deep4maldroid} extract Linux kernel system calls from Apps executed in Genymotion (an Android virtual machine), while log analysis~\cite{alzaylaee2020dl,tobiyama2016malware} and traffic analysis~\cite{li2016droidclassifier, wang2020deep} help capture Apps' real-world behavior. However, such analysis is time-consuming and unrealistic to apply in malware detection at scale. Other models from natural language processing and image recognition can be customized and re-used in malware detection. \cite{2mclaughlin2017deep} uses a deep convolutional neural network (CNN) to analyze raw opcode sequences. \cite{vinayakumar2017deep} transforms sequences of Android permissions into features by using an LSTM layer and uses a non-linear activation function for classification. \cite{xiao2019android} exploits LSTMs to investigate potential relationships in system call sequences before classification.
However, since Apps are constantly updated, explicit feature extraction from a limited set of Apps is ineffective in detecting unseen Apps.
\mypara{Malware detection based on graph networks.} Gotcha~\cite{fan2018gotcha} builds up a \textsc{Hin}\xspace and utilizes meta-graph based approach to depict the relevance over PE files,
which captures both content- and relation-based features of windows malware.
HinDroid~\cite{3hou2017hindroid} is primarily based on a \textsc{Hin}\xspace built upon relationships between APIs and Apps, and employs a multi-kernel SVM for software classification. MatchGNet~\cite{mgnet} combines the \textsc{Hin}\xspace model with a GCN \cite{kipf2016semi} to learn graph representations and node similarity based on invariant graph modeling of a program's execution behaviors. \cite{wang2019attentional} constructs heterogeneous program behavior graphs, particularly for IT/OT systems, and then introduces a graph attention mechanism \cite{vaswani2017attention} to aggregate information learned through GCN on different semantic paths with weights.
However, all these methods are impeded by the static nature of the heterogeneous information network, i.e., they have limited capability of tackling emerging Apps outside the constructed graph. AiDroid~\cite{ye2018aidroid} represents each out-of-sample App with CNN~\cite{simonyan2014very}. However, the non-negligible time inefficiency stemming from multiple convolution operations becomes a potential bottleneck. \textsc{Hawk}\xspace presents the first attempt to bridge the HIN-based embedding model and graph attention network to underpin incremental and rapid malware detection particularly for out-of-sample Apps.
\begin{comment}
\mypara{Graph Information Network}
Gradually, researchers have discovered that by capturing the implicit association between softwares, graph-based schemes are able to make it's tricky for malware to
evade because making the malware nodes unavoidably generate links in the graph \cite{gao2020hincti}. Besides, researchers can mine more general knowledge from the
graph structure to improve the versatility of their model. Next, we will introduce some concepts in the graph information network.
$\textit{Graph information network.}$ Graph information network is the abstract representation of entities and relationships between entities in the real world. According to the types of entities and the types of relationships, it can be divided into homogeneous information network and heterogeneous information network. In general, researchers usually choose heterogeneous information network to represent the model containing complex entity types and entity relationships.
$\textit{Graph network representation Learning.}$ Graph network representation Learning is proposed to map the graph information network into a low dimensional vector space while preserving structural and attribute information. Now some popular graph representation learning methods, such as \cite{kipf2016semi,velivckovic2017graph,dong2017metapath2vec,wang2019heterogeneous}, can be used to calculate vectors of nodes by aggregating neighbor nodes' information. After that, the learned node vectors can be easily applied to downstream vector-based tasks.
$\textit{Attention mechainsm.}$ Attention mechanism \cite{vaswani2017attention} is widely used in deep learning to calculate the relation weight between entities, which is also suitable for malware detection scenarios. For example, to identify a malware, a method should have more attention to its malicious neighbor Apps, and ignore the normal neighbor Apps. GAT \cite{velivckovic2017graph} first introduce node-level attention mechanism to graph network representation learning, aiming to learn the importance between each node and its neighbors and aggregate all the neighbors to represent the node. However, GAT can be only applied to homogeneous information network. HAN \cite{wang2019heterogeneous} first apply the hierarchical attention (including node-level attention and semantic-level attention) to the heterogeneous information network, which can make full use of the information in the heterogeneous network. In HAN, the node-level attention can learn the importance between nodes and their semantic-path based neighbors, the semantic-level attention is used to learn the importance of different semantic paths. But HAN only regards the meta-path structure as its semantic structure, which limits it to capture richer semantic information.
\end{comment}
\section{Introduction}
Many of the well-known probability density functions such as the Gaussian, the
Rayleigh, and the exponential, among others, are uniquely determined by their
moments. However, there are densities for which all moments exist yet they do
not uniquely determine the density. Such probability density functions are
called \textquotedblleft moment indeterminate\textquotedblright\ or
\textquotedblleft M-indeterminate\textquotedblright\
{(Shohat, Tamarkin, 1943; Akhiezer, N.I., 1965; Stoyanov, 2013)}.
For the continuous real random variable $X$ with associated M-indeterminate probability
density $P_0(x)$ ($x \in \mathbb{R}^1$), one formulation for M-indeterminate densities, called the
\textquotedblleft Stieltjes class,\textquotedblright\ is {(Stoyanov, 2004; Stoyanov and Tolmatz, 2005)}
\begin{equation}
P_{\varepsilon}(x)=P_{0}(x)\left[ 1+{\varepsilon}\,h(x)\right]
,\quad\label{Stieltjes}%
\end{equation}
where $-1\leq{\varepsilon}\leq1$ is a real parameter, and $h(x)\neq0$
is a real, continuous, bounded ($|h(x)|\leq1$) function that satisfies the
constraint
\begin{equation}
\label{stieltjes-constraint}
\int_{-\infty}^{\infty}x^{n} P_{0}(x) h(x) dx = 0,
\ \ \ \ n\in I^{+}%
\end{equation}
Accordingly, the densities $P_\varepsilon(x)$ all have the same moments,
\begin{equation}
\text{E\negthinspace}\left[ X^{n}\right] = \int_{-\infty}^\infty x^n P_\varepsilon(x)\, dx \,=\,
\int_{-\infty}^\infty x^n P_0(x)\, dx \quad (n \in I^+) \notag
\end{equation}
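As a concrete numerical check of Eqs.~\eqref{Stieltjes} and \eqref{stieltjes-constraint}, consider the classical example (due to Heyde, not taken from this letter) of the standard lognormal density with $h(x)=\sin(2\pi\ln x)$; substituting $t=\ln x$, the $n$-th moment of $P_\varepsilon$ becomes a Gaussian-weighted integral, which the following sketch evaluates by the trapezoidal rule:

```python
import numpy as np

# Classical Heyde example: the standard lognormal density is
# M-indeterminate with h(x) = sin(2*pi*ln x).  With t = ln x the
# n-th moment of P_eps is  int e^{n t} phi(t) [1 + eps sin(2 pi t)] dt,
# phi being the standard normal density.
t = np.linspace(-12.0, 12.0, 200001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

def moment(eps, n):
    f = np.exp(n * t) * phi * (1 + eps * np.sin(2 * np.pi * t))
    return float((f[:-1] + f[1:]).sum() * dt / 2)   # trapezoidal rule

for n in range(4):
    print(n, moment(0.0, n), moment(1.0, n))   # agree for every eps
```

Analytically the perturbation integrates to $e^{n^{2}/2}\,e^{-2\pi^{2}}\sin(2\pi n)=0$ for every integer $n$, which the numerics confirm.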
While the moment
generating function for such densities does not exist, the characteristic
function does exist {(Feller, 1966; Lukacs, 1970; Stoyanov, 2013)}. In this letter, we
1) present a characteristic function approach to generating such densities, and 2)
extend the characteristic function approach by incorporating self adjoint
operators, which leads to an unlimited number of M-indeterminate densities
beyond the Stieltjes class.
\section{Preliminaries: characteristic function and self adjoint operators} \label{prelim}
Let $R$ be a continuous real random variable with continuous probability density function
$P(r) \ (r\in \mathbb{R}^1) $.
Then the characteristic function of $P(r)$ is defined as {(Feller, 1966; Lukacs, 1970)}
\begin{equation}
M(\theta)=\int_{-\infty}^{\infty}P(r)\,e^{i\theta r}\,dr \notag
\end{equation}
Given the characteristic function, one obtains the probability density by
\begin{equation}
P(r)\,=\frac{1}{2\pi}\int_{-\infty}^{\infty}M(\theta)e^{-i\theta r}\,d\theta \notag
\end{equation}
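For instance, this transform pair is easy to verify numerically for a standard normal density, whose characteristic function is $M(\theta)=e^{-\theta^{2}/2}$ (the grid and trapezoidal quadrature below are implementation choices):

```python
import numpy as np

r = np.linspace(-12.0, 12.0, 200001)        # symmetric grid
dr = r[1] - r[0]
P = np.exp(-r**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

def char_fn(theta):
    """M(theta) = int P(r) e^{i theta r} dr via the trapezoidal rule."""
    g = P * np.exp(1j * theta * r)
    return (g[:-1] + g[1:]).sum() * dr / 2

for theta in (0.0, 1.0, 2.5):
    print(theta, char_fn(theta).real, np.exp(-theta**2 / 2))
```

The imaginary part vanishes on the symmetric grid, as it must for an even density.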
\bigskip
Now consider functions $f(x)\ne 0$ such that $\int_{-\infty}^{\infty}\left\vert
f(x)\right\vert ^{2}dx=1 $ and that may be expanded as {(Morse and Feshbach, 1953)}
\begin{equation}
\label{eigen-expansion}
f(x)=\int_{-\infty}^{\infty}F(r)u(r,x)\,dr
\end{equation}
where
\begin{equation}
F(r)=\int_{-\infty}^{\infty}f(x)u^{\ast}(r,x)\,dx \label{r-transform}%
\end{equation}
and $u(r,x)$ are the eigenfunctions obtained by solving the eigenvalue problem
\begin{equation}
\mathcal{A}\,u(r,x)=r\,u(r,x)\label{eigvalprob} \notag
\end{equation}
for self adjoint operator $\mathcal{A}$. (In writing these equations, we have assumed the
variables are continuous.)
The function $F(r)$ is the representation of $f(x)$ in the $r$-domain.
Note that because
$f(x)$ is normalized, so is $F(r)$. In particular,
both $\left\vert f(x)\right\vert ^{2}$ and $\left\vert
F(r)\right\vert ^{2}$ are nonnegative and integrate to 1,
\begin{equation}
\int_{-\infty}^{\infty}\left\vert F(r)\right\vert ^{2}dr=\int_{-\infty
}^{\infty}\left\vert f(x)\right\vert ^{2}dx=1 \notag
\end{equation}
Indeed, these magnitude-square representations are the general forms of
probability densities that are fundamental in quantum mechanics (as are self
adjoint operators) {(Bohm, 1951; Wilcox, 1967)}.
\bigskip
\noindent\textbf{Theorem \ref{prelim}.1.} If $P(r)=\left\vert F(r)\right\vert ^{2}$ and
$F(r)$ and $f(x)$ are related by Eqs. \eqref{eigen-expansion} and
\eqref{r-transform}, then the characteristic function may be obtained directly
in terms of $f(x)$ and the operator $\mathcal{A}$ as
\begin{equation}
M(\theta)=\int_{-\infty}^{\infty}f^{\ast}(x)\,e^{i\theta\mathcal{A}}\,f(x)\,dx
\label{general-char}%
\end{equation}
\noindent {\bf Proof. } See {(Cohen, 1988, 2017)} and references therein. \qedsymbol
\bigskip
\noindent\textbf{Corollary 1.} For the particular case of the self adjoint
operator
\begin{equation}
\mathcal{A}=\frac{1}{i}\frac{d}{dx} \notag
\end{equation}
we have that
\begin{equation}
M(\theta)=\int_{-\infty}^{\infty}f^{\ast}(x)e^{i\theta\frac{1}{i}\frac{d}{dx}%
}\,f(x)\,dx=\int_{-\infty}^{\infty}f^{\ast}(x)\,f(x+\theta)\,dx \notag
\end{equation}
\noindent {\bf Proof. } The result follows readily by a Taylor series
expansion of $e^{\theta\frac{d}{dx}}$. \qedsymbol
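The shift formula in Corollary 1 is easily checked numerically; the Gaussian below is an illustrative choice of ours. For $f(x)=\pi^{-1/4}e^{-x^2/2}$, the density $P(r)=|F(r)|^2$ is normal with variance $1/2$, so both sides equal $e^{-\theta^2/4}$:

```python
import numpy as np

# Check Corollary 1 for f(x) = pi^(-1/4) exp(-x^2/2): the autocorrelation
# integral of f must reproduce the characteristic function exp(-theta^2/4)
# of the density P(r) = |F(r)|^2 = N(0, 1/2).
x, dx = np.linspace(-20.0, 20.0, 4001, retstep=True)
f = np.pi ** -0.25 * np.exp(-x ** 2 / 2)      # normalized: sum |f|^2 dx = 1

def M(theta):
    # right-hand side of Corollary 1: integral f*(x) f(x + theta) dx
    return np.sum(f * np.pi ** -0.25 * np.exp(-(x + theta) ** 2 / 2)) * dx

for theta in (0.0, 0.7, 1.5):
    assert abs(M(theta) - np.exp(-theta ** 2 / 4)) < 1e-8
```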
\section{Stieltjes class characteristic function approach} \label{stieltjes-sec}
\subsection{M-indeterminate constraint}
For the Stieltjes class of M-indeterminate densities of Eq. \eqref{Stieltjes},
we express the characteristic function as
\begin{align}
M_{\varepsilon}(\theta) & =\int_{-\infty}^{\infty}P_{\varepsilon
}(x)e^{i\theta x}dx\,=\, \int_{-\infty}^{\infty}P_{0}(x)e^{i\theta
x}dx+\varepsilon\int_{-\infty}^{\infty}h(x)P_{0}(x)e^{i\theta x}dx \notag \\
& =M_{0}(\theta)\,+\,\varepsilon\,Q(\theta) \label{stieltjes-charfun}
\end{align}
where%
\begin{align}
M_{0}(\theta) & =\int_{-\infty}^{\infty}P_{0}(x)e^{i\theta x}dx \quad ; \quad
Q(\theta) \,=\, \int_{-\infty}^{\infty}P_{0}(x)h(x)e^{i\theta x}dx \notag
\end{align}
Further, let $H(\theta)$ be the Fourier transform of $h(x)$,
\begin{equation}
{H(\theta)}=\int_{-\infty}^{\infty}h(x)e^{-i\theta x}dx \notag
\end{equation}
\bigskip
\noindent\textbf{Theorem \ref{stieltjes-sec}.1.} For the probability densities $P_{\varepsilon}(x)$,
the moments will be independent of $\varepsilon$ if
\begin{equation}
\label{charcondition-stieltjes }
\int_{-\infty}^{\infty}M_{0}^{(k)}
(\theta)\,H^{(n-k)}(\theta)\,d\theta=0,\quad k=0,1,...,n
\end{equation}
for all $n\in I^{+}$, where $M_{0}^{(k)}$ denotes the $k$th derivative of $M_{0}$.
\bigskip
\noindent {\bf Proof. } The moments of $P_{\varepsilon}(x)$ are given by
\begin{equation}
\text{E\negthinspace}\left[ X^{n}\right] \,=\,\int_{-\infty}^{\infty}%
x^{n}P_{\varepsilon}(x)dx\,=\,\int_{-\infty}^{\infty}x^{n}P_{0}%
(x)dx\,+\,\,\,\varepsilon\int_{-\infty}^{\infty}x^{n}P_{0}(x)h(x)dx \notag
\end{equation}
The moments can be equivalently obtained from the characteristic function by
\begin{equation}
\text{E\negthinspace}\left[ X^{n}\right] \,=\,\left. \left\{ \left(
\frac{1}{i}\frac{d}{d\theta}\right) ^{n}\,M_{\varepsilon}(\theta)\right\}
\right\vert _{\theta=0}=\,\left. \left\{ \left( \frac{1}{i}\frac{d}%
{d\theta}\right) ^{n}\,M_{0}(\theta)\right\} \right\vert _{\theta
=0}+\,\,\varepsilon\left. \left\{ \left( \frac{1}{i}\frac{d}{d\theta
}\right) ^{n}\,Q(\theta)\right\} \right\vert _{\theta=0}%
\label{charfunmoments}%
\end{equation}
In order for the moments to be independent of $\varepsilon$ -- and hence for the densities $P_\varepsilon(x)$ to be M-indeterminate -- we require the last term to be zero, i.e.,
\begin{equation}
\left. \left\{ \frac{d^{n}}{d\theta^{n}}\,Q(\theta) \right\} \right\vert _{\theta=0}=0 \notag
\end{equation}
Plugging in the definition of $Q(\theta)$ and evaluating this expression, one can readily confirm that this condition is precisely that
given in Eq. \eqref{stieltjes-constraint}. In terms of $M_0(\theta)$ and $H(\theta)$, straightforward evaluation yields
\begin{equation}
\left. \left\{ \frac{d^{n}}{d\theta^{n}}\,Q(\theta) \right\} \right\vert _{\theta
=0}\,=\,\frac{1}{2\pi}\,\int_{-\infty}^{\infty}%
M_{0}^{(n)}(\theta^{\prime}){H(\theta^{\prime})}\,d\theta^{\prime}\,=\,0 \notag
\end{equation}
by which Eq. \eqref{charcondition-stieltjes } follows via integration by parts
as the characteristic function domain condition for Stieltjes-class M-indeterminate densities,
assuming that $M_{0}(\theta)$ and ${H(\theta)}$ and their derivatives vanish
at $\pm\infty$.
\qedsymbol
\subsection{Generating Stieltjes class densities via the characteristic
function}
The characteristic function constraint for M-indeterminate densities leads to
the following theorem:
\bigskip
\noindent\textbf{Theorem \ref{stieltjes-sec}.2.} If
\begin{enumerate}
\item The characteristic function $M_{0}(\theta)$ is of finite extent:
$M_{0}(\theta) = 0$ for $|\theta|>L>0$ \label{C1}
\item The function ${H(\theta)}=0$ for $|\theta|<L$, and is normalized such
that $\frac{1}{2\pi}\left| \int_{-\infty}^{\infty}H(\theta)\,e^{i \theta x}\,
d\theta\right| \,=\, |h(x)|\,\le\,1$ \label{C2}
\end{enumerate}
Then the resulting densities $P_{\varepsilon}(x)$ are M-indeterminate.
\bigskip
\noindent {\bf Proof. }
The normalization on $H(\theta)$ ensures that $P_{\varepsilon}(x)\geq0$. Then,
imposing the finite support constraint on $M_{0}(\theta)$ yields, via Eq.
\eqref{charcondition-stieltjes } with $k=0$,
\begin{equation}
\int_{-\infty}^{\infty}M_{0}(\theta^{\prime})\,{H^{(n)}(\theta^{\prime}%
)}\,d\theta^{\prime}\,=\,\int_{-L}^{L}M_{0}(\theta^{\prime})\,{H^{(n)}%
(\theta^{\prime})}\,d\theta^{\prime} \notag
\end{equation}
But since $H(\theta^{\prime})=0$ over the limits of integration, the integral
equals zero;
hence, the moments are independent of $\varepsilon$ while the densities
$P_{\varepsilon}(x)$ themselves differ for different $\varepsilon$; that is, the
$P_{\varepsilon}(x)$ are M-indeterminate.
\qedsymbol
\bigskip
Theorem \ref{stieltjes-sec}.2 points to a simple procedure for generating an unlimited number of
Stieltjes class M-indeterminate densities:
\bigskip
Choose \textit{any} differentiable, finite-extent function $f(x)$, normalized
to 1:
\begin{align}
\frac{d^{n}f(x)}{dx^{n}} & <\infty,\,\,\,\,n=0,1,2,... \notag \\
f(x) & =0,\qquad|x|>\frac{L}{2}>0 \notag \\
\int_{-\infty}^{\infty}|f(x)|^{2}\,dx & =1 \notag
\end{align}
Examples of such functions are ``bump functions'' {(Tu, 2011)}.
Then calculate
\begin{equation}
\label{ordinary-charfun}
M_{0}(\theta)=\int_{-\infty}^{\infty}f^{\ast
}(x)f(x+\theta)\ dx \notag
\end{equation}
(Note that this form of the characteristic function corresponds to the special
case considered in Corollary 1. Note further that by virtue of the finite
extent of $f(x)$, the characteristic function is also of finite extent and in
particular it equals zero for $|\theta|>L$.)
Then the density function is
\begin{align}
P_{0}(x) & =\frac{1}{2\pi}\int_{-\infty}^{\infty}M_{0}(\theta)\,e^{-i\theta
x}\,d\theta\label{density_0} \notag \\
& =\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f^{\ast
}(y)f(y+\theta)\,dy\,e^{-i\theta x}\,d\theta \notag \\
& =\left\vert F(x)\right\vert ^{2} \notag
\end{align}
where
\begin{equation}
F(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(y)e^{-ixy}\,dy \notag
\end{equation}
Note that because $f(x)$ is normalized, it follows that $P_{0}(x)$ is likewise
normalized,
\begin{equation}
\int_{-\infty}^{\infty}P_{0}(x)\,dx\,=\,\int_{-\infty}%
^{\infty}\left\vert F(x)\right\vert ^{2}\,dx\,=\,\int_{-\infty}^{\infty
}|f(x)|^{2}\,dx\,=\,1 \notag
\end{equation}
Then form the Stieltjes class of M-indeterminate densities
\begin{equation}
P_{\varepsilon}(x;\,\lambda,\phi)=P_{0}(x)\left[ 1+\varepsilon\,\cos\left(
\lambda x+\phi\right) \right] ,\quad\lambda>L,\quad-1\leq\varepsilon
\leq1,\quad-\pi\leq\phi\leq\pi\label{Method1-Stieltjes-class} \notag
\end{equation}
Because $\lambda>L$ and the characteristic function equals zero for
$|\theta|>L$ , it follows that $\int_{-\infty}^{\infty}x^{n}P_{0}%
(x)\cos\left( \lambda x+\phi\right) dx=0$ (which is more readily evaluated
in the characteristic function domain via Eq. \eqref{charfunmoments}). Hence,
$P_{0}(x)$ and the family of densities $P_{\varepsilon}(x;\,\lambda,\phi)$ all
have identical moments E$[X^{n}]$, consistent with Theorem \ref{stieltjes-sec}.2.
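The support bookkeeping behind this procedure can be verified numerically. In the sketch below (our own illustrative choice), $f$ is the standard bump on $|x|\le 1$, so $L=2$ and the autocorrelation $M_0$ vanishes identically for $|\theta|\ge 2$; with $\lambda = 3 > L$, the Fourier transform of $P_0(x)\cos(\lambda x+\phi)$, namely $\tfrac{1}{2}[e^{i\phi}M_0(\theta+\lambda)+e^{-i\phi}M_0(\theta-\lambda)]$, vanishes identically near $\theta=0$, so all derivatives there, and hence all moments, are unchanged.

```python
import numpy as np

x, dx = np.linspace(-1.0, 1.0, 2001, retstep=True)

def bump(y):
    # C-infinity bump supported on |y| < 1
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1 - y[inside] ** 2))
    return out

c = np.sqrt(np.sum(bump(x) ** 2) * dx)        # so f = bump/c has unit L^2 norm

def M0(theta):
    # autocorrelation of the normalized bump; vanishes for |theta| >= 2
    return np.sum(bump(x) * bump(x + theta)) * dx / c ** 2

assert abs(M0(0.0) - 1.0) < 1e-6              # M0(0) = integral |f|^2 = 1
assert M0(2.2) == 0.0                         # finite extent: M0 = 0 for |theta| >= 2
lam = 3.0
for theta in (-0.5, 0.0, 0.5):                # |theta| < lam - 2: shifted copies vanish
    assert M0(theta + lam) == 0.0 and M0(theta - lam) == 0.0
```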
\section{Generalized characteristic function and operator approach} \label{generalized}
We now expand the characteristic function approach by making use of Eq.
\eqref{general-char} with general self adjoint operators, and taking the
(normalized) function $f(x)$ to be
\begin{equation}
f(x)=f_{1}(x)+e^{i\beta}f_{2}(x) \label{m-ind-f}%
\end{equation}
where $\beta$ is a real constant, and $f_{1}(x)$ and $f_{2}(x)$ have disjoint
support, i.e.,
\begin{equation}
f_{1}(x)f_{2}(x)=0 \label{disjoint-condition}
\end{equation}
This choice of $f(x)$ is motivated by similar considerations in quantum mechanical probability distributions {(Sala Mayato et al., 2018)}.
For clarity, we shall denote here the characteristic function of Eq.
\eqref{general-char} by $M_{\mathcal{A}}(\theta)$, with corresponding probability
density $P_{\mathcal{A}}(r)$ for the continuous real random variable $R$.
\bigskip
\noindent \textbf{Theorem \ref{generalized}.1. }
For $f(x)$ given by Eq. \eqref{m-ind-f} and characteristic function
$M_{\mathcal{A}}(\theta)$ given by Eq. \eqref{general-char}, the family of
densities%
\begin{align}
P_{\mathcal{A}}(r) & =\frac{1}{2\pi}\int_{-\infty}^{\infty}M_{\mathcal{A}%
}(\theta)\,e^{-i\theta r}\,d\theta \notag \\
& =\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f^{\ast
}(x)e^{i\theta\mathcal{A}}\,f(x)\,dx\,e^{-i\theta r}\,d\theta \notag
\end{align}
are M-indeterminate for self adjoint operators where the support of
$\mathcal{A}^{n}f(x)$ is the same as that of $f(x)$, but the support of
$e^{i\theta\mathcal{A}}\,f(x)$ is not the same as that of $f(x)$.
\bigskip
\noindent {\bf Proof. }
Substituting \eqref{m-ind-f} into \eqref{general-char} yields
\begin{align}
M_{\mathcal{A}}(\theta) & =\int_{-\infty}^{\infty}f_{1}^{\ast}%
(x)\,e^{i\theta\mathcal{A}}\,f_{1}(x)\,dx\,+\,\int_{-\infty}^{\infty}%
f_{2}^{\ast}(x)\,e^{i\theta\mathcal{A}}\,f_{2}(x)\,dx\, \notag \\
& +\,e^{i\beta}\,\int_{-\infty}^{\infty}f_{1}^{\ast}(x)\,e^{i\theta
\mathcal{A}}\,f_{2}(x)\,dx\,+\,e^{-i\beta}\,\int_{-\infty}^{\infty}f_{2}^{\ast
}(x)\,e^{i\theta\mathcal{A}}\,f_{1}(x)\,dx \notag
\end{align}
which in general depends on $\beta$, since for many self adjoint operators
$\mathcal{A}$, the support of $e^{i\theta\mathcal{A}}\,f_1(x)$ is generally different from that
of $f_1(x)$, and similarly for $f_2(x)$ (e.g., recall Corollary 1). Accordingly, even if $f_1$ and $f_2$ have disjoint support,
for many self adjoint operators it follows that $f_{2}^{\ast}(x)\,e^{i\theta\mathcal{A}}\,f_{1}(x) \ne 0$
such that the last two integrals are nonzero.
The
moments are
\begin{align}
\text{E\negthinspace}\left[ R^{n}\right] &= \int_{-\infty}^\infty r^n\,P_{\mathcal{A}}(r)\,dr
\,=\, \left. \left\{ \left(
\frac{1}{i}\frac{d}{d\theta}\right) ^{n}\,M_{\mathcal{A}}(\theta)\right\}
\right\vert _{\theta=0}\, \notag \\
&= \int_{-\infty}^{\infty}f_{1}^{\ast
}(x)\,\mathcal{A}^{n}\,f_{1}(x)\,dx\,+\,\int_{-\infty}^{\infty}f_{2}^{\ast
}(x)\,\mathcal{A}^{n}\,f_{2}(x)\,dx\,+\,\notag \\
& e^{i\beta}\, \int_{-\infty}^{\infty}f_{1}^{\ast}(x)\,\mathcal{A}^{n}%
\,f_{2}(x)\,dx\,+\,e^{-i\beta}\, \int_{-\infty}^{\infty}f_{2}^{\ast}(x)\,
\mathcal{A}^{n}\,f_{1}(x)\,dx \label{method3_moments} \notag
\end{align}
Since $f_{1}(x)f_{2}(x)=0$ and $\mathcal{A}^{n}f_{1}(x)$ has the same support
as $f_{1}(x)$, and likewise for $f_2(x)$, it follows that the last two integrals are zero. Accordingly,
the densities $P_{\mathcal{A}}(r)$ are different for different $\beta$ but all
have the same moments, and are therefore M-indeterminate.
\qedsymbol
\subsection{Example 1}
For the special case from Corollary 1 where $\mathcal{A}=\frac{1}{i}\frac
{d}{dx}$, the characteristic function is
\begin{align}
M_{\beta}(\theta) & =\int_{-\infty}^{\infty}f_{1}^{\ast}(x)\,f_{1}%
(x+\theta)\,dx\,+\,\int_{-\infty}^{\infty}f_{2}^{\ast}(x)\,f_{2}%
(x+\theta)\,dx\,+\, \notag \\
& e^{i\beta}\,\int_{-\infty}^{\infty}f_{1}^{\ast}(x)\,f_{2}(x+\theta
)\,dx\,+\,e^{-i\beta}\,\int_{-\infty}^{\infty}f_{2}^{\ast}(x)\,f_{1}%
(x+\theta)\,dx \notag
\end{align}
which clearly depends on $\beta$ since the latter two integrals are nonzero, even though
$f_1(x)f_2(x)=0.$
The moments are
\begin{align}
\text{E\negthinspace}\left[ R^{n}\right] \,=\,\left. \left\{ \left(
\frac{1}{i}\frac{d}{d\theta}\right) ^{n}\,M_{\beta}(\theta)\right\}
\right\vert _{\theta=0}\, & =\,\,\,\int_{-\infty}^{\infty}f_{1}^{\ast
}(x)\,\left( \frac{1}{i}\frac{d}{dx}\right) ^{n}f_{1}(x)\,dx\,+\,\int
_{-\infty}^{\infty}f_{2}^{\ast}(x)\,\left( \frac{1}{i}\frac{d}{dx}\right)
^{n}f_{2}(x)\,dx\,+\, \notag \\
& e^{i\beta}\,\int_{-\infty}^{\infty}f_{1}^{\ast}(x)\,\left( \frac{1}%
{i}\frac{d}{dx}\right) ^{n}f_{2}(x)\,dx\,+\,e^{-i\beta}\,\int_{-\infty}^{\infty}%
f_{2}^{\ast}(x)\,\left( \frac{1}{i}\frac{d}{dx}\right)
^{n}f_{1}(x)\,dx \notag
\end{align}
which are independent of $\beta$ since $f_{1}(x)f_{2}(x)=0$.
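This $\beta$-dependence of $M_{\beta}(\theta)$, together with the $\beta$-independence of the behaviour near $\theta=0$ that fixes the moments, is easy to confirm numerically. The two bumps below, centred at $0$ and at $D=4$ (our own illustrative choices, each carrying half the total norm), have disjoint supports:

```python
import numpy as np

x, dx = np.linspace(-2.0, 7.0, 9001, retstep=True)

def bump(y, center):
    # C-infinity bump supported on |y - center| < 1
    z = y - center
    out = np.zeros_like(z)
    inside = np.abs(z) < 1
    out[inside] = np.exp(-1.0 / (1 - z[inside] ** 2))
    return out

c = np.sqrt(2 * np.sum(bump(x, 0.0) ** 2) * dx)   # each piece gets norm^2 = 1/2
f = lambda y, beta: (bump(y, 0.0) + np.exp(1j * beta) * bump(y, 4.0)) / c

def M(theta, beta):
    return np.sum(np.conj(f(x, beta)) * f(x + theta, beta)) * dx

# supports overlap after shifting by theta = 4: beta-dependent cross term
assert abs(M(4.0, 0.0) - M(4.0, np.pi)) > 0.5
# no overlap for small theta: cross terms vanish identically
assert abs(M(1.0, 0.0) - M(1.0, np.pi)) < 1e-12
```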
\subsection{Example 2}
The operator
\begin{equation}
\mathcal{A}=\frac{1}{i}\frac{d}{dx}+c(n+1)x^{n} \notag
\end{equation}
where $c,n$ are constants, satisfies the conditions of Theorem \ref{generalized}.1; namely,
it is self adjoint, and the support of $\mathcal{A}^nf(x)$ equals that of $f(x)$,
whereas the support of $e^{i \theta \mathcal{A}}f(x)$ differs from the support of $f(x)$.
For $f(x)$ as in Theorem \ref{generalized}.1, we have by Eq. \eqref{general-char}
that the characteristic function is
\begin{align}
M_{\mathcal{A}}(\theta)
& =\int\left( f_{1}^{\ast}(x)+e^{-i\beta}f_{2}
^{\ast}(x)\right) e^{i\theta\mathcal{A}}\left( f_{1}(x)+e^{i\beta}f
_{2}(x)\right) dx \notag\\
& = M_{11}(\theta)+M_{22}(\theta)+e^{i\beta}M_{12}%
(\theta)+e^{-i\beta}M_{21}(\theta) \notag
\end{align}
where
\begin{align}
M_{lm}(\theta) & =\int f_{l}^{\ast}(x)e^{i\theta\mathcal{A}}f_{m}(x)dx \notag
\end{align}
Note that the characteristic function, and hence the density, depends on $\beta$.
For the moments to be independent of $\beta$, for which the densities are therefore M-indeterminate,
we require that
\begin{equation} \label{special2-condition}
\left. \left\{e^{i\beta}M_{12}^{(n)}(\theta)+e^{-i\beta}M_{21}%
^{(n)}(\theta)\right\} \right\vert_{\theta=0}=0
\end{equation}
where
\begin{equation}
M_{lm}^{(n)}(\theta)\,=\,\frac{d^n}{d \theta^n}M_{lm}(\theta) \notag
\end{equation}
Evaluating this expression, we have
\begin{align}
\left\{ e^{i\beta}M_{12}^{(n)}(\theta)+e^{-i\beta}M_{21}^{(n)}(\theta)\right\}
_{\theta=0} & =\left\{ e^{i\beta}\int f_{1}^{\ast}(x)i^{n}\mathcal{A}%
^{n}e^{i\theta\mathcal{A}}f_{2}(x)dx+e^{-i\beta}\int f_{2}^{\ast}%
(x)i^{n}\mathcal{A}^{n}e^{i\theta\mathcal{A}}f_{1}(x)dx\right\} _{\theta
=0} \notag \\
& =e^{i\beta}i^{n}\int f_{1}^{\ast}(x)\mathcal{A}^{n}f_{2}%
(x)dx+e^{-i\beta}i^{n}\int f_{2}^{\ast}(x)\mathcal{A}^{n}f_{1}(x)dx \notag
\end{align}
But the integrands equal zero since $f_1(x)$ and $f_2(x)$ have disjoint support and $f^\ast_1(x)\mathcal{A}^nf_2(x)=f^\ast_2(x)\mathcal{A}^nf_1(x)=0$,
by which Eq. \eqref{special2-condition} follows.
\bigskip
In closing, we note that the characteristic function for the operator approach presented herein (i.e., Eq. \eqref{general-char} with $f(x)$ as defined in Eqs. \eqref{m-ind-f} and \eqref{disjoint-condition}) cannot generally be written in the form of a Stieltjes class (i.e., expressed as in Eq. \eqref{stieltjes-charfun}). One exception is the operator of Corollary 1 with $f_2(x)=f_1(x-D)$, $f_1(x) = 0$ for $|x|>L/2$, and $D>L>0$ {(Sala Mayato et al., 2021)}. Hence, this approach allows the generation of new classes of M-indeterminate probability densities.
\section{Conclusion}
We presented theorems for obtaining M-indeterminate probability densities by
way of the characteristic function and self adjoint operators. We have given
explicit methods to generate new M-indeterminate densities of the Stieltjes
class as well as M-indeterminate densities
that are not of the Stieltjes class.
\bigskip
\noindent \line(1,0){250}
\noindent Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
\newpage
\noindent \large{\bf References}
\bigskip %
\small{
\noindent Shohat, J.A., Tamarkin, J.D., 1943. The Problem of Moments.
Mathematical Surveys, vol. I. American Mathematical Society, New York.
\bigskip \noindent Akhiezer, N.I., 1965. The Classical Moment Problem and Some Related
Questions in Analysis. Hafner Publishing Co., New York.
\bigskip \noindent Stoyanov, J., 2013. Counterexamples in Probability, 3rd ed. Dover.
\bigskip \noindent Stoyanov, J., 2004. Stieltjes classes for moment-indeterminate
probability distributions. J. Applied Probability \textbf{41}, 281-294.
\bigskip \noindent Stoyanov, J., Tolmatz, L., 2005. Method for constructing Stieltjes
classes for m-indeterminate probability distributions. Appl. Math. and Comp.
\textbf{165}, 669-685.
\bigskip \noindent Feller, W., 1966. An Introduction to Probability Theory and Its
Applications, Vol. 2. John Wiley and Sons, New York.
\bigskip \noindent Lukacs, E., 1970. Characteristic Functions, 2nd ed. Griffin \& Co., London.
\bigskip \noindent Morse, P., Feshbach, H., 1953. Methods of Theoretical
Physics, Part I. McGraw-Hill Book Company, New York.
\bigskip \noindent Bohm, D., 1951. Quantum Theory. Prentice-Hall, New York.
\bigskip \noindent Wilcox, R.M., 1967. Exponential operators and parameter
differentiation in quantum physics. J. Math. Phys. {\bf 8}, 962-981.
\bigskip \noindent Cohen, L., 1988. Rules of probability in quantum mechanics. Found.
Phys. \textbf{18}(10), 983-998.
\bigskip \noindent Cohen, L., 2017. Are there quantum operators and wave functions in
standard probability theory?, in: Wong, M. W., Zhu, H. (Eds.),
Pseudo-Differential Operators: Groups, Geometry and Applications, pp. 133-147.
Birkh\"{a}user Mathematics.
\bigskip \noindent Tu, L. W., 2011. An Introduction to Manifolds, Second Edition.
Springer, NY.
\bigskip \noindent Sala Mayato, R., Loughlin, P., Cohen, L., 2018. M-indeterminate
distributions in quantum mechanics and the non-overlapping wave function
paradox. Physics Letters A \textbf{382}, 2914--2921.
\bigskip \noindent Sala Mayato, R., Loughlin, P., Cohen, L., 2021. Generating M-indeterminate probability densities by way of quantum mechanics. J. Theor. Probability.
}
\end{document}
\label{section: introduction}
The theory of (discrete-time) quantum walks has attracted enormous attention over the past few decades. Despite its apparent simplicity, vast applications of this ubiquitous notion can be found across multiple disciplines. For instance, the physical utility of quantum walks is well established for quantum algorithms \cite{Grover-1996,Ambainis-Bach-Nayak-Vishwanath-Watrous-2001}, photosynthesis \cite{Mohseni-Rebentrost-Lloyd-Aspuru-Guzik-2008,Peruzzo-Lobino-Matthews-Matsuda-Politi-Poulios-Zhou-Lahini-Ismail-Worhoff-Bromberg-Silberberg-Thompson-OBrien-2010}, and topological insulators \cite{Kitagawa-Rudner-Berg-Demler-2010,Obuse-Kawakami-2011,Kitagawa-2012,Asboth-Obuse-2013}. The long-time limit of the velocity distribution of the quantum walker, known as the weak limit theorem \cite{Konno-2002,Grimmett-Janson-Scudo-2004,Suzuki-2016}, has been a particularly active theme of rigorous mathematical research on quantum walks in the early years of the 21\textsuperscript{st} century. Other mathematical studies have taken various points of view: localisation \cite{Inui-Konishi-Konno-2004,Konno-2010,Segawa-2011,Cantero-Grunbaum-Moral-Velazquez-2012,Fuda-Funakawa-Suzuki-2017,Fuda-Funakawa-Suzuki-2018}, quantum walks on graphs \cite{Aharonov-Ambainis-Kempe-Vazirani-2001,Ambainis-2003,Portugal-2016}, non-linear analysis \cite{Maeda-Sasaki-Segawa-Suzuki-Suzuki-2018a,Maeda-Sasaki-Segawa-Suzuki-Suzuki-2018b,Maeda-Sasaki-Segawa-Suzuki-Suzuki-2019}, unitary equivalence classes \cite{Ohno-2016,Ohno-2017,Ohno-2018,Kuriki-Nirjhor-Ohno-2020}, time operators \cite{Sambou-Tiedra-2019,Funakawa-Matsuzawa-Sasaki-Suzuki-Teranishi-2020}, and continuous limit \cite{Maeda-Suzuki-2019}.
The present article is a continuation of rigorous mathematical studies of index theory for \textbi{chirally symmetric quantum walks} from the perspective of supersymmetric quantum mechanics \cite{Cedzich-Grunbaum-Stahl-Velazquez-Werner-Werner-2016,Cedzich-Geib-Grunbaum-Stahl-Velazquez-Werner-Werner-2018,Cedzich-Geib-Stahl-Velazquez-Werner-Werner-2018,Suzuki-2019,Suzuki-Tanaka-2019,Matsuzawa-2020,Cedzich-Geib-Werner-Werner-2020}. Such a quantum walk can be naturally identified with a pair of a time-evolution operator $U : \mathcal{H} \to \mathcal{H}$ and a unitary self-adjoint operator $\varGamma: \mathcal{H} \to \mathcal{H},$ satisfying the chiral symmetry condition;
\begin{equation}
\label{equation: chiral symmetry}
U^* = \varGamma U \varGamma,
\end{equation}
where $\varGamma$ gives a $\mathbb{Z}_2$-grading of the underlying state Hilbert space $\mathcal{H} = \ker(\varGamma - 1) \oplus \ker(\varGamma + 1).$ The existing literature mentioned above allows us to assign a certain well-defined Fredholm index, denoted by $\mathrm{ind}\,(\varGamma,U),$ to each abstract chirally symmetric quantum walk $(\varGamma, U).$ Note that this assignation of the Fredholm index requires $U$ to be both \textbi{essentially unitary} (i.e. $U$ is a unitary element in the Calkin $C^*$-algebra) and \textbi{essentially gapped} (i.e. the essential spectrum of $U,$ denoted by $\sigma_{\mathrm{ess}}(U),$ contains neither $-1$ nor $+1$).
The present article extends this index theory to encompass all those time-evolutions $U$ which fail to be essentially unitary. As a concrete example, we shall explicitly construct such a time-evolution $U$ with the property that it is essentially gapless, yet the associated index is well-defined. To put this into context, let us consider the following time-evolution operator on the state Hilbert space $\mathcal{H} := \ell^2(\mathbb{Z}, \mathbb{C}^2)$ of square-summable $\mathbb{C}^2$-valued sequences;
\begin{equation}
\label{equation: definition of evolution operator of MKO}
U_{\textnormal{mko}} := SG \Phi C_2 S G^{-1} \Phi C_1,
\end{equation}
where the operators $S, G, \Phi, C_1, C_2$ are defined respectively as the following block-operator matrices with respect to the orthogonal decomposition $\mathcal{H} = \ell^2(\mathbb{Z}, \mathbb{C}) \oplus \ell^2(\mathbb{Z}, \mathbb{C});$
\[
S :=
\begin{pmatrix}
L & 0 \\
0 & L^{-1}
\end{pmatrix}, \quad
G :=
\begin{pmatrix}
e^{\gamma} & 0 \\
0 & e^{-\gamma(\cdot + 1)}
\end{pmatrix}, \quad
\Phi :=
\begin{pmatrix}
e^{i \phi} & 0 \\
0 & e^{-i\phi(\cdot + 1)}
\end{pmatrix}, \quad
C_j :=
\begin{pmatrix}
\cos \theta_j & i \sin \theta_j \\
i \sin \theta_j & \cos \theta_j
\end{pmatrix},
\]
where $L$ is the unitary bilateral left-shift operator defined by $L \Psi := \Psi(\cdot + 1)$ for each $\Psi \in \ell^2(\mathbb{Z}, \mathbb{C}),$ and where we assume that the four $\mathbb{R}$-valued sequences $\gamma = (\gamma(x))_{x \in \mathbb{Z}}$, $\phi = (\phi(x))_{x \in \mathbb{Z}}, \theta_1 = (\theta_1(x))_{x \in \mathbb{Z}}, \theta_2 = (\theta_2(x))_{x \in \mathbb{Z}},$ all of which are identified with the corresponding multiplication operators on $\ell^2(\mathbb{Z}, \mathbb{C}),$ admit the following two-sided limits:
\begin{equation}
\label{equation: existence of limits}
\xi(\star) := \lim_{x \to \star} \xi(x) \in \mathbb{R}, \qquad \xi \in \{\gamma, \phi, \theta_1, \theta_2\}, \quad \star = \pm \infty.
\end{equation}
This model is a natural generalisation of the homogeneous model considered in \cite[\textsection III.A]{Mochizuki-Kim-Obuse-2016} with the time-evolution \cref{equation: definition of evolution operator of MKO} being consistent with the experimental setup in \cite{Regensburger-Bersch-Miri-Onishchukov-Christodoulides-Peschel-2012} (see \cite[\textsection I-II]{Mochizuki-Kim-Obuse-2016} for details). Note that $U_{\textnormal{mko}}$ is non-unitary, unless $\gamma$ is identically zero. We shall explicitly construct a $\mathbb{Z}_2$-grading operator $\varGamma_{\textnormal{mko}} : \mathcal{H} \to \mathcal{H}$ in a highly non-trivial fashion, so that $(\varGamma_{\textnormal{mko}},U_{\textnormal{mko}})$ forms a chirally symmetric quantum walk. Complete classification of the two topological invariants $\mathrm{ind}\,(\varGamma_{\textnormal{mko}},U_{\textnormal{mko}})$ and $\sigma_{\mathrm{ess}}(U_{\textnormal{mko}})$ can be found in this paper. In particular, we show that $\sigma_{\mathrm{ess}}(U_{\textnormal{mko}})$ is a subset of the union of the unit circle $\mathbb{T}$ and
the real line $\mathbb{R},$ given explicitly by the following formula;
\[
\sigma_{\mathrm{ess}}(U_{\textnormal{mko}}) = \sigma(-\infty) \cup \sigma(+\infty),
\]
where the sets $\sigma(\pm \infty) \subseteq \mathbb{T} \cup \mathbb{R}$ depend only on the two asymptotic values $\theta_1(\pm \infty), \theta_2(\pm \infty).$ As in \cref{figure: three cases}, it is shown in this paper that for each $\star = \pm \infty,$ there exists a well-defined subinterval $[\gamma_-(\star),\gamma_+(\star)]$ of $[0,\infty],$ which enables us to classify $\sigma(\star)$ into $6$ different cases in total, depending on the sign $s(\star)$ of $-\sin \theta_1(\star) \sin \theta_2(\star).$
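For constant parameters $(\gamma,\phi,\theta_1,\theta_2)$, the operator $U_{\textnormal{mko}}$ commutes with translations and reduces, after a Fourier transform, to a $2\times 2$ symbol $U(k)$, whose eigenvalues over $k$ trace out the spectrum (a standard fact for Laurent-type operators with continuous symbol). The following numerical sketch, with parameter values that are purely illustrative choices of ours, checks that $U(k)$ is unitary precisely when $\gamma = 0$, and that every eigenvalue lies on $\mathbb{T} \cup \mathbb{R}$, consistent with the classification above:

```python
import numpy as np

# 2x2 momentum-space symbol of U_mko for constant parameters
# (sign convention for k is immaterial for these checks).
def U_symbol(k, gamma, phi, th1, th2):
    S = np.diag([np.exp(1j * k), np.exp(-1j * k)])    # L acts as e^{ik}
    G = np.diag([np.exp(gamma), np.exp(-gamma)])
    Ph = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    C = lambda th: np.array([[np.cos(th), 1j * np.sin(th)],
                             [1j * np.sin(th), np.cos(th)]])
    return S @ G @ Ph @ C(th2) @ S @ np.linalg.inv(G) @ Ph @ C(th1)

I2 = np.eye(2)
U0 = U_symbol(0.3, 0.0, 0.2, np.pi / 4, np.pi / 4)
U1 = U_symbol(0.3, 0.5, 0.2, np.pi / 4, np.pi / 4)
assert np.linalg.norm(U0.conj().T @ U0 - I2) < 1e-12  # gamma = 0: unitary
assert np.linalg.norm(U1.conj().T @ U1 - I2) > 0.5    # gamma != 0: non-unitary

# every eigenvalue of U(k) lies on the unit circle or the real line
for k in np.linspace(-np.pi, np.pi, 101):
    for lam in np.linalg.eigvals(U_symbol(k, 0.5, 0.2, np.pi / 4, np.pi / 3)):
        assert min(abs(abs(lam) - 1.0), abs(lam.imag)) < 1e-7
```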
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[ticks=none, xmin=-2, xmax=2, ymin= -2, ymax=2, legend pos = north west, axis lines=center, xlabel=$\Re$, ylabel=$\Im$, xlabel style={anchor = north}
, width = 0.4\textwidth, height = 0.4\textwidth, clip=false
]
\addplot [domain=0:2*pi,samples=50, smooth, dashed]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= 6*pi/10: 9*pi/10, samples=50, smooth, ultra thick, lightgray]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= -6*pi/10: -9*pi/10, samples=50, smooth, ultra thick, lightgray]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= 6*pi/10: 9*pi/10, samples=50, smooth, ultra thick]({-cos(deg(x))},{sin(deg(x))});
\addplot [domain= -6*pi/10: -9*pi/10, samples=50, smooth, ultra thick]({-cos(deg(x))},{sin(deg(x))});
\node at (0,2.5) {\textbf{Case I}};
\node at (0,-2.5) {$|\gamma(\star)| \leq \gamma_-(\star)$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[ticks=none,xmin=-2, xmax=2, ymin=-2, ymax=2, legend pos = north west, axis lines=center, xlabel=$\Re$, ylabel=$\Im$, xlabel style={anchor = north}
, width = 0.4\textwidth, height = 0.4\textwidth, clip=false
]
\addplot [domain=0:2*pi,samples=50, smooth, dashed]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= 8*pi/10: pi, samples=50, smooth, ultra thick, lightgray]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= -8*pi/10: -pi, samples=50, smooth, ultra thick, lightgray]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= 0: 1, samples=50, smooth, ultra thick, lightgray]({-1.5*x - (1-x)/(1.5)},{0});
\addplot [domain= 8*pi/10: pi, samples=50, smooth, ultra thick]({-cos(deg(x))},{sin(deg(x))});
\addplot [domain= -8*pi/10: -pi, samples=50, smooth, ultra thick]({-cos(deg(x))},{sin(deg(x))});
\addplot [domain= 0: 1, samples=50, smooth, ultra thick]({1.5*x + (1-x)/(1.5)},{0});
\node at (0,2.5) {\textbf{Case II}};
\node at (0,-2.5) {$\gamma_-(\star) < |\gamma(\star)| < \gamma_+(\star)$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[ticks=none,xmin=-2, xmax=2, ymin=-2, ymax=2, legend pos = north west, axis lines=center, xlabel=$\Re$, ylabel=$\Im$, xlabel style={anchor = north}
, width = 0.4\textwidth, height = 0.4\textwidth, clip=false
]
\addplot [domain=0:2*pi, samples=50, smooth, dashed]({cos(deg(x))},{sin(deg(x))});
\addplot [domain= 0: 1, samples=50, smooth, ultra thick, lightgray]({-1.8*x - (1.2)*(1-x)},{0});
\addplot [domain= 0: 1, samples=50, smooth, ultra thick, lightgray]({-x/(1.2) - (1-x)/(1.8)},{0});
\addplot [domain= 0: 1, samples=50, smooth, ultra thick]({1.8*x + (1.2)*(1-x)},{0});
\addplot [domain= 0: 1, samples=50, smooth, ultra thick]({x/(1.2) + (1-x)/(1.8)},{0});
\node at (0,2.5) {\textbf{Case III}};
\node at (0,-2.5) {$\gamma_+(\star) \leq |\gamma(\star)|$};
\end{axis}
\end{tikzpicture}
\caption{
The set $\sigma(\star) \subseteq \mathbb{T} \cup \mathbb{R}$ is classified into Cases I, II, III as above according to the size of $|\gamma(\star)|.$ If $s(\star) = 1$ (resp. if $s(\star) = -1$), then the black regions (resp. gray regions) in each of the above three cases depict the subset $\sigma(\star).$ Therefore, there are $6$ distinct cases in total. In particular, $\sigma(\star)$ is a connected subset of $\mathbb{T} \cup \mathbb{R}$ containing either $-1$ or $+1,$ and so Case II is of significant importance.}
\label{figure: three cases}
\end{figure}
The present article is organised as follows. \cref{section: preliminaries} is a preliminary. In \cref{section: chirally symmetric quantum walks} we state the main theorems of this paper, proofs of which will be completely deferred to \cref{section: proof of main theorem}. In particular, it is shown in \cref{theorem: MKO model} that the $2$-step chirally symmetric quantum walk $(\varGamma_{\textnormal{mko}}, U_{\textnormal{mko}})$ can be naturally generalised to another $m$-step chirally symmetric quantum walk, denoted by $(\varGamma_m, U_m)$ in this paper, where $m$ can be any fixed non-zero integer. This new model also unifies the one-dimensional unitary quantum walks in \cite{Ambainis-Bach-Nayak-Vishwanath-Watrous-2001,Konno-2002,Suzuki-2016,Kitagawa-Rudner-Berg-Demler-2010,Kitagawa-Broome-Fedrizzi-Rudner-Berg-Kassal-Aspuru-Demler-White-2012,Kitagawa-2012,Fuda-Funakawa-Suzuki-2017,Fuda-Funakawa-Suzuki-2018,Fuda-Funakawa-Suzuki-2019,Suzuki-Tanaka-2019,Matsuzawa-2020,Tanaka-2020}. Complete classification of the two topological invariants $\mathrm{ind}\,(\varGamma_m, U_m)$ and $\sigma_{\mathrm{ess}}(U_m)$ can be collectively found in \cref{maintheorem: generalised mko}. With the aid of \crefrange{theorem: MKO model}{maintheorem: generalised mko}, we show in \cref{section: gapless time-evolution} that the non-unitary time-evolution $U_{\textnormal{mko}}$ can have a well-defined index, yet it is essentially gapless (see \cref{example: second example} for details). This construction is based upon Case II in \cref{figure: three cases}. In \cref{section: proof of main theorem}, we prove the main theorems in \cref{section: main theorem and discussion}. In particular, we shall make use of an abstract form of the one-dimensional bulk-boundary correspondence to fully classify $\mathrm{ind}\,(\varGamma_m, U_m).$ The paper concludes with the summary and discussion in \cref{section: concluding remarks}.
\section{Main theorems and discussion}
\label{section: main theorem and discussion}
Proofs of the main theorems of the current section can be collectively found in \cref{section: proof of main theorem}.
\subsection{Preliminaries}
\label{section: preliminaries}
By operators we shall always mean everywhere-defined bounded operators between Hilbert spaces throughout this paper. An operator $X$ is said to be \textbi{Fredholm}, if $\ker X, \ker X^*$ are finite-dimensional and if $X$ has a closed range. If $X$ is Fredholm, then the \textbi{Fredholm index} of $X$ is defined by
\[
\mathrm{ind}\, X := \dim \ker X - \dim \ker X^*.
\]
If the domain and range of $X$ are identical, then the (Fredholm) \textbi{essential spectrum} of $X$ is defined by $\sigma_{\mathrm{ess}}(X) := \{z \in \mathbb{C} \mid X - z \mbox{ is not Fredholm}\}.$ In particular, we call $X$ \textbi{essentially gapped}, if $-1,+1 \notin \sigma_{\mathrm{ess}}(X)$ following \cite{Cedzich-Geib-Grunbaum-Stahl-Velazquez-Werner-Werner-2018,Cedzich-Geib-Stahl-Velazquez-Werner-Werner-2018}.
A \textbi{chiral pair} on $\mathcal{H}$ is any pair $(\varGamma,U)$ of a unitary self-adjoint operator $\varGamma$ on $\mathcal{H}$ and an operator $U$ on $\mathcal{H}$ satisfying the chiral symmetry condition \cref{equation: chiral symmetry}. Note that $\varGamma$ gives a $\mathbb{Z}_2$-grading of the underlying Hilbert space $\mathcal{H} = \ker(\varGamma - 1) \oplus \ker(\varGamma + 1),$ and that $\varGamma = 1 \oplus (-1)$ with respect to this orthogonal decomposition, where $1$ denotes the identity operator on a Hilbert space throughout this paper. The operator $U$ can be written as $U = R + iQ,$ where $R := (U + U^*)/2$ and $Q := (U - U^*)/(2i)$ are the real and imaginary parts of $U$ respectively. More precisely, $R,Q$ admit the following block-operator matrix representations:
\begin{align}
\label{equation: representation of R and Q}
R =
\begin{pmatrix}
R_1 & 0 \\
0 & R_2
\end{pmatrix}_{\ker(\varGamma - 1) \oplus \ker(\varGamma + 1)}, \qquad
Q = \begin{pmatrix}
0 & Q_0^* \\
Q_0 & 0
\end{pmatrix}_{\ker(\varGamma - 1) \oplus \ker(\varGamma + 1)},
\end{align}
where the first equality (resp. second equality) follows from $[\varGamma, R] := \varGamma R - R \varGamma = 0$ (resp. from $\{\varGamma, Q\} := \varGamma Q + Q \varGamma = 0$).
\begin{definition}
Let $(\varGamma, U)$ be a chiral pair on a Hilbert space $\mathcal{H},$ and let $Q$ be the imaginary part of $U$ given by the second equality in \cref{equation: representation of R and Q}. Then the chiral pair $(\varGamma, U)$ is said to be \textbi{Fredholm}, if $0 \notin \sigma_{\mathrm{ess}}(Q)$ (or, equivalently, $Q_0$ is Fredholm). In this case, the \textbi{Witten index} of the Fredholm chiral pair $(\varGamma, U)$ is defined by
$
\mathrm{ind}\,(\varGamma, U) := \mathrm{ind}\, Q_0.
$
\end{definition}
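In finite dimensions every operator is Fredholm, and the Fredholm index reduces to a dimension count: if $Q_0 : \mathbb{C}^n \to \mathbb{C}^m$ has rank $r,$ then $\mathrm{ind}\, Q_0 = (n - r) - (m - r) = n - m.$ The following minimal numerical illustration of this count is not part of the formalism above (the interesting indices in this paper arise only in infinite dimensions):

```python
import numpy as np

def fredholm_index(Q0):
    """dim ker Q0 - dim ker Q0^* for a finite matrix Q0: C^n -> C^m."""
    m, n = Q0.shape
    r = np.linalg.matrix_rank(Q0)
    return (n - r) - (m - r)     # = n - m, independent of Q0 itself

# a rank-one map C^5 -> C^3: dim ker = 4, dim ker of the adjoint = 2
Q0 = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0, 5.0])
print(fredholm_index(Q0))        # 2
```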
We shall make use of the following \textbi{unitary invariance} property of the Witten index throughout this paper;
\begin{lemma}[unitary invariance]
\label{lemma: unitary invariance of the witten index}
Let $(\varGamma,U), (\varGamma',U')$ be two chiral pairs on two Hilbert spaces $\mathcal{H},\mathcal{H}'$ respectively. If $(\varGamma,U), (\varGamma',U')$ are \textbi{unitarily equivalent} in the sense that $(\varGamma',U') = (\epsilon^* \varGamma \epsilon, \epsilon^* U \epsilon)$ for some unitary operator $\epsilon : \mathcal{H}' \to \mathcal{H},$ then $(\varGamma, U)$ is Fredholm if and only if so is $(\varGamma', U').$ In this case, we have $\mathrm{ind}\,(\varGamma, U) = \mathrm{ind}\,(\varGamma', U').$
\end{lemma}
\subsection{Main theorems}
\label{section: chirally symmetric quantum walks}
We are now in a position to introduce the main model of the present article;
\begin{mdefinition}
\label{definition: Um}
Let $m$ be a fixed non-zero integer, and let $(\varGamma_m, U_m)$ be the pair of the following block-operator matrices with respect to $\ell^2(\mathbb{Z}, \mathbb{C}^2) = \ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z}):$
\begin{align}
\tag{A1}
\label{equation: definition of VarGamma m}
\varGamma_m &:=
\begin{pmatrix}
p & qL^{m} \\
L^{-m}q^* & -p(\cdot - m)
\end{pmatrix}, \\
\tag{A2}
\label{equation: definition of Um}
U_m &:=
\begin{pmatrix}
p & qL^{m} \\
L^{-m}q^* & -p(\cdot - m)
\end{pmatrix}
\begin{pmatrix}
e^{-2 \gamma(\cdot + 1)}a & e^{\gamma - \gamma(\cdot + 1)}b^* \\
e^{\gamma - \gamma(\cdot + 1)}b & -e^{2 \gamma}a
\end{pmatrix},
\end{align}
where we assume that three convergent $\mathbb{R}$-valued sequences $\gamma = (\gamma(x))_{x \in \mathbb{Z}}, p = (p(x))_{x \in \mathbb{Z}} , a = (a(x))_{x \in \mathbb{Z}}$ and two convergent $\mathbb{C}$-valued sequences $q =(q(x))_{x \in \mathbb{Z}}, b = (b(x))_{x \in \mathbb{Z}}$ satisfy the following conditions:
\begin{align}
\tag{A3}
\label{equation: condition of p and q}
&p(x)^2 + |q(x)|^2 = 1, & &x \in \mathbb{Z}, \\
\tag{A4}
\label{equation: condition of a and b}
&a(x)^2 + |b(x)|^2 = 1, & &x \in \mathbb{Z}, \\
\tag{A5}
\label{equation: limits of sequences}
&\xi(\pm \infty) := \lim_{x \to \pm \infty} \xi(x), & & \xi \in \{\gamma, p, a, q, b\}, \\
\tag{A6}
\label{equation: limits of theta and theta prime}
&
\theta(\pm \infty) :=
\begin{cases}
\arg q(\pm \infty), & q(\pm \infty) \neq 0, \\
0, & q(\pm \infty) = 0,
\end{cases}
&
&\theta'(\pm \infty) :=
\begin{cases}
\arg b(\pm \infty), & b(\pm \infty) \neq 0, \\
0, & b(\pm \infty) = 0,
\end{cases}
\end{align}
where $\arg w$ of a non-zero complex number $w$ is uniquely defined by $w = |w|e^{i \arg w}$ and $\arg w \in [0,2\pi).$
\end{mdefinition}
The pair $(\varGamma_m, U_m)$ introduced in \cref{definition: Um}, where $\varGamma_m$ is unitary self-adjoint by \cref{equation: condition of p and q}, turns out to be a chiral pair. Indeed, $U_m$ can be uniquely written as $U_m = \varGamma_m C,$ where $C$ is self-adjoint, and so
\[
U_m^* = (\varGamma_m C)^* = C^* \varGamma_m^* = C \varGamma_m = \varGamma_m^2 C \varGamma_m = \varGamma_m U_m \varGamma_m,
\]
where the second last equality follows from $\varGamma_m^2 = 1.$ The chiral pair $(\varGamma_m, U_m)$ unifies all of the following existing models on the one-dimensional integer lattice $\mathbb{Z}:$
\begin{itemize}
\item If $m = 1$ and if $\gamma$ is identically $0,$ then $U_1$ is the unitary evolution of a \textbi{split-step quantum walk model} considered in \cite{Kitagawa-Rudner-Berg-Demler-2010,Kitagawa-Broome-Fedrizzi-Rudner-Berg-Kassal-Aspuru-Demler-White-2012,Kitagawa_2012,Fuda-Funakawa-Suzuki-2017,Fuda-Funakawa-Suzuki-2018,Fuda-Funakawa-Suzuki-2019,Suzuki-Tanaka-2019,Matsuzawa-2020,Tanaka-2020}. In particular, if we set $p = 0,$ then this model becomes the usual \textbi{one-dimensional quantum walk model} considered in \cite{Ambainis-Bach-Nayak-Vishwanath-Watrous-2001,Konno-2002,Suzuki-2016}.
\item If $m = 2,$ then $U_2$ turns out to be equivalent to the non-unitary evolution operator $U_{\textrm{mko}}$ given by \cref{equation: definition of evolution operator of MKO} in the sense of the following theorem.
\end{itemize}
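Condition \cref{equation: condition of p and q} is precisely what makes $\varGamma_m$ unitary self-adjoint. For position-independent sequences $p,q$ (so that $p(\cdot - m) = p$) this can be verified on the Fourier side, where $L$ acts as multiplication by $z \in \mathbb{T}.$ A quick numerical check, with illustrative constants not taken from any model above:

```python
import numpy as np

m, p, q = 3, 0.6, 0.8j                  # p^2 + |q|^2 = 1, i.e. condition (A3)
for t in np.linspace(0.0, 2 * np.pi, 7):
    z = np.exp(1j * t)                  # a point of the unit circle T
    # symbol of Gamma_m for constant p, q (the entry -p(. - m) is just -p)
    G = np.array([[p, q * z**m],
                  [z**(-m) * np.conj(q), -p]])
    assert np.allclose(G, G.conj().T)       # self-adjoint
    assert np.allclose(G @ G, np.eye(2))    # involution, hence unitary
```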
\begin{mtheorem}
\label{theorem: MKO model}
Let $U_{\textnormal{mko}}$ be given by \cref{equation: definition of evolution operator of MKO}, where we assume that four convergent $\mathbb{R}$-valued sequences $\gamma, \phi, \theta_1, \theta_2$ admit the two-sided limits of the form \cref{equation: existence of limits}. Then there exists a unitary self-adjoint operator $\varGamma_{\textnormal{mko}}$ on $\ell^2(\mathbb{Z}, \mathbb{C}^2),$ such that $(\varGamma_{\textnormal{mko}}, U_{\textnormal{mko}})$ forms a chiral pair. Moreover, the chiral pair $(\varGamma_{\textnormal{mko}}, U_{\textnormal{mko}})$ is unitarily equivalent to the chiral pair $(\varGamma_2, U_2),$ where the sequences $p,q,a,b$ are defined respectively by
\begin{equation}
\tag{B1}
\label{equation: mko substitution}
p := - \sin \theta_1(\cdot + 1), \quad q := - i\cos \theta_1(\cdot + 1), \quad a := \sin \theta_2, \quad b := i \cos \theta_2 e^{i (\phi + \phi(\cdot + 1))}.
\end{equation}
\end{mtheorem}
Complete classification of the two topological invariants $\mathrm{ind}\,(\varGamma_m, U_m)$ and $\sigma_{\mathrm{ess}}(U_m)$ can be collectively found in the following theorem;
\begin{mtheorem}
\label{maintheorem: generalised mko}
If $(\varGamma_m, U_m)$ is the chiral pair in \cref{definition: Um}, then we have the following two assertions:
\begin{enumerate}
\item \textnormal{\textbf{Classification of the Witten index.}} For each $\star = \pm \infty,$ we let
\begin{equation}
\tag{C1}
\label{equation: definition of pgamma}
p_{\gamma}(\star) := \frac{p(\star)}{\sqrt{p(\star)^2 + |q(\star)|^2\cosh^2(2\gamma(\star))}}.
\end{equation}
Then the chiral pair $(\varGamma_m, U_{m})$ is Fredholm if and only if $|p_{\gamma}(\star)| \neq |a(\star)|$ for each $\star = \pm \infty.$ In this case, we have the following index formula;
\begin{equation}
\tag{C2}
\label{equation: Witten index formula}
\frac{\mathrm{ind}\, (\varGamma_m, U_{m})}{m}
=
\begin{cases}
0, & |p_{\gamma}(-\infty)| < |a(-\infty)|, \, |p_{\gamma}(+\infty)| < |a(+\infty)|, \\
\mathrm{sgn}\, p(+\infty), & |p_{\gamma}(-\infty)| < |a(-\infty)|, \, |p_{\gamma}(+\infty)| > |a(+\infty)|, \\
- \mathrm{sgn}\, p(-\infty), & |p_{\gamma}(-\infty)| > |a(-\infty)|, \, |p_{\gamma}(+\infty)| < |a(+\infty)| , \\
\mathrm{sgn}\, p(+\infty) - \mathrm{sgn}\, p(-\infty), & |p_{\gamma}(-\infty)| > |a(-\infty)|, \, |p_{\gamma}(+\infty)| > |a(+\infty)|,
\end{cases}
\end{equation}
where the sign function $\mathrm{sgn}\, : \mathbb{R} \to \{-1,1\}$ is defined by
\begin{equation}
\tag{C3}
\label{equation: definition of sign function}
\mathrm{sgn}\, x :=
\begin{cases}
\dfrac{x}{|x|}, & x \neq 0, \\
1, & x = 0.
\end{cases}
\end{equation}
\item \textnormal{\textbf{Classification of the essential spectrum.}}
For each $\star = \pm \infty,$ we let
\begin{align}
\label{equation: definition of s star}
\tag{C4}
s(\star) &:= \mathrm{sgn}\,(p(\star)a(\star)), \\
\label{equation: definition of Lambdapm}
\tag{C5}
\Lambda_\pm(\star) &:= |p(\star) a(\star)| \cosh(2\gamma(\star)) \pm |q(\star) b(\star)|, \\
\label{equation: definition of sigma star}
\tag{C6}
\sigma(\star) &:= \bigcup_{n \in \{ -1, +1\}} \left\{ \left( x + \sqrt{x^2 - 1} \right)^n \mathrel{} \middle| \mathrel{} s(\star)x \in [\Lambda_-(\star), \Lambda_+(\star)] \right\}.
\end{align}
Then the essential spectrum of $U_{m}$ can be written as $\sigma_{\mathrm{ess}}(U_{m}) = \sigma(-\infty) \cup \sigma(+\infty).$ Furthermore, for each $\star = \pm \infty$ there exists a well-defined closed interval $[\gamma_-(\star), \gamma_+(\star)] \subseteq [0, \infty],$ such that the set $\sigma(\star)$ admits the following further classification:
\begin{itemize}
\item \textnormal{\textbf{Case I. }} If $|\gamma(\star)| \leq \gamma_-(\star),$ then $[\Lambda_-(\star),\Lambda_+(\star)] \subseteq [-1,1],$ and so $\sigma(\star) \subseteq \mathbb{T}.$
\item \textnormal{\textbf{Case II. }} If $\gamma_-(\star) < |\gamma(\star)| < \gamma_+(\star),$ then $[\Lambda_-(\star), 1] \subseteq [-1,1]$ and $[1, \Lambda_+(\star)] \subseteq [1,\infty],$ and so $\sigma(\star)$ is a connected subset of $\mathbb{T}\cup \mathbb{R}$ containing $s(\star).$
\item \textnormal{\textbf{Case III. }} If $\gamma_+(\star) \leq |\gamma(\star)|,$ then $[\Lambda_-(\star),\Lambda_+(\star)] \subseteq [1,\infty),$ and so $\sigma(\star) \subseteq \mathbb{R}.$
\end{itemize}
More explicitly, for each $\star = \pm \infty$ the closed interval $[\gamma_-(\star), \gamma_+(\star)]$ is given by
\begin{equation}
\tag{C7}
\label{equation: definition of gammaj}
\gamma_\pm(\star)
:= \frac{1}{2} \cosh^{-1} \left( \frac{1 \pm |q(\star)b(\star)|}{|p(\star)a(\star)|}\right),
\end{equation}
where $\cosh^{-1}$ denotes the inverse function of $[0,\infty] \ni x \longmapsto \cosh x \in [1,\infty]$ with $1/0 := \infty$ by convention.
\end{enumerate}
\end{mtheorem}
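Since the index formula \cref{equation: Witten index formula} is a finite case analysis in the two-sided limits, it can be transcribed mechanically. The following sketch evaluates the formula from the limiting data $(p(\star), |q(\star)|, a(\star), \gamma(\star))$; it is a plain transcription of (C1)--(C3), not an independent computation of the index:

```python
import math

def sgn(x):
    # the sign function (C3), with sgn 0 := 1
    return 1 if x >= 0 else -1

def p_gamma(p, q_abs, gamma):
    # the renormalised limit (C1)
    return p / math.sqrt(p**2 + q_abs**2 * math.cosh(2 * gamma)**2)

def witten_index(m, limits):
    """limits[star] = (p, |q|, a, gamma) for star in (-1, +1).
    Returns ind (Gamma_m, U_m) via (C2), or None if the pair is not Fredholm."""
    bulk = {}
    for star, (p, q_abs, a, gamma) in limits.items():
        pg = p_gamma(p, q_abs, gamma)
        if abs(pg) == abs(a):
            return None             # the Fredholm condition fails at star
        bulk[star] = abs(pg) > abs(a)
    ind = 0
    if bulk[+1]:
        ind += sgn(limits[+1][0])
    if bulk[-1]:
        ind -= sgn(limits[-1][0])
    return m * ind
```

For instance, limiting data with $|p_\gamma(\star)| > |a(\star)|$ at both infinities and $p(\pm \infty)$ of opposite signs yields $\pm 2m,$ the last case of \cref{equation: Witten index formula}.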
Explicit formulas for $\sigma(\star) \subseteq \mathbb{T} \cup \mathbb{R}$ in Cases I, II, III of \cref{maintheorem: generalised mko}~(ii) will be given shortly in \cref{section: gapless time-evolution}. This will allow us to classify $\sigma(\star)$ into the $6$ different cases as in \cref{figure: three cases}.
\begin{remark}
If $\gamma$ is identically zero and if $m = 1,$ then $U_1$ is the \textit{unitary} time-evolution of a split-step quantum walk, and the formula for $\mathrm{ind}\,(\varGamma_1, U_1)$ can be found in \cite{Suzuki-Tanaka-2019,Matsuzawa-2020,Tanaka-2020}. Similarly, under the same assumption, \cref{maintheorem: generalised mko}~(ii) coincides with \cite[Theorem 30]{Suzuki-Tanaka-2019} or \cite[Theorem B~(ii)]{Tanaka-2020}.
\end{remark}
\subsection{Discussion}
\label{section: gapless time-evolution}
Let us start with the following lemma;
\begin{lemma}
\label{lemma: essential spectrum of essentially unitary U}
Let $(\varGamma, U)$ be an abstract chiral pair on a Hilbert space $\mathcal{H},$ and let $Q$ be the imaginary part of $U.$ If $U$ is essentially unitary (i.e. $U^*U - 1, UU^* - 1$ are compact), then
\begin{equation}
\label{equation: essential spectral mapping theorem}
\sigma_{\mathrm{ess}}(Q) = \left\{\frac{z - z^*}{2i} \mathrel{} \middle| \mathrel{} z \in \sigma_{\mathrm{ess}}(U)\right\}.
\end{equation}
That is, if $U$ is essentially unitary, then the chiral pair $(\varGamma, U)$ is Fredholm if and only if $U$ is essentially gapped in the sense of \cref{section: preliminaries}.
\end{lemma}
\begin{proof}
The formula \cref{equation: essential spectral mapping theorem} can be easily proved by using the spectral mapping theorem and the trigonometric polynomial $p(z) := (z - z^*)/(2i).$ We omit the proof, since an analogous argument can be found in \cite[Lemma 3.6]{Tanaka-2020}.
\end{proof}
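In finite dimensions, \cref{equation: essential spectral mapping theorem} is just the spectral mapping theorem for the normal operator $U$: the eigenvalues of $Q = (U - U^*)/(2i)$ are exactly the imaginary parts of the eigenvalues of $U.$ A quick numerical confirmation with a random unitary matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, _ = np.linalg.qr(A)              # a random 5x5 unitary matrix
Q = (U - U.conj().T) / 2j           # the imaginary part of U; Hermitian

# spectrum of Q = imaginary parts of the (unimodular) spectrum of U
assert np.allclose(np.sort(np.linalg.eigvalsh(Q)),
                   np.sort(np.linalg.eigvals(U).imag))
```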
The following theorem shows that for an abstract chiral pair $(\varGamma,U),$ where $U$ is not necessarily essentially unitary, the essential gappedness of $U$ is no longer an indispensable assumption for the Fredholmness of the chiral pair $(\varGamma, U);$
\begin{theorem}
\label{theorem: main}
With the notation introduced in \cref{maintheorem: generalised mko}, suppose that the following hold true for each $\star = \pm \infty:$
\begin{equation}
\label{equation: main}
|p_\gamma(\star)| \neq |a(\star)|, \qquad \gamma_-(\star) < |\gamma(\star)| < \gamma_+(\star).
\end{equation}
Then $(\varGamma_m, U_m)$ is Fredholm, yet $U_m$ fails to be essentially gapped.
\end{theorem}
\begin{proof}
Let us first start with a further classification of $\sigma_{\mathrm{ess}}(U_m)$ given by \cref{maintheorem: generalised mko}~(ii). We consider the following $\mathbb{R}$-valued function $g$ defined on $(-\infty,-1] \cup [1, \infty);$
\[
g(x) := x + \sqrt{x^2 - 1}, \qquad x \in (-\infty,-1] \cup [1, \infty).
\]
\cref{figure: graph of g} shows the graphs of $g, 1/g;$
\begin{figure}[H]
\centering
\begin{tikzpicture}[baseline=(current bounding box.center)]
\begin{axis}[legend pos = north west, ytick= {-1,0,1}, yticklabel style={anchor = north west}, xtick= {-1,0,1}, xticklabel style={anchor = north west}, xmin= -3, xmax=3, ymin=-3, ymax=3, legend pos = north west, axis lines=center, xlabel=$x$, ylabel=$y$, xlabel style={anchor = west}
]
\addlegendentry{$y = g(x)$}
\addplot[color = black, samples=200,domain=1:4, very thick]{x + sqrt(x^2 - 1)};
\addplot[color = black, samples=200,domain=-1:-4, very thick, forget plot]{x + sqrt(x^2 - 1)};
\addplot [color = black!20!white, domain=-4:4,samples=50, smooth, dashed, forget plot]({x},{1});
\addplot [color = black!20!white, domain=-3:3,samples=50, smooth, dashed, forget plot]({1},{x});
\addplot [color = black!20!white, domain=-4:4,samples=50, smooth, dashed, forget plot]({x},{-1});
\addplot [color = black!20!white, domain=-3:3,samples=50, smooth, dashed, forget plot]({-1},{x});
\addlegendentry{$y = \frac{1}{g(x)}$}
\addplot[color = lightgray, densely dashed, samples=200,domain=1:4, very thick]{1/(x + sqrt(x^2 - 1))};
\addplot[color = lightgray, densely dashed,samples=200,domain=-1:-4, very thick, forget plot]{1/(x + sqrt(x^2 - 1))};
\end{axis}
\end{tikzpicture}
\caption{The black graph corresponds to $g,$ and the gray graph corresponds to $1/g.$}
\label{figure: graph of g}
\end{figure}
Evidently, $g(x)g(-x) = -1$ for $|x| \geq 1.$ It follows that for each $\star = \pm \infty$ the set $\sigma(\star)$ admits the following further classification;
\begin{itemize}
\item \textnormal{\textbf{Case I. }} If $|\gamma(\star)| \leq \gamma_-(\star),$ then
\[
\sigma(\star)
=
\begin{cases}
\{z \in \mathbb{T} \mid \Re z \in [\Lambda_-(\star), \Lambda_+(\star)]\}, & s(\star) = 1, \\
\{z \in \mathbb{T} \mid \Re z \in [-\Lambda_+(\star), -\Lambda_-(\star)]\}, & s(\star) = -1.
\end{cases}
\]
\item \textnormal{\textbf{Case II. }} If $\gamma_-(\star) < |\gamma(\star)| < \gamma_+(\star),$ then
\[
\sigma(\star)
=
\begin{cases}
\{z \in \mathbb{T} \mid \Re z \in [\Lambda_-(\star), 1]\} \cup [g(\Lambda_+(\star))^{-1}, g(\Lambda_+(\star))],& s(\star) = 1, \\
\{z \in \mathbb{T} \mid \Re z \in [-1, -\Lambda_-(\star)]\} \cup [-g(\Lambda_+(\star)), -g(\Lambda_+(\star))^{-1}],& s(\star) = -1.
\end{cases}
\]
\item \textnormal{\textbf{Case III. }} If $\gamma_+(\star) \leq |\gamma(\star)|,$ then
\[
\sigma(\star)
=
\begin{cases}
[g(\Lambda_+(\star))^{-1},g(\Lambda_-(\star))^{-1}] \cup [g(\Lambda_-(\star)), g(\Lambda_+(\star))], & s(\star) = 1, \\
[-g(\Lambda_+(\star)),-g(\Lambda_-(\star))] \cup [-g(\Lambda_-(\star))^{-1},-g(\Lambda_+(\star))^{-1}],& s(\star) = -1.
\end{cases}
\]
\end{itemize}
That is, $\sigma(\star)$ is classified into the $6$ different cases as in \cref{figure: three cases}. It immediately follows from \cref{maintheorem: generalised mko} and \cref{equation: main} that $(\varGamma_m, U_m)$ is Fredholm, and that $\sigma_{\mathrm{ess}}(U_m) = \sigma(-\infty) \cup \sigma(+\infty).$ In particular, for each $\star = \pm \infty$ the set $\sigma(\star)$ is classified as Case II. That is, each $\sigma(\star)$ is a connected subset of $\mathbb{T} \cup \mathbb{R}$ containing either $-1$ or $+1,$ and so $U_m$ fails to be essentially gapped. The claim follows.
\end{proof}
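The proof above rests on elementary properties of $g$: it maps $[1,\infty)$ into itself, the two branches $n = \pm 1$ in \cref{equation: definition of sigma star} are exchanged by the inversion $z \longmapsto 1/z,$ and $g(x)\,g(-x) = -1.$ These identities admit a quick numerical confirmation:

```python
import numpy as np

def g(x):
    return x + np.sqrt(x**2 - 1)

xs = np.linspace(1.0, 10.0, 50)
assert np.all(g(xs) >= 1.0)               # g maps [1, oo) into itself
assert np.allclose(g(xs) * g(-xs), -1.0)  # equivalently, g(-x) = -1/g(x)
```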
The current section concludes with the following two numerical examples:
\begin{example}
\label{example: first example}
Let $(\varGamma_m, U_m)$ be the chiral pair in \cref{maintheorem: generalised mko}. Let
\[
p_0 := 0.2, \qquad a_0 := 0.1, \qquad \gamma_0 := 0.4.
\]
If $a(\pm \infty) := \pm a_0$ and $p(\pm \infty) := \pm p_0,$ then \cref{equation: definition of gammaj} becomes
\begin{align*}
\gamma_-(-\infty)
&= \gamma_-(+ \infty)
= \frac{1}{2} \cosh^{-1} \left(\frac{1 - \sqrt{1 - p_0^2}\sqrt{1 - a_0^2}}{|p_0 a_0|}\right)
= 0.350396, \\
\gamma_+(-\infty)
&= \gamma_+(+ \infty)
= \frac{1}{2} \cosh^{-1} \left(\frac{1 + \sqrt{1 - p_0^2}\sqrt{1 - a_0^2}}{|p_0 a_0|}\right)
= 2.64283.
\end{align*}
If we let $\gamma(\pm \infty) := \gamma_0,$ then $\gamma_-(\pm \infty) < |\gamma_0| < \gamma_+(\pm \infty).$ It follows from \cref{equation: definition of sigma star} that $\sigma_{\mathrm{ess}}(U_m) = \sigma(-\infty) = \sigma(+\infty),$ since $s(-\infty) = s(+\infty) = 1.$ More precisely, the set $\sigma_{\mathrm{ess}}(U_m) = \sigma(\pm \infty)$ is classified as Case II:
\begin{align*}
\Lambda_\pm &:= |p_0 a_0| \cosh(2\gamma_0) \pm \sqrt{1 - p_0^2}\sqrt{1 - a_0^2}, \\
\sigma_{\mathrm{ess}}(U_m) &= \{z \in \mathbb{T} \mid \Re z \in [\Lambda_-, 1]\} \cup [g(\Lambda_+)^{-1}, g(\Lambda_+)].
\end{align*}
The black region in Case II of \cref{figure: three cases} depicts the connected subset $\sigma_{\mathrm{ess}}(U_m)$ of $\mathbb{T} \cup \mathbb{R}$ containing $1.$ It follows that $U_m$ is not essentially gapped. Furthermore, \cref{equation: definition of pgamma} becomes
\[
|p_\gamma (\pm \infty)| =
\frac{|p_0|}{\sqrt{p_0^2 + (1 - p_0^2) \cosh^2(2\gamma_0)}} = 0.150876 > |a(\pm \infty)| = 0.1.
\]
It follows that $(\varGamma_m, U_m)$ is Fredholm, and $\mathrm{ind}\,(\varGamma_m, U_m) = m(+1 - (-1)) = 2m$ by the index formula \cref{equation: Witten index formula}. That is, we have constructed the Fredholm chiral pair $(\varGamma_m, U_m),$ in such a way that $U_m$ fails to be essentially gapped, yet $\mathrm{ind}\,(\varGamma_m, U_m) = 2m$ is well-defined.
\end{example}
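The numerical values quoted in \cref{example: first example} can be reproduced in a few lines;

```python
import math

p0, a0, gamma0 = 0.2, 0.1, 0.4
qb = math.sqrt(1 - p0**2) * math.sqrt(1 - a0**2)   # |q b| at the limits, via (A3)-(A4)

# the interval endpoints (C7) and the renormalised limit (C1)
gamma_minus = 0.5 * math.acosh((1 - qb) / abs(p0 * a0))   # ~ 0.350396
gamma_plus = 0.5 * math.acosh((1 + qb) / abs(p0 * a0))    # ~ 2.64283
p_gamma = abs(p0) / math.sqrt(p0**2 + (1 - p0**2) * math.cosh(2 * gamma0)**2)  # ~ 0.150876

assert gamma_minus < abs(gamma0) < gamma_plus   # Case II: U_m not essentially gapped
assert p_gamma > a0                             # Fredholm, with index 2m
```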
\begin{example}
\label{example: second example}
Let $U_{\textnormal{mko}}$ be the non-unitary evolution operator given by \cref{equation: definition of evolution operator of MKO}, where we assume the existence of the two-sided limits \cref{equation: existence of limits}. We define $p,q,a,b$ according to \cref{equation: mko substitution}, and \cref{theorem: MKO model} asserts $(\varGamma_{\textnormal{mko}},U_{\textnormal{mko}}) \simeq (\varGamma_2, U_2),$ where $\simeq$ denotes unitary equivalence of chiral pairs. As in \cref{example: first example}, we choose $\theta_1(\pm \infty), \theta_2(\pm \infty), \gamma(\pm \infty)$ in such a way that
\[
p(\pm \infty) = \pm 0.2, \qquad a(\pm \infty) = \pm 0.1, \qquad \gamma(\pm \infty) = 0.4.
\]
It follows that $(\varGamma_{\textnormal{mko}},U_{\textnormal{mko}}) \simeq (\varGamma_2, U_2)$ is Fredholm, and $\mathrm{ind}\,(\varGamma_{\textnormal{mko}},U_{\textnormal{mko}}) = 4.$ Furthermore, $U_{\textnormal{mko}}$ is essentially gapless.
\end{example}
\section{Proofs of the main theorems}
\label{section: proof of main theorem}
\subsection{Unitary invariance of the Witten index (\texorpdfstring{\cref{lemma: unitary invariance of the witten index}}{Lemma})}
We prove the unitary invariance of the Witten index (\cref{lemma: unitary invariance of the witten index}). Note that the special case of this invariance principle for unitary $U$ can be found in \cite[Corollary 3.6]{Suzuki-2019}, the proof of which makes use of a spectral mapping theorem for chirally symmetric unitary operators
\cite{Segawa-Suzuki-2016,Segawa-Suzuki-2019}. We give the following direct proof instead;
\begin{proof}[Proof of \cref{lemma: unitary invariance of the witten index}]
Let $(\varGamma,U), (\varGamma',U')$ be two unitarily equivalent chiral pairs on Hilbert spaces $\mathcal{H}, \mathcal{H}'$ respectively. That is, there exists a unitary operator $\epsilon : \mathcal{H}' \to \mathcal{H},$ such that
$
(\varGamma',U') = (\epsilon^* \varGamma \epsilon, \epsilon^* U \epsilon).
$
Let $\mathcal{H}_\pm := \ker(\varGamma \mp 1),$ and let $\mathcal{H}'_\pm := \ker(\varGamma' \mp 1).$ The operator $\epsilon$ admits the following block-operator matrix representation;
\begin{align*}
&\epsilon =
\begin{pmatrix}
\epsilon_{+} & \epsilon_{-+} \\
\epsilon_{+-} & \epsilon_{-}
\end{pmatrix},
&&
\begin{aligned}
\epsilon_{+}&: \mathcal{H}'_+ \to \mathcal{H}_+, \qquad & \epsilon_{-+}&: \mathcal{H}'_- \to \mathcal{H}_+, \\
\epsilon_{+-}&: \mathcal{H}'_+ \to \mathcal{H}_-, \qquad & \epsilon_{-}&: \mathcal{H}'_- \to \mathcal{H}_-.
\end{aligned}
\end{align*}
Recall that the operators $U,U'$ admit the following block-operator matrix representations respectively according to \cref{section: preliminaries}:
\[
U =
\begin{pmatrix}
R_1 & iQ_0^* \\
iQ_0 & R_2
\end{pmatrix}_{\mathcal{H}_+ \oplus \mathcal{H}_-}, \qquad
U'=
\begin{pmatrix}
R'_1 & i(Q'_0)^* \\
iQ'_0 & R'_2
\end{pmatrix}_{\mathcal{H}'_+ \oplus \mathcal{H}'_-}.
\]
Since $0 = \epsilon \varGamma' - \varGamma \epsilon,$ where $\varGamma = 1 \oplus (-1)$ and $\varGamma' = 1 \oplus (-1),$ we obtain
\[
0 =
\begin{pmatrix}
\epsilon_{+} & \epsilon_{-+} \\
\epsilon_{+-} & \epsilon_{-}
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
-
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
\epsilon_{+} & \epsilon_{-+} \\
\epsilon_{+-} & \epsilon_{-}
\end{pmatrix}
=
\begin{pmatrix}
\epsilon_{+} & -\epsilon_{-+} \\
\epsilon_{+-} & -\epsilon_{-}
\end{pmatrix}
+
\begin{pmatrix}
-\epsilon_{+} & -\epsilon_{-+} \\
\epsilon_{+-} & \epsilon_{-}
\end{pmatrix}
=
\begin{pmatrix}
0 & -2\epsilon_{-+} \\
2\epsilon_{+-} & 0
\end{pmatrix}.
\]
This implies $\epsilon = \epsilon_{+} \oplus \epsilon_{-} : \mathcal{H}_+' \oplus \mathcal{H}'_- \to \mathcal{H}_+ \oplus \mathcal{H}_-,$ and so
\[
\epsilon^* U \epsilon = (\epsilon^*_{+} \oplus \epsilon_{-}^*) U (\epsilon_{+} \oplus \epsilon_{-}) =
\begin{pmatrix}
\epsilon^*_{+} R_1 \epsilon_{+} & i \epsilon^*_{+} Q_0^* \epsilon_{-} \\
i \epsilon^*_{-} Q_0 \epsilon_{+} & \epsilon^*_{-} R_2 \epsilon_{-}\\
\end{pmatrix} =
\begin{pmatrix}
R'_1 & i (Q'_{0})^* \\
i Q'_{0} & R'_{2} \\
\end{pmatrix}.
\]
Since $Q'_{0} = \epsilon^*_{-} Q_0 \epsilon_{+},$ where $\epsilon_{+},\epsilon_{-}$ are unitary, we have that $Q_{0}$ is Fredholm if and only if so is $Q'_0.$ In this case, $\mathrm{ind}\, Q_{0} = \mathrm{ind}\, Q'_0.$ The claim follows.
\end{proof}
\subsection{Unitary transform of the Mochizuki-Kim-Obuse model (\texorpdfstring{\cref{theorem: MKO model}}{Theorem B})}
\label{section: MKO model}
\begin{proof}[Proof of \cref{theorem: MKO model}]
We shall repeatedly use the fact that the operator $S$ can be moved across any diagonal block-operator matrix, the result being again a diagonal block-operator matrix with suitably shifted entries. We have
\begin{align*}
U_{\textnormal{mko}} C_1^{-1}
&= S (G \Phi) C_2 S (G^{-1} \Phi) \\
&= S
\begin{pmatrix}
e^{\gamma + i \phi} & 0 \\
0 & e^{-\gamma(\cdot + 1)-i\phi(\cdot + 1)}
\end{pmatrix}
\begin{pmatrix}
\cos \theta_2 & i \sin \theta_2 \\
i \sin \theta_2 & \cos \theta_2
\end{pmatrix}
S \begin{pmatrix}
e^{-\gamma + i \phi} & 0 \\
0 & e^{\gamma(\cdot + 1) -i\phi(\cdot + 1)}
\end{pmatrix}\\
&=
S
\begin{pmatrix}
e^{\gamma + i \phi} & 0 \\
0 & e^{-\gamma(\cdot + 1)-i\phi(\cdot + 1)}
\end{pmatrix}
\begin{pmatrix}
\cos \theta_2 & i \sin \theta_2 \\
i \sin \theta_2 & \cos \theta_2
\end{pmatrix}
\begin{pmatrix}
e^{-\gamma(\cdot + 1) + i \phi(\cdot + 1)} & 0 \\
0 & e^{\gamma -i\phi}
\end{pmatrix}S \\
&=
S
\begin{pmatrix}
\cos \theta_2 e^{\gamma -\gamma(\cdot + 1) +i (\phi + \phi(\cdot + 1))} & i \sin \theta_2 e^{2\gamma}\\
i \sin \theta_2e^{-2\gamma(\cdot + 1)} & \cos \theta_2 e^{\gamma -\gamma(\cdot + 1)-i(\phi +\phi(\cdot + 1))}
\end{pmatrix}
S,
\end{align*}
where the third equality follows from $L^{\pm 1} \Psi = \Psi(\cdot \pm 1)$ for any $\Psi \in \ell^2(\mathbb{Z}).$ If $\sigma_2 =
\begin{pmatrix}
0 & -i \\
i & 0
\end{pmatrix}$
denotes the second Pauli matrix, then $\sigma_2^2 = 1,$ and so
\begin{align*}
U_{\textnormal{mko}} &=
(S\sigma_2)
\sigma_2
\begin{pmatrix}
\cos \theta_2 e^{\gamma -\gamma(\cdot + 1) +i (\phi + \phi(\cdot + 1))} & i \sin \theta_2 e^{2\gamma}\\
i \sin \theta_2e^{-2\gamma(\cdot + 1)} & \cos \theta_2 e^{\gamma -\gamma(\cdot + 1)-i(\phi +\phi(\cdot + 1))}
\end{pmatrix}
(S\sigma_2) (\sigma_2 C_1) \\
&=
(S\sigma_2)
\begin{pmatrix}
\sin \theta_2e^{-2\gamma(\cdot + 1)} & -i\cos \theta_2 e^{\gamma -\gamma(\cdot + 1)-i(\phi +\phi(\cdot + 1))} \\
i\cos \theta_2 e^{\gamma -\gamma(\cdot + 1) +i (\phi + \phi(\cdot + 1))} & - \sin \theta_2 e^{2\gamma}\\
\end{pmatrix}
(S\sigma_2) (\sigma_2 C_1) \\
&=
(S\sigma_2)
\begin{pmatrix}
a e^{-2\gamma(\cdot + 1)} & b^* e^{\gamma -\gamma(\cdot + 1)} \\
be^{\gamma -\gamma(\cdot + 1)} & - a e^{2\gamma}\\
\end{pmatrix}
(S\sigma_2) (\sigma_2 C_1).
\end{align*}
If we let $\eta := (\sigma_2 C_1) (S \sigma_2),$ where
$\sigma_2 C_1$ and $S\sigma_2$ are unitary involutions, then
\begin{align*}
\eta^* U_{\textnormal{mko}} \eta
&=
(S \sigma_2)
(\sigma_2 C_1)
(S\sigma_2)
\begin{pmatrix}
a e^{-2\gamma(\cdot + 1)} & b^* e^{\gamma -\gamma(\cdot + 1)} \\
be^{\gamma -\gamma(\cdot + 1)} & - a e^{2\gamma}\\
\end{pmatrix}
(S\sigma_2) (\sigma_2 C_1)(\sigma_2 C_1) (S \sigma_2) \\
&=
(S \sigma_2)
(\sigma_2 C_1)
(S\sigma_2)
\begin{pmatrix}
a e^{-2\gamma(\cdot + 1)} & b^* e^{\gamma -\gamma(\cdot + 1)} \\
be^{\gamma -\gamma(\cdot + 1)} & - a e^{2\gamma}\\
\end{pmatrix}.
\end{align*}
It remains to compute $(S \sigma_2)(\sigma_2 C_1)(S\sigma_2);$
\[
(S \sigma_2) (\sigma_2 C_1) (S\sigma_2)
=
\begin{pmatrix}
0 & -iL \\
iL^{-1} & 0 \\
\end{pmatrix}
\begin{pmatrix}
\sin \theta_1 & -i\cos \theta_1 \\
i\cos \theta_1 & -\sin \theta_1 \\
\end{pmatrix}
\begin{pmatrix}
0 & -iL \\
iL^{-1} & 0 \\
\end{pmatrix}
= \varGamma_2.
\]
If we let $\varGamma_{\textrm{mko}} := \eta \varGamma_2 \eta^*,$ then $\eta^* \varGamma_{\textrm{mko}} \eta = \varGamma_2.$ The claim follows.
\end{proof}
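For constant $\theta_1$ the final identity $(S \sigma_2)(\sigma_2 C_1)(S\sigma_2) = \varGamma_2$ can be double-checked on the Fourier side, where $L^{\pm 1}$ acts as multiplication by $z^{\pm 1}$ for $z \in \mathbb{T}.$ The following sanity check assumes position-independent coefficients, so that all matrix entries become scalars:

```python
import numpy as np

theta1 = 0.3                          # a constant value of theta_1
z = np.exp(0.9j)                      # an arbitrary point of T

S_sigma2 = np.array([[0, -1j * z],
                     [1j / z, 0]])
sigma2_C1 = np.array([[np.sin(theta1), -1j * np.cos(theta1)],
                      [1j * np.cos(theta1), -np.sin(theta1)]])
# symbol of Gamma_2: p = -sin(theta1), q = -i cos(theta1), m = 2 in (A1)
Gamma2 = np.array([[-np.sin(theta1), -1j * np.cos(theta1) * z**2],
                   [1j * np.cos(theta1) * z**-2, np.sin(theta1)]])

assert np.allclose(S_sigma2 @ sigma2_C1 @ S_sigma2, Gamma2)
```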
\subsection{Classification of the topological invariants (\texorpdfstring{\cref{maintheorem: generalised mko}}{Theorem C})}
\subsubsection{Strictly local operators}
To prove \cref{maintheorem: generalised mko}, let us first introduce one preliminary concept beforehand. With the obvious orthogonal decomposition $\ell^2(\mathbb{Z}, \mathbb{C}^n) = \bigoplus_{j=1}^n \ell^2(\mathbb{Z}, \mathbb{C})$ in mind, we shall consider an operator of the form
\begin{align}
\label{equation2: characterisation of strict locality}
A
=
\begin{pmatrix}
\sum^k_{y=-k} a_{11}(y, \cdot) L^{y} & \dots & \sum^k_{y=-k} a_{1n}(y, \cdot) L^{y} \\
\vdots & \ddots& \vdots \\
\sum^k_{y=-k} a_{n1}(y, \cdot) L^{y} & \dots & \sum^k_{y=-k} a_{nn}(y, \cdot) L^{y} \\
\end{pmatrix},
\end{align}
where $k$ is a finite natural number, and where each $a_{ij}(y, \cdot) = (a_{ij}(y, x))_{x \in \mathbb{Z}}$ is an arbitrary bounded $\mathbb{C}$-valued sequence viewed as a multiplication operator on $\ell^2(\mathbb{Z}, \mathbb{C}) = \bigoplus_{x \in \mathbb{Z}} \mathbb{C}.$ An operator of the form \cref{equation2: characterisation of strict locality} will be referred to as a (one-dimensional) \textbi{strictly local operator} following \cite[\textsection 1.2]{Cedzich-Geib-Stahl-Velazquez-Werner-Werner-2018}.
\begin{theorem}[{\cite[Theorem A]{Tanaka-2020}}]
\label{theorem: topological invariants of strictly local operators}
Let $A$ be a strictly local operator of the form \cref{equation2: characterisation of strict locality} with the property that the following two-sided limits exist:
\begin{equation}
\label{equation: two-phase assumptions}
a_{ij}(y, \pm \infty) := \lim_{x \to \pm \infty} a_{ij}(y,x) \in \mathbb{C}, \qquad i,j = 1, \dots, n ,\ -k \leq y \leq k.
\end{equation}
Let
\begin{align}
\label{equation: definition of Apm}
A(\pm \infty)
&:=
\begin{pmatrix}
\sum^k_{y=-k} a_{11}(y, \pm \infty) L^{y} & \dots & \sum^k_{y=-k} a_{1n}(y, \pm \infty) L^{y} \\
\vdots & \ddots& \vdots \\
\sum^k_{y=-k} a_{n1}(y, \pm \infty) L^{y} & \dots & \sum^k_{y=-k} a_{nn}(y, \pm \infty) L^{y} \\
\end{pmatrix}, \\
\label{equation: definition of Fourier transform of A}
\hat{A}(z,\pm \infty)
&:=
\begin{pmatrix}
\sum^k_{y=-k} a_{11}(y, \pm \infty) z^{y} & \dots & \sum^k_{y=-k} a_{1n}(y, \pm \infty) z^{y} \\
\vdots & \ddots& \vdots \\
\sum^k_{y=-k} a_{n1}(y, \pm \infty) z^{y} & \dots & \sum^k_{y=-k} a_{nn}(y, \pm \infty) z^{y} \\
\end{pmatrix}, \qquad z \in \mathbb{T}.
\end{align}
Then the following assertions hold true:
\begin{enumerate}
\item We have that $A$ is Fredholm if and only if $\mathbb{T} \ni z \longmapsto \det \hat{A}(z,\star) \in \mathbb{C}$ is nowhere vanishing on $\mathbb{T}$ for each $\star = \pm \infty.$ In this case, the Fredholm index of $A$ is given by
\begin{equation}
\label{equation: bulk-edge correspondence}
\mathrm{ind}\,(A) = \mathrm{wn} \left(\det \hat{A}(\cdot,+\infty) \right) - \mathrm{wn} \left(\det \hat{A}(\cdot, -\infty) \right),
\end{equation}
where $\mathrm{wn} \left(\det \hat{A}(\cdot, \star) \right)$ denotes the winding number of the continuous function $\mathbb{T} \ni z \longmapsto \det \hat{A}(z,\star) \in \mathbb{C}$ with respect to the origin.
\item The essential spectrum of $A$ is given by
\begin{align*}
&\sigma_{\mathrm{ess}}(A) = \sigma_{\mathrm{ess}}(A(- \infty)) \cup \sigma_{\mathrm{ess}}(A(+ \infty)), \\
&\sigma_{\mathrm{ess}}(A(\star)) = \bigcup_{z \in \mathbb{T}} \sigma(\hat{A}(z,\star)), \qquad \star = \pm \infty.
\end{align*}
\end{enumerate}
\end{theorem}
\cref{theorem: topological invariants of strictly local operators} can be viewed as an abstract form of the
one-dimensional bulk-boundary correspondence \cite[Corollary 4.3]{Cedzich-Geib-Grunbaum-Stahl-Velazquez-Werner-Werner-2018} (see \cite[\textsection 2]{Tanaka-2020} for details).
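The winding numbers appearing in \cref{equation: bulk-edge correspondence} can be evaluated numerically by sampling the symbol on $\mathbb{T}$ and accumulating phase increments. The following sketch applies this to illustrative scalar symbols rather than to a symbol from the paper:

```python
import numpy as np

def winding_number(f, samples=4096):
    """Winding number of a nowhere-vanishing continuous f: T -> C
    around the origin, by accumulating phase increments."""
    z = np.exp(2j * np.pi * np.arange(samples) / samples)
    w = f(z)
    increments = np.angle(w / np.roll(w, 1))   # each increment in (-pi, pi]
    return int(round(increments.sum() / (2 * np.pi)))

# det of the symbol of the bilateral shift L is z itself: winding number 1
assert winding_number(lambda z: z) == 1
# a symbol with two zeros inside the unit disc: winding number 2
assert winding_number(lambda z: (z - 0.3) * (z + 0.2j)) == 2
```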
\subsubsection{Proof of \texorpdfstring{\cref{maintheorem: generalised mko}~(i)}{Theorem C (i)}}
\label{section: proof of the index formula}
\begin{notation}
We shall make use of the notation introduced in \cref{definition: Um}. For notational simplicity, we also use the following abbreviations throughout \cref{section: proof of the index formula};
\[
(\varGamma,U) := (\varGamma_m,U_m), \qquad
C :=
\begin{pmatrix}
\alpha_1 & \beta^* \\
\beta & \alpha_2
\end{pmatrix} :=
\begin{pmatrix}
e^{-2 \gamma(\cdot + 1)}a & e^{\gamma - \gamma(\cdot + 1)}b^* \\
e^{\gamma - \gamma(\cdot + 1)}b & -e^{2 \gamma}a
\end{pmatrix}.
\]
With the above notation, the operator $U$ can be written as $U = \varGamma C.$
\end{notation}
In order to compute $\mathrm{ind}\,(\varGamma, U)$ we shall closely follow \cite[\textsection 3.2]{Tanaka-2020}. Note first that the underlying Hilbert space $\ell^2(\mathbb{Z},\mathbb{C}^2)$ admits the following two orthogonal decompositions:
\[
\ell^2(\mathbb{Z},\mathbb{C}^2)
= \ker(\varGamma - 1) \oplus \ker(\varGamma + 1)
= \ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z}),
\]
where $\ker(\varGamma \mp 1) \neq \ell^2(\mathbb{Z}).$ On the one hand, the imaginary part $Q$ of $U$ admits an off-diagonal block-operator matrix representation with respect to the former decomposition as in the second equality of \cref{equation: representation of R and Q}, where the Fredholm index of $Q_0 : \ker(\varGamma - 1) \to \ker(\varGamma + 1)$ is by definition $\mathrm{ind}\,(\varGamma, U).$ On the other hand, the same operator $Q$ can\textit{not} in general be expressed as an off-diagonal block-operator matrix with respect to the latter decomposition. The unitary invariance of the Witten index (\cref{lemma: unitary invariance of the witten index}) motivates us to construct a unitary operator $\epsilon$ on $\ell^2(\mathbb{Z},\mathbb{C}^2),$ in such a way that the imaginary part $\epsilon^* Q \epsilon$ of the new chiral pair $(\epsilon^* \varGamma \epsilon, \epsilon^* U \epsilon)$ becomes off-diagonal with respect to $\ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z}).$
\begin{lemma}
\label{lemma: wada decomposition}
Let $R,Q$ be the real and imaginary parts of $U$ respectively. For each $x \in \mathbb{Z},$ let $\theta(x)$ be any real number satisfying $q(x) = |q(x)|e^{i \theta(x)},$ and let $p_\pm(x) := \sqrt{1 \pm p(x)}.$ Let
\begin{align}
\label{equation1: definition of Qepsilon}
-2i Q_{\epsilon_0} &:= p_+ e^{i \theta}L^{m} \beta p_+ - p_- \beta^* L^{-m} e^{-i \theta}p_- - |q|(\alpha_1 - \alpha_2(\cdot + m)), \\
\label{equation1: definition of Repsilon}
2R_{\epsilon_1} &:= p_- e^{i \theta}L^{m} \beta p_+ + p_+ \beta^* L^{-m} e^{-i \theta} p_- + p_+^2 \alpha_1 + p_-^2\alpha_2(\cdot + m), \\
\label{equation2: definition of Repsilon}
2R_{\epsilon_2} &:= p_+ e^{i \theta} L^{m} \beta p_- + p_- \beta^* L^{-m} e^{-i \theta} p_+ - p_-^2\alpha_1 - p_+^2 \alpha_2(\cdot + m).
\end{align}
Then there exists a unitary operator $\epsilon$ on $\ell^2(\mathbb{Z},\mathbb{C}^2),$ such that the following block-operator matrix representations hold true with respect to $\ell^2(\mathbb{Z},\mathbb{C}^2) = \ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z}):$
\begin{align*}
&\epsilon^* \varGamma \epsilon
=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix},
&&\epsilon^* U \epsilon
=
\begin{pmatrix}
R_{\epsilon_1} & iQ_{\epsilon_0}^* \\
iQ_{\epsilon_0} & R_{\epsilon_2}
\end{pmatrix},
&&\epsilon^* R \epsilon
=
\begin{pmatrix}
R_{\epsilon_1} & 0 \\
0 & R_{\epsilon_2}
\end{pmatrix},
&&\epsilon^* Q \epsilon
=
\begin{pmatrix}
0 & Q_{\epsilon_0}^* \\
Q_{\epsilon_0} & 0
\end{pmatrix}.
\end{align*}
Moreover, the chiral pair $(\varGamma, U)$ is Fredholm if and only if $Q_{\epsilon_0}$ is Fredholm. In this case,
\begin{equation}
\label{equation: first index formula}
\mathrm{ind}\,(\varGamma, U) = \mathrm{ind}\, Q_{\epsilon_0}.
\end{equation}
\end{lemma}
As we shall see below, the derivation of the index formula \cref{equation: first index formula} requires only the boundedness of the given sequences $\gamma, p, a, q, b,$ and so assumption \cref{equation: limits of sequences} is redundant for this purpose. Note, however, that \cref{equation: limits of sequences} is necessary to prove the index formula \cref{equation: Witten index formula}.
\begin{proof}
Note first that $\varGamma$ can be written as
\[
\varGamma
=
\begin{pmatrix}
p & qL^{m} \\
L^{-m}q^* & -p(\cdot - m)
\end{pmatrix}
=
\begin{pmatrix}
1 & 0\\
0 & L^{-m}
\end{pmatrix}
\begin{pmatrix}
p & q \\
q^* & -p
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & L^{m}
\end{pmatrix},
\]
where the middle matrix on the right hand side of the second equality admits the following diagonalisation. For each $x \in \mathbb{Z}$ we have
\begin{equation}
\label{equation: diagonalisation of unitary involutory matrices}
\epsilon_0(x)^*
\begin{pmatrix}
p(x) & q(x) \\
q(x)^* & -p(x)
\end{pmatrix}
\epsilon_0(x)
=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}, \quad
\epsilon_0(x)
:=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 0 \\
0 & e^{-i \theta(x)}
\end{pmatrix}
\begin{pmatrix}
p_+(x) & -p_-(x) \\
p_-(x) & p_+(x)
\end{pmatrix}.
\end{equation}
Since $\epsilon_0 := \bigoplus_{x \in \mathbb{Z}} \epsilon_0(x)$ is unitary, the following operator is also unitary;
\[
\epsilon :=
\begin{pmatrix}
1 & 0 \\
0 & L^{-m}
\end{pmatrix}
\epsilon_0
=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 0 \\
0 & L^{-m}e^{-i \theta}
\end{pmatrix}
\begin{pmatrix}
p_+ & -p_- \\
p_- & p_+
\end{pmatrix}.
\]
It follows from the first equality that
\begin{align*}
\epsilon^* \varGamma \epsilon
&=
\epsilon_0^*
\begin{pmatrix}
1 & 0 \\
0 & L^{m}
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & L^{-m}
\end{pmatrix}
\begin{pmatrix}
p & q \\
q^* & -p
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & L^{m}
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & L^{-m}
\end{pmatrix}
\epsilon_0 \\
&=
\epsilon_0^*
\begin{pmatrix}
p & q \\
q^* & -p
\end{pmatrix}
\epsilon_0 =
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix},
\end{align*}
where the last equality follows from \cref{equation: diagonalisation of unitary involutory matrices}.
Given a bounded operator $X$ on $\ell^2(\mathbb{Z},\mathbb{C}^2),$ we introduce the shorthand $X_\epsilon := \epsilon^* X \epsilon.$ With this convention in mind, we have $[\varGamma _\epsilon, R_\epsilon] = 0 = \{\varGamma _\epsilon, Q_\epsilon\},$ where $\varGamma _\epsilon = 1 \oplus (-1)$ with respect to $\ell^2(\mathbb{Z},\mathbb{C}^2) = \ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z}).$ It follows that we have the following representations:
\begin{align}
\label{equation: epsilon representation}
R_\epsilon
&=
\begin{pmatrix}
R'_{\epsilon_1} & 0 \\
0 & R'_{\epsilon_2}
\end{pmatrix},
&
Q_\epsilon
&=
\begin{pmatrix}
0 & (Q'_{\epsilon_0})^* \\
Q'_{\epsilon_0} & 0
\end{pmatrix},
&
U_\epsilon
&= R_\epsilon + iQ_\epsilon =
\begin{pmatrix}
R'_{\epsilon_1} & i(Q'_{\epsilon_0})^* \\
iQ'_{\epsilon_0} & R'_{\epsilon_2}
\end{pmatrix}.
\end{align}
It remains to show that the three operators $Q'_{\epsilon_0},R'_{\epsilon_1},R'_{\epsilon_2}$ introduced above coincide with the ones defined by the formulas \crefrange{equation1: definition of Qepsilon}{equation2: definition of Repsilon}. Note that
\begin{align}
\label{equation1: Cepsilon}
2C_\epsilon
&=
\varGamma _\epsilon (2U_\epsilon)
=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
2R'_{\epsilon_1} & 2i(Q'_{\epsilon_0})^* \\
2iQ'_{\epsilon_0} & 2R'_{\epsilon_2} \\
\end{pmatrix}
=
\begin{pmatrix}
2R'_{\epsilon_1} & 2i(Q'_{\epsilon_0})^* \\
-2iQ'_{\epsilon_0} & -2R'_{\epsilon_2} \\
\end{pmatrix}.
\end{align}
To this end, we compute $2C_\epsilon.$ We have
\begin{align*}
2\epsilon^*
\begin{pmatrix}
\alpha_1 & 0 \\
0 & \alpha_2 \\
\end{pmatrix}
\epsilon
&=
\begin{pmatrix}
p_+^2 \alpha_1 + p_-^2 \alpha_2(\cdot + m) & -|q|(\alpha_1 - \alpha_2(\cdot + m)) \\
-|q|(\alpha_1 - \alpha_2(\cdot + m)) & p_-^2 \alpha_1 + p_+^2\alpha_2(\cdot + m)
\end{pmatrix}, \\
2\epsilon^*
\begin{pmatrix}
0 & \beta^* \\
\beta & 0 \\
\end{pmatrix}
\epsilon
&=
\begin{pmatrix}
p_- e^{i \theta} L^{m} \beta p_+ + p_+ \beta^* L^{-m}e^{-i \theta} p_- & -p_- e^{i \theta}L^{m} \beta p_- + p_+ \beta^* L^{-m}e^{-i \theta} p_+ \\
p_+ e^{i \theta}L^{m} \beta p_+ - p_- \beta^* L^{-m}e^{-i \theta} p_- &
-p_+ e^{i \theta}L^{m} \beta p_- - p_- \beta^* L^{-m} e^{-i \theta}p_+
\end{pmatrix}.
\end{align*}
It follows from the above two equalities that
\begin{equation}
\label{equation2: Cepsilon}
2C_\epsilon =
2\epsilon^*
\begin{pmatrix}
\alpha_1 & 0 \\
0 & \alpha_2 \\
\end{pmatrix}
\epsilon +
2\epsilon^*
\begin{pmatrix}
0 & \beta^* \\
\beta & 0 \\
\end{pmatrix}
\epsilon
=
\begin{pmatrix}
2R_{\epsilon_1} & 2i Q_{\epsilon_0}^* \\
-2i Q_{\epsilon_0} & -2R_{\epsilon_2}
\end{pmatrix}.
\end{equation}
By comparing \cref{equation1: Cepsilon} with \cref{equation2: Cepsilon}, we see that \cref{equation: epsilon representation} also holds true without the superscript $'$.
Note that $\ell^2(\mathbb{Z},\mathbb{C}^2) = \ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z})$ can be identified with the orthogonal sum $\ell^2(\mathbb{Z}) \oplus \{0\} \oplus \{0\} \oplus \ell^2(\mathbb{Z})$ through the following unitary transform;
\[
\ell^2(\mathbb{Z},\mathbb{C}^2) \ni (\Psi_1, \Psi_2) \longmapsto (\Psi_1, 0,0,\Psi_2) \in \ell^2(\mathbb{Z}) \oplus \{0\} \oplus \{0\} \oplus \ell^2(\mathbb{Z}).
\]
It is then easy to show that the operator $Q_\epsilon$ admits the following block-operator matrix representations:
\begin{equation}
\label{equation1: representation of Qepsilon}
Q_\epsilon
=
\begin{pmatrix}
0 & Q_{\epsilon_0}^* \\
Q_{\epsilon_0} & 0
\end{pmatrix}_{\ell^2(\mathbb{Z}) \oplus \ell^2(\mathbb{Z})}
=
\begingroup
\setlength\arraycolsep{5pt}
\begin{pmatrix}
0 & 0 & 0 & Q_{\epsilon_0} \\
0 & 0 & \textbf{0} & 0 \\
0 & \textbf{0} & 0 & 0 \\
Q_{\epsilon_0} & 0 & 0 & 0
\end{pmatrix}_{\ell^2(\mathbb{Z}) \oplus \{0\} \oplus \{0\} \oplus \ell^2(\mathbb{Z})}
\endgroup,
\end{equation}
where $\textbf{0}$ denotes the zero operator of the form $\textbf{0} : \{0\} \to \{0\},$ and where $\ell^2(\mathbb{Z}) \oplus \{0\} = \ker(\varGamma _\epsilon - 1)$ and $\{0\} \oplus \ell^2(\mathbb{Z}) = \ker(\varGamma _\epsilon + 1).$ On the other hand, the imaginary part $Q_\epsilon$ associated with $(\varGamma _\epsilon, U_\epsilon)$ admits the following off-diagonal block-operator matrix representation with respect to $\ell^2(\mathbb{Z},\mathbb{C}^2) = \ker(\varGamma _\epsilon - 1) \oplus \ker(\varGamma _\epsilon + 1)$ as in \cref{equation: representation of R and Q};
\begin{equation}
\label{equation2: representation of Qepsilon}
Q_\epsilon =
\begin{pmatrix}
0 & (Q''_{\epsilon_0})^* \\
Q''_{\epsilon_0} & 0
\end{pmatrix}_{\ker(\varGamma _\epsilon - 1) \oplus \ker(\varGamma _\epsilon + 1)}
=
\begin{pmatrix}
0 & (Q''_{\epsilon_0})^* \\
Q''_{\epsilon_0} & 0
\end{pmatrix}_{(\ell^2(\mathbb{Z}) \oplus \{0\}) \oplus (\{0\} \oplus \ell^2(\mathbb{Z}))}.
\end{equation}
It follows from \crefrange{equation1: representation of Qepsilon}{equation2: representation of Qepsilon} that $Q''_{\epsilon_0}$ is an off-diagonal block-operator matrix of the form;
\[
Q''_{\epsilon_0} =
\begingroup
\setlength\arraycolsep{4pt}
\begin{pmatrix}
0 & \textbf{0} \\
Q_{\epsilon_0} & 0
\end{pmatrix}.
\endgroup
\]
Since $\textbf{0}$ is a Fredholm operator of zero index, we have that $Q''_{\epsilon_0}$ is Fredholm if and only if $Q_{\epsilon_0}$ is Fredholm. In this case, we have $\mathrm{ind}\, Q''_{\epsilon_0} = \mathrm{ind}\, Q_{\epsilon_0} + \mathrm{ind}\, \textbf{0} = \mathrm{ind}\, Q_{\epsilon_0} + 0 = \mathrm{ind}\, Q_{\epsilon_0}.$ The claim follows from \cref{lemma: unitary invariance of the witten index}.
\end{proof}
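As a quick sanity check (not needed for the proof), the pointwise diagonalisation \cref{equation: diagonalisation of unitary involutory matrices} can be verified numerically. In the following Python sketch, the values of $p$ and $\theta$ are arbitrary samples subject to $p^2 + |q|^2 = 1,$ and \texttt{numpy} is assumed.

```python
import numpy as np

# Arbitrary sample point with p**2 + abs(q)**2 == 1 (hypothetical values).
p, theta = 0.6, 0.7
q = np.sqrt(1.0 - p**2) * np.exp(1j * theta)
p_plus, p_minus = np.sqrt(1.0 + p), np.sqrt(1.0 - p)

# The unitary involution to be diagonalised.
involution = np.array([[p, q], [np.conj(q), -p]])

# epsilon_0(x) as in the displayed formula.
eps0 = (1.0 / np.sqrt(2.0)) * (
    np.array([[1.0, 0.0], [0.0, np.exp(-1j * theta)]])
    @ np.array([[p_plus, -p_minus], [p_minus, p_plus]])
)

assert np.allclose(eps0.conj().T @ eps0, np.eye(2))  # unitarity
assert np.allclose(eps0.conj().T @ involution @ eps0,
                   np.diag([1.0, -1.0]))             # diagonalisation
```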
It remains to compute the Fredholm index of the strictly local operator $Q_{\epsilon_0}$ given by \cref{equation1: definition of Qepsilon}, where $\theta = (\theta(x))_{x \in \mathbb{Z}}$ can be \textit{any} $\mathbb{R}$-valued sequence satisfying $q(x) = |q(x)|e^{i \theta(x)}$ for each $x \in \mathbb{Z}.$ Note that \cref{theorem: topological invariants of strictly local operators}~(i) is not immediately applicable to this operator $Q_{\epsilon_0},$ since $\theta$ need not be convergent. More precisely, for each $\star = \pm \infty,$ if $q(\star) \neq 0,$ then we can explicitly construct $\theta$ in such a way that $\theta(\star) = \lim_{x \to \star} \theta(x)$ holds true. On the other hand, if $q(\star) = 0,$ then the same conclusion cannot be drawn in general. To overcome this difficulty, we shall closely follow \cite[Lemma 3.4]{Tanaka-2020};
\begin{lemma}
There exist two $\mathbb{R}$-valued sequences $\theta_+ = (\theta_+(x))_{x \in \mathbb{Z}}, \theta_- = (\theta_-(x))_{x \in \mathbb{Z}},$ such that
\begin{equation}
\label{equation: modified Qepsilon}
\begin{aligned}
e^{-i \theta_+}(-2i Q_{\epsilon_0})e^{i \theta_-}
= \quad &p_+ p_+(\cdot + m) \beta(\cdot + m)e^{i(\theta - \theta_+ + \theta_-(\cdot + m))}L^{m} \\
- &p_- p_-(\cdot - m) \beta^* e^{-i (\theta(\cdot - m) -\theta_-(\cdot - m) + \theta_+)}L^{-m} \\
- &|q|(\alpha_1 - \alpha_2(\cdot + m))e^{i(\theta_- - \theta_+)},
\end{aligned}
\end{equation}
where the three coefficients of the above strictly local operator have the following limits for each $\star = \pm \infty:$
\begin{align}
\label{equation1: new phase}
&\lim_{x \to \star} \left( p_+(x) p_+(x + m) \beta(x + m) e^{i (\theta(x) - \theta_+(x) + \theta_-(x + m))} \right) =
(1 + p(\star)) b(\star)e^{i \theta(\star)}, \\
\label{equation2: new phase}
&\lim_{x \to \star} \left(p_-(x) p_-(x - m) \beta(x)^* e^{-i (\theta(x - m) - \theta_-(x - m) + \theta_+(x))}\right) =
(1 - p(\star)) b(\star)^* e^{-i \theta(\star)}, \\
\label{equation3: new phase}
&\lim_{x \to \star}
\left(|q(x)|(\alpha_1(x) - \alpha_2(x + m))e^{i (\theta_-(x) - \theta_+(x))}\right)
= 2|q(\star)| a(\star) \cosh(2 \gamma(\star)).
\end{align}
\end{lemma}
\begin{proof}
For each $x \in \mathbb{Z}$ we let
\[
\star(x) :=
\begin{cases}
+\infty, & x \geq 0, \\
-\infty, & x < 0,
\end{cases}
\qquad
\theta_{\pm}(x) :=
\begin{cases}
\theta(x), & p(\star(x)) = \pm 1, \\
0, & p(\star(x)) \neq \pm 1.
\end{cases}
\]
Note that \cref{equation: modified Qepsilon} immediately follows from \cref{equation1: definition of Qepsilon}. We let
\[
\Theta_1 := \theta - \theta_+ + \theta_-(\cdot + m), \quad
\Theta_2 := \theta(\cdot - m) - \theta_-(\cdot - m) + \theta_+, \quad
\Theta_3 := \theta_- - \theta_+.
\]
It suffices to prove the following equalities:
\begin{align}
\label{equation4: new phase}
&\lim_{x \to \star} \left(p_+(x) p_+(x + m) e^{i \Theta_1(x)} \right) =
(1 + p(\star)) e^{i \theta(\star)}, \\
\label{equation5: new phase}
&\lim_{x \to \star} \left(p_-(x) p_-(x - m) e^{-i\Theta_2(x)}\right) =
(1 - p(\star)) e^{-i \theta(\star)}, \\
\label{equation6: new phase}
&\lim_{x \to \star}
\left(|q(x)|e^{i \Theta_3(x)}\right) = |q(\star)|.
\end{align}
Let $\star = \pm \infty,$ and let $x$ be any integer satisfying $|x| > |m|.$ If $|p(\star)| < 1,$ then $\theta_+(x) = \theta_-(x) = 0.$ In this case, \crefrange{equation4: new phase}{equation6: new phase} follow from the fact that as $x \to \star$ we have $\Theta_j(x) \to \theta(\star)$ for each $j = 1,2,$ and $\Theta_3(x) \to 0.$ On the other hand, if $|p(\star)| = 1,$ then $q(\star) = 0,$ and so \cref{equation6: new phase} becomes trivial. We need to check the following cases separately: $p(\star) = -1$ and $p(\star) = +1.$ If $p(\star) = -1,$ then \cref{equation4: new phase} holds trivially, and \cref{equation5: new phase} follows from $\theta_-(x - m) = \theta(x - m)$ and $\theta_+(x) = 0 = \theta(\star),$ where the last equality follows from \cref{equation: limits of theta and theta prime}. Similarly, if $p(\star) = +1,$ then \cref{equation5: new phase} holds trivially, and \cref{equation4: new phase} follows from $\theta_+(x) = \theta(x)$ and $\theta_-(x+m) = 0 = \theta(\star).$
\end{proof}
Since the Fredholm index is invariant under multiplication by invertible operators,
\[
\mathrm{ind}\,(e^{-i \theta_+} Q_{\epsilon_0} e^{i \theta_-}) = \mathrm{ind}\, Q_{\epsilon_0} = \mathrm{ind}\,(\varGamma, U).
\]
We are now in a position to apply \cref{theorem: topological invariants of strictly local operators}~(i) to $A := e^{-i \theta_+} Q_{\epsilon_0} e^{i \theta_-}.$ Since the two-sided limits of the coefficients of $-2iA$ are given respectively by \crefrange{equation1: new phase}{equation3: new phase}, we introduce the following notation according to \cref{equation: definition of Fourier transform of A};
\begin{align}
\label{equation2: definition of matsuzawa function}
c(\star) &:= |q(\star)| a(\star) \cosh(2 \gamma(\star)), \\
\label{equation1: definition of matsuzawa function}
-2if(z,\star) &:=
(p(\star) + 1) b(\star) e^{i \theta(\star)} z^{m} + (p(\star) - 1) b(\star)^* e^{-i \theta(\star)} z^{-m} -2c(\star),
\end{align}
where $\star = \pm \infty$ and $z \in \mathbb{T}.$ It follows from \cref{theorem: topological invariants of strictly local operators}~(i) that $A = e^{-i \theta_+} Q_{\epsilon_0} e^{i \theta_-}$ is Fredholm if and only if $f(\cdot,\star)$ is nowhere vanishing on $\mathbb{T}$ for each $\star = \pm \infty.$ In this case, we have
\begin{equation}
\label{equation: witten index expressed as the difference of winding numbers}
\mathrm{ind}\,(\varGamma, U) = \mathrm{ind}\, A = \mathrm{wn}(f(\cdot, + \infty)) - \mathrm{wn}(f(\cdot, - \infty)),
\end{equation}
where the last equality is a special case of \cref{equation: bulk-edge correspondence}. It remains to compute the winding number of $f(\cdot, \star).$
\begin{lemma}
\label{lemma: matsuzawa function is an ellipse}
Let $\varGamma ,C$ be as in \cref{maintheorem: generalised mko}, and let $\star = \pm \infty.$ Let $f(\cdot, \star)$ be defined by \crefrange{equation1: definition of matsuzawa function}{equation2: definition of matsuzawa function}, and let $p_\gamma(\star)$ be defined by \cref{equation: definition of pgamma}. Then the image of $\mathbb{T} \ni z \longmapsto f(z, \star) \in \mathbb{C}$ does not contain the origin if and only if $|p_\gamma(\star)| \neq |a(\star)|.$ In this case, we have
\begin{equation}
\label{equation1: winding number of matsuzawa function}
\mathrm{wn}(f(\cdot, \star)) =
\begin{cases}
m \cdot \mathrm{sgn}\, p(\star), & |p_\gamma(\star)| > |a(\star)|, \\
0, & |p_\gamma(\star)| < |a(\star)|. \\
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Let us first prove that the image of $\mathbb{T} \ni z \longmapsto f(z, \star) \in \mathbb{C}$ does not contain the origin if and only if $|p(\star) b(\star)| \neq |c(\star)|,$ and
\begin{equation}
\label{equation2: winding number of matsuzawa function}
\mathrm{wn}(f(\cdot, \star)) =
\begin{cases}
m \cdot \mathrm{sgn}\, p(\star), & |p(\star) b(\star)| > |c(\star)|, \\
0, & |p(\star) b(\star)| < |c(\star)|. \\
\end{cases}
\end{equation}
Let us consider the following function on $\mathbb{R};$
\begin{align*}
2F(s)
&:= (|p(\star)b(\star)| + |b(\star)|) e^{is} + (|p(\star)b(\star)| - |b(\star)|)e^{-i s} \\
&= 2 |p(\star)b(\star)| \cos s + i 2|b(\star)| \sin s, \qquad s \in \mathbb{R}.
\end{align*}
Since $p(\star) = \mathrm{sgn}\, p(\star)|p(\star)|$ and $b(\star) = e^{i \theta'(\star)}|b(\star)|,$ for each $t \in [0,2\pi]$ we have
\begin{align*}
-2if(e^{i t}, \star) + 2c(\star)
&= (p(\star) + 1) b(\star) e^{i \theta(\star)} e^{i mt} + (p(\star) - 1) b(\star)^* e^{-i \theta(\star)} e^{-i mt} \\
&= \mathrm{sgn}\, p(\star) \cdot 2F(\mathrm{sgn}\, p(\star)(\theta(\star) + \theta'(\star) + mt)).
\end{align*}
It follows that $-if(e^{i t}, \star) = \mathrm{sgn}\, p(\star) \cdot F(\mathrm{sgn}\, p(\star)(\theta(\star) + \theta'(\star) + mt)) -c(\star)$ for each $t \in [0,2\pi],$ where the constant $-i$ does not play any significant role in this proof. If $p(\star)b(\star) = 0,$ then the image of the function $[0,2\pi] \ni t \longmapsto -if(e^{i t},\star) \in \mathbb{C}$ coincides with that of the vertical line segment $[-1,1] \ni t \longmapsto -c(\star) + i t|b(\star)| \in \mathbb{C}$ passing through $-c(\star).$ That is, the image of $f(\cdot, \star)$ does not contain the origin if and only if $|c(\star)| \neq 0 = |p(\star)b(\star)|,$ and in this case $\mathrm{wn}(f(\cdot, \star)) = 0.$ This is a special case of \cref{equation2: winding number of matsuzawa function}.
On the other hand, if $p(\star)b(\star) \neq 0,$ then the image of the curve $[0,2\pi] \ni t \longmapsto -if(e^{i t},\star) \in \mathbb{C}$ is the ellipse in \cref{figure: ellipse} with $m \cdot \mathrm{sgn}\, p(\star)$ being its winding number with respect to the center $-c(\star)$ on the real axis;
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[axis y line=none,ticks=none, xmin= -8, xmax=8, ymin=-2, ymax=2, legend pos = north west, axis lines=center, xlabel=$\Re$, xlabel style={anchor = west}
, width = \textwidth, height = 0.5\textwidth
]
\addplot [domain=-2*pi:2*pi,samples=50, smooth]({3*cos(deg(x))},{sin(deg(x))});
\addplot [mark=none,forget plot, dashed] coordinates {(3, -1.5) (3, 1.5)};
\addplot [mark=none,forget plot, dashed] coordinates {(-3, -1.5) (-3, 1.5)};
\addplot [mark=none,forget plot, dashed] coordinates {(0, -1.5) (0, 1.5)};
\draw [fill] (-3,0) circle (1.5 pt) node [anchor = north east] {$-c(\star) - |p(\star)b(\star)|$};
\draw [fill] (3,0) circle (1.5 pt) node [anchor = north west] {$-c(\star) + |p(\star)b(\star)|$};
\draw [fill] (0,0) circle (1.5 pt) node [anchor = north west] {$-c(\star)$};
\end{axis}
\end{tikzpicture}
\caption{The above figure shows the image of the curve $[0,2\pi] \ni t \longmapsto -if(e^{i t},\star) \in \mathbb{C}.$}
\label{figure: ellipse}
\end{figure}
If $|p(\star)b(\star)| > |c(\star)|,$ then the origin lies in the interior of the ellipse, and so $\mathrm{wn}(f(\cdot,\star)) = \mathrm{wn}(-if(\cdot,\star)) = m \cdot \mathrm{sgn}\, p(\star).$ If $|p(\star)b(\star)| < |c(\star)|,$ then the origin lies in the exterior of the ellipse, and so $\mathrm{wn}(f(\cdot,\star)) = 0.$ Finally, the curve $-if$ passes through the origin if and only if $|p(\star)b(\star)| = |c(\star)|.$
It remains to check that \cref{equation1: winding number of matsuzawa function} coincides with \cref{equation2: winding number of matsuzawa function}. Let the symbol $\lessgtr$ denote any one of $>, =, <$ simultaneously throughout the following chain of equivalences. Then
$|p(\star) b(\star)| \lessgtr |c(\star)|$ if and only if $p(\star)^2 (1-a(\star)^2) \lessgtr |q(\star)|^2 a(\star)^2 \cosh^2(2 \gamma(\star)),$ if and only if $p(\star)^2 \lessgtr a(\star)^2(p(\star)^2 + |q(\star)|^2\cosh^2(2 \gamma(\star))).$ Rearranging the last expression gives $|p_\gamma(\star)| \lessgtr |a(\star)|.$ The claim follows.
\end{proof}
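The above case analysis can be probed numerically. The following Python sketch (\texttt{numpy} assumed) evaluates $f(\cdot,\star)$ from \cref{equation1: definition of matsuzawa function} on the unit circle for hypothetical sample values of $p(\star), b(\star), \theta(\star), c(\star),$ and counts its winding by phase accumulation; the outcomes agree with \cref{equation2: winding number of matsuzawa function}.

```python
import numpy as np

def wn_f(p, b, theta, c, m, samples=4096):
    """Winding number of z -> f(z) around the origin via phase accumulation."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = np.exp(1j * t)
    # f solved from -2i f(z) = (p+1) b e^{i theta} z^m
    #                          + (p-1) conj(b) e^{-i theta} z^{-m} - 2c
    f = (0.5j) * ((p + 1) * b * np.exp(1j * theta) * z**m
                  + (p - 1) * np.conj(b) * np.exp(-1j * theta) * z**(-m)
                  - 2.0 * c)
    phases = np.angle(f)
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
    return int(round(dphi.sum() / (2.0 * np.pi)))

m, theta = 2, 0.3
assert wn_f(p=0.8, b=0.5, theta=theta, c=0.1, m=m) == m    # |pb| > |c|, sgn p = +1
assert wn_f(p=-0.8, b=0.5, theta=theta, c=0.1, m=m) == -m  # |pb| > |c|, sgn p = -1
assert wn_f(p=0.3, b=0.2, theta=theta, c=0.5, m=m) == 0    # |pb| < |c|
```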
\begin{proof}[Proof of \cref{maintheorem: generalised mko}~(i)]
The index formula \cref{equation: Witten index formula} immediately follows from \cref{equation: witten index expressed as the difference of winding numbers} and \cref{equation1: winding number of matsuzawa function}.
\end{proof}
It might be possible to give another proof for the index formula \cref{equation: Witten index formula} by making use of the recent developments of the scattering-theoretic techniques for discrete-time quantum walks \cite{Suzuki-2016,Richard-Suzuki-Tiedra-2017,Richard-Suzuki-Tiedra-2018,Maeda-Sasaki-Segawa-Suzuki-Suzuki-2018a,Morioka-2019,Wada-2020}. This possibility is briefly mentioned in \cite[\textsection 6]{Suzuki-Tanaka-2019}.
\subsubsection{Proof of \texorpdfstring{\cref{maintheorem: generalised mko}~(ii)}{Theorem C (ii)}}
\begin{proof}[Proof of \cref{maintheorem: generalised mko}~(ii)]
Note first that $U_m$ is a strictly local operator of the following form;
\[
U_m =
\begin{pmatrix}
p e^{-2 \gamma(\cdot + 1)} a + q L^m e^{\gamma - \gamma(\cdot + 1)} b & pe^{\gamma - \gamma(\cdot + 1)} b^* - q L^m e^{2 \gamma} a \\
L^{-m} q^* e^{-2 \gamma(\cdot + 1)} a - p(\cdot - m) e^{\gamma - \gamma(\cdot + 1)} b & L^{-m}q^* e^{\gamma - \gamma(\cdot + 1)} b^* + p(\cdot - m)e^{2 \gamma} a
\end{pmatrix}.
\]
It follows from \cref{theorem: topological invariants of strictly local operators}~(ii) that
\begin{align*}
&\sigma_{\mathrm{ess}}(U_m) = \sigma_{\mathrm{ess}}(U_m(- \infty)) \cup \sigma_{\mathrm{ess}}(U_m(+ \infty)), \\
&\sigma_{\mathrm{ess}}(U_m(\star)) = \bigcup_{z \in \mathbb{T}} \sigma\left(\hat{U}_m(z,\star)\right), \qquad \star = \pm \infty,
\end{align*}
where for each $\star = \pm \infty$ and each $z \in \mathbb{T}$ the $2 \times 2$ matrices $U_m(\star)$ and $\hat{U}_m(z,\star)$ are defined respectively by:
\begin{align*}
U_m(\star) &:=
\begin{pmatrix}
q(\star) b(\star)L^m + p(\star) a(\star) e^{-2 \gamma(\star)} & - (q(\star) a(\star) e^{2 \gamma(\star)} L^m - p(\star) b(\star)^*) \\
q(\star)^*a(\star) e^{-2 \gamma(\star)} L^{-m} - p(\star) b(\star) & q(\star)^* b(\star)^*L^{-m} + p(\star) a(\star)e^{2 \gamma(\star)}
\end{pmatrix}, \\
\hat{U}_m(z,\star)
&:=
\begin{pmatrix}
q(\star) b(\star)z^m + p(\star) a(\star) e^{-2 \gamma(\star)} & - (q(\star) a(\star) e^{2 \gamma(\star)} z^m - p(\star) b(\star)^*) \\
q(\star)^*a(\star) e^{-2 \gamma(\star)} z^{-m} - p(\star) b(\star) & q(\star)^* b(\star)^*z^{-m} + p(\star) a(\star)e^{2 \gamma(\star)}
\end{pmatrix}.
\end{align*}
Let $\star = \pm \infty$ be fixed. It remains to compute $\sigma'(\star) := \bigcup_{t \in [0,2\pi]} \sigma\left(\hat{U}_m(e^{it},\star)\right).$ We let
\[
\hat{U}_m(e^{it},\star) =:
\begin{pmatrix}
X_1(e^{it}) & -Y_1(e^{it}) \\
Y_2(e^{it}) & X_2(e^{it})
\end{pmatrix}, \qquad t \in [0,2\pi].
\]
We get the following characteristic equation;
\begin{equation}
\label{equation1: characteristic equation}
\det(\hat{U}_m(e^{it},\star) - \lambda) = \lambda^2 -(X_1(e^{it}) + X_2(e^{it}))\lambda + X_1(e^{it})X_2(e^{it}) + Y_1(e^{it})Y_2(e^{it}).
\end{equation}
Since the product $q(\star) b(\star)$ can be written as $q(\star) b(\star) = |q(\star) b(\star)|e^{i (\theta(\star) + \theta'(\star))}$ by \cref{equation: limits of theta and theta prime}, we obtain the following two equalities:
\begin{align*}
&X_1(e^{it}) + X_2(e^{it})
= 2|q(\star) b(\star)| \cos(\theta(\star) + \theta'(\star) + mt) + 2 p(\star) a(\star) \cosh(2\gamma(\star)), \\
&X_1(e^{it})X_2(e^{it}) + Y_1(e^{it})Y_2(e^{it}) = 1.
\end{align*}
Then the characteristic equation \cref{equation1: characteristic equation} becomes
\begin{equation}
\label{equation2: characteristic equation}
\lambda^2 - 2(p(\star) a(\star) \cosh(2\gamma(\star)) + |q(\star) b(\star)| \cos(\theta(\star) + \theta'(\star) + mt))\lambda + 1 = 0.
\end{equation}
This equation motivates us to introduce the following notation;
\begin{align*}
\Lambda(\star, s) &:= p(\star) a(\star)\cosh(2\gamma(\star)) + |q(\star) b(\star)| s, &&-1 \leq s \leq 1, \\
\lambda_{\pm}(\star,s) &:= \Lambda(\star,s) \pm \sqrt{\Lambda(\star,s)^2 - 1}, &&-1 \leq s \leq 1.
\end{align*}
Indeed, \cref{equation2: characteristic equation} becomes $\lambda^2 - 2 \Lambda(\star,\cos(\theta(\star) + \theta'(\star) + mt)) \lambda + 1 = 0$ with the above notation, and so $\sigma\left(\hat{U}_m(e^{it},\star)\right)$ is a finite set consisting only of $\lambda_{\pm}(\star,\cos(\theta(\star) + \theta'(\star) + mt))$ for each $t \in [0,2\pi].$ We have
\[
\sigma'(\star)
= \bigcup_{t \in [0,2\pi]} \sigma\left(\hat{U}_m(e^{it},\star)\right)
= \bigcup_{s \in [-1,1]} \{\lambda_{\pm}(\star,s)\}
= \bigcup_{s \in [-1,1]} \{\lambda_{+}(\star,s)^{\pm 1}\},
\]
where the second equality follows from the fact that $[0,2\pi] \ni t \longmapsto \cos(\theta(\star) + \theta'(\star) + mt) \in [-1,1]$ is surjective and the last equality follows from $\lambda_{+}(\star,s)\lambda_{-}(\star,s) = 1$ for each $s \in [-1,1].$ It follows that $\sigma'(\star)$ coincides with the set $\sigma(\star)$ given by \cref{equation: definition of sigma star}. Note first that $[\Lambda_-(\star),\Lambda_+(\star)] \subseteq [-1,\infty)$ follows from
\[
-1 \leq -|q(\star)b(\star)| \leq |p(\star)a(\star)|\cosh(2 \gamma(\star)) -|q(\star)b(\star)| = \Lambda_-(\star) \leq \Lambda_+(\star).
\]
If $p(\star)a(\star) = 0,$ then $\Lambda_+(\star) = |q(\star)b(\star)| \leq 1.$ This is a special case of Case I below, since $\gamma_-(\star) = \gamma_+(\star) = \infty$ according to \cref{equation: definition of gammaj}. It remains to consider the case $p(\star)a(\star) \neq 0.$ In what follows, we shall repeatedly use the fact that the hyperbolic cosine is an even function. It follows from \cref{equation: definition of gammaj} that
\begin{equation}
\label{equation: paqb}
|p(\star)a(\star)|\cosh(2 \gamma_\pm(\star)) = 1 \pm |q(\star)b(\star)|.
\end{equation}
\textnormal{\textbf{Case I. }} If $|\gamma(\star)| \leq \gamma_-(\star),$ then
\[
\Lambda_+(\star)
\leq |p(\star) a(\star)|\cosh(2\gamma_-(\star)) + |q(\star) b(\star)| = 1,
\]
where the first inequality follows from $\cosh(2\gamma(\star)) \leq \cosh(2\gamma_-(\star))$ and the last equality follows from \cref{equation: paqb}. Thus $[\Lambda_-(\star),\Lambda_+(\star)] \subseteq [-1,1].$
\textnormal{\textbf{Case II. }} If $\gamma_-(\star) < |\gamma(\star)| < \gamma_+(\star),$ then it follows from \cref{equation: paqb} that
$
\Lambda_-(\star) < 1 < \Lambda_+(\star).
$
It follows that the interval $[\Lambda_-(\star),\Lambda_+(\star)] \subseteq [-1,\infty)$ can be written as the following union;
\[
[\Lambda_-(\star),\Lambda_+(\star)] = [\Lambda_-(\star),1] \cup [1,\Lambda_+(\star)].
\]
\textnormal{\textbf{Case III. }} If $\gamma_+(\star) \leq |\gamma(\star)|,$ then $[\Lambda_-(\star),\Lambda_+(\star)] \subseteq [1,\infty)$ follows from
\[
1 = |p(\star)a(\star)|\cosh(2 \gamma_+(\star)) - |q(\star)b(\star)| \leq \Lambda_-(\star),
\]
where the first equality follows from \cref{equation: paqb} and the last inequality follows from $\cosh(2\gamma_+(\star)) \leq \cosh(2\gamma(\star)).$
\end{proof}
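The eigenvalue computation in the above proof is easy to sanity-check numerically. The following Python sketch (\texttt{numpy} assumed) builds $\hat{U}_m(z,\star)$ for hypothetical asymptotic values subject to $p(\star)^2 + |q(\star)|^2 = 1 = a(\star)^2 + |b(\star)|^2,$ and verifies that its two eigenvalues multiply to $1$ and coincide with $\lambda_\pm(\star, \cos(\theta(\star) + \theta'(\star) + mt)).$

```python
import cmath
import numpy as np

# Hypothetical asymptotic values with p**2 + |q|**2 == 1 == a**2 + |b|**2.
p, a, gamma, m = 0.6, 0.5, 0.3, 2
theta, theta_prime = 0.4, -0.2
q = np.sqrt(1.0 - p**2) * np.exp(1j * theta)
b = np.sqrt(1.0 - a**2) * np.exp(1j * theta_prime)

def U_hat(z):
    """The 2x2 matrix symbol of U_m at the limit."""
    e = np.exp(2.0 * gamma)
    return np.array([
        [q * b * z**m + p * a / e, -(q * a * e * z**m - p * np.conj(b))],
        [np.conj(q) * a / e * z**(-m) - p * b,
         np.conj(q) * np.conj(b) * z**(-m) + p * a * e],
    ])

def lambdas(t):
    """lambda_{pm} evaluated at s = cos(theta + theta' + m t)."""
    s = np.cos(theta + theta_prime + m * t)
    L = p * a * np.cosh(2.0 * gamma) + abs(q * b) * s
    root = cmath.sqrt(complex(L * L - 1.0))  # purely imaginary when |L| < 1
    return [L + root, L - root]

for t in np.linspace(0.0, 2.0 * np.pi, 7):
    eig = np.linalg.eigvals(U_hat(np.exp(1j * t)))
    assert abs(eig[0] * eig[1] - 1.0) < 1e-9  # det U_hat = 1
    assert all(min(abs(ev - lv) for lv in lambdas(t)) < 1e-6 for ev in eig)
```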
In the setting of $2$-phase quantum walks, a typical computation of the essential spectrum makes use of the discrete Fourier transform and Weyl's criterion for the essential spectrum (see, for example, \cite[Lemma 3.3]{Fuda-Funakawa-Suzuki-2017}). Weyl's criterion remains applicable under, for example, non-compact perturbations (see, for example, \cite{Sasaki-Suzuki-2017}), but its use is restricted to normal operators. This is why Weyl's criterion is not suitable for \cref{maintheorem: generalised mko}~(ii).
\section{Conclusion}
\label{section: concluding remarks}
\subsection{Summary}
The following is a brief summary of the present article. A chiral pair on a Hilbert space $\mathcal{H}$ is by definition any pair $(\varGamma,U)$ of a unitary self-adjoint operator $\varGamma : \mathcal{H} \to \mathcal{H}$ and a bounded operator $U : \mathcal{H} \to \mathcal{H}$ satisfying the chiral symmetry condition \cref{equation: chiral symmetry}. It is shown in \cref{section: preliminaries} that we can assign to each abstract chiral pair $(\varGamma, U)$ a well-defined Witten index, denoted by $\mathrm{ind}\,(\varGamma,U)$ in this paper. Note that this assignment of the Fredholm index is a natural generalisation of the existing index theory \cite{Cedzich-Grunbaum-Stahl-Velazquez-Werner-Werner-2016,Cedzich-Geib-Grunbaum-Stahl-Velazquez-Werner-Werner-2018,Cedzich-Geib-Stahl-Velazquez-Werner-Werner-2018,Suzuki-2019,Suzuki-Tanaka-2019,Matsuzawa-2020} for essentially unitary $U,$ where $\mathrm{ind}\,(\varGamma,U)$ is referred to as the \textbi{symmetry index} in the first three papers.
A motivating example for this paper is the non-unitary time-evolution $U_{\textnormal{mko}}$ defined by \cref{equation: definition of evolution operator of MKO}, where we assume the existence of the two-sided limits as in \cref{equation: existence of limits}. Recall that this evolution operator is consistent with the experimental setup in \cite{Regensburger-Bersch-Miri-Onishchukov-Christodoulides-Peschel-2012}. It is shown in \cref{theorem: MKO model} that the operator $U_{\textnormal{mko}}$ forms a chiral pair with respect to the unitary self-adjoint operator $\varGamma_{\textrm{mko}} := (\sigma_2 C_1 S \sigma_2) \varGamma_2 (\sigma_2 C_1 S \sigma_2)^*,$ where $\sigma_2$ denotes the second Pauli matrix, and that the chiral pair $(\varGamma_{\textnormal{mko}}, U_{\textnormal{mko}})$ can be naturally generalised to another chiral pair $(\varGamma_m, U_m),$ where $m$ can be any fixed integer. This new model $(\varGamma_m, U_m)$ also unifies several one-dimensional unitary quantum walk models as in \cref{section: chirally symmetric quantum walks}. Complete classification of the two associated topological invariants $\mathrm{ind}\,(\varGamma_m, U_m)$ and $\sigma_{\mathrm{ess}}(U_m)$ can be collectively found in \cref{maintheorem: generalised mko}. Our classification of $\mathrm{ind}\,(\varGamma_m, U_m)$ makes use of an abstract form of the one-dimensional bulk-boundary correspondence, the precise statement of which can be found in \cref{theorem: topological invariants of strictly local operators}~(i).
Finally, it is shown in \cref{lemma: essential spectrum of essentially unitary U} that given an abstract chiral pair $(\varGamma,U)$ with $U$ being essentially unitary, we have that $\mathrm{ind}\,(\varGamma,U)$ is a well-defined Fredholm index if and only if $U$ is essentially gapped in the sense that $-1,+1 \notin \sigma_{\mathrm{ess}}(U).$ It turns out that this characterisation does not hold true in general, if $U$ fails to be essentially unitary. To put this into context, we consider the non-unitary evolution $U_{\textnormal{mko}}.$ It is shown in \cref{example: second example} that we can choose the asymptotic values $\theta_1(\pm \infty), \theta_2(\pm \infty), \gamma(\pm \infty),$ in such a way that $U_{\textnormal{mko}}$ is essentially gapless, yet $\mathrm{ind}\,(\varGamma,U)$ is a well-defined non-zero integer.
\subsection{Discussion}
The main results of the current paper may stimulate further developments in the rigorous mathematical studies of non-unitary discrete-time quantum walks. In particular, each of the following specific topics is the subject of another paper in preparation.
\subsubsection{Further spectral analysis of the Mochizuki-Kim-Obuse model}
Complete classification of $\sigma_{\mathrm{ess}}(U_{\textnormal{mko}})$ is given in this paper. In particular, we show that $\sigma_{\mathrm{ess}}(U_{\textnormal{mko}})$ is a subset of $\mathbb{T} \cup \mathbb{R},$ and that it depends only on the asymptotic values $\theta_1(\pm \infty), \theta_2(\pm \infty), \gamma(\pm \infty).$ Note, however, that it is not known to the authors whether or not the entire spectrum of $U_{\textnormal{mko}}$ is also a subset of $\mathbb{T} \cup \mathbb{R}.$ Detailed spectral analysis of the evolution operator $U_{\textnormal{mko}}$ may turn out to be difficult, partly because the discrete spectrum of such a non-normal operator is in general laborious to characterise (see, for example, \cite[\textsection III]{Boussaid-Comech-2019}). Note also that, unlike $\sigma_{\mathrm{ess}}(U_{\textnormal{mko}}),$ we expect the discrete spectrum not to be stable under compact perturbations.
\subsubsection{Topologically protected bound states}
Let $(\varGamma,U)$ be a chiral pair. If $U$ is unitary, then the non-zero vectors in $\ker(U \mp 1)$ can be referred to as \textbi{topologically protected bound states} \cite{Kitagawa-Rudner-Berg-Demler-2010,Kitagawa-Broome-Fedrizzi-Rudner-Berg-Kassal-Aspuru-Demler-White-2012,Suzuki-2019,Suzuki-Tanaka-2019,Matsuzawa-2020}. It is well-known that the Witten index $\mathrm{ind}\,(\varGamma,U)$ gives a lower bound for the number of topologically protected bound states in the following precise sense (see, for example, \cite[Theorem 3.4~(ii)]{Suzuki-2019}):
\begin{equation}
\label{equation: topologically protected bound states}
|\mathrm{ind}\,(\varGamma,U)| \leq \dim \ker(U - 1) + \dim \ker(U + 1),
\end{equation}
where the chiral pair $(\varGamma,U)$ is assumed to be Fredholm. It follows that if $\mathrm{ind}\,(\varGamma,U)$ is non-zero, then $U$ has at least one topologically protected bound state. Whether or not an estimate analogous to \cref{equation: topologically protected bound states} holds true for non-unitary $U$ is an open problem.
\begin{acknowledgements}
The authors are deeply indebted to the members of the Shinshu Mathematical Physics Group for extremely valuable discussions and comments. Our sincere thanks go to T.~Daniels for carefully reading the preprint. Y.~T.~was supported by JSPS KAKENHI Grant Number 20J22684. This work was partially supported by the Research Institute for Mathematical Sciences, a Joint Usage/Research Center located in Kyoto University.
\end{acknowledgements}
\bibliographystyle{alpha}
\section{Introduction}
Increasingly, massive amounts of data are being generated, stored, and disseminated as a result of human activity. For instance, whenever a mobile phone call, monetary transaction, or social media post is made, geo-located data is automatically generated by the mobile network provider, bank, or social network provider (e.g., Facebook or Twitter) and attached to the data record generated by the activity. An extensive body of work leverages such data for studying human dynamics, through cell phone data \cite{ratti2006mlu, gonzalez2008uih, amini2014impact, DBLP:journals/corr/GrauwinSMGR14, sobolevsky2013delineating}, social media posts \cite{hawelka2014geo, paldino2015urban}, bank card transactions \cite{sobolevsky2014money,DBLP:journals/corr/SobolevskySGCHAR14} and vehicle GPS traces \cite{kang2013exploring,santi2014quantifying}. Over the last few years, the use of mobile phones as sensors of human behavior has radically increased. The storage and analysis of information from mobile phones can provide useful insights into how people move and behave. From this information, it is possible to infer, with a certain level of accuracy, the activities humans are performing at every moment they are connected to the mobile network. Indeed, experiments in large-scale social dynamics have been conducted in the areas of public safety and emergency management \cite{Lu17072012,10.1371/journal.pmed.1001083,DBLP:journals/corr/Pastor-EscuredoMTBWCRLRFOFL14}, health and disease management \cite{Kovanen:Wesolowski_Science2012,6113095}, social and economic development \cite{Frias-martinez12onthe,citeulike:7205422,amini2014impact}, transport/infrastructure \cite{Berlingerio_allabroad,kang2013exploring}, urban planning \cite{5928310,5594641,Girardin_quantifyingurban,citeulike:5158387,10.1371/journal.pone.0037027,pei2014new,DBLP:journals/corr/GrauwinSMGR14}
and international development and poverty \cite{Smith_ubiquitoussensing}, and more \cite{Bogomolov:2014:OUC:2663204.2663254}. A large fraction of mobile phone data has been shown to be extremely useful for humanitarian and development applications (Robert Kirkpatrick, UN, 2013). Such geo- and time-referenced information has an explicit or implicit geographic location.
A commonly used source of mobile phone data for such studies is comprised of aggregated and anonymized Call Detail Records (CDRs), which provide metadata about phone activity. These records allow us to identify the time, the location, and the duration of calls, and possibly the movements of mobile phone owners. Because the mobile network operator knows the locations of its cell towers, it is possible to use CDRs to approximate the location of users. The spacing of cell towers, and thus the accuracy in determining a caller's location, varies according to expected traffic and terrain. Cell towers are typically spaced 2-3 km apart in rural areas and 400-800 m apart in densely populated areas. CDRs are commonly available in anonymized, and often incomplete or aggregated, form because of privacy issues. Accessing personal or private data is a very sensitive and critical topic, which requires privacy concerns to be addressed before research outcomes are shared. The main research area of this article is the investigation of inferring (collective) human behavior from such CDRs.
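Since CDRs carry only a cell identifier, a caller's position is typically approximated by the position of the serving antenna. The following toy sketch illustrates a nearest-tower lookup using the haversine distance; the tower identifiers and coordinates are invented for illustration and are not taken from any dataset used in this article.

```python
# Approximating a caller's position from the serving cell tower: a toy sketch.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_tower(lat, lon, towers):
    """Return the id of the closest tower; its position approximates the caller's."""
    return min(towers, key=lambda t: haversine_km(lat, lon, towers[t][0], towers[t][1]))

# Invented tower positions (roughly in the Trento area) for illustration only.
towers = {"cell_A": (46.07, 11.12), "cell_B": (46.10, 11.15)}
print(nearest_tower(46.071, 11.121, towers))  # prints cell_A
```

In practice the coverage area of a cell is a polygon rather than a point, so the result only bounds the caller's location to within the cell size discussed above.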
Current research \cite{DBLP:journals/corr/abs-1106-0560, Phithakkitnukoon:2010:AMI:1881331.1881336, Calabrese:2010:GTA:2166616.2166619,Furletti:2012:IUP:2346496.2346500} notes that with pure CDRs it is possible to identify human behaviors, but the results suffer from the heterogeneity, uncertainty and complexity of the raw datasets, and from the lack of qualitative content in the data itself that could help to infer human behaviors. In spite of the poor quality of the data, some researchers are able to identify a certain level of human behavior with the help of features that can be indirectly obtained from the raw CDRs. These features are observed information about real-world cases. For example, a user's home location is identified as the location where the user most frequently stays between midnight and early morning \cite{Calabrese:2010:GTA:2166616.2166619}. These features are mostly specified by domain experts. Most behavioral datasets, such as CDRs or GPS traces, are relatively large raw datasets in which user location is not precise enough for accurate tracking. Such data are incomplete, typically lacking in content, and the measurement accuracy of the data is low and coarse grained. While this data provides extensive information regarding when and where people go, it typically does not allow understanding of which activity is performed at a given location.
When the context information of raw data is not directly available in the data, some useful characteristic values for human behaviors can still be discovered by analyzing the activity patterns of mobile network communication \cite{candia-2007,gonzalez08,song2010limits}. For example, anomalous events can be observed through changes in daily communication activity patterns \cite{candia-2007}. Using some semantics about ongoing events (e.g., the location and the time of an event), researchers~\cite{Calabrese:2010:GTA:2166616.2166619,hoteit2014estimating,Furletti:2012:IUP:2346496.2346500,DBLP:journals/corr/abs-1106-0560} are able to discover relationships between human behaviors (i.e., communication and mobility patterns) and emergency (e.g., earthquake, blackout) or non-emergency events (e.g., concert, festival). However, the results are limited to only specific events and the correlations between the events and human behaviors. For instance, the quantitative analysis of behavioral changes in the presence of extreme emergency events shows a radical increase in call frequency right after the event occurs, with long-term impacts \cite{DBLP:journals/corr/abs-1106-0560}.
Human behavior is influenced by many external factors, such as weather conditions, urban structure, and social events. Knowing the context in which a person is located is important for guessing what he/she is doing. For instance, if a person is in or close to a restaurant and makes a phone call around 1:00 pm or 8:00 pm, then the person is probably looking to eat some food, or (s)he works at the restaurant. In this article, we propose to formalize the context as a pair $\left<l,t\right>$, where $l$ is a location (= geographical area) and $t$ is a time interval. A context can be associated with various information, for instance, weather, events, surrounding geo-objects, and more. For every record in a CDR dataset, it is possible to determine, to a certain level of approximation, the context associated with that record. Some research studies~\cite{10.1371/journal.pone.0112608,10.1371/journal.pone.0045745,10.1371/journal.pone.0081153,Girardin_quantifyingurban,Phithakkitnukoon:2010:AMI:1881331.1881336} have been performed to understand the correlations of human behaviors with environmental factors, using additional contextual information about weather, social events, and geographical information systems. These studies focus on the inference of a certain level of human activities under certain conditions. For instance, human behavior changes during periods of uncomfortable weather conditions~\cite{10.1371/journal.pone.0112608}.
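The context pair $\left<l,t\right>$ can be sketched as a small data structure; the class and field names below are our own illustration, not part of any existing system described in this article.

```python
# A minimal sketch of the context pair <l, t>: a geographical area plus a
# time interval, optionally enriched with associated information.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Context:
    area_id: str        # identifier of the geographical area l
    t_start: datetime   # start of the time interval t
    t_end: datetime     # end of the time interval t
    info: tuple = ()    # associated information, e.g. ("rain", "concert")

    def covers(self, when: datetime) -> bool:
        """True if the timestamp falls inside this context's time interval."""
        return self.t_start <= when < self.t_end

lunch = Context("restaurant_district",
                datetime(2014, 5, 1, 12), datetime(2014, 5, 1, 14),
                info=("sunny",))
print(lunch.covers(datetime(2014, 5, 1, 13)))  # True
```

Resolving a CDR record to such a context then amounts to mapping its cell to an area and its timestamp into an interval.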
In the literature on raw CDR analysis, researchers mostly employ data-mining and machine learning algorithms\footnote{http://en.wikipedia.org/wiki/Supervised\_learning} \cite{Daniele2009,s141018131}. Supervised learning algorithms (e.g., multiclass SVM, logistic regression, multilayer perceptron, decision trees, KNN) can achieve reasonable accuracy in CDR analysis, but they require qualitative and quantitative properties in the data and ground-truth information in order to perform behavior classification well. The researchers claimed to acquire a sufficient amount of labeled training data for their studies, but obtaining ground-truth information (e.g., user diaries and surveys) for training data remains a very expensive and almost unmanageable task, especially when one considers the large amounts of CDR data that need to be annotated and the diversity in the contexts of mobile phone events, which makes the annotation more difficult.
The result of CDR analysis usually provides quantitative evidence that is presented in visual analytic tools (e.g., graphs and tendencies), and requires a qualitative description for the interpretation of human behavior. Less interest has been dedicated to the development of methods that are capable of producing a qualitative/semantic-level description of human behavior. This article concentrates on providing qualitative descriptions of human behavior. For representing human behavior, we propose to use semantic descriptions based on an ontology. In more detail, a semantic description of human behaviors is a representation of the behaviors of a single person, the behaviors of a group of people, or the events that happen in human society, in terms of the concepts and relations of an ontology. An example of a semantic description of human behaviors is the fact that ``a person is performing some specific actions'' (e.g., working, shopping and hiking), or the fact that ``certain events are happening in a certain area'' (e.g., a car accident, or a train suddenly stopping in the middle of nowhere).
The growing information in a variety of Web 2.0 and social platforms opens opportunities for collecting and using information about the context associated with mobile phone data records. These sources include a wide and diverse set of data that can be useful to characterize a user's context: environmental data; statistical descriptions of the territory; public and private, emergency and non-emergency events; statistical data about demography, ethnography, energy or water consumption; and more. These kinds of data should be employed as contextual information if we are to gain a better understanding of a user's context, as they allow a more representative and semantically expressive characterization of the relationships between human behavior and different contextual factors.
This article seeks to understand human behavior based on contextual information obtained (also) by leveraging available Web 2.0 data sources, representing in both a conceptual and a computational model the correlations between contexts and human behaviors, the prediction of the activities a group of people will perform in a given context, and the most probable action that a group of people is performing in a context. The contextual information can be inaccurate, unavailable, uncertain, and noisy. In our research, we need to pre-estimate whether the information is useful enough by measuring the validity, accuracy, and suitability of the data. Such contextual data needs to be effectively integrated, coping with the problems caused by the uncertainty and heterogeneity of the data.
We identified that points of interest (POIs) are good proxies for predicting the content of human activities in the contexts in which mobile network events occur. Our proposed model, named the High Level Representation of Behavior Model (HRBModel), correlates POIs and the time of the day with typical user activities. The model integrates an ontology for POIs and times/days with an ontology for human activities, together with statistical information about the correlations between the two. Given a set of POIs and a time of the day (both of which can be inferred from the context), the model generates a set of human activities, each associated with a likelihood measure.
We validated the accuracy of the model using two different qualitative geo-located data sources, providing ground-truth information on what types of human activities were performed in a given location: 1) user feedback collected in Trento, Italy, and 2) bank card transaction data generated in Barcelona, Spain. Our extended evaluation of the model includes a validation of the impact of the heterogeneity and uncertainty of open geographical data on recognizing human activities. This validation is performed at city scale and takes different land-use types into account in order to understand the level of accuracy of human activity prediction from the geographical data or, vice versa, from the bank card transaction data.
The innovative aspect of this research is the development and usage of ontologies for the analysis of raw CDR data, combined in a mixed model with data-mining numerical methods. Our method improves the quality of human activity recognition tasks given noisy, lossy, and uncertain data, and allows us to infer, with a certain level of accuracy, the different levels (of hierarchy) of activities humans are performing at a specific location and time. Accordingly, the model can deal with a wide range of contextual features about the objects of a territory by extracting concepts and inter-concept relations.
Another innovative aspect is that the HRBModel is a general model that can be used to provide a semantic interpretation of any type of geo-located and time-dependent data other than CDRs, such as data provided by social platforms like Foursquare or Twitter, GPS traces, and credit card data. Potential applications of this research are context-aware application systems and ``smart city'' applications that provide decision support for stakeholders in areas such as urban and transport planning, tourism and event analysis, emergency response, health improvement, community understanding, economic indicators, and others~\cite{nrc12377,DBLP:journals/corr/abs-1210-0137,DBLP:journals/pervasive/FerrariBCC13}.
The remainder of the paper is structured as follows: Section \ref{related works} briefly summarizes relevant literature. In the following section, we introduce the data-sources we used for contextual information. The core methodology of the HRBModel to extract high level human activities from geographical information is explained in Section \ref{methodology}. The experimental results and evaluation of the model are presented in Section \ref{evaluation}. In Section \ref{usage of the model}, we discuss the usage of the model and describe possible developments (extensions). Finally, in Section \ref{conclusion}, we draw some conclusions.
\section{Related Works}\label{related works}
Mobile phone data brings great opportunities for, and has a great impact on, human activity recognition. Through data-mining and machine learning techniques, human activity data becomes available for further analysis.
Researchers in the areas of behavioral and social sciences are interested in examining CDRs to characterize and understand real-life phenomena, such as individual traits and human mobility \cite{DBLP:journals/corr/abs-1106-0560,Calabrese:2010:GTA:2166616.2166619,4756329,Timothy-2006}, and communication and interaction patterns \cite{Calabrese:2010:GTA:2166616.2166619,onnela-2006,10.1371/journal.pone.0014248}. Candia et al.~\cite{candia-2007} propose an approach to understand the dynamics of individual calling activities, which could carry implications for social networks. The authors analyzed the calling activities of different groups of users (some people rarely use their mobile phone, others use it often). The cumulative distribution of consecutive calls made by each user is measured within each group, and the result shows that the time between consecutive calls is useful for discovering characteristic values of the behaviors. For example, peaks occur near noon and late evening, while the fraction of the actively traveling population and the average distance of travel are almost stable during the day. This approach can be applied to detecting anomalous events. In \cite{Calabrese:2010:GTA:2166616.2166619}, the authors analyze the mobility traces of groups of users with the objective of extracting standard mobility patterns of people during special events. In particular, this work presents an analysis of anonymized traces from the Boston metropolitan area during a number of selected events that happened in the city. The authors show that people are preferentially interested in events organized in the proximity of their residence.
Some researchers used semantic tags for geographic locations from social networks or user diaries to identify the semantic meaning of places \cite{Ye:2011:SAP:2020408.2020491,conf/huc/KrummR13,Sakaki10earthquakeshakes,Sengstock:2011:ECG:2093973.2094017,Yin:2011:GTD:1963405.1963443} captured in user CDRs or GPS trajectories. Supervised learning algorithms (e.g., binary SVMs and hidden Markov models) are employed for this identification.
People perform different activities even when they stay in the same location at the same time, owing to the wide variety of situations caused by various factors, from natural, technological or societal disasters such as hurricanes to violent conflicts. This requires the consideration of potential influencing context factors on a common space-time basis that enables the association of different datasets. Indeed, some researchers~\cite{DBLP:journals/corr/abs-1106-0560, Phithakkitnukoon:2010:AMI:1881331.1881336, Calabrese:2010:GTA:2166616.2166619,Furletti:2012:IUP:2346496.2346500} use additional information about context factors in order to study the relationship between human behaviors and context factors such as social events, geographical location, weather conditions, etc. The success of such approaches depends on the quality of the contextual information.
In fact, some researchers have validated the use of spatial and temporal contextual features for understanding the relationship between human behaviors and environmental factors. For instance, Sagl and Resch et al.~\cite{Sagl2014} found that many factors influence collective human behavior, and weather conditions and geographic information are certainly two of them. Therefore, associating environmental and social factors with mobile phone data is extremely useful for analyzing the dependency of human behaviors on external factors. These studies focus on the inference of a certain level of human activities under certain conditions, such as human behavior changes in uncomfortable weather conditions~\cite{10.1371/journal.pone.0112608,10.1371/journal.pone.0045745,10.1371/journal.pone.0081153},
the human mobility of different communities~\cite{Phithakkitnukoon:2010:AMI:1881331.1881336,candia-2007}, human mobility during an event~\cite{Calabrese:2010:GTA:2166616.2166619,D4D:2013,DBLP:journals/corr/abs-1106-0560}, or communication activity patterns in different land-use types~\cite{Girardin_quantifyingurban,Phithakkitnukoon:2010:AMI:1881331.1881336,pei2014new,sobolevsky2013delineating}. In \cite{Phithakkitnukoon:2010:AMI:1881331.1881336}, geographical information, in particular points of interest (POIs), is collected using pYsearch (Python APIs for Y! search services) from a map. The authors annotated POIs with four types of activities: eating, recreational, shopping, and entertainment. They analyzed human activity patterns (i.e., sequences of area profiles visited by users) correlating them to geographical profiles. A Bayesian method is used to classify the areas into a crisp activity distribution map, which enables the extraction of the users' activity patterns. The results show that users who share the same work profile follow similar daily activity patterns. However, area profiles alone cannot fully explain the mobility of these users, and enlarging the activity or event taxonomy in those areas could enable the classification of these activity patterns. Due to the limited heterogeneity of the activities considered in this research, the measured results lack certainty.
The authors of \cite{Calabrese:2010:GTA:2166616.2166619} find that residents are more attracted to events that are organized close to their home location. F. Girardin et al.~\cite{Girardin_quantifyingurban} analyzed aggregate mobile phone data records in New York City to explore the capacity to quantify the evolution of the attractiveness of urban space, and the impact of a public event on the distribution of visitors and on the evolution of the attractiveness of nearby points of interest. In \cite{DBLP:conf/icsdm/SaglBRB11}, the authors introduced an approach to provide additional insights into some interactions between people and weather. Weather can be seen as a higher-level phenomenon, a conglomerate that comprises several meteorological variables, including air temperature, rainfall, air pressure, relative humidity, solar radiation, wind direction and speed, etc. The approach has been significantly extended to a more advanced context-aware analysis in \cite{s120709800}.
\section{Data gathering}\label{data-source}
In this study, we model contextual location datasets of 4.6 million POIs in Trento, Italy, and 2.6 million POIs from OSM in Barcelona, Spain, against two other datasets: 130K POSs from Banco Bilbao Vizcaya Argentaria (BBVA), and CDRs from GSM 900 and GSM 1800 networks (Telecom Italia). Here, we describe the structure of mobile network events and investigate how mobile network data can be combined with the available contextual information for supporting human activity recognition tasks. We show that location and time are the key factors for integrating mobile phone data records with the contextual information coming from other sources about environments and events.
\subsection{Mobile phone data records}
There are many possible types of CDRs generated by operator companies. In the following, we describe the ones used in this article.
The initial experiments of this work were done on a (completely anonymized) CDR dataset provided by Telecom Italia with the following structure:
\begin{description}
\item[source cell and target cell:] the identifiers of the cell from which the call is issued and the receiver's cell, respectively. They are composed of a Location Area Code and additional parameters, such as the operator, etc. The cell ID needs to be resolved into a physical geographical area.
\item[date and time:] the date and the time of the day at which the call is issued.
\item[duration:] the duration of the call.
\end{description}
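As an illustration, the record structure listed above can be modelled as a small record type together with a typical first aggregation step (the field names are our own; real CDR schemas vary by operator).

```python
# A minimal sketch of a CDR record and an hourly call-count aggregation,
# a common first step in the analysis of communication activity patterns.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CDR:
    source_cell: str      # cell from which the call is issued
    target_cell: str      # receiver's cell
    timestamp: datetime   # date and time at which the call is issued
    duration_s: int       # call duration in seconds

def calls_per_hour(records):
    """Aggregate call counts by hour of day."""
    counts = {}
    for r in records:
        counts[r.timestamp.hour] = counts.get(r.timestamp.hour, 0) + 1
    return counts

sample = [CDR("c1", "c2", datetime(2014, 5, 1, 13, 5), 60),
          CDR("c1", "c3", datetime(2014, 5, 1, 13, 40), 120),
          CDR("c2", "c1", datetime(2014, 5, 1, 20, 10), 30)]
print(calls_per_hour(sample))  # {13: 2, 20: 1}
```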
To combine mobile phone data records with contextual information, the location and time associated with each single call are key. CDRs are not directly geo-referenced, but they are labelled with the cell. From the cell and the geographical cell distribution, we can reconstruct, with a certain level of accuracy, an estimation of the geographical location of the user when he or she generated a particular mobile event.
As an example, here we describe the structure of the cell coverage map generated by Telecom Italia. We use aggregate GSM 900 and 1800 data that was provided by Telecom Italia. The cell coverage map of the mobile network is captured across Trento and contains completely anonymized cells (antennas). A cell identifies a portion of a physical geographical area served by a set of devices (antennas) that support mobile communication (e.g., calls, SMS and internet). The size of the coverage areas can vary depending on the estimated call traffic from/to the area. Usually the size of the coverage areas is inversely proportional to the density of the population inhabiting the area; however, there is an upper bound on the coverage area size due to the antennas' physical limit of 35 km. The cell partition is not fixed and can vary depending on the demand on the network during specific periods. For instance, to support an increased request for communications, additional cells can be dynamically activated in order to increase the capacity of the entire network. Partial knowledge of the mapping between cell IDs and coverage areas, the irregularity of the cell sizes, and the dynamic nature of the cell configuration add complexity and uncertainty to the identification of the physical location from which a call is issued. Nowadays, access to cell coverage maps is restricted due to privacy issues. Yet, some open sources for cell coverage maps are available on websites such as Open Cell-ID\footnote{http://www.opencellid.org} and Open Signal Maps\footnote{http://opensignal.com}.
\subsection{Credit card data}
Credit card data is raw data semantically enriched with information describing the type of transaction; it is used in our analysis in order to evaluate the proposed approach. The data is provided for research purposes by one of the largest Spanish banks, BBVA. The raw dataset is protected by the appropriate non-disclosure agreement and is not publicly available. The data was completely anonymized on the bank's side, preventing access to any personal information of the customers, in accordance with all applicable privacy protection regulations. Each transaction is characterized by its value, a timestamp, and the retail location of the Point of Sale (POS) terminal where it was performed. Each POS is categorized into one of 76 business activity types, such as groceries, fashion, bars, restaurants, sport shops, etc.
\subsection{Contextual data}
OpenStreetMap (OSM) is an open and free map of the world, enriched with a large number of geo-referenced objects under an open content license. The database of OSM is frequently updated and is rapidly growing. It has a free tagging system based on a well-documented taxonomy, which classifies objects into categories such as roads, buildings, etc. OSM adopts a topological data structure based on three core elements:
\begin{description}
\item[Nodes:] points with a geographic coordinate (i.e., a pair of latitude and longitude). A node mostly describes a POI. For example, shops, restaurants, etc.
\item[Ways:] ordered lists of nodes in a poly-line or a polygon. These elements are used for representing linear features or areas, such as streets and forests.
\item [Relations:] groups of nodes and ways that represent the relations among existing nodes and ways, such as roads with several exiting ways.
\end{description}
Each of the aforementioned three element types is described with geographic coordinates. Nodes are mainly used to represent POIs, but they can also be used as parts of ways to represent linear features. Thus a POI can be described by one or more nodes, or as part of a way; for instance, a railway level crossing. The metadata of OSM is provided in the form of tags (i.e., textual descriptions given as key-value pairs) that describe each element. Nodes, ways, and relations are tagged, and the tags represent information about the element, for instance whether it is a station, a shop, a road, a crossing, etc. A list of common tags in use (but not all of them) is captured in the Map Features page\footnote{http://wiki.openstreetmap.org/wiki/$Map\_Features$}. The tags for nodes, ways, and relations are represented as sets of pairs $\left<key,value\right>$. For example, a university is tagged with key amenity and value university, and a highway route is tagged with key route and value highway. The OSM tags for a node corresponding to a supermarket are shown below in XML representation:
\begin{verbatim}
<node id="618033185" version="3" uid="330007"
user="pikappa79"
timestamp="2011-05-26T16:02:14Z"
changeset="8254868"
lat="46.0946011" lon="11.1162507">
<tag k="addr:postcode" v="38121"/>
<tag k="shop" v="supermarket"/>
<tag k="addr:country" v="IT"/>
<tag k="name" v="Supermercato PAM"/>
<tag k="addr:housename" v="Bren Center"/>
<tag k="addr:street"
v="Via Giovanni Battista Trener"/>
<tag k="addr:city" v="Trento"/>
</node>
\end{verbatim}
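A node such as the one above can be turned into a set of $\left<key,value\right>$ pairs with standard XML tooling. The following minimal sketch parses a trimmed-down copy of the node with the Python standard library.

```python
# Parsing an OSM node into <key, value> tag pairs with xml.etree.
import xml.etree.ElementTree as ET

osm_node = """
<node id="618033185" lat="46.0946011" lon="11.1162507">
  <tag k="shop" v="supermarket"/>
  <tag k="name" v="Supermercato PAM"/>
  <tag k="addr:city" v="Trento"/>
</node>
"""

node = ET.fromstring(osm_node)
# Collect all tags of the node as a key -> value dictionary.
tags = {t.get("k"): t.get("v") for t in node.findall("tag")}
lat, lon = float(node.get("lat")), float(node.get("lon"))
print(tags["shop"], lat, lon)  # supermarket 46.0946011 11.1162507
```

Applied to a full OSM extract, the same tag extraction yields the POIs falling within each cell area, which is exactly the information the model consumes.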
The way we intend to use OSM in this article is by collecting the objects which lie within a cell area. These objects can provide insights into the type of action a person performs while he or she is located in the cell area. For instance, if a person is in or close to a restaurant and makes a phone call around 1:00 pm or 8:00 pm, then the person is probably looking to eat some food. Alternatively or complementarily, we could use Google Maps, Yahoo Maps, or a regional cadastral map database for collecting the POIs within each cell area. A relevant technique for POI extraction in a given region from a geographical information system is described in~\cite{DBLP:conf/mum/DashdorjSAL13,Phithakkitnukoon:2010:AMI:1881331.1881336}.
Table \ref{table:ontology mapping} introduces a brief description of the ontologies we used to generate useful information from the aforementioned heterogeneous data sources. For the integration of these qualitative data sources, we designed an ontology for each of the domains used in this work: 1) POI, 2) Action, 3) Time. These ontologies are based on surveys, crowdsourcing, and domain experts, or on formally defined classifications, for instance, the business classification by YellowPages or the sport classification by the Olympic Games Committee.
\begin{table*}[!htbp]
\centering
\caption{Input and Output cardinalities}
\label{table:ontology mapping}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Ontology} & \textbf{Input} & \textbf{Input} & \textbf{Output} & \textbf{Output} \\
& & \textbf{Cardinality} & & \textbf{Cardinality} \\ \hline
OSMonto & OSM & 1822 & POIs & 517\\ \hline
ActOnto & Survey, Crowdsourcing & 420 & actions & 217\\
& YellowPages & & & \\ \hline
TimeOnto & Survey, experts & 72 & fuzzy times & 27 \\
& & & days & \\ \hline
HBOnto & OSMonto, ActOnto & 52 classifiers & actions & parent-10 \\
& TimeOnto & & & all-217 \\\hline
\end{tabular}
\end{adjustbox}
\end{table*}
\section{High-level representation of behavior model}\label{methodology}
This section introduces a prediction model, called the High Level Representation of Behavior Model (HRBModel)~\cite{DBLP:conf/mum/DashdorjSAL13}. The model generates a set of human activities with likelihoods from a set of POIs. The structure of the model is visualized in Figure \ref{gra:framework}. The HRBModel consists of two core components: the Human Behavior Ontology (HBOnto) and the Stochastic Behavior Model (SBM). HBOnto is an ontology that provides a classification of POIs, human actions, and periods of the day, and formalizes the relations between these three sets of elements. For example, at the university, people are studying in the morning and afternoon. The SBM assigns a likelihood measure to each of the actions selected by the ontology, on the basis of the frequency of POIs and of a fuzzy model of the actions depending on the time of day. It allows us to predict the top-$k$ activities, each with a likelihood, that could be performed in each context of a given region. For example, having breakfast is the most probable activity in residential types of area in the early morning.
In the remainder of the chapter, we describe the two components of the HRBModel in detail: HBOnto in Section \ref{HBONTO} and SBM in Section \ref{SBO}. In Section \ref{sec:application HRBModel}, we show the application of the HRBModel.
\subsection{Human behavior ontology (HBOnto)}\label{HBONTO}
The Human Behavior Ontology (HBOnto) provides a set of possible activities for a given POI type or time of day, i.e., it performs an a priori activity selection on the basis of POIs and the time of day. For instance, it excludes ``sleeping'' if the time of day is 1:00 pm.\ and it excludes ``hiking'' if you are in the city center. It includes activities only if there are POIs that activate them; for example, an area containing a swimming pool activates the swimming activity. The HBOnto\footnote{http://brenta.disi.unitn.it/~dashdorj/Ontology/HBOnto.zip} is composed of three ontologies (see Figure \ref{gra:HBOnto}): OSMOnto, which contains the various classes of POIs as classified in OSM; ActOnto, which contains the different types of human activities we are interested in; and TimeOnto, which contains the temporal references in which those human activities can occur. These three ontologies are related by two main relations. The first connects an action class to the classes of POIs in which the action can be performed. The second connects an action class to the classes of the time of day during which the action is possible. In the following, we describe each ontology of HBOnto (Sections \ref{sec:OSMONTO}, \ref{sec:ActOnto}, \ref{sec:TimeOnto}), the relations among them (Sections \ref{sec:relation POI and activity} and \ref{sec:relation time/day and activity}), and how DL reasoning on HBOnto can be used to determine possible activities at a given POI type and time (Section \ref{sec:reasoning}).
\begin{figure*}[htb!]
\centering
\subfigure[The prototype framework of the HRBModel]
{\includegraphics[scale=0.22]{images/framework.png}
\label{gra:framework}}
\quad
\subfigure[Composition of human behavioral ontology (HBOnto)]
{\includegraphics[scale=0.26]{images/HRBOnto.png}
\label{gra:HBOnto}}
\caption{}
\end{figure*}
\subsubsection{The Ontology of POI---OSMonto} \label{sec:OSMONTO}
A POI denotes a geo-referenced object, such as a restaurant, a shop, a lake, or another object which has a precise geographical location. In this study, we extended the existing OSMonto ontology~\cite{CodescuOSMONTO:2011:DAS:2008664.2008673}, which proposes a classification of POIs, by adding new POI types that have recently been appended to OSM (see, for instance,~\cite{osmwiki}) as well as the new frequent POI types found in our analysis. The (extended) OSMonto ontology is a taxonomy of POI types that formalizes the information about POIs contained in OSM. The POIs are encoded in OSM in the form of pairs $\left<key,value\right>$.
This information is transformed in order to be represented in OSMOnto. The key is a type of POI prefixed with ``$k\_$'', and the value is a sub-type of POI prefixed with ``$v\_$''. However, according to the OSM tagging structure, some POIs might have the same ``$v\_$'' value denoting different sub-types depending on the ``$k\_$'' key. For instance, a POI $\left<railway,train\right>$ and a POI $\left<route,train\right>$ have the same value. In this case, we represent such POI types as the combination of the key and the value, for instance $k\_railway\_v\_train$. An example of the classes of POIs in OSMOnto is shown in Figure~\ref{gra:osmonto classes}. We added the recent POI types appearing in OSM manually. In total, 517 POI types are used in OSMonto. We discard the irrelevant POIs, for example, those that do not reflect a human action, such as benches, chimneys, power towers, and trash bins.
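The naming scheme just described can be sketched as a small mapping function. This is an illustration of the rule, not the authors' implementation; the list of ambiguous values is a hypothetical placeholder:

```python
# Map an OSM <key, value> pair to an OSMonto class name: use the value
# with a "v_" prefix, and combine key and value when the same value
# appears under several different keys (e.g., "train").
AMBIGUOUS_VALUES = {"train"}  # hypothetical set of values used by several keys

def osmonto_class(key, value):
    if value in AMBIGUOUS_VALUES:
        return "k_{}_v_{}".format(key, value)
    return "v_{}".format(value)

print(osmonto_class("railway", "train"))      # k_railway_v_train
print(osmonto_class("amenity", "university")) # v_university
```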
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.35]{images/Osmonto_b.png}
\caption{OSMOnto ontology for POI classes}
\label{gra:osmonto classes}
\end{figure*}
\subsubsection{The Ontology of Human Activity---ActOnto} \label{sec:ActOnto}
The ActOnto ontology is a taxonomy of human activities. ActOnto contains a total of 217 human activity classes hierarchically organized in up to four levels. The upper level is composed of 10 activity types (see Table~\ref{table:group action}). Various sources were exploited to classify those activities, such as Yellow Pages\footnote{http://www.yellowpages.ca/business/}, Human Resource Management\footnote{http://mayor2.dia.fi.upm.es/oeg-upm/files/hrmontology/hrmontology-RDF.zip}, and Olympic Games\footnote{http://swat.cse.lehigh.edu/resources/onto/olympics.owl}, and we also asked for support from domain experts. The ontology has been constructed manually. Figure \ref{gra:activity_hierarchy} shows the top two levels of the human activities taxonomy. We use description logic to formalize the hierarchy of human activities with axioms. For example, $travel\_by\_airplane \sqsubseteq travel\_by\_transport$, $travel\_by\_car \sqsubseteq travel\_by\_transport$.
\begin{table*}[!htbp]
\centering
\caption{Human activity categories in ActOnto}
\label{table:group action}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|l|}
\hline
\textbf{Top level classes of activities} & \textbf{Type of POIs} \\ \hline
eating & fast food, food court, restaurant, cafe\\ \hline
shopping & grocery, general stores\\ \hline
health medicine activity & hospital, pharmacy\\ \hline
entertainment activity & bar, casino, movie, theater\\ \hline
education activity & library, university, school\\ \hline
transportation traveling & airplane, bus, car, train\\ \hline
outdoor activity & sightseeing, personal care, religious places\\ \hline
sporting activity & car racing, summer, winter sports\\ \hline
working activity & professional work place, industrial place\\ \hline
residential activity & guest house, hotel, hostel, residential building\\ \hline
\end{tabular}
\end{adjustbox}
\end{table*}
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.30]{images/graph1.png}
\caption{The hierarchy of human activity classes in ActOnto, visualized two levels down from the top, where the nodes are the top-level activities and the leaves are the second-level activities}
\label{gra:activity_hierarchy}
\end{figure*}
\subsubsection{The Ontology of Time---TimeOnto}\label{sec:TimeOnto}
Human activities are also correlated with time and, in particular, with the time of day and the day of week. For example, in the early morning people have breakfast and in the afternoon people have lunch. For the task of activity detection, we are more interested in a qualitative representation of time than in fine-grained time measurements. The TimeOnto ontology is designed for modeling time in qualitative terms (e.g., late morning), organizing them into a containment hierarchy (e.g., morning includes late morning). The qualitative terms fuzzify numerical time periods. A representation of how fuzzy times/days are hierarchically organized is given by the following axioms in Description Logic:
\begin{itemize}[]
\itemsep0em
\setlength{\itemsep}{0pt}
\item[] $morning \sqsubseteq time$
\item[] $early\_morning \sqsubseteq morning$
\item[] $mid\_morning \sqsubseteq morning$\\\ldots
\item[] $weekday \sqsubseteq day$
\item[] $holiday \sqsubseteq day$
\item[] $Saturday \sqsubseteq holiday$
\item[] $Sunday \sqsubseteq holiday$\\\ldots
\end{itemize}
In total, 17 classes of fuzzy time and 10 classes of fuzzy day are described in the ontology.
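The containment hierarchy above can be encoded very simply as a parent map with a transitive subsumption test. The following toy encoding is an assumption-level sketch mirroring the DL axioms, not the TimeOnto source:

```python
# Fragment of the fuzzy time/day hierarchy as a child -> parent map.
PARENT = {
    "early_morning": "morning",
    "mid_morning": "morning",
    "morning": "time",
    "Saturday": "holiday",
    "Sunday": "holiday",
    "holiday": "day",
    "weekday": "day",
}

def is_subsumed_by(child, ancestor):
    """True if `child` is (transitively) contained in `ancestor`."""
    while child in PARENT:
        child = PARENT[child]
        if child == ancestor:
            return True
    return False

print(is_subsumed_by("early_morning", "time"))  # True
```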
\subsubsection{Relation between POIs and Human activities} \label{sec:relation POI and activity}
A POI activates a set of human activities that people can perform when they are at that object or in its neighborhood. For instance, in a restaurant people usually perform the eating activity; on a highway road, traveling by transportation is the most usual activity. Every $poi_i$ is associated with a set of activities $A(poi_{i}) \subseteq \{a_{1}, a_{2},.., a_{n}\}$ which can be performed or hosted there or nearby. The relationship between human activities and POIs is represented with the object property ``$what\_can\_be\_done$''. The assertion of this relationship~\cite{FOST} is an existential restriction, exemplified by the following axiom, which states that for every highway road an activity of the type \textit{traveling by car} is possible: $ v\_highway \sqsubseteq\exists what\_can\_be\_done.traveling\_by\_car $. As a result, we defined around 151 associations between human activities and corresponding POI types.
\subsubsection{Relation between Human activities and Time periods} \label{sec:relation time/day and activity}
The activities of a user are highly influenced by his location. For instance, if a person is close to a university, the most probable activities are studying, teaching, and working. In order to capture this dependency, we first need to model which activities can be performed or hosted within or nearby every POI (e.g., eating is possible in a restaurant, while traveling is possible on a railway). Different activities also have different timetables; these timetables may depend on a particular context/culture, e.g., the city of Trento. Some examples of such timetables are:
\begin{itemize}
\itemsep0em
\item shopping activity: [9:00 am.,12:00 pm.], [3:00 pm.,6:00 pm.], except Sunday.
\item eating activity: [8:00 am.,2:00 pm.], [6:00 pm.,10:00 pm.].
\item entertainment activity: [8:00 pm.,3:00 am.].
\end{itemize}
Every time/day $t_i$ is associated with a set of activities $A(t_i) \subseteq \{a_1, a_2,.., a_n\}$. The relationship between the time of day and human activities is represented with an object property ``$is\_usually\_done\_during$'', and the relationship between days and human activities with an object property ``$is\_usually\_done\_on$''. The assertion of these relationships is an existential restriction, which is specified using the following example axiom in DL:
$ shopping \sqsubseteq activity \sqcap \exists is\_usually\_done\_during.(morning \sqcup afternoon) \sqcap \exists is\_usually\_done\_on.(weekday \sqcup saturday)$. This axiom should be read as ``shopping can possibly be performed in the morning and in the afternoon on any day except Sunday''. We associate time periods with all actions using this type of axiom. As a result, all human activities in HBOnto are associated with the day of the week and the time of day.
\subsection{Usage of the HBOnto ontology}\label{sec:reasoning}
We now explain how the ontology is used to derive the set of human activities that could be performed at a certain POI during a certain time of day and day of week. The derivation exploits the axioms via a DL reasoner, for instance FaCT++ or Pellet\footnote{http://clarkparsia.com/pellet/}.
\subsubsection{Deriving Human Activities in a Given POI type}
To derive a set of human activities from a certain POI, a boolean validity function VAL$(poi_i, a_i)$ indicates whether or not an activity $a_i$ is valid in $poi_{i}$; it exploits reasoning over the defined axioms, such as the following example axiom: $v\_highway \sqsubseteq\exists what\_can\_be\_done.traveling\_by\_car$.
\subsubsection{Deriving Human Activities at a certain time and day}
To extract the set of human activities possible at a certain time of day and day of week, a boolean validity function VAL$(a_i, t_i, d_i)$ indicates whether or not an activity $a_i$ is valid at time $t_i$ on day $d_i$. The function is derived from reasoning over the axioms (derived facts). For example, the activities that are usually done during the early afternoon of workdays are derived with the following axiom: $workday\_early\_afternoon\_activity \equiv activity \sqcap \\
\exists is\_usually\_done\_during.early\_afternoon \ \sqcap \exists is\_usually\_done\_on.workday$.
We modeled 52 axioms for this reasoning. For example, reasoning on human activities in the early afternoon of a weekday is shown in Figure \ref{gra:reasoning}.
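The two validity functions can be sketched as simple lookups once the axioms are materialized. The dictionaries below are hypothetical entries standing in for the derived facts; the real work performs DL reasoning over HBOnto:

```python
# Sketch of the boolean validity functions VAL(poi, a) and VAL(a, t, d),
# with the POI->activity and activity->time/day associations stored as
# plain dicts (illustrative entries only).
WHAT_CAN_BE_DONE = {"v_highway": {"traveling_by_car"}}
DONE_DURING = {"shopping": {"morning", "afternoon"}}
DONE_ON = {"shopping": {"weekday", "Saturday"}}

def val_poi(poi, activity):
    """VAL(poi_i, a_i): is the activity possible at this POI type?"""
    return activity in WHAT_CAN_BE_DONE.get(poi, set())

def val_time(activity, time, day):
    """VAL(a_i, t_i, d_i): is the activity usual at this time and day?"""
    return (time in DONE_DURING.get(activity, set())
            and day in DONE_ON.get(activity, set()))

print(val_poi("v_highway", "traveling_by_car"))  # True
print(val_time("shopping", "morning", "Sunday")) # False
```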
\begin{figure} \centering
\begin{tikzpicture}[
every node/.style={anchor=north east,inner sep=0pt},
x=-2mm, y=-2mm,
]
\node (fig1) at (0,0)
{\includegraphics[scale=0.40]{images/reasoning.png}};
\node (fig2) at (3,3)
{};
\end{tikzpicture}
\caption{Example of a classifier atomic formula for human activities on weekday early afternoons}
\label{gra:reasoning}
\end{figure}
\subsection{Stochastic behavior model (SBM)}\label{SBO}
While the HBOnto ontology provides the set of possible actions, the stochastic model (SBM), the other component of the HRBModel, ranks the actions on the basis of the number of POIs of a certain type and the distance from them, and of a fuzzy model of the time of day. We compute: $P(a|l,t,d) = \frac{P(a|l) * P(a|t,d) }{P(a)}$, assuming conditional independence of the location $l$ and the time period $\left<t,d\right>$ given $a$. Section \ref{sec:action in time} explains the estimation of $P(a|t,d)$, based on the relative importance of an action in a certain time period $t$ of day $d$, employing a fuzzy model. Section \ref{sec:action in location} describes the estimation of $P(a|l)$, based on the relative importance of POIs in location $l$, employing TF-IDF. Section \ref{sec:nearby location} shows how this approach can be generalized to $P(a|l,r)$ in order to account for POIs in a larger neighborhood of the user, making the model more robust. $P(a)$ is the total probability that an action occurs over all locations and all time periods of all days.
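The combination rule can be written out directly. The numbers below are illustrative only, not estimates from the thesis data:

```python
# P(a|l,t,d) = P(a|l) * P(a|t,d) / P(a), assuming conditional
# independence of location and time period given the action.
def combined_likelihood(p_a_given_l, p_a_given_td, p_a):
    return p_a_given_l * p_a_given_td / p_a

# e.g., an action likely both at this location and at this time of day
p = combined_likelihood(p_a_given_l=0.4, p_a_given_td=0.6, p_a=0.3)
```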
\subsubsection{Likelihood of human activity in a given time/day}\label{sec:action in time}
The probability $P(a|t,d)$ of a human activity happening at a certain time and day is computed using the fuzzy reasoning model by the formula $ P(a|t,d ) = \frac{S({FM}(a))}{S(FM(t,d))}$, where $S(x)$ is a fuzzy controller which estimates the area of the fuzzy values (fuzzy sets are represented as trapezoidal curves), $FM(a)$ is a trapezoidal fuzzy membership function for the possible values of an action \textit{a}, $a \in A$, $A$=\{eating, working, studying,...\}, and $FM(t, d)$ is a trapezoidal fuzzy membership function for the possible values of a time period \textit{t}, $t \in T$, $T$=\{early morning, mid morning, late morning, mid day,..,late night\}. For example, Figure \ref{gra:fuzzy logics} illustrates the fuzzy sets for the morning time (interval [6:00 am.,11:00 am.]), for the having-breakfast activity (interval [6:00 am.,10:00 am.]), for the afternoon time, and so on. In this example, the probability of having breakfast in the morning is obtained from the intersection between the fuzzy set of the morning time period and the fuzzy set of having breakfast. This intersection gives the probability of the action occurring during a certain time period, and its area is computed by the trapezoid estimation in the fuzzy controller: $S(x) = \sum\limits_{i=0}^{22}{\frac{x_{i+1} + x_{i}}{2}*h}$, where $h$ is the height of each trapezoid slice and is equal to 1, and $x_i$ are the membership values of the fuzzy set at hour $i$ ($i$=0,1,2,...,22), for either an action $a \in A$ or a time period $t \in T$.
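The area estimate and the intersection can be sketched as follows. This assumes hourly sampling of the membership curves and the standard pointwise-minimum fuzzy intersection; the membership values are illustrative, not the paper's calibration:

```python
# Trapezoid-rule area S(x) over hourly membership samples (h = 1 hour),
# applied to the intersection of an activity curve and a time-period curve.
def area(x):
    """S(x) = sum_i (x[i+1] + x[i]) / 2 * h, with h = 1."""
    return sum((x[i + 1] + x[i]) / 2.0 for i in range(len(x) - 1))

def p_activity_given_time(fm_activity, fm_time):
    """Ratio of the intersection area to the time-period area."""
    intersection = [min(a, t) for a, t in zip(fm_activity, fm_time)]
    return area(intersection) / area(fm_time)

# illustrative hourly memberships over 6:00-11:00
morning   = [0.5, 1.0, 1.0, 1.0, 1.0, 0.5]
breakfast = [0.5, 1.0, 1.0, 1.0, 0.5, 0.0]
print(round(p_activity_given_time(breakfast, morning), 2))  # 0.83
```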
\begin{figure*}[htb!]
\centering
\subfigure[Fuzzy intervals for morning time (blue), having breakfast activity (red), midday (yellow), afternoon (purple), evening (green)]{\includegraphics[scale=0.3]{images/fuzzy_logics.png}
\label{gra:fuzzy logics}}
\quad
\subfigure[Minimum cartesian distance between the centroid point of the circle with the aggregation radius $r$ and the nearby areas]
{\includegraphics[scale=0.30]{images/radius_distance.png}
\label{gra:radius}}
\caption{}
\end{figure*}
\subsubsection{Likelihood of Human Activity from a set of POIs in a Given Location} \label{sec:action in location}
POIs are the most important factors affecting the likelihood of activity occurrence, but not all POIs have the same weight. Some POIs, such as bus stops, small shops, ATMs, etc., which occur frequently in several areas, have less discriminative influence on the likelihood estimation than very distinctive POIs, like an airport or a swimming pool. Borrowing an approach commonly used in Information Retrieval, we estimate the importance of POIs in a given area by the Term Frequency-Inverse Document Frequency (TF-IDF) \cite{Salton:1988:TAA:54259.54260} function. In this case, the Term Frequency (TF) factor measures the frequency of a POI in a given area, while the Inverse Document Frequency (IDF) factor gives an indication of the general discriminative power of a POI. A high IDF factor is associated with POIs that are rare, while a low IDF factor is associated with POIs that are very common and thus of low usefulness for distinguishing among different areas. For these common POIs, the ratio inside the logarithm of the IDF formula approaches 1, bringing the TF-IDF closer to 0. In more detail, the weight of a POI in location \textit{l} is estimated by the formula $
\text{tf-idf}(f, l) = \frac{N(f, l)}{\max\limits_w \{N(w, l) : w \in l\}} * \log \frac{|L| }{ |\{l \in L: f \in l\}|} $, where $f$ is a given POI, $f \in F$, $F$=\{building, hospital, supermarket,...\}, and $l$ is a given location, $l \in L$, $L$=\{location1, location2, location3,...\}; $N(f, l)$ is the number of occurrences of POI \textit{f} in location \textit{l}, $\max\limits_w \{N(w, l) : w \in l\}$ is the maximum occurrence count over all the POIs in location \textit{l}, $|L|$ is the number of all locations, and $|\{l \in L: f \in l\}|$ is the number of locations where POI \textit{f} appears. After we assign a weight to each POI in a given area in this way, we retrieve the set of human activities relevant to those POIs by reasoning on the HBOnto ontology. The weight of a POI is propagated to the relevant actions, and the total weight of a given action is estimated as follows: $W(a , l) = \sum_{f \in F(a)} \frac{ \text{tf-idf}(f, l) }{ |{A(f)}| }$, where $a$ is the action in location $l$, $f$ ranges over the POI types which derive action $a$, $|{A(f)}|$ is the number of actions in the set $A(f)$, and $F(a)$ is the set of POI types associated with the action $a$. The probability of a human activity happening in location \textit{l} is given by $P(a|l) = \frac{ W(a, l) }{ W(l)}$, where $W(a, l)$ is the weight of human activity \textit{a} in location \textit{l} and $W(l)$ is the total weight of all activities that occur in location \textit{l}.
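The tf-idf weighting and its propagation to actions can be sketched directly from the formulas above. The data below (locations, POI counts, POI-to-action associations) is invented for illustration:

```python
# tf-idf weight of a POI type in a location, and the action weight
# W(a, l) obtained by propagating POI weights to the actions they derive.
import math

def tf_idf(f, l, locations):
    counts = locations[l]                      # {poi_type: count} for location l
    tf = counts[f] / max(counts.values())
    n_with_f = sum(1 for loc in locations.values() if f in loc)
    idf = math.log(len(locations) / n_with_f)
    return tf * idf

def action_weight(a, l, locations, pois_of_action, actions_of_poi):
    """W(a, l) = sum over POI types deriving a of tf-idf(f, l) / |A(f)|."""
    return sum(tf_idf(f, l, locations) / len(actions_of_poi[f])
               for f in pois_of_action[a] if f in locations[l])

locations = {
    "l1": {"restaurant": 3, "bus_stop": 5},
    "l2": {"bus_stop": 2},
}
pois_of_action = {"eating": ["restaurant"]}
actions_of_poi = {"restaurant": ["eating", "having_lunch"]}
w = action_weight("eating", "l1", locations, pois_of_action, actions_of_poi)
```

The common "bus_stop" POI gets idf = log(2/2) = 0, so it contributes nothing, exactly the discriminative behavior described above.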
\subsubsection{Likelihood of human activity from a set of POIs in or nearby locations}\label{sec:nearby location}
In some cases, the correct activity of a user cannot be predicted properly from the POIs immediately close to the user. This happens, for instance, if the user is moving and starts a phone call before reaching the intended destination where the activity will take place. To tackle these cases, we propose an extension of the stochastic model that also takes into account POIs that are not located at the user's location but can be found by considering a larger neighborhood.
Starting from the centroid of a location, we consider POIs in locations within 100 m, the default aggregation radius of the point. If the total number of POIs in those locations is lower than \textit{h}=50 (the default), we expand the aggregation radius by 25 m until the number of POIs in the intersecting locations is higher than \textit{h}, or until the aggregation radius reaches the maximum radius of 3,000 m, which was chosen to be consistent with the maximum coverage of, for instance, mobile phone cells. As an example, Figure \ref{gra:radius} shows four nearby locations that fall inside the aggregation radius \textit{r} of the centroid point of a given location. The weight of the activities in such areas is estimated depending on the intersection weight in [0,1] of the nearby locations falling inside the circle, $l_i \in L(l,r) $. So we extend $P( a|l) $ using the aggregation radius \textit{r}; the likelihood estimation of the human activities in a given location \textit{l} becomes: $ P( a|l, r) = \frac{\sum\limits_{l_i \in L(l,r)} W( a, l_i ) * \lambda_i}{ \sum\limits_{l_i \in L(l,r)} \sum\limits_{a_j \in A(l_i)} W( a_j, l_i ) * \lambda_i} $, where $W(a,l)$ is the weight of action $a$ in location $l$, and $A(l_i)$ is the set of actions in location $l_i$. The $\lambda_i$ is the intersection weight in [0,1], measured from the minimum cartesian distance between the centroid point of the circle with radius \textit{r} centered at location $l$ and the closest point of the nearby area: $\lambda_i = 1-\frac{d_i}{r}$ for ${d_i} \leq r$, where $d_i$ is the distance of nearby location $i$. The closer a nearby location is to the centroid, the closer its intersection weight is to 1.
Given $\lambda_i$, $W( a, l_i ) * \lambda_i$ is the activity weight in an intersecting location that falls inside the circle. The following example, shown in Table \ref{tbl:HRBModel inference}, introduces the human activities with a likelihood measure that could be performed in the morning of a workday for a given set of POIs in a location.
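The radius expansion and the distance-based weighting can be sketched as follows. The geometry is simplified to precomputed (distance, POI count) pairs for the nearby locations; this is an illustration of the procedure, not the thesis code:

```python
# Grow the aggregation radius by 25 m steps (from 100 m, capped at 3000 m)
# until the intersected locations hold at least h POIs, and weight each
# nearby location by lambda_i = 1 - d_i / r.
def intersection_weight(d_i, r):
    return 1.0 - d_i / r if d_i <= r else 0.0

def aggregation_radius(nearby, h=50, r0=100.0, step=25.0, r_max=3000.0):
    """nearby: list of (distance_m, n_pois) for surrounding locations."""
    r = r0
    while r < r_max:
        if sum(n for d, n in nearby if d <= r) >= h:
            break
        r += step
    return r

nearby = [(0.0, 20), (80.0, 15), (140.0, 30)]
r = aggregation_radius(nearby)  # grows until >= 50 POIs are covered
```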
\begin{table*}[htb!]
\centering
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{ | c | c | c | c | l | l |}
\hline
\textbf{POIs} & \textbf{Count} & \textbf{Weight} & \textbf{Activities in a given location} & \textbf{Activities in the morning of workday} & \textbf{Probability} \\ \hline
\multirow{4}{*}{$\left<tourism,hostel\right>$} & \multirow{4}{*}{1} & \multirow{4}{*}{0.3} & relaxing at home & relaxing at home & 0.1\\
& & & having breakfast & having breakfast & 0.1 \\
& & & having lunch & & \\
& & & having dinner & & \\ \hline
$\left<natural,tree\right>$ & 10 & 1.5 & hiking & hiking & 0.6\\ \hline
$\left<highway,bus\_stop\right>$ & 1 & 0.02 & traveling by bus & traveling by bus & 0.01 \\ \hline
\end{tabular}
\end{adjustbox}
\caption{The HRBModel inference for human actions in the morning of workday}
\label{tbl:HRBModel inference}
\end{table*}
\subsection{Usage of the HRBModel}\label{sec:application HRBModel}
The HRBModel estimates the probability of an action happening in a region on the basis of the frequency of POIs. POIs are unevenly distributed across areas: some locations (e.g., the city center) have many POIs that can derive many actions, while other locations (e.g., remote areas) have few POIs that lead to few actions. Hence we cannot divide a region into areas of the same dimension; we have to take into account the number of POIs included in an area, so that the areas are well characterized in terms of POIs. We divide a territory into grid cells (=locations) in order to have uniformly distributed POIs across locations. We consider OpenStreetMap (OSM) as the source of POIs, as it is open and free. We first describe how POI information relevant to human activity can be extracted from OSM for a given location. Then we describe how a territory can be organized into locations satisfying certain properties on the basis of the available location and POI data. A default rectangular spatial grid is designed for populating the representative POIs (i.e., the related human activities) at each location. The area of a region or territory is divided into unit areas, where each unit of the grid is 50 m$^2$ in size, and in each of them we populate the POIs.
We build a density-based POI distribution grid for a territory which has a threshold number of POIs in each cell (=location) of the grid (in the default configuration, the aggregation radius is 0). The default grid is then re-partitioned using Quad-Trees \cite{QuadTree} in such a way that each unit area (=location) contains $h$ $\in$ [10,20] POIs. Quad-trees are commonly used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. The regions might be square or rectangular, or might have arbitrary shapes. We call this re-partitioned grid a density-based POI distribution grid, in which the locations in urban spaces have a smaller coverage, as the POI density is higher, and the locations in remote spaces have a larger coverage, as the POI density is lower. In our analysis, we discard the irrelevant POIs, for example, those that do not reflect any human activity, such as benches, chimneys, power towers, and trash bins.
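The quad-tree re-partitioning can be sketched as a simple recursive split. This is a simplified illustration of the idea under assumed parameters (an upper POI bound per cell and a 50 m minimum unit), not the implementation used in the thesis:

```python
# Recursively split a cell into four quadrants while it holds more POIs
# than the upper bound, stopping at the minimum unit size.
def partition(cell, pois, h_max=20):
    """cell = (x, y, size); pois = list of (px, py). Returns leaf cells."""
    x, y, size = cell
    inside = [(px, py) for px, py in pois
              if x <= px < x + size and y <= py < y + size]
    if len(inside) <= h_max or size <= 50:  # 50 m minimum unit, as above
        return [cell]
    half = size / 2.0
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += partition((x + dx, y + dy, half), inside, h_max)
    return leaves

# a dense corner forces repeated splits there; empty quadrants stay coarse
pois = [(i, j) for i in range(0, 25, 2) for j in range(0, 25, 2)]
cells = partition((0.0, 0.0, 400.0), pois)
```

The result reproduces the density-based behavior described above: small cells where POIs are dense, large cells elsewhere.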
\section{Evaluation}\label{evaluation}
In this section, we perform three types of evaluation of the applicability of the predictive model, HRBModel, taking various types of spatial and temporal data and their factors into account:
\begin{itemize}
\item Evaluation through user feedback
\item Evaluation through bank card transaction data revealing actual human economic activities
\item Extended evaluation of the model
\end{itemize}
\subsection{Evaluation through user feedback}
We designed an experimental application for collecting user feedback data to validate our HRBModel. The user feedback was collected in the city of Trento, Italy. We then present the experimental outcome and evaluation results. The goal of the experiment was to understand whether the human activities associated with a given area at a given time in our fuzzy model are correct; the activities are evaluated with respect to the feedback provided by a set of users. The application recommends a list of the top-\textit{k} possible human activities for each user-chosen context, and the users then identify from the list the correct activity they actually performed. The application shows the map of Trento overlaid by the density-based POI distribution grid of the mobile network coverage map. The interface of the application is illustrated in Figure \ref{gra:evaluation app}; the figure shows the most probable activities in the top-\textit{k} predicted for the user. A preliminary analysis of this evaluation was introduced in our previous work~\cite{DBLP:conf/mum/DashdorjSAL13}; in this work, we improved the fuzzy reasoning model and re-analyzed the geographical area with richer content.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.35]{images/evaluation_app.png}
\caption{Visualization of demonstration app, at Piazza Duomo on weekday mornings}
\label{gra:evaluation app}
\end{figure}
In building this application, we first needed to build a density-based POI distribution grid for this city: for this we collected 4.6 million POIs extracted from OSM. Using Quantum GIS\footnote{www2.qgis.org}, the map of the city was initially divided into a grid of 401 $\times$ 302 cells, where each unit of the grid is 50 m$^2$ in size. The cells were partitioned using a quad-tree in such a way that each unit area (i.e., location) contains \textit{h} $\in$ [10,20] representative POIs. The re-partitioned locations containing the threshold \textit{h} number of POIs are represented in Figure \ref{gra:Grid_partition}. In total, 3,150 locations were processed. Using the data from OSM, we populated the POIs at each location with those contained in, located on the border of, or intersecting that location. In total, 135,918 relevant POIs were extracted across the locations; this number was reduced to 31,514 after cleaning and discarding irrelevant POIs (i.e., those that do not reflect relevant human activity).
\begin{figure*}[htb!]
\centering
\subfigure[POI distribution in Trento]
{\includegraphics[scale=0.22]{images/Trento_POI.png}
\label{gra:POI_dist_Trento}}
\quad
\subfigure[Density-based POI distribution grid in Trento]
{\includegraphics[scale=0.21]{images/grid_quadtree.png}
\label{gra:Grid_partition}}
\caption{}
\end{figure*}
Figure \ref{gra:scatter_features} compares, in a scatter plot, the changes in the number of items (i.e., raw POIs, refined POIs, and extracted human activities) across locations. The number of refined POIs is about 10 times lower than the number of raw POIs after cleaning and discarding irrelevant POIs. The number of human activities extracted from the refined POIs is about 5 times lower than the number of raw POIs after the activity inference. The density distributions of the numbers of POIs and human activities across locations are represented in Figure \ref{gra:POI_dist}, where the density curves are Gaussian-like. The figure shows that, across locations, the minimum, median, and maximum numbers of POIs are 15, $\sim$38, and 375, and the minimum, median, and maximum numbers of human activities are 1, $\sim$9, and 59. The remote areas are described by the left tail of the distribution and contain small numbers of POIs and human activities, up to the modes of the density distributions (37 POIs and 9 human activities). The dense areas are described by the right tail of the distribution and contain up to 375 POIs and 59 human activities, respectively.
\begin{figure*}[htb!]
\centering
\subfigure[The changes of the number of refined POIs (green) and extracted human activities (reddish) compared to the raw POIs, aggregation radius=0]
{\includegraphics[scale=0.27]{images/POI_Activity_Raw.png}
\label{gra:scatter_features}}
\quad
\subfigure[The distributions of POIs and human activities across locations, aggregation radius=0 (i.e. default configuration)]
{\includegraphics[scale=0.29]{images/action_poi_dist_grid_gaussian.png}
\label{gra:POI_dist}}
\caption{The representative POIs and human activity distribution}
\end{figure*}
To populate a sufficient number and diversity of POIs in each location, we consider the nearby areas when estimating the likelihood of human activities. The nearby areas are the locations intersected within the aggregation radius of the centroid point of each location. The aggregation radius is configured differently for each location: it satisfies the requirement that the total number of POIs in the intersected locations be above the threshold \textit{h} (see Figure \ref{gra:POI Human activity distribution nearby areas}), where each location has at least \textit{h} (=50) POIs in the intersected locations. Across locations, the minimum, median, and maximum numbers of POIs are 14, 55, and 268. The distinct activities and the parent level of distinct activities are estimated in each location using the aggregation radius. The main types of POIs in the city of Trento include building, land-use, amenity, and highway types (see Figure \ref{gra:poi_grid}), and the main human activities extracted from these POIs are mostly related to eating, residential, and work activities (see Figure \ref{gra:act_grid}).
\begin{figure} \centering
\begin{tikzpicture}[
every node/.style={anchor=north east,inner sep=0pt},
x=-2mm, y=-2mm,
]
\node (fig1) at (0,0)
{\includegraphics[scale=0.38]{images/nearby_radius_50.png}};
\node (fig2) at (3,3)
{\includegraphics[scale=0.18]{images/nearby_radius_50_distinct.png}};
\end{tikzpicture}
\caption{POI and the relevant distinct human activity distribution using aggregation radius}
\label{gra:POI Human activity distribution nearby areas}
\end{figure}
\begin{figure*}[htb!]
\centering
\subfigure[Distribution of POI categories across locations of the POI distribution grid, and of locations of the POI distribution grid containing POI categories, aggregation radius=0 ]
{\includegraphics[scale=0.25]{images/key_grid.png}
\label{gra:poi_grid}}
\quad
\subfigure[Distribution of activity categories across locations of the POI distribution grid, and of locations of the POI distribution grid containing activity categories, aggregation radius=0 ]
{\includegraphics[scale=0.25]{images/activity_grid.png}
\label{gra:act_grid}}
\caption{Semantics of POIs and activity distribution over Trento, Italy}
\end{figure*}
Our web application recommends a set of top-\textit{k} human activities for a location and time selected by the user. However, multiple activities with the same priority can occur at the same time. In this experiment, we do not evaluate the order of activities in the prediction, due to the limited granularity of the ground-truth data we use. Instead, we concentrate on a ``fuzzified'' evaluation of the activity prediction, which checks whether at least one of the top-\textit{k} activities predicted by the application is correct. The order of the activity prediction could be further evaluated if finer-grained ground-truth data were available. A user evaluation of the first \textit{k} predictions therefore serves as an initial measure of the accuracy of the model. In any of the above locations, upon selection, the user is prompted with a list of the top-\textit{k} human activities with the highest likelihood for that time of day. The user is then asked to select those activities that he or she has actually performed in that area at that time. The user is also allowed to select an activity not in the top-\textit{k} list from among all the available activities, in case none of the above fits.
We collected user feedback through the web application described above for one week, with 32 participants involved. We then analyzed the collected feedback in order to measure the accuracy of the HRBModel. For the evaluation at the local scale, we took into consideration only the areas with the highest amount of feedback, Downtown and Povo.
To measure the accuracy of the approach, we consider the prediction of an activity correct if the user selected it in the web application, regardless of its position in the ranking. Within the total of 481 items of user feedback collected, if we consider a set of the top-8 activities in the recommendation, the overall percentage of correctly predicted activities is 68\% (\textit{k}=8). Here, we propagated the activity probabilities to their child activities. When we increased the prediction granularity to \textit{k=n}, where \textit{n} is the total number of distinct activities, the overall prediction percentage rose to 93\%. By changing the activity granularity to high-level (parent) human activities, the prediction accuracy increased to 80\% (\textit{k=8}) and 97\% (\textit{k=n}). When we ignore time in estimating the likelihood of human activities, the accuracy percentage is lower, at 57\% (\textit{k=8}), but remains stable at 97\% for \textit{k=n}. This might be because some human activities are tightly related to the location and have nothing to do with time. For example, if there is an open football stadium, people tend to play there at any time they like.
So the overall prediction percentage is affected by how we choose the threshold top-\textit{k} of the prediction granularity. This might be a reason why the prediction percentage, as well as its granularity, can vary at different local scales. However, the user feedback did not have a sufficient distribution over the city, so we selected two geographical areas, the center of the city and Povo, in which we have enough feedback to evaluate the model at different local scales. The center of the city is characterized by different types of activity, while Povo is mostly characterized by studying and research work activities, as our participants are researchers. We estimated the prediction accuracy in the center of the city, where the high-level (parent) activity prediction percentage is 98\% (\textit{k=n}), versus 95\% in Povo. This suggests that there is an attraction in the city center, so participants who share a similar work or study profile go there to perform similar activities. This result matches the finding of Phithakkitnukoon et al.~\cite{Phithakkitnukoon:2010:AMI:1881331.1881336}, which showed that a daily pattern of human activity (a sequence of human activities) strongly correlates with a certain type of geographic area that shares a common characteristic context.
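The ``fuzzified'' top-\textit{k} accuracy used above can be sketched as follows; this is a minimal illustration, and the feedback data layout and function name are hypothetical rather than our actual implementation:

```python
def topk_accuracy(feedback, k):
    """Fraction of feedback items for which at least one activity the user
    actually performed appears among the model's top-k predicted activities."""
    hits = 0
    for item in feedback:
        top_k = item["predicted"][:k]          # activities ranked by likelihood
        if any(act in top_k for act in item["performed"]):
            hits += 1
    return hits / len(feedback)

# toy feedback: the first user's activity is in the top-2 list, the second's is not
feedback = [
    {"predicted": ["eating", "shopping", "working"], "performed": ["shopping"]},
    {"predicted": ["eating", "shopping", "working"], "performed": ["sporting"]},
]
print(topk_accuracy(feedback, 2))  # 0.5
```

Raising \textit{k} to the total number of distinct activities \textit{n} corresponds to the \textit{k=n} setting reported above.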
\subsection{Evaluation through credit card data revealing human economical activities}
We employed user feedback for evaluating the model in the previous section, but its ability to reflect actual human activity is not perfect (note the aforementioned limitations of the user feedback data we collected in Trento city). Here, we introduce another source of data that can serve as a direct proxy for human economic activity, enabling a more sophisticated evaluation of our HRBModel. We use bank card transaction data collected in Barcelona, Spain, recording transactions performed by users at POS terminals. The type of business at every POS terminal (for instance, a department store or a restaurant) can reveal the human activities that can be performed at that location. With this data, we evaluate the performance of our HRBModel in a way similar to the evaluation with user feedback in the previous section.
\begin{figure*}[htb!]
\centering
\subfigure[Density-based POI distribution grid in Barcelona]
{\includegraphics[scale=0.25]{images/Barcelona_grid.png}
\label{gra:Grid_partition_barcelona}}
\quad
\subfigure[(Default configuration) POI and human activity distribution across the locations in Barcelona, aggregation radius=0]
{\includegraphics[scale=0.33]{images/barcelona_action_poi_dist_grid_gaussian.png}
\label{gra:POI_dist_barcelona}}
\caption{POI distribution grid and the POI and human activity distributions in Barcelona}
\end{figure*}
First, we needed to build a density-based POI distribution grid for the city of Barcelona, Spain. For that, we collected 2.6 million POIs extracted from OSM in a rectangular grid with dimensions 252 $\times$ 393, where the unit size of each grid cell is 50 m$^2$. We then re-partitioned the grid using the quad-tree approach to ensure that each location contains $h \in [10,20]$ POIs. The final partitioning is presented in Figure \ref{gra:Grid_partition_barcelona}. In total, 3,853 locations were processed. Using the data coming from OSM, we populated the POIs that were contained in, crossed over, were located on the border of, or intersected each location. In total, 197,289 activity-relevant POIs were extracted across the locations, which was reduced to 53,529 after cleaning and discarding irrelevant POIs (those that do not reflect any human activity). Figure \ref{gra:POI_dist_barcelona} summarizes the POI filtering results compared to the raw POIs. In this case, the number of refined POIs is 5 times lower than the number of raw POIs in each location. The figure also shows the human activities corresponding to the POIs, generated after the raw POI cleaning process. The POI and human activity density distributions across the locations are represented in Figure \ref{gra:POI_dist_barcelona}, where the density curve is skewed in the center. The figure also shows that, across locations, the minimum, median, and maximum number of POIs are 1, $\sim$47, and 545, and the minimum, median, and maximum number of human activities are 1, $\sim$13, and 46. Remote areas contain a small number of POIs and human activities, up to the modes of the density distributions (32 POIs and 12 human activities). In dense areas, the number of POIs and human activities reaches up to 545 and 46, respectively. These POIs and human activities are the representative activities in each location.
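The quad-tree re-partitioning can be sketched as below; the split threshold follows the upper bound of $h$ stated above, while the minimum cell size and the planar coordinate handling are simplifying assumptions:

```python
def quadtree(points, bounds, max_points=20, min_size=50.0):
    """Recursively split a rectangular cell into four quadrants until each
    leaf holds at most max_points POIs or reaches the minimum cell size."""
    x0, y0, x1, y1 = bounds
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) <= max_points or (x1 - x0) <= min_size:
        return [bounds]                      # this cell becomes a location
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    leaves = []
    for quad in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)):
        leaves.extend(quadtree(inside, quad, max_points, min_size))
    return leaves
```

Dense cells are thus split repeatedly, while sparse cells remain large, which matches the density-based grids shown for both cities.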
To populate a sufficient number and diversity of POIs in each location, we consider the nearby areas when estimating the likelihood of human activities. The nearby areas are the locations intersected by the aggregation radius around the centroid of each location. The aggregation radius is configured differently for each location so that the total number of POIs in the intersected locations is above the threshold \textit{h} (see Figure \ref{gra:Barcelona POI Human activity distribution nearby areas}); here each location has at least \textit{h}=50 POIs in the intersected locations. Across locations, the minimum, median, and maximum number of POIs are 18, 55, and 162. The distinct activities and the parent level of distinct activities are estimated in each location using the aggregation radius.
\begin{figure}
\centering
\begin{tikzpicture}[
every node/.style={anchor=north east,inner sep=0pt},
x=-2mm, y=-2mm,
]
\node (fig1) at (0,0)
{\includegraphics[scale=0.38]{images/Barcelona_nearby_radius_50.png}};
\node (fig2) at (3,3)
{\includegraphics[scale=0.18]{images/Barcelona_nearby_radius_50_distinct.png}};
\end{tikzpicture}
\caption{POI and the relevant distinct human activity distribution considering aggregation radius}
\label{gra:Barcelona POI Human activity distribution nearby areas}
\end{figure}
The main types of POI in Barcelona city are highway, building, and route types (see Figure \ref{gra:poi_grid_Barcelona}) and the main human activities extracted from these POIs are eating, residential, and sporting (see Figure \ref{gra:act_grid_Barcelona}).
\begin{figure*}[htb!]
\centering
\subfigure[Distribution of POI categories across locations of the POI distribution grid, and of locations of the POI distribution grid containing POI categories, aggregation radius=0 ]
{\includegraphics[scale=0.25]{images/Barcelona_key_grid.png}
\label{gra:poi_grid_Barcelona}}
\quad
\subfigure[Distribution of activity categories across locations of the POI distribution grid, and of locations of the POI distribution grid containing activity categories, aggregation radius=0 ]
{\includegraphics[scale=0.25]{images/Barcelona_activity_grid.png}
\label{gra:act_grid_Barcelona}}
\caption{Semantics of POIs and activity distribution over Barcelona}
\end{figure*}
We evaluated the model to measure the accuracy of the approach in the same way as for Trento city, but here we selected random POS locations in Barcelona, assuming that each POS generates correct human activities. Every POS is associated with a business category; in total, 76 types of businesses are given for the POSs, such as travel, food, hypermarkets, hotels, real estate, automation, bars and restaurants, personal care, sport and toys, technology, home, content, fashion, leisure, health, transport, etc. Since our evaluation uses both POI and POS data sets, we adjust them to be comparable and avoid a semantic gap between the data sets. For instance, when a customer buys groceries in a supermarket, the POS records that the customer is shopping in a grocery store. The POIs and POSs use different activity type taxonomies, so they need to be mapped to a common taxonomy for comparability. We define consistent activity types for POSs using the human activities described in ActOnto, and we discard POIs and POSs that have no activity type reflected in them, such as ATM use.
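Mapping POS business categories onto the common activity taxonomy can be sketched as a lookup table; the category-to-activity pairs below are illustrative guesses, not the actual ActOnto mapping:

```python
# hypothetical mapping from POS business categories to shared activity types
POS_TO_ACTIVITY = {
    "bars and restaurants": "eating",
    "hypermarkets": "shopping",
    "sport and toys": "sporting",
    "hotels": "accommodation",
}

def common_activity(pos_category):
    """Return the shared activity type, or None for categories that reflect
    no human activity (e.g. ATM use) and are therefore discarded."""
    return POS_TO_ACTIVITY.get(pos_category.lower())

print(common_activity("Bars and Restaurants"))  # eating
print(common_activity("ATM"))                   # None -> discarded
```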
For evaluating the HRBModel, we compared the human activities of a given POS with the set of top-\textit{k} human activities generated by the HRBModel in the same context of POS location and time, using the aggregation radius of nearby areas for estimating the likelihood of human activities, as described in Section \ref{methodology}. We performed this validation by selecting 1,000 random POSs in the city. The overall parent-level activity prediction percentage is 82\% (\textit{k}=8) and 92\% (\textit{k=n}). When we ignore time, the parent-level activity prediction percentage remains stable at 92\% (\textit{k=n}). This again shows that some human activities are tightly related to the location and have nothing to do with time. However, the random selection of POSs is not a sophisticated choice for analyzing the granularity of the prediction task at the city scale. We need to estimate the accuracy at local scales to know where the HRBModel gives better accuracy and good precision.
\subsection{Extended evaluation of the model}
In this section, we consider an extensive validation of the HRBModel in order to estimate whether POI data can be a good proxy for predicting such a specialized type of activity (economic activity) at the local scale. For that purpose, we compare the possible activity distribution predicted by our model for each location across the city with the actual data on the observed spending activity at POSs in that location. This gives us ground-truth information on, for instance, what types of economic activity are occurring in a city. However, the spatial data (i.e., POI and POS data) distribution (see Figure \ref{gra:POI POS distribution Barcelona}) is very uncertain and heterogeneous, and extracting meaningful information from it is a very challenging task. The data can be imprecise and not available in some areas, and its heterogeneity directly influences the confidence regarding how the data is characterized. We performed a city-wide normalization of the POI activity distribution to match that of the POS data, and then matched the different categories of human activities in the POI and POS data, which gives good accuracy and good precision both at the local scale and for different land-use types.
In our analysis, POS data reflects spending activity at a fine-grained scale: economic activities, which are a very important component of spending activity. The POI data, on the other hand, is coarse grained, and each of its points represents a single coarse-grained geographical object, not the objects within it. For example, POS machines are associated with every store in a mall, and the type of POS represents the type of activity performed at the store; in POI data, however, the mall could be represented as a single building of type ``mall'', which does not contain any information about the stores within it. This granularity difference causes a quantitative difference between the categories of activities in the POI and POS data. These differences could introduce a substantial bias in the model performance. To achieve comparable performance between the data sets, which would enable using POI data for predicting economic activity, we first need to remove their overall systematic bias. We introduce a global normalization to make POIs and POSs consistent at the macro level, in order to account for possible heterogeneity in the representativity of different categories of activities in the two data sets.
\begin{figure}[htb!]
\centering
\subfigure[POI distribution in Barcelona]{\includegraphics[scale=0.15]{images/POI.png}
\label{gra:POI_barcelona}}
\quad
\subfigure[POS distribution in Barcelona]{\includegraphics[scale=0.15]{images/POS.png}
\label{gra:POS_barcelona}}
\caption{POI and POS distribution in Barcelona}
\label{gra:POI POS distribution Barcelona}
\end{figure}
We generate the distribution of different human activities in the POI and POS data as depicted in Figure \ref{gra:activity_distribution_POS_POI_new}, where non-economic activities, including educational, outdoor, residential, sporting, and working types of activity, have lower values in both distributions. Economic activity is a special but very important component of spending, and here we hypothesize that POIs can be used to predict economic activity after an appropriate normalization (i.e., $W_{poiNorm}$) for the general data set representativity bias at the city scale. This bias might also be associated with the fact that spending is just one of the components of human activity and not all activities have something to do with spending.
\begin{figure}
\centering
\begin{tikzpicture}[
every node/.style={anchor=north east,inner sep=0pt},
x=-2mm, y=-2mm,
]
\node (fig1) at (0,0)
{\includegraphics[scale=0.38]{images/activity_dist_POI_POS_old_classification.png}};
\end{tikzpicture}
\caption{Overall economic activity distribution in each data set: 1) POI, 2) POS}
\label{gra:activity_distribution_POS_POI_new}
\end{figure}
For each location \textit{l}, we have a certain distribution of POIs among the \textit{n} categories of activity we consider, with relative weights $W_{poi}(a, l)$, and we estimate the likelihood of human activities extracted from POIs starting from the centroid of the location, considering the POIs in all locations within 100 m of that point, as described in Section \ref{methodology}. Here we use a larger radius when there are not enough POIs within the considered circle: we expanded the aggregation radius by 25 m if fewer than 50 POIs and POSs were contained in the intersecting locations, or until the radius reached 3,000 m. This also gives a sufficient number of POIs in remote areas. The aggregation radius selected in each location is presented in Figure \ref{gra:aggregated POI POS}. The resulting location length after the aggregation radius selection is plotted as a cumulative probability in Figure \ref{gra:aggregated location length}; 80\% of the locations are increased to a length of up to 2,000 m. Figure \ref{gra:aggregated POI POS distribution} shows the number of POIs and POSs at each location using the cumulative probability function. Across locations, the minimum, median, and maximum number of POIs are 18, $\sim$93, and 958, and for POSs they are 3, $\sim$55, and 952, respectively.
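The radius expansion rule above (start at 100 m, grow by 25 m while fewer than 50 POIs and POSs are covered, stop at 3,000 m) can be sketched as follows; planar distances are used here for brevity, whereas real coordinates would require a geodesic distance:

```python
def aggregation_radius(centroid, points, h=50, start=100.0, step=25.0, cap=3000.0):
    """Grow the radius around a location centroid until at least h points
    (POIs plus POSs) fall inside the circle, or the cap is reached."""
    cx, cy = centroid
    r = start
    while r < cap:
        covered = sum(1 for (x, y) in points
                      if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2)
        if covered >= h:
            break
        r += step
    return min(r, cap)

# 50 points sitting 120 m away: one expansion step suffices
print(aggregation_radius((0.0, 0.0), [(120.0, 0.0)] * 50))  # 125.0
```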
Before comparing the values of $W_{poi}$ with $W_{pos}$, we first look at the overall distributions (i.e., $W_{poi}(a, c)$ vs $W_{pos}(a, c)$, where $c$ is the entire city). These average relative weights may differ if there exists a systematic bias introduced by the way POSs and POIs are defined (for example, every restaurant may appear among the POIs but only 50\% of restaurants may appear among the POSs, say, because not all of them accept cards, and this situation could change from one category to another). The overall POI and POS distributions across the city are depicted in Figure \ref{gra:activity_distribution_POS_POI_new}, where the global distributions are relatively similar for the activity type labeled eating. So, if this systematic bias exists, it is no surprise that we will also see a difference for any particular location, which would reduce our reported accuracy.
In order to account for this bias we consider the following normalization:
\\$W_{poiNorm}(a,l)=\frac{W_{poi}(a,l) \cdot W_{pos}(a,c)}{W_{poi}(a,c)}$, and for each location we compare $W_{pos}(a,l)$ vs $W_{poiNorm}(a,l)$ (which is already normalized for the possible systematic bias between the POS and POI distributions) instead of $W_{poi}(a,l)$. In each location, the bias between $W_{pos}(a,l)$ and $W_{poiNorm}(a,l)$ is measured by the Hellinger distance \cite{Sengstock:2011:ECG:2093973.2094017,nikulin2002hellinger,Sengstock:2013:PMS:2525314.2525353}, which quantifies the similarity between two probability distributions and lies between 0 and 1. We therefore refer to it as the estimate error between the two distributions: \\
$HD(W_{poiNorm}, W_{pos}) = \frac{1}{\sqrt{2}} \; \sqrt{\sum_{i=1}^{n} (\sqrt{W_{poiNorm}(i,l)} - \sqrt{W_{pos}(i,l)})^2}$, where \textit{i} ranges over the \textit{n} activity categories in each location.
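The normalization and the Hellinger-distance comparison can be sketched as follows; the final re-scaling of the normalized weights so that they again sum to one is our assumption, since the text above leaves it implicit:

```python
import math

def normalize_poi(w_poi_l, w_poi_c, w_pos_c):
    """W_poiNorm(a,l) = W_poi(a,l) * W_pos(a,c) / W_poi(a,c),
    re-scaled to sum to one (an assumption) so it stays a distribution."""
    w = {a: w_poi_l[a] * w_pos_c[a] / w_poi_c[a] for a in w_poi_l}
    total = sum(w.values())
    return {a: v / total for a, v in w.items()}

def hellinger(p, q):
    """Hellinger distance between two discrete activity distributions."""
    return math.sqrt(sum((math.sqrt(p[a]) - math.sqrt(q[a])) ** 2
                         for a in p)) / math.sqrt(2)

# identical distributions give a zero estimate error
p = {"eating": 0.5, "shopping": 0.5}
print(hellinger(p, p))  # 0.0
```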
\begin{figure*}[htb!]
\centering
\subfigure[Variation of aggregation radius in each location]{
\resizebox{6cm}{!}{
\begin{tikzpicture}[
every node/.style={anchor=north east,inner sep=0pt},
x=-2mm, y=-2mm,
]
\node (fig1) at (0,0)
{\includegraphics[scale=0.28]{images/Barcelona_POI_POS_rate.png}\label{gra:aggregated POI POS}};
\node (fig2) at (3,3)
{\includegraphics[scale=0.13]{images/Barcelona_radius.png}};
\end{tikzpicture}
}
}
\quad
\subfigure[Variation of the location length after the aggregation]
{\includegraphics[scale=0.27]{images/Barcelona_location_size.png}\label{gra:aggregated location length}}
\caption{Variation of the aggregation radius and location length across locations in Barcelona}
\end{figure*}
\begin{figure*}[htb!]
\centering
\subfigure[POI and POS distribution across the location in the aggregation radius]
{\includegraphics[scale=0.28]{images/gaussian_POI_POS.png}\label{gra:aggregated POI POS distribution}}
\quad
\subfigure[The POI and POS normalized estimate error vs local estimate error]
{\includegraphics[scale=0.28]{images/normal_density_estimate_errorHBOnto_radiusUP.png}\label{gra:error_bias}
}
\caption{POI and POS distributions within the aggregation radius and the corresponding estimate errors}
\end{figure*}
The estimate error between the distributions of activities in the POI and POS data at a given location is called the \textit{local estimate error}; the estimate error between the distributions of activities in $POI_{norm}$ and $POS$ at a given location is called the \textit{POI normalized estimate error}; and the estimate error between the distributions of activities in $POI$ and $POS_{norm}$ at a given location is called the \textit{POS normalized estimate error}. The density distributions of the local estimate error and the POI and POS normalized estimate errors across all locations within the city are described by a logistic distribution fit, as depicted in Figure \ref{gra:error_bias}. The peak (mode) of the POI normalized estimate error gives the smallest error values compared to the local and POS normalized estimate errors. The mode of the POI normalized estimate error density falls at 0.18, which represents an 82\% match in accuracy between the POI and POS data. The POS normalized estimate error gives a similar distribution, but the mode of that distribution is lower. The mode of the local estimate error density, on the other hand, falls at 0.40, which is around 60\% accuracy for the POI and POS match. So if we make the two distributions comparable by de-biasing them globally, the estimate error is reduced by approximately 22\%, which quantifies the difference between the POI and POS data sets due to the general data set representativity bias at the city scale (which might also be associated with the fact that spending is just one of the components of human activity and that not all activities involve spending). From these results, we note that POI data can be used as a proxy to predict human activities after appropriate normalization.
We will show that, once the overall city normalization is performed, the two data sets start to give comparable estimates for human activity at the local scale: this means that one needs to know only the overall bias at the city scale, and then POI data can be applied to predict actual human activities. As POS data reveals actual human economic activity, we propose to use POIs to estimate economic activity where POS data is not available.
\begin{figure*}[htb!]
\centering
\subfigure[The number of POIs in each random location of different land-use types]
{\includegraphics[scale=0.27]{images/land_use_POI.png}
\label{gra:error range landuse POI}}
\quad
\subfigure[The number of POSs in each random location of different land-use types]
{\includegraphics[scale=0.27]{images/land_use_POS.png}
\label{gra:error range landuse POS}}
\caption{The number of POI and POS in each random location selected in different land-use types}
\end{figure*}
\begin{figure*}[htb!]
\centering
\subfigure[Variation of the aggregation radius selected for random locations in different land-use types]{
\resizebox{6cm}{!}{
\begin{tikzpicture}[
every node/.style={anchor=south east,inner sep=0pt},
x=-2mm, y=2mm,
]
\node (fig1) at (0,0)
{\includegraphics[scale=0.29]{images/land_use_radius.png}\label{gra:error range landuse aggregation radius}};
\node (fig2) at (3,3)
{\includegraphics[scale=0.14]{images/land_use_aggregation.png}};
\end{tikzpicture}
}
}
\subfigure[Variation of the overall estimated error of economic activity prediction with respect to the POI normalized and actual POS activities]
{\includegraphics[scale=0.27]{images/land_use_error_filter.png}\label{gra:error range landuse}}
\caption{Aggregation radius and estimate error of economic activity prediction in different land-use types}
\end{figure*}
\begin{figure*}[htb!]
\centering
\subfigure[Differences in predicted levels of different types of activity across various locations vs observed activity levels in dense (the center of the city) land-use based on POS data]
{\includegraphics[scale=0.27]{images/land_use_dense.png}
\label{gra:error_bias_activity dense}}
\quad
\subfigure[Differences in predicted levels of different types of activity across various locations vs observed activity levels in railway land-use based on POS data]
{\includegraphics[scale=0.27]{images/land_use_railway.png}
\label{gra:error_bias_activity railway}}
\caption{POI normalized percentage difference among activity categories in dense and railway land-use areas}
\end{figure*}
As seen above, the estimate error distribution is of Gaussian type: in the left tail the prediction percentage is better (around 84--95\%), while the right tail is heavy, going down to 20\%. This could be an effect of land-use types, where the distributions of activities in the POI and POS data match well in some types of land areas and not in others. For instance, since POS data concerns economic-type activity, it is expected to match strongly in railway, commercial, and dense areas. Therefore, we investigate the POI normalized estimate error distribution in different land-use types, as referenced by OSM: industrial, recreational, commercial, railway, retail, residential, and the central part of the city. We randomly selected up to 100 locations in each land-use type, chosen to be consistent with the previous work in \cite{Dashdorj:2014:HAR:2675316.2675321}. We populated a sufficient number of POIs and POSs in each location (see Figures \ref{gra:error range landuse POI} and \ref{gra:error range landuse POS}) using the aggregation radius (see Figure \ref{gra:error range landuse aggregation radius}). We excluded outlier locations whose aggregation radius was more than 1,000 m, which removed 9\% of the randomly selected locations.
The POI normalized estimate error distribution in those land-use types is shown in Figure \ref{gra:error range landuse}, where the railway, commercial, and dense areas show lower error values, with accuracies of 81--90\%: this is to be expected, since we validated the POI data against the POS data, which is mostly related to shops. In the railway areas we obtained the lowest error values, with an accuracy of 88--90\%. In the dense area (the center of the city), the POI normalized estimate error corresponds to an accuracy of 81--84\%, and the error distribution is significantly heavy tailed, as the area hosts a wide range of human activities. The commercial land-use is also well described, with an accuracy of 86\%, while the industrial and recreational land-use types have accuracies below 72\%. In order to estimate the sensitivity of the error values among the different categories of activities in those local distributions of POI and POS data, for instance in dense and railway areas, we estimated the POI normalized percentage difference, which is an error between $W_{poiNorm}(a,l)$ and $W_{pos}(a,l)$ among the different categories of activities. The percentage difference is the difference between two values divided by their sum: $PD(W_{poiNorm}(a,l), W_{pos}(a,l)) = \frac{|W_{poiNorm}(a,l)-W_{pos}(a,l)|}{W_{poiNorm}(a,l)+W_{pos}(a,l)}$. Figures \ref{gra:error_bias_activity dense} and \ref{gra:error_bias_activity railway} show the POI normalized percentage difference among the different categories of activities in dense and railway areas. In dense areas, the shopping activity has the lowest error (around 11--17\%) between the POI normalized and POS distributions, as shops are mostly located in the center of the city. In railway areas, too, the shopping activity has the lowest error (around 3--7\%), as we evaluated economic-type activities from POIs and POSs.
The health and medical activity has an error value of almost 100\%, as actions of this type are not usually described in POS data. While the analysis suffers from a lack of collected POI and POS information, the model applicability error can still be meaningful for estimating the data loss. Adjusting the representativity of the data sets through global normalization at the city scale allows us to estimate how well POI data can predict human activity, and the result was validated across land-use types (e.g., railway and dense land-uses).
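The percentage difference per activity category can be computed as below (toy weights; note how a category that is present in the POI normalized distribution but absent from POS, such as health, yields a difference of 100\%):

```python
def percentage_difference(w_poi_norm, w_pos):
    """PD(a,l) = |W_poiNorm(a,l) - W_pos(a,l)| / (W_poiNorm(a,l) + W_pos(a,l))."""
    return {a: abs(w_poi_norm[a] - w_pos[a]) / (w_poi_norm[a] + w_pos[a])
            for a in w_poi_norm}

pd = percentage_difference({"shopping": 0.30, "health": 0.02},
                           {"shopping": 0.25, "health": 0.00})
print(pd["health"])  # 1.0 -- the category is missing from the POS side
```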
\section{Possible developments (extensions) }\label{usage of the model}
In the previous sections we presented the HRBModel, which infers human activities by extracting concepts and their relations from open geographical data. In this section, we briefly explain how the correlation between context and human behavior encoded in the HRBModel can be exploited to infer human behavior from CDR data. To this purpose, we enrich CDRs with human activities, which are derived from our model through location and time.
\begin{figure}[htb!]
\centering
\subfigure[regular coverage]{\includegraphics[scale=0.12]{images/Gardolo.png}
\label{gra:regular}}
\quad
\subfigure[irregular coverage]{\includegraphics[scale=0.12]{images/Cognola.png}
\label{gra:irregular}}
\caption{Example of the two extreme cases of cell coverage area}
\label{gra:coverage-areas trento}
\end{figure}
As mentioned earlier, the main contribution of this article is to use background knowledge regarding the human activities that could happen in the context where mobile network events occur \cite{Zolzaya:DC:2013,Zolzaya:Poster:2013}. The most common mobile network antennas are GSM 900 and GSM 1800. Each antenna covers a certain area to support mobile communication. Sometimes, cell coverage areas are not precisely defined, and they can be temporarily modified depending on the estimated call traffic from/to an area. Usually, the size of the coverage area is inversely proportional to the density of the population inhabiting the area; however, there is an upper bound on the coverage area size, about 35 km, due to the physical limits of the antennas. Concerning the shape of the cell coverage areas, one can observe that
when the territory is regular (i.e., flat, with no mountains or other natural irregularities), the shape of cells can be approximated with convex polygons, but in the presence of irregular territory with many mountains and valleys, the region associated with a cell can be very irregular and possibly disconnected. An example of these two extreme cases is shown in Figure \ref{gra:coverage-areas trento}. Regular coverage is a contiguous area, while irregular coverage forms a less distinct area with many holes and irregularities, which makes the task of estimating the calling location particularly difficult. The HRBModel can provide a qualitative description of the human activities, associated with a likelihood measure, for each cell coverage area. For example, the cell coverage area of Gardolo in Trento city is characterized by a ranked list of the possible human activities that could happen in that area, associated with their likelihoods, as illustrated in Figure \ref{gra:experimental app}. In this case, travel by transport, outdoor activity, and eating are the highest ranked activities.
\begin{figure*}
\centering
\includegraphics[scale=0.25]{images/cognola_actions.png}
\caption{Visualization of the ranked activities in the cell coverage area of Gardolo, Trento}
\label{gra:experimental app}
\end{figure*}
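The ranking step just described can be sketched in a few lines of code; the activity labels and likelihood values below are hypothetical illustrations rather than actual HRBModel output.

```python
def rank_activities(likelihoods):
    """Return (activity, likelihood) pairs sorted from most to least likely."""
    return sorted(likelihoods.items(), key=lambda item: item[1], reverse=True)

# Hypothetical likelihoods for a cell coverage area such as Gardolo:
gardolo = {
    "travel by transport": 0.32,
    "outdoor activity": 0.27,
    "eating": 0.18,
    "shopping": 0.13,
    "education": 0.10,
}
ranked = rank_activities(gardolo)  # highest-likelihood activities first
```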
Contextually enriched CDRs can be exploited to analyze and evaluate call patterns\footnote{A call frequency pattern is a quantitative model that describes the behavior of communication in a certain class of contexts.} (i.e., human mobility, communication, and interaction patterns) associated with human activities, revealing existing relationships between human behaviors and the CDR stream. This can be done by combining the HRBModel with the bottom-up pattern-extraction approaches used in the state of the art. The qualitative and quantitative properties of semantically enriched CDRs allow us to generate training data with a certain accuracy that, together with other behavioral data coming from bank card transactions, social networks, and so on, can be used as ground-truth information for training supervised learning approaches for automatic activity inference in the area of mobile phone data analysis.
Like many factors that influence human behavior, the CDRs related to such behavior vary across situations. For example, a behavioral pattern of attending a football match may differ on a rainy day versus a sunny day, or in a central area versus a remote area. Another contextual factor that influences human behavior is a social event. For instance, human behavior is different during national or local holidays: while some people work during national holidays, others may have vacation plans. This concept was successfully applied in \cite{D4D:2013}, where we created an event repository that collected data from a city in Ivory Coast in order to analyze CDRs for the identification and characterization of human behavior patterns in the city. Events are thus another key element that can be used to enrich the background knowledge of mobile phone data records.
The HRBModel is therefore useful for filling in missing data in spatial data sources, where such information is unavailable, and for measuring the noise, scalability, and accuracy across spatial data sources. For example, if the economic types of activity generated from bank card transaction data are not sufficient, we could combine OSM data with bank card transaction data in order to better study economic, social, and emerging types of activity~\cite{RePEc:sae:urbstu:v:49:y:2012:i:7:p:1471-1488, D4D:2013,DBLP:journals/corr/SobolevskySGCHAR14,sobolevsky2014money} at the macro level. The most directly applicable scenarios include, at the finest scale, the semantic enrichment of cell coverage areas, trajectories, and behavioral patterns (normal or exceptional), and at the broad scale, the semantic interpretation of CDR traffic data oriented toward smart (intelligent) city research, yielding a better understanding of real-life cases and, as a result, a better classification of human behaviors in a region or territory.
The HRBModel can be extended with the GeoSPARQL vocabulary\footnote{http://geosparql.org/} in order to represent and query geospatial linked data, such as geo-objects and geo-actions, on the Semantic Web. Geospatial RDF/OWL data can support both quantitative spatial reasoning on the Semantic Web and querying with the SPARQL database query language. Further, we can enrich the HRBModel with other contextual information such as weather information, events, and other types of linked objects and data. This possibility of extending the model further helps us to refine human behavior recognition by considering more complex and complete contextual information (via linked open data) and by employing real-time inference techniques, some of which could be learned and decoupled from the pre-processing of features from the raw mobile phone data.
\section{Conclusion}\label{conclusion}
Even without taking a person's environment into account, many factors influence collective human behavior: the geographical information system is certainly only one of them. We proposed a model able to enrich mobile phone data records with the context of human activity. The qualitative terms of the human activities can be used to label the data in order to train a statistical model for automatic human behavior recognition tasks given heterogeneous and uncertain CDR data. By leveraging data from Web 2.0 as a source of contextual information for mobile data records, we studied the possibility of inferring human activities with a certain accuracy. Our model, the HRBModel, is a combination of an ontological and a stochastic model for predicting a set of human activities, associated with a likelihood measure, in a given context of mobile phone data records. Our study demonstrated that the semantic expressiveness of human-environment relationships can be investigated, in principle, based on diverse geographical information systems. The evaluation of the model was performed in two different cities, and it demonstrates the applicability and scalability of the model under the heterogeneity and uncertainty of spatial data sources such as POI and POS data. Furthermore, the model takes various types of spatial and temporal factors into account. Based on our evaluations in two different cities, the prediction level depends to some extent on the granularity of the activity being predicted. The analysis shows that the level of activity prediction from the POI data is significant (around 84--95\%). As we concentrated the model evaluation on a specific economic type of activity using point-of-sale bank card spending data, we validated that, within areas of different land-use types, POI data is a good proxy for predicting those economic activities with 81--90\% accuracy in dense, railway, and commercial areas.
In this way, the context of mobile network records can be used to deepen context-aware human activity recognition from mobile network events. A number of challenges remain for future research and deeper investigation of human and environmental dynamics, such as behavior recognition and classification from semantically enriched CDRs.
\section{Acknowledgements}\label{acknowledgements}
The authors would like to thank the Telecom Italia Semantic Knowledge Innovation Lab (SKIL) and Banco Bilbao Vizcaya Argentaria (BBVA) for providing the datasets for this research. The authors further thank the Fondazione Bruno Kessler Data Knowledge Management Unit (FBK-DKM) as well as Ericsson, the MIT SMART Program, the Center for Complex Engineering Systems (CCES) at KACST and MIT CCES program, the National Science Foundation, the MIT Portugal Program, the AT\&T Foundation, Audi Volkswagen, BBVA, The Coca Cola Company, Expo 2015, Ferrovial, Liberty Mutual, The Regional Municipality of Wood Buffalo, UBER and all the members of the MIT SENSEable City Lab Consortium for supporting the research.
\usepackage{setspace}
\usepackage{floatflt}
\usepackage{times}
\usepackage{authblk}
\newtheorem{mydef}{Definition}
\usepackage{mwe,tikz}
\usepackage[percent]{overpic}
\usepackage{tikz}
\usepackage{tkz-tab}
\usepackage[pdfborder={0 0 0}]{hyperref}
\providecommand{\keywords}[1]{\textbf{\textit{Index terms---}} #1}
\begin{document}
\title{Semantic Enrichment of Mobile Phone Data Records Using Background Knowledge}
\author[1,2,3,4]{Zolzaya Dashdorj}
\author[2]{Stanislav Sobolevsky}
\author[3]{Luciano Serafini}
\author[4]{Fabrizio Antonelli}
\author[2]{Carlo Ratti}
\affil[1]{University of Trento, Italy Via Sommarive, 9 Povo, TN, Italy}
\affil[ ]{\url{dashdorj@disi.unitn.it}}
\affil[2]{Massachusetts Institute of Technology, MIT 77 Massachusetts Avenue Cambridge, MA, USA}
\affil[ ]{\url{stanly,ratti@mit.edu}}
\affil[3]{Fondazione Bruno Kessler, Via Sommarive 18 Povo, TN, Italy}
\affil[ ]{\url{serafini@fbk.eu}}
\affil[4]{SKIL LAB – Telecom Italia, Italy Via Sommarive 18 Povo, TN, Italy}
\affil[ ]{\url{fabrizio.antonelli@telecomitalia.it}}
\date{}
\maketitle
\input{abstract}
\input{introduction}
\bibliographystyle{spbasic}
%\bibliographystyle{spmpsci}
%\bibliographystyle{spphys}
\section{Introduction and Program Structure}
{KK}MC-hh\cite{kkmchh} is a precision MC generator for Z production and decay in high-energy
proton collisions. It is based on {KK}MC,\cite{Jadach:1999vf} which was originally developed for
precision Z boson phenomenology in $e^+e^-$ collisions, including exponentiated
multiple photon effects: \hbox{$e^+e^-\rightarrow Z\rightarrow f{\overline f} + n\gamma$}
including exact ${\cal O}(\alpha)$ and ${\cal O}(\alpha^2L)$ initial-state
radiation (ISR), final-state radiation (FSR), and initial-final interference
(IFI). ($L$ is an appropriate ``big logarithm'' for the process, $\ln(p^2/m^2)$
for a relevant mass.)
Order $\alpha$ electro-weak matrix element corrections are included via
an independent DIZET module, originally version 6.21\cite{zfitter6:1999},
but recently upgraded to
version 6.45\cite{Arbuzov:2005ma,Arbuzov:2020}. Collision energies up to 1 TeV are supported.
The LEP2 precision
tag was $0.2\%$. Beginning with version 4.22, {KK}MC also includes support for
parton-level collisions of quarks.\cite{Jadach:2013aha}
An adaptive MC, FOAM,\cite{FOAM} underlies the low-level event generation.
The FOAM grid is set up during an initial exploratory
phase, creating a crude MC distribution that includes the PDF factors and a
crude YFS form-factor for the ISR photon radiation.
{KK}MC generates multiple-photon radiation using one of two modes of resummation.
EEX mode (exclusive exponentiation) is based on YFS soft photon resummation,\cite{yfs}
implemented at the cross-section level, while CEEX mode (coherent exclusive
exponentiation)\cite{Jadach:2000ir} is an amplitude-level adaptation of YFS
resummation. IFI enters
naturally when an amplitude including exponentiated ISR and FSR factors is
squared. See Ref. \cite{JadachYost2019} for a recent study of IFI in the CEEX framework.
The events generated by {KK}MC-hh may be showered externally by exporting them in an
LHE-format event file,\cite{lhe-format} or by running a built-in HERWIG6.5\cite{HERWIG} shower.
\section{Photonic Radiative Corrections}
{KK}MC-hh implements an ab initio calculation of photon ISR at the Feynman-diagram level, with exponentiation. This is in contrast to the more traditional approach of factorizing the collinear ISR into the parton distribution functions.
A QED-corrected PDF can account for the shift in quark momentum due to photonic
ISR to the extent that the observable is sufficiently inclusive to average
over any transverse momentum. In particular, it is reasonable to expect that
{KK}MC-hh should show good agreement with a QED-corrected PDF for distributions
such as the invariant-mass distribution of the final leptons, in the absence
of individual lepton cuts. However, {KK}MC-hh can go beyond the PDF approximation and account
for the transverse momentum effects from the ISR that would be missed in a strictly
collinear representation that effectively confines the ISR to the protons.
Fig. 1(a) shows the ratio of the invariant mass ($M_{ll}$) distribution
with ISR turned on in {KK}MC-hh to the distribution with ISR off (red), compared
to the same ratio calculated with ISR off in {KK}MC-hh but using the LuxQED version of
NNPDF3.1 instead of the standard version (blue). The results are from a $10^9$-event muon sample
generated without a QCD shower or FSR and with no cuts on the muon momenta,
using current quark masses from the PDG,\cite{PDG2018} for 8 TeV proton CM energy.
Both methods of accounting for ISR lead to
a shift of about $-0.5\%$, and the distributions largely overlap to the
statistical limits of the sample, suggesting a high degree of consistency
between the two approaches in a case where agreement is expected. The shift in the total
cross section for $60\ {\rm GeV} < M_{ll} < 120\ {\rm GeV}$ was $-0.524\pm 0.004\%$ when turning
on ISR in {KK}MC-hh, and $-0.624\pm 0.002\%$ when switching to the NNPDF3.1-LuxQED, so the
two ways of accounting for ISR give cross sections in agreement to $0.1\%$. The shift in the
average $M_{ll}$ was $-0.10\pm 0.27\ {\rm MeV}$ when turning on ISR in {KK}MC-hh, and $-0.40
\pm 0.27\ {\rm MeV}$ when switching to a LuxQED PDF instead.
Fig. 1(b) shows a similar comparison for the rapidity distribution $Y_{ll}$
for the same events. Both {KK}MC and the LuxQED version of NNPDF3.1 lead to
a shift in the rapidity distribution on the order of $-0.5\%$, but the
shape is somewhat different, crossing at $Y=2.5$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{histMllRatioALL}
\includegraphics[width=0.49\textwidth]{histYllRatioALL}
\caption{Comparison of the ratios of the invariant mass ($M_{ll}$) and rapidity ($Y_{ll}$)
distributions for the final lepton pair with and without ISR corrections added using {KK}MC-hh
(red) or switching to the LuxQED version of NNPDF3.1 (blue).}
\end{figure}
Angular distributions are of particular interest in the context of the measurement of the
electroweak mixing angle at the LHC, in which {KK}MC-hh is participating together with
other programs combining hadronic and electroweak effects, including POWHEG-EW\cite{PowhegEW},
MC-SANC\cite{MCsanc}, ZGRAD2\cite{ZGRAD2}, and HORACE\cite{HORACE}.
The Collins-Soper angle\cite{CollinsSoper} is the scattering angle in the CM frame of the final
lepton pair, given by
\begin{equation}
\cos(\theta_{\rm CS}) = {\rm sgn}(P_{ll}^z)
\frac{p_l^+ p_{\overline l}^- - p_l^- p_{\overline l}^+}{
\sqrt{P_{ll}^2 P_{ll}^+ P_{ll}^-}}
\end{equation}
neglecting masses,
with $P_{ll} = p_l + p_{\overline l}$ and $p^{\pm} = p^0 \pm p^z$. The initial-final interference
contribution is of particular interest for the angular distribution, since it is strongly
dependent on the scattering angle.
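As an illustration, the definition above translates directly into code. The following sketch (an illustration only, not code taken from {KK}MC-hh) assumes massless leptons with four-momenta given as $(E, p_x, p_y, p_z)$ tuples.

```python
import math

def cos_theta_cs(p_l, p_lbar):
    """Collins-Soper angle cosine from the definition in the text.

    Four-momenta are (E, px, py, pz); lepton masses are neglected.
    """
    lp, lm = p_l[0] + p_l[3], p_l[0] - p_l[3]          # p^+/- = p^0 +/- p^z
    bp, bm = p_lbar[0] + p_lbar[3], p_lbar[0] - p_lbar[3]
    P = [a + b for a, b in zip(p_l, p_lbar)]            # P_ll = p_l + p_lbar
    Pp, Pm = P[0] + P[3], P[0] - P[3]
    P2 = P[0]**2 - P[1]**2 - P[2]**2 - P[3]**2          # dilepton invariant mass^2
    sign = 1.0 if P[3] >= 0.0 else -1.0                 # sgn(P_ll^z)
    return sign * (lp * bm - lm * bp) / math.sqrt(P2 * Pp * Pm)
```

Purely longitudinal lepton momenta give $|\cos\theta_{\rm CS}| = 1$, while a transversely symmetric pair gives $\cos\theta_{\rm CS} = 0$, as expected of a CM-frame scattering angle.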
Fig. 2 shows the CS angle distribution generated in a
{KK}MC-hh run with $9\times 10^9$ muon events at an 8 TeV CM energy, using NNPDF3.1
PDFs\cite{nnpdf3.1}, and without additional fermion cuts.
Similar results are described in detail in Ref.\cite{KKMChhAFB}.
The figure shows the full {KK}MC-hh $\cos(\theta_{\rm CS})$ distribution (green)
together with a version with IFI off (red), a version with both ISR and IFI off (black), and a
version with IFI off but with ISR effects included by instead using a LuxQED\cite{LUXqed}
version of NNPDF3.1, NNPDF3.1-LuxQED\cite{nnpdfluxqed}
(blue). The shift due to ISR is $-0.4\%$ for both ways of accounting for ISR, to within
$\pm 0.1\%$. The IFI correction is less than $10^{-4}$, but this effect
increases for less inclusive cuts, as in the binned results for the forward-backward asymmetry
shown below.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{CosThUncut.eps}
\includegraphics[width=0.49\textwidth]{CosThUncutRatios.eps}
\caption{The Collins-Soper angle distribution with full {KK}MC-hh (green) compared to versions
without IFI (red), without IFI and ISR (black), and without IFI but including ISR effects via a
LuxQED version of the PDFs (blue). The graph on the right shows ratios with respect to a
baseline result including only FSR corrections.
}
\end{figure}
The effect of the photonic corrections on the forward-backward asymmetry $A_{\rm FB}$ is
shown in Fig. 3 as a function of $M_{ll}$ and Fig. 4 as a function of $Y_{ll}$. The IFI
effect becomes much more pronounced in these binned results, while ISR is less
significant, at least for cuts close to $M_{ll} = M_Z$. The right-hand graph of Fig. 3 shows that
IFI has an effect (green) on the order of $0.1\%$ with a strong dependence on $M_{ll}$ in the
vicinity of $M_{ll} = M_Z$. The {KK}MC-hh ISR effect is near zero and flat in the vicinity of
$M_Z$, but with large statistical errors for larger $M_{ll}$. The ISR effect obtained by switching
from NNPDF3.1 to NNPDF3.1-LuxQED, however, shows a contribution small at $M_Z$ but larger
elsewhere, and with a pronounced slope. ISR has negligible effect in Fig. 4,
but IFI is strongly enhanced at high rapidities.
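For reference, the binned asymmetries above can be computed from event samples by simple counting. The following sketch assumes the standard unweighted definition $A_{\rm FB} = (N_F - N_B)/(N_F + N_B)$, with forward meaning $\cos\theta_{\rm CS} > 0$.

```python
def forward_backward_asymmetry(cos_theta_cs_values):
    """A_FB = (N_F - N_B) / (N_F + N_B), forward meaning cos(theta_CS) > 0.

    A sketch assuming the standard unweighted event-counting definition.
    """
    n_f = sum(1 for c in cos_theta_cs_values if c > 0.0)  # forward events
    n_b = sum(1 for c in cos_theta_cs_values if c < 0.0)  # backward events
    return (n_f - n_b) / (n_f + n_b)
```

In practice the events would first be binned in $M_{ll}$ or $Y_{ll}$ and the asymmetry evaluated bin by bin, as in Figs. 3 and 4.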
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{AFBM.eps}
\includegraphics[width=0.49\textwidth]{AFBMdifs.eps}
\caption{The forward-backward asymmetry $A_{\rm FB}$ for full {KK}MC-hh (green), as a function
of dilepton invariant mass $M_{ll}$, compared to
versions without IFI (red), without IFI and ISR (black), and without IFI but including ISR
effects via a LuxQED version of the PDFs (blue). The graph on the right shows differences
with respect to a baseline calculation with only FSR photonic corrections.
}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{AFBY.eps}
\includegraphics[width=0.49\textwidth]{AFBYdifs.eps}
\caption{The forward-backward asymmetry $A_{\rm FB}$ for full {KK}MC-hh (green) as a function
of dilepton rapidity $Y_{ll}$, compared to
versions without IFI (red), without IFI and ISR (black), and without IFI but including ISR
effects via a LuxQED version of the PDFs (blue). The graph on the right shows differences
with respect to a baseline calculation with only FSR photonic corrections.
}
\end{figure}
\section{Summary and Outlook}
{KK}MC-hh provides a precise tool for calculating exponentiated photonic
corrections to hadron scattering. In particular, it can calculate
contributions of ISR and IFI to the forward-backward asymmetry,
which will be useful for determining the electroweak mixing angle from LHC data.
{KK}MC-hh is particularly well suited to evaluating IFI due to its CEEX
exponentiation.
While showered results were not presented in this note, they are possible both with an internal
HERWIG6.5 shower, and by exporting events and applying an external shower. This will allow
addressing NLO QCD effects as well. There has also been progress on a version of {KK}MC-hh
which can be run to add electroweak corrections to previously-generated hadronic events.
{KK}MC-hh is presently being transcoded entirely to C++.
This will facilitate compiling it with a current hadronic generator such as
Herwig 7\cite{Herwig7} or KrkNLO\cite{krknlo}.
\section{Acknowledgments}
S. Yost acknowledges support of The Citadel Foundation and the Institute of Nuclear Physics,
IFJ-PAN, Krak{\'o}w, Poland, which provided computing resources for this project. S. Jadach
acknowledges support of National Science Centre, Poland Grant No. 2019/34/E/ST2/00457.
Z. W{\c a}s was supported in part by the Polish National Science Centre under
decision DEC-2017/27/B/ST2/01391.
\section{Introduction}
In the past, elementary particle theory was based on the assumption that nature must be described by a renormalizable quantum field theory. However, dramatic progress in the realm of critical phenomena revealed that any non-renormalizable theory will look as if it were renormalizable at sufficiently low energies. From this perspective, renormalizable theories correspond to a subset of RG flow trajectories for which all but a few of the couplings vanish. In recent years it has become increasingly apparent that renormalizability is not a fundamental physical requirement, and any realistic quantum field theory may contain non-renormalizable as well as renormalizable interactions. In fact, the experimental success of the Standard Model merely imposes a constraint on the characteristic energy scale of any non-renormalizable interaction. In particular, it remains unclear what fundamental principle selects, from the infinite number of possible theories, the trajectory corresponding to the real world.
One of the admissible ways to address this problem is to demand that the Hamiltonian of the theory belong to a trajectory which terminates at a fixed point in the UV. Theories with this property are called asymptotically safe. This concept was originally introduced by Weinberg \cite{Weinberg:1976xy,Weinberg:1979}
as an approach to select physically consistent quantum field theories (QFTs). It provided an alternative
to the standard requirement of renormalizability. The original argument was motivated by an attempt to avoid hitting the Landau pole along the RG flow by demanding that the flow of a QFT terminate at a UV fixed point. Asymptotically safe theories are not necessarily renormalizable in the usual sense. However, they are interacting QFTs with no unphysical singularities at high energies.
Just as the requirement of renormalizability leaves only a few possible interaction terms
which one is allowed to include in the Lagrangian, so the requirement
of asymptotic safety imposes an infinite number of constraints, limiting the number
of physically acceptable theories. Indeed, a UV fixed point in general has only a finite number of relevant deformations; therefore any asymptotically safe field theory is entirely specified by a finite number of parameters. Some such theories are represented by a renormalizable field theory at low energies, while others might be non-renormalizable.
Thus, for instance, a scalar field theory respecting the $\phi\to-\phi$ symmetry admits no renormalizable interaction in five dimensions, yet in higher dimensions it exhibits a non-trivial UV fixed point with just two relevant deformations, and therefore it provides an example of an asymptotically safe non-renormalizable quantum field theory \cite{Weinberg:1976xy}. An immediate application of asymptotic safety
is related to the problem of quantum gravity
\cite{Weinberg:1976xy,Weinberg:1979,Christensen:1978sc,Gastmans:1977ad}
which is ongoing
(see, \textit{e.g.}, \cite{Eichhorn:2017egq} and references therein for recent work).
Several techniques for studying UV properties of QFTs in particular and their RG flows
in general have been developed.
One of the approaches is to perform a dimensional continuation, and subsequently
apply a perturbative expansion around the desired integer-valued physical space-time
dimension.
This is done with the hope that the result
will have sufficiently accurate convergence behaviour to be applied to physically meaningful space-time
dimensions, such as $d=3,4,5$. In this spirit the Wilson-Fisher fixed point has originally
been derived in $4-\epsilon$ dimensions, in which case at the fixed point the coupling
constant is perturbative in $\epsilon$ \cite{Wilson:1971dc}.
In the context of asymptotic safety, gravity has been studied in
$2+\epsilon$ dimensions, see \cite{Weinberg:1979,Christensen:1978sc,Gastmans:1977ad}
for some of the early works.
Recently the $O(N)$ vector model of the scalar fields $\phi^a$, $a=1,\dots,N$, and $\sigma$
with the relevant cubic interactions $\phi^a\phi^a \sigma$ and $\sigma^3$ in $d=6-\epsilon$ dimensions
has been extensively studied perturbatively in $\epsilon$, see e.g., \cite{Fei:2014yja,Fei:2014xta}. This model exhibits an IR fixed point, and it was suggested as an alternative description of the critical scalar $O(N)$ model with quartic interaction $(\phi^a\phi^a)^2$.
Several consistency checks for this proposal were given \cite{Fei:2014yja,Fei:2014xta}.
Some of them, such as matching of the coefficients
of the three-point functions $\langle\phi^a\phi^b \sigma\rangle$
and $\langle\sigma\sigma\sigma\rangle$ in both theories, fall within a
universal class of results obtained for a generic large-$N$ CFT
by the methods of conformal bootstrap \cite{Petkou:1995vu,Petkou:1994ad}.
However, some other checks,
such as matching of the anomalous dimensions of certain operators
in these theories to high order both in $\epsilon$ and $1/N$ expansion,
are rather non-trivial \cite{Fei:2014yja,Fei:2014xta}.
Recent work \cite{Giombi:2019upv} is also dedicated to the calculation of non-perturbative
imaginary-valued contributions to the scaling dimensions
resulting from fluctuations around the instanton background.
The large-$N$ expansion provides another useful and widely applied tool to study non-perturbative aspects of QFTs.
A significant part of the large $N$ lore is due to
the original observation made by 't~Hooft \cite{tHooft:1973alw}
that the $SU(3)$ QCD becomes more tractable when generalized
to the $SU(N)$ gauge theory with matter, and considered in the large-$N$
limit (see also \cite{Witten:1979kh} and references therein for some of the earlier
arguments regarding the accuracy of such an approximation).
In the context of the AdS/CFT correspondence
the large-$N$ limit of the $SU(N)$ gauge theories has been subsequently related to
the weak Newton/string coupling limit of the dual bulk theory in the AdS space
\cite{Maldacena:1997re,Witten:1998qj,Gubser:1998bc}.
Other recent applications of the large-$N$ formalism include studies of the large-$N$ three-dimensional
$SU(N)$ gauge theories with the Chern-Simons interaction, and generalizations of these models to study
fundamental vector matter at finite temperature and chemical potential (see, \textit{e.g.},
\cite{Giombi:2011kc,Aharony:2012ns,Yokoyama:2012fa,Jain:2014nza,Geracie:2015drf,Goykhman:2016zgd}
and references therein).\footnote{For recent advances in the large-$N$ QED, see \textit{e.g.}, \cite{Giombi:2016fct}.}
Some of the earlier ideas related to applications of the
large-$N$ methods are due to the work by Parisi (see \cite{Parisi:1975} and references therein),
who in particular developed a systematic proof of renormalizability of the
$O(N)$ vector model in $4<d<6$ dimensions to each order in the $1/N$ expansion.
In particular, one can study the $O(N)$ vector model in a physically interesting dimension $d=5$.
Therefore, this model provides a useful testing ground for the formalism of asymptotic safety \cite{Weinberg:1976xy}.
Furthermore, it was shown that the large-$N$ approach can be combined with the dimensional continuation method
(see, \textit{e.g.}, \cite{Fei:2014yja} for recent developments in this direction),
thanks to the simple observation that the IR Wilson-Fisher fixed point in $d=4-\epsilon$
dimensions can be analytically continued to obtain a perturbative UV fixed point in $d=4+\epsilon$ dimensions,
albeit the coupling constant at that fixed point
is negative-valued, and therefore the theory is unstable \cite{Weinberg:1976xy}.
In fact, the work of \cite{Fei:2014yja} was partially motivated by an attempt to design a UV completion of the critical $O(N)$ vector model in $4<d<6$ dimensions.\footnote{It should be noted that the cubic $\sigma^3$ theory considered in \cite{Fei:2014yja} is still expected to run into instabilities because of the negative mode associated with fluctuations around the instanton background. See \cite{Giombi:2019upv} for the recent detailed account of the stability issues in this model.}
In this work we continue to study the $O(N)$ vector model with quartic interaction $(\phi ^a\phi^a)^2$ in $2\leq d \leq 6$ dimensions. We scrutinize renormalization in the $d=5$ case and provide an additional evidence for the asymptotic safety of the model.
Our calculations are done to the next-to-leading
order in the $1/N$ expansion, and we successfully recover the known anomalous scaling
dimensions associated with the UV CFT \cite{Vasiliev:1981yc,Petkou:1995vu,Petkou:1994ad,Vasiliev:1981dg}.
The calculations can be readily extended to general $d$, and therefore similar conclusions hold for the IR CFT in $2<d<4$ as well as for the UV CFT in $4<d<6$.
The primary goal of this work is to derive the CFT data associated with the three-point functions $\langle \phi^a\phi^b s\rangle$ and $\langle s s s\rangle$ to the next-to-leading order in the $1/N$ expansion.
In order to find the $\langle sss\rangle$ three-point
function at the ${\cal O}(1/N^{3/2})$ order we calculate the 4-loop triangle diagram, and the associated 3-loop trapezoid
diagram, as well as the three-loop bellows diagram, in general dimension $d$. To the best of our knowledge, analytic expressions for these diagrams have not been presented in the literature before.
The results for the $\langle \phi^a\phi^b s\rangle$ agree with
\cite{Petkou:1995vu,Petkou:1994ad}, whereas the ${\cal O}(1/N^{3/2})$ result for $\langle sss \rangle$ is new. This data is important for pinning down the CFT which describes the critical vector model in general dimension.
Setting $d=6-\epsilon$ we compare our findings
with their counterparts obtained in \cite{Fei:2014yja,Fei:2014xta} for the critical cubic model.
We find that the OPE coefficients of the $\langle sss \rangle$
three-point functions of these CFTs match at the next-to-leading order in the $1/N$
expansion. We discuss the significance of this non-trivial match in the context of the non-Lagrangian
bootstrap approach to the large-$N$ CFTs with $O(N)$ symmetry.
The rest of this paper is organized as follows. In section \ref{sec:setup} we
define the model studied in this work and set our notation.
Next, we assume the existence of the fixed point and derive a general relation between the counterterms at the fixed point. In what follows this relation is used as a consistency check for the existence of the fixed point.
In section \ref{sec:d5fixed point} we focus on the model in 5D. We calculate various counterterms and provide additional evidence in favor of the asymptotic safety of the model.
In section \ref{sec:propagators renormalizaion} we review diagrammatic and
computational techniques for a CFT in general $d$, and use them to evaluate various
anomalous dimensions and amplitudes of the two-point functions.
In section~\ref{sec:three-point} we calculate the three-point functions associated with a CFT that emerges at the fixed point of the model in $2\leq d \leq 6$ dimensions. We discuss our results in section \ref{sec:discussion}.
\section{Setup}
\label{sec:setup}
In this paper we focus on a $d$-dimensional Euclidean vector model governed by the bare action
\begin{equation}
\label{starting action}
S = \int d^dx \, \left( \frac{1}{2} \, (\partial _ \mu \phi ) ^ 2 + \frac{1}{2} \, g_2 \, \phi ^ 2
+ \frac{g_4}{N} \left( \phi^ 2 \right) ^2 \right)\,,
\end{equation}
where the field $\phi$ has $N$ components, but we suppress the vector index for brevity. We keep the dimension $d$ general most of the time, however, some of the specific calculations are carried out in $d=5$ and $d=6-\epsilon$. The behaviour of the model at the fixed point is our main objective, but for now we keep the mass parameter $g_2$ to preserve generality.
The straightforward and well-known approach (sometimes referred to as the Hubbard-Stratonovich transformation)
to study the model such as (\ref{starting action}) is to introduce the auxiliary field $s$ which has no impact on the original path integral,
\begin{equation}
\label{starting action intermediate}
S = \int d^dx \, \left( \frac{1}{2} \, (\partial _ \mu \phi ) ^ 2 + \frac{1}{2} \, g_2 \phi ^ 2
+ \frac{g_4}{N} \left( \phi ^ 2 \right) ^2
- \frac{1}{4g_4}\left( s - \frac{2g_4}{\sqrt{N}} \, \phi^2 \right)^2 \right)\,.
\end{equation}
After a straightforward simplification this action takes the form
\begin{equation}
\label{main action}
S = \int d^dx \, \left( \frac{1}{2} \, (\partial _ \mu \phi ) ^ 2 + \frac{1}{2} \, g_2 \phi^ 2
- \frac{1}{4g_4} \, s ^ 2 + \frac{1}{\sqrt{N}} \, s \phi^ 2 \right)\,.
\end{equation}
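As a quick consistency check, one can integrate the auxiliary field back out: the equation of motion for $s$ following from (\ref{main action}) is
\begin{equation}
s = \frac{2 g_4}{\sqrt{N}} \, \phi^2 \,,
\end{equation}
and substituting it back turns the last two terms of (\ref{main action}) into $\frac{g_4}{N} \left( \phi^2 \right)^2$, recovering the quartic interaction of the original action (\ref{starting action}).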
As usual, the model (\ref{main action}) has to be renormalized.
To begin with, we introduce the renormalized fields $\tilde \phi$, $\tilde s$
\begin{equation}
\phi = \sqrt{Z_\phi} \, \tilde \phi \,, \qquad s = \sqrt{Z_s} \, \tilde s \,,
\end{equation}
where the field strength renormalization constants $Z_{\phi, s}$ are expressed in terms of the
counterterms $\delta _{\phi, s}$ as
\begin{equation}
\label{Z phi and s definition}
Z_\phi = 1 + \delta _\phi\,, \qquad Z_s = 1 + \delta _ s\,.
\end{equation}
Similarly, the renormalized mass $m$ and coupling $\tilde g_4$ are defined by
\begin{align}
g_2 Z_\phi &= m^2 +\delta _m\,,\\
g_4 Z_\phi ^ 2 &= \tilde g_4 +\delta _4\,,\label{definition of renormalized g4}
\end{align}
where $\delta _m$ and $\delta _4$ are mass and quartic coupling counterterms respectively.
\footnote{It follows from the definition (\ref{definition of renormalized g4}) that the interaction term in the
original bare action (\ref{starting action}) can be expressed as the following sum
\begin{equation}
g_4 ( \phi^ 2 ) ^2
=\tilde g_4 ( \tilde\phi^ 2 ) ^2
+\delta_4 ( \tilde\phi ^2 ) ^ 2\,.
\end{equation}}
It turns out that all counterterms vanish to leading order in the $1/N$ expansion. Therefore, to carry out calculations to the next-to-leading order, it is sufficient to linearize the full action with respect to the various counterterms. The action (\ref{main action}) in terms of the renormalized parameters is thus given by
\begin{align}
\label{renormalized action preliminary}
S &= \int d^dx \, \left( \frac{1}{2} \, (\partial _ \mu \tilde\phi ) ^ 2
+ \frac{1}{2} \, m^2 \tilde\phi ^ 2
+ \frac{1}{2} \,\delta_\phi\, (\partial _ \mu \tilde\phi ) ^ 2
+ \frac{1}{2} \,\delta_m \tilde\phi ^ 2 \right.\notag\\
&- \left.\frac{1}{4\tilde g_4} \,\left(1 + 2\delta_\phi + \delta _s - \frac{\delta _ 4}{\tilde g_4}\right) \, \tilde s ^ 2
+ \frac{1}{\sqrt{N}} \, \left(1 + \frac{1}{2}\delta_s +\delta_\phi\right) \, \tilde s \, \tilde \phi ^ 2 \right)\,.
\end{align}
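The linearized coefficients in (\ref{renormalized action preliminary}) can be checked symbolically. The following sketch (a side check, with `t` a formal bookkeeping parameter that counts powers of the counterterms) expands the bare coefficients to first order:

```python
import sympy as sp

dphi, ds, d4, g, t = sp.symbols('delta_phi delta_s delta_4 gt4 t')

Zphi = 1 + t*dphi
Zs = 1 + t*ds
g4_bare = (g + t*d4)/Zphi**2      # from g4 Zphi^2 = g4tilde + delta_4

# coefficient of tilde-s^2 in -s^2/(4 g4) after s -> sqrt(Zs) tilde-s
coeff_s2 = sp.series(-Zs/(4*g4_bare), t, 0, 2).removeO()
assert sp.simplify(coeff_s2 - (-(1 + t*(2*dphi + ds - d4/g))/(4*g))) == 0

# coefficient of tilde-s tilde-phi^2 in s phi^2/sqrt(N)
coeff_cubic = sp.series(sp.sqrt(Zs)*Zphi, t, 0, 2).removeO()
assert sp.expand(coeff_cubic - (1 + t*(ds/2 + dphi))) == 0
```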
It is convenient to define the following combinations of the counterterms,
\begin{align}
\label{definition of hat delta s}
\hat \delta _s &= 2\delta_\phi + \delta _s - \frac{\delta _ 4}{\tilde g_4}\,,\\
\label{definition of hat delta 4}
\frac{\hat \delta _4}{\sqrt{\tilde g_4}} &= \frac{1}{2}\delta_s +\delta_\phi\,,
\end{align}
in terms of which the action can be rewritten as follows
\begin{align}
\label{renormalized action}
S &= \int d^dx \, \left( \frac{1}{2} \, (\partial _ \mu \tilde\phi ) ^ 2
+ \frac{1}{2} \, m^2 \tilde\phi ^ 2
-\frac{1}{4\tilde g_4} \, \tilde s ^ 2
+ \frac{1}{\sqrt{N}} \, \tilde s \, \tilde \phi ^ 2
\right.\notag\\
&+ \left. \frac{1}{2} \,\delta_\phi\, (\partial _ \mu \tilde\phi ) ^ 2
+ \frac{1}{2} \,\delta_m \tilde\phi ^ 2
-\frac{1}{4\tilde g_4} \, \hat\delta _s \, \tilde s ^ 2
+ \frac{1}{\sqrt{N}} \, \frac{\hat \delta _4}{\sqrt{\tilde g_4}} \, \tilde s \, \tilde \phi ^ 2 \right)\,.
\end{align}
Loop corrections along with renormalization conditions fix the counterterms $\delta_\phi$, $\delta _m$, $\hat \delta _s$, and $\hat \delta _4$. Inverting (\ref{definition of hat delta s}), (\ref{definition of hat delta 4}) one then obtains
\begin{align}
\label{delta s in terms of auxiliaries}
\delta _ s & = 2\left(\frac{\hat\delta _ 4 }{\sqrt{ \tilde g _ 4 }} - \delta _ \phi\right)\,, \\
\label{delta 4 in terms of auxiliaries}
\frac{\delta _4}{\tilde g_4} &= 2\frac{\hat \delta _4}{\sqrt{\tilde g_4}} - \hat \delta _s\,.
\end{align}
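The inversion (\ref{delta s in terms of auxiliaries}), (\ref{delta 4 in terms of auxiliaries}) amounts to elementary algebra, which can be verified as follows (`hat_4` stands for $\hat\delta_4/\sqrt{\tilde g_4}$):

```python
import sympy as sp

dphi, ds, d4, g = sp.symbols('delta_phi delta_s delta_4 gt4')

hat_s = 2*dphi + ds - d4/g        # eq. (definition of hat delta s)
hat_4 = ds/2 + dphi               # hat_delta_4/sqrt(gt4), eq. (definition of hat delta 4)

# inversion formulas
assert sp.simplify(2*(hat_4 - dphi) - ds) == 0          # delta_s
assert sp.simplify((2*hat_4 - hat_s) - d4/g) == 0       # delta_4/gt4

# linearized coupling renormalization: 1 + 2 hat_4 - 2 dphi - hat_s = 1 + ds - hat_s
assert sp.simplify((1 + 2*hat_4 - 2*dphi - hat_s) - (1 + ds - hat_s)) == 0
```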
Finally, the renormalization of the quartic coupling constant can be read off from (\ref{Z phi and s definition}), (\ref{definition of renormalized g4}) and (\ref{delta 4 in terms of auxiliaries})
\begin{equation}
\label{g4 renormalization}
g_4=\tilde g_4 \left( 1+2\frac{\hat\delta _ 4}{\sqrt{\tilde g_4}} - 2 \delta _ \phi -\hat \delta _s \right)
=\tilde g_4 \left( 1+\delta_s-\hat \delta _s \right)\,,
\end{equation}
where in the last equality (\ref{delta s in terms of auxiliaries}) was used.
The Callan-Symanzik equation can be used to find various relations between the counterterms of the theory. In particular, using it for the three-point function $\langle \phi\phi s\rangle$, we argue in Appendix \ref{appendix:callan-symanzik equation} that at the UV (IR) fixed point in $4<d<6$ ($2<d<4$) dimensions one obtains
\begin{equation}
\label{delta s hat delta s result}
\mu\frac{\partial}{\partial\mu} \left(\delta _s - \hat \delta_s +2 B {\cal L}_s \right)=0+\mathcal{O}(1/N^2) \,,
\end{equation}
where $\mu$ is an arbitrary renormalization scale.
In section \ref{sec:d5fixed point} we explicitly confirm this relation in $d=5$ at the next-to-leading order in the $1/N$ expansion. We interpret it as a consistency check of the assumption that the theory is asymptotically safe.
Of course, the Callan-Symanzik equation is just a statement about the consistency of the calculations at various scales of the RG flow, and therefore it holds in any field theory. However, it depends on the beta functions of the couplings, which are not known in general dimension. For instance, the existence of the fixed point in the $\phi^4$ vector model has not been verified directly in general $d$; it is an assumption based on accumulated evidence rather than a derived fact. In particular, eq. (\ref{delta s hat delta s result}) is the Callan-Symanzik equation with the beta functions suppressed by hand. Yet, it is satisfied by the remaining counterterms, which are calculated separately. In this sense (\ref{delta s hat delta s result}) plays the role of a consistency check for the existence of a fixed point.
\section{UV fixed point in $d=5$}
\label{sec:d5fixed point}
A vast amount of literature, starting from the earlier works \cite{Parisi:1975,Weinberg:1976xy},
is dedicated to
studying the properties of the UV conformal fixed point of the $O(N)$ vector models with quartic interaction
in $4<d<6$.
While the quartic interaction is non-renormalizable in $d>4$, one can take advantage
of the fact that this model is renormalizable at each order in the $1/N$ expansion \cite{Parisi:1975}.
In this section we consider (\ref{renormalized action})
in $d=5$ dimensions. The main goal is to systematically derive various anomalous dimensions
\cite{Vasiliev:1981yc,Vasiliev:1981dg,Petkou:1995vu,Petkou:1994ad} and to verify explicitly
that they satisfy certain constraints which are expected to hold at the fixed point.
The calculations are carried out to the next-to-leading order in the $1/N$ expansion.
To begin with, we list all necessary Feynman rules for the model (\ref{renormalized action}).
A solid line will be used to denote the propagators of various components of the scalar field $\phi$, whereas a
dashed line is associated with the propagator of the Hubbard-Stratonovich auxiliary field $s$. The matrix of propagators is diagonal, and each non-trivial interaction vertex carries two identical vector indices. Hence, for simplicity we suppress the Kronecker deltas that make these facts explicit. We also omit the momentum conservation delta function at each vertex. Vertices are denoted by a solid blob,
whereas propagators are enclosed by black dots (absence of dots indicates an amputated external leg).
\begin{center}
\begin{picture}(700,155) (0,0)
\SetWidth{1.0}
\SetColor{Black}
\Vertex(30,150){2}
\Text(55,155)[lb]{$p$}
\Line[](30,150)(80,150)
\Vertex(80,150){2}
\Text(95,142)[lb]{$=\frac{1}{p^2 + m^2}$}
\Vertex(250,150){2}
\Text(275,155)[lb]{$p$}
\Line[dash,dashsize=10](250,150)(300,150)
\Vertex(300,150){2}
\Text(315,145)[lb]{$=-2\tilde g_4$}
\Text(55,115)[lb]{$p$}
\Line[](30,105)(50,105)
\Arc[](55,105)(5,135,495)
\Line[](52,101)(58,109)
\Line[](52,109)(58,101)
\Line[](60,105)(80,105)
\Text(95,100)[lb]{$=-\delta_\phi p^2-\delta _m$}
\Line[dash,dashsize=10](250,105)(290,105)
\Arc[](295,105)(5,135,495)
\Line[](292,101)(298,109)
\Line[](292,109)(298,101)
\Text(315,100)[lb]{$=\delta _0$}
\Line[dash,dashsize=10](30,40)(80,40)
\Vertex(80,40){4}
\Line[](80,40)(110,70)
\Line[](80,40)(110,10)
\Text(125,30)[lb]{$=-\frac{2}{\sqrt{N}}$}
\Line[dash,dashsize=10](250,40)(290,40)
\Arc[](295,40)(5,135,495)
\Line[](292,36)(298,44)
\Line[](292,44)(298,36)
\Line[](298,44)(330,70)
\Line[](298,36)(330,10)
\Text(345,30)[lb]{$=-\frac{2}{\sqrt{N}}\frac{\hat \delta _4}{\sqrt{\tilde g_4}}$}
\end{picture}
\end{center}
Note that we introduced a counterterm $\delta _0$. It is completely determined by the renormalization condition $\langle \tilde s \rangle=0$. In what follows we assume this condition has been satisfied, without elaborating on the details.
The full propagator of the Hubbard-Stratonovich field $\tilde s$
to leading order in the $1/N$ expansion is thus given by an infinite sum of bubble diagrams
\begin{center}
\begin{picture}(670,40) (105,-28)
\SetWidth{1.0}
\SetColor{Black}
\Vertex(120,-4){2}
\Line[dash,dashsize=10](120,-4)(170,-4)
\Vertex(170,-4){2}
\Text(186,-8)[lb]{$+$}
\Vertex(210,-4){2}
\Line[dash,dashsize=10](210,-4)(230,-4)
\Vertex(230,-4){4}
\Arc[](255,-4)(25,135,495)
\Vertex(280,-4){4}
\Line[dash,dashsize=10](280,-4)(300,-4)
\Vertex(300,-4){2}
\Text(316,-8)[lb]{$+$}
\Vertex(340,-4){2}
\Line[dash,dashsize=10](340,-4)(360,-4)
\Vertex(360,-4){4}
\Arc[](385,-4)(25,135,495)
\Vertex(410,-4){4}
\Line[dash,dashsize=10](410,-4)(430,-4)
\Vertex(430,-4){4}
\Arc[](455,-4)(25,135,495)
\Vertex(480,-4){4}
\Line[dash,dashsize=10](480,-4)(500,-4)
\Vertex(500,-4){2}
\Text(516,-8)[lb]{$+\dots $}
\end{picture}
\end{center}
This is a geometric series which can be readily summed in closed form.\footnote{One should take into account that each bubble comes with the
symmetry factor $1/2$.} We denote the result by $G_s(p)$ and represent it diagrammatically by a wavy line
\begin{center}
\begin{picture}(670,10) (10,-10)
\SetWidth{1.0}
\SetColor{Black}
\Vertex(30,-4){2}
\Photon(30,-5)(80,-4){2}{9}
\Text(55,5)[lb]{$p$}
\Vertex(80,-4){2}
\Text(95,-12)[lb]{$= G_s(p)=-2\tilde g_4\sum_{n=0}^\infty (-4\tilde g_4 B) ^ n=\frac{-2\tilde g_4}
{1+4\tilde g_4 B}~,$}
\end{picture}
\end{center}
where each bubble in the infinite series is associated with a UV divergent loop integral
\begin{equation}
\label{B definition}
B(p) = \int \frac{d^5q}{(2\pi)^5}\,\frac{1}{(q^2+m^2)((p+q)^2 + m^2)}\,.
\end{equation}
To regularize the UV divergence we introduce a spherically symmetric sharp cutoff $\Lambda$
\begin{equation}
\label{B0 result}
B(p) = \frac{\Lambda}{12\pi^3} -\frac{1}{64\pi^2 p}\,\left((4m^2+p^2)\,\tan^{-1}\left(\frac{p}{2|m|}\right)+2|m|p\right)\,.
\end{equation}
The power law divergence can be eliminated by adjusting the counterterm $\hat\delta _s$. Unlike logarithmic divergences, power law divergences depend on the details of the regularization scheme. For instance, they are absent in dimensional regularization. Hence, we simply ignore such divergences in what follows to reduce clutter in the equations. In particular, in the large momentum limit, which we are interested in for the purpose of finding the UV fixed point, we obtain
\begin{equation}
\label{B0 in UV}
B\Big|_{p\gg m} = -\frac{p}{128\pi}\,.
\end{equation}
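As a quick numerical side check of this asymptotics, the cutoff-independent part of (\ref{B0 result}) indeed approaches $-p/(128\pi)$ at $p\gg m$:

```python
import math

def B_finite(p, m):
    # cutoff-independent part of eq. (B0 result); the linear Lambda/(12 pi^3)
    # divergence has been dropped, as in the text
    return -((4*m**2 + p**2)*math.atan(p/(2*abs(m))) + 2*abs(m)*p)/(64*math.pi**2*p)

p, m = 1.0e6, 1.0
assert abs(B_finite(p, m)/(-p/(128*math.pi)) - 1) < 1e-5
```

The relative deviation at these values is of order $m^2/p^2$, well below the tolerance.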
Using the asymptotic behaviour (\ref{B0 in UV}) rather than the full expression (\ref{B0 result}), and thus focusing on the UV
regime, one avoids passing through the pole of the propagator $G_s(p)$ at some finite momentum
\cite{Parisi:1975}. Therefore in all of our calculations
we take the limit $\tilde g_4 p \gg 1$ to simplify the propagator of the Hubbard-Stratonovich field
(see, \textit{e.g.}, \cite{Fei:2014yja}),
\begin{equation}
\label{massless full s propagator}
G_s(p)=-\frac{1}{2B(p)} = \frac{64\pi}{p}\,.
\end{equation}
By assumption, the beta function for $\tilde g_4$ is expected to exhibit a UV fixed point. As pointed out in Appendix \ref{appendix:callan-symanzik equation}, the counterterms must therefore satisfy equation (\ref{delta s hat delta s result}). In particular, (\ref{delta s hat delta s result}) serves as a non-trivial consistency check of the assumption about the UV behaviour of the beta function. In order to explicitly verify this identity, we proceed to the calculation of the counterterms to order ${\cal O}\left(\frac{1}{N}\right)$. It is natural to set $m^2=0$ since we are ultimately interested in the UV behaviour of the loop integrals.
\subsection{Scalar field two-point function}
To leading order in the $1/N$ expansion only $G_s(p)$ exhibits loop corrections. As argued above, there are no non-trivial UV divergences associated with loop diagrams at this order, and all counterterms thus vanish. However, a
non-trivial renormalization is induced at the next-to-leading order in $1/N$.
The ${\cal O}\left(\frac{1}{N}\right)$ contributions to the counterterms $\delta_{\phi,m}$ are derived from the requirement
that the sum of the loop diagram
\begin{center}
\begin{picture}(670,60) (125,-40)
\Vertex(180,-4){2}
\Line[](180,-4)(230,-4)
\Text(205,0)[lb]{$p$}
\Vertex(230,-4){4}
\Arc[](255,-4)(25,-180,-360)
\Text(245,-45)[lb]{$p+q$}
\PhotonArc[](255,-4)(25,0,180){2}{9}
\Text(255,30)[lb]{$q$}
\Vertex(280,-4){4}
\Line[](280,-4)(330,-4)
\Text(305,0)[lb]{$p$}
\Vertex(330,-4){2}
\Text(340,-15)[lb]{$=\frac{1}{(p^2)^2}\left(-\frac{2}{\sqrt{N}}\right)^2\int \frac{d^5q}{(2\pi)^5}\frac{-1}{2B(q)}\,\frac{1}{(p+q)^2}$}
\end{picture}
\end{center}
and the counter-term contribution
\begin{center}
\begin{picture}(670,40) (315,-30)
\Vertex(370,-4){2}
\Line[](370,-4)(420,-4)
\Arc[](425,-4)(5,135,495)
\Line[](422,-8)(428,0)
\Line[](422,0)(428,-8)
\Line[](430,-4)(480,-4)
\Vertex(480,-4){2}
\Text(395,2)[lb]{$p$}
\Text(455,2)[lb]{$p$}
\Text(495,-13)[lb]{$=\frac{1}{(p^2)^2}\,(-p^2\delta_\phi -\delta _m)$}
\end{picture}
\end{center}
is finite.
Introducing a sharp cutoff $\Lambda$ and focusing on the logarithmic divergences only yields\footnote{If we keep the mass in the action (\ref{renormalized action}), then there is an additional counterterm of the form
\begin{equation}
\delta _ m = -\frac{1}{N}\frac{320m^2}
{3\pi^2}\,\log\left(\frac{\Lambda}{\mu}\right)\,.
\end{equation}
}
\begin{align}
\label{delta phi d5}
\delta _ \phi &= -\frac{1}{N}\frac{64}{15\pi^2}\,\log\left(\frac{\Lambda}{\mu}\right)\,,
\end{align}
where $\mu$ is an arbitrary renormalization scale. The anomalous dimension of the scalar field $\phi$ at the UV fixed point can be readily evaluated using the Callan-Symanzik equation for the two-point function
of $\tilde\phi$, see Appendix \ref{appendix:callan-symanzik equation}
\begin{equation}
\label{gamma_phi d=5 result}
\gamma_\phi = \frac{1}{2}\frac{\partial}{\partial\log\mu}\,\delta_\phi =
\frac{1}{N}\frac{32}{15\pi^2}\,.
\end{equation}
This result is in full agreement with \cite{Vasiliev:1981yc,Vasiliev:1981dg,Petkou:1995vu,Petkou:1994ad}.
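The numerical coefficient in (\ref{delta phi d5}) can be traced with exact rational arithmetic. The log-divergent part of the loop comes from the angular average $\langle 1/(p+q)^2\rangle = 1/q^2 - p^2/(5q^4)+\dots$ at large $q$ (a standard $d=5$ angular-average result, quoted here as an assumption), combined with the radial integral $\int \frac{d^5q}{(2\pi)^5}\, q^{-5} = \frac{\log\Lambda}{12\pi^3}$. A sketch of the bookkeeping:

```python
import math
from fractions import Fraction

# Omega_4/(2 pi)^5 = 1/(12 pi^3): coefficient of log(Lambda) in int d^5q/(2pi)^5 q^{-5}
Omega4 = 2*math.pi**2.5/math.gamma(2.5)
assert abs(Omega4/(2*math.pi)**5 - 1/(12*math.pi**3)) < 1e-15

# loop: (-2/sqrt(N))^2 * 64 pi * (-1/5) * 1/(12 pi^3) -> -(64/15) p^2 log(Lambda)/(pi^2 N)
loop = Fraction(4*64, 1)*Fraction(-1, 5)*Fraction(1, 12)
assert loop == Fraction(-64, 15)

# the counterterm -p^2 delta_phi cancels this, giving eq. (delta phi d5), and
# gamma_phi = (1/2) d(delta_phi)/d log(mu) reproduces eq. (gamma_phi d=5 result)
assert Fraction(1, 2)*Fraction(64, 15) == Fraction(32, 15)
```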
\subsection{Renormalization of the interaction vertex}
\label{sec:interaction vertex in d=5}
There are two loop diagrams that contribute to the renormalization of the interaction vertex at the next-to-leading order in the $1/N$ expansion. To calculate the corresponding counterterm $\hat \delta _4$ it is enough to set all external momenta to zero. The first diagram takes the form
\begin{center}
\begin{picture}(450,194) (55,-63)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,114)(176,34)
\Line[](176,34)(96,-46)
\Photon(128,82)(128,-14){2}{9}
\Text(115,30)[lb]{$q$}
\Text(155,0)[lb]{$q$}
\Text(155,60)[lb]{$q$}
\Vertex(128,82){4}
\Vertex(128,-14){4}
\Line[dash,dashsize=10](176,34)(208,34)
\Vertex(176,34){4}
\Text(226, 23)[lb]{$ =\left(-\frac{2}{\sqrt{N}}\right)^3 \int \frac{d^5q}{(2\pi)^5}\frac{-1}{2B(q)}\,\frac{1}{(q^2)^2}
=-\frac{1}{N^{3/2}}\,\frac{128}{3\pi^2}\,\log\Lambda$}
\end{picture}
\end{center}
whereas the second diagram is given by
\begin{center}
\begin{picture}(400,194) (329,-63)
\SetWidth{1.0}
\SetColor{Black}
\Line[](384,66)(384,2)
\Text(356,30)[lb]{$p+q$}
\Text(340,30)[lb]{$q$}
\Text(373,85)[lb]{$q$}
\Text(373,-25)[lb]{$q$}
\Text(373,85)[lb]{$q$}
\Text(405,53)[lb]{$p$}
\Text(405,8)[lb]{$p$}
\Line[](384,2)(416,34)
\Line[](384,66)(416,34)
\Vertex(416,34){4}
\Line[dash,dashsize=10](416,34)(448,34)
\Vertex(352,98){4}
\Photon(352,98)(384,66){2}{9}
\Vertex(384,66){4}
\Vertex(384,2){4}
\Photon(384,2)(352,-30){2}{9}
\Vertex(352,-30){4}
\Line[](352,98)(320,130)
\Line[](352,-30)(320,-62)
\Line[](352,98)(352,-30)
\Text(460, 23)[lb]{$=
\left(-\frac{2}{\sqrt{N}}\right)^5 N \int \frac{d^5q}{(2\pi)^5}\left(\frac{-1}{2B(q)}\right)^2\,
\frac{1}{q^2}\,\left(-\frac{1}{2}\right)
\frac{\partial B(q)}{\partial m^2} =-\frac{1}{N^{3/2}}\,\frac{512}{3\pi^2}\,\log\Lambda
~,$}
\end{picture}
\end{center}
where we used (\ref{B definition}) to get the following simple relation
\begin{equation}
-\frac{1}{2}\,\frac{\partial B(q)}{\partial m^2}= \int \frac{d^5p}{(2\pi)^5}\,\frac{1}{(p^2+m^2)^2((p+q)^2 + m^2)}\,.
\end{equation}
In particular, the loop integral can be calculated by taking the derivative of (\ref{B0 result}) with respect to the mass parameter.
Finally, the tree-level counterterm contribution can be read off from the Feynman rules listed at the beginning of this section.
Combining everything together and demanding cancellation of the divergences, we obtain
\begin{equation}
\label{hat delta 4 result}
\frac{\hat \delta _4}{\sqrt{\tilde g_4}} = -\frac{1}{N}\frac{320}{3\pi^2}\,
\,\log\left(\frac{\Lambda}{\mu}\right)\,.
\end{equation}
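The arithmetic behind (\ref{hat delta 4 result}) can be cross-checked with exact fractions. The massless sub-integral $-\tfrac{1}{2}\partial B/\partial m^2 = 1/(64\pi q)$ used in the second diagram follows from the standard one-loop master integral, which we also verify numerically (this is a side check, not part of the text's derivation):

```python
import math
from fractions import Fraction

# standard master formula: int d^dp/(2pi)^d (p^2)^{-a} ((p+q)^2)^{-b} at q^2 = 1
def master(d, a, b):
    g = math.gamma
    return g(d/2 - a)*g(d/2 - b)*g(a + b - d/2)/((4*math.pi)**(d/2)*g(a)*g(b)*g(d - a - b))

assert abs(master(5.0, 2.0, 1.0) - 1/(64*math.pi)) < 1e-12

# log(Lambda)/(pi^2 N^{3/2}) coefficients of the two vertex diagrams:
diag1 = Fraction((-2)**3*64, 12)   # (-2)^3 * 64pi * 1/(12 pi^3)
diag2 = Fraction((-2)**5*64, 12)   # (-2)^5 * (64pi)^2/(64pi) * 1/(12 pi^3)
assert diag1 == Fraction(-128, 3) and diag2 == Fraction(-512, 3)

# cancellation against the tree-level vertex -(2/sqrt(N)) hat_delta_4/sqrt(gt4)
# fixes hat_delta_4/sqrt(gt4) = -(320/3) log(Lambda/mu)/(pi^2 N)
assert (diag1 + diag2)/2 == Fraction(-320, 3)
```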
\subsection{Auxiliary field propagator}
Next, let us evaluate the counterterm $\hat\delta _s$. To this end, we perform the renormalization of the Hubbard-Stratonovich propagator. There are five loop diagrams which contribute at the next-to-leading order in the $1/N$ expansion. We tag these diagrams with $C_{i}$, $i=1,\dots,5$ and denote their external momentum by $u$. For simplicity, all the diagrams are amputated, \textit{i.e.}, the two external $s$ propagators are factored out. Thus, for instance
\begin{center}
\begin{picture}(194,80) (31,-25)
\SetWidth{1.0}
\SetColor{Black}
\Text(-5,5)[lb]{$C_1=$}
\Line[dash,dashsize=10](32,10)(94,10)
\Text(60,15)[lb]{$u$}
\Arc[](128,10)(35,153,513)
\Line[dash,dashsize=10](162,10)(224,10)
\Text(190,15)[lb]{$u$}
\Photon(128,44)(128,-24){2}{9}
\Text(117,10)[lb]{$r$}
\Text(70,35)[lb]{$p+u$}
\Text(90,-20)[lb]{$p$}
\Text(160,-20)[lb]{$p+r$}
\Text(160,35)[lb]{$p+r+u$}
\Vertex(94,10){4}
\Vertex(162,10){4}
\Vertex(128,-24){4}
\Vertex(128,44){4}
\end{picture}
\end{center}
is given by
\begin{equation}
C_1 =\frac{1}{2}\left(-\frac{2}{\sqrt{N}}\right)^4 N
\int\frac{d^5r}{(2\pi)^5}\,\frac{64\pi}{r}
\int\frac{d^5p}{(2\pi)^5}
\frac{1}{p^2(p+u)^2(p+r)^2(p+r+u)^2}\,.
\end{equation}
This integral is somewhat laborious to evaluate in full generality. However, we are only interested in its behaviour at large momenta
\begin{equation}
\label{C1 result}
C_1 = \frac{1024\pi}{N}
\int\frac{d^5p}{(2\pi)^5}
\frac{1}{p^2(p+u)^2}
\int\frac{d^5r}{(2\pi)^5}\,\frac{1}{r^5}
=\frac{1}{N}\frac{256}{3\pi^2}\,B(u)\,\log\,\Lambda+\dots\,.
\end{equation}
Similarly,
\begin{center}
\begin{picture}(322,100) (31,-33)
\SetWidth{1.0}
\SetColor{Black}
\Text(-5,11)[lb]{$C_2=$}
\Text(60,22)[lb]{$u$}
\Text(323,22)[lb]{$u$}
\Text(100,-22)[lb]{$p$}
\Text(275,-22)[lb]{$r$}
\Text(280,45)[lb]{$r+u$}
\Text(80,45)[lb]{$p+u$}
\Text(190,55)[lb]{$q$}
\Text(180,-30)[lb]{$q+u$}
\Text(113,11)[lb]{$p+q+u$}
\Text(225,11)[lb]{$r+q+u$}
\Line[dash,dashsize=10](32,16)(94,16)
\Arc[](128,16)(35,153,513)
\Photon(144,48)(240,48){2}{9}
\Photon(144,-16)(240,-16){2}{9}
\Arc[](256,16)(35,153,513)
\Line[dash,dashsize=10](292,16)(354,16)
\Vertex(94,16){4}
\Vertex(292,16){4}
\Vertex(144,48){4}
\Vertex(240,48){4}
\Vertex(144,-16){4}
\Vertex(240,-16){4}
\end{picture}
\end{center}
is given by
\begin{equation}
C_2 =\frac{1}{2}\left(-\frac{2}{\sqrt{N}}\right)^6 N^2
\int\frac{d^5q}{(2\pi)^5}\,\frac{64\pi}{q}
\,\frac{64\pi}{|q+u|}
\left(\int\frac{d^5p}{(2\pi)^5}\frac{1}{p^2(p+u)^2(p+q+u)^2}\right)^2\,.
\end{equation}
Expanding it in the region of large momenta gives
\begin{align}
C_2&=\frac{2^{18}\pi^2}{N}
\int\frac{d^5p}{(2\pi)^5}
\frac{1}{p^2(p+u)^2}
\int\frac{d^5r}{(2\pi)^5}\frac{1}{(r^2)^2}
\int\frac{d^5q}{(2\pi)^5}\frac{1}{(q^2)^2(r+q)^2}\notag\\
&=\frac{1}{N}\frac{1024}{3\pi^2}\,B(u)\,\log\,\Lambda+\dots\,.
\label{C2 result}
\end{align}
Next, we focus on
\begin{center}
\begin{picture}(194,100) (31,-35)
\SetWidth{1.0}
\SetColor{Black}
\Text(-5,5)[lb]{$C_3=$}
\Line[dash,dashsize=10](32,10)(94,10)
\Text(60,15)[lb]{$u$}
\Text(85,15)[lb]{$p$}
\Text(168,15)[lb]{$p$}
\Arc[](128,10)(35,153,513)
\Line[dash,dashsize=10](162,10)(224,10)
\Text(190,15)[lb]{$u$}
\Photon(97,25)(159,25){2}{9}
\Text(125,13)[lb]{$q$}
\Text(115,50)[lb]{$p+q$}
\Text(115,-38)[lb]{$p+u$}
\Vertex(94,10){4}
\Vertex(162,10){4}
\Vertex(97,25){4}
\Vertex(159,25){4}
\end{picture}
\end{center}
Using Feynman rules results in the following integral expression
\begin{equation}
C_3 =\left(-\frac{2}{\sqrt{N}}\right)^4 N
\int\frac{d^5p}{(2\pi)^5}\,\frac{1}{(p^2)^2(p+u)^2}
\int\frac{d^5q}{(2\pi)^5}\,\frac{64\pi}{q(p+q)^2}\,.
\end{equation}
Performing integration over $q$ and keeping the logarithmic divergence only\footnote{Recall that we ignore scheme dependent power law divergences.}, yields
\begin{equation}
\label{C3 result}
C_3=-\frac{1}{N}\frac{256}{15\pi^2}\,B(u)\,\log\,\Lambda+\dots\,.
\end{equation}
The two remaining diagrams are
\begin{center}
\begin{picture}(194,90) (31,-27)
\SetWidth{1.0}
\SetColor{Black}
\Text(-15,5)[lb]{$C_4=2\times$}
\Line[dash,dashsize=10](32,10)(89,10)
\Text(60,15)[lb]{$u$}
\Arc[](128,10)(35,0,173)
\Arc[](128,10)(35,187,360)
\Line[dash,dashsize=10](162,10)(224,10)
\Text(190,15)[lb]{$u$}
\Text(125,50)[lb]{$p$}
\Text(115,-38)[lb]{$p+u$}
\Arc[](94,10)(5,135,495)
\Line[](91,6)(97,14)
\Line[](91,14)(97,6)
\Vertex(162,10){4}
\end{picture}
\end{center}
and
\begin{center}
\begin{picture}(194,90) (31,-35)
\SetWidth{1.0}
\SetColor{Black}
\Text(-5,5)[lb]{$C_5=$}
\Line[dash,dashsize=10](32,10)(94,10)
\Text(60,15)[lb]{$u$}
\Arc[](128,10)(35,0,83)
\Arc[](128,10)(35,97,360)
\Line[dash,dashsize=10](162,10)(224,10)
\Text(190,15)[lb]{$u$}
\Text(125,55)[lb]{$p$}
\Text(115,-38)[lb]{$p+u$}
\Arc[](128,45)(5,135,495)
\Line[](125,41)(131,49)
\Line[](125,49)(131,41)
\Vertex(94,10){4}
\Vertex(162,10){4}
\end{picture}
\end{center}
An extra factor of two in $C_4$ comes from the interchange of the ordinary vertex with the counterterm. More specifically, the integral expressions for these diagrams read
\begin{align}
\label{C4 result}
C_4 &=2\times\frac{1}{2}\left(-\frac{2}{\sqrt{N}}\right)
\left(-\frac{2}{\sqrt{N}}\frac{\hat\delta _4}{\sqrt{\tilde g_4}}\right)N
\int\frac{d^5p}{(2\pi)^5}\,\frac{1}{p^2(p+u)^2}=4\frac{\hat\delta _4}{\sqrt{\tilde g_4}}B(u)\,, \\
\label{C5 result}
C_5 &=\left(-\frac{2}{\sqrt{N}}\right)^2N
\int\frac{d^5p}{(2\pi)^5}\,\frac{-p^2\delta _\phi }{(p^2)^2(p+u)^2}=-4\delta _\phi B(u)\,.
\end{align}
Combining the loop diagrams
(\ref{C1 result}), (\ref{C2 result}), (\ref{C3 result}), (\ref{C4 result}), (\ref{C5 result})
results in the total loop correction to the $\langle \tilde s\tilde s\rangle$ propagator\footnote{$\mu_0$ is an arbitrary IR scale.}
\begin{align}
{\cal L}_s=\left(-\frac{1}{2B}\right)^2\sum_{i=1}^5 C_i
=\frac{1}{B}\frac{1}{N}\,\frac{512}{5\pi^2}\,\log\,\frac{\mu}{\mu_{0}}\,.
\end{align}
It satisfies
\begin{equation}
\label{Ls in d=5}
B\, \mu\frac{\partial}{\partial\mu}\,{\cal L}_s =- \mu\frac{\partial}{\partial\mu}\,
\left(\frac{\hat\delta _4}{\sqrt{\tilde g_4}}-\delta_\phi\right)\,.
\end{equation}
Notice that ${\cal L}_s$ is finite (the logarithmically divergent terms cancelled out),
and therefore
\begin{equation}
\label{hat delta s result in d=5}
\hat\delta _s=0\,.
\end{equation}
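Both the cancellation of the $\log\Lambda$ terms and the coefficient $512/(5\pi^2)$ in ${\cal L}_s$, together with the value of $\gamma_s$ quoted below, reduce to exact rational arithmetic on the coefficients (\ref{C1 result})--(\ref{C5 result}), (\ref{delta phi d5}) and (\ref{hat delta 4 result}):

```python
from fractions import Fraction

# coefficients of B log(Lambda)/(pi^2 N) in the loop diagrams C_1, C_2, C_3
c1, c2, c3 = Fraction(256, 3), Fraction(1024, 3), Fraction(-256, 15)

# C_4, C_5 carry the counterterms times log(Lambda/mu), eqs. (C4 result), (C5 result)
hat4 = Fraction(-320, 3)    # hat_delta_4/sqrt(gt4), eq. (hat delta 4 result)
dphi = Fraction(-64, 15)    # delta_phi, eq. (delta phi d5)
c4, c5 = 4*hat4, -4*dphi

# the log(Lambda) divergences cancel, so L_s is finite and hat_delta_s = 0
assert c1 + c2 + c3 + c4 + c5 == 0

# the leftover log(mu) coefficient of sum(C_i) is -(c4 + c5); multiplying by
# (1/2B)^2 gives B L_s = (512/5) log(mu/mu_0)/(pi^2 N)
assert -(c4 + c5)/4 == Fraction(512, 5)

# gamma_s = (1/2) mu d/dmu (hat_delta_s + 2 B L_s) = 512/(5 pi^2 N)
assert Fraction(1, 2)*2*Fraction(512, 5) == Fraction(512, 5)
```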
Using the relation (\ref{delta s in terms of auxiliaries}) we
can re-write (\ref{Ls in d=5}) as
\begin{equation}
\label{delta s Ls in d=5}
\mu\frac{\partial}{\partial\mu} \left(\delta _s+2 B {\cal L}_s \right)=0\,.
\end{equation}
Our calculations (\ref{hat delta s result in d=5}) and (\ref{delta s Ls in d=5}) in $d=5$
agree with the general expression (\ref{delta s hat delta s result}). As discussed at the end of section \ref{sec:setup}, we interpret this as a consistency check of the assumption that the model is asymptotically safe.
Finally, using the Callan-Symanzik equation for the two-point function of $\tilde s$,
see Appendix \ref{appendix:callan-symanzik equation},
we derive the anomalous dimension of the auxiliary field $\tilde s$
to ${\cal O}(1/N)$ order,
\begin{equation}
\label{gamma_s d=5 result}
\gamma_s = \frac{1}{2}\mu\frac{\partial}{\partial\mu}\,(\hat \delta_s +
2B{\cal L}_s) =
\frac{1}{N}\frac{512}{5\pi^2}\,.
\end{equation}
This result agrees with the literature
\cite{Vasiliev:1981yc,Vasiliev:1981dg,Petkou:1995vu,Petkou:1994ad} and
serves as a check of the calculation.
\section{Position space calculation in general dimension}
\label{sec:propagators renormalizaion}
In the previous section we focused on providing evidence for the existence of a UV fixed point in the
5D vector model. Our next goal is to derive new CFT data associated with the model at criticality in $2\leq d \leq 6$. The stage is set in this section, whereas the new results are presented and derived in section \ref{sec:three-point}.
We start by calculating the next-to-leading ${\cal O}\left(\frac{1}{N}\right)$
corrections to the CFT propagators of the
fundamental scalar field $\phi$
and the auxiliary Hubbard-Stratonovich field $s$, working in position space at the UV fixed point. The space-time
dimension $d$ is assumed to be general, $4<d<6$, in this section, although
the results are also applicable to the IR conformal fixed point in $2<d<4$ dimensions.
To leading order in the large-$N$ expansion the propagators are given by
(to avoid clutter we continue to suppress the $O(N)$ vector indices and the explicit factors of the Kronecker delta)
\begin{align}
\label{free phi propagator in position space}
\langle \phi (x_1)\phi (x_2)\rangle &=
C_\phi \,\frac{1}
{|x_{12}|^{2\Delta_\phi}}\,,\\
\label{free s propagator in position space}
\langle s(x_1)s(x_2)\rangle &=C_s \,\frac{1}{|x_{12}|^{2\Delta_s}}\,,
\end{align}
where $x_{12}=x_1-x_2$ and the amplitudes $C_{\phi,s}$ are specified below.
To calculate the sub-leading corrections we employ the technique developed in \cite{Vasiliev:1981yc,Vasiliev:1981dg,Vasiliev:1975mq}, following recent developments in \cite{Gubser:2017vgc} (see also
\cite{Vasiliev:1982dc,Vasilev:2004yr,Gracey:2018ame,Kotikov:2018wxe}
for an extensive review and discussion of the $1/N$ calculations). The $1/N$ corrections dress the above propagators. As a result, their form is modified as follows
\begin{align}
\label{corrected phi propagator in position space}
\langle \phi(x_1)\phi(x_2)\rangle &=
C_\phi (1+A_\phi)\,\frac{\mu^{-2\gamma_\phi}}
{|x_{12}|^{2(\Delta_\phi + \gamma_\phi)}}\,,\\
\label{corrected s propagator in position space}
\langle s(x_1)s(x_2)\rangle &=C_s (1+A_s)\,\frac{\mu^{-2\gamma_s}}{|x_{12}|^{2(\Delta_s + \gamma_s)}}\,,
\end{align}
where $\mu$ is an arbitrary renormalization scale, $\gamma_{\phi,s}$ represent anomalous dimensions, whereas
the symbols $\Delta_{\phi,s}$ stand for the scaling dimensions of the fields $\phi$ and $s$ at the Gaussian fixed point,
\begin{equation}
\label{engineering scaling}
\Delta_\phi = \frac{d}{2}-1\,,\qquad \Delta_s =2 \,, \qquad 2\Delta_\phi +\Delta_s = d\, .
\end{equation}
While we have already derived the anomalous dimensions $\gamma_{\phi,s}$
in section \ref{sec:setup} for the five dimensional model,
in this section we calculate both the anomalous dimensions and the amplitudes $A_{\phi,s}$ in general dimension $d$. Our findings in this section
are in full agreement with the known results in the literature, \textit{e.g.}, \cite{Derkachov:1997ch},
and therefore the reader familiar with the subject can proceed to the next section.
\subsection{Preliminary remarks}
Performing the Fourier transform
\begin{equation}
\int\frac{d^dk}{(2\pi)^d}\,e^{ik\cdot x}\frac{1}{(k^2)^{\frac{d}{2}-\Delta}}=
\frac{2^{2\Delta-d}}{\pi^{\frac{d}{2}}}\frac{\Gamma(\Delta)}{\Gamma\left(\frac{d}{2}-\Delta\right)}
\frac{1}{|x|^{2\Delta}}\,,
\end{equation}
we find the coefficient $C_\phi$ in (\ref{corrected phi propagator in position space})
\begin{equation}
C_\phi = \frac{1}{4\pi^{\frac{d}{2}}}\,\Gamma\left(\frac{d}{2}-
1\right)\,.
\end{equation}
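As a sanity check (a side computation), this expression for $C_\phi$ reproduces the familiar massless propagators $1/(4\pi^2 x^2)$ in $d=4$ and $1/(4\pi|x|)$ in $d=3$:

```python
import math

def C_phi(d):
    # C_phi = Gamma(d/2 - 1)/(4 pi^(d/2))
    return math.gamma(d/2 - 1)/(4*math.pi**(d/2))

assert abs(C_phi(4.0) - 1/(4*math.pi**2)) < 1e-15
assert abs(C_phi(3.0) - 1/(4*math.pi)) < 1e-15
```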
Using expression (\ref{massless full s propagator}) for the $s$-propagator in momentum space,
as well as the result for the bubble loop integral $B$ in general $d$
\begin{equation}
\label{B in general d}
B(p) = \int\frac{d^dq}{(2\pi)^d}\frac{1}{q^2(p+q)^2}
=-\frac{(p^2)^{\frac{d}{2}-2}}{2^d(4\pi)^\frac{d-3}{2}\Gamma\left(\frac{d-1}{2}\right)
\sin\left(\frac{\pi d}{2}\right)}
\end{equation}
we obtain
\footnote{It agrees with \cite{Fei:2014yja} after reconciling conventions for normalization of
the Hubbard-Stratonovich field.}
\begin{equation}
C_s=\frac{2^d\,\Gamma\left(\frac{d-1}{2}\right)\sin\left(\frac{\pi d}{2}\right)}{\pi^\frac{3}{2}
\Gamma\left(\frac{d}{2}-2\right)}\,.
\end{equation}
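The amplitude $C_s$ follows from Fourier transforming $G_s(p)=-1/(2B(p))$ with $B$ from (\ref{B in general d}). The following numerical sketch confirms the closed form in generic $d$, as well as the $d=5$ value $B=-p/(128\pi)$ from (\ref{B0 in UV}):

```python
import math

g, pi, sin = math.gamma, math.pi, math.sin

def B_coeff(d):
    # coefficient of (p^2)^{d/2-2} in eq. (B in general d)
    return -1/(2**d*(4*pi)**((d - 3)/2)*g((d - 1)/2)*sin(pi*d/2))

assert abs(B_coeff(5.0) + 1/(128*pi)) < 1e-15      # matches eq. (B0 in UV)

def fourier_coeff(Delta, d):
    # maps (k^2)^{Delta-d/2} to |x|^{-2 Delta}, the Fourier transform above
    return 2**(2*Delta - d)*g(Delta)/(pi**(d/2)*g(d/2 - Delta))

def C_s(d):
    # claimed closed form for the position-space amplitude
    return 2**d*g((d - 1)/2)*sin(pi*d/2)/(pi**1.5*g(d/2 - 2))

for d in (4.6, 5.4):
    assert abs(-1/(2*B_coeff(d))*fourier_coeff(2.0, d) - C_s(d)) < 1e-9
```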
Let us now review the Feynman rules for a CFT in position space \cite{Vasiliev:1981yc,Vasiliev:1981dg,Vasiliev:1975mq}.
At the conformal fixed point, there is no need to keep separate notation (solid or dashed lines)
for the propagators of $\phi$ and $s$. Instead, we put the power index $2a$
(determined by the scaling dimension $a$ of the corresponding field) on top of the line. The line itself is assumed to be normalized to unity.
\begin{center}
\begin{picture}(98,0) (31,-65)
\SetWidth{1.0}
\SetColor{Black}
\Line[](30,-58)(90,-58)
\Vertex(30,-58){2}
\Vertex(90,-58){2}
\Text(15,-58)[lb]{$x_1$}
\Text(95,-58)[lb]{$x_2$}
\Text(55,-53)[lb]{$2a$}
\Text(120,-67)[lb]{$=\frac{1}{|x_{12}|^{2a}}$}
\end{picture}
\end{center}
In particular, the calculation of each diagram starts by counting and writing down explicitly all the factors of
$C_{\phi,s}$. A loop formed by two lines in position space simply amounts to adding the two power indices together.
\begin{center}
\begin{picture}(257,50) (0,0)
\SetWidth{1.0}
\SetColor{Black}
\Arc[clock](81,-39)(77.006,127.614,52.386)
\Arc[](80,86)(80,-126.87,-53.13)
\Line[](160,22)(256,22)
\Vertex(160,22){2}
\Vertex(256,22){2}
\Vertex(33,22){2}
\Vertex(129,22){2}
\Text(142,20)[lb]{$=$}
\Text(77,-5)[lb]{$2b$}
\Text(77,43)[lb]{$2a$}
\Text(190,27)[lb]{$2(a+b)$}
\end{picture}
\end{center}
We also need the propagator splitting/merging relation \cite{Vasiliev:1981yc,Vasiliev:1981dg,Vasiliev:1975mq}
\begin{equation}
\label{propagator splitting}
\int d^d x_3\, \frac{1}{(x_3^2)^a((x_3-x_{12})^2)^b}
=U(a,b,d-a-b)\,\frac{1}{(x_{12}^2)^{a+b-\frac{d}{2}}}\,,
\end{equation}
where
\begin{equation}
\label{U definition}
U(a,b,c) = \pi^\frac{d}{2}
\,\frac{\Gamma\left(\frac{d}{2}-a\right)\Gamma\left(\frac{d}{2}-b\right)\Gamma\left(\frac{d}{2}-c\right)}
{\Gamma(a)\Gamma(b)\Gamma(c)}\,.
\end{equation}
Expression (\ref{propagator splitting}) can be graphically represented as
\begin{center}
\begin{picture}(98,50) (130,-80)
\SetWidth{1.0}
\SetColor{Black}
\Line[](30,-58)(90,-58)
\Line[](90,-58)(150,-58)
\Line[](180,-58)(240,-58)
\Vertex(30,-58){2}
\Vertex(90,-58){4}
\Vertex(150,-58){2}
\Vertex(180,-58){2}
\Vertex(240,-58){2}
\Text(55,-53)[lb]{$2a$}
\Text(115,-53)[lb]{$2b$}
\Text(163,-61)[lb]{$=$}
\Text(180,-53)[lb]{$2(a+b)-d$}
\Text(250,-63)[lb]{$\times U(a,b,d-a-b)$}
\end{picture}
\end{center}
where the middle point on the l.h.s. is a vertex that is integrated over.
Additionally, we will make use of the following identity for $a_1+a_2+a_3=d$,
also known as the uniqueness relation
\cite{Parisi:1971,Symanzik:1972wj,Vasiliev:1981yc,Vasiliev:1981dg,Vasiliev:1975mq,Kazakov:1983ns,Kazakov:1984bw}
\begin{equation}
\label{uniqueness}
\int d^dx\,\frac{1}{|x_1-x|^{2a_1}|x_2-x|^{2a_2}|x_3-x|^{2a_3}}
=\frac{U(a_1,a_2,a_3)}{|x_{12}|^{d-2a_3}|x_{13}|^{d-2a_2}|x_{23}|^{d-2a_1}}\,,
\end{equation}
where the function $U$ is defined in (\ref{U definition}).
The uniqueness relation can be represented diagrammatically as \cite{Vasiliev:1981yc,Vasiliev:1981dg,Vasiliev:1975mq}
\begin{center}
\begin{picture}(210,66) (70,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](32,34)(80,2)
\Line[](80,2)(32,-30)
\Line[](80,2)(128,2)
\Line[](192,34)(192,-30)
\Line[](192,-30)(240,2)
\Line[](240,2)(192,34)
\Vertex(32,34){2}
\Vertex(32,-30){2}
\Vertex(80,2){4}
\Vertex(128,2){2}
\Vertex(192,34){2}
\Vertex(192,-30){2}
\Vertex(240,2){2}
\Text(55,-25)[lb]{$2a_1$}
\Text(55,22)[lb]{$2a_2$}
\Text(100,5)[lb]{$2a_3$}
\Text(160,-1)[lb]{$=$}
\Text(180,-1)[lb]{$\alpha$}
\Text(215,-28)[lb]{$\beta$}
\Text(215,24)[lb]{$\gamma$}
\Text(250,-5)[lb]{$\times \left(-\frac{2}{\sqrt{N}}\right) U\left(a_1,a_2,a_3\right)$}
\end{picture}
\end{center}
where the middle vertex on the l.h.s. is assumed to be integrated over, and we introduced
$\alpha = d-2a_3$, $\beta = d-2a_2$, $\gamma = d-2a_1$. Here we also accounted for the Feynman rule associated with the cubic vertex.
\begin{center}
\begin{picture}(210,66) (70,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](32,34)(80,2)
\Line[](80,2)(32,-30)
\Line[](80,2)(128,2)
\Vertex(80,2){4}
\Text(150,-7)[lb]{$=-\frac{2}{\sqrt{N}}$}
\end{picture}
\end{center}
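The uniqueness relation (\ref{uniqueness}) admits a similar numerical sanity check (again ours, using Python's \texttt{mpmath}); we take the illustrative symmetric configuration $d=1$, $a_1=a_2=a_3=1/3$, $x_1=0$, $x_2=1$, $x_3=2$:

```python
# Numerical check of the uniqueness relation in d = 1 with a1 = a2 = a3 = 1/3
# (so that a1 + a2 + a3 = d), at the points x1 = 0, x2 = 1, x3 = 2.
from mpmath import mp, mpf, quad, gamma, pi, inf

mp.dps = 30

def U(a, b, c, d):
    # U(a,b,c) = pi^(d/2) Gamma(d/2-a) Gamma(d/2-b) Gamma(d/2-c) / (Gamma(a) Gamma(b) Gamma(c))
    h = mpf(d) / 2
    return pi**h * gamma(h - a) * gamma(h - b) * gamma(h - c) / (gamma(a) * gamma(b) * gamma(c))

d = 1
a = mpf(1) / 3
integrand = lambda x: (abs(x) * abs(x - 1) * abs(x - 2))**(-2 * a)
lhs = quad(integrand, [-inf, 0, 1, 2, inf])  # split at the integrable singularities
# r.h.s.: U(a1,a2,a3) / (|x12|^(d-2a3) |x13|^(d-2a2) |x23|^(d-2a1)); |x12| = |x23| = 1, |x13| = 2
rhs = U(a, a, a, d) / mpf(2)**(d - 2 * a)
assert abs(lhs / rhs - 1) < mpf('1e-10')
```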
\subsection{$\phi$ propagator}
Let us consider the $1/N$ correction to the $\phi$ propagator. It is given by the following 1-loop diagram
\begin{center}
\begin{picture}(670,60) (125,-40)
\Text(140,-7)[lb]{$P\;\;=$}
\Vertex(180,-4){2}
\Line[](180,-4)(230,-4)
\Text(175,0)[lb]{$x_1$}
\Text(215,0)[lb]{$x_3$}
\Text(285,0)[lb]{$x_4$}
\Vertex(230,-4){4}
\Arc[](255,-4)(25,-180,-360)
\Arc[](255,-4)(25,0,180)
\Vertex(280,-4){4}
\Line[](280,-4)(330,-4)
\Text(325,0)[lb]{$x_2$}
\Vertex(330,-4){2}
\Text(195,0)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(253,-42)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(253,25)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(305,0)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\end{picture}
\end{center}
This diagram represents the simplest application of the Feynman rules formulated in the previous subsection. It diverges for any $d$, and therefore we regularize it by adding $\delta/2\ll 1$ to the scaling dimension of $s$ \cite{Vasiliev:1981yc,Vasiliev:1981dg,Vasiliev:1975mq}. In other words, we analytically continue the diagram in $\Delta_s$ rather than in $d$. Applying the merging relation (\ref{propagator splitting}) twice yields
\begin{align}
P(x_1,x_2)&=\left(-\frac{2}{\sqrt{N}}\right)^2 C_\phi^3 C_s\,\mu^{-\delta}\,
\int d^dx_{3,4}\frac{1}{|x_{13}|^{2\Delta_\phi}
|x_{24}|^{2\Delta_\phi}|x_{34}|^{2\Delta_\phi+2\Delta_s+\delta}}\notag\\
&=\frac{4}{N}C_\phi^3 C_s\,\mu^{-\delta}\,
U\left(\Delta_\phi,\Delta_\phi +\Delta_s+\frac{\delta}{2},-\frac{\delta}{2}\right)
\int d^dx_3 \frac{1}{|x_{13}|^{2\Delta_\phi}|x_{23}|^{d+\delta}}\\
&=\frac{4}{N}C_\phi^3 C_s\,\mu^{-\delta}\,
U\left(\Delta_\phi,\Delta_\phi +\Delta_s+\frac{\delta}{2},-\frac{\delta}{2}\right)
U\left(\Delta_\phi,\frac{d+\delta}{2},\frac{d-\delta}{2}-\Delta_\phi\right)\,
\frac{1}{|x_{12}|^{2\Delta_\phi +\delta}}\,.\notag
\end{align}
Combining with the free propagator (\ref{free phi propagator in position space})
and expanding around $\delta=0$, we obtain
\begin{equation}
\label{phi propagator result position space 1}
\langle \phi(x_1)\phi(x_2)\rangle
=\frac{C_\phi}{|x_{12}|^{2\Delta_\phi}}\,\left(1+\left(\frac{2\gamma_\phi}{\delta}
+A_\phi+{\cal O}(\delta)\right)\,\frac{1}{(|x_{12}|\mu)^\delta}\right)\,.
\end{equation}
Here
\begin{align}
\label{gamma phi 1over N general}
\gamma_\phi &= \frac{1}{N}\,
\frac{2^d \sin \left(\frac{\pi d}{2}\right) \Gamma \left(\frac{d-1}{2}\right)}
{\pi ^{3/2} (d-2) d \Gamma \left(\frac{d}{2}-2\right)}\,,\\
\label{A phi 1over N general}
A_\phi &= \gamma_\phi\,\left(\frac{d}{2-d}-\frac{2}{d}\right)\,.
\end{align}
The divergence in the correlation function is readily removed by wave function renormalization,
\begin{equation}
\label{general d Z phi}
\phi = \sqrt{Z_\phi}\, \tilde \phi\, , \qquad Z_\phi = 1 + \frac{2\gamma_\phi}{\delta}\,.
\end{equation}
As a result, the correlation function for the physical field $\tilde \phi$ to the next-to-leading order in the $1/N$ expansion takes the form (\ref{corrected phi propagator in position space}) with $\gamma_\phi$, $A_\phi$ given by (\ref{gamma phi 1over N general}), (\ref{A phi 1over N general}).
Note that $\gamma_\phi$ in $d=5$ agrees with (\ref{gamma_phi d=5 result}), whereas in general $d$ our results match \cite{Vasiliev:1981yc,Vasiliev:1981dg,Petkou:1995vu,Petkou:1994ad,Derkachov:1997ch}. In particular, the anomalous dimension (\ref{gamma phi 1over N general}) and the amplitude shift (\ref{A phi 1over N general})
in $d=6-\epsilon$ dimensions are given by
\begin{align}
\gamma_\phi &= \frac{\epsilon}{N}+{\cal O}(\epsilon^2)\,,\\
\label{Aphi 6 - epsilon}
A_\phi &= -\frac{11\epsilon}{6N}+{\cal O}(\epsilon^2)\,.
\end{align}
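As a cross-check of these expansions (ours, not part of the original derivation), one can evaluate the general-$d$ expressions (\ref{gamma phi 1over N general}), (\ref{A phi 1over N general}) at a small numerical $\epsilon$ using Python's \texttt{mpmath}; the overall factor of $1/N$ is stripped off:

```python
# Check the d = 6 - eps expansion of gamma_phi and A_phi against their
# general-d expressions, evaluated at a small numerical eps (N is factored out).
from mpmath import mp, mpf, sin, gamma, pi

mp.dps = 40
eps = mpf('1e-8')
d = 6 - eps

# N * gamma_phi, eq. (gamma phi 1over N general)
gph = mpf(2)**d * sin(pi * d / 2) * gamma((d - 1) / 2) \
    / (pi**mpf('1.5') * (d - 2) * d * gamma(d / 2 - 2))
# N * A_phi, eq. (A phi 1over N general)
Aph = gph * (d / (2 - d) - 2 / d)

assert abs(gph / eps - 1) < mpf('1e-4')            # gamma_phi = eps/N + O(eps^2)
assert abs(Aph / eps + mpf(11) / 6) < mpf('1e-4')  # A_phi = -11 eps/(6N) + O(eps^2)
```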
\subsection{Hubbard-Stratonovich propagator}
Next we derive the $1/N$ correction to the propagator of the auxiliary field $s$. As in section \ref{sec:d5fixed point}, we have three diagrams $C_{1,2,3}$. Furthermore, since the model is regulated by analytic continuation in the scaling dimension of the fields, there is no need to introduce explicitly the counterterm diagrams $C_{4,5}$ of section \ref{sec:d5fixed point}. They are associated with the wave function renormalization and we merely implement it at the end of calculation.
For each diagram we begin by counting and writing down the prefactor associated with the amplitudes $C_{\phi,s}$.
To this end, denote
\begin{equation}
C_1(x_1,x_2) = C_s^3 C_\phi^4\,\tilde C_1(x_1,x_2)\,\mu^{-\delta}\,,
\end{equation}
where $\tilde C_1(x_1,x_2)$ is given by
\begin{center}
\begin{picture}(194,90) (31,-37)
\SetWidth{1.0}
\SetColor{Black}
\Text(-15,5)[lb]{$\tilde C_1=$}
\Line[](32,10)(94,10)
\Text(55,15)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Arc[](128,10)(35,153,513)
\Line[](162,10)(224,10)
\Text(190,15)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Line(128,44)(128,-24)
\Text(133,15)[lb]{\scalebox{0.6}{$2\Delta_s+\delta$}}
\Text(73,35)[lb]{\scalebox{0.6}{$2\Delta_\phi-\eta$}}
\Text(73,-20)[lb]{\scalebox{0.6}{$2\Delta_\phi+\eta$}}
\Text(157,-20)[lb]{\scalebox{0.6}{$2\Delta_\phi+\eta$}}
\Text(157,35)[lb]{\scalebox{0.6}{$2\Delta_\phi-\eta$}}
\Vertex(32,10){2}
\Vertex(94,10){4}
\Vertex(162,10){4}
\Vertex(224,10){2}
\Vertex(128,-24){4}
\Vertex(128,44){4}
\Text(17,5)[lb]{\scalebox{1}{$x_1$}}
\Text(230,5)[lb]{\scalebox{1}{$x_2$}}
\Text(80,-5)[lb]{\scalebox{1}{$x_3$}}
\Text(165,-5)[lb]{\scalebox{1}{$x_6$}}
\Text(125,50)[lb]{\scalebox{1}{$x_4$}}
\Text(125,-40)[lb]{\scalebox{1}{$x_5$}}
\end{picture}
\end{center}
We introduced an extra power $\eta$ on the internal lines representing the $\phi$ propagator, see {\it e.g.,}
\cite{Vasiliev:1981yc,Vasiliev:1981dg,Ciuchini:1999wy,Belokurov:1984da}.
Ultimately we are interested in the limit $\eta, \delta\rightarrow 0$. However, this limit
can be reliably taken as long as $\eta={\cal O}(\delta)$. Indeed, the resulting diagram
is both symmetric w.r.t. $\eta\rightarrow-\eta$ and finite as $\eta\to 0$; therefore its series expansion around $\eta=0$ takes the form \cite{Gubser:2017vgc}
\begin{equation}
\tilde C_1= f_0+f_2\,\eta^2+f_4\eta^4+\dots
\end{equation}
Further expanding it around $\delta=0$ and keeping in mind that the coefficients $f_a$
have at most simple poles at $\delta=0$, we conclude that $\tilde C_1=f_0$
in the limit $\delta\rightarrow 0$, as long as $\eta={\cal O}(\delta)$.
It is convenient to choose $\eta=\delta/2$. As a result, we obtain
\begin{align}
\tilde C_1&=\frac{1}{2}\left(-\frac{2}{\sqrt{N}}\right)^4N
\int d^dx_{3,6}\frac{1}{(|x_{13}||x_{26}|)^{2\Delta_s}}
\int d^dx_5\frac{1}{|x_{35}|^{2\Delta_\phi+\frac{\delta}{2}}
|x_{56}|^{2\Delta_\phi+\frac{\delta}{2}}}\notag\\
&\times\int d^dx_4\frac{1}{|x_{34}|^{2\Delta_\phi-\frac{\delta}{2}}
|x_{45}|^{2\Delta_s+\delta}
|x_{46}|^{2\Delta_\phi-\frac{\delta}{2}}}\,.
\end{align}
The presence of $\eta=\delta/2$ makes it possible to use the uniqueness relation (\ref{uniqueness})
to integrate over $x_4$. Subsequently applying the propagator merging relation
(\ref{propagator splitting}) to the integral over $x_5$ yields
\begin{align}
\label{tilde C1 result}
\tilde C_1&=\frac{8}{N}\,
U\left(\Delta_\phi-\frac{\delta}{4},\Delta_\phi-\frac{\delta}{4},
\Delta_s+\frac{\delta}{2}\right)
U\left(\frac{d+\delta}{2},\frac{d+\delta}{2},-\delta \right)\notag\\
&\times\int d^dx_{3,6}\frac{1}{(|x_{13}|
|x_{26}|)^{2\Delta_s}}
\frac{1}{|x_{36}|^{2d-2\Delta_s+\delta}}\,.
\end{align}
We postpone carrying out the remaining integrals over the two edge points $x_{3,6}$. Similar integrals appear in the calculation of $C_{2,3}$, and therefore it makes sense to evaluate them after we have assembled all the terms
$C_{1,2,3}$ together.
The next diagram contributing to the $s$ propagator at the ${\cal O}(1/N)$ order is denoted by
\begin{equation}
\label{C2 in terms of C tilde 2}
C_2(x_1,x_2) = C_s^4 C_\phi^6 \tilde C_2(x_1,x_2)\,\mu^{-\delta}\,,
\end{equation}
where
\begin{center}
\begin{picture}(322,100) (31,-37)
\SetWidth{1.0}
\SetColor{Black}
\Text(-15,11)[lb]{$\tilde C_2=$}
\Text(55,22)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(320,22)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(80,-22)[lb]{\scalebox{0.6}{$2\Delta_\phi+\eta$}}
\Text(275,-22)[lb]{\scalebox{0.6}{$2\Delta_\phi-\eta$}}
\Text(280,45)[lb]{\scalebox{0.6}{$2\Delta_\phi+\eta$}}
\Text(80,45)[lb]{\scalebox{0.6}{$2\Delta_\phi-\eta$}}
\Text(180,55)[lb]{\scalebox{0.6}{$2\Delta_s+\delta/2$}}
\Text(180,-30)[lb]{\scalebox{0.6}{$2\Delta_s+\delta/2$}}
\Text(145,11)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(225,11)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(15,11)[lb]{$x_1$}
\Text(360,11)[lb]{$x_2$}
\Text(80,5)[lb]{$x_7$}
\Text(295,5)[lb]{$x_8$}
\Text(140,55)[lb]{$x_3$}
\Text(230,55)[lb]{$x_4$}
\Text(140,-30)[lb]{$x_5$}
\Text(230,-30)[lb]{$x_6$}
\Line[](32,16)(94,16)
\Arc[](128,16)(35,153,513)
\Line[](144,48)(240,48)
\Line[](144,-16)(240,-16)
\Arc[](256,16)(35,153,513)
\Line[](292,16)(354,16)
\Vertex(32,16){2}
\Vertex(94,16){4}
\Vertex(292,16){4}
\Vertex(354,16){2}
\Vertex(144,48){4}
\Vertex(240,48){4}
\Vertex(144,-16){4}
\Vertex(240,-16){4}
\end{picture}
\end{center}
Notice that we regularized this diagram by supplementing the internal $s$ propagators with an additional power of $\delta/2$ rather than $\delta$. This is done in order to ensure the same total compensating power of $\mu$ on the r.h.s.
of (\ref{C2 in terms of C tilde 2}) as in $C_1$, namely $\mu^{-\delta}$; such a prescription guarantees a consistent regularization of the diagrams \cite{Gubser:2017vgc}. We have also modified some of the $\phi$ lines, relying on the technique we used in evaluating the diagram $\tilde C_1$. Setting $\eta=\frac{\delta}{2}$ \cite{Ciuchini:1999wy,Belokurov:1984da}, see also \cite{Gubser:2017vgc},
allows us to carry out the integrals over $x_{3,6}$ using the uniqueness relation
(\ref{uniqueness}),
\begin{align}
\tilde C_2&=\frac{1}{2}\,\left(-\frac{2}{\sqrt{N}}\right)^6\,N^2\,
\int d^dx_{7,8}\frac{1}{(|x_{17}||x_{28}|)^{2\Delta_s}}
\int d^dx_{4,5}\frac{1}{|x_{48}|^{2\Delta_\phi+\frac{\delta}{2}}
|x_{57}|^{2\Delta_\phi+\frac{\delta}{2}}}\notag\\
&\times
\int d^dx_3\frac{1}{|x_{37}|^{2\Delta_\phi-\frac{\delta}{2}}
|x_{34}|^{2\Delta_s+\frac{\delta}{2}}
|x_{35}|^{2\Delta_\phi}}
\int d^dx_6\frac{1}{|x_{68}|^{2\Delta_\phi-\frac{\delta}{2}}
|x_{56}|^{2\Delta_s+\frac{\delta}{2}}
|x_{46}|^{2\Delta_\phi}}\,.
\end{align}
Integrating over $x_{3,6}$ gives
\begin{align}
\tilde C_2&=\frac{32}{N}\, U\left(\Delta_\phi-\frac{\delta}{4}, \Delta_s+\frac{\delta}{4}, \Delta_\phi\right)^2 \,
\int d^dx_{7,8}\frac{1}{(|x_{17}||x_{28}|)^{2\Delta_s}}
\tilde c_2(x_7,x_8;0)\,,
\end{align}
where
\begin{align}
\tilde c_2(x_7,x_8;\eta')
=\int d^dx_{4,5}\frac{1}{|x_{48}|^{2d-3\Delta_s-\eta'}
|x_{57}|^{2d-3\Delta_s+\eta'}
|x_{45}|^{2\Delta_s+\delta}
|x_{47}|^{\Delta_s-\eta'}
|x_{58}|^{\Delta_s+\eta'}}\,.
\end{align}
Here we have introduced an additional exponent $\eta'$
\cite{Vasiliev:1981yc,Vasiliev:1981dg,Gubser:2017vgc}, which does not affect the final value of the diagram.
Indeed, by performing the following change of integration variables
\begin{equation}
x_4\rightarrow x_7+x_8-x_5\,,\qquad
x_5\rightarrow x_7+x_8-x_4\,,
\end{equation}
one can see that $\tilde c_2(x_7,x_8;\eta')=\tilde c_2(x_7,x_8;-\eta')$.
Therefore, similarly to the calculation of $\tilde C_1$, any choice of $\eta'={\cal O}(\delta)$ does not change the value of the diagram in the limit $\delta\rightarrow 0$.
Specifically choosing $\eta'=\frac{\delta}{2}$ makes it possible to apply the uniqueness
relation (\ref{uniqueness}) to the integral over $x_4$ \cite{Gubser:2017vgc}, followed by
the propagator merging relation (\ref{propagator splitting})
to the integral over $x_5$,
\begin{align}
\label{tilde C2 result}
\tilde C_2&=\frac{32}{N}\,
U\left(\Delta_\phi-\frac{\delta}{4},\Delta_s+\frac{\delta}{4},
\Delta_\phi\right)^2
U\left(d-\frac{3\Delta_s}{2}-\frac{\delta}{4},\Delta_s+\frac{\delta}{2},
\frac{\Delta_s}{2}-\frac{\delta}{4}\right)\notag\\
&\times U\left(\frac{d+\delta}{2},\frac{d+\delta}{2},-\delta \right)\int d^dx_{7,8}\frac{1}{(|x_{17}|
|x_{28}|)^{2\Delta_s}}
\frac{1}{|x_{78}|^{2d-2\Delta_s+\delta}}\,.
\end{align}
Finally, the last diagram is given by
\begin{equation}
C_3(x_1,x_2) = C_s^3 C_\phi^4 \tilde C_3(x_1,x_2)\,\mu^{-\delta}\,,
\end{equation}
where
\begin{center}
\begin{picture}(194,90) (31,-27)
\SetWidth{1.0}
\SetColor{Black}
\Text(-15,5)[lb]{$\tilde C_3=$}
\Text(16,5)[lb]{$x_1$}
\Text(230,5)[lb]{$x_2$}
\Line[](32,10)(94,10)
\Text(55,15)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(78,15)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(168,15)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(80,0)[lb]{$x_3$}
\Text(168,0)[lb]{$x_4$}
\Text(85,30)[lb]{$x_5$}
\Text(163,30)[lb]{$x_6$}
\Arc[](128,10)(35,153,513)
\Line[](162,10)(224,10)
\Text(195,15)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Line[](97,25)(159,25)
\Text(115,15)[lb]{\scalebox{0.6}{$2\Delta_s+\delta$}}
\Text(125,48)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(125,-36)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Vertex(32,10){2}
\Vertex(94,10){4}
\Vertex(162,10){4}
\Vertex(224,10){2}
\Vertex(97,25){4}
\Vertex(159,25){4}
\end{picture}
\end{center}
This diagram contains a sub-diagram that was evaluated already, namely,
the one-loop correction to the $\phi$ propagator. Hence, it can be readily simplified,
\begin{align}
\label{tilde C3 result}
\tilde C_3&=\frac{16}{N}\,
U\left(\Delta_\phi,\Delta_\phi+\Delta_s+\frac{\delta}{2},
-\frac{\delta}{2}\right)
U\left(\Delta_\phi,\frac{d+\delta}{2},\frac{d-\delta}{2}-\Delta_\phi \right)\notag\\
&\times\int d^dx_{3,4}\frac{1}{(|x_{13}|
|x_{24}|)^{2\Delta_s}}
\frac{1}{|x_{34}|^{2d-2\Delta_s+\delta}}\,.
\end{align}
Next we combine $C_{1,2,3}$.
To begin with, we note that (\ref{tilde C1 result}),
(\ref{tilde C2 result}), (\ref{tilde C3 result}) exhibit a similar structure,
\begin{equation}
\tilde C_i=\hat C_i \int d^dx_{3,4}\frac{1}{(|x_{13}|
|x_{24}|)^{2\Delta_s}}
\frac{1}{|x_{34}|^{2d-2\Delta_s+\delta}}\,.
\end{equation}
Using (\ref{propagator splitting}) twice to perform the remaining integrals over $x_{3,4}$ yields
\begin{equation}
\tilde C_i=U\left(\Delta_s,
d-\Delta_s+\frac{\delta}{2},-\frac{\delta}{2}\right)
U\left(\Delta_s,\frac{d+\delta}{2},
\frac{d-\delta}{2}-\Delta_s\right)\,\hat C_i\,\frac{1}{|x_{12}|^{2\Delta_s + \delta}}\,.
\end{equation}
Summing up all three diagrams and expanding around $\delta=0$, we obtain
\begin{equation}
\label{s propagator general result}
\langle s(x_1)s(x_2)\rangle
=\frac{C_s}{|x_{12}|^{2\Delta_s}}\,\left(
1+\left(\frac{2\gamma_s}{\delta}+A_s+{\cal O}(\delta)\right)\,\frac{1}{(|x_{12}|\mu)^\delta}\right)\,,
\end{equation}
with
\begin{align}
\label{gamma s result}
\gamma_s &= \frac{1}{N}\,\frac{4 \sin \left(\frac{\pi d}{2}\right) \Gamma (d)}
{\pi \Gamma \left(\frac{d}{2}+1\right) \Gamma \left(\frac{d}{2}-1\right)}\,,\\
\label{A s 1over N general}
A_s&=2\gamma_\phi\,\left(\frac{d(d-3)+4}{4-d}
\left( H_{d-3}+\pi \cot \Big(\frac{\pi d}{2}\Big)\right)
+\frac{8}{(d-4)^2}+\frac{2}{d-2}+\frac{2}{d}-2 d-1\right)\,,
\end{align}
where $H_{n}$ is the $n$-th harmonic number.
As before, the divergence in the correlation function is removed by wave function renormalization,
\begin{equation}
\label{general d Z s}
s=\sqrt{Z_s} \tilde s \, , \qquad Z_s = 1 + \frac{2\gamma_s}{\delta}\,.
\end{equation}
In particular, the correlation function for the physical field $\tilde s$ to the next-to-leading order in the $1/N$ expansion takes the form (\ref{corrected s propagator in position space}) with $\gamma_s$, $A_s$ given by (\ref{gamma s result}), (\ref{A s 1over N general}).
Notice that the wave function renormalization constants (\ref{general d Z phi}), (\ref{general d Z s}) generate
the following renormalization of the bare term in the action (\ref{main action})
\begin{equation}
\frac{1}{\sqrt{N}}\,\phi ^ 2 s = \frac{1}{\sqrt{N}}\,\tilde\phi ^ 2 \tilde s
+ \frac{2\gamma_\phi + \gamma_s}{\delta} \, \frac{1}{\sqrt{N}}\,\tilde\phi ^ 2 \tilde s\,,
\end{equation}
where we keep $\mathcal{O}(1/N)$ corrections only. Thus the counterterm is given by
\begin{equation}
\label{S int ct in general dimension}
S_{\textrm{int}}^{\textrm{c.t.}} = \frac{2\gamma_\phi + \gamma_s}{\delta} \, \frac{1}{\sqrt{N}}\,\tilde\phi ^ 2 \tilde s\,.
\end{equation}
To avoid clutter, we suppress the tilde over the renormalized fields $\tilde\phi$, $\tilde s$ in what follows. Of course,
physical correlators are associated with the renormalized fields.
Before closing this section we note that $\gamma_s$ in $d=5$ agrees with (\ref{gamma_s d=5 result}), whereas in general $d$ our results match \cite{Vasiliev:1981yc,Vasiliev:1981dg,Petkou:1995vu,Petkou:1994ad,Derkachov:1997ch}. In particular, the anomalous dimension (\ref{gamma s result}) and the amplitude shift (\ref{A s 1over N general}) in $d=6-\epsilon$ dimensions are given by
\begin{align}
\gamma_s &= \frac{40\epsilon}{N}+{\cal O}(\epsilon^2)\,,\\
\label{As 6 - epsilon}
A_s &= \frac{44}{N}+{\cal O}(\epsilon)\,.
\end{align}
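The same kind of numerical cross-check (ours, with the factor of $1/N$ stripped off and an arbitrary small $\epsilon$) confirms the $d=6-\epsilon$ expansions of (\ref{gamma s result}) and (\ref{A s 1over N general}):

```python
# Check the d = 6 - eps expansions of gamma_s and A_s against their general-d
# expressions (N is factored out). Note that the 1/eps pole of cot(pi*d/2)
# in A_s is compensated by gamma_phi ~ eps.
from mpmath import mp, mpf, sin, cot, gamma, harmonic, pi

mp.dps = 40
eps = mpf('1e-8')
d = 6 - eps

# N * gamma_phi and N * gamma_s from the general-d formulas
gph = mpf(2)**d * sin(pi*d/2) * gamma((d-1)/2) / (pi**mpf('1.5') * (d-2) * d * gamma(d/2 - 2))
gs = 4 * sin(pi*d/2) * gamma(d) / (pi * gamma(d/2 + 1) * gamma(d/2 - 1))
# N * A_s, eq. (A s 1over N general)
As = 2 * gph * ((d*(d-3) + 4)/(4 - d) * (harmonic(d - 3) + pi*cot(pi*d/2))
                + 8/(d-4)**2 + 2/(d-2) + 2/d - 2*d - 1)

assert abs(gs / eps - 40) < mpf('1e-4')  # gamma_s = 40 eps/N + O(eps^2)
assert abs(As - 44) < mpf('1e-4')        # A_s = 44/N + O(eps)
```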
\section{Conformal three-point functions}
\label{sec:three-point}
In this section we calculate the next-to-leading order correction to
the OPE coefficients of the conformal three-point functions
$\langle \phi \phi s\rangle$ and $\langle s s s \rangle$ associated with the fundamental
scalar field $\phi$ and the auxiliary Hubbard-Stratonovich field $s$. In a general $d$-dimensional
$O(N)$ symmetric CFT,
the results for $\langle \phi \phi s\rangle$ to the next-to-leading order
and leading order $\langle sss\rangle$ correlator
are known based on the conformal
bootstrap approach \cite{Petkou:1995vu,Petkou:1994ad}
(see also \cite{Lang:1993ct,Leonhardt:2003du}
for discussion in the context of the $O(N)$ sigma model). In contrast, we recover these results
in the critical $\phi^4$ model from a new perspective that has not previously appeared in the literature, and extend the calculations to derive a new OPE coefficient of the three-point function $\langle sss\rangle$.
Furthermore, to facilitate these calculations we evaluate the 4-loop conformal triangle diagram, the associated 3-loop trapezoid graph, and the 3-loop bellows diagram in general dimension $d$. To the best of our knowledge, these diagrams have not been evaluated in the literature before. They appear naturally in numerous conformal calculations, and might therefore prove useful in the future.
\subsection{$\langle \phi \phi s\rangle$}
\label{sec:phi phi s}
The leading order result for the $\langle \phi \phi s\rangle$ correlator follows directly from the tree-level diagram,
\begin{equation}
\langle \phi(x_1) \phi(x_2) s(x_3)\rangle \Bigg|_{\textrm{leading}}
=-\frac{2}{\sqrt{N}}\,C_\phi^2 C_s\,\int d^dx\frac{1}{|x_1-x|^{2\Delta_\phi}
|x_2-x|^{2\Delta_\phi}|x_3-x|^{2\Delta_s}}\,.
\end{equation}
Using the uniqueness relation (\ref{uniqueness}) we obtain
the conformal three-point function
\begin{equation}
\langle \phi(x_1) \phi(x_2) s(x_3)\rangle \Bigg|_{\textrm{leading}}
=\frac{C_{\phi\phi s}}{|x_{12}|^{2\Delta_\phi-\Delta_s}|x_{13}|^{\Delta_s}
|x_{23}|^{\Delta_s}}\,,
\end{equation}
where the leading order coefficient is given by
\begin{equation}
\label{Original phi phi s}
C_{\phi\phi s}=-\frac{2}{\sqrt{N}}\,
C_\phi^2 C_sU(\Delta_\phi, \Delta_\phi, \Delta_s) \,.
\end{equation}
It is convenient to express the three-point functions in terms of normalized fields
\begin{equation}
\label{field normalizations}
\phi\rightarrow \phi \, \sqrt{C_\phi(1+A_\phi)}\,,\qquad
s\rightarrow s \, \sqrt{C_s(1+A_s)}\,,
\end{equation}
\textit{i.e.}, fields whose two-point functions are normalized to unity. The OPE coefficient, $\tilde C_{\phi\phi s}$, associated with the normalized fields is given
by\footnote{We ignore $A_{\phi,s}$ in (\ref{field normalizations}) to leading order in the $1/N$ expansion.}
\begin{equation}
\tilde C_{\phi\phi s} = \frac{C_{\phi\phi s}}{C_\phi C_s^\frac{1}{2}}\,,
\end{equation}
or equivalently \cite{Petkou:1995vu,Petkou:1994ad},
\begin{equation}
\label{phi phi s leading}
\tilde C_{\phi\phi s} =-\frac{2}{\sqrt{N}}\, C_\phi C_s^\frac{1}{2}U(\Delta_\phi, \Delta_\phi, \Delta_s) \,.
\end{equation}
In particular, in $d=6-\epsilon$ dimensions we obtain
\begin{equation}
\tilde C_{\phi\phi s} = - \sqrt{\frac{6\epsilon}{N}} + {\cal O}(\epsilon)\,.
\end{equation}
Next we evaluate the $1/N$ correction to the three-point coefficient $\tilde C_{\phi\phi s}$ in general dimension,
and demonstrate that it agrees with \cite{Petkou:1995vu,Petkou:1994ad}.\footnote{
We thank Simone Giombi, Igor Klebanov and Gregory Tarnopolsky for attracting our attention to \cite{Petkou:1994ad} and encouraging us to carry out the calculation in general dimension.}
In fact, the only known complete derivation of the $\langle\phi\phi s\rangle$ three-point
function to the next-to-leading order in the large-$N$ expansion has been given
in \cite{Petkou:1995vu,Petkou:1994ad} by solving the consistency constraints in the bootstrap approach.
Therefore the direct diagrammatic technique followed below provides an independent derivation of this result.
We begin by determining the dressed propagators and vertices,
which will also be used later to calculate the next-to-leading order correction to the
three-point function $\langle sss\rangle$.
The relevant diagrams
are obtained by dressing the propagators and $\phi\phi s$ interaction
vertex of the tree level diagram. Let us denote the dressed propagators
(\ref{corrected phi propagator in position space}) and
(\ref{corrected s propagator in position space}) by a line and a grey blob,
\begin{center}
\begin{picture}(162,34) (130,-26)
\SetWidth{1.0}
\SetColor{Black}
\Line[](144,-14)(208,-14)
\Text(115,5)[lb]{\scalebox{0.6}{$2(\Delta+\gamma)$}}
\GOval(128,-14)(16,16)(0){0.882}
\Line[](48,-14)(112,-14)
\Vertex(48,-14){2}
\Vertex(208,-14){2}
\Text(215,-17)[lb]{$x_2$}
\Text(32,-17)[lb]{$x_1$}
\Text(240,-23)[lb]{$=C (1+A)\,\frac{\mu^{-2\gamma}}
{|x_{12}|^{2(\Delta + \gamma)}}$}
\end{picture}
\end{center}
Here the set of parameters $(C,A,\Delta,\gamma)$ stands either for
$(C_\phi,A_\phi,\Delta_\phi,\gamma_\phi)$, when the $\phi$-propagator is considered, or
for $(C_s,A_s,\Delta_s,\gamma_s)$, when the line represents $s$-propagator.
The full three-point function can then be obtained by evaluating the following diagram
\begin{center}
\begin{picture}(260,182) (31,-47)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,34)(128,34)
\GOval(144,34)(16,16)(0){0.882}
\Line[](180,70)(156,46)
\Line[](180,-2)(154,22)
\GOval(192,82)(16,16)(0){0.882}
\Line[](228,118)(204,94)
\Vertex(228,118){2}
\Text(235,118)[lb]{$x_1$}
\Text(235,-55)[lb]{$x_2$}
\GOval(192,-14)(16,16)(0){0.882}
\Line[](228,-50)(204,-26)
\Vertex(228,-50){2}
\Text(65,55)[lb]{\scalebox{0.6}{$2(\Delta_s+\gamma_s)$}}
\Text(30,20)[lb]{$x_3$}
\GOval(80,34)(16,16)(0){0.882}
\Line[](32,34)(64,34)
\Vertex(32,34){2}
\Text(170,102)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)$}}
\Text(170,-40)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)$}}
\end{picture}
\end{center}
where the grey blob in the center represents a dressed three-point vertex.\footnote{By definition, the dressed vertex has no propagators attached to the external legs (amputated diagram).} By conformal invariance this diagram takes the form
\begin{equation}
\label{s phi phi expected form}
\langle \phi(x_1)\phi(x_2)s(x_3)\rangle =\tilde C_{\phi\phi s}\,\frac{(1+W_{\phi\phi s})\mu^{-2\gamma_\phi-\gamma_s}}
{|x_{12}|^{2(\Delta_\phi+\gamma_\phi)-(\Delta_s+\gamma_s)}
|x_{13}|^{\Delta_s+\gamma_s}|x_{23}|^{\Delta_s+\gamma_s}}\,,
\end{equation}
where the fields $\phi$ and $s$ are normalized such that their two-point function has unit amplitude. Our goal is to calculate $W_{\phi\phi s}$ to order ${\cal O}\left(\frac{1}{N}\right)$. It represents a subleading correction to the OPE coefficient $\tilde C_{\phi\phi s}$.
The dressed 1PI vertex, $\Gamma(x,y,z)$, in the above diagram was evaluated in \cite{Derkachov:1997ch} to order $\mathcal{O}(1/N)$; see eq. (4.18) there,
\begin{align}
\label{conformal triangle phi phi s integral}
\int\int\int \Gamma(x_1,x_2,x_3) \phi(x_1)\phi(x_2)s(x_3)= \frac{\hat Z}{\sqrt{N}}\int\int\int
\frac{ \mu^{-2\gamma_\phi-\gamma_s} \phi(x_1)\phi(x_2)s(x_3)}{|x_3-x_1|^{2\alpha}|x_3-x_2|^{2\alpha}|x_1-x_2|^{2\beta}}\,,
\end{align}
where in notations of \cite{Derkachov:1997ch}
\begin{equation}
\alpha = \Delta_\phi-\frac{\gamma_s}{2}\,,\quad
\beta = \Delta _ s - \gamma_\phi +\frac{\gamma_s}{2}\,,
\end{equation}
and $\hat Z$ is a constant given by eq. (A.4) in \cite{Derkachov:1997ch}
\footnote{Our notations differ from those of \cite{Derkachov:1997ch}.
The precise relation is $\mu\rightarrow\tilde\mu$, $\gamma_\psi\rightarrow\gamma_s$,
$H(\Delta)\rightarrow A(\Delta)$.}
\begin{align}
&\hat Z = -\frac{\chi}{2}\,\frac{A(1)^2A(\tilde \mu-2)\Gamma(\tilde \mu)}{\pi^{2\tilde\mu}}\,
\left(1+\delta V'\right)\,,\quad \chi = - (2\gamma_\phi+\gamma_s)\,,\notag \\
&A(\Delta) = \frac{\Gamma\left(\frac{d}{2}-\Delta\right)}{\Gamma(\Delta)} \,,\quad \tilde\mu = \frac{d}{2}\,,\\
& \delta V' = \frac{\eta_1}{N}\frac{ (\tilde\mu -3) \left(6 \tilde\mu ^2-9 \tilde\mu +2\right)}{(\tilde\mu -2)^2}\,,\quad
\eta_1 \equiv \frac{4 (2-\tilde\mu ) \Gamma (2 \tilde\mu -2)}{\Gamma (\tilde\mu -1)^2 \Gamma (2-\tilde\mu )
\Gamma (\tilde\mu +1)}\,. \notag
\end{align}
To get $W_{\phi\phi s}$ one should simply attach the dressed propagators to $\Gamma(x,y,z)$ and apply the uniqueness formula (\ref{uniqueness}) three times, since all three vertices $x_{1,2,3}$ happen to be unique. Normalizing the external legs yields
\begin{equation}
\label{conformal triangle phi phi s integrated}
\tilde C_{\phi\phi s} \, (1 + W_{\phi\phi s}) =-\frac{2}{\sqrt{N}}\frac{C_\phi^2 (1+A_\phi)^2 C_s(1+A_s)}{\sqrt{C_\phi^2 (1+A_\phi)^2C_s(1+A_s)}}\hat Z \hat U\,,
\end{equation}
where we defined
\begin{align}
\hat U &= U\left(\Delta_\phi-\frac{\gamma_s}{2},
\Delta_\phi-\frac{\gamma_s}{2},
\Delta_s+\gamma_s\right)
U\left(\Delta_\phi-\frac{\gamma_s}{2},
\Delta_\phi+\gamma_\phi,
\Delta_s-\gamma_\phi+\frac{\gamma_s}{2}\right)\notag\\
&\times U\left(\Delta_\phi+\gamma_\phi,
\frac{\Delta_s+\gamma_s}{2},
\frac{d}{2}-\gamma_\phi-\frac{\gamma_s}{2}\right)\,.
\label{hat U definition}
\end{align}
Expansion of $\hat U$ in $1/N$ has the following form
\begin{equation}
\label{hat U expansion}
\hat U = N \, u_1 + u_2 +{\cal O}\left(\frac{1}{N}\right)\,.
\end{equation}
Notice that it starts with a piece $N\,u_1$ that diverges as
$N\rightarrow \infty$, originating from the factor
$U\left(\frac{d}{2}-\gamma_\phi-\frac{\gamma_s}{2},\cdots\right)$ in
(\ref{hat U definition}), which is singular in that limit.
It is convenient to denote the ratio
of the sub-leading to leading coefficients in (\ref{hat U expansion}) by
\begin{equation}
\hat u = \frac{u_2}{N u_1} =
\frac{\gamma _s ((26-3 d) d-44)+2 \gamma_ \phi ((d-6) d+4)}{2 (d-4) (d-2) }\,.
\end{equation}
Here the anomalous dimensions $\gamma_\phi$, $\gamma_s$
are given by (\ref{gamma phi 1over N general}), (\ref{gamma s result}).
Combining $\hat u$ with the other next-to-leading order terms in (\ref{conformal triangle phi phi s integrated}), we obtain
\begin{equation}
\label{general W phi phi s}
W_{\phi\phi s} = A_\phi +\frac{A_s}{2}+\delta V\,,
\end{equation}
where
\begin{equation}
\delta V = \hat u + \delta V'
= \frac{1}{N} \, \frac{2^{d-3} (d (d (5 d-42)+116)-96) \sin \left(\frac{\pi d}{2}\right) \Gamma \left(\frac{d-1}{2}\right)}{\pi ^{3/2} (d-4) (d-2) \Gamma \left(\frac{d}{2}+1\right)}\,,
\end{equation}
which we can re-write as
\begin{equation}
\label{delta V in general d}
\delta V =\gamma_\phi\, \left(\frac{8}{(d-4)^2}+\frac{2}{d-2}+\frac{6}{d-4}+5\right)\,,
\end{equation}
where $\gamma_\phi$ is given by (\ref{gamma phi 1over N general}).
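The equality of the two representations of $\delta V$ is a pure Gamma-function identity, which can be checked numerically at a generic $d$ (the sketch is ours; the value $d=3.7$ is an arbitrary point away from the poles of the Gamma functions, and the factor of $1/N$ is stripped off):

```python
# Check that the explicit form of delta V agrees with its representation in
# terms of gamma_phi; the identity is exact, so the two evaluations should
# agree to working precision.
from mpmath import mp, mpf, sin, gamma, pi

mp.dps = 40
d = mpf('3.7')  # generic d away from poles of the Gamma functions

dV_explicit = mpf(2)**(d - 3) * (d*(d*(5*d - 42) + 116) - 96) * sin(pi*d/2) \
    * gamma((d - 1)/2) / (pi**mpf('1.5') * (d - 4) * (d - 2) * gamma(d/2 + 1))
gph = mpf(2)**d * sin(pi*d/2) * gamma((d - 1)/2) \
    / (pi**mpf('1.5') * (d - 2) * d * gamma(d/2 - 2))
dV_gamma = gph * (8/(d - 4)**2 + 2/(d - 2) + 6/(d - 4) + 5)

assert abs(dV_explicit / dV_gamma - 1) < mpf('1e-30')
```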
Substituting (\ref{A phi 1over N general}), (\ref{A s 1over N general}) and (\ref{delta V in general d}) into
(\ref{general W phi phi s}) gives $W_{\phi\phi s}$ in general dimension. It matches \cite{Petkou:1995vu,Petkou:1994ad} where the generic $O(N)$ symmetric CFT was studied, {\it e.g.,} see eq. (21) in \cite{Petkou:1995vu}.\footnote{Note that the leading terms
in the large $N$ expansion on the l.h.s. and r.h.s. of (\ref{conformal triangle phi phi s integrated})
agree, as one can explicitly verify by direct calculation,
\begin{equation}
-\frac{2}{\sqrt{N}}\,C_\phi C_s^\frac{1}{2}\,
\left(-\frac{\chi}{2}\,\frac{A(1)^2A(\tilde \mu-2)\Gamma(\tilde \mu)}{\pi^{2\tilde\mu}}\right)\,N\,u_1=
-\frac{2}{\sqrt{N}}C_\phi C_s^\frac{1}{2}U(\Delta_\phi,\Delta_\phi,\Delta_s)=\tilde C_{\phi\phi s}\,.
\end{equation}
} In particular, the above $W_{\phi\phi s}$ agrees with its counterpart in a CFT emerging at the IR fixed point of the $O(N)$ vector model with cubic interactions in $d=6-\epsilon$ dimensions \cite{Fei:2014yja}. In this case one gets
\begin{equation}
\label{d6 W phi phi s}
\delta V= \frac{1}{N}\,\frac{21\epsilon}{2}+{\cal O}(\epsilon^2)\,, \quad
W_{\phi\phi s} = \frac{22}{N} + {\cal O}(\epsilon)\,.
\end{equation}
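The coefficients in (\ref{d6 W phi phi s}) can be cross-checked numerically from the general-$d$ expressions for $A_\phi$, $A_s$ and $\delta V$ (the sketch is ours; $1/N$ is stripped off and $\epsilon$ is an arbitrary small number):

```python
# Check W_{phi phi s} = A_phi + A_s/2 + delta V -> 22/N in d = 6 - eps,
# and delta V -> 21 eps/(2N), using the general-d formulas (N factored out).
from mpmath import mp, mpf, sin, cot, gamma, harmonic, pi

mp.dps = 40
eps = mpf('1e-8')
d = 6 - eps

gph = mpf(2)**d * sin(pi*d/2) * gamma((d-1)/2) / (pi**mpf('1.5') * (d-2) * d * gamma(d/2 - 2))
Aph = gph * (d/(2 - d) - 2/d)
As = 2 * gph * ((d*(d-3) + 4)/(4 - d) * (harmonic(d - 3) + pi*cot(pi*d/2))
                + 8/(d-4)**2 + 2/(d-2) + 2/d - 2*d - 1)
dV = gph * (8/(d-4)**2 + 2/(d-2) + 6/(d-4) + 5)
W = Aph + As/2 + dV

assert abs(dV / eps - mpf(21)/2) < mpf('1e-4')  # delta V = 21 eps/(2N) + O(eps^2)
assert abs(W - 22) < mpf('1e-4')                # W_{phi phi s} = 22/N + O(eps)
```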
While this way of calculating $W_{\phi\phi s}$ is straightforward, it does not separate the effect of the anomalous dimensions intrinsic to the propagators from the contribution of the dressed vertex. Such a separation proves useful when we evaluate the $\langle sss\rangle$ correlation function, since there the impact of the anomalous dimensions inherent to the propagators is singled out. Hence, for future purposes we define two additional Feynman rules.
To begin with, we exclude $A_{\phi, s}$ from the residue of the dressed propagators. These constants represent $1/N$ corrections to the amplitudes of the two-point functions and can be accounted for at the very end. The dressed propagators without $A_{\phi, s}$ will be denoted by a solid black blob\footnote{Note that it is crucial to account for $A_{\phi, s}$, see {\it e.g.,} (\ref{general W phi phi s}).}
\begin{center}
\begin{picture}(147,31) (47,-33)
\SetWidth{1.0}
\SetColor{Black}
\Line[](50,-19)(150,-19)
\Text(159,-29)[lb]{$=C\,\frac{\mu^{-2\gamma}}{|x|^{2(\Delta+\gamma)}}$}
\Vertex(103,-19){8}
\Vertex(50,-19){2}
\Vertex(150,-19){2}
\end{picture}
\end{center}
As before $(C,\Delta,\gamma)$ stand for $(C_{\phi, s},\Delta_{\phi, s},\gamma_{\phi, s})$,
depending on the considered propagator.
Next we define a dressed vertex with the bare propagators
attached to it, {\it i.e.,} propagators without anomalous dimensions and $A_{\phi, s}$. Diagrammatically
it is given by
\begin{center}
\begin{equation}
\begin{picture}(590,70) (17,-20)
\SetWidth{1.0}
\SetColor{Black}
\scalebox{0.75}{
\GOval(84,28)(18,18)(0){0.882}
\Line[](96,40)(126,64)
\Line[](126,-8)(96,16)
\Line[](66,28)(24,28)
\Vertex(24,28){2}
\Vertex(126,64){2}
\Vertex(126,-8){2}
\Vertex(222,28){2}
\Vertex(324,64){2}
\Vertex(324,-8){2}
\Line[](222,28)(324,64)
\Line[](324,64)(324,-8)
\Line[](324,-8)(222,28)
\Text(143,23)[lb]{\scalebox{0.9}{$=C_{\phi\phi s}(1+\delta V)$}}
\Text(100,55)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(100,-5)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(40,32)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(253,25)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)-(\Delta_s+\gamma_s)$}}
\Text(252,53)[lb]{\scalebox{0.6}{$\Delta_s+\gamma_s$}}
\Text(252,1)[lb]{\scalebox{0.6}{$\Delta_s+\gamma_s$}}
\Vertex(366,28){2}
\Vertex(462,-8){2}
\Vertex(462,64){2}
\Line[](366,28)(420,28)
\Line[](420,28)(462,64)
\Line[](420,28)(462,-8)
\Vertex(420,28){4}
\Line[](504,28)(558,28)
\Vertex(504,28){2}
\Vertex(558,28){4}
\Line[](558,28)(600,64)
\Line[](558,28)(600,-8)
\Vertex(600,-8){2}
\Vertex(600,64){2}
\Vertex(390,28){8}
\Vertex(440,46){8}
\Vertex(440,10){8}
\Text(340,26)[lb]{\scalebox{0.6}{$-$}}
\Text(480,26)[lb]{\scalebox{0.6}{$+$}}
\Text(425,60)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)$}}
\Text(425,-12)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)$}}
\Text(375,40)[lb]{\scalebox{0.6}{$2(\Delta_s+\gamma_s)$}}
\Text(575,55)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(575,-5)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(525,32)[lb]{\scalebox{0.6}{$2\Delta_s$}}
}
\end{picture}
\label{Dressed vertex diagram}
\end{equation}
\end{center}
where the external legs are not normalized, and therefore $C_{\phi\phi s}$ rather than $\tilde C_{\phi\phi s}$ appears on the r.h.s. The term $\delta V \sim \mathcal{O}(1/N)$ equals $W_{\phi\phi s}$ up to corrections $A_{\phi, s}$ that we stripped off. In fact, $\delta V$ is closely related to the sum of two loop diagrams, which have been studied
in momentum space in section \ref{sec:interaction vertex in d=5}.
Notice that $\delta V$ is rendered finite, because besides those two loop diagrams there is a contribution from the vertex counterterm (\ref{S int ct in general dimension}). The last diagram on the r.h.s. simply subtracts the leading $1/N$ contribution from the second diagram
\begin{center}
\begin{picture}(590,70) (17,-20)
\SetWidth{1.0}
\SetColor{Black}
\scalebox{0.75}{
\Vertex(16,28){2}
\Vertex(112,-8){2}
\Vertex(112,64){2}
\Line[](16,28)(70,28)
\Line[](70,28)(112,64)
\Line[](70,28)(112,-8)
\Vertex(70,28){4}
\Line[](154,28)(208,28)
\Vertex(154,28){2}
\Vertex(208,28){4}
\Line[](208,28)(250,64)
\Line[](208,28)(250,-8)
\Vertex(250,-8){2}
\Vertex(250,64){2}
\Vertex(40,28){8}
\Vertex(90,46){8}
\Vertex(90,10){8}
\Text(130,26)[lb]{\scalebox{1}{$-$}}
\Text(250,17)[lb]{\scalebox{1}{$= -\frac{2}{\sqrt{N}}\,C_\phi^2 C_s
\int d^dx_4\,\frac{1}{(|x_{14}||x_{24}|)^{2\Delta_\phi}|x_{34}|^{2\Delta_s}}\,
\left( \frac{2}{\delta\mu^\delta}\,
\left(\frac{\gamma_\phi}{|x_{14}|^\delta}+\frac{\gamma_\phi}{|x_{24}|^\delta}
+\frac{\gamma_s}{|x_{34}|^\delta}\right)-2\frac{2\gamma_\phi+\gamma_s}{\delta}\right)$}}
\Text(75,60)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)$}}
\Text(75,-12)[lb]{\scalebox{0.6}{$2(\Delta_\phi+\gamma_\phi)$}}
\Text(25,40)[lb]{\scalebox{0.6}{$2(\Delta_s+\gamma_s)$}}
\Text(225,55)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(225,-5)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(175,32)[lb]{\scalebox{0.6}{$2\Delta_s$}}
}
\end{picture}
\end{center}
where the last term within the parentheses stands for the counterterm associated with the wave function renormalization (\ref{general d Z phi}) and (\ref{general d Z s}), and we used the regularized expressions for the dressed propagators obtained previously by direct calculation of the Feynman graphs. Note that the newly defined vertex with the bare propagators attached to it contains logarithmic terms proportional to the anomalous dimensions. Such terms emerge because of the renormalization of the vertex, and are intrinsic to the vertex itself rather than to the propagators.
We choose to keep separate track of the contributions associated with the dressed vertex correction $\delta V$ and with the anomalous dimensions $\gamma_{\phi,s}$ of the propagators; this bookkeeping turns out to be particularly useful when we evaluate the $\langle sss \rangle$ correlation function, and the above diagrams will be used extensively in what follows.
\subsection{$\langle sss\rangle$}
In this subsection we calculate the three-point function $\langle sss \rangle$ to
order ${\cal O}\left(\frac{1}{N^{3/2}}\right)$. For the normalized field
(\ref{field normalizations}) it has the form
\begin{align}
\label{sss general}
\langle s(x_1)s(x_2)s(x_3)\rangle =
\tilde C_{s^3}\,\frac{(1+W_{s^3})\mu^{-3\gamma_s}}
{|x_{12}|^{\Delta_s+\gamma_s}
|x_{13}|^{\Delta_s+\gamma_s}|x_{23}|^{\Delta_s+\gamma_s}}\,.
\end{align}
We are going to calculate the leading coefficient $\tilde C_{s^3}$ and the $1/N$ correction
$W_{s^3}$.
The leading behaviour is completely determined by the one-loop triangle diagram
\begin{center}
\begin{picture}(162,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,130)(96,66)
\Line[](96,66)(64,18)
\Line[](96,66)(128,18)
\Line[](64,18)(128,18)
\Line[](128,18)(182,-20)
\Line[](64,18)(9,-20)
\Vertex(182,-20){2}
\Vertex(9,-20){2}
\Vertex(128,18){4}
\Vertex(64,18){4}
\Vertex(96,130){2}
\Vertex(96,66){4}
\Text(92,21)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(66,43)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(116,43)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(100,95)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(20,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(158,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(103,125)[lb]{$x_1$}
\Text(103,65)[lb]{$x_6$}
\Text(47,17)[lb]{$x_4$}
\Text(135,17)[lb]{$x_5$}
\Text(-7,-22)[lb]{$x_2$}
\Text(188,-22)[lb]{$x_3$}
\end{picture}
\end{center}
Normalizing the external legs yields
\begin{align}
\langle s(x_1)s(x_2)s(x_3)\rangle\Bigg|_{\textrm{leading}} &=
N\left(-\frac{2}{\sqrt{N}}\right)^3\frac{C_\phi^3 C_s^3}{\sqrt{C_s^3}}
\int d^dx_{4,5}\frac{1}{|x_{35}|^{2\Delta_s}
|x_{45}|^{2\Delta_\phi}|x_{24}|^{2\Delta_s}}\notag\\
&\times \int d^dx_6\frac{1}{|x_{16}|^{2\Delta_s}|x_{46}|^{2\Delta_\phi}
|x_{56}|^{2\Delta_\phi}}\,.
\end{align}
Applying the uniqueness formula (\ref{uniqueness}) to the integrals over $x_{4,5,6}$,
we arrive at the leading-order expression for the $\langle sss\rangle$ triangle
\begin{equation}
\langle s(x_1)s(x_2)s(x_3)\rangle\Bigg|_{\textrm{leading}}=
\frac{\tilde C_{s^3}}{|x_{12}|^{\Delta_s}|x_{23}|^{\Delta_s}|x_{13}|^{\Delta_s}}\,,
\end{equation}
where
\begin{equation}
\label{Cs3 answer}
\tilde C_{s^3}=-\frac{8}{\sqrt{N}}\,C_\phi^3 C_s^\frac{3}{2}\,
U(\Delta_\phi, \Delta_\phi, \Delta_s)^2 \, U\left(\frac{\Delta_s}{2},
\Delta_s,d-\frac{3\Delta_s}{2}\right)\,.
\end{equation}
One can readily show that for general $d$
\begin{equation}
\label{relation 1}
C_\phi^2 C_s\,
U(\Delta_\phi, \Delta_\phi, \Delta_s) U\left(\frac{\Delta_s}{2},
\Delta_s,d-\frac{3\Delta_s}{2}\right)=\frac{d-3}{2}\,,
\end{equation}
and therefore using (\ref{phi phi s leading}) we obtain
\begin{equation}
\label{Cs3 and Cphiphi s relation}
\tilde C_{s^3} = 2(d-3)\,\tilde C_{\phi\phi s}\,,
\end{equation}
in agreement with \cite{Petkou:1995vu,Petkou:1994ad}.
Next we evaluate the sub-leading term $W_{s^3}$
in (\ref{sss general}). It is determined by the ${\cal O}(1/N^{3/2})$ diagrams with three external legs of type $s$. A large portion of them is obtained by dressing the constituents of the leading order triangle diagram with $1/N$ corrections. Before studying this class of diagrams, recall that the dressed $\phi$- and $s$-propagators have a non-trivial $1/N$ correction due to the amplitudes $A_\phi$, $A_s$, as well as due to the anomalous dimensions $\gamma_\phi$, $\gamma_s$. We will account for the contribution related to $A_{\phi, s}$ later in this subsection, while for now we focus on studying the effect associated with the dressed vertex (\ref{Dressed vertex diagram}) and the anomalous dimensions $\gamma_{\phi, s}$ inherent to the dressed propagators.
Quite surprisingly, it turns out that the various diagrams obtained by dressing the leading $\langle sss\rangle$ triangle can be grouped in such a way that the integrals over their internal vertices can be carried out using the uniqueness relation. We now illustrate how this works.
Dressing each of the three $\phi \phi s$ vertices of the leading $\langle sss\rangle$ triangle diagram gives
\begin{center}
\begin{picture}(400,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,130)(96,66)
\Line[](96,66)(64,18)
\Line[](96,66)(128,18)
\Line[](64,18)(128,18)
\Line[](128,18)(182,-20)
\Line[](64,18)(9,-20)
\Vertex(182,-20){2}
\Vertex(9,-20){2}
\Vertex(128,18){4}
\Vertex(64,18){4}
\Vertex(96,130){2}
\GOval(96,66)(16,16)(0){0.882}
\Text(92,21)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(66,43)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(116,43)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(100,95)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(20,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(158,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(103,125)[lb]{$x_1$}
\Text(47,17)[lb]{$x_4$}
\Text(135,17)[lb]{$x_5$}
\Text(-7,-22)[lb]{$x_2$}
\Text(188,-22)[lb]{$x_3$}
\Text(208,43)[lb]{\scalebox{1}{$+\quad$ two cyclic permutations $x_1\rightarrow x_2\rightarrow x_3\rightarrow x_1$.}}
\end{picture}
\end{center}
Here the vertex blob is determined by (\ref{Dressed vertex diagram}).
In addition, each of the three internal $\phi$-propagators and each of the three
external $s$-propagators in the leading triangle diagram need to be endowed with the anomalous dimensions. The corresponding diagram is given by
\begin{center}
\begin{picture}(162,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,130)(96,66)
\Vertex(96,98){8}
\Line[](96,66)(64,18)
\Vertex(80,42){8}
\Line[](96,66)(128,18)
\Vertex(112,42){8}
\Line[](64,18)(128,18)
\Vertex(96,18){8}
\Line[](128,18)(182,-20)
\Vertex(155,-1){8}
\Line[](64,18)(9,-20)
\Vertex(36,-1){8}
\Vertex(182,-20){2}
\Vertex(9,-20){2}
\Vertex(128,18){4}
\Vertex(64,18){4}
\Vertex(96,130){2}
\Vertex(96,66){4}
\Text(103,125)[lb]{$x_1$}
\Text(-7,-22)[lb]{$x_2$}
\Text(188,-22)[lb]{$x_3$}
\end{picture}
\end{center}
where we used the black blob propagator notation introduced in subsection \ref{sec:phi phi s}.
To avoid over-counting the contribution of the leading order triangle diagram, it should be subtracted from the above graphs. The remainder contributes to $W_{s^3}\sim\mathcal{O}(1/N)$ which is the ultimate goal of our calculation. However, to avoid clutter we do not carry out these subtractions explicitly. They are done by default in what follows.
Remarkably, when summing Feynman diagrams with an identical skeleton structure, the anomalous dimensions of the propagators behave additively at order $1/N$. This tremendous simplification holds because $\gamma_{\phi,s}\sim 1/N$, and therefore one can linearize a Feynman graph with respect to the anomalous dimensions.
For instance, each of the three dressed vertices of the leading order triangle diagram contributes a $1/N$ term associated with the second diagram on the r.h.s. of (\ref{Dressed vertex diagram}). Combining these terms with the above Feynman graph gives
\begin{center}
\begin{picture}(162,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,130)(96,66)
\Line[](96,66)(64,18)
\Line[](96,66)(128,18)
\Line[](64,18)(128,18)
\Line[](128,18)(182,-20)
\Line[](64,18)(9,-20)
\Vertex(182,-20){2}
\Vertex(9,-20){2}
\Vertex(128,18){4}
\Vertex(64,18){4}
\Vertex(96,130){2}
\Vertex(96,66){4}
\Text(82,21)[lb]{\scalebox{0.6}{$2(\Delta_\phi-\gamma_\phi)$}}
\Text(46,43)[lb]{\scalebox{0.6}{$2(\Delta_\phi-\gamma_\phi)$}}
\Text(116,43)[lb]{\scalebox{0.6}{$2(\Delta_\phi-\gamma_\phi)$}}
\Text(100,95)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(20,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(158,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(103,125)[lb]{$x_1$}
\Text(103,65)[lb]{$x_6$}
\Text(47,17)[lb]{$x_4$}
\Text(135,17)[lb]{$x_5$}
\Text(-7,-22)[lb]{$x_2$}
\Text(188,-22)[lb]{$x_3$}
\end{picture}
\end{center}
Since this relation between the diagrams is reliable up to $\mathcal{O}(1/N)$ order, we linearize in $\gamma_\phi$ in the internal $\phi$-propagators and retain only the next-to-leading corrections. They are given by the sum of three diagrams which are identical up to a permutation of the external legs
\begin{center}
\raggedright
\begin{picture}(162,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](146,130)(114,18)
\Line[](146,130)(178,18)
\Line[](114,18)(178,18)
\Line[](178,18)(232,-20)
\Line[](114,18)(59,-20)
\Vertex(232,-20){2}
\Vertex(59,-20){2}
\Vertex(178,18){4}
\Vertex(114,18){4}
\Vertex(146,130){2}
\Text(123,21)[lb]{\scalebox{0.6}{$4\Delta_\phi-\Delta_s-2\gamma_\phi$}}
\Text(116,83)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(166,83)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(70,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(208,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(153,125)[lb]{$x_1$}
\Text(97,17)[lb]{$x_4$}
\Text(185,17)[lb]{$x_5$}
\Text(43,-22)[lb]{$x_2$}
\Text(238,-22)[lb]{$x_3$}
\Text(208,43)[lb]{\scalebox{1}{$+\quad$ two cyclic permutations $x_1\rightarrow x_2\rightarrow x_3\rightarrow x_1$.}}
\Text(45,43)[lb]{\scalebox{1}{$4C_{\phi\phi s}C_\phi C_s^2~\times$}}
\end{picture}
\end{center}
where we integrated over the unique vertex with the lines $\Delta_\phi$,
$\Delta_\phi$, $\Delta_s$ and used (\ref{Original phi phi s}). The factor of
$\left(-\frac{2}{\sqrt{N}}\right)^2NC_\phi C_s^{2}$
is associated with the Feynman rules for the vertices $x_{4,5}$, the closed $\phi$-loop, and the leading-order amplitudes
of the $\phi$- and $s$-propagators.
Furthermore, there is a contribution related to the conformal triangle on the r.h.s. of (\ref{Dressed vertex diagram}). There are three such terms, since there are three vertices in the leading order correlation function
$\langle sss\rangle$,
\begin{center}
\begin{picture}(162,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](6,130)(-26,18)
\Line[](6,130)(38,18)
\Line[](-26,18)(38,18)
\Line[](38,18)(92,-20)
\Line[](-26,18)(-81,-20)
\Vertex(92,-20){2}
\Vertex(-81,-20){2}
\Vertex(38,18){4}
\Vertex(-26,18){4}
\Vertex(6,130){2}
\Text(-25,5)[lb]{\scalebox{0.6}{$4\Delta_\phi-\Delta_s+2\gamma_\phi-\gamma_s$}}
\Text(-39,75)[lb]{\scalebox{0.6}{$\Delta_s+\gamma_s$}}
\Text(27,75)[lb]{\scalebox{0.6}{$\Delta_s+\gamma_s$}}
\Text(-70,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(68,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(13,125)[lb]{$x_1$}
\Text(-43,17)[lb]{$x_4$}
\Text(45,17)[lb]{$x_5$}
\Text(-97,-22)[lb]{$x_2$}
\Text(98,-22)[lb]{$x_3$}
\Text(78,43)[lb]{\scalebox{1}{$+\quad$ two cyclic permutations $x_1\rightarrow x_2\rightarrow x_3\rightarrow x_1$.}}
\Text(-130,43)[lb]{\scalebox{1}{$4C_{\phi\phi s}(1+\delta V)C_\phi C_s^2~ \times$}}
\end{picture}
\end{center}
As before only $\mathcal{O}(1/N)$ terms are eventually retained, and therefore the last two diagrams can be combined by simply adding the anomalous dimensions of the corresponding propagators
\begin{center}
\begin{picture}(415,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,130)(64,18)
\Line[](96,130)(128,18)
\Line[](64,18)(128,18)
\Line[](128,18)(182,-20)
\Line[](64,18)(9,-20)
\Vertex(182,-20){2}
\Vertex(9,-20){2}
\Vertex(128,18){4}
\Vertex(64,18){4}
\Vertex(96,130){2}
\Text(78,21)[lb]{\scalebox{0.6}{$4\Delta_\phi-\Delta_s-\gamma_s$}}
\Text(53,68)[lb]{\scalebox{0.6}{$\Delta_s+\gamma_s$}}
\Text(116,68)[lb]{\scalebox{0.6}{$\Delta_s+\gamma_s$}}
\Text(20,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(158,0)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(103,125)[lb]{$x_1$}
\Text(47,17)[lb]{$x_4$}
\Text(135,17)[lb]{$x_5$}
\Text(-7,-22)[lb]{$x_2$}
\Text(188,-22)[lb]{$x_3$}
\Text(200,26)[lb]{\scalebox{1}{
$=
\frac{4 \, C_{\phi\phi s}\,C_\phi C_s^2\,
U\left(\Delta_s,\frac{\Delta_s+\gamma_s}{2},d-\frac{3\Delta_s+\gamma_s}{2}\right)
U\left(\Delta_\phi+\frac{\gamma_s}{2},\Delta_\phi-\frac{\gamma_s}{2},\Delta_s\right)}
{\left(|x_{13}||x_{12}|\right)^{\Delta_s+\gamma_s}
|x_{23}|^{\Delta_s-\gamma_s}}$}}
\Text(140,32)[lb]{\scalebox{1}{+ two perm.}}
\Text(285,12)[lb]{\scalebox{1}{$\times (1+\delta V)$ ~ + ~ two permutations}}
\Text(-10,33)[lb]{\scalebox{0.7}{$4C_{\phi\phi s}(1+\delta V)C_\phi C_s^2 ~ \times$}}
\end{picture}
\end{center}
where the integrals over $x_{4,5}$ were carried out using the uniqueness relation. Note that only terms up to $\mathcal{O}(1/N)$ order are reliable, because we added the anomalous dimensions to combine the diagrams. Using (\ref{relation 1}), (\ref{Cs3 and Cphiphi s relation}), the contribution to $W_{s^3}$ takes the form
\begin{eqnarray}
\label{hat f expression}
3\delta V + \hat f &=& 3\left(\delta V +
\frac{U\left(\Delta_s,\frac{\Delta_s+\gamma_s}{2},d-\frac{3\Delta_s+\gamma_s}{2}\right)
U(\Delta_\phi+\frac{\gamma_s}{2},\Delta_\phi-\frac{\gamma_s}{2},\Delta_s)}
{U\left(\Delta_s,\frac{\Delta_s}{2},d-\frac{3\Delta_s}{2}\right)
U(\Delta_\phi,\Delta_\phi,\Delta_s)} -1 \right)
\nonumber \\
&=&3\delta V + \frac{6 \sin \left(\frac{\pi d}{2}\right) \Gamma (d) \left(-\frac{2}{d-4}+\pi \cot \left(\frac{\pi d}{2}\right)
+H_{d-4} \right)}{ N \pi \Gamma \left(\frac{d}{2}-1\right) \Gamma \left(\frac{d}{2}+1\right)}
+ \mathcal{O}(1/N^2)\,,
\end{eqnarray}
where $H_n$ is the $n$-th harmonic number. Expanding around $d=6$ yields
\begin{equation}
\label{f around 6d}
\hat f = -{120\over N} +{\cal O}(d-6)\,.
\end{equation}
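As an independent numerical sanity check of (\ref{f around 6d}) (not part of the derivation above), one can evaluate the $1/N$ term of (\ref{hat f expression}) near $d=6$, writing the harmonic number as $H_{d-4}=\psi^{(0)}(d-3)+\gamma$ with $\gamma$ the Euler constant. A minimal mpmath sketch:

```python
import mpmath as mp

mp.mp.dps = 30  # extra precision near the d = 6 pole of cot

# N * (hat f - 3 delta V) from (hat f expression), with H_{d-4} = psi(d-3) + gamma_E
def f_hat_term(d):
    bracket = -2/(d - 4) + mp.pi*mp.cot(mp.pi*d/2) + mp.digamma(d - 3) + mp.euler
    return 6*mp.sin(mp.pi*d/2)*mp.gamma(d)*bracket / (mp.pi*mp.gamma(d/2 - 1)*mp.gamma(d/2 + 1))

print(f_hat_term(mp.mpf(6) - mp.mpf('1e-12')))  # approaches -120, cf. (f around 6d)
```

Evaluating the same routine near $d=4$ gives a value tending to zero, in line with the statement $\hat f(d=4)=0$ made later in this subsection.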
Another contribution to $W_{s^3}$ arises from the following diagram
\begin{center}
\begin{picture}(162,162) (15,-31)
\SetWidth{1.0}
\SetColor{Black}
\Line[](96,130)(96,98)
\Line[](37,0)(155,0)
\Line[](155,0)(182,-20)
\Line[](37,0)(9,-20)
\Line[](37,0)(96,98)
\Line[](155,0)(96,98)
\Vertex(182,-20){2}
\Vertex(9,-20){2}
\Vertex(155,0){4}
\Vertex(37,0){4}
\Vertex(96,130){2}
\Vertex(96,98){4}
\Line[](78,68)(114,68)
\Vertex(78,68){4}
\Vertex(114,68){4}
\Line[](75,0)(56,32)
\Line[](117,0)(136,32)
\Vertex(75,0){4}
\Vertex(56,32){4}
\Vertex(117,0){4}
\Vertex(136,32){4}
\Text(100,112)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(8,-10)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(171,-10)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(103,125)[lb]{$x_1$}
\Text(103,98)[lb]{$x_7$}
\Text(20,0)[lb]{$x_8$}
\Text(163,0)[lb]{$x_9$}
\Text(-7,-22)[lb]{$x_2$}
\Text(188,-22)[lb]{$x_3$}
\Text(61,68)[lb]{$x_4$}
\Text(73,-15)[lb]{$x_5$}
\Text(143,32)[lb]{$x_6$}
\Text(51,-10)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(91,-10)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(131,-10)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(30,15)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(70,15)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(110,15)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(150,15)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(52,50)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(130,50)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(90,58)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(71,82)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(109,82)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\end{picture}
\end{center}
Integrating over $x_{4-9}$ through the use of the uniqueness relation (\ref{uniqueness}),
and normalizing the external $s$ legs according to (\ref{field normalizations}),
we obtain an additional contribution to $W_{s^3}$
of the form $w_3/\tilde C_{s^3}$ with
\begin{align}
\label{w3}
w_3&=\frac{1}{C_s^{3/2}}\,\left(-\frac{2}{\sqrt{N}}\right)^9\,N^3\,
C_\phi^9 C_s^6\,U\left(\Delta_\phi,\Delta_\phi,\Delta_s\right)^3
U\left(\frac{\Delta_s}{2}, \Delta_s, d-\frac{3\Delta_s}{2}\right)^3\,\hat w_3\notag\\
&=-\frac{64(d-3)^3}{N^{3/2}}\,C_\phi^3\,C_s^\frac{3}{2}\,\hat w_3\,,
\end{align}
where in the last line we used (\ref{relation 1}), and
$\hat w_3$ is defined by the diagram\footnote{From now on we
explicitly use the values (\ref{engineering scaling}) for the scaling dimensions $\Delta_{\phi, s}$. To get $\hat w_3$ from this diagram one needs to integrate over the three internal points.
We already accounted for the amplitudes $C_{\phi, s}$ of the propagators and factors of $-2/\sqrt{N}$ coming from the interaction vertices.}
\begin{center}
\begin{picture}(165,150) (65,-5)
\SetWidth{1.0}
\SetColor{Black}
\Line[](32,6)(192,6)
\Line[](192,6)(112,134)
\Line[](112,134)(32,6)
\Line[](112,6)(72,74)
\Line[](72,70)(152,70)
\Line[](152,70)(112,6)
\Vertex(72,70){4}
\Vertex(152,70){4}
\Vertex(112,134){2}
\Vertex(32,6){2}
\Vertex(112,6){4}
\Vertex(192,6){2}
\Text(120,135)[lb]{$x_1$}
\Text(17,-5)[lb]{$x_2$}
\Text(198,-5)[lb]{$x_3$}
\Text(70,103)[lb]{\scalebox{0.6}{$6-d$}}
\Text(65,-5)[lb]{\scalebox{0.6}{$6-d$}}
\Text(183,35)[lb]{\scalebox{0.6}{$6-d$}}
\Text(105,76)[lb]{\scalebox{0.6}{$d-2$}}
\Text(72,35)[lb]{\scalebox{0.6}{$d-2$}}
\Text(137,35)[lb]{\scalebox{0.6}{$d-2$}}
\Text(145,-5)[lb]{\scalebox{0.6}{$d-2$}}
\Text(28,35)[lb]{\scalebox{0.6}{$d-2$}}
\Text(140,103)[lb]{\scalebox{0.6}{$d-2$}}
\Text(210,62)[lb]{$=\frac{\hat w_3}{|x_{12}|^{2}|x_{13}|^{2}|x_{23}|^{2}}$}
\end{picture}
\end{center}
Integrating both sides of this diagrammatic equation w.r.t. $x_1$ we obtain
\begin{equation}
\label{hat w3 in terms of v3}
\hat w_3=\frac{U\left(3-\frac{d}{2},\frac{d}{2}-1,d-2\right)}{U(1,1,d-2)}\,v_3\,,
\end{equation}
where $v_3$ is determined by the diagram
\begin{center}
\begin{picture}(165,90) (75,-5)
\SetWidth{1.0}
\SetColor{Black}
\Line[](32,6)(192,6)
\Line[](192,6)(152,70)
\Line[](72,70)(32,6)
\Line[](112,6)(72,74)
\Line[](72,70)(152,70)
\Line[](152,70)(112,6)
\Vertex(72,70){4}
\Vertex(152,70){4}
\Vertex(32,6){2}
\Vertex(112,6){4}
\Vertex(192,6){2}
\Text(17,-5)[lb]{$x_2$}
\Text(198,-5)[lb]{$x_3$}
\Text(65,-5)[lb]{\scalebox{0.6}{$6-d$}}
\Text(183,35)[lb]{\scalebox{0.6}{$6-d$}}
\Text(110,76)[lb]{\scalebox{0.6}{$2$}}
\Text(72,35)[lb]{\scalebox{0.6}{$d-2$}}
\Text(137,35)[lb]{\scalebox{0.6}{$d-2$}}
\Text(145,-5)[lb]{\scalebox{0.6}{$d-2$}}
\Text(28,35)[lb]{\scalebox{0.6}{$d-2$}}
\Text(220,30)[lb]{$=\frac{v_3}{|x_{23}|^{6-d}}$}
\end{picture}
\end{center}
Attaching propagator lines with powers $2d-4$ to $x_{2,3}$, and integrating
both sides of the resulting equation w.r.t. $x_{2,3}$, yields
\begin{equation}
\label{v3 in terms of tilde v3}
v_3 = \frac{U\left(3-\frac{d}{2},\frac{d}{2}-1,d-2\right)}{U(1,1,d-2)}\,\tilde v_3(0)\,,
\end{equation}
where $\tilde v_3(\delta)$ is defined by the diagram
\begin{center}
\begin{picture}(165,90) (75,-5)
\SetWidth{1.0}
\SetColor{Black}
\Line[](32,6)(192,6)
\Line[](192,6)(152,70)
\Line[](72,70)(32,6)
\Line[](112,6)(72,74)
\Line[](72,70)(152,70)
\Line[](152,70)(112,6)
\Vertex(72,70){4}
\Vertex(152,70){4}
\Vertex(32,6){2}
\Vertex(112,6){4}
\Vertex(192,6){2}
\Text(17,-5)[lb]{$x_2$}
\Text(198,-5)[lb]{$x_3$}
\Text(68,-5)[lb]{\scalebox{0.6}{$2$}}
\Text(183,35)[lb]{\scalebox{0.6}{$2-\delta$}}
\Text(110,76)[lb]{\scalebox{0.6}{$2$}}
\Text(82,35)[lb]{\scalebox{0.6}{$2$}}
\Text(137,35)[lb]{\scalebox{0.6}{$2$}}
\Text(145,-5)[lb]{\scalebox{0.6}{$2d-6$}}
\Text(10,35)[lb]{\scalebox{0.6}{$2d-6+2\delta$}}
\Text(220,30)[lb]{$=\frac{\tilde v_3(\delta )}{|x_{23}|^{d-2+\delta}}$}
\end{picture}
\end{center}
Here we have introduced an auxiliary regulator, $\delta$, in order to apply the integration by parts
relation (the diagram itself is finite in the $\delta\rightarrow 0$ limit). We relegate the details to
Appendix \ref{appendix: calculation of v3}.
Combining (\ref{phi phi s leading}), (\ref{Cs3 and Cphiphi s relation}), (\ref{w3}), (\ref{hat w3 in terms of v3}),
(\ref{v3 in terms of tilde v3}) gives the result
\begin{equation}
\label{w3 over Cs3}
\frac{w_3}{\tilde C_{s^3}} = \frac{16(d-3)^2}{N}\,C_\phi^2 C_s
\, \frac{U\left(3-\frac{d}{2},\frac{d}{2}-1,d-2\right)^2}{U(1,1,d-2)^2\,
U\left(\frac{d-2}{2},\frac{d-2}{2},2\right)}\,\tilde v_3(0)\,,
\end{equation}
where $\tilde v_3(0)$ is given by (\ref{tilde v3 answer}).
Next, we evaluate the following contribution to $W_{s^3}$ represented by the 3-loop `bellows'\footnote{The name is based on the visual resemblance of the three-dimensional shape of this diagram to a bellows, see, \textit{e.g.}, \href{https://en.wikipedia.org/wiki/Bellows}{\underline{https://en.wikipedia.org/wiki/Bellows}}} diagram\footnote{We are grateful to the anonymous referee at PRD who pointed out this diagram to us.}
\begin{center}
\begin{picture}(324,136) (47,-39)
\SetWidth{1.0}
\SetColor{Black}
\Line[](28,22)(94,22)
\Line[](172,76)(172,-32)
\Line[](172,22)(208,22)
\Line[](250,22)(172,76)
\Line[](250,22)(172,-32)
\Line[](250,22)(316,22)
\Line[](94,22)(172,76)
\Line[](94,22)(172,-32)
\Line[](136,52)(136,-8)
\Vertex(136,52){4}
\Vertex(136,-8){4}
\Vertex(172,76){4}
\Vertex(172,-32){4}
\Vertex(172,22){4}
\Vertex(94,22){4}
\Vertex(250,22){4}
\Vertex(30,22){2}
\Vertex(208,22){2}
\Vertex(316,22){2}
\Text(25,28)[lb]{$x_1$}
\Text(312,28)[lb]{$x_2$}
\Text(206,28)[lb]{$x_3$}
\Text(85,28)[lb]{$x_4$}
\Text(130,60)[lb]{$x_5$}
\Text(130,-23)[lb]{$x_6$}
\Text(178,28)[lb]{$x_7$}
\Text(253,28)[lb]{$x_8$}
\Text(58,28)[lb]{\scalebox{0.6}{$2\Delta_s$}}%
\Text(106,42)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(106,-2)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(142,22)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(147,70)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(147,-29)[lb]{\scalebox{0.6}{$2\Delta_s$}}
\Text(192,28)[lb]{\scalebox{0.6}{$2\Delta_s$}}%
\Text(178,45)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(178,-8)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(214,51)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(214,-14)[lb]{\scalebox{0.6}{$2\Delta_\phi$}}
\Text(280,28)[lb]{\scalebox{0.6}{$2\Delta_s$}}%
\Text(332,12)[lb]{$=\frac{w_4}{(|x_{12}||x_{13}||x_{23}|)^{\Delta_s}}$}
\end{picture}
\end{center}
Normalizing the external $s$ legs in accord with (\ref{field normalizations}),
and using the uniqueness relation (\ref{uniqueness}) to
integrate over the five points $x_{4-8}$, yields\footnote{The symmetry factor of this diagram is $1/2$, whereas its multiplicity is $3$
due to the possibility of having the $\phi$-triangle attached to each of the points $x_{1-3}$. Recall also that the contribution to $W_{s^3}$ is obtained by dividing the value of $w_4$ by $\tilde C_{s^3}$, see (\ref{sss general}).}
\begin{equation}
\label{w4}
W_{s^3}\supset\frac{w_4}{\tilde C_{s^3}} = \frac{3}{2} \Big( -\frac{2}{\sqrt{N}} \Big)^4 \,N\,
C_\phi^4 C_s^2\,U(\Delta_\phi,\Delta_\phi,\Delta_s)^2\, v_4\,,
\end{equation}
where $v_4$ is defined by the following diagrammatic equation
(the multiplicative factors of $-2/\sqrt{N}$ and $C_{\phi, s}$ in the vertices and propagators of this diagram should be stripped off, because
we already took them into account)
\begin{center}
\begin{picture}(374,136) (47,-39)
\SetWidth{1.0}
\SetColor{Black}
\Line[](192,76)(192,-32)
\Line[](270,22)(192,76)
\Line[](270,22)(192,-32)
\Line[](114,22)(192,76)
\Line[](114,22)(192,-32)
\Line[](192,76)(228,22)
\Line[](192,-32)(228,22)
\Vertex(192,76){4}
\Vertex(192,-32){4}
\Vertex(114,22){2}
\Vertex(270,22){2}
\Vertex(228,22){2}
\Text(226,28)[lb]{$x_3$}
\Text(105,28)[lb]{$x_1$}
\Text(273,28)[lb]{$x_2$}
\Text(300,12)[lb]{$=\frac{v_4}{(|x_{12}||x_{13}||x_{23}|)^{\Delta_s}}$}
\Text(140,52)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(140,-12)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(162,22)[lb]{\scalebox{0.6}{$2d-3\Delta_s$}}
\Text(205,40)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(205,3)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(234,51)[lb]{\scalebox{0.6}{$\Delta_s$}}
\Text(234,-14)[lb]{\scalebox{0.6}{$\Delta_s$}}
\end{picture}
\end{center}
Integrating both sides of this equation w.r.t. $x_3$, we obtain
(here we explicitly use the values (\ref{engineering scaling}) for the scaling dimensions $\Delta_{\phi, s}$)
\begin{center}
\begin{picture}(374,136) (47,-39)
\SetWidth{1.0}
\SetColor{Black}
\Line[](192,76)(192,-32)
\Line[](270,22)(192,76)
\Line[](270,22)(192,-32)
\Line[](114,22)(192,76)
\Line[](114,22)(192,-32)
\Vertex(192,76){4}
\Vertex(192,-32){4}
\Vertex(114,22){2}
\Vertex(270,22){2}
\Text(105,28)[lb]{$0$}
\Text(273,28)[lb]{$x$}
\Text(300,12)[lb]{$=\frac{v_4}{|x|^{6-d}}$}
\Text(140,52)[lb]{\scalebox{0.6}{$2$}}
\Text(140,-12)[lb]{\scalebox{0.6}{$2$}}
\Text(168,22)[lb]{\scalebox{0.6}{$d-2$}}
\Text(234,51)[lb]{\scalebox{0.6}{$2$}}
\Text(234,-14)[lb]{\scalebox{0.6}{$2$}}
\end{picture}
\end{center}
This is the so-called self-energy diagram. It can be reduced to the known
$\textrm{ChT}(\alpha,\beta)$ graph, given by eq. (16) in \cite{Vasiliev:1981dg}.\footnote{See Appendix
\ref{appendix: calculation of v3} for details regarding the self-energy diagram, {\it e.g.,}
(\ref{ChT expression}) for an explicit form of $\textrm{ChT}(\alpha,\beta)$.}
To this end we perform an inversion transformation on the external point $x$
and on both of the integrated vertices. This gives $v_4 = \textrm{ChT}(1,1)$. Combining
all together, we arrive at
\begin{align}
\label{w4 result}
\frac{w_4}{\tilde C_{s^3}}=\gamma_\phi\,\frac{3 d (d-2) \left(\pi ^2-6 \psi ^{(1)}\left(\frac{d}{2}-1\right)\right)}{4 (d-4)}\,,
\end{align}
where $\psi^{(1)}$ is the trigamma function, {\it i.e.,} the first derivative of the digamma function. In particular,
\begin{align}
\label{w4 in d4}
\frac{w_4}{\tilde C_{s^3}} &=-\frac{9}{2}\,\frac{1}{N}\, (d-4)^2 \psi ^{(2)}(1)+{\cal O}\left((d-4)^3\right)\,,\qquad d\rightarrow 4\,,\\
\label{w4 in d6}
\frac{w_4}{\tilde C_{s^3}} &= -54\, \frac{1}{N}\,(d-6) + {\cal O}((d-6)^{2})\,,\qquad d\rightarrow 6\,.
\end{align}
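Independently of $\gamma_\phi$, the purely $d$-dependent factor in (\ref{w4 result}) can be checked symbolically: at $d=6$ it equals $54$ by virtue of $\psi^{(1)}(2)=\pi^2/6-1$, consistent with (\ref{w4 in d6}) once the leading behaviour of $\gamma_\phi$ near $d=6$ is used. A short sympy sketch:

```python
import sympy as sp

d = sp.symbols('d')
# d-dependent factor multiplying gamma_phi in (w4 result)
factor = 3*d*(d - 2)*(sp.pi**2 - 6*sp.polygamma(1, d/2 - 1))/(4*(d - 4))

assert sp.polygamma(1, 2) == sp.pi**2/6 - 1  # trigamma at 2
print(sp.simplify(factor.subs(d, 6)))  # 54
```

Note that the apparent pole of this factor at $d=4$ cancels, since $\psi^{(1)}(1)=\pi^2/6$; the factor stays finite there, which is consistent with the $(d-4)^2$ suppression in (\ref{w4 in d4}) being carried by $\gamma_\phi$.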
The plot of (\ref{w4 result}) for the range of values $2<d<6$ is shown in figure \ref{fig:w4}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=300pt]{w4.eps}
\end{center}
\caption{$Nw_4/\tilde C_{s^3}$ as a function of space-time dimension $2<d<6$.}
\label{fig:w4}
\end{figure}
To account for the contribution of the $\mathcal{O}(1/N)$ corrections, $A_{\phi, s}$, to the amplitudes
of the dressed $\phi$- and $s$-propagators, we simply add to $W_{s^3}$ a term $3A_s+3A_\phi$.
Furthermore, there is an additional term $-\frac{3}{2}A_s$, associated with normalization (\ref{field normalizations}) of the three external $s$ legs. As a result, $W_{s^3}$ takes the form
\begin{equation}
\label{Ws3 total}
W_{s^3} =3\,\delta V + \hat f+\frac{3}{2}A_s+3A_\phi + \frac{w_3+w_4}{\tilde C_{s^3}}\,.
\end{equation}
Substituting (\ref{general W phi phi s}) yields
\begin{equation}
\label{Ws3 total in terms of Wpps}
W_{s^3} = 3W_{\phi\phi s} +\hat f+ \frac{w_3+w_4}{\tilde C_{s^3}}\,.
\end{equation}
Here $w_3/\tilde C_{s^3}$ is given by (\ref{w3 over Cs3}), which we re-write as
\begin{equation}
\label{w3 over Cs3 in terms of b3 and tilde v3}
\frac{w_3}{\tilde C_{s^3}} = b_3\,\tilde v_3(0)\,,
\end{equation}
where $\tilde v_3(0)$ is calculated in Appendix \ref{appendix: calculation of v3}, and
\begin{equation}
\label{b3 final answer}
b_3 = \frac{16(d-3)^2}{N}\,C_\phi^2 C_s
\, \frac{U\left(3-\frac{d}{2},\frac{d}{2}-1,d-2\right)^2}{U(1,1,d-2)^2\, ~
U\left(\frac{d-2}{2},\frac{d-2}{2},2\right)}\,.
\end{equation}
In particular, it simplifies in the vicinity of $d=4,\, 6$
\begin{align}
\label{b3 in d4}
b_3 &= \frac{1}{N}\,\frac{(d-4)^3}{\pi^6} + {\cal O}((d-4)^4)\,,\qquad d\rightarrow 4\,,\\
\label{b3 in d6}
b_3 &= - \frac{1}{N} \,\frac{216(d-6)^3}{\pi^9} + {\cal O}((d-6)^4)\,,\qquad d\rightarrow 6\,.
\end{align}
Moreover, from (\ref{tilde v3 answer}) we obtain
\begin{align}
\label{tilde v3 in d4}
\tilde v_3(0) &= 20\pi^6\zeta(5) + {\cal O}(d-4)\,,\qquad d\rightarrow 4\,,\\
\label{tilde v3 in d6}
\tilde v_3(0) &= - \frac{\pi^9}{(d-6)^3} + {\cal O}((d-6)^{-2})\,,\qquad d\rightarrow 6\,.
\end{align}
The full expression for $w_3/\tilde C_{s^3}$ is displayed in
(\ref{w3 over C3 full result}) below, where $\psi^{(n)}(x)$ is the $n$-th derivative of the digamma function $\psi^{(0)}(x)=\Gamma'(x)/\Gamma(x)$,
and $\gamma$ is the Euler constant. The plot of (\ref{w3 over C3 full result}) for the range $2\leq d\leq 6$ is shown in figure~\ref{fig:w3}. There is an apparent singularity at $d=3$, which is an artifact of our normalization: indeed, $\tilde C_{s^3}$ has a simple zero at $d=3$.
The theory becomes free at the fixed point in the limit $d\to 4$; therefore $W_{s^3}(d\rightarrow 4)$ must vanish.\footnote{We would like to thank Simone Giombi, Igor Klebanov and Gregory Tarnopolsky for discussing with us the $d=4$ case. Their insight helped us to improve our previous version of the related calculation.}
Indeed, based on our (\ref{general W phi phi s}), $W_{\phi\phi s}(d\rightarrow 4) = 0$, in agreement with \cite{Petkou:1995vu,Petkou:1994ad}.
From (\ref{Ws3 total in terms of Wpps}) it then remains to show that
$\frac{w_3+w_4}{\tilde C_{s^3}}(d\rightarrow 4) = 0$, which is indeed the case
(in fact, the $w_3$ and $w_4$ contributions vanish individually), as can be seen from
(\ref{w4 in d4}), (\ref{w3 over Cs3 in terms of b3 and tilde v3}),
(\ref{b3 in d4}), (\ref{tilde v3 in d4}). Using (\ref{hat f expression}) one can also verify
that $\hat f(d = 4) = 0$.
Moreover, it follows from (\ref{w3 over Cs3 in terms of b3 and tilde v3}),
(\ref{b3 in d6}), (\ref{tilde v3 in d6}) that in the vicinity of $d=6$, we have
\begin{equation}
\label{w3 over Cs3 in d6}
\frac{w_3}{\tilde C_{s^3}} = \frac{216}{N} + {\cal O}(d-6)\,.
\end{equation}
From (\ref{d6 W phi phi s}), (\ref{f around 6d}),
(\ref{w4 in d6}), (\ref{Ws3 total in terms of Wpps}) and (\ref{w3 over Cs3 in d6})
we then obtain in $d=6-\epsilon$,
\begin{equation}
\label{Ws3 final result}
W_{s^3} = \frac{162}{N} + {\cal O}(\epsilon)\,.
\end{equation}
The same value for $W_{s^3}$ was obtained in \cite{Fei:2014yja,Fei:2014xta}
for the critical cubic model. This match between the OPE coefficients provides additional non-trivial evidence for the equivalence between the models. In fact, it suggests the existence of a wide class of universal relations between the $O(N)$ CFTs in general $d$, at least in the $1/N$ expansion. Some of these relations
have been established in \cite{Petkou:1995vu,Petkou:1994ad} using the bootstrap method and
without considering a particular
Lagrangian or space-time dimension.
Our findings suggest that the bootstrap approach to the $O(N)$ CFTs, initiated in
\cite{Petkou:1995vu,Petkou:1994ad}, might not be exhausted.\footnote{Recent advancements in
bootstrap calculations in the $O(N)$ vector models have been reported in \cite{Alday:2019clp}.}
Presumably it can be extended to unravel a larger class of universal relations, which hold regardless of the specific structure of the $O(N)$ invariant Hamiltonian at the fixed point. In fact, these relations could be valid
in general $d$, and apply to all $O(N)$ symmetric CFTs with the same number of degrees of freedom,
rather than just to the critical $\phi^4$
vector model or the cubic model of \cite{Fei:2014yja,Fei:2014xta}.
The non-trivial match (\ref{Ws3 final result}) of the OPE data at order $1/N$ between apparently distinct models provides partial evidence for this statement.
We plot the full result (\ref{Ws3 total in terms of Wpps}) for $W_{s^3}$ in figure \ref{fig:W3Tot}.
It should be noted that the absence of a refined result for the $\langle sss\rangle$ correlator
has been particularly conspicuous since the work of \cite{Petkou:1995vu,Petkou:1994ad},
where the next-to-leading order value of the $\langle \phi\phi s\rangle$
three-point function was established. The new result
(\ref{Ws3 total in terms of Wpps}) for $\langle sss\rangle$
allowed us to subject the hypothesis of \cite{Fei:2014yja,Fei:2014xta} to a non-trivial test.
Finally, we would like to stress that while the results for the multi-loop diagrams, {\it e.g.,}
the 4-loop triangle diagram (\ref{w3 over C3 full result}), played a crucial role in the direct
diagrammatic calculation of the $\langle sss\rangle$ three-point function
at the next-to-leading order in the $1/N$ expansion, they have independent value, because diagrams of this type are ubiquitous in perturbative CFTs. Two additional useful results obtained in this section are the 3-loop trapezoid diagram $v_3$, or equivalently $\tilde v_3$ (see also Appendix \ref{appendix: calculation of v3}), and the 3-loop bellow diagram (\ref{w4 result}). To the best of our knowledge, the analytic expressions for these diagrams do not appear in the literature.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=300pt]{w3.eps}
\end{center}
\caption{$Nw_3/\tilde C_{s^3}$ as a function of the space-time dimension $2<d<6$. Our choice of normalization results in a pole, because $\tilde C_{s^3}$ vanishes at $d=3$.}
\label{fig:w3}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=300pt]{W3tot.eps}
\end{center}
\caption{The total $1/N$ correction $W_{s^3}$
to $\langle sss\rangle$ as a function of space-time dimension $2<d<6$.
Some of the values are $NW_{s^3}(d=2)=1/2$,
$NW_{s^3}(d=4)=0$, $NW_{s^3}(d=6)=162$.}
\label{fig:W3Tot}
\end{figure}
For convenience, we recapitulate the full answer for $W_{s^3}$,
determined by (\ref{A phi 1over N general}),
(\ref{A s 1over N general}),
(\ref{general W phi phi s}),
(\ref{delta V in general d}),
(\ref{hat f expression}),
(\ref{w4 result}),
(\ref{Ws3 total in terms of Wpps}) and
(\ref{w3 over Cs3 in terms of b3 and tilde v3}):
\begin{align}
W_{s^3} &= 3W_{\phi\phi s} +\hat f+ \frac{w_3+w_4}{\tilde C_{s^3}}\,,\\
W_{\phi\phi s} &= \gamma_\phi\,\left(\frac{d(d-3)+4}{4-d}
\Big( H_{d-3}+\pi \cot \big(\frac{\pi d}{2}\big)\Big)
+\frac{16}{(d-4)^2}+\frac{6}{d-4}+\frac{2}{d-2}-2 d+3\right)\,,\\
\hat f&=\gamma_\phi\,\frac{6(d-1)(d-2)}{d-4}\,\left(H_{d-4}-\frac{2}{d-4}+\pi \cot \big(\frac{\pi d}{2}\big) \right)\,,\\
\frac{w_3}{\tilde C_{s^3}} &=\gamma_\phi {2d(d{-}2)(d{-}3)\over (d{-}4)^2}
\left( 6 \psi ^{(1)}\Big(\frac{d}{2}{-}1\Big){-}\psi ^{(1)}(d{-}3) {-}{\pi^2\over 6}
{-} H_{d-4}\Big(H_{d{-}4}{+}2\pi\cot\big({\pi d\over 2}\big) \Big)
\right)
\label{w3 over C3 full result}\,,\\
\frac{w_4}{\tilde C_{s^3}}&=\gamma_\phi\,\frac{3 d (d-2) \left(\pi ^2-6 \psi ^{(1)}\left(\frac{d}{2}-1\right)\right)}{4 (d-4)}\,,
\end{align}
where $\psi^{(n)}$ is the $n$\textit{th} derivative of the digamma function
$\psi^{(0)}(x) = \Gamma'(x)/\Gamma(x)$,
$H_n$ is the $n$\textit{th} harmonic number,
and $\gamma_\phi\sim 1/N$ is given by (\ref{gamma phi 1over N general}).
We conclude this section by discussing a potential application of our results for the three-point function
$\langle sss\rangle$ in the context of the $3d/4d$ critical vector model/higher-spin theory correspondence
\cite{Klebanov:2002ja}.
We find that in $d=3$ the OPE coefficient vanishes up to next-to-leading order in $1/N$,\footnote{The most
accurate available numerical values of the total $\langle sss\rangle$ OPE coefficient
$\lambda_{sss}$ in $d=3$ were presented in \cite{Kos:2016ysd,Chester:2019ifh}:
\begin{equation}
\lambda_{sss}\Bigg|_{O(2)} = 0.830914(32)\,,\qquad
\lambda_{sss}\Bigg|_{O(3)}= 0.499(12)\,.
\end{equation}}
\begin{equation}
\label{Ws3 in d=3}
W_{s^3}(d=3) = 0+ {\cal O}(1/N^2)\,.
\end{equation}
This happens due to a rather non-trivial cancellation between the simple poles of two apparently unrelated terms in the expression for $W_{s^3}$:
\begin{align}
N\, \hat f &= \frac{16}{\pi^2}\,\frac{1}{d-3}+{\cal O}((d-3)^0)\,,\\
N\, \frac{w_3}{\tilde C_{s^3}}&= - \frac{16}{\pi^2}\,\frac{1}{d-3}+{\cal O}((d-3)^0)\,.
\end{align}
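This pole cancellation can be checked numerically directly from the closed-form expressions for $\hat f$ and $w_3/\tilde C_{s^3}$ recapitulated above. The sketch below (an illustration using the Python \texttt{mpmath} library, not part of the derivation) factors out the overall $\gamma_\phi$, whose explicit form is quoted elsewhere, and confirms that each term develops a simple pole at $d=3$ while their sum remains finite.

```python
from mpmath import mp, mpf, pi, cot, psi, harmonic

mp.dps = 40  # high precision: the 1/(d-3)^2 pieces inside w3 cancel to ~12 digits

def f_hat(d):
    # hat f divided by gamma_phi, from the recapitulated W_{s^3} expressions
    return 6*(d - 1)*(d - 2)/(d - 4) * (harmonic(d - 4) - 2/(d - 4) + pi*cot(pi*d/2))

def w3(d):
    # (w_3 / tilde C_{s^3}) divided by gamma_phi
    H = harmonic(d - 4)
    return (2*d*(d - 2)*(d - 3)/(d - 4)**2
            * (6*psi(1, d/2 - 1) - psi(1, d - 3) - pi**2/6
               - H*(H + 2*pi*cot(pi*d/2))))

for eps in (mpf('1e-4'), mpf('1e-6')):
    d = 3 + eps
    # each term diverges as ~1/(d-3); the sum tends to a finite constant
    print(eps, f_hat(d), w3(d), f_hat(d) + w3(d))
```

Restoring the factor of $\gamma_\phi$ then reproduces the residues $\pm\frac{16}{\pi^2}\frac{1}{d-3}$ quoted above.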
A large body of literature, starting from the work of \cite{Klebanov:2002ja},
has been dedicated to studying the holographic
correspondence between the critical $O(N)$ vector model in $d=3$ and the type-A Vasiliev higher-spin theory
in $AdS_4$.
While in this paper we cannot do justice to a proper review of the related literature, we would like
to point out that it is tempting to discuss our result (\ref{Ws3 in d=3})
in the context of the holographic correspondence.
Indeed, the earlier works by \cite{Petkou:2003zz,Sezgin:2003pt}
(see also \cite{Giombi:2009wh} for an extensive discussion of the holographic three-point functions)
argue that to leading order in the large $N$
expansion, the higher-spin theory in the $AdS_4$ bulk implies that in $d=3$
\begin{equation}
\label{sss general to leading in 1/N}
\langle sss\rangle = 0 + {\cal O}(1/N^{3/2})\,.
\end{equation}
This equation is in full agreement with the field theory
side, where it has in fact been known
since \cite{Petkou:1995vu,Petkou:1994ad}, as we reviewed above in this section.
The new result (\ref{Ws3 in d=3})
for $\langle sss\rangle$ at next-to-leading order
raises the natural question of whether the holographic analysis
of \cite{Petkou:2003zz,Sezgin:2003pt}
extends to order ${\cal O}(1/N^{3/2})$.
Notice that \cite{Petkou:2003zz} shows that
(\ref{sss general to leading in 1/N}) is valid for the $AdS_4$ dual of \textit{any} $O(N)$ CFT in $d=3$.
In particular, no assumption was made that the $AdS_4$ bulk is populated by the higher-spin fields
of Vasiliev's theory \cite{Petkou:2003zz}.
In fact, it may happen that (\ref{Ws3 in d=3}) holds for a large class of $O(N)$ CFTs in $d=3$. Holography is a possible tool to check this conjecture.
In particular, one can try to extend the holographic argument of \cite{Petkou:2003zz} to the next-to-leading order in the $1/N$ expansion.
A possible outcome $\langle sss\rangle = 0 + {\cal O}(1/N^{5/2})$ of a generic bulk calculation
would be strong evidence in favor of the holographic duality \cite{Klebanov:2002ja},
and of the universality of $W_{s^3}$ for a broad class of $O(N)$ CFTs in general dimension.
\section{Discussion}
\label{sec:discussion}
In this work we explore the $O(N)$ critical vector model with quartic interaction in $2\leq d \leq 6$ dimensions. In higher dimensions this model provides a nice illustration of an asymptotically safe quantum field theory, whereas in lower dimensions it is closely related to realistic systems such as the critical Ising model in 3D. While it is difficult to prove the existence of the fixed point in full generality, the perturbative approach, as well as the $\epsilon$- and large-$N$ expansions, confirms that it exists in this model. Our calculations encompass the next-to-leading order analysis in the $1/N$ expansion and extend previously known results in a few ways.
We derive and perform consistency checks that provide additional evidence for the existence of a non-trivial fixed point. Moreover, we continue non-perturbative studies of the emergent conformal field theory and use conformal techniques to calculate new CFT data associated with the three-point functions of the fundamental scalar and Hubbard-Stratonovich fields. This helps to expand our understanding of the CFT describing the critical $\phi^4$ model in general dimension. Along the way we evaluate a number of conformal multi-loop diagrams up to and including 4 loops in general $d$. These diagrams are generic and have value in themselves, since they are not restricted to the critical $\phi^4$ model, but rather appear in various CFTs.
In \cite{Fei:2014yja,Fei:2014xta} an alternative description of the critical $O(N)$ model in terms of $N+1$ massless scalars with cubic interactions was proposed. It was explicitly shown that the scaling dimensions of various operators within the alternative description match the known results for the critical $O(N)$ theory. While our findings confirm the observations made in \cite{Fei:2014yja,Fei:2014xta}, we also find agreement between the next-to-leading order
coefficients of the $\langle sss\rangle$ three-point functions, previously unobserved in the literature.
In fact, it can be shown that all higher order correlation functions $\langle s(x_1)\cdots s(x_n)\rangle$, $n=4,5,\dots$ in both models match to leading order in the $1/N$ and $\epsilon$ expansion. In general, these correlators are entirely fixed by the 1PI vertices of the effective action. To leading order in the $1/N$ expansion these vertices are given by a single $\phi$-loop with $s$-legs attached to it. In the $\phi^4$ theory they have the following form
\begin{equation}
\label{n point function in phi4}
\Gamma_{\phi^4}^\text{1PI}(x_1,\ldots x_n)
=\,N\,C_\phi^n\,C_s^{n/2}\, \left(-\frac{2}{\sqrt{N}}\right)^n\,
\int \prod_{i=1}^n d^dx_i ~ \mathcal{I} (x_1,\dots, x_n) s(x_1)\cdots s(x_n)\,,
\end{equation}
where $\mathcal{I} (x_1,\dots, x_n)$ is a space-dependent structure representing the internal $\phi$-loop, and the external $s$-legs are normalized according to (\ref{field normalizations}).
In the cubic model the same vertex equals
\begin{equation}
\label{n point function in cubic}
\Gamma_\text{cubic}^\text{1PI}(x_1,\ldots x_n) =N\,C_\phi^{3n/2}\, (-g_1)^n\,
\int \prod_{i=1}^n d^dx_i ~ \mathcal{I} (x_1,\dots, x_n)s(x_1)\cdots s(x_n)\,,
\end{equation}
where the fixed point value of the $\phi\phi s$ coupling constant is given by \cite{Fei:2014yja}
\begin{equation}
g_1 = \sqrt{\frac{6\epsilon (4\pi)^3}{N}}\left(1+\mathcal{O}\left(\frac{1}{N},\;\epsilon\right)\right)~.
\end{equation}
In (\ref{n point function in cubic}) we took into account that the $s$ field in the cubic model is canonically
normalized, and therefore the amplitude of its propagator in position space equals $C_\phi$.
Note that (\ref{n point function in phi4}) and
(\ref{n point function in cubic}) share the same structure $ \mathcal{I} (x_1,\dots, x_n)$ because the 1PI diagrams are identical up to an overall constant prefactor, which is explicitly written down in both cases. It then remains to show that these constants are identical. Indeed,
\begin{equation}
\left(\sqrt{N} \, \frac{g_1}{2}\right)^2\,\frac{C_\phi}{C_s} = 1 +\mathcal{O}\left(\frac{1}{N},\;\epsilon\right)\,.
\end{equation}
The above matches support the equivalence suggested in \cite{Fei:2014yja,Fei:2014xta} between the critical $\phi^4$ vector model and the critical cubic model in $d=6-\epsilon$ dimensions.
Based on these findings we propose that the new result for $W_{s^3}$, which did not appear in the literature before our work,
is in fact universally applicable to a large class of $O(N)$ CFTs. Therefore we interpret
the match between the $W_{s^3}$ coefficients of the critical $\phi^4$
model and the critical cubic model in $d=6-\epsilon$ dimensions as a particular manifestation
of this universality. Moreover, we suspect that it might be possible to systematically derive additional universal relations, at least in the $1/N$ expansion, thereby replacing the equivalence of \cite{Fei:2014yja,Fei:2014xta}
by a universally valid bootstrap statement.\footnote{Another reason to believe in the universality of the critical $O(N)$ vector models in general $d$ is the behavior
of the $W_{s^3}$ coefficient
when extrapolated to $d=8$. While the vector model in $d=8$
is manifestly non-unitary, since the scaling dimension of the operator $s$ is below
the unitarity bound, we can nevertheless formally compare our result $NW_{s^3}(d=8) = -950$ with a similar calculation in the exotic critical model in $8-\epsilon$ dimensions \cite{Gracey:2015xmw}. We find a precise match with $NW_{s^3}$ calculated in \cite{Gracey:2015xmw}.
We are grateful to Simone Giombi, Igor Klebanov and Gregory Tarnopolsky for letting us know about
\cite{Gracey:2015xmw} and suggesting to compare the results.}
Furthermore, we notice that
the critical $O(N)$ model in higher dimensions does have certain unphysical features. The well-known $\epsilon$-expansion shows that the coupling constant at the Wilson-Fisher fixed point is negative in $4+\epsilon$ dimensions \cite{Weinberg:1976xy}. This means that the potential is unbounded from below for large values of the field. However, perturbative calculations in $1/N$ do not reveal any sign of instability at the level of the correlation functions. Thus, for instance, unitarity bounds are satisfied. Of course, this argument only shows that the vacuum state of the theory is metastable, provided the instability observed within the $\epsilon$-expansion persists in the $\epsilon\to 1$ limit.
Indeed, in the recent work \cite{Giombi:2019upv} the authors provide a non-perturbative argument in favour of the instability of the model. Their analysis rests on the observation that the path integral in the large-$N$ limit is entirely dominated by the saddle point. The corresponding saddle point equation is Weyl invariant at the fixed point, and therefore admits a family of solutions parametrized by size and location. This type of large-$N$ instanton was previously observed in the critical $\phi^6$ model in three dimensions \cite{Smolkin:2012er}. In particular, it was argued that the instantons and the associated instability of the critical $\phi^6$ model can be used as a toy model towards a holographic resolution of the singularity and conformal factor problems in quantum cosmology. It would be interesting to explore these aspects in the context of the critical $\phi^4$ model.
Remarkably, the critical $O(N)$ vector models exhibit peculiar behaviour when coupled to a thermal bath \cite{Chai:2020zgq}. We hope that certain results presented in this paper might be of help towards understanding this behaviour in lower dimensions. Progress in this direction will be reported elsewhere \cite{prog}.
\section*{Acknowledgements} \noindent We thank Noam Chai, Soumangsu Chakraborty, Johan Henriksson, Zohar Komargodski and Anastasios Petkou for helpful discussions and correspondence. We would like to express our special thanks of gratitude to Simone Giombi, Igor Klebanov and Gregory Tarnopolsky for numerous comments and stimulating correspondence which helped us to improve our results and their presentation. This work is partially supported by the Binational Science Foundation (grant No. 2016186), the Israeli Science Foundation Center of Excellence (grant No. 2289/18) and by the Quantum Universe I-CORE program of the Israel Planning and Budgeting Committee (grant No. 1937/12).
\section{Introduction}
\vspace{-0.1in}
There is a large gender disparity in representation in the physics community, with men dominating in both rank and number~\cite{Pettersson2011}. In studying this, much of the emphasis in physics education research is on gender gaps in \textit{performance}, such as on concept inventories and course grades~\cite{Scherr,Madsen2013}. However, \textit{participation} in the physics community through the roles people take on (and in particular doing lab work) can heavily shape one's identity as a physicist~\cite{Irving2015,Irving2016}. Correspondingly, a gendered division of roles leaves the modern practice of physics laden with masculine connotations~\cite{Gonsalves2016}. Understanding how these gendered roles develop and how they are shaped through behaviors in labs is critical.
In this paper, we explore student participation through the \textit{behaviours} they take on in an introductory physics lab course. Previous work has shown mixed results with regard to gendered action in first-year physics labs~\cite{Danielsson2009,Jovanovic1998}, such as findings that men use desktop computers more than women~\cite{Day2016} and that management of the equipment apparatus is heavily impacted by gender in mixed-gender pairs~\cite{Holmes2014}. In this paper, we begin to explore the replicability and generalizability of these studies, as well as to understand underlying mechanisms and implications.
We performed a cluster analysis to categorize student behaviours, a \textit{person-centered approach} which can account for non-linearities missed in common regression analyses~\cite{Corpus2014}. We found that, in the inquiry lab sections designed to foster collaborative work and promote student agency, women used laptops and personal devices more than men, and men used lab equipment more than women. We found no such difference in traditional lab sections, in which students were guided through experiments and individually filled out worksheets. We conjecture that students in the inquiry labs were afforded the opportunity to divide tasks within their groups, and therefore did so along gendered lines. We use these results to guide future work which will aim to explore the mechanisms for this observed gender-based behaviour difference in labs.
\vspace{-0.2in}
\section{Methods}
Participants were students enrolled in the honours-level mechanics course of a calculus-based physics sequence. During Fall 2017, all students in this study attended the same lecture and were mixed together in discussion sections, but were separated into two pedagogically different lab types (three \textit{traditional lab} sections and two \textit{inquiry lab} sections). Students self-selected into their lab sections prior to the start of the course; at the time of selection, they were unaware of differences between the labs. During Spring 2018, the two lab sections under study were both inquiry labs.
The \emph{traditional labs} were designed to reinforce physics content knowledge by providing students with hands-on experiences with physical phenomena. Students were provided with a detailed lab worksheet that guided them through experiments that demonstrated physics concepts. Each student handed in their individual worksheet at the end of the lab period.
The \emph{inquiry labs} were designed to emphasize the process of experimentation in physics. In these labs, students were provided with a goal but were not provided with specific procedures or decisions for reaching that goal. Experimentation skills were emphasized in all lab activities with a focus on iterating, improving, and extending investigations. Students worked collaboratively on electronic lab notes to document their processes and submitted one set of notes per group at the end of the lab session.
\vspace{-0.15in}
\subsection{Quantifying student behaviours}
In all lab sections, observers documented student behaviours following the observation protocol used in Day \textit{et al.}~\cite{Day2016}. Every five minutes, an observer noted each student's actions in the lab using the codes explained in Table~\ref{table:codes}. The cumulative actions of a student in a given lab period formed a student profile; thus, over the course of a semester there are multiple profiles for each student, one for each lab period. A profile is constructed by normalizing the frequency of observed codes for a student in a lab period (and therefore represents the fraction of codes associated with each student). In Fall 2017, observers were physically present in the lab space. In Spring 2018, observers coded video using the same protocol to determine student profiles.
\begin{table}[htbp]
\caption{\textbf{Action codes used in observations}. The \textit{Laptop} code is used for both handling a laptop or personal device (students used laptops, phones, and tablets for the purpose of notetaking, writeup, data analysis and reading instructions in the inquiry labs). \label{table:codes}}
\begin{ruledtabular}
\begin{tabular}{cl}
\textbf{Code} & \textbf{Description} \\
\hline
Equipment & Handling equipment \\
Laptop & Using a laptop or personal device \\
Paper & Writing on paper or in a notebook \\
Computer & Using the desktop computer at the lab bench \\
Other & Other behaviour, such as discussing or observing
\end{tabular}
\end{ruledtabular}
\vspace{-0.1in}
\end{table}
Codes were applied by identifying what the students handled (laptop or personal device, computer, paper, equipment). The \textit{Other} code captured all other actions such as talking with peers, asking questions of the instructor, listening to explanations, observing other group members, and off-task behaviour. One code was applied to each student in the class every five minutes, except in cases where the student had not yet arrived, had already left, or could not be easily identified (such as walking off camera).
To validate this method, two observers coded student actions in the same lab period using the described protocol but at different five-minute intervals to independently determine student profiles. Observers were not coding the same student at the same time. This was done to address two issues: (1) the reliability of the codes, and (2) the validity of the five minute time interval at capturing overall student behaviours in a two-hour lab period. A chi-squared analysis was performed on the contingency table constructed from the cumulated student profiles (frequencies of each code). In all cases observers' profiles were not significantly different ($p > 0.1$). Because each pair of observers obtained statistically indistinguishable observations, single observers coded subsequent lab periods.
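The inter-observer comparison described above can be sketched in a few lines (an illustration with made-up code counts, not the study's data; the real analysis compared the cumulated profiles of each observer pair):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cumulated code frequencies for two observers coding the same
# lab period at interleaved five-minute intervals
# (columns: Equipment, Laptop, Paper, Computer, Other)
observer_a = [14, 22, 9, 11, 30]
observer_b = [12, 25, 7, 13, 27]

chi2, p, dof, expected = chi2_contingency(np.array([observer_a, observer_b]))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p > 0.1: the two observers' profiles are statistically indistinguishable
```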
Through in-class surveys, students self-reported demographic information. In all, 143 students were used in this study, resulting in 522 student profiles across 30 lab periods (each student is assigned a unique profile per lab period, and is in at most 5 lab periods). Table~\ref{table:Demographics} shows the gender demographics of the two lab sections. Although surveys provided students with the opportunity to disclose another gender, no student chose to do so.
To compare profiles from all students across all lab sections in both semesters, each student profile was normalized so that each measure represented the fraction of codes rather than the number of codes (see Table~\ref{table:codes} for list of codes). To perform a cluster analysis, each profile was grand mean scaled (Mean~=~0, SD~=~1), thus turning the different measures into z-scores~\cite{Schmidt2017,Corpus2014}. The Euclidean distance between student profiles represents the dissimilarity of student profiles, in units of standard deviations~\cite{Corpus2014}. In this way, we can relate geometric quantities (Euclidean distances) to statistical quantities (dissimilarities between profiles).
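The two-step scaling can be sketched as follows (a minimal \texttt{numpy} illustration with made-up counts; column-wise z-scoring is assumed, i.e. each of the five measures is standardized across profiles):

```python
import numpy as np

# Each row is one student profile: made-up raw counts of the five codes
# (Equipment, Laptop, Paper, Computer, Other) in one lab period
counts = np.array([[4., 0., 6., 2., 8.],
                   [1., 9., 0., 3., 7.],
                   [6., 2., 0., 5., 7.]])

# Step 1: normalize each profile so that measures are fractions of codes
fractions = counts / counts.sum(axis=1, keepdims=True)

# Step 2: grand mean scale each measure across profiles (Mean = 0, SD = 1)
z = (fractions - fractions.mean(axis=0)) / fractions.std(axis=0)

# Euclidean distance between z-scored profiles measures their dissimilarity
# in units of standard deviations
print(np.round(z, 2))
print(round(float(np.linalg.norm(z[0] - z[1])), 2))
```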
\begin{table}[htbp]
\caption{\textbf{Student demographics} of this study, with numbers in paranthises. In all, 143 students were used in this study. Students were observed during multiple lab classes during the semester, resulting in 522 student behaviour profiles. \label{table:Demographics}}
\begin{ruledtabular}
\begin{tabular}{l c c c c}
& \multicolumn{2}{c}{\textbf{Traditional Labs}} & \multicolumn{2}{c}{\textbf{Inquiry Labs}} \\
& Students & Profiles & Students & Profiles \\
& $\%(N)$ & $\%(N)$ & $\%(N)$ & $\%(N)$\\
\hline
Women & $19 \pm 5 (11)$ & $18\pm 3(34)$ & $25\pm 5 (21)$ & $26\pm 2(87)$ \\
Men & $79\pm 5(46)$ & $81 \pm 3(152)$ & $74 \pm 5 (63)$ & $74 \pm 2(226)$ \\
Undisclosed & $2\pm2 (1)$ & $1\pm 1(2)$ & $1\pm 1 (1)$ & $0.3\pm 0.3(1)$
\end{tabular}
\end{ruledtabular}
\vspace{-0.1in}
\end{table}
\vspace{-0.2in}
\subsection{Cluster analysis}
\vspace{-0.1in}Once all student profiles were obtained in z-score format, a standard k-means cluster analysis was performed. K-means is an iterative algorithm, where the optimal solution is found when the sum of square of distances from all points to their respective cluster center is minimized~\citep{kMeans}. We used the elbow method to determine if the data are clusterable~\cite{elbow}. This method optimizes the number of clusters by looking at the square distance from each point to their respective cluster center, and plotting this as a function of the number of clusters. When averaged over the number of profiles, it represents the variance of the data. Note that increasing the number of clusters will always decrease the average squared distance, because allowing more clusters will explain more variance in the data. The ``elbow" in the plot corresponds to the optimal number of clusters. Fig.~\ref{fig:elbow} illustrates the method and compares to random (unclusterable) data. From this we determined the optimal number of clusters to be five (a coincidence of this study, and not a reflection of the number of unique codes, as the random data does not have an elbow at five). The clusters account for 70\% of the variance in the data (64\% of equipment use, 78\% of paper and notebook use, 79\% of laptop and personal device use, 73\% of lab desktop computer use, and 59\% of other activities), well above the 50\% threshold used for a study of this type~\cite{Schmidt2017,Corpus2014}.
\begin{figure}
\includegraphics[width=0.9\linewidth]{clusterdistances_clusters.png}
\caption{Average squared distance from each point to the center of its assigned cluster, illustrating the use of the elbow method~\cite{elbow} to determine the optimal number of clusters. Blue points represent clustered student profiles, and orange points represent ten thousand randomly generated (non-clusterable) points for comparison. Points are illustrated using a t-SNE visualization~\cite{VanDerMaaten2008} for qualitative comparison, with random points forming a blob and classroom observations showing structure.\label{fig:elbow}}
\end{figure}
Clusters are primarily characterized by their centers, and so we label each cluster based on a description of its center. In Fig.~\ref{fig:ZScores}{(a)}, we see that the centers correspond to the five codes described in Table~\ref{table:codes}; i.e., a student profile in the equipment cluster corresponds to a strong positive deviation from the average equipment use. In Fig.~\ref{fig:ZScores}{(b)} we use t-distributed stochastic neighbor embedding (t-SNE)~\cite{VanDerMaaten2008} to visualize the clusters, where each point in the figure represents an individual student profile. Since we are attempting to project a five-dimensional space into two dimensions, the resulting image primarily preserves structure and is used for a qualitative visualization, with distant points dissimilar and close points similar to each other.
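A sketch of the projection step (again on synthetic stand-in data rather than the actual profiles):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
# Synthetic stand-in: five blobs in five dimensions, 40 profiles each
centers = np.eye(5) * 8.0
profiles = np.vstack([c + rng.normal(size=(40, 5)) for c in centers])

# Project to two dimensions; t-SNE preserves local structure, so profiles
# from the same blob should land near each other (a qualitative check only)
embedding = TSNE(n_components=2, perplexity=30, init='pca',
                 random_state=0).fit_transform(profiles)
print(embedding.shape)
```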
\begin{figure}
\vspace{-0.1in}
\includegraphics[width=\linewidth]{clusterCentersZScore.png}
\caption{(a) The z-score profile of each cluster center shows a division based on task, and so we name the clusters according to the codes from Table~\ref{table:codes}. (b) Student profiles are visualized in two dimensions using t-SNE and colored by cluster. Since this image is an attempt to project a five-dimensional space into two dimensions, it provides a \textit{qualitative} picture of the cluster shapes~\cite{VanDerMaaten2008}.\label{fig:ZScores}}
\vspace{-0.1in}
\end{figure}
\vspace{-0.2in}
\section{Results}
Once the student profiles were clustered, we analyzed each cluster's composition, shown in Fig.~\ref{fig:clusterComposition}. Profiles from students who chose not to disclose their genders (n=2) are omitted from this part of the analysis. The first striking difference in cluster composition is that the \textit{Laptop} cluster is composed entirely of students in the inquiry labs, and that the \textit{Paper} cluster is composed entirely of students in the traditional labs. This reflects the logistical differences between these two sections. In the traditional labs, students wrote answers to prompts on paper worksheets. In contrast, students in the inquiry labs worked collaboratively on electronic lab notes, and so documented everything using electronic devices (laptops, personal devices, or the desktops provided in lab). In both the traditional and inquiry labs, students needed to use equipment and were provided with a desktop computer. Therefore, as expected, the \textit{Computer} and \textit{Equipment} clusters contain students from both the traditional and inquiry labs. We note here, and will discuss further in Section~\ref{sef:disc}, that these codes reflect \textit{what} a student was handling and not \textit{why} they were handling it (i.e., a desktop computer can be used for data gathering, analysis, or writeup).
\begin{figure}
\vspace{-0.1in}
\includegraphics[width=\linewidth]{clusterComposition.png}
\caption{Composition of each cluster. The biggest difference occurs in laptop and paper usage, which reflects the logistical differences between the traditional and inquiry labs. While there is no statistically significant difference in the distribution of men's and women's profiles in the traditional labs ($p=0.65$), there \textit{is} a difference in the inquiry labs ($p=0.011$), specifically with regards to equipment and laptop usage. Label colours match cluster colours from Fig.~\ref{fig:ZScores}.\label{fig:clusterComposition}}
\end{figure}
Because each student has multiple profiles arising from the different lab sessions throughout the semester, we analyzed whether or not individual students' profiles appear in multiple clusters over the semester. In the traditional labs, $87\pm 4\%$ of students have profiles in more than one cluster. Similarly, in the inquiry labs $86\pm 4\%$ of students have profiles in more than one cluster. This suggests that student profiles cannot be further collapsed to indicate `semester long' behaviour, since they vary from week to week (for various reasons, such as variability in lab content and students changing lab partners).
Figure~\ref{fig:clusterComposition} shows that, in the traditional labs, there is no statistically significant difference in the fraction of men's and women's profiles in any of the clusters ($p=0.65$). However, we notice that there \textit{is} a difference in the cluster composition with respect to men's and women's profiles in the inquiry labs ($p=0.011$). Specifically, $44\pm6\%$ of women's profiles in the inquiry labs are in the \textit{Laptop} cluster compared to $25\pm 3\%$ of men's profiles, and $4\pm 2\%$ of women's profiles are in the \textit{Equipment} cluster compared to $14\pm2\%$ for men. This suggests a division of tasks along gender lines in the inquiry labs, with women using laptops and personal devices more than men and men using equipment more than women.
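The statistical test behind the reported p-values is not specified above; a chi-squared test of independence on the gender-by-cluster contingency table is one standard choice. The counts in this sketch are hypothetical, chosen only to show the shape of such a computation.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = (women, men), columns = clusters
# (Equipment, Laptop, Computer, Paper, and a fifth code). Real counts
# would come from the coded student profiles.
table = np.array([[ 4, 44, 30, 12, 10],
                  [14, 25, 35, 14, 12]])

chi2, p, dof, expected = chi2_contingency(table)
print(dof, 0.0 <= p <= 1.0)   # 4 True
```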
\begin{table}[htbp]
\caption{\textbf{Student averaged fraction of codes} in the inquiry labs, for handling equipment or using a laptop or personal device, broken down by gender. All other code comparisons have $p>0.2$, indicating no statistically significant difference.\label{table:timeFrac}}
\begin{ruledtabular}
\begin{tabular}{l c c c}
& \textbf{Women} & \textbf{Men} & \textbf{p Value}\\
\hline
\textbf{Equipment} & $9\pm 1\%$ & $13\pm1\%$ & 0.0097\\
\textbf{Laptop} & $31\pm3\%$ & $23\pm 2\%$ & 0.0082
\end{tabular}
\end{ruledtabular}
\end{table}
We looked at the average fraction of codes in the inquiry labs to see whether they corroborate or refute the cluster analysis. Table~\ref{table:timeFrac} shows these averages, broken down by gender. They support the cluster results: men spent a larger fraction of their coded time handling equipment than women, and women spent a larger fraction of their coded time on a laptop or personal device than men.
\vspace{-0.1in}
\section{Discussion and Conclusions}
\label{sef:disc}
In this study, we analyzed the in-lab behaviours of students in two pedagogically different lab sections of the same introductory physics course. The biggest effect impacting the cluster composition was due to logistical differences between the traditional and inquiry labs (with regards to laptop and paper usage), an expected result given the large structural differences between labs. Furthermore, we found a second-order effect with respect to gender. We found no gendered difference in the traditional labs, but we did find one in the inquiry labs. We conjecture that this is because students in the inquiry labs were afforded the opportunity to divide tasks within their groups and did so along gendered lines.
Students in the traditional labs worked in groups to closely follow detailed lab worksheets but individually filled in answers. In other words, they completed a specific (assigned) individual task within a group setting where they shared equipment. However, there was very little room for decision making, as they followed specific instructions designed to demonstrate physics concepts. In contrast, students in the inquiry labs needed to decide as a group how to meet the provided goal of the lab and submitted one electronic lab notebook per group rather than individual lab worksheets. The inquiry labs were designed to foster collaboration within and between groups so students were free to divide tasks, and did so along gendered lines (with men handling equipment more than women, and women handling laptops and personal devices more than men).
These results raise many questions about equity in lab groups, in particular (1) how gendered roles are constructed and (2) how tasks are assigned. Studying more nuanced yet conceptually different tasks (such as data analysis versus secretarial note-taking) could provide insight into the mechanism behind the gendering of roles in lab classes. There is high variability with regards to the specifics of lab sections, which can lead to different amounts of equipment and laptop usage. For example, our results seem to contradict previous work which showed men using desktop computers more than women~\cite{Day2016}. Instead, analyzing \textit{why} a student is engaging in a particular task (such as using a computer for secretarial note-taking versus for data analysis) could provide a deeper understanding of student behaviour. Video recordings of individual groups were captured during the course of this study, and will be analyzed to answer specific questions with regards to task allocation. In future work, we will also evaluate the impacts of adding structure to the group work in the inquiry labs, such as deliberately assigning students to roles based on concepts such as those used in cooperative grouping~\cite{Heller1992}. In this way, students can maintain agency in decision making with respect to experimental design and data analysis, while structure is provided for role assignments. We plan to compare the behaviour of students in such labs to the ones used in this study and see if a gender-based behaviour difference persists.
\vspace{-0.1in}
\acknowledgments{We thank the teaching assistants and lab instructors for the course used in this study for their invaluable support and cooperation. This study was supported by the President's Council for Cornell Women's Affinito-Stewart Grant and Cornell's College of Arts and Sciences Active Learning Initiative.}
\bibliographystyle{apsrev}
\section{Representation of the spectral derivative}
\label{app:spec}
Another representation of the spectral derivative can be obtained by taking the Fourier transform on the RHS of \eqref{trick}. One obtains
\begin{eqnarray}\label{eq:spectral_rep}
\big\{[[\partial_i]]\psi^n\big\}_{k} =
\cfrac{1}{N_i}
\sum_{p_{i}=-N_i/2}^{N_i/2-1}\sum_{k'_{i}=0}^{N_i-1}
{\tt i}\xi^{i}_{p_{i}}e^{{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i} - x^{i}_{k'_i})} \psi^{n}_{k|k_{i} \rightarrow k'_{i}}.
\end{eqnarray}
This can be simplified further by noting that the sum over $p_{i}$ can be performed explicitly. Therefore, the final result is that
\begin{eqnarray}\label{eq:spectral_rep_final}
\big\{[[\partial_i]]\psi^n\big\}_{k} =
\sum_{k'_{i}=0}^{N_i-1}
A^{i}_{k_{i} k'_{i}} \psi^{n}_{k|k_{i} \rightarrow k'_{i}} ,
\end{eqnarray}
where the differentiation matrices are given by
\begin{eqnarray}
A^{i}_{k_{i} k'_{i}} &=&
\cfrac{1}{N_i}
\sum_{p_{i}=-N_i/2}^{N_i/2-1}
{\tt i}\xi^{i}_{p_{i}}e^{{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i} - x^{i}_{k'_i})} \\
&=& {\tt i}\frac{\pi}{N_{i}a_{i}} \left[ \frac{2e^{{\tt i}B^{i}_{k_{i} k'_{i}}} \sin \left( \frac{N_{i} B^{i}_{k_{i} k'_{i}}}{2}\right)}{\left(e^{{\tt i}B^{i}_{k_{i} k'_{i}}}-1\right)^{2}} + {\tt i} \frac{N_{i}\cos \left( \frac{N_{i} B^{i}_{k_{i} k'_{i}}}{2}\right)}{\left(e^{{\tt i}B^{i}_{k_{i} k'_{i}}}-1\right)} \right],
\end{eqnarray}
where $B^{i}_{k_{i} k'_{i}}:= \frac{\pi}{a_{i}} (x_{k_{i}} - x_{k'_{i}})$.
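The paper gives no implementation, but the matrix representation above is easy to check numerically. The sketch below (plain NumPy, assuming the uniform grid $x_k = -a + 2ak/N$ and frequencies $\xi_p = \pi p/a$ suggested by the notation) builds $A$ in one dimension from its defining sum and compares the result with the standard FFT spectral derivative.

```python
import numpy as np

# Build A[k, k'] = (1/N) sum_p i xi_p exp(i xi_p (x_k - x_k')) and check
# it against the FFT spectral derivative. Grid convention (assumption):
# x_k = -a + 2*a*k/N, xi_p = pi*p/a with p = -N/2, ..., N/2 - 1.
N, a = 32, np.pi
x = -a + 2 * a * np.arange(N) / N
xi = np.pi * np.arange(-N // 2, N // 2) / a

E = np.exp(1j * np.outer(x, xi))        # E[k, p] = exp(i xi_p x_k)
A = (E * (1j * xi)) @ E.conj().T / N    # differentiation matrix

f = np.sin(x)                           # periodic test function on [-a, a)
df_mat = (A @ f).real                   # matrix-based derivative
df_fft = np.fft.ifft(1j * 2 * np.pi * np.fft.fftfreq(N, d=2 * a / N)
                     * np.fft.fft(f)).real   # FFT-based derivative

# Both should reproduce cos(x) to machine precision.
print(np.allclose(df_mat, np.cos(x)), np.allclose(df_fft, df_mat))
```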
\section{Explicit construction of the matrix in the Crank-Nicolson scheme}\label{APXA}
The second step, given explicitly by
\begin{eqnarray}
\label{eq:step2_cn}
G_{k}^{n+1/2} \psi^{n^*}_{k} & = & \widetilde{G}_{k}^{n+1/2} \psi^{n+1/2}_{k},
\end{eqnarray}
can be written as a linear system of equations, by using the discrete pseudospectral representation of the derivative. For the LHS of \eqref{eq:step2_cn}, we use the representation given in Eqs. \eqref{trick} and \eqref{eq:spectral_rep_final} to obtain
\begin{eqnarray*}
G_{k}^{n+1/2} \psi^{n^*}_{k} & = & \psi_{k}^{n^*} +
\cfrac{\Delta t}{2} \sum_{i=1,2,3}
\cfrac{\alpha_{k}^{i}}{S_{k}^{i}} \sum_{k'_{i}=0}^{N_{i}-1}
A^{i}_{k_{i},k'_{i}}
\psi^{n^{*}}_{k|k_{i} \rightarrow k'_{i}} \, .
\end{eqnarray*}
Re-arranging the sums and introducing Kronecker's symbol, the last expression can be written in the form of
\begin{eqnarray}
\label{eq:matrix_vec}
G_{k}^{n+1/2} \psi^{n^*}_{k} & = &
%
\sum_{k'=0}^{N-1}
%
\mathcal{G}^{n}_{k k'} \psi^{n^{*}}_{k'} ,
%
\end{eqnarray}
where we define the matrix representation of $G_{k}^{n+1/2}$ as
\begin{eqnarray}
\label{eq:G_mat}
\mathcal{G}^{n}_{kk'}&:=& \delta_{kk'}
+\cfrac{\Delta t}{2} \sum_{i=1,2,3}
\cfrac{\alpha_{k}^{i}}{S_{k}^{i}}
A^{i}_{k_{i},k'_{i}}
\delta_{kk'|k_{i}=k'_{i}} .
\end{eqnarray}
Equation \eqref{eq:matrix_vec} is just the matrix-vector product with a matrix defined in \eqref{eq:G_mat}.
The RHS of Eq. \eqref{eq:step2_cn} is simpler because it contains the initial data, which is a known quantity. Therefore, it is possible to use the spectral representation of the derivative in Fourier space as in Eq. \eqref{trick}. This yields
\begin{eqnarray*}
\widetilde{G}_{k}^{n+1/2} \psi^{n+1/2}_{k}
&:=& \mathcal{H}^{n}_{k} ,\\
&=&
\psi_{k}^{n+1/2} -
\cfrac{\Delta t}{2} \sum_{i=1,2,3}
\cfrac{\alpha_{k}^{i}}{S^{i}_{k}N_i}\sum_{p_{i}=-N_i/2}^{N_i/2-1}{\tt i}\xi^{i}_{p_{i}}\widetilde{\psi}^{n}_{k|k_{i} \rightarrow p_{i}}e^{{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i}+a_i)}.
\end{eqnarray*}
With these results, the linear system has the form
\begin{eqnarray}\label{step2}
\mathcal{G}^{n}\boldsymbol{\psi}^{n^*} = \boldsymbol{\mathcal{H}}^{n} ,
\end{eqnarray}
where $\mathcal{G}^{n}$ is the matrix defined in \eqref{eq:G_mat} and the bold symbols represent vectors in real space, with components $\boldsymbol{V} = [V_{0,0,0},V_{1,0,0},\cdots,V_{N_{1}-1,N_{2}-1,N_{3}-1}]^{T}$. Solving this linear system yields $\psi^{n^{*}}_{k}$, the value of the wave function after the second step of the Crank-Nicolson scheme. The main problem with this approach is the evaluation of the matrix $\mathcal{G}^{n}$, which is not efficient (the computational complexity is $O(N^{2})$), and the storage of this matrix, which can be problematic since it requires $O(N^{2})$ memory.
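To make the structure of the linear system \eqref{step2} concrete, here is a one-dimensional, single-component toy version (an illustration only: the actual scheme involves four-spinors in three dimensions, and $\alpha$, $S$ and the right-hand side below are placeholders).

```python
import numpy as np

# 1-D, single-component toy version of the second Crank-Nicolson step:
# (I + (dt/2) diag(alpha/S) A) psi_star = h,
# with A the spectral differentiation matrix of the previous appendix.
N, a, dt = 64, np.pi, 1e-2
x = -a + 2 * a * np.arange(N) / N
xi = np.pi * np.arange(-N // 2, N // 2) / a
E = np.exp(1j * np.outer(x, xi))
A = (E * (1j * xi)) @ E.conj().T / N

alpha = np.ones(N)          # placeholder Dirac coefficient alpha(x)
S = np.ones(N)              # S = 1 inside the physical domain (no PML)
G = np.eye(N) + 0.5 * dt * (alpha / S)[:, None] * A

h = np.exp(-x**2)           # placeholder right-hand side H^n
psi_star = np.linalg.solve(G, h)   # dense direct solve: this O(N^2)
print(np.allclose(G @ psi_star, h))  # storage is the bottleneck noted above
```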
\section{Introduction}
The Dirac equation is one of the most important equations of Physics, giving a quantum description of relativistic spin-1/2 particles such as electrons and quarks \cite{Thaller}. For this reason, it can be found in the theoretical description of many physical systems in nuclear physics, condensed matter physics, laser-matter interaction, cosmology, and many others. However, this partial differential equation is notoriously hard to solve, and one often has to resort to numerical methods for accurate and non-perturbative solutions. Motivated by quantum electrodynamics (heavy ion collision, pair production)
\cite{PhysRevA.78.062711,PhysRevA.24.103,PhysRevC.71.024904,p13,Fillion-Gourdeau2013}, by strong field physics (Schwinger's effect, intense laser-molecule interaction) \cite{Fillion-Gourdeau2013,Fillion-Gourdeau2013b,graph2,p6,keitel4}, and by graphene modeling \cite{graph1, Katsnelson2006}, tremendous efforts have been devoted over the past two decades to the development of numerical methods for the computation of the time-dependent Dirac equation in flat Minkowski space. Real space methods such as quantum lattice Boltzmann techniques \cite{PhysRevLett.111.160602,succi,jcp2014,cpc2012,Lorin_Bandrauk}, Galerkin methods \cite{0022-3700-19-20-003,Fillion-Gourdeau2016122,8}, and pseudospectral or spectral methods \cite{16,fft1,fft2,fft3,Beerwerth2015189,Bauke20112454,keitel,grobe} were developed to efficiently and accurately solve the Dirac equation. Semi-classical regimes were considered in \cite{shijin} using Gaussian beams or Frozen Gaussian Approximations \cite{FGA,rousse}, while the non-relativistic limit has been studied in several recent papers \cite{bao3,bao2}. The computational difficulties in solving the time-dependent Dirac equation include the {\it fermion doubling problem}, related to numerical dispersion \cite{cpc2012}, and the Zitterbewegung \cite{Thaller}, resulting in highly oscillating solutions whose origin can be traced back to the presence of the mass term $\beta mc^2$. Finally, drastic stability conditions can lead to numerical diffusion related to the finite wave propagation speed, the speed of light $c$.
Another numerical challenge for the Dirac equation, shared with any other wave equation solved in real space on a truncated domain, is the need to impose special boundary conditions in order to avoid spurious wave reflections at the computational domain boundary. Therefore, computational methods on truncated domains require non-reflecting boundary conditions \cite{jcp2014,AABES,MOLPHYS,Hammer2014728,Antoine2014268}, absorbing or perfectly matched layers (PML) \cite{MOLPHYS,pinaud,TurkelYefet,zeng,Tsynkov}, or the introduction of an artificial potential \cite{pinaud}. On the other hand, Fourier-based methods applied on bounded domains naturally induce periodic boundary conditions, which can be problematic when dealing with delocalized wave functions. In this respect, the technique developed in \cite{jcp2019,cpc2017} for the Dirac equation in flat space, where a spectral method is combined with PML, is an interesting alternative. One of the goals of this article is to extend this numerical scheme to the Dirac equation in curved space.
Recently, the Dirac equation in curved space-time has gained important interest in some applications such as condensed matter physics for describing the dynamics of charge carriers in deformed 2D Dirac materials \cite{CORTIJO2007293,Cortijo_2007}, as well as astrophysics for fermion tunneling in black holes \cite{Kerner_2008,Di_Criscienzo_2008,LI2008370,CHEN2008106}. In its discrete version, it has been considered from the lattice Boltzmann technique point of view \cite{Succi2015,PhysRevB.98.155419,Debus_2018} and as continuous limits of quantum walks \cite{PhysRevA.88.042301,Arrighi2016,Mallick_2019}. However, the literature on numerical methods for this equation is scarce. Therefore, this article is an attempt to fill this gap. In particular, we present a numerical method based on the pseudodifferential representation \cite{TaylorBook} of the Dirac equation in curved space in combination with perfectly matched layers (PML). In the pseudodifferential representation, it is possible to efficiently use Fourier-based methods, even though the Dirac equation under consideration has non-constant spatial coefficients. A similar methodology was successfully developed in \cite{jcp2019,cpc2017} for the Dirac equation in flat space and in \cite{AntoineGeuzaineTang} for the Gross-Pitaevskii equation. In these two cases, the pseudodifferential representation was used within absorbing layers at the truncated domain boundary to implement wave absorption using PML. Within the domain, however, the scheme was a standard spectral method. In curved space, as we will show below, it is beneficial to employ the pseudodifferential representation in the whole domain because convolution products do not appear explicitly. When combined with an implicit scheme for the time evolution, this allows for a Fourier-based method benefiting from spectral convergence and unconditional $\ell^2-$stability. 
In addition, the PML can be straightforwardly included, reducing the effects of the inherent periodic boundary conditions.
This paper is organized as follows. In Section \ref{sec:dirac}, we describe the Dirac equations under consideration. In Section \ref{sec:PML}, we present the type of absorbing (perfectly matched) layers for the Dirac equation. In Sections \ref{sec:discrete} and \ref{sec:space}, we propose and analyze two numerical methods for approximating the Dirac equation in static curved space-times. Section \ref{sec:numerics} is dedicated to numerical experiments. We conclude in Section \ref{sec:conclusion}.
\section{Dirac equation}\label{sec:dirac}
This section is devoted to the presentation of the Dirac equations studied in this paper. We first recall the basics of the usual Dirac equation in flat space and then, we present its extension to curved space. Finally, we reformulate the latter in ``Hamiltonian form'', similar to the one in flat space but with space-dependent coefficients.
\subsection{Dirac equation in flat space}
The time-dependent Dirac equation in Cartesian coordinates reads \cite{Itzykson:1980rh}
\begin{eqnarray}
{\tt i}\partial_t \psi(t,{\boldsymbol x}) = H_{\mathrm{flat}}(t,{\boldsymbol x}) \psi(t,{\boldsymbol x}),
\label{eq:dirac_eq}
\end{eqnarray}
where $\psi(t,{\boldsymbol x})$ is the four-spinor and $H_{\mathrm{flat}}$ is the Hamiltonian operator. The latter is given by
\begin{eqnarray}
H_{\mathrm{flat}}(t,{\boldsymbol x}) =
\boldsymbol{\alpha} \cdot \left[ -{\tt i}\nabla - e\boldsymbol{A}(t,{\boldsymbol x}) \right] + \beta m + \mathbb{I}_{4}V(t,{\boldsymbol x}),
\label{eq:hamiltonian_flat}
\end{eqnarray}
where $\psi(t,{\boldsymbol x}) \in L^{2}(\mathbb{R}^{3}) \otimes \mathbb{C}^{4}$ is the time $t=x^{0}$ and coordinate (${\boldsymbol x} = (x^{1},x^{2},x^{3})$) dependent four-spinor, $\boldsymbol{A}(t,{\boldsymbol x})$ represents the three space components of the electromagnetic vector potential, $V(t,{\boldsymbol x}) = eA_{0}(t,{\boldsymbol x})+V_{\textrm{nuc.}}({\boldsymbol x})$ is the sum of the scalar and interaction potentials, $e$ is the electric charge (with $e=-|e|$ for an electron), $\mathbb{I}_{4}$ is the $4 \times 4$ unit matrix and $\beta,\boldsymbol{\alpha}=(\alpha^{i})_{i=1,2,3}$ are the Dirac matrices.
In this work, the Dirac representation is used, where
\begin{eqnarray}
\beta =
\begin{bmatrix}
\mathbb{I}_{2} & 0 \\
0 & -\mathbb{I}_{2}
\end{bmatrix}
\; \; , \; \;
\alpha^{i} =
\begin{bmatrix}
0 & \sigma^{i} \\
\sigma^{i} & 0
\end{bmatrix}.
\label{eq:dirac_mat}
\end{eqnarray}
The $\sigma^{i}$ are the usual $2 \times 2$ Pauli matrices defined as
\begin{eqnarray}
\sigma^{1} =
\begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix}
\;\; \mbox{,} \;\;
\sigma^{2} =
\begin{bmatrix}
0 & -{\tt i} \\ {\tt i} & 0
\end{bmatrix}
\;\; \mbox{and} \;\;
\sigma^{3} =
\begin{bmatrix}
1 & 0 \\ 0 & -1
\end{bmatrix},
\end{eqnarray}
while $\mathbb{I}_{2}$ is the $2 \times 2$ unit matrix. Note that natural units are used where $\hbar = c = 1$.
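As a quick numerical sanity check (not part of the original text), one can verify that the matrices above satisfy the flat-space algebra $\{\alpha^i,\alpha^j\}=2\delta_{ij}\mathbb{I}_4$, $\{\alpha^i,\beta\}=0$ and $\beta^2=\mathbb{I}_4$ in the Dirac representation.

```python
import numpy as np

# Dirac representation of beta and alpha^i built from the Pauli matrices.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]

def anti(X, Y):
    """Anticommutator {X, Y} = XY + YX."""
    return X @ Y + Y @ X

assert np.allclose(beta @ beta, np.eye(4))
for i in range(3):
    assert np.allclose(anti(alpha[i], beta), np.zeros((4, 4)))
    for j in range(3):
        assert np.allclose(anti(alpha[i], alpha[j]), 2 * (i == j) * np.eye(4))
print("Dirac algebra verified")
```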
To simplify the notation further and to parallel the one used in the next section for the Dirac equation in curved space time, it is convenient to write the Hamiltonian as
\begin{eqnarray}
H_{\mathrm{flat}}(t,{\boldsymbol x}) =
-{\tt i}\boldsymbol{\alpha} \cdot \nabla + \beta m + F_{\mathrm{flat}}(t, \boldsymbol{x}),
\label{eq:hamiltonian_flat_simp}
\end{eqnarray}
where the function $F_{\mathrm{flat}}$ contains the contribution from the electromagnetic potential:
\begin{eqnarray}
F_{\mathrm{flat}}(t, \boldsymbol{x}):= - e \boldsymbol{\alpha} \cdot \boldsymbol{A}(t,{\boldsymbol x}) + \mathbb{I}_{4}V(t,{\boldsymbol x}) .
\end{eqnarray}
\subsection{Dirac equation in curved space}
In this section, the generalization of the Dirac equation to curved space is presented. Throughout, Einstein's notation is assumed with the following conventions: Greek indices relate to general curved space (characterized by the general metric $g^{\mu \nu}(x)$, where $x:=(t,\boldsymbol{x})$ denotes a space-time point), uppercase Latin indices relate to flat space (characterized by the Minkowski metric $\eta^{AB} = \mathrm{diag}(1,-1,-1,-1)$), while lowercase Latin indices are summed over spatial coordinates only ($g^{ij}(x)$ for $i,j = 1,2,3$).
The extension of the Dirac equation to curved space follows by imposing general covariance under arbitrary coordinate transformations. In covariant notation and general background space, the Dirac equation takes the form \cite{pollock2010dirac}
\begin{eqnarray}
\label{eq:dirac_covariant}
\biggl\{ {\tt i} \gamma^{\mu}(x) \left[ \partial_{\mu} + \Omega_{\mu}(x) -{\tt i}eA_{\mu}(x) \right] - m \biggr\} \psi(x) = 0,
\end{eqnarray}
where $A_{\mu}$ is the four vector electromagnetic potential. The generalized gamma matrices used in the Dirac equation define a Clifford algebra:
\begin{eqnarray}
\label{eq:anticomm}
\{\gamma^{\mu}(x), \gamma^{\nu}(x) \} = 2 g^{\mu \nu}(x),
\end{eqnarray}
where the notation $\{\cdot,\cdot \}$ stands for the anticommutator and $g^{\mu \nu}$ is the metric characterizing the curved space. These matrices are a generalization of the Dirac matrices in flat space, which are not space-dependent and are related to the Minkowski metric as
\begin{eqnarray}
\{\gamma^{A}, \gamma^{B} \} = 2 \eta^{AB}.
\end{eqnarray}
The two sets of matrices are related \textit{via} the tetrad formalism as
\begin{eqnarray}
\gamma^{\mu}(x) = \gamma^{A}e_{ A}^{\ \mu}(x),
\end{eqnarray}
where $e_{ A}^{\ \mu}(x)$ is the tetrad. The tetrads are used to link the metric in curved and flat spaces and thus, obey the property:
\begin{eqnarray}
g^{\mu \nu}(x) = e_{A}^{\ \mu}(x) e_{B}^{\ \nu}(x) \eta^{AB}.
\end{eqnarray}
The spinorial affine connection $\Omega_{\mu}(x)$ was introduced in the Dirac equation to preserve the covariance. It is given by
\begin{eqnarray}
\Omega_{\mu}(x) = -\frac{{\tt i}}{4} \omega_{\mu}^{\ AB}(x) \sigma_{AB},
\end{eqnarray}
where $\sigma_{AB} = {\tt i}[\gamma_{A},\gamma_{B}]/2$ is the commutator of the ``flat space'' Dirac matrices while the spin connection is
\begin{eqnarray}
\omega_{\mu}^{\ AB}(x) =e_{\nu}^{\ A}(x) \left[
\partial_{\mu}e^{\nu B}(x)
+ \Gamma^{\nu}_{\ \mu \sigma}(x) e^{\sigma B}(x)
\right],
\end{eqnarray}
where the Christoffel symbols $\Gamma^{\nu}_{\ \mu \sigma}(x)$ were introduced. It is also important to notice that in curved space, the usual $\ell^2-$norm is not preserved. Instead, denoting
\begin{eqnarray}\label{l2gamma}
\left.
\begin{array}{lclcl}
\langle \psi, \psi\rangle_{\gamma} & = & \|\psi\|_{\gamma}^2 & = &\displaystyle \int \sqrt{|g(x)|}\psi^{\dagger}[\gamma^0\gamma^0(x)]\psi d^3x \, ,
\end{array}
\right.
\end{eqnarray}
where $g(x)$ is the determinant of the metric, the norm $\|\psi\|_{\gamma}$ is preserved in time, see \cite{Leclerc_2007,PhysRevD.22.1922,PhysRevLett.44.1559}. To develop a numerical scheme, it is convenient to rewrite \eqref{eq:dirac_covariant} in a form similar to \eqref{eq:dirac_eq} in flat space, i.e. in ``Hamiltonian form''. This is performed straightforwardly by multiplying \eqref{eq:dirac_covariant} by $\gamma^{0}(x)$ and using the anticommutation relation \eqref{eq:anticomm}. The Dirac equation in curved space can then be written as \cite{PhysRevD.22.1922}
\begin{eqnarray}
{\tt i}\partial_t \psi(t,{\boldsymbol x}) = H(t,{\boldsymbol x}) \psi(t,{\boldsymbol x}),
\label{eq:dirac_eq2}
\end{eqnarray}
where $H(t,{\boldsymbol x})$ is the Dirac Hamiltonian operator in curved space given by
\begin{eqnarray}
H(t,{\boldsymbol x}) &=&-{\tt i} [g^{00}(x)]^{-1} \gamma^{0}(x)
\gamma^{i}(x) \left[ \partial_{i} + \Omega_{i}(x) - {\tt i} e A_{i}(x)\right] \nonumber \\
& & +[g^{00}(x)]^{-1} \gamma^{0}(x) m - \mathbb{I}_{4} \left[{\tt i} \Omega_{0}(x) + eA_{0}(x) \right] .
\end{eqnarray}
Defining generalized Dirac matrices as
\begin{eqnarray}
\beta(t,\boldsymbol{x}) &:=& [g^{00}(x)]^{-1} \gamma^{0}(x), \\
\alpha^{i}(t, \boldsymbol{x}) &:=& [g^{00}(x)]^{-1} \gamma^{0}(x)\gamma^{i}(x),
\end{eqnarray}
the Dirac Hamiltonian becomes
\begin{eqnarray}
H(t,{\boldsymbol x}) &=&
-{\tt i}\boldsymbol{\alpha}(t,{\boldsymbol x}) \cdot \left[\nabla + \boldsymbol{\Omega}(t,{\boldsymbol x}) - {\tt i} e \boldsymbol{A}(t,{\boldsymbol x})\right]
+ \beta (t,{\boldsymbol x}) m \nonumber \\
& & -\mathbb{I}_{4} \left[ {\tt i}\Omega_{0}(t,{\boldsymbol x}) + eA_{0}(t,{\boldsymbol x}) \right] .
\label{eq:dirac_hamil}
\end{eqnarray}
At this point, it is convenient to simplify the notation further by defining a function $F$ that allows for writing the Hamiltonian as
\begin{eqnarray}
H(t,{\boldsymbol x}) &=&
-{\tt i}\boldsymbol{\alpha}(t,{\boldsymbol x}) \cdot \nabla
+ \beta (t,{\boldsymbol x}) m + F(t,{\boldsymbol x}) \, ,
\label{eq:dirac_simplified}
\end{eqnarray}
where
\begin{eqnarray*}
F(t,{\boldsymbol x}) :=
-{\tt i}\boldsymbol{\alpha}(t,{\boldsymbol x}) \cdot \left[ \boldsymbol{\Omega}(t,{\boldsymbol x}) - {\tt i} e \boldsymbol{A}(t,{\boldsymbol x})\right]
- \mathbb{I}_{4} \left[ {\tt i}\Omega_{0}(t,{\boldsymbol x}) + eA_{0}(t,{\boldsymbol x}) \right] .
\end{eqnarray*}
This is the general form of the Dirac Hamiltonian in curved space-time. The functions $\beta (t,{\boldsymbol x})$, $\boldsymbol{\alpha}(t,{\boldsymbol x})$ and $F(t,{\boldsymbol x})$ need to be determined \textit{a priori} from the metric and/or from the electromagnetic field potential. Some explicit examples are presented below. It is well-known that the preceding Hamiltonian is not self-adjoint with respect to the covariant inner product when the metric is time-dependent \cite{Leclerc_2007,PhysRevD.22.1922,PhysRevLett.44.1559,PhysRevD.79.024020}, casting doubt on the conservation of probability. This can be remedied by adding a new term to the Hamiltonian as \cite{Leclerc_2007,PhysRevD.79.024020}
\begin{eqnarray}
H'(t,{\boldsymbol x}) = H(t,{\boldsymbol x}) + \frac{{\tt i}}{2} \partial_{t} \ln \left(\sqrt{|g(x)| g^{00}(x)} \right).
\end{eqnarray}
The new Hamiltonian $H'$ is now self-adjoint. The new term can be interpreted in many ways, such as in the pseudo-hermitian operator formalism \cite{PhysRevD.79.024020,PhysRevD.82.104056,PhysRevD.83.105002} or as the time-dependence of the position eigenstates \cite{PhysRevD.79.024020}. However, the generality of these results is disputed by other authors, who have proposed a different approach \cite{arminjon2010basic}. In this work, we do not delve into this controversy as it is outside the scope of this article. For the sake of simplicity, throughout the rest of this article we assume that the metric is time-independent, allowing us to write the Hamiltonian as
\begin{eqnarray}
H(t,{\boldsymbol x}) &=&
-{\tt i}\boldsymbol{\alpha}({\boldsymbol x}) \cdot \nabla
+ \beta ({\boldsymbol x}) m + F(t,{\boldsymbol x}) , \\
F(t,{\boldsymbol x}) &=&
-{\tt i}\boldsymbol{\alpha}({\boldsymbol x}) \cdot \left[ \boldsymbol{\Omega}({\boldsymbol x}) - {\tt i} e \boldsymbol{A}(t,{\boldsymbol x})\right]
- \mathbb{I}_{4} \left[ {\tt i}\Omega_{0}({\boldsymbol x}) + eA_{0}(t,{\boldsymbol x}) \right] .
\end{eqnarray}
This Hamiltonian is now self-adjoint and is the starting point for the development of the numerical schemes.
The main difference between the flat and curved space versions of the Dirac equation is twofold: 1) the Dirac matrices are space-dependent functions $\boldsymbol{\alpha}({\boldsymbol x})$ and $\beta({\boldsymbol x})$, assumed here to be smooth, and 2) the function $F(t,{\boldsymbol x})$ contains the contribution coming from the spin affine connection and the metric. As long as the latter are smooth enough, they do not lead to any particular computational issues. On the other hand, because the Dirac matrices are space dependent, it is not possible to solve this equation directly with a Fourier-based method without introducing convolution products. However, we can rewrite the equation in pseudodifferential form in $\R^3$ as follows:
\begin{eqnarray}
{\tt i}\partial_t \psi(t,{\boldsymbol x}) = -{\tt i}\boldsymbol{\alpha}({\boldsymbol x}) \cdot \mathcal{F}^{-1}_{{\boldsymbol x}} \big\{{\tt i} \boldsymbol{\xi} \mathcal{F}_{{\boldsymbol x}}\{ \psi \} (t,\boldsymbol{\xi}) \big\} \nonumber
+ \biggl\{ \beta({\boldsymbol x}) m
+ F(t,\boldsymbol{x}) \biggr\}\psi(t,{\boldsymbol x}),
\label{eq:dirac_eq3}
\end{eqnarray}
which forms the basis of the proposed methodology. In this last equation, $ \mathcal{F}_{{\boldsymbol x}}\{ \cdot \} (t,\boldsymbol{\xi})$ denotes the Fourier transform on the spatial coordinates and $\boldsymbol{\xi}$ is the transform variable. In this formulation, the Dirac partial differential equation becomes an integral equation in which the derivative is expressed through its spectral representation.
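The point of the pseudodifferential form is that the space-dependent coefficients multiply pointwise in real space while the derivative is evaluated in Fourier space, so no convolution product appears. A one-dimensional, single-component sketch of one application of the Hamiltonian (the profiles for $\alpha$, $\beta$ and the potential are smooth placeholders, not derived from a specific metric):

```python
import numpy as np

# One application of H psi = -i alpha(x) F^{-1}{ i xi F{psi} }
#                            + (beta(x) m + F(t,x)) psi, in 1-D.
N, a, m = 128, 10.0, 1.0
x = -a + 2 * a * np.arange(N) / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=2 * a / N)   # angular frequencies

alpha = 1.0 + 0.1 * np.sin(np.pi * x / a)   # placeholder alpha(x)
beta = np.ones(N)                           # placeholder beta(x)
V = 0.5 * np.exp(-x**2)                     # placeholder standing in for F(t,x)

psi = np.exp(-x**2) * np.exp(2j * x)        # smooth initial data

dpsi = np.fft.ifft(1j * xi * np.fft.fft(psi))   # spectral derivative
Hpsi = -1j * alpha * dpsi + (beta * m + V) * psi  # pointwise products only
print(Hpsi.shape)   # (128,)
```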
Mathematically, the Dirac equation \eqref{eq:dirac_eq2} along with the Hamiltonian \eqref{eq:dirac_simplified} is a hermitian, non-strictly hyperbolic system of equations \cite{leveque2002finite}. In principle, other numerical methods such as finite volume or Galerkin methods could be used. An example of a finite volume discretization in 1D and its interpretation as a lattice Boltzmann method and quantum walk can be found in \cite{Succi2015}.
\section{Absorbing Layers}\label{sec:PML}
From a practical point of view, the time-dependent Dirac equation is considered on a bounded truncated physical domain denoted by $\mathcal{D}_{\textrm{Phy}}$.
The pseudospectral method used to solve the Dirac equation naturally induces periodic boundary conditions on a bounded domain. What follows is a general strategy which simultaneously i) avoids or reduces artificial wave reflections at the domain boundary, and ii) limits the transfer of the wave from one side to the opposite one by periodicity. To reach this goal, we add a layer $\mathcal{D}_{\textrm{PML}}$ surrounding $\mathcal{D}_{\textrm{Phy}}$, and stretch the coordinates in all directions. The overall computational domain is then defined by $\mathcal{D} = \overline{\mathcal{D}_{\textrm{Phy}}\cup \mathcal{D}_{\textrm{PML}}}$. We refer to \cite{MOLPHYS} for the construction of PMLs for quantum wave equations and more specifically to \cite{pinaud} for the derivation and analysis of PMLs for the Dirac equation.
Here, we outline the main features of this technique which is detailed in \cite{jcp2019}.
The first step of the implementation of PMLs is the following change of variables \cite{ZhengPML} involving only the space variable:
\begin{eqnarray}
\widetilde{x}^{i} = x^{i} + e^{{\tt i}\theta^{i}}\int_{L^{i}_{*}}^{x^{i}}\Sigma(s)ds,
\end{eqnarray}
where $\theta^{i} \in (0,\pi/2)$, $i=1,2,3$ and $\Sigma$ is a function to be determined. We then define
\begin{eqnarray*}
S^{i}(x^{i}) := 1+e^{{\tt i}\theta^{i}}\widetilde{\Sigma}^{i}(x^{i}),
\end{eqnarray*}
the function $\widetilde{\Sigma}^{i}$ being given by
\begin{eqnarray*}
\widetilde{\Sigma}^{i}(x^{i}) =
\left\{
\begin{array}{ll}
\Sigma(|x^{i}|-L^{i}), & L^i_* \leq |x^{i}| < L^{i},\\
0, & |x^{i}| <L^i_*,
\end{array}
\right.
\end{eqnarray*}
where $L^i_*<L^{i}$, and such that $\{x^{i}\in\R \, : \, L^i_* \leq |x^{i}| < L^{i} \}$ is the absorbing layer. The partial derivatives are then transformed into
\begin{eqnarray}\label{pml2}
\partial_{i} \rightarrow \cfrac{1}{S^{i}(x^{i})}\partial_{i} = \cfrac{1}{1+e^{{\tt i}\theta^{i}}\widetilde{\Sigma}^{i}(x^{i})}\partial_{i}
\end{eqnarray}
where $\widetilde{\Sigma}^{i}$ vanishes and $S^{i}$ is equal to $1$ in $\mathcal{D}_{\textrm{Phy}}$. On truncated domains, we will consider the transformation \eqref{pml2}, and the associated new Hamiltonian
\begin{eqnarray}
H_{\textrm{PML}} = - {\tt i} \boldsymbol{\alpha}({\boldsymbol x}) \cdot {\boldsymbol T}({\boldsymbol x}) + \beta({\boldsymbol x}) m + F(t,{\boldsymbol x}),
\label{eq:hamiltonian_pml}
\end{eqnarray}
where ${\boldsymbol T} := ([S^{1}(x^{1})]^{-1} \partial_{1},[S^{2}(x^{2})]^{-1} \partial_{2},[S^{3}(x^{3})]^{-1} \partial_{3})^T$. Several types of functions can be selected. An exhaustive study of the absorbing functions $\Sigma$ is proposed in \cite{AntoineGeuzaineTang} for Schr\"odinger equations. Here are some examples:
\begin{eqnarray*}
\left.
\begin{array}{lll}
\textrm{Type I: } \Sigma_0(x^{i}+\delta^{i})^2, & \textrm{Type II: } \Sigma_0(x^{i}+\delta^{i})^3, & \textrm{Type III: } -\Sigma_0/x^{i}, \\
\\
\textrm{Type IV: } \Sigma_0/(x^{i})^2, & \textrm{Type V: } -\Sigma_0/x^{i} -\Sigma_0/\delta^{i}, & \textrm{Type VI: } \Sigma_0/(x^{i})^2-\Sigma_0/(\delta^{i})^2,
\end{array}
\right.
\end{eqnarray*}
where $\delta^{i}:=L^i-L_*^i$. From the pseudospectral point of view, the space dependence of the coefficients $(S^{i})^{-1}$ again prevents the direct application of the Fourier transform to the equation, even in the flat-space case. In that case too, the pseudodifferential operator representation still allows one to combine the efficiency of the pseudospectral method with the computation of the non-constant coefficient Dirac equation (see \cite{jcp2019}).
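As an illustration, the stretching factor $S^{i}$ can be assembled from one of the absorbing profiles above. The sketch below uses the Type II (cubic) profile in one dimension; the values of $\Sigma_0$, $\theta$, $L_*$ and $L$ are illustrative choices, not prescribed by the text.

```python
import numpy as np

def pml_stretch(x, L_star, L, sigma0=10.0, theta=np.pi/4):
    """Complex stretching factor S(x) = 1 + e^{i*theta} * Sigma_tilde(x).

    Uses the Type II profile Sigma(s) = sigma0 * (s + delta)^3 evaluated at
    s = |x| - L, so absorption grows cubically from the interface |x| = L_star
    toward the outer boundary |x| = L, and vanishes in the physical domain.
    """
    delta = L - L_star                       # layer thickness delta = L - L_*
    s = np.abs(x) - L                        # s in [-delta, 0) inside the layer
    sigma = np.where(np.abs(x) >= L_star, sigma0 * (s + delta)**3, 0.0)
    return 1.0 + np.exp(1j * theta) * sigma

x = np.linspace(-10.0, 10.0, 201)
S = pml_stretch(x, L_star=8.0, L=10.0)
# Inside the layer, derivatives are replaced by (1/S(x)) d/dx, cf. Eq. (pml2).
```

In the physical domain $S\equiv 1$ and the equation is unchanged; in the layer $S$ acquires a positive imaginary part, which damps outgoing waves.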
\section{Time-discretization}\label{sec:discrete}
This section is devoted to the time discretization of the Dirac equation, in flat and curved spaces. The main tool is the operator splitting technique. The complications related to the appearance of spatial differential operators are relegated to the next section, where the spatial discretization is discussed.
\subsection{Time-discretization for Dirac equation in flat space}
\noindent In order to solve the Dirac equation in flat space, a natural approach consists in splitting the equation as follows. Here, an order-2 Strang splitting \cite{strang} is considered, but higher-order operator splittings can naturally be used. Let us consider a time interval from $t_n$ to $t_{n+1}$ and assume that $\psi(t_n,\cdot)$ is given; the exact formal solution to the Dirac equation in flat space is
\begin{eqnarray}
\psi(t_{n+1},{\boldsymbol x})= \mathcal{T} \exp \left\{ -{\tt i} \int_{t_{n}}^{t_{n+1}} H_{\mathrm{flat}}(s,\boldsymbol{x}) ds \right\} \psi(t_{n},{\boldsymbol x}),
\end{eqnarray}
where $\mathcal{T}$ stands for the time-ordered exponential. The latter can be approximated to second order by the following symmetric decomposition \cite{suzuki1993general,Suzuki1990319}:
\begin{eqnarray}
\label{eq:op_flat}
\psi(t_{n+1},{\boldsymbol x})=
e^{-{\tt i} \frac{\Delta t}{2} \left[\beta m + F_{\mathrm{flat}}(t_{n+1/2},\boldsymbol{x}) \right]}
e^{-\Delta t \boldsymbol{\alpha} \cdot \nabla }
e^{-{\tt i} \frac{\Delta t}{2} \left[\beta m + F_{\mathrm{flat}}(t_{n+1/2},\boldsymbol{x}) \right]}\psi(t_{n},{\boldsymbol x})
+ \mathcal{O}(\Delta t^{3}),
\end{eqnarray}
where three exponential operators have been introduced. The first and third exponential operators can be evaluated analytically using the fact that
\begin{eqnarray}
\label{eq:exp_dirac}
e^{{\tt i}[\beta G(t,\boldsymbol{x}) + \boldsymbol{\alpha} \cdot \boldsymbol{G}(t,\boldsymbol{x})]} =
\mathbb{I}_{4} \cos(|G|)
+
{\tt i} \frac{[\beta G(t,\boldsymbol{x}) + \boldsymbol{\alpha} \cdot \boldsymbol{G}(t,\boldsymbol{x})]}{|G|} \sin(|G|),
\end{eqnarray}
for arbitrary functions $G$ and $\boldsymbol{G}$, where we defined $|G| = \sqrt{G^{2}(t,\boldsymbol{x}) + \boldsymbol{G}^{2}(t,\boldsymbol{x})}$.
A common method to deal with the second differential operator is to use the Fourier transform $\mathcal{F}_{{\boldsymbol x}}$, as follows (still using an order-2 splitting). Step by step, denoting $t_{n^*}=t_n+\Delta t$, we have
\begin{eqnarray}\label{split}
\left\{
\begin{array}{lcll}
\psi(t_{n+1/2},{\boldsymbol x}) & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta m + F_{\mathrm{flat}}(t_{n+1/2},\boldsymbol{x}) \right]} \psi(t_{n},{\boldsymbol x}), & t\in [t_n,t_{n+1/2}], \\
\psi(t_{n^*},{\boldsymbol x}) & = & \mathcal{F}_{{\boldsymbol x}}^{-1}\big\{e^{-{\tt i}\boldsymbol{\alpha} \cdot \boldsymbol{\xi}\Delta t} \mathcal{F}_{{\boldsymbol x}}\{\psi(t_{n+1/2},\cdot)\}(\boldsymbol{\xi})\big\}, & t\in [t_n,t_{n^*}], \\
\psi(t_{n+1},{\boldsymbol x}) & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta m + F_{\mathrm{flat}}(t_{n+1/2},\boldsymbol{x}) \right]} \psi(t_{n^*},{\boldsymbol x}), & t\in [t_{n+1/2},t_{n+1}].
\end{array}
\right.
\end{eqnarray}
Then, the second step, in Fourier space, can also be evaluated \textit{via} \eqref{eq:exp_dirac}. After spatial discretization, the second equation is commonly solved using the Fast Fourier Transform (FFT), resulting in an {\it operator splitting pseudospectral scheme} \cite{fft2,Bauke20112454,keitel,grobe,keitel5}.
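To make the scheme concrete, here is a minimal one-dimensional sketch: a 2-component spinor with $\alpha=\sigma_x$, $\beta=\sigma_z$, units $c=\hbar=1$, and a scalar potential $F=V(x)\,\mathbb{I}_2$. These are illustrative choices, not the 3-D setting of the text; the three stages mirror \eqref{split}, with the half-steps evaluated in closed form as in \eqref{eq:exp_dirac}.

```python
import numpy as np

# Minimal 1D Strang-split pseudospectral step for the flat-space Dirac
# equation (illustrative 2-component model). Advances psi, shape (2, N),
# by one time step dt on a uniform periodic grid of spacing dx.
def split_step(psi, dx, dt, m, V):
    N = psi.shape[1]
    xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # discrete wavenumbers

    def half_step(psi):
        # exp(-i dt/2 (sigma_z m + V I)) = diag(e^{-i dt(V+m)/2}, e^{-i dt(V-m)/2})
        return np.array([np.exp(-0.5j * dt * (V + m)) * psi[0],
                         np.exp(-0.5j * dt * (V - m)) * psi[1]])

    psi = half_step(psi)                         # first half-step (mass/potential)
    # kinetic step in Fourier space:
    # exp(-i dt sigma_x xi) = cos(xi dt) I - i sin(xi dt) sigma_x
    p0, p1 = np.fft.fft(psi[0]), np.fft.fft(psi[1])
    c, s = np.cos(xi * dt), np.sin(xi * dt)
    psi = np.array([np.fft.ifft(c * p0 - 1j * s * p1),
                    np.fft.ifft(c * p1 - 1j * s * p0)])
    return half_step(psi)                        # second half-step
```

All three stages are unitary, so the discrete $\ell^2$-norm is conserved to machine precision, consistent with the flat-space conservation property discussed below.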
\subsection{Time discretization for Dirac equation in curved space}
\noindent Following the same procedure as in flat space, described in the last section, an operator splitting approach can be introduced in curved space. For a time interval from $t_n$ to $t_{n+1}$ and assuming that $\psi(t_n,\cdot)$ is given, the exact formal solution to the Dirac equation in curved space is
\begin{eqnarray}
\psi(t_{n+1},{\boldsymbol x})= \mathcal{T} \exp \left\{ -{\tt i} \int_{t_{n}}^{t_{n+1}} H(s,\boldsymbol{x}) ds \right\} \psi(t_{n},{\boldsymbol x}),
\end{eqnarray}
where $\mathcal{T}$ stands for the time-ordered exponential. Again, the latter can be approximated to second-order accuracy (local error $\mathcal{O}(\Delta t^{3})$) by a symmetric decomposition \cite{suzuki1993general,Suzuki1990319}:
\begin{eqnarray}
\label{eq:op_split_curved}
\psi(t_{n+1},{\boldsymbol x})&=&
e^{-{\tt i} \frac{\Delta t}{2} \left[\beta(\boldsymbol{x}) m + F(t_{n+1/2},\boldsymbol{x}) \right]}
e^{-\Delta t \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla }
e^{-{\tt i} \frac{\Delta t}{2} \left[\beta(\boldsymbol{x}) m + F(t_{n+1/2},\boldsymbol{x}) \right]}\psi(t_{n},{\boldsymbol x}) \nonumber \\
&& + \mathcal{O}(\Delta t^{3}).
\end{eqnarray}
This has the same form as \eqref{eq:op_flat}, except for the second exponential, which now contains a space-dependent Dirac matrix. The latter makes the direct use of the Fourier transform, as in the flat case, impractical: the efficient FFT cannot be used, and the computational complexity would be $\mathcal{O}(N^{2})$, where $N$ is the number of lattice points. Our strategy is to approximate the exponential operator so as to simplify the problem and to exploit the pseudodifferential form of the derivative operator. Two different types of approximation are introduced, leading to two different classes of numerical schemes:
\begin{enumerate}
\item \textbf{Crank-Nicolson approximation:}
This is obtained by formally approximating the exponential operator by its lowest-order unitary form (the $(1,1)$ Pad\'e approximant of the exponential function):
%
\begin{eqnarray}
\label{eq:pade_approx}
e^{-\Delta t \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla } =
\frac{\mathbb{I}_{4} - \frac{\Delta t}{2} \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla}
{\mathbb{I}_{4} + \frac{\Delta t}{2} \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla}
+ \mathcal{O}(\Delta t^{3}).
\end{eqnarray}
%
Then, the operator splitting can be implemented by the following sequence
%
\begin{eqnarray}
\label{eq:op_split_cn}
\left\{
\begin{array}{ll}
\displaystyle
\psi(t_{n+1/2},{\boldsymbol x}) = e^{-{\tt i} \frac{\Delta t}{2} \left[\beta (\boldsymbol{x}) m + F(t_{n+1/2},\boldsymbol{x}) \right]} \psi(t_{n},{\boldsymbol x}), & t\in [t_n,t_{n+1/2}], \\
%
\displaystyle
\left[ \mathbb{I}_{4} + \frac{\Delta t}{2} \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla \right] \psi(t_{n^*},{\boldsymbol x}) = \left[\mathbb{I}_{4} - \frac{\Delta t}{2} \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla\right] \psi(t_{n+1/2},\boldsymbol{x}), & t\in [t_n,t_{n^*}], \\
%
\displaystyle
\psi(t_{n+1},{\boldsymbol x}) = e^{-{\tt i} \frac{\Delta t}{2} \left[\beta (\boldsymbol{x}) m + F(t_{n+1/2},\boldsymbol{x}) \right]} \psi(t_{n^*},{\boldsymbol x}), & t\in [t_{n+1/2},t_{n+1}].
\end{array}
\right. \nonumber \\
\end{eqnarray}
As shown below, this yields a semi-implicit numerical scheme.
\item \textbf{Polynomial approximation:}
This is obtained by formally approximating the exponential operator $e^{-\Delta t \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla }$, by a polynomial of the form
%
\begin{eqnarray}
\sum_{q=0}^{N_{\mathrm{p}}} a_{q} P_{q}(-\Delta t \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla),
\end{eqnarray}
%
where $(P_{q})_{q=0,\cdots, N_{\mathrm{p}}}$ is a family of polynomials (such as the monomials of a Taylor expansion, second-order differencing or Chebyshev polynomials, for example \cite{doi:10.1063/1.448136}) and $(a_{q})_{q=0,\cdots, N_{\mathrm{p}}}$ are the polynomial coefficients, fixed so as to obtain an accurate approximation of the exponential. Then, the operator splitting can be implemented by the following sequence
%
\begin{eqnarray}
\label{eq:op_split_poly}
\left\{
\begin{array}{lcll}
\displaystyle
\psi(t_{n+1/2},{\boldsymbol x}) &=& e^{-{\tt i} \frac{\Delta t}{2} \left[\beta (\boldsymbol{x}) m + F(t_{n+1/2},\boldsymbol{x}) \right]} \psi(t_{n},{\boldsymbol x}), & t\in [t_n,t_{n+1/2}], \\
%
\displaystyle
\psi(t_{n^*},{\boldsymbol x}) &=& \displaystyle \sum_{q=0}^{N_{\mathrm{p}}} a_{q} P_{q}(-\Delta t \boldsymbol{\alpha} (\boldsymbol{x}) \cdot \nabla) \psi(t_{n+1/2},\boldsymbol{x}), & t\in [t_n,t_{n^*}], \\
%
\displaystyle
\psi(t_{n+1},{\boldsymbol x}) &=& e^{-{\tt i} \frac{\Delta t}{2} \left[\beta (\boldsymbol{x}) m + F(t_{n+1/2},\boldsymbol{x}) \right]} \psi(t_{n^*},{\boldsymbol x}), & t\in [t_{n+1/2},t_{n+1}].
\end{array}
\right. \nonumber \\
\end{eqnarray}
As shown below, this yields explicit numerical schemes which do not require the solution to a linear system.
\end{enumerate}
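The accuracy of the two approximations can be checked on a scalar surrogate: for an eigenvalue $\lambda$ of $\boldsymbol{\alpha}(\boldsymbol{x})\cdot\nabla$, both the $(1,1)$ Pad\'e form and a degree-2 polynomial (here a plain Taylor truncation, an illustrative choice) reproduce $e^{-\Delta t\,\lambda}$ with a local error $\mathcal{O}(\Delta t^{3})$, so halving $\Delta t$ divides the error by roughly $2^{3}=8$:

```python
import numpy as np

# Scalar check of the local error of the two exponential approximations:
# both are third-order accurate, so halving dt divides the error by ~8.
def errors(dt, lam=1.0):
    x = dt * lam
    exact = np.exp(-x)
    pade = (1 - x / 2) / (1 + x / 2)        # (1,1) Pade (Crank-Nicolson form)
    taylor = 1 - x + x**2 / 2               # degree-2 polynomial (N_p = 2)
    return abs(pade - exact), abs(taylor - exact)

e1 = errors(0.1)
e2 = errors(0.05)
ratios = (e1[0] / e2[0], e1[1] / e2[1])     # both close to 8
```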
These two strategies are well known for the Schr\"odinger equation, but are adapted here to the Dirac equation in curved space and combined with the pseudodifferential representation of the derivative. In the next sections, some methods will be given based on the discretization of Eqs. \eqref{eq:op_split_cn} and \eqref{eq:op_split_poly} with spectral accuracy in space.
The Strang splitting was introduced mostly to be consistent with the traditional splitting strategy used in flat space and to isolate the part of the equation that requires a special treatment with pseudodifferential operators. In principle, this is not mandatory, and unsplit schemes could also be used in combination with an approximation of the time-ordered exponential. Nevertheless, notice that the overall second-order accuracy in time is preserved by the operator splitting.
\section{Space-discretization for the Dirac equation in curved space}\label{sec:space}
In this section, the spatial discretization of the Dirac equation in curved space is described, based on the two approaches introduced in the last section. Throughout, we assume that the Dirac equation is solved in curved space on a truncated domain $[-a_1,a_1]\times [-a_2,a_2]\times [-a_3,a_3] \varsubsetneq \R^3$. The flat-space case and the case of time-dependent coefficients are standard and not recalled here. We define two sets of grid points in real and Fourier spaces by
\begin{eqnarray*}
\mathcal{D}^{(x)}_{N}=\big\{{\boldsymbol x}_{k} &:=& {\boldsymbol x}_{k_1,k_2,k_3}=(x^{1}_{k_1},x^{2}_{k_2},x^{3}_{k_3})\big\}_{k \in \mathcal{O}^{(x)}_{N}}, \\
%
\mathcal{D}^{(\xi)}_{N}=\big\{{\boldsymbol \xi}_{p} &:=& {\boldsymbol \xi}_{p_1,p_2,p_3}=(\xi^{1}_{p_1},\xi^{2}_{p_2},\xi^{3}_{p_3})\big\}_{p \in \mathcal{O}^{(\xi)}_{N}},
\end{eqnarray*}
where $N$, $k$ and $p$ are multi-indices, $N:= (N_1,N_2,N_3)$, $k=(k_{1},k_{2},k_{3})$ and $p=(p_{1},p_{2},p_{3})$, with $N_i \in 2\N^*$, and with
\begin{eqnarray*}
\mathcal{O}^{(x)}_{N} &=&\bigg\{ k \in \N^3 \,:\, \left(k_{i}=0,\cdots,N_i-1 \right)_{i=1,2,3} \bigg\}, \\
%
\mathcal{O}^{(\xi)}_{N} &=& \bigg\{ p \in \Z^3 \,:\, \left(p_{i}=-\frac{N_{i}}{2},\cdots,\frac{N_{i}}{2}-1 \right)_{i=1,2,3}\bigg\}.
\end{eqnarray*}
The set $\mathcal{D}^{(x)}_{N}$ defines a mesh with equidistant point positions in each dimension with sizes (for $i=1,2,3$)
\begin{equation*}
\displaystyle x^{i}_{k_i+1}-x^{i}_{k_i}=h_i=2a_i/N_i \, .
\end{equation*}
One can deduce that the discrete wavenumbers in Fourier space are given by (for $i=1,2,3$)
\begin{eqnarray*}
\xi^{i}_{p_{i}}&=&p_{i}\pi/a_i \, .
\end{eqnarray*}
The wave function $\psi(t,\boldsymbol{x})$ is discretized spatially by a projection onto the spatial mesh while $\widetilde{\psi}(t,\boldsymbol{\xi})$ is discretized on the momentum mesh. Thus, we denote by $\psi^{n}_{k}$ the approximate wavefunction at time $t_{n}$ and position ${\boldsymbol x}_k$ and by $\widetilde{\psi}^{n}_{p}$ the wave function in momentum space at time $t_{n}$ and momentum $\boldsymbol{\xi}_{p}$. The discrete wave functions $\psi^{n}_{k}$ and $\widetilde{\psi}^{n}_{p}$ are related by the discrete Fourier transform pair:
\begin{eqnarray*}
\widetilde{\psi}^{n}_{p} = \displaystyle \sum_{k=0}^{N-1}\psi^{n}_{k}e^{-{\tt i}\boldsymbol{\xi}_{p} \cdot (\boldsymbol{x}_{k}+\boldsymbol{a})}
\;\;,\;\;
\psi^{n}_{k} = \displaystyle \frac{1}{N} \sum_{p=-N/2}^{N/2-1}\widetilde{\psi}^{n}_{p}e^{{\tt i}\boldsymbol{\xi}_{p} \cdot (\boldsymbol{x}_{k}+\boldsymbol{a})} \, ,
\end{eqnarray*}
where $\boldsymbol{a}=(a_1,a_2,a_3)$. Armed with this notation, we can write the partial discrete Fourier coefficients in each dimension as, for $i=1,2,3$:
\begin{eqnarray*}
\widetilde{\psi}^{n}_{k|k_{i} \rightarrow p_{i}} &=&
\boldsymbol{\mathcal{F}}_{i}(\psi^{n}_{k})
= \sum_{k_i=0}^{N_i-1}\psi^{n}_{k}e^{-{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i}+a_i)} \\
%
\psi^{n}_{k} &=&
\boldsymbol{\mathcal{F}}^{-1}_{i} (\widetilde{\psi}^{n}_{k|k_{i} \rightarrow p_{i}})
=
\cfrac{1}{N_i}\sum_{p_{i}=-N_i/2}^{N_i/2-1}\widetilde{\psi}^{n}_{k|k_{i} \rightarrow p_{i}}e^{{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i}+a_i)},
\end{eqnarray*}
where the notation $k|k_{i} \rightarrow p_{i}$ means that the index $k_{i}$ in the set $k$ is replaced by the index $p_{i}$ and where the partial discrete Fourier transform operator in the $i$ coordinate is denoted by $\boldsymbol{\mathcal{F}}_{i}(\cdot)$.
In order to approximate the partial derivatives, we use pseudospectral approximations of the pseudodifferential representation of the derivative operators. This leads to the following approximate first-order partial derivatives:
\begin{eqnarray}\label{trick}
\partial_{i}\psi(t_n,{\boldsymbol x}_{k}) \approx
\big\{[[\partial_i]]\psi^n\big\}_{k} :=
\cfrac{1}{N_i}\sum_{p_{i}=-N_i/2}^{N_i/2-1}{\tt i}\xi^{i}_{p_{i}}\widetilde{\psi}^{n}_{k|k_{i} \rightarrow p_{i}}e^{{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i}+a_i)}.
\end{eqnarray}
This is the spectral representation of the derivative which, under standard smoothness assumptions on the wave function, has spectral accuracy \cite{q1,GOTTLIEB200183}. Another representation of the spectral derivative, in terms of the differentiation matrix, can be found in \ref{app:spec}. In the next sections, we will exploit these relations to obtain accurate numerical schemes. This approach not only allows the spatial steps to be chosen as large as desired, but it also preserves the very high spatial accuracy, the parallel computing structure and the scalability of the split method developed in \cite{cpc2012}. In practice, we use the FFT to implement the Discrete Fourier Transform (DFT). This strategy is now combined with the two time discretizations proposed in the last section to solve the Dirac equation in curved space.
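A one-dimensional sketch of the discrete derivative \eqref{trick}, with the wavenumbers $\xi_{p}=p\pi/a$ of the truncated domain $[-a,a)$, reads:

```python
import numpy as np

# 1D pseudospectral derivative [[d/dx]] on a uniform periodic grid of [-a, a):
# forward FFT, multiplication by i*xi_p with xi_p = p*pi/a, inverse FFT.
def spectral_derivative(f, a):
    N = f.size
    p = np.fft.fftfreq(N, d=1.0 / N)          # integer indices -N/2..N/2-1
    return np.fft.ifft(1j * (p * np.pi / a) * np.fft.fft(f))

a = np.pi
x = -a + 2 * a * np.arange(64) / 64
df = spectral_derivative(np.sin(x), a)        # cos(x) to machine precision
```

For smooth periodic data the error decays faster than any power of the mesh size, which is the spectral accuracy invoked above; each evaluation costs $\mathcal{O}(N\log N)$ via the FFT.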
\subsection{Numerical scheme I: Crank-Nicolson scheme}\label{subsec:CN}
A Crank-Nicolson scheme is obtained by discretizing \eqref{eq:op_split_cn} and by using the spectral operator for computing the derivative. The main idea then consists in approximating $\mathcal{F}^{-1}_{\boldsymbol{x}} \big\{ {\tt i}\boldsymbol{\xi} \mathcal{F}_{\boldsymbol{x}}\{ \psi \} (t,\boldsymbol{\xi}) \big\}$ by using the discrete operator $[[\nabla]]$ defined in \eqref{trick}. This yields
\begin{eqnarray}
\label{eq:cn}
\left.
\begin{array}{rcll}
\psi^{n+1/2}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n}_{k}, & t\in [t_n,t_{n+1/2}], \\
G_{k} \psi^{n^*}_{k} & = & \widetilde{G}_{k} \psi^{n+1/2}_{k}, & t\in [t_n,t_{n^*}], \\
\psi^{n+1}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n^*}_{k}, & t\in [t_{n+1/2},t_{n+1}],
\end{array}
\right.
\end{eqnarray}
where we defined $\boldsymbol{\alpha}_{k}=\boldsymbol{\alpha}({\boldsymbol x}_{k})$, $ \beta_{k}=\beta({\boldsymbol x}_{k})$ and $F^{n}_{k} = F(t_{n}, {\boldsymbol x}_{k})$. We also introduced the operators
\begin{eqnarray}
G_{k} &:=& \mathbb{I}_{4} + \frac{\Delta t}{2} \boldsymbol{\alpha}_{k} \cdot [[\nabla]], \\
\widetilde{G}_{k} &:=& \mathbb{I}_{4} - \frac{\Delta t}{2} \boldsymbol{\alpha}_{k} \cdot [[\nabla]] ,
\end{eqnarray}
for convenience. An explicit example of this procedure for a given 2D metric can be found in \ref{app:cn_2d}.
Then, the numerical solution can be obtained by implementing the following algorithm.
From time $t_n$ to $t_{n+1}$, and adding the PML operator ${\boldsymbol T}$ (see \eqref{eq:hamiltonian_pml}), the three-step scheme \eqref{eq:cn} explicitly reads:
\begin{itemize}
\item {\it Step 1.} From $t_n$ to $t_{n+1/2}$ such that $t_{n+1/2} = t_n + \Delta t/2$, with initial data $\psi^{n}_{k}$:
\begin{eqnarray}\label{step1}
\psi^{n+1/2}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n}_{k}.
\end{eqnarray}
%
To perform this step, an exact or approximate expression of the exponential operator is required. This can be achieved in different ways, depending on the chosen metric and the form of $\beta$ and $F$. The first thing to note here is that both $\beta$ and $F$ are 4-by-4 matrices. One possibility is then to use one of the numerical techniques described in \cite{doi:10.1137/S00361445024180} to compute the matrix exponential.
\item {\it Step 2.} The second step, given by
%
\begin{eqnarray}
G_{k} \psi^{n^*}_{k} & = & \widetilde{G}_{k} \psi^{n+1/2}_{k},
\end{eqnarray}
%
can be written as a linear system of equations, by using the discrete pseudospectral representation of the derivative. Naively, one would construct the matrix $G_{k}$ explicitly, as done in \ref{APXA}, and solve the corresponding linear system. However, this procedure has the same computational complexity as the evaluation of convolution products. The computational efficiency of this step can be improved significantly by using a Krylov subspace solver (GMRES, conjugate gradient). This follows a technique developed before, and we refer the reader to \cite{AntoineGeuzaineTang} for more details.
\item {\it Step 3.} The third step is given by
%
\begin{eqnarray}\label{step3}
\psi^{n+1}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n^*}_{k}.
\end{eqnarray}
%
The same technique as in Step 1 can be used to evaluate the matrix exponential.
\end{itemize}
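A matrix-free realization of Step 2 can be sketched as follows in one dimension, for a 2-component spinor with $\boldsymbol{\alpha}(\boldsymbol{x})$ replaced by $a(x)\sigma_x$ (the grid, time step and metric factor $a(x)$ are illustrative assumptions): the action of $G$ is evaluated with FFTs and the system is solved by GMRES without ever assembling the matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Matrix-free Step 2 in 1D: solve G psi* = G~ psi with
# G  = I + dt/2 a(x) sigma_x [[d/dx]]  and  G~ = I - dt/2 a(x) sigma_x [[d/dx]].
N, a_dom, dt = 128, 10.0, 0.01
x = -a_dom + 2 * a_dom * np.arange(N) / N
ax = 1.0 + 0.1 * np.exp(-x**2)                 # hypothetical metric factor a^1(x)
xi = (np.pi / a_dom) * np.fft.fftfreq(N, d=1.0 / N)

def alpha_grad(v):                             # applies a(x) sigma_x [[d/dx]]
    u = v.reshape(2, N)
    du = np.fft.ifft(1j * xi * np.fft.fft(u, axis=1), axis=1)
    return np.concatenate([ax * du[1], ax * du[0]])   # sigma_x swaps components

G = LinearOperator((2 * N, 2 * N), dtype=complex,
                   matvec=lambda v: v + 0.5 * dt * alpha_grad(v))
psi = np.concatenate([np.exp(-x**2) + 0j, np.zeros(N, complex)])
rhs = psi - 0.5 * dt * alpha_grad(psi)
psi_star, info = gmres(G, rhs)                 # info == 0 on convergence
```

Each matrix-vector product costs $\mathcal{O}(N\log N)$ through the FFT, and for small $\Delta t$ the operator is a small perturbation of the identity, so GMRES converges in a few iterations.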
In flat space, it is well known that the $\ell^2-$norm of the 4-spinor must be preserved, while in static curved space, the $\ell_{\gamma}^2-$norm is preserved.
\begin{prop}
Assume that $(\alpha^{i}_{k})_{i=1,2,3}$ are hermitian, and that $F$ is a bounded function. The numerical scheme \eqref{step1}-\eqref{step3} is unconditionally $\ell^2$-stable, and preserves the $\ell^2-$norm in flat space.
\end{prop}
{\bf Proof.} The proof is straightforward, as it mainly relies on i) the hermiticity of the Dirac matrices, $(\alpha^{i}_{k})^{\dagger} = \alpha^{i}_{k}$, with $i=1,2,3$, for steps \eqref{step1} and \eqref{step3}, and ii) the definition of $G$ and $\widetilde{G}$. Using the matrix representation of the derivative given in \ref{app:spec}, we start from the hermitian transpose of the differentiation matrices:
\begin{eqnarray}
\overline{A}^{i}_{k_{i} k'_{i}} &=&
-\cfrac{1}{N_i}
\sum_{p_{i}=-N_i/2}^{N_i/2-1}
{\tt i}\xi^{i}_{p_{i}}e^{-{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i} - x^{i}_{k'_i})} \, .
\end{eqnarray}
Substituting $p_{i}\rightarrow -p_{i}$ and using the fact that $\xi^{i}_{-p_{i}} = -\xi^{i}_{p_{i}}$, we obtain
\begin{eqnarray}
\overline{A}^{i}_{k_{i} k'_{i}} &=&
-\cfrac{1}{N_i}
\sum_{-p_{i}=-N_i/2}^{N_i/2-1}
{\tt i}\xi^{i}_{-p_{i}}e^{-{\tt i}\xi^{i}_{-p_{i}}(x^{i}_{k_i} - x^{i}_{k'_i})} \, , \\
&=&
\cfrac{1}{N_i}
\sum_{-p_{i}=-N_i/2}^{N_i/2-1}
{\tt i}\xi^{i}_{p_{i}}e^{{\tt i}\xi^{i}_{p_{i}}(x^{i}_{k_i} - x^{i}_{k'_i})} \, .
\end{eqnarray}
This implies that, by construction: $\overline{[[\nabla ]]} = -[[\nabla]]$, and as a consequence
\begin{eqnarray*}
\left.
\begin{array}{lcl}
\widetilde{G}^{\dagger} & = & \mathbb{I} - (\Delta t/2) \big({\boldsymbol \alpha}\cdot [[ \nabla ]]\big)^{\dagger} \\
& = & \mathbb{I} + (\Delta t/2) {\boldsymbol \alpha}\cdot [[ \nabla ]] .
\end{array}
\right.
\end{eqnarray*}
As the coefficients of the Dirac equation are bounded in space and time, we trivially conclude the $\ell^2$-stability. In flat space, ${\boldsymbol \alpha}$ is constant and $F$ is purely real, so the $\ell^2-$norm is trivially preserved (Steps 1 to 3 are unitary). $\Box$
\subsection{Numerical scheme II: polynomial scheme}
Polynomial schemes are obtained using the same procedure as for the Crank-Nicolson method, i.e. by discretizing \eqref{eq:op_split_poly} and by using the spectral operator for computing the derivative. Again, we approximate $\mathcal{F}^{-1}_{\boldsymbol{x}} \big\{ {\tt i} \boldsymbol{\xi} \mathcal{F}_{\boldsymbol{x}}\{ \psi \} (t,\boldsymbol{\xi}) \big\}$ by using the discrete operator $[[\nabla]]$ defined in \eqref{trick}. This yields
\begin{eqnarray}
\label{eq:poly}
\left.
\begin{array}{rcll}
\psi^{n+1/2}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n}_{k}, & t\in [t_n,t_{n+1/2}], \\
\psi^{n^*}_{k} & = & \sum_{q=0}^{N_{\mathrm{p}}} a_{q} P_{q}(-\Delta t \boldsymbol{\alpha}_{k} \cdot [[\nabla]]) \psi^{n+1/2}_{k}, & t\in [t_n,t_{n^*}], \\
\psi^{n+1}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n^*}_{k}, & t\in [t_{n+1/2},t_{n+1}].
\end{array}
\right.
\end{eqnarray}
The first and third steps are exactly the same as in the Crank-Nicolson scheme (see \eqref{eq:cn}) and thus are not discussed here. The second step, on the other hand, is different: it does not require the solution of a linear system. The main challenge lies in computing powers of the operator $[[\nabla]]$. This is performed by using FFTs, where the number of FFT pairs is given by the order of the polynomial.
Every scheme of this form is explicit and thus is \textit{a priori} at best conditionally stable. As a matter of fact, numerical experiments often show an instability of the numerical solution. Moreover, the Crank-Nicolson scheme derived above requires the solution of a large linear system at each time iteration. In order to improve the efficiency while keeping a reasonable accuracy, we propose a {\it polynomial scheme} with directional splitting. In the following, in order to simplify the presentation, we will assume that $\boldsymbol{\alpha}(\boldsymbol{x})$ is of the form
\begin{eqnarray}
\label{eq:alpha_form}
{\boldsymbol \alpha}({\boldsymbol x}) & = & {\bf a}({\boldsymbol x})\cdot {\boldsymbol \alpha},
\end{eqnarray}
where ${\bf a}({\boldsymbol x}):=(a^{1}({\boldsymbol x}),a^{2}({\boldsymbol x}),a^{3}({\boldsymbol x}))^T$ and where $(a^{i})_{i=1,\cdots,3}$ are space dependent scalar functions. Using directional splitting and the form \eqref{eq:alpha_form} for the matrices, the scheme \eqref{eq:poly} is slightly modified to
\begin{eqnarray}
\label{eq:poly_mod}
\left.
\begin{array}{rcll}
\psi^{n+1/2}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n}_{k}, & t\in [t_n,t_{n+1/2}], \\
\psi^{n^{*}_{1}}_{k} & = & \sum_{q=0}^{N_{\mathrm{p}}} a_{q} P_{q}(-\Delta t a^{1}_{k}\alpha^{1} [[\partial_{1}]]) \psi^{n+1/2}_{k}, & t\in [t_n,t_{n^*}], \\
\psi^{n^{*}_{2}}_{k} & = & \sum_{q=0}^{N_{\mathrm{p}}} a_{q} P_{q}(-\Delta t a^{2}_{k}\alpha^{2} [[\partial_{2}]]) \psi^{n^{*}_{1}}_{k}, & t\in [t_n,t_{n^*}], \\
\psi^{n^{*}_{3}}_{k} & = & \sum_{q=0}^{N_{\mathrm{p}}} a_{q} P_{q}(-\Delta t a^{3}_{k}\alpha^{3} [[\partial_{3}]]) \psi^{n^{*}_{2}}_{k}, & t\in [t_n,t_{n^*}], \\
\psi^{n+1}_{k} & = & e^{-{\tt i} \frac{\Delta t}{2} \left[\beta_{k} m + F^{n+1/2}_{k} \right]} \psi^{n^{*}_{3}}_{k}, & t\in [t_{n+1/2},t_{n+1}].
\end{array}
\right.
\end{eqnarray}
Let us remark that PMLs can easily be included by simply replacing $a^{i}({\boldsymbol x})$ by $a^{i}({\boldsymbol x})/S^{i}(x^{i})$, for $i=1,2,3$, in the scheme above. Steps 1 and 5 are identical to those of {\bf Numerical Scheme I} (Subsection \ref{subsec:CN}). The principle of the second step of {\bf Numerical Scheme II} consists in approximating the evolution in each direction by a Taylor expansion and by diagonalizing the Dirac matrix. We denote by $\Lambda=\textrm{diag}(1,1,-1,-1)$ and $\Pi^{i}$ the transition matrices, such that $\alpha^{i} = \Pi^{i}\Lambda \Pi^{i,\dagger}$, for $i=1,2,3$. In addition, in the $i$-direction, we set $\phi^{n}_{k}=\Pi^{i,\dagger}\psi^{n}_{k}$.
Then, the time evolution is approximated as follows (with $n^{*}_{0} = n+1/2$):
\begin{eqnarray}\label{rem2}
\left.
\begin{array}{lcl}
\phi^{n^{*}_{i}}_{k} & = & \phi^{n^{*}_{i-1}}_{k} -\Delta ta^{i}_{k} \Lambda ([[\partial_{i}]]\phi^{n^{*}_{i-1}})_{k} + \mathcal{O}(\Delta t^{2}),\\
& = & a^{i}_{k} \boldsymbol{\mathcal{F}}_{i}^{-1}\big[\big(1-{\tt i} \Delta t \Lambda \xi^{i}\big) \boldsymbol{\mathcal{F}}_i(\phi^{n^{*}_{i-1}}_{k})\big] + \big(1- a^{i}_{k} \big)\phi^{n^{*}_{i-1}}_{k} + \mathcal{O}(\Delta t^{2}) \, .
\end{array}
\right.
\end{eqnarray}
The first line of this equation is a polynomial Taylor scheme, which is usually unstable, while the second line is just a rewriting of the first. Stability is recovered when the first term on the right-hand side is approximated by an exponential, as $1-{\tt i} \Delta t \Lambda \xi^{i} = e^{-{\tt i} \Delta t \Lambda \xi^{i}} + \mathcal{O}(\Delta t^{2})$, which is accurate to second order in time. Then, the scheme reads
\begin{eqnarray}\label{eq:second_scheme}
\left.
\begin{array}{lcl}
\phi^{n^{*}_{i}}_{k}
& = & a^{i}_{k} \boldsymbol{\mathcal{F}}_{i}^{-1}\big[e^{-{\tt i} \Delta t \Lambda \xi^{i}} \boldsymbol{\mathcal{F}}_i(\phi^{n^{*}_{i-1}}_{k})\big] + \big(1- a^{i}_{k} \big)\phi^{n^{*}_{i-1}}_{k} \, .
\end{array}
\right.
\end{eqnarray}
Finally, to recover the wave function, we set $\psi^{n^{*}_{i}}_{k}=\Pi^{i}\phi^{n^{*}_{i}}_{k}$. We proceed similarly in the other directions. As it is not totally obvious, we next prove the consistency of {\bf Numerical Scheme II}.
\begin{prop}
The {\bf Numerical Scheme II} is consistent with \eqref{eq:dirac_eq2}.
\end{prop}
{\bf Proof.} The analysis of the consistency only requires a focus on one of the steps 2-4, as the other steps are similar or standard. From
\begin{eqnarray*}
e^{-{\tt i} \Delta t \xi^{i}_p \Lambda} & = & \mathbb{I}_{4} - {\tt i} \Delta t \xi^{i}_p \Lambda - \cfrac{\Delta t^2}{2} (\xi^{i}_p)^2 \Lambda^2 + \mathcal{O}(\Delta t^3) \, ,
\end{eqnarray*}
we can write the first term of \eqref{eq:second_scheme} as
\begin{eqnarray*}\label{eq:second_scheme_bis}
\Xi^{n^{*}_{i-1}}_{k}
& := & \boldsymbol{\mathcal{F}}_{i}^{-1}\big[\Pi^{i}\big(\mathbb{I}_4 - {\tt i} \Delta t \xi^{i} \Lambda - \cfrac{\Delta t^2}{2} (\xi^{i})^2 \Lambda^2 \big)\Pi^{i,\dagger} \boldsymbol{\mathcal{F}}_i(\psi^{n^{*}_{i-1}}_{k})\big] + \mathcal{O}(\Delta t^{3}) , \\
&=& \psi^{n^{*}_{i-1}}_{k} - \Delta t \alpha^{i} \partial_{i} \psi^{n^{*}_{i-1}}_{k}
+ \cfrac{\Delta t^2}{2} \mathbb{I}_{4} \partial_{i}^{2} \psi^{n^{*}_{i-1}}_{k} + \mathcal{O}(\Delta t^{3}) ,
\end{eqnarray*}
since $\Pi^{i}\Lambda\Pi^{i,\dagger} = \alpha^{i}$. Thus, \eqref{eq:second_scheme} is written as
\begin{eqnarray}
\label{eq:scheme_with_xi}
\psi^{n^{*}_{i}}_{k}
& = & a^{i}(\boldsymbol{x}_{k}) \Xi^{n^{*}_{i-1}}_{k} + \big(1- a^{i}(\boldsymbol{x}_{k}) \big)\psi^{n^{*}_{i-1}}_{k} \\
& = & \psi^{n^{*}_{i-1}}_{k} - a^{i}(\boldsymbol{x}_{k}) \Delta t \alpha^{i} \partial_{i} \psi^{n^{*}_{i-1}}_{k}
+ a^{i}(\boldsymbol{x}_{k}) \cfrac{\Delta t^2}{2} \mathbb{I}_{4} \partial_{i}^{2} \psi^{n^{*}_{i-1}}_{k} + \mathcal{O}(\Delta t^{3}),
\end{eqnarray}
which is equivalent to
\begin{eqnarray}\label{LW}
{\tt i} \cfrac{\psi^{n^{*}_{i}}_{k} - \psi^{n^{*}_{i-1}}_{k}}{\Delta t}
& = & - {\tt i} a^{i}(\boldsymbol{x}_{k}) \alpha^{i} \partial_{i} \psi^{n^{*}_{i-1}}_{k}
+ {\tt i} a^{i}(\boldsymbol{x}_{k}) \cfrac{\Delta t}{2} \mathbb{I}_{4} \partial_{i}^{2} \psi^{n^{*}_{i-1}}_{k} + \mathcal{O}(\Delta t^{2}).
\end{eqnarray}
We have thus proven that the scheme is consistent at order $1$ in time with
\begin{eqnarray*}
{\tt i}\partial_{t} \psi(t,{\boldsymbol x}) &=&-{\tt i} a^{i}({\boldsymbol x}) \alpha^{i} \partial_{i} \psi(t,{\boldsymbol x}),
\end{eqnarray*}
and this transport-like equation corresponds to steps 2 to 4 of the operator splitting. $\Box$
\\
\\
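To fix ideas, the directional update \eqref{eq:second_scheme} can be sketched in one dimension for a 2-component spinor with $\Lambda=\textrm{diag}(1,-1)$ (an illustrative setting; in 3-D the same operation is applied direction by direction in the diagonalized basis):

```python
import numpy as np

# One directional step:
#   phi -> a(x) F^{-1}[exp(-i dt Lambda xi) F phi] + (1 - a(x)) phi,
# with Lambda = diag(1, -1) acting on a field phi of shape (2, N).
def directional_step(phi, ax, xi, dt):
    ph = np.fft.fft(phi, axis=1)
    ph[0] *= np.exp(-1j * dt * xi)            # Lambda eigenvalue +1
    ph[1] *= np.exp(+1j * dt * xi)            # Lambda eigenvalue -1
    prop = np.fft.ifft(ph, axis=1)
    return ax * prop + (1.0 - ax) * phi
```

For $a\equiv 1$ the step reduces to two exact translations in opposite directions, and for $0\le a\le 1$ each update is a convex combination of two fields of equal norm, which is the mechanism behind the stability result below.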
We now provide a stability result.
\begin{prop}
Let us assume that $ 0 \leq \sup_{{\boldsymbol x}}a^{i}(\boldsymbol{x}) \leq C$ with $i=1,2,3$.
Then, the {\bf Numerical Scheme II} approximating \eqref{eq:dirac_eq2}, which is assumed to be well-posed, is unconditionally $\ell^2-$stable.
\end{prop}
{\bf Proof.} Again the proof relies on Step 2. Let us focus on one of the directions and assume first that $C\leq 1$. As $\Pi^{i}$ is unitary, we trivially get $\big|\Xi_{k}^{n} \big|^2_{2} = \big|\psi_{k}^{n}\big|^2_{2}$, where $|\cdot|_2$ denotes the $\ell^2-$norm in $\C^4$ on spinor components. Consequently, this yields
\begin{eqnarray*}
\left.
\begin{array}{lclcl}
\displaystyle \| {\boldsymbol \Xi}_{k}^{n} \|_{2} & := & \displaystyle \Big(h_1h_2h_3\sum_{k=0}^{N} \big|\Xi_{k}^{n}\big|_2^2 \Big)^{1/2} & = & \|{\boldsymbol \psi}_{k}^{n}\|_{2} \, .
\end{array}
\right.
\end{eqnarray*}
Moreover, denoting $\mathcal{R}\{z\}$ (resp. $\mathcal{I}\{z\}$) the real (resp. imaginary) part of $z$ and using \eqref{eq:scheme_with_xi}, we get
\begin{eqnarray}
\big|\psi^{n^{*}_{i}}_{k}\big|_{2}^{2}
& = & [a^{i}(\boldsymbol{x}_{k})]^{2} \big|\Xi^{n^{*}_{i-1}}_{k}\big|_{2}^{2} + \big(1- a^{i}(\boldsymbol{x}_{k}) \big)^{2}\big|\psi^{n^{*}_{i-1}}_{k}\big|_{2}^{2} \nonumber \\
&&+ 2a^{i}(\boldsymbol{x}_{k})\big(1- a^{i}(\boldsymbol{x}_{k}) \big)
\left[ \mathcal{R}\{\Xi^{n^{*}_{i-1}}_{k}\} \mathcal{R}\{\psi^{n^{*}_{i-1}}_{k}\} + \mathcal{I}\{\Xi^{n^{*}_{i-1}}_{k}\} \mathcal{I}\{\psi^{n^{*}_{i-1}}_{k}\} \right], \nonumber \\
\label{interm}
& \leq & [a^{i}(\boldsymbol{x}_{k})]^{2} \big|\Xi^{n^{*}_{i-1}}_{k}\big|_{2}^{2} + \big(1- a^{i}(\boldsymbol{x}_{k}) \big)^{2}\big|\psi^{n^{*}_{i-1}}_{k}\big|_{2}^{2} \nonumber \\
&&+ 2\big|a^{i}(\boldsymbol{x}_{k})\big|\big|1- a^{i}(\boldsymbol{x}_{k})| \big|\Xi^{n^{*}_{i-1}}_{k}\big|_{2}\big|\psi^{n^{*}_{i-1}}_{k}\big|_{2},
\end{eqnarray}
where the Cauchy-Schwarz inequality was used to obtain \eqref{interm}.
Expanding \eqref{interm} and using $\big|\Xi^{n^{*}_{i-1}}_{k} \big|^2_{2} = \big|\psi^{n^{*}_{i-1}}_{k} \big|^2_{2}$, we easily obtain the inequality: $\big|\psi^{n^{*}_{i}}_{k} \big|^2_2 \leq \big|\psi^{n^{*}_{i-1}}_{k} \big|^2_2$. We deduce that $\| \boldsymbol{\psi}^{n^{*}_{i}}_{k} \|_{2} \leq \|\boldsymbol{\psi}^{n^{*}_{i-1}}_{k}\|_{2}$, for $i=1,2,3$.
Thus, we obtain: $\| {\boldsymbol \psi}_{k}^{n^*} \|_{2} \leq \|{\boldsymbol \psi}_{k}^{n}\|_{2}$. The stability analysis from Steps 1 and 5 is straightforward, hence leading to
\begin{eqnarray*}
\left.
\begin{array}{lcl}
\| {\boldsymbol \psi}_{k}^{n+1} \|_{2} & \leq & \|{\boldsymbol \psi}_{k}^{n}\|_{2} \, .
\end{array}
\right.
\end{eqnarray*}
This concludes the proof for $C\leq 1$. For $C>1$, we proceed similarly, and conclude with Gronwall's inequality. $\Box$
\\
\\
We can extend the above result to a second-order time scheme, by replacing the second step \eqref{eq:scheme_with_xi} by
\begin{eqnarray}
\label{step2bis}
\psi^{n^{*}_{i}}_{k}
& = & a^{i}(\boldsymbol{x}_{k}) \Xi^{n^{*}_{i-1}}_{k} + \big(1- a^{i}(\boldsymbol{x}_{k}) \big)\psi^{n^{*}_{i-1}}_{k} + a^{i}(\boldsymbol{x}_{k}) \Delta t^2 \mathbb{I}_{4} [[\partial_{i}^{2}]]\Xi^{n^{*}_{i-1}}_{k} \, .
\end{eqnarray}
The second order in time is obtained thanks to the addition of the rightmost term to the scheme \eqref{eq:scheme_with_xi}. Stability results from the implicit treatment of the correction (anti-diffusion) term $a^{i}(\boldsymbol{x}_{k}) \Delta t^2 \mathbb{I}_{4} [[\partial_{i}^{2}]]\psi$.
\subsection{Computational complexity analysis}
This paragraph is dedicated to the analysis of the computational complexity of the presented methods. In particular, we compare i) the direct implicit method based on the direct application of the FFT to the equation, involving spatial convolution products, ii) the Crank-Nicolson scheme, and iii) the polynomial scheme. On an $N$-point grid, each {\it time iteration} requires the following operations:
\begin{itemize}
\item The implicit direct method requires $\mathcal{O}(N^2 + N^{\nu})$ operations, with $\nu>1$. The first term is due to the approximation of the convolution products by standard quadrature rules, and the second term comes from the numerical computation of the solution to the linear system.
\item The Crank-Nicolson scheme needs $\mathcal{O}(N\log N + N^{\nu})$ operations, for $\nu>1$. The first term is related
to the approximation of the convolution products by FFT-method, while the second one comes from the solution to the linear system.
\item The polynomial scheme implies $\mathcal{O}(N\log N)$ operations since the convolution products are computed by an
FFT-method, the rest of the scheme being linear.
\end{itemize}
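The gap between the $\mathcal{O}(N^2)$ quadrature cost and the $\mathcal{O}(N\log N)$ FFT cost for the convolution products can be illustrated with a short numpy sketch (illustrative only; the variable names are ours). Both evaluations of a circular convolution agree to round-off, but only the second scales as $N\log N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
u = rng.standard_normal(N)
v = rng.standard_normal(N)

# Circular convolution by direct quadrature: O(N^2) operations
direct = np.array([sum(u[m] * v[(n - m) % N] for m in range(N))
                   for n in range(N)])

# The same convolution computed through the FFT: O(N log N) operations
via_fft = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

max_error = np.max(np.abs(direct - via_fft))  # agreement to round-off
```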
\section{Numerical experiment for the curved-space Dirac equation}\label{sec:numerics}
Several specific examples will be considered, which basically correspond to different metrics. The objective is to demonstrate how simple and efficient the proposed methodology is. We will start with simple one-dimensional tests, then consider more elaborate two-dimensional physical configurations. In 1-D and 2-D, the Dirac equation in curved space is slightly modified compared to 3-D; in particular, the Dirac matrices are 2-by-2 instead of 4-by-4. The numerical schemes described previously can be straightforwardly adapted to these cases.
\subsection{Static spacetime} We consider the metric $ds^2=e^{2\Phi(x)}dt^2-e^{2\Psi(x)}dx^2$, such that $\Phi$ and $\Psi$ are two space-dependent functions. This leads to the following one-dimensional Dirac equation \cite{Koke}
\begin{eqnarray*}
{\tt i}\partial_t\psi & = & -{\tt i}e^{\Phi(x)-\Psi(x)}\sigma_x\Big(\partial_x + \cfrac{\Phi'(x)}{2}\Big)\psi + e^{\Phi}\sigma_zm\psi \, .
\end{eqnarray*}
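To make the structure of this equation concrete, the right-hand side can be evaluated pseudospectrally as in the following numpy sketch. This is not the paper's implicit scheme, only a hedged illustration of how $\partial_x$ and $\Phi'(x)$ are computed by FFT on a uniform periodic grid; the function name and array layout are ours.

```python
import numpy as np

def dirac_rhs(psi, x, Phi, Psi, m):
    """Time derivative of the 2-spinor psi (shape (2, N)) for the 1-D
    static-spacetime Dirac equation above, with d/dx and Phi'(x)
    evaluated pseudospectrally. Phi and Psi are callables; the grid x
    is uniform and periodic."""
    n = x.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])     # i * wavenumber
    d_phi = np.real(np.fft.ifft(k * np.fft.fft(Phi(x))))  # Phi'(x)
    d_psi = np.fft.ifft(k * np.fft.fft(psi, axis=1), axis=1)
    sx_term = (d_psi + 0.5 * d_phi * psi)[::-1]           # sigma_x swaps rows
    sz_psi = np.array([[1.0], [-1.0]]) * psi              # sigma_z psi
    return -np.exp(Phi(x) - Psi(x)) * sx_term - 1j * m * np.exp(Phi(x)) * sz_psi
```

In the flat massless limit ($\Phi=\Psi=0$, $m=0$) this reduces to $\partial_t\psi=-\sigma_x\partial_x\psi$, which provides a simple consistency check.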
\noindent{\bf Numerical Experiment 1.}
We propose a benchmark with $\Psi(x)=e^{-10^{-2}x^2}$, $\Phi(x)=e^{-5\times 10^{-3}x^2}$, and initial data
$\phi_0(x)=\exp(-x^2/2+{\tt i}k_0x)$, where $k_0=5$ and $c=1$. The computational domain is $[-5,5]$,
while the discretization parameters are $\Delta t=5 \times 10^{-4}$ and $N_1=18027$. We compare the proposed method with a real-space method at $\mathrm{CFL}=0.99$, which degenerates into the Quantum-Boltzmann method for flat space \cite{cpc2012}. The second-order splitting implicit pseudospectral method ({\bf Numerical Scheme I}) is implemented using GMRES \cite{saad}.
We plot $x \mapsto \exp(\Phi(x)-\Psi(x))$, corresponding to a velocity field, in Fig. \ref{figvelo} (Left). We report the real and imaginary parts of the first component $\psi_1$ in Fig. \ref{figvelo} (Middle, Right) at time $T=0.5$, corresponding to $1000$ time iterations.
\begin{figure}
\begin{center}
\includegraphics[height=4cm,keepaspectratio]{1d/exp_phi_psi.pdf}
\includegraphics[height=4cm,keepaspectratio]{1d/Rephi1_1d.pdf}
\includegraphics[height=4cm,keepaspectratio]{1d/Imphi1_1d.pdf}
\end{center}
\caption{{\bf Experiment 1.} (Left) Velocity field $x \mapsto \exp\big(\Phi(x)-\Psi(x)\big)$. (Middle) Real part of $\psi_1(\cdot,T)$ with real space and pseudospectral methods. (Right) Imaginary part of $\psi_1(\cdot,T)$ with real space and pseudospectral methods. The
final time is $T=0.5$.}
\label{figvelo}
\end{figure}
Unlike the real-space method at $\mathrm{CFL}=1$, the pseudospectral method is {\it linearly stable}. As an illustration, we compare in Fig. \ref{figveloB} (Left, Middle) the real and imaginary parts of $\psi_1$ on a coarse grid ($h=1.1\times 10^{-2}$, $\Delta t=10^{-2}$) and on a fine grid ($h=5.5\times 10^{-4}$, $\Delta t=5 \times 10^{-4}$). Finally, in Fig. \ref{figveloB} (Right), we report in logscale the $\ell^2-$norm of the error ($\|\psi_{h}(T,\cdot)-\psi_{\textrm{ref},h}(T,\cdot)\|_{\ell^2}$) as a function of the mesh size $h$.\\
\begin{figure}
\begin{center}
\includegraphics[height=4cm,keepaspectratio]{1d/comp_Rephi1_1d.pdf}
\includegraphics[height=4cm,keepaspectratio]{1d/comp_Imphi1_1d.pdf}
\includegraphics[height=4cm,keepaspectratio]{1d/convergence.pdf}
\end{center}
\caption{{\bf Numerical Experiments 1.} (Left) Real part of $\psi_1(\cdot,T)$ on coarse and fine grids. (Middle) Imaginary part of $\psi_1(\cdot,T)$ on coarse and fine grids. (Right) $\ell^2$-norm error.}
\label{figveloB}
\end{figure}
\noindent{\bf Numerical Experiments 2.} We compare the pseudospectral method in flat and curved spaces. For the curved space case, we select $\Psi(x)=\cos(x/10)e^{-10^{-2}x^2}$, $\Phi(x)=e^{-10^{-2}x^2}$, with $\phi_0(x)=\exp(-x^2/2+{\tt i}k_0x)$, where $k_0=5$, and we again take $c=1$. The computational domain is $[-5,5]$, with discretization parameters $\Delta t=5 \times 10^{-4}$ and $N_1=20001$. The second-order implicit splitting pseudospectral method is again implemented using GMRES \cite{saad}. We plot $x \mapsto \exp(\Phi(x)-\Psi(x))$, corresponding to the velocity field, in Fig. \ref{figvelo2} (Left), and the real and imaginary parts of the first component $\psi_1$ in Fig. \ref{figvelo2} (Middle, Right) at time $T=1$, corresponding to $2000$ time iterations. This illustrates the effect of the spatial curvature on the solution to the Dirac equation.\\
\begin{figure}
\begin{center}
\includegraphics[height=4cm,keepaspectratio]{1d/exp_phi_psi2.pdf}
\includegraphics[height=4cm,keepaspectratio]{1d/Rephi1_1d2.pdf}
\includegraphics[height=4cm,keepaspectratio]{1d/Imphi1_1d2.pdf}
\end{center}
\caption{{\bf Numerical Experiments 2.} (Left) Velocity field $x \mapsto \exp\big(\Phi(x)-\Psi(x)\big)$. (Middle) Real part of $\psi_1(\cdot,T)$ for flat and curved spaces. (Right) Imaginary part of $\psi_1(\cdot,T)$ for flat and curved spaces. The final time is $T=1$.}
\label{figvelo2}
\end{figure}
\noindent{\bf Numerical Experiments 3.}
We now consider a two-dimensional Dirac equation in curved space, defined by the metric $d{\boldsymbol s}^2=e^{2\Phi({\boldsymbol x})}dt^2-e^{2\Psi({\boldsymbol x})}d{\boldsymbol x}^2$, where $\Phi$, $\Psi$ are two space-dependent functions and ${\boldsymbol x}=(x,y)$:
\begin{eqnarray*}
{\tt i}\partial_t\psi & = & -{\tt i}e^{\Phi({\boldsymbol x})-\Psi({\boldsymbol x})}\Big(\sigma_x\Big(\partial_x + \cfrac{\partial_x\Phi({\boldsymbol x})}{2}\Big) + \sigma_y\Big(\partial_y + \cfrac{\partial_y\Phi({\boldsymbol x})}{2}\Big) \Big)\psi + e^{\Phi({\boldsymbol x})}\sigma_zm\psi \, .
\end{eqnarray*}
We assume that $\Phi({\boldsymbol x})=e^{-10^{-2}\|{\boldsymbol x}\|^2}$, $\Psi({\boldsymbol x})=e^{-5\times 10^{-3}\|{\boldsymbol x}\|^2}$, with $\phi_0({\boldsymbol x})=e^{-\|{\boldsymbol x}\|^2/2+{\tt i}{\boldsymbol k}_0 \cdot {\boldsymbol x}}$, where ${\boldsymbol k}_0=(5,5)^T$ and $c=1$. The computational domain is $[-5,5]^2$.
We report in Fig. \ref{velo2D} (Left) the velocity field on $\mathcal{D}=[-5,5]^2$, ${\boldsymbol x} \mapsto \exp\big(\Phi({\boldsymbol x}) - \Psi({\boldsymbol x})\big)$.\\
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{velo2d.jpg}
\includegraphics[height=6cm,keepaspectratio]{init2d.jpg}
\end{center}
\caption{{\bf Numerical Experiments 3.} (Left) Velocity field. (Right) Initial wavefunction $\psi_1(0,\cdot)$.}
\label{velo2D}
\end{figure}
We here implement {\bf Numerical Scheme II}. The numerical data are as follows: $\Delta t=1.14\times 10^{-4}$ and $N_1=N_z=512$. The initial data is the wavepacket
\begin{eqnarray*}
\psi_0(x,z) = \big(\phi_1(x,z),0,0,0\big)^T,
\end{eqnarray*}
where $\phi_1(x,z)=e^{-(x^2+z^2)/2 + 5{\tt i}(x-z)}$, which is plotted in Fig. \ref{velo2D} (Right). We report in Fig. \ref{sol2D} (resp. Fig. \ref{sol2Dbis}) the modulus of the first component (resp. the real part of the first component) of the Dirac 4-spinor at times (in {\it atomic units}) $t_1=0.57\times 10^{-2}$, $t_2=1.14\times 10^{-2}$, $t_3=2.28\times 10^{-2}$ and $t_4=4.56\times 10^{-2}$ in the curved (Left) and flat (Right) spaces.
\begin{figure}
\begin{center}
\includegraphics[height=4.8cm,keepaspectratio]{Curved50it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Flat50it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Curved100it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Flat100it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Curved200it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Flat200it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Curved400it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{Flat400it.jpg}
\end{center}
\caption{{\bf Numerical Experiments 3.} Modulus of the first component of the 4-spinor at times $t_1=0.57\times 10^{-2}$, $t_2=1.14\times 10^{-2}$, $t_3=2.28\times 10^{-2}$ and $t_4=4.56\times 10^{-2}$. (Left) Curved space. (Right) Flat space. }
\label{sol2D}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=4.8cm,keepaspectratio]{ReCurved50it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{ReCurved100it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{ReFlat100it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{ReCurved200it.jpg}
\includegraphics[height=4.8cm,keepaspectratio]{ReCurved400it.jpg}
\end{center}
\caption{{\bf Numerical Experiments 3.} Real part of the first component of the 4-spinor at times $t_1=0.57\times 10^{-2}$, $t_2=1.14\times 10^{-2}$, $t_3=2.28\times 10^{-2}$ and $t_4=4.56\times 10^{-2}$. (Left) Curved space. (Right) Flat space. }
\label{sol2Dbis}
\end{figure}
\subsection{Massless Dirac particles in curved graphene}
Charge carriers in graphene can be described theoretically by a 2-D Dirac equation in curved space-time, where the metric is related to the graphene sample deformation \cite{Cortijo_2007}. The dynamics of these charge carriers was recently studied numerically \cite{PhysRevB.98.155419,Debus_2018} with a lattice Boltzmann method. Using our numerical schemes, we now propose some tests with similar configurations.
\noindent{\bf One-dimensional test: rippled graphene sample}. This test is dedicated to the simulation of strained graphene, together with a simple comparison with unstrained graphene (corresponding to flat space). More specifically, we consider a rippled graphene sheet parameterized by the coordinate transformation map of \cite{PhysRevB.98.155419}. We set $h(x)=a_0\cos(2\pi k_0x/\ell)$ and $f(x)=\big(h'(x)\big)^2/2 = 2\pi^2a_0^2k_0^2\sin\big(2\pi k_0 x/\ell\big)^2/\ell^2$, where $a_0$ and $k_0$ denote the amplitude and wave vector of the surface ripples, and $\ell$ is the length of the sheet. Moreover, in one dimension, the spatial part of the metric and the tetrad are given by
\begin{eqnarray*}
g(x) & = & \big(1-f(x)\big)^2, \qquad e_x^x(x) = \cfrac{1}{1-f(x)} \, .
\end{eqnarray*}
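A quick symbolic sanity check of these relations is possible with sympy (a sketch; the symbol names are ours): it verifies $f=(h')^2/2$ and that the metric and tetrad satisfy $g\,(e_x^x)^2=1$.

```python
import sympy as sp

x, a0, k0, ell = sp.symbols('x a_0 k_0 ell', positive=True)
h = a0 * sp.cos(2 * sp.pi * k0 * x / ell)   # ripple profile h(x)
f = 2 * sp.pi**2 * a0**2 * k0**2 * sp.sin(2 * sp.pi * k0 * x / ell)**2 / ell**2

# f(x) = (h'(x))^2 / 2, as stated above
assert sp.simplify(sp.diff(h, x)**2 / 2 - f) == 0

# spatial part of the metric and tetrad of the strained sheet
g = (1 - f)**2
e_xx = 1 / (1 - f)
assert sp.simplify(g * e_xx**2) == 1
```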
The Dirac equation modeling strained graphene with external electromagnetic potentials $(A,V)$ in 1-D then reads
\begin{eqnarray}\label{eqC}
\partial_t \psi + \sigma_xe^x_x\big(\partial_x-{\tt i}A_x\big)\psi & = & -{\tt i}\gamma^0(m-V)\psi \, .
\end{eqnarray}
This equation can be solved using the numerical schemes developed in this article.
It is easy to show that the following $\ell^2_{\gamma}-$norm is conserved
\begin{eqnarray*}
\|\psi\|_{\gamma} & := & \Big(\int \big(1-f(x)\big)|\psi(x)|^2dx\Big)^{1/2} \, .
\end{eqnarray*}
Indeed, we multiply \eqref{eqC} by $(1-f(x))\psi^{\dagger}$, take the real part and integrate in space, and directly get:
\begin{eqnarray*}
\cfrac{d}{dt}\int \big(1-f(x)\big)|\psi(x)|^2dx & = & 0 \, .
\end{eqnarray*}
\noindent{\bf Experiment 4.} In this first experiment, we assume that the initial data is
\begin{eqnarray*}
\psi(0,x) = (1,{\tt i})^T\beta e^{-\beta x^2/2}/\sqrt{4\pi } \, .
\end{eqnarray*}
Numerically, we take $\beta=2$, $a_0=4\times 10^{-1}$, $k_0=2$, $c=1$, and $\ell=5$. Moreover, we fix $A_x(x)=V(x)=5x$. We plot in Fig. \ref{figV0} (Right) the graph of $x\in [-10,10] \mapsto f(x)$. The discretization parameters are chosen
to be $\Delta t=10^{-2}$ and $h=10^{-2}$. We report in Fig. \ref{compDen0} the density in flat space, defined by $d_F(t,\cdot)=|\psi_1(t,\cdot)|^2+|\psi_2(t,\cdot)|^2$, and the density $d_C(t,\cdot)$ in curved space at times $t=0.4$, $t=0.8$, $t=1.2$ and $t=1.6$. In Fig. \ref{l2norm0} (Left), we report in logscale the $\ell^2-$ and $\ell_{\gamma}^2-$norms of the solution as a function of time iterations in flat and curved space, illustrating the $\ell^2-$stability of the scheme as well as the $\ell^2-$norm conservation in flat space and the $\ell^2_{\gamma}-$norm conservation in curved space.
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{init_dens_graphene0.pdf}
\includegraphics[height=6cm,keepaspectratio]{f_G0.pdf}
\end{center}
\caption{{\bf Experiment 4.} (Left) Initial density (Right) Graph of $f$ and $|G|$.}
\label{figV0}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{d20.pdf}
\includegraphics[height=6cm,keepaspectratio]{d40.pdf}
\includegraphics[height=6cm,keepaspectratio]{d60.pdf}
\includegraphics[height=6cm,keepaspectratio]{d80.pdf}
\end{center}
\caption{{\bf Experiment 4.} From Top-Left to Bottom-Right: density in flat and curved spaces at time $t=0.4$, $t=0.8$, $t=1.2$ and $t=1.6$.}
\label{compDen0}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{l2norm0.pdf}
\includegraphics[height=6cm,keepaspectratio]{cv0.pdf}
\end{center}
\caption{{\bf Experiment 4.} (Left) $\ell^2-$norm and $\ell_{\gamma}^2-$norm in logscale of the solution as a function of time iterations in flat and curved space. (Right) Convergence graph in logscale: $h \mapsto \|\psi_{\textrm{approx.}}(T,\cdot)-\psi_{\textrm{ref.}}(T,\cdot)\|_2$ with $h=1/2,\cdots,1/2^{10}$.}
\label{l2norm0}
\end{figure}
We also report the graph of convergence in Fig. \ref{l2norm0} (Right), with $\Delta t=10^{-5}$ and $T=10^{-1}$ (corresponding to $1000$ time iterations); the computational domain is still $[-10,10]$. We represent the $\ell^2$-norm error between a reference solution $\psi_{\textrm{ref.}}(\cdot,T)$ (computed on a very fine mesh) and the approximate one $\psi_{\textrm{approx.}}(\cdot,T)$, computed with mesh sizes $h=1/2^{i}$, $i=4,\ldots,11$.\\
\noindent{\bf Numerical Experiments 5.} A more severe test is performed with different physical data. The computational domain is given by $[-5,5]$ and the initial data is
\begin{eqnarray*}
\psi(0,x) = (1,{\tt i})^T\beta e^{-\beta x^2/2}/\sqrt{4\pi } \, ,
\end{eqnarray*}
with $\beta=2$. Numerically, we take $\Delta t=10^{-2}$ and $h=10^{-2}$. Now, we take $V(x)=1/(|x|+1)$, $A_x=10x^2$, and $a_0=0.4$, $\ell=10$, $k_0=5$. The initial data, the potential, and the functions $G$ and $f$ are reported in Fig. \ref{figV2}. We
plot in Fig. \ref{compDen2} the density $d_F(t,\cdot)$ in flat space and the density $d_C(t,\cdot)$ in curved space at times $t=0.2$, $t=0.4$, $t=0.6$ and $t=0.8$. In Fig. \ref{l2norm}, we report the $\ell^2-$norm of the solution as a function of time iterations in flat and curved space, illustrating the $\ell^2-$stability (and the $\ell^2-$norm preservation in flat space) of the proposed scheme.\\
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{init_dens_graphene2.pdf}
\includegraphics[height=6cm,keepaspectratio]{f_G2.pdf}
\end{center}
\caption{{\bf Numerical Experiments 5.} (Left) Initial density and potential $V$. (Right) Graph of $f$.}
\label{figV2}
\end{figure}
The strain on the graphene sheet is stronger than in the previous setting.\\
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{d1_2.pdf}
\includegraphics[height=6cm,keepaspectratio]{d2_2.pdf}
\includegraphics[height=6cm,keepaspectratio]{d3_2.pdf}
\includegraphics[height=6cm,keepaspectratio]{d4_2.pdf}
\end{center}
\caption{{\bf Numerical Experiments 5.} From Top-Left to Bottom-Right: density in flat and curved spaces at time $t=0.2$, $t=0.4$, $t=0.6$ and $t=0.8$.}
\label{compDen2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{l2norm.pdf}
\end{center}
\caption{{\bf Numerical Experiments 5.} $\ell^2-$norm of the solution as a function of time iterations in flat and curved space, and $\ell^2_{\gamma}$-norm of the solution as a function of time iterations in curved space.}
\label{l2norm}
\end{figure}
\noindent{\bf Numerical Experiments 6.} The last experiment is dedicated to the use of PML in order to absorb the waves reaching the computational domain boundary. We compare the solution and its norm with and without PML. The objective is to show that PML allow one to circumvent the effect of periodic boundary conditions by avoiding the propagation of non-physical waves re-entering the physical domain from one side to the other. We refer to \cite{jcp2019} for a detailed study of PML for the Dirac equation using the pseudospectral method presented in this paper. The objective in this example is not to construct the best possible PML (this would require a fine study of the absorbing functions and their parameters), but rather to show how efficient these PML can be. In the following, we consider absorbing functions of type I (with $\sigma_0=1$ and $\theta=0$, see Section \ref{sec:PML} for definitions), with a PML corresponding to $10\%$ of the overall computational domain. The computational domain is given by $[-4.5,4.5]$ and the initial data is
\begin{eqnarray*}
\psi(0,x) = (1,{\tt i})^T\beta e^{-\beta x^2/2}/\sqrt{4\pi } \, ,
\end{eqnarray*}
with $\beta=2$. Numerically, we take $\Delta t=10^{-2}$ and $h=10^{-2}$. Now, we take $V(x)=A_x(x)=0$, $m=0$, and $a_0=0.4$, $\ell=5$, $k_0=2$. We report in Fig. \ref{compDen2PML} the density $d_F(t,\cdot)$ in flat space and the density $d_C(t,\cdot)$ in curved space at times $t=0.75$, $t=1.5$, $t=2.25$ and $t=4$, with PML (Left column) and without PML (Right column). In Fig. \ref{l2normPML}, we report the $\ell^2-$norm of the solution as a function of time iterations, in flat and curved space, with and without PML. This illustrates the conservation of the norms when periodic boundary conditions are used without PML, and their decay when the PML absorbs the outgoing waves.
\begin{figure}
\begin{center}
\includegraphics[height=5.5cm,keepaspectratio]{pml_75.pdf}
\includegraphics[height=5.5cm,keepaspectratio]{nopml_75.pdf}\\
\includegraphics[height=5.5cm,keepaspectratio]{pml_150.pdf}
\includegraphics[height=5.5cm,keepaspectratio]{nopml_150.pdf}\\
\includegraphics[height=5.5cm,keepaspectratio]{pml_225.pdf}
\includegraphics[height=5.5cm,keepaspectratio]{nopml_225.pdf}\\
\includegraphics[height=5.5cm,keepaspectratio]{pml_400.pdf}
\includegraphics[height=5.5cm,keepaspectratio]{nopml_400.pdf}
\end{center}
\caption{{\bf Numerical Experiments 6.} Density in flat and curved spaces at time $t=0.75$, $t=1.5$, $t=2.25$ and $t=4$. (Left column) with PML. (Right column) without PML.}
\label{compDen2PML}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm,keepaspectratio]{pml_norm_400.pdf}
\includegraphics[height=6cm,keepaspectratio]{nopml_norm_400.pdf}
\end{center}
\caption{{\bf Numerical Experiments 6.} $\ell^2-$norm of the solution as a function of time iterations in flat and curved space, and $\ell^2_{\gamma}$-norm of the solution as a function of time iterations in curved space. (Left) with PML. (Right) without PML.}
\label{l2normPML}
\end{figure}
This experiment shows that, although the proposed method naturally imposes periodic boundary conditions, their negative effects can be tackled thanks to Perfectly Matched Layers, which can very easily be implemented within the pseudospectral method.
\section{Conclusion}\label{sec:conclusion}
We have derived and analyzed simple pseudospectral computational methods for solving the Dirac equation in curved space with perfectly matched layers at the computational domain boundary, and more generally for Dirac-like equations with non-constant coefficients. Interestingly, the proposed methods can easily be implemented from existing Fourier-based methods. Some
numerical one- and two-dimensional experiments illustrating the properties of the numerical schemes were proposed. In a forthcoming paper, we will apply the developed methodology
to an extensive study of strained graphene.
\section{Introduction}
\par\noindent
Since the discovery of Bekenstein \cite{Bekenstein1972,Bekenstein1973,Bekenstein1974} and Hawking \cite{Hawking1974,Hawking1975}
connecting the entropy ($S$) and area ($\mathcal{A}$) of a black hole (BH), general relativity, quantum field theory and statistical
physics became deeply linked. This connection is not restricted to BHs: as shown by Gibbons and Hawking \cite{Gibbons1977a}, the cosmological
horizon present in de Sitter space also fulfills $S=\frac{\mathcal{A}}{4}$. The geometric features of BH entropy seem to imply that it is related to the
non-trivial topological structure of the corresponding spacetime. Moreover, Hawking and Gibbons \cite{Gibbons1977b}
argued that, due to the different topologies of extremal and non-extremal BHs, the area law fails for the former because their entropy
vanishes despite the non-zero area of the horizon. Furthermore, Teitelboim confirmed \cite{Teitelboim1995}, using the Hamiltonian formalism,
that extremal BHs have zero entropy. These facts led Liberati and Pollifrone \cite{Liberati1997a}
to suggest the formula $S=\frac{\chi \mathcal{A}}{8}$, where $\chi$ is the Euler characteristic of the corresponding regular and Riemannian version of the
considered spacetime (gravitational instanton). Although this formula has been checked for a wide class of gravitational instantons
\cite{Liberati1997a,Wang2000,Ma2003}, it is well known that it does not work in the case of multiple-horizon spacetimes \cite{Cai1998}.
\par\noindent
The Euler characteristic of a gravitational instanton is related to a particular combination of curvature invariants of the form
$\mathcal{G}=R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}-4 R^{\mu\nu}R_{\mu\nu}+R^{2}$ by means of the Gauss-Bonnet (GB) theorem. This GB
term, $\mathcal{G}$, not only plays an important role in BH entropy but also in higher dimensional extensions of general relativity
as well as in certain extensions in the four dimensional case. Specifically, the so-called $\mathcal{F}(\mathcal{G})$-gravity \cite{Nojiri2005}, where $\mathcal{F}$ is a
non-linear function of the four dimensional GB invariant, produces late-time acceleration \cite{Nojiri2006} and is an interesting alternative to standard
cosmology (see, for example, Ref. \cite{Felice2009} and references therein). Moreover, topological spherically symmetric vacuum solutions in $\mathcal{F}(R,\mathcal{G})$-gravity have been
recently studied \cite{Myrzakulov2013}. In particular, in Ref. \cite{Myrzakulov2013}, the authors looked for non-linear deformations of four dimensional
$\mathcal{G}$- and $R$-gravity theories such that solutions with a constant $\mathcal{G}$ appear.
In addition, the GB term is related to the trace anomalies in gravity (see Ref. \cite{Duff1994} for a review).
Therefore, as pointed out in Ref. \cite{Gibbons1995},
knowledge of the GB term may also be useful for understanding how this quantum anomaly affects the classical solutions.
\par
In this work, a different interpretation of the static and spherically symmetric geometries supporting a constant $\mathcal{G}$ will be given
in terms of non-vacuum solutions of standard Einsteinian gravity. Our interpretation is somewhat similar in spirit
to that of Ref. \cite{Myrzakulov2013}, but couples certain models of non-linear electrodynamics (NLED) or null dust fluids (which generalize the
Vaidya solutions \cite{Vaidya1951}) to gravity, instead of deforming the gravitational action.
\section{Preliminaries: matter content}
\subsection{Non-linear electrodynamics}
\par\noindent
In geometrized units, Einstein equations ($\Lambda =0$) read
\begin{equation}
\label{einstein}
R_{\mu\nu}-\frac{1}{2}R\, g_{\mu\nu} = 8 \pi T_{\mu\nu},
\end{equation}
where $T_{\mu\nu}$ is the energy-momentum tensor.
\par\noindent
For the matter content we choose non-linear electromagnetic fields. To justify the study of these NLED theories, let us focus on two arguments.
First of all, quantum corrections to Maxwell theory can be described by means of non-linear effective Lagrangians that define NLED as, for instance,
the Euler-Heisenberg Lagrangian \cite{HE,Sch}. When higher order corrections are taken into account, we are led to a sequence of effective Lagrangians which
are polynomials in the field invariants \cite{Bia}. Among all the non-linear generalizations of Maxwell electrodynamics, Born-Infeld (BI) theory \cite{BI} has
been widely studied. Interestingly, the BI Lagrangian depends on the two field invariants in the same way as
the one-loop effective Lagrangian for vacuum polarization due to a constant external electromagnetic field, which gives support to it.
A second argument comes from the low-energy limit of string theory. Specifically, in case of dealing with open bosonic strings, the resulting tree-level
effective Lagrangian is shown to coincide with the BI Lagrangian \cite{PLB1985,NPB1997}.
\par\noindent
When coupled to gravity, NLED gives rise to interesting phenomena. The corresponding solutions are generalizations of the
Reissner-Nordstr\"om geometry, which have received considerable attention recently. In particular, BI solutions were presented
in \cite{Garcia1984,Breton2003}. BH solutions for generalized BI theories were
studied in \cite{Hendi2013}. An exact regular BH geometry in the presence of NLED was obtained in \cite{Ayon1998} and further discussed in
\cite{Baldo2000,Bronni2001}. Finally, BHs with the Euler-Heisenberg effective Lagrangian as a source term were examined in \cite{Yajima2001},
and the same type of solutions with Lagrangian densities that are powers of Maxwell's Lagrangian were analyzed in \cite{Hassaine2008}.
\par\noindent
In this work we consider a simple choice for an energy-momentum tensor for NLED which is written as
\begin{equation}
\label{nlT}
T^{\mu\nu}=-\frac{1}{4\pi}\left[\mathcal{L}(F)g^{\mu\nu}+\mathcal{L}_{F}F^{\mu}_{\;\;\rho}F^{\rho \nu} \right]
\end{equation}
where $\mathcal{L}$ is the corresponding Lagrangian, $F=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$, and $\mathcal{L}_{F}=\frac{d\mathcal{L}}{dF}$ (throughout this work,
dependence on the second field invariant, $\frac{\sqrt{-g}}{2}\epsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}F^{\mu\nu}$, will not be considered).
\par\noindent
For simplicity, let us take spherically symmetric and static solutions to Eqs. (\ref{einstein}) given by
\begin{equation}
\label{metric}
ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2} d\Omega^{2}.
\end{equation}
\par\noindent
In the electrovacuum case, we consider only a radial electric field as the source, namely,
\begin{equation}
\label{eqmaxwell}
F_{\mu \nu}=E(r)\left(\delta^{r}_{\mu}\delta^{t}_{\nu}-\delta^{r}_{\nu}\delta^{t}_{\mu} \right)
\end{equation}
where $t$ and $r$ stand for the time and radial coordinates, respectively. The Maxwell equations now read
\begin{equation}
\nabla_{\mu}\left(F^{\mu\nu}\mathcal{L}_{F}\right)=0,
\end{equation}
thus,
\begin{equation}
\label{max}
E(r)=-\frac{q}{r^{2}}(\mathcal{L}_{F})^{-1}.
\end{equation}
As pointed out in Ref. \cite{Cherubini2002}, the non-Weyl part of the curvature determined by the matter content can be separated by showing that
\begin{equation}
\label{main}
4 R^{\mu\nu}R_{\mu\nu}-R^{2}=(16 \pi)^{2}\left(T^{\mu\nu}T_{\mu\nu}-\frac{T^2}{4} \right)
\end{equation}
where $T=g^{\mu\nu}T_{\mu\nu}$ is the trace of the energy-momentum tensor. Therefore, in the considered spherically
symmetric and static case, we arrive at the following expression for the electric field
\begin{equation}
\label{electric}
E(r)=\frac{r^{2}}{4q}\sqrt{4 R^{\mu\nu}R_{\mu\nu} -R^{2}}.
\end{equation}
It is important to point out that Eq. (\ref{electric}) is also valid for $\Lambda\ne0$, as can be easily shown by direct calculation.
\par\noindent
Let us now check Eq. (\ref{electric}) in two particular cases.
In the first case, we take a Reissner-Nordstr\"om BH whose relevant curvature invariants are given by
$R^{\mu\nu}R_{\mu\nu}=\nobreak 4 q^{4}/r^{8}$ and $R=0$.
Therefore, $4 R^{\mu\nu}R_{\mu\nu}-R^{2}=16 q^{4}/r^{8}$ and $E(r)=q/r^{2}$, as expected.
\par\noindent
In the second case, let us consider a regular BH metric \cite{Balart2014} given by $f(r)=1-\frac{2M}{r}e^{-\frac{q^{2}}{2Mr}}$.
For this geometry, we get $R=e^{-\frac{q^2}{2 M r}} q^4 /2 M r^5$ and
$R^{\mu\nu}R_{\mu\nu}= e^{-\frac{q^2}{M r}} q^4 \left(q^4-8 M q^2 r+32 M^2 r^2\right)/ 8 M^2 r^{10}$.
Therefore, $4 R^{\mu\nu}R_{\mu\nu}-R^{2}=e^{-\frac{q^2}{M r}} q^4 \left(q^2-8 M r\right)^2 /4 M^2 r^{10}$ and
$E(r)=\frac{q}{r^{2}}\left(1-\frac{q^{2}}{8 M r} \right)e^{-\frac{q^{2}}{2 M r}}$ which coincides with Eq. (19)
of Ref. \cite{Balart2014}.
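This check can also be carried out symbolically. The following sympy sketch (symbol names are ours) compares the squares of both sides of Eq. (\ref{electric}) for the regular BH invariants quoted above, which avoids the absolute value introduced by the square root.

```python
import sympy as sp

r, q, M = sp.symbols('r q M', positive=True)

# curvature invariants of the regular BH, as quoted in the text
R = sp.exp(-q**2 / (2 * M * r)) * q**4 / (2 * M * r**5)
ric2 = sp.exp(-q**2 / (M * r)) * q**4 * (q**4 - 8 * M * q**2 * r
       + 32 * M**2 * r**2) / (8 * M**2 * r**10)

E = r**2 / (4 * q) * sp.sqrt(4 * ric2 - R**2)   # Eq. (electric)
target = q / r**2 * (1 - q**2 / (8 * M * r)) * sp.exp(-q**2 / (2 * M * r))

# compare squares to sidestep the sign ambiguity of the square root
assert sp.simplify(E**2 - target**2) == 0
```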
\par\noindent
The underlying NLED theory can be obtained using the $P$ framework \cite{Salazar1987}, which is somehow dual to the $F$ framework. After introducing the
tensor $P_{\mu\nu}=\mathcal{L}_{F}F_{\mu\nu}$ together with its invariant $P=-\frac{1}{4}P_{\mu\nu}P^{\mu\nu}$,
one considers the Hamiltonian-like quantity
\begin{equation}
\label{H}
\mathcal{H}=2 F \mathcal{L}_{F} -\mathcal{L}
\end{equation}
as a function of $P$, which specifies the theory. Therefore, the Lagrangian can be written as a function of $P$
as
\begin{equation}
\label{L}
\mathcal{L}=2 P \frac{d\mathcal{H}}{d{P}}-\mathcal{H}.
\end{equation}
Finally, by reformulating the coupled Einstein-NLED equations
in terms of $P$, $\mathcal{H}(P)$ is shown to be given by \cite{Bronnikov2001}
\begin{equation}
\mathcal{H}(P)=-\frac{1}{r^2}\frac{d \mathcal{M}(r)}{dr}
\end{equation}
where the mass function $\mathcal{M}(r)$
is such that $f(r)=1-\frac{2 \mathcal{M}(r)}{r}$.
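For instance, for the regular BH metric considered above one reads off $\mathcal{M}(r)=Me^{-q^{2}/(2Mr)}$, and $\mathcal{H}$ follows by direct differentiation. A short sympy sketch (symbol names are ours):

```python
import sympy as sp

r, q, M = sp.symbols('r q M', positive=True)
mass = M * sp.exp(-q**2 / (2 * M * r))   # M(r) of the regular BH above

# H(P) evaluated on the solution: H = -M'(r)/r^2
H = -sp.diff(mass, r) / r**2

# by hand: H = -q^2 exp(-q^2/(2 M r)) / (2 r^4)
assert sp.simplify(H + q**2 * sp.exp(-q**2 / (2 * M * r)) / (2 * r**4)) == 0
```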
\subsection{Null fluids}
\par\noindent
Let us consider the geometry in terms of (ingoing) Eddington-Finkelstein coordinates
\begin{equation}
\label{genV}
ds^{2}=-f(r, v)dv^{2}+2dr dv+r^{2}d\Omega^{2}
\end{equation}
with $v=t+r$.
The metrics represented by Eq. (\ref{genV}) are
generalizations of the Vaidya metric \cite{Vaidya1951},
which describes a spherically symmetric, nonrotating body that is either emitting ($u$-coordinate) or
absorbing ($v$-coordinate) null dust, i.e., ``incoherent'' electromagnetic radiation.
Moreover, it can be seen \cite{Husain1996,Wang1999} that they solve the Einstein equations
provided the associated energy-momentum tensor is that of a Type II perfect fluid \cite{book} given by
\begin{equation}
T_{\mu\nu}=(\rho + p)(l_{\mu}n_{\nu}+l_{\nu}n_{\mu})+p g_{\mu\nu}
\end{equation}
where $l_{\mu}=\delta^{t}_{\mu}$ and $n_{\mu}=\frac{1}{2}f(r)\delta^{t}_{\mu}-\delta^{r}_{\mu}$ are two
null vectors. The density and pressure of the fluid are given by
\begin{eqnarray}
\rho&=&\frac{\mathcal{M}_{r}}{4\pi r^2} \nonumber \\
p&=&-\frac{\mathcal{M}_{rr}}{8\pi r}
\end{eqnarray}
where $\mathcal{M}_{r}\equiv d \mathcal{M}(r)/dr$.
\par\noindent
At this point, it is noteworthy that for Type II fluids, the energy conditions
read: (a) weak: $\rho\ge 0$ and $p+\rho \ge0$,
(b) strong: $p\ge 0$ and $p+\rho \ge0$, and
(c) dominant: $\rho\ge 0$ and $-\rho\le p \le \rho $.
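As a simple illustration of these formulas (the Reissner--Nordstr\"om-like mass function below is our own choice, not taken from the text), the fluid variables and the energy conditions can be checked symbolically:

```python
import sympy as sp

r, q, M = sp.symbols('r q M', positive=True)
mass = M - q**2 / (2 * r)   # illustrative RN-like mass function

rho = sp.diff(mass, r) / (4 * sp.pi * r**2)   # rho = M_r / (4 pi r^2)
p = -sp.diff(mass, r, 2) / (8 * sp.pi * r)    # p = -M_rr / (8 pi r)

# here rho = p = q^2/(8 pi r^4) >= 0, so the weak, strong and
# dominant energy conditions all hold
assert sp.simplify(rho - q**2 / (8 * sp.pi * r**4)) == 0
assert sp.simplify(p - rho) == 0
```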
\section{Solutions with constant topological Euler density}
\par\noindent
As mentioned in Refs. \cite{Nojiri2005,Nojiri2006, Myrzakulov2013}, several interesting cosmological models and
topological static spherically symmetric solutions in $\mathcal{F}(R,\mathcal{G})$ gravity are obtained
when the topological Euler density, i.e., $\mathcal{G}$, is constant. In addition, in $\mathcal{F}(\mathcal{G})$ gravity,
during the transition from the matter to the accelerated era the topological Euler density is zero \cite{Felice2009}.
For these reasons as well as for simplicity, we consider here solutions with constant topological Euler density.
\par\noindent
The topological Euler density is defined as
\begin{equation}
\mathcal{G}=\frac{1}{32\pi^2}\left(R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}-4 R^{\mu\nu}R_{\mu\nu}+R^{2} \right).
\end{equation}
For a spherically symmetric solution of the Einstein equations given by Eq. (\ref{metric}) we get
\begin{equation}
\mathcal{G}=\frac{4}{r^2}\left(f'(r)^{2}+\left(f(r)-1\right) f''(r)\right)
\end{equation}
where the factor of $32 \pi^2$ has been absorbed into $\mathcal{G}$ for convenience.
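As a quick numerical sanity check of this expression (a finite-difference sketch, not part of the derivation): for the Schwarzschild function $f(r)=1-2M/r$ the metric is Ricci flat, so the combination above must reduce to the Kretschmann scalar $48M^{2}/r^{6}$.

```python
# Finite-difference check of G(r) = (4/r^2) * (f'^2 + (f - 1) f'')
# for the Schwarzschild case f(r) = 1 - 2M/r, where G must equal
# the Kretschmann scalar 48 M^2 / r^6 (Ricci-flat case).

def euler_density(f, r, h=1e-4):
    """Evaluate (4/r^2)(f'^2 + (f-1) f'') by central differences."""
    fp = (f(r + h) - f(r - h)) / (2.0 * h)
    fpp = (f(r + h) - 2.0 * f(r) + f(r - h)) / h ** 2
    return 4.0 / r ** 2 * (fp ** 2 + (f(r) - 1.0) * fpp)

M = 1.0
f_schw = lambda r: 1.0 - 2.0 * M / r

for r in (3.0, 5.0, 10.0):
    exact = 48.0 * M ** 2 / r ** 6
    assert abs(euler_density(f_schw, r) - exact) < 1e-4 * exact
```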
\subsection{$\mathcal{G}=k\ne 0$ solutions}
\par\noindent
The solution of $\mathcal{G}=\nobreak k$, with $k$ an arbitrary constant, is given by
\begin{equation}
\label{new}
f(r)=1\pm\sqrt{1-2 A+B r +\frac{k r^{4}}{24}}
\end{equation}
where $A$ and $B$ are arbitrary constants. Let us focus on the negative-sign case, which gives the BH solutions.
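Indeed, writing $f=1-\sqrt{g}$ with $g(r)=1-2A+Br+kr^{4}/24$, one finds $f'^{2}+(f-1)f''=g''/2=kr^{2}/4$, so that $\mathcal{G}=k$ identically. A numerical sketch of this check (the parameter values below are arbitrary illustrative choices):

```python
import math

# Numerical check that f(r) = 1 - sqrt(1 - 2A + B r + k r^4 / 24)
# yields constant topological Euler density G = k, where
# G = (4/r^2) * (f'^2 + (f - 1) f'').
# A, B, k below are arbitrary illustrative values.

A, B, k = 0.3, 0.5, 2.0

def f(r):
    return 1.0 - math.sqrt(1.0 - 2.0 * A + B * r + k * r ** 4 / 24.0)

def euler_density(r, h=1e-4):
    fp = (f(r + h) - f(r - h)) / (2.0 * h)
    fpp = (f(r + h) - 2.0 * f(r) + f(r - h)) / h ** 2
    return 4.0 / r ** 2 * (fp ** 2 + (f(r) - 1.0) * fpp)

for r in (0.5, 1.0, 2.0, 5.0):
    assert abs(euler_density(r) - k) < 1e-4 * k
```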
\par\noindent
Employing Eq. (\ref{electric}), we get that Eq. (\ref{new}) solves the Einstein-NLED system provided
the electric field is given by
\begin{widetext}
\begin{equation}
\label{newelectric}
E(r)=\sqrt{6}\; \frac{4 (1-2 A)^2+48 (1-2 A) B r+27 B^2 r^2+(-1+2 A) k r^4}
{q \left(24-48 A+24 B r+k r^4\right)^{3/2}}.
\end{equation}
\end{widetext}
\par\noindent
At this point, a number of comments are in order.
First, the electric field asymptotically will be
\begin{equation}
\label{coul}
\lim_{r\rightarrow \infty}E(r) \rightarrow \sqrt{\frac{6}{ k}} \frac{2 A-1}{q r^{2}}
\end{equation}
which corresponds to a Coulomb-like behavior with an effective charge satisfying $q^{2} = \sqrt{\frac{6}{ k}}\, (2 A-1)$.
Second, if the constants are selected to be $A=1/2$ and $B=0$, then Eq. (\ref{new}) corresponds to de Sitter
or anti de Sitter space and its electric field vanishes, as shown by Eq. (\ref{newelectric}).
Third, it is evident from Eq. (\ref{coul}) that the asymptotic electric field does not depend on $B$.
Fourth, the metric element as given in Eq. (\ref{new}) asymptotically reads
\begin{equation}
\label{lim}
\lim_{r\rightarrow\infty} f(r)\rightarrow 1-\sqrt{\frac{6}{k}}\frac{B}{r}+\sqrt{\frac{6}{k}}\frac{2A-1}{r^2}-\sqrt{\frac{k}{6}}\frac{r^2}{2}
\end{equation}
which corresponds to a Reissner-Nordstr\"om-de Sitter solution provided that
\begin{eqnarray}
\label{constants}
A &=&\frac{1}{2}+ \frac{ q^{2} \Lambda }{3} \nonumber \\
B&=& \frac{4M\Lambda}{3} \nonumber \\
k&=&\frac{8\Lambda^{2}}{3}.
\end{eqnarray}
\par\noindent
Therefore, utilizing Eqs. (\ref{new}) and (\ref{constants}), a BH solution of the coupled Einstein-NLED system
for a certain electromagnetic Hamiltonian which will be derived in the Appendix, will be of the form
\begin{eqnarray}
\label{final}
ds^{2}\!\!\!&=&\!\!\!-\left(1-\sqrt{\frac{4 M \Lambda r}{3}+\frac{\Lambda^{2}}{9}r^{4}-\frac{2 q^{2}\Lambda}{3}}\right)dt^{2}\nonumber\\
\!\!\!&&\!\!\!+\frac{dr^{2}}{\left(1-\sqrt{\frac{4 M \Lambda r}{3}+\frac{\Lambda^{2}}{9}r^{4}-\frac{2 q^{2}\Lambda}{3}}\right)} +r^2 d \Omega^{2} ~.
\end{eqnarray}
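As a consistency sketch, one can verify numerically that with the identifications of Eq. (\ref{constants}) the radicand of Eq. (\ref{new}) coincides with that of Eq. (\ref{final}), and that at large $r$ the metric function approaches the Reissner-Nordstr\"om-de Sitter form $1-2M/r+q^{2}/r^{2}-\Lambda r^{2}/3$; the values of $M$, $q$, and $\Lambda$ below are arbitrary illustrative choices.

```python
import math

# Check that A = 1/2 + q^2 Lambda / 3, B = 4 M Lambda / 3, k = 8 Lambda^2 / 3
# turn 1 - 2A + B r + k r^4 / 24 into the radicand of Eq. (final),
# and that f(r) approaches the Reissner-Nordstrom-de Sitter form at large r.
# M, q, Lambda below are arbitrary illustrative values.

M, q, Lam = 1.0, 0.5, 0.01
A = 0.5 + q ** 2 * Lam / 3.0
B = 4.0 * M * Lam / 3.0
k = 8.0 * Lam ** 2 / 3.0

def radicand(r):
    return 1.0 - 2.0 * A + B * r + k * r ** 4 / 24.0

def radicand_final(r):
    return 4.0 * M * Lam * r / 3.0 + Lam ** 2 * r ** 4 / 9.0 - 2.0 * q ** 2 * Lam / 3.0

for r in (1.0, 10.0, 100.0):
    assert abs(radicand(r) - radicand_final(r)) < 1e-12 * max(1.0, abs(radicand_final(r)))

def f(r):
    return 1.0 - math.sqrt(radicand(r))

def f_RNdS(r):
    return 1.0 - 2.0 * M / r + q ** 2 / r ** 2 - Lam * r ** 2 / 3.0

# the two metric functions agree ever more closely as r grows
for r in (50.0, 100.0, 200.0):
    assert abs(f(r) - f_RNdS(r)) < 1e-3
```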
\par\noindent
Let us now study the massless ($M=0$) and massive ($M\ne 0$) cases, separately.
\subsubsection{Massless case}
\par\noindent
In this case, the metric element as given in Eq. (\ref{final}) now reads
\begin{eqnarray}
\label{final1}
ds^{2}\!\!\!&=&\!\!\!-\left(1-\sqrt{\frac{\Lambda^{2}}{9}r^{4}-\frac{2 q^{2}\Lambda}{3}}\right)dt^{2}\nonumber\\
\!\!\!&&\!\!\!+\frac{dr^{2}}{\left(1-\sqrt{\frac{\Lambda^{2}}{9}r^{4}-\frac{2 q^{2}\Lambda}{3}}\right)} +r^2 d \Omega^2 ~.
\end{eqnarray}
Depending on the values of the parameters, $q$ and $\Lambda$, of this metric, one gets:
\begin{itemize}
\item $q=0$ and $\Lambda\ne0$\\
This case corresponds to the de Sitter solution which has a cosmological horizon at $r_{c}=\nobreak (3/\Lambda)^{1/2}$.
\item $q\ne0$ and $\Lambda\ne0$\\
This case has an event horizon located at $r_{h}=\left[\frac{9}{\Lambda^2}+\frac{6 q^2}{\Lambda}\right]^{1/4}$
with $\Lambda < -3/(2 q^{2})$. In addition, using Eqs. (\ref{newelectric}) and (\ref{constants}), the electric field reads
\begin{equation}
\label{elec}
E(r)=q\frac{r^4 +6 q^2/\Lambda}{\left(r^4-6q^2/\Lambda\right)^{3/2}}
\end{equation}
which asymptotically becomes, as expected,
\begin{equation}
\lim_{r\rightarrow \infty}E(r) \rightarrow \frac{q}{r^2}~.
\end{equation}
\end{itemize}
\par\noindent
At this point, a number of comments are in order. First, the metric element, i.e., $f(r)$, of Eq. (\ref{final1})
can be written in the form
\begin{eqnarray}
\label{deviation1}
f(r)&=&1-\frac{\Lambda}{3}r^2\sqrt{1-\frac{\alpha}{r^4}}\nonumber \\
&=&1-\frac{\Lambda}{3}r^2+\frac{q^2}{r^2}+\mathcal{O}\left(\alpha^{2}\right)
\end{eqnarray}
\par\noindent
where $\alpha\equiv 6q^2/\Lambda$ is a parameter that measures the deviation of the geometry described by
the metric element given in Eq. (\ref{deviation1}) from de Sitter space.
This deviation is depicted in Fig. \ref{fig}.
\begin{figure}[h!]
\includegraphics[scale=0.55]{fig.eps}
\caption{The red curve is de Sitter space with $\Lambda=-1$, the blue curve is a spacetime whose metric function
$f(r)$ has $q=1$ and $\Lambda=-1$, and the olive green curve depicts a spacetime
whose metric function has $q=2$ and $\Lambda=-1$.}
\label{fig}
\end{figure}
\par\noindent
Second, it is evident from Eq. (\ref{elec}) that there is an intrinsic singularity located at $r_{s}=\alpha^{1/4}$
with $\Lambda>0$. This singularity can also be detected if one computes the curvature invariants
which, in this case, are given by
\begin{widetext}
\begin{eqnarray}
\label{inv2}
R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}&=&\frac{8 \Lambda \left(216 q^8-144 q^6 r^4 \Lambda +114 q^4 r^8 \Lambda ^2-18 q^2 r^{12} \Lambda ^3+r^{16} \Lambda ^4\right)}
{3 r^4 \left(\Lambda r^4 - 6 q^2 \right)^3}
\nonumber \\
R^{\mu\nu}R_{\mu\nu}&=&4 \Lambda \left(\Lambda +\frac{2 q^4 \left(36 q^4+60 q^2 r^4 \Lambda -7 r^8 \Lambda ^2\right)}
{r^4 \left(\Lambda r^4 - 6 q^2\right)^3}\right)
\nonumber \\
g^{\mu\nu}R_{\mu\nu}&=&\frac{4 \sqrt{\Lambda \left(-6 q^2+r^4 \Lambda \right)} \left(6 q^4-9 q^2 r^4 \Lambda +r^8 \Lambda ^2\right)}
{r^{2} \left( \Lambda r^{4} - 6 q^2 \right)^{2}}.
\end{eqnarray}
\end{widetext}
\par\noindent
It should be stressed that, for $\Lambda>0$, the singularity $r_{s}$ will satisfy $r_{h}>r_{s}$ and, therefore,
it will be an intrinsic singularity which will never become a naked one.
Furthermore, this singularity can be avoided by choosing $\Lambda<0$. However, it is easily seen by inspecting
the curvature invariants in Eq. (\ref{inv2}) that another singularity is present.
The singularity at $r=0$ becomes relevant in this case and, therefore, a non-singular solution
cannot be achieved, although the electric field as given by Eq. (\ref{elec}) is regular everywhere.
It is also noteworthy that the singularity lying at the origin, i.e., at $r=0$, becomes a naked one when
$\Lambda\ge -3/(2 q^{2})$.
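For $\Lambda>0$, the hierarchy $r_{h}>r_{s}$ and the Coulomb tail of Eq. (\ref{elec}) can be illustrated numerically with the massless metric function of Eq. (\ref{final1}); the choice $q=\Lambda=1$ below is purely illustrative.

```python
# Numerical sketch for the massless case: the horizon
#   r_h = (9/Lambda^2 + 6 q^2/Lambda)^(1/4) satisfies f(r_h) = 0,
# the singularity r_s = (6 q^2/Lambda)^(1/4) lies inside it
# (for Lambda > 0), and E(r) -> q/r^2 at large r.
# q = Lambda = 1 is an arbitrary illustrative choice.

q, Lam = 1.0, 1.0

def f(r):
    return 1.0 - (Lam ** 2 * r ** 4 / 9.0 - 2.0 * q ** 2 * Lam / 3.0) ** 0.5

def E(r):
    return q * (r ** 4 + 6.0 * q ** 2 / Lam) / (r ** 4 - 6.0 * q ** 2 / Lam) ** 1.5

r_h = (9.0 / Lam ** 2 + 6.0 * q ** 2 / Lam) ** 0.25
r_s = (6.0 * q ** 2 / Lam) ** 0.25

assert abs(f(r_h)) < 1e-12               # horizon: f vanishes there
assert r_s < r_h                          # singularity behind the horizon
assert abs(E(1e3) * 1e6 / q - 1.0) < 1e-4 # Coulomb tail E ~ q/r^2
```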
\subsubsection{Massive case}
\par\noindent
This is the general case so the metric element is the one given by Eq. (\ref{final})
\begin{eqnarray}
\label{final2}
ds^{2}\!\!\!&=&\!\!\!-\left(1-\sqrt{\frac{4 M \Lambda r}{3}+\frac{\Lambda^{2}}{9}r^{4}-\frac{2 q^{2}\Lambda}{3}}\right)dt^{2}\nonumber\\
\!\!\!&&\!\!\!+\frac{dr^{2}}{\left(1-\sqrt{\frac{4 M \Lambda r}{3}+\frac{\Lambda^{2}}{9}r^{4}-\frac{2 q^{2}\Lambda}{3}}\right)} +r^2 d \Omega^{2}~.
\end{eqnarray}
\par\noindent
Several comments for this general case have been given at the beginning of this section. In the case that
$q\ne 0$ and $\Lambda=0$, the geometry is that of Minkowski spacetime. Therefore, the case which will be studied now
is the one with $q=0$ and $\Lambda\ne0$. The corresponding curvature invariants are written as
\begin{widetext}
\begin{eqnarray}
\label{inv3}
R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}&=& \frac{8}{3} \Lambda \left(\Lambda +\frac{4374 M^4}{\left(12 M r+r^4 \Lambda \right)^3}\right) \nonumber \\
R^{\mu\nu}R_{\mu\nu}&=& \frac{4 \Lambda \left(2754 M^4+1620 M^3 r^3 \Lambda +414 M^2 r^6 \Lambda ^2+36 M r^9 \Lambda ^3+r^{12} \Lambda ^4\right)}{\left(12 M r+r^4 \Lambda \right)^3} \nonumber \\
g^{\mu\nu}R_{\mu\nu}&=&\frac{4 \left(3 M+r^3 \Lambda \right) \sqrt{r \Lambda \left(12 M+r^3 \Lambda \right)} \left(15 M+r^3 \Lambda \right)}{\left(12 M r+r^4
\Lambda \right)^2}~.
\end{eqnarray}
\end{widetext}
\par\noindent
Here, a number of comments are in order.
First, it is evident from Eq. (\ref{inv3}) that provided $\Lambda>0$ the solution is regular everywhere except at $r=0$.
Second, as no electric charge, i.e., $q=0$, is present in this case, the interpretation
in terms of certain NLED is not of any interest. However, the corresponding source can be taken to be a Type II perfect
fluid whose density and pressure are given, respectively, as
\begin{eqnarray}
\label{rho+pres}
\rho &=& \frac{\Lambda \left(6 M+r^3 \Lambda \right)}{8 \pi r \sqrt{r \Lambda \left(12 M+r^3 \Lambda \right)}} \nonumber \\
p &=&-\frac{\Lambda ^2 \left(18 M^2+18 M r^3 \Lambda +r^6 \Lambda ^2\right)}{8 \pi \left[r \Lambda \left(12 M+r^3 \Lambda \right)\right]^{3/2}}~.
\end{eqnarray}
Third, if $M\ll \Lambda$, then the corresponding spacetime will be a de Sitter one
since the density and the pressure become, respectively,
\begin{eqnarray}
\rho &=& \frac{\Lambda}{8 \pi } \left( 1 + \mathcal{O}\left( (M/ \Lambda)^{2}\right) \right )\nonumber \\
p &=& - \left[1 - \frac{270 M^2}{r^6 \Lambda ^2}+ \mathcal{O}\left( (M/ \Lambda)^{4} \right) \right]\rho ~.
\end{eqnarray}
Moreover, the pressure can also be written as
\begin{equation}
p=- \left[1 - \frac{270 M^2}{\Lambda ^2 r^6}+\mathcal{O}\left(\frac{1}{r^9}\right)\right]\rho
\end{equation}
and, thus, the geometry becomes de Sitter also at spatial infinity.
\par\noindent
Fourth, this solution satisfies both the weak and dominant energy conditions while
the strong energy condition is violated due to the near-de Sitter behavior, as expected.\\
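These statements can be probed numerically with the closed forms of Eq. (\ref{rho+pres}); the values $M=1$ and $\Lambda=0.1$ below are arbitrary illustrative choices.

```python
import math

# Numerical probe of Eq. (rho+pres): at large r the fluid approaches
# the de Sitter equation of state p = -rho with rho -> Lambda/(8 pi);
# the weak (rho >= 0, rho + p >= 0) and dominant (-rho <= p <= rho)
# energy conditions hold, while the strong one (p >= 0) fails.
# M = 1, Lambda = 0.1 are arbitrary illustrative values.

M, Lam = 1.0, 0.1

def rho(r):
    return Lam * (6.0 * M + r ** 3 * Lam) / (
        8.0 * math.pi * r * math.sqrt(r * Lam * (12.0 * M + r ** 3 * Lam)))

def p(r):
    return -Lam ** 2 * (18.0 * M ** 2 + 18.0 * M * r ** 3 * Lam + r ** 6 * Lam ** 2) / (
        8.0 * math.pi * (r * Lam * (12.0 * M + r ** 3 * Lam)) ** 1.5)

for r in (1.0, 5.0, 50.0):
    assert rho(r) >= 0.0 and rho(r) + p(r) >= 0.0   # weak condition
    assert -rho(r) <= p(r) <= rho(r)                # dominant condition
    assert p(r) < 0.0                               # strong condition fails

# de Sitter limit at large r
r = 1e4
assert abs(rho(r) - Lam / (8.0 * math.pi)) < 1e-6
assert abs(p(r) / rho(r) + 1.0) < 1e-6
```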
\subsection{$\mathcal{G}=k=0$ solutions}
\par\noindent
The solution of $\mathcal{G}=\nobreak k = 0$ is given by
\begin{equation}
\label{newB}
f(r)=1\pm\sqrt{1-2 A+B r }~.
\end{equation}
We focus, as before, on the negative-sign case, which gives the BH solutions.
In this case, the mass function reads
\begin{equation}
\label{mass}
\mathcal{M}(r)=\frac{r}{2}\sqrt{1-2 A + B r}
\end{equation}
and the curvature invariants become
\begin{widetext}
\begin{eqnarray}
\label{inv4}
&&R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}=\frac{\left(8+32 A^2-32 A (1+r B)+r B (16+9 r B)\right)^2}{16 r^4 (1-2 A+r B)^3} \nonumber \\
&&R^{\mu\nu}R_{\mu\nu}= \frac{64 (1-2 A)^4-320 r (-1+2 A)^3 B+608 r^2 (1-2 A)^2 B^2+504 r^3 (1-2 A) B^3+153 r^4 B^4}{32 r^4 (1-2 A+r B)^3}\nonumber \\
&&R=\frac{8+32 A^2-16 A (2+3 r B)+3 r B (8+5 r B)}{4 r^2 (1-2 A+r B)^{3/2}}~.
\end{eqnarray}
\end{widetext}
\par\noindent
Depending on the values of the constants, $A$ and $B$, the following cases are considered:
\begin{itemize}
\item $A=1/2$, $B=0$\\
In this case, the geometry corresponds to Minkowski spacetime and
the three energy conditions are trivially satisfied.
\item $A<1/2$ and $B=0$\\
In this case, there is a naked singularity located at $r=0$. In addition, the density and
the pressure are given, respectively, as
\begin{eqnarray}
\rho&=&\frac{\sqrt{1-2 A}}{8 \pi r^2} \nonumber \\
p&=&0~.
\end{eqnarray}
Therefore, the weak, strong, and dominant energy conditions are satisfied provided $A<1/2$.
Furthermore, this case corresponds to that of the gravitational field
of a global monopole \cite{Barriola1989,Wang1999} with a deficit angle given by $\Delta=\sqrt{1-2A}$.
\item $A=1/2$, $B> 0$\\
In this case, there is a horizon at $r=B^{-1}$ surrounding a singularity at $r=0$. In addition,
the density and the pressure are given, respectively, as
\begin{eqnarray}
\rho&=&\frac{3 \sqrt{B r}}{16 \pi r^2} \nonumber \\
p&=&-\frac{3 \sqrt{B r}}{64 \pi r^2}~.
\end{eqnarray}
\par\noindent
Therefore, the weak and dominant, but not the strong, energy conditions are satisfied. It is noteworthy that
this situation corresponds to a particular case of quintessence dark matter (see, for example, Ref. \cite{Ellis2012})
whose equation of state is of the form $p=-\frac{1}{4}\rho$.
\item $A\ne0$, $B\ne0$\\
In this general case, the density and pressure are given, respectively, as
\begin{eqnarray}
\label{general}
\rho&=&\frac{2-4 A+3 B r}{16 \pi r^2 \sqrt{1-2 A+B r}}\nonumber \\
p&=&-\frac{B (4-8 A+3 B r)}{64 \pi r (1-2 A+B r)^{3/2}}.
\end{eqnarray}
\par\noindent
It is easily seen that the most interesting case is when $0<2A<1$ and $B>0$. In this situation,
there is a horizon at $r=2A/B$ surrounding the singularity at $r=0$. Both the weak and dominant energy conditions
are satisfied when $r>0$. On the contrary, the strong energy condition is never fulfilled.
\end{itemize}
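The expressions of Eq. (\ref{general}) follow from $\rho=\mathcal{M}_{r}/(4\pi r^{2})$ and $p=-\mathcal{M}_{rr}/(8\pi r)$ applied to the mass function of Eq. (\ref{mass}); a finite-difference sketch of this consistency check, with arbitrary illustrative values of $A$ and $B$, reads:

```python
import math

# Check Eq. (general) against rho = M'(r)/(4 pi r^2), p = -M''(r)/(8 pi r)
# with the mass function M(r) = (r/2) sqrt(1 - 2A + B r) of Eq. (mass).
# A = 0.25, B = 0.4 are illustrative values with 0 < 2A < 1 and B > 0.

A, B = 0.25, 0.4

def mass(r):
    return 0.5 * r * math.sqrt(1.0 - 2.0 * A + B * r)

def rho_closed(r):
    return (2.0 - 4.0 * A + 3.0 * B * r) / (
        16.0 * math.pi * r ** 2 * math.sqrt(1.0 - 2.0 * A + B * r))

def p_closed(r):
    return -B * (4.0 - 8.0 * A + 3.0 * B * r) / (
        64.0 * math.pi * r * (1.0 - 2.0 * A + B * r) ** 1.5)

h = 1e-4
for r in (0.5, 1.0, 3.0):
    m_p = (mass(r + h) - mass(r - h)) / (2.0 * h)
    m_pp = (mass(r + h) - 2.0 * mass(r) + mass(r - h)) / h ** 2
    assert abs(m_p / (4.0 * math.pi * r ** 2) - rho_closed(r)) < 1e-7
    assert abs(-m_pp / (8.0 * math.pi * r) - p_closed(r)) < 1e-6
```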
\section{Discussion and conclusions}
\par\noindent
In this work we have studied a class of four dimensional spherically symmetric and static geometries with constant
topological Euler density, i.e., $\mathcal{G}$, showing that they can be interpreted as Reissner-Nordstr\"om-de Sitter-like
spacetimes when non-linear electrodynamics is utilized, or as generalized Vaidya solutions, depending on the value
of $\mathcal{G}$.
\par\noindent
In the first case in which the topological Euler density is a non-zero constant, i.e., $\mathcal{G}\ne0$,
we managed to show for the massless case, i.e., $M=0$, that the non-linear electric field
can be regularized everywhere by taking $\Lambda<0$, although
the geometry remains singular at $r=0$. Furthermore, it was shown that when $\Lambda \ge -3/2 q^{2} $,
the singularity lying at the origin, i.e., $r=0$, becomes naked.
For the massive case, i.e., $M\ne 0$, when the charge is switched off and provided that $\Lambda > 0$, the obtained geometry is
regular everywhere except at the origin, i.e., $r=0$, where a singularity lies. In this situation, the source term is interpreted in terms of generalized Vaidya geometries.
Contrary to the previous case, now both the density and pressure of the fluid become singular at $r=0$.
\par\noindent
In the second case in which the topological Euler density is equal to zero, i.e., $\mathcal{G}=0$, the geometries obtained are
that of Minkowski spacetime, of a global monopole, of a quintessence dark matter model, and of a BH with a horizon
surrounding the singularity at $r=0$.
\par\noindent
Finally, it should be stressed that, as stated in \cite{Garcia2012}, the only kinds of matter consistent with
spherically symmetric and static gravitational fields in General Relativity are the cosmological constant vacuum,
the anisotropic fluids, the perfect fluids, and the linear/non-linear Abelian/non-Abelian electromagnetic/Yang-Mills fields.
Interestingly, as it was pointed out in Ref. \cite{Garcia2012}, this latter case also allows the interpretation of
an anisotropic fluid in certain cases, one of which has been studied in the present work.
\section{Acknowledgments}
\par\noindent
P. B. acknowledges support from the Faculty of Science and Vicerrector\'{\i}a de Investigaciones of
Universidad de los Andes, Bogot\'a, Colombia.\\
\section{Appendix}
\par\noindent
Using the dual formalism briefly described in section II, the underlying theory is shown to be given by
\begin{equation}
\label{NLED}
\mathcal{H}(P) =\left( -\frac{3}{2}\tilde{a}q +\frac{\tilde{b}P}{q} \right)\frac{1}{\sqrt{\tilde{a} q^{2}- 2 \tilde{b}P}}
\end{equation}
with $\tilde{a}=\frac{\Lambda^{2}}{q}$ and $\tilde{b}=\frac{2}{3}q^{2}\Lambda$.
Let us note that for small fields ($P\ll \Lambda$)
\begin{equation}
\label{lagrangian}
\mathcal{H}(P) =- \frac{\Lambda}{2} - P +12\sqrt{\Lambda}\, P^{2}+\mathcal{O}(P^3 )~.
\end{equation}
\par\noindent
At this point, it is worth noting that the BI Hamiltonian, when expanded for small fields compared to
the maximal field strength, $b$, reads
\begin{equation}
\mathcal{H}_{BI}=-F+\frac{F^2}{2 b^2}+\mathcal{O}\left(F^{3}\right)~.
\end{equation}
\par\noindent
Therefore, the NLED model described here reduces to a BI-like model (up to $F^2$)
when $ b^{2}= \frac{1}{24\sqrt{\Lambda}}$.
\section{Introduction}
With the recent discovery of the top quark at Fermilab,~\cite{CDF,D0}
top physics has moved from the search phase into the study phase.
The very large mass of the top quark separates it from the other fermions
and presents the possibility that new
physics may be discovered in either its production or its decays.
In this paper I review the prospects for measurements of the top quark
production and decay parameters over the course of the next ten years or so.
\section{Top Yields}
\subsection{Tevatron Accelerator Upgrades}
With the turn on of the Main Injector at the Tevatron
it is predicted that
the average instantaneous luminosity will reach $2\times10^{32}~cm^{-2}s^{-1}$
with a peak luminosity of $5\times10^{32}~cm^{-2}s^{-1}$. Integrated
luminosity delivered during Run II, beginning in 1999, is expected to be 2
fb$^{-1}$. In addition to these luminosity upgrades, the Tevatron will also
be
undergoing an energy upgrade from $\sqrt{s}=$1.8 TeV, to $\sqrt{s}=2.0$ TeV,
which gives an approximate 40\% increase in the $t\bar{t}$ production cross
section.
\subsection{Tevatron Detector Upgrades}
CDF and D0 both have significant detector upgrades planned prior to Run II. I
here use CDF as an example to calculate $t\bar{t}$ yields, but the numbers for
D0
should be similar.
Significant tracking upgrades are planned at CDF including a 3-D silicon
tracker
and a fiber tracker which will allow stand-alone tracking out to
$|\eta|=2.0$. The efficiency for tagging at least one b-jet in a
$t\bar{t}$ event is expected to be 80\% and for double tagging close to 40\%.
Improvements in lepton identification will come from an upgrade to the end plug
calorimeter and through the completion of the muon coverage. The increase in
the acceptance for electrons from $t\bar{t}$ decays is expected to be 36\% and
that for muons 25\%.
Including the 40\% increase in $\sigma_{t\bar{t}}$, an integrated luminosity of 2
fb$^{-1}$ will yield approximately 1400 {\em tagged} W plus $\geq$3 jet events
and about 140 dilepton events from $t\bar{t}$ decays, per experiment.
\subsection{Yields at the LHC}
Top physics at the LHC is expected to be done primarily during the early
running
at relatively low luminosities of $10^{32}-10^{33}~cm^{-2}s^{-1}$. At
$\sqrt{s}$=14 TeV and $10^{32}~cm^{-2}s^{-1}$, this corresponds to about 6000
$t\bar{t}$ pairs produced per day. Folding in typical detection efficiencies,
one can expect of order 100 tagged lepton + jet events on tape per day and
about 20 dilepton events on tape per day. Only searches for the rarest
phenomena in top production or decays will be limited by statistics at the
LHC.
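The quoted daily rate follows from simple luminosity arithmetic; the sketch below assumes a $t\bar{t}$ cross section of roughly 800 pb at $\sqrt{s}=14$ TeV and a 2\% combined efficiency, both of which are assumed round numbers rather than values taken from this paper.

```python
# Back-of-the-envelope t-tbar rate at the LHC.
# sigma_ttbar ~ 800 pb at sqrt(s) = 14 TeV is an assumed round number;
# the text itself only quotes the resulting event rates.

sigma_pb = 800.0      # assumed t-tbar cross section [pb]
lumi = 1.0e32         # instantaneous luminosity [cm^-2 s^-1]
pb_to_cm2 = 1.0e-36   # 1 pb = 1e-36 cm^2

rate_per_s = sigma_pb * pb_to_cm2 * lumi
pairs_per_day = rate_per_s * 86400.0
assert 5000.0 < pairs_per_day < 8000.0   # "about 6000 per day"

# An assumed ~2% combined efficiency for tagged lepton + jets events
# gives of order 100 events on tape per day.
assert 50.0 < 0.02 * pairs_per_day < 200.0
```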
\section{Mass Measurement}
\subsection{Current Status and Prospects at the Tevatron}
The current CDF top mass measurement, from an integrated luminosity of 67
pb$^{-1}$ is $M_{top}=176\pm 8\pm10$ GeV/c$^2$, where the first uncertainty is
statistical and the second is systematic. One observes that already the
statistical and systematic uncertainties are comparable and therefore the
future precision will be determined by systematic effects (for Run II the
statistical uncertainty will be $\sim$1 GeV/c$^2$).
Both CDF and D0 use constrained fitting of b-tagged W+$\geq$ 4 jet events,
using
the four highest ${\rm E}_{\rm T}$ jets, to measure the mass. The dominant systematic
effects are due to the understanding of the jet energy scale in the detector,
the effects of gluon radiation, biases due to b-tagging, and the understanding
of the shape of the underlying background. In evaluating how the systematic
uncertainties are likely to scale with increased integrated luminosity, the
key
question is whether a control dataset exists with which to study the effect in
question. If so, then it is reasonable to assume that the uncertainty will
scale as $1/\sqrt{N}$.
Uncertainties due to jet energy scale and gluon radiation effects are studied
using photon-jet balancing and Z + jet events, thus energy scale uncertainties
can be expected to scale down as $1/\sqrt{N}$.
However, in
addition to energy scale uncertainties, the effects of initial and final state
radiation create combinatoric confusion.
Combinatoric effects can be significantly reduced with increased statistics by
such things as requiring both b-jets to be tagged, which reduces the number of
possible particle assignments to four,
and through the requirement of four and only four high ${\rm E}_{\rm T}$ jets
in the event. It is unclear how uncertainties due to combinatoric effects
will
scale, but it is clear that they will be significantly reduced with a large
increase in dataset size.
Uncertainties due to b-tagging bias are currently understood with control
samples of inclusive lepton events and with the $t\bar{t}$ Monte Carlo samples.
This uncertainty is not large to begin with and should scale statistically.
The uncertainty due to the background shape is currently studied using the
VECBOS Monte Carlo program, which is in turn validated using
W+1,2 jet data. With increased dataset size it should be possible to study
the
background shape using top depleted datasets, selected for instance by {\em
anti}-b tagging, and with Z+$\geq$ 3 jet events.
It is conceivable that background shape uncertainties will
scale statistically, but this is uncertain.
If we assume statistical scaling of the systematic uncertainties from the
current 67 pb$^{-1}$ values, the uncertainty projected at the completion of
Run
II will be 2-3 GeV/c$^2$. Conservatively we can expect $< 4$ GeV/c$^2$.
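This projection amounts to scaling the 67 pb$^{-1}$ uncertainties by $1/\sqrt{N}$, which is the assumption stated above; the arithmetic is sketched below.

```python
import math

# Scale the current top-mass uncertainties (67 pb^-1: +-8 stat, +-10 syst,
# in GeV/c^2) to Run II (2 fb^-1) assuming pure 1/sqrt(N) scaling,
# as discussed in the text.

lumi_now, lumi_run2 = 67.0, 2000.0   # integrated luminosity [pb^-1]
scale = math.sqrt(lumi_now / lumi_run2)

stat = 8.0 * scale
syst = 10.0 * scale
total = math.hypot(stat, syst)

assert stat < 1.6          # "~1 GeV/c^2" statistical uncertainty
assert 2.0 < total < 3.0   # "2-3 GeV/c^2" projected total
```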
\subsection{Prospects at LHC}
The subject of top mass fitting at LHC has been studied by several
authors.~\cite{Unal,Froidevaux,Mekki} In the lepton plus jets channel, the
technique will likely be quite similar to that now employed at the Tevatron.
In the LHC event samples, the statistical uncertainty
in the mass fitting will be negligible, and the systematic effects
will be similar to those now under study at the Tevatron. With the enormous
datasets, however, several new handles for control of the systematics are
available. We list below the major systematic effects and the available
datasets used to control the uncertainties:
\begin{itemize}
\item{Jet Energy Scale:} With large enough statistics the relevant jet energy
scale can be measured directly in $t\bar{t}$ events by renormalization of the
reconstructed mass of the hadronically decaying W boson. Indeed, at CDF
one is already able to reconstruct a W mass peak on the hadronic side in
$t\bar{t}$ events when the W mass constraint is {\em removed} from the fitting
procedure.~\cite{lep-pho} The difference between the energy scale for jets
from the W decay and the energy scale for b-jets will become more important
as the uncertainty in the mass measurement decreases. With sufficiently
large datasets, the b-jet energy scale can be understood using
${\rm Z}\rightarrow b\bar{b}$ and ${\rm WZ}\rightarrow\ell\nu b\bar{b}$ events.
\item{Gluon Radiation:} The LHC will provide copious samples of Z+jet
events which provide an excellent sample for measuring the effect of gluon
radiation on jet reconstruction. Furthermore, it may be possible to study
hard gluon radiation, which can produce additional high ${\rm P}_{\rm T}$ jets, in $t\bar{t}$
events themselves by measuring the population of additional jets.
\item{b-Tagging Bias:} This systematic effect is already small at the Tevatron
and is expected to be negligible at the LHC.
\item{Background Shape:} There is little work on the effect of the uncertainty
in the shape of the underlying background on the top mass fitting at the LHC,
but it should be possible to study this directly using carefully constructed
`top-free' datasets.
\end{itemize}
The LHC literature quotes an overall uncertainty on the top mass measurement
from lepton plus jets events of $\pm$3 GeV/c$^2$. This work was done prior
to the discovery of top at the Tevatron, and given the current experience seems
extremely conservative. More likely the final uncertainty at LHC will be in
the 1-2 GeV/c$^2$ range.
\section{Measurement of $\sigma_{t\bar{t}}$}
The measurement of the production cross section for $t\bar{t}$ pairs is a test of
QCD. A significant deviation of the measured cross section from the predicted
value can signal non-Standard Model production mechanisms such as the decay of
a
heavy object into $t\bar{t}$ pairs. As there is relatively little uncertainty in
the theoretical prediction for the cross section,~\cite{Berger} this can be a
rather sensitive testing ground. The Tevatron and LHC measurements of
$\sigma_{t\bar{t}}$ are complementary. Although the LHC at higher $\sqrt{s}$
has greater reach, the dominant glue-glue nature of its
collisions makes it insensitive to a spin one color singlet resonance, while
there is no such restriction at the Tevatron where high $\sqrt{\hat{s}}$
collisions are dominantly $q\bar{q}$.
The uncertainty in the production cross section at the Tevatron is currently
of order 30\% and is dominated by statistics. With the statistics of Run II,
one can expect a 10\% measurement. At this point uncertainties due to
acceptance and integrated luminosity become comparable to the statistical
uncertainty and it is uncertain how much more precise the measurement can
become. In any case, the ultimate precision in the luminosity is 3.5\%, which
is the accuracy to which the effective cross section of the luminosity monitors
is known, so one can expect that the uncertainty in the cross section
measurement will plateau in the range 5-10\% . The final uncertainty in the
LHC measurement is likely to be in the same range.
\section{Single Top Production}
Single top production can occur through both the W-gluon fusion process, with a
t-channel W \cite{Yuan} or through an s-channel W$^*$
decay.~\cite{Willenbrock}
In either case the final state contains one top and one bottom quark and, in
lowest order, nothing else in the case of s-channel W$^*$ decay, while the
W-gluon fusion process contains an additional light quark jet in the final
state. The cross section for single top production is proportional to the
square of the CKM element V$_{tb}$ and is therefore of great interest.
\subsection{W-gluon Fusion}
The signal for single top production is extracted via the Wb invariant mass
distribution in b-tagged events. There is a significant
background from $q\bar{q}\rightarrow\ Wb\bar{b}$ and the signal to background at
the Tevatron is expected to be only 1:2. With 2 fb$^{-1}$ of Run II data
however, the Wb mass peak can be extracted above background and a cross section
measurement with a statistical uncertainty of better than 20\% is
expected.~\cite{tev2000} Extraction of V$_{tb}$ from this measured cross
section depends on the knowledge of the gluon distribution function and
therefore an accuracy of no better than 30\% is expected. Comparable S/B and
final uncertainty on V$_{tb}$ can be expected at LHC.
\subsection{W$^*\rightarrow$tb}
The single top signal in this decay mode is also extracted via the Wb invariant
mass distribution, as above. However, the backgrounds from the W-gluon fusion
process can be reduced by vetoing events with
additional jets and from W$b\bar{b}$ by making an invariant mass cut on
the $b\bar{b}$ pair of $>$110
GeV/c$^2$ (this cut is more efficient for the W$^*$ process
than for W-gluon fusion).~\cite{Willenbrock}
This process has a significant advantage over the W-gluon fusion process
for measuring V$_{tb}$ because it is not sensitive to the gluon distribution
function, and the uncertainties in the quark distributions can be controlled
by normalizing to $q\bar{q}\rightarrow\ell\nu$ at the same $\sqrt{\hat{s}}$.
With
2 fb$^{-1}$ at the Tevatron, a 12\% measurement of V$_{tb}$ is expected.
At LHC the W$^*$ signal is swamped by W-gluon fusion and $t\bar{t}$ production,
both of which grow faster with $\sqrt{s}$ than the $q\bar{q}$ initiated W$^*$
process. It
is unlikely therefore, that the measurement of V$_{tb}$ with this technique
at LHC will compete with the measurement at the Tevatron.
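Since the single top cross section scales as $|V_{tb}|^{2}$, the fractional uncertainty on V$_{tb}$ is half that on the measured cross section; the 24\% input in the sketch below is an assumed illustrative value chosen to reproduce the quoted 12\%.

```python
# Error propagation for sigma ~ |V_tb|^2:
#   dV/V = (1/2) * dsigma/sigma.
# The 24% cross-section uncertainty below is an assumed illustrative
# input chosen to match the quoted 12% on V_tb.

dsigma_over_sigma = 0.24
dV_over_V = 0.5 * dsigma_over_sigma
assert abs(dV_over_V - 0.12) < 1e-12
```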
\section{Measurement of V$_{tb}$ from Top Decays}
In addition to the single top measurements discussed above, there is also
sensitivity to V$_{tb}$ by measuring the branching fraction $t\rightarrow
Wb$/$t\rightarrow Wq$. This branching fraction is currently measured at CDF,
via the ratio of $t\bar{t}$ events with 0,1 or 2 b-tagged jets, to about 30\%.
With 2 fb$^{-1}$ a branching fraction uncertainty of 10\% is expected.
Converting the branching fraction measurement to a measurement of V$_{tb}$
requires assumptions about the magnitudes of V$_{td}$ and V$_{ts}$. Since
these latter two CKM elements are quite small in the Standard Model, the
branching fraction measurement is not a terribly sensitive way to measure
V$_{tb}$, assuming V$_{tb}$ is close to 1. A 10\% measurement of the
branching fraction corresponds to a 1$\sigma$ lower limit on V$_{tb}$ of
0.26. At LHC, assuming a branching fraction uncertainty of 1\%, the lower
limit on V$_{tb}$ is only 0.4.
\section{$t\rightarrow H^+b$}
Supersymmetric models include charged Higgs bosons which couple to top quarks
with a strength which depends on the Higgs mass and the ratio of vacuum
expectation values, $\tan\beta$. The Higgs subsequently decays to $\tau\nu$ or
$cs$. The branching fraction dependence of the top and Higgs decays on
$\tan\beta$ is shown in Figure 1.
The ratio, R$_{\ell\ell}$, of the rate of $t\bar{t}$ pairs decaying into
dilepton
final states to those decaying into single lepton final states is, in
principle,
sensitive to the presence of a charged Higgs component of the top decays.
However, the contributions to both the dilepton and single lepton final states
from $H\rightarrow\tau\nu,~cs$ must first be understood.
Extrapolating from the current Tevatron experience, R$_{\ell\ell}$ will be
measured to $\sim$10\% with 2 fb$^{-1}$, where the uncertainty is dominated by
the statistics of the dilepton sample. At LHC, the measurement is dominated
by uncertainties in the backgrounds, which can be controlled somewhat via
b-tagging in both the single lepton and dilepton channels.~\cite{Unal} A
measurement of the ratio to less than 5\% is a reasonable expectation.
In either case, Tevatron
or LHC, there is sensitivity to only a limited $\tan\beta$ range for a
given Higgs mass (see Fig.1).
A more direct method for searching for charged Higgs decays is to look for an
excess of taus in top decays. An LHC study~\cite{Felcini} has shown that this
technique can be sensitive to a charged Higgs over most of the $\tan\beta$
range
if M$_{Higgs}$ is near 150 GeV/c$^2$.
\section{W-t-b Vertex}
The most general form of the W-t-b vertex has 4 form factors: $F_L^1,F_R^1,
F_L^2,~{\rm and}~F_R^2$.~\cite{Kane} In the Standard Model, only $F_L^1$, the
V-A form factor, is
non-zero at lowest order. Experimental sensitivity to the values of these form
factors can be achieved by measuring the polarization of the W bosons in top
decays. In the Standard Model, the ratio of the number of longitudinally
polarized to transversely polarized W bosons depends on M$_{top}^2$ and gives
about 70\% longitudinally polarized
Ws for M$_{top}$=175 GeV/c$^2$. The fraction of longitudinal W bosons can
be measured using the angular distribution of leptons from W decays in top
events. Non-zero values of $F^2_{L,R}$ would produce
a departure from the predicted value. A non-zero $F^1_R$ gives a
V+A component of the coupling but does not affect the fraction of longitudinal
W bosons. However, it would produce a right-handed component in the transverse
Ws and therefore there is also sensitivity to $F^1_R$ in the lepton angular
distributions.
Studies at the Tevatron have shown that a 2 fb$^{-1}$ sample would give a
statistical uncertainty of
3\% on the fraction of longitudinal, and a
1\% statistical uncertainty on the fraction of right handed,
W bosons in top decays.
At LHC the statistical uncertainties will be a factor of 3-10 better.
In both cases systematic effects, which remain to be studied, are likely
to dominate the precision of the measurement.
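The quoted $\sim$70\% follows from the tree-level expression for the longitudinal fraction, $f_{0}=x/(1+x)$ with $x=M_{top}^{2}/2M_{W}^{2}$ (neglecting the b-quark mass); the sketch below uses an assumed $M_{W}=80.4$ GeV/c$^2$.

```python
# Tree-level fraction of longitudinally polarized W bosons in top decay
# (m_b neglected): f0 = x / (1 + x) with x = M_top^2 / (2 M_W^2).
# M_W = 80.4 GeV/c^2 is an assumed input value.

M_top, M_W = 175.0, 80.4
x = M_top ** 2 / (2.0 * M_W ** 2)
f_long = x / (1.0 + x)
assert abs(f_long - 0.70) < 0.01   # "about 70% longitudinally polarized"
```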
\section{Rare Decays}
Flavor changing neutral current decays such as $t\rightarrow\gamma c$ and
$t\rightarrow Zc$ are unobservably small in the Standard Model at either the
Tevatron or LHC. Any observation of such decays at either machine would be a
breakthrough. With 2 fb$^{-1}$ at the Tevatron, branching fraction limits for
either of these decays will be at the per cent level, whereas the Standard
Model
predictions are eight orders of magnitude smaller. The Standard Model
prediction
for the decay $t\rightarrow Ws$ is of order $10^{-3}$, which is about an order
of magnitude smaller than the sensitivity at the Tevatron with 2 fb$^{-1}$. It
is possible that LHC will make the first observation of this rare decay.
\section{Conclusions}
By the turn of the century, Run II at the Tevatron will have produced 2
fb$^{-1}$ of data for both CDF and D0. Each experiment will have measurements
of the top mass to better than 4 GeV/c$^2$, the production cross section to
10\%, V$_{tb}$ to 12\% as well as searches for rare decay modes of the top. The
charged current couplings of the top will have been probed to a few per cent
via
W polarization measurements. If, as many hope and some expect, the top quark
turns out to be a window onto physics beyond the Standard Model, these
measurements at the Tevatron may very well yield the first glimpse of that new
physics.
R\&D efforts at Fermilab are now under way to evaluate the
possibility of even higher luminosities. Running at 10$^{33}$ cm$^{-2}$s$^{-1}$
or a constant $5\times 10^{32}$ cm$^{-2}$s$^{-1}$ is considered possible,
and might yield as much as 10 fb$^{-1}$ per year.
In a year of LHC running at $10^{32}-10^{33}$cm$^{-2}$s$^{-1}$, $t\bar{t}$
samples
will be 1-2 orders of magnitude larger than those with 2 fb$^{-1}$ at the
Tevatron. With such samples, many LHC measurements will be limited by
systematic effects which are difficult to quantify at this point.
Nevertheless,
improvements by a factor of 2-3 in the top mass uncertainty, and at least that
much in sensitivity to rare decays and non-standard charged current couplings
seem reasonable expectations. While Tevatron running should produce the best
measure of V$_{tb}$, the first observation of $t\rightarrow Ws$ may come from
LHC. If new physics does show up in top production or decays, then LHC is in
an
excellent position to either discover it, or study it if it should be found
first at Fermilab.
\section{Acknowledgements}
This paper draws on the work of many other people, hopefully covered adequately
in the references. In particular, I would like to thank D. Amidei,
D. Froidevaux, and S. Willenbrock for helpful discussions during
its preparation.
\newpage
\section{Introduction}
\label{sec:intro}
The calculation of meson properties in the Dyson-Schwinger-Bethe-Salpeter-equation (DSBSE) approach
has enriched the theoretical hadron-physics landscape for many years. In fact, it was realized
soon after the conception of the quark picture of hadrons that a relativistic dynamical setup was
needed for a more in-depth description of the ever-growing sample of hadron-physics data. Moreover,
the phenomenological success of the quark-model hypothesis clearly indicated the convincing potential
of a covariant description of hadrons rooted in quantum chromodynamics (QCD), which is now widely
accepted to be the theory describing the strong interaction. The modern tools that displayed the
capability to achieve this goal are lattice-regularized QCD on one hand and continuum quantum field
theoretical methods on the other hand, one of which is the DSBSE approach employed here.
In a phenomenological DSBSE setup one is immediately confronted with the complexity of the infinite,
coupled system of QCD's Dyson-Schwinger equations (DSEs) \cite{Fischer:2006ub}. Thus, the straightforward
rainbow-ladder (RL) truncation of the coupled quark DSE and meson Bethe-Salpeter-equation (BSE)
system quickly rose to popularity
\cite{Maris:2003vk} after its usefulness and fidelity to QCD had been demonstrated. In particular,
attention was drawn to relevant Ward-Takahashi identities (WTIs) such as the axial-vector WTI (AVWTI),
see, e.\,g., \cite{Munczek:1994zz}, and its satisfaction in RL truncation
(together with the vector WTI \cite{Maskawa:1974vs,Aoki:1990eq,Kugo:1992pr,Bando:1993qy,
Munczek:1994zz,Maris:1997hd,Maris:1999bh,Maris:2000sk}),
which leads to a remarkably faithful description of the pion and its properties \cite{Maris:1997tm}.
More precisely, a pion computed from an RL-truncated DSBSE model calculation follows the pattern
required by the Goldstone theorem in that it is massless in the chiral limit. For small finite current-quark
masses, it follows the well-known Gell-Mann--Oakes--Renner relation; in fact,
it was even shown that a generalized version of this relation exists that is valid for all pseudoscalar mesons
regardless of their mass and level of excitation \cite{Maris:1997hd,Holl:2004fr}.
On top of the archetypical treatment of the pion in this approach, it is not surprising that
most of the phenomenological studies that followed and which used a sophisticated model interaction
in RL truncation focused on the light-quark sector. In addition, reaching larger current-quark masses
and numerically computing bound states of such quarks in a Landau-gauge calculation (for insight into
the situation in Coulomb gauge, see, e.\,g.,
\cite{Alkofer:2005ug,Popovici:2010mb,Popovici:2011yz,Popovici:2011wx,LlanesEstrada:2010bs,Cotanch:2010bq,Rocha:2009xq})
in Euclidean space, as employed in this approach, poses numerical challenges. When these were finally
being overcome recently, see, e.\,g., \cite{Krassnigg:2008gd,Blank:2010bp}, the model assumptions remained
anchored to the light-quark domain nonetheless. While this still gave reasonable results for pseudoscalar
and vector mesons, states identified with either radial or orbital angular momentum excitations were not well described
\cite{Bhagwat:2006py,Bhagwat:2007rj,Fischer:2009jm,Krassnigg:2009zh,Krassnigg:2010mh,Blank:2010sn,Qin:2011xq,Rojas:2014aka,Fischer:2014xha}.
One could interpret this on a general footing and conclude that the RL truncation
is not sufficient to provide a generally satisfying meson phenomenology, and that achieving such
satisfaction requires corrections to this truncation or, simply speaking, a quark-gluon
vertex more complicated than the bare one. We challenge this line of thinking and attempt
a counterexample.
More precisely, we start our version of a phenomenological QCD-model approach via the DSBSE method
in the heavy-quark domain. Generalizing on previous accomplishments \cite{Blank:2011ha}, we allow for more
freedom in the effective interaction and test our assumption by comparing our results to the available meson
data in the bottomonium system. Herein we present a first look at the possibilities and limitations
of the present setup, as well as steps to be taken next to complete this study.
\section{Bottomonium in the DSBSE approach}
\label{sec:bottomonium}
The study of the bottomonium system in the DSBSE approach has been a part of several investigations
of meson properties. This section aims to put them in perspective with respect to each other
and to the comprehensive study following up on the present excerpt \cite{Popovici:2014mt}.
In the context of the present setup it is always instructive
to note that first simplified attempts at meson spectroscopy including bottomonium were already undertaken
several decades ago \cite{Munczek:1983dx} and later, under certain approximations to the quark propagators
that violated the AVWTI, in \cite{Munczek:1991jb,Jain:1993qh}, where radial excitations were also studied.
This line of work was continued by investigating corrections beyond RL truncation in a systematic
truncation scheme using a simplified model interaction \cite{Bhagwat:2004hn,Gomez-Rocha:2014vsa}.
Separable forms of the BSE kernel were employed mainly to make use of concepts along the lines of heavy-quark
effective theory and study heavy-light mesons \cite{Ivanov:1998ms}.
In later studies with a full numerical account of the quark propagators and thus an also numerical
satisfaction of the AVWTI, heavy quarks were difficult to treat with the methods available at the time, and so
at first efforts focused on systems involving only light or at most charmed quarks \cite{Krassnigg:2004if}.
Bottomonium in this context first appeared
only a couple of years ago \cite{Maris:2006ea} and soon thereafter several investigations involved bottomonium as
an important part for the study of, e.\,g., effects of the dressing of heavy quarks or various parts
of the effective interaction
\cite{Krassnigg:2009zh,Souchlas:2010st,Souchlas:2010zz,Nguyen:2010yh,Nguyen:2010yj,Nguyen:2009if}.
The most recent development regarding bottomonium in this context is given in \cite{Blank:2011ha} where
bottomonium ground-state masses and decay constants were studied to test the straightforward
applicability of a standard effective interaction to this system simply by adjusting one free model
parameter and without subsequent fine-tuning of any of the model parameters, which proved to be
successful for all ground states known experimentally.
\section{Interaction model and phenomenological setup}
\label{sec:model}
In ladder truncation the homogeneous BSE for
quark-antiquark bound states reads:
\begin{eqnarray}
\Gamma(p;P)&=&-C_F\int^\Lambda_q\!\!\!\!\mathcal{G}((p-q)^2)\; D_{\mu\nu}^f(p-q) \;
\gamma_\mu \; S(q_+) \Gamma(q;P) S(q_-)\;\gamma_\nu \;,\label{eq:bse}
\end{eqnarray}
where $\Gamma$ is the Bethe-Salpeter amplitude (BSA), $C_F=4/3$ the Casimir color factor,
$D_{\mu\nu}^f$ is the free gluon propagator, $\gamma$ is
the Dirac part of the bare quark-gluon vertex, and
$\int^\Lambda_q:=\int^\Lambda d^4q/(2\pi)^4$ represents a
translationally invariant regularization of the integral, with the
regularization scale $\Lambda$ \cite{Maris:1997tm}.
$q$ and $P$ are the
relative and total momenta of the $q\bar{q}$ state, respectively, and
the semicolon separates them as four-vector arguments of the BSA. The (anti)quark
momenta are $q_+ = q+\eta P$ and $q_- = q- (1-\eta) P$, where $\eta
\in [0,1]$ is referred to as the momentum partitioning parameter.
We use the arbitrariness of the value of $\eta$ in our covariant
framework to set $\eta=1/2$.
The renormalized dressed quark propagator $S(p)$ is obtained from
the corresponding rainbow-truncated quark DSE
\begin{eqnarray}\label{eq:dse}
S(p)^{-1} &=& (i\gamma\cdot p + m_q)+ \Sigma(p)\,,\\\label{eq:selfenergy}
\Sigma(p)&=& C_F\int^\Lambda_q\!\!\!\! \mathcal{G}((p-q)^2) \; D_{\mu\nu}^f(p-q)
\;\gamma_\mu \;S(q)\; \gamma_\nu \,.
\end{eqnarray}
$\Sigma(p)$ denotes the quark self-energy, and $m_q$ is the current-quark mass; details
of the renormalization of the quark propagator can be found in
\cite{Maris:1997tm,Maris:1999nt}.
The function $\mathcal{G}$ apparent in both Eqs.~(\ref{eq:bse}) and (\ref{eq:selfenergy})
is the effective form of the quark-gluon interaction that accompanies the RL-truncated model setup.
With $s:=(p-q)^2$ we employ the well-established parameterization \cite{Maris:1999nt}
\begin{equation}
\label{eq:interaction}
\frac{{\cal G}(s)}{s} =
\frac{4\pi^2 D}{\omega^6} s\;\mathrm{e}^{-s/\omega^2}
+\frac{4\pi^2\,\gamma_m\;\mathcal{F}(s)}{\tfrac{1}{2}\ln
[\tau\!+\!(1\!+\!s/\Lambda_\mathrm{QCD}^2)^2]}.
\end{equation}
This form has a perturbative limit consistent with the one-loop
renormalization group behavior of QCD. While the far infrared is not expected to
have a significant impact for our purposes \cite{Blank:2010pa}, the low- and intermediate-momentum ranges
of the interaction include some model enhancement to provide the flexibility needed in a phenomenological
approach, e.\,g., to accommodate the correct amount of dynamical chiral symmetry breaking.
Furthermore, ${\cal F}(s)= [1 - \exp(-s/[4
m_t^2])]/s$, $m_t=0.5$~GeV, $\tau={\rm e}^2-1$, $N_f=4$,
$\Lambda_\mathrm{QCD}^{N_f=4}= 0.234\,{\rm GeV}$, and
$\gamma_m=12/(33-2N_f)$ \cite{Maris:1999nt}.
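Eq.~(\ref{eq:interaction}) can be evaluated directly with the constants just listed. The sketch below (function and argument names are ours) implements ${\cal G}(s)/s$ for $s>0$, with $D$ and $\omega$ defaulting to the values found later in this work:

```python
import math

def mt_coupling(s, D=1.3, omega=0.7):
    """Effective interaction G(s)/s of Eq. (3) for s > 0 (in GeV^2).
    Fixed constants as in the text: m_t = 0.5 GeV, tau = e^2 - 1,
    Lambda_QCD = 0.234 GeV, gamma_m = 12/(33 - 2*N_f) with N_f = 4.
    D (GeV^2) and omega (GeV) are the free model parameters."""
    m_t, tau, lam = 0.5, math.e**2 - 1.0, 0.234
    gamma_m = 12.0 / (33.0 - 2.0 * 4.0)
    ir = 4.0 * math.pi**2 * D / omega**6 * s * math.exp(-s / omega**2)
    F = (1.0 - math.exp(-s / (4.0 * m_t**2))) / s
    uv = 4.0 * math.pi**2 * gamma_m * F \
        / (0.5 * math.log(tau + (1.0 + s / lam**2)**2))
    return ir + uv

# infrared enhancement dominates at small s; the perturbative tail at large s
print(mt_coupling(0.1), mt_coupling(100.0))
```

The first (Gaussian) term supplies the phenomenological infrared strength controlled by $D$ and $\omega$; the second term reproduces the one-loop ultraviolet behavior.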
This model interaction has been used over the past years to successfully describe hadron properties,
most prominently, though not exclusively, those of pseudoscalar and vector mesons, such as electromagnetic
properties \cite{Maris:1999bh,Maris:2005tt,Holl:2005vu,Bhagwat:2006pu,Eichmann:2007nn,Eichmann:2011vu},
strong decay widths \cite{Jarecke:2002xd,Mader:2011zf}, valence-quark distributions
\cite{Nguyen:2011jy,Cloet:2013jya}, as well as properties at finite
temperature \cite{Maris:2000ig,Blank:2010bz}.
\section{Approach}
\label{sec:approach}
In this section we outline our strategy for obtaining a DSBSE result for
the bottomonium spectrum that is most satisfactory in the current setup.
Notably, two differences compared to the previous study in \cite{Blank:2011ha}
appear: First, we attempt to describe the spectrum of not only ground
but also radially excited states. Second, we allow additional variation
of the model parameters in Eq.~(\ref{eq:interaction}). Keeping this in mind,
we thus test our model effective interaction within the range specified
below with regard to the following challenges:
\begin{itemize}
\item reproduce the splittings of bottomonium ground-state masses for
the states available experimentally for $J=0,1,2$ with the same quality as already achieved
in \cite{Blank:2011ha}
\item in addition, reproduce the splitting of the ground vs.~first radially excited state in
each channel experimentally available
\item alternatively, reproduce the splittings of all first radially excited states with
respect to each other, where experimentally available
\end{itemize}
It is important to note at this point that this is the first study with the declared goal
to successfully describe both ground and radially excited meson states in an RL-truncated
DSBSE approach. Since it is not clear a priori that such an endeavor can be successful
even for the promising realm of heavy-quark bound states, several steps are needed to test
model assumptions and restrictions without losing track of where certain changes come from.
The original setup of Maris and Tandy \cite{Maris:1999nt} for their interaction was
anchored in the light-quark domain and model parameters were adjusted to relevant quantities,
namely the pion mass and decay constant as well as the chiral condensate. The relevant
term in the effective interaction Eq.~(\ref{eq:interaction}) is the first one, while the
second determines the behavior of calculated results in or towards the perturbative domain.
More precisely, the current-quark mass $m_q$ as well as the parameters $\omega$ and $D$
were adjusted such that light pseudoscalar and vector meson masses and decay constants were well
described by, as it turned out, fixing the product $D\times\omega$ to $0.372$ GeV${}^3$ and varying
$\omega$ in the range $[0.3,0.5]$ GeV. In this way, the choice of $D\times\omega$ and $m_q$ effectively
defined a one-parameter model. While the calculated pseudoscalar and vector ground-state observables were
independent of $\omega$, it was shown later that radial- and orbital-excitation properties strongly
depend on $\omega$, even with a fixed value for $D\times\omega$, see \cite{Krassnigg:2009zh} and
references therein. This is not surprising, since $\omega$ corresponds to an inverse range of
the intermediate-momentum (i.\,e., the long-range) part of the effective interaction and one
would expect such a parameter to have a noticeable effect on excited but not ground states \cite{Holl:2004un}.
In \cite{Blank:2011ha} the original value for the product $D\times\omega=0.372$ GeV${}^3$ was kept and $\omega$ was fitted
to $\omega=0.61$ GeV to achieve excellent agreement with the experimentally known bottomonium ground states.
An equally successful description of radial excitations in addition to the ground states is not possible
without allowing both $\omega$ \emph{and} $D$ to vary \emph{independently}, which is what we have done to
arrive at the results presented here.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{split_chi.pdf}
\caption{\emph{Left panel}: $\chi^2$ for a combination of splittings calculated in the
$[\omega, D]$ plane for the bottomonium system and compared to experimental data \cite{Beringer:1900zz}.
\emph{Right panel}: contour plot for the same data with interpolated values in between our
grid points (red triangles).}
\label{fig:splittings}
\end{figure*}
More precisely, as a first step we calculated the mass splittings among ground and excited states in the bottomonium
system for a number of values on an $\omega$-$D$ grid at a fixed value of the bottom current-quark mass and
plotted the corresponding $\chi^2$ resulting
from our comparison with the available experimental numbers for those splittings, as shown in the left
panel of Fig.~\ref{fig:splittings}. The right panel of this figure illustrates the behavior of a spline of our
grid data.
The second step is then to use the optimal value combination of $\omega$, $D$ on our grid and adjust the
bottom current-quark mass $m_b$ such that the experimentally known ground-state masses in the bottomonium
system are best reproduced in a least-squares fitting procedure. More concretely, the masses used for this fit
have the quantum numbers $J^{PC}=0^{-+}$, $0^{++}$, $1^{--}$, $1^{++}$, $1^{+-}$, and
$2^{++}$. The masses of the remaining states are thus predictions of the model.
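Schematically, the two-step procedure just described can be sketched as follows; in this toy illustration (our own) the model responses are simple stand-in functions rather than actual BSE solutions, but the grid scan and the subsequent least-squares fit proceed exactly as in the text:

```python
import numpy as np

# Toy stand-ins for the model response -- NOT the actual BSE solutions,
# just simple functions so the two-step procedure can be run end to end.
def toy_splitting(omega, D):
    return 0.6 * omega + 0.2 * D          # a "mass splitting" in GeV

def toy_ground_state(m_b):
    return 2.0 * m_b + 2.3                # a "ground-state mass" in GeV

# Step 1: chi^2 scan of splittings over an (omega, D) grid.
exp_split, sigma = 0.68, 0.01             # illustrative "experimental" input
omegas = np.linspace(0.5, 0.9, 5)
Ds = np.linspace(0.9, 1.7, 5)
chi2 = np.array([[((toy_splitting(w, d) - exp_split) / sigma) ** 2
                  for d in Ds] for w in omegas])
iw, iD = np.unravel_index(np.argmin(chi2), chi2.shape)
best_omega, best_D = omegas[iw], Ds[iD]

# Step 2: with (omega, D) fixed, least-squares fit of m_b to the rounded
# experimental ground-state masses of the channels listed above.
exp_masses = np.array([9.40, 9.46, 9.86, 9.89, 9.90, 9.91])
m_grid = np.linspace(3.5, 3.8, 301)
sse = [np.sum((toy_ground_state(m) - exp_masses) ** 2) for m in m_grid]
best_mb = m_grid[int(np.argmin(sse))]
```

In the actual calculation each grid point requires solving the quark DSE and the BSE, so the scan is the numerically expensive part.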
\section{Results and Discussion}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth,clip=true]{plot-bottomonium.pdf}
\caption{Bottomonium spectrum: calculated (symbols) versus
experimental (lines) data. Error bars are contained inside the symbols for each set of data.}
\label{fig:bottomonium}
\end{figure*}
The set of various splittings among bottomonium ground and excited states
was computed for a bottom-quark mass of $m_b=3.71$ GeV (given at a renormalization point $\mu=19$ GeV)
and is best reproduced on our grid by the combination $\omega=0.7$ GeV and $D=1.3$ GeV${}^2$.
The subsequent least-squares fit of the ground-state masses as described above yields
$m_b=3.635$ GeV; our corresponding results are depicted in Fig.~\ref{fig:bottomonium}, where we also
provide the experimental data. The agreement is surprisingly good with the exception of two ``extra'' states
that appear as calculated excitations in the $J^{PC}=1^{++}$ and $J^{PC}=1^{+-}$ channels.
Clearly, the nature of these states needs further investigation, which is currently under way.
Attempts to include these states as the first radial excitations in their respective channels were unsuccessful,
which may also hint at the fact that further degrees of freedom are needed in the effective interaction
to provide an overall satisfactory description of the bottomonium system with all its excitations.
We note that, since both the numerical and the experimental uncertainties are smaller than the respective
symbol sizes, we have not plotted error bars in Fig.~\ref{fig:bottomonium}.
\section{Conclusions}
Building on the success of a previous study of the bottomonium ground states in an RL-truncated DSBSE approach, we have provided
the first successful combined description of ground and radially excited states for the bottomonium system by
allowing a wider and more independent variation of the parameters in the effective model interaction.
This is the immediate consequence of the idea to anchor the effective quark-gluon interaction in the heavy-quark
domain; in addition, our ultimate goal is to provide a comprehensive description of meson spectra along the whole range
of quark masses from bottomonium down to the chiral limit, possibly allowing the effective interaction to depend on the current
quark mass (see, e.\,g., \cite{Williams:2014iea} for recent insight regarding this topic). This might mimic effects beyond RL truncation such that a successful description of both ground-
and excited-state meson properties can be maintained also in the charmonium system and, ultimately, the light-quark sector.
\begin{acknowledgements}
We acknowledge helpful conversations with M.\,Blank. This work was
supported by the Austrian Science Fund (FWF) under project no.\
P25121-N27.
\end{acknowledgements}
\section*{Acknowledgments}
The problem of estimating change in soil microbial diversity associated with TPH was motivated by discussions with the Terrestrial and Nearshore Ecosystems research team at the Australian Antarctic Division (AAD). The case study data used in this paper was provided by the AAD, with particular thanks to Tristrom Winsley. We acknowledge the generous technical assistance of researchers at the AAD, in particular Ben Raymond, Catherine King, Tristrom Winsley and Ian Snape. We also wish to thank Nicolas Chopin and Annalisa Cerquetti for helpful discussions, as well as the Editor, Karen Kafadar, an Associate Editor and three referees for their constructive feedback. Part of the material presented here is contained in the PhD thesis \citet{arbel2013thesis} defended at the University of Paris-Dauphine in September 2013.
\section{Introduction}
This paper was motivated by the ecotoxicological problem of studying communities, or groups of species, observed as counts
of species at a set of sites, where the composition and distribution of
species may differ among sites, and for which the sites are indexed by a
contaminant. More specifically, the soil microbial data set we are focusing on in this paper was acquired at different sites of a fuel spill region in Antarctica. Although there is now much greater awareness of human impacts on the Antarctic, substantial challenges remain. One of these is the containment of historic buried station waste, chemical dumps and fuel spills. These wastes do not break down in such extreme environments and their spread is exacerbated by melting ice in summer. In order to develop effective containment strategies, it is important to understand the impact of these incursions on the natural environment. The data set considered here consists of soil microbial counts of operational taxonomic units, OTUs, as well as a site contaminant level measured by the total petroleum
hydrocarbon, TPH. Thus the aim is to model the probabilities of occurrence associated with the
species at the different sites and to be able to interpret the impact of
the contaminant on the community as a whole or on a particular species.
This specific case study gives rise to a more general problem
that can be described as modeling the probability of membership of
subgroups of a community based on partially replicated data obtained
by observing different subsets of the subgroups at different levels of a
covariate. The problem can also be considered as the analysis of
compositional data in which the data points represent so called compositions, or
proportions, that sum to one. A typical example is the chemical
composition of rock specimens in the form of percentages of a
pre-specified number of elements \citep[see \eg][]{aitchison1982statistical,barrientos2013bayesian}. More
generally, the problem is endemic in many fields such as
biology, physics, chemistry and medicine.
Despite this, the solution to that problem remains a challenge. Common approaches are
typically based on parametric assumptions and require pre-specification
of the number of subgroups (e.g., species) in the community. In this
paper, we suggest an alternative that overcomes this drawback. The method is described in terms
of species for reasons of intuitiveness; nevertheless, the approach is generally
applicable far beyond the species-sampling framework.
We propose a Bayesian nonparametric approach to both the
specific and general problems described above, using a covariate dependent
random probability measure as a prior distribution. Dependent extensions
of random probability measures, with respect to a covariate
such as time or position, have been extensively studied recently under
three broad constructions. First, a class of solutions is based on the
Chinese Restaurant process; see for instance \citet{caron2006bayesian,johnson2013bayesian}. These
are oriented towards on-line data collection and fast implementation.
Second, some approaches use completely random measures; see for example,
\citet{lijoi2013bayesian,lijoi2013dependent}. An appealing feature of this approach is its
analytical tractability, which allows for a more elaborate study of the
distributional properties of the measures. Third, many strategies make
use of the stick-breaking representation, based on the line of research pioneered by
\citet{maceachern1999dependent,maceachern2000dependent}, which defines dependent Dirichlet processes; its many variants include \citet{griffin2006order,griffin2011stick,dunson2007bayesian, dunson2008kernel, chung2009local}, among others. The success of the stick-breaking constructions stems from their attractiveness from a computational point of view as well as their great flexibility in terms of full support, which we prove for our model in Section~\ref{sec:full_support} of \supp. This is the approach that we follow here.
We define a dependent version of the \GEMname distribution (hereafter denoted \GEM), which is the distribution of the weights in a Dirichlet process, for modeling presence probabilities. Dependence is introduced via the covariance function of a Gaussian process, which allows dependent Beta random variables to be defined by inverse cumulative distribution function transforms. The resulting model is not confined
to the estimation of diversity indices, but could also utilize the
predictive structure yielded by specific discrete nonparametric priors
to address issues such as the estimation of the number of new species
(subgroups) to be recorded from further sampling, the probability of
observing a new species at the $(n+m+1)$-th draw conditional on the
first $n$ observations, or of observing rare species, where by rare species one refers to species whose frequency is below a certain threshold \citep[see \eg][]{lijoi2007bayesian,favaro2012new}.
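To make the construction concrete, here is a minimal stick-breaking sketch (our own illustrative code; the squared-exponential kernel, its parameters, and the truncation level are assumptions, not the paper's specification). At each stick index, a Gaussian-process draw across sites is mapped through the standard normal CDF and the inverse CDF of a Beta$(1,M)$ distribution, so the weights are marginally GEM-distributed at every site while varying smoothly with the covariate:

```python
import math
import numpy as np

def dependent_gem(x, M=1.0, ell=1.0, n_sticks=50, seed=0):
    """Illustrative covariate-dependent GEM (stick-breaking) weights.
    At each covariate value in x the stick proportions are marginally
    Beta(1, M) -- hence GEM(M) weights -- while a squared-exponential
    Gaussian-process kernel couples them across covariate values.
    (Kernel choice and all parameter values are illustrative.)"""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
    K += 1e-10 * np.eye(len(x))                     # numerical jitter
    z = rng.multivariate_normal(np.zeros(len(x)), K, size=n_sticks)
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))  # Phi(z)
    v = 1.0 - (1.0 - u) ** (1.0 / M)        # inverse CDF of Beta(1, M)
    left = np.cumprod(np.vstack([np.ones(len(x)), 1.0 - v[:-1]]), axis=0)
    return v * left                          # entry [j, i]: stick j, site i

w = dependent_gem([0.0, 0.1, 5.0], M=2.0)
# each column is (up to truncation) a probability vector over sticks;
# nearby covariate values 0.0 and 0.1 receive very similar weights
```

For Beta$(1,M)$ the inverse CDF is available in closed form, $F^{-1}(u)=1-(1-u)^{1/M}$, which keeps the sketch free of external dependencies.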
The paper is organized as follows.
In Section~\ref{sec:diversity} we describe our case study, review the ecotoxicological literature and background, and discuss diversity and effective concentration estimation.
Section~\ref{sec:models} describes the Bayesian nonparametric model, posterior sampling and the most useful properties of the model. Estimation results and ecotoxicological guidelines are given in Section~\ref{sec:applications}. A discussion of model considerations is given in Section~\ref{sec:considerations} and Section~\ref{sec:discussion} concludes the paper with a general discussion. Extended results, details of posterior computation and the proofs of our results are given in \supp, available as \citet{arbel2015supplementary}.
\section{Case study and ecotoxicological context\label{sec:diversity}}
\subsection{Case study and data\label{sec:data}}
As already sketched in the Introduction, our case study consists of a soil microbial data set acquired across a hydrocarbon contamination gradient at the location of a fuel spill at Australia's Casey Station in East Antarctica ($110^{\circ}\, 32'$ E, $66^{\circ}\, 17'$ S), along a transect at 22 locations. Microbes are classified as Operational Taxonomic Units (OTUs), which we also generically refer to as species throughout the paper. OTU sequences were processed on genomic DNA using the \textsf{mothur} software package, see \citet{schloss2009fromTris}. We refer to \citet{siciliano2015} for a complete account of the data set acquisition.
The total number of species recorded at least once at some site exceeds 1,800. All species were included in the estimation. However, we have noticed that it is possible to work with a subset of the data, consisting of those species whose abundance over all measurements exceeds a given low threshold (say up to ten), without significantly altering the results.
A crucial point for the subsequent analyses is that we order the species by \textit{decreasing overall abundance}, \ie species $j=1$ is the most numerous species in the whole data set. Sampling variation across the sites explains why the species are not strictly ordered when considered site by site, see Figure~\ref{fig:comparison_DP_prop}.
OTU measurements are paired with a contaminant called \TPHname \citep[TPH, see][]{siciliano2014}, suspected to impact OTU diversity. The TPH contamination level recorded at each site ranges from 0 to 22,000 mg TPH/kg soil. Ten sites were recorded as uncontaminated, \ie with TPH equal to zero. We call the microbial communities associated with these sites \textit{baseline communities}, and use them to define effective concentrations $EC_x$, see Section~\ref{sec:EC}. Although a continuous variable, TPH is recorded with ties that we interpret as due to measurement rounding. We jitter the TPH concentrations with random Gaussian noise (taking its absolute value in the case TPH = 0) in order to account for measurement errors and to break the ties. This noise can be incorporated in the probabilistic model. Repeating the estimation for varying values of the noise variance, moderate compared with the variability of TPH, has shown little to no alteration of the results.
\subsection{Ecotoxicological context\label{sec:ecotox}}
This paper focuses on an ecotoxicological case study where the goal is to predict the impact of a contaminant on an ecosystem. The common treatment of this question relies on toxicity tests, either on single species (called populations) or on multiple species (called communities). The need for appropriate modeling techniques is apparent due to data limitations, for instance in our case where data acquisition in Antarctica is extremely expensive. While single-species modeling methods are now well understood,
community modeling still lacks a firm theoretical underpinning. There are two alternative community modeling approaches. On one hand, one can model single species independently and then aggregate the individual predictions into community predictions \citep[e.g.][]{ellis2011}. A drawback attached to the aggregation is the lack of an appropriate uncertainty quantification; moreover, one necessarily loses crucial information by dismissing interplay across species. On the other hand, the response of the community as a whole can be modeled, which generally entails the use of some univariate summary of community responses, such as compositional dissimilarity \citep[e.g.][]{ferrier2006,ferrier2007} or rank abundance distributions \citep{foster2010}. Alternatively, the responses of multiple species can be modeled simultaneously \citep[e.g.][]{foster2010,dunstan2011,wang2012}.
Single species are commonly modeled through the probability of presence $p_j$ of each species $j$ as a function of the environmental parameters. The natural distribution for multiple species is the multinomial distribution, which provides an intuitive framework when the sampling process consists of independent observations of a fixed number of species. Recent literature demonstrates the popularity of the multinomial distribution in ecology \citep[e.g.][]{fordyce2011,death2012multinomial,holmes2012dirichlet} and genomics \citep{bohlin2009,dunson2009nonparametric}. Our use of the \GEM distribution actually extends the multinomial distribution to cases where the number of species need be neither fixed nor known, \ie where the prior is placed on infinite vectors of presence probabilities.
\subsection{Diversity\label{sec:div}}
Modeling presence probabilities provides a clear link to indices that describe various community properties of interest to ecologists, such as species diversity, richness, evenness, \etc. The literature on diversity is extensive, not only in ecology \citep{hill1973diversity,patil1982diversity,foster2010,colwell2012, death2012multinomial} but also in other areas of science, such as biology, engineering, physics, chemistry, economics, health and medicine \citep[see][]{borges1998family,havrda1967quantification,kaniadakis2005two}, and in more mathematical fields such as probability theory \citep{donnelly1993asymptotic}. There are numerous ways to study the diversity of a population divided into groups, examples of predominant indices in ecology include the Shannon index $-\sum_j p_j\log p_j$, the Simpson index (or Gini index) $1-\sum_j p_j^2$, on which we focus in this paper, and the Good index which generalizes both $-\sum_{j} p_j^\alpha\log^\beta p_j$, $\alpha,\beta\geq 0$ \citep{good1953population}.
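All three indices are straightforward to evaluate from a (finite or truncated) vector of presence probabilities; a small self-contained sketch (function names are ours):

```python
import math

def shannon(p):
    """Shannon index  -sum_j p_j log p_j  (zero-probability terms drop out)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def simpson(p):
    """Simpson (Gini) index  1 - sum_j p_j^2."""
    return 1.0 - sum(x * x for x in p)

def good(p, alpha, beta):
    """Good index  -sum_j p_j^alpha log^beta p_j  (shown for integer beta);
    (alpha, beta) = (1, 1) recovers Shannon, and 1 + good(p, 2, 0) = Simpson."""
    return -sum(x**alpha * math.log(x) ** beta for x in p if x > 0)

p = [0.25] * 4                            # uniform community of 4 species
print(round(shannon(p), 3), simpson(p))   # -> 1.386 0.75
```

A uniform community maximizes these indices for a given number of species, which is the usual intuition behind evenness.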
Diversity estimation, and more generally the estimation of community indices based on species data, has long been a statistical problem of interest. One reason is simple and can be traced back to the high variability inherent to species data. For instance, the most obvious estimators, hereafter referred to as \textit{empirical estimators}, which consist in plugging in empirical presence probabilities, \ie observed proportions $\hat p_{ij}$ of species $j$ at site $i$, suffer from that curse. Many treatments have been proposed in the literature to address this issue. A first approach is the field of occupancy modeling and imperfect detection; see for instance the monograph by \citet{royle2008hierarchical}. We provide a concise description of imperfect detection modeling in Section~\ref{sec:meas-error} and do not pursue this direction here. Another approach, which we follow in this paper, consists in smoothing, or regularizing, empirical estimates. A Bayesian approach is a natural way to do so. Specifically, \citet{gill1979bayesian} show that using a Dirichlet prior distribution over $(p_1,\ldots,p_J)$ in the multinomial model with $J$ species greatly improves estimation over empirical counterparts. The reason is that the prior prevents pathological behaviors due to outliers by smoothing the estimates. The smoothing is controlled by the Dirichlet parameter, which can be tuned according to expert information. Compared to the framework of \citet{gill1979bayesian}, there is additional variability across sites in our case study; this high variability of the empirical estimates of Simpson diversity is visible in their representation (dots) in Figure~\ref{fig:post_shannon}. We address this additional difficulty by borrowing strength across the sites, following the intuition that neighboring sites should respond similarly to the contaminant.
The borrowing of strength is done by incorporating dependence across the sites in the prior distribution.
In order not to impose a known total number of species a priori, we adopt a Bayesian nonparametric approach, hence extending the work of \citet{gill1979bayesian} from Dirichlet prior distributions to a covariate-dependent Dirichlet process prior. This also extends the model of \citet{holmes2012dirichlet} to a covariate-dependent setting with an a priori unknown number of species.
Note that this idea of using a Bayesian nonparametric approach as a smoothing technique for species data was recently adopted in the context of discovery probability, the probability of observing new species or species already observed with a given frequency. \citet{good1953population} proposed smoothed estimators, popularized as Good--Turing estimators, for discovery probabilities. Good--Turing estimators were shown to have a Bayesian nonparametric interpretation \citep[see][]{lijoi2007bayesian,favaro2015rediscovering,arbel2015discovery}, which demonstrates the ability of Bayesian nonparametric methods to regularize species data.
\subsection{Effective concentration\label{sec:EC}}
Highly relevant for protecting an ecosystem, the \textit{effective concentration} at level $x$, denoted by $EC_x$, is the concentration of contaminant that causes $x$\% effect on the population relative to the baseline community \citep[e.g.][]{newman2012quantitative}. For example, the $EC_{50}$ is the median effective concentration and represents the concentration of a contaminant which induces a response halfway between the control baseline and the maximum after a specified exposure time. For single species studies, this is commonly assessed by an $x$\% increase in mortality. In applications with a multi-species response, as considered in this paper, it is the response of the community as a whole that is of interest. The $EC_x$ values are used to derive appropriate protective guidelines on contaminant concentrations, for instance in terms of waste, chemical dumps and fuel spills containment strategies. Currently, it is not clear how to best calculate $EC_x$ values using whole-community data. The $EC_x$ values can be defined in many ways depending on the specific aspects of interest to the ecological application. We illustrate the use of the Jaccard dissimilarity index, denoted by $\text{Jac}(X)$, one of the many dissimilarity variants available, as a measure of change in community composition.
We defined the baseline community as the set of uncontaminated sites (ten sites), where TPH equals zero, see Section~\ref{sec:data}. The dissimilarity at TPH zero, denoted by $\text{Jac}_0$, is an estimate of the variability in community composition between uncontaminated sites. The $EC_x$ value is the smallest TPH value $X$ such that
\begin{align}\label{eq:ECxdef}
\text{Jac}(X)=1-(1-\text{Jac}_0)(1-x/100).
\end{align}
In this way, $EC_0$, the TPH value for which there is no change relative to baseline, is obtained at $\text{Jac}(X)=\text{Jac}_0$, while $EC_{100}$ is obtained at $\text{Jac}(X)=1$, \ie for a TPH value such that the community composition becomes disjoint with the baseline. We see by Equation~\eqref{eq:ECxdef} that intermediate values are obtained by linear interpolation.
The smallest TPH value is used so as to provide a conservative $EC_x$ estimate, since the dissimilarity curve is not guaranteed to be monotonic. A particular feature of the model which allows us to follow this methodology is its ability to estimate the community composition between observed TPH values, since it is unlikely that the dissimilarity threshold $ \text{Jac}(X)$ sought in Equation~\eqref{eq:ECxdef} will coincide exactly with one of the measured TPH levels in the data. $95\%$ credible bands for $EC_x$ values were obtained in a similar fashion, \ie as the smallest and the largest values of, respectively, the $2.5\%$ and $97.5\%$ quantiles of the $EC_x$ value, again so as to provide conservative estimates. See Figure~\ref{fig:ECx_ECx} for an illustration of the method.
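The definition above translates directly into code. The following Python sketch computes a conservative $EC_x$ as the smallest value on a TPH grid at which the dissimilarity curve crosses the threshold of Equation~\eqref{eq:ECxdef}, with linear interpolation between grid points; the grid and the curve used here are illustrative toys, not the study data.

```python
import numpy as np

def ecx(tph, jac, x, jac0):
    """Smallest TPH value where Jac(X) reaches 1 - (1 - Jac_0)(1 - x/100).

    tph  : increasing grid of contaminant values
    jac  : (posterior mean) Jaccard dissimilarity evaluated on that grid
    x    : effect level in percent
    jac0 : baseline dissimilarity at TPH = 0
    """
    thr = 1.0 - (1.0 - jac0) * (1.0 - x / 100.0)
    above = np.nonzero(jac >= thr)[0]
    if above.size == 0:
        return np.nan          # threshold never reached on the grid
    k = above[0]
    if k == 0:
        return tph[0]
    # linear interpolation between the two bracketing grid points
    t = (thr - jac[k - 1]) / (jac[k] - jac[k - 1])
    return tph[k - 1] + t * (tph[k] - tph[k - 1])

# toy monotone dissimilarity curve (illustrative only)
tph = np.linspace(0.0, 25000.0, 251)
jac = 0.2 + 0.8 * (1.0 - np.exp(-tph / 8000.0))
print(ecx(tph, jac, 50, jac0=0.2))   # median effective concentration
```

Taking the first grid index at or above the threshold, rather than solving for all crossings, is what makes the estimate conservative when the curve is not monotonic.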
\section{Model\label{sec:models}}
\subsection{Data model\label{sec:sampling}}
We describe here the notations and the sampling process of covariate-dependent species-by-site count data. To each site $i=1,\ldots,I$ corresponds a covariate value $X_i\in\X$, where the space $\X$ is a subset of $\R^d$. We focus here on a single covariate, \ie $d=1$. The general case $d\geq 1$ is discussed in Section~\ref{sec:discussion}.
Individual observations $Y_{n,i}$ at site $i$ are indexed by $n=1,\ldots,N_i$, where $N_i$ denotes the total abundance, or number of observations. Observations $Y_{n,i}$ take values in the positive integers $j\in\{1,\ldots,J_i\}$, where $J_i$ denotes the number of distinct species observed at site $i$.
No hypothesis is made on the unknown total number of species $J=\max_i J_i$ in the community of interest, which might be infinite.
We denote by $(\Xb,\Yb)$ the observations over all sites, where $\Xb=(X_i)_{i=1,\ldots,I}$, $\Yb =(\Y_{i}^{N_i})_{i=1,\ldots,I}$ and $\Y_{i}^{N_i}=(Y_{n,i})_{n=1,\ldots,N_i}$. The abundance of species $j$ at site $i$ is denoted by $N_{ij}$, \ie the number of times that $Y_{n,i}=j$ with respect to index $n$. The abundances satisfy $\sum_{j=1}^{J_i} N_{ij} = N_{i}$.
We model the probabilities of presence $\p=(\p(X_i))_{i=1,\ldots, I}=(p_j(X_i)_{j=1,2,\ldots})_{i=1,\ldots, I}$, where $p_j(X_i)$ represents the probability of species $j$ under covariate $X_i$, by the following
\begin{equation}\label{eq:mixture_model}
Y_{n,i}\,\vert \,\p(X_i),X_i\simind \sum_{j=1}^\infty p_j(X_i)\delta_j,
\end{equation}
for $i=1,\ldots, I$, $n=1,\ldots, N_i$, where $\delta_j$ denotes a Dirac point mass at $j$.
\subsection{Dependent prior distribution\label{sec:dep_prior}}
We follow a Bayesian approach, which implies that we need to define a prior distribution for the probabilities $\p$. The Dirichlet process \citep{ferguson1973bayesian} is a popular distribution in Bayesian nonparametrics which has been used for modeling species data by \cite{lijoi2007bayesian}. We extend the methodology developed by Lijoi et al. in building a covariate-dependent prior distribution in a way which is reminiscent of the extension of the classical \DPname to the dependent Dirichlet process by \citet{maceachern1999dependent}. More specifically, the marginal prior distribution on $\p(X)$ for covariate $X$ is defined by the following stick-breaking construction, which introduces Beta random variables $V_{j}(X)\simiid \Be(1,M)$ such that $p_1(X)=V_1(X)$ and, for $j>1$:
\begin{equation}
p_{j}(X)=V_{j}(X)\prod_{l<j}(1-V_{l}(X)).\label{eq:beta_on_V}
\end{equation}
This prior distribution is called \GEMname distribution and denoted by $\p(X)\sim\GEM(M)$, where $M>0$ is called the precision parameter. The motivation for using the \GEM distribution is explained by Figure~\ref{fig:comparison_DP_prop} which shows, for species $j=1,\ldots,32$, the observed proportions $(\hat p_{ij})$ at site $i=9$ and draws of $(p_j)$ from the $\GEM(M)$ prior with precision parameter $M=6$.
Since the $\GEM(M)$ prior on $\p(X_i)$ is \emph{stochastically ordered} \citep[see][]{pitman2006combinatorial}, it puts more mass on the more numerous species of the community.
It makes sense to sort the data by decreasing overall abundance, as explained in Section~\ref{sec:data}, and to use a prior with a stochastic order on $\p$, since the species in the data under study are naturally present in both large and small numbers. In Figure~\ref{fig:comparison_DP_prop} we observe the same non-increasing pattern in the observed frequencies and in draws from the \GEM prior, which is an argument in favour of using the $\GEM(M)$ prior for marginal modeling of the probabilities $\p(X)$. For a discussion of the ordering assumption, see Section~\ref{sec:assump-data}.
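As a minimal illustration, the stick-breaking construction~\eqref{eq:beta_on_V} can be sampled at a finite truncation level; the truncation level $J$ and the seed below are arbitrary choices for this sketch.

```python
import numpy as np

def sample_gem(M, J, rng):
    """Draw the first J weights of a GEM(M) stick-breaking sequence:
    p_j = V_j * prod_{l<j} (1 - V_l), with V_j ~ Beta(1, M) i.i.d."""
    V = rng.beta(1.0, M, size=J)
    # remaining stick length before breaking the j-th piece
    stick = np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))
    return V * stick

rng = np.random.default_rng(0)
p = sample_gem(M=6.0, J=32, rng=rng)
print(p.sum())   # < 1: the remaining mass lies in the truncated tail
```

The weights are stochastically (not pathwise) decreasing, which is exactly the pattern compared with the observed proportions in Figure~\ref{fig:comparison_DP_prop}.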
\begin{center}
\begin{figure}[ht!]
{\centering
\includegraphics[width=.3\linewidth]{Figures/graphics-graph_p_emp3}
\includegraphics[width=.3\linewidth]{Figures/graphics-graph_p_GEM3}
}
\caption[Comparison of probabilities $p_j$.]{Comparison of probabilities of presence in raw data at site $i=9$ (left) and probabilities sampled from the \gem prior with $M=6$ (right). The $x$-axis represents species $j=1,\ldots,32$.}
\label{fig:comparison_DP_prop}
\end{figure}
\end{center}
For an exhaustive description of the prior distribution on $\p$, the marginal description~\eqref{eq:beta_on_V} needs to be complemented by specifying a distribution for the stochastic processes $(V_j(X),X\in\X)$, for any positive integer $j$. Since~\eqref{eq:beta_on_V} requires Beta marginals, natural candidates are Beta processes. A simple yet effective way to obtain a Beta process is to transform a Gaussian process by an inverse cumulative distribution function (\CDF) as follows. Denote by $Z\sim \Norm(0,\sigma_Z^2)$ a Gaussian random variable, by $\Phi_{\sigma_{Z}}$ its \CDF and by $F_M$ the $\Be(1,M)$ \CDF. Then $V=F_M^{-1}\circ\Phi_{\sigma_{Z}}(Z)$ is $\Be(1,M)$ distributed, with $F_M^{-1}(U) = 1-(1- U)^{1/M}$. We write $g_{\sigma_Z,M} =F_M^{-1}\circ\Phi_{\sigma_{Z}}$.
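The transform $g_{\sigma_Z,M}$ can be checked numerically; here is a short Python sketch using `scipy.stats.norm` for $\Phi_{\sigma_Z}$ (sample size and parameter values are arbitrary).

```python
import numpy as np
from scipy.stats import norm

def g(z, sigma_z, M):
    """g_{sigma_Z, M}(z) = F_M^{-1}(Phi_{sigma_Z}(z)): maps a N(0, sigma_Z^2)
    variable to a Beta(1, M) variable, with F_M^{-1}(u) = 1 - (1 - u)^(1/M)."""
    u = norm.cdf(z, scale=sigma_z)   # uniform on (0, 1)
    return 1.0 - (1.0 - u) ** (1.0 / M)

rng = np.random.default_rng(1)
z = rng.normal(scale=2.0, size=100_000)
v = g(z, sigma_z=2.0, M=6.0)
print(v.mean())   # close to E[Beta(1, M)] = 1 / (1 + M)
```

Because $\Phi_{\sigma_Z}(Z)$ is uniform on $(0,1)$, composing with $F_M^{-1}$ yields exact $\Be(1,M)$ marginals, while the dependence structure of the Gaussian process is inherited by $(V_j(X),X\in\X)$.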
Note that the idea of including a transformed Gaussian process within a stick-breaking process is used in previous articles including \citet{rodriguez2010latent,rodriguez2011nonparametric,barrientos2012support,pati2013posterior}.
In our case, we use Gaussian processes $\Zp_j$ on the space $\X$, $j=1,2,\ldots$, which define Beta processes $\V_j$, which in turn define the probabilities $\p_j$. Though the main parameters of interest are the $\p_j$, we will work hereafter with $\Zp_j$ for computational convenience.
The Gaussian process is used as a prior probability distribution over functions.
It is fully specified by a mean function $m$, which we take equal to 0, and a covariance function $K$ defined by
\begin{equation}\label{eq:cov_function_K}
K(X_i,X_l) = \cov\big(\Zp_j(X_i),\Zp_j(X_l)\big).
\end{equation}
We control the overall variance of $\Zp_j$ by a positive pre-factor $\sigma_{\Z}^2$ and write $K=\sigma_{\Z}^2\tilde{K}$ where $\tilde{K}$ is normalized in the sense that $\tilde{K}(X_i,X_i)=1$ for all $i$.
We work with the squared exponential (SE), Ornstein--Uhlenbeck (OU), and rational quadratic (RQ) covariance functions. See Section~\ref{sec:covariance-matrix} in \supp for more details. All three involve a parameter $\lambda$ called the length-scale of the process $\Zp_j$. It tunes how far apart two points $X_1$ and $X_2$ have to be for the process to change significantly. The shorter $\lambda$ is, the rougher are the paths of the process $\Zp_j$.
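For concreteness, here is a sketch of the three normalized kernels $\tilde K_\lambda$ under their standard textbook parameterizations; the rational quadratic shape parameter `alpha` is an extra assumption of this sketch, not a quantity fixed in the paper.

```python
import numpy as np

def se(d, lam):              # squared exponential
    return np.exp(-0.5 * (d / lam) ** 2)

def ou(d, lam):              # Ornstein-Uhlenbeck (exponential)
    return np.exp(-np.abs(d) / lam)

def rq(d, lam, alpha=2.0):   # rational quadratic
    return (1.0 + d ** 2 / (2.0 * alpha * lam ** 2)) ** (-alpha)

def gram(X, kernel, lam, sigma2):
    """Gram matrix K = sigma_Z^2 * K_tilde evaluated at covariates X."""
    D = X[:, None] - X[None, :]          # pairwise differences
    return sigma2 * kernel(D, lam)

X = np.array([0.0, 1.0, 3.0])
K = gram(X, se, lam=1.5, sigma2=2.0)
print(np.diag(K))   # all equal to sigma_Z^2: K_tilde is normalized
```

All three kernels equal one at $d=0$, so the pre-factor $\sigma_\Z^2$ alone controls the overall variance, as stated above.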
We adopt the same technique as \citet{van2009adaptive} who deal with $\lambda$ by making it random with an inverse-Gamma (denoted \IG) prior distribution.
They obtain adaptive minimax-optimal posterior contraction rates which indicate that the length-scale parameter $\lambda$ correctly adapts to the path smoothness. \citet{gibbs1997bayesian} derived a covariance function where the length-scale $\lambda(X)$ is a (positive) function of $X$. This case is not studied here, although it could result in interesting behaviour, as noted in \citet{Rasmussen:2006aa}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.8\textwidth]{Figures/graph_model.pdf}
\caption{Diagram representation for the \DGEM model. Squares represent observed data, \ie covariates $\Xb=(X_i)_{i=1,\ldots,I}$ and observations $\Y_i^{N_i}=(Y_{1,i},\ldots,Y_{N_i,i})$, and circles represent parameters for the \DGEM model.}
\label{fig:graphical_model}
\end{center}
\end{figure}
Each species $j$ is associated to a Gaussian process $\Zp_j$. We have a set of $I$ points $\Xb=(X_1,\ldots,X_I)$ in the covariate space $\X$ which reduces the evaluation of the whole process $\Zp_j$ to its values at $\Xb$ denoted by $\Z_j=(Z_{1,j},\ldots,Z_{I,j})= (\Zp_j(X_1),\ldots,\Zp_j(X_I))$. We denote also by $\Z$ the matrix of all vectors $\Z_j$, $\Z = (Z_{ij})_{1\leq i\leq I, 1\leq j \leq J}$. The vector $\Z_j$ is multivariate Gaussian. Its covariance matrix $K(\Xb,\lambda,\sigma_{\Z})=(\sigma_{\Z}^2\tilde{K}_{\lambda}(X_i,X_l))_{i,l=1,\ldots,I}$ is a Gram matrix with entries given by Equation~(\ref{eq:cov_function_K}). The prior distribution of $\Z_j$ is
\begin{equation*}
\log \pi(\Z_j\vert \Xb,\lambda,\sigma_{\Z}) = -\frac{1}{2} \Z_j^\top K^{-1}(\Xb,\lambda,\sigma_{\Z})\Z_j - \frac{1}{2}\log\vert K(\Xb,\lambda,\sigma_{\Z})\vert - \frac{I}{2}\log 2\pi,
\end{equation*}
or, written in terms of $\sigma_{\Z}^2$ and $\tilde{K}_{\lambda}=(\tilde{K}_{\lambda}(X_i,X_l))_{i,l=1,\ldots,I}$,
\begin{align*}
\pi(\Z_j\vert \Xb,\lambda,\sigma_{\Z})\propto \sigma_{\Z}^{-I}\vert\tilde K_\lambda\vert^{-1/2}\exp\Big(-\frac{\Z_j^\top\tilde K_\lambda^{-1}\Z_j}{2\sigma_{\Z}^2}\Big).
\end{align*}
The prior distribution is complemented by specifying distributions over the hyperparameters: the standard deviation $\sigma_{\Z}$, the length-scale $\lambda$, and the precision parameter $M$ of the \GEM distribution. We use the following standard hyperpriors:
\begin{align}\label{eq:hyperpriors}
\sigma_{\Z}^2 \sim \IG(a_{\Z},b_{\Z}),\,\, \lambda \sim \IG(a_{\lambda},b_{\lambda}), \text{ and } M \sim \Ga(a_M,b_M).
\end{align}
Note that these are also common choices in the absence of dependence since they are conjugate priors, and recall that the inverse-Gamma prior on $\lambda$ also leads to good posterior contraction results.
It is convenient to estimate the model in terms of $\Z_j$, and then to use the transform $\V_j = g_{\sigma_{\Z},M}(\Z_j)$. The likelihood is
\begin{align}\label{eq:factorized_like}
\Lc(\Yb\vert\Z,\Xb, \sigma_{\Z},M)=\prod_{j =1}^J\prod_{i =1}^I g_{\sigma_{\Z},M}(Z_j(X_i))^{N_{ij}} (1-g_{\sigma_{\Z},M}(Z_j(X_i)))^{\bar N_{i,j+1}},
\end{align}
where $\bar N_{i,j+1}=\sum_{l>j}{N_{il}}$. The posterior distribution is then
\begin{equation}\pi(\Z,\lambda,\sigma_{\Z},M\vert\Yb,\Xb) \propto \Lc(\Yb\vert \Z,\Xb, \sigma_{\Z},M)\pi(\Z\vert \Xb,\lambda,\sigma_{\Z})\pi(\sigma_{\Z})\pi(\lambda)\pi(M).\label{eq:post_GP}
\end{equation}
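For concreteness, the factorized likelihood~\eqref{eq:factorized_like} can be evaluated in log space. The Python sketch below assumes a site-by-species count matrix `N` and the Gaussian field `Z` evaluated at the sites; the inputs are toys, not the study data.

```python
import numpy as np
from scipy.stats import norm

def loglik(Z, N, sigma_z, M):
    """Log of the factorized likelihood: Z is I x J (Gaussian field at the
    sites), N is I x J (abundance N_ij of species j at site i)."""
    u = norm.cdf(Z, scale=sigma_z)
    g = 1.0 - (1.0 - u) ** (1.0 / M)            # g_{sigma_Z, M}(Z_j(X_i))
    # Nbar[i, j] = sum_{l > j} N[i, l]: abundance of later-ranked species
    Nbar = N[:, ::-1].cumsum(axis=1)[:, ::-1] - N
    return np.sum(N * np.log(g) + Nbar * np.log1p(-g))

rng = np.random.default_rng(2)
I, J = 5, 8
Z = rng.normal(size=(I, J))
N = rng.poisson(3.0, size=(I, J))
print(loglik(Z, N, sigma_z=1.0, M=6.0))
```

The reversed cumulative sum computes every tail count $\bar N_{i,j+1}$ in one vectorized pass, which keeps the cost linear in $I \times J$.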
\subsection{Posterior computation and inference\label{sec:post}}
Here we highlight the main points of the algorithm, which is fairly standard; the fully detailed posterior sampling procedure can be found in the Supplementary Material, Section~\ref{sec:app-post}. Inference in the \DGEM model is performed via two distinct samplers: (i) a Markov chain Monte Carlo (hereafter \MCMC) algorithm comprising Gibbs and Metropolis--Hastings steps for sampling the posterior distribution of $(\Z,\sigma_{\Z},\lambda,M)$, which proceeds by sequentially updating each parameter $\Z,\,\sigma_{\Z},\,\lambda$ and $M$ via its conditional distribution; (ii) a sampler from the posterior predictive distribution of $\Z_*$. The latter consists in posterior conditional sampling of the Gaussian process $\Zp$ at covariates $\Xb_*=(X_{1}^*,\ldots,X_{I_*}^*)$ which are not observed, \ie such that the sets $\{X_1,\ldots,X_I\}$ and $\{X_{1}^*,\ldots,X_{I_*}^*\}$ are disjoint. This is achieved by integrating out $\Z$ in the conditional distribution of $\Z_*$ given $\Z$, according to the posterior distribution sampled in (i).
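The predictive step (ii) reduces to the standard Gaussian process conditional: given the field at the observed covariates, $\Z_*$ is drawn from the conditional multivariate normal. A minimal noise-free Python sketch under a squared exponential kernel (toy inputs; a small jitter is added for numerical stability):

```python
import numpy as np

def gp_conditional(Xs, X, Z, lam, sigma2, rng, jitter=1e-9):
    """Sample Z_* | Z at new covariates Xs for a zero-mean GP with a
    squared exponential kernel observed without noise at covariates X."""
    def k(a, b):
        return sigma2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / lam) ** 2)
    K = k(X, X) + jitter * np.eye(len(X))
    Ks = k(Xs, X)
    Kss = k(Xs, Xs)
    A = np.linalg.solve(K, Ks.T)            # K^{-1} K_*^T
    mean = A.T @ Z                          # conditional mean
    cov = Kss - Ks @ A + jitter * np.eye(len(Xs))   # conditional covariance
    return rng.multivariate_normal(mean, cov)

rng = np.random.default_rng(3)
X = np.array([0.0, 1.0, 2.0, 4.0])
Z = np.sin(X)                               # toy observed field values
print(gp_conditional(np.array([0.5, 3.0]), X, Z, lam=1.0, sigma2=1.0, rng=rng))
```

In the full sampler this conditional draw is repeated for each posterior sample of $(\Z,\sigma_\Z,\lambda)$, which integrates $\Z$ out as described above.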
\subsection{Distributional properties\label{sec:properties}}
We provide in Proposition~\ref{prop:covariance_diversity} the first prior moments (expectation, variance and covariance) of the diversity. These are of crucial importance for eliciting the values of hyperparameters, or their prior distribution, based on prior information (expert knowledge, \etc). Additionally, since the \DGEM introduces dependence across the $p_j(X_i)$ as $X_i$ varies, the question arises of the dependence induced in a diversity index. Denote the Simpson index by $H_\Simp(X_i)$, see Section~\ref{sec:div}. An answer is formulated in the next proposition in terms of the covariance between $H_\Simp(X_1)$ and $H_\Simp(X_2)$. Further properties worth mentioning are presented in \supp Section~\ref{sec:suppl-properties}, including marginal moments of the \DGEM prior and continuity of sample paths in Proposition~\ref{prop:moments}, full support in Proposition~\ref{prop:full_supp_DGEM}, a study of the joint distribution of samples from the \DGEM prior in Proposition~\ref{prop:joint_law}, and a discussion of the joint exchangeable partition probability function based on size-biased permutations in Section~\ref{sec:size-biased}.
\begin{prop}\label{prop:covariance_diversity}
The expectation and variance of the Simpson diversity, and its covariance at two sites $X_1$ and $X_2$, induced by the \DGEM distribution, are as follows
\begin{align}
&\E(H_\Simp(X))=\frac{M}{1+M},\quad\var(H_\Simp(X))=\frac{2M}{(M+1)(M+1)_{3}},\label{eq:variance_simp}\\
&\cov(H_\Simp(X_1),H_\Simp(X_2))=\frac{\nu_{2,2}(1-\omega_{2,0})+2\nu_{2,0}\gamma_{2,2}}{(1-\omega_{2,0})(1-\omega_{2,2})}-\nu_{1,0}^2,\label{eq:cov_Simpson}
\end{align}
where $\nu_{i,j}=\E[V^i(X_1)V^j(X_2)]$, $\omega_{i,j}=\E[(1-V(X_1))^i(1-V(X_2))^j]$, and $\gamma_{i,j}=\E[V^i(X_1)(1-V(X_2))^j]$.
\end{prop}
The values of $\nu_{i,j},\omega_{i,j},\gamma_{i,j}$ cannot be computed in closed form when $i\times j \neq 0$, but they can be approximated numerically. The same formal computations for the Shannon index lead to somewhat more complex expressions, which are not displayed here \citep[see also][]{cerquetti2014bayesian}. The expressions of Proposition~\ref{prop:covariance_diversity} are illustrated in Figure~\ref{fig:var_and_cov_H}.
The precision parameter $M$ has the following impact on the prior distribution and on the diversity: when $M\rightarrow 0$, the prior degenerates to a single species with probability 1, hence $H_\Simp\rightarrow 0$, whereas when $M\rightarrow \infty$, the prior tends to favour infinitely many species, and $H_\Simp\rightarrow 1$. In both cases, the variance and the covariance vanish. In between, the variance is maximum for $M\approx 0.49$. The covariance at $X_1$ and $X_2$ equals the variance when $X_1=X_2$ (by continuity of the sample paths), while the covariance vanishes when $\vert X_1-X_2\vert \rightarrow \infty$ (this corresponds to independence for infinitely distant covariates).
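The marginal formulas of Proposition~\ref{prop:covariance_diversity} can be checked by Monte Carlo, truncating the stick-breaking sequence; the truncation level, replicate count and seed below are arbitrary choices for this sketch.

```python
import numpy as np

def simpson_gem(M, J, reps, rng):
    """Monte Carlo draws of H_Simpson = 1 - sum_j p_j^2 under GEM(M),
    with the stick-breaking sequence truncated at J sticks."""
    V = rng.beta(1.0, M, size=(reps, J))
    sticks = np.concatenate(
        [np.ones((reps, 1)), np.cumprod(1.0 - V, axis=1)[:, :-1]], axis=1)
    p = V * sticks
    return 1.0 - (p ** 2).sum(axis=1)

rng = np.random.default_rng(4)
M = 1.0
H = simpson_gem(M, J=400, reps=4000, rng=rng)
print(H.mean())   # close to M / (1 + M) = 0.5
print(H.var())    # close to 2M / ((M+1) (M+1)(M+2)(M+3)) = 1/24
```

With $M=1$ the Pochhammer symbol gives $(M+1)_3=2\cdot3\cdot4=24$, so the variance formula evaluates to $2/(2\cdot 24)=1/24$; the truncation error after 400 sticks is negligible for moderate $M$.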
\begin{center}
\begin{figure}[ht!]
{\centering
\includegraphics[width=.33\linewidth]{Figures/expectation_H-1}
\includegraphics[width=.33\linewidth]{Figures/variance_H-1}
\includegraphics[width=.32\linewidth]{Figures/asymp_covariance-1}
}
\caption{Illustration of Proposition~\ref{prop:covariance_diversity}. \emph{Left}: $\E(H_\Simp(X))$ \wrt $M$. \emph{Middle}: $\var(H_\Simp(X))$ \wrt $M$. \emph{Right}: three paths of $\cov(H_\Simp(X_1),H_\Simp(X_2))$ \wrt $\vert X_1-X_2\vert$ for $M\in\MM$.}
\label{fig:var_and_cov_H}
\end{figure}
\end{center}
Although the first moments of diversity indices under a \GEM prior can be derived, a full description of their distribution seems hard to achieve. For instance, the distribution of the Simpson index involves small-ball-like probabilities $\P(\sum_j p_j^2 < a)$ for which, to the best of our knowledge, no result is known under the \GEM distribution.
\section{Case study results\label{sec:applications}}
We now apply the model to the estimation of diversity and of effective concentrations $EC_x$, as described in Sections~\ref{sec:div} and~\ref{sec:EC}, and assess the goodness of fit of the model and its sensitivity to sampling variation.
\subsection{Results\label{sec:microbial}}
The \MCMC algorithm is run with squared exponential Gaussian processes for 50,000 iterations, thinned by a factor of 5, with a burn-in of 10,000 iterations. The parameters of the hyperpriors~\eqref{eq:hyperpriors} are $a_{\Z}=b_{\Z}=1$, $a_{\lambda}=b_{\lambda}=1$ and $a_M=b_M=1$. The efficiency and convergence of the \MCMC sampler were assessed by trace plots and autocorrelations of the parameters.
The results for the Simpson diversity estimation are illustrated in Figure~\ref{fig:post_shannon} for the \DGEM model (left, \ref{fig:diversity_DGEM}) and for the independent \GEM model {(right, \ref{fig:diversity_GEM})}. The horizontal axis represents the pollution level TPH and the vertical axis represents the Simpson diversity. The posterior mean of the diversity is represented by the solid line, and a 95\% credible interval is indicated by dashed lines, for the dependent model only. The dots indicate the empirical estimator of the diversity.
The \DGEM model (Figure~\ref{fig:diversity_DGEM}) suggested that diversity first increases with TPH, with a maximum at 4,000 mg TPH/kg soil, and then decreases. The \GEM model estimates are shown for comparison in Figure~\ref{fig:diversity_GEM}. These estimates showed more variability with respect to TPH, in that they are closer to the empirical estimates of the diversity. Note that the \GEM estimates were only available at levels of the covariate present in the data, because of the independent nature of the model specification. The \DGEM, in contrast, provided predictions across the full range of TPH values. The credible bands are narrowest for TPH between 3,000 and 5,000 mg TPH/kg soil, due to the borrowing of information between concentrated points, and they widen both at TPH${}=0$, due to many data points with high variability, and at large TPH, due to few data points.
\begin{figure}[ht!]
\begin{minipage}[b]{.5\linewidth}
\centering\includegraphics[width=\textwidth]{Figures/diversity_paper}
\subcaption{\DGEM}\label{fig:diversity_DGEM}
\end{minipage}%
\begin{minipage}[b]{.5\linewidth}
\centering\includegraphics[width=\textwidth]{Figures/diversity_GEM}
\subcaption{Independent \GEM}\label{fig:diversity_GEM}
\end{minipage}
\caption{Diversity estimation results. {(a)} \DGEM model estimates (50,000 MCMC samples). Solid line: \SG diversity estimate. Dashed lines: 95$\%$ credible interval for the \SG diversity.
Dots: Empirical estimates of \SG diversity.
{(b)} Independent \GEM model estimates (50,000 MCMC samples).
Triangles: posterior mean estimate of the Simpson diversity.}
\label{fig:post_shannon}
\end{figure}
The Jaccard dissimilarity curve with respect to TPH is shown in Figure~\ref{fig:ECx_ECx}. The $EC_x$ values are estimated as explained in Section~\ref{sec:EC} and provided in Table~\ref{tab:ECx_estimates}.
Dissimilarity increased with TPH, illustrating that the contaminant alters community structure. Typically, $EC_{10}$, $EC_{20}$ and $EC_{50}$ values of Table~\ref{tab:ECx_estimates} are reported in toxicity studies to be used in the derivation of protective concentrations in environmental guidelines, see Section~\ref{sec:EC}. $EC_{10}$, $EC_{20}$ and $EC_{50}$ values estimated from this model are 1,250, 1,875 and 5,000 mg TPH/kg soil respectively. For small $x$ (less than 10\%), the lower bound of the credible interval on the $EC_{x}$ value is zero, because both TPH and dissimilarity values are bounded below by zero. Conversely, for large $x$ (more than 75\%), the upper bound on the credible interval is 25,000, which is the limit of the TPH range in our analysis.
\begin{figure}[ht!]
\begin{minipage}[b]{.5\linewidth}
\centering\includegraphics[width=\textwidth]{Figures/ECx_jaccard}
\subcaption{Illustration of $EC_x$ and Jaccard dissimilarity}\label{fig:ECx_ECx}
\end{minipage}
\begin{minipage}[b]{.4\linewidth}
\centering\input{Figures/ECx_table_jaccard.tex}
\vskip2cm
\subcaption{$EC_x$ estimates and 95\% credible intervals (min, max)}\label{tab:ECx_estimates}
\end{minipage}%
\caption{Jaccard dissimilarity and $EC_x$ estimation results. {(a)} Posterior distribution (\DGEM model) of Jaccard dissimilarity between the control community, where TPH equals zero, and communities where TPH$>0$. Solid line: mean estimate. Dashed lines: 95\% credible intervals of the dissimilarity estimate. Color: illustration of the estimation of $EC_x$ values and their credible intervals. {(b)} Estimates of $EC_x$ values and their credible intervals.}
\label{fig:ECx}
\end{figure}
\subsection{Posterior predictive checks\label{sec:ppc}}
Since we aim at comparing the performance of the model in terms of
diversity estimates, we also need to specify measures of goodness of fit. We resort to the conditional predictive
ordinate (CPO) statistics, which are now widely used in several
contexts for model assessment; see, for example, \citet{gelfand1996model}.
For each species $j$, the CPO statistic
is defined as follows:
\[
\operatorname{CPO}_j=\like(\Yb_j\vert \Yb_{-j}) = \int \like(\Yb_j\vert \theta) \pi(\ddr \theta\vert \Yb_{-j})
\]
where $\like$ represents the likelihood~\eqref{eq:factorized_like}, $\Yb_{j}$ denotes the data for species $j$ over all sites, $\Yb_{-j}$ denotes the observed sample $\Yb$ with the $j$-th species excluded and $\pi(\ddr \theta \vert \Yb_{-j})$ is the posterior distribution of the model parameters
$\theta =(\Z,\sigma_{\Z},\lambda,M)$ based on data $\Yb_{-j}$. By rewriting the statistic $\operatorname{CPO}_j$ as
\[
\operatorname{CPO}_j= \biggl(\int \big(\like(\Yb_j\vert \theta )\big)^{-1} \pi(\ddr \theta \vert \Yb)
\biggr)^{-1},
\]
it can be easily approximated by Monte Carlo as
\[
\widehat{\operatorname{CPO}_j}= \Biggl(\frac{1}{T}\sum
_{t=1}^T \big(\like(\Yb_j\vert \theta^{(t)})\big)^{-1}\Biggr)^{-1},
\]
where $\{\theta^{(t)}, t=1,2,\ldots,T\}$ is an MCMC sample from
$\pi(\ddr \theta \vert \Yb)$. We illustrate the logarithms of the $\operatorname{CPO}_j$, $j=1,\ldots,J$, by boxplots in Figure~\ref{fig:CPO_fig}, and summarize their values in Table~\ref{tab:CPO_tab} in two ways: as the mean and as the median of the log-CPOs. For the purpose of the comparison, we have estimated six models. The first three are the \DGEM model with squared exponential (SE), Ornstein--Uhlenbeck (OU) and rational quadratic (RQ) covariance functions, see Section~\ref{sec:covariance-matrix} in \supp. The fourth is the probit stick-breaking process (\PSBP) of \citet{rodriguez2011nonparametric}; to make the comparison fair, we have set the hyperparameters of the \PSBP so as to match the expected number of clusters under the \DGEM prior. Last, we used two variants of the \GEM prior: first, independent \GEM priors at each site, as in Figure~\ref{fig:diversity_GEM}; and second, a single \GEM prior where the presence probabilities at all sites are drawn from the same \GEM distribution.
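The harmonic-mean approximation above is conveniently computed in log space for numerical stability; here is a short Python sketch where the array `ll` of simulated per-draw log-likelihoods stands in for actual MCMC output.

```python
import numpy as np
from scipy.special import logsumexp

def cpo(loglik_draws):
    """Harmonic-mean CPO estimate from MCMC output.

    loglik_draws : array of shape (T, J); entry (t, j) holds
                   log L(Y_j | theta^(t)).
    Returns log CPO_j for each species j, computed in log space:
    log CPO_j = log T - logsumexp_t( -log L(Y_j | theta^(t)) ).
    """
    T = loglik_draws.shape[0]
    return np.log(T) - logsumexp(-loglik_draws, axis=0)

rng = np.random.default_rng(5)
ll = rng.normal(loc=-50.0, scale=1.0, size=(1000, 12))  # simulated draws
print(cpo(ll).shape)   # one log-CPO value per species
```

Working with `logsumexp` avoids the overflow that a naive average of $1/\like(\Yb_j\vert\theta^{(t)})$ would produce for small likelihood values.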
The single \GEM is used as a very crude baseline (it is not shown in the boxplots) and does poorly compared to the five other models. As expected, the dependence induced by the \DGEM and the \PSBP greatly improves the predictive quality of the model, as the comparison with the independent \GEM shows. The \DGEM model has a slightly better predictive fit than the \PSBP, which seems to indicate that the total ordering of the species that we use helps as far as prediction is concerned.
\begin{figure}[ht!]
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=\textwidth]{Figures/CPO}
\subcaption{}\label{fig:CPO_fig}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\input{CPO_tab.tex}
\vskip1cm
\subcaption{}\label{tab:CPO_tab}
\end{minipage}%
\caption{Log-conditional predictive ordinates (log-CPO) for different models and prior specifications (see text). {(a)} Boxplots of log-CPO. {(b)} Summaries of log-CPO, mean and median.}
\label{fig:CPO}
\end{figure}
\subsection{Sensitivity to sampling variation\label{sec:sampling_var}}
A thorough sensitivity analysis to sampling variation was conducted in \citet{arbel2013applied}. It consisted in estimating the model on modified data, by (i) deleting the least abundant species; (ii) including additional species; (iii) excluding sites randomly.
This sensitivity analysis showed that the model provides consistent results with data modified as described, thus supporting some robustness to sampling variation.
\section{Model considerations and extensions\label{sec:considerations}}
In addition to looking at a sensitivity analysis to sampling variation as in Section~\ref{sec:sampling_var}, here we consider sensitivity with respect to the model itself which could be extended in a number of ways.
\subsection{Imperfect detection\label{sec:meas-error}}
As pointed out in Section~\ref{sec:div}, we do not connect our model to the fields of occupancy modeling and imperfect detection developed for instance by \citet{royle2008hierarchical}. A possible extension of the current model is to account for imperfect detection. Following \citet{royle2006hierarchical,dorazio2008modeling}, a simple yet effective way to handle this extension is to define a probability of detection $\theta_i$, fixed for each site $i$, and to model the variability of $\theta_i$ across $i$ by an exchangeable prior. Since $\theta_i$ affects each species in the same relative proportion, the probabilities of presence $p_{j}(X_i)$ are invariant to such a formulation, and so is the diversity. Diversity being the prime focus of the present paper, we argue that there is no need to account for imperfect detection in our model, though the model could easily be extended as sketched above if interest deviates from diversity.
\subsection{Assumption on data, stochastic decrease of the $\hat p_j$'s\label{sec:assump-data}}
We have assumed that, after ordering with respect to overall abundance, the $\hat p_j$'s display a stochastically decreasing pattern as in Figure~\ref{fig:comparison_DP_prop}. In our experience, this assumption is satisfied by most species data sets, where species can be microbes, animals, words in text, DNA sequences, \etc. However, the assumption proves overly restrictive in the following cases: (i) the data are subject to detection error: this is covered in the previous section by changing the prior adequately; (ii) there are outlier species which contradict the assumption: this could be addressed by adding a mixture layer in the prior specification; (iii) the underlying assumption itself does not hold: this is for instance the case when all species are overall evenly distributed. A treatment would be context specific and depend on the field.
\subsection{Comparison to other models\label{sec:compar-other-models}}
In Section~\ref{sec:applications} we have compared the \DGEM model to other models: two \GEM priors and the probit stick-breaking prior (\PSBP) of \citet{rodriguez2011nonparametric}. The benefits of the \DGEM over the first two are apparent in terms of smoothing of the estimates due to the a priori dependence, see Figure~\ref{fig:post_shannon}. It also yields a better predictive fit, see Figure~\ref{fig:CPO_fig} and Table~\ref{tab:CPO_tab}, and most importantly allows us to assess the response of species to any value of the contaminant, including unsampled values. With respect to the \PSBP, the CPOs indicate a slightly better predictive fit of the \DGEM prior, at least for the case study at hand.
\section{Discussion\label{sec:discussion}}
We have presented a Bayesian nonparametric dependent model for species data, based on the distribution of the weights of a \ddp, named \DGEM distribution, which is constructed thanks to Gaussian processes. A fundamental advantage of our approach based on the stick-breaking is that it brings considerable flexibility when it comes to defining the dependence structure. It is defined by the kernel of a Gaussian process, whose flexibility allows learning the different features of dependence in the data.
In terms of model fit, we have shown that the \DGEM model improves estimation compared to an independent \GEM model; this was assessed by computing conditional predictive ordinates (CPOs). In addition, our dependent model allows predictions at arbitrary covariate levels (not just those present in the data). It allows, for example, estimation of the diversity and the dissimilarity across the full range of covariates. This is an essential feature in applications where the experimental data are sparse, and it is instrumental in estimating the $EC_x$ values.
There are computational limitations to the use of this model. The estimation can deal with a large number of observations, since the complexity grows linearly with the number of distinct observed species $J$. However, the number of unique covariate values $I$ is the limiting factor of the algorithm and may lead to dimensionality problems.
One could consider the use of \INLA approximations \citep[see][]{rue2009approximate} in the case of prohibitively large $I$.
Possible extensions of the present paper include the following. First, extra flexibility would be guaranteed by using the two-parameter Poisson-Dirichlet distribution instead of the \GEM distribution, since it controls more effectively the posterior distribution of the number of clusters \citep{lijoi2007bayesian}. This can be done at almost no extra cost, since it only requires one additional step in the Gibbs sampler.
Second, the \DGEM model is tested on univariate variables only, but could be extended to multivariate variables, \ie, $X\in\R^d$, $d>1$.
Instead of a Gaussian process $\Zp$, one would use a Gaussian random field $\Zp^d$. To that purpose, all the methodology presented in Section~\ref{sec:models} remains valid. The algorithm can become computationally challenging for large-dimensional covariates, but it poses no additional difficulty for moderate dimensions.
Applications of such an extension are promising, such as testing joint effects in dynamical models (time $\times$ contaminant), in spatial models (position $\times$ contaminant), \etc.
\section{Posterior computation and inference in the \DGEM model\label{sec:app-post}}
Here we describe how to design a Markov chain Monte Carlo (\MCMC) algorithm for sampling the posterior distribution of $(\Z,\sigma_{\Z},\lambda,M)$ in the \DGEM model. Up to a transformation, it is equivalent to sample the parameters in terms of Gaussian vectors $\Z$ or Beta breaks $\V$. We denote by $\pi$ the prior distribution. We make use of the factorized form of the likelihood in Equation~\eqref{eq:factorized_like} in the main paper
in order to break the posterior sampling into $J=\max_i J_i$ independent sampling schemes. Each remains a multivariate sampling scheme over the $I$ sites, but this avoids a very high-dimensional scheme of size $I\times J$.
\subsection{\texorpdfstring{\MCMC}{MCMC} algorithm}
We use an \MCMC algorithm comprising Gibbs and Metropolis-Hastings steps for sampling the posterior distribution of $(\Z,\sigma_{\Z},\lambda,M)$, which proceeds by sequentially updating each of the parameters $\Z,\,\sigma_{\Z},\,\lambda$ and $M$ via its conditional distribution as described in Algorithm~\ref{algo:gibbs} (general sampler) and Algorithm~\ref{algo:MH} (Metropolis-Hastings step for a generic parameter $\T$). Denote by $P_{\T}(\,\cdot\,)$ the target distribution (full conditional), and by $Q_{\T}(\,\cdot\,\vertju \T)$ the proposal for a generic parameter $\T$. The variance of the latter proposal, denoted by $\sigma^2_{Q_{\T}}$, is tuned during a burn-in period.\\
\begin{minipage}[c]{5cm}
\begin{algorithm}[H]
\caption{\DGEM \label{algo:gibbs}}
\begin{itemize}
\item Update $\Z$ given $(\sigma_{\Z},\lambda,M)$
\item Update $\sigma_{\Z}$ given $(\Z,\lambda,M)$
\item Update $\lambda$ given $(\Z,\sigma_{\Z},M)$
\item Update $M$ given $(\Z,\sigma_{\Z},\lambda)$
\end{itemize}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[c]{7cm}
\begin{algorithm}[H]
\caption{Metropolis-Hastings step\label{algo:MH}}
\begin{itemize}
\item Given $\T$, propose $\T'\sim Q_{\T}(\,\cdot\,\vertju \T)$
\item Compute $\rho_{\T} = \frac{P_{\T}(\T')}{P_{\T}(\T)}\frac{Q_{\T}(\T\vert \T')}{Q_{\T}(\T'\vertju \T)}$
\item Accept $\T'$ \withproba $\min(\rho_{\T},1)$, otherwise keep $\T$
\end{itemize}
\end{algorithm}
\end{minipage}\bigskip
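As an illustration, the generic Metropolis-Hastings step of Algorithm~\ref{algo:MH} can be sketched as follows. This is an illustrative Python implementation, not the code used for the paper; the function names are ours.

```python
import numpy as np

def mh_step(theta, log_target, propose, log_q, rng):
    """One Metropolis-Hastings update of a generic parameter theta.

    log_target: log of the unnormalized full conditional P_theta
    propose:    draws theta' ~ Q_theta(. | theta)
    log_q(x, y): log-density of the proposal Q_theta(x | y)
    """
    theta_new = propose(theta, rng)
    # log acceptance ratio: log [ P(theta')/P(theta) * Q(theta|theta')/Q(theta'|theta) ]
    log_rho = (log_target(theta_new) - log_target(theta)
               + log_q(theta, theta_new) - log_q(theta_new, theta))
    if np.log(rng.uniform()) < min(log_rho, 0.0):
        return theta_new, True   # accept
    return theta, False          # reject
```

With a symmetric proposal, as used for $\Z$, the $Q$ terms cancel and the step reduces to a plain Metropolis update.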
The proposals and target (full conditional) distributions are as follows:
\begin{enumerate}
\item Conditional for $\Z$: Metropolis algorithm with a Gaussian random-walk proposal $\Z' \sim Q_{\Z}(\,\cdot\vertju \Z) = \Norm_I(\Z,\sigma_{Q_{\Z}}^2\tilde K_\lambda)$. We use a covariance matrix proportional to the prior covariance matrix $\tilde K_\lambda$, which leads to improved convergence of the algorithm compared to a homoscedastic alternative. The target distribution is
$$P_{\Z}(\Z)\propto \Lc(\Yb\vert \Z,\Xb,\sigma_{\Z},M) \allowbreak \pi(\Z\vert \Xb,K(\Xb,\lambda,\sigma_{\Z})).$$
\item Conditional for $\sigma_{\Z}$: Metropolis-Hastings algorithm with a Gaussian proposal left-truncated at $0$, ${\sigma'_{\Z}} \sim Q_{{\sigma_{\Z}}}(\,\cdot\vertju {\sigma_{\Z}}) = \Norm_{\trunc}({\sigma_{\Z}},\sigma_{Q_{{\sigma_{\Z}}}}^2)$, and target distribution $$P_{\sigma_{\Z}}(\sigma_{\Z})\propto \Lc(\Yb\vert \Z,\Xb,\sigma_{\Z},M)\sigma_{\Z}^{-I-a_{\Z}/2}\exp\Big(-\frac{\Z^\top\tilde K_\lambda^{-1}\Z-2b_{\Z}}{2\sigma_{\Z}^2}\Big).$$
\item Conditional for $\lambda$: Metropolis-Hastings algorithm with a Gaussian proposal left-truncated at $0$, $\lambda' \sim Q_{\lambda}(\,\cdot\vertju \lambda) = \Norm_{\trunc}(\lambda,\sigma_{Q_{\lambda}}^2)$, and target distribution $$P_{\lambda}(\lambda)\propto \pi(\Z\vert \Xb,K(\Xb,\lambda,\sigma_{\Z}))\pi(\lambda).$$
\item Conditional for $M$: Metropolis-Hastings algorithm with a Gaussian proposal left-truncated at $0$, $M' \sim Q_{M}(\,\cdot\vertju M) = \Norm_{\trunc}(M,\sigma_{Q_{M}}^2)$, and target distribution $$P_M(M)\propto M^{A_M-1}\exp(-b_M M)\prod_{i=1}^I g_{\sigma_{\Z},M}(Z_i)^{N_{ij}}(1-g_{\sigma_{\Z},M}(Z_i))^{\bar N_{i,j+1}}.$$
\end{enumerate}
\begin{remark}\label{rem:inla}
The dimensionality of the \MCMC algorithm described above equals the number of covariate values $I$ (or blocks of covariates). Large dimensions can be an obstacle to the use of traditional methods (mainly due to matrix inversion). A direction that has not been investigated here would be to replace \MCMC algorithms with faster approximations such as \INLA, see \citet{rue2009approximate}.
\end{remark}
\subsection{Predictive distribution\label{sec:predictive}}
Up to now we have considered the vector $\Z$, which is the evaluation of the Gaussian process $\Zp$ at the observed covariates $\Xb=(X_1,\ldots,X_I)$. We are now interested in new outputs, called test outputs, $\Z_*$, associated with test covariates $\Xb_*=(X_{1}^*,\ldots,X_{I_*}^*)$ which are not observed, \ie the sets $\{X_1,\ldots,X_I\}$ and $\{X_{1}^*,\ldots,X_{I_*}^*\}$ are disjoint.
An appealing feature of the use of Gaussian processes is the possibility to easily derive the predictive distribution of $\Z_*$, which is achieved as follows. The joint prior distribution of the vector of outputs $(\Z,\Z_*)$ is the following $(I+I_*)$-dimensional Gaussian distribution
\begin{align}
\bigg(\begin{matrix}
\,\Z_{\phantom{*}}\\
\,\Z_*
\end{matrix}\bigg)
\sim
\gaussxBig{I+I_*}{
\boldsymbol{0}
}{
\bigg(\begin{matrix}
K(\Xb,\Xb) & K(\Xb,\Xb_*) \\
K(\Xb_*,\Xb) & K(\Xb_*,\Xb_*)
\end{matrix}\bigg)
},
\label{eq:gp-ads desired joint}
\end{align}
where the covariance matrices $K(\Xb,\Xb)$, $K(\Xb,\Xb_*)=K(\Xb_*,\Xb)^\top$ and $K(\Xb_*,\Xb_*)$ (resp. $I\times I$, $I\times I_*$ and $I_*\times I_*$ matrices) are defined by their entries according to the choice of the Gaussian process.
The conditional density of $\Z_*$ given $\Z$ is the following Gaussian distribution \citep[see][]{Rasmussen:2006aa}:
\begin{align}\label{eq:cond_Z_star}
&\Z_*\vertju \Xb_*,\Xb,\Z\sim \Norm_{I_*}(m_*(\Z),K_*), \text{ with } m_*(\Z)=K(\Xb_*,\Xb) K(\Xb,\Xb)^{-1}\Z,\\
&\text{ and } K_*= K(\Xb_*,\Xb_*)- K(\Xb_*,\Xb) K(\Xb,\Xb)^{-1} K(\Xb,\Xb_*).\nonumber
\end{align}
The predictive distribution of $\Z_*$ is obtained by integrating out $\Z$ in the conditional distribution (\ref{eq:cond_Z_star}) according to the posterior distribution $\pi(\Z\vert \Yb,\Xb)$:
\begin{equation}\label{eq:pred_Z_star}
\pi(\Z_*\vertju \Xb_*,\Yb) = \int \pi(\Z_*\vertju \Xb_*,\Xb,\Z) \pi(\Z\vert \Yb,\Xb) \dd \Z.
\end{equation}
Simulating from a predictive distribution of the form of (\ref{eq:pred_Z_star}) is described in Algorithm~\ref{algo:pred}. Once a sample of $\Z$ from the posterior distribution $\pi(\Z\vert \Yb,\Xb)$ is available, one obtains a sample from the predictive distribution at almost no extra cost, by sampling from the multivariate normal distribution (\ref{eq:cond_Z_star}).
One matrix, $K(\Xb,\Xb)$, has to be inverted, but that computation is already done for the \MCMC sampler. The variance $K_*$ of (\ref{eq:cond_Z_star}) is to be computed once. Then it is efficient to draw a sample of the desired size from the centred normal $\Norm(0,K_*)$, and then add the means $m_*(\Z)$ for $\Z$ in the posterior sample. We can obtain the predictive distribution of any $\Z_*$ associated with any test covariates $\Xb_*$, hence allowing prediction in the whole space $\X$.
\begin{center}
\begin{minipage}[c]{11cm}
\begin{algorithm}[H]
\caption{Predictive distribution simulation\label{algo:pred}}
\begin{itemize}
\item Sample $\Z$ from the posterior distribution $\pi(\Z\vert \Yb,\Xb)$
\item Given $\Z$, sample $\Z_*$ from the conditional distribution $\pi(\Z_*\vertju \Xb_*,\Xb,\Z)$
\end{itemize}
\end{algorithm}
\end{minipage}
\end{center}
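Algorithm~\ref{algo:pred} and the Gaussian conditional in Equation~\eqref{eq:cond_Z_star} can be sketched as follows. This is an illustrative Python snippet, not the authors' code; it assumes a squared exponential kernel and the function names are ours.

```python
import numpy as np

def se_kernel(x1, x2, sigma=1.0, lam=1.0):
    # squared exponential covariance: sigma^2 exp(-(x1 - x2)^2 / (2 lam^2))
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-d**2 / (2.0 * lam**2))

def predictive_draw(Z, X, X_star, rng, sigma=1.0, lam=1.0, jitter=1e-9):
    """One draw of Z_* | X_*, X, Z from the Gaussian conditional."""
    K = se_kernel(X, X, sigma, lam) + jitter * np.eye(len(X))
    K_s = se_kernel(X_star, X, sigma, lam)           # K(X_*, X)
    K_ss = se_kernel(X_star, X_star, sigma, lam)     # K(X_*, X_*)
    A = K_s @ np.linalg.inv(K)                       # K(X_*, X) K(X, X)^{-1}
    m_star = A @ Z                                   # predictive mean m_*(Z)
    K_star = K_ss - A @ K_s.T + jitter * np.eye(len(X_star))  # predictive covariance K_*
    L = np.linalg.cholesky(K_star)
    return m_star + L @ rng.standard_normal(len(X_star)), m_star, K_star
```

In practice one would reuse the inverse (or a Cholesky factor) of $K(\Xb,\Xb)$ already computed for the \MCMC sampler, as noted above.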
\section{Covariance matrices\label{sec:covariance-matrix}}
We work with the squared exponential (SE), Ornstein--Uhlenbeck (OU), and rational quadratic (RQ) covariance functions. The next table provides the normalized covariance function $\tilde{K}(X_1,X_2)=\tilde{K}_{\lambda}(X_1,X_2)$ for these three options.
\begin{center}
\begin{tabular}{cc}
\hline
Covariance function & $\tilde{K}_{\lambda}(X_1,X_2)$ \\
\hline
Squared exponential (SE) & $\exp\big(- (X_1-X_2)^2/(2\lambda^2)\big)$ \\
Ornstein--Uhlenbeck (OU) & $\exp\big(- \vert X_1-X_2\vert/\lambda\big)$ \\
Rational quadratic (RQ) & $\big(1+ (X_1-X_2)^2/(2\lambda^2)\big)^{-1}$ \\
\hline
\end{tabular}
\end{center}
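For reference, the three normalized covariance functions in the table can be written as follows (illustrative Python; the function names are ours):

```python
import numpy as np

def k_se(x1, x2, lam):  # squared exponential (SE)
    return np.exp(-(x1 - x2) ** 2 / (2.0 * lam ** 2))

def k_ou(x1, x2, lam):  # Ornstein-Uhlenbeck (OU)
    return np.exp(-np.abs(x1 - x2) / lam)

def k_rq(x1, x2, lam):  # rational quadratic (RQ)
    return 1.0 / (1.0 + (x1 - x2) ** 2 / (2.0 * lam ** 2))
```

All three equal $1$ at $X_1=X_2$ and decay with $\vert X_1-X_2\vert$ at a rate set by the lengthscale $\lambda$.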
\section{Distributional properties\label{sec:suppl-properties}}
The purpose of this section is to present key distributional properties of the \DGEM prior in terms of (i) moments and continuity, (ii) full support, (iii) dependence and (iv) size-biased permutations. Proofs are deferred to Section~\ref{sec:appendices_methodo}.
\subsection{Marginal moments and continuity}
We start by establishing the continuity of the sample paths of the process $\p\sim$\DGEM$(M)$ and providing its marginal moments.
\begin{prop}\label{prop:moments} Let $\p\sim$\DGEM$(M)$. Then $\p$ is stationary and marginally, $\p\sim\GEM(M)$. Also, $\p$ has continuous paths (\ie $X\rightarrow (p_1(X),p_2(X),\ldots)$ is continuous for the sup norm), and its marginal moments are
\begin{align*}
&\E(p_j(X)) = \frac{M^{j-1}}{(M+1)^j},\quad \E(p_j^n(X)) = \frac{n!}{M_{(n)}}\left(\frac{M}{M+n}\right)^j,\\
&\var(p_j(X))=\frac{2M^{j-1}}{(M+1)(M+2)^{j}}-\frac{M^{2(j-1)}}{(M+1)^{2j}},\\
&\cov(p_j(X),p_k(X))=\frac{M^{(j\Max k)-1}}{(M+1)^{\vert j-k \vert +1}(M+2)^{j\Min k}}-\frac{M^{j+k-2}}{(M+1)^{j+k}},\,k\neq j,
\end{align*}
for any $j,k\geq 1$, $n\geq 0$, and where $M_{(n)}=M(M+1)\ldots (M+n-1)$ denotes the ascending factorial, $j\Max k=\max(j,k)$ and $j\Min k=\min(j,k)$.
\end{prop}
Note that the formula for $\cov(p_j(X),p_k(X))$ does not hold for $k=j$ as it does not reduce to $\var(p_j(X))$.
The stationarity of the process, with \GEM marginals, does not constrain the data to come from a stationary process: the hierarchical level on the precision parameter $M$ enables handling of diverse data structures.
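The marginal moments of Proposition~\ref{prop:moments} can be checked by a quick stick-breaking simulation (an illustrative Python sketch, not part of the paper; the sample sizes are arbitrary):

```python
import numpy as np

# Stick-breaking draws p_j = V_j prod_{l<j} (1 - V_l) with V_j ~ Be(1, M) iid,
# so that marginally p ~ GEM(M)
rng = np.random.default_rng(1)
M, J, n_rep = 2.0, 5, 200_000
V = rng.beta(1.0, M, size=(n_rep, J))
stick = np.hstack([np.ones((n_rep, 1)), np.cumprod(1.0 - V, axis=1)[:, :-1]])
p = V * stick

j = np.arange(1, J + 1)
mean_theory = M ** (j - 1) / (M + 1.0) ** j                  # E(p_j)
m2_theory = (2.0 / (M * (M + 1.0))) * (M / (M + 2.0)) ** j   # E(p_j^n) with n = 2
```

The empirical averages of $p_j$ and $p_j^2$ match the stated formulas up to Monte Carlo error.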
\subsection{Full support of the prior\label{sec:full_support}}
The full support of the dependent Dirichlet process is proved by \citet{barrientos2012support}. Here we consider the general case of a stick-breaking prior $\Pi$ \citep{ishwaran2001gibbs} on the infinite dimensional (open) simplex
\begin{equation}\label{eq:S}
\Simplex = \left\{\p: \sum_{i=1}^\infty p_i = 1,\,\forall i \in \N^*, p_i> 0\right\}.
\end{equation}
The prior is given by $V_i\sim Be(a_i,b_i)$ independently, with $a_i,b_i>0$, and $p_i=V_i\prod_{l<i}(1-V_l)$. This class of prior distributions includes the \GEM distribution, as well as the distribution of the weights of the two-parameter Poisson--Dirichlet process.
\begin{prop}[Full support of the \GEM prior]\label{prop:full_supp_GEM}
For any $\epsilon>0$ and any $p^*\in \Simplex$,
$$\Pi(p:\lVert p^*-p\rVert_1 <\epsilon) >0.$$
\end{prop}
A proof can be found in \citet{bissiri2014topological}. We provide in Section~\ref{sec:appendices_methodo} another proof based on a different technique.
For the dependent \GEM we introduce $ \mathcal C(\mathcal X)_+$, the set of positive and continuous functions from $\mathcal X$ to $\R$, and $\| \cdot \|_1$, the $L_1$ norm over $\mathcal X$.
\begin{prop}[Full support of the \DGEM prior]\label{prop:full_supp_DGEM}
Let $(V_j(X), X \in \mathcal X)$ be i.i.d.\ stochastic processes such that almost surely $V_j \in \mathcal C(\mathcal X)_+$, with $\mathcal X$ a compact subset of $\R^d$. Let $\mathbb P$ be the distribution of $V_j$ and $\mathbb H$ be the support of the processes $V_j$, \ie for all $v \in \mathbb H$,
$$ \forall \epsilon >0, \quad \mathbb P\left( \|V - v \|_1 \leq \epsilon \right) >0.$$
Then for all $\mathbf p^\star (\cdot) = \psi( \mathbf v^\star ) $ with $\mathbf v^\star = (v_j^\star , j \geq 1) $ and $v_j^\star \in \mathbb H $ for all $j\geq 1$,
$$ \pi \left( \sum_j \| p_j - p^\star_j \|_1 \leq \epsilon \right) >0, \quad \forall \epsilon >0,$$ where $\pi$ is the distribution associated with $\mathbb P$ after the transformation $\psi$ and
$$ \| p_j - p^\star_j \|_1 = \int_{\mathcal X} |p_j(x) - p_j^\star(x) | dx. $$
\end{prop}
Note that when the $Z_j$ are Gaussian processes viewed as elements of $\mathcal C([0,1])$, such as those considered in this paper, with $V_j = F_M^{-1}( \Phi_{\sigma_Z}(Z_j))$, the set $ \mathbb H $ contains
$$\left\{(p_j, j\geq 1); \, p_j \in \mathcal C([0,1]),\, \sum_jp_j(x)=1, \, p_j(x) \geq 0 \, \forall j\geq 1\right\}.$$
\subsection{Joint law of a sample from the prior}
First, denote by $\mu_M=\mu_M(X_1,X_2)$ the dependence factor for the process evaluated at two covariates $X_1$ and $X_2$ defined by:
\begin{equation}\label{eq:mu_M}
\mu_M(X_1,X_2) = (M+1)^2\E\big(V(X_1)V(X_2)\big).
\end{equation}
No analytical expression of $\mu_M$ is available, so we resort to numerical simulation in order to compute it, \cf Figure~\ref{fig:mu_asymptotics}. We observe that $\mu_M$ is decreasing with respect to the distance between $X_1$ and $X_2$, ranging between the two extreme cases identified as follows:
\begin{itemize}
\item \emph{equality} case, $X_1=X_2$, \ie $V(X_1)=V(X_2)$, then $\mu_M=2(M+1)/(M+2)=1+M/(M+2)$,
\item \emph{independent} case, $V(X_1)\independent V(X_2)$ (intuitively when $\vert X_1 - X_2 \vert\rightarrow \infty$), then $\mu_M=1$.
\end{itemize}
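Such a simulation can be sketched as follows (illustrative Python, not the authors' code). It assumes $\sigma_Z=1$ as in Figure~\ref{fig:mu_asymptotics}, so that the kernel enters only through the correlation $r$ of the bivariate Gaussian $(Z(X_1),Z(X_2))$, and uses $F_M^{-1}(u) = 1-(1-u)^{1/M}$, the $\Be(1,M)$ quantile function.

```python
import numpy as np
from math import erf, sqrt

_phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))  # N(0,1) cdf

def mu_M(M, r, n=200_000, seed=0):
    """Monte Carlo estimate of mu_M = (M+1)^2 E[V(X1) V(X2)], where
    (Z(X1), Z(X2)) is standard bivariate normal with correlation r
    and V = F_M^{-1}(Phi(Z)), F_M the Beta(1, M) cdf."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = r * z1 + np.sqrt(1.0 - r ** 2) * rng.standard_normal(n)
    v1 = 1.0 - (1.0 - _phi(z1)) ** (1.0 / M)   # Beta(1, M) quantile transform
    v2 = 1.0 - (1.0 - _phi(z2)) ** (1.0 / M)
    return (M + 1.0) ** 2 * np.mean(v1 * v2)
```

The estimate recovers the equality value $2(M+1)/(M+2)$ at $r=1$, the independence value $1$ at $r=0$, and decreases in between.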
\begin{center}
\begin{figure}[ht!]
{\centering \includegraphics[width=.45\linewidth]{Figures/asymp_mu-1} \\
}
\caption{Dependence factor $\mu_M(X_1,X_2) = (M+1)^2\E\big(V(X_1)V(X_2)\big)$ \wrt $\vert X_1-X_2\vert$ for $M\in\MM$, where $\V$ is obtained by transforming a Gaussian process with squared exponential covariance function, with $\sigma_{Z}=1$ and $\lambda=1$.}
\label{fig:mu_asymptotics}
\end{figure}
\end{center}
\begin{prop}\label{prop:joint_law}
Consider observations $\Y_1^n=(Y_{1,1},\ldots,Y_{n,1})$ and $\Y_2^m=(Y_{1,2},\ldots,Y_{m,2})$ at two sites $X_1$ and $X_2$, sampled from the data model~\eqref{eq:mixture_model} conditionally on the process $\p\sim $\DGEM$(M)$. The joint law of $Y_{1,1}$ and $Y_{1,2}$ is:
\begin{equation}\label{eq:joint_law_2}
\P(Y_{1,1}=j,Y_{1,2}=k) = (M+1-\mu_M)M^{\abs{j-k}-1}(M^2-1+\mu_M)^{(j\wedge k)-1}/(M+1)^{j + k},
\end{equation}
for $k\neq j$ and
\begin{equation}
\label{eq:joint_law_j_j}
\P(Y_{1,1}=j,Y_{1,2}=j) = \mu_M(M^2-1+\mu_M)^{j-1}/(M+1)^{2j},
\end{equation}
where
$\mu_M(X_1,X_2) = (M+1)^2\E\big(V(X_1)V(X_2)\big)$ and $j\wedge k=\min(j,k)$.
\end{prop}
Equation~\eqref{eq:joint_law_2} reduces to $M^{j+k-2}/(M+1)^{j+k}$ in the \emph{independent} case (\ie $V(X_1)\independent V(X_2)$), which is indeed equal to $\P(Y_{1,1}=j)\P(Y_{1,2}=k)$.
The probability that both first picks are equal is obtained by summing Equation~\eqref{eq:joint_law_j_j} for all positive $j$:
\begin{align}\label{eq:joint_law_equal}
\P(Y_{1,1}=Y_{1,2}) = \frac{\mu_M}{2M+2-\mu_M}.
\end{align}
We can see that in the \emph{independent} case, Equation~(\ref{eq:joint_law_equal}) reduces to the
probability that two draws at the same site $X_1$ belong to the same species, \ie
$\P(Y_{1,1}=Y_{2,1}) = {1}/{(2M+1)}$,
obtained by summing all squares of $M^{j-1}/(M+1)^{j}$.
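These properties can be verified numerically from the truncated joint law (an illustrative Python check; the truncation level is arbitrary and the function names are ours):

```python
import numpy as np

def joint_pmf(M, mu, jmax=150):
    """Truncated matrix of P(Y_{1,1} = j, Y_{1,2} = k), j, k = 1..jmax,
    from Eqs. (joint_law_2) and (joint_law_j_j)."""
    P = np.zeros((jmax, jmax))
    for j in range(1, jmax + 1):
        for k in range(1, jmax + 1):
            if j == k:
                P[j - 1, k - 1] = mu * (M**2 - 1 + mu)**(j - 1) / (M + 1)**(2 * j)
            else:
                P[j - 1, k - 1] = ((M + 1 - mu) * M**(abs(j - k) - 1)
                                   * (M**2 - 1 + mu)**(min(j, k) - 1)
                                   / (M + 1)**(j + k))
    return P
```

The probabilities sum to one, the diagonal sums to $\mu_M/(2M+2-\mu_M)$, and the independent case $\mu_M=1$ factorizes into the product of the marginals $M^{j-1}/(M+1)^j$.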
\subsection{Size-biased permutations\label{sec:size-biased}}
In this section we derive some general results about size-biased permutations in a covariate-dependent model, which are useful for the understanding of the \DGEM model. Let $\p\equ(p_1,p_2,\ldots)$ be a probability sequence. A size-biased permutation ($\SBP$) of $\p$ is a sequence $\tilde \p=(\tilde p_1,\tilde p_2,\ldots)$ obtained by reordering $\p$ by a permutation $\sigma$ with particular probabilities. Namely, the first index appears with a probability equal to its weight, $\P(\sigma_1\equ j)= p_j$; the subsequent indices appear with a probability proportional to their weight among the remaining indices, \ie for $k$ distinct integers $j_1,\ldots,j_k$,
\begin{equation}\label{eq:size_biased}
\P(\sigma_k=j_{k} \vert \sigma_1= j_{1},\ldots ,\sigma_{k-1}= j_{k-1})= \frac{p_{j_k}}{1-p_{j_1}-\ldots-p_{j_{k-1}}}.
\end{equation}
We first extend the following result of Pitman \citep[see, for example, Equation~(2.23) of][]{pitman2006combinatorial}:
\begin{equation}\label{eq:pitman_lemma}
\E\Big(\sum f(p_j)\Big)=\E\Big(\sum f(\tilde p_j)\Big)=\E\bigg(\frac{f(\tilde p_1)}{\tilde p_1}\bigg),
\end{equation}
for any measurable function $f$.
\begin{prop}\label{prop:generalize_pitman}
Let $\tilde \p$ be a size-biased permutation of $\p$. For any measurable function $f$ and any integer $k\geq 1$, we have
\begin{equation}\label{eq:pitman_gene1}
\E\bigg(\sum_{(*)} f(p_{i_1},\ldots, p_{i_k})\bigg)=\E\bigg(f(\tilde p_{1},\ldots, \tilde p_{k})\prod_{i=1}^k(1-\tilde p_1-\cdots-\tilde p_{i-1})/\tilde p_i\bigg),
\end{equation}
where the sum $(*)$ runs over all distinct $i_1,\ldots,i_k$, and with the convention that the product in the right-hand side of Equation~\eqref{eq:pitman_gene1} equals $1/\tilde p_1$ when $k=1$.
\end{prop}
When it comes to averaging sums of transforms of $k$ weights $p_{i_1},\ldots, p_{i_k}$ over all distinct $i_1,\ldots,i_k$, the proposition shows that all the required information is encoded in the first $k$ picks $\tilde p_{1},\ldots, \tilde p_{k}$. As stated before, the special case $k=1$ is a well-known lemma. We also mention that the case $k=2$ was proved by \citet{Archer:2013aa}.
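For the \GEM case, the identity~\eqref{eq:pitman_lemma} can be illustrated by simulation: with $f(p)=p^2$ one gets $\E(\sum_j p_j^2)=\E(\tilde p_1)=\E(p_1)=1/(M+1)$, and with $f(p)=p^3$ one gets $\E(\sum_j p_j^3)=\E(\tilde p_1^2)=\E(p_1^2)=2/((M+1)(M+2))$. Below is an illustrative Python check using a truncated stick-breaking representation (the truncation and sample sizes are arbitrary):

```python
import numpy as np

# Truncated stick-breaking draws from GEM(M)
rng = np.random.default_rng(2)
M, J, n_rep = 3.0, 100, 20_000
V = rng.beta(1.0, M, size=(n_rep, J))
stick = np.hstack([np.ones((n_rep, 1)), np.cumprod(1.0 - V, axis=1)[:, :-1]])
p = V * stick

# Empirical versions of E(sum_j p_j^2) and E(sum_j p_j^3)
lhs2 = (p ** 2).sum(axis=1).mean()
lhs3 = (p ** 3).sum(axis=1).mean()
```

The truncation error is of order $(M/(M+1))^J$ and is negligible here.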
Further insight into the \DGEM distribution can be gained by studying the \EPPFname (\EPPF) for the random variables $\Y_1^n=(Y_{1,1},\ldots,Y_{n,1})$ and $\Y_2^m=(Y_{1,2},\ldots,Y_{m,2})$ observed at covariates $X_1$ and $X_2$. See for instance \citet{pitman1995exchangeable,pitman2006combinatorial} for a summary of the importance of partition probability functions. The observations partition $[n]=\{1,2,\ldots,n\}$ and $[m]=\{1,2,\ldots,m\}$ into $k+k_1+k_2$ clusters of distinct values, where
\begin{itemize}
\item $k$ clusters are commonly observed, with respective frequencies $\textbf{n}=(n_1,\ldots,n_{k})$ and $\textbf{m}=(m_1,\ldots,m_{k})$,
\item $k_1$ (resp. $k_2$) clusters are observed only at the site of covariate $X_1$ (resp. $X_2$), with frequencies $\tilde{\textbf{n}}=(\tilde n_1,\ldots,\tilde n_{k_1})$ (resp. $\tilde{\textbf{m}}=(\tilde m_1,\ldots,\tilde m_{k_2})$).
\end{itemize}
The \EPPF can be expressed as follows
\begin{align}\label{eq:pEPPF}
p(\textbf{n},\tilde{\textbf{n}},\textbf{m},\tilde{\textbf{m}}) &=\E\bigg(\sum_{(*)} p_{i_1}^{n_1}(X_1)p_{i_1}^{m_1}(X_2)\ldots p_{i_k}^{n_k}(X_1)p_{i_k}^{m_k}(X_2) \nonumber\\
& \times p_{j_{1}}^{\tilde n_{1}}(X_1) \ldots p_{j_{k_1}}^{\tilde n_{k_1}}(X_1) \times p_{l_{1}}^{\tilde m_{1}}(X_2) \ldots p_{l_{k_2}}^{\tilde m_{k_2}}(X_2)\bigg)
\end{align}
where the sum $(*)$ runs over all $(k+k_1+k_2)$-tuples $(i_1,\ldots,i_k,j_{1},\ldots,j_{k_1},l_{1},\ldots,l_{k_2})$ with pairwise distinct elements.
In non-covariate-dependent models, the \EPPF can be derived as follows. The expression in Equation~\eqref{eq:pEPPF} reduces to a simpler sum $p(\textbf{n})$, which equals the conditional expectation of the first few elements of a size-biased permutation $\tilde \p$ given $\p$, and one obtains, by application of Proposition~\ref{prop:generalize_pitman} with $f(p_{1},\ldots, p_{k}) = p_1^{n_1}\ldots p_k^{n_k}$:
\begin{equation*}
p(\textbf{n})=\E\bigg[\prod_{i=1}^k \tilde p_{i}^{n_i-1}\prod_{i=1}^{k-1}\Big(1-\sum_{j=1}^i \tilde p_j\Big)\bigg].
\end{equation*}
The \ISBPname (\ISBP) property that characterizes the \GEM distribution \citep[\cf][]{pitman1996random} can then be used to replace the first few elements of the size-biased permutation $\tilde \p$ by the first few elements of $\p$:
\begin{equation*}
p(\textbf{n})=\E\bigg[\prod_{i=1}^k p_{i}^{n_i-1}\prod_{i=1}^{k-1}\Big(1-\sum_{j=1}^i p_j\Big)\bigg].
\end{equation*}
The final steps are to use the stick-breaking representation of $\p$ with independent Beta random variables $\V$,
and derive the \EPPF by computing the moments of Beta random variables (see Equation~\eqref{eq:moment_of_beta})
\begin{equation*}
p(\textbf{n})=\frac{M^k}{M_{(n)}}\prod_{j=1}^k \left(n_j-1\right)!
\end{equation*}
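This closed form is the Ewens sampling formula, which also governs the Chinese restaurant process (CRP) with concentration $M$; it can be cross-checked by simulation (illustrative Python; the partition, sample sizes, and function names are ours):

```python
import numpy as np
from math import factorial

def crp_partition(M, n, rng):
    """One draw of the labeled partition of [n] from a CRP with concentration M."""
    labels = [0]
    for i in range(1, n):
        counts = np.bincount(labels)
        probs = np.append(counts, M) / (i + M)   # existing tables, then a new one
        labels.append(int(rng.choice(len(probs), p=probs)))
    return tuple(labels)

def eppf(M, sizes):
    """p(n) = M^k / M_(n) * prod_j (n_j - 1)!  for block sizes n_1, ..., n_k."""
    n, k = sum(sizes), len(sizes)
    asc = np.prod([M + i for i in range(n)])     # ascending factorial M_(n)
    return M**k * np.prod([factorial(s - 1) for s in sizes]) / asc
```

For $n=3$ the \EPPF sums to one over the five set partitions of $[3]$, and the simulated frequency of a given labeled partition matches its \EPPF value.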
Here, the hindrance to further computation of a closed-form expression for $p(\textbf{n},\tilde{\textbf{n}},\textbf{m},\tilde{\textbf{m}})$ in~\eqref{eq:pEPPF} is, to the best of our knowledge, twofold: (i) the sum in Equation~\eqref{eq:pEPPF} does not reduce to any conditional expectation of the first few elements of a size-biased permutation of $\p$, and (ii) the \ISBPname property is not straightforward to generalize to covariate-dependent distributions, hence equality in distribution between $(\tilde p_1(X_1),\tilde p_1(X_2))$ and $(p_1(X_1), p_1(X_2))$ is not a known property (whereas it is marginally true).
Notwithstanding this, \EPPF have been obtained in the covariate-dependent literature, though not for stick-breaking constructions, but when the dependent process is defined by normalizing random probability measures, such as completely random measures. See for instance \citet{lijoi2013bayesian,kolossiatis2013bayesian,griffin2013comparing}. See also \citet{muller2011product} for an approach based on product partition models.
\section{Proofs\label{sec:appendices_methodo}}
\subsection*{Proof of Proposition \ref{prop:moments}}
The process $\V$ constructed in the main paper
is marginally $\Be(1,M)$, hence by the stick-breaking construction, the process $\p\sim$\DGEM$(M)$ has marginally the $\GEM(M)$ distribution. Let $\Zp\sim \GP$ as defined in the paper,
and suppose for simplicity of notation that it is defined on $\X = \R$. Gaussian processes have continuous paths, which in turn holds for $\mathcal{V}=F_M^{-1}\circ\Phi_{\sigma_{\Zp}}(\Zp)$ since the transformation $F_M^{-1}\circ\Phi_{\sigma_{\Zp}}$ is a composition of continuous functions. Denote by $\mathcal{V}_1,\mathcal{V}_2,\ldots$ independent processes of this type, and define $\p=(\p_1,\p_2,\ldots)$ by stick-breaking,
$\p_j=\Psi_j(\mathcal{V}_1,\ldots,\mathcal{V}_j)=\mathcal{V}_j\prod_{l<j}(1-\mathcal{V}_l)$. Then for any $j$, $\Psi_j$ is a continuous function from $(0,1)^j$ to $(0,1)$, so $\p_j$ has continuous paths. This means that $\p=(\p_1,\p_2,\ldots)$ has continuous paths in the sup norm topology.
The expressions for the moments of $p_j(X)$ are derived by using the following moments of a random variable $V\sim\Be(\alpha,\beta)$, for any $j,k\geq 0$:
\begin{equation}\label{eq:moment_of_beta}
\E(V^k) = \frac{\alpha_{(k)}}{(\alpha+\beta)_{(k)}}\,\text{ and }\,
\E\big(V^k(1-V)^j\big) = \frac{\alpha_{(k)}\beta_{(j)}}{(\alpha+\beta)_{(k+j)}}.
\end{equation}
We omit the dependence on $X$ in order to simplify the notation. Note that for $V\sim \Be(1,M)$, one has $\bar V = 1-V\sim \Be(M,1)$. For any $n\geq 0$, $\E(p_j^n)$ follows from
\begin{equation}\label{eq:epjn}
\E(p_j^n)=\E\big(V_j^n\prod_{l<j}(1-V_l)^n\big)=\frac{1_{(n)}}{(M+1)_{(n)}}\Big(\frac{M_{(n)}}{(M+1)_{(n)}}\Big)^{j-1}.
\end{equation}
The formula for $\var(p_j)$ is obtained as a consequence of~\eqref{eq:epjn}, while $\cov(p_j,p_k)$, $k\neq j$, requires the computation of $\E(p_jp_k)$ as follows (suppose without loss of generality that $j>k$)
\begin{align*}
\E(V_j)&\cdot \prod_{j>l>k}\E(\bar V_l)\cdot \E(V_k \bar V_k)\cdot \prod_{k>l}\E(\bar V_l^2)\\
&= \frac{1}{M+1}\Big(\frac{M}{M+1}\Big)^{j-k-1} \Big(\frac{1}{M+1}-\frac{2}{(M+1)(M+2)}\Big) \Big(\frac{M}{M+2}\Big)^{k-1}\\
&=\frac{M^{j-1}}{(M+1)^{j-k+1}(M+2)^k}.
\end{align*}\qed
\subsection*{Proof of Proposition~\ref{prop:full_supp_GEM}}
Let $\Psi:(0,1)^{\N}\rightarrow \Simplex$ be the stick-breaking transform. It has a reciprocal defined on $\Simplex$ whose coordinates are given by
$$V_1 = p_1,\quad V_j = p_j(1-\sum_{l=1}^{j-1} p_l)^{-1}, \quad j\geq 2,$$
which are in $(0,1)$ by construction because for all $j$, $0<p_j<1$.
Let $\epsilon>0$ and $p^*\in \Simplex$. Denote by $V^*$ the reciprocal of $p^*$. Let $M = \min\{m:\lVert p_{1:m}^*\rVert_1>1-\epsilon/3\}$.
Denote by $\Psi_M$ the restriction of $\Psi$ to its first $M$ coordinates.
We have by construction $\Psi_M(V_{1:M}^*)=p_{1:M}^*$. Since $\Psi_M$ is continuous and $\lVert p_{1:M}^*\rVert_1>1-\epsilon/3$, there exist two neighborhoods of $V_{1:M}^*$ in $(0,1)^M$, denoted by $A_\epsilon$ and $B_\epsilon$, such that
$$\forall V_{1:M}\in A_\epsilon,\quad \lVert p_{1:M}\rVert_1>1-\epsilon/3 \text{ for } p_{1:M} = \Psi_M(V_{1:M})$$
and
$$\forall V_{1:M}\in B_\epsilon,\quad \lVert p_{1:M}^* - p_{1:M}\rVert_1 \leq \epsilon/3 \text{ for } p_{1:M} = \Psi_M(V_{1:M})$$
The intersection $A_\epsilon\cap B_\epsilon$ is a nonempty open set of $(0,1)^M$, since it contains $V_{1:M}^*$. Define $D = (A_\epsilon\cap B_\epsilon)\times (0,1)^{\N}$. Then for any $V\in D$, the image $p = \Psi(V)$ satisfies
$$\lVert p-p^* \rVert_1\leq \lVert p_{1:M}-p_{1:M}^* \rVert_1 +1-\lVert p_{1:M}^* \rVert_1+1-\lVert p_{1:M} \rVert_1 \leq \epsilon.$$
In addition, $D$ has positive prior mass, which proves the proposition.\qed
\subsection*{Proof of Proposition~\ref{prop:full_supp_DGEM}}
The proof follows the same lines as that of Proposition~\ref{prop:full_supp_GEM}. For the sake of simplicity and without loss of generality, we assume that $\int_{\mathcal X}dx = 1$.
Let $\mathbf p^\star (\cdot) = \psi( \mathbf v^\star ) $ with $\mathbf v^\star = (v_j^\star , j \geq 1) $ and $v_j^\star \in \mathbb H $ for all $j\geq 1$. Since $F_M(x) = \sum_{j=1}^M p_j^\star(x) $ is an increasing sequence (in $M$) of functions converging to the constant function $1$, we have $\int_{\mathcal X} F_M(x) dx \uparrow 1$, so there exists $M^\epsilon $ such that
$$\int_{\mathcal X}F_{M^\epsilon}(x) dx \geq 1 -\epsilon/3.$$
The operator $\mathbf \psi_{M} : \mathbb H^M \rightarrow \mathcal C(\mathcal X)^M$ defined by
$ \mathbf \psi_M (V_j(\cdot), j\leq M) = (V_j \prod_{i<j}(1-V_i)(\cdot), j \leq M)$ is continuous for the $L_1$ norm on $\mathcal X$ for all $M$. Hence there exists an $L_1$ open neighbourhood of $(v_j^\star, j\leq M^\epsilon)$, say $V_\epsilon$, such that if $(v_j , j \leq M^\epsilon) \in V_\epsilon$, then
$$\sum_{j=1}^{M^\epsilon} \|p_j - p_j^\star\|_1 \leq \epsilon/3, \quad (p_j, j\leq M^\epsilon) = \mathbf \psi_{M^\epsilon}( v_j , j \leq M^\epsilon).$$
The rest of the proof is the same as for Proposition~\ref{prop:full_supp_GEM}. \qed
\subsection*{Proof of Proposition~\ref{prop:joint_law}}
By conditional independence
\begin{align*}
\P(Y_{1,1}=j,Y_{1,2}=k) &= \E\big(\P(Y_{1,1}=j,Y_{1,2}=k\vertju \p(X_1),\p(X_2))\big)\\
&= \E(p_j(X_1)p_k(X_2)).
\end{align*}
Suppose that $j > k$ (the case $j<k$ is symmetric); then the last quantity can be decomposed into the following product of four groups of terms
\[
\begin{array}{@{}r@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}
&\E(V_j(X_1))&\,\cdot\,&\prod_{k<l<j}\E(\bar V_l(X_1))&\,\cdot\,& \E(\bar V_k(X_1)V_k(X_2))&\,\cdot\,&\prod_{l<k}\E(\bar V_l(X_1)\bar V_l(X_2))\\
&=\frac{1}{M+1}&\,\cdot\,&\Big(\frac{M}{M+1}\Big)^{j-k-1} &\,\cdot\,&\Big(\frac{1}{M+1}-\frac{\mu_M}{(M+1)^2}\Big) &\,\cdot\,&\Big(1-\frac{2}{M+1}+\frac{\mu_M}{(M+1)^2}\Big)^{k-1}
\end{array}
\]
which sums up to the desired quantity.
The case $k=j$ is treated in a similar fashion.\qed
\subsection*{Proof of Proposition \ref{prop:generalize_pitman}}
By definition of the size-biased permutation, $\P(\tilde p_1 = p_i \vertju \p)=p_i$, $\P(\tilde p_2 = p_{i_2} \vertju \tilde p_1 = p_{i_1},\p)=\frac{p_{i_2}}{1-p_{i_1}}$, and
\begin{equation}\label{eq:proba_in_pitman_generalized}
\P\big[(\tilde p_{1},\ldots, \tilde p_{k}) = (p_{i_1},\ldots, p_{i_k}) \vertju \p\big]=\prod_{l=1}^k\frac{ p_{i_l}}{1- p_{i_1}-\cdots- p_{i_{l-1}}}.
\end{equation}
Hence the right-hand side term in Proposition~\ref{prop:generalize_pitman} can be computed by double expectation and conditioning on $\p$
\begin{equation*}
\E\big[\E\big(f(\tilde p_{1},\ldots, \tilde p_{k})\prod_{i=1}^k(1-\tilde p_1-\cdots-\tilde p_{i-1})/\tilde p_i\vertju \p \big)\big],
\end{equation*}
and a simplification arises with the probability in \eqref{eq:proba_in_pitman_generalized} when enumerating over all distinct indices $i_1,\ldots,i_k$.\qed
\subsection*{Proof of Proposition~\ref{prop:covariance_diversity} of the main document}
Let $\bar H(X) = 1 - H_\Simp(X) = \sum_j p_j^2(X)$. Then $\cov(H_\Simp(X_1),H_\Simp(X_2)) = \cov(\bar H(X_1),\bar H(X_2))$. First note that $\E(\bar H(X))=\E(p_1(X))=\E(V_1(X))=1/(M+1)$ by virtue of Equation~\eqref{eq:pitman_lemma}. Then $\E(\bar H(X_1)\bar H(X_2))$ is obtained by summing the following terms:
\begin{align*}
&\text{for all }\,j\geq 1,\,\E(p_j(X_1)p_j(X_2))=\nu_{2,2}\omega_{2,2}^{j-1}, \\
&\text{for all }\,j\neq k\geq 1,\,\E(p_j(X_1)p_k(X_2))=\nu_{2,0}\omega_{2,0}^{\vert j-k\vert-1}\gamma_{2,2}\omega_{2,2}^{(j\Min k)-1},
\end{align*}
where the same kind of development as in the proof of Proposition~\ref{prop:joint_law} is employed.
For the variance of the Simpson index, one needs, by omitting the covariate $X$ in the notation
\begin{align*}
\E\Big(\big(\sum_j p_j^2\big)^2\Big)&=\E\Big(\sum_{i,j} p_i^2 p_j^2\Big)
=\E\Big(\sum_{i\neq j} p_i^2 p_j^2\Big)+\E\Big(\sum_{i} p_i^4\Big)\\
&=\E(p_1(1-p_1) p_2)+\E(p_1^3)
= \E(V_1(1-V_1)^2)\E(V_2)+\E(V_1^3)\\
&=(M+6)/(M+1)_{(3)},
\end{align*}
by Proposition~\ref{prop:generalize_pitman} and the moments~\eqref{eq:moment_of_beta}.\qed
\section{Introduction}
\label{sec:Intro}
The non-uniform spatial distribution of galaxies in the Universe, known as
large-scale structure, likely reflects the non-uniform spatial distribution of
dark matter. \cite{Kaiser84} showed that galaxies are biased tracers of
the total underlying mass distribution. Measurements of galaxy
clustering therefore reflect, in part, the clustering of dark matter
at a given cosmic epoch. Further, measurements of galaxy clustering
over cosmic time constrain the evolving relationship between dark
matter, which is governed by gravity and reflects cosmological
parameters, and galaxy properties, which are governed by processes
associated with baryonic physics. As such, galaxy clustering
measurements have the ability to constrain both cosmology \citep{Peacock01,
Seljak04, Eisenstein05} and galaxy evolution physics \citep{Zheng07,
Conroy09, Zehavi12}
and are thus a powerful tool in the study of large-scale structure formation.
The standard method used to measure galaxy clustering is the two-point
correlation function (2PCF), which quantifies the excess probability
of finding a galaxy in a given volume relative to a random
distribution \citep{Davis83, Landy93}. Combined
with redshift information, the 2PCF produces a statistical
representation of the three-dimensional galaxy density distribution as
a function of scale.
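For concreteness, the estimator of \cite{Landy93} cited above combines normalized data--data ($DD$), data--random ($DR$), and random--random ($RR$) pair counts,
\begin{equation*}
\xi(r)=\frac{DD(r)-2\,DR(r)+RR(r)}{RR(r)},
\end{equation*}
where the random catalog is constructed to share the survey selection function.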
In recent years, the clustering of galaxies with respect to
dark matter has been interpreted using the ``halo model''
framework. The clustering amplitude of both dark matter particles and
dark matter halos can be fit analytically, for a given cosmology,
using $N$-body simulations \citep{Mo96,Sheth01}.
Simple analytic models have been proposed to describe how galaxies
populate dark matter halos as a function of scale and mass \citep{Jing98,
Peacock00, Seljak00, Benson00, Cooray02}. In these halo occupation
distribution (HOD) models, galaxies populate dark matter halos using a
probability distribution generated to match measured galaxy clustering
statistics, therefore statistically connecting the baryonic matter of
galaxies to the dark matter halos \citep{Berlind02}. Such
HOD models typically include descriptions of the halo structure in
terms of the ``central" galaxy within a given dark matter halo and
the sub-halo ``satellite" galaxies that reside within the
parent dark matter halo \citep{Kravtsov04}.
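A widely used parameterization of this central/satellite split \citep[e.g.][]{Zheng07} writes the mean occupation of halos of mass $M$ as
\begin{equation*}
\langle N_{\rm cen}(M)\rangle=\frac{1}{2}\left[1+{\rm erf}\left(\frac{\log M-\log M_{\rm min}}{\sigma_{\log M}}\right)\right],\qquad
\langle N_{\rm sat}(M)\rangle=\langle N_{\rm cen}(M)\rangle\left(\frac{M-M_0}{M_1'}\right)^{\alpha},
\end{equation*}
where $M_{\rm min}$, $\sigma_{\log M}$, $M_0$, $M_1'$, and $\alpha$ are free parameters constrained by the measured clustering and number density.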
Modern day redshift surveys are uniquely suited
to characterize the large-scale galaxy distribution through the 2PCF
over a wide range of cosmic time, including surveys of local
\citep[$z$$<$0.15,][]{York00, Colless01, DR7}, intermediate \citep[$z\sim0.5$,][]{Blake09,
Eisenstein11, Coil11}, and higher redshift galaxies
\citep[0.7$<$$z$$<$2,][]{VVDS05, Lilly07, NMBS, Newman12}.
Such surveys have shown that the clustering of galaxies is correlated with
various galaxy properties such as luminosity and color,
where the clustering amplitude increases for galaxies with redder color
and/or higher luminosity at least to $z$$<$1 (\citealt{Coil08}, hereafter C08;
\citealt{Zehavi11}).
While there is a strong, demonstrable relationship between galaxy
luminosity and color with clustering amplitude, interpreting this relationship in terms of the underlying
physics can be complicated, as there are competing effects due to varying
star formation histories, mass-to-light ratios, metallicity, dust,
and other complex galaxy physics. A cleaner physical
relationship in the mapping between galaxies and dark matter halos
may be derived from measuring clustering as a function of
properties that are easier to interpret, such as stellar mass and star formation rate (SFR).
Using these galaxy properties, one may obtain direct constraints on
the baryonic processes involved in galaxy evolution rather than with
intermediate dependencies on luminosity and color.
For example, \cite{Zheng07} constrained the HOD of the luminosity-dependent
clustering results from the SDSS and DEEP2 surveys, but the interpretation
is complicated by uncertainties in connecting luminosity and stellar mass for
different galaxy populations. \citeauthor{Zheng07} showed that while one can
use the HOD to constrain the evolution of the stellar-halo mass relationship, further
model refinement requires clustering measurements
for galaxy samples selected by stellar mass.
Measuring clustering with respect to galaxy properties has been made possible by the advent of large redshift
surveys in the last decade. Using the SDSS, \cite{Li06} found that
on large scales ($>1~h^{-1}$ Mpc), the dependence of galaxy clustering on
stellar mass closely mirrors that of luminosity; the clustering bias is
relatively flat below the characteristic stellar mass scale $M^*$ and increases
exponentially above $M^*$. They further investigate how the clustering
amplitude depends on other galaxy properties, including color, morphology,
and stellar mass, and find significant scale-dependent
differences at low stellar mass (log($M_*$/$M_{\sun}$)$\lesssim$10) between red and blue galaxy samples.
At higher redshift ($z\sim0.85$), \cite{Meneux08} measured the
clustering of $\approx$3200 galaxies in the VVDS \citep{VVDS05}
selected by stellar mass. Similar
to the local behavior, \citeauthor{Meneux08} found that the clustering amplitude
increases with stellar mass, while at 9.5$<$log($M_*$/$M_{\sun}$)$<$10.5 it is systematically lower
than in local samples. At higher stellar masses (log($M_*$/$M_{\sun}$)$>$10.5), the clustering
amplitude has not evolved over the same span of time within the measurement errors.
While it is preferable to measure the 2PCF using spectroscopic redshifts, several studies have used photometric redshifts
to measure the two-dimensional angular correlation function with respect to stellar mass. Because photometric redshifts
depend only on accurate panchromatic photometry, larger sample sizes are more easily obtained, particularly at
higher redshifts where spectroscopy requires extensive telescope resources. \cite{Foucaud10} used $K_{s}$-band imaging
from the Palomar Observatory Wide-field Infrared Survey \citep{Bundy06} and CFHT optical photometry
to measure the angular clustering of stellar mass-limited galaxy samples and found that a strong positive correlation exists between
stellar mass and halo mass to $z$$<2$. This result was later confirmed with additional angular
clustering measurements with more precise photometric redshifts from the NEWFIRM Medium Band Survey \citep[NMBS,][]{NMBS}
between 1$<$$z$$<$2 \citep{Wake11}.
\cite{Alexi12} also used the angular correlation function along with
weak lensing maps from COSMOS \citep{Capak07, Ilbert09}
to fit an HOD model between 0.2$<$$z$$<$1.
They found that the peak of the ratio of $M_*$/$M_{\rm{halo}}$ is a
constant with cosmic time, indicating a fundamental connection
between the fraction of baryons within a given halo mass and the
conversion rate of those baryons into stars. While angular clustering through photo-$z$s is less precise
than the spectroscopic 2PCF, some of the most advanced HOD models to date have been built from such data.
In contrast to stellar mass, there has been very little study of how
galaxy clustering depends on SFR. For local galaxies, \cite{Li08} used SFRs of SDSS galaxies to
study the clustering dependence on the specific star-formation rate
(sSFR, the SFR divided by stellar mass), though not the dependence on
SFR alone. At higher redshifts
($z\sim2$), a recent study by \cite{Lin12} measured the angular
correlation function for $BzK$-selected star-forming galaxies in 160 arcmin$^2$ of the GOODS-North field \citep{GOODS}.
Lin et al. found a significant large-scale clustering dependence with increasing SFR and decreasing sSFR. However,
no clustering analyses using SFR or sSFR have been performed at
$z\sim1$ where the global SFR is rapidly declining \citep{Hopkins06, Zhu09}.
The dependence of galaxy clustering on SFR is interesting for HOD models, as the
parent dark matter halos of galaxies may influence the SFR through
various mechanisms. For galaxy populations that are highly clustered
and therefore reside in massive halos, the
available gas in the galaxy may have been stripped, depleting the galaxy of the
material necessary to sustain further star formation. Alternatively, the gas
may be heated to the virial temperature of the host halo, such that the cooling
time becomes long enough to suppress star formation. However, such
star formation ``quenching'' could arise from secular evolution, which
would not necessarily be dependent upon host halo properties.
Similarly, the sSFR is a measure of the SFR relative
to the existing stellar mass in a galaxy and
reflects how \emph{efficiently} a galaxy is processing gas into stars.
Determining the clustering properties and galaxy bias with respect to SFR and
sSFR as a function of redshift is therefore crucial to understanding how the
host dark matter halo and surrounding environment of a galaxy play a role
in the evolution of star formation.
While clustering analyses of the 2PCF as a function of stellar mass and SFR are
relatively rare at intermediate redshifts, there are measurements of the
relationship between these parameters and galaxy environment.
Environment studies generally measure the galaxy
overdensity for a given population relative to the mean density at the same
redshift, often using the $N^{th}$ nearest neighbor statistic (see \citealt{Muldrew12}
for a review of various environmental measures). The advantage
of using the local environment is that it provides better statistics in what is
essentially a cross-correlation between a given galaxy sample and the mean
density field, rather than the auto-correlation that is used in 2PCF analyses.
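For reference, in its simplest projected form the $N^{th}$ nearest neighbor statistic estimates the local surface density as
\begin{equation*}
\Sigma_N=\frac{N}{\pi D_N^2},
\end{equation*}
where $D_N$ is the projected distance to the $N^{th}$ nearest neighbor; this is the generic textbook definition, and individual surveys adopt variants of it.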
However, environmental analyses can be difficult to model and compare to mock
galaxy simulations, in part because the mean observed galaxy
density is subject to selection effects that depend on the particular survey.
More importantly, density measurements often average across a
range of physical scales (typically with a mean scale of
$\gtrsim1 h^{-1}$ Mpc at $z$$\sim1$) to infer correlations.
Because the ``density-defining" population is generally measured on
scales that are roughly the size of a halo, such measurements preclude probing
smaller sub-halo scales. For example, \cite{Li06} measured the 2PCF
of SDSS galaxies with respect to their structural parameters and found
significant clustering dependence on scales below 1 $h^{-1}$ Mpc, a result
that was not previously seen in environment studies.
The scale-dependence of clustering as a function of galaxy properties
can yield important clues
as to the physical processes involved in galaxy formation and evolution.
As environment and 2PCF analyses are complementary, it is important
to compare the trends found in both kinds of studies to test for
consistency, particularly on large scales.
\citeauthor{Cooper08} (2008, hereafter CP08) used data from SDSS and DEEP2 to quantify
the evolution of the relationships between SFR and sSFR with
environment. They found that the relation between sSFR and overdensity is essentially unchanged
with redshift, suggesting that mergers did not drive the decline
in the global SFR at $z$$<$1. Confirming the results from \cite{Elbaz07}, CP08 also found that galaxies with high SFRs
are found in high density environments at $z\sim1$,
which leads to an ``inversion" of the SFR-density relationship between
$z\sim1$ and $z\sim0$, where high-SFR galaxies are found in relatively
sparse environments.
Other studies have measured the evolution of galaxy environment with
respect to SFR at a fixed stellar mass between 0.2$<$$z$$<$1. Several studies report that
the average color-density relation does not
appear to have evolved since $z$$<$1, indicating that the sSFR-density
relationship is relatively constant with cosmic time since $z=1$ \citep{Cucciati06, Scodeggio09,
Peng10, Sobral11, Grutz11}. These findings
generally agree with the results of CP08, with the caveat that
finer sub-sampling of the most extreme density regions can
reveal a statistically significant difference in galaxy restframe
colors \citep{Cooper10}. Similarly, \cite{Scodeggio09} and \cite{Peng10}
find that bulk trends in the SFR-density relation for field
galaxies (typically with log($M_*$/$M_{\sun}$)$<$11) can also be explained by the
change in stellar mass at a fixed SFR. \cite{Sobral11}
suggests that most, but not all, of the SFR-density correlation is due
to changes in stellar mass in highly star-forming galaxies. The general
consensus from these studies is that there is a fundamental relationship
between stellar mass and large-scale environment and that correlations between
environment and other galaxy properties are due to second-order effects.
In this paper, we quantify the relationship between dark matter halos and
galaxy properties at $z\sim1$ by measuring the clustering of DEEP2 galaxy
samples selected by stellar mass, SFR, and sSFR, on scales of
$\sim0.1-20~h^{-1}$ Mpc. Our results will enable future HOD modeling
studies at $z\sim1$ as a function of these galaxy properties and will further
be useful for projecting the statistical power expected from
large-area baryon acoustic oscillation (BAO) surveys at these redshifts
(e.g. BigBOSS, \citealt{Schlegel11,BigBOSS}).
The structure of the paper is as follows.
In Section 2, we present the DEEP2 dataset and derived galaxy properties used
in this study. In Section~\ref{sec:Sample}, we describe the galaxy samples defined by
stellar mass, SFR, and sSFR. Section~\ref{sec:method} details the
clustering analysis methods using the 2PCF, while Section~\ref{sec:results}
presents the results of our clustering analysis
with respect to the various galaxy properties. We compare our findings to
the recent literature and discuss the broader implications of our results in
Section~\ref{sec:discuss}, and we
present our conclusions in Section~\ref{sec:conclusions}. Throughout this work,
we assume a $\Lambda$CDM cosmology with $\Omega_M$=0.3, $\Omega_\Lambda$=0.7,
$\sigma_{8}$=0.8, and $h=0.7$. All quoted magnitudes are in the AB magnitude
system.
\section{Data}
\label{sec:data}
The DEEP2 Galaxy Redshift Survey \citep{Newman12} is
one of the largest, most complete spectroscopic redshift surveys of the
Universe to $z\sim1$. The DEEP2 survey was conducted on the Keck II telescope
using the DEIMOS spectrograph \citep{Faber03} and
measured reliable redshifts for 31,656 galaxies from 0$<$$z$$<$1.4 over four
fields dispersed across the Northern Sky. The DEEP2 survey targeted galaxies
to a magnitude limit of $R_{\rm{AB}}<$24.1, and in three of the four fields
used a $BRI$ color cut to efficiently target galaxies with $z$$>$0.7.
For this study, we initially select galaxies with
0.74$<$$z$$<$1.4 and a high confidence redshift ($>$95\%, $z_{\rm{quality}}\geq$3).
As we are interested in measuring clustering properties as a function of
several galaxy properties, we retain only those galaxies with well-measured values (i.e.
not flagged as an invalid measurement) for restframe $B$-band magnitude ($M_B$), color ($U-B$), stellar mass ($M_*$),
and star-formation rate (SFR or $\psi$). Because the average SFRs are now estimated
from the galaxy luminosity and restframe color \citep{Mostek12}, we do not exclude
red galaxies from our sample. These combined selections result in
a parent sample of 22,331 galaxies.
\subsection{Calibrations of Stellar Mass and SFR}
\label{sec:datacal}
The stellar masses used in this study are constructed from the prescription
described in \cite{Lin07} and the Appendix of \cite{Weiner09}, where the local restframe
color-M/L relation \citep{Bell03} is extended to higher redshift.
The prescription depends on redshift, restframe $B$-band luminosity, and
the restframe $U-B$ and $B-V$ colors, and is calibrated to a matched sample
of stellar masses derived from $K$-band luminosity measurements \citep{Bundy06}.
Figure~\ref{fig:masscheck} compares stellar masses from the
DEEP2 calibration and matched to a new, extended sample of
7,267 $K$-band stellar masses (K. Bundy, priv. communication) for galaxies
with 0.74$<$$z$$<$1.4. Independently, we confirm that the DEEP2 stellar masses used here are
in agreement with this larger sample of $K$-band derived masses with a rms scatter of 0.25 dex
and a mean offset of -0.05 dex.
The DEEP2 stellar masses are based on a Chabrier IMF, in keeping
with the \citeauthor{Weiner09} calibration.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{./fig1.pdf}
\caption{Comparison of DEEP2 color-M/L stellar masses and
$K$-band stellar masses. The rms scatter between the
stellar mass estimates is 0.25 dex and the sample mean offset is -0.05 dex
for a Chabrier IMF.}
\label{fig:masscheck}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{./fig2.pdf}
\caption{The greyscale in all three panels shows the restframe $U-B$ color
and $B$-band magnitude distribution for the 0.74$<$$z$$<$1.4~ DEEP2 galaxy sample.
(left) Stellar mass contours span 9.0$<$log($M_*$/$M_{\sun}$)$<$11.5 with levels of
$\Delta$log($M_*$/$M_{\sun}$)=0.5. More massive galaxies tend to be redder and brighter. The dashed line shows the
DEEP2 restframe galaxy color selection from
Equation~\ref{eq:color}, which separates the red sequence from the blue cloud.
(middle) SFR contours calibrated from
\cite{Mostek12} spanning 0$<$log($\psi$)$<$1.5 and
levels of $\Delta$log($\psi$)=0.3 $M_{\sun}$ yr$^{-1}$.
Galaxies with higher SFR are bluer and brighter. SFRs correlate with restframe $M_B$ and $U-B$
partly by construction, as optical photometry is used in the calibration.
(right) Contours of sSFR
spanning $-8.5$$>$log($\psi$/$M_{*}$)$>$$-11$ with levels of $\Delta$log($\psi$/$M_{*}$)=0.5 yr$^{-1}$.
Galaxies with higher sSFR are bluer; $U-B$ color and sSFR correlate tightly,
as evidenced by the horizontal contours.
}
\label{fig:colormasscontour}
\end{figure*}
In CP08, SFRs for DEEP2 galaxies were
derived from measurements of the [\ion{O}{2}]~ emission line and an empirical calibration
to galaxy $B$-band luminosity and SFR performed at low redshift
\citep{Moustakas06}. Applying this local calibration to redshifts
$z\sim1$ poses some unique problems, including accounting for
luminosity and dust evolution as well as line emission from AGN
in red sequence galaxies (Yan et al. 2006). More recent work
by \cite{Mostek12} established a new empirical SFR
calibration for DEEP2 galaxies using broadband optical color and UV/optical
SED-based SFRs \citep{Salim09}. \citeauthor{Mostek12} showed that the strong observed
evolution in the restframe $M_B$ from local redshifts
($\sim1.3$ mag to $z=1$) caused the SFRs in CP08 to be
overestimated by an average of $\sim0.5$ dex. However, as this evolution
causes, to first order, a zeropoint offset in the SFR calibration, the
relative trends between SFR and overdensity presented in CP08 are still valid.
Here we use the \citeauthor{Mostek12} SFR calibration and a Salpeter IMF, in keeping
with previous SFR calibrations commonly used in the literature \citep{Kennicutt98,
Moustakas06}.
The sSFRs used here are constructed from the DEEP2 stellar masses and SFRs
described above, using log($\psi$/$M_{*}$)~$\equiv$~log($\psi$)~$-$~log($M_*$/$M_{\sun}$). We convert the
stellar masses from a Chabrier IMF to a Salpeter IMF by adding a constant
0.2 dex offset and quote sSFRs for a Salpeter IMF. We note that both the \cite{Moustakas06} and \cite{Mostek12}
SFR calibrations rely on correlated mean trends in large galaxy samples, and therefore
may not be highly accurate on an individual galaxy-by-galaxy basis
(typically 0.2-0.3 dex rms scatter). However, the \emph{samples} selected from such empirical SFR
calibrations are statistically representative of the mean SFR in the galaxy population.
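A minimal sketch of this sSFR bookkeeping (the function name and signature are ours, purely illustrative):

```python
def log_ssfr_salpeter(log_mstar_chabrier, log_sfr_salpeter):
    """log10 specific SFR [yr^-1]: log(psi) - log(M*), with the
    Chabrier-IMF stellar mass shifted to a Salpeter IMF by the
    constant +0.2 dex offset adopted in the text."""
    return log_sfr_salpeter - (log_mstar_chabrier + 0.2)
```

For example, a galaxy with log($M_*$/$M_{\sun}$)$=$10.0 (Chabrier) and log($\psi$)$=$1.0 has log($\psi$/$M_{*}$)$=-9.2$ yr$^{-1}$ in the Salpeter convention.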
As clustering measurements are more commonly performed as a function of
galaxy luminosity and color, we show in Figure~\ref{fig:colormasscontour}
how our derived stellar masses, SFRs, and sSFRs depend on restframe color
and magnitude, showing contours of each derived property in the $U-B$ - $M_B$
plane. The contours of constant stellar mass generally run perpendicular to
the contours of constant SFR in this plane. When converted to sSFR, these
opposing trends cancel out to produce a largely luminosity-independent
trend with color, as both sSFR and restframe color
generally trace the light-averaged age of the stellar population.
\section{Galaxy Samples}
\label{sec:Sample}
In this section, we describe the construction of samples for
0.74$<$$z$$<$1.4~ DEEP2 galaxies selected by stellar mass, SFR, and sSFR.
For galaxy clustering measurements, it is preferable to use complete
samples for which galaxies of the same `type' (selected by magnitude,
color, stellar mass, etc.) are included to the highest redshift of the sample.
In this way, within a given sample there is no redshift dependence of the
galaxy property of interest, which would necessitate weighting each galaxy
by the volume over which it could be observed. Complete galaxy samples also
facilitate comparison with simulations and HOD modeling, where a galaxy population
can be well described by a few simple selection criteria and a redshift limit.
We create both binned and threshold samples in this study. Constructing samples with independent bins
facilitates identification of clustering trends with respect to each galaxy
parameter, while constructing threshold samples facilitates future HOD
modeling applications. The binned sample sizes are chosen such that the bin
size is greater than or equal to the rms scatter in the galaxy property and so
that there is a sufficient number of galaxies per bin to minimize Poisson
errors. For each galaxy parameter, we treat the bin with the lowest number
density, which generally corresponds to the more massive or higher SFR
galaxies, as a threshold lower limit.
Our galaxy samples cover two primary redshift ranges: 0.74$<$$z$$<$1.05 for
samples that are complete for both blue and red galaxies,
and 0.74$<$$z$$<$1.4~ for samples that are complete only for blue galaxies.
The former redshift range allows us to compare clustering trends between
galaxies with different star-formation histories - including both quiescent
and star-forming galaxies - while the latter allows us to obtain the best
statistical measurements for blue galaxies.
\subsection{Stellar Mass}
\label{sec:smasssample}
As shown in the left panel of Figure~\ref{fig:colormasscontour}, galaxy samples
selected by stellar mass naturally include both red and blue galaxies, but
the ratio of red to blue galaxies strongly depends on stellar mass.
In addition, the $R$-band selection of DEEP2 corresponds to a different
restframe
wavelength selection with redshift (bluer at higher redshift), such that
DEEP2 is complete to different stellar masses for red and blue galaxies as
a function of redshift. To probe the full range of stellar masses allowed by
the data, we construct both color-independent (e.g. `all' galaxies, blue and
red) and color-dependent stellar mass samples for $z$$<$1.05, and we
construct mass samples
for blue galaxies only for $z$$<$1.4. Following \cite{Willmer06}, we use
the standard DEEP2 red and blue galaxy separation defined by
\begin{equation}
(U-B)=-0.032(M_B+21.62)+1.035.
\label{eq:color}
\end{equation}
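As a concrete illustration (a sketch in Python; the function name and any thresholds beyond Equation~\ref{eq:color} are ours, not part of the DEEP2 pipeline), a galaxy is classified as red when it lies above this line:

```python
def is_red(M_B, U_B):
    """DEEP2 red/blue split (Willmer et al. 2006):
    red if (U-B) > -0.032*(M_B + 21.62) + 1.035."""
    cut = -0.032 * (M_B + 21.62) + 1.035
    return U_B > cut
```

For instance, at $M_B=-21.62$ the cut sits at $U-B=1.035$, so a galaxy there with $U-B=1.2$ is red.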
Using this color separation, the left panel of Figure~\ref{fig:massbins}
shows stellar masses for red and blue DEEP2 galaxies as a function of
redshift, with boxes indicating our binned samples to $z$=1.05.
As a starting point, we define color-independent samples including both
red and blue galaxies in two bins in stellar mass.
For the highest stellar mass sample, we use a mass threshold of log($M_*$/$M_{\sun}$)$>$10.8
to ensure a large enough sample size such that clustering measurements can be
performed with reasonable accuracy. This high-stellar-mass sample is roughly volume-limited
for both red and blue galaxies in this stellar mass range. The lower stellar mass
color-independent mass bin spans 10.4$<$log($M_*$/$M_{\sun}$)$<$10.8 but is somewhat
incomplete for red galaxies at $z\sim1$ due to the
R-band magnitude limit in DEEP2. In this mass bin, 30\% of the galaxies
between 0.75$<$$z$$<$0.85 are red while only 12\% of galaxies between
0.95$<$$z$$<$1.05 are red, indicating that
red galaxies are increasingly excluded at the highest redshifts.
Therefore, we can construct only one sample that is
complete for both red and blue galaxies (with log($M_*$/$M_{\sun}$)$>$10.8), and
while we quote results from the lower mass color-independent bin, we
refrain from calculating a number density in this sample. We further
caution that this sample underestimates the contribution from low mass,
high redshift red galaxies missing in DEEP2, and we keep this in mind when
interpreting the results.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\columnwidth]{./fig3a.pdf}
\includegraphics[width=1.0\columnwidth]{./fig3b.pdf}
\caption{Stellar mass selected galaxy samples. (left) Colored points show all
galaxies in the parent sample, where red galaxies are defined to be redder
than the color cut given in Equation~\ref{eq:color}. Solid lines show the mass limits
for color-independent galaxy samples, where the highest mass sample is roughly volume-limited
for both red and blue galaxies. Dot-dashed lines show the lower mass sample limits for blue galaxies only
at a $z$$<$1.05 redshift limit. Red galaxy mass sample limits are given in Tables~\ref{tab:smassresults}
and \ref{tab:smassbinresults}.
(right) Stellar mass bin samples for blue galaxies only, defined
with finer $\Delta$log($M_*$/$M_{\sun}$)=0.3 bins and approximately complete in DEEP2 to $z$$<$1.4.
The highest mass bin in both samples is implemented as a mass
threshold in order to allow sufficient statistics in the clustering
measurement.}
\label{fig:massbins}
\end{figure*}
In order to compare the clustering of red and blue galaxies separately as a
function of stellar mass, we also create red and blue mass-selected samples.
The mass and redshift limits of each sample are given in Table 2, and the
blue samples are shown in Figure~\ref{fig:massbins}. For red galaxies,
we create two samples with 10.5$<$log($M_*$/$M_{\sun}$)$<$11.0 and log($M_*$/$M_{\sun}$)$>$11.0 at
$z$$<$1.05. To facilitate comparison with red galaxies over the same volume,
we create three blue mass-limited bin samples down to a lower
mass of log($M_*$/$M_{\sun}$)$>$9.6, where the blue population is complete at $z=1.05$.
We further create blue mass-limited bin samples over a wider redshift range, to
$z=1.4$, down to a mass of log($M_*$/$M_{\sun}$)$=$9.9 (right panel
of Figure~\ref{fig:massbins}). The blue galaxy mass bin samples to $z$$=$1.4
have a slightly smaller width (0.3 dex) than the lower redshift samples
(0.4 dex), but the width is still greater than the rms scatter
in the stellar mass estimates (0.25 dex, see Figure~\ref{fig:masscheck}).
An upper threshold limit of log($M_*$/$M_{\sun}$)$>$10.8 is used to
define the highest mass bin.
We also define stellar
mass-selected \emph{threshold} samples (not shown in Figure~\ref{fig:massbins})
with which to measure clustering properties. These threshold samples are useful for comparing
with simulations and semi-analytic models, where samples may be defined by a
given mass limit, and for performing HOD modeling. For the color-independent
stellar mass threshold samples, we only report the number density for stellar masses above
log($M_*$/$M_{\sun}$)$>$10.5 due to the limited number of red galaxies in the measured volume.
The sample properties for each stellar mass-selected sample are listed in
Tables~\ref{tab:smassresults} and \ref{tab:smassbinresults}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{./fig4.pdf}
\caption{SFRs using the \cite{Mostek12} calibration versus $M_B$.
Greyscale shows the distribution of DEEP2 galaxies between 0.74$<$$z$$<$1.4.
Contours show lines of constant stellar mass, with the same mass levels as
in Figure~\ref{fig:colormasscontour}. The dashed line shows the approximate
demarcation between red and blue galaxies, as defined in the color-magnitude
diagram and Equation~\ref{eq:color}.}
\label{fig:sfrcontours}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.93\columnwidth]{./fig5a.pdf}
\includegraphics[width=0.93\columnwidth]{./fig5b.pdf}
\caption{Galaxy samples defined by star formation rate. (left) SFR versus
redshift for all red galaxies in the parent sample. SFR-selected bin samples
for red galaxies are shown with dashed lines; these samples are
complete at $z$=1.05 for log($\psi$)$>$0.3 and approximately complete for
log($\psi$)$>$-0.1. (right) SFR versus redshift for all blue galaxies in the parent sample.
SFR-selected bin samples for blue galaxies complete to $z$=1.05
are shown with red, dashed lines, while samples complete to
either $z$=1.25 or $z$=1.4 are shown with black, dot-dashed lines. As with
stellar mass, the highest SFR samples are treated as thresholds.}
\label{fig:sfrbins}
\end{figure*}
\subsection{Star Formation Rate}
\label{sec:sfrsample}
As with stellar mass, galaxies at a given SFR can have a range of restframe
color and magnitude. Figure~\ref{fig:sfrcontours} shows the SFRs for DEEP2
galaxies as a function of $M_B$; the dashed line indicates the separation of
red and blue galaxies following Equation~\ref{eq:color}. While most red
galaxies have an estimated SFR below log($\psi$)$<$0.5 $M_{\sun}$ yr$^{-1}$, only those blue
galaxies with $M_B<$-20 have similar SFRs. Figure~\ref{fig:sfrcontours}
also shows that SFR samples will select across a wide range
of stellar mass (shown with contours), particularly for blue galaxies where
$M_*$~can differ by 1.5 dex at a given SFR.
Therefore, our analysis using SFR-selected samples not only probes
the relationship between clustering and SFR but also has a weak dependence
on stellar mass. We will explicitly remove the stellar mass dependence
by constructing specific SFR samples in Section~\ref{sec:ssfrsample}.
While one could in theory construct color-independent samples in SFR given
a large enough survey,
DEEP2 does not have a sufficient number of both red and blue galaxies at a constant
SFR and complete within the same volume to obtain a
reliable color-independent clustering measurement. We therefore create color-dependent SFR samples that
are approximately complete within their restframe color as shown
in Figure~\ref{fig:sfrbins}. To mirror the selection with stellar mass, we
choose red galaxies with $z$$<$1.05 in two SFR samples limited to -0.1$<$log($\psi$)$<$0.3 $M_\sun$ yr$^{-1}$
and log($\psi$)$\ge$0.3 $M_\sun$ yr$^{-1}$. The SFR bin dividing line at 0.3 $M_\sun$ yr$^{-1}$ is chosen
such that the two samples each have $\sim1000$ galaxies, which is roughly the minimum
sample size needed to produce a reasonable clustering signal within the measured volume of this dataset.
As SFR is strongly dependent on $M_B$, particularly in blue galaxies, the redshift completeness limits are
well defined: blue galaxies are complete for log($\psi$)$>$0.75 $M_\sun$ yr$^{-1}$ at
$z$$<$1.05, for log($\psi$)$>$1.0 $M_\sun$ yr$^{-1}$ at $z$$<$1.25, and for
log($\psi$)$>$1.25 $M_\sun$ yr$^{-1}$ at $z$$<$1.4. The SFR bin sizes are selected to
roughly reflect the SFR accuracy quoted in \citeauthor{Mostek12} (0.5 dex and 0.25 dex rms
scatter for red and blue galaxies, respectively), while maintaining sufficient sample sizes
in each bin. The sample properties for each SFR sample are listed in Tables~\ref{tab:sfrthreshresults} and \ref{tab:sfrbinresults}.
\subsection{Specific Star Formation Rate}
\label{sec:ssfrsample}
To construct highly-complete samples selected by sSFR, we first emphasize
that sSFR, as shown in Figure~\ref{fig:colormasscontour},
is tightly correlated with restframe color. Similarly, we find that plotting
the sSFR against the restframe $B$-band magnitude (Figure~\ref{fig:ssfrcontours}) shows that nearly all
DEEP2 blue galaxies with 0.74$<$$z$$<$1.4~ have sSFRs between $-10$$<$log($\psi$/$M_{*}$)$<$$-8$ yr$^{-1}$,
while red galaxies have lower sSFRs of log($\psi$/$M_{*}$)$<$$-10$ yr$^{-1}$.
Figure~\ref{fig:ssfrcontours} also shows that like SFR, galaxies at a fixed sSFR
have a wider range of stellar masses (represented by contour lines)
in the blue galaxy population than in the red galaxy population. Further, an sSFR-selected
blue galaxy sample will have a residual correlation between the mean stellar mass and luminosity of the sample.
This residual correlation indicates that galaxy luminosity is only a rough approximation
to the stellar mass within the blue population at $z\sim1$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.97\columnwidth]{./fig6.pdf}
\caption{Galaxy specific SFR versus the restframe $B$-band magnitude, $M_B$. Contours correspond to constant stellar mass,
with the same levels shown as in Figure~\ref{fig:colormasscontour}. }
\label{fig:ssfrcontours}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\columnwidth]{./fig7a.pdf}
\includegraphics[width=0.85\columnwidth]{./fig7b.pdf}
\includegraphics[width=0.85\columnwidth]{./fig7c.pdf}
\includegraphics[width=0.85\columnwidth]{./fig7d.pdf}
\caption{Galaxy samples defined by specific star formation rate.
(Top row) Restframe $B$-band magnitude for red (left) and blue (right) galaxies in the parent sample are
shown as a function of redshift. To $z$=1.05, the red galaxies are
complete to $M_B$=$-21$ while blue galaxies are complete to
$M_B$=$-20.5$. To $z$=1.4, blue galaxies are limited to $M_B$=$-21.0$.
(Bottom row) From the magnitude-limited samples defined in the top row,
we create sSFR-selected samples with log($\psi$/$M_{*}$)$<$$-9.9$ yr$^{-1}$ for
red galaxies and log($\psi$/$M_{*}$)$<$$-8.6$ yr$^{-1}$ for blue galaxies.}
\label{fig:ssfrbins}
\end{figure*}
Stellar mass and SFR each have different completeness limits in DEEP2
as a function of redshift, color, and magnitude, which could in principle be
difficult to disentangle when combined as sSFR. However, we can avoid these
complications by recognizing that sSFR closely traces color. We can
therefore first limit the parent sample by restframe $B$-band magnitude, $M_B$,
for red and blue galaxies separately, and then define sSFR-selected
subsamples free from incompleteness expected at lower luminosities within each
restframe color. The final color-dependent sSFR samples will then be complete \emph{to
the magnitude limit of the sample}.
The upper panels of Figure~\ref{fig:ssfrbins} show $M_B$ as a function of
redshift for red and blue galaxies separately. At $z$$<$1.05, the parent sample
is complete for $M_B$$<$$-21$ and $M_B$$<$$-20.5$ for red and blue galaxies,
respectively. At $z$$<$1.4 the blue galaxy sample is limited to
$M_B$$<$$-21$. We choose not to use a more complicated, color-dependent
magnitude limit \citep[see][]{Gerke07} so that our samples can be easily
interpreted in comparison to other sSFR clustering studies.
After making these initial cuts on $M_B$ and redshift, we can construct
sSFR samples that are complete to the given $M_B$ limit. The bottom row of
Figure~\ref{fig:ssfrbins} shows our sSFR-selected samples. In general, the
lowest luminosity galaxies have the highest sSFRs. We define upper limits on
sSFR of log($\psi$/$M_{*}$)$<$$-9.9$ yr$^{-1}$ for red galaxies and
log($\psi$/$M_{*}$)$<$$-8.6$ yr$^{-1}$ for blue galaxies. We also limit the blue
galaxy population at $z$$<$1.05 and $z$$<$1.4 in order to facilitate comparisons
with the stellar mass and SFR-selected samples. The individual sSFR sample
properties are given in Tables~\ref{tab:ssfrthreshresults} and \ref{tab:ssfrbinresults}.
\section{Analysis Method}
\label{sec:method}
\subsection{The two-point correlation function}
\label{sec:cf}
The clustering analysis performed in this study mirrors that of C08,
and we summarize the methodology here. We measure the clustering of DEEP2
galaxies by calculating the two-point correlation function, $\xi(r)$,
in each
of the galaxy samples. In a given volume, $\xi(r)$~measures the excess
probability of finding galaxy pairs as a function of separation
over that of a random distribution. The excess probability can be
calculated using the \cite{Landy93} estimator,
\begin{equation}
\xi = \frac{1}{RR}\left [DD\left(\frac{n_R}{n_D}\right)^2 - 2DR\left(\frac{n_R}{n_D}\right) + RR\right ].
\label{eq:LSz}
\end{equation}
Here RR, DD, and DR are the random-random, data-data, and data-random
pairs, while $n_R$ and $n_D$ are the mean number densities of random
points and galaxies in each sample. Each pair is separated by a
three-dimensional distance $r$. As such, $\xi(r)$~reflects the
three-dimensional clustering, which must be inferred through
measurements of the angular separation between galaxies on the sky and
a distance measurement along the line of sight.
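The pair-count bookkeeping in Equation~\ref{eq:LSz} can be sketched in a few lines. The function below is illustrative rather than the analysis code used here, and the toy pair counts and densities are invented for demonstration.

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_d, n_r):
    """Landy & Szalay (1993) estimator from raw pair counts.

    dd, dr, rr : arrays of data-data, data-random, and random-random
    pair counts per separation bin; n_d, n_r : mean number densities
    of galaxies and random points in the same volume.
    """
    f = n_r / n_d  # density ratio appearing in the estimator
    return (dd * f**2 - 2.0 * dr * f + rr) / rr

# toy counts for two separation bins (already density-matched, f = 1):
dd = np.array([300.0, 410.0])
dr = np.array([150.0, 400.0])
rr = np.array([100.0, 400.0])
xi = landy_szalay(dd, dr, rr, n_d=1.0, n_r=1.0)
```

In the first bin the data pairs exceed the random expectation, giving $\xi=1$; in the second the counts are nearly random, giving $\xi\approx0$.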
The redshift of a galaxy reflects both its distance (assuming a
given cosmology) and its peculiar velocity. Due to the latter effect, the
observed clustering signal has additional power that appears in redshift space
but is not attributable to clustering in real space.
On small scales, the peculiar velocities of individual galaxies within
collapsed overdensities result in an ``elongation" of the clustering
signal along the line of sight, known as the ``Fingers of God''. On
larger scales, the coherent infall of galaxies streaming into
potential wells causes a ``squashing" of the signal along the line of
sight. To remove these redshift-space distortions from the real-space
clustering signal, we decompose each pair separation into components
along and across the line of sight. We define two vectors,
\textit{\textbf{s}}=(\textit{\textbf{v}}$_1-$\textit{\textbf{v}}$_2$)
for the redshift-space separation and
\textit{\textbf{l}}=$\frac{1}{2}$(\textit{\textbf{v}}$_1+$\textit{\textbf{v}}$_2$)
for the mean coordinate of the pair, and calculate
\begin{equation}
\pi=\frac{\textit{\textbf{s}} \cdot \textit{\textbf{l}}}{\left\vert \textit{\textbf{l}}\,\right\vert},
\end{equation}
\begin{equation}
r_p=\sqrt{\textit{\textbf{s}} \cdot \textit{\textbf{s}}-\pi^2}.
\end{equation}
We then can express the three-dimensional $\xi(r)$~as a two-dimensional
projection, $\xi(r_p, \pi)$, where $r_p$ is the distance across the line of
sight and $\pi$ is the distance along the line of sight. We apply the estimator in
Equation~\ref{eq:LSz} to $\xi(r_p, \pi)$~by counting galaxy pairs. The random samples are
drawn from the selection function measured for each galaxy sample's spatial distribution
(see Section~\ref{sec:masks}).
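As a concrete illustration of this decomposition, the sketch below computes $\pi$ and $r_p$ for a single pair from the two comoving position vectors; the variable names mirror the definitions above, and the example pair is hypothetical.

```python
import numpy as np

def rp_pi(v1, v2):
    """Split a pair separation into line-of-sight (pi) and transverse
    (r_p) components, following s = v1 - v2 and l = (v1 + v2)/2."""
    s = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    l = 0.5 * (np.asarray(v1, dtype=float) + np.asarray(v2, dtype=float))
    pi = np.dot(s, l) / np.linalg.norm(l)   # projection of s onto l
    rp = np.sqrt(np.dot(s, s) - pi**2)      # remaining transverse part
    return rp, abs(pi)

# a pair separated purely along the line of sight (the l direction):
rp, pi = rp_pi([0.0, 0.0, 101.0], [0.0, 0.0, 99.0])
```

For this pair the full separation is along the mean-coordinate direction, so $r_p=0$ and $\pi$ equals the radial separation of 2.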
In order to recover the correlation function signal without redshift
space distortions, we project the two-dimensional correlation function
$\xi(r_p, \pi)$~onto the $r_p$ coordinate across the line of sight. The projection
requires integrating along the line of sight, following \cite{Davis83},
\begin{equation}
\omega_{p}(r_p)=2\int_{0}^{\infty}{\!\!d\pi\,\xi(r_p, \pi)} = 2\int_{0}^{\infty}{\!\!dy\,\xi\!\left[\left(r_p^2+y^2\right)^{1/2}\right]},
\label{eq:wprp}
\end{equation}
where $y$ is the real-space separation along the line of sight. By modeling
$\xi(r)$~as a power law, $\xi$=$(r/r_0)^{-\gamma}$, Equation~\ref{eq:wprp} has
an analytical solution,
\begin{equation}
\omega_{p}(r_p)=r_p\left(\frac{r_0}{r_p}\right)^{\gamma}\frac{\Gamma(1/2)\Gamma[(\gamma-1)/2]}{\Gamma(\gamma/2)},
\label{eq:wprpinf}
\end{equation}
which can be evaluated for given values of the clustering length, $r_0$, and
correlation function slope, $\gamma$. Following C08, we integrate
the projected correlation function $\omega_{p}(r_{p})$~to
a limit of $\pi_{\rm{max}}=20$ $h^{-1}$ Mpc, beyond
which the DEEP2 sample is not large enough to measure an accurate signal.
Implementing the $\pi_{\rm{max}}$ limit requires that we also truncate Equation~\ref{eq:wprpinf}
to the same integration limit. The resulting $\omega_{p}(r_{p})$~is accurate on measured scales
where $r_p$/$\pi_{\rm{max}}\lesssim0.25$ (see C08 for details).
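The effect of the $\pi_{\rm{max}}$ truncation can be checked numerically. The sketch below evaluates Equation~\ref{eq:wprpinf} and its truncated counterpart for assumed values of $r_0$ and $\gamma$; the specific numbers are illustrative, not fits from this paper.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def wp_analytic(rp, r0, g):
    """Untruncated projection of xi = (r/r0)^-gamma (power-law case)."""
    return rp * (r0 / rp)**g * gamma(0.5) * gamma((g - 1) / 2) / gamma(g / 2)

def wp_truncated(rp, r0, g, pi_max=20.0):
    """Same projection, integrated only to pi_max as done with the data."""
    integrand = lambda y: (np.sqrt(rp**2 + y**2) / r0)**(-g)
    return 2.0 * quad(integrand, 0.0, pi_max)[0]

# illustrative parameters: r0 = 4 h^-1 Mpc, gamma = 1.6, rp = 1 h^-1 Mpc
r0, g = 4.0, 1.6
full = wp_analytic(1.0, r0, g)
trunc = wp_truncated(1.0, r0, g)
```

The truncated integral falls measurably below the untruncated form even at $r_p=1$ $h^{-1}$ Mpc, which is why the analytic model must be corrected for the $\pi_{\rm{max}}$ limit before fitting.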
We fit for a power law of the form $\xi$=$(r/r_0)^{-\gamma}$ on scales of
1-10 $h^{-1}$ Mpc. As we will show, these scales
mitigate the effects of non-linear behavior at smaller $r_p$ and avoid
noisy data at larger scales. The final values of $r_0$ and $\gamma$ are
obtained by performing a $\chi^2$ minimization between the measured $\omega_{p}(r_{p})$~and
Equation~\ref{eq:wprpinf}, after correcting for the truncation of
$\pi_{\rm{max}}$.
For each sampled scale, galaxy pair counts are averaged over 10 separate DEEP2 pointings
and errors on $\omega_{p}(r_{p})$~are computed using the standard error among these pointings. While formal errors for
$r_0$ and $\gamma$ are computed from the $\chi^2$ fit, we report the rms
errors for each parameter using 10 jackknife samples of the data. The
jackknife error estimates on $r_0$ and $\gamma$ encapsulate the cosmic
variance inherent within the DEEP2 data and take into account the covariance
among the $r_p$ bins. Because some DEEP2 pointings are correlated with
each other on the sky, the cosmic variance may be slightly underestimated. Based on calculations with QUICKCV \citep{Newman02},
the actual rms error due to cosmic variance could be up to $\sim6$\% larger than the estimates
from the 10 jackknife samples. We do not include this estimated error contribution in our analysis, electing
instead to work with the error given directly from the data.
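A minimal sketch of the jackknife rms error on a fitted parameter, assuming the ten leave-one-out estimates are already in hand (the $r_0$ values below are invented for illustration):

```python
import numpy as np

def jackknife_error(estimates):
    """Leave-one-out jackknife rms error for a parameter fit from
    N subsamples (here N = 10, one pointing dropped at a time)."""
    x = np.asarray(estimates, dtype=float)
    n = len(x)
    # the jackknife variance inflates the naive scatter by (n - 1)
    return np.sqrt((n - 1) / n * np.sum((x - x.mean())**2))

# ten hypothetical leave-one-out fits of r0 (h^-1 Mpc)
r0_jack = [4.1, 4.3, 3.9, 4.0, 4.2, 4.1, 4.4, 3.8, 4.0, 4.2]
sigma_r0 = jackknife_error(r0_jack)
```

Because each leave-one-out estimate reuses most of the data, the deviations are strongly correlated, hence the $(n-1)$ inflation factor relative to a naive standard deviation of independent samples.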
In order to facilitate interpretation of the results and to avoid relying
solely on power-law fits which may not describe the data well over all
scales, we further estimate the galaxy bias for each sample. The bias is
a measure of the clustering of galaxies relative to that of dark matter
at the same redshift. We generate a dark matter $\xi(r_p, \pi)$~for the mean
redshift of each galaxy sample using code from \cite{Smith03}, integrating
to the same $\pi_{\rm{max}}$ as in the data. We then calculate the average bias over
scales of 1-10 $h^{-1}$ Mpc using $b^2$=[$\omega_{p}(r_{p})$]$_{\rm{gal}}$/[$\omega_{p}(r_{p})$]$_{\rm{dark~matter}}$ and
weighting each scale equally in the average bias. The average scale
of the mean bias is $\bar{r}_p$=4.1 $h^{-1}$ Mpc.
The dark matter correlation function assumes a $\Lambda$CDM cosmology with
$\Omega_m$(0)=0.3 and $\sigma_8$=0.8. A different assumed value
of $\sigma_8$ will linearly scale the bias value (e.g. a $\sigma_8$=0.9 will increase the absolute
bias by $\sim12$\%). The bias error is estimated from the mean bias rms measured from 10 jackknife
samples, similar to the errors on $r_0$ and $\gamma$ using power law fits.
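The mean-bias calculation reduces to a ratio of projected correlation functions averaged over 1-10 $h^{-1}$ Mpc. The sketch below assumes the galaxy and dark matter $\omega_{p}(r_{p})$~are tabulated on a common $r_p$ grid, and it averages $b$ rather than $b^2$; the tabulated values are illustrative.

```python
import numpy as np

def mean_bias(wp_gal, wp_dm, rp, rp_min=1.0, rp_max=10.0):
    """Average b = sqrt(wp_gal / wp_dm) over 1-10 h^-1 Mpc,
    weighting each sampled scale equally."""
    rp = np.asarray(rp)
    sel = (rp >= rp_min) & (rp <= rp_max)
    b2 = np.asarray(wp_gal)[sel] / np.asarray(wp_dm)[sel]
    return np.sqrt(b2).mean()

# a galaxy wp that is uniformly 2.25x the dark matter wp has bias 1.5;
# scales outside 1-10 h^-1 Mpc are excluded from the average
rp = np.array([0.5, 1.5, 3.0, 6.0, 12.0])
wp_dm = np.array([10.0, 5.0, 2.0, 1.0, 0.5])
b = mean_bias(2.25 * wp_dm, wp_dm, rp)
```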
\begin{figure*}[ht]
\centering
\includegraphics[width=0.7\textwidth]{./fig8.pdf}
\caption{Contours of constant correlation strength for the two-dimensional
correlation function, $\xi(r_p, \pi)$, for blue galaxies with 0.74$<$$z$$<$1.4 binned by
stellar mass. The contour lines correspond to correlation amplitudes of 0.25,
0.5, 0.75, 1.0, 2.0, and 5.0, with $\xi(r_p, \pi)$=1 shown as a thick line. The number of galaxies in each bin is as follows: 4774 (upper left),
4279 (upper right), 2435 (lower left), and 1134 (lower right). The contours have been smoothed by
a $1\times1~h^{-1}$ Mpc boxcar for clarity in the figure.}
\label{fig:massxisp}
\end{figure*}
\subsection{\rm{DEEP2}~\it{Masks and 1/V$_{\rm{max}}$}}
\label{sec:masks}
When calculating $\xi(r_p, \pi)$~for a given galaxy sample, we generate random catalogs
of uniformly distributed points spanning the same volume as the data. There
are several observational selection effects that must be taken into account
in the random catalog in order to match the spatial distribution of the data.
Examples of these effects include the overall survey geometry, bright star
masks, incompleteness due to DEIMOS slit design and
placement, and variations in the
redshift success rate due to observing conditions for each slitmask.
The effects of survey geometry, data gaps, and the varying redshift success
rates are all accounted for in the DEEP2 window function.
To remove the window function and any redshift selection effects, we generate a selection function
from the galaxy redshift distribution and their estimated comoving distance in a $\Lambda$CDM cosmology.
We smooth and spline fit the one-dimensional spatial distribution such that local overdensities due to cosmic variance are removed while preserving
the overall shape of the distribution. The galaxy distributions are calculated in comoving bins of $\Delta r$=0.011 $h^{-1}$ Mpc,
and smoothing of the spline fit occurs over scales of $\Delta r$=0.03-0.05 $h^{-1}$ Mpc. All selection functions are checked by eye such
that no power on smaller scales is retained in the smoothed selection function, which
could reduce inferred correlations on those scales. Random samples are then generated
from the probability distribution of the smoothed selection function. The procedure
ensures that the computed correlation function statistics are not affected by the survey design
or completeness of our selected galaxy samples.
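A simplified version of this random-catalog construction is sketched below, with a Gaussian kernel standing in for the smoothed spline fit described above; the bin count, kernel width, and input redshifts are illustrative, not the values used in the text.

```python
import numpy as np

def draw_randoms(z_gal, n_random, bins=50, smooth_sigma=3, seed=None):
    """Draw random redshifts from a smoothed version of the observed
    galaxy redshift distribution, washing out local overdensities
    while preserving the overall dN/dz shape."""
    rng = np.random.default_rng(seed)
    counts, edges = np.histogram(z_gal, bins=bins)
    # crude Gaussian smoothing of the binned distribution
    x = np.arange(-3 * smooth_sigma, 3 * smooth_sigma + 1)
    kernel = np.exp(-0.5 * (x / smooth_sigma)**2)
    smooth = np.convolve(counts, kernel / kernel.sum(), mode="same")
    # sample bins in proportion to the smoothed counts, then place
    # points uniformly within each chosen bin
    p = smooth / smooth.sum()
    idx = rng.choice(len(p), size=n_random, p=p)
    return rng.uniform(edges[idx], edges[idx + 1])

z_gal = np.random.default_rng(1).uniform(0.74, 1.4, 5000)
z_ran = draw_randoms(z_gal, 20000, seed=2)
```

The randoms inherit the large-scale shape of the observed distribution but not its bin-to-bin fluctuations, mimicking the role of the smoothed selection function.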
Although the correlation function statistics are robust, a systematic error is still imprinted on small scales where the
correlation function is more likely to be incomplete due to slit collisions.
Using DEEP2 mock catalogs \citep{Yan04}, C08 found that the bias varies
smoothly as a function of scale and varies in size from 25\% on the smallest
scales to 2\% on the largest scales. We apply a correction for this systematic
bias directly to the measured $\xi(r_p, \pi)$~and $\omega_{p}(r_{p})$, and we include an additional
rms error term for the correction which is measured from the mocks and added
in quadrature to $\omega_{p}(r_{p})$.
To aid in the application of our results to HOD models, we calculate the
number density for each of our complete samples using the nonparametric
1/$V_{\rm{max}}$ method \citep{Felten76} employed in previous DEEP2 studies
\citep{Blanton06, Zhu09}. The volume over which a given galaxy
can be observed is given by
\begin{equation}
V_{\rm{max}}=\frac{1}{3}\int d\Omega \int_{z_{\rm{min}}}^{z_{\rm{max}}} dz \frac{d[D_{c}(z)^3]}{dz} f(z),
\label{eq:vmax}
\end{equation}
where $D_{c}(z)$ is the comoving distance in a spatially flat universe
\citep{Hogg99}, $d\Omega$ is the solid angle of sky covered in the survey,
and $f(z)$ is the probability that a given galaxy is targeted and a
successful redshift is produced in a given DEEP2 mask. We calculate
$d\Omega$ using the same DEEP2 masks as those used in our clustering
analysis, which excludes regions of bright stars and accounts for mask
edges, and we find that the integrated survey area covered in our slit masks is
2.54 deg$^2$.
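Equation~\ref{eq:vmax} can be evaluated directly for a flat $\Lambda$CDM cosmology. In the sketch below $f(z)$=1, which returns the full comoving volume over the survey area; the actual calculation folds in the targeting and redshift-success probabilities.

```python
import numpy as np
from scipy.integrate import quad

H0, OM, C = 100.0, 0.3, 2.99792458e5   # h = 1, distances in h^-1 Mpc

def d_c(z):
    """Line-of-sight comoving distance for flat LCDM."""
    integrand = lambda zz: 1.0 / np.sqrt(OM * (1 + zz)**3 + 1 - OM)
    return C / H0 * quad(integrand, 0.0, z)[0]

def v_max(z_min, z_max, omega_deg2, f=lambda z: 1.0):
    """Vmax = (1/3) dOmega * int dz d[Dc^3]/dz f(z); note that
    (1/3) d[Dc^3]/dz = Dc^2 dDc/dz, with dDc/dz = (c/H0)/E(z)."""
    omega_sr = omega_deg2 * (np.pi / 180.0)**2
    integrand = lambda z: (d_c(z)**2 * C / H0
                           / np.sqrt(OM * (1 + z)**3 + 1 - OM) * f(z))
    return omega_sr * quad(integrand, z_min, z_max)[0]

# 2.54 deg^2 is the survey area quoted in the text; f(z) = 1 here
vol = v_max(0.74, 1.05, 2.54)
```

For 0.74$<$$z$$<$1.05 this gives a volume of order $2\times10^6$ $(h^{-1}$ Mpc$)^3$, setting the scale against which the $f(z)$ weighting reduces each galaxy's accessible volume.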
The DEEP2 redshift success rate, $f(z)$, is a function of both
$R$-band magnitude and restframe color; these are discussed in detail
in \cite{Newman12} (see also \citealt{Blanton06}, \citealt{Willmer06}, and \citealt{Zhu09}).
For each observed galaxy, we generate 1200 simulated galaxies with similar observational properties
between $z_{\rm{min}}$ and $z_{\rm{max}}$ and calculate $k$-corrected
magnitudes for each simulated redshift. We then apply the selection
function $f(z)$ to these simulated galaxies and produce a fraction
which is multiplied by the comoving volume to produce $V_{\rm{max}}$.
Once $V_{\rm{max}}$ is obtained, we calculate the effective number of
galaxies in the comoving volume,
\begin{equation}
N_{\rm{eff}} = \left[ \sum_{i} \frac{1}{(V_{\rm{max}})_i} \right]^2 / \left[ \sum_{i} \frac{1}{(V_{\rm{max}})_i^2} \right],
\label{eq:neff}
\end{equation}
and divide by the comoving volume between $z_{\rm{min}}$ and $z_{\rm{max}}$
to derive the number density, $n$. Upper and lower limits for $n$ are
computed by assuming a Poisson error distribution with approximations
provided by \cite{Gehrels86}. However, because DEEP2 is a small area survey,
cosmic variance ($\sigma_{\rm{cv}}$)
will be a significant source of error in the number density.
Following \cite{Zhu09}, we calculate the variance in number density
from each of the four DEEP2 fields and report the estimated
$\sigma_{\rm{cv}}$ along with each of our samples. In selected stellar mass cases where DEEP2
is complete, we have cross-checked our number densities with the published galaxy mass function between
0.75$<$$z$$<$1.0 given in \cite{Bundy06} and find good statistical agreement with their measurements.
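Equation~\ref{eq:neff} combined with the comoving volume gives the number density. The sketch below checks the limiting case in which all galaxies share the same $V_{\rm{max}}$, so that $N_{\rm{eff}}$ reduces to the raw galaxy count; the numbers are illustrative.

```python
import numpy as np

def effective_density(vmax, volume):
    """N_eff = (sum 1/Vmax)^2 / sum (1/Vmax)^2, divided by the
    comoving volume between z_min and z_max."""
    w = 1.0 / np.asarray(vmax, dtype=float)
    n_eff = w.sum()**2 / (w**2).sum()
    return n_eff / volume

# if every galaxy has the same Vmax (here 2e6 (h^-1 Mpc)^3), N_eff is
# simply the number of galaxies, and n = 1000 / 2e6 = 5e-4
n = effective_density(np.full(1000, 2.0e6), volume=2.0e6)
```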
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{./fig9.pdf}
\caption{The projected correlation function, $\omega_{p}(r_{p})$, for color-independent
galaxy samples selected by stellar mass thresholds and limited to $z$$<$1.05. The plot
shows the two highest mass threshold samples with log($M_*$/$M_{\sun}$)$>$10.5; the highest mass sample
is complete for both red and blue galaxies and the lower mass sample is slightly incomplete
for red galaxies beyond $z$$>$0.9.
The projected clustering of dark matter (solid, red line) is generated from the prescription of
\citealt{Smith03} and shown for a redshift of $z$=0.9, which is similar to the
mean redshift of the galaxy samples. }
\label{fig:masscorrall}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\columnwidth]{./fig10a.pdf}
\hspace{0.3in}
\includegraphics[width=0.85\columnwidth]{./fig10b.pdf}
\caption{The projected correlation function, $\omega_{p}(r_{p})$, for color-dependent
galaxy samples selected by stellar mass.
Stellar mass threshold samples for red galaxies at $z$$<$1.05 are shown on
the left, while stellar mass threshold samples for blue galaxies at $z$$<$1.4
are shown on the right. }
\label{fig:masscorrcolor}
\end{figure*}
\section{Galaxy Clustering Results}
\label{sec:results}
\subsection{Clustering Dependence on Stellar Mass}
\label{sec:smass}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{./fig11a.pdf}
\includegraphics[width=0.95\columnwidth]{./fig11b.pdf}
\caption{The relative deviation in $\omega_{p}(r_{p})$~from the best-fit power law for
galaxy samples separated by red (left) and blue (right) restframe color
and binned by stellar mass. The power law is fit to $\omega_{p}(r_{p})$~on scales of
1-10 $h^{-1}$ Mpc. Each sample is offset by an arbitrary 0.3 dex
for clarity in the figure. The mean deviation between the relative small-scale clustering
below $r_p$$\leq$0.3 $h^{-1}$ Mpc and large-scale
clustering between 0.4$<$$r_p$$<$10.5 $h^{-1}$ Mpc
is significant in red galaxies at the $p<0.01$ level for 10.5$<$log($M_*$/$M_{\sun}$)$<$11.0 and the $p<0.05$ level for log($M_*$/$M_{\sun}$)$>$11.0.
We find no significant increase in small-scale clustering for any of the blue galaxy mass samples beyond the best-fit large-scale power law. }
\label{fig:relmasscorrcolor}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{./fig12.pdf}
\caption{The clustering scale length, $r_0$, measured by fitting a
power law to $\omega_{p}(r_{p})$~over
linear scales of 1-10 $h^{-1}$ Mpc. For this figure, the power law slope is fixed to $\gamma$=1.6
so as to remove degeneracies between $r_0$ and $\gamma$. The scale lengths are measured
for galaxy samples in stellar mass (left), SFR (center), and sSFR (right) as described in
Sections~\ref{sec:smasssample}-\ref{sec:ssfrsample}. Vertical error bars are calculated from the rms
scatter of $r_0$ fit from 10 jackknifed sub-samples in each data bin, and horizontal error bars represent the
rms scatter for each bin of selected galaxy sample values. All measured values of absolute bias, $r_0$, and $\gamma$
are given in Tables~\ref{tab:smassresults}-\ref{tab:ssfrbinresults}.}
\label{fig:r0results}
\end{figure*}
Results for the two-dimensional correlation function, $\xi(r_p, \pi)$, as a function of
stellar mass are shown in Figure~\ref{fig:massxisp} for the 0.74$<$$z$$<$1.4
blue galaxy samples. The figure shows contours of constant correlation
strength for three bins in stellar mass ranging from
9.9$<$log($M_*$/$M_{\sun}$)$<$10.8 and a fourth sample with a
lower stellar mass threshold of log($M_*$/$M_{\sun}$)$>$10.8 (lower right panel). In the lowest stellar mass
bin, $\xi(r_p, \pi)$~ is relatively symmetric both across and along the line of sight,
while at higher stellar masses there is an increasing asymmetry in $\xi(r_p, \pi)$.
At all masses there is evidence of a compression along the
line of sight on large scales, due to coherent infall into gravitational
potential wells, known as the ``Kaiser effect" \citep{Kaiser87}.
Redshift-space distortions on small scales (``Fingers of God'') also increase dramatically
with stellar mass, indicating that high-stellar-mass blue galaxies
have greater virial motions and therefore are more likely to be in systems
with larger halo mass. These results mirror those of C08, who found a similar
increase in redshift-space distortions with increasing luminosity
(see their Figure 7).
As redshift-space distortions arise from peculiar velocities that are not directly correlated to the real-space clustering,
we mitigate the distortions by integrating $\xi(r_p, \pi)$~
along the line of sight and projecting the clustering signal onto the
$r_p$ axis. This results in the projected correlation function, $\omega_{p}(r_{p})$,
which we show for color-independent mass threshold samples in
Figure~\ref{fig:masscorrall} and color-dependent mass threshold samples
in Figure~\ref{fig:masscorrcolor}. We also plot the projected dark
matter correlation function \citep{Smith03} generated for a
$\Lambda$CDM universe ($\sigma_{8}$=0.8) at a mean redshift similar to that of
the galaxy samples ($z=0.9$).
In all cases, $\omega_{p}(r_{p})$~is consistent with
a power law on scales larger than $r_p$$>$1 $h^{-1}$ Mpc where
the clustering of galaxies in independent halos (e.g., the two-halo
term) dominates the correlation function.
On scales of $r_p$$\leq$0.3 $h^{-1}$ Mpc, we find a departure from a
single power law for the highest stellar masses in the color-independent
sample (Figure~\ref{fig:masscorrall}) and at similar masses
in red galaxies (left side of Figure~\ref{fig:masscorrcolor}). The
departure on small scales is seen more clearly in Figure~\ref{fig:relmasscorrcolor}, where we
divide out the large-scale power
law fit over 1-10 $h^{-1}$ Mpc and plot the relative correlation function for red and blue
galaxy mass samples. For clarity, the
figure shows these relative correlation functions with an offset of 0.3
dex between different stellar mass samples.
To determine the significance with which the observed clustering on small scales is a departure from the large-scale behavior,
we perform a $t$-test between the relative large-scale and small-scale clustering amplitude (Figure~\ref{fig:relmasscorrcolor}). For each jackknife
subsample of a given galaxy sample, we fit a power law over large scales ($r_p$=1-10 $h^{-1}$ Mpc) and take the ratio of the correlation function
averaged over $r_p$$\leq$0.3 $h^{-1}$ Mpc to the best-fit power law averaged over the same small scales.
We then compute the mean and standard deviation
among the 10 jackknife sample ratios and compute a $t$ score in which the null hypothesis ratio value is 1. Finally, we compute the corresponding
$p$-value drawn from a $t$-distribution with 9 degrees of freedom. Assuming that there is minimal correlated error between $r_p$$\leq$0.3 $h^{-1}$ Mpc and $r_p>1$ $h^{-1}$ Mpc, this procedure encapsulates the error associated with both the large-scale power law fit and the covariance of $\omega_{p}(r_{p})$~on small scales.
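This $t$-test can be sketched as follows. The jackknife ratios below are invented for illustration, and the use of the plain (rather than jackknife-inflated) standard error is an assumption about the convention intended in the text.

```python
import numpy as np
from scipy import stats

def small_scale_excess_pvalue(ratios):
    """One-sample t-test of the jackknife ratios (observed wp over the
    large-scale power-law prediction, averaged below 0.3 h^-1 Mpc)
    against the null-hypothesis value of 1."""
    r = np.asarray(ratios, dtype=float)
    n = len(r)
    t = (r.mean() - 1.0) / (r.std(ddof=1) / np.sqrt(n))
    # two-sided p-value from a t-distribution with n - 1 dof
    return 2.0 * stats.t.sf(abs(t), n - 1)

# ten hypothetical jackknife ratios clustered well above unity
p = small_scale_excess_pvalue(
    [1.8, 2.1, 1.5, 2.4, 1.9, 2.0, 1.7, 2.2, 1.6, 1.8])
```

Ratios consistently near 2 with modest scatter yield a very small $p$-value, i.e.\ a highly significant small-scale excess; ratios scattered around 1 would not.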
For red galaxies with 10.5$<$log($M_*$/$M_{\sun}$)$<$11.0, the $t$-test
produces a $p$-value of $p$=0.005, meaning that a small-scale deviation as large as that observed or larger would occur only 0.5\% of the time
if there were in fact no true deviation.
Red galaxies with log($M_*$/$M_{\sun}$)$>$11.0 have a small-scale deviation from the large-scale power law fit with $p=0.042$, a significant deviation at the $p<0.05$ significance level. The lower significance in the higher mass galaxy sample is due in part to larger correlated variation in the large-scale power law slope $\gamma$ in the jackknifed samples.
For blue galaxies (right side of Figure~\ref{fig:relmasscorrcolor}), we find no statistically
significant increase in clustering on small scales relative to large scales at
any stellar mass probed in this study. The difference in the relative correlation function amplitude between
red and blue galaxy samples on small scales could indicate a
change in the central-to-satellite fraction or a change in the distribution
of host halo masses \citep{Zheng09}.
We will further discuss the physical interpretation of this result in Section~\ref{sec:sscale}.
The left panel of Figure~\ref{fig:r0results} shows the measured correlation length, $r_0$,
when fitting $\omega_{p}(r_{p})$~with a power law model of $\xi$=$(r/r_0)^{-\gamma}$. As the slope of the
correlation function, $\gamma$, can be covariant with $r_0$, we
fix $\gamma$=1.6 to study the trends in $r_0$ with stellar mass.
We find a strong positive correlation between $r_0$
and stellar mass for log($M_*$/$M_{\sun}$)$<$11.0 for blue, red, and color-independent
galaxy samples. At 11.0$<$log($M_*$/$M_{\sun}$)$<$11.5, the clustering length appears to level off
at $r_{0}\sim5$ $h^{-1}$ Mpc for all colors and does not depend on stellar mass, within the errors.
The $r_{0}$ results could indicate that high-stellar-mass blue galaxies at $z$$>$1 are precursors to similar mass
red galaxies at $z$$<$1, as they likely reside in similar mass halos. We note that
the DEEP2 sample has poor constraints for \emph{all} stellar masses above log($M_*$/$M_{\sun}$)$>$11.5; the survey volume is
not large enough to robustly sample the rarest, most massive galaxies. The measured values of $r_0$,
$\gamma$, and the absolute bias for each stellar mass sample
are given in Tables~\ref{tab:smassresults} and \ref{tab:smassbinresults}. By design, the absolute
bias has a similar trend to $r_0$ where the bias increases with stellar mass. We find no
significant trend between the best fit correlation slope $\gamma$ and stellar mass.
For the threshold samples in Table~\ref{tab:smassresults}, we also estimate the minimum and mean halo mass
using the measured large-scale galaxy bias and the halo mass function for central galaxies
\citep{Sheth99, Jenkins01},
modified to accommodate the increased frequency of satellites
at low halo mass (see the Appendix of \cite{Zheng07} for details). For each
galaxy sample, we use the halo mass function calculated for the mean redshift of the sample
in a $\Lambda$CDM cosmology ($\sigma_8$=0.8),
and we assume a satellite fraction of $\sim17$\% as measured for DEEP2 $L^*$ galaxies \citep{Zheng07}.
We caution that the minimum and mean halo masses are intended as estimates
in lieu of a full HOD model fit to $\omega_{p}(r_{p})$, which we reserve for a future study.
Table~\ref{tab:smassresults} shows that the mean halo mass ranges from
$<$log($M_{\rm{halo}}$/$h^{-1} M_{\sun}$)$>$=12.3 in the
lowest stellar mass blue galaxy sample to
$<$log($M_{\rm{halo}}$/$h^{-1} M_{\sun}$)$>$$\sim$13 at the highest stellar masses.
This mass range is particularly interesting as halo mass assembly models predict
that galaxies at $z$=1 transition from slow clustering growth
below 10$^{12} M_{\sun}$ to near exponential clustering growth above 10$^{13}~M_{\sun}$
\citep{Moster10, Alexi12}. Although the
halo masses presented here are estimates, we expect that they will help
constrain the stellar-mass/halo-mass relationship in this important mass range
at $z\sim1$.
\subsection{Clustering Dependence on Star Formation Rate}
\label{sec:SFR}
We now investigate how clustering properties depend on SFR using the
SFR-selected galaxy samples described in Section~\ref{sec:sfrsample}. Because
complete samples of red and blue galaxies probe vastly different SFR ranges,
we consider only galaxy samples separated by color
(see Figure~\ref{fig:sfrbins}). Following the same
procedure as with stellar mass, we investigate trends in the redshift
space distortions for blue galaxies as a function of SFR by studying $\xi(r_p, \pi)$.
We find that blue galaxies at all
SFRs display ``Kaiser infall'' on large scales, and all samples have
relatively small ``Fingers of God'' on small scales.
This implies that large-scale infall dominates the $\xi(r_p, \pi)$~signal
in our SFR samples and contrasts with the significant redshift-space distortions seen as a function of
stellar mass. The lack of ``Fingers of God'' indicates that the SFR samples must span a broader
range of stellar masses than mass-selected samples, thus diluting the strong ``Fingers of God'' seen in the latter.
Turning to the projected correlation function, Figure~\ref{fig:sfrcorr}
shows $\omega_{p}(r_{p})$~for two SFR threshold levels spanning
$-0.1$$<$log($\psi$)$<$0.4 $M_\sun$ yr$^{-1}$ for red galaxies and three SFR threshold
levels between 0.75$<$log($\psi$)$<$1.25 $M_\sun$ yr$^{-1}$ for blue galaxies.
The samples shown are restricted to $z$$<$1.05 for a direct comparison
between red and blue galaxy samples within the same volume.
We find that the clustering amplitude changes much less as a function of SFR
within the red or blue sample compared to the difference in amplitude between
the red and blue samples. Blue galaxies with higher SFR have higher clustering
amplitude, while there is no detected difference in clustering
for the two red galaxy samples as a function of SFR, within the errors.
Figure~\ref{fig:relsfrcorrcolor} shows the deviation of $\omega_{p}(r_{p})$~
from a power law fit on scales of 1-10 $h^{-1}$ Mpc for red (left)
and blue (right) galaxies binned by SFR. For all samples with
log($\psi$)$<$1.5 $M_\sun$ yr$^{-1}$, there is no significant deviation from
a power law on small scales ($r_p$$\leq$0.3 $h^{-1}$ Mpc). For the highest SFR
blue galaxies with log($\psi$)$>$1.5 $M_\sun$ yr$^{-1}$, however, we detect a factor
of seven increase in the mean clustering amplitude
at $r_p$$\leq$0.3 $h^{-1}$ Mpc relative to the large-scale power law fit. Performing the same
$t$-test as was done for stellar mass, the excess clustering signal has a $p$-value of
$p=0.038$ and therefore is significant at the $p<0.05$ level.
The measured $r_0$ values for color-separated galaxy samples binned by SFR are
shown in the center panel of Figure~\ref{fig:r0results}.
The clustering scale length increases
with increasing SFR for blue galaxies and is constant for red galaxies as a
function of SFR, within the measurement error. However, blue galaxies
with the highest SFR have clustering amplitudes similar to red galaxies,
suggesting that they occupy the same mass halos at these redshifts.
The overall trends in clustering amplitude agree with
the $z\sim1$ environment study of CP08, which found that the galaxy
overdensity increases at both extremes of galaxy SFR.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\columnwidth]{./fig13a.pdf}
\hspace{0.3in}
\includegraphics[width=0.85\columnwidth]{./fig13b.pdf}
\caption{
The projected correlation function, $\omega_{p}(r_{p})$, for red (left) and blue (right)
galaxy samples as a function of SFR threshold, limited to $z$$<$1.05. The
dark matter projected correlation function is shown for $z$=0.9,
near the mean redshift of all galaxy samples. }
\label{fig:sfrcorr}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{./fig14a.pdf}
\includegraphics[width=0.95\columnwidth]{./fig14b.pdf}
\caption{The relative deviation in $\omega_{p}(r_{p})$~from the best-fit power law for
red (left) and blue (right) galaxy samples binned by SFR. As with the stellar mass samples
in Figure~\ref{fig:relmasscorrcolor}, the power law is
fit to $\omega_{p}(r_{p})$~on scales of 1-10 $h^{-1}$ Mpc, and each sample is offset
by an arbitrary 0.3 dex in the figure for clarity. We find evidence of
enhanced small-scale clustering in the highest-SFR blue galaxy sample
but not for either red galaxy sample.}
\label{fig:relsfrcorrcolor}
\end{figure*}
Tables~\ref{tab:sfrthreshresults} and~\ref{tab:sfrbinresults} list the
$r_0$, $\gamma$, and bias values for each SFR sample,
and Table~\ref{tab:sfrthreshresults} further lists the minimum and mean halo mass estimated for the
SFR threshold samples. We note that while there is a significant rise in the clustering amplitude
on small scales for the highest SFR galaxies, there is no trend between
SFR and $\gamma$, within the errors. The
red galaxy SFR samples, which are just as clustered on large scales
as the highest-SFR blue galaxy sample, have a clustering slope consistent
with that of all the blue SFR samples, though the errors on the slope are large.
Interestingly, when comparing the stellar mass and SFR samples in Figure~\ref{fig:r0results},
the log($\psi$)$>$1.5 $M_\sun$ yr$^{-1}$ blue galaxy sample has
similar correlation lengths as the high-stellar-mass
red galaxy samples. Further, as shown in Figures~\ref{fig:relmasscorrcolor} and~\ref{fig:relsfrcorrcolor},
we see similar enhanced clustering on small
scales relative to the best fit large-scale power law behavior in these galaxy samples.
The similarities between high-SFR blue galaxies and high-stellar-mass red galaxies
on {\it both} large and small scales indicate that they are found in
similar mass halos, perhaps linked to a shared star formation history, with
high-SFR blue galaxies evolving into high-stellar-mass, low-SFR red galaxies
at later epochs after their star formation is quenched.
\subsection{Clustering Dependence on Specific Star Formation Rate}
\label{sec:sSFR}
We also measure galaxy correlation properties as a function of
specific SFR (sSFR). The sSFR is the inverse of the star-formation timescale of a galaxy.
In secular evolution scenarios, the measured sSFR timescale is proportional to the existing stellar mass
relative to the amount of cold interstellar gas available for production of stars.
Shorter timescales indicate rapid star formation from large, cold interstellar
gas reserves and vice versa for longer timescales.
As shown in Figure~\ref{fig:colormasscontour}, sSFR is strongly correlated
with restframe color as it reflects the star formation history of the
aggregate stellar population within a galaxy.
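As a minimal numerical illustration of this timescale, the following sketch converts a logarithmic sSFR to a mass-doubling time; the function name and example values are illustrative, not drawn from the survey data:

```python
def sfr_timescale_gyr(log_ssfr):
    """Star-formation (mass-doubling) timescale t = 1/sSFR in Gyr,
    given log10(sSFR) with sSFR in yr^-1."""
    return 10.0 ** (-log_ssfr) / 1.0e9

# A galaxy at log(sSFR) = -9 would double its stellar mass in ~1 Gyr
# at its current SFR; at log(sSFR) = -10.5 the timescale is ~32 Gyr,
# longer than the age of the universe.
```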
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\columnwidth]{./fig15a.pdf}
\hspace{0.3in}
\includegraphics[width=0.85\columnwidth]{./fig15b.pdf}
\caption{The projected correlation function, $\omega_{p}(r_{p})$, for red (left) and blue
(right) galaxy samples selected using sSFR thresholds between 0.74$<$$z$$<1.05$. The
dark matter projected correlation function is shown for $z$=0.9,
near the mean redshift of all galaxy samples.
The dotted line in the left panel corresponds to $\omega_{p}(r_{p})$~for
blue galaxies with log($\psi$/$M_{*}$)$<$$-8.6$ yr$^{-1}$, shown in the right panel. }
\label{fig:ssfrcorr}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{./fig16a.pdf}
\includegraphics[width=0.95\columnwidth]{./fig16b.pdf}
\caption{
The relative deviation in $\omega_{p}(r_{p})$~from the best-fit large-scale power law for
red (left) and blue (right) galaxy samples binned by sSFR.
A significant rise in the clustering amplitude is seen on small scales for the
highest sSFR samples for both red and blue galaxies.}
\label{fig:relssfrcorrcolor}
\end{figure*}
Figure~\ref{fig:ssfrcorr} shows $\omega_{p}(r_{p})$~for the color-dependent sSFR threshold
samples described in Section~\ref{sec:ssfrsample}. The sSFR samples are
complete between 0.74$<$$z$$<$1.05 for $M_B$$<$$-20.5$ blue galaxies and
$M_B$$<$$-21.0$ red galaxies. In general, lower sSFR samples are more clustered
than higher sSFR samples, which is expected given the strong correlation
between clustering and galaxy color (C08).
Figure~\ref{fig:relssfrcorrcolor} shows the deviation from the best fit
power law for samples binned by sSFR. We find that for the highest sSFR galaxies in \emph{both} red
and blue samples, $\omega_{p}(r_{p})$~deviates significantly on small scales ($r_p$$\le$0.3 $h^{-1}$ Mpc) relative to the large-scale power law fit.
There is a factor of 4 increase in the small-scale clustering amplitude
for $-10.6$$<$log($\psi$/$M_{*}$)$<$$-9.9$ red galaxies and a similar
increase for $-8.95$$<$log($\psi$/$M_{*}$)$<$$-8.6$ blue galaxies. The
$t$-tests for these two samples produce $p$-values of $p=0.013$ and $p=0.018$, respectively.
Additionally, we find that the 0.74$<$$z$$<$1.4 blue galaxy sample with $M_B$$<$$-21.0$ and $-8.95$$<$log($\psi$/$M_{*}$)$<$$-8.6$
also deviates from the large-scale power law fit by an average factor of 7 and has a highly significant increase
in clustering ($p=2.8\times10^{-5}$) on $r_p$$\le$0.3 $h^{-1}$ Mpc scales.
The enhanced small-scale clustering indicates that the satellite fraction
is related to how efficiently galaxies within a single halo, regardless of
restframe color, process gas and produce stars. We discuss the implications
of this result further in Section~\ref{sec:discuss}.
The right panel of Figure~\ref{fig:r0results} shows the best fit $r_0$ values
for galaxy samples binned by sSFR.
The clustering length rises for blue galaxies with lower sSFR values
and is constant for red galaxies with log($\psi$/$M_{*}$)$<$$-10$.
Both red and blue galaxies with $-10.5$$<$log($\psi$/$M_{*}$)$<$$-9.5$ have similar
clustering lengths and therefore likely occupy similar mass dark matter halos.
The correlation slope, $\gamma$, also roughly trends towards higher
values for lower sSFRs, although the trend is not highly significant given
the errors. Tables~\ref{tab:ssfrthreshresults} and~\ref{tab:ssfrbinresults} list the $r_0$, $\gamma$, and bias values for each sSFR sample.
Table~\ref{tab:ssfrthreshresults} further records the minimum and mean halo mass estimated for the sSFR threshold samples.
\section{Discussion}
\label{sec:discuss}
The primary goal of this work is to measure the clustering properties of complete $z\sim1$
galaxy samples selected by stellar mass, SFR, and sSFR. We present
a macroscopic view of our results in this section. To facilitate this
discussion, we separate our findings into small- and large-scale behavior at
$r_p\sim$1 $h^{-1}$ Mpc, which roughly corresponds to the transition between the
one- and two-halo terms in HOD models. We first discuss the large-scale clustering
results, including a comparison with previous work and relevant environment studies,
and then discuss the small-scale clustering results.
\subsection{Large-scale Clustering Behavior}
\label{sec:lscalemass}
The projected 2PCFs measured in this study allow us to quantify the relationship
between galaxy clustering and stellar mass and SFR at $z\sim1$.
We first discuss how our power law fits to the projected correlation function on
large scales compare with other studies.
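Power-law fits of this kind conventionally rely on the standard analytic relation between the projected correlation function and a power-law $\xi(r)=(r/r_0)^{-\gamma}$ (Davis \& Peebles 1983). The sketch below illustrates that relation; it is not necessarily the exact fitting code used in any of the surveys compared here:

```python
from math import gamma as gamma_fn

def wp_powerlaw(rp, r0, gam):
    """Projected 2PCF implied by a power-law xi(r) = (r/r0)^(-gam):
    wp(rp) = rp * (r0/rp)^gam * Gamma(1/2)*Gamma((gam-1)/2)/Gamma(gam/2).
    Valid for gam > 1; rp and r0 in the same units (here h^-1 Mpc)."""
    amp = gamma_fn(0.5) * gamma_fn((gam - 1.0) / 2.0) / gamma_fn(gam / 2.0)
    return rp * (r0 / rp) ** gam * amp
```

Fitting $w_p(r_p)$ with this form over a fixed range of $r_p$ yields the $(r_0,\gamma)$ pairs quoted throughout; fixing $\gamma$ (e.g., to 1.6) removes the degeneracy between amplitude and slope when comparing samples.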
In the left hand panel of
Figure~\ref{fig:r0masscomp}, we compare our $r_0$ results for
color-independent, stellar mass threshold samples to results from NMBS \citep{Wake11} and VVDS \citep{Meneux08}
at $z\sim1$.
\citeauthor{Wake11} calculated the angular correlation function of galaxy samples as a function
of stellar mass using NMBS photometric redshifts in portions of the COSMOS and AEGIS fields
\citep{Scoville07, Davis07, NMBS}, covering 0.4 deg$^2$ in total.
For NMBS, the angular correlation function is fit at 0.9$<$$z$$<$1.3 over scales of
$\sim$0.05-10 $h^{-1}$ Mpc with a fixed $\gamma=1.6$. The DEEP2 results shown also
have a fixed $\gamma=1.6$ to facilitate comparison and remove degeneracies. \citeauthor{Meneux08}
used spectroscopic redshifts from the VVDS-Deep field \citep{VVDS05} to calculate $\omega_{p}(r_{p})$~over 0.49 deg$^2$,
fitting $r_0$ over $r_p$=0.1-21 $h^{-1}$ Mpc. While the VVDS results also allow $\gamma$ to vary in the fits,
a slightly higher value of $\gamma=1.8$ is typical for their 2PCF data. We find good agreement with the VVDS clustering amplitudes;
both VVDS and DEEP2 results are
less clustered than the NMBS results, though the difference is not significant
given the NMBS errors.
This may be due to cosmic variance within
the NMBS $z=1.1$ sample, where independent correlation functions in the COSMOS and
AEGIS samples are quite different on large scales (although they statistically agree
given the size of the errors, see their Figure 2).
\begin{figure*}[ht]
\centering
\includegraphics[width=0.33\textwidth]{./fig17a.pdf}
\includegraphics[width=0.33\textwidth]{./fig17b.pdf}
\includegraphics[width=0.33\textwidth]{./fig17c.pdf}
\caption{(left) Clustering scale length, $r_0$, measured from color-independent stellar
mass threshold samples in DEEP2 (black square), VVDS data (green star) from \cite{Meneux08},
and NMBS data (brown circle) from \cite{Wake11}. Data point locations indicate the lower mass
threshold limit imposed on each sample. The Wake et al. measurements use NIR photometric
redshifts to compute the angular correlation function at 0.9$<$$z$$<$1.3.
Both VVDS and DEEP2 have a lower clustering amplitude than the NMBS at log($M_*$)$<$11.0 $h^{-2} M_\sun$,
although the errors from the 0.4 deg$^2$ NMBS survey are relatively large in comparison.
(center) $r_0$ resulting from power law fits to
color-independent stellar mass bin samples in SDSS \citep{Li06}, VVDS,
DEEP2, and \cite{Foucaud10}. Fits are performed over $r_p$=1-10 $h^{-1}$ Mpc in
DEEP2, \citeauthor{Foucaud10} (orange circle), and SDSS (purple diamond) while VVDS is fit over
scales of $r_p$=0.1-21 $h^{-1}$ Mpc. All points have similar mass bin sizes of 0.4-0.5 dex, and
the power law slope $\gamma$ is not fixed in the $\omega_{p}(r_{p})$~fits. The DEEP2 clustering scale lengths agree well with those from VVDS, both of
which show lower clustering amplitudes at earlier times for all stellar
masses probed. The angular clustering measured from \citeauthor{Foucaud10} is in rough agreement with both early and late epochs, demonstrating
the improved statistical power of the 2PCF with spectroscopic redshifts.
(right) Best fit $r_0$ values for restframe color-separated red and
blue galaxy stellar mass bin samples in SDSS and DEEP2 for a fixed $\gamma=1.6$ over
$r_p$=1-10 $h^{-1}$ Mpc. DEEP2 mass bins range in size from 0.3 to 0.4 dex for blue and red galaxies, respectively.
Blue galaxies in the local universe are somewhat more
clustered at a given stellar mass than at $z\sim1$, while massive red galaxies
are more clustered locally. }
\label{fig:r0masscomp}
\end{figure*}
We also note that the DEEP2 stellar mass threshold
sample with galaxies of all colors above log($M_*$/$M_{\sun}$)$>$10.5 is somewhat incomplete for red
galaxies at the low mass end beyond $z$$>$0.9.
As stated in Section~\ref{sec:smasssample}, the red galaxy incompleteness between
10.4$<$log($M_*$/$M_{\sun}$)$<$10.8 is estimated to be $\sim20\%$ between 0.74$<$$z$$<$1.05.
As the clustering amplitude at a fixed stellar mass appears to be similar
for red and blue galaxies above log($M_*$/$M_{\sun}$)$>$10.8, we expect the clustering strength
for the log($M_*$/$M_{\sun}$)$>$10.5 threshold sample to be relatively unaffected by
the red galaxy incompleteness.
If we assume that $r_0$ for the color-independent sample behaves as a weighted average
between the red and blue galaxy samples (given in Table~\ref{tab:smassresults}),
then increasing the number of red galaxies by 20\% would only increase the clustering
strength by $\sim1\%$, much less than the quoted error.
We further compare our best fit clustering amplitudes to similar
measurements in the local universe from SDSS \citep{Li06}. The center panel of Figure~\ref{fig:r0masscomp}
shows $r_0$ for color-independent, binned stellar mass samples in SDSS\footnote{The SDSS and NMBS stellar masses have
assumed a Kroupa IMF, while DEEP2, \cite{Foucaud10}, and VVDS stellar masses use a Chabrier IMF; the differences in IMF produce
a negligible difference of 0.05 dex in stellar mass for our clustering comparisons.}. As \cite{Li06} did not
measure $r_0$ from their projected 2PCFs, we fit their $\omega_{p}(r_{p})$~data over $r_p$=1-10 $h^{-1}$ Mpc scales
in a similar manner to our DEEP2 data. The \cite{Li06} $\omega_{p}(r_{p})$~data is drawn from a sample of
$\sim200,000$ SDSS galaxies with an average statistical error of 5\% per measured scale over the fitted scale range and does
not account for the full covariance between measured scales. Therefore, our power law fits to their
published data have a small $1\%$ statistical error for each
stellar mass sample and may not be fully representative of the true error in the SDSS $\omega_{p}(r_{p})$~data. \cite{Zehavi11} estimate the
$r_0$ error to be $\sim$5\% from jackknifed samples of galaxy luminosity bins in SDSS, and therefore
our conclusions comparing clustering amplitudes should be relatively robust.
The same plot also shows the three most complete stellar mass samples in VVDS. Because the VVDS-Deep data becomes incomplete
for blue galaxies below log($M_*$/$M_{\sun}$)$<$10, the measured clustering amplitude is systematically biased to lower
values of $r_0$. At most, \cite{Meneux08} calculated that the (9.5$<$log($M_*$/$M_{\sun}$)$<$10.0) stellar mass bin underestimates the
clustering amplitude by 10\% relative to mock catalogs.
DEEP2 color-independent stellar mass samples are limited to log($M_*$/$M_{\sun}$)$>$10.5,
while the VVDS data extends to somewhat lower stellar mass with an
$I_{\rm{AB}}$=24 magnitude limit. Again, we find excellent agreement in clustering strength between DEEP2 and
VVDS at $z\sim1$, both of which have significantly lower $r_0$ values than local SDSS
galaxies for all stellar masses probed. At a fixed stellar mass, we estimate that the clustering amplitude in color-independent samples has
increased by roughly 35\% from $z=1$ to $z=0.1$. The additional angular clustering data from \cite{Foucaud10} uses $K$-band stellar mass bins
and photo-$z$s at $z$=1 and is in rough statistical agreement with the NMBS mass threshold samples and \emph{both} the low and high redshift data. The measurements
from spectroscopic 2PCFs are clearly advantageous relative to angular clustering measures for equivalent surveys of a few square degrees.
The right panel of Figure~\ref{fig:r0masscomp} shows $r_0$ for stellar mass selected
samples separated by red and blue restframe colors in both SDSS and DEEP2. On large scales,
we find that blue galaxies are somewhat more clustered ($\approx15\%$) locally when compared
to $z\sim1$,
although the $r_0$ values are completely consistent at the
highest stellar mass, log($M_*$/$M_{\sun}$)=11.25. Red galaxies at the same stellar mass, however,
are much more clustered locally than at $z\sim1$. We interpret this to mean that
lower stellar mass blue galaxies have much of their current halo mass in place by
$z\sim1$, and therefore there is little evolution in the clustering amplitude over the last
8 Gyrs. However, the halos that host massive red galaxies, which are more likely to
be central galaxies, accumulate fractionally more halo mass relative to their existing stellar mass
over the same span of cosmic time. This effectively causes the clustering amplitude to increase
at a fixed stellar mass and therefore become more clustered on large scales at recent epochs.
These results support stellar-halo mass assembly history models where lower stellar mass blue galaxies
have most of their final halo mass in place at $z\sim1$ but continue to add to their stellar
mass, while higher stellar mass red galaxies form most of their final halo mass at $z$$<$1 but have little evolution in their stellar
mass \citep{Zheng07, Conroy09, Coupon12}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{./fig18.pdf}
\caption{The galaxy clustering with respect to SFR for $z$=0.84 H$\alpha$ emitters in the HiZELS survey (diamonds, \cite{Sobral10})
and the DEEP2 0.74$<$$z$$<$1.4 blue galaxy sample binned by SFR (filled circles). The results are in good agreement given the measurement errors
and different SFR calibration methods between the surveys.}
\label{fig:SobralSFR}
\end{figure}
Our clustering measurements also agree with the blue and red
luminosity-dependent clustering results of C08, who
found that red galaxies are more clustered than blue galaxies at $z=1$ at the
same luminosity (which corresponds to a lower stellar mass for blue galaxies compared
to red). The angular clustering measurements for passive and star-forming galaxies presented in \cite{Hartley10} also broadly
agree with C08 and our results for red and blue galaxies at $z\sim1$, albeit with larger errors due to the use of photometric redshifts.
Both C08 and \citeauthor{Hartley10} find that
blue, star-forming galaxies are less clustered than their red, passive counterparts at the same luminosity at lower redshift
($r_{0}\approx5~h^{-1}$ Mpc at $z\sim1$ versus
$r_{0}\approx6~h^{-1}$ Mpc locally, see Figure 11 in C08).
Further, the results in the lower red galaxy stellar
mass bin are consistent with the results found in C08, given the errors.
We find a slightly larger clustering difference between
$z\sim1$ red galaxies selected by stellar mass and those at $z\sim0.1$, using fits to
the \citeauthor{Li06} $\omega_{p}(r_{p})$~data.
We note that our highest
stellar mass red galaxy sample is a \emph{threshold} sample limited by the total volume probed in DEEP2. This upper stellar mass limit
will not include the rarest, most massive galaxies
that would be present in the larger SDSS volume within an equivalent stellar mass range.
Another concern is that less massive star-forming galaxies that have been
reddened by dust could be included in red galaxy samples, which may
cause $r_0$ to be underestimated.
Results from the PRIMUS survey \citep{Zhu11} show that such heavily reddened
star-forming galaxies are rare at the bright end of the red sequence ($\leq10$\%) at intermediate redshift and therefore should have relatively
little weight in the large-scale 2PCF amplitude.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{./fig19.pdf}
\caption{The best fit power law for $\xi(r)$~measured from the DEEP2 sSFR samples on 1-10 $h^{-1}$
Mpc scales (red circles). The real-space 2PCF is normalized to a power law of
$\xi(r)_{\rm{pow}}=(r / 5 h^{-1}$ Mpc)$^{-1.8}$ at a fixed scale of $r=5 h^{-1}$ Mpc to match the
average scale for the DEEP2 data. \cite{Li08} produced similar measurements in SDSS at equivalent scales (black crosses).
The relative trend in sSFR is similar between $z\sim1$ and local redshifts for log($\psi$/$M_{*}$)$>$$-10.5$; lower clustering amplitudes are observed at progressively
higher sSFR values. For log($\psi$/$M_{*}$)$<$$-10.5$, the $z\sim1$ galaxies are less clustered than their local counterparts at fixed
sSFR. }
\label{fig:ssfrli08}
\end{figure}
There are relatively few studies that have explored galaxy clustering at $z\sim1$ as a function of
stellar mass, SFR, and sSFR to which we can compare directly. At a mean redshift of $z$=0.8, the HiZELS survey
measured the angular clustering of a relatively small sample of H$\alpha$ emitters with respect to $K$-band magnitude and H$\alpha$ luminosity,
known tracers of stellar mass and SFR \citep{Sobral10}. Their results show similar trends as our $z\sim1$ blue galaxy samples,
with a clustering strength ranging from $r_0$=2-5 $h^{-1}$ Mpc and a general trend for increased clustering with brighter $K$-band magnitude and H$\alpha$ luminosity.
Figure~\ref{fig:SobralSFR} shows the \cite{Sobral10} clustering with H$\alpha$ luminosity after converting to SFR \citep{Kennicutt98} and the results
from our 0.74$<$$z$$<$1.4 blue galaxy sample binned by SFR. We find good agreement with the HiZELS measurements given the errors, but we
do not observe a large drop to lower clustering at log($\psi$)$<$1 corresponding to the break in H$\alpha$ luminosity function at $L^*_{\rm{H}\alpha}$.
The disagreement may arise from the fact that our SFRs are calibrated by restframe magnitudes and colors of the bulk galaxy population at $z\sim1$, which may
smooth such a rapid transition in SFR within the blue galaxy population. \cite{Sobral10} also found the clustering increased with increasing SFRs at a fixed $K$-band
magnitude, which is similar to our sSFR measurements in blue galaxies and will be further discussed in Section~\ref{sec:sfrmass}.
More recently, \cite{Lin12} performed an angular clustering analysis of $BzK$-selected
star-forming galaxies at $z\sim2$. They also found that the measured clustering scale length increases strongly with increasing stellar mass
(log($M_*$/$M_{\sun}$)$>$9), increasing SFR (log($\psi$)$>$0.5), and decreasing sSFR (log($\psi$/$M_{*}$)$<$$-8.6$), in good agreement with
the trends found here. However, because we have restricted our blue galaxy sSFR samples to $M_B$$<$$-20.5$
and log($\psi$/$M_{*}$)$<$$-8.6$ for completeness to $z$$<$1.4, we do not probe higher values of sSFR beyond $z$$>$1 in this study.
We therefore cannot confirm the existence of a turnover in the clustering behavior at
higher values of sSFR as presented in \cite{Lin12}.
We can also compare our results as a function
of sSFR to those of \cite{Li08}, who measured $\omega_{p}(r_{p})$~for a large sample of SDSS
galaxies with sSFRs ranging from $-11$$<$log($\psi$/$M_{*}$)$<$$-9$. To facilitate
this comparison, we follow \citeauthor{Li08} and compute the best fit $\xi(r)$ \ power law from
10 jackknifed samples of our $\omega_{p}(r_{p})$~data as a function of sSFR and normalize to a constant power law of
$\xi(r)_{\rm{pow}}=(r / 5 \ h^{-1}$ Mpc)$^{-1.8}$. As we fit $\omega_{p}(r_{p})$~over scales 1-10 $h^{-1}$ Mpc in each jackknife sample,
we evaluate the power law $\xi(r)_{\rm{fit}}$ for the unweighted mean values of $r_0$ and $\gamma$ at a fixed scale of 5 $h^{-1}$ Mpc and plot
the ratio of the 2PCFs as a function of sSFR (see Figure~\ref{fig:ssfrli08}).
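The normalization step amounts to evaluating both power laws at the same scale; a minimal sketch, assuming the fitted $r_0$ and $\gamma$ values are already in hand:

```python
def xi_powerlaw(r, r0, gam):
    """Real-space 2PCF power law: xi(r) = (r / r0)^(-gam)."""
    return (r / r0) ** (-gam)

def normalized_amplitude(r0, gam, r_eval=5.0):
    """Ratio of the fitted xi(r) to the reference power law
    xi_pow = (r / 5 h^-1 Mpc)^(-1.8), evaluated at r = r_eval."""
    xi_ref = (r_eval / 5.0) ** (-1.8)
    return xi_powerlaw(r_eval, r0, gam) / xi_ref

# At r = 5 h^-1 Mpc the reference power law is unity, so the
# ratio reduces to (r0 / 5)^gam.
```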
Compared to \citet{Li08} (also at $r_p=5 \ h^{-1}$ Mpc), we find good qualitative
agreement with the trend seen for local clustering results. At log($\psi$/$M_{*}$)$>$-9.5, there
is a significant trend to lower clustering amplitudes at higher sSFR,
while the large-scale clustering amplitude is relatively independent of sSFR
at log($\psi$/$M_{*}$)$<$$-9.5$. At log($\psi$/$M_{*}$)$<$$-10.5$ we do not see a rise in the large-scale clustering
amplitude, as is seen locally. The trend of increased clustering at lower sSFRs should
be expected given the C08 results, as
sSFR is highly correlated with restframe color (see right
panel of Figure~\ref{fig:colormasscontour}) and C08 found an increased clustering
amplitude with redder colors.
Here again, as with the highest stellar mass red sample (log($M_*$/$M_{\sun}$)$>$11.0),
the sample with the lowest sSFR (log($\psi$/$M_{*}$)$<$$-10.6$) is a threshold sample which is
limited by the DEEP2 survey volume.
As the clustering signal from the rarest galaxies cannot be
measured from DEEP2 data, the clustering amplitude in this sample may be underestimated relative
to larger survey volumes such as SDSS.
The effect is expected to be minimal.
\subsection{Clustering Amplitude and the SFR-$M_{*}$ Relationship}
\label{sec:sfrmass}
While galaxy samples selected by SFR probe a wide range of stellar mass,
stellar mass and SFR are known to be correlated \citep[e.g.,][]{Noeske07a}
such that the mean stellar mass increases with increasing SFR.
In Section~\ref{sec:smass}, we showed that the clustering amplitude $r_0$ increases
monotonically with stellar mass, as
the amount of baryons processed into stars correlates with the halo mass. Therefore,
it is possible that the change in mean stellar mass for our blue SFR samples is
responsible for the observed increase in clustering amplitude with increasing SFR.
Similar investigations in environmental studies \citep{Peng10, Sobral11}
have found little to no difference between the
SFR-density and stellar mass-density relationships, indicating that the SFR-density relation
could be entirely due to differences in stellar mass.
To address this question, we generate a prediction for $r_0$ based on
the measured clustering with stellar mass using a fixed slope
of $\gamma$=1.6 (see left panel of Figure~\ref{fig:r0results}). We fit an exponential relation between
$r_0$ and log($M_*$/$M_{\sun}$) \ for binned blue galaxy stellar mass samples, finding a best fit relation
of
\begin{equation}
r_{0}(M_*)=e^{2.57\log(M_*/M_\sun)-27.1}+2.88.
\end{equation}
We then calculate a prediction for $r_0$ as a function of SFR by weighting the fit relation with the
stellar mass distribution of each SFR-selected sample. Because $\xi(r)$ is actually a pair-weighted statistic, this
method is only an approximation to the $r_0$ that would be measured from a given stellar mass distribution.
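A sketch of this weighting step is given below; the coefficients are those of the fit above, while the input mass distribution is hypothetical:

```python
import numpy as np

def r0_of_mass(log_mstar):
    """Best-fit exponential relation between r0 and log(M*/Msun)
    for the binned blue galaxy samples (see equation above)."""
    return np.exp(2.57 * log_mstar - 27.1) + 2.88

def predicted_r0(log_mstar_sample):
    """Predict r0 for a sample by weighting r0(M*) with the sample's
    stellar mass distribution (an approximation, since xi(r) is
    pair-weighted rather than galaxy-weighted)."""
    return float(np.mean(r0_of_mass(np.asarray(log_mstar_sample))))

# e.g., a hypothetical SFR-selected sample with log(M*/Msun) ~ N(10.3, 0.3)
rng = np.random.default_rng(0)
r0_pred = predicted_r0(rng.normal(10.3, 0.3, size=5000))
```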
The ratio of the measured $r_0$ to the predicted $r_0$ as a function of SFR is shown in Figure~\ref{fig:sfrdetrend}. If the
large-scale clustering amplitude observed can be predicted given the stellar mass distribution of each
SFR sample and the observed relation between $r_0$ and stellar mass,
$r_{0}(\psi)_{\rm{obs}}$/$r_{0}(\psi)_{\rm{fit}}$ would equal one.
We find that the mean $r_{0}(\psi)_{\rm{obs}}$/$r_{0}(\psi)_{\rm{fit}}$ over all
SFR is 1.057$\pm$0.025; the deviation
from unity has a $p$-value of $p=0.076$ and therefore is significant at the $p<0.1$ level. We also find that
there is a trend towards larger deviations from the mass-predicted $r_0$ at higher SFRs, suggesting that
most, but not all, of the SFR-$r_0$ relationship can be explained by the relationship between SFR and stellar mass.
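The quoted mean over the SFR samples is a weighted average of the per-sample ratios; a generic inverse-variance sketch, not necessarily the exact weighting used here:

```python
import numpy as np

def weighted_mean(values, errors):
    """Inverse-variance weighted mean of measurements and its 1-sigma error."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * values) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err
```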
While most of the clustering amplitude as a function of SFR may be explained
by the $r_0$-$M_*$ relation, it is interesting to investigate
the SFR-dependent clustering behavior where the stellar mass distributions of
SFR-selected samples are equal. In Figure~\ref{fig:sfrmasscont}, we plot SFR versus stellar mass for our parent DEEP2 sample with
contours showing lines of constant log($\psi$/$M_{*}$). As we find that $r_0$ increases with decreasing
sSFR (see Figure~\ref{fig:r0results}), the clustering amplitude increases from the upper left
(high sSFR) to the lower right (low sSFR) in this space. To test whether high SFR galaxies are more
clustered than low SFR galaxies with similar stellar masses, we construct a galaxy sample limited in stellar
mass to 10$<$log($M_*$/$M_{\sun}$)$<$11 and sSFR to log($\psi$/$M_{*}$)$<$$-9.9$, which is complete in the volume between 0.74$<$$z$$<$1.05.
This sample selection is shown with a dotted line in Figure~\ref{fig:sfrmasscont}. We then perform a linear fit to the
SFR-$M_*$ dependence within this selected sample, weighting by the average errors of 0.3 dex
in $M_*$ and 0.25 dex in SFR. The resulting fit (shown with a solid black line) roughly corresponds
to the location of the star-forming ``main sequence'' (MS)
\citep[e.g.,][]{Noeske07a}
for the selected sample.
Separating the galaxy sample into populations above and below the MS, we find
that the mean stellar mass is log($M_*$/$M_{\sun}$)=10.36 for both populations, while the average SFRs are
log($\psi$)=1.24 $M_{\sun}$ yr$^{-1}$ and log($\psi$)=0.84 $M_{\sun}$ yr$^{-1}$, respectively.
Comparing their clustering properties, we find galaxies above the
MS have a clustering amplitude of $r_0$=3.73$\pm$0.18 $h^{-1}$ Mpc, while galaxies
below the MS have $r_0$=4.36$\pm$0.21 $h^{-1}$ Mpc and are therefore more clustered
at a given stellar mass. This confirms that
the $r_0$-sSFR results found above are likely not driven by stellar mass differences between the samples.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\columnwidth]{./fig20.pdf}
\caption{The measured clustering scale length ($r_{0}(\psi)_{\rm{obs}}$) for
blue galaxy SFR samples normalized to the scale length predicted by the correlation
between $r_0$ and stellar mass. The predicted
scale length ($r_{0}(\psi)_{\rm{fit}}$) is generated from an exponential fit to the blue galaxy
stellar mass samples (Figure~\ref{fig:r0results}) and weighted by the stellar mass distribution of each SFR-selected sample.
The weighted mean over all SFRs is $r_{0}(\psi)_{\rm{obs}}$/$r_{0}(\psi)_{\rm{fit}}$=1.057$\pm$0.025, and
a weighted linear fit (red, solid line) shows a trend for larger deviation from the stellar mass-predicted $r_0$ at higher SFR.
}
\label{fig:sfrdetrend}
\end{figure}
While the stellar mass and SFR values used in this study are not necessarily
highly accurate on an individual galaxy basis, they do represent the global
average well. Any scatter in these values would tend to wash out the observed
SFR-$M_*$ relation and clustering correlations in this space, so the trends we do detect are, if anything, underestimates of the intrinsic correlations.
In particular, because
the highest SFR galaxies have the bluest restframe colors at a given $K$-band magnitude, it is possible that
our color-based stellar masses underestimate the true stellar mass as measured in the IR (Weiner et al., in preparation).
We have compared our color-M/L stellar masses to galaxies with measured $K$-band stellar masses both above and
below the MS and find no statistically significant bias in the mass distributions.
The clustering dependence with SFR at a given stellar mass
is particularly interesting in light of the differences in galaxy properties
observed for galaxies above and below the MS. In general, galaxies above the MS are
found to have higher star-forming surface densities, smaller sizes, more dust attenuation, and
somewhat higher Sersic indices than galaxies on or below
the MS \citep{Schiminovich07,Elbaz11, Wuyts11}. In particular, both \citeauthor{Elbaz11} and \citeauthor{Wuyts11} interpret
their results to support major mergers as a dominant mechanism of star formation quenching,
where galaxies on the MS experience a merger event or some other instability, which leads to
a central
starburst and bulge formation, and eventually results in a quiescent elliptical galaxy. In this picture,
an individual galaxy would move from first being on the MS to being above the MS during the merger stage
and then eventually move below the MS as the galaxy becomes quiescent (blue arrows in Figure~\ref{fig:sfrmasscont}).
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{./fig21.pdf}
\caption{The dependence of SFR on stellar mass, overlaid with contours of constant sSFR.
Dashed lines show the selection of star-forming galaxies with $z$$<$1.05
used to measure the clustering of galaxies above and below the star-forming
``main sequence''. The black solid line shows a
linear fit to the SFR-stellar mass relation in this sample.
Colored lines show two possible evolution scenarios for star-forming galaxies.
The major merger
scenario (blue arrows) predicts that galaxies evolve along the star-forming
main sequence and undergo a merger event that initially boosts
the SFR, followed quickly by a period of star formation quenching
and progression onto the red sequence. The secular evolution scenario (red arrows) assumes
that galaxies initially lie above the
main sequence, during a period of intrinsically rapid star formation
and are eventually quenched through internal or secular
processes, moving through the main sequence in the process.
We find that galaxies above the main sequence have a lower clustering amplitude than
galaxies below the main sequence at a given stellar mass. As the clustering of individual
galaxies increases with time, our measurements favor the secular evolution scenario, as the galaxy
population above the main sequence cannot be dominated by galaxies
that previously had a lower SFR at a given stellar mass, since those galaxies are more clustered.}
\label{fig:sfrmasscont}
\end{figure}
However, the clustering results presented here do not support this general picture, as galaxies above
the MS are \emph{less} clustered than those on or below the MS. Therefore the majority of the galaxy population
above the MS (i.e., with a higher sSFR) cannot have previously been on the MS, as the clustering
amplitude cannot decrease with time. Our clustering results instead support a picture
in which most galaxies initially lie above the MS, where they have higher SFRs at a given
stellar mass due to internal (non-interacting) processes,
and undergo secular evolution down onto and eventually below the
MS as their star formation is quenched. While major merger events can certainly occur, their contribution to
the overall clustering trends in field galaxies is likely a small effect. The lack of a strong clustering signal from major mergers above the MS
is also supported by recent evidence that mergers play a subdominant role in the build up of the stellar mass function at $z$$<$1 \citep{Conselice13, Moustakas13}.
The clustering properties as a function of
restframe optical color in both SDSS \citep{Zehavi11} and DEEP2 (C08) also support this scenario, as restframe optical color is well correlated with
sSFR at $z\sim0.1$ and $z\sim1$, and both studies find that within the star-forming population, bluer galaxies
are less clustered than redder galaxies.
The general evolutionary picture is one in which galaxies move from the upper left in the SFR-stellar mass plane down
to the lower right to match the observed clustering measurements. However, in addition to the clustering amplitude
and therefore halo mass increasing below the MS, the satellite fraction may change as well, which could also
influence the sSFR. In particular, galaxies above the MS may have a higher satellite fraction, while galaxies below the
MS may be more likely to be central galaxies in their parent dark matter halos. Therefore while secular evolution is a
logical explanation for the clustering trend observed across the MS, a full HOD analysis is required to assess the
importance of the changing satellite fraction above and below the MS and to more robustly determine the processes
driving the observed change in sSFR.
\subsection{Comparison with Environment Studies}
\label{sec:environ}
An alternate way to study the dependence of galaxy properties on large-scale structure
is through overdensity measurements of a galaxy sample relative to the mean density of
galaxies at the same redshift. Such studies measure how a particular galaxy property
is correlated with environment and are inherently limited to probe physical scales larger than the
density-defining population. At $z$$\sim$1, a limiting scale of
$\gtrsim1~h^{-1}$ Mpc is common.
Several environment studies at $z$$\sim$1 have found a strong correlation between stellar
mass and density, in that galaxies in higher density environments tend to have
higher stellar mass \citep{Kauffmann04, Cucciati06, Cooper06,
Peng10,Sobral11}. Our results are in broad agreement with these studies, as we find higher clustering
amplitudes with increasing stellar mass between 9.6$<$log($M_*$/$M_{\sun}$)$<$11.0, above which
the clustering amplitude remains constant.
There has been some disagreement in the literature, however, as to whether the
color-density relationship evolves as a function of redshift. Several previous
studies of galaxy environments at $z\sim1$ have failed to detect any statistically significant
relationship between color and density, in contrast to studies at lower redshifts
\citep{Cucciati06, Scodeggio09, Grutz11}.
On the other hand, \cite{Cooper10} probed the most massive DEEP2 galaxies and found a
statistically-significant correlation between color and galaxy overdensity, with high mass red galaxies
found preferentially in the most dense environments at $z\sim1$. \citeauthor{Cooper10} emphasize that
while no significant trend in the color-density relation may be \emph{detected} in several environment studies,
it is possible that a color-density correlation may simply be missed due to larger statistical
errors or observational systematics at
higher redshifts (e.g., due to low sample completeness in VVDS at $z$$>$1).
In our analysis, we do not detect a difference between the clustering of red and blue galaxies at an
equivalent stellar mass. However, our samples are defined to probe the bulk color-density
relationship at $z\sim1$, and we do not probe the most over- or under-dense regions
specifically, which was a primary focus of \cite{Cooper10}. Our results are consistent with \cite{Cooper10} in that the
color-density trend for high-mass galaxies at $z$$\sim$1 is weaker than the stellar mass-density trend;
the typical densities of red and blue galaxies at a given stellar mass are not significantly
different.
Other environment studies have measured the SFR-density relationship as a function of
redshift and stellar mass. Locally, this relation is monotonic, with lower-SFR galaxies
found in higher density environments \citep{Gomez03}. At earlier times,
the SFR-density relationship was quite different: dense environments were associated
with both low and high SFRs (\citealt{Elbaz07}, CP08, \citealt{Sobral11}). Our SFR-clustering results at
$z\sim1$ are in broad agreement with these studies, with the highest-SFR blue galaxies
having roughly the same clustering strength as the lowest-SFR red galaxies.
The similar environments found for galaxy populations with vastly different SFRs
(log($\psi$)$<$ 0.5 $M_\sun$ yr$^{-1}$ and log($\psi$)$>$1.5 $M_\sun$ yr$^{-1}$) indicate that the
processes leading to the large-scale clustering and quenching of star formation in massive galaxies are
primarily independent. One possible scenario is that rapidly star-forming blue galaxies at these redshifts are
progenitors of the quiescent red galaxies found at later epochs \citep[][C08]{Cooper06}.
The preponderance of quiescent galaxies found in high density regions at
$z\sim1$ and
the relative paucity of high SFR blue galaxies at low redshifts
give the appearance that the large-scale environment and quenching are related.
Mechanisms proposed to explain the evolution of the observed
SFR-density relation involve the quenching of star formation in a variety of
isolated and merging systems \citep{Sobral11, Peng10} and include
effects such as virial heating of cold gas needed for star formation and the
rate of gas exhaustion. However, as pointed out in the previous section,
it seems likely that quenching specifically from major mergers plays a sub-dominant
role at $z\sim1$; the clustering of star-forming galaxies at a fixed stellar mass increases monotonically
with decreasing SFR, not vice-versa. Our results support the theory that a simple
sSFR-$M_*$ relation, such as the proposed ``mass-quenching'' models where the quenching
rate for galaxies at or above $M^*$ is proportional to their stellar mass \citep{Noeske07b, Peng10},
should reproduce the gross relationship between star-formation, stellar mass, and
clustering on large scales. However, we caution that not
\emph{all} quenching in star-forming galaxies must follow this form,
as our samples probe only the average clustering properties of the $z\sim1$ galaxy distribution.
\subsection{Small-scale Clustering Behavior}
\label{sec:sscale}
We now turn to our clustering results on small scales, below $r_p\sim$1 $h^{-1}$ Mpc.
In Section~\ref{sec:smass}, we found that $\xi(r_p, \pi)$~ for
blue galaxies selected by stellar mass exhibits stronger ``Fingers of God'' on small
scales at higher stellar mass (see Figure~\ref{fig:massxisp}), indicating that
higher-stellar mass blue galaxies reside in halos with higher velocity dispersions
and therefore likely reside in higher mass halos than lower-stellar mass blue galaxies.
Such an obvious increase in the ``Fingers of God'' within $\xi(r_p, \pi)$~is not seen for
blue galaxy samples selected by SFR or sSFR, suggesting that these properties do not correlate with the dark matter halo mass
as directly as stellar mass does. Although we do not show them here, we find that all red galaxy samples
exhibit strong ``Fingers of God'', independent of the stellar mass or SFR and similar to the
red luminosity samples in C08 (see their Figure 8).
In order to reveal any relative change in the small-scale clustering amplitude, we divide $\omega_{p}(r_{p})$~on all scales
by the best-fit power law on scales 1-10 $h^{-1}$ Mpc and look for excess clustering below $r_{p}$$\leq$0.3 $h^{-1}$ Mpc relative to this best-fit power law.
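Although no code accompanies the paper, the test just described can be sketched as follows (a minimal illustration only; the function name, assumed inputs, and per-bin error treatment are ours, and the published $p$ values additionally account for bin-to-bin covariance):

```python
import numpy as np

def powerlaw_excess(rp, wp, wp_err, fit_range=(1.0, 10.0), small_scale=0.3):
    """Fit log wp = A + B log rp over fit_range (two-halo-dominated scales)
    and return the ratio of the measured wp to the power-law extrapolation,
    plus a rough per-bin significance, on scales rp <= small_scale."""
    in_fit = (rp >= fit_range[0]) & (rp <= fit_range[1])
    # least-squares power-law fit in log-log space
    slope, intercept = np.polyfit(np.log10(rp[in_fit]), np.log10(wp[in_fit]), 1)
    wp_model = 10.0 ** (intercept + slope * np.log10(rp))
    small = rp <= small_scale
    ratio = wp[small] / wp_model[small]
    # per-bin deviation in units of the measurement error (covariance ignored)
    sigma = (wp[small] - wp_model[small]) / wp_err[small]
    return ratio, sigma
```

A pure power law returns a ratio of unity at all small-scale bins, so any one-halo excess appears directly as ratio $>1$.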
In HOD models, the slope of $\omega_{p}(r_{p})$~on these small scales, where the one-halo term
dominates the correlation function, is governed by the relative number
of central and satellite galaxy pairs within an individual halo, as
well as the location of the probed halo mass scale
on the halo mass function \citep{Berlind02, Zehavi05, Zheng05, Skibba09}.
For massive galaxy samples with a minimum halo mass corresponding to $\gtrsim10^{12.5}~
h^{-1} M_{\sun}$ at $z=1$,
the $\omega_{p}(r_{p})$~amplitude rapidly increases on small scales due to a greater contribution
from central-satellite and satellite-satellite pairs (\citealt{Sheth02, Tinker08}; see also Appendix A of \citealt{Zheng09} for further details).
Such massive halos are near the exponential tail of the halo mass function, and higher-mass halos
are increasingly rare in the volume surveyed by DEEP2.
For our galaxy samples defined by stellar mass, we find no
significant increase in clustering on small scales for
any of the blue galaxy samples. However, both mass-selected red galaxy samples exhibit
enhanced clustering on scales below $r_{p}$$\leq$0.3 $h^{-1}$ Mpc ($p=0.005$ for 10.5$<$log($M_*$/$M_{\sun}$)$<$11.0
and $p=0.04$ for log($M_*$/$M_{\sun}$)$>$11.0).
The enhanced small-scale clustering observed in these massive red galaxies
may reflect a higher prevalence of central galaxies relative to satellite galaxies in the population
and the higher minimum halo mass ($\sim10^{12.5} h^{-1} M_{\sun}$),
which contributes more correlated power in the one-halo term as discussed above.
The lack of a detectable small-scale rise for the blue galaxy sample at an
equivalent stellar mass may be due in part to larger measurement errors in the highest mass sample
(see the right panel of Figure~\ref{fig:relmasscorrcolor}) or
a larger contribution from satellite-satellite pairs than central-satellite pairs, which smoothes
the transition in $\omega_{p}(r_{p})$~between the one-halo and two-halo terms.
The small-scale clustering in color-independent stellar mass samples
is driven
primarily by the relative fractions of red and blue galaxies at a fixed stellar mass.
Accordingly, we observe that the color-independent
samples exhibit a similar small-scale rise at high stellar mass due to the
high fraction of red galaxies present in the sample.
When binned by SFR, we find that red galaxies with low SFRs do not
show a rise on small scales, while blue galaxies do exhibit a 7-fold increase in
small-scale clustering amplitude over the best-fit large-scale power law
below $r_{p}$$\leq$0.3 $h^{-1}$ Mpc. While the $\omega_{p}(r_{p})$ \ error in the highest-SFR blue galaxy sample
(log($\psi$)$>$1.5 $M_{\sun}$ yr$^{-1}$) is the largest of all measured blue galaxy samples,
the deviation from the large-scale power law has a $p$ value of 0.038 and is
statistically significant at the $p<0.05$ level.
One possible explanation for the correlation between high SFR and increased small-scale clustering amplitude is that
star formation is being triggered by galaxy interactions, either through merger events or tidal
interactions \citep[e.g.][]{Barton07}. Late-type galaxies in the
blue cloud contain large gas
reservoirs which can support enhanced levels of star formation triggered though
close interactions of galaxy pairs. Enhanced IR luminosity,
which is tightly correlated with star formation, has been previously observed in extremely blue
DEEP2 galaxies in close kinematic galaxy pairs ($\delta r_p=50$ $h^{-1}$ kpc) within the EGS \citep{Lin07}.
Further, \cite{Robaina09} show enhanced clustering with SFR on slightly larger scales (40$<$$r_p$$<$180 $h^{-1}$ kpc)
between 0.4$<$$z$$<$0.8 in a sample of COMBO-17 galaxies \citep{Wolf03} matched to \emph{Spitzer} 24$\mu$m measurements.
Our results show a similar increase in small-scale clustering power at high SFR, on scales $r_{p}$$\leq$0.3 $h^{-1}$ Mpc. However, we have demonstrated
that an increased SFR is also correlated with higher stellar mass, and therefore it is also possible that galaxies with higher stellar mass
have a stronger one-halo term due to their higher halo mass.
It is possible that both stellar mass and SFR enhancement are correlated with the increased clustering seen on small scales in SFR-selected blue galaxies.
We note, however, that red galaxies with low SFR have similar halo masses to the high-SFR blue galaxies and do not show a
similar rise, indicating that SFR enhancement is likely more strongly correlated with the increased clustering signal.
These small-scale clustering trends allow us to interpret the behavior seen as a function
of luminosity in C08 (their Figure 10), where a relative small-scale
rise was detected for the brightest blue galaxies and a marginal increase for bright
red galaxies.
Figure~\ref{fig:colormasscontour} shows that
blue galaxies with the highest SFRs also have the highest
luminosities, and therefore the observed small-scale rise for bright blue
galaxies in C08 may have been due in part to both higher stellar masses and SFR enhancement.
The small-scale rise observed here for massive red galaxies is likely
reflected in the marginally-significant increase seen for bright red galaxies in C08. As
the mass-to-light ratio varies across the red sequence, a given luminosity range corresponds to a wider range in stellar mass;
if stellar mass is more strongly linked to halo mass, luminosity-selected samples will have a broader host mass range
and hence exhibit a weaker small-scale clustering signal.
Turning to sSFR,
we find that galaxies that are forming stars most rapidly relative to their existing
stellar mass have an increased clustering amplitude on small scales.
Interestingly, \emph{both} the high-sSFR sub-samples of the blue and red galaxy population
show a small-scale rise at $r_{p}$$\leq$0.3 $h^{-1}$ Mpc. In each sample, we measure a 4-fold
increase in $\omega_{p}(r_{p})$~relative to the large-scale power law fit over $r_{p}$=1-10 $h^{-1}$
Mpc scales. The increase in clustering is significant at the $p<0.05$ level in both galaxy samples.
Unlike the previous small-scale clustering results with SFR, the sSFR samples have normalized the SFR
relative to the stellar mass, and therefore the measured small-scale rise will be more directly connected to
enhanced star formation at a fixed stellar mass. Because sSFR is highly correlated with
star formation history and restframe color, the galaxies demonstrating this increased small-scale clustering
are the bluest members of their respective populations (e.g. the bluest blue galaxies and the bluest red
galaxies).
\cite{Li08} measured the clustering of SDSS galaxies
selected by sSFR and found enhanced small-scale galaxy
clustering in the local Universe, with $>$40\% of these galaxies
having close ($r_p<$100 kpc) companions. They conclude that the
observed clustering behavior is a signature of tidal interactions
between galaxy pairs inside the same dark matter halo, which leads to
an inflow of cold gas and enhanced star formation. Evidence
for enhanced sSFR on small scales was also measured from close pairs of
blue galaxies ($r_{p}$$<$50 $h^{-1}$ kpc) at higher redshift in PRIMUS
between 0.25$<$$z$$<$0.5 \citep{Wong11} and in DEEP2 between
0.1$<$$z$$<$1.1 \citep{Lin07}.
\cite{Robaina09} also found enhanced clustering on scales $r_{p}$$<$40 kpc for star-forming galaxies with
$M_{*}>10^{10}~M_{\sun}$ between 0.4$<$$z$$<$0.8.
While we define ``small-scales'' at a larger scale threshold of $r_{p}$$\leq$0.3 $h^{-1}$ Mpc,
we also find statistically significant evidence for enhanced small-scale clustering with sSFR.
These results support our interpretation that enhanced star formation, possibly triggered from galaxy tidal interactions,
is correlated with the observed small-scale rise in our blue galaxy sample with high sSFR.
However, enhanced small-scale clustering for high-sSFR
red galaxies may be somewhat surprising. The
range of sSFR for which we find this enhancement
($-11$$<$log($\psi$/$M_{*}$)$<$$-10$) is often referred to as the ``green
valley'', a transition region from blue star-forming galaxies to red
quiescent galaxies \citep{Martin07, Salim09, Mendez11}.
The enhanced clustering could be due to
recent merger events or tidal disruptions between red and blue
galaxy pairs where the blue galaxy has provided
additional gas for star formation in the quiescent galaxy.
\cite{Robaina09} found that major interactions can contribute significantly
to the small-scale clustering amplitude for
the most dust-obscured starbursts, which may be
selected in our DEEP2 sample as red with the restframe color cut of Equation~\ref{eq:color}.
Alternatively, close interactions between
blue galaxy pairs could temporarily boost the SFR and eventually
lead to quenching, which would move
galaxies from the blue cloud to the red sequence. As we argued
in the last subsection, however, such events must be sub-dominant to the
general quenching trend in individual halos.
\section{Conclusions}
\label{sec:conclusions}
In this study, we measure the two-point correlation function of complete DEEP2
galaxy samples selected by stellar mass, SFR and sSFR and separated by restframe color. We fit a power law to the
projected correlation function and measure the bias and clustering scale length and slope
on scales of 1-10 $h^{-1}$ Mpc ($\bar{r}_p$=4.1 $h^{-1}$ Mpc), where
the two-halo term dominates the 2PCF.
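For reference, these fits use the standard power-law form for the real-space correlation function and its analytic line-of-sight projection (a textbook relation rather than anything specific to this work):
\[
\xi(r) = \left( \frac{r}{r_0} \right)^{-\gamma}, \qquad
\omega_p(r_p) = r_p \left( \frac{r_0}{r_p} \right)^{\gamma}
\frac{\Gamma(1/2) \, \Gamma\left( \frac{\gamma-1}{2} \right)}{\Gamma\left( \frac{\gamma}{2} \right)},
\]
so that $r_0$ and the slope $\gamma$ follow directly from a linear fit to $\log \omega_p$ versus $\log r_p$.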
We also study the relative shape of the correlation
function on small and large scales, which depends on the satellite fraction and
dark matter halo mass. We summarize our findings as follows:
\begin{enumerate}
\item The large-scale galaxy clustering amplitude increases monotonically as a function
of stellar mass for star-forming galaxies with 9.6$<$log($M_*$/$M_{\sun}$)$<$11, indicating that stellar
mass closely tracks halo mass at $z\sim1$. Within the limited mass range probed for red
galaxies (10.5$<$log($M_*$/$M_{\sun}$)$<$11.5), we find no significant trend between stellar mass and
clustering amplitude. Within the measurement errors, our results agree with a $z\sim1$ clustering analysis from VVDS
\citep{Meneux08} in the stellar mass range where both DEEP2 and VVDS samples are complete.
\item Within the blue, star-forming galaxy population, the large-scale clustering amplitude
increases as a function of increasing SFR. Red galaxies with low SFRs are strongly clustered
and show no dependence on SFR within the range probed. We find the highest-SFR blue galaxies
have the same clustering amplitude as red galaxies, as seen in previous DEEP2 environment
studies.
\item The observed trend between large-scale clustering amplitude and SFR can
primarily be accounted for by the observed correlation between clustering
amplitude and stellar mass. However, the $r_0$-stellar mass relation alone
does not fully predict the clustering amplitude at all SFRs;
there is a small clustering excess at high SFR above what is predicted by
the stellar mass of each sample alone.
This suggests that most, but not all, of the correlation between large-scale
clustering amplitude and SFR can be attributed to the SFR-stellar mass relation.
\item Galaxy samples selected by sSFR show similar
large-scale clustering trends as samples selected by stellar mass.
The clustering amplitude decreases with increasing sSFR (corresponding to bluer restframe color),
while red galaxies with low sSFRs have a constant clustering strength in the range probed.
While the large-scale clustering amplitude only mildly depends on sSFR for
$-11.0$$<$log($\psi$/$M_{*}$)$<$$-9.5$,
$r_0$ drops significantly between $-9.5$$<$log($\psi$/$M_{*}$)$<$$-8.5$, similar to the trend observed locally in SDSS.
\item By constructing stellar mass-limited star-forming galaxy
samples above and below the SFR-stellar mass ``main sequence'',
we find that the clustering amplitude increases with decreasing SFR at a fixed stellar mass
(i.e. below the main sequence), which confirms that the clustering trends seen with sSFR are not driven entirely by
stellar mass. Given that galaxies below the main sequence are more clustered
than those above, the bulk of the population
above the main sequence cannot be dominated by galaxies that used to be
on the main sequence and are currently undergoing major merger events. Instead,
galaxies must smoothly evolve from above the main sequence to below it,
as the clustering of galaxies can only increase with time.
\item We detect enhanced clustering on scales $r_{p}$$\leq$0.3 $h^{-1}$ Mpc, relative to
larger scales, for high stellar mass red galaxies, high SFR blue galaxies, and the highest sSFR
sub-samples of both red and blue galaxy populations. The increased small-scale clustering may reflect
a combination of effects, including a changing satellite fraction, higher halo mass, and/or enhanced SFR.
We conclude that triggered star formation due to close galaxy interactions is a likely explanation for the enhanced clustering seen at high sSFR.
\end{enumerate}
As our analysis uses the two-point correlation function formalism and measures
clustering properties for several important galaxy properties, we expect these results
to be highly useful for galaxy evolution models. In particular, HOD and abundance
matching models should greatly benefit from measurements of the clustering amplitude at
$z\sim1$ with respect to stellar mass, SFR, and sSFR. We also anticipate that the bias values
calculated here as a function of stellar mass and SFR will be used in projections of
future BAO surveys that will probe the $z$$>$1 universe. \linebreak
We thank Guangtun Zhu and Jeremy Tinker for providing their code to calculate $V_{\rm{max}}$ values and dark matter correlation functions, and we thank Kevin Bundy for providing $K$-band stellar masses to crosscheck DEEP2 stellar masses. We also appreciate the useful comments from Ramin Skibba, Lihwai Lin, and the referee of this article. This work is supported in part by the Director, Office of Science, High Energy Physics, of the U.S. Department of Energy, under contract number DE-AC03-76SF00098. ALC gratefully acknowledges support
from NSF CAREER award AST-1055081. JAN acknowledges support from DOE Early Career grant DE-SC0003960. DEEP2 survey funding has been provided by NSF grants AST95-09298, AST-0071048, AST-0071198, AST-0507428, AST-0507483, and AST-0806732 as well as NASA LTSA grant NNG04GC89G.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by financial support of the W. M. Keck Foundation. The DEEP2 team and Keck Observatory acknowledge the significant cultural role that the summit of Mauna Kea has within the indigenous Hawaiian community and appreciate the opportunity to conduct observations from this mountain.
\input{ms.bbl}
\input{tab1.tex}
\input{tab2.tex}
\input{tab3.tex}
\input{tab4.tex}
\input{tab5.tex}
\input{tab6.tex}
\end{document}
The notion of a parabolic vector bundle over a compact Riemann surface was first developed in \cite{MS} to obtain a version of the theorem of Narasimhan--Seshadri \cite{NS}, in the case where one wants to describe the moduli space of unitary representations of the fundamental group of the surface with a finite set of punctures. The extra structure one obtains is the data of a flag in the fibre and a set of real numbers, called weights, at each of the punctures. When the weights are rational numbers, parabolic vector bundles
can be described as equivariant vector bundles on a suitable Galois cover
\cite{BiswasOrbifold}, \cite{Boden_thesis}, \cite{NasatyrSteer}. One drawback of this correspondence is that it requires the introduction of a new parameter, namely the Galois group for the covering. To remedy this, N.\ Borne has shown that the category of parabolic vector bundles over a $\mathbb{C}$-scheme with weights lying in $\tfrac{1}{r} \mathbb{Z}$, with $r \in \mathbb{N}$, is equivalent to the category of vector bundles over a related object called the ``$r$-th root stack'' which depends only on the original scheme, the parabolic divisor and the natural number $r$ \cite{Borne}, \cite{Bor2}. The root stack essentially gives the scheme some ``orbifold structure,'' by putting a cyclic group of order $r$ over the divisor. This approach of Borne for parabolic bundles has turned out to be very useful (see, for example, \cite{BD}).
A coherent generalization of the notion of a parabolic structure for a principal bundle, even over curves, has been somewhat elusive, largely because it has not been clear what the analogue of a set of weights should be. The main aim of this paper is to advocate Borne's approach of viewing a parabolic bundle over a quasi-projective variety as a bundle over an associated root stack. As such, the article begins by defining principal bundles over a smooth algebraic stack over $\mathbb{C}$ and basic constructions, such as associated fibre bundles and reduction of structure group.
One result of the paper, stated in Section \ref{ConnectionCondition}, gives a
condition for the existence of a connection over a principal bundle over an
algebraic stack in the style of \cite{AzadBiswas2002}.
To explain this condition,
let $G$ be a reductive affine algebraic group over $\mathbb{C}$. Let $\mathfrak{X}
= \mathfrak{X}_{\mathscr{O}_X(Z), s, r}$ be a complete root stack of dimension one. We prove that
a principal $G$-bundle $E_G$ over $\mathfrak{X}$ admits a connection if and only if
for any reduction $\mathscr{F}\, \subset E_G$ to a Levi factor $L$ of a parabolic subgroup
of $G$, and any character $\chi : L \to
\mathbb{C}^\times$, the associated line bundle $\mathcal{M} := \mathscr{F}
\times^\chi \mathbb{C}$ satisfies $\deg_\mathfrak{X} \mathcal{M} = 0$. (See
Theorem \ref{connectioncondition}.)
In Section \ref{RootStacks}, we review the construction of a root stack as given in \cite{Cadman}. We show that in the special case of the Galois covers considered in \cite{BiswasOrbifold}, where all isotropy groups are cyclic of the same order, the associated root stack is in fact the quotient stack, and point out that in the case of a curve, one always has such a realization.
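As the simplest illustration of this phenomenon (a standard example in the theory, stated here only for orientation), the $r$-th root stack of the affine line $\mathbb{A}^1 = \textnormal{Spec} \, \mathbb{C}[x]$ along the divisor $Z = \{ 0 \}$, with $s$ the tautological section cutting out $Z$, is the quotient stack
\[
\mathfrak{X}_{\mathscr{O}_{\mathbb{A}^1}(Z), s, r} \simeq [\mathbb{A}^1 / \mu_r],
\]
where $\mu_r$ acts by scaling on the coordinate $t$ of the covering line and the map to the coarse space is induced by $x = t^r$; the origin acquires isotropy group $\mu_r$, while away from it the stack is simply the scheme $\mathbb{A}^1 \setminus \{ 0 \}$.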
Of course, justifying the root stack approach to parabolic structures necessitates a comparison with existing approaches in the literature, and this is done in Sections \ref{TensorFunctors} and \ref{ParahoricTorsors}. In his characterization of finite vector bundles \cite{Nori1976}, M.V.\ Nori gave a realization of a principal $G$-bundle over a scheme over an arbitrary field as a tensor functor from the category of finite-dimensional representations of $G$ to the category of vector bundles over the scheme. One approach that has been taken is that of \cite{BBN}, where a parabolic principal bundle was defined as a tensor functor which takes values in the category of parabolic vector bundles. Section \ref{TensorFunctors} is concerned with showing that this notion and that of a principal bundle over a root stack are equivalent.
A notion which has appeared in the literature recently (e.g., \cite{PappasRapoport, Heinloth_Uniformization}) is that of a (torsor for a) parahoric Bruhat--Tits group scheme. The specific instances of this phenomenon which are relevant for us appear in a paper of V.\ Balaji and C.S.\ Seshadri \cite{BalajiSeshadri2012}, where such a torsor is generically a $G$-bundle. They show that equivariant $G$-bundles for a Galois cover correspond to parahoric torsors on the base. In their description, it is the isotropy representation (in $G$) over the ramification points of the Galois cover which determines the appropriate Bruhat--Tits group scheme (i.e., the analogue of the flag type for parabolic vector bundles). Such representations may be thought of as restrictions of cocharacters of the cover, and hence as rational cocharacters on the base. It is this that yields the analogous notion of a set of weights for parabolic bundle/parahoric torsor (see Section \ref{PGST}). This was already suggested by P.\ Boalch in his local classification of connections on $G$-bundles for reductive groups \cite{Boalch_Parahoric}. The aim of Section \ref{ParahoricTorsors} is to show that these ideas are all readily expressible in terms of principal bundles over root stacks. Specifically, we define the local type of a principal bundle over the root stack and show that these correspond to parahoric torsors of a given type. Finally, we restrict Boalch's definition of a logarithmic parahoric connection using a condition paralleling the one for parabolic vector bundles (e.g., as in \cite[\S2.2]{BiswasLogares2011}) and show that one has a correspondence between connections on a principal bundle over the root stack and connections on the parahoric torsor.
MLW would like to thank V.\ Balaji for some helpful clarifications and for sharing a draft version of \cite{BalajiSeshadri2012}. He is also grateful for the support of the Fonds qu\'eb\'ecois de la recherche sur la nature et les technologies in the form of a Bourse de recherche postdoctorale (B3).
\section{Principal Bundles on Algebraic Stacks} \label{pbstack}
We will work over the category of $\mathbb{C}$-schemes, which we denote by $\mathfrak{Sch}/\mathbb{C}$. If not otherwise indicated, $\mathfrak{X}$ will be a smooth algebraic stack locally of finite type over $\mathbb{C}$. We will also fix a complex algebraic group $G$.
For a $\mathbb{C}$-scheme $U$, the fibre category of $\mathfrak{X}$ over $U$ will be denoted by $\mathfrak{X}(U)$. Via the $2$-Yoneda lemma, we will freely identify an object $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$ with a $1$-morphism $\mathfrak{f} : U \to \mathfrak{X}$.
\subsection{Coherent Sheaves} \label{coherentsheaves}
A \emph{coherent sheaf} $\mathcal{V}$ on $\mathfrak{X}$ consists of the following data (e.g.\ \cite[Definition 7.18]{Vistoli_IntersectionTheory}, \cite[Definition 2.50]{Gomez_Stacks}, \cite[Lemme 12.2.1]{LMB}). If $\mathfrak{f} : U \to \mathfrak{X}$ is a smooth atlas (i.e., if $U$ is a $\mathbb{C}$-scheme and $\mathfrak{f}$ is a smooth map), then we have a coherent $\mathscr{O}_U$-module $\mathcal{V}_\mathfrak{f}$. For a (2-)commutative diagram
\begin{align} \label{overXX}
\vcenter{
\commtri{ U }{ V }{ \mathfrak{X} }{ k }{ \mathfrak{f} }{ \mathfrak{g} }
}
\end{align}
with $\mathfrak{f}, \mathfrak{g}$ smooth atlases, we are given an isomorphism
\begin{align} \label{sheafiso}
\alpha_k^\mathcal{V} = \alpha_k : \mathcal{V}_\mathfrak{f} \xrightarrow{\sim} k^* \mathcal{V}_\mathfrak{g},
\end{align}
such that for a (2-)commutative diagram
\begin{align} \label{compositionoverXX}
\vcenter{
\xymatrix{
U \ar[dr]_{\mathfrak{f}} \ar[r]^k & V \ar[d]^\mathfrak{g} \ar[r]^l & W \ar[dl]^ \mathfrak{h} \\
& \mathfrak{X} & } }
\end{align}
the diagram
\begin{align} \label{compatibilityconditionCS}
\vcenter{
\xymatrix{
\mathcal{V}_\mathfrak{f} \ar[r]^-{\alpha_{l \circ k}} \ar[d]_{\alpha_k} & (l \circ k)^* \mathcal{V}_\mathfrak{h} \ar@{=}[d] \\
k^* \mathcal{V}_\mathfrak{g} \ar[r]_{k^* \alpha_l} & k^* l^* \mathcal{V}_\mathfrak{h} }
}
\end{align}
commutes, where the two objects on the right side are identified via the canonical isomorphism of functors $(l \circ k)^* \xrightarrow{\sim} k^* l^*$.
We will call a coherent sheaf $\mathcal{V}$ on $\mathfrak{X}$ a \emph{vector bundle} if $\mathcal{V}_\mathfrak{f}$ is a locally free $\mathscr{O}_U$-module whenever $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$.
If $\mathfrak{X}$ is a Deligne--Mumford stack, then it is enough to specify $\mathcal{V}_\mathfrak{f}$ for \'etale atlases $\mathfrak{f} : U \to \mathfrak{X}$. In this case, we may define the sheaf of differentials $\Omega_\mathfrak{X}^1 = \Omega_{\mathfrak{X}/\mathbb{C}}^1$ as follows. For an \'etale morphism $\mathfrak{f} : U \to \mathfrak{X}$, we simply set
\begin{align*}
\Omega_{\mathfrak{X}, \mathfrak{f}}^1 := \Omega_{U/\mathbb{C}}^1.
\end{align*}
Given a diagram (\ref{overXX}), with $\mathfrak{f}$ and $\mathfrak{g}$, and hence $k$, \'etale, one
has an exact sequence \cite[Morphisms of Schemes, Lemma 32.16]{StacksProject}
\begin{align*}
0 \to k^* \Omega_{V/\mathbb{C}}^1 \to \Omega_{U/\mathbb{C}}^1 \to \Omega_{U/V}^1 \to 0;
\end{align*}
since $k$ is \'etale, the last term vanishes, so we obtain isomorphisms (\ref{sheafiso}). The fact that they satisfy the compatibility condition (\ref{compatibilityconditionCS}) is due to their canonical nature.
If $\mathcal{V}$ is a vector bundle over a Deligne--Mumford stack $\mathfrak{X}$, by a \emph{connection on $\mathcal{V}$} we will mean the data of a connection $\nabla_\mathfrak{f}$ on $\mathcal{V}_\mathfrak{f}$ for each $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$ such that for a diagram (\ref{overXX}), the following commutes:
\begin{align*}
\xymatrixcolsep{5pc}
\xymatrix{\mathcal{V}_\mathfrak{f} \ar[r]^-{\nabla_\mathfrak{f}} \ar[d]_{ \alpha_k^\mathcal{V} } & \mathcal{V}_\mathfrak{f} \otimes_{\mathscr{O}_U} \Omega_{U/\mathbb{C}}^1 \ar[d]^{ \alpha_k^\mathcal{V} \otimes \alpha_k^{\Omega} } \\
k^* \mathcal{V}_\mathfrak{g} \ar[r]_-{ k^* \nabla_\mathfrak{g} } & k^* \mathcal{V}_\mathfrak{g} \otimes_{\mathscr{O}_U} k^* \Omega_{V/\mathbb{C}}^1 }
\end{align*}
\subsection{Principal Bundles} \label{principalbundles}
With $\mathfrak{X}$ and $G$ as in the beginning of the section, we can take as a definition of a \emph{principal $G$-bundle on $\mathfrak{X}$} the following, paralleling the definition of a coherent sheaf. For each smooth atlas $\mathfrak{f} : U \to \mathfrak{X}$, we are given the data of a principal $G$-bundle $\mathscr{E}_\mathfrak{f}$ over $U$, and for each diagram (\ref{overXX}) we have isomorphisms
\begin{align*}
\beta_k = \beta_k^\mathscr{E} : \mathscr{E}_\mathfrak{f} \xrightarrow{\sim} k^* \mathscr{E}_\mathfrak{g}
\end{align*}
such that for a diagram (\ref{compositionoverXX}), one has a commutative diagram
\begin{align} \label{compatibilitycondition}
\vcenter{
\xymatrix{
\mathscr{E}_\mathfrak{f} \ar[r]^-{\beta_{l \circ k}} \ar[d]_{\beta_k} & (l \circ k)^* \mathscr{E}_\mathfrak{h} \ar@{=}[d] \\
k^* \mathscr{E}_\mathfrak{g} \ar[r]_{k^* \beta_l} & k^* l^* \mathscr{E}_\mathfrak{h}, } }
\end{align}
where again, we use the canonical isomorphism $(l \circ k)^* \xrightarrow{\sim} k^* l^*$ to identify the two bundles on the right hand side.
Let $\mathscr{F}$ be another $G$-bundle over $\mathfrak{X}$. A \emph{morphism} of $G$-bundles $\varphi : \mathscr{E} \to \mathscr{F}$ consists of the data of a morphism $\varphi_\mathfrak{f} : \mathscr{E}_\mathfrak{f} \to \mathscr{F}_\mathfrak{f}$ for each smooth atlas $\mathfrak{f} : U \to \mathfrak{X}$, such that given a diagram (\ref{overXX}), the square
\begin{align*}
\commsq{ \mathscr{E}_\mathfrak{f} }{ \mathscr{F}_\mathfrak{f} }{ k^* \mathscr{E}_\mathfrak{g} }{ k^* \mathscr{F}_\mathfrak{g} }{ \varphi_\mathfrak{f} }{ \beta_k^\mathscr{E} }{ \beta_k^\mathscr{F} }{ k^*\varphi_\mathfrak{g} }
\end{align*}
commutes. It is not hard to see then that the category of principal $G$-bundles over $\mathfrak{X}$ is a groupoid.
Recall that the classifying stack $BG$ is the fibred category whose objects over a $\mathbb{C}$-scheme $U$ are principal $G$-bundles over $U$ and whose morphisms are pullback diagrams of $G$-bundles. The following is not hard to verify.
\begin{lem} \label{GbundleBG}
The datum of a principal $G$-bundle over $\mathfrak{X}$ in the above sense is equivalent to the datum of a morphism $\mathfrak{X} \to BG$. Two $G$-bundles over $\mathfrak{X}$ are isomorphic if and only if the corresponding morphisms $\mathfrak{X} \to BG$ are $2$-isomorphic.
\end{lem}
Given $\mathfrak{X}, G$ as above, we may now consider the fibred category whose objects over a $\mathbb{C}$-scheme $U$ are $G$-bundles $E \to \mathfrak{X} \times U$ and whose morphisms are pullback diagrams of $G$-bundles. We will denote this category by
\begin{align*}
\textnormal{Bun}_G \mathfrak{X}.
\end{align*}
Let $\mathfrak{X}, \mathfrak{Y}$ be separated algebraic stacks of finite presentation over $\mathbb{C}$
with finite diagonals. We may consider the fibred category $\textnormal{\underline{Hom}}_\mathbb{C}( \mathfrak{X},
\mathfrak{Y})$ whose fibre category over a $\mathbb{C}$-scheme $U$ is the groupoid of functors $\textnormal{Hom}_U(\mathfrak{X} \times U, \mathfrak{Y} \times U) = \textnormal{Hom}_\mathbb{C} (\mathfrak{X} \times U, \mathfrak{Y})$.
Now, applying Lemma \ref{GbundleBG} with $\mathfrak{X} \times U$ in place of $\mathfrak{X}$, and taking $\mathfrak{Y} := BG$, it is not hard to see that the following holds.
\begin{prop}
There is an equivalence
\begin{align*}
\textnormal{Bun}_G \mathfrak{X} \cong {\textnormal{\underline{Hom}}}_\mathbb{C}(\mathfrak{X}, BG).
\end{align*}
\end{prop}
\subsection{Principal Bundles on Quotient Stacks}
Let $Y$ be a $\mathbb{C}$-scheme and let $\Gamma$ be a complex algebraic group acting on
$Y$ on the left with action map $\lambda : \Gamma \times Y \to Y$. Let
$p_\Gamma$ and $p_Y$ be the projections of $\Gamma \times Y$ to $\Gamma$ and
$Y$ respectively. Later, we
will primarily be concerned with the case where $\Gamma$ is finite, but what we
record here holds in greater generality.
Let $\pi : E \to Y$ be a (right) $G$-bundle over $Y$. Suppose $\Lambda : \Gamma \times E \to E$ is a left action for which $\pi$ is $\Gamma$-equivariant and which commutes with the $G$-action $\rho : E \times G \to E$, i.e.,
\begin{align*}
\xymatrix{
\Gamma \times E \times G \ar[r]^-{ \1_\Gamma \times \rho } \ar[d]_{ \Lambda \times \1_G } & \Gamma \times E \ar[d]^{ \Lambda } \\
E \times G \ar[r]_-{\rho} & E }
\end{align*}
commutes. In this case, we call $\Lambda$ a \emph{compatible $\Gamma$-action}. A
\emph{$(\Gamma, G)$-bundle} is a $G$-bundle on $Y$ together with a compatible
$\Gamma$-action. If $E \to Y$ and $F \to Y$ are $(\Gamma, G)$-bundles, a
\emph{morphism of $(\Gamma, G)$-bundles} $E \to F$ is a morphism of $G$-bundles which commutes with the $\Gamma$-action. We will denote the stack of $(\Gamma, G)$-bundles by
\begin{align*}
\textnormal{Bun}_{\Gamma, G} Y.
\end{align*}
This is the fibred category whose fibre category over a $\mathbb{C}$-scheme $U$ is the groupoid of $(\Gamma, G)$-bundles over $Y \times U$, where $Y \times U$ has the $\Gamma$-action induced from $\lambda$.
\begin{rmk}
To give a compatible $\Gamma$-action on a $G$-bundle $E$ is equivalent to giving an isomorphism $\tau : p_Y^* E \xrightarrow{\sim} \lambda^* E$ of $G$-bundles over $\Gamma \times Y$ such that the following ``cocycle condition'' holds. Consider the diagrams
\begin{align*}
& \xymatrix{
\Gamma \times \Gamma \times Y \ar[r]^-{\1_\Gamma \times \lambda} \ar[dr]^L \ar[d]_{m \times \1_Y} & \Gamma \times Y \ar[d]^\lambda \\
\Gamma \times Y \ar[r]_\lambda & Y, } &
& \xymatrix{
\Gamma \times \Gamma \times Y \ar[r]^-{p_{\Gamma \times Y} } \ar[dr]^N \ar[d]_{\1_\Gamma \times \lambda} & \Gamma \times Y \ar[d]^\lambda \\
\Gamma \times Y \ar[r]_{p_Y} & Y, } &
& \xymatrix{
\Gamma \times \Gamma \times Y \ar[r]^-{p_{\Gamma \times Y} } \ar[dr]^\Pr \ar[d]_{m \times \1_Y} & \Gamma \times Y \ar[d]^{p_Y} \\
\Gamma \times Y \ar[r]_{p_Y} & Y. }
\end{align*}
Then, modulo canonical isomorphisms, we get two isomorphisms $$(m \times \1_Y)^*
\tau, (\1_\Gamma \times \lambda)^* \tau \circ p_{\Gamma \times Y}^* \tau\,:\,
\Pr^* E \,\xrightarrow{\sim} \,L^* E\, .$$ The condition we require is that
\begin{align} \label{actioncocycle}
(m \times \1_Y)^* \tau = (\1_\Gamma \times \lambda)^* \tau \circ p_{\Gamma \times Y}^* \tau.
\end{align}
In what follows, we will more often talk about $(\Gamma, G)$-bundles in terms of this description.
\end{rmk}
Recall that the quotient stack $[ \Gamma \backslash Y ]$ is the fibred category whose objects over a $\mathbb{C}$-scheme $U$ are diagrams
\begin{align*}
\xymatrix{ M \ar[r] \ar[d] & Y \\ U, & }
\end{align*}
where the vertical arrow is a (left) principal $\Gamma$-bundle over $U$ and the horizontal arrow is a $\Gamma$-equivariant map, and whose morphisms over a morphism $U \to V$ of $\mathbb{C}$-schemes are diagrams
\begin{align*}
\xymatrix{ M \ar[r] \ar[d] & N \ar[r] \ar[d] & Y \\ U \ar[r] & V, & }
\end{align*}
where the square is Cartesian and the composition $M \to N \to Y$ is the same
arrow as the one occurring in the object over $U$. There is a natural quotient morphism $\nu : Y \to [ \Gamma \backslash Y ]$ which takes a $\mathbb{C}$-morphism $f : U \to Y$ to
\begin{align*}
\xymatrix{ \Gamma \times U \ar[r] \ar[d] & Y \\ U, & }
\end{align*}
where the horizontal map is $\lambda \circ (\1_\Gamma \times f)$.
With this, the diagram
\begin{align*}
\commsq{ \Gamma \times Y }{ Y }{ Y }{ [ \Gamma \backslash Y ] }{ \lambda }{ p_Y }{ \nu }{ \nu }
\end{align*}
is Cartesian, yielding an isomorphism
\begin{align} \label{YGamma}
Y \times_{[ \Gamma \backslash Y ]} Y \cong \Gamma \times Y.
\end{align}
Hence we obtain isomorphisms
\begin{align} \label{triple}
Y \times_{[ \Gamma \backslash Y ]} Y \times_{[ \Gamma \backslash Y ]} Y \cong \Gamma \times Y \times_{[ \Gamma \backslash Y ]} Y \cong \Gamma \times \Gamma \times Y.
\end{align}
Let $\mathscr{E}$ be a $G$-bundle over $[ \Gamma \backslash Y ]$. As $\nu : Y \to [ \Gamma \backslash Y ]$ is a smooth atlas for $[ \Gamma \backslash Y ]$, to it is associated a $G$-bundle $\mathscr{E}_\nu$. The data of the bundle $\mathscr{E}$ and the diagram
\begin{align*}
\xymatrix{
Y \times_{[ \Gamma \backslash Y ]} Y \ar[r]^-{p_2} \ar[d]_{p_1} & Y \ar[d]^{\nu} \\
Y \ar[r]_{\nu} & [ \Gamma \backslash Y ] }
\end{align*}
yield isomorphisms $\beta_{p_i} : \mathscr{E}_{\nu \circ p_i} \xrightarrow{\sim} p_i^* \mathscr{E}_\nu$. Since $\mathscr{E}_{\nu \circ p_1} \cong \mathscr{E}_{\nu \circ p_2}$, we then obtain an isomorphism $\sigma : p_1^* \mathscr{E}_\nu \xrightarrow{\sim} p_2^* \mathscr{E}_\nu$ of $G$-bundles over $Y \times_{[ \Gamma \backslash Y ]} Y$. Now, if $p_{ij} : Y \times_{[ \Gamma \backslash Y ]} Y \times_{[ \Gamma \backslash Y ]} Y \to Y \times_{[ \Gamma \backslash Y ]} Y$ are the various projections, then the condition (\ref{compatibilitycondition}) implies that the cocycle condition
\begin{align} \label{quotientcocycle}
p_{13}^* \sigma = p_{23}^* \sigma \circ p_{12}^* \sigma
\end{align}
holds.
Under the isomorphism (\ref{YGamma}), $\sigma$ becomes an isomorphism $\tau : p_Y^* \mathscr{E}_\nu \xrightarrow{\sim} \lambda^* \mathscr{E}_\nu$ of $G$-bundles over $\Gamma \times Y$, and the condition (\ref{quotientcocycle}) above translates precisely into the condition (\ref{actioncocycle}); hence $\mathscr{E}_\nu$ is a $(\Gamma, G)$-bundle over $Y$.
Conversely, let $E$ be a $(\Gamma, G)$-bundle on $Y$. Let $\mathfrak{f} : U \to [ \Gamma \backslash Y ]$ be any smooth atlas and consider the diagram
\begin{align*}
\xymatrix{
U \times_{ [ \Gamma \backslash Y ]} Y \ar[r]^-{ q_\mathfrak{f} } \ar[d]_{ \nu_\mathfrak{f} } & Y \ar[d]^{\nu} \\
U \ar[r]_{ \mathfrak{f} } & [ \Gamma \backslash Y ] }
\end{align*}
Then $\nu_\mathfrak{f} : U \times_{ [ \Gamma \backslash Y ] } Y \to U$ is a $\Gamma$-torsor and $q_\mathfrak{f}^* E$ is a $(\Gamma, G)$-bundle over $U \times_{[ \Gamma \backslash Y ]} Y$. Thus $q_\mathfrak{f}^*E$ descends to a $G$-bundle $\mathscr{E}_\mathfrak{f}$ over $U$. For a diagram (\ref{overXX}), the morphisms $\beta_k$ arise by considering the diagram
\begin{align*}
\xymatrix{
U \times_{[ \Gamma \backslash Y ]} Y \ar[r]^{\widetilde{k}} \ar[d]_{\nu_\mathfrak{f}} & V
\times_{[ \Gamma \backslash Y ]} Y \ar[r]^-{q_\mathfrak{g}} \ar[d]_{\nu_\mathfrak{g}} & Y \ar[d]^\nu \\
U \ar[r]_k & V \ar[r]_\mathfrak{g} & [ \Gamma \backslash Y ], }
\end{align*}
and the uniqueness of the descended objects. Therefore a $(\Gamma, G)$-bundle on $Y$ yields a $G$-bundle on $[ \Gamma \backslash Y ]$.
\begin{prop} \label{stackGamma}
There is an equivalence
\begin{align*}
\textnormal{Bun}_{\Gamma, G} Y \xrightarrow{\sim} \textnormal{Bun}_G [ \Gamma \backslash Y ].
\end{align*}
\end{prop}
\begin{proof}
To see this, for each $\mathbb{C}$-scheme $U$, we need to repeat the above argument with $Y$ replaced by $Y \times U$ with the induced action. One will need to use the fact that
\begin{align*}
[\Gamma \backslash Y \times U] \cong [\Gamma \backslash Y ] \times U,
\end{align*}
which is not hard to prove.
\end{proof}
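As a sanity check on Proposition \ref{stackGamma} in a standard special case, take $Y = \textnormal{Spec} \, \mathbb{C}$ with the trivial $\Gamma$-action, so that $[ \Gamma \backslash Y ] = B\Gamma$. A $(\Gamma, G)$-bundle over the point is a $G$-torsor $E$ (trivializable over $\mathbb{C}$, so $E \cong G$) together with a commuting left $\Gamma$-action; a choice of point $e \in E$ identifies this action with a homomorphism $\Gamma \to G$, well-defined up to conjugation. The proposition then recovers the familiar description of $G$-bundles on $B\Gamma$ as homomorphisms $\Gamma \to G$ up to conjugacy.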
\subsection{Associated Bundles} \label{associatedbundles}
For a principal $G$-bundle $E$ over a $\mathbb{C}$-scheme $X$, and a left $G$-action $\lambda : G \times F \to F$ on a $\mathbb{C}$-scheme $F$, we will denote the associated fibre bundle over $X$ with fibre $F$ by
\begin{align*}
E \times^\lambda F.
\end{align*}
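Explicitly, as usual, $E \times^\lambda F$ may be realized as the quotient of $E \times F$ by the right $G$-action
\begin{align*}
(e, x) \cdot g := (e \cdot g, \lambda(g^{-1}, x)),
\end{align*}
with the projection to $X$ induced by the projection $E \times F \to E$ followed by the bundle map $E \to X$.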
\begin{lem} \label{associatedbundleiso}
Let $f : X \to Y$ be a morphism of $\mathbb{C}$-schemes, let $E$ be a principal
$G$-bundle over $Y$, and let $F$ be a $\mathbb{C}$-scheme endowed with a (left)
$G$-action $\lambda : G \times F \to F$. Then there is a canonical isomorphism
\begin{align*}
\nu_f^{E, \lambda} : f^*E \times^\lambda F \xrightarrow{\sim} f^*( E \times^\lambda F)
\end{align*}
of schemes over $X$.
\end{lem}
\begin{proof}
By definition, the diagram
\begin{align*}
\commsq{ f^* E }{ E }{ X }{ Y }{}{}{}{}
\end{align*}
is Cartesian. Hence, so are
\begin{align*}
& \vcenter{ \commsq{ f^* E \times F }{ E \times F }{ X }{ Y }{}{}{}{} } & \text{ and } & & \vcenter{ \commsq{ f^* E \times^\lambda F }{ E \times^\lambda F }{ X }{ Y, }{}{}{}{} }
\end{align*}
the latter obtained by taking the quotient of the top row of the diagram on the left by $G$. Also, by definition,
\begin{align*}
\commsq{ f^* (E \times^\lambda F) }{ E \times^\lambda F }{ X }{ Y, }{}{}{}{}
\end{align*}
is Cartesian, so there is a canonical isomorphism between the corresponding
upper-left corners.
\end{proof}
We now define, for a principal $G$-bundle $\mathscr{E}$ over $\mathfrak{X}$ and a smooth morphism $\mathfrak{f} : U \to \mathfrak{X}$ with $U$ a scheme,
\begin{align*}
(\mathscr{E} \times^\lambda F)_\mathfrak{f} := \mathscr{E}_\mathfrak{f} \times^\lambda F.
\end{align*}
Given a diagram (\ref{overXX}), the data for $\mathscr{E}$ comes with an isomorphism $\beta_k : \mathscr{E}_\mathfrak{f} \to k^*\mathscr{E}_\mathfrak{g}$, and hence there is an induced isomorphism
\begin{align*}
\mathscr{E}_\mathfrak{f} \times^\lambda F \xrightarrow{\overline{\beta}_k} k^*\mathscr{E}_\mathfrak{g} \times^\lambda
F.
\end{align*}
Composing this with the isomorphism $\nu_k^{\mathscr{E}_\mathfrak{g}, \lambda}$ provided by Lemma \ref{associatedbundleiso}, we obtain isomorphisms
\begin{align*}
\nu_k^{\mathscr{E}_\mathfrak{g}, \lambda} \circ \overline{\beta}_k : (\mathscr{E} \times^\lambda F)_\mathfrak{f} \xrightarrow{\sim}
k^*(\mathscr{E} \times^\lambda F)_\mathfrak{g}.
\end{align*}
These will satisfy the relations (\ref{compatibilitycondition}) because the $\beta_k$ do and because of the canonical nature of the isomorphisms obtained in Lemma \ref{associatedbundleiso}.
As usual, we are mainly interested in this construction in the cases where $F =
V$ is a representation of $G$, via $\rho : G \to {\rm GL}(V)$, say, in which case
the
associated bundle $\mathscr{E} \times^\rho V$ is a vector bundle, and where $F = H$ is another algebraic group on which $G$ acts via a homomorphism $\varphi : G \to H$ (and left multiplication), yielding a principal $H$-bundle $\mathscr{E} \times^\varphi H$.
\begin{cor}
If $V$ is a finite-dimensional vector space and $\rho : G \to {\rm GL}(V)$ a
representation, then $\mathscr{E} \times^\rho V$ is a vector bundle in the sense of Section \ref{coherentsheaves}. If $H$ is a complex algebraic group and $\varphi : G \to H$ a homomorphism, then $\mathscr{E} \times^\varphi H$ is a principal $H$-bundle in the sense of Section \ref{principalbundles}.
\end{cor}
In particular, the adjoint bundle
\begin{align*}
\textnormal{ad} \, \mathscr{E} := \mathscr{E} \times^\textnormal{ad} \textnormal{Lie}(G)
\end{align*}
arising from the adjoint representation $\textnormal{ad} : G \to {\rm GL}(\textnormal{Lie}(G))$ of $G$ on
its Lie algebra $\textnormal{Lie}(G)$ is well-defined.
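For illustration, consider the familiar case $G = {\rm GL}_n(\mathbb{C})$: if $X$ is a scheme and $\mathscr{E}$ is the frame bundle of a rank $n$ vector bundle $V$ on $X$, then the adjoint representation is conjugation on $\mathfrak{gl}_n$, and one has the standard identification
\begin{align*}
\textnormal{ad} \, \mathscr{E} \cong \textnormal{End}(V) \cong V \otimes V^\vee.
\end{align*}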
\begin{rmk} \label{associatedbundleBG}
A homomorphism of algebraic groups $\varphi : G \to H$ induces a $1$-morphism of algebraic stacks $B\varphi : BG \to BH$, taking a principal $G$-bundle over $U$ (an object of $BG$) to the associated $H$-bundle. One then sees that if the principal $G$-bundle $\mathscr{E}$ on $\mathfrak{X}$ corresponds to the morphism $E : \mathfrak{X} \to BG$ (via the equivalence of Lemma \ref{GbundleBG}), then the $H$-bundle $\mathscr{E} \times^\varphi H$ corresponds to $B \varphi \circ E$.
\end{rmk}
\subsection{Reduction of Structure Group}
Fix a principal $G$-bundle $\mathscr{E}$ over $\mathfrak{X}$ and let $H \subseteq G$ be a closed subgroup of $G$. Then for each $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$ and each diagram (\ref{overXX}), the isomorphism $\beta_k$ induces another one
\begin{align*}
\mathscr{E}_\mathfrak{f} /H \xrightarrow{\sim} k^* \mathscr{E}_\mathfrak{g}/H = k^*( \mathscr{E}_\mathfrak{g}/H),
\end{align*}
which we will denote by $\beta_{k,H}$. Then a \emph{reduction $\tau$ of the structure group to $H$} consists of the data of a section $\tau_\mathfrak{f} : U \to \mathscr{E}_\mathfrak{f}/H$ for each $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$ such that for each diagram (\ref{overXX}), one has
\begin{align} \label{reductioncompatibility}
\beta_{k,H} \circ \tau_\mathfrak{f} = k^* \tau_\mathfrak{g}.
\end{align}
\begin{lem} \label{reductionequivalence}
Let $\mathscr{E}$ be a principal $G$-bundle over $\mathfrak{X}$. The following are equivalent pieces of information:
\begin{enumerate}
\item[(a)] a reduction of structure group $\tau$ to $H$;
\item[(b)] a principal $H$-bundle $\mathscr{F}$ and an isomorphism
\begin{align*}
\phi : \mathscr{E} \cong \mathscr{F} \times^\iota G,
\end{align*}
where $\iota : H \to G$ is the inclusion map;
\item[(c)] a factorization
\begin{align*}
\commtri{ \mathfrak{X} }{ BH }{ BG, }{ F }{ E }{ B\iota}
\end{align*}
where $E$ is the morphism of stacks corresponding to $\mathscr{E}$.
\end{enumerate}
\end{lem}
\begin{proof}
Suppose we are given a reduction $\tau$ of $\mathscr{E}$ to $H$. For $\mathfrak{f} \in \textnormal{Ob} \,
\mathfrak{X}(U)$, we consider the fibre product $\mathscr{F}_\mathfrak{f} := \mathscr{E}_\mathfrak{f} \times_{\mathscr{E}_\mathfrak{f}/H} U$ which
fits into a Cartesian diagram
\begin{align} \label{reduction}
\vcenter{
\xymatrix{
\mathscr{F}_\mathfrak{f} \ar[r] \ar[d] & \mathscr{E}_\mathfrak{f} \ar[d] \\
U \ar[r]_-{\tau_\mathfrak{f}} & \mathscr{E}_\mathfrak{f}/H. }
}
\end{align}
Then $\mathscr{F}_\mathfrak{f}$ is an $H$-bundle over $U$ with the property that $\mathscr{F}_\mathfrak{f} \times^\iota G \cong \mathscr{E}_\mathfrak{f}$. To see that we get isomorphisms $\beta_k^\mathscr{F} : \mathscr{F}_\mathfrak{f} \to k^* \mathscr{F}_\mathfrak{g}$, we take the diagram (\ref{reduction}) for $\mathfrak{g} \in \textnormal{Ob} \, \mathfrak{X}(V)$ and pull it back via $k$. Then using the $\beta_k^\mathscr{E}$ and (\ref{reductioncompatibility}), we observe that $\mathscr{F}_\mathfrak{f}$ and $k^* \mathscr{F}_\mathfrak{g}$ give isomorphic fibre products, so we may construct the $\beta_k^\mathscr{F}$. The compatibility condition (\ref{compatibilitycondition}) will come from that of the $\beta_k^\mathscr{E}$.
Conversely, suppose there exists an $H$-bundle $\mathscr{F}$ as in (b). Now, noting that $\mathscr{F}_\mathfrak{f} \to \mathscr{E}_\mathfrak{f}$ is an $H$-equivariant morphism and $\mathscr{F}_\mathfrak{f}/H \cong U$, we get sections $\tau_\mathfrak{f} : U \to \mathscr{E}_\mathfrak{f}/H$. To see that we have (\ref{reductioncompatibility}), we use the diagram
\begin{align*}
\commsq{ \mathscr{F}_\mathfrak{f} }{ \mathscr{E}_\mathfrak{f} }{ k^* \mathscr{F}_\mathfrak{g} }{ k^* \mathscr{E}_\mathfrak{g}. }{}{ \beta_k^\mathscr{F} }{ \beta_k^\mathscr{E} }{}
\end{align*}
This shows that (a) and (b) are equivalent.
The equivalence of (b) and (c) is clear from Remark \ref{associatedbundleBG}.
\end{proof}
\subsection{Connections}
Let $\mathscr{E}$ be a principal bundle over a Deligne--Mumford stack $\mathfrak{X}$. Then a
\emph{connection} $\nabla$ on $\mathscr{E}$ consists of the data of a connection
$\nabla_\mathfrak{f}$ on each $\mathscr{E}_\mathfrak{f}$, where $\mathfrak{f} : U \to \mathfrak{X}$ is an \'etale atlas,
compatible with pullback along diagrams (\ref{overXX}).
To be precise, suppose we realize the connections in
terms of $\textnormal{Lie}(G)$-valued
$1$-forms so that $\omega_\mathfrak{f} \in H^0(U, \Omega_{U/\mathbb{C}}^1 \otimes \textnormal{Lie}(G))$ is the connection $1$-form corresponding to $\nabla_\mathfrak{f}$. From the Cartesian diagram
\begin{align*}
\commsq{ k^* \mathscr{E}_\mathfrak{g} }{ \mathscr{E}_\mathfrak{g} }{ U }{ V }{ \overline{k} }{}{}{ k },
\end{align*}
we obtain a connection $\overline{k}^* \omega_\mathfrak{g}$ on $k^* \mathscr{E}_\mathfrak{g}$. Then the
condition that we want is
\begin{align} \label{connectioncompatibilitycondition}
\omega_\mathfrak{f} = \beta_k^* \overline{k}^* \omega_\mathfrak{g}.
\end{align}
One obtains the following simply because induced connections behave well with respect to pullbacks.
\begin{lem} \label{inducedconnection}
Let $\mathscr{E}$ be a principal $G$-bundle admitting a connection $\nabla$.
\begin{enumerate}
\item[(a)] If $\rho : G \to {\rm GL}(V)$ is a representation, then there is an
induced
connection $\nabla^\rho$ on the associated vector bundle $\mathscr{E} \times^\rho V$.
\item[(b)] If $\varphi : G \to H$ is a homomorphism of algebraic groups, then there is an induced connection $\nabla^\varphi$ on the associated principal bundle $\mathscr{E} \times^\varphi H$.
\end{enumerate}
\end{lem}
The following statement is justified in the course of the proof of
\cite[Proposition 2.3]{AzadBiswas2002}.
\begin{lem} \label{Levisplitting}
Let $G$ be a reductive algebraic group. Let $L \subseteq G$ be the Levi factor
of a parabolic subgroup of $G$. Then there is an $L$-equivariant splitting
$\psi : \textnormal{Lie}(G) \to \textnormal{Lie}(L)$.
\end{lem}
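The prototypical example: for $G = {\rm GL}_n$ with $L$ the block-diagonal Levi factor of a block upper-triangular parabolic, one has the decomposition
\begin{align*}
\textnormal{Lie}(G) = \mathfrak{gl}_n = \textnormal{Lie}(L) \oplus \mathfrak{n}^+ \oplus \mathfrak{n}^-,
\end{align*}
where $\mathfrak{n}^\pm$ denote the strictly block upper- and lower-triangular matrices. Conjugation by an element of $L$ preserves each summand, so the projection onto the block-diagonal part provides the $L$-equivariant splitting $\psi$.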
\begin{lem} \label{splittingconnection}
Let $H \subseteq G$ be any closed subgroup. Assume that there exists an $H$-equivariant (for the adjoint action) splitting $\psi : \textnormal{Lie}(G) \to \textnormal{Lie}(H)$ (of the inclusion map $\textnormal{Lie}(H) \hookrightarrow \textnormal{Lie}(G)$). If the $G$-bundle $\mathscr{E}$ admits a connection and a reduction to $H$, then the resulting $H$-bundle (as given in Lemma \ref{reductionequivalence}) admits a connection.
\end{lem}
\begin{proof}
This is the analogue of Lemma 2.2 of \cite{AzadBiswas2002}. Let $\mathscr{F}$ be the $H$-bundle arising from the reduction in structure group and for $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}$ let $j_\mathfrak{f} : \mathscr{F}_\mathfrak{f} \to \mathscr{E}_\mathfrak{f}$ be the inclusion morphism. Given a connection form $\omega_\mathfrak{f}$ on $\mathscr{E}_\mathfrak{f}$, the corresponding connection form on $\mathscr{F}_\mathfrak{f}$ is given in the quoted result by
\begin{align*}
\nu_\mathfrak{f} := \psi \circ j_\mathfrak{f}^* \omega_\mathfrak{f}.
\end{align*}
The compatibility condition (\ref{connectioncompatibilitycondition}) can be obtained by tracing through the diagram
\begin{align*}
\xymatrix{
\mathscr{F}_\mathfrak{f} \ar[r]^{\beta_k^\mathscr{F}} \ar[d]_{j_\mathfrak{f}} & k^* \mathscr{F}_\mathfrak{g} \ar[r]^-{\overline{k}_\mathscr{F}}
\ar[d]^{k^* j_\mathfrak{g}} & \mathscr{F}_\mathfrak{g} \ar[d]^{j_\mathfrak{g}} \\
\mathscr{E}_\mathfrak{f} \ar[r]^{\beta_k^\mathscr{E}} \ar[dr] & k^* \mathscr{E}_\mathfrak{g} \ar[d] \ar[r]_-{\overline{k}_\mathscr{E}} &
\mathscr{E}_\mathfrak{g} \ar[d] \\
& U \ar[r] & V. }
\end{align*}
\end{proof}
\subsection{The Atiyah Sequence}
Let $Y$ be a smooth $\mathbb{C}$-scheme (locally) of finite type and let $\pi : E \to Y$ be a principal $G$-bundle. One has an exact $G$-equivariant sequence of vector bundles
\begin{align*}
0 \to E \times \mathfrak{g} \to TE \to \pi^* TY \to 0
\end{align*}
and the Atiyah sequence can be obtained by quotienting by the $G$-action:
\begin{align} \label{Atiyahsequence}
0 \to \textnormal{ad} \, E \to \text{At} \, E \to TY \to 0.
\end{align}
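In the basic case of the trivial bundle $E = Y \times G$, one has $TE \cong TY \times TG$, and quotienting by $G$ identifies
\begin{align*}
\text{At} \, (Y \times G) \cong TY \oplus (\mathscr{O}_Y \otimes \mathfrak{g}),
\end{align*}
so the sequence (\ref{Atiyahsequence}) splits in this case; the resulting splitting $TY \to \text{At} \, E$ is precisely the trivial connection.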
\begin{lem} \label{Atiyah}
Let $f : X \to Y$ be an \'etale morphism of smooth $\mathbb{C}$-schemes (locally) of finite type and let $\pi : E \to Y$ be a principal $G$-bundle. Then there is a canonical isomorphism
\begin{align*}
\gamma_f^E : \text{At} \, f^* E \xrightarrow{\sim} f^* \text{At} \, E
\end{align*}
fitting into a commutative diagram
\begin{align} \label{Atiyahsequencepullback}
\vcenter{
\xymatrix{
0 \ar[r] & \textnormal{ad} \, f^*E \ar[r] \ar[d]^{\nu_f^\textnormal{ad}} & \text{At} \, f^* E \ar[r] \ar[d]^{\gamma_f^E} & TX \ar[r] \ar[d] & 0 \\
0 \ar[r] & f^* \textnormal{ad} \, E \ar[r] & f^* \text{At} \, E \ar[r] & f^* TY \ar[r] & 0, } }
\end{align}
where $\nu_f^\textnormal{ad}$ is the isomorphism given in Lemma \ref{associatedbundleiso},
noting that $\textnormal{ad} \, E$ is the bundle associated to the adjoint representation
$\textnormal{ad} : G \to {\rm GL}(\mathfrak{g})$.
\end{lem}
Observe that the bottom row, being the pullback of an exact sequence by a flat map, is exact.
\begin{proof}
We begin with the Cartesian diagram
\begin{align*}
\commsq{ f^* E }{ E }{ X }{ Y }{ \overline{f} }{ \overline{\pi} }{ \pi }{ f }
\end{align*}
and observe that since $\overline{f}$ is obtained from $f$ by base change it is
\'etale. Now, we have a canonical exact sequence
\begin{align*}
0 \to \overline{f}^* \Omega_{E/\mathbb{C}}^1 \to \Omega_{f^*E/\mathbb{C}}^1 \to \Omega_{f^*E/E}^1
\to 0
\end{align*}
whose last term vanishes since $\overline{f}$ is \'etale. Thus, we have a
canonical
isomorphism $T(f^*E) \xrightarrow{\sim} f^* TE$, and hence a Cartesian square
\begin{align*}
\commsq{ T f^*E }{ TE }{ X }{ Y.}{}{}{}{}
\end{align*}
Quotienting by the action of $G$ on the top row yields another Cartesian square
\begin{align*}
\commsq{ \text{At} \, f^*E }{ \text{At} \, E }{ X }{ Y,}{}{}{}{}
\end{align*}
whence the canonical isomorphism $\gamma_f^E : \text{At} \, f^*E \xrightarrow{\sim} f^* \text{At} \, E$.
To see that the first square in (\ref{Atiyahsequencepullback}) commutes, we first note that the proof of Lemma \ref{associatedbundleiso} actually shows that $\textnormal{ad} \, f^*E$ and $f^* \textnormal{ad} \, E$ are both canonically isomorphic to $\textnormal{ad} \, E \times_Y X$, and the argument above similarly shows that $\text{At} \, f^*E$ and $f^* \text{At} \, E$ are both canonically isomorphic to $\text{At} \, E \times_Y X$. Now, modulo these canonical isomorphisms, both horizontal maps are essentially the base change $\textnormal{ad} \, E \times_Y X \to \text{At} \, E \times_Y X$ of the inclusion $\textnormal{ad} \, E \to \text{At} \, E$, and so the first square must commute. The second square commutes since it is a map of cokernels and because of the canonical nature of the morphism $TX \to f^* TY$.
\end{proof}
We will now assume that $\mathfrak{X}$ is a Deligne--Mumford stack and let $\mathscr{E}$ be a principal $G$-bundle over $\mathfrak{X}$. Given an \'etale map $\mathfrak{f} : U \to \mathfrak{X}$, we may define
\begin{align*}
(\text{At} \, \mathscr{E})_\mathfrak{f} := \text{At} \, \mathscr{E}_\mathfrak{f}.
\end{align*}
For a diagram (\ref{overXX}) in which $\mathfrak{f}, \mathfrak{g}$ and hence $k$ are \'etale, we can compose the isomorphism $\text{At} \, \mathscr{E}_\mathfrak{f} \to \text{At} \, k^* \mathscr{E}_\mathfrak{g}$ with the isomorphism $\text{At} \, k^* \mathscr{E}_\mathfrak{g} \to k^* \text{At} \, \mathscr{E}_\mathfrak{g}$ obtained in Lemma \ref{Atiyah} to get isomorphisms
\begin{align*}
(\text{At} \, \mathscr{E})_\mathfrak{f} \xrightarrow{\sim} k^* (\text{At} \, \mathscr{E})_\mathfrak{g}.
\end{align*}
Arguing as in Section \ref{associatedbundles}, these will satisfy the condition in (\ref{compatibilityconditionCS}) and hence yield a well-defined vector bundle on $\mathfrak{X}$. In fact, since the diagram
\begin{align*}
\xymatrix{
0 \ar[r] & \textnormal{ad} \, \mathscr{E}_\mathfrak{f} \ar[r] \ar[d] & \text{At} \, \mathscr{E}_\mathfrak{f} \ar[r] \ar[d] & TU \ar[d] \ar[r] & 0 \\
0 \ar[r] & k^* \textnormal{ad} \, \mathscr{E}_\mathfrak{g} \ar[r] & k^* \text{At} \, \mathscr{E}_\mathfrak{g} \ar[r] & k^*TV \ar[r] & 0 }
\end{align*}
commutes, the sequence
\begin{align*}
0 \to \textnormal{ad} \, \mathscr{E} \to \text{At} \, \mathscr{E} \to T \mathfrak{X} \to 0
\end{align*}
is well-defined as an exact sequence of vector bundles on $\mathfrak{X}$. We will call it the \emph{Atiyah sequence associated to $\mathscr{E}$}.
\begin{lem} \label{Atiyahcondition}
A principal bundle $\mathscr{E}$ over a Deligne--Mumford stack $\mathfrak{X}$ admits a connection if and only if its Atiyah sequence splits.
\end{lem}
The only point to verify is the compatibility with the isomorphisms $\beta_k$, which is built into the definition of a connection.
\section{Root Stacks} \label{RootStacks}
\subsection{Definition} \label{rootstackdefn}
Let $X$ be a $\mathbb{C}$-scheme, $L$ an invertible sheaf over $X$, $s \in H^0(X, L)$ and $r \in \mathbb{N}$. We will define $\mathfrak{X} = \mathfrak{X}_{(L,r,s)}$ to be the category whose objects are quadruples
\begin{align} \label{Xob}
(f : U \to X, N, \phi, t),
\end{align}
where $U$ is a $\mathbb{C}$-scheme, $f$ is a morphism of $\mathbb{C}$-schemes, $N$ is an invertible sheaf on $U$, $t \in H^0(U, N)$ and $\phi : N^{\otimes r} \xrightarrow{\sim} f^*L$ is an isomorphism of invertible sheaves with $\phi (t^{\otimes r}) = f^* s$. A morphism
\begin{align*}
(f : U \to X, N, \phi, t) \to (g : V \to X, M, \psi, u)
\end{align*}
consists of a pair $(k, \sigma)$, where $k : U \to V$ is a $\mathbb{C}$-morphism making
\begin{align*}
\commtri{ U }{ V }{ X }{ k }{ f }{ g }
\end{align*}
commute and $\sigma : N \xrightarrow{\sim} k^* M$ is an isomorphism such that $\sigma(t) = k^*(u)$. If
\begin{align*}
(g : V \to X, M, \psi, u) \xrightarrow{ (l, \tau) } ( h : W \to X, J, \rho, v)
\end{align*}
is another morphism, then the composition is defined as
\begin{align} \label{rootstackcomposition}
(l, \tau) \circ (k, \sigma) := ( l \circ k, k^* \tau \circ \sigma),
\end{align}
using the canonical isomorphism $(l \circ k)^* J \xrightarrow{\sim} k^* l^* J$.
We will often use the symbols $\mathfrak{f}, \mathfrak{g}$ to denote objects of $\mathfrak{X}$. If we refer to $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$, then it will be understood that this refers to the quadruple $\mathfrak{f} = (f : U \to X, N_\mathfrak{f}, \phi_\mathfrak{f}, t_\mathfrak{f})$.
The category $\mathfrak{X}$ comes with a functor $\mathfrak{X} \to \mathfrak{Sch}/\mathbb{C}$ which simply takes $\mathfrak{f}$ to the $\mathbb{C}$-scheme $U$ and $(k, \sigma)$ to $k$.
\begin{prop}[{\cite[Theorem 2.3.3]{Cadman}}]
The category $\mathfrak{X}$, together with the structure morphism $\mathfrak{X} \to \mathfrak{Sch}/\mathbb{C}$, is a Deligne--Mumford stack.
\end{prop}
There is also a functor $\pi : \mathfrak{X} \to \mathfrak{Sch}/X$, whose action on objects and morphisms is given by
\begin{align*}
\mathfrak{f} & \mapsto f : U \to X, & (k, \sigma) & \mapsto k;
\end{align*}
this yields a $1$-morphism over $\mathfrak{Sch}/\mathbb{C}$, which we will often simply write as $\pi : \mathfrak{X} \to X$.
\begin{ex}[{\cite[Example 2.4.1]{Cadman}}]\label{affinerootstack}
Suppose $X = \textnormal{Spec} \, A$ is an affine scheme, $L = \mathscr{O}_X$ is the trivial bundle and $s \in H^0(X, \mathscr{O}_X) = A$ is a function. Consider $U = \textnormal{Spec} \, B$ where $B = A[t]/(t^r - s)$. Then $U$ admits an action of the group (scheme) of $r$th roots of unity $\boldsymbol{\mu}_r$, where the induced action of $\zeta \in \boldsymbol{\mu}_r$ is given by
\begin{align*}
\zeta \cdot a & = a, \quad a \in A, & \zeta \cdot t & = \zeta^{-1} t.
\end{align*}
In this case, the root stack $\mathfrak{X}_{(\mathscr{O}_X, r, s)}$ coincides with the quotient stack $[U/ \boldsymbol{\mu}_r]$. Thus, as a quotient by a finite group (scheme), the map $U \to \mathfrak{X}$ is an \'etale cover.
\end{ex}
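In the simplest instance of this example, take $X = \mathbb{A}^1 = \textnormal{Spec} \, \mathbb{C}[s]$, $L = \mathscr{O}_X$, $r = 2$ and the coordinate function $s$ as section. Then $U = \textnormal{Spec} \, \mathbb{C}[s,t]/(t^2 - s) \cong \textnormal{Spec} \, \mathbb{C}[t] = \mathbb{A}^1$, with $\boldsymbol{\mu}_2 = \{ \pm 1 \}$ acting by $t \mapsto -t$, and the root stack is the quotient stack $[ \mathbb{A}^1 / \boldsymbol{\mu}_2 ]$: away from the origin the covering $t \mapsto t^2$ is \'etale and the root stack agrees with $X$ there, while the point over the origin acquires a $\boldsymbol{\mu}_2$ stabilizer.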
The root stack $\mathfrak{X}$ comes with a tautological line bundle $\mathscr{N}$ which can be described as follows. For an \'etale morphism $\mathfrak{f} : U \to \mathfrak{X}$ with $U$ a $\mathbb{C}$-scheme, we define
\begin{align*}
\mathscr{N}_\mathfrak{f} := N_\mathfrak{f},
\end{align*}
where $N_\mathfrak{f}$ is the line bundle occurring in the quadruple $\mathfrak{f} = (f, N_\mathfrak{f}, \phi_\mathfrak{f}, t_\mathfrak{f})$. The isomorphisms (\ref{sheafiso}) come from those occurring in the definition of a morphism in the category $\mathfrak{X}$. One also has a global section $\t \in H^0(\mathfrak{X}, \mathscr{N})$ by taking
\begin{align*}
\t_\mathfrak{f} := t_\mathfrak{f}.
\end{align*}
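With the conventions of \cite{Cadman}, the isomorphisms $\sigma$ appearing in morphisms of $\mathfrak{X}$ are compatible with the isomorphisms $\phi$, so the $\phi_\mathfrak{f}$ assemble into an isomorphism
\begin{align*}
\mathscr{N}^{\otimes r} \xrightarrow{\sim} \pi^* L
\end{align*}
of line bundles on $\mathfrak{X}$ carrying $\t^{\otimes r}$ to $\pi^* s$; in this sense the pair $(\mathscr{N}, \t)$ is an $r$th root of $(\pi^* L, \pi^* s)$.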
\subsection{Finite Galois Coverings} \label{finiteGalois}
Let $p : Y \to X$ be a finite Galois covering of smooth quasi-projective
varieties with Galois group $\Gamma$ and let $\widetilde{D} \subseteq Y$ be the
locus of points which have non-trivial isotropy. This is a divisor whose
irreducible components are smooth \cite[Lemma 2.8]{BiswasOrbifold}. We will
assume that all such isotropy subgroups are cyclic of order $r$. Let $D :=
p(\widetilde{D})$ so that
\begin{align*}
p^* D = r \widetilde{D}\, .
\end{align*}
Then $D$ is a divisor on $X$; let $s \in H^0(X, \mathscr{O}_X(D))$ be a section defining
$D$. Then taking $u \in H^0(Y, \mathscr{O}_Y(\widetilde{D}))$ defining $\widetilde{D}$,
adjusting by a non-vanishing function if necessary, we may assume that under the
isomorphism $\phi_Y : \mathscr{O}_Y(\widetilde{D})^{\otimes r} \xrightarrow{\sim} p^* \mathscr{O}_X(D)$, the
equation
\begin{align*}
\phi_Y( u^{\otimes r}) = p^*s
\end{align*}
holds. This defines a morphism $\mathfrak{p} : Y \to \mathfrak{X} = \mathfrak{X}_{\mathscr{O}_X(D), s, r}$.
Let $\mathfrak{f} : U \to \mathfrak{X}$ be an arbitrary morphism from a $\mathbb{C}$-scheme $U$, say $\mathfrak{f}$ is as in (\ref{Xob}), and consider the fibre product
\begin{align*}
U \times_\mathfrak{X} Y.
\end{align*}
\begin{lem} \label{Gammatorsor}
The projection morphism makes $U \times_\mathfrak{X} Y \to U$ into a $\Gamma$-torsor over $U$.
\end{lem}
\begin{proof}
Since $\mathfrak{X}$ has a representable diagonal, $U \times_\mathfrak{X} Y$ is a scheme (and hence its fibre categories are sets). The $\Gamma$-action on $U \times_\mathfrak{X} Y$ is induced by that on $Y$. We need to see that this action is free. It is enough to check this on the $W$-points $(U \times_\mathfrak{X} Y)(W)$ for an arbitrary $\mathbb{C}$-scheme $W$. Since $\Gamma$ is a finite group, there is no harm in assuming that $W$ is connected, so that we may identify $\Gamma(W)$ with $\Gamma$ as a group (in the set-theoretic sense).
The fibre category $(U \times_\mathfrak{X} Y)(W)$ is a set whose elements are triples $(a,b, \sigma)$, where $a : W \to U, b : W \to Y$ are morphisms such that
\begin{align*}
\commsq{W}{Y}{U}{\mathfrak{X}}{b}{a}{\hat{\pi}}{\mathfrak{f}}
\end{align*}
commutes and $\sigma : a^* N_\mathfrak{f} \xrightarrow{\sim} b^* \mathscr{O}_Y(\widetilde{D})$ is an
isomorphism with
\begin{align} \label{absigma}
\sigma (a^* t_\mathfrak{f}) = b^* u.
\end{align}
To explain the $\Gamma$-action on $(U \times_\mathfrak{X} Y)(W)$, we first recall that
$\Gamma$ acts on the line bundle $\mathscr{O}_Y(\widetilde{D})$ in a way that is
compatible
with the action on $Y$. This action is via isomorphisms $\mathscr{O}_Y(\widetilde{D})
\xrightarrow{\sim} \gamma^* \mathscr{O}_Y(\widetilde{D})$ for each $\gamma \in \Gamma$. Restricting
this action to $\widetilde{D}$, we get one on
$\mathscr{O}_Y(\widetilde{D})|_{\widetilde{D}}$,
which
is the normal bundle to $\widetilde{D}$ in $Y$ (at least away from the
intersections
of the irreducible components of $\widetilde{D}$), and the action of $\Gamma$ is
faithful (see the proof of Lemma 2.8 in \cite{BiswasOrbifold}).
Now, $\gamma \in \Gamma$ acts on a triple $(a, b, \sigma)$ by taking $b$ to
$\gamma \circ b$, and acting on $\sigma$ in a way to compensate for the fact that
in (\ref{absigma}), we will now have $(\gamma \circ b)^* u$ instead of $b^*u$;
this action comes from that on $\mathscr{O}_Y(\widetilde{D})$ just described.
Fixing $(a, b, \sigma)$, if $\gamma$ lies in its isotropy subgroup, then it
follows that $\gamma \circ b = b$, which implies that $b(W) \subseteq
\widetilde{D}$. But then $b^* \mathscr{O}_Y(\widetilde{D})$ is the pullback of the
normal
bundle to $\widetilde{D}$ on which $\Gamma$ acts faithfully, so if $\gamma$ also
fixes $\sigma$, then it must be the identity element.
\end{proof}
\begin{prop} \label{globalquotientstack}
Suppose $p : Y \to X$ is as above. Then there is an equivalence of stacks
\begin{align*}
\mathfrak{X} \xrightarrow{\sim} [\Gamma \backslash Y ]\, .
\end{align*}
\end{prop}
\begin{proof}
We define a functor $[\Gamma \backslash Y] \to \mathfrak{X}$. Suppose we are given an object of $[\Gamma \backslash Y]$ over $U$: this is a diagram
\begin{align*}
\xymatrix{
P \ar[r]^\sigma \ar[d]_\rho & Y \\ U & }
\end{align*}
where $\rho : P \to U$ is a $\Gamma$-torsor and $\sigma : P \to Y$ is a $\Gamma$-equivariant morphism. Since $\sigma$ is equivariant, $\sigma^* M$ gives a line bundle on $P$ with a $\Gamma$-action; obviously, it also comes with a section $\sigma^* u$ and an isomorphism $\sigma^* \alpha : (\sigma^* M)^{\otimes r} \xrightarrow{\sim} (\pi \circ \sigma)^* L$ with $(\sigma^* \alpha)( \sigma^* u)^{\otimes r} = (\pi \circ \sigma)^* s$.
As $\rho : P \to U$ is a $\Gamma$-torsor, $U$ is a geometric quotient of $P$ by $\Gamma$. The composition $p \circ \sigma : P \to Y \to X$ is a $\Gamma$-invariant morphism, and hence there is a unique morphism $f : U \to X$ making
\begin{align*}
\commsq{ P }{ Y }{ U }{ X }{ \sigma }{ \rho }{ p }{ f }
\end{align*}
commute. The fact that $\sigma^* M$ has a compatible $\Gamma$-action means that it and the section $\sigma^* u$ descend to $U$ yielding an object $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}(U)$, and a (2-)commutative diagram
\begin{align} \label{torsorsquare}
\vcenter{ \commsq{ P }{ Y }{ U }{ \mathfrak{X}. }{ \sigma }{ \rho }{ p }{ \mathfrak{f} } }
\end{align}
Since a morphism in $[\Gamma \backslash Y]$ is simply a pullback diagram of $\Gamma$-torsors, it is easy to see how this functor acts on morphisms. Thus, we have defined $[\Gamma \backslash Y] \to \mathfrak{X}$.
To go the other way, suppose we are given an object of $\mathfrak{X}$ over $U$, which
translates into a morphism $\mathfrak{f} : U \to \mathfrak{X}$. Then by Lemma \ref{Gammatorsor}, the top and left arrows of the Cartesian square
\begin{align} \label{Yfibre}
\vcenter{
\commsq{ U \times_\mathfrak{X} Y }{ Y }{ U }{ \mathfrak{X}, }{}{}{p}{\mathfrak{f}} }
\end{align}
yield an object of $[ \Gamma \backslash Y]$. A morphism in (the category) $\mathfrak{X}$ translates to a ($2$-)commutative diagram
\begin{align*}
\commtri{ U }{ V }{ \mathfrak{X} }{}{}{}
\end{align*}
in which case, one sees that the appropriate definition of this functor on morphisms is to take the pullback diagram of the torsor obtained above.
One will note that the square in (\ref{torsorsquare}) is, in fact, Cartesian.
Lemma \ref{Gammatorsor} states that the fibre product $U \times_\mathfrak{X} Y$ is a $\Gamma$-torsor over $U$. The commutativity of (\ref{torsorsquare}) yields a morphism $P \to U \times_\mathfrak{X} Y$, which is a morphism of $\Gamma$-torsors over $U$ and hence an isomorphism. Taking this into account, one sees that the diagrams (\ref{torsorsquare}) and (\ref{Yfibre}) are essentially the same, and that the two functors are quasi-inverse to each other.
\end{proof}
\subsection{Root Stacks Over Curves}
The following statement (and its attribution) can be found in \cite[Theorem 1.2.15]{Namba}.
\begin{theorem}[Bundgaard--Nielsen--Fox]\label{BNF}
Let $X$ be an irreducible projective algebraic curve over $\mathbb{C}$ of genus $g$. Let $Z := \{
x_1, \cdots, x_m \} \subseteq X$ be a set of $m$ distinct points and let $r_1,
\cdots, r_m \in \mathbb{N}_{\geq 2}$. If $g = 0$, then we will assume that either $m
\geq 3$ or $m = 2$ with $r_1 = r_2$. Then there exists a projective curve $Y$
and
a Galois covering $p : Y \to X$ such that $p|_{Y \setminus p^{-1}(Z)}$ is unramified and if $y \in p^{-1}(x_i)$ then the ramification index of $y$ is precisely $r_i$.
\end{theorem}
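A familiar instance of the theorem, included for orientation (and not part of Namba's statement): take $X = \mathbb{P}^1$ with two marked points and equal ramification indices.
```latex
% X = P^1, Z = {0, infty}, r_1 = r_2 = r.  One may take
Y \;=\; \mathbb{P}^1, \qquad p : Y \to X, \qquad p(z) \;=\; z^r,
% a cyclic Galois covering with group mu_r, unramified away from
% p^{-1}(Z) = {0, infty} and totally ramified of index r over each
% of the two marked points, as the theorem requires.
```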
By taking $r := r_1 = \cdots = r_m$, we obtain the following from Proposition \ref{globalquotientstack}.
\begin{cor} \label{globalquotient}
Let $X$ be as in the Theorem. If $g \geq 1$ or $m > 1$, then the associated root stack $\mathfrak{X} = \mathfrak{X}_{\mathscr{O}_X(Z), s, r}$, where $Z$ is regarded as a reduced divisor and $s \in H^0(X, \mathscr{O}_X(Z))$ is a section defining it, is a global quotient stack. Precisely, if a Galois covering $p : Y \to X$ is chosen as in the Theorem, with Galois group $\Gamma$, then there is an equivalence
\begin{align*}
\mathfrak{X} \xrightarrow{\sim} [\Gamma \backslash Y ].
\end{align*}
\end{cor}
\section{Bundles and Root Stacks} \label{BundlesRootStacks}
\subsection{Parabolic Vector Bundles and Root Stacks}
We recall the definition of a parabolic vector bundle over a $\mathbb{C}$-scheme $X$ with respect to an effective Cartier divisor $D$ given in \cite[\S1]{Yokogawa1995}. We will use $\mathbb{R}$ as an index category, whose objects are real numbers and in which a (single) morphism $\beta \to \alpha$ exists, by definition, precisely when $\beta \leq \alpha$. Let $E_*$ be a functor $E_* : \mathbb{R}^\textnormal{op} \to \mathfrak{QCoh}(X)$, where $\mathfrak{QCoh}(X)$ is the category of quasi-coherent $\mathscr{O}_X$-modules. If $\alpha \in \mathbb{R}$, we simply write $E_\alpha$ for $E_*(\alpha)$, and $i_{\alpha, \beta}$ for the morphism $E_\alpha \to E_\beta$ given by the functor $E_*$ when $\alpha \geq \beta$.
Given $E_*$ as above and $\gamma \in \mathbb{R}$, one can define another functor $E[\gamma]_* : \mathbb{R}^\textnormal{op} \to \mathfrak{QCoh}(X)$ by setting
\begin{align*}
E[\gamma]_\alpha := E_{\alpha + \gamma},
\end{align*}
together with the only possible definition on morphisms. If $\gamma \geq 0$, then there is a natural transformation $E[\gamma]_* \to E_*$. The functor $E_*$ is called a \emph{parabolic sheaf} if it comes with a natural isomorphism of functors $j : E_* \otimes_{\mathscr{O}_X} \mathscr{O}_X(-D) \xrightarrow{\sim} E[1]_*$ fitting into a commutative diagram
\begin{align*}
\xymatrixcolsep{5pc}
\xymatrix{
E_* \otimes_{\mathscr{O}_X} \mathscr{O}_X(-D) \ar[r]^-j \ar[rd] & E[1]_* \ar[d] \\
& E_*}
\end{align*}
The sheaf $E_0$ is often referred to as the \emph{underlying sheaf} and written as simply $E$. A morphism of parabolic sheaves $(E_*, j) \to (F_*, k)$ is a natural transformation $E_* \to F_*$ intertwining $j$ and $k$.
A parabolic sheaf $(E_*, j)$ is said to be \emph{coherent} if it factors through $\mathfrak{Coh}(X) \to \mathfrak{QCoh}(X)$, where $\mathfrak{Coh}(X)$ is the category of coherent $\mathscr{O}_X$-modules, and if further there is a finite sequence of real numbers $0 \leq \alpha_1 < \alpha_2 < \cdots < \alpha_l < 1$ such that if $\alpha \in (\alpha_{i-1}, \alpha_i]$, then
\begin{align*}
i_{\alpha_i, \alpha} : E_{\alpha_i} \to E_\alpha
\end{align*}
is the identity map. We will thus have a filtration of sheaves
\begin{align} \label{sheaffiltration}
E := E_0 \supset E_{\alpha_1} \supset E_{\alpha_2} \supset \cdots \supset E_{\alpha_l} \supset E_1 \xrightarrow{\sim} E(-D).
\end{align}
A coherent parabolic sheaf $(E_*, j)$ is called a \emph{parabolic vector bundle} if $E_*$ takes values in the category $\mathfrak{Vect}(X)$ of vector bundles over $X$ and further, whenever $\beta \leq \alpha < \beta +1$, the sheaf $\textnormal{coker} \, i_{\alpha, \beta}$, which is supported on $D$, is locally free as an $\mathscr{O}_D$-module. The category of parabolic vector bundles will be denoted by $\mathfrak{ParVect}_D(X) = \mathfrak{ParVect}(X)$.
We will say that the coherent parabolic sheaf $(E_*, j)$ has \emph{rational weights} if the $\alpha_i, 1 \leq i \leq l,$ may be chosen in $\mathbb{Q}$. Since this is a finite set, the weights may be chosen in $\frac{1}{r}\mathbb{Z}$ for some $r \in \mathbb{N}$; in this case, we will think of $E_*$ as a functor $(\frac{1}{r} \mathbb{Z})^\textnormal{op} \to \mathfrak{Coh}(X)$, and we say that $E_*$ has \emph{weights dividing $r$}. We will denote by $\mathfrak{ParVect}_{D,r}(X) = \mathfrak{ParVect}_r(X)$ the category of parabolic vector bundles with weights dividing $r$.
One of the main results of \cite{Borne} is that parabolic vector bundles on $X$ correspond to vector bundles on an appropriate root stack. We give a precise statement. With $X$ and $D$ as above, let $s \in H^0(X, \mathscr{O}_X(D))$ be a section defining the divisor $D$ and fix $r \in \mathbb{N}$. We will let $\mathfrak{X} := \mathfrak{X}_{\mathscr{O}_X(D), s, r}$ be the corresponding root stack.
\begin{theorem}[{\cite[Th\'eor\`eme 3.13]{Borne}}]\label{BorneVB}
The functor $\mathfrak{Vect}(\mathfrak{X}) \to \mathfrak{ParVect}_{D,r}(X)$ which takes $\mathcal{V}$ to the functor $E_* : (\frac{1}{r}\mathbb{Z})^\textnormal{op} \to \mathfrak{Vect}(X)$
\begin{align*}
\tfrac{i}{r} \mapsto \pi_* ( \mathcal{V} \otimes_{\mathscr{O}_\mathfrak{X}} \mathscr{N}^{\otimes -i} )
\end{align*}
is an equivalence of tensor categories.
\end{theorem}
\subsection{Degree of a Vector Bundle over a Root Stack}
Assume now that $X$ is a smooth projective variety with very ample invertible sheaf $\mathscr{O}_X(1)$. Let $D, r,$ and $\mathfrak{X}$ be as above. Then if $\mathcal{V}$ is a vector bundle over $\mathfrak{X}$, \cite[D\'efinition 4.2]{Borne} defines its \emph{degree} as
\begin{align*}
\deg_\mathfrak{X} \mathcal{V} := q_* \big( c_1^{et}(\mathcal{V}) \cdot \pi^* c_1^{et} \big( \mathscr{O}_X(1) \big)^{n-1} \big),
\end{align*}
where $n = \dim X$ and $q : \mathfrak{X} \to \textnormal{Spec} \, \mathbb{C}$ is the structure morphism.
One has the following theorem.
\begin{theorem}[{\cite[Th\'eor\`eme 4.3]{Borne}}]\label{Bornedeg}
Let $\mathcal{V}$ be a vector bundle on the root stack $\mathfrak{X}$ and $E_*$ the corresponding parabolic vector bundle over $X$ (via the equivalence given in Theorem \ref{BorneVB}). Then
\begin{align*}
\textnormal{par-deg} \, E_* = \deg_\mathfrak{X} \mathcal{V}.
\end{align*}
\end{theorem}
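The parabolic degree appearing here is the classical one of Mehta--Seshadri; we recall the standard formula, stated here for the curve case only, in terms of the filtration (\ref{sheaffiltration}) and the flag it induces in each fibre over $D$:
```latex
% Parabolic degree over a curve (Mehta--Seshadri):
\textnormal{par-deg} \, E_*
\;=\; \deg E_0
\;+\; \sum_{x \in \textnormal{supp} \, D} \sum_{i=1}^{k}
\alpha_i \, \dim_\mathbb{C} \big( E_{x,i} / E_{x,i+1} \big),
% where E_x = E_{x,1} \supset ... \supset E_{x,k+1} = 0 is the flag in
% the fibre E_x induced by the filtration, and alpha_i is the weight
% attached to the step E_{x,i}.
```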
\subsection{Principal Bundles Over Root Stacks} \label{PBRS}
We return to the situation of Section \ref{finiteGalois}, where $p : Y \to X$ is a finite Galois covering of smooth quasi-projective varieties with Galois group $\Gamma$. By Proposition \ref{stackGamma}, we immediately obtain the following.
\begin{cor} \label{GammaGcurve}
With $\mathfrak{X}$ as in Corollary \ref{globalquotient}, there is an equivalence
\begin{align*}
\textnormal{Bun}_G \, \mathfrak{X} \xrightarrow{\sim} \textnormal{Bun}_{\Gamma, G} Y.
\end{align*}
\end{cor}
We will now restrict to the case where $X$ is a smooth projective curve, where we can give something of a refinement of this equivalence. Let $x \in X$ and $y \in Y$; let $\mathscr{O}_x, \mathscr{O}_y$ be the respective local rings, $\widehat{\mathscr{O}}_x, \widehat{\mathscr{O}}_y$ their completions, and $\mathcal{K}_x, \mathcal{K}_y$ the corresponding fraction fields. As a matter of notation, we will set
\begin{align*}
\mathbb{D}_x & := \textnormal{Spec} \, \widehat{\mathscr{O}}_x, & \mathbb{D}_x^\times & := \textnormal{Spec} \, \mathcal{K}_x, & \mathbb{D}_y & := \textnormal{Spec} \, \widehat{\mathscr{O}}_y, & \mathbb{D}_y^\times & := \textnormal{Spec} \, \mathcal{K}_y.
\end{align*}
One has a Cartesian diagram
\begin{align} \label{Xx}
\vcenter{ \commsq{ \mathbb{D}_x^\times }{ X \setminus x }{ \mathbb{D}_x }{ X. }{}{}{}{} }
\end{align}
The local type of a $(\Gamma, G)$-bundle is defined in \cite[\S2.2]{BalajiSeshadri2012} as follows. Let $E$ be a $(\Gamma, G)$-bundle over $Y$. Then for each $y \in p^{-1}(Z)$, $E|_{\mathbb{D}_y}$ is a $(\Gamma_y, G)$-bundle and this is defined by a representation $\rho_y : \Gamma_y \to G$. Let $\tau_y$ denote the equivalence class of the representation $\rho_y$. Then the \emph{local type of $E$} is defined as
\begin{align*}
\tau := \{ \tau_y \, : \, y \in p^{-1}(Z) \}.
\end{align*}
We let $\textnormal{Bun}_{\Gamma, G}^\tau Y$ denote the stack of $(\Gamma, G)$-bundles of local type $\tau$.
Because of Corollary \ref{GammaGcurve}, there should be a well-defined notion of a local type for a $G$-bundle over $\mathfrak{X}$. Fix $x \in Z$ and let $y_1, y_2 \in p^{-1}(x)$. Then there exists $\gamma \in \Gamma$ such that conjugation by $\gamma$ yields an isomorphism $\Gamma_{y_1} \xrightarrow{\sim} \Gamma_{y_2}$; since $\Gamma_{y_1}, \Gamma_{y_2}$ are abelian, this isomorphism is in fact independent of the choice of $\gamma$. Thus, it is possible to choose isomorphisms $c_y : \boldsymbol{\mu}_r \xrightarrow{\sim} \Gamma_y$ for each $y \in p^{-1}(x)$ such that for all $y_1, y_2 \in p^{-1}(x)$, the diagram
\begin{align*}
\ucommtri{ \boldsymbol{\mu}_r }{ \Gamma_{y_1} }{ \Gamma_{y_2} }{ c_{y_1} }{ c_{y_2} }{}
\end{align*}
commutes. This makes each $E|_{\mathbb{D}_y}$ a $(\boldsymbol{\mu}_r, G)$-bundle and if $\gamma$ is as above, it also yields an isomorphism $E|_{\mathbb{D}_{y_1}} \xrightarrow{\sim} E|_{\mathbb{D}_{y_2}}$ of $(\boldsymbol{\mu}_r, G)$-bundles, and hence the local representations $\rho_{y_1} \circ c_{y_1}, \rho_{y_2} \circ c_{y_2} : \boldsymbol{\mu}_r \to G$ are equivalent.
Observe that $\mathbb{D}_x \times_X Y \cong \coprod_{y \in p^{-1}(x)} \mathbb{D}_y$, so it follows that
\begin{align*}
\mathbb{D}_x \times_X \mathfrak{X} \cong \mathbb{D}_x \times_X [\Gamma \backslash Y] \cong [ \Gamma \backslash (\mathbb{D}_x \times_X Y)] \cong \left[\Gamma \bigg\backslash \coprod_{y \in p^{-1}(x)} \mathbb{D}_y \right] \cong [ \Gamma_y \backslash \mathbb{D}_y ] \cong [ \boldsymbol{\mu}_r \backslash \mathbb{D}_y ],
\end{align*}
where, in the last two expressions, $y \in p^{-1}(x)$ is any choice of preimage.
Now, given a $G$-bundle $\mathscr{E}$ on $\mathfrak{X}$, $\mathscr{E}|_{\mathbb{D}_x \times_X \mathfrak{X}}$ is a $(\boldsymbol{\mu}_r, G)$-bundle over $\mathbb{D}_y$. This may be identified with the restriction to $\mathbb{D}_y$ of the associated $(\Gamma, G)$-bundle on $Y$, for some $y \in p^{-1}(x)$. There is thus a well-defined equivalence class of a homomorphism $\boldsymbol{\mu}_r \to G$, which we denote by $\tau_x$ and call the \emph{local type of $\mathscr{E}$ at $x$}. We define the \emph{local type of $\mathscr{E}$} to be $\tau := \{ \tau_x \, : \, x \in Z \}$. We will denote by $\textnormal{Bun}_G^\tau \mathfrak{X}$ the stack of $G$-bundles over $\mathfrak{X}$ of local type $\tau$.
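For $G = \textnormal{GL}(n, \mathbb{C})$, the local type admits a very concrete description, which we record as an illustration (it is not needed in the sequel). Any representation of $\boldsymbol{\mu}_r$ is diagonalizable, so up to equivalence:
```latex
% Local type at x for G = GL(n, C): after conjugation, a homomorphism
% mu_r -> GL(n, C) takes the form
\tau_x : \; \zeta \;\longmapsto\;
\textnormal{diag} \big( \zeta^{p_1}, \cdots, \zeta^{p_n} \big),
\qquad 0 \leq p_n \leq \cdots \leq p_1 \leq r - 1,
% so the local type at x is simply the multiset of exponents {p_j}.
```
Comparing with the local computation in the proof of Proposition \ref{rootstackconnection} below, the ratios $p_j/r$ are precisely the parabolic weights at $x$ of the parabolic vector bundle associated to $\mathscr{E}$.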
\begin{prop} \label{GammaGlocaltype}
The equivalence of Corollary \ref{GammaGcurve} restricts to equivalences
\begin{align*}
\textnormal{Bun}_G^\tau \mathfrak{X} \xrightarrow{\sim} \textnormal{Bun}_{\Gamma, G}^\tau Y
\end{align*}
for each local type $\tau$.
\end{prop}
\begin{rmk}
The local type of a $G$-bundle $\mathscr{E}$ on $\mathfrak{X}$ is independent of our realization of $\mathfrak{X}$ as a quotient stack $[\Gamma \backslash Y]$. If we fix $x \in Z$ and set $B := \widehat{\mathscr{O}}_x [t]/(t^r - z) \cong \mathbb{C}[\![ t ]\!]$, where $z \in \widehat{\mathscr{O}}_x$ is a parameter at $x$, we observe that $B$ admits a $\boldsymbol{\mu}_r$-action for which $\widehat{\mathscr{O}}_x$ is the ring of invariants and if $\mathbb{D}_{\hat{x}} := \textnormal{Spec} \, B$, then we have an abstract isomorphism
\begin{align*}
\mathbb{D}_x \times_X \mathfrak{X} \cong [\boldsymbol{\mu}_r \backslash \mathbb{D}_{\hat{x}} ].
\end{align*}
Then $\mathscr{E}$ restricts to a $G$-bundle on $\mathbb{D}_x \times_X \mathfrak{X}$ and hence corresponds to a $(\boldsymbol{\mu}_r, G)$-bundle on $\mathbb{D}_{\hat{x}}$, which determines the local type.
\end{rmk}
\section{Connections On Vector Bundles over a Root Stack} \label{Connections}
In this section, we will assume $X$ to be a smooth irreducible curve over $\mathbb{C}$. Let $Z \subseteq X$ be a reduced divisor. Let $s \in H^0(X, \mathscr{O}_X(Z))$ be a section defining $Z$ and fix $r \in \mathbb{N}$. We will let $\mathfrak{X} = \mathfrak{X}_{ \mathscr{O}_X(Z), s, r }$ be the associated root stack.
\subsection{Parabolic Connections}
Suppose $E_*$ is a rank $n$ parabolic vector bundle over $X$ given as a filtered sheaf as in (\ref{sheaffiltration}). It is easy to recover the parabolic structure on the underlying vector bundle $E$ in the original sense of \cite{MS} in terms of a weighted flag in the fibre $E_x$ for each $x \in \text{supp} \, Z$. Given the filtration (\ref{sheaffiltration}), we take the images in $E_x$ of the fibres $(E_{\alpha_i})_x$ to get a flag
\begin{align} \label{flag}
E_x = E_{x,1} \supset E_{x,2} \supset \cdots \supset E_{x,k} \supset E_{x,k+1} = \{ 0 \},
\end{align}
and the weight $\alpha_i$ attached to $E_{x,i}$ is the largest $\alpha$ such that $E_{x,i} = i_{\alpha,0}( (E_\alpha)_x)$.
Let $D$ be a connection on $E$ with logarithmic (simple) poles along $Z$. If $x \in \text{supp} \, Z$ then the residue $\text{Res}_x D$ is a well-defined endomorphism of $E_x$. We say that $D$ is a \emph{parabolic connection} if for each $x \in \text{supp} \, Z$,
\begin{align} \label{parcxn}
\text{Res}_x D (E_{x,i}) & \subseteq E_{x,i}, & \text{ and } & & \text{Res}_x D|_{E_{x,i}/E_{x,i+1}} & = \alpha_i \1_{E_{x,i}/E_{x,i+1}},
\end{align}
for $1 \leq i \leq k$ \cite[\S2.2]{BiswasLogares2011}.
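In rank one the conditions (\ref{parcxn}) are particularly transparent; the following local model, with $z$ a coordinate centred at $x$ and a single weight $\alpha$ (chosen purely for illustration), may help fix ideas:
```latex
% Rank-one local model: E = O_X near x, one-step flag E_x \supset 0,
% weight alpha attached to E_x.  The logarithmic connection
D \;=\; d \;+\; \alpha \, \frac{dz}{z},
\qquad
\textnormal{Res}_x D \;=\; \alpha \cdot \1_{E_x},
% preserves the (trivial) flag and acts on its only graded piece E_x
% as multiplication by alpha, so D is a parabolic connection.
```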
We will use the following \cite[Lemma 4.2]{BiswasLogares2011}.
\begin{lem} \label{parlineconnection}
A parabolic line bundle $L_*$ admits a connection if and only if $\textnormal{par-deg} \, L_* = 0$.
\end{lem}
\subsection{Connections on the Root Stack and Parabolic Connections}
\begin{prop} \label{rootstackconnection}
The category of vector bundles with connections on $\mathfrak{X}$ is equivalent to the category of parabolic vector bundles with parabolic connections on $X$.
\end{prop}
\begin{proof}
Suppose we are given a rank $n$ vector bundle with connection $(\mathcal{V}, \nabla)$ on $\mathfrak{X}$. We want to show that $\nabla$ induces a parabolic connection on the corresponding parabolic vector bundle $E_*$ on $X$. Since a connection is defined locally and the parabolicity condition (\ref{parcxn}) is also local, as in Example \ref{affinerootstack}, we may assume that $X = \textnormal{Spec} \, A$, that $\text{supp} \, Z = \{ x \}$ is a single parabolic point defined by $s \in A$ whose image in $\mathscr{O}_{X,x}$ is a parameter at $x$ and such that $ds$ is a local basis for $\Omega_{X/\mathbb{C}}^1$, so that if $B := A[t]/(t^r - s)$ and $U := \textnormal{Spec} \, B$, then $\mathfrak{X} = [U/ \boldsymbol{\mu}_r]$. Note also that $\Omega_{U/\mathbb{C}}^1$ has $dt$ as a local basis. If $\gamma \in \boldsymbol{\mu}_r$ is a generator, corresponding to a primitive $r$th root of unity $\zeta$, then $\gamma \cdot t = \zeta^{-1} t$ and similarly $\gamma \cdot dt = \zeta^{-1} \, dt$.
In this case, $\mathcal{V}$ is defined by a projective module over $B$ with a compatible
$\boldsymbol{\mu}_r$-action, and $\nabla$ commutes with this action. By shrinking as necessary, we may assume that the module is free over $B$, say with basis $e = \{ e_1, \cdots, e_n \}$, and the $\boldsymbol{\mu}_r$-action is appropriately diagonalized \cite[Proposition 3.15]{Borne} so that
\begin{align*}
\gamma \cdot e_j = \zeta^{p_j} e_j,
\end{align*}
for some $p_j$ which satisfy $0 \leq p_n \leq \cdots \leq p_1 \leq r-1$. Take $1
\leq j_k < j_{k-1} < \cdots < j_1 = n$ such that if $ j_{i+1} + 1 \leq j \leq j_i$, then $p_j = p_{j_i}$; set $m_i := p_{j_i}$ and $\alpha_i := m_i/r$. The $\boldsymbol{\mu}_r$-invariants of the submodule generated by $e_j$ are generated by $t^{p_j} e_j$. Hence $\pi_* \mathcal{V}$ has a basis $f = \{ f_1 := t^{p_1} e_1, \cdots, f_n := t^{p_n} e_n \}$, that is,
\begin{align*}
& f_1 = t^{m_k} e_1, \cdots, f_{j_k} = t^{m_k} e_{j_k}, f_{j_k + 1} = t^{m_{k-1}}
e_{j_k + 1}, \cdots, f_{j_{k-1}} = t^{m_{k-1}} e_{j_{k-1}}, \cdots, \\
& f_{j_2 + 1} = t^{m_1} e_{j_2 + 1}, \cdots, f_n = t^{m_1} e_n.
\end{align*}
More generally, if $m_{i-1} + 1 \leq l \leq m_i$, then $\pi_*( \mathcal{V} \otimes_{\mathscr{O}_\mathfrak{X}} \mathscr{N}^{\otimes -l})$ is spanned by
\begin{align*}
f_1, \cdots, f_{j_i}, s f_{j_i+1}, \cdots, s f_n.
\end{align*}
Then the subspace $E_{x,i}$ of $E_x$ is spanned by $f_1(x), \cdots, f_{j_i}(x)$;
this describes the filtration (\ref{flag}).
Now, suppose that, with respect to the basis $e$, $\nabla$ has the connection matrix $\omega = (\omega_{ij}) dt$, so that
\begin{align*}
\nabla e_j = \sum_{i=1}^n \omega_{ij} e_i \, dt.
\end{align*}
Then comparing the two expressions
\begin{align*}
\nabla \gamma \cdot e_j & = \zeta^{p_j} \sum_{i=1}^n \omega_{ij} e_i \, dt, & \gamma \cdot \nabla e_j & = \sum_{i=1}^n \zeta^{p_i -1} (\gamma \cdot \omega_{ij}) e_i \, dt,
\end{align*}
we find that $\gamma \cdot \omega_{ij} = \zeta^{p_j - p_i + 1} \omega_{ij}$. Hence $\omega_{ij}$ is of the form
\begin{align*}
\omega_{ij} = \begin{cases} t^{p_i - p_j - 1} \nu_{ij} & \text{ if } p_i > p_j \\ st^{p_i - p_j - 1} \nu_{ij} & \text{ if } p_i \leq p_j, \end{cases}
\end{align*}
for some $\nu_{ij} \in A$.
The change of basis matrix, from $e$ to $f$, is $g = \textnormal{diag}( t^{p_1}, \cdots,
t^{p_n})$ and so the connection matrix with respect to $f$ is $g^{-1} \omega g + g^{-1} \, dg$, the $(i,j)$-entry of which is
\begin{align*}
\begin{cases} \nu_{ij} \frac{dt}{t} = \frac{1}{r} \nu_{ij} \frac{ds}{s} & \text{ if } p_i > p_j \\ \nu_{ij} \frac{ s \, dt }{t} + \delta_{ij} p_i \frac{dt}{t} = \frac{1}{r} ( s \nu_{ij} + \delta_{ij} p_i) \frac{ds}{s} & \text{ if } p_i \leq p_j. \end{cases}
\end{align*}
One sees immediately that this gives a well-defined logarithmic connection $D$ on $E_*$ and yields the following expression for the residue at $x$:
\begin{align*}
(\text{Res}_x D)_{ij} = \begin{cases} \frac{1}{r} \nu_{ij}(x) & \text{ if } p_i > p_j \\ \delta_{ij} \frac{p_i}{r} & \text{ if } p_i \leq p_j. \end{cases}
\end{align*}
From this, it is straightforward to verify that $D$ is, in fact, a parabolic connection.
In the other direction, suppose we are given a parabolic connection $(E_*, D)$.
Let $f_1, \cdots, f_n$ be a local frame for $E$, adapted to the flag (\ref{flag}) in $E_x$ with weights $\alpha_i = m_i/r, 1 \leq i \leq k$, so that $f_1, \cdots, f_{j_i}$ span $E_{x,i}$. Then the corresponding bundle on $\mathfrak{X}$ is represented over $U$ by the free $B$-module with basis
\begin{align*}
& e_1 := f_1 \otimes t^{-m_k}, \cdots, e_{j_k} := f_{j_k} \otimes t^{-m_k},
e_{j_k + 1} := f_{j_k + 1} \otimes t^{-m_{k-1}}, \cdots, e_{j_2} := f_{j_2}
\otimes t^{-m_2}, \\
& e_{j_2 + 1} := f_{j_2 + 1} \otimes t^{-m_1}, \cdots, e_n := f_n \otimes
t^{-m_1}.
\end{align*}
Now, reversing the argument above, we see without difficulty that the induced connection on the $B$-module has no poles and is compatible with the $\boldsymbol{\mu}_r$-action. Hence we obtain a connection on $\mathfrak{X}$.
Observe that the passage above from a connection on $\mathfrak{X}$ to a parabolic connection, and back again, is essentially a ``change of basis'' operation. When realized as such, it is clear that the two operations are inverse to each other.
A morphism $\varphi : (\mathcal{V}, \nabla^\mathcal{V}) \to (\mathcal{W}, \nabla^\mathcal{W})$ is a morphism $\varphi : \mathcal{V} \to \mathcal{W}$ which commutes with the respective connections, i.e.\ $\nabla^\mathcal{W} \circ \varphi = ( \varphi \otimes \1_{\Omega_\mathfrak{X}^1}) \circ \nabla^\mathcal{V}$. In local frames, this means that the matrix for $\varphi$ satisfies the same relation with the respective connection matrices. Let $(E_*, D^E), (F_*, D^F)$ be the corresponding parabolic connections. Then, as we saw, the connection matrices for $D^E$ and $D^F$ are obtained by changes of basis on each of $\mathcal{V}$ and $\mathcal{W}$ from the matrices for $\nabla^\mathcal{V}$ and $\nabla^\mathcal{W}$; the matrix for $\pi_* \varphi$ will be obtained from these same changes of basis, so the commutation property will be preserved and we get a morphism of connections. Again, the process is reversible.
\end{proof}
The following is immediate from the Proposition, Lemma \ref{parlineconnection} and Theorem \ref{Bornedeg}.
\begin{cor} \label{linecxnroot}
A line bundle $\mathcal{L}$ on $\mathfrak{X}$ admits a connection if and only if $\deg_\mathfrak{X} \, \mathcal{L} = 0$.
\end{cor}
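As a quick illustration of the Corollary, consider the tautological line bundle $\mathscr{N}$ itself. Since $\mathscr{N}^{\otimes r} \cong \pi^* \mathscr{O}_X(Z)$ and the degree is unchanged by pullback along $\pi$,
```latex
% Degree of the tautological bundle on the root stack:
\deg_\mathfrak{X} \mathscr{N}
\;=\; \tfrac{1}{r} \deg_\mathfrak{X} \big( \pi^* \mathscr{O}_X(Z) \big)
\;=\; \tfrac{1}{r} \deg Z,
% which is positive whenever Z is non-empty; in that case N admits
% no connection, by the Corollary.
```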
\section{A Condition for the Existence of a Connection} \label{ConnectionCondition}
In this section $X$ will be an irreducible smooth complex projective curve, $Z
\subseteq X$ a reduced divisor, $s \in H^0(X, \mathscr{O}_X(Z))$ a section defining $Z$,
$r \in \mathbb{N}$ and $\mathfrak{X} = \mathfrak{X}_{\mathscr{O}_X(Z), s, r}$ the associated root stack. We will
further assume that either the genus $g$ of $X$ satisfies $g \geq 1$, or the number $m$
of points in $Z$ satisfies $m > 1$, so that Corollary \ref{globalquotient} applies.
As before, $G$ is a reductive complex algebraic
group.
The following theorem is a generalization of a criterion of Weil and Atiyah,
\cite{At}, \cite{We}, for
the existence of a holomorphic connection on a holomorphic vector bundle over
a compact Riemann surface.
\begin{theorem} \label{connectioncondition}
A principal $G$-bundle $\mathscr{E}$ on $\mathfrak{X}$ admits a connection if and only if for any reduction to a Levi factor $L$ of a parabolic subgroup of $G$, say $\mathscr{F}$ is an $L$-bundle with $\mathscr{F} \times^\iota G \cong \mathscr{E}$, where $\iota : L \hookrightarrow G$ is the inclusion, and any character $\chi : L \to \mathbb{C}^\times$, the associated line bundle $\mathcal{M} = \mathcal{M}^\chi := \mathscr{F} \times^\chi \mathbb{C}$ satisfies
\begin{align*}
\deg_\mathfrak{X} \mathcal{M} = 0.
\end{align*}
\end{theorem}
\begin{proof}
Suppose first that $\mathscr{E}$ admits a connection and let $\mathscr{F}$ be a reduction of $\mathscr{E}$ to a Levi subgroup $L$ and $\chi : L \to \mathbb{C}^\times$ a character. Then by Lemmata \ref{Levisplitting}, \ref{splittingconnection} and \ref{inducedconnection}, $\mathcal{M}^\chi$ on $\mathfrak{X}$ admits a connection. Hence $\deg_\mathfrak{X} \, \mathcal{M}^\chi = 0$ by Corollary \ref{linecxnroot}.
We now prove the converse. Assume that $\mathscr{E}$ satisfies the condition of the Theorem. We choose a Galois cover $p : Y \to X$ as in Corollary \ref{globalquotient}. Then we obtain a surjective \'etale morphism $\mathfrak{p} : Y \to \mathfrak{X}$, so that $Y$ is an atlas for $\mathfrak{X}$. Corresponding to the morphism $\mathfrak{p}$ is a $G$-bundle $\mathscr{E}_\mathfrak{p}$ which, since $\mathfrak{X} = [\Gamma \backslash Y]$, comes with a compatible $\Gamma$-action. A reduction of the structure group of $\mathscr{E}$ to a Levi subgroup $H$ corresponds to a $\Gamma$-invariant reduction of $\mathscr{E}_\mathfrak{p}$ to $H$, and conversely, so the hypotheses of the following Lemma are satisfied.
\begin{lem}
Let $E$ be a $\Gamma$-linearized principal $G$-bundle over $Y$ such that for every Levi subgroup $H \subseteq G$, every $\Gamma$-invariant holomorphic reduction $F$ of $E$ to $H$ and every character $\chi : H \to \mathbb{C}^\times$, one has
\begin{align*}
\deg F \times^\chi \mathbb{C} = 0.
\end{align*}
Then $E$ admits a $\Gamma$-invariant connection.
\end{lem}
Except for the $\Gamma$-invariance of the connection, this is the statement of \cite[Lemma 4.2]{BiswasPPB}; that the existence of one connection implies the existence of a $\Gamma$-invariant one is proved by an averaging argument on the previous page of the same paper.
Now, the existence of a $\Gamma$-invariant connection on $\mathscr{E}_\mathfrak{p}$ implies that there is a $\Gamma$-invariant splitting of the Atiyah sequence for $\mathscr{E}_\mathfrak{p}$. Since the question is now framed in terms of the existence of a section of an appropriate sheaf over $Y$, such a splitting descends to a splitting of the Atiyah sequence for $\mathscr{E}$ and we conclude by Lemma \ref{Atiyahcondition}.
\end{proof}
Let $\mathscr{E}$ be a principal $G$-bundle over $\mathfrak{X}$. Consider the short exact sequence
$$
0 \to \Omega^1_{\mathfrak{X}}\otimes \textnormal{ad} \, \mathscr{E} \to \Omega^1_{\mathfrak{X}}\otimes \text{At} \, \mathscr{E}
\stackrel{\mu}{\to} \Omega^1_{\mathfrak{X}}\otimes T \mathfrak{X} = \textnormal{End}(T \mathfrak{X}) \to 0
$$
obtained by tensoring the Atiyah sequence associated to
$\mathscr{E}$ with $\Omega^1_{\mathfrak{X}}$. It produces the exact sequence
$$
0 \to \Omega^1_{\mathfrak{X}}\otimes \textnormal{ad} \, \mathscr{E} \to \mu^{-1}(\text{Id}_{T \mathfrak{X}}\cdot
\mathscr{O}_{\mathfrak{X}}) \stackrel{\mu}{\to} \text{Id}_{T \mathfrak{X}}\cdot
\mathscr{O}_{\mathfrak{X}} \,=\, \mathscr{O}_{\mathfrak{X}} \to 0\, .
$$
There is a natural bijective correspondence between the splittings of this
exact sequence and those of the Atiyah sequence associated to
$\mathscr{E}$. In \cite{BiswasPPB}, connections on parabolic principal bundles were
defined to be the splittings of the analogue of this exact sequence in the
parabolic context.
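For the reader's convenience, the Atiyah sequence referred to above is the usual one, stated here over $\mathfrak{X}$:
```latex
% Atiyah sequence of the principal G-bundle E over the root stack:
0 \;\to\; \textnormal{ad} \, \mathscr{E} \;\to\; \textnormal{At} \, \mathscr{E}
\;\to\; T \mathfrak{X} \;\to\; 0,
% whose splittings are, by definition, the connections on E.  The
% sequence in the text is obtained from this one by tensoring with
% Omega^1 and restricting to the identity section of End(T X).
```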
\section{Tensor Functors} \label{TensorFunctors}
Let $X$ be a scheme over an arbitrary field $k$, $G$ an affine group scheme over
$k$ and $E$ a $G$-bundle over $X$. Then the assignment taking a representation
$\rho : G \to {\rm GL}(V)$ to the associated vector bundle $E \times^\rho V$
defines
a functor $F_E : G\mhyphen\mathfrak{mod} \to \mathfrak{Vect} \, X$, where $G\mhyphen\mathfrak{mod}$ is the category of
finite-dimensional representations of $G$ and one observes that $F_E$ satisfies
the following properties:
\begin{enumerate}
\item[(i)] $F_E$ is $k$-additive;
\item[(ii)] $F_E$ is a tensor functor in the sense that it commutes with the
formation of tensor products (in each of the respective categories), and with
the natural isomorphisms of functors which give the associativity and
commutativity of the tensor product in each category;
\item[(iii)] $F_E$ takes the trivial $1$-dimensional representation to the
trivial line bundle;
\item[(iv)] $F_E$ takes an $n$-dimensional representation to a rank $n$ vector
bundle.
\end{enumerate}
Following a Tannakian philosophy, M.V.\ Nori showed that any functor $G\mhyphen\mathfrak{mod} \to \mathfrak{Vect} \, X$ satisfying these conditions in fact arises from a $G$-bundle over $X$ \cite[\S2]{Nori1976}.
The approach to generalizing the notion of a parabolic vector bundle to that of a parabolic principal bundle taken by \cite{BBN} is to view a $G$-bundle in this sense. Thus, one defines a parabolic principal bundle as a functor $G\mhyphen\mathfrak{mod} \to \mathfrak{ParVect}_{D,r}(X)$, for some $r \in \mathbb{N}$, which satisfies conditions (i)-(iv) above \cite[Definition 2.5]{BBN}. One will recall that the original definition \cite{MS} of a parabolic structure (on a vector bundle over a smooth curve) consisted of a flag of subspaces of the fibre over a parabolic point $x$, together with a set of weights in $[0,1)$. While a flag has a clear $G$-bundle analogue in terms of an element in a generalized flag manifold, it was much less obvious what the correct generalization for a set of weights should be.
The definition in \cite{BBN} was meant to overcome this difficulty.
We now see that the tensor functor approach to parabolic principal bundles coincides with our approach via root stacks.
\begin{prop}
The category of $G$-bundles on the root stack $\mathfrak{X}$ is equivalent to the category of tensor functors $G\mhyphen\mathfrak{mod} \to \mathfrak{ParVect}_{D,r}(X)$.
\end{prop}
\begin{proof}
The first thing to recall is that the equivalence of $\mathfrak{Vect} \, \mathfrak{X}$ and $\mathfrak{ParVect}_{D,r} \, X$ is a tensor functor, so satisfies (ii)-(iv); it obviously satisfies (i), and it is clear that it preserves rank and that the trivial bundle on $\mathfrak{X}$ corresponds to the trivial parabolic vector bundle on $X$. We will also note that if $\mathfrak{f} : U \to \mathfrak{X}$ is an \'etale morphism, then the functor $R_\mathfrak{f} : \mathfrak{Vect} \, \mathfrak{X} \to \mathfrak{Vect} \, U$ given by
\begin{align*}
\mathcal{V} \mapsto \mathcal{V}_\mathfrak{f}
\end{align*}
also has the same properties, virtually by definition.
Suppose we are given a principal bundle $\mathscr{E}$ on the root stack $\mathfrak{X}$. Then the associated vector bundle construction of Section \ref{associatedbundles} will give a functor $F_\mathscr{E} : G\mhyphen\mathfrak{mod} \to \mathfrak{Vect} \, \mathfrak{X}$ satisfying the conditions above. Composing with the equivalence $\mathfrak{Vect} \, \mathfrak{X} \xrightarrow{\sim} \mathfrak{ParVect}_{D,r}(X)$ gives a parabolic principal bundle in the sense of \cite{BBN}.
Suppose we are given a functor $F : G\mhyphen\mathfrak{mod} \to \mathfrak{ParVect}_{D,r} X$ satisfying (i)-(iv) above. Then given an \'etale morphism $\mathfrak{f} : U \to \mathfrak{X}$, we may consider the composition
\begin{align*}
G\mhyphen\mathfrak{mod} \xrightarrow{F} \mathfrak{ParVect}_{D,r} \, X \xrightarrow{\sim} \mathfrak{Vect} \, \mathfrak{X} \xrightarrow{R_\mathfrak{f}} \mathfrak{Vect} \, U,
\end{align*}
which we will denote by $F_\mathfrak{f}$. This will satisfy (i)-(iv) and hence determines a principal bundle $\mathscr{E}_\mathfrak{f}$ over $U$. Now, given a diagram (\ref{overXX}), one obtains a diagram of categories and functors
\begin{align*}
\rcommtri{ G\mhyphen\mathfrak{mod} }{ \mathfrak{Vect} \, V }{ \mathfrak{Vect} \, U }{ F_\mathfrak{g} }{ F_\mathfrak{f} }{ k^* }
\end{align*}
In this situation, \cite[Proposition 2.9(a)]{Nori1976} provides canonical isomorphisms $\mathscr{E}_\mathfrak{f} \xrightarrow{\sim} k^* \mathscr{E}_\mathfrak{g}$. Because of the canonical nature of these isomorphisms, the compatibility condition (\ref{compatibilitycondition}) is satisfied. Thus $F$ defines a principal $G$-bundle over $\mathfrak{X}$. It is clear that these constructions are inverses of each other, as it is a question of seeing that this is the case at each $\mathfrak{f} \in \textnormal{Ob} \, \mathfrak{X}$.
\end{proof}
\section{Parahoric Torsors} \label{ParahoricTorsors}
\subsection{Parahoric Subgroups}
We will assume that $G$ is semisimple and fix a maximal torus $T$ and a Borel subgroup $B$ containing $T$. Let $A := \mathbb{C}[\![ z ]\!]$ be the ring of formal power series and $K := \mathbb{C} (\!( z )\!) = A[z^{-1}]$ its quotient field (the ring of formal Laurent series). One has a quotient map $A \to \mathbb{C}$, which yields an evaluation map $\textnormal{ev} : G(A) \to G(\mathbb{C})$. The \emph{Iwahori subgroup} $\mathcal{I}$ of $G(K)$ is defined to be
\begin{align*}
\mathcal{I} := \textnormal{ev}^{-1}(B) = \textnormal{ev}^{-1}\big(B(\mathbb{C}) \big).
\end{align*}
A \emph{parahoric subgroup} $\mathscr{P}$ of $G(K)$ is one which contains a $G(K)$-conjugate of $\mathcal{I}$. It is a theorem of \cite{BruhatTitsII} that any such subgroup $\mathscr{P}$ is the group of $A$-points for a uniquely defined smooth group scheme over $A$ which, at the risk of poor notation, we will also call $\mathscr{P}$.
Let $\mathfrak{g}$ denote the Lie algebra of $G$, let $\Phi$ be the root system for $G$ (with respect to $T$) and for $\alpha \in \Phi$, let $\mathfrak{g}_\alpha$ denote the corresponding root space and $U_\alpha \subseteq G$ the root group. We will fix non-zero $x_\alpha \in \mathfrak{g}_\alpha$.
Let $\theta \in \t_\mathbb{R} = Y(T) \otimes_\mathbb{Z} \mathbb{R}$ and consider the subgroup
\begin{align} \label{Ptheta}
\mathscr{P}_\theta := \langle T(A), U_\alpha( z^{m_\alpha(\theta)} A ) \, : \, \alpha \in \Phi \rangle,
\end{align}
where $m_\alpha(\theta) = -\lfloor \alpha(\theta) \rfloor$ when we consider $\alpha$ as an element of $\t_\mathbb{R}^*$. Such a subgroup is parahoric in the above sense and any parahoric subgroup containing $\mathcal{I}$ is of the form $\mathscr{P}_\theta$ for some $\theta$ (though such a $\theta$ is clearly not unique). In what follows, we will typically take $\theta \in Y(T) \otimes_\mathbb{Z} \mathbb{Q}$ or, when $r \in \mathbb{N}$ is fixed, in $Y(T) \otimes_\mathbb{Z} \tfrac{1}{r} \mathbb{Z}$.
For such a $\theta$, \cite[\S2.2]{Boalch_Parahoric} gives the following description of the associated parahoric Lie algebra. For $\lambda \in \mathbb{R}$, we will set
\begin{align*}
\mathfrak{g}_\lambda^\theta := \{ \xi \in \mathfrak{g} \, : \, [\theta, \xi] = \lambda \xi \}.
\end{align*}
Note that $\mathfrak{g}_0^\theta \subseteq \mathfrak{g}$ is the centralizer subalgebra of $\theta$ and that $\t \subseteq \mathfrak{g}_0^\theta$.
For $i \in \mathbb{Z}$, we set
\begin{align*}
\mathfrak{g}^\theta(i) \,:=\,
\sum_{\lambda \geq -i } \mathfrak{g}_\lambda^\theta
\end{align*}
Then we define
\begin{align*}
\wp_\theta := \left\{ \sum_{i \in \mathbb{Z}} \xi_i z^i \in \mathfrak{g}(K) \, : \, \xi_i \in \mathfrak{g}^\theta(i) \right\}.
\end{align*}
It is not hard to see that this is the same as
\begin{align*}
\wp_\theta = \t(A) \oplus \sum_{\alpha \in \Phi} \mathfrak{g}_\alpha( z^{m_\alpha(\theta)} A ),
\end{align*}
which is what one would expect from the description in (\ref{Ptheta}).
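Indeed, for $\xi \in \mathfrak{g}_\alpha$ the membership condition for the coefficient of $z^i$ unwinds as follows (a short check, using that $\mathfrak{g}_\alpha \subseteq \mathfrak{g}_{\alpha(\theta)}^\theta$ and that $i$ is an integer):

```latex
\begin{align*}
\xi z^i \in \wp_\theta
\;\Longleftrightarrow\; \alpha(\theta) \geq -i
\;\Longleftrightarrow\; i \geq \lceil -\alpha(\theta) \rceil
= -\lfloor \alpha(\theta) \rfloor = m_\alpha(\theta),
\end{align*}
```

while the Cartan part lies in $\mathfrak{g}_0^\theta$, so every non-negative power of $z$ is allowed there; summing over $\alpha \in \Phi$ gives the displayed decomposition.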
An alternative description given in \cite{Boalch_Parahoric} is to consider the (finite-dimensional) vector spaces
\begin{align*}
\mathfrak{g}(K)_\lambda^\theta := \left\{ \sum_{i \in \mathbb{Z}} \xi_i z^i \in \mathfrak{g}(K) \, : \, \xi_i \in \mathfrak{g}_{\lambda-i}^\theta \right\},
\end{align*}
for $\lambda \in \mathbb{R}$, which we will call the \emph{weight $\lambda$} piece of $\mathfrak{g}(K)$. Then $\wp_\theta$ is the sub-algebra with weights $\geq 0$. In particular, the weight zero piece
\begin{align*}
\mathfrak{g}(K)_0^\theta := \left\{ \sum_{i \in \mathbb{Z}} \xi_i z^i \in \mathfrak{g}(K) \, : \, \xi_i \in \mathfrak{g}_{-i}^\theta \right\},
\end{align*}
is a finite-dimensional subalgebra of $\wp_\theta$ and is isomorphic to a reductive subalgebra of $\mathfrak{g}$ containing $\t$.
\subsection{Parahoric Group Schemes and Torsors} \label{PGST}
Let $X$ be a smooth (irreducible) curve. We use the notation of Section \ref{PBRS}. A group scheme $\mathcal{G}$ will be called a \emph{parahoric Bruhat--Tits group scheme} if there exists a finite set $Z \subseteq X$ such that for each $x \in Z$, there is some $\theta_x \in \t = Y(T) \otimes_\mathbb{Z} \mathbb{R}$ such that
\begin{align*}
\mathcal{G}|_{X \setminus Z} & \cong ( X \setminus Z) \times G, & \text{ and } \mathcal{G}|_{\mathbb{D}_x } & \cong \mathscr{P}_{\theta_x},
\end{align*}
for each $x \in Z$. If $\boldsymbol{\theta} := \{ \theta_x \, : \, x \in Z \}$, then this group scheme will be written $\mathcal{G}_{\boldsymbol{\theta}}$. The stack of $\mathcal{G}_{\boldsymbol{\theta}}$-torsors over $X$ will be denoted
\begin{align*}
\textnormal{Bun}_{\mathcal{G}_{\boldsymbol{\theta}}} X.
\end{align*}
For simplicity, we will only be considering the case when $Z = \{ x \}$ is a singleton, in which case we write $\theta$ for $\theta_x$ and for $\boldsymbol{\theta}$.
Let $X$ and $x \in X$ be as above. Let $s \in H^0(X, \mathscr{O}_X(x))$ be a section vanishing (only) at $x$ and fix $r \in \mathbb{N}$. Let $\mathfrak{X} := \mathfrak{X}_{\mathscr{O}_X(x), s, r}$ be the associated root stack.
Fix a local type $\tau$ for $G$-bundles over $\mathfrak{X}$ (i.e.\ an equivalence class of representations $\boldsymbol{\mu}_r \to G$) and choose a representative $\rho$. We may assume that $\rho$ is the restriction of a cocharacter $\theta : \mathbb{G}_m(L) \to T(L)$ to $\boldsymbol{\mu}_r(L) \subseteq \mathbb{G}_m(L)$ so we may associate to $\tau$ an element
\begin{align*}
\theta = \theta_\tau \in Y\big( T(L) \big) = Y\big( T( \mathcal{K}_x) \big) \otimes_\mathbb{Z} \tfrac{1}{r} \mathbb{Z}.
\end{align*}
\begin{prop}
Assume that one of the following three conditions holds for $\mathfrak{X}_{(L,r,s)}$:
\begin{itemize}
\item $g \geq 1$;
\item $g = 0$, and the support of the divisor for $s$
has cardinality at least three; or
\item $g = 0$, and the divisor for $s$ is of the form $d(x+y)$,
where $x$ and $y$ are distinct points.
\end{itemize}
Then there is an equivalence of stacks
\begin{align*}
\textnormal{Bun}_G^\tau \mathfrak{X} \xrightarrow{\sim} \textnormal{Bun}_{\mathcal{G}_\theta} X.
\end{align*}
\end{prop}
\begin{proof}
We may choose a Galois cover $p : Y \to X$ as in Corollary \ref{globalquotient} so that we have $\textnormal{Bun}_G^\tau \mathfrak{X} \cong \textnormal{Bun}_{\Gamma, G}^\tau Y$ from Proposition \ref{GammaGlocaltype}. But $\textnormal{Bun}_{\Gamma, G}^\tau Y \cong \textnormal{Bun}_{\mathcal{G}_\theta} X$, by \cite[Theorem 5.3.1]{BalajiSeshadri2012}.
\end{proof}
\subsection{Parahoric Connections}
\subsubsection{Local Connections: The Residue Condition}
Fix $\theta \in \t_\mathbb{R}$ and consider the parahoric subgroup $\mathscr{P}_\theta$ of $G(K)$. Then \cite[\S2.3]{Boalch_Parahoric} considers the subspace
\begin{align*}
\mathcal{A}_\theta := \wp_\theta \, \frac{dz}{z}
\end{align*}
of \emph{logarithmic parahoric}, or \emph{logahoric}, connections inside the space $\mathfrak{g}(K) \, dz$ of meromorphic connections on the trivial $G$-bundle over the formal disc. Observe that the space of such connections depends only on the Lie algebra $\wp_\theta$, rather than on $\theta$ itself.
We will want to restrict this definition somewhat for our purposes by imposing a condition analogous to the second condition in (\ref{parcxn}). Let $\widetilde{\omega} \, dz/z$ be a logahoric connection (for the choice of $\theta$), with $\widetilde{\omega} \in \wp_\theta$. We may consider its weight zero piece,
\begin{align*}
\widetilde{\omega}_0 \in \mathfrak{g}(K)_0^\theta.
\end{align*}
\begin{defn}
We will say that a logahoric connection \emph{satisfies the residue condition} if its weight zero piece is precisely $\theta$. We will denote by $\mathcal{A}_\theta^\text{Res}$ the space of logahoric connections satisfying the residue condition.
\end{defn}
This definition very much depends on the data of $\theta \in \t_\mathbb{R}$.
\begin{lem} \label{rescondn}
A logahoric connection $\widetilde{\omega} \, dz/z$ satisfies the residue condition if and only if
\begin{align*}
\widetilde{\omega} \in \theta + \t(z A) + \sum_{\alpha(\theta) \in \mathbb{Z}} \mathfrak{g}_\alpha( z^{1 + m_\alpha(\theta)} A) + \sum_{\alpha(\theta) \not\in \mathbb{Z}} \mathfrak{g}_\alpha( z^{m_\alpha(\theta)} A).
\end{align*}
\end{lem}
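The computation behind the lemma is short; we sketch it. In the weight decomposition above, $z^i \t$ has weight $i$ and $z^i \mathfrak{g}_\alpha$ has weight $\alpha(\theta) + i$, so the weight-zero contributions to $\widetilde{\omega}_0$ come only from the constant Cartan term and from the terms with

```latex
\begin{align*}
\alpha(\theta) + i = 0, \ i \in \mathbb{Z}
\;\Longleftrightarrow\;
\alpha(\theta) \in \mathbb{Z} \ \text{ and } \ i = -\alpha(\theta) = m_\alpha(\theta).
\end{align*}
```

Setting $\widetilde{\omega}_0 = \theta$ therefore pins the constant Cartan term to $\theta$ and removes precisely the $z^{m_\alpha(\theta)}$-terms with $\alpha(\theta) \in \mathbb{Z}$, which is the displayed membership condition.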
Let $B := A[t]/(t^r -z) = \mathbb{C} [\![ t ]\!]$ and let $L = \mathbb{C}(\!( t )\!)$ be its quotient field. We consider a trivial $G$-bundle $E = \widehat{\mathbb{D}} \times G$ over $\widehat{\mathbb{D}} := \textnormal{Spec} \, B$ with a compatible $\boldsymbol{\mu}_r$-action, the $\boldsymbol{\mu}_r$-action on $\widehat{\mathbb{D}}$ coming from that on $B$, where it is given by $\gamma \cdot t = \zeta^{-1} t$, with $\gamma \in \boldsymbol{\mu}_r$ a fixed generator. As above, this may be realized via a homomorphism $\theta : \boldsymbol{\mu}_r \to G(L)$, which we may assume factors through $T(L) \subseteq G(L)$. In fact, we may think of $\theta$ as the restriction of a cocharacter in $Y(T(L))$, which we lazily also denote by $\theta$. We have
\begin{align*}
Y(T(L)) = Y(T(K)) \otimes_\mathbb{Z} \tfrac{1}{r} \mathbb{Z} \subseteq Y(T(K)) \otimes_\mathbb{Z} \mathbb{Q}
\end{align*}
and so we will think of $\theta$ as an element of $Y(T(K)) \otimes_\mathbb{Z} \mathbb{Q}$.
\begin{prop} \label{logastackcxn}
Logahoric connections satisfying the residue condition for $\theta$ (i.e., elements of $\mathcal{A}_\theta^\text{Res}$) are in a one-to-one correspondence with $\boldsymbol{\mu}_r$-connections on the trivial $G$-bundle over $\widehat{\mathbb{D}}$, for which the $\boldsymbol{\mu}_r$-action is given by the homomorphism $\theta$.
\end{prop}
\begin{proof}
A connection $\omega$ on $E$ is given by a $\mathfrak{g}$-valued $1$-form on $\widehat{\mathbb{D}}$; it may be written
\begin{align*}
\omega = \widetilde{\omega} \, dt = \left( \sum_{i=1}^l \omega_i h_i + \sum_{\alpha \in \Phi} \omega_\alpha x_\alpha \right) dt,
\end{align*}
for some $\omega_i, \omega_\alpha \in B$, where $h_1, \cdots, h_l$ is a basis of
$\t$.
The condition for $\omega$ to be $\boldsymbol{\mu}_r$-invariant (cf.\ proof of Proposition \ref{rootstackconnection}) is that $\gamma^* \omega = \omega$, or
\begin{align} \label{omegainvariance}
\textnormal{Ad} \, \theta(\gamma) (\gamma \cdot \widetilde{\omega}) \, (\gamma \cdot dt) = \widetilde{\omega} \, dt.
\end{align}
For $1 \leq i \leq l$, this translates to
\begin{align*}
\omega_i(t) \, dt = \zeta^{-1} \omega_i( \zeta^{-1} t) \, dt,
\end{align*}
and it follows that
\begin{align*}
\omega_i (t) \, dt = \nu_i(z) \, dz
\end{align*}
for some $\nu_i \in A$. For $\alpha \in \Phi$, (\ref{omegainvariance}) implies
\begin{align*}
\omega_\alpha(t) \, dt = \alpha\big( r\theta(\gamma) \big) \zeta^{-1} \omega_\alpha( \zeta^{-1} t) \, dt = \zeta^{ r\alpha(\theta) -1} \omega_\alpha( \zeta^{-1} t) \, dt.
\end{align*}
In this case, one sees that
\begin{align*}
\omega_\alpha \, dt = \begin{cases} \nu_\alpha \, dz, & \text{ if } m_\alpha(\theta) = -\alpha(\theta), \text{ i.e.}, \alpha(\theta) \in \mathbb{Z} \\ t^{r \alpha(\theta)} z^{ m_\alpha(\theta) - 1} \nu_\alpha \, dz, & \text{ if } \alpha(\theta) + m_\alpha(\theta) > 0 , \end{cases}
\end{align*}
for some $\nu_\alpha \in A$, where one recalls that $m_\alpha(\theta) = -\lfloor \alpha(\theta) \rfloor$. Therefore,
\begin{align*}
\omega = \left( \sum_{i=1}^l z \nu_i h_i + \sum_{\alpha(\theta) \in \mathbb{Z}} z \nu_\alpha x_\alpha + \sum_{\alpha(\theta) \not\in \mathbb{Z}} t^{r \alpha(\theta)} z^{m_\alpha(\theta)} \nu_\alpha x_\alpha \right) \frac{dz}{z}.
\end{align*}
Now, using the change of frame $t^\theta$, the connection form becomes
\begin{align*}
\textnormal{Ad} \, t^{-\theta}( \omega) + t^{-\theta} d(t^{\theta}) = \left( \theta + \sum_{i=1}^l z \nu_i h_i + \sum_{\alpha(\theta) \in \mathbb{Z}} z^{1 + m_\alpha(\theta)} \nu_\alpha x_\alpha + \sum_{\alpha(\theta) \not\in \mathbb{Z}} z^{m_\alpha(\theta)} \nu_\alpha x_\alpha \right) \frac{dz}{z},
\end{align*}
and from Lemma \ref{rescondn}, we find that the induced connection lies in $\mathcal{A}_\theta^\text{Res}$. Conversely, a logahoric connection satisfying the residue condition is of the above form, and reversing the process (via the change of frame $t^\theta$), we recover a connection on the trivial $G$-bundle over $\widehat{\mathbb{D}}$ with no poles.
\end{proof}
\subsubsection{Global Connections}
Let $E \to X$ be a $\mathcal{G}_\theta$-torsor. This may be given by a $G$-bundle $E|_{X \setminus x}$ over $X \setminus x$, a parahoric torsor $E|_{\mathbb{D}_x}$ over $\mathbb{D}_x$ and an isomorphism $\eta : E|_{X \setminus x}|_{\mathbb{D}_x^\times} \xrightarrow{\sim} E|_{\mathbb{D}_x} |_{\mathbb{D}_x^\times}$ (the reader may wish to have another look at the diagram (\ref{Xx})). We define a \emph{connection on $E$} to be a pair consisting of a connection $\omega_0$ on the $G$-bundle $E|_{X \setminus x}$ and a logahoric connection $\omega_x$ on $E|_{\mathbb{D}_x}$ satisfying the residue condition such that
\begin{align*}
\omega_0|_{\mathbb{D}_x^\times} = \eta^* \omega_x|_{\mathbb{D}_x^\times}.
\end{align*}
\begin{prop}
Let $\mathscr{E}$ be a $G$-bundle on $\mathfrak{X}$ and let $E$ be the corresponding parahoric bundle on $X$. Then there is a one-to-one correspondence between connections on $\mathscr{E}$ and connections on $E$.
\end{prop}
\begin{proof}
It is clear that if we begin with a connection $\omega$ on a $G$-bundle $\mathscr{E}$ over $\mathfrak{X}$, then we can take restrictions to $(X \setminus x) \times_X \mathfrak{X} \cong X \setminus x$ and to $\mathbb{D}_x \times_X \mathfrak{X} \cong \widehat{\mathbb{D}}$; the latter restriction is a connection compatible with the $\boldsymbol{\mu}_r$-action and hence, by Proposition \ref{logastackcxn}, yields a logahoric connection on the parahoric torsor over $\mathbb{D}_x$. The isomorphism over $\mathbb{D}_x^\times$ comes from the fact that $\omega$ is defined over all of $\mathfrak{X}$.
Conversely, given $\omega_0, \omega_x$ as in the definition, the pullback of $\omega_x$ to $\mathscr{E}|_{\mathbb{D}_x \times_X \mathfrak{X} }$ gives a well-defined connection (again by Proposition \ref{logastackcxn}). The isomorphism $\eta$ gives patching data over $\mathscr{E}|_{ \mathbb{D}_x^\times \times_X \mathfrak{X} }$, and so we get a connection over $\mathscr{E}$.
\end{proof}
One easily obtains the following.
\begin{cor}
Let $E \to X$ be a $\mathcal{G}_\theta$-torsor. Then it admits a connection in the above sense if and only if the corresponding $G$-bundle on $\mathfrak{X}$ satisfies the condition of Theorem \ref{connectioncondition}.
\end{cor}
\small
\bibliographystyle{amsalpha}
Recently, graphene monolayers have attracted much attention motivated by
their unusual two-dimensional (2D) Dirac energy spectrum of electrons. In graphite, which is a graphene multilayer composed of
weakly coupled graphene sheets, the interlayer interaction converts
the 2D electron energy spectrum of graphene into the three-dimensional
(3D) spectrum of graphite. The application of the tilted magnetic
field is a typical method to distinguish between 2D and 3D electron
systems, as in 3D systems the orbital effect of the in-plane magnetic
field should be observable. This problem has been touched in two
recent theoretical studies of the graphene multilayer energy spectrum
in magnetic fields parallel to the layers \cite{Pershoguba_2010}, and
of the Landau levels (LLs) of bilayer graphene in magnetic fields of
arbitrary orientation \cite{Hyun}. Both papers conclude that very
strong in-plane field components are necessary to induce observable
effects on the electronic structure. Here we show that at the H point
of the hexagonal Brillouin zone of graphite the application of the
tilted magnetic field leads to experimentally observable splitting of
LLs.
\section{Model and results}
Bulk graphite is composed of periodically repeated graphene bilayers
formed by two nonequivalent Bernal-stacked graphene sheets. There are
two sublattices, A and B, on each sheet and, therefore, four atoms in
a unit cell. The distance between the nearest atoms A and B in a
single layer is $1.42\,${\AA}; the interlayer distance between
nearest atoms A is $d=3.35\,${\AA}.
To describe the graphite band structure, we employ the minimal
nearest-neighbor tight-binding model introduced in
Ref.~\cite{Koshino_2008} and successfully applied in
Refs.~\cite{Pershoguba_2010,Hyun,Orlita_2009}. The wave functions are
expressed via four orthogonal components $\psi^A_j$, $\psi^B_j$,
$\psi^A_{j+1}$, $\psi^B_{j+1}$, which are, in zero magnetic field,
Bloch sums of atomic wave functions over the lattice sites of
sublattices A and B in individual layers. The tight-binding
Hamiltonian $\mathcal{H}$ includes only the intralayer interaction
$\gamma_0=3.16\, \rm{eV}$ between nearest atoms A and B in the
plane, and the interlayer interaction $t=0.38\, \rm{eV}$ between
nearest atoms A out of plane.
The continuum approximation is used in the vicinity of the $H - K - H$
axis of the graphite Brillouin zone, for small $\vec{k}$ measured from
the axis. Then the electron wavelength is larger than the distance
between atoms, and the non-zero matrix elements of $\mathcal{H}$ can
be written as
\begin{equation}
\mathcal{H}^{AB}=\hbar v_F(k_x+ik_y),\,\,\,
\mathcal{H}^{BA}=\hbar v_F(k_x-ik_y),\,\,\,
\mathcal{H}^{AA}=t.
\end{equation}
The Fermi velocity, $v_F$, is defined by $\hbar v_F
= \sqrt{3}a\gamma_0/2$ and will be used as an intralayer parameter
instead of $\gamma_0$ in the subsequent consideration.
The effect of the magnetic field $\vec{B}=(0, B_y, B_z)$ is conveniently
introduced by Peierls substitution into the zero-field Hamiltonian. If
we choose the vector potential in the form
$\vec{A}=(B_yz-B_zy, 0, 0)$ we get
\begin{equation}
\hbar k_x \rightarrow \hbar k_x-|e|B_zy+|e|B_yjd.
\label{eqkx}
\end{equation}
When solving the Schr\"{o}dinger equation for the above four
wave functions, we can make use of the structure of $\mathcal{H}$, which
allows us to express the function $\psi_j^B$ via the function $\psi_j^A$
from the same layer, and leaves us with two ``interlayer'' equations
for $\psi_j^A$ and $\psi_{j+1}^A$.
Note that graphite is no longer periodic in the $z$-direction
(parallel to the $c$-axis) but becomes periodic in the direction of
the tilted magnetic field. Here we apply an approach developed
in \cite{Goncharuk_2005}. The new periodicity implies that
$\psi_{j}^A$ can be written as
\begin{equation}
\psi_{j}^A=e^{i k_z d j}\phi_1^A\left(\eta+j\eta_d\right),\,\,\,\,
\psi_{j+1}^A=e^{i k_z d(j+1)}\phi_2^A\left(\eta+(j+1)\eta_d\right),
\label{eqperiot}
\end{equation}
with $-\pi/2\leq k_z d\leq \pi/2$. Here $\phi_1^A\left(\eta+j\eta_d\right)$
and $\phi_2^A\left(\eta+j\eta_d\right)$ can be associated with cyclotron orbits in
two layers. The dimensionless variable $\eta$ is defined by $\eta =
(y-y_0)/\ell$, where $y_0$ is the cyclotron orbit center,
$y_0=\ell_z^2k_x$, and $\ell_z$ is the magnetic length, $\ell_z^2
= \hbar/(|e|B_z)$. The small parameter
\begin{equation}
\eta_d = \frac{B_y}{B_z}\frac{d}{\ell_z}
\end{equation}
represents the shift of the cyclotron orbit center in the $j$-th layer due to
the in-plane component of the magnetic field, $B_y$.
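To get a feeling for its magnitude (an illustrative estimate with round numbers, not a result used below): for a tilt angle of $20^\circ$ and $B_z = 10\,$T, one finds $\eta_d \approx 0.015$, confirming that it is indeed small.

```python
import math

# Illustrative estimate of eta_d = (B_y/B_z) d/ell_z for graphite
# (interlayer distance d = 3.35 Angstrom) at a 20-degree tilt, B_z = 10 T.
hbar = 1.054571817e-34      # J s
e = 1.602176634e-19         # C
d = 3.35e-10                # interlayer distance, m
Bz = 10.0                   # T

ell_z = math.sqrt(hbar / (e * Bz))           # magnetic length, ~8.1 nm
eta_d = math.tan(math.radians(20)) * d / ell_z

assert 7e-9 < ell_z < 9e-9
assert eta_d < 0.1          # a genuinely small expansion parameter
```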
Introducing the shift operator by
\begin{equation}
\phi\left(\eta+\eta_d\right)=e^{i\kappa\eta_d}\phi(\eta),\,\,\,\
\kappa=-i\partial/\partial\eta,
\end{equation}
and employing the $\kappa$-representation, the two ``interlayer'' equations
can be given the form
\begin{eqnarray}
\label{main-eq1}
\left[\frac{\mathcal{B}}{2}\left(-\frac{\partial^2}{\partial\kappa^2}
+\kappa^2+1\right)-E^2\right]
\phi_2^A+
2 t\, E\cos{(\kappa\eta_d+k_zd)} \phi_1^A= 0 ,\label{eq1tt}
\\
\label{main-eq2}
\left[\frac{\mathcal{B}}{2}\left(-\frac{\partial^2}{\partial\kappa^2}
+\kappa^2
-1\right)-E^2\right]
\phi_1^A+
2 t\, E\cos{(\kappa\eta_d+k_zd)} \phi_2^A = 0,\label{eq2tt}
\end{eqnarray}
where $\mathcal{B}$ stands for $2\hbar |e|v_F^2
B_z$. Eqs.~(\ref{main-eq1},\ref{main-eq2}) represent the main result
of this work.
It is evident that $\phi_1^A$ and $\phi_2^A$ cannot be too far from
the eigenfunctions of the harmonic oscillator, $\varphi_n(\kappa)$.
If we write
\begin{equation}
\phi_1^A=\frac{1}{L_x}e^{ik_xx}\sum_{n'=0}^{\infty}A_{1,n'}\varphi_{n'},\,\,\,
\phi_2^A=\frac{1}{L_x}e^{ik_xx}\sum_{n'=0}^{\infty}A_{2,n'}\varphi_{n'},
\end{equation}
we arrive at
\begin{eqnarray}
\left[\mathcal{B}(n+1)-E^2\right] A_{2,n}+
2 E \sum_{n'=0}^{\infty}T_{n,n'}A_{1,n'}=0,\label{eq1p}
\\
\left[\mathcal{B}n-E^2\right] A_{1,n}+
2 E \sum_{n'=0}^{\infty}T_{n,n'}A_{2,n'}=0,\label{eq2p}
\end{eqnarray}
where
\begin{equation}
T_{n,n'}=
t\int_{-\infty}^{\infty}\varphi_{n}(\kappa)\cos(\kappa\eta_d+k_zd)
\varphi_{n'}(\kappa)
d\kappa.
\end{equation}
In the perpendicular magnetic field, $\eta_d = 0$ and
$T_{n,n'}=t\cos(k_zd)\delta_{n,n'}$. This corresponds to the
previously discussed case \cite{Koshino_2008}. At the $K$ point,
$k_zd=0$, we get the energy spectrum of an effective bilayer. At the
$H$ point, $k_zd=\pm \pi/2$, the coupling between layers disappears, and we
obtain the LLs corresponding to graphene Dirac fermions,
namely $E_{1,n}^{\pm}=\pm\sqrt{\mathcal{B}n}$ for the first layer, and
$E_{2,n}^{\pm}=\pm\sqrt{\mathcal{B}(n+1)}$ for the second layer,
$n=0,1,2,\cdots$.
Let us discuss a magnetic field of arbitrary direction. It is
obvious that the influence of the small parameter $\eta_d$ is
negligible at the point K when $k_zd=0$ and
$\cos{(\kappa\eta_d+k_zd)}\approx 1$. This case corresponds to the
bilayer subject to the tilted field discussed in Ref.~\cite{Hyun},
with the result that corrections induced by $B_y$ are negligible.
Let us consider the influence of $B_y$ at the H point, when
$\cos(k_zd)=0$ and $\sin(k_zd)=\pm 1$. Then, unlike the case of the
perpendicular field, the interlayer interaction is not reduced to
zero, but remains finite.
Note that the small perturbation $\eta_d$ couples the unperturbed
states $|n_2^A\rangle$ and $|(n+1)_1^A\rangle$, which belong to the
same unperturbed eigenvalues $\pm\sqrt{\mathcal{B}(n+1)}$.
Consequently, the perturbation approach suited to describing the coupling of
these degenerate states must be applied, which yields the equations
\begin{eqnarray}
\label{eq1}
\left[\mathcal{B}(n+1)-E^2\right] A_{2,n}+2 E T_{n,n+1} A_{1,n+1}=0,\label{eq1pp}
\\
\label{eq2}
\left[\mathcal{B}(n+1)-E^2\right] A_{1,n+1}+2 E T_{n+1,n}A_{2,n}=0,\label{eq2pp}
\end{eqnarray}
where
\begin{equation}
T_{n,n+1}=T_{n+1,n}= t \left[\eta_d \sqrt{\frac{n+1}{2}}-\frac{1}{2}
\left(\eta_d \sqrt{\frac{n+1}{2}}\right)^3 + \ldots \right].
\end{equation}
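The coefficients in this expansion are the harmonic-oscillator matrix elements $\langle n|\kappa|n{+}1\rangle$ and $\langle n|\kappa^3|n{+}1\rangle$; the following short numerical check (our illustration using truncated ladder-operator matrices, not part of the derivation) confirms them:

```python
import numpy as np

# Truncated annihilation operator: a|m> = sqrt(m)|m-1>, so a[m-1, m] = sqrt(m).
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
kappa = (a + a.T) / np.sqrt(2)               # kappa = (a + a^dagger)/sqrt(2)

n = 3                                         # any moderate level works
k1 = kappa[n, n + 1]                          # <n|kappa|n+1>
k3 = np.linalg.matrix_power(kappa, 3)[n, n + 1]   # <n|kappa^3|n+1>

# <n|kappa|n+1> = sqrt((n+1)/2) gives the linear term of T_{n,n+1};
# <n|kappa^3|n+1> = 3((n+1)/2)^(3/2), so the eta_d^3/6 term of the cosine
# expansion becomes -(1/2)(eta_d sqrt((n+1)/2))^3, as in the formula above.
assert np.isclose(k1, np.sqrt((n + 1) / 2))
assert np.isclose(k3, 3 * ((n + 1) / 2) ** 1.5)
```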
The secular equation derived from Eqs.(\ref{eq1},\ref{eq2}) reads
\begin{equation}
\left[\mathcal{B}(n+1)-E^2\right] ^2 - 4 E^2 T^2_{n,n+1}=0,
\end{equation}
and from here we get the four eigenenergies
\begin{equation}
E_{n+1}^{\pm,\pm}=\pm T_{n,n+1}\pm\sqrt{\mathcal{B}(n+1)+T^2_{n,n+1}},
\,\,\, n = 0,1,2,\cdots.
\end{equation}
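As a quick numerical consistency check (with arbitrary sample values of $\mathcal{B}$, $T_{n,n+1}$ and $n$, chosen only for illustration), these four energies are indeed the roots of the secular equation:

```python
import math

def secular(E, B, T, n):
    # [B(n+1) - E^2]^2 - 4 E^2 T^2, cf. the secular equation above
    return (B * (n + 1) - E ** 2) ** 2 - 4 * E ** 2 * T ** 2

B, T, n = 2.0, 0.3, 1          # hypothetical sample values
roots = [s1 * T + s2 * math.sqrt(B * (n + 1) + T ** 2)
         for s1 in (1, -1) for s2 in (1, -1)]
for E in roots:
    assert abs(secular(E, B, T, n)) < 1e-9

# Each unperturbed level +/- sqrt(B(n+1)) is thus split by 2 T_{n,n+1}.
```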
The eigenenergies $E_{0}^{\mp}$ and $E_{0}^{\pm}$, which originate from
$E_{1,0}^{\pm}$, remain the same as in the perpendicular magnetic
field; in that case the degeneracy is not removed.
\begin{figure}
\includegraphics[width=\textwidth]{Fig.eps}
\caption{LLs of graphite at the H point as a function of the
perpendicular component of the magnetic field, $B_z$.
\emph{a)} LLs, $E_{1,n}^{\pm}$ and $E_{2,n}^{\pm}$, in the perpendicular magnetic field ($\varphi = 0^\circ$). \emph{b)} LLs, $E_{n}^{\pm,\pm}$, split by tilting the magnetic field ($\varphi = 20^\circ$).}
\label{fig}
\end{figure}
The lifting of the LL degeneracy by the tilted magnetic
field is shown in Fig.~\ref{fig}. The LL splitting is of the
order of several meV, and it grows with the tilt angle, i.e., with $B_y$.
To conclude, we have obtained the LL structure of graphite at the H
point of the Brillouin zone in the tilted magnetic field
configuration. This effect is experimentally observable.
\begin{theacknowledgments}
The authors benefited from discussions with Milan Orlita. The support
of the European Science Foundation EPIGRAT project (GRA/10/E006), AV
CR research program AVOZ10100521 and the Ministry of Education of the
Czech Republic project LC510 is acknowledged.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
\label{section:Intro}
The rapid development of cloud-based systems has enabled reliable and affordable access to shared computing resources at scale.
However, this shared access raises substantial privacy and security challenges.
Therefore, new techniques are required to guarantee the confidentiality of sensitive user data when it is sent to the cloud for processing.
Fully Homomorphic Encryption (FHE)~\cite{RAD,Gentry09} enables cloud operators to perform complex computations on encrypted user data without ever needing to decrypt it.
The result of such FHE-based computation is in an encrypted form and can only be decrypted by the data owner.
An illustrative use case of how a data owner can outsource computation on private data to an untrusted third-party cloud platform is shown in Figure~\ref{fig:cloud}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.00\columnwidth]{Figures/cloud-outsourcing-new.pdf}
\end{center}
\vspace{-0.15in}
\caption{Third-party cloud platform with outsourced FHE-based computing.}
\vspace{-0.10in}
\label{fig:cloud}
\end{figure}
While FHE-based privacy-preserving computing is promising, performing large encrypted computations with FHE still remains several orders of magnitude slower than operating on unencrypted data, which makes broad adoption impractical.
This slowdown is an inherent feature of all existing lattice-based FHE schemes.
All of these schemes produce ciphertexts containing a noise term, which is necessary for security.
Each subsequent homomorphic operation performed on the ciphertext increases its noise, until it grows beyond a critical level after which recovery of the computation output is impossible.
Sustained FHE computation thus requires a periodic de-noising procedure, called {\em bootstrapping}, to keep the noise below a correctness threshold. Unfortunately, this bootstrapping step is expensive in terms of both compute and memory requirements and is often $>100\times$ more expensive than primitive operations like addition and multiplication on encrypted data.
Real-world applications commonly attempt to amortize this bootstrapping cost across multiple homomorphic operations.
Even when considering these application-specific optimizations, bootstrapping consumes more than $50\%$ of the total compute and memory budget for end-to-end operations like machine learning training~\cite{GPUBoot21}.
To make FHE-based computing practical, we need to consider a multi-layer approach to accelerate both the bootstrapping step as well as its primitive building blocks using a combination of algorithmic and hardware techniques.
In this work, we first perform a thorough compute and memory analysis of both simple and complex FHE primitives including the bootstrapping step, with an intent to determine the limits and potential opportunities for accelerating FHE.
Our analysis reveals that all FHE operations exhibit low arithmetic intensity ($<1$ Op/byte) and require working-set sizes of hundreds of MB for practical and secure parameters.
In fact, we observe that most existing performance optimization techniques for FHE often {\em increase} memory bandwidth requirements.
These include both linear and non-linear operation optimizations proposed by Han and Ki~\cite{HK19}, Han, Hhan and Cheon~\cite{CHH18}, and Bossuat, Mouchet, Troncoso-Pastoriza and Hubaux~\cite{BMTH20}.
The recent bootstrapping implementation on GPUs by Jung, Kim, Ahn, Cheon and Lee~\cite{GPUBoot21} is the first work to perform memory-centric optimizations for linear operations in bootstrapping.
Even after these optimizations, their implementation continues to be bounded by main memory bandwidth and exhibits an arithmetic intensity of $<1$ Op/byte.
On the other side of the design spectrum, recent work by Samardzic et al.~\cite{F1Paper21} presents an architecture for a high-throughput hardware accelerator for FHE.
This work primarily focuses on smaller parameter sets where full ciphertexts fit in on-chip cache memory allowing them to bypass the memory bandwidth limitation.
However, many natural applications such as SIMD bootstrapping, deep neural network inference (with complex activation functions) and machine learning require larger parameter sets that are not addressed in \cite{F1Paper21}.
In this work, we make three key contributions: application benchmarking, new techniques to improve memory performance, and an evaluation of these techniques on end-to-end applications. More specifically:
\begin{itemize}
\item We present detailed benchmarking of the compute and memory requirements of various FHE computations ranging from primitive operations to end-to-end applications such as machine-learning training. We show that all these benchmarks exhibit low arithmetic intensity and require large working-sets in on-chip memory. We observe that these working-sets do not fit in the last level caches of today's reticle-limited chips leading to bootstrapping and other applications being bottlenecked by memory accesses.
\item We next present techniques to improve main memory bandwidth utilization by effectively managing the moderate last-level cache provided by currently available commercial hardware. For cache-pressured hardware ($<20$ MB LLC) we propose a domain-specific physical address mapping to enhance DRAM utilization. We then present hardware-independent algorithmic optimizations that reduce memory and compute requirements of FHE operations.
\item We finally propose an optimized, memory-aware cryptosystem parameter set that maximizes the throughput in FHE bootstrapping and logistic regression training by enabling up to $3.2\times$ higher arithmetic intensity and $4.6\times$ lower memory bandwidth.
\end{itemize}
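As a back-of-the-envelope illustration of the low-arithmetic-intensity claim, the following sketch computes the Op/byte ratio of a single homomorphic ciphertext addition. The streaming model (no cache reuse, one modular addition counted as one Op) is our simplifying assumption; the parameter values ($N = 2^{17}$, $\ell = 35$, 8-byte words) match the example ciphertext discussed later in the paper.

```python
# Back-of-the-envelope Op/byte for one homomorphic ciphertext addition
# (hypothetical streaming model: no cache reuse; N = 2^17, ell = 35
# limbs, 8-byte machine words).
N, ell, word = 2**17, 35, 8

ct_bytes = 2 * N * ell * word     # a ciphertext is two N-coefficient, ell-limb polynomials
ops = 2 * N * ell                 # one modular addition per coefficient per limb
bytes_moved = 3 * ct_bytes        # read both inputs, write the result

intensity = ops / bytes_moved     # = 1/24, roughly 0.042 Op/byte, far below 1
```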
The techniques that we propose often compose with prior art and can be used as drop-ins to provide performance improvements in existing implementations without the need for new hardware.
Our proposed bootstrapping parameter set represents an upper limit on the performance of FHE operations that can be attained through pure compute acceleration when paired with existing state-of-the-art memory subsystems.
Even with this optimal parameter set, we observe that the bootstrapping step is still primarily memory bound. Thus:
\begin{quote}
\textit{Our key conceptual take-away is that to accelerate FHE, we need novel techniques to address the underlying memory bandwidth issues. Compute acceleration alone is unlikely to make a dent.}
\end{quote}
Towards the goal of addressing memory bandwidth issues, we propose novel near-term algorithmic and architectural research directions.
\section{Conclusion}
\label{section:Conclusion}
In this paper, we undertook a thorough architecture-level analysis of the compute and memory requirements for fully homomorphic encryption to identify the limits and opportunities for hardware acceleration.
Our analysis shows that the bootstrapping step is the critical performance bottleneck in FHE-based computing, and it has low arithmetic intensity and is heavily constrained by today's main memory systems.
We argue that to accelerate FHE-based computing, the research community should focus on improving the arithmetic intensity of FHE-based computing and leverage novel memory system architectures.
We proposed several architecture-independent and cache-friendly optimizations that improve arithmetic intensity by about $2.43\times$.
We also proposed a custom physical address mapping for \emph{limb-wise} and \emph{slot-wise} operations to enhance main memory bandwidth utilization.
To further mitigate the impact of memory bandwidth on FHE-based computing, we suggest directions for the research community to explore novel techniques to either increase main memory bandwidth or improve its utilization, use in-memory/near-memory computing, and/or use wafer-scale systems with large on-chip memory.
\section{Discussion}
\label{section:Discussion}
Despite our algorithmic and cache optimizations to CKKS FHE bootstrapping (see Sections~\ref{sec:cachingopts} and \ref{sec:algoopts}), our analysis reveals that FHE bootstrapping continues to have low arithmetic intensity and is heavily bounded by main memory bandwidth.
This issue is not specific to CKKS bootstrapping alone.
For one, bootstrapping algorithms for other FHE schemes such as BGV~\cite{BGV12} and B/FV~\cite{Brak12,FV12} have the same high-level structure and suffer from the same problem, although with different quantitative thresholds.
Additionally, as discussed in Section~\ref{subsec:applConcCosts}, many natural applications (e.g. logistic regression and secure neural network evaluation) have the same high-level structure as bootstrapping, namely, global linear operations followed by local non-linear operations, and consequently, they suffer from the main memory bottleneck as well.
Below, we discuss potential research avenues to solve this issue that is so central to the practicality of FHE.
\noindent \textbf{Future Improvements to Bootstrapping:} At a high level, our optimizations can be viewed as reducing the ``thrashing'' caused by various low-level operations in the bootstrapping algorithm (as well as in other natural applications of FHE such as encrypted training of machine learning models).
While future improvements may reduce thrashing in the baseline algorithms, the size of the ciphertexts and the size of the switching keys suggest that the overall arithmetic intensity is unlikely to drastically improve without a dramatic overhaul of FHE schemes.
In one extreme, we could be in the best-case-scenario for FHE bootstrapping.
In this world that we call ``FHE-mania'', all of bootstrapping can be done in cache without any DRAM reads or writes beyond the initial input and the final output.
This world would call for true hardware acceleration of bootstrapping and would make our DRAM optimizations useless.
On the other hand, we could be living in a world where the best possible bootstrapping algorithms remain bounded by the memory bandwidth.
In this world that we call ``thrashy-land'', our optimizations remain crucial to achieving the highest throughput for bootstrapping.
While it may be possible to optimize our way out of thrashy-land, as long as the RNS representation remains the dominant format of FHE data, our $\alpha$-limb and $\beta$-limb caching optimizations will remain relevant.
A realistic possibility is a world that is somewhere in between FHE-mania and thrashy-land.
For example, bootstrapping in GSW-like FHE schemes~\cite{GSW13,DM15} incurs slower noise growth and consequently allows smaller parameters $N$ and $Q$; however, these schemes do not support packed bootstrapping as in the BGV, B/FV and CKKS FHE schemes, a feature that is fundamentally important for efficiency.
Can we achieve the best of both worlds?
We believe there is exciting research to be done here (see \cite{MS18} for a preliminary attempt); our analysis provides a compelling reason to pursue this line of research.
\noindent \textbf{Increase Main Memory Bandwidth:}
There are two approaches to increasing the main memory bandwidth.
First, we can use multiple DDRx channels, effectively using parallelism to increase the main memory bandwidth.
We could also use alternate main memory technologies like HBM2/HBM2e~\cite{Jun2017Hbm} that provide several times higher bandwidth than DDRx technology.
The second approach involves improving the physical interconnect between the compute cores and the memory by using silicon-photonic link technologies~\cite{Sun2015Nature}.
Judicious use of silicon-photonic technology can help improve the main memory bandwidth, and has the additional benefit of reducing the energy consumption for memory accesses.
\noindent \textbf{Improve Main Memory Bandwidth Utilization:}
Here, there are two complementary approaches.
The first is to attempt a cleverer mapping of the data to physical memory to take advantage of spatial locality in cache lines such that we reduce the number of memory accesses required per compute operation.
To complement this, we can improve FHE-based computing algorithms such that we perform more operations per byte of data that is fetched from main memory, i.e., improve temporal locality.
The second approach is algorithmic: namely, improve FHE bootstrapping algorithms (as discussed above) so that we reduce the size of the key-switching parameter, the main culprit for low arithmetic intensity, or eliminate it altogether.
These two complementary approaches may result in an increase in the arithmetic intensity, effectively reducing the time required for bootstrapping and FHE as a whole.
\noindent \textbf{Use In-Memory/Near-Memory Computing:}
Two potential architecture-level approaches include performing the operations in FHE APIs within main memory itself, i.e., in-memory computing, and placing a custom die very close to main memory to perform these operations, i.e., near-memory computing.
In the in-memory computing approach, we can eliminate a large number of expensive main memory accesses by performing matrix-vector multiplication operations in the main memory itself~\cite{Chi2016Isca}.
In contrast, in the case of near-memory computing, we perform all the FHE compute operations in a custom accelerator that is placed close to the main memory.
Here, we cannot eliminate the memory accesses, but each access is cheaper than an access to traditional main memory.
\noindent \textbf{Use Wafer-Scale Systems:}
A radical technology-level solution is to design large-scale distributed accelerators such as Cerebras-style wafer-scale accelerators~\cite{Cerebras} that have $40$~GB of high-performance on-wafer memory.
Tesla's Dojo accelerator~\cite{Tesla} also fits in this category wherein a large wafer is diced into $354$~chip nodes, which provides high bandwidth and compute performance.
Effectively, we can place large SRAM arrays, i.e., large caches, on the same wafer as the compute blocks, thus keeping all communication on-wafer and avoiding expensive main memory accesses after the initial loads.
\section{Related Work}
\label{section:RelatedWork}
\noindent
\textbf{Algorithmic optimizations for CPUs:}
The key bottleneck in the FHE bootstrapping process is the large {\em homomorphic matrix-vector multiplication} required to convert ciphertexts from coefficient to evaluation representation and back.
This requires many key-switching operations, which in turn require accessing a large number of switching keys from DRAM, adding both to the computational cost and to the data access latency.
Initial software implementations of bootstrapping (for example, the HEAAN library~\cite{CKKS17}) reduce the number of rotations required in this linear transformation step using the baby-step giant-step (BSGS) algorithm of Halevi and Shoup~\cite{HS18}.
Using this algorithm, one can reduce the number of rotations to $O(\sqrt{N})$ while still requiring only $O(N)$ scalar multiplications.
The HEAAN library also optimizes the operational cost of approximating the modular reduction step by evaluating the sine function using a Taylor approximation. With these techniques, the HEAAN library takes about eight minutes to bootstrap $128$ slots within a ciphertext of degree $2^{16}$ on a CPU.
Chen, Chillotti and Song~\cite{CCS18} proposed a level collapsing technique along with BSGS for the linear transformation step to reduce the number of rotations.
They also replaced the Taylor approximation with a more accurate Chebyshev approximation to evaluate a scaled-sine function instead.
For the same parameter set as the HEAAN library, they observe a $3\times$ speedup.
More recently, Han and Ki~\cite{HK19} proposed a hybrid key-switching approach to efficiently manage the amount of noise added through the key-switching operation.
They evaluated a scaled, shifted cosine function instead of the scaled-sine function in modular reduction to reduce the number of non-scalar multiplications by half.
Their optimizations led to an additional $3\times$ speedup.
Bossuat et al.~\cite{BMTH20} further lower the operational complexity of the linear transformations by optimizing rotations through double-hoisting the hybrid key-switching approach.
Double-hoisting the key-switch operation significantly reduces the number of basis conversion operations, which are expensive in terms of main memory accesses.
They also carefully manage the scale factors for non-linear transformations for error-less polynomial evaluation.
Their implementation in Lattigo library~\cite{lattigo} shows a further speedup of $1.5\times$ on a CPU.
\noindent
\textbf{Algorithmic optimizations for GPUs:}
All the above-mentioned optimizations focused heavily on lowering the operational complexity of bootstrapping, which led to a minor reduction in main memory accesses as well.
Recently, Jung et al.~\cite{GPUBoot21} presented the first ever GPU implementation of CKKS bootstrapping.
Their analysis, even though limited to GPUs, rightly points out the main-memory-bounded nature of the bootstrapping operation.
Thus, their optimizations, such as inter- and intra-kernel fusion, are all focused on improving the memory bandwidth utilization rather than accelerating the compute itself.
Their bootstrapping implementation is so far the fastest, requiring only $328.25$~ms (total time) for bootstrapping all the slots of a ciphertext of degree $N = 2^{16}$.
As discussed in Sections~\ref{sec:cachingopts} and \ref{sec:algoopts}, our techniques are composable with all these prior works and consequently result in $3.2\times$ higher arithmetic intensity and a $4.6\times$ reduction in main memory accesses.
\textbf{Hardware Accelerators for HE:}
Samardzic et al.~\cite{F1Paper21} recently presented the architecture of a programmable hardware accelerator for FHE operations.
Their analysis also shows that FHE operations are memory bottlenecked.
However, they implement a massively parallel compute block (performing $4096$ modular multiplications in parallel) in their accelerator.
From their performance analysis, it is evident that the compute block is underutilized due to the memory bottleneck.
\section{Evaluation}
\label{section:eval}
In this section, we compare our bootstrapping algorithm to prior art to demonstrate the improved throughput achieved by our optimizations.
In addition, we show how our improved CKKS subroutines directly result in more efficient HE applications.
\subsection{Maximizing Bootstrapping Throughput}
\paragraph*{Bootstrapping Throughput}
Our metric to evaluate bootstrapping performance is based on the \emph{bootstrapping throughput} metric of Han and Ki~\cite{HK19}.
This metric attempts to capture the effectiveness of a bootstrapping routine: it increases with the number of plaintext slots $n$ the algorithm bootstraps, with the number of limbs $\ell$ in the resulting ciphertext (which translates to the number of compute levels supported by the ciphertext), and with the bit-precision $\mathsf{bp}$ of the plaintext data.
These factors are then divided by the runtime of the bootstrapping procedure, denoted as $\mathsf{brt}$. This gives us the throughput metric in \Cref{eq:bootThroughput}.
\begin{equation}\label{eq:bootThroughput}
\mathsf{throughput} = \frac{n \cdot \ell \cdot \mathsf{bp}}{\mathsf{brt}}
\end{equation}
\paragraph*{Optimal Bootstrapping Parameters}
Given the throughput metric from \Cref{eq:bootThroughput}, we can select parameters to optimize it.
We employ our architectural modeling tool to explore the parameter space of bootstrapping to maximize the throughput.
As DRAM transfer times dominate in bootstrapping, our architectural model accounts for DRAM transfer time in the total runtime analysis, resulting in parameters that minimize DRAM transfers.
The throughput-maximizing parameters for our fully-optimized bootstrapping algorithm (with all optimizations from \Cref{sec:cachingopts} and \Cref{sec:algoopts}) are given in \Cref{tab:bootParams}.
\begin{table}[!ht]
\centering
\caption{Bootstrapping Parameters\\\emph{The $L$ parameter denotes the number of limbs in the ciphertext after the initial $\mathsf{ModUp}$ procedure in $\mathsf{Bootstrap}$. The $\mathsf{fftIter}$ parameter is the number of $\mathsf{PtMatVecMult}$ iterations in the $\mathsf{CoeffToSlot}$ and $\mathsf{SlotToCoeff}$ phases in $\mathsf{Bootstrap}$. The radix values for these iterations are all balanced, with any values that need to be larger placed at the end. The $\lambda$ value is the bit-security level.}}
\label{tab:bootParams}
\begin{threeparttable}
\begin{tabular}{cccccccccc}
\toprule
&& $L$ && $\mathsf{dnum}$ && $\mathsf{fftIter}$ && $\lambda$\\
\midrule
\textbf{Baseline} && $35$ && $3$ && $3$ && $<128$\tnote{$\dag$} \\
\textbf{Best-case} && $40$ && $2$ && $6$ && $128$ \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[$\dag$] The baseline set is based on~\cite{GPUBoot21}, originally targeting $\lambda = 128$. Updated cryptanalysis in~\cite{BMTH20} reduces the security level for sparse keys. The parameters in this work include these updated recommendations.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Bootstrapping Performance Comparisons}
We now compare the throughput of our most optimized bootstrapping algorithm to prior art.
To compare our algorithm to prior works, we re-implemented each algorithm in our architecture model.
We then took the parameters given in each of these works and ran the algorithm in our model with these parameters.
This allowed us to measure the total operations as well as the DRAM transfer times for each of these algorithms.
From this analysis as well as our discussion in \Cref{sec:algoOptTakeaway}, we know that all of these bootstrapping algorithms are bottlenecked by the memory bandwidth.
Therefore, we used the memory bandwidth requirement of each of these algorithms as a proxy for the overall runtimes.
The memory requirement was converted to DRAM transfer time based on the memory bandwidth of the NVIDIA Tesla V100~\cite{NVIDIAV100}, which is $900$~GB/s.
The results of this analysis are presented in \Cref{tab:bootstrapping_comp}.
We now discuss each comparison in more detail.
The parameter set selected from Jung et al.~\cite{GPUBoot21} is the same parameter set used as the baseline comparison in \Cref{sec:cachingopts}, which is the parameter set they give for their logistic regression implementation.
We selected the parameter set from Bossuat et al.~\cite{BMTH20} that maximized their throughput.
Note that this parameter set maximized the throughput when the runtime was measured on a CPU.
For our architecture model, we are considering the case where computation has been accelerated to the point where runtime is completely dominated by memory transfers.
The throughput computation for Samardzic et al.~\cite{F1Paper21} was done slightly differently since this work reports the DRAM traffic of their algorithm directly.
However, this work only gives benchmarks for unpacked CKKS bootstrapping (i.e., there is no slot packing and the ciphertext only holds one element).
Rather than re-implementing their algorithm, we use the memory traffic they report for their unpacked CKKS bootstrapping, which is $721$~MB.
To compute the runtime, we also use the peak DRAM bandwidth provided by the authors for their architecture, which is $1$~TB/s.
Using these two numbers, we found their bootstrapping procedure runtime to be $0.721$ milliseconds, leading to the throughput number mentioned in \Cref{tab:bootstrapping_comp}.
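The throughput numbers in \Cref{tab:bootstrapping_comp} can be reproduced from the reported DRAM traffic. A sketch in Python, assuming decimal gigabytes and the $900$~GB/s (V100) and $1$~TB/s (F1) bandwidth figures quoted above; small rounding differences versus the table are expected:

```python
# Reproduce the Throughput column of the bootstrapping comparison:
# throughput = n * ell * bp / brt, with the DRAM transfer time (in
# microseconds) standing in for the bootstrapping runtime brt.
def throughput(n, ell, bp, dram_gb, bw_gbps):
    brt_us = dram_gb / bw_gbps * 1e6   # DRAM transfer time in microseconds
    return n * ell * bp / brt_us

# (n, ell, bp, DRAM transfers in GB, assumed memory bandwidth in GB/s);
# 900 GB/s is the V100 figure, 1000 GB/s the F1 accelerator figure.
rows = {
    "Jung et al.":      (2**16, 20, 19, 193.09,  900),
    "Bossuat et al.":   (2**15, 16, 19,  75.30,  900),
    "Samardzic et al.": (1,     13, 24,  0.721, 1000),
    "This work":        (2**16, 19, 19,  45.33,  900),
}
results = {name: throughput(*args) for name, args in rows.items()}
```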
\begin{table}[ht]
\centering
\caption{Bootstrapping comparison\\\emph{This table measures the bootstrapping throughput. The \textbf{Throughput} column is computed using \Cref{eq:bootThroughput} with the DRAM transfer time as a proxy for the runtime. The DRAM transfer time is measured in microseconds.}}
\begin{tabular}{cccccc} \toprule
\textbf{Work} & $n$ & $\ell$ & $\mathsf{bp}$ & \specialcell{\textbf{DRAM}\\\textbf{ Transfers}\\\textbf{(in GB)}} & \textbf{Throughput} \\
\midrule
Jung et al.~\cite{GPUBoot21} & $2^{16}$ & $20$ & $19$ & $193.09$ & $116.07$ \\ \midrule
Bossuat et al.~\cite{BMTH20} & $2^{15}$ & $16$ & $19$ & $75.30$ & $119.05$ \\ \midrule
Samardzic et al.~\cite{F1Paper21} & $1$ & $13$ & $24$ & $0.721$ & $0.43$ \\ \midrule
Our Best Throughput & $2^{16}$ & $19$ & $19$ & $45.33$ & $469.68$ \\
\bottomrule
\end{tabular}
\label{tab:bootstrapping_comp}
\end{table}
\subsection{Application Comparison}
A faster bootstrapping algorithm directly results in faster HE applications. Continuing with our running example of logistic regression training, we give benchmarks of the logistic regression algorithm from \Cref{section:FHEApplications} using our optimized bootstrapping routine and parameters. These benchmarks are given in \Cref{tab:building-block-improvement}.
\begin{table}[ht]
\centering
\caption{Performance of Logistic Regression Training Example\\\emph{This table displays benchmarks of the logistic regression bootstrapping application using our optimized bootstrapping parameters. In parentheses next to each benchmark, we give the improvement over \Cref{tab:building-block-cost}. }}
\label{tab:building-block-improvement}
\begin{tabular}{cccc}
\toprule
\specialcell{\textbf{Sub-routine}\\\textbf{Name}} & \specialcell{\textbf{Total}\\\textbf{Operations}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total}\\\textbf{DRAM }\\\textbf{Transfers}\\\textbf{(in GB)}} & \specialcell{\textbf{Arithmetic}\\\textbf{Intensity}\\\textbf{(in Op/byte)}}\\
\midrule
$\mathsf{InnerProduct}$ & $6.8256(1.2\times)$ & $3.3261(4.9\times)$ & $2.05(4.4\times)$ \\
\midrule
$\mathsf{PolyEval}$ & $2.2569(1.3\times)$ & $0.9745(3.6\times)$ & $1.97 (2.4\times)$ \\
\midrule
Full LR Iteration & $77.3846(1.2\times)$ & $41.0811(4.7\times)$ & $1.88(4\times)$ \\
\midrule
$\mathsf{Bootstrap}$ & $79.2401(1.9\times)$ & $45.3341(4.6\times)$ & $1.75(2.43\times)$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Key Takeaways}
In this section, we demonstrated that our optimizations, which mostly focus on improving the arithmetic intensity of bootstrapping and other CKKS building blocks, result in much higher bootstrapping throughput than prior art, which mostly focused on optimizing compute throughput.
This shows that focusing on compute throughput alone overlooks a crucial bottleneck in CKKS applications: the memory bandwidth.
To improve the overall performance of many important CKKS applications such as bootstrapping and encrypted logistic regression training, the memory bandwidth must be directly optimized.
\section{Fully Homomorphic Encryption: The API}
\label{section:FHEAPI}
To set the stage, in this section we present the operations implemented by the Cheon-Kim-Kim-Song (CKKS)~\cite{CKKS17} FHE scheme.
We organize these operations in the form of an API that can be used by any application developer to design privacy-preserving applications.
Specifying the CKKS scheme requires several parameters, and we summarize our notation for these parameters in \Cref{tab:parameters}.
Though we focus on the CKKS scheme, the API is generic and can be used for the BGV~\cite{BGV12} and B/FV~\cite{Brak12,FV12} schemes as well\footnote{An exception is the $\mathsf{Conjugate}$ function, which the BGV and B/FV schemes do not support, since they do not encrypt complex numbers.}.
\begin{table}[t]
\centering
\caption{CKKS FHE Parameters and their description.}
\label{tab:parameters}
\begin{tabular}{p{0.15\columnwidth} p{0.7\columnwidth}}
\toprule
\textbf{Parameter} & \textbf{Description}\\
\midrule
$N$ & Number of coefficients in a polynomial in the ciphertext ring.\\
$n$ & $N/2$, number of plaintext elements in a single ciphertext.\\
$Q$ & Full modulus of a ciphertext coefficient.\\
$q$ & Machine word sized prime modulus and a limb of $Q$.\\
$\Delta$ & Scaling factor of a CKKS plaintext. \\
$P$ & Product of the additional limbs added for the raised modulus.\\
$L$ & Maximum number of limbs in a ciphertext.\\
$\ell$ & Current number of limbs in a ciphertext.\\
$\mathsf{dnum}$ & Number of digits in the switching key.\\
$\alpha$ & $\lceil (L + 1)/\mathsf{dnum} \rceil$. Number of limbs that comprise a single digit in the key-switching decomposition. This value is fixed throughout the computation. \\
$\beta$ & $\lceil (\ell + 1)/\alpha \rceil$. An $\ell$-limb polynomial is split into this number of digits during base decomposition.\\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}
\centering
\begin{threeparttable}[t]
\caption{CKKS Fully Homomorphic Encryption API.}
\label{tab:low-level-api-overview}
\begin{tabular}{ l l l p{\columnwidth}}
\toprule
\textbf{Operation Name} & \textbf{Output} & \textbf{Implementation} & \textbf{Description} \\
\midrule
$\mathsf{PtAdd}(\dbrack{\mathbf{x}}, \mathbf{y})$ & $\dbrack{\mathbf{x} + \mathbf{y}}$ & $\dbrack{\mathbf{x}} + \mathbf{y}$ & Adds a plaintext vector to an encrypted vector. \\
$\mathsf{Add}(\dbrack{\mathbf{x}}, \dbrack{\mathbf{y}})$ & $\dbrack{\mathbf{x} + \mathbf{y}}$ & $\dbrack{\mathbf{x}} + \dbrack{\mathbf{y}}$ & Adds two encrypted vectors.\\
$\mathsf{PtMult}(\dbrack{\mathbf{x}}, \mathbf{y})$ & $\dbrack{\mathbf{x} \cdot \mathbf{y}}$ & \Cref{algo:PtMult} & Multiplies a plaintext vector and an encrypted vector. \\
$\mathsf{Mult}(\dbrack{\mathbf{x}}, \dbrack{\mathbf{y}})$ & $\dbrack{\mathbf{x} \cdot \mathbf{y}}$ & \Cref{algo:Mult} & Multiplies two encrypted vectors. \\
$\mathsf{Rotate}(\dbrack{\mathbf{x}}, k)$ & $\dbrack{\phi_k(\mathbf{x})}$ & \Cref{algo:Rotate} & Rotates a vector by $k$ positions; see \Cref{subsection:LowLevelAPIDef} for an illustration. \\
$\mathsf{Conjugate}(\dbrack{\mathbf{x}})$ & $\dbrack{\overline{\mathbf{x}}}$ & \Cref{algo:Rotate}\tnote{$\dag$} & Outputs an encryption of the complex conjugate of the encrypted input vector. \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[$\dag$] Through a clever encoding~\cite{CKKS17}, the $\mathsf{Conjugate}$ operation implementation is identical to the $\mathsf{Rotate}$ implementation.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Homomorphic Encryption API}
\label{subsection:LowLevelAPIDef}
The basic plaintext data-type in CKKS is a vector of length $n$ where each entry is chosen from $\mathbb{C}$, the field of complex numbers.
All arithmetic operations on plaintexts are component-wise; the entries of the vector $\mathbf{x} + \mathbf{y}$ (resp. $\mathbf{x} \cdot \mathbf{y}$) are the component-wise sums (resp. products) of the entries of $\mathbf{x}$ with the corresponding entries of $\mathbf{y}$.
We denote the encryption of a length-$n$ vector $\mathbf{x}$ by $\dbrack{\mathbf{x}}$.
\Cref{tab:low-level-api-overview} gives a complete description of the API with the exception of the rotation operation, which we describe here. The $\mathsf{Rotate}$ operation takes in an encryption of a vector $\mathbf{x}$ of length $n$ and an integer $0 \leq k < n$, and outputs an encryption of a rotation of the vector $\mathbf{x}$ by $k$ positions.
As an example, when $k = 1$, the rotation $\phi_1(\mathbf{x})$ is defined as follows.
\begin{align*}
\mathbf{x} &= \begin{pmatrix}
x_0 & x_1 & \ldots & x_{n-2} & x_{n-1}
\end{pmatrix}\\
\phi_1(\mathbf{x}) &=\begin{pmatrix}
x_{n-1} & x_{0} & \ldots & x_{n-3} & x_{n-2}
\end{pmatrix}
\end{align*}
The $\mathsf{Rotate}$ operation is necessary for computations that operate on data residing in different slots of the encrypted vectors.
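The slot-level semantics of $\phi_k$ can be sketched with a toy Python model (plaintext vectors only; the homomorphic version applies a Galois automorphism to the ciphertext without decrypting):

```python
# Toy model of the slot-level semantics of Rotate: phi_k cyclically
# shifts the plaintext vector by k positions, so phi_1 maps
# (x_0, ..., x_{n-1}) to (x_{n-1}, x_0, ..., x_{n-2}).
def rotate(x, k):
    k %= len(x)
    return x[-k:] + x[:-k]
```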
\subsection{Modular Arithmetic and the Residue Number System}
\label{ssec:mod_arith}
\paragraph*{Scalar Modular Arithmetic}
Nearly all FHE operations reduce to scalar modular additions and scalar modular multiplications.
Current CPU/GPU architectures do not implement modular arithmetic directly but emulate it via multiple arithmetic instructions, which significantly increases the amount of compute required for these operations. Therefore, optimizing modular arithmetic is critical to optimizing FHE computation.
To perform modular addition over operands that are already reduced, we use the standard approach of conditional subtraction if the addition overflows the modulus. For generic modular multiplications, we use the Barrett reduction technique~\cite{Barrett}.
When computing the sum of many scalars, we avoid performing a modular reduction until the end of the summation, as long as the unreduced sum fits in a machine word.
As an optimization, we use Shoup's technique~\cite{shoup2001ntl}
for constant multiplication.
That is, when computing $x\cdot y \bmod p$ where $x$ and $p$ are known in advance, we can precompute a value $x_s$ such that evaluating $\mathsf{ModMulShoup}(x, y, x_s, p) = x\cdot y \bmod p$ is much faster than computing $x\cdot y \bmod p$ directly.
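A toy Python model of these scalar primitives is sketched below; the 61-bit Mersenne prime and the function names are our illustrative choices, and Python integers stand in for 64-bit machine words:

```python
# Toy model of the scalar modular primitives (Python integers stand in
# for 64-bit machine words; P is an illustrative word-sized prime).
W = 1 << 64                # machine word size
P = (1 << 61) - 1          # 61-bit Mersenne prime; P < W/4 as Shoup's trick requires

def mod_add(a, b, p=P):
    # Operands already reduced: one conditional subtraction suffices.
    s = a + b
    return s - p if s >= p else s

def shoup_precompute(x, p=P):
    # Precomputed quotient x_s = floor(x * 2^64 / p) for constant x.
    return (x << 64) // p

def mod_mul_shoup(x, y, x_s, p=P):
    # x * y mod p without a division: one high multiply for the quotient
    # estimate, one low multiply, and a single conditional subtraction.
    q = (x_s * y) >> 64
    r = (x * y - q * p) % W    # low 64 bits; already lies in [0, 2p)
    return r - p if r >= p else r
```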
\paragraph*{Residue Number System (RNS)}
Often the scalars in homomorphic encryption schemes are very large, on the order of thousands of bits.
To compute on such large numbers, we use the residue number system (also called the Chinese remainder representation): a number modulo $Q = \prod_{i=1}^\ell q_i$, where each $q_i$ is a prime that fits in a standard machine word (less than $64$ bits), is represented as $\ell$ numbers, one modulo each $q_i$.
We call the set $\mathcal{B} := \{q_1, \ldots, q_\ell\}$ an \emph{RNS basis}.
We refer to each $q_i$ as a \emph{limb} of $Q$.
This allows us to operate over values in $\mathbb{Z}_Q$ without any native support for multi-precision arithmetic.
Instead, we can represent $x \in \mathbb{Z}_Q$ as a length-$\ell$ vector of scalars $[x]_\mathcal{B} = (x_1, x_2, \ldots, x_\ell)$, where $x_i \equiv x \pmod{q_i}$.
We refer to each $x_i$ as a \emph{limb} of $x$.
To add two values $x, y \in \mathbb{Z}_Q$, we have $x_i + y_i \equiv x + y \pmod{q_i}$. Similarly, we have $x_i \cdot y_i \equiv x \cdot y \pmod{q_i}$.
This allows us to compute addition and multiplication over $\mathbb{Z}_Q$ while only operating over standard machine words.
The size of this representation of an element of $\mathbb{Z}_Q$ is $\ell$ machine words.
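A minimal RNS sketch in Python; the small Mersenne primes in the basis are illustrative stand-ins for the word-sized, NTT-friendly primes used in practice:

```python
from functools import reduce

# Toy RNS arithmetic; the small Mersenne primes below are illustrative
# stand-ins for the ~60-bit primes used as real limbs.
BASIS = [2**13 - 1, 2**17 - 1, 2**19 - 1]
Q = reduce(lambda a, b: a * b, BASIS)

def to_rns(x):
    # [x]_B: one residue (limb) per prime in the basis.
    return [x % q for q in BASIS]

def rns_add(xs, ys):
    # Addition is limb-wise, using only machine-word arithmetic.
    return [(x + y) % q for x, y, q in zip(xs, ys, BASIS)]

def rns_mul(xs, ys):
    # Multiplication is also limb-wise.
    return [(x * y) % q for x, y, q in zip(xs, ys, BASIS)]

def from_rns(xs):
    # CRT reconstruction: x = sum_i x_i * (Q/q_i) * ((Q/q_i)^{-1} mod q_i).
    return sum(x * (Q // q) * pow(Q // q, -1, q)
               for x, q in zip(xs, BASIS)) % Q
```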
\subsection{CKKS Ciphertext Structure}
\label{subsec:CKKSStructure}
In this section, we give the general structure of a ciphertext in the CKKS~\cite{CKKS17} homomorphic encryption scheme.
A ciphertext is a pair of polynomials each of degree $N-1$.
The coefficients of these polynomials are elements of $\mathbb{Z}_Q$, where $Q$ has $\ell$ limbs.
Thus, in total, the size of a ciphertext is $2N\ell$ machine words.
In CKKS, we are able to encrypt non-integer values, including complex numbers.
The ciphertexts are ``packed,'' which means they encrypt vectors in $\mathbb{C}^n$, where $n = N/2$, in a single ciphertext.
For $\mathbf{m} \in \mathbb{C}^n$, we denote its encryption as $\dbrack{\mathbf{m}} = (\a_\mathbf{m}, \b_\mathbf{m})$ where $\a_\mathbf{m}$ and $\b_\mathbf{m}$ are the two polynomials that comprise the ciphertext.
We omit the subscript $\mathbf{m}$ when there is no cause for confusion.
An example of ciphertext parameters that achieve a $128$-bit security level is $N = 2^{17}$ and $\ell = 35$.
With an $8$-byte machine word, this gives a total ciphertext size of $\sim 73.4$~MB.
Note that in today's reticle-limited systems, the largest last-level cache size is $40$~MB~\cite{nvidiaA100}.
Consequently, we cannot fit even a single ciphertext in the last-level cache, which indicates the need for multiple expensive DRAM accesses when operating on ciphertexts.
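The footprint arithmetic can be checked directly:

```python
# Ciphertext footprint for the example parameters N = 2^17, ell = 35,
# with 8-byte machine words: two degree-(N-1) polynomials, each with
# ell limbs per coefficient.
N, ell, word_bytes = 2**17, 35, 8
ct_bytes = 2 * N * ell * word_bytes
ct_mb = ct_bytes / 1e6    # ~73.4 MB, exceeding a 40 MB last-level cache
```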
\paragraph*{Polynomial Representation} In order to enable fast polynomial multiplication, we will have all polynomials represented by default as a series of $N$ evaluations at fixed roots of unity.
This allows polynomial multiplication to occur in $O(N)$ time.
We refer to this representation as the \emph{evaluation representation}.
Certain subroutines, defined in \cref{subsec:CKKSSubRout}, operate over
the polynomial's \emph{coefficient representation}, which is simply a vector of its coefficients.
Addition of two polynomials and multiplication of a polynomial by a scalar are $O(N)$ in both the coefficient and the evaluation representation.
Moving between representations requires a number-theoretic transform
(NTT) or inverse NTT, which is the finite field version of the fast Fourier transform (FFT) and takes $O(N\log N)$ time and $O(N)$ space for a degree-$(N-1)$ polynomial.
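The benefit of the evaluation representation is that polynomial multiplication becomes a pointwise product. The sketch below uses arbitrary evaluation points as a toy stand-in for the roots of unity of the actual NTT:

```python
# Multiplying two polynomials in evaluation form is a pointwise O(N)
# product, since evaluating a*b at a point t equals a(t)*b(t).
# The NTT maps coefficients to evaluations (and back) in O(N log N);
# here arbitrary points stand in for the fixed roots of unity.
def evaluate(coeffs, t, p):
    return sum(c * pow(t, i, p) for i, c in enumerate(coeffs)) % p

p = 257
a = [3, 1, 4, 1]                     # coefficient representation
b = [2, 7, 1, 8]
ab = [0] * (len(a) + len(b) - 1)     # schoolbook product, O(N^2)
for i, ca in enumerate(a):
    for j, cb in enumerate(b):
        ab[i + j] = (ab[i + j] + ca * cb) % p

points = [5, 6, 7, 8, 9, 10, 11]
a_eval = [evaluate(a, t, p) for t in points]
b_eval = [evaluate(b, t, p) for t in points]
prod_eval = [(x * y) % p for x, y in zip(a_eval, b_eval)]  # O(N) multiply
assert prod_eval == [evaluate(ab, t, p) for t in points]
```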
\paragraph*{Encoding Plaintexts} CKKS supports non-integer messages, so all encoded messages must include a scaling factor $\Delta$. The scaling factor is usually the size of one of the limbs of the ciphertext, which is slightly less than a machine word.
When multiplying messages together, this scaling factor grows as well.
The scaling factor must be shrunk down in order to avoid overflowing the ciphertext coefficient modulus.
We discuss how this procedure works in \Cref{subsec:CKKSSubRout}.
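In fixed-point terms, the growth of the scaling factor is easy to see in plaintext; $\Delta$ below is an illustrative value, not a parameter from this paper:

```python
# Fixed-point encoding with scaling factor Delta: a real message m is
# stored as round(Delta * m). Multiplying two encodings multiplies the
# scales, so the product carries Delta^2 and must be rescaled.
Delta = 2**40                       # illustrative scaling factor

def encode(m):
    return round(m * Delta)

m1, m2 = 3.14159, 2.71828
prod = encode(m1) * encode(m2)      # scale is now Delta^2
rescaled = round(prod / Delta)      # Rescale brings the scale back to Delta
assert abs(rescaled / Delta - m1 * m2) < 1e-9
```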
\subsection{Implementing the API}
\label{subsec:CKKSSubRout}
To implement the homomorphic API described in Table~\ref{tab:low-level-api-overview}, we need some ``helper'' subroutines. We first describe these subroutines and then provide the implementations of the homomorphic API using the subroutines.
\paragraph*{Handling a Growing Scaling Factor}
As mentioned in \cref{subsec:CKKSStructure}, all encoded messages in CKKS must have a scaling factor $\Delta$.
In both the $\mathsf{PtMult}$ and $\mathsf{Mult}$ implementations, the multiplication of the encoded messages results in the product having a scaling factor of $\Delta^2$.
Before these operations can complete, we must shrink the scaling factor back down to $\Delta$ (or at least a value very close to $\Delta$). If this operation is neglected, the scaling factor will eventually grow to overflow the ciphertext modulus, resulting in decryption failure.
To shrink the scaling factor, we divide the ciphertext by $\Delta$ (or a value that is close to $\Delta$) and round the result to the nearest integer. This operation, called $\mathsf{ModDown}$, keeps the scaling factor of the ciphertext roughly the same throughout the computation.\footnote{A better name for this operation would be ``divide and mod-down'' because it reduces the scaling factor {\em as well as} the ciphertext modulus. In this paper, we stick to the standard $\mathsf{ModDown}$ terminology for consistency with the literature.} For a more formal description, we refer the reader to \cite{FullRNSHEAAN}. We sometimes refer to a $\mathsf{ModDown}$ instruction that occurs at the end of an operation as $\mathsf{Rescale}$.
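On a single coefficient, the divide-and-round of $\mathsf{ModDown}$ can be sketched as follows; the limb primes are illustrative:

```python
# ModDown on one coefficient: divide by the product P of the dropped
# limbs and round to the nearest integer. The result lives modulo Q/P
# and equals x/P up to a small rounding error of at most 1/2.
from math import prod

limbs = [97, 101, 103, 107, 109]    # illustrative RNS basis B
d = 2                               # number of limbs to drop
P = prod(limbs[-d:])                # product of the last d limbs
Qp = prod(limbs[:-d])               # new modulus Q/P

def mod_down(x):
    return round(x / P) % Qp

x = 123_456_789
y = mod_down(x)
assert abs(y - (x / P) % Qp) <= 1   # off by at most the rounding error
```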
\paragraph*{Handling a Changing Decryption Key}
In both the $\mathsf{Mult}$ and $\mathsf{Rotate}$ implementations, there is an intermediate ciphertext with a decryption key that differs from the decryption key of the input ciphertexts.
In order to change this new decryption key back to the original decryption key, we perform a $\mathsf{KeySwitch}$ operation.
This operation takes in a switching key $\mathsf{ksk}_{\mathbf{s} \rightarrow \mathbf{s}'}$ and a ciphertext $\dbrack{\mathbf{m}}_{\mathbf{s}}$ that is decryptable under a secret key $\mathbf{s}$.
The output of the $\mathsf{KeySwitch}$ operation is a ciphertext $\dbrack{\mathbf{m}}_{\mathbf{s}'}$ that encrypts the same message but is decryptable under a different key $\mathbf{s}'$.
\paragraph*{Key Switching{\em ~\cite{BV11}}} Since the $\mathsf{KeySwitch}$ operation differs between $\mathsf{Mult}$ and $\mathsf{Rotate}$, we do not define it separately.
Instead, we go a level deeper, and define the subroutines necessary to implement $\mathsf{KeySwitch}$ for each of these operations.
In addition to the $\mathsf{ModDown}$ operation, we use the $\mathsf{ModUp}$ operation, which allows us to add primes to our RNS basis.
We follow the structure of the switching key in the work of Han and Ki~\cite{HK19}, where the switching key, parameterized by a length $\mathsf{dnum}$, is a $2 \times \mathsf{dnum}$ matrix of polynomials.
\begin{align} \label{eq:kskShape}
\mathsf{ksk} = \begin{pmatrix} \a_1 & \a_2 & \ldots & \a_\mathsf{dnum} \\
\b_1 & \b_2 & \ldots & \b_\mathsf{dnum} \\
\end{pmatrix}
\end{align}
The $\mathsf{KeySwitch}$ operation requires that a polynomial be split into $\mathsf{dnum}$ ``digits," then multiplied with the switching key. We define the function $\mathsf{Decomp}$ that splits a polynomial into $\mathsf{dnum}$ digits as well as a $\mathsf{KSKInnerProd}$ operation to multiply the $\mathsf{dnum}$ digits by the switching key.
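The shape of these two subroutines can be sketched on limb vectors; the scalar values below are toy stand-ins for the digit polynomials and key polynomials:

```python
# Decomp splits the L limbs of a polynomial into dnum digits of roughly
# ceil(L / dnum) limbs each; KSKInnerProd then takes the dot product of
# the digit vector with each of the two rows of the switching key.
from math import ceil

def decomp(limbs, dnum):
    alpha = ceil(len(limbs) / dnum)
    return [limbs[i * alpha:(i + 1) * alpha] for i in range(dnum)]

def ksk_inner_prod(ksk_row, digits, q):
    # toy model: one scalar stands in for each polynomial
    return sum(k * d for k, d in zip(ksk_row, digits)) % q

limbs = list(range(1, 8))           # L = 7 limbs
assert decomp(limbs, dnum=3) == [[1, 2, 3], [4, 5, 6], [7]]

ksk = [[3, 5, 7], [2, 4, 6]]        # 2 x dnum switching key (stand-ins)
digit_vals = [11, 13, 17]           # stand-ins for the dnum digits
q = 97
a = ksk_inner_prod(ksk[0], digit_vals, q)
b = ksk_inner_prod(ksk[1], digit_vals, q)
assert (a, b) == ((3*11 + 5*13 + 7*17) % q, (2*11 + 4*13 + 6*17) % q)
```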
Before proceeding further, we refer the reader to \Cref{tab:ckks_subroutines} where all the
subroutines described above are defined in more detail. The implementation of the API functions are given in Algorithms~\ref{algo:PtMult}, \ref{algo:Mult} and \ref{algo:Rotate}.
We also give a batched rotation algorithm $\mathsf{HRotate}$ in \Cref{algo:HRotate},
which computes many rotations on the same ciphertext faster than applying $\mathsf{Rotate}$ independently several times.
\begin{algorithm}
\caption{$\mathsf{PtMult}(\dbrack{\mathbf{m}}, \mathbf{m}') = \dbrack{\mathbf{m}\cdot\mathbf{m}'}$}
\label{algo:PtMult}
\begin{algorithmic}[1]
\State $(\a, \b) := \dbrack{\mathbf{m}}$
\State $(\u, \v) := (\a \cdot (\Delta\cdot\mathbf{m}'), \b \cdot (\Delta\cdot\mathbf{m}'))$\\
\Return $(\mathsf{ModDown}_{\mathcal{B}, 1}(\u), \mathsf{ModDown}_{\mathcal{B}, 1}(\v))$ \Comment{$\mathsf{Rescale}$}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{$\mathsf{Mult}(\dbrack{\mathbf{m}_1}_\mathbf{s}, \dbrack{\mathbf{m}_2}_\mathbf{s}, \mathsf{ksk}_{\mathbf{s}^2\rightarrow \mathbf{s}} ) = \dbrack{\mathbf{m}_1\cdot \mathbf{m}_2}_\mathbf{s}$}
\label{algo:Mult}
\begin{algorithmic}[1]
\State $(\a_1, \b_1) := \dbrack{\mathbf{m}_1}_\mathbf{s}$
\State $(\a_2, \b_2) := \dbrack{\mathbf{m}_2}_\mathbf{s}$
\State $(\a_3, \b_3, \c_3) := (\a_1\a_2, \a_1\b_2 + \a_2\b_1, \b_1\b_2)$
\State $\vec{\a} := \mathsf{Decomp}_\beta(\a_3)$
\State $\hat{\a}[i] := \mathsf{ModUp}(\vec{\a}[i])$ for $1 \leq i \leq \beta$.
\State $(\hat{\u}, \hat{\v}) := \mathsf{KSKInnerProd}(\mathsf{ksk}_{\mathbf{s}^2\rightarrow \mathbf{s}}, \hat{\a})$
\State $(\u, \v) := (\mathsf{ModDown}(\hat{\u}), \mathsf{ModDown}(\hat{\v}))$ \label{line:multFirstModDown}
\State $(\a', \b') := (\b_3 + \u, \c_3 + \v)$\label{line:multAdd}\\
\Return $(\mathsf{ModDown}_{\mathcal{B}, 1}(\a'), \mathsf{ModDown}_{\mathcal{B}, 1}(\b'))$ \Comment{$\mathsf{Rescale}$} \label{line:multSecondModDown}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{$\mathsf{Rotate}(\dbrack{\mathbf{m}}_\mathbf{s}, k, \mathsf{ksk}_{\psi_k(\mathbf{s})\rightarrow \mathbf{s}} ) = \dbrack{\phi_k(\mathbf{m})}_\mathbf{s}$}
\label{algo:Rotate}
\begin{algorithmic}[1]
\State $(\a, \b) := \dbrack{\mathbf{m}}_\mathbf{s}$
\State $(\a_\mathsf{rot}, \b_\mathsf{rot}) := (\mathsf{Automorph}(\a, k), \mathsf{Automorph}(\b, k))$
\State $\vec{\a_\mathsf{rot}} := \mathsf{Decomp}_\beta(\a_\mathsf{rot})$ \Comment{$\beta$ digits.}
\State $\hat{\a}[i] := \mathsf{ModUp}(\vec{\a_\mathsf{rot}}[i])$ for $1 \leq i \leq \beta$.
\State $(\hat{\u}, \hat{\v}) := \mathsf{KSKInnerProd}(\mathsf{ksk}_{\psi_k(\mathbf{s})\rightarrow \mathbf{s}}, {\hat{\a}})$
\State $(\u, \v) := (\mathsf{ModDown}(\hat{\u}), \mathsf{ModDown}(\hat{\v}))$\\
\Return $(\u, \v + \b_\mathsf{rot})$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{$\mathsf{HRotate}(\dbrack{\mathbf{m}}_\mathbf{s}, \{k_i, \mathsf{ksk}_{\psi_{k_i}(\mathbf{s})\rightarrow \mathbf{s}}\}_{i=1}^r ) = \{\dbrack{\phi_{k_i}(\mathbf{m})}_\mathbf{s}\}_{i=1}^r$}
\label{algo:HRotate}
\begin{algorithmic}[1]
\State $(\a, \b) := \dbrack{\mathbf{m}}_\mathbf{s}$
\State $\vec{\a} := \mathsf{Decomp}_\beta(\a)$ \Comment{$\beta$ digits.}
\State $\hat{\a}[j] := \mathsf{ModUp}(\vec{\a}[j])$ for $1 \leq j \leq \beta$.
\For{$i$ from $1$ to $r$}
\State $\hat{\a}_\mathsf{rot}[j] := \mathsf{Automorph}(\hat{\a}[j], k_i)$ for $1 \leq j \leq \beta$
\State $(\hat{\u}, \hat{\v}) := \mathsf{KSKInnerProd}(\mathsf{ksk}_{\psi_{k_i}(\mathbf{s})\rightarrow \mathbf{s}}, {\hat{\a}_\mathsf{rot}})$
\State $(\u, \v) := (\mathsf{ModDown}(\hat{\u}), \mathsf{ModDown}(\hat{\v}))$
\State $\b_\mathsf{rot} := \mathsf{Automorph}(\b, k_i)$
\State $\dbrack{\phi_{k_i}(\mathbf{m})}_\mathbf{s} := (\u, \v + \b_\mathsf{rot})$
\EndFor\\
\Return $\{\dbrack{\phi_{k_i}(\mathbf{m})}_\mathbf{s}\}_{i=1}^r$
\end{algorithmic}
\end{algorithm}
\begin{table*}[t]
\centering
\caption{CKKS Subroutines: \emph{These subroutines enable the implementation of the CKKS API defined in \Cref{tab:low-level-api-overview}.}}
\label{tab:ckks_subroutines}
\begin{tabular}{ l l p{0.4in} p{4in}}
\toprule
\textbf{Sub-routine Name} & \textbf{Output} & \textbf{Used-in} & \textbf{Description} \\
\midrule
$\mathsf{ModDown}_{\mathcal{B}, d}([\mathbf{x}]_\mathcal{B})$ & $[\mathbf{x}/P + e]_{\mathcal{B}'}$ & $\mathsf{PtMult}$ $\mathsf{Mult}$ $\mathsf{Rotate}$ & This function takes in a polynomial $\mathbf{x}$ in the coefficient representation, where each coefficient is modulo $Q := \prod_{q \in \mathcal{B}} q$ and represented in the RNS basis $\mathcal{B} = \{q_1, \ldots, q_\ell\}$. Assume that $d < \ell$ and let $P := \prod_{i=\ell-d+1}^\ell q_i$ be the product of the last $d$ limbs of $\mathcal{B}$. Let $\mathcal{B}' = \{q_1, \ldots, q_{\ell-d}\}$, and note that $Q/P = \prod_{q\in \mathcal{B}'}q$. The output of this function is a polynomial $[\mathbf{y}]_{\mathcal{B}'}$ where each coefficient of $\mathbf{y}$ equals the corresponding coefficient of $\mathbf{x}$ divided by $P$ plus some small rounding error.\\
\midrule
$\mathsf{ModUp}_{\mathcal{B}, \mathcal{B}'}([\mathbf{x}]_\mathcal{B})$ & $[\mathbf{x}]_{\mathcal{B}'}$ & $\mathsf{Mult}$ $\mathsf{Rotate}$ & Takes a polynomial $\mathbf{x}$ where each coefficient is in the basis $\mathcal{B}$ and outputs the representation of $\mathbf{x}$ where each coefficient is in the basis $\mathcal{B}'$. $\mathcal{B}$ could be a subset or superset of $\mathcal{B}'$, or they could be unrelated. Note that this operation must also be performed in the coefficient representation.\\
\midrule
$\mathsf{Decomp}_\beta(\mathbf{x})$ & $\{\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(\beta)}\}$ & $\mathsf{Mult}$ $\mathsf{Rotate}$ & Takes in a polynomial $\mathbf{x}$ and a digit count $\beta$ (determined by the parameter $\mathsf{dnum}$) and splits $\mathbf{x}$ into $\beta$ digits. If $\mathbf{x}$ has $L$ limbs, each digit of $\mathbf{x}$ has roughly $\alpha := \lceil (L+1)/\mathsf{dnum}\rceil$ limbs.\\
\midrule
$\mathsf{KSKInnerProd}(\mathsf{ksk}, \vec{\mathbf{x}})$ & $(\a, \b)$ & $\mathsf{Mult}$ $\mathsf{Rotate}$ & Takes in a key-switching key $\mathsf{ksk}$ with the structure of \cref{eq:kskShape} and a vector of polynomials $\vec{\mathbf{x}}$ of length $\mathsf{dnum}$. Let $\mathsf{ksk}_1$ be the first row of $\mathsf{ksk}$ and let $\mathsf{ksk}_2$ be the second row of $\mathsf{ksk}$. The output of this operation is two polynomials $\a := \langle \mathsf{ksk}_1, \vec{\mathbf{x}} \rangle$ and $\b := \langle \mathsf{ksk}_2, \vec{\mathbf{x}} \rangle$.\\
\midrule
$\mathsf{Automorph}(\mathbf{x}, k)$ & $\psi_k(\mathbf{x})$ & $\mathsf{Rotate}$ & Takes a vector $\mathbf{x}$ with $N$ elements and an integer $k$ and outputs a permutation $\psi_k(\cdot)$ of the elements. This permutation is an automorphism which is {\em not} simply a rotation; intuitively, the permutation $\psi_k$ of an encoded message will result in the decoded value being permuted by the natural rotation $\phi_k$. \\
\bottomrule
\end{tabular}
\end{table*}
\medskip\noindent
\textbf{\em Key Takeaway: The Shrinking Ciphertext Modulus}
A main observation coming out of our description of the homomorphic API is that the ciphertext modulus shrinks for each $\mathsf{PtMult}$ (\cref{algo:PtMult}) and $\mathsf{Mult}$ (\cref{algo:Mult}) operation. This occurs in the $\mathsf{ModDown}$ operations at the end of these functions.
If a ciphertext begins with $L$ limbs, we can only compute a circuit with multiplicative depth $L-1$, since the ciphertext modulus shrinks by a number of limbs equal to the multiplicative depth of the circuit being homomorphically evaluated.
This foreshadows the next section where we present an operation called \emph{bootstrapping}~\cite{Gentry09} that increases the ciphertext modulus.
\subsection{Concrete Costs} \label{subsec:lowLevelConCosts}
We present the hardware cost associated with various functions and subroutines in the FHE API in \Cref{tab:aux-cost} and \Cref{tab:low-level-api-cost}, and discuss the content of the tables briefly.
To generate these performance numbers, we implement an architectural modeling tool that can perform an in-depth analysis given the number of functional units, cache size, and the memory subsystem parameters.
In addition, our tool allows us to tune nearly all parameters of the algorithm, including $N$, $\mathsf{dnum}$, and the maximum ciphertext modulus for a given security level.
\medskip\noindent
\textbf{\em Key Takeaway: Low Arithmetic Intensity.}
The key takeaway from the tables, in particular \Cref{tab:low-level-api-cost}, is that the {\em arithmetic intensity}, defined as the number of operations per byte transferred from DRAM, of every function in the CKKS API is less than $1$ Op/byte.
This means that when the ciphertexts do not fit in on-chip caches, {\em any natural application (e.g. logistic regression training, neural network evaluation, bootstrapping, etc.) built using these functions will have performance bounded by the memory bandwidth and not the computation speed}.
Since our ciphertexts will remain too large to fit in the chip cache, much of this work will focus on improving the arithmetic intensity of CKKS bootstrapping. This translates to progressing further in the bootstrapping algorithm per memory transfer, which, in turn, translates to a faster bootstrapping implementation.
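As a concrete instance of the metric, the numbers below reproduce the $\mathsf{Mult}$ row of \Cref{tab:low-level-api-cost}:

```python
# Arithmetic intensity = total operations / total DRAM bytes transferred.
# Values are the Mult row of the API cost table (in GOP and GB, so the
# giga prefixes cancel and the ratio is directly in Op/byte).
total_ops_gop = 1.8333
dram_transfers_gb = 1.9293
intensity = total_ops_gop / dram_transfers_gb
assert abs(intensity - 0.95) < 0.01   # < 1 Op/byte: memory-bandwidth bound
```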
\begin{table*}[t]
\centering
\caption{Hardware Cost of Auxiliary Subroutines: \emph{These benchmarks were taken for $\log(N) = 17$, $\ell = 35$, $\mathsf{dnum} = 3$. The \textbf{Total Operations} column counts the number of modular additions and multiplications in the operations (note that this count for the $\mathsf{Automorph}$ function is zero). \textbf{GOP} stands for Giga operations. The \textbf{Total DRAM Transfers} is the sum of \textbf{DRAM Limb Reads}, \textbf{DRAM Limb Writes}, and \textbf{DRAM Key Reads}, the last of which counts the reads specifically for the switching keys. The $\mathsf{KSKInnerProd}$ operation has no limb writes because the limbs are immediately used in the next operation, the $\mathsf{ModDown}$. The write is counted in the $\mathsf{ModDown}$ when the limbs are written out to be read back in \emph{slot-wise} format, as discussed in \Cref{sec:macro-fusion}. The \textbf{Arithmetic Intensity} column gives the number of operations per byte transferred from DRAM.}}
\label{tab:aux-cost}
\begin{tabular}{cccccccc}
\toprule
\specialcell{\textbf{Sub-routine}\\\textbf{Name}} & \specialcell{\textbf{Total Operations}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total Mults}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total DRAM }\\\textbf{Transfers (in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Writes (in GB)}} & \specialcell{\textbf{DRAM Key}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{Arithmetic}\\\textbf{Intensity}\\\textbf{(in Op/byte)}}\\
\midrule
$\mathsf{ModDown}$ & $0.3000$ & $0.1288$ & $0.1877$ & $0.1007$ & $0.0870$ & $0$ & $\mathbf{1.59}$ \\
\midrule
$\mathsf{ModUp}$ & $0.2847$ & $0.1211$ & $0.1510$ & $0.0629$ & $0.0881$ & $0$ & $\mathbf{1.88}$ \\
\midrule
$\mathsf{Decomp}$ & $0.0092$ & $0.0092$ & $0.0734$ & $0.0367$ & $0.0367$ & $0$ & $\mathbf{0.12}$ \\
\midrule
$\mathsf{KSKInnerProd}$ & $0.0629$ & $0.0378$ & $0.4530$ & $0.1510$ & $0$ & $0.3020$ & $\mathbf{0.13}$ \\
\midrule
$\mathsf{Automorph}$ & $0$ & $0$ & $0.1468$ & $0.0734$ & $0.0734$ & $0$ & $\mathbf{0}$ \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[t]
\centering
\caption{Hardware Cost of FHE APIs: \emph{These benchmarks were taken for $\log(N) = 17$, $\ell = 35$, $\mathsf{dnum} = 3$. The number of rotations computed in the $\mathsf{HRotate}$ benchmark is $8$. See the caption of \Cref{tab:aux-cost} for a description of the columns.}}
\label{tab:low-level-api-cost}
\begin{tabular}{cccccccc}
\toprule
\specialcell{\textbf{Operation}\\\textbf{Name}} & \specialcell{\textbf{Total Operations}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total Mults}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total DRAM }\\\textbf{Transfers(in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Writes (in GB)}} & \specialcell{\textbf{DRAM Key}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{Arithmetic Intensity}\\\textbf{(in Op/byte)}}\\
\midrule
$\mathsf{PtAdd}$ & $0.00459$ & $0$ & $0.1101$ & $0.0734$ & $0.0367$ & $0$ & $\mathbf{0.04}$ \\
\midrule
$\mathsf{Add}$ & $0.0092$ & $0$ & $0.2202$ & $0.1468$ & $0.0734$ & $0$ & $\mathbf{0.04}$ \\
\midrule
$\mathsf{PtMult}$ & $0.2747$ & $0.1098$ & $0.3282$ & $0.1835$ & $0.1447$ & $0$ & $\mathbf{0.84}$ \\
\midrule
$\mathsf{Mult}$ & $1.8333$ & $0.7826$ & $1.9293$ & $0.9070$ & $0.7203$ & $0.3020$ & $\mathbf{0.95}$ \\
\midrule
$\mathsf{Rotate}$ & $1.5310$ & $0.6682$ & $1.5645$ & $0.6501$ & $0.6124$ & $0.3020$ & $\mathbf{0.98}$ \\
\midrule
$\mathsf{Conjugate}$ & $1.5310$ & $0.6682$ & $1.5645$ & $0.6501$ & $0.6124$ & $0.3020$ & $\mathbf{0.98}$ \\
\midrule
$\mathsf{HRotate}$ & $6.2039$ & $2.7363$ & $8.1411$ & $3.2632$ & $2.4621$ & $2.4159$ & $\mathbf{0.76}$ \\
\bottomrule
\end{tabular}
\end{table*}
\section{Fully Homomorphic Encryption: Applications}
\label{section:FHEApplications}
In this section, we describe how the FHE API from \Cref{section:FHEAPI} can be leveraged to develop applications.
As discussed in \Cref{subsec:CKKSSubRout}, a CKKS ciphertext can only support computation up to a fixed multiplicative depth due to the shrinking ciphertext modulus.
Once this depth is reached, a \emph{bootstrapping} operation must be performed to grow the ciphertext modulus, which allows for computation to continue.
Many applications of interest have deep circuits that require bootstrapping multiple times; machine learning training algorithms are prime examples, since deeper training circuits often yield more accurate models.
In this section, we use \emph{logistic regression training} over encrypted data as a running example to explain the process of FHE-based machine learning training.
Logistic regression training contains both linear (e.g. inner-products) and non-linear (e.g. sigmoid) operations.
The CKKS scheme naturally supports linear operations, while for non-linear operations we need to use a polynomial approximation (as in \cite{Kim2018LogisticRM,HELogReg}).
The greater the degree of the polynomial, the more accurate the approximation; this in turn increases the circuit depth, which requires bootstrapping.
For our running example, we use the logistic regression training application given in Han, Song, Cheon and Park~\cite{HELogReg} and depicted in \cref{fig:LogRegTraining}.
The training process is an iterative process that repeatedly computes an inner product followed by a sigmoid function on a training data set and the model weights.
The logistic regression update equation is as follows.
\begin{align}\label{eq:logRegUpdate}
\mathbf{w} \gets \mathbf{w} + \frac{\mathsf{lr}}{n}\sum_{i=1}^n \sigma\left(\mathbf{z}_i^T\cdot\mathbf{w}\right) \cdot \mathbf{z}_i
\end{align}
The vector $\mathbf{w}$ is the weight vector, the values $n$ and $\mathsf{lr}$ are scalars, and $\mathbf{z}_i$ represents the $i^{th}$ vector of the training data set.
The $\sigma$ function is the sigmoid function.
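For reference, the update in \cref{eq:logRegUpdate} can be written in a few lines of plaintext Python; the function names and toy data are ours, and no encryption or batching is modeled:

```python
# Plaintext reference for the logistic regression update: one gradient
# step over n samples z_i with learning rate lr, using the sigmoid.
from math import exp

def sigmoid(t):
    return 1.0 / (1.0 + exp(-t))

def lr_update(w, zs, lr):
    n = len(zs)
    grad = [0.0] * len(w)
    for z in zs:
        s = sigmoid(sum(zi * wi for zi, wi in zip(z, w)))
        for j, zj in enumerate(z):
            grad[j] += s * zj
    return [wi + (lr / n) * gj for wi, gj in zip(w, grad)]

w = lr_update([0.0, 0.0], [[1.0, 2.0], [1.0, -1.0]], lr=0.1)
```

The homomorphic version replaces the inner products with the encrypted matrix-vector product of \cite{HELogReg} and the sigmoid with a polynomial approximation.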
To implement this iterative update, we split the update into two phases: a linear phase that contains the inner product\footnote{In the real implementation of \Cref{eq:logRegUpdate}, these inner products are batched into a matrix-vector product. We use the same algorithm as \cite{HELogReg}.}
and a non-linear phase that contains the sigmoid function.
We implement these phases separately with common building blocks shown in \Cref{tab:building_blocks}.
The linear phase can be implemented with an $\mathsf{InnerProduct}$ routine that computes the inner product of two encrypted vectors.
The non-linear phase is approximated with a polynomial, and the homomorphic evaluation of this polynomial can be implemented with $\mathsf{PolyEval}$.
The scalar products and summation can be implemented with the $\mathsf{PtMult}$, $\mathsf{Mult}$, and $\mathsf{Add}$ functions.
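One plaintext way to realize $\mathsf{PolyEval}$ entry-wise is Horner's rule; this is a sketch of the arithmetic only, whereas practical homomorphic implementations use depth-optimized evaluation schemes:

```python
# Entry-wise polynomial evaluation via Horner's rule. Homomorphically,
# each Horner step costs one multiplication and one addition, so the
# multiplicative depth equals the polynomial degree; depth-optimized
# variants are preferred in practice.
def poly_eval(coeffs, xs):
    # coeffs = [c_0, c_1, ..., c_d] for p(t) = c_0 + c_1*t + ... + c_d*t^d
    out = []
    for x in xs:
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        out.append(acc)
    return out

assert poly_eval([1, 0, 2], [0.0, 1.0, 2.0]) == [1.0, 3.0, 9.0]
```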
After some number of iterations, the encrypted weights are passed through the $\mathsf{Bootstrap}$ routine.
The exact placement of the $\mathsf{Bootstrap}$ operation in a circuit is application-dependent.
In our running example, bootstrapping needs to be done every three iterations (see Figure~\ref{fig:LogRegTraining}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.00\columnwidth]{Figures/LRTraining.pdf}
\end{center}
\vspace{-0.15in}
\caption{Logistic regression training on encrypted data.}
\vspace{-0.10in}
\label{fig:LogRegTraining}
\end{figure}
\begin{table*}[ht]
\centering
\caption{Homomorphic Encryption Application Building Blocks: \emph{These building blocks are implemented using the API from \Cref{tab:low-level-api-overview}.}}
\label{tab:building_blocks}
\begin{tabular}{c c p{5in}} \toprule
\textbf{Name} & \textbf{Output} & \textbf{Description}\\
\midrule
$\mathsf{InnerProduct}(\dbrack{\mathbf{x}}, \dbrack{\mathbf{y}})$ & $\dbrack{\langle \mathbf{x}, \mathbf{y} \rangle}$ & Computes the inner product of two encrypted vectors. This computation is the specific encrypted inner product algorithm from Han et al.~\cite{HELogReg}.
\\
\midrule
$\mathsf{PolyEval}(\dbrack{\mathbf{x}}, p(\cdot))$ & $\dbrack{p(\mathbf{x})}$ & This operation takes an encrypted vector $\mathbf{x}$ and a (univariate) polynomial $p$ as input. The result is an encryption of the evaluation of $p$ at $\mathbf{x}$, where each entry of $p(\mathbf{x})$ is the evaluation of $p$ on the corresponding entry of $\mathbf{x}$. \\
\midrule
$\mathsf{PtMatVecMult}(\mathbf{M}, \dbrack{\mathbf{x}})$ & $\dbrack{\mathbf{M}\mathbf{x}}$ & This operation takes a plaintext matrix $\mathbf{M}$ and multiplies it by an encrypted vector $\mathbf{x}$. The result is an encryption of the vector $\mathbf{M}\mathbf{x}$. This is a major subroutine in $\mathsf{Bootstrap}$.\\
\midrule
$\mathsf{Bootstrap}(\dbrack{\mathbf{x}})$ & $\dbrack{\mathbf{x}}$ & This operation takes in an encryption of a vector $\mathbf{x}$ and outputs an encryption of the same vector $\mathbf{x}$. This operation is necessary to be able to compute indefinitely on encrypted data. Far from being a null operation, this is nearly always the bottleneck operation when computing over encrypted data. \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Bootstrapping}
\label{sec:bootstrapping}
As discussed in \Cref{subsec:CKKSSubRout}, the ciphertext modulus of CKKS shrinks with each multiplication.
In order to compute indefinitely on a CKKS ciphertext, we must grow the ciphertext modulus without also growing the noise.
This is not as simple as performing a $\mathsf{ModUp}$ function.
The CKKS bootstrapping procedure~\cite{CKKS20} begins with this $\mathsf{ModUp}$ operation, which gives the new plaintext as $\Delta\cdot \mathbf{m} + \k q$ where $q$ is the modulus for the input ciphertext and $\k$ is some polynomial with small integer coefficients.
The primary goal of the bootstrapping operation is to homomorphically evaluate the modular reduction operation modulo $q$ on this plaintext, returning the plaintext back to $\Delta\cdot \mathbf{m}$.
The CKKS bootstrapping algorithm follows a general structure that has remained relatively static in the literature~\cite{CCS18, CHH18, HK19, BMTH20, CKKS20} over the past few years.
This structure has three main components: a linear operation, an approximation of the modular reduction function, and another linear operation.
The linear operations in bootstrapping require homomorphically evaluating the DFT on the encrypted data so that modular reduction is performed on the {\em coefficient representation} of the plaintext, rather than on the {\em evaluation (or slot) representation}.
The first of these DFT operations is called $\mathsf{CoeffToSlot}$ and the second is called $\mathsf{SlotToCoeff}$.
In between these two DFT operations is an approximation of the modular reduction function that consists of a polynomial evaluation followed by an exponentiation.
For further details on polynomial evaluation and the exponentiation, we refer the readers to \cite{HK19, BMTH20}.
To homomorphically evaluate the DFT, we use the observation that the DFT matrix can be factored into submatrices of smaller dimension.
This turns the homomorphic DFT into a series of $\mathsf{PtMatVecMult}$ operations.
However, there is a trade-off between the number of $\mathsf{PtMatVecMult}$ operations that must be computed and the size of the matrices in each $\mathsf{PtMatVecMult}$ instance.
Each $\mathsf{PtMatVecMult}$ has a multiplicative depth of 1.
The total dimension of the DFT is $n = N/2 = 2^{16}$ for our parameters.
Options to evaluate this DFT include evaluating a single $\mathsf{PtMatVecMult}$ with an $n\times n$ input, which would require a very large number of rotations, or evaluating $16$ $\mathsf{PtMatVecMult}$ instances in sequence with only two rotations per instance.
The former corresponds to treating the DFT as a generic matrix-vector multiplication, ignoring the structure of the DFT matrix, while the latter corresponds to running the $O(N\log N)$ algorithm for the DFT.
We can interpolate between these two extremes to find the optimal depth vs. computation trade-off.
Each sub-matrix in the factorization of the DFT matrix has a \emph{radix} corresponding to the number of non-zero diagonals.
The smaller the radix, the fewer rotations must be computed during the $\mathsf{PtMatVecMult}$ instance.
The rule is that the product of the radices of the $\mathsf{PtMatVecMult}$ iterations (in the DFT algorithm) must equal $n$.
For example, for our parameter of $n = 2^{16}$, the options include three $\mathsf{PtMatVecMult}$ iterations with radices of $2^5$, $2^5$, and $2^6$, or five $\mathsf{PtMatVecMult}$ iterations, four with a radix of $2^3$ and one with a radix of $2^4$.
We denote the number of iterations by $\mathsf{fftIter}$.
The homomorphic inverse DFT is computed in an analogous way.
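The constraint that the radices multiply to $n$ is easy to check directly for the two factorizations above:

```python
# The radices of the PtMatVecMult iterations must multiply to n = N/2.
# Rotations per iteration grow with the radix, while multiplicative
# depth grows with the number of iterations, giving the depth vs.
# computation trade-off described in the text.
from math import prod

n = 2**16
assert prod([2**5, 2**5, 2**6]) == n                  # 3 iterations
assert prod([2**3, 2**3, 2**3, 2**3, 2**4]) == n      # 5 iterations
```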
Our approximation of the modular reduction function follows the literature, where we represent the modular reduction function modulo $q$ with a sine function with period $q$, then approximate this sine function with a polynomial.
We represent this polynomial with $\mathsf{sine}(\cdot)$, and we use the Chebyshev polynomial construction used in Han and Ki~\cite{HK19}.
The degree of this polynomial is $63$.
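The sine-based view of modular reduction can be checked numerically: for an input $m + kq$ with $|m| \ll q$, the scaled sine recovers $m$. This is a pure-math check, not the homomorphic Chebyshev evaluation:

```python
# Modular reduction via sine: for t = m + k*q with |m| << q,
# (q / (2*pi)) * sin(2*pi*t / q) ~ m, since sin is q-periodic and
# approximately linear near the origin. Bootstrapping evaluates a
# polynomial (Chebyshev) approximation of this sine homomorphically.
from math import sin, pi

q = 2**40
m, k = 12345, 7                     # |m| << q
t = m + k * q
approx = (q / (2 * pi)) * sin(2 * pi * t / q)
assert abs(approx - m) < 1.0        # recovers m up to small error
```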
We give a high-level pseudocode for the bootstrapping algorithm in \Cref{algo:BootstrapHighLevel}.
\begin{algorithm}
\caption{$\mathsf{Bootstrap}(\dbrack{\mathbf{x}}) = \dbrack{\mathbf{x}}$}
\label{algo:BootstrapHighLevel}
\begin{algorithmic}[1]
\State $(\a, \b) := \dbrack{\mathbf{x}}$
\State $\dbrack{\t} := \mathsf{ModUp}(\a, \b)$ \label{line:BootModRaise}
\For{$i$ from $1$ to $\mathsf{fftIter}$} \Comment{$\mathsf{CoeffToSlot}$ phase.}
\State $\dbrack{\t} \gets \mathsf{PtMatVecMult}(\mathbf{M}_i, \dbrack{\t})$
\EndFor
\State $\dbrack{\t} \gets \mathsf{PolyEval}(\dbrack{\t}, \mathsf{sine}(\cdot))$
\For{$i$ from $1$ to $\mathsf{fftIter}$} \Comment{$\mathsf{SlotToCoeff}$ phase.}
\State $\dbrack{\t} \gets \mathsf{PtMatVecMult}(\mathbf{M}'_i, \dbrack{\t})$
\EndFor\\
\Return $\dbrack{\t}$
\end{algorithmic}
\end{algorithm}
\subsection{Concrete Costs}
\label{subsec:applConcCosts}
We give the concrete costs of the logistic regression and $\mathsf{Bootstrap}$ subroutines in \Cref{tab:building-block-cost} and \Cref{tab:boot-cost} respectively.
As the table shows, the arithmetic intensity of the sub-routines is less than $1$ Op/byte.
As discussed in \Cref{subsec:lowLevelConCosts}, since our ciphertexts do not fit in cache, this means that the performance of all sub-routines is bounded by the main memory bandwidth.
In \Cref{tab:building-block-cost}, we give benchmarks for the logistic regression implementation based on our architecture modeling discussed in \Cref{subsec:lowLevelConCosts}.
The parameters we use are from the work of Jung et al.~\cite{GPUBoot21}, and these parameters were chosen to optimize their secure logistic regression application that leverages a GPU implementation of CKKS bootstrapping.
We refer to the original work of Han et al.~\cite{HELogReg} for the full algorithm benchmarked in \Cref{tab:building-block-cost}.
We note that the logistic regression iteration is the ``most expensive" of the three iterations that follow a $\mathsf{Bootstrap}$, since the ciphertexts in this iteration are the largest.
As the ciphertext shrinks due to the reduced ciphertext modulus, the computation becomes cheaper.
However, the arithmetic intensity remains essentially the same,
and the performance of each phase of the algorithm is bottlenecked by the memory bandwidth.
Overall, roughly half of the total runtime is spent in bootstrapping.
\medskip\noindent
\textbf{\em Key Takeaway:}
Bootstrapping is often the bottleneck operation in HE applications, especially applications that implement a deep circuit.
For example, even when using a heavily-optimized GPU implementation of bootstrapping, nearly half of the time in HE logistic regression training is spent on bootstrapping~\cite{GPUBoot21} (\cref{tab:building-block-cost}).
This motivates the need to optimize the $\mathsf{Bootstrap}$ operation to efficiently support deep circuits.
Furthermore, the building blocks of bootstrapping are the same as many other HE applications; there are essentially no subroutines that are unique to bootstrapping.
Many of the optimizations we give in \Cref{sec:cachingopts} and \Cref{sec:algoopts} apply more generally to HE applications.
\begin{table*}[ht]
\centering
\caption{Hardware Cost of FHE Applications: \emph{These benchmarks were taken for $\log(N) = 17$, $\ell = 35$, $\mathsf{dnum} = 3$. See the caption of \Cref{tab:aux-cost} for a description of the columns. The number of features in the logistic regression is $d = 256$. The $\mathsf{InnerProduct}$ and $\mathsf{PolyEval}$ benchmarks are for the first iterations after a $\mathsf{Bootstrap}$. The ``Full LR Iteration'' row is the first iteration of the training algorithm after a $\mathsf{Bootstrap}$. The degree of the polynomial evaluated in $\mathsf{PolyEval}$ is $3$.
}}
\label{tab:building-block-cost}
\begin{tabular}{cccccccc}
\toprule
\specialcell{\textbf{Sub-routine}\\\textbf{Name}} & \specialcell{\textbf{Total Operations}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total Mults}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total DRAM }\\\textbf{Transfers(in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Writes (in GB)}} & \specialcell{\textbf{DRAM Key}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{Arithmetic}\\\textbf{Intensity}\\\textbf{(in Op/byte)}}\\
\midrule
$\mathsf{InnerProduct}$ & $7.8558$ & $3.3806$ & $16.5413$ & $7.2918$ & $4.8455$ & $4.4040$ & $\mathbf{0.47}$\\
\midrule
$\mathsf{PolyEval}$ & $2.9314$ & $1.2188$ & $3.5484$ & $1.7144$ & $1.3118$ & $0.5222$ & $\mathbf{0.83}$ \\
\midrule
Full LR Iteration & $92.4225$ & $39.6322$ & $195.052$ & $86.7822$ & $56.1387$ & $52.131$ & $\mathbf{0.47}$ \\
\midrule
$\mathsf{Bootstrap}$ & $149.546$ & $64.6859$ & $207.982$ & $109.91$ & $65.2434$ & $32.8288$ & $\mathbf{0.72}$ \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[ht]
\centering
\caption{Hardware Cost of Bootstrapping: \emph{These benchmarks were taken for $\log(N) = 17$, $\ell = 35$, $\mathsf{dnum} = 3$. See the caption of \Cref{tab:aux-cost} for a description of the columns. These benchmarks represent the performance of the main sub-routines of bootstrapping. The degree of the polynomial in $\mathsf{PolyEval}$ is $63$.}}
\label{tab:boot-cost}
\begin{tabular}{cccccccc}
\toprule
\specialcell{\textbf{Sub-routine}\\\textbf{Name}} & \specialcell{\textbf{Total Operations}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total Mults}\\\textbf{(in GOP)}} & \specialcell{\textbf{Total DRAM}\\\textbf{Transfers (in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{DRAM Limb}\\\textbf{Writes (in GB)}} & \specialcell{\textbf{DRAM Key}\\\textbf{Reads (in GB)}} & \specialcell{\textbf{Arithmetic}\\\textbf{Intensity}\\\textbf{(in Op/byte)}}\\
\midrule
$\mathsf{CoeffToSlot}$ & $58.486$ & $25.8087$ & $86.7424$ & $46.8651$ & $25.2875$ & $14.5899$ & $\mathbf{0.67}$ \\
\midrule
$\mathsf{PolyEval}$ & $57.834$ & $24.4496$ & $65.643$ & $33.0406$ & $23.744$ & $8.8584$ & $\mathbf{0.88}$ \\
\midrule
$\mathsf{SlotToCoeff}$ & $33.2265$ & $14.4275$ & $55.5001$ & $30.004$ & $16.1156$ & $9.3806$ & $\mathbf{0.59}$ \\
\bottomrule
\end{tabular}
\end{table*}
\section{CKKS Bootstrapping: Caching Optimizations}
\label{sec:cachingopts}
In this section and \cref{sec:algoopts}, we present our optimizations to the CKKS bootstrapping algorithm.
These optimizations fall into two categories: those that rely on hardware assumptions and those that do not.
Our first class of optimizations assumes a lower bound on the available cache size relative to the size of the ciphertext limbs, while the second class is more general: it reduces the total operation count of CKKS bootstrapping as well as the total number of DRAM reads, regardless of the hardware architecture.
This section focuses on the first set of optimizations.
These caching optimizations do not affect the operation count of $\mathsf{Bootstrap}$; instead, they reduce DRAM reads and writes to reduce the overall memory bandwidth requirement.
Our optimizations demonstrate how best to utilize caches of various sizes relative to the size of the ciphertext limbs.
We quantify the improvements of these optimizations in \Cref{sec:cachingoptsTakeaway}, where we give benchmarks for progressively larger cache sizes.
Our baseline benchmark is the parameter set from the GPU bootstrapping implementation of Jung et al.~\cite{GPUBoot21}.
The parameters are given in \Cref{tab:bootParams}.
\subsection{Caching $O(1)$ Limbs}
\label{sec:macro-fusion}
This is the first in a series of optimizations that details how best to utilize a cache for various cache sizes relative to the ciphertext limbs.
We begin by discussing how to utilize a cache that can store a constant number of limbs.
Intuitively, this optimization computes as much as possible on a single limb before writing it back to the main memory.
This often involves performing the operations of several higher-level functions on a single limb before beginning the same sequence of operations on the next limb.
This technique was referred to by Jung et al.~\cite{GPUBoot21} as a ``fusing'' of operations, and we include all fusing operations listed in their work in our bootstrapping algorithm.
In addition, we provide a novel data mapping technique to handle caching data with different \emph{data access patterns}.
\paragraph*{Data Access Patterns}
Having a small cache (about $1$-$3$~MB) in any FHE compute system comes with a caveat that must be carefully addressed.
Some operations in CKKS, such as $\mathsf{NTT}$ and $\mathsf{iNTT}$, operate on data within the slots of the same limb, independently of the other limbs in the ciphertext.
On the other hand, the RNS basis change operations in $\mathsf{ModUp}$ and $\mathsf{ModDown}$ require interaction between a certain number of slots across various limbs.
This requires keeping a few slots from multiple limbs in on-chip memory to reduce the number of accesses to main memory for a single operation.
To account for this, we define two different types of data access patterns.
For the functions where limbs can be operated upon independently, we define the data access pattern as \emph{limb-wise} and for the functions where slots can be operated upon independently, we define the data access pattern as \emph{slot-wise}.
A summary of this is given in \Cref{tab:data_access}.
We also illustrate this with high-level pseudocode for $\mathsf{ModUp}$ in \Cref{algo:ModUp}.
From this algorithm, it is evident that the $\mathsf{ModUp}$ operation includes both \emph{limb-wise} and \emph{slot-wise} operations, requiring a memory mapping that is efficient for both access patterns.
A naive memory mapping would result in low throughput for at least one of these access patterns.
Therefore, we describe a novel memory mapping approach to handle these two access patterns.
\begin{algorithm}
\caption{$\mathsf{ModUp}_{\mathcal{B}, \mathcal{B}\cup\mathcal{B}'}([\mathbf{x}]_\mathcal{B}) = [\mathbf{x}]_{\mathcal{B} \cup \mathcal{B}'}$}
\label{algo:ModUp}
\begin{algorithmic}[1]
\For{$i$ from $1$ to $|\mathcal{B}|$}
\State $[\mathbf{x}]_i \gets \mathsf{iNTT}([\mathbf{x}]_i)$ \Comment{\emph{limb-wise}}
\EndFor
\For{$j$ from $1$ to $|\mathcal{B}'|$} \Comment{Basis conversion.}
\State $[\mathbf{x}]_j \gets \mathsf{NewLimb}_j([\mathbf{x}]_1, \ldots, [\mathbf{x}]_{|\mathcal{B}|})$ \label{line:modUpBasisConv} \Comment{\emph{slot-wise}}
\EndFor
\For{$j$ from $1$ to $|\mathcal{B}'|$}
\State $[\mathbf{x}]_j \gets \mathsf{NTT}([\mathbf{x}]_j)$ \Comment{\emph{limb-wise}}
\EndFor\\
\Return $[\mathbf{x}]_{\mathcal{B} \cup \mathcal{B}'}$
\end{algorithmic}
\end{algorithm}
\begin{table}[ht]
\centering
\caption{Data Dependencies and Access Patterns in Different Functions
\\\emph{The $\mathsf{NewLimb}$ function is used in both $\mathsf{ModUp}$ and $\mathsf{ModDown}$.}
}
\begin{tabular}{cccc} \toprule
\textbf{Operation} & \textbf{Interaction} & \textbf{Independent} & \textbf{Access pattern} \\ \midrule
$\mathsf{NTT}$, $\mathsf{iNTT}$ & Intra-limb & Inter-limb & \emph{limb-wise} \\
$\mathsf{NewLimb}$ & Inter-limb & Intra-limb & \emph{slot-wise} \\
\bottomrule
\end{tabular}
\label{tab:data_access}
\vspace{-0.12in}
\end{table}
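To make the \emph{slot-wise} pattern concrete, the following sketch (ours, not from the paper: a toy model with small moduli and exact CRT arithmetic, whereas real implementations use an approximate fast base conversion) mimics the $\mathsf{NewLimb}$ step of \Cref{algo:ModUp}. Each output slot must combine residues from every input limb:

```python
# Toy slot-wise basis extension. A polynomial in RNS form is a matrix
# limbs[i][s] = coefficient_s mod q_i. NewLimb produces the residues of
# every coefficient modulo a new prime p, so computing one output slot
# reads the same slot from *all* input limbs (slot-wise access pattern).
def new_limb(limbs, base, p):
    Q = 1
    for q in base:
        Q *= q
    out = []
    for s in range(len(limbs[0])):        # one output slot at a time
        x = 0
        for i, q in enumerate(base):      # touches every input limb
            Qi = Q // q
            x += limbs[i][s] * Qi * pow(Qi, -1, q)
        out.append((x % Q) % p)           # exact CRT, then reduce mod p
    return out
```

By contrast, the $\mathsf{NTT}$/$\mathsf{iNTT}$ passes in \Cref{algo:ModUp} read one row `limbs[i]` at a time (\emph{limb-wise}), which is why a single memory mapping must serve both patterns efficiently.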
\paragraph*{Physical Address Mapping}
When we re-purpose the last level cache to support both \emph{limb-wise} and \emph{slot-wise} access patterns, we observe that the physical address mapping of the data in main memory has a substantial impact on the time it takes to transfer data from the main memory.
Figure~\ref{fig:AddrMapping} (a) shows a natural physical address mapping for a ciphertext.
We call this the baseline address mapping.
Through simulations in DRAMSim3~\cite{LYRSJ20} we notice that for this baseline address mapping, the \emph{limb-wise} accesses require $2.3$~ms to read $35$~limbs worth of data.
However, the \emph{slot-wise} access pattern requires $9.2$~ms to transfer the same amount of data.
This is significantly slower than expected: at the peak theoretical DDR4 bandwidth of $19.2$~GB/s, reading $35$~limbs worth of data takes only $1.9$~ms.
There are two reasons for this performance hit while doing \emph{slot-wise} accesses.
With $L=35$, the size of the ciphertext is $36.7$~MB; its limbs can be stored sequentially within a memory bank in one of the bank groups in main memory.
Each limb of the ciphertext spans across multiple rows of the memory bank.
Typically, each bank in main memory has a currently activated row whose contents are copied into a row buffer (acting as a cache) that can be accessed quickly.
However, with \emph{slot-wise} access pattern, every access is trying to read a different row, which takes longer because each row must be activated first.
Moreover, with \emph{slot-wise} accesses, we are unable to exploit the fact that accesses to different bank groups require less delay between them than accesses within the same bank group.
Instead, we keep accessing data from a memory bank within the same bank group.
\begin{figure}
\begin{center}
\includegraphics[width=1.00\columnwidth]{Figures/AddressMapping.pdf}
\end{center}
\vspace{-0.15in}
\caption{DDR4 physical address mapping. \emph{Baseline address mapping indexes all the slots ($2^{17}$) using the lower-order $17$ bits and all limbs using the immediate next $6$ bits. In optimized physical address mapping, slots are indexed using $7$ bits from the column and $10$ bits from the row, accounting for $2^{17}$ slots. The limbs are indexed using the $4$ bits that index bank group, bank, and rank and $2$ bits from the row index.}}
\vspace{-0.12in}
\label{fig:AddrMapping}
\end{figure}
We propose an optimized physical address mapping as shown in Figure~\ref{fig:AddrMapping} (b).
As shown in \Cref{tab:read_times}, with this proposed address mapping, we observe that the \emph{limb-wise} access requires a data transfer time of $2.5$~ms, which is about an $8\%$ increase over the baseline \emph{limb-wise} accesses.
However, the optimized \emph{slot-wise} access pattern reduces the data transfer time by $76\%$ relative to the baseline, which is a significant improvement.
Overall, the total data transfer time for the baseline address mapping is about $2.4\times$ higher than for our optimized mapping.
Our optimized physical address mapping ensures that when performing \emph{limb-wise} and \emph{slot-wise} reads/writes, we exploit bank-level parallelism, and we focus on reducing the bank thrashing by not changing a bank's currently activated row frequently.
Note that for a different DRAM type such as HBM2 or GDDR5/6, similar physical address mappings can be done to optimize the main memory bandwidth utilization.
\begin{table}[ht]
\centering
\caption{DRAM transfer times with Baseline and Optimized mapping for different access patterns: \emph{Transfer times are computed for reading $L=35$ limbs worth of data, which is $36.7$~MB for our baseline parameter set.}}
\begin{tabular}{cccc} \toprule
\textbf{Mapping} & \specialcell{\textbf{\emph{limb-wise}}\\\textbf{access}} & \specialcell{\textbf{\emph{slot-wise}}\\\textbf{access}} &
\textbf{Total Time} \\ \midrule
Baseline & $2.3$~ms & $9.2$~ms & $11.5$~ms \\
Optimized & $2.5$~ms & $2.2$~ms & $4.7$~ms \\
\bottomrule
\end{tabular}
\label{tab:read_times}
\vspace{-0.12in}
\end{table}
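The effect of the address mapping can be illustrated with a toy bank model (ours, not the DRAMSim3 simulation: element-granular, ignoring ranks, columns, and row-buffer timing). Placing limb bits into the bank index spreads a \emph{slot-wise} sweep across all banks, at the cost of confining a \emph{limb-wise} sweep to fewer banks, mirroring the small limb-wise regression and large slot-wise gain in \Cref{tab:read_times}:

```python
# Toy DDR model: 8 banks, rows of 1024 elements. The number of distinct
# banks touched during a sweep is a rough proxy for bank-level parallelism.
LIMBS, SLOTS, BANKS, ROW = 32, 4096, 8, 1024

def baseline_bank(limb, slot):
    addr = limb * SLOTS + slot        # limbs stored contiguously
    return (addr // ROW) % BANKS

def optimized_bank(limb, slot):
    return limb % BANKS               # limb bits mapped to bank bits

def banks_slotwise(mapping, slot=0):  # fixed slot, sweep over limbs
    return len({mapping(l, slot) for l in range(LIMBS)})

def banks_limbwise(mapping, limb=0):  # fixed limb, sweep over slots
    return len({mapping(limb, s) for s in range(SLOTS)})
```

The toy captures only the parallelism trade-off; in the real mapping, the \emph{limb-wise} sweep stays fast because it reads sequential columns from an already-open row, which is why it regresses only slightly in practice.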
\subsection{$\beta$-Limb Caching}
\label{sec:beta-limb-caching}
The next optimization considers a cache size that is $O(\beta)$. Recall that $\beta$ is the number of digits generated by the decomposition during key switching; we refer to Han and Ki~\cite{HK19} for more details. For our parameters, where $\beta \leq \mathsf{dnum} = 3$, this amounts to about $6$~MB of cache.
We need space for $3$ limbs at all times, plus $3$ limbs worth of space to store intermediate results and other required constants.
With this optimization, we can greatly reduce the number of accesses to main memory during key-switching.
Consider the $\mathsf{HRotate}$ function in \Cref{algo:HRotate}.
There are $\beta$ digits that are produced as the output of the $\mathsf{ModUp}$ operations.
Naively, for each rotation we would read the limbs for each of the $\beta$ digits, rotate them, then compute the inner product with the key-switching key.
Since now we have space in the cache for $\beta$ digits, we can instead pull in a single limb from each of the $\beta$ outputs of $\mathsf{ModUp}$, then compute the rotation and the inner product with the switching key limbs all at once.
This allows us to read in the outputs of the $\mathsf{ModUp}$ function only once, regardless of the number of rotations computed.
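The savings can be sketched by counting limb reads under the two loop orders (a simplified model of ours, assuming every un-cached limb access is a DRAM read):

```python
# Count DRAM limb reads for r rotations over beta ModUp digits of L limbs.
def reads_naive(r, beta, L):
    reads = 0
    for _ in range(r):                # each rotation re-reads every digit
        reads += beta * L
    return reads

def reads_beta_cached(r, beta, L):
    reads = 0
    for _ in range(L):                # stream over limb indices once
        reads += beta                 # one limb per digit, held in cache
        # all r rotations and key inner products run on the cached limbs
    return reads
```

With $\beta$-limb caching, the read count becomes independent of the number of rotations $r$.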
\subsection{$\alpha$-Limb Caching}
\label{sec:alpha-limb-caching}
For this optimization, we assume that we have a relatively large LLC that can hold $O(\alpha)$ limbs.
Recall that $\alpha$ is the number of limbs in a single digit output by the $\mathsf{Decomp}$ function for key switching. We refer to Han and Ki~\cite{HK19} for more details.
In practice, this optimization requires only slightly more than $2\alpha$ limbs, using about $27$~MB ($2\alpha$ limbs plus $3$~MB) for $\alpha=\lceil (L + 1)/\mathsf{dnum} \rceil = 12$, as $L=35$ and $\mathsf{dnum}=3$.
Under this assumption, we observe a dramatic decrease in the number of accesses to the main memory.
This is because all of the \emph{slot-wise} basis conversion operations in $\mathsf{ModUp}$ (line~\ref{line:modUpBasisConv} in \cref{algo:ModUp}) and $\mathsf{ModDown}$ operate over $\alpha$ limbs.
If we can fit these $\alpha$ limbs in cache, then we can generate new limbs in their entirety within the cache.
With each new limb in cache, we can perform the NTT on the limb, which completes the basis change operation, and write this limb out to memory.
This lets us generate all new limbs in evaluation format without having to write them out in \emph{slot-wise} format and then reading them back in \emph{limb-wise} format.
\paragraph*{Accumulator Caching} We briefly mention an optimization that is easily enabled by a large cache but is also available with smaller caches ($O(\beta)$ or even smaller).
This optimization improves the memory bandwidth of the baby-step giant-step polynomial evaluation from Han and Ki~\cite{HK19}.
A straightforward optimization is to cache the baby-step polynomials and reuse them to compute all of the giant-step limbs.
However, if there is not enough space for the baby-step polynomials, we can still save DRAM reads by caching the partial sums of the giant step limb.
When we read in a baby-step limb, we add this limb to all cached accumulators.
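For reference, a plain (unencrypted) version of the baby-step giant-step evaluation reads as follows; the cached accumulators above correspond to the per-giant-step partial sums `acc`. This is our illustrative sketch over integers, not the homomorphic version from Han and Ki~\cite{HK19}:

```python
import math

def bsgs_eval(coeffs, x, m):
    """Evaluate sum_i coeffs[i] * x**i mod m, reusing the baby-step
    powers x**0 .. x**(k-1) across every giant step."""
    k = max(1, math.isqrt(len(coeffs)))
    baby = [pow(x, i, m) for i in range(k)]   # computed once, reused
    giant = pow(x, k, m)
    acc, gpow = 0, 1
    for j in range(0, len(coeffs), k):        # one giant step per chunk
        inner = sum(c * b for c, b in zip(coeffs[j:j + k], baby)) % m
        acc = (acc + inner * gpow) % m        # accumulate partial sums
        gpow = gpow * giant % m
    return acc
```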
\subsection{Re-Ordering Limb Computations}
\label{sec:reorder}
For the $\mathsf{ModDown}$ operation, the limbs that are being reduced need additional operations to be performed on them.
The $\mathsf{ModDown}$ operations in key switching and bootstrapping drop $\alpha$ limbs.
In this re-ordering optimization, we propose computing these $\alpha$ limbs first so that the additional operations can be performed immediately.
This optimization is especially potent when these $\alpha$ limbs can be cached, since then there is no need to write out these limbs as they are being computed.
Once we have the $\alpha$ limbs, we can begin the $\mathsf{ModDown}$ operation by computing the output of the basis conversion.
Then, for each subsequent limb that is computed, this limb can be immediately combined with the basis conversion output, saving DRAM transfers.
\subsection{Key Takeaway}
\label{sec:cachingoptsTakeaway}
The benefits of the optimizations in this section are presented in \Cref{fig:cacheBarPlot}.
As the figure shows, growing the cache size reduces the DRAM transfers of the bootstrapping algorithm by employing the optimizations described in this section.
Note that the number of compute operations in the bootstrapping algorithm remains fixed for all these benchmarks.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{Figures/mem_opts.png}
\end{center}
\vspace{-0.15in}
\caption{DRAM transfers with various memory optimizations. As the cache size grows from left to right, more optimizations become available. The impact is assessed cumulatively, i.e., each successive optimization builds on top of the earlier ones. The order of the optimizations corresponds to the order of the sections in \Cref{sec:cachingopts}.
}
\vspace{-0.15in}
\label{fig:cacheBarPlot}
\end{figure}
\section{CKKS Bootstrapping: Algorithmic Optimizations}
\label{sec:algoopts}
In this section, we present our algorithmic optimizations to the CKKS bootstrapping algorithm.
These optimizations represent strict improvements to the CKKS bootstrapping algorithm and they do not depend on the cache size.
However, as an added benefit of reducing the compute operation count, they also reduce the memory bandwidth, as displayed in \Cref{fig:algoBarPlot}.
Our baseline for demonstrating the improvements of these optimizations is the memory-optimized algorithm from \Cref{sec:cachingopts}.
Therefore, the left-most baseline bar in \Cref{fig:algoBarPlot} contains all of the memory optimizations described in \Cref{sec:cachingopts}.
For the algorithm that includes all of our optimizations, we performed a parameter search to optimize the bootstrapping throughput for a $128$-bit security level.
We discuss our parameter search method further in \Cref{section:eval}.
These parameters are given in \Cref{tab:bootParams}, and all benchmarks in \Cref{fig:algoBarPlot} were taken using these same parameters.
\subsection{Combining $\mathsf{ModDown}$ and $\mathsf{Rescale}$ in $\mathsf{Mult}$}
\label{sec:moddown-rescale}
This optimization merges the two $\mathsf{ModDown}$ operations in lines~\ref{line:multFirstModDown} and~\ref{line:multSecondModDown} in \Cref{algo:Mult}.
To merge these $\mathsf{ModDown}$ operations, we must lift the addition step in line~\ref{line:multAdd} above the first $\mathsf{ModDown}$.
We achieve this by modifying the double-hoisting method from Bossuat et al.~\cite{BMTH20}, multiplying the two polynomials by $P$ to efficiently lift them to the modulus $PQ$.
We denote the operation that multiplies by $P$ modulo $Q$ and then interprets the result modulo $PQ$ as $\mathsf{PModUp}$.
By applying the $\mathsf{PModUp}$ function, we can move the addition above the first $\mathsf{ModDown}$, making the two $\mathsf{ModDown}$ operations adjacent, which allows them to be combined.
This new $\mathsf{Mult}$ algorithm, denoted as $\mathsf{NewMult}$, is given in \Cref{algo:NewMult}, and the lines in blue denote the differences from \Cref{algo:Mult}.
\paragraph*{Faster Encrypted Inner Product}
As a direct result of this optimization, we obtain a faster encrypted inner product.
Consider the operation that computes $\dbrack{\mathbf{z}} = \sum_i \mathsf{Mult}(\dbrack{\mathbf{x}_i}, \dbrack{\mathbf{y}_i})$ where $\vec{\dbrack{\mathbf{x}}}$ and $\vec{\dbrack{\mathbf{y}}}$ are vectors of ciphertexts.
Using the $\mathsf{NewMult}$ operation, we need to compute only one $\mathsf{ModDown}$ operation over the entire sum.
This is because we can merge the additions in line~\ref{line:newMultSum} to sum all of the polynomials before any $\mathsf{ModDown}$ is computed.
\begin{algorithm}
\caption{
$\mathsf{NewMult}(\dbrack{\mathbf{m}_1}_\mathbf{s}, \dbrack{\mathbf{m}_2}_\mathbf{s}, \mathsf{ksk} ) = \dbrack{\mathbf{m}_1\cdot \mathbf{m}_2}_\mathbf{s}$}
\label{algo:NewMult}
\begin{algorithmic}[1]
\State $(\a_1, \b_1) := \dbrack{\mathbf{m}_1}_\mathbf{s}$
\State $(\a_2, \b_2) := \dbrack{\mathbf{m}_2}_\mathbf{s}$
\State $(\a_3, \b_3, \c_3) := (\a_1\a_2, \a_1\b_2 + \a_2\b_1, \b_1\b_2)$
\State $\vec{\a} := \mathsf{Decomp}_\mathsf{dnum}(\a_3)$
\State $\hat{\a}_i := \mathsf{ModUp}(\vec{\a}[i])$ for $1 \leq i \leq \mathsf{dnum}$.
\State $(\hat{\u}, \hat{\v}) := \mathsf{KSKInnerProd}(\mathsf{ksk}_{\mathbf{s}^2\rightarrow \mathbf{s}}, \hat{\a})$
\State {\color{blue}$(\hat{\b}_3, \hat{\c}_3) := (\mathsf{PModUp}(\b_3), \mathsf{PModUp}(\c_3))$}\\
\Return {\color{blue}$(\mathsf{ModDown}(\hat{\u} + \hat{\b}_3), \mathsf{ModDown}(\hat{\v} + \hat{\c}_3))$} \label{line:newMultSum}
\end{algorithmic}
\end{algorithm}
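The reason the addition can be lifted above $\mathsf{ModDown}$ is that $\mathsf{PModUp}$ multiplies by $P$, and rescaling by $P$ maps an exact multiple $P\cdot b$ back to $b$ with no rounding error. A one-line integer analogue (our illustration; the real $\mathsf{ModDown}$ operates limb-wise over an RNS basis):

```python
P = 97                      # stand-in for the special modulus

def mod_down(x):
    """Rescale by P with rounding-to-nearest."""
    return (x + P // 2) // P

# Adding a P-lifted term before rescaling is exact:
# mod_down(u + P*b) == mod_down(u) + b, since P*b is an exact multiple.
u, b = 12345, 678
```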
\subsection{Hoisting the $\mathsf{ModDown}$ in $\mathsf{PtMatVecMult}$}
\label{sec:rotation-hoisting}
In section~\ref{subsec:CKKSSubRout}, we discussed how $r$ rotations on the same ciphertext can be computed more efficiently than simply applying the $\mathsf{Rotate}$ function $r$ times.
This function $\mathsf{HRotate}$ described in \Cref{algo:HRotate} achieves an improved performance by identifying an expensive common subroutine in all of the $\mathsf{Rotate}$ operations: the $\mathsf{ModUp}$ routine.
Bossuat et al.~\cite{BMTH20} present an optimization that also hoists the second \emph{slot-wise} operation in the function: the $\mathsf{ModDown}$ routine.
Their technique is similar to the one in $\mathsf{NewMult}$: the message polynomial is lifted to the raised modulus via the inexpensive $\mathsf{PModUp}$ procedure.
They call this optimization ``double-hoisting.''
Our $\mathsf{ModDown}$ hoisting optimization is used in the context of a baby-step giant-step (BSGS) algorithm that implements $\mathsf{PtMatVecMult}$.
The trade-off in this algorithm is that a larger baby-step and a smaller giant-step mean more DRAM reads for the switching keys, while a smaller baby-step and a larger giant-step mean more DRAM reads for the ciphertexts, since the baby-step ciphertexts must be read in for each giant-step.
In \Cref{sec:key-compression}, we give a simple optimization to compress the size of the keys by a factor of $2$.
Using our architecture modeling tool, we determine that this optimization shifts the balance between the baby-step size and the giant-step size so significantly that the optimal number of giant steps is $1$.
This essentially collapses the baby-step giant-step structure into just a single step that computes all $r$ iterations at once.
Therefore, by removing the giant steps in the BSGS algorithm, the $\mathsf{PtMatVecMult}$ collapses into a single instance of $\mathsf{HRotate}$ that includes the $\mathsf{PModUp}$ double-hoisting optimization, which allows the $\mathsf{PtMult}$ to be absorbed into the inner loop.
This algorithm is given in \Cref{algo:PtMatVecMult}, and the lines that differ from $\mathsf{HRotate}$ are in blue.
\begin{algorithm}
\caption{
$\mathsf{PtMatVecMult}(\mathbf{M}, \dbrack{\mathbf{x}}, \{k_i, \mathsf{ksk}_i\}_{i=1}^r ) = \dbrack{\mathbf{M}\mathbf{x}}$}
\label{algo:PtMatVecMult}
\begin{algorithmic}[1]
\State $(\a_\mathbf{x}, \b_\mathbf{x}) := \dbrack{\mathbf{x}}_\mathbf{s}$
\State $\vec{\a_\mathbf{x}} := \mathsf{Decomp}_\beta(\a_\mathbf{x})$ \Comment{$\beta$ digits.}
\State $\hat{\a}_j := \mathsf{ModUp}(\vec{\a_\mathbf{x}}[j])$ for $1 \leq j \leq \beta$.
\State {\color{blue}$(\hat{\a}_\mathbf{y}, \hat{\b}_\mathbf{y}) \gets 0, 0$ \Comment{We will have $\mathbf{y} = \mathbf{M}\mathbf{x}$.}}
\For{$i$ from $1$ to $r$}
\State $\hat{\a}_\mathsf{rot}^{(j)} := \mathsf{Automorph}(\hat{\a}_j, k_i)$ for $1 \leq j \leq \beta$
\State $(\hat{\u}, \hat{\v}) := \mathsf{KSKInnerProd}(\mathsf{ksk}_{i}, \vec{\hat{\a}_\mathsf{rot}})$
\State $\b_\mathsf{rot} := \mathsf{Automorph}(\b_\mathbf{x}, k_i)$
\State {\color{blue} $\hat{\b}_\mathsf{rot}, \hat{\mathbf{M}}_i \gets \mathsf{PModUp}(\b_\mathsf{rot}), \mathsf{PModUp}(\Delta \cdot \mathbf{M}_i)$}
\State {\color{blue}\Comment{$\mathbf{M}_i$ is the $i^{th}$ non-zero diagonal of $\mathbf{M}$}}
\State {\color{blue} $(\hat{\a}_\mathbf{y}, \hat{\b}_\mathbf{y}) \mathrel{+}= \hat{\mathbf{M}}_i\cdot(\hat{\u}, \hat{\v} + \hat{\b}_\mathsf{rot})$ \Comment{$\mathsf{PtMult}$}}
\EndFor\\
\Return {\color{blue}$(\mathsf{ModDown}(\hat{\a}_\mathbf{y}), \mathsf{ModDown}(\hat{\b}_\mathbf{y}))$}
\end{algorithmic}
\end{algorithm}
\paragraph*{Removing Giant-Steps Beyond Bootstrapping}
This optimization is not a bootstrapping-only optimization.
The hoisting optimizations that are described for $\mathsf{PtMatVecMult}$ for bootstrapping are more broadly applicable to the $\mathsf{InnerProduct}$ computation.
When multiple $\mathsf{InnerProduct}$ operations need to be performed in parallel, this hoisting optimization can be amortized across them, resulting in about a $35\%$ improvement in logistic regression training iterations for our running example.
\subsection{Compressing the Key with a PRNG}
\label{sec:key-compression}
This optimization is not our own; rather, it is a folklore technique often used to reduce communication when sending ciphertexts or keys over a network (e.g. it is used in Kyber, a leading candidate public-key encryption scheme in the ongoing NIST post-quantum cryptography standardization~\cite{BDKLLSSSS18}).
However, to our knowledge, we are the first to use this optimization to reduce the memory bandwidth for hardware acceleration of homomorphic encryption as well as the first to analyze this optimization alongside the other optimizations listed in this section.
As discussed in \Cref{sec:rotation-hoisting}, this optimization has subtle yet highly impactful effects on the other optimizations that we list, drastically changing the optimal parameters for CKKS bootstrapping.
This optimization is a natural result of the observation that half of the switching key consists of truly random polynomials.
By replacing these truly random polynomials with pseudorandom polynomials generated via a PRNG, we can avoid shipping the large random polynomials to and from DRAM, instead sending only the short PRNG seed.
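A minimal sketch of the idea (a hypothetical expander of ours built from SHA-256 in counter mode; a production implementation would use a vetted XOF and rejection sampling to avoid the modulo bias present here):

```python
import hashlib

def expand_a(seed: bytes, n: int, q: int) -> list:
    """Regenerate the pseudorandom half of a switching key from a short
    seed, so only the seed (not n full coefficients) is stored and moved."""
    out, ctr = [], 0
    while len(out) < n:
        block = hashlib.sha256(seed + ctr.to_bytes(4, "little")).digest()
        for i in range(0, 32, 8):
            # NOTE: plain reduction mod q is slightly biased; fine for a sketch.
            out.append(int.from_bytes(block[i:i + 8], "little") % q)
        ctr += 1
    return out[:n]
```

Because the expansion is deterministic, the accelerator can regenerate the polynomial on the fly instead of reading it from DRAM, halving key traffic.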
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{Figures/alg_opts.png}
\end{center}
\vspace{-0.15in}
\caption{
This figure displays the algorithmic optimizations described in \Cref{sec:algoopts}. The impact is assessed cumulatively, i.e., each successive optimization builds on top of the earlier ones. The baseline benchmark begins with all of the memory optimizations from \Cref{sec:cachingopts}. All benchmarks are taken with the \textbf{Best-case Parameters} from \Cref{tab:bootParams}. GOP on the y-axis stands for giga-operations.}
\vspace{-0.10in}
\label{fig:algoBarPlot}
\end{figure}
\subsection{Key Takeaways}
\label{sec:algoOptTakeaway}
Figure~\ref{fig:algoBarPlot} shows how various optimizations impact the operation count and the DRAM transfers for CKKS bootstrapping.
Moving from left to right on the plot, the arithmetic intensity improves as each successive optimization is applied; enabling all our optimizations results in a cumulative $2.43\times$ improvement and a final arithmetic intensity of $1.75$.
We now contextualize this compute and bandwidth optimization in the context of current computing platforms.
\paragraph*{Datacenter CPUs}
Consider an example of a top-of-line datacenter CPU such as the AMD EPYC 7763.
This CPU supports a maximum of $128$ parallel threads across $64$ SMT cores running at a base clock frequency of $2.45$~GHz.
This configuration supports a peak theoretical integer throughput of $2.5$~TOp/s (each operation here is a $64$-bit integer fused multiply-add in AVX256 mode).
Each socket consists of $8$ compute dies (CCDs), each with a local $32$~MiB L3 cache.
The total L3 cache per socket comes out to $256$~MiB.
Additionally, the socket offers an $8$-channel DDR4-$3200$ memory subsystem with an aggregate bandwidth of $204$~GB/s.
At first glance, the total L3 capacity appears more than sufficient for storing multiple ciphertexts in cache.
However, the die-to-die bandwidth is limited by the underlying interconnect (Infinity Fabric) to $51.2$~GiB/s for reads and $25.6$~GiB/s for writes.
There are similar bandwidth limits at the L1-L2 and L2-L3 interfaces on each die.
Thus, it is necessary to consider the compute available on each die in the context of the bandwidth available to that die.
Each CCD pairs $310$~GOp/s with $51.2$~GiB/s of memory bandwidth.
This gives a theoretical INT64 FMA arithmetic intensity of $\sim 6$.
On current hardware, $64$-bit modular operations need to be emulated using multiple arithmetic operations as seen in section \ref{ssec:mod_arith}.
Compensating for this, we observe that the final arithmetic intensity of our bootstrapping procedure is similar to what can be supported by state-of-art CPUs.
Note that the addition of a modular-arithmetic vector extension to existing vector engines would already result in the overall application being memory-bottlenecked.
\paragraph*{Datacenter GPUs}
For GPU analysis we consider the NVIDIA A100 datacenter GPU.
This GPU offers a peak $19.5$~TOp/s $32$-bit Integer FMA performance when clocked at $1.41$~GHz.
It has an on-chip $40$~MB L2 last-level cache and uses an HBM2 DRAM interface supporting $1.55$~TB/s of bandwidth.
Note again that a single die cannot fit a complete ciphertext in memory.
Applications with an INT32 FMA arithmetic intensity lower than $\sim 12$ will tend to be memory bottlenecked.
In addition, $64$-bit integer arithmetic is not natively supported on a datacenter GPU and must be emulated in assembly, which has a significant overhead (up to $20$ instructions for a $64$-bit integer multiply).
As such, for GPU implementations it is advisable to use an RNS representation with $32$-bit limbs to avoid this overhead.
The addition of native $32$-bit modular multiplication to future GPUs will further worsen the memory bottleneck.
\emph{While the above estimations are simplistic and do not take into account the intricacies of instruction scheduling, the underlying point remains that raw access to compute power is not what bottlenecks existing FHE implementations.
Building new hardware that merely adds an order-of-magnitude to the compute capability is unlikely to give an order of magnitude performance improvements without addressing the memory side of the story.}
|
1,314,259,994,281 | arxiv | \section{Problem Specification.}In this paper, we consider the solution of the $N \times
N$ linear
system
\begin{equation} \label{e1.1}
A x = b
\end{equation}
where $A$ is large, sparse, symmetric, and positive definite. We consider
the direct solution of (\ref{e1.1}) by means of general sparse Gaussian
elimination. In such a procedure, we find a permutation matrix $P$, and
compute the decomposition
\[
P A P^{t} = L D L^{t}
\]
where $L$ is unit lower triangular and $D$ is diagonal.
\section{Design Considerations.}Several good ordering algorithms (nested dissection and
minimum degree)
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
Since our interest here does not
focus directly on the ordering, we assume for convenience that $P=I$,
or that $A$ has been preordered to reflect an appropriate choice of $P$.
Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination. This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
\begin{theorem} The method was extended to three
dimensions. For the standard multigrid
coarsening
(in which, for a given grid, the next coarser grid has $1/8$
as many points), anisotropic problems require plane
relaxation to
obtain a good smoothing factor.\end{theorem}
Our purpose here is to examine the nonnumerical complexity of the
sparse elimination algorithm given in \cite{BANKSMITH}.
As was shown there, a general sparse elimination scheme based on the
bordering algorithm requires less storage for pointers and
row/column indices than more traditional implementations of general
sparse elimination. This is accomplished by exploiting the m-tree,
a particular spanning tree for the graph of the filled-in matrix.
Several good ordering algorithms (nested dissection and minimum degree)
are available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}.
Since our interest here does not
focus directly on the ordering, we assume for convenience that $P=I$,
or that $A$ has been preordered to reflect an appropriate choice of $P$.
\begin{proof} In this paper we consider two methods. The first method
is
basically the method considered with two differences:
first, we perform plane relaxation by a two-dimensional
multigrid method, and second, we use a slightly different
choice of
interpolation operator, which improves performance
for nearly singular problems. In the second method coarsening
is done by successively coarsening in each of the three
independent variables and then ignoring the intermediate
grids; this artifice simplifies coding considerably.
\end{proof}
\begin{Definition}{\rm We describe the two methods in \S 1.2. In \S 1.3 we
discuss some remaining details.}
\end{Definition}
\begin{lemma} We discuss first the choice for $I_{k-1}^k$
which is a generalization. We assume that $G^{k-1}$ is
obtained
from $G^k$
by standard coarsening; that is, if $G^k$ is a tensor product
grid $G_{x}^k \times G_{y}^k \times G_{z}^k$,
$G^{k-1}=G_{x}^{k-1} \times G_{y}^{k-1} \times G_{z}^{k-1}$,
where $G_{x}^{k-1}$ is obtained by deleting every other grid
point of $G_x^k$ and similarly for $G_{y}^k$ and $G_{z}^k$.
\end{lemma}
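The coarsening in the lemma can be illustrated with a toy sketch (our own, not code from any cited work): each axis keeps every other grid point, so a tensor-product grid with $2m$ points per axis coarsens to one with $1/8$ as many points, matching the standard multigrid coarsening mentioned earlier.

```python
def coarsen_axis(axis_points):
    """Standard coarsening along one axis: delete every other grid point."""
    return axis_points[::2]

def coarsen_grid(gx, gy, gz):
    """Coarsen a tensor-product grid G = Gx x Gy x Gz axis by axis."""
    return coarsen_axis(gx), coarsen_axis(gy), coarsen_axis(gz)
```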
To our knowledge, the m-tree previously has not been applied in this
fashion to the numerical factorization, but it has been used,
directly or indirectly, in several optimal order algorithms for
computing the fill-in during the symbolic factorization phase
\cite{EISENSTAT} - \cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.
In \S 1.2, we review the bordering algorithm, and introduce
the sorting and intersection problems that arise in the
sparse formulation of the algorithm.
In \S 1.3., we analyze the complexity of the old and new
approaches to the intersection problem for the special case of
an $n \times n$ grid ordered by nested dissection. The special
structure of this problem allows us to make exact estimates of
the complexity.
For the old approach, we show that the
complexity of the intersection problem is $O(n^{3})$, the same
as the complexity of the numerical computations. For the
new approach, the complexity of the second part is reduced to
$O(n^{2} (\log n)^{2})$.
\subsection{Robustness.}We do not
attempt to present an overview
here, but rather attempt to focus on those results that
are relevant to our particular algorithm.
This section assumes prior knowledge of the role of graph theory
in sparse Gaussian elimination; surveys of this role are
available in \cite{ROSE72} and \cite{GEORGELIU}. More general
discussions of elimination trees are given in
\cite{LAW} - \cite{LIU2}, \cite{SCHREIBER}.
Thus, at the $k$th stage, the bordering algorithm consists of
solving the lower triangular system
\begin{equation} \label{1.2}
L_{k-1}v = c
\end{equation}
and setting
\begin{eqnarray}
\ell &=& D^{-1}_{k-1}v , \\
\delta &=& \alpha - \ell^{t} v .
\end{eqnarray}
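A minimal dense sketch of this stage in Python (our own illustration; a sparse implementation would restrict these loops to the nonzero structure discussed above). Here `L` and `D` hold the factorization of the leading $(k-1)\times(k-1)$ block, and `c`, `alpha` are the new border column and diagonal entry.

```python
def border_step(L, D, c, alpha):
    """Extend an L D L^t factorization of A_{k-1} to one of the bordered
    matrix [[A_{k-1}, c], [c^t, alpha]]: returns the new row ell of L and
    the new diagonal entry delta of D."""
    m = len(D)
    v = [0.0] * m
    for i in range(m):                      # forward-solve L_{k-1} v = c
        v[i] = c[i] - sum(L[i][j] * v[j] for j in range(i))
    ell = [v[i] / D[i] for i in range(m)]   # ell = D^{-1} v
    delta = alpha - sum(ell[i] * v[i] for i in range(m))
    return ell, delta
```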
\begin{figure}
\vspace{14pc}
\caption{This is a figure 1.1.}
\end{figure}
\subsection{Versatility.}The special
structure of this problem allows us to make exact estimates of
the complexity. For the old approach, we show that the
complexity of the intersection problem is $O(n^{3})$, the same
as the complexity of the numerical computations
\cite{GEORGELIU}, \cite{ROSEWHITTEN}. For the
new approach, the complexity of the second part is reduced to
$O(n^{2} (\log n)^{2})$.
\section{Introduction}
In computational biology, it is commonplace to use dissimilarity information between species to reconstruct a phylogenetic tree, in which the leaves are the species and the internal nodes represent common ancestors.
Sequence distances can be used for this task, but are known to be unreliable~\cite{felsenstein2004inferring,philippe2011resolving}. In 2002, Nishimura et al.~\cite{nishimura2002graph} proposed that each pair of species should simply be considered as either \emph{close} or \emph{far}. There should then be a threshold $k$ such that in the phylogeny, close species are at distance at most $k$ and far species at distance more than $k$. The $k$-leaf power problem arises when we model species as the vertices of a graph $G$ in which edges represent closeness.
More specifically, we say that a graph $G$ is a \emph{$k$-leaf power} if there exists a tree $T$ such that the set of leaves of $T$ is $V(G)$, and such that $uv \in E(G)$ if and only if $dist_T(u, v) \leq k$, where $dist_T(u, v)$ is the distance between $u$ and $v$ in $T$ and $u, v$ are distinct.
The tree $T$ is called a \emph{$k$-leaf root} of $G$.
The graph $G$ is a \emph{leaf power} if it is a $k$-leaf power for some $k$.
Since their introduction, these graph classes have attracted the attention of both algorithm designers and graph theoreticians.
Two important problems have remained open for the last two decades. The first is to obtain a precise graph-theoretical characterization of $k$-leaf powers in terms of $k$. A fundamental question asks whether, for all $k$, $k$-leaf powers can be characterized as chordal graphs that forbid a finite set of induced subgraphs. This is known to be true for $k = 2, 3, 4$, but unknown for higher $k$~\cite{brandstadt2006structure,brandstadt2008structure}.
The second open problem is whether one can decide in polynomial time whether a graph $G$ is a $k$-leaf power, where $k$ could be fixed or given. This has been a longstanding problem even in the case $k \in O(1)$. Polynomial-time recognition is possible for $k \leq 6$~\cite{chang2006linear,ducoffe20194}, and the technical feats required to solve the case $k = 6$ show that extending these results is far from trivial.
In this work, we tackle the latter question and show that polynomial-time recognition is indeed possible for any constant $k$.
\subsection{Related work}
It is well-known that all $k$-leaf powers are chordal, and they are also strongly chordal (see e.g.~\cite{brandstadt2010rooted}).
The $2$-leaf powers are collections of disjoint cliques and
the $3$-leaf powers are exactly the chordal graphs that are bull, dart and gem-free~\cite{brandstadt2006structure,rautenbach2006some}. This can be used to recognize them in linear time.
For $k = 4$, a characterization of twin-free $4$-leaf powers in terms of chordality and a small set of forbidden induced subgraphs is established, again leading to a linear time algorithm~\cite{brandstadt2008structure}.
For $k \geq 5$, a characterization still escapes us, except for distance-hereditary $5$-leaf powers~\cite{brandstadt2009forbidden}.
It is known that all $k$-leaf powers are also $(k+2)$-leaf powers, but that they are not all $(k+1)$-leaf powers~\cite{brandstadt2008k}.
Chang and Ko~\cite{chang20073} have developed a linear time recognition algorithm for $5$-leaf powers, using a reduction from a similar problem known as the $3$-Steiner root problem, in which members of $V(G)$ may also appear as internal nodes of the desired tree.
Recently, Ducoffe showed that $6$-leaf powers are polynomial-time recognizable~\cite{ducoffe20194}, this time using a reduction from the $4$-Steiner root problem and an elaborate dynamic programming approach. The case $k = 6$ is the largest that has been solved so far.
Let us mention that $k$-leaf powers have bounded clique-width~\cite{gurski2009nlc}, and that in~\cite{dom2004error,dom2005extending}, the problem of editing a graph to a $k$-leaf power is studied.
A recent result of Eppstein and Havvaei~\cite{eppstein2020parameterized} states that recognizing $k$-leaf powers is fixed parameter tractable (FPT) in $k + \delta(G)$, where $\delta(G)$ is the degeneracy of the graph. Note that since a $k$-leaf power $G$ is chordal, it is known that $tw(G) = \omega(G) - 1 \leq \delta(G) \leq tw(G)$, where $tw(G)$ and $\omega(G)$ are the treewidth and clique number, respectively, showing that the problem is also FPT in $tw(G) + k$.
We also mention that in~\cite{chen2003computing}, Chen et al. show that if we require each node of the $k$-leaf root to have degree between $3$ and $d$ (except leaves), then finding such a $k$-leaf root, if any, is FPT in $k + d$.
Recognizing the larger class of leaf powers is also a challenging open problem. Subclasses of strongly chordal graphs have been shown to be leaf powers~\cite{kennedy2006strictly,brandstadt2008ptolemaic,nevries2016towards}, but not all strongly chordal graphs are leaf powers~\cite{brandstadt2010rooted,lafond2017strongly,jaffke2019mim}.
Other variants and generalizations of $k$-leaf powers have also been proposed~\cite{brandstadt2007k,calamoneri2016pairwise}.
\subsection{Our contributions}
In this work, we show that for any constant $k \geq 2$, one can decide whether a graph $G$ is a $k$-leaf power in time $O(n^{f(k)})$.
Here, the function $f$ depends only on $k$, and thus $k$-leaf powers can be recognized in polynomial time for any constant $k$. We must concede that $f(k)$ grows faster than a power tower function with base $k$ and height $3k$, i.e. $f(k) \in \Omega(k \uparrow \uparrow (3k))$, using Knuth's up arrow notation. We did not attempt to optimize $f(k)$, and it is possible that the techniques presented here can be refined in the future to attain a more reasonable $f(k)$ exponent, or even to obtain an FPT algorithm in parameter $k$.
To the best of our knowledge, several tools developed for this result have not been applied before.
The main idea is that if $G$ is a $k$-leaf power, then either $G$ admits a $k$-leaf root of low maximum degree, in which case such a root can be found ``easily", or every $k$-leaf root of $G$ has large maximum degree, in which case $G$ contains redundant substructures.
By this, we mean that $G$ has a large number of vertex-disjoint subgraphs that are easy to solve individually and admit the ``same kind" of $k$-leaf roots. We can then argue that we can simply remove one of those redundant subsets from $G$, obtain an equivalent instance, and repeat the process. We do borrow ideas from~\cite{chen2003computing,eppstein2020parameterized}, since we handle the ``easy" instances mentioned above using dynamic programming on a tree decomposition.
Although the above ideas are used for algorithmic purposes, they may shed light on the graph theoretical characterization of $k$-leaf powers. Indeed, our collection of similar subgraphs satisfy a number of graph properties of interest (see next subsection). Combined with the knowledge gained from prior work, our side results may thus help understanding the structure of $k$-leaf powers. It is also plausible that our techniques can be applied to solve open problems on the larger class of pairwise compatibility graphs~\cite{calamoneri2016pairwise} and their variants, where edges represent a distance in an interval $[d_1, d_2]$ and non-edges represent a distance not in that interval.
\subsection{Overview of our algorithm}
Our approach requires a bit of a setup in terms of definitions, so here we first provide the main intuitions.
Assume for the moment that $G$ admits a $k$-leaf root $T$ of maximum degree $d := d(k)$, some quantity that depends only on $k$. Then for any leaf $v$ in $T$, there are at most $d^k$ other leaves of $T$ at distance at most $k$ from $v$. This implies that in $G$, $v$ has at most $d^k$ neighbors, and so the maximum degree of $G$ is at most $d^k$. This bounds the maximum clique number and, since $G$ is chordal, also bounds the treewidth by $d^k$. Eppstein and Havvaei~\cite{eppstein2020parameterized} have shown that in this setting, one can decide whether $G$ is a $k$-leaf power in time $O((k d^k)^{c d^k} n)$ for some constant $c$.
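The leaf-counting bound in this paragraph is easy to verify empirically. The following sketch (our own illustration, not code from~\cite{eppstein2020parameterized}) builds a complete $d$-ary tree and collects, for a given leaf, the other leaves within tree distance $k$; their number is bounded by $d^k$.

```python
from collections import deque

def complete_tree(d, depth):
    """Adjacency lists of a complete d-ary tree with the given depth."""
    adj = {0: set()}
    frontier, nxt = [0], 1
    for _ in range(depth):
        new_frontier = []
        for u in frontier:
            for _ in range(d):
                adj[u].add(nxt)
                adj[nxt] = {u}
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    return adj

def leaves_within(adj, v, k):
    """Leaves (degree-one nodes) at tree distance <= k from v, excluding v."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        if dist[u] == k:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return [u for u in dist if u != v and len(adj[u]) == 1]
```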
The difficult cases therefore arise when every $k$-leaf root of $G$ has maximum degree above $d$.
At a very high level, our approach for this case can be described in four essential steps.
\begin{enumerate}
\item
Find a large collection $\{C_1 \cup Y_1, \ldots, C_d \cup Y_d\}$ of disjoint subsets of $V(G)$ that have a ``similar" neighborhood structure and that are easy to solve.
Each $C_i$ is small and cuts $Y_i$ from the rest of the graph, and each subgraph induced by $C_i \cup Y_i$ has maximum degree at most $d^k$.
\item
Consider the set of \emph{all} $k$-leaf roots of each subgraph of $G$ induced by the $C_i \cup Y_i$ subsets, and ensure that many of these sets of $k$-leaf roots are ``similar".
If $d$ is large enough, this will be the case.
\item
Check whether $G - (C_1 \cup Y_1)$ admits a $k$-leaf root. If not, we are done, but if so, let $T$ be such a $k$-leaf root.
\item
Look at how the $C_i \cup Y_i$ subsets are organized within $T$, for $i > 1$. Since they have a similar structure as $C_1 \cup Y_1$ and admit the same type of $k$-leaf roots, we can find a $k$-leaf root $T_1$ of $G[C_1 \cup Y_1]$ that we can embed into $T$, while mimicking the organization of the $C_i \cup Y_i$'s.
\end{enumerate}
Of course, we need to be precise about what is meant by a ``similar" neighborhood structure, and a ``similar" set of $k$-leaf roots. We need two ingredients: the notion of a \emph{similar structure of $G$}, and the notion of the \emph{signature} of a tree.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{leaf-power-intuition.pdf}
\caption{On the left: a $k$-leaf root $T$ with a node $v$ with more than $d$ children. We have chosen $d$ of them, namely $v_1, \ldots, v_d$. The leaf $z$ is chosen to be at minimum distance from $v$ in $T$. The $C_i$ labels represent the leaves descending from $v_i$ at distance at most $k$ from $z$. Each $C_i$ is organized into at most $k$ layers, where layer $j$ consists of the vertices at distance exactly $j$ from $v$. The $Y_i$ labels represent the deeper leaves below $v_i$.
On the right: the structure of $G$ implied by $z$ and the $C_i \cup Y_i$ leaves of $T$. In $G$, vertices from the same layer have the same neighbors outside their respective $C_i \cup Y_i$.
}
\label{fig:intuition}
\end{figure}
\paragraph{Similar structures.}
Let $T$ be a $k$-leaf root of $G$ of maximum degree above some large enough $d$, and suppose that $T$ is rooted.
We look at the leaves below a deepest high degree node of $T$, and want to understand how their structure is reflected in $G$. This is represented in Figure~\ref{fig:intuition}.
More specifically, $T$ has some deepest node $v$ with \emph{at least} $d + 1$ children and whose descendants all have \emph{at most} $d$ children. Let $v_1, \ldots, v_d$ be $d$ arbitrary children of $v$. Let $T_1, \ldots, T_d$ denote the subtrees rooted at the $v_i$'s. Take a leaf $z$ of $T$ that is as close as possible to $v$, but that does not belong to a $T_i$ subtree (such a $z$ exists because $v$ has more than $d$ children).
Then the leaves at distance at most $k$ from $z$ in a $T_i$ subtree form a subset $C_i \subseteq V(G)$ of neighbors of $z$. Since $T_i$ has maximum degree $d$, one can argue that $|C_i| \leq d^k$. Moreover, if we assume that $G$ is connected, it can be argued that $C_i \neq \emptyset$.
In fact in $G$, each $C_i$ separates the other leaves contained in $T_i$ from the rest of $G$. Let us call these other leaves $Y_i$. Note that $Y_i$ might be empty, so $C_i$ might not exactly be a separator, and even if it is, it might not be minimal.
Nevertheless, the degree bound on $v_i$ and its descendants implies that in $G$, the induced subgraph $G[C_i \cup Y_i]$ has maximum degree $d^k$.
In other words, a high-degree $k$-leaf root of $G$ implies the existence of a large number of disjoint $C_i \cup Y_i$ subsets of vertices that are each ``easy" to solve, and such that there is a vertex $z$ that is a neighbor of each member of each $C_i$.
There is yet another property of the $C_i$'s that is useful. If we look at the leaves in $C_i$ at distance $j$ from $v$ for some $j \in [k]$, and the leaves in some other $C_{i'}$ also at distance exactly $j$ from $v$, all these leaves share the same distance to the leaves ``outside" of $T_i$ and $T_{i'}$. In terms of $G$, the $C_i$ vertices can be ``layered" so that vertices in the same layer have the same outside neighborhood in $G$, where the layer of a vertex is an integer between $1$ and $k$.
This is what is meant by a similar neighborhood structure.
\paragraph{Exploiting similar structures.}
Obviously, we do not have access to such a $k$-leaf root to find the structure as we just did, even though we know it exists. However, it can be found in $G$ by brute force, and
our ultimate goal is to show that $G$ is a $k$-leaf power if and only if $G - (C_1 \cup Y_1)$ is a $k$-leaf power (where the choice of $C_1 \cup Y_1$ is arbitrary).
So, imagine that we have found $z$ and the $C_i \cup Y_i$'s as in Figure~\ref{fig:intuition} on the right, along with a layering of the $C_i$'s, but that we do not have the tree $T$ on the left. In fact, the actual $k$-leaf root might not look like $T$ at all, and it might intertwine its $C_i \cup Y_i$ leaf sets.
Nevertheless, if $d$ is large enough, we show that many of the $G[C_i \cup Y_i \cup \{z\}]$ subgraphs admit a ``similar" set of $k$-leaf roots.
To make this notion clearer, suppose that we take \emph{every} $k$-leaf root of a $G[C_i \cup Y_i \cup \{z\}]$ subgraph, look at their restriction to $C_i \cup \{z\}$, and replace each leaf of $C_i$ by its layer (see Figure~\ref{fig:sig}.c).
By ``similar" sets of $k$-leaf roots, we mean that these sets of restricted $k$-leaf roots are exactly the same for many of the $G[C_i \cup Y_i \cup \{z\}]$ subgraphs.
These can be computed using dynamic programming on a tree decomposition of $G[C_i \cup Y_i \cup \{z\}]$.
This is an oversimplification, but let us go with it for a moment.
Assuming the above is feasible, we could first find a $k$-leaf root $T$ of $G - (C_1 \cup Y_1)$ (if none exists, then $G$ is not a $k$-leaf power). If $d$ is large enough, several of the $C_i \cup \{z\}$ subsets will be organized in the same manner in $T$, i.e. restricting $T$ to these $C_i \cup \{z\}$ subsets and replacing $C_i$ leaves by their layer will yield the same tree. We can then find a $k$-leaf root $T_1$ of $G[C_1 \cup Y_1 \cup \{z\}]$ with the same structure as these, and embed $T_1$ with the same organization as the others in $T$. Because $C_1$ is layered in the same manner as the other $C_i$'s, mimicking their structure in $T$ ensures that the distance relationships with the other vertices are satisfied.
The $z$ vertex is important, since it serves as a common point between all the $C_i$'s and indicates where to start embedding.
At this point, it becomes difficult to describe how this embedding is performed more concretely,
and the interested reader is redirected to the proof of Theorem~\ref{thm:iffc1} for more details.
\paragraph{Tree signatures.}
As we mentioned, expecting the $G[C_i \cup Y_i \cup \{z\}]$ subgraphs to admit exactly the same sets of $k$-leaf roots, even if we restrict them to $C_i \cup \{z\}$ and replace the leaves by their layer, does not quite work.
First, each restricted subtree must also remember how far the $Y_i$ vertices are from the nodes of the restricted subtree, to ensure that we do not make the $Y_1$ nodes too close to other nodes during the embedding.
This can be done by labeling the internal nodes of our restricted trees with the distance to the closest $Y_i$ leaf, as in Figure~\ref{fig:sig}.c. Note that a similar trick was done by Chen et al. in~\cite{chen2003computing}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{signatures_nolayer.pdf}
\caption{(a) A subgraph of $G$ induced by $C_i \cup Y_i \cup \{z\}$. Vertices in black are those of $C_i$, and in gray those of $Y_i$. The labels $\ell_1, \ell_2, \ell_3$ represent the layer assigned to each vertex of $C_i$ (several vertices can have the same layer). (b) A $4$-leaf root $T$ for this subgraph. The leaves of $C_i$ are labeled by their layer. (c) The restriction of $T$ to $C_i \cup \{z\}$, but in which each internal node remembers the distance to the closest pruned leaf that it led to (if any). (d) A tree representing the signature of the restricted $4$-leaf root. Each time a node has three identical subtrees, one of them is considered redundant and is pruned.}
\label{fig:sig}
\end{figure}
Second, we cannot guarantee that enough of the $G[C_i \cup Y_i \cup \{z\}]$ subgraphs will admit \emph{exactly} the same set of restricted $k$-leaf roots, since the number of possibilities is too large.
The solution is to define a compact representation of $k$-leaf roots restricted to some $C_i \cup \{z\}$, which we call its \emph{signature}.
This representation prunes information from the tree that is not necessary for our embedding. Conceptually, to obtain the signature of a tree, we look at each node, and if we find a node that has three or more identical child subtrees, we remove one of these subtrees. This is fine for our embedding, since the removed subtree can be inserted alongside the identical ones that remain. We repeat until this is not possible.
This is illustrated in Figure~\ref{fig:sig}. From (a) a $G[C_i \cup Y_i \cup \{z\}]$ subgraph, we look at (b) each $k$-leaf root with leaves of the $C_i$'s replaced by their layer, then (c) restrict to $C_i \cup \{z\}$ and remember the distances to the removed leaves, and finally (d) prune redundant subtrees. This gives the compact representation of one $k$-leaf root, and we must obtain them all.
This compact representation allows far fewer possibilities, and we can then guarantee with a pigeonhole argument that, for large enough $d$, many $G[C_i \cup Y_i \cup \{z\}]$ will admit the exact same set of signatures. This works because the number of possible signatures does not depend on $d$, only on $k$. Therefore, we can make $d$ as large as desired for our argument.
Note that concretely, signatures are represented as vectors of integers that encode the same information, since it allows for simpler proofs (see Section 3).
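Before proceeding, the pruning rule can be sketched concretely (our own illustration; as just noted, the actual signatures are encoded as integer vectors). Representing a tree as a nested `(label, children)` tuple, a bottom-up pass keeps at most two copies of any identical child subtree, which is equivalent to repeatedly removing one copy whenever three or more are identical.

```python
def prune(tree):
    """tree = (label, tuple_of_subtrees). Whenever a node has three or more
    identical child subtrees, drop the extra copies (keep at most two).
    Children are pruned first, so repeats created by pruning are detected."""
    label, children = tree
    pruned = sorted((prune(c) for c in children), key=repr)  # canonical order
    kept, seen = [], {}
    for c in pruned:
        seen[c] = seen.get(c, 0) + 1
        if seen[c] <= 2:
            kept.append(c)
    return (label, tuple(kept))
```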
We now proceed with the details.
\section{Preliminary notions}
For an integer $n$, we use the notation $[n] = \{1, 2, \ldots, n\}$.
All graphs in this paper are finite, simple and undirected. For a graph $G$ and $v \in V(G)$, we denote by $N_G(v)$ the set of neighbors of $v$ in $G$, and we write $N_G[v] = N_G(v) \cup \{v\}$.
For $X \subseteq V(G)$, we denote by $N_G(X) = \bigcup_{x \in X} (N_G(x) \setminus X)$ the neighbors of members of $X$ that are outside of $X$. Also, we write $N_G[X] = N_G(X) \cup X$.
We may drop the subscript $G$ if it is clear from the context. For $X \subseteq V(G)$, we denote by $G[X]$ the subgraph of $G$ induced by $X$.
We define a \emph{connected component} as a maximal \emph{set} of vertices $X$ such that $G[X]$ is connected.
Unless stated otherwise, all trees in this paper are \emph{rooted}. Hence we will usually say \emph{tree} instead of \emph{rooted tree}.
We denote the root of a tree $T$ by $r(T)$.
For a node $v \in V(T)$, we write $ch_T(v)$ for the set of children of $v$ in $T$. The \emph{arity} of $T$ is $\max_{v \in V(T)}|ch_T(v)|$.
We say that $v$ is a \emph{leaf} if it has no children.
We write $L(T)$ to denote the set of leaves of $T$.
It is important to note that leaves are sometimes defined as nodes with a single neighbor in $T$. This slightly differs from our definition, since if $r(T)$ has a single child in $T$, it is not treated as a leaf here.
The only case in which the root is also a leaf is when $T$ has a single vertex (which has no children).
A node $u \in V(T)$ is a \emph{descendant} of another node $v \in V(T)$ if $v$ is on the path from $r(T)$ to $u$. In this case, $v$ is an \emph{ancestor} of $u$. Note that $v$ is a descendant and ancestor of itself.
Given a tree $T$ and some $v \in V(T)$, we let $T(v)$ denote the subtree rooted at $v$, i.e. the subgraph of $T$ induced by $v$ and all its descendants.
The distance between two nodes $u$ and $v$ of $T$ is denoted $dist_T(u, v)$.
We define $height(T) = 1 + \max_{l \in L(T)} dist_T(r(T), l)$.
Two trees $T_1$ and $T_2$ are \emph{equal} if $r(T_1) = r(T_2)$ and $(V(T_1), E(T_1)) = (V(T_2), E(T_2))$, in which case we write $T_1 = T_2$. Two trees $T_1$ and $T_2$ are called \emph{leaf-isomorphic}, denoted $T_1 \simeq_L T_2$, if $L(T_1) = L(T_2)$, and there exists a bijection $\mu : V(T_1) \rightarrow V(T_2)$ such that $\mu(u) = u$ for every $u \in L(T_1)$, $\mu(r(T_1)) = r(T_2)$, and such that $uv \in E(T_1)$ if and only if $\mu(u)\mu(v) \in E(T_2)$. We call $\mu$ a leaf-isomorphism.
Note that this is stronger than the usual notion of isomorphism, since we require $T_1$ and $T_2$ to be built with the same set of leaves. Also observe that since leaves must be matched, the $\mu$ function is unique\footnote{To see this, observe that $\mu$ is forced for the leaves of $T_1$. Then, $\mu$ is forced for the parents of those leaves, then for the grandparents, and so on.}.
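Since $\mu$ is forced, leaf-isomorphism can be decided by comparing canonical encodings, as in the sketch below (our own illustration, not from the paper). Trees are nested tuples whose leaves are name strings; correctness uses the fact that distinct children contain disjoint, nonempty leaf sets, so sorted child encodings match uniquely.

```python
def canon(tree):
    """tree = a leaf name (a string), or a tuple of child trees.
    Returns a canonical form that is equal for two trees if and only if
    they are leaf-isomorphic: children are sorted, leaves keep their names."""
    if isinstance(tree, str):        # a leaf keeps its identity
        return ('leaf', tree)
    return ('node', tuple(sorted((canon(c) for c in tree), key=repr)))

def leaf_isomorphic(t1, t2):
    return canon(t1) == canon(t2)
```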
Let $X \subseteq V(T)$.
The \emph{restriction} of $T$ to $X$, denoted $T|X$, is the subgraph of $T$ induced by $X$ and every vertex of $T$ on the shortest path between two elements of $X$. We shall repeatedly use the fact that for $u, v \in X$, $dist_{T|X}(u, v) = dist_T(u, v)$, and that
$V(T|X) \subseteq V(T)$.
Note that for $v \in V(T|X)$, $ch_{T}(v) \setminus ch_{T|X}(v)$ denotes the set of children of $v$ that were ``removed" by the restriction.
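As a small sketch (our own illustration, on unrooted adjacency lists), the vertex set of $T|X$ can be computed by collecting, for each pair of elements of $X$, the nodes on the tree path between them; distances within $X$ are then trivially preserved, since $T|X$ contains those paths.

```python
from collections import deque

def restrict(adj, X):
    """Return the vertex set of T|X: all vertices of the tree `adj`
    (a dict node -> set of neighbors) lying on a path between members of X."""
    X = list(X)
    keep = set(X)
    for i in range(len(X)):
        # BFS from X[i], then walk parent pointers back from each X[j]
        parent = {X[i]: None}
        q = deque([X[i]])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    q.append(w)
        for j in range(i + 1, len(X)):
            u = X[j]
            while u is not None:
                keep.add(u)
                u = parent[u]
    return keep
```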
\subsection*{Leaf powers and their properties}
The main definition of interest in this paper is the following.
\begin{definition}
Let $G$ be a graph and let $k$ be a positive integer. A \emph{$k$-leaf root} of $G$ is a tree $T$ such that $L(T) = V(G)$, and such that for all distinct $u, v \in V(G)$, $uv \in E(G)$
if and only if $dist_T(u, v) \leq k$.
The graph $G$ is a \emph{$k$-leaf power} if there exists a $k$-leaf root of $G$.
\end{definition}
Note that according to our definitions, the tree $T$ is implicitly rooted.
This is not required in the usual definition of leaf powers, although this is inconsequential for $k \geq 2$. This is because if an unrooted $k$-leaf root $T$ has an internal node, or consists of a single node, we may use that node as the root. Otherwise, $T$ consists of exactly two vertices $u$ and $v$, both of degree $1$. In this case $uv \in E(G)$, and since $k \geq 2$, we may subdivide the edge $uv$ with a new internal node and use it as the root.
Also observe that the $k$-leaf power property is hereditary. That is,
if $G$ is a $k$-leaf power and $X \subseteq V(G)$, then $G[X]$ is a $k$-leaf power. This is because if $T$ is a $k$-leaf root of $G$, then $T|X$ is a $k$-leaf root of $G[X]$, since restrictions preserve distances.
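The defining condition of a $k$-leaf root can be checked directly by comparing leaf-to-leaf distances in the tree against the edge set of $G$. The following minimal Python sketch (all names are ours, purely for illustration) does exactly this by running a BFS from each leaf.

```python
from collections import deque

def is_k_leaf_root(tree, graph_edges, k):
    """Check whether `tree` (an adjacency dict) is a k-leaf root of the
    graph on L(T) = {leaves of tree} with edge set `graph_edges`.
    Leaves are taken to be the nodes of degree at most 1."""
    leaves = [v for v, nbrs in tree.items() if len(nbrs) <= 1]
    edges = {frozenset(e) for e in graph_edges}

    def bfs_dist(src):
        # distances from src to every node of the tree
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in tree[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist

    for u in leaves:
        d = bfs_dist(u)
        for v in leaves:
            # uv must be an edge of G exactly when dist_T(u, v) <= k
            if u != v and (d[v] <= k) != (frozenset((u, v)) in edges):
                return False
    return True
```

For instance, a star whose center is adjacent to three leaves is a $2$-leaf root of the triangle on those leaves, but not a $1$-leaf root.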
A graph is \emph{chordal} if it has no induced cycle of length $4$ or more. The treewidth of $G$ is denoted $tw(G)$ (we defer the definition of treewidth to Section~\ref{sec:tw}).
The following is a well-known property of $k$-leaf powers, for any $k$ (see~\cite{nishimura2002graph,brandstadt2010rooted}).
\begin{lemma}\label{lem:chordal}
Let $G$ be a $k$-leaf power. Then $G$ is chordal.
\end{lemma}
We will be interested in $k$-leaf roots that have high arity.
In~\cite[section 7]{eppstein2020parameterized}, Eppstein and Havvaei have shown that if $G$ is a graph of bounded treewidth, then deciding if $G$ is a $k$-leaf power is fixed-parameter tractable in $k + tw(G)$.
As we show, this implies that low-arity $k$-leaf roots, if any, can be found using this result.
\begin{lemma}\label{lem:prelim:bounded-arity}
Let $G$ be a graph with $n$ vertices. If there exists a $k$-leaf root of $G$ of arity at most $d$, then $G$ has maximum degree at most $d^k$.
Moreover, one can decide in time $O(n (d^k k)^{c d^k})$ whether such a $k$-leaf root exists, where $c$ is a constant.
\end{lemma}
\begin{proof}
Assume that $T$ is a $k$-leaf root of $G$ of arity at most $d$.
Let $v \in V(G)$. Then in $T$, the number of leaves at distance at most $k$ from $v$ is bounded by $d^k$ (since if we imagine rerooting $T$ at $v$, each node still has at most $d$ children --- the astute reader will see that we could optimize to $d^{k-1}$, but let us not bother). This implies that $v$ has at most $d^k$ neighbors in $G$. Since this holds for every vertex, $G$ has maximum degree at most $d^k$.
As for the second part of the lemma, we know by~\cite{eppstein2020parameterized} that deciding if $G$ is a $k$-leaf power can be done in time $O(n \cdot (tw(G) k)^{c \cdot tw(G)})$ for some constant $c$. To use this, we can first check whether $G$ is chordal in linear time, rejecting if it is not. Assuming now that $G$ is chordal, it is well-known that $tw(G) = \omega(G) - 1$, where $\omega(G)$ is the size of a maximum clique in $G$. In any graph, $\omega(G)$ is at most the maximum degree plus one. In our case, this implies that $tw(G) = \omega(G) - 1 \leq d^k$.
Using the algorithm of~\cite{eppstein2020parameterized}, we can check whether $G$ is a $k$-leaf power in time $O(n (d^k k)^{c d^k})$.
\end{proof}
\section{Finding redundant structures in $k$-leaf powers}
Let us fix a positive integer $k$ and an arbitrary graph $G$ for the rest of the paper. We assume that $G$ is connected, as otherwise, each connected component can be treated separately (if a $k$-leaf root is found for each component, we can join their roots under a new root at distance more than $k$ from them).
As we mentioned, an important difficulty is when $G$ does admit $k$-leaf roots, but they all have large arity.
We now define our similar structures precisely.
We then introduce the notion of a \emph{signature} for their $k$-leaf roots. After that, we argue that many subsets admit the same $k$-leaf root signatures, and that we can prune one.
In Section~\ref{sec:tw}, we show how the set of $k$-leaf root signatures can be found.
\subsection{Similar structures}
A \emph{similar structure} of a graph $G$ is a tuple $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ where:
\begin{itemize}
\item
$\mathcal{C} = \{C_1, \ldots, C_d\}$ is a collection of $d \geq 2$ pairwise disjoint, non-empty subsets of vertices of $G$;
\item
$\mathcal{Y} = \{Y_1, \ldots, Y_d\}$ is a collection of pairwise disjoint subsets of vertices of $G$, some of which are possibly empty. Also, $C_i \cap Y_j = \emptyset$ for any $i, j \in [d]$;
\item
$z \in V(G)$ and does not belong to any subset of $\mathcal{C}$ or $\mathcal{Y}$;
\item
$\L = \{\l_1, \ldots, \l_d\}$ is a set of functions where, for each $i \in [d]$, we have $\l_i: C_i \cup \{z\} \rightarrow \{0, 1, \ldots, k\}$.
The functions in $\L$ are called \emph{layering functions}.
\end{itemize}
Additionally, $\S$ must satisfy several conditions.
Let us denote $C^* = \bigcup_{i\in [d]} C_i$.
Let $X = \{X_1, \ldots, X_t\}$ be the connected components of $G - C^*$.
For each $i \in [d]$, denote $X^{(i)} = \{X_j \in X : N_G(X_j) \subseteq C_i\}$, i.e. the components that have neighbors only in $C_i$.
Then all the following conditions must hold:
\begin{enumerate}
\item \label{cut:yi}
for each $i \in [d]$, $Y_i = \bigcup_{X_j \in X^{(i)}} X_j$ ($Y_i = \emptyset$ is possible);
\item \label{cut:znbrhood}
there is exactly one connected component $X_z \in X$ such that for all $i \in [d]$,
${N_G(X_z) \cap C_i \neq \emptyset}$. Moreover, $z \in X_z$ and $C^* \subseteq N_G(z)$;
\item \label{cut:ccs}
for all $X_j \in X \setminus \{X_z\}$, $X_j \subseteq Y_i$ for some $i \in [d]$. In particular, $X_z$ is the only connected component of $G - C^*$ with neighbors in two or more $C_i$'s;
\item \label{cut:layers}
the layering functions $\L$ satisfy the following:
\begin{enumerate}
\item \label{cut:zlayer}
for each $i \in [d]$, $\l_i(z) = 0$. Moreover, $\l_i(x) > 0$ for any $x \in C_i$;
\item \label{cut:layerssame}
for any $i, j \in [d]$ and any $x \in C_i, y \in C_j$, $\l_i(x) = \l_j(y)$ implies $N_G(x) \setminus (C_i \cup Y_i \cup C_j \cup Y_j) = N_G(y) \setminus (C_i \cup Y_i \cup C_j \cup Y_j)$.
Note that this includes the case $i = j$;
\item \label{cut:layers-leq-k}
for any $i, j \in [d]$ and any $x \in C_i, y \in C_j$, $\l_i(x) + \l_j(y) \leq k$ implies $xy \in E(G)$. Note that this includes the case $i = j$.
\item \label{cut:layers-gt-k}
for any \emph{two distinct} $i, j \in [d]$ and any $x \in C_i, y \in C_j$, $\l_i(x) + \l_j(y) > k$ implies $xy \notin E(G)$. Note that this does \emph{not} include the case $i = j$.
\end{enumerate}
\end{enumerate}
We will refer to the value of $d$ as the \emph{size} of $\S$.
Although somewhat burdensome, the properties of a similar structure occur naturally when we look at the subtrees under a given node of a $k$-leaf root: they make precise the configuration described in Figure~\ref{fig:intuition}.
The properties of similar structures essentially say that after removing each $C_i \in \mathcal{C}$ from $G$, there is one connected component $X_z$ that touches every $C_i$, with a $z \in X_z$ that is a neighbor of each $C_i$. All the other connected components are separated from the rest of the graph by exactly one $C_i$, and these form the $Y_i$'s. As for the layering functions, they state that $z$ is a special vertex with layer $0$.
For the $C_i$ vertices, the layers represent how the neighborhoods of the $C_i$ members are organized. One can imagine that $G$ has a $k$-leaf root and that the layer of $x \in C_i$ is the distance from $x$ to the lowest common ancestor of $C_i$ (of course, this is conceptual since we don't have this $k$-leaf root).
Any two vertices at the same layer must have the same neighborhood outside of their $C_i$'s. Vertices from ``close" layers, i.e. with sum at most $k$, must be neighbors. If the layers are ``far", i.e. with sum more than $k$, then the vertices should not be neighbors (unless they are in the same $C_i$).
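Since the conditions on the layering functions are purely local, they can be verified mechanically. The Python sketch below (names of our own; it covers only the three pairwise conditions on layers, not the structural conditions on the connected components) illustrates them on toy inputs.

```python
def check_layering(edges, C, Y, lays, k):
    """Toy checker for the pairwise layering conditions of a candidate
    similar structure.  C and Y are lists of vertex sets, lays[i] maps
    C[i] (and z) to layers in {0, ..., k}."""
    E = {frozenset(e) for e in edges}
    adj = {}
    for e in E:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for i in range(len(C)):
        for j in range(len(C)):
            outside = lambda w: adj.get(w, set()) - (C[i] | Y[i] | C[j] | Y[j])
            for x in C[i]:
                for y in C[j]:
                    if x == y:
                        continue
                    close = lays[i][x] + lays[j][y] <= k
                    # same layer => same neighborhood outside C_i, Y_i, C_j, Y_j
                    if lays[i][x] == lays[j][y] and outside(x) != outside(y):
                        return False
                    # layers summing to at most k => must be adjacent
                    if close and frozenset((x, y)) not in E:
                        return False
                    # distinct parts with far layers => must be non-adjacent
                    if i != j and not close and frozenset((x, y)) in E:
                        return False
    return True
```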
We first show that similar structures can always be found on graphs with $k$-leaf roots of high arity.
This is essentially a formalization of Figure~\ref{fig:intuition}.
\begin{lemma}\label{lem:exists-similar}
Let $d \geq 2$ and let $G$ be a connected graph that admits a $k$-leaf root of arity at least $d + 1$.
Then there exists a similar structure $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ of $G$ such that
$|\mathcal{C}| = d$. Moreover, for each $C_i \in \mathcal{C}$, $|C_i| \leq d^k$
and $G[C_i \cup Y_i \cup \{z\}]$ has maximum degree at most $d^k$.
\end{lemma}
\begin{proof}
Let $T$ be a $k$-leaf root of $G$ of arity at least $d + 1$. We show how to construct $\mathcal{C} = \{C_1, \ldots, C_d\}$, $\mathcal{Y} = \{Y_1, \ldots, Y_d\}, z$, and $\L= \{\l_1, \ldots, \l_d\}$ using the relationship between $T$ and $G$.
Let $v$ be a deepest node of $T$ with $d + 1$ or more children (such a node must exist since $T$ has arity at least $d + 1$). By the choice of $v$, all the strict descendants of $v$ have at most $d$ children.
Let $z \in L(T)$ be a leaf of $T$ at minimum distance from $v$, i.e. $z$ minimizes $dist_T(z, v)$ among all leaves of $T$.
Note that $z$ may or may not be a descendant of $v$.
Let $v_1, \ldots, v_d$ be $d$ children of $v$, none of which is an ancestor of $z$ (such a choice of $d$ children exists since $v$ has at least $d + 1$ children).
For each $i \in [d]$, define $C_i = L(T(v_i)) \cap N_G(z)$.
Note that since $z$ is not a descendant of $v_i$, the vertices of $L(T(v_i)) \cap N_G(z)$ must be at distance at most $k$ from $v$ (actually, $k - 1$, but we shall not bother). Since the $T(v_i)$ subtree has arity at most $d$, there are at most $d^k$ leaves in $T(v_i)$ at distance at most $k$ from $v$. It follows that $|C_i| \leq d^k$, for each $i \in [d]$.
Note that by construction, the $C_i$ subsets are pairwise disjoint. To see that the $C_i$'s are non-empty, assume that $C_i$ is empty for some $i \in [d]$.
Since $z$ is the closest leaf to $v$, and since every path from a member of $L(T(v_i))$ to a member of $V(G) \setminus L(T(v_i))$ passes through $v$, no element of $L(T(v_i))$ is at distance $k$ or less in $T$ from an element of $V(G) \setminus L(T(v_i))$. Since $L(T(v_i)) \neq \emptyset$, this implies that $G$ is disconnected, a contradiction. Thus $C_i \neq \emptyset$.
Now, for convenience, define $C^* = \bigcup_{i \in [d]} C_i$.
Also define $G' = G - C^*$.
Let $X_z$ be the connected component of $G'$ that contains $z$. By construction, $z$ is a neighbor of every vertex in each $C_i$. We must show that only $X_z$ intersects with every $C_i$.
Let $Z = L(T) \setminus \left( \bigcup_{i \in [d]} L(T(v_i)) \right)$,
and notice that $z \in Z$.
We argue that $G'[Z]$ is connected.
Assume otherwise. Then $G'[Z]$ has at least two connected components: one containing $z$, and some other component $X_q$. Since $G$ is connected and removing $C^*$ separates $X_z$ from $X_q$, there is some $q \in X_q$ such that $N_G(q) \cap C^* \neq \emptyset$.
Let $c \in N_G(q) \cap C^*$.
Consider the location of $q$ in $T$. Because $q \in Z$, $q$ is not a descendant of any $v_i$, and thus the path from $q$ to $c$ in $T$ passes through $v$. But the choice of $z$ implies
\begin{align*}
k \geq dist_T(q, c) = dist_T(q, v) + dist_T(v, c) \geq dist_T(q, v) + dist_T(v, z) \geq dist_T(q, z)
\end{align*}
Since $T$ is a $k$-leaf root of $G$, we have $qz \in E(G)$, contradicting that they belong to different connected components of $G'[Z]$.
Thus $G'[Z]$ is connected.
Now for $i \in [d]$, consider a leaf $x \in L(T(v_i)) \setminus C_i$.
We argue that all neighbors of $x$ are in $L(T(v_i))$.
Suppose that $x$ has a neighbor $q \in L(T) \setminus L(T(v_i))$.
Then $dist_T(x, q) \leq k$, and the path from $x$ to $q$ goes through $v$. But again, by the choice of $z$,
\[
k \geq dist_T(x, q) = dist_T(x, v) + dist_T(v, q) \geq dist_T(x, v) + dist_T(v, z) = dist_T(x, z)
\]
But then, $x$ should be in $C_i$ (by its definition), which is a contradiction. It follows that $x$ has no neighbor outside of $L(T(v_i))$.
In particular, $x \notin Z$.
It follows that $Z$ is a connected component of $G'$, that it is equal to $X_z$, and that it satisfies Property~\ref{cut:znbrhood} of similar structures.
Moreover, as we argued, any other connected component $X_j$ of $G'$ consists of vertices that appear as leaves of some $T(v_i)$ subtree, and $N_G(X_j)$ must be a subset of $C_i$, since the vertices of $X_j$ have no neighbor outside $L(T(v_i))$.
We may then define $Y_i = L(T(v_i)) \setminus C_i$ for each $i \in [d]$ and it follows that $Y_i$ regroups all connected components of $G'$ that have neighbors only in $C_i$, which satisfies Property~\ref{cut:yi}.
Property~\ref{cut:ccs} is easily seen to be satisfied, since the elements in $\{X_z, Y_1, \ldots, Y_d\}$ are disjoint and cover all of $L(T) \setminus C^*$.
It remains to describe our layering functions $\L$.
For $i \in [d]$, put $\l_i(z) = 0$ and for $x \in C_i$, define
\[
\l_i(x) = dist_T(x, v)
\]
Note that $\l_i(x) > 0$, and that $\l_i(x) = 1$ is possible, namely when $v_i$ is itself a leaf of $T$ (in which case $C_i = \{v_i\}$).
Also note that $\l_i(x) \leq k$ since each $x$ is a neighbor of $z$.
Property~\ref{cut:zlayer} is thus satisfied. We show that each remaining property is satisfied.
\emph{Property~\ref{cut:layerssame}}.
Let $i, j \in [d]$ and let $x \in C_i, y \in C_j$ such that $\l_i(x) = \l_j(y)$.
Note that $q \in N_G(x) \setminus (C_i \cup Y_i \cup C_j \cup Y_j)$ only if $q$ is a leaf of $T$ at distance at most $k$ from $x$, and $q$ does not descend from $v_i$ or $v_j$. Therefore, the path from $x$ to $q$ goes through $v$.
Since $dist_T(x, v) = dist_T(y, v)$ (because they have the same layer), we have $dist_T(x, q) = dist_T(y, q)$, and thus $q \in N_G(y) \setminus (C_i \cup Y_i \cup C_j \cup Y_j)$ as well. Thus $N_G(x) \setminus (C_i \cup Y_i \cup C_j \cup Y_j) \subseteq N_G(y) \setminus (C_i \cup Y_i \cup C_j \cup Y_j)$, and the other containment direction can be argued symmetrically. Thus $\S$ satisfies Property~\ref{cut:layerssame}.
\emph{Property~\ref{cut:layers-leq-k}}. Let $i, j \in [d]$ and let $x \in C_i, y \in C_j$ with $\l_i(x) + \l_j(y) \leq k$. We see that $dist_T(x, y) \leq dist_T(x, v) + dist_T(y, v) = \l_i(x) + \l_j(y) \leq k$, implying that $xy \in E(G)$. Thus Property~\ref{cut:layers-leq-k} is satisfied.
\emph{Property~\ref{cut:layers-gt-k}}. Let $i, j \in [d]$, $i \neq j$, and let $x \in C_i, y \in C_j$ with $\l_i(x) + \l_j(y) > k$. Since $x$ and $y$ descend from distinct $v_i$ and $v_j$, $dist_T(x, y) = dist_T(x, v) + dist_T(v, y) = \l_i(x) + \l_j(y) > k$, and thus $xy \notin E(G)$.
Thus Property~\ref{cut:layers-gt-k} is satisfied.
We have therefore shown that $\S$ satisfies all requirements to be a similar structure, and that $|C_i| \leq d^k$ for each $i \in [d]$.
To finish the proof, recall that the lemma requires that each $G[\hC{i}]$ has maximum degree at most $d^k$. First note that $z$ has at most $d^k$ neighbors in this subgraph.
Now let $x \in C_i \cup Y_i$. Any neighbor of $x$ in $G[\hC{i}]$ is at distance at most $k - 1$ from the parent $p$ of $x$ in $T$. Since $T(v_i)$ has arity at most $d$, the number of other leaves of $C_i \cup Y_i$ at distance at most $k - 1$ from $p$ is at most $d^{k-1}$, and thus $x$ has at most $d^{k-1}$ neighbors in $C_i \cup Y_i$ (this also counts the leaves that do not descend from $p$). Note that $x$ could also have $z$ as a neighbor, so its degree in $G[\hC{i}]$ is at most $d^{k-1} + 1 \leq d^k$, as desired.
\end{proof}
\subsection{Valued trees and signatures}
We now know that similar structures exist in difficult $k$-leaf powers. This is not enough though. We would like to say that the $G[C_i \cup Y_i \cup \{z\}]$ subgraphs each admit ``similar'' sets of $k$-leaf roots, so that pruning one $C_i \cup Y_i$ does not matter. In fact, we are only interested in how the $C_i \cup \{z\}$ subsets behave in these $k$-leaf roots, so we develop the notion of a \emph{valued restriction} to remove the $Y_i$'s, while retaining the essential distance information to these $Y_i$'s. This is still not enough though, since the sets of such restricted $k$-leaf roots may still differ. We introduce the notion of a \emph{signature} for these pruned trees, which is a compact representation that retains the essence of the structure of the trees.
A \emph{valued tree} $\mathcal{T}$ is a pair $(T, \sigma)$ where $T$ is a tree and $\sigma : V(T) \setminus L(T) \rightarrow \mathbb{N} \cup \{\infty\}$ assigns each internal node of $T$ an integer, or possibly the special value $\infty$.
We say that $\mathcal{T}$ is \emph{$s$-bounded} if $\sigma(v) \leq s$ or $\sigma(v) = \infty$ for each $v \in V(T) \setminus L(T)$. We define $height(\mathcal{T}) = height(T)$.
For $v \in V(T)$, $\mathcal{T}(v) = (T', \sigma')$ denotes the valued tree in which $T' = T(v)$ and $\sigma'(w) = \sigma(w)$ for $w \in V(T(v)) \setminus L(T)$.
We say that two valued trees $(T_1, \sigma_1)$ and $(T_2, \sigma_2)$ are \emph{value-isomorphic} if $T_1 \simeq_L T_2$, with leaf-isomorphism $\mu$, and $\sigma_1(w) = \sigma_2(\mu(w))$ for all $w \in V(T_1) \setminus L(T_1)$.
The notion of valued restrictions will be fundamental for the rest of this paper.
\begin{definition}
Let $T$ be a tree and let $X \subseteq L(T)$.
We say that $(T', \sigma)$ is the \emph{valued restriction of $T$ to $X$} if it satisfies the following:
\begin{itemize}
\item
$T' = T|X$;
\item
for each $v \in V(T') \setminus L(T')$,
let $L^*(v) = \bigcup_{x \in ch_T(v) \setminus ch_{T'}(v)} L(T(x))$. Then
\begin{itemize}
\item
if $L^*(v) = \emptyset$, $\sigma(v) = \infty$;
\item
otherwise, $\sigma(v) = \min_{l \in L^*(v)} dist_T(v, l)$.
\end{itemize}
\end{itemize}
\end{definition}
See Figure~\ref{fig:sig2}.b for an illustration. Intuitively, the valued restriction of $T$ to $X$ takes $T' = T|X$ as a tree.
By doing so, each remaining $v \in V(T')$ might have lost some children that were in $T$. We want $v$ to remember how far it is from a ``hidden" leaf, i.e. descending from one of these lost children, and $\sigma(v)$ stores this information.
The point of this is that $T$ represents a $k$-leaf root in which the leaves not in $X$ (concretely, these will be the $Y_i$ leaves) should end up at distance more than $k$ from any leaf outside $T$. The tree $T'$ is a compressed version of $T$, and we will embed $T'$ into a larger $k$-leaf root instead of $T$. During this embedding, we want to know how far the hidden leaves are, to ensure that outside leaves remain at distance more than $k$ from them.
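As an illustration, the valued restriction can be computed bottom-up: count, for every node, the number of leaves of $X$ in its subtree and its distance to the nearest leaf below it; the kept nodes and the $\sigma$ values then follow. A Python sketch (our own naming; it assumes $|X| \geq 2$ and a rooted tree given by a children map):

```python
import math

def valued_restriction(children, root, X):
    """Compute the valued restriction (T|X, sigma) of the rooted tree
    given by `children`, for a subset X of its leaves with |X| >= 2.
    Returns the root of T|X, its node set, and the sigma values."""
    X = set(X)
    order, stack = [], [root]
    while stack:                       # preorder traversal
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    cnt, near = {}, {}
    for v in reversed(order):          # bottom-up pass
        if not children[v]:
            cnt[v] = 1 if v in X else 0
            near[v] = 0                # distance to nearest leaf below v
        else:
            cnt[v] = sum(cnt[c] for c in children[v])
            near[v] = 1 + min(near[c] for c in children[v])
    # the root of T|X is the deepest node whose subtree contains all of X
    r = root
    while True:
        full = [c for c in children[r] if cnt[c] == len(X)]
        if not full:
            break
        r = full[0]
    # keep the descendants of r whose subtree meets X; children with
    # cnt == 0 are exactly the ones "removed" by the restriction
    kept, sigma, stack = set(), {}, [r]
    while stack:
        v = stack.pop()
        kept.add(v)
        if children[v]:
            removed = [1 + near[c] for c in children[v] if cnt[c] == 0]
            sigma[v] = min(removed) if removed else math.inf
            stack.extend(c for c in children[v] if cnt[c] > 0)
    return r, kept, sigma
```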
\begin{figure}
\centering
\includegraphics[width=\textwidth]{signatures2.pdf}
\caption{(a) A tree $T$. The set of leaves $v$ with a label $\l(v)$ in $\{0, 1, \ldots, 4\}$ forms a set $X$, and the leaves in gray are those not in $X$. Note that these labels are not part of the definition of a valued restriction, but are needed for signatures. (b) The valued restriction $\mathcal{T}$ of $T$ to $X$, with the leaves of $X$ preserving their labels. (c) An illustration of the signature of each rooted subtree of the valued restriction. For each leaf $v$, the signature of $\mathcal{T}(v)$ is $(\l(v))$. We only give the full signature for one internal node. We chose to order its entries such that for $i \in \{1, \ldots, 5\}$, the $i$-th coordinate is the number of children of that node that have signature $(i - 1)$, or $2$ if this value is above $2$. The last entry is the $\sigma$ value of the node (as for every other node). The parent of that node would then have one entry for each possible signature for a tree of height $2$, and only one of those entries would be set to $1$.}
\label{fig:sig2}
\end{figure}
\subsubsection*{Valued tree signatures}
Let $\mathcal{T} = (T, \sigma)$ be a valued tree that is $s$-bounded for some $s$. Furthermore, let $\l$ be a layering function that maps each leaf in $L(T)$ to an integer in $\{0, 1, \ldots, k\}$.
We now define the \emph{signature of $\mathcal{T}$ with respect to $\l$}, denoted $sig_{\l}(\mathcal{T})$, which encodes some properties of $\mathcal{T}$ in a vector of integers. The signature is defined recursively and depends on the height of $\mathcal{T}$, as illustrated in Figure~\ref{fig:sig2}.c.
If $height(\mathcal{T}) = 1$, then $T$ has a single node $v$, which is a leaf. In this case, define $sig_{\l}(\mathcal{T}) = (\l(v))$.
Now, assume that $height(\mathcal{T}) = h > 1$. Let $S(s, h - 1) = \{s_1, \ldots, s_{m}\}$ denote the set of all possible signatures for an $s$-bounded valued tree of height $h - 1$ \emph{or less}, with respect to any layering function that assigns leaves to $\{0, 1, \ldots, k\}$. We will later show that $m$ is finite. We may assume that the $s_i$ subscripts order the signatures lexicographically (but this is merely for concreteness, the ordering does not matter).
Then $sig_{\l}(\mathcal{T})$ is a vector of dimension $m + 1$ in which, for $i \in [m]$, the $i$-th coordinate of $sig_{\l}(\mathcal{T})$ is defined as
\[
sig_{\l}(\mathcal{T})[i] = \min(2, |\{v \in ch_T(r(T)) : sig_{\l}(\mathcal{T}(v)) = s_i \}|)
\]
Moreover, $sig_{\l}(\mathcal{T})[m + 1] = \sigma(r(T))$.
It should be obvious that if $\mathcal{T}$ has height $h$, then $sig_{\l}(\mathcal{T}(v)) \in S(s, h - 1)$ for each $v \in ch_{T}(r(T))$, and hence the signature summarizes every child subtree of $r(T)$.
Two signatures are \emph{equal} in the usual sense of vector equality, i.e. if they have the same length and, at every position, the values of the two vectors are equal.
In words, one may think of each integer $i \in [m]$ as a code for the $i$-th signature $s_i$, and $sig_{\l}(\mathcal{T})[i]$ is the number of children of $r(T)$ whose subtree has signature $s_i$, except that we only bother remembering whether there are $0, 1$ or more of those (which we encode by $2$).
We also reserve the last coordinate of $sig_{\l}(\mathcal{T})$ for $\sigma(r(T))$.
For valued trees with one node, we only need to remember the layer of the leaf.
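The recursive definition translates directly into code. In the Python sketch below (our own naming), instead of materializing the vector indexed by all of $S(s, h - 1)$, we keep only the nonzero entries as a sorted tuple; two subtrees receive equal objects exactly when their signature vectors are equal, which is all that is needed later.

```python
import math

def signature(children, sigma, layer, v):
    """Signature of the valued subtree rooted at v.  The vector indexed
    by S(s, h - 1) is represented sparsely: a sorted tuple of pairs
    (child signature, multiplicity capped at 2), plus sigma(v)."""
    if not children[v]:
        return ('leaf', layer[v])      # height 1: remember the layer only
    counts = {}
    for c in children[v]:
        s = signature(children, sigma, layer, c)
        counts[s] = min(2, counts.get(s, 0) + 1)
    # sort by repr to get a canonical order over heterogeneous tuples
    return (tuple(sorted(counts.items(), key=repr)), sigma[v])
```

In particular, a root with two children of some signature and a root with three children of that same signature receive equal signatures, reflecting the capping at $2$.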
We list the basic properties of signatures that will be needed, and then we will proceed with our $k$-leaf power algorithm.
\begin{lemma}\label{lem:basic-sigs}
Let $\mathcal{T}_1 = (T_1, \sigma_1)$ and $\mathcal{T}_2 = (T_2, \sigma_2)$ be valued trees satisfying $sig_{\l_1}(\mathcal{T}_1) = sig_{\l_2}(\mathcal{T}_2)$ for some layering functions $\l_1$ and $\l_2$.
Then the following holds:
\begin{enumerate}
\item \label{lem:basic-onechild}
if $r(T_1)$ has a child $u$ such that $sig_{\l_1}(\mathcal{T}_1(u)) \neq sig_{\l_1}(\mathcal{T}_1(v))$ for every $v \in ch_{T_1}(r(T_1)) \setminus \{u\}$, then $r(T_2)$ has exactly one child $u'$ such that $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_2}(\mathcal{T}_2(u'))$;
\item \label{lem:basic-twochild}
if $r(T_1)$ has two distinct children $u, v$ such that $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_1}(\mathcal{T}_1(v))$, then
$r(T_2)$ has at least two distinct children $u', v'$ such that
$sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_2}(\mathcal{T}_2(u')) = sig_{\l_2}(\mathcal{T}_2(v'))$;
\item \label{lem:basic-samex}
for any $x \in V(T_1)$, there exists $y \in V(T_2)$ such that $dist_{T_1}(x, r(T_1)) = dist_{T_2}(y, r(T_2))$ and $sig_{\l_1}(\mathcal{T}_1(x)) = sig_{\l_2}(\mathcal{T}_2(y))$. In particular, this includes the case $x \in L(T_1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Property~\ref{lem:basic-onechild} is due to the fact that the entry corresponding to $sig_{\l_1}(\mathcal{T}_1(u))$ must be equal to $1$ in both the $sig_{\l_1}(\mathcal{T}_1)$ and $sig_{\l_2}(\mathcal{T}_2)$ vectors. Property~\ref{lem:basic-twochild} is because the entry corresponding to $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_1}(\mathcal{T}_1(v))$ must be equal to $2$ in both the $sig_{\l_1}(\mathcal{T}_1)$ and $sig_{\l_2}(\mathcal{T}_2)$ vectors.
Property~\ref{lem:basic-samex} can be argued inductively on $dist_{T_1}(x, r(T_1))$. If the distance is $0$, then $x = r(T_1)$ and $y = r(T_2)$ shows that the property holds.
If $dist_{T_1}(x, r(T_1)) > 0$, let $x'$ be the child of $r(T_1)$ on the path between $x$ and $r(T_1)$.
By the previous properties, $r(T_2)$ has a child $y'$ such that $sig_{\l_1}(\mathcal{T}_1(x')) = sig_{\l_2}(\mathcal{T}_2(y'))$.
By induction, there is $y \in V(T_2(y'))$ such that $dist_{T_1(x')}(x, x') = dist_{T_2(y')}(y, y')$ and $sig_{\l_1}(\mathcal{T}_1(x)) = sig_{\l_2}(\mathcal{T}_2(y))$.
The distance from $x$ to $r(T_1)$ is thus the same as the distance from $y$ to $r(T_2)$, and therefore $y$ shows that the property holds.
\end{proof}
Importantly, we can argue that the number of possible signatures depends only on height and $s$-boundedness.
\begin{lemma}\label{lem:nbsigs-bound}
Let $S(s, h)$ denote the set of all possible signatures for $s$-bounded valued trees of height $h$ or less, for any layering function that maps leaves to $\{0, 1, \ldots, k\}$.
Then
\begin{align*}
|S(s, h)| \leq \begin{cases}
k + 1 &\mbox{if $h = 1$} \\
(s + 2) \cdot 3^{|S(s, h - 1)|} + |S(s, h - 1)| &\mbox{otherwise}
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
All valued trees with $h = 1$ have a signature of the form $(\l(v))$, and $\l(v)$ can take up to $k + 1$ values.
For $h > 1$, consider an $s$-bounded valued tree of height $h$. For each of the $|S(s, h - 1)|$ possible signatures of valued trees of height $h - 1$ or less, the signature vector has an entry in $\{0, 1, 2\}$, i.e. $3$ possible values for each element of $S(s, h - 1)$.
Moreover, the last entry is $\sigma(r(T))$, which can take up to $s + 2$ values in $\{0, 1, \ldots, s, \infty\}$.
This counts the number of signatures for a valued tree of height exactly $h$. Since $S(s, h)$ includes signatures for valued trees of height $h$ \emph{or less}, we must add the term $|S(s, h - 1)|$ to count the valued trees of height $h - 1$ or less.
\end{proof}
We note that the upper bound on $|S(s, h)|$ can be computed in time proportional to $h$ (omitting the cost of the arithmetic on the very large value of $|S(s, h)|$): for each $h \geq 2$, it suffices to store $|S(s, h - 1)|$ after computing it, use it to compute $|S(s, h)|$ in constant time, and repeat.
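This iterative evaluation of the recurrence of Lemma~\ref{lem:nbsigs-bound} fits in a few lines; the sketch below (name ours) relies on Python's arbitrary-precision integers to hold the rapidly growing value.

```python
def nb_sigs_bound(k, s, h):
    """Upper bound on |S(s, h)| from the recurrence above, computed
    iteratively from height 1 upward."""
    bound = k + 1                      # base case: height 1
    for _ in range(2, h + 1):
        bound = (s + 2) * 3 ** bound + bound
    return bound
```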
We now come back to $k$-leaf powers, and show that in all the cases that will be of interest to us, $s$ and $h$ are bounded by a function of $k$.
\begin{lemma}\label{lem:kbounded-and-height-general}
Assume that $G$ is a connected $k$-leaf power and let
$X \subseteq V(G)$. Assume that there is $C \subseteq X$ such that $C$ is a clique and $X \setminus C \subseteq N_G(C)$.
Let $T^*$ be a $k$-leaf root of $G$, and let $\mathcal{T} = (T, \sigma)$ be the valued restriction of $T^*$ to $X$.
Then $\mathcal{T}$ is $k$-bounded and $height(\mathcal{T}) \leq 3k$.
\end{lemma}
\begin{proof}
Let us first argue that $\mathcal{T}$ has height at most $3k$.
Consider the tree $T|C$.
Then $T|C$ has height at most $k$, as otherwise the members of the $C$ clique cannot be at distance at most $k$ from each other.
Moreover, any $x \in X \setminus C$ is at distance at most $k$ from some node of $T|C$, as otherwise $x$ cannot be at distance at most $k$ from any member of $C$. Thus in $T$, $r(T)$ is at distance at most $k$ from $r(T|C)$, and any leaf in $X \setminus C$ is at distance at most $2k$ from $r(T|C)$, since the farthest possible leaf is at distance $k$ below the deepest node of $T|C$.
Hence $height(\mathcal{T}) \leq 3k$.
As for $k$-boundedness,
suppose that $\mathcal{T}$ is not $k$-bounded.
Then there is some $v \in V(T) \setminus L(T)$ such that $\sigma(v) > k$ but $\sigma(v) \neq \infty$.
Let
\[L^*_v = \bigcup_{v' \in ch_{T^*}(v) \setminus ch_T(v)} L(T^*(v'))\]
Since $(T, \sigma)$ is the valued restriction of $T^*$ to $X$, each leaf $w \in L^*_v$ is in $V(G) \setminus X$ and is at distance greater than $k$ from $v$.
Moreover, we know that at least one such $w$ exists, as otherwise we would have $\sigma(v) = \infty$.
Then $w$ only has neighbors in $L^*_v$, since any other neighbor would require a path of length at most $k$ that passes through $v$, which is not possible.
It follows that every member of $L^*_v$ only has neighbors in $L^*_v$. In other words, $L^*_v$ is disconnected from the rest of the graph, a contradiction since $G$ is connected. Therefore, $\mathcal{T}$ is $k$-bounded.
\end{proof}
Note that the $k$ and $3k$ bounds could be slightly improved with a more detailed analysis, but we would still obtain a power tower behavior for the number of possible signatures.
\subsection{Pruning subsets with redundant $k$-leaf root signatures}
We now establish the connections between similar structures and signatures.
Let $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ be a similar structure of $G$. Unless mentioned otherwise, we will always assume that $\mathcal{C} = \{C_1, \ldots, C_d\}, \mathcal{Y} = \{Y_1, \ldots, Y_d\}$ and $\L = \{\l_1, \ldots, \l_d\}$ for some $d$.
We will mainly look at the $k$-leaf roots of the $G[\hC{i}]$ subgraphs.
For $i \in [d]$, let $LR_z(\hC{i})$ be the set of all $k$-leaf roots of $G[\hC{i}]$ whose root is the unique neighbor of $z$ in the tree. Note that $LR_z(\hC{i})$ captures every possible $k$-leaf root, except that a specific node is chosen as the root.
\begin{lemma}\label{lem:kbounded-and-height}
Assume that $G$ is a connected $k$-leaf power and let
$\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ be a similar structure of $G$.
For $i \in [d]$, let $T^* \in LR_z(\hC{i})$, and let $\mathcal{T} = (T, \sigma)$
be the valued restriction of $T^*$ to $C_i \cup \{z\}$.
Then $\mathcal{T}$ is $k$-bounded and $height(\mathcal{T}) \leq 3k$.
\end{lemma}
\begin{proof}
Note that $G[\hC{i}]$ is connected, since $Y_i$ consists of connected components that have neighbors in $C_i$, and all vertices in $C_i$ have $z$ as a neighbor.
Thus, we can apply Lemma~\ref{lem:kbounded-and-height-general}, since $\{z\}$ is a clique and $C_i \subseteq N_G(z)$.
\end{proof}
Together, Lemma~\ref{lem:nbsigs-bound} and Lemma~\ref{lem:kbounded-and-height} allow us to
bound the number of possible signatures of $k$-leaf roots of $G[\hC{i}]$ subgraphs by $|S(k, 3k)|$, which grows quickly but is a function of $k$ only.
The last piece we need is that all the $\hC{i}$'s have $k$-leaf roots with the same signatures.
Let $s \in S(k, 3k)$ be a possible signature for a $k$-bounded valued tree of height at most $3k$, for any layering function.
We say that $\hC{i}$ \emph{accepts} $s$ if there exists
$T^* \in LR_z(\hC{i})$ such that
$sig_{\l_i}(\mathcal{T}) = s$, where $\mathcal{T}$ is the valued restriction of $T^*$ to $C_i \cup \{z\}$.
We then define
\[
accept(\S, C_i) = \{s \in S(k, 3k) : \hC{i} \mbox{ accepts } s \}
\]
It is important to note that by Lemma~\ref{lem:kbounded-and-height}, for any $T^* \in LR_z(\hC{i})$, $sig_{\l_i}(\mathcal{T}) \in S(k, 3k)$, and therefore $sig_{\l_i}(\mathcal{T}) \in accept(\S, C_i)$, where $\mathcal{T}$ is the valued restriction of $T^*$ to $C_i \cup \{z\}$. In other words, $accept(\S, C_i)$ captures the signature of \emph{every} $k$-leaf root of $G[\hC{i}]$.
We need one last definition.
\begin{definition}
A similar structure $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ of $G$ is called \emph{homogeneous}
if, for each $C_i, C_j \in \mathcal{C}$, $accept(\S, C_i) = accept(\S, C_j) \neq \emptyset$.
\end{definition}
For our purposes, we need a homogeneous similar structure of size at least $3|S(k, 3k)|$. If all $k$-leaf roots have large enough arity, this is guaranteed to exist.
\begin{lemma}\label{lem:hom-struct}
Let $G$ be a connected $k$-leaf power.
Let $d = 3|S(k, 3k)| \cdot 2^{|S(k, 3k)|}$, and assume that $G$ admits a $k$-leaf root of arity at least $d + 1$.
Then there is a \emph{homogeneous} similar structure $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ of $G$ such that $|\mathcal{C}| = 3|S(k, 3k)|$. Moreover, for each $C_i \in \mathcal{C}$, $|C_i| \leq d^k$
and $G[C_i \cup Y_i \cup \{z\}]$ has maximum degree at most $d^k$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:exists-similar}, there exists
a similar structure $\S' = (\mathcal{C}', \mathcal{Y}', z, \L')$ of $G$ with $|\mathcal{C}'| = d$, with each $C'_i \in \mathcal{C}'$ having $|C'_i| \leq d^k$ and $G[C'_i \cup Y'_i \cup \{z\}]$ having maximum degree $d^k$ or less.
Denote $\mathcal{C}' = \{C'_1, \ldots, C'_d\}$, $\mathcal{Y}' = \{Y'_1, \ldots, Y'_d\}$ and
$\L' = \{\l'_1, \ldots, \l'_d\}$.
The problem is that $\S'$ might not be homogeneous.
Notice that by Lemma~\ref{lem:kbounded-and-height}, for any $C'_i \in \mathcal{C}'$, $accept(\S', C'_i)$ is a subset of $S(k, 3k)$.
Thus the number of possible distinct $accept(\S', C'_i)$ sets is $2^{|S(k, 3k)|}$.
By the pigeonhole principle, the fact that $|\mathcal{C}'| = d = 3|S(k, 3k)| \cdot 2^{|S(k, 3k)|}$ implies that there is
some $\mathcal{C} \subseteq \mathcal{C}'$ such that $|\mathcal{C}| = 3|S(k, 3k)|$
and such that $accept(\S', C'_i) = accept(\S', C'_j)$
for every $C'_i, C'_j \in \mathcal{C}$.
Note that since $G$ is a $k$-leaf power, each $G[\hC{i}]$ is also a $k$-leaf power by heredity, and hence none of these $accept$ sets is empty.
We thus know that there exists a set of indices $h_1, \ldots, h_{3|S(k, 3k)|}$ such that $\mathcal{C} = \{C'_{h_1}, C'_{h_2}, \ldots, C'_{h_{3|S(k, 3k)|}}\}$. Let $\mathcal{Y} = \{Y'_{h_1}, Y'_{h_2}, \ldots, Y'_{h_{3|S(k, 3k)|}}\}$, and similarly let $\L = \{\l'_{h_1}, \l'_{h_2}, \ldots, \l'_{h_{3|S(k, 3k)|}}\}$.
One can easily verify that all properties of a similar structure hold for $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ (in particular, all the $C'_i \cup Y'_i$ that were not kept in $\mathcal{C}$ now join the same connected component as $z$). We have $|\mathcal{C}| = 3|S(k, 3k)|$.
This similar structure is homogeneous.
Finally, the bounds for $|C_{h_i}| \leq d^k$
and the maximum degree at most $d^k$ for $G[C_{h_i} \cup Y_{h_i} \cup \{z\}]$ still hold for every $h_i$.
\end{proof}
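The pigeonhole step in this proof amounts to grouping components by their accept set and keeping one large enough group. As an illustration only (the function and variable names are ours, not part of the formal argument), this can be sketched in Python as follows, assuming the accept sets have already been computed:

```python
from collections import defaultdict

def select_homogeneous(accept_sets, target):
    """Group components by their accept set and return the indices of a
    homogeneous subfamily of size `target`, or None if none exists.
    accept_sets[i] is the set of signatures accepted by component C'_i."""
    groups = defaultdict(list)
    for i, acc in enumerate(accept_sets):
        groups[frozenset(acc)].append(i)
    for acc, idxs in groups.items():
        if acc and len(idxs) >= target:   # skip empty accept sets
            return idxs[:target]
    return None

# toy example: five components accept {0}, four accept {0,1}, three accept {1}
demo_sets = [{0}] * 5 + [{0, 1}] * 4 + [{1}] * 3
chosen = select_homogeneous(demo_sets, 3)
```

With $d = target \cdot 2^u$ components whose accept sets range over a universe of $u$ signatures, some group necessarily reaches size $target$, mirroring the pigeonhole count in the proof.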
We can finally present the main result of this section.
\begin{theorem}\label{thm:iffc1}
Let $G$ be a connected graph.
Let $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ be a homogeneous similar structure of $G$, with $\mathcal{C} = \{C_1, \ldots, C_l\}, \mathcal{Y} = \{Y_1, \ldots, Y_l\}$ and $l = 3|S(k, 3k)|$.
Then $G$ is a $k$-leaf power if and only if $G - (C_1 \cup Y_1)$ is a $k$-leaf power.
\end{theorem}
\begin{proof}
First, note that if $G - (C_1 \cup Y_1)$ is not a $k$-leaf power, then by heredity, $G$ is not a $k$-leaf power. We now focus on the other direction of the statement.
Suppose that $G - (C_1 \cup Y_1)$ is a $k$-leaf power, and let $R$ be a $k$-leaf root of $G - (C_1 \cup Y_1)$. Assume without loss of generality that $R$ is rooted at the single neighbor of $z$. The proof is constructive: we show algorithmically how to insert $C_1 \cup Y_1$ into $R$ to obtain a $k$-leaf root of $G$.
For $i \in \{2, \ldots, l\}$, let $T^*_i = R|(\hC{i})$, which is a $k$-leaf root of $G[\hC{i}]$.
Furthermore let $\mathcal{T}_i = (T_i, \sigma_i)$ be the valued restriction of $T^*_i$ to $C_i \cup \{z\}$. Note that $V(T_i) \subseteq V(T^*_i) \subseteq V(R)$.
Since $l - 1 > |S(k, 3k)|$, there must be distinct $i, j \in \{2, \ldots, l\}$ such that $sig_{\l_i}(\mathcal{T}_i) = sig_{\l_j}(\mathcal{T}_j)$, by the pigeonhole principle.
Assume, without loss of generality, that $i = 2$ and $j = 3$ (otherwise, simply rename the $C_i$'s).
Note that since $V(T_2) \subseteq V(T_2^*) \subseteq V(R)$, each node of $T_2$ and $T^*_2$ is in $R$. The same holds for $T_3$ and $T^*_3$.
Moreover, all of $T_2, T^*_2, T_3$ and $T^*_3$ contain $z$ and hence also contain the root of $R$. It follows that $r(T_2) = r(T^*_2) = r(T_3) = r(T^*_3) = r(R)$.
Since $accept(\S, C_1) = accept(\S, C_2) = accept(\S, C_3)$ by homogeneity, there exists a $k$-leaf root $T^*_1$ of $G[\hC{1}]$, with $r(T^*_1)$ being the parent of $z$, such that $sig_{\l_1}(\mathcal{T}_1) = sig_{\l_2}(\mathcal{T}_2) = sig_{\l_3}(\mathcal{T}_3)$, where $\mathcal{T}_1 = (T_1, \sigma_1)$ is the valued restriction of $T^*_1$ to $C_1 \cup \{z\}$.
Note that $r(T_1) = r(T^*_1)$, again because of $z$.
We now describe how to insert $T^*_1$ into $R$, based on the signature of $\mathcal{T}_1$.
This is shown in Algorithm~\ref{alg:insertt1}.
By inserting a subtree $T^*_1(u)$ as a child of $r$, we mean to add all the nodes and edges of the $T^*_1(u)$ tree to $R$, and to add an edge between $r$ and $r(T^*_1(u))$.
\begin{algorithm2e}[H]
\SetAlgoLined
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{}{end}
$insert(r(T^*_1), r(R))$ //initial call \;
\;
\Fn{insert($t, r$)}
{
//$t \in V(T^*_1)$ is the node of $T^*_1$ we are inserting\;
//$r \in V(R)$ is the node of $R$ we are inserting on\;
\ForEach{child $u \in ch_{T^*_1}(t) \setminus ch_{T_1}(t)$}
{
Insert the $T^*_1(u)$ subtree as a child of $r$ \label{line:child-tstar}\;
}
\ForEach{child $u \in ch_{T_1}(t)$}
{
\uIf{$\exists w \in ch_{T_1}(t) \setminus \{u\}$ such that $sig_{\l_1}(\mathcal{T}_1(w)) = sig_{\l_1}(\mathcal{T}_1(u))$}
{
Insert the $T^*_1(u)$ subtree as a child of $r$ \label{line:child-two}\;
}
\uElse
{
Let $u_2 \in ch_{T_2}(r)$ such that $sig_{\l_2}(\mathcal{T}_2(u_2)) = sig_{\l_1}(\mathcal{T}_1(u))$\;
Let $u_3 \in ch_{T_3}(r)$ such that $sig_{\l_3}(\mathcal{T}_3(u_3)) = sig_{\l_1}(\mathcal{T}_1(u))$\;
\uIf{$u_2 \neq u_3$}
{
Insert the $T^*_1(u)$ subtree as a child of $r$ \label{line:child-onediff} \;
}
\uElse
{
\uIf{$u_2 \neq z$}
{
Recursively call $insert(u, u_2)$ \label{line:recurse}\;
}
}
}
}
}
\caption{Insertion of $T_1^*$ into $R$.}
\label{alg:insertt1}
\end{algorithm2e}
Let us begin with a bit of intuition on this algorithm.
The idea is that $insert(t, r)$ embeds $T_1^*(t)$ into $R(r)$, where $t$ should be a node of $T_1$ and $r$ a node of $R$.
The initial call says that $r(T^*_1) = r(T_1)$ should correspond to $r(R)$ after $T^*_1$ is inserted, and the recursive calls make similar correspondences with children of $r(T^*_1)$ and $r(R)$, recursively.
We will maintain the invariant that at any point, $r \in V(R) \cap V(T_2) \cap V(T_3)$ and $\mathcal{T}_2(r), \mathcal{T}_3(r)$ have the same signature as $\mathcal{T}_1(t)$ (we prove this below).
Then in any recursion, when $t$ has children $u$ in $T_1^*$ but not in $T_1$, we know that $T_1^*(u)$ only has leaves in $Y_1$ (line~\ref{line:child-tstar}). In this case, we just insert the $T_1^*(u)$ subtree. The $Y_1$ leaves to insert must be at distance more than $k$ from all of $L(R)$, and the fact that $\mathcal{T}_2(r)$ and $\mathcal{T}_3(r)$ have similar subtrees at the $r$ location helps us guarantee this. That is, $\mathcal{T}_2(r)$ lets us argue on the distance relationships between $Y_1$ and the leaves in $L(R) \setminus L(\mathcal{T}_2(r))$, and $\mathcal{T}_3(r)$ helps us with all relationships with leaves in $L(\mathcal{T}_2(r))$.
This idea of complementarity is the reason we need both $\mathcal{T}_2$ and $\mathcal{T}_3$.
A similar idea applies when $T_1^*(u)$ is inserted on lines~\ref{line:child-two} and~\ref{line:child-onediff}. A recursion is needed for each $u \in ch_{T_1}(t)$ such that the node of $T_2$ and $T_3$ corresponding to $u$ is the same in $ch_R(r)$, since the complementarity trick cannot be applied.
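To make the control flow of Algorithm~\ref{alg:insertt1} concrete, the following Python transcription is a sketch only: the oracle bundle `A`, all of its field names (`chT1star`, `chT2`, `sig1`, `subtree`, etc.), and the parent-map representation of $R$ are our own abstractions, not part of the formal construction.

```python
from types import SimpleNamespace

def insert(t, r, A, R):
    """Sketch of the insertion recursion. `A` bundles the trees and
    signature oracles (names are our assumptions):
      A.chT1star(t) / A.chT1(t): children of t in T_1^* / T_1,
      A.chT2(r) / A.chT3(r): children of r in T_2 / T_3,
      A.sig1, A.sig2, A.sig3: subtree signature oracles,
      A.subtree(u): (child, parent) edges of the subtree T_1^*(u),
      A.z: the vertex z.
    `R` maps each node of the host tree to its parent."""
    def graft(u, at):
        # attach the whole subtree T_1^*(u) below `at`
        R[u] = at
        for child, par in A.subtree(u):
            R[child] = par

    for u in A.chT1star(t):
        if u not in A.chT1(t):
            graft(u, r)                          # line: child-tstar
            continue
        if any(w != u and A.sig1(w) == A.sig1(u) for w in A.chT1(t)):
            graft(u, r)                          # line: child-two
            continue
        u2 = next(w for w in A.chT2(r) if A.sig2(w) == A.sig1(u))
        u3 = next(w for w in A.chT3(r) if A.sig3(w) == A.sig1(u))
        if u2 != u3:
            graft(u, r)                          # line: child-onediff
        elif u2 != A.z:
            insert(u, u2, A, R)                  # line: recurse

# toy run: T_1^* has root t with one subtree rooted at a, where a is not in T_1
demo = SimpleNamespace(
    chT1star=lambda t: ["a"], chT1=lambda t: [],
    chT2=lambda r: [], chT3=lambda r: [],
    sig1=lambda u: 0, sig2=lambda u: 0, sig3=lambda u: 0,
    subtree=lambda u: [("b", "a")], z="z")
R = {"r": None, "z": "r"}
insert("t", "r", demo, R)
```

The toy run only exercises the first branch (grafting a $Y_1$-subtree); the other branches require consistent signature data across $T_1, T_2, T_3$ as guaranteed by Claim~\ref{claim:basic}.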
Let us proceed with the details.
Since the algorithm only inserts subtrees as children of nodes of $R$, it is not hard to see that the modified $R$ will also be a tree.
For the rest of the proof, let $R'$ be the tree obtained after the algorithm has terminated, assuming that the initial call is $insert(r(T^*_1), r(R))$.
The rest of the proof is dedicated to showing that $R'$ is a $k$-leaf root of $G$. The proof is divided in a series of claims.
\begin{claim}\label{claim:basic}
At any point that $insert(t, r)$ is called and arguments $t$ and $r$ are received, the following holds:
\begin{itemize}
\item
$t \in V(T_1) \setminus L(T_1)$;
\item
$r \in V(T_2) \cap V(T_3)$;
\item
$sig_{\l_1}(\mathcal{T}_1(t)) = sig_{\l_2}(\mathcal{T}_2(r)) = sig_{\l_3}(\mathcal{T}_3(r))$.
\end{itemize}
\end{claim}
\begin{proof}
We argue this by induction on the depth of the recursion.
As a base case, consider the initial call with $t = r(T^*_1)$ and $r = r(R)$. We have $t \in V(T_1)$ since $r(T^*_1) = r(T_1)$. To see that $t \notin L(T_1)$, note that $|L(T_1)| = |C_1 \cup \{z\}| \geq 2$ implies that $r(T_1)$ is not a leaf itself, since leaves have $0$ children. Thus initially, $t \in V(T_1) \setminus L(T_1)$.
Also, we have already argued that $r = r(R) = r(T_2) = r(T_3)$. Moreover, in this initial case, $sig_{\l_1}(\mathcal{T}_1(t)) = sig_{\l_1}(\mathcal{T}_1) = sig_{\l_2}(\mathcal{T}_2) = sig_{\l_2}(\mathcal{T}_2(r))$ and the same holds for $\mathcal{T}_3$.
Now, assume that the statement holds for any recursive call of depth $\delta$.
We argue that it also holds for recursive calls at depth $\delta + 1$.
When a call at recursion depth $\delta$ with parameters $t$ and $r$ is made, the only way a deeper recursive call could be made is when line~\ref{line:recurse} is reached. This happens when $t$ has a child $u$ in $T_1$ such that no other child of $t$ in $T_1$ has the same signature.
Since we assume by induction that $sig_{\l_1}(\mathcal{T}_1(t)) = sig_{\l_2}(\mathcal{T}_2(r))$, by Lemma~\ref{lem:basic-sigs}.\ref{lem:basic-onechild}, $r$ must also have exactly one child $u_2$ in $T_2$ such that $sig_{\l_2}(\mathcal{T}_2(u_2)) = sig_{\l_1}(\mathcal{T}_1(u))$. By the same argument, $r$ has exactly one child $u_3$ in $T_3$ such that $sig_{\l_3}(\mathcal{T}_3(u_3)) = sig_{\l_1}(\mathcal{T}_1(u))$. In particular, $u_2$ and $u_3$ as used in the algorithm always exist.
A recursive call is only made if $u_2 = u_3$ and $u_2 \neq z$, in which case $u$ and $u_2$ are passed to a recursive call.
We know that $u \in V(T_1)$, and must argue that $u \notin L(T_1)$.
Assume instead that $u$ is a leaf of $T_1$.
Since $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_2}(\mathcal{T}_2(u_2)) = sig_{\l_3}(\mathcal{T}_3(u_3))$, $u_2$ would also be a leaf of $T_2$ and $u_3$ a leaf of $T_3$. We have $L(T_2) = C_2 \cup \{z\}$ and $L(T_3) = C_3 \cup \{z\}$. Since $C_2$ and $C_3$ are disjoint, only $u_2 = u_3 = z$ is possible, but this is a contradiction since the algorithm explicitly states that line~\ref{line:recurse} could not be reached in this case. Hence, $u \notin L(T_1)$, which proves the first property of the claim.
The second and third properties of the claim hold for the recursive call because $u_2 = u_3$, and because they are explicitly chosen to have the same signature.
\end{proof}
We show that to create $R'$, the algorithm inserts a subtree that is leaf-isomorphic to $T^*_1$. This ensures that distance relationships between leaves of $T^*_1$ are the same in $R'$.
\begin{claim}\label{claim:iso}
$R'|(\hC{1}) \simeq_L T^*_1$.
\end{claim}
\begin{proof}
We prove the claim by induction on the height of $T_1(t)$.
That is, we show
that for any $t$ and $r$ passed to the algorithm, $R'$ contains a subtree $R'_t$ that is leaf-isomorphic to $T_1^*(t)$, and that the leaf-isomorphism from $T_1^*(t)$ to $R'_t$ maps $t$ to $r$.
By Claim~\ref{claim:basic}, $t$ is in $T_1$ but not a leaf of $T_1$. So as a base case, consider that the algorithm receives arguments $t$ and $r$, such that $T_1(t)$ is a subtree of $T_1$ of height $2$. That is, all the children of $t$ in $T_1$ are leaves (but not necessarily all the children of $t$ in $T^*_1$).
Consider a child $u \in ch_{T^*_1}(t) \setminus ch_{T_1}(t)$.
Then line~\ref{line:child-tstar} ensures that $T_1^*(u)$ is inserted as a child subtree of $r$ in $R'$.
Consider now a child $u \in ch_{T_1}(t)$, which is a leaf by assumption.
If some other child $w$ of $t$ in $T_1$ has the same signature (i.e. $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_1}(\mathcal{T}_1(w))$), line~\ref{line:child-two} ensures that $u$ is inserted as a child of $r$.
Otherwise, there are $u_2 \in ch_{T_2}(r)$ and $u_3 \in ch_{T_3}(r)$ with the same signature as $u$, i.e. $sig_{\l_2}(\mathcal{T}_2(u_2)) = sig_{\l_3}(\mathcal{T}_3(u_3)) = sig_{\l_1}(\mathcal{T}_1(u))$.
As in the previous claim, $u_2$ and $u_3$ must also be leaves and one of $u_2 \neq u_3$ or $u_2 = u_3 = z$ must hold, because $C_2$ and $C_3$ are disjoint.
If $u_2 = u_3 = z$, then $u = z$ as well, since this is the only leaf with signature $(0)$. In this case, $t = r(T_1)$ and $r = r(R)$, and the check that $u_2 \neq z$ in the algorithm ensures that nothing happens (which is correct, since $z$ is already a child of $r$). If $u_2 \neq u_3$, $u$ is added as a child of $r$ on line~\ref{line:child-onediff} (which is correct since $u \neq z$).
It follows that all children of $t$ in $T^*_1$ are children of $r$ in $R'$. Therefore, $R'$ contains a subtree leaf-isomorphic to $T^*_1(t)$ with $t$ mapped to $r$.
Now assume that $T_1(t)$ is a subtree of height greater than $2$.
Notice that for each child $u \in ch_{T^*_1}(t)$, $T_1^*(u)$ is inserted as a child subtree of $r$ in $R'$,
unless $t$ has no other child with the same signature as $u$, and there are $u_2 \in ch_{T_2}(r)$ and $u_3 \in ch_{T_3}(r)$, with $u_2 = u_3$, with the same signature as $u$.
As before, if $u = z$, then $u_2 = u_3 = z$ and nothing happens, which is correct.
Otherwise, line~\ref{line:recurse} is reached and the algorithm is called recursively with arguments $u$ and $u_2$.
In this case, we may assume by induction that $R'$ contains a subtree leaf-isomorphic to $T^*_1(u)$, with the leaf-isomorphism mapping $u$ to $u_2$.
Let $U$ be the set of children of $t$ in $T_1$ for which line~\ref{line:recurse} is reached. Notice that for each distinct $u, u' \in U$, $sig_{\l_1}(\mathcal{T}_1(u)) \neq sig_{\l_1}(\mathcal{T}_1(u'))$, since children with a non-unique signature are inserted on line~\ref{line:child-two}.
Moreover, each $u \in U$ is put in correspondence with a child $u_2$ of $r$ with the same signature as $u$, implying that every $u \in U$ has a distinct correspondent among the children of $r$.
We can therefore conclude that for each $u \in ch_{T^*_1}(t)$ (other than $z$), either $T^*_1(u)$ is inserted as a child subtree of $r$, or $T^*_1(u)$ is inserted recursively with $u$ being mapped to a unique child $u_2$ of $r$. It follows that $R'$ contains a subtree leaf-isomorphic to $T_1^*(t)$ with $t$ mapped to $r$.
To finish the proof, we observe that the above statement holds for $t = r(T_1) = r(T^*_1)$ and $r = r(R)$. Thus $R'$ contains a subtree leaf-isomorphic to $T^*_1$ with a leaf-isomorphism that maps $r(T^*_1)$ to $r(R)$.
\end{proof}
It remains to argue that distance relationships in $R'$ are also correct for pairs of leaves with one in $\hC{1}$ and the other in $V(G) \setminus (\hC{1}) = L(R) \setminus \{z\}$.
\begin{claim} \label{claim:cross}
Let $x \in L(T^*_1)$ and $y \in L(R) \setminus \{z\}$.
If $xy \in E(G)$, then $dist_{R'}(x, y) \leq k$, and otherwise,
$dist_{R'}(x, y) > k$.
\end{claim}
\begin{proof}
We argue that anytime that a leaf of $T^*_1$ is inserted, the conditions of the claim are satisfied.
Consider any recursion of the algorithm where parameters $t$ and $r$ are given, and let us focus on the set of leaves of $T^*_1$ inserted during that recursion.
Let $u \in ch_{T^*_1}(t)$ and assume that $T^*_1(u)$ is inserted as a child of $r$ in the current recursion. Let $x \in L(T^*_1(u))$ be an inserted leaf.
We consider two possible cases.
\medskip
\noindent
\textbf{Case 1: $x \in L(T^*_1(u)) \setminus L(T_1)$.}
Observe that $x \in L(T^*_1) \setminus L(T_1) = (\hC{1}) \setminus (C_1 \cup \{z\}) = Y_1$, and thus $x$ has no neighbors in $L(R)$. Thus we must argue that $x$ is at distance more than $k$ to any leaf $y$ of $R$.
Let $x'$ be the lowest ancestor of $x$ in $T^*_1$ (i.e. farthest from the root) such that $x' \in V(T_1)$. Note that if $T_1^*(u)$ was inserted on line~\ref{line:child-tstar}, then $x' = t$ since $u \notin V(T_1)$ but $t \in V(T_1)$ by Claim~\ref{claim:basic}, and otherwise, $x'$ is a descendant of $u$.
In any case, $x'$ is in $V(T_1(t))$. Recall that $\mathcal{T}_1 = (T_1, \sigma_1)$, and that $\sigma_1(x')$ is the minimum distance from $x'$ to a leaf descending from a node in $ch_{T^*_1}(x') \setminus ch_{T_1}(x')$. We note that $x$ is such a leaf, and thus $dist_{T^*_1}(x, x') \geq \sigma_1(x')$.
Also recall that by Claim~\ref{claim:basic}, $r \in V(T_2) \cap V(T_3)$ and $sig_{\l_1}(\mathcal{T}_1(t)) = sig_{\l_2}(\mathcal{T}_2(r)) = sig_{\l_3}(\mathcal{T}_3(r))$.
By Lemma~\ref{lem:basic-sigs}.\ref{lem:basic-samex}, there is $x_2' \in V(T_2(r))$ such that $dist_{T_1}(x', t) = dist_{T_2}(x_2', r)$ and $sig_{\l_1}(\mathcal{T}_1(x')) = sig_{\l_2}(\mathcal{T}_2(x_2'))$.
In particular, $\sigma_1(x') = \sigma_2(x_2')$.
Since $\mathcal{T}_2$ is the valued restriction of $T^*_2$ to $C_2 \cup \{z\}$, this implies that there exists $x_2 \in L(T_2^*) \setminus L(T_2) = Y_2$, descending from a node in $ch_{T^*_2}(x_2') \setminus ch_{T_2}(x_2')$, such that $dist_{T^*_2}(x_2, x_2') = dist_{R}(x_2, x_2') = \sigma_2(x_2') = \sigma_1(x') \leq dist_{T^*_1}(x, x')$.
Since the distance from $x_2'$ to $r$ is the same as the distance from $x'$ to $t$, this also implies that $dist_{R}(x_2, r) \leq dist_{T_1^*}(x, t)$.
Now, let $y \in L(R) \setminus (C_2 \cup Y_2)$.
We have $x_2y \notin E(G)$ and obtain
\begin{align*}
k < dist_R(x_2, y) &\leq dist_R(x_2, r) + dist_R(r, y) \\
&\leq dist_{T_1^*}(x, t) + dist_{R}(r, y) \\
&= dist_{R'}(x, r) + dist_{R'}(r, y) \\
&= dist_{R'}(x, y)
\end{align*}
where, for the last two equalities, we used the fact that $T^*_1(u)$ was inserted as a child subtree of $r$, and thus that the path from $x$ to $y$ in $R'$ must first go to $r$, and then to $y$.
This shows that $x$ has the correct distance to all $y \in L(R) \setminus (C_2 \cup Y_2)$. It remains to handle the vertices in $C_2 \cup Y_2$.
For the moment, let us instead consider $y \in L(R) \setminus (C_3 \cup Y_3)$. We can apply the same argument as above, but using $T_3$ instead. That is, by the equality of the signatures, there is some $x_3 \in Y_3$ such that $dist_{T^*_3}(x_3, r) = dist_R(x_3, r) \leq dist_{T^*_1}(x, t)$.
By repeating the above argument, we deduce that for any $y \in L(R) \setminus (C_3 \cup Y_3)$, $dist_R(x_3, y) > k$, which in turn implies
$dist_{R'}(x, y) > k$.
In particular, this holds for any $y \in C_2 \cup Y_2$.
We have therefore shown that $dist_{R'}(x, y) > k$ for any $y \in L(R)$, and thus that $x$ is correctly inserted.
\medskip
\noindent
\textbf{Case 2: $x \in L(T_1)$}.
In this case, $u \in ch_{T_1}(t)$ and $T_1^*(u)$ must have been inserted as a child of $r$ on line~\ref{line:child-two} or on line~\ref{line:child-onediff}. Note that $x \in C_1$ ($x = z$ is not possible since $z$ is not reinserted in $R'$).
Let us define $u_2 \in ch_{T_2}(r)$ and $u_3 \in ch_{T_3}(r)$ to handle both cases.
If $T^*_1(u)$ is inserted on line~\ref{line:child-onediff}, then define $u_2$ and $u_3$ as in the algorithm. In this case, we must have $u_2 \neq u_3$.
If instead $T^*_1(u)$ is inserted on line~\ref{line:child-two}, then there is $w \in ch_{T_1}(t)$ such that $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_1}(\mathcal{T}_1(w))$ and $w \neq u$.
By Lemma~\ref{lem:basic-sigs}.\ref{lem:basic-twochild}, there exist distinct $u_2, u_2' \in ch_{T_2}(r)$ and $u_3, u_3' \in ch_{T_3}(r)$ such that
$sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_2}(\mathcal{T}_2(u_2)) = sig_{\l_2}(\mathcal{T}_2(u_2')) = sig_{\l_3}(\mathcal{T}_3(u_3)) = sig_{\l_3}(\mathcal{T}_3(u_3'))$.
Observe that at least one of $u_2 \neq u_3$ or $u_2 \neq u'_3$ holds. Assume without loss of generality that $u_2 \neq u_3$.
In either case, note that $u_2$ and $u_3$ are defined so that $u_2 \neq u_3$, $u_2 \in ch_{T_2}(r)$, $u_3 \in ch_{T_3}(r)$, and $sig_{\l_1}(\mathcal{T}_1(u)) = sig_{\l_2}(\mathcal{T}_2(u_2)) = sig_{\l_3}(\mathcal{T}_3(u_3))$.
By Lemma~\ref{lem:basic-sigs}.\ref{lem:basic-samex}, there exists $x_2 \in L(T_2(u_2))$
such that $dist_{T_2^*}(x_2, u_2) = dist_{T_1}(x, u)$ and $sig_{\l_1}(\mathcal{T}_1(x)) = sig_{\l_2}(\mathcal{T}_2(x_2))$. Since the signature of a leaf is just its layer, this implies that $\l_1(x) = \l_2(x_2)$.
Also note that $dist_{R'}(x, r) = dist_{R'}(x_2, r)$.
Similarly, there exists $x_3 \in L(T_3)$
such that $\l_1(x) = \l_3(x_3)$ and such that $dist_{T_3^*}(x_3, u_3) = dist_{T_1}(x, u)$, i.e.
that $dist_{R'}(x, r) = dist_{R'}(x_3, r)$.
Also note that the paths from $x$ to $r$, $x_2$ to $r$ and $x_3$ to $r$
only intersect at $r$, since $u, u_2$ and $u_3$ are all distinct nodes of $R'$.
First consider $y \in L(R) \setminus (C_2 \cup Y_2 \cup C_3 \cup Y_3)$.
Since the layers are all equal, by Property~\ref{cut:layerssame} of similar structures, $xy \in E(G) \Leftrightarrow x_2y \in E(G) \Leftrightarrow x_3y \in E(G)$.
If $y$ is not a descendant of $u_2$, then the path from $x_2$ to $y$ goes through $r$, implying $dist_{R'}(x, y) = dist_{R'}(x_2, y)$.
If $y$ is not a descendant of $u_3$,
then the path from $x_3$ to $y$ goes through $r$, implying $dist_{R'}(x, y) = dist_{R'}(x_3, y)$.
Since $u_2 \neq u_3$, $y$ cannot be a descendant of both $u_2$ and $u_3$, and so at least one of $dist_{R'}(x, y) = dist_{R'}(x_2, y)$ or $dist_{R'}(x, y) = dist_{R'}(x_3, y)$ must hold.
If $xy \in E(G)$, then $x_2y, x_3y \in E(G)$ and $dist_{R'}(x_2, y) \leq k$, $dist_{R'}(x_3, y) \leq k$ both hold, implying $dist_{R'}(x, y) \leq k$.
Similarly, if $xy \notin E(G)$, then $x_2y, x_3y \notin E(G)$ and, in the same manner, $dist_{R'}(x_2, y) > k$ and $dist_{R'}(x_3, y) > k$ both hold, implying $dist_{R'}(x, y) > k$.
Now consider $y \in C_2 \cup Y_2$. Assume that $xy \notin E(G)$.
Then by Property~\ref{cut:layerssame}, $x_3y \notin E(G)$.
If $y$ is not a descendant of $u_3$, then the path from $x_3$ to $y$ goes through $r$ and $dist_{R'}(x, y) = dist_{R'}(x_3, y) > k$.
If $y$ is a descendant of $u_3$, then we have
\[
k < dist_{R'}(x_3, y) \leq dist_{R'}(x_3, r) + dist_{R'}(r, y) = dist_{R'}(x, r) + dist_{R'}(r, y)
\]
and, since $dist_{R'}(x, r) + dist_{R'}(r, y) = dist_{R'}(x, y)$, the distance from $x$ to $y$ is correctly above $k$.
Assume that $xy \in E(G)$. Again by Property~\ref{cut:layerssame}, $x_3y \in E(G)$. Note that $y \in Y_2$ is not possible since $Y_2$ only has neighbors in $C_2 \cup Y_2$. So we know that $y \in C_2$, and hence $\l_2(y)$ is defined. If $y$ is not a descendant of $u_3$, the usual argument applies, since the path from $x_3$ to $y$ goes through $r$, implying $dist_{R'}(x, y) = dist_{R'}(x_3, y) \leq k$.
Suppose that $y$ is a descendant of $u_3$.
By Property~\ref{cut:layers-gt-k} of similar structures, we must have $\l_1(x) + \l_2(y) \leq k$ (because if $\l_1(x) + \l_2(y) > k$, $xy$ could not be an edge). As $\l_1(x) = \l_2(x_2)$,
we have $\l_2(x_2) + \l_2(y) \leq k$ as well. Luckily, Property~\ref{cut:layers-leq-k} is applicable to vertices in the same $C_i$, and so $x_2y \in E(G)$.
The path from $x_2$ to $y$ goes through $r$, and thus
$dist_{R'}(x, y) = dist_{R'}(x_2, y) \leq k$.
This covers the case $y \in C_2 \cup Y_2$.
The last remaining case is $y \in C_3 \cup Y_3$. This can be handled exactly as the previous case, but swapping the roles of $x_2$ and $x_3$.
This completes the proof, as we have covered every possible case for a leaf inserted by the algorithm, at any point.
\end{proof}
To conclude the proof, we argue that $R'$ is a $k$-leaf root of $G$ by looking at every pair of vertices of $G$.
First note that by Claim~\ref{claim:iso}, all the leaves of $T^*_1$ are present in $R'$, and so $V(G) = L(R')$.
Let $x, y \in V(G)$, with $x \neq y$.
If $x, y \in V(G) \setminus (C_1 \cup Y_1)$, then
$x, y \in L(R)$ and, since $dist_{R}(x, y) = dist_{R'}(x, y)$, we know that $xy \in E(G) \Leftrightarrow dist_{R'}(x, y) \leq k$.
If $x, y \in C_1 \cup Y_1 \cup \{z\}$, then by Claim~\ref{claim:iso},
$R'|(C_1 \cup Y_1 \cup \{z\})$ is leaf-isomorphic to $T^*_1$. Since $T^*_1$ is a $k$-leaf root of $G[\hC{1}]$,
we know that $xy \in E(G) \Leftrightarrow dist_{R'}(x, y) \leq k$.
Finally, if $x \in C_1 \cup Y_1$ and $y \in V(G) \setminus (C_1 \cup Y_1)$, Claim~\ref{claim:cross} ensures that $xy \in E(G) \Leftrightarrow dist_{R'}(x, y) \leq k$.
Therefore, $R'$ is a $k$-leaf root of $G$.
\end{proof}
\section{Computing the set of accepted signatures} \label{sec:tw}
We have not yet discussed how to find a homogeneous similar structure $\S$.
Since $k$ is fixed, it is not too hard to find a similar structure of $G$ with the properties of Lemma~\ref{lem:hom-struct}, if one exists.
It suffices to brute-force every combination of $3|S(k, 3k)|$ subsets of $V(G)$ of size at most $d^k$ in $G$ to find the $C_i$'s, and check that all similar structure properties hold.
In particular, each $G[C_i \cup Y_i \cup \{z\}]$ has maximum degree at most $d^k$, so using Lemma~\ref{lem:prelim:bounded-arity}, we can check whether this is a $k$-leaf power.
However, this is not enough, since homogeneity requires enumerating all accepted signatures for the found $\hC{i}$, in order to construct $accept(\S, C_i)$ and ensure that they are all equal.
We will achieve this through a tree decomposition of $G[\hC{i}]$.
Let us assume that $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ is a similar structure satisfying $|\mathcal{C}| = 3|S(k, 3k)|$, with each $|C_i| \leq d^k$ such that $G[\hC{i}]$ has maximum degree at most $d^k$.
We want to compute $accept(\S, C_i)$ for each $C_i \in \mathcal{C}$.
This can be achieved using a slightly more general result.
\begin{lemma}\label{lem:enumerate}
Let $G$ be a connected graph of maximum degree at most $d^k$ and let $z \in V(G)$. Then in time $O(n \cdot d^{8k} \cdot f(k)^3)$, where $n = |V(G)|$ and $f(k) = d^{4kd^{4k}} \cdot (k + 2)^{d^{4k}}$, one can enumerate the set of all valued trees $\mathbb{T} = \{\mathcal{T}_1, \ldots, \mathcal{T}_l\}$, up to value-isomorphism, such that $\mathcal{T}_i \in \mathbb{T}$
if and only if there exists a $k$-leaf root $T^*$ of $G$ such that (1) $T^*$ is rooted at the parent of $z$; and (2) $\mathcal{T}_i$ is the valued restriction of $T^*$ to $N_G[z]$.
\end{lemma}
To see why Lemma~\ref{lem:enumerate} allows us to compute $accept(\S, C_i)$, recall that the latter contains the signature of every $\mathcal{T}$ that is the valued restriction of $T^*$ to $C_i \cup \{z\}$, where $T^*$ is a $k$-leaf root of $G[\hC{i}]$ with the root as the parent of $z$. Since $N_{G[\hC{i}]}[z] = C_i \cup \{z\}$, we can pass $G[\hC{i}]$ and $z$ to the above lemma.
By taking the signature of every $\mathcal{T}_i$ returned, we obtain $accept(\S, C_i)$. Note that the lemma does not deal with layers, so the leaf sets of the desired $\mathcal{T}_i$'s are $N_G[z]$ (instead of integers representing layers as in the previous section).
The rest of this section is dedicated to the proof of Lemma~\ref{lem:enumerate}.
We will write $N(v)$ and $N[v]$ instead of $N_G(v)$ and $N_G[v]$, respectively, with the understanding that $G$ is the graph stated in Lemma~\ref{lem:enumerate}. Likewise, for $X \subseteq V(G)$, we write $N(X)$ and $N[X]$ instead of $N_G(X)$ and $N_G[X]$, respectively.
The proof is based on the tree decomposition of $G$ and uses a relatively standard dynamic programming algorithm.
Recall that given a graph $G$, a nice tree decomposition of $G$ is a tree $B = (V_B, E_B)$ in which (1) $B_i \subseteq V(G)$ for each $B_i \in V_B$; (2) for each $uv \in E(G)$, there is some $B_i \in V_B$ with $u, v \in B_i$; (3) for each $u \in V(G)$, the set of $B_i$'s that contain $u$ form a connected subgraph of $B$.
Moreover, each $B_i \in V_B$ can be one of four types: (1) $B_i \in L(B)$, in which case $B_i = \{v\}$ for some $v \in V(G)$; (2) $B_i$ is an \emph{introduce} node, in which case $B_i$ has a single child $B_j$ with $B_j = B_i \setminus \{v\}$ for some $v \in B_i$; (3) $B_i$ is a \emph{forget} node, in which case $B_i$ has a single child $B_j$ with $B_i = B_j \setminus \{v\}$ for some $v \in B_j$; (4) $B_i$ is a \emph{join} node, in which case $B_i$ has two children $B_l, B_r$ and $B_i = B_l = B_r$.
The width of $B$ is $\max_{B_i \in V_B}(|B_i| - 1)$, and the treewidth of $G$ is the minimum width of a nice tree decomposition of $G$.
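The defining properties of a tree decomposition can be checked mechanically. The following sketch (the function names and the input encoding are our own choices, not taken from the paper) verifies the three properties for a decomposition given as a list of bags together with an adjacency list of the decomposition tree:

```python
from collections import deque

def is_tree_decomposition(n, edges, bags, dec_adj):
    """Check the three tree-decomposition properties for a graph on
    vertices {0, ..., n-1}. `bags` is a list of vertex sets; `dec_adj`
    is the adjacency list of the decomposition tree over bag indices."""
    # (1) every bag is a subset of V(G)
    if any(not bag <= set(range(n)) for bag in bags):
        return False
    # (2) every edge of G lies inside some bag
    for u, v in edges:
        if not any(u in bag and v in bag for bag in bags):
            return False
    # (3) the bags containing each vertex form a connected subtree
    for x in range(n):
        holders = [i for i, bag in enumerate(bags) if x in bag]
        if not holders:
            return False
        seen, queue = {holders[0]}, deque([holders[0]])
        while queue:
            i = queue.popleft()
            for j in dec_adj[i]:
                if j not in seen and x in bags[j]:
                    seen.add(j)
                    queue.append(j)
        if len(seen) != len(holders):
            return False
    return True

def width(bags):
    return max(len(bag) for bag in bags) - 1
```

For instance, a triangle placed in a single bag is a valid decomposition of width $2$, whereas a vertex appearing in two bags that are not connected through bags containing it violates property (3).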
We note that~\cite{eppstein2020parameterized} also use a tree decomposition based algorithm. However, it does not seem directly adaptable to our purposes, since it is not guaranteed to return every structure of every $k$-leaf root of the given graph, and it does not maintain the $\sigma$ information that we need.
For our algorithm,
we first check whether $G$ is chordal: if not, we know by Lemma~\ref{lem:chordal} that $G$ is not a $k$-leaf power and we can return $\mathbb{T} = \emptyset$.
Otherwise, since $G$ has maximum degree at most $d^k$ and is chordal, $G$ has clique number at most $d^k + 1$ and thus treewidth at most $d^k$.
The properties of the tree decomposition that we will need are summarized here.
\begin{lemma}\label{lem:treedecomp}
Let $G$ be a connected chordal graph of maximum degree at most $d^k$, and let $z \in V(G)$. Then there exists a nice tree decomposition $B = (V_B, E_B)$ of $G$ with $O(|V(G)|)$ nodes and of width at most $d^k$, such that $r(B) = \{z\}$ and such that each $B_i \in V_B$ is a non-empty clique.
\end{lemma}
We omit the proof, which uses standard arguments. The idea is that connected chordal graphs admit a tree decomposition in which every bag is a non-empty clique (see e.g.~\cite{blair1993introduction}). We can take such a decomposition, root it at a bag containing $z$, and apply the standard transformation to make it nice (if the root is not exactly $\{z\}$, it can be made $\{z\}$ by adding enough forget nodes above).
Let $B = (V_B, E_B)$ be a nice tree decomposition that satisfies all the properties of Lemma~\ref{lem:treedecomp}.
For $B_i \in V_B$, we will denote by $V_i = \bigcup_{B_j \in B(B_i)} B_j$ the set of vertices of $G$ found in the bags at $B_i$ or its descendants.
Note that for each $V_i$, $G[V_i]$ is connected. This can be seen inductively. First, it is true for leaves. At introduce nodes, we introduce a vertex $v$ connected to the rest of $B_i$, so $G[V_i]$ remains connected. At forget nodes, $V_i$ is the same as $V_j$ and remains connected. At join nodes, $V_i$ is the union of the vertices of two connected graphs that intersect at $B_i$, and thus remains connected.
Let $B_i$ be a bag of $B$ and let $(T, \sigma)$ be a valued tree.
We say that $(T, \sigma)$ is \emph{valid for} $B_i$ if it satisfies the following conditions:
\begin{itemize}
\item
$L(T) = N[B_i] \cap V_i$;
\item
there exists a $k$-leaf root $T^*$ of $G[V_i]$ such that $(T, \sigma)$ is the valued restriction of $T^*$ to $N[B_i] \cap V_i$, and such that $r(T) = r(T^*)$.
\end{itemize}
Since $r(B) = \{z\}$, our goal is to obtain the set of \emph{all} valued trees that are valid for $r(B)$, as this will yield the valued restrictions to $N[z]$ required by Lemma~\ref{lem:enumerate}.
Note that unlike in the previous section, there is no requirement on the root of $T$ or $T^*$ being the parent of $z$.
The requirement that $r(T) = r(T^*)$ is there to ensure that the root we see in the restricted $T$ is also the root in $T^*$ (since otherwise, the restriction could hide nodes of $T^*$ above the root of $T$). This is important for our purposes, since it allows us to safely look at the valid valued trees for $r(B) = \{z\}$ whose root is the parent of $z$.
Also note that it is tempting to define $(T, \sigma)$ as the valued restriction of $T^*$ to $B_i$, and not bother with $N[B_i] \cap V_i$. This does not quite work --- the inclusion of the neighborhood is necessary to retain enough information to update the $\sigma(w)$ values accurately (see proof for details).
For the rest of this section, we shall slightly abuse notation and treat two valued trees that are value-isomorphic as \emph{the same}. In other words, two valued trees are considered \emph{distinct} only if they are not value-isomorphic.
We first show that the number of distinct $(T, \sigma)$ valued trees to consider is bounded (with a crude estimate), meaning that we can enumerate all candidates.
\begin{lemma}\label{lem:valid-bounded}
Let $B_i \in V(B)$ and let $(T, \sigma)$ be a valid valued tree for $B_i$.
Then $T$ has at most $d^{4k}$ nodes and is $k$-bounded.
Consequently, there are at most $d^{4kd^{4k}} \cdot (k + 2)^{d^{4k}}$ possible valid valued trees for $B_i$ (up to value-isomorphism).
\end{lemma}
\begin{proof}
Let $T$ be a valid valued tree for $B_i$. Then $L(T) = N[B_i] \cap V_i$. First recall that $|B_i| \leq d^k + 1$.
Since $G$ has maximum degree at most $d^k$, $|N[B_i]| \leq d^k + 1 + (d^k + 1) \cdot d^k \leq 2d^{2k}$, assuming $d \geq 2$. Let $T'$ be the unique tree such that $T'$ has no node with only one child, and such that $T$ is a subdivision of $T'$. In other words, $T'$ is obtained from $T$ by contracting every node of degree $2$ (except the root). Then $T'$ has at most $|L(T')| \leq 2d^{2k}$ internal nodes and at most $4d^{2k}$ edges.
Moreover, each edge $uv$ of $T'$ represents a path $(u, x_1, \ldots, x_l, v)$ in $T$, where $x_1, \ldots, x_l$ each have exactly one child.
Note that $l \leq k$, as otherwise the leaves in $T(v)$ would be at distance more than $k$ from the other leaves, whereas $N[B_i] \cap V_i$ induces a connected subgraph of $G$.
Thus each edge of $T'$ corresponds to at most $k$ additional nodes in $T$. Therefore, $T$ has at most $2d^{2k}$ leaves and at most $2d^{2k} + 4kd^{2k}$ internal nodes (those of $T'$, plus at most $k$ subdivision nodes per edge), for a total of at most
$(4k + 4)d^{2k} \leq d^{4k}$ nodes (assuming $d \geq 2$ and $k \geq 2$).
The number of trees with at most $d^{4k}$ nodes is bounded by $(d^{4k})^{d^{4k}} = d^{4kd^{4k}}$.
We know that $(T, \sigma)$ is $k$-bounded by Lemma~\ref{lem:kbounded-and-height-general} (since $N[B_i] \cap V_i$ consists of a clique $B_i$ and a subset of $N[B_i]$).
Thus, for each possible tree, each of the at most $d^{4k}$ internal nodes can receive a value in $\{0, 1, \ldots, k, \infty\}$, i.e. $k + 2$ possibilities. Thus the number of valued trees is bounded by $d^{4kd^{4k}} \cdot (k + 2)^{d^{4k}}$.
\end{proof}
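As a quick sanity check on these crude estimates, the following Python snippet evaluates the two bounds from the lemma for concrete values of $d$ and $k$; the helper names \texttt{max\_nodes} and \texttt{max\_valued\_trees} are ours, introduced only for this illustration.

```python
# Crude estimates from the lemma: a valid valued tree for a bag B_i
# has at most d^(4k) nodes, and the number of candidate valued trees
# (up to value-isomorphism) is at most d^(4k * d^(4k)) * (k + 2)^(d^(4k)).

def max_nodes(d, k):
    """Upper bound on the number of nodes of a valid valued tree."""
    return d ** (4 * k)

def max_valued_trees(d, k):
    """Upper bound on the number of candidate valued trees for a bag."""
    n = max_nodes(d, k)
    return d ** (4 * k * n) * (k + 2) ** n

# Even for tiny parameters the count is astronomical, but crucially it
# depends only on d and k, not on the size of the graph.
print(max_nodes(2, 2))  # 256
```

The point of the computation is not the (enormous) numerical value, but that the enumeration cost is a function of $d$ and $k$ alone.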
We now describe a dynamic programming recurrence over $B$ that constructs a set $Q[B_i]$ for each $B_i$. The set $Q[B_i]$ must contain \emph{exactly} the valid valued trees for $B_i$, i.e. $\mathcal{T} \in Q[B_i]$ if and only if $\mathcal{T}$ is valid for $B_i$. We assume that we enumerate each candidate valued tree from the above lemma, and must decide whether to put it in $Q[B_i]$ or not.
For convenience, denote $\mathcal{N}[B_i] := N[B_i] \cap V_i$ for the rest of this section.
\begin{itemize}
\item
\emph{Leaf node}. Let $B_i = \{v\}$ be a leaf of $B$. Then $(T, \sigma) \in Q[B_i]$ if and only if $T$ is the single node $v$ (and $\sigma$ has an empty domain).
\item
\emph{Introduce node}. Let $B_i$ be an introduce node with child $B_j = B_i \setminus \{v\}$.
Then put $(T, \sigma) \in Q[B_i]$ if and only if there exists $(T_j, \sigma_j) \in Q[B_j]$ such that all the following conditions are satisfied:
\begin{itemize}
\item
$T|\mathcal{N}[B_j] \simeq_L T_j$. Let $\mu$ be the leaf-isomorphism from $T|\mathcal{N}[B_j]$ to $T_j$;
\item
for every internal node $w \in V(T|\mathcal{N}[B_j])$, $\sigma(w) = \sigma_j(\mu(w))$ and for every internal node $w \in V(T) \setminus V(T|\mathcal{N}[B_j])$, $\sigma(w) = \infty$;
\item
for each $w \in L(T) \setminus \{v\}$,
$vw \in E(G)$ if and only if $dist_T(v, w) \leq k$;
\item
for each internal node $w \in V(T)$, $dist_T(v, w) + \sigma(w) > k$.
\end{itemize}
\item
\emph{Forget node}. Let $B_i$ be a forget node with child $B_j = B_i \cup \{v\}$.
Then put $(T, \sigma) \in Q[B_i]$ if and only if there exists $(T_j, \sigma_j) \in Q[B_j]$ such that
\begin{itemize}
\item
$r(T_j)$ has at least two distinct children $c_1, c_2$ such that $T_j(c_1)$ and $T_j(c_2)$ both have a leaf in $\mathcal{N}[B_i]$;
\item
$T \simeq_L T_j|\mathcal{N}[B_i]$.
Let $\mu$ be the leaf-isomorphism from $T$ to $T_j|\mathcal{N}[B_i]$;
\item
for each $w \in V(T) \setminus L(T)$, let $C(w) = \{c \in ch_{T_j}(\mu(w)) : \mu^{-1}(c) = \emptyset\}$, and let
\begin{align*}
\hat{L} = \bigcup_{c \in C(w)} L(T_j(c)) \quad \mbox { and } \quad \hat{I} = \Big(\{\mu(w)\} \cup \bigcup_{c \in C(w)} V(T_j(c))\Big) \setminus \hat{L}.
\end{align*}
Then
\begin{align*}
\sigma(w) = \min \begin{cases}
\min_{x \in \hat{I}} \big(\sigma_j(x) + dist_{T_j}(\mu(w), x)\big) \\
\min_{l \in \hat{L}} dist_{T_j}(\mu(w), l)
\end{cases}
\end{align*}
\end{itemize}
\item
\emph{Join node}. Let $B_i$ be a join node of $B$ with children $B_l$ and $B_r$, where $B_i = B_l = B_r$.
Then $(T, \sigma) \in Q[B_i]$ if and only if there exists $(T_l, \sigma_l) \in Q[B_l]$ and $(T_r, \sigma_r) \in Q[B_r]$ that satisfy:
\begin{itemize}
\item
$T|\mathcal{N}[B_l] \simeq_L T_l$. Let $\mu_l$ be the leaf-isomorphism from $T|\mathcal{N}[B_l]$ to $T_l$;
\item
$T|\mathcal{N}[B_r] \simeq_L T_r$. Let $\mu_r$ be the leaf-isomorphism from $T|\mathcal{N}[B_r]$ to $T_r$;
\item
for each $w \in V(T) \setminus L(T)$, we have $\sigma(w) = \min(\sigma_l(\mu_l(w)), \sigma_r(\mu_r(w)))$ if $w$ is in both $T|\mathcal{N}[B_l]$ and $T|\mathcal{N}[B_r]$,
$\sigma(w) = \sigma_l(\mu_l(w))$ if $w$ is only in $T|\mathcal{N}[B_l]$, and $\sigma(w) = \sigma_r(\mu_r(w))$ otherwise;
\item
for each distinct $u, v \in L(T)$, $uv \in E(G)$ if and only if $dist_T(u, v) \leq k$;
\item
for each $u \in \mathcal{N}[B_l] \setminus B_i$ and each internal node $w \in V(T|\mathcal{N}[B_r])$, $dist_{T}(u, w) + \sigma_r(\mu_r(w)) > k$.
Likewise, for each $u \in \mathcal{N}[B_r] \setminus B_i$ and each internal node $w \in V(T|\mathcal{N}[B_l])$, $dist_{T}(u, w) + \sigma_l(\mu_l(w)) > k$;
\item
for each internal node $w_l \in V(T|\mathcal{N}[B_l])$ and each internal node $w_r \in V(T|\mathcal{N}[B_r])$, $\sigma_l(\mu_l(w_l)) + dist_T(w_l, w_r) + \sigma_r(\mu_r(w_r)) > k$.
\end{itemize}
\end{itemize}
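Before proving correctness, it may help to see the overall shape of this procedure as code. The sketch below, in Python, shows only the bottom-up traversal over a nice tree decomposition; the \texttt{DecompNode} class, the \texttt{run\_dp} driver, and the toy handlers are hypothetical stand-ins introduced for this illustration. Real handlers would implement the leaf/introduce/forget/join conditions above on sets of valued trees.

```python
from dataclasses import dataclass, field

@dataclass
class DecompNode:
    kind: str               # "leaf", "introduce", "forget", or "join"
    bag: frozenset          # the bag B_i of this decomposition node
    children: list = field(default_factory=list)

def run_dp(node, handlers):
    """Post-order DP over a nice tree decomposition.

    handlers maps each node kind to a function producing the table
    Q[B_i] from the node and its children's tables; here the tables
    are plain sets standing in for sets of valued trees.
    """
    child_tables = [run_dp(c, handlers) for c in node.children]
    return handlers[node.kind](node, child_tables)

# Toy handlers: a leaf bag {v} contributes the single-node tree "v";
# introduce/forget/join merely pass tables along, standing in for
# the validity checks of the recurrence.
handlers = {
    "leaf": lambda n, _: {next(iter(n.bag))},
    "introduce": lambda n, cs: set(cs[0]),
    "forget": lambda n, cs: set(cs[0]),
    "join": lambda n, cs: cs[0] & cs[1],
}

leaf1 = DecompNode("leaf", frozenset({"v"}))
leaf2 = DecompNode("leaf", frozenset({"v"}))
join = DecompNode("join", frozenset({"v"}), [leaf1, leaf2])
print(run_dp(join, handlers))  # {'v'}
```

The structural point is that each table $Q[B_i]$ is computed from the tables of the children only, so the running time is the number of decomposition nodes times the per-node cost of filtering the bounded candidate set from Lemma~\ref{lem:valid-bounded}.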
\begin{lemma}\label{lem:q-is-correct}
Consider the recurrence given above. For any bag $B_i$, $(T, \sigma)$ is a valid valued restriction for $B_i$ if and only if $(T, \sigma) \in Q[B_i]$ (up to value-isomorphism).
\end{lemma}
\begin{proof}
The proof is by induction on the depth of $B_i$.
As a base case, assume that $B_i$ is a leaf.
Then $B_i = \{v\}$, $\mathcal{N}[B_i] = \{v\}$, and $(T, \sigma)$ with $T$ containing $v$ only is the only possible valid valued tree. Hence the leaf case is correct.
Now consider the induction step. Let $B_i \in V(B) \setminus L(B)$ and assume that the statement is correct for all descendants of $B_i$.
We prove the two directions of the statement separately.
\medskip
\noindent
($\Rightarrow$) : suppose that $(T, \sigma)$ is a valid valued restriction for $B_i$. Then there exists a $k$-leaf root $T^*$ of $G[V_i]$ such that $(T, \sigma)$ is the valued restriction of $T^*$ to $\mathcal{N}[B_i]$, and such that $r(T) = r(T^*)$.
We must argue that $(T, \sigma)$ is in $Q[B_i]$.
\medskip
\noindent
\paragraph{Introduce node.}
Suppose that $B_i$ is an introduce node, with child $B_j = B_i \setminus \{v\}$. Notice that by the properties of tree decompositions, $N(v) \cap V_i = B_i \setminus \{v\}$ (since $B_i$ is a clique).
Let $T^*_j = T^*|V_j$, noting that $V_j = V_i \setminus \{v\}$.
Moreover, $T^*_j$ is a $k$-leaf root of $G[V_j]$, since distances between leaves do not change when taking a restriction.
Let $(T_j, \sigma_j)$ be the valued restriction of
$T^*_j$ to $\mathcal{N}[B_j]$.
We would like to use induction and argue that $(T_j, \sigma_j) \in Q[B_j]$, but for that we need $r(T_j) = r(T^*_j)$, which is not immediate.
Indeed, it is possible that $r(T_j) \neq r(T^*_j)$, which happens if and only if, in $T^*_j$, some strict ancestor of $r(T_j)$ has a descendant leaf of $V_j$ outside $\mathcal{N}[B_j]$. We argue that this does not occur.
Let $L' = V_j \setminus L(T^*_j(r(T_j)))$, and assume that $L'$ is non-empty.
For $L'$ to be non-empty, we must have $v \notin L(T^*(r(T_j)))$. This is because $r(T) = r(T^*)$, and if $v$ were a descendant of $r(T_j)$ in $T^*$, then the root of $T = T^*|\mathcal{N}[B_i]$ would still be $r(T_j) \neq r(T^*)$. Thus in $T^*$, the path from $r(T_j)$ to $v$ goes ``above'' $r(T_j)$.
Also note that no member of $L'$ has a neighbor in $B_j$, as otherwise this member of $L'$ would be included in $T_j = T^*_j|\mathcal{N}[B_j]$. Moreover, members of $L'$ are not neighbors of $v$.
Let $u \in L'$ be at minimum distance from $r(T_j)$ in $T^*_j$.
Then $u$ is also at minimum distance from $r(T_j)$ in $T^*$, among all leaves in $L'$.
Let $d_{u} = dist_{T^*}(u, r(T_j))$ and $d_v = dist_{T^*}(v, r(T_j))$.
First assume that $d_u \leq d_v$.
We know that $B_j \neq \emptyset$ and that $B_i$ is a clique, implying that there is $v' \in B_j$ such that $vv' \in E(G)$ and that $dist_{T^*}(v, v') \leq k$. Since $v' \in B_j$, $v'$ descends from $r(T_j)$ and thus the path from $v$ to $v'$ passes through $r(T_j)$ in $T^*$. Then $d_u \leq d_v$ implies $dist_{T^*}(u, v') \leq k$. This is not possible since $u$ has no neighbor in $B_j$.
Suppose instead that $d_u > d_v$. Note that $u$ must have at least one neighbor $u'$ in $L(T^*(r(T_j)))$ (otherwise, since $u$ is at minimum distance from $r(T_j)$, this would imply that no member of $L'$ has a neighbor outside of $L'$, and thus contradict that $G[V_i]$ is connected). Note that $u' \notin B_j$ because $u$ has no neighbor in $B_j$. The path from $u$ to $u'$ in $T^*$ goes through $r(T_j)$, and $u'$ is at distance at most $k$ from $u$. Since $d_v < d_u$, $u'v \in E(G)$, a contradiction since all the neighbors of $v$ are in $B_j$.
We deduce that $L'$ is empty.
Hence, we can safely assume that $r(T^*_j) = r(T_j)$.
Therefore, by induction, $(T_j, \sigma_j) \in Q[B_j]$.
It remains to show that $(T_j, \sigma_j)$ satisfies all conditions of introduce nodes.
Let us argue that $T|\mathcal{N}[B_j] = T_j$.
Note that $V(T|\mathcal{N}[B_j])$ is the set of all vertices of $T^*$ on the path between two leaves in $\mathcal{N}[B_j]$, and $V(T_j)$ the set of all vertices of $T^*_j$ between two leaves in $\mathcal{N}[B_j]$. This makes it clear that $T|\mathcal{N}[B_j] = T_j$, but let us argue this for completeness.
Consider two vertices $x, y \in \mathcal{N}[B_j]$, and let $P_{xy}$ be the path of $T^*$ from $x$ to $y$.
All vertices of $P_{xy}$ must be in $T$ since $x, y \in \mathcal{N}[B_i]$, and all vertices of $P_{xy}$ are in $T|\mathcal{N}[B_j]$ since $x, y \in \mathcal{N}[B_j]$.
All vertices of $P_{xy}$ must be in $T^*_j$ since $x, y \in V_j$, and must also all be in $T_j$ since $x, y \in \mathcal{N}[B_j]$. Since this holds for every $x, y \in \mathcal{N}[B_j]$, we obtain $T|\mathcal{N}[B_j] = T_j$.
We also have the leaf-isomorphism $\mu(w) = w$ for all $w \in V(T|\mathcal{N}[B_j])$ since $T|\mathcal{N}[B_j]$ and $T_j$ share the same set of vertices of $T^*$.
We shall use $w$ and $\mu(w)$ interchangeably, since they represent the same node.
We next justify $\sigma(w) = \infty$ for every internal node $w \in V(T) \setminus V(T_j)$.
There are two possible cases.
First suppose that $r(T) \neq r(T_j)$. Recall that $T$ and $T_j$ differ only by the $v$ leaf. Thus, $r(T) \neq r(T_j)$ happens if and only if $r(T^*)$ has at least two distinct children, one leading to $v$ and the other leading to $r(T_j)$.
In this situation, $w \in V(T) \setminus V(T_j)$ must be a node on one of these two paths (excluding $r(T_j)$ and $v$).
As we have already argued, all leaves of $T^*_j$ are descendants of $r(T_j)$ in $T^*$. Therefore, no child of $w$ in $ch_{T^*}(w) \setminus ch_T(w)$ leads to a leaf of $V_i$, and in particular $w$ has no hidden descendant leaf in $V_i \setminus \mathcal{N}[B_i]$. It follows that $\sigma(w) = \infty$ represents the correct distance information for $w$.
The second case is when $r(T) = r(T_j)$.
In this case, the only difference between $T_j$ and $T$ is that there is some vertex $x \in V(T_j)$ and a path $P = (x_1, \ldots, x_l = v)$ such that $x_1 \in ch_{T}(x)$ and $x_1 \notin V(T_j)$. In other words, $T$ was obtained from $T_j$ by appending the path leading to $v$ under some node $x$. It follows that only the nodes $x_1, \ldots, x_{l-1}$ are internal nodes in $T$ but not in $T_j$.
The recurrence assumes that $\sigma(x_i) = \infty$ for each $x_i$.
Again, we must argue that in $T^*$, no such $x_i$ has a descending leaf in $L(T^*) \setminus \mathcal{N}[B_i]$, i.e. that $L(T^*(x_1)) = \{v\}$.
Assume that this is not the case.
Let $u \neq v$ be a leaf of $T^*(x_1)$ at minimum distance from $x$. This is similar to the previous situation. We have that $u \notin \mathcal{N}[B_j]$ since $x_1$ is not in $T_j$.
Let $d_{u} = dist_{T^*}(u, x)$ and $d_{v} = dist_{T^*}(v, x)$.
Assume that $d_{u} \leq d_{v}$.
Then the path from $v$ to any $v' \in B_j$ (which exists) passes through $x$, implying that $uv' \in E(G)$, a contradiction.
Assume instead that $d_u > d_v$.
There must exist $u' \in N(u)$ such that $u' \notin L(T^*(x_1))$ (otherwise, no leaf in $L(T^*(x_1))$ has a neighbor in $G$ outside of $L(T^*(x_1))$, and $G[V_i]$ is not connected). Note that $u' \notin B_j$ because $u$ has no neighbor in $B_j$. The path from $u$ to $u'$ goes through $x$, implying that $vu' \in E(G)$, a contradiction. Thus in $T^*$, each $x_i$ only has $v$ as a leaf descendant. This justifies putting $\sigma(x_i) = \infty$ for each $x_i$.
Let $w \in V(T|\mathcal{N}[B_j])$ be an internal node that is in both $T$ and $T_j$.
Let $w' \in ch_{T^*}(w) \setminus ch_{T}(w)$. Then $w'$ only leads to leaves in $V_i \setminus \mathcal{N}[B_i] = V_j \setminus \mathcal{N}[B_j]$ and must also be in $ch_{T^*_j}(w) \setminus ch_{T_j}(w)$.
Conversely, let $w'' \in ch_{T^*_j}(w) \setminus ch_{T_j}(w)$. Then in $T^*_j$, $w''$ only leads to leaves in $V_j \setminus \mathcal{N}[B_j]$.
We have a problem if $w'' \in ch_{T}(w)$. Again, since $T$ and $T_j$ differ only by $v$, this is only possible if $w''$ has $v$ as a descendant. But as we argued in the previous paragraph, $w''$ would not have descendants in $V_j \setminus \mathcal{N}[B_j]$ (here, $w''$ plays the same role as $x_1$). Therefore, we may assume that $w'' \in ch_{T^*}(w) \setminus ch_{T}(w)$.
Since the set of hidden children of $w$ is the same in both $T$ and $T_j$, we deduce that $\sigma(w) = \sigma_j(w)$ represents the correct distance information.
Also, since $T = T^*|\mathcal{N}[B_i]$ and $T^*$ is a $k$-leaf root of $G$, it is clear that for all $w \in L(T) \setminus \{v\}$, $vw \in E(G) \Leftrightarrow dist_T(v, w) \leq k$. Moreover, since we have argued that $\sigma(w)$ is correct for each $w \in V(T) \setminus L(T)$, we must have $dist_T(v, w) + \sigma(w) > k$, as otherwise $v$ would have a neighbor outside of $\mathcal{N}[B_i]$. This agrees with the recurrence, and it will therefore put $(T, \sigma)$ in $Q[B_i]$.
\medskip
\noindent
\paragraph{Forget node.}
Suppose that $B_i$ is a forget node, with child $B_j = B_i \cup \{v\}$. Note that $T^*$ is a $k$-leaf root of $G[V_j]$.
Let $(T_j, \sigma_j)$ be the valued restriction of $T^*$ to $\mathcal{N}[B_j]$.
Because $\mathcal{N}[B_i] \subseteq \mathcal{N}[B_j]$, the root of $T_j$ satisfies $r(T_j) = r(T) = r(T^*)$.
Then by induction, $(T_j, \sigma_j) \in Q[B_j]$.
Note that $r(T)$ must have two distinct children with descendants in $\mathcal{N}[B_i]$, and thus the same holds for $T_j$, as in the recurrence.
Because $T = T^*|\mathcal{N}[B_i]$ and $T_j = T^*|\mathcal{N}[B_j]$, it is not hard to see that $T = T_j|\mathcal{N}[B_i]$.
The leaf-isomorphism is $\mu(w) = w$ for all $w \in V(T)$ because $T$ and $T_j|\mathcal{N}[B_i]$ use the same set of vertices.
Now let $w \in V(T) \setminus L(T)$. Let $L^* = \bigcup_{c \in ch_{T^*}(w) \setminus ch_T(w)} L(T^*(c))$.
To find the minimum distance from $w$ to a leaf $u \in L^*$, we must first consider all leaves of $L(T_j)$ that descend from a child $c \in ch_{T_j}(w) \setminus ch_T(w)$, in which case $\mu^{-1}(c) = \emptyset$. The minimum distance to such a leaf is given by $\min_{l \in \hat{L}} dist_{T_j}(\mu(w), l)$ in the recurrence. We must also consider all leaves of $V_j \setminus L(T_j)$ that descend from a child $c \in ch_{T^*}(w) \setminus ch_{T}(w)$.
Consider such a leaf $u$ at minimum distance from $w$, and let $w'$ be the lowest ancestor of $u$ that is in $V(T_j)$ (which exists, since $w$ is an ancestor of $u$).
Then either $w' = w$ if $u$ descends from some child $c \in ch_{T^*}(w) \setminus ch_{T_j}(w)$, in which case the distance is $\sigma_j(w)$, or $w'$ is a descendant of some child of $w$ in $ch_{T_j}(w) \setminus ch_T(w)$, in which case this distance is $\sigma_j(w') + dist_{T_j}(w, w')$.
In any case, the distance to such a $u$ is given by the expression $\min_{x \in \hat{I}} \big(\sigma_j(x) + dist_{T_j}(\mu(w), x)\big)$ in the recurrence.
Since the recurrence takes the minimum over all possibilities, it will take the minimum possible distance, and thus $\sigma(w)$ describes the same distance as in the recurrence.
\medskip
\noindent
\paragraph{Join node.}
Suppose that $B_i$ is a join node, with children $B_l = B_r = B_i$.
In this part of the proof, we will use $B_i, B_l$ and $B_r$ interchangeably, but use $B_l$ and $B_r$ when we want to emphasize that we are dealing with the ``left'' or ``right'' side of the decomposition.
Let $T^*_l = T^*|V_l$ and $T^*_r = T^*|V_r$.
Let $(T_l, \sigma_l)$ be the valued restriction of $T^*_l$ to $\mathcal{N}[B_l]$.
Similarly, let $(T_r, \sigma_r)$ be the valued restriction of $T^*_r$ to $\mathcal{N}[B_r]$.
As in the introduce case, we would like to argue that $r(T_l) = r(T^*_l)$. Assume that this is not the case, and that $r(T_l) \neq r(T^*_l)$.
Then $L(T^*(r(T^*_l))) \setminus L(T^*(r(T_l)))$ is non-empty. Let $u \in V_l$ be one of those leaves, and choose $u$ that has minimum distance to $r(T_l)$. Notice that $u \notin \mathcal{N}[B_l]$, and thus $u \in V_l \setminus \mathcal{N}[B_l]$. Moreover, since $r(T^*) = r(T)$, $r(T^*)$ has at least two distinct children with a descendant in $\mathcal{N}[B_i]$. Let $w$ be such a child, chosen so that $w$ does not have $r(T_l)$ as a descendant. Let $v \in L(T^*(w)) \cap \mathcal{N}[B_i]$. Note that $v \notin \mathcal{N}[B_l]$ by the choice of $w$. Thus, $v \in \mathcal{N}[B_i] \setminus \mathcal{N}[B_l] = N(B_r) \cap V_r$. In particular, $v \in V_r \setminus B_i$, and
it is not hard to see that by the properties of tree decompositions, this implies $u \neq v$.
Now, let $d_v = dist_{T^*}(v, r(T_l))$ and $d_u = dist_{T^*}(u, r(T_l))$. Assume that $d_u \leq d_v$. We know that $v$ has a neighbor $v'$ in $B_i$, and that all of $B_i$ descend from $r(T_l)$ in $T^*$. Thus the path from $v$ to $v'$ passes through $r(T_l)$. Since $dist_{T^*}(v, v') \leq k$, the assumption $d_u \leq d_v$ means that $dist_{T^*}(u, v') \leq k$, and thus $uv' \in E(G)$. This is a contradiction since $u \notin \mathcal{N}[B_i]$.
Assume that $d_u > d_v$. One can see that $u$ must have a neighbor $u'$ in $L(T^*(r(T_l)))$ (if not, by the choice of $u$, all the leaves in $L(T^*(r(T^*_l))) \setminus L(T^*(r(T_l)))$ have only neighbors in $L(T^*(r(T^*_l))) \setminus L(T^*(r(T_l)))$, contradicting that $G[V_i]$ is connected). Recall that $u \in V_l \setminus \mathcal{N}[B_l]$, and so $u'$ must be in $V_l \setminus B_l$. Moreover, $d_u > d_v$ implies that $vu' \in E(G)$. But $v \in V_r \setminus B_i$ and $u' \in V_l \setminus B_l$, which is not allowed by the properties of tree decompositions.
We deduce that $r(T_l) = r(T^*_l)$.
By a symmetric argument, $r(T_r) = r(T^*_r)$.
By induction, $(T_l, \sigma_l) \in Q[B_l]$ and $(T_r, \sigma_r) \in Q[B_r]$.
Since $T = T^*|\mathcal{N}[B_i]$ and $\mathcal{N}[B_l] \subseteq \mathcal{N}[B_i]$,
it is not hard to see that $T|\mathcal{N}[B_l] = T_l$, with the leaf-isomorphism $\mu_l(w) = w$ for $w \in V(T|\mathcal{N}[B_l])$.
Likewise, $T|\mathcal{N}[B_r] = T_r$, with the leaf-isomorphism $\mu_r(w) = w$ for $w \in V(T|\mathcal{N}[B_r])$. We shall not distinguish $w$ and $\mu_l(w), \mu_r(w)$ since they refer to the same node.
Let $w \in V(T) \setminus L(T)$. Note that $w$ is in at least one of $T|\mathcal{N}[B_l]$ or $T|\mathcal{N}[B_r]$ since $w$ has a descendant in at least one of $\mathcal{N}[B_l]$ or $\mathcal{N}[B_r]$.
If $w$ is in both $T|\mathcal{N}[B_l]$ and $T|\mathcal{N}[B_r]$, then $\sigma(w) = \min(\sigma_l(w), \sigma_r(w))$ holds because all leaves of $V_i \setminus \mathcal{N}[B_i]$ that have $w$ as their first ancestor in $T$ are either leaves of $V_l \setminus \mathcal{N}[B_l]$ or $V_r \setminus \mathcal{N}[B_r]$, and $\sigma_l(w)$, $\sigma_r(w)$ cover both cases.
Suppose that $w$ is in $T|\mathcal{N}[B_l]$ but not in $T|\mathcal{N}[B_r]$.
If all leaves in $T^*(w)$ are in $V_l$, having $\sigma(w) = \sigma_l(w)$ as in the recurrence is correct.
Suppose that $L(T^*(w)) \setminus V_l$ is non-empty, and choose $u$ among those of minimum distance to $w$.
Note that $u \notin \mathcal{N}[B_r]$, as otherwise $w$ would be in $T|\mathcal{N}[B_r]$.
Thus $u \in V_r \setminus \mathcal{N}[B_r]$.
We repeat the same type of argument one last time (this is a tedious endeavor, but unfortunately, the cases appear to be not similar enough to handle them with a reusable claim).
Let $v \in L(T(w))$. We note that $v \in \mathcal{N}[B_l] \setminus B_l = N(B_l) \cap V_l$, since otherwise $w$ would be in $T|\mathcal{N}[B_r]$. In particular, $uv \notin E(G)$, by the properties of tree decompositions.
Let $d_{u} = dist_{T^*}(u, w)$ and $d_{v} = dist_{T^*}(v, w)$.
Assume that $d_{u} \leq d_{v}$. Let $v' \in N(v) \cap B_l$, which we know exists. Note that $v' \in B_r$ as well. Thus, the path from $v$ to $v'$ in $T^*$ must go through $w$, since otherwise $T|\mathcal{N}[B_r]$ would contain $w$. Then, $d_{u} \leq d_{v}$ implies that $dist_{T^*}(u, v') \leq k$, a contradiction since $u$ has no neighbor in $B_l$. Assume instead that $d_{u} > d_{v}$. By the choice of $u$, there must exist $u' \in V_r \setminus B_r$ such that $uu' \in E(G)$, and such that $u' \notin L(T^*(w))$ (otherwise, $u$ cannot be connected to the rest of the graph). This implies that $vu' \in E(G)$. This is not possible by the properties of tree decompositions, since $v \in N(B_l) \cap V_l \subseteq V_l \setminus B_i$, and $u' \in V_r \setminus B_i$.
We deduce that $w$ has no descending leaf in $V_r \setminus \mathcal{N}[B_r]$, and that $\sigma(w) = \sigma_l(w)$ is correct.
Suppose that $w$ is in $T|\mathcal{N}[B_r]$ but not in $T|\mathcal{N}[B_l]$.
A symmetric argument justifies that $\sigma(w) = \sigma_r(w)$.
The fact that for $u, v \in L(T)$, $uv \in E(G) \Leftrightarrow dist_T(u, v) \leq k$ follows from the fact that $T = T^*|\mathcal{N}[B_i]$ and that $T^*$ is a $k$-leaf root of $G[V_i]$.
Now let $u \in \mathcal{N}[B_l] \setminus B_i$ and take an internal node $w \in V(T|\mathcal{N}[B_r])$. We note that $u \in V_l \setminus B_i$. Moreover in $T^*(w)$, there is a leaf $v$ of $V_r \setminus \mathcal{N}[B_r]$ at distance $\sigma_r(w)$ from $w$.
It follows that $uv \notin E(G)$, and thus that $dist_{T^*}(u, w) + \sigma_r(w) > k$. This implies that $dist_T(u, w) + \sigma_r(w) > k$.
A symmetric argument justifies the same property for $u \in \mathcal{N}[B_r] \setminus B_i$.
Finally, let $w_l \in V(T|\mathcal{N}[B_l])$ and $w_r \in V(T|\mathcal{N}[B_r])$.
Then any leaf in $L(T^*(w_l)) \setminus \mathcal{N}[B_i]$ must be at distance more than $k$ from any leaf in $L(T^*(w_r)) \setminus \mathcal{N}[B_i]$.
This justifies $\sigma_l(w_l) + dist_T(w_l, w_r) + \sigma_r(w_r) > k$.
\medskip
\noindent
($\Leftarrow$)
Suppose that $(T, \sigma) \in Q[B_i]$. We must argue that there is a $k$-leaf root $T^*$ of $G[V_i]$, with $r(T^*) = r(T)$, such that $(T, \sigma)$ is the valued restriction of $T^*$ to $\mathcal{N}[B_i]$.
\medskip
\noindent
\paragraph{Introduce node.}
Let $B_i$ be an introduce node with child $B_j = B_i \setminus \{v\}$.
Then there exists $(T_j, \sigma_j) \in Q[B_j]$ that satisfies the properties of the recurrence. Let $\mu$ be the leaf-isomorphism from $T|\mathcal{N}[B_j]$ to $T_j$.
By induction, there is a $k$-leaf root $T^*_j$ of $G[V_j] = G[V_i \setminus \{v\}]$ such that $(T_j, \sigma_j)$ is the valued restriction of $T^*_j$ to $\mathcal{N}[B_j]$, and such that $r(T^*_j) = r(T_j)$.
We can construct a $k$-leaf root $T^*$ of $G[V_i]$ as follows.
For each $w \in V(T|\mathcal{N}[B_j])$, let $U(w) = ch_{T^*_j}(\mu(w)) \setminus ch_{T_j}(\mu(w))$.
Then for each $u \in U(w)$, add the $T^*_j(u)$ subtree as a child of $w$ in $T$. Because $r(T^*_j) = r(T_j)$, every leaf in $V_j \setminus \mathcal{N}[B_j]$ gets inserted. In essence, we simply add the subtrees of $T^*_j$ that $T$ is missing at the locations indicated by $T_j$.
We claim that $T^*$ meets all the requirements for $(T, \sigma)$ to be valid. We note that $r(T^*) = r(T)$ since in our construction, we started with $T$, and only added subtrees under some of its internal nodes (thus, we have not accidentally ``raised'' the root in $T^*$).
Let us argue that $(T, \sigma)$ is the valued restriction of $T^*$ to $\mathcal{N}[B_i]$.
Notice that $T = T^*|\mathcal{N}[B_i]$, since $L(T) = \mathcal{N}[B_i]$, and to obtain $T^*$ we took $T$ and only added extra subtrees under its existing nodes.
Moreover, it is not hard to see that $T^*|V_j \simeq_L T^*_j$. This is because we started with $T$ such that $T|\mathcal{N}[B_j] \simeq_L T_j$, and we added the missing subtrees of $T^*_j$ on the vertices of $T|\mathcal{N}[B_j]$ at the appropriate locations.
Consider $w \in V(T) \setminus L(T)$ and let us argue that $\sigma(w)$ represents the correct distance to a hidden leaf under $w$.
First assume that $w \notin V(T|\mathcal{N}[B_j])$. Then one can see from the construction that we never insert subtrees under $w$, and hence $\sigma(w) = \infty$ represents the correct distance.
Assume instead that $w \in V(T|\mathcal{N}[B_j])$.
Then our insertion procedure only adds the leaves descending from children in $ch_{T^*_j}(\mu(w)) \setminus ch_{T_j}(\mu(w))$ under $w$. The $\sigma_j(\mu(w))$ quantity gives the minimum distance from $\mu(w)$ to such an inserted leaf and in this case, the recurrence specifies that $\sigma(w) = \sigma_j(\mu(w))$, which is correct.
Let us now argue that $T^*$ is a $k$-leaf root of $G[V_i]$. Because $T^*|V_j \simeq_L T^*_j$, for each $x, y \in V_j$, $xy \in E(G) \Leftrightarrow dist_{T^*}(x, y) \leq k$.
As for $v$, we know by the recurrence that for each $w \in \mathcal{N}[B_i] \setminus \{v\}$, $vw \in E(G) \Leftrightarrow dist_T(v, w) = dist_{T^*}(v, w) \leq k$.
For any $w \in V_i \setminus \mathcal{N}[B_i]$, let $w'$ be the lowest ancestor of $w$ in $T^*$ such that $w' \in V(T)$, which exists since $r(T) = r(T^*)$. The recurrence ensures that $dist_{T}(v, w') + \sigma(w') > k$, and thus that $dist_{T^*}(v, w) = dist_{T}(v, w') + dist_{T^*}(w', w) > k$. It follows that $T^*$ is a $k$-leaf root of $G[V_i]$.
\medskip
\noindent
\paragraph{Forget node.}
Let $B_i$ be a forget node with child $B_j = B_i \cup \{v\}$.
Then there exists $(T_j, \sigma_j) \in Q[B_j]$ that satisfies the properties of the recurrence. Let $\mu$ be the leaf-isomorphism from $T$ to $T_j|\mathcal{N}[B_i]$.
By induction, there is a $k$-leaf root $T^*_j$ of $G[V_j]$
such that $(T_j, \sigma_j)$ is the valued restriction of $T^*_j$ to $\mathcal{N}[B_j]$, and whose root is $r(T^*_j) = r(T_j)$.
Since $V_j = V_i$, $T^*_j$ is also a $k$-leaf root of $G[V_i]$.
Let us note that since the recurrence requires $r(T_j)$ to have two children with descending leaves in $\mathcal{N}[B_i]$, we have $\mu(r(T)) = r(T_j) = r(T_j^*)$.
We want to argue that $(T, \sigma)$ is the valued restriction of $T^*_j$ to $\mathcal{N}[B_i]$.
Strictly speaking, $V(T)$ is not a subset of $V(T^*_j)$, so we cannot say this directly (recall that valued restrictions require the trees to share the same set of nodes). The simplest solution is to construct a new $k$-leaf root $T^*$ ``around'' $T$.
For each $w \in V(T)$, let $C(w) = \{c \in ch_{T_j}(\mu(w)) : \mu^{-1}(c) = \emptyset\}$. Moreover, let $C^*(w) = ch_{T^*_j}(\mu(w)) \setminus ch_{T_j}(\mu(w))$.
Then for each $u \in C(w) \cup C^*(w)$, add the $T^*_j(u)$ subtree as a child of $w$ in $T$.
Because $\mu(r(T)) = r(T_j) = r(T^*_j)$, we know that every leaf of $V_j \setminus \mathcal{N}[B_i]$ is inserted by this procedure.
Moreover, it is not hard to see that $T^* \simeq_L T^*_j$, since we add all the missing subtrees to $T$ at the appropriate locations specified by $T_j$. Hence, $T^*$ is a $k$-leaf root of $G[V_i]$.
It remains to argue that $(T, \sigma)$ is the valued restriction of $T^*$ to $\mathcal{N}[B_i]$.
Since $T^*$ is obtained from $T$ by inserting child subtrees under the nodes of $T$, we have $T = T^*|\mathcal{N}[B_i]$ and $r(T) = r(T^*)$.
Now let $w \in V(T) \setminus L(T)$. Then in $T^*$, consider the minimum distance from $w$ to a leaf $u \in V_i \setminus \mathcal{N}[B_i]$ that descends from a node in $ch_{T^*}(w) \setminus ch_T(w)$.
Because $T^* \simeq_L T^*_j$,
this is identical to the $\Rightarrow$ direction of forget nodes.
That is, the $u$ leaf is either in $L(T_j) \setminus L(T)$, in which case the distance from $w$ to $u$ is accounted for by $\min_{l \in \hat{L}} dist_{T_j}(\mu(w), l)$ as in the recurrence, or this leaf is not in $T_j$.
In the latter case, $u$ has some $w'$ as its lowest ancestor in $T_j$, and the distance to $u$ is given by $dist_{T_j}(\mu(w), w') + \sigma_j(w')$. The value of $\sigma(w)$ should be the minimum of all possibilities, as in the recurrence.
Thus $(T, \sigma)$ is indeed the valued restriction of $T^*$ to $\mathcal{N}[B_i]$.
\medskip
\noindent
\paragraph{Join node.}
Let $B_i$ be a join node with children $B_l, B_r$, with $B_i = B_l = B_r$.
Then there exist $(T_l, \sigma_l) \in Q[B_l]$ and $(T_r, \sigma_r) \in Q[B_r]$ that satisfy the properties of the recurrence. Let $\mu_l$ and $\mu_r$ be the leaf-isomorphisms from $T|\mathcal{N}[B_l]$ to $T_l$, and from $T|\mathcal{N}[B_r]$ to $T_r$, respectively.
By induction, there is a $k$-leaf root $T^*_l$ of $G[V_l]$
such that $(T_l, \sigma_l)$ is the valued restriction of $T^*_l$ to $\mathcal{N}[B_l]$, with $r(T^*_l) = r(T_l)$.
Likewise, there is a $k$-leaf root $T^*_r$ of $G[V_r]$
such that $(T_r, \sigma_r)$ is the valued restriction of $T^*_r$ to $\mathcal{N}[B_r]$, with $r(T^*_r) = r(T_r)$.
We can construct $T^*$ as follows.
For each $w \in V(T|\mathcal{N}[B_l])$, let $U_l(w) = ch_{T^*_l}(\mu_l(w)) \setminus ch_{T_l}(\mu_l(w))$.
Then for each $u \in U_l(w)$, add the $T^*_l(u)$ subtree as a child of $w$ in $T$. Because $r(T^*_l) = r(T_l)$, every leaf in $V_l \setminus \mathcal{N}[B_l]$ gets inserted.
Similarly, for each $w \in V(T|\mathcal{N}[B_r])$, let $U_r(w) = ch_{T^*_r}(\mu_r(w)) \setminus ch_{T_r}(\mu_r(w))$.
Then for each $u \in U_r(w)$, add the $T^*_r(u)$ subtree as a child of $w$ in $T$. Again, every leaf in $V_r \setminus \mathcal{N}[B_r]$ gets inserted.
We claim that $T^*$ meets all the requirements for $(T, \sigma)$ to be valid. We note that $r(T^*) = r(T)$ since in our construction, we started with $T$, and only added subtrees under some of its internal nodes.
Let us argue that $(T, \sigma)$ is the valued restriction of $T^*$ to $\mathcal{N}[B_i]$.
Notice that $T = T^*|\mathcal{N}[B_i]$, since $L(T) = \mathcal{N}[B_i]$, and to obtain $T^*$ we took $T$ and only added extra subtrees under its existing nodes.
Moreover, it is not hard to see that $T^*|V_l \simeq_L T^*_l$. This is because we started with $T$ such that $T|\mathcal{N}[B_l] \simeq_L T_l$, and we added the missing subtrees of $T^*_l$ on the vertices of $T|\mathcal{N}[B_l]$ at the appropriate locations.
Similarly, $T^*|V_r \simeq_L T^*_r$.
Consider $w \in V(T) \setminus L(T)$ and let us argue that $\sigma(w)$ represents the correct distance to a hidden leaf under $w$.
If $w \in V(T|\mathcal{N}[B_l])$ but is not in $V(T|\mathcal{N}[B_r])$,
then our insertion procedure only adds the leaves descending from children in $ch_{T^*_l}(\mu_l(w)) \setminus ch_{T_l}(\mu_l(w))$ under $w$. The $\sigma_l(\mu_l(w))$ quantity gives the minimum distance from $\mu_l(w)$ to such an inserted leaf and in this case, the recurrence specifies that $\sigma(w) = \sigma_l(\mu_l(w))$, which is correct.
The same argument applies if $w \in V(T|\mathcal{N}[B_r])$ but not in $V(T|\mathcal{N}[B_l])$.
If $w \in V(T|\mathcal{N}[B_l]) \cap V(T|\mathcal{N}[B_r])$, we have inserted both leaves under $\mu_l(w)$ from $T^*_l$ and leaves under $\mu_r(w)$ from $T^*_r$. In this case, $\sigma(w) = \min(\sigma_l(\mu_l(w)), \sigma_r(\mu_r(w)))$ correctly represents the minimum distance from $w$ to such a leaf.
It follows that $(T, \sigma)$ is the valued restriction of $T^*$ to $\mathcal{N}[B_i]$.
Let us now argue that $T^*$ is a $k$-leaf root of $G[V_i]$. Because $T^*|V_l \simeq_L T^*_l$, for each $u, v \in V_l$, $uv \in E(G) \Leftrightarrow dist_{T^*}(u, v) \leq k$.
Similarly, $T^*|V_r \simeq_L T^*_r$ implies that for each $u, v \in V_r$, $uv \in E(G) \Leftrightarrow dist_{T^*}(u, v) \leq k$.
Now, consider $u \in \mathcal{N}[B_l]$ and $v \in \mathcal{N}[B_r]$. The fact that $uv \in E(G) \Leftrightarrow dist_{T^*}(u, v) \leq k$ is explicitly stated in the recurrence, using the fact that $T^*|\mathcal{N}[B_i] = T$.
Next, consider $u \in \mathcal{N}[B_l]$ and $v \in V_r \setminus \mathcal{N}[B_r]$.
Then $uv \notin E(G)$, by the properties of tree decompositions. Let $w$ be the lowest ancestor of $v$ in $T^*$ such that $w \in V(T|\mathcal{N}[B_r])$ (which must exist, by the construction of $T^*$, since the subtree containing $v$ was appended under a node of $T|\mathcal{N}[B_r]$ at some point). In $T^*$, the path from $u$ to $v$ must pass through $w$.
We have $dist_{T^*}(u, v) = dist_T(u, w) + dist_{T^*}(w, v) \geq dist_T(u, w) + \sigma_r(w)$, which is strictly greater than $k$, by the properties of the recurrence.
The symmetric argument holds for $u \in \mathcal{N}[B_r]$ and $v \in V_l \setminus \mathcal{N}[B_l]$.
Finally, consider $u \in V_l \setminus \mathcal{N}[B_i]$ and $v \in V_r \setminus \mathcal{N}[B_i]$. Then $uv \notin E(G)$. Let $w_u$ (resp. $w_v$) be the lowest ancestor of $u$ (resp. $v$) in $T^*$ such that $w_u \in V(T|\mathcal{N}[B_l])$ (resp. $w_v \in V(T|\mathcal{N}[B_r])$).
Then $dist_{T^*}(u, v) = dist_{T^*}(u, w_u) + dist_{T^*}(w_u, w_v) + dist_{T^*}(w_v, v) \geq \sigma_l(w_u) + dist_T(w_u, w_v) + \sigma_r(w_v)$, which is strictly greater than $k$, by the properties of the recurrence.
We have handled every possible pair of leaves and deduce that $T^*$ is indeed a $k$-leaf root of $G[V_i]$.
This concludes the proof.
\end{proof}
We conclude this section with the proof of the main lemma.
\begin{proof}[Proof of Lemma~\ref{lem:enumerate}]
We first need to show that the dynamic programming procedure above can be used to enumerate $\mathbb{T} = \{\mathcal{T}_1, \ldots, \mathcal{T}_l\}$.
Recall that $\mathcal{T}_i \in \mathbb{T}$ if and only if there is a $k$-leaf root $T^*$ of $G$ such that the root of $T^*$ is the parent of $z$, and $\mathcal{T}_i = (T_i, \sigma_i)$ is the valued restriction of $T^*$ to $N[z]$.
Let $B_z = \{z\}$ be the root of the tree decomposition from above.
Let $\mathcal{T}_i = (T_i, \sigma_i) \in \mathbb{T}$ and let $T^*$ be a corresponding $k$-leaf root. Then $L(T_i) = N[z] = N[B_z] = N[B_z] \cap V_z$ (since $V_z = V(G)$). Moreover, $(T_i, \sigma_i)$ is the valued restriction of $T^* = T^*[V_z]$ to $N[z] = N[B_z] \cap V_z$, and $r(T_i) = r(T^*)$ since both roots must be the parent of $z$.
Thus $(T_i, \sigma_i)$ is valid for $B_z$, and
it follows from Lemma~\ref{lem:q-is-correct} that $(T_i, \sigma_i) \in Q[B_z]$.
Now let $(T, \sigma) \in Q[B_z]$ such that $r(T)$ is the parent of $z$.
Then by Lemma~\ref{lem:q-is-correct}, $(T, \sigma)$ is valid for $B_z$ and there is a $k$-leaf root $T^*$ of $G[V_z] = G$ such that $(T, \sigma)$ is the valued restriction of $T^*$ to $N[B_z] \cap V_z = N_G[z]$. Moreover, $r(T^*) = r(T)$ is the parent of $z$.
Therefore, $(T, \sigma)$ must belong to $\mathbb{T}$.
We have thus shown that by enumerating all the valued trees in the computed $Q[B_z]$ and keeping only those whose root is the parent of $z$, we reconstruct exactly $\mathbb{T}$.
Let us discuss the complexity. Note that $B$ has $O(n)$ nodes.
For each bag $B_i \in V(B)$, by Lemma~\ref{lem:valid-bounded}, we must enumerate $O(f(k))$ possible valued trees and test each of them for membership in $Q[B_i]$, where $f(k) = d^{4kd^{4k}} \cdot (k + 2)^{d^{4k}}$.
Join nodes take the longest to test, since they require checking every combination of valued trees in $Q[B_l]$ and $Q[B_r]$, which amounts to $f(k)^2$ tests. It is not hard to see that for each combination, checking whether the recurrence holds can be done in time $O(d^{4k} \cdot d^{4k})$ (the longest condition to check is testing each pair $w_l, w_r$).
Therefore, each $B_i$ can be computed in time $O(d^{8k} \cdot f(k)^3)$.
Since there are $O(n)$ such $B_i$ bags, the complexity is $O(n \cdot d^{8k} \cdot f(k)^3)$.
\end{proof}
\section{Putting it all together}
The results accumulated above lead to an immediate algorithm.
First, we check whether $G$ admits a $k$-leaf root of arity at most $d^k$, where $d = 3|S(k, 3k)|2^{|S(k, 3k)|}$. This can only happen if $G$ has maximum degree at most $d^k$, in which case we can use the algorithm of Eppstein and Havvaei~\cite{eppstein2020parameterized} (or even our algorithm from Section 4 would work).
If there is no such $k$-leaf root but $G$ is a $k$-leaf power, then by Lemma~\ref{lem:hom-struct} we must be able to find a homogeneous similar structure $\S$ of size $3|S(k, 3k)|$.
To find $\S$, we begin by searching for $\mathcal{C}$. We brute-force every collection of $3|S(k, 3k)|$ disjoint subsets of at most $d^k$ vertices of $G$, which is the only reason our algorithm takes time $O(n^{f(k)})$ instead of $O(f(k) n^c)$.
Assuming a suitable $\mathcal{C}$ has been identified, we look at the connected components obtained after removing the $C_i$'s. At this point, it is easy to verify that the properties of similar structures hold and to find $z$ and the $Y_i$'s.
As for the layering functions, we brute-force them all, but since the size of the $C_i$'s is bounded, this adds little complexity compared to the gargantuan time taken to enumerate the possible $\mathcal{C}$'s.
Once a suitable set of layering functions is found, we use the algorithm from Section 4 to compute all the $accept(\S, C_i)$ sets, and it remains to check that they are equal.
If so, we have found a redundant substructure of $G$. By Theorem~\ref{thm:iffc1}, we may remove $C_1 \cup Y_1$ from $G$ to obtain an equivalent instance. We then repeat with $G - (C_1 \cup Y_1)$. The algorithm ends when it either reaches a graph of maximum degree at most $d^k$, which is ``easy'' to verify, or when no homogeneous similar structure can be found. In the latter case, we know by Lemma~\ref{lem:hom-struct} that $G$ cannot be a $k$-leaf power.
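The dominant enumeration step above can be made concrete. The following Python sketch (our own illustration; the helper name and representation are not from the paper) generates every ordered collection of $l$ pairwise-disjoint, non-empty subsets of at most a given size:

```python
from itertools import combinations

def disjoint_collections(vertices, l, max_size):
    """Yield every ordered collection of l pairwise-disjoint, non-empty
    subsets of `vertices`, each of size at most `max_size`."""
    def rec(chosen, remaining):
        if len(chosen) == l:
            yield list(chosen)
            return
        for size in range(1, max_size + 1):
            for subset in combinations(sorted(remaining), size):
                yield from rec(chosen + [set(subset)], remaining - set(subset))
    yield from rec([], set(vertices))
```

Since each of the $l$ choices ranges over the $O(n^{d^k+1})$ subsets of size at most $d^k$, the number of collections produced is $O(n^{(d^k+1)l})$, matching the count in the complexity analysis.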
\begin{algorithm2e}[h]
\SetAlgoLined
\SetKwProg{Fn}{Function}{}{end}
\Fn{isLeafPower($G, k$)}
{
$d \gets 3|S(k, 3k)|2^{|S(k, 3k)|}$\;
\uIf{$G$ has maximum degree at most $d^k$}
{
Check if $G$ is a $k$-leaf power and return the result\; \label{line:bounded-degree}
}
\ForEach{collection $\mathcal{C} = \{C_1, \ldots, C_l\}$ of disjoint subsets of $V(G)$, with $l = 3|S(k, 3k)|$ and each $|C_i| \leq d^k$}
{
Let $G' = G - \bigcup_{i \in [l]} C_i$\;
Let $X = \{X_1, \ldots, X_t\}$ be the connected components of $G'$\;
Let $z \in V(G')$ such that $\bigcup_{i \in [l]} C_i \subseteq N_G(z)$\; \label{line:the-z}
\uIf{$z$ does not exist}
{
continue to the next $\mathcal{C}$\;
}
Let $X_z \in X$ such that $z \in X_z$\; \label{line:ccs1}
\uIf{some $X_j \in X \setminus \{X_z\}$ has neighbors in two distinct $C_i, C_j$}
{
continue to the next $\mathcal{C}$\;
}
For $i \in [l]$, let $Y_i$ be the union of every $X_j \in X \setminus \{X_z\}$ such that $N_G(X_j) \subseteq C_i$\; \label{line:ccs2}
\uIf{$\exists i \in [l], G[C_i \cup Y_i \cup \{z\}]$ has maximum degree above $d^k$ \label{line:yi-size}}
{
continue to the next $\mathcal{C}$\;
}
\ForEach{set of layering functions $\L = \{\l_1, \ldots, \l_l\}$}
{
\uIf{$\S = (\mathcal{C}, \mathcal{Y} = \{Y_1, \ldots, Y_l\}, z, \L)$ is a similar structure}
{
\ForEach{$i \in [l]$}
{
Compute $accept(\S, C_i)$\;
}
\uIf{all the $accept(\S, C_i)$ are equal and non-empty}
{
return $isLeafPower(G - (C_1 \cup Y_1), k)$ \label{line:homreccall}\;
}
}
}
}
return ``Not a $k$-leaf power"\;
}
\caption{Deciding if a graph is a $k$-leaf power.}
\label{alg:findhomsim}
\end{algorithm2e}
\begin{theorem}
Let $k$ be a fixed positive integer. Then Algorithm~\ref{alg:findhomsim} correctly decides whether a graph $G$ is a $k$-leaf power, and runs in time $O(n^{(d^k + 1)3|S(k, 3k)| + 6})$, where $d = 3|S(k, 3k)| \cdot 2^{|S(k, 3k)|}$.
\end{theorem}
\begin{proof}
We argue correctness and complexity separately.
\noindent
\emph{Correctness.}
Assume that $G$ admits a $k$-leaf root of arity at most $d$. Then it will be found on line~\ref{line:bounded-degree}, by Lemma~\ref{lem:prelim:bounded-arity}.
Otherwise, if $G$ is a $k$-leaf power, all its $k$-leaf roots have arity at least $d + 1$. By Lemma~\ref{lem:hom-struct}, $G$ admits a homogeneous similar structure $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$ with $|\mathcal{C}| = 3|S(k, 3k)|$, with each $|C_i| \leq d^k$ and $G[C_i \cup Y_i \cup \{z\}]$ having maximum degree $d^k$ or less.
We show that the algorithm finds such a structure, if one exists.
By brute-force, the main for loop will find a $\mathcal{C}$ that belongs to a desired homogeneous similar structure $\S = (\mathcal{C}, \mathcal{Y}, z, \L)$.
The $z$ vertex described on line~\ref{line:the-z} exists, by Property~\ref{cut:znbrhood} of similar structures.
By Property~\ref{cut:ccs}, only the connected component $X_z$ of $G'$ that contains $z$ can have neighbors in more than one $C_i$, and all the others have neighbors in exactly one $C_i$. Thus lines~\ref{line:ccs1}-\ref{line:ccs2} correctly build the $Y_i$ subsets. Moreover, by the properties described in Lemma~\ref{lem:hom-struct}, each $G[C_i \cup Y_i \cup \{z\}]$ must have maximum degree at most $d^k$, and thus we will not enter the \emph{if} on line~\ref{line:yi-size}.
After that, since we brute-force every possible set of layering functions, we will eventually find the correct $\L$ for $\S$.
We then explicitly check whether $\S$ is a similar structure and compute all the $accept$ sets to verify homogeneity. Assuming that such a $\S$ exists, it follows that it will be found, and that we will eventually reach line~\ref{line:homreccall}.
We can also argue the converse, i.e. that when line~\ref{line:homreccall} is reached, $\S$ is homogeneous and satisfies all the requirements of Lemma~\ref{lem:hom-struct}. When this line is reached, $\S$ is a similar structure (this is checked explicitly), and is homogeneous since we compute every $accept$ set. We have $|\mathcal{C}| = 3|S(k, 3k)|$ and each $|C_i| \leq d^k$, since this is what we enumerate. Moreover, it is checked that each $G[C_i \cup Y_i \cup \{z\}]$ has degree bounded by $d^k$. Therefore, when line~\ref{line:homreccall} is reached, $\S$ meets all the requirements of Lemma~\ref{lem:hom-struct}.
We can thus apply Theorem~\ref{thm:iffc1} and state that $G$ is a $k$-leaf power if and only if $G - (C_1 \cup Y_1)$ is a $k$-leaf power. Thus the recursive call on line~\ref{line:homreccall} is correct.
If the algorithm never reaches line~\ref{line:homreccall}, then by the above, $G$ does not admit a homogeneous similar structure with all the desired properties. By contraposition of Lemma~\ref{lem:hom-struct}, $G$ cannot be a $k$-leaf power.
This proves the correctness of the algorithm.
\noindent
\emph{Complexity.} We can handle the case where $G$ has maximum degree at most $d^k$ in time $O(n (d^k k)^{c d^k})$ for some constant $c$, by Lemma~\ref{lem:prelim:bounded-arity}.
The enumeration of the possible $\mathcal{C}$'s requires choosing $l = 3|S(k, 3k)|$ subsets of $V(G)$ of size at most $d^k$.
We can asymptotically bound the number of $\mathcal{C}$'s to enumerate by
\[
\left(\sum_{i=1}^{d^k} {n \choose i} \right)^{3|S(k, 3k)|} \leq \left(\sum_{i=1}^{d^k} n^i \right)^{3|S(k, 3k)|} \leq \left( n^{d^k + 1} \right)^{3|S(k, 3k)|}
\]
(for large enough $n$). For each such $\mathcal{C}$, the construction and verifications for $G', X, z$ and the $Y_i$'s can be done in time $O(n)$, until we must enumerate every set of layering functions.
Such a $\L$ must assign each vertex in $\mathcal{C}$ an integer between $0$ and $k$. The total number of vertices in $\mathcal{C}$ is at most $3|S(k, 3k)| \cdot d^k$, and so the number of layering functions is at most
$(k + 1)^{3|S(k, 3k)| \cdot d^k}$.
Then we must check whether $\S$ is truly a similar structure.
At this point, we must only check that $\L$ satisfies all the requirements of similar structures, which can be done in time $O(n^3)$, since it suffices to compare pairs of vertices of $\mathcal{C}$ and their neighborhoods.
Using Lemma~\ref{lem:enumerate}, one can see that we can compute one $accept(\S, C_i)$
in time $O(|S(k, 3k)| \cdot n^2 d^{8k} f(k)^3)$.
To see this, recall that we can enumerate every valued restriction of $k$-leaf roots for $C_i \cup \{z\} \cup Y_i$ in time $n d^{8k} f(k)^3$. We need to compute the signature of each of those valued trees. Such a signature can be computed by traversing each node of the valued tree in post-order. Each node requires filling a vector with at most $|S(k, 3k)|$ entries, and so computing the signature takes time $O(|S(k, 3k)| \cdot n)$. Since there are $3|S(k, 3k)|$ $C_i$'s to consider, computing every $accept$ set takes time $O(|S(k, 3k)|^2 \cdot n^2 d^{8k} f(k)^3)$.
Finally, we note that at each recursion, $G$ becomes smaller since $C_1$ is non-empty, so this whole procedure is repeated at most $n$ times.
Let us mention that the complexity of the case of maximum degree at most $d^k$ is dominated by the main loop, so we may omit it.
To sum up, we have an asymptotic complexity of
\[
n \cdot
\left(n^{d^k + 1} \right)^{3|S(k, 3k)|} \cdot (k + 1)^{3|S(k, 3k)| \cdot d^k} \cdot n^3 \cdot |S(k, 3k)|^2 \cdot n^2 d^{8k} f(k)^3
\]
where $d, f(k)$, and $|S(k, 3k)|$ depend only on $k$.
Assuming that $k \in O(1)$, this amounts to
\[
O(n^{(d^k + 1)3|S(k, 3k)| + 6}).
\]
\end{proof}
\section{Conclusion}
Although this work answers a longstanding open question, there is still much to do on the topic of leaf powers and $k$-leaf powers.
\begin{itemize}
\item
Is recognizing $k$-leaf powers fixed-parameter tractable in $k$? That is, can it be done in time $O(f(k) n^c)$ for some function $f$ and some constant $c$?
Using the techniques of this work would require finding a homogeneous similar structure in FPT time, thereby avoiding brute-force enumeration. Although this appears difficult, it is possible that such structures have graph-theoretical properties that can be exploited for fast identification. For instance, we have not used the fact that $G$ is strongly chordal, which may help in finding similar structures.
\item
Can $k$-leaf powers be recognized in time $O(n^{f(k)})$, where $f(k)$ is more reasonable than in this work? In particular, can a power tower function be avoided? It may be possible to find a better type of signature that is more succinct, but still allows proving Theorem~\ref{thm:iffc1}.
\item
Can the techniques used here be used to recognize leaf powers?
In particular, can leaf powers be recognized easily if there is an upper bound on the arity of their leaf roots? Or conversely, is there a structure in leaf powers that admit high-arity leaf roots?
\end{itemize}
\section*{Acknowledgements}
The author thanks the anonymous reviewers of the SODA 2022 conference for their extremely useful suggestions.
\bibliographystyle{plain}
\section{Introduction}
Zero forcing was introduced in \cite{AIM} to provide an upper bound for the maximum nullity
of symmetric matrices described by a graph, and independently in \cite{graphinfect} in the study of control of quantum systems. Zero forcing starts with a set of blue vertices and uses a color change rule to color the remaining vertices blue (this is called forcing). The propagation time of a graph was introduced formally in 2012 by Hogben et al.~\cite{proptime} and Chilakamarri et al.~\cite{Chil12}. The propagation time of a zero forcing set is the number of time steps needed to fully color a graph blue when performing independent forces simultaneously, and the propagation time of a graph is the minimum of the propagation times over minimum zero forcing sets.
Positive semidefinite (PSD) forcing was defined in \cite{smallparam} to provide an upper bound for the maximum nullity of positive semidefinite matrices described by a graph (precise definitions of PSD forcing and other terms used throughout are given at the end of this introduction). PSD forcing was studied more extensively in \cite{ekstrand} and Warnberg introduced the study of PSD propagation time in \cite{warnberg}. It is well known that the propagation time of a path is one less than its order, and other families of graphs attain propagation time close to the order of the graph. However, the behavior for PSD propagation time is very different. Warnberg showed in \cite{warnberg} that the number of graphs that have PSD propagation time at least $|V(G)|-2$ is finite, but did not provide an upper bound on PSD propagation time that is tight for graphs of arbitrarily large order.
In this paper, we give two proofs of a tight upper bound on the PSD propagation time of a graph,
\begin{equation}\label{eq-ub}\pt_+(G)\leq \left\lceil \frac{|V(G)|-\Z_+(G)}{2} \right\rceil.\end{equation}
This bound generalizes the next (well-known) result.
\begin{remark}
\thlabel{TreePropTimeLemma}
If $T$ is a tree of order $n$, then $\pt_+(T) \leq \left\lceil \frac{n-1}{2} \right\rceil$
with equality when $T$ is a path, since a blue vertex PSD forces every white neighbor in a tree.
\end{remark}
The bound \eqref{eq-ub} implies that $\pt_+(G)\leq \frac n 2$ for a graph of order $n$ and that there are only a finite number of graphs having $\pt_+(G)\ge |V(G)|-k$ for any fixed natural number $k$ (see Section \ref{s:consequences}). The techniques used to prove \eqref{eq-ub} involve transforming one PSD forcing set into another, thereby reducing the propagation time if it was greater than $\left\lceil \frac{|V(G)|-\Z_+(G)}{2} \right\rceil$. In Section \ref{svm} a single vertex in the PSD forcing set is exchanged, whereas in Section \ref{mvm} multiple vertices are exchanged. Both these techniques are called migration. Algorithms using migration methods to transform any minimum PSD forcing set into one that achieves the bound in \eqref{eq-ub} are presented in Sections \ref{svm} and \ref{mvm}.
In Section \ref{s:consequences} we also derive additional consequences of the bound \eqref{eq-ub}, including tight Nordhaus-Gaddum sum bounds on PSD propagation time.
In the remainder of this introduction we provide precise definitions and introduce notation.
A (simple) \emph{graph} is denoted by $G=(V(G),E(G))$ where $V(G)$ is the finite nonempty set of \emph{vertices} of $G$ and $E(G)$ is the \emph{edge set} of $G$; an edge is a two-element set of vertices and the edge $\{u,v\}$ can also be denoted by $uv$ (or $vu$).
The \emph{order} of $G$ is the number of vertices $|V(G)|$. In a graph, two vertices $u$ and $v$ are \emph{adjacent} if $uv$ is an edge and each of $u$ and $v$ is \emph{incident} to $uv$. The \emph{degree} $\deg_G(v)$ of a vertex $v$ is the number of vertices that are adjacent to $v$ in $G$; when the context is clear, the subscript is omitted. Vertex $u$ is a \emph{neighbor} of $v$ if $vu\in E(G)$ and the \emph{neighborhood} of $v$ is $N(v)=\{u\in V(G): vu\in E(G)\}$.
For $U\subseteq V(G)$, the \emph{subgraph of $G$ induced by $U$}, denoted by $G[U]$, is the graph obtained from $G$ by removing all vertices not in $U$ and their incident edges; $G[U]$ is also called an
\emph{induced subgraph} of $G$. For $S\subseteq V(G)$, $G-S=G[V(G)\setminus S]$, i.e., the subgraph obtained from $G$ by deleting vertices in $S$ and incident edges. A \emph{path in $G$} is a sequence of distinct vertices $v_0,v_1,\dots, v_k$ such that $v_{i-1}v_i\in E(G)$ for $i=1,\dots, k$. A graph is \emph{connected} if for two distinct vertices $v$ and $u$ there is a path from $v$ to $u$ (and thus also from $u$ to $v$). A \emph{component} of a graph is a maximal connected induced subgraph. A \emph{path} (or \emph{path graph}) $P_n$ is a graph with $V(P_n)=\{v_1,\dots,v_n\}$ and $E(P_n)=\{v_iv_{i+1}: i=1,\ldots,n-1\}$.
A \emph{complete graph} $K_n$ is a graph with $V(K_n)=\{v_1,\dots,v_n\}$ and $E(K_n)=\{v_iv_{j}: 1\le i<j\le n\}$.
Each variant of zero forcing is a process. At every stage of the process, each vertex is either blue or white. A white vertex may change color to blue at some step, but a blue vertex will remain blue during all subsequent steps. Each variant of zero forcing is determined by a color change rule that defines when a vertex may change the color of a white vertex to blue, i.e., perform a force.
The \emph{standard color change rule} is: A blue vertex $u$ can change the color of a white vertex $w$ to blue if $w$ is the unique white neighbor of $u$. A force $u\to w$ using the standard color change rule is called a \emph{standard force}.
Let $B$ be the set of blue vertices (at a particular stage of the process), and let $W_1,\dots, W_k$ be the sets of vertices of the $k\ge 1$ components of $G-B$.
The \emph{PSD color change rule} is: If $u\in B$, $w\in W_i$, and $w$ is the only white neighbor of $u$ in $G[W_i\cup B]$, then change the color of $w$ to blue. A force $u\to w$ using the PSD color change rule is called a \emph{PSD force}. Note that it is possible that there is only one component of $G-B$, and in that case a PSD force is the same as a standard force. A set that can color every vertex in the graph blue by repeated applications of the PSD forcing rule is a \emph{PSD forcing set} and the minimum cardinality of a PSD forcing set of $G$ is the \emph{PSD zero forcing number} $\Z_+(G)$. Given a PSD forcing set $B$, a set of forces $\mathcal F$ that colors the entire graph blue, and a vertex $b\in B$, define $V_b$ to be the set of vertices $w$ such that there is a sequence of forces $b=v_1\to v_2\to\dots\to v_k=w$ in $\mathcal F$ (the empty sequence of forces to reach $w$ is permitted, i.e., $b\in V_b$).
The \emph{forcing tree} of $b$ is the induced subgraph $G[V_b]$.
Starting with a set $B\subseteq V(G)$ of blue vertices, we define two sequences of sets, the set $B^{(i)}$ of vertices that are forced (change color from white to blue) at time step $i$ and the set $B^{[i]}$ of vertices that are blue after time step $i$. Thus $B^{[0]}=B^{(0)}=B$ is the set of vertices that are blue initially and after each subsequent time step $i+1$ we have $B^{[i+1]}=B^{[i]}\cup B^{(i+1)}$. To construct $B^{(i+1)}$ (and thus $B^{[i+1]}$), if $B^{(i)}$ and $B^{[i]}$ have been determined, then
\[B^{(i+1)}=\{w:
\mbox{ $w$ can be PSD forced by some vertex (given the vertices in $B^{[i]}$ are blue)}\}.\]
The \emph{PSD propagation time} of $B\subseteq V(G)$, denoted by $\pt_+(G;B)$, is the least $t$ such that $B^{[t]}=V(G)$, or infinity if $B$ is not a PSD forcing set of $G$.
The \emph{PSD propagation time of $G$}, $\pt_+(G)$, is
\[\pt_+(G)=\min\{\pt_+(G;B) : |B|=\Z_+(G)\}.\]
We also define the \emph{$k$-PSD propagation time of $G$} to be
$ \pt_+(G,k)=\min_{|B|= k}\pt_+(G;B),$ so
$\pt_+(G)=\pt_+(G,\Z_+(G)).$
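These definitions translate directly into a small program. The following Python sketch (our own illustration, assuming the graph is given as an adjacency dictionary; not drawn from any reference implementation) simulates the PSD forcing process and computes $\pt_+(G;B)$:

```python
from itertools import count

def psd_propagation_time(adj, blue):
    """PSD propagation time pt_+(G; blue): number of time steps for the
    PSD forcing process started at `blue` to color all of G blue, or
    None if `blue` is not a PSD forcing set.  `adj` maps each vertex of
    G to the set of its neighbors."""
    blue = set(blue)
    for step in count():
        if len(blue) == len(adj):
            return step
        # Label the connected components of G - blue.
        comp = {}                          # white vertex -> component id
        for v in adj:
            if v in blue or v in comp:
                continue
            stack = [v]
            while stack:
                w = stack.pop()
                if w in blue or w in comp:
                    continue
                comp[w] = v
                stack.extend(adj[w])
        # Perform all valid PSD forces of this time step simultaneously.
        forced = set()
        for u in blue:
            white_by_comp = {}
            for w in adj[u]:
                if w not in blue:
                    white_by_comp.setdefault(comp[w], set()).add(w)
            for ws in white_by_comp.values():
                if len(ws) == 1:           # unique white neighbor in that component
                    forced |= ws
        if not forced:
            return None                    # process stalls: not a PSD forcing set
        blue |= forced
```

For instance, on the path $P_5$ this returns $\pt_+(P_5;\{v_3\})=2$ and $\pt_+(P_5;\{v_1\})=4$, consistent with \thref{TreePropTimeLemma}.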
\section{Single-vertex migration}
\label{svm}
For a graph $G$, we denote the set of connected components of $G$ by $\comp(G)$. A {\em valid initial PSD force for $S$} is a PSD force that is valid when only the vertices of $S$ have been colored blue.
\begin{observation}
\thlabel{PSDForceSwitch}
Let $G$ be a graph, $S\subset V(G)$, $v,w\in V(G)\setminus S$, $vw \in E(G)$ and $v\neq w$. The following are equivalent:
\begin{itemize}
\item $v\to w$ is a valid initial PSD force for $S\cup\{v\}$;
\item The removal of $vw$ from $G-S$ disconnects $v$ and $w$;
\item $vw$ is a bridge in $G-S$; and
\item $w\to v$ is a valid initial PSD force for $S\cup\{w\}$.
\end{itemize}
\end{observation}
The next result is Lemma 2.1.1 in \cite{Peters-thesis}. We provide a shorter proof for completeness.
\begin{lemma}
\thlabel{PSDMigration}
Let $G$ be a graph, let $B$ be a PSD forcing set of $G$, and let $v\to w$ be a valid initial PSD force for $B$. Then $B'=(B\setminus\{v\})\cup\{w\}$ is a PSD forcing set for $G$.
\end{lemma}
\begin{proof}
By \thref{PSDForceSwitch} (applied to $S=B\setminus\{v\}$), $w\to v$ is a valid initial PSD force for $B'=(B\setminus\{v\})\cup\{w\}$. Thus, $B$ (which PSD forces $G$) is contained in the final coloring of $B'$, so $B'$ is a PSD forcing set for $G$.
\end{proof}
We call the process of switching $v$ and $w$ in \thref{PSDMigration} \emph{single-vertex migration}. This is illustrated in Figure \ref{singlevertexmigration}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\filldraw[fill=black,draw=black] (1,0) circle (2pt);
\filldraw[fill=black,draw=black] (2,0) circle (2pt);
\filldraw[fill=black,draw=black] (1,1) circle (2pt);
\filldraw[fill=black,draw=black] (2,1) circle (2pt);
\filldraw[fill=black,draw=black] (1,2) circle (2pt);
\filldraw[fill=black,draw=black] (2,2) circle (2pt);
\filldraw[fill=black,draw=black] (3,0) circle (2pt);
\filldraw[fill=black,draw=black] (3,1) circle (2pt);
\filldraw[fill=black,draw=black] (3,2) circle (2pt);
\draw (1,0) -- (3,0) -- (3,2) -- (1,2) -- (1,0) -- (3,2);
\draw (2,0) -- (3,1);
\draw (2,2) -- (2,0);
\draw (1,1) -- (3,1);
\node at (0.75,2.25) {$b_1$};
\node at (0.75,0.25) {$b_3$};
\node at (1.75,2.25) {$w$};
\node at (1.75,1.25) {$b_2$};
\node at (2.75,2.25) {$u$};
\end{tikzpicture}
\caption{For the graph $G$ shown above with PSD forcing set $B=\{b_1,b_2,b_3\}$, single-vertex migrations allow us to obtain several new PSD forcing sets. Since $b_1\to w$ at the first time step, one possibility is $B'=\{w,b_2,b_3\}$. A subsequent migration with $w \to u$ produces the PSD forcing set $B''=\{u,b_2,b_3\}$.}
\label{singlevertexmigration}
\end{figure}
When starting with a PSD forcing set $B$, a force must happen at time step $i$ within each component of $G-B^{[i-1]}$. This leads to the next observation.
\begin{observation}
\thlabel{SeparationBound} For any graph $G$ and PSD forcing set $B$,
\[\pt_+(G;B)=\max_{C\in\comp(G-B)}\pt_+(G[V(C)\cup B];B)\leq\max_{C\in\comp(G-B)}|C|.\]
\end{observation}
The next lemma exhibits a critical property of single-vertex migration that permits iterative progress towards achieving the bound \eqref{eq-ub}.
\begin{lemma}
\thlabel{HalfwayFS}
Let $G$ be a graph of order $n$ that has a PSD forcing set $B$ of size $k$ such that $\max_{C\in\comp(G-B)}|V(C)|>\left\lceil\frac{n-k}{2}\right\rceil$. Then there exists a PSD forcing set $B'$ such that $|B'|=k$ and $\max_{C\in\comp(G-B')}|V(C)|<\max_{C\in\comp(G-B)}|V(C)|$.
\end{lemma}
\begin{proof}
Let $C_0$ be the largest component of $G-B$. Note that $|V(C_0)|\geq \frac{n-k}{2}+1$ and thus $|V(G)\setminus(V(C_0)\cup B)|\le \frac{n-k}{2}-1$. Observe that $B$ must be able to force directly into $C_0$ (or else it could not force $G$); let $v\to w$ be a first force into $C_0$. By single-vertex migration, the set $B'=(B\setminus\{v\})\cup\{w\}$ PSD forces $G$. Then $|B'|=|B|=k$, and $\comp(C_0-\{w\})\subseteq\comp(G-B')$ as the removal of $B$ disconnects $C_0$ from the rest of $G$ and $N(v)\cap C_0=\{w\}$. Furthermore, $\max_{C\in\comp(G-B')}|V(C)|<|V(C_0)|=\max_{C\in\comp(G-B)}|V(C)|$ because both $\max_{C\in\comp(C_0-\{w\})}|V(C)|<|V(C_0)|$ and \[|V(G)\setminus(V(C_0)\cup B')|= |V(G)\setminus(V(C_0)\cup B)|+1\le \frac{n-k}{2}-1+1=\frac{n-k}{2}<|V(C_0)|.\qedhere\]
\end{proof}
\begin{theorem}
\thlabel{nover2}
Let $G$ be a graph of order $n$, and let $\Z_+(G)\leq k\leq n$. Then $\pt_+(G,k)\leq\left\lceil\frac{n-k}{2}\right\rceil$.
\end{theorem}
\begin{proof}
Let $B$ be a PSD forcing set of $G$ of size $k$. By applying \thref{HalfwayFS} repeatedly (if needed), there exists a PSD forcing set $B^\star$ of size $k$ such that $\max_{C\in\comp(G-B^\star)}|V(C)|\leq\left\lceil\frac{n-k}{2}\right\rceil$. Then $\pt_+(G;B^\star)\leq\max_{C\in\comp(G-B^\star)}|V(C)|\leq\left\lceil\frac{n-k}{2}\right\rceil$.
\end{proof}
Observe that the bound in \thref{nover2} is a refinement of \eqref{eq-ub}.
\begin{corollary}\thlabel{c:nover2}
For every graph $G$ of order $n$,
\[\pt_+(G)\leq\left\lceil\frac{n-\Z_+(G)}{2}\right\rceil\le \left\lceil\frac{n-1}{2}\right\rceil\le \frac n 2.\]
\end{corollary}
The proofs of \thref{HalfwayFS} and \thref{nover2} provide the basis for the next algorithm, which modifies a PSD forcing set $B$ to obtain $B^\star$ such that $|B^\star|=|B|$ and $\max_{C\in\comp(G-B^\star)}|V(C)|\leq\left\lceil\frac{|V(G)|-|B^\star|}{2}\right\rceil$.
The PSD forcing set returned by this algorithm achieves the bound in \thref{nover2}.
\newpage
\begin{algorithm}[h!]
\flushleft
\caption{ \\
\textbf{Input:} graph $G$, PSD forcing set $B$ for $G$ \\
\textbf{Output:} PSD forcing set $B^\star$ for $G$ with $|B^\star|=|B|$ and \\$\max_{C\in \comp(G-B^\star)}|V(C)|\leq \left\lceil \frac{|V(G)|-|B^\star|}{2} \right\rceil$}
\label{alg1}
\begin{algorithmic}[1]
\STATE{$B^\star\coloneqq B$}
\STATE{$C_0\coloneqq$ component of $G-B^\star$ with the most vertices}
\WHILE{$|V(C_0)|>\left\lceil\frac{|V(G)|-|B^\star|}{2}\right\rceil$}
\STATE{$v,w\coloneqq$ vertices with $v\in B^\star$ and $w\in C_0$ such that $v\to w$ is a valid initial PSD force for $B^\star$}
\STATE{$B^\star\coloneqq (B^\star\setminus \{v\})\cup \{w\}$}
\STATE{$C_0\coloneqq$ component of $G-B^\star$ with the most vertices}
\ENDWHILE
\RETURN{$B^\star$}
\end{algorithmic}
\end{algorithm}
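Algorithm~\ref{alg1} admits a direct implementation. The following Python sketch (our own illustration, with hypothetical helper names; it assumes the input $B$ is a PSD forcing set, as the algorithm requires) performs the repeated single-vertex migrations:

```python
def migrate_to_balanced(adj, B):
    """Algorithm 1: single-vertex migration until no component of G - B
    has more than ceil((n - |B|) / 2) vertices.  `adj` maps each vertex
    to its neighbor set; `B` must be a PSD forcing set of G."""
    B = set(B)
    bound = -(-(len(adj) - len(B)) // 2)        # ceil((n - |B|) / 2)
    while True:
        comps = white_components(adj, B)
        C0 = max(comps, key=len, default=set())  # largest white component
        if len(C0) <= bound:
            return B
        # A first force into C0 is made by some v in B whose unique
        # neighbor in C0 is w; such a v exists since B forces G.
        v, w = next((v, min(adj[v] & C0)) for v in B if len(adj[v] & C0) == 1)
        B = (B - {v}) | {w}                      # single-vertex migration step

def white_components(adj, B):
    """Vertex sets of the connected components of G - B."""
    seen, comps = set(B), []
    for v in adj:
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    comp.add(u)
                    stack.extend(adj[u])
            comps.append(comp)
    return comps
```

On $P_5$ with $B=\{v_1\}$, the migrations produce $\{v_2\}$ and then $\{v_3\}$, at which point both components of $G-B^\star$ have $2\le\left\lceil\frac{5-1}{2}\right\rceil$ vertices.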
\section{Multiple-vertex migration}
\label{mvm}
In this section we present an additional technique for modifying PSD forcing sets and use it to give an alternate proof of Theorem \ref{nover2}. An example of the technique in \thref{time1shift} is shown in Figure \ref{time1shiftex}.
\begin{figure}[!hb]
\centering
\begin{tikzpicture}
\filldraw[fill=black,draw=black] (1,0) circle (2pt);
\filldraw[fill=black,draw=black] (2,0) circle (2pt);
\filldraw[fill=black,draw=black] (1,1) circle (2pt);
\filldraw[fill=black,draw=black] (2,1) circle (2pt);
\filldraw[fill=black,draw=black] (1,2) circle (2pt);
\filldraw[fill=black,draw=black] (2,2) circle (2pt);
\filldraw[fill=black,draw=black] (3,0) circle (2pt);
\filldraw[fill=black,draw=black] (3,1) circle (2pt);
\filldraw[fill=black,draw=black] (3,2) circle (2pt);
\draw (1,0) -- (3,0) -- (3,2) -- (1,2) -- (1,0) -- (3,2);
\draw (2,0) -- (3,1);
\draw (2,2) -- (2,0);
\draw (1,1) -- (3,1);
\node at (0.75,2.25) {$b_1$};
\node at (0.75,1.25) {$b_2$};
\node at (0.75,0.25) {$b_3$};
\node at (1.75,2.25) {$v_1$};
\node at (1.75,1.25) {$v_2$};
\end{tikzpicture}
\caption{For the graph $G$ shown above with PSD forcing set $B=\{b_1,b_2,b_3\}$, the method in \thref{time1shift} allows us to obtain the PSD forcing set $B'=\{v_1,v_2,b_3\}$ with $\pt_+(G;B')=\pt_+(G;B)-1$.}
\label{time1shiftex}
\end{figure}
\begin{lemma}
\thlabel{time1shift}
Let $G$ be a graph with PSD forcing set $B$ such that $G-B$ is connected. Let $B'$ be the set of endpoints of the PSD forcing trees after the first time step. Then $B'$ is another PSD forcing set of $G$ with $|B|=|B'|$. Furthermore, if $\pt_+(G;B)\geq 2$, then $\pt_+(G;B')=\pt_+(G;B)-1$.
\end{lemma}
\begin{proof}
Note that connectedness of $G- B$ implies that no vertex performs more than one force at the first time step. Let $B=\{b_1,b_2,\ldots,b_k\}$ and $B'=\{v_1,v_2,\ldots,v_j,b_{j+1},\ldots,b_k\}$ where $b_i\to v_i$ at the first time step for $i\le j$, and no other forces occur at the first time step. Notice that $|B'|=|B|$, and we claim that $B'$ is a PSD forcing set.
We first show that for $i\le j$, the only vertex in $B\setminus B'$ that is adjacent to $v_i$ is $b_i$. If $v_i$ is adjacent to $b\in B$ such that $b\ne b_i$, then we have two cases:
\begin{itemize}
\item If $b$ is not adjacent to any other vertex in $G -B$, then $b$ did not perform a force in $G$ during the first time step when using $B$ in the chosen forcing process.
\item If $b$ is adjacent to some other vertex in $G-B$, then $b$ had more than one neighbor in $G- B$ when $B$ was selected as the PSD forcing set. Again, $b$ does not perform a force in $G$ during the first time step.
\end{itemize}
In both cases, we see that $b\in B'$ since $b$ did not perform a force during the first time step. Hence, the only vertex in $B\setminus B'$ that is adjacent to $v_i$ is $b_i$.
Since $G- B$ is connected and $b_i$ performed a force when $B$ was chosen as a PSD forcing set, the only neighbors of $b_i\in B\setminus B'$ are other elements of $B$ and the vertex $v_i$.
This means that $\comp(G-B')$ can be partitioned into $\comp(G-(B \cup B'))$ and $\comp(G[B \setminus B'])$. As a result, $b_i$ is the unique neighbor of $v_i$ in the component of $G- B'$ containing $b_i$. Therefore, when $B'$ is chosen as an initial set of blue vertices, $v_i\to b_i$ at the first time step. Since all of $B$ will be blue by the end of the first time step and $B$ is a PSD forcing set, we conclude that $B'$ is also a PSD forcing set of $G$.
Now suppose that $\pt_+(G;B)\geq 2$. Let $H=G-(B\setminus B')$. From the preceding paragraph, we observe that $B'$ begins forcing vertices in $H$ at the first time step since $B\setminus B'$ is disconnected from $H- B'$. The forcing steps in $H$ are then the same as when $B$ was the initial PSD forcing set, but shifted by one time step. Since $\pt_+(G;B)\geq 2$, we know $\pt_+(H;B')\geq 1$, and this allows us to conclude that \[\pt_+(G;B')=\max\{1,\pt_+(H;B')\}=\pt_+(H;B')=\pt_+(G;B)-1. \qedhere\]
\end{proof}
\begin{remark}
In Lemma \ref{time1shift}, if we define $B''=\{v_1,v_2,\ldots,v_{j'},b_{j'+1},\ldots,b_k\}$ with $j'<j$, then the same argument shows that $B''$ is a PSD forcing set, though we cannot guarantee the second result $\pt_+(G;B'')=\pt_+(G;B)-1$. Choosing $j'=1$ and combining this with the next lemma generalizes single-vertex migration.
\end{remark}
Since PSD forcing occurs independently in the components of $G-B$, we can apply \thref{time1shift} within the closed neighborhood of one component of $G-B$. We call this process of replacing $B$ with $B'$ within the closed neighborhood of one component (as described in Lemma \ref{componentshift}) \emph{multiple-vertex migration}.
The assumption in \thref{time1shift} that $G- B$ is connected cannot be removed. Figure \ref{connected} illustrates both multiple-vertex migration (using one component) and a failure when moving vertices in more than one component.
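To make the forcing rule concrete, the following Python sketch (our own helper names; graphs are represented as adjacency dictionaries of sets) simulates PSD forcing and checks Lemma \ref{time1shift} on a path: replacing each step-one forcer by the vertex it forces lowers the propagation time by exactly one.

```python
def psd_step(adj, blue):
    """All forces available in one time step: b -> w is a force when
    w is the only white neighbor of b in w's component of G - blue."""
    white = set(adj) - blue
    comps, seen = [], set()
    for s in white:                      # components of the white subgraph
        if s in seen:
            continue
        stack, c = [s], set()
        while stack:
            u = stack.pop()
            if u not in c:
                c.add(u); seen.add(u)
                stack += [w for w in adj[u] if w in white and w not in c]
        comps.append(c)
    return [(b, next(iter(adj[b] & c)))
            for c in comps for b in blue if len(adj[b] & c) == 1]

def psd_pt(adj, B):
    """Propagation time pt_+(G; B), or None if B is not a PSD forcing set."""
    blue, t = set(B), 0
    while blue != set(adj):
        forces = psd_step(adj, blue)
        if not forces:
            return None
        blue |= {w for _, w in forces}
        t += 1
    return t

# Path P_6 with B = {0}: G - B is connected and pt_+(P_6; {0}) = 5.
P6 = {i: {i - 1, i + 1} & set(range(6)) for i in range(6)}
B = {0}
moved = {w for _, w in psd_step(P6, B)} | (B - {b for b, _ in psd_step(P6, B)})
print(psd_pt(P6, B), psd_pt(P6, moved))  # 5 4
```

On $P_6$ the only step-one force from $B=\{0\}$ is $0\to 1$, so the migrated set is $\{1\}$ and the propagation time drops from $5$ to $4$, as the lemma predicts.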
\begin{figure}[h]
\centering
\begin{tikzpicture}
\filldraw[fill=black,draw=black] (0,0) circle (2pt);
\filldraw[fill=black,draw=black] (1,0) circle (2pt);
\filldraw[fill=black,draw=black] (2,0) circle (2pt);
\filldraw[fill=black,draw=black] (0,1) circle (2pt);
\filldraw[fill=black,draw=black] (1,1) circle (2pt);
\filldraw[fill=black,draw=black] (2,1) circle (2pt);
\filldraw[fill=black,draw=black] (0,2) circle (2pt);
\filldraw[fill=black,draw=black] (1,2) circle (2pt);
\filldraw[fill=black,draw=black] (2,2) circle (2pt);
\filldraw[fill=black,draw=black] (-1,0) circle (2pt);
\filldraw[fill=black,draw=black] (-1,1) circle (2pt);
\filldraw[fill=black,draw=black] (-1,2) circle (2pt);
\filldraw[fill=black,draw=black] (3,0) circle (2pt);
\filldraw[fill=black,draw=black] (3,1) circle (2pt);
\filldraw[fill=black,draw=black] (3,2) circle (2pt);
\draw (0,0)--(0,2)--(2,2)--(2,0)--(0,0);
\draw (0,1)--(2,1)--(1,0)--(1,2)--(0,0)--(1,1);
\draw (0,0)--(-1,0)--(-1,1)--(0,1);
\draw (-1,1)--(-1,2)--(0,2);
\draw (2,0)--(3,0)--(3,1)--(2,1);
\draw (3,1)--(3,2)--(2,2);
\node at (1.25,2.25) {$b_1$};
\node at (1.25,1.25) {$b_2$};
\node at (0.75,0.25) {$b_3$};
\node at (2.25,2.25) {$v_1$};
\node at (2.25,1.25) {$v_2$};
\node at (-0.25,0.25) {$v_3$};
\end{tikzpicture}
\caption{Notice that $B=\{b_1,b_2,b_3\}$ is a PSD forcing set, but $\{v_1,v_2,v_3\}$ is not. However, we can construct the PSD forcing sets $\{b_1,b_2,v_3\}$ and $\{v_1,v_2,b_3\}$.}
\label{connected}
\end{figure}
\begin{lemma}
\thlabel{componentshift}
Let $G$ be a graph with PSD forcing set $B$. Let $C$ be a connected component of $G- B$, and let $H=G[V(C)\cup B]$. If $B'$ is the set of endpoints in $H$ of the PSD forcing trees after the first time step, then $B'$ is another PSD forcing set of $G$ with $|B'|=|B|$.
\end{lemma}
\begin{proof}
By definition of $H$, we know $H- B$ is connected. Using Lemma \ref{time1shift}, $B'$ is a PSD forcing set for $H$ with $|B|=|B'|$. Notice that the vertices in $H- B$ are not adjacent to any vertices in $G- V(H)$. From this, we see that $B'$ will force any white vertices in $B$ at the first step when forcing within $G$. Therefore $B'$ will force $G$ since $B$ is a PSD forcing set of $G$. Thus $B'$ is a PSD forcing set of $G$.
\end{proof}
A PSD forcing set $B$ with $|B|=k$ is called \textit{$k$-efficient} if $\pt_+(G;B)=\pt_+(G,k)$. When $k=\Z_+(G)$, $B$ is said to be \textit{efficient}. An application of the previous result allows us to conclude that for $k$-efficient PSD forcing sets, the two components that take the longest to force should take approximately the same time.
\begin{theorem}
\label{closeproptime}
Let $G$ be a graph and let $B$ be a PSD forcing set with $|B|=k$. Let $C_1,C_2,\ldots,C_m$ be the connected components of $G-B$, indexed so that
$\pt_+(G[V(C_i)\cup B];B)\leq \pt_+(G[V(C_{i+1})\cup B];B)$ for $i=1,2,\ldots,m-1$. If $B$ is $k$-efficient, then \[\pt_+(G[V(C_{m})\cup B];B)-\pt_+(G[V(C_{m-1})\cup B];B)\leq 1,\]
where we use the convention $C_1=\emptyset$ and $C_2=G-B$ when $G- B$ is connected.
\end{theorem}
\begin{proof}
Define $G_i=G[V(C_i)\cup B]$. Notice that
\[\pt_+(G;B)=\max_{i=1,2,\ldots ,m}\pt_+(G_i;B)=\pt_+(G_m;B).\]
We prove the contrapositive, so suppose that $\pt_+(G_m;B)-\pt_+(G_{m-1};B)> 1$.
Using Lemma \ref{componentshift}, if we let $B'$ be the endpoints in $G_m$ of the PSD forcing trees after the first time step, then $B'$ is a PSD forcing set of $G$. Additionally, since $\pt_+(G_m;B)-\pt_+(G_{m-1};B)> 1$, nonnegativity of propagation time implies $\pt_+(G_m;B)\geq 2$. The vertices of $G_m-B$ are not adjacent to any vertices in $G- V(G_m)$, so Lemma \ref{time1shift} implies
\[\pt_+(G_m;B')=\pt_+(G_m;B)-1.\] Since $B'$ forces $B$ at the first time step and $B$ is a PSD forcing set for $G$, we also see that the vertices in $G- V(G_m)$ will be blue by time \[\pt_+(G_{m-1};B)+1.\] Since we assumed $\pt_+(G_m;B)-\pt_+(G_{m-1};B)>1$, we see that
\begin{eqnarray*}
\pt_+(G;B') & =& \max\{\pt_+(G_m;B)-1,\pt_+(G_{m-1};B)+1\} \\
& =&\pt_+(G_m;B)-1 \\
& =&\pt_+(G;B)-1.
\end{eqnarray*}
Thus, $B$ cannot be $k$-efficient.
\end{proof}
Theorem \ref{closeproptime} can be used to give another independent proof of Theorem \ref{nover2}.
\begin{proof}[Alternate proof of Theorem \ref{nover2}]
Let $B$ be a $k$-efficient PSD forcing set of $G$. Using the notation as in Theorem \ref{closeproptime}, the assumption that $B$ is $k$-efficient implies \[\pt_+(G_m;B)-\pt_+(G_{m-1};B)\leq 1.\] Propagation in $G_m$ and $G_{m-1}$ occurs independently, so $\pt_+(G_m;B)-\pt_+(G_{m-1};B)\leq 1$ implies that at each time step, at least one force occurs in $G_m$ and at least one force occurs in $G_{m-1}$, except possibly during the last step. Since at least two forces occur at each time step with the possible exception of the last time step, we conclude that
\[\pt_+(G,k)= \pt_+(G;B) \leq \left\lceil \frac{|V(G)|-k}{2}\right\rceil.\qedhere\]
\end{proof}
The proof of Theorem \ref{closeproptime} provides the basis for an algorithm for finding a PSD forcing set such that $\pt_+(G[V(C_{m})\cup B];B)-\pt_+(G[V(C_{m-1})\cup B];B)\leq 1$ holds, which we present next. The PSD forcing set returned by this algorithm achieves the bound in Theorem \ref{nover2}, though it is not necessarily $k$-efficient.
\begin{algorithm}[h]
\flushleft
\caption{\\
\textbf{Input:} graph $G$, PSD forcing set $B$ for $G$ \\
\textbf{Output:} PSD forcing set $B'$ for $G$ with $|B'|=|B|$ such that the two components of $G-B'$ that take the longest to propagate will finish propagating within one time step of each other}
\label{alg2}
\begin{algorithmic}[1]
\STATE{$B'\coloneqq B$}
\STATE{$G_1\coloneqq$ subgraph of $G$ induced by $B'$}
\STATE{$m\coloneqq |\comp(G-B')|+1$}
\STATE{$G_2,\ldots,G_m\coloneqq$ subgraphs induced by the components of $G- B'$ combined with the vertices in $B'$, indexed so that $\pt_+(G_i;B')\leq \pt_+(G_{i+1};B')$ for $i=2,3,\ldots,m-1$}
\WHILE{$\pt_+(G_m;B')-\pt_+(G_{m-1};B')\geq 2$}
\STATE{$b_1,b_2,\ldots,b_j\coloneqq$ vertices in $B'$ that perform a force in $G_m$ at the first time step}
\STATE{$v_1,v_2,\ldots v_j\coloneqq$ vertices of $G_m$ such that $b_i\rightarrow v_i$ at the first time step}
\STATE{$B'\coloneqq (B'\cup \{v_1,v_2,\ldots,v_j\})\setminus \{b_1,b_2,\ldots,b_j\}$}
\STATE{$m\coloneqq |\comp(G-B')|+1$}
\STATE{$G_1\coloneqq$ subgraph of $G$ induced by $B'$; $G_2,\ldots,G_m\coloneqq$ subgraphs induced by the components of $G-B'$ combined with the vertices in $B'$, indexed so that $\pt_+(G_i;B')\leq \pt_+(G_{i+1};B')$ for $i=2,3,\ldots,m-1$}
\ENDWHILE
\RETURN{$B'$}
\end{algorithmic}
\end{algorithm}
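Algorithm \ref{alg2} can be prototyped directly. The following Python sketch (our own helper names; it assumes the input $B$ is a PSD forcing set with $B\subsetneq V(G)$, represents $G$ as an adjacency dictionary of sets, and treats a missing second component as having propagation time $0$, matching the convention of Theorem \ref{closeproptime}) repeatedly migrates the step-one forcers of the slowest component:

```python
def white_comps(adj, blue):
    """Connected components of G - blue."""
    white, out, seen = set(adj) - blue, [], set()
    for s in white:
        if s in seen:
            continue
        stack, c = [s], set()
        while stack:
            u = stack.pop()
            if u not in c:
                c.add(u); seen.add(u)
                stack += [w for w in adj[u] if w in white and w not in c]
        out.append(c)
    return out

def forces_now(adj, blue):
    """All PSD forces b -> w available at the current time step."""
    return [(b, next(iter(adj[b] & c)))
            for c in white_comps(adj, blue) for b in blue
            if len(adj[b] & c) == 1]

def pt(adj, B):
    blue, t = set(B), 0
    while blue != set(adj):
        f = forces_now(adj, blue)
        if not f:
            return None
        blue |= {w for _, w in f}
        t += 1
    return t

def migrate(adj, B):
    """While the two slowest components differ by >= 2, swap the step-one
    forcers of the slowest component for the vertices they force."""
    B = set(B)
    while True:
        cs = white_comps(adj, B)
        if not cs:
            return B
        def sub_pt(c):                       # pt_+(G[C cup B]; B)
            verts = c | B
            return pt({v: adj[v] & verts for v in verts}, B)
        cs.sort(key=sub_pt)
        second = sub_pt(cs[-2]) if len(cs) > 1 else 0
        if sub_pt(cs[-1]) - second < 2:
            return B
        moved = [(b, w) for b, w in forces_now(adj, B) if w in cs[-1]]
        B = (B - {b for b, _ in moved}) | {w for _, w in moved}

# Two components: a long path hanging off x, and a single pendant at y.
G = {'x': {'p1'}, 'p1': {'x', 'p2'}, 'p2': {'p1', 'p3'},
     'p3': {'p2', 'p4'}, 'p4': {'p3'}, 'y': {'q1'}, 'q1': {'y'}}
print(pt(G, {'x', 'y'}))               # 4
print(sorted(migrate(G, {'x', 'y'})))  # ['p2', 'y']
```

On this example the loop moves the blue vertex two steps down the long path, and the propagation time drops from $4$ to $2=\lceil (7-2)/2\rceil$, matching the bound of Theorem \ref{nover2}.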
\newpage
\section{Consequences of the bound}\label{s:consequences}
In this section we derive several consequences of \thref{nover2}, including a tight upper bound on the PSD throttling number and tight Nordhaus-Gaddum sum bounds for PSD propagation time. We begin by showing that the number of graphs having PSD propagation time within a fixed amount of the order is finite.
\begin{corollary}
\thlabel{finiteness}
Let $k\in \mathbb{N}$. For any graph $G$ with $|V(G)|\geq 2k+1$,
\[\pt_+(G)< |V(G)|-k.\]
The number of graphs with $\pt_+(G)\geq |V(G)|-k$ is therefore finite.
\end{corollary}
\begin{proof}
By \thref{c:nover2}, $\pt_+(G)\leq \left\lceil\frac{|V(G)|-1}{2}\right\rceil.$
For any graph with $|V(G)|\geq 2k+1$,
\[\pt_+(G)\leq \left\lceil \frac{|V(G)|-1}{2} \right\rceil< |V(G)|-k.\]
Since there are only finitely many graphs with $|V(G)|\leq 2k$, this implies that the number of graphs with $\pt_+(G)\geq |V(G)|-k$ is finite.
\end{proof}
Warnberg \cite{warnberg} characterized the graphs achieving $\pt_+(G)\geq |V(G)|-k$ for $k=1,2$, thereby establishing the results in Corollary \ref{finiteness} for $k=1,2$.
\begin{theorem}\thlabel{warnberg-small}\cite{warnberg}
Let $G$ be a graph.
\begin{enumerate}
\item $\pt_+(G)=|V(G)|-1$ if and only if $G=P_2$.
\item $\pt_+(G)=|V(G)|-2$ if and only if $G$ is one of $P_3$, $P_4$, $C_3$, or $P_2\mathbin{\,\sqcup\,} P_1$, where $G\mathbin{\,\sqcup\,} H$ denotes the disjoint union of $G$ and $H$.
\end{enumerate}
\end{theorem}
As the value of $k$ increases, the number of graphs $G$ such that $\pt_+(G) = |V(G)|-k$ grows rapidly. The graphs having $\pt_+(G)=|V(G)|-3$ and $\pt_+(G)=|V(G)|-4$ are shown in Figures \ref{f:nminus3} and \ref{f:nminus4}, respectively.
\begin{figure}[!h]
\tikzsetnextfilename{figure_nminus3}
\begin{tikzpicture}[set graph scales={1.17}{1.44}{1.44}]\DrawGraphs{n minus 3}{5x4}\end{tikzpicture}
\caption{Graphs with $\pt_+(G)=|V(G)|-3$\label{f:nminus3}.}
\end{figure}
\begin{figure}[!h]
\tikzsetnextfilename{figure_nminus4}
\begin{tikzpicture}[set graph scales={1}{1}{0.9}]\DrawGraphs{n minus 4}{10x10}\end{tikzpicture}
\caption{Graphs with $\pt_+(G)=|V(G)|-4$\label{f:nminus4}.}
\end{figure}
Throttling minimizes the sum of the resources used to accomplish a task (number of blue vertices) and the time needed to complete that task (propagation time).
Theorem \ref{nover2} yields a bound on the \emph{PSD throttling number} of a graph $G$ of order $n$, which is defined to be $\thr_+(G)=\min_{\Z_+(G)\le k\le n}(k+\pt_+(G,k))$.
\begin{corollary}
For any graph $G$ on $n$ vertices,
\[\thr_+(G)\leq
\left\lceil \frac{n+\Z_+(G)}{2}\right\rceil,\]
and this bound is tight for $K_n$ for all $n$.
\end{corollary}
\begin{proof}
By the definition of throttling and \thref{nover2}, $\thr_+(G)\le k+\pt_+(G,k)\le k+\left\lceil\frac{n-k}{2}\right\rceil=
\left\lceil \frac{n+k}{2}\right\rceil$ for $k=\Z_+(G),\dots,n$.
Thus $\thr_+(G)\le \left\lceil \frac{n+\Z_+(G)}{2}\right\rceil$.
For tightness, $\Z_+(K_n)=n-1$ and $\pt_+(K_n)=1$, so $\thr_+(K_n)=n=\left\lceil\frac{n+(n-1)}2\right\rceil$.
\end{proof}
Given a graph $G$, its \emph{complement} $\overline{G}$ is the graph with vertex set $V(G)$ and edge set
\[E \left(\overline{G} \right) = \left\{ uv : u,v \in V(G) \text{ distinct and } uv \not \in E(G) \right \} .\]
The \emph{Nordhaus-Gaddum sum problem} for a graph parameter $\zeta$ is to determine a lower or upper bound on $\zeta(G)+\zeta(\overline{G})$ that is tight for graphs of arbitrarily large order. We can use Theorem \ref{nover2} to give a tight Nordhaus-Gaddum sum upper bound for the PSD propagation time of a graph and its complement. We recall the next (tight) Nordhaus-Gaddum sum bounds for the PSD zero forcing number.
\begin{theorem}\cite{ekstrand}\label{ZpNG}
Let $G$ be a graph of order $n\ge 2$. Then $n-2 \leq \Z_+(G) + \Z_+(\overline{G}) \leq 2n-1$, and both bounds are tight for arbitrarily large $n$.
\end{theorem}
\begin{theorem}\label{t:NG}
Let $G$ be a graph of order $n \geq 2$. Then \[1 \leq \pt_+(G) + \pt_+(\overline{G}) \leq \frac{n}{2} + 2.\] The lower bound is tight for every $n\ge 2$ and the upper bound is tight for every even $n\ge 8$.
\end{theorem}
\begin{proof}
Since $n$ is at least 2, either the graph or its complement has an edge. Therefore either $\pt_+(G) \geq 1$ or $\pt_+(\overline{G})\geq 1$. Observe that the lower bound is achieved by the complete graph $K_n$ for $n\geq 2$.
To establish the upper bound:
\begin{eqnarray*}
\pt_+(G) + \pt_+(\overline{G}) &\leq& \left\lceil \frac{n-\Z_+(G)}{2} \right\rceil + \left\lceil \frac{n-\Z_+(\overline{G})}{2} \right\rceil
\\
&\leq &\frac{n-\Z_+(G)}{2} + \frac{n-\Z_+(\overline{G})}{2} +1
\\
&= & n+1 - \frac{\Z_+(G) + \Z_+(\overline{G})}{2} \\
&\leq & n+1 - \frac{n-2}{2}
\\
&= & \frac{n}{2} + 2.
\end{eqnarray*}
To establish tightness for even $n\geq 8$, consider the graph $H_{2k+8}$ on $2k+8$ vertices (with $k\geq 0$) shown in Figure \ref{fig:H8exp}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\filldraw[fill=black,draw=black] (0,0) circle (2pt);
\filldraw[fill=black,draw=black] (1,-0.5) circle (2pt);
\filldraw[fill=black,draw=black] (2,-1) circle (2pt);
\filldraw[fill=black,draw=black] (1,-1.5) circle (2pt);
\filldraw[fill=black,draw=black] (0,-2) circle (2pt);
\filldraw[fill=black,draw=black] (-1,-1.5) circle (2pt);
\filldraw[fill=black,draw=black] (-2,-1) circle (2pt);
\filldraw[fill=black,draw=black] (-1,-0.5) circle (2pt);
\filldraw[fill=black,draw=black] (3,-1) circle (2pt);
\filldraw[fill=black,draw=black] (-3,-1) circle (2pt);
\filldraw[fill=black,draw=black] (5,-1) circle (2pt);
\filldraw[fill=black,draw=black] (-5,-1) circle (2pt);
\draw (0,0) -- (2,-1) -- (0,-2) -- (-2,-1) -- (0,0) -- (0,-2);
\draw (-1,-1.5) -- (-1,-0.5) -- (1,-0.5) -- (-1,-1.5) -- (1,-1.5) -- (1,-0.5);
\node at (4,-1) {$\ldots$};
\node at (-4,-1) {$\ldots$};
\draw (2,-1) -- (3.5,-1);
\draw (-2,-1) -- (-3.5,-1);
\draw (4.5,-1) -- (5,-1);
\draw (-4.5,-1) -- (-5,-1);
\node at (2,-1.375) {$b_0$};
\node at (-2,-1.375) {$a_0$};
\node at (3,-1.375) {$b_1$};
\node at (-3,-1.375) {$a_1$};
\node at (5,-1.375) {$b_k$};
\node at (-5,-1.375) {$a_k$};
\node at (-1.25,-0.25) {$x$};
\node at (-1.25,-1.75) {$y'$};
\node at (1.25,-0.25) {$y$};
\node at (1.25,-1.75) {$x'$};
\node at (0,0.25) {$z$};
\node at (0,-2.25) {$z'$};
\end{tikzpicture}
\caption{The graph $H_{2k+8}$, which has order $2k+8$ and $\pt_+(H_{2k+8})+\pt_+(\overline{H_{2k+8}})=(k+3)+3$.
\label{fig:H8exp}}
\end{figure}
It is straightforward to verify the following properties of $H_8$: $\Z_+(H_8)=\pt_+(H_8)=3$. For any minimum PSD forcing set $B$ of $H_8$, $a_0\not\in B$ or $b_0\not\in B$, and one of $a_0$ or $b_0$ is the last vertex forced.
Since $\overline{H_8}\cong H_8$, \[\pt_+(H_8)+\pt_+(\overline{H_8})=6=\frac{8}{2}+2,\] so $H_8$ gives a tight bound for $n=8$.
Now assume $k\ge 1$. By results in \cite{ekstrand}, $\Z_+(H_{2k+8})=3$ and any minimum PSD forcing set for $H_{2k+8}$ must contain at least two vertices in $V(H_8)\setminus \{a_0,b_0\}$. If $B$ is a PSD forcing set with two vertices in $V(H_8)\setminus \{a_0,b_0\}$ and a third vertex $a_i$ from $\{a_1,a_2,\ldots,a_k\}$, then migration from $a_i$ to $a_{i-1}$ produces a PSD forcing set $B'=(B\setminus \{a_{i}\})\cup \{a_{i-1}\}$ with $\pt_+(H_{2k+8};B')=\pt_+(H_{2k+8};B)-1$, as $b_0$ will be the last vertex in $H_8$ forced regardless of what $B$ and $B'$ are. A similar argument applies when $B$ contains some vertex in $\{b_1,b_2,\ldots,b_k\}$, and in these situations, we conclude that $B$ cannot be efficient. From this, we see that if we wish to select an efficient PSD forcing set $B$ for $H_{2k+8}$, we must select three vertices from $H_8$. Regardless of which three vertices we select, either $a_0$ or $b_0$ will be the last vertex of $H_8$ forced, implying that
\[\pt_+(H_{2k+8})=\pt_+(H_{2k+8};B)=\pt_+(H_8)+k=k+3.\]
Since the order of $\overline{H_{2k+8}}$ is $2k+8$ and $\pt_+(H_{2k+8})=k+3$, to complete the proof it suffices to show that $\pt_+(\overline{H_{2k+8}})=3$. The software \cite{sage-PSDprop} was used to verify that $\pt_+(\overline{H_{10}})=\pt_+(\overline{H_{12}})=3$, so we focus on the remaining cases. Fix $k\ge 3$, and let $H=H_{2k+8}$, so $\overline{H}=\overline{H_{2k+8}}$.
We first show $\Z_+(\overline{H})= 2k+3$ and $\pt_+(\overline{H})\le 3$. Notice that Theorem \ref{ZpNG} and $\Z_+(H)=3$ imply that $\Z_+(\overline{H})\ge 2k+3$. Since $\overline{H}$ contains $\overline{H_8}$ as a subgraph, we let $B=B_0\cup X$ where $B_0$ is an efficient PSD forcing set for $\overline{H_8}\cong H_8$ and $X=\{a_1,\ldots,a_k,b_1,\ldots,b_k\}$. Notice that $|B|=2k+3$.
The graph $\overline{H}$ also contains $\overline{H_{12}}$ as a subgraph, and whenever all vertices of $X$ are blue, we may assume forcing takes place entirely within $\overline{H_{12}}$, since $a_i\to w$ can be replaced by $a_2\to w$ for $i\ge 3$, and similarly for $b_i$. Combined, we conclude that $B$ is a PSD forcing set for $\overline{H}$, $\Z_+(\overline{H})=2k+3$, and $\pt_+(\overline{H})\le \pt_+(\overline{H};B)=\pt_+(\overline{H_{12}};B\cap V(\overline{H_{12}}))=3$.
To prove $\pt_+(\overline{H})\geq 3$, we show that an efficient PSD forcing set $B$ must contain all vertices of $X$. Let $B\subset V(\overline H)$ be a PSD forcing set of $\overline H$ such that $|B|=2k+3$ and $X\not\subseteq B$. Let $W=V(\overline H)\setminus B$, so $W\cap X\ne\emptyset$.
We begin by showing $\overline{H}[W]$ is connected. Suppose first that $|W\cap X|\ge 2$, and without loss of generality, assume there is some $a_i\in W\cap X$.
Then $\overline{H}-B=\overline H[W]$ is connected because every vertex except $a_{i-1}$ and $a_{i+1}$ is adjacent to $a_i$, $|W\cap X|\ge 2$, and $|W|=5$.
Alternatively, suppose $|W\cap X|=1$, and assume without loss of generality that $W\cap X=\{a_i\}$. If $a_0\notin W$, then $a_i$ is adjacent to all other vertices in $W$, again implying $\overline{H}[W]$ is connected. If $a_0\in W$, then there must be three white vertices in $\overline{H_8}\setminus \{a_0\}$, which are all adjacent to $a_i$. Furthermore, $\deg_{\overline{H_8}}a_0=5$ and $|W\cap V(\overline{H_8})|=4$ imply that some neighbor of $a_0$ is white. In all cases, $\overline{H}[W]$ is connected.
Since $\overline{H}-B$ is connected, the first force must be a standard force. If vertex $u$ performs the first force, then $\deg_{\overline{H}}u\le \Z_+(\overline H)= 2k+3$, so $\deg_H u\ge 4$, i.e., $u\in \{x,x',y,y'\}$. If $|W\cap X|\geq 2$, then $u$ is adjacent to multiple white vertices, implying $u$ cannot perform a force, and $B$ would not be a PSD forcing set. If $|W\cap X|=1$, notice that $a_i\in W\cap X$ is adjacent to all of $x,x',y$ and $y'$. Then the only vertex forced at the first time step is $a_i$, implying $\pt_+(\overline{H};B)\geq 2$.
So Lemma \ref{time1shift} implies that $B$ is not efficient.
Thus, any efficient PSD forcing set $B$ for $\overline{H}$ must contain all of $X$. For any such $B$, we can again assume forcing takes place entirely within $\overline{H_{12}}$, and we conclude that $\pt_+(\overline{H})=\pt_+(\overline{H};B)= \pt_+(\overline{H_{12}};B\cap V(\overline{H_{12}}))\geq 3$.
\end{proof}
In order for the upper bound in Theorem \ref{t:NG} to be tight, $n$ must be even. In addition to the family $H_{2k+8}$ presented in the proof, the upper bound is tight for $P_4$. A computer search shows there is no graph $G$ of order 6 realizing $\pt_+(G) + \pt_+(\overline{G}) =5=\frac{6}{2} + 2$.
Finally we consider the maximum PSD propagation time for graphs with arbitrary order and fixed PSD forcing number. For a fixed positive integer $k$ and $n>k$, define \[\zeta(n,k)=\max\{\pt_+(G): \, |V(G)|=n \text{ and } \Z_+(G)=k\}.\]
By \thref{nover2}, $\zeta(n,k)\le \left\lceil \frac{n-k}{2}\right\rceil$. We construct a family of examples realizing this bound.
Define the lollipop graph $L_{m,r}$ as the graph obtained by starting with the complete graph $K_m$ with $m\ge 3$ and a (disjoint) path $P_r$, and then adding an edge between some vertex $v$ of $K_m$ and an endpoint of $P_r$; the order of $L_{m,r}$ is $m+r$. See Figure \ref{lollipop}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\filldraw[fill=black,draw=black] (0,0) circle (2pt);
\filldraw[fill=black,draw=black] (3,0) circle (2pt);
\filldraw[fill=black,draw=black] (4,0) circle (2pt);
\filldraw[fill=black,draw=black] (5,0) circle (2pt);
\filldraw[fill=black,draw=black] (6,0) circle (2pt);
\filldraw[fill=black,draw=black] (7,0) circle (2pt);
\filldraw[fill=black,draw=black] (8,0) circle (2pt);
\filldraw[fill=black,draw=black] (1,1) circle (2pt);
\filldraw[fill=black,draw=black] (2,1) circle (2pt);
\filldraw[fill=black,draw=black] (1,-1) circle (2pt);
\filldraw[fill=black,draw=black] (2,-1) circle (2pt);
\draw (8,0) -- (0,0) -- (1,1) -- (2,1) -- (3,0) -- (2,-1) -- (1,-1) -- (0,0) -- (2,1) -- (2,-1) -- (0,0);
\draw (1,1) -- (3,0) -- (1,-1) -- (1,1) -- (2,-1);
\draw (1,-1) -- (2,1);
\node at (3,-0.25) {$v$};
\node at (8,-0.25) {$w$};
\end{tikzpicture}
\caption{The lollipop graph $L_{6,5}$\label{lollipop}.}
\end{figure}
\begin{proposition}
\label{nover2tight} For $m\ge 3$ and $r\ge 1$, \[\pt_+(L_{m,r})=\left\lceil \frac{|V(L_{m,r})|-\Z_+(L_{m,r})}{2}\right\rceil.\]
\end{proposition}
\begin{proof}
Let $v$ be the vertex of degree $m$ and let $w$ be the vertex of degree one in $L_{m,r}$. It is clear that $\Z_+(L_{m,r})=m-1$ since $L_{m,r}$ contains $K_m$ as a subgraph and any set of $m-1$ of the vertices in $K_m$ is a PSD forcing set.
Consider a PSD forcing set $B$ consisting of $m-2$ of the vertices in $K_m \setminus \{v\}$ and the vertex of $P_r$ at distance $\left\lceil \frac r 2\right\rceil$ from $w$.
It will take $\left\lceil \frac r 2\right\rceil$ time steps to force the vertices of $P_r$ and $r-\left\lceil \frac r 2 \right\rceil+1=\left\lceil \frac {r+1} 2\right\rceil$ time steps to force the last vertex of $K_m$.
Thus, \[\pt_+(L_{m,r};B)=\left\lceil \frac {r+1} 2\right\rceil=\left\lceil \frac {(m+r)-(m-1)} 2\right\rceil = \left\lceil \frac{|V(L_{m,r})|-\Z_+(L_{m,r})}{2}\right\rceil.\]
The set $B$ is efficient for $L_{m,r}$ because any PSD forcing set must contain at least $m-2$ of the vertices in $K_m$ and any other choice for the last vertex results in a propagation time that is at least as large.
\end{proof}
\begin{corollary}
For any $n\ge k\ge 1$, there exists a graph $G$ such that $|V(G)|=n$, $\Z_+(G)=k$, and $\pt_+(G)= \left\lceil \frac{n-k}{2}\right\rceil$. Thus, the bound
$\pt_+(G)\leq \left\lceil \frac{|V(G)|-\Z_+(G)}{2} \right\rceil$
is tight for each $\Z_+(G)$.
\end{corollary}
\begin{corollary}
For a fixed positive integer $k$,
\[\lim_{n\to\infty} \frac{\zeta(n,k)}{n}=\frac{1}{2}.\]
\end{corollary}
\begin{proof}
Starting with Theorem \ref{nover2} and letting $n\to\infty$ implies
\[\lim_{n\to\infty} \frac{\zeta(n,k)}{n}\leq \frac{1}{2}.\] For the lower bound with fixed $k$, Proposition \ref{nover2tight} implies that for any $n\ge k+3$,
$\pt_+(L_{k+1,n-k-1})=\left\lceil \frac{n-k}{2}\right\rceil.$
Then \[\frac{\zeta(n,k)}{n}\geq \frac{\lceil \frac{n-k}{2}\rceil}{n}\geq \frac{n-k}{2n},\]
and letting $n\to\infty$ implies the result.
\end{proof}
\section*{Acknowledgements}
The research of all the authors was partially supported by NSF grant 1916439. The research of Yaqi Zhang was also partially supported by Simons Foundation grant 355645 and NSF grant 2000037.
\bibliographystyle{plain}
\section{Introduction}
The presence of heteroskedasticity significantly impacts estimations and inferences in a time series analysis.
Becker and Hurn (2009) and Pavlidis, Paya, and Peel (2010), for example, demonstrate that
the presence of heteroskedasticity frequently leads to over-rejections of the null hypothesis
when testing the null for the linearity of a conditional mean model against the alternative hypothesis of nonlinear time series models.
Pavlidis, Paya, and Peel (2013) show that causality tests on the conditional mean demonstrate spurious causality relationships in the presence of multivariate heteroskedasticity.
These facts indicate that tests for heteroskedasticity in data-generating processes (DGP) play an important role in time series analyses.
The most representative model for heteroskedasticity is Engle's (1982) autoregressive conditional heteroskedasticity (ARCH) model.
ARCH is a simple and popular volatility model and continues to be widely used in the literature.
When testing for heteroskedasticity,
a regression model for the assumed conditional mean is first estimated.
Next, ARCH is examined to use statistics such as the Lagrange multiplier (LM).
If the conditional mean regression model is correctly specified,
the ARCH test performs well.
However, a misspecified conditional mean severely impedes the ARCH tests.
Lumsdaine and Ng (1999) examine the properties of ARCH tests under a misspecified conditional mean.
They show that the misspecification of the conditional mean over-rejects the null hypothesis for homoskedasticity.
Similarly, Balke and Kapetanios (2007) clarify the influence of the neglected nonlinearity of the conditional mean on ARCH tests.
Their analysis evidences the over-rejection of no ARCH effects when the nonlinearity of the conditional mean regression model is neglected.
To appropriately test for ARCH, it is necessary to avoid the misspecified model of the conditional mean.
This study compares the statistical properties of ARCH tests that do not depend on the conditional mean model.
The tests are applicable to various nonlinear conditional mean models and are robust to the misspecified conditional mean model.
We employ two nonparametric approaches to avoid the misspecification of the conditional mean model.
First is a regression using the Nadaraya-Watson kernel estimator, which is a representative nonparametric method.
Nadaraya (1964) and Watson (1964) propose the method using a kernel density function in a regression analysis that does not depend on the model.
McMillan (2001) and Exterkate, Groenen, Heij, and van Dijk (2016) show that the Nadaraya-Watson estimator is useful under various nonlinear models.
Second is the regression analysis using a polynomial approximation of a general unknown nonlinear model.
Stone (1977) and Katkovnik (1979) propose the local polynomial estimator on the basis of a polynomial approximation.
Balke and Kapetanios (2007) develop a method to approximate unknown models using a neural network.
P\'{e}guin-Feissolle, Strikholm, and Ter\"{a}svirta (2013) introduce a causality test that
is based on a Taylor approximation of a general nonlinear model and is applicable to various nonlinear models.
These approaches are relevant from the viewpoint of a polynomial approximation.
This study introduces ARCH tests using these nonparametric regression approaches to avoid the misspecification of the conditional mean
and investigates the statistical properties of the introduced tests in various linear and nonlinear models.
Erroneous ARCH tests based on misspecified conditional mean models, and the resulting lack of reliability in the derived results, increasingly impede
model construction and statistical evaluation.
Thus, it is important to clarify the behavior of the introduced tests, which do not depend on the model specification, across various models.
In this study, we examine rejection frequencies under the null and alternative hypotheses for the introduced ARCH tests using Monte Carlo simulations.
The simulation analyzes the influence of the lag length, the bandwidth selection for the Nadaraya-Watson estimator, and the approximation order for the polynomial approximation method on the results.
The conditional mean models investigated in this study are linear autoregressive, threshold autoregressive, smooth transition autoregressive, Markov switching, and bi-linear models.
These are popular nonlinear models used for empirical analysis and tend to cause spurious ARCH effects
because it is difficult to distinguish between nonlinear models with homoskedastic variance and linear models with an ARCH effect.
The Monte Carlo simulation results evidence that ARCH tests that are based on the polynomial approximation regression approach have better statistical properties
than those using the Nadaraya-Watson kernel regression approach when DGP are various nonlinear models.
The remainder of this paper is organized as follows:
Section 2 presents the influence of a misspecified conditional mean on the ARCH tests and proposes ARCH tests using nonparametric regression approaches for the conditional mean.
Section 3 presents the statistical properties of the tests under nonlinear models.
Section 4 concludes the paper.
\section{ARCH tests using nonparametric regression approaches for conditional mean}
We consider the following DGP with lag order $m$:
\begin{equation}
y_t=f(y_{t-1},\cdots, y_{t-m};\mbox{\boldmath $\beta$})+ u_{t}, \ \ \ t=1,\cdots,T
\end{equation}
where $f(\cdot;\cdot)$ is an unknown function and {\boldmath$\beta$} is a parameter vector.
$u_t$ is a disturbance term with mean zero and variance denoted by
\begin{equation}
u_t=\sigma_t \epsilon_t; \ \ \ \sigma_t^2=\gamma_0+\sum_{i=1}^p \gamma_i u_{t-i}^2,
\end{equation}
where $\epsilon_t$ are independently and identically distributed (iid) random variables with mean zero and variance equal to one.
Although the conditional variance could have model misspecification similar to the conditional mean,
standard heteroskedasticity tests can detect linear ARCH effects even if the true conditional variance is generalized ARCH (GARCH) with or without nonlinear parts.
On the other hand, spurious ARCH effects tend to be observed when the conditional mean has model misspecifications.
The misspecification of the conditional mean has clear impacts on the inference of variance, as shown by Lumsdaine and Ng (1999) and Balke and Kapetanios (2007).
Thus, we focus on investigating the influence that the model misspecification of the conditional mean has on ARCH effects.
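For concreteness, disturbances satisfying (2) can be simulated as follows (a minimal numpy sketch with illustrative parameter values; setting the start-up values to zero is our choice):

```python
import numpy as np

def simulate_arch(T, gamma, rng):
    """u_t = sigma_t * eps_t with sigma_t^2 = gamma[0] + sum_i gamma[i] u_{t-i}^2."""
    p = len(gamma) - 1
    u = np.zeros(T + p)                      # p zeros as start-up values
    for t in range(p, T + p):
        s2 = gamma[0] + sum(gamma[i] * u[t - i] ** 2 for i in range(1, p + 1))
        u[t] = np.sqrt(s2) * rng.standard_normal()
    return u[p:]

u = simulate_arch(2000, [1.0, 0.4], np.random.default_rng(0))
# for ARCH(1) with gamma_1 < 1 the unconditional variance is gamma_0/(1 - gamma_1)
```

With $\gamma_0=1$ and $\gamma_1=0.4$, the sample variance should be close to $1/0.6\approx 1.67$.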
The null hypothesis of homoskedasticity to test for the ARCH effect is denoted by
\begin{equation}
H_0: \gamma_1=\cdots=\gamma_p=0,
\end{equation}
and the alternative hypothesis is
\begin{equation}
H_1: \gamma_i\neq 0 \ \text{for at least one} \ i\in\{1,\cdots,p\}.
\end{equation}
Even if we assume a GARCH model for the heteroskedasticity,
the testing procedure is the same, as shown by Lee (1991) and Gel and Chen (2012).
Therefore, we focus only on the ARCH test.
Engle's (1982) standard ARCH test uses the auxiliary regression of squared residuals:
\begin{equation}
\hat{u}_t^2=\gamma_0+\gamma_1 \hat{u}_{t-1}^2+\cdots+\gamma_p \hat{u}_{t-p}^2+\eta_t,
\end{equation}
where $\eta_t$ is an error term.
The LM test statistics is given by
\begin{equation}
LM=\frac{T\hat{d}^{\prime}\hat{W}(\hat{W}^{\prime}\hat{W})^{-1}\hat{W}^{\prime}\hat{d}}{\hat{d}^{\prime}\hat{d}},
\end{equation}
where $\hat{d}^{\prime}=(\hat{d}_1,\cdots,\hat{d}_T)$, $\hat{d}_t=(\hat{u}_t^2/\hat{\sigma}_u-1)$, $\hat{\sigma}^2=(1/T)\sum_{t=1}^T\hat{u}_t^2$,
$\hat{W}^{\prime}=(\hat{w}_1,\cdots,\hat{w}_T)$, and $\hat{w}_t=(1,\hat{u}_{t-1}^2,\cdots,\hat{u}_{t-p}^2)$.
The LM test statistic (6) is equivalent to $TR^2$, where $R^2$ is the coefficient of determination of (5)$^1$.
Under the null hypothesis of no ARCH effects, the asymptotic distribution of (6) is $\chi^2(p)$.
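A minimal numpy implementation of the $TR^2$ form of the statistic might look as follows (the helper name and the use of the effective sample size $T-p$ in place of $T$ are our choices):

```python
import numpy as np

def arch_lm(u, p):
    """Engle's LM statistic T * R^2 from the auxiliary regression (5)."""
    u2 = np.asarray(u) ** 2
    y = u2[p:]
    X = np.column_stack([np.ones(len(y))] +
                        [u2[p - i:-i] for i in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return len(y) * (1.0 - e @ e / tss)

rng = np.random.default_rng(0)
print(arch_lm(rng.standard_normal(1000), p=2))  # approx chi^2(2) under H_0
```

For iid Gaussian input the statistic behaves like a $\chi^2(2)$ draw, while series with genuine ARCH effects produce much larger values.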
When true DGP are denoted by (1),
suppose that we estimate the following misspecified model:
\begin{equation}
y_t=g(y_{t-1},\cdots, y_{t-\tilde{m}};\mbox{\boldmath $\alpha$})+ u_{t},
\end{equation}
where $g(\cdot;\cdot)$ is a misspecified function, $\tilde{m}$ is the lag length, and {\boldmath$\alpha$} is a parameter vector for the misspecified model.
Accordingly, the residual is denoted by
\begin{equation}
\hat{u}_t=u_t+f(y_{t-1},\cdots, y_{t-m};\mbox{\boldmath $\beta$})-\hat{g}(y_{t-1},\cdots, y_{t-m};\mbox{\boldmath $\alpha$})=u_t+e_t,
\end{equation}
where $e_t=f(y_{t-1},\cdots, y_{t-m};\mbox{\boldmath $\beta$})-\hat{g}(y_{t-1},\cdots, y_{t-m};\mbox{\boldmath $\alpha$})$.
The squared residual for $\hat{u}_t$ is
\begin{equation}
\hat{u}_t^2=u_t^2+2u_t e_t+e^2_t.
\end{equation}
Equation (9) shows that the ARCH test performs correctly when $e_t \xrightarrow{p} 0$,
whereas it is subject to model misspecification and leads to unreliable results when $e_t \xrightarrow{p} 0$ does not hold.
For example, when true DGP (1) are a threshold autoregressive (TAR) model and misspecified estimation model (7) is a linear AR model,
$e_t$ includes nonlinearity.
As highlighted by Lumsdaine and Ng (1999) and Blake and Kapetanios (2007),
such a misspecification results in a spurious ARCH effect.
Therefore, a regression approach that does not depend on a specific model is necessary to avoid model misspecification and spurious ARCH effects.
The first approach that is robust to model misspecification is a nonparametric regression that is based on the Nadaraya-Watson kernel estimator.
We consider the following conditional mean regression model:
\begin{equation}
y_t=m(y_{t-1},\cdots, y_{t-s})+ u_{t}, \ \ \ t=1,\cdots,T,
\end{equation}
where $m(\cdot)$ is the unknown regression function without any parametric form.
The regression function for $y_t$ on $Y_t=(y_{t-1},\cdots, y_{t-s})^{\prime}$ is
\begin{equation}
z(y_{t-1},\cdots,y_{t-s})=E(y_t|Y_t=y).
\end{equation}
The most representative method to estimate the function is the Nadaraya-Watson estimator.
The estimator is denoted by
\begin{equation}
\hat{z}(y_{t-1},\cdots, y_{t-s})=\frac{\sum_{t=1}^T K(\frac{Y_t-y}{h})y_t }{\sum_{t=1}^T K(\frac{Y_t-y}{h})},
\end{equation}
where $K(\frac{Y_t-y}{h})=K(\frac{y_{t-1}-y_1}{h_1})K(\frac{y_{t-2}-y_2}{h_2}) \cdots K(\frac{y_{t-s}-y_s}{h_{s}})$ is a product kernel function
and $h$ denotes the bandwidth that controls the smoothness of the kernel estimate.
Each kernel function satisfies the following:
\begin{equation}
\int K(y)dy=1, \ \ \ \int yK(y)dy=0, \ \ \ \int y^2 K(y)dy>0.
\end{equation}
This study uses the Gaussian kernel given by$^2$:
\begin{equation}
K(y)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right).
\end{equation}
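A minimal sketch of estimator (12) with the product Gaussian kernel may look as follows (function names are ours; a common bandwidth $h$ for all lags is assumed for simplicity):

```python
import numpy as np

def gaussian_kernel(z):
    """Gaussian kernel (14)."""
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

def nw_fit(y, s, h):
    """Nadaraya-Watson fit of y_t on (y_{t-1},...,y_{t-s}) with a
    product Gaussian kernel and common bandwidth h, as in (12).
    Returns fitted values and residuals for t = s,...,T-1."""
    y = np.asarray(y, dtype=float)
    T = y.size
    # lag matrix: row i holds (y_{t-1},...,y_{t-s}) for t = s + i
    X = np.column_stack([y[s - j:T - j] for j in range(1, s + 1)])
    yt = y[s:]
    fitted = np.empty(yt.size)
    for i in range(yt.size):
        w = np.prod(gaussian_kernel((X - X[i]) / h), axis=1)  # product kernel
        fitted[i] = w @ yt / w.sum()
    return fitted, yt - fitted

rng = np.random.default_rng(0)
e = rng.standard_normal(400)
y = np.empty(400)
y[0] = e[0]
for t in range(1, 400):                 # simulated AR(1) example data
    y[t] = 0.7 * y[t - 1] + e[t]
fitted, resid = nw_fit(y, s=1, h=0.5)
```

The residuals returned here can then be fed into ARCH test (6).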
We use two bandwidth selections for $h$ that are derived by minimizing the integrated mean squared error (IMSE).
The first is Silverman's (1986) plug-in method.
The bandwidth obtained using the plug-in method is based on the following equation:
\begin{equation}
h=c_0 T^{-1/(s+4)},
\end{equation}
where $c_0$ is a constant that depends on the kernel function and $s$ is the number of regressors.
When we use the Gaussian kernel, the optimal bandwidth selection is denoted by
\begin{equation}
h_{opt} \approx 1.06\sigma T^{-1/(s+4)},
\end{equation}
where $\sigma$ is the standard deviation for $y_t$.
The modified $h_{opt}$ that is robust to outliers is written as
\begin{equation}
h=1.06 \min(\hat{\sigma}, \hat{Q}/1.34)T^{-1/(s+4)},
\end{equation}
where $\hat{Q}$ is the estimate for the interquartile range of $y_t$$^{3}$.
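Bandwidth rule (17) is straightforward to implement; the sketch below (helper name is ours) uses the robust scale $\min(\hat{\sigma}, \hat{Q}/1.34)$:

```python
import numpy as np

def silverman_bandwidth(y, s):
    """Rule-of-thumb plug-in bandwidth (17): 1.06 times the robust
    scale min(sigma_hat, IQR/1.34) times T^{-1/(s+4)}, where the
    interquartile range guards against outliers."""
    y = np.asarray(y, dtype=float)
    sigma = y.std(ddof=1)
    q = np.subtract(*np.percentile(y, [75, 25]))  # interquartile range
    return 1.06 * min(sigma, q / 1.34) * y.size ** (-1.0 / (s + 4))

rng = np.random.default_rng(0)
h = silverman_bandwidth(rng.standard_normal(500), s=1)
```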
The second is the cross-validation procedure developed by Rudemo (1982).
When using the Gaussian kernel, we consider the following mean squared error, called the cross-validation criterion:
\begin{equation}
CV(h)=\frac{1}{T}\sum_{i=1}^T (y_i-\hat{z}(Y_{-i}))^2,
\end{equation}
where $\hat{z}(Y_{-i})$ is a leave-one-out estimator that excludes the $i$th observation.
The optimal bandwidth $h$ for the cross-validation procedure is determined by minimizing $CV(h)$.
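A minimal implementation of criterion (18) with a leave-one-out Nadaraya-Watson fit might look as follows; the grid search is a simple stand-in for a full numerical minimization:

```python
import numpy as np

def _gauss(z):
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

def cv_criterion(y, s, h):
    """Leave-one-out cross-validation criterion (18): the average
    squared prediction error when observation i is excluded from
    its own Nadaraya-Watson estimate."""
    y = np.asarray(y, dtype=float)
    T = y.size
    X = np.column_stack([y[s - j:T - j] for j in range(1, s + 1)])
    yt = y[s:]
    err = 0.0
    for i in range(yt.size):
        w = np.prod(_gauss((X - X[i]) / h), axis=1)
        w[i] = 0.0                     # leave the i-th observation out
        err += (yt[i] - w @ yt / w.sum()) ** 2
    return err / yt.size

def cv_bandwidth(y, s, grid):
    """Pick the grid point that minimizes CV(h)."""
    return min(grid, key=lambda h: cv_criterion(y, s, h))

rng = np.random.default_rng(0)
e = rng.standard_normal(300)
y = np.empty(300)
y[0] = e[0]
for t in range(1, 300):                 # simulated AR(1) example data
    y[t] = 0.7 * y[t - 1] + e[t]
h_cv = cv_bandwidth(y, s=1, grid=(0.2, 0.5, 1.0, 2.0))
```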
Stone (1984) shows that the cross-validation bandwidth $h$ asymptotically selects the optimal bandwidth
from an IMSE viewpoint and converges in probability to the plug-in bandwidth.
While the plug-in bandwidth depends on the assumed kernel density function,
cross-validation does not require such an assumption and
can obtain a consistent estimator for the bandwidth that minimizes the IMSE.
The residuals obtained using Nadaraya-Watson estimator (12) with bandwidth selection (17) or (18) can therefore be expected to have properties similar to those of the true errors.
Accordingly, the above-mentioned nonparametric regression approach is robust to model misspecification of the conditional mean,
and thus the ARCH test performs correctly$^4$.
The next approach adopted to avoid misspecification is a polynomial approximation of a general unknown nonlinear model.
When we apply a $k$th-order Taylor approximation to true model (1), the regression model is denoted by
\begin{equation}
y_t=\beta_0+\sum_{j=1}^q \beta_j y_{t-j}+\sum_{j_1=1}^q \sum_{j_2=j_1}^q \beta_{j_1 j_2} y_{t-j_1}y_{t-j_2}+\cdots+\sum_{j_1=1}^q \sum_{j_2=j_1}^q \cdots \sum_{j_k=j_{k-1}}^q \beta_{j_1\cdots j_k} y_{t-j_1}\cdots y_{t-j_k}+\epsilon_t,
\end{equation}
where $q$ is the lag length and $\epsilon_t$ is an error term that includes the remainder term of the Taylor series approximation.
For notational simplicity, we assume $q \le k$.
If the true model is a linear AR model, all $\beta_{j_1 j_2}$ and $\beta_{j_1 \cdots j_k}$ are zero.
In contrast, if the true model is nonlinear, at least one $\beta_{j_1 j_2}$ or $\beta_{j_1 \cdots j_k}$ is nonzero.
We investigate this using a standard Wald test.
For example, (19) with $q=2$ and $k=2$ can be written as
\begin{equation}
y_t=\beta_0+\sum_{j=1}^2 \beta_j y_{t-j}+ \sum_{j_1=1}^2 \sum_{j_2=j_1}^2 \beta_{j_1j_2}y_{t-j_1}y_{t-j_2}+\epsilon_t.
\end{equation}
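The regressor matrix of (19) can be built mechanically from all lag products of order up to $k$; the sketch below (helper name is ours) enumerates the index combinations with repetition $j_1 \le j_2 \le \cdots$:

```python
import numpy as np
from itertools import combinations_with_replacement

def taylor_regressors(y, q, k):
    """Regressor matrix for the k-th order polynomial approximation
    (19): a constant plus all products of up to k lags drawn (with
    repetition) from y_{t-1},...,y_{t-q}."""
    y = np.asarray(y, dtype=float)
    T = y.size
    lags = np.column_stack([y[q - j:T - j] for j in range(1, q + 1)])
    cols = [np.ones(T - q)]
    for order in range(1, k + 1):
        for idx in combinations_with_replacement(range(q), order):
            cols.append(np.prod(lags[:, list(idx)], axis=1))
    return np.column_stack(cols), y[q:]

X, yt = taylor_regressors(np.random.default_rng(1).standard_normal(200),
                          q=2, k=2)
```

For $q=2$ and $k=2$ this yields exactly the six regressors of (20): a constant, two linear lags, and the three products $y_{t-1}^2$, $y_{t-1}y_{t-2}$, $y_{t-2}^2$.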
The difference between the true model and the polynomial approximation regression model is reduced because
the polynomial regression can approximate various nonlinear models, including the TAR and Markov switching models.
When testing for ARCH effects under an unknown (true) model,
using the residuals obtained from polynomial approximation regression (19) is advantageous
because they show statistical properties similar to those of the true model.
Therefore, the ARCH test using the residuals from the polynomial approximation regression does not appear to be influenced by model misspecification.
\section{Statistical properties of ARCH tests using nonparametric regression models}
This section examines the statistical properties of the ARCH tests using nonparametric regression models for the conditional mean presented in Section 2.
We conduct Monte Carlo simulations to compare the rejection frequencies of the test statistics under various conditional mean models with and without ARCH effects.
The simulations are based on 10,000 replications; a significance level of 5\%; and sample sizes with $T=100$, $250$, and $500$.
To avoid the effect of initial conditions, data with $T+100$ are generated.
We discard the initial 100 samples and use the data with sample size $T$.
We compare ARCH tests (6) using the following regression models for the conditional mean:
the AR model denoted $AR(p)$, polynomial approximation model (19) with second- and third-order Taylor approximation
denoted as $T2(p)$ and $T3(p)$,
and nonparametric regression model (12) with plug-in method (17) and cross validation method (18) denoted as $NP_{pl}(p)$ and $NP_{cv}(p)$.
We set lag length $p$ to $p=1$ or $p=2^5$.
The AR model is used as a benchmark for comparison.
First, we consider the following AR processes to examine the influence of lag length on the tests' performance.
\begin{gather}
y_t=\beta_0+ \beta_1 y_{t-1} +\beta_2 y_{t-2} + u_{t}, \\
u_t =\sigma_t \epsilon_t, \\
\sigma_t^2=\gamma_0+\gamma_1 u_{t-1}^2,
\end{gather}
where $\epsilon_{t} \sim \text{i.i.d.}\,N(0,1)$.
$\beta_0$ is set to $\beta_0=0$.
Table 1 presents the rejection frequencies for the ARCH tests obtained from each regression model for the conditional mean.
We use the following DGP: \\
DGP1-1: $y_t= 0.2 y_{t-1} + u_{t}$, \\
DGP1-2: $y_t= 0.7 y_{t-1} + u_{t}$, \\
DGP1-3: $y_t= 0.7 y_{t-1} -0.2 y_{t-2}+ u_{t}$, \\
DGP1-4: $y_t= 0.7 y_{t-1} -0.5 y_{t-2}+ u_{t}$. \\
These DGP have homoskedastic errors with $\gamma_0=1$ and $\gamma_1=0$ for (23).
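The simulation design (21)-(23) can be sketched as follows (function name is ours), including the burn-in of 100 observations described above:

```python
import numpy as np

def simulate_ar_arch(T, beta, gamma0, gamma1, burn=100, seed=0):
    """Simulate (21)-(23): an AR conditional mean with ARCH(1) errors.
    Setting gamma1 = 0 gives the homoskedastic DGP1-1 to DGP1-4."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + burn)
    p = len(beta)
    y = np.zeros(T + burn)
    u_prev = 0.0
    for t in range(p, T + burn):
        sig2 = gamma0 + gamma1 * u_prev ** 2   # conditional variance (23)
        u = np.sqrt(sig2) * eps[t]             # u_t = sigma_t * eps_t (22)
        y[t] = sum(b * y[t - 1 - j] for j, b in enumerate(beta)) + u
        u_prev = u
    return y[burn:]            # discard the first `burn` draws, as in the text

# DGP1-3 with homoskedastic errors
y_dgp13 = simulate_ar_arch(500, beta=[0.7, -0.2], gamma0=1.0, gamma1=0.0)
```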
The rejection frequencies presented in Table 1 indicate the empirical size of the ARCH tests on the basis of each regression model.
For DGP1-1 and DGP1-2, which have lag order one, most of the tests have a small under-rejection but reasonable size performance,
except for $NP_{pl}(2)$ and $NP_{cv}(2)$.
$NP_{pl}(2)$ and $NP_{cv}(2)$ report over-rejections for DGP1-1 and DGP1-2.
The rejection frequencies of $NP_{pl}(2)$ for DGP1-1 with $T=500$ and of $NP_{cv}(2)$ for DGP1-2 with $T=500$ are 0.143 and 0.101, respectively.
An additional lag for the nonparametric regression of the conditional mean using the Nadaraya-Watson estimator leads to size distortions in the ARCH tests.
In contrast, $AR(2)$, $T2(2)$, and $T3(2)$ do not report over-rejections for DGP1-1 and DGP1-2.
The results show that an additional lag in the AR and polynomial approximation regressions does not impact the size of the ARCH tests.
However, a lower lag length clearly influences the empirical size of all the tests.
We see that the ARCH tests based on $AR(1)$, $T2(1)$, $T3(1)$, $NP_{pl}(1)$, and $NP_{cv}(1)$ over-reject the null hypothesis of homoskedastic variance
under DGP1-3 or DGP1-4, which have a lag order of two.
For example, the rejection frequencies of $AR(1)$, $T2(1)$, $T3(1)$, $NP_{pl}(1)$, and $NP_{cv}(1)$ for DGP1-4 with $T=250$ are
0.127, 0.116, 0.097, 0.115, and 0.113, respectively.
The size distortions in DGP1-4 are greater than those in DGP1-3.
The influence of the lower lag length on the empirical size depends on the persistence parameter of DGP.
Compared with the size distortions for the model with a lower lag length,
those for the model with an additional lag length are smaller.
Accordingly, we present the statistical properties for the models with two lags.
We examine the empirical size of the ARCH tests under the following conditional mean generated by the TAR models.\\
DGP2-1: $y_t=(0.7 y_{t-1}- 0.2 y_{t-2})I(y_{t-1}\ge0)+(0.1 y_{t-1}- 0.2 y_{t-2})I(y_{t-1} <0) + u_{t}$, \\
DGP2-2: $y_t=(0.7 y_{t-1}- 0.2 y_{t-2})I(y_{t-1}\ge0)+(-0.5 y_{t-1}- 0.2 y_{t-2})I(y_{t-1} <0) + u_{t}$ , \\
DGP2-3: $y_t=(0.7 y_{t-1}+ 0.2 y_{t-2})I(y_{t-1}\ge0)+(0.7 y_{t-1}- 0.7 y_{t-2})I(y_{t-1} <0) + u_{t}$, \\
DGP2-4: $y_t=(0.7 y_{t-1}- 0.2 y_{t-2})I(\Delta y_{t-1}\ge0)+(0.1 y_{t-1}- 0.2 y_{t-2})I(\Delta y_{t-1} <0) + u_{t}$, \\
DGP2-5: $y_t=(0.7 y_{t-1}- 0.2 y_{t-2})I(\Delta y_{t-1}\ge0)+(-0.5 y_{t-1}- 0.2 y_{t-2})I(\Delta y_{t-1} <0) + u_{t}$, \\
DGP2-6: $y_t=(0.7 y_{t-1}+ 0.2 y_{t-2})I(\Delta y_{t-1}\ge0)+(0.7 y_{t-1}- 0.7 y_{t-2})I(\Delta y_{t-1} <0) + u_{t}$, \\
where $I(\cdot)$ is an indicator function that takes the value of 1 if its argument is true and 0 otherwise.
$u_t$ denotes a homoskedastic error similar to that from DGP1-1 to 1-4.
While DGP2-1, 2-2, and 2-3 are standard TAR models whose indicator functions depend on $y_{t-1}$,
DGP2-4, 2-5, and 2-6 are momentum threshold autoregressive (MTAR) models wherein the threshold is the difference $\Delta y_{t-1}$.
These TAR models allow for asymmetric adjustments.
In addition, the MTAR model can capture the spiky properties of the process.
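A TAR or MTAR draw in the style of DGP2-1 to 2-6 might be generated as follows (function name is ours; the regime coefficients are passed in, and `momentum=True` gives the MTAR case):

```python
import numpy as np

def simulate_tar(T, regime_pos, regime_neg, momentum=False, burn=100, seed=0):
    """Simulate a two-regime TAR model in the style of DGP2-1 to 2-6;
    with momentum=True the regime depends on the sign of the first
    difference (an MTAR model) instead of the sign of y_{t-1}."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + burn)
    y = np.zeros(T + burn)
    for t in range(2, T + burn):
        trigger = (y[t - 1] - y[t - 2]) if momentum else y[t - 1]
        b1, b2 = regime_pos if trigger >= 0 else regime_neg
        y[t] = b1 * y[t - 1] + b2 * y[t - 2] + eps[t]
    return y[burn:]

# DGP2-1 with homoskedastic errors
y_tar = simulate_tar(500, regime_pos=(0.7, -0.2), regime_neg=(0.1, -0.2))
```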
Figures 1 and 2 illustrate the sample paths for DGP2-1 with homoskedastic errors and with the ARCH effect $\gamma_1=0.3$ in (23), respectively.
Figure 2 clearly shows the volatile behavior generated by the ARCH effect.
However, Figure 3 illustrates that the sample path for DGP2-3 demonstrates a similar volatile movement even if the error is homoskedastic.
As shown in figures 2 and 3, it is generally difficult to distinguish between the nonlinear conditional mean model with the homoskedastic error and the linear AR model with ARCH effect.
Such a similarity between the TAR model with homoskedastic errors and the linear AR model with ARCH effects may produce spurious statistical properties.
Table 2 tabulates the simulation results.
$AR(2)$ reports over-rejections for the null hypothesis of no ARCH effects.
For DGP2-2 and DGP2-5, which have strong asymmetry, the size distortions of $AR(2)$ are significantly large.
These results indicate that the use of the AR model for the conditional mean leads to spurious ARCH effects
when the true DGP are based on the TAR or MTAR model.
In addition, the over-rejections increase with the sample size.
Unlike the performance of $AR(2)$,
the polynomial approximation regression models $T2(2)$ and $T3(2)$ and nonparametric regression models $NP_{pl}(2)$ and $NP_{cv}(2)$ perform better.
For example, the rejection frequencies of $AR(2)$, $T2(2)$, $T3(2)$, $NP_{pl}(2)$, and $NP_{cv}(2)$ for DGP2-2 with $T=250$ are
0.373, 0.040, 0.033, 0.042, and 0.051, respectively.
$T2(2)$, $NP_{pl}(2)$, and $NP_{cv}(2)$ report size distortions in certain cases:
their rejection frequencies for DGP2-3 with $T=500$ are 0.096, 0.139, and 0.104, respectively.
For $T3(2)$, on the other hand, the rejection frequency is 0.058,
a more reasonable size than those of $T2(2)$, $NP_{pl}(2)$, and $NP_{cv}(2)$.
Thus, the polynomial approximation regression model $T3(2)$ is a more appropriate approach to test for ARCH than other approaches under the TAR or MTAR model.
Table 3 presents the rejection frequencies for each test under smooth transition autoregressive (STAR) models generated by the following:\\
DGP3-1: $y_t=0.7 y_{t-1}- 0.2 y_{t-2}+(-0.5 y_{t-1}- 0.2 y_{t-2})(1-\exp (-0.1y_{t-1}^2)) + u_{t}$, \\
DGP3-2: $y_t=0.7 y_{t-1}- 0.2 y_{t-2}+(-y_{t-1}- 0.2 y_{t-2})(1-\exp (-0.1y_{t-1}^2)) + u_{t}$, \\
DGP3-3: $y_t=0.7 y_{t-1}- 0.2 y_{t-2}+(-y_{t-1}- 0.2 y_{t-2})(1-\exp (-y_{t-1}^2)) + u_{t}$, \\
DGP3-4: $y_t=0.7 y_{t-1}- 0.2 y_{t-2}+(-0.5 y_{t-1}- 0.2 y_{t-2})(1+\exp (-0.1y_{t-1}))^{-1} + u_{t}$, \\
DGP3-5: $y_t=0.7 y_{t-1}- 0.2 y_{t-2}+(- y_{t-1}- 0.2 y_{t-2})(1+\exp (-0.1y_{t-1}))^{-1} + u_{t}$, \\
DGP3-6: $y_t=0.7 y_{t-1}- 0.2 y_{t-2}+(- y_{t-1}- 0.2 y_{t-2})(1+\exp (-y_{t-1}))^{-1} + u_{t}$, \\
where $u_t$ denotes homoskedastic errors similar to those in tables 1 and 2.
STAR models have the time-varying properties of the conditional mean.
DGP3-1, 3-2, and 3-3 impose symmetry constraints on the time-varying properties,
whereas DGP3-4, 3-5, and 3-6, which are logistic STAR models, allow asymmetry.
DGP3-2 and 3-5 produce a smoother and more marginal change than DGP3-3 and 3-6.
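For illustration, an exponential STAR draw in the style of DGP3-1 can be generated as follows (function name is ours; the transition parameter is an argument):

```python
import numpy as np

def simulate_estar(T, gamma=0.1, burn=100, seed=0):
    """Simulate an exponential STAR process in the style of DGP3-1:
    the second set of AR coefficients is weighted by the transition
    function G = 1 - exp(-gamma * y_{t-1}^2)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + burn)
    y = np.zeros(T + burn)
    for t in range(2, T + burn):
        G = 1.0 - np.exp(-gamma * y[t - 1] ** 2)   # transition function
        y[t] = (0.7 * y[t - 1] - 0.2 * y[t - 2]
                + (-0.5 * y[t - 1] - 0.2 * y[t - 2]) * G + eps[t])
    return y[burn:]

y_star = simulate_estar(500)
```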
We observe that $AR(2)$, $T2(2)$, and $NP_{pl}(2)$ partially over-reject the null hypothesis of no ARCH effects.
The rejection frequencies of $AR(2)$ are higher than those of the other regression models for DGP3-2 and 3-6.
$T2(2)$ shows size distortions for DGP3-2.
$NP_{pl}(2)$ reports a slight over-rejection with $T=500.$
In contrast, the shape of the transition function does not have a clear impact on the empirical size of $T3(2)$ and $NP_{cv}(2)$.
$T3(2)$ and $NP_{cv}(2)$ can capture the properties of STAR models and allow the ARCH test to perform well.
In addition, we present the results of each test for the other nonlinear processes: \\
DGP4-1: $y_t=(0.7 y_{t-1}- 0.2 y_{t-2})s_t+(0.3 y_{t-1}- 0.2 y_{t-2})(1-s_t) + u_{t}$, \ \ $p_{00}=p_{11}=0.7$, \\
DGP4-2: $y_t=(0.7 y_{t-1}- 0.2 y_{t-2})s_t+(0.3 y_{t-1}- 0.2 y_{t-2})(1-s_t) + u_{t}$ , \ \ $p_{00}=p_{11}=0.98$, \\
DGP4-3: $y_t=(0.7 y_{t-1}+ 0.2 y_{t-2})s_t+(0.3 y_{t-1}- 0.2 y_{t-2})(1-s_t) + u_{t}$, \ \ $p_{00}=p_{11}=0.98$, \\
DGP4-4: $y_t=0.1y_{t-1}u_{t-1} +0.1 y_{t-2}u_{t-2} + u_{t}$, \\
DGP4-5: $y_t=0.3y_{t-1}u_{t-1} +0.1 y_{t-2}u_{t-2} + u_{t}$, \\
DGP4-6: $y_t=0.1y_{t-1}u_{t-1} -0.1 y_{t-2}u_{t-2} + u_{t}$, \\
where $u_{t} \sim \text{i.i.d.}N(0,1)$ and $s_t$ is a random variable that takes the value of 0 or 1.
DGP4-1, 4-2, and 4-3 are Markov switching processes and $s_t$ determines the behavior.
Whether $s_t$ takes the value of 0 or 1 depends on the transition probabilities $p_{11}$ and $p_{00}$.
$p_{11}=P(s_{t+1}=1|s_t=1)$ denotes the probability of remaining in state 1 from $s_t=1$ to $s_{t+1}=1$.
Similarly, the other transition probabilities are denoted by $p_{00}=P(s_{t+1}=0|s_t=0)$, $p_{10}=1-p_{00}=P(s_{t+1}=1|s_t=0)$, and $p_{01}=1-p_{11}=P(s_{t+1}=0|s_t=1)$.
They are set to $p_{11}=p_{00}=0.7$ for DGP4-1 and $p_{11}=p_{00}=0.98$ for DGP4-2 and 4-3.
While DGP4-1 has frequent switches in the AR parameters, DGP4-2 and 4-3 show persistent switches.
DGP4-4, 4-5, and 4-6 are bilinear models that are used to model rare, volatile, or outburst processes.
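A Markov switching draw in the style of DGP4-1 and 4-2 can be sketched as follows (function name is ours; the state coefficients are hard-coded to the DGP values above):

```python
import numpy as np

def simulate_ms_ar(T, p00=0.98, p11=0.98, burn=100, seed=0):
    """Simulate a two-state Markov switching AR(2) in the style of
    DGP4-1 and 4-2: state 1 uses (0.7, -0.2), state 0 uses (0.3, -0.2),
    and the chain stays in its current state with probability p11
    (or p00)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + burn)
    y = np.zeros(T + burn)
    s = 1
    for t in range(2, T + burn):
        stay = p11 if s == 1 else p00
        if rng.random() > stay:          # leave the current state
            s = 1 - s
        b1 = 0.7 if s == 1 else 0.3
        y[t] = b1 * y[t - 1] - 0.2 * y[t - 2] + eps[t]
    return y[burn:]

y_ms = simulate_ms_ar(500)
```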
$AR(2)$, which neglects nonlinearity, causes spurious ARCH effects, similar to the results in tables 2 and 3.
The results for the nonparametric regression models using the Nadaraya-Watson estimator depend on the bandwidth selection.
$NP_{pl}(2)$ under-rejects the null hypothesis for DGP4-1, DGP4-2, and DGP4-5 and over-rejects that for DGP4-3, 4-4, and 4-6.
$NP_{cv}(2)$ performs well for DGP4-2, DGP4-4, and DGP4-6 and over-rejects the null hypothesis for DGP4-1, 4-3, and 4-5.
$T2(2)$ has relatively reasonable empirical sizes for $T=100$ and $250$, but reports size distortions for DGP4-1, 4-2, 4-3, 4-5, and 4-6 with $T=500$.
Here as well, we find that $T3(2)$ generally performs better.
The simulation results in tables 1 to 4 evidence that the model misspecification of the conditional mean causes size distortions for the null hypothesis of no ARCH effects.
The ARCH tests using the AR regression model are sensitive to the presence of the nonlinear conditional mean and show high over-rejections.
This can be attributed to neglected nonlinearity and to the difficulty of distinguishing between the nonlinearity of the conditional mean and the ARCH effects.
While the nonparametric regression models using the Nadaraya-Watson estimator partially perform well,
the rejection frequencies strongly depend on DGP and the bandwidth selection.
By contrast, the size properties of $T3(2)$ outperform those of other models and are close to the nominal size at 5\%.
Therefore, $T3(2)$ can approximate the (unknown) linear and nonlinear conditional mean models well and produce reliable ARCH tests.
Tables 5 and 6 report the nominal power and size-corrected power properties for the ARCH tests.
We use DGP1-3, DGP2-1, 2-4, 3-1, 3-4, 4-1, and 4-4 for power comparison.
Each DGP has an ARCH effect denoted by
\begin{gather}
u_t =\sigma_t \epsilon_t, \\
\sigma_t^2=\gamma_0+\gamma_1 u_{t-1}^2,
\end{gather}
where $\gamma_0$ and $\gamma_1$ are set to $\gamma_0=1$ and $\gamma_1=(0.1,0.3)$.
The powers of $AR(2)$ are clearly higher than those of other models in Table 5.
We have a relatively reasonable evaluation of the power for DGP1-3
because the size properties of $AR(2)$ and other tests are close to the nominal level 0.05 (Table 1).
However, we cannot correctly evaluate the high nominal powers of $AR(2)$ for other DGP.
The higher powers of $AR(2)$ are influenced by the size distortions presented in tables 2-4.
The power properties of the nonparametric models are more appropriately interpreted
because $T2(2)$ and $T3(2)$ do not over-reject the null hypothesis for the DGP in Table 5 and the size distortions of $NP_{pl}(2)$ and $NP_{cv}(2)$ are smaller than those of $AR(2)$.
In comparison, we observe that the polynomial approximation models $T2(2)$ and $T3(2)$ perform better than $NP_{pl}(2)$ and $NP_{cv}(2)$.
Note that the powers of $NP_{pl}$(2) are quite small when the ARCH effect is $\gamma_1=0.1$.
For $\gamma_1=0.3$, the nonparametric regression models report sufficient power to identify the ARCH effects.
To compare the power properties among the models without the influence of size distortions,
Table 6 reports the size-corrected power.
The powers of $AR(2)$ in Table 6 are lower than those in Table 5 because the size distortions are corrected.
$AR(2)$ still performs well even if the size is corrected.
The ability to detect ARCH effects in the nonlinear models for $T2(2)$ is high, similar to that of $AR(2)$.
While the powers of $T3(2)$ are slightly smaller than those of $T2(2)$ because $T3(2)$ has additional regression parameters for the conditional mean,
it has sufficient power to find the ARCH effect.
The rejection frequencies of $NP_{pl}(2)$ and $NP_{cv}(2)$ for $\gamma_1=0.1$ are inferior to those of other models in Table 6.
While they perform relatively well for $\gamma_1=0.3$ with $T=100$,
other models have better power properties, particularly for $T=250$ and $500$.
The comparison of the ARCH tests using each regression model for the conditional mean indicates that
the presence of a nonlinear conditional mean influences the size and power properties of the ARCH tests.
The AR regression models show high over-rejection of the null hypothesis of no ARCH effects for the nonlinear conditional mean models.
The ARCH tests based on AR models for the nonlinear conditional mean are not effective from the viewpoints of size and power.
This is because size-corrected tests are needed and the true model is generally unknown a priori.
The nonparametric regression models using the Nadaraya-Watson estimator tend to have slight size distortions and low power.
The polynomial approximation model $T2(2)$ shows slight over-rejection depending on the nonlinear conditional mean and sample size,
although it has better power properties for the ARCH effect with the nonlinear conditional mean.
$T3(2)$ has reasonable size and power properties and yields reliable results for the ARCH tests irrespective of the conditional mean models.
\section{Summary and conclusion}
This study compares the statistical properties of the ARCH tests that are robust to misspecified conditional mean models.
ARCH tests are important for statistical modeling because the presence of ARCH affects the statistical inference of the conditional mean regression model and the analysis of volatility.
However, it is difficult to determine the correctly specified conditional mean model, and a misspecified conditional mean model may be employed.
This may lead to unreliable results.
Therefore, it is necessary to compare ARCH tests that are robust to various unknown conditional mean models and to clarify their statistical properties.
The approaches employed in this study are based on two nonparametric regressions:
an ARCH test using the Nadaraya-Watson kernel regression and an ARCH test with the polynomial approximation.
The two approaches can adapt to various nonlinear models.
Since a true model is generally unknown a priori,
they are robust to misspecified models.
The Monte Carlo simulations evidence that the ARCH tests based on the polynomial regression approach have better statistical properties
than those using the Nadaraya-Watson kernel regression approach for various nonlinear conditional mean models.
In particular, the test using the regression approach based on the third-order Taylor approximation has a reasonable and acceptable size and sufficient power for any time series models.
The results further show that the ARCH test using the polynomial approximation approach is useful
for testing whether the DGP have an ARCH effect when the conditional mean model is unknown a priori.
Robust univariate and multivariate ARCH tests that do not depend on the model specification of the conditional variance in addition to the conditional mean are left for further study.
\newpage
\fontsize{10pt}{20pt}\selectfont
\begin{flushleft}
{\Large Footnotes}\\
\end{flushleft}
1. Catani and Ahlgren (2017) propose an LM test for ARCH using high-dimensional vector autoregressive models.
In addition, Gel and Chen (2012) introduce bootstrap ARCH tests.
\\
\\
2. Other kernel functions include uniform, Epanechnikov, biweight, and triweight kernel functions.
In general, while the type of kernel functions does not have a large impact on the estimation results,
the selection of bandwidth significantly influences the estimation results.
\\
\\
3. Sheather and Jones (1991) propose another bandwidth selection that is based on the plug-in method.
\\
\\
4. Shimizu (2014) introduces the estimation of nonparametric AR(1)-ARCH(1) using wild bootstrap.
Shin and Hwang (2015) apply stationary bootstrap to estimate nonparametric AR(1)-ARCH(1).
\\
\\
5. Zambom and Kim (2017) propose lag selection in nonparametric conditional heteroskedastic models.
Compared to conventional methods,
this method more appropriately selects lag length for various nonlinear models.
We fix lag length in this paper to investigate the statistical performance of the nonparametric regression models.
\\
\\
\newpage
\fontsize{11pt}{20pt}\selectfont
In the last few years it has been clearly demonstrated that not only charged
ions but also neutral atoms can be conveniently trapped and cooled by means
of electro-magnetic fields. Although the physics of the various ingenious
scenarios developed to accomplish this is already interesting in itself
\cite{chu}, the opportunities offered by an atomic gas sample at very low
temperatures are exciting too. Examples in this respect are the performance
of high-precision spectroscopy, the search for a violation of CP invariance
by measuring the electric dipole moment of atomic cesium \cite{bern}, the
construction of an improved time standard based on an atomic fountain
\cite{clairon}, and the achievement of Bose-Einstein condensation in a
weakly-interacting gas.
In particular the last objective is an important motivation for studying
cold atomic gases and has been pursued most vigorously with atomic hydrogen
\cite{mit,adam}. However, it was recently proposed that also the alkali-metal
vapors cesium \cite{wieman} and lithium \cite{hulet} are suitable candidates
for the achievement of Bose-Einstein condensation. We will nevertheless
concentrate here on atomic hydrogen, because it still seems to be the most
promising system for the observation of the phase transition in the near
future. Moreover, it has the advantage that the atomic interaction potential
is known to a high degree of accuracy. As a result we can have confidence in
the fact that the scattering length is positive, which is required for the
condensation to take place in the gaseous phase \cite{nega}, and small enough
to rigorously justify the approximations made in the following for the
typical temperatures ($T \simeq 10 \: \mu K$) and densities
($n \simeq 1 \cdot 10^{14} \: cm^{-3}$) envisaged in the experiments.
Due to the spin of the electron and the proton, the 1s-hyperfine manifold
of atomic hydrogen consists of four states which are in order of increasing
energy denoted by $|a \rangle$, $|b \rangle$, $|c \rangle$, and
$|d \rangle$, respectively. Only the $|c \rangle$ and $|d \rangle$ states
can be trapped in a static magnetic trap, because in a magnetic field they
have predominantly an electron spin-up component and are therefore
low-field seeking \cite{review}. Furthermore, if we load a trap with atoms
in these two hyperfine states, the $|c \rangle$ state is rapidly depopulated
as a result of the much larger probability for collisional relaxation to the
high-field seeking $|a \rangle$ and $|b \rangle$ states which are
expelled from the trap. In this manner the system polarizes spontaneously and
we obtain a gas of $|d \rangle$-state atoms, known as doubly spin-polarized
atomic hydrogen since both the electron as well as the proton spin are
directed along the magnetic field. Unfortunately, such a doubly-polarized
hydrogen gas still decays due to the dipole interaction between the magnetic
moments of the atoms. Although the time scale $\tau_{inel}$ for this decay is
much longer than the time scale for the depopulation of the $|c \rangle$
state mentioned above, it nevertheless limits the lifetime of the gas sample
to the order of seconds for the densities of interest \cite{rates}.
Having filled the trap with doubly-polarized atoms, we must subsequently
lower the temperature of the gas to accomplish Bose-Einstein condensation.
At present it is believed that the most convenient way to achieve this is
by means of conventional \cite{doyle,luiten} or light-induced \cite{setija}
evaporative cooling. In both cases the idea is to remove, by lowering the
well-depth or by photon absorption in the perimeter, the most energetic
particles from the trap and thus to create momentarily a highly
nonequilibrium energy distribution that will evolve into a new equilibrium
distribution at a lower temperature. According to the quantum Boltzmann
equation describing this process, a typical time scale for the evolution is
the average time between two elastic collisions
$\tau_{el} = 1/n \langle v \sigma \rangle$, with $\langle v \sigma \rangle$
the thermal average of the relative velocity $v$ of two colliding atoms times
their elastic cross section $\sigma$. Clearly, $\tau_{el}$ must be small
compared to $\tau_{inel}$ to ensure that thermal equilibrium is achieved
within the lifetime of the system. As a result, the minimum temperature that
can be reached by evaporative cooling is about $1\: \mu K$ and indeed below
the critical temperature of atomic hydrogen at a density of
$1 \cdot 10^{14} \: cm^{-3}$.
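As a rough illustration of the time scales involved, the following back-of-envelope estimate of $\tau_{el}$ uses an assumed scattering length (roughly the hydrogen triplet value) and the low-energy s-wave cross section $\sigma = 8\pi a^2$ for identical bosons; all numbers are illustrative assumptions, not values quoted from the text:

```python
import numpy as np

# Order-of-magnitude estimate of the elastic collision time
# tau_el = 1/(n <v sigma>) at the temperatures and densities above.
kB = 1.380649e-23        # Boltzmann constant (J/K)
m_H = 1.6735e-27         # hydrogen mass (kg)
a = 0.07e-9              # assumed scattering length (m)
T_gas = 10e-6            # temperature (K)
n = 1e14 * 1e6           # density, 1e14 cm^-3 in m^-3

sigma = 8.0 * np.pi * a ** 2                       # s-wave cross section
mu = m_H / 2.0                                     # reduced mass
v_rel = np.sqrt(8.0 * kB * T_gas / (np.pi * mu))   # mean relative speed
tau_el = 1.0 / (n * v_rel * sigma)                 # ~ 0.1 s
```

The result, of order a tenth of a second, is indeed well below the lifetime of seconds set by $\tau_{inel}$.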
The previous discussion appears to indicate that a typical time scale for
the formation of the condensate is given by $\tau_{el}$. However, this is not
correct because simple phase-space arguments show that a kinetic equation
cannot lead to a macroscopic occupation of the one-particle ground state:
Considering a homogeneous system of $N$ bosons in a volume $V$, we find from
the Boltzmann equation that the production rate of the condensate fraction is
\begin{equation}
\frac{d}{dt} \left. \frac{N_{\vec 0}}{N} \: \right|_{in}
= C \frac{\langle v \sigma \rangle}{V}
(1+N_{\vec 0}) \:,
\end{equation}
where $N_{\vec 0}$ is the number of particles in the zero-momentum state and
$C$ is a constant of $O(1)$. Hence, in the thermodynamic limit
($N,V \rightarrow \infty$ in such a way that their ratio $n=N/V$ remains
fixed) a nonzero production rate is only possible if a condensate already
exists \cite{levich} and we are forced to conclude that Bose-Einstein
condensation cannot be achieved by evaporative cooling of the gas.
\section{NUCLEATION}
In the above argument we have only considered the effect of two-body
collisions. It is therefore legitimate to suggest that perhaps collisions
involving three or more bodies are required for the formation of the condensate,
even though they are very improbable in a dilute gas \cite{snoke}. However,
we can easily show that the same argument also applies to these
processes: For a $m$-body collision that produces one particle with zero
momentum we have $2m-2$ independent momentum summations, leading to a
factor of $V^{2m-2}$. Moreover, the transition matrix element is
proportional to $V \cdot V^{-m}$ due to the integration over the
center-of-mass coordinate and the normalization of the initial and final
state wave functions, respectively. In total the production rate for the
condensate fraction is thus proportional to
$V^{2m-2} (V^{1-m})^{2} V^{-1} (1+N_{\vec 0})$ or $V^{-1} (1+N_{\vec 0})$,
which again vanishes in the thermodynamic limit if there is no nucleus of
the condensed phase. As expected, the contributions from collisions that
produce more than one zero-momentum particle have additional factors of
$V^{-1}$ and vanish even more rapidly if $V \rightarrow \infty$.
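The volume counting above can be checked mechanically; the net exponent of $V$ in the production rate is the same for every $m$:

```python
def volume_exponent(m):
    """Net power of V in the production rate of the condensate
    fraction for an m-body collision producing one zero-momentum
    particle: 2m-2 momentum sums, a squared matrix element
    proportional to (V^{1-m})^2, and one overall factor V^{-1}."""
    return (2 * m - 2) + 2 * (1 - m) - 1

# the exponent equals -1 for every m, so the rate vanishes as 1/V
# in the thermodynamic limit unless a condensate already exists
exponents = [volume_exponent(m) for m in range(2, 7)]
```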
Clearly, we have arrived at a nucleation problem for the achievement of
Bose-Einstein condensation which seriously endangers the success of future
experiments. Fortunately, we suspect that the line of reasoning presented
above is not completely rigorous because otherwise it implies that also
liquid helium cannot become superfluid, in evident disagreement with our
experience. Indeed, by using a kinetic equation to discuss the time evolution
of the gas we have in effect neglected the buildup of coherence which is
crucial for the formation of the condensate. Our previous argument therefore
only shows that by means of evaporative cooling the gas is quenched into the
critical region on a time scale $\tau_{el}$, not that Bose-Einstein
condensation is impossible. To discuss that point we need a different
approach that accurately describes the time evolution of the system after the
quenching by taking the buildup of coherence into account exactly. Such a
nonequilibrium approach was recently developed on the basis of the Keldysh
formalism and can, in the case of a dilute Bose gas, be seen as a
generalization of the Landau theory of second-order phase transitions
\cite{stoof}. As a consequence it is useful to consider the Landau theory
first. This leads to a better understanding of the more complicated
nonequilibrium theory and ultimately of the physics involved in the
nucleation of Bose-Einstein condensation.
\subsection{Landau theory}
As an introduction to the Landau theory of second-order phase transitions we
use the example of a ferromagnetic material \cite{landau}. To be more
specific we consider a cubic lattice with spins $\vec{S}_{i}$ at the sites
$\{i\}$. The Hamiltonian is taken to be
\begin{equation}
\label{energy}
H = -J \sum_{\langle i,j \rangle} \vec{S}_{i} \cdot \vec{S}_{j} \:,
\end{equation}
where $J$ is the exchange energy and the sum is only over nearest neighbors.
For further convenience we also introduce the magnetization
\begin{equation}
\vec{M} = \frac{1}{V} \sum_{i} \vec{S}_{i} \:.
\end{equation}
Physically it is clear that this model has a phase transition at a critical
temperature $T_{c}$ of $O(J/k_{B})$. Above the critical temperature the
thermal fluctuations randomize the direction of the spins and the system is
in a disordered (paramagnetic) state having a vanishing average magnetization
${\langle \vec{M} \rangle}_{eq}$. However, below the critical temperature the
thermal fluctuations are not large enough to overcome the directional effect
of the Hamiltonian and the spins favor an ordered (ferromagnetic) state with
${\langle \vec{M} \rangle}_{eq} \neq \vec{0}$. The different phases of the
material are thus conveniently characterized by the average magnetization,
which for this reason is known as the order parameter of the ferromagnetic
phase transition.
In the phenomenological approach put forward by Landau, the above-mentioned
temperature dependence of the equilibrium order parameter
${\langle \vec{M} \rangle}_{eq}$ is reproduced by anticipating that the
free-energy density of the system at a fixed but not necessarily equilibrium
value of the average magnetization has the following expansion
\begin{equation}
\label{fenergy}
f(\langle \vec{M} \rangle,T) \simeq f(\vec{0},T) +
\alpha(T) {\langle \vec{M} \rangle}^{2} +
\frac{\beta(T)}{2} {\langle \vec{M} \rangle}^{4}
\end{equation}
for small values of $\langle \vec{M} \rangle$, and that the coefficients of
this expansion behave near the critical temperature as
\begin{equation}
\alpha(T) \simeq \alpha_{0} \left( \frac{T}{T_{c}} - 1 \right)
\end{equation}
and
\begin{equation}
\beta(T) \simeq \beta_{0} \:,
\end{equation}
respectively, with $\alpha_{0}$ and $\beta_{0}$ positive constants.
Hence, above the critical temperature $\alpha(T)$ and $\beta(T)$ are both
positive. As a result the free energy is minimal for
$\langle \vec{M} \rangle = \vec{0}$, which corresponds exactly to the
paramagnetic phase. Moreover, for temperatures below the critical one
$\alpha(T)$ is negative and the free energy is indeed minimized by a nonzero
average magnetization with magnitude $\sqrt{- \alpha(T) / \beta(T)}$. Just
below the critical temperature the latter equals
\begin{equation}
{\langle M \rangle}_{eq} \simeq \sqrt{ \frac{\alpha_{0}}{\beta_{0}}
\left( 1 - \frac{T}{T_{c}} \right) } \;,
\end{equation}
which after substitution in Eq.\ (\ref{fenergy}) gives rise to an equilibrium
free-energy density of
\begin{equation}
f({\langle \vec{M} \rangle}_{eq},T) \simeq f(\vec{0},T) -
\frac{\alpha_{0}^{2}}{2\beta_{0}}
{\left( 1 - \frac{T}{T_{c}} \right)}^{2} \:.
\end{equation}
Therefore, the second derivative $d^{2}f/dT^{2}$ is discontinuous at the
critical temperature and the phase transition is of second order according
to the Ehrenfest nomenclature.
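For completeness we note that the magnitude quoted here follows from a
one-line minimization of Eq.\ (\ref{fenergy}), a standard Landau-theory step
written out for convenience:
\begin{equation}
\frac{\partial f}{\partial \langle M \rangle} =
2 \alpha(T) \langle M \rangle + 2 \beta(T) {\langle M \rangle}^{3} = 0
\quad \Longrightarrow \quad
{\langle M \rangle}_{eq} = \sqrt{- \frac{\alpha(T)}{\beta(T)}} \:,
\end{equation}
and the discontinuity in $d^{2}f/dT^{2}$ at the critical temperature equals
$\alpha_{0}^{2}/\beta_{0}T_{c}^{2}$.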
Note that minimizing the free energy only fixes the magnitude and not the
direction of ${\langle \vec{M} \rangle}_{eq}$. This degeneracy is caused by
the fact that the Hamiltonian in Eq.\ (\ref{energy}) is symmetric under an
arbitrary rotation of all the spins $\vec{S}_{i}$. Consequently, the free
energy must be symmetric under a rotation of the average magnetization and
only even powers of $\langle \vec{M} \rangle$ can appear in its expansion
(cf.\ Eq.\ (\ref{fenergy})). Due to this behavior the ferromagnet is a good
example of a system with a spontaneously broken symmetry, i.e.\ although the
Hamiltonian is invariant under the operations of a group, its ground state
is not. In the case of a ferromagnet the symmetry group is $SO(3)$, which is
broken spontaneously below the critical temperature because the average
magnetization points in a certain direction. Which direction is chosen in
practice depends on the surroundings of the system and in particular on
(arbitrarily small) external magnetic fields that favor a specific direction.
After this summary of the Landau theory we are now in a position to
introduce two time scales which turn out to be of great importance for the
nucleation of Bose-Einstein condensation. To do so we consider the following
experiment: Imagine that we have a piece of ferromagnetic material
at some temperature $T_{1}$ above the critical temperature. Being in thermal
equilibrium the material is in the paramagnetic phase with
${\langle \vec{M} \rangle}_{eq} = \vec{0}$. We then quickly cool the
material to a new temperature $T_{2}$ below the critical temperature.
If this is done sufficiently fast, the spins will have no time to react and
we obtain a nonequilibrium situation in which the free energy has developed
a `double-well' structure but the average magnetization is still zero. This
is depicted in Fig.\ \ref{fig1}(a). In such a situation there is a typical
time scale for the relaxation of the average magnetization to its new
equilibrium value $\sqrt{- \alpha(T_{2}) / \beta(T_{2})}$, which we denote
$\tau_{coh}$.
However, in the case of magnetically trapped atomic hydrogen, the gas is
isolated from its surroundings and it is not possible to perform the cooling
stage mentioned above. As a result the gas has to develop the instability
associated with the phase transition by itself. The time scale corresponding
to this process is called $\tau_{nucl}$ and is schematically shown in
Fig.\ \ref{fig1}(b). Combining the two processes we are led to the
following physical picture for the nucleation of Bose-Einstein
condensation. After the quench into the critical region the gas develops an
instability on the time scale $\tau_{nucl}$. On this time scale the actual
nucleation takes place and a small nucleus of the condensate is formed, which
then grows on the time scale $\tau_{coh}$ as shown in Fig.\ \ref{fig2}.
To solve the nucleation problem we are thus left with the actual
determination of these two time scales. Clearly, before this can be done we
need to know the correct order parameter of the phase transition.
\subsection{Order parameter}
Ever since the pioneering work of Bogoliubov \cite{bogol} it is well
known that the order parameter for Bose-Einstein condensation in a
weakly-interacting Bose gas is a somewhat abstract quantity, which is most
conveniently discussed by using the method of second quantization. In this
method all many-body observables are expressed in terms of the creation and
annihilation operators of a particle at position $\vec{x}$ denoted by
$\psi^{\dagger}(\vec{x})$ and $\psi(\vec{x})$, respectively \cite{fetter}.
For example, for a gas of particles with mass $m$ and a two-body interaction
potential $V(\vec{x}-\vec{x}')$ the Hamiltonian equals
\begin{equation}
\label{hamil}
H = \int d\vec{x} \: \psi^{\dagger}(\vec{x}) \frac{-\hbar^{2}\nabla^{2}}{2m}
\psi(\vec{x}) +
\frac{1}{2} \int d\vec{x} \int d\vec{x}' \:
\psi^{\dagger}(\vec{x}) \psi^{\dagger}(\vec{x}')
V(\vec{x}-\vec{x}')
\psi(\vec{x}') \psi(\vec{x})
\end{equation}
and the total number of particles is given by
\begin{equation}
\label{number}
N = \int d\vec{x} \: \psi^{\dagger}(\vec{x}) \psi(\vec{x}) \:.
\end{equation}
The method is also particularly useful for a Bose system because the
permutation symmetry of the many-body wave function is automatically
accounted for by assuming the commutation relations
$[\psi(\vec{x}),\psi(\vec{x}')] =
[\psi^{\dagger}(\vec{x}),\psi^{\dagger}(\vec{x}')] = 0$ and
$[\psi(\vec{x}),\psi^{\dagger}(\vec{x}')] = \delta(\vec{x}-\vec{x}')$
between the creation and annihilation operators.
In the language of second quantization the order parameter for the dilute
Bose gas is the expectation value $\langle \psi(\vec{x}) \rangle$.
Analogous to the case of the ferromagnetic phase transition, a nonzero
value of this order parameter signals a spontaneously broken symmetry.
Here the appropriate symmetry group is $U(1)$, since the Hamiltonian of
Eq.\ (\ref{hamil}) is invariant under the transformation
$\psi(\vec{x}) \rightarrow \psi(\vec{x})e^{i\vartheta}$ and
$\psi^{\dagger}(\vec{x}) \rightarrow
\psi^{\dagger}(\vec{x})e^{-i\vartheta}$
of the field operators, whereas their expectation values are clearly not.
Notice that the $U(1)$ symmetry of the Bose gas is closely related to the
conservation of particle number. This is most easily seen by observing
that the invariance of the Hamiltonian is due to the fact that each term in
the right-hand side of Eq.\ (\ref{hamil}) contains an equal number of
creation and annihilation operators. The relationship can also be
established in a more formal way by noting that the $U(1)$ gauge
transformations are generated by the particle number operator. As we will
see later on, it has important consequences for the dynamics of the order
parameter.
To understand why $\langle \psi(\vec{x}) \rangle$ is the order parameter
associated with Bose-Einstein condensation, it is convenient to use a
momentum-space description and to introduce the annihilation operator
for a particle with momentum $\hbar \vec{k}$
\begin{equation}
a_{\vec{k}} = \int d\vec{x} \: \psi(\vec{x})
\frac{e^{-i \vec{k} \cdot \vec{x}}}{\sqrt{V}}
\end{equation}
and the corresponding creation operator ${a_{\vec{k}}}^{\dagger}$ by
Hermitian conjugation. The basis of states for the gas is then characterized
by the occupation numbers $\{N_{\vec{k}}\}$. If the gas is condensed, there
is a macroscopic occupation of the zero-momentum state and the relevant
states are $|N_{\vec{0}}, {\{N_{\vec{k}}\}}_{\vec{k} \neq \vec{0}} \rangle$
with only $N_{\vec{0}}$ proportional to $N$. Within this subspace of states
we have
\begin{equation}
\langle {a_{\vec{0}}}^{\dagger} a_{\vec{0}} \rangle =
\langle N_{\vec{0}} \rangle \simeq
\langle N_{\vec{0}} \rangle + 1 =
\langle a_{\vec{0}} {a_{\vec{0}}}^{\dagger} \rangle
\end{equation}
and we can neglect that $a_{\vec{0}}$ and ${a_{\vec{0}}}^{\dagger}$ do not
commute. As a result we can treat these operators as complex numbers
\cite{bogol} and say that
$\langle {a_{\vec{0}}}^{\dagger} a_{\vec{0}} \rangle =
\langle {a_{\vec{0}}}^{\dagger} \rangle \langle a_{\vec{0}} \rangle$
or equivalently that $\langle a_{\vec{0}} \rangle = \sqrt{N_{\vec{0}}}$.
In coordinate space the latter reads
$\langle \psi(\vec{x}) \rangle = \sqrt{n_{\vec{0}}}$, with
$n_{\vec{0}} = N_{\vec{0}}/V$ the condensate density.
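The coordinate-space statement follows from the mode expansion of the field
operator, a small step not written out above:
\begin{equation}
\psi(\vec{x}) = \sum_{\vec{k}} a_{\vec{k}} \:
\frac{e^{i \vec{k} \cdot \vec{x}}}{\sqrt{V}}
\quad \Longrightarrow \quad
\langle \psi(\vec{x}) \rangle =
\frac{\langle a_{\vec{0}} \rangle}{\sqrt{V}} = \sqrt{n_{\vec{0}}} \:,
\end{equation}
because only the zero-momentum mode acquires a nonzero expectation value.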
The above argument essentially tells us that a sufficient condition for a
nonzero value of $\langle \psi(\vec{x}) \rangle$ is
$\langle N_{\vec{0}} \rangle \gg 1$. Although this is intuitively appealing,
it is important to point out that it is not generally true. Consider for
example the ideal Bose gas \cite{huang}. In the grand canonical ensemble the
total number of particles in the gas is given by
\begin{equation}
\label{density}
N = \sum_{\vec{k}} \langle N_{\vec{k}} \rangle =
\sum_{\vec{k}} \frac{1}{\zeta^{-1} e^{\beta \epsilon_{\vec{k}}} - 1} \:,
\end{equation}
where $\beta$ is $1/k_{B}T$, $\epsilon_{\vec{k}}$ is the kinetic energy
$\hbar^{2} \vec{k}^{2}/2m$, $\zeta$ is the fugacity $e^{\beta \mu}$ and
$\mu$ is the chemical potential.
At high temperatures the fugacity is small and we are allowed to take the
continuum limit of Eq.\ (\ref{density}), which results in the equation of
state
\begin{equation}
\label{eqst}
n = \frac{1}{\Lambda^{3}} g_{3/2}(\zeta) \:,
\end{equation}
using the thermal de Broglie wavelength
$\Lambda = \sqrt{2 \pi \hbar^{2}/mk_{B}T}$ and the Bose functions
$g_{n}(\zeta)$ defined by
\begin{equation}
g_{n}(\zeta) = \frac{1}{\Gamma(n)} \int_{0}^{\infty} dx \:
\frac{x^{n-1}}{\zeta^{-1} e^{x} - 1} \:.
\end{equation}
When the temperature is lowered at fixed density, the fugacity
increases until it ultimately reaches the value one at the critical
temperature
\begin{equation}
T_{0} = \frac{2 \pi \hbar^{2}}{mk_{B}}
{\left( \frac{n}{g_{3/2}(1)} \right)}^{2/3} \simeq
\frac{2 \pi \hbar^{2}}{mk_{B}}
{\left( \frac{n}{2.612} \right)}^{2/3} \:.
\end{equation}
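As a rough numerical illustration (not part of the original text; the density
$n = 10^{20}\,{\rm m}^{-3}$ below is an assumed, merely representative value),
$g_{3/2}(1) = \zeta(3/2)$ and the resulting $T_{0}$ for atomic hydrogen can be
evaluated directly:

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
m = 1.6735575e-27        # mass of a hydrogen atom in kg

# g_{3/2}(1) = zeta(3/2) by direct summation plus an integral tail estimate
N = 200000
g32 = sum(k ** -1.5 for k in range(1, N)) + 2.0 / math.sqrt(N)

n = 1e20                 # assumed density in m^-3 (illustrative only)
T0 = (2.0 * math.pi * hbar ** 2 / (m * kB)) * (n / g32) ** (2.0 / 3.0)

print(round(g32, 3))     # ~2.612
print(T0)                # tens of microkelvin for this density
```

The summation reproduces the value $2.612$ quoted in the equation above, and
for denser samples $T_{0}$ scales as $n^{2/3}$.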
At this point Eq.\ (\ref{eqst}) ceases to be valid because the occupation
number of the zero-momentum state, which is equal to $\zeta/(1-\zeta)$,
diverges and must be taken out of the discrete sum in Eq.\ (\ref{density})
before we take the continuum limit. Moreover, we only need to treat the
zero-momentum term separately because in the thermodynamic limit the chemical
potential goes to zero as $V^{-1}$, whereas the kinetic energy for the
smallest nonzero momentum decreases only as $V^{-2/3}$. Consequently, below
the critical temperature the equation of state becomes
\begin{equation}
n = n_{\vec{0}} + \frac{1}{\Lambda^{3}} g_{3/2}(1)
\end{equation}
and leads to a condensate density equal to
\begin{equation}
\label{cond}
n_{\vec{0}} = n \left( 1- {\left( \frac{T}{T_{0}} \right)}^{3/2} \right) \:.
\end{equation}
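The volume scalings used above to justify removing only the zero-momentum
term can be made explicit; expanding $\zeta = e^{\beta\mu}$ for small
$|\mu|$ gives
\begin{equation}
N_{\vec{0}} = \frac{\zeta}{1-\zeta} \simeq \frac{k_{B}T}{|\mu|}
\quad \Longrightarrow \quad
|\mu| \simeq \frac{k_{B}T}{N_{\vec{0}}} \propto V^{-1} \:, \qquad
\epsilon_{min} = \frac{\hbar^{2}}{2m}
{\left( \frac{2 \pi}{V^{1/3}} \right)}^{2} \propto V^{-2/3} \:,
\end{equation}
since $N_{\vec{0}}$ is proportional to $V$ below the critical temperature and
the smallest nonzero momentum in a cubic box of volume $V$ is
$2\pi\hbar/V^{1/3}$.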
We thus find that the average occupation number
$\langle N_{\vec{0}} \rangle$ is at all temperatures given by
$\zeta/(1-\zeta)$, i.e.\ its value in the grand canonical ensemble with the
density matrix $e^{-\beta(H - \mu N)}$. Since this density matrix
commutes with the particle number operator, we conclude that in the case of
an ideal Bose gas there is a macroscopic occupation of the zero-momentum
state without a spontaneous breaking of the $U(1)$ symmetry. To show
more rigorously that ${\langle \psi(\vec{x}) \rangle}_{eq} = 0$ at all
temperatures we determine the free-energy density of the gas as a function
of the order parameter $\langle \psi(\vec{x}) \rangle$. Dealing with a
noninteracting system it is not difficult to obtain
\begin{equation}
f(\langle \psi(\vec{x}) \rangle,T) =
- \mu(T) {| \langle \psi(\vec{x}) \rangle |}^{2}
\end{equation}
for a homogeneous value of the order parameter. Because $\mu \leq 0$, the
minimum is indeed always at $\langle \psi(\vec{x}) \rangle = 0$, and for the
ideal Bose gas it is therefore the condensate density $n_{\vec{0}}$ itself
that must be identified with the order parameter (cf.\ Eq.\ (\ref{cond})).
Notwithstanding the previous remarks, the order parameter for Bose-Einstein
condensation in a weakly-interacting Bose gas is given by
$\langle \psi(\vec{x}) \rangle$. This was put on a firm theoretical basis by
Hugenholtz and Pines \cite{hugen}, who calculated the free energy as a
function of the above order parameter and showed that at sufficiently low
temperatures the system develops an instability that is removed by a
nonzero value of $\langle \psi(\vec{x}) \rangle$. In addition, they derived
an exact relationship between the chemical potential and the condensate
density, which turns out to be valid also in the nonequilibrium problem of
interest here and is important for an understanding of how the $U(1)$
symmetry is broken dynamically.
\subsection{Condensation time}
We have argued that by means of evaporative cooling a doubly-polarized atomic
hydrogen gas can be quenched into the critical region of the phase transition
and that this kinetic part of the condensation process is described by a
quantum Boltzmann equation. As a result the gas acquires on the time scale
$\tau_{el}$ an equilibrium distribution with some temperature $T$, which is
slightly above the critical temperature $T_{0}$ of the ideal Bose
gas because a condensate cannot be formed at this stage.
For the study of the subsequent coherent part of the condensation process it
is therefore physically reasonable to assume that at a time $t_{0}$ the
density matrix $\rho(t_{0})$ of the gas is well approximated by the density
matrix of an ideal Bose gas with temperature $T$. The evolution of the order
parameter $\langle \psi(\vec{x}) \rangle$ for times larger than $t_{0}$ is
then completely determined by the Heisenberg equation of motion
\begin{equation}
i \hbar \frac{d\psi(\vec{x},t)}{dt} = [\psi(\vec{x},t) , H] \:,
\end{equation}
for the field operator. Substituting herein the Hamiltonian of
Eq.\ (\ref{hamil}) and taking the expectation value with respect to
$\rho(t_{0})$ we find
\begin{equation}
\label{heisen}
i \hbar \frac{d\langle \psi(\vec{x},t) \rangle}{dt} =
\frac{-\hbar^{2}\nabla^{2}}{2m} \langle \psi(\vec{x},t) \rangle +
\int d\vec{x}' \: V(\vec{x} - \vec{x}')
\langle \psi^{\dagger}(\vec{x}',t) \psi(\vec{x}',t)
\psi(\vec{x},t) \rangle \:,
\end{equation}
where the complicated part is of course the evaluation of
$\langle \psi^{\dagger}(\vec{x}',t)\psi(\vec{x}',t)\psi(\vec{x},t) \rangle$.
In lowest order we simply have
\begin{eqnarray}
\langle \psi^{\dagger}(\vec{x}',t) \psi(\vec{x}',t) \psi(\vec{x},t) \rangle
\simeq \langle \psi^{\dagger}(\vec{x}',t) \rangle
\langle \psi(\vec{x}',t) \rangle \langle \psi(\vec{x},t) \rangle
&+& \langle \psi^{\dagger}(\vec{x}',t) \psi(\vec{x}',t) \rangle
\langle \psi(\vec{x},t) \rangle \nonumber \\
&+& \langle \psi^{\dagger}(\vec{x}',t) \psi(\vec{x},t) \rangle
\langle \psi(\vec{x}',t) \rangle \:,
\end{eqnarray}
which after substitution into Eq.\ (\ref{heisen}) leads to
\begin{eqnarray}
\left( i \hbar \frac{d}{dt} + \frac{\hbar^{2}\nabla^{2}}{2m} \right)
\langle \psi(\vec{x},t) \rangle
=& & \int d\vec{x}' \: V(\vec{x} - \vec{x}')
\langle \psi^{\dagger}(\vec{x}',t) \psi(\vec{x}',t) \rangle
\langle \psi(\vec{x},t) \rangle \nonumber \\
&+& \int d\vec{x}' \: V(\vec{x} - \vec{x}')
\langle \psi^{\dagger}(\vec{x}',t) \psi(\vec{x},t) \rangle
\langle \psi(\vec{x}',t) \rangle \nonumber \\
&+& \int d\vec{x}' \: V(\vec{x} - \vec{x}')
\langle \psi^{\dagger}(\vec{x}',t) \rangle
\langle \psi(\vec{x}',t) \rangle
\langle \psi(\vec{x},t) \rangle \:,
\end{eqnarray}
and thus corresponds exactly to the Hartree-Fock approximation.
To proceed we must restrict ourselves to the case of a dilute Bose gas in the
quantum regime. Introducing the scattering length $a$, which is of the order
of the range of the interaction, the quantum regime is characterized by
$a/\Lambda \ll 1$. We therefore need to consider only $s$-wave scattering and
can neglect the momentum dependence of various collisional quantities. In
particular, we can replace the potential $V(\vec{x} - \vec{x}')$ by the
contact interaction $V_{\vec{0}} \, \delta(\vec{x} - \vec{x}')$ with
$V_{\vec{0}} = \int d\vec{x} \: V(\vec{x})$.
Hence, in the Hartree-Fock approximation we obtain
\begin{equation}
\left( i \hbar \frac{d}{dt} + \frac{\hbar^{2}\nabla^{2}}{2m} \right)
\langle \psi(\vec{x},t) \rangle =
\left( 2nV_{\vec{0}} +
V_{\vec{0}} {|\langle \psi(\vec{x},t) \rangle|}^{2} \right)
\langle \psi(\vec{x},t) \rangle \:,
\end{equation}
which has only the trivial solution $\langle \psi(\vec{x},t) \rangle = 0$ for
a space- and time-independent order parameter, because for a repulsive
interaction the right-hand side cannot vanish for nonzero
$\langle \psi(\vec{x},t) \rangle$. Within this lowest-order
approximation we thus conclude that $\tau_{nucl} = \infty$ and that
the formation of a condensate will not take place.
Fortunately, it is well known that the Hartree-Fock approximation is not
sufficiently accurate for a dilute Bose gas because the diluteness condition
$na^{3} \ll 1$ implies that we should consider all two-body processes, i.e.\
two particles must be allowed to interact also more than once. The
appropriate approximation is therefore the ladder or $T$-matrix approximation
and is diagrammatically explained in Fig.\ \ref{fig3}. Moreover, in the
degenerate regime where the temperature $T$ is slightly larger than $T_{0}$
and the degeneracy parameter $n\Lambda^{3}$ is of $O(1)$, the condition
$a/\Lambda \ll 1$ implies that also $na\Lambda^{2} \ll 1$ or physically
that the average kinetic energy of the gas is much larger than the
typical interaction energy. Consequently, an accurate discussion of the
nucleation of Bose-Einstein condensation in a weakly-interacting Bose gas
requires an evaluation of
$\langle \psi^{\dagger}(\vec{x}',t)\psi(\vec{x}',t)\psi(\vec{x},t) \rangle$
within the $T$-matrix approximation and in zeroth order in the gas
parameters $a/\Lambda$ and $na\Lambda^{2}$.
Although it is easy to formulate this objective, actually performing the
calculation is considerably more difficult. It is most conveniently
accomplished by making use of the Keldysh formalism \cite{keldysh} which
has been reviewed by Danielewicz \cite{daniel} using operator methods. For a
functional formulation of this nonequilibrium theory and for the technical
details of the somewhat tedious mathematics we refer to our previous papers
\cite{stoof}. Here we only present the final results and concentrate on the
physics involved.
Due to the fact that we are allowed to neglect the (relative) momentum
dependence of the $T$ matrix, the equation of motion for the order parameter
$\langle \psi(\vec{x},t) \rangle$ acquires the local form of a
time-dependent Landau-Ginzburg theory
\begin{equation}
\label{motion}
\left( i \hbar \frac{d}{dt} + \frac{\hbar^{2}\nabla^{2}}{2m} \right)
\langle \psi(\vec{x},t) \rangle =
\left( S^{(+)}(t) +
T^{(+)} {|\langle \psi(\vec{x},t) \rangle|}^{2} \right)
\langle \psi(\vec{x},t) \rangle \:,
\end{equation}
which is recovered from a variational principle if we use the action
\begin{equation}
\label{action}
S(\langle \psi(\vec{x},t) \rangle,T) =
\int dt \int d\vec{x} \:
{\langle \psi(\vec{x},t) \rangle}^{*}
\left( i \hbar \frac{d}{dt}
+ \frac{\hbar^{2}\nabla^{2}}{2m} - S^{(+)}(t)
- \frac{T^{(+)}}{2} {|\langle \psi(\vec{x},t) \rangle|}^{2}
\right)
\langle \psi(\vec{x},t) \rangle \:.
\end{equation}
Here $S^{(+)}(t) \delta(t-t')$ is a good approximation for the retarded
self-energy $\hbar \Sigma^{(+)}(\vec{0};t,t')$ of a hydrogen atom with zero
momentum and $T^{(+)} \simeq 4\pi\hbar^{2}a/m$ is the effective interaction
between two such atoms. Clearly, the action in Eq.\ (\ref{action}) is the
desired generalization of the Landau free energy and corresponds precisely to
the physical picture presented previously in Figs.\ \ref{fig1}(b) and
\ref{fig2}(b).
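Indeed, Eq.\ (\ref{motion}) follows from Eq.\ (\ref{action}) by a functional
variation, which we write out as a check:
\begin{equation}
\frac{\delta S}{\delta {\langle \psi(\vec{x},t) \rangle}^{*}} =
\left( i \hbar \frac{d}{dt} + \frac{\hbar^{2}\nabla^{2}}{2m}
- S^{(+)}(t)
- T^{(+)} {|\langle \psi(\vec{x},t) \rangle|}^{2} \right)
\langle \psi(\vec{x},t) \rangle = 0 \:,
\end{equation}
the factor $T^{(+)}/2$ in the action becoming $T^{(+)}$ in the equation of
motion because the quartic term contains
${\langle \psi(\vec{x},t) \rangle}^{*}$ twice.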
Therefore, $\tau_{nucl}$ is determined by the time dependence of
the coefficient $S^{(+)}$ which is shown in Fig.\ \ref{fig4} for three
different initial temperatures. If the temperature $T$ is much larger than
$T_{0}$, $S^{(+)}(t)$ is constant and equal to $8\pi\hbar^{2}an/m$. In this
region of the phase diagram coherent processes are negligible and the
evolution of the gas is described by a Boltzmann equation. Lowering the
temperature, the occupation numbers for momenta $\hbar k < O(\hbar/\Lambda)$
rise and lead to an enhancement of the coherent population of states with
momenta $\hbar k < O(\hbar \sqrt{na}) \ll O(\hbar/\Lambda)$. This is
signaled by the increasing correlation length
$\xi = \hbar/\sqrt{2mS^{(+)}(\infty)}$. At the critical temperature
$T_{c} = T_{0}( 1 + O(a/\Lambda_{0}) )$ we have $S^{(+)}(\infty) = 0$ and the
correlation length diverges. Below that temperature, but still above $T_{0}$
so as not to have a condensate already in the initial state, $S^{(+)}(t)$
actually changes sign and the gas develops the required instability for a
Bose-Einstein condensation. The change of sign takes place at
\begin{equation}
t \equiv t_{c} = t_{0} + O \left( \frac{a}{\Lambda_{c}}
\frac{\hbar}{k_{B}(T_{c}-T)} \right) \:,
\end{equation}
which shows that $\tau_{nucl}$ is in general of $O(\hbar/k_{B}T_{c})$ except
for temperatures very close to the critical temperature. Clearly, this time
scale is due to the fact that all states with momenta
$\hbar k < O(\hbar/\Lambda)$ cooperate in the coherent population of the
one-particle ground state.
After a small nucleus of $O(n(a/\Lambda_{c})^{2})$ has been formed, the
subsequent buildup of the condensate density is determined by the equation
of motion Eq.\ (\ref{motion}). Looking at the right-hand side we immediately
see that the time scale $\tau_{coh}$ involved in this process is typically
of $O(\hbar/n_{\vec{0}}T^{(+)})$ or equivalently of
$O((\hbar/k_{B}T_{c})(1/n_{\vec{0}}a\Lambda_{c}^{2}))$. Therefore,
$\tau_{coh} \gg \tau_{nucl}$ as anticipated in Fig.\ \ref{fig2}. The physical
reason for this time scale is that after the nucleation of the phase
transition the buildup of the condensate density is accompanied by a
depopulation of the momentum states with
$\hbar k < O(\hbar \sqrt{n_{\vec{0}}a})$. As a result it is not difficult to
show that in the limit $t \rightarrow \infty$ the condensate density is of
$O(na/\Lambda_{c})$ and thus that
$\tau_{coh} = O((\Lambda_{c}/a)^{2} \hbar/k_{B}T_{c})$.
Finally, it is interesting to point out how the gas can conserve the total
number of particles and apparently at the same time break the $U(1)$ gauge
symmetry that is responsible for this conservation law. To that end we write
the field operator $\psi(\vec{x},t)$ as the sum of its expectation value
$\langle \psi(\vec{x},t) \rangle$ and the fluctuation $\psi'(\vec{x},t)$,
and introduce a time-dependent chemical potential $\mu(t)$ by means of
\begin{equation}
\langle \psi(\vec{x},t) \rangle = \sqrt{n_{\vec{0}}(t)} \:
\exp \left( - \frac{i}{\hbar} \int_{t_{0}}^{t} dt' \: \mu(t') \right) \:.
\end{equation}
Substituting the latter in the action of Eq.\ (\ref{action}) and minimizing
with respect to $\sqrt{n_{\vec{0}}(t)}$ gives for $t > t_{c}$
\begin{equation}
n_{\vec{0}}(t) = \frac{- S^{(+)}(t) + \mu(t)}{T^{(+)}} \:,
\end{equation}
which determines the growth of the condensate density and is in effect a
nonequilibrium version of the Hugenholtz-Pines theorem \cite{hugen}.
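Explicitly, substituting the ansatz into Eq.\ (\ref{motion}) and keeping only
the terms that survive for a slowly growing condensate gives (an intermediate
step added for clarity)
\begin{equation}
\mu(t) \sqrt{n_{\vec{0}}(t)} =
\left( S^{(+)}(t) + T^{(+)} n_{\vec{0}}(t) \right) \sqrt{n_{\vec{0}}(t)} \:,
\end{equation}
whose nontrivial solution is the growth law quoted above; the term
$i\hbar \, d\sqrt{n_{\vec{0}}(t)}/dt$ is of higher order in the gas
parameters.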
Furthermore, by considering the fluctuations around $\sqrt{n_{\vec{0}}(t)}$
we can show that the chemical potential is determined by the constraint
\begin{equation}
n = n_{\vec{0}}(t) + \frac{1}{V} \int d\vec{x} \:
\langle \psi'^{\dagger}(\vec{x},t) \psi'(\vec{x},t) \rangle \:,
\end{equation}
enforcing the conservation of particle number at all times. In the complex
plane $\langle \psi(\vec{x},t) \rangle$ thus moves radially outward along a
spiral as shown in Fig.\ \ref{fig5}. Consequently, the phase of the order
parameter never has a fixed value and the $U(1)$ symmetry is not
really broken dynamically. This is of course expected since the system
evolves according to a symmetric Hamiltonian.
\section{CONCLUSIONS AND DISCUSSION}
We studied the evolution of a doubly-polarized atomic hydrogen gas in a
magnetic trap and showed that by means of evaporative cooling the gas can
accomplish the Bose-Einstein phase transition within its lifetime
$\tau_{inel}$. The condensation process proceeds under these conditions in
three stages: In the first kinetic stage the gas is quenched into the
critical region $T_{0} < T \leq T_{c}$. A typical time scale in this part of
the evolution is given by the time between elastic collisions $\tau_{el}$,
which for a degenerate gas is of $O((\Lambda_{c}/a)^{2} \hbar/k_{B}T_{c})$.
In the following coherent stage the actual nucleation takes place on the time
scale $\tau_{nucl} = O(\hbar/k_{B}T_{c})$ by means of a coherent population
of the zero-momentum state. The small nucleus formed in this manner then
grows on the much longer time scale
$\tau_{coh} = O((\Lambda_{c}/a)^{2} \hbar/k_{B}T_{c})$ by a depopulation of
the low-momentum states with $\hbar k < O(\hbar \sqrt{n_{\vec{0}}a})$. In
the third and last stage of the evolution the Bogoliubov quasiparticles
produced in the previous stage have to come into equilibrium with the
condensate. This process can again be treated by a kinetic equation and was
studied by Eckern \cite{eckern}, who found that the corresponding relaxation
time $\tau_{rel}$ is of $O((\Lambda_{c}/a)^{3} \hbar/k_{B}T_{c})$. In the
case of atomic hydrogen this turns out to be comparable to the lifetime of
the system. Summarizing, we thus have the sequence
$\tau_{nucl} \ll \tau_{coh} \simeq
\tau_{el} \ll \tau_{rel} \simeq \tau_{inel}$
for the various time scales involved in the phase transition. The most
important requirement for the achievement of the phase transition is
therefore $\tau_{el} \ll \tau_{inel}$, which is relatively mild and should
not pose an insurmountable problem for future experiments aimed at the
realization of Bose-Einstein condensation.
Having arrived at this conclusion, it is necessary to discuss a recent paper
by Kagan, Svistunov, and Shlyapnikov \cite{kagan} that also considers the
evolution of a weakly-interacting Bose gas after the removal of the most
energetic atoms. In this paper the authors agree that the evolution of the
gas is divided into a kinetic and a subsequent coherent stage. Moreover,
their detailed study of the kinetic part of the evolution confirms our
conjecture that the gas is quenched into the critical region on the time
scale $\tau_{el}$. The investigation of the coherent part, however, leads to
the extreme result that a Bose-Einstein condensation cannot occur in a
finite amount of time. To understand why this conclusion is reached we
briefly present their line of thought.
At the end of the kinetic stage the gas has acquired large average
occupation numbers for the states with momenta
$\hbar k < \hbar k_{0} = O(\hbar \sqrt{n_{0}a})$, where $n_{0}$ is the
density of particles with these small momenta. Therefore, Kagan, Svistunov,
and Shlyapnikov argue that for a study of the coherent part of the evolution
we must use the initial condition
\begin{equation}
\label{initial}
\langle \psi(\vec{x},t_{0}) \rangle = \sum_{k<k_{0}}
\sqrt{\langle N_{\vec{k}} \rangle} \:
\frac{e^{i \vec{k} \cdot \vec{x}}}{\sqrt{V}} \neq 0
\end{equation}
together with the nonlinear Schr\"{o}dinger equation
\begin{equation}
\label{schroe}
i \hbar \frac{d \langle \psi(\vec{x},t) \rangle}{dt} =
\left( \frac{- \hbar^{2}\nabla^{2}}{2m} +
T^{(+)} {|\langle \psi(\vec{x},t) \rangle|}^{2} \right)
\langle \psi(\vec{x},t) \rangle \:,
\end{equation}
which has the equilibrium solution
$\langle \psi(\vec{x},t) \rangle = \sqrt{n_{0}} \, \exp( -i \mu_{0} t/\hbar )$
with $\mu_{0} = n_{0}T^{(+)}$. Consequently, all the particles that initially
have
momenta $\hbar k < \hbar k_{0}$ are in the limit $t \rightarrow \infty$
assumed to be in the condensate.
Linearizing the Hamiltonian of Eq.\ (\ref{schroe}) around this equilibrium
solution they then observe that the energy involved with a magnitude
fluctuation of the order parameter is $\epsilon_{\vec{k}} + n_{0}T^{(+)}$,
whereas the energy involved with a phase fluctuation is only
$\epsilon_{\vec{k}}$. As a result they assert that on the time scale
$\tau_{ampl} = \tau_{coh} = O(\hbar/n_{0}T^{(+)})$ a state is formed in which
the amplitude of $\langle \psi(\vec{x},t) \rangle$ is fixed, but the phase is
still strongly fluctuating because the corresponding time scale $\tau_{ph}$
is much longer and even diverges as $V^{2/3}$ in the thermodynamic limit.
Hence, for finite times the gas is in a state with a so-called
quasicondensate \cite{popov} and a real condensate is only formed in the
limit $t \rightarrow \infty$.
Clearly, this physical picture of two different time scales for the
amplitude and phase fluctuations of the order parameter is only applicable
if these fluctuations exist independently of each other. Looking only at the
Hamiltonian this indeed seems to be the case. However, a correct discussion
of the fluctuations must be based on the equations of motion or equivalently
the Lagrangian. The latter contains a first-order time derivative which
strongly couples the amplitude and phase fluctuations. Therefore, a dilute
Bose gas does not have two but only one dispersion relation, i.e.\ the
well-known Bogoliubov dispersion
$\sqrt{\epsilon_{\vec{k}}(\epsilon_{\vec{k}} + 2n_{0}T^{(+)})}$, and we are
led to $\tau_{ph} = \tau_{ampl}$. It is interesting to note that in the case
of a neutral BCS-type superfluid we do have two different time scales because
the Lagrangian now contains a second-order time derivative and the amplitude
and phase fluctuations are indeed independent in lowest order
\cite{bog,anders}.
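The single Bogoliubov branch can be obtained by writing
$\langle \psi(\vec{x},t) \rangle =
( \sqrt{n_{0}} + \delta\psi(\vec{x},t) ) \exp(-i\mu_{0}t/\hbar)$
in Eq.\ (\ref{schroe}) and linearizing, a standard step written out here for
convenience:
\begin{equation}
i \hbar \frac{d \, \delta\psi}{dt} =
\left( \frac{- \hbar^{2}\nabla^{2}}{2m} + n_{0}T^{(+)} \right) \delta\psi
+ n_{0}T^{(+)} \, \delta\psi^{*}
\quad \Longrightarrow \quad
\hbar \omega_{\vec{k}} =
\sqrt{\epsilon_{\vec{k}} ( \epsilon_{\vec{k}} + 2n_{0}T^{(+)} )} \:,
\end{equation}
the coupling between $\delta\psi$ and $\delta\psi^{*}$ being precisely the
coupling between amplitude and phase fluctuations that removes the second
time scale.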
An even more serious problem with the approach of Kagan, Svistunov, and
Shlyapnikov is their claim that the use of the initial condition in
Eq.\ (\ref{initial}) is justified because
$\langle N_{\vec{k}} \rangle \gg 1$. As we have pointed out before this is
not true in general. For $\langle \psi(\vec{x},t) \rangle$ to be nonzero we
must show that the system has a corresponding instability. However, within
the $T$-matrix approximation it is not difficult to show that the instability
associated with a quasicondensate is always preceded by the instability
corresponding to the formation of a condensate. This implies that we always
have to take Bose-Einstein condensation into account first. After that has
been accomplished by means of the theory reviewed here, it is of course
no longer relevant to consider the appearance of a quasicondensate.
\section*{ACKNOWLEDGMENTS}
It is a great pleasure to thank Tony Leggett for various helpful discussions
and for giving me the opportunity to visit the University of Illinois at
Urbana-Champaign. I also benefited from conversations with Steve Girvin,
Daniel Loss, Kieran Mullen, and Jook Walraven. This work was supported by
the National Science Foundation through Grant No.\ DMR-8822688.
\section{Introduction}
\label{sc1}
\quad In the past decade, blockchain technology has evolved tremendously, and is now regarded as a foundation
for the next generation of digital exchange platforms, with a broad range of established or emerging applications including
cryptocurrency (\cite{Naka08, Wood14, HR17}),
healthcare (\cite{MC19, Do19, TPE20}),
supply chain (\cite{SK19, CTT20}),
electoral voting (\cite{W18}) and emerging art markets such as non-fungible tokens (\cite{WL21, Dow22}).
\quad A blockchain is a growing chain of records or transactions, called {\em blocks},
which are jointly maintained by a set of {\em miners} or {\em validators} using cryptography.
The idea of blockchain originated from {\em distributed consensus}, where multiple machines in a mission-critical system are required to make consistent decisions.
The work of \cite{LSP82}, which introduced the famous {\em Byzantine Generals problem}, laid the foundation for distributed consensus.
Since then, distributed consensus has been deployed in many digital infrastructures such as Google Wallet and Facebook Credit.
Since 2009, Bitcoin (\cite{Naka08}) and various other cryptocurrencies have emerged,
allowing the secure transfer of assets without an intermediary such as a bank or payment processing network.
These cryptocurrencies achieved a new breakthrough in distributed consensus
because they showed for the first time that consensus is viable in a decentralized and permissionless environment
where {\em anyone} is allowed to work.
This is in contrast with traditional trusted payment processing systems
as well as distributed consensus-based computing infrastructures such as Google Wallet and Facebook Credit,
where only a small number of permissioned parties can participate.
In this sense, Google Wallet and Facebook Credit are regarded as {\em permissioned blockchains},
while Bitcoin and other cryptocurrencies are {\em permissionless blockchains}.
\quad At the core of a blockchain is the {\em consensus protocol} or {\em smart contract},
which specifies a set of voting rules allowing miners to agree on an ever-growing, linearly-ordered log of transactions forming the distributed ledger.
There are several existing blockchain protocols,
among which the most popular are {\em Proof of Work} (PoW, \cite{Naka08}) and {\em Proof of Stake} (PoS, \cite{KN12, Wood14}).
\begin{itemize}[itemsep = 3 pt]
\item
In the PoW protocol, miners compete with each other by solving a puzzle,
and the one who solves the puzzle first is allowed to append a new block to the blockchain.
Thus, the probability of a miner being selected is proportional to the computational power that the miner has.
PoW coins include Bitcoin, Ethereum and Dogecoin, among others.
\item
In the PoS protocol, the blockchain is updated by a randomly selected miner
where the probability of a miner being drawn is proportional to the stake that the miner owns.
PoS coins include BNB, Cardano and Solana, among others.
\end{itemize}
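The stake-proportional selection rule in the second bullet can be sketched in a few lines of Python (a minimal illustration of the abstract rule, not any particular chain's implementation; the validator names and stake values are made up):

```python
import random

def select_validator(stakes, rng):
    """Draw one validator with probability proportional to its stake (PoS rule)."""
    total = sum(stakes.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for validator, stake in stakes.items():
        acc += stake
        if r <= acc:
            return validator
    return validator  # guard against floating-point edge cases

rng = random.Random(42)
stakes = {"alice": 60.0, "bob": 30.0, "carol": 10.0}
draws = [select_validator(stakes, rng) for _ in range(20000)]
# alice holds 60% of the stake, so she should be selected about 60% of the time.
freq_alice = draws.count("alice") / len(draws)
assert abs(freq_alice - 0.6) < 0.02
```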
\quad As of May 28, 2022, Cryptoslate lists $352$ PoW coins with a total $\$ 800$B ($66 \%$) market capitalization,
and $279$ PoS coins contributing a $\$ 122$B ($10\%$) market capitalization.
One disadvantage of the PoW protocol is that competition among miners has led to exploding levels of energy consumption.
For instance, \cite{Mora18} pointed out the unsustainability of PoW-based blockchains,
and \cite{PS21} showed that energy consumption of Bitcoin is at least $10,000$ times higher than PoS-based blockchains.
\cite{CK17, Saleh19, CHL20} also discussed drawbacks of PoW blockchains from economic perspectives.
Though PoS-based blockchains are still not as widely used as PoW-based ones,
there is a strong incentive among blockchain practitioners to switch from a PoW to PoS ecosystem
as is the case for Ethereum $2.0$ (\cite{Wick21}).
\quad A PoS blockchain mitigates the problems of energy consumption and scalability; however,
it has drawn several criticisms.
\begin{itemize}[itemsep = 3 pt]
\item
{\em Security concern}.
PoS protocols may suffer from the {\em Nothing-at-Stake} problem,
where it is effortless for (malicious) miners to copy every outdated history and participate in all of them.
As shown in \cite{BD19}, any miner with more than $1/(1+e) \approx 27 \%$ of the total stake can launch a {\em double-spending attack},
which suggests that a PoS-based blockchain is less robust than a PoW-based one, where such an attack requires $51\%$ of the computational power.
\cite{BN19} also pointed out difficulties in detecting Nothing-at-Stake for incentive-compatible PoS protocols.
One way to address this challenge is to come up with a clever block rewarding scheme,
as discussed in \cite{KR17, DPS19, Saleh21}.
\item
{\em Centralization concern}.
Critics have argued that the PoS protocol induces wealth concentration, thus leading to centralization (\cite{AF18, IR18, FK19}).
As one key feature of a permissionless blockchain is decentralization,
the problem of centralization calls into question the necessity of using a permissionless blockchain,
since it yields concentration in the voting mechanism just as a permissioned blockchain does.
Previous works (\cite{AC20, AW22}) showed that the PoW protocol may generate concentration in mining power, and thus centralization in various settings.
At a conceptual level, the effect of centralization or decentralization in both PoW and PoS blockchains arises from randomness.
In a PoW blockchain, the randomness is external to the cryptocurrency,
while in a PoS blockchain, the randomness comes from the cryptocurrency itself.
\end{itemize}
\quad In this paper we consider the problem of wealth stability
for different types of miners of a PoS blockchain.
In the sequel, a PoS blockchain miner is also called an {\em investor}.
The prominent work of \cite{RS21} took the first step to study the long-term evolution of large investors'
shares in a PoS cryptocurrency via a {\em P\'olya urn model}.
They proved under various reward assumptions that the shares in the long run do not deviate much from the initial ones
when the initial coin offering is large;
that is,
\begin{equation*}
[\mbox{share at time } \infty] - [\mbox{share at time } 0] \approx 0 \quad \mbox{for each investor}.
\end{equation*}
That is to say, the {\em rich-get-richer phenomenon} does not occur when there is a small number of investors each having a large proportion of initial coins.
Statistical analysis (\cite{FKP19, TGT20}) suggested that there are increasing levels of activities from smaller investors in blockchain networks.
Moreover, online platforms such as Robinhood Crypto allow for fractional trading.
Thus, it is indispensable to understand the evolution of small investor shares as well.
Typically for these investors,
$[\mbox{share at time 0}] \approx 0$
so the above relation holds trivially as `$0 - 0 = 0$'.
This prompts us to consider a more meaningful measure of evolution --
the {\em ratio} $[\mbox{share at time } \infty] / [\mbox{share at time } 0]$,
where `$0/0$' is indeterminate for small investors.
\quad The first contribution of this work is to provide a systematic study of the aforementioned ratio
under various rewarding schemes and for different types of investors
in the setting of \cite{RS21}.
The investors are categorized into {\em large}, {\em medium} and {\em small} ones in terms of their initial endowment of coins.
The key observation is that for investors whose initial coins scale differently to the total initial coin offering,
the ratio exhibits different asymptotic behaviors such as concentration and anti-concentration.
This is a {\em phase transition} phenomenon, which has not yet appeared in the literature on the economics of the PoS protocol.
For instance, we prove that under a constant rewarding scheme,
the ratio of evolution of shares concentrates at one for large investors,
converges to a Gamma random variable for medium investors,
and decays towards zero for small investors (Theorem \ref{thm:1}).
Similar but slightly weaker results are established under a decreasing rewarding scheme (Theorem \ref{thm:2})
and an increasing rewarding scheme (Theorem \ref{thm:3}).
In the case of a geometric reward, our result shows that with probability one,
all the shares will eventually go to one investor in a completely random way.
Such a behavior, which we call {\em chaotic centralization}, indicates that
the geometric rewarding scheme induces long-run instability in a PoS blockchain.
This, however, is not contradictory to \cite{FK19} where it was shown that a geometric reward is optimal in a {\em fixed} time horizon.
\quad As is clear from the growing popularity of blockchain technology, it may not be realistic to assume a {\em fixed} finite number of investors throughout.
The second contribution of this paper is to propose a dynamical approach, which starts with a finite number of investors and evolves to an infinite number of investors.
Our model can be viewed as an infinite population version of the P\'olya urn model,
and the idea comes from the {\em Blackwell-MacQueen urn model} in Bayesian nonparametrics, as well as {\em species sampling} in computational biology.
In this dynamical population setting,
we study the ratio of evolution of shares under various rewarding schemes (Theorem \ref{thm:6}).
Our result shows that under a decreasing rewarding scheme with a suitably large decay,
the shares of the initial capitalists will be diluted in the long run.
This observation is consistent with several existing practices (e.g. Bitcoin),
and it may also provide guidance on the choice of rewarding schemes
so that decentralization is incorporated in a PoS blockchain.
\quad Let us also mention some relevant works.
\cite{TF20} studied the stake-based voting for crowd-sourcing on a blockchain,
and showed that asymmetric information may lead to suboptimal outcomes.
\cite{BFT21} considered a committee-based protocol, and highlighted its advantage over other PoS protocols.
To the best of our knowledge, this work is the first to take the heterogeneity of investors into account,
leading to a phase transition in centralization vs decentralization.
This paper also proposes and studies for the first time a dynamical model in which the number of investors is not fixed
for the PoS protocol.
Our paper thus contributes to both the literature on the decentralization of blockchains
and that on the economics of the PoS protocol.
\quad The remainder of the paper is organized as follows.
In Section \ref{sc2}, we adopt the P\'olya urn to model the PoS protocol, and study the ratio of evolution of shares under various rewarding schemes.
In Section \ref{sc3}, we propose a dynamical population model
which allows the number of investors to increase over time.
There we also provide analysis on the ratio of evolution of shares.
We conclude with Section \ref{sc4}.
All the proofs are given in the Appendix.
\section{Finite population model}
\label{sc2}
\quad In this section, we follow \cite{RS21} to use a time-dependent P\'olya urn model to describe the PoS protocol.
Below we collect the notations that will be used throughout this paper.
\begin{itemize}[itemsep = 3 pt]
\item
$\mathbb{N}_{+}$ denotes the set of positive integers, and $\mathbb{R}_{+}$ denotes the set of positive real numbers.
\item
The symbol $a = \mathcal{O}(b)$ means that $a/b$ is bounded from above as $b \to \infty$;
the symbol $a = \Theta(b)$ means that $a/b$ is bounded from below and above as $b \to \infty$;
and the symbol $a = o(b)$ or $b \gg a$ means that $a/b$ decays towards zero as $b \to \infty$.
\end{itemize}
\quad Let $K \in \mathbb{N}_{+}$ be the number of investors,
and $N \in \mathbb{R}_{+}$ be the number of initial coins/tokens in a PoS blockchain.
The investors are indexed by $[K]: = \{1, \ldots, K\}$, and investor $k$'s initial endowment of coins is $n_{k,0}$ with $\sum_{k = 1}^K n_{k,0} = N$.
We define the {\em investor share} as the fraction of coins each investor owns.
So the initial investor shares $(\pi_{k, 0}, \, k \in [K])$ are given by
\begin{equation}
\label{eq:share0}
\pi_{k, 0}: = \frac{n_{k,0}}{N}, \quad k \in [K].
\end{equation}
Similarly, we denote by $n_{k,t}$ the number of coins owned by investor $k$ at time $t \in \mathbb{N}_{+}$, and the corresponding share is
\begin{equation}
\label{eq:sharet}
\pi_{k,t}:= \frac{n_{k,t}}{N_t}, \quad k \in [K], \quad \mbox{with } N_t:= \sum_{k=1}^K n_{k,t}.
\end{equation}
Here $N_t$ is the total number of coins at time $t$, and thus $N_0 = N$.
Clearly, for each $t \ge 0$ $(\pi_{k, t},\, k \in [K])$ forms a probability distribution on $[K]$.
\quad Now we provide a formal description of the PoS dynamics.
At time $t \in \mathbb{N}_{+}$, investor $k$ is selected at random among $K$ investors with probability $\pi_{k,t-1}$.
Once selected, the investor receives a deterministic reward of $R_t \in \mathbb{R}_{+}$ coins.
Let $S_{k,t}$ be the random event that investor $k$ is selected at time $t$.
Thus, the number of coins owned by each investor evolves as
\begin{equation}
\label{eq:TPolya}
n_{k,t} = n_{k, t-1} + R_t 1_{S_{k,t}}, \quad k \in [K].
\end{equation}
It was shown in \cite[Proposition 5]{RS21} that investors have no incentive to trade their coins under certain risk-neutral conditions.
Without loss of generality, we exclude the possibility of exchanges among investors.
Note that the total number of coins satisfies $N_t = N_{t-1} + R_t$.
Combining \eqref{eq:sharet} and \eqref{eq:TPolya} yields a recursion of the investor shares:
\begin{equation}
\label{eq:Dshare}
\pi_{k,t} = \frac{N_{t-1}}{N_t} \pi_{k, t-1} + \frac{R_t}{N_t} 1_{S_{k,t}}, \quad k \in [K].
\end{equation}
Let $\mathcal{F}_t$ be the $\sigma$-algebra generated by the random events $(S_{k, r}: k \in [K], r \le t)$.
It is easily seen that for each $k \in [K]$, the process of investor $k$'s share $(\pi_{k,t}, \, t \ge 0)$ is an $\mathcal{F}_t$-martingale.
By the martingale convergence theorem (see e.g. \cite{Durrett}),
\begin{equation}
(\pi_{1,t}, \ldots, \pi_{K,t}) \longrightarrow (\pi_{1,\infty}, \ldots, \pi_{K,\infty}) \quad \mbox{as } t \rightarrow \infty \mbox{ with probability }1,
\end{equation}
where $(\pi_{1,\infty}, \ldots, \pi_{K,\infty})$ is some random probability distribution on $[K]$.
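These dynamics are straightforward to simulate; the following sketch (with illustrative endowments and a constant reward) checks the martingale property $\mathbb{E}[\pi_{k,t}] = \pi_{k,0}$ by Monte Carlo.

```python
import random

def simulate_shares(n0, rewards, rng):
    """One run of the PoS dynamics: at each step an investor is drawn with
    probability equal to its current share and receives the reward R_t."""
    n = list(n0)
    for R in rewards:
        k = rng.choices(range(len(n)), weights=n)[0]
        n[k] += R
    total = sum(n)
    return [x / total for x in n]

rng = random.Random(0)
n0 = [3.0, 2.0, 5.0]          # illustrative initial endowments, N = 10
rewards = [1.0] * 200         # constant reward R = 1
runs = [simulate_shares(n0, rewards, rng) for _ in range(4000)]
# The share process (pi_{k,t}) is a martingale, so E[pi_{1,200}] = pi_{1,0} = 0.3.
mean_share_0 = sum(r[0] for r in runs) / len(runs)
assert abs(mean_share_0 - 0.3) < 0.02
```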
\quad We consider the evolution of the investor shares $(\pi_{k,t}, \, k \in [K])$, as well as its long time limit $(\pi_{k,\infty}, \, k \in [K])$.
One major question is whether the PoS protocol triggers the rich-get-richer, or concentration, phenomenon, which we address by comparing the investor shares at the initial time and in the long time limit.
As briefly discussed in the introduction,
\cite{RS21} were concerned with large investors, i.e. $\pi_{k, 0} = \Theta(1)$ or equivalently $n_{k,0} = \Theta(N)$.
They proved under various reward assumptions that
\begin{equation}
\label{eq:RSres}
\lim_{N \to \infty} \mathbb{P}(|\pi_{k, \infty} - \pi_{k,0}| > \varepsilon) = 0 \quad \mbox{for each } k \in [K] \mbox{ and each fixed } \varepsilon > 0.
\end{equation}
That is, the rich-get-richer phenomenon does not occur typically when there is a small number $K = \Theta(1)$ of investors each having a significant proportion $\pi_{k, 0} = \Theta(1)$ of initial coins.
However, there may also be medium or even small investors whose initial shares $\pi_{k,0} = o(1)$ are relatively small.
For instance,
\begin{itemize}[itemsep = 3 pt]
\item
When the number of investors $K \approx N$, there are less wealthy investors with initial endowment of coins $n_{k, 0} = f(N)$ such that $f(N) \to \infty$ but $f(N)/N \to 0$, and small investors with initial number of coins $n_{k,0} = \Theta(1)$.
\item
When the number of investors $K \gg N$, there may also exist small investors who own a fractional number of coins $n_{k,0} = o(1)$.
\end{itemize}
In these cases, we have $\pi_{k,0} = o(1)$,
so for each fixed $\varepsilon > 0$ the probability $\mathbb{P}(|\pi_{k, \infty} - \pi_{k,0}| > \varepsilon)$ always goes to $0$ as $N \to \infty$.
Thus, it is more reasonable to consider the ratios $\pi_{k, t}/\pi_{k,0}$ or $\pi_{k, \infty}/\pi_{k,0}$ instead of the differences.
In the following subsections,
we study the ratio of evolution of shares under various rewarding schemes and for different types of investors,
encompassing all the above instances.
In particular, we observe phase transitions for different types of investors in terms of wealth stability.
\subsection{Constant reward}
We start with the constant reward $R_t \equiv R > 0$ (e.g. Blackcoin).
Let $\Gamma(z):=\int_0^{\infty} x^{z-1} e^{-x} dx$ be the Gamma function.
Recall that the Dirichlet distribution with parameters $(a_1, \ldots, a_K)$, which we simply denote $\Dir(a_1, \ldots, a_K)$, has support on the standard simplex $\{(x_1, \ldots, x_K) \in \mathbb{R}_{+}^K: \sum_{k = 1}^K x_k = 1\}$ and has density
\begin{equation}
\label{eq:Dirichlet}
f(x_1, \ldots, x_K) = \frac{\Gamma\left(\sum_{k=1}^K a_k \right)}{\prod_{k=1}^K \Gamma(a_k)} \prod_{k=1}^K x_k^{a_k-1}.
\end{equation}
For $K=2$, the Dirichlet distribution reduces to the beta distribution which is denoted by $\bet(a_1, a_2)$.
It is easily seen that if $(x_1, \ldots,x_K) \stackrel{d}{=} \Dir(a_1, \ldots, a_K)$ then for each $k \in [K]$
$x_k \stackrel{d}{=} \bet(a_k, \sum_{j \ne k} a_j)$.
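For intuition, the Dirichlet distribution can be sampled by normalizing independent Gamma variables, and the Beta marginal noted above can be checked numerically (a minimal sketch with illustrative parameters):

```python
import random

def dirichlet(a, rng):
    """Sample Dir(a_1,...,a_K) by normalizing independent Gamma(a_k, 1) variables."""
    g = [rng.gammavariate(ak, 1.0) for ak in a]
    s = sum(g)
    return [x / s for x in g]

rng = random.Random(1)
a = [2.0, 3.0, 5.0]
samples = [dirichlet(a, rng) for _ in range(20000)]
# Marginal check: x_1 ~ Beta(a_1, a_2 + a_3), whose mean is a_1 / sum(a) = 0.2.
mean_x1 = sum(s[0] for s in samples) / len(samples)
assert abs(mean_x1 - 0.2) < 0.01
```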
The following result is concerned with the evolution of shares in a PoS protocol with a constant reward.
\begin{theorem}
\label{thm:1}
Assume that the coin reward is $R_t \equiv R > 0$.
Then the investor shares have a limiting distribution
\begin{equation}
\label{eq:limDirichlet}
(\pi_{1,\infty}, \ldots, \pi_{K,\infty}) \stackrel{d}{=} \Dir\left(\frac{n_{1,0}}{R}, \ldots,\frac{n_{K,0}}{R}\right).
\end{equation}
Moreover,
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
For $n_{k,0} = f(N)$ such that $f(N) \to \infty$ as $N \to \infty$ (i.e. $\pi_{k,0} \gg 1/N$), we have for each $\varepsilon > 0$ and each $t \ge 1$ or $t = \infty$:
\begin{equation}
\label{eq:coninter}
\mathbb{P}\left(\left|\frac{\pi_{k, t}}{\pi_{k,0}} - 1\right| > \varepsilon \right) \le \frac{5 R}{4 \varepsilon^2 f(N)},
\end{equation}
which converges to $0$ as $N \to \infty$.
\item[(ii)]
For $n_{k,0} = \Theta(1)$ (i.e. $\pi_{k,0} = \Theta(1/N)$), we have the convergence in distribution:
\begin{equation}
\label{eq:consmall}
\frac{\pi_{k,\infty}}{\pi_{k,0}} \stackrel{d}{\longrightarrow} \frac{R}{n_{k,0}} \gamma\left( \frac{n_{k,0}}{R}\right) \quad \mbox{as } N \to \infty,
\end{equation}
where $\gamma\left(\frac{n_{k,0}}{R}\right)$ is a Gamma random variable with density $x^{\frac{n_{k,0}}{R}-1} e^{-x} 1_{x > 0}/\Gamma\left(\frac{n_{k,0}}{R}\right)$.
\item[(iii)]
For $n_{k,0} = o(1)$ (i.e. $\pi_{k,0} = o(1/N)$), we have $\var(\pi_{k,\infty}/\pi_{k,0}) \to \infty$ as $N \to \infty$.
Moreover, for each $\varepsilon > 0$:
\begin{equation}
\label{eq:continy}
\mathbb{P}\left(\frac{\pi_{k,\infty}}{\pi_{k,0}} < \varepsilon \right) \to 1 \quad \mbox{as } N \to \infty.
\end{equation}
\end{enumerate}
\end{theorem}
\quad The proof of Theorem \ref{thm:1} is given in Appendix \ref{scA}.
Let us make a few comments.
The theorem reveals a phase transition of shares in the long run between large, medium and small investors.
Part ($i$) shows that for large investors, their shares are stable in the sense that the ratio between the share at a later time $t$ and the initial share, i.e. $\pi_{k,t}/\pi_{k,0}$, converges in probability to $1$ as the initial coin offerings $N \to \infty$.
In particular, for extremely large investors with initial coins $n_{k,0} = \Theta(N)$
this is equivalent to the stability as defined by \eqref{eq:RSres}.
The stability also holds for less wealthy large investors with $n_{k,0} \gg 1$ but $n_{k,0} = o(N)$.
Note that the concentration bound \eqref{eq:coninter} is uniform in time $t$, implying that the rich-get-richer phenomenon does not occur at any time.
See Figure \ref{fig:1} for an illustration of the bound.
\begin{figure}[htb]
\includegraphics[width=0.45\columnwidth]{c_large_2.png}
\caption{Constant reward: stability of $\pi_{k,t}/\pi_{k,0}$ for large investors.
Blue curve: $P_{\max}$ is a MC estimate of $\max_{1 \le t \le 50000} \mathbb{P}\left(\left| \frac{\pi_{k,t}}{\pi_{k,0}} - 1 \right| > 0.05 \right)$.
Orange curve: right side upper bound in \eqref{eq:coninter} with $R = 1$, $\varepsilon = 0.05$, $n_{k,0} = N/2$ and $N \in \{1000, 1500, 2000, \ldots, 10000\}$.}
\label{fig:1}
\end{figure}
\quad On the other hand, the evolution of shares for medium or small investors has very different limiting behaviors.
Part ($ii$) shows that medium investors' shares are volatile in such a way that the ratio $\pi_{k,\infty}/\pi_{k,0}$ is approximated by a Gamma distribution independent of the initial coin offering, and hence $\var(\pi_{k,\infty}/\pi_{k,0}) \approx \frac{R}{n_{k,0}}$.
For instance, if $n_{k,0} = R = 1$ the limiting distribution of the ratio $\pi_{k,\infty}/\pi_{k,0}$ reduces to the exponential distribution with parameter $1$.
See Figure \ref{fig:2A} for an illustration of this approximation.
In this case, we have
\begin{equation*}
\mathbb{P}\left(\frac{\pi_{k,\infty}}{\pi_{k,0}} > \theta \right) \approx e^{-\theta} \quad \mbox{as } N \to \infty.
\end{equation*}
So with probability $e^{-2} \approx 0.135$ a medium investor's share will double, and with probability $1 - e^{-0.5} \approx 0.393$ this investor's share will be halved.
Part ($iii$) shows that for small investors, their shares will be shrinking along the time, and the ratio $\pi_{k,\infty}/\pi_{k,0}$ converges to $0$ in probability as $N \to \infty$.
This is indeed the {\em poor-get-poorer} phenomenon as illustrated in Figure \ref{fig:2B}.
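The probabilities quoted above follow directly from the $\mathrm{Exp}(1)$ limit of the ratio when $n_{k,0} = R = 1$; a quick numerical cross-check:

```python
import math
import random

# For n_{k,0} = R = 1 the limiting ratio pi_{k,inf}/pi_{k,0} is Exp(1), so
# P(share doubles) = e^{-2} and P(share is halved) = 1 - e^{-1/2}.
p_double = math.exp(-2)          # about 0.135
p_halved = 1 - math.exp(-0.5)    # about 0.393
assert abs(p_double - 0.135) < 1e-3
assert abs(p_halved - 0.393) < 1e-3

# Monte Carlo cross-check against Exp(1) draws.
rng = random.Random(2)
draws = [rng.expovariate(1.0) for _ in range(100000)]
assert abs(sum(d > 2 for d in draws) / len(draws) - p_double) < 0.01
assert abs(sum(d < 0.5 for d in draws) / len(draws) - p_halved) < 0.01
```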
\begin{figure}[htb]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{c_small.png}
\caption{Blue curve: histogram of $\pi_{k,50000}/\pi_{k,0}$ with $n_{k,0} = R = 1$ and $N = 100$.
Orange curve: Gamma distribution.}
\label{fig:2A}
\end{subfigure}\hfil
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{c_tiny.png}
\caption{MC estimates of $\mathbb{P}\left(\pi_{k,50000}/\pi_{k,0} > 0.05 \right)$ and $\var\left(\pi_{k,50000}/\pi_{k,0} \right)$ with $R = 1$, $n_{k,0} = N^{-1.1}$ and $N \in \{100, 110, \ldots, 300\}$}
\label{fig:2B}
\end{subfigure}\hfil
\caption{Constant reward: instability of $\pi_{k,t}/\pi_{k,0}$ for medium and small investors.}
\end{figure}
\quad Finally, we note that the results in Theorem \ref{thm:1} hold jointly for a finite number of investors belonging to the same category.
That is,
\begin{itemize}[itemsep = 3 pt]
\item
For $n_{k_1, 0}, \ldots, n_{k_\ell, 0}$ with $k_1, \ldots, k_\ell \in [K]$ satisfying the conditions in ($i$),
\begin{equation*}
\left(\frac{\pi_{k_1,t}}{\pi_{k_1,0}}, \ldots, \frac{\pi_{k_\ell,t}}{\pi_{k_\ell,0}} \right) \rightarrow 1 \quad \mbox{in probability as } N \to \infty.
\end{equation*}
\item
For $n_{k_1, 0}, \ldots, n_{k_\ell, 0}$ with $k_1, \ldots, k_\ell \in [K]$ satisfying the condition in ($ii$),
\begin{equation*}
\left(\frac{\pi_{k_1,\infty}}{\pi_{k_1,0}}, \ldots, \frac{\pi_{k_\ell,\infty}}{\pi_{k_\ell,0}} \right) \stackrel{d}{\longrightarrow} \left(\frac{R}{n_{k_1,0}} \gamma\left(\frac{n_{k_1,0}}{R}\right), \ldots, \frac{R}{n_{k_\ell,0}} \gamma\left(\frac{n_{k_{\ell},0}}{R}\right) \right) \quad \mbox{as } N \to \infty,
\end{equation*}
where $\gamma(n_{k_1,0}/R), \ldots, \gamma(n_{k_{\ell},0}/R)$ are independent Gamma random variables.
\item
For $n_{k_1, 0}, \ldots, n_{k_\ell, 0}$ with $k_1, \ldots, k_\ell \in [K]$ satisfying the conditions in ($iii$),
\begin{equation*}
\left(\frac{\pi_{k_1,t}}{\pi_{k_1,0}}, \ldots, \frac{\pi_{k_\ell,t}}{\pi_{k_\ell,0}} \right) \rightarrow 0 \quad \mbox{in probability as } N \to \infty.
\end{equation*}
\end{itemize}
\subsection{Decreasing reward}
We consider a decreasing reward, i.e.\ $R_t \ge R_{t+1}$ for each $t \ge 0$ (e.g. Bitcoin).
The following theorem shows that the ratio of evolution of shares is more complicated
in the PoS protocol with a decreasing reward scheme, and the phase transition may depend on how the reward function decreases over the time.
\begin{theorem}
\label{thm:2}
Assume that the coin reward is $R_t$ with $R_t \ge R_{t+1}$ for each $t \ge 0$.
\begin{enumerate}[itemsep = 3 pt]
\item
If $R_t$ is bounded away from $0$, i.e. $\lim_{t \to \infty} R_t = \underline{R} > 0$, then
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
For $n_{k,0} = f(N)$ such that $f(N) \to \infty$ as $N \to \infty$ (i.e. $\pi_{k,0} \gg 1/N$), we have for each $\varepsilon > 0$ and each $t \ge 1$ or $t = \infty$:
\begin{equation}
\label{eq:decinter1}
\mathbb{P}\left(\left|\frac{\pi_{k, t}}{\pi_{k,0}} - 1\right| > \varepsilon \right) \le \frac{R_1}{\varepsilon^2 f(N)},
\end{equation}
which converges to $0$ as $N \to \infty$.
\item[(ii)]
For $n_{k,0} = \Theta(1)$ (i.e. $\pi_{k,0} = \Theta(1/N)$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) = \Theta(1)$.
Moreover, there is $c > 0$ independent of $N$ such that for $\varepsilon > 0$ sufficiently small:
\begin{equation}
\label{eq:decsmall1}
\mathbb{P}\left(\left|\frac{\pi_{k, \infty}}{\pi_{k,0}} - 1 \right| > \varepsilon\right) \ge c.
\end{equation}
\item[(iii)]
For $n_{k,0} = o(1)$ (i.e. $\pi_{k,0} = o(1/N)$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) \to \infty$ as $N \to \infty$.
\end{enumerate}
\item
If $R_t = \Theta(t^{-\alpha})$ for $\alpha > \frac{1}{2}$, then
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
For $n_{k,0} > 0$ such that $N n_{k,0} \to \infty$ as $N \to \infty$ (i.e. $N^2 \pi_{k,0} \to 0$), we have for each $\varepsilon > 0$ and each $t \ge 1$ or $t = \infty$:
\begin{equation}
\label{eq:decinter2}
\mathbb{P}\left(\left|\frac{\pi_{k, t}}{\pi_{k,0}} - 1\right| > \varepsilon \right) \le \frac{\sum_{t \ge 1} R_t^2}{\varepsilon^2 N n_{k,0}},
\end{equation}
which converges to $0$ as $N \to \infty$.
\item[(ii)]
For $n_{k,0} = \Theta(1/N)$ (i.e. $\pi_{k,0} = \Theta(1/N^2)$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) = \Theta(1)$.
\item[(iii)]
For $n_{k,0} = o(1/N)$ (i.e. $\pi_{k,0} = o(1/N^2)$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) \to \infty$ as $N \to \infty$.
\end{enumerate}
\item
If $R_t = \Theta(t^{-\alpha})$ for $\alpha < \frac{1}{2}$, then
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
For $n_{k,0} > 0$ such that $N^{\frac{\alpha}{1-\alpha}} n_{k,0} \to \infty$ as $N \to \infty$ (i.e. $N^{\frac{1}{1-\alpha}} \pi_{k,0} \to 0$),
there is $C > 0$ independent of $t$ and $N$ such that for each $\varepsilon > 0$ and each $t \ge 1$ or $t = \infty$:
\begin{equation}
\label{eq:decinter3}
\mathbb{P}\left(\left|\frac{\pi_{k, t}}{\pi_{k,0}} - 1\right| > \varepsilon \right) \le \frac{C}{N^{\frac{\alpha}{1-\alpha}} n_{k,0}},
\end{equation}
which converges to $0$ as $N \to \infty$.
\item[(ii)]
For $n_{k,0} = \Theta(N^{-\frac{\alpha}{1-\alpha}})$ (i.e. $\pi_{k,0} = \Theta(N^{-\frac{1}{1-\alpha}})$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) = \Theta(1)$.
Moreover, there is $c > 0$ independent of $N$ such that for $\varepsilon > 0$ sufficiently small:
\begin{equation}
\label{eq:decsmall3}
\mathbb{P}\left(\left|\frac{\pi_{k, \infty}}{\pi_{k,0}} - 1 \right| > \varepsilon\right) \ge c.
\end{equation}
\item[(iii)]
For $n_{k,0} = o(N^{-\frac{\alpha}{1-\alpha}})$ (i.e. $\pi_{k,0} = o(N^{-\frac{1}{1-\alpha}})$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) \to \infty$ as $N \to \infty$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\quad The proof of Theorem \ref{thm:2} is given in Appendix \ref{scB}.
The theorem distinguishes three ways that the reward function decreases,
leading to different thresholds in the phase transition to categorize large, medium and small investors.
Part ($1$) assumes that the reward function decreases to a nonzero value.
In this case, the threshold to identify large, medium and small investors is $n_{k,0} = \Theta(1)$,
which is the same as that of the PoS protocol with a constant reward.
This may not be surprising,
since the underlying dynamics is not much different from the one with a constant reward in the long run.
For large investors, the ratio $\pi_{k, \infty}/\pi_{k,0}$ concentrates at $1$;
while for medium investors there is the {\em anti-concentration} bound \eqref{eq:decsmall1},
indicating that the evolution of a medium investor's share is no longer stable, and may be volatile.
For small investors, we know that the variance of $\pi_{k, \infty}/\pi_{k,0}$ is large.
\quad Part ($2$) considers a fast decreasing reward $R_t = \Theta(t^{-\alpha})$ with $\alpha > 1/2$.
In this case, the threshold to identify different types of investors is $n_{k,0} = 1/N$,
which is independent of the exact decreasing rate of $R_t$.
For large investors, the ratio $\pi_{k, \infty}/\pi_{k,0}$ concentrates at $1$;
while for medium (resp. small) investors,
the variance of $\pi_{k, \infty}/\pi_{k,0}$ is bounded (resp. tends to infinity).
\quad Finally, part ($3$) deals with a slow decreasing reward $R_t = \Theta(t^{-\alpha})$ with $\alpha < 1/2$.
In contrast with the previous cases,
the threshold $n_{k,0} = \Theta(N^{-\frac{\alpha}{1 - \alpha}})$ to identify different types of investors
depends on the decreasing rate of $R_t$.
When $\alpha \to 0$, the threshold becomes $\Theta(1)$ which is consistent with that in part ($1$),
and when $\alpha \to 1/2$, the threshold becomes $\Theta(1/N)$ which agrees with that in part ($2$).
For large investors, the ratio $\pi_{k, \infty}/\pi_{k,0}$ concentrates at $1$;
while for medium investors there is the anti-concentration bound \eqref{eq:decsmall3}.
For small investors, the variance of $\pi_{k, \infty}/\pi_{k,0}$ is large.
\quad Note that the statements for medium and small investors in all three cases
are weaker than those in Theorem \ref{thm:1}.
This is due to the fact that the exact distributions of $\pi_{k,\infty}$ are not available for general rewarding schemes.
See Appendix \ref{scE1} for numerical illustrations concerning Theorem \ref{thm:2}.
\subsection{Increasing reward}
We consider an increasing reward of the form $R_t = \rho N_{t-1}^{\gamma}$ for some $\rho > 0$ and $\gamma > 0$ (e.g. EOS).
The following theorem shows that the shares evolve very differently in the PoS protocol
with a geometric reward vs a sub-geometric one.
\begin{theorem}
\label{thm:3}
Assume that the reward $R_t = \rho N_{t-1}^{\gamma}$ for some $\rho > 0$ and $\gamma > 0$.
\begin{enumerate}[itemsep = 3 pt]
\item
If $\gamma > 1$,
then $\pi_{k, \infty} \in \{0,1\}$ almost surely with
\begin{equation}
\label{eq:incextreme}
\mathbb{P}(\pi_{k, \infty} = 1) = \pi_{k,0}, \quad \mathbb{P}(\pi_{k, \infty} = 0) = 1 - \pi_{k,0}.
\end{equation}
\item
If $\gamma < 1$, then
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
For $n_{k,0} = f(N)$ such that $f(N)/N^{\gamma} \to \infty$ as $N \to \infty$ (i.e. $\pi_{k,0}/N^{\gamma - 1} \to \infty$), we have for each $\varepsilon > 0$ and each $t \ge 1$ or $t = \infty$:
\begin{equation}
\label{eq:incinter}
\mathbb{P}\left(\left|\frac{\pi_{k, t}}{\pi_{k,0}} - 1\right| > \varepsilon \right) \le \frac{\rho N^{\gamma}}{(1 - \gamma) n_{k,0} \varepsilon^2},
\end{equation}
which converges to $0$ as $N \to \infty$.
\item[(ii)]
For $n_{k,0} = \Theta(N^{\gamma})$ (i.e. $\pi_{k,0} = \Theta(N^{\gamma-1})$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) = \Theta(1)$.
Moreover, there exists $c > 0$ independent of $N$ such that for $\varepsilon > 0$ sufficiently small:
\begin{equation}
\label{eq:incsmall}
\mathbb{P}\left(\left|\frac{\pi_{k, \infty}}{\pi_{k,0}} - 1 \right| > \varepsilon\right) \ge c.
\end{equation}
\item[(iii)]
For $n_{k,0} = o(N^{\gamma})$ (i.e. $\pi_{k,0} = o(N^{\gamma-1})$), we have $\var \left(\frac{\pi_{k, \infty}}{\pi_{k,0}}\right) \to \infty$ as $N \to \infty$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\quad The proof of Theorem \ref{thm:3} is given in Appendix \ref{scC}.
The theorem deals with two increasing reward schemes: a geometric reward and a sub-geometric one.
Part ($1$) considers a geometric reward,
and shows that with probability one, all the shares will eventually go to one investor in such a way that
\begin{equation*}
\mathbb{P}(\pi_{k,\infty} = 1 \mbox{ and } \pi_{j,\infty} = 0 \mbox{ for all } j \ne k) = \pi_{k,0}, \quad k \in [K].
\end{equation*}
We call this chaotic centralization:
the underlying dynamics lead to a dictatorship, but the dictator is selected in a completely random manner.
The investor with the largest initial endowment of coins has the highest chance to control the PoS blockchain under a geometric reward.
\quad Part ($2$) considers a polynomial reward $R_t = \Theta(t^{\frac{\gamma}{1-\gamma}})$ for $\gamma < 1$.
In this case, the threshold to distinguish large, medium and small investors is $n_{k,0} = \Theta(N^\gamma)$.
For large investors, the ratio $\pi_{k, \infty}/\pi_{k,0}$ concentrates at $1$;
while for medium investors there is the anti-concentration bound \eqref{eq:incsmall}.
For small investors, the variance of $\pi_{k, \infty}/\pi_{k,0}$ explodes.
See Appendix \ref{scE2} for numerical illustrations related to Theorem \ref{thm:3}.
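The chaotic-centralization statement in part ($1$) can be checked by direct Monte Carlo simulation. The sketch below (with hypothetical values $\rho = 1$, $\gamma = 2$ and two investors; any choice with $\gamma > 1$ behaves similarly) runs the urn until the total stake is astronomically large and records which investor ends up holding essentially all coins:

```python
import random

def simulate_pos(n0, rho=1.0, gamma=2.0, steps=50):
    """One run of the urn with geometric reward R_t = rho * N_{t-1}^gamma."""
    n = list(map(float, n0))
    for _ in range(steps):
        N = sum(n)
        if N > 1e90:                 # total stake explodes doubly exponentially
            break
        r = random.random() * N
        acc = 0.0
        for k, nk in enumerate(n):   # select investor k with probability n_k/N
            acc += nk
            if r < acc:
                break
        n[k] += rho * N ** gamma     # selected investor collects the reward
    N = sum(n)
    return max(range(len(n)), key=lambda j: n[j] / N)   # near-monopolist

random.seed(0)
trials, wins = 4000, 0
for _ in range(trials):
    wins += simulate_pos([3.0, 7.0]) == 1
print(wins / trials)   # close to the initial share pi_{2,0} = 0.7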
\section{Infinite population model}
\label{sc3}
\quad This section is concerned with modeling the PoS protocol which allows for an ``infinite'' number of investors in response to the growing popularity of blockchains.
In Section \ref{sc31}, we start with a discrete infinite population model as a direct extension to the P\'olya urn studied in Section \ref{sc2}.
In Section \ref{sc32}, we consider an infinite population model sampled from a continuous space.
Combining the ideas from these two subsections,
we propose and analyze a dynamical population model for the PoS protocol in Section \ref{sc33}.
\subsection{Discrete infinite population}
\label{sc31}
In Section \ref{sc2}, we consider the P\'olya urn model with a finite number of investors, i.e. $K < \infty$,
and study the limiting behavior of the ratio $\pi_{k,t}$ or $\pi_{k,\infty}$ as the initial coin offerings $N \to \infty$.
Here we work directly with $K = \infty$ investors to model the PoS protocol.
\quad We give a formal description of the model.
The P\'olya urn model with an infinite population is just as described in Section \ref{sc2} but for $K = \infty$.
More precisely, there are a countably infinite number of investors indexed by $\mathbb{N}$.
Let $n_{k, t}$, $k \in \mathbb{N}$ be investor $k$'s endowment of coins at time $t$
with $\sum_{k = 1}^{\infty} n_{k,t} = N_t$.
At time $t+1$, investor $k$ is selected at random among all investors with probability $\pi_{k,t} := n_{k,t}/N_t$.
Once selected, the investor receives a deterministic reward of $R_t \ge 0$.
This way, the equations \eqref{eq:share0}--\eqref{eq:Dshare} hold for $K = \infty$,
and there are infinitely many small investors whose initial coins are $n_{k,0} = o(1)$ (i.e. $\pi_{k,0} = o(1/N)$).
The investor shares are given by a vector of infinite length
$(\pi_{k,t}, \, k \in \mathbb{N}_+)$.
Again by the martingale convergence theorem,
\begin{equation}
(\pi_{1,t}, \pi_{2,t}, \ldots) \to (\pi_{1,\infty}, \pi_{2,\infty}, \ldots) \quad \mbox{as } t \to \infty,
\end{equation}
where $(\pi_{k,\infty}, \, k \in \mathbb{N}_+)$ is a random probability measure on $\mathbb{N}_+$.
\quad We have seen in Theorem \ref{thm:1} that for a finite number of $K$ investors, if the reward $R_t$ is constant, the limiting investor shares $(\pi_{1,\infty}, \ldots, \pi_{K,\infty})$ have the Dirichlet distribution.
For the P\'olya urn model with an infinite population,
one important problem is to understand the limiting shares $(\pi_{1,\infty}, \pi_{2, \infty}, \ldots)$
under constant or more general rewarding schemes.
To this end, we recall the definition of the {\em Dirichlet-Ferguson measure}, or simply the {\em Dirichlet measure},
which was introduced in \cite{Fer73, BM73} in the context of nonparametric Bayesian analysis.
\begin{definition}
\label{def:Dirichlet}
Let $S$ be a Polish space with Borel $\sigma$-field $\mathcal{S}$,
and let $\mu$ be a positive measure on $(S, \mathcal{S})$ with $0 < \mu(S) < \infty$.
We say that $F$ has $\Dir(\mu)$ distribution if $F$ is a random distribution on $S$
such that for every measurable partition $B_1, \ldots, B_k$ of $S$, the random vector $(F(B_1), \ldots, F(B_k))$ has $\Dir(\mu(B_1), \ldots, \mu(B_k))$ distribution.
\end{definition}
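As a quick illustration of Definition \ref{def:Dirichlet}, one can sample a Dirichlet random measure on a finite set by normalizing independent Gamma variables, and check the finite-dimensional property on a partition. The weights below are hypothetical:

```python
import random

random.seed(1)
# Dirichlet-Ferguson measure on S = {1,...,6} with weights mu({k}); mass 8.0.
mu = [0.5, 1.0, 1.5, 2.0, 1.0, 2.0]

def sample_F():
    """Gamma representation: F_k = G_k / sum(G), G_k ~ Gamma(mu_k, 1)."""
    g = [random.gammavariate(a, 1.0) for a in mu]
    s = sum(g)
    return [x / s for x in g]

# Aggregation over the partition B1 = {1,2,3}, B2 = {4,5,6}:
# (F(B1), F(B2)) ~ Dir(mu(B1), mu(B2)) = Dir(3, 5), so E F(B1) = 3/8.
m = 20000
est = sum(sum(sample_F()[:3]) for _ in range(m)) / m
print(est)   # close to mu(B1)/mu(S) = 0.375
```

The sums of independent Gamma variables are again Gamma, which is exactly why the partition property in the definition is consistent.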
\quad The following theorem studies the evolution of shares in a PoS protocol modeled by the P\'olya urn with an infinite population.
\begin{theorem}
\label{thm:4}
Assume that the coin reward $R_t \equiv R > 0$. Then the investor shares have a limiting distribution
\begin{equation}
\label{eq:limDirinf}
(\pi_{1, \infty}, \pi_{2, \infty}, \ldots) \stackrel{d}{=} \Dir(\mu),
\end{equation}
where $\mu$ is a positive measure on $\mathbb{N}$ with $\mu(\{k\}) = \frac{n_{k,0}}{R}$, $k \in \mathbb{N}$.
Moreover, for each investor or a finite number of investors in the same category,
the results in Theorems \ref{thm:1}, \ref{thm:2} and \ref{thm:3} hold under the corresponding rewarding schemes.
\end{theorem}
\quad The proof of Theorem \ref{thm:4} is given in Appendix \ref{scD1}.
Note that if there are a finite number of $K$ investors,
the measure $\mu$ is then supported on $[K]$.
This recovers the identity in distribution \eqref{eq:limDirichlet} in Theorem \ref{thm:1}.
\subsection{Infinite population from continuum}
\label{sc32}
We consider an urn model with an infinite population which is sampled from a continuous space --
we call it a {\em PoS feature model}.
The motivation comes from understanding the influence induced by common features among investors in the PoS protocol.
The influence of a particular feature is measured by the total shares that investors having this feature own.
In many generic cases, features are represented or approximated by elements in a continuous sample space,
e.g. geolocation of an investor, market experience of an investor measured in time, index assessing the level of risk aversion of an investor, and so on.
Without loss of generality, we abstract the feature space as the unit interval $S = [0,1]$.
\quad The PoS feature model is inspired from the Blackwell-MacQueen construction of a P\'olya urn on general state spaces as described in Lemma \ref{lem:BMscheme}.
At each time $t \ge 1$, an investor with some feature $X_t \in S = [0,1]$ is selected to receive a deterministic reward $R_t \ge 0$.
Now we specify the selection rule over the time.
At time $t = 0$, the initial coin offering is $N_0 = N$, and these coins are distributed among the investors with features in $S = [0,1]$ according to a diffuse probability measure $\nu$ on $S = [0,1]$.
That is, the number of coins owned by investors with features in $[x, x+dx]$ is $N \nu(dx)$.
At time $t = 1$, an investor with feature $X_1$ is selected by the rule
\begin{equation}
\label{eq:selection1}
\mathbb{P}(X_1 \in \cdot) = \nu(\cdot),
\end{equation}
and then receives a reward $R_1$.
So the total number of coins becomes $N_1 = N + R_1$.
At time $t = 2$, an investor with feature $X_2$ is selected with probability $\mathbb{P}(X_2 \in \cdot) = (R_1 \delta_{X_1}(\cdot) + N \nu(\cdot))/N_1$.
More generally, at time $t \ge 2$ an investor with feature $X_t$ is selected by the rule:
\begin{equation}
\label{eq:selectiont}
\mathbb{P}(X_t \in \cdot|X_1, \ldots, X_{t-1}) = \frac{\sum_{n = 1}^{t-1}R_n \delta_{X_n}(\cdot)}{N + \sum_{n = 1}^{t-1} R_n} + \frac{N \nu(\cdot)}{N + \sum_{n = 1}^{t-1} R_n}.
\end{equation}
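Note that with a constant reward $R_n \equiv R$, the reward cancels in \eqref{eq:selectiont}: the probability of drawing a brand-new feature at step $t$ is $\frac{N/R}{N/R + t - 1}$, the classical Hoppe-urn dynamics with concentration $\theta = N/R$. A small sketch (illustrative $\theta$) counts the distinct features among the first $t$ selections:

```python
import random, math

random.seed(6)
theta, T, runs = 5.0, 2000, 60     # theta = N/R (hypothetical value)

def count_features():
    """Distinct features among the first T selections under constant reward."""
    k = 0
    for t in range(1, T + 1):
        # a brand-new feature X_t ~ nu appears with prob theta/(theta + t - 1)
        if random.random() < theta / (theta + t - 1):
            k += 1
    return k

avg = sum(count_features() for _ in range(runs)) / runs
# E[K_T] = sum_t theta/(theta + t - 1) grows like theta * log T.
print(avg, theta * math.log(T))
```

The logarithmic growth of the number of distinct features appears below as the rate $K_t/\log t \to N/R$; the convergence is itself only logarithmic, so at moderate $t$ the ratio still sits noticeably below $\theta$.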
\quad The main difference between the PoS feature model \eqref{eq:selection1}--\eqref{eq:selectiont} and the urn models discussed in the previous sections is that
there are uncountably many features of the investors for selection,
but there are only a countable number of investors selected at time $t = 1,2, \ldots$
Two questions arise naturally:
($1$). How to label the features of the investors selected by the feature model \eqref{eq:selection1}--\eqref{eq:selectiont}?
($2$). What are the limiting shares corresponding to these features?
\quad These problems are closely related to the problem of species sampling and exchangeable partitions studied in \cite{Pitman95, Pitman96}.
Let us spell this out in the PoS setting as follows.
For (1) the simplest way to label the features among the selected investors is by their order of appearance.
For $j \ge 1$, denote $\widetilde{X}_j$ as the $j^{th}$ feature to appear in the sequence of $X_1, X_2, \ldots$
Let $M_1:= 1$ and
$M_j: = \inf\{n: n > M_{j-1}, X_n \notin \{X_1, \ldots, X_{n-1}\}\}$
for $j \ge 2$,
with the convention $\inf \emptyset = \infty$.
So $M_j$ is the index at which the $j^{th}$ feature appears for the first time, and $\widetilde{X}_j = X_{M_j}$ on the event $\{M_j < \infty\}$.
For instance, if
$(X_1, X_2, \ldots) = (0.1, 0.1, 0.3, 0.2, 0.2, 0.3, 0.1, 0.4, \ldots)$,
then $M_1 =1$, $M_2 = 3$, $M_3 = 4$, $M_4 = 8, \ldots$ and $\widetilde{X}_1 = 0.1$, $\widetilde{X}_2 = 0.3$,
$\widetilde{X}_3 = 0.2$, $\widetilde{X}_4 = 0.4, \ldots$
For general rewards $R_t$, it seems challenging to specify the limiting distribution of shares $(\pi_{\widetilde{X}_1, \infty}, \pi_{\widetilde{X}_2, \infty}, \ldots)$.
One exception is for the constant reward $R_t \equiv R$ as stated in the following theorem.
\begin{theorem}
\label{thm:5}
Assume that the coin reward $R_t \equiv R > 0$.
Let $\widetilde{X}_1, \widetilde{X}_2, \ldots$ be the features appearing in the order of appearance
of the PoS feature model specified by \eqref{eq:selection1}--\eqref{eq:selectiont}.
Then there is the stick-breaking representation for the limiting shares:
\begin{equation}
\label{eq:stickbreaking}
\pi_{\widetilde{X}_j, \infty} = \left[\prod_{i = 1}^{j-1} (1 - W_i)\right] W_j \quad \mbox{for } j \ge 1,
\end{equation}
where $W_1, W_2, \ldots$ are independent and identically distributed as $\bet(1, N/R)$.
Moreover, let $K_t: = \sup\{j: M_j \le t\}$ be the number of features that have appeared among the first $t$ selected investors.
Then $K_t/\log t \to N/R$ almost surely.
\end{theorem}
\quad The proof of Theorem \ref{thm:5} is given in Appendix \ref{scD2}.
Theorem \ref{thm:5} shows that whatever the initial distribution of features is,
the PoS feature model \eqref{eq:selection1}--\eqref{eq:selectiont} with a constant reward yields a limiting share distribution on a countable number of features.
This distribution, which only depends on the initial coin offerings $N$ and the reward $R$, is known as the {\em Griffiths-Engen-McCloskey} (GEM) distribution (\cite{Ewens90}).
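The stick-breaking representation \eqref{eq:stickbreaking} is straightforward to sample. The sketch below (hypothetical $N$ and $R$) draws GEM$(N/R)$ shares, checks that they sum to one, and checks the mean of the first share $\mathbb{E}\,\pi_1 = \mathbb{E}\,W_1 = R/(N+R)$:

```python
import random

random.seed(2)
N, R = 50.0, 5.0          # hypothetical initial offering and constant reward
theta = N / R             # stick-breaking parameter of Beta(1, N/R)

def gem_shares(n_sticks=400):
    """Stick-breaking representation: W_i i.i.d. Beta(1, N/R)."""
    shares, remaining = [], 1.0
    for _ in range(n_sticks):
        w = random.betavariate(1.0, theta)
        shares.append(remaining * w)
        remaining *= 1.0 - w
    return shares

pi = gem_shares()
print(sum(pi))            # ~1: the leftover stick (1-W_1)...(1-W_400) is tiny

# The first share is just W_1 ~ Beta(1, N/R), with mean R/(N + R) = 1/11.
est = sum(random.betavariate(1.0, theta) for _ in range(20000)) / 20000
print(est)
```

Larger $N/R$ (many initial coins relative to the reward) makes the sticks shorter, i.e. the limiting shares more evenly spread over features.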
\quad We may also consider more general PoS feature models.
One such instance is when the selection rule relies on the history of the features of previously selected investors.
For $j \ge 1$, let
$N_{jt}:=\sum_{n = 1}^t 1(X_n = \widetilde{X}_j, M_j < \infty)$
be the number of times that the investors with feature $j$ (in the order of appearance) are selected up to time $t$,
and $\pmb{N}_t:= (N_{1t}, N_{2t}, \ldots)$ be the vector of counts of various features of the investors up to time $t$.
We can regroup the investors according to their features, and rewrite the selection rule \eqref{eq:selectiont} as
\begin{equation}
\label{eq:selectiont2}
\mathbb{P}(X_t \in \cdot|X_1, \ldots, X_{t-1}, K_{t-1} = k) = \sum_{j = 1}^k \frac{N_{jt-1}}{N/R + t- 1} 1(\widetilde{X}_j \in \cdot) + \frac{N/R}{N/R +t-1} \nu(\cdot).
\end{equation}
Here we look for general selection rules of form
$\mathbb{P}(X_1 \in \cdot) = \nu(\cdot)$ and for $t \ge 2$,
\begin{equation}
\label{eq:selectiontgen}
\mathbb{P}(X_t \in \cdot|X_1, \ldots, X_{t-1}, K_{t-1} = k) = \sum_{j = 1}^k p_j(\pmb{N}_{t-1}) 1(\widetilde{X}_j \in \cdot) + p_{k+1}(\pmb{N}_{t-1}) \nu(\cdot),
\end{equation}
for some functions $p_j$, $j = 1,2, \ldots$ defined on $\cup_{k =1}^{\infty} \mathbb{N}_+^k$.
The meaning of the selection rule \eqref{eq:selectiontgen} is as follows: given the histogram $\pmb{N}_{t-1}$ of $k$ features of investors selected from time $1$ to time $t-1$, an investor with feature $j$ is selected with probability $p_j(\pmb{N}_{t-1})$ for $1 \le j \le k$, and an investor with a new feature $k+1$ is selected with probability $p_{k+1}(\pmb{N}_{t-1})$.
It is easily seen that the selection rule \eqref{eq:selectiont2} is a special case of the general rule \eqref{eq:selectiontgen} with
$p_j(n_1, \ldots, n_k) = \frac{n_j}{N/R + t-1} 1(1 \le j \le k) + \frac{N/R}{N/R +t-1} 1(j = k+1)$,
with $\sum_{j=1}^k n_j = t-1$.
A closely related selection rule is defined by the functions
$p_j(n_1, \ldots, n_k) = \frac{n_j - \alpha}{N/R + t-1} 1(1 \le j \le k) + \frac{N/R + k \alpha}{N/R +t-1} 1(j = k+1)$,
for some $\alpha \ge 0$.
In this case, the limiting share distribution also has the stick-breaking representation \eqref{eq:stickbreaking}
with $W_1, W_2, \ldots$ independent, and $W_k$ distributed as $\bet(1-\alpha, N/R + k \alpha)$.
This is known as the {\em Pitman-Yor distribution} (\cite{Pitman96, PY97}).
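Under this two-parameter rule the sticks are no longer identically distributed. The sketch below (hypothetical $\alpha$ and $\theta = N/R$) samples the Pitman-Yor weights $W_k \sim \bet(1-\alpha, N/R + k\alpha)$ and checks the mean of the first share, $\mathbb{E}\,W_1 = (1-\alpha)/(1 + N/R)$:

```python
import random

random.seed(7)
alpha, theta = 0.5, 2.0      # hypothetical discount alpha and theta = N/R

def py_shares(n_sticks=50):
    """Pitman-Yor stick-breaking: W_k ~ Beta(1 - alpha, theta + k*alpha)."""
    shares, remaining = [], 1.0
    for k in range(1, n_sticks + 1):
        w = random.betavariate(1.0 - alpha, theta + k * alpha)
        shares.append(remaining * w)
        remaining *= 1.0 - w
    return shares

m = 5000
est = sum(py_shares()[0] for _ in range(m)) / m
print(est)   # E W_1 = (1 - alpha)/(1 + theta) = 1/6
```

Compared with the GEM case ($\alpha = 0$), the discount $\alpha > 0$ thickens the tail of the share sequence: the leftover mass after $k$ sticks decays only polynomially in $k$ rather than geometrically.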
\quad In general, we need the following condition on $p_j$ to define the selection rule \eqref{eq:selectiontgen}:
$p_j(\pmb{n}) \ge 0$ and $ \sum_{j = 1}^{|\pmb{n}|_0 + 1} p_j(\pmb{n}) = 1$ for $ \pmb{n} \in \cup_{k =1}^{\infty} \mathbb{N}_+^k$,
where $|\pmb{n}|_0$ is the number of nonzero entries in $\pmb{n}$.
\cite{Pitman96} provides an additional condition on $p_j$ in terms of the {\em exchangeable partition probability functions} (EPPFs) so that
the sequence $X_1, X_2, \ldots$ specified by the selection rule \eqref{eq:selection1}--\eqref{eq:selectiontgen} is exchangeable, and thus the limiting share distribution is well-defined.
Such a sequence $X_1, X_2, \ldots$ is called the {\em species sampling sequence}, see also \cite{HP00} for related discussions.
\subsection{From finite to infinite population}
\label{sc33}
In this final subsection, we propose a dynamical approach to model the PoS protocol.
We start with a finite number of investors; at each time a new investor may enter the market,
so that the population grows to an infinite number of investors.
This combines the ideas from previous sections, especially the Blackwell-MacQueen urn model.
\quad We proceed to the description of the model.
At time $t = 0$ there are $K$ investors indexed by $[K]$.
These investors are initial capitalists in the blockchain network, so they play a very important role in the PoS protocol.
For $k \in [K]$, let $n_{k,0}$ be the initial endowment of coins of investor $k$, and
$\pi_{k,0}: = n_{k,0}/N$ with $N = \sum_{k=1}^K n_{k,0}$ be the corresponding shares.
At time $t = 1$, there are two possibilities: either one of these $K$ initial investors is selected, or a new investor is selected from the population.
This is realistic since many cryptocurrencies are initially owned by a handful of coin miners or venture capitalists,
and then their shares will be diluted by new investors over time.
Since the population is large, we approximate the population space by the unit interval $S = [0,1]$, and a new investor is selected from $S = [0,1]$ by a diffuse probability measure $\nu$ as in Section \ref{sc32}.
We also introduce a {\em dilution parameter} $\theta > 0$, which is the weight with which a new investor is introduced to the blockchain network.
More precisely,
\begin{itemize}[itemsep = 3 pt]
\item
For each $k \in [K]$, investor $k$ is selected with probability $\frac{n_{k,0}}{N + \theta}$.
\item
A new investor with ``index'' in $(0,1)$ is selected with probability $\frac{\theta \nu(\cdot)}{N+\theta}$.
\end{itemize}
By letting $X_1$ be the index of the investor selected at time $1$, we have
\begin{equation}
\label{eq:selection1f}
\mathbb{P}(X_1 \in \cdot) = \sum_{k = 1}^K \frac{n_{k,0}}{N+\theta} \delta_k(\cdot) + \frac{\theta \nu(\cdot)}{N + \theta},
\end{equation}
and the selected investor receives a deterministic reward $R_1 > 0$.
More generally, at time $t$ the selection rule is given by
\begin{equation}
\label{eq:selectiontf}
\mathbb{P}(X_t \in \cdot |X_1, \ldots, X_{t-1}) = \sum_{k = 1}^K \frac{n_{k,t-1}}{N_{t-1} + \theta} \delta_k(\cdot) + \sum_{X_n \in (0,1)} \frac{n_{X_n, t-1}}{N_{t-1} + \theta} \delta_{X_n}(\cdot) + \frac{\theta \nu(\cdot)}{N_{t-1} + \theta},
\end{equation}
where $n_{k,t}$ is the number of coins that investor $k$ owns at time $t$,
$n_{X_n, t}$ is the number of coins that a new investor with index $X_n \in (0,1)$ owns at time $t$, and
$N_{t} = N + \sum_{n = 1}^t R_n$ is the total number of coins up to time $t$.
There are three terms on the right side of the selection rule \eqref{eq:selectiontf}:
the first one comes from the $K$ initial investors,
the second term is from the new investors previously entering the market,
and the third term is the probability that a new investor is introduced.
The following theorem studies how the shares of the initial capitalists are diluted over time.
\begin{theorem}
\label{thm:6}
For $k \in [K]$, let $\pi_{k,t}$ be the shares of investor $k$ at time $t$ under the PoS protocol \eqref{eq:selection1f}--\eqref{eq:selectiontf}.
Then $(\pi_{k,t}, \, t \ge 0)$ is a supermartingale.
Consequently, the shares of initial investors $(\pi_{1,t}, \ldots, \pi_{K,t}) \to (\pi_{1,\infty}, \ldots, \pi_{K,\infty})$ almost surely for some
random sub-probability distribution $(\pi_{1,\infty}, \ldots, \pi_{K,\infty})$.
\begin{enumerate}[itemsep = 3 pt]
\item
Assume that the coin reward $R_t$ is decreasing with $R_t \ge R_{t+1}$ for each $t \ge 0$.
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
If $\lim_{t \to \infty} R_t = R > 0$ or $R_t = \Theta(t^{-\alpha})$ for $\alpha < 1$, we have
$0 < \mathbb{E}\left( \frac{\pi_{k, \infty}}{\pi_{k,0}}\right) < 1$,
and $\lim_{N \to \infty} \mathbb{E}\left( \frac{\pi_{k, \infty}}{\pi_{k,0}}\right) = 1$.
\item[(ii)]
If $R_t = \Theta(t^{-\alpha})$ for $\alpha > 1$, we have $\pi_{k, \infty} = 0$ almost surely.
\end{enumerate}
\item
Assume that the coin reward $R_t = \rho N_{t-1}^{\gamma}$ for $\rho, \gamma > 0$.
We have $0 < \mathbb{E}\left( \frac{\pi_{k, \infty}}{\pi_{k,0}}\right) < 1$,
and $\lim_{N \to \infty} \mathbb{E}\left( \frac{\pi_{k, \infty}}{\pi_{k,0}}\right) = 1$.
\item
Assume that the coin reward $R_t = R > 0$.
We have $\pi_{k, \infty} \stackrel{d}{=} \bet\left(\frac{n_{k,0}}{R}, \frac{N + \theta - n_{k,0}}{R}\right)$.
Consequently, the results in Theorem \ref{thm:1} hold.
\end{enumerate}
\end{theorem}
\quad The proof of Theorem \ref{thm:6} is given in Appendix \ref{scD3}.
The theorem shows that for the dynamical PoS model \eqref{eq:selection1f}--\eqref{eq:selectiontf}, the shares of initial investors decrease over time.
If the coin reward does not decay too fast, i.e. $R_t \gg t^{-1}$, the expectation of the ratio $\pi_{k, \infty}/\pi_{k,0}$ tends to $1$ as the initial coin offering $N$ grows large.
On the contrary, if the coin reward decays very fast, i.e. $R_t \ll t^{-1}$, the initial investors' shares will be eventually diluted to zero.
More is known if the coin reward is constant.
As is clear from the proof, the selection rule \eqref{eq:selectiontf} as a probability measure converges
with probability one to a random discrete probability distribution $F \stackrel{d}{=} \Dir ((\sum_{k = 1}^K n_{k,0} \delta_k + \theta \nu)/R)$, and given $F$ the indices of selected investors, i.e. $X_1, X_2, \ldots$ are independent and identically distributed as $F$.
Recall that $F$ has the representation $\sum_{t =1}^{\infty} P_t \delta_{Z_t}$ where $(P_1, P_2, \ldots)$ has $\GEM(\frac{N + \theta}{R})$ distribution, and $Z_1, Z_2, \ldots$ are independent and identically distributed as $(\sum_{k = 1}^K n_{k,0} \delta_k + \theta \nu)/(N + \theta)$ and also independent of $(P_1, P_2, \ldots)$.
Thus, the indices of new investors (in order of their appearance) are just independent and identically distributed as $\nu$, and the limiting expectation of their total shares is $\frac{\theta}{N + \theta}$.
So the larger the initial coin offerings $N$ are, the less influence new investors have.
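The constant-reward limit can be checked by simulating the selection rule \eqref{eq:selection1f}--\eqref{eq:selectiontf} directly. The sketch below uses hypothetical endowments and verifies that the mean limiting share of an initial investor is close to $n_{k,0}/(N+\theta)$, the mean of the $\bet$ law in part ($3$) of Theorem \ref{thm:6}:

```python
import random

random.seed(3)
R, theta = 2.0, 5.0
n0 = [10.0, 20.0, 30.0]              # hypothetical endowments, N = 60

def run(T=1200):
    """One path of the dynamical model with constant reward R."""
    coins = list(n0)                  # entries beyond the first 3 are newcomers
    for _ in range(T):
        N_t = sum(coins)
        r = random.random() * (N_t + theta)
        acc, idx = 0.0, None
        for i, c in enumerate(coins):
            acc += c
            if r < acc:
                idx = i
                break
        if idx is None:               # with prob theta/(N_t + theta): newcomer
            coins.append(0.0)
            idx = len(coins) - 1
        coins[idx] += R
    return coins[0] / sum(coins)      # share of initial investor 1

runs = 600
est = sum(run() for _ in range(runs)) / runs
print(est)   # close to n0[0]/(N + theta) = 10/65, the Beta mean
```

The simulated mean sits slightly above the limit at finite $T$, consistent with the supermartingale property of $\pi_{k,t}$.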
\section{Conclusion}
\label{sc4}
\quad In this paper, we study the evolution of investor shares in a PoS blockchain under various rewarding schemes and for different types of investors.
In contrast with the previous works where only large investors are considered,
we take the heterogeneity of investors into account,
and show that medium to small investors may suffer from share instability
-- their shares may be volatile, or even shrink to zero in the long run.
This leads to the phase transition phenomenon,
where thresholds for stability vs instability are characterized under different rewarding schemes.
In particular, for the PoS protocol with a geometric reward we observe chaotic centralization;
that is, all the shares will go to one investor in a random manner.
In response to the increasing activities in blockchain networks,
we also propose and analyze a dynamical population model for the PoS protocol which allows the number of investors to grow over time.
Our quantitative analysis also provides guidance on the choice of rewarding schemes so that decentralization is indeed implemented in a PoS blockchain.
\quad There are a few directions to extend this work.
For instance, one can study the trading incentive in the dynamical population setting.
This requires incorporating a game-theoretic component to the analysis,
and formulating a suitable reward optimization problem.
Another problem is to consider other types of urn dynamics to model the voting rule in the PoS protocol,
e.g. the square-root voting rule (\cite{Pen46}),
and study the problems of long time stability and reward incentives.
\bigskip
{\bf Acknowledgement:}
We thank Alex Y. Wu for the help with the numerical experiments and the careful reading of the manuscript.
We thank David Yao for helpful discussions, and Agostino Capponi and Jing Huang for various pointers to the literature.
We gratefully acknowledge financial support through NSF grant DMS-2113779 and through a start-up grant at Columbia University.
More than 100 years after the discovery of the diffuse interstellar bands (DIBs) in 1919 \citep{Heger1922}, over 600 DIBs have
been confirmed between 0.4 and 2.4\,{\micron} \citep{Cox2014,Galazutdinov2017b,Fan2019,Hamano2022,Ebenbichler2022}. As a set of weak
and broad absorption features, today DIBs are thought to be produced by carbon-bearing molecules, like carbon or hydrocarbon chains
\citep[e.g.,][]{Maier2011a,ZM2014iaus}, polycyclic aromatic hydrocarbons \citep[PAHs; e.g.,][]{Salama1996,Shen2018,Omont2019}, and
fullerenes and their derivatives \citep[e.g.,][]{Fulara1993a,Cami2014iaus,Omont2016}. However, due to the difficulties in the experimental
research on complex molecules \citep{Hardy2017,Kofman2017} and in the comparison between the experimental measurements and astronomical
observations, buckminsterfullerene $(C_{60}^{+})$ is the first and only identified DIB carrier for five near-infrared DIBs so far
\citep[e.g.,][]{FE1994,Campbell2015,Walker2016,Linnartz2020}, although some debates about the wavelength match and the relative
strength of these bands still exist \citep{Galazutdinov2017a,Galazutdinov2021}.
Besides the comparison between astronomical observations and experimental results, investigating the correlations between different
DIBs is also one of the most important ways to study the relations between their carriers and even to find the common carrier for a set
of DIBs \citep[e.g.,][]{Friedman2011,Ensor2017,Elyajouri2017b,Elyajouri2018}. The tightest correlation was found between two DIBs at
619.6\,nm and 661.4\,nm (in this work, we cite DIBs with their central wavelengths in nanometer) with very high Pearson coefficient
($r_p\,{>}\,0.98$; e.g., \citealt{McCall2010}; \citealt{Friedman2011}; \citealt{KZ2013}; \citealt{Bondar2020}). But the variation of
their strength ratio has also been reported by \citet{Krelowski2016} and \citet{Fan2017}, showing that a tight intensity
correlation alone is not enough to conclude a common origin for different DIBs. The behavior of the relative strength between different
DIBs as a function of $f_{\rm H_2}\,{\equiv}\,2N({\rm H_2})/[N(\ion{H}{i}) + 2{N}({\rm H_2})]$ was used by \citet{Fan2017} to study
the relative positions of the DIB carriers. \citet{Lan2015} also investigated the correlation between DIB strength and $N(\ion{H}{i})$
and $N({\rm H_2})$ for 20 DIBs at high latitudes. Nevertheless, it is hard to measure $N(\ion{H}{i})$ and $N({\rm H_2})$ in ultraviolet
spectra or decipher the positions of \ion{H}{i} and $\rm H_2$ from the radio observations. Another way is to explore the spatial
distributions of different DIBs, which requires probing the interstellar medium (ISM) above and below the Galactic plane over a large range of
latitudes if we would like to get more information and conclusions from the correlation study on different DIBs. \citet{MW1993}
studied the variation of the relative strength between DIBs $\lambda$442.8, $\lambda$578.0, and $\lambda$579.7 as a function of Galactic latitudes
based on a sample of 65 stars. They found the carrier abundance of $\lambda$442.8 to be highest at low latitudes,
which agrees with our results (see Sect. \ref{subsect:distribution2}).
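For intuition on why a high Pearson coefficient alone is inconclusive, the following toy computation (mock equivalent widths with a hypothetical linear relation; not real DIB data) reproduces $r_p > 0.98$ trivially:

```python
import random, math

random.seed(8)
# Mock equivalent widths of two DIBs: EW2 = 2*EW1 plus small scatter.
ew1 = [random.uniform(0.1, 1.0) for _ in range(200)]
ew2 = [2.0 * x + random.gauss(0.0, 0.02) for x in ew1]

def pearson(a, b):
    """Sample Pearson correlation coefficient r_p."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

print(pearson(ew1, ew2))   # > 0.98, yet the ratio EW2/EW1 still scatters
```

A near-perfect $r_p$ is thus compatible with mild, systematic variations of the strength ratio, which is why the ratio itself is the more stringent diagnostic of a common carrier.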
DIB research benefits from the arrival of large spectroscopic surveys, which allow large statistical studies to be performed. The
three-dimensional (3D) distributions of the DIB carriers have been mapped by \citet{Kos2014} and \citet{Schultheis2022} for DIB\,$\lambda$862.1
(here we take its central wavelength as 862.086\,nm, as measured in \citealt{Schultheis2022}), and \citet{Zasowski2015c} for
DIB\,$\lambda$1527.3, based on the data from the Radial Velocity Experiment \citep[RAVE;][]{Steinmetz2006}, {\it
Gaia}--DR3 \citep{Vallenari2022}, and the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE;][]{Majewski2017}, respectively.
\citet{Zasowski2015c}, \citet{hz2021b}, and \citet{Schultheis2022} made preliminary studies on the kinematics of the DIB carriers
with the APOGEE, {\it Gaia}--ESO \citep{Gilmore2012}, and {\it Gaia} data sets. Nevertheless, the individual spectra in large
spectroscopic surveys usually have less integral time than specifically designed DIB observations, resulting in a lower signal-to-noise
ratio ($\rm S/N$). Taking this into account, stacking spectra in an arbitrary spatial volume is a practical and useful method to achieve
better $\rm S/N$ and to precisely measure DIB features \citep[e.g.,][]{Kos2013,Lan2015,Baron2015a,Baron2015b}.
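The gain from stacking can be sketched as follows: averaging $n$ spectra with independent per-pixel noise $\sigma$ leaves the DIB profile intact while the residual noise shrinks as $\sigma/\sqrt{n}$. The toy spectrum below uses a hypothetical Gaussian DIB depth and width:

```python
import random, math

random.seed(4)
# Toy stacking experiment: n noisy normalized spectra of a weak feature.
wave = [8600 + 0.2 * i for i in range(200)]         # pseudo-wavelength grid (A)

def dib(w):
    """Hypothetical Gaussian DIB at 8621 A, depth 5%, sigma 2 A."""
    return 1.0 - 0.05 * math.exp(-0.5 * ((w - 8621.0) / 2.0) ** 2)

n, sigma = 100, 0.05                                # 100 spectra, pixel noise
stack = [0.0] * len(wave)
for _ in range(n):
    for i, w in enumerate(wave):
        stack[i] += dib(w) + random.gauss(0.0, sigma)
stack = [s / n for s in stack]

# Residual noise in a feature-free region of the stacked spectrum:
cont = [stack[i] - 1.0 for i, w in enumerate(wave) if abs(w - 8621) > 15]
rms = math.sqrt(sum(c * c for c in cont) / len(cont))
print(rms)   # ~ sigma / sqrt(n) = 0.005
```

With $\sigma = 0.05$ per pixel, a 5\% deep feature is invisible in a single spectrum but detected at high significance in the stack of 100.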
Based on survey data, several studies have been devoted to investigating the intensity correlations between different DIBs.
\citet{Elyajouri2017b} made use of $\sim$300 spectra of early-type stars in APOGEE to explore the correlations between the strong
DIB at 1.5273\,{\micron} and the three weak DIBs at 1.5627, 1.5653, and 1.5673\,{\micron}. A comparison between the DIB at 1.5273\,{\micron}
and some optical DIBs was done as well. Based on 250 stacked spectra at high latitudes, \citet{Baron2015b} successfully clustered
26 weak DIBs into six groups and four of them were tightly associated with $\rm C_2$ or CN. A data-driven analysis was also done
by \citet{Fan2022} for 54 strong DIBs measured in 25 high-quality spectra of early-type stars. They suggested a continuous
change of properties of the DIB carriers between different groups. The results of \citet{Puspitarini2015} showed a similar variation
of the strength with the distance of background stars for DIBs $\lambda$661.4 and $\lambda$862.1 in a field centered at
$(\ell,b)\,{=}\,(212.9^{\circ},{-}2.0^{\circ})$. But a direct comparison between $\lambda$661.4 and $\lambda$862.1 was not made.
In this work, we take advantage of the data from the metal-poor Pristine Inner Galaxy Survey \citep[PIGS;][]{Arentsen2020b} which
contain a large number of spectra (13\,235) and two strong DIBs ($\lambda$442.8 and $\lambda$862.1) in its blue-band and red-band
spectra, respectively. Metal-poor stars have the advantage that the DIBs are less affected, or not affected at all, by stellar lines. We
measure the two DIBs in stacked spectra and investigate their relative vertical distributions. In Sect. \ref{sect:pigs}, we briefly
introduce the PIGS survey. The stacking of spectra and the DIB measurements are described in Sect. \ref{sect:fit-dib}. The results
of the intensity correlations and vertical distributions of the two DIBs and dust grains are presented in Sect. \ref{sect:result}
and discussed in Sect. \ref{sect:discuss}. The main conclusions are summarized in Sect. \ref{sect:conclusion}.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/fields.pdf}
\caption{Spatial distribution ($\ell,b$) of 6980 PIGS targets, overplotted on the dust reddening map of \citet{Planck2016dust}.
Colored dots represent the targets assigned into different fields (black circles), as a result of the k-means clustering with
$N=36$ (see Sect. \ref{sect:fit-dib} for details).}
\label{fig:field}
\end{figure}
\section{Pristine Inner Galaxy Survey (PIGS)} \label{sect:pigs}
The Pristine Inner Galaxy Survey \citep[PIGS;][]{Arentsen2020b,Arentsen2020a,Arentsen2021} is an extension of the Pristine survey,
which uses the metallicity-sensitive narrow-band $CaHK$ filter on the Canada-France-Hawaii-Telescope (CFHT) to search for and study
the most metal-poor stars \citep{Starkenburg2017}. PIGS aims at obtaining spectra for the metal-poor stars in the Galactic bulge
and studying their kinematics \citep{Arentsen2020a}, as well as the chemical and dynamical evolution of the inner Galaxy \citep{Arentsen2020b,
Arentsen2021,Sestito2022}. The PIGS targets were selected with a magnitude limit of $13.5\,{<}\,G\,{<}\,16.5$\,mag for {\it Gaia} \citep{Gaia2018}
or $14.0\,{<}\,g\,{<}\,17.0$\,mag for Pan--STARRS1 \citep{Chambers2016}, and a reddening limit of $\EBV\,{\lesssim}\,0.7$\,mag
from \citet{Green2018}. Most of these targets (88\%) have $\feh\,{<}\,{-}1.0$\,dex, with a peak around --1.5\,dex and a tail down
to --3.0\,dex \citep{Arentsen2020b}. The targets were observed with AAOmega+2dF on the AAT, obtaining simultaneous blue-band
(370--550\,nm, $R\,{\sim}\,1300$) and red-band (840--880\,nm, $R\,{\sim}\,11\,000$) spectra.
The spectra were analyzed with the FERRE\footnote{FERRE \citep{Allende2006} is available from \url{http://github.com/callendeprieto/ferre}}
code, which simultaneously derived effective temperatures, surface gravities, metallicities, and carbon abundances. For details on
the analysis, see \citet{Arentsen2020b}. In the original analysis, both the observed and model spectra were normalized using a
running mean. For this work, we perform a re-normalization of the original observed and best-fitting synthetic spectra using the
\textit{fit-continuum} task in the Python {\it specutils} package. Third- and fifth-order Chebyshev polynomials were used for the
red-band and blue-band spectra, respectively.
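The normalization step can be sketched in pure Python: fit a low-order Chebyshev polynomial continuum by least squares and divide it out. (The actual analysis uses {\it specutils}; the toy spectrum and coefficients below are hypothetical.)

```python
import math

def cheb_basis(x, order):
    """Chebyshev polynomials T_0..T_order at x, via the recurrence."""
    t = [1.0, x]
    for k in range(2, order + 1):
        t.append(2 * x * t[-1] - t[-2])
    return t[:order + 1]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * d for a, d in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def continuum_fit(wave, flux, order=3):
    """Least-squares Chebyshev continuum on the rescaled domain [-1, 1]."""
    lo, hi = wave[0], wave[-1]
    xs = [2 * (w - lo) / (hi - lo) - 1 for w in wave]
    B = [cheb_basis(x, order) for x in xs]
    ATA = [[sum(B[i][p] * B[i][q] for i in range(len(xs)))
            for q in range(order + 1)] for p in range(order + 1)]
    ATy = [sum(B[i][p] * flux[i] for i in range(len(xs)))
           for p in range(order + 1)]
    c = solve(ATA, ATy)
    return [sum(cc * bb for cc, bb in zip(c, B[i])) for i in range(len(xs))]

# Toy red-band spectrum with a smooth quadratic continuum:
wave = [840 + 0.2 * i for i in range(200)]
flux = [1000 - 0.8 * (w - 860) + 0.02 * (w - 860) ** 2 for w in wave]
cont = continuum_fit(wave, flux)
norm = [f / c for f, c in zip(flux, cont)]
print(max(abs(v - 1.0) for v in norm))   # ~0: quadratic is fit exactly
```

In practice one would also mask or sigma-clip absorption features before fitting, so that the DIB itself does not bias the continuum.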
There are 13\,235 PIGS spectra observed between 2017 and 2020, of which we make use of 6980, distributed into 36 fields
(see Fig. \ref{fig:field}), with $\rm S/N\,{>}\,50$ measured between 840--880\,{nm} and $\Teff\,{<}\,7000$\,K, which assures the
quality of the observed and synthetic spectra. For this subsample, $\rm S/N$ is mostly below 150 per pixel for red-band (computed
between 840--880\,{nm}) with a mean of 77, and below 50 per pixel for blue-band (computed between 400--410\,{nm}) with a mean of 30.
Thus, in this work, we only fit and measure two strong DIBs, $\lambda$442.8 and $\lambda$862.1, in stacked blue-band and red-band
spectra, respectively, due to the low $\rm S/N$ of individual PIGS spectra. We applied a simple k-means algorithm \citep{Lloyd1982}
to cluster targets into different fields to avoid the possible overlap of observed PIGS fields, especially in the southern footprint,
and to have a cleaner selection of target stars in the same $(\ell,\,b)$ range, because an overlap of fields would smooth the variation of
dust reddening and DIB strength with $(\ell,\,b)$. In some cases the clustering also helps fields with lower-quality spectra by
providing a larger number of stars, such as the field at $(\ell_0,\,b_0)=(9.91^{\circ},\,{-}10.11^{\circ})$. The clustering was performed with the Python {\it
scikit-learn} package \citep{scikit-learn} with $N=36$, and the result is shown in Fig. \ref{fig:field}. In the following analysis,
the PIGS ``fields'' refer to the assigned clustered regions, which follow but are not exactly the same as their observational footprints.
Discrete footprints are well clustered (e.g., the targets at $5^{\circ}\,{<}\,b\,{<}\,12^{\circ}$), while in crowded regions, such
as the targets around $(\ell,\,b)=(5^{\circ},\,{-}8^{\circ})$, the clustered fields may be different from the observational
ones. The central coordinates $(\ell_0,b_0)$, radius, and target number of each field are listed in Table \ref{tab:stack-fit4430}.
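The field assignment above can be sketched with a minimal implementation of Lloyd's algorithm, the k-means variant cited in the text; the mock coordinates and three clusters are illustrative only, whereas the actual analysis uses the {\it scikit-learn} implementation with $N=36$.

```python
import numpy as np

def kmeans(points, centers, n_iter=50):
    """Minimal Lloyd (1982) k-means: alternate nearest-center assignment
    and center updates for a fixed number of iterations."""
    for _ in range(n_iter):
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

rng = np.random.default_rng(0)
# mock (l, b) coordinates: three well-separated pointings, 200 stars each
truth = np.array([[6.7, -7.2], [-2.9, 11.0], [9.0, 6.4]])
coords = np.vstack([c + 0.3 * rng.standard_normal((200, 2)) for c in truth])
# initialize with one star per pointing to keep the sketch deterministic
labels, centers = kmeans(coords, coords[[0, 200, 400]].copy())
# centers now approximate the field centers (l0, b0)
```

For well-separated pointings the assignment reproduces the observational footprints; in crowded regions, as noted above, clustered fields can differ from the observed ones.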
\begin{table*}
\begin{center}
\normalsize
\caption{Field information and fitting results of DIB\,$\lambda$442.8 in the blue-band stacked ISM spectra. \label{tab:stack-fit4430}}
\begin{tabular}{l r r c r c r c c c }
\hline\hline
Field & $\ell_0$ & $b_0$ & radius & $N^a$ & $\EBV^b$ & $\rm S/N^c$ & $\lambda_C^d\pm{\rm err}$ & $\rm FWHM^e\pm{\rm err}^e$ & $\rm EW_{fit}^f \pm err$ \\
Nr & ($^\circ$) & ($^\circ$) & ($^\circ$) & & (mag) & & (nm) & (nm) & ({\AA}) \\ [0.5ex]
\hline
1 & 6.66 & --7.15 & 1.14 & 259 & $0.37\pm0.05$ & 145.5 & $442.76^{+0.08}_{-0.08}$ & $1.99^{+0.18}_{-0.16}$ & $1.10\pm0.08$ \\ [0.5ex]
2 & --2.94 & 11.04 & 1.09 & 257 & $0.31\pm0.05$ & 104.7 & $442.71^{+0.10}_{-0.09}$ & $2.01^{+0.18}_{-0.18}$ & $0.93\pm0.09$ \\ [0.5ex]
3 & 8.96 & 6.37 & 1.10 & 236 & $0.57\pm0.09$ & 119.0 & $442.79^{+0.07}_{-0.08}$ & $1.83^{+0.16}_{-0.16}$ & $1.53\pm0.09$ \\ [0.5ex]
4 & 6.20 & --13.97 & 1.00 & 196 & $0.14\pm0.02$ & 119.7 & $442.54^{+0.10}_{-0.10}$ & $2.07^{+0.18}_{-0.18}$ & $0.73\pm0.10$ \\ [0.5ex]
5 & 3.22 & --3.30 & 0.98 & 224 & $0.66\pm0.10$ & 89.7 & $442.68^{+0.08}_{-0.08}$ & $2.18^{+0.18}_{-0.18}$ & $1.98\pm0.16$ \\ [0.5ex]
6 & --4.59 & 8.63 & 1.00 & 104 & $0.26\pm0.06$ & 101.7 & $442.53^{+0.10}_{-0.10}$ & $2.23^{+0.20}_{-0.18}$ & $0.99\pm0.11$ \\ [0.5ex]
7 & 3.86 & 10.00 & 1.09 & 135 & $0.40\pm0.10$ & 93.5 & $442.77^{+0.09}_{-0.09}$ & $2.03^{+0.18}_{-0.16}$ & $1.13\pm0.10$ \\ [0.5ex]
8 & --3.64 & 5.29 & 1.02 & 180 & $0.68\pm0.09$ & 112.6 & $442.73^{+0.08}_{-0.08}$ & $2.08^{+0.16}_{-0.16}$ & $1.84\pm0.10$ \\ [0.5ex]
9 & 9.91 & --10.11 & 1.11 & 99 & $0.32\pm0.07$ & 141.4 & $442.71^{+0.09}_{-0.09}$ & $2.40^{+0.20}_{-0.18}$ & $1.13\pm0.09$ \\ [0.5ex]
10 & --8.22 & 10.31 & 1.11 & 185 & $0.29\pm0.04$ & 122.5 & $442.52^{+0.09}_{-0.09}$ & $2.00^{+0.18}_{-0.18}$ & $0.92\pm0.09$ \\ [0.5ex]
11 & 3.53 & --7.45 & 0.95 & 203 & $0.33\pm0.03$ & 128.8 & $442.65^{+0.09}_{-0.08}$ & $2.12^{+0.18}_{-0.18}$ & $1.25\pm0.12$ \\ [0.5ex]
12 & 0.22 & 6.16 & 0.98 & 225 & $0.75\pm0.07$ & 105.7 & $442.81^{+0.07}_{-0.07}$ & $1.97^{+0.16}_{-0.16}$ & $2.11\pm0.12$ \\ [0.5ex]
13 & 5.51 & --9.18 & 1.61 & 292 & $0.27\pm0.07$ & 133.7 & $442.74^{+0.08}_{-0.08}$ & $1.93^{+0.18}_{-0.18}$ & $1.23\pm0.09$ \\ [0.5ex]
14 & 5.32 & --3.97 & 1.09 & 249 & $0.70\pm0.10$ & 117.4 & $442.75^{+0.07}_{-0.07}$ & $2.10^{+0.16}_{-0.16}$ & $2.01\pm0.10$ \\ [0.5ex]
15 & 8.93 & 9.50 & 1.06 & 182 & $0.39\pm0.04$ & 126.2 & $442.66^{+0.09}_{-0.09}$ & $2.05^{+0.18}_{-0.16}$ & $1.24\pm0.09$ \\ [0.5ex]
16 & 9.00 & --4.50 & 0.99 & 183 & $0.61\pm0.11$ & 98.2 & $442.72^{+0.09}_{-0.09}$ & $2.26^{+0.18}_{-0.18}$ & $1.77\pm0.12$ \\ [0.5ex]
17 & 1.76 & 9.98 & 1.18 & 146 & $0.59\pm0.16$ & 122.3 & $442.82^{+0.08}_{-0.09}$ & $2.05^{+0.18}_{-0.18}$ & $1.20\pm0.10$ \\ [0.5ex]
18 & 8.91 & --6.86 & 1.04 & 130 & $0.39\pm0.08$ & 133.4 & $442.68^{+0.09}_{-0.09}$ & $2.13^{+0.18}_{-0.16}$ & $1.35\pm0.10$ \\ [0.5ex]
19 & --5.27 & 10.50 & 1.05 & 212 & $0.25\pm0.03$ & 121.7 & $442.65^{+0.10}_{-0.10}$ & $1.86^{+0.18}_{-0.18}$ & $0.72\pm0.08$ \\ [0.5ex]
20 & 5.87 & 9.60 & 1.08 & 238 & $0.53\pm0.18$ & 127.7 & $442.68^{+0.08}_{-0.08}$ & $2.14^{+0.18}_{-0.16}$ & $1.40\pm0.11$ \\ [0.5ex]
21 & 8.58 & --10.02 & 1.06 & 250 & $0.36\pm0.06$ & 130.4 & $442.72^{+0.09}_{-0.09}$ & $1.95^{+0.18}_{-0.18}$ & $0.95\pm0.06$ \\ [0.5ex]
22 & 7.42 & 8.06 & 1.06 & 104 & $0.46\pm0.10$ & 112.3 & $442.74^{+0.09}_{-0.09}$ & $1.94^{+0.16}_{-0.16}$ & $1.33\pm0.09$ \\ [0.5ex]
23 & --0.14 & 8.98 & 1.10 & 136 & $0.52\pm0.12$ & 120.6 & $442.71^{+0.08}_{-0.08}$ & $1.97^{+0.18}_{-0.16}$ & $1.49\pm0.10$ \\ [0.5ex]
24 & --5.09 & 6.64 & 1.06 & 177 & $0.42\pm0.12$ & 121.9 & $442.67^{+0.08}_{-0.09}$ & $2.18^{+0.18}_{-0.18}$ & $1.31\pm0.10$ \\ [0.5ex]
25 & 5.12 & --6.68 & 1.08 & 151 & $0.35\pm0.05$ & 128.1 & $442.66^{+0.09}_{-0.09}$ & $1.97^{+0.18}_{-0.16}$ & $1.22\pm0.09$ \\ [0.5ex]
26 & 5.54 & --12.41 & 0.98 & 185 & $0.15\pm0.02$ & 135.8 & $442.62^{+0.09}_{-0.10}$ & $1.98^{+0.18}_{-0.18}$ & $0.74\pm0.07$ \\ [0.5ex]
27 & 2.47 & --5.12 & 1.06 & 279 & $0.40\pm0.08$ & 140.5 & $442.78^{+0.08}_{-0.08}$ & $2.24^{+0.18}_{-0.16}$ & $1.54\pm0.09$ \\ [0.5ex]
28 & 7.06 & --5.25 & 1.04 & 179 & $0.54\pm0.06$ & 102.4 & $442.68^{+0.08}_{-0.08}$ & $2.25^{+0.18}_{-0.18}$ & $1.71\pm0.13$ \\ [0.5ex]
29 & 7.39 & --8.16 & 1.07 & 289 & $0.37\pm0.09$ & 139.7 & $442.80^{+0.08}_{-0.08}$ & $1.92^{+0.18}_{-0.16}$ & $1.14\pm0.07$ \\ [0.5ex]
\hline
\multicolumn{10}{l}{$^a$ The number of spectra used for stacking in each field.} \\
\multicolumn{10}{l}{$^b$ Median $\EBV$ $\pm$ its standard deviation in each field derived from the \citetalias{Planck2016dust} map.} \\
\multicolumn{10}{l}{$^c$ Signal-to-noise ratio of the stacked blue-band ISM spectra.} \\
\multicolumn{10}{l}{$^d$ Measured central wavelength in the heliocentric frame.} \\
\multicolumn{10}{l}{$^e$ Full width at half maximum of DIB\,$\lambda$442.8.} \\
\multicolumn{10}{l}{$^f$ Fitted equivalent width of DIB\,$\lambda$442.8.} \\
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/ini-example.pdf}
\caption{Two examples showing the DIB signals of $\lambda$442.8 ({\it upper panel}) and $\lambda$862.1 ({\it lower panel}) in their
ISM spectra, respectively, derived from the observed blue-band and red-band PIGS spectra (black lines) and the corresponding synthetic spectra
(blue lines). The DIB positions are marked. The PIGS ID and stellar atmospheric parameters of the two background stars are
indicated as well.}
\label{fig:ISeg}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/blue-4430.pdf}
\caption{{\it Upper panel}: Example of stacking ISM spectra in the field $\left(\ell_0,b_0\right)=\left(6.66^{\circ},{-}7.15^{\circ}\right)$.
Black lines are individual ISM spectra, and the blue line is the stacked ISM spectrum. {\it Lower panel}: Fit of DIB\,$\lambda$442.8
in the stacked ISM spectrum after local renormalization (blue line). The red line shows the fitted Lorentzian profile. Field
position and $\rm S/N$ of the stacked ISM spectrum are indicated.}
\label{fig:fit-4430}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/red-8620.pdf}
\caption{The same as Fig. \ref{fig:fit-4430}, but for DIB\,$\lambda$862.1.}
\label{fig:fit-8620}
\end{figure}
\section{Fitting and measuring DIBs in stacked spectra} \label{sect:fit-dib}
Limited by the PIGS sample size and the low S/N of individual spectra, we choose to stack spectra in each field according
to their Galactic coordinates $(\ell,b)$ without taking the stellar distance into account. Thus the DIB measured in the stacked
spectra is a measure of the average column density of its carrier toward a given sightline.
Before stacking spectra in each field, the stellar components in the observed spectra are first removed by subtracting the synthetic
spectra, yielding an ISM spectrum for each target. Figure \ref{fig:ISeg} shows two ISM spectra derived by subtracting the synthetic
spectra from the blue-band and red-band observed spectra, respectively. The DIB signals are clear, although their profiles are contaminated by
noise and the residuals of stellar lines, such as the \ion{Fe}{i} line close to the center of $\lambda$862.1. We emphasize
that in the stacked ISM spectra this contamination is significantly alleviated (see Figs. \ref{fig:fit-4430} and \ref{fig:fit-8620})
due to the averaging of a substantial number of spectra and the large velocity dispersion of stars in each field (standard deviation
${>}100\,\kms$). The second step is to shift the ISM spectra back to the heliocentric frame using the stellar radial velocities
(${\rm RV_{star}}$, in $\kms$), that is $\lambda_{\rm pixel}^{\prime} = \lambda_{\rm pixel} + {\rm RV_{star}} \times \lambda_{\rm pixel}/c$,
where $\lambda_{\rm pixel}^{\prime}$ and $\lambda_{\rm pixel}$ are the wavelength pixels in the heliocentric and stellar frames,
respectively, and $c\,{=}\,3\,{\times}\,10^5\,\kms$ is the speed of light. Finally, stacking of individual ISM spectra in each field
is done for the blue band and red band separately, by taking the median flux, which reduces the influence of outlier pixels and of
discrepancies between the individual observed and synthetic spectra. The $\rm S/N$ is calculated between 860.5
and 861.5\,nm for red-band stacked ISM spectra as $\rm mean(flux)/std(flux)$. For blue-band stacked ISM spectra, fluxes in
two windows, 430--433\,nm and 452--455\,nm, are used. Examples of blue-band and red-band stacked ISM spectra can be found in the upper
panels in Figs. \ref{fig:fit-4430} and \ref{fig:fit-8620}, respectively. The $\rm S/N$ of stacked spectra (listed in Tables
\ref{tab:stack-fit4430} and \ref{tab:stack-fit8620}) are dramatically increased.
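The frame shift, median stacking, and $\rm S/N$ estimate described above can be sketched as follows; the flat mock spectra are illustrative, and the 860.5--861.5\,nm window is the red-band $\rm S/N$ window from the text.

```python
import numpy as np

C_KMS = 3.0e5  # speed of light adopted in the text, in km/s

def to_heliocentric(wave_nm, rv_star_kms):
    """lambda' = lambda + RV_star * lambda / c, as given in the text."""
    return wave_nm * (1.0 + rv_star_kms / C_KMS)

def stack_and_snr(fluxes, wave_nm, window):
    """Median-stack ISM spectra and estimate S/N = mean(flux)/std(flux)
    inside a line-free wavelength window."""
    stacked = np.median(fluxes, axis=0)
    m = (wave_nm >= window[0]) & (wave_nm <= window[1])
    return stacked, stacked[m].mean() / stacked[m].std()

rng = np.random.default_rng(5)
wave = np.linspace(860.0, 868.0, 320)
# 100 mock ISM spectra: flat continuum at 1 with 5% per-pixel noise
fluxes = 1.0 + 0.05 * rng.standard_normal((100, wave.size))
stacked, snr = stack_and_snr(fluxes, wave, (860.5, 861.5))
```

Median stacking of $N$ spectra suppresses the pixel noise by roughly $\sqrt{N}$, which is why the stacked $\rm S/N$ values in the tables far exceed those of individual spectra.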
Compared to the red band, the blue-band stacked ISM spectra are noisier and show many more residual features of stellar
origin. This could partly be due to the abundance variations that are common at low metallicity. Thus, fitting a continuum straight
through the ISM spectra could lead to an underestimation of the strength of $\lambda$442.8. Therefore, we apply the Gaussian process
regression \citep[GPR;][]{GB12,RW06} to fit the profile of $\lambda$442.8, the stellar residuals, and the random noise simultaneously.
This method has been successfully applied to the spectra of early-type stars \citep{Kos2017,hz2021a}. Specifically, the blue-band
stacked ISM spectra are first locally renormalized with the spectral window of 430--455\,nm by an iterated method using a second-order
polynomial (see Sect. 2.2 in \citealt{hz2021a} for details). Then the ISM spectra are initially fitted by a Lorentzian function for
the profile of $\lambda$442.8, as suggested by \citet{Snow2002b}, and a constant continuum. Finally, GPR is applied with a Lorentzian
function as its mean function and a Mat\'{e}rn 3/2 kernel to model the correlated noise, including both the stellar residuals
and random noise. Only one kernel is used here because DIB\,$\lambda$442.8 is the broadest feature in the ISM spectra. The priors
of fitting parameters mainly follow those in \citet{Kos2017} and \citet{hz2021a}, that is, Gaussian priors centered on the initial
fitting results with a width of 0.15\,nm for the DIB central wavelength and the Lorentzian width, and a flat prior for the scale
length of the Mat\'{e}rn 3/2 kernel. Details about GPR and the priors can be found in \citet{Kos2017} and \citet{hz2021a}.
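A stripped-down version of this two-step procedure (initial Lorentzian fit, then GP regression with a Mat\'{e}rn 3/2 kernel on the residuals) can be sketched as follows. The hyperparameters are fixed here for brevity, whereas the paper optimizes them via MCMC under the priors described above; the mock spectrum is illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, depth):
    """Absorption profile used as the GP mean function for DIB 442.8."""
    return -depth * gamma**2 / ((x - x0) ** 2 + gamma**2)

def matern32(x1, x2, s, rho):
    """Matern 3/2 kernel modelling correlated (stellar-residual) noise."""
    r = np.abs(x1[:, None] - x2[None, :])
    a = np.sqrt(3.0) * r / rho
    return s**2 * (1.0 + a) * np.exp(-a)

# toy stacked ISM spectrum: DIB + smooth stellar residual + white noise
rng = np.random.default_rng(1)
x = np.linspace(438.0, 448.0, 200)
wiggle = 0.004 * np.sin(2 * np.pi * x / 1.5)
y = lorentzian(x, 442.8, 1.0, 0.02) + wiggle + 0.002 * rng.standard_normal(x.size)

# step 1: initial Lorentzian fit (constant continuum already removed)
popt, _ = curve_fit(lorentzian, x, y, p0=[442.8, 1.0, 0.02])

# step 2: GP posterior mean of the correlated-noise component,
# with fixed hyperparameters (s, rho) and white-noise variance
resid = y - lorentzian(x, *popt)
K_corr = matern32(x, x, s=0.004, rho=0.8)
K = K_corr + 0.002**2 * np.eye(x.size)
corr_noise = K_corr @ np.linalg.solve(K, resid)
```

The single Mat\'{e}rn kernel absorbs both stellar residuals and random noise, as in the text, because $\lambda$442.8 is much broader than either.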
The stellar residuals are much weaker in the red-band stacked ISM spectra. The strongest among them, the
\ion{Ca}{ii} line around 866.5\,nm, lies far from DIB\,$\lambda$862.1. Using stacked {\it Gaia}--RVS spectra
\citep{Seabroke2022}, \citet{hz2022a} confirmed that the weak DIB around 864.8\,nm is very broad, with a Full Width at Half Maximum
(FWHM) of $\sim$1.6\,nm, and its profile could affect the placement of the continuum (see Fig. 1 in \citealt{hz2022a}). Therefore,
we fit each red-band stacked ISM spectrum between 860.5 and 867\,nm with a Gaussian function for the profile of $\lambda$862.1, a
Lorentzian function for the profile of $\lambda$864.8, and a linear continuum. Nevertheless, the DIB\,$\lambda$864.8 profile cannot
be well described due to the lower quality of the individual ISM spectra at longer wavelength (see upper panel in Fig. \ref{fig:fit-8620}).
We emphasize that because DIBs are absorption features, using the whole spectrum between 860.5 and 867\,nm helps us obtain
a better placement of the continuum, even though the $\lambda$864.8 profile cannot be well fitted.
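The red-band model described above (a linear continuum minus a Gaussian for $\lambda$862.1 and a Lorentzian for $\lambda$864.8) can be sketched with a least-squares fit; the mock spectrum and parameter values are illustrative, chosen to resemble the table entries.

```python
import numpy as np
from scipy.optimize import curve_fit

def dib_model(x, a0, a1, d_g, x_g, sig_g, d_l, x_l, gam_l):
    """Linear continuum minus a Gaussian (862.1) and a Lorentzian (864.8)."""
    cont = a0 + a1 * x
    gauss = d_g * np.exp(-0.5 * ((x - x_g) / sig_g) ** 2)
    lor = d_l * gam_l**2 / ((x - x_l) ** 2 + gam_l**2)
    return cont - gauss - lor

# toy red-band stacked ISM spectrum over the 860.5-867 nm fit window
rng = np.random.default_rng(2)
x = np.linspace(860.5, 867.0, 260)
true = dib_model(x, 1.0, 0.0, 0.02, 862.06, 0.2, 0.005, 864.8, 0.8)
y = true + 0.001 * rng.standard_normal(x.size)

p0 = [1.0, 0.0, 0.01, 862.1, 0.2, 0.005, 864.8, 0.8]
popt, _ = curve_fit(dib_model, x, y, p0=p0)
fwhm_862 = 2 * np.sqrt(2 * np.log(2)) * abs(popt[4])       # nm
ew_862 = popt[2] * abs(popt[4]) * np.sqrt(2 * np.pi) * 10  # nm -> Angstrom
```

For a Gaussian dip on a unit continuum, ${\rm EW} = d\,\sigma\sqrt{2\pi}$; the mock depth and width above yield an EW near 0.1\,{\AA} and an FWHM near 0.47\,nm, the typical scale of the values in Table \ref{tab:stack-fit8620}.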
The parameter optimization, for both red band and blue band, is implemented by a Markov Chain Monte Carlo (MCMC) procedure
\citep{Foreman-Mackey13}. The 50\% values in the posterior distribution generated by MCMC are treated as the best estimates, with
lower and upper errors derived from the differences of 16\% and 84\% to 50\% values, respectively. Fit examples of $\lambda$442.8
and $\lambda$862.1 are shown in Figs. \ref{fig:fit-4430} and \ref{fig:fit-8620}, respectively.
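The 16/50/84-percentile summary of the posterior can be sketched as follows; the mock chain stands in for the MCMC samples of, e.g., the DIB central wavelength.

```python
import numpy as np

def summarize(samples):
    """Best estimate and asymmetric errors from posterior samples:
    the 50th percentile is the value; (50-16) and (84-50) give the
    lower and upper errors, as described in the text."""
    lo, med, hi = np.percentile(samples, [16, 50, 84])
    return med, med - lo, hi - med

rng = np.random.default_rng(3)
chain = rng.normal(442.76, 0.08, size=20000)   # mock posterior samples
value, err_lo, err_up = summarize(chain)
```

For a Gaussian posterior the two errors coincide with the standard deviation; skewed posteriors give the asymmetric error bars quoted in the tables.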
\begin{table}
\begin{center}
\small
\caption{Fitting results of DIB\,$\lambda$862.1 in the red-band stacked ISM spectra. The field numbers are the same as Table
\ref{tab:stack-fit4430}. \label{tab:stack-fit8620}}
\begin{tabular}{l c c c c c}
\hline\hline
Field & $\rm S/N^a$ & $\lambda_C^b\pm{\rm err}$ & $\rm FWHM^c\pm err$ & $\rm EW_{fit}^d \pm err$ & $\rm EW_{int}^e$ \\
Nr & & (nm) & (nm) & ({\AA}) & ({\AA}) \\ [0.5ex]
\hline
1 & 853.6 & $862.06^{+0.02}_{-0.02}$ & $0.45^{+0.06}_{-0.05}$ & $0.088\pm0.004$ & 0.091 \\ [0.5ex]
2 & 555.0 & $862.07^{+0.02}_{-0.02}$ & $0.46^{+0.06}_{-0.05}$ & $0.091\pm0.003$ & 0.093 \\ [0.5ex]
3 & 756.1 & $862.06^{+0.02}_{-0.02}$ & $0.48^{+0.05}_{-0.05}$ & $0.157\pm0.006$ & 0.159 \\ [0.5ex]
4 & 996.9 & $862.06^{+0.03}_{-0.02}$ & $0.56^{+0.08}_{-0.07}$ & $0.105\pm0.006$ & 0.107 \\ [0.5ex]
5 & 529.5 & $862.06^{+0.02}_{-0.02}$ & $0.47^{+0.05}_{-0.04}$ & $0.189\pm0.004$ & 0.190 \\ [0.5ex]
6 & 519.3 & $862.04^{+0.03}_{-0.03}$ & $0.46^{+0.08}_{-0.07}$ & $0.085\pm0.005$ & 0.091 \\ [0.5ex]
7 & 930.8 & $862.06^{+0.02}_{-0.02}$ & $0.47^{+0.05}_{-0.05}$ & $0.105\pm0.004$ & 0.111 \\ [0.5ex]
8 & 595.4 & $862.04^{+0.02}_{-0.02}$ & $0.46^{+0.05}_{-0.05}$ & $0.187\pm0.005$ & 0.191 \\ [0.5ex]
9 & 447.1 & $862.05^{+0.03}_{-0.03}$ & $0.42^{+0.07}_{-0.06}$ & $0.071\pm0.004$ & 0.072 \\ [0.5ex]
10 & 583.6 & $861.96^{+0.02}_{-0.02}$ & $0.48^{+0.06}_{-0.05}$ & $0.094\pm0.004$ & 0.093 \\ [0.5ex]
11 & 722.8 & $862.08^{+0.02}_{-0.02}$ & $0.48^{+0.06}_{-0.06}$ & $0.109\pm0.005$ & 0.112 \\ [0.5ex]
12 & 563.9 & $862.05^{+0.01}_{-0.01}$ & $0.46^{+0.03}_{-0.03}$ & $0.220\pm0.007$ & 0.227 \\ [0.5ex]
13 & 780.2 & $862.07^{+0.02}_{-0.02}$ & $0.46^{+0.05}_{-0.04}$ & $0.140\pm0.004$ & 0.145 \\ [0.5ex]
14 & 754.3 & $862.07^{+0.01}_{-0.01}$ & $0.47^{+0.03}_{-0.03}$ & $0.199\pm0.005$ & 0.203 \\ [0.5ex]
15 & 740.9 & $862.04^{+0.02}_{-0.02}$ & $0.49^{+0.05}_{-0.05}$ & $0.123\pm0.004$ & 0.127 \\ [0.5ex]
16 & 719.3 & $862.09^{+0.02}_{-0.02}$ & $0.47^{+0.04}_{-0.04}$ & $0.170\pm0.007$ & 0.172 \\ [0.5ex]
17 & 868.2 & $862.06^{+0.02}_{-0.02}$ & $0.48^{+0.07}_{-0.06}$ & $0.103\pm0.007$ & 0.111 \\ [0.5ex]
18 & 830.6 & $862.08^{+0.02}_{-0.02}$ & $0.45^{+0.06}_{-0.05}$ & $0.100\pm0.004$ & 0.102 \\ [0.5ex]
19 & 579.6 & $862.07^{+0.02}_{-0.02}$ & $0.48^{+0.07}_{-0.06}$ & $0.084\pm0.004$ & 0.087 \\ [0.5ex]
20 & 944.7 & $862.06^{+0.02}_{-0.02}$ & $0.49^{+0.05}_{-0.04}$ & $0.129\pm0.004$ & 0.135 \\ [0.5ex]
21 & 915.9 & $862.06^{+0.02}_{-0.02}$ & $0.48^{+0.06}_{-0.05}$ & $0.090\pm0.003$ & 0.094 \\ [0.5ex]
22 & 730.9 & $862.06^{+0.01}_{-0.02}$ & $0.46^{+0.04}_{-0.04}$ & $0.124\pm0.006$ & 0.127 \\ [0.5ex]
23 & 886.1 & $862.07^{+0.02}_{-0.02}$ & $0.50^{+0.06}_{-0.05}$ & $0.140\pm0.007$ & 0.147 \\ [0.5ex]
24 & 589.3 & $862.04^{+0.02}_{-0.02}$ & $0.46^{+0.05}_{-0.04}$ & $0.114\pm0.004$ & 0.117 \\ [0.5ex]
25 & 882.8 & $862.07^{+0.02}_{-0.02}$ & $0.45^{+0.07}_{-0.06}$ & $0.113\pm0.006$ & 0.116 \\ [0.5ex]
26 & 674.8 & $862.05^{+0.02}_{-0.02}$ & $0.52^{+0.08}_{-0.07}$ & $0.100\pm0.005$ & 0.105 \\ [0.5ex]
27 & 627.4 & $862.07^{+0.02}_{-0.02}$ & $0.47^{+0.06}_{-0.05}$ & $0.133\pm0.003$ & 0.134 \\ [0.5ex]
28 & 491.1 & $862.06^{+0.02}_{-0.02}$ & $0.49^{+0.07}_{-0.06}$ & $0.137\pm0.005$ & 0.142 \\ [0.5ex]
29 & 547.3 & $862.05^{+0.02}_{-0.02}$ & $0.42^{+0.04}_{-0.04}$ & $0.102\pm0.004$ & 0.101 \\ [0.5ex]
\hline
\multicolumn{6}{l}{$^a$ Signal-to-noise ratio of the red-band stacked ISM spectra.} \\
\multicolumn{6}{l}{$^b$ Measured central wavelength in the heliocentric frame.} \\
\multicolumn{6}{l}{$^c$ Full width at half maximum of DIB\,$\lambda$862.1.} \\
\multicolumn{6}{l}{$^d$ Fitted equivalent width of DIB\,$\lambda$862.1.} \\
\multicolumn{6}{l}{$^e$ Integrated equivalent width of DIB\,$\lambda$862.1.} \\
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/snr-fwhm.pdf}
\caption{FWHM versus $\rm S/N$ of stacked ISM spectra for DIBs $\lambda$442.8 ({\it upper panel}) and $\lambda$862.1 ({\it lower panel}).
The dashed green lines indicate $\rm S/N=85$ and 400, respectively.}
\label{fig:snr-fwhm}
\end{figure}
\section{Results} \label{sect:result}
The goodness of fit is significantly affected by the $\rm S/N$ of the stacked ISM spectra. Figure \ref{fig:snr-fwhm} shows the $\rm
S/N$ of each stacked ISM spectrum versus the corresponding fitted FWHMs of DIBs $\lambda$442.8 and $\lambda$862.1. It
can be seen that in the spectra with low $\rm S/N$, the fitted FWHM could be much larger than the average value and/or have much
larger uncertainties than average. Therefore, we limit $\rm S/N\,{>}\,85$ and $\rm S/N\,{>}\,400$ for blue-band and red-band stacked
ISM spectra, respectively, which gives us 29 fields with reliable fitting results of the two DIBs. Further analysis is based on
this sample. The stacked ISM spectra and fits to the two DIBs are shown in Figs. \ref{fig:recover-4428} and \ref{fig:recover-862.10},
respectively. The measured central wavelength, FWHM, and fitted equivalent width (EW) of the two DIBs $\lambda$442.8 and $\lambda$862.1
are listed in Tables \ref{tab:stack-fit4430} and \ref{tab:stack-fit8620}, respectively. The EW uncertainty is calculated as ${\rm
\Delta EW}\,{=}\,\sqrt{6\,{\rm FWHM}\,\delta \lambda} \times {\it R_C}$, where $\delta \lambda$ is the spectral pixel resolution
(0.1\,nm for blue-band and 0.025\,nm for red-band) and $R_C\,{=}\,{\rm std(data-model)}$ is the noise level of the profile. This
formula is similar to those in \citet{Vos2011} and \citet{VE2006}, who considered the S/N and the placement of the continuum
as the main sources of EW uncertainty. ${\rm EW_{442.8}}$ has larger uncertainties than ${\rm EW_{862.1}}$ due to the lower
$\rm S/N$ of the blue band compared to the red band.
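The EW-uncertainty formula above can be written out directly; note that with FWHM and $\delta\lambda$ in nm the square root is in nm, and the final factor of 10 converting to {\AA} (the unit quoted in the tables) is our assumption about the bookkeeping.

```python
import numpy as np

def ew_uncertainty(fwhm_nm, dlambda_nm, data, model):
    """dEW = sqrt(6 * FWHM * dlambda) * R_C, with R_C = std(data - model).
    Inputs in nm; the factor 10 converts the result to Angstrom
    (our unit assumption, matching the tables)."""
    r_c = np.std(np.asarray(data) - np.asarray(model))
    return np.sqrt(6.0 * fwhm_nm * dlambda_nm) * r_c * 10.0

rng = np.random.default_rng(6)
resid = 0.0015 * rng.standard_normal(5000)      # mock fit residuals
dew = ew_uncertainty(0.47, 0.025, resid, 0.0)   # red-band FWHM, pixel size
```

With red-band-like numbers (FWHM 0.47\,nm, pixel 0.025\,nm, residual scatter 0.0015) the formula gives an uncertainty of a few $10^{-3}$\,{\AA}, the order of the $\rm EW_{862.1}$ errors in Table \ref{tab:stack-fit8620}.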
The integrated EW of $\lambda$862.1 is also calculated and listed in Table \ref{tab:stack-fit8620}. This is not done for
$\lambda$442.8 because of the stellar residuals within the DIB profile. The comparison between fitted and integrated $\rm EW_{862.1}$
is presented in Fig. \ref{fig:int-fit}. The EW difference is on average smaller than its uncertainty, but the integrated EW tends
to be slightly larger than the fitted EW, which could be caused by the residuals of stellar lines near the DIB or the potential
asymmetry of the DIB profile. The fitted ${\rm EW_{442.8}}$ and ${\rm EW_{862.1}}$ are used for the following analysis.
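The integrated EW is the direct pixel sum of the absorption depth over the profile; a minimal sketch for a Gaussian-like $\lambda$862.1 profile (mock values) is:

```python
import numpy as np

def integrated_ew(wave_nm, flux, continuum=1.0):
    """Direct EW integration: EW = sum(1 - flux/continuum) * dlambda,
    returned in Angstrom (factor 10 from nm)."""
    dlam = np.gradient(wave_nm)
    return np.sum((1.0 - flux / continuum) * dlam) * 10.0

x = np.linspace(861.0, 863.2, 200)
depth, sigma = 0.02, 0.2
flux = 1.0 - depth * np.exp(-0.5 * ((x - 862.06) / sigma) ** 2)
# analytic Gaussian EW: depth * sigma * sqrt(2*pi) ~ 0.010 nm ~ 0.10 A
ew = integrated_ew(x, flux)
```

Unlike the fitted EW, this sum picks up any stellar residuals and profile asymmetry inside the window, consistent with the slight excess of $\rm EW_{int}$ over $\rm EW_{fit}$ noted above.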
The average FWHM of $\lambda$442.8 measured in this work is $2.06\,{\pm}\,0.13$\,nm, slightly larger
than the value reported by \citet[][1.725\,nm]{Snow2002b}. \citet{Lai2020} attributed the wide range of FWHM values of $\lambda$442.8
in literature (e.g., 1.7\,nm in \citealt{Galazutdinov2020}; 2.4\,nm in \citealt{Fan2019}; 3.37\,nm in \citealt{Lai2020}) to the
differences in the local radiation field. The average FWHM of $\lambda$862.1 measured in this work is $0.47\,{\pm}\,0.03$\,nm,
which is close to the measurements of 0.43\,nm by \citet{HL1991} and 0.469\,nm by \citet{Maiz-Apellaniz2015a}.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/int-fit.pdf}
\caption{{\it Upper panel:} Comparison between integrated and fitted EW for DIB\,$\lambda$862.1. The dashed green line traces
the one-to-one correspondence. The average difference between integrated and fitted EW ($\rm \Delta=EW_{int}-EW_{fit}$) and the
mean EW uncertainty ($\rm \bar{\sigma}_{EW}$) are indicated. {\it Lower panel:} The EW differences as a function of the integrated
EW. The dashed black line marks a zero difference.}
\label{fig:int-fit}
\end{figure}
\subsection{Latitude groups} \label{subsect:group}
In this work, we derive $\EBV$ for each PIGS target from the map of \citet{Planck2016dust} using the Python package {\it dustmaps}
\citep{Green2018python} because these target stars are mainly very distant and at high latitudes ($93.6\%$ with $|b|\,{>}\,4^{\circ}$).
The median $\EBV$ in each PIGS field, together with its standard deviation as a measure of uncertainty, are listed in Table
\ref{tab:stack-fit4430}. To check the reliability of the Planck map for PIGS targets, we compare the reddening values with
estimates from two other sources: the 3D reddening values derived with the StarHorse algorithm \citep{Queiroz2018} specifically for
the PIGS stars using the PIGS spectroscopic stellar parameters, Pan--STARRS1 photometry, and {\it Gaia} parallaxes
(Arentsen et al. in prep.), and the 3D Bayestar reddening map \citep{Green2019} evaluated at the StarHorse distances. Using the PIGS
stellar parameters as input to StarHorse delivers better-constrained, less uncertain reddenings and distances than those in the StarHorse
{\it Gaia} database \citep{Anders2022} for the same stars, which used only photometry and parallaxes as input. About 80\% of the PIGS
target stars have StarHorse and Bayestar $\EBV$ estimates, of which 90\% are farther than 5\,kpc. The comparison between the Planck,
StarHorse, and Bayestar reddenings is presented in Fig. \ref{fig:ebv}. $\EBV$ from Planck and StarHorse are highly consistent with
each other (the mean difference is smaller than 0.001\,mag), while the Bayestar $\EBV$ is slightly larger (0.033\,mag
on average) than the Planck values, which could be due to the different methods of reddening inference. The differences of median
$\EBV$ in the 29 used PIGS fields (red squares in Fig. \ref{fig:ebv}) between Planck, StarHorse, and Bayestar are mostly smaller
than their uncertainties. This ensures that the Planck $\EBV$, which is available for all the PIGS targets, is a reliable
measure of the dust column densities toward these sightlines.
In our sample, $\EBV$ is strongly correlated with the Galactic latitude. Thus, the PIGS fields are divided into three latitude
stripes to highlight the effect of latitude in the following analysis. The middle stripe is further separated into two at
$\ell\,{=}\,{-}1^{\circ}$ considering the effect of $\ell$ toward the Galactic center (GC). Finally, we roughly define four
latitude groups (see Fig. \ref{fig:bgroup}): G1: $|b|\,{>}\,12^{\circ}$ (red), G2: $8^{\circ}\,{<}\,|b|\,{<}\,12^{\circ}$ and
$\ell\,{>}\,{-}1^{\circ}$ (yellow), G3: $8^{\circ}\,{<}\,|b|\,{<}\,12^{\circ}$ and $\ell\,{<}\,{-}1^{\circ}$ (cyan), and G4:
$|b|\,{<}\,8^{\circ}$ (blue).
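The group definitions above amount to a simple conditional on $(\ell,b)$; a sketch, with the boundary cases $|b|=8^{\circ}$ and $|b|=12^{\circ}$ resolved arbitrarily:

```python
def latitude_group(l_deg, b_deg):
    """Assign a field to the latitude groups G1-G4 defined in the text."""
    if abs(b_deg) > 12.0:
        return "G1"                               # high latitude
    if 8.0 < abs(b_deg) < 12.0:
        return "G2" if l_deg > -1.0 else "G3"     # split at l = -1 deg
    return "G4"                                   # low latitude, |b| < 8 deg
```

Applied to the field centers in Table \ref{tab:stack-fit4430}, field 4 at $(6.20^{\circ},{-}13.97^{\circ})$ falls in G1, field 7 at $(3.86^{\circ},10.00^{\circ})$ in G2, field 6 at $({-}4.59^{\circ},8.63^{\circ})$ in G3, and field 1 at $(6.66^{\circ},{-}7.15^{\circ})$ in G4.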
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/extinction.pdf}
\caption{Comparison of $\EBV$ derived from \citet{Planck2016dust} with those from the StarHorse algorithm ({\it upper panel}) and
from the Bayestar map ({\it lower panel}). The colored dots with uncertainties are for individual PIGS target stars. Their colors
represent the number densities estimated by a Gaussian kernel density estimation. The zoom-in panels show the $\EBV$ differences
of Planck--StarHorse and Planck--Bayestar, respectively. The number of PIGS target stars ($N$), the mean differences ($\Delta$),
and their standard deviations ($\sigma$) are indicated in the panels. The red squares with error bars are the median $\EBV$ and their
standard deviations calculated in the 29 used PIGS fields. The dashed green lines trace the one-to-one correspondence.}
\label{fig:ebv}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/bGroup.pdf}
\caption{Spatial distribution $(\ell,b)$ of 29 PIGS fields, overplotted on the dust reddening map of \citet{Planck2016dust}.
Different colors indicate different latitude groups defined in Sect. \ref{subsect:group}, that is red: G1, $|b|\,{>}\,12^{\circ}$;
yellow: G2, $8^{\circ}\,{<}\,|b|\,{<}\,12^{\circ}$ and $\ell\,{>}\,{-}1^{\circ}$; cyan: G3, $8^{\circ}\,{<}\,|b|\,{<}\,12^{\circ}$
and $\ell\,{<}\,{-}1^{\circ}$, and blue: G4, $|b|\,{<}\,8^{\circ}$. The median $\EBV$ of each field is also indicated.}
\label{fig:bgroup}
\end{figure}
\subsection{Linear relations between different interstellar materials} \label{subsect:linear}
One of the basic characteristics of most strong DIBs is the increase of their strength with dust reddening. Figure \ref{fig:ew-ebv}
shows the correlation between DIB strength ($\rm EW_{442.8}$ and $\rm EW_{862.1}$) and $\EBV$. A linear correlation with $\EBV$ is
found for both $\lambda$442.8 and $\lambda$862.1, with Pearson coefficients of $r_p\,{=}\,0.92$ and $r_p\,{=}\,0.83$, respectively.
The two outliers in G2 (yellow points) are due to local variations of $\EBV$: the field with $\EBV\,{=}\,0.59$\,mag is more reddened
than its vicinity, while the one with $\EBV\,{=}\,0.27$\,mag is less reddened than its neighbors (see Fig. \ref{fig:bgroup}).
Deviations from the linear correlation between DIB and $\EBV$ can also be found at high
latitudes (see red points in Fig. \ref{fig:ew-ebv}), which will be discussed in detail below.
A linear fit of $\EBV=0.363(\pm0.041) \times {\rm EW_{442.8}} - 0.048(\pm0.051)$ corresponds to ${\rm EW_{442.8}}/\EBV\,{=}\,2.75$\,{\AA}\,mag$^{-1}$,
which is intermediate among literature values (e.g., 2.89 in \citealt{Isobe1986} and 2.01 in \citealt{Fan2019}). For
$\lambda$862.1, we derived a coefficient of $\EBV/{\rm EW_{862.1}}\,{=}\,3.500\pm0.459$\,mag\,{\AA}$^{-1}$ with a very small intercept
(${-}0.007\pm0.059$). This value is slightly larger than previous results \citep[e.g.,][]{Munari2008,Kos2013,Krelowski2019b} but
between the values derived from {\it Gaia}--DR3 DIB results with different $\EBV$ sources \citep[see Table 3 in][]{Schultheis2022}.
It is known that the ratio $\EBV/{\rm EW}$ can vary significantly from one sightline to another and is also affected by
the use of different data samples and methods. Nevertheless, we argue that the positive correlation between DIB strength and dust
reddening in diffuse or intermediate ISM with a proper coefficient can be treated as a validation of the DIB measurement. For our
results, the range of $\rm EW_{442.8}$ at given $\EBV$ between 0.2 and 0.8\,mag is consistent with archival data shown in Fig. 5
in \citet{Lai2020} and early results of \citet{Herbig1975} shown in Fig. 6 in \citet{Snow2002b}. The variation of $\rm EW_{862.1}$
relative to $\EBV$ is also within the regions, considering the scatter, shown in Fig. 8 in \citet{Schultheis2022}.
A tight linear correlation ($r_p\,{=}\,0.94$) between the strengths of $\lambda$442.8 and $\lambda$862.1 can be seen in Fig.
\ref{fig:dib-dib} for $\rm EW_{442.8}\,{>}\,0.9$\,{\AA}. A linear fit yields $\rm EW_{862.1}/EW_{442.8}=0.098\pm0.007$ with a very
small offset of ${-}0.008\pm0.009$. Note that in our results $\lambda$442.8, the strongest DIB, is stronger than $\lambda$862.1 by
a factor of about ten, consistent with the average relative strength measured by \citet{Fan2019}.
However, three fields with $\rm EW_{442.8}\,{<}\,0.9$\,{\AA} have $\rm EW_{862.1}$ much larger than expected from the linear relation.
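The linear fits and Pearson coefficients quoted in this section can be reproduced with standard routines; the mock field values below are illustrative, drawn around the fitted relation $\rm EW_{862.1} \approx 0.098\,EW_{442.8} - 0.008$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# mock field-averaged EWs following the fitted DIB-DIB relation
ew4428 = rng.uniform(0.9, 2.1, 29)                       # Angstrom
ew8621 = 0.098 * ew4428 - 0.008 + 0.006 * rng.standard_normal(29)

fit = stats.linregress(ew4428, ew8621)   # slope, intercept, rvalue, ...
r_p, _ = stats.pearsonr(ew4428, ew8621)  # Pearson correlation coefficient
```

With 29 fields and the scatter typical of our EW uncertainties, the slope is recovered to within its quoted error and $r_p$ stays above 0.9, as in Fig. \ref{fig:dib-dib}.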
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/EW-EBV.pdf}
\caption{Correlation between DIB EW of $\lambda$442.8 ({\it upper panel}) and $\lambda$862.1 ({\it lower panel}) measured in stacked
ISM spectra and median $\EBV$ from \citetalias{Planck2016dust} map in corresponding fields. The red lines are the linear fits.
The fitting results and the Pearson coefficient ($r_p$) are also indicated. The points in different colors belong to different
latitude groups defined in Sect. \ref{subsect:group} and shown in Fig. \ref{fig:bgroup}.}
\label{fig:ew-ebv}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/EW.pdf}
\caption{Correlation between $\rm EW_{442.8}$ and $\rm EW_{862.1}$ measured in blue-band and red-band stacked ISM spectra,
respectively. The red line is the linear fit to the dots with $\rm EW_{442.8}>0.9$\,{\AA} (indicated by the dashed green
line). The fitted slope ($\alpha$) and Pearson coefficient ($r_p$) are indicated. The points are colored differently
according to the latitude groups they are assigned to (see Sect. \ref{subsect:group} and Fig. \ref{fig:bgroup}).}
\label{fig:dib-dib}
\end{figure}
\subsection{Variation of the relative strength with Galactic latitude and reddening} \label{subsect:variation}
The systematic variation of $\rm EW_{862.1}/EW_{442.8}$ with the Galactic latitude ($|b|$) and dust reddening ($\EBV$) is presented in
the upper panels of Figs. \ref{fig:var-b} and \ref{fig:var-ebv}, respectively; the uncertainty of the ratio includes the contributions
of both $\rm EW_{862.1}$ and $\rm EW_{442.8}$ through error propagation. $\rm EW_{862.1}/EW_{442.8}$ becomes larger than average for
$|b|\,{\gtrsim}\,10^{\circ}$ or $\EBV\,{\lesssim}\,0.3$\,mag, where an increase of $\rm EW_{862.1}/EW_{442.8}$ can be found with the
increasing $|b|$ and decreasing $\EBV$. The uncertainty of $\rm EW_{862.1}/EW_{442.8}$ tends to be larger in fields at high
latitudes or with small $\EBV$, which risks blurring the variation of the DIB relative strength. However, we note that for the G1 fields
(red points in Figs. \ref{fig:var-b} and \ref{fig:var-ebv}) with $\EBV\,{<}\,0.2$\,mag and $|b|\,{>}\,12^{\circ}$, the increase
of their mean $\rm EW_{862.1}/EW_{442.8}$ (0.140) over the fitted coefficient (0.098) is 0.042, larger than their
mean uncertainty (0.018) by a factor of two. Moreover, a tight negative correlation ($r_p\,{=}\,{-}0.88$) can be found between
$\rm EW_{862.1}/EW_{442.8}$ and $\EBV$ for $\EBV\,{<}\,0.31$\,mag, which also confirms that the variation of $\rm EW_{862.1}/EW_{442.8}$
with $|b|$ and $\EBV$ is not caused by the EW uncertainty but indicates the different distributions of the carriers of the two DIBs
(see Sect. \ref{sect:discuss} for more discussions). For $\EBV\,{\gtrsim}\,0.45$\,mag, $\rm EW_{862.1}/EW_{442.8}$
tends to slightly increase with $\EBV$ (see top panel in Fig. \ref{fig:var-ebv}), but more data are needed to confirm this trend.
We also emphasize that in Fig. \ref{fig:dib-dib}, three G3 fields (cyan points)
with $\rm EW_{442.8}\,{>}\,0.9$\,{\AA} were used for the linear fit of DIB strength, helping to bring the offset of the line close
to zero, but their $\rm EW_{862.1}/EW_{442.8}$ ratios already present a systematic variation with respect to $\EBV$.
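The ratio uncertainty used above follows from error propagation; assuming the standard first-order formula, a sketch with the field-1 values from Tables \ref{tab:stack-fit4430} and \ref{tab:stack-fit8620} is:

```python
import numpy as np

def ratio_with_error(ew1, sig1, ew2, sig2):
    """EW ratio and its uncertainty from first-order error propagation:
    sigma_r = r * sqrt((sig1/ew1)^2 + (sig2/ew2)^2)."""
    r = ew1 / ew2
    return r, r * np.sqrt((sig1 / ew1) ** 2 + (sig2 / ew2) ** 2)

# field 1: EW_862.1 = 0.088 +/- 0.004 A, EW_442.8 = 1.10 +/- 0.08 A
r, sig_r = ratio_with_error(0.088, 0.004, 1.10, 0.08)
```

The relative error of the ratio is dominated by whichever EW is less precisely measured, here $\rm EW_{442.8}$ from the noisier blue band.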
In our sample, fields at high latitudes in general have small $\EBV$, but the dust distribution shown in Fig. \ref{fig:bgroup} also
varies with longitudes and sightlines. Consequently, we can find a G3 field, with $(\ell,b)\,{=}\,({-}4.59^{\circ},8.63^{\circ})$
and $\EBV\,{=}\,0.26$\,mag, that follows the negative trend between $\rm EW_{862.1}/EW_{442.8}$ and $\EBV$ but does not present a
clear variation of $\rm EW_{862.1}/EW_{442.8}$ with $|b|$. The non-monotonic relationship between $|b|$ and $\EBV$, as well as the
averaging of the ISM and the complicated environments toward the GC, also introduce scatter in Figs. \ref{fig:var-b} and
\ref{fig:var-ebv}.
Similar pictures can also be found for ${\rm EW_{442.8}}/\EBV$ and ${\rm EW_{862.1}}/\EBV$ which are shown in the middle and lower
panels in Figs. \ref{fig:var-b} and \ref{fig:var-ebv}, respectively. A remarkable increase of ${\rm EW}/\EBV$ can be found
for two G1 fields in our sample for both $\lambda$442.8 and $\lambda$862.1. ${\rm EW_{862.1}}/\EBV$ stays around the
linear relation in a wide range of $|b|\,{\lesssim}\,11^{\circ}$ or $\EBV\,{\gtrsim}\,0.3$\,mag. Nevertheless, ${\rm EW_{442.8}}/\EBV$
presents larger scatters with respect to $|b|$ and $\EBV$ than ${\rm EW_{862.1}}/\EBV$, which implies that the carrier abundance
of $\lambda$442.8 would be more sensitive to the dust column densities and latitudes than that of $\lambda$862.1. ${\rm EW}/\EBV$
tends to be larger than average for $\EBV$ between 0.25 and 0.4\,mag, but no linear tendency can be found.
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/ratio-babs.pdf}
\caption{Variation of relative strength between DIBs $\lambda$442.8, $\lambda$862.1, and dust with Galactic latitude ($|b|$). The
dashed green lines indicate their average strength ratios from linear fits (see Sect. \ref{subsect:linear}). See Sect.
\ref{subsect:group} and Fig. \ref{fig:bgroup} for the point colors representing different latitude groups.}
\label{fig:var-b}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/ratio-EBV.pdf}
\caption{The same as Fig. \ref{fig:var-b}, but for the variation with $\EBV$.}
\label{fig:var-ebv}
\end{figure}
\section{Discussion} \label{sect:discuss}
\subsection{Relative vertical distributions between DIB carriers and dust grains} \label{subsect:distribution1}
By covering a wide range of Galactic latitude ($4\degr\,{<}\,|b|\,{<}\,15\degr$) and dust reddening ($0.1\,{<}\,\EBV\,{<}\,0.8$\,mag),
our results show that the DIBs $\lambda$442.8 and $\lambda$862.1 behave similarly with respect to dust grains; that is, their
${\rm EW}/\EBV$ stays roughly constant around the mean value, within the uncertainties, for $|b|\,{<}\,12^{\circ}$ or
$\EBV\,{>}\,0.3$\,mag, which indicates that the abundances of the DIB carriers and dust grains increase together
in the Galactic middle plane. On the other hand, ${\rm EW}/\EBV$ becomes significantly larger than average in the G1
fields at high latitudes. This phenomenon could be interpreted as evidence that DIBs and dust grains have different vertical
distributions in the Milky Way, because in our sample small $\EBV$ generally indicates sight lines toward higher latitudes.
However, clearly more data, especially at higher Galactic latitudes, are necessary to confirm the different vertical distributions
between the DIBs and the dust.
For our present sample, we cannot quantitatively estimate a scale height for the DIB carrier or dust because of the limited
sample size and the gaps at $|b|\,{\sim}\,12^{\circ}$ and $\EBV\,{\sim}\,0.2\,$mag. But the increase of ${\rm EW}/\EBV$ at high
latitudes indicates a decrease of the column density of dust grains relative to that of the DIB carriers, implying a larger
scale height for the carriers of both $\lambda$442.8 and $\lambda$862.1 than for the dust grains, if we assume a
simple disk model in which a component with a larger scale height also has a larger scale length. This result is consistent
with the detection of DIBs toward sightlines with negligible reddenings at high latitudes \citep{Baron2015b} and the result of
\citet{Kos2014} who measured $\lambda$862.1 in RAVE spectra in a range of $240^{\circ}\,{\leqslant}\,\ell\,{\leqslant}\,330^{\circ}$.
However, based on the {\it Gaia}--DR3 results, \citet{Schultheis2022} determined a scale height of DIB\,$\lambda$862.1 as
$98.69^{{+}10.81}_{{-}8.35}$\,pc in a range of $0^{\circ}\,{\leqslant}\,{\ell}\,{\leqslant}\,360^{\circ}$ and
$4^{\circ}\,{\leqslant}\,{|b|}\,{\leqslant}\,12^{\circ}$, which is smaller than the usually suggested scale height of dust grains,
such as $134.4{\pm}8.5$\,pc \citep{DS2001} and $125^{{+}17}_{{-}7}$\,pc \citep{Marshall2006}. The discrepancy could be a result of
the variation of the distribution of DIB carriers and dust grains from one sightline to another (see the wavy pattern of the dust
shown in \citealt{Lallement2022} for example). Furthermore, the vertical distribution of interstellar materials would be more
complicated than a single exponential model. \citet{GuoHL2021} fitted the dust distribution with a two-disk model and got two scale
heights of $72.7{\pm}2.4$\,pc and $224.6{\pm}0.7$\,pc for the thin and thick disks. Similarly, \citet{Su2021} also characterized the
molecular disk, traced by $^{12}$CO $J\,{=}\,(1{-}0)$ emission \citep{Su2019}, by two components with a thickness of ${\sim}85$\,pc
and ${\sim}280$\,pc, respectively, in a range of $16^{\circ}\,{\leqslant}\,{\ell}\,{\leqslant}\,25^{\circ}$ and ${|b|}\,{<}\,5.1^{\circ}$.
The {\it Gaia} result presents an average scale height of the carrier of $\lambda$862.1 in the vicinity of the Sun ($\lesssim$3\,kpc).
The PIGS target stars, nevertheless, are located toward the GC ($\ell\,{<}\,11^{\circ}$) and are more distant (90\% beyond 5\,kpc),
and could therefore trace a different relative vertical distribution between DIB carriers and dust grains, if we expect their
distributions to vary in different manners.
\subsection{Different vertical distributions between carriers of different DIBs} \label{subsect:distribution2}
Tight intensity correlations have been reported for many strong optical DIBs \citep[e.g.,][]{Friedman2011,Xiang2012,KZ2013}. But
most of these works rely on OB stars that mainly reside in the Galactic middle plane, where one can always get a linear relationship
between different interstellar materials in a broad enough distance range. Thus, a tight linearity is a necessary but not sufficient
condition to conclude a common origin for different DIBs. An example is the pair $\lambda$578.0 and $\lambda$579.7, which have been
proved to have different origins \citep[e.g.,][]{KW1988,Cami1997,KZ2013}, while high-level correlations can still be found with $r_p\,{>}\,0.9$
\citep[e.g.,][]{Friedman2011,Xiang2012,KZ2013}.
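The necessary-but-not-sufficient point can be illustrated with a simple numerical toy model (all quantities below are hypothetical and chosen purely for illustration, not taken from our analysis): two carriers with different exponential scale heights yield tightly correlated column densities over a broad latitude range, even though their strength ratio drifts systematically with $|b|$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sightlines with latitudes 4 < |b| < 15 deg, all to 1 kpc.
b = rng.uniform(4.0, 15.0, 500)
sinb = np.sin(np.radians(b))
z_max = 1000.0 * sinb                 # height above the plane reached at 1 kpc (pc)
# Two carriers with different (assumed) exponential scale heights in pc.
H1, H2 = 100.0, 200.0
ew1 = H1 * (1.0 - np.exp(-z_max / H1)) / sinb   # column density, carrier 1
ew2 = H2 * (1.0 - np.exp(-z_max / H2)) / sinb   # column density, carrier 2
r = np.corrcoef(ew1, ew2)[0, 1]       # Pearson correlation of the two "EWs"
```

Despite the different scale heights, the correlation coefficient comes out close to unity, while the ratio `ew2/ew1` grows monotonically with latitude, mimicking the latitude dependence of $\rm EW_{862.1}/EW_{442.8}$ discussed above.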
The variation of $\rm EW_{862.1}/EW_{442.8}$ with $|b|$ and $\EBV$ is strong evidence that $\lambda$442.8 and $\lambda$862.1 do
not share a common carrier. Moreover, their carriers could be well mixed in the fields with high reddenings or low latitudes,
as seen through their tight intensity correlation ($r_p\,{=}\,0.94$). But the carrier of $\lambda$862.1 becomes more abundant with respect
to $\lambda$442.8 at higher latitudes, which is consistent with \citet{MW1993}, who found that the strength of $\lambda$442.8
relative to those of $\lambda$578.0 and $\lambda$579.7 was greatest at low latitudes and decreased with increasing latitude.
\citet{Baron2015a} also showed that $\lambda$442.8 was absent in their spectra at high latitudes, while $\lambda$578.0 and $\lambda$579.7 tended to
have higher EW per reddening. Our results are in agreement with their findings that different DIB carriers could present different
vertical distributions and the carrier of $\lambda$442.8 seems to be mainly located in the Galactic plane compared to other DIBs.
A different origin for $\lambda$442.8 and $\lambda$862.1 is not unexpected as they are far away from each other in wavelength
and their profiles show different shapes. However, they serve as an example that illustrates the importance of exploring
the spatial distributions of different DIBs, especially in high-latitude regions, when one would like to confirm a
common or different origin for them.
A potential caveat to the above interpretation is the change of environmental conditions, such as temperature, from the
Galactic middle plane to the regions far away from it. It is therefore possible that a single DIB carrier produces two DIBs from
two different transitions that show different vertical structures. As $C_{60}^{+}$ is the only identified DIB carrier and the
variation of its DIBs with environments has not been observed, it is hard to characterize this effect. Further studies, combined
with other known atomic or molecular species, could address this problem to some extent. In the future, much can be gained from
the large sky-area spectroscopic surveys to investigate if there exists a layered structure along the vertical direction for a set
of DIBs, revealing a hierarchical distribution of various macromolecules or a dependence of their electronic transitions
on the interstellar environments.
\section{Conclusions} \label{sect:conclusion}
Based on stacking blue-band and red-band ISM spectra from the PIGS sample, we successfully fitted and measured two DIBs
$\lambda$442.8 and $\lambda$862.1 in 29 fields with a mean radius of 1{\degr}. Their FWHMs were estimated as
$2.06\,{\pm}\,0.13$\,nm for $\lambda$442.8 and $0.47\,{\pm}\,0.03$\,nm for $\lambda$862.1, which are both consistent with previous
measurements.
Our results depict a general image of the relative distributions of two DIBs and dust grains toward the GC with $|\ell|\,{<}\,11^{\circ}$
and $4\degr\,{<}\,|b|\,{<}\,15\degr$. The DIB carriers and dust grains are well mixed with each other for $|b|\,{<}\,12^{\circ}$
or $\EBV\,{>}\,0.3$\,mag. Tight linear correlations are derived between EW and $\EBV$ for both $\lambda$442.8 ($r_p\,{=}\,0.92$)
and $\lambda$862.1 ($r_p\,{=}\,0.83$). For $|b|\,{>}\,12^{\circ}$, $\lambda$442.8 and $\lambda$862.1 have larger relative
strengths with respect to the dust grains, which implies larger scale heights of the carriers of $\lambda$442.8 and $\lambda$862.1
than that of the dust grains toward the GC.
A tight linear intensity correlation ($r_p\,{=}\,0.94$) is also found between $\lambda$442.8 and $\lambda$862.1 when $|b|\,{\lesssim}\,10^{\circ}$
or $\EBV\,{\gtrsim}\,0.3$\,mag, with a relative strength of $\rm EW_{862.1}/EW_{442.8}=0.098\pm0.007$. But an increase of $\rm EW_{862.1}/EW_{442.8}$
with increasing $|b|$ and decreasing $\EBV$ for the fields at high latitudes points to different carriers for the two DIBs.
Our results suggest that the carrier of $\lambda$862.1 could have a larger scale height than that of $\lambda$442.8.
This work can be treated as an example of the significance and potential of DIB research covering a large range of
latitudes, as listed below:
\begin{enumerate}
\item The variation of the DIB relative strength at high latitudes is strong evidence for concluding a common or different
origin for different DIBs.
\item Vertical distributions of different DIBs can help us to reveal the structure of the Galactic ISM, especially the
carbon-bearing macromolecules which are supposed to be the DIB carriers.
\item Relative distributions between different DIBs also provide clues to their carrier properties. For example, a DIB with a larger
scale height would imply that its carrier can be formed earlier or more quickly in the Galactic plane and then be transported
to the high-latitude regions. Alternatively, we would trace carriers formed in the Galactic halo.
\end{enumerate}
\section*{Acknowledgements}
HZ is funded by the China Scholarship Council (No. 201806040200). AA and NFM acknowledge funding from the European Research Council
(ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834148). ES acknowledges funding
through VIDI grant ``Pushing Galactic Archaeology to its limits'' (with project number VI.Vidi.193.093) which is funded by the Dutch
Research Council (NWO). MS, VH, and NFM gratefully acknowledge support from the French National Research Agency (ANR) funded project
``Pristine'' (ANR-18-CE31-0017).
\section*{Data Availability}
The DIB fitting results in each field are shown in Tables \ref{tab:stack-fit4430} and \ref{tab:stack-fit8620}.
The spectra underlying this article will be shared on reasonable request to Anke Arentsen.
\bibliographystyle{mnras}
\section{Introduction}
In order to obtain higher convergence rates in the approximation of stochastic differential equations,
we need, in general, to incorporate the information contained in iterated integrals.
However, these integrals cannot, in general, be simulated directly and therefore need to be
replaced by an approximation. We first illustrate this issue in a finite
dimensional setting, although this work is concerned with the approximation of iterated It\^{o}
integrals in infinite dimensions. \\ \\
In the numerical approximation of stochastic ordinary differential equations (SODEs) that do not
possess commutative noise, iterated stochastic integrals
have to be simulated to achieve a high order of convergence, see~\cite{MR1214374}, \cite{MR1843055}.
One example of such a higher order scheme is the Milstein scheme developed in \cite{MR1335454},
which we present below to illustrate the issue.
For some fixed $d, K\in\mathbb{N}$, we consider a $d$-dimensional SODE of type
\begin{equation}\label{SODE}
\mathrm{d}X_t = a(X_t) \, \mathrm{d}t + \sum_{j=1}^K b^j(X_t)\, \mathrm{d} \beta_t^j
\end{equation}
with functions $a \colon \mathbb{R}^d \rightarrow \mathbb{R}^d$,
$b^j=(b^{1,j},\ldots,b^{d,j})^T \colon \mathbb{R}^d \rightarrow \mathbb{R}^d$, $j\in\{1,\ldots,K\}$,
for all $t\geq 0$ and initial value $X_0=x_0 \in \mathbb{R}^d$.
Moreover, $(\beta^j_t)_{t\geq 0}$, $j \in\{1,\ldots,K\}$,
denote independent real-valued Brownian motions.
For some $T>0$, we divide the time interval $[0,T]$ into $M\in\mathbb{N}$ equal
time steps $h = \frac{T}{M}$ and
denote $t_m =mh$ for $m\in\{0,\ldots,M\}$. The increments of the
Brownian motion are given as
$\Delta \beta^j_m = \beta^j_{t_{m+1}}-\beta^j_{t_m}$
for all $j\in\{1,\ldots,K\}$ and $m\in\{0,\ldots,M-1\}$.
Then, the Milstein scheme \cite{MR1335454} reads as $Y_0=x_0$ and
\begin{align*}
Y_{m+1} &= Y_m + a(Y_m) h + \sum_{j=1}^K b^j(Y_m) \Delta \beta^j_m
+ \sum_{i,j =1}^K
\Big(\frac{\partial b^{l,i}}{\partial x_k}(Y_m)\Big)_{1\leq l,k\leq d}b^{j}(Y_m)
\int_{t_m}^{t_{m+1}}\int_{t_m}^s \,\mathrm{d}\beta_r^i \,\mathrm{d}\beta_s^j
\end{align*}
for $m\in\{0,\ldots,M-1\}$ using the notation $Y_m=Y_{t_m}$.
Under suitable assumptions,
we obtain the following error estimate
\begin{equation}\label{ErrorMilstein}
\big( \mathrm{E}\big[|X_T-Y_M|^2\big]\big)^{\frac{1}{2}} \leq Ch,
\end{equation}
see~\cite{MR1214374}.
If SODE \eqref{SODE} does not possess commutative noise, see \cite{MR1214374} for details,
the Milstein scheme cannot be simplified and
one has to approximate the iterated stochastic integrals involved in
the method.
We denote these iterated It\^{o} integrals by
\begin{equation*}
I_{(i,j)}(h) = I_{(i,j)}(t,t+h) := \int_{t}^{t+h} \int_{t}^s
\mathrm{d}\beta_r^i \, \mathrm{d}\beta_s^j
\end{equation*}
for some $t\geq 0$, $h>0$, and for all $i,j \in\{1,\ldots,K\}$, where $K\in\mathbb{N}$
is the number of independent Brownian motions driving the SODE.
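As a sanity check for any simulation of these integrals, note that for $i \neq j$ integration by parts gives $I_{(i,j)}(h) + I_{(j,i)}(h) = \Delta\beta^i \Delta\beta^j$, since the quadratic covariation of independent Brownian motions vanishes. A discrete analogue of this identity, including the cross quadratic term, holds exactly for left-point (It\^{o}-type) Riemann sums; a minimal sketch (grid size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
h, N = 1.0, 10_000
dt = h / N
dbi = rng.normal(0.0, np.sqrt(dt), N)   # increments of beta^i on [0, h]
dbj = rng.normal(0.0, np.sqrt(dt), N)   # increments of beta^j
bi = np.concatenate(([0.0], np.cumsum(dbi)))   # path values at grid points
bj = np.concatenate(([0.0], np.cumsum(dbj)))
# Left-point (Ito) Riemann sums for the iterated integrals.
I_ij = np.sum(bi[:-1] * dbj)
I_ji = np.sum(bj[:-1] * dbi)
# Discrete integration by parts: the identity below is exact algebraically,
# and the cross term sum(dbi * dbj) vanishes as N grows.
lhs = I_ij + I_ji + np.sum(dbi * dbj)
rhs = bi[-1] * bj[-1]
```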
The research by Kloeden, Platen and Wright \cite{MR1178485} and by Wiktorsson \cite{MR1843055}
suggests different methods for an approximation of these integrals,
the main ideas are outlined in Section \ref{Sec:Approx}.
We denote by $\bar{I}^{(D)}_{(i,j)}(h)$ the approximation of $I_{(i,j)}(h)$
with the algorithm derived in \cite{MR1178485}
for $i,j \in\{1,\ldots,K\}$, $D,K\in\mathbb{N}$, $h>0$.
In \cite{MR1178485}, the authors proved that for all
$i,j \in\{1,\ldots,K\}$ and $h>0$, it holds
\begin{equation}\label{ErrDoubleKP}
\mathrm{E}\Big[\big\vert I_{(i,j)}(h) - \bar{I}^{(D)}_{(i,j)}(h)\big\vert^2\Big] \leq C\frac{h^2}{D},
\end{equation}
where $D\in\mathbb{N}$ denotes the index of the summand at which the series representation
of the stochastic double integral
is truncated to obtain the approximation $\bar{I}^{(D)}_{(i,j)}(h)$.
If we use the algorithm derived in \cite{MR1843055} instead, we denote the approximation
of ${I}_{(i,j)}(h)$
by $\hat{I}^{(D)}_{(i,j)}(h)$ for all $i,j \in\{1,\ldots,K\}$, $h>0$.
This scheme employs the same series representation as proposed in \cite{MR1178485}
but incorporates an approximation of the truncated term additionally.
The error resulting from this scheme is estimated as
\begin{equation}\label{ErrDoubleW}
\sum_{\substack{i,j=1 \\ i<j}}^K \mathrm{E}\Big[\big\vert I_{(i,j)}(h)
-\hat{I}^{(D)}_{(i,j)}(h)\big\vert^2\Big]
\leq \frac{5h^2}{24\pi^2D^2}K^2(K-1),
\end{equation}
where $D$ is again the index of the summand at which the series is truncated to obtain the approximation
and $K$ is the number of independent Brownian motions, see~\cite{MR1843055}.
For fixed $h$ and $K$, both approximations converge in
the mean-square sense as $D$ goes to infinity, though with different
orders of convergence.
In the numerical approximation of SODEs, the integer $D$ is determined
such that the overall order
of convergence in the time step is not distorted. For the Milstein scheme,
for example,
error estimate \eqref{ErrorMilstein} is considered, that is,
a strong order of convergence of 1
can be achieved. Therefore,
$D\geq \frac{C}{h}$ is chosen for the method derived in \cite{MR1178485}, whereas
$D\geq \frac{\sqrt{5K^2(K-1)}}{\sqrt{24\pi^2h}}$
is selected for the algorithm developed in \cite{MR1843055}, see also
\cite[Cor. 10.6.5]{MR1214374}.
This shows that if we decrease the step size $h$, the value for $D$
has to increase faster for the scheme developed
in \cite{MR1178485}.
Note that the error estimate \eqref{ErrDoubleW} depends on the
number of Brownian motions $K$ as well.
As this number is fixed in the setting of finite dimensional SODEs,
this factor is not crucial but simply a constant.
Therefore, the algorithm proposed by Wiktorsson \cite{MR1843055}
is superior to the one derived in
\cite{MR1178485} in terms of the computational effort when a given
order of convergence in the step size $h$
is to be achieved. \\ \\
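In an implementation, these truncation choices reduce to simple formulas. The following sketch (with the unspecified constant $C$ in the first bound set to $1$ for illustration) shows how fast $D$ grows as $h$ decreases for each method:

```python
import math

def D_kpw(h, C=1.0):
    # Kloeden-Platen-Wright: mean-square error ~ C * h^2 / D,
    # so D >= C / h preserves the O(h^2) local error.
    return math.ceil(C / h)

def D_wik(h, K):
    # Wiktorsson: D >= sqrt(5 * K^2 * (K - 1) / (24 * pi^2 * h)).
    return math.ceil(math.sqrt(5.0 * K**2 * (K - 1) / (24.0 * math.pi**2 * h)))
```

For $h = 10^{-4}$ and $K = 2$, for example, the first method requires $D = 10^4$ while the second needs only $D = 30$, reflecting the $h^{-1}$ versus $h^{-1/2}$ growth in the step size.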
The same issue arises in the context of higher order numerical
schemes designed for infinite dimensional stochastic differential equations
that need not have commutative noise. There, we also have to
approximate the involved iterated stochastic integrals in
order to implement the scheme. This time, however,
the stochastic process is infinite dimensional, in general.
In this work, we aim at devising numerical algorithms for the simulation of iterated integrals
which arise, for example, in the approximation of the mild solution
of stochastic partial differential
equations (SPDEs) of type
\begin{equation}\label{SPDE}
\mathrm{d} X_t = \big( AX_t+F(X_t)\big) \, \mathrm{d}t + B(X_t)\, \mathrm{d}W_t,
\quad t\in(0,T], \quad X_0 = \xi,
\end{equation}
where the commutativity condition from \cite{MR3320928}
\begin{equation}\label{CommutativityCond}
B'(y)\big(B(y)u,v\big)= B'(y)\big(B(y)v,u\big)
\end{equation}
for all $y\in H_{\beta}$, $u,v\in U_0$ is \emph{not} assumed to hold.
Here, $H_{\beta}=D((-A)^{\beta})$ denotes a separable Hilbert space for some $\beta \in[0,1)$.
The operators $A$, $F$, $B$,
and the initial value $\xi$
are assumed to fulfill the conditions imposed for the existence of a unique mild
solution, see~\cite{MR3236753},
and are not specified further.
The spaces are introduced in Section~\ref{Sec:Approx} and
$(W_t)_{t\geq 0}$ denotes a
$Q$-Wiener process taking values in some separable Hilbert space $U$
for some trace class operator $Q$.
In order to approximate the mild solution of SPDEs of type \eqref{SPDE}
with a higher order scheme,
we need to simulate iterated stochastic integrals of the form
\begin{equation}\label{DoubleIntSPDE}
\int_t^{t+h} \Psi\left(\int_t^s \Phi \, \mathrm{d}W_r\right)\,
\mathrm{d}W_s,
\end{equation}
for $t \geq 0$, $h>0$, and some operators $\Psi$, $\Phi$ specified in Section~\ref{Sec:Approx}.
These terms arise if condition \eqref{CommutativityCond}
is not fulfilled, for example,
in the Milstein scheme for SPDEs \cite{MR3320928}.
In this Milstein scheme, it holds $\Psi = B'(Y_t)$ and $\Phi = B(Y_t)$ for some
$B \colon H \rightarrow L_{HS}(U_0,H)$ and an approximation
$Y_t \in H_{\beta}$ with $t\geq 0$ and $\beta \in[0,1)$, where
$L_{HS}(U_0,H)$ denotes the space of all Hilbert-Schmidt operators from $U_0$
to $H$. For more details, we refer to~\cite{MR3320928}. \\ \\
We want to emphasize that the algorithms developed for
the approximation of iterated stochastic
integrals in the setting of SODEs are designed for some fixed
finite number $K$ of driving Brownian motions
and that the approximation error \eqref{ErrDoubleW} even
involves this number $K$ as a constant.
In contrast, when approximating the solution of SPDEs driven by
an infinite dimensional $Q$-Wiener process,
this number corresponds to the dimension of the finite dimensional
approximation subspace onto which the $Q$-Wiener process is projected.
Thus, to attain higher accuracy, the dimension $K$ of the approximation subspace
has to increase in general, i.e., $K$ is no longer a constant;
see the error estimate of the
Milstein scheme for SPDEs in \cite{MR3320928}, for example.
Therefore, this aspect has to be taken into account in order to
identify an appropriate approximation algorithm.
In the following, we derive two algorithms for the approximation of iterated integrals of type
\eqref{DoubleIntSPDE} based on the methods developed for the finite dimensional setting
by Kloeden, Platen, and Wright \cite{MR1178485} and by Wiktorsson \cite{MR1843055}.
These algorithms allow, for the first time, the implementation of higher order schemes
for SPDEs that do not possess commutative noise, and they include the algorithms
that can be used for finite dimensional SODEs as a special case.
We show that the algorithm that is superior in the setting of an infinite dimensional
$Q$-Wiener process cannot be uniquely determined in general
but is dependent on the covariance operator $Q$.
In the analysis of the approximation error,
we need to incorporate the eigenvalues of the covariance operator $Q$.
For the algorithm based on the approach by Kloeden, Platen, and Wright \cite{MR1178485},
we obtain a similar estimate as in \eqref{ErrDoubleKP} in
the mean-square sense, see Corollary~\ref{Algo1Lemma}.
For the method derived in the work of
Wiktorsson \cite{MR1843055}, we can prove two differing error estimates for
the case of infinite dimensional $Q$-Wiener processes by different means.
One is the same, apart from constants, as estimate \eqref{ErrDoubleW}.
Moreover, the fact that we integrate with respect
to a $Q$-Wiener process with a trace class operator $Q$ allows for an alternative proof to
the one given in \cite{MR1843055}. The result allows a possibly
superior convergence in $K$, depending on the rate of decay of the eigenvalues of
$Q$. Details can be found in Theorem~\ref{Algo2} and Theorem~\ref{Algo2Alternative}.
\section{Approximation of Iterated Stochastic Integrals}\label{Sec:Approx}
Throughout this work, we fix the following setting and notation.
Let $H$ and $U$ be separable real-valued Hilbert spaces.
In the following, let $(\Omega,\mathcal{F},P,(\mathcal{F}_t)_{t\geq 0})$
be a probability space, let $(W_t)_{t \geq 0}$ denote a $U$-valued
$Q$-Wiener process with respect to $(\mathcal{F}_t)_{t\geq 0}$
where $Q$ is a trace class covariance operator,
and let $U_0 := Q^{{1}/{2}}U$.
We define
$L(U,H)_{U_0} :=\{T|_{U_0} : T\in L(U,H)\}$ which is a dense subset of the space
of Hilbert-Schmidt operators $L_{HS}(U_0,H)$ \cite{MR2329435}.
Moreover, we assume that the operators $\Phi$ and $\Psi$ in \eqref{DoubleIntSPDE} fulfill
\begin{itemize}
\item[(A1)] $\Phi \in L(U,H)_{U_0}$ with $\|\Phi Q^{-\alpha}\|_{L_{HS}(U_0,H)}<C$,
\item[(A2)] $\Psi\in L(H,L(Q^{-\alpha}U,H)_{U_0})$
\end{itemize}
for some $\alpha\in(0,\infty)$. The parameter $\alpha$ determines the rate of convergence
for the approximation of the $Q$-Wiener process, see Theorem~\ref{Algo1} or Theorem~\ref{Algo2}.
Note that assumption (A1), needed to prove the convergence of
the approximation algorithms for iterated
integrals in Theorem~\ref{Algo1}, Theorem~\ref{Algo2}, and Theorem~\ref{Algo2Alternative},
is less restrictive
than the condition imposed on the operator $B$ in SPDE \eqref{SPDE}
to obtain the error estimate
for some numerical scheme to approximate its mild solution,
e.g., in \cite{2015arXiv150908427L}.
For the Milstein scheme in \cite{MR3320928}, however,
assumption (A2) need not be
fulfilled for its error analysis to hold true. \\ \\
If we are interested in
the approximation of, for example, the mild solution of \eqref{SPDE},
a combination of the error estimate for a numerical scheme
to obtain this process
and the error from the approximation of the iterated integrals has to be analyzed.
In this case, we impose the following assumptions instead
\begin{itemize}
\item[(B1)] $\Phi \in L(U,H)_{U_0}$,
\item[(B2)] $\Psi \in L(H,L(U,H)_{U_0})$.
\end{itemize}
For the convergence results in this case, we refer to
Corollary~\ref{Algo1Lemma} and Corollary~\ref{Algo2Lemma}, which have to be combined with estimates
on the respective numerical scheme.
These weaker conditions are sufficient as in the proof
the $Q$-Wiener process is approximated before the iterated
integral is compared to the approximation. \\ \\
Let $Q \in L(U)$ be a nonnegative and symmetric trace class operator with
eigenvalues $\eta_j$ and corresponding eigenfunctions $\tilde{e}_j$ for
$j \in \mathcal{J}$ where $\mathcal{J}$ is some countable index set.
The eigenfunctions $\{\tilde{e}_j, j\in\mathcal{J}\}$ constitute an
orthonormal basis of $U$, see~\cite[Prop. 2.1.5]{MR2329435}.
Then, for the $Q$-Wiener process $(W_t)_{t \geq 0}$, the following series
representation holds, see~\cite[Prop. 2.1.10]{MR2329435},
\begin{equation}\label{QSeries}
W_t = \sum_{j \in \mathcal{J}} \sqrt{\eta_j} \beta_t^j \tilde{e}_j,
\quad t\geq 0.
\end{equation}
Here, $(\beta^j_t)_{t\geq 0}$ with $j\in\{k\in\mathcal{J} \, |\, \eta_k>0 \}$
are independent real-valued Brownian motions.
As the $Q$-Wiener process $(W_t)_{t\geq 0}$ is an infinite dimensional stochastic process,
it has to be projected to some finite dimensional subspace
by truncating the series \eqref{QSeries} such that it can be simulated
in a numerical scheme.
For $K \in \mathbb{N}$, we denote by $(W_t^K)_{t\geq 0}$ the projected
$Q$-Wiener process, which is defined as
\begin{equation}\label{QSeriesK}
W_t^K = \sum_{j\in\mathcal{J}_K}
\langle W_t, \tilde{e}_j \rangle_U \, \tilde{e}_j , \quad t\geq 0,
\end{equation}
for some finite index set $\mathcal{J}_K \subset \mathcal{J}$ with $|\mathcal{J}_K|=K$.
This expression allows to write the iterated integral with respect to the projected
$Q$-Wiener process $(W_t^K)_{t\geq 0}$ for any $t\geq0$ and $h>0$ as
\begin{align*}
\int_t^{t+h} \Psi \bigg( \int_t^s \Phi \, \mathrm{d}W_r^K \bigg) \, \mathrm{d}W_s^K
&= \int_t^{t+h} \Psi \bigg( \int_t^s \Phi \sum_{i \in \mathcal{J}_K}
\langle \mathrm{d}W_r, \tilde{e}_i \rangle_U \tilde{e}_i \bigg)
\sum_{j \in \mathcal{J}_K} \langle \mathrm{d}W_s, \tilde{e}_j \rangle_U \tilde{e}_j \\
& = \sum_{i,j \in \mathcal{J}_K} I^Q_{(i,j)}(t,t+h)
\, \Psi\big(\Phi\tilde{e}_i, \tilde{e}_j\big)
\end{align*}
with
\begin{equation*}
I^Q_{(i,j)}(t,t+h) := \int_{t}^{t+h} \int_{t}^s
\langle \mathrm{d} W_r, \tilde{e}_i \rangle_U \,
\langle \mathrm{d} W_s, \tilde{e}_j \rangle_U
\end{equation*}
for $i,j \in \mathcal{J}_K$.
Therefore, we aim at devising a method to approximate the iterated stochastic integrals
$I^Q_{(i,j)}(t,t+h)$ for all $i,j\in\mathcal{J}_K$.
Below, we introduce two such algorithms and analyze
as well as discuss their convergence properties.
For simplicity of notation, we assume, without loss of generality,
$\mathcal{J}_K = \{1,2,\ldots,K\}$ with $\eta_j \neq 0$ for $j \in \mathcal{J}_K$
and denote $I^Q_{(i,j)}(h)=I^Q_{(i,j)}(t,t+h)$ in the following.
\subsection{Algorithm~1}
In the following, we mainly adapt the method introduced by Kloeden, Platen,
and Wright~\cite{MR1178485} to the setting of infinite dimensional stochastic processes.
Here, we additionally have to take into account the error arising from the projection
of the $Q$-Wiener process to a finite dimensional subspace.
\\ \\
For some $t \geq 0$, the coefficients of the projected $Q$-Wiener process
$\w_{t}^j := \langle W_t, \tilde{e}_j \rangle_U$
are independent real valued random variables that are
$N(0, \eta_j \, t)$ distributed for $j \in \mathcal{J}$.
Thus, the increments $\Delta \w_{h}^j :=
\langle W_{t+h}-W_t, \tilde{e}_j \rangle_U$ can be easily simulated
since $\Delta \w_{h}^j$ is $N(0, \eta_j \, h)$ distributed for
$j \in \mathcal{J}$ and $h \geq 0$.
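Sampling these increments is straightforward; a minimal sketch (the eigenvalue sequence $\eta_j = j^{-2}$ is an arbitrary illustrative choice for a trace class operator, and the sample variances reproduce $\eta_j h$ componentwise):

```python
import numpy as np

rng = np.random.default_rng(2)
K, h, M = 4, 0.01, 200_000
eta = 1.0 / np.arange(1, K + 1) ** 2        # assumed eigenvalues of Q
# M independent samples of the K coefficient increments <W_{t+h} - W_t, e_j>,
# each distributed N(0, eta_j * h); scale broadcasts over the sample axis.
dW = rng.normal(0.0, np.sqrt(eta * h), size=(M, K))
sample_var = dW.var(axis=0)
```

In an SPDE solver, one realization of such a row of increments multiplies the basis functions $\tilde{e}_j$ of $U$ in each time step.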
Our goal is to obtain an approximation of the iterated integrals $I^Q_{(i,j)}(h)$ for all
$i,j \in \mathcal{J}_K$, $K\in\mathbb{N}$, $h>0$
given the realizations of the increments $\Delta \w_{h}^j$ for $j \in \mathcal{J}_K$.
The following derivation of the approximation method follows the
representation in \cite{MR1178485} closely.
Below, let $K \in \mathbb{N}$ be arbitrary but fixed and
let us introduce the scaled Brownian bridge process
$(\w_s^j-\frac{s}{h}\w_h^j)_{0\leq s\leq h}$ for
$j \in \mathcal{J}_K$ and some $h \in (0,T]$.
We consider its series expansion
\begin{align}\label{FourierBBridge}
\w_s^j -\frac{s}{h}\w_h^j = \frac{1}{2} a_0^j
+\sum_{r=1}^{\infty} \Big(a^j_r\cos\Big(\frac{2r\pi s}{h}\Big)
+b^j_r \sin\Big(\frac{2r\pi s}{h}\Big)\Big)
\end{align}
which converges in $L^2(\Omega)$.
The coefficients are given by the following expressions
\begin{align*}
a^j_r = \frac{2}{h} \int_0^h (\w_u^j-\frac{u}{h} \w_h^j)
\cos\Big(\frac{2r\pi u}{h}\Big)\,\mathrm{d}u,
\quad
b^j_r = \frac{2}{h} \int_0^h (\w_u^j-\frac{u}{h} \w_h^j)
\sin\Big(\frac{2r\pi u}{h}\Big)\,\mathrm{d}u
\end{align*}
for all $j \in \mathcal{J}_K$, $r \in\mathbb{N}_0$,
and all $0\leq s\leq h \leq T$,
see also \cite{MR1178485}.
All coefficients $a^j_r$ and $b^j_r$ are independent and
$N(0,\tfrac{\eta_j h}{2 \pi^2 r^2})$ distributed for $r \in \mathbb{N}$
and $j \in \mathcal{J}_K$ and it holds $a_0^j = -2 \sum_{r=1}^{\infty} a_r^j$.
In contrast to \cite{MR1178485}, the distributions
of the coefficients additionally depend on the eigenvalues $\eta_j$
of the covariance operator $Q$.
In order to obtain an approximation of the
scaled Brownian motion $(\w_s^j)_{0\leq s\leq h}$
for some $h \in (0,T]$,
we truncate expression \eqref{FourierBBridge} at some integer $R\in\mathbb{N}$ and define
\begin{equation}\label{BrownianBridgeApprox}
{\w_s^j}^{(R)} =\frac{s}{h} \w_h^j + \frac{1}{2} a_0^j +\sum_{r=1}^{R}
\Big(a^j_r\cos\Big(\frac{2r\pi s}{h}\Big) +b^j_r \sin\Big(\frac{2r\pi s}{h}\Big)\Big).
\end{equation}
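In a simulation only finitely many coefficients can be drawn, so $a_0^j$ is replaced by its truncated analogue $-2\sum_{r=1}^{R} a_r^j$; with this choice the truncated path is an exact bridge from $0$ to $\w_h^j$, which serves as an implementation check. A minimal sketch (eigenvalue, step size, and $R$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
h, R, eta = 1.0, 50, 0.5                 # step size, truncation index, eigenvalue (assumed)
w_h = rng.normal(0.0, np.sqrt(eta * h))  # endpoint value w_h^j
r = np.arange(1, R + 1)
sd = np.sqrt(eta * h / (2 * np.pi**2 * r**2))
a = rng.normal(0.0, sd)                  # a_r^j ~ N(0, eta*h/(2 pi^2 r^2))
b = rng.normal(0.0, sd)                  # b_r^j, same distribution
a0 = -2.0 * a.sum()                      # truncated analogue of a_0^j

def w_R(s):
    # Truncated Fourier approximation of the path on [0, h].
    phase = 2.0 * np.pi * r * s / h
    return s / h * w_h + 0.5 * a0 + np.sum(a * np.cos(phase) + b * np.sin(phase))
```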
In fact, we are interested in the integration with respect to this process.
According to Wong and Zakai \cite{MR0195142,MR0183023},
or \cite[Ch. 6.1]{MR1214374}, an integral with respect to process
\eqref{BrownianBridgeApprox} converges to a Stratonovich
integral $J(h)$ as $R\rightarrow \infty$. We are, however,
interested in the It\^{o} stochastic integral.
Following \cite[p.~174]{MR1214374}, the Stratonovich integral $J^Q_{(i,j)}(h)$
can be converted to an It\^{o}
integral $I^Q_{(i,j)}(h)$,
$i,j \in \mathcal{J}_K$, according to
\begin{equation*}
{I}^Q_{(i,j)}(h) = {J}^Q_{(i,j)}(h) -\frac{1}{2} \, h \, \eta_i \, \mathds{1}_{i=j}.
\end{equation*}
That is, ${I}^Q_{(i,j)}(h) = {J}^Q_{(i,j)}(h)$ for all
$i,j \in \mathcal{J}_K$ with $i \neq j$.
Moreover, we compute
\begin{equation*}
I^Q_{(i,i)}(h) = \frac{\big(\Delta \w^i_h)^2 - \eta_i \, h}{2}
\end{equation*}
directly for $ i \in \mathcal{J}_K$, see~\cite[p.~171]{MR1214374}.
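The diagonal formula can be checked against a left-point Riemann sum on a fine grid: the discrete sum equals $\big((\w_h^i)^2 - \sum_k \Delta_k^2\big)/2$ exactly, where $\Delta_k$ are the grid increments, and the realized quadratic variation $\sum_k \Delta_k^2$ converges to $\eta_i h$. A sketch (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
eta, h, N = 0.5, 1.0, 100_000                   # assumed eigenvalue, step, grid size
dw = rng.normal(0.0, np.sqrt(eta * h / N), N)   # grid increments of w^i
w = np.concatenate(([0.0], np.cumsum(dw)))
I_disc = np.sum(w[:-1] * dw)              # left-point sum approximating I^Q_(i,i)(h)
I_exact = (w[-1] ** 2 - eta * h) / 2      # closed-form expression above
```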
This implies that we only have to approximate ${I}^Q_{(i,j)}(h)$
for $i,j \in \mathcal{J}_K$ with $i \neq j$.
Thus, we obtain the desired approximation of the It\^{o} stochastic
integral directly by integrating with respect
to process \eqref{BrownianBridgeApprox}.
Without loss of generality let $t=0$.
By \eqref{FourierBBridge}, we obtain the following expression for
the iterated stochastic integral
\begin{align}\label{DoubleLevy}
I^Q_{(i,j)}(h)
&= \int_0^h \w^i_u \, \mathrm{d}\w^j_u \nonumber \\
&= \int_0^h \bigg( \frac{u}{h} \w_h^i + \frac{1}{2} a^i_0
+ \sum_{r=1}^{\infty} \Big( a^i_r \cos \Big( \frac{2r \pi u}{h} \Big)
+ b^i_r \sin \Big( \frac{2r \pi u}{h} \Big) \Big) \bigg) \, \mathrm{d}\w^j_u \nonumber \\
&= \frac{ \w_h^i}{h} \int_0^h u \, \mathrm{d}\w^j_u
+ \frac{1}{2} a^i_0 \w^j_h \nonumber \\
& \quad + \sum_{r=1}^{\infty} \Big( a^i_r \Big( \w_h^j
+ \int_0^h \frac{2r \pi}{h}
\sin \Big( \frac{2r \pi u}{h} \Big) \w_u^j \, \mathrm{d}u \Big)
-b^i_r \int_0^h \frac{2r \pi}{h}
\cos \Big( \frac{2r \pi u}{h} \Big) \w_u^j \, \mathrm{d}u \Big) \nonumber \\
&= \frac{1}{2} \w_h^i \w_h^j
- \frac{1}{2} (a^j_0 \w_h^i - a^i_0 \w^j_h)
+ \sum_{r=1}^{\infty} \Big( a^i_r \Big( \w_h^j
+ r \pi \Big( b_r^j - \frac{\w_h^j}{r \pi} \Big) \Big)
- \frac{2r \pi}{h} b^i_r \frac{h}{2} a_r^j \Big) \nonumber \\
&= \frac{1}{2} \w_h^i \w_h^j
- \frac{1}{2} ( a^j_0 \w_h^i - a^i_0 \w^j_h)
+ \pi \sum_{r=1}^{\infty} r (a^i_r b^j_r - b^i_r a^j_r ) \nonumber \\
&= \frac{1}{2} \Delta\w_h^i \Delta\w_h^j
+ \pi \sum_{r=1}^{\infty} r \Big(a^i_r \Big( b^j_r - \frac{1}{\pi r} \Delta \w^j_h \Big)
- \Big( b^i_r - \frac{1}{\pi r} \Delta \w^i_h \Big) a^j_r \Big)
\end{align}
for all $i,j \in \mathcal{J}_K$, $i \neq j$, and $h>0$.
Here, we employed the fact that
$\int_0^h f(u) \, \mathrm{d} \w_u^j = f(h) \w_h^j - \int_0^h f'(u) \w_u^j \, \mathrm{d}u$ for
a continuously differentiable function $f \colon [0,h] \rightarrow \mathbb{R}$,
$h > 0$, see~\cite[p.~89]{MR1214374},
$a_0^j = \frac{2}{h} \int_0^h \w_u^j \, \mathrm{d}u -\w_h^j$, and
in particular $a_0^j = -2 \sum_{r=1}^{\infty} a_r^j$,
$j \in \mathcal{J}_K$.
Expression \eqref{DoubleLevy} involves scaled L\'{e}vy stochastic area
integrals, which are defined as
\begin{equation}\label{AreaIntKP}
A_{(i,j)}^Q(h) :=
\pi \sum_{r=1}^{\infty} r \Big(a^i_r \Big( b^j_r - \frac{1}{\pi r} \Delta \w^j_h \Big)
- \Big( b^i_r - \frac{1}{\pi r} \Delta \w^i_h \Big) a^j_r \Big)
\end{equation}
for all $ i,j \in \mathcal{J}_K$, $i \neq j$, $h>0$.
We approximate these terms instead of the iterated stochastic integrals, as proposed in
\cite{MR1178485} and \cite{MR1843055}.
Due to the relations
\begin{align}
I_{(i,j)}^Q(h) &= \frac{\Delta \w_h^i \ \Delta\w_h^j
- h \, \eta_i \, \delta_{ij}}{2} + A_{(i,j)}^Q(h) \label{AandI1} \\
A^Q_{(j,i)}(h) &= -A^Q_{(i,j)}(h) \label{AandI2}\\
A^Q_{(i,i)}(h)& = 0 \label{AandI3}
\end{align}
$\Prob$-a.s.\ for all $i,j \in \mathcal{J}_K$, $h>0$, see~\cite{MR1843055}, it is sufficient to
simulate $A_{(i,j)}^Q(h)$ for $i,j \in \mathcal{J}_K$ with $i<j$.
By the distributional properties
of $a_r^i$ and $b_r^i$ for $r \in \mathbb{N}_0$, $i \in \mathcal{J}_K$,
we write
\begin{equation*}
A_{(i,j)}^Q(h) = \frac{h}{2\pi} \sum_{r=1}^{\infty} \frac{1}{r} \Big(U_{ri}^Q
\Big( Z_{rj}^Q - \sqrt{\frac{2}{h}} \Delta \w_h^j \Big)
- U_{rj}^Q \Big( Z_{ri}^Q - \sqrt{\frac{2}{h}} \Delta \w_h^i \Big) \Big)
\end{equation*}
for all $ i,j \in \mathcal{J}_K$, $i \neq j$, $h>0$
and $A^Q(h) = \big( A_{(i,j)}^Q(h)\big)_{1 \leq i, j \leq K}$ in order to relate to
the derivation in \cite{MR1843055}.
This representation involves the random variables $U_{ri}^Q \sim N(0,\eta_i)$, $Z_{ri}^Q \sim N(0,\eta_i)$, and
$\Delta \w_h^i \sim N(0,\eta_i \, h)$ that are all independent for $i \in \mathcal{J}_K$,
$r \in\mathbb{N}$.
As described above, we only need to approximate $A_{(i,j)}^{Q}(h)$, $h>0$,
for $i, j \in \mathcal{J}_K$ with $i <j $, that is, we want to simulate
\begin{equation*}
\tilde{A}^Q(h) =(A_{(1,2)}^Q(h),\ldots,A_{(1,K)}^Q(h),A_{(2,3)}^Q(h),\ldots,A_{(2,K)}^Q(h),
\ldots,A_{(l,l+1)}^Q(h),\ldots,A_{(l,K)}^Q(h),\ldots,A_{(K-1,K)}^Q(h)).
\end{equation*}
Therefore, we write
\begin{equation*}
\text{vec}(A^Q(h)^T) = (A^Q_{(1,1)}(h),\ldots,A^Q_{(1,K)}(h),A^Q_{(2,1)}(h),\ldots,A^Q_{(2,K)}(h),\ldots,
A^Q_{(K,1)}(h),\ldots,A^Q_{(K,K)}(h))^T
\end{equation*}
and introduce the selection matrix
\begin{equation}\label{SelectionMatrix}
H_K = \begin{pmatrix}
0_{K-1\times 1} & I_{K-1} & 0_{K-1\times K(K-1)}\\
0_{K-2\times K+2} & I_{K-2} & 0_{K-2\times K(K-2)}\\
\vdots & \vdots & \vdots\\
0_{K-l\times(l-1)K+l} & I_{K-l} & 0_{K-l\times K(K-l)}\\
\vdots & \vdots & \vdots\\
0_{1\times(K-2)K+K-1} & 1 & 0_{1\times K}
\end{pmatrix}
\end{equation}
which selects the integrals that have to be computed, cf.~\cite{MR1843055}.
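To make the block structure of \eqref{SelectionMatrix} concrete: $H_K$ simply extracts the strictly upper-triangular entries, in row-major order, from $\text{vec}(A^Q(h)^T)$. A minimal numpy sketch; the loop-based construction of $H_K$ is our own and is intended to match the block form above:

```python
import numpy as np

def selection_matrix(K):
    """H_K: picks the entries A_{i,j}, i < j, out of vec(A^T),
    where vec(A^T) is the row-major flattening of A."""
    L = K * (K - 1) // 2
    H = np.zeros((L, K * K))
    row = 0
    for i in range(K):                    # 0-based pair (i, j), i < j
        for j in range(i + 1, K):
            H[row, i * K + j] = 1.0       # position of A_{i,j} in vec(A^T)
            row += 1
    return H

K = 4
A = np.arange(K * K, dtype=float).reshape(K, K)
H = selection_matrix(K)
# vec(A^T) stacks the rows of A, i.e. A.flatten() in numpy's row-major order
selected = H @ A.flatten()
print(selected)   # A[0,1], A[0,2], A[0,3], A[1,2], A[1,3], A[2,3]
```

Since the rows of $H_K$ are distinct standard basis vectors, $H_K H_K^T = I_L$, a fact used implicitly in the manipulations below.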
Further, we define the matrix
\begin{equation*}
Q_K := \diag({\eta_1}, \ldots, {\eta_K}) .
\end{equation*}
This allows us to express the vector $\tilde{A}^Q(h)$ as
\begin{align}
\tilde{A}^Q(h) &= H_K \text{vec}(A^Q(h)^T) \nonumber \\
&= \AQ \label{A_Vector}
\end{align}
with $\Delta\w_h^Q =(\Delta\w_h^1, \dots, \Delta\w_h^K)^T$ and the random vectors
$U_r^Q = (U_{r1}^Q, \ldots, U_{rK}^Q)^T$
and $Z_r^Q = (Z_{r1}^Q, \ldots, Z_{rK}^Q)^T$ that are independent
and identically $N(0_K,Q_K)$ distributed for all $r \in \mathbb{N}$.
As expression \eqref{A_Vector} contains an infinite sum, we need to
truncate it in order to compute this vector.
For some $D \in \mathbb{N}$, this approximation is denoted as
\begin{equation} \label{Alg1-AQD-truncated}
\tilde{A}^{Q,(D)}(h) := \AQD
\end{equation}
and we specify the remainder
\begin{equation} \label{Alg1-RQD-truncation-error}
\tilde{R}^{Q,(D)}(h) := \RQ .
\end{equation}
Let $A^I(h) = Q_K^{-{1}/{2}} A^Q(h) Q_K^{-{1}/{2}}$ denote the
matrix containing the standard L\'{e}vy stochastic area integrals that correspond to the case
that $Q_K=I_K$, i.e., $\eta_j=1$ for all $j \in \mathcal{J}_K$.
With this, we obtain the relationship
\begin{align*
\tilde{A}^Q(h) &= H_K \text{vec}(A^Q(h)^T)
= H_K \big( Q_K^{{1}/{2}} \otimes Q_K^{{1}/{2}} \big)
\text{vec}(A^I(h)^T) \\
&= H_K \big( Q_K^{{1}/{2}} \otimes Q_K^{{1}/{2}} \big)
H_K^T H_K \text{vec}(A^I(h)^T) \\
&= H_K \big( Q_K^{{1}/{2}} \otimes Q_K^{{1}/{2}} \big) H_K^T \tilde{A}^I(h),
\end{align*}
where $\tilde{A}^I(h) := H_K \text{vec}(A^I(h)^T)$ and where we employed
\begin{equation*}
H_K^T H_K = \diag( 0, \mathbf{1}^T_{K-1}, 0, 0, \mathbf{1}^T_{K-2}, \ldots, \mathbf{0}^T_l,
\mathbf{1}^T_{K-l}, \ldots, \mathbf{0}^T_{K-1}, 1, \mathbf{0}^T_{K} ) \in \mathbb{R}^{K^2 \times K^2}
\end{equation*}
and the fact that we are interested in indices $i,j \in \mathcal{J}_K$ with $i<j$ only. We denote
\begin{equation*}
\tilde{Q}_K := H_K \big(Q_K^{{1}/{2}} \otimes Q_K^{{1}/{2}}\big)H_K^T,
\end{equation*}
which is of size $L \times L$ with $L = \frac{K(K-1)}{2}$, such that the vector of interest
is given by
\begin{equation*}
\tilde{A}^Q(h) = \tilde{Q}_K \tilde{A}^I(h).
\end{equation*}
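Since $Q_K$ is diagonal, $\tilde{Q}_K$ is itself a diagonal $L \times L$ matrix with entries $\sqrt{\eta_i \eta_j}$, $i<j$, so this rescaling is computationally cheap. A quick numerical check; the eigenvalues and the loop-based construction of $H_K$ are our own illustrative choices:

```python
import numpy as np

def selection_matrix(K):
    """H_K: picks entries (i, j), i < j, from a row-major vectorization."""
    L = K * (K - 1) // 2
    H = np.zeros((L, K * K))
    row = 0
    for i in range(K):
        for j in range(i + 1, K):
            H[row, i * K + j] = 1.0
            row += 1
    return H

eta = np.array([4.0, 1.0, 0.25])          # illustrative eigenvalues of Q_K
K = len(eta)
Qh = np.diag(np.sqrt(eta))                # Q_K^{1/2}
H = selection_matrix(K)
tilde_Q = H @ np.kron(Qh, Qh) @ H.T       # L x L with L = K(K-1)/2
print(np.diag(tilde_Q))                   # sqrt(eta_i * eta_j) for i < j
```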
Now, we can represent the approximation $\tilde{A}^{Q,(D)}(h)$ of $\tilde{A}^{Q}(h)$
as
\begin{equation*
\tilde{A}^{Q,(D)}(h) = \tilde{Q}_K \tilde{A}^{I,(D)}(h)
\end{equation*}
and the vector of truncation errors by
$\tilde{R}^{Q,(D)}(h) = \tilde{Q}_K \tilde{R}^{I,(D)}(h)$
where $\tilde{A}^{I,(D)}(h)$ and $\tilde{R}^{I,(D)}(h)$ denote, in analogy to
\eqref{Alg1-AQD-truncated} and \eqref{Alg1-RQD-truncation-error},
the truncated part of $\tilde{A}^I(h)$ and its truncation error, respectively.
Note that $\tilde{A}^I(h)$ and especially $\tilde{A}^{I,(D)}(h)$ correspond
to the case where $\eta_j=1$ for all $j \in \mathcal{J}_K$, i.e.,
$Q_K=I_K$ in \eqref{A_Vector} and \eqref{Alg1-AQD-truncated}, respectively.
This also corresponds to the setting in \cite{MR1178485} if $\mathcal{J}$ is finite.\\ \\
\phantomsection \label{Sec:Algo1}\noindent
We summarize the representation above to formulate Algorithm~1
for some $h>0$, $t,t+h \in [0,T]$, and $D,K\in\mathbb{N}$:
\begin{sffamily}
\begin{enumerate}
\item For $j \in \mathcal{J}_K$, simulate the Fourier coefficients
$\Delta \w_{h}^j = \langle W_{t+h}-W_t, \tilde{e}_j \rangle_U$
of the increment $W_{t+h}-W_t$
with
$\Delta \w_h^Q = \big( \Delta \w^1_h, \ldots, \Delta \w^K_h \big)^T$ as
\begin{equation*}
\Delta\w_h^Q = \sqrt{h} \, Q_K^{{1}/{2}} V
\end{equation*}
where $V \sim N(0_K,I_K)$.
\item Approximate $\tilde{A}^Q(h)$ as
\begin{equation*}
\tilde{A}^{Q,(D)}(h)
= H_K \big( Q_K^{{1}/{2}} \otimes Q_K^{{1}/{2}} \big) H_K^T
\ADV
\end{equation*}
where $U_r, Z_r \sim N(0_K, I_K)$ are independent.
\item Compute the approximation $ \text{vec}((\bar{I}^{Q,(D)}(h))^T)$ of $ \text{vec}(I^Q(h)^T)$ as
\begin{equation*}
\text{vec}((\bar{I}^{Q,(D)}(h))^T) = \frac{\Delta\w_h^Q \otimes \Delta\w_h^Q - \text{vec}(h \, Q_K)}{2}
+ (I_{K^2} - S_K) H_K^T \tilde{A}^{Q,(D)}(h)
\end{equation*}
with $S_K := \sum_{i=1}^K \mathbf{e}_i^T \otimes (I_K \otimes \mathbf{e}_i)$, where
$\mathbf{e}_i$ denotes the $i$-th unit vector.
\end{enumerate}
\end{sffamily}
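The three steps can be sketched compactly in numpy, working in matrix rather than vectorized form; this translation is our own, and the antisymmetric matrix $A$ below carries the same information as $(I_{K^2}-S_K)H_K^T \tilde{A}^{Q,(D)}(h)$:

```python
import numpy as np

def algorithm1(eta, h, D, rng):
    """Steps 1-3 of Algorithm 1 for diagonal Q_K = diag(eta)."""
    K = len(eta)
    q = np.sqrt(eta)
    # Step 1: increment Delta w_h^Q = sqrt(h) Q_K^{1/2} V
    dw = np.sqrt(h) * q * rng.standard_normal(K)
    # Step 2: truncated Levy areas, assembled as the antisymmetric matrix
    # A[i, j] = A^{Q,(D)}_{(i,j)}(h); only i < j carries independent information.
    A = np.zeros((K, K))
    for r in range(1, D + 1):
        U = q * rng.standard_normal(K)        # U_r^Q ~ N(0_K, Q_K)
        Z = q * rng.standard_normal(K)        # Z_r^Q ~ N(0_K, Q_K)
        x = Z - np.sqrt(2.0 / h) * dw
        A += (np.outer(U, x) - np.outer(x, U)) / r
    A *= h / (2.0 * np.pi)
    # Step 3: matrix form of vec((bar I^{Q,(D)}(h))^T)
    I = 0.5 * (np.outer(dw, dw) - h * np.diag(eta)) + A
    return dw, A, I

rng = np.random.default_rng(0)
eta = np.array([1.0, 0.5, 0.25])
dw, A, I = algorithm1(eta, h=0.01, D=50, rng=rng)
```

By construction, $A$ is antisymmetric, the diagonal of $I$ reproduces the exact terms $\frac{1}{2}\big((\Delta\w_h^i)^2 - \eta_i h\big)$, and $I + I^T$ equals $\Delta\w_h^Q (\Delta\w_h^Q)^T - h\,Q_K$.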
We obtain the following error estimate for this approximation method;
the mean-square error converges with
order $1/2$ in $D$ while the convergence in $K$
is determined by the operator $Q$.
The first term results from the approximation of the
$Q$-Wiener process by $(W_t^K)_{t\geq 0}$,
whereas the second term is due to the approximation of
the iterated integral with respect to this
truncated process by Algorithm~1.
\begin{thm}[Convergence for Algorithm~1]\label{Algo1}
Assume that $Q \in L(U)$ is a nonnegative and symmetric trace class operator with eigenvalues
$\{ \eta_j : j \in \mathcal{J}\}$.
Further, let $\Phi \in L(U,H)_{U_0}$ with $\| \Phi Q^{-\alpha}\|_{L_{HS}(U_0,H)}<C$ for some $C>0$,
let
$\Psi \in L(H,L(Q^{-\alpha}U,H)_{U_0})$ for some $\alpha\in(0,\infty)$, i.e., (A1) and (A2)
are fulfilled, and let $(W_t)_{t \in [0,T]}$ be a $Q$-Wiener process. Then, it holds
\begin{align*}
&\bigg( \mathrm{E}\bigg[\Big\|\int_t^{t+h}
\Psi\Big( \int_t^s\Phi \, \mathrm{d}W_r\Big) \,\mathrm{d}W_s
- \sum_{i, j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h) \;
\Psi\big(\Phi\tilde{e}_i, \tilde{e}_j\big)
\Big\|_H^2\bigg]\bigg)^{\frac{1}{2}} \\
&\leq C_Qh\Big(\sup_{j\in\mathcal{J} \setminus
\mathcal{J}_K}\eta_j\Big)^{\alpha} + C_Q\frac{h}{\pi \sqrt{D}}
\end{align*}
for some $C_Q>0$ and all $h>0$, $t,t+h\in[0,T]$, $D, K \in \mathbb{N}$,
and $\mathcal{J}_K \subset \mathcal{J}$ with $|\mathcal{J}_K|=K$.
\end{thm}
\begin{proof}
For a proof, we refer to Section~\ref{Sec:Proofs}.
\end{proof}
Note that in the convergence analysis of numerical schemes for SPDEs, the approximation
of the iterated stochastic integrals is compared to integrals with respect
to $(W_t^K)_{t\geq0}$, $K\in\mathbb{N}$; see, for example, the proofs in \cite{MR3320928} and
\cite{2015arXiv150908427L}. The analysis then involves
the error estimate stated in Corollary~\ref{Algo1Lemma} below.
We emphasize that this estimate is independent of the integer $K$.
\begin{cor}\label{Algo1Lemma}
Assume that $Q$ is a nonnegative and symmetric trace class operator
and $(W_t)_{t \geq 0}$ is a $Q$-Wiener process.
Furthermore, let $\Phi \in L(U,H)_{U_0}$ and $\Psi \in L(H,L(U,H)_{U_0})$, i.e., assumptions
(B1) and (B2) are fulfilled. Then, it holds
\begin{align*}
&\bigg( \mathrm{E} \bigg[ \Big\| \int_t^{t+h} \Psi
\Big( \int_t^s\Phi \, \mathrm{d}W_r^K \Big) \, \mathrm{d}W_s^K
- \sum_{i, j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi\big(\Phi \tilde{e}_i, \tilde{e}_j \big)
\Big\|_H^2 \bigg] \bigg)^{\frac{1}{2}} \leq C_Q \frac{h}{\pi \sqrt{D}}
\end{align*}
for some $C_Q>0$ and all $h>0$, $t,t+h\in[0,T]$, $D,K\in\mathbb{N}$,
and $\mathcal{J}_K \subset \mathcal{J}$ with $|\mathcal{J}_K|=K$.
\end{cor}
\begin{proof
If we set $\eta_i=0$ for all $i \in \mathcal{J} \setminus \mathcal{J}_K$,
the result follows directly from Theorem~\ref{Algo1}.
\end{proof}
Next, we outline an alternative algorithm to approximate integrals of type \eqref{DoubleIntSPDE}.
In contrast
to the method presented above, the vector of tail sums $\tilde{R}^{Q,(D)}(h)$ is approximated and included
in the computation.
\subsection{Algorithm~2}
The following derivation is based on the scheme developed by Wiktorsson \cite{MR1843055} for SODEs.
In the finite dimensional setting, the error estimate \eqref{ErrDoubleW} depends on the
number of Brownian motions $K$ in addition to the time step size $h$.
This suggests that the computational cost
of simulating the stochastic double
integrals is much larger in the setting of
SPDEs, since the number of independent Brownian motions is, in general, not finite,
see also expression \eqref{QSeries}.
The eigenvalues of the $Q$-Wiener process are, however,
not incorporated in the error estimate \eqref{ErrDoubleW}.
For example, if we assume $\eta_j = \Oo(j^{-\rho_Q})$
for some $\rho_Q>1$, $C>0$, $j\in\mathcal{J} \subset \mathbb{N}$, we obtain -- for $\rho_Q\in(1,3)$ -- an
improved error estimate which depends on the rate of decay
of the eigenvalues instead of some fixed exponent of $K$.
This results from the fact that we integrate with respect to a $Q$-Wiener process in our setting,
where $Q$ is a nonnegative, symmetric trace class operator.
For $\rho_Q\geq 3$, we can show that the exponent of $K$ is bounded by 3.\\ \\
As before, we truncate the series \eqref{A_Vector} at some integer $D\in\mathbb{N}$ and obtain
the approximation $\tilde{A}^{Q,(D)}(h)$ in \eqref{Alg1-AQD-truncated}.
The vector of tail sums $\tilde{R}^{Q,(D)}(h)$ in \eqref{Alg1-RQD-truncation-error},
however, is not discarded but approximated by a
multivariate normally distributed random
vector instead, as described in \cite{MR1843055} for $Q_K = I_K$ and $|\mathcal{J}|=K$.
First, we determine the distribution of the tail sums;
for $r\in \mathbb{N}$, we compute the covariance matrix of
\begin{equation*}
V_r^Q := \VQr
\end{equation*}
conditional on
$Z_r^Q$ and $\Delta \w_h^Q$ as
\begin{align}
\SigmaQCond
&= \mathrm{E}\big[V_r^Q {V_r^Q}^T | Z_r^Q, \Delta \w_h^Q \big]
- \mathrm{E}\big[ V_r^Q | Z_r^Q, \Delta \w_h^Q \big]
\mathrm{E}\big[V_r^Q | Z_r^Q, \Delta \w_h^Q \big]^T \nonumber \\
&= (S_K-I_{K^2}) \Big( \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)
\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)^T \otimes Q_K \Big) (S_K-I_{K^2})
\end{align}
with $S_K = \sum_{i=1}^K \mathbf{e}_i^T \otimes (I_K \otimes \mathbf{e}_i)$, where
$\mathbf{e}_i$ denotes the $i$-th unit vector.
This expression can be reformulated without using the operator $S_K$
by taking into account that
\begin{align*}
&\mathrm{E} \Big[ U_r^Q \Big(Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)^T
\otimes \Big(Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) {U_r^Q}^T \Big| Z_r^Q, \Delta \w_h^Q \Big] \\
&= \Big( I_K \otimes \diag\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \Big)
\big( \mathbf{1}_K^T \otimes (Q_K \otimes \mathbf{1}_K) \big)
\Big( \diag \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \otimes I_K \Big) \\
&= \Big( Q_K^{1/2} \otimes \diag\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \Big)
\Big( \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)^T \otimes (Q_K^{1/2} \otimes \mathbf{1}_K) \Big)
\end{align*}
as
\begin{align}
&\SigmaQCond \nonumber \\
&= Q_K \otimes \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)
\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)^T
+ \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)
\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)^T \otimes Q_K \nonumber \\
&\quad - \Big( Q_K^{1/2} \otimes \diag\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \Big)
\Big( \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big)^T \otimes (Q_K^{1/2} \otimes \mathbf{1}_K) \Big)
\nonumber \\
&\quad - \Big( \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \otimes (Q_K^{1/2} \otimes \mathbf{1}_K^T) \Big)
\Big( Q_K^{1/2} \otimes \diag\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \Big)
.
\end{align}
Analogously to \cite{MR1843055},
by taking the expectation, we define
\begin{align} \label{SigmaInf}
\SQinfty &= \mathrm{E} \Big[ H_K \Sigma^Q(V_1^Q)_{|Z_1^Q, \Delta \w_h^Q} H_K^T
\Big| \Delta \w_h^Q \Big] \nonumber \\
&=
2 H_K (Q_K \otimes Q_K) H_K^T
+ \frac{2}{h} H_K (I_{K^2}-S_K) \big( Q_K \otimes \big(
\Delta\w_h^Q {\Delta\w_h^Q}^T \big) \big)
(I_{K^2}-S_K) H_K^T.
\end{align}
Taking into consideration that
\begin{align*}
&\mathrm{E} \Big[ \Big( I_K \otimes \diag\Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \Big)
\big( \mathbf{1}_K^T \otimes (Q_K \otimes \mathbf{1}_K) \big)
\Big( \diag \Big( Z_r^Q - \sqrt{\frac{2}{h}} \Delta\w_h^Q \Big) \otimes I_K \Big) \Big| \Delta \w_h^Q \Big] \\
&= \frac{2}{h} \Big( I_K \otimes \diag\big( \Delta\w_h^Q \big) \Big)
\big( \mathbf{1}_K^T \otimes (Q_K \otimes \mathbf{1}_K) \big)
\Big( \diag \big( \Delta\w_h^Q \big) \otimes I_K \Big) \\
&\quad + \sum_{i=1}^K \big( Q_K^{1/2} \mathbf{e}_i \big)^T \otimes
\big( I_K \otimes Q_K^{1/2} \mathbf{e}_i \big)
\end{align*}
and that $H_K \big( \sum_{i=1}^K \big( Q_K^{1/2} \mathbf{e}_i \big)^T \otimes
\big( I_K \otimes Q_K^{1/2} \mathbf{e}_i \big) \big) H_K^T = 0$, it follows that
expression \eqref{SigmaInf} can be rewritten as
\begin{align} \label{SigmaInf2}
\SQinfty
&=2 H_K (Q_K \otimes Q_K) H_K^T
+ \frac{2}{h} H_K \Big( Q_K \otimes \Delta\w_h^Q {\Delta \w_h^Q}^T
+ \Delta\w_h^Q {\Delta \w_h^Q}^T \otimes Q_K \nonumber \\
& \quad - \big(Q_K^{{1}/{2}} \otimes \diag(\Delta\w_h^Q) \big)
\big( {\Delta\w_h^Q}^T \otimes (Q_K^{{1}/{2}} \otimes \mathbf{1}_K) \big) \nonumber \\
& \quad - \big( \Delta\w_h^Q \otimes (Q_K^{{1}/{2}} \otimes \mathbf{1}_K^T) \big)
\big( Q_K^{{1}/{2}} \otimes \diag(\Delta\w_h^Q) \big) \Big) H_K^T.
\end{align}
This implies that, given $Z^Q=(Z_r^Q)_{r \in \mathbb{N}}$
and $\Delta \w_h^Q$, the vector of tail sums $\tilde{R}^{Q,(D)}(h)$
is conditionally Gaussian distributed with the following parameters
\begin{equation*}
\tilde{R}^{Q,(D)}(h)_{|Z^Q, \Delta \w_h^Q}
\sim N \Big(0_{L},
\Big( \frac{h}{2\pi} \Big)^2 \sum_{r=D+1}^{\infty}
\frac{1}{r^2} H_K \SigmaQCond H_K^T
\Big)
\end{equation*}
for $D \in \mathbb{N}$.
Hence, given $Z^Q$ and $\Delta \w_h^Q$, we can approximate
the tail sums by simulating
a conditionally standard Gaussian random vector
${\Upsilon^{Q,(D)}}_{|Z^Q,\Delta \w_h^Q} \sim N(0_{L}, I_{L})$ defined as
\begin{equation*}
\Upsilon^{Q,(D)} = \frac{2\pi}{h} \bigg( \sum_{r={D+1}}^{\infty} \frac{1}{r^2}
H_K \SigmaQCond H_K^T \bigg) ^{-\frac{1}{2}} \tilde{R}^{Q,(D)}(h)
\end{equation*}
and thereby obtain the vector of tail sums
\begin{equation}\label{ApproxRemainder}
\tilde{R}^{Q,(D)}(h) = \frac{h}{2\pi} \bigg(\sum_{r={D+1}}^{\infty} \frac{1}{r^2}
H_K \SigmaQCond H_K^T \bigg)^{\frac{1}{2}} \Upsilon^{Q,(D)} .
\end{equation}
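The scalar factor $\sum_{r=D+1}^{\infty} r^{-2}$ appearing here is easily evaluated numerically; a small sketch (the helper function and its truncation level are our own), together with the elementary integral-comparison bounds $\frac{1}{D+1} < \sum_{r=D+1}^{\infty} r^{-2} < \frac{1}{D}$:

```python
def tail_sum(D, terms=1_000_000):
    """Numerically evaluate sum_{r=D+1}^{D+terms} 1/r^2, i.e. the tail of
    the Basel series truncated after a large but finite number of terms."""
    return sum(1.0 / (r * r) for r in range(D + 1, D + 1 + terms))

D = 100
s = tail_sum(D)
# Elementary bounds for the full tail: 1/(D+1) < tail < 1/D
print(D, s, 1.0 / (D + 1), 1.0 / D)
```

The bounds show that the standard deviation contributed by the tail decays like $D^{-1/2}$, consistent with the convergence rates stated below.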
It remains to examine how the covariance matrix behaves as $D \to \infty$.
For $D \in \mathbb{N}$, we define the matrix
\begin{equation}\label{SigmaD}
\SigmaDQ := \bigg( \sum_{r=D+1}^{\infty} \frac{1}{r^2} \bigg)^{-1}
\sum_{r=D+1}^{\infty} \frac{H_K \SigmaQCond H_K^T}{r^2} .
\end{equation}
By the proof of Theorem~\ref{Algo2} below,
we get convergence in the following sense
\begin{equation*}
\lim_{D \to \infty} \mathrm{E} \Big[ \big\| \SigmaDQ - \SQinfty \big\|_F^2 \Big] = 0,
\end{equation*}
where $\|\cdot\|_F$ denotes the Frobenius norm. Thus, it follows that
\begin{equation*}
\frac{2\pi}{h} \bigg( \sum_{r={D+1}}^{\infty} \frac{1}{r^2} \bigg)^{-\frac{1}{2}} \tilde{R}^{Q,(D)}(h)
\stackrel{d}{\longrightarrow} \boldsymbol{\xi} \sim N\big(0_{L}, \SQinfty \big)
\end{equation*}
as $D \to \infty$, see also \cite{MR1843055}. \\ \\
\phantomsection \label{Sec:Algo2}\noindent
Combining the above, we obtain an algorithm very similar to the one in \cite{MR1843055};
steps $1$, $2$, and $4$ coincide with those of Algorithm~1, while in step 3 we additionally
approximate the vector of tail sums. For some $h>0$, $t,t+h \in [0,T]$, and $D,K\in\mathbb{N}$,
Algorithm~2 reads as follows:
\begin{sffamily}
\begin{enumerate}
\item For $j \in \mathcal{J}_K$, simulate the Fourier coefficients
$\Delta \w_{h}^j = \langle W_{t+h}-W_t, \tilde{e}_j \rangle_U$
of the increment $W_{t+h}-W_t$
with
$\Delta \w_h^Q = \big( \Delta \w^1_h, \ldots, \Delta \w^K_h \big)^T$ as
\begin{equation*}
\Delta\w_h^Q = \sqrt{h} \, Q_K^{{1}/{2}} V
\end{equation*}
where $V \sim N(0_K,I_K)$.
\item Approximate $\tilde{A}^Q(h)$ as
\begin{equation*}
\tilde{A}^{Q,(D)}(h) = H_K \big(Q_K^{{1}/{2}} \otimes Q_K^{{1}/{2}} \big) H_K^T
\ADV
\end{equation*}
where $U_r, Z_r \sim N(0_K, I_K)$ are independent.
\item Simulate $\Upsilon^{Q,(D)} \sim N(0_{L},I_{L})$ and compute
\begin{equation}\label{ApproxA}
\hat{A}^{Q,(D)}(h) = \tilde{A}^{Q,(D)}(h)
+ \frac{h}{2\pi} \bigg( \sum_{r=D+1}^{\infty}\frac{1}{r^2} \bigg)^{\frac{1}{2}}
\sqrt{\SQinfty} \Upsilon^{Q,(D)} .
\end{equation}
\item Compute the approximation $ \text{vec}((\hat{I}^{Q,(D)}(h))^T)$ of $ \text{vec}(I^Q(h)^T)$ as
\begin{equation*}
\text{vec}((\hat{I}^{Q,(D)}(h))^T) = \frac{\Delta\w_h^Q \otimes \Delta\w_h^Q
-\text{vec}(h Q_K)}{2}
+ (I_{K^2}-S_K) H_K^T \hat{A}^{Q,(D)}(h)
\end{equation*}
with $S_K = \sum_{i=1}^K \mathbf{e}_i^T \otimes (I_K \otimes \mathbf{e}_i)$.
\end{enumerate}
\end{sffamily}
Note that the matrix $\sqrt{\SQinfty}$ in step 3 is a square root of
$\SQinfty$, i.e., it satisfies $\sqrt{\SQinfty} \sqrt{\SQinfty}^T = \SQinfty$.
This expression, which is specified in the following theorem, can be obtained in closed
form and does not have to be computed numerically.
\begin{thm}[Square Root of the Covariance Matrix]\label{CholeskyDecomp}
Let $\SQinfty$ be defined as in \eqref{SigmaInf} or \eqref{SigmaInf2} with
$\Delta\w_h^Q = \sqrt{h} \, Q_K^{{1}/{2}} V$ and let $\Sinfty$ be
defined by \eqref{SigmaInf} or \eqref{SigmaInf2} with $Q_K=I_K$.
Then, it holds
\begin{equation*}
\sqrt{\SQinfty}
= \tilde{Q}_K \frac{\Sinfty + 2 \sqrt{1+ V^T V} \, I_{L}}
{\sqrt{2} \big(1+\sqrt{1+ V^T V}\big)}.
\end{equation*}
\end{thm}
\begin{proof}
For a proof, we refer to Section~\ref{Sec:Proofs}.
\end{proof}
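The identity can also be verified numerically. The sketch below builds $\Sinfty$ for $Q_K=I_K$ directly from \eqref{SigmaInf}, with the identity matrix of size $L=\frac{K(K-1)}{2}$, and checks that the stated expression squares back to it; the constructions of $H_K$ and $S_K$ are our own:

```python
import numpy as np

K, h = 3, 0.1
L = K * (K - 1) // 2
rng = np.random.default_rng(1)

# selection matrix H_K (strictly upper-triangular entries, row-major order)
H = np.zeros((L, K * K))
row = 0
for i in range(K):
    for j in range(i + 1, K):
        H[row, i * K + j] = 1.0
        row += 1

# S_K = sum_i e_i^T kron (I_K kron e_i): the commutation matrix,
# which maps vec(M) to vec(M^T)
S = np.zeros((K * K, K * K))
for i in range(K):
    e = np.zeros((K, 1)); e[i] = 1.0
    S += np.kron(e.T, np.kron(np.eye(K), e))

V = rng.standard_normal(K)
dw = np.sqrt(h) * V                       # Delta w_h^Q with Q_K = I_K
M = np.eye(K * K) - S
Sigma = 2.0 * H @ H.T \
    + (2.0 / h) * H @ M @ np.kron(np.eye(K), np.outer(dw, dw)) @ M @ H.T

a = np.sqrt(1.0 + V @ V)                  # sqrt(1 + V^T V)
B = (Sigma + 2.0 * a * np.eye(L)) / (np.sqrt(2.0) * (1.0 + a))
print(np.allclose(B @ B, Sigma))          # B is a symmetric square root
```

The check succeeds because $\Sigma_\infty$ satisfies $(\Sigma_\infty - 2I_L)(\Sigma_\infty - 2(1+V^TV)I_L)=0$, so its only eigenvalues are $2$ and $2(1+V^TV)$.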
Now, we analyze the error resulting from Algorithm~2.
In the following theorem, the first
term is the same as in the error estimate for Algorithm~1, see Theorem~\ref{Algo1}.
Due to the second term, the approximations converge with order $1$ in $D$,
which is twice the order that Algorithm~1 attains. However,
this term depends on $K$
as well. Below, we state an alternative estimate in which the exponent of $K$ is not fixed but
depends on the eigenvalues $\eta_j$, $j\in\mathcal{J}_K$, see Theorem~\ref{Algo2Alternative}.
Which of the two algorithms is superior depends on the operator $Q$.
\begin{thm}[Convergence for Algorithm~2]\label{Algo2}
Assume that $Q$ is a nonnegative and symmetric trace class operator
and $(W_t)_{t \geq 0}$ is a $Q$-Wiener process.
Further, let $\Phi \in L(U,H)_{U_0}$ with $\| \Phi Q^{-\alpha} \|_{L_{HS}(U_0,H)}<C$,
$\Psi\in L(H,L(Q^{-\alpha}U,H)_{U_0})$ for some $\alpha\in(0,\infty)$, i.e.,
assumptions (A1) and (A2) are fulfilled.
Then, it holds
\begin{align*}
&\bigg(\mathrm{E} \bigg[ \Big\| \int_t^{t+h}
\Psi\Big( \int_t^s\Phi \, \mathrm{d}W_r \Big) \, \mathrm{d}W_s
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi\big(\Phi \tilde{e}_i, \tilde{e}_j \big)
\Big\|_H^2 \bigg] \bigg)^{\frac{1}{2}} \\
&\leq C_Q h \Big(\sup_{j\in\mathcal{J} \setminus
\mathcal{J}_K} \eta_j \Big)^{\alpha}
+ C_Q\frac{h}{D} \sqrt{K^2(K-1)}
\end{align*}
for some $C_Q>0$ and all $h>0$, $t,t+h\in[0,T]$, $D,K\in\mathbb{N}$,
and $\mathcal{J}_K \subset \mathcal{J}$ with $|\mathcal{J}_K|=K$.
\end{thm}
\begin{proof
For a proof, we refer to Section~\ref{Sec:Proofs}.
\end{proof}
For completeness, we state the following error estimate.
Again, this is the estimate that we employ
when incorporating the approximation of the iterated
integrals into a numerical scheme; see also the
notes on Corollary~\ref{Algo1Lemma}.
\begin{cor}\label{Algo2Lemma}
Assume that $Q$ is a nonnegative and symmetric trace class operator
and $(W_t)_{t \geq 0}$ is a $Q$-Wiener process.
Furthermore, let $\Phi \in L(U,H)_{U_0}$, $\Psi\in L(H,L(U,H)_{U_0})$,
i.e., conditions (B1) and (B2)
are fulfilled.
Then, it holds
\begin{align*}
&\bigg(\mathrm{E}\bigg[ \Big\| \int_t^{t+h}
\Psi \Big( \int_t^s \Phi \, \mathrm{d}W_r^K \Big) \, \mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi\big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2 \bigg] \bigg)^{\frac{1}{2}}
\leq C_Q\frac{h}{D}\sqrt{K^2(K-1)}
\end{align*}
for some $C_Q > 0$ and all $h>0$, $t,t+h\in[0,T]$, $D,K\in\mathbb{N}$,
and $\mathcal{J}_K \subset \mathcal{J}$ with $|\mathcal{J}_K|=K$.
\end{cor}
\begin{proof
The proof of this corollary is detailed in the proof of Theorem~\ref{Algo2}.
\end{proof}
If we assume $\eta_j \leq Cj^{-\rho_Q}$ for $C>0$, $\rho_Q>1$, and all $j\in\{1,\ldots,K\}$,
we can improve the result in Theorem~\ref{Algo2} in the case $\rho_Q <3$. Precisely,
we obtain an error term
that involves the factor $K^{\frac{\rho_Q}{2}}$. The main difference
is that the alternative proof
works with the entries of the covariance matrices explicitly.
A statement
along the lines of Corollary~\ref{Algo2Lemma} can be obtained analogously.
\begin{thm}[Convergence for Algorithm~2]\label{Algo2Alternative}
Assume that $Q$ is a nonnegative and symmetric trace class operator
and $(W_t)_{t \geq 0}$ is a $Q$-Wiener process.
Further, let $\Phi \in L(U,H)_{U_0}$ with $\| \Phi Q^{-\alpha}\|_{L_{HS}(U_0,H)}<C$,
$\Psi\in L(H,L(Q^{-\alpha}U,H)_{U_0})$ for some $\alpha\in(0,\infty)$, i.e.,
assumptions (A1) and (A2) are fulfilled. Then, it holds
\begin{align*}
&\bigg(\mathrm{E}\bigg[\Big\|\int_t^{t+h}
\Psi\Big( \int_t^s\Phi \, \mathrm{d}W_r\Big) \,\mathrm{d}W_s
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi\big(\Phi \tilde{e}_i, \tilde{e}_j\big)
\Big\|_H^2\bigg]\bigg)^{\frac{1}{2}} \\
&\leq C_Qh\Big(\sup_{j\in\mathcal{J} \setminus
\mathcal{J}_K}\eta_j\Big)^{\alpha}
+ C_Q\frac{h}{D} \Big(\min_{j\in\mathcal{J}_K}\eta_j\Big)^{-\frac{1}{2}}
\end{align*}
for some $C_Q>0$ and all $h>0$, $t,t+h\in[0,T]$, $D,K\in\mathbb{N}$,
and $\mathcal{J}_K \subset \mathcal{J}$ with $|\mathcal{J}_K|=K$.
\end{thm}
\begin{proof
For a proof, we again refer to Section~\ref{Sec:Proofs}.
\end{proof}
\begin{remark}
Note that if $(W_t)_{t\geq 0}$ is a cylindrical Wiener process,
we get the same estimate \eqref{ErrDoubleW} as in the finite dimensional case.
\end{remark}
In general, for $h=\frac{T}{M}$, we obtain convergence of
Algorithm~2 for $K,M\rightarrow \infty$ if
we choose $D>(\min_{j\in\mathcal{J}_K}\eta_j)^{-\frac{1}{2}}h^{1-\theta}$ or, respectively,
$D>\sqrt{K^2(K-1)}h^{1-\theta}$
for some $\theta >0$;
for Algorithm~1, we require $D> h^{2-2\theta}$ instead.
However, we need a more careful choice of
$D$ to maintain the order of convergence in the mean-square sense in $h$
for a given numerical scheme
-- we call this convergence rate $\gamma>0$.
Precisely, we have to choose $D \geq h^{1-2\gamma}$ for Algorithm~1 and
$D \geq h^{\frac{1}{2}-\gamma}(\min_{j\in\mathcal{J}_K}\eta_j)^{-\frac{1}{2}}$, respectively,
$D \geq h^{\frac{1}{2}-\gamma}\sqrt{K^2(K-1)}$ for Algorithm~2.
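For concreteness, the choices above can be wrapped in a small helper; the function name is ours, and since the constants $C_Q$ are suppressed, the returned values are guides rather than sharp thresholds:

```python
import math

def truncation_levels(h, gamma, K, min_eta=None):
    """Truncation level D needed to preserve mean-square order gamma
    at step size h, for Algorithm 1 and both bounds for Algorithm 2."""
    D_alg1 = math.ceil(h ** (1.0 - 2.0 * gamma))                       # D >= h^{1-2 gamma}
    D_alg2_K = math.ceil(h ** (0.5 - gamma) * math.sqrt(K * K * (K - 1)))
    out = {"alg1": D_alg1, "alg2_K": D_alg2_K}
    if min_eta is not None:                                            # eigenvalue-based bound
        out["alg2_eta"] = math.ceil(h ** (0.5 - gamma) / math.sqrt(min_eta))
    return out

# e.g. a scheme of order gamma = 1 (Milstein type) with h = 2^{-7}, K = 10
print(truncation_levels(h=2.0 ** -7, gamma=1.0, K=10, min_eta=2 ** -4))
```

For these illustrative parameters, Algorithm~1 needs $D$ of order $h^{-1}$, while the requirement for Algorithm~2 grows only like $h^{-1/2}$ but carries the $K$- or eigenvalue-dependent factor.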
\section{Proofs}\label{Sec:Proofs}
\subsection{Convergence for Algorithm~1}
\begin{proof}[Proof of Theorem~\ref{Algo1}]
We determine the error resulting from the
approximation of the iterated stochastic integral \eqref{DoubleIntSPDE} by Algorithm~1
which also contains the projection of the $Q$-Wiener process in \eqref{QSeriesK}.
Below, we employ error estimates of the following form several times, see also the proof
in \cite{MR3320928}.
It holds
\begin{equation}\label{ErrorWK}
\begin{split}
\mathrm{E}\bigg[\Big\|\int_{t}^{t+h} \Phi \, \mathrm{d}(W_s-W_s^K) \Big\|_H^2\bigg]
&= \mathrm{E}\bigg[\Big\| \sum_{j\in\mathcal{J}\setminus{\mathcal{J}_K} }
\int_{t}^{t+h}\Phi \sqrt{\eta_j} \tilde{e}_j \, \mathrm{d}\beta_s^j\Big\|_H^2\bigg] \\
& = \sum_{j\in\mathcal{J}\setminus{\mathcal{J}_K}}\eta_j\int_{t}^{t+h}
\mathrm{E}\Big[\big\|\Phi Q^{-\alpha}Q^{\alpha}\tilde{e}_j \big\|_H^2\Big]\,
\mathrm{d}s \\
& = \sum_{j\in\mathcal{J}\setminus{\mathcal{J}_K} }\eta_j^{2\alpha+1}\int_{t}^{t+h}
\mathrm{E}\Big[\big\|\Phi Q^{-\alpha}\tilde{e}_j \big\|_H^2\Big]\, \mathrm{d}s \\
&\leq \Big(\sup_{j\in \mathcal{J}\setminus\mathcal{J}_K}\eta_j\Big)^{2\alpha} \int_{t}^{t+h}
\mathrm{E}\Big[\sum_{j \in \mathcal{J}}\eta_j\big\|\Phi Q^{-\alpha}\tilde{e}_j \big\|_H^2\Big]\,
\mathrm{d}s \\
& = \Big(\sup_{j\in \mathcal{J}\setminus\mathcal{J}_K}\eta_j\Big)^{2\alpha}\int_{t}^{t+h}
\mathrm{E}\Big[\big\|\Phi Q^{-\alpha}\big\|_{L_{HS}(U_0,H)}^2\Big]\, \mathrm{d}s,
\end{split}
\end{equation}
where we used the expression
\begin{equation*}
\mathrm{d} (W_s-W_s^K)
= \sum_{j\in\mathcal{J}\setminus{\mathcal{J}_K} }\sqrt{\eta_j}\tilde{e}_j\, \mathrm{d}\beta_s^j
\end{equation*}
for all $s\in[0,T]$, $K\in\mathbb{N}$ in the first step.
We fix some arbitrary $h>0$, $t, t+h \in [0,T]$, and $K\in\mathbb{N}$ throughout the proof
and decompose the error into several parts
\begin{equation}\label{SplitDouble}
\begin{split}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi\Big( \int_t^s \Phi \,\mathrm{d}W_r\Big) \,\mathrm{d}W_s
- \sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi\big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg]\\
& \leq C\bigg(\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r\Big) \,\mathrm{d}W_s
-\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s\Big\|_H^2\bigg] \\
&\quad +\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s
-\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K\Big\|_H^2\bigg] \\
&\quad+\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg]\bigg).
\end{split}
\end{equation}
For now, we neglect the last term in \eqref{SplitDouble} and estimate the other parts.
By It\^{o}'s isometry, the properties (A1) and (A2) of the operators $\Phi $, $\Psi $,
and estimate \eqref{ErrorWK}, we get
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r\Big) \,\mathrm{d}W_s
-\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s\Big\|_H^2\bigg] \nonumber\\
&\quad+\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s
-\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K\Big\|_H^2\bigg] \nonumber\\
& \leq \int_t^{t+h} \mathrm{E}\bigg[\Big\|\Psi
\Big( \int_t^s \Phi \,\mathrm{d}\big(W_r-W_r^K\big)\Big)
\Big\|_{L_{HS}(U_0,H)}^2\bigg]\,\mathrm{d}s \nonumber\\
&\quad+\Big(\sup_{j\in\mathcal{J}\setminus \mathcal{J}_K} \eta_j\Big)^{2\alpha}
\int_t^{t+h} \mathrm{E}\bigg[\Big\|\Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big)Q^{-\alpha}
\Big\|_{L_{HS}(U_0,H)}^2\bigg] \,\mathrm{d}s \nonumber \\
& \leq C \int_t^{t+h} \mathrm{E}\bigg[\Big\| \int_t^s \Phi \,\mathrm{d}\big(W_r-W_r^K\big)
\Big\|_{H}^2\bigg]\,\mathrm{d}s
+\Big(\sup_{j\in\mathcal{J}\setminus \mathcal{J}_K} \eta_j\Big)^{2\alpha}
\int_t^{t+h} \mathrm{E}\bigg[\Big\|\int_t^s \Phi \,\mathrm{d}W_r^K\Big\|_{H}^2
\bigg] \,\mathrm{d}s \nonumber \\
& \leq C\Big(\sup_{j\in\mathcal{J}\setminus \mathcal{J}_K}
\eta_j\Big)^{2\alpha} \int_t^{t+h} \int_t^s
\mathrm{E}\Big[\big\| \Phi Q^{-\alpha}\big\|_{L_{HS}(U_0,H)}^2\Big]\,\mathrm{d}r\,\mathrm{d}s
+ \Big(\sup_{j\in\mathcal{J}\setminus \mathcal{J}_K} \eta_j\Big)^{2\alpha}
\int_t^{t+h} \int_t^s C \,\mathrm{d}r \,\mathrm{d}s.\nonumber
\end{align*}
Finally, assumption (A1) yields
\begin{align}\label{W-WK}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r\Big) \,\mathrm{d}W_s
-\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s\Big\|_H^2\bigg] \nonumber\\
&\quad+\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s
-\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K\Big\|_H^2\bigg] \nonumber\\
& \leq C\Big(\sup_{j\in\mathcal{J}\setminus \mathcal{J}_K} \eta_j\Big)^{2\alpha} h^2.
\end{align}
Now, we concentrate on the last term in \eqref{SplitDouble}; this part also proves
Corollary~\ref{Algo1Lemma}.
We get
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big)\, \mathrm{d}W_s^K
-\sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg] \\
&= \mathrm{E}\bigg[\Big\| \sum_{i,j \in \mathcal{J}_K} I_{(i,j)}^Q(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)
-\sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg] \\
&= \sum_{i,j \in \mathcal{J}_K} \mathrm{E}\Big[\big(I_{(i,j)}^Q(h)
-\bar{I}_{(i,j)}^{Q,(D)}(h)\big)^2\Big]\big\|\Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\big\|_H^2
\end{align*}
as $\mathrm{E}\Big[ \big(I_{(i,j)}^Q(h)-\bar{I}_{(i,j)}^{Q,(D)}(h) \big)
\big(I_{(k,l)}^Q(h)-\bar{I}_{(k,l)}^{Q,(D)}(h) \big) \Big] =0$ for all
$i,j,k,l \in\mathcal{J}_K$ with $(i,j) \neq (k,l)$, $K\in\mathbb{N}$,
see~\cite{MR1214374}.
By assumptions (B1) and (B2), we obtain
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big)\, \mathrm{d}W_s^K
-\sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg] \\
&\leq \sum_{i,j \in \mathcal{J}_K} \mathrm{E}\Big[\big( I_{(i,j)}^Q(h)
-\bar{I}_{(i,j)}^{Q,(D)}(h)\big)^2\Big]\big\|\Psi\big\|_{L(H,L(U,H))}^2 \big\|\Phi\big\|_{L(U,H)}^2 \\
&\leq C \sum_{i,j \in \mathcal{J}_K} \mathrm{E}\Big[\big( I_{(i,j)}^Q(h)
-\bar{I}_{(i,j)}^{Q,(D)}(h)\big)^2\Big].
\end{align*}
Due to the relations \eqref{AandI1}--\eqref{AandI3}, it suffices to examine $\tilde{A}^Q(h)$
and $\tilde{A}^{Q,(D)}(h)$, which implies
\begin{align}\label{ErrorGleich}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big)\, \mathrm{d}W_s^K
-\sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg] \nonumber\\
&\leq 2C \sum_{i=1}^L \mathrm{E}\Big[\big( \tilde{A}_{(i)}^Q(h)
-\tilde{A}_{(i)}^{Q,(D)}(h)\big)^2\Big].
\end{align}
By \eqref{DoubleLevy}, \eqref{AreaIntKP}, and the properties of $a_r^j$, $b_r^j$ for $r\in\mathbb{N}_0$,
$j \in\mathcal{J}_K$, $K\in\mathbb{N}$, we obtain
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big)\, \mathrm{d}W_s^K
-\sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg] \nonumber\\
& \leq 2C \sum_{\substack{i,j \in \mathcal{J}_K \\ i<j}}
\mathrm{E}\bigg[\Big( \pi \sum_{r=D+1}^{\infty} r
\Big(a_r^i \Big( b^j_r-\frac{1}{\pi r} \Delta\w_h^j \Big)
- \Big(b^i_r-\frac{1}{\pi r} \Delta\w_h^i \Big) a^j_r \Big) \Big)^2\bigg] \\
&= 2C \pi^2 \sum_{\substack{i,j \in \mathcal{J}_K \\ i<j}}
\sum_{r=D+1}^{\infty} r^2 \, \mathrm{E}\Big[\Big(a_r^i b^j_r - a_r^i \frac{1}{\pi r} \Delta \w_h^j \Big)^2
+ \Big(b^i_r a^j_r - \frac{1}{\pi r} \Delta \w_h^i a^j_r \Big)^2 \Big] \\
&= 3 C \frac{h^2}{\pi^2} \sum_{\substack{i,j \in \mathcal{J}_K \\ i<j}}
\eta_i \eta_j \sum_{r=D+1}^{\infty} \frac{1}{r^2}
\leq 3C \frac{h^2}{\pi^2} \, (\tr Q)^2 \sum_{r=D+1}^{\infty} \frac{1}{r^2}
\end{align*}
for all $D\in\mathbb{N}$.
As in \cite{MR1178485}, we finally estimate
\begin{equation*}
\sum_{r=D+1}^{\infty}\frac{1}{r^2} \leq \int_D^{\infty}\frac{1}{s^2}\,\mathrm{d}s =\frac{1}{D}
\end{equation*}
and, in total, we obtain for this part
\begin{align}\label{ErrorDoubleK}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big)\, \mathrm{d}W_s^K
-\sum_{i,j \in \mathcal{J}_K} \bar{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big) \Big\|_H^2\bigg]
\leq 3C \,(\tr Q )^2 \frac{h^2}{D\pi^2}
\end{align}
for all $h>0$, $t, t+h\in[0,T]$, $D,K\in\mathbb{N}$.
\end{proof}
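The elementary tail estimate $\sum_{r=D+1}^{\infty} r^{-2} \leq 1/D$ used at the end of this proof is easy to confirm numerically. The following Python sketch is an illustration only, not part of the argument; it evaluates a long partial tail sum, which underestimates the full tail, so the check is a conservative sanity test rather than a proof:

```python
import numpy as np

def tail_sum(D, power, R=1_000_000):
    # partial tail sum_{r=D+1}^{R} r^{-power}; the neglected remainder
    # beyond R is positive, so the partial sum underestimates the tail
    r = np.arange(D + 1, R + 1, dtype=np.float64)
    return float(np.sum(r ** (-power)))

for D in (1, 5, 50, 500):
    assert tail_sum(D, 2) <= 1.0 / D
```

For $D=1$ the tail equals $\pi^2/6 - 1 \approx 0.6449$, comfortably below the bound $1$.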
\subsection{Cholesky Decomposition}
\begin{proof}[Proof of Theorem~\ref{CholeskyDecomp}]
It holds $\SinftyQ = \tilde{Q}_K \Sinfty \tilde{Q}_K^T$, where $\Sinfty$
is given by \eqref{SigmaInf} for $Q_K=I_K$, and
$\Delta \w_h^I = Q_K^{-1/2} \Delta \w_h^Q = \sqrt{h} V$.
We claim that
\begin{align*}
\sqrt{\SinftyQ}
&= \tilde{Q}_K \frac{\Sinfty+2\sqrt{1+V^T V}\, I_{L} }
{\sqrt{2}\big(1+\sqrt{1+V^T V}\big)}
= \tilde{Q}_K \frac{\Sinfty+2\sqrt{1+\frac{1}{h} {\Delta \w_h^I}^T \Delta \w_h^I
}\, I_{L} }
{\sqrt{2}\Big(1+\sqrt{1+\frac{1}{h} {\Delta \w_h^I}^T \Delta \w_h^I
}\Big)}
\end{align*}
holds and compute for $a := \sqrt{1+\frac{1}{h} {\Delta \w_h^I}^T \Delta \w_h^I}$
the expression
\begin{align*}
&\sqrt{\SinftyQ} \sqrt{\SinftyQ}^T
= \frac{\tilde{Q}_K \Sinfty \big(\Sinfty \big)^T\tilde{Q}_K^T
+2a\tilde{Q}_K \Sinfty \tilde{Q}_K^T
+2a\tilde{Q}_K \big( \Sinfty \big)^T\tilde{Q}_K^T
+4 a^2\tilde{Q}_K\tilde{Q}_K^T} {2(1+a)^2}\\
&= \frac{\tilde{Q}_K \Sinfty \big( \Sinfty \big)^T\tilde{Q}_K^T
-(2+2a^2)\tilde{Q}_K \Sinfty \tilde{Q}_K^T
+4 a^2\tilde{Q}_K\tilde{Q}_K^T} {2(1+a)^2}
+ \frac{2+4a+2a^2}{2(1+a)^2} \tilde{Q}_K \Sinfty \tilde{Q}_K^T \\
&= \frac{\tilde{Q}_K \Sinfty \big( \Sinfty \big)^T\tilde{Q}_K^T
-(2+2a^2)\tilde{Q}_K \Sinfty \tilde{Q}_K^T
+4 a^2\tilde{Q}_K \tilde{Q}_K^T} {2(1+a)^2}
+ \SinftyQ.
\end{align*}
The idea in \cite{MR1843055} is to show that the first term (which differs slightly from its
counterpart in \cite{MR1843055}) vanishes, i.e.,
\begin{align*}
\tilde{Q}_K \big(\Sinfty \big(\Sinfty \big)^T
-(2+2a^2) \Sinfty +4 a^2 I_L \big)\tilde{Q}_K^T &= 0_{L\times L}
\\
\Leftrightarrow \quad \Sinfty\big(\Sinfty\big)^T
-(2+2a^2)\Sinfty+4 a^2 I_L &= 0_{L\times L},
\end{align*}
which proves that the expression for $\sqrt{\SinftyQ} $ is correct.
In the proof of Theorem 4.1 in \cite{MR1843055}, the author shows
\begin{equation*}
\Sinfty\big(\Sinfty\big)^T
-(2+2a^2)\Sinfty +4 a^2 I_L = 0_{L\times L},
\end{equation*}
arguing via the eigenvalues and the minimal polynomial associated with this equation. We do not repeat these
ideas here but refer to \cite{MR1843055} for further details.
\end{proof}
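For $Q_K = I_K$, the two identities at the heart of this proof -- the minimal-polynomial relation for $\Sinfty$ and the resulting closed-form square root -- can be verified numerically. The following Python sketch is an illustration only; it assembles $\Sinfty$ entrywise from the case distinction derived later in the proof of Theorem~\ref{Algo2Alternative}, specialized to $\eta_i = 1$ and $h = 1$ (this specialization, and the row ordering of the pairs, are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
V = rng.standard_normal(K)          # V = h^{-1/2} Delta w_h^I for Q_K = I_K
pairs = [(i, j) for i in range(K) for j in range(i + 1, K)]
L = len(pairs)                      # L = K(K-1)/2
a = np.sqrt(1.0 + V @ V)            # a = sqrt(1 + V^T V)

# assemble Sigma_infinity entrywise (identity noise, h = 1)
S = np.zeros((L, L))
for k, (i, j) in enumerate(pairs):
    S[k, k] = 2.0 + 2.0 * V[j] ** 2 + 2.0 * V[i] ** 2
    for l, (m, n) in enumerate(pairs):
        if k == l:
            continue
        if i == m and j != n:
            S[k, l] = 2.0 * V[j] * V[n]
        elif i == n and j != m:
            S[k, l] = -2.0 * V[j] * V[m]
        elif j == m and i != n:
            S[k, l] = -2.0 * V[i] * V[n]
        elif j == n and i != m:
            S[k, l] = 2.0 * V[i] * V[m]

I_L = np.eye(L)
# minimal-polynomial identity from the proof
assert np.allclose(S @ S - (2.0 + 2.0 * a**2) * S + 4.0 * a**2 * I_L, 0.0)

# closed-form square root and verification
sqrtS = (S + 2.0 * a * I_L) / (np.sqrt(2.0) * (1.0 + a))
assert np.allclose(sqrtS @ sqrtS.T, S)
```

The quadratic relation encodes that every eigenvalue of $\Sinfty$ lies in $\{2, 2a^2\}$, which is why the affine expression $(S + 2aI_L)/(\sqrt{2}(1+a))$ squares back to $S$.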
\subsection{Convergence for Algorithm~2}
\begin{proof}[Proof of Theorem~\ref{Algo2}]
We split the error term as in the proof of Theorem~\ref{Algo1},
see equation \eqref{SplitDouble},
and obtain the same expression \eqref{W-WK}
from the approximation of the $Q$-Wiener process
by $(W^K_{t})_{t\in[0,T]}$, $K\in\mathbb{N}$.
Further, we get as in equation \eqref{ErrorGleich}
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \\
&\leq 2 C \sum_{i=1}^L\mathrm{E}\Big[\big(\tilde{A}_{(i)}^Q(h)
-\hat{A}_{(i)}^{Q,(D)}(h)\big)^2\Big]
\end{align*}
for all $h>0$, $t, t+h\in[0,T]$, $K\in\mathbb{N}$.
The following part also proves Corollary~\ref{Algo2Lemma}.
Let $\|\cdot\|_F$ denote the Frobenius norm.
With the expressions for $\tilde{R}^{Q,(D)}(h)$ in \eqref{ApproxRemainder},
with $\Sigma^{Q,(D)} = \tilde{Q}_K \Sigma^{I,(D)} \tilde{Q}_K^T$,
$\SinftyQ = \tilde{Q}_K \Sinfty \tilde{Q}_K^T$
where $\Sigma^{I,(D)}$, $\Sinfty$ are given by \eqref{SigmaD} and \eqref{SigmaInf} for $Q_K=I_K$, respectively,
and the definition of the algorithm \eqref{ApproxA}, we obtain
\begin{align} \label{Eqn-Frobenius}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \nonumber \\
& \leq2C \sum_{i=1}^L \mathrm{E}\bigg[\Big( \Big( \tilde{R}^{Q,(D)}(h)
-\frac{h}{2\pi} \Big(\sum_{r=D+1}^{\infty} \frac{1}{r^2}\Big)^{\frac{1}{2}}
\sqrt{\SinftyQ}\Upsilon^{Q,(D)}\Big)_{(i)}\Big)^2\bigg]\nonumber\\
&=2C \sum_{i=1}^L \mathrm{E}\bigg[\Big( \Big(\frac{h}{2\pi}
\Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}
H_K \SigmaCondQ H_K^T \Big)^{\frac{1}{2}}\Upsilon^{Q,(D)} \nonumber \\
&\quad -\frac{h}{2\pi} \Big(\sum_{r=D+1}^{\infty}
\frac{1}{r^2}\Big)^{\frac{1}{2}} \tilde{Q}_K
\sqrt{\Sinfty} \Upsilon^{Q,(D)}\Big)_{(i)}\Big)^2\bigg]\nonumber\\
&= \frac{Ch^2}{2\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)
\sum_{i=1}^L\nonumber\\
&\quad \cdot \mathrm{E}\Bigg[\bigg( \bigg(\Big( \, \Big(
\sum_{r=D+1}^{\infty}\frac{1}{r^2} \Big)^{-\frac{1}{2}} \tilde{Q}_K
\Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2} H_K \SigmaICond H_K^T\Big)^{\frac{1}{2}}
-\tilde{Q}_K \sqrt{\Sinfty} \Big) \Upsilon^{Q,(D)}\bigg)_{(i)}\bigg)^2\Bigg]
\nonumber\\
&= C \frac{h^2}{2\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)
\sum_{i=1}^L\mathrm{E}\Big[\Big( \Big( \Big( \tilde{Q}_K \big(\sqrt{\Sigma^{I,(D)}}
-\sqrt{\Sinfty}\big) \Big)\Upsilon^{Q,(D)}\Big)_{(i)}\Big)^2\Big] \nonumber \\
&= C \frac{h^2}{2\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)
\sum_{i=1}^L \mathrm{E}\bigg[\mathrm{E}\Big[\Big( \Big( \Big(\tilde{Q}_K \big(\sqrt{\Sigma^{I,(D)}}
-\sqrt{\Sinfty} \big) \Big)
\Upsilon^{Q,(D)}\Big)_{(i)}\Big)^2\Big\vert Z^Q,\Delta \w_h^Q\Big]\bigg]\nonumber \\
&= C \frac{h^2}{2\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)
\,\mathrm{E}\Big[\big\| \tilde{Q}_K \big(\sqrt{\Sigma^{I,(D)}}
-\sqrt{\Sinfty}\big) \big\|_F^2\Big]
\end{align}
for all $h>0$, $t, t+h\in[0,T]$, $D,K\in\mathbb{N}$. Here, we used the fact that
${\Upsilon^{Q,(D)}}_{|Z^Q,\Delta \w_h^Q}\sim N(0_{L}, I_{L})$ for $h>0$, $D,L\in\mathbb{N}$
and that $\tilde{Q}_K$ is a diagonal matrix.
Precisely, for $G:= \sqrt{\Sigma^{I,(D)}}-\sqrt{\Sinfty}$
with entries $G = (g_{ij})_{1\leq i,j\leq L}$ and
$\Upsilon^{Q,(D)} = (\Upsilon^{Q,(D)}_j)_{1\leq j\leq L}$, we compute
\begin{align*}
&\sum_{i=1}^L \mathrm{E} \bigg[ \mathrm{E} \Big[ \Big( \Big( \Big( \tilde{Q}_K \big(\sqrt{\Sigma^{I,(D)}}
-\sqrt{\Sinfty} \big) \Big)\Upsilon^{Q,(D)} \Big)_{(i)} \Big)^2
\Big\vert Z^Q,\Delta \w_h^Q\Big]\bigg]\\
&= \sum_{i=1}^L \mathrm{E} \bigg[ \mathrm{E} \Big[ \Big( \big( \tilde{Q}_K G
\Upsilon^{Q,(D)} \big)_{(i)} \Big)^2
\Big\vert Z^Q,\Delta \w_h^Q\Big]\bigg] \\
&= \sum_{i=1}^L \mathrm{E}\bigg[\mathrm{E}\Big[
\Big( \sum_{j=1}^L (\tilde{Q}_K)_{ii} \, g_{ij} \, \Upsilon^{Q,(D)}_j
\Big)^2 \Big\vert Z^Q,\Delta \w_h^Q\Big]\bigg] \\
&= \sum_{i,j=1}^L \mathrm{E}\Big[ (\tilde{Q}_K)_{ii}^2 \, g_{ij}^2 \Big]
= \mathrm{E}\Big[ \big\| \tilde{Q}_K G \big\|_F^2\Big] \\
&= \mathrm{E} \Big[ \big\| \tilde{Q}_K \big( \sqrt{\Sigma^{I,(D)}}
-\sqrt{\Sinfty}\big) \big\|_F^2\Big].
\end{align*}
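The identity used here, $\mathrm{E}\big[\|\tilde{Q}_K G\, \Upsilon^{Q,(D)}\|^2\big] = \mathrm{E}\big[\|\tilde{Q}_K G\|_F^2\big]$ for $\Upsilon^{Q,(D)}$ conditionally standard normal, is a standard Gaussian fact. The following Python sketch (an illustration only, with an arbitrary fixed matrix in place of $G$ and a diagonal matrix of our choosing in place of $\tilde{Q}_K$) confirms it by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4
G = rng.standard_normal((L, L))          # stands in for sqrt(Sigma^{I,(D)}) - sqrt(Sigma^infty)
Qt = np.diag(rng.uniform(0.5, 2.0, L))   # stands in for the diagonal matrix \tilde{Q}_K
M = Qt @ G

N = 200_000
Y = rng.standard_normal((N, L))          # N i.i.d. samples of Upsilon ~ N(0, I_L)
mc = float(np.mean(np.sum((Y @ M.T) ** 2, axis=1)))  # Monte Carlo estimate of E[||M Y||^2]
frob2 = float(np.sum(M ** 2))            # ||M||_F^2 = trace(M M^T)

assert abs(mc - frob2) / frob2 < 0.02
```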
In order to relate to the proof in \cite{MR1843055}, we write
\begin{align*}
\mathrm{E} \Big[ \big\| \tilde{Q}_K \big(\sqrt{\Sigma^{I,(D)}}
-\sqrt{\Sinfty}\big) \big\|_F^2\Big]
&=\mathrm{E} \Big[ \sum_{i,j=1}^L (\tilde{Q}_K)_{ii}^2 g_{ij}^2\Big]
\leq \max_{1\leq i \leq K} \eta_i^2 \, \mathrm{E}\Big[ \sum_{i,j=1}^L g_{ij}^2\Big]
\\
&\leq \max_{1\leq i \leq K} \eta_i^2 \, \mathrm{E}\Big[\big\| \sqrt{\Sigma^{I,(D)}}-\sqrt{\Sinfty}\big\|_F^2\Big].
\end{align*}
In total, we obtain
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \\
& \leq C \max_{1\leq i \leq K} \eta_i^2 \, \frac{h^2}{2\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)
\,\mathrm{E}\Big[\big\| \sqrt{\Sigma^{I,(D)}}-\sqrt{\Sinfty}\big\|_F^2\Big].
\end{align*}
Now, we can insert the results obtained in the proofs of \cite[Theorem~4.1, Theorem~4.2, Theorem~4.3]{MR1843055};
this yields
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \\
& \leq C \max_{1\leq i \leq K} \eta_i^2 \,
\frac{h^2 K(K-1) \big(K+4
\mathrm{E}\big[V^T V\big]
\big)}{12 \pi^2 D^2}
\leq C \frac{5 h^2 K^2 (K-1)}{12 \pi^2 D^2}
\end{align*}
for all $h>0$, $t, t+h\in[0,T]$, $D,K\in\mathbb{N}$ where $V = h^{-1/2} Q_K^{-1/2} \Delta \w_h^Q$.
\end{proof}
\vspace{0.5cm}
\begin{proof}[Proof of Theorem~\ref{Algo2Alternative}]
We split the error term as in the proof of Theorem~\ref{Algo1} and Theorem~\ref{Algo2},
see equation \eqref{SplitDouble},
and obtain the same expression \eqref{W-WK}
from the approximation of the $Q$-Wiener process
by $(W^K_{t})_{t\in[0,T]}$, $K\in\mathbb{N}$.
Moreover, as in the previous proof, we get from \eqref{Eqn-Frobenius} that
\begin{align}\label{SqrtSigma}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s\Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \nonumber \\
& \leq C \frac{h^2}{2\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)
\,\mathrm{E}\Big[\big\| \sqrt{\SigmaDQ}-\sqrt{\SinftyQ} \big\|_F^2\Big]
\end{align}
for all $h>0$, $t, t+h\in[0,T]$, $D,K\in\mathbb{N}$.
In this alternative proof, we consider the elements of the matrices $\SigmaDQ$ and $\SinftyQ$ explicitly.
Therefore, we define the index set of interest as
$\mathcal{I}_A = ((1,2),\ldots,(1,K),\ldots,(l,l+1),\ldots,(l,K),\ldots,(K-1,K))
= (I_1,\ldots,I_{L})$ which selects the same entries of some matrix as the matrix transformation
by $H_K$ given in \eqref{SelectionMatrix}.
The $L \times L$-matrix $H_K \Sigma^Q(V_1^Q)_{|Z_1^Q,\Delta\w_h^Q} H_K^T $
has entries of type
\begin{align*}
& \mathrm{E}\Big[ \big( U_{1i}^Q(Z_{1j}^Q
-\sqrt{\frac{2}{h}}\Delta\w^j_h)
-(Z_{1i}^Q-\sqrt{\frac{2}{h}}\Delta\w^i_h)U_{1j}^Q \big) \\
&\quad \cdot
\big( U_{1m}^Q(Z_{1n}^Q-\sqrt{\frac{2}{h}}\Delta\w^n_h)
-(Z_{1m}^Q-\sqrt{\frac{2}{h}}\Delta\w^m_h)U_{1n}^Q \big)
\Big|Z_1^Q,\Delta \w^Q_h \Big]
\end{align*}
for some $i,j,m,n \in \{1, \ldots, K\}$ with $i<j$ and $m<n$.
In particular, its diagonal entries are of the form
\begin{equation*}
\eta_{i}\Big(Z_{1j}^Q
-\sqrt{\frac{2}{h}}\Delta \w^{j}_h\Big)^2
+ \eta_{j}\Big(Z_{1i}^Q-\sqrt{\frac{2}{h}}\Delta \w^{i}_h\Big)^2
\end{equation*}
with $(i,j) \in \mathcal{I}_A$ and $i \neq j$.
The off-diagonal entries of the matrix $H_K \Sigma^Q(V_1^Q)_{|Z_1^Q,\Delta\w_h^Q} H_K^T$
are of the form
\begin{align*}
&\mathrm{E}\Big[\big(U_{1i}^Q(Z_{1j}^Q
-\sqrt{\frac{2}{h}}\Delta\w^j_h)
-(Z_{1i}^Q-\sqrt{\frac{2}{h}}\Delta\w^i_h)U_{1j}^Q\big)\\ & \qquad \cdot
\big(U_{1m}^Q(Z_{1n}^Q-\sqrt{\frac{2}{h}}\Delta\w^n_h)
-(Z_{1m}^Q-\sqrt{\frac{2}{h}}\Delta\w^m_h)U_{1n}^Q\big)\Big|Z_1^Q,\Delta \w^Q_h\Big]\\
&= \left\{ \begin{matrix*}[l]
\quad 0 , & i,j \notin \{m,n\} \\
\quad \eta_i \big(Z_{1j}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^j\big)
\big(Z_{1n}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^n\big), & i = m, \, j \neq n \\
-\eta_i\big(Z_{1j}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^j\big)
\big(Z_{1m}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^m\big), & i=n, \, j \neq m \\
-\eta_j\big(Z_{1i}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^i\big)
\big(Z_{1n}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^n\big), & j=m, \, i \neq n \\
\quad \eta_j\big(Z_{1i}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^i\big)
\big(Z_{1m}^Q-\sqrt{\frac{2}{h}}\Delta \w_h^m\big), & j=n, \, i \neq m
\end{matrix*} \right.
\end{align*}
with $i,j,m,n \in \{1,\ldots,K\}$, $i < j$ and $m < n$.
From this, it is easy to see that for
$\SinftyQ = \mathrm{E}\Big[H_K \Sigma^Q(V_1^Q)_{|Z_1^Q,\Delta \w_h^Q} H_K^T
\Big|\Delta \w_h^Q\Big]$,
we get
\begin{align*}
\big(\SinftyQ\big)_{(k,k)} = 2\eta_{i}\eta_{j} +\frac{2}{h}\eta_{i}(\Delta\w_h^{j})^2
+\frac{2}{h}\eta_{j}(\Delta\w_h^{i})^2, \qquad (i,j) = I_k \in \mathcal{I}_A,
\end{align*}
and for the off-diagonal entries, it holds
\begin{align*}
\big(\SinftyQ\big)_{(k,l)} = \left\{ \begin{matrix*}[l]
0 , & i,j \notin \{m,n\} \\
\frac{2}{h}\eta_i
\Delta\w_h^j\Delta\w_h^n, & i= m, \, j \neq n \\
- \frac{2}{h}\eta_i
\Delta\w_h^j\Delta\w_h^m, & i=n, \, j \neq m \\
-\frac{2}{h} \eta_j
\Delta\w_h^i\Delta\w_h^n, & j=m, \, i \neq n \\
\frac{2}{h} \eta_j
\Delta\w_h^i\Delta\w_h^m ,& j=n, \, i \neq m
\end{matrix*} \right.
\end{align*}
with
$k,l\in\{1,\ldots, L\}$, $l \neq k$, where $(i,j) = I_k$, $(m,n) = I_l$, and
$i,j,m,n\in\{1,\ldots,K\}$, $i < j$, $m < n$.
Next, we employ the following lemma
from \cite{MR1843055} in order to rewrite \eqref{SqrtSigma}.
\begin{lma} \label{CholEV}
Let $A$ and $G$ be symmetric positive definite matrices and denote the smallest eigenvalue
of the matrix $G$ by $\lambda_{\min}$. Then, it holds
\begin{equation*}
\|A^{\frac{1}{2}}-G^{\frac{1}{2}}\|_F^2 \leq \frac{1}{\sqrt{\lambda_{\min}}} \|A-G\|_F^2.
\end{equation*}
\end{lma}
\begin{proof}[Proof of Lemma~\ref{CholEV}]
A proof can be found in \cite[Lemma 4.1]{MR1843055}.
\end{proof}
For simplicity, we assume $\eta_1\geq\eta_2\geq\ldots\geq\eta_K$ for all $K\in\mathbb{N}$.
We decompose $\SinftyQ$ as
\begin{equation*}
\SinftyQ= 2\eta_{K-1}\eta_KI_{L}+\widehat{\SinftyQ}
\end{equation*}
to determine its smallest eigenvalue.
The matrix $\widehat{\SinftyQ}$ is defined as follows: For the
diagonal elements, we get values
\begin{equation*}
\big(\widehat{\SinftyQ}\big)_{(k,k)} = \big(\SinftyQ\big)_{(k,k)}-2\eta_{K-1}\eta_K
= 2(\eta_i\eta_j-\eta_{K-1}\eta_K)+\frac{2}{h}\eta_i(\Delta \w_h^j)^2
+\frac{2}{h}\eta_j(\Delta \w_h^i)^2 \geq 0
\end{equation*}
with $k \in\{1,\ldots,L\}$, $(i,j) \in \mathcal{I}_A$, and $h>0$.
For the off-diagonal elements, we get $\big(\widehat{\SinftyQ}\big)_{(k,l)}
= \big({\SinftyQ}\big)_{(k,l)}$ for all $k,l \in\{1,\ldots,L\}$, $k\neq l$.
As the matrix $\widehat{\SinftyQ}$ is symmetric and positive semi-definite -- it is the sum of the
positive semi-definite diagonal matrix with entries $2(\eta_i\eta_j-\eta_{K-1}\eta_K)$ and $\frac{2}{h}$
times the covariance matrix of the Gaussian vector with entries
$U_{1i}^Q\Delta\w_h^j-\Delta\w_h^i U_{1j}^Q$, $(i,j)\in\mathcal{I}_A$ -- the smallest
eigenvalue $\lambda_{\min}$ of ${\SinftyQ}$ fulfills
$\lambda_{\min} \geq 2\eta_{K-1} \, \eta_K \geq 2\eta_K^2$.
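This eigenvalue bound can be checked numerically. The following Python sketch is an illustration only; it assumes $\Delta\w_h^i \sim N(0, h\eta_i)$, assembles $\SinftyQ$ from the entrywise formulas above, and compares its smallest eigenvalue with $2\eta_{K-1}\eta_K$:

```python
import numpy as np

rng = np.random.default_rng(2)
K, h = 4, 0.01
eta = np.sort(rng.uniform(0.1, 1.0, K))[::-1]   # eta_1 >= ... >= eta_K > 0
dw = rng.normal(0.0, np.sqrt(h * eta))          # Delta w_h^i ~ N(0, h * eta_i)
pairs = [(i, j) for i in range(K) for j in range(i + 1, K)]
L = len(pairs)

# assemble Sigma_infinity^Q entrywise from the case distinction above
S = np.zeros((L, L))
for k, (i, j) in enumerate(pairs):
    S[k, k] = 2*eta[i]*eta[j] + (2/h)*(eta[i]*dw[j]**2 + eta[j]*dw[i]**2)
    for l, (m, n) in enumerate(pairs):
        if k == l:
            continue
        if i == m and j != n:
            S[k, l] = (2/h) * eta[i] * dw[j] * dw[n]
        elif i == n and j != m:
            S[k, l] = -(2/h) * eta[i] * dw[j] * dw[m]
        elif j == m and i != n:
            S[k, l] = -(2/h) * eta[j] * dw[i] * dw[n]
        elif j == n and i != m:
            S[k, l] = (2/h) * eta[j] * dw[i] * dw[m]

lam_min = float(np.linalg.eigvalsh(S).min())
# eta is sorted descending, so eta[K-2], eta[K-1] are the two smallest
assert lam_min >= 2 * eta[K-2] * eta[K-1] - 1e-10
```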
Below, we use the notation $c_D = \sum_{r=D+1}^{\infty}\frac{1}{r^2}$ for legibility.
The matrices $\SigmaDQ$ and $\SinftyQ$
are symmetric positive definite. By Lemma~\ref{CholEV} and the
definitions of $\SigmaDQ$, $\SinftyQ$ in
\eqref{SigmaD} and \eqref{SigmaInf}, respectively,
we obtain from \eqref{SqrtSigma}
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \\
&\leq \frac{C h^2 c_D}{2\sqrt{2}\eta_K\pi^2}
\mathrm{E}\Big[\big\| \SigmaDQ-\SinftyQ \big\|_F^2\Big]\\
&= \frac{C h^2 c_D}{2\sqrt{2}\eta_K\pi^2}
\mathrm{E}\bigg[\Big\| c_D^{-1}
\sum_{r=D+1}^{\infty}\frac{1}{r^2} H_K \SigmaCondQ H_K^T
-\mathrm{E}\Big[ H_K \Sigma^Q(V_1^Q)_{|Z_1^Q,\Delta \w_h^Q} H_K^T \Big|\Delta \w_h^Q\Big] \Big\|_F^2\bigg]\\
&= \frac{C h^2 c_D}{2\sqrt{2} \eta_K\pi^2} \mathrm{E}\bigg[\Big\|
c_D^{-1}
\Big( \sum_{r=D+1}^{\infty} \tfrac{H_K \SigmaCondQ H_K^T}{r^2}
- \sum_{r=D+1}^{\infty} \tfrac{\mathrm{E}
\big[H_K \Sigma^Q(V_1^Q)_{|Z_1^Q,\Delta \w_h^Q} H_K^T \big| \Delta \w_h^Q\big]}{r^2} \Big)\Big\|_F^2\bigg]\\
&= \frac{C h^2 c_D^{-1}}{2\sqrt{2} \eta_K\pi^2} \sum_{k,l=1}^L
\mathrm{E}\Bigg[ \mathrm{E}\bigg[ \Big( \sum_{r=D+1}^{\infty}
\tfrac{H_K \SigmaCondQ H_K^T - \mathrm{E}\big[ H_K \Sigma^Q(V_1^Q)_{|Z_1^Q,\Delta \w_h^Q} H_K^T \big| \Delta \w_h^Q\big]}{r^2}
\Big)_{(k,l)}^2 \Big| \Delta \w_h^Q \bigg] \Bigg]
\end{align*}
for $h>0$, $t, t+h\in[0,T]$, $D,K \in\mathbb{N}$.
Following ideas from \cite{MR1843055}, we get
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \, \mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \\
&\leq C \frac{h^2}{2 \sqrt{2} \eta_K \pi^2} \Big(\sum_{r=D+1}^{\infty}
\frac{1}{r^2}\Big)^{-1} \sum_{k,l=1}^L
\mathrm{E}\Big[\operatorname{Var}\Big(\Big( \sum_{r=D+1}^{\infty}\frac{1}{r^2}
H_K \SigmaCondQ H_K^T \Big)_{(k,l)} \Big| \Delta \w_h^Q\Big)\Big] \\
&= C \frac{h^2}{2 \sqrt{2} \eta_K \pi^2} \Big(\sum_{r=D+1}^{\infty} \frac{1}{r^2}\Big)^{-1} \
\sum_{k,l=1}^L \sum_{r=D+1}^{\infty}\frac{1}{r^4}\mathrm{E}\Big[\operatorname{Var}
\Big( \Big(H_K \SigmaCondQ H_K^T \Big)_{(k,l)} \Big| \Delta \w_h^Q\Big)\Big] .
\end{align*}
Next, we compute the conditional variances involved in this estimate.
We insert the expressions detailed above for $H_K \SigmaCondQ H_K^T$,
$r\in\mathbb{N}$, and $\SinftyQ$
and split the sum into diagonal entries
and off-diagonal elements of the matrix.
This yields for $h>0$, $t, t+h\in[0,T]$, $D, L\in\mathbb{N}$
\begin{align}\label{EqProof1}
&\sum_{k,l=1}^L \sum_{r=D+1}^{\infty}\frac{1}{r^4} \mathrm{E}\Big[
\operatorname{Var}\Big( \Big(H_K \SigmaCondQ H_K^T \Big)_{(k,l)}\Big|\Delta \w_h^Q\Big)\Big]
\nonumber \\
&= \sum_{k=1}^L \sum_{r=D+1}^{\infty}\frac{1}{r^4}
\mathrm{E}\bigg[ \mathrm{E}\Big[ \Big(H_K \SigmaCondQ H_K^T - \SinftyQ\Big)^2_{(k,k)}
\Big| \Delta \w_h^Q \Big]\bigg] \nonumber \\
&\quad +\sum_{\substack{k,l=1 \\ k\neq l}}^L \sum_{r=D+1}^{\infty}\frac{1}{r^4}
\mathrm{E}\bigg[ \mathrm{E}\Big[ \Big(H_K \SigmaCondQ H_K^T - \SinftyQ \Big)^2_{(k,l)}
\Big| \Delta \w_h^Q\Big]\bigg] \nonumber \\
&= \sum_{\substack{i,j \in \mathcal{J}_K \\ i<j}} \sum_{r=D+1}^{\infty}\frac{1}{r^4}
\bigg(\mathrm{E}\bigg[ \mathrm{E}\Big[\Big(\eta_i\Big((Z_{rj}^Q)^2 - 2Z_{rj}^Q\sqrt{\frac{2}{h}}
\Delta\w_h^j\Big) \nonumber \\
&\quad \quad +
\eta_j\Big((Z_{ri}^Q)^2 - 2Z_{ri}^Q\sqrt{\frac{2}{h}}\Delta\w_h^i\Big)-2\eta_i\eta_j\Big)^2
\Big|\Delta \w_h^Q\Big]\bigg]\bigg) \nonumber \\
&\quad +\sum_{\substack{i,j,m,n\in\mathcal{J}_K \\ i<j; \, m<n}} \sum_{r=D+1}^{\infty}\frac{1}{r^4}
\bigg(\mathrm{E}\bigg[ \mathrm{E}\Big[\eta_i^2\Big( Z_{rj}^QZ_{rm}^Q - Z_{rj}^Q\sqrt{\frac{2}{h}}
\Delta\w_h^m - Z_{rm}^Q\sqrt{\frac{2}{h}}\Delta\w_h^j\Big)^2
\mathds{1}_{i=n}\mathds{1}_{j\neq m} \nonumber \\
&\quad \quad + \eta_i^2 \Big(Z_{rj}^QZ_{rn}^Q - Z_{rj}^Q \sqrt{\frac{2}{h}}\Delta\w_h^n
- Z_{rn}^Q\sqrt{\frac{2}{h}}\Delta\w_h^j\Big)^2
\mathds{1}_{i=m}\mathds{1}_{j\neq n}\nonumber \\
&\quad \quad + \eta_j^2 \Big(Z_{ri}^QZ_{rm}^Q - Z_{ri}^Q \sqrt{\frac{2}{h}}\Delta\w_h^m
- Z_{rm}^Q\sqrt{\frac{2}{h}}\Delta\w_h^i\Big)^2
\mathds{1}_{j=n}\mathds{1}_{i\neq m} \nonumber \\
&\quad \quad + \eta_j^2\Big( Z_{ri}^QZ_{rn}^Q - Z_{ri}^Q \sqrt{\frac{2}{h}}\Delta\w_h^n
- Z_{rn}^Q\sqrt{\frac{2}{h}}\Delta\w_h^i\Big)^2
\mathds{1}_{j=m}\mathds{1}_{i\neq n}
\Big| \Delta \w_h^Q\Big]\bigg]\bigg).
\end{align}
We compute the terms in \eqref{EqProof1} separately and obtain
\begin{align*}
&\mathrm{E}\bigg[ \mathrm{E}\Big[\Big(\eta_i\Big((Z_{rj}^Q)^2
- 2Z_{rj}^Q\sqrt{\frac{2}{h}}
\Delta\w_h^j\Big)+
\eta_j\Big((Z_{ri}^Q)^2 - 2Z_{ri}^Q\sqrt{\frac{2}{h}}
\Delta\w_h^i\Big)-2\eta_i\eta_j\Big)^2
\Big|\Delta \w_h^Q\Big]\bigg]\\
&= \mathrm{E}\Big[ 3\eta_i^2\eta_j^2 +2\eta_i^2\eta_j^2
-4\eta_i^2\eta_j^2 +\frac{8}{h}\eta_i^2\eta_j
(\Delta\w_h^j)^2+3\eta_i^2\eta_j^2 -4\eta_i^2\eta_j^2
+\frac{8}{h}\eta_i\eta_j^2 (\Delta\w_h^i)^2
+4\eta_i^2\eta_j^2 \Big] \\
&= 20\eta_i^2\eta_j^2
\end{align*}
and
\begin{align*}
&\mathrm{E}\bigg[ \mathrm{E}\Big[\eta_i^2\Big( Z_{rj}^QZ_{rm}^Q
- Z_{rj}^Q \sqrt{\frac{2}{h}} \Delta\w_h^m
- Z_{rm}^Q \sqrt{\frac{2}{h}}\Delta\w_h^j \Big)^2
\mathds{1}_{i=n}\mathds{1}_{j\neq m}\Big|
\Delta \w_h^Q\Big]\bigg]\\
&= \mathrm{E}\Big[ \eta_i^2\Big(\eta_j\eta_m
+\frac{2}{h}\eta_j(\Delta\w_h^m)^2+\frac{2}{h}
\eta_m(\Delta \w_h^j)^2\Big) \mathds{1}_{i=n} \mathds{1}_{j\neq m}\Big]
= 5\eta_i^2\eta_j\eta_m \mathds{1}_{i=n}\mathds{1}_{j\neq m}
\end{align*}
for all $i,j,m,n \in \mathcal{J}_K$ with $i< j$ and $m<n$.
For the other terms of this type, we get similar results.
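The moment computations above can be cross-checked by simulation. The following Python sketch is an illustration only; it assumes $Z_{rj}^Q \sim N(0,\eta_j)$ and $\Delta\w_h^j \sim N(0,h\eta_j)$, all independent, estimates the first expectation by Monte Carlo, and compares it with $20\eta_i^2\eta_j^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
h, eta_i, eta_j = 0.1, 0.7, 0.3
N = 1_000_000
Zi = rng.normal(0.0, np.sqrt(eta_i), N)        # Z_{ri}^Q ~ N(0, eta_i)
Zj = rng.normal(0.0, np.sqrt(eta_j), N)        # Z_{rj}^Q ~ N(0, eta_j)
dwi = rng.normal(0.0, np.sqrt(h * eta_i), N)   # Delta w_h^i ~ N(0, h * eta_i)
dwj = rng.normal(0.0, np.sqrt(h * eta_j), N)   # Delta w_h^j ~ N(0, h * eta_j)
c = np.sqrt(2.0 / h)

X = (eta_i * (Zj**2 - 2.0 * Zj * c * dwj)
     + eta_j * (Zi**2 - 2.0 * Zi * c * dwi)
     - 2.0 * eta_i * eta_j)
mc = float(np.mean(X**2))                      # Monte Carlo estimate of the expectation
exact = 20.0 * eta_i**2 * eta_j**2

assert abs(mc - exact) / exact < 0.05
```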
Moreover, we compute bounds for the following expressions
\begin{align*}
\sum_{r=D+1}^{\infty} \frac{1}{r^4}
\leq \int_D^{\infty}\frac{1}{s^4}\,\mathrm{d}s = \frac{1}{3D^3},
\qquad
\sum_{r=D+1}^{\infty}\frac{1}{r^2}
\geq \int_{D+1}^{\infty}\frac{1}{s^2}\,\mathrm{d}s = \frac{1}{D+1}
\end{align*}
for all $D\in\mathbb{N}$. A combination of these estimates yields
\begin{align*}
\Big(\sum_{r=D+1}^{\infty}\frac{1}{r^4}\Big)
\Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)^{-1}
\leq \frac{D+1}{3D^3}\leq \frac{2}{3D^2}
\end{align*}
for all $D\in\mathbb{N}$. At this point, the main difference from Algorithm~1
arises -- we obtain a higher order of convergence in $D$.
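The two series bounds and their combination can be confirmed numerically; the following Python sketch is an illustration only and uses long partial tail sums:

```python
import numpy as np

def tail(D, p, R=1_000_000):
    # partial tail sum_{r=D+1}^{R} r^{-p}; the remainder beyond R is below
    # the slack in the bounds checked here, so the assertions are reliable
    r = np.arange(D + 1, R + 1, dtype=np.float64)
    return float(np.sum(r ** (-p)))

for D in (1, 3, 10, 100):
    assert tail(D, 4) <= 1.0 / (3.0 * D**3)
    assert tail(D, 2) >= 1.0 / (D + 1)
    assert tail(D, 4) / tail(D, 2) <= 2.0 / (3.0 * D**2)
```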
In total, we get
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h} \Psi
\Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \\
&\leq C \frac{h^2}{2 \sqrt{2} \eta_K\pi^2} \Big(\sum_{r=D+1}^{\infty}\frac{1}{r^2}\Big)^{-1}
\sum_{r=D+1}^{\infty}\frac{1}{r^4} \,\Bigg(
\sum_{\substack{i,j\in\mathcal{J}_K \\ i<j}} 20\eta_i^2\eta_j^2 \nonumber \\
&\quad +\sum_{\substack{i,j,m,n\in\mathcal{J}_K \\ i<j; \, m<n}}
5\bigg(\eta_i^2\eta_j\eta_m \mathds{1}_{i=n}\mathds{1}_{j\neq m} +
\eta_i^2\eta_j\eta_n \mathds{1}_{i=m}\mathds{1}_{j\neq n}
+\eta_j^2\eta_i\eta_m \mathds{1}_{j=n}\mathds{1}_{i\neq m}
+\eta_j^2\eta_i\eta_n \mathds{1}_{j=m}\mathds{1}_{i\neq n}\bigg)\Bigg)\\
&\leq C \frac{h^2}{2 \sqrt{2} \eta_K\pi^2} \frac{2}{3D^2}
\sum_{\substack{i,j\in\mathcal{J}_K \\ i<j}} \Big( 20\eta_i^2\eta_j^2
+ 10 \eta_i^2 \eta_j \sum_{\substack{m\in\mathcal{J}_K \\ m \neq j}} \eta_m
+ 10 \eta_j^2 \eta_i \sum_{\substack{m\in\mathcal{J}_K \\ m\neq i}}
\eta_m \Big).
\end{align*}
Finally, this implies for all $h>0$, $t, t+h\in[0,T]$, $D,K\in\mathbb{N}$
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h}
\Psi \Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg] \nonumber \\
&\leq C \frac{h^2}{2 \sqrt{2} \eta_K\pi^2} \frac{2}{3D^2}
\Big(20\Big(\sup_{j\in\mathcal{J}_K} \eta_j \Big)^2\big(\tr Q\big)^2
+ 20 \Big(\sup_{j\in\mathcal{J}_K} \eta_j\Big)
\big(\tr Q\big)^3 \Big)
\leq C_Q \frac{h^2}{\eta_K D^2},
\end{align*}
that is, more generally,
\begin{align*}
&\mathrm{E}\bigg[\Big\|\int_t^{t+h}
\Psi \Big( \int_t^s \Phi \,\mathrm{d}W_r^K\Big) \,\mathrm{d}W_s^K
- \sum_{i,j \in \mathcal{J}_K} \hat{I}_{(i,j)}^{Q,(D)}(h)
\; \Psi \big(\Phi \tilde{e}_i, \tilde{e}_j\big)\Big\|_H^2\bigg]
\leq C_Q \frac{h^2}{\big(\min_{j\in\mathcal{J}_K}\eta_j\big) D^2}.
\end{align*}
The statement of the theorem follows by combining this estimate with \eqref{W-WK}.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Much of the structure of quantum field theory (QFT) is predicated on the principle of locality.
Adherence to locality is pursuant to convictions rooted in relativity, and is achieved in QFT by the association of regions of spacetime with algebras of observables.
Although, by construction, the observables of QFT are local objects, one may also consider characterizing the spatial or spacetime features of a \emph{state}.
For example, if we have a single-particle state, how can we say that the particle is localized in a certain region of space?
It turns out that often such a characterization is obstructed by one of a collection of no-go theorems, which, for example, imply the absence of any suitable definition of a position operator or local number operators.
These difficulties seem to suggest that relativistic QFT cannot support an ontology in terms of localizable particles. The tension between the local nature of observables and the non-localizability of particle states has been studied in \cite{Colosi-Rovelli2008}.
Our aim here is to investigate relativity as the source of this tension, by examining the non-relativistic regime of a relativistic QFT.
What precisely are the difficulties with localized particle states in QFT?
The problem can be seen as a result of competing requirements that one would like to attribute to such states.
Perhaps the most basic requirement is that particles, despite not being able to carry labels, should be aggregable entities that can be counted, i.e., there should be an observable number operator acting on a corresponding Fock space.
Other important stipulations include that particles should persist in time (at least in free theories) and that the particle excitations (quanta) should exhibit the appropriate relativistic dispersion relation between mass, momentum, and energy \cite{Teller1997, Fraser2008}.
As Fraser writes:
\begin{quote}
Without the mass-energy relation, there would be no grounds for interpreting the eigenstates of total number operator $N$ as representing definite numbers of particles rather than merely more examples of discrete energy level states that are the hallmark of non-relativistic quantum mechanics. However, what special relativity gives, special relativity also takes away. For free systems, relativistic assumptions are required to obtain the result that quanta are not localizable in a finite region of space. \cite{FraserDraft}
\end{quote}
An instinctive means to characterize the localizability of a state is to concoct a position operator in QFT analogous to that of non-relativistic quantum mechanics (NRQM).
The spectral projections of such an operator can be used to determine the probability of finding the particle in a certain region of space, given by $|\psi(x)|^2$ for a state $\ket{\psi}$.
\updated{Unlike the momentum and energy operators, the Poincar\'e group does not naturally provide us with an operator that corresponds to a position observable.
Newton and Wigner \cite{newton1949localized} devised a position operator for elementary ``particles'' (i.e., irreducible representations of the Poincar\'e group).
However, it is well-known that this operator suffers from issues with superluminal signalling \cite{Fulling1989}.}
In a more general context, Malament's theorem \cite{Malament1996} demonstrates that the only projections satisfying a short list of reasonable requirements are trivial, suggesting that there cannot be any analogue of a position operator in a relativistic quantum theory.
Another possibility may be to construct a local number operator, in an attempt to count the number of particles in a particular region of space.
This is obstructed by a corollary to the Reeh-Schlieder theorem \cite{Reeh-Schlieder1961}, which has been argued is a result of entanglement in the vacuum state \cite{Redhead1995}.
A more operational approach to describe the localizability of a state in QFT is in terms of expectation values of observables restricted to particular regions of spacetime.
A theorem of Hegerfeldt \cite{Hegerfeldt1998a,Hegerfeldt1998b,Hegerfeldt1974,Halvorson2001} demonstrates that fixed particle number states localized in some region will have an effect on expectation values of observables in spacelike separated regions.
Here we will investigate whether the incompatibility of these requirements is placated in the non-relativistic regime of the theory, where one expects to recover localizable particle states.
Why should one insist on characterizing the localizability of fixed particle-number states?
There are both phenomenological and ontological aspects to this question.
Of course, ultimately it is well-known that the notion of particle is observer-dependent.
Despite this, there are situations where one may be inclined to describe localizable particle states for a fixed observer.
For instance, high energy `particle physics' experiments (e.g., in particle colliders) as well as low energy quantum information experiments (e.g., single photon detectors) demonstrate particle-like phenomenology that should be accounted for, perhaps using suitable detector models.
Nevertheless, a successful detector model capturing this particle-like phenomenology may evade---but cannot answer---the ontological aspects of the question.
Philosophers of QFT have offered proposals about why we are allowed `particle talk without particle ontology' \cite{Halvorson-Clifton2002}.
If relativistic QFT does not admit a particle ontology, it is fair to ask: what do particle detectors detect?
Looking towards low energies, one finds the widespread applicability of NRQM, a theory in which particle states are localizable by means of their wavefunction.
This seems to imply that NRQM can support a particle ontology, so it is natural to ask whether one can make contact between the NRQM description of particles and some appropriate notion in the latent QFT.
Admittedly, QFT and NRQM are very different theories, both at the dynamical and the kinematical level, and recovering features of one from the other cannot come without cost.
The undertaking of this paper will be to illuminate this connection, by starting with a relativistic QFT and making suitable approximations to recover features of NRQM.
Starting from a relativistic QFT, we wish to clarify the steps that must be taken for one to (partially) land on the somewhat safe and familiar ground of NRQM.
Can we recover a position operator or a local number operator in the non-relativistic regime?
These questions are based on the intuition that relativistic QFT, being the best known theory that combines quantum theory with special relativity, should `contain' its predecessor NRQM.
This expectation is because if QFT is indeed more fundamental than NRQM, then it needs to account for the experimentally-verified predictions of NRQM.
However, one needs to specify what is meant by the containment of NRQM within QFT, since it is far from straightforward which features of the predecessor theory can be retrieved.
Such a retentionist view is further obscured by the fact that different theories suggest different ontological commitments, which means that part of the fundamental ontology of a theory is dropped once the `next' theory is established.
In the present case, particle ontology is dropped when relativistic QFT succeeds NRQM.
This means that even if one is able to identify elements of the mathematical description of the previous theory, here a wavefunction or a position operator, the interpretation of these objects will be essentially different.
The main challenge is to identify these differences, even in cases where some mathematical elements are relatively easy to retrieve.
As we find, the recovery of NRQM from relativistic QFT is only approximate, and we will discuss how it differs from standard NRQM.
For a philosophical perspective see \cite{Wallace2001,myrvold2015wavefunction, Benjamin}, and for recent developments \cite{Padmanabhan2018}.
Another undertaking of this paper is to clarify whether \updated{ground state entanglement} is a feature of the relativistic regime of \updated{relativistic} QFT.
At first, this may seem unrelated to the issue of particle localizability, but the Reeh-Schlieder theorem provides this connection through the intuition expounded in \cite{Redhead1995,VazquezEtal2014}.
\updated{Also, the presence of entanglement in the ground state of a relativistic QFT is starkly manifested in the Unruh effect.
Since ground state entanglement is a necessary (although not sufficient) ingredient for both the Reeh-Schlieder theorem and the Unruh effect, these features would subside in the non-relativistic regime if the ground state entanglement were to vanish.}
\updated{Of course, there are many non-relativistic systems (e.g., in condensed matter) with ground states exhibiting entanglement.
The focus of the investigation here is whether ground state entanglement in a relativistic QFT depends on its relativistic nature.
Here we aim to address this by examining whether non-zero measures of this entanglement persist in the non-relativistic regime of the relativistic QFT.}
In Section~\ref{sec:background}, we review the different Hilbert spaces that one can associate to a free QFT and how they relate to each other.
The continuous tensor product that can naturally accommodate the infinite degrees of freedom of the QFT is suitably defined as a Fock space, both in momentum and in position space (the global and local Fock spaces, respectively).
We derive the Bogoliubov transformation between the local and the global degrees of freedom, which allows us to concretely compare two different localization schemes in QFT, which we refer to as `standard' and `non-relativistic'.
In Section~\ref{sec:NR_limit}, we describe our implementation of the non-relativistic approximation by means of a bandlimited theory.
We identify the subspace of the QFT Hilbert space where the approximation holds, in order to determine whether the non-relativistic localization scheme faithfully represents the local degrees of freedom in this subspace.
As we find, the answer is negative; nevertheless, it is by means of this scheme that we can recover features of NRQM in this subspace, as described in Section~\ref{sec:localizability}.
The cutoff we need to introduce to implement the non-relativistic approximation non-trivially affects the local degrees of freedom and the entanglement that they share.
In Section~\ref{sec:entanglement}, we quantify the remaining entanglement between these local degrees of freedom.
These considerations reduce to examining the expression for the $\beta$-coefficients in the Bogoliubov transformation between the local and global annihilation and creation operators.
Here we consider the non-relativistic dispersion relation as an approximation to the relativistic one, but many of the conclusions would qualitatively hold for quantum field theories with more general dispersion relations, as occur with condensed matter systems, sonic analogues \cite{Unruh1981,Unruh1995}, and quantum gravity inspired effective Planck-scale corrections \cite{AmelinoCameliaEtal1998,Gambini-Pullin1999,Magueijo-Smolin2002,Myers-Pospelov2003,GirelliEtal2007}.
\section{Background: Local and global Fock spaces}\label{sec:background}
In this section, we review two quantization schemes (called local and global) for a massive real Klein-Gordon field, as well as how these two schemes are related to each other.
This will serve to establish notation and to emphasize aspects which will be important for the subsequent discussion.
\subsection{The Hilbert space zoo of QFT}
The Hilbert space that is most commonly attributed to a free QFT is a Fock space.
If we aspire to describe a free QFT in terms of particles, and demand that particles are entities that can be counted by an observable number operator, then the full Hilbert space of the QFT should decompose into a direct sum of fixed particle number subspaces.
Explicitly, if the single-particle subspace is described by a Hilbert space, $\mathcal{H}$, then the corresponding Fock space is:
\begin{equation}
\mathcal{F}[\mathcal{H}] := \bigoplus_{n=0}^\infty ( \mathcal{H}^{\otimes n} )_{S,A} = \mathds{C} \oplus \mathcal{H} \oplus ( \mathcal{H}^{\otimes 2} )_{S,A} \oplus \cdots , \label{fock}
\end{equation}
where the subscripts $S$ and $A$ denote symmetrization and anti-symmetrization of the tensor product, for bosonic or fermionic particles respectively.
Given a particular QFT, one can build many different Fock spaces to use as the state space.
The number operators associated with each of these Fock spaces are used to count different entities.
Furthermore, it is a unique feature of quantum theories of infinite degrees of freedom that there exist unitarily inequivalent representations of the canonical commutation relations, due to the absence of an analogue of the Stone-von Neumann theorem \cite{Haag1992,Ruetsche2011}.
As a result, some of these Fock spaces may be unitarily inequivalent\footnote{When referring to unitary inequivalence of two Fock spaces, we do not mean that the two Hilbert spaces are not isomorphic (e.g., any two infinite-dimensional separable Hilbert spaces are isomorphic), but rather that the representations of the algebra generated by a set of operators satisfying the canonical commutation relations cannot be intertwined using a unitary operator.}, and the associated notions of counting will be incommensurable.
The typical starting point for concretely constructing the state space of a QFT is through the analogy with an infinite collection of harmonic oscillators, either in position or momentum space.
The corresponding Hilbert space is then formally defined as the infinite tensor product over this collection of harmonic oscillators.
The most naive quantization procedure for a massive real Klein-Gordon field begins by viewing the Hamiltonian written in position space as a collection of harmonic oscillators, labelled by the position $\boldsymbol{x}$ (in $n$ spatial dimensions) and coupled through the spatial derivative,
\begin{equation}\label{eq:ham}
H = \frac12 \int d\boldsymbol{x} \hspace{1mm} [ c^2 \Pi(\boldsymbol{x})^2 + ( \boldsymbol{\nabla} \Phi(\boldsymbol{x}) )^2 + k_c^2 \Phi(\boldsymbol{x})^2 ],
\end{equation}
where $k_c := mc/\hbar$ is the wavenumber associated with the Compton scale.
One can then proceed to construct a Fock space for each $\boldsymbol{x}$ by defining local annihilation (and corresponding creation) operators as,
\begin{equation}\label{eq:bx_defn}
\hat{b}_{\boldsymbol{x}} := \sqrt{\frac{m}{2\hbar^2}} \hat{\Phi}(\boldsymbol{x}) + \frac{i}{\sqrt{2m}} \hat{\Pi}(\boldsymbol{x}).
\end{equation}
These annihilation and creation operators are deemed local because they are labeled with the same point, $\boldsymbol{x}$, as the field operators, but also because they diagonalize the ``uncoupled'' terms in the Hamiltonian, i.e.,
\begin{equation}
\frac12 \int d\boldsymbol{x} \hspace{1mm} [ c^2 \hat{\Pi}(\boldsymbol{x})^2 + k_c^2 \hat{\Phi}(\boldsymbol{x})^2 ] = \frac12 \int d\boldsymbol{x} \hspace{1mm} mc^2 ( \hat{b}_{\boldsymbol{x}}^\dagger \hat{b}_{\boldsymbol{x}} + \hat{b}_{\boldsymbol{x}} \hat{b}_{\boldsymbol{x}}^\dagger ).
\end{equation}
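This identity can be checked at the level of classical (commuting) field values at a single point, since the symmetric ordering makes it hold identically. A minimal numerical sketch (the parameter values are arbitrary illustrative choices, with $\hbar$ kept explicit):

```python
import math
import random

# Arbitrary (illustrative) parameter values
hbar, m, c = 1.3, 0.7, 2.1
kc = m * c / hbar                      # Compton wavenumber k_c = mc/hbar

random.seed(0)
phi, pi_ = random.uniform(-1, 1), random.uniform(-1, 1)

# Classical (commuting) analogue of b_x = sqrt(m/2hbar^2) Phi + i/sqrt(2m) Pi
b = math.sqrt(m / (2 * hbar**2)) * phi + 1j / math.sqrt(2 * m) * pi_

# m c^2 |b|^2 should reproduce the "uncoupled" Hamiltonian density
lhs = m * c**2 * abs(b)**2
rhs = 0.5 * (c**2 * pi_**2 + kc**2 * phi**2)
assert abs(lhs - rhs) < 1e-12
```

The $\hbar$-dependent cross terms cancel in $|b|^2$, which is why the result is purely the quadratic form in $\Phi$ and $\Pi$.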
We will consider the Fock space associated with the degree of freedom at the point $\boldsymbol{x}$ to be that generated by $\hat{b}_{\boldsymbol{x}}$ and $\hat{b}_{\boldsymbol{x}}^\dagger$, explicitly given by $\mathcal{H}_{\boldsymbol{x}} := L_2(\mathds{R},d\Phi(\boldsymbol{x}))$.
We then formally take a continuous tensor product over these Fock spaces to construct a Hilbert space for the full QFT, $\otimes_{\boldsymbol{x}} \mathcal{H}_{\boldsymbol{x}}$.
The most common means of defining such a continuous tensor product leads to a non-separable Hilbert space \cite{vonNeumann1939,Wald1994,Streater-Wightman1964}.
Furthermore, it is unclear how to connect this construction of the full Hilbert space to the general form of a Fock space as defined above.
That is, can one write $\otimes_{\boldsymbol{x}} \mathcal{H}_{\boldsymbol{x}} \cong \mathcal{F}[\mathcal{H}]$ for some $\mathcal{H}$?
It turns out that such a description is possible using an alternative definition of the continuous tensor product, which also leads to a separable Hilbert space.
This alternative definition uses a construction called the \emph{exponential Hilbert space} \cite{Streater1969,Isham-Linden1995,Isham-Linden-Savvidou-Schreckenberg1998,Klauder1970}, which is applicable if $\mathcal{H}_{\boldsymbol{x}}$ is a Fock space for every $\boldsymbol{x}$, as is the case in the QFT considered here.
Following \cite{Isham-Linden1995,Isham-Linden-Savvidou-Schreckenberg1998}, we make the identification,
\begin{equation}
\otimes_{\boldsymbol{x}} \mathcal{H}_{\boldsymbol{x}} = \otimes_{\boldsymbol{x}} L_2(\mathds{R},d\Phi(\boldsymbol{x})) \cong \mathcal{F}[L_2(\mathds{R}^n,d\boldsymbol{x})] =: \mathcal{F}_L.
\end{equation}
Thus, we can indeed identify the full Hilbert space of the QFT with a Fock space, which we will call the \emph{local Fock space}, $\mathcal{F}_L$.
Correspondingly, we will denote the total local number operator by $\hat{N}_L := \int d\boldsymbol{x} \hspace{1mm} \hat{b}_{\boldsymbol{x}}^\dagger \hat{b}_{\boldsymbol{x}}$, and the local vacuum as $\ket{0}_L := \otimes_{\boldsymbol{x}} \ket{0}_{\boldsymbol{x}}$ (i.e., $\hat{b}_{\boldsymbol{x}} \ket{0}_L = 0$ for all $\boldsymbol{x}$).
It may seem that our ability to define these objects is in contradiction with the Reeh-Schlieder theorem, which implies the non-existence of local number operators.
However, we will see that this is not the case, as the Reeh-Schlieder theorem is formulated in the context of a different Fock space construction.
Distinct from the local quantization, the most popular quantization scheme, as seen in most particle physics textbooks, is to quantize the normal modes of the free Hamiltonian of the theory.
In flat spacetime, the normal modes for the free Klein-Gordon theory are simply plane waves.
In Fourier space, the Hamiltonian is:
\begin{equation}
H = \frac12 \int \frac{d\boldsymbol{k}}{(2\pi)^n} \left[ c^2 |\Pi_{\boldsymbol{k}}|^2 + \frac{\omega_{\boldsymbol{k}}^2}{c^2} |\Phi_{\boldsymbol{k}}|^2 \right],
\end{equation}
with $\omega_{\boldsymbol{k}} := c \sqrt{\boldsymbol{k}^2 + k_c^2}$.
The Fock space for mode $\boldsymbol{k}$ is constructed with annihilation (and corresponding creation) operators,
\begin{equation}
\hat{a}_{\boldsymbol{k}} := \sqrt{\frac{\omega_{\boldsymbol{k}}}{2\hbar c^2}} \hat{\Phi}_{\boldsymbol{k}} + i \sqrt{\frac{c^2}{2\hbar \omega_{\boldsymbol{k}}}} \hat{\Pi}_{\boldsymbol{k}},
\end{equation}
and can be written as\footnote{Note that we are using a slight abuse of notation: since we are only considering real-valued fields, the functions in this space must satisfy the constraint $\Phi_{\boldsymbol{k}}^\ast = \Phi_{-\boldsymbol{k}}$.
However, this technical point will not be of concern in the following discussion as it is accounted for implicitly by writing the field operator as $\hat{\Phi}_{\boldsymbol{k}} = \sqrt{\frac{\hbar c^2}{2\omega_{\boldsymbol{k}}}} ( \hat{a}_{\boldsymbol{k}} + \hat{a}_{-\boldsymbol{k}}^\dagger )$.} $\mathcal{H}_{\boldsymbol{k}} := L_2(\mathds{C},d\Phi_{\boldsymbol{k}})$.
In a similar manner to the above identification for the continuous tensor product with the local Fock space, we have
\begin{equation}
\otimes_{\boldsymbol{k}} \mathcal{H}_{\boldsymbol{k}} = \otimes_{\boldsymbol{k}} L_2(\mathds{C},d\Phi_{\boldsymbol{k}}) \cong \mathcal{F}[L_2(\mathds{R}^n,d\boldsymbol{k})] =: \mathcal{F}_G.
\end{equation}
We will refer to this Fock space as the \emph{global Fock space}, $\mathcal{F}_G$, in contrast with the local Fock space, $\mathcal{F}_L$, defined above.
The corresponding total global number operator is $\hat{N}_G := \int \frac{d\boldsymbol{k}}{(2\pi)^n} \hat{a}_{\boldsymbol{k}}^\dagger \hat{a}_{\boldsymbol{k}}$, and the global vacuum state is defined to be $\ket{0}_G := \otimes_{\boldsymbol{k}} \ket{0}_{\boldsymbol{k}}$ (i.e., $\hat{a}_{\boldsymbol{k}} \ket{0}_G = 0$ for all $\boldsymbol{k}$).
Of course, the global number operator commutes with the Hamiltonian and the global vacuum is its ground state.
One purpose of choosing the normal mode (or global) quantization scheme is that if we insist on particles being entities that can be counted, then they should also have an element of persistence.
(Of course, this will not be appropriate for interacting theories; however, the localizability issues we are considering here are present even in the case of free theories.
That being said, the discussion may also be relevant for the free theories one identifies with the asymptotic future and past of interacting theories, but we will not pursue this further here.)
A means of formalizing the requirement of persistence is to demand that the particle number be conserved in time, i.e., the number operator of the Fock space should commute with the Hamiltonian.
For the free Klein-Gordon theory, this is only true of the global number operator.
Another appeal of the global quantization is that the entities created with $\hat{a}_{\boldsymbol{k}}^\dagger$ exhibit the appropriate relativistic dispersion relation between mass, momentum, and energy: $\hat{H} \left( \hat{a}_{\boldsymbol{k}}^\dagger \ket{0}_G \right) = \sqrt{ c^2 \hbar^2 \boldsymbol{k}^2 + m^2 c^4 } \left( \hat{a}_{\boldsymbol{k}}^\dagger \ket{0}_G \right)$.
Of course, one would not expect an excitation of the form $\hat{a}_{\boldsymbol{k}}^\dagger \ket{0}_G$ to exhibit any sensible notion of localizability, since the normal modes of the Klein-Gordon field are supported everywhere in space.
A simple remedy would seem to be to define a wavepacket state of the form $\ket{\Psi} = \int \frac{d\boldsymbol{k}}{(2\pi)^n} \tilde{f}(\boldsymbol{k}) \hat{a}_{\boldsymbol{k}}^\dagger \ket{0}_G$, which lies in the single-particle subspace of the global Fock space and exhibits the appropriate dispersion relation in each branch of the momentum superposition.
It seems that one could then choose a wavepacket, $\tilde{f}(\boldsymbol{k})$, so that its Fourier transform is arbitrarily well localized in space, while retaining the appealing features of the global quantization.
However, this is only illusory, for it turns out that it is not appropriate to characterize the localizability of this wavepacket state with the Fourier transform of $\tilde{f}(\boldsymbol{k})$.
The reasoning for this stems from how this Fourier-transformed space relates to the local operators associated with the local Fock space.
The remainder of this section is devoted to elucidating this relationship.
\subsection{Relating the local and global Fock spaces}
The first step will be to examine how the local and global Fock spaces relate to each other.
The relationship is most clearly seen through the Bogoliubov transformation between the creation and annihilation operators of the two quantization schemes:
\begin{equation}\label{eq:xk_bbv}
\hat{b}_{\boldsymbol{x}} = \int \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot \boldsymbol{x}} \left[ c_+(\boldsymbol{k}) \hat{a}_{\boldsymbol{k}} - c_-(\boldsymbol{k}) \hat{a}_{-\boldsymbol{k}}^\dagger \right],
\end{equation}
where
\begin{equation}
c_\pm(\boldsymbol{k}) := \frac12 \left[ (1+(\boldsymbol{k}/k_c)^2)^{1/4} \pm (1+(\boldsymbol{k}/k_c)^2)^{-1/4} \right].
\end{equation}
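Two properties of these coefficients are worth recording: they satisfy the Bogoliubov normalization $c_+^2 - c_-^2 = 1$, and at small wavenumbers $c_+(\boldsymbol{k}) \approx 1$ while $c_-(\boldsymbol{k}) \approx (\boldsymbol{k}/k_c)^2/4$, so the mixing with creation operators is suppressed quadratically. A minimal numerical sketch:

```python
import numpy as np

kc = 1.0
k = np.linspace(-0.5, 0.5, 101) * kc   # sample wavenumbers around k = 0
u2 = (k / kc)**2

c_plus  = 0.5 * ((1 + u2)**0.25 + (1 + u2)**(-0.25))
c_minus = 0.5 * ((1 + u2)**0.25 - (1 + u2)**(-0.25))

# Bogoliubov normalization: c_+^2 - c_-^2 = 1 for every k
assert np.allclose(c_plus**2 - c_minus**2, 1.0)

# Small-k behaviour: c_- ~ (k/k_c)^2 / 4, with error O((k/k_c)^4)
small = np.abs(k) < 0.1 * kc
assert np.allclose(c_minus[small], u2[small] / 4, atol=1e-4)
```

The quadratic smallness of $c_-$ below the Compton scale is what will later make the `non-relativistic' localization scheme workable in the bandlimited regime.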
The fact that there is mixing between the local and global annihilation and creation operators demonstrates that the counting is different in the two Fock spaces $\mathcal{F}_L$ and $\mathcal{F}_G$.
Indeed, the assertion of the Reeh-Schlieder theorem is that there cannot be local number operators counting the particles of the global Fock space, $\mathcal{F}_G$.
Moreover, one can show that the two representations are unitarily inequivalent (see Appendix~\ref{apdx:bbv}).
Hence these two notions of counting are incommensurable in the sense that the expectation value of $\hat{N}_L$ on any state of $\mathcal{F}_G$ diverges (and vice-versa).
That the local and global Fock spaces are different clearly demonstrates that the ground state of the Hamiltonian is not the local vacuum,
\begin{equation}
\ket{0}_G = \otimes_{\boldsymbol{k}} \ket{0}_{\boldsymbol{k}} \neq \ket{0}_L = \otimes_{\boldsymbol{x}} \ket{0}_{\boldsymbol{x}}
\end{equation}
(in fact, they do not even lie in the same space).
Intuitively, this seems to indicate that the local degrees of freedom are entangled in the ground state (global vacuum).
As discussed in \cite{Redhead1995}, it is precisely this entanglement which is responsible for preventing the existence of local number operators counting the particles of $\mathcal{F}_G$.
We will reconsider this ground state entanglement between local degrees of freedom in Section~\ref{sec:entanglement}, after making the non-relativistic approximation of the field theory.
\subsection{Two localization schemes in QFT}\label{subsec:two_schemes}
In the above quantization schemes, we began by writing degrees of freedom labeled by points in either position or momentum space, and then formally constructed the Hilbert space by taking a tensor product over these labels.
We then proceeded to identify this full space with a corresponding Fock space.
The Fock space picture is useful when discussing particle notions, but we do not want to abandon the tensor product description altogether, as this is how one can identify subsystems of the field, which is useful, e.g., if one is interested in entanglement.
There are many ways one can choose such a tensor product structure and arrive at the same Fock space.
For example, suppose we have a basis, $\{ f_i(\boldsymbol{x}) \}_i$, for the space of classical field configurations.
Then we can define coefficients of the classical fields as $\Phi_i := \int d\boldsymbol{x} f_i(\boldsymbol{x}) \Phi(\boldsymbol{x})$ and construct the full Hilbert space by again using the analogy with a collection of harmonic oscillators, but now labeled by the elements in the basis.
Explicitly, we can formally write the full Hilbert space as $\otimes_i L_2(\mathds{R},d\Phi_i)$, where each of the sectors labeled with $i$ is constructed as a Fock space with the annihilation operators $\hat{b}_i := \sqrt{\tfrac{m}{2\hbar^2}} \hat{\Phi}_i + \tfrac{i}{\sqrt{2m}} \hat{\Pi}_i = \int d\boldsymbol{x} f_i(\boldsymbol{x}) \hat{b}_{\boldsymbol{x}}$.
Note that we can reconstruct the continuously labeled field and annihilation operators by $\hat{\Phi}(\boldsymbol{x}) = \sum_i \hat{\Phi}_i f_i(\boldsymbol{x})$ and $\hat{b}_{\boldsymbol{x}} = \sum_i \hat{b}_i f_i(\boldsymbol{x})$.
Since the transformation between the two sets of annihilation operators does not mix them with the creation operators, this new set of annihilation and creation operators clearly generates the local Fock space, i.e., $\otimes_i L_2(\mathds{R},d\Phi_i) \cong \mathcal{F}[L_2(\mathds{R}^n,d\boldsymbol{x})]$.
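As a finite-dimensional illustration of this point, one can replace the continuum of points $\boldsymbol{x}$ by a small lattice and take an arbitrary real orthonormal basis $\{f_i\}$: because the transformation acts on the annihilation operators alone, it is simply an orthogonal rotation of the labels, and the original operators are recovered exactly. A minimal sketch (the lattice size and the random basis are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                  # toy lattice replacing the continuum of points x

# Random real orthonormal basis {f_i} via QR (stand-in for any basis
# of classical field configurations); column i holds f_i(x) on the lattice
F, _ = np.linalg.qr(rng.normal(size=(N, N)))

# b_i = sum_x f_i(x) b_x mixes only annihilation operators, so we may
# represent the transformation directly on a vector of amplitudes
b_x = rng.normal(size=N) + 1j * rng.normal(size=N)   # placeholder amplitudes
b_i = F.T @ b_x

# Reconstruction b_x = sum_i f_i(x) b_i, and orthogonality F F^T = 1
assert np.allclose(F @ b_i, b_x)
assert np.allclose(F @ F.T, np.eye(N))
```

Since the change of basis is orthogonal, the canonical commutation relations among the $\hat{b}_i$ are preserved, and the local vacuum is unchanged.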
We can view the expression of the annihilation and creation operators in the basis $\{ f_i(\boldsymbol{x}) \}_i$ as simply a change of basis in the single-particle subspace.
Conversely, every change of basis in the single-particle subspace of the Fock space corresponds to a different tensor product decomposition of the full Hilbert space.
In this sense, changing a basis in this subspace can be seen as a rearrangement of the degrees of freedom, or a different infinite collection of harmonic oscillators through which we are describing the quantum field theory.
However, these changes of bases do not exhaust all of the possible tensor product structures (hence subsystem decompositions) which are available, since they cannot change the Fock space.
We have already seen this above, where we took a tensor product over normal modes and arrived at a different Fock space.
Of course, we can perform these changes of basis in the global Fock space as well.
For instance, one could use a plane wave basis to define the Fourier transforms of the global annihilation and creation operators,
\begin{equation}\label{eq:ay_defn}
\hat{a}_{\boldsymbol{y}} := \int \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot \boldsymbol{y}} \hat{a}_{\boldsymbol{k}}.
\end{equation}
Notice that this new set of annihilation and creation operators characterizes the Fourier-transformed wavepacket states we discussed above, i.e.,
\begin{equation}
\ket{\Psi} = \int \frac{d\boldsymbol{k}}{(2\pi)^n} \tilde{f}(\boldsymbol{k}) \hat{a}_{\boldsymbol{k}}^\dagger \ket{0}_G = \int d\boldsymbol{y} f(\boldsymbol{y}) \hat{a}_{\boldsymbol{y}}^\dagger \ket{0}_G,
\end{equation}
where $\tilde{f}(\boldsymbol{k})$ is the Fourier transform of $f(\boldsymbol{y})$.
Above we considered the possibility that one could choose an arbitrarily well localized wavepacket, $f(\boldsymbol{y})$, to obtain a localized state of the field theory which exhibits the desirable features of the global quantization (such as persistence in time and obeying the appropriate relativistic dispersion relation).
However, the issue in doing so is that the function $f(\boldsymbol{y})$ does not appropriately characterize the localizability of the wavepacket in space.
To see this, first we note that in the above wavepacket state, the amplitude of the function $f$ at $\boldsymbol{y}$ determines the amplitude of the excitation created with $\hat{a}_{\boldsymbol{y}}^\dagger$.
Hence, if this function is highly peaked around a particular point $\boldsymbol{y}$, we can consider this excitation to be created ``at $\boldsymbol{y}$''.
However, perhaps counter to naive expectations, this $\boldsymbol{y}$ does not correspond to a point in space.
We can see this by first combining the definition \eqref{eq:ay_defn} and the Bogoliubov transformation \eqref{eq:xk_bbv}:
\begin{equation}\label{eq:yx_bbv}
\hat{a}_{\boldsymbol{y}} = \int d\boldsymbol{x} [ F_+(\boldsymbol{y}-\boldsymbol{x}) \hat{b}_{\boldsymbol{x}} + F_-(\boldsymbol{y}-\boldsymbol{x}) \hat{b}_{\boldsymbol{x}}^\dagger ],
\end{equation}
where $F_\pm(\boldsymbol{y}-\boldsymbol{x}) := \int \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot (\boldsymbol{y}-\boldsymbol{x})} c_\pm(\boldsymbol{k})$.
We stated above that the local annihilation and creation operators, $\hat{b}_{\boldsymbol{x}}$ and $\hat{b}_{\boldsymbol{x}}^\dagger$, are labeled by points in space since they are directly related to the local field operators.
Therefore, one sees from this Bogoliubov transformation that the operators $\hat{a}_{\boldsymbol{y}}$ and $\hat{a}_{\boldsymbol{y}}^\dagger$ act non-locally.
The degree of non-locality is governed by the integral kernels $F_\pm$, which decay asymptotically as $F_\pm \sim e^{-k_c |\boldsymbol{y}-\boldsymbol{x}|}$.
Hence the non-locality of the operators $\hat{a}_{\boldsymbol{y}}$ is suppressed exponentially at distances much larger than the Compton wavelength of the field.
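The exponential suppression can be traced to the branch points of $c_\pm(\boldsymbol{k})$ at $\boldsymbol{k}^2 = -k_c^2$: in one spatial dimension, the decaying part of the kernels reduces (up to power-law prefactors and distributional terms supported at coincidence) to modified Bessel functions $K_\nu(k_c|\boldsymbol{y}-\boldsymbol{x}|)$, whose large-argument behaviour is $K_\nu(z) \sim \sqrt{\pi/2z}\, e^{-z}$. A rough numerical sketch of this asymptotic decay, using the standard integral representation of $K_\nu$ (the prefactors are illustrative only):

```python
import numpy as np

def K(nu, z, tmax=12.0, n=4000):
    """Modified Bessel K_nu(z) via K_nu(z) = \int_0^inf e^{-z cosh t} cosh(nu t) dt."""
    t = np.linspace(0.0, tmax, n)
    f = np.exp(-z * np.cosh(t)) * np.cosh(nu * t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoid rule

kc = 1.0
for r in (5.0, 10.0):                  # distances well beyond the Compton scale 1/k_c
    exact = K(0.25, kc * r)
    asymptotic = np.sqrt(np.pi / (2 * kc * r)) * np.exp(-kc * r)
    # decay is e^{-k_c r}, up to slowly varying prefactors
    assert abs(exact / asymptotic - 1) < 0.05
```

At $r = 10/k_c$ the kernel has already fallen by a factor $\sim e^{-10}$, which is the sense in which the non-locality of $\hat{a}_{\boldsymbol{y}}$ is negligible beyond the Compton scale.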
Note that from the definition of $\hat{a}_{\boldsymbol{y}}$, it follows that $\ket{0}_G = \otimes_{\boldsymbol{y}} \ket{0}_{\boldsymbol{y}}$, since the two sets of operators generate the same Fock space (and, in particular, annihilate the same vacuum).
Therefore, there is clearly no vacuum entanglement between the $\boldsymbol{y}$ degrees of freedom in the ground state of the field theory.
Hence, we arrive at two different localization schemes for our QFT: that defined in \eqref{eq:bx_defn} using the local operators, which we will refer to as the `standard' localization scheme, and the other defined in \eqref{eq:ay_defn} using the Fourier-transformed global operators.
Features of the two localization schemes have been investigated in \cite{Halvorson-Clifton2002,Piazza-Costa2007}.
In the literature, the operators $\hat{a}_{\boldsymbol{y}}$ and $\hat{a}_{\boldsymbol{y}}^\dagger$ are commonly introduced as non-relativistic field operators (see, e.g., \cite{Anastopoulos-Hu2014,Piazza-Costa2007,Bjorken-Drell1964}).
\updated{A summary of the main features of the two localization schemes is provided in Table \ref{tab:two_schemes}.}
\begin{table}[h!]
\updated{
\caption{\label{tab:two_schemes}Summary of the localization schemes.}
\begin{center}
\begin{tabular}{ | c | c | }
\hline
\textbf{Non-relativistic}: $\hat{a}_{\boldsymbol{y}}, \hat{a}_{\boldsymbol{y}}^\dagger$ & \textbf{Local}: $\hat{b}_{\boldsymbol{x}}, \hat{b}_{\boldsymbol{x}}^\dagger$ \\
\hline
Fourier transforms of $\hat{a}_{\boldsymbol{k}}, \hat{a}_{\boldsymbol{k}}^\dagger$ & Bogoliubov mixing of $\hat{a}_{\boldsymbol{k}}, \hat{a}_{\boldsymbol{k}}^\dagger$ \\
\hline
non-locally related to $\hat{\Phi}(\boldsymbol{x}), \hat{\Pi}(\boldsymbol{x})$ & locally related to $\hat{\Phi}(\boldsymbol{x}), \hat{\Pi}(\boldsymbol{x})$ \\
\hline
act in $\mathcal{F}_G$ & act in $\mathcal{F}_L$, not $\mathcal{F}_G$ \\
\hline
$\ket{0}_G = \otimes_{\boldsymbol{y}} \ket{0}_{\boldsymbol{y}}$ & $\ket{0}_G \neq \otimes_{\boldsymbol{x}} \ket{0}_{\boldsymbol{x}}$ \\
\hline
\end{tabular}
\end{center}
}
\end{table}
One might expect that these operators coincide with the local annihilation and creation operators in the non-relativistic regime.
This would also be consistent with the observation that the degree of non-locality of the integral kernels $F_\pm$ in \eqref{eq:yx_bbv} decays exponentially beyond the Compton scale, and hence this non-locality may become insignificant in this regime.
One would then be able to use these non-relativistic field operators to define localizable particle states exhibiting the features of the global quantization.
Furthermore, if the two localization schemes were to coincide in the limit, the local and global vacua would also coincide, indicating that any entanglement between the local degrees of freedom would be unobservable in this limit.
Intuitively, this would have the consequence of lifting obstructions to particle localizability due to the Reeh-Schlieder theorem.
All of this would be a simple and consistent story, if only it were true.
It turns out that the reason the `non-relativistic' scheme is appropriate in the non-relativistic regime (thus justifying the terminology) is more subtle, as we will now discuss.
\section{Aspects of the non-relativistic approximation}\label{sec:NR_limit}
In this section we will describe the manner in which we will implement the non-relativistic approximation for the Klein-Gordon theory described above.
In the literature, there are different approaches for making this approximation; one can find methods using the WKB approximation or similar techniques \cite{Proca1938,Bjorken-Drell1964,Kiefer-Singh1991}, others using a group theoretic perspective \cite{Weinberg1995,LevyLeblond1967}, and yet others using a renormalization perspective \cite{daSilva2001,Caswell-Lepage1986,Lepage1989}.
Often in these approaches, one finds either the implicit or explicit assumption of an ultraviolet cutoff that can suitably impose the ``small momentum'' condition.
Here we will be careful to be explicit about this assumption, as it will have implications for our following investigation into localizability and vacuum entanglement.
\subsection{Requirement of an ultraviolet cutoff}
For a single classical particle, the non-relativistic approximation can be stated in terms of an expansion of the energy \emph{to second order} in $|\boldsymbol{p}|/mc$, i.e., $E = \sqrt{ m^2 c^4 + \boldsymbol{p}^2 c^2 } \approx mc^2 + \boldsymbol{p}^2 / 2m$.
For a quantum particle, we use the de Broglie relation $\boldsymbol{p} = \hbar \boldsymbol{k}$ to conclude that the appropriate expansion is to second order in $|\boldsymbol{k}|/k_c$.
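The quality of this expansion is easy to check numerically: the relative error of the non-relativistic energy is of order $(|\boldsymbol{k}|/k_c)^4$. A minimal sketch (parameter values are arbitrary illustrative choices):

```python
import math

m, c, hbar = 1.0, 1.0, 1.0
kc = m * c / hbar                      # Compton wavenumber

for k in (0.01 * kc, 0.05 * kc, 0.1 * kc):
    p = hbar * k
    E_rel = math.sqrt(m**2 * c**4 + p**2 * c**2)   # exact relativistic energy
    E_nr  = m * c**2 + p**2 / (2 * m)              # rest mass + kinetic term
    # error of the expansion is (k/k_c)^4 / 8 in units of m c^2
    rel_err = abs(E_rel - E_nr) / (m * c**2)
    assert rel_err < (k / kc)**4
```

Already at $|\boldsymbol{k}| = 0.1\, k_c$ the expansion is accurate to about one part in $10^5$, which motivates the choice of cutoff scale below.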
There are perhaps different means through which one could go about formally implementing this approximation.
For a single-particle wavepacket of the form $\int \frac{d\boldsymbol{k}}{(2\pi)^n} \psi(\boldsymbol{k}) \hat{a}_{\boldsymbol{k}}^\dagger \ket{0}_G$, one may consider declaring that such particles are `slow' if the expectation value and the variance of the momentum are small.
For example, this could be achieved by restricting the set of allowable wavepackets to those with suitably quick decay at large momenta.
However, this space fails to be a closed linear space, which is a requirement if we want the resulting space to form a Hilbert space.
Because this space of wavepackets will ultimately be the state space we use for NRQM, it seems that this requirement is appropriate so that we recover important structural features of NRQM, such as the spectral theorem for observables restricted to this subspace.
The cost of enforcing this requirement is to impose a cutoff on the set of allowable wavenumbers, i.e., the support of these wavepackets should be restricted to $|\boldsymbol{k}| < \Lambda$, where $\Lambda$ is a cutoff such that $\Lambda/k_c \ll 1$.
The corresponding single-particle Hilbert space (called \emph{bandlimited wavefunctions}) will be denoted $B(\Lambda)$.
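The projection onto $B(\Lambda)$ amounts to masking Fourier components above the cutoff. A minimal numerical sketch on a hypothetical 1D grid (all parameter values are illustrative assumptions), showing that a sufficiently broad (`slow') wavepacket loses almost no norm under the projection:

```python
import numpy as np

# Hypothetical 1D discretization (lengths in units of 1/k_c)
N, L = 4096, 800.0                     # grid points, box size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

kc, Lam = 1.0, 0.1 * 1.0               # cutoff Lambda = 0.1 k_c
sigma = 50.0                           # position width >> 1/Lambda: nearly bandlimited
psi = np.exp(-x**2 / (2 * sigma**2))
psi /= np.linalg.norm(psi)

psi_k = np.fft.fft(psi)
psi_k[np.abs(k) >= Lam] = 0.0          # projection onto the bandlimited subspace B(Lambda)
psi_BL = np.fft.ifft(psi_k)

# A slow wavepacket retains essentially all of its norm
assert np.linalg.norm(psi_BL) > 0.999
```

A sharply localized wavepacket (width $\ll 1/\Lambda$), by contrast, would lose most of its norm under the same mask, which previews the localizability limitations discussed below.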
Imposing a sharp cutoff of this kind also confines us to a set of states where particle pair creation cannot occur (in an interacting theory).
Since particle creation and annihilation is a fundamental difference between QFT and NRQM, this seems an appropriate restriction, as a softer cutoff would always provide a non-zero amplitude for these processes.
Notice also that imposing such a cutoff picks out a preferred frame (that is, we break the Lorentz-invariance of the theory at this stage), but of course there should not be a frame-independent notion of `slow'.
It is straightforward to extend this to multiparticle states.
If we consider a two-particle state $\int \frac{d\boldsymbol{k}_1}{(2\pi)^n} \frac{d\boldsymbol{k}_2}{(2\pi)^n} \psi(\boldsymbol{k}_1,\boldsymbol{k}_2) \hat{a}_{\boldsymbol{k}_1}^\dagger \hat{a}_{\boldsymbol{k}_2}^\dagger \ket{0}_G$, then the appropriate restriction would be to limit the support of the wavepacket in both variables to $|\boldsymbol{k}_1|, |\boldsymbol{k}_2| < \Lambda$.
This is because we want the momenta of \emph{each} of the particles to be small, and not the \emph{total} momentum, for example.
Hence, the two-particle subspace should be identified with $(B(\Lambda)^{\otimes 2})_S$, and similarly for higher particle-number subspaces.
Overall, we arrive at a Fock space constructed with symmetrized tensor products of $B(\Lambda)$, namely $\mathcal{F}[B(\Lambda)]$.
This is a subspace of the global Fock space, since
\begin{equation}
B(\Lambda) \subset L_2(\mathds{R}^n,d\boldsymbol{k}) \implies \mathcal{F}[B(\Lambda)] \subset \mathcal{F}[L_2(\mathds{R}^n,d\boldsymbol{k})] = \mathcal{F}_G.
\end{equation}
One can also think of this bandlimited Fock space as obtained by removing (tracing out) the set of degrees of freedom associated with wavenumbers above the cutoff, i.e.,
\begin{equation}
\mathcal{F}[B(\Lambda)] \cong \otimes_{|\boldsymbol{k}|<\Lambda} L_2(\mathds{C},d\Phi_{\boldsymbol{k}}) \subset \mathcal{F}[L_2(\mathds{R}^n,d\boldsymbol{k})] \cong \otimes_{\boldsymbol{k}} L_2(\mathds{C},d\Phi_{\boldsymbol{k}}).
\end{equation}
As has been studied in \cite{Pye-Donnelly-Kempf2015}, bandlimitation has a non-trivial effect on the structure of local degrees of freedom and entanglement of the field.
We will return to explicitly discuss these effects in Subsection~\ref{subsec:sampling} and Section~\ref{sec:entanglement} (respectively).
Regardless of whether the cutoff is introduced for fundamental or for phenomenological reasons, all of the implications of bandlimitation need to be taken into account.
In \cite{Pye-Donnelly-Kempf2015}, quantum gravity considerations motivated introducing a cutoff at the Planck scale.
For the purposes of the non-relativistic approximation, the size of the cutoff is rather set by the Compton scale.
An operational means through which one could motivate introducing such a cutoff is through an interface with a probing system which couples only to this subset of modes.
This could occur, for example, in a detector model which couples to the field via a bandlimited smearing function, intuitively corresponding to a large detector (compared to the Compton wavelength of the field).
We can consistently describe the physics restricted to $\mathcal{F}[B(\Lambda)] \cong \otimes_{|\boldsymbol{k}|<\Lambda} L_2(\mathds{C},d\Phi_{\boldsymbol{k}})$, since in the free theory each $\boldsymbol{k}$ sector is decoupled.
We began this discussion by considering wavepackets formed with the global creation operators and seemingly abandoned the local Fock space associated with the local annihilation and creation operators $\hat{b}_{\boldsymbol{x}}$ and $\hat{b}_{\boldsymbol{x}}^\dagger$.
We choose to focus on the global Fock space as it is these particle excitations which exhibit the correct dispersion relation and are preserved in time.
If we hope to end up with a non-relativistic theory of fixed particle-number states, with the Schr\"odinger equation as the equation of motion for these wavepackets, then we must focus on states residing in the global Fock space.
However, we will return to consider the relevance of the local operators in the non-relativistic regime.
The bandlimited Fock space, $\mathcal{F}[B(\Lambda)]$, is the space from which we will draw non-relativistic wavefunctions.
Because this is a subspace of the global Fock space, it is straightforward to define the restriction of operators from the field theory to this subspace.
For example, the total momentum operator becomes
\begin{equation}
\boldsymbol{\hat{P}} \big|_{\mathcal{F}[B(\Lambda)]} = \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} \hbar \boldsymbol{k} \hat{a}_{\boldsymbol{k}}^\dagger \hat{a}_{\boldsymbol{k}},
\end{equation}
which we see is simply a restriction of the integration range of the $\boldsymbol{k}$ values.
\subsection{Operator approximations}
Once an operator written as an integral over wavevectors has been bandlimited, one is able to expand the integrand in powers of $|\boldsymbol{k}|/k_c < \Lambda/k_c \ll 1$.
The non-relativistic approximation entails keeping terms up to second order in this ratio.
More concretely, let us consider again the Bogoliubov transformation \eqref{eq:xk_bbv}.
After introducing a cutoff, we can write the bandlimited local annihilation operator using the (inverse) Bogoliubov transformation:
\begin{equation}
\hat{b}_{\boldsymbol{x}} = \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot \boldsymbol{x}} \left[ c_+(\boldsymbol{k}) \hat{a}_{\boldsymbol{k}} - c_-(\boldsymbol{k}) \hat{a}_{-\boldsymbol{k}}^\dagger \right],
\end{equation}
with $c_\pm(\boldsymbol{k}) := \frac12 \left[ (1+(\boldsymbol{k}/k_c)^2)^{1/4} \pm (1+(\boldsymbol{k}/k_c)^2)^{-1/4} \right]$.
Since $c_\pm(\boldsymbol{k})$ are analytic at 0, we can expand these in a Maclaurin series in $|\boldsymbol{k}|/k_c < \Lambda/k_c \ll 1$.
To second order, these become $c_+(\boldsymbol{k}) \sim 1$ and $c_-(\boldsymbol{k}) \sim \tfrac14 (\boldsymbol{k}/k_c)^2$.
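As a quick sanity check of these leading-order coefficients, one can compare them numerically against the exact expressions (a small sketch of our own; the values of $|\boldsymbol{k}|/k_c$ are arbitrary illustrative choices):

```python
# Verify the Maclaurin expansions of the Bogoliubov coefficients
# c_pm = (1/2)[(1+u)^{1/4} +/- (1+u)^{-1/4}],  u = (|k|/k_c)^2,
# against the leading orders c_+ ~ 1 and c_- ~ u/4 for |k|/k_c << 1.
k_c = 1.0
for k in [0.01, 0.05, 0.1]:
    u = (k / k_c) ** 2
    c_plus = 0.5 * ((1 + u) ** 0.25 + (1 + u) ** -0.25)
    c_minus = 0.5 * ((1 + u) ** 0.25 - (1 + u) ** -0.25)
    # corrections enter at order u^2, i.e. at fourth order in |k|/k_c
    assert abs(c_plus - 1.0) < u ** 2
    assert abs(c_minus - u / 4) < u ** 2
```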
Keeping up to second order terms, we can insert these into the above Bogoliubov transformation to get
\begin{equation}\label{eq:bx_NR}
\hat{b}_{\boldsymbol{x}} = \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot \boldsymbol{x}} \left[ \hat{a}_{\boldsymbol{k}} - \frac14 (\boldsymbol{k}/k_c)^2 \hat{a}_{-\boldsymbol{k}}^\dagger \right].
\end{equation}
Hence we obtain an operator that is equivalent to the original operator $\hat{b}_{\boldsymbol{x}}$ for modes below the bandlimit (to second order in $\Lambda/k_c$).
In the following, we will interpret this as the action of $\hat{b}_{\boldsymbol{x}}$ on the bandlimited subspace in the non-relativistic regime.
The extension of this procedure to other operators written as Fourier integrals is straightforward.
\section{Localizability under the non-relativistic approximation}\label{sec:localizability}
In this section we show that the two localization schemes do not coincide in the non-relativistic regime, and we demonstrate which of them can serve to salvage NRQM from QFT under the non-relativistic approximation.
We describe the sense in which we recover some of the features of standard NRQM, e.g., what can play the role of a wavefunction and a position operator.
Despite obtaining analogues of these objects, we find that there remain limitations to the localizability properties of these wavefunctions.
We will also discuss the form that the spatial degrees of freedom of the field theory and the wavefunctions take after the non-relativistic approximation.
\subsection{Localization schemes in the non-relativistic regime}\label{subsec:two_schemes_NR}
In Subsection~\ref{subsec:two_schemes}, we presented both the standard and non-relativistic schemes for characterizing localizability of particle states in a quantum field theory.
We also alluded to the possibility that the non-locality of the non-relativistic operators, inferred from the Bogoliubov transformation \eqref{eq:yx_bbv}, may disappear in the non-relativistic regime of the quantum field theory.
In this way, the non-relativistic localization scheme would faithfully represent the local degrees of freedom in this regime.
However, using the tools outlined in Section~\ref{sec:NR_limit}, we will now demonstrate that the two schemes do not coincide in this regime.
The Bogoliubov transformation \eqref{eq:yx_bbv} between the local and non-relativistic operators will be the relation of central importance to this demonstration.
Introducing a cutoff $|\boldsymbol{k}|<\Lambda$ and expanding to second order in $\Lambda/k_c$, we find
\begin{equation}\label{eq:yx_bbv_NR}
\hat{a}_{\boldsymbol{y}} = \hat{b}_{\boldsymbol{y}} + \int d\boldsymbol{x} \hspace{0.5mm} F_-^\Lambda(\boldsymbol{y}-\boldsymbol{x}) \hat{b}_{\boldsymbol{x}}^\dagger,
\end{equation}
where $F_-^\Lambda(\boldsymbol{y}-\boldsymbol{x}) := \frac14 \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot (\boldsymbol{y}-\boldsymbol{x})} (\boldsymbol{k}/k_c)^2$ is one of the non-local integral kernels from \eqref{eq:yx_bbv} after the approximation.
Therefore, we see that $\hat{a}_{\boldsymbol{y}}$ and $\hat{b}_{\boldsymbol{y}}$ do not coincide in the non-relativistic regime, but differ by a term which is second order in $\Lambda/k_c$.
This second order term also shows that $\hat{a}_{\boldsymbol{y}}$ continues to act non-locally in $\boldsymbol{x}$ after the approximation.
Furthermore, introducing a cutoff aggravates this non-locality.
Whereas the original function, $F_-(\boldsymbol{y}-\boldsymbol{x}) \sim e^{-k_c |\boldsymbol{y}-\boldsymbol{x}|}$, decays exponentially beyond the Compton scale, the introduction of the cutoff causes this integral kernel to decay only polynomially, $F_-^\Lambda(\boldsymbol{y}-\boldsymbol{x}) \sim 1/|\boldsymbol{y}-\boldsymbol{x}|^{(n+1)/2}$.
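These decay rates can be illustrated numerically in one dimension. The sketch below (our own; $\Lambda$ and $k_c$ are arbitrary illustrative values with $\Lambda \ll k_c$) evaluates $F_-^\Lambda$ both by direct numerical integration and via a closed form obtained by integrating by parts, whose leading large-distance behaviour, $\Lambda^2 \sin(\Lambda r)/(4\pi k_c^2 r)$, exhibits the $1/r$ tail expected from $(n+1)/2 = 1$ when $n=1$:

```python
import numpy as np

Lam, k_c = 1.0, 10.0  # illustrative cutoff and Compton wavenumber, Lam << k_c

def F_minus_num(x, N=400000):
    """Midpoint evaluation of (1/4) * int_{|k|<Lam} dk/(2 pi) e^{ikx} (k/k_c)^2."""
    dk = 2 * Lam / N
    k = -Lam + (np.arange(N) + 0.5) * dk
    return (np.exp(1j * k * x) * (k / k_c) ** 2).sum().real * dk / (8 * np.pi)

def F_minus_exact(x):
    """Closed form for n = 1 (integration by parts); leading tail ~ 1/x."""
    return (Lam**2 * np.sin(Lam * x) / x
            + 2 * Lam * np.cos(Lam * x) / x**2
            - 2 * np.sin(Lam * x) / x**3) / (4 * np.pi * k_c**2)

for x in [0.5, 2.0, 7.3]:
    assert abs(F_minus_num(x) - F_minus_exact(x)) < 1e-8
# polynomial (1/x) decay: bounded by twice the leading term's envelope
assert abs(F_minus_exact(300.0)) < Lam**2 / (2 * np.pi * k_c**2 * 300.0)
```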
The situation may appear asymmetric between $F_+^\Lambda$ and $F_-^\Lambda$: one can likewise write $F_+^\Lambda(\boldsymbol{y}-\boldsymbol{x}) := \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot (\boldsymbol{y}-\boldsymbol{x})}$ as a polynomially-decaying integral kernel, but it is easy to check that this kernel acts as the identity on bandlimited functions (i.e., it is a reproducing kernel).
Hence, in particular, $\hat{b}_{\boldsymbol{y}} = \int d\boldsymbol{x} F_+^\Lambda(\boldsymbol{y}-\boldsymbol{x}) \hat{b}_{\boldsymbol{x}}$ (keeping in mind equation \eqref{eq:bx_NR}).
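The reproducing property of $F_+^\Lambda$ is likewise easy to confirm numerically: convolving the kernel with a function bandlimited below the cutoff returns the function. (A sketch of our own, in one dimension; note NumPy's convention $\mathrm{sinc}(u) = \sin(\pi u)/(\pi u)$, and all parameter values are illustrative.)

```python
import numpy as np

Lam = np.pi   # cutoff; F_+(u) = sin(Lam*u)/(pi*u) = (Lam/pi)*sinc(Lam*u/pi)
a = Lam / 2   # test function bandlimited to |k| < a < Lam

def f(x):
    # Fourier transform supported on |k| < a, hence f is bandlimited
    return (a / np.pi) * np.sinc(a * x / np.pi)

x = np.linspace(-500.0, 500.0, 1_000_001)   # quadrature grid
dx = x[1] - x[0]
for y0 in [0.0, 0.3, 1.7]:
    kernel = (Lam / np.pi) * np.sinc(Lam * (y0 - x) / np.pi)
    # int dx F_+(y - x) f(x) = f(y) for bandlimited f
    assert abs((kernel * f(x)).sum() * dx - f(y0)) < 1e-3
```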
This suggests that the cutoff $|\boldsymbol{k}|<\Lambda$ introduces a fundamental non-locality to both the labels $\boldsymbol{x}$ and $\boldsymbol{y}$.
This non-locality is a general feature of bandlimited functions.
A first simple observation concerning localizability is the fact that bandlimited functions cannot have compact support in position space, and hence they must exhibit some degree of non-locality.
This is a consequence of the fact that bandlimited functions are compactly supported in momentum space and of Benedicks' theorem \cite{Benedicks1985}, which limits the mutual localizability in position and momentum space (and is closely related to the uncertainty principle).
Hence, we will see that the wavefunctions we obtain will exhibit an intrinsic degree of non-locality.
Further, the cutoff not only affects the operators labeled by $\boldsymbol{y}$, but also the local operators labeled by $\boldsymbol{x}$ due to the modified definition \eqref{eq:bx_NR}.
Not only do the two localization schemes differ under the approximation, but the fact that the annihilation and creation operators mix implies that they continue to generate different Fock spaces.
We can see the effects of this through the construction of so-called `quasi-local' states \cite{VazquezEtal2014} by acting with the local creation operators on the global vacuum.
For example, one can write a single-particle quasi-local state as $\ket{\Psi} := \int d\boldsymbol{x} \hspace{0.5mm} \Psi(\boldsymbol{x}) \hat{b}_{\boldsymbol{x}}^\dagger \ket{0}_G$.
Notice that this agrees with the single-particle states of $\mathcal{F}[B(\Lambda)]$, since the Bogoliubov transformation \eqref{eq:yx_bbv_NR} implies $\hat{b}_{\boldsymbol{x}}^\dagger = \hat{a}_{\boldsymbol{x}}^\dagger - \int d\boldsymbol{y} F_-^\Lambda(\boldsymbol{x}-\boldsymbol{y}) \hat{a}_{\boldsymbol{y}}$, and so
\begin{equation}
\ket{\Psi} = \int d\boldsymbol{x} \hspace{1mm} \Psi(\boldsymbol{x}) \hat{b}_{\boldsymbol{x}}^\dagger \ket{0}_G = \int d\boldsymbol{x} \hspace{1mm} \Psi(\boldsymbol{x}) \hat{a}_{\boldsymbol{x}}^\dagger \ket{0}_G,
\end{equation}
since $\hat{a}_{\boldsymbol{y}} \ket{0}_G = 0$.
Therefore, the discrepancy between the two schemes is irrelevant for single-particle states.
However, the difference will indeed manifest itself for higher particle-number states.
For example:
\begin{align}
\int d\boldsymbol{x}_1 d\boldsymbol{x}_2 \hspace{1mm} \Psi(\boldsymbol{x}_1,\boldsymbol{x}_2) \hat{b}_{\boldsymbol{x}_1}^\dagger \hat{b}_{\boldsymbol{x}_2}^\dagger \ket{0}_G =& \int d\boldsymbol{x}_1 d\boldsymbol{x}_2 \hspace{1mm} \Psi(\boldsymbol{x}_1,\boldsymbol{x}_2) \hat{a}_{\boldsymbol{x}_1}^\dagger \hat{a}_{\boldsymbol{x}_2}^\dagger \ket{0}_G \nonumber \\
&- \int d\boldsymbol{x}_1 d\boldsymbol{x}_2 \hspace{1mm} \Psi(\boldsymbol{x}_1,\boldsymbol{x}_2) F_-^\Lambda(\boldsymbol{x}_1-\boldsymbol{x}_2) \ket{0}_G.
\end{align}
Hence acting twice with the local creation operators generates a combination of a two-particle state and the global vacuum of $\mathcal{F}[B(\Lambda)]$.
It is generally the case that $\hat{b}_{\boldsymbol{x}_1}^\dagger \cdots \hat{b}_{\boldsymbol{x}_N}^\dagger \ket{0}_G$ will have non-zero components in particle-number sectors lower than $N$, in contrast to $\hat{a}_{\boldsymbol{x}_1}^\dagger \cdots \hat{a}_{\boldsymbol{x}_N}^\dagger \ket{0}_G$ which is an $N$-particle state in $\mathcal{F}[B(\Lambda)]$.
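The vacuum component can be traced to a single contraction. Using $\hat{b}_{\boldsymbol{x}}^\dagger = \hat{a}_{\boldsymbol{x}}^\dagger - \int d\boldsymbol{y} \hspace{0.5mm} F_-^\Lambda(\boldsymbol{x}-\boldsymbol{y}) \hat{a}_{\boldsymbol{y}}$ together with the commutator $[\hat{a}_{\boldsymbol{y}}, \hat{a}_{\boldsymbol{x}}^\dagger] = F_+^\Lambda(\boldsymbol{y}-\boldsymbol{x})$ implied by the bandlimited mode expansion, and $\hat{a}_{\boldsymbol{y}} \ket{0}_G = 0$, we find
\begin{align}
\hat{b}_{\boldsymbol{x}_1}^\dagger \hat{b}_{\boldsymbol{x}_2}^\dagger \ket{0}_G &= \hat{a}_{\boldsymbol{x}_1}^\dagger \hat{a}_{\boldsymbol{x}_2}^\dagger \ket{0}_G - \int d\boldsymbol{y} \hspace{0.5mm} F_-^\Lambda(\boldsymbol{x}_1-\boldsymbol{y}) F_+^\Lambda(\boldsymbol{y}-\boldsymbol{x}_2) \ket{0}_G \nonumber \\
&= \hat{a}_{\boldsymbol{x}_1}^\dagger \hat{a}_{\boldsymbol{x}_2}^\dagger \ket{0}_G - F_-^\Lambda(\boldsymbol{x}_1-\boldsymbol{x}_2) \ket{0}_G,
\end{align}
where the last step uses the fact that $F_+^\Lambda$ reproduces the bandlimited kernel $F_-^\Lambda$, and we work to second order in $\Lambda/k_c$ throughout.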
Hence we can conclude that the two schemes do not coincide in the non-relativistic regime, and thus we are forced to choose between them to characterize the localizability of particles and wavefunctions in the rendition of NRQM that we obtain after the approximation.
This choice is dictated by how closely each scheme can be used to recover the characteristic features of NRQM.
Of course, one of the key physical differences between NRQM and QFT is that NRQM is a theory of a fixed number of particles.
Since the number operator of the non-relativistic scheme commutes with the Hamiltonian (as opposed to that of the local operators), this clearly singles out the non-relativistic scheme as the appropriate choice.
Hence, this indeed justifies the use of the term \emph{non-relativistic} for this scheme.
We remark that this discrepancy has also implicitly appeared in other treatments of the non-relativistic approximation.
For example, in \cite{Bjorken-Drell1964}, a Foldy-Wouthuysen transformation is applied before the approximation in order to decouple the evolution of the local operators, $\hat{b}_{\boldsymbol{x}}$ and $\hat{b}_{\boldsymbol{x}}^\dagger$.
One can show that the Foldy-Wouthuysen transformation of \cite{Bjorken-Drell1964} is exactly the Bogoliubov transformation \eqref{eq:yx_bbv}, mapping the local to the non-relativistic operators.
Hence, our description of NRQM (aside from our explicit treatment of the cutoff) in terms of the non-relativistic operators is consistent with these previous works, where the transformation is applied manually in traversing from QFT to NRQM.
Although this transformation can be viewed as simply a change of representation \cite{Bjorken-Drell1964,Costella-McKellar1995}, the fact remains that the NRQM operators are non-local combinations of the field operators (and mix the local annihilation and creation operators).
In \cite{Costella-McKellar1995}, the local and non-relativistic representations (in the spin-1/2 case) were dubbed ``the two faces of the electron.''
\subsection{Salvaging NRQM from QFT}
Now we will proceed to examine which aspects of NRQM can be salvaged from the Klein-Gordon QFT after the non-relativistic approximation.
First, in addition to the non-relativistic approximation, we must restrict attention to a particular $N$-particle subspace of $\mathcal{F}[B(\Lambda)]$.
The $N$-particle wavefunctions of NRQM will be wavepacket states of the form
\begin{equation}\label{eq:NR_wvfn}
\ket{\Psi} := \frac{1}{\sqrt{N!}} \int d\boldsymbol{y}_1 \cdots d\boldsymbol{y}_N \hspace{1mm} \Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N) \hspace{1mm} \hat{a}_{\boldsymbol{y}_1}^\dagger \cdots \hat{a}_{\boldsymbol{y}_N}^\dagger \ket{0}_G,
\end{equation}
where the factor of $1/\sqrt{N!}$ is added for convenience.
Typically in such discussions, one then naturally proceeds to consider time evolution of these states.
First, let us examine the restriction of the Hamiltonian operator associated with \eqref{eq:ham} to the subspace $\mathcal{F}[B(\Lambda)]$.
Expanding the dispersion relation to second order in $\Lambda/k_c$, we have
\begin{align}
\hat{H} \big|_{\mathcal{F}[B(\Lambda)]} &= \frac12 \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} \left( mc^2 + \frac{\hbar^2 \boldsymbol{k}^2}{2m} \right) ( \hat{a}_{\boldsymbol{k}}^\dagger \hat{a}_{\boldsymbol{k}} + \hat{a}_{\boldsymbol{k}} \hat{a}_{\boldsymbol{k}}^\dagger ) \\
&= \frac12 \int d\boldsymbol{y} \left[ mc^2 ( \hat{a}_{\boldsymbol{y}}^\dagger \hat{a}_{\boldsymbol{y}} + \hat{a}_{\boldsymbol{y}} \hat{a}_{\boldsymbol{y}}^\dagger ) - \frac{\hbar^2}{2m} ( \hat{a}_{\boldsymbol{y}}^\dagger \boldsymbol{\nabla}^2 \hat{a}_{\boldsymbol{y}} + \hat{a}_{\boldsymbol{y}} \boldsymbol{\nabla}^2 \hat{a}_{\boldsymbol{y}}^\dagger ) \right].\label{eq:NR_ham}
\end{align}
We see that it indeed takes the form of a non-relativistic Hamiltonian, with an appropriate kinetic term plus a mass-energy density term.
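The only input here is the expansion of the dispersion relation: assuming the standard relativistic form $\omega_{\boldsymbol{k}} = c \sqrt{\boldsymbol{k}^2 + k_c^2}$ with $k_c = mc/\hbar$ (consistent with the coefficients $c_\pm(\boldsymbol{k})$ above), we have, to second order in $\Lambda/k_c$,
\begin{equation}
\hbar \omega_{\boldsymbol{k}} = \hbar c \, k_c \sqrt{1 + (\boldsymbol{k}/k_c)^2} = mc^2 + \frac{\hbar^2 \boldsymbol{k}^2}{2m} + O\big( mc^2 (\Lambda/k_c)^4 \big),
\end{equation}
which is precisely the integrand appearing in \eqref{eq:NR_ham}.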
Recall that the full relativistic Hamiltonian is local in space (labeled by $\boldsymbol{x}$),
\begin{align}
\hat{H} &= \frac12 \int d\boldsymbol{x} \hspace{1mm} [ c^2 \hat{\Pi}(\boldsymbol{x})^2 + ( \boldsymbol{\nabla} \hat{\Phi}(\boldsymbol{x}) )^2 + k_c^2 \hat{\Phi}(\boldsymbol{x})^2 ] \\
&= \frac12 \int d\boldsymbol{x} \left[ mc^2 ( \hat{b}_{\boldsymbol{x}}^\dagger \hat{b}_{\boldsymbol{x}} + \hat{b}_{\boldsymbol{x}} \hat{b}_{\boldsymbol{x}}^\dagger ) - \frac{\hbar^2}{2m} ( \hat{b}_{\boldsymbol{x}} + \hat{b}_{\boldsymbol{x}}^\dagger ) \boldsymbol{\nabla}^2 ( \hat{b}_{\boldsymbol{x}} + \hat{b}_{\boldsymbol{x}}^\dagger ) \right].
\end{align}
However, the full relativistic Hamiltonian written in terms of the non-relativistic localization scheme is
\begin{equation}\label{eq:ay_ham}
\hat{H} = \frac12 \int \frac{d\boldsymbol{k}}{(2\pi)^n} \hbar \omega_{\boldsymbol{k}} ( \hat{a}_{\boldsymbol{k}}^\dagger \hat{a}_{\boldsymbol{k}} + \hat{a}_{\boldsymbol{k}} \hat{a}_{\boldsymbol{k}}^\dagger ) = \frac12 \int d\boldsymbol{y}' d\boldsymbol{y} f(\boldsymbol{y}'-\boldsymbol{y}) ( \hat{a}_{\boldsymbol{y}'}^\dagger \hat{a}_{\boldsymbol{y}} + \hat{a}_{\boldsymbol{y}} \hat{a}_{\boldsymbol{y}'}^\dagger ),
\end{equation}
where $f(\boldsymbol{y}'-\boldsymbol{y}) := \int \frac{d\boldsymbol{k}}{(2\pi)^n} \hbar \omega_{\boldsymbol{k}} e^{i \boldsymbol{k} \cdot ( \boldsymbol{y}' - \boldsymbol{y} ) }$ is a non-local integral kernel.
Comparing \eqref{eq:NR_ham} with \eqref{eq:ay_ham}, we see that the non-relativistic approximation seems to remove the non-locality associated with the integral kernel $f$, and we obtain a Hamiltonian which appears local in $\boldsymbol{y}$ space.
However, this observation is somewhat deceptive, since the cutoff $|\boldsymbol{k}|<\Lambda$ introduces non-locality in the functions over $\boldsymbol{y}$.
Since the $N$-particle subspace of $\mathcal{F}[B(\Lambda)]$ is preserved in time, the time-dependence of the wavepacket states \eqref{eq:NR_wvfn} can be absorbed into the smearing function which encodes the coefficients of the state in this subspace, i.e.,
\begin{equation}
\ket{\Psi(t)} := \frac{1}{\sqrt{N!}} \int d\boldsymbol{y}_1 \cdots d\boldsymbol{y}_N \hspace{1mm} \Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N; t) \hspace{1mm} \hat{a}_{\boldsymbol{y}_1}^\dagger \cdots \hat{a}_{\boldsymbol{y}_N}^\dagger \ket{0}_G.
\end{equation}
One can then translate the abstract time evolution of this state under the Hamiltonian \eqref{eq:NR_ham}, $i \hbar \frac{d}{dt} \ket{\Psi(t)} = \hat{H} \ket{\Psi(t)}$, into a differential equation for the smearing function $\Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N; t)$.
It is easy to see that the smearing function indeed satisfies the quantum mechanical Schr\"odinger equation,
\begin{equation}
i \hbar \partial_t \Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N; t) = \left( E_0 + mc^2 N - \frac{\hbar^2}{2m} \sum_{i=1}^N \boldsymbol{\nabla}_i^2 \right) \Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N; t),
\end{equation}
where $E_0$ is the zero-point energy.
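To make the step explicit in the single-particle case (a formal check of our own, suppressing the zero-point contribution and assuming $[\hat{a}_{\boldsymbol{y}}, \hat{a}_{\boldsymbol{y}'}^\dagger] = F_+^\Lambda(\boldsymbol{y}-\boldsymbol{y}')$, which acts as the identity on bandlimited functions): acting with \eqref{eq:NR_ham} on $\ket{\Psi(t)}$ for $N=1$ gives
\begin{equation}
( \hat{H} - E_0 ) \ket{\Psi(t)} = \int d\boldsymbol{y} \left[ \left( mc^2 - \frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \right) \Psi(\boldsymbol{y};t) \right] \hat{a}_{\boldsymbol{y}}^\dagger \ket{0}_G,
\end{equation}
and matching coefficients against $i\hbar \frac{d}{dt} \ket{\Psi(t)} = \int d\boldsymbol{y} \hspace{0.5mm} [ i\hbar \partial_t \Psi(\boldsymbol{y};t) ] \hat{a}_{\boldsymbol{y}}^\dagger \ket{0}_G$ reproduces the $N=1$ case of the Schr\"odinger equation above.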
Note that we make contact between QFT and NRQM by associating the smearing function in the $N$-particle subspace with the non-relativistic wavefunction, and not the field operator as would be suggested by the archaic use of the term `second quantization'.
Indeed, this is the only manner in which one could arrive at a multi-particle Schr\"odinger equation.
However, NRQM exhibits more structure than simply the Schr\"odinger equation; we need to further justify how the smearing function imitates the familiar NRQM wavefunction under this approximation.
For instance, the wavefunction $\Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N)$ would typically represent the probability amplitude for measuring $N$ particles at the points $\boldsymbol{y}_1, \dots, \boldsymbol{y}_N$.
Do the wavefunctions we obtain here have such an interpretation?
Does the label $\boldsymbol{y}_i$ correspond to an eigenvalue of a position-type operator satisfying the Heisenberg algebra?
Do we recover a Born rule and state-update rule for these wavefunctions?
Ordinarily a quantum mechanical wavefunction is defined over space by $\Psi(\boldsymbol{x}_1, \dots, \boldsymbol{x}_N) := \langle \boldsymbol{x}_1, \dots, \boldsymbol{x}_N | \Psi \rangle$, where $\ket{\boldsymbol{x}_1, \dots, \boldsymbol{x}_N}$ is an element of a position basis provided by the position operators, $\boldsymbol{\hat{x}}_1, \dots, \boldsymbol{\hat{x}}_N$, of the $N$ particles.
In order to describe bosonic (or fermionic) particles, this wavefunction must be symmetrized (or antisymmetrized) over the particle labels.
In QFT, we do not naturally have position operators to provide us with such a basis; the arguments, $\boldsymbol{y}_i$, of the smearing function rather label the infinite degrees of freedom of the non-relativistic localization scheme.
For the wavepackets \eqref{eq:NR_wvfn} in the bosonic Klein-Gordon field, the symmetrization of the particle labels occurs automatically due to $[ \hat{a}_{\boldsymbol{y}}^\dagger, \hat{a}_{\boldsymbol{y}'}^\dagger ] = 0$.
However, after this symmetrization, in both the QFT and NRQM cases, one cannot have a physical position operator $\boldsymbol{\hat{x}}_i$ for the $i^\text{th}$ particle as it would not map the symmetric subspace into itself.
Despite this, on physical grounds there should be a means of describing measurements of distance between identical particles.
Although this is clearly an important issue, we will not resolve it here, since the problem is also present in NRQM and not a distinct feature of localizability issues in QFT.
At least, we can define a center-of-mass position operator which is conjugate to the total momentum operator of the QFT, namely
\begin{equation}
\boldsymbol{\hat{X}} := \hat{N}^+ \int d\boldsymbol{y} \hspace{1mm} \boldsymbol{y} \hspace{1mm} \hat{a}_{\boldsymbol{y}}^\dagger \hat{a}_{\boldsymbol{y}},
\end{equation}
where $\hat{N}^+$ is the pseudo-inverse of $\hat{N}$.
The formal eigenstates of this position operator are:
\begin{equation}\label{eq:posn_eig}
\ket{\boldsymbol{y}_1, \dots, \boldsymbol{y}_N} \equiv \frac{1}{\sqrt{N!}} \hspace{1mm} \hat{a}_{\boldsymbol{y}_1}^\dagger \cdots \hat{a}_{\boldsymbol{y}_N}^\dagger \ket{0}_G,
\end{equation}
with eigenvalues $\frac1N \sum_{i=1}^N \boldsymbol{y}_i$.
This operator satisfies the Heisenberg algebra with the total momentum operator:
\begin{align}
\left[ \boldsymbol{\hat{X}}, \boldsymbol{\hat{P}}^T \right] &= \left[ \hat{N}^+ \int d\boldsymbol{y} \hspace{1mm} \boldsymbol{y} \hspace{1mm} \hat{a}_{\boldsymbol{y}}^\dagger \hat{a}_{\boldsymbol{y}} , \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} \hbar \boldsymbol{k}^T \hat{a}_{\boldsymbol{k}}^\dagger \hat{a}_{\boldsymbol{k}} \right] \nonumber \\
&= i \hbar \boldsymbol{\mathds{1}}_n ( \mathds{1}_{\mathcal{F}[B(\Lambda)]} - \ket{0}_G \bra{0}_G ),
\end{align}
where $\boldsymbol{\mathds{1}}_n$ is used to denote the fact that the Heisenberg algebra is satisfied component-wise for each of the $n$ spatial components of these operators, and the projector $( \mathds{1}_{\mathcal{F}[B(\Lambda)]} - \ket{0}_G \bra{0}_G )$ represents the identity on the Fock space, except the zero-particle subspace which is annihilated by both $\boldsymbol{\hat{X}}$ and $\boldsymbol{\hat{P}}$.
Alas, at present it is not clear how to concoct a complete set of position observables for the non-relativistic wavefunctions we obtain from the QFT (except the single-particle subspace where the above center-of-mass position operator suffices).
Nevertheless, we can build sets of measurement operators associated with asking questions of the kind: ``What is the probability of finding one of the $N$ particles in a region $\Delta \subset \mathds{R}^n$?''
Such measurements can serve to approximately characterize the localizability of these states (not fully, since there remains a discrepancy between $\boldsymbol{x}$-space and $\boldsymbol{y}$-space).
The measurement operators can be constructed in the $N$-particle subspace using the formal position eigenvectors \eqref{eq:posn_eig}:
\begin{equation}\label{eq:posn_POVM}
P_\Delta^{(N)} := \int_\Delta d\boldsymbol{y}_1 \int d\boldsymbol{y}_2 \cdots d\boldsymbol{y}_N \hspace{1mm} \ket{\boldsymbol{y}_1, \dots, \boldsymbol{y}_N} \bra{\boldsymbol{y}_1, \dots, \boldsymbol{y}_N}.
\end{equation}
The expectation value of these operators over an $N$-particle state \eqref{eq:NR_wvfn} yields
\begin{equation}
\bra{\Psi} P_\Delta^{(N)} \ket{\Psi} = \int_\Delta d\boldsymbol{y}_1 \int d\boldsymbol{y}_2 \cdots d\boldsymbol{y}_N \hspace{1mm} | \Psi(\boldsymbol{y}_1, \dots, \boldsymbol{y}_N) |^2.
\end{equation}
If one is agnostic as to the total number of particles, then one could use the operator $P_\Delta = \sum_{N=1}^\infty P_\Delta^{(N)}$ defined over the total bandlimited Fock space.
It is straightforward to construct measurement operators for similar types of questions.
Hence we see that these wavefunctions indeed supply the probability amplitudes for measurements of this kind, which aim to localize the particles.
However, interpreting these measurements is not as straightforward as it may seem.
As we mentioned above, the cutoff causes the wavefunctions to exhibit a kind of non-locality, and this gives these measurements some unintuitive features.
First, note that these measurement operators are not projectors, as one can easily verify $( P_\Delta^{(N)} )^2 \neq P_\Delta^{(N)}$.
This is a consequence of the fact that the formal eigenvectors \eqref{eq:posn_eig} of the position operator are not orthogonal because of the cutoff.
For example,
\begin{equation}
\langle \boldsymbol{y}' | \boldsymbol{y} \rangle = \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot (\boldsymbol{y}'-\boldsymbol{y})} \neq \delta(\boldsymbol{y}'-\boldsymbol{y}).
\end{equation}
Hence this collection of formal position eigenvectors forms a frame rather than an orthonormal basis.
Note that the possibility that these formal eigenvectors do not form an orthonormal basis is due to the fact that the position operator we constructed is not essentially self-adjoint\footnote{One can see this by observing that the bandlimitation causes $\boldsymbol{\hat{X}}$ to have a finite minimum uncertainty, hence it cannot have any eigenvectors \cite{Kempf1994a,Kempf-Mangano-Mann1995}. For example, for a single particle in one dimension, the bandlimitation causes a maximum uncertainty $\Delta \hat{P} \leq 2\hbar\Lambda$, thus the uncertainty principle forces $\Delta \hat{X} \geq 1/(4\Lambda)$.} \cite{Kempf1994a,Kempf-Mangano-Mann1995}.
Nevertheless, the operators $P_\Delta^{(N)}$ are clearly positive, hence the measurements we have described are associated with a POVM-type measurement.
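These spectral properties can be exhibited concretely in the single-particle sector. In the sketch below (our own construction; $n=1$, $\Lambda = \pi$ so that the Nyquist spacing is $1$, and $\Delta = [-5,5]$), we represent bandlimited single-particle wavefunctions in the orthonormal basis $\varphi_m(y) = \mathrm{sinc}(y-m)$ (NumPy convention) and build the matrix of $P_\Delta^{(1)}$; numerically it is self-adjoint and positive with spectrum in $[0,1]$, but it is visibly not a projector:

```python
import numpy as np

# Single-particle sector, n = 1, cutoff Lam = pi  =>  Nyquist spacing 1.
# Orthonormal basis of the bandlimited space: phi_m(y) = sinc(y - m).
m = np.arange(-20, 21)
y = np.linspace(-5.0, 5.0, 40001)                  # quadrature grid on Delta
w = np.full_like(y, y[1] - y[0]); w[[0, -1]] /= 2  # trapezoid weights
phi = np.sinc(y[None, :] - m[:, None])             # phi[m, y_grid]

M = (phi * w) @ phi.T        # M_{mm'} = int_Delta phi_m(y) phi_m'(y) dy
evals = np.linalg.eigvalsh(M)

assert np.allclose(M, M.T)                 # self-adjoint
assert evals.min() > -1e-8                 # positive, ...
assert evals.max() < 1 + 1e-4              # ... with spectrum in [0, 1]
assert np.linalg.norm(M @ M - M) > 1e-3    # but (P_Delta)^2 != P_Delta
```

The eigenvalues cluster near $0$ and $1$ with a transition region in between, which is the familiar prolate-spheroidal structure of combined time- and band-limiting operators.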
The fact that these measurement operators are elements of a POVM, and not projectors, prevents us from writing a state-update rule, because we do not have the Kraus operators associated with the measurement.
Generally, the measurement theory which corresponds to ``observables'' of this kind is unclear.
Note that such objects do not only appear in such esoteric contexts; indeed, one also arrives in a similar situation when dealing with momentum and Hamiltonian operators for a particle in an infinite potential well (however, in some cases there are physically-motivated ways to resolve the difficulties \cite{Bonneau-Faraut-Valent2001,Fewster1993}).
We leave further investigation into these issues as future work.
In a relativistic setting, a theorem of Hegerfeldt \cite{Hegerfeldt1998a,Hegerfeldt1998b,Hegerfeldt1974} trivializes POVM elements that one might wish to associate with spatial regions, similar to how Malament's theorem \cite{Malament1996} trivializes projectors.
The key relativistic assumption is the requirement that there should be a finite speed of propagation \cite{Hegerfeldt1998a,Hegerfeldt1998b,Hegerfeldt1974}.
This is violated in both cases by energy positivity \cite{Hegerfeldt1998a,Hegerfeldt1998b,Hegerfeldt1974,Halvorson2001}.
The fact that we can construct non-trivial POVM elements \eqref{eq:posn_POVM} in the non-relativistic regime suggests that this requirement from relativity contributes to trivializing the spatial POVMs.
Of course, the localizability expressed in terms of these elements will suffer from superluminal propagation.
In our case, technically this is due to the fact that the ultraviolet cutoff breaks the Lorentz-invariance of the theory.
However, this is not an issue since superluminal propagation is also an aspect of NRQM.
The point is that we recover tools for characterizing localizability of states, despite their `pathological' propagation which can be attributed to the non-relativistic approximation.
\subsection{Local degrees of freedom and sampling theory}\label{subsec:sampling}
An aspect of the recovered theory which is incongruous with the usual structure of NRQM is that the space consists of wavefunctions which are bandlimited, not the full space $L_2(\mathds{R}^n)$.
Of course this makes sense because physically we know NRQM does not apply at large velocities, so it should disagree with QFT in this regime.
However, localizability of bandlimited functions is a delicate issue; this property has important implications regarding the arrangement of the spatial degrees of freedom in the field theory.
These implications will be particularly important for the discussion in Section~\ref{sec:entanglement}.
In this subsection, we will describe a framework that can be used to interpret these features.
Initially we had labeled points in space by $\boldsymbol{x}$, and the corresponding degrees of freedom through $\hat{\Phi}(\boldsymbol{x})$ and $\hat{\Pi}(\boldsymbol{x})$.
We then constructed the local Fock space through taking a tensor product over $\boldsymbol{x}$ labeling these degrees of freedom.
The bandlimited Fock space we used for the non-relativistic approximation has a tensor product structure over $\boldsymbol{k}$, i.e., $\mathcal{F}[B(\Lambda)] \cong \otimes_{|\boldsymbol{k}|<\Lambda} L_2(\mathds{C},d\Phi_{\boldsymbol{k}})$.
Is there an analogue of the $\boldsymbol{x}$ tensor product decomposition (even if it is a unitarily-inequivalent representation)?
As we shall see, it is not simply a space of the form $\otimes_{\boldsymbol{x}} \mathcal{H}_{\boldsymbol{x}}$.
This can be deduced from the fact that the operators $\hat{\Phi}(\boldsymbol{x})$ and $\hat{\Pi}(\boldsymbol{x})$ act non-locally after introducing the cutoff, which can be seen from the new commutation relations:
\begin{equation}
[ \hat{\Phi}(\boldsymbol{x}), \hat{\Pi}(\boldsymbol{x}') ] = i \hbar \int_{|\boldsymbol{k}|<\Lambda} \frac{d\boldsymbol{k}}{(2\pi)^n} e^{i \boldsymbol{k} \cdot (\boldsymbol{x}-\boldsymbol{x}')}.
\end{equation}
For example, for $n=1$, this becomes $[ \hat{\Phi}(x), \hat{\Pi}(x') ] = i \hbar \frac{\Lambda}{\pi} \sinc [ \Lambda (x-x') ]$, which decays $\sim 1/|x-x'|$.
Hence, the operators cannot act on a single factor of a Hilbert space of the form $\otimes_{\boldsymbol{x}} \mathcal{H}_{\boldsymbol{x}}$.
In light of this, is it possible to find an analogue characterization for the spatial degrees of freedom of the field theory?
For the purpose of clarity in describing the main features of sampling theory, in this subsection we will employ a cutoff on each of the coordinates of the wavevector $\boldsymbol{k}$, rather than the spherically-symmetric cutoff $|\boldsymbol{k}|<\Lambda$.
We will denote this cutoff by $\| \boldsymbol{k} \|_\infty < \Lambda$.
Clearly this will not affect the general features of the expansions under the non-relativistic approximation, as these norms are equivalent ($\| \boldsymbol{k} \|_\infty \leq \| \boldsymbol{k} \|_2 \leq \sqrt{n} \| \boldsymbol{k} \|_\infty$, where $|\boldsymbol{k}| \equiv \| \boldsymbol{k} \|_2$), and the only condition we required is $\Lambda \ll k_c$.
It is still possible to prove statements regarding the sampling theory for the spherically-symmetric case \cite{Landau1967,Jerri1977}, but cutting off each coordinate reduces the problem to a product of one-dimensional cases for which one can write down explicit formulas which will be pedagogically useful and will suffice for the present exposition.
The question of characterizing the spatial degrees of freedom of bandlimited fields was answered by Shannon sampling theory \cite{Shannon1948,Shannon1949,Nyquist1928} for classical bandlimited functions, and can be extended to quantum field theory \cite{Kempf1994b,Kempf1997,Kempf2000,Pye-Donnelly-Kempf2015}.
(\cite{Pye-Donnelly-Kempf2015} also contains a discussion about localizability in bandlimited quantum fields, albeit the context is a model for Planck-scale physics.)
The basic result of sampling theory for a function, $f$, with a cutoff $\| \boldsymbol{k} \|_\infty < \Lambda$, can be summarized by the following reconstruction formula:
\begin{equation}\label{eq:sampling}
f(\boldsymbol{x}) = \sum_{\boldsymbol{m} \in \mathds{Z}^n} K(\boldsymbol{x},\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}) f(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}), \quad \text{with} \quad K(\boldsymbol{x},\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}) := \prod_{i=1}^n \sinc [ \Lambda ( x^i - (x^i)_{m^i}^{(\alpha^i)} ) ]
\end{equation}
where $\boldsymbol{\alpha} \in [0,1)^n$ is an arbitrary parameter, $\{ \boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})} := \pi (\boldsymbol{m}-\boldsymbol{\alpha})/\Lambda \}_{\boldsymbol{m} \in \mathds{Z}^n}$ is a set of \emph{sampling points} (or a \emph{sampling lattice}), and we use the notation $x^i$ to denote the $i^{th}$ component of the vector $\boldsymbol{x}$.
The essence of this formula is that the value of the function at \emph{any} $\boldsymbol{x} \in \mathds{R}^n$ is completely determined by the values of the function on a sampling lattice.
The values of a bandlimited function on a sampling lattice embody the independent spatial degrees of freedom.
Any one of the $\boldsymbol{\alpha}$-parametrized family of sampling lattices will serve for the above reconstruction formula, hence one can choose the values on any of the lattices as the degrees of freedom.
One can think of the above reconstruction formula as stating that the sampling kernels, $K(\boldsymbol{x},\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})})$, centered at the sample points of one of these lattices form an orthonormal basis for the space of bandlimited functions, and that the coefficients $f(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})})$ specify a particular function in this space.
The arbitrary parameter $\boldsymbol{\alpha}$ indicates that there is a parametrized family of bases which are simply translated versions of one another.
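The reconstruction formula can be made concrete with a short numerical sketch (with illustrative values $\Lambda = 1$, a truncation of the lattice sum at $|m| \le 2000$, and an arbitrary bandlimited test function):

```python
import numpy as np

Lam = 1.0                                # bandlimit Λ (illustrative)
sinc = lambda z: np.sinc(z / np.pi)      # unnormalized sinc(z) = sin(z)/z

# a bandlimited test function: any finite combination of shifted sinc's
# has Fourier support inside [-Λ, Λ]
f = lambda x: sinc(Lam * (x - 0.3)) + 0.5 * sinc(Lam * (x + 1.1))

def reconstruct(x, alpha=0.0, M=2000):
    """Shannon reconstruction of f(x) from its samples on the lattice
    x_m = π(m - α)/Λ, truncated to |m| ≤ M."""
    m = np.arange(-M, M + 1)
    xm = np.pi * (m - alpha) / Lam
    return np.sum(sinc(Lam * (x - xm)) * f(xm))

# the value at *any* x is recovered from the lattice samples alone,
# for any lattice of the α-parametrized family
for x in (0.0, 0.77, -2.4):
    for alpha in (0.0, 0.37):
        assert abs(reconstruct(x, alpha) - f(x)) < 1e-3
```

The small tolerance accounts only for the truncation of the infinite sum; the untruncated series converges exactly.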
It is straightforward to show that the above reconstruction formula extends to bandlimited quantum fields \cite{Pye-Donnelly-Kempf2015}: $\hat{\Phi}(\boldsymbol{x}) = \sum_{\boldsymbol{m} \in \mathds{Z}^n} K(\boldsymbol{x},\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}) \hat{\Phi}(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})})$.
Supporting the claim that the function values on the sampling lattice exhibit the independent spatial degrees of freedom of the field theory, we note that these operators satisfy canonical commutation relations (up to a multiplicative factor) between points on the same sampling lattice (i.e., same $\boldsymbol{\alpha}$): $[ \hat{\Phi}(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}), \hat{\Pi}(\boldsymbol{x}_{\boldsymbol{m}'}^{(\boldsymbol{\alpha})}) ] = i \hbar \left( \frac{\Lambda}{\pi} \right)^n \delta_{\boldsymbol{m} \boldsymbol{m}'}$.
Hence, the analogue of the local Hilbert space for the spatial degrees of freedom in this bandlimited theory is: $\mathcal{F}_L^\Lambda := \otimes_{\boldsymbol{m} \in \mathds{Z}^n} L_2(\mathds{R},d\Phi(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})})) \cong \mathcal{F}[\ell_2^n]$, i.e., the local tensor product structure is over lattice points.
We also note that one obtains a different tensor product structure for each of the sampling lattices.
However, one can easily use the reconstruction formula to show that the Bogoliubov transformation between different lattices contains no mixing between the annihilation and creation operators:
\begin{equation}
\hat{b}_{\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}} = \sum_{\boldsymbol{m}' \in \mathds{Z}^n} K(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}, \boldsymbol{x}_{\boldsymbol{m}'}^{(\boldsymbol{\alpha}')}) \hat{b}_{\boldsymbol{x}_{\boldsymbol{m}'}^{(\boldsymbol{\alpha}')}},
\end{equation}
where $\hat{b}_{\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}} := \sqrt{\frac{m}{2\hbar^2}} \hat{\Phi}(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}) + \frac{i}{\sqrt{2m}} \hat{\Pi}(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})})$.
Hence regardless of the lattice which is chosen, one arrives at the same Fock space.
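The passive (number-conserving) character of this change of lattice can be illustrated numerically: truncating the lattice sum at $|m'| \le 4000$ (an illustrative cutoff), the change-of-lattice matrix $K(\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}, \boldsymbol{x}_{\boldsymbol{m}'}^{(\boldsymbol{\alpha}')})$ has orthonormal rows, as required for it to map one set of canonically normalized $\hat{b}$ operators to another:

```python
import numpy as np

Lam = 1.0
sinc = lambda z: np.sinc(z / np.pi)          # sin(z)/z
x = lambda m, a: np.pi * (m - a) / Lam       # lattice points x_m^(α)

a, ap = 0.0, 0.37                            # two lattices of the α-family
mp = np.arange(-4000, 4001)                  # truncated index m' (illustrative)

def overlap(m1, m2):
    """Σ_{m'} K(x_{m1}^{(α)}, x_{m'}^{(α')}) K(x_{m2}^{(α)}, x_{m'}^{(α')})."""
    K1 = sinc(Lam * (x(m1, a) - x(mp, ap)))
    K2 = sinc(Lam * (x(m2, a) - x(mp, ap)))
    return float(np.dot(K1, K2))

# rows are unit norm and mutually orthogonal (up to truncation error),
# so canonical commutation relations are preserved on the new lattice
assert abs(overlap(0, 0) - 1.0) < 1e-3
assert abs(overlap(0, 3)) < 1e-3
```

This is the discrete counterpart of the reproducing property of the kernel $K$ on the space of bandlimited functions.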
How does this `local' Fock space associated with the sample points compare to the bandlimited Fock space, $\mathcal{F}[B(\Lambda)]$, in which we constructed the $N$-particle spaces of NRQM?
We have already demonstrated in Subsection~\ref{subsec:two_schemes_NR} that they are not the same.
In Appendix~\ref{apdx:bbv}, we show that they remain unitarily inequivalent (due to an infrared divergence), as in the case of the full local and global Fock spaces.
We also note that the non-relativistic operators also exhibit the bandlimitation property.
For example, one can similarly write
\begin{equation}
\hat{a}_{\boldsymbol{y}} = \sum_{\boldsymbol{p} \in \mathds{Z}^n} K(\boldsymbol{y},\boldsymbol{y}_{\boldsymbol{p}}^{(\boldsymbol{\beta})}) \hat{a}_{\boldsymbol{y}_{\boldsymbol{p}}^{(\boldsymbol{\beta})}}.
\end{equation}
Of course, because of the remaining discrepancy between the $\boldsymbol{x}$ and $\boldsymbol{y}$ labels, the sampling lattices in these two spaces are different.
That is, one can show the non-local kernel $F_-^\Lambda$ in \eqref{eq:yx_bbv_NR} is such that $F_-^\Lambda(\boldsymbol{y}_{\boldsymbol{p}}^{(\boldsymbol{\beta})}-\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}) \not\sim \delta_{\boldsymbol{p},\boldsymbol{m}}$ for any $\boldsymbol{y}$-sample point $\boldsymbol{y}_{\boldsymbol{p}}^{(\boldsymbol{\beta})}$ and any $\boldsymbol{x}$-sample point $\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}$.
Therefore, we see that after the bandlimitation introduced in order to enact the non-relativistic approximation, the resulting tensor product structures akin to the original $\boldsymbol{x}$ and $\boldsymbol{y}$ tensor product structures are those taken over lattice points.
The degrees of freedom of the field theory after this approximation can be identified with the field values at the sample points in $\boldsymbol{x}$-space.
Similarly, the wavefunctions of the NRQM we obtain in $\boldsymbol{y}$-space also exhibit the sampling property in each coordinate, hence these wavefunctions are determined by their values on a sampling lattice.
Thus, the representation of the wavefunction over all of $\boldsymbol{y}$-space is a redundant description.
This redundancy is related to the fact that the formal position eigenvectors, $\ket{\boldsymbol{y}_1, \dots, \boldsymbol{y}_N}$, form a frame rather than an orthonormal basis.
In fact, this collection of states can be seen as a union of orthonormal bases associated with the independent points of the sampling lattices \cite{Pye-Donnelly-Kempf2015}.
\section{Is the Unruh effect relativistic?}\label{sec:entanglement}
In the previous sections, we showed that one could define bandlimited versions of the local annihilation and creation operators acting on the bandlimited subspace of the global Fock space.
Although the corresponding number operator does not provide a counting of the particles associated with this Fock space, one can still ask about the fate of the Reeh-Schlieder theorem under the non-relativistic approximation.
We do not aim here to construct full analogues of the assumptions which lead to the Reeh-Schlieder theorem in the non-relativistic regime, but rather use the relation with vacuum entanglement \cite{Redhead1995} and investigate this instead.
One could also ask, out of independent interest, whether vacuum entanglement is a feature of the relativistic regime of the field theory.
\updated{As we said in the introduction, we are not implying that ground state entanglement only occurs in relativistic systems, but rather we are asking whether it occurs only in the relativistic regime of a relativistic QFT.}
Intuitively, ground state entanglement is caused by coupling between degrees of freedom in the Hamiltonian of a theory.
However, not all couplings lead to entanglement; indeed, we saw above that the non-relativistic field operators do not exhibit ground state entanglement, yet they are coupled through derivative terms in the Hamiltonian.
Ground state entanglement is exhibited by systems which are \emph{frustrated}, i.e., whose free and interaction terms\footnote{In our case, by \emph{interaction} term we mean the term of the Hamiltonian which couples the local oscillators, i.e., the spatial derivative term.} of the Hamiltonian do not commute.
In some instances, one can concretely relate measures of frustration with bounds on ground state entanglement \cite{Dawson-Nielsen2004}.
It is known that the local degrees of freedom of a Klein-Gordon field are entangled in the ground state, but it is not intuitively obvious whether the frustration of the couplings is a relativistic effect.
Ground state entanglement in quantum field theory is neatly displayed by the Unruh temperature, $k_B T_U = \hbar a / 2\pi c$.
There are two distinct ways of thinking about the physics behind this temperature: one in terms of the stimulation of a process involving a counter-rotating wave type interaction term between an accelerated detector and the field \cite{Unruh1976} (which can be extended to more general trajectories), and the other in terms of the thermality of the reduced state on a half-space caused by entanglement \cite{Wald1994,Sorkin1983,Bombelli-Koul-Lee-Sorkin1986} (which can be extended to reduced states on more general local subsystems).
Here we are interested in the latter.
A naive inspection of the Unruh temperature suggests it should vanish in the limit $c \to \infty$, hence one would be tempted to conclude that the effect is relativistic.
(Note this is similar to the manner in which one describes the Hawking temperature, $k_B T_H = \hbar c^3 / 8\pi GM$, as exhibiting features of quantum theory, relativity, and gravity, due to the presence of $\hbar$, $c$, and $G$.)
However, as we discussed previously, the non-relativistic regime does not correspond to a limit, but rather to an approximation to second order in $\Lambda/k_c$, after introducing a cutoff $\Lambda$.
Here we will examine more carefully the requirement of relativity for the presence of the Unruh effect in \updated{relativistic} quantum field theory.
\updated{
An important ingredient in the standard derivation of the Unruh effect (viewed as arising from entanglement) is the presence of a horizon for a uniformly accelerated observer, which naturally splits the field into two (entangled) subsystems.
Of course, without relativity there can be no horizons present.
However, if there were some other circumstance (within the non-relativistic regime) in which an observer would aptly trace out some local subsystem, then such an observer would perceive an Unruh-like effect if the local subsystems remain entangled in this regime.
For example, in the relativistic setting, this occurs in situations where an observer couples locally to the field for a finite period of time.
Indeed, there are many works which refer to such effects as the ``Unruh effect'' \cite{martinetti2003diamond,crispino2008unruh}.
It is this generalized sense in which we are using the term throughout the text.
Primarily this literature concerns itself with the responses of detectors coupled to the field.
Here we are investigating, in the non-relativistic regime, whether a contribution to the response of such a detector would arise from the thermality of local subsystems in the field due to the fact that they are entangled.
As we are interested in isolating the contribution of the response due to entanglement, we will simply determine whether the local subsystems of the field are entangled in this regime.
}
\subsection{Local degrees of freedom remain entangled}
Here we are identifying the Unruh effect with entanglement between local degrees of freedom.
\updated{We identify the points on a sampling lattice as the local degrees of freedom after the ``coarse-graining'' induced by the cutoff for the non-relativistic limit.
Therefore, in this context, the Unruh effect is related to entanglement between the field at these sample points.
(See also \cite{cacciatori2009renormalized} for another perspective, in which the non-relativistic degrees of freedom identified here were taken to be the appropriate coarse-grained degrees of freedom for the purposes of thermodynamics.
There, the entropy of a region of space, delineated by the non-relativistic localization scheme, was calculated for a global thermal state.)
}
First, we will demonstrate that the sample points are indeed entangled in the non-relativistic regime.
We begin with the Bogoliubov transformation,
\begin{equation}
\hat{a}_{\boldsymbol{k}} = \left( \frac{\pi}{\Lambda} \right)^{n/2} \sum_{\boldsymbol{m} \in \mathds{Z}^n} e^{-i \boldsymbol{k} \cdot \boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}} \left[ \hat{b}_{\boldsymbol{m}} + \frac14 (\boldsymbol{k}/k_c)^2 \hat{b}_{\boldsymbol{m}}^\dagger \right],
\end{equation}
where for convenience we will fix $\boldsymbol{\alpha}$ and write $\hat{b}_{\boldsymbol{m}} \equiv \left( \frac{\pi}{\Lambda} \right)^{n/2} \hat{b}_{\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}}$.
The factor $\left( \frac{\pi}{\Lambda} \right)^{n/2}$ is included in the definition of $\hat{b}_{\boldsymbol{m}}$ so that the commutation relations are properly normalized: $[ \hat{b}_{\boldsymbol{m}}, \hat{b}_{\boldsymbol{m}'}^\dagger ] = \delta_{\boldsymbol{m} \boldsymbol{m}'}$.
The above expression can be obtained by inverting the Bogoliubov transformation \eqref{eq:bx_NR} and using the sampling formula \eqref{eq:sampling}.
Generally, given a Bogoliubov transformation of the form,
\begin{equation}
\hat{a}_i = \sum_j ( \alpha_{ij} \hat{b}_j + \beta_{ij} \hat{b}_j^\dagger ),
\end{equation}
we can relate the two vacua by a unitary,
\begin{equation}
\ket{0}_a = N e^{-\frac12 \sum_{j,k} (\alpha^{-1} \beta)_{jk} \hat{b}_j^\dagger \hat{b}_k^\dagger} \ket{0}_b,
\end{equation}
provided the two spaces are unitarily equivalent, and the operator $(\alpha^{-1} \beta)$ exists and is symmetric.
(One can easily show that the latter two conditions always hold for a valid Bogoliubov transformation.)
The factor $N$ is simply a normalization constant.
In our case, we have
\begin{equation}
\alpha_{\boldsymbol{k} \boldsymbol{m}} = \left( \frac{\pi}{\Lambda} \right)^{n/2} e^{-i \boldsymbol{k} \cdot \boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}},
\qquad \beta_{\boldsymbol{k} \boldsymbol{m}} = \left( \frac{\pi}{\Lambda} \right)^{n/2} e^{-i \boldsymbol{k} \cdot \boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})}} \frac14 (\boldsymbol{k}/k_c)^2,
\end{equation}
and
\begin{equation}
(\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}'} = \frac14 \int_{\| \boldsymbol{k} \|_\infty < \Lambda} \frac{d\boldsymbol{k}}{(2\Lambda)^n} e^{i \boldsymbol{k} \cdot (\boldsymbol{x}_{\boldsymbol{m}}^{(\boldsymbol{\alpha})} - \boldsymbol{x}_{\boldsymbol{m}'}^{(\boldsymbol{\alpha})})} (\boldsymbol{k}/k_c)^2,
\end{equation}
which is indeed symmetric.
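In $n = 1$ these matrix elements reduce to elementary integrals; the following sketch (with illustrative values $\Lambda = 1$, $k_c = 20$, so that $\Lambda \ll k_c$) evaluates them numerically, and in particular confirms that the off-diagonal entries are non-vanishing:

```python
import numpy as np

Lam, kc = 1.0, 20.0                  # illustrative values with Λ ≪ k_c
eps = Lam / kc
k = np.linspace(-Lam, Lam, 400001)   # grid for trapezoid-rule integration

def ab(dm):
    """(α⁻¹β)_{mm'} in n = 1 as a function of Δm = m - m'."""
    # real part of (1/4) ∫ dk/(2Λ) exp(i k πΔm/Λ) (k/k_c)²
    y = np.cos(k * np.pi * dm / Lam) * (k / kc) ** 2 / (8 * Lam)
    return (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * (k[1] - k[0])

# elementary integration gives: diagonal (Λ/k_c)²/12,
# off-diagonal (-1)^Δm (Λ/k_c)² / [2 (π Δm)²] ≠ 0
assert abs(ab(0) - eps**2 / 12) < 1e-8
for dm in (1, 2, 5):
    exact = eps**2 * (-1) ** dm / (2 * (np.pi * dm) ** 2)
    assert abs(ab(dm) - exact) < 1e-8
```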
However, as we mentioned above, the bandlimited local and global Fock space representations are not unitarily equivalent.
Nevertheless, we will proceed with the following formal manipulations to suggest that the local degrees of freedom remain entangled, and then investigate this more carefully in Subsections~\ref{subsec:osc_temp} and \ref{subsec:osc_logneg}.
We will denote the global vacuum, $\ket{0}_G^\Lambda := \otimes_{\| \boldsymbol{k} \|_\infty < \Lambda} \ket{0}_{\boldsymbol{k}}$, and the local sampling vacuum, $\ket{0}_L^\Lambda := \otimes_{\boldsymbol{m} \in \mathds{Z}^n} \ket{0}_{\boldsymbol{m}}$.
Therefore, to second order in $\| \boldsymbol{k} \|_\infty / k_c$, we can write
\begin{align}
\ket{0}_G^\Lambda &= N \exp \left[ -\frac12 \sum_{\boldsymbol{m}, \boldsymbol{m}' \in \mathds{Z}^n} (\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}'} \hat{b}_{\boldsymbol{m}}^\dagger \hat{b}_{\boldsymbol{m}'}^\dagger \right] \ket{0}_L^\Lambda \nonumber \\
&\sim \ket{0}_L^\Lambda - \frac12 \sum_{\boldsymbol{m}, \boldsymbol{m}' \in \mathds{Z}^n} (\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}'} \hat{b}_{\boldsymbol{m}}^\dagger \hat{b}_{\boldsymbol{m}'}^\dagger \ket{0}_L^\Lambda.
\end{align}
Note that to second order, this state is a combination of the vacuum and two-particle states in the local Fock space, with the terms at higher order lying in higher particle number subspaces.
If we write the states explicitly in a tensor decomposition over lattice points, we get
\begin{equation}
\ket{0}_G^\Lambda = \ket{0 0 \cdots} - \frac12 \sum_{\boldsymbol{m} \neq \boldsymbol{m}'} (\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}'} \ket{0 \cdots 0 1_{\boldsymbol{m}} 0 \cdots 0 1_{\boldsymbol{m}'} 0 \cdots} - \frac{1}{\sqrt{2}} \sum_{\boldsymbol{m}} (\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}} \ket{0 \cdots 0 2_{\boldsymbol{m}} 0 \cdots}.
\end{equation}
Such a state is always entangled if an off-diagonal entry of $(\alpha^{-1} \beta)$ is non-zero (as it is in our case, which can be easily verified).
This is because if one tried to write it as a state of the form,
\begin{equation}
\ket{\psi} := \otimes_{\boldsymbol{m}} ( A_{\boldsymbol{m}} \ket{0}_{\boldsymbol{m}} + B_{\boldsymbol{m}} \ket{1}_{\boldsymbol{m}} + C_{\boldsymbol{m}} \ket{2}_{\boldsymbol{m}} ),
\end{equation}
then one arrives at a contradiction in attempting to match coefficients.
The coefficient of $\ket{0 0 \cdots}$ in $\ket{\psi}$ is $\prod_{\boldsymbol{m}} A_{\boldsymbol{m}}$ which must be 1 to second order, which means that (up to global phases) we must have $A_{\boldsymbol{m}} = 1$ for all $\boldsymbol{m}$.
Next we have the single-particle components of $\ket{\psi}$, which have coefficients $B_{\boldsymbol{m}}$ for $\ket{0 \cdots 0 1_{\boldsymbol{m}} 0 \cdots}$ and must all vanish.
Hence we require $B_{\boldsymbol{m}} = 0$ for all $\boldsymbol{m}$.
Now we have a contradiction, since if a particular off-diagonal component $(\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}'} \neq 0$ (with $\boldsymbol{m} \neq \boldsymbol{m}'$), then we require the coefficient of $\ket{0 \cdots 0 1_{\boldsymbol{m}} 0 \cdots 0 1_{\boldsymbol{m}'} 0 \cdots}$ to be both $B_{\boldsymbol{m}} B_{\boldsymbol{m}'} = 0$ as well as $-\frac12 (\alpha^{-1} \beta)_{\boldsymbol{m} \boldsymbol{m}'} \neq 0$ to second order.
Therefore, $\ket{0}_G^\Lambda$ cannot be written as a product state over the $\boldsymbol{x}$-space sample point decomposition.
Hence we have formally established that there is entanglement between local degrees of freedom in the ground state of the field in the non-relativistic regime.
Now we investigate whether the entanglement manifests itself in certain entanglement measures for some simple situations where one can work out explicit expressions.
These will also serve to justify the conclusion of the above formal manipulations.
For the following computations, we will make use of the Gaussian state formalism, the relevant aspects of which we summarize in Appendix~\ref{apdx:gaussian}.
\subsection{Temperature of a single oscillator}\label{subsec:osc_temp}
First, we examine the reduced state of a single smeared field observable, with smearing function $f$.
We choose $f$ to be a general normalized ($\|f\|_2 = 1$) bandlimited smearing function.
Thus the subsystem is a single local oscillator generated by $\hat{\Phi}[f] := \int d\boldsymbol{x} f(\boldsymbol{x}) \hat{\Phi}(\boldsymbol{x})$ and $\hat{\Pi}[f] := \int d\boldsymbol{x} f(\boldsymbol{x}) \hat{\Pi}(\boldsymbol{x})$.
Here we will enforce the bandlimit via the smearing function.
For this and the following subsection, one could use either a spherically-symmetric cutoff, $\| \boldsymbol{k} \|_2 < \Lambda$, or a coordinate cutoff, $\| \boldsymbol{k} \|_\infty < \Lambda$.
The choice will only affect numerical prefactors.
Even though we made use of the latter cutoff in Subsection~\ref{subsec:sampling} to establish the form of the local degrees of freedom of the field in terms of sample points, if one restricts attention to finitely many sample points then the entanglement calculations can be performed in either case.
For a discussion of a similar situation, see \cite{Pye-Donnelly-Kempf2015}.
The reduced state on the subsystem defined by $f$ has a single symplectic eigenvalue,
\begin{equation}
\nu = \frac{1}{\hbar} \Delta \Phi_f \Delta \Pi_f,
\end{equation}
where
\begin{align}
\Delta \Phi_f^2 &:= \langle \hat{\Phi}[f]^2 \rangle = \int \frac{d\boldsymbol{k}}{(2\pi)^n} \frac{\hbar c^2}{2\omega_{\boldsymbol{k}}} | \tilde{f}(\boldsymbol{k}) |^2, \\
\Delta \Pi_f^2 &:= \langle \hat{\Pi}[f]^2 \rangle = \int \frac{d\boldsymbol{k}}{(2\pi)^n} \frac{\hbar \omega_{\boldsymbol{k}}}{2c^2} | \tilde{f}(\boldsymbol{k}) |^2,
\end{align}
and where $\tilde{f}(\boldsymbol{k})$ is the Fourier transform of $f(\boldsymbol{x})$.
If $f$ has a bandlimit $\Lambda$, then it is straightforward to show that
\begin{equation}
\nu = \frac12 + C (\Lambda/k_c)^4 + \mathcal{O}( (\Lambda/k_c)^6 ),
\end{equation}
where $C := \tfrac{1}{16} ( \tilde{f}_4 - \tilde{f}_2^2 )$ is a constant, and where
\begin{equation}
\tilde{f}_p := \int \frac{d\boldsymbol{k}}{(2\pi)^n} ( \boldsymbol{k} / \Lambda )^p | \tilde{f}(\boldsymbol{k}) |^2.
\end{equation}
Note that generically we can choose $f$ such that $\tilde{f}_4 \neq \tilde{f}_2^2$, and hence $C \neq 0$.
For example, in the case of $n=1$, one can choose $f(x) = \sqrt{\frac{\Lambda}{\pi}} \sinc [ \Lambda (x-x_0) ]$ (i.e., the smearing function for a sample point centered at $x_0$), in which case $C = 1/180$.
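This value of $C$ can be checked numerically: in units $\hbar = c = k_c = 1$ the $k$-integrals for the sinc smearing have elementary closed forms in terms of $\operatorname{arcsinh}$, and the following sketch (with an illustrative $\varepsilon = \Lambda/k_c = 0.02$) recovers $C = 1/180$:

```python
import numpy as np

def nu(eps):
    """Symplectic eigenvalue ν = ΔΦ_f ΔΠ_f / ħ for the n = 1 smearing
    f(x) = √(Λ/π) sinc(Λ(x - x₀)), in units ħ = c = k_c = 1, ε = Λ/k_c.
    The k-integrals reduce to elementary closed forms (arcsinh)."""
    dPhi2 = np.arcsinh(eps) / (2 * eps)
    dPi2 = (np.sqrt(1 + eps**2) + np.arcsinh(eps) / eps) / 4
    return np.sqrt(dPhi2 * dPi2)

eps = 0.02
C = (nu(eps) - 0.5) / eps**4
assert abs(C - 1 / 180) < 1e-4   # matches C = (f̃₄ - f̃₂²)/16 = 1/180
assert nu(eps) > 0.5             # uncertainty product exceeds vacuum value
```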
One can then work out that the von Neumann entropy (corresponding to the entanglement entropy) of the reduced state on this single oscillator is, to lowest order,
\begin{equation}
S \sim C \left( 1 - \log [ C (\Lambda/k_c)^4 ] \right) (\Lambda/k_c)^4.
\end{equation}
Note that this decays towards zero faster than $(\Lambda/k_c)^2$ as $\Lambda/k_c \to 0$; hence we consider this contribution to be in the relativistic regime (the non-relativistic regime comprising contributions that decay quadratically or slower as $\Lambda/k_c \to 0$).
What can we conclude about the entanglement of a single oscillator?
On one hand, we showed that in the non-relativistic regime the ground state of the field is not fully separable over local degrees of freedom, yet here we see that nevertheless the entanglement entropy between one sample point and the rest of the system is zero in this regime.
However, this is simply an artifact of choosing the entropy to quantify entanglement.
In the following subsection we will calculate the logarithmic negativity between two sample points and show that it is non-zero, hence we can conclude that there is still entanglement in this regime.
However, for a single sample point we can still conclude that the state retains a non-zero temperature.
In analogy with the Unruh temperature characteristic of the Unruh effect, we can calculate a temperature for our single sample point using the symplectic eigenvalue we have calculated and identifying the coefficients from the thermal state decomposition in Eq.~\eqref{eq:gaussian_thermal_decomp} of Appendix~\ref{apdx:gaussian}:
\begin{equation}
k_B T = \frac{\hbar \omega_f}{\log \left( \frac{\nu+1/2}{\nu-1/2} \right)}.
\end{equation}
Because the symplectic eigenvalue only determines the Boltzmann factor $e^{-\hbar \omega_f/k_B T}$, one has to choose an appropriate frequency scale, $\omega_f$, to distill a temperature from the Boltzmann factor.
An obvious choice would be to associate the energy scale $\hbar \omega_f$ with that of the Hamiltonian restricted to the oscillator defined by $f$, namely, $H_f = (c^2/2) \Pi[f]^2 + (\omega[f]^2/2c^2) \Phi[f]^2$ where $\omega[f]^2/c^2 := \| \boldsymbol{\nabla} f \|_2^2 + k_c^2$.
However, it is not possible to identify a suitable temperature so that the reduced density matrix of this oscillator takes the form $\frac1Z e^{-\beta \hat{H}_f}$.
Nevertheless, it is possible to define an effective frequency, $\omega_f$, so that the reduced density matrix is a thermal state of an effective Hamiltonian of the form $(c^2/2) \Pi[f]^2 + (\omega_f^2/2c^2) \Phi[f]^2$.
One can identify this frequency by writing the ground state of the full system and directly computing the reduced state onto the subsystem defined by $f$.
This is a straightforward calculation and we will omit it here (for a reference, see \cite{Bombelli-Koul-Lee-Sorkin1986}).
We find that one should choose $\omega_f = c^2 \Delta \Pi_f / \Delta \Phi_f$, although we shall see that any other choice will suffice, provided it agrees with the Compton frequency $\omega_c = mc^2/\hbar$ to lowest order in the expansion in $\Lambda/k_c$.
The above expression for $k_B T$ is not analytic at $\Lambda/k_c = 0$; nevertheless, one can show that it decays slower than any polynomial as $\Lambda/k_c \to 0$, i.e., $\lim_{\Lambda/k_c \to 0} (\Lambda/k_c)^p/(k_B T) = 0$ for all $p>0$ (we write the exponent as $p$ to avoid confusion with the spatial dimension $n$).
Therefore, we can conclude that the temperature of a single local oscillator is non-zero in the non-relativistic regime.
In particular, to leading order,
\begin{equation}
k_B T \sim \frac{mc^2}{ \log [ \left( \Lambda/k_c \right)^{-4} ] },
\end{equation}
where, in standard fashion, we use $f(x) \sim g(x)$ to indicate $\lim_{x \to 0} f(x)/g(x) = 1$.
Note that this leading order term does not depend on the particular choice of the smearing function $f$ (apart from the condition that it is bandlimited), nor the spatial dimension, nor the particular choice of frequency $\omega_f$ (provided it is analytic and equals $\omega_c$ at zeroth order).
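As a numerical check of this leading-order behaviour (same sinc smearing and units as above, with an illustrative $\varepsilon = \Lambda/k_c = 0.05$), the Boltzmann exponent $\hbar\omega_f/k_B T$ is indeed dominated by $\log[(\Lambda/k_c)^{-4}]$, shifted only by the constant $-\log C = \log 180$ at this order:

```python
import numpy as np

def boltzmann_exponent(eps):
    """ħω_f / k_B T = log[(ν + 1/2)/(ν - 1/2)] for the n = 1 sinc smearing,
    in units ħ = c = k_c = 1 with ε = Λ/k_c (closed-form k-integrals)."""
    dPhi2 = np.arcsinh(eps) / (2 * eps)
    dPi2 = (np.sqrt(1 + eps**2) + np.arcsinh(eps) / eps) / 4
    n = np.sqrt(dPhi2 * dPi2)             # symplectic eigenvalue ν
    return np.log((n + 0.5) / (n - 0.5))

eps = 0.05
# since ν - 1/2 ≈ ε⁴/180, the exponent is log(180 ε⁻⁴) + O(ε²); the
# leading log(ε⁻⁴) reproduces k_B T ~ m c² / log[(Λ/k_c)⁻⁴]
assert abs(boltzmann_exponent(eps) - np.log(180 / eps**4)) < 0.01
```

The logarithmic (rather than polynomial) growth of the exponent is what makes the temperature survive into the non-relativistic regime.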
\subsection{Logarithmic negativity between two local oscillators}\label{subsec:osc_logneg}
To demonstrate that there is a non-trivial measure of entanglement in the non-relativistic regime of the field theory, here we will examine the logarithmic negativity between two local degrees of freedom of the field.
These local degrees of freedom will be defined by two smearing functions, $f_1$ and $f_2$, which we will assume are bandlimited and are orthonormal, so that
\begin{equation}
( f_i | f_j ) = \int d\boldsymbol{x} f_i(\boldsymbol{x}) f_j(\boldsymbol{x}) = \int \frac{d\boldsymbol{k}}{(2\pi)^n} \tilde{f}^\ast_i(\boldsymbol{k}) \tilde{f}_j(\boldsymbol{k}) = \delta_{ij}.
\end{equation}
For simplicity, we will also assume that $|\tilde{f}_1(\boldsymbol{k})| = |\tilde{f}_2(\boldsymbol{k})|$, so that the local uncertainties of the oscillators are the same, i.e., $\Delta \Phi_f^2 \equiv \Delta \Phi_{f_1}^2 = \Delta \Phi_{f_2}^2$.
Similarly, we will have $\Delta \Pi_f^2 \equiv \Delta \Pi_{f_1}^2 = \Delta \Pi_{f_2}^2$.
This assumption is made only to simplify the form of the result, and will suffice for the purposes of the current demonstration.
Intuitively this requirement is stating that the two subsystems are simply translated versions of one another; for example, in the bandlimited theory these could be $\tilde{f}_i(\boldsymbol{k}) = \left( \frac{\pi}{\Lambda} \right)^{n/2} e^{-i \boldsymbol{k} \cdot \boldsymbol{x}_i} \chi( \| \boldsymbol{k} \|_\infty < \Lambda)$, where $\boldsymbol{x}_1 - \boldsymbol{x}_2 = \boldsymbol{N}\pi/\Lambda$ and $\boldsymbol{N} \in \mathds{Z}^n$ (i.e., $\boldsymbol{x}_1$ is $\boldsymbol{x}_2$ translated by an integral number of lattice spacings).
The correlation matrix for the Klein-Gordon field theory, restricted to these two oscillators, is:
\begin{equation}
\Sigma =
\begin{bmatrix}
\Delta \Phi_f^2 & 0 & \Phi_{12} & 0 \\
0 & \Delta \Pi_f^2 & 0 & \Pi_{12} \\
\Phi_{12} & 0 & \Delta \Phi_f^2 & 0 \\
0 & \Pi_{12} & 0 & \Delta \Pi_f^2
\end{bmatrix},
\end{equation}
where
\begin{align}
\Delta \Phi_f^2 &= \int \frac{d\boldsymbol{k}}{(2\pi)^n} \frac{\hbar c^2}{2\omega_{\boldsymbol{k}}} | \tilde{f}(\boldsymbol{k}) |^2, \\
\Phi_{12} &:= \int \frac{d\boldsymbol{k}}{(2\pi)^n} \frac{\hbar c^2}{2\omega_{\boldsymbol{k}}} \text{Re}[ \tilde{f}^\ast_1(\boldsymbol{k}) \tilde{f}_2(\boldsymbol{k}) ], \\
\Delta \Pi_f^2 &= \int \frac{d\boldsymbol{k}}{(2\pi)^n} \frac{\hbar \omega_{\boldsymbol{k}}}{2c^2} | \tilde{f}(\boldsymbol{k}) |^2, \\
\Pi_{12} &:= \int \frac{d\boldsymbol{k}}{(2\pi)^n} \frac{\hbar \omega_{\boldsymbol{k}}}{2c^2} \text{Re}[ \tilde{f}^\ast_1(\boldsymbol{k}) \tilde{f}_2(\boldsymbol{k}) ],
\end{align}
and we have denoted $|\tilde{f}(\boldsymbol{k})| \equiv |\tilde{f}_1(\boldsymbol{k})| = |\tilde{f}_2(\boldsymbol{k})|$.
In order to calculate the logarithmic negativity, we need to determine the symplectic eigenvalues of the partially-transposed covariance matrix, which in this case amounts to replacing $\Pi_{12} \mapsto -\Pi_{12}$ in the above matrix.
We find that the two symplectic eigenvalues are
\begin{equation}
\nu_\pm = \frac{1}{\hbar} \sqrt{(\Delta \Phi_f^2 \pm \Phi_{12})(\Delta \Pi_f^2 \pm \Pi_{12})}.
\end{equation}
If we expand this expression in powers of $\Lambda / k_c$, we find,
\begin{equation}
\nu_\pm = \frac12 \mp \frac14 \tilde{F}_{12}^{(2)} (\Lambda/k_c)^2 + \mathcal{O} ( (\Lambda/k_c)^4 ),
\end{equation}
where
\begin{equation}
\tilde{F}_{12}^{(2)} := \int \frac{d\boldsymbol{k}}{(2\pi)^n} (\boldsymbol{k}/\Lambda)^2 \hspace{1mm} \text{Re}[ \tilde{f}^\ast_1(\boldsymbol{k}) \tilde{f}_2(\boldsymbol{k}) ].
\end{equation}
Hence, the logarithmic negativity between these two subsystems is nonzero to second order in $\Lambda/k_c$,
\begin{equation}
E_N \sim \frac12 | \tilde{F}_{12}^{(2)} | (\Lambda/k_c)^2.
\end{equation}
Therefore, we can conclude also from this calculation that these subsystems are entangled.
For example, in $n=1$ dimensions, we can choose smearings $\tilde{f}_i(k) = \sqrt{\frac{\pi}{\Lambda}} e^{-i k x_i} \chi(|k|<\Lambda)$, which give $E_N \sim \frac{1}{N^2 \pi^2} (\Lambda/k_c)^2$, where $N$ is the number of lattice spacings between sample points $x_1$ and $x_2$.
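This closed form can be verified numerically; the following sketch (units $\hbar = c = k_c = 1$, illustrative $\varepsilon = \Lambda/k_c = 0.05$, trapezoid-rule integrals) computes the smallest symplectic eigenvalue of the partially-transposed covariance matrix directly and recovers $E_N \approx \varepsilon^2/(N\pi)^2$ for a few lattice separations:

```python
import numpy as np

eps = 0.05                 # ε = Λ/k_c (illustrative); units ħ = c = k_c = 1
Lam = eps
k = np.linspace(0.0, Lam, 200001)
w = np.sqrt(k**2 + 1.0)    # dispersion ω_k in units of c k_c

def integral(y):
    """Trapezoid rule for ∫_0^Λ y(k) dk on the fixed grid."""
    return (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * (k[1] - k[0])

def log_negativity(N):
    """E_N between two sinc-smeared oscillators N lattice spacings apart."""
    cosN = np.cos(k * N * np.pi / Lam)
    dPhi2 = integral(1 / (2 * w)) / Lam
    dPi2 = integral(w / 2) / Lam
    P12 = integral(cosN / (2 * w)) / Lam     # Φ_12
    Q12 = integral(cosN * w / 2) / Lam       # Π_12
    # smallest symplectic eigenvalue after the partial transpose Π_12 → -Π_12
    nu = min(np.sqrt((dPhi2 + s * P12) * (dPi2 - s * Q12)) for s in (1, -1))
    return max(0.0, -np.log(2 * nu))

for N in (1, 2, 3):
    predicted = eps**2 / (N * np.pi) ** 2
    assert abs(log_negativity(N) - predicted) < 0.05 * predicted
```

The 5\% tolerance absorbs the $\mathcal{O}((\Lambda/k_c)^4)$ corrections neglected in the leading-order formula.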
\section{Conclusion and Outlook}
The main goal of this paper was to elucidate the obstructions to localizability of particle states in quantum field theory by attempting to recover the known localizability properties of wavefunctions in non-relativistic quantum mechanics under a non-relativistic approximation.
We enacted the non-relativistic approximation by identifying a subspace of the global Hilbert space corresponding to states satisfying an ultraviolet cutoff set by the Compton scale.
We showed that one can recover many of the characteristic features of NRQM beyond the Schr\"odinger equation.
However, in this study we have also identified remaining localizability issues within this imitation of NRQM, such as differences between the standard and non-relativistic localization schemes in this regime, the interpretation of measurements associated with localizing the wavefunction, and non-locality due to the bandlimit and sampling properties.
Finally, we showed that the lingering discrepancy between the two localization schemes leads to the survival of ground state entanglement in the non-relativistic approximation \updated{of a Klein-Gordon field}.
Hence, we can conclude that the existence of the Unruh effect does not rely on the presence of relativistic effects \updated{(insofar as it is related to ground state entanglement)}.
Indeed, despite the observation that the Unruh temperature vanishes in the naive limit $c \to \infty$, we demonstrated that the local degrees of freedom exhibit a non-zero temperature in the non-relativistic approximation.
However, the fact that ground state entanglement \updated{of a Klein-Gordon field} is not relativistic could support the use of non-relativistic detector systems to probe this entanglement (see, e.g., \cite{Valentini1991,Reznik2003,PozasKerstjens-MartinMartinez2015}).
That is, the non-relativistic detector systems would not have to enter the relativistic regime in order to access the entanglement.
This may also suggest that the alternative derivation of the Unruh effect using accelerating detectors would survive the non-relativistic approximation.
The remaining localizability issues we have identified warrant further investigation.
For example, it would be interesting to further explore the measurement theory for the operators used to characterize the localizability of the wavefunctions in the non-relativistic approximation.
One task is to find the Kraus operators associated with the POVM, in order to determine the appropriate state-update rule.
Perhaps this can be achieved through a dilation, for example, using the modes above the cutoff.
As we mentioned, these operators also arise in simple quantum-mechanical systems, hence it seems such a clarification would be broadly applicable.
Also, the intrinsic slowly-decaying non-locality of the wavefunctions indicates that the requirement of an ultraviolet cutoff for the non-relativistic approximation may have been too severe.
This requirement was predicated on the assumption that one should recover a Hilbert space for the description of NRQM.
One could consider attempting to relax this requirement, for example, by choosing a space consisting of the (finite) span of a collection of states which decay sufficiently quickly in momentum space (e.g., Gaussian smearing functions).
The issue with such an approach is that one would lose much of the structure of NRQM (and helpful mathematical tools) without the Hilbert space assumption.
For instance, one would retain the superposition principle, but lose much of the functional analytic structure, such as the spectral theorem.
Perhaps this is an appropriate compromise in order to resolve certain localizability issues.
However, to obtain a model with a fixed number of particles, one should ensure that particle creation is prevented despite allowing modes above the cutoff.
We note also that the classical limit of the Klein-Gordon quantum field theory is formally similar to the non-relativistic approximation, as one is also considering a regime where the Compton wavenumber, $k_c = mc/\hbar$, is large compared to some other scale.
This raises the question of whether the classical regime of the Klein-Gordon theory should also be considered a bandlimited theory.
Lastly, given that the local degrees of freedom of the field theory remain entangled in the non-relativistic regime, it is natural to ask about the fate of the Reeh-Schlieder theorem.
The Reeh-Schlieder theorem establishes the cyclicity of the ground state for the local operators confined to any open region of spacetime.
Although this can be linked to ground state entanglement, the presence of entanglement is not sufficient to demonstrate this fact.
Determining whether there is an analogue of the Reeh-Schlieder theorem in this regime may help further elucidate the persisting localizability issues in the non-relativistic regime.
\section*{Acknowledgements}
The authors would like to thank their supervisor Achim Kempf for his support and the very helpful discussions.
MP would like to thank Eduardo Mart\'in-Mart\'inez also for his support and the stimulating discussions.
MP would also like to thank Charis Anastopoulos, Doreen Fraser, and Juan Le\'on for sharing valuable insight.
The authors also thank Jos\'e de Ram\'on for the constructive feedback and the psychological support.
JP acknowledges support from the Natural Sciences and Engineering Research Council (NSERC) of Canada through the Doctoral Canada Graduate Scholarship (CGS) program, as well as from the Ontario Graduate Scholarship (OGS) program.
This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research and Innovation.
\section{Introduction}
Let $G$ be any group. By $\operatorname{Z} (G)$, $\gamma_2(G)$ and $\Phi(G)$, we denote the center, the commutator subgroup and
the Frattini subgroup of $G$ respectively. For $x \in G$, $[x, G]$ denotes the set $\{[x, g] = x^{-1}g^{-1}xg | g \in G\}$ and $x^G$ denotes the conjugacy class of
$x$ in $G$. Notice that $x^G = x[x,G]$ and therefore $|x^G| = |[x,G]|$ for all $x \in G$. For $x \in G$, $\operatorname{C} _{H}(x)$ denotes the
centralizer of $x$ in $H$, where $H$ is a subgroup of $G$. To say that some $H$ is a subgroup (proper subgroup) of $G$ we write
$H \leq G$ ($H < G$). For any group $H$ and an abelian group $K$, $\operatorname{Hom} (H, K)$ denotes the group of all homomorphisms from $H$ to $K$.
For a finite $p$-group $G$, we denote by $\Omega_m(G)$ the subgroup $\gen{x \in G \mid x^{p^m} = 1}$ and by $\Omega^m(G)$ (a non-standard notation) the subgroup $\gen{x^{p^m} \mid x \in G}$, where $p$ is a prime integer and $m$ is a positive integer.
Let $d(G)$ denote the number of elements in any minimal generating set for a finite $p$-group $G$.
An automorphism $\phi$ of a group $G$ is called \emph{central} if
$g^{-1}\phi(g) \in \operatorname{Z} (G)$ for all $g \in G$. The set of all central
automorphisms of $G$, denoted by $\operatorname{Autcent} (G)$, is a normal subgroup
of $\operatorname{Aut} (G)$. Notice that $\operatorname{Autcent} (G) = \operatorname{C} _{\operatorname{Aut} (G)}(\operatorname{Inn} (G))$, where $\operatorname{Inn} (G)$ denotes the group of all inner automorphisms of $G$.
An automorphism $\alpha$ of $G$ is called \emph{class preserving} if
$\alpha(x) \in x^G$ for all $x \in G$. The set of all class preserving
automorphisms of $G$, denoted by $\operatorname{Aut} _{c}(G)$, is a normal subgroup of
$\operatorname{Aut} (G)$. Notice that $\operatorname{Inn} (G)$ is a normal subgroup of $\operatorname{Aut} _c(G)$.
In 1999, A. Mann \cite[Question 10]{aM99} asked the following question: \textit{Do all $p$-groups have automorphisms that are not class preserving? If the answer is no, which are the groups that have only class preserving automorphisms?} The first part of the question has a negative answer: examples of finite $p$-groups $G$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ are already known in the literature. Such groups $G$ having nilpotency class $2$ were constructed by H. Heineken \cite{hH80} in 1980, and those having nilpotency class $3$ were constructed by I. Malinowska \cite{iM92} in 1992. So the second part of the question of Mann becomes relevant. Let us modify the question of Mann to make it more precise in the present scenario. \\
\noindent{\bf Question.} Let $n \ge 4$ be a positive integer and $p$ be a prime number. Does there exist a finite $p$-group $G$ of nilpotency class $n$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$?\\
The second part of Mann's question, which clearly talks about the classification, can be stated as\\
\noindent{\bf Problem (A. Mann).} Study (Classify) finite $p$-groups $G$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$.\\
In this paper we consider this problem for $p$-groups of nilpotency class $2$ and set the ball rolling.
Notice that, for a finite $p$-group of class $2$, we have the following sequence of subgroups
\[\operatorname{Aut} _c(G ) \le \operatorname{Autcent} (G) \le \operatorname{Aut} (G).\]
The groups $G$ such that $\operatorname{Autcent} (G) = \operatorname{Aut} (G)$ have been studied over the years by many mathematicians (see \cite{JY12} and \cite{JRY} for recent developments and further references). So for studying groups $G$ of nilpotency class $2$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$, one needs to concentrate on the groups $G$ satisfying
\begin{hypa}\label{hypa}
$\operatorname{Aut} _c(G) = \operatorname{Autcent} (G)$.
\end{hypa}
Suppose that $\operatorname{Aut} _c(G) = \operatorname{Autcent} (G)$, for some finite group $G$. Then $\operatorname{Inn} (G) \le \operatorname{Autcent} (G)$. It then follows from the definition of $\operatorname{Autcent} (G)$ that $\operatorname{Inn} (G)$ is abelian. Hence the
nilpotency class of $G$ is at most $2$. Since a finite nilpotent group $G$ can be written as a direct product of its Sylow $p$-subgroups, where $p$ is a prime, to study $\operatorname{Autcent} (G)$ it is sufficient to study the group of central
automorphisms of a finite $p$-group for the relevant prime integers $p$. Suppose that $G$ is abelian and satisfies Hypothesis A. Then $\operatorname{Aut} _c(G) = 1$ and $\operatorname{Autcent} (G) =
\operatorname{Aut} (G)$. Thus $\operatorname{Aut} (G) = 1$. But this is possible only when $|G| \le 2$. So from now onwards, we concentrate on finite $p$-groups of class $2$, where $p$ is a prime integer.
Let $G$ be a finite $p$-group of class $2$. Then $G/\operatorname{Z} (G)$ is abelian. Consider the following cyclic decomposition of $G/\operatorname{Z} (G)$.
\[G/\operatorname{Z} (G) = \operatorname{C} _{p^{m_1}} \times \cdots \times \operatorname{C} _{p^{m_d}}\]
such that $m_1 \ge m_2 \ge \cdots \ge m_d \ge 1$, where $\operatorname{C} _{p^{m_{i}}}$ denotes the cyclic group of order $p^{m_i}$ for $1 \le i \le d$. The integers $p^{m_1}, \ldots, p^{m_d}$ are unique for $G/\operatorname{Z} (G)$ and these are called the \emph{invariants} of $G/\operatorname{Z} (G)$. Now we state our first result in the following theorem, which we prove in Section 3 as Theorem \ref{thm1}.
\begin{thma}
Let $G$ be a finite $p$-group of class $2$ and $p^{m_1}, \ldots, p^{m_d}$ be the invariants of $G/\operatorname{Z} (G)$. Then $G$ satisfies Hypothesis A if and only if $\gamma_2(G) = \operatorname{Z} (G)$ and $|\operatorname{Aut} _c(G)| = \Pi_{i=1}^d |\Omega_{m_i}(\gamma_2(G))|$.
\end{thma}
Our next result is the following theorem, which we prove in Section 3 as Theorem \ref{thm2}.
\begin{thmb}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Then $d(G)$ is even.
\end{thmb}
In the last section we concentrate on finite $p$-groups whose automorphisms are all class preserving and prove the following result.
\begin{thmc}
Let $G$ be a non-abelian finite $p$-group such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$, where $p$ is an odd prime. Then the following statements hold true.
\begin{subequations}
\begin{align}
& \text{$\gamma_2(G)$ cannot be cyclic.}\\
&\text{If $\operatorname{Aut} (G)$ is elementary abelian, then $G$ is a Camina special $p$-group.}\\
&\text{If $\operatorname{Aut} (G)$ is abelian, then $d(G)$ is even.}\\
&\text{If $\operatorname{Aut} (G)$ is abelian, then $|G| \ge p^8$ and $|\operatorname{Aut} (G)| \ge p^{12}$.}\\
&\text{With $\operatorname{Aut} (G)$ abelian, $|\operatorname{Aut} (G)| = p^{12}$ if and only if $|G| = p^8$.}\\
&\text{If $\operatorname{Aut} (G)$ is abelian of order $p^{12}$, then $\operatorname{Aut} (G)$ is elementary abelian.}\\
&\text{There exists a group $G$ of order $3^8$ such that $|\operatorname{Aut} (G)| = |\operatorname{Aut} _c(G)| = 3^{12}$.}\label{goodeq7}
\end{align}
\end{subequations}
\end{thmc}
In Section 2, we collect some basic results, which are useful for our work. Some further properties and some examples of finite groups satisfying Hypothesis A are obtained in Section 4.
We conclude this section with some definitions.
A subset $\{y_1, \ldots, y_d\}$ of a finite abelian group $Y$ is said to be a \emph{minimal basis} for $Y$ if
\[Y =\gen{y_1} \times \gen{y_2} \times \ldots \times \gen{y_d} ~\textrm{ and}~ |\gen{y_1}| \ge |\gen{y_2}| \ge \cdots \ge |\gen{y_d}| > 1.\]
A minimal generating set $\{x_1, \ldots, x_d\}$ of a finite $p$-group $G$ of nilpotency class $2$ is said to be \emph{distinguished} if the set $\{{\bar x}_1, \ldots, {\bar x}_d\}$, ${\bar x}_i = x_i\operatorname{Z} (G)$, forms a minimal basis for $G/\operatorname{Z} (G)$.
\section{Some prerequisites and useful lemmas}
Let $G$ be a finite group.
Let $\alpha \in \operatorname{Autcent} (G)$. Then the map $f_{\alpha}$ from $G$ into $\operatorname{Z} (G)$
defined by $f_{\alpha}(x) = x^{-1}\alpha(x)$ is a homomorphism which sends
$\gamma_2(G)$ to $1$. Thus $f_{\alpha}$ induces a homomorphism from
$G/\gamma_2(G)$ into $\operatorname{Z} (G)$. So we get a one-to-one map $\alpha \rightarrow
f_{\alpha}$ from $\operatorname{Autcent} (G)$ into $\operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))$. Conversely,
if $f \in \operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))$, then $\alpha_f$ such that
$\alpha_f(x) = xf({\bar x})$ defines an endomorphism of $G$, where
${\bar x} = x \gamma_2(G)$. But this, in general, may not be an automorphism of $G$; in particular, $\alpha_f$ can fail to be an automorphism when $G$ admits a non-trivial abelian direct factor.
A group $G$ is called \emph{purely non-abelian} if it does not have a non-trivial abelian direct factor.
The following theorem of Adney and Yen \cite{AY65} shows that if $G$ is a purely non-abelian finite group, then the mapping $\alpha \rightarrow
f_{\alpha}$ from $\operatorname{Autcent} (G)$ into $\operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))$, defined above, is also onto.
\begin{thm}[\cite{AY65}, Theorem 1]\label{lemma0}
Let $G$ be a purely non-abelian finite group. Then the correspondence $\alpha
\rightarrow f_{\alpha}$ defined above is a one-to-one mapping of $\operatorname{Autcent} (G)$
onto $\operatorname{Hom} (G/ \gamma_2(G),$ $ \operatorname{Z} (G))$.
\end{thm}
The following lemma follows from \cite[page 141]{AY65}.
\begin{lemma}\label{lemma1}
Let $G$ be a finite $p$-group of class $2$ such that $\operatorname{Z} (G) = \gamma_2(G)$. Then $\operatorname{Autcent} (G)$ is abelian.
\end{lemma}
The following five lemmas, which we may sometimes use without further reference, are well known.
\begin{lemma}\label{lemma5}
Let $\operatorname{C} _n$ and $\operatorname{C} _m$ be two cyclic groups of order $n$ and $m$
respectively. Then $\operatorname{Hom} (\operatorname{C} _n, \operatorname{C} _m) \cong \operatorname{C} _d$, where $d$ is the greatest
common divisor of $n$ and $m$, and $\operatorname{C} _d$ is the cyclic group of order $d$.
\end{lemma}
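Although this lemma is classical, its count is easy to verify mechanically. The following Python sketch (purely illustrative; the function name is ours) counts the homomorphisms by enumerating the possible images of a fixed generator:

```python
from math import gcd

def hom_count_cyclic(n, m):
    """Count homomorphisms C_n -> C_m: such a map is determined by the
    image k of a fixed generator, which must satisfy n*k = 0 (mod m)."""
    return sum(1 for k in range(m) if (n * k) % m == 0)

# Agrees with |C_d|, d = gcd(n, m), on every pair tried:
all_match = all(hom_count_cyclic(n, m) == gcd(n, m)
                for n in range(1, 13) for m in range(1, 13))
```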
\begin{lemma}\label{lemma5a}
Let $\operatorname{A} $, $\operatorname{B} $ and $\operatorname{C} $ be finite abelian groups. Then
{\em (i)} $\operatorname{Hom} (\operatorname{A} \times \operatorname{B} , \operatorname{C} ) \cong \operatorname{Hom} (\operatorname{A} ,\operatorname{C} ) \times \operatorname{Hom} (\operatorname{B} ,\operatorname{C} )$;
{\em (ii)} $\operatorname{Hom} (\operatorname{A} , \operatorname{B} \times \operatorname{C} ) \cong \operatorname{Hom} (\operatorname{A} ,\operatorname{B} ) \times \operatorname{Hom} (\operatorname{A} ,\operatorname{C} )$.
\end{lemma}
\begin{lemma}\label{lemma5b}
Let $\operatorname{A} $, $\operatorname{B} $ and $\operatorname{C} $ be finite abelian groups such that $\operatorname{A} $ and $\operatorname{B} $ are
isomorphic. Then $\operatorname{Hom} (\operatorname{A} , \operatorname{C} ) \cong \operatorname{Hom} (\operatorname{B} ,\operatorname{C} )$.
\end{lemma}
\begin{lemma}\label{lemma5c}
Let $\operatorname{A} $ and $\operatorname{C} $ be finite abelian groups and let $\operatorname{B} $ be a proper subgroup of
$\operatorname{C} $. Then $|\operatorname{Hom} (\operatorname{A} , \operatorname{B} )| \le |\operatorname{Hom} (\operatorname{A} , \operatorname{C} )|$.
\end{lemma}
\begin{lemma}\label{lemma2.7}
Let $\operatorname{C} _{p^m}$ be a cyclic group of order $p^m$ and $\operatorname{B} $ be any finite abelian group. Then $|\operatorname{Hom} (\operatorname{C} _{p^m}, \operatorname{B} )| = |\operatorname{Hom} (\operatorname{C} _{p^m}, \Omega_{m}(\operatorname{B} ))|$.
\end{lemma}
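A quick numerical illustration (Python; the invariants chosen are arbitrary): describing a finite abelian group by the list of orders of its cyclic factors, $|\operatorname{Hom} |$ is the product of pairwise gcds, and cutting $B$ down to $\Omega_m(B)$ leaves the count unchanged, as the lemma asserts.

```python
from math import gcd

def hom_count(A, B):
    """|Hom(A, B)| for finite abelian groups given as lists of the orders
    of their cyclic factors: the product of the pairwise gcds."""
    out = 1
    for a in A:
        for b in B:
            out *= gcd(a, b)
    return out

def omega_invariants(m, p, B):
    """Invariants of Omega_m(B) for an abelian p-group B with the given
    invariants: each cyclic factor is cut down to order at most p^m."""
    return [min(p ** m, b) for b in B]

p, m = 3, 2
B = [p ** 4, p ** 2, p]                 # B = C_81 x C_9 x C_3
lhs = hom_count([p ** m], B)            # |Hom(C_9, B)|
rhs = hom_count([p ** m], omega_invariants(m, p, B))
```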
The following lemma seems to be well known, but we include a proof here because we
could not find a suitable reference for it.
\begin{lemma}\label{lemma5d}
Let $G$ be a finite abelian $p$-group and $M$ be a maximal subgroup of $G$.
Then there exist a subgroup $\operatorname{H} $ of $G$ and a non-negative integer $i$ such that
$G = \operatorname{H} \times \operatorname{C} _{p^{i+1}}$ and $\operatorname{M} = \operatorname{H} \times \operatorname{C} _{p^i}$.
\end{lemma}
\begin{proof}
We prove the lemma by induction on the order of $G$. Notice that $|G| = 1$ is
impossible because $G$ has a proper subgroup $M$. The lemma holds
trivially when $|G| = p$. So we may assume that $|G| > p$, and that the
lemma holds for all groups of strictly smaller order.
Let $q = p^e$ be the exponent of $G$. Notice that $q > 1$
since $G \neq 1$. Any element $y \in G$ with order $q$ lies in a minimal basis
for $G$. So there is some subgroup $I$ of $G$ such that $G = \gen{y} \times I$. Furthermore $|I| < |G|$ since $y$ has order $q >
1$.
Suppose that $M$ contains the above element $y$ of order $q$. Then
$M = \gen{y} \times (M \cap I)$, where $M \cap I$ is a
maximal subgroup of $I$. By induction there exist a subgroup $J < I$ and
an element $x \in I$ such that $I = J \times \left < x \right >$ and $M
\cap I = J \times \gen{ x^p}$. The lemma now holds with this
$x$ and the subgroup $H = J \times \gen{y}$ of $G$.
We have handled every case where $M$ contains some element
$y$ of order $q$. So we may assume from now on that no such $y$ lies in
$M$. Let $x_1, x_2, \dots, x_d$ be a minimal basis for $G$. Then the order of $x_1$ is the exponent $q$ of $G$.
Since $M$ contains no element of order $q$ in $G$, it is contained in the subgroup
\[ K = \gen{x_1^p} \times \gen{x_2} \times \dots \times \gen{ x_d}, \]
which has index $p$ in $G$. Since $M$ is maximal in $G$, it must equal
$K$. Then the lemma holds with $x = x_1$ and $H = \gen{x_2} \times \dots \times \gen{x_d}$. \hfill $\Box$
\end{proof}
The following interesting lemma is from \cite[Lemma D]{CM01}.
\begin{lemma}\label{lemma6}
Let $\operatorname{A} $, $\operatorname{B} $, $\operatorname{C} $ and $\operatorname{D} $ be finite abelian $p$-groups such that $\operatorname{A} $ is isomorphic to a proper subgroup of $\operatorname{B} $ and $\operatorname{C} $ is isomorphic to a proper
subgroup of $\operatorname{D} $. Then $|\operatorname{Hom} (\operatorname{A} , \operatorname{C} )| < |\operatorname{Hom} (\operatorname{B} ,\operatorname{D} )|$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma5c}, it is sufficient to prove the result when $\operatorname{A} $ is
isomorphic to a maximal subgroup $\operatorname{M} $ of $\operatorname{B} $ and $\operatorname{C} $ is isomorphic to a maximal subgroup $\operatorname{N} $ of $\operatorname{D} $.
By Lemma \ref{lemma5d}, we have $\operatorname{M} \cong \operatorname{E} \times \operatorname{C} _{p^i}$
and $\operatorname{B} \cong \operatorname{E} \times \operatorname{C} _{p^{i+1}}$ for some group $\operatorname{E} $ and some non-negative
integer $i$. Also $\operatorname{N} \cong \operatorname{F} \times \operatorname{C} _{p^j}$ and $\operatorname{D} \cong \operatorname{F} \times \operatorname{C} _{p^{j+1}}$ for some
group $\operatorname{F} $ and some non-negative integer $j$. By Lemma \ref{lemma5a}
\[\operatorname{Hom} (\operatorname{M} ,\operatorname{N} ) \cong \operatorname{Hom} (\operatorname{E} ,\operatorname{F} ) \times \operatorname{Hom} (\operatorname{E} ,\operatorname{C} _{p^j}) \times
\operatorname{Hom} (\operatorname{F} ,\operatorname{C} _{p^i}) \times \operatorname{Hom} (\operatorname{C} _{p^i},\operatorname{C} _{p^j})\]
and
\[\operatorname{Hom} (\operatorname{B} ,\operatorname{D} ) \cong \operatorname{Hom} (\operatorname{E} ,\operatorname{F} ) \times \operatorname{Hom} (\operatorname{E} ,\operatorname{C} _{p^{j+1}}) \times
\operatorname{Hom} (\operatorname{F} ,\operatorname{C} _{p^{i+1}}) \times \operatorname{Hom} (\operatorname{C} _{p^{i+1}},\operatorname{C} _{p^{j+1}}).\]
Since $|\operatorname{Hom} (\operatorname{E} ,\operatorname{C} _{p^j})| \le |\operatorname{Hom} (\operatorname{E} ,\operatorname{C} _{p^{j+1}})|$, $|\operatorname{Hom} (\operatorname{F} ,\operatorname{C} _{p^i})|
\le |\operatorname{Hom} (\operatorname{F} ,\operatorname{C} _{p^{i+1}})|$ and $|\operatorname{Hom} (\operatorname{C} _{p^i},\operatorname{C} _{p^j})| <
|\operatorname{Hom} (\operatorname{C} _{p^{i+1}},\operatorname{C} _{p^{j+1}})|$, it follows that $|\operatorname{Hom} (\operatorname{M} ,\operatorname{N} )|
< |\operatorname{Hom} (\operatorname{B} ,\operatorname{D} )|$. Now by Lemma \ref{lemma5b} we get $\operatorname{Hom} (\operatorname{M} ,\operatorname{N} ) \cong
\operatorname{Hom} (\operatorname{A} , \operatorname{C} )$. Hence $|\operatorname{Hom} (\operatorname{A} ,\operatorname{C} )| < |\operatorname{Hom} (\operatorname{B} ,\operatorname{D} )|$. \hfill $\Box$
\end{proof}
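The strict inequality is also easy to check numerically. The sketch below (Python; the invariant lists are arbitrary test cases, and the function names are ours) combines the gcd-product formula for $|\operatorname{Hom} |$ with the description of maximal subgroups from Lemma \ref{lemma5d}:

```python
from math import gcd

def hom_count(A, B):
    """|Hom(A, B)| for abelian groups given by lists of cyclic orders."""
    out = 1
    for a in A:
        for b in B:
            out *= gcd(a, b)
    return out

def maximal_subgroup_types(p, exps):
    """Isomorphism types M = H x C_{p^i} of maximal subgroups of the
    abelian p-group with invariant exponents exps, obtained by lowering
    one exponent by 1 (dropping it if it reaches 0)."""
    for j in range(len(exps)):
        cut = exps[:j] + [exps[j] - 1] + exps[j + 1:]
        yield [p ** e for e in cut if e > 0]

p = 2
strict = True
for B_exp, D_exp in [([3, 1], [4, 2]), ([2, 2, 1], [3, 3])]:
    B = [p ** e for e in B_exp]
    D = [p ** e for e in D_exp]
    for A in maximal_subgroup_types(p, B_exp):
        for C in maximal_subgroup_types(p, D_exp):
            strict = strict and hom_count(A, C) < hom_count(B, D)
```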
\section{Groups $G$ satisfying Hypothesis A}
In this section we derive some interesting properties of finite groups satisfying Hypothesis A and prove Theorems A and B. We start with the following easy lemma.
\begin{lemma}\label{lemma6a}
Let $G$ be a finite $p$-group of class $2$. Then the following hold:
\begin{enumerate}
\item The exponents of $\gamma_2(G)$ and $G/\operatorname{Z} (G)$ are the same.
\item For each $x \in G-\operatorname{Z} (G)$, $[x,G]$ is a non-trivial normal subgroup of $G$ contained in
$\gamma_2(G)$.
\item For $x \in G-\operatorname{Z} (G)$, the exponent of the subgroup $[x,G]$ is equal to the order of
${\bar x} = x\operatorname{Z} (G) \in G/\operatorname{Z} (G)$.
\end{enumerate}
\end{lemma}
\begin{proof} Since (1) and (2) are well known, we only prove (3).
Let the order of ${\bar x} = x\operatorname{Z} (G)$ be $p^c$. Then $x^{p^c} \in \operatorname{Z} (G)$. Let
$[x,g] \in [x,G]$ be an arbitrary element. Now $[x,g]^{p^c} = [x^{p^c}, g] =
1$. Thus the exponent of $[x,G]$ is at most $p^c$. We claim that it
cannot be less than $p^c$. Suppose that the exponent of $[x,G]$ is $p^b$, where
$b < c$. Then $[x^{p^b},g] = [x,g]^{p^b} = 1$ for all $g \in G$. This proves
that $x^{p^b} \in \operatorname{Z} (G)$, which contradicts the fact that the order
of ${\bar x}$ is $p^c$. Hence the exponent of $[x,G]$ is equal to $p^c$, which completes the proof of the lemma.
\hfill $\Box$
\end{proof}
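Part (3) can be illustrated by a brute-force computation (Python) in a concrete class-$2$ group: the upper unitriangular $3 \times 3$ matrices over $\mathbb{Z}/9\mathbb{Z}$, where the modulus $9$ is chosen only so that $G/\operatorname{Z} (G)$ contains elements of orders $3$ and $9$.

```python
from itertools import product

n = 9  # entries mod 9, so cosets in G/Z(G) have orders 1, 3 and 9

def mul(x, y):
    """Multiply upper unitriangular 3x3 matrices over Z_n, encoded as
    (a, b, c) for the entries in positions (1,2), (1,3), (2,3)."""
    (a, b, c), (d, e, f) = x, y
    return ((a + d) % n, (b + e + a * f) % n, (c + f) % n)

def inv(x):
    a, b, c = x
    return (-a % n, (a * c - b) % n, -c % n)

def comm(x, y):
    return mul(mul(inv(x), inv(y)), mul(x, y))

def order(x):
    k, y = 1, x
    while y != (0, 0, 0):
        y, k = mul(y, x), k + 1
    return k

G = list(product(range(n), repeat=3))

lemma_holds = True
for x in G:
    xG = {comm(x, y) for y in G}          # [x, G]; central, hence a subgroup here
    exponent = max(order(t) for t in xG)  # [x, G] is cyclic in this group
    # order of the coset x Z(G): least k with x^k central, i.e. a = c = 0
    y, k = x, 1
    while not (y[0] == 0 and y[2] == 0):
        y, k = mul(y, x), k + 1
    lemma_holds = lemma_holds and exponent == k
```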
Let $G$ be a finite nilpotent group of class $2$. Let $\phi \in \operatorname{Aut} _c(G)$. Then the map $g \mapsto g^{-1}\phi(g)$ is a homomorphism of
$G$ into $\gamma_2(G)$. This homomorphism sends $Z(G)$ to $1$. So it induces a homomorphism $f_{\phi} \colon G/Z(G) \to \gamma_2(G)$, sending
$gZ(G)$ to $g^{-1}\phi(g)$, for any $g \in G$. It can be easily seen that
the map $\phi \mapsto f_{\phi}$ is a monomorphism of the group
$\operatorname{Aut} _c(G)$ into $\operatorname{Hom} (G/Z(G), \gamma_2(G))$.
Any $\phi \in \operatorname{Aut} _c(G)$ sends any $g \in G$ to some $\phi(g)
\in g^G$. Then $f_{\phi}(gZ(G)) = g^{-1}\phi(g)$ lies in $g^{-1}g^G =
[g,G]$. Denote
\[ \{ \, f \in \operatorname{Hom} (G/Z(G), \gamma_2(G)) \mid f(gZ(G)) \in [g,G], \text{ for
all $g \in G$}\,\} \]
by $\operatorname{Hom} _c(G/Z(G), \gamma_2(G))$. Then $f_{\phi} \in \operatorname{Hom} _c(G/Z(G),
\gamma_2(G))$ for all $\phi \in \operatorname{Aut} _c(G)$. On the other hand, if $f \in
\operatorname{Hom} _c(G/Z(G), \gamma_2(G))$, then the map sending any $g \in G$ to
$gf(gZ(G))$ is an automorphism $\phi \in \operatorname{Aut} _c(G)$ such that $f_{\phi}
= f$. Thus we have
\begin{prop}\label{prop1}
Let $G$ be a finite nilpotent group of class 2. Then the
above map $\phi \mapsto f_{\phi}$ is an isomorphism of the group
$\operatorname{Aut} _c(G)$ onto $\operatorname{Hom} _c(G/Z(G), \gamma_2(G))$.
\end{prop}
We also need the following easy observation.
\begin{lemma}\label{lemma7}
Let $H = \operatorname{C} _{n_1} \times \cdots \times \operatorname{C} _{n_r}$ and $K = \operatorname{C} _{m_1} \times
\cdots \times \operatorname{C} _{m_s}$ be two finite abelian groups. Let $r \le s$ and
$n_i$ divides $m_i$, where $1 \le i \le r$. Then $H$ is isomorphic to a
subgroup of $K$.
\end{lemma}
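The lemma admits a direct brute-force check for small groups (Python; the search routine below is ours and only practical for tiny orders): a homomorphism is determined by the images of the generators of the cyclic factors of $H$, and one searches for an injective choice.

```python
from itertools import product

def embeds(H, K):
    """Brute force: does the abelian group with cyclic orders H embed into
    the one with cyclic orders K?  We search over all homomorphisms, each
    fixed by the images of the generators of the cyclic factors of H."""
    def add(u, v):
        return tuple((a + b) % m for a, b, m in zip(u, v, K))
    K_elts = list(product(*[range(m) for m in K]))
    # the generator of order H[i] may only map to g in K with H[i] * g = 0
    candidates = [[g for g in K_elts
                   if all((H[i] * gj) % m == 0 for gj, m in zip(g, K))]
                  for i in range(len(H))]
    H_elts = list(product(*[range(d) for d in H]))
    for gens in product(*candidates):
        images = set()
        for h in H_elts:
            out = tuple(0 for _ in K)
            for g, c in zip(gens, h):
                for _ in range(c):
                    out = add(out, g)
            images.add(out)
        if len(images) == len(H_elts):
            return True
    return False

# n_i divides m_i componentwise, so the lemma predicts an embedding:
positive_case = embeds([2, 2], [4, 2])
# no componentwise divisibility: C_4 has no copy inside C_2 x C_2
negative_case = embeds([4], [2, 2])
```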
We use the above information and Lemma \ref{lemma6} to prove the following.
\begin{prop}\label{prop2}
Let $G$ be a finite $p$-group of class $2$ which satisfies Hypothesis A. Then
$\gamma_2(G) = \operatorname{Z} (G)$.
\end{prop}
\begin{proof}
We first prove that $G$ is purely non-abelian. Suppose the contrary, then
$G = \operatorname{K} \times \operatorname{A} $, where $\operatorname{A} $ is a non-trivial abelian subgroup of
$G$ and $K$ is purely non-abelian. Obviously $|\operatorname{Aut} _c(G)| = |\operatorname{Aut} _c(\operatorname{K} )|$. It follows that $|\operatorname{Autcent} (G)| >
|\operatorname{Autcent} (K)||\operatorname{Autcent} (\operatorname{A} )| \ge |\operatorname{Autcent} (K)|$ (Notice that the second inequality is also strict if $p$ is odd, since $\operatorname{A} $ is non-trivial). Hence
\[|\operatorname{Aut} _c(G)| = |\operatorname{Aut} _c(\operatorname{K} )| \le |\operatorname{Autcent} (K)| < |\operatorname{Autcent} (G)|.\]
This is a contradiction to Hypothesis A. This proves that $G$ is purely
non-abelian.
Now suppose that $\gamma_2(G) < \operatorname{Z} (G)$. Then $|G/\operatorname{Z} (G)| < |G/\gamma_2(G)|$.
Notice that all the conditions of Lemma \ref{lemma7} hold with $H = G/\operatorname{Z} (G)$ and
$K = G/\gamma_2(G)$. Thus $G/\operatorname{Z} (G)$ is isomorphic to a proper subgroup of
$G/\gamma_2(G)$. It follows from Theorem \ref{lemma0} that $|\operatorname{Autcent} (G)| =
|\operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))|$, since $G$ is purely non-abelian. By Proposition \ref{prop1}, we have
$|\operatorname{Aut} _c(G)| \le |\operatorname{Hom} (G/\operatorname{Z} (G), \gamma_2(G))|$. Since $G/\operatorname{Z} (G)$ is isomorphic
to a proper subgroup of $G/\gamma_2(G)$ and $\gamma_2(G)$ is a proper
subgroup of $\operatorname{Z} (G)$, it follows from Lemma \ref{lemma6} that
\[|\operatorname{Hom} (G/\operatorname{Z} (G), \gamma_2(G))| < |\operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))|.\]
Hence
\[|\operatorname{Aut} _c(G)| \le |\operatorname{Hom} (G/\operatorname{Z} (G), \gamma_2(G))| < |\operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))|
= |\operatorname{Autcent} (G)|.\]
This contradicts the fact that $G$ satisfies Hypothesis A. Hence $\gamma_2(G)
= \operatorname{Z} (G)$. This completes the proof of the proposition. \hfill $\Box$
\end{proof}
Let $G$ be a finite $p$-group of class $2$ such that $\operatorname{Z} (G) \le \Phi(G)$. Then ${\bar G} = G/\operatorname{Z} (G)$, being finite abelian, admits a minimal basis. Let $\{{\bar x}_1, \ldots, {\bar x}_d\}$ be a minimal basis for ${\bar G}$, where ${\bar x}_i = x_i\operatorname{Z} (G)$. Now $G/\Phi(G) \cong (G/\operatorname{Z} (G))/(\Phi(G)/\operatorname{Z} (G)) \cong {\bar G}/\Phi({\bar G})$, since $\operatorname{Z} (G) \le \Phi(G)$. Thus the set $\{x_1 \Phi(G), \ldots, x_d \Phi(G)\}$ minimally generates $G/\Phi(G)$, which implies that the set $\{x_1, \ldots, x_d\}$ minimally generates $G$. Now let $x \in G-\Phi(G)$. Therefore $x\operatorname{Z} (G) \in G/\operatorname{Z} (G) - \Phi(G)/\operatorname{Z} (G)$. Thus we can find a minimal basis $\{{\bar x}_1, \ldots, {\bar x}_d\}$ for ${\bar G}$ such that ${\bar x} = x\operatorname{Z} (G) = {\bar x}_i$ for some $1 \le i \le d$. As a consequence of this discussion, we get the following result.
\begin{lemma}\label{lemma8a}
Let $G$ be a finite $p$-group of class $2$ such that $\operatorname{Z} (G) \le \Phi(G)$. Then the following hold:
\begin{enumerate}
\item Any minimal basis $\{{\bar x}_1, \ldots, {\bar x}_d\}$ for ${\bar G}$ provides a distinguished minimal generating set $\{x_1, \ldots, x_d\}$ for $G$.
\item Any element $x \in G-\Phi(G)$ can be included in a distinguished minimal generating set for $G$.
\end{enumerate}
\end{lemma}
Since $\operatorname{Z} (G) = \gamma_2(G) \le \Phi(G)$ for a finite $p$-group of class $2$ satisfying Hypothesis A, we readily get
\begin{cor}\label{cor0}
Any finite $p$-group of class $2$ satisfying Hypothesis A, admits a distinguished minimal generating set.
\end{cor}
\begin{prop}\label{prop3}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Let $\{x_1, \ldots, x_d\}$ be a distinguished minimal generating set for $G$. Then $|\operatorname{Aut} _c(G)| = \Pi_{i =1}^d |x_i^G|$ and $[x_i, G] = \Omega_{m_i}(\gamma_2(G))$, where $p^{m_i}$ is the order of ${\bar x}_i$. Moreover, $[x_1, G] = \gamma_2(G)$.
\end{prop}
\begin{proof}
Let $\{x_1, \ldots, x_d\}$ be a distinguished minimal generating set for $G$ such that the order of ${\bar x}_i$ is $p^{m_i}$ for $1 \le i \le d$. Notice that $|\operatorname{Aut} _c(G)| \le \Pi_{i=1}^d |x_i^G|$, as there are at most $|x_i^G|$ choices for the image of $x_i$ under any class preserving
automorphism of $G$. Since the exponent of the subgroup $[x_i,G]$ is equal to
the order of ${\bar x}_i = x_i\operatorname{Z} (G) \in G/\operatorname{Z} (G)$ for any $1 \le i \le d$ (Lemma \ref{lemma6a}), it follows that
$|\operatorname{Hom} (\gen{{\bar x}_i}, [x_i, G])| = |[x_i,G]|$. By Proposition \ref{prop2} we have $\operatorname{Z} (G) = \gamma_2(G)$. Thus
\begin{eqnarray}
|\operatorname{Aut} _c(G)| &=& |\operatorname{Autcent} (G)| = |\operatorname{Hom} (G/\gamma_2(G), \operatorname{Z} (G))|
= |\operatorname{Hom} (G/\operatorname{Z} (G), \gamma_2(G))|\nonumber\\
&=& \Pi_{i=1}^d |\operatorname{Hom} (\gen{{\bar x}_i}, \gamma_2(G))|
\ge \Pi_{i=1}^d |\operatorname{Hom} (\gen{{\bar x}_i}, [x_i,G])|
= \Pi_{i=1}^d |[x_i, G]| \label{eqlemma9}\\
& =& \Pi_{i =1}^d |x_i^G|.\nonumber
\end{eqnarray}
Hence $|\operatorname{Aut} _c(G)| = \Pi_{i=1}^d |x_i^G|$.
It now follows from \eqref{eqlemma9} that $\operatorname{Hom} (\gen{{\bar x}_i}, \gamma_2(G)) = \operatorname{Hom} (\gen{{\bar x}_i}, [x_i, G])$ for each $1 \le i \le d$. Notice that $\operatorname{Hom} (\gen{{\bar x}_i}, \gamma_2(G)) = \operatorname{Hom} (\gen{{\bar x}_i}, \Omega_{m_i}(\gamma_2(G))) \cong \Omega_{m_i}(\gamma_2(G))$ (Lemma \ref{lemma2.7}). Also, as mentioned above, $\operatorname{Hom} (\gen{{\bar x}_i}, [x_i, G]) \cong [x_i, G]$. Hence $|[x_i, G]| = |\Omega_{m_i}(\gamma_2(G))|$. Since the exponent of $[x_i, G]$ is equal to the order of ${\bar x}_i$, it follows that $[x_i, G] \le \Omega_{m_i}(\gamma_2(G))$ for each $1 \le i \le d$. Hence $[x_i, G] = \Omega_{m_i}(\gamma_2(G))$ for each $1 \le i \le d$. Since the order of ${\bar x}_1$ (which is equal to the exponent of $G/\operatorname{Z} (G)$) is equal to the exponent of $\gamma_2(G)$, $[x_1, G] = \gamma_2(G)$. This completes the proof of the proposition. \hfill $\Box$
\end{proof}
In the following corollary we show that the order of $\operatorname{Aut} _c(G)$, obtained in Proposition \ref{prop3}, is independent of the choice of distinguished minimal generating set for $G$.
\begin{cor}\label{cor1}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Let the invariants of $G/\operatorname{Z} (G)$ be $p^{m_1}, \ldots, p^{m_d}$. Then
$|\operatorname{Aut} _c(G)| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))|$.
\end{cor}
\begin{proof}
Let $p^{m_1}, \ldots, p^{m_d}$ be the invariants of $G/\operatorname{Z} (G)$. Notice that any distinguished minimal generating set for $G$ can be written (after re-ordering if necessary) as $\{x_1, \ldots, x_d\}$ such that the order of ${\bar x}_i$ is $p^{m_i}$ for $1 \le i \le d$. Hence, by Proposition \ref{prop3}, it follows that $|\operatorname{Aut} _c(G)| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))|$. This completes the proof. \hfill $\Box$
\end{proof}
Some other consequences of Proposition \ref{prop3} are
\begin{cor}\label{cor2}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Let $x \in G-\Phi(G)$ be such that the order of $x\operatorname{Z} (G)$ is $p^m$. Then $[x, G] = \Omega_m(\gamma_2(G))$.
\end{cor}
\begin{proof}
By Lemma \ref{lemma8a}, we can find a distinguished minimal generating set $\{x_1, \ldots, x_d\}$ for $G$ such that $x = x_i$ for some $1 \le i \le d$. Now the assertion follows from Proposition \ref{prop3}. \hfill $\Box$
\end{proof}
\begin{cor}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Let $x \in G-\Phi(G)$ be such that the order of $x\operatorname{Z} (G)$ is $p^m$. Then $|\operatorname{C} _{G}(x)| = |G/\operatorname{Z} (G)| |\Omega^m(\gamma_2(G))|$.
\end{cor}
We now prove Theorem A.
\begin{thm}[Theorem A]\label{thm1}
Let $G$ be a finite $p$-group of nilpotency class $2$. Then $G$ satisfies
Hypothesis A if and only if $\operatorname{Z} (G) = \gamma_2(G)$ and
$|\operatorname{Aut} _c(G)| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))|$, where $p^{m_1}, \ldots, p^{m_d}$ are the invariants of $G/\operatorname{Z} (G)$.
\end{thm}
\begin{proof}
Let $G$ satisfy Hypothesis A. Then by Proposition \ref{prop2}, $\operatorname{Z} (G) = \gamma_2(G)$ and by Corollary \ref{cor1},
$|\operatorname{Aut} _c(G)| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))|$.
On the other hand, suppose that $\operatorname{Z} (G) = \gamma_2(G)$ and
$|\operatorname{Aut} _c(G)| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))|$. Let $\{x_1, \ldots, x_d\}$ be a distinguished minimal generating set for $G$ such that the order of ${\bar x}_i = x_i\operatorname{Z} (G)$ is $p^{m_i}$ for $1 \le i \le d$.
Since $\operatorname{Z} (G) = \gamma_2(G)$, $G$ is purely non-abelian and therefore
\begin{eqnarray*}
|\operatorname{Autcent} (G)| &=& \Pi_{i=1}^d |\operatorname{Hom} (\gen{{\bar x}_i}, \operatorname{Z} (G))| = \Pi_{i=1}^d |\operatorname{Hom} (\gen{{\bar x}_i}, \Omega_{m_i}(\gamma_2(G)))|\\
& =& \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))| = |\operatorname{Aut} _c(G)|.
\end{eqnarray*}
Hence $G$ satisfies Hypothesis A. This completes the proof of the theorem. \hfill $\Box$
\end{proof}
Let $\operatorname{Z} (G) = \operatorname{Z} _1 \times \operatorname{Z} _2 \times \cdots \times \operatorname{Z} _r$ be a cyclic
decomposition of $\operatorname{Z} (G)$ such that $|\operatorname{Z} _1| \ge |\operatorname{Z} _2| \ge \dots \ge |\operatorname{Z} _r| > 1$ and, for each $i$ such that $1 \le i \le r$, define
$\operatorname{Z} _i^* =\operatorname{Z} _1 \times \cdots \times \operatorname{Z} _{i-1} \times \operatorname{Z} _{i+1} \times \cdots \times \operatorname{Z} _r$.
\begin{prop}\label{prop4}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Let $x \in
G - \Phi(G)$. Then $[x,G] \cap \operatorname{Z} _i \neq 1$ for each $i$ such that $1 \le i \le r$. Moreover, if $|\operatorname{Z} _i|$ is equal to the exponent of $\operatorname{Z} (G)$, then
$|[x,G] \cap \operatorname{Z} _i|$ is equal to the exponent of $[x,G]$ and $G/\operatorname{Z} _i^*$ satisfies Hypothesis A.
\end{prop}
\begin{proof}
Since $G$ satisfies Hypothesis A, $\operatorname{Z} (G) = \gamma_2(G)$. We shall use this fact throughout without further mention.
Let $x \in G-\Phi(G)$ be such that the order of $x\operatorname{Z} (G)$ is $p^m$, where $m \ge 1$ is an integer. Then it follows from Corollary \ref{cor2} that $[x, G] = \Omega_m(\gamma_2(G)) = \Omega_m(\operatorname{Z} (G))$. This proves that $[x,G] \cap \operatorname{Z} _i \neq 1$ for each $1 \le i \le r$.
Let $|\operatorname{Z} _i| = p^e$, the exponent of $\operatorname{Z} (G)$. Since the exponent of $[x,G]$ is equal to $|\gen{\bar{x}}|$, where $\bar{x} = x\operatorname{Z} (G)$ (Lemma \ref{lemma6a}), the exponent of $[x,G]$ is $p^m$. Thus $e \ge m$. Since $[x, G] = \Omega_m(\operatorname{Z} (G))$, it follows that $[x,G] \cap \operatorname{Z} _i = \Omega_m(\operatorname{Z} _i)$. Hence $|[x,G] \cap \operatorname{Z} _i| = p^m$.
Again let $|\operatorname{Z} _i| = p^e$, the exponent of $\operatorname{Z} (G)$. Assume for a moment that $\operatorname{Z} (G/\operatorname{Z} _i^*) = \operatorname{Z} (G)/\operatorname{Z} _i^*$. Then
$\operatorname{Z} (G/\operatorname{Z} _i^*) \cong \operatorname{Z} _i$ is cyclic and is equal to $\gamma_2(G/\operatorname{Z} _i^*) =
\gamma_2(G)/\operatorname{Z} _i^*$, since $\operatorname{Z} _i^* \le \gamma_2(G)$. This implies that
$\operatorname{Autcent} (G/\operatorname{Z} _i^*) = \operatorname{Inn} (G/\operatorname{Z} _i^*) = \operatorname{Aut} _c(G/\operatorname{Z} _i^*)$. Hence Hypothesis A holds true for $G/\operatorname{Z} _i^*$. Therefore, to complete the proof, it is sufficient to
prove what we have assumed, i.e., $\operatorname{Z} (G/\operatorname{Z} _i^*) = \operatorname{Z} (G)/\operatorname{Z} _i^*$.
Let $x\operatorname{Z} _i^* \in G/\operatorname{Z} _i^* - \operatorname{Z} (G)/\operatorname{Z} _i^*$. Then $x \in G - \operatorname{Z} (G)$. If $x \in G
- \Phi(G)$, then $[x,G] \not\subseteq \operatorname{Z} _i^*$ and therefore
$x\operatorname{Z} _i^* \not\in \operatorname{Z} (G/\operatorname{Z} _i^*)$. So let $x \in \Phi(G) - \operatorname{Z} (G)$. Then there
exists an element $y \in G - \Phi(G)$ such that $x = y^jz$ for some positive
integer $j$ and some element $z \in \operatorname{Z} (G)$. Now
\[[x,G] = [y^jz, G] = [y,G]^j, \;\; 1 \le j < \text{exponent of}\;\; [y,G].\]
Since $y \in G-\Phi(G)$, $|[y,G] \cap \operatorname{Z} _i|$ is equal to the exponent of $[y,G]$. So it follows that $[y,G]^j \not\subseteq
\operatorname{Z} _i^*$ for any non-zero $j$ which is strictly less than the exponent of
$[y,G]$. Thus $[x, G] \not\subseteq \operatorname{Z} _i^*$. This implies that $[x\operatorname{Z} _i^*,
G/\operatorname{Z} _i^*] \neq 1$. Hence $x\operatorname{Z} _i^* \not\in \operatorname{Z} (G/\operatorname{Z} _i^*)$. This proves that
$\operatorname{Z} (G/\operatorname{Z} _i^*) = \operatorname{Z} (G)/\operatorname{Z} _i^*$, completing the proof of the proposition. \hfill $\Box$
\end{proof}
For the proof of our next result we need the following important result.
\begin{thm}[\cite{BBC}, Theorem 2.1]\label{thm0a}
Let $G$ be a finite $p$-group of nilpotency class $2$ with cyclic center. Then $G$ is a central product of either two-generator subgroups with cyclic center, or two-generator subgroups with cyclic center and a cyclic subgroup.
\end{thm}
Now we are ready to prove Theorem B.
\begin{thm}[Theorem B]\label{thm2}
Let $G$ be a finite $p$-group of class $2$ satisfying Hypothesis A. Then $d(G)$ is even.
\end{thm}
\begin{proof}
Let $|\operatorname{Z} _i|$ be equal to the exponent of $\operatorname{Z} (G)$ and $G^*$ denote the factor group $G/\operatorname{Z} _i^*$. Then it follows from Proposition \ref{prop4} that $\operatorname{Z} (G^*) = \gamma_2(G^*)$ is cyclic of order
$|\operatorname{Z} _i|$. Since $G^*$ is purely non-abelian, it follows from Theorem \ref{thm0a} that $G^*$ is a central product of two-generator subgroups with cyclic center. Hence $G^*/\operatorname{Z} (G^*) \cong G/\operatorname{Z} (G)$ is a direct product of an even number of cyclic subgroups of $G^*/\operatorname{Z} (G^*)$. Thus $d(G/\operatorname{Z} (G))$ is even. Since $\operatorname{Z} (G) = \gamma_2(G)$, it follows that $d(G)$ is even. \hfill $\Box$
\end{proof}
\section{Some further properties and examples}
In this section we discuss some more properties and give some examples of finite groups which satisfy Hypothesis A. We start with the
following concept of isoclinism of groups, introduced by P. Hall \cite{pH40}.
Let $X$ be a finite group and $\bar{X} = X/\operatorname{Z} (X)$.
Then commutation in $X$ gives a well-defined map
$a_{X} : \bar{X} \times \bar{X} \to \gamma_{2}(X)$ such that
$a_{X}(x\operatorname{Z} (X), y\operatorname{Z} (X)) = [x,y]$ for $(x,y) \in X \times X$.
Two finite groups $G$ and $H$ are called \emph{isoclinic} if
there exists an isomorphism $\phi$ of the factor group
$\bar G = G/\operatorname{Z} (G)$ onto $\bar{H} = H/\operatorname{Z} (H)$, and an isomorphism $\theta$ of
the subgroup $\gamma_{2}(G)$ onto $\gamma_{2}(H)$
such that the following diagram is commutative
\[
\begin{CD}
\bar G \times \bar G @>a_G>> \gamma_{2}(G)\\
@V{\phi\times\phi}VV @VV{\theta}V\\
\bar H \times \bar H @>a_H>> \gamma_{2}(H).
\end{CD}
\]
The resulting pair $(\phi, \theta)$ is called an \emph{isoclinism} of $G$
onto $H$. Notice that isoclinism is an equivalence relation among finite
groups.
Let $G$ be a finite $p$-group. Then it follows from \cite{pH40} that there exists a finite $p$-group $H$ in the isoclinism family of $G$ such that
$\operatorname{Z} (H) \le \gamma_2(H)$. Such a group $H$ is called a \emph{stem group} in the isoclinism family of $G$.
The following theorem shows that the group of class preserving automorphisms is independent of the choice of a group in a given isoclinism family of groups.
\begin{thm}[\cite{mYp5}, Theorem 4.1]\label{thm2a}
Let $G$ and $H$ be two finite non-abelian isoclinic groups. Then
$\operatorname{Aut} _c(G) \cong \operatorname{Aut} _c(H)$.
\end{thm}
\begin{prop}\label{prop5}
Let $G$ and $H$ be two non-abelian finite $p$-groups which are isoclinic. Let $G$ satisfy Hypothesis A. Then $H$ satisfies Hypothesis A if and only if $|H| = |G|$.
\end{prop}
\begin{proof}
Suppose that $H$ satisfies Hypothesis A. Then $\gamma_2(H) = \operatorname{Z} (H)$. Since $G$ and $H$ are isoclinic and $G$ satisfies Hypothesis A, it follows that $|\operatorname{Z} (G)| = |\gamma_2(G)| = |\gamma_2(H)| = |\operatorname{Z} (H)|$ and $|G/\operatorname{Z} (G)| = |H/\operatorname{Z} (H)|$. Hence $|H| = |H/\operatorname{Z} (H)| |\operatorname{Z} (H)| = |G/\operatorname{Z} (G)| |\operatorname{Z} (G)| = |G|$.
Conversely, suppose that $|H| = |G|$. It is easy to show that $\gamma_2(H) = \operatorname{Z} (H)$. Let $p^{m_1}, \ldots, p^{m_d}$ be the invariants of $G/\operatorname{Z} (G) \cong H/\operatorname{Z} (H)$. Since $\gamma_2(H) \cong \gamma_2(G)$, we have $\Omega_{m_i}(\gamma_2(H)) \cong \Omega_{m_i}(\gamma_2(G))$. Since $G$ and $H$ are isoclinic, it follows from Theorem \ref{thm2a} that $\operatorname{Aut} _c(G) \cong \operatorname{Aut} _c(H)$. Hence $|\operatorname{Aut} _c(H)| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(G))| = \Pi_{i =1}^d |\Omega_{m_i}(\gamma_2(H))|$. That $H$ satisfies Hypothesis A now follows from Theorem \ref{thm1}. \hfill $\Box$
\end{proof}
Let $G$ be a finite group and $1 \neq N$ be a normal subgroup of
$G$. $(G,N)$ is called a \emph{Camina pair} if $xN \subseteq x^G$ for
all $x \in G-N$. A group $G$ is called a \emph{Camina group} if $(G,\gamma_{2}(G))$
is a Camina pair. So if $G$ is a Camina group, then $x\gamma_2(G) \subseteq
x^G$ for all $x \in G-\gamma_2(G)$. This is equivalent to saying that $\gamma_2(G)
\subseteq [x, G]$ for all $x \in G-\gamma_2(G)$. Since $[x, G] \subseteq
\gamma_2(G)$, it follows that $G$ is a Camina group if and only if
$\gamma_2(G)= [x, G]$ for all $x \in G-\gamma_2(G)$.
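As a concrete sanity check of this criterion (illustrative only, not part of the paper), one can verify by brute force that the extra-special Heisenberg group of order $27$ — unitriangular $3 \times 3$ matrices over $\mathbb{F}_3$, encoded below as triples $(a,b,c)$ — is a Camina group, i.e., $[x,G] = \gamma_2(G)$ for every $x \notin \gamma_2(G)$:

```python
from itertools import product

p = 3
G = list(product(range(p), repeat=3))   # (a, b, c) <-> unitriangular matrix

def mul(x, y):
    a, b, c = x; A, B, C = y
    return ((a + A) % p, (b + B) % p, (c + C + a * B) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a * b - c) % p)

def comm(x, y):                          # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

center = {g for g in G if all(comm(g, h) == (0, 0, 0) for h in G)}
gamma2 = {comm(x, y) for x in G for y in G}   # already a subgroup here
assert center == gamma2 and len(gamma2) == p

# Camina property: [x, G] = gamma_2(G) for every x outside gamma_2(G)
for x in G:
    if x not in gamma2:
        assert {comm(x, g) for g in G} == gamma2
print("Camina property verified for the Heisenberg group of order", len(G))
```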
\begin{prop}\label{prop5a}
Let $G$ be a finite special $p$-group. Then Hypothesis A holds true for $G$ if and only if $G$ is a Camina group.
\end{prop}
\begin{proof}
First suppose that Hypothesis A holds true for $G$. Since $\gamma_2(G) = \Phi(G)$, we only need to show that $[x, G] = \gamma_2(G)$ for all $x \in G -\Phi(G)$. Let $x \in G -\Phi(G)$. By Corollary \ref{cor2}, we have $[x, G] = \Omega_m(\gamma_2(G))$, where order of $x\operatorname{Z} (G)$ in $G/\operatorname{Z} (G)$ is $p^m$. Since $G$ is special, $m =1$ and the exponent of $\gamma_2(G)$ is $p$. Thus $[x, G] = \gamma_2(G)$.
Conversely suppose that $G$ is a Camina group. Then it follows from \cite[Theorem 5.4]{mY07} that $|\operatorname{Aut} _c(G)| = |\gamma_2(G)|^d$, where $|G/\Phi(G)| = p^d$. Since $G$ is special, it follows that $\gamma_2(G) = \operatorname{Z} (G)$ and $p^{m_1} =p, p^{m_2} =p , \ldots, p^{m_d} = p $ are the invariants of $G/\operatorname{Z} (G)$. Thus $\gamma_2(G) = \Omega_{m_i}(\gamma_2(G))$. Now we can use Theorem \ref{thm1} to deduce that $G$ satisfies Hypothesis A and the proof is complete. \hfill $\Box$
\end{proof}
Since every finite Camina $p$-group of nilpotency class $2$ is special \cite{iM81}, we readily get
\begin{cor}
Let $G$ be a finite Camina $p$-group of nilpotency class $2$. Then $G$ satisfies Hypothesis A.
\end{cor}
With this much information we get
\noindent{\bf Example 1.} All finite Camina $p$-groups of class $2$ satisfy Hypothesis A. In particular, all finite extra-special $p$-groups satisfy Hypothesis A. Examples of Camina $p$-groups of class $2$ that are not extra-special can be found in \cite{DS96} and \cite{iM81}.
Now we construct an example of a finite $p$-group satisfying Hypothesis A, which is not a Camina group.
\noindent{\bf Example 2.} Let $R$ be the factor ring $S/p^2S$, where $S$ is the ring of $p$-adic
integers in the unramified extension of degree 2 over the $p$-adic
completion $\mathbb Q_p$ of the rational numbers $\mathbb Q$.
Form the group $G$ of all $3 \times 3$ matrices
\[ M(x,y,z) = \left ( \begin{matrix} 1 & 0 & 0 \\
x & 1 & 0 \\
z & y & 1
\end{matrix} \right ) \]
for $x,y,z \in R$. The additive group of $R$ is the direct
sum of two copies of a cyclic group of order $p^2$. The factor ring
$R/pR$ modulo the ideal $pR$ is a finite field of order $p^2$. Notice
that commutation in $G$ satisfies
\[[M(x,y,z), M(x',y',z')] = M(0,0,yx' - xy')\]
for any $x,y,z,x',y',z' \in R$. So both the center $\operatorname{Z} (G)$ and the
derived group $\gamma_2(G)$ consist of all matrices of the form $M(0,0,z)$
for $z \in R$.
Since $M(0,0,z)M(0,0,z') = M(0,0,z+z')$ for all $z, z' \in R$, $\operatorname{Z} (G)$
is noncyclic and equal to $\gamma_2(G)$. Thus the nilpotency class of $G$ is
$2$.
From the above formula for commutators it follows that
\[[M(x,y,z), G] = M(0,0,Rx+Ry) := \{ M(0,0,z) \mid z \in Rx + Ry \}\]
for any $x,y,z \in R$. Note that there are only three choices for the
ideal $Rx + Ry$ in $R$, namely, $R$, $pR$ and $p^2R = 0$.
Furthermore, all three possibilities happen for suitable $x$ and $y$.
Now
\[[M(1,0,0), G] = [M(p-1,0,0), G] = M(0,0,R) = \operatorname{Z} (G)\]
and
\[[M(1,0,0)M(p-1,0,0), G] = [M(p,0,0), G] = M(0,0,pR) = \operatorname{Z} (G)^p.\]
So
\[[M(1,0,0), G][M(p-1,0,0),G] = \operatorname{Z} (G) > \operatorname{Z} (G)^p = [M(1,0,0)M(p-1,0,0),
G].\]
This shows that $G$ has a non-central element $x = M(1,0,0)M(p-1,0,0)$ such
that $[x,G] < \gamma_2(G) = \operatorname{Z} (G)$. Hence $G$ is not a Camina group.
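For $p = 3$ this computation can be replayed by machine. In the minimal sketch below (illustrative only; it assumes the concrete model $R = (\mathbb{Z}/9)[\omega]/(\omega^2+1)$ of the unramified quadratic extension, legitimate since $x^2+1$ is irreducible mod $3$), one confirms that $[M(1,0,0),G]$ and $[M(p-1,0,0),G]$ have order $81 = |\operatorname{Z} (G)|$ while $[M(p,0,0),G]$ has order $9$:

```python
p, q = 3, 9   # R = (Z/p^2)[w]/(w^2 + 1): pairs (a, b) = a + b*w mod p^2
R = [(a, b) for a in range(q) for b in range(q)]

def add(u, v): return ((u[0] + v[0]) % q, (u[1] + v[1]) % q)
def neg(u):    return ((-u[0]) % q, (-u[1]) % q)
def mul(u, v):                       # (a+bw)(c+dw) with w^2 = -1
    a, b = u; c, d = v
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def comm_set(x, y):
    # [M(x, y, *), G] = { y*x' - x*y' : x', y' in R } = Rx + Ry as a set
    return {add(mul(y, xp), neg(mul(x, yp))) for xp in R for yp in R}

one, mone, pel, zero = (1, 0), (p - 1, 0), (p, 0), (0, 0)
assert len(comm_set(one, zero)) == 81   # = |R|:  [M(1,0,0), G]   = Z(G)
assert len(comm_set(mone, zero)) == 81  #         [M(p-1,0,0), G] = Z(G)
assert len(comm_set(pel, zero)) == 9    # = |pR|: [M(p,0,0), G]   = Z(G)^p < Z(G)
```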
Since $\operatorname{Z} (G) = \gamma_2(G)$, $G$ is purely non-abelian.
Then from Lemma \ref{lemma0}, it follows that for any element $\alpha \in
\operatorname{Autcent} (G)$ there exists a corresponding element
$f_{\alpha} \in \operatorname{Hom} (G/\gamma_2(G),\operatorname{Z} (G))$ such that $f_{\alpha}({\bar x}) =
x^{-1} \alpha(x)$ for each ${\bar x} \in G/\gamma_2(G)$.
Let $x,y,z$ be any three elements of $R$, and $i = 0,1,2$
be such that $Rx + Ry = p^iR$.
Then the element $M(x,y,z)\operatorname{Z} (G)$ of $G/\operatorname{Z} (G)$ lies in $(G/\operatorname{Z} (G))^{p^i}$.
So its image $f_{\alpha}(M(x,y,z)\operatorname{Z} (G))$ lies in
\[\operatorname{Z} (G)^{p^i} = M(0,0,p^iR) = M(0,0, Rx + Ry) = [M(x,y,z), G].\]
Thus $f_{\alpha}(g) \in [g,G]$ for all $g \in G$, and hence $\alpha \in \operatorname{Aut} _c(G)$. Since
$\operatorname{Aut} _c(G) \le \operatorname{Autcent} (G)$, this proves that $\operatorname{Aut} _c(G) = \operatorname{Autcent} (G)$.
\section{Groups $G$ with $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$}
In this section we prove Theorem C. Throughout the section, $p$ always denotes an odd prime. We state some important known results in the following theorem.
\begin{thm}\label{thmo}
The following statements hold true.
\begin{enumerate}
\item {Let $G$ be a non-abelian $p$-group of order $p^5$ or less. Then $\operatorname{Aut} (G)$ is non-abelian. \cite[Theorem 5.2]{bE75} }\label{thm3}
\item {There is no group $G$ of order $p^6$ whose automorphism group is an abelian $p$-group. \cite[Proposition 1.4]{mM94}}\label{thm4}
\item{Let $G$ be a non-abelian finite $p$-group such that $\operatorname{Aut} (G)$ is abelian. Then $d(G) \ge 4$. \cite{mM95}} \label{thm4a}
\item{Let $G$ be a non-cyclic finite $p$-group, $p$ odd, for which $\operatorname{Aut} (G)$ is abelian. Then $p^{12}$ divides $|\operatorname{Aut} (G)|$. \cite[Main Theorem]{pH95} } \label{thm5}
\item{Let $G$ be a non-cyclic group of order $p^7$. If $\operatorname{Aut} (G)$ is abelian, then it must be of order $p^{12}$. \cite[Theorem 1]{BY98} } \label{thm6}
\item {Let $G$ be a finite non-cyclic $p$-group such that $\operatorname{Aut} (G)$ is abelian. Then $\gamma_2(G) \cong \operatorname{C} _{p^{m}} \times \operatorname{C} _{p^{m}}$ \text{or} $\operatorname{C} _{p^{m}} \times \operatorname{C} _{p^{m}} \times \operatorname{C} _{p^{m_1}} \times \cdots \times \operatorname{C} _{p^{m_k}}$, where $p^m$ is the exponent of $\gamma_2(G)$ and $m \ge m_i$ for $1 \le i \le k$. \cite[Lemma 4(12)]{BY98} } \label{thm6a}
\item{Let $G$ be a finite Camina $p$-group of class $2$ such that $d(G) = n$ and $d(\gamma_2(G)) = m$ for some positive integers $n$ and $m$. Then $n$ is even and $n \ge 2m$. (Follows from
\cite[Theorems 3.1, 3.2]{iM81})}\label{thm8}
\end{enumerate}
\end{thm}
Now we start the proof of Theorem C.
\begin{lemma}\label{lemma12}
Let $G$ be a non-abelian finite $p$-group such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$, where $p$ is an odd prime. Then $\gamma_2(G)$ cannot be cyclic.
\end{lemma}
\begin{proof}
Suppose that $\gamma_2(G)$ is cyclic. It now follows from \cite[Theorem 3]{yC82} that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G) = \operatorname{Inn} (G)$, which is not possible by a celebrated theorem of W. Gasch\"utz \cite{wG66}.
\hfill $\Box$
\end{proof}
\begin{lemma}\label{lemma13}
Let $G$ be a non-abelian finite $p$-group such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$. Then $\operatorname{Aut} (G)$ is elementary abelian if and only if $G$ is a Camina special $p$-group.
\end{lemma}
\begin{proof}
Suppose that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ is elementary abelian. Then $G/\operatorname{Z} (G) \cong \operatorname{Inn} (G)$ is elementary abelian. Since $\operatorname{Z} (G) = \gamma_2(G) \le \Phi(G)$, it follows that $G$ is a special $p$-group. Hence, by Proposition \ref{prop5a}, $G$ is a Camina special $p$-group.
Conversely, suppose that $G$ is a Camina $p$-group of class $2$. Then $\operatorname{Inn} (G) \cong G/\operatorname{Z} (G)$ is elementary abelian. Hence it follows that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ is elementary abelian. \hfill $\Box$
\end{proof}
\begin{remark}
If $G$ is a special $p$-group and $\operatorname{Aut} (G) = \operatorname{Autcent} (G)$, then it is not difficult to prove that $\operatorname{Aut} (G)$ is elementary abelian. But the converse is not true. The examples of non-special finite $p$-groups $G$ such that $\operatorname{Aut} (G) = \operatorname{Autcent} (G)$ is elementary abelian can be found in \cite{JRY}.
\end{remark}
\begin{prop}\label{prop6}
Let $G$ be a finite $p$-group of nilpotency class $2$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$, where $p$ is an odd prime. Then $|G| \ge p^8$.
\end{prop}
\begin{proof}
Since the nilpotency class of $G$ is $2$, $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ is abelian. It now follows from Theorem \ref{thmo}\eqref{thm3} and Theorem \ref{thmo}\eqref{thm4} that $|G| \ge p^7$. Let $|G| = p^7$. Then $|\operatorname{Aut} (G)| = p^{12}$ (by Theorem \ref{thmo}\eqref{thm6}). Now using the fact that $\operatorname{Z} (G) = \gamma_2(G)$ and $|\operatorname{Aut} (G)| = |\operatorname{Autcent} (G)| = |\operatorname{Hom} (G/\operatorname{Z} (G), \operatorname{Z} (G))| = p^{12}$, it follows from Lemma \ref{lemma12} and Theorems \ref{thm2}, \ref{thmo}\eqref{thm4a} and \ref{thmo}\eqref{thm5} (by looking at the various possibilities for the order and structure of $\operatorname{Z} (G)$) that $G$ is a special $p$-group with $|G/\operatorname{Z} (G)| = p^4$ and $|\operatorname{Z} (G)| = p^3$. It then follows from Proposition \ref{prop5a} that $G$ is a Camina special $p$-group, which is not possible by Theorem \ref{thmo}\eqref{thm8}. This completes the proof. \hfill $\Box$
\end{proof}
The following lemma is elementary.
\begin{lemma}\label{lemma14}
Let $G$ be a finite $p$-group of nilpotency class $2$ such that $\gamma_2(G) \cong \operatorname{C} _{p^m} \times \operatorname{C} _{p^m}$ for some positive integer $m$. Then $G/\operatorname{Z} (G) \cong \operatorname{C} _{p^m} \times \operatorname{C} _{p^m} \times \operatorname{C} _{p^m} \times H$ for some abelian group $H$.
\end{lemma}
\begin{proof}
Since the exponent of $\gamma_2(G)$ is $p^m$, we can write $G/\operatorname{Z} (G) = \gen{{\bar x}_1} \times \gen{{\bar x}_2} \times K/\operatorname{Z} (G)$ for some subgroup $K$ of $G$ containing $\operatorname{Z} (G)$ such that $|\gen{[x_1, x_2]}| = |\gen{{\bar x}_1}| = |\gen{{\bar x}_2}| = p^m$. We only need to prove that the exponent of $K/\operatorname{Z} (G)$ is $p^m$.
Since $d(\gamma_2(G)) = 2$, we can find an element $w \in \gamma_2(G)$ such that $\gamma_2(G) = \gen{[x_1, x_2], w}$, where
\[w = [x_1, x_2]^{a}[x_1, k]^{a_1}[x_2, k']^{a_2} \Pi_{k_i, k_j \in K}[k_i, k_j]^{b_{ij}}\]
for some $k, k', k_i, k_j \in K$.
Let $u = [x_1, k]^{a_1}[x_2, k']^{a_2} \Pi_{k_i, k_j \in K}[k_i, k_j]^{b_{ij}}$. Then notice that $[x_1, x_2]$ and $u$ also generate $\gamma_2(G)$. If the exponent of $K/\operatorname{Z} (G)$ is less than $p^m$, then the order of the element $u$ is also less than $p^m$. Thus $|\gamma_2(G)| < p^{2m}$, which contradicts the given fact that $\gamma_2(G) \cong \operatorname{C} _{p^m} \times \operatorname{C} _{p^m}$. Hence the exponent of $K/\operatorname{Z} (G)$ is $p^m$ and the proof of the lemma is complete. \hfill $\Box$
\end{proof}
\begin{prop}\label{prop7}
Let $G$ be a finite $p$-group such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ is abelian. Then $|\operatorname{Aut} (G)| = p^{12}$ if and only if $|G| = p^8$.
\end{prop}
\begin{proof}
Let $|\operatorname{Aut} (G)| = p^{12}$. By Proposition \ref{prop6}, we can assume that $|G| \ge p^8$. Also, by Theorem \ref{thm2}, $G/\operatorname{Z} (G)$ is minimally generated by an even number of elements. Notice that
\[|\operatorname{Hom} (G/\operatorname{Z} (G), \operatorname{Z} (G))| = |\operatorname{Autcent} (G)| = |\operatorname{Aut} (G)| = p^{12}.\]
Since $\operatorname{Z} (G) = \gamma_2(G)$ cannot be cyclic, $\operatorname{Z} (G)$ contains a copy of $\operatorname{C} _p \times \operatorname{C} _p$. Suppose that $d(\operatorname{Z} (G)) \ge 3$. Since $d(G/\operatorname{Z} (G)) \ge 4$ (by Theorem \ref{thmo}\eqref{thm4a}), we have
$|\operatorname{Aut} (G)| \ge p^{12}$. Notice that equality holds only when $d(\operatorname{Z} (G)) = 3$, $d(G/\operatorname{Z} (G)) = 4$ and the exponent of both $\operatorname{Z} (G)$ and $G/\operatorname{Z} (G)$ is $p$. This implies that $|G| = p^7$, which we are not considering. Thus $d(\operatorname{Z} (G)) = 2$. Let the exponent of $\operatorname{Z} (G)$ be $p^m$ for some positive integer $m$. Then it follows from Theorem \ref{thmo}\eqref{thm6a} that $\operatorname{Z} (G) \cong \operatorname{C} _{p^m} \times \operatorname{C} _{p^m}$. We claim that $m = 1$. Suppose that $m \ge 2$. If $d(G/\operatorname{Z} (G)) \ge 6$, then notice that $|\operatorname{Aut} (G)| > p^{12}$. Thus by Theorem \ref{thm2}, $d(G/\operatorname{Z} (G)) = 4$ and by Lemma \ref{lemma14}, $G/\operatorname{Z} (G) \cong \operatorname{C} _{p^m} \times \operatorname{C} _{p^m} \times \operatorname{C} _{p^m} \times \operatorname{C} _{p^r}$ for some $1 \le r \le m$. Now it is easy to show that $|\operatorname{Aut} (G)| > p^{12}$. This contradiction proves our claim, i.e., $m=1$. Now the only choice for $d(G/\operatorname{Z} (G))$ giving $|\operatorname{Aut} (G)| = p^{12}$ is $6$. Thus $G/\operatorname{Z} (G)$ and $\operatorname{Z} (G)$ are elementary abelian of order $p^6$ and $p^2$ respectively. Hence $|G| = p^8$.
Conversely, suppose that $|G| = p^8$ and $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ is abelian. Then $|\operatorname{Z} (G)| = |\gamma_2(G)| \le p^4$. If $|\operatorname{Z} (G)| = p^4$, then $|G/\operatorname{Z} (G)| = p^4$ and therefore $G/\operatorname{Z} (G)$ is elementary abelian by Theorem \ref{thmo}\eqref{thm4a}. Hence $\Phi(G) = \operatorname{Z} (G) = \gamma_2(G)$ is elementary abelian. This shows that $G$ is special and therefore Camina special. But this is not possible by Theorem \ref{thmo}\eqref{thm8}. If $|\operatorname{Z} (G)| = p^3$, then by Theorem \ref{thm2} and Theorem \ref{thmo}\eqref{thm4a}, $G/\operatorname{Z} (G)$ can be written as
\[G/\operatorname{Z} (G) \cong \operatorname{C} _{p^2} \times \operatorname{C} _p \times \operatorname{C} _p \times \operatorname{C} _p\]
and by Lemma \ref{lemma12}, $\operatorname{Z} (G)$ can be written as
\[\operatorname{Z} (G) \cong \operatorname{C} _{p^2} \times \operatorname{C} _p.\]
Thus $|\operatorname{Aut} (G)| = |\operatorname{Autcent} (G)| = |\operatorname{Hom} (G/\operatorname{Z} (G), \operatorname{Z} (G))| = p^9$, which is not possible by Theorem \ref{thmo}\eqref{thm6}. Hence $|\operatorname{Z} (G)| = p^2$. Now $\operatorname{Z} (G)$, being non-cyclic by Lemma \ref{lemma12}, must be isomorphic to $\operatorname{C} _p \times \operatorname{C} _p$. Since the nilpotency class of $G$ is $2$ and $\operatorname{Z} (G) = \gamma_2(G)$, $G/\operatorname{Z} (G)$ must be elementary abelian of order $p^6$. Hence $|\operatorname{Aut} (G)| = |\operatorname{Autcent} (G)| = |\operatorname{Hom} (G/\operatorname{Z} (G), \operatorname{Z} (G))| = p^{12}$. \hfill $\Box$
\end{proof}
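The order counts in the preceding proof all follow from $|\operatorname{Hom} (\operatorname{C} _{p^a}, \operatorname{C} _{p^b})| = p^{\min(a,b)}$ together with multiplicativity of $\operatorname{Hom}$ over direct factors. A quick arithmetic check (illustrative only, not part of the paper):

```python
def hom_order_exp(src, tgt):
    """log_p |Hom(prod C_{p^a}, prod C_{p^b})| = sum over pairs of min(a, b)."""
    return sum(min(a, b) for a in src for b in tgt)

# |Z(G)| = p^3 case: G/Z(G) = C_{p^2} x C_p^3,  Z(G) = C_{p^2} x C_p  ->  p^9
assert hom_order_exp([2, 1, 1, 1], [2, 1]) == 9
# |Z(G)| = p^2 case: G/Z(G) = C_p^6,            Z(G) = C_p x C_p      ->  p^12
assert hom_order_exp([1] * 6, [1, 1]) == 12
```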
It only remains to establish the final thread of Theorem C. For an odd prime $p$, consider the group
\begin{eqnarray}\label{eqnl}
G &=& \langle x_1,\; x_2,\; x_3,\;x_4,\; x_5,\; x_6 \mid x_1^{p^2}= x_2^{p^2}= x_3^p = x_4^p = x_5^p = x_6^p = 1,\\
& &[x_1,x_2] = x_1^p,\; [x_1,x_3] = x_2^p, \; [x_2, x_3] = x_1^p, \; [x_1, x_4] = x_2^p,\; [x_2,x_4] = x_2^p,\nonumber\\
& & [x_3,x_4] = x_2^p,\; [x_1,x_5] = x_2^p,\; [x_2,x_5] = x_1^p,\; [x_3,x_5] = x_2^p, \; [x_4,x_5] = x_1^p,\nonumber\\
& & [x_1,x_6] = x_2^p,\; [x_2,x_6] = x_2^p,\; [x_3,x_6] = x_1^p, \; [x_4,x_6] = x_1^p,\; [x_5,x_6] = x_2^p \rangle \nonumber
\end{eqnarray}
The following lemma completes the proof of Theorem C.
\begin{lemma}
The group $G$, defined in \eqref{eqnl}, is a special $p$-group of order $p^8$ with $|\operatorname{Z} (G)| = p^2$ for all odd primes $p$. For $p = 3$, $G$ is a Camina group and $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ is elementary abelian of order $p^{12}$.
\end{lemma}
\begin{proof}
It is fairly easy to show that $G$ is a special $p$-group of order $p^8$ with $|\operatorname{Z} (G)| = p^2$. Then by Lemma \ref{lemma0}, it follows that $|\operatorname{Autcent} (G)| = p^{12}$. For $p = 3$, using GAP \cite{gap} one can easily establish that (i) $G$ has $737$ conjugacy classes and is therefore a Camina group; (ii) $\operatorname{Aut} (G)$ is elementary abelian of order $p^{12}$. Thus, by Proposition \ref{prop5a}, $G$ satisfies Hypothesis A. Hence $|\operatorname{Aut} (G)| = |\operatorname{Aut} _c(G)| = p^{12}$. \hfill $\Box$
\end{proof}
We conclude this section with some questions. A finite abelian group $Y$ of exponent $e$ is said to be \emph{homocyclic} if the set of its invariants is $\{e\}$. \\
\noindent{\bf Question 1.} Let $G$ be a finite $p$-group of class $2$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$. Is it true that $d(G) \ge 2 d(\gamma_2(G))$? If the answer is negative, what is the relationship between $d(G)$ and $d(\gamma_2(G))$?\\
\noindent{\bf Question 2.} Let $G$ be a finite $p$-group of class $2$ such that $\operatorname{Aut} (G) = \operatorname{Aut} _c(G)$ and $G/\operatorname{Z} (G)$ is homocyclic. Is $\gamma_2(G)$ homocyclic? If not, how large a homocyclic subgroup of the highest exponent does $\gamma_2(G)$ contain?\\
\noindent{\bf Acknowledgements.} I thank Prof. Everett C. Dade for some useful e-mail discussion during the year 2008. Example 2 above is due to him.
\section{Introduction}
The Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix is one of the most important free
parameters of the Standard Model, encoding CP violation as complex phases in some
of its matrix elements. These effects manifest themselves only in the couplings
of the third generation, which makes the weak decays of bottom flavored hadrons an
ideal setting for their study. Preliminary results from CLEO and CDF are already providing
tantalizing evidence for nontrivial phases in some of the CKM matrix elements,
and more precise determinations are expected soon from the $B$ factories presently
being commissioned. Ultimately, studies of weak $B$ decays will help test the Standard
Model mechanism of CP violation and explore the possible existence of new physics.
For our purposes, the following approximate form of the CKM matrix given by
Wolfenstein will be sufficient
\begin{eqnarray}
\left( \begin{array}{ccc}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{array}\right)
\simeq
\left( \begin{array}{ccc}
1-\lambda^2/2 & \lambda & A\lambda^3 (\rho-i\eta) \\
-\lambda & 1-\lambda^2/2 & A\lambda^2 \\
A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1
\end{array}\right)\,.
\end{eqnarray}
In this parametrization the CP-violating phases are restricted to two
matrix elements $V_{ub}=A\lambda^3 R_b e^{-i\gamma}$ and $V_{td}=A\lambda^3
R_t e^{-i\beta}$, where we defined $R_b=\sqrt{\rho^2+\eta^2}$ and
$R_t=\sqrt{(1-\rho)^2+\eta^2}$. Finally, a third weak phase $\alpha$ is defined by
$\alpha+\beta+\gamma=\pi$. The three weak phases are identical with the angles
of the unitarity triangle following from the unitarity condition of the CKM matrix
\begin{equation}\label{unittriangle}
V_{ud} V^*_{ub} + V_{cd} V^*_{cb} + V_{td} V^*_{tb} =0\,.
\end{equation}
Numerically, the best known parameters are $\lambda \simeq |V_{us}| = 0.2196\pm 0.0023$
and $A\equiv |V_{cb}|/\lambda^2 = 0.819\pm 0.058$ \cite{PDG}. The remaining parameters
have been estimated from global fits of the unitarity triangle as $R_b = |V_{ub}/
V_{cb}|/\lambda = 0.41\pm 0.07$ and $R_t = |V_{td}/V_{cb}|/\lambda = 1.01\pm 0.21$
\cite{Ros}. Although knowledge of the sides of the triangle (\ref{unittriangle})
is sufficient to determine its angles too, one would like to measure the latter
directly, which would provide a consistency test of the whole picture.
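For orientation (a numerical sketch, not part of the text), the central values $R_b = 0.41$ and $R_t = 1.01$ quoted above already determine $\rho$, $\eta$ and hence the three angles, via $\gamma = \arg(\rho + i\eta)$ and $\beta = \arg(1 - \rho + i\eta)$:

```python
import math, cmath

# Central values quoted in the text (used here purely for illustration)
R_b, R_t = 0.41, 1.01

# R_b^2 = rho^2 + eta^2 and R_t^2 = (1 - rho)^2 + eta^2  =>  solve for rho, eta
rho = (1.0 - R_t**2 + R_b**2) / 2.0
eta = math.sqrt(R_b**2 - rho**2)

gamma = cmath.phase(complex(rho, eta))        # from V_ub = A l^3 R_b e^{-i gamma}
beta  = cmath.phase(complex(1.0 - rho, eta))  # from V_td = A l^3 R_t e^{-i beta}
alpha = math.pi - beta - gamma                # alpha + beta + gamma = pi

deg = 180.0 / math.pi
print(f"rho = {rho:.3f}, eta = {eta:.3f}")
print(f"alpha = {alpha*deg:.1f}, beta = {beta*deg:.1f}, gamma = {gamma*deg:.1f} (degrees)")
```

This gives angles of roughly $\alpha \approx 77^\circ$, $\beta \approx 24^\circ$, $\gamma \approx 80^\circ$ for the central values, with no propagation of the quoted uncertainties.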
Several methods have been proposed for determining the weak phases from $B$ decay
data, the most popular of which can be divided into two large classes: a) methods using
mixing-induced CP
violation in neutral ($B_d$ or $B_s$) decays to CP eigenstates and b) methods using
time-independent charged and/or neutral $B$ decay rates (for a discussion of other
methods see the contribution by R. Fleischer
in these proceedings). The best known methods of type a) include the determination
of the weak phase $\alpha$ from $B^0(t)\to \pi^+\pi^-$ decays and of the phase
$\beta$ from $B^0(t)\to J/\psi K_S$ \cite{BiSa}. Such methods are more demanding
from a practical point of view, as they require
time-dependent measurements of the CP asymmetry.
The second class of methods employs the approximate flavour SU(3) symmetry
of the strong interactions \cite{SU(3)1,GHLR,SU(3)3}.
The basic idea is that any $B$ decay amplitude is given
by a linear combination of (unknown) strong matrix amplitudes $T_j$ times CKM
factors $\xi_j$ as $A = \sum_j \xi_{j} T_j$.
The strong amplitudes $T_j$ corresponding to different decays are related by
SU(3) symmetry, such that one aims to eliminate them completely by combining
sufficiently many physical decay amplitudes, in order to determine the $\xi_j$ factors.
A particularly elegant version of this approach is formulated in a graphical language,
wherein the weak phases appear as angles in diagrams constructed from physical decay
amplitudes \cite{GHLR,GL,GW}.
While simple and attractive from an experimental point of view, methods of this
type are fraught
with theoretical uncertainties such as SU(3) breaking effects, final state interactions
and electroweak penguin effects. We will discuss these issues at length in the
following sections.
\section{SU(3) flavor symmetry and nonleptonic $B$ decays}
The flavour symmetry of the strong interactions plays a useful role in
organizing the structure of weak decay amplitudes of $B$ mesons into two
pseudoscalars. The effective weak nonleptonic Hamiltonian responsible for these
decays is given by
\begin{eqnarray}\nonumber
{\cal H} &=& \frac{G_F}{\sqrt2}\sum_{q=d,s}\left(
\sum_{q'=u,c}
V_{q'b}^* V_{q'q} [ c_1 (\bar bq')_{V-A}(\bar q'q)_{V-A} +
c_2 (\bar bq)_{V-A}(\bar q'q')_{V-A}] \right.\\\label{Ham}
& &\left.\qquad\qquad - V_{tb}^* V_{tq}\sum_{i=3}^{10} c_i Q_i^{(q)}\right)\,,
\end{eqnarray}
where the eight operators $Q_i^{(q)}$ include four QCD penguin-type and
four EW penguin-type operators
\begin{eqnarray}\label{Q3}
Q_3^{(q)} = (\bar bq)_{V-A}\sum_{q'=u,d,s,c}(\bar q'q')_{V-A}\,,\,\,\,
Q_4^{(q)} = (\bar b_i q_j)_{V-A}\sum_{q'=u,d,s,c}(\bar q'_j q'_i)_{V-A}& &\\
Q_5^{(q)} = (\bar bq)_{V-A}\sum_{q'=u,d,s,c}(\bar q'q')_{V+A}\,,\,\,\,
Q_6^{(q)} = (\bar b_i q_j)_{V-A}\sum_{q'=u,d,s,c}(\bar q'_j q'_i)_{V+A}& &\label{Q6}
\end{eqnarray}
and
\begin{eqnarray}\label{Q7}
Q_7^{(q)} = \frac32(\bar bq)_{V-A}\sum_{q'}e_{q'}(\bar q'q')_{V+A}\,,\,\,\,
Q_8^{(q)} = \frac32(\bar b_i q_j)_{V-A}\sum_{q'}e_{q'}(\bar q'_j q'_i)_{V+A}& &\\
Q_9^{(q)} = \frac32(\bar bq)_{V-A}\sum_{q'}e_{q'}(\bar q'q')_{V-A}\,,\,\,\,
Q_{10}^{(q)} = \frac32(\bar b_i q_j)_{V-A}\sum_{q'}e_{q'}(\bar q'_j q'_i)_{V-A}\,.& &
\label{Q10}
\end{eqnarray}
Each term in the Hamiltonian (\ref{Ham}) contains a product
$\bar q\bar q q$ which transforms as $\bar{\bf 3}\otimes \bar{\bf 3}\otimes {\bf 3} =
\overline{\bf 15} \oplus {\bf 6}\oplus \bar{\bf 3}\oplus \bar{\bf 3} $ under
flavour SU(3). When expressed in terms of well-defined SU(3) transformation properties,
the tree part of the Hamiltonian (\ref{Ham}) reads (without a factor of $G_F/\sqrt2$)
\begin{eqnarray}\label{T}
{\cal H}_T = \lambda_u^{(s)}
[\frac12(c_1-c_2)(-\bar {\bf 3}^{(a)}_{I=0} - {\bf 6}_{I=1}) +
\frac12(c_1+c_2)(-\overline {\bf 15}_{I=1} - \frac{1}{\sqrt2}\overline {\bf 15}_{I=0}
+\frac{1}{\sqrt2}\bar {\bf 3}^{(s)}_{I=0})] & &\nonumber\\
+ \lambda_u^{(d)}
[\frac12(c_1-c_2)({\bf 6}_{I=\frac12} - \bar {\bf 3}^{(a)}_{I=\frac12}) +
\frac12(c_1+c_2)(-\frac{2}{\sqrt3}\overline {\bf 15}_{I=\frac32} -
\frac{1}{\sqrt6}\overline {\bf 15}_{I=\frac12}
+\frac{1}{\sqrt2}\bar {\bf 3}^{(s)}_{I=\frac12})] ~.& &
\end{eqnarray}
Here we denoted the combinations of CKM factors by $\lambda_{q'}^{(q)}=V^*_{q'b}V_{q'q}$.
There are two $\bar{\bf 3}$ operators, chosen to be symmetric and antisymmetric,
respectively, under permutation of the two $\bar q$ fields in $\bar q\bar q q$. The explicit
form of the operators in (\ref{T}) can be found in \cite{GPY}.
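The dimension bookkeeping behind these SU(3) decompositions can be verified with a few lines of code; this is a trivial cross-check added here for illustration, not part of the original analysis (the symmetric product $[{\bf 8}\otimes{\bf 8}]_S$ is the one relevant for the two-pseudoscalar final state discussed below):

```python
# Dimension counting for the SU(3) products used in the text:
# 3bar x 3bar x 3 = 15bar + 6 + 3bar + 3bar for the weak Hamiltonian,
# and the symmetric product [8 x 8]_S = 27 + 8 + 1 for the PP final state.
dim_product = 3 * 3 * 3                  # dim(3bar) * dim(3bar) * dim(3)
dim_irreps = 15 + 6 + 3 + 3              # 15bar + 6 + 3bar + 3bar
assert dim_product == dim_irreps == 27

dim_sym = 8 * (8 + 1) // 2               # symmetric part of 8 x 8 has dim 36
assert dim_sym == 27 + 8 + 1 == 36
print("dimension counts consistent")
```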
The final state in the decay consists of two octet pseudoscalar Goldstone bosons.
Bose symmetry constrains its flavour wave function to be symmetric,
which allows only certain representations
$[{\bf 8}\otimes {\bf 8}]_S = {\bf 27} \oplus {\bf 8} \oplus {\bf 1}$.
The most general form of the decay matrix element induced by the Hamiltonian
(\ref{T}) is given by the Wigner-Eckart theorem, which is written in tensor
language as (omitting the CKM factors)
\begin{eqnarray}\label{WE}
{\cal H} &=&
\langle {\bf 27}|\!| \overline{{\bf 15}} |\!|{\bf 3}\rangle
\bar M^{i_1 i_2}_{j_1 j_2} H^{j_1}_{i_1 i_2} B^{j_2} +
\langle {\bf 8}|\!| \overline{{\bf 15}} |\!|{\bf 3}\rangle
\bar M^{i_1}_{j_1} H^{j_1}_{i_1 i_2} B^{i_2}\\
& &\hspace{-1cm} + \langle {\bf 8}|\!| {\bf 6} |\!|{\bf 3}\rangle
\epsilon_{abc} \bar M^{a}_{i} H^{ib} B^{c}
+ \langle {\bf 8}|\!| {\bar {\bf 3}}^{(a)} |\!|{\bf 3}\rangle
\bar M^{i}_{j} H_{i} B^{j}
+ \langle {\bf 1}|\!| {\bar{\bf 3}}^{(a)} |\!|{\bf 3}\rangle
\bar M H_{i} B^{i}\,.\nonumber
\end{eqnarray}
We denoted here by $M$ the possible tensors which can be formed from the usual
octet pseudoscalar matrix $P^i_j = \frac{1}{\sqrt2}\pi^a\lambda^a$, corresponding to
the symmetric representations of SU(3) mentioned above, and by $H$ the tensors
appearing in the SU(3) decomposition of the weak Hamiltonian.
The expansion of (\ref{WE}) gives any $B$ decay
amplitude into two pseudoscalars as a linear combination of reduced SU(3) matrix
elements. The results are tabulated in an easy-to-use form in the Appendix of \cite{SU(3)3}.
There exists an equivalent description of SU(3) amplitudes in terms of quark diagrams
\cite{GHLR}, wherein a decay amplitude is decomposed into contributions which can be
associated with certain quark topologies. There are six graphical amplitudes, denoted
with $T$ (tree), $C$ (color-suppressed), $A$ (annihilation), $E$ ($W$-exchange),
$P$ (penguin) and $PA$ (penguin annihilation). The factorization approximation, combined
with quark models for the form factors, can be used to estimate these graphical
amplitudes (see, e.g., \cite{fact,fact1,fact2}). In this way a hierarchy emerges,
according to which
the dominant amplitude is $T$, followed by $C$ which is smaller by a factor
$a_2/a_1\simeq 0.2$.
The annihilation-type amplitudes $A$ and $E$ are predicted to be further suppressed by
a factor $f_B/m_B\simeq 0.05$ relative to $T$ (they can, however, be enhanced by rescattering
effects \cite{rescatt1,rescatt1.5,Uspin1,rescatt2,rescatt3,AtSo,Uspin2,Nr,He}).
The QCD penguin
amplitude $P$ contributes to
$\Delta S=0$ decays at the same order as $C$, and the $PA$ amplitude is suppressed relative
to it as in the
case of $A$ and $E$. This additional dynamical information makes the
graphical method more predictive than the group-theoretical approach discussed above.
We quote for later use the $B^+\to K\pi$ decay amplitudes in
quark diagram language.
\begin{eqnarray}\label{A(K0pi+)}
& &A(B^+\to K^0\pi^+) =\\
& &\qquad \lambda_u^{(s)}(P_u+A) + \lambda_c^{(s)}P_c +
\lambda_t^{(s)}(P_t+P^{EW}_t(B^+\to K^0\pi^+))~,\nonumber\\
& &\sqrt2 A(B^+\to K^+\pi^0) =\\
& &\quad -\lambda_u^{(s)}(T+C+P_u+A) - \lambda_c^{(s)}P_c +
\lambda_t^{(s)}(-P_t+\sqrt2 P^{EW}_t(B^+\to K^+\pi^0))~.\nonumber
\end{eqnarray}
The unitarity of the CKM matrix can be used to eliminate the charm penguin term
$P_c$ with the help of the relation $\lambda_c^{(s)} = -\lambda_u^{(s)}-\lambda_t^{(s)}$
by absorbing it into $P_{uc}\equiv P_u-P_c$ and $P_{tc}\equiv P_t-P_c$.
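The elimination of $P_c$ is simple algebra; as a quick sanity check (with randomly chosen complex numbers standing in for the CKM factors and penguin amplitudes, an illustrative assumption), one can verify that the two forms agree once $\lambda_c^{(s)} = -\lambda_u^{(s)}-\lambda_t^{(s)}$ is imposed:

```python
import random

random.seed(1)
def rnd():
    # random complex stand-in for a CKM factor or penguin amplitude
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

lam_u, lam_t = rnd(), rnd()
P_u, P_c, P_t = rnd(), rnd(), rnd()
lam_c = -lam_u - lam_t                 # CKM unitarity

full = lam_u * P_u + lam_c * P_c + lam_t * P_t
rewritten = lam_u * (P_u - P_c) + lam_t * (P_t - P_c)   # P_uc, P_tc form
assert abs(full - rewritten) < 1e-12
```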
\subsection{U-spin symmetry}
At first sight, the weak Hamiltonian (\ref{T}) appears to contain all
possible SU(3) representations allowed by the quark structure of the four-quark
operators, which would imply that no special symmetry relations exist among
decay amplitudes. In fact, an examination of the quark content of the Hamiltonian
(\ref{Ham}) shows that it transforms as a doublet
under $U$-spin symmetry (the subgroup of SU(3) exchanging $d$ and $s$ quarks).
Although the $\overline{\bf 15}$ representation contains both $U=1/2,3/2$ components,
the $U=3/2$ piece cancels in the specific combinations $\overline{\bf 15}_{I=1}
+ \frac{1}{\sqrt2}\overline{\bf 15}_{I=0}$ and
$\overline{\bf 15}_{I=3/2}+\frac{1}{2\sqrt2} \overline{\bf 15}_{I=1/2}$
appearing in (\ref{T}).
The most useful amplitude relations to be used in the following are consequences of
this symmetry property.
The weak Hamiltonian (\ref{T}) can be written as
\begin{eqnarray}\label{HamUspin}
{\cal H}_W &=& \left( V_{ub}^* V_{ud} {\cal T}^{(-\frac12)} +
V_{tb}^* V_{td} {\cal P}^{(-\frac12)}\right) -
\left( V_{ub}^* V_{us} {\cal T}^{(+\frac12)} +
V_{tb}^* V_{ts} {\cal P}^{(+\frac12)}\right)\,,
\end{eqnarray}
with ${\cal T}^{(U_3)}$ and ${\cal P}^{(U_3)}$ two $U=1/2$ operators standing for ``tree''
and ``penguin'' contributions respectively. The latter includes both strong and electroweak
penguin operators.
From the point of view of $U$-spin symmetry, the octet of pseudoscalar Goldstone
bosons contains one $U$-spin triplet ${\cal U}_1$, two doublets ${\cal U}_2$,
${\cal U}_3$ and one singlet ${\cal U}_4$. Their components are
\begin{eqnarray}
{\cal U}_1 =
\left( \begin{array}{c}
K^0 \\
\frac{\sqrt3}{2}\eta_8 - \frac12\pi^0 \\
-\bar K^0 \end{array}\right)\,,\quad
{\cal U}_2 =
\left( \begin{array}{c}
K^+ \\
-\pi^+ \end{array}\right)\,,\quad
{\cal U}_3 =
\left( \begin{array}{c}
\pi^- \\
-K^- \end{array}\right)
\end{eqnarray}
and ${\cal U}_4 = \frac{\sqrt3}{2}\pi^0 + \frac12\eta_8$.
To demonstrate the power of $U$-spin symmetry we derive a triangle relation \cite{GRL}
connecting the ``tree'' contributions to the $\Delta S=1$ and $\Delta S=0$ $B^+$ decays
\begin{equation}\label{triangle}
A(B^+\to K^0\pi^+) + \sqrt2 A(B^+\to K^+\pi^0) = \frac{V_{us}}{V_{ud}}
\sqrt2 A(B^+\to \pi^+\pi^0)\,.
\end{equation}
This relation (more precisely its extension including EWP contributions)
plays an important role in certain methods of bounding \cite{NR1} or determining
\cite{NR2,BuFl,GPY,N} the weak phase $\gamma$ from $B\to K\pi$ decays.
The strong penguin component in ${\cal P}$ does not contribute to either side of this
relation because of isospin constraints. However, the electroweak penguin components with
$I=1$ and $I=3/2$ respectively do contribute \cite{DesHe,Fl}, which will introduce a
correction to Eq.~(\ref{triangle}). This will be discussed in the next section.
The final states on the left-hand side of this relation have $U_3=+1/2$ and
can be obtained by combining ${\cal U}_1\otimes {\cal U}_2$ to a total $U$-spin 1/2
or 3/2, or by combining ${\cal U}_1\otimes {\cal U}_4$:
\begin{eqnarray}\label{K0pi+}
& &|K^0\pi^+\rangle = -\frac{1}{\sqrt3}|[{\cal U}_1\otimes {\cal U}_2]_{\frac32}\rangle
- \sqrt{\frac23} |[{\cal U}_1\otimes {\cal U}_2]_{\frac12}\rangle\\
& &|K^+\pi^0\rangle = -\frac{1}{\sqrt6}|[{\cal U}_1\otimes {\cal U}_2]_{\frac32}\rangle
+ \frac{1}{2\sqrt3} |[{\cal U}_1\otimes {\cal U}_2]_{\frac12}\rangle
+ \frac{\sqrt3}{2} |[{\cal U}_1\otimes {\cal U}_4]_{\frac12}\rangle\\
& &|K^+\eta_8\rangle = \frac{1}{\sqrt2}|[{\cal U}_1\otimes {\cal U}_2]_{\frac32}\rangle
- \frac{1}{2} |[{\cal U}_1\otimes {\cal U}_2]_{\frac12}\rangle
+ \frac{1}{2} |[{\cal U}_1\otimes {\cal U}_4]_{\frac12}\rangle\,.
\end{eqnarray}
In the strangeless sector one has the $U_3=-1/2$ states
\begin{eqnarray}
& &|\pi^0\pi^+\rangle = \frac{1}{\sqrt6}|[{\cal U}_1\otimes {\cal U}_2]_{\frac32}\rangle
+ \frac{1}{2\sqrt3} |[{\cal U}_1\otimes {\cal U}_2]_{\frac12}\rangle -\frac{\sqrt3}{2}
|[{\cal U}_1\otimes {\cal U}_4]_{\frac12}
\rangle\\\label{K+K0bar}
& &|K^+\bar K^0\rangle = -\frac{1}{\sqrt3}|[{\cal U}_1\otimes {\cal U}_2]_{\frac32}\rangle
+ \sqrt{\frac23} |[{\cal U}_1\otimes {\cal U}_2]_{\frac12}\rangle\\
& &|\pi^+\eta_8\rangle = -\frac{1}{\sqrt2}|[{\cal U}_1\otimes {\cal U}_2]_{\frac32}\rangle
- \frac12 |[{\cal U}_1\otimes {\cal U}_2]_{\frac12}\rangle
- \frac12 |[{\cal U}_1\otimes {\cal U}_4]_{\frac12}\rangle \,.
\end{eqnarray}
The initial state in $B^+$ decays is a $U$-spin singlet. Using the above expressions
for the final states, the relation (\ref{triangle}) follows simply as a consequence
of the absence of a $U=3/2$ term in the weak Hamiltonian.
Another useful application of $U$-spin symmetry is the existence of pairs of
processes which are described by the same strong amplitudes, multiplied by different
CKM factors. This is the case, e.g., with the $B^+$ decays into $K^0\pi^+$ (\ref{K0pi+})
and $K^+\bar K^0$ (\ref{K+K0bar}), for which the final states contain the same $U=1/2$
$U$-spin multiplet. This gives \cite{Uspin1,Uspin2}
\begin{eqnarray}\label{U1}
A(B^+\to K^0\pi^+) &=& V_{ub}^* V_{us} A + V_{tb}^* V_{ts} P\\\label{U2}
A(B^+\to K^+\bar K^0) &=& V_{ub}^* V_{ud} A + V_{tb}^* V_{td} P\,,
\end{eqnarray}
with $A$ and $P$ the reduced matrix elements of the operators ${\cal T}$ and ${\cal P}$
in (\ref{HamUspin}). Knowledge of the ratio of charge-averaged
rates for such a pair can be used to constrain the ratio of strong amplitudes entering both
of them
\begin{equation}\label{Ubound}
|A/P| < \lambda\sqrt{\frac{B(B^\pm\to K^0\pi^\pm)}{B(B^\pm\to K^\pm\bar K^0)}}\,.
\end{equation}
Also, the CP asymmetries of two such processes are equal and of opposite sign \cite{Uspin2}.
Similar relations have been used for the pair of decay amplitudes
$(B^0, B_s)\to J/\psi K_S$ \cite{FlJpsi}
and for $A(B^0\to \pi^+\pi^-)$ and $A(B_s\to K^+ K^-)$
\cite{bound,FlBs}.
\section{Electroweak penguin effects}
The contributions of the electroweak penguin operators $Q_{7-10}$ (\ref{Q7})-(\ref{Q10}) are
suppressed relative to those of the strong penguins $Q_{3-6}$ (\ref{Q3})-(\ref{Q6})
by roughly a factor of $\alpha_{\rm e.m.}/(\alpha_s\sin^2\theta_W) \simeq 0.17$
\cite{GHLREWP,FlReview}, which is not negligibly small.
They are especially significant in penguin-dominated decays like $B\to K\pi$, where
the magnitude of the EWP amplitudes is comparable to that of the tree amplitudes.
Therefore, a precise control over their effects is important for an understanding of
these decays.
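Numerically, the quoted suppression factor can be reproduced with typical coupling values at the $m_b$ scale; the values of $\alpha_{\rm e.m.}$, $\alpha_s$ and $\sin^2\theta_W$ below are assumed inputs, not taken from the text:

```python
alpha_em = 1 / 129.0      # electromagnetic coupling near m_b (assumed)
alpha_s = 0.22            # strong coupling at m_b (assumed)
sin2_thW = 0.23           # weak mixing angle (assumed)

ratio = alpha_em / (alpha_s * sin2_thW)
print(f"alpha_em / (alpha_s sin^2 theta_W) ~ {ratio:.2f}")
```

With these inputs the ratio comes out near $0.15$, in line with the quoted estimate of $\simeq 0.17$ given the spread in the input couplings.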
The Wilson coefficients $c_{7-10}$ have been computed to next-to-leading order (see
\cite{NLO} for a review) with the results (at the $m_b$ scale)
\begin{equation}
(c_7,\,c_8,\,c_9,\,c_{10})\,=\,(-0.002,
0.054, -1.292, 0.263)\alpha_{\rm e.m.}\,.
\end{equation}
Neglecting the small contributions of the operators $Q_{7,8}$ leads to important
simplifications \cite{Nr,Flr}, as the remaining EWP operators $Q_{9,10}$ are related
by a Fierz
transformation to the current-current operators $Q_{1,2}$. Performing a SU(3) decomposition
one obtains the following expression for the EWP Hamiltonian in terms of the
$(V-A)\times (V-A)$ operators introduced in (\ref{T})
\begin{eqnarray}\label{EWP}
{\cal H}_{EWP} &\simeq & \frac{G_F}{\sqrt2}
\left\{-\lambda_t^{(s)}\left(c_9 Q^{(s)}_9 + c_{10}
Q^{(s)}_{10}\right) -
\lambda_t^{(d)}\left(c_9 Q^{(d)}_9 + c_{10} Q^{(d)}_{10}\right)\right\} = \\
& &\hspace{-1cm} \frac{G_F}{\sqrt2}\left\{
-\frac{\lambda_t^{(s)}}{2}\left(
\frac{c_9-c_{10}}{2}(3\cdot {\bf 6}_{I=1} + \bar {\bf 3}^{(a)}_{I=0} ) +
\frac{c_9+c_{10}}{2}( -3\cdot\overline {\bf 15}_{I=1}
-\frac{3}{\sqrt2}\overline {\bf 15}_{I=0}
-\frac{1}{\sqrt2}\bar {\bf 3}^{(s)}_{I=0} )\right)\right.\nonumber\\
& &\hspace{-2cm}\left. -\frac{\lambda_t^{(d)}}{2}\left(
\frac{c_9-c_{10}}{2}(-3\cdot {\bf 6}_{I=\frac12} + \bar {\bf 3}^{(a)}_{I=\frac12} ) +
\frac{c_9+c_{10}}{2}( -\sqrt{\frac32}\cdot\overline {\bf 15}_{I=\frac12}
-2\sqrt3\cdot \overline {\bf 15}_{I=\frac32}
-\frac{1}{\sqrt2}\bar {\bf 3}^{(s)}_{I=\frac12} )\right)\right\}~.\nonumber
\end{eqnarray}
Now the SU(3) methods discussed in Sec.~2 can be applied to express the EWP amplitude
corresponding to any $B$ decay in terms of ``tree'' amplitudes alone. The results
have been presented in \cite{GPY} in a quark diagram language, which has the advantage
of allowing an immediate insight into the relative size of different contributions.
In particular, this justifies the color-suppression of certain EWP contributions conjectured
in \cite{GHLREWP,Flr}.
The $U$-spin formalism discussed in Sec.~2.1 can be used to give a simple derivation
of the correction to
the triangle relation (\ref{triangle}) arising from EWP effects \cite{NR1}.
As mentioned, these corrections appear because the EWP Hamiltonian contains
$I=1$ operators in the $\Delta S=1$ sector and $I=3/2$ operators in the $\Delta S=0$
sector respectively, as one can see from (\ref{EWP}). Their matrix elements can be related
thanks to the special structure of the weak Hamiltonian (\ref{HamUspin}) written in
$U$-spin symmetric form:
\begin{eqnarray}
& &{\cal T} =
\frac12 (c_1+c_2) {\cal D}_1 +
\frac12 (c_1-c_2) {\cal D}_2\\
& &{\cal P}^{EWP} =
\frac12 (c_9+c_{10}) [-\frac32 {\cal D}_1 +{\cal D}_3] +
\frac12 (c_9-c_{10}) {\cal D}_4\,,
\end{eqnarray}
where ${\cal D}_2^{(-\frac12)}$, ${\cal D}_3^{(-\frac12)}$ and
${\cal D}_4^{(-\frac12)}$ are $I=1/2$ operators and only ${\cal D}_1^{(-\frac12)}$ has $I=3/2$.
This special property can be used \cite{NR1} to prove that,
although ${\cal D}_3^{(+\frac12)}$ and ${\cal D}_4^{(+\frac12)}$
contain $I=1$ pieces, they do not contribute to the LHS of (\ref{triangle}).
Therefore the EWP contribution to the LHS of (\ref{triangle}) can be expressed
solely in terms of the amplitude $A(B^+\to \pi^+\pi^0)$ induced by ${\cal D}_1$.
One obtains in this way the following generalization of (\ref{triangle}) including
the contributions of the EW penguin effects
\begin{eqnarray}\label{triangleEWP}
& &A(B^+\to K^0\pi^+) + \sqrt2 A(B^+\to K^+\pi^0) = \\
& &\qquad
\frac{V_{us}}{V_{ud}}\frac{f_K}{f_\pi}\sqrt2 A(B^+\to \pi^+\pi^0)
\left( 1 - \frac{c_9+c_{10}}{c_1+c_2}\frac{3}{2R_b\lambda^2}e^{-i\gamma}
\right)\,.\nonumber
\end{eqnarray}
In this relation the EWP contributions to the decay amplitude
$A(B^+\to \pi^+\pi^0)$ have been neglected. They can be included in a model-independent way too
\cite{BuFl,GPY}, although their numerical impact turns out to be rather small,
in accordance with earlier estimates \cite{GHLREWP}. The ratio $f_K/f_\pi\simeq 1.22$
accounts for factorizable SU(3) breaking in the leading tree amplitude.
It is interesting to note that there exist SU(3) amplitude relations which are not
affected by EWP effects. One of them has been noted by Deshpande and He \cite{DesHe},
who used it as the basis of a different method for determining $\gamma$ (see also \cite{GrRos}).
This relation follows from the absence of a $U=3/2$ term in the weak Hamiltonian:
\begin{eqnarray}
0 &=& \langle [{\cal U}_1\otimes {\cal U}_2]_{\frac32}, U_3=+\frac12|H_W|B^+\rangle\\
&=&
\sqrt2 A(B^+\to K^+\pi^0) - \sqrt6 A(B^+\to K^+\eta_8) +
2 A(B^+\to K^0\pi^+)\,.\nonumber
\end{eqnarray}
Its analog for $\Delta S=0$ decays has been used in \cite{GP} and relates
$B^+$ decay amplitudes into strangeless final states
\begin{eqnarray}\label{GP}
0 &=& \langle [{\cal U}_1\otimes {\cal U}_2]_{\frac32}, U_3=-\frac12|H_W|B^+\rangle\\
&=& A(B^+\to K^+\bar K^0) + \sqrt{\frac32} A(B^+\to \pi^+\eta_8)
- \frac{1}{\sqrt2} A(B^+\to \pi^+\pi^0)\,.\nonumber
\end{eqnarray}
\section{Determining the weak phase $\gamma$ using $B\to K\pi$ decays}
A method for determining the weak phase $\gamma$ has been proposed in \cite{GRL},
requiring the $B^+$ decay rates into $K^0\pi^+, K^+\pi^0, \pi^+\pi^0$ and their charge
conjugates. This method, subsequently improved in \cite{NR2} by including EW penguin
effects, rests on the following assumptions:
a) flavor SU(3) symmetry, as embodied in the triangle relation (\ref{triangle}) and in
its version (\ref{triangleEWP}) including EWP effects.
b) the absence of a term with nontrivial weak phase in the amplitude $A(B^+\to K^0\pi^+)$.
This amplitude has been given in (\ref{A(K0pi+)}) and can be rewritten as
\begin{eqnarray}\label{epsA}
A(B^+\to K^0\pi^+) = -A\lambda^2 P\left(1 + \varepsilon_A e^{i\phi_A} e^{i\gamma}
\right)
\end{eqnarray}
with $\varepsilon_A, \phi_A$ parametrizing the magnitude and phase of the
annihilation contribution relative to the dominant penguin one.
Neglecting the annihilation amplitude ($\varepsilon_A\simeq 0$), the SU(3) triangle
(\ref{triangle}) and its CP conjugate can be represented together as shown in Fig.~1.
The circle has radius $\delta_{EW}$ in units of $\sqrt2 A(B^+\to \pi^+\pi^0)$, with
$\delta_{EW} = -\frac{3}{2\lambda^2 R_b}\frac{c_9+c_{10}}{c_1+c_2} = 0.66\pm 0.15$.
The relative orientation of the two triangles is fixed together with the weak phase $\gamma$
by requiring the equality of the two angles denoted $2\gamma$ in Fig.~1.
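With the Wilson coefficients quoted in Sec.~3, the central value of $\delta_{EW}$ can be reproduced numerically; the values of $\alpha_{\rm e.m.}$, $c_1+c_2$, $\lambda$ and $R_b$ below are assumed inputs, not taken from the text:

```python
alpha_em = 1 / 129.0                        # assumed value near the m_b scale
c9 = -1.292 * alpha_em                      # NLO Wilson coefficients (Sec. 3)
c10 = 0.263 * alpha_em
c1_plus_c2 = 0.90                           # current-current coefficients at m_b (assumed)
lam, R_b = 0.22, 0.41                       # Wolfenstein lambda and R_b (assumed)

delta_EW = -3 / (2 * lam**2 * R_b) * (c9 + c10) / c1_plus_c2
print(f"delta_EW ~ {delta_EW:.2f}")         # falls within the quoted 0.66 +- 0.15
```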
There are several sources of theoretical errors affecting this determination. First,
there are uncertainties in the value of $\delta_{EW}$ from SU(3) breaking effects and
the imprecisely known value of the ratio $|V_{ub}/V_{cb}|$. The former have been
computed in the factorization approximation \cite{NR1,N}
and they lower $\delta_{EW}$ by $(6\pm 6)\%$
compared to its SU(3) value, although nonfactorizable SU(3) breaking, which could be
significant \cite{BuFl}, remains unknown. At present the latter dominate the error on
$\delta_{EW}$ but they are likely to decrease as the ratio of CKM matrix elements is
better measured.
We will focus in the following on another intrinsic uncertainty of this method, arising
from rescattering effects (assumption (b) above). As explained above, the naive
factorization approximation suggests that the component with weak phase $\gamma$ in the
amplitude (\ref{epsA}) is suppressed by a factor $f_B/m_B\simeq 0.05$ and is thus
negligibly small. However, dynamical calculations
\cite{rescatt1,rescatt1.5,rescatt2,rescatt3,rescatt4}
suggest that rescattering effects can induce a nonnegligible value for $\varepsilon_A$.
For example, elastic rescattering through a color-allowed intermediate
state, as in $B^+\to \{ K^+\pi^0\}\to K^0\pi^+$, can conceivably enhance the annihilation
contribution.
The $U$-spin relation (\ref{Ubound}) can be used to give an upper bound on the magnitude
of these effects $\varepsilon_A < 0.18$, which is not yet very stringent.
We used here the CLEO results \cite{Frank} $B(B^\pm\to K^\pm K^0)<0.9\cdot 10^{-5}$
(at 90\% CL) and $B(B^\pm\to K^0\pi^\pm)=(1.4\pm 0.5\pm 0.2)\cdot 10^{-5}$.
We will adopt in our following estimates the value $\varepsilon_A=0.1$.
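The quoted number can be traced numerically. A minimal sketch, assuming that the bound on $\varepsilon_A$ follows from the CLEO inputs above as $\lambda\sqrt{B(B^\pm\to K^\pm K^0)/B(B^\pm\to K^0\pi^\pm)}$:

```python
from math import sqrt

lam = 0.22                 # Wolfenstein lambda (assumed)
B_KK = 0.9e-5              # 90% CL upper limit on B(B+- -> K+- K0)  [CLEO]
B_Kpi = 1.4e-5             # central value of B(B+- -> K0 pi+-)      [CLEO]

eps_A_max = lam * sqrt(B_KK / B_Kpi)
print(f"eps_A < {eps_A_max:.2f}")   # reproduces the quoted bound of 0.18
```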
The complete set of $B^+\to K\pi$ decay amplitudes is defined by specifying $\varepsilon,
\phi_P, \gamma$ and the rescattering parameters $\varepsilon_A, \phi_A$, where we
denote the ``tree-to-penguin'' ratio
\begin{equation}
\varepsilon = \lambda\frac{f_K}{f_\pi}\sqrt{\frac{B(B^\pm\to \pi^\pm\pi^0)}{B(B^\pm\to
K^0\pi^\pm)}}
\end{equation}
and the relative phase $\phi_P={\rm Arg}(P/(T+C))$. One can simulate sets of decay amplitudes
corresponding to given values of these parameters and study the impact of rescattering
on the extracted value of $\gamma$.
In Fig.~2 are shown the results of such a simulation using the input values
$\varepsilon=0.24$, $\varepsilon_A=0.1$ and $\gamma=76^\circ$. In Fig.~2(a) the
output value of $\gamma$ is plotted as a function of $\phi_A$ at $\phi_P=60^\circ$ and
$\phi_P=90^\circ$, and in Fig.~2(b) the dependence of $\gamma$ is shown as a function
of $\phi_P$ at $\phi_A=0^\circ$.
The most notable feature of these results is the large deviation, of about $14^\circ$,
of the extracted $\gamma$ from its physical value for a strong phase $\phi_P$ around $90^\circ$.
This example illustrates the possible significance of the rescattering corrections on this
method, even for moderate values of $\varepsilon_A\simeq 0.1$.
A modified version of this method for determining $\gamma$ has been proposed in \cite{N},
with the view of minimizing the rescattering effects. This method is formulated in terms
of two quantities $R_*$ and $\tilde A$ defined by
\begin{eqnarray}
& &R_* \equiv \frac{B(B^\pm\to K^0\pi^\pm)}{2B(B^\pm\to K^\pm\pi^0)}~,
\\
& &\tilde A \equiv \frac{B(B^+\to K^+\pi^0)-B(B^-\to K^-\pi^0)}{B(B^\pm\to
K^0\pi^\pm)} - \frac{B(B^+\to K^0\pi^+)-B(B^-\to \bar K^0\pi^-)}{2B(B^\pm\to
K^0\pi^\pm)}~.\nonumber
\end{eqnarray}
These quantities do not contain ${\cal O}(\varepsilon_A)$ terms; their dependence
on the rescattering parameter $\varepsilon_A$ appears only at order
${\cal O}(\varepsilon\varepsilon_A)$. Therefore, it was argued in \cite{N},
the determination of $\gamma$, by setting $\varepsilon_A=0$ in the expressions for
$R_*$ and $\tilde A$, is insensitive to rescattering effects.
This procedure gives two equations for $\gamma$ and $\phi_P$ which can
be solved simultaneously from $R_*$ and $\tilde A$. Using two pairs of input
values for ($R_*, \tilde A$) (corresponding to a restricted range for $\phi_A$
and $\phi_P$) seemed to indicate that the error in $\gamma$
for $\varepsilon_A=0.08$ is only about $5^{\circ}$.
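The cancellation of the ${\cal O}(\varepsilon_A)$ terms in $\tilde A$ can be illustrated with a schematic simulation. The amplitude parametrization below is an assumption, chosen to be consistent with the structure of Eqs.~(\ref{triangleEWP}) and (\ref{epsA}) with all amplitudes in units of the penguin; it is not the exact expression of \cite{N}:

```python
import cmath
from math import radians

def amps(eps, phiP, gamma, epsA, phiA, dEW=0.66):
    """Schematic A(B+ -> K0 pi+) and sqrt2*A(B+ -> K+ pi0), in penguin units."""
    resc = epsA * cmath.exp(1j * (phiA + gamma))          # rescattering term
    tree = eps * cmath.exp(1j * phiP) * (cmath.exp(1j * gamma) - dEW)
    return 1 + resc, 1 + resc - tree

def observables(eps, phiP, gamma, epsA, phiA):
    A1, A2 = amps(eps, phiP, gamma, epsA, phiA)
    A1b, A2b = amps(eps, phiP, -gamma, epsA, phiA)        # CP conjugates
    B1, B1b = abs(A1)**2, abs(A1b)**2                     # B(K0 pi+), conj.
    B2, B2b = abs(A2)**2 / 2, abs(A2b)**2 / 2             # B(K+ pi0), conj.
    B1avg = (B1 + B1b) / 2
    R_star = B1avg / (B2 + B2b)
    A_tilde = (B2 - B2b) / B1avg - (B1 - B1b) / (2 * B1avg)
    return R_star, A_tilde

eps, phiP, gamma, phiA = 0.24, radians(60), radians(76), radians(30)
_, At_no_resc = observables(eps, phiP, gamma, 0.0, phiA)
_, At_resc = observables(eps, phiP, gamma, 0.1, phiA)
shift = abs(At_resc - At_no_resc)
# the O(epsA) pieces cancel in A_tilde; the residual shift is O(eps * epsA)
print(f"A_tilde shift from eps_A = 0.1: {shift:.3f}")
```

In this toy model the shift in $\tilde A$ induced by $\varepsilon_A=0.1$ is indeed at the percent level, far below the ${\cal O}(\varepsilon_A)$ direct asymmetries themselves.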
In Fig.~3 are shown the results of such an analysis carried out for the entire
parameter space of $\phi_A$ and $\phi_P$.
Whereas the angle $\phi_P$ can
be recovered with small errors, the results for $\gamma$ show the same
large rescattering effects for values of $\phi_P$ around 90$^\circ$ as in Fig.~2.
(A slight improvement is the absence of a discrete ambiguity in the value of
$\gamma$.) These results indicate that the large deviation of $\gamma$ from its
physical value for $\phi_P=90^\circ$ is a general phenomenon, common to all
variants of this method. Some information about the size of the
expected error can be obtained by first determining $\phi_P$. Values not too
close to 90$^\circ$ would be an indication for a small error.
\subsection{Eliminating the rescattering effects using additional processes}
Several modifications of the method discussed above have been proposed
\cite{Flr,BuFl,GP,AgDe}, which use additional processes in order to completely
eliminate the rescattering contributions. All of these methods make use of the
decays $B^\pm\to K^\pm K^0$ which are related by $U$-spin to the amplitude
$B^\pm\to K^0\pi^\pm$ affected by rescattering by (\ref{U1}), (\ref{U2}). Using these
relations one can see that the rescattering effects cancel out in the difference
$A(B^+\to K^0\pi^+) - \lambda A(B^+\to K^+\bar K^0)$.
This is illustrated in Fig.~4, where in addition to the SU(3) triangles of Fig.~1
the amplitudes $\lambda A(B^+\to K^+\bar K^0)$ and of its CP conjugate are shown as
the segments $OC$ and $OD$ respectively. Assuming that the positions of $OC$ and $OD$
are known, then the relative orientation of the $B\to K\pi$ triangles and thereby $\gamma$
can be fixed by requiring the equality of the two angles marked $2\gamma$ in Fig.~4.
The various existing methods in the literature differ in the way the positions of
the $OC$ and $OD$ segments are determined.
A minimal extension has been proposed in \cite{Flr,BuFl} which requires, in addition to
$B^+\to K\pi, \pi^+\pi^0$ data, only the charge-averaged rate for $B^\pm\to K^\pm K^0$.
In the geometrical formulation given in \cite{NR2}, this method works
by requiring the equality of the two segments $|YC|=|YD|$ (both considered as functions
of $\gamma$) in Fig.~4. Due to the smallness of the rescattering contribution relative to
the penguin amplitude, this equality is almost automatic for most values of $\gamma$,
which is to say that small errors in the amplitudes $A(B^+\to K^+\bar K^0)$ are amplified
in the extracted value of $\gamma$. Also, SU(3) breaking effects introduce large errors,
which can be however controlled if the direct CP asymmetry of the $K^+\bar K^0$ mode is
measured.
An improvement of this approach has been given in \cite{GP}, where the positions of the
segments $OC$ and $OD$ are determined independently of the $B^+\to K\pi$ data, with the
help of the SU(3) relation (\ref{GP}). The uncertainty in the position of the point $Y$
due to SU(3) breaking is naturally small because the sides $OC$ and $OD$ themselves are
small, relative to the long side of the triangle (\ref{GP}). Naive dimensional estimates
\cite{GP} suggest that the SU(3) breaking-induced error on $\gamma$ is of
the order of a few degrees, which is confirmed by a detailed numerical study \cite{GP2}.
One additional problem with this method is introduced by the $\eta-\eta'$ mixing, whose
treatment will add some model dependence. This can be avoided by using instead an alternative
approach using $B^0\to K\pi$ and $B_s$ decays \cite{GP}. However, it remains to be seen if
the statistical
errors due to the necessity of combining nine different decay rates will not outweigh the
theoretical advantages of this method.
\section{Conclusions}
Nonleptonic weak decays of the $B$ mesons are a valuable source of information about the
elements of the CKM matrix. In particular, the penguin-dominated decays $B\to K\pi$ can provide
useful constraints \cite{FM,GRbound,NR1} and determinations \cite{GRL,NR2,Flr,BuFl,GPY}
of the weak phase $\gamma$, which complement those from global fits of the unitarity triangle.
Although the focus of this talk has been on charged $B$
decays, useful information can be obtained also by combining $B^0$ with $B^+$ decay
data \cite{Fl,FM,GRbound,Flr,BuFl}.
While the electroweak penguin contributions to the
determination of $\gamma$ from $B^+\to K\pi, \pi^+\pi^0$ decays
can be controlled using SU(3) symmetry, the rescattering effects are potentially
significant. Depending on the precise value of a strong phase $\phi_P$ (which can be
determined fairly precisely), the corresponding uncertainty on $\gamma$ can be as
large as $\pm 15^\circ$. Several methods exist which make it possible to completely
eliminate these effects with the help of additional decays.
\acknowledgements
It is a pleasure to thank Michael Gronau and Tung-Mow Yan for discussions and collaboration
on the subjects discussed in this talk. This work has been supported by the National
Science Foundation.
\section{Introduction}
The ability to emulate spin-orbit coupling (SOC) and the Zeeman interaction
in Bose-Einstein condensates (BECs) \cite{Spielman2009,Splielman2011,Zhai2012,Spielman2013,Zhai2015}
has raised a great interest in
the interplay of nonlinear phenomena and spin dynamics of these systems.
These effects include the
creation of solitons {\cite{Kevrekidis,KaKoAb13,KaKoZe14,Wen2016,Chiquillo2018}},
vortices \cite{vortices0,vortices1,vortices2,LoKaKo14,vortices3,Busch},
localized spinful structures \cite{BenLi,HPu,Sakaguchi2016,Vasic2016,Romania},
enhanced localization \cite{Qu2017}, {bound states in continuum~\cite{BIC}},
and collapsing solutions \cite{collapse,Yu2017a,Yu2017b}. This research
greatly extends the understanding of solitons in other systems, such as
nonlinear photonic lattices \cite{pertsch2004,kartashov2005,kartashov2008,kartashov2009,kartashov2011,naether2013}.
In addition, the studies of disorder potentials which can be produced experimentally,
have demonstrated a strong qualitative interplay between nonlinearity and quantum
localization \cite{larcher2009,Flach2,Aleiner2010}.
In this paper we address the motion of
a bright soliton in a BEC with attractive interaction, the dynamics of which
can be strongly affected by the disorder and the
SOC, even in the semiclassical regime (as considered in the present paper), where quantum
effects are not sufficiently strong to induce Anderson localization \cite{fort2005,modugno2006,Mardonov2015}.
These effects should be experimentally observable, showing
how the soliton propagation in a random potential is affected by the
SOC and the condensate self-interaction.
\section{BEC-soliton in a random potential: model and main parameters.}
We consider a quasi one-dimensional BEC \cite{Modugno2018} with SOC forming a soliton
due to the internal self-attraction and affected by a
synthetic Zeeman field and {by a spin-diagonal disordered potential} \cite{Larcher2012}. The
two-component spinor wave function ${\bm\psi} (\mathbf{x})\equiv \left[ \psi _{1
}(\mathbf{x}),\psi _{2 }(\mathbf{x})\right]^{\rm T}$, where $\mathbf{x}\equiv(x,t),$ characterizing a pseudo-spin
$1/2$, is normalized to {unity}, and obtained as a solution of the time-dependent Gross-Pitaevskii equation
\begin{equation}
i\hbar \partial _{t}{\bm\psi} =\left[\frac{\hbar^{2}\hat{k}^{2}}{2M}+\alpha {\sigma }_{z}\hat{k}+\frac{\Delta }{2}{\sigma }_{x}+U (x)+H^{\rm int}\right]{\bm\psi},
\label{Hamilton}
\end{equation}
with the self-interaction term:
$H^{\rm int}_{\lambda\lambda}=g\left|\psi _{\lambda}\right|^{2}+
\widetilde{g}\left|\psi_{\lambda^{\prime}}\right|^{2}$, and $H^{\rm int}_{\lambda\lambda^{\prime}}=0,$
where $\lambda,\lambda^{\prime}=1,2$, and $\lambda\ne\lambda^{\prime}.$
Here $M$ is the particle mass, $\hat{k}=-i\partial/\partial x$,
$\alpha $ is the SOC constant, ${\sigma}_{z}$ and ${\sigma }_{x}$
are Pauli matrices, $\Delta $ is the Zeeman splitting, $U(x)$ is the random potential,
and $g$ and $\widetilde{g}$ are the interaction constants including the total number of atoms
in the condensate. Hereafter we use the units with $M=\hbar\equiv\,1.$
The intra-component coupling $g$ is assumed to be negative, $g<0$, and equal for the
two components.
The inter-component coupling $\widetilde{g}$, will be considered for two limiting cases.
First, for $g=\widetilde{g}$
the system self-interaction energy is invariant with respect to the global spin rotations \cite{Manakov,Tokatly}.
In this case and in the absence of $U(x)$ and spin-related interactions with $\Delta=\alpha=0,$
the ground state is given by:
\begin{eqnarray}
\label{eq:soliton}
{\bm \psi}_{\rm gr}=\frac{\sqrt{-g}}{2\cosh\left[ g(x-x_{0})/2\right] }\left[\begin{array}{c}
\cos(\theta/2)e^{i\phi}
\\
\sin(\theta/2)
\end{array}
\right],
\end{eqnarray}
where $x_0$ is the position of the soliton center and the angles $\theta$ and $\phi$ characterize the pseudospin direction.
Second, we consider the case of a vanishing cross-spin coupling with $\widetilde{g}=0,$ where this invariance is lifted.
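As a cross-check of the stated unit normalization, the soliton density can be integrated numerically. The script assumes the standard normalized bright-soliton density of the attractive 1D Gross-Pitaevskii equation, $|{\bm\psi}_{\rm gr}|^2 = (-g/4)\,{\rm sech}^2[g(x-x_0)/2]$ (the spin factors $\cos^2+\sin^2$ drop out):

```python
from math import cosh

g, x0 = -2.7, 0.0          # attractive coupling and soliton center (hbar = M = 1)
dx = 0.001
xs = [i * dx for i in range(-30000, 30001)]

# total density of the spinor soliton; the cos/sin spin factors square to 1
density = [(-g) / (2 * cosh(g * (x - x0) / 2))**2 for x in xs]
norm = sum(density) * dx
print(f"norm = {norm:.6f}")   # unit norm, independent of the value of g < 0
```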
The potential $U(x)$ is produced by
$N\gg 1$ ``impurities'' of the amplitude $U_{0}$ with uncorrelated random positions $x_{j}$
as:
\begin{equation}
U (x)=U_{0}\sum_{j=1}^{N}s_{j}f\left(x-x_{j}\right).
\label{randomrealization}
\end{equation}
Here $s_{j}=\pm 1$ is a random function of $j$ with $\sum_{j=1}^{N}s_{j}=0$, resulting in the spatially averaged
$\langle U(x)\rangle =0$. The mean
linear density of impurities is given by $\bar{n}=N/L$, where $L$ is the sample length.
The shape of a single impurity is given by $f\left(z\right) =\exp \left( -z^{2}/\xi ^{2}\right)$
with constant $\xi\ll L.$
In order to describe the dynamics of a soliton (or, more generally, a localized wavepacket) in the
random potential we explore the integral quantities $\mathcal{O}(t)$ associated with each observable $\hat{\mathcal{O}}$ and defined by
\begin{equation}
\mathcal{O}(t)=\int_{-\infty}^{\infty}{\bm\psi}^{\dagger}(\mathbf{x})
\hat{\mathcal{O}}
{\bm\psi}(\mathbf{x})dx.
\label{expectation}
\end{equation}
In particular, defining the total soliton momentum $k(t)$ and the force $F(t)$, for which
$\hat{\mathcal{O}}$ in (\ref{expectation}) is substituted by $\hat{k}$ and by $\hat{F}\equiv-dU(x)/dx$, respectively,
and using Eq. (\ref{Hamilton}), it is straightforward to verify the Ehrenfest-like relation
\begin{equation}
\frac{d k(t)}{dt}=F(t).
\label{force}
\end{equation}
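For concreteness, the integral observables of Eq. (\ref{expectation}) can be evaluated on a spatial grid. The sketch below is not part of the simulations of this paper (the profile, grid, and boost $k_{0}$ are illustrative choices); it checks that $\hat{\mathcal{O}}=\hat{k}$ recovers the imprinted momentum of a boosted wavepacket:

```python
import numpy as np

# Evaluate the integral observables of Eq. (expectation) on a grid.
# Profile, grid and k0 are illustrative, not taken from the paper's runs.
g = -5.0
k0 = 0.7
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

phi = np.sqrt(-g) / (2.0 * np.cosh(-g * x / 2.0))  # bright-soliton profile
phi /= np.sqrt(np.trapz(phi ** 2, x))              # normalize to unit norm
psi = phi * np.exp(1j * k0 * x)                    # boosted wavepacket

# momentum expectation <k> = int psi^* (-i d/dx) psi dx
k_mean = np.real(np.trapz(np.conj(psi) * (-1j) * np.gradient(psi, dx), x))
print(k_mean)  # close to the imprinted k0
```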
As a reference point for the following discussion
we consider as the initial state an eigenstate of the
Hamiltonian (\ref{Hamilton}) with $\alpha =\Delta =0$, of the form ${\bm\psi}_{\rm in}(x)=\psi
_{0}(x)\left[ 1,0\right]^{\rm T}$, i.e. $\psi _{0}(x)$ is a stationary solitonic solution in the
$U(x)-$potential. To produce this solution, we start with the state (\ref{eq:soliton}) with $\theta=\phi=0$
and adiabatically switch on $U(x)$, in order to project the initial soliton into a
stationary state at equilibrium with the random potential.
Figure \ref{Fig:disorder10} shows a realization of $U(x)$
and the density of the soliton prepared with this protocol.
This soliton is localized near a potential minimum, and the subsequent
dynamics is induced by switching on the SOC and the Zeeman field.
We explore the spin state with the density matrix ${\bm \rho}(t):$
\begin{equation}
{\bm\rho}(t)=\int {\bm\psi}(\mathbf{x}){\bm\psi}^{\dagger}(\mathbf{x})dx.
\label{denmat}
\end{equation}
The rescaled purity $P(t)=2{\rm tr}{\bm \rho}^{2}(t)-1$ is the square of the spin length,
$P(t)=\sum_{i}(\sigma_{i}(t))^{2},$ with the spin components
${\sigma}_{i}(t)={\mathrm{tr}\left({\sigma}_{i}{\bm \rho(t)}\right)},$ which can also be obtained with
Eq. \eqref{expectation}.
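The density matrix (\ref{denmat}) and the identity $P(t)=\sum_{i}(\sigma_{i}(t))^{2}$ can be checked directly on a grid; a minimal sketch with an illustrative pure product state (the profile and spin angles are arbitrary choices):

```python
import numpy as np

# Spin density matrix of Eq. (denmat), rescaled purity and spin length
# for an illustrative pure product state chi (x) phi(x).
x = np.linspace(-20.0, 20.0, 4001)
phi = 1.0 / np.cosh(x)
phi /= np.sqrt(np.trapz(phi ** 2, x))
theta, varphi = 0.6, 1.1   # arbitrary pseudospin angles
chi = np.array([np.cos(theta / 2) * np.exp(1j * varphi), np.sin(theta / 2)])
psi = np.outer(chi, phi)   # psi[a, :] is the a-th spinor component

rho = np.trapz(psi[:, None, :] * np.conj(psi[None, :, :]), x, axis=2)
P = 2.0 * np.real(np.trace(rho @ rho)) - 1.0

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
spin = [np.real(np.trace(s @ rho)) for s in (sx, sy, sz)]
print(P, sum(si ** 2 for si in spin))  # both equal 1 for a pure state
```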
In order to characterize the evolution of the system we {consider} the
center of mass position $X(t)$ defined with $\hat{\mathcal{O}}=x$
in Eq. (\ref{expectation}). Then, the velocity of the wavepacket, described by Eq. (\ref{Hamilton}),
defined as $v(t)=dX(t)/dt$, is given by the relation
\begin{equation}
{v}(t) = {k}(t) +\alpha \sigma _{z}(t).
\label{spinvelocity}
\end{equation}
Notice that this formula, which includes the anomalous term $\alpha\sigma_{z}(t)$ well known in the linear theory
\cite{Adams,Stepanov,Armaitis2017}, remains valid for the nonlinear model (\ref{Hamilton}).
According to Eq. (\ref{force}),
the soliton momentum evolves due to the random potential.
Correspondingly, this contributes to the spin precession due to the SOC.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.40\textwidth]{figure1.eps}
\end{center}
\caption{Density of the initial state (in arbitrary units) for $g=\widetilde{g}=-5$, for
a realization of disorder with $U_{0}=0.01$, $\bar{n}=10,$ and $\xi =1$, shown in the inset.}
\label{Fig:disorder10}
\end{figure}
Since the Hamiltonian (\ref{Hamilton}) depends on many parameters, we consider only
the case of a ``narrow'' soliton, whose width is much smaller than $\xi$,
assuming that it is stable against
collapse due to the presence of the transverse degrees of freedom \cite{Perez1997}.
For a smooth random potential, the radiation from such a soliton is negligible, its dynamics is
close to adiabatic, and the derivative $dX(t)/dt$ truly characterizes the soliton velocity.
Remarkably, for the chosen parameters, the disorder potential has almost no effect on the shape of the
soliton. Here, the effective potential $V(t)$, computed with Eq. (\ref{expectation})
for $\hat{\mathcal{O}}=U(x)$ is very close to $U(X(t)),$
and for a weak random potential, where $\alpha\gg\,k(t),$ one estimates
$k(t)\approx\alpha\left[U(X(0))-U(X(t))\right],$ as follows from Eqs. (\ref{force}) and (\ref{spinvelocity}).
For the numerical analysis we consider
a single typical realization of disorder in Fig. \ref{Fig:disorder10}, and
use $\xi$ as the unit of length. With the accepted units $M\equiv\hbar\equiv 1,$
the units of energy and time become $1/\xi^{2}$ and $\xi^{2}$, respectively.
\section{Motion of soliton}
\subsection{Localization by Zeeman field.}
We begin with the symmetric $g=\widetilde{g}$ case.
For $\Delta=0$ and $\alpha >0$, the spin component
$\sigma_{z}(t)$ is conserved, i.e. $\sigma_{z}(t)\equiv 1$ for the initial condition ${\bm\psi}_{\rm in}(x)$, and the soliton
starts to move to the right, owing to the spin-dependent
velocity (\ref{spinvelocity}). For a small $\alpha,$ the
soliton undergoes harmonic oscillations in the vicinity of its initial position
because ${\bm\psi}_{\rm in} (x)$ is centered at a local minimum of the potential.
However, for $\alpha$ larger than a critical value, discussed below,
the soliton travels over long distances until it encounters a sufficiently strong peak
of the potential that can stop it and reverse its motion, resulting in essentially nonlinear oscillations.
\begin{figure}[tb]
\begin{center}
\hspace{-0.5cm}\includegraphics*[width=0.36\textwidth]{figure2a.eps} \\
\includegraphics*[width=0.4\textwidth]{figure2b.eps} \\
\includegraphics*[width=0.4\textwidth]{figure2c.eps}
\end{center}
\caption{(a) Position of the soliton center of mass as a function of time for different
$\Delta$ (marked in the plot), $g=\widetilde{g}=-5,$ and $\alpha =0.4$. Panel (a) shows that
for $\Delta =0$ the soliton travels a long distance, whereas switching on the
Zeeman field eventually traps it. (b, c) Density plots of the two spinor
components in the $(t,x)-$plane for $\Delta=0.1$.
}
\label{Fig:x_local}
\end{figure}
Figure \ref{Fig:x_local} describes the dynamics of the soliton for $\alpha =0.4$
at different values of $\Delta$.
The plot shows that, whereas for $\Delta=0$ the soliton moves a long distance
through the disordered potential until it is reflected by a large fluctuation
of $U(x)$, the presence of a Zeeman field inhibits the propagation,
and, for sufficiently large values of $\Delta$ ($=0.8$ in this plot), eventually traps it.
In order to better understand this
effect of the Zeeman field, we consider Eq. (\ref{spinvelocity}) for the velocity, along with
the evolution of the spin components. To describe the spin evolution we assume the adiabatic
approximation for the soliton evolution (conserving the shape of equal densities of both spin states):
${\bm\psi}_{\rm ad}({\mathbf x})=\psi_{0}(x-X(t))\exp(ik(t)x)\chi(t),$
where $\chi(t)\equiv\left[\cos(\theta(t)/2)e^{i\phi(t)},\sin(\theta(t)/2)\right]^{\rm T}$
describes the corresponding evolution of the spin state. Here the ``fast'' degree of freedom
corresponds to the shape of $\psi_{0}(x-X(t))$,
while the ``slow'', lower-energy degrees of freedom are described by $k(t)$ and $\chi(t)$.
In order to obtain the adiabatic evolution of the spinor $\chi(t)$ we perform
spatial ``averaging'' by multiplying (\ref{Hamilton}) by $\psi_{0}(x-X(t))$ and integrating over $x.$ This gives an
effective Hamiltonian $H_{s}(t)=(\bm{\Omega}(t){\bm\sigma})/2$ for the spin motion in a synthetic
random Zeeman field ${\bm\Omega}=\left(\Delta ,0,2\alpha{k(t)}\right)$,
corresponding to rotation at the rate $\Omega=(4\alpha^{2}k^{2}(t)+\Delta^{2})^{1/2}$
around the randomly time-dependent axis ${\mathbf n}={\bm\Omega}/\Omega$.
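For a momentarily frozen field ${\bm\Omega}$, the spin simply precesses about ${\mathbf n}$ at the rate $\Omega$; a short sketch (parameter values are illustrative) using the exact Rodrigues rotation:

```python
import numpy as np

# Spin precession about n = Omega/|Omega| at the rate |Omega| for a
# momentarily constant field; alpha, Delta, k are illustrative numbers.
alpha, Delta, k = 0.4, 0.3, 0.5
Omega = np.array([Delta, 0.0, 2.0 * alpha * k])
rate = np.linalg.norm(Omega)          # (4 alpha^2 k^2 + Delta^2)^(1/2)
n = Omega / rate

def precess(s0, t):
    """Rodrigues rotation of the spin vector s0 about n by the angle rate*t."""
    a = rate * t
    return (s0 * np.cos(a) + np.cross(n, s0) * np.sin(a)
            + n * np.dot(n, s0) * (1.0 - np.cos(a)))

s0 = np.array([0.0, 0.0, 1.0])        # initial spin up, as for psi_in
T = 2.0 * np.pi / rate
print(precess(s0, T))                 # the spin returns to s0 after one period
```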
The validity of the adiabatic approximation is corroborated
by the fact that the spin state is always close to a
pure one (Fig. \ref{Fig:1}(a)), and the spin is close to the Bloch sphere.
In addition, our numerical results show that the shape of the soliton (not shown in the Figures)
remains practically unchanged up to $t=300.$
As a result, the velocity (\ref{spinvelocity}) self-consistently depends on the evolution of the random
effective ``magnetic'' field. For a large $\Delta \gg\alpha |k(t)|$, the spin is
controlled by the Zeeman
coupling with $\sigma_{z}(t)\approx\cos{(\Delta t)}$ and, according to Eq. (\ref{spinvelocity}), the velocity behaves
as $v(t)\approx\alpha \cos (\Delta t).$ For $\Delta \sim {\alpha |k(t)|} $, the
behavior of observables becomes much more complicated.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.40\textwidth]{figure3a.eps}
\includegraphics[width=0.40\textwidth]{figure3b.eps}
\end{center}
\caption{
(a) Rescaled purity of the spin state for different $\Delta$ (marked in the plot). The spin is always close to
the Bloch sphere, $P^{1/2}(t)\approx1$.
(b) Evolution of the $z$-component of the spin, which enters the soliton velocity, see Eq. (\ref{spinvelocity}).
In both plots $\alpha=0.4$ and $g=\widetilde{g}=-5$.
}
\label{Fig:1}
\end{figure}
Since we consider a narrow soliton with the self-interaction energy conserved under total spin
rotations, we can introduce a conserved ``low-energy'' quantity $\epsilon_{0}$, obtained as the average
of the linear part of the Hamiltonian (\ref{Hamilton}) over the adiabatic soliton ${\bm \psi}_{\rm ad}({\bf x})$:
$\epsilon_0=k^{2}(t)/2+
\alpha{\sigma}_{z}(t)k(t)+\Delta\sigma_{x}(t)/2+U(X(t)).$
Taking into account that $\sigma_{z}(t)=\cos\theta(t)$ and $\sigma_{x}(t)=\sin\theta(t)\cos\phi(t),$
the conservation of $\epsilon_{0}$ (verified in the adiabatic approximation) can be presented as
\begin{equation}
v^{2}(t)-{\alpha^{2}}\sigma_{z}^{2}(t)+\Delta\sigma_{x}(t)=2\left[U({X(0)})-U(X(t))\right],
\label{energy:conserved}
\end{equation}
with the velocity $v(t)$ given by Eq. (\ref{spinvelocity}).
The value $U_{\rm inv}$ of the potential that is \textit{sufficient} to invert the soliton dynamics, that
is $v(t)=0$ at the point where $U(X(t))=U_{\rm inv}$, is
obtained by minimization of the sum of the spin-related terms in Eq. (\ref{energy:conserved}).
Since this minimum is achieved at $\cos\phi=-1$ and $\sin\theta=\min(\Delta/2\alpha^{2},1)$
we obtain $U_{\rm inv}=U(X(0))+\left(\alpha^{2}+\Delta^{2}/4\alpha^{2}\right)/2$ for $\Delta<2\alpha^{2}$, and
$U_{\rm inv}=U(X(0))+\Delta/2$ for $\Delta\ge\,2\alpha^{2}.$
Nevertheless, these conditions are not \textit{necessary}, and the soliton can be stopped already
at $U(X(t))<U_{\rm inv}$. At $\Delta=0$ we obtain $U_{\rm inv}=\alpha^{2}/2+U(X(0)),$
corresponding to Fig. \ref{Fig:x_local}(a), where $U_{\rm inv}-U(X(0))=0.083$ for $\alpha^{2}/2=0.08.$
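The two branches of $U_{\rm inv}$ follow from minimizing the spin-dependent terms of Eq. (\ref{energy:conserved}) over the spin directions; a quick numerical cross-check of this minimization (the parameter values are illustrative):

```python
import numpy as np

# Check the inversion threshold: minimize the spin terms
# -alpha^2 sigma_z^2 + Delta sigma_x of Eq. (energy:conserved) over the
# sphere, with sigma_z = cos(theta), sigma_x = sin(theta) cos(phi).
def u_inv_numeric(alpha, Delta):
    theta = np.linspace(0.0, np.pi, 20001)
    f = -alpha ** 2 * np.cos(theta) ** 2 - Delta * np.sin(theta)  # cos(phi) = -1
    return -f.min() / 2.0          # U_inv - U(X(0))

def u_inv_formula(alpha, Delta):
    if Delta < 2.0 * alpha ** 2:
        return (alpha ** 2 + Delta ** 2 / (4.0 * alpha ** 2)) / 2.0
    return Delta / 2.0

for alpha, Delta in [(0.4, 0.0), (0.4, 0.1), (0.2, 0.5)]:
    print(u_inv_numeric(alpha, Delta), u_inv_formula(alpha, Delta))  # agree
```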
\begin{figure}[tb]
\begin{center}
\includegraphics*[width=0.40\textwidth]{figure4.eps}
\end{center}
\caption{Phase trajectory for the values of SOC and
Zeeman field marked in the plot for $t<190$. The filled circle
describes the $\left(X(0),v(0)\right)$ point. The oscillation
frequency for $\alpha =0.1$ is $\omega_{\mathrm{s}}=0.094,$ in good agreement with
the $\sim{\bar{n}}^{1/4}U_{0}^{1/2}$ estimate.
All plots correspond to $g=\widetilde{g}=-5$.}
\label{Fig:delocalization}
\end{figure}
Before discussing other effects,
we consider the possibility of an experimental realization of the presented system.
We recall that the physical units of length and time here
are $\xi$ and $t_{\xi}\equiv M\xi^{2}/\hbar$, respectively. The resulting
coupling constant $g$ is approximately $2a/\xi\times (\xi/\xi_{\perp})^{2}{\cal N},$ where $\xi_{\perp}$
is the transverse confinement length, $a$ is the interatomic scattering
length (e.g., \cite{Mardonov2015a}),
and ${\cal N}$ is the total number of atoms in the condensate.
A typical value of $\xi=3$ $\mu{\rm m}$, with $M$ being the mass of $^{7}{\rm Li}$ atom \cite{Kevrekidis},
corresponds to $t_{\xi}\approx 1$ ms and the unit of
velocity $\xi/t_{\xi}\approx 0.3$ cm/s,
meaning that our results imply a relatively weak synthetic spin-orbit coupling.
The relevant time scale of the studied dynamical phenomena,
being of the order of $100$ $t_{\xi}$, is, therefore,
within the experimental lifetime of an attractive condensate (see e.g. \cite{Trenkwalder2016}).
For typical values of $a$ of the order of $-5\times 10^{-8}$ cm
\cite{Moerdijk1994,Pollack2009} and for a strong
confinement $(\xi/\xi_{\perp})^{2}\sim 10,$
we obtain that the required $g=-5$ can be achieved at ${\cal N}\sim 1.5\times 10^{3}$ particles
in the condensate. Under these conditions the mean field approach is still well applicable since ${\cal N}
\times(|a|^{3}/\xi\xi_{\perp}^{2})\sim |g|\times(a/\xi)^{2}\ll 1.$
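The parameter estimates above amount to simple arithmetic, which can be sketched as follows (all numbers as quoted in the text, CGS units):

```python
# Back-of-the-envelope check of the estimates quoted above (CGS units).
a = -5e-8        # scattering length, cm
xi = 3e-4        # unit of length, cm (3 um)
ratio2 = 10.0    # (xi/xi_perp)^2, strong transverse confinement
g_target = -5.0

# g ~ 2 a/xi * (xi/xi_perp)^2 * N  =>  N = g * xi / (2 a * ratio2)
N = g_target * xi / (2.0 * a * ratio2)
print(N)         # ~1.5e3 atoms in the condensate

# mean-field validity: N |a|^3/(xi xi_perp^2) ~ |g| (a/xi)^2 << 1
check = abs(g_target) * (a / xi) ** 2
print(check)     # deep in the mean-field regime
```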
\subsection{Delocalization induced by spin resonance}
Here we will show that in a certain regime, depending on the oscillation frequency, the Zeeman
coupling and the SOC, the soliton motion can be characterized by a resonance caused by
spin rotation in the Zeeman field.
Since this resonance occurs in a nonlinear system, it
cannot greatly increase the oscillation amplitude, but it is sufficient to
delocalize a soliton in the case of interest. Although in a disordered
potential the spin resonance is hardly exactly predictable, it is possible
to analyze its effects semiquantitatively. The oscillation
frequency is $\omega_{\rm s}=2\pi/T$, with the
period
\begin{equation}
T=2\int_{a}^{b}\frac{dx}{v(x)}, \quad v(x)\equiv\sqrt{2(U(0)-U(x))+\alpha^{2}},
\label{period}
\end{equation}
where at the turning points $v(a)=v(b)=0.$
Introducing the critical value $\alpha_{\rm c}^{2}=2[U(x_{r})-U(X(0))],$ we obtain
$\omega_{\rm s}\sim\omega_{0}/\ln\left[\alpha_{\rm c}\left(\alpha_{\rm c}-\alpha\right)^{-1}\right],$
where $\omega_{0}\sim{\bar{n}}^{1/4}U_{0}^{1/2}$ is the
oscillation frequency near the minimum. Here $x_{r}$ is the position of a strong peak preventing the escape of the
soliton (e.g., in Fig. \ref{Fig:disorder10}, $x_{r}\approx 2.7$).
The resonance between the spin and the orbital motion is expected at $\omega_{\rm s}\sim\Delta.$
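Equation (\ref{period}) can be sanity-checked against the harmonic limit: for a model well $U(x)=\omega_{0}^{2}x^{2}/2$ (an illustrative stand-in for a potential minimum, not a disorder realization), starting at $x=0$, the period reduces to the usual $2\pi/\omega_{0}$:

```python
import numpy as np

# Harmonic-well sanity check of Eq. (period): for U(x) = omega0^2 x^2 / 2,
# starting at x = 0, v(x)^2 = alpha^2 - omega0^2 x^2 and the period is
# exactly 2*pi/omega0, independently of alpha (harmonic isochronism).
omega0, alpha = 0.1, 0.05
b = alpha / omega0                        # turning points at +-b, v(+-b) = 0
x = np.linspace(-b, b, 200001)[1:-1]      # open interval: 1/v diverges at +-b
v = np.sqrt(alpha ** 2 - omega0 ** 2 * x ** 2)
T = 2.0 * np.trapz(1.0 / v, x)
print(T, 2.0 * np.pi / omega0)            # agree to about a percent
```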
\begin{figure}[tb]
\begin{center}
\hspace{-0.5cm}\includegraphics*[width=0.36\textwidth]{figure5a.eps}
\includegraphics*[width=0.40\textwidth]{figure5b.eps}
\includegraphics*[width=0.40\textwidth]{figure5c.eps}
\end{center}
\caption{(a) The center of mass position as a function of time for different $\Delta$ and $\widetilde{g}=0,$ $g=-5$.
(b), (c) Density plots of the spinor components at $\Delta=1.4$ in the $(t,x)-$ plane.
We note two major differences between this figure and Fig. \ref{Fig:x_local} arising due to
different symmetries of the self-interaction. First, here one needs an order of magnitude larger Zeeman $\Delta$
to considerably modify the $X(t)-$ dependence. Second, the density distributions of $\left|\psi_{1,2}(x,t)\right|^{2}$
are broader here with the pattern of $\left|\psi_{2}(x,t)\right|^{2}$ demonstrating a clear stripe-like structure.
}
\label{Fig:new_int1_local}
\end{figure}
Figure \ref{Fig:delocalization}, where we show the velocity as a
function of the center of mass position, demonstrates that the interplay
between the SOC and the Zeeman field can indeed delocalize the soliton.
For example, for $\alpha=0.2$, the delocalization takes place in the range $0.05<\Delta<0.28$.
Notice that the escape time, velocity, and the direction are random.
\subsection{Vanishing cross-spin coupling, $\widetilde{g}=0$}
Now we consider the major differences
brought about when only the diagonal self-interaction $g$ is present.
For a direct comparison, we begin with the same initial condition
as at $\widetilde{g}=g$, that is ${\bm\psi}_{\rm in} (x)$. For this realization of nonlinearity,
a spin rotation, making $\sigma_{z}(t)$ considerably different from 1,
requires a larger $\Delta$ to provide the energy for population transfer between the
spinor components $\psi_{1}({\mathbf x})$ and $\psi_{2}({\mathbf x})$. In order to estimate this Zeeman field and to understand
the interplay between the spin rotation and self-interaction, we assume for the moment $U_{0}=\alpha=0$, and
consider a ``rigid rotation'' of the wave
function around the $x-$axis, namely ${\bm\psi} (\mathbf{x})=\psi_{0}(x)\left[\cos(\Delta t/2),i\sin(\Delta t/2)\right]^{\rm T}$.
For this state, the self-interaction energy corresponding to the $H^{\rm int}$ term in Eq. \eqref{Hamilton} becomes
\begin{equation}
E_{\rm int}(t)-E_{\rm int}(0)=-E_{\rm int}(0)\sin^{2}\left({\Delta t}\right)/2,
\label{int1Energy}
\end{equation}
with the maximum value $\max\left(E_{\rm int}(t)-E_{\rm int}(0)\right)=|E_{\rm int}(0)|/2={g^{2}}/12$ achieved
at $t=\pi/2\Delta.$
On the other hand, the Zeeman energy is $\Delta\sigma_{x}(t)/2.$
It follows from the comparison of these energies that the spin reorientation by the Zeeman coupling
can provide sufficient energy for the increase in the self-interaction
if $\Delta\agt\Delta_{c}\sim 0.1{g}^{2}$. Consequently, $\Delta_{c}$, being of the order of one at $|g|=5$,
considerably exceeds the frequencies $2\pi/T\sim \bar{n}^{1/4}U_{0}^{1/2}$ in Eq. (\ref{period}).
As a result, the Zeeman field causes only the soliton localization, as shown in Fig. \ref{Fig:new_int1_local}(a).
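Equation (\ref{int1Energy}) and the $g^{2}/12$ maximum can be verified numerically for the normalized bright soliton (a direct check; the grid parameters are chosen for illustration):

```python
import numpy as np

# Direct check of Eq. (int1Energy): with the rigid rotation
# psi = psi_0(x) [cos(Dt/2), i sin(Dt/2)]^T and a unit-norm bright soliton,
# E_int(t) = g * int (|psi_1|^4 + |psi_2|^4) dx = E_int(0) (1 - sin^2(Dt)/2).
g = -5.0
x = np.linspace(-20.0, 20.0, 40001)
psi0 = np.sqrt(-g) / (2.0 * np.cosh(-g * x / 2.0))    # normalized soliton
E0 = g * np.trapz(psi0 ** 4, x)                       # E_int(0) = -g^2/6

Dt = np.linspace(0.0, np.pi, 101)                     # Delta * t
E = E0 * (np.cos(Dt / 2) ** 4 + np.sin(Dt / 2) ** 4)  # = E0 (1 - sin^2(Dt)/2)
print(np.max(E - E0), g ** 2 / 12.0)                  # maximum increase = g^2/12
```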
The initial rotation populates the spin down component $\psi_{2}(\mathbf{x})$, which
is initially vanishing, and therefore not self-interacting.
As a result of the small population at $t\ll\,1/\Delta$, the relatively weak self-interaction cannot prevent the spreading
of this component, and its broadening begins. At the same time, the decreasing (although never vanishing)
population of the upper component becomes
insufficient to maintain the initial width, and it broadens as well.
The oscillating broadening of $\left|\psi_{2}(\mathbf{x})\right|^{2}$ driven by the Zeeman field is
seen as the periodic stripe-like structure in Fig. \ref{Fig:new_int1_local}(c).
\section{Discussion and conclusions}
We have studied the dynamics of a self-attractive quasi-one-dimensional Bose-Einstein condensate,
forming a bright soliton, in a random
potential in the presence of the spin-orbit and Zeeman couplings.
We have found that for a given spin-orbit coupling, the soliton motion strongly depends
on the Zeeman splitting and on the self-interaction of the condensate. In particular,
the Zeeman interaction can lead to localization or delocalization of the soliton due to the
spin-dependent anomalous velocity proportional to the spin-orbit coupling. A sufficiently strong
Zeeman field can cause localization of the soliton near the random potential minima.
If the Zeeman frequency is close to the typical frequency of the soliton oscillations in
the random potential, this resonance can cause its delocalization.
In the absence of cross-spin interaction, where Manakov's symmetry is lifted,
the effect of delocalization due to the Zeeman-induced spin rotation is suppressed
since a stronger Zeeman field is required here to produce the
spin evolution sufficient to modify the center-of-mass motion.
\section{Acknowledgments}
We acknowledge support by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO)
and the European Regional Development Fund FEDER through Grant No. FIS2015-67161-P (MINECO/FEDER, UE),
and the Basque Government through Grant No. IT986-16. S. M. was partially supported by the
Swiss National Foundation SCOPES project IZ74Z0\_160527.
We are grateful to Y.V. Kartashov for valuable comments.
\section{Conclusions}
We propose to use proximal gradient descent (PGD) to obtain a sub-optimal solution to the $\ell^{0}$ sparse approximation problem. Our theoretical analysis renders a bound on the $\ell^{2}$-distance between the sub-optimal solution and the globally optimal solution, under conditions weaker than the Restricted Isometry Property (RIP). To the best of our knowledge, this is the first time that such a gap between the sub-optimal solution and the globally optimal solution is obtained under such mild conditions. Moreover, we propose provable randomized algorithms, namely proximal gradient descent via Randomized Matrix Approximation (PGD-RMA) and proximal gradient descent via Random Dimension Reduction (PGD-RDR), to accelerate the ordinary optimization by PGD.
\section{Introduction}
In this paper, we consider the $\ell^{0}$ sparse approximation problem, also named the $\ell^{0}$ penalized Least Square Estimation (LSE) problem
\begin{small}\begin{align}\label{eq:l0-sparse-appro}
&\mathop {\min }\limits_{{\mathbf{z} \in {\rm I}\kern-0.18em{\rm R}^n} } L(\mathbf{z}) = {\|\mathbf{x} - \mathbf{D} \mathbf{z}\|_2^2 + {\lambda}\|{\mathbf{z}}\|_0}
\end{align}\end{small}%
where $\mathbf{x} \in {\rm I}\kern-0.18em{\rm R}^d$ is a signal in $d$-dimensional Euclidean space, $\mathbf{D}$ is the design matrix of dimension $d \times n$, which is also called a dictionary with $n$ atoms in the sparse coding literature. The goal of problem (\ref{eq:l0-sparse-appro}) is to approximately represent signal $\mathbf{x}$ by the atoms of the dictionary $\mathbf{D}$ while requiring the representation $\mathbf{z}$ to be sparse. Due to the nonconvexity imposed by the $\ell^{0}$-norm, extensive existing works resort to solving its $\ell^{1}$ relaxation
\begin{small}\begin{align}\label{eq:l1-sparse-appro}
&\mathop {\min }\limits_{{\mathbf{z} \in {\rm I}\kern-0.18em{\rm R}^n} } {\|\mathbf{x} - \mathbf{D} \mathbf{z}\|_2^2 + {\lambda}\|{\mathbf{z}}\|_1}
\end{align}\end{small}%
Problem (\ref{eq:l1-sparse-appro}) is convex and also known as Basis Pursuit Denoising, which can be solved efficiently by linear programming or iterative shrinkage algorithms \citep{Daubechies-iter-shrink-sparse04,Elad-shrinkage06,Bredies-hard-iter-shrink08}. Despite the nonconvexity of (\ref{eq:l0-sparse-appro}), sparse coding methods such as \citep{Mancera2006,BaoJQS14} that directly optimize virtually the same objective as (\ref{eq:l0-sparse-appro}) demonstrate compelling performance compared to their $\ell^{1}$-norm counterparts in various application domains such as data mining, applied machine learning and computer vision. Cardinality constraints in terms of the $\ell^{0}$-norm are also studied for M-estimation problems \citep{Jain14-IHT-Mestimation}.
We use the proximal gradient descent (PGD) method to obtain a sub-optimal solution to (\ref{eq:l0-sparse-appro}) in an iterative shrinkage manner with a theoretical guarantee. Although the Iterative Hard-Thresholding (IHT) algorithm proposed by \citep{Blumensath-hard-iter-shrink2008} also features iterative shrinkage, we prove a bound on the gap between the sub-optimal solution and the globally optimal solution to (\ref{eq:l0-sparse-appro}). Our result on the bounded gap only requires nonsingularity of the submatrix of $\mathbf{D}$ with the columns in the support of the sub-optimal and globally optimal solutions (see the subsection ``Assumptions for Our Analysis'' for details). To the best of our knowledge, this is the first result analyzing the gap between a sub-optimal solution and the globally optimal solution for the important $\ell^{0}$ sparse approximation problem under assumptions weaker than the Restricted Isometry Property (RIP) \citep{Candes2008}. The most closely related research is presented in \citep{zhang2012}, where the distance between two local solutions of concave regularized problems is studied under much more restrictive assumptions, including sparse eigenvalues of the dictionary. Moreover, our results suggest the merit of sparse initialization.
Furthermore, we propose to accelerate PGD for the $\ell^{0}$ sparse approximation problem by two randomized algorithms. We propose proximal gradient descent via Randomized Matrix Approximation (PGD-RMA) which employs rank-$k$ approximation of the dictionary via random projection. PGD-RMA reduces the cost of computing the gradient during the gradient descent step of PGD from $\mathcal{O}(dn)$ to $\mathcal{O}(dk + nk)$ by solving the reduced problem instead of the original problem with $k \ll \min\{d,n\}$, for a dictionary of size $d \times n$. The second randomized algorithm is proximal gradient descent via Randomized Dimension Reduction (PGD-RDR) which employs random projection to generate dimensionality-reduced signal and dictionary. PGD-RDR reduces the computational cost of the gradient descent step of PGD from $\mathcal{O}(dn)$ to $\mathcal{O}(mn)$ with $m < d$. While previous research focuses on the theoretical guarantee for convex problems via such randomized optimization \citep{Drineas2011-fast-least-square-approximation,ZhangWZ2016-sparse-linear-random-projection}, we present the gap between the sub-optimal solution to the reduced problem and the globally optimal solution to the original problem. Our result establishes provable and efficient optimization by randomized low rank matrix decomposition and randomized dimension reduction for the nonconvex and nonsmooth $\ell^{0}$ sparse approximation problem, while very few results are available in the literature in this direction.
\subsection*{Notations}
Throughout this paper, we use bold letters for matrices and vectors, and regular lowercase letters for scalars. A bold letter with a subscript indicates the corresponding element of a matrix or vector, and $\|\cdot\|_p$ denotes the $\ell^{p}$-norm of a vector, or the $p$-norm of a matrix. We let $\bm{\beta}_{{\mathbf{I}}}$ denote the vector formed by the elements of $\bm{\beta}$ with indices in ${\mathbf{I}}$ when $\bm{\beta}$ is a vector, or the matrix formed by the columns of $\bm{\beta}$ with indices being the nonzero elements of ${\mathbf{I}}$ when $\bm{\beta}$ is a matrix. ${\rm supp}(\cdot)$ indicates the support of a vector, i.e. the set of indices of the nonzero elements of this vector. $\sigma_{\min}(\cdot)$ and $\sigma_{\max}(\cdot)$ indicate the smallest and largest nonzero singular values of a matrix.
\section{Accelerated Proximal Gradient Descent by Randomized Algorithms}
In this section, we propose and analyze two randomized algorithms that accelerate PGD for the $\ell^{0}$ sparse approximation problem by random projection. Our first algorithm employs randomized low rank matrix approximation for the dictionary $\mathbf{D}$, and the second algorithm uses random projection to generate dimensionality-reduced signal and dictionary and then perform PGD on the low-dimensional signal and dictionary. Our theoretical analysis establishes the suboptimality of the solutions obtained by the proposed randomized algorithms.
\subsection{Algorithm}
While Lemma~\ref{lemma::shrinkage-sufficient-decrease} shows that the $\ell^{0}$ sparse approximation (\ref{eq:l0-sparse-appro}) is equivalent to its dictionary-reduced version (\ref{eq:l0-sparse-appro-reduced}), which leads to improved efficiency, we are still facing the computational challenge incurred by a dictionary with large dimension $d$ and size $n$ (or large $|\mathbf{S}|$). The literature has extensively employed randomized algorithms for accelerating the numerical computation of different kinds of matrix optimization problems including low rank approximation and matrix decomposition \citep{Frieze2004-fast-monto-carlo-lowrank,Drineas2004-large-graph-svd,Sarlos2006-large-matrix-random-projection,
Drineas2006-fast-monto-carlo-lowrank,Drineas2008-matrix-decomposition,Mahoney2009-matrix-decomposition,
Drineas2011-fast-least-square-approximation,Lu2013-fast-ridge-regression-random-subsample}. In order to accelerate the numerical computation involved in PGD, we propose two randomized algorithms. The first algorithm adopts the randomized low rank approximation by random projection \citep{Halko2011-random-matrix-decomposition} to obtain a low rank approximation of the dictionary so as to accelerate the computation of gradient for PGD. The second algorithm generates dimensionality-reduced signal and dictionary by random projection and then apply PGD upon the low-dimensional signal and dictionary for improved efficiency. The optimization algorithm for the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}) by PGD with low rank approximation of $\mathbf{D}$ via Randomized Matrix Approximation, termed PGD-RMA in this paper, is described in Section~\ref{sec::PGD-RMA}. Bearing the idea of using randomized algorithm for dimension reduction, the algorithm that accelerates PGD via randomized dimension reduction, termed PGD-RDR, is introduced in Section~\ref{sec::PGD-RDR}.
\subsubsection{Accelerated Proximal Gradient Descent via Randomized Matrix Approximation: PGD-RMA}
\label{sec::PGD-RMA}
The procedure of PGD-RMA is described as follows. A random matrix $\Omega \in {\rm I}\kern-0.18em{\rm R}^{n \times k}$ is computed such that each element $\Omega_{ij}$ is sampled independently from the Gaussian distribution $\mathcal{N}(0,1)$. We then compute the QR decomposition of $\mathbf{D} \Omega$, i.e. $\mathbf{D} \Omega = {\mathbf{Q}} \mathbf{R}$, where ${\mathbf{Q}} \in {\rm I}\kern-0.18em{\rm R}^{d \times k}$ is an orthogonal matrix of rank $k$ and $\mathbf{R} \in {\rm I}\kern-0.18em{\rm R}^{k \times k}$ is an upper triangular matrix. The columns of ${\mathbf{Q}}$ form an orthogonal basis for the range of $\mathbf{D} \Omega$. Then $\mathbf{D}$ is approximated by projecting $\mathbf{D}$ onto the range of $\mathbf{D} \Omega$: ${\mathbf{Q}}\bQ^{\top}\mathbf{D} = {\mathbf{Q}}{\mathbf{W}} = \tilde \mathbf{D}$ where ${\mathbf{W}} = {\mathbf{Q}}^{\top}\mathbf{D} \in {\rm I}\kern-0.18em{\rm R}^{k \times n}$. Replacing $\mathbf{D}$ with its low rank approximation $\tilde \mathbf{D}$, we resort to solving the following reduced $\ell^0$ sparse approximation problem (\ref{eq:l0-sparse-appro-rma})
\begin{small}\begin{align}\label{eq:l0-sparse-appro-rma}
&\mathop {\min }\limits_{{\mathbf{z} \in {\rm I}\kern-0.18em{\rm R}^n} } \tilde L(\mathbf{z}) = {\|\mathbf{x} - \tilde \mathbf{D} \mathbf{z}\|_2^2 + {\lambda}\|{\mathbf{z}}\|_0}
\end{align}\end{small}%
The first step of PGD (\ref{eq:l0-sa-proximal-step1}) for the original $\ell^0$ sparse approximation problem then reduces to
\begin{small}\begin{align}\label{eq:l0-sa-proximal-step1-reduced}
&\tilde {\mathbf{z}}^{(t)} = {\mathbf{z}}^{(t-1)} - \frac{2}{{\tau}s} ({{\tilde \mathbf{D}}^\top}{\tilde \mathbf{D}}{{\mathbf{z}}^{(t-1)}}-{{\tilde \mathbf{D}}^\top}{\mathbf{x}}) \\ \nonumber
& ={\mathbf{z}}^{(t-1)} - \frac{2}{{\tau}s} ({{\mathbf{W}}^{\top}} {{\mathbf{Q}}^{\top}} {\mathbf{Q}} {\mathbf{W}} {{\mathbf{z}}^{(t-1)}}-{{\mathbf{W}}^{\top}}{{\mathbf{Q}}^{\top}}{\mathbf{x}})
\end{align}\end{small}%
The complexity of this step is reduced from $\mathcal{O}(dn)$ to $\mathcal{O}(dk + nk)$, where $k \ll \min\{d,n\}$, so a significant efficiency improvement is achieved. Note that the computational cost of the QR decomposition of $\mathbf{D} \Omega$ is less than $2dk^2$, which is acceptable for a small $k$.
The randomized algorithm PGD-RMA is described in Algorithm~\ref{alg:pgd-rma}. The time complexity of PGD-RMA is $\mathcal{O}(M(dk + nk))$ where $M$ is the number of iterations (or maximum number of iterations), compared to the complexity $\mathcal{O}(Mdn)$ for the original PGD.
\begin{algorithm}[!ht]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand\algorithmicensure {\textbf{Output:} }
\caption{Proximal Gradient Descent via Randomized Matrix Approximation (PGD-RMA) for the $\ell^{0}$ Sparse Approximation (\ref{eq:l0-sparse-appro}) }
\label{alg:pgd-rma}
\begin{algorithmic}[1]
\REQUIRE ~~\\
The given signal $\mathbf{x} \in R^d$, the dictionary $\mathbf{D}$, the parameter $\lambda$ for the weight of the $\ell^{0}$-norm, maximum iteration number $M$, stopping threshold $\varepsilon$, the initialization ${\mathbf{z}}^{(0)} \in {\rm I}\kern-0.18em{\rm R}^n$.
\STATE Sample a random matrix $\Omega \in {\rm I}\kern-0.18em{\rm R}^{n \times k}$ by $\Omega_{ij} \sim \mathcal{N}(0,1)$.
\STATE Compute the QR decomposition of $\mathbf{D} \Omega$: $\mathbf{D} \Omega = {\mathbf{Q}} \mathbf{R}$
\STATE Approximate $\mathbf{D}$ by $\tilde \mathbf{D} = {\mathbf{Q}}{\mathbf{W}}$ where ${\mathbf{W}} = {\mathbf{Q}}^{\top}\mathbf{D}$
\STATE{Perform PGD with (\ref{eq:l0-sa-proximal-step1-reduced}) and (\ref{eq:l0-sa-proximal-step2})} starting from $t = 1$. The iteration terminates when either $\{{\mathbf{z}}^{(t)}\}_t$ or $\{L({\mathbf{z}}^{(t)})\}_t$ converges under a certain threshold, or when the maximum iteration number is reached.
\ENSURE Obtain the sparse code $\tilde \mathbf{z}$ upon the termination of the iterations.
\end{algorithmic}
\end{algorithm}
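As a sanity check of the factored update, the following NumPy sketch (with illustrative sizes; none of the constants come from the paper) compares the $\mathcal{O}(dk+nk)$ update built from $\mathbf{Q}$ and $\mathbf{W}$ against the plain $\mathcal{O}(dn)$ update. Since the synthetic $\mathbf{D}$ has rank at most $k$, we have $\mathbf{Q}\mathbf{Q}^{\top}\mathbf{D} = \mathbf{D}$, so the two updates coincide here.

```python
import numpy as np

np.random.seed(0)
d, n, k = 50, 80, 10
D = np.random.randn(d, 5) @ np.random.randn(5, n)  # rank-5 dictionary
x = np.random.randn(d)

Omega = np.random.randn(n, k)
Q, _ = np.linalg.qr(D @ Omega)   # Q: d x k with orthonormal columns
W = Q.T @ D                      # W: k x n, approximation is Q @ W

z = np.random.randn(n)
step = 0.01                      # stands in for 2 / (tau * s)
# factored update, O(dk + nk) per iteration
z_rma = z - step * (W.T @ (Q.T @ (Q @ (W @ z))) - W.T @ (Q.T @ x))
# reference update with the full dictionary, O(dn)
z_ref = z - step * (D.T @ (D @ z) - D.T @ x)
```

For a general full-rank $\mathbf{D}$ the two updates differ by an amount controlled by the approximation error $\|\mathbf{D} - \tilde \mathbf{D}\|_2$.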
\subsubsection{Accelerated Proximal Gradient Descent via Random Dimension Reduction: PGD-RDR}
\label{sec::PGD-RDR}
We introduce PGD-RDR, which employs random projection to generate a low-dimensional signal and dictionary, upon which PGD is applied for improved efficiency. The literature \citep{Frankl:1987-JL-Lemma,Indyk1998-ANN,Zhang2016-sparse-random-convex-concave} extensively considers random projections that satisfy the following $\ell^2$-norm preserving property, which is closely related to the proof of the Johnson–Lindenstrauss lemma \citep{Dasgupta2003-JL-proof}.
\begin{MyDefinition}\label{def:L2-norm-preseving}
The linear operator $\mathbf{T} \colon {\rm I}\kern-0.18em{\rm R}^d \to {\rm I}\kern-0.18em{\rm R}^m$ satisfies the $\ell^2$-norm preserving property if there exists constant $c > 0$ such that
\begin{small}\begin{align}
&\Pr\big[(1-\varepsilon)\|\mathbf{v}\|_2 \le \|\mathbf{T} \mathbf{v}\|_2 \le (1+\varepsilon)\|\mathbf{v}\|_2\big] \ge 1-2e^{-\frac{m\varepsilon^2}{c}}
\end{align}\end{small}%
holds for any fixed $\mathbf{v} \in {\rm I}\kern-0.18em{\rm R}^d$ and $0 < \varepsilon \le \frac{1}{2}$.
\end{MyDefinition}
A linear operator $\mathbf{T}$ satisfying the $\ell^2$-norm preserving property can be generated randomly according to simple distributions. Let $\mathbf{T}^{'} = \sqrt{m} \mathbf{T}$; it is proved in \citep{Arriaga2006,Achlioptas2003-database-friendly-random-projection} that $\mathbf{T}$ satisfies the $\ell^2$-norm preserving property if all the elements of $\mathbf{T}^{'}$ are sampled independently from the Gaussian distribution $\mathcal{N}(0,1)$, the uniform distribution over $\{\pm 1\}$, or the database-friendly distribution described by
\begin{small}\begin{align*}
\mathbf{T}_{ij}^{'} =
\left\{
\begin{array}
{r@{\quad:\quad}l}
\sqrt{3} & {\text{with probability} \,\, \frac{1}{6}} \\
0 & {\text{with probability} \,\, \frac{2}{3}} \\
-\sqrt{3} & {\text{with probability} \,\, \frac{1}{6}}
\end{array}
\right., 1 \le i \le m, 1 \le j \le d
\end{align*}\end{small}%
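As a quick illustration (the sizes and the fixed seed are arbitrary), the database-friendly matrix can be sampled with \texttt{np.random.choice}, and the norm-preserving behaviour checked on a random vector:

```python
import numpy as np

np.random.seed(0)
d, m = 1000, 200
# entries of T' = sqrt(m) * T: sqrt(3) w.p. 1/6, 0 w.p. 2/3, -sqrt(3) w.p. 1/6
Tp = np.random.choice([np.sqrt(3.0), 0.0, -np.sqrt(3.0)],
                      size=(m, d), p=[1 / 6, 2 / 3, 1 / 6])
T = Tp / np.sqrt(m)

v = np.random.randn(d)
ratio = np.linalg.norm(T @ v) / np.linalg.norm(v)  # concentrates near 1
```

Only a sixth of the entries are nonzero in expectation, so $\mathbf{T}\mathbf{v}$ can also be computed with sparse arithmetic.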
With $m < d$, PGD-RDR first generates the dimensionality-reduced signal and dictionary by $\bar \mathbf{x} = \mathbf{T} \mathbf{x}$ and $\bar \mathbf{D} = \mathbf{T} \mathbf{D}$, and then solves the following dimensionality-reduced $\ell^0$ sparse approximation problem
\begin{small}\begin{align}\label{eq:l0-sparse-appro-rdr}
&\mathop {\min }\limits_{{\mathbf{z} \in {\rm I}\kern-0.18em{\rm R}^n} } \bar L(\mathbf{z}) = {\|\bar \mathbf{x} - \bar \mathbf{D} \mathbf{z}\|_2^2 + {\lambda}\|{\mathbf{z}}\|_0}
\end{align}\end{small}%
by PGD. The procedure of PGD-RDR for the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}) is described in Algorithm~\ref{alg:pgd-rdr}. The time complexity of sampling the random matrix $\mathbf{T}$ is $\mathcal{O}(md)$, and the time complexity of the first step of PGD (\ref{eq:l0-sa-proximal-step1}) for gradient descent is reduced from $\mathcal{O}(dn)$ to $\mathcal{O}(mn)$. The time complexity of PGD-RDR is $\mathcal{O}(Mmn)$ where $M$ is the number of iterations (or maximum number of iterations), compared to the complexity $\mathcal{O}(Mdn)$ for the original PGD. Improvement on the efficiency is achieved with $m < d$.
\begin{algorithm}[!ht]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand\algorithmicensure {\textbf{Output:} }
\caption{Proximal Gradient Descent via Randomized Dimension Reduction (PGD-RDR) for the $\ell^{0}$ Sparse Approximation (\ref{eq:l0-sparse-appro}) }
\label{alg:pgd-rdr}
\begin{algorithmic}[1]
\REQUIRE ~~\\
The given signal $\mathbf{x} \in {\rm I}\kern-0.18em{\rm R}^d$, the dictionary $\mathbf{D}$, the parameter $\lambda$ for the weight of the $\ell^{0}$-norm, maximum iteration number $M$, stopping threshold $\varepsilon$, the initialization ${\mathbf{z}}^{(0)} \in {\rm I}\kern-0.18em{\rm R}^n$.
\STATE Sample a random matrix $\mathbf{T} \in {\rm I}\kern-0.18em{\rm R}^{m \times n}$ which satisfies the $\ell^2$-norm preserving property in Definition~\ref{def:L2-norm-preseving}, e.g. $\sqrt{m} \mathbf{T}_{ij} \sim \mathcal{N}(0,1)$.
\STATE Compute the low-dimensional signal $\bar \mathbf{x} = \mathbf{T} \mathbf{x}$ and the dimensionality-reduced dictionary $\bar \mathbf{D} = \mathbf{T} \mathbf{D}$.
\STATE{Perform PGD with (\ref{eq:l0-sa-proximal-step1}) and (\ref{eq:l0-sa-proximal-step2})} starting from $t = 1$, with $\mathbf{x}$ and $\mathbf{D}$ replaced by $\bar \mathbf{x}$ and $\bar \mathbf{D}$. The iteration terminates when either $\{{\mathbf{z}}^{(t)}\}_t$ or $\{L({\mathbf{z}}^{(t)})\}_t$ converges under the threshold $\varepsilon$, or when the maximum iteration number $M$ is reached.
\ENSURE Obtain the sparse code $\hat {\bar \mathbf{z}}$ upon the termination of the iterations.
\end{algorithmic}
\end{algorithm}
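The whole PGD-RDR pipeline fits in a few lines of NumPy; the sketch below (synthetic data and illustrative constants, not taken from the paper) projects the signal and dictionary with a Gaussian $\mathbf{T}$ and then runs the gradient/hard-thresholding iteration in dimension $m$:

```python
import numpy as np

np.random.seed(0)
d, n, m, lam, tau = 100, 50, 40, 0.05, 1.5
D = np.random.randn(d, n)
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
z_true = np.zeros(n); z_true[[2, 9]] = [1.0, -1.0]
x = D @ z_true

T = np.random.randn(m, d) / np.sqrt(m)   # Gaussian T; sqrt(m)*T ~ N(0,1)
x_bar, D_bar = T @ x, T @ D              # reduced signal and dictionary

s = 2 * np.linalg.norm(D_bar, 2) ** 2    # Lipschitz constant of grad of Q_bar
theta = np.sqrt(2 * lam / (tau * s))     # hard-thresholding level
z = np.zeros(n)
for _ in range(300):
    z = z - (2 / (tau * s)) * (D_bar.T @ (D_bar @ z) - D_bar.T @ x_bar)
    z[np.abs(z) < theta] = 0.0           # proximal (hard-thresholding) step
```

Each iteration costs $\mathcal{O}(mn)$ instead of $\mathcal{O}(dn)$.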
\subsection{Theoretical Analysis}
We analyze the theoretical properties of PGD-RMA and PGD-RDR proposed in the previous section. For both randomized algorithms, we present a bound on the gap between the sub-optimal solution to the reduced $\ell^0$ sparse approximation problem (\ref{eq:l0-sparse-appro-rma}) or (\ref{eq:l0-sparse-appro-rdr}) and the globally optimal solution $\mathbf{z}^*$ to the original problem.
\subsubsection{Analysis for PGD-RMA}
It is proved by \citep{Halko2011-random-matrix-decomposition} that the approximation $\tilde \mathbf{D}$ is close to $\mathbf{D}$ in terms of the spectral norm:
\begin{MyLemma}\label{lemma::D-approx}
(\textit{Corollary $10.9$ in \citep{Halko2011-random-matrix-decomposition}})
Let $k_0 \ge 2$ and $p = k-k_0 \ge 4$. Then with probability at least $1-6e^{-p}$, the spectral norm of $\mathbf{D} - \tilde \mathbf{D}$ is bounded by
\begin{align}\label{eq:D-appro}
&\|\mathbf{D} - \tilde \mathbf{D}\|_2 \le C_{k,k_0}
\end{align}
\end{align}
where
\begin{align}\label{eq:C-k-k0}
&C_{k,k_0} = \big(1+17\sqrt{1+\frac{k_0}{p}}\big) \sigma_{k_0+1} + \frac{8\sqrt{k}}{p+1} (\sum\limits_{j > k_0} \sigma_j^2)^{\frac{1}{2}}
\end{align}
and $\sigma_1 \ge \sigma_2 \ge \ldots$ are the singular values of $\mathbf{D}$.
\end{MyLemma}
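The bound is easy to probe numerically; the sketch below (a synthetic $\mathbf{D}$ with geometric singular-value decay and a fixed seed; the sizes are illustrative) computes the error $\|\mathbf{D} - \mathbf{Q}\mathbf{Q}^{\top}\mathbf{D}\|_2$ and the constant $C_{k,k_0}$ of (\ref{eq:C-k-k0}):

```python
import numpy as np

np.random.seed(1)
d, n, k, k0 = 60, 100, 14, 8            # p = k - k0 = 6 >= 4
r = min(d, n)
sig = 0.5 ** np.arange(r)               # geometric singular-value decay
U = np.linalg.qr(np.random.randn(d, d))[0]
V = np.linalg.qr(np.random.randn(n, n))[0]
D = (U[:, :r] * sig) @ V[:, :r].T       # D = U diag(sig) V^T

Omega = np.random.randn(n, k)
Q, _ = np.linalg.qr(D @ Omega)
err = np.linalg.norm(D - Q @ (Q.T @ D), 2)    # spectral-norm error

p = k - k0
C = (1 + 17 * np.sqrt(1 + k0 / p)) * sig[k0] \
    + 8 * np.sqrt(k) / (p + 1) * np.sqrt(np.sum(sig[k0:] ** 2))
```

In practice the observed error is far below $C_{k,k_0}$; the bound is loose but holds with the stated probability.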
Let $\tilde \mathbf{z}$ be the globally optimal solution to (\ref{eq:l0-sparse-appro-rma}), $\tilde \mathbf{S} = {\rm supp}(\tilde {\mathbf{z}})$, $\tilde Q(\mathbf{z}) = \|\mathbf{x} - \tilde \mathbf{D} {\mathbf{z}}\|_2^2$, and let ${\mathbf{z}}^{(0)}$ be the initialization of PGD for the reduced problem (\ref{eq:l0-sparse-appro-rma}). The following theorem bounds the gap between $\tilde \mathbf{z}$ and ${\mathbf{z}}^*$.
\begin{MyTheorem}\label{theorem::optimal-rma}
(\textit{The optimal solution to the reduced problem (\ref{eq:l0-sparse-appro-rma}) is close to that of the original problem})
Let $\mathbf{G} = \tilde \mathbf{S} \cup \mathbf{S}^*$. Suppose $\mathbf{D}_{\mathbf{G}}$ is not singular with $\tau_0 \triangleq \sigma_{\min}(\mathbf{D}_{\mathbf{G}}) > 0$, $2\tau_0^2 > \tau > 0$. Then with probability at least $1-6e^{-p}$,
\begin{small}\begin{align}\label{eq:optimal-rp}
&\|{\mathbf{z}}^*-{\tilde \mathbf{z}}\|_2 \nonumber \\
&\le \frac{1}{2\tau_0^2-\tau}\bigg(\big(\sum\limits_{j \in {\mathbf{G}} \cap \tilde \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\tilde \mathbf{z}}_j - b| \})^2 + \nonumber \\
&\sum\limits_{j \in {\mathbf{G}} \setminus \tilde \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} + 2C_{k,k_0}M_0 (2\sigma_{\max}(\mathbf{D}) + C_{k,k_0}) + 2C_{k,k_0}\|\mathbf{x}\|_2 \bigg)
\end{align}\end{small}%
where $M_0 = \frac{\|\mathbf{x}\|_2 + \sqrt{{\tilde L}({\mathbf{z}}^{(0)})}} {\tau_0}$, and $b$ satisfies
\begin{small}\begin{align}\label{eq:b-cond-rp}
&0< b < \min\{\min_{j \in {\tilde \mathbf{S}}} | {\tilde \mathbf{z}}_j|, \max_{k \notin {\tilde \mathbf{S}}} \frac{\lambda }{ (\frac{\partial {\tilde Q}}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\tilde \mathbf{z}}}-\lambda)_{+}},
\min_{j \in {\mathbf{S}^*}} | {\mathbf{z}_j}^*|, \max_{k \notin {\mathbf{S}^*}} \frac{\lambda}{(\frac{\partial Q}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\mathbf{z}}^*}-\lambda)_{+}} \}
\end{align}\end{small}
\end{MyTheorem}
Let $\mathbf{S} = {\rm supp}({\mathbf{z}}^{(0)})$. Combining Theorem~\ref{theorem::suboptimal-optimal} and Theorem~\ref{theorem::optimal-rma}, we obtain a bound on the gap between the sub-optimal solution to the reduced problem (\ref{eq:l0-sparse-appro-rma}) and the globally optimal solution ${\mathbf{z}}^*$ to the original problem (\ref{eq:l0-sparse-appro}).
\begin{MyTheorem}\label{theorem::suboptimal-optimal-rma}
(\textit{Sub-optimal solution to the reduced problem (\ref{eq:l0-sparse-appro-rma}) is close to the globally optimal solution to the original problem})
Choose ${\mathbf{z}}^{(0)}$ such that $\|\mathbf{x} - {\tilde \mathbf{D}} {\mathbf{z}}^{(0)}\|_2 \le 1$ and $s > \max\{2|\mathbf{S}|, \frac{2(1+{\lambda}|\mathbf{S}|)}{\lambda \tau}\}$. If ${\tilde \mathbf{D}}_{\mathbf{S}}$ is nonsingular, then the sequence $\{{\mathbf{z}}^{(t)}\}_t$ generated by PGD for the reduced problem (\ref{eq:l0-sparse-appro-rma}) converges to a critical point of ${\tilde L}(\mathbf{z})$, denoted by $\hat {\tilde \mathbf{z}}$. Let $\hat \mathbf{S} = {\rm supp}(\hat {\tilde \mathbf{z}})$, $\mathbf{F} = ({\hat \mathbf{S}} \setminus \tilde \mathbf{S}) \cup (\tilde \mathbf{S} \setminus {\hat \mathbf{S}})$, $\mathbf{G} = \tilde \mathbf{S} \cup \mathbf{S}^*$. Suppose ${\tilde \mathbf{D}}_{\hat \mathbf{S} \cup \tilde \mathbf{S}}$ is not singular with $\kappa_0 \triangleq \sigma_{\min}({\tilde \mathbf{D}}_{\hat \mathbf{S} \cup \tilde \mathbf{S}}) > 0$ and $\kappa_0^2 > \kappa > 0$, and $\mathbf{D}_{\mathbf{G}}$ is not singular with $\tau_0 \triangleq \sigma_{\min}(\mathbf{D}_{\mathbf{G}}) > 0$ and $2\tau_0^2 > \tau > 0$. Then with probability at least $1-6e^{-p}$,
\begin{small}\begin{align}\label{eq:suboptimal-optimal-rma}
&\|{\mathbf{z}}^*-{\hat {\tilde \mathbf{z}}}\|_2 \le b_1 + b_2,
\end{align}\end{small}%
where
\begin{small}\begin{align*}
&b_1 = \frac{\big(\sum\limits_{j \in \mathbf{F} \cap \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\hat {\tilde \mathbf{z}}}_j - b|\})^2 +
\sum\limits_{j \in \mathbf{F} \setminus \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}}}{2\kappa_0^2-\kappa} \\
&b_2 = \frac{1}{2\tau_0^2-\tau}\bigg(\big(\sum\limits_{j \in {\mathbf{G}} \cap \tilde \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\tilde \mathbf{z}}_j - b| \})^2 +\sum\limits_{j \in {\mathbf{G}} \setminus \tilde \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} \nonumber \\
&+ 2C_{k,k_0}M_0 (2\sigma_{\max}(\mathbf{D}) + C_{k,k_0}) + 2C_{k,k_0}\|\mathbf{x}\|_2 \bigg)
\end{align*}
\end{small}%
Here $M_0 = \frac{\|\mathbf{x}\|_2 + \sqrt{{\tilde L}({\mathbf{z}}^{(0)})}} {\tau_0}$, and $b$ satisfies
\begin{small}\begin{align}\label{eq:b-cond-rma-sub}
&0< b < \min\{\min_{j \in {\tilde \mathbf{S}}} | {\tilde \mathbf{z}}_j|, \max_{k \notin {\tilde \mathbf{S}}} \frac{\lambda }{ (\frac{\partial {\tilde Q}}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\tilde \mathbf{z}}}-\lambda)_{+}},
\min_{j \in {\mathbf{S}^*}} | {\mathbf{z}_j}^*|, \max_{k \notin {\mathbf{S}^*}} \frac{\lambda}{(\frac{\partial Q}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\mathbf{z}}^*}-\lambda)_{+}}, \min_{j \in {\hat \mathbf{S}}} | {\hat {\tilde \mathbf{z}}}_j|,
\max_{k \notin {\hat \mathbf{S}}} \frac{\lambda }{ (\frac{\partial {\tilde Q}}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\hat {\tilde \mathbf{z}}}}-\lambda)_{+}} \}
\end{align}\end{small}%
\end{MyTheorem}
\subsubsection{Analysis for PGD-RDR}
Let $\bar \mathbf{z}$ be the globally optimal solution to (\ref{eq:l0-sparse-appro-rdr}), $\bar \mathbf{S} = {\rm supp}(\bar {\mathbf{z}})$, $\bar Q(\mathbf{z}) = \|\bar \mathbf{x} - \bar \mathbf{D} {\mathbf{z}}\|_2^2$, and let ${\mathbf{z}}^{(0)}$ be the initialization of PGD for the dimensionality-reduced problem (\ref{eq:l0-sparse-appro-rdr}). The following theorem bounds the gap between $\bar \mathbf{z}$ and ${\mathbf{z}}^*$.
\begin{MyTheorem}\label{theorem::optimal-rdr}
(\textit{The optimal solution to the dimensionality-reduced problem (\ref{eq:l0-sparse-appro-rdr}) is close to that of the original problem})
Let ${\mathbf{H}} = \bar \mathbf{S} \cup \mathbf{S}^*$. Suppose $\mathbf{D}_{{\mathbf{H}}}$ is not singular with $\eta_0 \triangleq \sigma_{\min}(\mathbf{D}_{{\mathbf{H}}}) > 0$ and $2\eta_0^2 > \eta > 0$. If $\mathbf{T}$ satisfies the $\ell^2$-norm preserving property in Definition~\ref{def:L2-norm-preseving} and $m \ge 4c\log{\frac{4}{\delta}}$, then with probability at least $1 - \delta$,
\begin{small}\begin{align}\label{eq:optimal-rdr}
&\|{\mathbf{z}}^*-{\bar \mathbf{z}}\|_2 \nonumber \\
&\le \frac{1}{2\eta_0^2-\eta}\bigg(\big(\sum\limits_{j \in {{\mathbf{H}}} \cap \bar \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\bar \mathbf{z}}_j - b| \})^2 + \nonumber \\
&\sum\limits_{j \in {{\mathbf{H}}} \setminus \bar \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} + 2\|\mathbf{D}\|_F{M_1}\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} (\sigma_{\max}(\mathbf{D})+1) \bigg)
\end{align}\end{small}%
where $M_1 = \frac{\|\mathbf{x}\|_2 + \sqrt{{\bar L}({\mathbf{z}}^{(0)})}} {\eta_0}$, and $b$ satisfies
\begin{small}\begin{align}\label{eq:b-cond-rdr}
&0< b < \min\{\min_{j \in {\bar \mathbf{S}}} | {\bar \mathbf{z}}_j|, \max_{k \notin {\bar \mathbf{S}}} \frac{\lambda }{ (\frac{\partial {\bar Q}}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\bar \mathbf{z}}}-\lambda)_{+}},
\min_{j \in {\mathbf{S}^*}} | {\mathbf{z}_j}^*|, \max_{k \notin {\mathbf{S}^*}} \frac{\lambda}{(\frac{\partial Q}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\mathbf{z}}^*}-\lambda)_{+}} \}
\end{align}\end{small}
\end{MyTheorem}
Similar to the analysis for PGD-RMA, we let $\mathbf{S} = {\rm supp}({\mathbf{z}}^{(0)})$. Combining Theorem~\ref{theorem::suboptimal-optimal} and Theorem~\ref{theorem::optimal-rdr}, we obtain a bound on the gap between the sub-optimal solution to the dimensionality-reduced problem (\ref{eq:l0-sparse-appro-rdr}) and the globally optimal solution ${\mathbf{z}}^*$ to the original problem (\ref{eq:l0-sparse-appro}).
\begin{MyTheorem}\label{theorem::suboptimal-optimal-rdr}
(\textit{Sub-optimal solution to the dimensionality-reduced problem (\ref{eq:l0-sparse-appro-rdr}) is close to the globally optimal solution to the original problem})
Choose ${\mathbf{z}}^{(0)}$ such that $\|\mathbf{x} - {\bar \mathbf{D}} {\mathbf{z}}^{(0)}\|_2 \le 1$ and $s > \max\{2|\mathbf{S}|, \frac{2(1+{\lambda}|\mathbf{S}|)}{\lambda \eta}\}$. If ${\bar \mathbf{D}}_{\mathbf{S}}$ is nonsingular, then the sequence $\{{\mathbf{z}}^{(t)}\}_t$ generated by PGD for the dimensionality-reduced problem (\ref{eq:l0-sparse-appro-rdr}) converges to a critical point of ${\bar L}(\mathbf{z})$, denoted by $\hat {\bar \mathbf{z}}$. Let $\hat \mathbf{S} = {\rm supp}(\hat {\bar \mathbf{z}})$, $\mathbf{F} = ({\hat \mathbf{S}} \setminus \bar \mathbf{S}) \cup (\bar \mathbf{S} \setminus {\hat \mathbf{S}})$, ${\mathbf{H}} = \bar \mathbf{S} \cup \mathbf{S}^*$. Suppose ${\bar \mathbf{D}}_{\hat \mathbf{S} \cup \bar \mathbf{S}}$ is not singular with $\kappa_0 \triangleq \sigma_{\min}({\bar \mathbf{D}}_{\hat \mathbf{S} \cup \bar \mathbf{S}}) > 0$ and $\kappa_0^2 > \kappa > 0$, and $\mathbf{D}_{{\mathbf{H}}}$ is not singular with $\eta_0 \triangleq \sigma_{\min}(\mathbf{D}_{{\mathbf{H}}}) > 0$ and $2\eta_0^2 > \eta > 0$. If $\mathbf{T}$ satisfies the $\ell^2$-norm preserving property in Definition~\ref{def:L2-norm-preseving} and $m \ge 4c\log{\frac{4}{\delta}}$, then with probability at least $1 - \delta$,
\begin{small}\begin{align}\label{eq:suboptimal-optimal-rdr}
&\|{\mathbf{z}}^*-{\hat {\bar \mathbf{z}}}\|_2 \le b_1 + b_2,
\end{align}\end{small}%
where
\begin{small}\begin{align*}
&b_1 = \frac{\big(\sum\limits_{j \in \mathbf{F} \cap \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\hat {\bar \mathbf{z}}}_j - b|\})^2 +
\sum\limits_{j \in \mathbf{F} \setminus \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}}}{2\kappa_0^2-\kappa} \\
&b_2 = \frac{1}{2\eta_0^2-\eta}\bigg(\big(\sum\limits_{j \in {{\mathbf{H}}} \cap \bar \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\bar \mathbf{z}}_j - b| \})^2 + \nonumber \\
&\sum\limits_{j \in {{\mathbf{H}}} \setminus \bar \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} + 2\|\mathbf{D}\|_F{M_1}\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} (\sigma_{\max}(\mathbf{D})+1) \bigg)
\end{align*}
\end{small}%
Here $M_1 = \frac{\|\mathbf{x}\|_2 + \sqrt{{\bar L}({\mathbf{z}}^{(0)})}} {\eta_0}$, and $b$ satisfies
\begin{small}\begin{align}\label{eq:b-cond-rdr-sub}
&0< b < \min\{\min_{j \in {\bar \mathbf{S}}} | {\bar \mathbf{z}}_j|, \max_{k \notin {\bar \mathbf{S}}} \frac{\lambda }{ (\frac{\partial {\bar Q}}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\bar \mathbf{z}}}-\lambda)_{+}},
\min_{j \in {\mathbf{S}^*}} | {\mathbf{z}_j}^*|, \max_{k \notin {\mathbf{S}^*}} \frac{\lambda}{(\frac{\partial Q}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\mathbf{z}}^*}-\lambda)_{+}}, \min_{j \in {\hat \mathbf{S}}} | {\hat {\bar \mathbf{z}}}_j|,
\max_{k \notin {\hat \mathbf{S}}} \frac{\lambda }{ (\frac{\partial {\bar Q}}{\partial {\mathbf{z}_k}}|_{\mathbf{z} = {\hat {\bar \mathbf{z}}}}-\lambda)_{+}} \}
\end{align}\end{small}%
\end{MyTheorem}
The detailed proofs of the theorems and lemmas are included in Section~\ref{sec::supplementary}. Note that we slightly abuse the notation of $\mathbf{F}$, $\hat \mathbf{S}$, $\kappa$ and $\kappa_0$ in the analysis for PGD-RMA and PGD-RDR, with no confusion. To the best of our knowledge, our theoretical results are among the very few for provably efficient randomized algorithms for the nonsmooth and nonconvex $\ell^0$ sparse approximation problem.
\section{Proximal Gradient Descent for $\ell^{0}$ Sparse Approximation}
Solving the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}) is NP-hard in general \citep{Natarajan95-sparse-linear-system}. Therefore, the literature extensively resorts to approximate algorithms for $\ell^{0}$ problems, such as Orthogonal Matching Pursuit \citep{Tropp04} or methods using surrogate functions \citep{Hyder09}. In addition, PGD has been used by \citep{BaoJQS14} to find an approximate solution to (\ref{eq:l0-sparse-appro}) with sublinear convergence to a critical point of the objective of (\ref{eq:l0-sparse-appro}), as well as satisfactory empirical results. The success of PGD raises the interesting question of how good the approximate solution obtained by PGD is.
In this section, we first present the algorithm that employs PGD to optimize (\ref{eq:l0-sparse-appro}) in an iterative shrinkage manner. Then we characterize the suboptimality of the solution obtained by PGD in terms of the gap between that sub-optimal solution and the globally optimal solution.
\subsection{Algorithm}
In the $t$-th iteration of PGD ($t \ge 1$), gradient descent is performed on the squared loss term of $L(\mathbf{z})$, i.e. $Q(\mathbf{z}) \triangleq \|\mathbf{x} - \mathbf{D} {\mathbf{z}}\|_2^2$, to obtain
\begin{small}\begin{align}\label{eq:l0-sa-proximal-step1}
\tilde {\mathbf{z}}^{(t)} = {\mathbf{z}}^{(t-1)} - \frac{2}{{\tau}s} ({\mathbf{D}^\top}{\mathbf{D}}{{\mathbf{z}}^{(t-1)}}-{\mathbf{D}^\top}{\mathbf{x}})
\end{align}\end{small}%
where $\tau$ is any constant greater than $1$, and $s>0$ is usually chosen as the Lipschitz constant for the gradient of the function $Q(\cdot)$, namely
\begin{small}\begin{align}\label{eq:lipschitz-Q}
\|\nabla Q(\mathbf{y}) - \nabla Q(\mathbf{z})\|_2 \le s \|\mathbf{y}-\mathbf{z}\|_2, \,\, \forall \, \mathbf{y},\mathbf{z} \in {\rm I}\kern-0.18em{\rm R}^n
\end{align}\end{small}%
${\mathbf{z}}^{(t)}$ is then the solution to the following proximal mapping:
\begin{small}\begin{align}\label{eq:l0-sa-subprob}
&{\mathbf{z}}^{(t)} = \argmin \limits_{\mathbf{v} \in {\rm I}\kern-0.18em{\rm R}^{n}} {\frac{{\tau} s}{2}\|\mathbf{v} - {\tilde {\mathbf{z}}^{(t)}}\|_2^2 + {\lambda}\|\mathbf{v}\|_0}
\end{align}\end{small}%
and (\ref{eq:l0-sa-subprob}) admits the closed-form solution:
\begin{small}\begin{align}\label{eq:l0-sa-proximal-step2}
&{\mathbf{z}}^{(t)} = h_{\sqrt{\frac{2\lambda}{{\tau}s}}}(\tilde {\mathbf{z}}^{(t)})
\end{align}\end{small}%
where $h_{\theta}$ is an element-wise hard thresholding operator:
\begin{small}\begin{align*}
[h_{\theta}(\mathbf{u})]_j=
\left\{
\begin{array}
{r@{\quad:\quad}l}
0 & {|\mathbf{u}_j| < \theta } \\
{\mathbf{u}_j} & {\rm otherwise}
\end{array}
\right., \quad 1 \le j \le n
\end{align*}\end{small}%
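For completeness, the closed form (\ref{eq:l0-sa-proximal-step2}) can be verified coordinate-wise: the problem (\ref{eq:l0-sa-subprob}) decouples over coordinates, and for each $j$ there are only two candidate minimizers.

```latex
% Coordinate-wise verification of the hard-thresholding rule:
\begin{align*}
\min_{v_j} \;\frac{\tau s}{2}\big(v_j - \tilde{\mathbf{z}}^{(t)}_j\big)^2
  + \lambda\,\mathbf{1}\{v_j \neq 0\}
= \min\Big\{\underbrace{\lambda}_{v_j = \tilde{\mathbf{z}}^{(t)}_j},\;
   \underbrace{\tfrac{\tau s}{2}\big(\tilde{\mathbf{z}}^{(t)}_j\big)^2}_{v_j = 0}\Big\},
\end{align*}
% so v_j = 0 is optimal iff
% |\tilde{\mathbf{z}}^{(t)}_j| \le \sqrt{2\lambda/(\tau s)},
% which is exactly the operator h_{\sqrt{2\lambda/(\tau s)}}.
```

At equality either candidate is optimal, so the strict inequality in the definition of $h_{\theta}$ is a valid tie-breaking choice.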
The iterations start from $t=1$ and continue until the sequence $\{L({\mathbf{z}}^{(t)})\}_t$ or $\{{\mathbf{z}}^{(t)}\}_t$ converges or the maximum iteration number is reached. The optimization algorithm for the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}) by PGD is described in Algorithm~\ref{alg:pgd}. In practice, the time complexity of optimization by PGD is $\mathcal{O}(Mdn)$, where $M$ is the number of iterations (or maximum number of iterations) of PGD.
\begin{algorithm}[!ht]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand\algorithmicensure {\textbf{Output:} }
\caption{Proximal Gradient Descent for the $\ell^{0}$ Sparse Approximation (\ref{eq:l0-sparse-appro}) }
\label{alg:pgd}
\begin{algorithmic}[1]
\REQUIRE ~~\\
The given signal $\mathbf{x} \in {\rm I}\kern-0.18em{\rm R}^d$, the dictionary $\mathbf{D}$, the parameter $\lambda$ for the weight of the $\ell^{0}$-norm, maximum iteration number $M$, stopping threshold $\varepsilon$, the initialization ${\mathbf{z}}^{(0)} \in {\rm I}\kern-0.18em{\rm R}^n$.
\STATE{Obtain the sub-optimal solution ${\hat \mathbf{z}}$ by the proximal gradient descent (PGD) method with (\ref{eq:l0-sa-proximal-step1}) and (\ref{eq:l0-sa-proximal-step2})} starting from $t = 1$. The iteration terminates when either $\{{\mathbf{z}}^{(t)}\}_t$ or $\{L({\mathbf{z}}^{(t)})\}_t$ converges under the threshold $\varepsilon$, or when the maximum iteration number $M$ is reached.
\ENSURE Obtain the sparse code $\hat \mathbf{z}$ upon the termination of the iterations.
\end{algorithmic}
\end{algorithm}
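The update rules (\ref{eq:l0-sa-proximal-step1}) and (\ref{eq:l0-sa-proximal-step2}) translate directly into NumPy; the sketch below uses synthetic data and the illustrative choice $\tau = 1.5$ (any $\tau > 1$ works):

```python
import numpy as np

def hard_threshold(u, theta):
    """Element-wise operator h_theta: zero out entries with |u_j| < theta."""
    v = u.copy()
    v[np.abs(v) < theta] = 0.0
    return v

def pgd_l0(x, D, lam, tau=1.5, max_iter=500, eps=1e-10, z0=None):
    """PGD sketch for min_z ||x - D z||_2^2 + lam * ||z||_0."""
    z = np.zeros(D.shape[1]) if z0 is None else z0.copy()
    s = 2.0 * np.linalg.norm(D, 2) ** 2      # Lipschitz constant of grad Q
    theta = np.sqrt(2.0 * lam / (tau * s))   # hard-thresholding level
    for _ in range(max_iter):
        z_half = z - (2.0 / (tau * s)) * (D.T @ (D @ z) - D.T @ x)
        z_new = hard_threshold(z_half, theta)
        if np.linalg.norm(z_new - z) < eps:  # stopping threshold
            return z_new
        z = z_new
    return z

# synthetic example: a signal generated by two active atoms
np.random.seed(0)
D = np.random.randn(20, 30)
D /= np.linalg.norm(D, axis=0)               # unit-norm columns
z_true = np.zeros(30); z_true[[3, 7]] = [1.0, -0.8]
x = D @ z_true
z_hat = pgd_l0(x, D, lam=0.01)
```

With $s = 2\sigma_{\max}(\mathbf{D})^2$, the Lipschitz constant of $\nabla Q$, the objective decreases monotonically along the iterates.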
\subsection{Theoretical Analysis}
In this section we present a bound for the gap between the sub-optimal solution obtained by PGD in Algorithm~\ref{alg:pgd} and the globally optimal solution for the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}). With a proper initialization $\mathbf{z}^{(0)}$, we show in Lemma~\ref{lemma::PGD-convergence} that the sub-optimal solution obtained by PGD is in fact a critical point of $L(\mathbf{z})$, namely the sequence $\{{\mathbf{z}}^{(t)}\}_t$ converges to a critical point of the objective of (\ref{eq:l0-sparse-appro}). We then show in Lemma~\ref{lemma::equivalence-to-capped-l1} that both this sub-optimal solution and the globally optimal solution to (\ref{eq:l0-sparse-appro}) are local solutions of a carefully designed capped-$\ell^{1}$ regularized problem. The bound for the $\ell^{2}$-distance between the sub-optimal solution and the globally optimal solution is then presented in Theorem~\ref{theorem::suboptimal-optimal}. In the following analysis, we let $\mathbf{S} = {\rm supp}({\mathbf{z}}^{(0)})$. Also, since the $\ell^{0}$-norm is invariant to scaling, the original problem (\ref{eq:l0-sparse-appro}) is equivalent to $\mathop {\min }\limits_{{\mathbf{z}^{'} \in {\rm I}\kern-0.18em{\rm R}^n} } {\|\mathbf{x} - \frac{\mathbf{D}}{m} \mathbf{z}^{'}\|_2^2 + {\lambda}\|{\mathbf{z}^{'}}\|_0}$ with $\mathbf{z}^{'} = m \mathbf{z}$ and $m > \max_i \|\mathbf{D}^i\|_2$, so it is safe to assume $\max_i \|\mathbf{D}^i\|_2 \le 1$. Without loss of generality, we also let $\|\mathbf{x}\|_2 \le 1$.
\begin{MyLemma}\label{lemma::shrinkage-sufficient-decrease}
(\textit{Support shrinkage and sufficient decrease of the objective function})
Choose ${\mathbf{z}}^{(0)}$ such that $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(0)}\|_2 \le 1$. When $s > \max\{2|\mathbf{S}|, \frac{2(1+{\lambda}|\mathbf{S}|)}{\lambda \tau}\}$,
\begin{small}\begin{align}\label{eq:support-shrinkage}
{\rm supp} ({\mathbf{z}}^{(t)}) \subseteq {\rm supp} ({\mathbf{z}}^{(t-1)}), t \ge 1
\end{align}\end{small}%
namely the support of the sequence $\{{\mathbf{z}}^{(t)}\}_t$ shrinks. Moreover, the sequence of the objective $\{L({\mathbf{z}}^{(t)})\}_t$ decreases, and the following inequality holds for $t \ge 1$:
\begin{small}\begin{align}\label{eq:l0graph-proximal-sufficient-decrease}
&L({\mathbf{z}}^{(t)}) \le L({\mathbf{z}}^{(t-1)}) - \frac{(\tau-1)s}{2} \|{\mathbf{z}}^{(t)} - {\mathbf{z}}^{(t-1)}\|_2^2
\end{align}\end{small}%
It follows that the sequence $\{L({\mathbf{z}}^{(t)})\}_t$ converges.
\end{MyLemma}
\begin{MyRemark}
One can always choose ${\mathbf{z}}^{(0)}$ as the optimal solution to the $\ell^{1}$ regularized problem, so $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(0)}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(0)}\|_1 \le \|\mathbf{x}\|_2^2 \le 1$, and it follows that $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(0)}\|_2^2 \le 1$. Also, when not every subspace spanned by linearly independent columns of $\mathbf{D}$ is orthogonal to $\mathbf{x}$ (which is common in practice), we can always find ${\mathbf{z}}^{(0)}$ such that $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(0)}\|_2^2 \le 1$ and ${\mathbf{z}}^{(0)}$ is a nonzero vector. The support shrinkage property by Lemma~\ref{lemma::shrinkage-sufficient-decrease} shows that the original $\ell^{0}$ sparse approximation (\ref{eq:l0-sparse-appro}) is equivalent to a dictionary-reduced version
\begin{align}\label{eq:l0-sparse-appro-reduced}
&\mathop {\min }\limits_{{\mathbf{z} \in {\rm I}\kern-0.18em{\rm R}^n} } L(\mathbf{z}) = {\|\mathbf{x} - \mathbf{D}_{\mathbf{S}} \mathbf{z}\|_2^2 + {\lambda}\|{\mathbf{z}}\|_0}
\end{align}
since PGD would not choose dictionary atoms outside of $\mathbf{D}_{\mathbf{S}}$. The numerical computation of PGD can thus be improved by using $\mathbf{D}_{\mathbf{S}}$ as the dictionary with $s > \max\{2|\mathbf{S}|, \frac{2(1+{\lambda}|\mathbf{S}|)}{\lambda \tau}\}$. When $\mathbf{z}^{(0)}$ is sparse, $\mathbf{D}_{\mathbf{S}}$ has a small number of atoms, which enables fast optimization of the reduced problem (\ref{eq:l0-sparse-appro-reduced}). Also note that, by the choice indicated by Lemma~\ref{lemma::shrinkage-sufficient-decrease}, $s$ can be much smaller than the Lipschitz constant for the gradient of the function $Q(\cdot)$, which leads to a larger step size for the gradient descent step (\ref{eq:l0-sa-proximal-step1}).
\end{MyRemark}
The definition of critical points, which is important for our analysis, is given below.
\begin{MyDefinition}
(\textit{Critical points})
Let $f \colon {\rm I}\kern-0.18em{\rm R}^n \to {\rm I}\kern-0.18em{\rm R} \cup \{+\infty\}$ be a proper and lower semi-continuous function.
\begin{itemize}
\item For a given $\mathbf{x} \in {\rm dom}f$, the Fr\'echet subdifferential of $f$ at $\mathbf{x}$, denoted by $\tilde \partial f(\mathbf{x})$, is
the set of all vectors $\mathbf{u} \in {\rm I}\kern-0.18em{\rm R}^n$ which satisfy
\begin{small}\begin{align*}
&\liminf\limits_{\mathbf{y} \neq \mathbf{x},\mathbf{y} \to \mathbf{x}} \frac{f(\mathbf{y})-f(\mathbf{x})-\langle \mathbf{u}, \mathbf{y}-\mathbf{x} \rangle}{\|\mathbf{y}-\mathbf{x}\|} \ge 0
\end{align*}\end{small}%
\item The limiting subdifferential of $f$ at $\mathbf{x} \in {\rm I}\kern-0.18em{\rm R}^n$, written $\partial f(\mathbf{x})$, is defined by
\begin{small}\begin{align*}
&\partial f(\mathbf{x}) = \{\mathbf{u} \in {\rm I}\kern-0.18em{\rm R}^n \colon \exists \, \mathbf{x}^k \to \mathbf{x}, \, f(\mathbf{x}^k) \to f(\mathbf{x}), \, \mathbf{u}^k \in {\tilde \partial} f(\mathbf{x}^k), \, \mathbf{u}^k \to \mathbf{u}\}
\end{align*}\end{small}%
\end{itemize}
The point $\mathbf{x}$ is a critical point of $f$ if $0 \in \partial f(\mathbf{x})$.
\end{MyDefinition}
If $\mathbf{D}_{\mathbf{S}}$ is nonsingular, Lemma~\ref{lemma::PGD-convergence} shows that the sequence $\{{\mathbf{z}}^{(t)}\}_t$ produced by PGD converges to a critical point of $L(\mathbf{z})$, the objective of the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}).
\begin{MyLemma}\label{lemma::PGD-convergence}
With $\mathbf{z}^{(0)}$ and $s$ as in Lemma~\ref{lemma::shrinkage-sufficient-decrease}, if $\mathbf{D}_{\mathbf{S}}$ is nonsingular, then the sequence $\{{\mathbf{z}}^{(t)}\}_t$ generated by PGD with (\ref{eq:l0-sa-proximal-step1}) and (\ref{eq:l0-sa-proximal-step2}) converges to a critical point of $L(\mathbf{z})$.
\end{MyLemma}
Denote by ${\hat \mathbf{z}}$ the critical point of $L(\mathbf{z})$ to which the sequence $\{{\mathbf{z}}^{(t)}\}_t$ converges when the assumption of Lemma~\ref{lemma::PGD-convergence} holds, and denote by ${\mathbf{z}}^*$ the globally optimal solution to the $\ell^{0}$ sparse approximation problem (\ref{eq:l0-sparse-appro}).
Also, we consider the following capped-$\ell^{1}$ regularized problem, which replaces the discontinuous $\ell^{0}$-norm with the continuous capped-$\ell^{1}$ regularization term $\mathbf{R}$:
\begin{small}\begin{align}\label{eq:capped-l1-problem}
&\mathop {\min }\limits_{{\bm{\beta} \in {\rm I}\kern-0.18em{\rm R}^n}} L_{{\rm capped-}\ell^{1}}(\bm{\beta}) = \|\mathbf{x} - \mathbf{D} \bm{\beta}\|_2^2 + \mathbf{R}(\bm{\beta};b)
\end{align}\end{small}%
where $\mathbf{R}(\bm{\beta};b) = \sum\limits_{j=1}^n R(\bm{\beta}_j;b)$ and $R(t;b) = {\lambda}\frac{\min\{|t|,b\}}{b}$ for some $b > 0$. It can be seen that $R(t;b)$ approaches the $\ell^{0}$ penalty ${\lambda}\mathbf{1}_{t \neq 0}$ as $b \to 0+$. Our following theoretical analysis aims to bound the gap between ${\hat \mathbf{z}}$ and ${\mathbf{z}}^*$. To this end, the definitions of a local solution and of the degree of nonconvexity of a regularizer are presented below.
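Concretely, $R(\cdot\,;b)$ matches the $\ell^{0}$ penalty exactly outside $(-b, b)$: $R(0;b) = 0$ and $R(t;b) = \lambda$ whenever $|t| \ge b$. A two-line check (with hypothetical values of $\lambda$ and $b$, for illustration only):

```python
# lam and b are hypothetical values chosen for illustration
lam, b = 0.3, 1e-3

def R(t):
    """Capped-l1 penalty R(t;b) = lam * min(|t|, b) / b."""
    return lam * min(abs(t), b) / b

# R(0) = 0.0, and R(t) caps at lam for any |t| >= b,
# matching the l0 penalty lam * 1{t != 0} outside (-b, b)
```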
\begin{MyDefinition}
(\textit{Local solution})
A vector $\tilde \bm{\beta}$ is a local solution to the problem (\ref{eq:capped-l1-problem}) if
\begin{small}\begin{align}\label{eq:cond-local-solution}
&\| 2{\mathbf{D}^{\top}}({\mathbf{D}} {\tilde \bm{\beta}} - \mathbf{x} ) + {\dot \mathbf{R}} (\tilde \bm{\beta};b)\|_2 = 0
\end{align}\end{small}%
where ${\dot \mathbf{R}(\tilde \bm{\beta};b) = [\dot R(\tilde \bm{\beta}_1;b),\dot R(\tilde \bm{\beta}_2;b),\ldots,\dot R(\tilde \bm{\beta}_n;b) ]^{\top}}$.
\end{MyDefinition}
Note that in the above definition and the following text, $\dot R(t;b)$ can be chosen as any value between the right differential $\frac{\partial R}{\partial t}(t+;b)$ (or ${\dot R(t+;b)}$) and left differential $\frac{\partial R}{\partial t}(t-;b)$ (or ${\dot R(t-;b)}$).
\begin{MyDefinition}\label{def::degree-nonconvexity}
(\textit{Degree of nonconvexity of a regularizer})
For $\kappa \geq 0$ and $t \in {\rm I}\kern-0.18em{\rm R}$, define
\[
\theta(t,\kappa):= \sup_s \{ -{\rm sgn}(s-t) ({\dot P}(s;b) - {\dot P}(t;b)) - \kappa |s-t|\}
\]
as the degree of nonconvexity for function $P$.
If $\mathbf{u} =(u_1,\ldots,u_n)^\top\in {\rm I}\kern-0.18em{\rm R}^n$, then $\theta(\mathbf{u},\kappa)=[\theta(u_1,\kappa),\ldots,\theta(u_n,\kappa)]$. ${\rm sgn}$ denotes the sign function.
\end{MyDefinition}
Note that $\theta(t,\kappa) = 0$ if $P$ is a convex function.
Let ${\hat \mathbf{S}} = {\rm supp}( {\hat \mathbf{z}})$, ${\mathbf{S}^*} = {\rm supp}( {\mathbf{z}}^*)$, the following lemma shows that both $ {\hat \mathbf{z}}$ and ${\mathbf{z}}^*$ are local solutions to the capped-$\ell^{1}$ regularized problem (\ref{eq:capped-l1-problem}).
\begin{MyLemma}\label{lemma::equivalence-to-capped-l1}
With $\mathbf{z}^{(0)}$ and $s$ in Lemma~\ref{lemma::shrinkage-sufficient-decrease}, if
\begin{small}\begin{align}\label{eq:b-cond}
&0< b < \min\{\min_{j \in {\hat \mathbf{S}}} | {\hat \mathbf{z}}_j|, \frac{\lambda}{ \max_{j \notin {\hat \mathbf{S}}} |\frac{\partial Q}{\partial {\mathbf{z}_j}}|_{\mathbf{z} = {\hat \mathbf{z}}}|},
\min_{j \in {\mathbf{S}^*}} | \mathbf{z}_j^*|, \frac{\lambda}{ \max_{j \notin {\mathbf{S}^*}} |\frac{\partial Q}{\partial {\mathbf{z}_j}}|_{\mathbf{z} = {\mathbf{z}}^*}|} \}
\end{align}\end{small}%
(if the denominator is $0$, $\frac{\lambda}{0}$ is defined to be $+\infty$ in the above inequality),
then both $ {\hat \mathbf{z}}$ and ${\mathbf{z}}^*$ are local solutions to the capped-$\ell^{1}$ regularized problem (\ref{eq:capped-l1-problem}).
\end{MyLemma}
\begin{MyTheorem}\label{theorem::suboptimal-optimal}
(\textit{Sub-optimal solution is close to the globally optimal solution})
With $\mathbf{z}^{(0)}$ and $s$ as in Lemma~\ref{lemma::shrinkage-sufficient-decrease}, suppose that $\mathbf{D}_{\hat \mathbf{S} \cup \mathbf{S}^*}$ is nonsingular with $\kappa_0 \triangleq \sigma_{\min}(\mathbf{D}_{\hat \mathbf{S} \cup \mathbf{S}^*}) > 0$. If $\kappa_0^2 > \kappa > 0$ and $b$ is chosen according to (\ref{eq:b-cond}) as in Lemma~\ref{lemma::equivalence-to-capped-l1}, then, with $\mathbf{F} = ({\hat \mathbf{S}} \setminus \mathbf{S}^*) \cup (\mathbf{S}^* \setminus {\hat \mathbf{S}})$ denoting the symmetric difference between $\hat \mathbf{S}$ and $\mathbf{S}^*$,
\begin{small}\begin{align}\label{eq:suboptimal-optimal}
&\|{\hat \mathbf{z}} - \mathbf{z}^*\|_2 \le \frac{\big(\sum\limits_{j \in \mathbf{F} \cap \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\hat \mathbf{z}}_j - b|\})^2 +
\sum\limits_{j \in \mathbf{F} \setminus \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}}}{2\kappa_0^2-\kappa}
\end{align}\end{small}%
\end{MyTheorem}
It is worthwhile to connect the assumption on the dictionary in Theorem~\ref{theorem::suboptimal-optimal} to the Restricted Isometry Property (RIP) \citep{CandesTao05} frequently used in the compressive sensing literature. Before the discussion, we define sparse eigenvalues below.
\begin{MyDefinition}
(\textit{Sparse eigenvalues})
The lower and upper sparse eigenvalues of a matrix $\mathbf{A}$ are defined as
\begin{small}\begin{align*}
& \kappa_-(m) := \min_{\|\mathbf{u}\|_0 \leq m; \|\mathbf{u}\|_2=1} \|\mathbf{A} \mathbf{u}\|_2^2 \quad \kappa_+(m) := \max_{\|\mathbf{u}\|_0 \leq m,\|\mathbf{u}\|_2=1}\|\mathbf{A} \mathbf{u}\|_2^2
\end{align*}\end{small}%
\end{MyDefinition}
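For small dictionaries these two quantities can be evaluated by exhaustive search over supports. The Python sketch below is a brute-force illustration of the definition (not an algorithm from the paper); it scans the Gram matrix of every $m$-column submatrix:

```python
import numpy as np
from itertools import combinations

def sparse_eigs(A, m):
    # kappa_-(m) / kappa_+(m): extreme values of ||A u||_2^2 over unit
    # vectors u with at most m nonzeros.  Restricting u to a support S
    # reduces this to the extreme eigenvalues of A_S^T A_S, so we scan
    # all supports of size exactly m (any smaller support is contained
    # in some size-m support and hence covered).
    lo, hi = np.inf, 0.0
    for S in combinations(range(A.shape[1]), m):
        w = np.linalg.eigvalsh(A[:, S].T @ A[:, S])  # ascending eigenvalues
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return lo, hi
```

For a matrix with orthonormal columns both quantities equal $1$, so $\delta_\tau = 0$; nearly parallel columns drive $\kappa_-(m)$ toward $0$.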
\subsection*{Assumptions for Our Analysis}
Typical RIP conditions require bounds such as $\delta_\tau+\delta_{2\tau}+\delta_{3\tau}< 1$ or $\delta_{2\tau} < \sqrt{2}-1$ \citep{Candes2008} for stably recovering the signal from measurements, where $\tau$ is the sparsity of the signal and $\delta_{\tau}=\max\{\kappa_+(\tau)-1,1-\kappa_-(\tau)\}$. It should be emphasized that our bound (\ref{eq:suboptimal-optimal}) only requires nonsingularity of the submatrix of $\mathbf{D}$ whose columns are indexed by the support of $\hat \mathbf{z}$ and $\mathbf{z}^*$, which is more general than RIP in the sense of not requiring bounds in terms of $\delta$. To see this point, choose the initialization $\mathbf{z}^{(0)}$ such that $|{\rm supp}(\mathbf{z}^{(0)})| \le \tau$; then the condition $\delta_{2\tau} < \sqrt{2}-1$ \citep{Candes2008} indicates that $\sigma_{\min}(\mathbf{D}_{\hat \mathbf{S} \cup \mathbf{S}^*}) > 2 - \sqrt{2}$, which is stronger than our assumption that $\sigma_{\min}(\mathbf{D}_{\hat \mathbf{S} \cup \mathbf{S}^*}) > 0$. In addition, our assumption on the nonsingularity of $\mathbf{D}_{\hat \mathbf{S} \cup \mathbf{S}^*}$ is much weaker than the sparse-eigenvalue conditions used in RIP, which bound the minimum eigenvalue of every submatrix of $\mathbf{D}$ with a specified number of columns.
\begin{MyRemark}\label{remark::suboptimal-optimal}
If $\mathbf{z}^{(0)}$ is sparse, $ {\hat \mathbf{z}}$ is also sparse by the support-shrinkage property in Lemma~\ref{lemma::shrinkage-sufficient-decrease}. We can then expect that $|\hat \mathbf{S} \cup \mathbf{S}^*|$ is reasonably small, and a small $|\hat \mathbf{S} \cup \mathbf{S}^*|$ often increases the chance of a larger $\sigma_{\min}(\mathbf{D}_{\hat \mathbf{S} \cup \mathbf{S}^*})$. Also note that the bound in Theorem~\ref{theorem::suboptimal-optimal} on the distance between the sub-optimal solution and the globally optimal solution does not require typical RIP conditions. Moreover, when $\frac{\lambda}{b} - {\kappa} |{\hat \mathbf{z}}_j - b|$ for nonzero ${\hat \mathbf{z}}_j$ and $\frac{\lambda}{b} - {\kappa} b$ are no greater than $0$, or are small positive numbers, the sub-optimal solution $ {\hat \mathbf{z}}$ is equal or very close to the globally optimal solution.
\end{MyRemark}
\section{Proofs}
\label{sec::supplementary}
\subsection{Proof of Lemma~\ref{lemma::shrinkage-sufficient-decrease}}
\begin{proof}
We prove this Lemma by mathematical induction.
When $t=1$, we first show that ${\rm supp} ({\mathbf{z}}^{(1)}) \subseteq {\rm supp} ({\mathbf{z}}^{(0)})$, i.e., the support of $\mathbf{z}$ shrinks after the first iteration. To see this, note that
$\tilde {\mathbf{z}}^{(t)} = {\mathbf{z}}^{(t-1)} - \frac{2}{{\tau}s} ({\mathbf{D}^\top}{\mathbf{D}}{{\mathbf{z}}^{(t-1)}}-{\mathbf{D}^\top}{\mathbf{x}})$.
Since $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(t-1)}\|_2^2 \le 1$, letting $\mathbf{g}^{(t-1)} = - \frac{2}{{\tau}s} ({\mathbf{D}^\top}{\mathbf{D}}{{\mathbf{z}}^{(t-1)}}-{\mathbf{D}^\top}{\mathbf{x}})$, we have
\begin{small}\begin{align*}
&|\tilde {\mathbf{z}_j}^{(t)}| \le \|\mathbf{g}^{(t-1)}\|_{\infty} \le \frac{2}{{\tau}s} \|{\mathbf{D}^\top}({\mathbf{D}}{{\mathbf{z}}^{(t-1)}}-{\mathbf{x}})\|_{\infty} \le \frac{2}{{\tau}s}
\end{align*}\end{small}
where $j$ is the index for any zero element of ${\mathbf{z}}^{(t-1)}$, $1 \le j \le n, j \notin {\rm supp}({\mathbf{z}}^{(t-1)})$. Now $|{\tilde {\mathbf{z}_j}^{(t)}}| < \sqrt{\frac{2\lambda}{{\tau}s}}$, and it follows that ${\mathbf{z}_j}^{(t)} = 0$ due to the update rule (\ref{eq:l0-sa-proximal-step2}). Therefore, the zero elements of ${\mathbf{z}}^{(t-1)}$ remain unchanged in ${\mathbf{z}}^{(t)}$, and ${\rm supp} ({\mathbf{z}}^{(t)}) \subseteq {\rm supp} ({\mathbf{z}}^{(t-1)})$ for $t=1$.
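The update rule (\ref{eq:l0-sa-proximal-step2}) invoked above acts elementwise as a hard-thresholding operator: setting an entry to zero costs $\frac{\tau s}{2}\tilde{\mathbf{z}}_j^2$ in the quadratic term while keeping it costs $\lambda$ in the $\ell^0$ term, so entries with $|\tilde{\mathbf{z}}_j| < \sqrt{2\lambda/(\tau s)}$ are zeroed. A minimal Python sketch of our reading of that step (the tie-breaking exactly at the threshold is a convention):

```python
import numpy as np

def l0_prox(z_tilde, lam, tau_s):
    # Elementwise argmin_v (tau_s/2) * (v - z_tilde_j)^2 + lam * 1[v != 0]:
    # the only candidates are v = z_tilde_j (cost lam) and v = 0
    # (cost (tau_s/2) * z_tilde_j^2), so entries below the threshold
    # sqrt(2 * lam / tau_s) in magnitude are set to zero.
    thr = np.sqrt(2.0 * lam / tau_s)
    return np.where(np.abs(z_tilde) < thr, 0.0, z_tilde)
```

This is why any coordinate with $|\tilde{\mathbf{z}}_j^{(t)}| < \sqrt{2\lambda/(\tau s)}$, in particular every coordinate outside ${\rm supp}({\mathbf{z}}^{(t-1)})$, stays at zero.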
Let $Q_{\mathbf{S}}(\mathbf{y}) = \|\mathbf{x} - \mathbf{D}_{\mathbf{S}} \mathbf{y}\|_2^2$ for $\mathbf{y} \in {\rm I}\kern-0.18em{\rm R}^{|\mathbf{S}|}$. We now show that any $s > 2|\mathbf{S}|$ is a Lipschitz constant for the gradient of the function $Q_{\mathbf{S}}$. To see this, we have
\begin{small}\begin{align*}
\sigma_{\max}({\mathbf{D}_{\mathbf{S}}^\top}{\mathbf{D}_{\mathbf{S}}}) = \big(\sigma_{\max}(\mathbf{D}_{\mathbf{S}})\big)^2 \le {\rm Tr}({\mathbf{D}_{\mathbf{S}}^\top}{\mathbf{D}_{\mathbf{S}}}) = |\mathbf{S}|
\end{align*}\end{small}
Also, $\nabla Q_{\mathbf{S}}(\mathbf{y}) = 2 ({\mathbf{D}_{\mathbf{S}}^\top}{\mathbf{D}_{\mathbf{S}}}{\mathbf{y}}-{\mathbf{D}_{\mathbf{S}}^\top}{\mathbf{x}})$, and
\begin{small}\begin{align}\label{eq:lemma1-proof-seg1}
& \|\nabla Q_{\mathbf{S}}(\mathbf{y}) - \nabla Q_{\mathbf{S}}(\mathbf{z})\|_2 = 2 \| {\mathbf{D}_{\mathbf{S}}^\top}{\mathbf{D}_{\mathbf{S}}}({\mathbf{y}}-{\mathbf{z}})\|_2 \\
&\le 2 \sigma_{\max}({\mathbf{D}_{\mathbf{S}}^\top}{\mathbf{D}_{\mathbf{S}}}) \cdot \|({\mathbf{y}}-{\mathbf{z}})\|_2 \nonumber \\
& \le 2|\mathbf{S}| \|({\mathbf{y}}-{\mathbf{z}})\|_2 < s \|({\mathbf{y}}-{\mathbf{z}})\|_2 \nonumber
\end{align}
\end{small}
Note that when $t=1$, since $${\mathbf{z}}^{(t)} = \argmin \limits_{\mathbf{v} \in {\rm I}\kern-0.18em{\rm R}^n} {\frac{{\tau}s}{2}\|\mathbf{v} - {\tilde {\mathbf{z}}^{(t)}}\|_2^2 + {\lambda}\|\mathbf{v}\|_0}$$ we have
\begin{small}\begin{align}\label{eq:lemma1-proof-seg2}
& \frac{{\tau}s}{2}\|{\mathbf{z}}^{(t)} - {\tilde {\mathbf{z}}^{(t)}}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(t)}\|_0 \\
&\le \frac{{\tau}s}{2}\|\frac{\nabla Q({\mathbf{z}}^{(t-1)})}{{\tau}s}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(t-1)}\|_0 \nonumber
\end{align}\end{small}
which is equivalent to
\begin{small}\begin{align}\label{eq:lemma1-proof-seg3}
& \langle \nabla Q_{\mathbf{S}} ({\mathbf{z}_{\mathbf{S}}}^{(t-1)}), {\mathbf{z}_{\mathbf{S}}}^{(t)} - {\mathbf{z}_{\mathbf{S}}}^{(t-1)}\rangle + \frac{{\tau}s}{2} \|{\mathbf{z}}^{(t)} - {\mathbf{z}}^{(t-1)}\|_2^2 \\
& + {\lambda}\|{\mathbf{z}}^{(t)}\|_0 \le {\lambda}\|{\mathbf{z}}^{(t-1)}\|_0 \nonumber
\end{align}\end{small}
due to the fact that
\begin{small}\begin{align*}
&\langle \nabla Q ({\mathbf{z}}^{(t-1)}), {\mathbf{z}}^{(t)} - {\mathbf{z}}^{(t-1)}\rangle =
\langle \nabla Q_{\mathbf{S}} ({\mathbf{z}_{\mathbf{S}}}^{(t-1)}), {\mathbf{z}_{\mathbf{S}}}^{(t)} - {\mathbf{z}_{\mathbf{S}}}^{(t-1)}\rangle
\end{align*}\end{small}
Also, since $s$ is a Lipschitz constant for $\nabla Q_{\mathbf{S}}$,
\begin{small}\begin{align}\label{eq:lemma1-proof-seg4}
& Q_{\mathbf{S}}({\mathbf{z}_{\mathbf{S}}}^{(t)}) \le Q_{\mathbf{S}}({\mathbf{z}_{\mathbf{S}}}^{(t-1)}) + \langle \nabla Q_{\mathbf{S}} ({\mathbf{z}_{\mathbf{S}}}^{(t-1)}), {\mathbf{z}_{\mathbf{S}}}^{(t)} - {\mathbf{z}_{\mathbf{S}}}^{(t-1)}\rangle \\
& + \frac{s}{2} \|{\mathbf{z}_{\mathbf{S}}}^{(t)} - {\mathbf{z}_{\mathbf{S}}}^{(t-1)}\|_2^2 \nonumber
\end{align}
\end{small}
Combining (\ref{eq:lemma1-proof-seg3}) and (\ref{eq:lemma1-proof-seg4}), and noting that $\|{\mathbf{z}_{\mathbf{S}}}^{(t)} - {\mathbf{z}_{\mathbf{S}}}^{(t-1)}\|_2 = \|{\mathbf{z}}^{(t)} - {\mathbf{z}}^{(t-1)}\|_2$, $Q_{\mathbf{S}}({\mathbf{z}_{\mathbf{S}}}^{(t)}) = Q({\mathbf{z}}^{(t)})$ and $Q_{\mathbf{S}}({\mathbf{z}_{\mathbf{S}}}^{(t-1)}) = Q({\mathbf{z}}^{(t-1)})$, we have
\begin{small}\begin{align}\label{eq:lemma1-proof-seg5}
&Q({\mathbf{z}}^{(t)}) + {\lambda}\|{\mathbf{z}}^{(t)}\|_0 \le Q({\mathbf{z}}^{(t-1)}) + {\lambda}\|{\mathbf{z}}^{(t-1)}\|_0 \\
& - \frac{(\tau-1)s}{2} \|{\mathbf{z}}^{(t)} - {\mathbf{z}}^{(t-1)}\|_2^2 \nonumber
\end{align}
\end{small}
Now (\ref{eq:support-shrinkage}) and (\ref{eq:l0graph-proximal-sufficient-decrease}) are verified for $t = 1$. Suppose that (\ref{eq:support-shrinkage}) and (\ref{eq:l0graph-proximal-sufficient-decrease}) hold for all $t \le t_0$ with $t_0 \ge 1$.
Since $\{L({\mathbf{z}}^{(t)})\}_{t=0}^{t_0}$ is decreasing, we have
\begin{small}\begin{align*}
&L({\mathbf{z}}^{(t_0)}) = \|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(t_0)}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(t_0)}\|_0 \\
&\le \|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(0)}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(0)}\|_0 \le 1 + {\lambda}|\mathbf{S}| \nonumber
\end{align*}\end{small}
which indicates that $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(t_0)}\|_2 \le \sqrt{1+{\lambda}|\mathbf{S}|}$.
When $t = t_0+1$,
\begin{small}\begin{align*}
&|\tilde {\mathbf{z}_j}^{(t)}| \le \|\mathbf{g}^{(t-1)}\|_{\infty} \le \frac{2}{{\tau}s} \|{\mathbf{D}^\top}({\mathbf{D}}{{\mathbf{z}}^{(t-1)}}-{\mathbf{x}})\|_{\infty} \\
&\le \frac{2}{{\tau}s} \sqrt{1+{\lambda}|\mathbf{S}|}
\end{align*}\end{small}
where $j$ is the index for any zero element of ${\mathbf{z}}^{(t-1)}$, $1 \le j \le n, j \notin {\rm supp}({\mathbf{z}}^{(t-1)})$. Now $|{\tilde {\mathbf{z}_j}^{(t)}}| < \sqrt{\frac{2\lambda}{{\tau}s}}$, and it follows that ${\mathbf{z}_j}^{(t)} = 0$ due to the update rule in (\ref{eq:l0-sa-proximal-step2}). Therefore, the zero elements of ${\mathbf{z}}^{(t-1)}$ remain unchanged in ${\mathbf{z}}^{(t)}$, and ${\rm supp} ({\mathbf{z}}^{(t)}) \subseteq {\rm supp} ({\mathbf{z}}^{(t-1)}) \subseteq \mathbf{S}$ for $t=t_0+1$. Moreover, similar to the case when $t = 1$, we can derive (\ref{eq:lemma1-proof-seg3}), (\ref{eq:lemma1-proof-seg4}) and (\ref{eq:lemma1-proof-seg5}), so that the support shrinkage (\ref{eq:support-shrinkage}) and the decrease of the objective (\ref{eq:l0graph-proximal-sufficient-decrease}) are verified for $t=t_0+1$. It follows that the claim of this lemma holds for all $t \ge 1$.
Since the sequence $\{L({\mathbf{z}}^{(t)})\}_t$ is decreasing with lower bound $0$, it must converge.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma::PGD-convergence}}
\begin{proof}
We first prove that the sequence $\{{\mathbf{z}}^{(t)}\}_t$ is bounded. In the proof of Lemma~\ref{lemma::shrinkage-sufficient-decrease}, it is shown that
\begin{small}\begin{align*}
&L({\mathbf{z}}^{(t)}) = \|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(t)}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(t)}\|_0 \\
&\le \|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(0)}\|_2^2 + {\lambda}\|{\mathbf{z}}^{(0)}\|_0 \le 1 + {\lambda}|\mathbf{S}| \nonumber
\end{align*}\end{small}
for $t \ge 1$. Therefore, $\|\mathbf{x} - \mathbf{D} {\mathbf{z}}^{(t)}\|_2 \le \sqrt{1 + {\lambda}|\mathbf{S}|}$ and it follows that $\|\mathbf{D} {\mathbf{z}}^{(t)}\|_2^2 \le (1 + \sqrt{1+{\lambda}|\mathbf{S}|})^2$. Since ${\rm supp}({\mathbf{z}}^{(t)}) \subseteq \mathbf{S}$ for $t \ge 0$ due to Lemma~\ref{lemma::shrinkage-sufficient-decrease},
\begin{small}\begin{align*}
&(1 + \sqrt{1+{\lambda}|\mathbf{S}|})^2 \ge \|\mathbf{D} {\mathbf{z}}^{(t)}\|_2^2 = \|\mathbf{D}_{\mathbf{S}} {\mathbf{z}_{\mathbf{S}}}^{(t)}\|_2^2 \\
&\ge \sigma_{\min}({\mathbf{D}_{\mathbf{S}}}^{\top}{\mathbf{D}_{\mathbf{S}}})
\| {\mathbf{z}_{\mathbf{S}}}^{(t)}\|_2^2 = \sigma_{\min}({\mathbf{D}_{\mathbf{S}}}^{\top}{\mathbf{D}_{\mathbf{S}}}) \|{\mathbf{z}}^{(t)}\|_2^2
\end{align*}\end{small}
Since $\mathbf{D}_{\mathbf{S}}$ is nonsingular, we have $\sigma_{\min}({\mathbf{D}_{\mathbf{S}}}^{\top}{\mathbf{D}_{\mathbf{S}}}) = (\sigma_{\min}(\mathbf{D}_{\mathbf{S}}))^2 > 0$, and it follows that ${\mathbf{z}}^{(t)}$ is bounded: $\|{\mathbf{z}}^{(t)}\|_2^2 \le \frac{(1 + \sqrt{1+{\lambda}|\mathbf{S}|})^2}{(\sigma_{\min}(\mathbf{D}_{\mathbf{S}}))^2}$.
In addition, since the $\ell^{0}$-norm function $\|\cdot\|_0$ is semi-algebraic, it follows from Theorem $1$ in \cite{BoltePAL2014} that $\{{\mathbf{z}}^{(t)}\}_t$ converges to a critical point of $L(\mathbf{z})$, denoted by $\hat {\mathbf{z}}$.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma::equivalence-to-capped-l1}}
\begin{proof}
Let $\hat \mathbf{v} = 2{\mathbf{D}^{\top}}({\mathbf{D}} {\hat {\mathbf{z}}} - \mathbf{x} ) + {\dot \mathbf{R}} (\hat {\mathbf{z}};b)$. For $j \in {\hat \mathbf{S}}$, since $\hat {\mathbf{z}}$ is a critical point of $L(\mathbf{z}) = {\|\mathbf{x} - \mathbf{D} \mathbf{z}\|_2^2 + {\lambda}\|{\mathbf{z}}\|_0}$, we have $\frac{\partial Q}{\partial {\mathbf{z}_j}} |_{\mathbf{z} = \hat {\mathbf{z}}} = 0$ because $\frac{\partial \|\mathbf{z}\|_0}{\partial {\mathbf{z}_j}} |_{\mathbf{z} = \hat {\mathbf{z}}} = 0$. Note that $\min_{j \in {\hat \mathbf{S}}} |\hat {\mathbf{z}_j}| > b$, so
$\frac{\partial \mathbf{R}}{\partial \mathbf{z}_j}|_{\mathbf{z} = \hat {\mathbf{z}}} = 0$, and it follows that $\hat \mathbf{v}_j = 0$.
For $j \notin {\hat \mathbf{S}}$, since $\frac{d {R}}{d \mathbf{z}_j} (\hat \mathbf{z}_j+;b) = \frac{\lambda}{b}$, $\frac{d {R}}{d \mathbf{z}_j} (\hat \mathbf{z}_j-;b) = -\frac{\lambda}{b}$,
and $\frac{\lambda}{b} > \max_{j \notin {\hat \mathbf{S}}} |\frac{\partial Q}{\partial {\mathbf{z}_j}}|_{\mathbf{z} = \hat {\mathbf{z}}}|$, we can choose the $j$-th element of ${\dot \mathbf{R}} (\hat {\mathbf{z}};b)$ such that $\hat \mathbf{v}_j = 0$. Therefore, $\|\hat \mathbf{v}\|_2 = 0$, and $\hat {\mathbf{z}}$ is a local solution to the problem (\ref{eq:capped-l1-problem}).
Now we prove that ${\mathbf{z}}^*$ is also a local solution to (\ref{eq:capped-l1-problem}). Let $\mathbf{v}^* = 2{\mathbf{D}^{\top}}({\mathbf{D}} {\mathbf{z}}^* - \mathbf{x} ) + {\dot \mathbf{R}} ({\mathbf{z}}^*;b)$, with $Q$ defined as before. For $j \in \mathbf{S}^*$, since ${\mathbf{z}}^*$ is the globally optimal solution to problem (\ref{eq:l0-sparse-appro}), we also have $\frac{\partial Q}{\partial {\mathbf{z}_j}} |_{\mathbf{z} = {\mathbf{z}}^*} = 0$. If this were not the case and $\frac{\partial Q}{\partial {\mathbf{z}_j}} |_{\mathbf{z} = {\mathbf{z}}^*} \neq 0$, then we could change ${\mathbf{z}_j}$ by a small amount in the direction of the negative gradient $-\frac{\partial Q}{\partial {\mathbf{z}_j}}$ at ${\mathbf{z}} = {\mathbf{z}}^*$ while keeping ${\mathbf{z}_j} \neq 0$, leading to a smaller value of the objective $L(\mathbf{z})$, which contradicts global optimality.
Note that $\min_{j \in {\mathbf{S}^*}} | {\mathbf{z}_j}^*| > b$, so
$\frac{\partial \mathbf{R}}{\partial \mathbf{z}_j}|_{\mathbf{z} = {\mathbf{z}}^*} = 0$, and it follows that $\mathbf{v}_j^* = 0$.
For $j \notin \mathbf{S}^*$, since $\frac{\lambda}{b} > \max_{j \notin {\mathbf{S}^*}} |\frac{\partial Q}{\partial {\mathbf{z}_j}}|_{\mathbf{z} = {\mathbf{z}}^*}|$, we can choose the $j$-th element of ${\dot \mathbf{R}} ({\mathbf{z}}^*;b)$ such that $\mathbf{v}_j^* = 0$. It follows that $\|\mathbf{v}^*\|_2 = 0$, and ${\mathbf{z}}^*$ is also a local solution to the problem (\ref{eq:capped-l1-problem}).
\end{proof}
\subsection{Proof of Theorem~\ref{theorem::suboptimal-optimal}}
\begin{proof}
According to Lemma~\ref{lemma::equivalence-to-capped-l1}, both ${\hat \mathbf{z}}$ and ${\mathbf{z}}^*$ are local solutions to problem (\ref{eq:capped-l1-problem}). In the following text, let $\bm{\beta}_{{\mathbf{I}}}$ denote the vector whose elements are those of $\bm{\beta}$ with indices in ${\mathbf{I}}$. Let $\Delta = {\mathbf{z}}^*-{\hat \mathbf{z}}$ and $\tilde \Delta = {\dot {\mathbf{P}}}({\mathbf{z}}^*) - {\dot {\mathbf{P}}}(\hat {\mathbf{z}})$. By Lemma~\ref{lemma::equivalence-to-capped-l1}, we have
\begin{small}\begin{align*}
&\| 2{\mathbf{D}^{\top}}{\mathbf{D}}\Delta + \tilde \Delta\|_2 = 0
\end{align*}\end{small}
It follows that
\begin{small}\begin{align*}
& 2\Delta^{\top}{\mathbf{D}^{\top}}{\mathbf{D}}\Delta + \Delta^{\top} \tilde \Delta \le \|\Delta\|_2 \| 2{\mathbf{D}^{\top}}{\mathbf{D}}\Delta + \tilde \Delta\|_2 = 0
\end{align*}\end{small}
Also, by the proof of Lemma~\ref{lemma::equivalence-to-capped-l1}, for $k \in {\hat \mathbf{S}} \cap \mathbf{S}^*$ we have $({\mathbf{D}^{\top}}{\mathbf{D}}\Delta)_k = 0$ and $\tilde \Delta_k = 0$. We now present a property of a general nonconvex function $P$ in terms of its degree of nonconvexity $\theta(t,\kappa)= \sup_s \{ -{\rm sgn}(s-t) ({\dot P}(s;b) - {\dot P}(t;b)) - \kappa |s-t|\}$ from Definition~\ref{def::degree-nonconvexity}. For any $s,t \in {\rm I}\kern-0.18em{\rm R}$, we have
\begin{small}\begin{align*}
& -{\rm sgn}(s-t) \big( {\dot P}(s;b) - {\dot P}(t;b) \big) - \kappa |s-t| \le \theta(t,\kappa)
\end{align*}\end{small}
by the definition of $\theta$. It follows that
\begin{small}\begin{align}\label{eq:suboptimal-optimal-seg1}
& \theta(t,\kappa) |s-t| \ge -(s-t)\big( {\dot P}(s;b) - {\dot P}(t;b) \big) - \kappa (s-t)^2 \nonumber \\
& -(s-t)\big( {\dot P}(s;b) - {\dot P}(t;b) \big) \le \theta(t,\kappa) |s-t| + \kappa (s-t)^2
\end{align}\end{small}
Applying (\ref{eq:suboptimal-optimal-seg1}) with $P = P_j$ for $j = 1,\ldots,n$, we have
\begin{small}\begin{align}\label{eq:suboptimal-optimal-seg2}
&2\Delta^{\top}{\mathbf{D}^{\top}}{\mathbf{D}}\Delta \le -\Delta^{\top} {\tilde \Delta} = -\Delta_{\mathbf{F}}^{\top} {\tilde \Delta_{\mathbf{F} }}
-\Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^*}^{\top} {\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }} \nonumber \\
& \le |{\mathbf{z}_{\mathbf{F}}}^* - {\hat \mathbf{z}_{\mathbf{F}}}|^{\top} \theta(\hat \mathbf{z}_{\mathbf{F}},\kappa) + \kappa \|{\mathbf{z}_{\mathbf{F}}}^* - {\hat \mathbf{z}_{\mathbf{F}}}\|_2^2 + \|\Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^*}\|_2
{\|{\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }}\|_2}\nonumber \\
& \le \|\theta(\hat \mathbf{z}_{\mathbf{F}},\kappa)\|_2 \|{\mathbf{z}_{\mathbf{F}}}^* - {\hat \mathbf{z}_{\mathbf{F}}}\|_2 + \kappa \|{\mathbf{z}_{\mathbf{F}}}^* - {\hat \mathbf{z}_{\mathbf{F}}}\|_2^2 + \|\Delta\|_2{\|{\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }}\|_2}\nonumber \\
&\le \|\theta({\hat \mathbf{z}_{\mathbf{F}}},\kappa)\|_2 \|\Delta\|_2 + \kappa \|\Delta\|_2^2 + \|\Delta\|_2{\|{\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }}\|_2}
\end{align}\end{small}
On the other hand, $\Delta^{\top}{\mathbf{D}^{\top}}{\mathbf{D}}\Delta \ge \kappa_0^2 \|\Delta\|_2^2$. It follows from (\ref{eq:suboptimal-optimal-seg2}) that
\begin{small}\begin{align*}
& 2\kappa_0^2 \|\Delta\|_2^2 \le \|\theta(\hat \mathbf{z}_{\mathbf{F}},\kappa)\|_2 \|\Delta\|_2 + \kappa \|\Delta\|_2^2 + \|\Delta\|_2 {\|{\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }}\|_2}
\end{align*}\end{small}
When $\|\Delta\|_2 \neq 0$, we have
\begin{small}\begin{align}\label{eq:suboptimal-optimal-seg3}
&2\kappa_0^2 \|\Delta\|_2 \le \|\theta(\hat \mathbf{z}_{\mathbf{F}},\kappa)\|_2 + \kappa \|\Delta\|_2 + {\|{\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }}\|_2}\nonumber \\
& \Rightarrow \|\Delta\|_2 \le \frac{\|\theta(\hat \mathbf{z}_{\mathbf{F}},\kappa)\|_2 + {\|{\tilde \Delta_{{\hat \mathbf{S}} \cap \mathbf{S}^* }}\|_2}}{2\kappa_0^2-\kappa}
\end{align}\end{small}
According to the definition of $\theta$, it can be verified that $\theta(t,\kappa) = \max\{0,\frac{\lambda}{b} - {\kappa} |t - b| \}$ for $|t| > b$, and $\theta(0,\kappa) = \max\{0, \frac{\lambda}{b} - {\kappa} b\}$. Therefore,
\begin{small}\begin{align}\label{eq:suboptimal-optimal-seg4}
&\|\theta(\hat \mathbf{z}_{\mathbf{F}},\kappa)\|_2
= \big(\sum\limits_{j \in \mathbf{F} \cap \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |\hat \mathbf{z}_j - b| \})^2 +
\sum\limits_{j \in \mathbf{F} \setminus \hat \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}}
\end{align}\end{small}
It follows that
\begin{small}\begin{align}\label{eq:suboptimal-optimal-seg5}
&\|\Delta\|_2 \le \frac{1}{2\kappa_0^2-\kappa}\bigg(\big(\sum\limits_{j \in \mathbf{F} \cap \hat \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |\hat \mathbf{z}_j - b| \})^2 +
\sum\limits_{j \in \mathbf{F} \setminus \hat \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} \bigg)
\end{align}\end{small}
This completes the proof of the theorem.
\end{proof}
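The closed form of $\theta$ used at the end of the proof above can be sanity-checked numerically. The Python sketch below is our own illustration (the values of $\lambda$, $b$, $\kappa$ and the grid-based approximation of the sup are assumptions made only for the demo); at the kinks of the capped-$\ell^1$ penalty, $\dot P$ takes the right derivative, which is an admissible choice under Definition~\ref{def::degree-nonconvexity}:

```python
import numpy as np

lam, b, kappa = 1.0, 0.5, 0.3
grid = np.arange(-3.0, 3.0, 1e-3)

def Pdot(t):
    # Derivative of the capped-l1 penalty (lam/b) * min(|t|, b); at the
    # kinks (t = 0 and |t| = b) we pick an admissible value between the
    # one-sided derivatives, here the right derivative.
    return lam / b * (1.0 if t >= 0 else -1.0) if abs(t) <= b else 0.0

def theta(t):
    # Brute-force the sup in the definition over a finite grid;
    # the point s = t contributes exactly 0, so fold that in.
    vals = [-np.sign(s - t) * (Pdot(s) - Pdot(t)) - kappa * abs(s - t)
            for s in grid]
    return max(0.0, max(vals))
```

For $t > b$ the grid-based sup agrees with $\max\{0,\frac{\lambda}{b} - \kappa|t-b|\}$ up to the grid resolution, and at $t = 0$ it agrees with $\max\{0,\frac{\lambda}{b} - \kappa b\}$.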
\subsection{Proof of Theorem~\ref{theorem::optimal-rma}}
\begin{proof}
By the proof of Lemma~\ref{lemma::PGD-convergence}, we have
\begin{small}\begin{align*}
&\| 2{{{{\tilde \mathbf{D}}}}^{\top}}{{{\tilde \mathbf{D}}}}{{\tilde \mathbf{z}}} - 2{{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}(\tilde {\mathbf{z}})\|_2 = 0
\end{align*}\end{small}
It follows that
\begin{small}\begin{align}\label{eq:optimal-rp-seg1}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\tilde \mathbf{z}}} -2{{{\mathbf{D}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}(\tilde {\mathbf{z}})\|_2 \nonumber \\
&= \| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\tilde \mathbf{z}}} - 2{{{{\tilde \mathbf{D}}}}^{\top}}{{{\tilde \mathbf{D}}}}{{\tilde \mathbf{z}}}
+2{{{{\tilde \mathbf{D}}}}^{\top}}{{{\tilde \mathbf{D}}}}{{\tilde \mathbf{z}}} -2{{{\mathbf{D}}}^{\top}}\mathbf{x}
+ 2{{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x} - 2{{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x}
+ {\dot \mathbf{R}}(\tilde {\mathbf{z}})\|_2 \nonumber \\
& \le \| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\tilde \mathbf{z}}} - 2{{{{\tilde \mathbf{D}}}}^{\top}}{{{\tilde \mathbf{D}}}}{{\tilde \mathbf{z}}}\|_2 + \|2{{{\mathbf{D}}}^{\top}}\mathbf{x} - 2{{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x}\|_2
+ \| 2{{{{\tilde \mathbf{D}}}}^{\top}}{{{\tilde \mathbf{D}}}}{{\tilde \mathbf{z}}} -2{{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}(\tilde {\mathbf{z}})\|_2 \nonumber \\
&=\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\tilde \mathbf{z}}} - 2{{{{\tilde \mathbf{D}}}}^{\top}}{{{\tilde \mathbf{D}}}}{{\tilde \mathbf{z}}}\|_2 + \|2{{{\mathbf{D}}}^{\top}}\mathbf{x} - 2{{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x}\|_2 \nonumber \\
&\le 2\|{{{\mathbf{D}}}^{\top}} ({{\mathbf{D}}}- {{{\tilde \mathbf{D}}}}) {{\tilde \mathbf{z}}}\|_2
+ 2\| ({{\mathbf{D}}}-{{{\tilde \mathbf{D}}}})^{\top} {{{\tilde \mathbf{D}}}} {{\tilde \mathbf{z}}}\|_2
+ 2\|{{{\mathbf{D}}}^{\top}}\mathbf{x} - {{{{\tilde \mathbf{D}}}}^{\top}}\mathbf{x}\|_2
\end{align}\end{small}
By ${\tilde L}({{\tilde \mathbf{z}}}) \le {\tilde L}({\mathbf{z}}^{(0)})$, we have $\|{{\tilde \mathbf{z}}}\|_2 \le M_0$. By Lemma~\ref{lemma::D-approx}, with probability at least $1-6e^{-p}$, $\|\mathbf{D} - {\tilde \mathbf{D}}\|_2 \le C_{k,k_0}$. It follows from (\ref{eq:optimal-rp-seg1}) that
\begin{small}\begin{align*}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\tilde \mathbf{z}}} -2{{{\mathbf{D}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}(\tilde {\mathbf{z}})\|_2 \nonumber \\
& \le 2\sigma_{\max}(\mathbf{D}) C_{k,k_0}M_0 + 2C_{k,k_0} (\sigma_{\max}(\mathbf{D}) + C_{k,k_0})M_0 + 2C_{k,k_0}\|\mathbf{x}\|_2\nonumber \\
&= 2C_{k,k_0}M_0 (2\sigma_{\max}(\mathbf{D}) + C_{k,k_0}) + 2C_{k,k_0}\|\mathbf{x}\|_2
\end{align*}\end{small}
Also, by the proof of Lemma~\ref{lemma::PGD-convergence},
\begin{small}\begin{align*}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}} {{\mathbf{z}}^*} -2{{{\mathbf{D}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}({{\mathbf{z}}^*})\|_2 = 0
\end{align*}\end{small}
Let $\Delta = {{\mathbf{z}}^*} - {{\tilde \mathbf{z}}}$ and $\tilde \Delta = {\dot \mathbf{R}}({\mathbf{z}}^*) - {\dot \mathbf{R}}(\tilde {\mathbf{z}})$; then
\begin{small}\begin{align*}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}} \Delta + \tilde \Delta\|_2 \le 2C_{k,k_0}M_0 (2\sigma_{\max}(\mathbf{D}) + C_{k,k_0}) + 2C_{k,k_0}\|\mathbf{x}\|_2
\end{align*}\end{small}
Now following the proof of Theorem~\ref{theorem::suboptimal-optimal}, we have
\begin{small}\begin{align}\label{eq:optimal-rp-seg2}
&\|{\mathbf{z}}^*-{\tilde \mathbf{z}}\|_2 = \|\Delta\|_2 \nonumber \\
&\le \frac{1}{2\tau_0^2-\tau}\bigg(\big(\sum\limits_{j \in {\mathbf{G}} \cap \tilde \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\tilde \mathbf{z}}_j - b| \})^2 + \nonumber \\
&\sum\limits_{j \in {\mathbf{G}} \setminus \tilde \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} + 2C_{k,k_0}M_0 (2\sigma_{\max}(\mathbf{D}) + C_{k,k_0}) + 2C_{k,k_0}\|\mathbf{x}\|_2 \bigg)
\end{align}\end{small}%
\end{proof}
\subsection{Proof of Theorem~\ref{theorem::optimal-rdr}}
We will need the following lemma before proving Theorem~\ref{theorem::optimal-rdr}.
\begin{MyLemma}\label{lemma::matrix-prod-projection}
Suppose $\mathbf{T}$ satisfies the $\ell^2$-norm preserving property in Definition~\ref{def:L2-norm-preseving}. If $m \ge 4c\log{\frac{4}{\delta}}$, then for any matrix $\mathbf{A} \in {\rm I}\kern-0.18em{\rm R}^{p \times d}$, $\mathbf{B} \in {\rm I}\kern-0.18em{\rm R}^{d \times q}$, with probability at least $1 - \delta$,
\begin{small}\begin{align}\label{eq:matrix-prod-projection}
&\|\mathbf{A} \mathbf{T}^{\top} \mathbf{T} \mathbf{B} - \mathbf{A} \mathbf{B}\| \le \|\mathbf{A}\|_F \|\mathbf{B}\|_F \sqrt{\frac{c}{m}\log{\frac{4}{\delta}}}
\end{align}\end{small}%
\end{MyLemma}
Lemma~\ref{lemma::matrix-prod-projection} can be proved using the definition of the $\ell^2$-norm preserving property in the same way that Lemma $6$ in \citep{Sarlos2006-large-matrix-random-projection} or \citep{Zhang2016-sparse-random-convex-concave} is proved.
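The behaviour in Lemma~\ref{lemma::matrix-prod-projection} can be seen empirically. In the Python sketch below we take $\mathbf{T}$ to be an i.i.d.\ Gaussian matrix scaled by $1/\sqrt{m}$, one standard construction satisfying such norm-preserving properties (the dimensions and random seed are arbitrary choices made only for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, q, m = 5, 2000, 4, 400
A = rng.standard_normal((p, d))
B = rng.standard_normal((d, q))
T = rng.standard_normal((m, d)) / np.sqrt(m)  # E[T^T T] = I_d

err = np.linalg.norm(A @ T.T @ T @ B - A @ B)   # Frobenius norm of the deviation
scale = np.linalg.norm(A) * np.linalg.norm(B)   # ||A||_F * ||B||_F
```

The ratio \texttt{err / scale} behaves like $O(1/\sqrt{m})$, consistent with the $\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}}$ factor in (\ref{eq:matrix-prod-projection}).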
\begin{proof}[Proof of Theorem~\ref{theorem::optimal-rdr}]
By the proof of Lemma~\ref{lemma::PGD-convergence}, we have
\begin{small}\begin{align*}
&\| 2{{{{\bar \mathbf{D}}}}^{\top}}{{{\bar \mathbf{D}}}}{{\bar \mathbf{z}}} - 2{{{{\bar \mathbf{D}}}}^{\top}}{\bar \mathbf{x}} + {\dot \mathbf{R}}(\bar {\mathbf{z}})\|_2 = 0
\end{align*}\end{small}
It follows that
\begin{small}\begin{align}\label{eq:optimal-rdr-seg1}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\bar \mathbf{z}}} -2{{{\mathbf{D}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}(\bar {\mathbf{z}})\|_2 \nonumber \\
&= \| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\bar \mathbf{z}}} - 2{{{{\bar \mathbf{D}}}}^{\top}}{{{\bar \mathbf{D}}}}{{\bar \mathbf{z}}}
+2{{{{\bar \mathbf{D}}}}^{\top}}{{{\bar \mathbf{D}}}}{{\bar \mathbf{z}}} -2{{{\mathbf{D}}}^{\top}}\mathbf{x}
+ 2{{{{\bar \mathbf{D}}}}^{\top}}{\bar\mathbf{x}} - 2{{{{\bar \mathbf{D}}}}^{\top}}{\bar\mathbf{x}}
+ {\dot \mathbf{R}}(\bar {\mathbf{z}})\|_2 \nonumber \\
& \le \| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\bar \mathbf{z}}} - 2{{{{\bar \mathbf{D}}}}^{\top}}{{{\bar \mathbf{D}}}}{{\bar \mathbf{z}}}\|_2 + \|2{{{\mathbf{D}}}^{\top}}\mathbf{x} - 2{{{{\bar \mathbf{D}}}}^{\top}}{\bar\mathbf{x}}\|_2
+ \| 2{{{{\bar \mathbf{D}}}}^{\top}}{{{\bar \mathbf{D}}}}{{\bar \mathbf{z}}} -2{{{{\bar \mathbf{D}}}}^{\top}}{\bar\mathbf{x}} + {\dot \mathbf{R}}(\bar {\mathbf{z}})\|_2 \nonumber \\
& = \| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\bar \mathbf{z}}} - 2{{{{\bar \mathbf{D}}}}^{\top}}{{{\bar \mathbf{D}}}}{{\bar \mathbf{z}}}\|_2 + \|2{{{\mathbf{D}}}^{\top}}\mathbf{x} - 2{{{{\bar \mathbf{D}}}}^{\top}}{\bar\mathbf{x}}\|_2 \nonumber \\
& = 2 \|{{{\mathbf{D}}}^{\top}}({\mathbf{I}} - \mathbf{T}^{\top}{\mathbf{T}}){{\mathbf{D}}} {\bar \mathbf{z}} \|_2 + 2 \|{{{\mathbf{D}}}^{\top}}({\mathbf{I}} - \mathbf{T}^{\top}{\mathbf{T}}) \mathbf{x}\|_2
\end{align}\end{small}%
By ${\bar L}({{\bar \mathbf{z}}}) \le {\bar L}({\mathbf{z}}^{(0)})$, we have $\|{{\bar \mathbf{z}}}\|_2 \le M_1$. According to Lemma~\ref{lemma::matrix-prod-projection}, with probability at least $1 - \delta$,
\begin{small}\begin{align}\label{eq:optimal-rdr-seg2}
&2 \|{{{\mathbf{D}}}^{\top}}({\mathbf{I}} - \mathbf{T}^{\top}{\mathbf{T}}){{\mathbf{D}}} {\bar \mathbf{z}} \|_2 + 2 \|{{{\mathbf{D}}}^{\top}}({\mathbf{I}} - \mathbf{T}^{\top}{\mathbf{T}}) \mathbf{x}\|_2 \nonumber \\
&\le 2\|\mathbf{D}\|_F\sigma_{\max}(\mathbf{D})M_1\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} + 2\|\mathbf{D}\|_FM_1\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} \nonumber \\
&\le 2\|\mathbf{D}\|_F{M_1}\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} (\sigma_{\max}(\mathbf{D})+1)
\end{align}\end{small}
Combining (\ref{eq:optimal-rdr-seg1}) and (\ref{eq:optimal-rdr-seg2}), we have
\begin{small}\begin{align*}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}}{{\bar \mathbf{z}}} -2{{{\mathbf{D}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}(\bar {\mathbf{z}})\|_2 \le 2\|\mathbf{D}\|_F{M_1}\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} (\sigma_{\max}(\mathbf{D})+1)
\end{align*}\end{small}
Also, by the proof of Lemma~\ref{lemma::PGD-convergence},
\begin{small}\begin{align*}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}} {{\mathbf{z}}^*} -2{{{\mathbf{D}}}^{\top}}\mathbf{x} + {\dot \mathbf{R}}({{\mathbf{z}}^*})\|_2 = 0
\end{align*}\end{small}
Let $\Delta = {{\mathbf{z}}^*} - {{\bar \mathbf{z}}}$ and $\bar \Delta = {\dot \mathbf{R}}({\mathbf{z}}^*) - {\dot \mathbf{R}}(\bar {\mathbf{z}})$; then
\begin{small}\begin{align*}
&\| 2{{{\mathbf{D}}}^{\top}}{{\mathbf{D}}} \Delta + \bar \Delta\|_2 \le 2\|\mathbf{D}\|_F{M_1}\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} (\sigma_{\max}(\mathbf{D})+1)
\end{align*}\end{small}
Now following the proof of Theorem~\ref{theorem::suboptimal-optimal}, with probability at least $1 - \delta$,
\begin{small}\begin{align}\label{eq:optimal-rdr-seg3}
&\|{\mathbf{z}}^*-{\bar \mathbf{z}}\|_2 = \|\Delta\|_2 \nonumber \\
&\le \frac{1}{2\eta_0^2-\eta}\bigg(\big(\sum\limits_{j \in {{\mathbf{H}}} \cap \bar \mathbf{S}} (\max\{0,\frac{\lambda}{b} - {\kappa} |{\bar \mathbf{z}}_j - b| \})^2 + \nonumber \\
&\sum\limits_{j \in {{\mathbf{H}}} \setminus \bar \mathbf{S}} (\max\{0, \frac{\lambda}{b} - {\kappa} b\})^2 \big)^{\frac{1}{2}} + 2\|\mathbf{D}\|_F{M_1}\sqrt{\frac{c}{m}\log{\frac{4}{\delta}}} (\sigma_{\max}(\mathbf{D})+1) \bigg)
\end{align}\end{small}
\end{proof}
\section{Introduction}
Leptogenesis~\cite{FY} provides an elegant framework to consistently
address the observed Baryon Asymmetry in the Universe
(BAU)~\cite{WMAP} in minimal extensions of the Standard
Model~(SM)~\cite{reviews}. According to the standard paradigm of
leptogenesis, there exist heavy Majorana neutrinos of masses close to
the Grand Unified Theory (GUT) scale $M_{\rm GUT} \sim 10^{16}$ that
decay out of equilibrium and create a net excess of lepton number
$(L)$, which gets reprocessed into the observed baryon number $(B)$,
through the $(B+L)$-violating sphaleron interactions~\cite{KRS}. The
attractive feature of such a scenario is that the GUT-scale heavy
Majorana neutrinos could also explain the observed smallness in mass
of the SM light neutrinos by means of the so-called seesaw
mechanism~\cite{seesaw}.
The original GUT-scale leptogenesis scenario, however, runs into
certain difficulties, when one attempts to explain the flatness of the
Universe and other cosmological data~\cite{WMAP} within supergravity
models of inflation. To avoid overproduction of gravitinos
$\widetilde{G}$ whose late decays may ruin the successful predictions
of Big Bang Nucleosynthesis (BBN), the reheat temperature $T_{\rm
reh}$ of the Universe should be lower than $10^9$--$10^6$~GeV, for
$m_{\widetilde{G}} = 8$--0.2~TeV~\cite{gravitino}. This implies that
the heavy Majorana neutrinos should accordingly have masses as low as
$T_{\rm reh} \stackrel{<}{{}_\sim} 10^9$~GeV, thereby rendering the
relation of these particles with GUT-scale physics less natural. On
the other hand, it proves very difficult to directly probe the
heavy-neutrino sector of such a model at high-energy colliders,
e.g.~at the LHC or ILC, or in any other foreseeable experiment.
A potentially interesting solution to the above problems may be
obtained within the framework of resonant leptogenesis
(RL)~\cite{APRD}. The key aspect of RL is that self-energy effects
dominate the leptonic asymmetries~\cite{LiuSegre}, when two heavy
Majorana neutrinos happen to have a small mass difference with respect
to their actual masses. If this mass difference becomes comparable to
the heavy neutrino widths, a resonant enhancement of the leptonic
asymmetries takes place that may reach values ${\cal
O}(1)$~\cite{APRD,PU}. An indispensable feature of RL models is that
flavour effects due to the light-to-heavy neutrino Yukawa
couplings~\cite{EMX} play a dramatic role and can modify the
predictions for the BAU by many orders of magnitude~\cite{APtau,PU2}.
Most importantly, these flavour effects enable the
modelling~\cite{PU2} of minimal RL scenarios with electroweak-scale
heavy Majorana neutrinos that could be tested at the
LHC~\cite{APZPC,DGP} and in other non-accelerator experiments, while
maintaining agreement with the low-energy neutrino data. Many
variants of RL have been proposed in the
literature~\cite{RLpapers,RLextra}, including soft
leptogenesis~\cite{soft} and radiative leptogenesis~\cite{rad}.
In spite of the many existing studies, leptogenesis models face in
general a serious restriction concerning the origin of the required CP
and $L$ violation. If CP or $L$ violation were due to the spontaneous
symmetry breaking (SSB) of the SM gauge group, a net $L$ asymmetry
could only be generated during the electroweak phase transition
(EWPT), provided the heavy Majorana neutrinos are not too heavy such
that they have not already decayed away while the Universe was
expanding.\footnote{An exception to this argument may result from a
phase transition that is strongly first order. However, such a
scenario is not feasible within the SM with singlet
neutrinos~\cite{HR} (see also our discussion below).}
In this paper we show how RL constitutes an interesting alternative to
provide a viable solution to the above problem as well. For
definiteness, we consider a minimal extension of the SM with
right-handed neutrinos and a complex singlet field $\Sigma$. The
model possesses a global lepton symmetry U(1)$_l$ which gets
spontaneously broken through the vacuum expectation value (VEV) of
$\Sigma$, giving rise to the usual $\Delta L = 2$ Majorana masses.
Because of the SSB of the U(1)$_l$, the model predicts a true massless
Goldstone boson, the Majoron. Therefore, this scenario is called the
singlet Majoron model in the literature~\cite{CMP,APMaj}. Depending
on the particular structure of the Higgs potential, the VEV of
$\Sigma$ may be related to the VEV of the SM Higgs doublet $\Phi$.
Such a relation, for example, arises if the bilinear operator
$\Sigma^*\Sigma$ is small or absent from the Higgs potential. In this
case, the breaking of $L$ occurs during the EWPT. For the model under
study and given the LEP limit~\cite{LEPHiggs} on the SM Higgs boson
$M_H \stackrel{>}{{}_\sim} 115$~GeV, the EWPT is expected to be second
order and hence continuous from the symmetric phase to the broken
one~\cite{EWSM}.
We should now notice that all SM fermions and right-handed neutrinos
have no chiral masses above the EWPT and therefore the generation of a
net leptonic asymmetry is not possible. Consequently, in this model
successful baryogenesis can result from RL at the EWPT. Although the
singlet Majoron model that we will be studying here violates CP
explicitly, the results of our analysis can straightforwardly apply to
models with an extended Higgs sector that realise spontaneous CP
violation at the electroweak scale.
The paper is organised as follows: Section~\ref{sec:model} presents
the basic features of the singlet Majoron model with right-handed
neutrinos, including the interaction Lagrangians that are relevant to
the calculation of the leptonic asymmetries in Section~\ref{sec:asym}.
Moreover, in Section~\ref{sec:asym} we consider the novel
contributions to the leptonic asymmetries, coming from the transverse
polarisations of the $W^\pm$ and $Z$ bosons. In the same context, the
resummation of the gauge-dependent off-shell heavy-neutrino
self-energies~\cite{PP,APNPB} (which remains an essential operation
in~RL) is performed within the so-called Pinch Technique (PT)
framework~\cite{PT}. In Section~\ref{sec:EWPT} we analyse the
Boltzmann dynamics of the sphaleron effects on RL and present
predictions for the BAU. Section~\ref{sec:pheno} is devoted to the
phenomenological and astrophysical implications of the singlet Majoron
model. Finally, Section~\ref{sec:concl} contains our conclusions.
\setcounter{equation}{0}
\section{The Singlet Majoron Model}\label{sec:model}
Here we describe the basic features of the singlet Majoron
model~\cite{CMP,APMaj} augmented with a number ${\rm n}_R$ of right-handed
neutrinos $\nu_{\alpha R}$ (with $\alpha = 1,2, \dots, \mbox{n}_R$)
that will be relevant to our study. As mentioned in the introduction,
the singlet Majoron model contains one complex singlet field $\Sigma$
in addition to the SM Higgs doublet $\Phi$. Although $\Sigma$ is not
charged under the SM gauge group SU(2)$_L\,\otimes$\,U(1)$_Y$, it
still carries a non-zero quantum number under the global lepton
symmetry U(1)$_l$. More explicitly, the scalar potential of the model
is given by
\begin{eqnarray}
\label{LV}
-\, {\cal L}_V \!&=&\! m^2_\Phi\, \Phi^\dagger \Phi\ +\ m^2_\Sigma\,
\Sigma^* \Sigma\ +\ \frac{\lambda_\Phi}{2}\, (\Phi^\dagger \Phi)^2 \ +\
\frac{\lambda_\Sigma}{2}\, (\Sigma^* \Sigma)^2 \ -\
\delta\, \Phi^\dagger \Phi\, \Sigma^* \Sigma\; .\qquad
\end{eqnarray}
In order to minimise the potential~(\ref{LV}), we first linearly
decompose the scalar fields as follows:
\begin{equation}
\label{PhiSigma}
\Phi\ =\ \left ( \begin{array}{c} G^+ \\ \displaystyle{\frac{v}{\sqrt{2}}\ +\
\frac{\phi\: +\: iG}{\sqrt{2}}} \end{array} \right)\; , \qquad
\Sigma\ =\ \frac{w}{\sqrt{2}}\ +\ \frac{\sigma\: +\: iJ}{\sqrt{2}}\ .
\end{equation}
Then, the extremal or tadpole conditions may easily be calculated by
\begin{eqnarray}
\label{TadH}
T_\phi \!&\equiv&\! -\; \Bigg< \frac{\partial {\cal L}_V}{\partial
\phi}\Bigg>\ =\ v\, \Bigg(\, m^2_\Phi\: +\:
\frac{\lambda_\Phi}{2}\, v^2\: -\: \frac{\delta}{2}\, w^2\, \Bigg)\
=\ 0\; ,\\
\label{TadS}
T_\sigma \!&\equiv&\! -\; \Bigg< \frac{\partial {\cal L}_V}{\partial
\sigma}\Bigg>\ =\ w\, \Bigg(\, m^2_\Sigma\: +\:
\frac{\lambda_\Sigma}{2}\, w^2\: -\: \frac{\delta}{2}\, v^2\,
\Bigg)\ =\ 0\; .
\end{eqnarray}
If $m^2_{\Phi}$ or $m^2_\Sigma$ is negative, the tadpole
conditions~(\ref{TadH}) and~(\ref{TadS}) imply that the ground state
of the scalar potential breaks spontaneously the local
SU(2)$_L\,\otimes$\,U(1)$_Y$ and the global U(1)$_l$ symmetries,
through the non-zero VEVs $v$ and $w$, respectively.
Expanding the fields $\Phi$ and $\Sigma$ about their VEVs, we obtain
three would-be Goldstone bosons $G^\pm$ and $G^0$, which become the
longitudinal polarisations of $W^\pm$ and $Z$ bosons, and one true
massless Goldstone boson $J$ associated with the SSB of U(1)$_l$.
This massless CP-odd field $J$ is called the Majoron in the
literature~\cite{CMP,APMaj}. In addition, there are two CP-even Higgs
fields $H$ and $S$, whose masses are determined by the diagonalisation
of the mass matrix
\begin{equation}
\label{M2}
{\cal M}^2\ =\ \left( \begin{array}{cc}
\lambda_\Phi\, v^2 & -\delta\, v w \\
-\delta\, v w & \lambda_\Sigma\, w^2
\end{array}\right)\; ,
\end{equation}
where ${\cal M}^2$ is defined in the weak basis $(\phi \,,\ \sigma )$.
The Higgs mass eigenstates $H$ and $S$ are related to the states
$\phi$ and $\sigma$, through the orthogonal transformation:
\begin{equation}
\label{Mix}
\left( \begin{array}{c} \phi \\ \sigma \end{array} \right)\ =\
\left( \begin{array}{cc} c_\theta & - s_\theta \\
s_\theta & c_\theta \end{array} \right)\ \left(
\begin{array}{c} H \\ S \end{array} \right) \ ,
\end{equation}
with $t_\beta = s_\beta/c_\beta = v/w$ and
\begin{equation}
\label{t2theta}
t_{2\theta} \ =\ \frac{2\,\delta\, t_\beta}{\lambda_\Sigma\:
-\: \lambda_\Phi t^2_\beta}\ .
\end{equation}
In the above we used the short-hand notation: $s_x \equiv \sin x$,
$c_x \equiv \cos x$ and $t_x \equiv \tan x$. Moreover, the squared
mass eigenvalues of the CP-even $H$ and $S$ bosons may easily be
calculated from ${\cal M}^2$ in~(\ref{M2}) and are given by
\begin{equation}
\label{MHS2}
M^2_{H,S}\ =\ \frac{v^2}{2}\, \Bigg[\, \lambda_\Phi\: +\:
\lambda_\Sigma\,t^{-2}_\beta\ \pm\ \sqrt{ \Big(\lambda_\Phi\, -\,
\lambda_\Sigma\,t^{-2}_\beta\Big)^2\: +\: 4\,\delta^2\, t^{-2}_\beta }\ \Bigg]\; .
\end{equation}
The requirement that $M^2_{H,S}$ be positive gives rise to the
inequality conditions,
\begin{equation}
\label{stable}
\lambda_{\Phi,\Sigma}\ >\ 0\; ,\qquad
\lambda_\Phi\, \lambda_\Sigma\ >\ \delta^2\; ,
\end{equation}
for the quartic couplings of the potential. In this context, we note
that if $|m^2_\Sigma| \ll (\delta / \lambda_\Phi )\, |m^2_\Phi|$ such
that $m^2_\Sigma$ can be completely neglected in the scalar potential,
the VEV $w$ of $\Sigma$ is then entirely determined by the VEV $v$ of
$\Phi$ and the quartic couplings $\lambda_\Sigma$ and $\delta$, {\it
viz.}
\begin{equation}
\label{wtov}
w\ =\ \sqrt{\frac{\delta}{\lambda_\Sigma}}\ v\; .
\end{equation}
This is an interesting scenario, since the ratio $t_\beta = v/w =
\sqrt{\lambda_\Sigma/\delta}$ does not strongly depend on the
temperature $T$, as opposed to what happens to the VEVs $v$ and $w$
individually. In fact, as long as $\lambda_{\Phi,\Sigma}, \delta \ll
1$, the thermally-corrected effective potential can be expanded, to a
very good approximation, in powers of $T^2/m^2_\Phi$. In such a
high-$T$ expansion, the quartic couplings of ${\cal L}_V$ turn out to
be $T$-independent~\cite{JKapusta} and hence $t_\beta$ does not depend
on $T$.
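As a quick numerical cross-check of the above, the closed-form eigenvalues of the $2\times 2$ matrix ${\cal M}^2$ and the mixing angle $\theta$ can be compared against a direct diagonalisation. The sketch below uses purely illustrative benchmark couplings (not a fit), with the VEV relation $w = \sqrt{\delta/\lambda_\Sigma}\,v$ of the $m^2_\Sigma \to 0$ limit:

```python
import numpy as np

# Illustrative couplings satisfying the stability conditions:
# lambda_Phi, lambda_Sigma > 0 and lambda_Phi*lambda_Sigma > delta^2.
lam_Phi, lam_Sigma, delta = 0.5, 0.3, 0.1
v = 246.0                                # GeV, Higgs-doublet VEV
w = np.sqrt(delta / lam_Sigma) * v       # m_Sigma^2 -> 0 limit
t_beta = v / w                           # = sqrt(lam_Sigma/delta)

# CP-even mass matrix in the (phi, sigma) weak basis
M2 = np.array([[lam_Phi * v**2, -delta * v * w],
               [-delta * v * w, lam_Sigma * w**2]])
m2_S, m2_H = np.linalg.eigvalsh(M2)      # ascending eigenvalues

# Standard closed-form eigenvalues of the 2x2 matrix, for comparison
root = np.sqrt((lam_Phi - lam_Sigma / t_beta**2)**2
               + 4 * delta**2 / t_beta**2)
m2_H_cf = 0.5 * v**2 * (lam_Phi + lam_Sigma / t_beta**2 + root)
m2_S_cf = 0.5 * v**2 * (lam_Phi + lam_Sigma / t_beta**2 - root)

# Mixing angle from tan(2 theta) = 2 delta t_beta/(lam_Sigma - lam_Phi t_beta^2);
# the rotation R should then diagonalise M2.
theta = 0.5 * np.arctan2(2 * delta * t_beta, lam_Sigma - lam_Phi * t_beta**2)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
off_diag = (R.T @ M2 @ R)[0, 1]

print(np.sqrt(m2_H), np.sqrt(m2_H_cf))   # M_H, computed two ways
print(np.sqrt(m2_S), np.sqrt(m2_S_cf))   # M_S, computed two ways
print(off_diag)                          # ~ 0
```

For these inputs both CP-even squared masses come out positive, as they must once the stability inequalities hold.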
We now turn our attention to the neutrino Yukawa sector of the model,
which is non-standard. After SSB, it is given in the unitary gauge by
\begin{equation}
\label{LYuk}
-\, {\cal L}_Y \ =\ \frac{\phi}{v}\ \bar{\nu}_{iL}\, (m_D)_{i
\alpha}\, \nu_{\alpha R}\
+\ \frac{\sigma\: +\: iJ}{2\,w}\
\bar{\nu}^C_{\alpha R}\, (m_M)_{\alpha\beta}\,
\nu_{\beta R}\quad +\quad \mbox{H.c.},
\end{equation}
where summation over repeated indices is understood. Hereafter we use
Latin indices to label the left-handed neutrinos, e.g.~$\nu_{iL}$, and
Greek indices for the right-handed ones, e.g.~$\nu_{\alpha R}$.
Observe that the spontaneous breaking of U(1)$_l$ generates
lepton-number-violating $\Delta L = 2$ Majorana masses
$(m_M)_{\alpha\beta}$ in addition to the lepton-number-preserving
$\Delta L = 0$ Dirac masses $(m_D)_{i\alpha}$.
The model under discussion predicts a number $(3 + {\rm n}_R)$ of
Majorana neutrinos which we collectively denote by $n_I$, with $I =
i\, ,\ \alpha$. Their physical masses are obtained from the
diagonalisation of the neutrino mass matrix
\begin{equation}
\label{Mnu}
{\cal M}^\nu \ =\ \left( \begin{array}{cc} 0 & m_D \\
m_D^T & m_M \end{array} \right)\; ,
\end{equation}
by means of the unitary transformation $U^{\nu\, T}\, {\cal M}^\nu\,
U^\nu = {\cal \widehat{M}}^\nu$, where ${\cal \widehat{M}}^\nu$ is a
non-negative diagonal matrix. The neutrino mass eigenstates $(n_I)_R$
and $(n_I)_L$ are related to the states $\nu_{iL}$, $(\nu_{iL})^C$,
$\nu_{\alpha R}$ and $(\nu_{\alpha R})^C$ through
\begin{equation}
\label{Unu}
\left( \begin{array}{c} \nu_L^C \\ \nu_R \end{array} \right)_I \ =\
U^\nu_{IJ}\ (n_J)_R \ ,\qquad
\left( \begin{array}{c} \nu_L \\ \nu^C_R \end{array} \right)_I \ =\
U^{\nu\ast}_{IJ}\ (n_J)_L\ .
\end{equation}
Assuming the seesaw hierarchy $(m_D)_{i\alpha}/(m_M)_{\alpha\beta} \ll
1$, the model predicts 3 light states that are identified with the
observed light neutrinos ($n_i \equiv \nu_i$), and a number ${\rm n}_R$ of
heavy Majorana neutrinos ($n_\alpha \equiv N_\alpha$) with masses of
order $(m_M)_{\alpha\beta} = \rho_{\alpha\beta}\, w$, where
$\rho_{\alpha\beta} = \rho_{\beta\alpha}$ are the Yukawa couplings of
$\Sigma$ to right-handed neutrinos.
To obtain an accurate light and heavy neutrino mass spectrum within
the context of models of electroweak RL, it is important to go beyond
the leading seesaw approximation. To this end, we need first to
perform a block diagonalisation and cast ${\cal M}^\nu$ into the form:
\begin{equation}
\label{block}
{\cal M}^\nu\ \to \ \left( \begin{array}{cc} {\bf m}^\nu & 0 \\
0 & {\bf m}^N \end{array} \right)\; .
\end{equation}
This can be achieved by introducing the unitary matrix
$V$~\cite{KPSmatrix}:
\begin{equation}
\label{Vxi}
V\ =\ \left(\! \begin{array}{cc}
({\bf 1}_3\, +\: \xi^*\xi^T)^{-1/2} &
\xi^* ({\bf 1}_{{\rm n}_R} +\: \xi^T\xi^*)^{-1/2}\\
-\xi^T ({\bf 1}_3\, +\: \xi^*\xi^T)^{-1/2} &
({\bf 1}_{{\rm n}_R} +\: \xi^T \xi^*)^{-1/2}
\end{array} \!\right)\; ,
\end{equation}
where $\xi$ is an arbitrary $3\times {\rm n}_R$ matrix. The
expressions $({\bf 1}_3 + \xi^*\xi^T)^{-1/2}$ and $({\bf 1}_{{\rm
n}_R} + \xi^T \xi^*)^{-1/2}$ are defined in terms of a Taylor series
expansion about the ${\cal N}\times {\cal N}$ identity matrix ${\bf
1}_{\cal N}$. These infinite series converge provided the norm $||\xi
||$ is much smaller than 1, where $||\xi || \equiv \sqrt{{\rm Tr}(\xi
\xi^\dagger )}$. This condition is naturally fulfilled within the
seesaw framework~\cite{Minkowski}. Block diagonalisation of the
matrix ${\cal M}^\nu$ given in~(\ref{Mnu}) implies that the $\{ 12\}$
block element of $V^T {\cal M}^\nu V$ vanishes, or equivalently that
\begin{equation}
\label{xicondition}
m_D\: -\: \xi\, m_M\: -\: \xi\,m^T_D\,\xi^*\ =\ 0\; .
\end{equation}
Equation~(\ref{xicondition}) determines $\xi$ in terms of $m_D$ and
$m_M$. It can be solved iteratively, with the first iteration given
by
\begin{equation}
\label{xi}
\xi\ =\ m_D\, m_M^{-1}\: -\:
m_D\,m_M^{-1}\,m^T_D\,m^*_D\,m^{*\,-1}_M\, m^{-1}_M\; .
\end{equation}
Note that the second term on the RHS of~(\ref{xi}) is suppressed by
the ratio of the light-to-heavy neutrino masses and can thus be safely
neglected in numerical estimates. Upon block diagonalisation, the
block mass ``eigen-matrices'' are
\begin{eqnarray}
\label{blockmN}
{\bf m}^N \!&=&\! \Big({\bf 1}_{{\rm n}_R} +\: \xi^\dagger\xi\Big)^{-1/2}\,
\Big( m_M\: +\: m^T_D\,\xi^*\: +\: \xi^\dagger m_D \Big)\,
\Big({\bf 1}_{{\rm n}_R} +\: \xi^T\xi^*\Big)^{-1/2}\; ,\\
\label{blockmnu}
{\bf m}^\nu \!&=&\! -\, \Big({\bf 1}_3\, +\: \xi \xi^\dagger\Big)^{-1/2}
\Big( m_D\xi^T\: +\: \xi m_D^T\: -\: \xi m_M \xi^T \Big)\,
\Big({\bf 1}_3\, +\: \xi^* \xi^T\Big)^{-1/2}\nonumber\\
\!&=&\! -\: \xi\, {\bf m}^N\,\xi^T\; ,
\end{eqnarray}
where we used~(\ref{xicondition}) to arrive at the last equality
of~(\ref{blockmnu}). Keeping the leading order terms in an expansion
of ${\bf m}^N$ in powers of $m_D m^{-1}_M$, we find that
\begin{equation}
\label{bfMass}
{\bf m}^N\ =\ m_M\: +\: \frac{1}{2}\, \Big(\,
m_D^\dagger\, m^{-1}_M\, m_D\: +\:
m_D^T\, m^{-1}_M\, m^*_D\, \Big)\; ,\qquad
{\bf m}^\nu\ =\ -\, m_D\, m^{-1}_M\, {\bf m}^N\, m^{-1}_M\, m_D^T\; .
\end{equation}
These last expressions are used to calculate the light and heavy
neutrino mass spectra of the RL scenarios discussed in
Section~\ref{sec:EWPT}.
In order to calculate the leptonic asymmetries in the next section, we
need to know the Lagrangians that govern the interactions of the
Majorana neutrinos $n_I$ and charged leptons $l = e,\,\mu,\,\tau$
with: ({\it i}) the $W^\pm$ and $Z$ bosons; ({\it ii}) their
respective would-be Goldstone bosons $G^\pm$ and~$G$; ({\it iii}) the
CP-odd Majoron particle $J$; ({\it iv}) the CP-even Higgs fields $H$
and $S$. In detail, these interaction Lagrangians are given
by~\cite{APMaj}
\begin{eqnarray}
\label{LagW}
{\cal L}_{W^\mp} \!& = &\! -\, \frac{g_w}{\sqrt{2}}\; W^{-\mu}\
\bar{l}\, B_{lI}\, \gamma_\mu\,P_L\ n_I \quad + \quad \mbox{H.c.},\\[3mm]
\label{LagZ}
{\cal L}_Z \!& = &\! -\, \frac{g_w}{4\cos\theta_w}\; Z^\mu\
\bar{n}_I\, \gamma_\mu\, \Big(\, C_{IJ}\, P_L\: -\: C^*_{IJ}\, P_R\,
\Big)\, n_J\; ,\\[3mm]
\label{LagGplus}
{\cal L}_{G^\pm} \!& = &\! -\, \frac{g_w}{\sqrt{2}\, M_W}\; G^-\
\bar{l}\, B_{lI}\, \Big(\, m_l\, P_L\: -\: m_I\, P_R\,
\Big)\, n_I\quad +\quad \mbox{H.c.},\\[3mm]
\label{LagG}
{\cal L}_G \!& = &\! -\, \frac{i\,g_w}{4 M_W}\; G\
\bar{n}_I\, \Bigg[\, C_{IJ}\, \Big( m_I\, P_L - m_J\, P_R \Big)\: +\:
C^*_{IJ}\, \Big( m_J\, P_L - m_I\, P_R \Big)\, \Bigg]\, n_J\; ,\\[3mm]
\label{LagJ}
{\cal L}_J \!& = &\! -\, \frac{i\,g_w}{4 M_W}\; t_\beta\, J\
\bar{n}_I\, \Bigg[\, C_{IJ}\, \Big( m_I\, P_L - m_J\, P_R\Big)\: +\:
C^*_{IJ}\, \Big( m_J\, P_L - m_I\, P_R \Big)\nonumber\\
\!&&\! +\: \delta_{IJ}\, m_I \gamma_5\, \Bigg]\; n_J\;,\\[3mm]
\label{LagH}
{\cal L}_H \!& = &\! -\, \frac{g_w}{4 M_W}\; (c_\theta-s_\theta t_\beta)\;
H\
\bar{n}_I\, \Bigg[\, C_{IJ}\, \Big( m_I\, P_L + m_J\, P_R\Big)\: +\:
C^*_{IJ}\, \Big( m_J\, P_L + m_I\, P_R \Big)\nonumber\\
\!&&\! -\:
\frac{i\,t_\beta }{t^{-1}_\theta - t_\beta}\ \delta_{IJ}\,
m_I\,\gamma_5\, \Bigg]\; n_J\;,\\[3mm]
\label{LagS}
{\cal L}_S \!& = &\! -\, \frac{g_w}{4 M_W}\; (s_\theta+c_\theta t_\beta)\;
S\
\bar{n}_I\, \Bigg[\, C_{IJ}\, \Big( m_I\, P_L + m_J\, P_R\Big)\: +\:
C^*_{IJ}\, \Big( m_J\, P_L + m_I\, P_R \Big)\nonumber\\
\!&&\! +\:
\frac{i\,t_\beta }{t_\theta + t_\beta}\ \delta_{IJ}\,
m_I\,\gamma_5\, \Bigg]\; n_J\;,
\end{eqnarray}
where $P_{L,R} = \frac{1}{2}\, (1 \mp \gamma_5 )$, $g_w$ is the
SU(2)$_L$ gauge coupling of the SM and
\begin{equation}
\label{BC}
B_{l I}\ =\ V^l_{lk}\, U^{\nu\ast}_{kI}\; ,\qquad
C_{IJ}\ =\ U^\nu_{kI}\,U^{\nu\ast}_{kJ}\ .
\end{equation}
In~(\ref{BC}) $V^l$ is a 3-by-3 unitary matrix that occurs in the
diagonalisation of the charged lepton mass matrix~${\cal
M}^l$. Without loss of generality, we assume throughout the present
study that ${\cal M}^l$ is positive and diagonal, which implies that
$V^l = {\bf 1}_3$. Finally, we comment on the limit of $t_\beta \to
0$. It is easy to see from~(\ref{t2theta}) that this limit leads to
$t_\theta \to 0$ and the fields $S$ and $J$ decouple from matter; only
the Higgs field $H$ couples to Majorana neutrinos and to the rest of
the SM fermions~(cf.~\cite{APZPC}).
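The couplings $B_{lI}$ and $C_{IJ}$ obey the unitarity relations $B\,B^\dagger = {\bf 1}_3$, $C = B^\dagger B$ and $C = C^\dagger$, which follow from the unitarity of $U^\nu$. These can be verified with a random unitary stand-in for $U^\nu$ (a toy matrix, not derived from an actual mass matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                  # 3 light + n_R = 2 heavy states

# Random unitary stand-in for U^nu via QR of a complex Gaussian matrix
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Unu, _ = np.linalg.qr(Z)

Ub = Unu[:3, :]                        # rows k = 1,2,3: left-handed flavours
B = Ub.conj()                          # B_{lI} = U^{nu*}_{lI} for V^l = 1_3
C = Ub.T @ Ub.conj()                   # C_{IJ} = U^{nu}_{kI} U^{nu*}_{kJ}

print(np.allclose(B @ B.conj().T, np.eye(3)))   # B B^dagger = 1_3
print(np.allclose(B.conj().T @ B, C))           # C = B^dagger B
print(np.allclose(C, C.conj().T))               # C is Hermitian
```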
\setcounter{equation}{0}
\section{Leptonic Asymmetries}\label{sec:asym}
In this section we calculate the leptonic asymmetries produced by the
decays of the heavy Majorana neutrinos during a second-order EWPT.
The novel aspect of such a calculation is that, in stark contrast to
the conventional leptogenesis scenario, the $W^\pm$ and $Z$ bosons
also contribute to the decays and leptonic asymmetries of the heavy
Majorana neutrinos. This fact raises new issues related to the gauge
invariance of off-shell Green functions which are here addressed
within the so-called Pinch Technique (PT) framework~\cite{PT}.
Since sphalerons act on the left-handed SM fermions, converting an
excess in leptons into one in baryons, we only need to consider the
decays of the heavy Majorana neutrinos $N_\alpha$ into the left-handed
charged leptons $l^-_L$ and light neutrinos $\nu_{lL}$. In detail, we
have to calculate the partial decay width of the heavy Majorana
neutrino $N_\alpha$ into a particular lepton flavour $l$,
\begin{equation}
\label{GammaN}
\Gamma^l_{N_\alpha}\ =\ \Gamma (N_\alpha \to l^-_L\ W^+,\, G^+ )\ +\
\Gamma (N_\alpha \to \nu_{lL}\ Z,\, G,\, J,\, H,\, S )\ .
\end{equation}
To compute $\Gamma^l_{N_\alpha}$, it proves more convenient to first
calculate the absorptive part $\Sigma^{\rm abs}_{\alpha\beta}
(\not\!p)$ of the heavy Majorana-neutrino self-energy transition
$N_\beta \to N_\alpha$ in the Feynman--'t Hooft gauge $\xi = 1$, where
$p^\mu$ is the 4-momentum carried by $N_{\alpha,\beta}$. The
Feynman--'t Hooft gauge is not merely a convenient choice of gauge: it
coincides with the result obtained in the gauge-independent framework of the
PT~\cite{PT}, within which issues of analyticity, unitarity and CPT
invariance can self-consistently be addressed~\cite{PP,APNPB}.
\begin{figure}[t]
\begin{center}
\begin{picture}(350,100)(0,0)
\SetWidth{0.8}
\ArrowLine(0,50)(30,50)\ArrowLine(30,50)(90,50)\ArrowLine(90,50)(120,50)
\PhotonArc(60,50)(30,0,180){3}{6.5}
\Text(5,45)[t]{$N_\beta$}\Text(60,45)[t]{$l^-_L\,,\ \nu_{lL}$}
\Text(120,45)[t]{$N_\alpha$}
\Text(60,87)[b]{$W^+,\ Z$}
\Text(60,10)[]{\bf (a)}
\ArrowLine(200,50)(230,50)\ArrowLine(230,50)(290,50)
\ArrowLine(290,50)(320,50)\DashArrowArcn(260,50)(30,180,0){4}
\Text(205,45)[t]{$N_\beta$}\Text(260,45)[t]{$l^-_L\,,\ \nu_{lL}$}
\Text(320,45)[t]{$N_\alpha$}
\Text(260,87)[b]{$G^+,\ G,\ J,\ H,\ S$}
\Text(260,10)[]{\bf (b)}
\end{picture}
\end{center}
\caption{\it Feynman graphs that determine the 1-loop absorptive part
$A_{\alpha\beta} (s)$ of the heavy Majorana-neutrino
self-energy $\Sigma_{\alpha\beta} (\not\! p)$.}\label{fig:abs}
\end{figure}
Neglecting the small charged-lepton and light-neutrino masses,
$\Sigma^{\rm abs}_{\alpha\beta} (\not\!p)$ acquires the simple
spinorial structure:
\begin{equation}
\label{Selfabs}
\Sigma^{\rm abs}_{\alpha\beta} (\not\! p)\ =\ A_{\alpha\beta} (s)
\not\! p\, P_L\ +\ A^*_{\alpha\beta} (s)
\not\! p\, P_R\; ,
\end{equation}
where $s = p^2$ is the squared Lorentz-invariant mass associated with
the self-energy transition $N_\beta \to N_\alpha$. Considering the
Feynman graphs shown in Fig.~\ref{fig:abs} and the interaction
Lagrangians~(\ref{LagW})--(\ref{LagS}), the absorptive transition
amplitudes $A_{\alpha\beta} (s)$ are calculated to be
\begin{eqnarray}
\label{Abs}
A_{\alpha\beta} (s) \!& = &\! \frac{\alpha_w}{32}\,
\sum\limits_{l = e,\mu ,\tau}
\Bigg\{ B^*_{l\alpha}B_{l\beta}\,\Bigg[\,
4\,\Bigg( 1 - \frac{M^2_W}{s} \Bigg)^2 \theta (s - M^2_W )\ +\
\frac{2\,M^2_Z}{M^2_W}\; \Bigg( 1 - \frac{M^2_Z}{s} \Bigg)^2
\theta (s - M^2_Z )\,\Bigg]\nonumber\\
\!&&\! \hspace{-1.2cm}
+\;
\frac{m_{N_\alpha}\, m_{N_\beta}}{M^2_W}\,
B_{l\alpha}B^*_{l\beta}\,
\Bigg[\, 2\, \Bigg( 1 -
\frac{M^2_W}{s} \Bigg)^2 \theta (s - M^2_W )\ +\
\Bigg( 1 - \frac{M^2_Z}{s} \Bigg)^2 \theta (s - M^2_Z )\ +\ t^2_\beta\,
\theta (s)\nonumber\\
\!&&\! \hspace{-1.2cm}
+\, (c_\theta - s_\theta t_\beta )^2 \Bigg( 1 - \frac{M^2_H}{s}
\Bigg)^2 \theta (s - M^2_H )\ +\
(s_\theta + c_\theta t_\beta )^2 \Bigg( 1 - \frac{M^2_S}{s}
\Bigg)^2 \theta (s - M^2_S )\; \Bigg]\, \Bigg\}\; ,
\end{eqnarray}
where $\alpha_w = g^2_w/(4\pi)$ is the SU(2)$_L$ fine-structure
constant and $\theta (x)$ is the usual step function: $\theta (x) = 1$
for $x >0$, whilst $\theta (x) = 0$ if $x \leq 0$. In the calculation
of $A_{\alpha\beta} (s)$, we used the fact that $B_{l \alpha} =
C_{\nu_l \alpha} + {\cal O}(C^2_{\nu_l \alpha})$, which is an
excellent approximation in the physical charged-lepton mass basis.
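For orientation, the absorptive amplitude is straightforward to evaluate numerically. The sketch below implements the two kinematic brackets with their thresholds; the helper name `A_absorptive`, the couplings, the masses and the mixing angle are all hypothetical benchmark inputs:

```python
import numpy as np

alpha_w = 0.034                        # g_w^2/(4 pi), illustrative value

def theta(x):
    """Step function: 1 for x > 0, else 0."""
    return 1.0 if x > 0 else 0.0

def A_absorptive(s, a, b, B, mN, MW, MZ, MH, MS, t_beta, th):
    """Absorptive amplitude A_{ab}(s) for the N_b -> N_a transition."""
    c, sn = np.cos(th), np.sin(th)
    # W/Z transverse contributions (first square bracket)
    gauge = (4 * (1 - MW**2 / s)**2 * theta(s - MW**2)
             + 2 * MZ**2 / MW**2 * (1 - MZ**2 / s)**2 * theta(s - MZ**2))
    # Goldstone, Majoron and CP-even Higgs contributions (second bracket)
    scal = (2 * (1 - MW**2 / s)**2 * theta(s - MW**2)
            + (1 - MZ**2 / s)**2 * theta(s - MZ**2)
            + t_beta**2 * theta(s)
            + (c - sn * t_beta)**2 * (1 - MH**2 / s)**2 * theta(s - MH**2)
            + (sn + c * t_beta)**2 * (1 - MS**2 / s)**2 * theta(s - MS**2))
    return alpha_w / 32 * sum(
        np.conj(B[l, a]) * B[l, b] * gauge
        + mN[a] * mN[b] / MW**2 * B[l, a] * np.conj(B[l, b]) * scal
        for l in range(3))

# Toy couplings and masses (hypothetical benchmark):
B = 1e-6 * np.array([[1.0 + 0.2j, 0.8 - 0.1j],
                     [0.5 + 0.5j, 1.1 + 0.3j],
                     [0.9 - 0.4j, 0.4 + 0.6j]])
mN = np.array([500.0, 500.5])
print(A_absorptive(mN[0]**2, 0, 0, B, mN, 80.4, 91.2, 125.0, 60.0, 0.5, 0.05))
```

Below all gauge and Higgs thresholds only the Majoron term $\propto t^2_\beta$ survives, and the diagonal amplitude $A_{\alpha\alpha}$ is real, as it should be for a width.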
We should bear in mind that all masses involved on the RHS
of~(\ref{Abs}) depend on the temperature $T$, through the
$T$-dependent VEVs $v(T)$ and $w(T)$ related to the Higgs doublet
$\Phi$ and the complex singlet $\Sigma$, respectively [cf.~(\ref{vT})
and~(\ref{wT})]. In the symmetric phase of the theory, i.e.~for
temperatures above the electroweak phase transition, these VEVs vanish
and the absorptive transition amplitude becomes
\begin{equation}
\label{AbsSymmetric}
A_{\alpha\beta} (s) \ =\
\frac{\alpha_w}{8}\
\frac{(m^T_D\, m^*_D)_{\alpha\,\beta}}{M^2_W}\
\Bigg(\, 1\: +\:
\frac{t^2_\beta}{2}\;\Bigg)\; .
\end{equation}
Note that this last formula is only valid in the weak basis in which
the Majorana mass matrix $m_M$ is diagonal.
To account for unstable-particle-mixing effects between heavy Majorana
neutrinos, we follow~\cite{APRD,PU} and define the resummed effective
couplings $\overline{B}_{l\alpha}$ and their CP-conjugate ones
$\overline{B}^c_{l\alpha}$ related to the vertices $W^-l_L N_\alpha$
and $W^+ (l_L)^C N_\alpha$, respectively. For a symmetric model with 3
left-handed and 3 right-handed neutrinos, the effective couplings
$\overline{B}_{l\alpha}$ exhibit the same analytic dependence on the
absorptive transition amplitudes $A_{\alpha\beta}$ as the one found
in~\cite{PU}:\footnote{Here we correct a typographical error that occurred
in~\cite{PU}, where $R_{\alpha\gamma}$ in the numerator of the
fraction needs to be multiplied by $-i$.}
\begin{eqnarray}
\label{hres3g}
\overline{B}_{l\alpha} \!&=&\! B_{l\alpha}\: -\: i\,
\sum\limits_{\beta,\gamma = 1}^3
|\varepsilon_{\alpha\beta\gamma}|\; B_{l\beta}\\
&&\hspace{-1.35cm}\times\,\frac{m_\alpha ( m_\alpha A_{\alpha\beta} +
m_\beta A_{\beta\alpha}) - i R_{\alpha \gamma} \Big[ m_\alpha
A_{\gamma\beta} ( m_\alpha A_{\alpha\gamma} + m_\gamma A_{\gamma\alpha}
) + m_\beta A_{\beta\gamma} ( m_\alpha A_{\gamma\alpha} + m_\gamma
A_{\alpha \gamma} ) \Big]} { m^2_\alpha\, -\, m^2_\beta\, +\,
2i\,m^2_\alpha A_{\beta\beta} + 2i\,{\rm Im}R_{\alpha\gamma}\, \Big(
m^2_\alpha |A_{\beta\gamma}|^2 + m_\beta m_\gamma {\rm
Re}A^2_{\beta\gamma}\Big) }\ ,\nonumber
\end{eqnarray}
where all transition amplitudes $A_{\alpha\beta}$, $A_{\beta\gamma}$
etc are evaluated at $s = m^2_{N_\alpha} \equiv m^2_\alpha$ and
\begin{equation}
R_{\alpha \beta}\ =\ \frac{m^2_\alpha}{m^2_\alpha - m^2_\beta +
2i\, m^2_\alpha A_{\beta\beta} (m^2_\alpha)}\ .
\end{equation}
Moreover, $|\varepsilon_{\alpha\beta\gamma}|$ is the modulus of the
usual Levi--Civita anti-symmetric tensor. The respective CP-conjugate
effective couplings $\overline{B}^c_{l\alpha}$ are easily obtained
from~(\ref{hres3g}) by replacing the ordinary $W^-$-boson couplings
$B_{l\alpha}$ and the amplitudes $A_{\alpha\beta} (s)$ by their complex conjugates.
In the decoupling limit of $m_{N_3} \gg m_{N_{1,2}}$, we recover the
analytic results known for a model with 2 right-handed
neutrinos~\cite{APRD,PU}, where the effective couplings
$\overline{B}_{l1,2}$ are given by
\begin{eqnarray}
\label{B2gen1}
\overline{B}_{l1} \!& = &\! B_{l1}\ -\ i\, B_{l2}\, \frac{m_{N_1}\,\Big(\,
m_{N_1}\, A_{12} (m^2_{N_1})\: +\: m_{N_2}\, A_{21} (m^2_{N_1})\, \Big)}
{ m^2_{N_1}\: -\: m^2_{N_2}\ + \ 2i m^2_{N_1}\, A_{22} (m^2_{N_1})}\ ,\\[3mm]
\label{B2gen2}
\overline{B}_{l2} \!& = &\! B_{l2}\ -\ i\, B_{l1}\, \frac{m_{N_2}\,
\Big(\, m_{N_2}\, A_{21} (m^2_{N_2})\: +\:
m_{N_1}\, A_{12} (m^2_{N_2})\, \Big)}
{ m^2_{N_2}\: -\: m^2_{N_1}\ + \ 2i m^2_{N_2}\, A_{11} (m^2_{N_2})}\ .
\end{eqnarray}
In all our results, we neglect the 1-loop corrections to the vertices
$W^\pm l_L N_\alpha$, $Z\nu_{lL}N_\alpha$ etc, whose absorptive parts
are numerically insignificant in leptogenesis, but essential otherwise
to ensure gauge invariance and unitarity within the PT
framework~\cite{APNPB}.
In terms of the resummed effective couplings $\overline{B}_{l\alpha}$
and $\overline{B}^c_{l\alpha}$ and the absorptive transition
amplitudes $A_{\alpha\beta} (s)$, the partial decay widths
$\Gamma^l_{N_\alpha }$ and their CP-conjugates
$\overline{\Gamma}^l_{N_\alpha }$ are now given by
\begin{equation}
\label{Widths}
\Gamma^l_{N_\alpha }\ =\ m_{N_\alpha}\, A_{\alpha\alpha}
( m^2_{N_\alpha};\, \overline{B}_{l\alpha})\; ,\qquad
\overline{\Gamma}^{\; l}_{N_\alpha }\ =\ m_{N_\alpha}\, A_{\alpha\alpha}
(m^2_{N_\alpha};\, \overline{B}^c_{l\alpha})\; ,
\end{equation}
where the dependence of the absorptive transition amplitudes on
$\overline{B}_{l\alpha}$ and~$\overline{B}^c_{l\alpha}$ has explicitly
been indicated. Note that no summation over the individual charged
leptons and light neutrinos running in the loop should be performed
when calculating $\Gamma^l_{N_\alpha }$ and $\overline{\Gamma}^{\;
l}_{N_\alpha }$ using~(\ref{Abs}) and (\ref{Widths}). Then, the
leptonic asymmetries for each individual lepton flavour are readily
found to be
\begin{equation}
\label{deltaN}
\delta^l_{N_\alpha}\ =\ \frac{ \Delta \Gamma^l_{N_\alpha} }{
\Gamma_{N_\alpha} } \ =\
\frac{ |\overline{B}_{l\alpha}|^2\: -\: |\overline{B}^c_{l\alpha}|^2}
{\sum\limits_{l = e,\mu ,\tau}
\Big(\, |\overline{B}_{l\alpha}|^2\: +\:
|\overline{B}^c_{l\alpha}|^2\,\Big) }\ ,
\end{equation}
with
\begin{equation}
\Gamma_{N_\alpha} \ =\ \sum\limits_{l = e,\mu ,\tau}\,
\Big(\, \Gamma^l_{N_\alpha}\: +\:
\overline{\Gamma}^{\; l}_{N_\alpha}\,\Big)\;, \qquad
\Delta\Gamma^l_{N_\alpha} \ =\ \Gamma^l_{N_\alpha}\: -\:
\overline{\Gamma}^{\; l}_{N_\alpha}\; .
\end{equation}
Notice that both $\Gamma_{N_\alpha}$ and $\delta^l_{N_\alpha}$ do in
general depend on the temperature $T$, through the $T$-dependent
masses, during a second-order electroweak phase transition. More
details on this issue will be presented in the next section.
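Combining the absorptive amplitudes, the resummed couplings of the two-neutrino system and the flavoured asymmetries, the chain of definitions above can be sketched numerically. Everything below (couplings, masses, mixing parameters, and the helper names `A_amp` and `B_eff`) is a hypothetical toy benchmark near the resonance region, not a fit to data:

```python
import numpy as np

# Hypothetical toy inputs: two nearly degenerate heavy neutrinos.
MW, MZ, MH, MS = 80.4, 91.2, 125.0, 60.0   # GeV
tb, th, alpha_w = 0.5, 0.05, 0.034
mN = np.array([500.0, 500.0000005])        # GeV; near-resonant splitting
B = 1e-6 * np.array([[1.0 + 0.2j, 0.8 - 0.1j],
                     [0.5 + 0.5j, 1.1 + 0.3j],
                     [0.9 - 0.4j, 0.4 + 0.6j]])

def step(x):
    return 1.0 if x > 0 else 0.0

def A_amp(s, a, b, Bc):
    """Absorptive amplitude A_{ab}(s) for couplings Bc."""
    c, sn = np.cos(th), np.sin(th)
    gauge = (4 * (1 - MW**2 / s)**2 * step(s - MW**2)
             + 2 * MZ**2 / MW**2 * (1 - MZ**2 / s)**2 * step(s - MZ**2))
    scal = (2 * (1 - MW**2 / s)**2 * step(s - MW**2)
            + (1 - MZ**2 / s)**2 * step(s - MZ**2) + tb**2 * step(s)
            + (c - sn * tb)**2 * (1 - MH**2 / s)**2 * step(s - MH**2)
            + (sn + c * tb)**2 * (1 - MS**2 / s)**2 * step(s - MS**2))
    return alpha_w / 32 * sum(
        np.conj(Bc[l, a]) * Bc[l, b] * gauge
        + mN[a] * mN[b] / MW**2 * Bc[l, a] * np.conj(Bc[l, b]) * scal
        for l in range(3))

def B_eff(a, Bc):
    """Resummed effective couplings for N_{a+1}; b labels the other state."""
    b, s = 1 - a, mN[a]**2
    num = mN[a] * (mN[a] * A_amp(s, a, b, Bc) + mN[b] * A_amp(s, b, a, Bc))
    den = mN[a]**2 - mN[b]**2 + 2j * mN[a]**2 * A_amp(s, b, b, Bc)
    return np.array([Bc[l, a] - 1j * Bc[l, b] * num / den for l in range(3)])

Bbar = B_eff(0, B)           # effective couplings of N_1
Bbarc = B_eff(0, B.conj())   # CP conjugate: couplings and amplitudes conjugated
norm = np.sum(np.abs(Bbar)**2 + np.abs(Bbarc)**2)
delta = (np.abs(Bbar)**2 - np.abs(Bbarc)**2) / norm   # delta^l_{N_1}
print(delta, delta.sum())
```

A useful consistency check is that for real couplings the CP-conjugate effective couplings coincide with the ordinary ones, so all asymmetries vanish identically.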
\setcounter{equation}{0}
\section{Electroweak Resonant Leptogenesis}\label{sec:EWPT}
In this section we present the relevant Boltzmann equations (BEs) that
will enable us to evaluate the lepton-to-photon and baryon-to-photon
ratios, $\eta_{L_l}$ and $\eta_B$, during a second-order EWPT. In our
numerical estimates, we only include the dominant collision terms
related to the $1 \leftrightarrow 2$ decays and inverse decays of the
heavy Majorana neutrinos $N_\alpha$. We also neglect chemical
potential contributions from the right-handed charged leptons and
quarks~\cite{reviews}. A complete account of the aforementioned
subdominant effects may be given elsewhere.
To start with, we first write down the BEs that govern the photon
normalised number densities $\eta_{N_\alpha}$ and $\eta_{\Delta L_l}$
for the heavy Majorana neutrinos $N_\alpha$ and the left-handed
leptons $l_L$, $\nu_{lL}$, respectively:
\begin{eqnarray}
\label{BEN}
\frac{d\eta_{N_\alpha}}{dz} \!& =&\! \frac{z\, D_{N_\alpha}}{H(T_c)}\
\Bigg(\, 1\: -\: \frac{\eta_{N_\alpha}}{\eta^{\rm eq}_{N_\alpha}}\,
\Bigg)\; ,\\
\label{BEDL}
\frac{d\eta_{\Delta L_l}}{dz} \!& =&\! \frac{z\, D_{N_\alpha}}{H(T_c)}\
\Bigg[\, \Bigg(\, \frac{\eta_{N_\alpha}}{\eta^{\rm eq}_{N_\alpha}}\: -\: 1\,
\Bigg)\, \delta^l_{N_\alpha}\
-\ \frac{2}{3}\; B^l_{N_\alpha}\, \eta_{\Delta L_l}\,
\Bigg] \; .
\end{eqnarray}
Although our conventions and notations follow those of~\cite{PU2},
there are several key differences pertinent to our EWPT scenario that
need to be stressed here. Specifically, we express the $T$-dependence
of the BEs~(\ref{BEN}) and~(\ref{BEDL}) in terms of the dimensionless
parameter $z$:
\begin{equation}
\label{zparam}
z\ =\ \frac{T_c}{T}\ ,
\end{equation}
where $T_c$ is the critical temperature of the EWPT to be determined
below [cf.~(\ref{Tc})]. The quantity $H(T_c)\approx 17\times
T^2_c/M_{\rm P}$ is the Hubble parameter at $T=T_c$, where $M_{\rm P} =
1.2\times 10^{19}$~GeV is the Planck mass. The parameter
$B^l_{N_\alpha}$ denotes the branching fraction of the decays of the
heavy Majorana neutrino $N_\alpha$ into a particular lepton flavour
$l$, i.e.~$B^l_{N_\alpha} = (\Gamma^l_{N_\alpha}\, +\,
\overline{\Gamma}^{\; l}_{N_\alpha})/\Gamma_{N_\alpha}$. Moreover,
$\eta_{N_\alpha}^{\rm eq}$ is the equilibrium number density of the
heavy neutrino $N_\alpha$, normalised to the number density of photons
$n_\gamma = 2T^3/\pi^2$:
\begin{equation}
\label{etaNeq}
\eta_{N_\alpha}^{\rm eq} \ =\ \frac{m^2_{N_\alpha}(T)}{2T^2}\;
K_2\Bigg(\frac{m_{N_\alpha} (T)}{T}\Bigg)\; ,
\end{equation}
where $K_n (x)$ is the $n$th-order modified Bessel function~\cite{AS}.
Finally, $D_{N_\alpha}$ is the $T$-dependent collision term related to
the decay and inverse decay of the heavy Majorana neutrino~$N_\alpha$:
\begin{equation}
\label{DNalpha}
D_{N_\alpha}\ =\ \frac{\Gamma_{N_\alpha} (T)}{n_\gamma}\;
g_{N_\alpha}\, \int\, \frac{d^3{\bf
p}_{N_\alpha}}{(2\pi)^3}\,\frac{m_{N_\alpha}(T)}{E_{N_\alpha}(T)}\,
e^{-E_{N_\alpha}(T)/T} \ =\ \frac{m^2_{N_\alpha}(T)}{2T^2}\,
\Gamma_{N_\alpha} (T)\, K_1\Bigg(\frac{m_{N_\alpha} (T)}{T}\Bigg)\; ,
\end{equation}
where $E_{N_\alpha}(T) = [\,|{\bf p}_{N_\alpha}|^2\: +\:
m^2_{N_\alpha}(T)\,]^{1/2}$ and $g_{N_\alpha } = 2$ is the number of
helicities of $N_\alpha$.
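The coupled system above can be integrated numerically once $\eta^{\rm eq}_{N_\alpha}$ and $D_{N_\alpha}$ are supplied. The following single-flavour sketch takes a constant width $\Gamma_{N_\alpha}(T) = \Gamma_0$ and a $T$-independent asymmetry for simplicity, with the heavy-neutrino mass switched on by the $T$-dependent VEV below $T_c$; every number is a hypothetical toy benchmark:

```python
import numpy as np
from scipy.special import kn
from scipy.integrate import solve_ivp

# Toy single-flavour benchmark; all numbers below are hypothetical.
M_P, T_c = 1.2e19, 120.0         # Planck mass and critical temperature (GeV)
mN0, Gamma0 = 250.0, 1.0e-12     # heavy-neutrino mass and width at T = 0 (GeV)
delta_l, B_l = 1.0e-3, 1.0       # leptonic asymmetry and branching fraction
H_c = 17.0 * T_c**2 / M_P        # Hubble parameter H(T_c)

def mN(T):
    # mass switched on by the T-dependent VEV v(T) below T_c
    return mN0 * np.sqrt(max(1.0 - T**2 / T_c**2, 0.0))

def eta_eq(z):
    x = mN(T_c / z) * z / T_c
    return 0.5 * x**2 * kn(2, x)                 # equilibrium density

def D_coll(z):
    x = mN(T_c / z) * z / T_c
    return 0.5 * x**2 * Gamma0 * kn(1, x)        # decay/inverse-decay term

def rhs(z, y):
    etaN, etaL = y
    d, eq = z * D_coll(z) / H_c, eta_eq(z)
    dN = d * (1.0 - etaN / eq)                                       # heavy N
    dL = d * ((etaN / eq - 1.0) * delta_l - 2.0 / 3.0 * B_l * etaL)  # lepton
    return [dN, dL]

sol = solve_ivp(rhs, (1.02, 20.0), [0.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-20)
print(sol.y[0, -1], sol.y[1, -1])    # eta_N and eta_{Delta L_l} at z = 20
```

With these inputs $\eta_{N_\alpha}$ first tracks its equilibrium value and then decays away as $m_{N_\alpha}(T)/T$ grows, while the lepton asymmetry is sourced by the departure from equilibrium and partially washed out by inverse decays.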
Our next step is to include the effect of the $(B+L)$-violating
sphalerons~\cite{KRS} on the lepton-number densities produced by the
decays of $N_\alpha$ during the EWPT. In particular, our interest is
to implement the temperature dependence of the rate of $B+L$ violation
just below the critical temperature $T_c$, where $T_c$ is given
by~\cite{MEC}
\begin{equation}
\label{Tc}
T_c\ =\ v\, \left(\,\frac{1}{2}\: +\:
\frac{3\,g^2_w}{8\,\lambda_\Phi}\: +\:
\frac{g^{\prime\,2}}{8\,\lambda_\Phi}\: +\:
\frac{h^2_{t}}{2\,\lambda_\Phi}\, \right)^{-1/2} .
\end{equation}
In the above, $g^\prime$ is the U(1)$_Y$ gauge coupling and $h_t$ is
the top-quark Yukawa coupling. We should notice that $\Phi$-$\Sigma$
mixing effects have been omitted in~(\ref{Tc}), which is a good
approximation for scenarios with $\delta /\lambda_\Phi \ll 1$, such as
the ones considered here.
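As a rough numerical illustration of~(\ref{Tc}) (our own sketch; the SM-like couplings $g_w \approx 0.65$, $g' \approx 0.36$, $h_t \approx 0.99$ are illustrative inputs, combined with $\lambda_\Phi = 0.238$ from Table~\ref{Model}):

```python
def critical_temperature(v=246.0, lam_phi=0.238, g_w=0.652,
                         g_prime=0.357, h_t=0.99):
    """Critical temperature of eq. (Tc); the coupling values are
    illustrative SM-like inputs, not quoted from the text."""
    bracket = (0.5 + 3.0 * g_w**2 / (8.0 * lam_phi)
               + g_prime**2 / (8.0 * lam_phi)
               + h_t**2 / (2.0 * lam_phi))
    return v * bracket ** -0.5
```

With these inputs the sketch gives $T_c \approx 135$~GeV, consistent with the value $T_c \approx 133$~GeV used later in the text.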
A reliable estimate~\cite{AM,CLMW} of the rate of $(B+L)$-violating
sphaleron transitions can be obtained for temperatures satisfying the
double inequality
\begin{equation}
\label{BLcond}
M_W(T)\ \ll\ T\ \ll\ \frac{M_W(T)}{\alpha_w}\;,
\end{equation}
where $\alpha_w = g^2_w/4\pi$ is the SU(2)$_L$ fine structure
constant, $M_W(T) = g_w\,v(T)/2$ is the $T$-dependent $W$-boson mass
and
\begin{equation}
\label{vT}
v(T)\ =\ v\, \left(\,1\: -\: \frac{T^2}{T_c^2}\,\right)^{1/2}
\end{equation}
is the $T$-dependent VEV of the Higgs field. In detail, the rate of
$B+L$ violation per unit volume is~\cite{AM}
\begin{equation}
\label{BLrate}
\gamma_{\Delta (B+L)}\ =\
\frac{\omega_-}{2\,\pi}\;{\cal N}_{\rm tr}\, ({\cal N}V)_{\rm rot}\,
\left(\frac{\alpha_w\,T}{4\,\pi} \right)^3 \alpha_3^{-6}\,e^{-E_{\rm
sp} / T}\, \kappa\; .
\end{equation}
Given the double inequality (\ref{BLcond}), this last expression is
valid for temperatures $T \stackrel{<}{{}_\sim} T_c$. Following the
notation of~\cite{AM}, the parameters $\omega_-$, ${\cal N}_{\rm tr}$
and ${\cal N}_{\rm rot}$ that occur in~(\ref{BLrate}) are functions of
$\lambda_\Phi / g^2_w$, while $V_{\rm rot} = 8\pi^2$ and $\alpha_3 =
\alpha_w\,T/[2\,M_W(T)]$. The quantity $E_{\rm sp}$ is the
$T$-dependent energy of the sphaleron and is determined by
\begin{equation}
E_{\rm sp}\ =\ A\,\frac{2\,M_W(T)}{\alpha_w}\ ,
\end{equation}
where $A$ is a function of $\lambda_\Phi / g^2_w$ and is ${\cal O}( 1
)$, for values of phenomenological interest. The dependence of the
parameter $\kappa$ on $\lambda_\Phi/g^2_w$ has been calculated
in~\cite{AM,CLMW}, and the results of those studies are summarised in
Table~\ref{BLparams}, for $\lambda_\Phi /g^2_w = 0.556$. This value
corresponds to a SM Higgs-boson mass $M_H$ of 120~GeV in the vanishing
limit of a $\Phi$-$\Sigma$ mixing.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|}
\hline &&&&&\\[-11pt]
$\lambda_\Phi / g^2_w$ & $\omega_-$ & ${\cal N}_{\rm
rot}$ & ${\cal N}_{\rm tr}$ & $\kappa$ & $A$\\[2pt] \hline \hline
0.556 & 1.612$\times M_W$ & 11.2 & 7.6 & 0.135 -- 1.65 & 1\\ \hline
\end{tabular}
\end{center}
\caption{\it Values of the parameters occurring in (\ref{BLrate}) for
$\lambda_\Phi/g^2_w = 0.556$, which corresponds to a SM Higgs-boson
mass of 120~GeV when $\delta = 0$.}\label{BLparams}
\end{table}
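To make the freeze-out behaviour concrete, a small Python sketch (ours; $g_w \approx 0.65$ and $T_c \approx 133$~GeV are illustrative inputs) evaluates $v(T)$ of~(\ref{vT}) and the Boltzmann exponent $E_{\rm sp}/T$ for $A = 1$:

```python
import math

G_W = 0.652                        # illustrative SU(2)_L coupling
ALPHA_W = G_W**2 / (4.0 * math.pi)

def v_T(z, v=246.0):
    """T-dependent VEV of eq. (vT), written via z = T_c/T (z >= 1)."""
    return v * math.sqrt(1.0 - 1.0 / z**2)

def sphaleron_exponent(z, Tc=133.0, A=1.0):
    """E_sp / T with E_sp = A * 2 M_W(T) / alpha_w, M_W(T) = g_w v(T)/2."""
    T = Tc / z
    M_W = 0.5 * G_W * v_T(z)
    return A * 2.0 * M_W / (ALPHA_W * T)
```

The exponent grows from $\sim 5$ just below $T_c$ to $\sim 50$ at $z \approx 1.7$, which is why the rate~(\ref{BLrate}) switches off sharply near $T_{\rm sph} \approx 78$~GeV.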
Since the SM Higgs-boson mass is $M_H \stackrel{>}{{}_\sim} 115$~GeV,
it can be shown~\cite{EWSM} that the EWPT in the SM is not first
order, but continuous from $v(T_c)=0$ to $v$, without bubble
nucleation and the formation of large spatial inhomogeneities in
particle densities. Therefore, we use the formalism developed
in~\cite{PU2}, where the $(B+L)$-violating sphaleron dynamics is
described in terms of spatially independent $B$- and $L$-number
densities $\eta_{B}$ and~$\eta_{L_j}$. More explicitly, the BEs of
interest to us are~\cite{PU2}:
\begin{eqnarray}
\label{Bsph}
\frac{d \eta_{B}}{dz} \!& = &\! -\, \frac{z\,\Gamma_{\Delta
(B+L)}}{H(T_c)}\; \bigg[\, \eta_{B}\: +\:
\frac{28}{51}\, \eta_L\:
+\: \frac{v^2(T)}{T^2}\,
\bigg(\, \frac{75}{187}\, \eta_{B}\: +\:\frac{16}{187}\,
\eta_L\,\bigg)\,\bigg]\; ,\qquad\\[12pt]
\label{Lsph}
\frac{d \eta_{L_i}}{dz} \!& = & \!
\frac{d\eta_{\Delta L_i}}{dz}
\: +\: \frac{1}{3}\,
\frac{d \eta_{B}}{dz}\ ,
\end{eqnarray}
where $\eta_L = \sum_{l\,=e,\mu,\tau} \eta_{L_l}$ is the total
lepton asymmetry and
\begin{equation}
\Gamma_{\Delta (B+L)}\ =\ \frac{1683}{
132\, T^3\: +\: 51\, T\,v^2(T) }\ \gamma_{\Delta (B+L)} \; .
\end{equation}
We observe that in the limit $\Gamma_{\Delta (B+L)}/H(T_c) \to \infty$
and for $T > T_c$, the conversion of the lepton-to-photon ratio
$\eta_L$ to the baryon-to-photon ratio~$\eta_B$ is given by the known
relation~\cite{KS,LS}:
\begin{equation}
\eta_B\ =\ -\,\frac{28}{51}\: \eta_L\; .
\end{equation}
Likewise, when $1 \stackrel{<}{{}_\sim} z \stackrel{<}{{}_\sim} 1.7$
and $\kappa =1$, one has $\Gamma_{\Delta (B+L)}/H(T_c) \gg 1$, and the
baryon-to-photon ratio $\eta_B$ is then related to the total
lepton-to-photon ratio $\eta_L$ by
\begin{equation}
\label{etaBL}
\eta_B \ = \ -\ \Bigg(\,\frac{28}{51}\: +\: \frac{16}{187}\:
\frac{v^2(T)}{T^2}\, \Bigg)\, \Bigg( 1\: +\: \frac{75}{187}\:
\frac{v^2(T)}{T^2} \Bigg)^{-1}\, \eta_L\; .
\end{equation}
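The conversion factor in~(\ref{etaBL}) is easy to tabulate; the sketch below (ours) reduces to the familiar $-28/51$ as $v(T)/T \to 0$:

```python
def eta_B_over_eta_L(vT_over_T):
    """Equilibrium conversion factor of eq. (etaBL):
    eta_B / eta_L = -(28/51 + 16/187 r^2) / (1 + 75/187 r^2),
    with r = v(T)/T; r -> 0 recovers eta_B = -(28/51) eta_L."""
    r2 = vT_over_T ** 2
    return -(28.0 / 51.0 + (16.0 / 187.0) * r2) / (1.0 + (75.0 / 187.0) * r2)
```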
For $z\stackrel{>}{{}_\sim} 1.7$, sphaleron effects get sharply out of
equilibrium and $\eta_B$ freezes out. To account for the $T$-dependent
$(B+L)$-violating sphaleron effects, our numerical estimates will be
based on the BEs~(\ref{BEN}), (\ref{BEDL}), (\ref{Bsph}) and
(\ref{Lsph}).
In the singlet Majoron model, the restoration of the global symmetry
U(1)$_l$ will occur for temperatures above a critical temperature
$T^l_c$ that could in general differ from $T_c$ of the SM gauge group
given in~(\ref{Tc}). For example, in the absence of a doublet-singlet
mixing, the critical temperature related to the SSB of U(1)$_l$
is~\cite{JKapusta}
\begin{equation}
\label{Tlc}
T^l_c \ =\ \Bigg(\! -\ \frac{6\, m^2_\Sigma}{\lambda_\Sigma}\Bigg)^{1/2}\ .
\end{equation}
Consequently, the $T$-dependence of $w(T)$ for $T< T^l_c$ will be
analogous to $v(T)$ in~(\ref{vT}), i.e.
\begin{equation}
\label{wT}
w(T)\ =\ w\, \left(\,1\: -\:
\frac{T^2}{(T_c^l)^2}\,\right)^{1/2}\ .
\end{equation}
However, if $m^2_\Sigma$ vanishes, the singlet VEV $w(T)$ and the
doublet VEV $v(T)$ will be related by an expression very analogous
to~(\ref{wtov}), namely
\begin{equation}
\label{wTtovT}
w(T)\ \approx \ t^{-1}_\beta\, v(T)\; .
\end{equation}
As was mentioned after (\ref{wtov}), the above relation becomes exact
in a high-$T$ expansion of the thermally corrected effective
potential. Such an expansion is a very good approximation to the level
of a few \% for perturbatively small quartic
couplings~\cite{JKapusta}. As a consequence, the SM gauge group and
the global lepton symmetry U(1)$_l$ will both break down spontaneously
via the same second-order electroweak phase transition, with $T^l_c =
T_c$. Even though the focus of the paper will be on this class of
scenarios, we will comment on possible differences for models with
$T^l_c \neq T_c$.
If $T^l_c = T_c$, the heavy neutrino masses $m_{N_\alpha}$, the
gauge-boson masses $M_{W,Z}$ and the Higgs masses $M_{H,S}$ all scale
with the same $T$-dependent factor, $(1-T^2/T^2_c)^{\,1/2}$, for
temperatures $T<T_c$ of our interest. Hence, the $T$-dependence drops
out exactly in the expression~(\ref{Abs}) of the absorptive transition
amplitudes $A_{\alpha\beta} (m^2_{N_\gamma})$, and likewise in the
leptonic asymmetries $\delta^l_{N_\alpha}$ and the branching fractions
$B^l_{N_\alpha}$. However, as can be seen from~(\ref{DNalpha}), the
collision terms $D_{N_\alpha}$ exhibit a non-trivial $T$-dependence
that needs to be carefully implemented in the BEs.
For our numerical estimates of the BAU, we consider the 3-generation
flavour scenario of the RL model discussed in~\cite{APtau,PU2}.
Specifically, the Majorana sector is assumed to be approximately SO(3)
symmetric,
\begin{equation}
\label{mM}
m_M\ =\ m_N\, {\bf 1}_3\: +\: \Delta M_S\; ,
\end{equation}
where $\Delta M_S$ are small SO(3)-breaking terms that are of order
$m^\dagger_D\, m_D/m_N$ as these are naturally expected
from~(\ref{bfMass}). Plugging~(\ref{mM}) into~(\ref{bfMass}), we find
that, to leading order in $\Delta M_S$, the heavy neutrino mass matrix
${\bf m}^N$ deviates from $ m_N\, {\bf 1}_3$ by an amount
\begin{equation}
\label{dmN}
\delta {\bf m}^N \ =\ \Delta M_S\: +\: \frac{1}{2 m_N}\;
\Big( m^\dagger_D\, m_D\: +\: m^T_D\, m^*_D \Big)\; .
\end{equation}
It is interesting to observe that possible renormalisation-group (RG)
running effects from a high-energy scale $M_X$, e.g.~GUT scale, down
to $m_N$ will induce a negative contribution to $\delta {\bf
m}^N$~\cite{rad}, i.e.
\begin{equation}
(\delta {\bf m}^N)_{\rm RG} \ =\ -\frac{\alpha_w}{8\pi}\;
\frac{m_N}{M^2_W}\ \Big(\, m^\dagger_D m_D\: +\: m^T_D m_D^*\, \Big)\;
\ln\Bigg( \frac{M_X}{m_N}\Bigg)\; .
\end{equation}
For $M_X = M_{\rm GUT} \sim 10^{16}$~GeV and $m_N = 80$--150~GeV, the
RG-induced terms are typically smaller by a factor $\sim 0.1$--0.4
with respect to the tree-level contribution given in~(\ref{dmN}).
Thus, the inclusion of the RG effects is not going to affect the
results of our analysis in a substantial manner.
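The quoted suppression factor $\sim 0.1$--0.4 can be checked by taking the ratio of the RG-induced coefficient to the tree-level coefficient $1/(2 m_N)$ in~(\ref{dmN}); this is our own back-of-the-envelope sketch, with illustrative $\alpha_w$ and $M_W$ values:

```python
import math

def rg_to_tree_ratio(m_N, M_X=1.0e16, M_W=80.4, alpha_w=0.0338):
    """|RG-induced term| / |tree-level term| in delta m^N:
    (alpha_w / 4 pi) (m_N / M_W)^2 ln(M_X / m_N), masses in GeV."""
    return (alpha_w / (4.0 * math.pi)) * (m_N / M_W) ** 2 \
        * math.log(M_X / m_N)
```

For $m_N = 80$--150~GeV this evaluates to roughly $0.09$--$0.3$, in line with the estimate above.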
As was mentioned already, the SO(3) symmetry is broken by the Dirac
mass terms $(m_D)_{i\alpha}$, which in our case possess an approximate
U(1)-symmetric flavour pattern~\cite{APtau}:
\begin{equation}
\label{mD}
m_D\ =\ \frac{v}{\sqrt{2}}\,
\left( \begin{array}{ccc}
0 & a\,e^{-i\pi/4} & a\,e^{i\pi/4} \\
0 & b\,e^{-i\pi/4} & b\,e^{i\pi/4} \\
0 & c\,e^{-i\pi/4} & c\,e^{i\pi/4} \\
\end{array}\right)\ +\ \delta m_D\; ,
\end{equation}
where the 3-by-3 matrix $\delta m_D$,
\begin{equation}
\label{dmD}
\delta m_D\ =\ \frac{v}{\sqrt{2}}\, \left(
\begin{array}{ccc}
\varepsilon_e & 0 & 0\\
\varepsilon_\mu & 0 & 0\\
\varepsilon_\tau & 0 & 0\end{array} \right)\; ,
\end{equation}
violates the U(1) symmetry by small terms of order of the electron
mass $m_e$. Instead, the U(1)-symmetric Yukawa couplings $a$ and $b$
can be as large as the $\tau$-lepton Yukawa coupling $m_\tau/v$,
i.e.~of order $10^{-2}$--$10^{-3}$. For successful RL, it was
found~\cite{APtau,PU2} that the parameter $c$ needs to be of the
order of the electron Yukawa coupling $m_e/v$. It is important to
stress here that the approximate flavour symmetries SO(3) and U(1)
ensure the stability of the light- and heavy-neutrino sector under
loop corrections~\cite{APZPC,APtau,Kersten/Smirnov}.
\begin{table}[t]
\begin{center}
{\small
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
& & & & & & \\[-3mm]
{\bf Higgs} & $\lambda_\Phi$ & $\lambda_\Sigma$ & $\delta$ &
$\tan\beta$ & $M_H$~[GeV] & $M_S$~[GeV] \\
{\bf Sector} & & & & & & \\
\hline
& & & & & & \\[-4mm]
& 0.238 & $\frac{1}{30}\,\lambda_\Phi$ & $\frac{1}{15}\,\lambda_\Phi$
& $1/\sqrt{2}$ & 121 & 29 \\[-4mm]
& & & & & & \\
\hline\hline
& & & & & & \\[-3mm]
{\bf Neutrino} &
$\frac{\displaystyle (\delta {\bf m}^N)_{11}}{\displaystyle m_N}$ &
$\frac{\displaystyle (\delta {\bf m}^N)_{12}}{\displaystyle m_N}$ &
$\frac{\displaystyle (\delta {\bf m}^N)_{13}}{\displaystyle m_N}$ &
$\frac{\displaystyle (\delta {\bf m}^N)_{22}}{\displaystyle m_N}$ &
$\frac{\displaystyle (\delta {\bf m}^N)_{23}}{\displaystyle m_N}$ &
$\frac{\displaystyle (\delta {\bf m}^N)_{33}}{\displaystyle m_N}$ \\
{\bf Sector} & & & & & & \\
\hline
& & & & & & \\[-4mm]
& $10^{-5}$ & $-10^{-9}$ & $-4\times 10^{-10}$ & $4\times 10^{-9}$ &
$(6.8 - 0.6 i)$ & $5.2\times 10^{-9}$\\
& & & & & ~~~~~~~$\times 10^{-9}$ & \\
\hline
& $a$ & $b$ & $c$ &
$\varepsilon_e$ & $\varepsilon_\mu$ & $\varepsilon_\tau$\\
\hline
& & & & & & \\[-4mm]
& $\frac{\displaystyle 3}{\displaystyle 500}$ &
$\frac{\displaystyle 57}{\displaystyle 25000}$ & $2\times 10^{-7}$ &
$\frac{\displaystyle 1563}{\displaystyle 250000}$ &
$\frac{\displaystyle 39}{\displaystyle 50000}$ &
$-\,\frac{\displaystyle 147}{\displaystyle 128 000}$ \\[2mm]
& $\times \sqrt{\frac{\displaystyle m_N}{100~{\rm GeV}}}$ &
$\times \sqrt{\frac{\displaystyle m_N}{100~{\rm GeV}}}$ &
& $\times \sqrt{\frac{\displaystyle \Delta m_N}{100~{\rm GeV}}}$ &
$\times \sqrt{\frac{\displaystyle \Delta m_N}{100~{\rm GeV}}}$ &
$\times \sqrt{\frac{\displaystyle \Delta m_N}{100~{\rm GeV}}}$ \\[-3mm]
& & & & & & \\
\hline
\end{tabular} }
\end{center}
\caption{\it Complete set of the theoretical parameters used for the
singlet Majoron model, where $\Delta m_N = 2(\delta {\bf m}^N)_{23}
+ i [(\delta {\bf m}^N)_{33} - (\delta {\bf
m}^N)_{22}]$.}\label{Model}
\end{table}
For our numerical analysis, we fully specify in Table~\ref{Model} the
values of the theoretical parameters for the Higgs and neutrino
sectors. The only parameter that we allow to vary is the heavy
Majorana mass scale $m_N$. For $50~{\rm GeV} \stackrel{<}{{}_\sim}
m_N \stackrel{<}{{}_\sim} 200~{\rm GeV}$, the choice of parameters in
Table~\ref{Model} leads to an inverted hierarchical light-neutrino
spectrum with the following squared mass differences and mixing
angles:
\begin{eqnarray}
\label{lightspectrum}
m^2_{\nu_2}\: -\: m^2_{\nu_1} \! & = &\! (7.5\mbox{--}7.7)\times
10^{-5}~{\rm eV}^2\; ,
\qquad
m^2_{\nu_1}\: -\: m^2_{\nu_3} \ = \ 2.44\times 10^{-3}~{\rm eV}^2\; ,
\nonumber\\
\sin^2\theta_{12} \!& = &\! 0.362\;,\quad
\sin^2\theta_{23} \ = \ 0.341\; ,\quad \sin^2\theta_{13} \ =\ 0.047
\end{eqnarray}
and $m_{\nu_3} = 0$. The spectrum is compatible with the
light-neutrino data at the 3$\sigma$ confidence level
(CL)~\cite{JVdata}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{mN100.eps}
\end{center}
\caption{\it Numerical estimates of $\eta_B$ (solid), $\eta_{L_\tau}$
(dash-dotted), $\eta_{L_e} = \eta_{L_\mu}$ (dotted) and $\eta_L$
(dashed) as functions of $z = T_c/T$, for a model with $m_N
=$~100~GeV, and $\eta_{N_{\alpha}}^{\rm in} = 1$. The model
parameters are given in Table~\ref{Model}.
The horizontal grey line corresponds to the observed baryon-to-photon
ratio $\eta^{\rm obs}_B = 1.65 \times 10^{-8}$, after evolving the
latter back to the higher temperature $T = T_c/10$.}
\label{fig:mN100}
\end{figure}
In Fig.~\ref{fig:mN100} we present numerical estimates of the
lepton-flavour asymmetries $\eta_{L_{e,\mu,\tau}}$ and the baryon
asymmetry $\eta_B$ as functions of $z = T_c/T$, for a typical
electroweak RL scenario with $m_N = 100$~GeV. As initial conditions
at $T=T_c \approx 133~{\rm GeV}$, we take $\eta_{N_{\alpha}}^{\rm in}
= 1$ for the heavy neutrino number densities and vanishing
lepton-to-photon and baryon-to-photon ratios, i.e.~$\eta^{\rm
in}_{L_{e,\mu,\tau}} = 0$ and $\eta^{\rm in}_B = 0$. The thermal
equilibrium initial condition $\eta_{N_{\alpha}}^{\rm in} = 1$ is expected,
since the heavy neutrinos $N_{1,2,3}$ have no chiral masses when $T >
T_c$ and get rapidly thermalised by the sizeable light-to-heavy
neutrino Yukawa couplings $\sqrt{2} (m_D)_{i\alpha}/ v
\stackrel{>}{{}_\sim} 10^{-7}$. As can be seen from
Fig.~\ref{fig:mN100}, a net baryon asymmetry $\eta_B$ is generated by
a non-zero $\tau$-lepton asymmetry $\eta_{L_\tau}$. This
$L_\tau$-excess is created before sphalerons sharply freeze out,
i.e.~for temperatures $T \stackrel{>}{{}_\sim} T_{\rm sph} \approx
78$~GeV ($z \stackrel{<}{{}_\sim} 1.7$). Consequently, in the thermal
evolution of the Universe, there is a sufficiently long interval
$78~{\rm GeV} \stackrel{<}{{}_\sim} T \stackrel{<}{{}_\sim} 133~{\rm
GeV}$, where a leptonic asymmetry can be converted into the observed
BAU for our scenarios with spontaneous lepton-number violation at the
electroweak scale.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{mNall.eps}
\end{center}
\caption{\it Numerical estimates of $\eta_B$ versus $z = T_c/T$ for
$m_N = 120$~GeV (dashed), 100~GeV (solid), 80~GeV (dash-dotted),
50~GeV (dotted). The meaning of the horizontal grey line is the same
as in Fig.~\ref{fig:mN100}.}
\label{fig:mNall}
\end{figure}
Figure~\ref{fig:mNall} exhibits the dependence of the baryon-to-photon
ratio $\eta_B$ on $z = T_c/T$ for different values of the heavy
Majorana mass scale $m_N$. We notice that the lighter the heavy
neutrinos are, the smaller the created baryon asymmetry is. For
example, for heavy-neutrino masses $m_N \sim 80$~(50)~GeV, $\eta_B$
falls short almost by one order~(two orders) of magnitude with respect
to the observed BAU $\eta^{\rm obs}_B$. This is a generic feature of
our electroweak RL scenarios based on large wash-out effects due to
the relatively large Dirac-neutrino Yukawa couplings
$(m_D)_{i\alpha}/v$. If the heavy neutrinos have masses $m_N <
90$~GeV, their number densities will start decreasing for $T < m_N$,
potentially creating a net lepton asymmetry that can be converted into
$\eta^{\rm obs}_B$. However, this should happen above the freeze-out
temperature $T_{\rm sph} \approx 78$~GeV of sphalerons. Thus,
successful electroweak RL requires that $m_N > T_{\rm
sph}$.\footnote{Recently, a different leptogenesis scenario with $m_N
\ll T_{\rm sph}$ was studied in~\cite{Misha}, where the BAU is
generated by sterile-neutrino oscillations. Such a realisation relies
on the assumption that the oscillating sterile neutrinos start
evolving from a coherent state and retain their coherent nature within
the thermal plasma of the expanding Universe. In the singlet Majoron
model we have been studying here however, $t$-channel $2
\leftrightarrow 2$ scattering processes, such as $JJ \leftrightarrow
\nu^C_{\alpha R} \nu_{\alpha R}$, that occur before the EWPT ($T
\stackrel{>}{{}_\sim} T_c$) are strong, with rates ${\cal
O}[\rho^4_{\alpha \alpha} T/(8\pi)] \gg H(T)$, for Higgs-singlet
Yukawa couplings $\rho_{\alpha\alpha} \sim 1$. They can therefore
lead to rapid thermalisation and loss of coherence of the massless
right-handed neutrinos. Shortly after the EWPT, for $z = T_c
/T\stackrel{>}{{}_\sim} 1.1$, one has $\Gamma_{N_{1,2}}/H \sim
10^9$--$10^{10}$ and $\Gamma_{N_3}/H \sim 1$--10, which again gives
rise to an almost instant thermalisation of all the heavy neutrino
mass eigenstates $N_{1,2,3}$.}
Finally, it is important to comment on the last condition $m_N >
T_{\rm sph}$ for scenarios with $T^l_c \neq T_c$. This condition will
still be valid, as long as $T^l_c > T_{\rm sph}$. However, for
scenarios with $T^l_c \stackrel{<}{{}_\sim} T_c$, the predicted BAU
$\eta_B$ will sensitively depend on the initial values~$\eta^{\rm
in}_{L_{e,\mu,\tau}}$ and $\eta^{\rm in}_B$ at $T = T_c$. Instead, if
$T^l_c \gg T_c$ and $m_N \stackrel{>}{{}_\sim} 90$~GeV, the
predictions for the BAU will remain almost unaffected, even if
$\eta^{\rm in}_B \sim 10^2\, \eta^{\rm obs}_B$ at $T = 10\,
T_c$~\cite{PU2}.
\setcounter{equation}{0}
\section{Astrophysical and Phenomenological Implications}\label{sec:pheno}
It is interesting to discuss the implications of the singlet Majoron
model for astrophysics and low-energy phenomenology. To quantify the
effects of heavy Majorana neutrinos, we define the new-physics
parameters
\begin{equation}
\label{Omega}
\Omega_{ll'}\ =\ \delta_{ll'}\: -\: B^*_{lk}\,B_{l'k}\ =\
B^*_{l\alpha}\,B_{l'\alpha}\ ,
\end{equation}
where $l,\, l' = e,\, \mu ,\, \tau$. Evidently, in the absence of
light-to-heavy neutrino mixings, the parameters $\Omega_{ll'}$
vanish. LEP and low-energy electroweak data put severe limits on the
diagonal parameters $\Omega_{ll}$~\cite{Ofit}:
\begin{equation}
\label{Odiag}
\Omega_{ee}\ \leq\ 0.012\,,\qquad
\Omega_{\mu\mu}\ \leq\ 0.0096\,,\qquad
\Omega_{\tau\tau}\ \leq\ 0.016\, ,
\end{equation}
at the 90\% CL. On the other hand, lepton-flavour-violating (LFV)
decays, such as $\mu \to e\gamma$~\cite{CL}, $\mu \to eee$, $\tau \to
e\gamma$, $\tau \to eee$, $\mu \to e$ conversion in
nuclei~\cite{IP,LFVrev} and $Z \to ll'$~\cite{KPS}, constrain the
off-diagonal parameters $\Omega_{ll'}$, with $l\neq l'$. The derived
constraints strongly depend on the heavy neutrino masses
$m_{N_\alpha}$ and the size of the Dirac masses $(m_D)_{l\alpha}$.
However, for models relevant to leptogenesis, with $(m_D)_{l\alpha}
\ll M_W$~\cite{IP}, we obtain the following limits:
\begin{equation}
\label{Ooff}
|\Omega_{e\mu}|\ \stackrel{<}{{}_\sim}\ 0.0001\,,\qquad
|\Omega_{e\tau}|\ \stackrel{<}{{}_\sim}\ 0.02\,,\qquad
|\Omega_{\mu\tau}|\ \stackrel{<}{{}_\sim}\ 0.02\, ,
\end{equation}
including the recent BaBar data on LFV $\tau$ decays~\cite{Babar}.
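For concreteness, the definition~(\ref{Omega}) can be coded directly; the sketch below (ours) takes as input the $3\times 3$ light--light block $B_{lk}$ of the neutrino mixing matrix, so that $\Omega$ vanishes for exactly unitary mixing:

```python
def omega(B_light):
    """Omega_{ll'} = delta_{ll'} - sum_k B*_{lk} B_{l'k}, eq. (Omega);
    B_light is the 3x3 light-light mixing block (rows l = e, mu, tau).
    Returns a 3x3 nested list."""
    n = 3
    out = [[0j] * n for _ in range(n)]
    for l in range(n):
        for lp in range(n):
            s = sum(B_light[l][k].conjugate() * B_light[lp][k]
                    for k in range(n))
            out[l][lp] = (1.0 if l == lp else 0.0) - s
    return out
```

A deviation of the $e$-row from unit norm by $\epsilon$ produces $\Omega_{ee} = \epsilon$, which is precisely what the limits~(\ref{Odiag}) constrain.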
The predictions for LFV decays in models of resonant leptogenesis have
been extensively discussed in~\cite{PU2}. Since our results obtained
in Section~\ref{sec:EWPT} agree well with this earlier analysis, we
will not repeat the details of this study here; we only
reiterate the fact that successful electroweak RL requires that $m_N
\stackrel{>}{{}_\sim} 100~{\rm GeV}$. This latter constraint gives
rise to the following upper limits:
\begin{equation}
\label{Olepto}
\Omega_{ee}\ \stackrel{<}{{}_\sim}\ 2.2\times 10^{-4}\;,\qquad
|\Omega_{e\mu}|\ \stackrel{<}{{}_\sim}\ 8.3\times 10^{-5}\; ,\qquad
\Omega_{\mu\mu}\ \stackrel{<}{{}_\sim}\ 3.1\times 10^{-5}\; ,
\end{equation}
whereas all remaining parameters $\Omega_{ll'}$ are ${\cal
O}(10^{-8})$ and so unobservably small. All these limits are deduced
by using the model parameters of~Table~\ref{Model}.
\begin{figure}
\begin{center}
\begin{picture}(360,100)(0,0)
\SetWidth{0.8}
\ArrowLine(0,60)(20,60)\ArrowLine(60,60)(80,60)
\Photon(20,60)(60,60){2}{4}\Text(40,67)[b]{$W^-$}
\ArrowLine(20,60)(40,40)\ArrowLine(40,40)(60,60)
\Text(30,40)[r]{$N_\alpha$}\Text(55,40)[l]{$N_\beta$}
\DashLine(40,40)(40,20){3}
\Text(0,65)[b]{$l$}\Text(80,65)[b]{$l'$}
\Text(45,20)[l]{$J$}
\Text(40,0)[]{\bf (a)}
\ArrowLine(140,60)(160,60)\ArrowLine(200,60)(220,60)
\DashArrowLine(160,60)(200,60){3}\Text(180,67)[b]{$G^-$}
\ArrowLine(160,60)(180,40)\ArrowLine(180,40)(200,60)
\Text(170,40)[r]{$N_\alpha$}\Text(195,40)[l]{$N_\beta$}
\DashLine(180,40)(180,20){3}
\Text(140,65)[b]{$l$}\Text(220,65)[b]{$l'$}
\Text(185,20)[l]{$J$}
\Text(180,0)[]{\bf (b)}
\ArrowLine(260,80)(300,80)\ArrowLine(300,80)(340,80)
\Photon(300,80)(300,60){2}{2}\Text(306,70)[l]{$Z$}
\ArrowArc(300,50)(10,90,270)\ArrowArc(300,50)(10,-90,90)
\Text(288,50)[r]{$N_\alpha$}\Text(315,50)[l]{$N_\beta$}
\DashLine(300,40)(300,20){3}
\Text(260,85)[b]{$l,q$}\Text(340,85)[b]{$l,q$}
\Text(305,20)[l]{$J$}
\Text(300,0)[]{\bf (c)}
\end{picture}\\[0.7cm]
\end{center}
\caption{\it Loop-induced couplings of the Majoron to charged leptons $l,\,
l'$ and quarks $q$.}\label{fig:J}
\end{figure}
In the singlet Majoron model under study, there are additional LFV
decays for the muon and the tau-lepton that involve the Majoron,
i.e.~$\mu \to J e$, $\tau \to J e$ and $\tau \to J \mu$. As shown in
Fig.~\ref{fig:J}, these LFV decays are induced by heavy Majorana
neutrinos at the 1-loop level. Detailed analytic expressions for the
loop-induced couplings $Jll'$ and $Jqq$, where $q$ is a quark, may be
found in~\cite{APMaj}. To leading order in $\Omega_{ll'}$, the
prediction for the LFV decay $l^- \to l'^-J$ is
\begin{equation}
R (l \to l' J)\ \equiv\
\frac{\Gamma (l^- \to l'^- J)}{\Gamma (l^- \to l'^- \nu_l \bar{\nu}_{l'})}\
=\ \frac{3\alpha_w}{8\pi}\ t^2_\beta\, |\Omega_{ll'}|^2\;
\frac{M^2_W}{m^2_l}\; \frac{\lambda^4_N}{(1 - \lambda_N )^2}\;
\Bigg( 1\: +\: \frac{\ln \lambda_N}{1-\lambda_N}\Bigg)^2\; ,
\end{equation}
where $\lambda_N = m^2_N/M^2_W$. For $\lambda_N = 1$, the prediction
for the observable $R (l \to l' J)$ takes on the simpler form:
\begin{equation}
\label{Robs}
R (l \to l' J)\ =\ \frac{3\alpha_w}{32\pi}\ t^2_\beta\, |\Omega_{ll'}|^2\;
\frac{M^2_W}{m^2_l}\; .
\end{equation}
The requirement for successful electroweak RL, i.e.~$m_N
\stackrel{>}{{}_\sim} 100$~GeV, gets translated into the following
upper bounds:
\begin{equation}
\label{Rtheory}
R ( \mu \to eJ)\ \stackrel{<}{{}_\sim}\ 2.7\times 10^{-6}\;,\quad
R ( \tau \to eJ)\ \stackrel{<}{{}_\sim}\ 4.6\times 10^{-14}\;,\quad
R ( \tau \to \mu J)\ \stackrel{<}{{}_\sim}\ 6.7\times 10^{-15}\;.
\end{equation}
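These bounds follow from the $R(l \to l' J)$ formula above; a Python sketch (ours, with illustrative $\alpha_w \approx 0.034$ and $M_W = 80.4$~GeV) reproduces both the $\lambda_N \to 1$ limit of the loop kernel and the order of magnitude of the $\mu \to e J$ bound:

```python
import math

ALPHA_W = 0.0338   # illustrative SU(2)_L fine structure constant

def kernel(lam):
    """Loop kernel lam^4/(1-lam)^2 * (1 + ln(lam)/(1-lam))^2.
    Singular to evaluate at lam = 1 exactly; its limit there is 1/4."""
    u = 1.0 - lam
    return lam ** 4 / u ** 2 * (1.0 + math.log(lam) / u) ** 2

def R_lfv(omega_llp, m_l, lam_N,
          tan_beta=1.0 / math.sqrt(2.0), M_W=80.4):
    """R(l -> l' J) = 3 alpha_w/(8 pi) t_beta^2 |Omega_ll'|^2
    (M_W/m_l)^2 * kernel(lam_N), with lam_N = m_N^2 / M_W^2."""
    return (3.0 * ALPHA_W / (8.0 * math.pi) * tan_beta ** 2
            * abs(omega_llp) ** 2 * (M_W / m_l) ** 2 * kernel(lam_N))
```

With $|\Omega_{e\mu}| = 8.3\times 10^{-5}$, $m_\mu \approx 0.106$~GeV and $\lambda_N$ just above~1, this gives $R(\mu \to e J) \approx 2\times 10^{-6}$, of the same order as the bound in~(\ref{Rtheory}).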
On the experimental side, however, the following upper limits are quoted:
\begin{eqnarray}
R(\mu \to e J) \!&\leq&\! 2.6\times 10^{-6}\,,\quad
\mbox{at 90\% CL~\cite{AJexp};}\nonumber\\
R(\tau \to e J)\!&\leq&\! 1.5\times 10^{-2}\,,\quad
\mbox{at 95\% CL~\cite{ARGUS};}\\
R(\tau \to \mu J)\! &\leq &\! 2.6\times 10^{-2}\,,\quad
\mbox{at 95\% CL~\cite{ARGUS}}.\nonumber
\end{eqnarray}
It is interesting to remark that the predicted value for $R ( \mu \to
eJ)$ is close to the present experimental sensitivity, whereas the
other decay modes turn out to be very suppressed for the given RL
model with inverted light-neutrino hierarchy. Had we chosen a model
with normal hierarchy, the decay rates $R(\tau \to e J)$ and $R(\tau
\to \mu J)$ would have been enhanced by a factor $\sim 10^8$, but they
would still be too small, ${\cal O}(10^{-6})$, to be observed; the
predictions generally lie 4 orders of magnitude below the current
experimental upper bounds.
Useful constraints on the parameters of the theory are obtained from
astrophysics as well~\cite{GGR}. Specifically, observational evidence
of cooling rates of white dwarfs implies that the interaction of the
Majoron with electrons, $g_{Jee} J \bar{e} i\gamma_5 e$, should be
sufficiently weak and the coupling $g_{Jee}$ must obey the approximate
upper bound~\cite{Astro}:
\begin{equation}
\label{gJee}
|g_{Jee}|\ \stackrel{<}{{}_\sim}\ 10^{-12}\ .
\end{equation}
The above limit gets further consolidated by considerations of the
helium ignition process in red giants, leading to the excluded range:
$3\times 10^{-13}\, \stackrel{<}{{}_\sim}\, |g_{Jee}|\,
\stackrel{<}{{}_\sim}\, 6\times 10^{-7}$. To leading order in
$\Omega_{ll}$, the loop-induced coupling $g_{Jee}$ is given
by~\cite{APMaj}:
\begin{equation}
\label{Jee}
g_{Jee}\ =\ \frac{g_w\,\alpha_w}{16\pi}\ \frac{m_e}{M_W}\ t_\beta\
\lambda_N\,
\Bigg[\, \Omega_{ee}\ \frac{\lambda_N}{1\, -\, \lambda_N}\
\Bigg(\, 1\: +\: \frac{\ln\lambda_N}{1\, -\, \lambda_N}\,\Bigg)\: +\:
\frac{1}{2}\; \sum\limits_{l=e,\mu,\tau} \Omega_{ll}\, \Bigg]\;.
\end{equation}
If $\lambda_N\gg 1$, the expression for the coupling $g_{Jee}$
simplifies to
\begin{equation}
\label{approxJee}
g_{Jee}\ =\ \frac{g_w\,\alpha_w}{32\pi}\ \frac{m_e}{M_W}\ t_\beta\
\lambda_N\,
\Big(\, \Omega_{\mu\mu}\: +\: \Omega_{\tau\tau}\: -\:
\Omega_{ee}\,\Big)\; ,
\end{equation}
whilst for $\lambda_N = 1$, $g_{Jee}$ becomes
\begin{equation}
\label{Jeelambda}
g_{Jee}\ =\ \frac{g_w\,\alpha_w}{32\pi}\ \frac{m_e}{M_W}\ t_\beta\
\Big(\, \Omega_{\mu\mu}\: +\: \Omega_{\tau\tau}\,\Big)\; .
\end{equation}
Given the limits~(\ref{Olepto}) for successful RL, we can estimate that
\begin{equation}
g_{Jee}\ \stackrel{<}{{}_\sim}\ -3.3\times 10^{-17}\; ,
\end{equation}
which passes comfortably the astrophysical constraint given
in~(\ref{gJee}).
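Numerically, the $\lambda_N = 1$ expression~(\ref{Jeelambda}) combined with the leptogenesis limits~(\ref{Olepto}) indeed lands far below~(\ref{gJee}); the following sketch is ours, with illustrative $g_w$, $M_W$ and $m_e$ values (the sharper estimate quoted above uses the model's actual $\Omega$ values rather than their upper limits):

```python
import math

def g_Jee_lam1(om_mumu, om_tautau,
               tan_beta=1.0 / math.sqrt(2.0), g_w=0.652,
               M_W=80.4, m_e=0.511e-3):
    """lambda_N = 1 coupling of eq. (Jeelambda):
    g_Jee = g_w alpha_w/(32 pi) (m_e/M_W) t_beta (Om_mumu + Om_tautau)."""
    alpha_w = g_w ** 2 / (4.0 * math.pi)
    return (g_w * alpha_w / (32.0 * math.pi) * (m_e / M_W)
            * tan_beta * (om_mumu + om_tautau))
```

Even saturating the limits of~(\ref{Olepto}), the coupling stays several orders of magnitude below the white-dwarf bound of~(\ref{gJee}).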
Useful astrophysical constraints may also be obtained from
considerations of the cooling rate of neutron stars~\cite{GGR}.
Neutron stars will lose energy by Majoron emission through the
interaction: $g_{J{\cal N}{\cal N}}\, J\; \overline{\cal N} i\gamma_5
{\cal N}$, where ${\cal N}$ is a nucleon, specifically a neutron. The
observational limit on $g_{J{\cal N}{\cal N}}$ is~\cite{NI}
\begin{equation}
\label{JNN}
g_{J{\cal N}{\cal N}}\ \stackrel{<}{{}_\sim}\ 10^{-9}\ .
\end{equation}
On the other hand, the theoretical prediction for $g_{Jqq}$ at the
quark level is
\begin{equation}
\label{Jqq}
g_{Jqq}\ =\ \frac{g_w\,\alpha_w}{32\pi}\ \frac{m_q}{M_W}\ t_\beta\
\lambda_N\, \Big(\, \Omega_{ee}\: +\: \Omega_{\mu\mu}\: +\:
\Omega_{\tau\tau}\,\Big) \; .
\end{equation}
{}From naive dimensional analysis arguments, one expects that
$g_{J{\cal N}{\cal N}} \sim (m_{\cal N}/m_q) g_{Jqq}$. In this way,
one may estimate that
\begin{equation}
g_{J{\cal N}{\cal N}} \approx 7\times 10^{-10}\; ,
\end{equation}
after taking into consideration the limits stated in~(\ref{Olepto}).
Cosmic microwave background (CMB) data and BBN put stringent limits on
the maximum number of weakly-interacting relativistic degrees of
freedom, such as light neutrinos and Majorons~\cite{BKLMS,IST}. In
particular, the allowed range obtained for the effective number
$N_\nu$ of left-handed neutrino species is $N_\nu =
2.70^{+0.91}_{-1.32}$ at the 68\% CL~\cite{IST}. The upper bound on
$N_\nu$ may naively be translated into an upper limit on $\Delta N_\nu
= N_\nu - 3 = 0.61$ of extra effective neutrino species beyond the 3
SM left-handed neutrinos. The singlet Majoron contributes $\Delta
N_\nu = \big(\frac{1}{2}\times \frac{8}{7}\big)^{4/3} \approx 0.474$, if its
freeze-out or decoupling temperature $T_J$ is equal to the
corresponding one $T_\nu$ of the neutrinos. Although this result does
not pose by itself a serious limitation on the singlet Majoron model,
it can be estimated, however, that $T_J \gg T_\nu \approx 1$~MeV and
the contribution of $J$ to $\Delta N_\nu$ becomes even more
suppressed. Specifically, the freeze-out temperature $T_J$ is
determined when the annihilation rate of Majorons through the process
$JJ \to \nu\nu$ becomes smaller than the Hubble expansion rate $H(T)$
of the Universe. The annihilation process $JJ \to \nu\nu$ is mediated
by the $H$ and $S$ bosons in the $s$-channel and by the heavy
neutrinos $N_{1,2,3}$ in the $t$-channel. Considering the latter
reactions only, one may naively estimate that
\begin{equation}
\label{TJ}
\frac{T_J}{T_\nu}\ \sim\ \Bigg(\frac{G^2_F\, m^4_N}{\Omega_{ee}^2\,
t^4_\beta}\Bigg)^{1/3}\ \sim\ 10^2\, \mbox{--}\, 10^3\; .
\end{equation}
A similar value for $T_J/T_\nu$ is obtained if the $S,H$-boson
exchange processes are used for the model parameters of
Table~\ref{Model}. Thus, the freeze-out temperature $T_J$ lies in the
range $0.1$--1~GeV, i.e.~around the quark-hadron deconfinement phase transition.
In this epoch of the Universe, the effective number of relativistic
degrees of freedom is $g_*(T_J) \approx 66$. Then, the actual
contribution of the Majoron to $\Delta N_\nu$ is reduced with respect
to the $T_J = T_\nu$ case by a factor $(g_*(T_\nu)/g_*(T_J))^{4/3}
\approx 0.016$ to the value $\Delta N_\nu \approx 0.008$, which is far
below the present and future observational sensitivity~\cite{IST}.
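The bookkeeping of the last two steps can be condensed into one line (our sketch; the dilution factor $\approx 0.016$ corresponds to the $g_*$ values quoted in the text):

```python
def delta_N_nu(dilution=1.0):
    """Majoron contribution to the effective neutrino number:
    (1/2 * 8/7)^(4/3) ~ 0.474 for T_J = T_nu, multiplied by the
    dilution factor (g_*(T_nu)/g_*(T_J))^(4/3) for earlier freeze-out."""
    return (4.0 / 7.0) ** (4.0 / 3.0) * dilution
```

Here `delta_N_nu()` recovers the undiluted 0.474 and `delta_N_nu(0.016)` the final estimate of roughly 0.008.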
Finally, singlet Majorons $J$ and singlet scalars $S$ may also give
rise to interesting collider phenomenology~\cite{ASJ} through the
singlet-doublet mixing parameter $\delta$ in the scalar
potential~(\ref{LV}). However, since $\delta \ll \lambda_\Phi$
(cf.~Table~\ref{Model}), the singlet Majoron scenario under study
predicts a rather small mixing angle $s_\theta \approx -0.1$. The
production cross section of $S$, via the process $e^+e^-\to Z S$, is
then suppressed with respect to the SM one by a factor $s^2_\theta
\approx 0.01$. Moreover, the so-produced Higgs singlets may decay
quasi-invisibly into a pair of Majorons $J$, which makes it difficult to
fully rule out such a scenario by LEP2 data or at the LHC. Future
high-energy $e^+e^-$ colliders of higher luminosity will severely
constrain the allowed parameter space of this singlet Majoron model.
\setcounter{equation}{0}
\section{Conclusions}\label{sec:concl}
The origin of CP violation in nature still remains an open physics
question. If CP violation originates from the SSB of the SM gauge
group, the original scenario~\cite{FY} of GUT-scale leptogenesis will
be excluded. Similar will be the fate of all high-scale leptogenesis
models, if the source of lepton-number violation is due to the SSB of
a global U(1)$_l$ symmetry at the electroweak scale. In this paper we
have shown how resonant leptogenesis at the EWPT constitutes a
realistic alternative for successful baryogenesis in models with
spontaneous lepton-number violation. Specifically, we have considered
a minimal extension of the SM, the singlet Majoron model, which
includes right-handed neutrinos and a complex singlet field that
carries a non-zero lepton number. Depending on the form of the scalar
potential, the lepton number can get broken spontaneously through the
VEV of the SM Higgs doublet. Taking into consideration the Boltzmann
dynamics of sphaleron effects, we have analysed the BAU for different
values of the Majorana mass scale $m_N$ within the context of a
benchmark scenario whose model parameters are given in
Table~\ref{Model}. The generic constraint from having successful
electroweak RL is that $m_N \stackrel{>}{{}_\sim} T_{\rm sph}$, where
$T_{\rm sph} \approx 78$~GeV is the freeze-out temperature of the
sphalerons.
The singlet Majoron model predicts a massless Goldstone particle, the
Majoron $J$. The Majoron can be produced via the LFV decays, $\mu \to
J e$, $\tau \to J \mu$ and $\tau \to J e$. Considering the
constraints from successful electroweak RL and the astrophysical
limits derived from the cooling rate of neutron stars, we have found
that the decay mode $\mu \to J e$ is the most promising channel, with
a sizeable branching fraction that can be searched for in the next
round of low-energy experiments.
The predictions obtained for the BAU in this study are limited by the
approximations that are inherent in the calculation of the
non-perturbative sphaleron dynamics. The predicted values should be
regarded as order-of-magnitude estimates, since the $(B+L)$-violating
sphaleron transitions crucially depend on the parameter $\kappa$ that
varies by a factor of 10 or so. It would therefore be very valuable
to go beyond the current approximation methods and improve the
computation of the out-of-equilibrium sphaleron dynamics during a
second-order electroweak phase transition.
\bigskip
\subsection*{Acknowledgements}
I thank Roger Barlow, George Lafferty and Olga Igonkina for
discussions concerning the current status of Majoron searches, and
Frank Deppisch for a critical reading of the manuscript. This work
was supported in part by the STFC research grant: PP/D000157/1.
\newpage
\def\theequation{\Alph{section}.\arabic{equation}}
\newpage
\section{Interpretations and Constraints}
We now show that our theory suggests a light speed anisotropy with
respect to the azimuthal angle in an ``absolute'' reference frame.
To understand the azimuthal distribution of the GRAAL data, let us
consider two simple
forms of $\Delta^{\alpha \beta}$.
One is
\begin{equation}\label{DeltaStrain}
\Delta^{\alpha \beta}=\xi m^{\alpha} n^{\beta},
\end{equation}
where $m$ and $n$ are two unit vectors in the space-time and $\xi$
measures the magnitude of LV. When $n$ and $m$ are parallel,
$\Delta^{\alpha \beta}$ of Eq.~(\ref{DeltaStrain}) represents a
strain along the direction $n$ in the space-time.
This case of the LVM can help us to check whether there is a
preferred direction $n$ in the space-time. When $n$ and $m$ are
orthogonal, Eq.~(\ref{DeltaStrain}) represents a shear in the plane
spanned by the two vectors $m$ and $n$~\cite{Hestenes86}. Another
useful parameterization for $\Delta^{\alpha \beta}$ is
\begin{equation}\label{DeltaTrans}
\Delta^{\alpha \beta}=\lambda k^{\alpha}, \quad k^2=\pm1,
\end{equation}
which represents a translation along a direction $k$; here $\lambda$
also measures the magnitude of LV. Timelike unit vectors can be
parameterized as $(\cosh{\zeta}$,
$\sinh{\zeta}\sin{\theta}\cos{\phi}$,
$\sinh{\zeta}\sin{\theta}\sin{\phi}$, $\sinh{\zeta}\cos{\theta})$,
while spacelike ones as $(\sinh{\zeta}$,
$\cosh{\zeta}\sin{\theta}\cos{\phi}$,
$\cosh{\zeta}\sin{\theta}\sin{\phi}$, $\cosh{\zeta}\cos{\theta})$,
where $\zeta$, $\theta$ and $\phi$ are three variables to
parameterize the unit vectors.
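As a quick consistency check on these parameterizations, the following Python sketch (illustrative only; it assumes the metric signature $(+,-,-,-)$, which is not stated explicitly above) verifies that the timelike family satisfies $k^2=+1$ and the spacelike family $k^2=-1$:

```python
import math

def minkowski_sq(k):
    # k^2 = k_0^2 - k_1^2 - k_2^2 - k_3^2, assuming signature (+,-,-,-)
    return k[0]**2 - k[1]**2 - k[2]**2 - k[3]**2

def timelike(zeta, theta, phi):
    # (cosh z, sinh z sin t cos p, sinh z sin t sin p, sinh z cos t)
    return (math.cosh(zeta),
            math.sinh(zeta) * math.sin(theta) * math.cos(phi),
            math.sinh(zeta) * math.sin(theta) * math.sin(phi),
            math.sinh(zeta) * math.cos(theta))

def spacelike(zeta, theta, phi):
    # (sinh z, cosh z sin t cos p, cosh z sin t sin p, cosh z cos t)
    return (math.sinh(zeta),
            math.cosh(zeta) * math.sin(theta) * math.cos(phi),
            math.cosh(zeta) * math.sin(theta) * math.sin(phi),
            math.cosh(zeta) * math.cos(theta))
```

For any $(\zeta,\theta,\phi)$, the identity $\cosh^2\zeta-\sinh^2\zeta=1$ makes the two Minkowski norms $+1$ and $-1$, respectively.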
Now, we assume that there is a preferred direction $n=m$ for the
space-time. For the sake of generality, we can take this direction
as the $x$-axis, i.e., $\zeta=0$, $\theta=\pi/2$ and $\phi=0$. So
Eq.~(\ref{DeltaStrain}) reads $\Delta^{\alpha
\beta}=\mathrm{diag}(0,\xi,0,0)$, which is substituted into
Eq.~(\ref{det}), and then we can obtain the two physical
solutions for the light speed $c_{\gamma i}$:
\begin{align*}
c_{\gamma1}&=\sqrt{1-(2\xi-\xi^2)\sin^2{\theta}\cos^2{\phi}},\\
c_{\gamma2}&=\sqrt{1-2\xi\sin^2{\theta}\cos^2{\phi}}.
\end{align*}
Neglecting
the higher powers of $\xi$, we can get $\delta c_{\gamma
a}/c_\gamma\equiv|c_{\gamma \mathrm{max}}-c_{\gamma
\mathrm{min}}|/c_\gamma\propto |\xi|$ and $\delta c_{\gamma
m}/c_\gamma\equiv|c_{\gamma1}-c_{\gamma2}|/c_\gamma\propto \xi^2.$
$\delta c_{\gamma a}$ and $\delta c_{\gamma m}$ represent the
differences resulting from the angular distribution and the mode
differences respectively. So we find two interesting results: i) The
light speed difference between two modes is proportional to the
square of the element of the LVM, i.e. $\delta c_{\gamma
m}/c_\gamma \propto \xi^2$; ii) For each mode, the light speed may
be direction dependent, and this anisotropy is linearly proportional
to the element of the LVM, i.e. $\delta c_{\gamma a}/c_\gamma\propto
|\xi|$. When $\xi=0$, the light speed $c_{\gamma i}$ equals the
constant $c=1$, and the angular distribution of $c_{\gamma}$ is a
sphere of radius 1 in space. But it becomes direction dependent
for $\xi \ne 0$. Along the direction $n$, the light speed decreases
($\xi>0$) or increases ($\xi<0$), and the distribution for
$c_{\gamma}$ is not spherical any more.
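These two scalings can be verified numerically. The sketch below is only an illustration (the value of $\xi$ is an arbitrary test value, not a fit result): it evaluates the two mode speeds for $\Delta^{\alpha\beta}=\mathrm{diag}(0,\xi,0,0)$ and exhibits $\delta c_{\gamma a}\propto|\xi|$ and $\delta c_{\gamma m}\propto\xi^2$.

```python
import math

def mode_speeds(xi, theta, phi):
    # The two physical light-speed solutions for Delta = diag(0, xi, 0, 0)
    s = math.sin(theta)**2 * math.cos(phi)**2
    c1 = math.sqrt(1 - (2*xi - xi**2) * s)
    c2 = math.sqrt(1 - 2*xi * s)
    return c1, c2

xi = 1e-5                                                  # arbitrary test value
c1_par, c2_par = mode_speeds(xi, math.pi/2, 0.0)           # along the direction n
c1_ort, c2_ort = mode_speeds(xi, math.pi/2, math.pi/2)     # orthogonal to n
delta_a = c2_ort - c2_par      # angular variation of one mode, ~ |xi|
delta_m = c1_par - c2_par      # splitting between the modes, ~ xi**2 / 2
```

Repeating the last two lines with $\xi/10$ shows that `delta_a` shrinks by a factor of about $10$ while `delta_m` shrinks by about $100$, as stated in the text.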
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{fig1.eps}
\caption{$\delta x_{\mathrm{CE}}$ azimuthal distribution vs angles
of the GRAAL data of the years 1998-2005 on a plane ($x$-$y$ plane
or $\theta=\pi/2$). $\xi=-2.89\times10^{-13}$,
$\lambda=6.53\times10^{-14}$.}\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{fig2.eps}
\caption{$\delta x_{\mathrm{CE}}$ azimuthal distribution vs angles
of the GRAAL data of the year 2008 on a plane ($x$-$y$ plane or
$\theta=\pi/2$). $\xi=-3.64\times10^{-13}$,
$\lambda=8.24\times10^{-14}$.} \label{fig2}
\end{figure}
For $\Delta^{\alpha \beta}$, the sum of the two cases above reads
\begin{equation}\label{LVM_GRAAL}
\Delta^{\alpha \beta}=\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & \xi & 0 & 0 \\
\lambda & \lambda & \lambda & \lambda \\
0 & 0 & 0 & 0 \\
\end{array}
\right),
\end{equation}
which means $n=m=(0,1,0,0)$ in Eq.~(\ref{DeltaStrain}) and
$k=(0,0,1,0)$ in Eq.~(\ref{DeltaTrans}). This $\Delta^{\alpha
\beta}$ represents that there is a preferred direction $n=(0,1,0,0)$
and a translation along $k=(0,0,1,0)$ for the space-time, meaning the space-time is
not isotropic. Eq.~(\ref{det}) is so complicated that
we can hardly obtain all eight analytical solutions for the light
speed in Eq.~(\ref{groupSpeed}). We obtain two solutions; one of them
is physical, and its explicit form is lengthy:
\begin{equation}
c_\gamma=\frac{[\sin{\theta}(\sin{\phi}+\cos{\phi})+\cos{\theta}]\lambda^2-\sin{\theta}\sin{\phi}\lambda
-\sqrt{h}}{-1+\lambda^2}\label{aniso3}
\end{equation}
with
\begin{widetext}
\begin{eqnarray*}
h&=&1+[2\sin^2{\theta}(\cos^2{\phi}-\sin{\phi}\cos{\phi}-1)-2\sin{\theta}\cos{\theta}\sin{\phi}]\lambda\\&&
+[\sin^2{\theta}\sin{\phi}(\sin{\phi}+2\cos{\phi})+2\sin{\theta}\cos{\theta}(\sin{\phi}+\cos{\phi})]\lambda^2
+(-1+\lambda^2)(2\xi-\xi^2)\sin^2{\theta}\cos^2{\phi}.
\end{eqnarray*}
\end{widetext}
Finally, Eq. (\ref{xCE0}) becomes
\begin{equation}\label{xCE}
\delta x_{\mathrm{CE}}=-\frac{4A E_0}{m_e}\beta^2\gamma^3(c_{\gamma}-c'),
\end{equation}
where $c'$ is an effective constant and $c'\rightarrow1$. The light
speed $c_{\gamma}$ is the specific form of Eq.~(\ref{aniso3}). Thus
we can compare our theoretical calculations of the light speed
anisotropy with the experimental results presented in
Figs.~\ref{fig1} and \ref{fig2}, in which the solid curves represent
the GRAAL data from Refs.~\cite{Gurzadyan07,Gurzadyan10}. The dashed
curves are the calculated results of Eq.~(\ref{xCE}), obtained by
fitting the experimental curves. The bright-color solid
curves are the calculated results averaged over $90$ degrees, and
they too are fully consistent with the GRAAL data within
error bars. In Fig.~\ref{fig1}, the best fit
parameters are: $\xi=-2.89\times10^{-13}$,
$\lambda=6.53\times10^{-14}$, and in Fig.~\ref{fig2},
$\xi=-3.64\times10^{-13}$, $\lambda=8.24\times10^{-14}$. We also
find that the best fit occurs when $\xi\simeq -4\lambda$. Taking the
average of the two fits, we get $\xi=-3\times10^{-13}$ and
$\lambda=7\times10^{-14}$. So the LVM for photons can be
approximated by
\begin{displaymath}
\Delta^{\alpha \beta}=\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & -3\times10^{-13} & 0 & 0 \\
7\times10^{-14} & 7\times10^{-14} & 7\times10^{-14} & 7\times10^{-14} \\
0 & 0 & 0 & 0 \\
\end{array}
\right)
\end{displaymath}
and $\delta c_{\gamma a}/c_\gamma\simeq10^{-14}$-$10^{-13}$. For
comparison, the constraints on the element $\xi$ of the LVM are
given in Tab.~\ref{tabel_xi2} from some other experimental results on
light speed anisotropy.
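As a numerical illustration (not part of the fit itself), Eq.~(\ref{aniso3}) can be evaluated on the plane $\theta=\pi/2$ with the averaged values $\xi=-3\times10^{-13}$ and $\lambda=7\times10^{-14}$; the quadratic coefficient of $h$ is read here as $\lambda^2$, which is an assumption where the printed exponent is ambiguous. The resulting azimuthal variation of $c_\gamma$ is indeed of order $10^{-13}$:

```python
import math

def c_gamma(xi, lam, theta, phi):
    # Physical light-speed solution of Eq. (aniso3); the lambda**2
    # coefficient of h is an assumed reading of the printed formula.
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    h = (1
         + (2*st**2*(cp**2 - sp*cp - 1) - 2*st*ct*sp) * lam
         + (st**2*sp*(sp + 2*cp) + 2*st*ct*(sp + cp)) * lam**2
         + (lam**2 - 1) * (2*xi - xi**2) * st**2 * cp**2)
    num = (st*(sp + cp) + ct) * lam**2 - st*sp*lam - math.sqrt(h)
    return num / (lam**2 - 1)

xi, lam = -3e-13, 7e-14                     # averaged best-fit values
speeds = [c_gamma(xi, lam, math.pi/2, 2*math.pi*k/360) for k in range(360)]
delta_a = max(speeds) - min(speeds)         # azimuthal anisotropy, ~ 1e-13
```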
\begin{table}
\centering
\caption{Constraints on the element $\xi$ of the LVM from some light speed anisotropy experiments.}\label{tabel_xi2}
\begin{tabular}{c|cc}
\hline \hline
Experiment & $\delta c_{\gamma a}/c_\gamma$, $|\xi|$ & Method \\
\hline
Refs.~\cite{Gurzadyan05,Gurzadyan07} & $3\times 10^{-12}$& one-way \\
Ref.~\cite{Gurzadyan10} & $1.0 \times 10^{-14}$ & one-way\\
Refs.~\cite{Riis88,Bay89} & $3\times 10^{-9}$ & one-way \\
Ref.~\cite{Krisher90} & $3.5\times 10^{-7}$ & one-way\\
Refs.~\cite{Herrmann09,Herrmann05} (cf. \cite{Stanwix06,Stanwix05})& $3\times10^{-17}$ &two-way\\
\hline \hline
\end{tabular}
\end{table}
The GRAAL data shown in Figs.~\ref{fig1} and \ref{fig2} manifest
consistency between the two data-taking periods of the years
1998-2005 and 2008~\cite{Gurzadyan07,Gurzadyan10}, but such data
were not discussed in a recent publication~\cite{Bocquet10}, where
only limits of the order of $10^{-14}$ on parameters related to
light speed anisotropy were reported. We may consider our work as
providing the constraints on the specific photon LVM of
Eq.~(\ref{LVM_GRAAL}) to the order of $10^{-14}$. Though the
regularity revealed by the GRAAL data in Figs.~\ref{fig1} and
\ref{fig2} needs to be further confirmed by future experiments, it
is a surprise that our simple model calculation can successfully
reproduce the azimuthal distribution of the reported GRAAL data in
an elegant manner. Therefore we may also consider our work as giving
an interpretation of the light speed azimuthal distribution reported
in the GRAAL experiment. In our framework, the Lorentz invariance
violation or the space-time anisotropy for the photon is the source
for the light speed anisotropy shown in the GRAAL data.
We thus conclude that we provide a novel explanation for the light
speed anisotropy in the GRAAL experiment, based on a new fundamental
theory of Lorentz invariance violation or space-time anisotropy.
This work not only manifests the elegant application of the new
theory to fit experimental results, but also suggests new chances to
test the theoretical predictions from the new framework and to
constrain the newly introduced Lorentz invariance violation matrix
by future experiments.
{\bf Acknowledgements}: This work is partially supported by National
Natural Science Foundation of China (No.~10721063 and No.~10975003),
by the Key Grant Project of Chinese Ministry of Education
(No.~305001) and by the Research Fund for the Doctoral Program of
Higher Education (China).
\section{Subsumption Algorithm for \ensuremath{{\mathcal{F\!L}_0}}\xspace with General TBoxes}\label{sec:subs}
\newcommand{\I}{\ensuremath{\mathcal{I}}\xspace}
\newcommand{\Y}{\ensuremath{\mathcal{Y}}\xspace}
We define a decision procedure for subsumption of two concepts w.r.t.\@\xspace a TBox, based
on a finite representation of the least functional model that is obtained by
\enquote{applying} GCIs like rules. By Proposition~\ref{prop:normal-form}, it is sufficient to focus on \ensuremath{{\mathcal{F\!L}_0}}\xspace TBoxes in normal form and on subsumption between concept names. We can then use Lemma~\ref{lem:fl0bot} to extend the applicability of our algorithm to \FLbot.
In the remainder of this section, $\ensuremath{\mathcal{T}}\xspace$ denotes an \ensuremath{{\mathcal{F\!L}_0}}\xspace TBox in normal form, and we focus on the task of deciding $A\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B$ for two concept names $A$, $B$ occurring in $\ensuremath{\mathcal{T}}\xspace$.
For the sake of simplicity, we assume in this section that \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace and \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace consist exactly of the concept and role names occurring in $\ensuremath{\mathcal{T}}\xspace$.
In particular, this means that \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace and \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace are finite and their cardinalities are bounded by the size of~$\ensuremath{\mathcal{T}}\xspace$.
The algorithm computes a finite subtree of the tree $\ensuremath{\canonical{A}}\xspace$
such that one can read off the named subsumers (concept names) of $A$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace at
the root.
The finite structure that the algorithm operates on is called \emph{partial functional interpretation}. This is similar to a functional interpretation, except that the domain is a \emph{finite} prefix-closed subset of $\ensuremath{\NR^*}$, that is, a finite tree.
\begin{definition}
{An} interpretation $\ensuremath{\mathcal{Y}}\xspace = (\domainof{\ensuremath{\mathcal{Y}}\xspace},\cdot^\ensuremath{\mathcal{Y}}\xspace)$ is a \emph{partial functional interpretation} iff
$\domainof{\ensuremath{\mathcal{Y}}\xspace} \subseteq \ensuremath{\NR^*}$ is a \emph{finite} prefix-closed set and
$r^\ensuremath{\mathcal{Y}}\xspace = \{ (\sigma,\sigma r) \mid \sigma r \in \domainof{\ensuremath{\mathcal{Y}}\xspace} \}$ for all $r \in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$.
\end{definition}
Note that, as with functional interpretations, the interpretation of the role names is already determined by the domain. Thus, it suffices to give the domain and the interpretation of the concept names to fix a partial functional interpretation.
Informally, the algorithm for deciding $A \ensuremath{\subsumed_{\T}}\xspace B$ proceeds as follows:
it starts with a partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace$ that has $\epsilon$ as only domain element, and for which $A^\ensuremath{\mathcal{Y}}\xspace=\{\epsilon\}$.
In each iteration, a domain element $d$ of the current tree $\ensuremath{\mathcal{Y}}\xspace$ and a single GCI $C\sqsubseteq D$ from $\ensuremath{\mathcal{T}}\xspace$ are chosen such that $d$ matches $C$ and does not match $D$. The tree is then extended so that $d$ matches $D$. The extension can affect both the domain and the interpretation of concept names. The method proceeds in such a way that, for every generated tree $\ensuremath{\mathcal{Y}}\xspace$, the invariant $\ensuremath{\mathcal{Y}}\xspace \subseteq \ensuremath{\canonical{A}}\xspace$ is satisfied. Termination is established by blocking further extensions for duplicate elements. The algorithm terminates if the following holds for every non-blocked element $d$ and every GCI $C\sqsubseteq D$ in $\ensuremath{\mathcal{T}}\xspace$: if $d$ matches $C$, then $d$ also matches $D$. Soundness and completeness are shown by establishing a correspondence between the nodes in the final tree and nodes in the least functional model of $A$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace.
%
%
To describe the procedure more formally, we must define the following notions:
\begin{enumerate}
\item the condition under which a domain element of a partial interpretation matches a concept,
\item the extension of the tree to achieve a match of an element with the right-hand side of a GCI, and
\item the conditions that distinguish blocked from non-blocked elements.
\end{enumerate}
%
To address the first point, we introduce the following auxiliary notions.
%
\begin{definition}\label{def:match}
Let $\ensuremath{\mathcal{Y}}\xspace = (\domainof{\ensuremath{\mathcal{Y}}\xspace},\cdot^\ensuremath{\mathcal{Y}}\xspace)$ be a partial functional interpretation and $D$ a concept in normal form.
The set of elements in $\domainof{\ensuremath{\mathcal{Y}}\xspace}$ that \emph{match} $D$, denoted by $\match{D}{\ensuremath{\mathcal{Y}}\xspace}$, is defined inductively as follows:
\begin{align*}
\match{A}{\ensuremath{\mathcal{Y}}\xspace} :=~& A^\ensuremath{\mathcal{Y}}\xspace \text{ for all } A\in \NC;
\\
\match{\forall r. A}{\ensuremath{\mathcal{Y}}\xspace} :=~& \{ \sigma \in \domainof{\ensuremath{\mathcal{Y}}\xspace} \mid
\sigma r \in A^\ensuremath{\mathcal{Y}}\xspace \}
\text{ for all } r \in \NR \text{ and } A \in \NC;
\\
\match{C_1 \sqcap C_2}{\ensuremath{\mathcal{Y}}\xspace} :=~& \match{C_1}{\ensuremath{\mathcal{Y}}\xspace} \cap \match{C_2}{\ensuremath{\mathcal{Y}}\xspace}.
\smallskip
\end{align*}
\end{definition}
Since $\ensuremath{\mathcal{Y}}\xspace$ is partial functional (i.e.\@\xspace has \emph{at most} one child per node for each role name), it is easy to see that $\sigma \in \match{C}{\ensuremath{\mathcal{Y}}\xspace}$
implies $\sigma \in C^\ensuremath{\mathcal{Y}}\xspace$. The converse need not be true, as $\sigma$ may have no $r$-child in $\domainof{\ensuremath{\mathcal{Y}}\xspace}$.
We say that $\sigma \in \domainof{\ensuremath{\mathcal{Y}}\xspace}$ \emph{violates} the GCI $C \ensuremath{\sqsubseteq}\xspace D$ iff $\sigma \in \match{C}{\ensuremath{\mathcal{Y}}\xspace}$
and $\sigma \notin \match{D}{\ensuremath{\mathcal{Y}}\xspace}$. In this case, $\sigma$ is called an \emph{incomplete element}.
Given a TBox \ensuremath{\mathcal{T}}\xspace in normal form and a partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace$, we define the \emph{set of all incomplete elements} as follows:
\begin{align*}
\incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace} := \{ \sigma \in \domainof{\ensuremath{\mathcal{Y}}\xspace} \mid \text{ there is } C \ensuremath{\sqsubseteq}\xspace D \in \ensuremath{\mathcal{T}}\xspace \text{ such that } \sigma \text{ violates } C \ensuremath{\sqsubseteq}\xspace D \}.
\end{align*}
Intuitively, the elements in $\incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace}$ are those eligible for an extension of $\ensuremath{\mathcal{Y}}\xspace$ towards building a representation of the least functional model, while those in $\domainof{\ensuremath{\mathcal{Y}}\xspace} \setminus \incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace}$ are not.
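These notions translate directly into code. The following Python sketch is illustrative only; the encoding is ours: a domain element $\sigma\in\ensuremath{\NR^*}$ is a tuple of role names (the empty tuple is $\epsilon$), and a normal-form concept is a list of atoms `('name', A)` or `('forall', r, A)`.

```python
def match(concept, dom, ext):
    # Elements of dom matching a conjunction of atoms (Definition of match);
    # ext maps each concept name to its extension, a subset of dom.
    result = set(dom)
    for atom in concept:
        if atom[0] == 'name':                       # concept name A
            result &= ext[atom[1]]
        else:                                       # ('forall', r, A)
            _, r, a = atom
            result &= {s for s in dom if s + (r,) in ext[a]}
    return result

def incomplete(dom, ext, tbox):
    # Elements violating some GCI (C, D): they match C but not D.
    return {s for s in dom for (c, d) in tbox
            if s in match(c, dom, ext) and s not in match(d, dom, ext)}
```

For instance, with domain $\{\epsilon, s\}$, $A$ holding at $\epsilon$, $B$ holding at $s$, and the single GCI $\forall s.B\sqsubseteq L$, the element $\epsilon$ matches the left-hand side but not the right-hand side, so it is incomplete.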
As an additional filter for extensions, we define a blocking condition. First, we introduce auxiliary notions for the blocking mechanism, consisting of the standard notions of prefix and proper prefix, and a strict total order on $\ensuremath{\NR^*}$.
Let $\sigma,\rho \in \ensuremath{\NR^*}$. The \emph{length of an element} $\sigma \in \ensuremath{\NR^*}$ is denoted by $\abs{\sigma}$. We write $\rho \in \prefixset{\sigma}$ if $\sigma = \rho \widehat{\sigma}$
for some $\widehat{\sigma} \in \ensuremath{\NR^*}$,
and $\rho \in \pprefixset{\sigma}$ if $\rho \in \prefixset{\sigma}$ and $\rho \neq \sigma$. In the latter case, $\rho$ is called a
\emph{proper prefix} of $\sigma$.
Let $\prec$ be any total order on $\ensuremath{\NR^*}$ such that $\abs{\sigma}<\abs{\rho}$ implies $\sigma\prec\rho$ for all $\sigma, \rho\in\ensuremath{\NR^*}$.
Since $\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$ is finite, this implies that, for any element of $\sigma\in \ensuremath{\NR^*}$, there are only finitely many elements $\rho$ such that
$\rho \prec\sigma$. In particular, the order $\prec$ is well-founded.
For a (partial) functional interpretation $\ensuremath{\mathcal{Y}}\xspace=(\domainof{\ensuremath{\mathcal{Y}}\xspace},\cdot^\ensuremath{\mathcal{Y}}\xspace)$ and $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$, we define the \emph{label} of $\sigma$ in $\ensuremath{\mathcal{Y}}\xspace$ as
$\ensuremath{\mathcal{Y}}\xspace(\sigma) := \{ A \in \NC \mid \sigma \in A^\ensuremath{\mathcal{Y}}\xspace \}$. The cardinality of $\ensuremath{\mathcal{Y}}\xspace(\sigma)$ is bounded by the size of $\ensuremath{\mathcal{T}}\xspace$, and thus
there can be only exponentially many different such labels.
\begin{definition}\label{def:block}
Let $\ensuremath{\mathcal{Y}}\xspace = (\domainof{\ensuremath{\mathcal{Y}}\xspace},\cdot^\ensuremath{\mathcal{Y}}\xspace)$ be a partial functional interpretation.
The \emph{set of all blocked elements} in $\domainof{\ensuremath{\mathcal{Y}}\xspace}$ is defined
by induction over the well-founded order $\prec$:
\begin{enumerate}
\item[\bRule{1}]
The least element $\epsilon$ is not blocked.
\item[\bRule{2}]
The element $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$ is blocked if
there exists $\omega \in \domainof{\ensuremath{\mathcal{Y}}\xspace}$ with $\omega \prec \sigma$ such that
$\ensuremath{\mathcal{Y}}\xspace(\sigma) = \ensuremath{\mathcal{Y}}\xspace(\omega)$ and $\omega$ is not blocked.
\item[\bRule{3}]
Furthermore, the element $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$ is blocked if there exists $\rho \in \pprefixset{\sigma}$ such that $\rho$ is blocked.
\end{enumerate}
%
Only elements of $\domainof{\ensuremath{\mathcal{Y}}\xspace}$ for which \bRule{2} or \bRule{3} holds can be blocked. All other elements
are \emph{non-blocked} elements, which are collected in the set $\notblocked{\ensuremath{\mathcal{Y}}\xspace}$.
\end{definition}
Condition~\bRule{2} corresponds to \emph{anywhere blocking} in classical tableau algorithms: intuitively, if there are two nodes with the same label, it suffices to reason only on one of them, and the ordering decides which one is used. Condition~\bRule{3} corresponds to \emph{ancestor blocking}: if it is already decided that a node can be ignored, it is not necessary to consider its descendants either.
Nodes blocked due to Condition~\bRule{2}
are called \emph{directly blocked}, while nodes blocked due to Condition~\bRule{3}
are called \emph{indirectly blocked}.
Next, we define what an extension step is. Such a step expands a single non-blocked and incomplete element in a partial functional interpretation.
\begin{definition}
Let $\ensuremath{\mathcal{Y}}\xspace$
be a partial functional interpretation, $\ensuremath{\mathcal{T}}\xspace$ a TBox in normal form, $m,n \geq 0$ and
\begin{align*}
\alpha\ \mbox{a GCI in $\ensuremath{\mathcal{T}}\xspace$ of the form}\ \
\alpha = C \ensuremath{\sqsubseteq}\xspace
\left(A_1 \sqcap \cdots \sqcap A_m \sqcap \forall r_1. B_1 \sqcap \cdots \sqcap \forall r_n. B_n \right).
\end{align*}
In addition, let $\sigma \in \notblocked{\ensuremath{\mathcal{Y}}\xspace} \cap \incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace}$
be a non-blocked, incomplete element in \ensuremath{\mathcal{Y}}\xspace violating $\alpha$.
Then, the \emph{expansion of $\alpha$ at $\sigma$ in $\ensuremath{\mathcal{Y}}\xspace$} is the partial
interpretation $\ensuremath{{\mathcal{Z}}}\xspace$ defined by
\begin{itemize}
\item $\domainof{\ensuremath{{\mathcal{Z}}}\xspace} = \domainof{\ensuremath{\mathcal{Y}}\xspace} \cup \{ \sigma r_1,\ldots,\sigma r_n \}$;
\item $A_i^\ensuremath{{\mathcal{Z}}}\xspace = A_i^\ensuremath{\mathcal{Y}}\xspace \cup \{ \sigma \}$ for all $i = 1,\ldots,m$;
\item $B_i^\ensuremath{{\mathcal{Z}}}\xspace = B_i^\ensuremath{\mathcal{Y}}\xspace \cup \{ \sigma r_j \mid 1\leq j\leq n, B_j=B_i\}$ for all $i = 1,\ldots,n$; and
\item $Q^\ensuremath{{\mathcal{Z}}}\xspace = Q^\ensuremath{\mathcal{Y}}\xspace$ for all $Q \in \NC \setminus \{ A_1,\ldots A_m,B_1,\ldots,B_n \}$.
\end{itemize}
A partial functional interpretation $\ensuremath{{\mathcal{Z}}}\xspace$
is a \emph{$\ensuremath{\mathcal{T}}\xspace$-completion} of $\ensuremath{\mathcal{Y}}\xspace$, written as $\ensuremath{\mathcal{Y}}\xspace \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{{\mathcal{Z}}}\xspace$,
iff $\ensuremath{{\mathcal{Z}}}\xspace$ is an expansion of some $\alpha\in\ensuremath{\mathcal{T}}\xspace$ at some $\sigma'\in\notblocked{\ensuremath{\mathcal{Y}}\xspace}\cap\incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace}$. We denote by $\ensuremath{ \mathbin{\vdash_{\T}} }\xspace^*$ the reflexive transitive closure of $\ensuremath{ \mathbin{\vdash_{\T}} }\xspace$ and call $\ensuremath{{\mathcal{Z}}}\xspace$ with $\ensuremath{\mathcal{Y}}\xspace \ensuremath{ \mathbin{\vdash_{\T}} }\xspace^* \ensuremath{{\mathcal{Z}}}\xspace$ \emph{complete}
if every incomplete element is blocked, i.e., $\notblocked{\ensuremath{{\mathcal{Z}}}\xspace} \cap \incomp{\ensuremath{{\mathcal{Z}}}\xspace,\ensuremath{\mathcal{T}}\xspace} = \emptyset$.
\end{definition}
Depending on the choice of $\sigma$ and the GCI, there can be several \ensuremath{\mathcal{T}}\xspace-completions of~$\ensuremath{\mathcal{Y}}\xspace$. Also note that it is guaranteed that either
$\notblocked{\ensuremath{\mathcal{Y}}\xspace} \cap \incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace} = \emptyset$ or there exists a \ensuremath{\mathcal{T}}\xspace-completion of~$\ensuremath{\mathcal{Y}}\xspace$. Thus, in case a given $\ensuremath{{\mathcal{Z}}}\xspace$ with $\ensuremath{\mathcal{Y}}\xspace \ensuremath{ \mathbin{\vdash_{\T}} }\xspace^* \ensuremath{{\mathcal{Z}}}\xspace$
is not complete, it can be further completed.
Given the input $A_0,B_0 \in \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ and $\ensuremath{\mathcal{T}}\xspace$, the algorithm \SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace} for deciding $A_0 \ensuremath{\subsumed_{\T}}\xspace B_0$ computes a sequence of $\ensuremath{\mathcal{T}}\xspace$-completions until it reaches a complete partial functional interpretation, i.e., one where no non-blocked element violates any GCI from $\ensuremath{\mathcal{T}}\xspace$.
The algorithm starts with the following partial functional interpretation:
\begin{align}\label{eq:inittree}
\domainof{\ensuremath{\mathcal{Y}}\xspace_0} := \{ \epsilon \}; \quad
A_0^{\ensuremath{\mathcal{Y}}\xspace_0} := \{ \epsilon \}
\quad \text{ and } \quad
B^{\ensuremath{\mathcal{Y}}\xspace_0} := \emptyset \text{ for all } B \in \NC \setminus \{ A_0 \},
\end{align}
and computes a sequence
\begin{align*}
\ensuremath{\mathcal{Y}}\xspace_0 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace_1 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \cdots \ensuremath{\mathcal{Y}}\xspace_{(n-1)}\ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace_n
\end{align*}
such that $\ensuremath{\mathcal{Y}}\xspace_n$ is complete
in the sense introduced above.
It answers \enquote{yes} if $B_0 \in \ensuremath{\mathcal{Y}}\xspace_n(\epsilon)$ (or equivalently $\epsilon \in B_0^{\ensuremath{\mathcal{Y}}\xspace_n}$) and \enquote{no} otherwise.
\begin{example}
In this example, we illustrate the completion steps and how the blocking conditions are applied.
Let $\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace = \{ A,B,K,L,M \}$ and $\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace = \{ r,s \}$.
The TBox \ensuremath{\mathcal{T}}\xspace is defined as follows:
\begin{align*}
\ensuremath{\mathcal{T}}\xspace := \{ ~~~~~~~~
A &\ensuremath{\sqsubseteq}\xspace \forall r. A, &A \ensuremath{\sqsubseteq}\xspace~& B, \\
A &\ensuremath{\sqsubseteq}\xspace \forall s. K, & K \ensuremath{\sqsubseteq}\xspace~& \forall s. A, \\
\forall s. B &\ensuremath{\sqsubseteq}\xspace L, &\forall s. L \ensuremath{\sqsubseteq}\xspace~& M ~~~~~~~~ \}.
\end{align*}
One can verify that
$$
A \ensuremath{\subsumed_{\T}}\xspace M.
$$
In fact, the GCIs $A \ensuremath{\sqsubseteq}\xspace \forall s. K$, $K \ensuremath{\sqsubseteq}\xspace \forall s. A$ and $A \ensuremath{\sqsubseteq}\xspace B$ yield $A \ensuremath{\subsumed_{\T}}\xspace \forall s. \forall s. B$.
Using $\forall s. B \ensuremath{\sqsubseteq}\xspace L$ and $\forall s. L \ensuremath{\sqsubseteq}\xspace M$, we then obtain $A \ensuremath{\subsumed_{\T}}\xspace M$.
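The complete procedure, including blocking, can be sketched as runnable Python. This is an illustration under our own encoding choices (domain elements as tuples of role names, atoms as `('name', A)` / `('forall', r, A)` pairs, GCIs as pairs of atom lists, and the length-lexicographic order playing the role of the total order $\prec$), not the authors' implementation. On the example TBox it answers \enquote{yes} for $A \ensuremath{\subsumed_{\T}}\xspace M$.

```python
EPS = ()  # the root element: the empty word over role names

def match(concept, dom, ext):
    # Elements of dom matching a conjunction of atoms (Definition of match).
    result = set(dom)
    for atom in concept:
        if atom[0] == 'name':
            result &= ext[atom[1]]
        else:
            _, r, a = atom
            result &= {s for s in dom if s + (r,) in ext[a]}
    return result

def blocked_set(dom, ext, names):
    # Blocked elements per bRule1-bRule3, processed in the order <.
    label = {s: frozenset(a for a in names if s in ext[a]) for s in dom}
    blocked, seen = set(), set()          # seen = labels of non-blocked elements
    for s in sorted(dom, key=lambda t: (len(t), t)):
        if any(s[:i] in blocked for i in range(len(s))):   # bRule3: blocked prefix
            blocked.add(s)
        elif s != EPS and label[s] in seen:                # bRule2 (bRule1: EPS free)
            blocked.add(s)
        else:
            seen.add(label[s])
    return blocked

def subsumes(a0, b0, tbox, names):
    # Decide a0 subsumed-by b0 w.r.t. tbox via T-completion steps.
    dom, ext = {EPS}, {a: set() for a in names}
    ext[a0].add(EPS)
    while True:
        blocked = blocked_set(dom, ext, names)
        step = next(((s, d) for (c, d) in tbox
                     for s in match(c, dom, ext) - match(d, dom, ext)
                     if s not in blocked), None)
        if step is None:                  # complete: no non-blocked violation left
            return EPS in ext[b0]
        s, d = step
        for atom in d:                    # expansion of the violated GCI at s
            if atom[0] == 'name':
                ext[atom[1]].add(s)
            else:
                _, r, a = atom
                dom.add(s + (r,))
                ext[a].add(s + (r,))
```

The strategy for picking the next violated GCI may differ from the run shown in the figure, but the answer does not depend on this choice.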
%
\begin{figure}
\framebox[0.95\textwidth]{
\centering
\begin{tikzpicture}
\node[treenode] (eps0) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A \} $ \nodepart{third} \phantom{\cmark} };
\node[treenode, below=0.5cm of eps0] (eps1) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A \} $ \nodepart{third} \phantom{\cmark} };
\node[treenode,right=0.2cm of eps1] (r1) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A \} $ \nodepart{third} \xmark };
\node[treenode, below=0.5cm of eps1] (eps2) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \phantom{\cmark} };
\node[treenode,right=0.2cm of eps2] (r2) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode, below=0.5cm of eps2] (eps3) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of eps3] (r3) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of r3] (s3) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode, below=0.5cm of eps3] (eps4) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of eps4] (r4) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of r4] (s4) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of s4] (rr4) {$\boldsymbol{rr}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \xmark };
\node[treenode, below=0.5cm of eps4] (eps5) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of eps5] (r5) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A, B \} $ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of r5] (s5) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of s5] (rr5) {$\boldsymbol{rr}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \xmark };
\node[treenode, below=0.5cm of eps5] (eps6) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of eps6] (r6) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A, B \} $ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of r6] (s6) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of s6] (rr6) {$\boldsymbol{rr}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of rr6] (ss6) {$\boldsymbol{ss}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \phantom{\xmark} };
\node[treenode, below=0.5cm of eps6] (eps7) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of eps7] (r7) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A, B \} $ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of r7] (s7) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of s7] (rr7) {$\boldsymbol{rr}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of rr7] (ss7) {$\boldsymbol{ss}$ \nodepart[text width=1cm]{second} $\{ A,B \}$ \nodepart{third} \xmark };
\node[treenode, below=0.5cm of eps7] (eps8) {$\boldsymbol{\epsilon}$ \nodepart[text width=1cm]{second} $\{ A,B \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of eps8] (r8) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A, B \} $ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of r8] (s8) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K, L \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of s8] (rr8) {$\boldsymbol{rr}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \xmark };
\node[treenode,right=0.2cm of rr8] (ss8) {$\boldsymbol{ss}$ \nodepart[text width=1cm]{second} $\{ A, B \}$ \nodepart{third} \xmark };
\node[treenode, below=0.5cm of eps8] (eps9) {$\boldsymbol{\epsilon}$ \nodepart[text width=1.4cm]{second} $\{ A,B,M \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of eps9] (r9) {$\boldsymbol{r}$ \nodepart[text width=1cm]{second} $\{ A, B \} $ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of r9] (s9) {$\boldsymbol{s}$ \nodepart[text width=1cm]{second} $\{ K, L \} $ \nodepart{third} \cmark };
\node[treenode,right=0.2cm of s9] (rr9) {$\boldsymbol{rr}$ \nodepart[text width=1cm]{second} $\{ A \}$ \nodepart{third} \phantom{\xmark} };
\node[treenode,right=0.2cm of rr9] (ss9) {$\boldsymbol{ss}$ \nodepart[text width=1cm]{second} $\{ A, B \}$ \nodepart{third} \xmark };
\node[blanknode, right=0.4cm of eps0] {$\completionstep{A \ensuremath{\sqsubseteq}\xspace \forall r. A}{\boldsymbol{\epsilon}}$};
\node[blanknode, right=0.4cm of r1] {$\completionstep{A \ensuremath{\sqsubseteq}\xspace B}{\boldsymbol{\epsilon}}$};
\node[blanknode, right=0.4cm of r2] {$\completionstep{A \ensuremath{\sqsubseteq}\xspace \forall s. K}{\boldsymbol{\epsilon}}$};
\node[blanknode, right=0.4cm of s3] {$\completionstep{A \ensuremath{\sqsubseteq}\xspace \forall r. A}{\boldsymbol{r}}$};
\node[blanknode, right=0.4cm of rr4] {$\completionstep{A \ensuremath{\sqsubseteq}\xspace B}{\boldsymbol{r}}$};
\node[blanknode, right=0.4cm of rr5] {$\completionstep{K \ensuremath{\sqsubseteq}\xspace \forall s. A}{\boldsymbol{s}}$};
\node[blanknode, right=0.4cm of ss6] {$\completionstep{A \ensuremath{\sqsubseteq}\xspace B}{\boldsymbol{ss}}$};
\node[blanknode, right=0.4cm of ss7] {$\completionstep{\forall s. B \ensuremath{\sqsubseteq}\xspace L}{\boldsymbol{s}}$};
\node[blanknode, right=0.4cm of ss8] {$\completionstep{\forall s. L \ensuremath{\sqsubseteq}\xspace M}{\boldsymbol{\epsilon}}$};
\node[blanknode,left=0.4cm of eps0] {$\ensuremath{\mathcal{Y}}\xspace_0$};
\node[blanknode,left=0.4cm of eps1] {$\ensuremath{\mathcal{Y}}\xspace_1$};
\node[blanknode,left=0.4cm of eps2] {$\ensuremath{\mathcal{Y}}\xspace_2$};
\node[blanknode,left=0.4cm of eps3] {$\ensuremath{\mathcal{Y}}\xspace_3$};
\node[blanknode,left=0.4cm of eps4] {$\ensuremath{\mathcal{Y}}\xspace_4$};
\node[blanknode,left=0.4cm of eps5] {$\ensuremath{\mathcal{Y}}\xspace_5$};
\node[blanknode,left=0.4cm of eps6] {$\ensuremath{\mathcal{Y}}\xspace_6$};
\node[blanknode,left=0.4cm of eps7] {$\ensuremath{\mathcal{Y}}\xspace_7$};
\node[blanknode,left=0.4cm of eps8] {$\ensuremath{\mathcal{Y}}\xspace_8$};
\node[blanknode,left=0.4cm of eps9] {$\ensuremath{\mathcal{Y}}\xspace_9$};
\end{tikzpicture}
}
\caption{Example run \label{fig:examplerun}}
\end{figure}
We use a total order on $\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace^*$ that satisfies
\begin{align*}
\epsilon \prec r \prec s \prec rr \prec rs \prec sr \prec ss \prec rrr \prec \cdots
\end{align*}
and compute a sequence of completion steps for \SUBSpar{A}{M}{\ensuremath{\mathcal{T}}\xspace} sketched in Figure \ref{fig:examplerun}, where
\begin{itemize}
\item[\xmark] marks blocked elements, and
\item[\cmark] marks non-blocked elements not violating any GCI in \ensuremath{\mathcal{T}}\xspace.
\end{itemize}
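The order used here is the length-lexicographic order on role words: shorter words come first, and words of equal length are compared alphabetically. A minimal Python illustration (the word list is made up; this is not part of the reasoner):

```python
# Length-lexicographic order on role words over N_R = {"r", "s"}:
# compare first by length, then alphabetically within one length.
# This realizes: eps < r < s < rr < rs < sr < ss < rrr < ...

def order_key(word):
    """Sort key for the total order on N_R*."""
    return (len(word), word)

words = ["ss", "", "r", "sr", "rs", "rrr", "s", "rr"]
assert sorted(words, key=order_key) == ["", "r", "s", "rr", "rs", "sr", "ss", "rrr"]
```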
We write $\completionstep{A \ensuremath{\sqsubseteq}\xspace \forall r. A}{\boldsymbol{\epsilon}}$ to denote the completion step that takes
$\boldsymbol{\epsilon}$ as a non-blocked element violating $A \ensuremath{\sqsubseteq}\xspace \forall r. A$ and applies the expansion.
Figure \ref{fig:examplerun} shows the first completion steps needed to obtain $M \in \ensuremath{\mathcal{Y}}\xspace_9(\epsilon)$, which yields $A \ensuremath{\subsumed_{\T}}\xspace M$.
For example, in $\ensuremath{\mathcal{Y}}\xspace_1$ the blocking condition \bRule{2} is used to block the node $\boldsymbol{r}$. In
$\ensuremath{\mathcal{Y}}\xspace_2$, $\boldsymbol{r}$ is no longer blocked since the label of $\boldsymbol{\epsilon}$ has been expanded. In
$\ensuremath{\mathcal{Y}}\xspace_5$, $\boldsymbol{r}$ is blocked again since its label has been expanded, and thus
$\boldsymbol{rr}$ is indirectly blocked due to \bRule{3}. Also note that in $\ensuremath{\mathcal{Y}}\xspace_6$ we have $\boldsymbol{rr} \prec \boldsymbol{ss}$
and both have the same label, but since $\boldsymbol{rr}$ is already blocked, \bRule{2} does not apply to $\boldsymbol{ss}$, which allows
us to do further completion steps needed to derive $A \ensuremath{\subsumed_{\T}}\xspace M$.
\end{example}
Before we prove that the algorithm is sound and complete, we first show that the computed sequence is always finite, thus ensuring termination of the algorithm.
The \emph{depth} of a partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace = (\domainof{\ensuremath{\mathcal{Y}}\xspace},\cdot^\ensuremath{\mathcal{Y}}\xspace)$, denoted by $\depthtreeof{\ensuremath{\mathcal{Y}}\xspace}$, is the maximum length of role words in $\domainof{\ensuremath{\mathcal{Y}}\xspace}$, i.e., $\depthtreeof{\ensuremath{\mathcal{Y}}\xspace} := \mathsf{max}(\{ \abs{\sigma} \mid \sigma \in \domainof{\ensuremath{\mathcal{Y}}\xspace} \})$.
\begin{lemma}\label{lem:depthbound}
If $\ensuremath{{\mathcal{Z}}}\xspace$ is a partial functional interpretation such that $\ensuremath{\mathcal{Y}}\xspace_0 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace^* \ensuremath{{\mathcal{Z}}}\xspace$, then
$
\depthtreeof{\ensuremath{{\mathcal{Z}}}\xspace} \leq 2^\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace} + 1.
$
\end{lemma}
\begin{proof}
Let $\ensuremath{\mathcal{Y}}\xspace_0\ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace_1 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace\ldots\ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace_n=\ensuremath{{\mathcal{Z}}}\xspace$ be a sequence of expansions.
We show for each $i$, $1\leq i\leq n$, that the length of words in $\domainof{\ensuremath{\mathcal{Y}}\xspace_i}$ is bounded by $2^\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace}+1$.
A new element $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace_i}\setminus\domainof{\ensuremath{\mathcal{Y}}\xspace_{i-1}}$ is only added by the expansion at $\sigma$ of $\ensuremath{\mathcal{Y}}\xspace_{i-1}$ if
$\sigma=\omega r$ and $\omega\in\notblocked{\ensuremath{\mathcal{Y}}\xspace_{i-1}}$. Now, $\omega\in\notblocked{\ensuremath{\mathcal{Y}}\xspace_{i-1}}$ is only possible if there exist
no two distinct $\sigma_1,\sigma_2\in\prefixset{\omega}$ such that $\ensuremath{\mathcal{Y}}\xspace_{i-1}(\sigma_1)=\ensuremath{\mathcal{Y}}\xspace_{i-1}(\sigma_2)$.
Otherwise, since either $\sigma_1\prec\sigma_2$ or $\sigma_2\prec\sigma_1$, one of these two nodes would be blocked by blocking condition~\bRule{2},
and $\omega$ would be blocked by condition~\bRule{3}.
It follows that $\ensuremath{\mathcal{Y}}\xspace_{i-1}(\sigma_1)\neq\ensuremath{\mathcal{Y}}\xspace_{i-1}(\sigma_2)$ for every two distinct $\sigma_1$, $\sigma_2\in\prefixset{\omega}$,
and consequently $\abs{\omega}\leq 2^{\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace}}$ and $\abs{\sigma}=\abs{\omega}+1\leq 2^\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace}+1$.
Hence, $\abs{\sigma}\leq 2^{\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace}}+1$ for every $\sigma\in\domainof{\ensuremath{{\mathcal{Z}}}\xspace}$, which yields $\depthtreeof{\ensuremath{{\mathcal{Z}}}\xspace}\leq 2^\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace}+1$.
\hfill \end{proof}
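The pigeonhole argument in this proof can be made concrete. The following Python sketch (with a hypothetical labelling over $\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace=\{A,B\}$; not taken from the implementation) checks whether two distinct prefixes of a word carry the same label set, which is exactly the situation in which blocking condition~\bRule{2} applies to one of them:

```python
# Pigeonhole step behind the depth bound: over N_C = {"A", "B"} there are only
# 2^|N_C| = 4 possible label sets, so any word with more than 4 prefixes must
# carry a repeated prefix label, and blocking condition (2) applies to one of
# the two prefixes involved. The labelling below is hypothetical.

def has_repeated_prefix_label(word, label_of):
    """True iff two distinct prefixes of `word` carry the same label set."""
    seen = set()
    for i in range(len(word) + 1):
        lbl = label_of(word[:i])
        if lbl in seen:
            return True
        seen.add(lbl)
    return False

LABELS = [frozenset(), frozenset("A"), frozenset("B"), frozenset("AB")]

def some_label(prefix):          # an arbitrary labelling into the 4 label sets
    return LABELS[hash(prefix) % 4]

# "rsrs" has 5 prefixes but only 4 label sets exist, so a repeat is forced:
assert has_repeated_prefix_label("rsrs", some_label)
```

Note that the assertion holds for every labelling into four sets, which is precisely the pigeonhole principle used in the proof.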
The upper bound on the depth of the tree in a \ensuremath{\mathcal{T}}\xspace-completion sequence also yields an upper bound on its overall size,
since the outdegree of the tree is bounded by $\big|\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace\big|$.
Furthermore, we observe that $\ensuremath{\mathcal{Y}}\xspace \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace'$ implies that $\ensuremath{\mathcal{Y}}\xspace \subsetneq \ensuremath{\mathcal{Y}}\xspace'$, i.e.\@\xspace a \ensuremath{\mathcal{T}}\xspace-completion always adds something
and never removes anything.
At the same time, each label set can contain at most $\big| \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace \big|$ many names.
Thus, due to the depth bound, the bound on the outdegree, and the upper bound on the label size, there cannot be an infinite sequence of \ensuremath{\mathcal{T}}\xspace-completions. Hence, $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ always terminates. Note that we have used both blocking conditions, \bRule{2} and \bRule{3}, in the proof.
\begin{lemma}
$\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ always terminates.
\end{lemma}
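The resulting size bound can be computed explicitly: with outdegree at most $\big|\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace\big|$ and depth at most $2^{\abs{\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace}}+1$, the number of nodes is bounded by a geometric sum. A small Python sketch with illustrative parameters:

```python
def tree_size_bound(num_roles, depth):
    """Upper bound on the number of nodes of a tree with outdegree at most
    num_roles and the given depth: sum of num_roles**i for i = 0..depth."""
    return sum(num_roles ** i for i in range(depth + 1))

# With |N_R| = 2 and the depth bound 2^|N_C| + 1 for |N_C| = 2:
assert tree_size_bound(2, 2 ** 2 + 1) == 2 ** 6 - 1  # = 63 nodes at most
```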
Note, however, that our termination argument only yields a double-exponential bound on the run time of the algorithm.
The reason is that Lemma~\ref{lem:depthbound} only shows an exponential bound on the \emph{depth} of the generated trees,
and thus only a double-exponential bound on the \emph{size} of these trees. At the moment, it is not clear
whether one can construct examples where the algorithm only terminates after a double-exponential number of steps,
but we also do not have a proof that it always terminates in exponential time.
Thus, we currently do not know whether the algorithm is worst-case optimal or not.
However, our experimental evaluation shows that it works reasonably well in practice.
It remains to show that $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ always computes the correct result, i.e., that it is sound and complete.
The following lemma is crucial for proving this.
\begin{lemma}\label{lem:relationCanonicalModel}
Let $\ensuremath{\mathcal{Y}}\xspace_0$ be as in~\eqref{eq:inittree} and $\ensuremath{\mathcal{Y}}\xspace$ be a partial functional interpretation that is reachable from $\ensuremath{\mathcal{Y}}\xspace_0$ and complete,
that is, $\ensuremath{\mathcal{Y}}\xspace_0\ensuremath{ \mathbin{\vdash_{\T}} }\xspace^*\ensuremath{\mathcal{Y}}\xspace$ and $\incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace}\cap\notblocked{\ensuremath{\mathcal{Y}}\xspace}=\emptyset$.
Then there is a functional model $\ensuremath{\mathcal{I}}\xspace$ of $\ensuremath{\mathcal{T}}\xspace$ such that $\ensuremath{\mathcal{Y}}\xspace(\epsilon)=\ensuremath{\mathcal{I}}\xspace(\epsilon)$.
\end{lemma}
\begin{proof}
We extend $\ensuremath{\mathcal{Y}}\xspace$ to a functional interpretation $\ensuremath{\mathcal{I}}\xspace$ such that\ $\ensuremath{\mathcal{Y}}\xspace(\epsilon)=\ensuremath{\mathcal{I}}\xspace(\epsilon)$.
Note that, in $\ensuremath{\mathcal{Y}}\xspace$, even non-blocked nodes $\sigma$ need not have $r$-successors for all $r\in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$. This is the case if there is no GCI
that requires generating an $r$-successor for $\sigma$. In the least functional model, the successor $\sigma r$ exists, but it has label $\emptyset$.
We will represent such successors by a dummy node $d_\top$ with an empty label in our construction.
To construct $\ensuremath{\mathcal{I}}\xspace$, we first define a mapping $m:\ensuremath{\NR^*}\rightarrow\notblocked{\ensuremath{\mathcal{Y}}\xspace}\cup\{d_\top\}$ by induction on the length of $\sigma\in\ensuremath{\NR^*}$ as follows:
\\[-\medskipamount] \mbox{ } \hfill \parbox[t]{0.935\textwidth}{
\begin{itemize}
\item By definition, $\epsilon$ is not blocked, and thus we can set $m(\epsilon) = \epsilon$.
\item Now, consider a node $\sigma r$ of length $> 0$, and assume that $m(\sigma)$ is already defined.
We distinguish two cases:
\begin{itemize}
\item
Assume that $m(\sigma) r\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$. Note that this node cannot be indirectly blocked since its prefix
$m(\sigma)$ is a node in $\domainof{\ensuremath{\mathcal{Y}}\xspace}$ that is not blocked.
Thus,
there exists $\sigma'\in\notblocked{\ensuremath{\mathcal{Y}}\xspace}$ such that $\ensuremath{\mathcal{Y}}\xspace(\sigma')=\ensuremath{\mathcal{Y}}\xspace(m(\sigma) r)$.
We set $m(\sigma r)=\sigma'$.
\item If $m(\sigma) r\not\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$, then we set $m(\sigma r)=d_\top$.
\end{itemize}
\end{itemize}
}
\smallskip \noindent
Based on $m$ and $\ensuremath{\mathcal{Y}}\xspace$, we define the functional interpretation $\ensuremath{\mathcal{I}}\xspace$ by setting
$$A^\ensuremath{\mathcal{I}}\xspace=\{\sigma\mid m(\sigma)\in A^\ensuremath{\mathcal{Y}}\xspace\}\ \ \ \mbox{for all $A\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$.}$$
It follows from Definition~\ref{def:match} that, for every $\sigma\in\ensuremath{\NR^*}$ and every \ensuremath{{\mathcal{F\!L}_0}}\xspace concept $C$ in normal form,
if $m(\sigma)$ matches $C$ in $\ensuremath{\mathcal{Y}}\xspace$, then $\sigma\in C^\ensuremath{\mathcal{I}}\xspace$. In fact, assume that $m(\sigma)$ matches $C$.
If $A$ is a conjunct in $C$, then $m(\sigma)\in A^\ensuremath{\mathcal{Y}}\xspace$, and thus
$\sigma\in A^\ensuremath{\mathcal{I}}\xspace$. If $\forall r.A$ is a conjunct in $C$, then $m(\sigma) r\in A^\ensuremath{\mathcal{Y}}\xspace$. This implies $m(\sigma) r\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$,
and thus $m(\sigma r)$ satisfies $\ensuremath{\mathcal{Y}}\xspace(m(\sigma r))=\ensuremath{\mathcal{Y}}\xspace(m(\sigma) r)$, which yields $m(\sigma r)\in A^\ensuremath{\mathcal{Y}}\xspace$, and thus $\sigma r\in A^\ensuremath{\mathcal{I}}\xspace$.
This shows $\sigma\in (\forall r.A)^\ensuremath{\mathcal{I}}\xspace$.
The other direction also holds. Assume that $\sigma\in C^\ensuremath{\mathcal{I}}\xspace$. If $A$ is a conjunct in $C$, then
$\sigma\in A^\ensuremath{\mathcal{I}}\xspace$ implies $m(\sigma)\in A^\ensuremath{\mathcal{Y}}\xspace$. If $\forall r.A$ is a conjunct in $C$, then $\sigma\in (\forall r.A)^\ensuremath{\mathcal{I}}\xspace$
implies $\sigma r\in A^\ensuremath{\mathcal{I}}\xspace$, and thus $m(\sigma r)\in A^\ensuremath{\mathcal{Y}}\xspace$. Consequently,
$A\in \ensuremath{\mathcal{Y}}\xspace(m(\sigma r)) = \ensuremath{\mathcal{Y}}\xspace(m(\sigma)r)$ yields $m(\sigma)r\in A^\ensuremath{\mathcal{Y}}\xspace$, which completes the proof that $m(\sigma)$ matches $C$.
We are now ready to show that $\ensuremath{\mathcal{I}}\xspace$ is a model of $\ensuremath{\mathcal{T}}\xspace$, that is, for every $C\sqsubseteq D\in\ensuremath{\mathcal{T}}\xspace$ and $\sigma\in C^\ensuremath{\mathcal{I}}\xspace$, also $\sigma\in D^\ensuremath{\mathcal{I}}\xspace$ holds.
Thus, assume $C\sqsubseteq D\in\ensuremath{\mathcal{T}}\xspace$ and $\sigma\in C^\ensuremath{\mathcal{I}}\xspace$. The latter implies that $m(\sigma)$ matches $C$. This is only possible if
$m(\sigma)\neq d_\top$. Thus, $m(\sigma)\in\notblocked{\ensuremath{\mathcal{Y}}\xspace}$ and since $\ensuremath{\mathcal{Y}}\xspace$ is complete,
$m(\sigma)\not\in\incomp{\ensuremath{\mathcal{Y}}\xspace,\ensuremath{\mathcal{T}}\xspace}$. Consequently, $m(\sigma)$ matches $D$, which yields $\sigma\in D^\ensuremath{\mathcal{I}}\xspace$.
\hfill \end{proof}
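The inductive construction of the mapping $m$ can be sketched in Python; here $\ensuremath{\mathcal{Y}}\xspace$ is represented as a dictionary from words to label sets, the set of non-blocked nodes is given as input, and the dummy node $d_\top$ is a sentinel object (all names are ours, not the implementation's):

```python
# Sketch of the mapping m from the proof: m sends a word sigma in N_R* to a
# non-blocked node of Y with the same label as its image, or to the dummy
# node d_top when the required successor does not exist in Y.

D_TOP = object()  # dummy node with empty label

def build_m(Y, non_blocked, word):
    """Compute m(word) by induction on the length of `word`."""
    m = ""  # m(epsilon) = epsilon; the root is never blocked
    for r in word:
        if m + r not in Y:
            return D_TOP              # m(sigma r) = d_top (and so on below)
        target = Y[m + r]
        # m(sigma) r exists: redirect to a non-blocked node with equal label
        # (such a node exists by blocking condition (2), as in the proof)
        m = next(s for s in non_blocked if Y[s] == target)
    return m

# Toy example: "r" is blocked by "" (same label), so m unravels the loop:
Y = {"": frozenset("A"), "r": frozenset("A")}
non_blocked = [""]
assert build_m(Y, non_blocked, "rrr") == ""
assert build_m(Y, non_blocked, "s") is D_TOP
```

The toy example shows how blocking lets a finite tree represent the infinite least functional model: every word over $r$ is mapped back to the root.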
\begin{theorem}
$\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ is sound and complete, that is, it outputs \enquote{yes} iff $A_0\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B_0$.
\end{theorem}
\begin{proof}
Assume that the algorithm has generated a complete partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace$ such that $\ensuremath{\mathcal{Y}}\xspace_0\ensuremath{ \mathbin{\vdash_{\T}} }\xspace^*\ensuremath{\mathcal{Y}}\xspace$.
Lemma~\ref{lem:relationCanonicalModel} yields a model $\ensuremath{\mathcal{I}}\xspace$ of $\ensuremath{\mathcal{T}}\xspace$ such that $\ensuremath{\mathcal{I}}\xspace(\epsilon) = \ensuremath{\mathcal{Y}}\xspace(\epsilon)$.
If $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ outputs \enquote{no}, then $B_0\not\in \ensuremath{\mathcal{Y}}\xspace(\epsilon)$.
Since $A_0\in \ensuremath{\mathcal{Y}}\xspace(\epsilon) = \ensuremath{\mathcal{I}}\xspace(\epsilon)$ and $B_0\not\in \ensuremath{\mathcal{Y}}\xspace(\epsilon) = \ensuremath{\mathcal{I}}\xspace(\epsilon)$, the model $\ensuremath{\mathcal{I}}\xspace$ of $\ensuremath{\mathcal{T}}\xspace$ yields a counterexample
to the subsumption relation $A_0\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B_0$ because this implies $\epsilon\in A_0^\ensuremath{\mathcal{I}}\xspace\setminus B_0^\ensuremath{\mathcal{I}}\xspace$.
If $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ outputs \enquote{yes}, then $B_0\in \ensuremath{\mathcal{Y}}\xspace(\epsilon)$. It is easy to see that $\ensuremath{\mathcal{Y}}\xspace(\sigma) \subseteq \ensuremath{\canonical{A_0}}\xspace(\sigma)$
holds for all $\sigma\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace^*$. In fact, one can generate $\ensuremath{\canonical{A_0}}\xspace$ from $\ensuremath{\mathcal{Y}}\xspace_0$ by an infinite number of completion steps that are also
applied to blocked nodes. Thus, whatever is added in the sequence $\ensuremath{\mathcal{Y}}\xspace_0\ensuremath{ \mathbin{\vdash_{\T}} }\xspace^*\ensuremath{\mathcal{Y}}\xspace$ is also present in
$\ensuremath{\canonical{A_0}}\xspace$. But then $B_0\in \ensuremath{\mathcal{Y}}\xspace(\epsilon)$ yields $B_0\in \ensuremath{\canonical{A_0}}\xspace(\epsilon)$, and this implies $A_0\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B_0$ by Theorem~\ref{sub:char:least-functional}.
\hfill \end{proof}
The algorithm $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ shares properties with the completion method for \ensuremath{\mathcal{E\!L}}\xspace \cite{BaBL05} as well as with
tableau algorithms for expressive DLs \cite{DBLP:journals/sLogica/BaaderS01}. Every single \ensuremath{\mathcal{T}}\xspace-completion step extends the label set of
at least one node in the tree. Intuitively, adding the concept name $A$ to the label set of domain element $\sigma$ corresponds to deriving $A_0 \ensuremath{\sqsubseteq}\xspace \forall \sigma. A$ as a consequence of $\ensuremath{\mathcal{T}}\xspace$. A single run of $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ not only decides whether $A_0 \ensuremath{\sqsubseteq}\xspace B_0$ is entailed by $\ensuremath{\mathcal{T}}\xspace$ but computes \emph{all} named subsumers of $A_0$. This is similar to the \ensuremath{\mathcal{E\!L}}\xspace completion method and other consequence-based calculi \cite{DBLP:conf/ijcai/SimancikKH11}. From tableau algorithms $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ inherits the blocking mechanism that ensures termination.
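The correspondence mentioned above can be spelled out: a concept name $A$ in the label of node $\sigma$ encodes the consequence $A_0 \ensuremath{\sqsubseteq}\xspace \forall \sigma. A$, where $\forall \sigma$ abbreviates a chain of value restrictions along the role word $\sigma$. A toy Python rendering (string-based, purely illustrative):

```python
# Translate "concept name A occurs in the label of node sigma of the tree
# for A_0" into the derived GCI  A_0 SubClassOf forall sigma. A, where the
# word sigma is unfolded into nested value restrictions.

def label_to_gci(a0, sigma, concept):
    gci = concept
    for r in reversed(sigma):          # innermost restriction comes last
        gci = f"forall {r}. ({gci})"
    return f"{a0} SubClassOf {gci}"

assert label_to_gci("A0", "rs", "B") == "A0 SubClassOf forall r. (forall s. (B))"
```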
\section{Evaluation of the \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace reasoner}\label{sec:eval}
The \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace reasoner is implemented in Java. It takes as input a general \ensuremath{{\mathcal{F\!L}_\bot}}\xspace TBox \ensuremath{\mathcal{T}}\xspace in OWL format \cite{DBLP:journals/ws/GrauHMPPS08} and normalizes the input TBox. If the ontology uses $\top$ or $\bot$, the transformation rules from Section~\ref{ssec:reductions} are applied. \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace realizes the following reasoning tasks.
\begin{description}
\item[\emph{Subsumption:}] Given two OWL classes $A$ and $B$, decide whether $A \ensuremath{\subsumed_{\T}}\xspace B$ holds.
\item[\emph{Subsumer set:}] Given an OWL class $A$, compute all classes $B$ in $\ensuremath{\mathcal{T}}\xspace$ for which $A \ensuremath{\subsumed_{\T}}\xspace B$ holds.
\item[\emph{Classification:}] Decide for all pairs of named OWL classes $A$ and $B$ occurring in $\ensuremath{\mathcal{T}}\xspace$ whether the subsumption $A \ensuremath{\subsumed_{\T}}\xspace B$ holds.
\end{description}
To decide subsumption, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace runs $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$, but stops as soon as the {subsumer candidate} $B_0$ occurs at the root of the tree.
For computing the whole subsumer set of $A_0$, a single complete run of $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ is sufficient, where the choice of $B_0$ is actually irrelevant. All subsumers of $A_0$ can be found at the root of the final tree.
Classification is done by running $\SUBSpar{A_0}{\tt{*}}{\ensuremath{\mathcal{T}}\xspace}$ for each named class $A_0$ in \ensuremath{\mathcal{T}}\xspace separately. (Again, the used {subsumer candidate} $B_0$ is irrelevant {for this kind of reasoning task and can be replaced by any concept other than $A_0$ or $\top$. This is indicated here by the wildcard $*$.}) The Rete network for \ensuremath{\mathcal{T}}\xspace is created only once and is reused for the remaining runs of $\SUBSpar{\tt{*}}{\tt{*}}{\ensuremath{\mathcal{T}}\xspace}$ {during classification}. Furthermore, {\ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace} uses caching to reuse precomputed subsumer sets.
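All three tasks can thus be reduced to one primitive that returns the set of named subsumers of a given concept name, i.e., the root label of a complete run. A hypothetical Python driver (compute_subsumers is a stand-in for the actual completion procedure, not \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace's code):

```python
# Hypothetical driver for the three reasoning tasks on top of one primitive:
# compute_subsumers(A) returns all named subsumers of A (the final root label).

def subsumption(a, b, compute_subsumers):
    """Decide a subsumed-by b; a full run is not needed once b appears."""
    return b in compute_subsumers(a)

def classify(concept_names, compute_subsumers):
    """One run per concept name; the cache allows reusing subsumer sets."""
    cache = {}
    for a in concept_names:
        if a not in cache:
            cache[a] = compute_subsumers(a)
    return cache

# Toy subsumer relation standing in for the completion algorithm's output:
toy = {"A": {"A", "B"}, "B": {"B"}}
assert subsumption("A", "B", toy.get)
assert classify(["A", "B"], toy.get) == toy
```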
Our evaluation of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace investigates two aspects. First, we wanted to see which optimizations implemented in \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace turned out to be effective and, second, we wanted to see how \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace's performance compares to that of other state-of-the-art DL reasoners. As most other DL reasoners that can handle (extensions of) \ensuremath{{\mathcal{F\!L}_0}}\xspace implement tableau-based methods, such a comparison would also tell {us} whether our new approach based on least functional models is competitive in terms of performance.
We report on both kinds of evaluations in this section.
In order to be able to assess the performance of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace, we needed to find suitable test ontologies first.
\subsection{Test Data}
\newcommand{\textsc{Ore-Corpus}\xspace}{\textsc{Ore-Corpus}\xspace}
\newcommand{\textsc{Mowl-Corpus}\xspace}{\textsc{Mowl-Corpus}\xspace}
We generated two corpora for our evaluation.
The first corpus \textsc{Ore-Corpus}\xspace is based on the ontologies of the
OWL EL classification track from the OWL Reasoner Evaluation 2015 (ORE 2015)~(see \cite{parsia2017owl}). The benchmarks of ORE 2015 have the advantage that they have been balanced according to different criteria such as size, expressivity, and complexity, and consist of many application ontologies. Unfortunately, however, no track is dedicated to ontologies in \ensuremath{{\mathcal{F\!L}_\bot}}\xspace. We thus generated \ensuremath{{\mathcal{F\!L}_\bot}}\xspace ontologies from ontologies written in \ensuremath{\mathcal{E\!L}}\xspace by \enquote{flipping} the quantifier, that is, by replacing $\exists$ by $\forall$. We furthermore dropped axioms involving role inclusions, nominals, or other operators that cannot be expressed in \ensuremath{{\mathcal{F\!L}_\bot}}\xspace. From the resulting corpus, we removed all ontologies with fewer than 500 concept names, resulting in a set of 209 \ensuremath{{\mathcal{F\!L}_\bot}}\xspace ontologies.\footnote{In the initial study on \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace's performance \cite{MTZ-RuleML-19}, there was an undetected bug that led to more ontologies being discarded; there, we used only 159 ontologies.}
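The corpus transformation can be sketched on axioms in OWL functional-style syntax: flip existential restrictions into value restrictions and drop axioms outside \ensuremath{{\mathcal{F\!L}_\bot}}\xspace. The Python sketch below operates on plain strings and lists only a fragment of the operators that were actually filtered; the real transformation used the OWL API:

```python
# "Flip" EL axioms into FL_bot: replace existential restrictions by value
# restrictions and drop axioms using constructs not expressible in FL_bot.
# The operator list is illustrative, not the full filter used for the corpus.

UNSUPPORTED = ("SubObjectPropertyOf", "ObjectOneOf", "ObjectUnionOf")

def flip_to_fl0(axioms):
    """Keep only axioms expressible in FL_bot after flipping the quantifier."""
    kept = []
    for ax in axioms:
        if any(op in ax for op in UNSUPPORTED):
            continue                              # e.g. drop role inclusions
        kept.append(ax.replace("ObjectSomeValuesFrom", "ObjectAllValuesFrom"))
    return kept

assert flip_to_fl0(["SubClassOf(A ObjectSomeValuesFrom(r B))",
                    "SubObjectPropertyOf(r s)"]) == \
       ["SubClassOf(A ObjectAllValuesFrom(r B))"]
```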
While the ontologies used by the ORE do not contain many \ensuremath{{\mathcal{F\!L}_\bot}}\xspace axioms, we found more extensive usage of such axioms in the Manchester Ontology Corpus (MOWLCorp), which is a large ontology corpus containing 34,741 OWL
ontologies that were obtained by web-crawling~\cite{MOWLCORP}. The second corpus, \textsc{Mowl-Corpus}\xspace, is based on MOWLCorp. From each ontology in MOWLCorp, we removed axioms that could not be expressed in \ensuremath{{\mathcal{F\!L}_\bot}}\xspace.
If the resulting ontology contained at least 500 concept names, it was included in our corpus.
This resulted in a set of 382 \ensuremath{{\mathcal{F\!L}_\bot}}\xspace ontologies.
While the \textsc{Ore-Corpus}\xspace contains more complex axioms and a more balanced set of ontologies, the \textsc{Mowl-Corpus}\xspace contains axioms that were obtained from application ontologies without modifications and thus preserves the original way of modeling.
Figure~\ref{fig:corpora} shows the distribution of different parameters in the two corpora: number of concept names and number of axioms.
The largest ontology in \textsc{Ore-Corpus}\xspace has 3,137,899 axioms, while the largest ontology in \textsc{Mowl-Corpus}\xspace has 279,682 axioms.
\begin{figure}
\input{figures/distributions-corpii}
\caption{Numbers of axioms and concept names in the ontologies under consideration.
{The $y$-axis shows, on a logarithmic scale, the number of axioms and concept names of the respective ontology; the values are ordered along the $x$-axis.}
}
\label{fig:corpora}
\end{figure}
In order to evaluate the subsumption task, we generated 80 individual subsumption tests per ontology for the \textsc{Ore-Corpus}\xspace, composed of 40 tests with \emph{positive} and 40 tests with \emph{negative} outcome.
Since this resulted in a large number of reasoning experiments to be performed, this experiment was carried out only on \textsc{Ore-Corpus}\xspace, from which we expected the most insight due to its more varied nature compared to \textsc{Mowl-Corpus}\xspace.
Positive tests were generated by randomly selecting a concept name, and then randomly selecting a subsumer of it. Negative tests were generated by randomly selecting a concept name, and then randomly selecting another concept name that does not subsume the first.
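The selection logic for the test pairs can be sketched as follows; the subsumer relation below is hypothetical precomputed data, while the actual pairs were drawn from the corpus ontologies:

```python
import random

# Draw positive pairs (A, B) with B a subsumer of A, and negative pairs where
# B does not subsume A. `subsumers` maps each concept name to its set of
# named subsumers (including itself); this data is purely illustrative.

def sample_tests(subsumers, n_pos, n_neg, seed=0):
    rng = random.Random(seed)
    names = sorted(subsumers)
    pos, neg = [], []
    while len(pos) < n_pos:
        a = rng.choice(names)
        if subsumers[a]:                      # positive: pick a real subsumer
            pos.append((a, rng.choice(sorted(subsumers[a]))))
    while len(neg) < n_neg:
        a, b = rng.choice(names), rng.choice(names)
        if b not in subsumers[a]:             # negative: no subsumption holds
            neg.append((a, b))
    return pos, neg

subs = {"A": {"A", "B"}, "B": {"B"}, "C": {"B", "C", "D"}, "D": {"D"}}
pos, neg = sample_tests(subs, 3, 3)
assert all(b in subs[a] for a, b in pos)
assert all(b not in subs[a] for a, b in neg)
```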
For \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace, positive subsumption tests are easier, as the reasoner stops as soon as the subsumption relation has been proven. For tableau-based reasoning systems, the expected behavior is the other way around, as these reasoners try to create a counterexample to contradict the subsumption to be tested. Thus, evaluation results might not be as informative if one just generated pairs for the subsumption test randomly, without distinguishing between positive and negative tests, as was done in the earlier study \cite{MTZ-RuleML-19}.
\subsection{Evaluation Setup}
In the initial study on \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace, we compared the performance for all three implemented reasoning tasks \emph{Subsumption}, \emph{Subsumer Set} and \emph{Classification} ~\cite{MTZ-RuleML-19}.
Although the OWL API supports computing subsumer sets and this functionality is implemented by all considered reasoner systems, it would not yield an informative test, since all three tableau-based systems classify the entire ontology before returning the subsumer set, as an inspection of their source code revealed. This makes a comparison simply unfair and less insightful, which is why we restricted this study to the tasks \emph{Subsumption} and \emph{Classification}.
For subsumption tests, we used the \textsc{Ore-Corpus}\xspace and the concept pairs for positive and negative subsumptions. For each subsumption test, the timeout was set to 1 minute.
Both \textsc{Ore-Corpus}\xspace and \textsc{Mowl-Corpus}\xspace were used to evaluate the classification task. For each classification reasoning task, the timeout was set to 10 minutes.
In addition to the running times measured for the two reasoning tasks, we also compared the computed results. While this comparison is easy for the \emph{Subsumption} tests, for \emph{Classification}, we computed a checksum for the classification result, and checked whether it was the same for every reasoner.
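The checksum comparison can be realized, for example, as an order-independent hash over the set of derived subsumptions; the concrete hashing scheme below is ours and not necessarily the one used in the evaluation harness:

```python
import hashlib

# Order-independent checksum over derived (A, B) subsumption pairs: two
# reasoners agree on the classification result iff their checksums coincide.

def classification_checksum(subsumptions):
    """subsumptions: iterable of (A, B) pairs with A subsumed by B."""
    canonical = "\n".join(f"{a} SubClassOf {b}"
                          for a, b in sorted(set(subsumptions)))
    return hashlib.sha256(canonical.encode()).hexdigest()

assert classification_checksum([("A", "B"), ("A", "C")]) == \
       classification_checksum([("A", "C"), ("A", "B")])
```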
As a test system, we used a machine with an Intel Core i5-4590 CPU at 3.30 GHz and 32 GB RAM, running Debian GNU/Linux 9 and OpenJDK 11.0.5.
Java was called with \textit{-Xmx8g} to set the maximum allocation pool (heap) size to 8 GB.
We only measured the running time of the actual reasoning task and not the time for loading the ontology.
\subsection{Evaluating \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace's Optimizations}
Although the current version of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace is certainly not highly optimized, it implements several optimizations.
We first evaluated the effect of the different optimizations within \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace:
\begin{description}
\item[Multithreading] The main algorithm computes all subsumers for a given concept name. For the task of \emph{classification}, we partition the concept names into batches of size 48 plus one partition for the rest, and compute the subsumers for each partition in a different thread.
\item[Ancestor blocking] Ancestor blocking corresponds to the blocking condition~\bRule{3}. While the method might not terminate without this condition, it is the interaction of this blocking condition with~\bRule{2} that makes the implementation of blocking more challenging (see Section~\ref{ssec:implementation-blocking}). To {assess} the impact of this blocking condition, we made it possible to deactivate ancestor blocking in the implementation.
\item[Role filtering] We do not generate $r$-successors for roles $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$ that do not occur on the left-hand side of a GCI.
{As pointed out in Remark~\ref{rem:horn}, this optimization preserves soundness and completeness. Moreover, as shown in the proof for Theorem~\ref{the:horn-fl0}, reasoning in \text{Horn-}\FLnull becomes polynomial with this optimization, so that one would {expect} a big impact of this optimization.}
\item[Global caching] When performing classification, we store previously computed subsumer sets. If a node with a concept name is added for which we already have a subsumer set, we add all the subsumers to that node and block it. The node only becomes unblocked when new concept names are added to its label by subsequent reasoning steps.
\end{description}
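The batching behind the multithreading optimization can be sketched as follows (compute_subsumers again stands in for a run of the completion procedure; batch size 48 as described above):

```python
from concurrent.futures import ThreadPoolExecutor

# Partition the concept names into batches of size 48 and compute the
# subsumer sets of each batch in its own thread, then merge the results.

BATCH_SIZE = 48

def classify_parallel(names, compute_subsumers):
    batches = [names[i:i + BATCH_SIZE]
               for i in range(0, len(names), BATCH_SIZE)]
    result = {}
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(
                lambda batch: {a: compute_subsumers(a) for a in batch},
                batches):
            result.update(partial)
    return result

names = [f"C{i}" for i in range(100)]          # illustrative concept names
out = classify_parallel(names, lambda a: {a})  # trivial stand-in primitive
assert len(out) == 100 and out["C7"] == {"C7"}
```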
\newcommand{\flower-MT\xspace}{\ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace-MT\xspace}
\newcommand{\flower-MT-AB\xspace}{\ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace-MT-AB\xspace}
\newcommand{\flower-MT-AB-GC\xspace}{\ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace-MT-AB-GC\xspace}
\newcommand{\flower-ALL\xspace}{\ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace-ALL\xspace}
We compared the following configurations of our reasoner:
\begin{itemize}
\item \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace with no optimizations,
\item \flower-MT\xspace (multithreading activated),
\item \flower-MT-AB\xspace (multithreading and ancestor blocking),
\item \flower-MT-AB-GC\xspace (multithreading, ancestor blocking and global caching), and
\item \flower-ALL\xspace with all four optimizations activated.
\end{itemize}
Figure~\ref{fig:subsumption-flower} shows the results for the subsumption experiment, while Figure~\ref{fig:classification-flower} shows the results for the classification experiments. Here and in the figures that follow, we use logarithmic scaling on both axes, and for runs that caused a timeout we show the maximal value (1 minute for subsumption, and 10 minutes for classification).
\begin{figure}
\input{figures/ore-subsumption-flower}
\caption{Timeouts and running times for subsumption tests w.r.t.\ \textsc{Ore-Corpus}\xspace with the different optimizations.}
\label{fig:subsumption-flower}
\end{figure}
\begin{figure}
\textsc{Ore-Corpus}\xspace:
\medskip
\input{figures/ore-flower}
\medskip
\textsc{Mowl-Corpus}\xspace:
\medskip
\input{figures/mowlcorp-flower}
\caption{Timeouts and running times for classification with the different optimizations. {Note that the curve for \flower-MT-AB-GC\xspace is almost completely hidden under the curve for \flower-ALL\xspace.}
}
\label{fig:classification-flower}
\end{figure}
{For the subsumption tests, the biggest impact was caused by ancestor blocking, despite the additional obstacles in the implementation. On the other hand, considering that termination can only be guaranteed with ancestor blocking activated, and that the additional blocking condition may lead to {fewer} nodes being generated, a positive effect was to be expected. In fact, for \textsc{Ore-Corpus}\xspace, with ancestor blocking activated, the timeout rate dropped from 17.91\% to 4.10\%.
}
However, this positive effect was only notable for the ontologies in \textsc{Ore-Corpus}\xspace, which can be explained by the simpler structure of the ontologies in \textsc{Mowl-Corpus}\xspace, which in turn leads to simpler functional models. For a single subsumption task, the other optimizations merely seem to create an overhead and do not improve the performance in general. This is obvious for the caching procedure, which only brings a benefit if more than one subsumption task is performed. The largest impact here seems to be obtained by role filtering. Interestingly, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace's reasoning time seems hardly correlated with the number of classes in the ontology; only when this number becomes very large do the optimizations even seem to have a negative impact.
For classification computed on the \textsc{Ore-Corpus}\xspace, besides {ancestor blocking, global caching makes a noticeable impact, though it is not as large as one would expect for this task. In contrast, the impact of role filtering} is not as strong, though it decreases the number of timeouts from 2.99\% to 2.61\%. We also observe that for \textsc{Mowl-Corpus}\xspace, none of the optimizations apart from multithreading seem to be really indispensable. Again this can be explained by the simpler structure of the ontologies considered here.
\subsection{Comparison with other DL reasoners}
We evaluated \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace to see how its reasoning times compare with those of other state-of-the-art DL reasoners. We used the configuration of our DL reasoner with all optimizations active: \flower-ALL\xspace.
Since there is no other dedicated reasoner for \ensuremath{{\mathcal{F\!L}_0}}\xspace, we used reasoner systems that can handle expressive DLs of which \ensuremath{{\mathcal{F\!L}_0}}\xspace is a fragment.
Here, we focused on reasoners which are implemented in Java just as \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace, and selected the following three state-of-the-art reasoning systems:
\begin{itemize}
\item HermiT\footnote{\url{hermit-reasoner.com}}, version 1.3.8.510,
\item Openllet\footnote{\url{github.com/Galigator/openllet}}, version 2.6.3, and
\item JFact\footnote{\url{jfact.sourceforge.net}}, version 5.0.1.
\end{itemize}
All three reasoners implement the OWL API \cite{DBLP:journals/semweb/HorridgeB11},
which allows us to measure and compare the time needed for the reasoning tasks alone---excluding the time
for loading the ontologies using the OWL API.
Furthermore, all three reasoners implement tableau-based algorithms, so this comparative evaluation also serves as a comparison of the two reasoning approaches: least functional model generation vs.\ the tableau-based approach.
Regarding the actual reasoner implementations, we note that these are all complex and mature reasoning \emph{systems} that come with more sophisticated optimizations than \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace, { which makes it even more surprising that \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace performs quite well in comparison.} The timeouts and reasoning times of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace compared with those of the above three reasoners, are shown in Figure~\ref{fig:subsumption-all} for the subsumption experiment, and in Figure~\ref{fig:classification-all} for the two classification experiments.
\begin{figure}
\input{figures/ore-subsumption-all4}
\caption{Timeouts and running times for subsumption tests w.r.t.\ \textsc{Ore-Corpus}\xspace for the different reasoners.}
\label{fig:subsumption-all}
\end{figure}
Interestingly, for the subsumption tests,
the performance of both \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace and HermiT\xspace seems hardly affected by the number of concept names in the ontology. This number has a much bigger impact on JFact\xspace and Openllet\xspace, which need orders of magnitude more running time than \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace and HermiT\xspace to decide subsumption for the test cases.
However, while subsumption tasks still led to timeouts for HermiT\xspace in 0.0047\% of the cases, no timeouts were observed for \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace.
Generally, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace performs substantially better for this task (on this test set) than the other DL reasoners.
\begin{figure}
\textsc{Ore-Corpus}\xspace:
\medskip
\input{figures/ore-all4}
\medskip
\textsc{Mowl-Corpus}\xspace:
\medskip
\input{figures/mowlcorp-all4}
\caption{Timeouts and running times for classification with the different reasoners.}
\label{fig:classification-all}
\end{figure}
For classification on \textsc{Ore-Corpus}\xspace, our measurements indicate that \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace performs best among the four systems. JFact\xspace's running time is roughly an order of magnitude higher than that of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace.
HermiT\xspace has twice as many timeouts as Openllet\xspace, but the picture regarding running times is more mixed, as HermiT\xspace often performed better than Openllet\xspace. \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace again almost halved the number of timeouts compared to Openllet\xspace, and its performance is consistently better than that of all other reasoners.
Interestingly, the (interpolated) performance curves of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace, HermiT\xspace, and Openllet\xspace show very similar characteristics, as they develop almost synchronously. This may suggest that the same kind of ontology is difficult for all three systems and for both reasoning approaches.
For \textsc{Mowl-Corpus}\xspace, the general picture is similar in principle. We can see a clear ranking between the reasoners, with \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace generally performing best. For this corpus, only JFact\xspace produced timeouts.
Again, the (interpolated) performance curves of HermiT\xspace and Openllet\xspace are similar to that of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace---albeit less pronounced than for the \textsc{Ore-Corpus}\xspace. This may further
support the earlier finding that the same kind of ontology could be difficult for both reasoning approaches.
\smallskip
\noindent
To sum up, the running times of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace for testing subsumption and for computing classification are on average substantially better than those of JFact\xspace, Openllet\xspace, and even HermiT\xspace. This is a remarkable result for a comparison between a newcomer system that implements only a few optimizations and well-established systems that have been developed over many years. Our comparative evaluation suggests that the same kind of ontology may be difficult (or, alternatively, easy) both for reasoners based on the computation of least functional models and for tableau-based reasoners.
\section{Conclusions}\label{sec:concl}
The main contribution of this paper is a novel algorithm for deciding subsumption in the DL \ensuremath{{\mathcal{F\!L}_0}}\xspace w.r.t.\@\xspace general TBoxes,
and a practical demonstration that this algorithm is easy to implement and behaves surprisingly well on large ontologies. Our reasoner \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace outperforms state-of-the-art DL reasoners for testing subsumption and for classifying general TBoxes.
One may ask, however, why a dedicated reasoner for \ensuremath{{\mathcal{F\!L}_0}}\xspace is needed, given the facts that the worst-case complexity of reasoning
in \ensuremath{{\mathcal{F\!L}_0}}\xspace is as high as for the considerably more expressive DL \ensuremath{\mathcal{ALC}}\xspace and that there are very few pure \ensuremath{{\mathcal{F\!L}_0}}\xspace ontologies
available. We argue that such a dedicated reasoner may turn out to be very useful.
%
First, the latter fact could be due to a chicken-and-egg problem: as long as no dedicated reasoner for \ensuremath{{\mathcal{F\!L}_0}}\xspace is available,
there is no incentive to restrict the expressiveness to \ensuremath{{\mathcal{F\!L}_0}}\xspace when creating an ontology. When extracting our test ontologies,
we observed that quite a number of application ontologies have large \ensuremath{{\mathcal{F\!L}_0}}\xspace fragments.
%
Second, regarding the former fact, it is well-known in the DL community that worst-case complexity results are not always
a good indication for how hard reasoning turns out to be in practice.
%
Third, some DL reasoners such as
Konclude\footnote{\url{konclude.com}} and
MORe \cite{DBLP:conf/semweb/RomeroGH12}
make use of specialized algorithms for certain language fragments as part of their overall reasoning approach, with
impressive improvements of the performance. Our efficient subsumption algorithm for \ensuremath{{\mathcal{F\!L}_0}}\xspace may turn out to be useful
in this context.
%
Finally, quite a number of non-standard reasoning tasks
in \ensuremath{{\mathcal{F\!L}_0}}\xspace w.r.t.\@\xspace general TBoxes
have recently been investigated \cite{DBLP:conf/jelia/BaaderMO16,DBLP:conf/lpar/BaaderGM18,DBLP:conf/gcai/BaaderGP18,DBLP:conf/www/BaaderMP18}.
The algorithms developed for solving these tasks usually depend on sub-procedures that perform subsumption tests or that use the least functional model directly.
Our reasoner \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace thus provides us with an efficient base for implementing such non-standard inferences.
\FloatBarrier
\section{Horn and other fragments of \ensuremath{{\mathcal{F\!L}_0}}\xspace}\label{sect:horn}
\newcommand{\tup}[1]{(#1)}
Based on the algorithm presented in the last section, we show that subsumption
between \ensuremath{{\mathcal{F\!L}_0}}\xspace concepts becomes tractable if one restricts to the
Horn logic \text{Horn-}\FLnull introduced in~\cite{DBLP:conf/aaai/KrotzschRH07}. We then consider some extensions.
In \text{Horn-}\FLnull, every GCI is of one of the following forms:
\begin{gather}
A\sqsubseteq C \quad A\sqcap B\sqsubseteq C \quad A\sqsubseteq\forall r.B,
\label{eq:horn-flnull-axioms}
\end{gather}
where $A,B,C\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ and $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$. Our definition differs slightly from the one
in~\cite{DBLP:conf/aaai/KrotzschRH07}, which allows $\top$ and
$\bot$ to be used in both $\ensuremath{{\mathcal{F\!L}_0}}\xspace$ and $\text{Horn-}\FLnull$. This is not
a major restriction: for the extension of $\text{Horn-}\FLnull$ that allows
$\top$ and $\bot$ wherever a concept name may occur, the reduction presented
in Section~\ref{ssec:reductions} can still be used to obtain a TBox that lies fully in
$\text{Horn-}\FLnull$ as presented here.
The authors of~\cite{DBLP:conf/aaai/KrotzschRH07} only show the complexity of knowledge base
consistency, which is
\textsc{PTime}\xspace-complete in \text{Horn-}\FLnull. We improve upon these results by showing that
subsumption between arbitrary \ensuremath{{\mathcal{F\!L}_0}}\xspace concepts with respect to a \text{Horn-}\FLnull
TBox is tractable as well. Note that, whereas for \ensuremath{{\mathcal{F\!L}_0}}\xspace, subsumption between
concepts can be reduced to knowledge base consistency, the restricted
expressivity of \text{Horn-}\FLnull does not allow for this in the general case.
\begin{theorem}\label{the:horn-fl0}
Concept subsumption of \ensuremath{{\mathcal{F\!L}_0}}\xspace concepts with respect to general \text{Horn-}\FLnull TBoxes is \textsc{PTime}\xspace-complete.
\end{theorem}
\begin{proof}
Hardness follows easily from \textsc{PTime}\xspace-hardness of satisfiability of
propositional Horn formulae. Specifically, given a Horn formula $\Phi$ over propositional variables $\{p_1,\ldots,p_m\}$, we associate with each variable $p_i$ a concept name $A_i$, translate clauses $p_{i_1}\wedge\ldots\wedge p_{i_k}\rightarrow p_j$ to GCIs $A_0\sqcap A_{i_1}\sqcap\ldots\sqcap A_{i_k}\sqsubseteq A_j$, and clauses
$p_{i_1}\wedge\ldots\wedge p_{i_k}\rightarrow\bot$ to $A_0\sqcap A_{i_1}\sqcap\ldots\sqcap A_{i_k}\sqsubseteq B_0$. Then, we transform these GCIs into ones
with only binary conjunctions on the left-hand side by introducing auxiliary concept names. It is easy to see that the resulting TBox entails $A_0\sqsubseteq B_0$ iff $\Phi$ is unsatisfiable.
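As an illustration, this clause-to-GCI translation can be sketched as follows; the encoding and all names (\texttt{clause\_to\_gcis}, \texttt{fresh\_names}, the auxiliary names \texttt{Xi}) are ours and purely illustrative, not taken from any implementation:

```python
# Hypothetical sketch of the hardness reduction: a Horn clause body is
# threaded through auxiliary concept names so that every GCI has the
# form A ⊑ C or A ⊓ B ⊑ C, with the marker concept A0 conjoined first.

def clause_to_gcis(body, head, fresh):
    """Translate p_{i1} & ... & p_{ik} -> p_j (head already mapped to a
    concept name, bottom mapped to B0 beforehand) into Horn-FL0 GCIs."""
    current = "A0"                            # A0 is conjoined to every body
    gcis = []
    for atom in body:
        aux = next(fresh)                     # name for the partial conjunction
        gcis.append(((current, atom), aux))   # current ⊓ atom ⊑ aux
        current = aux
    gcis.append(((current,), head))           # current ⊑ head (unary LHS)
    return gcis

def fresh_names():
    i = 0
    while True:
        i += 1
        yield f"X{i}"

# Example: the clause p1 ∧ p2 -> p3
fresh = fresh_names()
print(clause_to_gcis(["A1", "A2"], "A3", fresh))
# -> [(('A0', 'A1'), 'X1'), (('X1', 'A2'), 'X2'), (('X2',), 'A3')]
```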
For inclusion in \textsc{PTime}\xspace, we modify the procedure described in Section~\ref{sec:subs}.
In contrast to that procedure, we cannot
reduce
subsumption of the form $C\sqsubseteq_\ensuremath{\mathcal{T}}\xspace D$ to subsumptions of the form
$A_0\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B_0$, since the axiom $D\sqsubseteq B_0$ need not be expressible in \text{Horn-}\FLnull.
However, we can restrict ourselves to
subsumptions of the form $A_0\sqsubseteq_\ensuremath{\mathcal{T}}\xspace D$, where $A_0\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$, as for
subsumptions $C\sqsubseteq D$, we can add the axiom $A_0\sqsubseteq C$ to the
original TBox, which after normalization becomes a $\text{Horn-}\FLnull$ TBox $\ensuremath{\mathcal{T}}\xspace$
{that entails $A_0\sqsubseteq D$ iff the original ontology entails $C\sqsubseteq D$.}
To decide $A_0\sqsubseteq_\ensuremath{\mathcal{T}}\xspace D$ in polynomial time, we apply
the algorithm described in Section~\ref{sec:subs} with two modifications:
\\[-\medskipamount] \mbox{ } \hfill
\parbox[t]{0.935\textwidth}{
\begin{enumerate}
\item the initial partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace_0$ already contains several nodes which
serve
as a \enquote{skeleton} of $D$, and
\item expansions are only applied on nodes from that skeleton.
\end{enumerate}
}
\smallskip \noindent
Specifically, for $D=\forall \sigma_1.A_1\sqcap\ldots\sqcap\forall
\sigma_n.A_n$, the
initial partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace_0$ is now defined as
follows:
\[
\Delta^{\ensuremath{\mathcal{Y}}\xspace_0}=\bigcup_{1\leq i\leq n}\prefixset{\sigma_i}
\qquad
A_0^{\ensuremath{\mathcal{Y}}\xspace_0}=\{\epsilon\} \qquad B^{\ensuremath{\mathcal{Y}}\xspace_0}=\emptyset \text{ for all }
B\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace\setminus\{A_0\}.
\]
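The prefix-closed skeleton domain can be computed as in the following toy sketch; this is our own illustration (words represented as tuples of role names), not part of the paper's implementation:

```python
# Our own illustration: the skeleton domain for
# D = ∀σ1.A1 ⊓ ... ⊓ ∀σn.An is the prefix closure of the role words σi,
# with the empty tuple () standing for the root ε.

def skeleton_domain(words):
    """Prefix closure of a set of role words, each given as a tuple of roles."""
    nodes = {()}                       # ε is always part of the skeleton
    for w in words:
        for k in range(1, len(w) + 1):
            nodes.add(w[:k])           # add every non-empty prefix of w
    return nodes

# D = ∀rs.A1 ⊓ ∀rt.A2 yields the skeleton {ε, r, rs, rt}
print(sorted(skeleton_domain([("r", "s"), ("r", "t")])))
# -> [(), ('r',), ('r', 's'), ('r', 't')]
```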
%
Furthermore, expansions are only applied on nodes $\sigma\in\Delta^{\ensuremath{\mathcal{Y}}\xspace_0}$,
that is,
new nodes may be introduced, but they are not further expanded. This
restriction makes every completion sequence polynomially bounded{, because} we have
at most one step per pair $(\alpha, \sigma)\in\ensuremath{\mathcal{T}}\xspace\times\Delta^{\ensuremath{\mathcal{Y}}\xspace_0}$. For the
final
interpretation $\ensuremath{{\mathcal{Z}}}\xspace$, we check whether {$\sigma_i\in A_i^\ensuremath{{\mathcal{Z}}}\xspace$} for all $1\leq i\leq n$,
which corresponds to checking whether $\epsilon\in D^\ensuremath{{\mathcal{Z}}}\xspace$. To show that the
resulting method is still sound and complete, we show that for the
least functional model $\ensuremath{\mathcal{I}}\xspace_{A,\ensuremath{\mathcal{T}}\xspace}$, we have for every
$\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace_0}$ that $\ensuremath{{\mathcal{Z}}}\xspace(\sigma)=\ensuremath{\mathcal{I}}\xspace_{A,\ensuremath{\mathcal{T}}\xspace}(\sigma)$.
For this, it suffices to show that, for every $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace_0}$ and every $C'\sqsubseteq
D'\in\ensuremath{\mathcal{T}}\xspace$, $\sigma\in (C')^\ensuremath{{\mathcal{Z}}}\xspace$ implies $\sigma\in (D')^\ensuremath{{\mathcal{Z}}}\xspace$. Since $\ensuremath{\mathcal{T}}\xspace$ is in
\text{Horn-}\FLnull,
$C'$
does not contain universal role restrictions. Consequently, if
$\sigma\in\match{C'}{\ensuremath{{\mathcal{Z}}}\xspace}$, the expansion {already made} sure that
$\sigma\in\match{D'}{\ensuremath{{\mathcal{Z}}}\xspace}$ and consequently that $\sigma\in (D')^\ensuremath{{\mathcal{Z}}}\xspace$. It follows
that
$\ensuremath{{\mathcal{Z}}}\xspace(\sigma)=\ensuremath{\mathcal{I}}\xspace_{A,\ensuremath{\mathcal{T}}\xspace}(\sigma)$ for all $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace_0}$. This means
that $A\sqsubseteq_\ensuremath{\mathcal{T}}\xspace D$ iff
$\epsilon\in D^\ensuremath{{\mathcal{Z}}}\xspace$. Our method runs in polynomial time and is sound
and complete, and thus subsumption with \text{Horn-}\FLnull-TBoxes can be
decided in polynomial time. \hfill
\end{proof}
\begin{remark}\label{rem:horn}
The proof of Theorem~\ref{the:horn-fl0} uses the fact that we only need to
consider role-successors of roles that occur on the left-hand side of a GCI
(in case of \text{Horn-}\FLnull there are no such roles to consider). We use this
observation in an optimization of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace to improve reasoning times.
\end{remark}
{For many DLs, such as \ensuremath{\mathcal{ALC}}\xspace and \ensuremath{\mathcal{ALCI}}\xspace, it is common to define their Horn-fragments as their intersection with \text{Horn-}\ensuremath{\mathcal{SROIQ}}\xspace. If we define \text{Horn-}\FLbot in this way, we obtain a DL in which value restrictions can occur on the left-hand side in axioms of the form
$A\sqcap\forall r.B\sqsubseteq\bot$, where $A,B\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ and $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$. Specifically, in}
\text{Horn-}\FLbot, every axiom is of the form
\begin{align}
A\sqsubseteq B\qquad A\sqcap B\sqsubseteq C \qquad
A\sqsubseteq\forall r.B \qquad A\sqcap \forall r.B\sqsubseteq\bot,
\label{eq:horn-flnullbot-axioms}
\end{align}
where {$A,B,C\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace\cup\{\top,\bot\}$} and $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$.
\begin{theorem}
Subsumption between concept names is \textsc{PSpace}\xspace-complete for \text{Horn-}\FLbot.
\end{theorem}
\begin{proof}
Both directions can be shown by relating \text{Horn-}\FLbot to Horn-$\mathcal{FL}^-$\xspace, for which subsumption between concept names is also \textsc{PSpace}\xspace-complete~\cite{DBLP:conf/aaai/KrotzschRH07}.
Horn-$\mathcal{FL}^-$\xspace is similar to \text{Horn-}\FLbot, but instead of axioms of
the form $A\sqcap\forall r.B\sqsubseteq\bot$, it allows {for} axioms of the form
$A\sqsubseteq\exists r$, where the {semantics} of $\exists r$ is defined by
$(\exists r)^\I=\{d\mid \exists e\in\Delta^\I, (d,e)\in r^\I\}$. The Horn-$\mathcal{FL}^-$\xspace axiom $A\sqsubseteq\exists r$ is equivalent to the \text{Horn-}\FLbot axiom $A\sqcap\forall r.\bot\sqsubseteq\bot$, which means every Horn-$\mathcal{FL}^-$\xspace ontology can be easily translated into \text{Horn-}\FLbot.
This establishes \textsc{PSpace}\xspace-hardness of \text{Horn-}\FLbot.
For inclusion in \textsc{PSpace}\xspace, we show how every \text{Horn-}\FLbot ontology can be translated in polynomial time into a Horn-$\mathcal{FL}^-$\xspace ontology. For this, we replace every axiom {$\alpha$} of the
form $A\sqcap\forall r.B\sqsubseteq\bot$ by the axioms $A\sqsubseteq\exists r_\alpha$,
$A\sqsubseteq\forall r_\alpha.\overline{B}$ and $B\sqcap\overline{B}\sqsubseteq\bot$, where $r_\alpha$ is fresh for every such axiom {$\alpha$}. In addition, for every such freshly introduced role $r_\alpha$ and every axiom of the form $C\sqsubseteq\forall r.D$ over the same role $r$, we add $C\sqsubseteq\forall r_\alpha.D$.
{Intuitively, $A\sqcap\forall r.B\sqsubseteq\bot$ is satisfied iff every instance of $A$ has some $r$-successor that does not satisfy $B$. As there may be several such axioms, we need to distinguish between different $r$-successors for each such axiom. Horn-$\mathcal{FL}^-$\xspace is not expressive enough to do that directly, which is why we use a different role for every such axiom.
}
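The translation itself can be sketched as follows; the data encoding, the function name, and the restriction of the copied value restrictions to the same role $r$ are our reading of the construction, not code from the paper:

```python
# Illustrative sketch (our own encoding) of the polynomial translation of a
# Horn-FL_bot TBox into Horn-FL^-: bot_axioms encodes axioms A ⊓ ∀r.B ⊑ ⊥ as
# triples (A, r, B); value_axioms encodes C ⊑ ∀r.D as triples (C, r, D).

def translate(bot_axioms, value_axioms):
    out = []
    for i, (A, r, B) in enumerate(bot_axioms):
        ra, nB = f"r{i}", f"not_{B}"         # fresh role r_alpha, complement name
        out.append(("exists", A, ra))         # A ⊑ ∃r_alpha
        out.append(("forall", A, ra, nB))     # A ⊑ ∀r_alpha.not_B
        out.append(("disjoint", B, nB))       # B ⊓ not_B ⊑ ⊥
        # copy every value restriction on r over to the fresh role r_alpha
        for (C, s, D) in value_axioms:
            if s == r:
                out.append(("forall", C, ra, D))
    return out

print(translate([("A", "r", "B")], [("C", "r", "D")]))
# -> [('exists', 'A', 'r0'), ('forall', 'A', 'r0', 'not_B'),
#     ('disjoint', 'B', 'not_B'), ('forall', 'C', 'r0', 'D')]
```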
Let $\ensuremath{\mathcal{T}}\xspace$ be the TBox before this transformation and $\ensuremath{\mathcal{T}}\xspace'$ the result, and $A$, $B$ be two concept names occurring in $\ensuremath{\mathcal{T}}\xspace$. We show that $\ensuremath{\mathcal{T}}\xspace\models A\sqsubseteq B$ iff
{$\ensuremath{\mathcal{T}}\xspace'\models A\sqsubseteq B$}.
{($\Rightarrow$)
Assume $\ensuremath{\mathcal{T}}\xspace'\not\models A\sqsubseteq B$, which means there exists some model $\I'$ of $\ensuremath{\mathcal{T}}\xspace'$ s.t. $\I'\not\models A\sqsubseteq B$. We construct a model $\I$ of $\ensuremath{\mathcal{T}}\xspace$ s.t. $\I\not\models A\sqsubseteq B$ by setting $\Delta^\I=\Delta^{\I'}$, $A^\I=A^{\I'}$ for all $A\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$, and
}
$${r^{\I}=r^{\I'}\cup\bigcup_{\alpha = (A'\sqcap\forall r.B'\sqsubseteq\bot) \in \ensuremath{\mathcal{T}}\xspace}r_\alpha^{\I'}}$$
{for all $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$.}
For every introduced role name $r_\alpha$ and every axiom $A'\sqsubseteq\forall r.B'\in\ensuremath{\mathcal{T}}\xspace$, we have $\I'\models A'\sqsubseteq\forall r_\alpha.B'$, which yields $\I\models A'\sqsubseteq\forall r.B'$. Furthermore, for every $\alpha=A'\sqcap\forall r.B'\sqsubseteq\bot\in\ensuremath{\mathcal{T}}\xspace$ and every $d\in (A')^{\I}$, there exists a pair $(d,e)\in r_\alpha^{\I'}\subseteq r^{\I}$ with $e\in(\overline{B'})^{\I'}$, which implies $e\not\in (B')^{\I}$ and thus $d\not\in(\forall r.B')^{\I}$.
Thus, we have shown that $\I$ is a model of $\ensuremath{\mathcal{T}}\xspace$ and that
$\I\not\models A\sqsubseteq B$, and thus $\ensuremath{\mathcal{T}}\xspace\not\models A\sqsubseteq B$.
{($\Leftarrow$)} Now let $\I$ be a model of $\ensuremath{\mathcal{T}}\xspace$ s.t. $\I\not\models A\sqsubseteq B$.
{Based on $\I$, we construct a model $\I'$ of $\ensuremath{\mathcal{T}}\xspace'$ s.t. $\I'\not\models A\sqsubseteq B$.}
For every $\alpha=A'\sqcap\forall r.B'\sqsubseteq\bot\in\ensuremath{\mathcal{T}}\xspace$ and $d\in
(A')^\I$, there exists some $e\in\Delta^\I$ s.t. $(d,e)\in r^\I$ and
$e\not\in (B')^\I$. The interpretation $r_\alpha^{\I'}$ of the role $r_\alpha$ is defined
as the set of all those pairs
$(d,e)$. Every fresh concept name $\overline{B}$ is interpreted as $\Delta^\I\setminus B^\I$, and all other concept and role names are interpreted as in $\I$.
The resulting interpretation $\I'$ satisfies all axioms in $\ensuremath{\mathcal{T}}\xspace'$
and thus $\ensuremath{\mathcal{T}}\xspace'\not\models A\sqsubseteq B$.
{Summing up, we have shown that }%
$\ensuremath{\mathcal{T}}\xspace\not\models
A\sqsubseteq B$ iff $\ensuremath{\mathcal{T}}\xspace'\not\models A\sqsubseteq B$, and thus that subsumption
between concept names in $\text{Horn-}\FLbot$ can be polynomially reduced to
subsumption between concept names in Horn-$\mathcal{FL}^-$\xspace.
\end{proof}
We have used a modification of the algorithm presented in Section~\ref{sec:subs}
to show that subsumption in \text{Horn-}\FLnull is \textsc{PTime}\xspace-complete, thus indicating optimality of our algorithm
for this fragment. To deal with $\bot$, we could try to employ the reduction presented
in~Section~\ref{ssec:reductions}, which introduces a concept name for $\bot$.
%
Unfortunately, this approach cannot
work for \text{Horn-}\FLbot.
In fact,
if we generalized axioms of the form $A\sqcap\forall r.B\sqsubseteq\bot$ to ones
that use a concept name instead of $\bot$, we would have to allow axioms of the
form $A\sqcap\forall r.B\sqsubseteq C$. This makes the logic powerful enough to
cover the whole language of $\ensuremath{{\mathcal{F\!L}_0}}\xspace$, as we can represent
axioms of the form $A\sqcap\forall r.B_1\sqcap\forall s.B_2\sqsubseteq C$ using
$A\sqcap\forall r.B_1\sqsubseteq D$ and $D\sqcap\forall s.B_2\sqsubseteq C$,
and axioms of the form $A\sqcap\forall r.B\sqsubseteq\forall s.C$ using
$A\sqcap\forall r.B\sqsubseteq D$ and $D\sqsubseteq\forall s.C$, where in each
case, $D$ is fresh.
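The first of these splitting steps can be made concrete as follows; the tuple encoding of axioms and the function name are our own illustrative choices:

```python
# Sketch (our own encoding, not from the paper): an axiom with two value
# restrictions on the left-hand side, A ⊓ ∀r.B1 ⊓ ∀s.B2 ⊑ C, is split into
# two axioms with a single restriction each via a fresh concept name.

def split(axiom, fresh):
    """axiom = (A, [(r, B1), (s, B2)], C); returns the two replacement axioms."""
    A, ((r, B1), (s, B2)), C = axiom[0], axiom[1], axiom[2]
    D = next(fresh)                     # fresh intermediate concept name
    return [(A, [(r, B1)], D),          # A ⊓ ∀r.B1 ⊑ D
            (D, [(s, B2)], C)]          # D ⊓ ∀s.B2 ⊑ C

print(split(("A", [("r", "B1"), ("s", "B2")], "C"), iter(["D"])))
# -> [('A', [('r', 'B1')], 'D'), ('D', [('s', 'B2')], 'C')]
```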
In fact, if we relax \text{Horn-}\FLbot to allow several value restrictions on the
left-hand side, the logic again becomes \textsc{ExpTime}\xspace-complete. In
\ensuremath{\text{Horn-}\mathcal{FL}_\bot^+}\xspace, axioms are of the
forms listed in~\eqref{eq:horn-flnullbot-axioms} and the following form:
\begin{align}
\forall \sigma_1.A_1\sqcap\ldots\sqcap\forall \sigma_n.A_n\sqsubseteq\bot,
\label{eq:horn-fl0bot-plus}
\end{align}
where for $1\leq i\leq n$, $\sigma_i\in\ensuremath{\NR^*}$ and $A_i\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$.
Hardness of \ensuremath{\text{Horn-}\mathcal{FL}_\bot^+}\xspace can be shown by adapting the reduction from the
proof of Proposition~1 in~\cite{BaTh-KI-20}, which establishes
\textsc{ExpTime}\xspace-hardness of \ensuremath{{\mathcal{F\!L}_0}}\xspace. The reduction uses a TBox that is not in
\ensuremath{\text{Horn-}\mathcal{FL}_\bot^+}\xspace and does not even contain $\bot$. However, it uses a special concept name $F$ which essentially mimics the behavior of $\bot$. Replacing $F$ by $\bot$ creates a \ensuremath{\text{Horn-}\mathcal{FL}_\bot^+}\xspace TBox with a similar behavior.
Specifically, $F$ occurs on the right-hand side of the subsumption test, in
axioms of the form $A\sqcap B\sqcap\forall w_1.F\sqcap\ldots\sqcap
\forall w_n.F\sqsubseteq F$ (Axiom 2), $A_1\sqcap\ldots\sqcap A_n\sqsubseteq\forall r.F$
(Axiom 7) and in axioms $F\sqsubseteq\forall r.F$, which are added for every
role name $r$ used in the reduction (Axioms 8 and 9). All other axioms are in
\text{Horn-}\FLnull. Thus, replacing $F$ by $\bot$ results in a TBox of the desired
form. We argue that in the resulting TBox, $C\sqsubseteq\bot$ is entailed iff
$C\sqsubseteq F$ is entailed in the original TBox, where $C$ does not contain
$F$. If $C\sqsubseteq F$ is entailed by the original TBox, then clearly
$C\sqsubseteq\bot$ is entailed by the transformed one.
For the other direction,
assume that $C\sqsubseteq F$ is not entailed by the original ontology, and let
$\I$ be a witnessing model with $d\in C^\I\setminus F^\I$ such that every
domain element is reachable by a path of role-successors from $d$. We transform
$\I$ into $\I'$ by removing all elements in $F^\I$. Since $\I\models
F\sqsubseteq\forall r.F$ for all $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$, we have for all domain elements
$e\in\Delta^{\I'}$ and words $w\in\ensuremath{\NR^*}$, $e\in(\forall w.F)^{\I}$ iff
$e\in(\forall w.\bot)^{\I'}$. It follows that for every axiom of the form
$A\sqcap B\sqcap\forall w_1.F\sqcap\ldots\sqcap\forall w_n.F\sqsubseteq F$ in
$\ensuremath{\mathcal{T}}\xspace$, $\I'\models A\sqcap B\sqcap\forall w_1.\bot\sqcap\ldots\sqcap\forall
w_n.\bot\sqsubseteq\bot$, and for every axiom of the form $A_1\sqcap\ldots\sqcap
A_n\sqsubseteq\forall r.F$, $\I'\models A_1\sqcap\ldots\sqcap
A_n\sqsubseteq\forall r.\bot$. The axioms $\bot\sqsubseteq\forall r.\bot\in\ensuremath{\mathcal{T}}\xspace$
are naturally entailed. None of the remaining axioms have value restrictions on
the left-hand side, and are thus also entailed by $\I'$. Consequently,
$\I'$ is a model of the transformed TBox.
Thus, we have shown that the reduction used
in~\cite{BaTh-KI-20} to show \textsc{ExpTime}\xspace-hardness of \ensuremath{{\mathcal{F\!L}_0}}\xspace can be adapted to
show \textsc{ExpTime}\xspace-hardness of \ensuremath{\text{Horn-}\mathcal{FL}_\bot^+}\xspace.
\begin{theorem}
Deciding subsumption in $\ensuremath{\text{Horn-}\mathcal{FL}_\bot^+}\xspace$ is \textsc{ExpTime}\xspace-complete.
\end{theorem}
\section{Introduction}
Description Logics (DLs) \cite{BCNMP03,DLbook} are a well-investigated family of logic-based knowledge representation languages,
which are frequently used to formalize ontologies for application domains such as the Semantic Web \cite{HoPH03}
or biology and medicine \cite{HoSG15}.
To define the important notions of such an application domain as formal concepts, DLs state necessary and
sufficient conditions for an individual to belong to a concept.
%
These conditions can be Boolean combinations of atomic properties required for the
individual (expressed by concept names) or properties that refer
to relationships with other individuals and their properties
(expressed as role restrictions).
%
For example, the concept of a parent that has only daughters can be formalized by the concept description
$
C := \exists\textit{child}.\textit{Human} \sqcap \forall\textit{child}.\textit{Female},
$
which uses the concept names \textit{Female} and \textit{Human} and the role name \textit{child} as well as
the concept constructors conjunction ($\sqcap$), existential restriction ($\exists r.D$), and
value restriction ($\forall r.D$).
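The semantics of these constructors over a finite interpretation can be sketched with a small toy evaluator; this is purely our illustration (the tuple encoding and names are ours) and not part of any DL system:

```python
# Toy evaluator (illustrative only) for the constructors ⊓, ∃r.D, and ∀r.D
# over a finite interpretation: conc maps concept names to sets of
# individuals, role maps role names to sets of pairs, dom is the domain.

def ext(c, dom, conc, role):
    """Extension of a concept given as nested tuples."""
    op = c[0]
    if op == "name":
        return conc.get(c[1], set())
    if op == "and":
        return ext(c[1], dom, conc, role) & ext(c[2], dom, conc, role)
    # op is "exists" or "forall": c = (op, role name, filler concept)
    filler = ext(c[2], dom, conc, role)
    succ = {d: {e for (x, e) in role.get(c[1], set()) if x == d} for d in dom}
    if op == "exists":
        return {d for d in dom if succ[d] & filler}   # some successor in filler
    if op == "forall":
        return {d for d in dom if succ[d] <= filler}  # all successors in filler

# C = ∃child.Human ⊓ ∀child.Female from the example above
C = ("and", ("exists", "child", ("name", "Human")),
            ("forall", "child", ("name", "Female")))
dom = {"ann", "bea"}
conc = {"Human": {"ann", "bea"}, "Female": {"ann", "bea"}}
role = {"child": {("ann", "bea")}}
print(ext(C, dom, conc, role))   # -> {'ann'}
```

Note that "bea", having no children, vacuously satisfies the value restriction but not the existential one.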
%
Constraints on the interpretation of concept and role names can be formulated as general concept
inclusions (GCIs). For example, the GCIs
$
\textit{Human} \sqsubseteq \forall\textit{child}.\textit{Human}
$
and
$
\exists\textit{child}.\textit{Human} \sqsubseteq \textit{Human}
$
say that humans have only human children, and they are the only ones that can have human children.
DL systems provide their users with reasoning services that allow them to derive implicit knowledge from the explicitly
represented one. In our example, the above GCIs imply that elements of our concept $C$ also belong to the
concept $D:= \textit{Human} \sqcap \forall\textit{child}.\textit{Human}$, i.e., $C$ is subsumed by $D$ w.r.t.\
these GCIs.
%
A specific DL is determined by which kind of concept constructors are available.
In the early days of DL research, the inexpressive DL \FLnull, which has only conjunction and value restriction
as concept constructors, was considered to be the smallest possible DL. In fact, when providing a formal semantics for so-called
property edges of semantic
networks in the first DL system KL-ONE \cite{BrSc85}, value restrictions were used. For this reason, the language for constructing concepts
in KL-ONE and all of the other early DL systems \cite{BMPAB91,Pelt91,MaDW91,WoSc92} contained \FLnull. It came as a surprise when it was
shown that subsumption reasoning
w.r.t.\ acyclic \FLnull TBoxes (a restricted form of GCIs) is \textsc{co-NP}\xspace-hard \cite{Nebe90}. The complexity increases when more expressive forms of TBoxes are
used: for cyclic TBoxes to \textsc{PSpace}\xspace \cite{Baad90c,KaNi03} and for general TBoxes consisting of GCIs even to \textsc{ExpTime}\xspace \cite{BaBL05,Hofm05}.
Thus, w.r.t.\ general TBoxes, subsumption reasoning in \FLnull is as hard as subsumption reasoning in \ensuremath{\mathcal{ALC}}\xspace, its closure under negation
\cite{Schi91}.
These negative complexity results for \FLnull were one of the reasons why the attention in the research of inexpressive DLs
shifted from \FLnull to \ensuremath{\mathcal{E\!L}}\xspace, which is obtained from \FLnull by replacing value restriction with existential restriction as a concept constructor.
In fact, subsumption reasoning in \ensuremath{\mathcal{E\!L}}\xspace stays polynomial even in the presence of general TBoxes \cite{Bran04}.
The reasoning method employed in \cite{Bran04}, which is nowadays called consequence-based reasoning, can be used to establish the
\textsc{PTime}\xspace complexity upper bounds also for reasoning in the extension \ensuremath{\mathcal{E\!L}^{+}}\xspace of \ensuremath{\mathcal{E\!L}}\xspace \cite{BaBL05}. This approach also applies to Horn fragments
of expressive DLs such as $\mathcal{SHIQ}$, for which reasoning is \textsc{ExpTime}\xspace-complete, but consequence-based reasoning approaches
behave considerably better in practice than the usual tableau-based approaches for expressive DLs \cite{Kaza09}.
%
The DL \FLnull is not Horn,\footnote{
Actually, reasoning in its Horn fragment is \textsc{PTime}\xspace-complete \cite{DBLP:conf/aaai/KrotzschRH07,HORN-DLs}.
}
but it shares with \ensuremath{\mathcal{E\!L}^{+}}\xspace and Horn-$\mathcal{SHIQ}$ that (general) TBoxes have canonical models, i.e., models such
that a subsumption relationship between concept names follows from the TBox if and only if it holds in the canonical model.
Consequence-based reasoning basically generates these models. However, whereas the canonical models for \ensuremath{\mathcal{E\!L}}\xspace and Horn-$\mathcal{SHIQ}$
are respectively of polynomial and exponential size, the canonical models for \FLnull, called least functional models \cite{DBLP:conf/gcai/BaaderGP18},
may be infinite.
In this paper we build on and extend the results from \cite{MTZ-RuleML-19}. We devise a novel algorithm for deciding subsumption w.r.t.\@\xspace general \ensuremath{{\mathcal{F\!L}_0}}\xspace TBoxes,
describe a first implementation of it in the new \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace reasoner,\footnote{\url{https://github.com/attalos/fl0wer}}
and report on an evaluation of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace on a large collection of ontologies, which shows that \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace competes well with existing highly optimized DL reasoners.
Basically, our new algorithm generates \enquote{large enough} parts of the least functional model and achieves termination
using a blocking mechanism similar to the ones employed by tableau-based reasoners.
The key idea of the implementation is to apply the TBox statements like rules and to use a variant of the well-known Rete algorithm for rule application \cite{DBLP:journals/ai/Forgy82},
adapted to the case without negation.
To create a large set of challenging \ensuremath{{\mathcal{F\!L}_0}}\xspace ontologies we have used, on the one hand, the OWL 2 EL ontologies of the OWL
reasoner competition \cite{parsia2017owl}, transformed into \ensuremath{{\mathcal{F\!L}_0}}\xspace by exchanging the quantifier, omitting ontologies that were too small and thus too easy.
On the other hand, we have extracted \ensuremath{{\mathcal{F\!L}_0}}\xspace sub-ontologies of decent size from the ontologies of the
Manchester OWL Corpus (MOWLCorp).\footnote{%
\url{https://zenodo.org/record/16708}}
In the next section, we introduce \ensuremath{{\mathcal{F\!L}_0}}\xspace and its extension \ensuremath{{\mathcal{F\!L}_\bot}}\xspace with the top ($\top$) and the bottom~($\bot$) concepts.
We recall the characterization of subsumption based on least functional models from \cite{DBLP:conf/gcai/BaaderGP18},
introduce a normal form for \ensuremath{{\mathcal{F\!L}_0}}\xspace TBoxes,
and show that the bottom concept $\bot$ and the top concept $\top$ can be simulated by such TBoxes.
In Section~\ref{sec:subs}, we introduce our new algorithm, and prove that it is sound, complete, and terminating.
Section~\ref{sect:horn} considers the Horn fragments of \ensuremath{{\mathcal{F\!L}_0}}\xspace and \ensuremath{{\mathcal{F\!L}_\bot}}\xspace. First, we show that, for Horn-\ensuremath{{\mathcal{F\!L}_0}}\xspace, our
algorithm can be restricted such that it runs in polynomial time. A polynomial upper bound for subsumption in Horn-\ensuremath{{\mathcal{F\!L}_0}}\xspace
has already been shown in \cite{DBLP:conf/aaai/KrotzschRH07,HORN-DLs} for an extension of Horn-\ensuremath{{\mathcal{F\!L}_0}}\xspace that contains $\bot$.
However, this extension is weaker than Horn-\ensuremath{{\mathcal{F\!L}_\bot}}\xspace. In fact, we also show in Section~\ref{sect:horn} that subsumption in
Horn-\ensuremath{{\mathcal{F\!L}_\bot}}\xspace is \textsc{PSpace}\xspace-complete, and that it becomes \textsc{ExpTime}\xspace-complete in a small extension of Horn-\ensuremath{{\mathcal{F\!L}_\bot}}\xspace.
Section~\ref{sec:rete} describes how to realize our novel algorithm based on Rete, and
Section~\ref{sec:eval} presents our experimental results, in which we evaluate several optimizations of the algorithm and
compare its performance with that of existing highly optimized DL reasoners.
\section{Preliminaries on \ensuremath{{\mathcal{F\!L}_0}}\xspace and Extensions}\label{sec:lfm}
\newcommand{\FLbot}{\ensuremath{{\mathcal{F\!L}_\bot}}\xspace}
\newcommand{\ensuremath{\NR^*}}{\ensuremath{\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace^*}}
We introduce the DL \ensuremath{{\mathcal{F\!L}_0}}\xspace,
recall the characterization of subsumption based on least functional models from \cite{DBLP:conf/gcai/BaaderGP18},
introduce a normal form for \ensuremath{{\mathcal{F\!L}_0}}\xspace TBoxes,
and show that the bottom concept $\bot$ and the top concept $\top$ can be simulated by such TBoxes.
\subsection{Syntax, Semantics, and Functional Interpretations}
\newcommand{\sigC}[1]{\textsf{sig}_\mathsf{C}(#1)}
\newcommand{\sigR}[1]{\textsf{sig}_\mathsf{R}(#1)}
\paragraph*{Syntax.}
%
Let \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace and \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace be disjoint, at most countably infinite sets of \emph{concept names} and \emph{role names}, respectively.
An \emph{\ensuremath{{\mathcal{F\!L}_0}}\xspace concept description} (\emph{concept} for short) $C$ is built according to the following syntax rule
\begin{align*}
C ::= A \mid
C \sqcap C \mid \forall r. C, \text{~ where } A \in \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace, r \in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace.
\end{align*}
Additionally allowing the use of the top concept $\top$ and the bottom concept $\bot$
in the above rule yields the DL
\ensuremath{{\mathcal{F\!L}_\bot}}\xspace.
A \emph{general concept inclusion} (GCI) for any of these DLs is of the form $C \ensuremath{\sqsubseteq}\xspace D$,
where $C$ and $D$ are concepts of the respective DL. A \emph{TBox} is a finite set of GCIs.
%
The \emph{signature} $\sig{C}$ ($\sig{\ensuremath{\mathcal{T}}\xspace}$) of a concept $C$ (TBox $\ensuremath{\mathcal{T}}\xspace$) is the set of concept and role names occurring in $C$ ($\ensuremath{\mathcal{T}}\xspace$). For convenience, we use further functions to refer only to the concept names and only to the role names in an expression. For a concept or TBox $E$, we set $\sigC{E}=\sig{E}\cap\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ and $\sigR{E}=\sig{E}\cap\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$.
The expression $\forall r. C$ is called a \emph{value restriction}.
For nested value restrictions we use the following notation: given a word $\sigma = r_1\cdots r_m \in \ensuremath{\NR^*}$, $m \geq 0$, over the alphabet $\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$ of role names, and a concept $C$, we write $\forall \sigma. C$ as an abbreviation of $\forall r_1. \cdots \forall r_m. C$.
For the empty word $\epsilon$, we have $\forall\epsilon.C=C$.
\paragraph*{Semantics.}
An interpretation $\ensuremath{\mathcal{I}}\xspace$ is a pair $\ensuremath{\mathcal{I}}\xspace = (\domainof{\ensuremath{\mathcal{I}}\xspace},\cdot^\ensuremath{\mathcal{I}}\xspace)$, consisting of a non-empty set
\domainof{\ensuremath{\mathcal{I}}\xspace} (the \emph{domain} of \ensuremath{\mathcal{I}}\xspace) and an \emph{interpretation function} $\cdot^\ensuremath{\mathcal{I}}\xspace$ that
maps every concept name $A \in \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ to a subset $A^\ensuremath{\mathcal{I}}\xspace \subseteq \domainof{\ensuremath{\mathcal{I}}\xspace}$ of the domain,
and every role name $r \in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$ to a binary relation $r^\ensuremath{\mathcal{I}}\xspace \subseteq \domainof{\ensuremath{\mathcal{I}}\xspace} \times \domainof{\ensuremath{\mathcal{I}}\xspace}$.
The interpretation function is extended to (complex) concepts as follows:
\begin{align*}
(C \sqcap D)^\ensuremath{\mathcal{I}}\xspace & := C^\ensuremath{\mathcal{I}}\xspace \cap D^\ensuremath{\mathcal{I}}\xspace,
\qquad
\top^\ensuremath{\mathcal{I}}\xspace := \domainof{\ensuremath{\mathcal{I}}\xspace}, \qquad
\bot^\ensuremath{\mathcal{I}}\xspace := \emptyset,\text{ and}
\\
(\forall r. C)^\ensuremath{\mathcal{I}}\xspace & := \{ d \in \domainof{\ensuremath{\mathcal{I}}\xspace} \mid
\forall e \in \domainof{\ensuremath{\mathcal{I}}\xspace}. (d,e) \in r^\ensuremath{\mathcal{I}}\xspace \ \rightarrow \ e \in C^\ensuremath{\mathcal{I}}\xspace \}.
\end{align*}
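To make these semantic equations concrete, the extension of a concept in a finite interpretation can be computed directly from them. The following Python sketch is purely illustrative (it is not part of \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace); the encoding of concepts as strings and nested tuples is an assumption of this sketch.

```python
# Concepts are encoded as "TOP", "BOT", a concept name (any other string),
# ("and", C, D) for a conjunction, or ("all", r, C) for a value restriction.
def extension(concept, domain, conc, role):
    """Compute the extension of an FL_bot concept in a finite interpretation.
    conc: concept name -> set of elements; role: role name -> set of pairs."""
    if concept == "TOP":
        return set(domain)
    if concept == "BOT":
        return set()
    if isinstance(concept, str):                     # concept name
        return set(conc.get(concept, set()))
    if concept[0] == "and":                          # conjunction
        return extension(concept[1], domain, conc, role) & \
               extension(concept[2], domain, conc, role)
    _, r, filler = concept                           # value restriction
    cext = extension(filler, domain, conc, role)
    return {d for d in domain
            if all(e in cext for (x, e) in role.get(r, set()) if x == d)}
```

For example, in an interpretation with domain $\{1,2\}$ where $A^\ensuremath{\mathcal{I}}\xspace=\{2\}$ and $r^\ensuremath{\mathcal{I}}\xspace=\{(1,2)\}$, the concept $\forall r.A$ evaluates to the whole domain, since $2$ has no $r$-successors and value restrictions are satisfied vacuously.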
The GCI $C \ensuremath{\sqsubseteq}\xspace D$ is \emph{satisfied} in $\ensuremath{\mathcal{I}}\xspace$, denoted as $\ensuremath{\mathcal{I}}\xspace \models C \ensuremath{\sqsubseteq}\xspace D$, if
$C^\ensuremath{\mathcal{I}}\xspace \subseteq D^\ensuremath{\mathcal{I}}\xspace$. The interpretation \ensuremath{\mathcal{I}}\xspace is a \emph{model} of the TBox~\ensuremath{\mathcal{T}}\xspace, denoted as $\ensuremath{\mathcal{I}}\xspace \models \ensuremath{\mathcal{T}}\xspace$,
if $\ensuremath{\mathcal{I}}\xspace$ satisfies all GCIs in \ensuremath{\mathcal{T}}\xspace.
The concept $C$ is \emph{subsumed} by the concept $D$ \emph{w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace},
denoted as $C \ensuremath{\subsumed_{\T}}\xspace D$, if $C^\ensuremath{\mathcal{I}}\xspace \subseteq D^\ensuremath{\mathcal{I}}\xspace$ is satisfied in all models \ensuremath{\mathcal{I}}\xspace of \ensuremath{\mathcal{T}}\xspace.
To decide subsumption in \ensuremath{{\mathcal{F\!L}_0}}\xspace, it is sufficient to consider so-called functional interpretations, which are tree-shaped interpretations in which every element has exactly one child for each role name. In such interpretations, domain elements are identified by sequences of role names.
\begin{definition}
An interpretation $\ensuremath{\mathcal{I}}\xspace = (\domainof{\ensuremath{\mathcal{I}}\xspace},\cdot^\ensuremath{\mathcal{I}}\xspace)$ is called a \emph{functional interpretation} if $\domainof{\ensuremath{\mathcal{I}}\xspace} = \ensuremath{\NR^*}$ and for all $r \in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$, $r^\ensuremath{\mathcal{I}}\xspace = \{ (\sigma,\sigma r) \mid \sigma \in \ensuremath{\NR^*} \}$. It is called a \emph{functional model} of the \ensuremath{{\mathcal{F\!L}_0}}\xspace concept $C$ w.r.t.\@\xspace the \ensuremath{{\mathcal{F\!L}_0}}\xspace TBox \ensuremath{\mathcal{T}}\xspace
if $\ensuremath{\mathcal{I}}\xspace \models \ensuremath{\mathcal{T}}\xspace$ and $\epsilon \in C^\ensuremath{\mathcal{I}}\xspace$.
For two functional interpretations $\ensuremath{\mathcal{I}}\xspace$ and $\ensuremath{\mathcal{J}}\xspace$ we write
\[
\ensuremath{\mathcal{I}}\xspace \subseteq \ensuremath{\mathcal{J}}\xspace \text{ ~if~ } A^\ensuremath{\mathcal{I}}\xspace \subseteq A^\ensuremath{\mathcal{J}}\xspace \text{ for all } A \in \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace.
\]
\end{definition}
The notion of a functional interpretation fixes the domain and the interpretation of role names.
Thus, a functional interpretation is uniquely determined by the interpretation of the concept names.
Given a family $(\ensuremath{\mathcal{I}}\xspace_i)_{i\geq 0}$ of functional interpretations, their \emph{intersection} $\ensuremath{\mathcal{J}}\xspace := \bigcap_{i\geq 0} \ensuremath{\mathcal{I}}\xspace_i$
is the functional interpretation that satisfies $A^\ensuremath{\mathcal{J}}\xspace = \bigcap_{i\geq 0} A^{\ensuremath{\mathcal{I}}\xspace_i}$ for all $A\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$.
\begin{lemma}[see \cite{DBLP:conf/gcai/BaaderGP18}]
Given an \ensuremath{{\mathcal{F\!L}_0}}\xspace concept $C$ and
an \ensuremath{{\mathcal{F\!L}_0}}\xspace TBox \ensuremath{\mathcal{T}}\xspace, the functional models of $C$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace are closed under intersection. In particular, this
implies that there exists a \emph{least functional model $\ensuremath{\canonical{C}}\xspace$} of $C$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace, i.e., a functional model
of $C$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace such that $\ensuremath{\canonical{C}}\xspace \subseteq \ensuremath{\mathcal{J}}\xspace$ holds for all functional models $\ensuremath{\mathcal{J}}\xspace$ of $C$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace.
\end{lemma}
In \cite{DBLP:conf/gcai/BaaderGP18}, subsumption in \ensuremath{{\mathcal{F\!L}_0}}\xspace was characterized as inclusion of least functional models as follows:
given \ensuremath{{\mathcal{F\!L}_0}}\xspace concepts $C, D$ and an \ensuremath{{\mathcal{F\!L}_0}}\xspace TBox \ensuremath{\mathcal{T}}\xspace, we have
%
\begin{equation}\label{sub:char:eqn}
C \ensuremath{\subsumed_{\T}}\xspace D\ \ \text{iff}\ \ \ensuremath{\canonical{D}}\xspace \subseteq \ensuremath{\canonical{C}}\xspace.
\end{equation}
%
For our purposes, the following characterization of subsumption turns out to be more useful.
\begin{theorem}\label{sub:char:least-functional}
Given \ensuremath{{\mathcal{F\!L}_0}}\xspace concepts $C, D$ and an \ensuremath{{\mathcal{F\!L}_0}}\xspace TBox \ensuremath{\mathcal{T}}\xspace, we have $C \ensuremath{\subsumed_{\T}}\xspace D$
iff $\varepsilon\in D^{\ensuremath{\canonical{C}}\xspace}$.
\end{theorem}
\begin{proof}
Assume that $C \ensuremath{\subsumed_{\T}}\xspace D$. Then $\varepsilon\in C^{\ensuremath{\canonical{C}}\xspace}$ (which we know since $\ensuremath{\canonical{C}}\xspace$ is a
functional model of $C$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace) implies $\varepsilon\in D^{\ensuremath{\canonical{C}}\xspace}$ since $\ensuremath{\canonical{C}}\xspace$ is a model of \ensuremath{\mathcal{T}}\xspace.
Conversely, $\varepsilon\in D^{\ensuremath{\canonical{C}}\xspace}$ implies that $\ensuremath{\canonical{C}}\xspace$ is a functional model of $D$ w.r.t.\@\xspace \ensuremath{\mathcal{T}}\xspace,
and thus $\ensuremath{\canonical{D}}\xspace\subseteq \ensuremath{\canonical{C}}\xspace$, which yields $C \ensuremath{\subsumed_{\T}}\xspace D$ by \eqref{sub:char:eqn}. \hfill
\end{proof}
\subsection{Normal Forms for \ensuremath{{\mathcal{F\!L}_\bot}}\xspace and \ensuremath{{\mathcal{F\!L}_0}}\xspace Concepts and TBoxes}
\label{ssec:normal-forms}
An \ensuremath{{\mathcal{F\!L}_\bot}}\xspace \emph{concept} is in \emph{normal form} if it is of the form
\begin{itemize}
\item
$\top$ or $\bot$, or
\item
a non-empty conjunction of concepts of the form $\forall r.\bot$, $A$, $\forall r.A$, where $A\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ and $r\in\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace$.
\end{itemize}
%
An \ensuremath{{\mathcal{F\!L}_\bot}}\xspace \emph{TBox} is in \emph{normal form} if it contains only GCIs of the form $C\sqsubseteq D$, where $C, D$ are
in normal form, $C$ is not $\bot$, and $D$ is not $\top$.
%
In addition,
\ensuremath{{\mathcal{F\!L}_0}}\xspace concepts (TBoxes) in normal form are \ensuremath{{\mathcal{F\!L}_\bot}}\xspace concepts (TBoxes) in normal form that contain neither $\top$ nor $\bot$.
It is easy to see that every (\ensuremath{{\mathcal{F\!L}_\bot}}\xspace or \ensuremath{{\mathcal{F\!L}_0}}\xspace) TBox $\ensuremath{\mathcal{T}}\xspace$ can be transformed in
linear time into a TBox in normal form such that all subsumption relationships in the signature of
$\ensuremath{\mathcal{T}}\xspace$ are preserved. For this, one removes tautological GCIs with $\bot$ on the left-hand side
or $\top$ on the right-hand side, and flattens value-restrictions $\forall r.E$ with $E\not\in\ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace\cup\{\bot\}$.
To \emph{flatten} an occurrence of $\forall r.E$ in a GCI $C\sqsubseteq D$ means that $E$ is replaced by a fresh concept name $A_E$.
If the occurrence is within $C$, then the GCI $E\sqsubseteq A_E$ is added to the TBox, and otherwise $A_E\sqsubseteq E$.
It is well-known that subsumption between complex concepts can be reduced in linear time to subsumption between concept names.
In fact, we have $C \ensuremath{\subsumed_{\T}}\xspace D$ iff $A\sqsubseteq_{\ensuremath{\mathcal{T}}\xspace'} B$, where $A, B$ are concept names not occurring in $C$, $D$, or $\ensuremath{\mathcal{T}}\xspace$,
and $\ensuremath{\mathcal{T}}\xspace'$ is obtained from $\ensuremath{\mathcal{T}}\xspace$ by adding the GCIs $A\sqsubseteq C$ and $D\sqsubseteq B$.
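As an illustration of the flattening step, consider the following hypothetical Python sketch (it is not the \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace code; the tuple encoding of concepts and the fresh-name scheme are assumptions of this sketch). It replaces complex fillers of value restrictions by fresh concept names and adds the auxiliary GCI in the direction dictated by the side of the occurrence.

```python
from itertools import count

# Concepts: a concept name (string), ("and", C, D), or ("all", r, C).
fresh = (f"X{i}" for i in count())   # generator of fresh concept names A_E

def flatten(concept, side, gcis):
    """Replace complex fillers E of value restrictions by fresh names,
    adding E <= A_E for occurrences on the left-hand side ("lhs") of a
    GCI, and A_E <= E for occurrences on the right-hand side ("rhs")."""
    if isinstance(concept, str):
        return concept
    if concept[0] == "and":
        return ("and", flatten(concept[1], side, gcis),
                       flatten(concept[2], side, gcis))
    _, r, filler = concept
    if isinstance(filler, str):                  # already flat
        return ("all", r, filler)
    flat_filler = flatten(filler, side, gcis)
    a = next(fresh)
    gcis.append((flat_filler, a) if side == "lhs" else (a, flat_filler))
    return ("all", r, a)

def normalize(tbox):
    """Flatten all GCIs of a TBox, given as a list of (lhs, rhs) pairs."""
    out = []
    for lhs, rhs in tbox:
        out.append((flatten(lhs, "lhs", out), flatten(rhs, "rhs", out)))
    return out
```

For instance, normalizing $\{\forall r.\forall s.A \sqsubseteq B\}$ yields the two normal-form GCIs $\forall s.A \sqsubseteq X_0$ and $\forall r.X_0 \sqsubseteq B$.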
\begin{proposition}\label{prop:normal-form}
Subsumption in \ensuremath{{\mathcal{F\!L}_\bot}}\xspace (\ensuremath{{\mathcal{F\!L}_0}}\xspace) w.r.t.\ TBoxes
can be reduced in linear time
to subsumption of concept names
w.r.t.\ \ensuremath{{\mathcal{F\!L}_\bot}}\xspace (\ensuremath{{\mathcal{F\!L}_0}}\xspace) TBoxes in normal form.
\end{proposition}
For subsumption between concept names $A, B$ in the DL \ensuremath{{\mathcal{F\!L}_0}}\xspace, the characterization of subsumption given in
Theorem~\ref{sub:char:least-functional} means that, to decide whether $A\ensuremath{\subsumed_{\T}}\xspace B$ holds, it is sufficient to check
whether the root of $\ensuremath{\canonical{A}}\xspace$ is contained in $B^{\ensuremath{\canonical{A}}\xspace}$, i.e., whether the label of this root
contains the concept name $B$.
\subsection{Reducing Subsumption in \FLbot to Subsumption in \ensuremath{{\mathcal{F\!L}_0}}\xspace}\label{ssec:reductions}
Subsumption between concept names in \FLbot can be reduced to subsumption in \ensuremath{{\mathcal{F\!L}_0}}\xspace using the
following transformation rules on \emph{normalized} \FLbot TBoxes $\ensuremath{\mathcal{T}}\xspace$:
\begin{enumerate}
\item[\tRule{1}]
Replace $\bot$ and $\top$ everywhere by the fresh concept names $A_\bot$ and $A_\top$, respectively;
\item[\tRule{2}]\label{red:item:two}
add the axioms $A_\bot\sqsubseteq B$ for all $B\in\sigC{\ensuremath{\mathcal{T}}\xspace}$;
\item[\tRule{3}]\label{red:item:three}
add the axioms $B\sqsubseteq A_\top$ and $A_\top\sqsubseteq\forall r.A_\top$ for all
$B\in\sigC{\ensuremath{\mathcal{T}}\xspace}$ and all $r\in\sigR{\ensuremath{\mathcal{T}}\xspace}$.
\end{enumerate}
We denote the TBox resulting from this transformation as $\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)$.
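The rules \tRule{1}--\tRule{3} are straightforward to implement. The following Python sketch is illustrative only (the tuple encoding of concepts and the placeholder strings \texttt{"BOT"} and \texttt{"TOP"} for $\bot$ and $\top$ are assumptions of this sketch).

```python
def fl0_of(tbox, concept_names, role_names):
    """Apply T1-T3 to a normalized FL_bot TBox, given as (lhs, rhs) pairs.
    Returns the FL0 TBox FL0(T)."""
    def repl(c):                                   # T1: replace bot and top
        if c == "BOT":
            return "A_bot"
        if c == "TOP":
            return "A_top"
        if isinstance(c, str):
            return c
        if c[0] == "and":
            return ("and", repl(c[1]), repl(c[2]))
        return ("all", c[1], repl(c[2]))
    out = [(repl(lhs), repl(rhs)) for lhs, rhs in tbox]
    out += [("A_bot", b) for b in concept_names]   # T2: A_bot <= B
    out += [(b, "A_top") for b in concept_names]   # T3: B <= A_top
    out += [("A_top", ("all", r, "A_top")) for r in role_names]
    return out
```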
\begin{lemma}\label{lem:fl0bot}
For all \FLbot TBoxes $\ensuremath{\mathcal{T}}\xspace$ in normal form and all concept names $A$, $B$ occurring in $\ensuremath{\mathcal{T}}\xspace$, we
have $A\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B$ iff $A\sqsubseteq_{\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)} B$.
\end{lemma}
%
\begin{proof}
%
\enquote{$\Leftarrow$}:
Assume that $A\not\sqsubseteq_\ensuremath{\mathcal{T}}\xspace B$. Then there is a model $\ensuremath{\mathcal{I}}\xspace$ of $\ensuremath{\mathcal{T}}\xspace$ such that $A^\ensuremath{\mathcal{I}}\xspace\not\subseteq B^\ensuremath{\mathcal{I}}\xspace$.
We modify $\ensuremath{\mathcal{I}}\xspace$ to an interpretation $\ensuremath{\mathcal{J}}\xspace$ by setting ${A_\bot}^{\ensuremath{\mathcal{J}}\xspace} := \emptyset$ and
${A_\top}^{\ensuremath{\mathcal{J}}\xspace} := \Delta^\ensuremath{\mathcal{I}}\xspace$, and leave the domain as well as the interpretation of the other concept names and the role
names as in $\ensuremath{\mathcal{I}}\xspace$. It is easy to see that $\ensuremath{\mathcal{J}}\xspace$ is a model of $\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)$ that satisfies
$A^{\ensuremath{\mathcal{J}}\xspace} = A^\ensuremath{\mathcal{I}}\xspace\not\subseteq B^\ensuremath{\mathcal{I}}\xspace = B^{\ensuremath{\mathcal{J}}\xspace}$.
\enquote{$\Rightarrow$}:
Assume that $A\not\sqsubseteq_{\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)} B$, and let $\ensuremath{\mathcal{I}}\xspace$ be a model of $\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)$ that contains an element $d_0$ with
$d_0 \in A^\ensuremath{\mathcal{I}}\xspace\setminus B^\ensuremath{\mathcal{I}}\xspace$. We may assume without loss of generality that all elements of $\Delta^\ensuremath{\mathcal{I}}\xspace$ are reachable from $d_0$
via a path of roles in $\sigR{\ensuremath{\mathcal{T}}\xspace}$.
Due to the GCIs introduced by \tRule{3},
$d_0\in A^\ensuremath{\mathcal{I}}\xspace$ yields $d_0\in {A_\top}^{\ensuremath{\mathcal{I}}\xspace}$,
and thus $d\in {A_\top}^{\ensuremath{\mathcal{I}}\xspace}$ holds for all $d\in \Delta^\ensuremath{\mathcal{I}}\xspace$. We also know that $d_0\not\in{A_\bot}^{\ensuremath{\mathcal{I}}\xspace}$ since otherwise
the GCI $A_\bot\sqsubseteq B$ added by \tRule{2} would yield $d_0\in B^\ensuremath{\mathcal{I}}\xspace$, contradicting our assumption that $d_0$
is a counterexample to the subsumption. The interpretation
$\ensuremath{\mathcal{J}}\xspace$ is obtained from $\ensuremath{\mathcal{I}}\xspace$ by removing all elements of ${A_\bot}^{\ensuremath{\mathcal{I}}\xspace}$.
Then $d_0$ is an element of $\Delta^{\ensuremath{\mathcal{J}}\xspace}$ and it satisfies $d_0 \in A^{\ensuremath{\mathcal{J}}\xspace}\setminus B^{\ensuremath{\mathcal{J}}\xspace}$. Thus, it remains to show
that $\ensuremath{\mathcal{J}}\xspace$ is a model of $\ensuremath{\mathcal{T}}\xspace$.
First, note that ${A_\top}^{\ensuremath{\mathcal{J}}\xspace} = \Delta^{\ensuremath{\mathcal{J}}\xspace} = \top^{\ensuremath{\mathcal{J}}\xspace}$ and ${A_\bot}^{\ensuremath{\mathcal{J}}\xspace} = \emptyset = \bot^{\ensuremath{\mathcal{J}}\xspace}$. This implies that it is enough to prove that the GCIs from $\ensuremath{\mathcal{T}}\xspace$ transformed by
\tRule{1}, which are satisfied by $\ensuremath{\mathcal{I}}\xspace$ since it is a model of $\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)$,
are also satisfied by $\ensuremath{\mathcal{J}}\xspace$.
For this, it is in turn sufficient to show that, for all concepts $C$ in normal form occurring in $\ensuremath{{\mathcal{F\!L}_0}}\xspace(\ensuremath{\mathcal{T}}\xspace)$ and all $d\in \Delta^{\ensuremath{\mathcal{J}}\xspace}$
we have $d\in C^\ensuremath{\mathcal{I}}\xspace$ iff $d\in C^{\ensuremath{\mathcal{J}}\xspace}$. For concept names this is trivial by the definition of $\ensuremath{\mathcal{J}}\xspace$.
Thus, consider a value restriction of the form $\forall r.A_1$.
First, assume that $d\in (\forall r.A_1)^\ensuremath{\mathcal{I}}\xspace$, but $d\not\in (\forall r.A_1)^{\ensuremath{\mathcal{J}}\xspace}$.
Then there is an element $e\in \Delta^{\ensuremath{\mathcal{J}}\xspace}$ with $(d,e)\in r^{\ensuremath{\mathcal{J}}\xspace}$, but $e\not\in {A_1}^{\ensuremath{\mathcal{J}}\xspace}$. However, since $e\in \Delta^{\ensuremath{\mathcal{J}}\xspace}$, we already
know that $e\not\in {A_1}^{\ensuremath{\mathcal{J}}\xspace}$ implies $e\not\in {A_1}^{\ensuremath{\mathcal{I}}\xspace}$. Since we also have $(d,e)\in r^{\ensuremath{\mathcal{I}}\xspace}$, this contradicts our assumption that
$d\in (\forall r.A_1)^\ensuremath{\mathcal{I}}\xspace$.
Second, assume that $d\in (\forall r.A_1)^{\ensuremath{\mathcal{J}}\xspace}$, but $d\not\in (\forall r.A_1)^\ensuremath{\mathcal{I}}\xspace$.
Then there is an element $e\in \Delta^{\ensuremath{\mathcal{I}}\xspace}$ with $(d,e)\in r^{\ensuremath{\mathcal{I}}\xspace}$, but $e\not\in {A_1}^{\ensuremath{\mathcal{I}}\xspace}$.
If $e\in \Delta^{\ensuremath{\mathcal{J}}\xspace}$, then we also have $(d,e)\in r^{\ensuremath{\mathcal{J}}\xspace}$ and $e\not\in {A_1}^{\ensuremath{\mathcal{J}}\xspace}$, which contradicts our assumption that
$d\in (\forall r.A_1)^{\ensuremath{\mathcal{J}}\xspace}$. Otherwise, we must have $e\in {A_\bot}^{\ensuremath{\mathcal{I}}\xspace}$ since $e$ was removed. But then the GCIs introduced by \tRule{2}
yield $e\in {A_1}^{\ensuremath{\mathcal{I}}\xspace}$, contradicting our assumption on $e$.\footnote{%
Note that $A_1$ cannot be $A_\top$ since a value restriction of the form $\forall r.\top$ is not normalized.
}
\hfill \end{proof}
Since normalization of an \ensuremath{{\mathcal{F\!L}_\bot}}\xspace TBox and the transformation into an \ensuremath{{\mathcal{F\!L}_0}}\xspace TBox described in this subsection
are polynomial, we obtain the following result.
\begin{theorem}
Subsumption in \ensuremath{{\mathcal{F\!L}_\bot}}\xspace can be reduced in polynomial time to subsumption in \ensuremath{{\mathcal{F\!L}_0}}\xspace.
\end{theorem}
\section{A Rete-based Implementation}\label{sec:rete}
Our implementation of $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ in \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace employs a variant of the algorithm for Rete networks~\cite{DBLP:journals/ai/Forgy82} to allow for a fast generation of completions of the partial model to be constructed. Specifically, the Rete network tests the satisfaction of all GCIs on all domain elements at the same time. It also stores partial matches so that they can be quickly continued once additional information is available. In addition, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace uses optimized data structures to allow for a fast and memory-efficient navigation in the current model, as well as to speed up the implementation of blocking.
\subsection{A Rete Network for the TBox to Speed up the Matching of GCIs}
\label{sec:Rete}
In order to compute a sequence of \ensuremath{\mathcal{T}}\xspace-completions $\ensuremath{\mathcal{Y}}\xspace_0 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace_1 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \ensuremath{\mathcal{Y}}\xspace_2 \ensuremath{ \mathbin{\vdash_{\T}} }\xspace \cdots$
starting from the initial partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace_0$, we can employ a GCI $C \ensuremath{\sqsubseteq}\xspace D \in \ensuremath{\mathcal{T}}\xspace$ like a rule of the form
\begin{align*}
?\sigma \in \match{C}{\ensuremath{\mathcal{Y}}\xspace_i} \rightarrow ~ ?\sigma \in \match{D}{\ensuremath{\mathcal{Y}}\xspace_i},
\end{align*}
where $?\sigma$ ranges over the non-blocked domain elements of $\ensuremath{\mathcal{Y}}\xspace_i$, to obtain the next expansion. Overall, the rules corresponding to the GCIs from the TBox are applied during a run of $\SUBSpar{A_0}{B_0}{\ensuremath{\mathcal{T}}\xspace}$ in a forward-chaining manner to yield the sequence of \ensuremath{\mathcal{T}}\xspace-completions.
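Before turning to the Rete-based realization, the naive version of this forward-chaining loop can be sketched as follows. This is illustrative Python only, without the Rete machinery and without blocking, so it need not terminate on cyclic TBoxes; the encoding of GCIs as lists of (role, concept-name) atoms, with the empty string standing for $\epsilon$, is an assumption of this sketch.

```python
def matches(labels, elem, atoms):
    """Check whether an element matches a conjunction of atoms, where the
    atom (r, A) stands for 'forall r.A' and ("", A) for the name A."""
    return all(a in labels.get(elem + r, set()) for r, a in atoms)

def complete(labels, tbox):
    """Naive forward chaining: labels maps each element (a word over
    single-letter role names) to its set of concept names; a GCI is a
    pair (body, head) of atom lists. Repeatedly applies every GCI to
    every element until no new concept memberships are derived."""
    changed = True
    while changed:
        changed = False
        for elem in list(labels):
            for body, head in tbox:
                if matches(labels, elem, body):
                    for r, a in head:
                        target = labels.setdefault(elem + r, set())
                        if a not in target:
                            target.add(a)
                            changed = True
    return labels
```

For the TBox $\{A \sqsubseteq \forall r.B,\ \forall r.B \sqsubseteq C\}$ and the initial labeling $\{\epsilon \mapsto \{A\}\}$, the loop derives $B$ at the $r$-successor and then $C$ at the root. The Rete network described next avoids exactly the wasteful re-iteration over all element/GCI pairs that this naive loop performs.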
In each expansion step $i$, one has to compute the elements that violate a GCI, i.e., the pairs $(\sigma,C \ensuremath{\sqsubseteq}\xspace D) \in \left(\domainof{\ensuremath{\mathcal{Y}}\xspace_i} \cap \notblocked{\ensuremath{\mathcal{Y}}\xspace_i} \right) \times \ensuremath{\mathcal{T}}\xspace$ such that
$\sigma$ matches $C$ but not $D$ in $\ensuremath{\mathcal{Y}}\xspace_i$.
Since, in each step, a potentially large number of elements in
$\domainof{\ensuremath{\mathcal{Y}}\xspace_i}$ has to be matched against a large number of left-hand sides of GCIs (patterns) in the TBox, we have chosen to implement this
task using the Rete algorithm for many pattern/many object matching
\cite{DBLP:journals/ai/Forgy82}, which is tailored to efficiently compute forward chaining rule applications. The general idea is to integrate the matching tests of all GCIs using a Rete network, which is in our case a compressed network-representation of the TBox.
In each completion step, the extension of a tree $\ensuremath{\mathcal{Y}}\xspace_i$ only affects a small number of its elements: the matching element $\sigma$ itself and/or its children. This makes the Rete-based algorithm particularly efficient in our setting, because it stores matching information across completion steps to avoid reiterating over the whole set of pairs $\left(\domainof{\ensuremath{\mathcal{Y}}\xspace_i} \cap \notblocked{\ensuremath{\mathcal{Y}}\xspace_i} \right) \times \ensuremath{\mathcal{T}}\xspace$ in each step. Only the elements with changes have to be re-matched in the next completion step.
For a given element, the network tests which left-hand sides of GCIs are matched and triggers the extension for the corresponding right-hand sides.
The Rete network is a graph with three kinds of nodes: a single root node, a set of intermediate nodes, and a set of terminal nodes. Intuitively, the \emph{intermediate nodes} check for matches of parts of the left-hand side of a GCI, while the \emph{terminal nodes} hold the right-hand side of a GCI that is ready to be applied to an element.
To process an element $\sigma\in\domainof{\ensuremath{\mathcal{Y}}\xspace}$, a set of so-called tokens is passed from the root node through the intermediate nodes to the terminal nodes.
Such a \emph{token} is a pair of the form
$(\sigma, r) \in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace^* \times \big(\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace \cup \{ \epsilon \}\big)$.
Intuitively, the token $(\sigma, \epsilon)$ is used
to check whether $\sigma$ matches the concept names on the left-hand side of a GCI,
while a token of the form $(\sigma, r)$ with $r \in \NR$
is used to check whether $\sigma$ matches value restrictions with the role name $r$.
There are the following three types of \emph{intermediate nodes} that process tokens arriving from predecessor nodes in the network:
\begin{itemize}
\item A \emph{concept node} is labeled with a concept name $B \in \ensuremath{{\mathsf{N}_\mathsf{C}}}\xspace$ and sends an incoming token $(\sigma, s)$ to all successor nodes
iff $\sigma s \in B^{\ensuremath{\mathcal{Y}}\xspace_i}$.
\item A \emph{role node} is labeled with an
$s \in \ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace \cup \{ \epsilon \}$.
An arriving token of the form $(\sigma, s')$ is handled as follows. If $s \in \NR$ and $s'=s$, then it sends $(\sigma,s')$ to all successor nodes. If $s = \epsilon$, then it sends the token $(\sigma s', \epsilon)$ to all successor nodes.
\item An \emph{inter-element node} is labeled with a tuple $(s_1,\ldots,s_m) \in (\NR \cup \{ \epsilon \})^m$.
It stores all arriving tokens and sends a token $(\sigma, \epsilon)$ to its successor nodes once \emph{all} tokens of the form $(\sigma, s_1),\ldots,(\sigma,s_m)$ have arrived at this node.
\end{itemize}
The overall network is structured in layers. The root node with no incoming edges is on top.
All successors of the root node are concept nodes. The root node takes an element of the form $\sigma = \rho r \in \NR^*$ and sends the token $(\rho, r)$ to all successor nodes. A successor of a concept node can only be another concept node or a role node. A role node leads directly to an inter-element node, and inter-element nodes lead to terminal nodes. Intuitively, paths of concept nodes correspond to conjunctions of concept names a token must satisfy in order to pass through them. These concept names either need to be matched on the current element or on its immediate role successors. If the path of concept names goes into a role node labeled with $\epsilon$, this corresponds to a match on the current element. If it goes into a role node labeled with a role name $r$, this corresponds to a match on its $r$-successors. The inter-element nodes again correspond to conjunctions that combine the successful matches of the different role successors.
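To illustrate the token flow, the following much-simplified Python sketch implements concept, role, and terminal nodes for a GCI whose left-hand side is a single value restriction such as $\forall r.A \sqsubseteq B$. It is hypothetical code, not the \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace implementation: it omits inter-element nodes and the storage of partial matches that the real network performs.

```python
class ConceptNode:
    """Labeled with a concept name; passes a token (sigma, s) on iff the
    element sigma*s carries that name in the current labeling."""
    def __init__(self, name, succs):
        self.name, self.succs = name, succs
    def feed(self, tok, labels):
        sigma, s = tok
        if self.name in labels.get(sigma + s, set()):
            for node in self.succs:
                node.feed(tok, labels)

class RoleNode:
    """Labeled with a role name or "" (standing for epsilon)."""
    def __init__(self, s, succs):
        self.s, self.succs = s, succs
    def feed(self, tok, labels):
        sigma, s2 = tok
        if self.s == "":                 # epsilon: move to the element itself
            for node in self.succs:
                node.feed((sigma + s2, ""), labels)
        elif s2 == self.s:               # match on the s-successor
            for node in self.succs:
                node.feed(tok, labels)

class TerminalNode:
    """Records which right-hand side has to be applied to which element."""
    def __init__(self, rhs):
        self.rhs, self.fired = rhs, []
    def feed(self, tok, labels):
        self.fired.append((tok[0], self.rhs))
```

Feeding the token $(\rho, r)$ for the element $\sigma=\rho r$ through the chain \texttt{ConceptNode("A")} $\to$ \texttt{RoleNode("r")} $\to$ \texttt{TerminalNode("B")} fires $(\rho, B)$ exactly if $\rho$'s $r$-successor carries $A$, i.e., if $\rho$ matches $\forall r.A$.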
\begin{example}\label{ex:rete}
As an example of the structure of a Rete network compiled from a TBox, consider the following normalized TBox:
\begin{align*}
\ensuremath{\mathcal{T}}\xspace_{ex} = \{~ A_2 \sqcap A_4 \sqcap A_5 \sqcap \forall r_1. A_3 \sqcap \forall r_1. A_4 \sqcap \forall r_2. A_1 & \ensuremath{\sqsubseteq}\xspace B_7, \\
\forall r_2. A_3 \sqcap \forall r_2. A_4 & \ensuremath{\sqsubseteq}\xspace B_8, \\
\forall r_1. A_6 &\ensuremath{\sqsubseteq}\xspace \forall r_1. B_9~ \}.
\end{align*}
The corresponding Rete network is displayed in Figure \ref{fig:rete}, with the root node (Layer 1) and the three leaves being the terminal nodes corresponding to the three GCIs (Layer 5). The intermediate nodes are concept nodes representing (conjunctions of) concept names (Layer 2), role nodes (Layer 3), or the inter-element node representing the conjunction of value restrictions for different roles from the first GCI in $\ensuremath{\mathcal{T}}\xspace_{ex}$ (Layer 4).
\end{example}
\begin{figure}
\input{figures/rete-example}
\caption{Rete network for the TBox $\ensuremath{\mathcal{T}}\xspace_{ex}$ from Example \ref{ex:rete}. Intermediate nodes are drawn in round shapes: concept nodes in light circles, role nodes in dark circles, and the inter-element node as an ellipse. Terminal nodes are displayed as boxes. }
\label{fig:rete}
\end{figure}
In the preprocessing phase, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace compiles the normalized TBox \ensuremath{\mathcal{T}}\xspace into the corresponding \emph{Rete network}. In the main reasoning phase, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace
saturates the initial partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace_0$ by the Rete algorithm.
To unleash the full potential of this Rete-based approach, we need to store the current model in a way that allows for fast access to its successor nodes, which is discussed in the next subsection.
\subsection{Numerical Representation of Partial Functional Interpretations}
\newcommand{\indx}[1]{\textsf{index}(#1)}
The operations \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace needs to perform repeatedly on the current partial functional interpretation $\ensuremath{\mathcal{Y}}\xspace_i$ for a given domain element are the following:
\begin{enumerate}
\item quickly access its direct successors when a GCI is applied,
\item quickly decide whether a smaller domain element with the same label set exists to test Condition \bRule{2} for direct blocking, and
\item quickly decide whether a domain element is an ancestor of another element to test Condition \bRule{3} for indirect blocking.
\end{enumerate}
To obtain a space-efficient representation of the partial functional interpretation
that supports these operations with minimal overhead, we use an integer-based
representation with the basis $\lvert\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace\rvert+1$. Specifically, we fix
an enumeration of the role names in $\ensuremath{\mathcal{T}}\xspace$: $\ensuremath{{\mathsf{N}_\mathsf{R}}}\xspace=\{r_1,\ldots,r_n\}$. A
word $\sigma=r_{i_1}r_{i_2}\ldots r_{i_m}\in\ensuremath{\NR^*}$ is then represented as
$$\indx{\sigma}=\sum_{1\leq j\leq m} i_j(n+1)^{m-j}.$$
This representation reduces various operations on
words that are relevant for the algorithm to fast arithmetic operations in the following ways:
\begin{enumerate}
\item the length of a nonempty word $\sigma$ is
$\lvert\sigma\rvert=\left\lfloor\log_{n+1}(\indx{\sigma})\right\rfloor+1$,
\item
the $r_i$-successor of $\sigma$ has index $(n+1)\cdot\indx{\sigma}+i$,
\item the direct predecessor of $\sigma$ has index
$\left\lfloor\frac{\indx{\sigma}}{n+1}\right\rfloor$, and
\item checking whether $\sigma$ is an ancestor of $\rho$, i.e.\ whether
$\sigma\in\pprefixset{\rho}$, can be done by checking whether
%
$$\indx{\sigma}=\left\lfloor\frac{\indx{\rho}}{(n+1)^{(\lvert\rho\rvert-\lvert\sigma\rvert)}}\right\rfloor.$$
\end{enumerate}
Note that this numerical encoding also directly provides an \emph{ordering on elements} $\prec$ as required: specifically, we define this ordering by
$\sigma\prec\rho$ iff $\indx{\sigma}<\indx{\rho}$.
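A minimal Python sketch of this arithmetic encoding (names hypothetical; the length of a word is computed by repeated integer division rather than a floating-point logarithm):

```python
N_ROLES = 3           # |N_R|, so words are encoded in base N_ROLES + 1
BASE = N_ROLES + 1

def index(word):
    """index(r_{i_1} ... r_{i_m}) = sum_j i_j * (n+1)^(m-j); index of eps is 0."""
    idx = 0
    for i in word:          # digits i are in 1..N_ROLES, so 0 is never a digit
        idx = idx * BASE + i
    return idx

def length(idx):
    """Number of role names in the encoded word."""
    m = 0
    while idx > 0:
        idx //= BASE
        m += 1
    return m

def successor(idx, i):
    """Index of the r_i-successor of the encoded word."""
    return idx * BASE + i

def predecessor(idx):
    """Index of the direct predecessor."""
    return idx // BASE

def is_ancestor(sigma_idx, rho_idx):
    """Does sigma encode a proper prefix (ancestor) of rho?"""
    diff = length(rho_idx) - length(sigma_idx)
    return diff > 0 and sigma_idx == rho_idx // BASE ** diff

word = (1, 2)                                   # the element r1 r2
assert index(word) == 1 * BASE + 2
assert successor(index(word), 3) == index((1, 2, 3))
assert predecessor(index((1, 2, 3))) == index(word)
assert is_ancestor(index(word), index((1, 2, 3)))
assert not is_ancestor(index((1, 2, 3)), index(word))
```

The ordering $\prec$ is then just integer comparison of indices.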
The labels of the domain elements are stored in a tree map, a data structure that here associates the index of each domain element having a non-empty label set
with that label set. The inverse of this map is also
stored, to quickly obtain which domain elements have a given label set. This operation is required to test the blocking Condition \bRule{2}.
\subsection{Implementation of Blocking}\label{ssec:implementation-blocking}
After each expansion step it needs to be tested whether the blocking conditions \bRule{1} to \bRule{3} are fulfilled for the elements of the partial functional interpretation. Unfortunately, there can be intricate interactions between the blocking statuses of different elements.
Although GCIs are applied only to elements that are not blocked, the labels of a blocked element can change if a GCI is applied to some of its predecessors. As elements that are themselves blocked cannot block other elements, such a change in the label of a blocked element can lead to chain reactions where the blocking status of a number of elements changes once information is propagated into a single element.
An example of this effect is visualized in Figure~\ref{fig:Block}. On the left-hand side, $\sigma_3$
blocks $\sigma_4$, which makes the nodes $\sigma_5$ and $\sigma_6$ indirectly
blocked.
\begin{figure}
\input{figures/blocking-interactions}
\caption{Example of blocking interactions in a partial functional interpretation. Gray elements are blocked, and the dotted arrow indicates an element directly blocking another.}
\label{fig:Block}
\end{figure}
Thus, these nodes cannot block other nodes themselves. In our
example, we assume $\sigma_6$ and $\sigma_9$ to have the same labels.
$\sigma_9$ is not blocked by $\sigma_6$, since $\sigma_6$ is blocked. Blocking
$\sigma_9$
would thus make the overall reasoning procedure incomplete. Now imagine some extension
makes the node $\sigma_1$ blocked. The resulting situation is shown on the
right-hand side. Since $\sigma_3$ becomes indirectly blocked, it cannot block
$\sigma_4$ anymore. Consequently, the descendants of $\sigma_4$ also become
unblocked, and now $\sigma_9$ becomes blocked by $\sigma_6$, even though there
is no connection between these nodes and $\sigma_1$.
To determine directly blocked nodes, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace uses two hash maps. One maps each node to
its label set, and the other maps each label set to a node. In addition, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace stores
for each node whether it is blocking another node, directly blocked, or
indirectly blocked. If the label set of a node changes, \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace determines via the
hash maps whether this change results in directly blocking or unblocking any nodes, and updates their
blocking status accordingly. For every node whose blocking status changes, the indirect blocking status of its successors is recursively updated. If the
indirect blocking status changes (as seen for $\sigma_6$ in the last example), \ensuremath{\mathcal{F\!L}_{o\hspace{-0.15ex}}\textit{wer}}\xspace checks via the hash maps whether the
blocking status of other nodes has to change as well, and invokes those changes.
This process is continued recursively until all affected blocking statuses have
been updated.
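The core of the direct-blocking test can be sketched with the two hash maps in a few lines of Python (names hypothetical; the recursive propagation of indirect blocking, and the rule that a blocked node cannot itself block, are omitted here):

```python
# Hypothetical sketch of the two hash maps used for direct blocking.
labels_of = {}     # node index -> frozenset of concept names
node_with = {}     # frozenset of concept names -> smallest node index seen

def update_label(node, new_labels):
    """Install a new label set for `node` and return its direct blocker, if any.

    A node is directly blocked by a smaller node carrying the same label set.
    """
    key = frozenset(new_labels)
    labels_of[node] = key
    if key not in node_with or node < node_with[key]:
        node_with[key] = node            # node may now block larger nodes
    blocker = node_with[key]
    return None if blocker == node else blocker

assert update_label(5, {"A", "B"}) is None     # nothing can block node 5 yet
assert update_label(9, {"A", "B"}) == 5        # node 9 is directly blocked by 5
```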
\section{Introduction}
In this paper we continue our investigations on random walk orthogonal polynomials of multiple type. In~our previous paper \cite{bfmaf} we found that the ideas of Karlin and McGregor \cite{KmcG}, see also \cite{KmcG1957-1,KmcG1957-2}, can be extended from standard orthogonality to the multiple orthogonality scenario, whenever the Jacobi matrix is nonnegative and bounded. Now, instead of birth and death Markov chains, in which nonzero transition probabilities occur only between nearest neighbors, transitions to the $N$-th previous states are permitted. The dual Markov chain has the $N$-th next states reachable in one transition only. This allowed us, as in \cite{KmcG}, to give a representation formula for the iterated transition probabilities in terms of integrals of the multiple orthogonal polynomials of type II and the corresponding linear forms of type I.
We also provided possible stationary states and gave a characterization of the recurrent or transient character of the random walk in terms of the divergence or convergence of a certain integral.
In \cite{bfmaf} we presented the Jacobi--Piñeiro multiple orthogonal polynomials as a case study, and several properties were given. In particular, we showed the region where the Jacobi--Piñeiro random walks were recurrent or transient in terms of the Jacobi--Piñeiro parameters.
The large $n$ limit of the corresponding Jacobi matrix leads to a particularly interesting case with a Toeplitz matrix describing the transitions. At the time we finished the writing of \cite{bfmaf} a new paper appeared \cite{lima_loureiro}, in which new multiple orthogonal polynomials based on Gauss hypergeometric function weights were given. The corresponding Jacobi matrix is bounded and nonnegative. Therefore, we immediately realized that, as in the Jacobi--Piñeiro case, it was a case amenable to support random walks beyond birth and death.
In this paper we perform such a study and show that the Lima and Loureiro hypergeometric multiple orthogonal polynomials \cite{lima_loureiro} are indeed random walk polynomials.
For that aim we explicitly compute the value at unity of the type I linear forms.
Then, whenever the zeros of the type II multiple orthogonal polynomials are confined in the interior of the support of the weight system,
the type II multiple orthogonal polynomials at unity are also positive and our method in \cite{bfmaf} is applicable. Notice that this confinement of zeros happens for example for algebraic Chebyshev (AT) systems of weights, in particular for Nikishin systems as the hypergeometric system of weights is in some region of parameters, see~\cite{lima_loureiro}.
Let us stress that the parameters determining the system of weights in \cite[cf. equation~(1.2)]{lima_loureiro},
although very useful to unify their results, in some instances are too restrictive for our purposes.
In fact, this constraint is sometimes too strong, and not so convenient, as it excludes from the discussion cases such as the Piñeiro weights, or the set of parameters leading to a uniform stochastic matrix. Both cases are \emph{perfect systems} of weights.
Therefore, in this paper we will relax such conditions when required.
For a general introduction on multiple orthogonal polynomials and on Markov chains we refer the reader to our paper \cite{bfmaf} and references therein.
Now, we describe the contents and results of this paper. Within this introduction we remind the reader the main aspects of multiple orthogonal polynomials relevant for our objectives. Then we give a brief description of the results
in~\cite{lima_loureiro}.
We finish the introduction by recalling some of our results in \cite{bfmaf}.
Section \ref{S:ratio_asymptotics} is devoted to ratio asymptotics.
In Theorem \ref{teo:linear_forms_at_unity} we use the Rodrigues type formula of \cite{lima_loureiro} to get the type I linear forms at unity explicitly.
This is an important finding, allowing explicit expressions for the type I stochastic matrices. Then, we consider the ratio asymptotics at unity of these type~I linear forms and some further properties of these linear forms. Next, we consider the ratio asymptotics at unity of the type~II multiple orthogonal polynomials. For that aim, we use the Christoffel--Darboux formula on the step-line and the Poincaré theorem for homogeneous linear recurrence relations, see Proposition \ref{pro:ratio_asymptotics_typeII}. We end this section with Theorem~\ref{teo:ratio asymptotics_linear forms}, in which the ratio asymptotics of the type I linear forms is analyzed,
using the Osgood theorem, in compact subsets \cite{Beardon_Minda, Osgood}.
Then, in \S\ref{S:randowm_walks} we give, cf.
Theorem \ref{teo:JPII_stochastic},
the type I and II stochastic matrices for the hypergeometric multiple orthogonal polynomials. We also show that both matrices are connected for large $n$, as their respective limits are transposes of each other. Clearly, the Karlin--McGregor representation formulas, see Theorems \ref{teo:KMcG} and \ref{teo:KMcG2}, apply here. We also discuss in Theorem \ref{Theorem:recurrent-transient} whether these hypergeometric Markov chains are recurrent or transient and also find left eigenvectors of the stochastic matrices, close to steady states, in Proposition \ref{pro:steady}.
Then, in Theorem~\ref{teo:stochastic _factorization} we give the stochastic factorization of the hypergeometric stochastic matrices in terms of three very simple pure birth or death matrices (with only one superdiagonal or subdiagonal, respectively). This is an important result, as we have an $LU$ factorization of a stochastic matrix in terms of stochastic matrices, leading to the corresponding interpretation in terms of compositions of experiments, and possibly to urn models \cite{grunbaum_de la iglesia,grunbaum_de la iglesia2}.
In \S\ref{S:uniform} we seek uniform or almost uniform hypergeometric Jacobi matrices, i.e., Jacobi matrices that are banded Toeplitz matrices except for the first column. In particular, we search for Jacobi matrices that are variations (only in the first column) of the large $n$-limit of the Jacobi matrix for any set of hypergeometric parameters, the banded Toeplitz matrix in \eqref{eq:Jacobi_Jacobi_Piñeiro_uniform}, that appeared in \cite{Coussement_Coussment_VanAssche} and \cite{bfmaf}. In Theorem \ref{teo:12}, we find that there are twelve hypergeometric tuples leading to such cases. These hypergeometric \emph{uniform tuples} are organized in six couples. Each couple lies in the same \emph{gauge class} described in \S\ref{S:gauge_freedom}, and leads to the same sequences of multiple orthogonal polynomials of type II, linear forms and Jacobi matrix. In \S\ref{S:stochastic_uniform}, we analyze three couples of uniform tuples, which we have named \emph{stochastic uniform tuples}, describing recurrent Markov chains with stochastic transition matrices, having recurrent dual Markov chains with sinks and sources and semi-stochastic transition matrices. In Theorem \ref{teo:uniform_stochastic_factorization} we find that one of these stochastic uniform tuples has for its Markov matrix a stochastic factorization with three uniform pure death and pure birth factors. Then, in \S\ref{S:semi-stochastic uniform tuples} we discuss
the other three couples of uniform tuples, that we call \emph{semi-stochastic uniform tuples} because the corresponding transient Markov chains have sink states.
One of these six semi-stochastic uniform tuples is the one considered in \cite{lima_loureiro}. For the twelve cases we give the corresponding set of weights, all of them perfect. The three couples of semi-stochastic uniform tuples correspond to Nikishin systems. In Theorem \ref{teo:Christoffel chains for uniform tuples} we connect the stochastic uniform tuples using permuting Christoffel transformations \cite{bfm}; this, combined with the \emph{gauge} transformations, connects all the tuples in this set. The same is done for the semi-stochastic uniform tuples. We also discuss some connections through basic Christoffel transformations \cite{bfm}. As a byproduct we get summation formulas at unity and three- and four-term contiguous relations for the generalized hypergeometric function $\tensor[_3]{F}{_2}$. To conclude the section, using uniform recurrence relations, we construct a generating function that leads to explicit expressions for the type I multiple orthogonal polynomials, see Theorem \ref{teo:uniform_type_I}.
To conclude the paper, in \S\ref{S:Summations} we consider the very recent summation formulas by Karp and Prilepkina \cite{Karp_Prilepkina} to get summation formulas relevant in the theory of simultaneous Hermite--Padé approximants. In particular, we give in Proposition~\ref{pro:summations_Padé} explicit summations for the generalized type II moments linked to the remainders in the interpolation conditions of the type II Hermite--Padé approximants to the Markov functions.
\subsection{Two component multiple orthogonal polynomials on the step-line}
We now present a brief introduction to multiple orthogonal polynomials from the Gauss--Borel factorization point of view \cite{afm}, see also~\cite{abv,Coussement_Coussment_VanAssche,Ismail}.
\subsubsection{Gauss--Borel factorization of the moment matrix}
Let us consider a couple of weights $(w_1,w_2)$, the semi-infinite vectors of monomials
\begin{align*}
X := \left( \begin{NiceMatrix}
1 & x & x^2 & \Cdots
\end{NiceMatrix} \right)^\top,
&&
X_1 :=\left( \begin{NiceMatrix}
1 & 0 & x & 0 & x^2 &
\Cdots
\end{NiceMatrix} \right)^\top,
&&
X_2 := \left( \begin{NiceMatrix}
0 & 1 & 0 & x & 0 & x^2 &
\Cdots
\end{NiceMatrix} \right)^\top,
\end{align*}
and the following vector of undressed linear forms
\begin{align}
\xi &:= X_1 w_1 + X_2 w_2
=\left( \begin{NiceMatrix}
w_1 & w_2 & xw_1 & xw_2 & x^2w_1 & x^2w_2 &
\Cdots
\end{NiceMatrix} \right)^\top.
\end{align}
Given a measure $\mu$ with support on the closed interval $\Delta\subset\mathbb{R}$, our moment matrix is
\begin{align}
\label{compact.g}
g :=\int_\Delta X(x)(\xi(x))^\top\d \mu(x).
\end{align}
The Gauss--Borel factorization of the moment matrix $g$ is the
problem of finding the solution of
\begin{align}\label{facto}
g &=S^{-1} H \tilde S^{-\top},
\end{align}
with $S$, $\tilde S$ lower unitriangular semi-infinite matrices
\begin{align*}
S&=\left(\begin{NiceMatrix}[columns-width = auto]
1 & 0 & \Cdots & \\
S_{1,0 } & 1& \Ddots & \\
S_{2,0} & S_{2,1} & \Ddots & \\
\Vdots & \Ddots & \Ddots &
\end{NiceMatrix}\right), &
\tilde S&=
\left(\begin{NiceMatrix}[columns-width = auto]
1 & 0 &\Cdots &\\
\tilde S_{1,0 } & 1&\Ddots&\\
\tilde S_{2,0} & \tilde S_{2,1} & \Ddots &\\
\Vdots & \Ddots& \Ddots&
\end{NiceMatrix}\right),
\end{align*}
and $H$ a semi-infinite diagonal matrix
$ H=\operatorname{diag}
\begin{pNiceMatrix}
H_0 & H_1 & \Ldots
\end{pNiceMatrix}$,
with $H_l\neq 0$, $l\in\mathbb{N}_0$.
Vector of type~II multiple orthogonal polynomials and of type~I linear forms associated are defined respectively by
\begin{align*}
B & :=
\left( \begin{NiceMatrix}
B^{(0)}\\
B^{(1)}\\
\Vdots
\end{NiceMatrix} \right)
= S \, X, &
A_1 & :=
\left( \begin{NiceMatrix}
A_1^{(0)} \\[.1cm]
A_1^{(1)} \\
\Vdots
\end{NiceMatrix} \right)
= H^{-1} \, \tilde S \, X_1, &
A_2 & :=
\left( \begin{NiceMatrix}
A_2^{(0)}\\[.1cm]
A_2^{(1)}\\
\Vdots
\end{NiceMatrix} \right)
= H^{-1} \, \tilde S \, X_2, &
Q & :=
\left( \begin{NiceMatrix}
Q^{(0)}\\[.1cm]
Q^{(1)}\\
\Vdots
\end{NiceMatrix} \right)
=H^{-1} \, \tilde S \, \xi .
\end{align*}
\subsubsection{Multiple orthogonality and bi-orthogonality}
Let us assume that the Gauss--Borel factorization exists, that~is, the system $(w_1,w_2,\d\mu)$ is perfect. Then, in terms of $\, S$ and $\tilde S$ we construct the type~II multiple orthogonal~polynomials
\begin{align} \label{defmops}
B^{(m)}&:=x^m+\sum_{i=0}^{m-1} S_{m,i}x^{i}, && m \in\mathbb{N}_0,
\end{align}
as well as the type~I multiple orthogonal polynomials,
\begin{align}\label{eq:A12}
A^{(2 m)}_1&=\frac{1}{H_{2m}}\Bigg(x^m+\sum_{i=0}^{m-1}\tilde S_{2m,2i} x^{i}\Bigg), &
A^{(2 m +1)}_1&=\frac{1}{H_{2m+1}}\Bigg(\sum_{i=0}^{m}\tilde S_{2m+1,2i} x^{i}\Bigg), & m \in\mathbb{N}_0,
\end{align}
and\begin{align}\label{eq:A12'}
\begin{aligned}
A^{(0)}_2&=0, & A^{(1)}_2&=1, & \\
A^{(2m)}_2&=\frac{1}{H_{2m}}\Bigg(\sum_{i=0}^{m-1}\tilde S_{2m,2i+1} x^{i}\Bigg), &A^{(2m+1)}_2&=\frac{1}{H_{2m+1}}\Bigg(x^m+\sum_{i=0}^{m - 1}\tilde S_{2m+1,2i+1} x^{i} \Bigg), & m \in\mathbb{N}.
\end{aligned}
\end{align}
For $m \in\mathbb{N}_0$, the linear forms are
\begin{align*}
Q^{(m)}:=w_1A^{(m)}_1+w_2A_2^{(m)}.
\end{align*}
Given the vector of indices $\vec \nu (2 m)=(m+1,m)$ and $\vec\nu(2m+1)=(m+1,m+1)$, $m \in \mathbb{N}_0$, and the corresponding multiple orthogonal polynomials
\begin{gather*}
\begin{aligned}
B^{ (2 m)}&=B_{(m,m)}, & B^{(2m+1)}&=B_{(m+1,m)},
\end{aligned}\\
\begin{aligned}
A^{ (2 m)}_1&=A_{(m+1,m),1}, &
A^{(2m+1)}_1&=A_{(m+1,m+1),1}, &
A^{ (2 m)}_2&=A_{(m+1,m),2}, &
A^{(2m+1)}_2&=A_{(m+1,m+1),2},
\end{aligned}
\end{gather*}
the following type I orthogonality relations
\begin{align*
\int_{\Delta} x^{j} (A_{\vec \nu, 1}(x)w_{1} (x)+A_{\vec \nu, 2}(x)w_{2} (x))\d\mu (x) =0, &&
\deg A_{\vec \nu, 1}\leq\nu_{1}-1, &&
\deg A_{\vec \nu, 2}\leq\nu_{2}-1,
\end{align*}
for $j\in\{0,\ldots, |\vec \nu|-2\}$,
and type~II orthogonality relations
\begin{align*}
\int_{\Delta} B_{\vec\nu}(x) w_{a} (x) x^{j} \d\mu (x) =0, && \deg B_{\vec\nu} &\leq|\vec \nu|, && j=0,\ldots, \nu_a - 1, && a=1,2,
\end{align*}
are fulfilled.
The following multiple biorthogonality relations
\begin{align}\label{biotrhoganility}
\int_\Delta B^{(m)}(x) Q^{(k)}(x)\d \mu(x)&=\delta_{m,k},& m,k \in \mathbb{N}_0,
\end{align}
hold.
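The biorthogonality relations can be checked in one line from the Gauss--Borel factorization: since $H$ is diagonal, $Q=H^{-1}\tilde S\,\xi$ gives $(Q)^\top=\xi^\top\tilde S^{\top}H^{-1}$, and with $B=SX$, \eqref{compact.g} and \eqref{facto} one computes

```latex
\begin{align*}
\int_\Delta B(x)\,(Q(x))^\top\d\mu(x)
  &= S\Big(\int_\Delta X(x)\,(\xi(x))^\top\d\mu(x)\Big)\tilde S^{\top}H^{-1}\\
  &= S\,g\,\tilde S^{\top}H^{-1}
   = S\,S^{-1}H\,\tilde S^{-\top}\tilde S^{\top}H^{-1}=I.
\end{align*}
```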
\subsubsection{The Jacobi matrix and the fourth order homogeneous linear recurrence relations}
For describing the linear recurrence relations we require the following shift matrix
$ \Lambda
=\left( \begin{NiceMatrix}[small]
0& 1 & 0 & \Cdots \\
\Vdots &\Ddots & \Ddots&\Ddots \\
& & &
\end{NiceMatrix} \right) $,
that satisfies $\Lambda X(x)=x X(x)$. The Jacobi matrix
$ { J } := S \Lambda S^{-1}$
is a banded semi-infinite matrix
\begin{align}\label{eq:Jacobi}
{ J }
= \left(\begin{NiceMatrix}
\beta_0& 1 & 0 & \Cdots & & \\[-4pt]
\alpha_1&\beta_1& 1 & \Ddots& & \\
\gamma_1& \alpha_2& \beta_2& 1 &&\\
0& \gamma_2&\alpha_3 & \beta_3& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&
\end{NiceMatrix}\right)
\end{align}
The Jacobi matrix, type~II multiple orthogonal polynomials, and the corresponding type~I multiple orthogonal polynomials and linear forms of type~I fulfill the eigenvalue property
\begin{align}\label{eq:eigen_value}
{ J } \, B&= x \, B, & { J } ^\top A_1&= x \, A_1, & { J } ^\top A_2&= x \, A_2,& { J } ^\top Q&= x \, Q.
\end{align}
This eigenvalue property component-wise gives the following fourth order homogeneous linear recurrence relations:
\begin{align*}
\gamma_{n-1} B^{(n-2)} + \alpha_{n} B^{(n-1)}+\beta_{n} B^{(n)}+B^{(n+1)}&=xB^{(n)},\\
A_1^{(n-1)} + \beta_{n}A_1^{(n)}+\alpha_{n+1}A_1^{(n+1)}+\gamma_{n+1}A_1^{(n+2)} &= x A_1^{(n)},\\
A_2^{(n-1)}+\beta_{n}A_2^{(n)}+\alpha_{n+1}A_2^{(n+1)}+\gamma_{n+1}A_2^{(n+2)} &= x A_2^{(n)},\\
Q^{(n-1)}+\beta_{n}Q^{(n)}+\alpha_{n+1}Q^{(n+1)}+\gamma_{n+1}Q^{(n+2)}&=xQ^{(n)} ,
\end{align*}
for $n \in \mathbb{N}_0$, with the convention that objects with negative indices are treated as zero.
\subsubsection{Gauge freedom}\label{S:gauge_freedom}
In \cite{bfm}
we presented
\begin{teo}\label{teo:gauge_freedom_0}
Let $\Delta \subset \mathbb{R}$ be the compact support of two perfect systems, $(w_1, w_2,\d\mu)$, with $\int_\Delta w_1(x) \d \mu(x) =1$, and $(\hat w_1,\hat w_2,\d\mu)$, with $\int_\Delta \hat{w}_1 (x) \d \mu(x) =1$, that have the same sequence of type~II multiple orthogonal polynomials $\{B^{(n)}(x)\}_{n=0}^\infty$.
Then $ \hat w_1 = w_1$ and there exist $\alpha,\beta,\gamma \in\mathbb{R}$ with $\alpha,\beta \not =0$ such that
\begin{align}\label{eq:gauge_1_0}
\gamma w_1+ \alpha w_2+ \beta \hat w_2 =0.
\end{align}
If $A^{(m)}_1$, $A^{(m)}_2$ are the type I multiple orthogonal polynomials, associated with the system
$(w_1, w_2,\d\mu)$, then the type~I multiple orthogonal polynomials, $\hat{A}^{(m)}_1,\hat{A}^{(m)}_2$
associated with the system $(w_1,\hat w_2,\d\mu)$ are given by
\begin{align*}
\hat{A}^{(m)}_1=A^{(m)}_1- \frac{\gamma}{\alpha} A_2^{(m)}, && \hat{A}^{(m)}_2= -\frac{\beta}{\alpha} A_2^{(m)} ,
\end{align*} and both systems have the same type I linear forms, i.e.,
$ \hat{Q}^{(m)} = Q^{(m)}$.
\end{teo}
This will be used in \S\ref{S:uniform}.
\begin{rem}
This theorem leads to a surprising fact. For multiple orthogonal polynomials, the Jacobi matrix $J$, the sequence of type~II multiple orthogonal polynomials $\{B^{(n)}\}_{n=0}^\infty$, and even the sequence of type I linear forms $\{Q^{(n)}\}_{n=0}^\infty$ do not determine uniquely the spectral system $(w_1,w_2,\d\mu)$: they determine $w_1$ uniquely, but for the second weight one has the freedom just described. Inspired by similar phenomena in gauge field theories, we call this
a \emph{gauge freedom}.
\end{rem}
\begin{rem}\label{rem:scaling}
If in \eqref{eq:gauge_1_0} we put $\gamma=0$ and $\beta=-1$ we get $\hat w_1=w_1$ and $\hat w_2=\alpha w_2$, that is, we are dealing with a rescaling of the second weight. In this case we get $\hat B^{(n)}=B^{(n)}$, $\hat Q^{(n)}=Q^{(n)}$, $\hat A_1^{(n)}=A_1^{(n)}$ and
$\hat A_2^{(n)}=\alpha^{-1} A_2^{(n)}$. This is a particular case of a rescaling $\hat w_1=\alpha_1 w_1$ and $\hat w_2=\alpha_2 w_2$, and using the Gauss--Borel factorization it can be shown that $\hat B^{(n)}=B^{(n)}$, $\hat Q^{(n)}=Q^{(n)}$, $\hat A_1^{(n)}=\alpha_1^{-1} A_1^{(n)}$ and
$\hat A_2^{(n)}=\alpha_2^{-1} A_2^{(n)}$.
\end{rem}
\begin{rem}\label{teo:gauge_freedom}
A version of this result that we will use in \S\ref{S:uniform} goes as follows. Assume a compact support $\Delta$, a solvable Hausdorff moment problem for $w_2$, and that two perfect systems $(w_1, w_2,\d\mu)$, with $\int_\Delta w_1(x) \d \mu(x) =1$ and $\int_\Delta w_2(x) \d \mu(x) =1$, and $(w_1,\hat w_2,\d\mu)$ with
$\int_\Delta \hat w_2(x) \d \mu(x) =1$
have the same sequence of type II multiple orthogonal polynomials $\{B^{(n)}(x)\}_{n=0}^\infty$. Then, there exists $\alpha,\beta\in\mathbb{R}$ such that
\begin{align}\label{eq:gauge_1}
w_1&=\alpha w_2+\beta\hat w_2, & \alpha+\beta&=1.
\end{align}
Notice that \eqref{eq:gauge_1} can be written as
\begin{align}\label{eq:gauge_2}
\alpha' w_1+\beta'\hat w_2&= w_2, & \alpha'+\beta'&=1.
\end{align}
Indeed, from \eqref{eq:gauge_2} we get $ w_1= \alpha w_2+\beta\hat w_2$ with $\alpha=\frac{1}{\alpha'}$ and $\beta=1-\alpha=-\frac{\beta'}{\alpha'}$.
To analyze the \emph{gauge freedom} in \eqref{eq:gauge_2}, after the replacement $\alpha'\to \alpha$ and $\beta'\to \beta$,
requires to consider solutions of
\begin{align}\label{eq:rho}
\alpha \rho_{1,0} + \beta \hat \rho_{2,0} &= \rho_{2,0} , &
\alpha \rho_{1,1} + \beta \hat \rho_{2,1} &= \rho_{2,1} ,
\end{align}
where $\rho_{a,n}=\int_{\Delta} x^nw_a(x)\d\mu (x)$ with $a\in\{1,2\}$ and $\hat \rho_{2,n}=\int_{\Delta} x^n\hat w_2(x)\d\mu (x)$.
\end{rem}
\begin{rem}
This \emph{gauge freedom} will be used in \S\ref{S:uniform}. In that section, couples of perfect systems $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ leading to the same sequences of multiple orthogonal polynomials of type II and Jacobi matrices are presented for pairs of tuples of hypergeometric parameters $(a,b,c,d)$ and $(b,a,c,d)$, see \S\ref{S:LL} below.
\end{rem}
\subsection{Christoffel transformations}
For the aim of this paper two Christoffel transformations presented in \cite{bfm}
will be useful in the developments in \S\ref{S:uniform}. See also \cite{aagmm, matrix,matrix2}.
First, let us consider $\vec w=(w_1,w_2)$ and the transformed vector of weights
$\vec{\underline w}
=(w_2, x \, w_1)$, that is a simple Christoffel transformation of $w_1$ followed by a permutation of the two weights. Then, \cite[Theorem 4]{bfm}
says
\begin{teo}[Permuting Christoffel formulas]\label{teo:Christoffel}
For the type I orthogonal polynomials and linear forms we have,
\begin{gather*}
Q_{\underline{\vec w}}^{(n)}(x)=Q_{{\vec w}}^{(n)}(x)-\frac{A_{1,\vec w}^{(n)}(0)}{A_{1,\vec w}^{(n+1)}(0)}Q_{{\vec w}}^{(n+1)}(x),\\
\begin{aligned}
A_{1,\underline{\vec w}}^{(n)}(x)&=A_{2,{\vec w}}^{(n)}(x)-\frac{A_{1,\vec w}^{(n)}(0)}{A_{1,\vec w}^{(n+1)}(0)}A_{2,{\vec w}}^{(n+1)}(x),&
A_{2,\underline{\vec w}}^{(n)}(x)&=\frac{1}{x}\Big(A_{1,{\vec w}}^{(n)}(x)-\frac{A_{1,\vec w}^{(n)}(0)}{A_{1,\vec w}^{(n+1)}(0)}A_{1,{\vec w}}^{(n+1)}(x)\Big).
\end{aligned}
\end{gather*}
For the type II orthogonal polynomials we have
\begin{align}\label{eq:permuting_Christoffel}
B^{(n)}_{\underline{\vec w}}(x)
=\frac{1}{x}\bigg(B_{\vec w}^{(n+1)}(x)
+\bigg(\frac{A_{1,\vec w}^{(n-1)}(0)}{A^{(n)}_{1,\vec w}(0)} +J_{n,n}\bigg)B_{\vec w}^{(n)}(x)-\frac{A_{1,\vec w}^{(n+1)}(0)}{A^{(n)}_{1,\vec w}(0)}J_{n+1,n-1}B_{\vec w}^{(n-1)}(x) \bigg).
\end{align}
For the $H$'s we find the transformation formula
$ H_{\underline{\vec w},n}=-\frac{A_{1,\vec w}^{(n+1)}(0)}{A_{1,\vec w}^{(n)}(0)} H_{{\vec w},n}$.
\end{teo}
We also will be faced with the basic Christoffel transformation
$ \underline{\vec w}:=x\vec w$,
and \cite[Theorem 5]{bfm}
states that
\begin{teo}[Basic Christoffel formulas] \label{teo:Basic Christoffel formulas}
We have the following relations
\begin{align*}
B^{(n)}_{\underline{\vec w}}(x)&=\frac{1}{x}
\frac{\begin{vNiceMatrix}
B_{\vec w}^{(n)}(0)& B_{\vec w}^{(n)}(x)\\[3pt]
B_{\vec w}^{(n+1)}(0)& B_{\vec w}^{(n+1)}(x)
\end{vNiceMatrix}}{B_{\vec w}^{(n)}(0)}, &
Q^{(n)}_{\underline{\vec w}}(x)&=\frac{1}{x}
\frac{\begin{vNiceMatrix}
A^{(n)}_{\vec w,1} (0)& A^{(n)}_{\vec w,2} (0) & Q^{(n)}_{{\vec w}}(x)\\[3pt]
A^{(n+1)}_{\vec w,1} (0)& A^{(n+1)}_{\vec w,2} (0)& Q^{(n+1)}_{{\vec w}}(x)\\[3pt]
A^{(n+2)}_{\vec w,1} (0)& A^{(n+2)}_{\vec w,2} (0)& Q^{(n+2)}_{{\vec w}}(x)
\end{vNiceMatrix}}{\begin{vNiceMatrix}
A^{(n+1)}_{\vec w,1} (0)& A^{(n+1)}_{\vec w,2} (0)\\
A^{(n+2)}_{\vec w,1} (0)& A^{(n+2)}_{\vec w,2} (0)
\end{vNiceMatrix}}.
\end{align*}
\end{teo}
Hence, we have Christoffel formulas expressing the new multiple orthogonal polynomials of type II and linear forms of type I, in terms of the
original ones.
\subsection{The hypergeometric multiple orthogonal polynomials}\label{S:LL}
Here we explain the main results of Lima and Loureiro in \cite{lima_loureiro} regarding hypergeometric multiple orthogonal polynomials that are required in this paper.
Given $\{a_k\}_{k=1}^p\subset \mathbb{C}$ and $\{b_k\}_{k=1}^q\subset \mathbb{C}\setminus (-\mathbb{N}_0)$, the generalized hypergeometric function is defined by the following power series
\begin{align*}
\tensor[_p]{F}{_{q}} \left[
\begin{NiceMatrix}[small]a_1, &\Ldots && &, a_p \\&b_1, &\Ldots&, b_q&\end{NiceMatrix}
;x \right]
=
\sum_{k=0}^\infty \frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_{q})_k}\frac{x^k}{k!},
\end{align*}
where the Pochhammer symbol is $(a)_n:=a(a+1)\cdots(a+n-1)$, with $(a)_0=1$; the series converges absolutely when $p\leq q$, and for $|x|<1$ when $p=q+1$, which are the cases appearing in this paper, for $q=1,2,3$.
See \cite{Andrews} for details about these functions, in particular Theorems 2.1.1 and 2.1.2. See also \cite{Bailey,Rainville0}.
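Since the hypergeometric series used below either terminate (a nonpositive-integer upper parameter) or are evaluated through closed forms, they can be checked with exact rational arithmetic. The following Python sketch (names hypothetical) evaluates partial sums of ${}_pF_q$ and verifies the classical Chu--Vandermonde sum ${}_2F_1(-n,b;c;1)=(c-b)_n/(c)_n$:

```python
from fractions import Fraction

def poch(a, k):
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1."""
    r = Fraction(1)
    for j in range(k):
        r *= a + j
    return r

def pFq(num, den, x, terms):
    """Partial sum of the generalized hypergeometric series pFq at x."""
    s = Fraction(0)
    for k in range(terms):
        t = Fraction(x) ** k / poch(Fraction(1), k)   # x^k / k!
        for a in num:
            t *= poch(a, k)
        for b in den:
            t /= poch(b, k)
        s += t
    return s

# When an upper parameter is a non-positive integer the series terminates;
# Chu--Vandermonde: 2F1(-n, b; c; 1) = (c-b)_n / (c)_n.
n, b, c = 3, Fraction(2), Fraction(4)
assert pFq([-n, b], [c], 1, n + 1) == poch(c - b, n) / poch(c, n)
```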
\subsubsection{The system of weights}
The weights are constructed in terms of the Gauss hypergeometric function
$\tensor[_2]{F}{_1}$
as follows.
Given $(a,b,c,d)\in\mathbb{R}^4$, which we call a \emph{hypergeometric tuple}, and $\delta:=c+d-a-b$, let us define the~function
\begin{align*}
\mathscr w (x,a,b;c,d) =\frac{\Gamma(c)\Gamma(d)}{\Gamma(a)\Gamma(b)\Gamma(\delta)}x^{a-1}(1-x)^{\delta-1}\;\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-b,d-b \\\delta\end{NiceArray}};1-x\right].
\end{align*}
For the hypergeometric multiple orthogonality we consider $(W_1, W_2,\d x)$, where the weights are given by
\begin{align}\label{eq:pesosLL_w}
W_1(x)&:=\mathscr w(x,a,b;c,d), &W_2(x)&:=\mathscr w(x,a,b+1;c+1,d),
\end{align}
and $\d x$ is the Lebesgue measure in $[0,1]$.
In \cite{lima_loureiro} the hypergeometric parameters are constrained by
\begin{align}\notag
a,b,c,d&\geq 0,\\
\min(c,d)&>\max (a,b),\label{eq:min-max}
\end{align}
so that $\delta>0$. In \cite[Theorem 2.1]{lima_loureiro} it was shown, for these constrained hypergeometric parameters,
that $(W_1,W_2,\d x)$ is a Nikishin system on $(0,1)$, therefore an~AT system and consequently a perfect system.
As we will see, the min-max condition, $ \min(c,d)>\max (a,b)$, is not always required for our purposes.
For example, whenever
\begin{align}\label{eq:parameters_moments}
a,b,\delta>0
\end{align}
Equation 11 in \cite[\S2.21.1]{Prudnikov} gives for the moments the following expression
\begin{align}\label{eq:moments}
\int_0^1 x^{a+n-1}(1-x)^{\delta-1} \;\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-b,d-b \\\delta\end{NiceArray}};1-x\right]\d x=\frac{(a)_n(b)_n}{(c)_n(d)_n}, && n \in \mathbb{N}_0 .
\end{align}
This formula is very useful
when performing the computations for the Gauss--Borel factorization.
As we see, the min-max condition is more demanding than this result requires.
Moreover, this formula is used in \cite{lima_loureiro} to prove the multiple orthogonal relations, and also the expression for the $H$'s.
Orthogonality and bi-orthogonality directly give, for all $n \in \mathbb{N}_0$,
\begin{align}
\label{eq:H_2n}
H_{2n}&=\int_0^1B^{(2n)}(x)x^n w_1(x)\d\mu(x), &
H_{2n+1}&=\int_0^1B^{(2n+1)}(x)x^n w_2(x)\d\mu(x).
\end{align}
Using \eqref{eq:moments}, in \cite{lima_loureiro} the expressions \eqref{eq:H_2n}
were
given as generalized hypergeometric functions ${}_4F_3$ at unity. Then, the
Karlsson--Minton summation method \cite{minton,karlsson,miller,Karp_Prilepkina} was used in \cite[Equation (3.5)]{lima_loureiro} to evaluate the $H$'s and check their positivity. For complex numbers $\{\beta,f_1,\ldots,f_p\}\subset \mathbb{C}$
and positive integers $\{m_1,\ldots,m_p\}\subset \mathbb{N}$,
with $n\geq m_1+\cdots+m_p$, we have for the value at unity of the hypergeometric functions the following Minton summation
\begin{align*}
{}_{p+2}F_{p+1}\hspace*{-3pt}\left[\!\!\!\begin{array}{c}
\begin{NiceMatrix}[small]-n, &\beta,&f_1+m_1&\Ldots &, f_p+m_p \end{NiceMatrix}\\
\begin{NiceMatrix}[small]\beta+1,&f_1, &\Ldots&, f_p&\end{NiceMatrix}
\end{array}\!\!\!
;1\right]=\frac{n!(f_1-\beta)_{m_1}\cdots(f_p-\beta)_{m_p}}{(\beta+1)_n(f_1)_{m_1}\cdots(f_p)_{m_p}}.
\end{align*}
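Since the series terminates (because of the $-n$), the Minton summation can be checked exactly in rational arithmetic. The following sketch (not part of \cite{lima_loureiro}; the parameter values are ours) verifies the identity:

```python
from fractions import Fraction as F
from math import factorial

def poch(x, k):
    """Pochhammer symbol (x)_k = x (x + 1) ... (x + k - 1)."""
    r = F(1)
    for i in range(k):
        r *= x + i
    return r

def minton_lhs(n, beta, fs, ms):
    """Terminating {p+2}F{p+1} at unity with Minton-type parameters."""
    upper = [F(-n), beta] + [f + m for f, m in zip(fs, ms)]
    lower = [beta + 1] + list(fs)
    s = F(0)
    for j in range(n + 1):
        term = F(1, factorial(j))
        for u in upper:
            term *= poch(u, j)
        for l in lower:
            term /= poch(l, j)
        s += term
    return s

def minton_rhs(n, beta, fs, ms):
    """Closed form n! (f_1-beta)_{m_1}.../((beta+1)_n (f_1)_{m_1}...)."""
    r = F(factorial(n)) / poch(beta + 1, n)
    for f, m in zip(fs, ms):
        r *= poch(f - beta, m) / poch(f, m)
    return r

n, beta = 5, F(1, 2)
fs, ms = [F(7, 3), F(9, 4)], [1, 2]   # requires n >= m_1 + m_2
assert minton_lhs(n, beta, fs, ms) == minton_rhs(n, beta, fs, ms)
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the comparison.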
At this point we stress that although in \cite{lima_loureiro} it was required that $\Re\beta,\Re f_1,\ldots,\Re f_p>0$, that constraint is not really required, see \cite{minton,Karp_Prilepkina}. From Minton summation, formulas (3.7) and (3.8) in \cite{lima_loureiro} follow, and they~read
\begin{align}\label{eq:H_pochhammer_1}
\begin{cases}
\displaystyle
H_{2n} =\frac{(2n)!(a)_{2n}(b)_{2n}(d-a)_{n}(d-b)_n}{(c)_{3n}(d)_{3n}(d+n-1)_{2n}}, \\[.25cm]
\displaystyle
H_{2n+1}=\frac{(2n+1)!(a)_{2n+1}(b+1)_{2n}(c-a+1)_n(c-b)_{n+1}}{(c+1)_{3n+1}(c+n)_{2n+1}(d)_{3n+1}} ,
\end{cases}
&& n \in \mathbb{N}_0 .
\end{align}
In order to have a Gauss--Borel factorization of the moment matrix, which is equivalent to the perfectness of the system of weights, we must have $H_n\neq 0$, $n \in \mathbb{N}_0$ (positivity of the $H_n$ is sufficient but not necessary).
In this context, to derive these formulas we assume \eqref{eq:parameters_moments};
then the condition $d-a,d-b\not\in -\mathbb{N}_0$ gives $H_{2n}\neq 0$,
and to ensure $H_{2n+1}\neq 0$ we must have $c+1-a,c-b\not\in -\mathbb{N}_0$.
Therefore, perfectness is ensured whenever we have
\begin{align}\label{eq:region_parameters_pochhammer_perfect}
a,b,\delta &>0, &
d-a,d-b,c+1-a,c-b&\not\in -\mathbb{N}_0.
\end{align}
\subsubsection{Multiple orthogonal polynomials}
In \cite[Equations (3.2) and (3.1)]{lima_loureiro}, explicit expressions for the corresponding monic multiple orthogonal polynomials of type II were given for all $n \in \mathbb{N}_0$ by
\begin{align} \label{eq:polinomiohipergeometrico}
B^{(n)}(x)&=\sum_{j=0}^n(-1)^j\binom{n}{j}\frac{(a+n-j)_j(b+n-j)_j}{\big(c+n+\big\lfloor\frac{n}{2}\big\rfloor-j\big)_j\big(d+n+\big\lfloor\frac{n-1}{2}\big\rfloor-j\big)_j}x^{n-j}
\\ \notag
&=
(-1)^n
\frac{(a)_n(b)_n}{\big(c+\big\lfloor\frac{n}{2}\big\rfloor\big)_n\big(d+\big\lfloor\frac{n-1}{2}
\big\rfloor\big)_n }\,
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; c+\big\lfloor\frac{n}{2}\big\rfloor,\;d+\big\lfloor\frac{n-1}{2}\big\rfloor \\a,\;b\end{NiceArray}};x\right] .
\end{align}
The second expression is given in terms of the generalized hypergeometric function $\tensor[_3]{F}{_2}$.
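Both representations can be compared exactly in rational arithmetic; the sketch below (our own check, with illustrative parameter values) confirms that the finite sum and the terminating $\tensor[_3]{F}{_2}$ define the same polynomial:

```python
from fractions import Fraction as F
from math import comb

def poch(x, k):
    """Pochhammer symbol (x)_k."""
    r = F(1)
    for i in range(k):
        r *= x + i
    return r

def B_sum(n, a, b, c, d, x):
    """Type II polynomial from the explicit finite sum."""
    s = F(0)
    for j in range(n + 1):
        s += (F(-1) ** j * comb(n, j)
              * poch(a + n - j, j) * poch(b + n - j, j)
              / (poch(c + n + n // 2 - j, j) * poch(d + n + (n - 1) // 2 - j, j))
              * x ** (n - j))
    return s

def B_3F2(n, a, b, c, d, x):
    """Same polynomial via the terminating 3F2 representation."""
    pre = (F(-1) ** n * poch(a, n) * poch(b, n)
           / (poch(c + n // 2, n) * poch(d + (n - 1) // 2, n)))
    s = F(0)
    for j in range(n + 1):
        s += (poch(F(-n), j) * poch(c + n // 2, j) * poch(d + (n - 1) // 2, j)
              / (poch(a, j) * poch(b, j) * poch(F(1), j)) * x ** j)
    return pre * s

a, b, c, d = F(4, 3), F(5, 3), F(2), F(5, 2)  # sample admissible parameters
x = F(3, 7)
for n in range(6):
    assert B_sum(n, a, b, c, d, x) == B_3F2(n, a, b, c, d, x)
```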
These polynomials satisfy, for $n \in \mathbb{N}$, the orthogonality relations
\begin{align*}
\int_0^1B^{(n)}(x) x^k W_1(x)\d x &=0, & k&\in\Big\{0,\ldots,\Big\lfloor\frac{n-1}{2}\Big\rfloor \Big\} ,&
\int_0^1B^{(n)}(x)x^k W_2(x)\d x &=0, &k&\in\Big\{0,\ldots,\Big\lfloor\frac{n}{2}\Big\rfloor-1 \Big\} .
\end{align*}
The proof in \cite{lima_loureiro} of the orthogonality of these polynomials only requires \eqref{eq:moments}, so we just need the parameters to satisfy \eqref{eq:region_parameters_pochhammer_perfect}.
The type I orthogonal polynomials $A^{(n)}_{1}(x)$ and $A_2^{(n)}(x)$ and the associated linear forms
\begin{align*}
Q^{(n)}(x)= W_1(x)A^{(n)}_{1}(x)+ W_2(x)A^{(n)}_{2}(x), && n \in \mathbb{N}_0,
\end{align*}
which satisfy the biorthogonality
$ \int_0^1B^{(n)}(x) Q^{(m)}(x)\d x=\delta_{n , m}$,
were found in \cite[equation (2.16)]{lima_loureiro} to be determined by the following Rodrigues-type formula
\begin{align}\label{eq:LL_Rodrigues}
Q^{(n)}(x)=\frac{(-1)^n}{n!}\frac{\d^n}{\d x^n}\mathscr{w}\Big(x;a+n,b+n;c+\Big\lfloor\frac{n+1}{2}\Big\rfloor+n,d+\Big\lfloor\frac{n}{2}\Big\rfloor+n\Big), && n \in \mathbb{N}_0 .
\end{align}
We have $ \deg A^{(n)}_1=\big\lfloor\frac{n}{2}\big\rfloor,$ and $\deg A^{(n)}_2=\big\lfloor\frac{n-1}{2}\big\rfloor$, with $A^{(0)}_2 = 0$,
and
\begin{align*}
\int_0^1x^k Q^{(n)}(x)\d x&=0, & k& \in\{0,\ldots,n-1 \}, &
\int_0^1x^{n} Q^{(n)}(x)\d x&=1.
\end{align*}
A convenient alternative pair of weights is
\begin{align} \label{eq:measures}
w_1(x)&:=\;\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-b,d-b \\\delta\end{NiceArray}};1-x\right], &
w_2(x)&:=\frac{c}{b}\;\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-b,d-b-1 \\\delta\end{NiceArray}};1-x\right],
\end{align}
where now we have
\begin{align}\label{eq:mu}
\d\mu(x)&:= \frac{\Gamma(c)\Gamma(d)}{\Gamma(a)\Gamma(b)\Gamma(\delta)}x^{a-1}(1-x)^{\delta-1}\d x.
\end{align}
Defining
\begin{align} \label{eq:polinomio}
q^{(n)}(x)= w_1(x)A^{(n)}_{1}(x)+ w_2(x)A^{(n)}_{2}(x)
\end{align}
we have
\begin{align*}
\int_0^1B^{(n)}(x) q^{(m)}(x)\d \mu(x)=\delta_{n,m} , && n,m \in \mathbb{N}_0 ,
\end{align*}
and
\begin{align*}
\int_0^1x^k q^{(n)}(x)\d \mu(x)&=0, & k&\in\{0, \ldots , n-1 \},&
\int_0^1x^{n} q^{(n)}(x)\d \mu(x)&=1.
\end{align*}
\subsubsection{The hypergeometric Jacobi matrix}
In \cite[Theorem 3.5]{lima_loureiro} it is shown that the Jacobi matrix \eqref{eq:Jacobi} describing the higher-order recurrence relation of these hypergeometric multiple orthogonal polynomials is nonnegative.
Indeed, they show that for $n \in \mathbb{N}_0$,
\begin{align}\label{eq:jacobi_hyper_coeff}
\beta_n =\lambda_{3n}+\lambda_{3n+1}+\lambda_{3n+2},
&&
\alpha_{n+1}=(\lambda_{3n+1}
+\lambda_{3n+2})\lambda_{3n+3}+\lambda_{3n+2}\lambda_{3n+4},
&&
\gamma_{n+1}=\lambda_{3n+2}\lambda_{3n+4}\lambda_{3n+6},
\end{align}
where the $\lambda$'s are defined in terms of the sequence
\begin{align*}
c_n=\begin{cases}
c+k, & n=2k-1,\\
d+k , & n=2k,
\end{cases} &&
k \in \mathbb{N} ,
\end{align*}
as follows
\begin{align}\label{eq:lambdas}
\begin{cases}
\displaystyle
\lambda_{3n} =\frac{n(b+n-1)(c_n-a-1)}{(c_n+n-2)(c_n+n-1)(c_{n-1}+n-1)}, \\[.25cm]
\displaystyle
\lambda_{3n+1} =\frac{n(a+n)(c_{n-1}-b)}{(c_n+n-1)(c_{n-1}+n-1)(c_{n-1}+n)}, \\[.25cm]
\displaystyle
\lambda_{3n+2}=\frac{(a+n)(b+n)(c_n-1)}{(c_n+n-1)(c_n+n)(c_{n-1}+n)} ,
\end{cases}
&&
n \in \mathbb{N}_0 .
\end{align}
An important point of this paper is to find when these recursion coefficients are nonnegative. As was stated in \cite{lima_loureiro}, for $a,b,c,d>0$, if the min-max condition \eqref{eq:min-max} is fulfilled, then the $\lambda$'s are positive and the Jacobi matrix is nonnegative.
However, this condition is again too restrictive. For example, the region
\begin{align}\label{eq:positivity_jacobi}
a,b&>0, & d&>\max(a,b), & c&\geq a, &c+1\geq b,
\end{align}
is such that \eqref{eq:region_parameters_pochhammer_perfect} holds, i.e. the system of weights is perfect, and the $\lambda$'s are nonnegative, so that the Jacobi matrix is nonnegative.
Moreover, setting
$ \kappa=\frac{4}{27}$,
we have
$ \lim\limits_{n\to\infty}\lambda_n=\kappa$
and, consequently,
\begin{align*}
\lim_{n\to\infty}\beta_n&=3\kappa,&
\lim_{n\to\infty}\alpha_n&=3\kappa^2,&
\lim_{n\to\infty}\gamma_n&=\kappa^3.
\end{align*}
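These limits can be checked numerically; in the following sketch the parameter values are illustrative choices satisfying \eqref{eq:positivity_jacobi}:

```python
a, b, c, d = 1.1, 0.7, 2.3, 1.9  # illustrative admissible parameters

def c_seq(n):
    # c_n = c + k for n = 2k - 1 and c_n = d + k for n = 2k
    return c + (n + 1) // 2 if n % 2 else d + n // 2

def lam(j):
    """lambda_j from eq:lambdas, with j = 3n + r."""
    n, r = divmod(j, 3)
    cn, cm = c_seq(n), c_seq(n - 1)
    if r == 0:
        return n * (b + n - 1) * (cn - a - 1) / ((cn + n - 2) * (cn + n - 1) * (cm + n - 1))
    if r == 1:
        return n * (a + n) * (cm - b) / ((cn + n - 1) * (cm + n - 1) * (cm + n))
    return (a + n) * (b + n) * (cn - 1) / ((cn + n - 1) * (cn + n) * (cm + n))

kappa = 4 / 27
N = 10**4
beta_N = lam(3*N) + lam(3*N + 1) + lam(3*N + 2)
alpha_N = (lam(3*N + 1) + lam(3*N + 2)) * lam(3*N + 3) + lam(3*N + 2) * lam(3*N + 4)
gamma_N = lam(3*N + 2) * lam(3*N + 4) * lam(3*N + 6)
assert abs(lam(3*N) - kappa) < 1e-3
assert abs(beta_N - 3 * kappa) < 1e-3
assert abs(alpha_N - 3 * kappa**2) < 1e-3
assert abs(gamma_N - kappa**3) < 1e-3
```

The deviation from the limits decays like $1/n$, so at $n=10^4$ it is already well below the tolerance used above.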
As we have seen, the coefficients of the Jacobi matrix associated with the hypergeometric multiple orthogonal polynomials share their limit with the ones of the Jacobi--Piñeiro case
studied in \cite{bfmaf}.
In \cite{lima_loureiro} the authors comment that the Piñeiro reduction $\gamma=0$ of the Jacobi--Piñeiro multiple orthogonal polynomials ---with parameters $(\alpha,\beta,\gamma)$ with $\alpha,\beta,\gamma>-1$ and such that $\alpha-\beta\not\in\mathbb{Z}$--- corresponds to a particular limit case of these hypergeometric multiple orthogonal polynomials. In fact, for the particular choice of the hypergeometric parameters
$c=a$, and $d=b+1$
one gets the Piñeiro weights~\cite{pineiro}
$w_1=b x^{b-1}$ and $w_2=ax^{a-1}$. These Piñeiro weights have a nonnegative Jacobi matrix whenever $|a-b|<1$ (cf. \cite{bfmaf}).
With this choice we have $\delta=1>0$ but we cannot meet the required min-max condition \eqref{eq:min-max}.
However, the relaxed conditions \eqref{eq:positivity_jacobi} are satisfied. Now, in terms of $a,b$ we have
\begin{align}\label{eq:positivity_piñeiro}
b+1&>\max(a,b), & a&\geq a, &a+1\geq b,
\end{align}
If $a\leq b$ we only need to require $b\leq a+1$, and if $a\geq b$ we only need to check that $b+1>a$; these are precisely the conditions $|b-a|<1$. We stress that the Piñeiro case satisfies \eqref{eq:region_parameters_pochhammer_perfect}, thus the moments are quotients of Pochhammer symbols as in \eqref{eq:moments} and the $H$'s do not vanish (cf. \cite[Corollary 3]{bfmaf}),
\begin{align}\label{eq:H_pochhammer_pineiro}
\begin{cases}
H_{2n} =\dfrac{n! (2n)! (a)_{2n}(b)_{2n}(b+1-a)_{n}}{(a)_{3n}(b+1)_{3n}(b+n)_{2n}}, \\[.25cm]
H_{2n+1}=\dfrac{n! (2n+1)! (a)_{2n+1}(b+1)_{2n}(a-b)_{n+1}}{(a+1)_{3n+1}(a+n)_{2n+1}(b+1)_{3n+1}},
\end{cases}
&& n \in \mathbb{N}_0 .
\end{align}
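That \eqref{eq:H_pochhammer_pineiro} is the specialization $c=a$, $d=b+1$ of \eqref{eq:H_pochhammer_1} can be checked exactly; the sketch below (our own verification, with sample parameters obeying $|a-b|<1$) compares both closed forms:

```python
from fractions import Fraction as F
from math import factorial

def poch(x, k):
    r = F(1)
    for i in range(k):
        r *= x + i
    return r

a, b = F(3, 4), F(6, 5)   # sample parameters with |a - b| < 1
c, d = a, b + 1           # the Piñeiro reduction

def H_general(n):
    """H_n from the general hypergeometric closed form."""
    if n % 2 == 0:
        m = n // 2
        return (F(factorial(2*m)) * poch(a, 2*m) * poch(b, 2*m)
                * poch(d - a, m) * poch(d - b, m)
                / (poch(c, 3*m) * poch(d, 3*m) * poch(d + m - 1, 2*m)))
    m = (n - 1) // 2
    return (F(factorial(2*m + 1)) * poch(a, 2*m + 1) * poch(b + 1, 2*m)
            * poch(c - a + 1, m) * poch(c - b, m + 1)
            / (poch(c + 1, 3*m + 1) * poch(c + m, 2*m + 1) * poch(d, 3*m + 1)))

def H_pineiro(n):
    """H_n from the Piñeiro closed form."""
    if n % 2 == 0:
        m = n // 2
        return (F(factorial(m)) * factorial(2*m) * poch(a, 2*m) * poch(b, 2*m)
                * poch(b + 1 - a, m)
                / (poch(a, 3*m) * poch(b + 1, 3*m) * poch(b + m, 2*m)))
    m = (n - 1) // 2
    return (F(factorial(m)) * factorial(2*m + 1) * poch(a, 2*m + 1) * poch(b + 1, 2*m)
            * poch(a - b, m + 1)
            / (poch(a + 1, 3*m + 1) * poch(a + m, 2*m + 1) * poch(b + 1, 3*m + 1)))

for n in range(12):
    assert H_general(n) == H_pineiro(n)
```

Under the reduction, $(d-b)_n=(1)_n=n!$ and $(c-a+1)_n=(1)_n=n!$, which is where the extra factorials in \eqref{eq:H_pochhammer_pineiro} come from.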
Following \cite{lima_loureiro} we see that for the hypergeometric tuple
$\Big(\frac{4}{3},\frac{5}{3},2,\frac{5}{2}\Big)$
we have that
$ \beta_n=3\kappa$, $\alpha_{n+1}=3\kappa^2$ and $\gamma_n=\kappa^3$.
That is, the Jacobi matrix is uniform along its diagonals, a Toeplitz matrix
\begin{align}\label{eq:Jacobi_Jacobi_Piñeiro_uniform}
{ J }
= \left(\begin{NiceMatrix}
3\kappa& 1 & 0 & \Cdots & & \\[-5pt]
3\kappa^2&3\kappa& 1 & \Ddots& & \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right).
\end{align}
We refer to this matrix as the \emph{asymptotic uniform Jacobi matrix}.
That is, the hypergeometric multiple orthogonal polynomials contain this limit case as a particular case, in contrast with the Jacobi--Piñeiro case, where no set of parameters reproduces such a uniform Jacobi matrix.
Notice that
$ \delta=2+\frac{5}{2}-\frac{4}{3}-\frac{5}{3}=\frac{3}{2}>0$
and
$ \min(c,d)=2>\frac{5}{3}=\max(a,b)$.
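The Toeplitz property for this tuple can be verified exactly; in the sketch below (ours) the values $c_0=d$, $c_{-1}=c$, extending the sequence $c_n$ to the indices needed at $n=0$, are our reading of \eqref{eq:lambdas}:

```python
from fractions import Fraction as F

a, b, c, d = F(4, 3), F(5, 3), F(2), F(5, 2)
kappa = F(4, 27)

def c_seq(n):
    # c_n = c + k for n = 2k - 1, c_n = d + k for n = 2k;
    # c_0 = d and c_{-1} = c extend the sequence to small indices
    return c + (n + 1) // 2 if n % 2 else d + n // 2

def lam(j):
    n, r = divmod(j, 3)
    cn, cm = c_seq(n), c_seq(n - 1)
    if r == 0:
        return n * (b + n - 1) * (cn - a - 1) / ((cn + n - 2) * (cn + n - 1) * (cm + n - 1))
    if r == 1:
        return n * (a + n) * (cm - b) / ((cn + n - 1) * (cm + n - 1) * (cm + n))
    return (a + n) * (b + n) * (cn - 1) / ((cn + n - 1) * (cn + n) * (cm + n))

for n in range(20):
    beta_n = lam(3*n) + lam(3*n + 1) + lam(3*n + 2)
    alpha_np1 = (lam(3*n + 1) + lam(3*n + 2)) * lam(3*n + 3) + lam(3*n + 2) * lam(3*n + 4)
    gamma_np1 = lam(3*n + 2) * lam(3*n + 4) * lam(3*n + 6)
    assert beta_n == 3 * kappa
    assert alpha_np1 == 3 * kappa**2
    assert gamma_np1 == kappa**3
```

With exact rational arithmetic the equalities hold identically, not just approximately.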
The perfect system $(W_1,W_2,\d x)$ is
\begin{align} \label{eq:pesosLL}
\left\{
\begin{aligned}
W_1(x)&=\frac{81\sqrt 3}{16\pi}\sqrt[3]{x}\left(\sqrt[3]{1+\sqrt{1-x}}-\sqrt[3]{1-\sqrt{1-x}}\right),
\\
W_2(x)&=\frac{243\sqrt 3}{160\pi}\sqrt[3]{x}\left(\left(\sqrt[3]{1+\sqrt{1-x}}\,\right)^4-\left(\sqrt[3]{1-\sqrt{1-x}}\,\right)^4\right).
\end{aligned}
\right.
\end{align}
\subsection{Random walk multiple orthogonal polynomials}
We now describe the main results we derived in our previous paper \cite{bfmaf} extending the results of Karlin and McGregor to multiple orthogonal polynomials.
We reproduce \cite[Theorems 1 \& 2]{bfmaf} in a convenient form for the aims of this paper.
\begin{teo}
\label{pro:sigma_spectral}
Let us assume that
\begin{enumerate}
\item The Jacobi matrix $ { J } $ is nonnegative.
\item The values at unity of the polynomials and of the linear forms are positive, i.e. $B^{(n)}(1)>0$ and $q^{(n)}(1)>0$.
\end{enumerate}
Then, the diagonal matrices
\begin{align*}
\sigma_{II}&=
\operatorname{diag}
\begin{pNiceMatrix}
\sigma_{II,0},\sigma_{II,1},\ldots
\end{pNiceMatrix},
& \sigma_{II,n}&:=\frac{1}{B^{(n)}(1)},&
\sigma_I&=
\operatorname{diag}
\begin{pNiceMatrix}
\sigma_{I,0},\sigma_{I,1},\ldots
\end{pNiceMatrix},
& \sigma_{I,n}:=\frac{1}{q^{(n)}(1)} ,
\end{align*}
are such that
\begin{align*}
P_{II}&:=\sigma_{II} { J } \, \sigma_{II} ^{-1}, & P_{II,n,m}&=\frac{B^{(m)}(1)}{B^{(n)}(1)} J_{n,m},&
P_{I}&:=\sigma_I { J } ^\top\sigma_I^{-1}, & P_{I,n,m}&=\frac{q^{(m)}(1)}{q^{(n)}(1)}J_{m,n} ,
\end{align*}
are stochastic matrices.
\end{teo}
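For the hypergeometric family of the previous section, the stochasticity of $P_{II}$ can be tested numerically: the rows of $P_{II}$ sum to one precisely when $B^{(n+1)}(1)+\beta_nB^{(n)}(1)+\alpha_nB^{(n-1)}(1)+\gamma_{n-1}B^{(n-2)}(1)=B^{(n)}(1)$, which is the recurrence at $x=1$. The sketch below (our own check; the values $c_0=d$, $c_{-1}=c$ extending the sequence $c_n$ are our assumption) verifies this identity exactly:

```python
from fractions import Fraction as F
from math import comb

a, b, c, d = F(4, 3), F(5, 3), F(2), F(5, 2)  # sample admissible parameters

def poch(x, k):
    r = F(1)
    for i in range(k):
        r *= x + i
    return r

def B1(n):
    """B^{(n)}(1) from the explicit finite-sum expression."""
    return sum(F(-1)**j * comb(n, j)
               * poch(a + n - j, j) * poch(b + n - j, j)
               / (poch(c + n + n // 2 - j, j) * poch(d + n + (n - 1) // 2 - j, j))
               for j in range(n + 1))

def c_seq(n):
    # c_0 = d, c_{-1} = c extend the sequence to the indices needed for small n
    return c + (n + 1) // 2 if n % 2 else d + n // 2

def lam(j):
    n, r = divmod(j, 3)
    cn, cm = c_seq(n), c_seq(n - 1)
    if r == 0:
        return n * (b + n - 1) * (cn - a - 1) / ((cn + n - 2) * (cn + n - 1) * (cm + n - 1))
    if r == 1:
        return n * (a + n) * (cm - b) / ((cn + n - 1) * (cm + n - 1) * (cm + n))
    return (a + n) * (b + n) * (cn - 1) / ((cn + n - 1) * (cn + n) * (cm + n))

def beta(n):  return lam(3*n) + lam(3*n + 1) + lam(3*n + 2)
def alpha(n): return (lam(3*n - 2) + lam(3*n - 1)) * lam(3*n) + lam(3*n - 1) * lam(3*n + 1)
def gamma(n): return lam(3*n - 1) * lam(3*n + 1) * lam(3*n + 3)

# row n of P_II sums to one  <=>  the recurrence holds at x = 1
for n in range(2, 8):
    assert B1(n + 1) + beta(n) * B1(n) + alpha(n) * B1(n - 1) + gamma(n - 1) * B1(n - 2) == B1(n)
```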
We also have \cite[Theorems 6 \& 7]{bfmaf}, which for the reader's convenience we reproduce here.
\begin{teo}[KMcG representation formula]\label{teo:KMcG}
Let us assume the conditions requested in Theorem~\ref{pro:sigma_spectral}.
Then, for random walks with Markov matrices $P_{II}$ and $P_I$, the transition probabilities after~$r$ transitions from state~$n$ to state~$m$ are given respectively by
\begin{align*}
P_{II,nm}^r &=\frac{ B^{(m)}(1)}{B^{(n)}(1)} \int_\Delta x^rB^{(n)}(x) q^{(m)}(x)\d \mu(x),&
P_{I,nm}^r&=\frac{q^{(m)}(1) }{ q^{(n)}(1)}
\int_\Delta x^rB^{(m)}(x) q^{(n)}(x)\d \mu(x) .
\end{align*}
\end{teo}
\begin{teo}\label{teo:KMcG2}
Let us assume the conditions requested in Theorem~\ref{pro:sigma_spectral}. Then, for $|s|<1$, the transition probability generating functions read~as
\begin{align*}
P_{II,nm}(s)&=\frac{B^{(m)}(1)}{B^{(n)}(1)}
\bigintssss_\Delta \frac{B^{(n)}(x) q^{(m)}(x)}{1-sx}\d \mu(x),& P_{I,nm}(s)&=\frac{q^{(m)}(1)}{q^{(n)}(1)}
\bigintssss_\Delta \frac{B^{(m)}(x) q^{(n)}(x)}{1-sx}\d \mu(x),
\end{align*}
respectively.
The first passage generating functions are given by
\begin{align*}
F_{II,nm}(s) =\frac{B^{(m)}(1)}{B^{(n)}(1)}
\dfrac{\bigintss_\Delta \dfrac{B^{(n)}(x) q^{ (m)}(x)}{1-sx}\d \mu(x)}{
\bigintss_\Delta \dfrac{B^{(m)}(x) q^{(m)}(x)}{1-sx}\d \mu(x)}, &&
F_{I,nm}(s) =\frac{q^{(m)}(1)}{q^{(n)}(1)}
\dfrac{\bigintss_\Delta \dfrac{B^{(m)}(x) q^{(n)}(x)}{1-sx}\d \mu(x)}{
\bigintss_\Delta \dfrac{B^{(m)}(x) q^{(m)}(x)}{1-sx}\d \mu(x)},
\end{align*}
for $n \neq m$, and for $n =m$ by
\begin{align*}
F_{nn}(s)=1-\frac{1}{\bigintss_\Delta \dfrac{B^{(n)}(x) q^{(n)}(x)}{1-sx}\d \mu(x)} .
\end{align*}
The last expression $F_{nn}(s)$ is valid for both Markov chains of types I and II.
\end{teo}
\section{Ratio asymptotics}\label{S:ratio_asymptotics}
In this section we assume \eqref{eq:region_parameters_pochhammer_perfect}, so that the system of weights is perfect.
\subsection{On the linear forms at unity and Rodrigues type formula}
In \cite{lima_loureiro}, multiple orthogonality for the measures $\tilde w_1(x)\d x$ and $\tilde w_2(x)\d x$ supported on $[0,1]$ was considered for the step-line, i.e., one takes $\vec n=(1,1)$ (in the notation of \cite{afm,bfm,bfmaf}).
Here we use the weights $w_1$, $w_2$ defined in \eqref{eq:measures}.
\begin{teo}[Type I linear forms at unity]\label{teo:linear_forms_at_unity}
For $n\in\mathbb{N}_0$, the linear forms \eqref{eq:polinomio} at unity have the following values
\begin{align*}
q^{(2n)}(1) = \frac{1}{(2n)!}\frac{(c)_{3n}(d)_{3n}}{(a)_{2n}(b)_{2n}},
&&
q^{(2n+1)}(1) = \frac{1}{(2n+1)!}\frac{(c)_{3n+2}(d)_{3n+1}}{(a)_{2n+1}(b)_{2n+1}} .
\end{align*}
\end{teo}
\begin{proof}
Recall that
\begin{align*}
Q^{(n)}(x)=\frac{\Gamma(c)\Gamma(d)}{\Gamma(a)\Gamma(b)\Gamma(\delta)}x^{a-1}(1-x)^{\delta-1}q^{(n)}(x)
\end{align*}
so that the mentioned Rodrigues-type formula \eqref{eq:LL_Rodrigues} can be rewritten as
\begin{align*}
q^{(2n)}(x) &=
\frac{1}{(2n)!}\frac{(c)_{3n}(d)_{3n}}{(a)_{2n}(b)_{2n}(\delta)_{2n}}
\frac{1}{x^{a-1}(1-x)^{\delta-1}}\frac{\d^{2n}}{\d x^{2n}}\Big(x^{a-1+2n}(1-x)^{\delta-1+2n}\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-b+n,d-b+n \\
\delta+2n\end{NiceArray}};1-x\right]\Big),
\\
q^{(2n+1)}(x) &=
-\frac{1}{(2n+1)!}\frac{(c)_{3n+2}(d)_{3n+1}}{(a)_{2n+1}(b)_{2n+1}(\delta)_{2n+1}}
\frac{1}{x^{a-1}(1-x)^{\delta-1}}\frac{\d^{2n+1}}{\d x^{2n+1}}\Big(x^{a+2n}(1-x)^{\delta+2n}\;\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-
b+n+1,d-b +n\\\delta+2n+1\end{NiceArray}};1-x\right]\Big).
\end{align*}
Let us expand the derivatives to get
\begin{subequations}\label{eq:Q}
\begin{align}\label{eq:Qodd}
& \begin{multlined}[t][.9\textwidth]
q^{(2n)}(x) =
\frac{1}{(2n)!}\frac{(c)_{3n}(d)_{3n}}{(a)_{2n}(b)_{2n}(\delta)_{2n}}
\\
\times \sum_{k=0}^{2n}\sum_{l=0}^k
(-1)^{l+k}\binom{2n}{k}\binom{k}{l} (a+2n-k+l)_{k-l}(\delta+2n-l)_l x^{2n-k+l}(1-x)^{2n-l}\\\times
\tensor*[_2]{F}{_1^{(2n-k)}}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-
b+n,d-b+n \\\delta+2n\end{NiceArray}};1-x\right],
\end{multlined}
\\ \label{eq:Qeven}
& \begin{multlined}[t][.9\textwidth]
q^{(2n+1)}(x) =
\frac{1}{(2n+1)!}\frac{(c)_{3n+2}(d)_{3n+1}}{(a)_{2n+1}(b)_{2n+1}(\delta)_{2n+1}}\\\times
\sum_{k=0}^{2n+1}\sum_{l=0}^k
(-1)^{l+k}\binom{2n+1}{k}\binom{k}{l} (a+2n+1-k+l)_{k-l}(\delta+2n+1-l)_l x^{2n+1-k+l}(1-x)^{2n+1-l}\\\times
\tensor*[_2]{F}{_1^{(2n+1-k)}}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-
b+n+1,d-b+n \\\delta+2n+1\end{NiceArray}};1-x\right],
\end{multlined}
\end{align}
\end{subequations}
where $\tensor*[_2]{F}{_1^{(m)}}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]\alpha,\; \beta \\\gamma \end{NiceArray}};1-x\right]$, stands for the $m$-th $x$-derivative of $\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]\alpha,\; \beta \\\gamma \end{NiceArray}};x\right]$ when evaluated at~$1-x$.
Hence, when evaluating at unity almost all terms vanish due to the presence of a factor $(1-x)^N$ for some natural number $N$; the only surviving summands are those corresponding to $l=k=2n$ and $l=k=2n+1$, respectively.
These summands contribute, from the double sums, the values $(\delta)_{2n}$ and $(\delta)_{2n+1}$, respectively, which cancel the corresponding factors in the prefactors; the announced result then follows immediately.
\end{proof}
\begin{coro}\label{coro:positivity}
Whenever $a,b,c,d>0$, the linear forms when evaluated at unity are positive numbers
\begin{align*}
q^{(n)}(1)&>0, & n&\in\mathbb{N}_0.
\end{align*}
\end{coro}
\begin{coro}\label{cor:expressions_type_I}
The linear forms \eqref{eq:polinomio} can be expressed as
\begin{align*}
& \begin{multlined}[t][0.95\textwidth]
q^{(2n)}(x) =
\frac{1}{(2n)!}\frac{(c)_{3n}(d)_{3n}}{(a)_{2n}(b)_{2n}(\delta)_{2n}}\\ \times
\sum_{k=0}^{2n}\sum_{l=0}^k
(-1)^{l+k}\binom{2n}{k}\binom{k}{l} (a+2n-k+l)_{k-l}(\delta+2n-l)_l x^{2n-k+l}(1-x)^{2n-l}\\
\times \frac{(c - b+n)_{2n-k}(d-b+n)_{2n-k}}{(\delta+2n)_{2n-k}}\,\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c- b+3n-k , d-b+3n -k\\
\delta+4n-k\end{NiceArray}};1-x\right],
\end{multlined}
\\
& \begin{multlined}[t][0.95\textwidth]
q^{(2n+1)}(x) =
\frac{1}{(2n+1)!}\frac{(c)_{3n+2}(d)_{3n+1}}{(a)_{2n+1}(b)_{2n+1}(\delta)_{2n+1}}\\ \times
\sum_{k=0}^{2n+1}\sum_{l=0}^k
(-1)^{l+k}\binom{2n+1}{k}\binom{k}{l} (a+2n+1-k+l)_{k-l}(\delta+2n+1-l)_l x^{2n+1-k+l}(1-x)^{2n+1-l}\\
\times \frac{(c- b+n+1)_{2n-k+1}(d-b+n)_{2n-k+1}}{(\delta+2n+1)_{2n-k+1}}\,
\,\tensor[_2]{F}{_1}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]
c-b+3n+2-k,d-b+3n+1-k \\\delta+4n+2-k\end{NiceArray}};1-x\right] .
\end{multlined}
\end{align*}
\end{coro}
\begin{proof}
The result follows from the well-known equation
\begin{align*}
\frac{\d^m}{\d x^m}\tensor[_2]{F}{_1}\!\!\left[{\begin{NiceArray}{c}[small]\alpha,\; \beta \\\gamma \end{NiceArray}};x\right]=
\frac{(\alpha)_m(\beta)_m}{(\gamma)_m}\,{}_2F_1\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]\alpha+m , \beta +m\\\gamma +m\end{NiceArray}};x\right] ,
\end{align*}
applied to \eqref{eq:Q}.
\end{proof}
For the next corollary we require the system of weights to be such that the zeros are confined to the support of the measure; for example, this happens whenever the system of weights is an AT system.
This holds when the min-max constraint \eqref{eq:min-max} is assumed, so that the system is Nikishin \cite{lima_loureiro}, but it also holds for other choices of hypergeometric parameters, such as those leading to the Piñeiro multiple orthogonal polynomials.
\begin{coro}\label{coro:psoitivity>1}
Whenever $a,b,c,d>0$, the linear forms fulfill $q^{(n)}(x)>0$, for all $n \in \mathbb{N}_0$ and $x\geq 1$.
\end{coro}
\begin{proof}
As the zeros of $q^{(n)}(x)$ belong to $(0,1)$, the result follows from Corollary \ref{coro:positivity}.
\end{proof}
\subsection{Ratio asymptotics at unity of linear forms of type I }
\begin{coro}\label{coro:ratio_asymptotics_typeI}
The large $n$ ratio asymptotics for the type I hypergeometric linear forms at unity is
\begin{align*}
\lim_{n\to\infty} \frac{q^{(n+1)}(1)}{q^{(n)}(1)}=\frac{1}{2\kappa}=\frac{27}{8}.
\end{align*}
\end{coro}
\begin{proof}
It follows from the explicit values of the linear forms at unity. Indeed,
\begin{align*}
\hspace{-.25cm}
\lim_{n\to\infty} \frac{q^{(2n+1)}(1)}{q^{(2n)}(1)}&=\lim_{n\to\infty}\frac{(2n)!}{(2n+1)!} \frac{(c)_{3n+2}(d)_{3n+1}(a)_{2n}(b)_{2n}}{(c)_{3n}(d)_{3n}(a)_{2n+1}(b)_{2n+1}} =\lim_{n\to\infty}\frac{(c+3n)(c+3n+1)(d+3n)}{(2n+1)(a+2n)(b+2n)} =\frac{27}{8},\\
\hspace{-.25cm}
\lim_{n\to\infty} \frac{q^{(2n+2)}(1)}{q^{(2n+1)}(1)}&=\lim_{n\to\infty}\frac{(2n+1)!}{(2n+2)!} \frac{(c)_{3n+3}(d)_{3n+3}(a)_{2n+1}(b)_{2n+1}}{(c)_{3n+2}(d)_{3n+1}(a)_{2n+2}(b)_{2n+2}} =\lim_{n\to\infty}\frac{(c+3n+2)(d+3n+1)(d+3n+2)}{(2n+2)(a+2n+1)(b+2n+1)} =\frac{27}{8}.
\end{align*}
This is what we wanted to prove.
\end{proof}
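The closed forms of Theorem \ref{teo:linear_forms_at_unity} also allow a direct numerical check of this limit; the sketch below (ours, with illustrative admissible parameters) evaluates the ratio in logarithmic form to avoid overflow:

```python
from math import lgamma, exp

def lpoch(x, k):
    """log of the Pochhammer symbol (x)_k, for x > 0."""
    return lgamma(x + k) - lgamma(x)

a, b, c, d = 1.1, 0.7, 2.3, 1.9  # illustrative admissible parameters

def log_q1(n):
    """log of q^{(n)}(1) from the closed forms of the theorem."""
    if n % 2 == 0:
        m = n // 2
        return (lpoch(c, 3*m) + lpoch(d, 3*m)
                - lpoch(a, 2*m) - lpoch(b, 2*m) - lgamma(2*m + 1))
    m = (n - 1) // 2
    return (lpoch(c, 3*m + 2) + lpoch(d, 3*m + 1)
            - lpoch(a, 2*m + 1) - lpoch(b, 2*m + 1) - lgamma(2*m + 2))

n = 4000
ratio = exp(log_q1(n + 1) - log_q1(n))
assert abs(ratio - 27 / 8) < 1e-2
```

Working with `lgamma` differences keeps the computation stable even though $q^{(n)}(1)$ itself grows roughly like $(27/8)^n$.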
\subsection{Ratio asymptotics at unity of multiple orthogonal polynomials of type II}
A well-known classical tool in the theory of orthogonal polynomials is the Christoffel--Darboux (CD) kernel, see~\cite{simon}.
Multiple orthogonal polynomials also have Christoffel--Darboux kernels:
Sorokin and Van Iseghem~\cite{sorokin} derived a formula that can be applied to multiple orthogonal polynomials, see also~\cite{Coussement__VanAssche} and~\cite{tesis}. Daems and Kuijlaars derived a~CD formula for the mixed multiple case~\cite{daems-kuijlaars,daems-kuijlaars2} using a Riemann--Hilbert approach in the context of nonintersecting Brownian motions. Later on, the article~\cite{afm} reproduced the same result with an algebraic approach
in the framework of mixed multiple orthogonal polynomial sequences.
The~CD formula derived in~\cite{CD} is particularly well suited to our problem because it is expressed only in terms of a unique polynomial sequence.
\begin{pro}\label{pro:ratio_asymptotics_typeII}
The large $n$ ratio asymptotics for the type II hypergeometric multiple orthogonal polynomials at unity~is
\begin{align*}
\lim_{n\to\infty} \frac{B^{(n+1)}(1)}{B^{(n)}(1)}=2\kappa=\frac{8}{27}.
\end{align*}
\end{pro}
\begin{proof}
From the Christoffel--Darboux formula on the sequence, see \cite[Proposition 6]{bfmaf}, we find
\begin{multline}\label{eq:CD_QP_1}
q^{(n+1)}(1)B^{(n+2)}(1)
-q^{(n+2)}(1)J_{n+2,n}B^{(n)}(1) -q^{(n+2)}(1) J_{n+2,n+1}B^{(n+1)}(1)
-q^{(n+3)}(1)J_{n+3,n+1}B^{(n+1)}(1)=0,
\end{multline}
which allows us to apply the ideas in \cite[Theorem 4]{bfmaf}.
Notice that \eqref{eq:CD_QP_1} can be written as
\begin{align}\label{eq:CD_QP_2}
B^{(n+2)}(1) =
a_nB^{(n+1)}(1)+b_nB^{(n)}(1),
\end{align}
with
$
a_n :=\frac{q^{(n+2)}(1)}{q^{(n+1)}(1)} \alpha_{n+2}
+\frac{q^{(n+3)}(1)}{q^{(n+1)}(1)}\gamma_{n+2}$ and $
b_n :=\frac{q^{(n+2)}(1)}{q^{(n+1)}(1)}\gamma_{n+1}$.
From the explicit expressions for the hypergeometric type I linear forms at unity we find that
\begin{align*}
\lim_{n\to\infty}a_n&=\frac{1}{2\kappa}3\kappa^2+\frac{1}{4\kappa^2}\kappa^3=\frac{7\kappa}{4},&
\lim_{n\to\infty}b_n&=\frac{1}{2\kappa}\kappa^3=\frac{\kappa^2}{2}.
\end{align*}
Hence, we can apply Poincaré's theory \cite{poincare,montel,Norlund, Elaydi} for homogeneous linear recurrence relations with coefficients having finite large $n$ limits. The characteristic polynomial
$ r^2-\frac{7\kappa}{4}r-\frac{\kappa^2}{2}=\Big(r+\frac{1}{27}\Big)\Big(r-\frac{8}{27}\Big)$
has two simple roots, $-\frac{1}{27}$ and $\frac{8}{27}$, with distinct absolute values. Consequently, as the ratios form a sequence of positive numbers, following Poincaré we find that the sequence converges to the positive root~$\frac{8}{27}$.
\end{proof}
\begin{coro}[Ratio asymptotics for ${}_3F_2$]
We have the following large $n$ limit for the ratio of two generalized hypergeometric functions
\begin{align*}
\lim_{n\to\infty}\frac{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-2,\; c+n+1,\;d+n \\a,\;b\end{NiceArray}};1\right]}{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\; c+n,\;d+n \\a,\;b\end{NiceArray}};1\right] }=-2.
\end{align*}
\end{coro}
\begin{proof}
We have
\begin{align*}
B^{(2n)}(x)&=
\frac{(a)_{2n}(b)_{2n}}{(c+n)_{2n}(d+n-1)_{2n}}
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\; c+n,\;d+n-1 \\a,\;b\end{NiceArray}};x\right],&
B^{(2n+1)}(x)&=-
\frac{(a)_{2n+1}(b)_{2n+1}}{(c+n)_{2n+1}(d+n)_{2n+1}}
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\; c+n,\;d+n \\a,\;b\end{NiceArray}};x\right].
\end{align*}
The ratios we are interested in are
\begin{align*}
\frac{B^{(2n+1)}(1)}{B^{(2n)}(1)}&=-\frac{(a+2n)(b+2n)(d+n-1)}{(c+3n)(d+3n)(d+3n-1)}\frac{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\; c+n,\;d+n \\a,\;b\end{NiceArray}};1\right]}{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\; c+n,\;d+n-1 \\a,\;b\end{NiceArray}};1\right]},\\
\frac{B^{(2n+2)}(1)}{B^{(2n+1)}(1)}&=-\frac{(a+2n+1)(b+2n+1)(c+n)}{(c+3n+1)(c+3n+2)(d+3n+1)}\frac{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-2,\; c+n+1,\;d+n \\a,\;b\end{NiceArray}};1\right]}{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\; c+n,\;d+n \\a,\;b\end{NiceArray}};1\right] }.
\end{align*}
Notice that if the ratio asymptotics at unity, $ \lim\limits_{n\to\infty}\frac{B^{(2n+1)}(1)}{B^{(2n)}(1)}$, exists, then, in terms of $\kappa=\frac{4}{27}$, we have
\begin{align*}
\begin{aligned}
\lim_{n\to\infty} \frac{B^{(2n+1)}(1)}{B^{(2n)}(1)}&=- \kappa\lim_{n\to\infty}\frac{
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\; c+n,\;d+n \\a,\;b\end{NiceArray}};1\right]}{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\; c+n,\;d+n-1 \\a,\;b\end{NiceArray}};1\right]} , &
\lim_{n\to\infty} \frac{B^{(2n+2)}(1)}{B^{(2n+1)}(1)}&=- \kappa\lim_{n\to\infty}\frac{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-2,\; c+n+1,\;d+n \\a,\;b\end{NiceArray}};1\right]}{\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\; c+n,\;d+n \\a,\;b\end{NiceArray}};1\right]} ,
\end{aligned}
\end{align*}
and using Proposition \ref{pro:ratio_asymptotics_typeII} the result follows.
\end{proof}
\subsection{Ratio asymptotics in compact subsets}
For this section we need the system of weights to be such that the zeros are confined to the support of the measure; for example, this happens whenever the system of weights is an AT system.
\begin{rem}
In \cite[\S3.4]{lima_loureiro} it was shown that
\begin{align}\label{eq:B_F}
\lim_{n\to\infty}\frac{B^{(n+1)}(x)}{B^{(n)}(x)}&=-\frac{\kappa}{F(x)},
\end{align}
with
\begin{align}\label{eq:F}
F(x)&:=1-\frac{3}{2}\sqrt[3]{x}
\Big(\omega^2\sqrt[3]{\sqrt{1-x}-1}+\omega\sqrt[3]{-\sqrt{1-x}-1}\Big), &\omega&:=\Exp{\frac{2\pi\operatorname{i}}{3}},
\end{align}
uniformly in compact subsets of $\mathbb{C}\setminus[0,1]$. Taking $\sqrt[3]{-1}=-1$ we get
\begin{align*}
\lim_{x\to 1^+} F(x)=1+\frac{3}{2}(\omega^2+\omega)=1-\frac{3}{2}=-\frac{1}{2}.
\end{align*}
Consequently, we have
\begin{align*}
\lim_{x\to 1^+} \lim_{n\to\infty}\frac{B^{(n+1)}(x)}{B^{(n)}(x)}=2\kappa,
\end{align*}
so that using Proposition \ref{pro:ratio_asymptotics_typeII} we conclude
\begin{align*}
\lim_{x\to 1^+} \lim_{n\to\infty}\frac{B^{(n+1)}(x)}{B^{(n)}(x)}= \lim_{n\to\infty}\lim_{x\to 1^+}\frac{B^{(n+1)}(x)}{B^{(n)}(x)}.
\end{align*}
\end{rem}
\begin{rem}
As $B^{(n)}(x)>0$ for $x\geq1$, from \eqref{eq:B_F} we get that $F(x)<0$ for $x>1$.
Moreover, from \cite[Lemma~3.5]{Coussement_Coussment_VanAssche}, $F(x)$ is analytic in $\mathbb{C}\setminus[0,1]$.
\end{rem}
\begin{teo}\label{teo:ratio asymptotics_linear forms}
For $F(x)$ as in \eqref{eq:F}, we have
\begin{align*}
\lim_{n\to\infty}\frac{q^{(n+1)}(x)}{q^{(n)}(x)}=\frac{1}{2\kappa}\left(
F(x)-3+\sqrt{\frac{x}{\kappa}\frac{F(x)-4}{F(x)-1}}\,
\right),
\end{align*}
pointwise in $\mathbb{C}\setminus (-\infty,1]$. There exists an open dense subset $\Omega \subset \{x\in \mathbb{C}\setminus (-\infty,1]: F(x)\not\in [0,4] \}$ where the convergence is
uniform on compact sets.
\end{teo}
\begin{proof}From \cite[Proposition 6]{bfmaf} we get
\begin{align*}
q^{(n+1)}(x)\frac{B^{(n+2)}(x)}{B^{(n+1)}(x)}
-q^{(n+2)}(x)\Big(\gamma_{n+1}\frac{B^{(n)}(x) }{B^{(n+1)}(x)}+\alpha_{n+2}\Big)
-q^{(n+3)}(x)\gamma_{n+2}=0,
\end{align*}
which can be written as follows
\begin{align}\label{eq:CD_Q}
a_{n+1}(x) q^{(n+1)}(x)+b_{n+2}(x)q^{(n+2)}(x)+c_{n+3}q^{(n+3)}(x)=0,
\end{align}
where
\begin{align*}
\hspace*{-.25cm}
\begin{aligned}
a_n(x)&:=\frac{B^{(n+1)}(x)}{B^{(n)}(x)}\xrightarrow[n\to\infty]{}-\frac{\kappa}{F(x)},&
b_{n+2}(x)&:=-\gamma_{n+1}\frac{B^{(n)}(x) }{B^{(n+1)}(x)}- \alpha_{n+2}\xrightarrow[n\to\infty]{}\kappa^2(F(x)-3),&
c_{n+3}&:=-\gamma_{n+2}\xrightarrow[n\to\infty]{}-\kappa^3 ,
\end{aligned}
\end{align*}
which holds uniformly on compact sets of $\mathbb{C}\setminus[0,1]$ (in the $a_n(x)$ and $b_n(x)$ cases).
The characteristic equation for the third order linear homogeneous recurrence \eqref{eq:CD_Q} reads as
\begin{align*}
\kappa^2r^2-(F(x)-3)\kappa r+ \frac{1}{F(x)}=0 ,
\end{align*}
with roots given by
\begin{align*}
r_\pm(x)=\frac{F(x)-3\pm
\sqrt{
(F(x)-3)^2-\frac{4}{F(x)}
}}
{2 \kappa} .
\end{align*}
As $F(x)<0$ for $x>1$ we get $r_\pm(x)\gtrless 0$ for $x>1$. Given that $q^{(n)} (x)>0$ for $x\geq 1$, see Corollary \ref{coro:psoitivity>1}, we must have, according to Poincaré \cite{poincare}, that
$\lim_{n\to\infty}\frac{q^{(n+1)}(x)}{q^{(n)}(x)}=r_+(x)$
pointwise for $x>1$.
For the remaining $x\in\mathbb{C}\setminus \mathbb{R}$, Poincaré theory ensures that the pointwise limit exists and that its value must be $r_+(x)$ or $r_-(x)$.
Notice that the linear forms $q^{(n)}(x)$ have a branch cut along $(-\infty,0]$; moreover, as $(w_1,w_2)$ is an~AT~system, $q^{(n)}$ has all its zeros in $(0,1)$. Hence,
$f_n(x):=\frac{q^{(n+1)}(x)}{q^{(n)}(x)}$
is holomorphic in the domain $\Omega_1:= \mathbb{C}\setminus(-\infty,1]$.
Therefore, we have a sequence of holomorphic functions $\{f_n\}_{n=0}^\infty$ with
pointwise convergence in the domain~$\Omega_1$.
Now, according to Osgood's theorem (cf. \cite{Osgood,Beardon_Minda,Krantz}), the sequence converges, uniformly on compact subsets of an open dense subset of $\Omega_1$, to a holomorphic function.
Hence, as it converges to $r_+(x)$ on $(1,+\infty)$, by analytic continuation it converges pointwise in $\Omega_0=\{x\in \Omega_1: F(x)\not\in[0,4]\}$ to $r_+(x)$, which is holomorphic in that domain. Notice that the set where $0\leq F(x)\leq 4$ is the cut of $r_+$, determined by the cut of the square root in the function $r_+(x)$; on that set the function $r_+(x)$ ceases to be holomorphic.
From Osgood's theorem we also know that
there is an open dense subset $\Omega\subset \Omega_0\subset \Omega_1$, with $\overline \Omega= \overline \Omega_0= \overline \Omega_1=\mathbb{C}$, where the convergence can be taken to be uniform on compact sets.
Now, as $F=-\kappa\phi$ with $\phi$ given in \cite[Equation (3.5)]{Coussement_Coussment_VanAssche} and, according to \cite[Lemma 3.5]{Coussement_Coussment_VanAssche}, we have $x\phi=(1+\kappa\phi)^3$, we are led to $xF=\kappa(F-1)^3$, and, consequently, we find for the radicand in $r_+(x)$ the alternative expression
\begin{align*}
(F-3)^2-\frac{4}{F}=\frac{F^3-6F^2+9F-4}{F}=\frac{(F-1)^3-3(F-1)^2}{F}=\frac{x}{\kappa}\frac{F-4}{F-1} ,
\end{align*}
and from this we get the desired representation for the limit function.
\end{proof}
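The last algebraic manipulation can be tested exactly: choosing sample rational values of $F$ and defining $x$ through $xF=\kappa(F-1)^3$, both sides of the radicand identity agree. A sketch (ours):

```python
from fractions import Fraction as F

kappa = F(4, 27)

def identity_holds(Fv):
    """Check (F-3)^2 - 4/F == (x/kappa)(F-4)/(F-1) when x F = kappa (F-1)^3."""
    x = kappa * (Fv - 1) ** 3 / Fv
    lhs = (Fv - 3) ** 2 - F(4) / Fv
    rhs = (x / kappa) * (Fv - 4) / (Fv - 1)
    return lhs == rhs

assert all(identity_holds(v) for v in [F(-3), F(-1, 2), F(5), F(7, 3)])
```

Since both sides are rational functions of $F$ once $x$ is eliminated, agreement at finitely many sample points in exact arithmetic is consistent with the identity holding for all $F\neq 0,1$.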
\section{Multiple hypergeometric random walks}\label{S:randowm_walks}
In this section we assume \eqref{eq:positivity_jacobi}, i.e. $a,b>0$, $d>\max(a,b)$, $c\geq a$ and $c+1\geq b$,
so that the Jacobi matrix is nonnegative. Since the Jacobi matrix is nonnegative, the hypergeometric multiple orthogonal polynomials of Lima and Loureiro are in fact random walk multiple orthogonal polynomials.
\subsection{The stochastic transition matrices}
Following \cite{bfmaf}, one concludes that there are two stochastic matrices.
\begin{teo}\label{teo:JPII_stochastic}
Let us assume for the hypergeometric system that \eqref{eq:region_parameters_pochhammer_perfect} holds.~Then,
\begin{enumerate}
\item If the zeros of the polynomial sequence $\{ B^{(n)} \}_{n\in\mathbb{N}}$ are confined in $(0,1)$, the semi-infinite matrix
\begin{align}\label{eq:stochastic_hypergeometric_II}
P_{II}=
\left( \begin{NiceMatrix}[columns-width = .5cm
P_{II,0,0} & P_{II,0,1}& 0 & \Cdots & & \\
P_{II,1,0}&P_{II,1,1}& P_{II,1,2}& \Ddots& & \\[5pt]
P_{II,2,0}& P_{II,2,1}& P_{II,2,2} & P_{II,2,3}&&\\[5pt]
0& P_{II,3,1}&P_{II,3,2}& P_{II,3,3} & P_{II,3,4}& \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right)
\end{align}
with coefficients given in terms of the Jacobi matrix~\eqref{eq:jacobi_hyper_coeff} and the multiple orthogonal polynomials of type~II evaluated at unity, $B^{(n)}(1)$ for $n \in\mathbb{N}_0$, as~follows
\begin{align*}
P_{II,n,n+1} & = \frac{B^{(n+1)}(1)}{B^{(n)}(1)}, &
P_{II,n,n} & = \beta_{n}, &
P_{II,n+1,n} & = \frac{B^{(n)}(1)}{B^{(n+1)}(1)}\alpha_{n+1}, &
P_{II,n+2,n}&= \frac{B^{(n)}(1)}{B^{(n+2)}(1)}\gamma_{n+1},
\end{align*}
is a multiple stochastic matrix of type~II.
\item
The semi-infinite matrix
\begin{align}\label{eq:stochastic_Jacobi_Piñeiro_I}
P_I=\left(\begin{NiceMatrix}[columns-width = .5cm
P_{I,0,0} & P_{I,0,1}& P_{I,0,2} & 0 & \Cdots & \\
P_{I,1,0}&P_{I,1,1}& P_{I,1,2}& P_{I,1,3}& \Ddots& \\[5pt]
0& P_{I,2,1}& P_{I,2,2} & P_{I,2,3} & P_{I,2,4}&\\
\Vdots & \Ddots&P_{I,3,2}& P_{I,3,3} & P_{I,3,4}& \Ddots\\
&&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right)
\end{align}
with coefficients expressed in terms
of the Jacobi matrix~\eqref{eq:jacobi_hyper_coeff} and the linear forms of type~I evaluated at unity, $Q^{(n)}(1)$, for $n\in\mathbb{N}_0$, as~follows
\begin{align*}
P_{I,n,n+2}&= \frac{q^{(n+2)}(1)}{q^{(n)}(1)}\gamma_{n+1}, &
P_{I,n,n+1}&= \frac{q^{(n+1)}(1)}{q^{(n)}(1)}\alpha_{n+1}, &
P_{I,n,n}&=\beta_{n}, &
P_{I,n+1,n}&=\frac{q^{(n)}(1)}{q^{(n+1)}(1)},
\end{align*}
is a multiple stochastic matrix of type~I.
\end{enumerate}
\end{teo}
\begin{proof}
i) As the zeros of the polynomials $B^{(n)}$ belong to $(0,1)$ and $B^{(n)}$ is monic, hence diverging to $+\infty$ as $x\to+\infty$, we deduce that $B^{(n)}(1)>0$ for $n\in\mathbb{N}_0$.
ii) It follows from Theorem \ref{teo:linear_forms_at_unity}.
\end{proof}
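The unit row sums of $P_{II}$ can be illustrated numerically. The sketch below assumes the standard four-term recurrence $xB^{(n)}(x)=B^{(n+1)}(x)+\beta_nB^{(n)}(x)+\alpha_nB^{(n-1)}(x)+\gamma_{n-1}B^{(n-2)}(x)$ and uses toy constant coefficients (not the hypergeometric ones), chosen so that the values $B^{(n)}(1)$ remain positive; each generic row of $P_{II}$ then sums to one:

```python
# Illustrative check (toy data, not the hypergeometric coefficients): why P_II has
# unit row sums. We assume the standard four-term recurrence
#   x B^(n)(x) = B^(n+1)(x) + beta_n B^(n)(x) + alpha_n B^(n-1)(x) + gamma_{n-1} B^(n-2)(x),
# evaluated at x = 1, with constant coefficients small enough that B^(n)(1) > 0.

beta, alpha, gamma = 0.3, 0.02, 0.005    # toy recurrence coefficients

N = 12
B = [1.0, 1.0 - beta]                    # B^(0)(1), B^(1)(1)
B.append((1.0 - beta) * B[1] - alpha * B[0])
for n in range(2, N + 1):
    B.append((1.0 - beta) * B[n] - alpha * B[n - 1] - gamma * B[n - 2])
assert all(b > 0 for b in B)             # the toy values at unity stay positive

def row_sum(n):
    """Sum of a generic row n >= 2 of P_II as defined in the theorem."""
    return (B[n + 1] / B[n]              # P_{II,n,n+1}
            + beta                       # P_{II,n,n}
            + alpha * B[n - 1] / B[n]    # P_{II,n,n-1}
            + gamma * B[n - 2] / B[n])   # P_{II,n,n-2}

print([round(row_sum(n), 12) for n in range(2, N)])   # all equal to 1
```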
The corresponding diagrams for these Markov chains are
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$P_{01}$,1/$P_{12}$,2/$P_{23}$,3/$P_{34}$,4/$P_{45}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$P_{10}$,1/$P_{21}$,2/$P_{32}$,3/$P_{43}$,4/$P_{54}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[bend left, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$P_{20}$,1/$P_{31}$,2/$P_{42}$,3/$P_{53}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[auto,bend left=50,"\txt",color=RawSienna] (\i);
\foreach \i/\txt in {1/$P_{11}$,2/$P_{22}$,3/$P_{33}$,4/$P_{44}$,5/$P_{55}$}
\draw (\i) edge[loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[loop left, color=NavyBlue,"$P_{00}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10.5,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Type II Markov chain diagram}
\end{center}
\end{minipage}};
\end{tikzpicture}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$P_{01}$,1/$P_{12}$,2/$P_{23}$,3/$P_{34}$,4/$P_{45}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[bend left,"\txt",below,color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$P_{10}$,1/$P_{21}$,2/$P_{32}$,3/$P_{43}$,4/$P_{54}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$P_{02}$,1/$P_{13}$,2/$P_{24}$,3/$P_{35}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[bend left=60,color=MidnightBlue,"\txt"](\n1);
\foreach \i/\txt in {1/$P_{11}$,2/$P_{22}$,3/$P_{33}$,4/$P_{44}$,5/$P_{55}$}
\draw (\i) edge[loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[loop left, color=NavyBlue,"$P_{00}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10.5,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Type I Markov chain diagram}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
\begin{coro}
The coefficients of the type I stochastic matrix can be written, for all $n \in \mathbb{N}_0$, as follows
\begin{align*}
\begin{aligned}
P_{I,2n,2n+2} & =
\frac{(d+n-1) (d-a+n) (d-b+n)}
{(d+3 n-1)_3} ,
&
P_{I,2n+1,2n} & =\frac{(2n+1)(a+2n-1)(b+2n-1)}{(c+3n)(d+3n-1)_2},
\\
P_{I,2n+1,2n+3} & =
\frac{(c+n) (c-a+n+1) (c-b+n+1)}
{(c + 3 n+1)_3} ,
&
P_{I,2n+2,2n+1} & =\frac{(2n+1)(a+2n)(b+2n)}{(c+3n)_2(d+3n)},
\end{aligned} \phantom{olaolaolaolaol}
\\
\begin{aligned}
P_{I,2n,2n} & =
\frac{(2 n+1) (a+2 n) (b+2 n)}{(c+3 n) (d+3 n)}-\frac{2 n (a+2 n-1) (b+2 n-1)}{(c+3 n-1) (d+3 n-2)} ,
\end{aligned} \phantom{olaolaolaolaolaolaolaolaolaolaolaolaola}
\\
\begin{aligned}
P_{I,2n+1,2n+1} & =
\frac{2 (n+1) (a+2 n+1) (b+2 n+1)}{(c+3 n+2) (d+3 n+1)}-\frac{(2 n+1) (a+2 n) (b+2 n)}{(c+3 n) (d+3 n)} ,
\end{aligned}\phantom{olaolaolaolaolaolaolaolaolaolaolaol}
\\
\begin{multlined}[t][1\textwidth]
P_{I,2n,2n+1}
= \frac{n (a+2 n-1) (b+2 n-1) (c+3 n+1)}{(c+3 n-1) (d+3 n-1)} \\
-\frac{(2 n+1) (a+2 n) (b+2 n) (c+3 n+1)}{(c+3 n) (d+3 n)}+\frac{(n+1) (a+2 n+1) (b+2 n+1)}{d+3 n+1} , \phantom{olaolaolaolaola}
\end{multlined}
\\
\begin{multlined}[t][1\textwidth]
P_{I,2n+1,2n+2}
=
(d+3 n+2) \left( -\frac{2 (n+1) (a+2 n+1) (b+2 n+1)}{(c+3 n+2) (d+3 n+1)} \right.
\\
\left. +\frac{(2 n+3) (a+2 (n+1)) (b+2 (n+1))}{2 (c+3 n+3) (d+3 n+2)}+\frac{(2 n+1) (a+2 n) (b+2 n)}{2 (c+3 n+1) (d+3 n)}\right). \phantom{olaolaolaolaola}
\end{multlined}
\end{align*}
\end{coro}
\begin{pro}[Large $n$ limit for the dual hypergeometric Markov chains]\label{pro:hypergeometric_stochastic_dual}
The large $n$ limits of the hypergeometric stochastic matrices of types~I and II coincide after transposition, i.e.,
\begin{align*}
\lim_{n\to\infty} P_{I,n,n+k}
&=\lim_{n\to\infty}P_{II,n+k,n}, & k\in\{-2,-1,0,1\}.
\end{align*}
\end{pro}
\begin{proof}
We need to show that
\begin{align}\label{eq:other_relations}
\frac{B^{(n-k)}(1)q^{(n-k)}(1)}{B^{(n)}(1)q^{(n)}(1)}&\xrightarrow[n\to\infty]{}1, &k&=2,1,-1.
\end{align}
But these relations follow from Corollary \ref{coro:ratio_asymptotics_typeI} and Proposition \ref{pro:ratio_asymptotics_typeII}.
\end{proof}
\begin{rem}[Karlin--McGregor representation formulas]
The Karlin--McGregor representation formulas in terms of multiple orthogonal polynomials are given by Theorems \ref{teo:KMcG} and \ref{teo:KMcG2}, which hold, as we showed above, for these hypergeometric multiple random walks.
\end{rem}
\begin{teo}[Recurrent and transient hypergeometric random walks]\label{Theorem:recurrent-transient}
Both dual hypergeometric random walks are recurrent whenever $0<\delta\leq1$ and transient for $\delta> 1$.
\end{teo}
\begin{proof}
From \cite[Theorem 8]{bfmaf} we know that
both dual Markov chains are recurrent if and only if the integral
\begin{align}\label{eq:integral}
\int_0^1 \frac{w_1(x)}{1-x}\d \mu(x),
\end{align}
diverges, and both dual Markov chains are transient whenever the integral converges.
According to \eqref{eq:measures} and \eqref{eq:mu} we have to discuss the convergence
of the integral
\begin{align*}
\int_0^1 {}_2F_1\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]c-b,d-b \\\delta\end{NiceArray}};1-x\right]\frac{x^{a-1}(1-x)^{\delta-1}}{1-x}\d x.
\end{align*}
Hence, the divergence is linked to the behavior of the integrand near the point $x=1$, where it behaves like $(1-x)^{\delta-2}$. The integral diverges for $\delta-2\leq-1$ and converges for $\delta-2>-1$; that is, for $\delta\leq1$ the integral diverges and for $\delta >1$ it converges.
\end{proof}
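The convergence criterion used in the proof reduces to the elementary fact that $\int_0^1(1-x)^{\delta-2}\,\mathrm{d}x$ converges if and only if $\delta>1$. A quick numerical illustration of the two regimes, using the closed form of the truncated integral:

```python
# The proof reduces to the behaviour of (1 - x)^(delta - 2) near x = 1.
# For delta != 1 the truncated integral has the closed form
#   int_0^{1-eps} (1 - x)^(delta - 2) dx = (1 - eps^(delta - 1)) / (delta - 1).

def truncated_integral(delta, eps):
    return (1.0 - eps ** (delta - 1.0)) / (delta - 1.0)

# Recurrent regime (delta = 1/2): the truncated integral blows up as eps -> 0.
blow_up = [truncated_integral(0.5, 10.0 ** (-k)) for k in (2, 4, 6)]   # 18, 198, 1998
# Transient regime (delta = 3/2): it stays bounded, tending to 1/(delta - 1) = 2.
bounded = [truncated_integral(1.5, 10.0 ** (-k)) for k in (2, 4, 6)]
print(blow_up, bounded)
```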
\begin{rem}
As we discussed in \cite{bfmaf}, given that there are no mass points, we conjecture that this recurrence is a null recurrence, i.e. that the mean return time is infinite.
\end{rem}
\begin{rem}
Notice that $\displaystyle\int_0^1 \frac{w_1(x)}{1-x}\d \mu(x)$ is the Stieltjes--Markov transform of the measure $w_1\d\mu$ at unity.
\end{rem}
Following \cite{bfmaf} we get
\begin{pro}\label{pro:steady}
The state
$ \boldsymbol \pi=(\begin{NiceMatrix}
B^{(0)}(1) q^{(0)}(1)&B^{(1)}(1) q^{(1)}(1) &\Cdots
\end{NiceMatrix})$
satisfies the steady state conditions
$ \boldsymbol \pi P_{I}=\boldsymbol \pi$ and $\boldsymbol \pi P_{II}=\boldsymbol \pi$.
\end{pro}
\begin{rem}
In \cite{bfmaf} it was conjectured that the state
$ \boldsymbol \pi=\begin{pNiceMatrix}
B^{(0)}(1) q^{(0)}(1)&B^{(1)}(1) q^{(1)}(1) &\Cdots
\end{pNiceMatrix}$
does not belong to~$\ell^1$, i.e. that it is not a proper steady state.
\end{rem}
\subsection{Stochastic $LU$ factorizations of the Markov transition matrices}
Inspired by \cite{grunbaum_de la iglesia}, in this section we find stochastic factorizations of the Jacobi matrix.
Following \cite{Barrios_Branquinho_Foulquie} we easily find
\begin{lemma}\label{lem:Jacobi_LLU}
The Jacobi matrix $J$ given in \eqref{eq:Jacobi} with coefficients given in \eqref{eq:jacobi_hyper_coeff} and \eqref{eq:lambdas} has the following Gauss--Borel factorization
\begin{align*}
J = L_1 L_2 U,
\end{align*}
where
\begin{align*}
L_1 & := \left(
\begin{NiceMatrix}
1 & 0&\Cdots & \\[-5pt]
\lambda_3 & 1 & \Ddots& \\
0 & \lambda_6 & 1 & \\
\Vdots & \Ddots & \Ddots & \Ddots
\end{NiceMatrix}
\right),
&
L_2 & := \left( \begin{NiceMatrix}
1 & 0 &\Cdots & \\[-5pt]
\lambda_4 & 1 & \Ddots& \\
0 & \lambda_7 & 1 & \\
\Vdots& \Ddots & \Ddots & \Ddots
\end{NiceMatrix}
\right)
&
U & := \left( \begin{NiceMatrix}
\lambda_2 & 1 & 0& \Cdots& \\[-5pt]
0 & \lambda_5 & 1 &\Ddots & \\[-5pt]
\Vdots& \Ddots & \lambda_8 & 1 &\\
& & & \Ddots & \Ddots
\end{NiceMatrix}
\right).
\end{align*}
\end{lemma}
This simple $LU$ factorization induces a corresponding $LU$ factorization with stochastic factors of the stochastic matrices $P_{II}$ and $P_I$, i.e. stochastic Gauss--Borel factorizations.
\begin{teo}[Stochastic $LU$ factorization]\label{teo:stochastic _factorization}
\begin{enumerate}
\item The stochastic matrix $P_{II}$ has the following stochastic $LU$ factorization
\begin{align}\label{eq:stochastic_factorization_II}
P_{II}=P_{II,1}^L P_{II,2}^L P_{II}^U
\end{align}
in terms of the stochastic matrices
\begin{align*}
P_{II,1}^L &:=\sigma_{II} L_1 D_{II,2}^{-1}, &
P_{II,2}^L &:=D_{II,2} L_2 D_{II,1}^{-1} ,&
P_{II}^U&:=D_{II,1}U \sigma_{II}^{-1},
\end{align*}
where
\begin{align*}
\sigma_{II}& = \operatorname{diag} \left( \begin{NiceMatrix} \frac{1}{B^{(0)}(1)} & \frac{1}{B^{(1)}(1)} & \Cdots \end{NiceMatrix}\right), &
D_{II,i }&= \operatorname{diag} \left( \begin{NiceMatrix} \frac{1}{ d_{II,i}^{(0)}}& \frac{1}{d_{II,i}^{(1)}} & \Cdots \end{NiceMatrix} \right), &i&\in\{1,2\},
\end{align*}
with
\begin{align*}
d_{II,1}^{(n)} &= \lambda_{3n+2}B^{(n)}(1) + B^{(n+1)}(1) , & n&\in \mathbb{N}_0,\\
d_{II,2}^{(n)} & = \lambda_{3n+1}\lambda_{3n-1}B^{(n-1)}(1) + (\lambda_{3n+1} +\lambda_{3n+2} )B^{(n)}(1) + B^{(n+1)}(1), & n&\in \mathbb{N},
\end{align*}
and $d_{II,2}^{(0)}=d_{II,1}^{(0)}$.
\item The stochastic matrix $P_I$ has the following stochastic $LU$ factorization
\begin{align}\label{eq:stochastic_factorization_I}
P_{I}=P_{I}^L P_{I,2}^U P_{I,1}^U
\end{align}
in terms of stochastic matrices
\begin{align*}
P_{I}^L &:=\sigma_I U^\top D_{I,2}^{-1},&
P_{I,2}^U&:= D_{I,2} L_2^\top D_{I,1}^{-1},&
P_{I,1}^U&:=D_{I,1} L_1^\top \sigma_I ^{-1},
\end{align*}
where
\begin{align*}
\sigma_{I}& = \operatorname{diag} \left( \begin{NiceMatrix} \frac{1}{q^{(0)}(1)} & \frac{1}{q^{(1)}(1)} & \Cdots \end{NiceMatrix}\right), &
D_{I,i }&= \operatorname{diag} \left( \begin{NiceMatrix} \frac{1}{ d_{I,i}^{(0)}}& \frac{1}{d_{I,i}^{(1)}} & \Cdots \end{NiceMatrix} \right), &i&\in\{1,2\},
\end{align*}
with
\begin{align*}
d_{I,1}^{(n)} &= q^{(n)}(1)+\lambda_{3n+3}q^{(n+1)}(1) , &
d_{I,2}^{(n)} & = q^{(n)}(1) + (\lambda_{3n+3} + \lambda_{3n+4})q^{(n+1)}(1)
+ \lambda_{3n+4} \lambda_{3n+6} q^{(n+2)}(1),
\end{align*}
for $n\in\mathbb{N}_0$.
\end{enumerate}
\end{teo}
\begin{proof}
\begin{enumerate}
\item
Lemma \ref{lem:Jacobi_LLU} leads to the following factorization
\begin{align*}
P_{II} = \sigma_{II} L_1 D_{II,2}^{-1} D_{II,2} L_2 D_{II,1}^{-1} D_{II,1} U \sigma_{II}^{-1},
\end{align*}
where we are going to determine $D_{II,1}$ and $D_{II,2}$ so that the three factors are stochastic matrices; i.e., the $LU$ factorization is stochastic.
We first seek $D_{II,1}$, in order that the matrix $D_{II,1} U \sigma_{II}^{-1}$ be stochastic, that is
$ D_{II,1 }U \sigma_{II}^{-1}\text{\fontseries{bx}\selectfont \textup 1}= \text{\fontseries{bx}\selectfont \textup 1}$,
so it holds that
\begin{align*}
\left( \begin{NiceMatrix}
\lambda_2 & 1 & 0& \Cdots& \\
0 & \lambda_5 & 1 &\Ddots & \\
\Vdots& \Ddots & \lambda_8 & 1 &\\
& & & \Ddots & \Ddots
\end{NiceMatrix}
\right) \left( \begin{NiceMatrix}
B^{(0)}(1) \\
B^{(1)}(1) \\
\Vdots \\
\end{NiceMatrix}
\right) = \left( \begin{NiceMatrix}
d_{II,1}^{(0)} \\[2pt]
d_{II,1}^{(1)}\\
\Vdots \\
\end{NiceMatrix}
\right)
\end{align*}
and we get
$ d_{II,1}^{(n)}= \lambda_{3n+2}B^{(n)}(1) + B^{(n+1)}(1)$.
An important fact to notice here is that the RHS is strictly positive, $d_{II,1}^{(n)}>0$.
Let us find $D_{II,2}$, in order that the matrix $D_{II,2} L_2 D_{II,1}^{-1}$ be stochastic, that is
$D_{II,2} L_2 D_{II,1}^{-1}\text{\fontseries{bx}\selectfont \textup 1}= \text{\fontseries{bx}\selectfont \textup 1}$,
and we get
\begin{align*}
\left( \begin{NiceMatrix}
1 & 0 &\Cdots & \\[-5pt]
\lambda_4 & 1 & \Ddots& \\
0 & \lambda_7 & 1 & \\
\Vdots& \Ddots & \Ddots & \Ddots
\end{NiceMatrix}
\right)
\left( \begin{NiceMatrix}
d_{II,1}^{(0)} \\[2pt]
d_{II,1}^{(1)}\\
\Vdots \\
\end{NiceMatrix} \right)
= \left( \begin{NiceMatrix}
d_{II,2}^{(0)} \\[2pt]
d_{II,2}^{(1)}\\
\Vdots \\
\end{NiceMatrix}\right).
\end{align*}
Therefore,
\begin{align*}
d_{II,2}^{(n) } = \lambda_{3n+1} d_{II,1}^{(n-1) } + d_{II,1}^{(n) }
& = \lambda_{3n+1}\lambda_{3n-1}B^{(n-1)}(1) + (\lambda_{3n+1} +\lambda_{3n+2} )B^{(n)}(1) + B^{(n+1)}(1).
\end{align*}
Notice that the RHS is strictly positive, $ d_{II,2}^{(n)}>0$.
Finally, $ \sigma_{II} L_1 D_{II,2}^{-1}$ is a stochastic matrix, because all its entries are nonnegative and
\begin{align*}
\text{\fontseries{bx}\selectfont \textup 1}=P_{II}\text{\fontseries{bx}\selectfont \textup 1}=P_{II,1}^L P_{II,2}^L P_{II}^U\text{\fontseries{bx}\selectfont \textup 1}=P_{II,1}^L\text{\fontseries{bx}\selectfont \textup 1}.
\end{align*}
\item
Again, from Lemma \ref{lem:Jacobi_LLU} and a transposition we get
\begin{align*}
P_{I} = \sigma_{I} U^\top L_2^\top L_1^\top \sigma_{I} ^{-1}= \sigma_{I} U^\top D_{I,2}^{-1}D_{I,2} L_2^\top D_{I,1}^{-1}D_{I,1}L_1^\top \sigma_{I} ^{-1}.
\end{align*}
Let us find $D_{I,1}$, in order that the matrix $D_{I,1} L_1^\top \sigma_{I}^{-1}$ be stochastic, that is
$ D_{I,1} L_1^\top \sigma_{I}^{-1}\text{\fontseries{bx}\selectfont \textup 1}= \text{\fontseries{bx}\selectfont \textup 1}$.
This is equivalent to
$L_1^\top \sigma_{I}^{-1}\text{\fontseries{bx}\selectfont \textup 1}= D_{I,1}^{-1}\text{\fontseries{bx}\selectfont \textup 1}$,
and taking the transpose we get
\begin{align*}
\left( \begin{NiceMatrix} q^{(0)}(1) & q^{(1)}(1) & \Cdots \end{NiceMatrix} \right)
\left( \begin{NiceMatrix}
1 & 0 & \Cdots& \\[-5pt]
\lambda_3 & 1 &\Ddots & \\
0 & \lambda_6 & 1 & \\[-5pt]
\Vdots& \Ddots & \lambda_9 & 1 \\
& & \Ddots & \Ddots & \Ddots
\end{NiceMatrix}
\right) =
\left( \begin{NiceMatrix}
d_{I,1}^{(0)} &
d_{I,1}^{(1)} &
\Cdots &
\end{NiceMatrix}
\right).
\end{align*}
Hence, we deduce that
\begin{align*}
d_{I,1}^{(n)}= q^{(n)}(1)+\lambda_{3n+3}q^{(n+1)}(1).
\end{align*}
An important fact to notice here is that the RHS is strictly positive, so that $ d_{I,1}^{(n)}>0$.
Let us find $D_{I,2}$, in order that the matrix $D_{I,2} L_2^\top D_{I,1}^{-1}$ be stochastic, that is
$D_{I,2} L_2^\top D_{I,1}^{-1}\text{\fontseries{bx}\selectfont \textup 1}=\text{\fontseries{bx}\selectfont \textup 1}$.
This is equivalent to
$ L_2^\top D_{I,1}^{-1}\text{\fontseries{bx}\selectfont \textup 1}= D_{I,2}^{-1} \text{\fontseries{bx}\selectfont \textup 1}$, and taking the transpose we obtain
\begin{align*}
\left( \begin{NiceMatrix} d_{I,1}^{(0)} & d_{I,1}^{(1)} & \Cdots \end{NiceMatrix} \right)
\left( \begin{NiceMatrix}
1 & 0 & \Cdots& \\[-5pt]
\lambda_4 & 1 & \Ddots& \\
0 & \lambda_7 & 1 & \\[-5pt]
\Vdots & \Ddots& \lambda_{10} & 1 \\
& & \Ddots & \Ddots & \Ddots
\end{NiceMatrix}
\right)= \left( \begin{NiceMatrix} d_{I,2}^{(0)} & d_{I,2}^{(1)} & \Cdots \end{NiceMatrix} \right).
\end{align*}
Consequently, we find
\begin{align*}
d_{I,2}^{(n)}& = d_{I,1}^{(n)} +\lambda_{3n+4}d_{I,1}^{(n+1)}
= q^{(n)}(1) + (\lambda_{3n+3} + \lambda_{3n+4})q^{(n+1)}(1)
+ \lambda_{3n+4} \lambda_{3n+6} q^{(n+2)}(1).
\end{align*}
Notice that the RHS of this identity is strictly positive and $d_{I,2}^{(n)}>0$.
\end{enumerate}
Using the same reasoning as in the previous case we get that the matrix
$ \sigma_I U^\top {D}_{I,2}^{-1} $ is stochastic.
\end{proof}
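The normalization mechanism of the proof can be sketched generically: for an upper bidiagonal factor with diagonal entries $\lambda_n$ and unit superdiagonal, choosing the diagonal matrix $D$ from the image of the positive vector $(B^{(n)}(1))_n$ yields unit row sums by construction. A toy illustration (the numerical values below are arbitrary, not the hypergeometric data):

```python
# Generic sketch of the normalization step in the proof: given an upper bidiagonal
# factor U (diagonal lam[n], superdiagonal 1) and positive values v[n], playing the
# role of B^(n)(1), the choice d[n] = lam[n] * v[n] + v[n + 1] makes D U Sigma^{-1}
# row stochastic, since row n of D U Sigma^{-1} is (lam[n] v[n] / d[n], v[n+1] / d[n]).
# Toy data below; the hypergeometric lam[n] = lambda_{3n+2} and v[n] are not used here.

lam = [0.4, 0.7, 0.2, 0.9, 0.5]           # toy diagonal of U
v = [1.0, 0.8, 0.9, 0.6, 0.7, 0.5]        # toy positive values playing B^(n)(1)

d = [lam[n] * v[n] + v[n + 1] for n in range(len(lam))]

rows = [(lam[n] * v[n] / d[n], v[n + 1] / d[n]) for n in range(len(lam))]
print([round(sum(r), 12) for r in rows])  # each row of D U Sigma^{-1} sums to 1
```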
\begin{rem}
This stochastic factorization has been motivated by \cite{grunbaum_de la iglesia2}, where an urn model was proposed for the Jacobi--Piñeiro random walks in \cite{bfmaf}. In \cite{grunbaum_de la iglesia2}, for the Jacobi--Piñeiro situation, they give a factorization $P_{II}=P_LP_U$ with $P_{U}$ a stochastic upper triangular matrix with only the first superdiagonal nonzero, i.e. a stochastic matrix describing a pure birth Markov chain, and $P_L$ a stochastic lower triangular matrix, with zero as an absorbing state and only the first two subdiagonals nonzero. As there is a nonzero second subdiagonal, following \cite{Gallager,Bremaud}, this is not a pure death Markov chain; it goes beyond one, as there are transitions beyond nearest neighbors.
The stochastic factorization provided here for the hypergeometric situation in three simple stochastic factors is in terms of a pure birth factor and two pure death factors for the type II, and in terms of one pure death factor and two pure birth factors for the type I case.
\end{rem}
\begin{rem}
The construction of the corresponding urn models, once the stochastic factorization is provided, will be given by an appropriate choice of the hypergeometric parameters $(a,b,c,d)$ and three urns, one urn per factor, with three different experiments.
\end{rem}
\subsection{Stochastic factorization of the type I Markov matrix}
Surprisingly, the stochastic factorization of the hypergeometric Markov matrix of type I can be carried out leading to extremely simple expressions in terms of quotients of arithmetic progressions in $n$.
\begin{teo}\label{theorem:stochastic factorization}
For $n\in\mathbb{N}_0$, the stochastic factorization \eqref {eq:stochastic_factorization_I},
$P_{I}=P_{I}^L P_{I,2}^U P_{I,1}^U$, is explicitly given as follows
\begin{align*}
\hspace*{-.5cm}\begin{aligned}
(P_{I}^L )_{2n,2n-1}&=\frac{2n}{3n-1+d}, & (P_{I}^L )_{2n,2n}&=\frac{n-1+d}{3n-1+d},&
(P_{I}^L )_{2n+1,2n}&=\frac{2n+1}{3n+1+c}, & (P_{I}^L )_{2n+1,2n+1}&=\frac{n+c}{3n+1+c},\\
(P_{I,2}^U )_{2n,2n+1}&=\frac{n+d-b}{3n+d}, & (P_{I,2}^U)_{2n,2n}&=\frac{2n+b}{3n+d},&
(P_{I,2}^U )_{2n+1,2n+2}&=\frac{n+1+c-b}{3n+2+c}, & (P_{I,2}^U)_{2n+1,2n+1}&=\frac{2n+1+b}{3n+2+c},\\
(P_{I,1}^U )_{2n,2n+1}&=\frac{n+c-a}{3n+c}, & (P_{I,1}^U)_{2n,2n}&=\frac{2n+a}{3n+c},&
(P_{I,1}^U )_{2n+1,2n+2}&=\frac{n+d-a}{3n+1+d}, & (P_{I,1}^U)_{2n+1,2n+1}&=\frac{2n+1+a}{3n+1+d}.
\end{aligned}
\end{align*}
Notice that $(P_{I}^L )_{2n,2n-1}$ does not exist for $n=0$.
The matrices read
{\scriptsize\begin{align*}
P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.3cm]
1 & 0 &\Cdots & & & & & \\
\frac{1}{c+1} & \frac{c}{c+1} & \Ddots & & & & & \\[4pt]
0 & \frac{2}{d+2} & \frac{d}{d+2} & & & & & \\[2pt]
\Vdots & \Ddots & \frac{3}{c+4} & \frac{c+1}{c+4} & & && \\[2pt]
& & & \frac{4}{d+5} & \frac{d+1}{d+5} & & & \\[2pt]
& & & & \frac{5}{c+7} & \frac{c+2}{c+7} & & \\[2pt]
& & & & & \frac{6}{d+8} & \frac{d+2}{d+8}& \\[2pt]
&&&&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right),
\end{align*}}
for which zero is an absorbing state, and
{\scriptsize\begin{align*}
\hspace*{-.25cm}
\begin{aligned}
P_{I,2}^U&=\left(
\begin{NiceMatrix}[columns-width = 0.4cm]
\frac{b}{d} &\frac{d-b}{d} & 0 & \Cdots& & & & \\
0 & \frac{b+1}{c+2} & \frac{c-b+1}{c+2} & \Ddots & & & &\\
\Vdots& \Ddots& \frac{b+2}{d+3} & \frac{d-b+1}{d+3}& & & &\\[2pt]
& & & \frac{b+3}{c+5} & \frac{c-b+2}{c+5} & & & \\[2pt]
& & & & \frac{b+4}{d+6} & \frac{d-b+2}{d+6} & & \\[2pt]
& & & & & \frac{b+5}{c+8} & \frac{c-b+3}{c+8} & \\[2pt]
& & & & & & \Ddots & \Ddots
\end{NiceMatrix}
\right),&
P_{I,1}^U&=\left(
\begin{NiceMatrix}[columns-width = 0.4cm]
\frac{a}{c} &\frac{c-a}{c} & 0 & \Cdots& & & & \\
0 & \frac{a+1}{d+1} & \frac{d-a}{d+1} & \Ddots & & & & \\
\Vdots& \Ddots& \frac{a+2}{c+3} & \frac{c-a+1}{c+3}& & & & \\[2pt]
& & & \frac{a+3}{d+4} & \frac{d-a+1}{d+4} & & & \\[2pt]
& & & & \frac{a+4}{c+6} & \frac{c-a+2}{c+6} & & \\[2pt]
& & & & & \frac{a+5}{d+7} & \frac{d-a+2}{d+7} & \\[2pt]
& & & & & & \Ddots & \Ddots
\end{NiceMatrix}
\right).
\end{aligned}
\end{align*}}
\end{teo}
\begin{proof}
The proof is an algebraic calculation involving Theorems \ref{teo:linear_forms_at_unity} and \ref{teo:stochastic _factorization} and Equations \eqref{eq:lambdas}.
\end{proof}
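Since each displayed row of the three factors contains exactly two entries, their stochasticity can be checked by elementary arithmetic, e.g. $(P_I^L)_{2n,2n-1}+(P_I^L)_{2n,2n}=\frac{2n+(n-1+d)}{3n-1+d}=1$. A quick numerical confirmation, with illustrative parameter values for $(a,b,c,d)$:

```python
# Sanity check of the explicit factor entries in the theorem: each of the three
# factors is row stochastic; every row contains exactly two entries summing to 1.
# Toy parameter values for (a, b, c, d); any admissible choice works.

a, b, c, d = 0.5, 0.7, 1.0, 1.5

def rows(n):
    return [
        2 * n / (3 * n - 1 + d) + (n - 1 + d) / (3 * n - 1 + d),               # P_I^L, even row
        (2 * n + 1) / (3 * n + 1 + c) + (n + c) / (3 * n + 1 + c),             # P_I^L, odd row
        (n + d - b) / (3 * n + d) + (2 * n + b) / (3 * n + d),                 # P_{I,2}^U, even row
        (n + 1 + c - b) / (3 * n + 2 + c) + (2 * n + 1 + b) / (3 * n + 2 + c), # P_{I,2}^U, odd row
        (n + c - a) / (3 * n + c) + (2 * n + a) / (3 * n + c),                 # P_{I,1}^U, even row
        (n + d - a) / (3 * n + 1 + d) + (2 * n + 1 + a) / (3 * n + 1 + d),     # P_{I,1}^U, odd row
    ]

print(all(abs(s - 1.0) < 1e-12 for n in range(1, 8) for s in rows(n)))   # True
```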
\section{Uniform matrices}\label{S:uniform}
In this section we find and study real uniform or almost uniform Jacobi matrices. A uniform Jacobi matrix is understood in this context as a banded Toeplitz matrix, that is, a matrix constant (uniform) along diagonals as in~\eqref{eq:Jacobi_Jacobi_Piñeiro_uniform},~i.e.,
\begin{align*}
J= \left(\begin{NiceMatrix}
3\kappa& 1 & 0 & \Cdots & & \\[-5pt]
3\kappa^2&3\kappa& 1 & \Ddots& & \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),
\end{align*}
with $\kappa = \frac 4 {27} $,
which we call the \emph{asymptotic uniform Jacobi matrix}. The \emph{almost asymptotic uniform Jacobi matrix} is understood as the asymptotic uniform Jacobi matrix except for the first two columns. Asymptotic uniform Jacobi matrices appeared in the context of multiple orthogonal polynomials \cite{Coussement_Coussment_VanAssche}, see also \cite{bfmaf} for a discussion of Jacobi--Piñeiro polynomials and random walks, as limiting cases.
\begin{teo}\label{teo:12}
Almost asymptotic uniform Jacobi matrices occur if and only if the hypergeometric tuple $(a,b,c,d)$ is one of the following twelve tuples
\begin{align}\label{eq:almost uniform tuples}
\begin{aligned}
&\Big(\frac 1 3, \frac 2 3, \frac1 2, 1\Big), & &\Big(\frac 2 3, \frac 1 3, \frac1 2, 1\Big),
& &\Big(\frac 1 3, \frac 2 3, 1, \frac 3 2\Big), & &\Big( \frac 2 3, \frac 1 3, 1, \frac 3 2\Big) , & &\Big(\frac 2 3, \frac 4 3, 1, \frac 3 2 \Big), & &\Big(\frac 4 3, \frac 2 3, 1, \frac 3 2\Big),
\\
&\Big(\frac 2 3, \frac 4 3, \frac 3 2 , 2\Big), &&\Big(\frac 4 3, \frac 2 3, \frac 3 2 , 2 \Big), &&
\Big(\frac 4 3, \frac 5 3, \frac 3 2 , 2\Big), &&\Big(\frac 5 3, \frac 4 3, \frac 3 2 , 2\Big ), &&
\Big(\frac 4 3, \frac 5 3, 2 , \frac 5 2\Big) , &&\Big(\frac 5 3, \frac 4 3, 2 , \frac 5 2 \Big),
\end{aligned}
\end{align}
which we will call \emph{uniform tuples}.
\end{teo}
\begin{proof}
The twelve uniform tuples follow (we used Mathematica) from the real positive solutions $(a,b,c,d)$ of the system of equations
\begin{align*}
\beta_1& = 3\kappa , &
\alpha_2& = 3\kappa^2 , &
\gamma_2& = \kappa^3 , &
\beta_2& = 3\kappa ,
\end{align*}
where the sequences $(\beta_n)$, $(\alpha_n)$, $(\gamma_n)$ are given in \eqref{eq:jacobi_hyper_coeff} and \eqref{eq:lambdas}.
That is
\begin{align*}
\frac{2 (a+1) (b+1)}{(c+2) (d+1)}-\frac{a b}{c d} & = 3\kappa , \\
\frac{2 (a+1) (b+1)}{(c+2) (d+1)} \left(\frac{a b}{2 (c+1) d}-\frac{2 (a+1) (b+1)}{(c+2) (d+1)}+\frac{3 (a+2) (b+2)}{2 (c+3) (d+2)}\right)& = 3\kappa^2 , \\
\frac{6 (a+1) (a+2) (b+1) (b+2) c (-a+c+1) (-b+c+1)}{(c+1) (c+2)^2 (c+3)^2 (c+4) (d+1) (d+2) (d+3)} & = \kappa^3 , \\
\frac{3 (a+2) (b+2)}{(c+3) (d+3)}-\frac{2 (a+1) (b+1)}{(c+2) (d+1)} & = 3\kappa.
\end{align*}
Then one can check, substituting the given tuples, that for all the different cases and for $n\in\mathbb{N}$ one has $\beta_n=3\kappa$, $\alpha_{n+1}=3\kappa^2$ and $\gamma_{n+1}=\kappa^3$.
\end{proof}
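The solution of the system can be confirmed in exact rational arithmetic. The sketch below substitutes the first uniform tuple $(a,b,c,d)=\big(\frac13,\frac23,\frac12,1\big)$ into the four equations of the proof:

```python
# Exact check, in rational arithmetic, that the first uniform tuple
# (a, b, c, d) = (1/3, 2/3, 1/2, 1) solves the system in the proof with kappa = 4/27.

from fractions import Fraction as F

a, b, c, d = F(1, 3), F(2, 3), F(1, 2), F(1)
kappa = F(4, 27)

beta1 = 2 * (a + 1) * (b + 1) / ((c + 2) * (d + 1)) - a * b / (c * d)
alpha2 = (2 * (a + 1) * (b + 1) / ((c + 2) * (d + 1))
          * (a * b / (2 * (c + 1) * d)
             - 2 * (a + 1) * (b + 1) / ((c + 2) * (d + 1))
             + 3 * (a + 2) * (b + 2) / (2 * (c + 3) * (d + 2))))
gamma2 = (6 * (a + 1) * (a + 2) * (b + 1) * (b + 2) * c * (c - a + 1) * (c - b + 1)
          / ((c + 1) * (c + 2) ** 2 * (c + 3) ** 2 * (c + 4)
             * (d + 1) * (d + 2) * (d + 3)))
beta2 = (3 * (a + 2) * (b + 2) / ((c + 3) * (d + 3))
         - 2 * (a + 1) * (b + 1) / ((c + 2) * (d + 1)))

print(beta1 == 3 * kappa, alpha2 == 3 * kappa ** 2,
      gamma2 == kappa ** 3, beta2 == 3 * kappa)   # True True True True
```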
\begin{coro}
If the tuple $(a,b,c,d)$ is uniform,
i.e. it is in the list \eqref{eq:almost uniform tuples}, then so is $(b,a,c,d)$.
\end{coro}
\begin{rem}
Notice that when $a$ and $b$ are permuted in the hypergeometric tuple the weight $w_1$ is not altered but the weight $w_2$ changes to a new weight $\check w_2$. As we discussed in
Theorem~\ref{teo:gauge_freedom}
a given pair of weights $(w_1,w_2)$ uniquely determines the corresponding sequence of orthogonal polynomials of type $II$, $\{B^{(n)}(x)\}_{n=0}^\infty$, the sequence of linear forms of type $I$, $\{Q^{(n)}(x)\}_{n=0}^\infty$, and the Jacobi matrix $J$, up to the transformations $(w_1,w_2)\mapsto (w_1,\check w_2=\alpha w_1+\beta w_2)$ with $\beta\neq 0$ and $\alpha+\beta=1$, which we called a \emph{gauge} transformation. We will show in the sequel that the transformation given by the permutation $(a,b,c,d)\leftrightarrow (b,a,c,d)$ is precisely a manifestation of this \emph{gauge} symmetry.
\end{rem}
\begin{rem}
According to the previous remark we have only six different almost cubic Toeplitz--Jacobi matrices, sequences of type II orthogonal polynomials and type I linear forms. To each of these sets correspond two different systems of weights, $(w_1,w_2)$ and $(w_1,\check w_2)$, and correspondingly two different uniform tuples, $(a,b,c,d)$ and $(b,a,c,d)$.
\end{rem}
To study this \emph{gauge} symmetry it is convenient to introduce the following algebraic functions
\begin{align*}
\vartheta_\pm (x)&:= \sqrt[\leftroot{2}\uproot{2}\scriptstyle 3]{1 \pm \sqrt{1-x}} , & x &\in (0,1).
\end{align*}
\begin{pro}
The functions $\vartheta_\pm$ satisfy the following relations
\begin{align*}
\vartheta_+^3 + \vartheta_-^3 &= 2 , & \vartheta_+^3 - \vartheta_-^3 &= 2 \sqrt{1-x} , &
\vartheta_+ \vartheta_- &= \sqrt[\leftroot{2}\uproot{2}\scriptstyle 3]{x},
\end{align*}
and also
\begin{align}\label{eq:algebraic_functions2}
(\vartheta_+ + \vartheta_-)(\vartheta_+^2 -\vartheta_+\vartheta_- + \vartheta_-^2) &= 2 ,&
(\vartheta_+ - \vartheta_-)(\vartheta_+^2 +\vartheta_+\vartheta_- + \vartheta_-^2) &= 2 \sqrt{1-x} .
\end{align}
\end{pro}
\begin{proof}
Equations \eqref{eq:algebraic_functions2} follow from
\begin{align*}
(\vartheta_+ + \vartheta_-)(\vartheta_+^2 -\vartheta_+\vartheta_- + \vartheta_-^2) &= \vartheta_+^3 + \vartheta_-^3 , &
(\vartheta_+ - \vartheta_-)(\vartheta_+^2 +\vartheta_+\vartheta_- + \vartheta_-^2) &= \vartheta_+^3 - \vartheta_-^3 ,
\end{align*}
and the first ones follow directly from the definition of $\vartheta_\pm$.
\end{proof}
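A quick numerical check of the basic relations (the identities in \eqref{eq:algebraic_functions2} then follow algebraically, as in the proof):

```python
# Numerical check of the theta identities: for x in (0, 1), with
#   theta_pm(x) = (1 ± sqrt(1 - x))^(1/3),
# one has theta_+^3 + theta_-^3 = 2, theta_+^3 - theta_-^3 = 2 sqrt(1 - x),
# and theta_+ theta_- = x^(1/3).

import math

def thetas(x):
    s = math.sqrt(1.0 - x)
    return (1.0 + s) ** (1.0 / 3.0), (1.0 - s) ** (1.0 / 3.0)

def identity_gaps(x):
    tp, tm = thetas(x)
    return (abs(tp ** 3 + tm ** 3 - 2.0),
            abs(tp ** 3 - tm ** 3 - 2.0 * math.sqrt(1.0 - x)),
            abs(tp * tm - x ** (1.0 / 3.0)))

print(max(g for x in (0.1, 0.5, 0.9) for g in identity_gaps(x)))
```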
These identities will be instrumental in proving some identities between $(W_1,W_2)$ and
$(W_1,\check W_2)$.
We now proceed with the study of these matrices. We divide this family of twelve tuples into two sets; the six \emph{stochastic uniform tuples} are
\begin{align}\label{eq:stochastic uniform tuples}
&\Big(\frac 1 3, \frac 2 3, \frac1 2, 1\Big), &&\Big(\frac 2 3, \frac 1 3, \frac1 2, 1\Big), &&
\Big( \frac 2 3, \frac 4 3, 1, \frac 3 2\Big) ,&& \Big(\frac 4 3, \frac 2 3, 1, \frac 3 2\Big) , &&
\Big( \frac 4 3, \frac 5 3, \frac 3 2 , 2\Big) , && \Big(\frac 5 3, \frac 4 3, \frac 3 2 , 2 \Big),
\end{align}
and the six \emph{semi-stochastic tuples} are
\begin{align}\label{eq:semi-stochastic uniform tuples}
& &\Big(\frac 1 3, \frac 2 3, 1, \frac 3 2\Big), & &\Big( \frac 2 3, \frac 1 3, 1, \frac 3 2\Big) , &
&\Big(\frac 2 3, \frac 4 3, \frac 3 2 , 2\Big), &&\Big(\frac 4 3, \frac 2 3, \frac 3 2 , 2 \Big), &&
\Big(\frac 4 3, \frac 5 3, 2 , \frac 5 2\Big) , &&\Big(\frac 5 3, \frac 4 3, 2 , \frac 5 2 \Big).
\end{align}
We will see that the stochastic tuples lead to three stochastic matrices of type I (taking into account the permutation \emph{gauge} symmetry $a\leftrightarrow b$), with corresponding recurrent Markov chains, and to semi-stochastic matrices of type II that describe Markov chains with sinks and sources. The semi-stochastic tuples lead to three double semi-stochastic matrices of type I and three more of type II, corresponding to Markov chains with~sinks.
In what follows we will give, for each case, the explicit expressions of the system of weights, the type II multiple orthogonal polynomials, and their values at unity. We will also give the stochastic factorization, and discuss whether there are uniform pure death or birth factors.
Notice that, as we know the values at unity of the type II multiple orthogonal polynomials and of the type I linear forms, we can apply Theorem \ref{pro:sigma_spectral} in order to construct type II and I stochastic matrices; unfortunately, these are not uniform, except for the three type I stochastic matrices related to the stochastic uniform tuples. This is why we follow an alternative path leading to semi-stochastic uniform matrices.
It is also remarkable that the knowledge of the explicit values at unity of the type II multiple orthogonal polynomials leads to the following nontrivial summation formulas for the generalized hypergeometric function $\tensor[_3]{F}{_2}(1)$ that, for the reader's convenience, we collect together here.
\begin{pro}[Summation formulas at unity]
The following summation formulas at unity for the generalized hypergeometric function $\tensor[_3]{F}{_2}$ hold true
\begin{align*}
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};1\right]&=\frac{1+2(-8)^{n}}{3 \times 4^{n}}, &
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};1\right]&=
\frac{4 (-8)^{n} - 1}{3 (3 n+1) 4^{n}} ,
\\
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} ,\; \frac{n+2}{2} \\[3pt]
\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};1\right]&
=\frac{2(1-(-8)^{n+1} )}{9(n+1)(3n+2)4^n},&
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};1\right]&
=\frac{1+2 (9 n+4)(-8)^n}{9 \times 4^{n}},\\
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};1\right]&
=\frac{ 4 (9 n+7) (-8)^n -1 }{27 (n+1) 4^{n}},& \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+4}{2} , \; \frac{n+3}{2} \\[3pt]\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};1\right]&
=\frac{2(1- (9 n+10)(-8)^{n+1})}{81 (n+1) (n+2)4^n}.
\end{align*}
\end{pro}
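Since all these series terminate, each formula can be verified in exact rational arithmetic directly from the series definition of $\tensor[_3]{F}{_2}$. A sketch for the first formula:

```python
# Exact check of the first summation formula via the terminating series
#   3F2(-n, (n+1)/2, n/2; 1/3, 2/3; 1)
#     = sum_{k=0}^n ((-n)_k ((n+1)/2)_k (n/2)_k) / ((1/3)_k (2/3)_k k!),
# against the closed form (1 + 2 (-8)^n) / (3 * 4^n), in rational arithmetic.

from fractions import Fraction as F
from math import factorial

def poch(q, k):
    """Pochhammer symbol (q)_k for a rational q."""
    out = F(1)
    for j in range(k):
        out *= q + j
    return out

def f32_at_one(n):
    return sum(poch(F(-n), k) * poch(F(n + 1, 2), k) * poch(F(n, 2), k)
               / (poch(F(1, 3), k) * poch(F(2, 3), k) * factorial(k))
               for k in range(n + 1))

print(all(f32_at_one(n) == F(1 + 2 * (-8) ** n, 3 * 4 ** n) for n in range(8)))   # True
```

The remaining five formulas can be checked in the same way by swapping in the corresponding parameters and right-hand sides.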
\subsection{Stochastic uniform tuples. Uniform recurrent random walks }\label{S:stochastic_uniform}
\begin{pro}
Among all the uniform tuples \eqref{eq:almost uniform tuples}, only the stochastic uniform tuples \eqref{eq:stochastic uniform tuples} are such that the type I linear forms at unity satisfy
\begin{align}\label{eq:linear form at unity uniform}
Q^{(n)}(1)=\frac 1 {(2\kappa)^n} , && n \in \mathbb{N} .
\end{align}
\end{pro}
\begin{proof}
The stochastic uniform tuples are the real positive solutions $(a,b,c,d)$ of the system of equations
\begin{align*}
Q^{(1)}(1) & = \frac 1 {2 \kappa} , &
Q^{(2)}(1) & = \frac 1 {(2\kappa)^2} , &
Q^{(3)}(1) & = \frac 1 {(2 \kappa)^3} , &
Q^{(4)}(1) & = \frac 1 {(2\kappa)^4} ,
\end{align*}
where $\{ Q^{(n)} (1)\}_{n=0}^\infty$ is the sequence of type I linear forms at unity given in Theorem \ref{teo:linear_forms_at_unity}.
One can check that for these stochastic uniform tuples Equation \eqref{eq:linear form at unity uniform} holds.
\end{proof}
\begin{rem}
From \eqref{eq:linear form at unity uniform} and the procedure in \eqref{eq:stochastic_Jacobi_Piñeiro_I} we obtain a corresponding stochastic matrix of type I, and, as all the objects involved are almost uniform, the resulting stochastic matrices are almost uniform as well. Using the same matrix
\begin{align}\label{eq:norma}
\sigma_{II} := \operatorname{diag} \left(\begin{NiceMatrix} \frac 1{2 \kappa} & \left( \frac 1{2 \kappa} \right)^2 & \Cdots\end{NiceMatrix}\right)
\end{align}
we can obtain semi-stochastic matrices of type II.
\end{rem}
We now analyze the three cases. In all of them the Jacobi matrix differs from \eqref{eq:Jacobi_Jacobi_Piñeiro_uniform} only in the first column, and $\delta=\frac{1}{2}$, so that the corresponding Markov chains are recurrent. The min-max property \eqref{eq:min-max}, $\max(a,b)<\min(c,d)$, does not hold. Moreover,
the $\alpha_1$ in the proof of \cite[Theorem 2.1]{lima_loureiro} is no longer positive, and the proof of the Nikishin property of the weights fails.
The coefficients $H_{2n}$ are positive and $H_{2n+1}$ are negative, but they never vanish; therefore, the Gauss--Borel factorization of the moment matrix holds, and the systems of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ are perfect. From \eqref{eq:lambdas} we find that the Jacobi matrix is nonnegative whenever
$a<d$, $b<d$, $a<c$ and $b-1<c$. When the condition $\max(a,b)<\min(c,d)$ is met, the Jacobi matrix is nonnegative. However, for
$a<d$, $b<d$, $a<c$ and $b-1<c<b$ we also get a nonnegative Jacobi matrix, even though \eqref{eq:min-max} is not fulfilled.
The type I transition matrix $P_I$ is stochastic and uniform, and has stochastic factorizations. Only one pair of tuples, $\big(\frac 1 3, \frac 2 3, \frac1 2, 1\big)$ and $\big(\frac 2 3, \frac 1 3, \frac1 2, 1\big)$, has a uniform stochastic factorization. The type II transition matrix~$P_{II}$ is semi-stochastic, having either three non-stochastic states, in the case with the uniform stochastic factorization, or two, in the two remaining cases; these states are sinks or sources. In a sink probability is destroyed, and in a source probability is created. In these cases, the overall destroyed and created probability balance to zero.
\paragraph{\textbf{The stochastic uniform tuples }$\big(\frac 1 3, \frac 2 3, \frac1 2, 1\big), \big(\frac 2 3, \frac 1 3, \frac1 2, 1\big)$}
In this case the Jacobi matrix and the type I stochastic matrix are
\begin{align*}
J & =
\left(\begin{NiceMatrix
3\kappa& 1 & 0 & \Cdots & & \\[-5pt]
6\kappa^2&3\kappa& 1 & \Ddots& & \\
3\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),
&
P_I&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{12}{27} &\frac{12}{27} &\frac{3}{27}& 0 & \Cdots& & \\
\frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & & \\
0& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & \\
\Vdots& \Ddots& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \\
& & & \Ddots& \Ddots & \Ddots&\Ddots
\end{NiceMatrix}
\right),
\end{align*}
with corresponding diagram
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {1/$\frac{6}{27}$/,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.22 mm,bend left,"\txt",below,color=Periwinkle] (\n1);
\draw (0) edge[line width=.44 mm,bend left,"$\frac{12}{27}$",below,color=Periwinkle] (1);
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$/,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.3 mm,bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {1/$\frac{1}{27}$,1/$\frac{1}{27}$/,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[line width=.04 mm,bend left=65,color=MidnightBlue,"\txt"](\n1);
\draw (0) edge[line width=.11 mm,bend left=65,color=MidnightBlue,"$\frac{3}{27}$"](2);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$/,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[line width=.44 mm,loop left, color=NavyBlue,"$\frac{12}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type I Markov chain }
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
The system of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ corresponding to the stochastic uniform tuples $\big(\frac 1 3, \frac 2 3, \frac1 2, 1\big)$ and $\big(\frac 2 3, \frac 1 3, \frac1 2, 1\big)$ are, respectively,
\begin{align*}
W_1 (x) & = \frac{\sqrt{3} (\vartheta_+(x)+\vartheta_-(x))}{4 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x^{2}} \sqrt{1-x} },&
W_2 (x) & = \frac{3 \sqrt{3} (\vartheta_+^4(x)+\vartheta_-^4(x))}{16 \pi \sqrt[\leftroot{2}\uproot{2}3]{x^{2}}\sqrt{1-x} } , &
\hat W_2 (x) & = \frac{3 \sqrt{3} ( \vartheta_+^2(x)+\vartheta_-^2(x))}{8 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x}\sqrt{1-x} } .
\end{align*}
Solving the system of equations \eqref{eq:rho}
we find that $ \alpha = \frac{3}{2}$ and $\beta = -\frac{1}{2}$.
After some simplifications we get
\begin{align*}
\frac{3}{2} W_1 -\frac{1}{2} \hat W_2 & =3 \sqrt{3} \,
\frac{2 (\vartheta_+(x)+\vartheta_-(x))+\sqrt[\leftroot{2}\uproot{2} 3]{x}(\vartheta_+^2(x)-\vartheta_-^2(x)) }{16 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x^2} \sqrt{1-x} } ,
\end{align*}
but the numerator can be written as
$ (\vartheta_+ + \vartheta_-)^2 (\vartheta_+^2 - \vartheta_+ \vartheta_- + \vartheta_-^2) - (\vartheta_+^2 + \vartheta_-^2) \vartheta_+ \vartheta_- = \vartheta_+^4 + \vartheta_-^4 $,
and, consequently, we obtain
\begin{align*}
\frac{3}{2} W_1 -\frac{1}{2} \hat W_2 & = \frac{3 \sqrt{3}}{16 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x^2}\sqrt{1-x}} \big( \vartheta_+^4 + \vartheta_-^4 \big) = W_2,
\end{align*}
i.e., $\hat W_2=3 W_1-2W_2$, and both sets of weights are in the same \emph{gauge} class.
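The last algebraic step is a polynomial identity in $\vartheta_\pm$ and can be checked independently of the weights. The following Python sketch, added here as an illustration and not part of the original derivation, verifies it by exact integer evaluation on a grid, which suffices for a bivariate polynomial identity of this degree.

```python
from itertools import product

def lhs(p, m):
    # (v+ + v-)^2 (v+^2 - v+ v- + v-^2) - (v+^2 + v-^2) v+ v-, with v± = theta±
    return (p + m)**2 * (p*p - p*m + m*m) - (p*p + m*m) * p * m

# a degree-4 identity in two variables is settled by a 7x7 exact integer grid
for p, m in product(range(-3, 4), repeat=2):
    assert lhs(p, m) == p**4 + m**4
```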
The type II multiple orthogonal polynomials are
\begin{align}\label{eq:B_3F2_1}
B^{(n)}(x) & = 3(-\kappa)^n
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] , && n \in \mathbb{N} .
\end{align}
To evaluate these polynomials at unity, notice that they satisfy the four-term homogeneous linear recurrence with constant coefficients
$ B^{(n)} (1) = B^{(n+1)} (1)+3 \kappa B^{(n)} (1) + 3\kappa^2 B^{(n-1)} (1) + \kappa^3 B^{(n-2)} (1)$,
with characteristic polynomial $p(t)=(\kappa+t)^3-t^2=(t-2\kappa)^2\big(t+\frac{\kappa}{4}\big)$. Hence,
$ B^{(n) }(1) = \Big(-\frac{\kappa}{4}\Big)^n c_1+\big(2\kappa\big)^n (c_2+n c_3)$,
for appropriate constants $\{c_1,c_2,c_3\}$ such that the initial conditions
$B^{(1)} (1) = \frac{5}{9}$, $B^{(2)} (1) = \frac{43}{243}$ and $B^{(3)} (1)=\frac{341}{6561}$
are satisfied. Hence,
\begin{align}\label{eq:Bn(1)}
B^{(n)}(1)&= \frac{2\times 8^n+(-1)^n}{27^{n}}, &
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};1\right]&=\frac{1+2(-8)^{n}}{3 \times 4^{n}}, &
n&\in\mathbb{N}.
\end{align}
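The closed forms in \eqref{eq:Bn(1)} can be confirmed with exact rational arithmetic. The Python sketch below is our check, not part of the original argument: the value $\kappa=\frac{4}{27}$ and the terminating-sum evaluation of the $\tensor[_3]{F}{_2}$ are assumptions of the sketch, consistent with the initial conditions $B^{(1)}(1)=\frac59$, $B^{(2)}(1)=\frac{43}{243}$ quoted above.

```python
from fractions import Fraction as F

def f32(n, tops, bots, x):
    """Terminating hypergeometric sum: sum_{k=0}^{n} prod(a)_k / prod(b)_k * x^k / k!."""
    total, term = F(0), F(1)
    for k in range(n + 1):
        total += term
        term *= x
        for a in tops:
            term *= a + k
        for b in bots:
            term /= b + k
        term /= k + 1
    return total

kappa = F(4, 27)  # assumed value, consistent with B^{(1)}(1) = 5/9
for n in range(1, 10):
    hyp = f32(n, [F(-n), F(n + 1, 2), F(n, 2)], [F(1, 3), F(2, 3)], F(1))
    assert 3 * (-kappa)**n * hyp == F(2 * 8**n + (-1)**n, 27**n)
    assert hyp == F(1 + 2 * (-8)**n, 3 * 4**n)
```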
The stochastic factorization of the type I Markov matrix, $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$, is given by
\begin{align*}P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.1cm]
1 & 0 &\Cdots & & \\
\frac{2}{3} & \frac{1}{3} & \Ddots & & \\
0 & \frac{2}{3} & \frac{1}{3} & & \\
\Vdots & \Ddots & \frac{2}{3} & \frac{1}{3} & \\
&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{I,1}^U&=P_{I,2}^U= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{2}{3} &\frac{1}{3} & 0 & \Cdots& & \\
0 & \frac{2}{3} & \frac{1}{3} & \Ddots & & \\
\Vdots& \Ddots& \frac{2}{3} & \frac{1}{3} & & \\[2pt]
& & &\frac{2}{3} & \frac{1}{3}& \\[2pt]
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right).
\end{align*}
We have a uniform stochastic factorization: all factors are uniform. This is a very peculiar property among the hypergeometric type I Markov matrices.
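The identity $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$ with the uniform factors displayed above can be confirmed on a finite truncation. The sketch below is our check with exact rationals; the truncation size $N$ is arbitrary, and only interior rows of the truncated product reproduce the infinite banded matrix.

```python
from fractions import Fraction as F

N = 8  # truncation size; only interior rows of the product are exact

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# uniform pure death factor and uniform pure birth factor, as displayed above
L = [[F(0)] * N for _ in range(N)]
L[0][0] = F(1)
for i in range(1, N):
    L[i][i - 1], L[i][i] = F(2, 3), F(1, 3)
U = [[F(0)] * N for _ in range(N)]
for i in range(N):
    U[i][i] = F(2, 3)
    if i + 1 < N:
        U[i][i + 1] = F(1, 3)

P = matmul(L, matmul(U, U))
assert P[0][:3] == [F(12, 27), F(12, 27), F(3, 27)]
for i in range(1, N - 2):
    assert P[i][i - 1:i + 3] == [F(8, 27), F(12, 27), F(6, 27), F(1, 27)]
```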
\begin{teo}\label{teo:uniform_stochastic_factorization}
There is only one hypergeometric type stochastic matrix $P_I$ with a uniform stochastic factorization, and its hypergeometric tuples are the uniform stochastic tuples $\big(\frac 1 3, \frac 2 3, \frac1 2, 1\big)$ and $\big(\frac 2 3, \frac 1 3, \frac1 2, 1\big)$.
\end{teo}
\begin{proof}
Here we use Theorem \ref{theorem:stochastic factorization}, where the factorization into pure death and pure birth factors, $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$, was given.
The even rows of the pure death factor $P^{L}_I$ are uniform if and only if $d=1$, while the odd rows are uniform if and only if $c=\frac{1}{2}$:
\begin{align*}
\begin{aligned}
P_I^L\big|_{d=1}&=\left(\begin{NiceMatrix
1 & 0 &\Cdots & & & & & \\%[2pt]
\frac{1}{c+1} & \frac{c}{c+1} & \Ddots & & & & & \\[1pt]
0 & \frac{2}{3} & \frac{1}{3} & & & & & \\%[2pt]
\Vdots & \Ddots & \frac{3}{c+4} & \frac{c+1}{c+4} & & && \\[1pt]
& & & \frac{2}{3} & \frac{1}{3} & & & \\[1pt]
& & & & \frac{5}{c+7} & \frac{c+2}{c+7} && \\[1pt]
& & & & & \frac{2}{3} & \frac{1}{3}& \\[1pt]
&&&&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_I^L\big|_{c=\frac{1}{2}}&=\left(
\begin{NiceMatrix
1 & 0 &\Cdots & & & & & \\
\frac{2}{3} & \frac{1}{3} & \Ddots & & & & & \\[1pt]
0 & \frac{2}{d+2} & \frac{d}{d+2} & & & & & \\
\Vdots & \Ddots & \frac{2}{3} & \frac{1}{3} & & & & \\[1pt]
& & & \frac{4}{d+5} & \frac{d+1}{d+5} & & & \\[1pt]
& & & & \frac{2}{3} & \frac{1}{3} & & \\[1pt]
& & & & & \frac{6}{d+8} & \frac{d+2}{d+8}& \\%[2pt]
&&&&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right).
\end{aligned}
\end{align*}
Therefore, the pure death factor is uniform if and only if $c=\frac{1}{2}$ and $d=1$, with
$ P_L:= P_I^L\big|_{c=\frac{1}{2},d=1}=\left(
\begin{NiceMatrix}[small]
1 & 0 &\Cdots & \\[-3pt]
\frac{2}{3} & \frac{1}{3} & \Ddots & \\[3pt]
0 & \frac{2}{3} & \frac{1}{3} & &\\
\Vdots & \Ddots & \Ddots& \Ddots \\
\end{NiceMatrix}
\right)$.
For the second pure birth factor $P_{I,2}^U$ we have uniformity on even rows if and only if $3b-2d=0$ and on the odd rows if and only if $3b-2c=1$:
{\scriptsize\begin{align*}
\hspace*{-.5cm}\begin{aligned}
P_{I,2}^U\big|_{3b-2d=0}&=\left(
\begin{NiceMatrix}[columns-width = 0.3cm]
\frac{2}{3} &\frac{1}{3} & 0 & \Cdots& & & & \\
0 & \frac{b+1}{c+2} & \frac{c-b+1}{c+2} & \Ddots & & & &\\
\Vdots& \Ddots& \frac{2}{3} & \frac{1}{3}& & & &\\[2pt]
& & & \frac{b+3}{c+5} & \frac{c-b+2}{c+5} & & & \\[2pt]
& & & & \frac{2}{3}& \frac{1}{3}& & \\[2pt]
& & & & & \frac{b+5}{c+8} & \frac{c-b+3}{c+8} & \\[2pt]
& & & & & & \Ddots & \Ddots
\end{NiceMatrix}
\right),&
P_{I,2}^U\big|_{3b-2c=1}&=\left(
\begin{NiceMatrix}[columns-width = 0.3cm]
\frac{b}{d} &\frac{d-b}{d} & 0 & \Cdots& & & & \\
0 & \frac{2}{3} & \frac{1}{3} & \Ddots & & & & \\
\Vdots& \Ddots& \frac{b+2}{d+3} & \frac{d-b+1}{d+3}& & & &\\[2pt]
& & &\frac{2}{3} & \frac{1}{3}& & & \\[2pt]
& & & & \frac{b+4}{d+6} & \frac{d-b+2}{d+6} & & \\[2pt]
& & & & & \frac{2}{3} & \frac{1}{3}& \\[2pt]
& & & & & & \Ddots & \Ddots
\end{NiceMatrix}
\right).
\end{aligned}
\end{align*}}
In fact, this factor is uniform if and only if $3b-2c=1$ and $3b-2d=0$, and
$ P_U:= P_{I,2}^U\bigg|_{\substack{3b-2d=0\\
3b-2c=1 }}=\left(
\begin{NiceMatrix}[small]
\frac{2}{3} &\frac{1}{3} & 0 & \Cdots \\
0 & \frac{2}{3} & \frac{1}{3} & \Ddots \\
\Vdots& \Ddots& \Ddots& \Ddots
\end{NiceMatrix}
\right)$.
For the first pure birth factor we have uniformity on the even rows if and only if $3a-2c=0$ and on odd rows if and only if $3a-2d=-1$:
{\scriptsize\begin{align*}
\hspace*{-.35cm}\begin{aligned}
P_{I,1}^U\big|_{3a-2c=0}&=\left(
\begin{NiceMatrix}[columns-width = 0.3cm]
\frac{2}{3} & \frac{1}{3}& 0 & \Cdots& & & & \\
0 & \frac{a+1}{d+1} & \frac{d-a}{d+1} & \Ddots & & & &\\
\Vdots& \Ddots& \frac{2}{3} & \frac{1}{3}& & & & \\[2pt]
& & & \frac{a+3}{d+4} & \frac{d-a+1}{d+4} & & & \\[2pt]
& & & & \frac{2}{3} & \frac{1}{3}& & \\[2pt]
& & & & & \frac{a+5}{d+7} & \frac{d-a+2}{d+7} & \\[2pt]
& & & & & & \Ddots & \Ddots
\end{NiceMatrix}
\right),& P_{I,1}^U\big|_{3a-2d=-1}&=\left(
\begin{NiceMatrix}[columns-width = 0.3cm]
\frac{a}{c} &\frac{c-a}{c} & 0 & \Cdots& & & & \\
0 & \frac{2}{3} & \frac{1}{3}& \Ddots & & & &\\
\Vdots& \Ddots& \frac{a+2}{c+3} & \frac{c-a+1}{c+3}& & & &\\[2pt]
& & & \frac{2}{3} & \frac{1}{3}& & & \\[2pt]
& & & & \frac{a+4}{c+6} & \frac{c-a+2}{c+6} & & \\[2pt]
& & & & & \frac{2}{3} & \frac{1}{3}& \\[2pt]
& & & & & & \Ddots & \Ddots
\end{NiceMatrix}
\right).
\end{aligned}
\end{align*}}
This factor is uniform if and only if $3a-2c=0$ and $3a-2d=-1$, with
$ P_{I,1}^U\Big|_{\substack{3a-2c=0,\\
3a-2d=-1 }}=P_U$.
Hence, both pure birth factors $P^U_{I,1}$ and $P^U_{I,2}$ are uniform whenever we have
$3b-2d=0$, $3b-2c=1$, $3a-2c=0$ and $3a-2d=-1$
whose solution is
$(a,b,c,d)= \big( a, a+\frac{1}{3}, \frac{3}{2}a, \frac{3}{2}a+\frac{1}{2}\big)$. Then, to have $d=1$ we need $a=\frac{1}{3}$ and we get the result.
\end{proof}
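The linear system solved at the end of the proof can be double-checked with exact rational arithmetic. In the sketch below, `tuple_for` is a helper name introduced here for the parametric family $(a, a+\frac13, \frac32 a, \frac32 a + \frac12)$; it is not notation from the text.

```python
from fractions import Fraction as F

def tuple_for(a):
    # hypothesized parametric solution from the proof
    return (a, a + F(1, 3), F(3, 2) * a, F(3, 2) * a + F(1, 2))

# the four uniformity conditions hold along the whole one-parameter family
for a in [F(0), F(1, 3), F(1, 2), F(2), F(-1, 4)]:
    a_, b, c, d = tuple_for(a)
    assert 3 * b - 2 * d == 0 and 3 * b - 2 * c == 1
    assert 3 * a_ - 2 * c == 0 and 3 * a_ - 2 * d == -1

# imposing d = 1 pins down a = 1/3, giving the stochastic uniform tuple
assert tuple_for(F(1, 3)) == (F(1, 3), F(2, 3), F(1, 2), F(1))
```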
The corresponding type II transition matrix
$ P_{\text{II}}
= \left(\begin{NiceMatrix}[small]
\frac{12}{27}& \frac{8}{27} & 0 & \Cdots & & \\
\frac {12} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots & & \\[5pt]
\frac{3}{27} & \frac {6} {27}& \frac{12}{27}& \frac{8}{27} &&\\[5pt]
0& \frac{1}{27} &\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right)
$
is stochastic except for the first three rows; it models a recurrent random walk in which the first state is a sink and the second and third states are sources. The diagram of this random walk is
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$/,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.3 mm,bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {1/$\frac{6}{27}$/,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.22 mm,bend left=50,above, "\txt",color=Mahogany,auto=right] (\i);
\draw (1) edge[line width=.44 mm,bend left=50,above, auto=right,color=Mahogany,"$\frac{12}{27}$"] (0);
\foreach
\i/\txt in {1/$\frac{1}{27}$/,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[line width=.04 mm,bend left=60,color=RawSienna,"\txt"] (\i);
\draw (2) edge[line width=.12 mm,bend left=50,above,color=RawSienna,"$\frac{3}{27}$"] (0);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$/,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[line width=.44 mm,loop left, color=NavyBlue,"$\frac{12}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type II Markov chain with one sink and two sources}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
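The sink/source bookkeeping for this type II chain can be made explicit: the row-sum defects of the first three rows of $P_{II}$ are $-\frac{7}{27}$, $+\frac{5}{27}$ and $+\frac{2}{27}$, which cancel, while the generic band row is stochastic. A minimal Python check, with the rows transcribed from $P_{II}$ above:

```python
from fractions import Fraction as F

# first rows of P_II for the tuple (1/3, 2/3, 1/2, 1); later rows repeat the last one
rows = [
    [F(12, 27), F(8, 27)],
    [F(12, 27), F(12, 27), F(8, 27)],
    [F(3, 27), F(6, 27), F(12, 27), F(8, 27)],
    [F(1, 27), F(6, 27), F(12, 27), F(8, 27)],  # generic band row (shifted along the diagonal)
]
defects = [sum(r) - 1 for r in rows]
assert defects == [-F(7, 27), F(5, 27), F(2, 27), F(0)]
assert sum(defects[:3]) == 0  # destroyed and created probability balance to zero
```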
\paragraph{\textbf{The stochastic uniform tuples }
$\Big(\frac 2 3, \frac 4 3, 1, \frac 3 2 \Big)$ and $\Big( \frac 4 3, \frac 2 3, 1, \frac 3 2 \Big)$}
In this case the Jacobi matrix and the type I stochastic matrix are
\begin{align*}
J & =
\left(\begin{NiceMatrix
4\kappa& 1 & 0 & \Cdots & & \\[-5pt]
5\kappa^2&3\kappa& 1 & \Ddots& & \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),
&
P_I&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{16}{27} &\frac{10}{27} &\frac{1}{27}& 0 & \Cdots& & \\
\frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & & \\
0& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & \\
\Vdots& \Ddots& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \\
& & & \Ddots& \Ddots & \Ddots&\Ddots
\end{NiceMatrix}
\right),
\end{align*}
with corresponding diagram
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {1/$\frac{6}{27}$/,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.22 mm,bend left,"\txt",below,color=Periwinkle] (\n1);
\draw (0) edge[line width=.37 mm,bend left,"$\frac{10}{27}$",below,color=Periwinkle] (1);
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$/,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.3 mm,bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,1/$\frac{1}{27}$/,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[line width=.04 mm,bend left=65,color=MidnightBlue,"\txt"](\n1);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$/,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[line width=.49 mm,loop left, color=NavyBlue,"$\frac{16}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type I Markov chain }
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
The system of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ corresponding to the stochastic uniform tuples $\big(\frac 2 3, \frac 4 3, 1,\frac3 2\big)$ and $\big(\frac 4 3, \frac 2 3, 1, \frac3 2\big)$ are, respectively,
\begin{align*}
W_1 (x) & = \frac{3\sqrt{3} (\vartheta_+^2(x)+\vartheta_-^2(x))}{8 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x} \sqrt{1-x} },&
W_2 (x) & = \frac{9\sqrt{3} (\vartheta_+^5(x)+\vartheta_-^5(x))}{32 \pi \sqrt[\leftroot{2}\uproot{2}3]{x}\sqrt{1-x} } , &
\hat W_2 (x) & = \frac{9 \sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x}\, (\vartheta_+(x)+\vartheta_-(x))}{16\pi \sqrt{1-x} } .
\end{align*}
Solving the system of equations
\eqref{eq:rho}
we find that $ \alpha = \frac{3}{2}$ and $\beta = -\frac{1}{2}$.
After some simplifications we get
\begin{align*}
\frac{3}{2} W_1 -\frac{1}{2} \hat W_2 & =9 \sqrt{3} \,
\frac{2 (\vartheta_+^2(x)+\vartheta_-^2(x))-\sqrt[\leftroot{2}\uproot{2} 3]{x^2}(\vartheta_+(x)+\vartheta_-(x)) }{32 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x} \sqrt{1-x} } ,
\end{align*}
but the numerator can be written as
$ (\vartheta_+ + \vartheta_-) (\vartheta_+^2 - \vartheta_+ \vartheta_- +
\vartheta_-^2) (\vartheta_+^2 + \vartheta_-^2) - (\vartheta_+ +
\vartheta_-) (\vartheta_+ \vartheta_-)^2 = \vartheta_+^5 + \vartheta_-^5$,
and, consequently, we obtain
\begin{align*}
\frac{3}{2} W_1 -\frac{1}{2} \hat W_2 & = \frac{9 \sqrt{3}}{32 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x}\sqrt{1-x}} \big( \vartheta_+^5+ \vartheta_-^5\big) = W_2,
\end{align*}
i.e., $\hat W_2=3 W_1-2W_2$, and both sets of weights are in the same \emph{gauge} class.
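As in the previous case, the algebraic step for the numerator is a polynomial identity in $\vartheta_\pm$ and can be checked independently. The Python sketch below is our addition; it verifies the identity by exact integer evaluation on a grid, which suffices at this degree.

```python
from itertools import product

def lhs(p, m):
    # (v+ + v-)(v+^2 - v+ v- + v-^2)(v+^2 + v-^2) - (v+ + v-)(v+ v-)^2, v± = theta±
    return (p + m) * (p*p - p*m + m*m) * (p*p + m*m) - (p + m) * (p * m)**2

# a degree-5 identity in two variables is settled by a 9x9 exact integer grid
for p, m in product(range(-4, 5), repeat=2):
    assert lhs(p, m) == p**5 + m**5
```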
The type II multiple orthogonal polynomials are
\begin{align}\label{eq:B_3F2_2}
B^{(n)}(x) & =
(3 n+1) \left(
- \kappa \right)^n
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right], && n \in \mathbb{N} .
\end{align}
Arguing as in the deduction of \eqref{eq:Bn(1)}, we get the following values at unity of the type II multiple orthogonal polynomials and generalized hypergeometric functions:
\begin{align*}
B^{(n)}(1)&= \frac{(-1)^{n+1}+4 \times 8^{n}}{3 \times 27^{n}}
,
&
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};1\right]&=
\frac{4 (-8)^{n} - 1}{3 (3 n+1) 4^{n}}
,
&
n&\in\mathbb{N}.
\end{align*}
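These values can again be confirmed with exact rational arithmetic. The Python sketch below is our check; the value $\kappa=\frac{4}{27}$ and the terminating-sum evaluation of the $\tensor[_3]{F}{_2}$ are assumptions of the sketch, as in the first case.

```python
from fractions import Fraction as F

def f32(n, tops, bots, x):
    """Terminating hypergeometric sum: sum_{k=0}^{n} prod(a)_k / prod(b)_k * x^k / k!."""
    total, term = F(0), F(1)
    for k in range(n + 1):
        total += term
        term *= x
        for a in tops:
            term *= a + k
        for b in bots:
            term /= b + k
        term /= k + 1
    return total

kappa = F(4, 27)  # assumed value, as in the first case
for n in range(1, 10):
    hyp = f32(n, [F(-n), F(n + 1, 2), F(n + 2, 2)], [F(2, 3), F(4, 3)], F(1))
    assert (3*n + 1) * (-kappa)**n * hyp == F((-1)**(n + 1) + 4 * 8**n, 3 * 27**n)
    assert hyp == F(4 * (-8)**n - 1, 3 * (3*n + 1) * 4**n)
```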
The stochastic factorization of the type I Markov matrix, $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$, is given by
\begin{align*}
\hspace{-.35cm}
P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.1cm]
1 & 0 &\Cdots & & \\
\frac{1}{2} & \frac{1}{2} & \Ddots & & \\
0 & \frac{4}{7} & \frac{3}{7} & & \\
\Vdots & \Ddots & \frac{3}{5} & \frac{2}{5} & \\
&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{I,2}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{2}{3} &\frac{1}{3} & 0 & \Cdots& & \\
0 & \frac{2}{3} & \frac{1}{3} & \Ddots & & \\
\Vdots& \Ddots& \frac{2}{3} & \frac{1}{3} & & \\[2pt]
& & &\frac{2}{3} & \frac{1}{3}& \\[2pt]
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right),& P_{I,1}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{8}{9} &\frac{1}{9} & 0 & \Cdots& & \\
0 & \frac{7}{9}&\frac{2}{9}& \Ddots & & \\
\Vdots& \Ddots& \frac{20}{27} & \frac{7}{27} & & \\[2pt]
& & &\frac{13}{18} & \frac{5}{18}& \\[2pt]
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right).
\end{align*}
Only one of the factors is uniform, namely one of the two pure birth factors.
The type II transition matrix
$P_{\text{II}}
= \left(\begin{NiceMatrix}[small]
\frac{16}{27}& \frac{8}{27} & 0 & \Cdots & & \\
\frac {10} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots & & \\[5pt]
\frac{1}{27} & \frac {6} {27}& \frac{12}{27}& \frac{8}{27} &&\\[5pt]
0& \frac{1}{27} &\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right)
$
is stochastic except for the first two rows; it represents a recurrent random walk in which the first state is a sink and the second a source. The diagram of this random walk is
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$/,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.3 mm,bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {1/$\frac{6}{27}$/,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.22 mm,bend left=50,above, "\txt",color=Mahogany,auto=right] (\i);
\draw (1) edge[line width=.37 mm,bend left=50,above, auto=right,color=Mahogany,"$\frac{10}{27}$"] (0);
\foreach
\i/\txt in {1/$\frac{1}{27}$/,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[line width=.04 mm,bend left=60,color=RawSienna,"\txt"] (\i);
\draw (2) edge[line width=.12 mm,bend left=50,above,color=RawSienna,"$\frac{1}{27}$"] (0);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$/,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[line width=.59 mm,loop left, color=NavyBlue,"$\frac{16}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type II Markov chain with one sink and one source}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
\paragraph{\textbf{The stochastic uniform tuples }
$\Big(\frac 4 3, \frac 5 3, \frac 3 2, 2 \Big)$ and $\Big( \frac 5 3, \frac 4 3, \frac 3 2, 2 \Big)$}
In this case the Jacobi matrix and the type I stochastic matrix are
\begin{align*}
J & =
\left(\begin{NiceMatrix}[]
5\kappa& 1 & 0 & 0 & 0 & \Cdots\\
3\kappa^2&3\kappa& 1 & 0 & 0 & \Ddots \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &0&\Ddots\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \Ddots\\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),&
P_I&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{20}{27} &\frac{6}{27} &\frac{1}{27}& 0 & \Cdots& & \\
\frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & & \\
0& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & \\
\Vdots& \Ddots& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \\
& & & \Ddots& \Ddots & \Ddots&\Ddots
\end{NiceMatrix}
\right),
\end{align*}
with corresponding diagram
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{6}{27}$,1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.22 mm,bend left,"\txt",below,color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$/,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.3 mm,bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,1/$\frac{1}{27}$/,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[line width=.04 mm,bend left=65,color=MidnightBlue,"\txt"](\n1);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$/,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[line width=.74mm,loop left, color=NavyBlue,"$\frac{20}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type I Markov chain}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
The system of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ corresponding to the stochastic uniform tuples $\big(\frac 4 3, \frac 5 3, \frac 3 2, 2 \big)$ and $\big( \frac 5 3, \frac 4 3, \frac 3 2, 2 \big)$ are, respectively,
\begin{align*}
W_1 (x) & = \frac{9\sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x} (\vartheta_+(x)+\vartheta_-(x))}{16\pi \sqrt{1-x} },&
W_2 (x) & = \frac{81\sqrt{3}\sqrt[\leftroot{2}\uproot{2} 3]{x} (\vartheta_+^4(x)+\vartheta_-^4(x))}{160 \pi \sqrt{1-x} } , &
\hat W_2 (x) & = \frac{81 \sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x^2} (\vartheta_+^2(x)+\vartheta_-^2(x))}{128\pi \sqrt{1-x} } .
\end{align*}
Solving the system of equations \eqref{eq:rho}
we find that $ \alpha = \frac{9}{5}$ and $\beta = -\frac{4}{5}$.
After some simplifications we get
\begin{align*}
\frac{9}{5} W_1 -\frac{4}{5} \hat W_2 & =-81\sqrt{3}\sqrt[\leftroot{2}\uproot{2} 3]{x} \,
\frac{\vartheta_+^2(x)+\vartheta_-^2(x)-2(\vartheta_+(x)+\vartheta_-(x)) }{32 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x} \sqrt{1-x} } ,
\end{align*}
but the numerator can be written as
$ \big( (\vartheta_+ + \vartheta_-)^2 (\vartheta_+^2 - \vartheta_+ \vartheta_- +
\vartheta_-^2) - (\vartheta_+^2 + \vartheta_-^2)\, \vartheta_+ \vartheta_- \big) \vartheta_+ \vartheta_-
= (\vartheta_+^4 + \vartheta_-^4)\, \vartheta_+ \vartheta_-$,
and, consequently, we obtain
\begin{align*}
\frac{9}{5} W_1 -\frac{4}{5} \hat W_2 & = \frac{81 \sqrt{3}}{160 \pi \sqrt{1-x}} \vartheta_+\vartheta_-\big( \vartheta_+^4+ \vartheta_-^4\big) = W_2,
\end{align*}
i.e., $\hat W_2=\frac{9}{4}W_1-\frac{5}{4}W_2$, and both sets of weights are in the same \emph{gauge} class.
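Once more, the numerator manipulation reduces to a polynomial identity in $\vartheta_\pm$, which the following Python sketch, our addition, verifies by exact integer evaluation on a grid sufficient for this degree.

```python
from itertools import product

def lhs(p, m):
    # ((v+ + v-)^2 (v+^2 - v+ v- + v-^2) - (v+^2 + v-^2) v+ v-) * v+ v-, v± = theta±
    return ((p + m)**2 * (p*p - p*m + m*m) - (p*p + m*m) * p * m) * (p * m)

# a degree-6 identity in two variables is settled by an 11x11 exact integer grid
for p, m in product(range(-5, 6), repeat=2):
    assert lhs(p, m) == (p**4 + m**4) * p * m
```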
The type II multiple orthogonal polynomials are
\begin{align}\label{eq:B_3F2_3}
B^{(n)}(x)=
\frac{(n+1) (3 n+2)}{2} ( -\kappa )^n
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} ,\; \frac{n+2}{2} \\[3pt]
\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};x\right] , && n \in \mathbb{N} .
\end{align}
Arguing as in the deduction of \eqref{eq:Bn(1)}, we get the following values at unity of the type II multiple orthogonal polynomials and generalized hypergeometric functions:
\begin{align*}
B^{(n)}(1)&= \frac{(-1)^n+8^{n+1} }{9\times 27^n }, &
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} ,\; \frac{n+2}{2} \\[3pt]
\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};1\right]&
=\frac{2(1-(-8)^{n+1} )}{9(n+1)(3n+2)4^n}, &
n&\in\mathbb{N}.
\end{align*}
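These values can be confirmed as in the previous cases with exact rational arithmetic. The Python sketch below is our check; the value $\kappa=\frac{4}{27}$ and the terminating-sum evaluation of the $\tensor[_3]{F}{_2}$ are assumptions of the sketch.

```python
from fractions import Fraction as F

def f32(n, tops, bots, x):
    """Terminating hypergeometric sum: sum_{k=0}^{n} prod(a)_k / prod(b)_k * x^k / k!."""
    total, term = F(0), F(1)
    for k in range(n + 1):
        total += term
        term *= x
        for a in tops:
            term *= a + k
        for b in bots:
            term /= b + k
        term /= k + 1
    return total

kappa = F(4, 27)  # assumed value, as in the first case
for n in range(1, 10):
    hyp = f32(n, [F(-n), F(n + 3, 2), F(n + 2, 2)], [F(4, 3), F(5, 3)], F(1))
    B1 = F((n + 1) * (3*n + 2), 2) * (-kappa)**n * hyp
    assert B1 == F((-1)**n + 8**(n + 1), 9 * 27**n)
    assert hyp == F(2 * (1 - (-8)**(n + 1)), 9 * (n + 1) * (3*n + 2) * 4**n)
```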
The stochastic factorization of the type I Markov matrix, $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$, is given by
\begin{align*}P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.1cm]
1 & 0 &\Cdots & & \\
\frac{2}{5} & \frac{3}{5} & \Ddots & & \\
0 & \frac{1}{2} & \frac{1}{2} & & \\
\Vdots & \Ddots & \frac{6}{11} & \frac{5}{11} & \\
&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{I,2}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{5}{6} &\frac{1}{6} & 0 & \Cdots& & \\
0 & \frac{16}{21} & \frac{5}{21} & \Ddots & & \\
\Vdots& \Ddots& \frac{11}{15} & \frac{4}{15} & & \\[2pt]
& & &\frac{28}{39} & \frac{11}{39}& \\[2pt]
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right),& P_{I,1}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{8}{9} &\frac{1}{9} & 0 & \Cdots& & \\
0 & \frac{7}{9}&\frac{2}{9}& \Ddots & & \\
\Vdots& \Ddots& \frac{20}{27} & \frac{7}{27} & & \\[2pt]
& & &\frac{13}{18} & \frac{5}{18}& \\[2pt]
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right).
\end{align*}
There is no uniform factor. The corresponding type II transition matrix
$ P_{\text{II}}
= \left(\begin{NiceMatrix}[small]
\frac{20}{27}& \frac{8}{27} & 0 & \Cdots & & \\
\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots & & \\
\frac{1}{27} & \frac {6} {27}& \frac{12}{27}& \frac{8}{27} &&\\
0& \frac{1}{27} &\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right)$
is stochastic except for the first two rows; it models a recurrent random walk in which the first state is a source and the second a sink. The diagram of this random walk is
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$/,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.3 mm,bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$\frac{6}{27}$,1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.22 mm,bend left=50,above, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {1/$\frac{1}{27}$/,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[line width=.04 mm,bend left=60,color=RawSienna,"\txt"] (\i);
\draw (2) edge[line width=.12 mm,bend left=50,above,color=RawSienna,"$\frac{1}{27}$"] (0);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$/,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[line width=.74 mm,loop left, color=NavyBlue,"$\frac{20}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type II Markov chain with one sink and one source}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
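Here the balance is tightest: the first row of $P_{II}$ carries a surplus of $\frac{1}{27}$ and the second an equal deficit, while the generic band row is stochastic. A minimal Python check, with the rows transcribed from $P_{II}$ above:

```python
from fractions import Fraction as F

# first rows of P_II for the tuple (4/3, 5/3, 3/2, 2); later rows repeat the last one
rows = [
    [F(20, 27), F(8, 27)],
    [F(6, 27), F(12, 27), F(8, 27)],
    [F(1, 27), F(6, 27), F(12, 27), F(8, 27)],  # generic band row
]
defects = [sum(r) - 1 for r in rows]
assert defects == [F(1, 27), -F(1, 27), F(0)]
assert sum(defects) == 0  # destroyed and created probability balance to zero
```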
\subsection{Semi-stochastic uniform tuples. Uniform transient random walks with sinks}\label{S:semi-stochastic uniform tuples}
We now proceed as for the type II stochastic uniform tuples; that is, we normalize the almost uniform Jacobi matrices using the matrices
$\sigma_{II}$ and $\sigma_{I}=\sigma_{II}^{-1}$, see \eqref{eq:norma}. For these semi-stochastic tuples $\delta = \frac 32$, so that the corresponding random walks are transient with sinks. All of them satisfy the min-max property \eqref{eq:min-max}. Moreover, the corresponding uniform Jacobi matrices differ from \eqref{eq:Jacobi_Jacobi_Piñeiro_uniform} by at most one column.
\paragraph{\textbf{The semi-stochastic uniform tuples} $\Big(\frac 1 3, \frac 2 3, 1, \frac 3 2\Big)$ and $\Big( \frac 2 3, \frac 1 3, 1, \frac 3 2 \Big)$}
In this case the Jacobi matrix and semi-stochastic matrices are
\begin{align*}
\hspace*{-1cm}\begin{aligned}
J & =
\left(\begin{NiceMatrix}
\kappa& 1 & 0 & \Cdots & & \\[-5pt]
2\kappa^2&3\kappa& 1 & \Ddots& & \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),&
P_I&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{4}{27} &\frac{4}{27} &\frac{1}{27}& 0 & \Cdots& & \\
\frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & & \\
0& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & \\
\Vdots& \Ddots& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \\
& & & \Ddots& \Ddots & \Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{\text{II}}
& = \left(\begin{NiceMatrix}
\frac{4}{27}& \frac{8}{27} & 0 & \Cdots& & \\
\frac {4} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots& & \\[5pt]
\frac{1}{27} & \frac {6} {27}& \frac{12}{27}& \frac{8}{27} &&\\
0& \frac{1}{27} &\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots\\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),
\end{aligned}
\end{align*}
with corresponding diagrams
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.22 mm,bend left,"\txt",below,color=Periwinkle] (\n1);
\draw (0) edge[line width=.15 mm,bend left,color=Periwinkle,"$\frac{4}{27}$"](1);
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.3 mm,bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[line width=.04 mm,bend left=65,color=MidnightBlue,"\txt"](\n1);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[line width=.15mm,loop left, color=NavyBlue,"$\frac{4}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\end{center}
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type I Markov chain with one sink}
\end{center}
\end{minipage}};
\end{tikzpicture}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.3 mm,bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.22 mm,bend left=50,above, "\txt",color=Mahogany,auto=right] (\i);
\draw (1) edge[line width=.15 mm,bend left=50,above, auto=right,color=Mahogany,"$\frac{4}{27}$"] (0);
\foreach
\i/\txt in {1/$\frac{1}{27}$,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[line width=.04 mm,bend left=60,color=RawSienna,"\txt"] (\i);
\draw (2) edge[line width=.04 mm,bend left=50,above,color=RawSienna,"$\frac{1}{27}$"] (0);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[line width=.15 mm,loop left, color=NavyBlue,"$\frac{4}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type II Markov chain with two sinks}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
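The sink counts shown in these diagrams can be read off directly from the row sums of finite truncations of $P_I$ and $P_{\text{II}}$: a row summing to less than one marks a sink. The following Python sketch rebuilds the band patterns displayed above; the helper `truncation` and the truncation size are ad hoc choices for illustration, not taken from the text.

```python
from fractions import Fraction as Fr

def truncation(head_rows, bulk, n):
    """n-by-n truncation of a banded semi-infinite matrix.
    head_rows: exceptional leading rows, given as {column: value} dicts;
    bulk: {offset: value} band pattern used for every remaining row."""
    M = [[Fr(0)] * n for _ in range(n)]
    for i in range(n):
        entries = head_rows[i] if i < len(head_rows) else \
            {i + off: v for off, v in bulk.items() if 0 <= i + off < n}
        for j, v in entries.items():
            M[i][j] = v
    return M

w = lambda p: Fr(p, 27)
# Entries copied from the matrices P_I and P_II displayed above.
P_I = truncation([{0: w(4), 1: w(4), 2: w(1)}],
                 {-1: w(8), 0: w(12), 1: w(6), 2: w(1)}, 9)
P_II = truncation([{0: w(4), 1: w(8)},
                   {0: w(4), 1: w(12), 2: w(8)}],
                  {-2: w(1), -1: w(6), 0: w(12), 1: w(8)}, 9)

# Away from the truncation edge, the row sums detect the sinks:
print([sum(r) for r in P_I][:5])   # one deficient row  -> one sink
print([sum(r) for r in P_II][:5])  # two deficient rows -> two sinks
```

The trailing rows of each truncation are edge effects and are ignored; only the leading rows reflect the semi-infinite matrices.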
The systems of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ corresponding to the semi-stochastic uniform tuples $\big(\frac 1 3, \frac 2 3, 1, \frac 3 2\big)$ and $\big( \frac 2 3, \frac 1 3, 1, \frac 3 2 \big)$ are, respectively,
\begin{align*}
W_1 (x) & = \frac{3\sqrt{3} (\vartheta_+(x)-\vartheta_-(x))}{4\pi \sqrt[\leftroot{2}\uproot{2} 3]{x^2} },&
W_2 (x) & = \frac{9\sqrt{3} (\vartheta_+^4(x)-\vartheta_-^4(x))}{32 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x^2} } , &
\hat W_2 (x) & = \frac{9 \sqrt{3} (\vartheta_+^2(x)-\vartheta_-^2(x))}{8\pi \sqrt[\leftroot{2}\uproot{2} 3]{x} } .
\end{align*}
From \eqref{eq:rho}
we find that $ \alpha = \frac{3}{4}$ and $\beta = \frac{1}{4}$.
We get
\begin{align*}
\frac{3}{4} W_1 +\frac{1}{4} \hat W_2 & =9\sqrt{3} \,
\frac{(\vartheta_+^2(x)-\vartheta_-^2(x))\big(2+ \sqrt[\leftroot{2}\uproot{2} 3]{x}\,(\vartheta_+^3(x)+\vartheta_-^3(x))\big) }{32 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x^2} } ,
\end{align*}
but the numerator can be written as
$ (\vartheta_- - \vartheta_+) ((\vartheta_+ + \vartheta_-) (\vartheta_+^2 - \vartheta_+ \vartheta_- +
\vartheta_-^2) + (\vartheta_+ + \vartheta_-) \vartheta_+ \vartheta_-) $,
and, consequently, we obtain
\begin{align*}
\frac{3}{4} W_1 +\frac{1}{4} \hat W_2 & = \frac{9 \sqrt{3}}{32 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x^2}} \big( \vartheta_+^4- \vartheta_-^4\big) = W_2,
\end{align*}
i.e., $\hat W_2=-3W_1+4W_2$, and both sets of weights are in the same \emph{gauge} class.
The type II multiple orthogonal polynomials are
\begin{align}\label{eq:B_3F2_4}
B^{(n)}(x)= (-\kappa)^n
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] , && n \in \mathbb{N} .
\end{align}
Following arguments similar to those used in the deduction of \eqref{eq:Bn(1)}, we obtain the following values at unity of the type II multiple orthogonal polynomials and of the generalized hypergeometric functions
\begin{align*}
B^{(n)}(1)&= \frac{2(9 n+4)8^n+(-1)^n }{9\times 27^n}, &
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};1\right]&
=\frac{1+2(-1)^n (9 n+4)8^n}{9 \times 4^{n}}, &
n&\in\mathbb{N}.
\end{align*}
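These values at unity can be checked in exact rational arithmetic by summing the terminating series in \eqref{eq:B_3F2_4} directly. In the sketch below, the value $\kappa=4/27$ is an assumption on our part, consistent with the entries $3\kappa=12/27$ of the matrices above.

```python
from fractions import Fraction as Fr

def poch(a, k):  # Pochhammer symbol (a)_k
    p = Fr(1)
    for i in range(k):
        p *= a + i
    return p

def f32(a, b, x, kmax):  # terminating 3F2 evaluated as a finite sum
    return sum(poch(a[0], k) * poch(a[1], k) * poch(a[2], k) * x**k
               / (poch(b[0], k) * poch(b[1], k) * poch(Fr(1), k))
               for k in range(kmax + 1))

kappa = Fr(4, 27)  # assumed value, consistent with 3*kappa = 12/27 above

def B(n, x):  # type II polynomial of eq:B_3F2_4
    return (-kappa)**n * f32([Fr(-n), Fr(n + 1, 2), Fr(n + 2, 2)],
                             [Fr(1, 3), Fr(2, 3)], x, n)

for n in range(9):
    assert B(n, Fr(1)) == Fr(2*(9*n + 4)*8**n + (-1)**n, 9*27**n)
    assert f32([Fr(-n), Fr(n + 1, 2), Fr(n + 2, 2)], [Fr(1, 3), Fr(2, 3)],
               Fr(1), n) == Fr(1 + 2*(-1)**n*(9*n + 4)*8**n, 9*4**n)
print("closed forms at x = 1 verified for n = 0,...,8")
```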
The stochastic factorization of the type I Markov matrix $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$ is not uniform:
\begin{align*}P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.1cm]
1 & 0 &\Cdots & & \\
\frac{1}{2} & \frac{1}{2} & \Ddots & & \\[5pt]
0 & \frac{4}{7} & \frac{3}{7} & & \\
\Vdots & \Ddots & \frac{3}{5} & \frac{2}{5} & \\
&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{I,2}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{4}{9} &\frac{5}{9} & 0 & \Cdots& & \\
0 & \frac{5}{9} & \frac{4}{9} & \Ddots & & \\
\Vdots& \Ddots& \frac{16}{27} & \frac{11}{27} & & \\[5pt]
& & &\frac{11}{18} & \frac{7}{18}& \\
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right),& P_{I,1}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{1}{3} &\frac{2}{3} & 0 & \Cdots& & \\
0 & \frac{8}{15}&\frac{7}{15}& \Ddots & & \\
\Vdots& \Ddots& \frac{7}{12} & \frac{5}{12} & & \\[5pt]
& & &\frac{20}{33} & \frac{13}{33}& \\
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right).
\end{align*}
\paragraph{\textbf{The semi-stochastic uniform tuples} $\Big(\frac 2 3, \frac 4 3, \frac 3 2 ,2\Big)$ and $\Big( \frac 4 3, \frac 2 3, \frac 3 2 ,2 \Big)$}
In this case the Jacobi matrix and semi-stochastic matrices are
\begin{align*}
\hspace*{-1cm}\begin{aligned}
J & =
\left(\begin{NiceMatrix}
2\kappa& 1 & 0 & \Cdots & & \\[-5pt]
3\kappa^2&3\kappa& 1 & \Ddots& & \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),&
P_I&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{8}{27} &\frac{6}{27} &\frac{1}{27}& 0 & \Cdots& & \\
\frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & & \\
0& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & \\
\Vdots& \Ddots& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \\
& & & \Ddots& \Ddots & \Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{\text{II}}
& = \left(\begin{NiceMatrix}
\frac{8}{27}& \frac{8}{27} & 0 & \Cdots& & \\
\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots& & \\[5pt]
\frac{1}{27} & \frac {6} {27}& \frac{12}{27}& \frac{8}{27} &&\\
0& \frac{1}{27} &\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots\\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),
\end{aligned}
\end{align*}
with corresponding diagrams
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{6}{27}$,1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.22 mm,bend left,"\txt",below,color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.3 mm,bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[line width=.04 mm,bend left=65,color=MidnightBlue,"\txt"](\n1);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[line width=.3mm,loop left, color=NavyBlue,"$\frac{8}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type I Markov chain with one sink}
\end{center}
\end{minipage}};
\end{tikzpicture}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.3 mm,bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$\frac{6}{27}$,1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.22 mm,bend left=50,above, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[line width=.04 mm,bend left=60,color=RawSienna,"\txt"] (\i);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[line width=.3 mm,loop left, color=NavyBlue,"$\frac{8}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type II Markov chain with two sinks}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
The systems of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ corresponding to the semi-stochastic uniform tuples $\big(\frac 2 3, \frac 4 3, \frac 3 2 ,2\big)$ and $\big( \frac 4 3, \frac 2 3, \frac 3 2 ,2 \big)$ are, respectively,
\begin{align*}
W_1 (x) & = \frac{9\sqrt{3} (\vartheta_+^2(x)-\vartheta_-^2(x))}{8\pi \sqrt[\leftroot{2}\uproot{2} 3]{x} },&
W_2 (x) & = \frac{81\sqrt{3} (\vartheta_+^5(x)-\vartheta_-^5(x))}{160 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x} } , &
\hat W_2 (x) & = \frac{81 \sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x}(\vartheta_+(x)-\vartheta_-(x))}{16\pi } .
\end{align*}
From \eqref{eq:rho}
we find that $ \alpha = \frac{9}{10}$ and $\beta = \frac{1}{10}$.
We get
\begin{align*}
\frac{9}{10} W_1 +\frac{1}{10} \hat W_2 & =81\sqrt{3} \,
\frac{(\vartheta_+(x)-\vartheta_-(x))\big( \sqrt[\leftroot{2}\uproot{2} 3]{x}+2 (\vartheta_+(x)+\vartheta_-(x))\big) }{160 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x} } ,
\end{align*}
but the numerator can be written as
$ ( \vartheta_- - \vartheta_+) ((\vartheta_+ + \vartheta_-) (\vartheta_+^2 - \vartheta_+ \vartheta_- +
\vartheta_-^2) (\vartheta_- + \vartheta_+) + (\vartheta_+ \vartheta_-)^2)$,
and, consequently, we obtain
\begin{align*}
\frac{9}{10} W_1 +\frac{1}{10} \hat W_2 & = \frac{81 \sqrt{3}}{160 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x}} \big( \vartheta_+^5- \vartheta_-^5\big) = W_2,
\end{align*}
i.e., $\hat W_2=-9W_1+10W_2$, and both sets of weights are in the same \emph{gauge} class.
The type II multiple orthogonal polynomials are
\begin{align}\label{eq:B_3F2_5}
B^{(n)}(x)= (n+1) \left( -\kappa \right)^n
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right] , && n \in \mathbb{N} .
\end{align}
Following arguments similar to those used in the deduction of \eqref{eq:Bn(1)}, we obtain the following values at unity of the type II multiple orthogonal polynomials and of the generalized hypergeometric functions
\begin{align*}
B^{(n)}(1)&= \frac{4 (9 n+7)8^n+(-1)^{n+1} }{27^{n+1}}, &
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};1\right]&
=\frac{ 4 (9 n+7) (-8)^n -1 }{27 (n+1) 4^{n}}, &
n&\in\mathbb{N}.
\end{align*}
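Once $\kappa$ is fixed, the two closed forms above must be related by the prefactor $(n+1)(-\kappa)^n$ of \eqref{eq:B_3F2_5}. A quick exact-arithmetic consistency check, assuming $\kappa=4/27$ (the value consistent with the entries $3\kappa=12/27$ above):

```python
from fractions import Fraction as Fr

kappa = Fr(4, 27)  # assumed value, consistent with 3*kappa = 12/27 above
for n in range(25):
    B_closed = Fr(4*(9*n + 7)*8**n + (-1)**(n + 1), 27**(n + 1))
    F_closed = Fr(4*(9*n + 7)*(-8)**n - 1, 27*(n + 1)*4**n)
    # eq:B_3F2_5 at x = 1: B^(n)(1) = (n+1) (-kappa)^n * 3F2(...; 1)
    assert (n + 1) * (-kappa)**n * F_closed == B_closed
print("the two displayed closed forms agree for n = 0,...,24")
```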
The stochastic factorization of the type I Markov matrix $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$ is not uniform:
\begin{align*}
\hspace{-.5cm}
P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.1cm]
1 & 0 &\Cdots & & \\
\frac{2}{5} & \frac{3}{5} & \Ddots & & \\[5pt]
0 & \frac{1}{2} & \frac{1}{2} & & \\
\Vdots & \Ddots & \frac{6}{11} & \frac{5}{11} & \\
&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{I,2}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{2}{3} & \frac{1}{3} & 0 & \Cdots& & \\
0 & \frac{2}{3} & \frac{1}{3}& \Ddots & & \\
\Vdots& \Ddots& \frac{2}{3} & \frac{1}{3}& & \\[5pt]
& & &\frac{2}{3} & \frac{1}{3}& \\
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right),& P_{I,1}^U&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{4}{9} & \frac{5}{9} & 0 & \Cdots& & \\
0 & \frac{5}{9} & \frac{4}{9}& \Ddots & & \\
\Vdots& \Ddots& \frac{16}{27} & \frac{11}{27}& & \\[5pt]
& & & \frac{11}{18} & \frac{7}{18} & \\
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right).
\end{align*}
Notice that the factor $P_{I,2}^U$ is uniform.
\paragraph{\textbf{The semi-stochastic uniform tuples} $\Big(\frac 4 3, \frac 5 3, 2, \frac 5 2 \Big)$ and
$\Big( \frac 5 3, \frac 4 3, 2, \frac5 2 \Big)$}
Notice that $\big(\frac 4 3, \frac 5 3, 2, \frac 5 2 \big)$ is the uniform case discussed in \cite{lima_loureiro}. The Jacobi matrix and semi-stochastic matrices are
\begin{align*}
\hspace*{-.85cm}\begin{aligned}
J & =
\left(\begin{NiceMatrix}
3\kappa& 1 & 0 & \Cdots & & \\[-5pt]
3\kappa^2&3\kappa& 1 & \Ddots& & \\
\kappa^3& 3\kappa^2& 3\kappa& 1 &&\\
0& \kappa^3&3\kappa^2& 3\kappa& 1 & \\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),&
P_I&= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{12}{27} &\frac{6}{27} &\frac{1}{27}& 0 & \Cdots& & \\
\frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & & \\
0& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \Ddots & \\
\Vdots& \Ddots& \frac{8}{27} & \frac{12}{27} &\frac{6}{27} &\frac{1}{27} & \\
& & & \Ddots& \Ddots & \Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{\text{II}}
& = \left(\begin{NiceMatrix}
\frac{12}{27}& \frac{8}{27} & 0 & \Cdots& & \\
\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots& & \\[5pt]
\frac{1}{27} & \frac {6} {27}& \frac{12}{27}& \frac{8}{27} &&\\
0& \frac{1}{27} &\frac {6} {27}& \frac{12}{27}& \frac{8}{27} & \Ddots\\
\Vdots&\Ddots&\Ddots&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}\right),
\end{aligned}
\end{align*}
with corresponding diagrams
\begin{center}
\tikzset{decorate sep/.style 2 args={decorate,decoration={shape backgrounds,shape=circle,shape size=#1,shape sep=#2}}}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.22 mm,bend left,"\txt",below,color=Periwinkle] (\n1);
\draw (0) edge[line width=.22 mm,bend left,"$\frac{6}{27}$",below,color=Periwinkle] (1);
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.3 mm,bend left,below, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\i) edge[line width=.04 mm,bend left=65,color=MidnightBlue,"\txt"](\n1);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop below, color=NavyBlue,"\txt"] (\i);
\draw (0) edge[line width=.44 mm,loop left, color=NavyBlue,"$\frac{12}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type I Markov chain with one sink}
\end{center}
\end{minipage}};
\end{tikzpicture}
\begin{tikzpicture}[start chain = going right,
-latex, every loop/.append style = {-latex}]\small
\foreach \i in {0,...,5}
\node[state, on chain,fill=gray!50!white] (\i) {\i};
\foreach
\i/\txt in {0/$\frac{8}{27}$,1/$\frac{8}{27}$,2/$\frac{8}{27}$,3/$\frac{8}{27}$,4/$\frac{8}{27}$}
\draw let \n1 = { int(\i+1) } in
(\i) edge[line width=.3 mm,bend left,"\txt",color=Periwinkle] (\n1);
\foreach
\i/\txt in {0/$\frac{6}{27}$,1/$\frac{6}{27}$,2/$\frac{6}{27}$,3/$\frac{6}{27}$,4/$\frac{6}{27}$}
\draw let \n1 = { int(\i+1) } in
(\n1) edge[line width=.22 mm,bend left=50,above, "\txt",color=Mahogany,auto=right] (\i);
\foreach
\i/\txt in {0/$\frac{1}{27}$,1/$\frac{1}{27}$,2/$\frac{1}{27}$,3/$\frac{1}{27}$}
\draw let \n1 = { int(\i+2) } in
(\n1) edge[line width=.04 mm,bend left=60,color=RawSienna,"\txt"] (\i);
\foreach \i/\txt in {1/$\frac{12}{27}$,2/$\frac{12}{27}$,3/$\frac{12}{27}$,4/$\frac{12}{27}$,5/$\frac{12}{27}$}
\draw (\i) edge[line width=.44 mm,loop above,color=NavyBlue, "\txt"] (\i);
\draw (0) edge[line width=.44 mm,loop left, color=NavyBlue,"$\frac{12}{27}$"] (0);
\draw[decorate sep={1mm}{4mm},fill] (10,0) -- (12,0);
\end{tikzpicture}
\begin{tikzpicture}
\draw (4,-1.8) node
{\begin{minipage}{0.8\textwidth}
\begin{center}\small
\textbf{Uniform type II Markov chain with two sinks}
\end{center}
\end{minipage}};
\end{tikzpicture}
\end{center}
The systems of weights $(W_1,W_2,\d x)$ and $(W_1,\hat W_2,\d x)$ corresponding to the semi-stochastic uniform tuples
$\big(\frac 4 3, \frac 5 3, 2, \frac 5 2 \big)$ and $\big( \frac 5 3, \frac 4 3, 2, \frac5 2 \big)$ are, respectively,
\begin{align*}
W_1 (x) & = \frac{81\sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x}(\vartheta_+(x)-\vartheta_-(x))}{16\pi },&
W_2 (x) & = \frac{243\sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x}(\vartheta_+^4(x)-\vartheta_-^4(x))}{160 \pi } , &
\hat W_2 (x) & = \frac{243 \sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x^2}(\vartheta_+^2(x)-\vartheta_-^2(x))}{64\pi } .
\end{align*}
Solving \eqref{eq:rho}
we find that $ \alpha = \frac{3}{5}$ and $\beta = \frac{2}{5}$.
After some simplification we obtain
\begin{align*}
\frac{3}{5} W_1 +\frac{2}{5} \hat W_2 & =243\sqrt{3} \sqrt[\leftroot{2}\uproot{2} 3]{x}\,
\frac{(\vartheta_+^3(x)-\vartheta_-^3(x))\big( 2 +\sqrt[\leftroot{2}\uproot{2} 3]{x}(\vartheta_+^3(x)+\vartheta_-^3(x))\big) }{160 \pi \sqrt[\leftroot{2}\uproot{2} 3]{x} } ,
\end{align*}
but the numerator can be written as
$ ( \vartheta_- - \vartheta_+) ((\vartheta_+ + \vartheta_-) (\vartheta_+^2 - \vartheta_+ \vartheta_- +
\vartheta_-^2) + (\vartheta_- + \vartheta_+) (\vartheta_+ \vartheta_-)) (\vartheta_+ \vartheta_-)
$,
and, consequently, we obtain
\begin{align*}
\frac{3}{5} W_1 +\frac{2}{5} \hat W_2 & =\frac{243 \sqrt{3}}{160 \pi } \vartheta_+ \vartheta_- \big( \vartheta_+^4 -\vartheta_-^4 \big) = W_2,
\end{align*}
i.e., $\hat W_2=-\frac{3}{2}W_1+\frac{5}{2}W_2$, and both sets of weights are in the same \emph{gauge} class.
The type II multiple orthogonal polynomials are \cite{lima_loureiro}
\begin{align}\label{eq:B_3F2_6}
B^{(n)}(x)= \frac{(n+2) (n+1) }{2}
\left( -\kappa \right)^n
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+4}{2} , \; \frac{n+3}{2} \\[3pt]\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};x\right] , && n \in \mathbb{N} .
\end{align}
Following arguments similar to those used in the deduction of \eqref{eq:Bn(1)}, we obtain the following values at unity of the type II multiple orthogonal polynomials and of the generalized hypergeometric functions
\begin{align*}
B^{(n)}(1)&= \frac{8^{n+1} (9 n+10)+(-1)^n}{3 \times 27^{ n+1} }, &
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+4}{2} , \; \frac{n+3}{2} \\[3pt]\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};1\right]&
=\frac{2(1- (9 n+10)(-8)^{n+1})}{81 (n+1) (n+2)4^n}, &
n&\in\mathbb{N}.
\end{align*}
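As before, the displayed values at unity can be confirmed by summing the terminating series in \eqref{eq:B_3F2_6} exactly; the value $\kappa=4/27$ used below is again our assumption, consistent with the entries $3\kappa=12/27$ of the matrices above.

```python
from fractions import Fraction as Fr

def poch(a, k):  # Pochhammer symbol (a)_k
    p = Fr(1)
    for i in range(k):
        p *= a + i
    return p

def f32(a, b, x, kmax):  # terminating 3F2 evaluated as a finite sum
    return sum(poch(a[0], k) * poch(a[1], k) * poch(a[2], k) * x**k
               / (poch(b[0], k) * poch(b[1], k) * poch(Fr(1), k))
               for k in range(kmax + 1))

kappa = Fr(4, 27)  # assumed value, consistent with 3*kappa = 12/27 above
for n in range(9):
    F1 = f32([Fr(-n), Fr(n + 4, 2), Fr(n + 3, 2)],
             [Fr(4, 3), Fr(5, 3)], Fr(1), n)
    Bn1 = Fr((n + 2)*(n + 1), 2) * (-kappa)**n * F1   # eq:B_3F2_6 at x = 1
    assert Bn1 == Fr(8**(n + 1)*(9*n + 10) + (-1)**n, 3*27**(n + 1))
    assert F1 == Fr(2*(1 - (9*n + 10)*(-8)**(n + 1)),
                    81*(n + 1)*(n + 2)*4**n)
print("values at unity verified for n = 0,...,8")
```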
The stochastic factorization of the type I Markov matrix $P_I=P_{I}^L P_{I,2}^U P_{I,1}^U$ is
\begin{align*}P_{I}^L&=\left(
\begin{NiceMatrix}[columns-width = 0.1cm]
1 & 0 &\Cdots & & \\
\frac{1}{3} & \frac{2}{3} & \Ddots & & \\[5pt]
0 & \frac{4}{9} & \frac{5}{9} & & \\
\Vdots & \Ddots & \frac{1}{2} & \frac{1}{2} & \\
&&\Ddots&\Ddots&\Ddots
\end{NiceMatrix}
\right), & P_{I,2}^U&= P_{I,1}^U= \left(\begin{NiceMatrix}[columns-width =auto]
\frac{2}{3} & \frac{1}{3} & 0 & \Cdots& & \\
0 & \frac{2}{3} & \frac{1}{3}& \Ddots & & \\
\Vdots& \Ddots& \frac{2}{3} & \frac{1}{3}& & \\[5pt]
& & &\frac{2}{3} & \frac{1}{3}& \\
& & & & \Ddots & \Ddots
\end{NiceMatrix}\right),
\end{align*}
with two uniform pure birth factors.
\subsection{Chain of Christoffel transformations. Contiguous relations for $_3F_2$}
Using the scaling transformations in Remark \ref{rem:scaling} and the Christoffel transformations described in Theorem \ref{teo:Christoffel} we consider the permuting Christoffel transformations
$ (W_1,W_2)\xrightarrow[]{ C_\alpha} (W_2, \alpha x W_1)$,
among the stochastic uniform weights and the semi-stochastic uniform weights.
We will also consider the square of these transformations, which we call basic Christoffel transformations, that is,
$ (W_1,W_2)\xrightarrow[]{ C_{\alpha_1,\alpha_2}} (\alpha_1 x W_1,\alpha_2 x W_2 )$.
By inspection of the previous results we~get:
\begin{teo}[Christoffel chains for uniform tuples]\label{teo:Christoffel chains for uniform tuples}
\begin{enumerate}
\item The stochastic uniform weights are related through the following chain of permuting Christoffel transformations
\begin{align}\label{eq:Christoffel_chain_1}
\Big(\frac{2}{3},\frac{1}{3}, \frac{1}{2},1\Big)\xrightarrow[]{ C_{\frac{9}{4}}}
\Big(\frac{4}{3},\frac{2}{3}, 1,\frac{3}{2}\Big)\xrightarrow[]{ C_{\frac{27}{16}}}
\Big(\frac{5}{3},\frac{4}{3}, \frac{3}{2},{2}\Big).
\end{align}
\item The semi-stochastic uniform weights are connected by the following chain of permuting Christoffel transformations
\begin{align}\label{eq:Christoffel_chain_2}
\Big(\frac{2}{3},\frac{1}{3}, 1,\frac{3}{2}\Big)\xrightarrow[]{ C_{\frac{27}{4}}}
\Big(\frac{4}{3},\frac{2}{3}, \frac{3}{2},2\Big)\xrightarrow[]{ C_{\frac{27}{8}}}
\Big(\frac{5}{3},\frac{4}{3},
2,\frac{5}{2}\Big).
\end{align}
\item
For the stochastic uniform weights we have the basic Christoffel transformations
\begin{subequations}\label{eq:Christoffel_basic_stochastic}
\begin{gather}\label{eq:Christoffel_basic_1}
\Big(\frac{2}{3},\frac{1}{3}, \frac{1}{2},1\Big)\xrightarrow[]{ C_{\frac{9}{4},\frac{27}{16}}}
\Big(\frac{5}{3},\frac{4}{3}, \frac{3}{2},{2}\Big), \\\label{eq:Christoffel_basic_2}
\Big(\frac{1}{3},\frac{2}{3}, \frac{1}{2},1\Big)\xrightarrow[]{ C_{\frac{9}{4},\frac{27}{10}}}
\Big(\frac{4}{3},\frac{5}{3}, \frac{3}{2},{2}\Big).
\end{gather}
\end{subequations}
\item
For the semi-stochastic uniform weights we have the basic Christoffel transformations
\begin{subequations}\label{eq:Christoffel_basic_semistochastic}
\begin{gather}\label{eq:Christoffel_basic_3}
\Big(\frac{2}{3},\frac{1}{3}, 1, \frac{3}{2}\Big)\xrightarrow[]{ C_{\frac{27}{4},\frac{27}{8}}}
\Big(\frac{5}{3},\frac{4}{3}, 2,\frac{5}{2}\Big),\\\label{eq:Christoffel_basic_4}
\Big(\frac{2}{3},\frac{1}{3}, 1, \frac{3}{2}\Big)\xrightarrow[]{ C_{\frac{27}{4},\frac{27}{5}}}
\Big(\frac{4}{3},\frac{5}{3}, 2,\frac{5}{2}\Big).
\end{gather}
\end{subequations}
\end{enumerate}
\end{teo}
\begin{rem}
\begin{enumerate}
\item Notice that if we compose the permuting Christoffel transformations in \eqref{eq:Christoffel_chain_1} we get \eqref{eq:Christoffel_basic_1}, and composing those in \eqref{eq:Christoffel_chain_2} we get \eqref{eq:Christoffel_basic_3}.
\item The basic Christoffel transformations \eqref{eq:Christoffel_basic_2} and \eqref{eq:Christoffel_basic_4} connect uniform tuples lying in the same \emph{gauge} class as the tuples in \eqref{eq:Christoffel_basic_1} and \eqref{eq:Christoffel_basic_3}, respectively. However, we have not been able to find the corresponding permuting Christoffel chains.
\item Using the \emph{gauge} transformations and the Christoffel chains we are able to connect all the stochastic uniform tuples; the same happens for the semi-stochastic tuples.
\item The chains connect the stochastic uniform tuples and semi-stochastic uniform tuples separately. So far, we have not been able to see a connection between these two sets of uniform tuples.
\item The Christoffel formulas in Theorems \ref{teo:Christoffel} and \ref{teo:Basic Christoffel formulas} can be used to connect the multiple orthogonal polynomials corresponding to different uniform tuples.
\end{enumerate}
\end{rem}
In the theory of hypergeometric functions, relations among hypergeometric functions whose parameters are shifted by integers are known as contiguous relations; see \cite{Andrews,Rainville,Bailey}, and particularly \cite[\S 7]{Rainville0}.
From the previous Christoffel transformations, and using the explicit form of the type II multiple orthogonal polynomials, we get some nontrivial relations among different instances of the generalized hypergeometric function $\tensor[_3]{F}{_2}$. As we will see, these six relations happen to be contiguous relations for $\tensor[_3]{F}{_2}$.
First, we obtain contiguous relations involving three hypergeometric functions.
\begin{pro}[Three terms $_3F_2$ contiguous relations]
For $n\in\mathbb{N}$, the following relations are fulfilled
\begin{align*}
\frac{ (n+1) (3 n+2)x}{6\kappa}
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} ,\; \frac{n+2}{2} \\[3pt]
\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};x\right]+
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n-1,\; \frac{n+2}{2} , \;\frac{n+1}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] -
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] &=0,\\
\frac{ (n+1) ( n+2)x}{2\kappa}
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+4}{2} , \; \frac{n+3}{2} \\[3pt]\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};x\right]+
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n-1,\; \frac{n+2}{2} , \; \frac{n+3}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right]-
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right]&=0.
\end{align*}
\end{pro}
\begin{proof}
From Equations \eqref{eq:B_3F2_1} and \eqref{eq:B_3F2_4} we get
for the uniform tuples $ \big(\frac{2}{3},\frac{1}{3}, \frac{1}{2},1\big)$
and $ \big(\frac{2}{3},\frac{1}{3}, 1,\frac{3}{2}\big)$ that
$ B^{(n)}(0)=3(-\kappa)^n$ and $B^{(n)}(0)=(-\kappa)^n$,
respectively. Then, Theorem \ref{teo:Basic Christoffel formulas}, i.e., $x B^{(n)}_{\underline{\vec w}}(x)=B_{\vec w}^{(n+1)}(x)
-\frac{B_{\vec w}^{(n+1)}(0)}{B_{\vec w}^{(n)}(0)}B_{\vec w}^{(n)}(x)$,
together with Equations \eqref{eq:B_3F2_3} and \eqref{eq:B_3F2_6}, gives the result.
\end{proof}
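Both relations can also be tested numerically in exact arithmetic at a rational sample point. The sketch below assumes $\kappa=4/27$, the value consistent with the entries $3\kappa=12/27$ of the matrices in this section; the sample point $x$ is arbitrary.

```python
from fractions import Fraction as Fr

def poch(a, k):  # Pochhammer symbol (a)_k
    p = Fr(1)
    for i in range(k):
        p *= a + i
    return p

def f32(a, b, x, kmax):  # terminating 3F2; kmax bounds the nonzero terms
    return sum(poch(a[0], k) * poch(a[1], k) * poch(a[2], k) * x**k
               / (poch(b[0], k) * poch(b[1], k) * poch(Fr(1), k))
               for k in range(kmax + 1))

kappa, x = Fr(4, 27), Fr(7, 5)  # kappa assumed; x an arbitrary rational point
for n in range(8):
    lhs1 = (Fr((n + 1)*(3*n + 2)) * x / (6*kappa)
            * f32([Fr(-n), Fr(n + 3, 2), Fr(n + 2, 2)], [Fr(4, 3), Fr(5, 3)], x, n)
            + f32([Fr(-n - 1), Fr(n + 2, 2), Fr(n + 1, 2)], [Fr(1, 3), Fr(2, 3)], x, n + 1)
            - f32([Fr(-n), Fr(n + 1, 2), Fr(n, 2)], [Fr(1, 3), Fr(2, 3)], x, n))
    lhs2 = (Fr((n + 1)*(n + 2)) * x / (2*kappa)
            * f32([Fr(-n), Fr(n + 4, 2), Fr(n + 3, 2)], [Fr(4, 3), Fr(5, 3)], x, n)
            + f32([Fr(-n - 1), Fr(n + 2, 2), Fr(n + 3, 2)], [Fr(1, 3), Fr(2, 3)], x, n + 1)
            - f32([Fr(-n), Fr(n + 1, 2), Fr(n + 2, 2)], [Fr(1, 3), Fr(2, 3)], x, n))
    assert lhs1 == 0 and lhs2 == 0
print("both three-term relations vanish for n = 0,...,7")
```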
Second, we obtain contiguous relations involving four hypergeometric functions.
\begin{pro}[Four terms $_3F_2$ contiguous relations]
For $n\in\mathbb{N}$, the following relations are satisfied
\begin{gather} \label{eq:hyper_relation_4_1}
\frac{(3 n+1)x }{\kappa}
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right]
+ 3 \, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n-1,\; \frac{n+2}{2} , \;\frac{n+1}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right]
-6\,\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right]
+3\,\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n+1,\; \frac{n}{2} , \;\frac{n-1}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right]=0,\\
\label{eq:hyper_relation_4_2} \begin{multlined}[t][.9\textwidth]
\frac{(n+1) (3 n+2)x}{2\kappa}
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} ,\; \frac{n+2}{2} \\[3pt]
\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};x\right]
+(3 n+4)
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n-1,\; \frac{n+2}{2} , \;\frac{n+3}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right]\\-2 (3 n+1)
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \;\frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right] + (3 n-2)
\, \tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n+1,\; \frac{n}{2} , \;\frac{n+1}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right]=0,
\end{multlined}\\
\label{eq:hyper_relation_4_3} \frac{(n+1) x}{\kappa}
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right]
+
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n-1,\; \frac{n+2}{2} , \; \frac{n+3}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] -2 \,
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+1}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] +
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n+1,\; \frac{n}{2} , \; \frac{n+1}{2} \\[3pt]
\frac{1}{3},\;\frac{2}{3}\end{NiceArray}};x\right] =0,
\\
\label{eq:hyper_relation_4_4}
\begin{multlined}[t][.9\textwidth]
\frac{(n+1)(n+2) x }{2\kappa}
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+4}{2} , \; \frac{n+3}{2} \\[3pt]\frac{4}{3},\;\frac{5}{3}\end{NiceArray}};x\right]
+(n+2) \,
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n-1,\; \frac{n+4}{2} , \; \frac{n+3}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right]\\ -2 (n+1) \,
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n,\; \frac{n+3}{2} , \; \frac{n+2}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right] + n \,
\tensor[_3]{F}{_2}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-n+1,\; \frac{n+2}{2} , \; \frac{n+1}{2} \\[3pt]
\frac{2}{3},\;\frac{4}{3}\end{NiceArray}};x\right]=0.
\end{multlined}
\end{gather}
\end{pro}
\begin{proof}
For the uniform tuples $\big(\frac{2}{3},\frac{1}{3}, \frac{1}{2},1\big)$, $\big(\frac{4}{3},\frac{2}{3}, 1,\frac{3}{2}\big)$,
$\big(\frac{2}{3},\frac{1}{3}, 1,\frac{3}{2}\big)$ and $\big(\frac{4}{3},\frac{2}{3}, \frac{3}{2},2\big)$, we proceed as we did
for \eqref{eq:Bn(1)}, but evaluating at the origin instead of at unity, using the type I polynomials instead of the type II, and using the dual recurrences; we get
\begin{align}\label{eq:A10}
A_1^{(n)}(0) &= \frac{1}{(-\kappa)^n}.
\end{align}
From Equations \eqref{eq:permuting_Christoffel} and \eqref{eq:Jacobi_Jacobi_Piñeiro_uniform} we get
\begin{align}
x B^{(n)}_{\underline{\vec w}}(x)
=B_{\vec w}^{(n+1)}(x)
+\bigg(\frac{A_{1,\vec w}^{(n-1)}(0)}{A^{(n)}_{1,\vec w}(0)} +3\kappa\bigg)B_{\vec w}^{(n)}(x)-\frac{A_{1,\vec w}^{(n+1)}(0)}{A^{(n)}_{1,\vec w}(0)}\kappa^3B_{\vec w}^{(n-1)}(x),
\end{align}
which together with \eqref{eq:A10} leads to
\begin{align}\label{eq:permuting_Christoffel_uniform}
x B^{(n)}_{\underline{\vec w}}(x)
= B_{\vec w}^{(n+1)}(x) +2\kappa B_{\vec w}^{(n)}(x) +\kappa^2 B_{\vec w}^{(n-1)}(x).
\end{align}
From \eqref{eq:permuting_Christoffel_uniform}, \eqref{eq:B_3F2_1} and \eqref{eq:B_3F2_2} we conclude \eqref{eq:hyper_relation_4_1}; Equations
\eqref{eq:permuting_Christoffel_uniform}, \eqref{eq:B_3F2_2} and \eqref{eq:B_3F2_3} lead to \eqref{eq:hyper_relation_4_2};
recalling \eqref{eq:permuting_Christoffel_uniform}, \eqref{eq:B_3F2_4} and \eqref{eq:B_3F2_5} we obtain \eqref{eq:hyper_relation_4_3}; and finally, from \eqref{eq:permuting_Christoffel_uniform}, \eqref{eq:B_3F2_5} and \eqref{eq:B_3F2_6} we get \eqref{eq:hyper_relation_4_4}.
\end{proof}
\subsection{Type I multiple orthogonal polynomials. Explicit expressions}
In contrast with the type II multiple orthogonal polynomials, for which we have expressions in terms of the generalized hypergeometric function $\tensor[_3]{F}{_2}$, the best expressions we have for the type I multiple orthogonal polynomials are those provided in Corollary \ref{cor:expressions_type_I}, which are computationally rather more involved than the aforementioned hypergeometric ones.
Fortunately, as we will now show, for the uniform tuples much simpler expressions do exist for these type I multiple orthogonal polynomials. The idea is to use generating functions for recurrence relations with constant coefficients.
For uniform tuples, the order-four homogeneous linear recurrence relation satisfied
by the type I multiple orthogonal polynomials discussed here reads as follows:
\begin{align}
v_n+3\kappa v_{n+1}+3\kappa^2 v_{n+2}+\kappa^3v_{n+3}&= x v_{n+1}, && n \in \mathbb N_0.
\label{eq:uniform_recurrenceI}
\end{align}
Notice that, if we denote
\begin{align*}
\kappa_*&:=\frac{1}{\kappa}, &x_*&:=\kappa_*^3x,
\end{align*}
recurrence \eqref{eq:uniform_recurrenceI} can be written as
\begin{align}\label{eq:uniform_recurrenceI'}
\kappa_*^3 v_n+3\kappa_*^2 v_{n+1}+3\kappa_*v_{n+2}+v_{n+3}&= x_* v_{n+1}, && n \in \mathbb N_0 .
\end{align}
Let us consider the corresponding generating function
\begin{align*} V(t,x)&:=\sum_{n=0}^\infty v_n(x)t^n.
\end{align*}
\begin{pro}\label{pro:V}
The generating function is explicitly given by the rational function
\begin{align}
\label{eq:V}
V&=\frac{(v_2+3\kappa_*v_1+(3\kappa_*^2-x_*)v_0)t^2+(v_1+3\kappa_* v_0)t+v_0}{(1+\kappa_* t)^3-x_*t^2}.
\end{align}
\end{pro}
\begin{proof}
To obtain \eqref{eq:V} we multiply \eqref{eq:uniform_recurrenceI'} by $t^{n+3}$, so that
\begin{align*}
(\kappa_* t)^3 t^n v_n+3 (\kappa_* t)^2 t^{n+1} v_{n+1} + 3 (\kappa_* t) t^{n+2}v_{n+2} + t^{n+3}v_{n+3} & = x_*t^2 t^{n+1}v_{n+1},
\end{align*}
and summing over $n\in\mathbb{N}_0$, for the generating function we get
\begin{align*}
(\kappa_* t)^3 V+3 (\kappa_* t)^2 (V-v_0)+3 (\kappa_* t) (V-v_0-v_1t)+V-v_0-v_1t-v_2t^2&= x_*t^2 (V-v_0),
\end{align*}
so that \eqref{eq:V} immediately follows.
\end{proof}
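As a sanity check, outside the argument itself, \eqref{eq:V} can be verified numerically: iterating the recurrence \eqref{eq:uniform_recurrenceI'} from arbitrary seeds must reproduce the Taylor coefficients of the rational function. A minimal Python sketch with exact rational arithmetic (the values of $\kappa_*$, $x_*$ and the seeds $v_0,v_1,v_2$ below are arbitrary test choices, not from the paper):

```python
# Check: the Taylor coefficients of V coincide with the sequence generated
# by the recurrence; kappa_*, x_* and the seeds are arbitrary rational values.
from fractions import Fraction

ks = Fraction(1, 3)                            # kappa_*
xs = Fraction(2, 5)                            # x_*
v = [Fraction(1), Fraction(-2), Fraction(3)]   # v_0, v_1, v_2
for n in range(20):
    # v_{n+3} = x_* v_{n+1} - kappa_*^3 v_n - 3 kappa_*^2 v_{n+1} - 3 kappa_* v_{n+2}
    v.append(xs*v[n+1] - ks**3*v[n] - 3*ks**2*v[n+1] - 3*ks*v[n+2])

# numerator and denominator of V as in the proposition
N = [v[0], v[1] + 3*ks*v[0], v[2] + 3*ks*v[1] + (3*ks**2 - xs)*v[0]]
D = [Fraction(1), 3*ks, 3*ks**2 - xs, ks**3]

# long division: Taylor coefficients c_n of N(t)/D(t)
c = []
for n in range(len(v)):
    s = (N[n] if n < len(N) else Fraction(0))
    s -= sum(D[j]*c[n-j] for j in range(1, min(n, 3) + 1))
    c.append(s / D[0])

assert c == v
```

Replacing the seeds by the initial data of a concrete type I sequence reproduces that sequence.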
To recover the $v_n(x)$ we need to expand Equation \eqref{eq:V} in a power series in $t$.
\begin{lemma}[Power series expansions]\label{lem:trinomial}
For $|3\kappa_* t+(3\kappa_*^2-x_*) t^2+\kappa_*^3t^3|<1$, we find
\begin{align*}
\frac{1}{{(1+\kappa_* t)^3-x_*t^2}}&=\sum_{n=0}^\infty
e_n(x)t^n , &e_n(x)&:=(-\kappa)^{-n}\hspace*{-11pt}\sum_{m_1+2m_2+3m_3=n} \hspace*{-10pt}(-1)^{m_2} \binom{m_1+m_2+m_3}{m_1,m_2,m_3}3^{m_1+m_2}\Big(1-\frac{ x}{3\kappa}\Big)^{m_2}.
\end{align*}
Here $\binom{k}{k_1,k_2,k_3}=\frac{k!}{k_1!k_2!k_3!}$ is the trinomial coefficient.
\end{lemma}
\begin{proof}
As
\begin{align*}
{(1+\kappa_* t)^3-x_*t^2}= 1+3\kappa_* t+3\Big(1-\frac{x_*}{3\kappa_*^2}\Big) \kappa_*^2t^2+\kappa_*^3t^3,
\end{align*}
for $t,x$ such that $|3\kappa_* t+(3\kappa_*^2-x_*) t^2+\kappa_*^3t^3|<1$, we can write
\begin{align*}
\frac{1}{(1+\kappa_* t)^3-x_*t^2}= \frac{1}{1+3\kappa_* t+3\big(1-\frac{x_*}{3\kappa_*^2}\big)\kappa_*^2 t^2+\kappa_*^3t^3}=\sum_{m=0}^\infty \Big(-3\kappa_* t-3\Big(1-\frac{x_*}{3\kappa_*^2}\Big)\kappa_*^2t^2-\kappa_*^3t^3\Big)^m,
\end{align*}
and the multinomial theorem gives the result.
\end{proof}
Using the theory of linear Diophantine equations we find:
\begin{pro}\label{pro:the polynomials e_n}
The polynomials $e_n$, for $n=3p,3p+1,3p+2$ with $p$ a nonnegative integer, are given by
\begin{align*}
e_{3p}&:= \frac{1}{(-\kappa)^{3p}}\sum_{l=0}^p 27^l\sum_{k=0}^{\lfloor\frac{3l}{2}\rfloor}\frac{(-1)^k}{3^{k}} \binom{p+2l-k}{3l-2k,k,p-l}\Big(1-\frac{ x}{3\kappa}\Big)^{k},\\
e_{3p+1}&:=\frac{3}{(-\kappa)^{3p+1}} \sum_{l=0}^p27^l\sum_{k=0}^{\lfloor\frac{3l+1}{2}\rfloor}\frac{(-1)^k}{3^{k}} \binom{p+2l+1-k}{3l+1-2k,k,p-l}\Big(1-\frac{ x}{3\kappa}\Big)^{k},\\
e_{3p+2}&:=\frac{9}{(-\kappa)^{3p+2}} \sum_{l=0}^p27^l\sum_{k=0}^{\lfloor\frac{3l}{2}\rfloor+1}\frac{(-1)^k}{3^{k}} \binom{p+2l+2-k}{3l+2-2k,k,p-l}\Big(1-\frac{ x}{3\kappa}\Big)^{k}.
\end{align*}
\end{pro}
\begin{proof}
The restriction in the sum
$ m_1+2m_2+3m_3=n$
can be understood as a linear Diophantine equation for three unknown nonnegative integers $(m_1,m_2,m_3)$. As $\gcd(1,2,3)=1$, it can be solved as follows, depending on whether $n=3p$, $3p+1$ or $3p+2$ with $p$ a nonnegative integer:
\begin{align*}
n&=3p: &\begin{pNiceMatrix}
m_1\\m_2\\m_3
\end{pNiceMatrix}&=\begin{pNiceMatrix}
3l-2k\\k\\p-l
\end{pNiceMatrix}, & l&\in\{0,1,\dots,p\}, & k\in\Big\{0,1,\dots,\Big\lfloor\frac{3l}{2}\Big\rfloor\Big\},\\
n&=3p+1: &\begin{pNiceMatrix}
m_1\\m_2\\m_3
\end{pNiceMatrix}&=\begin{pNiceMatrix}
3l+1-2k\\k\\p-l
\end{pNiceMatrix}, & l&\in\{0,1,\dots,p\}, & k\in\Big\{0,1,\dots,\Big\lfloor\frac{3l+1}{2}\Big\rfloor\Big\},\\
n&=3p+2: &\begin{pNiceMatrix}
m_1\\m_2\\m_3
\end{pNiceMatrix}&=\begin{pNiceMatrix}
3l+2-2k\\k\\p-l
\end{pNiceMatrix}, & l&\in\{0,1,\dots,p\}, & k\in\Big\{0,1,\dots,\Big\lfloor\frac{3l}{2}\Big\rfloor+1\Big\}.
\end{align*}
\end{proof}
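As a numerical cross-check (not part of the proof), the three case formulas can be compared with the defining multinomial sum of Lemma \ref{lem:trinomial}; a Python sketch with arbitrary rational test values of $\kappa$ and $x$:

```python
# Compare the case-split formulas for e_{3p}, e_{3p+1}, e_{3p+2} with the
# defining multinomial sum; kappa and x are arbitrary rational test values.
from fractions import Fraction
from math import factorial

kappa, x = Fraction(2), Fraction(1, 3)
u = 1 - x / (3 * kappa)

def multinom(k, k1, k2, k3):       # trinomial coefficient k!/(k1! k2! k3!)
    return factorial(k) // (factorial(k1) * factorial(k2) * factorial(k3))

def e_direct(n):                   # multinomial sum over m1 + 2 m2 + 3 m3 = n
    s = Fraction(0)
    for m3 in range(n // 3 + 1):
        for m2 in range((n - 3 * m3) // 2 + 1):
            m1 = n - 2 * m2 - 3 * m3
            s += (-1)**m2 * multinom(m1+m2+m3, m1, m2, m3) * Fraction(3)**(m1+m2) * u**m2
    return s / (-kappa) ** n

def e_cases(n):                    # the three formulas, n = 3p + r
    p, r = divmod(n, 3)
    s = Fraction(0)
    for l in range(p + 1):
        kmax = (3*l + r) // 2 if r < 2 else 3*l // 2 + 1
        for k in range(kmax + 1):
            s += 27**l * (-1)**k * Fraction(1, 3)**k \
                 * multinom(p + 2*l + r - k, 3*l + r - 2*k, k, p - l) * u**k
    return [1, 3, 9][r] * s / (-kappa) ** n

assert all(e_direct(n) == e_cases(n) for n in range(12))
```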
\begin{rem}
The first twelve polynomials are
\begin{gather*}
\begin{aligned}
e_0 &=1,& e_1&=-\frac{3}{\kappa},& e_2&=\frac{6 \kappa+x}{\kappa^3},&e_3&=-\frac{2 (5 \kappa+3 x)}{\kappa^4},&
e_4&=\frac{15 \kappa^2+21\kappa x +x^2}{\kappa^6},&e_5&=-\frac{21 \kappa^2+56\kappa x +9 x^2}{\kappa^7},
\end{aligned}\\
\begin{aligned}
e_6&=\frac{28 \kappa^3+126 \kappa^2 x+45 \kappa x^2 +x^3}{\kappa^9},&e_7&=-\frac{3 \left(12 \kappa^3+84 \kappa^2 x+55 \kappa x^2+4 x^3\right)}{\kappa^{10}},&e_8&=\frac{45 \kappa^4+462 \kappa^3x +495 \kappa^2 x^2+78 \kappa x^3 +x^4}{\kappa^{12}},
\end{aligned}\\
\begin{aligned}
e_9&=-\frac{55 \kappa^4+792 \kappa^3 x +1287 \kappa^2 x^2 +364 \kappa x^3 +15 x^4}{\kappa^{13}},&
e_{10}&=\frac{66 \kappa^5+1287 \kappa^4 x+3003 \kappa^3 x^2+1365 \kappa^2 x^3+120 \kappa x^4 +x^5}{\kappa^{15}},
\end{aligned}\\
e_{11}=-\frac{78 \kappa^5+2002 \kappa^4 x+6435\kappa^3 x^2 +4368 \kappa^2 x^3+680\kappa x^4 +18 x^5}{\kappa^{16}}.
\end{gather*}
\end{rem}
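The tabulated expressions can be spot-checked against the multinomial sum at sample points; a brief Python sketch (the sample values of $\kappa$ and $x$ below are arbitrary choices):

```python
# Spot-check e_0, ..., e_5 of the remark against the multinomial sum of the
# lemma, evaluated at arbitrary rational sample points.
from fractions import Fraction
from math import factorial

def e_direct(n, kappa, x):
    s = Fraction(0)
    for m3 in range(n // 3 + 1):
        for m2 in range((n - 3 * m3) // 2 + 1):
            m1 = n - 2 * m2 - 3 * m3
            mult = factorial(m1+m2+m3) // (factorial(m1) * factorial(m2) * factorial(m3))
            s += (-1)**m2 * mult * Fraction(3)**(m1+m2) * (1 - x / (3*kappa))**m2
    return s / (-kappa) ** n

table = [                                   # tabulated e_0, ..., e_5
    lambda k, x: Fraction(1),
    lambda k, x: -3 / k,
    lambda k, x: (6*k + x) / k**3,
    lambda k, x: -2 * (5*k + 3*x) / k**4,
    lambda k, x: (15*k**2 + 21*k*x + x**2) / k**6,
    lambda k, x: -(21*k**2 + 56*k*x + 9*x**2) / k**7,
]
for kappa, x in [(Fraction(2), Fraction(1, 3)), (Fraction(-3, 2), Fraction(5))]:
    for n, f in enumerate(table):
        assert e_direct(n, kappa, x) == f(kappa, x)
```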
\begin{lemma}\label{lem:A}
For $n\in\mathbb{N}$, the type I multiple orthogonal polynomials are given by
\begin{align*}
A_1^{(n+2)}(x)&=\Big(A_1^{(2)}(x)+\frac{3}{\kappa} A_1^{(1)}+\frac{3}{\kappa^2} -\frac{x}{\kappa^3}\Big)e_n(x)+\Big(A_1^{(1)}+\frac{3}{\kappa}\Big)e_{n+1}(x)+e_{n+2}(x),\\
A_2^{(n+2)}(x)&=\Big(A_2^{(2)}+\frac{3}{\kappa} A_2^{(1)}\Big)e_n(x)+A_2^{(1)}e_{n+1}(x),
\end{align*}
with initial conditions given by
\begin{center}
\begin{NiceTabular}{|c|c|c|}[rules/color=[Gray]{0.75},
code-before = \cellcolor{Gray!15}{7-2,8-2}\cellcolor{MidnightBlue!15}{5-2,9-2,11-2}\cellcolor{Maroon!15}{3-3,9-3,2-3,8-3}
\cellcolor{Emerald!15}{6-3,12-3,10-3} \cellcolor{Plum!15}{7-3,11-3,13-3} \cellcolor{RedOrange!15}{4-3,5-3}
]\toprule
$(a,b,c,d)$ \rule[-2ex]{0pt}{0pt}
&
$\left\{A_{1}^{(0)},A_1^{(1)},A_1^{(2)}\right\}$
&
$\left\{A_{2}^{(0)},A_2^{(1)},A_2^{(2)}\right\}$ \\\toprule
$\left(\frac{1}{3},\frac{2}{3},\frac{1}{2},1\right) $&
$\left\{1,\frac{27}{2},\frac{6561 x}{64}-\frac{3645}{16}\right\} $&
$\left\{0,-\frac{27}{2},\frac{729}{4} \right\} $\\\midrule
$\left(\frac{2}{3},\frac{1}{3},\frac{1}{2},1\right) $&$\left\{1,-\frac{27}{4},\frac{6561 x}{64}+\frac{729}{16}\right\}$ &$\left\{0,\frac{27}{4},-\frac{729}{8}\right\} $\\\midrule
$ \left(\frac{2}{3},\frac{4}{3},1,\frac{3}{2}\right) $
& $\left\{1,\frac{27}{2},\frac{19683 x}{64}-\frac{5103}{8}\right\}$ & $ \left\{0,-\frac{27}{2},\frac{3645}{8}\right\} $\\\midrule
$\left(\frac{4}{3},\frac{2}{3},1,\frac{3}{2}\right)$ & $\left\{1,-\frac{27}{4},\frac{19683 x}{64}+\frac{729}{16}\right\} $ &
$\left\{0,\frac{27}{4},-\frac{3645}{16}\right\} $\\\midrule
$ \left(\frac{4}{3},\frac{5}{3},\frac{3}{2},2\right) $ &
$ \left\{1,\frac{135}{4},\frac{19683 x}{64}-\frac{3645}{4}\right\}$ & $ \left\{0,-\frac{135}{4},\frac{10935}{16}\right\} $\\\midrule
$ \left(\frac{5}{3},\frac{4}{3},\frac{3}{2},2\right) $ &$ \left\{1,-27,\frac{19683 x}{64}+\frac{5103}{16}\right\}$ &
$ \left\{0,27,-\frac{2187}{4}\right\} $\\
\arrayrulecolor{black!90} \midrule
$\left(\frac{1}{3},\frac{2}{3},1,\frac{3}{2}\right)$
& $ \left\{1,-27,\frac{19683 x}{64}+\frac{5103}{16}\right\}$ & $ \left\{0,27,-\frac{729}{2}\right\} $\\
\arrayrulecolor{gray!75}\midrule
$\left(\frac{2}{3},\frac{1}{3},1,\frac{3}{2}\right) $ & $\left\{1,-\frac{27}{4},\frac{19683 x}{64}+\frac{729}{16}\right\} $ &
$\left\{0,\frac{27}{4},-\frac{729}{8}\right\} $\\
\midrule
$\left(\frac{2}{3},\frac{4}{3},\frac{3}{2},2\right)$
&$\left\{1,-\frac{135}{2},\frac{19683 x}{64}+\frac{5103}{4}\right\} $
&$\left\{0,\frac{135}{2},-\frac{10935}{8}\right\}$\\\midrule
$\left(\frac{4}{3},\frac{2}{3},\frac{3}{2},2\right) $ &$\left\{1,-\frac{27}{4},\frac{19683 x}{64}+\frac{729}{16}\right\}$ &
$\left\{0,\frac{27}{4},-\frac{2187}{16}\right\} $\\\midrule
$\left(\frac{4}{3},\frac{5}{3},2,\frac{5}{2}\right)$ & $\left\{1,-\frac{135}{4},\frac{19683 x}{64}+\frac{2187}{4}\right\}$ &
$\left\{0,\frac{135}{4},-\frac{10935}{16}\right\}$\\ \midrule
$ \left(\frac{5}{3},\frac{4}{3},2,\frac{5}{2}\right)$ & $\left\{1,-\frac{27}{2},\frac{19683 x}{64}+\frac{2187}{16}\right\} $ &
$\left\{0,\frac{27}{2},-\frac{2187}{8}\right\}$
\\\bottomrule
\end{NiceTabular}
\end{center}
\end{lemma}
\begin{proof}
A direct consequence of Proposition \ref{pro:V} and Lemma \ref{lem:trinomial} is
\begin{align*}
v_{n+2}(x)&=\Big(v_2(x)+\frac{3}{\kappa} v_1(x)+\Big(\frac{3}{\kappa^2} -\frac{x}{\kappa^3}\Big)v_0(x)\Big)e_n(x)+\Big(v_1(x)+\frac{3}{\kappa} v_0(x)\Big)e_{n+1}(x)+v_0(x)e_{n+2}(x).
\end{align*}
Hence, applying it to the multiple orthogonal polynomials we get the result. The initial conditions are obtained directly from the Gauss--Borel factorization problem.
\end{proof}
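The solution formula in the proof can also be checked numerically: with arbitrary rational initial data, the sequence generated by the recurrence must agree with the stated combination of the $e_n$. A Python sketch under these assumptions:

```python
# Check v_{n+2} = N2 e_n + N1 e_{n+1} + v_0 e_{n+2}, where N1, N2 are the
# combinations appearing in the proof; all data are arbitrary rationals.
from fractions import Fraction

kappa, x = Fraction(5, 2), Fraction(1, 4)
ks, xs = 1 / kappa, x / kappa**3

v = [Fraction(2), Fraction(-1), Fraction(4)]   # arbitrary initial data
for n in range(15):
    v.append(xs*v[n+1] - ks**3*v[n] - 3*ks**2*v[n+1] - 3*ks*v[n+2])

# e_n: Taylor coefficients of 1/((1 + ks t)^3 - xs t^2)
D = [Fraction(1), 3*ks, 3*ks**2 - xs, ks**3]
e = [Fraction(1)]
for n in range(1, len(v)):
    e.append(-sum(D[j]*e[n-j] for j in range(1, min(n, 3) + 1)))

N2 = v[2] + 3/kappa*v[1] + (3/kappa**2 - x/kappa**3)*v[0]
N1 = v[1] + 3/kappa*v[0]
assert all(v[n+2] == N2*e[n] + N1*e[n+1] + v[0]*e[n+2] for n in range(len(v) - 2))
```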
\begin{rem}
These relations for $n=0$ are identities.
\end{rem}
\begin{rem}
A remarkable fact is that, for different sets of weights, there are sequences of type~I multiple orthogonal polynomials that coincide or are proportional.
In the above table, we have identified such classes with the same color.
Namely, $\left(\frac{4}{3},\frac{2}{3},1,\frac{3}{2}\right)$,
$\left(\frac{2}{3},\frac{1}{3},1,\frac{3}{2}\right) $
and
$\left(\frac{4}{3},\frac{2}{3},\frac{3}{2},2\right) $
have the same sequence
$\big\{A_1^{(n)}\big\}_{n=0}^\infty$, the same happens with
$\left(\frac{5}{3},\frac{4}{3},\frac{3}{2},2\right) $
and
$\left(\frac{1}{3},\frac{2}{3},1,\frac{3}{2}\right)$.
For
$\left(\frac{2}{3},\frac{1}{3},\frac{1}{2},1\right) $
and
$\left(\frac{2}{3},\frac{1}{3},1,\frac{3}{2}\right) $
we have the same sequence
$\big\{A_2^{(n)}\big\}_{n=0}^\infty$
and
$\left(\frac{1}{3},\frac{2}{3},1,\frac{3}{2}\right)$
has the same sequence multiplied by $4$, and
$\left(\frac{1}{3},\frac{2}{3},\frac{1}{2},1\right) $
the same sequence multiplied by $-2$.
For
$\left(\frac{4}{3},\frac{5}{3},\frac{3}{2},2\right) $
and
$\left(\frac{4}{3},\frac{5}{3},2,\frac{5}{2}\right)$
we have opposite sequences
$\big\{A_2^{(n)}\big\}_{n=0}^\infty$,
moreover the sequence for
$\left(\frac{2}{3},\frac{4}{3},\frac{3}{2},2\right)$
is one half of the previous one.
Similarly, the sequences
$\big\{A_2^{(n)}\big\}_{n=0}^\infty$
for
$\left(\frac{4}{3},\frac{2}{3},\frac{3}{2},2\right) $,
$ \left(\frac{5}{3},\frac{4}{3},2,\frac{5}{2}\right)$
and
$\left(\frac{5}{3},\frac{4}{3},\frac{3}{2},2\right) $,
are each obtained by multiplying the previous one by two.
Finally, the~$A_2$ sequence for
$\left(\frac{4}{3},\frac{2}{3},1,\frac{3}{2}\right)$
is $-2$ times the $A_2$ sequence for
$ \left(\frac{2}{3},\frac{4}{3},1,\frac{3}{2}\right) $.
All these symmetries reduce the number of essentially different sequences
$\big\{A_2^{(n)}\big\}_{n=0}^\infty$ to $4$
types.
\end{rem}
\begin{rem}
A similar (but slightly more complicated) construction holds for the type II multiple orthogonal polynomials. However, such a construction provides more involved expressions than the $\tensor[_3]{F}{_2}$ hypergeometric expressions found in \cite{lima_loureiro}.
\end{rem}
\begin{teo}\label{teo:uniform_type_I}
For $n\in\mathbb{N}_0$, the type I multiple orthogonal polynomials corresponding to uniform tuples are given in terms of the polynomials $\{e_n\}_{n=0}^\infty$ in Proposition \ref{pro:the polynomials e_n} in the following table
\begin{center}
\begin{NiceTabular}{|c|c|c|}[rules/color=[Gray]{0.75},
code-before = \cellcolor{Gray!15}{7-2,8-2}\cellcolor{MidnightBlue!15}{5-2,9-2,11-2}\cellcolor{Maroon!15}{3-3,9-3,2-3,8-3}
\cellcolor{Emerald!65!Plum!15}{6-3,12-3,10-3,7-3,11-3,13-3} \cellcolor{RedOrange!15}{4-3,5-3}
]\toprule
$(a,b,c,d)$\rule[-1.2ex]{0pt}{0pt}
&
$A_1^{(n+2)}$
&
$A_2^{(n+2)}$ \\ \toprule
$\left(\frac{1}{3},\frac{2}{3},\frac{1}{2},1\right) $&
$\big(\frac{729}{4}-\frac{6561 x}{32}\big)e_n+\frac{135}{4}e_{n+1}+e_{n+2}$&
$ -\frac{729}{8}e_n -\frac{27}{2}e_{n+1} $\\\midrule
$\left(\frac{2}{3},\frac{1}{3},\frac{1}{2},1\right) $&
$\big(\frac{729}{16}-\frac{6561 x}{32}\big)e_n+\frac{27}{2}e_{n+1}+e_{n+2}$ &
$\frac{729}{16}e_n+\frac{27}{4}e_{n+1}$\\\midrule
$ \left(\frac{2}{3},\frac{4}{3},1,\frac{3}{2}\right) $ &
$-\frac{3645}{16} e_n+\frac{135}{4} e_{n+1}+e_{n+2}$ &
$ \frac{729}{4}e_n -\frac{27}{2}e_{n+1}$\\\midrule
$\left(\frac{4}{3},\frac{2}{3},1,\frac{3}{2}\right)$ &
$\frac{729}{16}e_n+ \frac{27}{2} e_{n+1}+e_{n+2}$ &
$-\frac{729}{8} e_n+\frac{27}{4} e_{n+1} $\\\midrule
$ \left(\frac{4}{3},\frac{5}{3},\frac{3}{2},2\right) $ &
$ -\frac{729}{8}e_n+ 54e_{n+1}+e_{n+2}$ &
$-\frac{135}{4}e_{n+1} $\\\midrule
$ \left(\frac{5}{3},\frac{4}{3},\frac{3}{2},2\right) $ &
$ -\frac{729}{8}e_n -\frac{27}{4}e_{n+1}+e_{n+2}$ &
$ 27e_{n+1}$\\
\arrayrulecolor{black!90} \midrule
$\left(\frac{1}{3},\frac{2}{3},1,\frac{3}{2}\right)$&
$ -\frac{729}{8}e_n -\frac{27}{4}e_{n+1}+e_{n+2}$ &
$ \frac{729}{4}e_n+ 27e_{n+1} $\\
\arrayrulecolor{gray!75}\midrule
$\left(\frac{2}{3},\frac{1}{3},1,\frac{3}{2}\right) $ &
$\frac{729}{16}e_n+\frac{27}{2} e_{n+1}+e_{n+2} $ &
$\frac{729}{16} e_n+\frac{27}{4}e_{n+1}$\\
\midrule
$\left(\frac{2}{3},\frac{4}{3},\frac{3}{2},2\right)$&
$ \frac{729}{16}e_n-\frac{189}{4}e_{n+1} +e_{n+2}$&
$\frac{135}{2} e_{n+1}$\\\midrule
$\left(\frac{4}{3},\frac{2}{3},\frac{3}{2},2\right) $ &
$\frac{729}{16} e_n+\frac{27}{2}e_{n+1}+e_{n+2}$ &
$\frac{27}{4} e_{n+1}$\\\midrule
$\left(\frac{4}{3},\frac{5}{3},2,\frac{5}{2}\right)$ &
$-\frac{27}{2}e_{n+1}+e_{n+2}$ &
$\frac{135}{4} e_{n+1}$\\ \midrule
$ \left(\frac{5}{3},\frac{4}{3},2,\frac{5}{2}\right)$ &
$\frac{27}{4} e_{n+1}+e_{n+2}$ &
$\frac{27}{2} e_{n+1}$
\\\bottomrule
\end{NiceTabular}
\end{center}
\end{teo}
\section{Summation formulas}\label{S:Summations}
In this section we assume \eqref{eq:region_parameters_pochhammer_perfect}, so that the system of weights is perfect, and the moments are given by~\eqref{eq:moments}. For $m\in\mathbb{N}_0$, we study the following moments of the type II polynomials, which we call type II generalized moments:
\begin{align*}
\eta^{(2n)}_{m,1}&:= \int_0^1B^{(2n)}(x)x^{n+m}w_1(x)\d\mu(x),&
\eta^{(2n+1)}_{m,1}&:= \int_0^1B^{(2n+1)}(x)x^{n+m}w_1(x)\d\mu(x),\\
\eta^{(2n)}_{m,2}&:= \int_0^1B^{(2n)}(x)x^{n+m}w_2(x)\d\mu(x),&
\eta^{(2n+1)}_{m,2}&:= \int_0^1B^{(2n+1)}(x)x^{n+m}w_2(x)\d\mu(x).
\end{align*}
These coefficients are key objects in the type II Hermite--Padé approximation problem. In fact,
starting from $\{ B^{(n)} \}$, the multiple orthogonal polynomials of type II on the step-line with respect to a system of weights $w_1$, $w_2$ on $[0,1]$, we define the system of functions of the second kind by
\begin{align*}
f^{(2n)}_1 (z) & = \int_{0}^1 \frac{B^{(2n)} (x)}{z-x} w_1 (x) \, \d x
= \sum_{k=0}^\infty \frac{\eta_{k,1}^{(2n)}}{z^{n+k+1}} , &
f^{(2n+1)}_1 (z) & = \int_{0}^1 \frac{B^{(2n+1)} (x)}{z-x} w_1 (x) \, \d x
= \sum_{k=0}^\infty \frac{\eta_{k+1,1}^{(2n+1)}}{z^{n+k+2}} , \\
f^{(2n)}_2 (z) & = \int_{0}^1 \frac{B^{(2n)} (x)}{z-x} w_2 (x) \, \d x= \sum_{k=0}^\infty \frac{\eta_{k,2}^{(2n)}}{z^{n+k+1}} , &
f^{(2n+1)}_2 (z) & = \int_{0}^1 \frac{B^{(2n+1)} (x)}{z-x} w_2 (x) \, \d x= \sum_{k=0}^\infty \frac{\eta_{k,2}^{(2n+1)}}{z^{n+k+1}} ,
\end{align*}
with the series converging uniformly on compact subsets of $\mathbb C \setminus [0,1]$. In the previous set of equations $n\in\mathbb{N}_0$.
For $n=0$ these functions are
\begin{align*}
\mathscr S_{w_1}(z) = f^{(0)}_1 (z) & = \int_{0}^1 \frac{ w_1 (x)}{z-x} \, \d x ,&
\mathscr S_{w_2}(z) = f^{(0)}_2 (z) & = \int_{0}^1 \frac{ w_2 (x)}{z-x} \, \d x,
\end{align*}
the Stieltjes--Markov transforms of the weights. We can also introduce the polynomials
$\{ P^{(n)}_j \} $, $j =1,2$, given by
\begin{align*}
P^{(n)}_1 (z) & = \int_{0}^1 \frac{B^{(n)} (z)-B^{(n)} (x)}{z-x} w_1 (x) \, \d x , & n &\in \mathbb N _0,&
P^{(n)}_2 (z) & = \int_{0}^1 \frac{B^{(n)} (z)-B^{(n)} (x)}{z-x} w_2 (x) \, \d x , & n &\in \mathbb N ,
\end{align*}
which are polynomials with $\deg P^{(n)}_j<n$, called the associated polynomials with respect to $\{ B^{(n)} \}$ and the system of weights $w_1$, $w_2$ (cf. \cite{nikishin_sorokin}).
Then, for the Stieltjes--Markov transforms of the weights we have the following representation, or Hermite--Padé approximation problem of type~II.
\begin{align*}
B^{(n)} (z) \mathscr S_{w_1}(z) - P^{(n) }_1 (z) & = f^{(n)}_1 (z) , &
B^{(n)} (z) \mathscr S_{w_2}(z)- P^{(n) }_2 (z) & = f^{(n)}_2 (z).
\end{align*}
Thus we have simultaneous rational approximants $\frac{P^{(n) }_j}{B^{(n)}}$ to $\mathscr S_{w_j}(z)$ for $j=1,2$.
Hence, $f^{(n)}_j$ are the remainders of the interpolation conditions defining the type II Hermite--Padé approximants to the Markov functions.
We have already studied the first type II generalized moments, in fact
\begin{align}\label{eq:H_eta}
\eta_{0,1}^{(2n)}&=H_{2n}, & \eta^{(2n+1)}_{0,2}&=H_{2n+1}.
\end{align}
Notice that by orthogonality relations $\eta^{(2n+1)}_{0,1}=0$.
The first two nontrivial moments can be evaluated by means of a Karlsson--Minton summation formula, see \cite{lima_loureiro}; i.e.,
\begin{align}\label{eq:eta_h_pochhammer}
\eta_{0,1}^{(2n)}&=\frac{(2n)!(a)_{2n}(b)_{2n}(d-a)_{n}(d-b)_n}{(c)_{3n}(d)_{3n}(d+n-1)_{2n}}, &
\eta_{0,2}^{(2n+1)}&=\frac{(2n+1)!(a)_{2n+1}(b+1)_{2n}(c-a+1)_n(c-b)_{n+1}}{(c+1)_{3n+1}(c+n)_{2n+1}(d)_{3n+1}}.
\end{align}
Now, using a recent extension of the Karlsson--Minton summation formula due to Karp and Prilepkina, we can perform a similar evaluation for all the remaining nontrivial type II generalized moments.
\begin{pro}[Type II moment summations]\label{pro:summations_Padé}
Let us assume that $c-d\not\in\mathbb{Z}$. Then, the following relations hold~true
\begin{enumerate}
\item
For $m\in\mathbb{N}$ and $(\theta_1,\ldots,\theta_{2m+1})=(c,c+1,\ldots,c+m-1,d-1,d,\ldots,d+m-1)$:
\begin{align}\label{eq:summation_product_2n_k_1}
\eta^{(2n)}_{m,1}=
\frac{(2n)!(a)_{2n}(b)_{2n}}{(c)_{3n}(d)_{3n-1} }
\,
\sum_{l=1}^{2m+1}\frac{(\theta_l-a-m+1)_{n+m}(\theta_l-b-m+1)_{n+m}}{\prod_{i\ne l}(\theta_i-\theta_l) (\theta_l+n)(\theta_l+n+1)_{2n}}.
\end{align}
\item For $m\in\mathbb{N}$ and $(\theta_1,\ldots,\theta_{2m})=(c,c+1,\ldots,c+m-1,d,d+1,\ldots,d+m-1)$:
\begin{align}\label{eq:summation_product_2n+1_k_1}
\eta^{(2n+1)}_{m,1}= - \frac{(2n+1)!(a)_{2n+1}(b)_{2n+1}}{(c)_{3n+1}(d)_{3n+1}}
\sum_{l=1}^{2m}\frac{(\theta_l-a-m+1)_{n+m}(\theta_l-b-m+1)_{n+m}}{\prod_{i\ne l}(\theta_i-\theta_l) (\theta_l+n)(\theta_l+n+1)_{2n+1}}.
\end{align}
\item For $m\in\mathbb{N}_0$ and $(\theta_1,\ldots,\theta_{2m+2})=(c,c+1,\ldots,c+m,d-1,d,\ldots,d+m-1)$:
\begin{align}\label{eq:summation_product_2n_k_2}
\eta^{(2n)}_{m,2}=
- \frac{c}{b}\frac{(2n)!(a)_{2n}(b)_{2n}}{(c)_{3n}(d)_{3n-1}}
\sum_{l=1}^{2m+2}\frac{(\theta_l-a-m+1)_{n+m}(\theta_l-b-m)_{n+m+1}}{\prod_{i\ne l}(\theta_i-\theta_l)(\theta_l+n)(\theta_l+n+1)_{2n}}.
\end{align}
\item For $m\in\mathbb{N}$ and $(\theta_1,\ldots,\theta_{2m+1})=(c,c+1,\ldots,c+m,d,d+1,\ldots,d+m-1)$:
\begin{align}\label{eq:summation_product_2n+1_k_2}
\eta^{(2n+1)}_{m,2}=
\frac{c}{b} \frac{(2n+1)!(a)_{2n+1}(b)_{2n+1}}{(c)_{3n}(d)_{3n+1}}
\sum_{l=1}^{2m+1}\frac{(\theta_l-a-m+1)_{n+m}(\theta_l-b-m)_{n+m+1}}{\prod_{i\ne l}(\theta_i-\theta_l)(\theta_l+n)(\theta_l+n+1)_{2n+1}}.
\end{align}
\end{enumerate}
\end{pro}
\begin{proof}
According to (3.6) in \cite{lima_loureiro} we have
\begin{align}\label{eq:product_2n_k_1}
\int_0^1B^{(2n)}(x)x^kw_1(x)\d\mu(x)&=
\frac{(a)_{2n}(b)_{2n}(a)_k(b)_k}{(c+n)_{2n}(d+n-1)_{2n} (c)_k(d)_k}\,
\tensor[_5]{F}{_4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\;a+k,\;b+k,\; c+n,\;d+n-1 \\a,\;b,\;c+k,\;d+k\end{NiceArray}};1\right],
\\\notag
\int_0^1B^{(2n+1)}(x)x^kw_1(x)\d\mu(x)&= -
\frac{(a)_{2n+1}(b)_{2n+1}(a)_k(b)_k}{(c+n)_{2n+1}(d+n)_{2n+1} (c)_k(d)_k}\,
\tensor[_5]{F}{_4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\;a+k,\;b+k,\; c+n,\;d+n \\a,\;b,\;c+k,\;d+k\end{NiceArray}};1\right],
\\
\notag
\int_0^1B^{(2n)}(x)x^kw_2(x)\d\mu(x)&=
\frac{(a)_{2n}(b)_{2n}(a)_k(b+1)_k}{(c+n)_{2n}(d+n-1)_{2n} (c+1)_k(d)_k}\,
\tensor[_5]{F}{_4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\;a+k,\;b+k+1,\; c+n,\;d+n-1 \\a,\;b,\;c+k+1,\;d+k\end{NiceArray}};1\right],
\\
\notag
\int_0^1B^{(2n+1)}(x)x^kw_2(x)\d\mu(x)&= -
\frac{(a)_{2n+1}(b)_{2n+1}(a)_k(b+1)_k}{(c+n)_{2n+1}(d+n)_{2n+1} (c+1)_k(d)_k}\,
\tensor[_5]{F}{_4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\;a+k,\;b+k+1,\; c+n,\;d+n \\a,\;b,\;c+k+1,\;d+k\end{NiceArray}};1\right].
\end{align}
To evaluate these generalized hypergeometric functions at unity we use the Karp--Prilepkina extension of the Karlsson--Minton summation formulas. For $b_1,b_2\in\mathbb{C}$, $N,p_1,p_2,m_1,m_2\in\mathbb{N}$, let
\begin{align*}
\boldsymbol{\beta}=(b_1,b_1+1,\ldots, b_1+p_1-1,b_2,b_2+1,\ldots, b_2+p_2-1)=(\beta_1,\ldots,\beta_{p_1+p_2})
\end{align*}
with $\beta_i\neq\beta_j$ for $i\neq j$, and assume that $p_1+p_2+N-m_1-m_2>0$. Then,
a particular instance of \cite[Theorem~2.2]{Karp_Prilepkina} gives the following summation formula
\begin{align*}
{}_{5}F_{4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-N,\;b_1,\;b_2,\; f_1+m_1,\;f_2+m_2 \\b_1+p_1,\;b_2+p_2,\;f_1,\;f_2\end{NiceArray}};1\right]=
N!\frac{(b_1)_{p_1}(b_2)_{p_2}}{(f_1)_{m_1}(f_2)_{m_2}}\sum_{l=1}^{p_1+p_2}\frac{(f_1-\beta_l)_{m_1}(f_2-\beta_l)_{m_2}}{\prod_{i\ne l}(\beta_i-\beta_l) \beta_l(\beta_l+1)_{N}}.
\end{align*}
For \eqref{eq:product_2n_k_1} we take $m_1=m_2=k$, $f_1=a$, $f_2=b$, $N=2n$, $b_1=c+n$, $b_2=d+n-1$, $p_1=k-n$ and $p_2=k-n+1$; with
$
\boldsymbol{\beta}=(c+n,c+n+1,\ldots,c+k-1,d+n-1,d+n,\ldots,d+k-1)$.
If we assume $c-d\notin \mathbb{Z}$ then all the $\beta$'s are distinct (although this is not the only case in which these coefficients are all distinct). Moreover,
$
p_1+p_2+N-m_1-m_2=2(k-n)+1+2n-2k=1>0
$
and the Karp--Prilepkina summation formula applies:
\begin{align*}
\tensor[_5]{F}{_{4}}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\;a+k,\;b+k,\; c+n,\;d+n-1 \\a,\;b,\;c+k,\;d+k\end{NiceArray}};1\right]=
(2n)!
\frac{(c+n)_{k-n}(d+n-1)_{k-n+1}}{(a)_{k}(b)_k}
\sum_{l=1}^{2(k-n)+1}\frac{(a-\beta_l)_{k}(b-\beta_l)_{k}}{\prod_{i\ne l}(\beta_i-\beta_l) \beta_l(\beta_l+1)_{2n}},
\end{align*}
and taking $m=k-n$, Equation \eqref{eq:summation_product_2n_k_1} follows.
\begin{comment}
For \eqref{eq:product_2n+1_k_1} we take $m_1=m_2=k$, $f_1=a$, $f_2=b$, $N=2n+1$, $b_1=c+n$, $b_2=d+n$, $p_1=k-n$ and $p_2=k-n$;
; with
\begin{align*}
\boldsymbol{\beta}=(c+n,c+n+1,\ldots,c+k-1,d+n,d+n,\ldots,d+k-1).
\end{align*}
Now,
\begin{align*}
p_1+p_2+N-m_1-m_2=2(k-n)+2n+1-2k=1>0
\end{align*}
and Karp--Prilepkina applies giving
\begin{multline*}
{}_{5}F_{4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\;a+k,\;b+k,\; c+n,\;d+n \\a,\;b,\;c+k,\;d+k\end{NiceArray}};1\right]\\=
(2n+1)!\frac{(c+n)_{k-n}(d+n)_{k-n}}{(a)_{k}(b)_{k}}\sum_{l=1}^{2(k-n)}\frac{(a-\beta_l)_{k}(b-\beta_l)_{k}}{\prod_{i\ne l}(\beta_i-\beta_l) \beta_l(\beta_l+1)_{2n+1}}.
\end{multline*}
and putting $m=k-n$ we get \eqref{eq:summation_product_2n+1_k_1}.
For \eqref{eq:product_2n_k_2} we take $m_1=k,m_2=k+1$, $f_1=a$, $f_2=b$, $N=2n$, $b_1=c+n$, $b_2=d+n-1$, $p_1=k-n+1$ and $p_2=k-n+1$;
; with
\begin{align*}
\boldsymbol{\beta}=(c+n,c+n+1,\ldots,c+k,d+n-1,d+n,\ldots,d+k-1).
\end{align*}
Now,
\begin{align*}
p_1+p_2+N-m_1-m_2=2(k-n)+2+2n-2k-1=1>0,
\end{align*}
and the summation formula applies
\begin{multline*}
{}_{5}F_{4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n,\;a+k,\;b+k+1,\; c+n,\;d+n-1 \\a,\;b,\;c+k+1,\;d+k\end{NiceArray}};1\right]\\
=(2n)!\frac{(c+n)_{k-n+1}(d+n-1)_{k-n+1}}{(a)_{k}(b)_{k+1}}\sum_{l=1}^{2(k-n)+2}\frac{(a-\beta_l)_{k}(b-\beta_l)_{k+1}}{\prod_{i\ne l}(\beta_i-\beta_l) \beta_l(\beta_l+1)_{2n}}.
\end{multline*}
so that putting $m=k-n$, \eqref{eq:summation_product_2n_k_2} follows.
Finally, for \eqref{eq:product_2n+1_k_2} we put $m_1=k,m_2=k+1$, $f_1=a$, $f_2=b$, $N=2n+1$, $b_1=c+n$, $b_2=d+n$, $p_1=k-n+1$ and $p_2=k-n$;
; with
\begin{align*}
\boldsymbol{\beta}=(c+n,c+n+1,\ldots,c+k,d+n,d+n+1,\ldots,d+k-1).
\end{align*}
In this case,
\begin{align*}
p_1+p_2+N-m_1-m_2=2(k-n)+1+2n+1-2k-1=1>0,
\end{align*}
and Karp--Prilepkina's summation gives
\begin{multline*}
{}_{5}F_{4}\hspace*{-3pt}\left[{\begin{NiceArray}{c}[small]-2n-1,\;a+k,\;b+k+1,\; c+n,\;d+n \\a,\;b,\;c+k+1,\;d+k\end{NiceArray}};1\right]\\=
(2n+1)!\frac{(c+n)_{k-n+1}(d+n)_{k-n}}{(a)_{k}(b)_{k+1}}\sum_{l=1}^{2(k-n)+1}\frac{(a-\beta_l)_{k}(b-\beta_l)_{k+1}}{\prod_{i\ne l}(\beta_i-\beta_l) \beta_l(\beta_l+1)_{2n+1}},
\end{multline*}
and Equation \eqref{eq:summation_product_2n+1_k_2} follows.
\end{comment}
The remaining equations follow similarly.
\end{proof}
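The summation formula invoked above can be confirmed independently with exact rational arithmetic; in the following Python sketch the parameters are arbitrary admissible choices, not values from the paper (all $\beta$'s distinct and $p_1+p_2+N-m_1-m_2>0$):

```python
# Check the quoted Karp--Prilepkina summation for a terminating 5F4 at unity,
# with arbitrary admissible rational parameters.
from fractions import Fraction
from math import factorial, prod

def poch(a, n):                     # Pochhammer symbol (a)_n
    r = Fraction(1)
    for j in range(n):
        r *= a + j
    return r

b1, b2, f1, f2 = Fraction(1, 2), Fraction(7, 3), Fraction(3), Fraction(9, 2)
N, p1, p2, m1, m2 = 3, 2, 2, 3, 2   # p1 + p2 + N - m1 - m2 = 2 > 0
beta = [b1 + i for i in range(p1)] + [b2 + i for i in range(p2)]   # all distinct

# terminating 5F4[-N, b1, b2, f1+m1, f2+m2; b1+p1, b2+p2, f1, f2; 1]
lhs = sum(
    poch(-N, k) * poch(b1, k) * poch(b2, k) * poch(f1 + m1, k) * poch(f2 + m2, k)
    / (poch(b1 + p1, k) * poch(b2 + p2, k) * poch(f1, k) * poch(f2, k) * factorial(k))
    for k in range(N + 1))

# right-hand side of the summation formula
rhs = factorial(N) * poch(b1, p1) * poch(b2, p2) / (poch(f1, m1) * poch(f2, m2)) * sum(
    poch(f1 - bl, m1) * poch(f2 - bl, m2)
    / (prod(bi - bl for i, bi in enumerate(beta) if i != l) * bl * poch(bl + 1, N))
    for l, bl in enumerate(beta))

assert lhs == rhs
```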
\begin{rem}
As examples of the previous summations we have
\begin{enumerate}
\item For $m=0$ and $(\theta_1,\theta_2)=(c,d-1)$:
\begin{align*}
\eta^{(2n)}_{0,2}= -\frac{c}{b}\frac{(2n)!(a)_{2n}(b)_{2n}}{(c)_{3n}(d)_{3n-1}(d-c-1)}\Bigg(
\frac{(c-a+1)_n(c-b)_{n+1}}{(c+n)(c+n+1)_{2n}}- \frac{(d-a)_n(d-1-b)_{n+1}}{(d+n-1)(d+n)_{2n}}
\Bigg).
\end{align*}
\item For $m=1$ and $(\theta_1,\theta_2)=(c,d)$:
\begin{align*}
\eta^{(2n+1)}_{1,1}=-
\frac{(2n+1)!(a)_{2n+1}(b)_{2n+1}}{(c)_{3n+1}(d)_{3n+1}(d-c)}\Bigg(
\frac{(c-a)_{n+1}(c-b)_{n+1}}{(c+n)(c+n+1)_{2n+1}}- \frac{(d-a)_{n+1}(d-b)_{n+1}}{(d+n)(d+n+1)_{2n+1}}
\Bigg).
\end{align*}
\item For $m=1$ and $(\theta_1,\theta_2,\theta_{3})=(c,d-1,d)$:
\begin{align*}
\begin{multlined}[t][0.85\textwidth]
\eta^{(2n)}_{1,1}
= \frac{(2n)!(a)_{2n}(b)_{2n}}{(c)_{3n}(d)_{3n-1} }\Bigg(
\frac{(c-a)_{n+1}(c-b)_{n+1}}{(d-c)(d-1-c) (c+n)(c+n+1)_{2n}}\\+
\frac{(d-1-a)_{n+1}(d-1-b)_{n+1}}{(c-d+1)(d-1+n)(d+n)_{2n}}-
\frac{(d-a)_{n+1}(d-b)_{n+1}}{(c-d)(d+n)(d+n+1)_{2n}}\Bigg).
\end{multlined}
\end{align*}
\item For $m=1$ and $(\theta_1,\theta_2,\theta_3)=(c,c+1,d)$
\begin{align*}
\begin{multlined}[t][0.85\textwidth]
\eta^{(2n+1)}_{1,2} =
\frac{c}{b} \frac{(2n+1)!(a)_{2n+1}(b)_{2n+1}}{(c)_{3n}(d)_{3n+1}}
\Bigg(
\frac{(c-a)_{n+1}(c-b-1)_{n+2}}{(d-c)(c+n)(c+n+1)_{2n+1}}-
\frac{(c+1-a)_{n+1}(c-b)_{n+2}}{(d-c-1)(c+1+n)(c+n+2)_{2n+1}}\\+
\frac{(d-a)_{n+1}(d-b-1)_{n+2}}{(c-d)(c+1-d)(d+n)(d+n+1)_{2n+1}}
\Bigg).
\end{multlined}
\end{align*}
\end{enumerate}
\end{rem}
\begin{comment}
\begin{pro}
The first three subdiagonal coefficients for $\tilde S$ in terms of type II generalized moments are
\begin{align}\label{eq:1subdiagonal_bis}
\tilde S^{[1]}_{2n}&=-\frac{\eta^{(2n)}_{0,2}}{\eta^{(2n)}_{0,1}}, &
\tilde S^{[1]}_{2n+1} &=-\frac{\eta^{(2n+1)}_{1,1}}{\eta^{(2n+1)}_{0,2}},\\\label{eq:2subdiagonal_bis}
\tilde S^{[2]}_{2n}&=-\frac{\begin{vmatrix}
\eta^{(2n+1)}_{0,2} & \eta^{(2n+1)}_{1,1}\\[3pt]
\eta^{(2n)}_{0,2} & \eta^{(2n)}_{1,1}\\
\end{vmatrix}}{\eta^{(2n+1)}_{0,2}\eta^{(2n)}_{0,1}}, &\tilde S^{[2]}_{2n+1}&=-\frac{\begin{vmatrix}
\eta^{(2n+2)}_{0,1} &\eta^{(2n+2)}_{0,2}\\
\eta^{(2n+1)}_{1,1} & \eta^{(2n+1)}_{1,2}
\end{vmatrix}}{ \eta^{(2n+2)}_{0,1} \eta^{(2n+1)}_{0,2}},\\\label{eq:3subdiagonal_bis}
\tilde S^{[3]}_{2n+1}&=\frac{\begin{vmatrix}
\eta^{(2n+3)}_{0,2} & 0 & \eta^{(2n+3)}_{1,1}\\[3pt]
\eta^{(2n+2)}_{0,2} & \eta^{(2n+2)}_{0,1}& \eta^{(2n+2)}_{1,1}\\[3pt]
\eta^{(2n+1)}_{1,2} & \eta^{(2n+1)}_{1,1}& \eta^{(2n+1)}_{1,1}
\end{vmatrix}}{ \eta^{(2n+3)}_{0,2} \eta^{(2n+2)}_{0,1} \eta^{(2n+1)}_{0,2}}, &
\tilde S^{[3]}_{2n}&=\frac{\begin{vmatrix}
\eta^{(2n+2)}_{0,1} & 0 & \eta^{(2n+2)}_{0,2}\\[3pt]
\eta^{(2n+1)}_{1,1} & \eta^{(2n+1)}_{0,2}& \eta^{(2n+1)}_{1,2}\\[3pt]
\eta^{(2n)}_{1,1} & \eta^{(2n)}_{0,2}& \eta^{(2n)}_{1,2}
\end{vmatrix}}{ \eta^{(2n+2)}_{0,1}\eta^{(2n+1)}_{0,2} \eta^{(2n)}_{0,1}},
\end{align}
\end{pro}
We extend the large $n$ asymptotics given in Proposition \ref{pro:asymtotics_S}
\begin{pro}[Large $n$ asymptotics for $\tilde S^{[1]}_n$]
We have
\begin{align*}
\tilde S^{[1]}_{2n}&=\left\{
\begin{aligned}
&-\frac{c}{b}\frac{1}{3^{c-d+1}(c-d+1)}\frac{\Gamma(d-a)\Gamma(d-b)}{\Gamma(c-a+1)\Gamma(c-b)}n^{2(c-d+1)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &c>d-1,\\
&
-\frac{c(d-b-1)}{b(d-c-1)}
\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &d-1>c,
\end{aligned}
\right.\\
\tilde S^{[1]}_{2n+1} &=
\left\{
\begin{aligned}
& - \frac b c \frac{c-a}{c-d} \Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &c>d,\\
&- \frac b c \frac{1}{3^{d-c}(d-c)} \frac{\Gamma(c-a+1)\Gamma(c-b)}{\Gamma(d-a)\Gamma(d-b)} n^{2 (d-c)} \Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &d>c,
\end{aligned}
\right.
\end{align*}
\end{pro}
\begin{proof}
It follows from \eqref{eq:1subdiagonal_bis} and Propositions \ref{pro:asymp_H} and \ref{pro:asymp_moments_II}
\begin{align*}
\tilde S^{[1]}_{2n}&=-\frac{\eta^{(2n)}_{0,2}}{\eta^{(2n)}_{0,1}}=\left\{
\begin{aligned}
&-\frac{\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c+1)\Gamma(d)\kappa^{3n} n^{c-d+\frac{3}{2}}}{3^{2c+d-1}\Gamma(a)\Gamma(b+1)\Gamma(c-a+1)\Gamma(c-b)
(c-d+1)}}{ \frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n}n^{d-c-\frac{1}{2}}}{3^{c+2d-2}\Gamma(a)\Gamma(b)\Gamma(d-a)\Gamma(d-b)}}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &c>d-1,\\
&-\frac{\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c+1)\Gamma(d)\kappa^{3n} n^{d-c-\frac{1}{2}}}{3^{c+2d-2}\Gamma(a)\Gamma(b+1)\Gamma(d-a)\Gamma(d-b-1)
(d-1-c)}}{ \frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n}n^{d-c-\frac{1}{2}}}{3^{c+2d-2}\Gamma(a)\Gamma(b)\Gamma(d-a)\Gamma(d-b)}}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &d-1>c,
\end{aligned}
\right.\\
\tilde S^{[1]}_{2n+1} &
=-\frac{\eta^{(2n+1)}_{1,1}}{\eta^{(2n+1)}_{0,2}}
=\left\{
\begin{aligned}
& - \frac{\frac{\sqrt{3\pi^3}2^{a+b+4}\Gamma(c)\Gamma(d)\kappa^{3n} n^{c-d+\frac{1}{2}}}{3^{2c+d+3} (c-d) \Gamma(a) \Gamma(b) \Gamma(c-a) \Gamma(c-b) }}{\frac{\sqrt{3\pi^3}2^{a+b+4} \Gamma(c+1) \Gamma(d) \kappa^{3n}n^{c-d+\frac{1}{2}}}{3^{2c+d+3}\Gamma(a)\Gamma(b+1)\Gamma(c-a+1)\Gamma(c-b) }} \Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &c>d,\\
&- \frac{\frac{\sqrt{3\pi^3}2^{a+b+4}\Gamma(c)\Gamma(d)\kappa^{3n} n^{d-c+\frac{1}{2}}}{3^{c+2d+3} (d-c) \Gamma(a) \Gamma(b) \Gamma(d-a) \Gamma(d-b) }}{\frac{\sqrt{3\pi^3}2^{a+b+4} \Gamma(c+1) \Gamma(d) \kappa^{3n}n^{c-d+\frac{1}{2}}}{3^{2c+d+3}\Gamma(a)\Gamma(b+1)\Gamma(c-a+1)\Gamma(c-b) }}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &d>c.
\end{aligned}
\right.
\end{align*}
\begin{align*}
\tilde S^{[2]}_{2n}&=-\frac{\begin{vmatrix}
\eta^{(2n+1)}_{0,2} & \eta^{(2n+1)}_{1,1}\\[3pt]
\eta^{(2n)}_{0,2} & \eta^{(2n)}_{1,1}\\
\end{vmatrix}}{\eta^{(2n+1)}_{0,2}\eta^{(2n)}_{0,1}}
\end{align*}
\begin{align*}
\eta^{(2n)}_{0,1}&=\begin{aligned} &\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n}n^{d-c-\frac{1}{2}}}{3^{c+2d-2}\Gamma(a)\Gamma(b)\Gamma(d-a)\Gamma(d-b)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty,\end{aligned}\\
\eta^{(2n+1)}_{0,2}&=\begin{aligned}
&\frac{\sqrt{3\pi^3}2^{a+b+4} \Gamma(c+1) \Gamma(d)\kappa^{3n}n^{c-d+\frac{1}{2}}}{3^{2c+d+3}\Gamma(a)\Gamma(b+1)\Gamma(c-a+1)\Gamma(c-b) }\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty,
\end{aligned}\\
\eta^{(2n)}_{1,1}&= \left\{
\begin{aligned}
&\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n} n^{c-d+\frac{5}{2}}}{3^{2c+d-1}\Gamma(a)\Gamma(b)\Gamma(c-a)\Gamma(c-b)
(c-d)_{2}}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & c&>d,\\
&\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n} n^{d-c+\frac{5}{2}}}{3^{c+2d-1}\Gamma(a)\Gamma(b)\Gamma(d-a)\Gamma(d-b)
(d-c)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & d&>c,
\end{aligned}
\right.\\
\eta^{(2n+1)}_{1,1}&= \left\{
\begin{aligned}
&\frac{\sqrt{3\pi^3}2^{a+b+4}\Gamma(c)\Gamma(d)\kappa^{3n} n^{c-d+\frac{1}{2}}}{3^{2c+d+3}\Gamma(a)\Gamma(b)\Gamma(c-a)\Gamma(c-b)
(c-d)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & c&>d,\\
&\frac{\sqrt{3\pi^3}2^{a+b+4}\Gamma(c)\Gamma(d)\kappa^{3n} n^{d-c+\frac{1}{2}}}{3^{c+2d+3}\Gamma(a)\Gamma(b)\Gamma(d-a)\Gamma(d-b)
(d-c)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & d&>c,
\end{aligned}
\right.\\
\eta^{(2n)}_{0,2}&= \left\{
\begin{aligned}
&\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c+1)\Gamma(d)\kappa^{3n} n^{c-d+\frac{3}{2}}}{3^{2c+d-1}\Gamma(a)\Gamma(b+1)\Gamma(c-a+1)\Gamma(c-b)
(c-d+1)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &c>d-1,\\
&\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c+1)\Gamma(d)\kappa^{3n} n^{d-c-\frac{1}{2}}}{3^{c+2d-2}\Gamma(a)\Gamma(b+1)\Gamma(d-a)\Gamma(d-b-1)
(d-1-c)}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &d-1>c,
\end{aligned}
\right.\\
\eta^{(2n+1)}_{1,2}&= \left\{
\begin{aligned}
&\frac{\sqrt{3\pi^3}2^{a+b+4}\Gamma(c+1)\Gamma(d)\kappa^{3n}n^{c-d+\frac{7}{2}}}{3^{2c+d+3}\Gamma(a)\Gamma(b+1)\Gamma(c-a+1)
\Gamma(c-b)(c-d+1)}
\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &c>d-1,\\
&\frac{\sqrt{3\pi^3}2^{a+b+4}\Gamma(c+1)\Gamma(d) \kappa^{3n}n^{d-c+\frac{3}{2}}}{3^{2d+c+2}\Gamma(a)\Gamma(b+1)\Gamma(d-a)\Gamma(d-b-1)(d-1-c)_{m+1}}
\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty, & &d-1>c,
\end{aligned}
\right.
\end{align*}
We have three regions: $c>d$, $d-1<c<d$, and $c<d-1$.
Case $c>d$:
\begin{align*}
\tilde S^{[2]}_{2n}&=-\frac{\begin{vmatrix}
\eta^{(2n+1)}_{0,2} & \eta^{(2n+1)}_{1,1}\\[3pt]
\eta^{(2n)}_{0,2} & \eta^{(2n)}_{1,1}\\
\end{vmatrix}}{\eta^{(2n+1)}_{0,2}\eta^{(2n)}_{0,1}} &\tilde S^{[2]}_{2n+1}&=-\frac{\begin{vmatrix}
\eta^{(2n+2)}_{0,1} &\eta^{(2n+2)}_{0,2}\\
\eta^{(2n+1)}_{1,1} & \eta^{(2n+1)}_{1,2}
\end{vmatrix}}{ \eta^{(2n+2)}_{0,1} \eta^{(2n+1)}_{0,2}}
\end{align*}
We notice that, given the asymptotic behavior of the $\eta$'s involved, for all values of the parameters with $c>d$ we get
\begin{align*}
\tilde S^{[2]}_{2n}&=-\frac{\eta_{1,1}^{(2n)}}{\eta^{(2n)}_{0,1}}\Big(1+O\Big(\frac{1}{n}\Big)\Big)\\
&= -\frac{\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n} n^{c-d+\frac{5}{2}}}{3^{2c+d-1}\Gamma(a)\Gamma(b)\Gamma(c-a)\Gamma(c-b)
(c-d)_{2}}}{\frac{\sqrt{3\pi^3}2^{a+b+1}\Gamma(c)\Gamma(d)\kappa^{3n}n^{d-c-\frac{1}{2}}}{3^{c+2d-2}\Gamma(a)\Gamma(b)\Gamma(d-a)\Gamma(d-b)}}\Big(1+O\Big(\frac{1}{n}\Big)\Big)\\
& =-
\frac{\Gamma(d-a)\Gamma(d-b)}{3^{c-d+1}\Gamma(c-a)\Gamma(c-b)
(c-d)_{2}}n^{2(c-d)+3}\Big(1+O\Big(\frac{1}{n}\Big)\Big),& n&\to+\infty,
\end{align*}
\end{proof}
\end{comment}
\begin{comment}
\section{introduction}
\label{intro}
Magnetism in the absence of the spatial inversion symmetry has drawn considerable interest in condensed matter physics, since it exhibits various fascinating phenomena, such as the magneto-electric effect~\cite{Curie, Fiebig_2005, khomskii2009trend} and nonreciprocal transport~\cite{wakatsuki2017nonreciprocal}.
For example, magnetic skyrmions in polar/chiral magnets show nonreciprocal directional dichroism due to the lack of the spatial inversion symmetry~\cite{PhysRevB.87.134403}.
Recently, current-induced magnetization and magneto-piezo electricity in antiferromagnetic (AFM) metals without the inversion symmetry have been observed in experiments~\cite{doi:10.7566/JPSJ.87.033702, magpiezo1, PhysRevLett.122.127207}.
Such magnets without the spatial inversion symmetry also affect collective excitations of magnon and photon, which results in directional-dependent dynamical properties even in magnetic insulators~\cite{PhysRevLett.30.125, doi:10.1143/JPSJ.56.3635, cortes2013influence, PhysRevB.88.184404, doi:10.7566/JPSJ.85.053705, 10.1038/s41598-019-51646-3, kawano2019designing, PhysRevB.100.174402}.
Theoretically, the nonreciprocal magnons have long been studied in the magnetic systems with the
Dzyaloshinsky-Moriya (DM) interaction~\cite{DZYALOSHINSKY1958241, PhysRev.120.91}, which were observed in recent experiments~\cite{PhysRevB.92.184419, zhang2015plane, cho2015thickness, PhysRevB.94.144420, PhysRevB.93.235131, PhysRevLett.119.047201, PhysRevB.95.220406, tacchi2017interfacial, chaurasiya2018dependence, PhysRevB.98.064416}.
Among them, asymmetric (nonreciprocal) magnon dispersions were directly detected in the noncentrosymmetric ferromagnet $\mathrm{LiFe_5O_8}$~\cite{PhysRevB.92.184419} and AFM $\alpha$-$\rm{Cu_2 V_2O_7}$~\cite{PhysRevLett.119.047201} through the spectroscopic measurements.
Furthermore, such asymmetric magnons give rise to directional-dependent physical phenomena,
such as nonreciprocal magneto-optical
~\cite{takahashi.Nat.Phys., doi:10.1143/JPSJ.81.023712, Miyahara2013, PhysRevB.89.195145, PhysRevLett.114.197203, PhysRevB.98.134422, PhysRevB.99.094401} and nonreciprocal spin Seebeck effects~\cite{PhysRevB.98.020401, PhysRevB.96.180414}.
Meanwhile, some nonreciprocal-magnon mechanisms which are different from the DM interaction have been found, e.g., the dipolar coupling between ferromagnetic layers~\cite{grunberg1986layered, zhang1987spin, di2015enhancement, gallardo2019reconfigurable, albisetti2020optically}, the vector spin chirality in the spiral spin structures~\cite{PhysRevB.89.195145, doi:10.1143/JPSJ.81.023712, takahashi.Nat.Phys., PhysRevB.98.184405}, the bond-dependent symmetric anisotropic exchange interaction~\cite{maksimov2019anisotropic}, and magnetic interactions induced by curved magnetic surfaces~\cite{otalora2016curvature}, and graded magnetization~\cite{gallardo2019spin} in ferromagnetic films.
Among them, the bond-dependent symmetric anisotropic and DM interactions originate from the spin-orbit coupling in bulk systems, which are different from each other:
The former mechanism does not require the inversion symmetry breaking on the bond center, while the latter does.
Thus, nonreciprocal magnons can be realized even in centrosymmetric magnets when the magnetic order breaks the inversion symmetry in the presence of the bond-dependent symmetric anisotropic exchange interaction, which will extend the scope of functional materials toward applications to AFM spintronics devices.
Nevertheless, its microscopic mechanism has not been elucidated thus far.
In the present study, we investigate the behavior of nonreciprocal magnons under the anisotropic magnetic interactions on the basis of point group symmetry.
We show that the threefold bond-dependent symmetric exchange interaction in the honeycomb structure leads to a valley-type nonreciprocal magnon excitations once the staggered-type collinear AFM ordering occurs.
We also find that its nonreciprocal magnon excitations show an in-plane angle-dependent directional dispersions under an external magnetic field.
We present that the microscopic origin of the nonreciprocal magnon excitations is attributed to the emergent magnetic toroidal multipoles hidden in the cluster magnetic structure from the symmetry point of view.
The organization of this paper is as follows.
In Sec.~\ref{model}, we introduce the spin model and outline the linear spin wave calculations based on the Holstein-Primakoff transformations.
In Sec.~\ref{result}, we give the nonreciprocal magnon excitations at both zero and nonzero magnetic fields.
Section~\ref{summary} is devoted to a summary of the present paper.
In Appendix~\ref{GS}, we present the calculations of the spin configurations in the ground state.
In Appendix~\ref{DM}, we compare the magnon dispersions obtained in the present paper with those in the presence of the
DM interaction.
\section{model and method}
\label{model}
Let us start by considering the localized spin model in the honeycomb structure, as shown in Fig.~\ref{fig1}(a).
By taking into account the symmetry elements of the honeycomb structure under the point group $6/mmm$, the spin Hamiltonian with the symmetry-allowed exchange interactions is given by
\begin{align}
\label{eq1}
\mathcal{H}=&\sum_{\langle ij \rangle}
\Big[
J(S^+_{i{\rm{A}}}S^-_{j{\rm{B}}}+S^-_{i{\rm{A}}}S^+_{j{\rm{B}}}) +J^z S^z_{i{\rm{A}}} S^z_{j{\rm{B}}} \nonumber \\
&+J^a(\gamma_{ij}S^+_{i{\rm{A}}}S^+_{j{\rm{B}}}+\gamma_{ij}^{*}S^-_{i{\rm{A}}}S^-_{j{\rm{B}}})
\Big]
-\sum_{i, \eta}
\mathbf{H}\cdot \mathbf{S}_{i \eta},
\end{align}
where $S^{\zeta}_{i \eta}$ is a classical spin with a $\zeta=x, y,z$ component at unit cell $i$ and sublattice $\eta=$ A, B, and $S^{\pm}_{i \eta}\equiv (S^x_{i \eta}\pm iS^y_{i \eta})/\sqrt{2}$.
The sum of $\langle ij \rangle$ is taken for the nearest-neighbor spins.
The first two terms in
the square bracket in Eq.~\eqref{eq1} represent the $xxz$-type AFM exchange interactions, where we assume $J^z>J>0$.
The third term stands for a bond-dependent
symmetric anisotropic exchange interaction with the coupling constant $J^a$ and the phase factor $\gamma_{ij}\equiv {\rm{e}}^{i\frac{2\pi n}{3}}$ where $n=0,1, 2$ corresponds to the three nearest-neighbor bonds in Fig.~\ref{fig1}(a).
This term originates from the relativistic spin-orbit coupling in multi-orbital systems where the competition between the crystalline electric field and the atomic spin-orbit coupling gives rise to a Kramers doublet under the large total angular momentum, although it is different from the DM interaction which appears in the absence of the inversion symmetry at the bond center.
A similar bond-dependent symmetric anisotropic exchange interaction has recently been studied in the triangular AFM~\cite{PhysRevB.94.035107} and honeycomb ferromagnet~\cite{PhysRevB.95.014435, PhysRevApplied.9.024029}.
The last term is a Zeeman interaction under an external in-plane magnetic field, $\mathbf{H}=(H_x, H_y, 0)=H(\cos \phi, \sin \phi,0)$.
We set $J^z=1$ as the energy unit and the distance between A and B sublattices to be 1.
In order to discuss the magnon excitations, we investigate the optimal spin pattern within the two-sublattice orderings in the model in Eq.~(\ref{eq1}).
For $J^z>J$, the spin configurations are given by $\mathbf{S}_{i{\rm A}}=S(\sin{\theta}\cos{\phi}, \sin{\theta}\sin{\phi}, \cos{\theta})$ and $\mathbf{S}_{i{\rm B}}=S(\sin{\theta}\cos{\phi}, \sin{\theta}\sin{\phi}, -\cos{\theta})$ where $\theta=\sin^{-1}\left[H/3S(J+J^z)\right]$, as shown in Appendix~\ref{GS}.
Note that $J^a$ does not contribute to the ground-state energy within the two-sublattice AFM ordering, although it plays an important role in asymmetric magnon excitations as discussed in Sec.~\ref{result}.
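The stated canting angle can be checked numerically. The sketch below (our code, not the authors'; the parameter values $J^z=1$, $J=0.9$, $H=2$ are illustrative choices) minimizes the classical energy per unit cell over $\theta$ and compares the minimizer and the minimum with the closed forms quoted in the text.

```python
import numpy as np

# Sanity check: theta = arcsin[H / (3 S (J + J^z))] minimizes the classical
# energy per unit cell of the canted two-sublattice state, and the minimum
# reproduces the ground-state energy E_GS = -3 J^z S^2 - H^2/[3 (J^z + J)]
# quoted in the text.
Jz, J, S, H = 1.0, 0.9, 1.0, 2.0

def E_classical(theta):
    # Three bonds per unit cell; note S^+ S^- + S^- S^+ = S^x S^x + S^y S^y
    # with the normalization S^{+-} = (S^x +- i S^y)/sqrt(2) used in the text.
    return (3*J*S**2*np.sin(theta)**2
            - 3*Jz*S**2*np.cos(theta)**2
            - 2*H*S*np.sin(theta))

thetas = np.linspace(0.0, np.pi/2, 200001)
theta_num = thetas[np.argmin(E_classical(thetas))]   # brute-force minimizer
theta_exact = np.arcsin(H/(3*S*(J + Jz)))            # closed form from the text
E_exact = -3*Jz*S**2 - H**2/(3*(Jz + J))             # quoted E_GS
```

The grid minimizer agrees with the closed-form angle, and the energy at that angle matches the quoted $E_{\rm GS}$.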
For the above spin configuration, we examine magnetic excitations by using the linear spin-wave theory.
We adopt the standard Holstein-Primakoff transformation as $\tilde{S}^{+}_{i\eta}\equiv \sqrt{S}\eta'_i$, $\tilde{S}^{-}_{i\eta}\equiv \sqrt{S}\eta_i'^{\dagger}$, and $\tilde{S}^z_{i\eta}\equiv S-\eta_i'^{\dagger}\eta'_i$, where $S=1$, $(\tilde{S}^{x}_{i\eta},\tilde{S}^{y}_{i\eta},\tilde{S}^{z}_{i\eta})^T$ is the local rotated frame with the quantization axis along the $\tilde{S}^{z}_{i\eta}$ direction, and $\eta'_i=a_i$ and $b_i$ are the boson operators for sublattices $\eta=$ A and B, respectively.
\begin{figure}[t]
\includegraphics[width=1.0\linewidth]{001}
\caption{\label{fig1} (a) Schematic picture of the honeycomb structure consisting of A and B sublattices.
The bond index 0-2 is also shown.
(b) The magnon dispersions in the model in Eq.~\eqref{eq4} at $J=0.9$ and $H=0$.
The red solid (black dotted) lines represent the result at $J^a=0.2$ ($J^a=0$).
In the inset, the first Brillouin zone is shown.
(c) The color plot of $\langle J^a_{\mathbf{q}\alpha} \rangle$ in Eq.~(\ref{ja}) in $\mathbf{q}$ space.}
\end{figure}
By performing the Fourier transformation, the spin-wave Hamiltonian in momentum ($\mathbf{q}$) space is obtained as
\begin{align}
\label{eq4}
\mathcal{H}=\frac{1}{2}
\sum_{\mathbf{q}}
\Psi^{\dagger}_{\mathbf{q}}
\begin{pmatrix}
X(\mathbf{q})&Y(\mathbf{q})\\
Y^{*}(-\mathbf{q})&X^{*}(-\mathbf{q})
\end{pmatrix}
\Psi_{\mathbf{q}},
\end{align}
where $\Psi^{\dagger}_{\mathbf{q}}=(a^{\dagger}_{\mathbf{q}},b^{\dagger}_{\mathbf{q}},a_{-\mathbf{q}},b_{-\mathbf{q}})$.
We omit the classical ground-state energy per unit cell $E_{\rm{GS}}=-3J^zS^2-H^2/[3(J^z+J)]$.
$X(\mathbf{q})$ and $Y(\mathbf{q})$ in Eq.~\eqref{eq4} are $2\times 2$ matrices, which are given by
\begin{align}
\label{eq5}
X(\mathbf{q})=&
\begin{pmatrix}
Z&\sum_{n}F_n{\rm{e}}^{i\mathbf{q} \cdot \boldsymbol{\rho}_n }\\
\sum_{n} F^{*}_n{\rm{e}}^{-i\mathbf{q} \cdot \boldsymbol{\rho}_n}&Z
\end{pmatrix},\\
\label{eq52}
Y(\mathbf{q})=&
\begin{pmatrix}
0&\sum_{n}G_n{\rm{e}}^{i\mathbf{q} \cdot \boldsymbol{\rho}_n }\\
\sum_{n}G_n{\rm{e}}^{-i\mathbf{q} \cdot \boldsymbol{\rho}_n }&0
\end{pmatrix},
\end{align}
where the sum of $n$ is taken for the three nearest-neighbor bonds ($n=0,1,2$) with $\boldsymbol{\rho}_0=(1,0)
$, $\boldsymbol{\rho}_1=(-1/2,\sqrt{3}/2)$, and $\boldsymbol{\rho}_2=(-1/2,-\sqrt{3}/2)$.
In Eqs.~\eqref{eq5} and \eqref{eq52}, $F_n$, $G_n$, and $Z$ are expressed as
\begin{align}
F_n=&
\frac{J+J^z}{2}\sin^2{\theta} \nonumber \\
&-J^a\left[\cos{\Phi_n}\frac{1+\cos^2{\theta}}{2} - i \sin{\Phi_n}\cos{\theta}\right],\\
G_n=&
-J+\sin^2{\theta}\left[ \frac{J+J^z}{2}
+\frac{J^a}{2}\cos{\Phi_n}\right],
\end{align}
and $Z=H\sin\theta-3J\sin^2{\theta}+3J^z\cos^2{\theta}$ where $\Phi_n=2\phi+\chi_n$ and $\chi_n=0, 2\pi/3, 4\pi/3$ for $n=0,1,2$.
We use the numerical Bogoliubov transformation for the Hamiltonian in Eq.~(\ref{eq4}) for the magnon dispersions~\cite{COLPA1978327}.
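As an illustration of this step, the sketch below (our code, not the authors' implementation) assembles $X(\mathbf{q})$ and $Y(\mathbf{q})$ from the expressions above at zero field with the Fig.~\ref{fig1}(b) parameters. Instead of the Cholesky-based construction of Colpa, it simply takes the positive eigenvalues of $gM(\mathbf{q})$ with $g=\mathrm{diag}(1,1,-1,-1)$, which coincide for a stable (positive-definite) bosonic Hamiltonian; the momentum $\mathbf{q}=(0,\pm 1)$ is an arbitrary point on the asymmetric $q_y$ axis.

```python
import numpy as np

# Linear spin-wave bands at zero field (theta = 0): J^z = 1, J = 0.9, J^a = 0.2.
Jz, J, Ja = 1.0, 0.9, 0.2
rho = np.array([[1.0, 0.0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])
chi = np.array([0.0, 2*np.pi/3, 4*np.pi/3])

def bands(q, theta=0.0, phi=0.0, H=0.0):
    Phi = 2*phi + chi
    F = ((J + Jz)/2*np.sin(theta)**2
         - Ja*(np.cos(Phi)*(1 + np.cos(theta)**2)/2
               - 1j*np.sin(Phi)*np.cos(theta)))
    G = -J + np.sin(theta)**2*((J + Jz)/2 + Ja/2*np.cos(Phi))
    Z = H*np.sin(theta) - 3*J*np.sin(theta)**2 + 3*Jz*np.cos(theta)**2
    ep = np.exp(1j*(rho @ np.asarray(q)))   # e^{+i q . rho_n}
    em = np.conj(ep)                        # e^{-i q . rho_n}
    X  = np.array([[Z, np.sum(F*ep)], [np.sum(np.conj(F)*em), Z]])
    Xm = np.array([[Z, np.sum(F*em)], [np.sum(np.conj(F)*ep), Z]])   # X(-q)
    Y  = np.array([[0, np.sum(G*ep)], [np.sum(G*em), 0]])
    Ym = np.array([[0, np.sum(G*em)], [np.sum(G*ep), 0]])            # Y(-q)
    M = np.block([[X, Y], [np.conj(Ym), np.conj(Xm)]])
    g = np.diag([1.0, 1.0, -1.0, -1.0])
    ev = np.sort(np.linalg.eigvals(g @ M).real)
    return ev[2:]    # the two positive branches (lower, upper)

E_G = bands([0.0, 0.0])     # Gamma point: bands degenerate at 3 sqrt(Jz^2 - J^2)
E_p = bands([0.0, 1.0])     # generic point on the q_y axis: split bands
E_m = bands([0.0, -1.0])    # reversed momentum: a different splitting
```

The degeneracy at $\Gamma$ and the different gaps at $\pm\mathbf{q}$ reproduce the behavior described around Fig.~\ref{fig1}(b).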
\section{result}
\label{result}
In this section, we first show the nonreciprocal magnon excitations under a collinear AFM order in Sec.~\ref{H=0}.
Next, we discuss the nonreciprocal behavior under the external magnetic field in Sec.~\ref{Hneq0}.
\subsection{Collinear antiferromagnetic order at zero field}
\label{H=0}
We show the result in the absence of the magnetic field ($H=0$) where the staggered collinear AFM order with the moments along the $z$ direction, i.e., $\theta=0$, becomes the ground state.
Figure~\ref{fig1}(b) shows the magnon dispersions at $J=0.9$, $J^a=0.2$, and $H=0$ (red solid lines).
For comparison, we also show the magnon dispersions at $J^a=0$ (black dotted lines).
Compared to the result at $J^a=0$, the magnon dispersions at $J^a=0.2$ split in the entire Brillouin zone except for the $\Gamma$ and K points.
The splitting of the magnon excitation spectrum is characterized in an asymmetric way: the magnon dispersions undergo an antisymmetric deformation with respect to $\mathbf{q}$ for $J^a \neq 0$.
To examine the effect of $J^a$ on the antisymmetric magnon dispersions, we calculate its contribution by evaluating
the expectation value at each momentum $\mathbf{q}$ of the third term in the square bracket in Eq.~(\ref{eq1}), which is represented by
\begin{align}
\label{ja}
\langle J^a_{\mathbf{q}\zeta} \rangle \equiv&
J^a \sum_n \bra{\zeta_{\mathbf{q}}}
{\rm{e}}^{i\mathbf{q} \cdot \boldsymbol{\rho}_n }(\bar{F}_n a^{\dagger}_{\mathbf{q}}b_{\mathbf{q}}+\bar{F}_n^* a_{-\mathbf{q}}b^{\dagger}_{-\mathbf{q}}) \nonumber \\
&+{\rm{e}}^{i\mathbf{q} \cdot \boldsymbol{\rho}_n } \bar{G}_n (a^{\dagger}_{\mathbf{q}}b^{\dagger}_{-\mathbf{q}}+a_{-\mathbf{q}}b_{\mathbf{q}}) + {\rm H.c.}
\ket{\zeta_{\mathbf{q}}},
\end{align}
where $\ket{\zeta_{\mathbf{q}}}=\zeta^{\dagger}_{\mathbf{q}}\ket{0}$ stands for the eigenmode where $\zeta_{\mathbf{q}}=\alpha_{\mathbf{q}}$ ($\beta_{\mathbf{q}}$) for the upper (lower) magnon band.
$\bar{F}_n$ and $\bar{G}_n$ are given by $\bar{F}_n=[\cos{\Phi_n}(1+\cos^2{\theta})/2 - i \sin{\Phi_n}\cos{\theta}]$ and $\bar{G}_n=\cos{\Phi_n}\sin^2{\theta}$.
Figure~\ref{fig1}(c) shows the color plot of $\langle J^a_{\mathbf{q}\alpha}\rangle$ in the entire $\mathbf{q}$ space where $\langle J^a_{\mathbf{q}\alpha}\rangle \simeq -\langle J^a_{\mathbf{q}\beta}\rangle$.
In Fig.~\ref{fig1}(c), $\langle J^a_{\mathbf{q}\alpha}\rangle$ retains a threefold rotational symmetry in the form of $\sin (\sqrt{3} q_y/2) [\cos (3 q_x/2)-\cos (\sqrt{3} q_y/2)]$, which is symmetric along the M-$\Gamma$ line ($q_x \leftrightarrow -q_x$) and antisymmetric along the K-$\Gamma$-$\rm{K}'$ line ($q_y \leftrightarrow -q_y$).
Reflecting such a functional form, $\langle J^a_{\mathbf{q}\alpha}\rangle$ becomes the maximum at the $\rm{K}'$ point.
The behavior of $\langle J^a_{\mathbf{q}\zeta}\rangle$ in Fig.~\ref{fig1}(c) is consistent with the magnon-band splitting in Fig.~\ref{fig1}(b).
In fact, the magnon-band splitting $\Delta E_\mathbf{q}$ is related with $\langle J^a_{\mathbf{q}\zeta}\rangle$ as $\Delta E_\mathbf{q}=\langle J^a_{\mathbf{q}\alpha}\rangle-\langle J^a_{\mathbf{q}\beta}\rangle$.
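The quoted functional form is easy to verify numerically. The short check below (ours) confirms that $f(\mathbf{q})=\sin(\sqrt{3}q_y/2)[\cos(3q_x/2)-\cos(\sqrt{3}q_y/2)]$ is invariant under a threefold rotation of $\mathbf{q}$, odd under $q_y\to-q_y$, and equal to $-\tfrac{1}{2}\sum_n\sin(\mathbf{q}\cdot\boldsymbol{\rho}'_n)$ over the next-nearest-neighbor vectors $\boldsymbol{\rho}'_n$, consistent with the next-nearest-neighbor kinetic interpretation given below.

```python
import numpy as np

# The antisymmetric form quoted in the text.
def f(qx, qy):
    return np.sin(np.sqrt(3)*qy/2)*(np.cos(3*qx/2) - np.cos(np.sqrt(3)*qy/2))

# Next-nearest-neighbor vectors rho'_n = rho_0 - rho_1, rho_1 - rho_2, rho_2 - rho_0.
rhop = np.array([[1.5, -np.sqrt(3)/2], [0.0, np.sqrt(3)], [-1.5, -np.sqrt(3)/2]])
c3, s3 = np.cos(2*np.pi/3), np.sin(2*np.pi/3)

rng = np.random.default_rng(0)
qs = rng.uniform(-np.pi, np.pi, size=(50, 2))
# Invariance under a threefold rotation of q:
dev_rot = max(abs(f(c3*qx - s3*qy, s3*qx + c3*qy) - f(qx, qy)) for qx, qy in qs)
# Oddness under q_y -> -q_y:
dev_odd = max(abs(f(qx, -qy) + f(qx, qy)) for qx, qy in qs)
# Identity f(q) = -(1/2) sum_n sin(q . rho'_n):
dev_id = max(abs(np.sum(np.sin(rhop @ q)) + 2*f(*q)) for q in qs)
```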
We analytically evaluate the asymmetric magnon-band splitting $\Delta E_\mathbf{q}$ by using the perturbation analysis with respect to $J^a$.
The lowest-energy correction by $J^a$ is given by the first-order perturbation, which is obtained as
\begin{align}
\label{eq6}
\Delta E_\mathbf{q}=|J^a|\sqrt{3+2\left[\sum_{n=0,1,2}\cos\left(\mathbf{q} \cdot \boldsymbol{\rho}'_n+\frac{2\pi}{3}\right)\right]},
\end{align}
where we assume $J^z \gg J$ for simplicity.
In Eq.~(\ref{eq6}), $\boldsymbol{\rho}'_n$ is the next-nearest-neighbor vector as $\boldsymbol{\rho}'_0=\bm{\rho}_0-\bm{\rho}_1$, $\boldsymbol{\rho}'_1=\bm{\rho}_1-\bm{\rho}_2$, and $\boldsymbol{\rho}'_2=\bm{\rho}_2-\bm{\rho}_0$.
Note that Eq.~(\ref{eq6}) describes $\Delta E_\mathbf{q} \neq \Delta E_{-\mathbf{q}}$.
The expression indicates that an effective kinetic motion of magnons between the next-nearest-neighbor spins plays an important role in inducing the antisymmetric magnon-band splitting.
Moreover, Eq.~(\ref{eq6}) shows that the nonreciprocal magnon dispersion is proportional to the symmetric anisotropic exchange $J^a$ and is irrespective of the sign of $J^a$.
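A direct evaluation of Eq.~(\ref{eq6}) illustrates the asymmetry (our sketch; $J^a=0.2$ and the test momenta are illustrative choices): the first-order splitting vanishes at $\Gamma$ and takes different values at $\mathbf{q}$ and $-\mathbf{q}$ on the $q_y$ axis.

```python
import numpy as np

# First-order magnon-band splitting of Eq. (6):
# Delta E_q = |J^a| sqrt( 3 + 2 sum_n cos(q . rho'_n + 2 pi / 3) ).
Ja = 0.2
rhop = np.array([[1.5, -np.sqrt(3)/2], [0.0, np.sqrt(3)], [-1.5, -np.sqrt(3)/2]])

def dE(q):
    arg = 3 + 2*np.sum(np.cos(rhop @ np.asarray(q) + 2*np.pi/3))
    return abs(Ja)*np.sqrt(max(arg, 0.0))   # guard tiny negative round-off

gap_G = dE([0.0, 0.0])    # Gamma: the argument vanishes, no splitting
gap_p = dE([0.0, 1.0])    # ~0.334 for these parameters
gap_m = dE([0.0, -1.0])   # ~0.193: Delta E_q != Delta E_{-q}
```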
Such an emergent asymmetric magnon structure in $\mathbf{q}$ space is also caused by the DM interaction, which appears without the inversion symmetry at the bond center.
However, the ways in which the spectra become asymmetric are qualitatively different from each other, although both of them originate microscopically from the spin-orbit coupling.
The asymmetric magnon dispersion induced by the symmetric anisotropic exchange interaction exhibits the magnon-band splitting except at the high-symmetry points, $\Gamma$ and K, in Fig.~\ref{fig1}(b), whereas that induced by the DM interaction does not show any splitting; the twofold degenerate bands at the K and $\rm{K}'$ points move in opposite directions, as shown in Appendix~\ref{DM}~\cite{doi:10.7566/JPSJ.85.053705}.
Thus, these two contributions are separably detected by the spectroscopic measurements~\cite{PhysRevB.92.184419,PhysRevLett.119.047201}.
Moreover, another difference is found in the microscopic origin.
The key issue for the present mechanism appears in the exchange interactions between the {\it nearest-neighbor} bonds.
On the other hand, the exchange interactions between the {\it next-nearest-neighbor} bonds give the contribution to nonreciprocal magnons for the mechanisms based on the DM interaction~\cite{doi:10.7566/JPSJ.85.053705}.
Thus, nonreciprocal magnons in the honeycomb AFM can be expected even when the next-nearest-neighbor exchange couplings including the DM interaction are negligibly small.
\begin{figure}[t]
\includegraphics[width=1.0\linewidth]{002}
\caption{
\label{fig2}
(a), (b) Magnon bands at (a) $\mathbf{H}=(H,0,0)$ ($\phi=0$) and (b) $\mathbf{H}=(H,H,0)/\sqrt{2}$ ($\phi=\pi/4$) with $H=2$.
The other model parameters in Eq.~\eqref{eq4} are the same as those in Fig.~\ref{fig1}(b).
(c)-(f) $\langle J^a_{\mathbf{q}\alpha} \rangle$ for (c) $\phi=0$, (d) $\phi=\pi/4$, (e) $\phi=\pi/2$, and (f) $\phi=3\pi/4$.
}
\end{figure}
\subsection{Canted antiferromagnetic order at nonzero field}
\label{Hneq0}
We discuss an additional asymmetric magnon deformation under the magnetic field ($H \neq 0$).
Figures~\ref{fig2}(a) and \ref{fig2}(b) represent the results in the presence of the in-plane magnetic field $\mathbf{H}=H(\cos \phi, \sin \phi, 0)$ for $\phi=0$ and $\phi=\pi/4$, respectively.
The magnon dispersions in Figs.~\ref{fig2}(a) and \ref{fig2}(b) are different from each other for $J^a \neq 0$, while they are the same for $J^a=0$.
For instance, the magnon bands are symmetric (asymmetric) along the M$_1$-$\Gamma$-M$_2$ line in Fig.~\ref{fig2}(a) [Fig.~\ref{fig2}(b)], which means that the antisymmetric functional form depends on the magnetic-field direction.
In order to display the antisymmetric modulations under the in-plane magnetic field, we show $\langle J^a_{\mathbf{q}\alpha} \rangle$ ($\simeq -\langle J^a_{\mathbf{q}\beta} \rangle$) in Eq.~\eqref{ja} for several values of $\phi$: $\phi=0$ in Fig.~\ref{fig2}(c), $\phi=\pi/4$ in Fig.~\ref{fig2}(d), $\phi=\pi/2$ in Fig.~\ref{fig2}(e), and $\phi=3\pi/4$ in Fig.~\ref{fig2}(f).
In contrast to the result at $H=0$ in Fig.~\ref{fig1}(b), $\langle J^a_{\mathbf{q}\alpha} \rangle$ in Figs.~\ref{fig2}(c)-\ref{fig2}(f) breaks the threefold rotational symmetry: there are linearly antisymmetric modulations against $q_y$ along the [100] and [010] field directions ($\phi=0$ and $\phi=\pi/2$) in Figs.~\ref{fig2}(c) and \ref{fig2}(e) and against $q_x$ along the $[110]$ and $[\bar{1}10]$ field directions ($\phi=\pi/4$ and $\phi=3\pi/4$) in Figs.~\ref{fig2}(d) and \ref{fig2}(f).
In contrast to the case at zero field in Sec.~\ref{H=0}, the magnon-band splitting $\Delta E_\mathbf{q}$ deviates slightly from $\langle J^a_{\mathbf{q}\alpha}\rangle-\langle J^a_{\mathbf{q}\beta}\rangle$ in the presence of $\mathbf{H}$.
\begin{figure}[t]
\includegraphics[width=1.0\linewidth]{003}
\caption{\label{fig3}(a)
Magnetic-field angle dependences of the linear coefficients in the magnon band, $a_{1x}$ and $a_{1y}$, at $J=0.9$, $J^a=0.2$, and $H=2$.
(b) $H$ dependence of $a_1(\equiv \sqrt{a_{1x}^2+a_{1y}^2})$ for $J^a=0.05$, $0.1$, and $0.2$ where $H_{\rm sat}$ represents the saturated magnetic field.
}
\end{figure}
We analyze additional antisymmetric modulations in the magnon dispersions by considering the $\mathbf{q}\to 0$ limit.
By setting $\mathbf{q}=(q_x, q_y)=q (\cos \phi_q, \sin \phi_q)$, the magnon dispersion for upper band $E_{\mathbf{q} \alpha}$ is expanded as $E_{\mathbf{q} \alpha}=a_0+ (a_{1x} \cos\phi_q+a_{1y} \sin\phi_q)q +\mathcal{O} (\mathbf{q}^2)$ where $a_0$, $a_{1x}$, and $a_{1y}$ are the expansion coefficients.
We show the field-angle dependence of the linear coefficients $a_{1x}$ and $a_{1y}$, obtained by numerical differentiation for the upper band, in Fig.~\ref{fig3}(a).
As clearly shown in Fig.~\ref{fig3}(a), there are linear antisymmetric
modulations in the magnon dispersions under the in-plane magnetic field.
The angle dependences of $a_{1x}$ and $a_{1y}$ are fitted as $-\sin(2\phi)$ and $-\cos(2\phi)$, respectively, where their norm $a_1 \equiv \sqrt{a_{1x}^2+a_{1y}^2}$ is independent of $\phi$.
The linear antisymmetric direction is rotated by $-2\phi$ when the field direction is rotated by $\phi$.
The result suggests that the nonreciprocal dispersion can be controlled by the magnetic-field direction, since the nonreciprocal transport is dominantly characterized by the linear antisymmetric components~\cite{PhysRevB.98.134422, PhysRevB.99.094401, PhysRevB.96.180414}.
Moreover, it is noted that such an angle dependence of nonreciprocal magnon does not occur under the DM interaction that might appear in the next-nearest-neighbor bonds in the honeycomb AFM, as shown in Appendix~\ref{DM}.
Thus, the symmetric anisotropic exchange-driven nonreciprocal magnon can be detected by measuring the conductive and response tensors in the nonreciprocal magneto-optical~\cite{takahashi.Nat.Phys., doi:10.1143/JPSJ.81.023712, Miyahara2013, PhysRevB.89.195145, PhysRevLett.114.197203, PhysRevB.98.134422, PhysRevB.99.094401} and spin Seebeck effects~\cite{PhysRevB.98.020401, PhysRevB.96.180414} besides the microscopic spectroscopic measurements.
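This angle dependence can be checked directly from the spin-wave matrices of Sec.~\ref{model}. The self-contained sketch below (ours, not the authors' implementation) diagonalizes the bosonic problem through the eigenvalues of $gM(\mathbf{q})$ and compares finite-difference odd parts of the upper band along $q_x$ and $q_y$ for $\phi=0$ and $\phi=\pi/4$; the step $10^{-3}$ is an arbitrary choice.

```python
import numpy as np

# Field-angle dependence of the linear antisymmetric part of the upper band:
# at phi = 0 it lies along q_y, at phi = pi/4 along q_x (H = 2, J = 0.9, J^a = 0.2).
Jz, J, Ja, S, H = 1.0, 0.9, 0.2, 1.0, 2.0
theta = np.arcsin(H/(3*S*(J + Jz)))          # canting angle of the ground state
rho = np.array([[1.0, 0.0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])
chi = np.array([0.0, 2*np.pi/3, 4*np.pi/3])

def upper(q, phi):
    Phi = 2*phi + chi
    F = ((J + Jz)/2*np.sin(theta)**2
         - Ja*(np.cos(Phi)*(1 + np.cos(theta)**2)/2
               - 1j*np.sin(Phi)*np.cos(theta)))
    G = -J + np.sin(theta)**2*((J + Jz)/2 + Ja/2*np.cos(Phi))
    Z = H*np.sin(theta) - 3*J*np.sin(theta)**2 + 3*Jz*np.cos(theta)**2
    ep = np.exp(1j*(rho @ np.asarray(q)))
    em = np.conj(ep)
    X  = np.array([[Z, np.sum(F*ep)], [np.sum(np.conj(F)*em), Z]])
    Xm = np.array([[Z, np.sum(F*em)], [np.sum(np.conj(F)*ep), Z]])
    Y  = np.array([[0, np.sum(G*ep)], [np.sum(G*em), 0]])
    Ym = np.array([[0, np.sum(G*em)], [np.sum(G*ep), 0]])
    M = np.block([[X, Y], [np.conj(Ym), np.conj(Xm)]])
    ev = np.sort(np.linalg.eigvals(np.diag([1., 1., -1., -1.]) @ M).real)
    return ev[3]                              # upper magnon branch

dq = 1e-3
def asym(phi):
    ax = upper([dq, 0], phi) - upper([-dq, 0], phi)   # odd part along q_x
    ay = upper([0, dq], phi) - upper([0, -dq], phi)   # odd part along q_y
    return ax, ay

ax0, ay0 = asym(0.0)        # field along [100]
ax4, ay4 = asym(np.pi/4)    # field along [110]
```

Consistent with the fitted forms $a_{1x}\propto-\sin(2\phi)$ and $a_{1y}\propto-\cos(2\phi)$, the odd part along $q_x$ is negligible at $\phi=0$ and dominant at $\phi=\pi/4$, with equal magnitudes.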
Figure~\ref{fig3}(b) shows the $H$ dependence of $a_1$ for $J^a=0.05$, $0.1$, and $0.2$ where $H_{\rm sat}$ represents the saturated magnetic field.
The value of $a_1$ gradually decreases with increasing magnetic field and vanishes at $H_{\rm sat}$.
Note that there is a finite jump of $a_1$ for infinitesimally small $H$, whose discontinuity is presumably due to the presence of the band crossing at the $\Gamma$ point, as shown in Fig.~\ref{fig1}(b).
The magnitude of the linear coefficient $a_1$ is proportional to $J^a$ in Fig.~\ref{fig3}(b).
Finally, let us discuss the peculiar angle dependence of the nonreciprocal excitations in terms of emergent magnetic toroidal multipoles~\cite{Spaldin_0953-8984-20-43-434203, hayami2018microscopic, PhysRevB.98.165110}.
In the case of $H=0$ where the staggered AFM state with the moments along the $z$ direction is stabilized, this AFM state is regarded as a ferroic alignment of the odd-parity magnetic toroidal octupole with the $y(3x^2-y^2)$ component~\cite{doi:10.7566/JPSJ.85.053705,PhysRevB.99.174407}, which results in the $q_y(3q_x^2-q_y^2)$-type magnon band deformation, as shown in Fig.~\ref{fig1}(c)~\cite{PhysRevB.98.165110}.
This antisymmetric functional form implies that a directional nonreciprocity is coupled with the quadrupole degrees of freedom (second order of $H$) when dividing $q_y(3q_x^2-q_y^2)$ as $2 q_x \times (q_x q_y)+q_y \times (q_x^2-q_y^2)$.
As the symmetry $q_x q_y$ and $q_x^2-q_y^2$ are the same as $H_x H_y$ and $H_x^2-H_y^2$, the $y(3x^2-y^2)$-type magnetic toroidal octupole gives rise to the coupling as $2 q_x \times (H_x H_y)+q_y \times (H_x^2-H_y^2) \sim q_x \sin(2\phi)+q_y\cos(2\phi)$.
As $q_x$ and $q_y$ transform as components of a polar vector, this decomposition expresses the emergence of in-plane magnetic toroidal dipoles $T_x \sim q_x$ and $T_y \sim q_y$ in the canted AFM state, and explains the qualitative behavior of the results in Figs.~\ref{fig2}(c)-(f) and \ref{fig3}(a).
Such an emergent magnetic toroidal dipole in the canted AFM state is consistent with the symmetry analysis by using the cluster multipole theory~\cite{PhysRevB.99.174407}.
From the viewpoint of model parameters, the symmetric anisotropic exchange interaction is essential, since it breaks continuous spin rotational symmetry.
The threefold symmetric interaction consists of the product of the dipole and quadrupole degrees of freedom on the basis of the microscopic multipole description~\cite{matsumoto2017symmetry, PhysRevB.98.165110}.
Recently, a similar angle-dependent magneto-electric effect observed in Co$_4$Nb$_2$O$_9$~\cite{Khanh_PhysRevB.93.075117, Khanh_PhysRevB.96.094434} is understood from the multipole aspect~\cite{Yanagi_PhysRevB.97.020404,doi:10.7566/JPSJ.88.094704}.
\section{summary}
\label{summary}
To summarize, we have investigated the behavior of nonreciprocal magnons induced by the nearest-neighbor symmetric anisotropic exchange interaction on a honeycomb AFM.
The antisymmetric nature of magnon bands is qualitatively different from that by the DM interaction.
Moreover, we have found that the nonreciprocal magnon excitations exhibit peculiar angle-dependent responses under the external magnetic field.
We have also clarified that the nonreciprocal dispersions and angle-dependent responses are related with the emergence of odd-parity magnetic toroidal multipoles, which are accompanied by the cluster AFM structure.
As the antisymmetric modulation of the magnon bands becomes larger while increasing the bond-dependent symmetric anisotropic exchange interaction, the superexchange paths favoring the anisotropic interactions rather than the Heisenberg interaction, such as the Kitaev interaction~\cite{PhysRevLett.102.017205}, will enhance nonreciprocal physical phenomena.
Our mechanism of nonreciprocal magnons is expected to be observed in various honeycomb AFMs including the transition-metal trichalcogenide $\rm{MnPS_3}$~\cite{PhysRevB.82.100408, Li3738, PhysRevB.91.235425, PhysRevB.96.134425} and the rare-earth metallic compound $\rm{ErNi_3Ga_9}$~\cite{e1ddc90bb22e4448a040d6ce3fdca500}, where the $z$-AFM state becomes the ground state.
In these materials, the nonreciprocal magnon excitations will be observed in both microscopic and macroscopic experiments.
Microscopically, the nonreciprocal magnon spectra can be detected by the inelastic neutron scattering experiment.
On the other hand, from a macroscopic viewpoint, the angle-dependent nonreciprocal magneto-optical and nonreciprocal spin Seebeck effects can be observed under an in-plane magnetic field.
As the nature of the asymmetric deformation of the magnon band is qualitatively different from that by the DM interaction, our mechanism will provide a deep understanding of further nonreciprocal magnon physics.
\begin{acknowledgments}
We would like to thank T. J. Sato for fruitful discussions.
This research was supported by JSPS KAKENHI Grant Numbers JP18H04296 (J-Physics), JP18K13488, JP19K03752, and JP19H01834.
This work was also supported by the Toyota Riken Scholarship.
Parts of the numerical calculations were performed in the supercomputing systems in ISSP, the University of Tokyo.
\end{acknowledgments}
\section{Introduction}
The goal of many observational analyses is to estimate the causal effect on survival of different time-fixed or time-varying treatment strategies, interventions or rules in a study population. These causal effects can be formally defined by a contrast (e.g., difference or ratio) in the distributions of counterfactual outcomes had interventions been implemented to ensure those strategies are followed in that population. Robins (1986) \cite{Robins1986} showed that, under assumptions that allow complex longitudinal data structures such that measured time-varying confounders may themselves be affected by past treatment, the \textsl{g-formula} indexed by a particular treatment strategy identifies the average counterfactual outcome under that strategy. Therefore, estimators of the g-formula and associated contrasts indexed by different strategies may be used to estimate causal effects.
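For orientation (our illustration, not part of the paper's development), the point-treatment special case with a single treatment $A$, baseline covariates $L$ and outcome $Y$ makes these objects concrete, assuming the usual consistency, exchangeability and positivity conditions:

```latex
% Illustrative point-treatment case (our notation).  For the deterministic
% strategy ``set $A=a$'', the g-formula reduces to standardization,
\begin{equation*}
  \mathbb{E}[Y^{a}]
  \;=\; \sum_{l} \mathbb{E}[Y \mid A=a, L=l]\,\Pr(L=l),
\end{equation*}
% while a stochastic strategy that assigns treatment according to an
% intervention distribution $p^{\mathrm{int}}(a\mid l)$ gives the
% generalized form
\begin{equation*}
  \sum_{l}\sum_{a} \mathbb{E}[Y \mid A=a, L=l]\,
  p^{\mathrm{int}}(a\mid l)\,\Pr(L=l).
\end{equation*}
```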
In practice, the g-formula typically depends on high-dimensional nuisance parameters. In this case, many estimators of the g-formula and associated contrasts have been proposed, including the density-based parametric g-formula \cite{Robins1986}, iterated conditional expectation (ICE) estimators \cite{Tran2019,Wen2021}, inverse probability weighted (IPW) estimators \cite{Cain2010, Neugebauer2014}, and estimators derived from the efficient influence function (EIF) \cite{Bang2005, van2011}. EIF based estimators (i.e., estimators constructed to evaluate the EIF from an empirical sample) have several theoretical advantages over the other approaches, including that they may be $\sqrt{n}$-consistent even when the nuisance functions are estimated at slower rates through flexible nonparametric or machine learning methods \cite{Robins2009Q,Robins2016,Chernozhukov2018}.
EIF based estimators may also have a model double-robustness property in that, when nuisance functions are estimated via {parametric} models, these estimators may remain consistent and asymptotically normal if models for only one of two (sets of) nuisance functions are correctly specified, not necessarily both. This model double robustness property always holds for EIF estimators when the g-formula is indexed by a \textsl{deterministic treatment strategy} at most dependent on past treatment and confounders measured in the observational study \cite{Bang2005, Stitelman2012, Rotnitzky2017}. However, the identification results of Robins (1986)\cite{Robins1986} were not limited to such deterministic strategies but generalized to allow identification of \textsl{stochastic} treatment strategies at most dependent on this measured past. The latter identifying functional or \textsl{generalized g-formula} depends on the \textsl{intervention treatment distribution}, that is, the distribution of treatment under an intervention that ensures the strategy of interest is followed conditional only on the measured past in the observational study. The generalized g-formula coincides with the more familiar g-formula indexed by a deterministic strategy when the intervention treatment distribution is chosen as degenerate conditional on any level of the measured past.
Recently, several practical applications have motivated estimation of the generalized g-formula indexed by intervention treatment distributions that depend on the \textsl{observed treatment process}, that is, the observed treatment distribution conditional on the measured past \cite{ Taubman2009, Munoz2012, Haneuse2013, Kennedy2019, Young2019}. The generalized g-formula indexed by an intervention treatment distribution dependent on the observed treatment process has the particular advantage of relying on relatively weak positivity conditions \cite{Haneuse2013,Young2014} even, for example, in observational studies where the propensity score is equal or close to zero for certain measured confounder histories \cite{Kennedy2019}. When the (degenerate or non-degenerate) intervention treatment distribution does \textsl{not} depend on the observed treatment process, EIF derived estimators of the generalized g-formula will be model doubly robust. However, when the intervention treatment distribution \textsl{does} depend on the observed treatment process, such estimators may or may not be doubly robust \cite{Munoz2012, Haneuse2013,Kennedy2019,Diaz2020}.
In this paper, we exploit particular representations of the generalized g-formula to give sufficient conditions for the existence of doubly robust estimators for point treatment interventions when the chosen intervention treatment distribution depends on the observed treatment process with examples from the recent literature. We also provide a general form of EIFs for a class of intervention treatment distributions that may depend on the observed treatment process in longitudinal settings that guarantee model doubly (and multiply) robust estimators. Motivated by observational studies of the effects of realistic HIV pre-exposure prophylaxis (PrEP) initiation interventions, we consider a new class of intervention treatment distributions dependent on the observed treatment process that is a variation on the incremental propensity score interventions proposed by Kennedy (2019)\cite{Kennedy2019}. We show that estimators based on the EIF for our proposed intervention treatment distribution are model doubly/multiply robust, and can attain fast convergence rates even when used in combination with machine learning algorithms, where modelling assumptions are relaxed. We illustrate both EIF-based, as well as simpler singly robust, estimators of the g-formula indexed by this class of intervention treatment distribution in simulated data and in an illustrative data application.
\section{Observed data structure}
\label{sec:observedata}
Consider a longitudinal study with $j=0,1,2,\ldots, J$ denoting a follow-up interval (e.g., week, month) where $J$ is the end of the follow-up of interest. Assume the following random variables are measured in this study on each of $n$ individuals meeting some eligibility criteria at baseline. For each $j=0,1,2,\ldots, J-1$, let $A_j$ denote a binary or discrete treatment variable during interval $j$, $\bL_j$ a vector of additional time-varying covariates measured in interval $j$, and $Y_{j+1}$ an indicator of survival by interval $j+1$.
For notational simplicity, we will assume throughout that all covariates are discrete in that they have distributions that are absolutely continuous with respect to a counting measure but arguments naturally extend to settings with continuous covariates and Lebesgue measures.
By definition, ${Y}_0 = 1$ (all individuals are at risk of failure at baseline) and by convention we define $\bar{\bL}_{-1} = \bar{A}_{-1} = \emptyset$. For a random variable $X$, we let $\bar{X}_j = (X_0,\ldots, X_j)$ denote its history through time $j$. We assume the ordering $O=(\bL_0,A_0, Y_1,\ldots, \bL_{J-1}, A_{J-1}, Y_J)$. Without loss of generality, we assume no individual is lost to follow-up; censoring is considered in Section \ref{sec:censoring}.
\section{Intervention treatment distribution}
\label{sec:assumptionsetc}
Let $g$ denote a treatment rule that specifies how treatment should be assigned at each $j=0,1,2,\ldots, J-1$. Following Richardson and Robins (2013) \cite{Richardson2013}, denote $(\bL^{g}_j,Y^{g}_j)$ and $A_j^{+g}$ as the natural values of covariates and survival status and the intervention value of treatment at $j$ under $g$, respectively. In turn, the distribution of $A^{+g}_j$ evaluated at some treatment level $a_j$ conditional on the ``measured past'' under $g$, $(Y^{g}_{j}=1,\overline{\bL}^{g}_j=\overline{\bl}_j,\overline{A}^{+g}_{j-1}=\overline{a}_{j-1})$, is specified by
$q^{g}(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})\equiv \Pr(A^{+g}_j=a_j\mid Y^{g}_{j}=1,\overline{\bL}^{g}_j=\overline{\bl}_j,\overline{A}^{+g}_{j-1}=\overline{a}_{j-1})$
which we refer to as the \textsl{intervention treatment distribution} at $j$ associated with $g$.
When treatment assignment at any time under a selected rule $g$ deterministically depends on the measured past, there is only one value $a^{+}_j\in \text{supp}(A^{+g}_j)$ given any history $(\bar{\bl}_{j}, \bar{a}_{j-1})\in \text{supp}(\overline{\bL}^{g}_j,\overline{A}^{+g}_{j-1})$ for those with $Y^{g}_j=1$. In this case, $q^{g}(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})=1$ when $a_j=a^{+}_j$ and 0 otherwise. Examples include \textsl{static} deterministic rules that assign the same level of treatment to all surviving individuals at all follow-up times
and \textsl{dynamic} deterministic rules that assign treatment based on the measured past.
By contrast, when a selected rule $g$ assigns treatment stochastically at some $j$ (as a random draw from a distribution), at most dependent on the measured past, then there will be multiple values $a^{+}_j\in \text{supp}(A^{+g}_j)$ such that we may have $0<q^{g}(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})<1$ when $a_j=a^{+}_j$. We focus here on the problem of estimating $\mbox{E}[Y_J^{g}]=\Pr[Y_J^{g}=1]$, the cumulative survival probability by end of follow-up under a choice of $g$, when the intervention treatment distribution associated with $g$ has this non-degenerate property of a stochastic rule, in particular, \textsl{through its dependence on the observed treatment distribution conditional on the measured past} as specified by $
f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})\equiv \Pr(A_j=a_j|Y_{j}=1,\overline{\bL}_j=\overline{\bl}_j,\overline{A}_{j-1}=\overline{a}_{j-1}).$
We refer to $f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$ as the \textsl{observed treatment process} conditional on the measured past. The observed treatment process evaluated at $a_j=1$ coincides with the so-called \textsl{propensity score} \cite{Rosenbaum1983} at $j$ when treatment $A_j$ is binary. Next, we consider several examples of such intervention distributions.
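As a minimal numerical illustration of this object, the observed treatment process at a single time with a discrete measured past can be estimated by empirical conditional frequencies. The following Python sketch uses hypothetical simulated data; all names, numbers, and the function `empirical_propensity` are illustrative choices, not part of the methods above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical point-treatment data: binary confounder L, binary treatment A.
L = rng.binomial(1, 0.4, n)
# True observed treatment process f(1 | L): 0.2 if L = 0, 0.7 if L = 1.
A = rng.binomial(1, np.where(L == 1, 0.7, 0.2))

def empirical_propensity(A, L, l):
    """Empirical estimate of f(1 | L = l) by conditional frequency."""
    return A[L == l].mean()

print(empirical_propensity(A, L, 0), empirical_propensity(A, L, 1))
# close to 0.2 and 0.7, respectively
```

With continuous or high-dimensional pasts, these frequencies would be replaced by a fitted model for the propensity score.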
\section{Examples of intervention treatment distributions that depend on the observed treatment process}
\label{examples}
In this section, we review examples of intervention distributions considered in the literature that have depended on the observed treatment process:
\begin{itemize}
\item \textsl{Dynamic treatment initiation strategies with grace period:}
Motivated by questions about the effects of CD4-based treatment initiation strategies, previous authors have considered strategies of the form ``If a condition for treatment initiation is met by interval $j$ then start treatment by $m+j$ for a selected grace period $m$, with no intervention in intervals $j$ through $j+m-1$. Otherwise, do not start at $j$'', $\forall j$ \cite{Cain2010,Young2011}.
For $A_j$ an indicator of treatment initiation by $j$ and $L_j^\ast\in \bL_j$ an indicator that the condition for initiating treatment has been met by $j$, the intervention treatment distribution at each $j$ is
\[
q^{g}(1\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) = \left.
\begin{cases}
1, & \text{if } l^\ast_{j-m}=1\\
0, & \text{if } l^\ast_{j}=0\\
f(1\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}), & \text{otherwise}
\end{cases}
\right.
\]
or
$q^{g}(a_j\mid 1, \bar{\bl}_j, \bar{a}_{j-1}) = (1-l_j^\ast)(1-a_j) + l_{j-m}^\ast a_j + (1-l_{j-m}^\ast)l_j^\ast f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$.
\item \textsl{Representative interventions:}
Motivated by observational studies to understand the long-term effects of lifestyle interventions (e.g., interventions that increase daily minutes of physical activity), previous authors have considered \textsl{representative interventions} that assign the value of a multi-level treatment to an individual at each $j=0,\ldots,J$ as a random draw from a particular distribution: specifically, the observed distribution of treatment in interval $j$ among those who, in the observational study, (i) had the same measured confounder and treatment history prior to $j$ as that individual and (ii) had treatment at $j$ at or above a cutoff $\delta$ (or more generally, within a pre-specified range), e.g., ``at least 30 minutes of daily physical activity'' \cite{Picciotto2012,Young2019}. In this case,
\[
q^{g}(a_j\mid 1, \bar{\bl}_j, \bar{a}_{j-1}) = f(a_j\mid 1, \bar{\bl}_j, \bar{a}_{j-1},a_j\geq \delta)
\]
This intervention distribution notably only depends on the observed treatment process at $j$ \textsl{among those with treatment in the pre-specified range at $j$}.
\item \textsl{Deterministic interventions that depend on the natural value of treatment:}
Alternative interventions that maintain a multi-level treatment within a pre-specified range have been posed that assign treatment at each $j$ as a function of the natural treatment value at $j$ \cite{Robins2004,Taubman2009}, e.g., ``If the natural value of treatment at $j$ is below $\delta$ then intervene and set treatment at $j$ to $\delta$. Otherwise, do not intervene at $j$''. The resulting intervention distribution at each $j$ (conditional only on the measured past and marginal with respect to the natural value of treatment at $j$) is
\begin{equation*}
q^{g}(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})=F_{A_j}(\delta \mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) I(a_j=\delta) + I(a_j\geq \delta) f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})
\end{equation*}
where $F_{A_j}(\delta \mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})=\sum_{a_j<\delta}f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$.
\item \textsl{Incremental propensity score interventions: }
Kennedy (2019)\cite{Kennedy2019} posed \textsl{incremental propensity score interventions} that at each $j$ assign a binary treatment according to a strategy $g$ that results in an intervention treatment distribution defined by a shifted version (on the odds scale) of the propensity score. Specifically, for a particular $\delta\in (0,\infty)$
\begin{equation}
q^{g}(1\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) =
\dfrac{\delta f(1\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})}{\delta f(1\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) + f(0\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})}
\label{kennedyshift}
\end{equation}
or $q^{g}(a_j\mid 1, \bar{\bl}_j, \bar{a}_{j-1}) = \{{a_j \delta f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) + (1-a_j)f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})}\}\{\delta f(1\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) + f(0\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})\}^{-1}$.
Here we consider a modification of (\ref{kennedyshift}) motivated by the PrEP context. Specifically, for $\delta\in [0,1]$ and $L_j^\ast\in \bL_j$, a measured marker of risk for HIV acquisition (e.g., receiving a bacterial sexually transmitted infections or STI test at $j$ and no prior HIV diagnosis), we consider interventions indexed by the alternative intervention treatment distribution
\begin{equation}
q^{g}(0\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) = \left.
\begin{cases}
f(0\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}), & \text{if } l_j^\ast=0 \\
f(0\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})\delta, & \text{if } l^\ast_{j}=1
\end{cases}
\right.
\label{ourshift}
\end{equation}
or $q^{g}(a_j\mid 1, \bar{\bl}_j, \bar{a}_{j-1}) = (1-\delta) l_j^\ast a_j + (l_j^\ast \delta + 1 - l_j^\ast) f(a_j\mid 1, \bar{\bl}_{j},\bar{a}_{j-1})$ after some algebra.
In words, the probability of initiating treatment conditional on the measured past under $g$ at each $j$ will be larger than the propensity score at $j$ (that under no intervention) by decreasing its complement by a factor of $\delta$ for those with an indication ($L_j^\ast=1$).
Specifically, $\delta$ is the risk ratio for \textsl{not initiating} treatment under $g$ vs. no intervention conditional on the measured past. Choosing $\delta=0$ corresponds to ``always treat" those with $L_j^\ast=1$, and $\delta=1$ to no intervention. We will refer to interventions indexed by either (\ref{kennedyshift}) or (\ref{ourshift}) as \textsl{incremental propensity score interventions}, distinguishing them by the classifier \textsl{odds shift} or \textsl{multiplicative shift}, respectively.
\end{itemize}
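To make the incremental propensity score interventions concrete, the following Python sketch evaluates the odds shift (\ref{kennedyshift}) and the multiplicative shift (\ref{ourshift}) for a binary treatment at a single time. The propensity value and $\delta$ are hypothetical, and the function names are illustrative. The checks confirm that each shift yields a proper distribution and that the multiplicative shift recovers ``no intervention'' at $\delta=1$ and ``always treat'' at $\delta=0$ when $L_j^\ast=1$.

```python
def q_odds_shift(a, f1, delta):
    """Odds-shift intervention distribution (kennedyshift), binary treatment:
    q(1) = delta*f1 / (delta*f1 + (1 - f1))."""
    q1 = delta * f1 / (delta * f1 + (1.0 - f1))
    return q1 if a == 1 else 1.0 - q1

def q_mult_shift(a, f1, lstar, delta):
    """Multiplicative-shift intervention distribution (ourshift):
    scale the probability of NOT initiating by delta when lstar = 1."""
    q0 = (1.0 - f1) * delta if lstar == 1 else 1.0 - f1
    return q0 if a == 0 else 1.0 - q0

f1 = 0.3  # hypothetical propensity score f(1 | measured past)
# Each shift yields a proper probability distribution over a in {0, 1}:
assert abs(q_odds_shift(0, f1, 2.0) + q_odds_shift(1, f1, 2.0) - 1.0) < 1e-12
assert abs(q_mult_shift(0, f1, 1, 0.5) + q_mult_shift(1, f1, 1, 0.5) - 1.0) < 1e-12
# delta = 1 recovers the observed treatment process (no intervention):
assert abs(q_mult_shift(1, f1, 1, 1.0) - f1) < 1e-12
# delta = 0 with an indication (lstar = 1) is "always treat":
assert q_mult_shift(1, f1, 1, 0.0) == 1.0
print("all checks passed")
```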
\section{Identification by the generalized g-formula}
\label{sec:g-formula}
Consider a treatment assignment rule $g$ at most dependent on the measured past. Further, let $\mathcal{D}_g$ denote the set of all \textsl{deterministic} strategies at most dependent on this past that individuals could be observed to follow under the selected rule $g$, with $d$ any element of $\mathcal{D}_g$. In the special case when $g$ is initially selected to be a deterministic rule then the only element of $\mathcal{D}_g$ is $g$. Otherwise, $\mathcal{D}_g$ may contain many elements.
Let $Y_{j}^{d},L_j^{d}$ and $A_j^{+_d}$ denote the natural values of survival status and covariates and the intervention value of treatment at $j$, respectively, under a deterministic $d\in\mathcal{D}_g$ ($j=0,\ldots, J$) and consider the following assumptions:
\begin{enumerate}
\item Exchangeability:
$(Y_{j+1}^{d}, \ldots, Y_{J}^{d})\independent A_j \mid \bar{\bL}_j=\bar{\bl}_j, \bar{A}_{j-1}=\bar{a}_{j-1}^{+}, Y_j=1$.
\item Consistency: If $\bar{A}_{j} = \bar{A}_j^{+_d}$ then $\bar{Y}_{j+1} = \bar{Y}_{j+1}^{d}$ and $\bar{\bL}_{j} = \bar{\bL}_{j}^{d}$
\item Positivity: $f_{\bar{\bL}_j,\bar{A}_{j-1},Y_j} (\bar{\bl}_j, \bar{a}_{j-1}^{+},1) >0 \Longrightarrow f_{A_j\mid Y_j, \bar{\bL}_j, \bar{A}_{j-1}} (a_j^{+} \mid 1, \bar{\bl}_j,\bar{a}_{j-1}^{+})>0$
\end{enumerate}
Robins (1986) \cite{Robins1986} showed that given these exchangeability, consistency and positivity conditions hold for all deterministic $d\in\mathcal{D}_g$ then the following function of only the observed data identifies $\mbox{E}[Y_J^{g}]$:
\begin{align}
\psi^{g}=&\sum_{\forall\bar{a}_{J-1}} \sum_{\forall\bar{\bl}_{J-1}} \Prob(Y_{J}=1\mid {Y}_{J-1} = 1, \bar{\bL}_{J-1}=\bar{\bl}_{J-1}, \bar{A}_{J-1}=\bar{a}_{J-1})
\times \label{eq:gform0} \\[-0.5em]
&\prod_{s=0}^{J-1} \Prob(Y_s=1\mid {Y}_{s-1} =1, \bar{\bL}_{s-1}=\bar{\bl}_{s-1}, \bar{A}_{s-1}=\bar{a}_{s-1})f({\bl}_s\mid {Y}_{s} = 1, \bar{\bl}_{s-1}, \bar{a}_{s-1})q^{g}(a_s\mid 1, \bar{\bl}_{s}, \bar{a}_{s-1}) \nonumber
\end{align}
The function $\psi^{g}$ is referred to as the \textsl{generalized g-formula} indexed by the intervention treatment distribution $q^{g}(a_s\mid 1, \bar{\bl}_{s}, \bar{a}_{s-1})$. Note that, under stronger identifying conditions, the generalized g-formula may identify the outcome mean under a treatment rule $g$ that depends on more than the measured past \cite{Richardson2013,Young2014}. Also see Web Appendix B.
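As a numerical check of (\ref{eq:gform0}) in the point-treatment case ($J=1$), the following Python sketch enumerates the generalized g-formula for a hypothetical discrete law under a multiplicative shift intervention and compares it with a Monte Carlo simulation in which treatment is drawn from the intervention distribution rather than the observed treatment process. All numbers and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discrete point-treatment law (J = 1); all numbers illustrative.
pL1 = 0.4                                                   # Pr(L* = 1)
f1 = {0: 0.2, 1: 0.7}                                       # propensity f(1 | L* = l)
mY = {(0, 0): 0.5, (1, 0): 0.7, (0, 1): 0.3, (1, 1): 0.6}   # E(Y | A = a, L* = l)

delta = 0.5  # multiplicative shift

def q1(l):
    """q^g(1 | L* = l): scale the probability of not initiating by delta if l = 1."""
    return 1.0 - (1.0 - f1[l]) * delta if l == 1 else f1[l]

# Generalized g-formula by direct enumeration (point-treatment case).
psi = sum(
    mY[(a, l)]
    * (q1(l) if a == 1 else 1.0 - q1(l))
    * (pL1 if l == 1 else 1.0 - pL1)
    for a in (0, 1)
    for l in (0, 1)
)

# Monte Carlo under the intervention: draw treatment from q^g instead of f.
n = 200_000
L = rng.binomial(1, pL1, n)
A = rng.binomial(1, np.array([q1(int(l)) for l in L]))
Y = rng.binomial(1, np.array([mY[(int(a), int(l))] for a, l in zip(A, L)]))
print(psi, Y.mean())  # the two agree up to Monte Carlo error (psi = 0.546)
```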
\subsection{Generalized positivity}
Note the assumption that the positivity condition above holds for all deterministic $d\in\mathcal{D}_g$ can be equivalently stated as follows:
\begin{equation}
q^{g}(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) >0 \Longrightarrow f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})>0 \label{genpositivity}
\end{equation}
for all $\overline{\bl}_j,\overline{a}_{j}\in \text{supp}(\overline{\bL}^{g}_j,\overline{A}^{+g}_{j})$.
The positivity condition (\ref{genpositivity}) generalizes the more familiar definition of positivity often relied on in the literature that there may be treated and untreated individuals within any level of the measured past; i.e., the assumption that the propensity score and its complement are positive for all possible measured histories and all $j$. It is straightforward to see that the more general condition \eqref{genpositivity} reduces to this typical definition of positivity only for the special case of a static deterministic intervention $g$ on a binary treatment. By contrast, the more general condition \eqref{genpositivity} only requires that, for any level of the past possible in the observational study and also plausible under $g$, if an intervention level of treatment can occur under $g$ it must also possibly occur in the observational study. Depending on the choice of $g$, this condition may hold when traditional definitions requiring positive propensity scores fail. Intervention treatment distributions that depend on the observed treatment process may help avoid positivity violations by this more general definition and, in some instances, may guarantee that positivity violations cannot occur regardless of the observed treatment process. We discuss this further in the next section.
Similar to arguments given in Kennedy (2019)\cite{Kennedy2019}, the odds shift (\ref{kennedyshift}) has the particular advantage that, by construction, the generalized positivity condition (\ref{genpositivity}) is guaranteed to hold, no matter the nature of the observed treatment process. By contrast, the multiplicative shift (\ref{ourshift}) only enjoys this guarantee for measured pasts consistent with $L_j^\ast=0$. However, compared to (\ref{kennedyshift}) which is indexed by a shift $\delta$ with no upper bound that quantifies an odds ratio, (\ref{ourshift}) may be easier to communicate to subject matter collaborators as it constrains the choice of $\delta\in[0,1]$ and quantifies a risk ratio. Notably, the performance of weighted estimators of $\psi^{g}$ indexed by both (\ref{kennedyshift}) and (\ref{ourshift}) are relatively resilient to so-called ``near positivity violations'' -- such that (\ref{genpositivity}) holds but $f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$ is still close to zero for some $(\bar{\bl}_{j}, \bar{a}_{j-1})$ -- particularly when $\delta$ is chosen to coincide with relatively small increases in treatment uptake under $g$ (see Section \ref{sec:sim}).
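The following sketch illustrates the generalized positivity condition (\ref{genpositivity}) in a stratum with a structural zero in the propensity score; the values are hypothetical. By construction the odds shift places no mass on the unsupported treatment arm, so $q^{g}>0$ implies $f>0$, whereas a static ``always treat'' rule would violate (\ref{genpositivity}) in the same stratum.

```python
def q_odds_shift1(f1, delta):
    """q^g(1 | measured past) under the odds shift (kennedyshift)."""
    return delta * f1 / (delta * f1 + (1.0 - f1))

# A hypothetical stratum in which nobody is ever treated: f(1 | past) = 0.
f1 = 0.0
# The odds shift places no mass on the unsupported arm, so
# q^g > 0 implies f > 0 and (genpositivity) holds by construction.
assert q_odds_shift1(f1, delta=3.0) == 0.0
# A static "always treat" rule sets q^g(1) = 1 > 0 while f(1) = 0: a violation.
q_static = 1.0
assert q_static > 0.0 and f1 == 0.0
print("positivity check illustrated")
```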
\section{Model double robustness when the intervention treatment distribution depends on the observed treatment process}
\label{sec:mainint}
Suppose that the observed data $O$ defined in Section \ref{sec:observedata} follows a law $P$ which is known to belong to $\mathcal{M}=\{P_\theta:\theta\in \Theta\}$ where $\Theta$ is the parameter space. The efficient influence function (EIF) $U_{\psi^g}(O)$ for the causal parameter $\psi^{g}\equiv \psi^g(\theta)$ in a non-parametric model that imposes no restrictions on the law of $O$ is given by
${d\psi^g(\theta_t)}/{dt}\vert_{t=0} = E\{U_{\psi^g}(O)S(O)\}$, where ${d\psi^g(\theta_t)}/{dt}\mid_{t=0}$ is known as the pathwise derivative of the parameter $\psi^g$ along a parametric submodel of the observed data distribution indexed by $t$, and $S(O)$ is the score function of the parametric submodel evaluated at $t=0$ \cite{newey1994,Van2000}.
In this section, we provide results that aid the intuition on the existence of doubly robust estimators of $\psi^{g}$ when the intervention treatment distribution depends on the observed treatment process through understanding properties of the EIF for the parameter $\psi^{g}$.
\subsection{Point treatment}
We begin with the special case of a point treatment where $J=1$ and $O=(\bL_0,A_0,Y_1)\equiv(\bL,A,Y)$. In this case, (\ref{eq:gform0}) reduces to
$\psi^g= \sum_{\forall a} \sum_{\forall\bl}E(Y|A=a,\bL=\bl)q^g(a|\bl)f(\bl)$.
\begin{theorem}
Suppose $\psi^g$ can be written as a linear combination of the form:
\begin{equation}
\psi^g=c_{1}\underbrace{E\{h_{1}(O)\}}_{\nu_{1}} + c_{2} \underbrace{E[E\{h_{2}(O)\mid A=a^\ast,\bL \} ]}_{\nu_{2}}
\label{eq:thm1eq}
\end{equation}
where $a^\ast$, $c_1$ and $c_2$ are constants, and $h_{1}(O)$ and $h_{2}(O)$ are known measurable functions of $O$ (i.e., they do not depend on $\theta$). Then the EIF for $\psi^g$ is given by
\begin{equation}
U_{\psi^g}(O) = c_{1}h_{1}(O) + c_{2} \left[\frac{I(A = a^\ast)}{f(A\mid \bL)} \left[h_{2}(O) - E\{h_{2}(O)\mid A,\bL\}\right] + E\{h_{2}(O)\mid A=a^\ast,\bL\} \right]-\psi^g
\label{eq:thm1eq2}
\end{equation}
\label{theorem:thmone}
\end{theorem}
\vspace{-2em}
See Web Appendix A for proof.
Clearly, $\psi^{g}$ under a static deterministic strategy that sets treatment to level $a^\ast$ for all individuals trivially meets the conditions of Theorem 1 by selecting $h_1(O)=0$, $c_{2}=1$, $h_{2}(O)=Y$. In this case, the EIF for the g-formula indexed by $g$ or $E\{E(Y\mid A=a^\ast,\bL)\}$ equals:
\begin{equation}
U_{\psi^g}(O)=\frac{I(A = a^\ast)}{f(A\mid \bL)} \left\{Y - m(A,\bL)\right\} + m(a^\ast,\bL) - \psi^{a^\ast}
\label{eq:eifstatic}
\end{equation}
where $m(A,\bL) \equiv E(Y\mid A,\bL)$ and $m(a^\ast,\bL) \equiv E(Y\mid A=a^\ast,\bL)$\cite{Bang2005,Tsiatis2006,van2011}.
A heuristic justification for Theorem \ref{theorem:thmone} follows from the fact that the EIF of ${\nu_{1}}\equiv \nu_{1}(\theta)$ is simply $h_{1}(O)$, and the EIF of ${\nu_{2}}\equiv \nu_{2}(\theta)$ can be realized by replacing $Y$ with $h_{2}(O)$ in Expression (\ref{eq:eifstatic}), \textit{as the function $h_{2}(\cdot)$ does not depend on $\theta$ and therefore its pathwise derivative is zero}.
Furthermore, it is established that an estimator derived from the influence function (\ref{eq:eifstatic}) (e.g., an estimator solving $\sum_{i=1}^{n}U_{\psi^g}(O_i)=0$ for $\psi^g$) is model doubly robust in that it remains consistent if estimated under correctly specified parametric models for either one of two (sets of) nuisance functions, specifically $E(Y\mid A,\bL)$ or $f(A\mid \bL)$.
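This double robustness can be checked numerically. The following Python sketch simulates a hypothetical point-treatment law with continuous outcome, then evaluates the EIF-based (AIPW) estimator twice: once with the correct propensity score but a grossly misspecified outcome model, and once with the correct outcome model but a grossly misspecified propensity. All functional forms, seeds, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical point-treatment simulation; all functional forms illustrative.
L = rng.uniform(-1.0, 1.0, n)
f_true = 1.0 / (1.0 + np.exp(-L))       # true propensity score f(1 | L)
A = rng.binomial(1, f_true)

def m(a, l):                            # true outcome regression E(Y | A = a, L = l)
    return 0.2 + 0.5 * a * l ** 2

Y = m(A, L) + rng.normal(0.0, 0.3, n)   # continuous outcome

psi_true = 0.2 + 0.5 / 3.0              # E{m(1, L)} for L ~ Uniform(-1, 1)

def aipw(f_hat, m_hat_A, m_hat_1):
    """EIF-based (AIPW) estimator of E{E(Y | A = 1, L)}."""
    return np.mean(A / f_hat * (Y - m_hat_A) + m_hat_1)

# (i) correct propensity, grossly misspecified outcome model (m_hat = 0):
est_f_ok = aipw(f_true, 0.0, 0.0)
# (ii) correct outcome model, grossly misspecified propensity (f_hat = 0.5):
est_m_ok = aipw(0.5, m(A, L), m(1, L))
print(est_f_ok, est_m_ok)  # both approximate psi_true
```

In either scenario the estimate approximates $\psi^{a^\ast}$, while a plug-in of only the misspecified nuisance would be biased.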
The following Corollary gives a sufficient condition for the existence of doubly robust estimators of $\psi^g$ when the intervention treatment distribution depends on the observed treatment process, provided the conditions of Theorem 1 hold.
\begin{corollary}
Suppose the conditions of Theorem \ref{theorem:thmone} hold. If {$h_{2}(O) = Y \tilde{h}_{2}(A,\bL)$}, where $\tilde{h}_2(A,\bL)$ is a known measurable function of $(A,\bL)$, then an estimator of $\psi^g$ derived from an EIF of the form (\ref{eq:thm1eq2}) is model doubly robust.
\label{corollary:corone}
\end{corollary}
A proof of Corollary \ref{corollary:corone} is given in Web Appendix A.
A similar heuristic reasoning for Corollary \ref{corollary:corone} is that estimation of $\nu_{1}$, a marginal mean, does not rely on any models, and doubly robust estimators exist for $\nu_{2}$ because we have simply replaced $Y$ with $h_{2}(O)$, which does not depend on $\theta$, in Equation (\ref{eq:eifstatic}).
We now consider some applications of Theorem 1 and Corollary 1.1 to examples where the intervention distribution indexing $\psi^g$ depends on the observed treatment process.
\setcounter{example}{0}
\begin{example}
Consider a variation of the grace period treatment initiation strategies defined in Section \ref{examples} for $J=1,~m=0$, such that, rather than withholding treatment when $L^\ast=0$, no intervention is made. The intervention treatment distribution is then given by $q^g(a\mid \bl)=(1-l^\ast)f(a\mid \bl) + l^\ast a$.
\end{example}
For this choice of intervention distribution we have:
\begin{align*}
\psi^g &= E_{\bL}\left\{\sum\nolimits_{a=0}^1 E(Y\mid a,\bL)q^g(a\mid \bL)\right\}
\\& = E_{\bL}\left[\sum\nolimits_{a=0}^1 E(Y\mid a,\bL)\left\{(1-L^\ast)f(a\mid \bL) + I_{\{1\}}(a)L^\ast \right\}\right] \nonumber
\\& = {E_{\bL,A}\left[E\left\{Y(1-L^\ast)\mid A,\bL \right\}\right]} + {E_{\bL}\left[E\left\{YL^\ast\mid A=1,\bL \right\}\right]}
\end{align*}
Selecting $a^\ast=1$, $c_{1}=c_{2}=1$, $h_{1}(O)=Y(1-L^\ast)$, and $h_{2}(O)=YL^\ast$, we have
\begin{align*}
\psi^g &= c_{1}{E\{h_{1}(O)\}} + c_{2} {E[E\{h_{2}(O)\mid A=a^\ast,\bL \} ]}
\nonumber
\\& = \underbrace{E\{Y(1-L^\ast)\}}_{\nu_{1}} + \underbrace{E[E\{YL^\ast\mid A=1,\bL\}]}_{\nu_{2}}
\end{align*}
by Theorem \ref{theorem:thmone} and further, the EIF for $\psi^g$ is given by
\begin{align*}
U_{\psi^g}(O) = Y(1-L^\ast) + \frac{AL^\ast}{f(A\mid \bL)} \left\{Y - m(A,\bL)\right\} + m(1,\bL)L^\ast- \psi^g.
\end{align*}
This can be re-expressed as
\[
U_{\psi^g}(O) = \frac{q^{g}(A\mid \bL)}{f(A\mid \bL)} \left\{Y - m(A,\bL)\right\} + m(A,\bL)(1-L^\ast) + m(1,\bL)L^\ast - \psi^g
\]
which is a useful representation for deriving doubly robust estimators.
By Corollary \ref{corollary:corone}, estimators based on the EIF for $\psi^g$ in this case will be doubly robust because $\nu_{1}$ can be estimated non-parametrically, and EIF-based estimators for $\nu_{2}$ are doubly robust. In particular, these will be consistent if either $f(A\mid \bL)$ or $m(A,\bL)$ is consistently estimated, not necessarily both.
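Under a hypothetical law with a single binary confounder, the following Python sketch evaluates the EIF-based estimator implied by the representation above and compares it with the closed-form value of $\psi^g$. True nuisance values are plugged in purely for illustration; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical law with a single binary confounder L = L*; numbers illustrative.
Ls = rng.binomial(1, 0.5, n)
f1 = 0.3 + 0.4 * Ls                 # propensity score f(1 | L*)
A = rng.binomial(1, f1)

def m(a, ls):                       # true outcome regression E(Y | A = a, L* = ls)
    return 0.2 + 0.3 * a + 0.2 * ls

Y = rng.binomial(1, m(A, Ls))

# Closed-form psi^g: no intervention if L* = 0, treat if L* = 1.
psi_true = 0.5 * (0.7 * m(0, 0) + 0.3 * m(1, 0)) + 0.5 * m(1, 1)  # = 0.495

# EIF-based estimator (true nuisances plugged in purely for illustration).
est = np.mean(Y * (1 - Ls) + A * Ls / f1 * (Y - m(A, Ls)) + m(1, Ls) * Ls)
print(est)  # approximately 0.495
```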
\begin{example}
Representative interventions for $J=1$. The intervention treatment distribution is given by $q^\text{g}(a\mid {\bl})=f(a\mid {\bl}, R=1)$ where $R=I(A\geq \delta)$.
\end{example}
In this case we have
\begin{align*}
\psi^g &= E_{{\bL}}\left\{\sum\nolimits_{a} E(Y\mid a,{\bL})q^g(a\mid {\bL})\right\}
= E_{{\bL}}\left[E_A\left\{ E(Y\mid A, {\bL}, R=1)\mid {\bL}, R=1 \right\}\right]
= E_{{\bL}}\left\{ E(Y\mid {{\bL}}, R=1)\right\}
\end{align*}
Following previous results \cite{Young2019}, we can see that, for this choice of intervention treatment distribution and $J=1$, $\psi^{g}$ is only a function of $R$, a coarsening of $A$, and takes the same form as the g-formula indexed by the static deterministic strategy that sets treatment to 1 but with $R$ playing the role of treatment.
We can apply Theorem \ref{theorem:thmone} and Corollary \ref{corollary:corone} replacing the treatment $A$ with $R$ and $a^\ast$ with $r^\ast$. Specifically, selecting $r^\ast=1$, $c_{2}=1$, $h_{1}(O)=0$, $h_{2}(O)=Y$, by Theorem \ref{theorem:thmone} we have
\begin{align*}
\psi^g &= c_{2} {E[E\{h_{2}(O)\mid R=r^\ast,\bL \} ]} = \underbrace{E\left\{E(Y \mid R=1,\bL) \right\}}_{\nu_{2}}
\end{align*}
and the EIF for $\psi^g$ is
\begin{equation*}
U_{\psi^g}(O)=\frac{I(R = 1)}{f(R\mid \bL)} \left\{Y - m(R,\bL)\right\} + m(1,\bL) - \psi^g
\end{equation*}
By Corollary \ref{corollary:corone}, estimators based on the EIF will be model doubly robust, i.e., consistent if models for either $f(R\mid \bL)$ \textit{or} $E(Y\mid R,\bL)$ are correctly specified.
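For this example, the following sketch simulates a hypothetical three-level treatment, forms the coarsened variable $R=I(A\geq\delta)$ with $\delta=1$, and evaluates the EIF-based estimator with $R$ playing the role of treatment. Nuisance values are computed from the known simulated law purely for illustration; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical three-level treatment A in {0, 1, 2} with cutoff delta = 1.
L = rng.binomial(1, 0.5, n)
pA = np.where(L[:, None] == 1, [0.2, 0.5, 0.3], [0.5, 0.3, 0.2])  # f(a | L)
U = rng.random(n)
A = (U > pA[:, 0]).astype(int) + (U > pA[:, 0] + pA[:, 1]).astype(int)
R = (A >= 1).astype(int)                      # coarsened treatment R = I(A >= delta)
Y = rng.binomial(1, 0.1 + 0.2 * A + 0.1 * L)  # E(Y | A, L) = 0.1 + 0.2A + 0.1L

# True nuisances under this law, plugged in purely for illustration.
pR1 = np.where(L == 1, 0.8, 0.5)              # Pr(R = 1 | L)
mR1 = np.where(L == 1, 0.475, 0.38)           # E(Y | R = 1, L)
mR = np.where(R == 1, mR1, 0.1 + 0.1 * L)     # E(Y | R, L)

# EIF-based estimator with R playing the role of treatment.
est = np.mean(R / pR1 * (Y - mR) + mR1)
psi_true = 0.5 * (0.38 + 0.475)               # = 0.4275 under this law
print(est)  # approximately 0.4275
```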
\begin{example}
Multiplicative shift incremental propensity score interventions for $J=1$.
The intervention treatment distribution is given by $q^{g}(a\mid \bl)=(1-\delta)a l^\ast + f(a\mid \bl)(l^\ast \delta + 1 - l^\ast)$. Note the intervention distribution in Example 1 is a special case with $\delta=0$.
\end{example}
In this case, for a choice of $\delta\in[0,1]$ we have
\begin{align*}
\psi^g(\delta) &= E_{\bL}\left\{\sum\nolimits_{a=0}^1 E(Y\mid a,\bL)q^g(a\mid \bL)\right\}
\\&=
E_{\bL}\left[\sum\nolimits_{a=0}^1 E(Y\mid a,\bL)\left\{f(a\mid \bL)(L^\ast\delta+1-L^\ast)+L^\ast a(1-\delta)\right\}\right]\nonumber
\\& = E_{\bL,A}\left[E\left\{Y(L^\ast\delta+1-L^\ast)\mid A,\bL\right\}\right] + E_{\bL}\left[E\{YL^\ast(1-\delta)\mid A=1,\bL\} \right] \nonumber
\end{align*}
\sloppy
Selecting $a^\ast=1$, $c_{1}=1$, $c_{2}=(1-\delta)$, $h_{1}(O)=Y(L^\ast\delta+1-L^\ast)$, $h_{2}(O)=YL^\ast$, we have
\begin{align*}
\psi^g(\delta) &= c_{1}{E\{h_{1}(O)\}} + c_{2} {E[E\{h_{2}(O)\mid A=a^\ast,\bL \} ]}
\nonumber
\\& = \underbrace{E\{Y(L^\ast\delta+1-L^\ast)\}}_{\nu_{1}} + (1-\delta)\underbrace{E[E\{YL^\ast\mid A=1,\bL\}]}_{\nu_{2}}
\end{align*}
by Theorem \ref{theorem:thmone} and the EIF for $\psi^g(\delta)$ is given by
\begin{align*}
U_{\psi^g(\delta)}(O) = Y(L^\ast\delta+1-L^\ast) + (1-\delta)\left[\frac{L^\ast A}{f(A\mid \bL)} \left\{Y - m(A,\bL)\right\} + m(1,\bL)L^\ast\right] - \psi^g(\delta).
\end{align*}
This can be re-expressed as
\begin{align*}
U_{\psi^g(\delta)}(O) = \frac{q^g(A\mid \bL)}{f(A\mid \bL)} \left\{Y - m(A,\bL)\right\} + m(A,\bL) (L^\ast\delta+1-L^\ast) + m(1,\bL)L^\ast(1-\delta) - \psi^g(\delta)
\end{align*}
which is useful for deriving doubly robust estimators.
By Corollary \ref{corollary:corone}, the estimators based on the EIF for $\psi^g(\delta)$ will be model doubly robust.
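The following sketch evaluates the EIF-based estimator of $\psi^g(\delta)$ under the same hypothetical law as in Example 1 and verifies two limiting cases: at $\delta=1$ it collapses exactly to the sample mean of $Y$ (no intervention), and at $\delta=0$ it reduces to the Example 1 estimator. True nuisances are plugged in purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Same hypothetical law as used for Example 1; numbers illustrative.
Ls = rng.binomial(1, 0.5, n)
f1 = 0.3 + 0.4 * Ls
A = rng.binomial(1, f1)

def m(a, ls):
    return 0.2 + 0.3 * a + 0.2 * ls

Y = rng.binomial(1, m(A, Ls))

def est(delta):
    """EIF-based estimator of psi^g(delta); true nuisances plugged in."""
    return np.mean(
        Y * (Ls * delta + 1 - Ls)
        + (1 - delta) * (A * Ls / f1 * (Y - m(A, Ls)) + m(1, Ls) * Ls)
    )

# delta = 1 is "no intervention": the estimator collapses to the sample mean.
assert abs(est(1.0) - Y.mean()) < 1e-12
# delta = 0 is "always treat those with L* = 1" (Example 1): approx. 0.495 here.
print(est(0.0))
```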
As Kennedy (2019) \cite{Kennedy2019} noted, the EIF for $\psi^g$ indexed by an odds shift (\ref{kennedyshift}) is not model doubly robust and, therefore, does not meet the conditions of Corollary \ref{corollary:corone}.
Note that the conditions of Corollary \ref{corollary:corone} are sufficient for model double robustness of the EIF but are not necessary conditions. In Web Appendix B, we consider model double robustness and the EIF for deterministic strategies that depend on the natural value of treatment discussed in Section \ref{examples} \cite{Munoz2012,Haneuse2013,Young2014,Diaz2020}.
These are examples of $\psi^g$ indexed by intervention treatment distributions that do not meet the conditions of Corollary \ref{corollary:corone} yet model doubly robust estimators still exist.
\subsection{Time-varying treatments}
\label{sec:arg_long}
Recently, Molina (2017)\cite{Molina2017} showed that, in time-varying treatment settings, estimators derived from the EIF for a $\psi^{g}$ indexed by any intervention treatment distribution that does not depend on the observed treatment process \cite{Robins2000p,Bang2005,van2011} confer more protection against model misspecification than model double robustness. Rather, they showed that these estimators are $J+1$ model multiply robust, which implies model double robustness.
The following Theorem gives a sufficient condition for the existence of $J+1$ model multiply robust estimators of $\psi^{g}$ when the intervention treatment distribution may depend on the observed treatment process, and a simple approach to deriving the EIFs for a particular class of such intervention treatment distributions.
\begin{theorem}
\label{theorem: thmlong}
Suppose an intervention treatment distribution can be written as the following:
\begin{align}
q^{g}(a_j\mid 1, \bar{\bl}_{j},\bar{a}_{j-1})
=& c_1 h_1(\bar{\bl}_{j},\bar{a}_{j-1})I(a_j=a_j^\ast) + c_2 h_2(\bar{\bl}_{j},\bar{a}_{j-1})f(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1}) + \label{eq:longtmt}
\\& c_{3} h_{3}(\bar{\bl}_{j},\bar{a}_{j-1})p^*(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})\nonumber
\end{align}
where $a_j^\ast$,
$c_1,~c_2$ and $c_{3}$ are constants; $h_1(\bar{\bL}_{j},\bar{A}_{j-1})$, $h_2(\bar{\bL}_{j},\bar{A}_{j-1})$ and $h_{3}(\bar{\bL}_{j},\bar{A}_{j-1})$ are known measurable functions of $(\bar{\bL}_{j},\bar{A}_{j-1})$; and $p^*(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$ is a non-degenerate known probability distribution for $A_j$. Then the EIF for $\psi^{g}$ indexed by this intervention treatment distribution is
\begin{equation}
\label{eq:genEIF}
\begin{aligned}
U_{\psi^g}(O) =
\sum_{j=1}^{J}({T_j}-Q_{j-1})\prod_{k=0}^{j-1}\frac{q^{g}(A_k\mid Y_k=1, \bar{\bL}_k, \bar{A}_{k-1})}{f(A_k\mid Y_k=1, \bar{\bL}_k, \bar{A}_{k-1})} +
T_0 -\psi^g
\end{aligned}
\end{equation}
where $Q_j\equiv Q_j(\bar{\bL}_j, \bar{A}_j, \bar{Y}_j)$ and $T_j\equiv T_j(\bar{\bL}_j, \bar{A}_j, \bar{Y}_j)$ are defined recursively for $j=J-1,\ldots,0$, with $T_J \equiv Y_J$, by $Q_j\equiv E(T_{j+1}\mid \bar{\bL}_j, \bar{A}_j, \bar{Y}_j)$ and
\begin{align*}
T_j = &c_1Q_j^{A_j=a_j^\ast}h_{1}(\bar{\bL}_{j},\bar{A}_{j-1}) + c_2Q_jh_{2}(\bar{\bL}_{j},\bar{A}_{j-1}) +
c_{3}\left\{\sum_{a_j}p^*(a_j\mid 1, \bar{\bL}_{j}, \bar{A}_{j-1}) Q_j^{A_j=a_j}\right\}h_{3}(\bar{\bL}_{j},\bar{A}_{j-1})
\end{align*}
with $Q_j^{A_j=a^\ast_j}\equiv Q_j(\bar{\bL}_j, A_j=a_j^\ast,\bar{A}_{j-1}, \bar{Y}_j)$. Estimators based on this EIF are $J+1$ model multiply robust in that, for each $k=0,\ldots,J$, they are consistent if the models for $Q_j$ are correctly specified for $j=k,\ldots, J-1$ and the observed treatment models are correctly specified for $j=0, \ldots,k-1$, where an index range of the form $j=s,\ldots,s-1$ is understood to be empty.
\end{theorem}
Theorem \ref{theorem: thmlong} makes the derivation of the EIF and the corresponding estimators far more straightforward and accessible when intervention distributions are of the form given by (\ref{eq:longtmt}).
In Web Appendix D we prove that Expression (\ref{eq:genEIF}) is the EIF, under a nonparametric model that imposes no restriction on the observed data law, for $\psi^g$ indexed by (\ref{eq:longtmt}). In Web Appendix E we prove that estimators based on this EIF are $J+1$ model multiply robust. Note that, by the monotonicity of the survival indicators, we have $Y_{j+1}=Y_{j+1}Y_j$. This implies that $Q_j = Y_j Q_j = Y_j Q_j(\bar{\bL}_j, \bar{A}_j, {Y}_j=1)$, where $Q_j(\bar{\bL}_j, \bar{A}_j, {Y}_j=1) = E(T_{j+1}\mid \bar{\bL}_j, \bar{A}_j, {Y}_j=1)$.
We now illustrate Theorem 2 in an example where the intervention treatment distribution depends on the observed treatment process.
\begin{example}
Consider the multiplicative shift incremental propensity score interventions from Section \ref{examples}, recalling the intervention distribution is $q^{g}(a_j\mid 1, \bar{\bl}_j, \bar{a}_{j-1}) = (1-\delta) l_j^\ast a_j + (l_j^\ast \delta + 1 - l_j^\ast) f(a_j\mid 1, \bar{\bl}_{j},\bar{a}_{j-1})$.
\end{example}
This intervention distribution can be written in the form of Equation (\ref{eq:longtmt}) by selecting $a_j^\ast=1$, $c_1=1-\delta,~c_2=1,~h_1(\bar{\bl}_{j},\bar{a}_{j-1})=l_j^\ast,~h_2(\bar{\bl}_{j},\bar{a}_{j-1})=l_j^\ast \delta+1-l_j^\ast,~h_3(\bar{\bl}_{j},\bar{a}_{j-1})=0$.
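To make the mapping concrete, the following snippet (an illustration we add, with an arbitrary propensity value as an assumption) checks that the multiplicative shift density is a proper probability distribution over binary $a_j$ and that it coincides with the general form (\ref{eq:longtmt}) under the stated choices of $a_j^\ast$, $c_1$, $c_2$, $h_1$, $h_2$ and $h_3$.

```python
delta = 0.4      # shift parameter (toy value)
f1 = 0.15        # assumed observed propensity f(A_j = 1 | past)

def f(a):        # observed treatment density for binary a
    return f1 if a == 1 else 1 - f1

def q_shift(a, lstar):
    """Multiplicative shift intervention density, direct form."""
    return (1 - delta) * lstar * a + (lstar * delta + 1 - lstar) * f(a)

def q_general(a, lstar):
    """General form (eq:longtmt): a* = 1, c1 = 1 - delta, h1 = l*,
    c2 = 1, h2 = l* delta + 1 - l*, h3 = 0."""
    c1, h1 = 1 - delta, lstar
    c2, h2 = 1.0, lstar * delta + 1 - lstar
    return c1 * h1 * (a == 1) + c2 * h2 * f(a)

for lstar in (0, 1):
    # proper probability distribution over binary a_j
    assert abs(q_shift(0, lstar) + q_shift(1, lstar) - 1) < 1e-12
    # matches the general form under the stated choices
    assert all(abs(q_shift(a, lstar) - q_general(a, lstar)) < 1e-12 for a in (0, 1))
```

With $l_j^\ast=0$ the density reduces to the observed $f$, i.e., no intervention is made on that subject at time $j$.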
By Theorem \ref{theorem: thmlong}, the EIF for this intervention distribution is then given by:
\setlength{\abovedisplayskip}{1pt}
\begin{equation}
\label{eq:genMIEIF}
\begin{aligned}
U_{\psi^g(\delta)}(O) = &(Y_J - Q_{J-1}) \prod_{j=0}^{J-1}\frac{q^{g}(A_j\mid Y_j=1, \bar{\bL}_j, \bar{A}_{j-1})}{f(A_j\mid Y_j=1, \bar{\bL}_j, \bar{A}_{j-1})} +
\\&\sum_{j=1}^{J-1}\Big\{\underbrace{ (1-\delta) Q_j^{A_j=1}L_j^\ast + Q_j(L_j^\ast\delta+1-L_j^\ast) }_{T_j}-Q_{j-1}\Big\}\prod_{k=0}^{j-1}\frac{q^{g}(A_k\mid Y_k=1, \bar{\bL}_k, \bar{A}_{k-1})}{f(A_k\mid Y_k=1, \bar{\bL}_k, \bar{A}_{k-1})} +
\\&\underbrace{(1-\delta)Q_0^{A_0=1}L_0^\ast + Q_0(L_0^\ast\delta + 1-L_0^\ast)}_{T_0} -\psi^g
\end{aligned}
\end{equation}
It is also straightforward to see that any $g$ corresponding to a deterministic static treatment rule meets the conditions of Theorem 2 by selecting $h_2(\bar{\bl}_{j},\bar{a}_{j-1})=h_3(\bar{\bl}_{j},\bar{a}_{j-1})=0$, $h_1(\bar{\bl}_{j},\bar{a}_{j-1})=1$ and $c_1=1$.
In Web Appendix E, we further illustrate the application of Theorem \ref{theorem: thmlong} to deterministic dynamic treatment rules, as well as other examples of intervention distributions that depend on the observed treatment process for the time-varying case including representative interventions and dynamic treatment initiation strategies with a grace period.
Note that, in these examples and the incremental propensity score intervention example above, the conditions of Theorem 2 are met by selecting $h_3(\bar{\bl}_{j},\bar{a}_{j-1})=0$, so that $p^*(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$ need not be specified. More generally, applying Theorem 2 may require specification of $p^*(a_j\mid 1, \bar{\bl}_{j}, \bar{a}_{j-1})$. For example, this is the case for an alternative grace period strategy in which initiation within the grace period is assigned such that there is a uniform probability of initiating at each $j$ \cite{Cain2010}.
Note that the EIFs given in Diaz et al. (2020) \cite{Diaz2020} cover a comprehensive class of interventions for which model doubly robust estimators are also guaranteed, including interventions that depend on the natural value of treatment \cite{Munoz2012,Haneuse2013}, also known as modified treatment policies.
Following the results of Theorem 2 in Diaz et al. (2020), the EIF of the modified treatment policy implied by our proposed intervention necessarily involves randomizer terms, but their derivation of the corresponding EIF assumes that the distributions of the randomizers are not known, when they certainly will be.
Our Theorem 2 provides the EIF for the g-formula indexed by a class of stochastic interventions that may depend on the observed treatment process.
It can be shown that nearly all functionals in this class are captured by the g-formula functionals for which Diaz et al. (2020) provides the EIF. Diaz et al.'s results would capture all of the functionals in this class, including those indexed by our proposed multiplicative incremental propensity score interventions, provided they projected their EIF onto a tangent space corresponding to smaller models whereby the distributions of some conceptualized randomizers are known.
We do not take this approach; instead, we allow one to derive the EIF directly from a stochastic treatment distribution, without requiring one to first define an implied modified treatment policy.\footnote{There are multiple ways to define a modified treatment policy whose identifying formula equals the g-formula for our multiplicative shift intervention. It will involve randomizer terms that can potentially also depend on $L_j^*$.} This alternative derivation of the EIF may be more intuitive for treatment distributions that do not depend on the natural value of treatment.
Finally, in Web Appendix C we use a similar line of reasoning to Theorem \ref{theorem:thmone} and Corollary \ref{corollary:corone} to derive the EIF for $\psi^g$ and to assess the existence of doubly robust estimators for $\psi^{g}$ indexed by an intervention distribution that depends on the observed treatment process when $J=2$ with examples. However, this approach to deriving the EIF is cumbersome for large $J$, providing no simplification over Theorem \ref{theorem: thmlong}.
\section{Estimators of $\psi^g$ indexed by multiplicative shift incremental propensity score interventions}
\label{sec:proposedint}
In this section, we consider various estimators of $\psi^g$ under the multiplicative shift incremental propensity score interventions defined by (\ref{ourshift}).
\subsection{EIF-based estimators}
Several EIF-based estimators of $\psi^{g}$ have been proposed for deterministic treatment interventions, including the estimator of Bang and Robins (2005) \cite{Bang2005,Scharfstein1999,Robins2000}, the weighted ICE estimator \cite{Robins2007, Rotnitzky2017} and the targeted maximum likelihood estimator (TMLE) \cite{van2011,Petersen2014,Lendle2017}.
Weighted ICE and TMLE are variations of the Bang and Robins (2005) \cite{Bang2005} estimator. Compared with Bang and Robins (2005) \cite{Bang2005}, weighted ICE can give better performance \cite{Tran2019}. Unlike both Bang and Robins (2005) \cite{Bang2005} and weighted ICE, TMLE can incorporate machine learning algorithms \cite{Tran2019}. In the absence of machine learning algorithms, weighted ICE and TMLE perform similarly \cite{Tran2019}, but weighted ICE is easier to implement.
In this section we will consider two estimators: 1) a weighted ICE estimator that uses parametric models to estimate the nuisance functions, thereby allowing for $J+1$ model multiple robustness; and 2) a TMLE that also uses sample-splitting and cross-fitting \cite{Van2000,Zheng2010,Chernozhukov2018} to allow one to incorporate machine learning algorithms to estimate the nuisance functions.
\subsubsection{Weighted ICE estimator}
\label{sec:wice}
Let $\pi_j\equiv f(A_j\mid Y_j=1, \bar{A}_{j-1}, \bar{\bL}_j)$ and let $\pi_j(\alpha_j) = f(A_j\mid Y_j=1, \bar{A}_{j-1}, \bar{\bL}_j; \alpha_j)$ be a working parametric model for $\pi_j$ with $\alpha = (\alpha_0,\ldots,\alpha_{J-1})$.
Denote estimates $\hat{\pi}_j\equiv \pi_j(\hat{\alpha}_j)$ of $\pi_j$ with $\hat{\alpha}_j$ the maximum likelihood estimate (MLE) of ${\alpha}_j$ computed from the observed data.
Subsequently, let $\hat{q}^{g}_j \equiv q^{g}_j(\hat{\pi}_j)$ be an estimate of $q^{g}(A_j\mid Y_j=1, \bar{A}_{j-1}, \bar{\bL}_j)$ as defined in (\ref{ourshift}) for a choice of $\delta\in[0,1]$, replacing the observed treatment process with the estimate $\hat{\pi}_j$.
Let $\hat{Q}_j$ be a working parametric model for $Q_{j}$ defined in Theorem 2. In the following algorithm, each $\hat{T}_j$ is calculated by replacing $Q_{j}$ in formula (\ref{eq:genMIEIF}) with the estimate $\hat{Q}_j$. The weighted ICE algorithm is specifically implemented as follows:
\begin{algorithm}[H]
\renewcommand{\theenumi}{\Alph{enumi}}
\caption{Algorithm for Weighted ICE}
\begin{algorithmic} [1]
\item Compute the MLEs $\hat{\alpha}$ of ${\alpha}$ from the observed data. Set $\hat{T}_{J} = Y_J$. \\
Recursively from $j=J-1,\ldots,0$:
\begin{enumerate}
\item Fit a logistic regression model $Q_j(\bar{\bL}_{j},\bar{A}_{j}, Y_j=1;\theta_j)= \expit\{\theta_j^T\phi(\bar{\bL}_{j},\bar{A}_{j})\}$ for $E(\hat{T}_{j+1} \mid \bar{\bL}_{j},\bar{A}_{j}, Y_j=1)$ with observational weight $\prod_{k=0}^{j}({\hat{q}^{g}_k}/{\hat{\pi}_k})$ in those who survive by time $j$.
Here, $\phi(\bar{\bL}_j,\bar{A}_j)$ is a known function of $\bar{\bL}_{j}$ and $\bar{A}_{j}$.
More specifically, we solve for $\theta_j$ in the following estimating equation:
\begin{equation*}
\mathbb{P}_n \left[ Y_{j}\prod_{k=0}^{j}\frac{\hat{q}^{g}_k}{\hat{\pi}_k} \phi(\bar{\bL}_{j},\bar{A}_{j})\left\{\hat{T}_{j+1} - Q_{j}(\bar{\bL}_{j},\bar{A}_{j}, Y_j=1;{\theta}_{j})\right\}\right] = 0
\end{equation*}
\item Compute $\hat{T}_{j}$ from $\hat{Q}_{j}\equiv Q_{j}(\bar{\bL}_{j},\bar{A}_{j}, \bar{Y}_j;\hat{\theta}_{j})$ ensuring $\hat{T}_{j}=0$ when $Y_j=0$.
\end{enumerate}
\item Estimate $\hat{\psi}^g(\delta)_{WICE}=\mathbb{P}_n(\hat{T}_0)$
\end{algorithmic}
\end{algorithm}
\noindent where $\mathbb{P}_n\{f(X)\} =n^{-1}\sum_{i=1}^n f(X_i)$. Following arguments in Section \ref{sec:arg_long}, this estimator is $J+1$ model multiply robust.
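As a concrete illustration, the sketch below implements the weighted ICE recursion for a two-period toy setting ($J=2$, binary covariates, no censoring, $L_j^\ast = L_j$), replacing the parametric working models with saturated stratum means; the data-generating law is our own arbitrary choice, not the paper's. With $\delta=1$ every weight equals one and the recursion collapses to the empirical survival proportion, which gives a quick correctness check.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# toy observed data: (L0, A0, Y1, L1, A1, Y2), all binary, no censoring
L0 = rng.integers(0, 2, n)
A0 = rng.binomial(1, 0.3 + 0.3 * L0)
Y1 = rng.binomial(1, 0.7 + 0.2 * A0)
L1 = rng.integers(0, 2, n) * Y1
A1 = rng.binomial(1, 0.3 + 0.3 * L1) * Y1
Y2 = rng.binomial(1, 0.7 + 0.2 * A1) * Y1

def wmean_by_stratum(keys, values, weights):
    """Weighted mean of values within each stratum of the rows of keys."""
    out = {}
    for k in {tuple(row) for row in keys}:
        mask = np.all(keys == np.array(k), axis=1)
        out[k] = np.average(values[mask], weights=weights[mask])
    return out

def lookup(table, keys, default=0.5):
    return np.array([table.get(tuple(row), default) for row in keys])

def q_over_f(A, pi, Ls, delta):
    """Ratio q^g(A | past) / f(A | past) for the multiplicative shift."""
    fA = np.where(A == 1, pi, 1 - pi)
    qA = (1 - delta) * Ls * A + (Ls * delta + 1 - Ls) * fA
    return qA / fA

def weighted_ice(delta, Lstar0, Lstar1):
    alive = Y1 == 1
    # saturated propensity estimates pi_j = f(A_j = 1 | past, alive)
    pi0 = lookup(wmean_by_stratum(L0[:, None], A0, np.ones(n)), L0[:, None])
    h1 = np.column_stack([L0, A0, L1])
    pi1 = lookup(wmean_by_stratum(h1[alive], A1[alive], np.ones(alive.sum())), h1)

    w0 = q_over_f(A0, pi0, Lstar0, delta)
    w1 = w0 * q_over_f(A1, pi1, Lstar1, delta)

    # step j = 1: weighted saturated regression of Y2, then T1
    hist1 = np.column_stack([L0, A0, L1, A1])
    Q1_tab = wmean_by_stratum(hist1[alive], Y2[alive], w1[alive])
    Q1 = lookup(Q1_tab, hist1)
    Q1_a1 = lookup(Q1_tab, np.column_stack([L0, A0, L1, np.ones(n, int)]))
    T1 = ((1 - delta) * Q1_a1 * Lstar1 + Q1 * (Lstar1 * delta + 1 - Lstar1)) * Y1

    # step j = 0: weighted saturated regression of T1, then T0
    hist0 = np.column_stack([L0, A0])
    Q0_tab = wmean_by_stratum(hist0, T1, w0)
    Q0 = lookup(Q0_tab, hist0)
    Q0_a1 = lookup(Q0_tab, np.column_stack([L0, np.ones(n, int)]))
    T0 = (1 - delta) * Q0_a1 * Lstar0 + Q0 * (Lstar0 * delta + 1 - Lstar0)
    return T0.mean()

# delta = 1 (no intervention): the estimate equals empirical survival at time 2
assert abs(weighted_ice(1.0, L0, L1) - Y2.mean()) < 1e-10
```

The saturated stratum means play the role of the logistic working models in the algorithm; any weighted regression routine could be substituted.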
\subsubsection{TMLE with sample-splitting and cross-fitting}
This algorithm utilizes sample-splitting and cross-fitting to allow flexible machine learning algorithms for estimating nuisance functions while circumventing Donsker class conditions \cite{Van1996,Van2000}.
In Web Appendix G, we prove the asymptotic normality of this estimator under the condition that the nuisance functions are estimated consistently at rates faster than $n^{-1/4}$ when $\psi^{g}$ is indexed by the interventions (\ref{ourshift}).
Suppose that a sample of size $n$ is split into $M$ disjoint subsets. Let $S_m$ denote the subset of individuals in split $m=1,\ldots,M$ and let $S_{-m}$ denote individuals not in split $m$ (i.e., $S_{-m} = \{i \notin S_m\}$).
Moreover, let $\hat{\pi}_j^{(-m)}$, $\hat{q}_j^{(-m)}$ and $\hat{Q}_j^{(-m)}$ denote estimates of $\pi_j,~q_j^g$ and $Q_j$ \textit{obtained by applying machine learning algorithms to individuals in $S_{-m}$}.
\begin{algorithm}[H]
\renewcommand{\theenumi}{\Alph{enumi}}
\caption{Algorithm for TMLE with sample-splitting and cross-fitting}
\begin{algorithmic}[1]
\item For each $m=1,\ldots,M$:
\begin{enumerate}
\item \textbf{\textit{For individuals in $S_{-m}$}}: compute $\hat{\pi}_j^{(-m)}$, $\forall j$. Set $\hat{T}_{J} = Y_J$.
\item Recursively from $j=J-1,\ldots,0$ \textbf{\textit{for individuals in $S_{-m}$}}:
\begin{enumerate}
\item Compute $\hat{Q}_j^{(-m)}(\bar{\bL}_{j},\bar{A}_{j}, Y_j=1)$ by regressing $\hat{T}_{j+1}$ on $(\bar{\bL}_{j},\bar{A}_{j})$ in those alive at time $j$
\item Compute $\hat{T}_{j}$ from $\hat{Q}_{j}^{(-m)} \equiv \hat{Q}_{j}^{(-m)}(\bar{\bL}_{j},\bar{A}_{j}, \bar{Y}_j)$ by formula (\ref{eq:genMIEIF}), setting $\hat{T}_{j}=0$ if ${Y}_{j}=0$
\end{enumerate}
\item \textbf{\textit{For individuals in $S_{m}$}}, set $\hat{T}_{J} = Y_J$. Then recursively from $j=J-1,\ldots,0$:
\begin{enumerate}
\item Solve for $\gamma_j$ in the following set of estimating equations:
\begin{equation*}
\mathbb{P}_n^{m} \left( Y_{j}\prod_{k=0}^{j}\frac{\hat{q}_k^{{g}^{(-m)}}}{\hat{\pi}_k^{(-m)}} \left[\hat{T}_{j+1} - \expit\left\{\logit\left(\hat{Q}_j^{(-m)}(\bar{\bL}_{j},\bar{A}_{j}, Y_j=1)\right)+\gamma_j \right\}\right]\right) = 0
\end{equation*}
\item Compute $\hat{T}_{j}$ from $\hat{Q}_j^\Delta(\bar{\bL}_{j},\bar{A}_{j}, Y_j=1) \equiv \expit\left\{\logit\left(\hat{Q}_j^{(-m)}(\bar{\bL}_{j},\bar{A}_{j}, Y_j=1)\right)+\hat{\gamma}_j \right\}$ if $Y_j=1$, otherwise set $\hat{T}_{j}=0$ if ${Y}_{j}=0$
\end{enumerate}
\end{enumerate}
\item Calculate \[\hat{\psi}^g(\delta)_{TMLE} = \frac{1}{M}\sum_{m=1}^M\mathbb{P}_n^m(\hat{T}_0)\]
\end{algorithmic}
\end{algorithm}
\noindent Here $\mathbb{P}_n^{m}\{f(X)\} = \frac{1}{\mid S_m\mid}\sum_{i\in S_m} f(X_i)$ where ${\mid S_m\mid}=n/M$ is the cardinality of $S_m$.
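The targeting step above solves a one-dimensional estimating equation in $\gamma_j$; because the weighted score is continuous and strictly decreasing in $\gamma_j$, bisection suffices. The sketch below is our own minimal illustration with synthetic inputs (not the paper's implementation): it fluctuates an initial estimate $\hat{Q}$ on the logit scale so that the weighted residuals average to zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
T_next = rng.binomial(1, 0.6, n).astype(float)              # pseudo-outcome (synthetic)
Q_init = rng.uniform(0.2, 0.8, n)                           # initial outcome estimate
w = rng.uniform(0.5, 2.0, n)                                # cumulative density-ratio weights

def expit(x):
    return 1 / (1 + np.exp(-x))

def logit(p):
    return np.log(p / (1 - p))

def score(gamma):
    """Weighted score of the logistic fluctuation model at offset gamma."""
    return np.mean(w * (T_next - expit(logit(Q_init) + gamma)))

# bisection on the monotone score function
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
gamma_hat = (lo + hi) / 2
Q_targeted = expit(logit(Q_init) + gamma_hat)               # updated estimate

assert abs(score(gamma_hat)) < 1e-8
```

In the full algorithm this update is performed fold by fold, with `Q_init` and `w` built from the out-of-fold nuisance estimates.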
\subsection{Singly robust estimators}
We also consider less optimal but computationally simple singly robust estimators of $\psi^{g}$ indexed by (\ref{ourshift}). An IPW estimator can be obtained as the product $\hat{\psi}^g_{IPW}(\delta) = \prod_{j=0}^{J-1} \hat{\Upsilon}^g_{IPW,j}(\delta)$, where $\hat{\Upsilon}^g_{IPW,j}(\delta)$ can be interpreted as an estimate of the discrete hazard at $j$ under a stochastic strategy $g$ in which treatment assignment is a draw from (\ref{ourshift}), given the identifying conditions of Section \ref{sec:g-formula}. Each $\hat{\Upsilon}^g_{IPW,j}(\delta)$ is obtained by solving for $\Upsilon^g_{IPW,j}(\delta)$ in the following estimating equation:
\[
\mathbb{P}_n \left[ Y_{j}\prod_{k=0}^{j}\frac{\hat{q}^g_k}{\hat{\pi}_k } \{Y_{j+1}-\Upsilon^g_{IPW,j}(\delta)\} \right] =0
\]
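The estimating equation above has the closed-form solution $\hat{\Upsilon}^g_{IPW,j}(\delta) = \mathbb{P}_n\{Y_j w_j Y_{j+1}\}/\mathbb{P}_n\{Y_j w_j\}$ with $w_j = \prod_{k=0}^{j}\hat{q}^g_k/\hat{\pi}_k$. A minimal two-period sketch on a toy law of our own choosing (not the paper's data): with $\delta=1$ all weights equal one, so the product of hazards reduces to the empirical survival proportion.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# toy two-period observed data, all binary, no censoring (arbitrary assumed law)
L0 = rng.integers(0, 2, n)
A0 = rng.binomial(1, 0.3 + 0.3 * L0)
Y1 = rng.binomial(1, 0.7 + 0.2 * A0)
L1 = rng.integers(0, 2, n) * Y1
A1 = rng.binomial(1, 0.3 + 0.3 * L1) * Y1
Y2 = rng.binomial(1, 0.7 + 0.2 * A1) * Y1

def ipw_survival(delta, pi0, pi1):
    """Product of weighted discrete hazards; pi_j are propensity estimates."""
    def ratio(A, pi, Ls):
        fA = np.where(A == 1, pi, 1 - pi)
        qA = (1 - delta) * Ls * A + (Ls * delta + 1 - Ls) * fA
        return qA / fA
    w0 = ratio(A0, pi0, L0)
    w1 = w0 * ratio(A1, pi1, L1)
    # Upsilon_j solves P_n[ Y_j w_j { Y_{j+1} - Upsilon_j } ] = 0
    ups0 = np.sum(w0 * Y1) / np.sum(w0)              # everyone is alive at j = 0
    ups1 = np.sum(Y1 * w1 * Y2) / np.sum(Y1 * w1)
    return ups0 * ups1

pi0_true = 0.3 + 0.3 * L0
pi1_true = 0.3 + 0.3 * L1
# delta = 1: weights are one, so the hazard product is empirical survival
assert abs(ipw_survival(1.0, pi0_true, pi1_true) - Y2.mean()) < 1e-10
```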
Alternatively, the singly robust ICE estimator, which we denote $\hat{\psi}^g_{ICE}(\delta)$, can be obtained as a special case of the weighted ICE algorithm above with the observational weights set to 1.
\subsection{Censoring}\label{sec:censoring} The identification arguments of Section \ref{sec:g-formula} extend straightforwardly to studies with censoring by implicitly including in $g$ a hypothetical intervention that eliminates censoring throughout follow-up \cite{Young2011}, with corresponding extensions of the g-formula $\psi^{g}$, the properties of its EIF and the associated estimation procedures. Briefly, denote by $C_j$ the indicator of censoring by time $j$ and adopt the order $(L_j,A_j,C_{j+1},Y_{j+1})$. To accommodate censoring, the singly robust weighted estimators and the various EIF-based estimators considered require, in addition to estimating $\alpha_j$ in $f(A_j\mid Y_j=1, \bar{A}_{j-1}, \bar{\bL}_j, C_j=0; \alpha_j)$, estimating $\alpha^c_j$ in $P(C_{j+1}=1\mid C_{j}=0,\bar{A}_{j}, \bar{\bL}_{j}, Y_{j}=1; \alpha^c_{j})$ for $j=0,\ldots, J-1$, with $\alpha^c = (\alpha^c_0,\ldots,\alpha^c_{J-1})$. Further details of the modifications to the weighted ICE and TMLE algorithms to accommodate censoring are provided in Web Appendix F.
\section{Simulation studies}
\label{sec:sim}
We conducted two simulation studies. The first compares the performance of the weighted ICE, IPW and ICE estimators when the nuisance functions are estimated through parametric models under various model misspecification scenarios. The second compares the performance of TMLE with sample-splitting and cross-fitting against IPW and ICE when the nuisance functions are estimated through machine learning algorithms.
\subsection{Simulation study 1: using parametric models}
In this simulation study we compare the performance of the weighted ICE estimator with the singly robust (IPW and ICE) estimators of $\psi^{g}$ indexed by the intervention distribution (\ref{ourshift}), which, under the identifying conditions discussed in Section \ref{sec:g-formula}, equals the cumulative probability of survival at $J$ under an intervention that increases the probability of treatment initiation in those with $L_j^\ast=1$ as a function of $\delta$. Recall that this increase is defined such that decreasing values of $\delta$ correspond to an increasing probability of treatment initiation (with $\delta=1$ coinciding with no treatment intervention).
We simulated 1000 samples of $n=(500,~1000,~2500)$ individuals selecting $J=5$ and $\delta=(0.75,~0.50,~0.25)$. We simulated the following variables: $(\bL_0,A_0,C_1,Y_1,\bL_1,A_1,...,C_{5},Y_{5})$, where $\bL_j = (L_j^\ast, L_{1j}, L_{2j})$ is the vector of measured confounders.
Specifically, we generated $L_{0}^\ast$ and $L_{10}\sim \text{Ber}\{\text{expit}(-1)\}$, and $L_{20}\sim \text{Ber}\{\text{expit}(1+L_{0}^\ast)\}$.
The censoring indicator at each time $j$ ($j=1,\ldots,5$) was simulated from $C_j \sim \text{Ber}\{\text{expit}(-2+L_{1,j-1}-L_{2,j-1})\}$ if $C_{j-1}=0$ and $Y_{j-1}=1$.
The outcome at each time $j$ ($j=1,\ldots,5$) is simulated from $Y_j \sim \text{Ber}\{\text{expit}(1+3A_{j-1}-2L_{j-1}^\ast+L_{1,j-1}-L_{2,j-1})\}$ if $Y_{j-1}=1$ and $C_{j}=0$.
The time-varying confounders at time $j$ ($j=1,\ldots,4$) are simulated from $L_{j}^\ast \sim \text{Ber}\{\text{expit}(-1-A_{j-1}+L_{j-1}^\ast-L_{1,j-1}+L_{2,j-1})\}$, $L_{1j} \sim \text{Ber}\{\text{expit}(-1+A_{j-1}+L_{1,j-1}-L_{2,j-1})\}$ and $L_{2j} \sim \text{Ber}\{\text{expit}(1+A_{j-1}+L_j^\ast+L_{2,j-1})\}$ if $Y_j=1$.
Treatment at time $j$ ($j=0,\ldots,4$) is simulated from $A_j \sim \text{Ber}\{\text{expit}(-1-2L_j^\ast-L_{1j} + L_{2j} + 2A_{j-1})\}$ if $Y_j=1$.
In addition $(Y_j, \bL_{j}, A_j,\ldots)=(\emptyset,\emptyset,\emptyset,\ldots)$ if $C_j=1$, and $(\bL_{j}, A_j, C_{j+1},\ldots)=(\emptyset,\emptyset,\emptyset,\ldots)$ if $Y_{j}=0$.
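The covariate, treatment and outcome mechanisms above can be transcribed directly; the sketch below does so for an uncensored version (censoring is omitted for brevity, and the baseline handling follows our reading of the text; variables for non-survivors are coded 0 rather than $\emptyset$).

```python
import numpy as np

rng = np.random.default_rng(4)
n, J = 2000, 5

def expit(x):
    return 1 / (1 + np.exp(-x))

Lstar = np.zeros((n, J), dtype=int)
L1 = np.zeros((n, J), dtype=int)
L2 = np.zeros((n, J), dtype=int)
A = np.zeros((n, J), dtype=int)
Y = np.ones((n, J + 1), dtype=int)   # Y[:, j]: survival through time j; Y[:, 0] = 1

# baseline (time 0) draws
Lstar[:, 0] = rng.binomial(1, expit(-1), n)
L1[:, 0] = rng.binomial(1, expit(-1), n)
L2[:, 0] = rng.binomial(1, expit(1 + Lstar[:, 0]))
A[:, 0] = rng.binomial(1, expit(-1 - 2 * Lstar[:, 0] - L1[:, 0] + L2[:, 0]))

for j in range(1, J + 1):
    # survival through time j given the past
    pY = expit(1 + 3 * A[:, j - 1] - 2 * Lstar[:, j - 1] + L1[:, j - 1] - L2[:, j - 1])
    Y[:, j] = rng.binomial(1, pY) * Y[:, j - 1]
    if j <= J - 1:
        s = Y[:, j] == 1   # covariates/treatment drawn only for survivors
        Lstar[s, j] = rng.binomial(1, expit(-1 - A[s, j - 1] + Lstar[s, j - 1]
                                            - L1[s, j - 1] + L2[s, j - 1]))
        L1[s, j] = rng.binomial(1, expit(-1 + A[s, j - 1] + L1[s, j - 1] - L2[s, j - 1]))
        L2[s, j] = rng.binomial(1, expit(1 + A[s, j - 1] + Lstar[s, j] + L2[s, j - 1]))
        A[s, j] = rng.binomial(1, expit(-1 - 2 * Lstar[s, j] - L1[s, j]
                                        + L2[s, j] + 2 * A[s, j - 1]))

# sanity check: survival indicators are monotone nonincreasing
assert np.all(Y[:, 1:] <= Y[:, :-1])
```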
\sloppy
The true cumulative probabilities of survival were calculated by using the true parametric models to generate a Monte Carlo sample of size $10^7$ under all interventions of interest. Our selection of parameters resulted in a scenario where selecting smaller $\delta$ (that is, interventions with larger increases in the probability of treatment initiation at each $j$) improves survival.
We considered three estimation scenarios for each choice of $\delta$ and sample size such that (1) all models are correctly specified, (2) only the outcome regression models are correctly specified, and (3) only the treatment (propensity score) and censoring models are correctly specified. The true functional forms of the treatment and censoring models are known under our simulation because treatment and censoring were generated to depend only on past measured variables. Similarly, the functional form of the outcome regression model for $Q_{J-1} = E(Y_J \mid Y_{J-1}=1, C_J=0, \bar{A}_{J-1}, \bar{\bL}_{J-1})$ is known due to the absence of unmeasured common causes. However, the true functional forms of the outcome regressions $Q_j$ for $0\leq j<J-1$ are not known under our simulation. To ensure correctly specified models for $Q_j$, $0\leq j<J-1$, saturated models were fit, i.e., models including all main terms and interaction terms for $(A_{j},L_{j}^\ast,L_{1j},L_{2j})$. In scenarios with misspecified models, at each time $j$, the misspecified treatment model ignores the censoring process and excludes $A_{j-1}$, and the misspecified outcome regression model excludes any pairwise interactions between the covariates and treatment.
\begin{figure}
\includegraphics[width=15cm]{bias_MSE.pdf}
\caption{Results for Simulation study 1}
\label{fig:paramsim}
\end{figure}
Figure \ref{fig:paramsim} compares performance of the three estimators of $\psi^{g}$ indexed by (\ref{ourshift}) for $\delta=(0.75,~0.50,~0.25)$.
Complementary results are given in Tables 3--5 in Web Appendix H.
As expected, all estimators were nearly unbiased under correctly specified models.
Under our model misspecification scenarios, $\hat{\psi}^g(\delta)_{WICE}$ is nearly unbiased, but the IPW estimator is biased when the treatment models are misspecified, and the ICE estimator is biased when the outcome models are misspecified.
In addition, under correctly specified models $\hat{\psi}^g(\delta)_{ICE}$ is the most efficient, and $\hat{\psi}^g(\delta)_{IPW}$ is the least efficient estimator. Interestingly, the simulation results show that $\hat{\psi}^g(\delta)_{WICE}$ has smaller MSE than the IPW estimator in all scenarios.
The simulation results also show that as $\delta$ decreases, the standard error (and MSE) in all three estimators increases. This is due to an increase in the effect of near positivity violations as $\delta$ nears zero. In fact, we would expect all three estimators to have the largest standard errors when $\delta=0$, which is equivalent to a strategy that treats all individuals with $L_j^\ast=1$ at all times. In Web Appendix H, we also show $J+1$ model robustness of our weighted ICE estimator in a model misspecification scenario that requires more than model double robustness.
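The growth of the weight variability as $\delta$ shrinks can be seen in closed form at a single time point: for a subject with $L_j^\ast=1$, the density ratio $q^g/f$ equals $(1-\delta)/f + \delta$ when $A_j=1$ and $\delta$ when $A_j=0$, so its mean under the observed law is always 1 while its variance rises as $\delta\to 0$. A short check with an arbitrary assumed propensity $f=0.2$:

```python
def weight_variance(delta, f):
    """Variance of q^g/f at one time point for a subject with L* = 1."""
    w1 = (1 - delta) / f + delta      # density ratio when A = 1
    w0 = delta                        # density ratio when A = 0
    mean = f * w1 + (1 - f) * w0      # equals 1 for every delta
    return f * (w1 - mean) ** 2 + (1 - f) * (w0 - mean) ** 2

f = 0.2
variances = [weight_variance(d, f) for d in (0.8, 0.5, 0.2)]
# variance grows as delta shrinks toward 0
assert variances[0] < variances[1] < variances[2]
```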
\subsection{Simulation study 2: using machine learning methods}
In the second simulation, we compare the performance of algorithms that use machine learning to estimate the nuisance functions for $\psi^{g}$ indexed by (\ref{ourshift}) with $J=5$. Specifically, we compare TMLE with sample splitting and cross fitting, IPW and ICE. Given much longer computation times, we limited consideration to one choice of $\delta=0.5$.
Unlike Simulation 1, we add model complexity to the data generating mechanism by considering continuous covariates, which might mimic real-life data more closely. We simulated 1000 hypothetical cohorts of $n=(250,~500,~1000,~2500)$ individuals comprising the following variables: $(\bL_0,A_0,C_1,Y_1,\bL_1,A_1,...,C_{5},Y_{5})$, where $\bL_j = (L_{1j}, L_j^\ast)$. In addition, $\bL_0=(L_0^{1}, L_0^{2}, L_{10}, L_0^\ast)$, where $L_0^{1}$ and $L_0^{2}$ are baseline covariates.
In particular, $L_0^{1}\sim \text{Ber}(0.5)$, $L_0^{2}\sim \text{N}(0,1)$, $L_{10}\sim \text{N}(2+L_0^{1},1)$ and $L_0^\ast\sim \text{Ber}\{\text{expit}(1.5-0.5L_{10}+L_0^{1}+0.25L_0^{2})\}$.
For $j\geq 1$,
$L_{1j}\sim \text{N}(2+A_{j-1}-L_{j-1}^\ast+0.5L_{1,j-1}+L_0^{1},1)$ and $L_{j}^\ast \sim \text{Ber}\{\text{expit}(1.5-A_{j-1}-0.5L_{1j}+L_{j-1}^\ast+L_0^{1}+0.25L_0^{2})\}$ if $Y_j=1$.
The censoring indicator at each time $j$ ($j=1,\ldots,5$) is simulated from $C_j \sim \text{Ber}[\text{expit}\{-4-A_{j-1}-L_{j-1}^\ast-0.5\sqrt{\mid L_{1,j-1}L_0^{2}\mid} + 1.5\mid L_{1,j-1} \mid/(1+\exp(L_0^{2}))\}]$ if $C_{j-1}=0$ and $Y_j=1$.
The outcome at each time $j$ ($j=1,\ldots,5$) is simulated from $Y_j \sim \text{Ber}[\text{expit}\{-1+2A_{j-1}-2L_{j-1}^\ast+0.25L_{j-1}^\ast L_{1,j-1}+0.5L_0^{1}+0.75\mid L_{1,j-1}+L_0^{2}\mid^{1.5}\}]$ if $Y_{j-1}=1$ and $C_{j}=0$.
Treatment at time $j$ ($j=0,\ldots,4$) is simulated from $A_j \sim \text{Ber}\{\text{expit}(-3+L_{j}^\ast-0.5L_{1j}+0.25L_{j}^\ast L_{1j}+0.5L_0^{1}+0.25L_0^{2} + 0.5\mid L_0^{2}\mid)\}$ if $Y_j=1$ and $A_{j-1}=0$, and is set to $1$ if $Y_j=1$ and $A_{j-1}=1$.
Nuisance functions were estimated using the Super Learner ensemble, which uses cross-validation to select the best convex combination of predictions from a pool of prediction algorithms \cite{van2007}. The library of candidate learners used here consisted of: generalized linear models and their variants (SL.glm, SL.glm.interaction), Bayesian generalized linear models (SL.bayesglm), generalized additive models with smoothing splines (SL.gam), multivariate adaptive regression splines (SL.earth), neural networks (SL.nnet) and random forests (SL.ranger).
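Super Learner's core idea, choosing a convex combination of candidate predictors by cross-validation, can be sketched with two trivial learners (a global mean and a covariate-stratified mean). The real analyses use the R \texttt{SuperLearner} library listed above; this is only a schematic on toy data of our own.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 600
x = rng.integers(0, 2, n)                       # one binary covariate
y = rng.binomial(1, 0.3 + 0.4 * x).astype(float)

folds = np.arange(n) % 5                        # 5-fold split
pred = np.zeros((n, 2))                         # cross-validated predictions
for k in range(5):
    tr, te = folds != k, folds == k
    pred[te, 0] = y[tr].mean()                  # learner 1: global mean
    for v in (0, 1):                            # learner 2: stratified mean
        pred[te & (x == v), 1] = y[tr & (x == v)].mean()

# grid search over convex weights alpha * learner1 + (1 - alpha) * learner2
grid = np.linspace(0, 1, 101)
cv_err = [np.mean((y - (a * pred[:, 0] + (1 - a) * pred[:, 1])) ** 2) for a in grid]
alpha_hat = grid[int(np.argmin(cv_err))]

# by construction the stacked learner's CV error is no worse than either candidate
assert min(cv_err) <= cv_err[0] and min(cv_err) <= cv_err[-1]
```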
Table~\ref{tab:ML} compares the performance of the 3 estimators. The ICE and IPW estimators show bias, as they are not expected to converge at $\sqrt{n}$ rates when machine learning is used for nuisance parameter estimation. TMLE, on the other hand, shows little to no bias in all instances. This agrees with theory, as TMLE allows the nuisance functions to converge at slower nonparametric rates.
Moreover, the estimated coverage probability of the confidence intervals for TMLE based on the asymptotic variance (see Web Appendix G) is very close to the nominal 95\%: $(94.7,~96.2,~95.2,~94.4)$ for $n=(250,~500,~1000,~2500)$, respectively.
\setlength{\tabcolsep}{1.5pt}
\renewcommand{\arraystretch}{1.0}
\begin{table}
\caption{\label{tab:ML}Simulation study 2 results for the proposed treatment intervention distribution, incorporating machine learning algorithms ($M=2$).
True probability of survival at time 5 is $0.629$. All values are multiplied by 100.}
\centering{\scalebox{1}{
\begin{tabular}{| l | lcc | lcc | lcc | lcc |}
\hline
& \multicolumn{3}{c}{$n=250$} & \multicolumn{3}{|c|}{$n=500$} & \multicolumn{3}{|c|}{$n=1000$} & \multicolumn{3}{|c|}{$n=2500$}
\\ \hline
Estimator & BIAS & SE & RMSE & BIAS & SE & RMSE & BIAS & SE & RMSE & BIAS & SE & RMSE \\
\hline
$\hat{\psi}^g(\delta)_{ICE}$ &-1.50 &4.35 &4.61 &-0.82 &2.83 &2.95 &-0.47 &2.14 &2.19 &-0.16 &1.38 &1.39\\
$\hat{\psi}^g(\delta)_{IPW}$ &-1.50 &4.91 &5.13 &-1.50 &3.32 &3.64 &-1.35 &2.60 &2.93 &-1.10 &1.71 &2.03\\
$\hat{\psi}^g(\delta)_{TMLE}$ &-0.19 &5.79 &5.79 &-0.09 &3.61 &3.62 &-0.07 &2.59 &2.59 &0.03 &1.65 &1.65\\
\hline
\end{tabular} }}
\end{table}
\section{Application}
\label{sec:app}
Randomized trials suggest that antiretroviral pre-exposure prophylaxis (PrEP) is highly effective in preventing HIV infection among men who have sex with men (MSM) \cite{Grant2010,Molina2015,Mccormack2016}. At the same time, there is widespread concern that PrEP may decrease condom use,
thereby increasing incidence of bacterial sexually transmitted infections (STIs) among MSM \cite{Tellalian2013,Krakower2014}.
With this backdrop, PrEP uptake has been low in practice, particularly in MSM with markers for higher HIV risk \cite{Marcus2016}. This is precisely the setting where near or true positivity violations will occur if the analyst attempts to query observational data about the effects of deterministic interventions such as ``always treat'' versus ``never treat'' with PrEP because propensity scores will be close to (or equal to) zero for individuals with certain levels of the measured confounders. In conjunction with these challenges, such deterministic treatment effects are not of greatest interest for treatments like PrEP, where biological benefits are established but population disease burden may be impacted with even small increases in treatment uptake.
We illustrate the application of the estimators of survival by $J$ discussed in Section \ref{sec:proposedint} using electronic health record data from the Cambridge Health Alliance -- a large community healthcare system in Eastern Massachusetts -- to estimate effects of increasing PrEP uptake on bacterial STI diagnosis. Specifically, we consider interventions that, beginning at the time of an HIV-negative test, successfully increase the proportion initiating PrEP ($A_j$) in each follow-up week $j$ only among those receiving an STI \textsl{test} and with no prior diagnosis of HIV at time $j$ ($L_j^{\ast}=1$; being tested for STIs suggests recent condomless sex, and PrEP would not be used after an HIV diagnosis). No intervention is made for the remainder of the population at time $j$ ($L_j^{\ast}=0$). Increases in treatment uptake under these interventions are indexed by a specified $\delta\in[0,1]$ as defined in (\ref{ourshift}): $\delta$ is the factor by which the probability of treatment non-initiation is decreased (relative to no intervention) at $j$. We consider $J=26$ weeks and, analogous to simulation study 1, three choices $\delta=(0.95,~0.85,~0.75)$ representing realistic interventions that result in ``low'', ``medium'' and ``high'' success in PrEP uptake relative to no intervention. We use $\delta=1$ (corresponding to no intervention) as the reference in defining causal effects.
Our analytic data set was restricted to patients who met all the following inclusion criteria at some point during 2012--2017: 1) Cis male with report of male gender of sex partner(s); 2) 15 years of age or older; 3) an HIV-negative test; 4) had no PrEP prescription in the 3 months prior to baseline; and 5) had no STI diagnosis in the 12 months prior to baseline.
Baseline (week $j=0$) for an individual was defined as the first week that all of these inclusion criteria are met.
For simplicity, we excluded one individual who met these criteria but died without a bacterial STI diagnosis during the 26 week follow-up period.
Our final analytic data set consisted of $n=1103$ individuals. As expected, few initiated PrEP over the follow-up (cumulatively 5.1\% over the 26 weeks).
The cumulative proportion of those receiving an STI test while being HIV-free over the 26 weeks was 70.7\%.
Note that no individual was treated as censored in this analysis, which requires the additional assumption that no individual included at baseline sought medical care outside of the Cambridge Health Alliance over the 26 week follow-up.
Baseline covariates $\bL_0$ included age and calendar year at baseline, race/ethnicity, and time-varying covariates $\bL_j$ included indicator of any ambulatory encounter, indicator of HIV, indicator of any HIV testing and indicator of any STI testing.
We used the Super Learner ensemble (with the same potential candidates as in the simulation) to estimate all nuisance functions for TMLE with $M=5$. We compared these results with the IPW, ICE and weighted ICE estimators described in this paper where the nuisance functions are specified by parametric models.
Confidence intervals for each of the methods are obtained from 1000 bootstrap samples by taking the 2.5th and 97.5th percentiles of the resulting estimates.
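Percentile bootstrap intervals of this kind can be sketched generically; `estimator` below is a stand-in for any of the estimators considered (a full re-fit of IPW, ICE or weighted ICE on each resample), and here we simply use a sample mean on synthetic stand-in data for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.binomial(1, 0.937, 1103).astype(float)   # synthetic stand-in outcome data

def estimator(sample):
    # placeholder: in practice, re-fit the chosen estimator on the resample
    return sample.mean()

B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(y), len(y))        # resample individuals with replacement
    boot[b] = estimator(y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])        # percentile 95% CI
```

Resampling is done at the individual level, so each bootstrap draw carries a subject's entire follow-up history when a longitudinal estimator is re-fit.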
Our estimate of the probability of not receiving an STI diagnosis under no intervention by 26 week follow-up ($\delta=1$) was 93.7\%. Table \ref{tab:MLdata} shows results from the four methods for $\delta<1$.
In this case point estimates from all of the methods are similar.
The results do not provide sufficient evidence that increasing PrEP uptake increases risk of STI diagnosis. For instance, compared with no intervention ($\delta=1$), the relative survival estimates under low, medium and high increases in PrEP uptake were 0.99 (95\% CI = $(0.96,~1.01)$), 0.97 (95\% CI = $(0.91,~1.01)$) and 0.96 (95\% CI = $(0.87,~1.02)$), respectively under $\hat{\psi}^g(\delta)_{TMLE}$. The relative survival estimates using other estimators were very similar (see Web Appendix I).
We also note that due to an increase in the presence of near positivity violations as $\delta$ nears zero, observational weights calculated under smaller $\delta$ were more variable than larger $\delta$ (see Web Appendix I). We would expect standard errors from all of the estimators to be the largest for $\delta=0$.
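The percentile bootstrap used for the confidence intervals can be sketched as follows. This is an illustrative implementation, not the paper's code; `percentile_boot_ci` and the mean-estimator demo are hypothetical names.

```python
import numpy as np

def percentile_boot_ci(data, estimator, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample individuals with replacement,
    re-apply the estimator, and take the alpha/2 and 1-alpha/2
    percentiles of the bootstrap estimates as the confidence limits."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows (individuals)
        stats[b] = estimator(data[idx])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# demo on a trivial estimator (the sample mean)
lo, hi = percentile_boot_ci(np.arange(100.0), lambda d: d.mean())
```

In the paper's setting the resampled unit is the individual's full follow-up record, and the estimator is one of $\hat{\psi}^g(\delta)_{TMLE}$, WICE, ICE or IPW.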
\setlength{\tabcolsep}{1.5pt}
\renewcommand{\arraystretch}{1.0}
\begin{table}
\caption{\label{tab:MLdata}Point estimates and 95\% confidence intervals from analysis of MSM from the Cambridge Health Alliance on the effect of incremental PrEP initiation on incident STI diagnosis. All values are multiplied by 100.
}
\centering{\scalebox{1}{
\begin{tabular}{| l | cc | cc | cc | cc |}
\hline
& \multicolumn{2}{c}{$\hat{\psi}^g(\delta)_{TMLE}$ (with ML)} & \multicolumn{2}{|c|}{$\hat{\psi}^g(\delta)_{WICE}$} & \multicolumn{2}{|c|}{$\hat{\psi}^g(\delta)_{ICE}$} & \multicolumn{2}{|c|}{$\hat{\psi}^g(\delta)_{IPW}$}
\\ \hline
$\uparrow$ in PrEP & Est. & 95\% C.I. & Est. & 95\% C.I. & Est. & 95\% C.I. & Est. & 95\% C.I. \\ \hline
Low &92.9&(89.3,~95.2)&93.0&(90.9,~94.9)&92.9&(91.1,~94.6)&92.9&(90.5,~94.9)\\
Medium &91.4&(84.8,~95.5)&91.6&(87.1,~95.1)&91.4&(88.5,~94.1)&91.3&(85.8,~95.0)\\
High &90.9&(80.9,~95.8)&90.3&(83.2,~95.4)&90.0&(85.8,~93.8)&89.9&(80.9,~95.3)\\
\hline
\end{tabular} }}
\end{table}
\section{Discussion}
\label{sec:discussion}
Many methods have been proposed for estimating causal estimands in time-varying treatment settings for survival analysis, and among these are estimators that offer protection against model misspecification and can also attain the semiparametric efficiency bound. However, most of these doubly robust estimators have been developed in the setting of deterministic treatment interventions. In this paper we provided sufficient conditions for the existence of doubly robust estimators when a treatment intervention distribution can depend on the observed treatment process for point treatment processes.
We also discussed a class of intervention distributions that are always guaranteed to give doubly/multiply robust estimators and gave a general form of the EIFs that are associated with these intervention distributions.
Among these intervention distributions is our multiplicative shift incremental propensity score intervention distribution, which aims to increase treatment uptake in a group of individuals who are at high risk of the outcome but have low exposure to treatment.
We provided various estimators that can be used for our proposed treatment intervention for both parametric and machine learning algorithms.
We conducted two simulation studies for our proposed multiplicative shift intervention distribution.
Our first study showed that the weighted ICE is more robust to model misspecification than IPW and ICE when the nuisance functions are estimated using parametric models.
Our second study showed that the TMLE with sample-splitting and cross-fitting is consistent as long as the nuisance functions are estimated consistently at fast enough rates using machine learning methods, which need not be $n^{-1/2}$.
We also illustrated an application of our estimators to a real-world dataset in the PrEP context.
Note that our proposed intervention treatment distribution (\ref{ourshift}) is guaranteed under a stochastic intervention such that treatment initiation status at time $j$ for an individual with $L^{\ast}_j=1$ is a random draw from (\ref{ourshift}). The identifying conditions reviewed in Section \ref{sec:g-formula} are sufficient to give our effect estimates this interpretation. A more policy relevant interpretation might, for example, be an intervention where individuals with $L^{\ast}_j=1$ are always offered PrEP counseling. In these individuals, the intervention distribution (\ref{ourshift}) quantifies the hypothesized ``success'' of such an intervention where $g$ in this case really refers to a deterministic strategy relative to the unmeasured treatment ``offered PrEP counseling''. Additional assumptions are needed to give our effect estimates this interpretation following similar arguments to those given in Richardson and Robins (2013)\cite{Richardson2013} and Young et al. (2014) \cite{Young2014}.
Finally, we note that while machine learning algorithms are more robust to model form misspecification, they are also computationally complex and may be practically infeasible for very large datasets without powerful computing systems.
Moreover, issues related to data privacy make access to advanced computational resources impossible in many cases. Therefore, estimators that offer model double/multiple robustness are useful in practice as they offer protection against model misspecification and can be easily computed using standard off-the-shelf regression software in R.
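The sample-splitting and cross-fitting used with the TMLE can be sketched generically. This is an illustrative sketch, not the paper's implementation; `cross_fit`, `fit` and `predict` are hypothetical names.

```python
import numpy as np

def cross_fit(data, fit, predict, K=5, seed=0):
    """Cross-fitting: each observation's nuisance prediction comes from
    a model trained on the K-1 folds that exclude that observation."""
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(data)) % K    # random fold assignment
    preds = np.empty(len(data))
    for k in range(K):
        model = fit(data[folds != k])                       # train on K-1 folds
        preds[folds == k] = predict(model, data[folds == k])  # held-out fold
    return preds

# demo with a trivial "model" (the training mean) on constant data
out = cross_fit(np.ones(50), lambda d: d.mean(),
                lambda m, d: np.full(len(d), m))
```

In practice `fit` would be the Super Learner fit of a nuisance function and `data` the individual-level records.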
\section*{Acknowledgment}
This research was funded by NIAID grant R21AI143386-01A1. The authors thank Dr. Gerard Coste at Cambridge Health Alliance for assistance with collection of clinical data.
\bibliographystyle{SageV}
\section{Introduction}
Deep inelastic lepton - nucleon scattering (DIS) has been an important
tool in particle physics for more than twenty-five years. Recent high
precision measurements by groups such as the NMC, SMC and various SLAC
groups~\cite{NMC, SMC, E140, E142} are testing the limits of our theoretical
knowledge of the structure of the nucleon. The SMC and the E143 groups
\cite{SMC2, E143} have recently reported the first measurements of the
transverse spin structure function $G_{2}(x, Q^{2})$
\footnote{Throughout this work I use capital letters {\it F, G} to denote
the structure functions measured in experiment and lower case letters
{\it f, g, h} to denote the various parton distributions.}
, which has a twist-three component $g_{T}$. In the near future we can also
expect the HERMES experiment to investigate the spin dependent structure
functions in some detail. Also the development of polarized beams at RHIC
will lead to measurements of the chiral-odd distributions $h_{1,L}$, which
are twist-two and twist-three respectively~\cite{JJ}.
Higher twist structure functions will also be important at the energies of
CEBAF and the proposed ELFE accelerator because of the phenomenon of
hadron-parton duality~\cite{BG,JU}, or `precocious scaling'~\cite{DGP}.
This predicts that at even moderate values of $Q^{2}$, higher twist
contributions to observables, such as the nucleon form factor, will be
non-negligible. Thus it is important to have some theoretical understanding
on the expected sizes and shapes of the higher twist structure functions.
In a recent paper, Ji~\cite{Ji} defined the 14 possible nucleon structure
functions in the standard model, and calculated each structure function in
terms of all the possible parton distribution functions up to the level of
twist-four (ignoring loop effects from QCD radiative corrections). The
parton distribution (or correlation) functions are defined in terms of
Fourier transforms of matrix elements of non-local quark and gauge fields
separated along the light-cone~\cite{JJ, Ji, EFP}. At present these matrix
elements cannot be calculated in QCD, however calculations of twist-two
matrix elements using the wavefunction of the MIT bag model have been
performed~\cite{SST, AS}, and these describe current data relatively well.
In this paper these calculations are extended to all the parton distribution
functions of twist-three and four containing only two quark fields.
\section{Parton Distribution Functions}
Following Ji~\cite{Ji}, let us consider a nucleon of mass $M$ with
momentum $P^{\mu}$ and polarisation vector $S^{\mu}$. If we choose the
nucleon to be moving along the $z$-axis with momentum $P$ we have
\begin{equation}
P^{\mu} = (\sqrt{M^{2} + P^{2}}, 0, 0, P).
\end{equation}
We introduce two orthogonal, light-like vectors $p$ and $n$
\begin{eqnarray}
p^{\mu} & = & \frac{1}{2}(\sqrt{M^{2} + P^{2}} + P)(1, 0, 0, 1), \nonumber \\
n^{\mu} & = & \frac{1}{M^{2}}(\sqrt{M^{2} + P^{2}} - P)(1, 0, 0, -1),
\end{eqnarray}
satisfying $P^{\mu} = p^{\mu} + M^{2}n^{\mu}/2$. We also decompose the
polarisation vector, $S = S_{\parallel} + M S_{\perp}$, where
\begin{equation}
S_{\parallel}^{\mu} = p^{\mu} - \frac{M^{2}}{2}n^{\mu},\;\;
S_{\perp}^{\mu} = (0, 1, 0, 0).
\end{equation}
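As a quick numerical consistency check of the kinematics above (not part of the original text, and using illustrative sample values for $M$ and $P$), one can verify that $p$ and $n$ are light-like, normalised so that $p \cdot n = 1$, and reassemble $P^{\mu} = p^{\mu} + M^{2}n^{\mu}/2$ in the mostly-minus metric:

```python
import numpy as np

M, P3 = 0.939, 2.0                       # illustrative mass/momentum in GeV
E = np.sqrt(M**2 + P3**2)
g = np.diag([1.0, -1.0, -1.0, -1.0])     # metric g_{mu nu}
dot = lambda a, b: a @ g @ b             # Minkowski product

Pmu = np.array([E, 0.0, 0.0, P3])
p = 0.5 * (E + P3) * np.array([1.0, 0.0, 0.0, 1.0])
n = (E - P3) / M**2 * np.array([1.0, 0.0, 0.0, -1.0])

assert abs(dot(p, p)) < 1e-12 and abs(dot(n, n)) < 1e-12   # light-like
assert abs(dot(p, n) - 1.0) < 1e-9                          # p.n = 1
assert np.allclose(Pmu, p + 0.5 * M**2 * n)                 # P = p + M^2 n / 2
```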
A parton distribution with $k$ light-cone momentum fractions
$M(x_{1}, \ldots, x_{k})$ is defined via the matrix element
\begin{equation}
\int \prod_{i=1}^{k} \frac{d\lambda_{i}}{2\pi} \exp(i\lambda_{i}x_{i})
\langle PS | \hat{Q}(\lambda_{1}n, \ldots, \lambda_{k}n) | PS \rangle
= M(x_{1}, \ldots, x_{k}) \hat{T}(p, n, S_{\perp})
\end{equation}
where $\hat{Q}$ is a product of $k$ quark and gluon fields, and $\hat{T}$
is a Lorentz tensor.
If the mass dimension of the parton distribution $M(x_{i})$ is $d_{M}$ and
that of $\hat{T}$ is $d_{T}$, then at large $Q^{2}$ the matrix element will
behave as $\Lambda^{d_{M}}Q^{d_{T}}$, where $\Lambda$ is a soft QCD mass
scale related to non-perturbative physics. A distribution with mass
dimension $d_{M}$ is called a twist-$(d_{M}+2)$ distribution. As the
dimension of a physical observable is fixed, the higher $d_{M}$ becomes,
the higher the inverse power of hard momenta must become in the observable.
Thus a twist-$n$ parton distribution can only contribute to physical
observables of twist-$n$ or higher, which behave as $Q^{2-(n+a)}$, where
$a \geq 0$, in the $Q^{2} \rightarrow \infty$ limit. In the following we
will often rescale a parton distribution by factors of $(M/\Lambda)$ to
make it dimensionless, but the twist will be unchanged.
For scattering processes involving two quark fields we can define a quark
density matrix
\begin{equation}
M_{\alpha\beta}(x) = \int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle PS | \bar{\psi}_{\beta}(0) \psi_{\alpha}(\lambda n) | PS \rangle.
\end{equation}
From this it is possible to systematically generate the possible
distributions at a given twist. The twist-2 part of the density matrix can
be written
\begin{equation}
M(x)|_{\mbox{twist-2}} = \frac{1}{2} \not \! p f_{1}(x) +
\frac{1}{2}\gamma_{5} \not \!p (S_{\parallel} \cdot n) g_{1}(x) +
\frac{1}{2}\gamma_{5} \not \! S_{\perp} \not \! p h_{1}(x)
\end{equation}
where the three quark distribution functions $f_{1}$, $g_{1}$ and $h_{1}$
are defined
\begin{eqnarray}
f_{1}(x) & = & \frac{1}{2} \int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle P | \bar{\psi}(0) \not \! n \psi(\lambda n) | P \rangle, \nonumber \\
g_{1}(x) & = & \frac{1}{2}\int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle PS_{\parallel} | \bar{\psi}(0) \not \! n \gamma_{5} \psi(\lambda n)
| PS_{\parallel} \rangle, \nonumber \\
h_{1}(x) & = & \frac{1}{2}\int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle PS_{\perp} | \bar{\psi}(0) \not \! n \gamma_{5} \not \! S_{\perp}
\psi(\lambda n) | PS_{\perp} \rangle. \label{eq:tw2}
\end{eqnarray}
These represent the unpolarized quark density, the quark helicity density
and the quark transversity density~\cite{JJ} respectively. The light-cone
gauge, $A^{+} = 0$, has been chosen, so that the distributions are manifestly
gauge invariant.
Similarly at the twist-3 level, the appropriate portion of the quark
density matrix can be written
\begin{equation}
M(x)|_{\mbox{twist-3}} = \frac{\Lambda}{2} [ e(x) +
\frac{1}{2}(S_{\parallel} \cdot n) (\not \! p \not \! n - \not \! n \not \! p)
\gamma_{5} h_{L}(x) + \gamma_{5} \not \! S_{\perp} g_{T}(x) ]
\end{equation}
where $\Lambda$ is a soft mass scale in QCD. The three twist-3 distribution
functions are given by
\begin{eqnarray}
e(x) & = & \frac{1}{2\Lambda} \int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle P | \bar{\psi}(0) \psi(\lambda n) | P \rangle, \nonumber \\
h_{L}(x) & = & \frac{1}{2\Lambda}\int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle PS_{\parallel} | \bar{\psi}(0) \frac{1}{2} (\not \! p \not \! n -
\not \! n \not \! p)\gamma_{5} \psi(\lambda n) | PS_{\parallel} \rangle,
\nonumber \\
g_{T}(x) & = & \frac{1}{2\Lambda}\int \frac{d\lambda}{2\pi} e^{i\lambda x}
\langle PS_{\perp} | \bar{\psi}(0) \gamma_{5} \not \! S_{\perp} \psi(\lambda n)
| PS_{\perp} \rangle. \label{eq:tw3}
\end{eqnarray}
At twist-4 we have
\begin{equation}
M(x)|_{\mbox{twist-4}} = \frac{\Lambda^{2}}{4} [ \not \! n f_{4}(x) +
\not \! n \gamma_{5} (S_{\parallel} \cdot n) g_{3}(x) +
\not \! n \gamma_{5} \not \! S_{\perp} h_{3}(x) ]
\end{equation}
with the twist-4 distributions
\begin{eqnarray}
f_{4}(x) & = & \frac{1}{2 \Lambda^{2}} \int \frac{d\lambda}{2\pi}
e^{i\lambda x} \langle P | \bar{\psi}(0) \not \! p \psi(\lambda n)
| P \rangle, \nonumber \\
g_{3}(x) & = & \frac{1}{2 \Lambda^{2}}\int \frac{d\lambda}{2\pi}
e^{i\lambda x} \langle PS_{\parallel} | \bar{\psi}(0) \gamma_{5} \not \! p
\psi(\lambda n) | PS_{\parallel} \rangle, \nonumber \\
h_{3}(x) & = & \frac{1}{2 \Lambda^{2}}\int \frac{d\lambda}{2\pi}
e^{i\lambda x} \langle PS_{\perp} | \bar{\psi}(0) \gamma_{5} \not \! S_{\perp}
\not \! p \psi(\lambda n) | PS_{\perp} \rangle. \label{eq:tw4}
\end{eqnarray}
The quark field $\psi$ can be decomposed into `good' and `bad' components,
$\psi_{+}$ and $\psi_{-}$ respectively
\begin{equation}
\psi_{\pm} = P^{\pm} \psi, \qquad P^{\pm} = \frac{1}{2}\gamma^{\mp}\gamma^{\pm},
\gamma^{\pm} = \frac{1}{\sqrt{2}}(\gamma^{0} \pm \gamma^{3}).
\end{equation}
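The projector property of $P^{\pm}$ can be checked numerically with explicit gamma matrices; the sketch below assumes the Dirac representation and is not part of the original text.

```python
import numpy as np

g0 = np.diag([1.0, 1.0, -1.0, -1.0])                    # gamma^0
s3 = np.diag([1.0, -1.0])                               # Pauli sigma^3
g3 = np.block([[np.zeros((2, 2)), s3],
               [-s3, np.zeros((2, 2))]])                # gamma^3
gp = (g0 + g3) / np.sqrt(2.0)                           # gamma^+
gm = (g0 - g3) / np.sqrt(2.0)                           # gamma^-
Pp = 0.5 * gm @ gp                                      # P^+ ('good')
Pm = 0.5 * gp @ gm                                      # P^- ('bad')

assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)  # idempotent
assert np.allclose(Pp + Pm, np.eye(4))                  # completeness
assert np.allclose(Pp @ Pm, np.zeros((4, 4)))           # orthogonality
```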
By inspection it can be seen that the twist-2 quark distributions involve
only the `good' component $\psi_{+}$, whereas the twist-3 distributions
involve mixing one `good' and one `bad' component, and the twist-4
distributions involve only the `bad' components.
The QCD equations of motion~\cite{KS}
\begin{eqnarray}
i\frac{d}{d\lambda} \psi_{-}(\lambda n) & = &
\frac{1}{2} \not \! n (-i\! \not \! D_{\perp} + m_{q}) \psi_{+}(\lambda n) , \\
-i\frac{d}{d\lambda} \bar{\psi}_{-}(\lambda n) & = &
\frac{1}{2} \bar{\psi}_{+}(\lambda n) (-i\! \not \! D_{\perp} + m_{q})
\not \! n ,
\end{eqnarray}
make it possible to eliminate the `bad' components from the twist-three and
four distributions, at the cost of introducing gluon fields into the matrix
elements. Because the model wavefunctions do not include gluon fields, we
will not do this here. Also note that at twist-three and twist-four there
exist distributions involving the gluon field with two or three light-cone
momentum fractions, such as~\cite{Ji, EFP}
\begin{equation}
E(x, y) = -\frac{1}{4\Lambda} \int \frac{d\lambda}{2\pi}
\frac{d\mu}{2\pi} e^{i\lambda x} e^{i\mu (y-x)} \langle P | \bar{\psi}(0)
i\! \not \! D_{\perp}(\mu n) \not \! n \psi(\lambda n) | P \rangle, \\
\end{equation}
and
\begin{equation}
B_{1}(x, y, z) = \frac{1}{2\Lambda^{2}} \int \frac{d\lambda}{2\pi}
\frac{d\mu}{2\pi} \frac{d\nu}{2\pi} e^{i\lambda x} e^{i\mu (y-x)}
e^{i\nu (z-y)}\langle P | \bar{\psi}(0) \not \! n i\! \not \!
D_{\perp}(\nu n) i\! \not \! D_{\perp}(\mu n) \psi(\lambda n) | P \rangle, \\
\end{equation}
which can be related to $e(x)$ and $f_{4}(x)$ by integrating over $y$ or
$y$ and $z$ respectively. However, as the MIT bag wavefunction has no
explicit gluon field, these distributions will be zero in the model. Also
the distributions with only one light-cone momentum fraction are of the
most interest for DIS and Drell-Yan processes.
Finally there exist distributions at the twist-four level involving four
quark operators, e.g.
\begin{equation}
U^{s}_{1}(x, y, z) = \frac{1}{4\Lambda^{2}} \int \frac{d\lambda}{2\pi}
\frac{d\mu}{2\pi} \frac{d\nu}{2\pi} e^{i\lambda x} e^{i\mu (y-x)}
e^{i\nu (z-y)}\langle P | \bar{\psi}(0) \not \! n \psi(\nu n) \bar{\psi}(\mu n)
\psi(\lambda n) | P \rangle,
\end{equation}
which is a four quark light-cone correlation function. Calculating this
distribution by the method below would require the evaluation of the
overlap integral between the four quark fields over the bag volume, which
is expected to much smaller than the two quark overlap integral required
for the two quark correlation functions. Hence these will not considered
further in this work.
\section{Calculation of Quark Distributions}
At present no QCD wavefunction for the nucleon can be calculated. So in
order to make useful calculations of the quark distributions at any twist
it is necessary to use the wavefunction from some phenomenological model
of the nucleon. The MIT bag model~\cite{MIT} is used here as it incorporates
relativistic, light quarks, and also models confinement. It also has the
further advantage that the wavefunction is simple and analytic. Other models
could also be chosen~\cite{BS}.
The major problem in calculating the relevant matrix elements for the quark
distributions is ensuring that momentum conservation is obeyed throughout
the calculation, hence ensuring that the calculated distributions have the
correct support, i.e. they vanish for light-cone momentum fraction $x$
outside the interval $[0, 1]$ \cite{ST}. To guarantee momentum conservation,
a complete set of intermediate states, $\sum_{m} |m \rangle \langle m|$, can
be inserted into the matrix elements of the quark distributions
(eqns.~(\ref{eq:tw2}, \ref{eq:tw3}, \ref{eq:tw4})). Using translational
invariance of the matrix element, all the $n$ dependence can be moved
into the argument of the exponential function. Then integrating over
$\lambda$ gives a momentum conserving delta function. The twist-two quark
distributions then become
\begin{eqnarray}
f_{1}(x) & = & \frac{1}{\sqrt{2}} \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
|\langle m | \psi_{+}(0) | P \rangle |^{2}, \nonumber \\
g_{1}(x) & = & \frac{1}{\sqrt{2}} \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
[|\langle m | \hat{R} \psi_{+}(0) | PS_{\parallel} \rangle |^{2} -
|\langle m | \hat{L} \psi_{+}(0) | PS_{\parallel} \rangle |^{2}], \nonumber \\
h_{1}(x) & = & \frac{1}{\sqrt{2}} \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
[|\langle m | \hat{Q}_{+} \psi_{+}(0) | PS_{\perp} \rangle |^{2} - \nonumber \\
&&\hspace{6cm}|\langle m | \hat{Q}_{-} \psi_{+}(0) | PS_{\perp} \rangle |^{2}],
\label{eq:t2}
\end{eqnarray}
where $\hat{R}(\hat{L})$ is the projection operator for right-(left-) handed
quarks $\hat{R}(\hat{L}) = (1 \pm \gamma_{5})/2$, and
$\hat{Q}_{\pm}$ is the projection operator
$\hat{Q}_{\pm} = (1 \pm \gamma_{5} \not \! S_{\perp})/2$, which
projects out eigenstates of the transversely projected Pauli-Lubanski
operator $\not \! S_{\perp}\gamma_{5}$ in a transversely projected nucleon.
The twist-four distributions are similar to those of twist-two, except they
involve `bad' components of the quark wavefunction
\begin{eqnarray}
f_{4}(x) & = & \frac{1}{\sqrt{2}} \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
|\langle m | \psi_{-}(0) | P \rangle |^{2}, \nonumber \\
g_{3}(x) & = & \frac{1}{\sqrt{2}} \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
[|\langle m | \hat{L} \psi_{-}(0) | PS_{\parallel} \rangle |^{2} -
|\langle m | \hat{R} \psi_{-}(0) | PS_{\parallel} \rangle |^{2}], \nonumber \\
h_{3}(x) & = & \frac{1}{\sqrt{2}} \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
[|\langle m | \hat{Q_{+}} \psi_{-}(0) | PS_{\perp} \rangle |^{2} - \nonumber \\
&&\hspace{6cm}|\langle m | \hat{Q_{-}} \psi_{-}(0) | PS_{\perp} \rangle |^{2}].
\label{eq:t4}
\end{eqnarray}
Here the distributions have been rescaled to make them dimensionless, and
factors of $(M / \Lambda)^{2}$ have been absorbed in the twist-four part of
the density matrix, $M(x)|_{\mbox{twist-4}}$.
Note that the twist-two and twist-four distributions have a natural
interpretation in the parton model, where they are related to the
probability of finding a parton carrying fraction $x$ of the plus component
of momentum of the nucleon, and in the appropriate helicity or transversity
eigenstates. In the twist-three case, the distributions do not have a
similar interpretation in the parton model. However we can still guarantee
momentum conservation by introducing a complete set of intermediate states,
$\sum_{m} |m \rangle \langle m|$, and then write the distributions in terms
of the matrix elements between nucleon states and intermediate states
\begin{eqnarray}
e(x) & = & \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
\langle P | \psi^{\dagger}(0) | m \rangle_{a} (\gamma^{0})_{ab}
\langle m | \psi(0) | P \rangle_{b} , \nonumber \\
h_{L}(x) & = & \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
\langle PS_{\parallel} | \psi^{\dagger}(0) | m \rangle_{a} (\gamma^{3}
\gamma^{5})_{ab} \langle m | \psi(0) | PS_{\parallel} \rangle_{b}, \nonumber \\
g_{T}(x) & = & \sum_{m} \delta (p^{+}(1-x) - p_{m}^{+})
\langle PS_{\perp} | \psi^{\dagger}(0) | m \rangle_{a} (\gamma^{0} \gamma^{5}
\not \! S_{\perp})_{ab} \langle m | \psi(0) | PS_{\perp} \rangle_{b}
\label{eq:t3}
\end{eqnarray}
where the matrix indices have been shown explicitly. Again the distributions
have been rescaled so that they are dimensionless, absorbing factors of
$M / \Lambda$ into the density matrix $M(x)|_{\mbox{twist-3}}$.
The next step in the model calculation of the distributions is to form the
momentum eigenstates $|P\rangle$ and $|m\rangle$ from the static states of
the model. This can be done using either the Peierls-Yoccoz~\cite{PY}
projection, which gives a momentum dependent normalisation, or the
Peierls-Thouless~\cite{PT} projection, which leads to a more difficult
calculation, but which preserves Galilean invariance of the matrix
elements. Using either method, the correct support for the distributions is
guaranteed by the above formalism. The distributions can then be calculated
in terms of the Hill-Wheeler overlap integrals between the quark
wavefunctions~\cite{SST,AS}.
Using the wavefunction of a model also introduces a scale $\mu$ into the
calculated distribution functions. This is the scale at which the model
wavefunction is considered a good approximation to the true QCD
wavefunction, which is presently unknown. The natural scale for the bag
model, and most other phenomenological models employing light relativistic
quarks, is the typical transverse momentum of the quarks,
$k_{T} \approx 400 \mbox{MeV}$. In order to
compare a calculated distribution function with experiment, the
calculated distribution needs to be evolved from the model scale up to the
experimental scale $Q^{2}$. This has previously been done using leading order
QCD evolution for the twist-2 distributions $f_{1}(x)$ and
$g_{1}(x)$~\cite{SST, AS}, with good agreement being obtained for a value
of $\mu$ in the region of 250 -- 500 MeV. This could be criticised on the
grounds that the strong coupling constant is not small in this region,
however calculations using next to leading order evolution\cite{StT} also
give good agreement with experiment for values of $\mu \approx 350 \mbox{MeV}$
and $\alpha_{S}(Q^{2} = \mu^{2}) \approx 0.75$.
For the higher twist distributions there do not yet exist comprehensive
calculations of the relevant anomalous dimensions for evolution of the
distribution functions. For the twist-three distributions $g_{T}$ and
$h_{L}$ full calculations of the anomalous dimensions have appeared in
the literature \cite{SV,KYU,KT}. For the lowest moments of these
distributions ($n=3,4$), the leading order evolution gives similar results
to naive power counting for $\sqrt{Q^{2}}$ in the region of 1 GeV. Thus we
shall neglect QCD corrections to the evolution of the higher twist
distributions, and use naive power counting to evolve these distributions.
In Figure 1 the nine twist-2, 3 and 4 distributions involving two quark
correlation functions are shown, calculated at the bag scale $\mu$ for a
bag radius of 0.8 fm. The Peierls-Yoccoz projection has been used to form the
momentum eigenstates $|P\rangle$ and $|m\rangle$, and only intermediate
states containing two quarks have been considered. As is expected from
equations~(\ref{eq:t2}, \ref{eq:t4}, \ref{eq:t3}), each of these
distribution functions is similar in magnitude to the others. It is worth
noting that the parton distributions satisfy the equalities
\begin{eqnarray}
f_{1}(x) + g_{1}(x) & = & 2h_{1}(x) \nonumber \\
e(x) + h_{L}(x) & = & 2g_{T}(x) \nonumber \\
f_{4}(x) + g_{3}(x) & = & 2h_{3}(x)
\end{eqnarray}
which are the lower bounds of Soffer's inequalities\cite{So}.
The evolution of the twist-3 distributions is particularly interesting
as the distributions have contributions from local operators of both
twist-3 and twist-2. Hence the moments of the twist-3 distributions
are related to the moments of the twist-2 distributions
\cite{JJ,KYU,KT}
\begin{eqnarray}
{\cal M}_{n}[e] & = & \frac{m}{M} {\cal M}_{n}[f_{1}] + {\cal M}_{n}[e^{(3)}]
\nonumber \\
{\cal M}_{n}[h_{L}] & = & \frac{2}{n+2}{\cal M}_{n}[h_{1}] + \frac{n}{n+2}
\frac{m}{M}{\cal M}_{n-1}[g_{1}] + {\cal M}_{n}[h_{L}^{(3)}] \nonumber \\
{\cal M}_{n}[g_{T}] & = & \frac{1}{n+1} {\cal M}_{n}[g_{1}] - \frac{1}{2}
\frac{m}{M}{\cal M}_{n-1}[h_{1}] + {\cal M}_{n}[g_{T}^{(3)}]
\end{eqnarray}
where $m$ is the quark mass,
${\cal M}_{n}[f] = \int_{0}^{1} x^{n}f(x)dx$ is the $n$th moment of the
distribution $f(x)$ and $f^{(3)}$ denotes the genuine twist-3 part of the
distribution $f(x)$. Note that when contributions from operators proportional
to the quark mass or from twist-3 operators are ignored, the third of these
relations is equivalent to the Wandzura-Wilczek relation \cite{WW}
\begin{equation}
g_{2}(x) = -g_{1}(x) + \int_{x}^{1}\frac{g_{1}(y)}{y}dy.
\end{equation}
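A simple property of the Wandzura-Wilczek construction above is that for any integrable $g_{1}$ the resulting $g_{2}$ integrates to zero over $[0,1]$. The numerical illustration below uses an arbitrary toy $g_{1}$, not the bag-model result:

```python
import numpy as np

g1 = lambda y: y * (1.0 - y) ** 3               # arbitrary toy g1

x = np.linspace(1e-6, 1.0, 4001)
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
f = g1(x) / x                                   # integrand g1(y)/y
dT = 0.5 * (f[1:] + f[:-1]) * np.diff(x)        # panel areas
T = np.append(np.cumsum(dT[::-1])[::-1], 0.0)   # T[i] = int_{x_i}^1 g1(y)/y dy
g2 = -g1(x) + T                                 # Wandzura-Wilczek g2

assert abs(trap(g2)) < 1e-6                     # int_0^1 g2(x) dx = 0
```

For this toy input the tail integral has the closed form $T(x) = (1-x)^{4}/4$, which the numerical $T$ reproduces.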
In what follows we shall ignore terms directly proportional to the quark
mass, as the masses of the $u$ and $d$ quarks are zero (or very small)
in the MIT bag model. (If this discussion were extended to include strange
quarks and anti-quarks then it would be important to keep terms in the
quark mass.)
We can invert the moments above to find the genuine twist-3 parts of
each distribution in terms of the calculated distributions and the
twist-2 distributions
\begin{eqnarray}
e^{(3)}(x) & = & e(x) \nonumber \\
h^{(3)}_{L}(x) & = & h_{L}(x) - 2x \int_{x}^{1} \frac{dy}{y^{2}} h_{1}(y)
\nonumber \\
g^{(3)}_{T}(x) & = & g_{T}(x) - \int_{x}^{1} \frac{dy}{y} g_{1}(y).
\label{eq:t3-2}
\end{eqnarray}
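The inversion above can be checked against the massless moment relation ${\cal M}_{n}[g_{T}] = {\cal M}_{n}[g_{1}]/(n+1) + {\cal M}_{n}[g_{T}^{(3)}]$ numerically. The toy inputs below are arbitrary, not the bag results:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
g1 = x * (1.0 - x)                       # arbitrary toy twist-2 input
gT3 = x**2 * (1.0 - x) ** 2              # arbitrary toy genuine twist-3 part
gT = 0.5 * (1.0 - x) ** 2 + gT3          # since int_x^1 g1(y)/y dy = (1-x)^2/2

def mom(fvals, k):
    """k-th moment int_0^1 x^k f(x) dx by the trapezoid rule."""
    y = x**k * fvals
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

for n in range(5):
    assert abs(mom(gT, n) - (mom(g1, n) / (n + 1) + mom(gT3, n))) < 1e-5
```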
This separation makes it possible to evolve the calculated distributions
$e, h_{L}, g_{T}$ from the model scale up to experimental scales. The
procedure I use for the evolution is to separate the calculated distributions
at the model scale $Q^{2}=\mu^{2}$ into twist-2 and twist-3 parts using
eqn.~(\ref{eq:t3-2}). The twist-2 parts are then evolved using the
Gribov-Lipatov-Altarelli-Parisi (GLAP) equation \cite{GLAP} to leading order,
whereas the twist-3 parts are evolved according to naive power counting
$f^{(3)}(Q^{2}) \sim 1/\sqrt{Q^{2}}$. The model scale has been chosen as
$\mu = 0.4$ GeV, and in Figure 2 the full twist-3 distributions at
$Q^{2}$ = 1 and 10 GeV$^{2}$ are shown. Comparing between $e(x, Q^{2})$,
which has no twist-2 part, and $h_{L}(x, Q^{2})$ and $g_{T}(x,Q^{2})$ enables
us to see how the twist-2 parts of the latter two distributions dominate at
higher $Q^{2}$.
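The power-counting step of this procedure is trivial to implement; the sketch below shows only the genuine twist-3 rescaling (the LO GLAP evolution of the twist-2 piece is omitted), and the shape of $e(x)$ is an illustrative toy, not the bag result:

```python
import numpy as np

mu = 0.4                                    # model scale in GeV, as in the text

def evolve_twist3(f3_at_mu, Q2):
    """Naive power counting: f^(3)(Q^2) ~ 1/sqrt(Q^2)."""
    return f3_at_mu * np.sqrt(mu**2 / Q2)

x = np.linspace(0.0, 1.0, 101)
e_mu = x * (1.0 - x) ** 2                   # toy e(x) at the bag scale
e_1 = evolve_twist3(e_mu, 1.0)              # Q^2 = 1 GeV^2
e_10 = evolve_twist3(e_mu, 10.0)            # Q^2 = 10 GeV^2
```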
In fact it is probably not a particularly good approximation to evolve
$e(x, Q^{2})$ according to naive power counting. While a calculation of the
anomalous dimensions for the operators contributing to $e(x)$ has yet to
be done, we can surmise a few facts about the lowest moments, including that
the lowest moment of $e(x)$ will have a small anomalous dimension despite the
corresponding operator being twist-3. The relevant local operators for
$e(x)$ in the operator product expansion are
\begin{equation}
O^{\mu_{1} \mu_{2} \ldots \mu_{n}} = {\cal S}_{n} \bar{\psi} (iD^{\mu_{1}})
(iD^{\mu_{2}}) \ldots (iD^{\mu_{n}}) \psi - \mbox{traces}
\end{equation}
where ${\cal S}_{n}$ symmetrizes over the Lorentz indices
$\mu_{1}, \ldots \mu_{n}$. The relevant matrix elements of these operators
are
\begin{equation}
\langle P | O^{\mu_{1} \mu_{2} \ldots \mu_{n}} | P \rangle =
2 M e_{n} P^{\mu_{1}} P^{\mu_{2}} \ldots P^{\mu_{n}} - \mbox{traces}.
\end{equation}
Standard techniques give the sum rules
\begin{equation}
e_{n} = \int dx x^{n} e(x).
\end{equation}
In particular, for $n = 0$ we have
\begin{equation}
\int dx e(x) = \frac{1}{2M} \langle P | \bar{\psi}\psi | P \rangle
\label{eq:e0}
\end{equation}
which is related to the nucleon $\sigma$ term,
$\langle P |\frac{1}{2}(m_{u}+m_{d})(\bar{u}u + \bar{d}d) |P\rangle$. The
$\sigma$ term is renormalization point invariant in QCD, which implies that
the matrix element in eqn.~(\ref{eq:e0}) will have an anomalous dimension of
the same magnitude as that of the quark mass operator. Thus the zeroth
moment of $e(x)$ will change quite slowly with $Q^{2}$, whereas we expect
that the higher moments should scale approximately as $1/\sqrt{Q^{2}}$.
Thus the distribution $e(x,Q^{2})$ will tend to increase at low $x$ and
decrease at high $x$ as $Q^{2}$ increases
\footnote{On completion of this work I learned of a calculation of the
anomalous dimensions of $e(x,Q^{2})$ \cite{KN} which confirms these
speculative remarks, in particular the anomalous dimensions of the operators
corresponding to the second and third moments of $e$ are similar in magnitude
to the anomalous dimensions of twist-2 distributions.}.
At twist-4 level there is no mixing of twist-2 and twist-3 operators with
the local twist-4 operators \cite{JS}, which simplifies the analysis
somewhat. Of course there are a large number of local operators of
twist-4 for each distribution function, and in the vast majority of cases
their anomalous dimensions have not been calculated. Thus to evolve the
twist-4 distributions $f^{4}(x)$, naive power counting,
$f^{4}(x, Q^{2}) \sim 1/Q^{2}$, is again used. In Figure 3 we see the twist-4
distributions at $Q^{2} =$ 1 and 10 GeV$^{2}$, where again the bag scale has
been taken as $\mu = 0.4$ GeV.
Finally, the twist-two transversity distribution $h_{1}(x)$ will be of
interest at scales above 1 GeV$^{2}$. The anomalous dimensions of the
twist-2 operators have been calculated \cite{AM}, so the evolution of the
parton distribution from the bag scale is fairly straightforward. The results
of this evolution up to $Q^{2} = 1$ GeV$^{2}$ and $Q^{2} = 10$ GeV$^{2}$ are
shown in Figure 4.
\section{Nucleon Structure Functions}
The parton distribution functions calculated above can be measured in
various processes, most notably deep inelastic lepton-nucleon scattering
(DIS) and Drell-Yan (DY) processes. In this section we consider how the parton
distributions combine to give the nucleon structure functions which can be
measured in DIS, and those structure functions involving higher twist
distributions which have been measured are compared with the calculated
structure functions.
In the Bjorken limit ($Q^{2}, \nu \rightarrow \infty$,
$x = Q^{2}/(2P \cdot q)$ fixed)
the nucleon tensor $W^{\mu\nu}$ can be expressed in terms of 14 structure
functions \cite{Ji}
\begin{eqnarray}
W^{\mu\nu} & = & \left(-g^{\mu\nu} + \frac{q^{\mu}q^{\nu}}{q^{2}} \right)
F_{1}(x) + \hat{p}^{\mu}\hat{p}^{\nu} \frac{F_{2}(x)}{\nu} \nonumber \\
&& + q^{\mu}q^{\nu} \frac{F_{4}(x)}{\nu} + (p^{\mu}q^{\nu}+p^{\nu}q^{\mu})
\frac{F_{5}(x)}{2\nu} \nonumber \\
&& -i\epsilon^{\mu\nu\alpha\beta}q_{\alpha}\left(p_{\beta} \frac{G_{1}(x)}{\nu}
+ MS_{\perp\beta} \frac{G_{T}(x)}{\nu} \right) \nonumber \\
&& +i\epsilon^{\mu\nu\alpha\beta} Mp_{\alpha}S_{\perp\beta}
\frac{G_{3}(x)}{\nu} +i\epsilon^{\mu\nu\alpha\beta} q_{\alpha}p_{\beta}
\frac{F_{3}(x)}{2\nu} \nonumber \\
&& + \left(-g^{\mu\nu} + \frac{q^{\mu}q^{\nu}}{q^{2}} \right)
a_{1}(x) + \hat{p}^{\mu}\hat{p}^{\nu} \frac{a_{2}(x)}{\nu} \nonumber \\
&& + q^{\mu}q^{\nu} \frac{a_{4}(x)}{\nu} + (p^{\mu}q^{\nu}+p^{\nu}q^{\mu})
\frac{a_{5}(x)}{2\nu} \nonumber \\
&& + M(S_{\perp}^{\mu}\hat{p}^{\nu} + S_{\perp}^{\nu}\hat{p}^{\mu})
\frac{b_{1}(x)}{2\nu} + M(S_{\perp}^{\mu}p^{\nu} + S_{\perp}^{\nu}p^{\mu})
\frac{b_{2}(x)}{2\nu},
\end{eqnarray}
where the last seven structure functions ($F_{3}, a_{1,2,4,5}, b_{1,2}$) are
related to parity-violating processes involving the weak interaction. I have
introduced the shorthand notation
$\hat{p}^{\mu}=p^{\mu}-\frac{p \cdot q}{q^{2}}q^{\mu}$.
It is conventional to introduce longitudinal structure functions, describing
the scattering when the exchanged vector boson is longitudinally polarised
\begin{eqnarray}
F_{L}(x) & = & F_{2}(x)\left(1 + \frac{4M^{2}x^{2}}{Q^{2}}\right)
- 2x F_{1}(x) \nonumber \\
a_{L}(x) & = & a_{2}(x)\left(1 + \frac{4M^{2}x^{2}}{Q^{2}}\right)
- 2x a_{1}(x).
\end{eqnarray}
According to the Callan-Gross relation \cite{CG} both $F_{L}$ and $a_{L}$
vanish in the Bjorken limit, and, ignoring QCD radiative corrections and
nucleon mass effects, both are twist-4 structure functions.
The fourteen structure functions can all be expressed in terms of parton
distribution functions \cite{Ji, EFP, JS}. In the following all
contributions from parton distributions with two or more light-cone
fractions (i.e.\ distributions involving two quark fields plus one or two
gluon fields and distributions involving four quark fields) will be dropped
and the discussion is limited to the distributions calculated above. We also
keep terms linear in the quark masses. The electroweak current $J^{\mu}(\xi)$
of the quarks coupling to vector bosons is taken to be
\begin{equation}
J^{\mu}(\xi) = \bar{\psi}(\xi) \gamma^{\mu} (g_{v} + g_{a}\gamma_{5})
\psi(\xi)
\end{equation}
where the vector and pseudo-vector couplings, $g_{v}$ and $g_{a}$ take the
values of the standard model. Also, because weak currents can change quark
flavour, the mass of a quark in the initial state is denoted as $m_{i}$, and
the mass of a quark in the final state as $m_{f}$.
For unpolarised scattering there are five measurable structure functions.
$F_{1}$ and $F_{2}$ (or $F_{L}$) describe electromagnetic scattering,
$F_{3}$ is related to parity violating weak scattering, and $F_{4}$ and
$F_{5}$ are related to the scattering of non-conserved currents. We have:
\begin{eqnarray}
F_{1}(x) & = & \frac{1}{2} \sum_{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
f_{1}^{q}(x) \nonumber \\
&& - \frac{M}{Q^{2}} \sum_{q} m_{f}(|g_{vq}|^{2} - |g_{aq}|^{2}) x e^{q}(x) \\
F_{L}(x) & = & \frac{M^{2}}{Q^{2}} \sum_{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
4 x^{3} f_{4}^{q}(x) \nonumber \\
&& - \frac{M}{Q^{2}} \sum_{q} \left[(m_{i}+m_{f})|g_{vq}|^{2}
+ (m_{f}-m_{i})|g_{aq}|^{2}\right] 4 x^{2} e^{q}(x) \\
F_{3}(x) & = & \sum_{q} (-1)^{q} (|g_{vq}|^{2} + |g_{aq}|^{2}) f_{1}^{q}(x)
\end{eqnarray}
where $(-1)^{q}$ is $+1$ for quarks and $-1$ for antiquarks. Finally
\begin{eqnarray}
F_{4}(x) & = & \frac{M}{Q^{2}} \sum_{q} \left[|g_{vq}|^{2}(m_{i}-m_{f}) +
|g_{aq}|^{2}(m_{i}+m_{f})\right] e^{q}(x) \label{eq:F4} \\
F_{5}(x) & = & \frac{2Mx}{Q^{2}} \sum_{q} \left[|g_{vq}|^{2}(m_{i}-m_{f}) +
|g_{aq}|^{2}(m_{i}+m_{f})\right] e^{q}(x). \label{eq:F5}
\end{eqnarray}
Thus $F_{L}$, $F_{4}$ and $F_{5}$ are twist-four structure functions.
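Eqs.~(\ref{eq:F4}) and (\ref{eq:F5}) carry the same coupling--mass combination, so at this order they are tied together as $F_5(x)=2xF_4(x)$. A short sketch for a single flavour (the couplings, masses and the toy $e(x)$ are placeholders, not model values):

```python
# F_4 and F_5 for one quark flavour; the common factor
# [gv2*(mi - mf) + ga2*(mi + mf)] makes F_5(x) = 2x F_4(x) manifest.
M_N, Q2 = 0.938, 5.0          # assumed nucleon mass (GeV) and scale (GeV^2)

def F4(x, e, gv2, ga2, mi, mf):
    return (M_N / Q2) * (gv2 * (mi - mf) + ga2 * (mi + mf)) * e(x)

def F5(x, e, gv2, ga2, mi, mf):
    return (2.0 * M_N * x / Q2) * (gv2 * (mi - mf) + ga2 * (mi + mf)) * e(x)

e = lambda x: (1.0 - x) ** 3  # toy twist-3 distribution e(x)
x = 0.4
assert abs(F5(x, e, 1.0, 1.0, 0.005, 1.5)
           - 2.0 * x * F4(x, e, 1.0, 1.0, 0.005, 1.5)) < 1e-12
# With |g_v| = |g_a| (charged currents) the bracket collapses to 2*gv2*mi,
# so the final-state mass m_f drops out, as discussed later in the text.
```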
For scattering from a longitudinally polarised nucleon we have five structure
functions:
\begin{eqnarray}
G_{1}(x) & = & \frac{1}{2} \sum_{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
g_{1}^{q}(x) \\
a_{1}(x) & = & \frac{1}{2} \sum_{q} (-1)^{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
g_{1}^{q}(x) \\
a_{L}(x) & = & \frac{2Mx}{Q^{2}} \sum_{q} (-1)^{q}
(|g_{vq}|^{2} + |g_{aq}|^{2})\left[-2Mx^{2}g_{3}^{q}(x) +
2m_{i}x h_{L}^{q}(x) \right] \\
a_{4}(x) & = & \frac{M}{Q^{2}} \sum_{q} (-1)^{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
m_{i} x h_{L}^{q}(x) \\
a_{5}(x) & = & \frac{2Mx}{Q^{2}} \sum_{q} (-1)^{q}
(|g_{vq}|^{2} + |g_{aq}|^{2}) m_{i} x h_{L}^{q}(x).
\end{eqnarray}
The twist-2 structure function $a_{1}$ is the longitudinally polarised analogue of
$F_{3}$, and if it were measured the role of the axial anomaly in the
interpretation of measurements of $G_{1}$ would be much clearer\cite{BT}.
The structure functions $a_{L,4,5}$ are all twist-4, with $a_{4,5}$ related
to non-conserved currents.
For scattering from a transversely polarised nucleon there are four
structure functions:
\begin{eqnarray}
G_{T}(x) & = & \frac{1}{2} \sum_{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
g_{T}^{q}(x) \\
G_{3}(x) & = & \frac{1}{2M} \sum_{q} \left[|g_{vq}|^{2}(m_{f}-m_{i}) -
|g_{aq}|^{2}(m_{i}+m_{f}) \right] h_{1}^{q}(x) \label{eq:G3} \\
b_{1}(x) & = & \sum_{q} (-1)^{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
x g_{T}^{q}(x) \\
b_{2}(x) & = & \frac{1}{M} \sum_{q} (-1)^{q} (|g_{vq}|^{2} + |g_{aq}|^{2})
m_{i} h_{1}^{q}(x).
\end{eqnarray}
These four structure functions are twist-3. $G_{T} = G_{1}+G_{2}$ is related
to the transverse spin of the quarks and antiquarks, $b_{1}$ is its
parity-violating partner, while $G_{3}$ and $b_{2}$ are related to
non-conserved currents.
Many experiments over the years have determined $F_{1}$, $F_{3}$ and
$G_{1}$ to good precision, and in the near future $G_{T}$ should be
measured to similar precision. $F_{L}$ has also been measured, with lower
precision, so it should be possible to extract
estimates of the parton distributions $g_{T}$ and $f_{4}$. Unfortunately
there is little hope of determining any of the other parton distributions
from DIS. From the expressions for $F_{4,5}$, eqs.~(\ref{eq:F4}, \ref{eq:F5})
one might hope that $e(x)$ would be measurable in heavy quark production.
However, because $|g_{vq}|=|g_{aq}|$, the terms in $m_{f}$ cancel
\footnote{This cancellation does not occur when the exchanged boson is a Z
boson, so it may be possible to measure $F_{4,5}$ at HERA.}.
A similar cancellation occurs in eqn.~(\ref{eq:G3}) making a determination of
$h_{1}(x)$ very difficult. However the possibility exists to measure
$h_{1,L}(x)$ and $e(x)$ in polarised Drell-Yan processes \cite{JJ} which
may be possible at RHIC.
Previous calculations \cite{SST, StT} have shown good agreement between
experimental data for $F_{1}(x)$ and $F_{3}(x)$ and the model predictions.
We are now in a position to compare the model prediction for $F_{L}(x)$ with
experimental data. Experimentally the ratio $R = F_{L}/2xF_{1}$ has been
measured \cite{E140}, and there is good evidence for a twist-4 contribution
\cite{SGM+, BRY} at medium and high $x$ and $Q^{2}<10$ GeV$^{2}$. Before we
can compare with the data we should also take into account effects from
perturbative QCD which give a non-zero $F_{L}$ at leading and next-to-leading
order, and effects from the target nucleon mass. In the simplest version of
the bag model the twist four contribution to $R$ for electromagnetic
scattering is given by
\begin{equation}
R^{(4)}(x, Q^{2}) = \frac{8 M^{2}x^{2}}{Q^{2}}
\frac{f_{4}(x,Q^{2})}{f_{1}(x,Q^{2})}
\end{equation}
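The size and $Q^{2}$ dependence of this correction can be sketched directly from the formula; the distribution shapes below are illustrative stand-ins, not the bag-model $f_{1}$ and $f_{4}$:

```python
# Twist-4 contribution to R from the formula above:
#   R4(x, Q2) = 8 M^2 x^2 f_4(x) / (Q^2 f_1(x)).
M_N = 0.938  # assumed nucleon mass in GeV

def R4(x, Q2, f4, f1):
    return 8.0 * M_N ** 2 * x ** 2 * f4(x) / (Q2 * f1(x))

f1 = lambda x: x ** 0.5 * (1.0 - x) ** 3   # toy twist-2 shape
f4 = lambda x: 0.1 * (1.0 - x) ** 4        # toy twist-4 shape

# The explicit 1/Q^2 suppression: doubling Q^2 halves the correction.
x = 0.5
assert abs(R4(x, 2.0, f4, f1) / R4(x, 4.0, f4, f1) - 2.0) < 1e-12
```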
In Figure 5 we show our calculated $R^{(4)}(x, Q^{2})$ at $Q^{2} = 2$ and
5 GeV$^{2}$. We also show the experimental values of $R$ and various
perturbative QCD calculations, including target mass effects, and the effect
of adding the twist-4 contribution from the bag model at these scales. Because
$R^{(4)}(x, Q^{2})$ effectively scales as $f_{4}(x, \mu^{2})/Q^{4}$, we
calculate a large correction to the perturbative calculations only at low values
of $Q^{2}$; however, this twist-4 correction is in the right direction, and
for low $Q^{2}$ gives a good fit to the data at medium $x$ values.
If the data at low $Q^{2}$ improves in accuracy, it may be possible to perform
an independent check on our value of the bag scale, $\mu$, from a fit of our
calculated parton distribution $f_{4}$ to the data at various values of
$Q^{2}$.
The other higher twist structure function that has been measured is
$G_{2}(x) = G_{T}(x) - G_{1}(x)$ \cite{E143, SMC2}. In Figure 6 the
experimental data for $G_{2}^{p}(x)$ is compared with the calculated structure
function at $Q^{2} = 5$ GeV$^{2}$. The calculation is in reasonable agreement
with the data. In particular the calculated $G_{2}(x)$ crosses the $x$ axis
in the right region. As this crossing is determined by the interplay between
the twist-3 part of $G_{2}(x)$, given by $g_{T}^{(3)}(x)$, and the twist-2 portion
$-g_{1}(x) + \int_{x}^{1} dy g_{1}(y)/y$, this gives us some
confidence in our understanding of the relations between the twist-2 and
twist-3 distributions. It will be interesting to check our model calculations
against experimental data at other values of $Q^{2}$. As $Q^{2}$ increases we
predict that the crossing point $x_{0}$ where $G_{2}(x_{0}) = 0$ will move
to larger $x$.
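The interplay just described can be made concrete for the twist-2 (Wandzura-Wilczek-type) piece $-g_{1}(x) + \int_{x}^{1} dy\, g_{1}(y)/y$. The sketch below uses an illustrative valence-like $g_{1}$, not the bag-model one, to show that this combination does change sign at an intermediate $x$:

```python
# Twist-2 part of G_2:  g2_ww(x) = -g1(x) + integral_x^1 g1(y)/y dy,
# evaluated with a simple trapezoid rule (stdlib only, toy g1).
def g1(x):
    return x ** 0.7 * (1.0 - x) ** 3          # illustrative polarised shape

def g2_ww(x, n=2000):
    h = (1.0 - x) / n
    ys = [x + i * h for i in range(n + 1)]
    vals = [g1(y) / y for y in ys]
    integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return -g1(x) + integral

# Positive at small x, negative at large x: a zero crossing at some x_0
# in between, as for the measured G_2.
assert g2_ww(0.05) > 0.0
assert g2_ww(0.6) < 0.0
```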
It is also interesting to note that while the bag model calculation for the
structure function $G_{1}^{p}(x)$ is not particularly good, presumably
because the calculation does not include effects arising from the axial
anomaly\cite{SST}, in the case of $G_{2}^{p}(x)$ the agreement with
the data is better, probably because of the cancellation between the two terms
in the twist-2 part of $G_{2}^{p}(x)$.
Finally, in Table 1 the calculated values for some of the moments of the
twist-2 and twist-3 parts of $G_{2}^{p}(x)$ are displayed and compared with
values from other calculations and those extracted from the experimental data.
Before concluding, there are some limitations to these calculations which
should be borne in mind. There is no flavour dependence of the calculated
parton distributions, so the distributions and structure functions shown
above should be taken as applying to scattering from an isoscalar target.
For our comparison of $R$ this is not important, as experiment has shown no
target dependence for $R$ \cite{E140}, but for $G_{2}$ the flavour dependence
will be as important as for $G_{1}$\cite{SST, SHT}. In the model, the
flavour dependence of the parton distributions comes from the $SU(6)$
spin-flavour part of the wavefunction, and the one gluon exchange mechanism
means that the two-quark intermediate state must be either a scalar (S=0)
or vector (S=1) with masses $m_{s}$ and $m_{v} > m_{s}$, respectively
\cite{CT, SST}. Furthermore no account has been taken of contributions to the
parton distributions from intermediate states consisting of four quarks.
However these contributions have only a small effect on valence
distributions, and this is all in the small $x$ region \cite{SST}, which is
not of great importance here.
An important correction to the parton distributions will come from mesonic
contributions via the so-called Sullivan process \cite{Sul}, where meson
($\pi, \rho, K \ldots$) exchange between the virtual photon and the
nucleon occurs. These processes are known to lead to important modifications
in the parton distributions at twist-2 \cite{SMST,SHT}, particularly for
$g_{1}(x)$, and there is no reason to suppose that the higher twist
distributions should be unaffected. This should be important in the model
calculation of the structure function $G_{2}$.
\section{Summary}
In this paper we have discussed the various parton distribution functions up
to twist four in the context of the MIT bag model. The discussion has been
limited to those distributions involving two-quark correlation functions,
however these include effects from the bag boundary, which is a simple model
of the effects of the gluon field. From the twist-3 distributions, which
involve quark-gluon correlations, we have calculated the twist-3 structure
function, $G_{2}(x)$, and found the calculation in good agreement with the
experimental data, which gives us some confidence in using the bag as a model
of the quark-gluon dynamics involved in DIS.
Four-quark correlation functions are expected to be small as they involve the
calculation of Hill-Wheeler-type overlap functions for four quarks in the bag.
Also these distributions are not directly accessible to experiment. However
in future work we shall look at the corrections four-quark distributions make
to the two-quark distributions calculated here.
At twist-four we have seen that the calculated distribution $f_{4}(x)$ gives
a contribution to the experimentally measured $R = F_{L}/2xF_{1}$. When added
to the contributions from perturbative QCD and target mass effects, the
calculation gives good agreement with the data at $Q^{2} \sim 1$ GeV$^{2}$
and medium values of $x$.
Some improvements can be made to our calculations. On the model side we can
add flavour dependence to our distributions, and we can also add the important
effects of the pion cloud. Also we can improve our evolution techniques for
the distribution functions. For the twist-2 distributions next-to-leading-order
QCD corrections are available. At twist-3 there have been recent developments
in the calculations of anomalous dimensions of the relevant operators, so that
a correct leading order QCD approach is possible, though this will be very
difficult.
I would like to thank the Institute for Theoretical Physics, University of
Adelaide, for hospitality and support while this work was in completion. I
would also like to thank Tony Thomas, Fernando Steffens and Steve Shrimpton
for valuable comments. This work was supported in part by the Australian
Research Council.
\newpage
\section{\@startsection{section}{1}%
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{definition}[theorem]{Definition}
\def1.0{1.0}
\def\varepsilon{\varepsilon}
\def\epsilon{\epsilon}
\def\xi{\xi}
\def\theta{\theta}
\def\kappa{\kappa}
\def\nu{\nu}
\def\alpha{\alpha}
\def\gamma{\gamma}
\def\delta{\delta}
\def\lambda{\lambda}
\def\phi{\phi}
\def\rho{\rho}
\def\sigma{\sigma}
\def\zeta{\zeta}
\def\omega{\omega}
\def\displaystyle{\displaystyle}
\def\infty{\infty}
\def\f{\frac}
\def\overline{\overline}
\def\bar{\bar}
\usepackage{tikz}
\newcommand{{\rm i}}{{\rm i}}
\newcommand{{\rm Re}}{{\rm Re}}
\newcommand{{\rm Im}}{{\rm Im}}
\newcommand{{\rm d}}{{\rm d}}
\newcommand{\mathbf{f}}{\mathbf{f}}
\newcommand{\mathbf{\Gamma}}{\mathbf{\Gamma}}
\newcommand{\mathbb{B}}{\mathbb{B}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{D}}{\mathbb{D}}
\newcommand{\mathbb{J}}{\mathbb{J}}
\renewcommand{\P}{\mathbb{P}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\renewcommand{\S}{\mathbb{S}}
\newcommand{\mathbb{A}}{\mathbb{A}}
\newcommand{\Theta}{\Theta}
\newcommand{\Lambda}{\Lambda}
\newcommand{[\![}{[\![}
\newcommand{]\!]}{]\!]}
\newcommand{S}{S}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{C}}{\mathbf{C}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{E}}{\mathbf{E}}
\newcommand{\mathbf{F}}{\mathbf{F}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\mathbf{H}}{\mathbf{H}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{P}}{\mathbf{P}}
\newcommand{\mathbf{Q}}{\mathbf{Q}}
\newcommand{\mathbf{L}}{\mathbf{L}}
\newcommand{\mathbf{I}}{\mathbf{I}}
\newcommand{\mathbf{X}}{\mathbf{X}}
\newcommand{\mathbf{R}}{\mathbf{R}}
\newcommand{\mathbf{T}}{\mathbf{T}}
\newcommand{\mathbf{\Psi}}{\mathbf{\Psi}}
\newcommand{\mathbf{\Phi}}{\mathbf{\Phi}}
\newcommand{\mathbf{\Upsilon}}{\mathbf{\Upsilon}}
\newcommand{\mathbf{u}}{\mathbf{u}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\newcommand{\mathscr{E}}{\mathscr{E}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal {D}}{\mathcal {D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathfrak{R}}{\mathfrak{R}}
\newcommand{\mathbbm{b}}{\mathbbm{b}}
\newcommand{\mathscr{T}}{\mathscr{T}}
\newcommand{\nabla}{\nabla}
\newcommand{\triangle}{\triangle}
\newcommand{\alpha}{\alpha}
\newcommand{\beta}{\beta}
\newcommand{\gamma}{\gamma}
\newcommand{\omega}{\omega}
\newcommand{\lambda}{\lambda}
\newcommand{\delta}{\delta}
\newcommand{\sigma}{\sigma}
\newcommand{\partial}{\partial}
\newcommand{\kappa}{\kappa}
\newcommand{\epsilon}{\epsilon}
\newcommand{\rho}{\rho}
\newcommand{\theta}{\theta}
\newcommand{\vartheta}{\vartheta}
\newcommand{\varpi}{\varpi}
\newcommand{\Theta}{\Theta}
\newcommand{\Delta}{\Delta}
\newcommand{\Gamma}{\Gamma}
\newcommand{{\Lambda}}{{\Lambda}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\langle}{\langle}
\newcommand{\rangle}{\rangle}
\newcommand{|\!|\!|}{|\!|\!|}
\newcommand{{[\![}}{{[\![}}
\newcommand{{]\!]}}{{]\!]}}
\newcommand{\overset{\mbox{\tiny{def}}}{=}}{\overset{\mbox{\tiny{def}}}{=}}
\newcommand{\int_0^t}{\int_0^t}
\begin{document}
\title[Boltzmann equation with time-periodic boundary]{The Boltzmann equation with time-periodic boundary temperature}
\author[R.-J. Duan]{Renjun Duan}
\address[R.-J. Duan]{Department of Mathematics, The Chinese University of Hong Kong, Hong Kong}
\email{rjduan@math.cuhk.edu.hk}
\author[Y. Wang]{Yong Wang}
\address[Y. Wang]{Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China, and University of Chinese Academy of Sciences}
\email{yongwang@amss.ac.cn}
\author[Z. Zhang]{Zhu Zhang}
\address[Z. Zhang]{Department of Mathematics, The Chinese University of Hong Kong, Hong Kong}
\email{zzhang@math.cuhk.edu.hk}
\begin{abstract}
This paper is concerned with the boundary-value problem on the Boltzmann equation in bounded domains with diffuse-reflection boundary where the boundary temperature is time-periodic. We establish the existence of time-periodic solutions with the same period for both hard and soft potentials, provided that the time-periodic boundary temperature is sufficiently close to a stationary one which has small variations around a positive constant. The dynamical stability of time-periodic profiles is also proved under small perturbations, and this in turn yields the non-negativity of the profile. For the proof, we develop new estimates in the time-periodic setting. \\
\begin{center}\bf
This paper is dedicated to Professor Philippe G. Ciarlet on the occasion of his 80th birthday
\end{center}
\end{abstract}
\subjclass[2010]{35Q20, 35B20, 35B35, 35B45}
\keywords{Boltzmann equation, time-periodic boundary, time-periodic solutions, existence, dynamical stability, a priori estimates}
\maketitle
\tableofcontents
\thispagestyle{empty}
\section{Introduction}
Let a rarefied gas be contained in a bounded domain $\Omega \subset \mathbb{R}^3$ with smooth boundary $\partial\Omega$ on which the diffuse-reflection condition is postulated. We assume that the velocity of the boundary is zero while the temperature of the boundary is periodic in time. One basic problem is to see whether or not there exists a time-periodic motion of such rarefied gas with the same period.
To treat the problem, we assume that the motion of the rarefied gas is governed by the Boltzmann equation
\begin{align}\label{1.1}
\partial_tF+v\cdot\nabla_xF=Q(F,F),\quad t\in\mathbb{R},\ x\in \Omega,\ v\in\mathbb{R}^3.
\end{align}
Here $F=F(t,x,v)\geq 0$ stands for the density distribution function of gas particles with position $x\in \Omega$ and velocity $v\in\mathbb{R}^3$ at time $t\in \mathbb{R}$.
The Boltzmann collision operator $Q(\cdot,\cdot)$ is of the non-symmetric bilinear form:
\begin{align*}
Q(G,F)=&\int_{\mathbb{R}^3}\int_{\mathbb{S}^2} B(v-u,\omega)G(u')F(v')\,{\rm d}\omega{\rm d} u\notag\\
&-\int_{\mathbb{R}^3}\int_{\mathbb{S}^2} B(v-u,\omega)G(u)F(v)\,{\rm d}\omega{\rm d} u
\end{align*}
Here the relation between the velocity pair $(v',u')$ after collision with the velocity pair $(v,u)$ before collision for two particles is given by
\begin{equation*}
v'
=v-[(v-u)\cdot\omega]\omega,\quad
u'
=u+[(v-u)\cdot\omega]\omega,
\end{equation*}
with $\omega\in \S^2$, satisfying the conservations of momentum and energy due to the elastic collision:
\begin{equation*}
v'+u'=v+u,\quad |v'|^2+|u'|^2=|v|^2+|u|^2.
\end{equation*}
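These conservation laws follow directly from the collision map; a quick numerical verification of both identities (illustrative only, not part of the analysis) can be written as:

```python
# Check of momentum and energy conservation for the collision map
#   v' = v - [(v-u).w] w,  u' = u + [(v-u).w] w,  with |w| = 1.
import math
import random

def collide(v, u, w):
    """Return (v', u') for the elastic collision with unit vector w."""
    c = sum((vi - ui) * wi for vi, ui, wi in zip(v, u, w))
    vp = [vi - c * wi for vi, wi in zip(v, w)]
    up = [ui + c * wi for ui, wi in zip(u, w)]
    return vp, up

random.seed(0)
v = [random.gauss(0.0, 1.0) for _ in range(3)]
u = [random.gauss(0.0, 1.0) for _ in range(3)]
w = [random.gauss(0.0, 1.0) for _ in range(3)]
r = math.sqrt(sum(wi * wi for wi in w))
w = [wi / r for wi in w]                      # normalise: w on S^2

vp, up = collide(v, u, w)
energy = lambda z: sum(zi * zi for zi in z)
for i in range(3):                            # momentum: v' + u' = v + u
    assert abs(vp[i] + up[i] - v[i] - u[i]) < 1e-12
assert abs(energy(vp) + energy(up) - energy(v) - energy(u)) < 1e-12
```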
The Boltzmann collision kernel $B(v-u,\omega)$ takes the form of
\begin{equation*}
B(v-u,\omega)=|v-u|^{\gamma}b(\phi),
\end{equation*}
with
\begin{equation*}
-3<\gamma\leq 1,\quad 0\leq b(\phi)\leq C|\cos\phi|,\quad \cos\phi:=\frac{(v-u)\cdot \omega}{|v-u|},
\end{equation*}
for a generic constant $C$. Note that the angular cutoff assumption is required and we allow for both hard and soft potentials in the full range.
To solve the Boltzmann equation \eqref{1.1} in the bounded domain, it is supplemented with the following diffuse-reflection boundary condition:
\begin{align}\label{1.2}
F(t,x,v)\big|_{v\cdot n(x)<0}=\mu_{\theta}\int_{u\cdot n(x)>0}F(t,x,u)|u\cdot n(x)|\,{\rm d} u,
\end{align}
for any $t\in \mathbb{R}$, where $n(x)$ denotes the outward normal vector at the boundary point $x\in \partial\Omega$, and $\mu_\theta$ takes the form of
\begin{equation}
\label{def.muth}
\mu_{\theta}:=\mu_{\theta(t,x)}(v)=\frac{1}{2\pi\theta^2(t,x)}e^{-\frac{|v|^2}{2\theta(t,x)}}.
\end{equation}
Here we have assumed that the boundary velocity is zero and the boundary temperature is a function $\theta(t,x)$ which is periodic in time and may also depend on the space variable.
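The normalisation in \eqref{def.muth} is chosen so that $\int_{v\cdot n(x)<0}\mu_\theta(v)\,|v\cdot n(x)|\,{\rm d} v=1$ for every $\theta>0$, which makes the diffuse-reflection condition preserve the mass flux at the wall. A deterministic numerical check of this identity (the quadrature scheme is our own illustration, not taken from the paper):

```python
# Trapezoid check of  int_{v.n<0} mu_theta(v) |v.n| dv = 1  with n = e_3;
# the Gaussian factorises, so the 3-D integral is a product of 1-D ones.
import math

def flux_integral(theta, n=400, R=10.0):
    h = 2.0 * R / n
    g = [math.exp(-(-R + i * h) ** 2 / (2.0 * theta)) for i in range(n + 1)]
    plane = (h * (0.5 * g[0] + sum(g[1:-1]) + 0.5 * g[-1])) ** 2  # (v1, v2)
    h3 = R / n
    gv = [(R - i * h3) * math.exp(-(R - i * h3) ** 2 / (2.0 * theta))
          for i in range(n + 1)]              # |v3| e^{-v3^2/2 theta}, v3 in [-R,0]
    normal = h3 * (0.5 * gv[0] + sum(gv[1:-1]) + 0.5 * gv[-1])
    return plane * normal / (2.0 * math.pi * theta ** 2)

# Analytically the flux integral equals 1 for every theta > 0.
for theta in (0.8, 1.0, 1.3):
    assert abs(flux_integral(theta) - 1.0) < 1e-3
```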
Throughout this paper, we assume that $\Omega =\{x:\xi (x)<0\}$ is connected and
bounded with $\xi (x)$ being a smooth function in $\mathbb{R}^3$. We assume $\nabla \xi
(x)\neq 0$ at each boundary point $x$ with $\xi (x)=0$. The outward normal vector $n(x)$ is therefore given by
$n(x)=\nabla \xi (x)/|\nabla \xi (x)|$,
and it can be extended smoothly near $\partial \Omega =\{x:\xi (x)=0\}.$
We say that $
\Omega $ is convex if there exists a constant $c_{\xi }>0$ such that
\begin{equation*}
\sum_{i,j=1}^3\frac{\partial^2\xi}{\partial x_i\partial x_j} (x)\zeta_{i}\zeta_{j}\geq c_{\xi }|\zeta |^{2}
\end{equation*}
for all $x$ such that $\xi (x)\leq 0$ and for all $\zeta=(\zeta_1,\zeta_2,\zeta_3) \in \mathbb{R}^{3}$.
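For the unit ball, $\xi(x)=|x|^{2}-1$, the Hessian of $\xi$ is $2\,\mathrm{Id}$, so the convexity condition holds with $c_{\xi}=2$. A finite-difference check of the quadratic form (illustrative only):

```python
# Second directional derivative z^T D^2 xi(x) z by central differences,
# tested on xi(x) = |x|^2 - 1, whose Hessian is exactly 2*Id.
import random

def hess_quad(xi, x, z, h=1e-4):
    xp = [a + h * b for a, b in zip(x, z)]
    xm = [a - h * b for a, b in zip(x, z)]
    return (xi(xp) - 2.0 * xi(x) + xi(xm)) / h ** 2

xi_ball = lambda x: sum(c * c for c in x) - 1.0   # unit ball

random.seed(2)
for _ in range(100):
    x = [random.uniform(-0.5, 0.5) for _ in range(3)]   # inside {xi <= 0}
    z = [random.gauss(0.0, 1.0) for _ in range(3)]
    z2 = sum(c * c for c in z)
    assert hess_quad(xi_ball, x, z) >= 1.9 * z2         # c_xi = 2 up to FD error
```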
We denote the phase boundary in the space $\Omega \times \mathbb{R}^{3}$ as $
\gamma =\partial \Omega \times \mathbb{R}^{3}$, and split it into the outgoing
boundary $\gamma _{+}$, the incoming boundary $\gamma _{-}$, and the
singular boundary $\gamma _{0}$ for grazing velocities, respectively:
\begin{align}
\gamma _{+} &=\{(x,v)\in \partial \Omega \times \mathbb{R}^{3}:
n(x)\cdot v>0\}, \nonumber\\
\gamma _{-} &=\{(x,v)\in \partial \Omega \times \mathbb{R}^{3}:
n(x)\cdot v<0\}, \nonumber\\
\gamma _{0} &=\{(x,v)\in \partial \Omega \times \mathbb{R}^{3}:
n(x)\cdot v=0\}.\nonumber
\end{align}
Note that $\mu_{\theta}$ satisfies the boundary condition \eqref{1.2} but may not be a solution to the Boltzmann equation \eqref{1.1} since the boundary temperature $\theta(t,x)$ may have nontrivial variations in $t$ or $x$. When $\theta(t,x)$ is identical to a constant $\theta_0>0$ (without loss of generality we take $\theta_0=1$ in what follows), the global Maxwellian corresponding to \eqref{def.muth} is reduced to
\begin{equation}
\label{def.gm}
\mu=\mu(v):=\frac{1}{2\pi}e^{-\frac{|v|^2}{2}},
\end{equation}
which satisfies both \eqref{1.1} and \eqref{1.2}. In such case, there have been extensive studies of existence, large-time behavior and regularity of small-amplitude $L^\infty$ solution around $\mu$ to the initial-boundary value problem on the Boltzmann equation, for instance, \cite{EGKM,EGKM-18,EGM,Guo2, GKTT-IM, K,LYa}. Readers may also refer to references therein for related works.
When $\theta(t,x)$ is a time-independent function $\bar{\theta}(x)$ which has a small variation around $\theta_0$, namely, $\sup_{\partial \Omega}|\bar{\theta}-\theta_0|$ is small enough, one may expect that the large-time behavior of solutions to the initial-boundary value problem on the Boltzmann equation is determined by solutions to
the following steady problem
\begin{equation}\label{sp}
\left\{
\begin{aligned}
&v\cdot\nabla_xF=Q(F,F), \quad x\in \Omega,\ v\in \mathbb{R}^3,\\
&F(x,v)\big|_{v\cdot n(x)<0}=\mu_{\bar{\theta}(x)}\int_{u\cdot n(x)>0}F(x,u)|u\cdot n(x)|\,{\rm d} u.
\end{aligned}\right.
\end{equation}
Indeed, for hard potentials $0\leq \gamma\leq 1$, \cite{EGKM} established the existence and dynamical stability of a stationary solution
${F^*(x,v)}$
to \eqref{sp}. Recently, the result of \cite{EGKM} has been extended in \cite{DHWZ} to the case of soft potentials $-3<\gamma<0$. We refer readers to \cite{DHWZ} for extensive discussions on the subject.
In the current work, we consider the case when $\theta(t,x)$ is a general time-space-dependent function assumed to be periodic in time with period $T>0$ and sufficiently close to $\bar{\theta}(x)$. Under such situation, we shall prove that there exists a unique time-periodic solution $F^{per}(t,x,v)$ around $F^*(x,v)$
with the same period $T$ for the problem \eqref{1.1} and \eqref{1.2}, and further show the dynamical stability of $F^{per}(t,x,v)$ under small perturbations in the sense that the solution $F(t,x,v)$ to the initial-boundary value problem on the Boltzmann equation \eqref{1.1} with initial data
$F(0,x,v)=F_0(x,v)$
and boundary data \eqref{1.2} exists globally in time and is time-asymptotically close to $F^{per}(t,x,v)$ whenever $F_0(x,v)$ is sufficiently close to $F^{per}(0,x,v)$. Note that the limiting situation $T=0$ for the period of $\theta(t,x)$ is also allowed and this corresponds to the stationary case considered in \cite{EGKM} and \cite{DHWZ} as mentioned above. Therefore, the current work can be regarded as an extension of \cite{EGKM,DHWZ} to the time-periodic boundary.
In what follows we state the main results of this paper. Let
\begin{equation}
\label{def.wnot}
w_{q,\beta}(v):=(1+|v|^2)^{\frac{\beta}{2}}e^{q|v|^2}
\end{equation}
be the velocity weight function, and let
${F^*(x,v)}$ be the steady solution to \eqref{sp} corresponding to the stationary boundary temperature $\bar{\theta}(x)$ constructed in \cite{EGKM,DHWZ}. We assume that $F^*(x,v)$ has the same total mass as that of the global Maxwellian $\mu$ in \eqref{def.gm}, i.e.,
$$
\int_{\Omega}\int_{\mathbb{R}^3} [F^\ast(x,v)-\mu(v)]\,{\rm d} v{\rm d} x=0.
$$
To the end, for brevity we shall write $w_{q,\beta}$ as $w$ by ignoring the dependence of $w$ on parameters $q$ and $\beta$. The first result is concerned with the existence of time-periodic solutions of small amplitude.
\begin{theorem}\label{thm1.1}
Let $-3<\gamma\leq 1$, $0\leq q<\f18$ and $\beta>\max\{3,3-\gamma\}$. Assume that $\theta(t,x)$ is a time-periodic function with period $T>0$. Then there exist $\delta>0$ and $C>0$ such that if
\begin{equation*}
\delta_1:=\sup_{0\leq t\leq T}|\theta(t,\cdot)-\bar{\theta}(\cdot)|_{L^\infty(\partial\Omega)}
\leq \delta,\quad \delta_2:=|\bar{\theta}(\cdot)-1|_{L^\infty(\partial\Omega)}
\leq \delta,
\end{equation*}
then the Boltzmann equation \eqref{1.1} with the diffuse-reflection boundary \eqref{1.2} admits a unique nonnegative time-periodic solution with the same period $T$:
\begin{align}\label{1.4}
F^{per}(t,x,v)=F^{*}(x,v)
+\sqrt{\mu(v)}f^{per}(t,x,v)\geq 0,
\end{align}
satisfying
\begin{equation}
\label{add.thmcon}
\int_\Omega\int_{\mathbb{R}^3}f^{per}(t,x,v)\sqrt{\mu(v)}\,{\rm d} v{\rm d} x=0,\quad t\in \mathbb{R},
\end{equation}
and
\begin{align}\label{1.5}
\sup_{0\leq t\leq T}\|w f^{per}(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|w f^{per}(t)|_{L^\infty(\gamma)}\leq C\delta_1.
\end{align}
Moreover, if $\Omega$ is convex, $\theta(t,x)$ is continuous on $\mathbb{R}\times\partial\Omega$, and $\bar{\theta}(x)$ is continuous on $\partial\Omega$, then $F^{per}(t,x,v)$ is also continuous away from the grazing set $\mathbb{R}\times \gamma_0$.
\end{theorem}
The second result is concerned with the large-time behavior of solutions to the initial-boundary value problem
\begin{equation}
\label{ibvp}
\left\{\begin{aligned}
&\partial_tF+v\cdot \nabla_xF=Q(F,F),\quad t>0,\ x\in\Omega,\ v\in \mathbb{R}^3,\\
&F(t,x,v)\big|_{v\cdot n(x)<0}=\mu_{\theta(t,x)}\int_{u\cdot n(x)>0}F(t,x,u)|u\cdot n(x)|\,{\rm d} u,\\
&F(0,x,v)=F_0(x,v),
\end{aligned}\right.
\end{equation}
whenever $F_0(x,v)$ is around $F^{per}(0,x,v)$ in a sense to be clarified later on.
\begin{theorem}\label{thm1.2}
Let $-3<\gamma\leq1$, $0<q<\f18$ and $\beta>\max\{3,3-\gamma\}$. Then there exist constants $\delta'$, $c>0$, $\varepsilon_0>0$ and $C>0$ such that if
$$
\sup_{0\leq t\leq T}|\theta(t,\cdot)-1|_{L^{\infty}(\partial\Omega)}\leq\delta',
$$
and
$F_0(x,v)=F^{per}(0,x,v)+\sqrt{\mu(v)}f_0(x,v)\geq 0$ satisfies
\begin{align}\label{1.7}
\int_{\Omega}\int_{\mathbb{R}^3}f_{0}(x,v)\sqrt{\mu(v)}\,{\rm d} v{\rm d} x=0,
\end{align}
and
\begin{align*}
\|w f_0\|_{L^{\infty}}\leq \varepsilon_0,
\end{align*}
then the initial-boundary value problem \eqref{ibvp} on the Boltzmann equation
admits a unique global-in-time solution
$$
F(t,x,v)=F^{per}(t,x,v)+\sqrt{\mu(v)}f(t,x,v)\geq 0,\quad t\geq 0,x\in \Omega,v\in\mathbb{R}^3,
$$
satisfying
\begin{equation*}
\int_{\Omega}\int_{\mathbb{R}^3}f(t,x,v)\sqrt{\mu(v)}\,{\rm d} v{\rm d} x=0
\end{equation*}
and
\begin{equation}\label{1.9}
\|w f(t)\|_{L^{\infty}}+|w f(t)|_{L^{\infty}(\gamma)}
\leq C e^{-c t^{\rho}}\|w f_0\|_{L^{\infty}},
\end{equation}
for all $t\geq 0$, where $\rho>0$ is determined by
\begin{equation}
\label{def.rho}
\rho=\left\{\begin{aligned}
&1\qquad\qquad\qquad\quad\ \text{if}\quad \gamma\in [0,1],\\
&\frac{2}{2+|\gamma|}\in (0,1)\qquad \text{if}\quad \gamma\in (-3,0).
\end{aligned}\right.
\end{equation}
Moreover, if $\Omega$ is convex, $F_0(x,v)$ is continuous except on $\gamma_0$ satisfying
\begin{align*}
F_0(x,v)|_{\gamma_-}=\mu_{\theta}(0,x,v) \int_{u\cdot n(x)>0} F_0(x,u) |u\cdot n(x)| \,{\rm d} u,
\end{align*}
and $\theta(t,x) $ is continuous over $\mathbb{R}\times\partial \Omega$, then the solution $F(t,x,v)$ is also continuous in $[0,\infty)\times \{\bar{\Omega}\times \mathbb{R}^{3}\setminus\gamma_0\}$.
\end{theorem}
\begin{remark}
In the soft potential case $-3<\gamma<0$, the time-decay estimate \eqref{1.9} implies that there is no loss of velocity weight in the weighted $L^\infty$ space for the solution compared to the one for initial data, which is different from the recent result \cite{LYa}. We refer readers to \cite{DHWZ} for more details.
\end{remark}
The issue about the time-periodic solutions to the Boltzmann equation has been studied in \cite{U} and \cite{DUYZ}. Particularly, \cite{U} first considered the case where the Boltzmann equation is driven by a time-periodic source term in the whole space. The main idea of \cite{U} is to study the extra time-decay property of the linearized solution operator $U(t)$ and look for the time-periodic solution as a fixed point to an integral equation
\begin{align*}
f(t)=\int_{-\infty}^t U(t-s) N_f(s)\,{\rm d} s,
\end{align*}
where $N_f(\cdot)$ includes both the nonlinear term and the time-periodic inhomogeneous source. The approach of \cite{U} was later applied in \cite{DUYZ} to the Boltzmann equation with a small time-periodic external force. Note that \cite{DUYZ} requires the strong assumption that the space dimension is at least five, and it remains an open problem to remove this restriction.
A similar time-periodic problem on the Vlasov-Poisson-Fokker-Planck system in the whole space was also considered in \cite{DL} when the background density profile is time-periodic around a positive constant, where the proof is based on another approach different from \cite{U}. It should be pointed out that three space dimensions are allowed in \cite{DL} due to the exponential time-decay structure of the linearized system.
In the current work, we carry out a proof of existence of time-periodic solutions which is different from \cite{DL,DUYZ,U} mentioned above but is similar to the one in \cite{DHWZ} for the steady problem. In fact, instead of solving the Cauchy problem, the basic idea in the present paper is to regard the time-periodic problem as a special boundary value problem over $[0,T]\times\Omega\times\mathbb{R}^3$, with the time-periodic boundary condition at $t=0$ and $t=T$. For the proof, we develop new estimates in the time-periodic setting.
In the end we remark that, motivated by the works \cite{AKFG} and \cite{TA}, the existence and dynamical stability of time-periodic profiles to the Boltzmann equation in a bounded interval have recently been established in \cite{DZ-moving} in the case when one boundary point moves with a small time-periodic velocity. Compared to the current work, where the boundary temperature is time-periodic, the mathematical analysis in \cite{DZ-moving} is much harder, since the reformulated problem is related to the Boltzmann equation with a time-periodic external force in a bounded domain.
The rest of this paper is organized as follows. In Section 2, we make a list of basic lemmas which will be used in the later proof. Then, Section 3 and Section 4 are devoted to the proof of Theorem \ref{thm1.1} and Theorem \ref{thm1.2}, respectively.
\medskip
\noindent{\it Notations.} Throughout this paper, $C$ denotes a generic positive constant which may vary from line to line. $C_a,C_b,\cdots$ denote the generic positive constants depending on $a,~b,\cdots$, respectively, which also may vary from line to line. $A\lesssim B$ means that there exists a constant $C>0$ so that $A\leq C B$ and $A\lesssim_{a}B$ means that the constant depends on $a$.
$\|\cdot\|_{L^2}$ denotes the standard $L^2(\Omega\times\mathbb{R}^3_v)$-norm and $\|\cdot\|_{L^\infty}$ denotes the $L^\infty(\Omega\times\mathbb{R}^3_v)$-norm. We denote $\langle\cdot,\cdot\rangle$ as the inner product in $L^2(\Omega\times \mathbb{R}^3_v)$ or $L^2(\mathbb{R}^3_v)$. Moreover, we define $\|\cdot\|_{L^{2}([0,T];L^2)}=\big\|\|\cdot\|_{L^2}\big\|_{L^{2}[0,T]}$. For the phase boundary integration, we define $d\gamma\equiv |n(x)\cdot v| dS(x)dv$, where $dS(x)$ is the surface measure and define $|f|_{L^p}^p=\int_{\gamma}|f(x,v)|^pd\gamma$ and the corresponding space is denoted as $L^p(\partial\Omega\times\mathbb{R}^3)=L^p(\partial\Omega\times\mathbb{R}^3;d\gamma)$. Furthermore, we denote $|f|_{L^p(\gamma_{\pm})}=|f\mathbf{1}_{\gamma_{\pm}}|_{L^p}$ and $|f|_{L^\infty(\gamma_{\pm})}=|f\mathbf{1}_{\gamma_{\pm}}|_{L^\infty}$. For simplicity, we denote $|f|_{L^\infty(\gamma)}=|f|_{L^\infty(\gamma_+)}+|f|_{L^\infty(\gamma_-)}$.
\section{Preliminaries}
Recall (cf.~\cite{Gl}) that around the global Maxwellian $\mu$ as in \eqref{def.gm}, one can write
\begin{equation*}
\frac{1}{\sqrt{\mu}}Q(\mu+\sqrt{\mu}f,\mu+\sqrt{\mu} f)=-Lf+\Gamma (f,f),
\end{equation*}
where $L$ and $\Gamma(\cdot,\cdot)$ are the corresponding linearized operator and nonlinear operator respectively given by
\begin{align*}
Lf=-\f1{\sqrt{\mu}}\Big\{Q(\mu,\sqrt{\mu}f)+Q(\sqrt{\mu}f,\mu)\Big\},\nonumber
\end{align*}
and
\begin{align*}
\Gamma(f,g)=\f1{\sqrt{\mu}}Q(\sqrt{\mu}f,\sqrt{\mu}g).
\end{align*}
Moreover, one has $L=\nu-K$, where the multiplication operator $\nu=\nu(v)$, the collision frequency, is defined by
\begin{equation*}
\nu(v)=\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\mu(u)\,{\rm d}\omega {\rm d} u\sim (1+|v|)^{\gamma},
\end{equation*}
and the integral operator $K:=K_2-K_1$ is defined in terms of
\begin{align}
(K_1f)(v)&=\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\sqrt{\mu(v)\mu(u)}f(u)\,{\rm d}\omega {\rm d} u,\nonumber
\end{align}
and
\begin{align}
(K_2f)(v)&=\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\sqrt{\mu(u)\mu(u')}f(v')\,{\rm d}\omega {\rm d} u\nonumber\\
&\quad+\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\sqrt{\mu(u)\mu(v')}f(u')\,{\rm d}\omega {\rm d} u.\nonumber
\end{align}
\begin{lemma}[\cite{Guo-03,Guo2}]
The operator $L$ is self-adjoint and non-negative. The kernel of $L$ is a five-dimensional subspace spanned by the following orthonormal basis: $$
e_0=(2\pi)^{-\f14}\sqrt{\mu};\quad e_i=(2\pi)^{-\f14}v_i\sqrt{\mu},\quad i=1,2,3;\quad e_4=\frac{(2\pi)^{-\f14}}{\sqrt{6}}(|v|^2-3)\sqrt{\mu}.
$$
Define the projection $P$ by
\begin{align}\label{P}
Pf=\sum_{i=0}^4\langle f,e_i\rangle e_i.
\end{align}
Then there exists a constant $c_0>0$ such that
\begin{align}\label{c}
\langle Lf,f\rangle\geq c_0|\nu^{1/2}(I-P)f|_{L^{2}(\mathbb{R}^3)}^2.
\end{align}
\end{lemma}
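As a sanity check on the projection \eqref{P}, the following numerical sketch verifies that $\{e_i\}$ is orthonormal in $L^2(\mathbb{R}^3_v)$; here we assume the normalization $\mu(v)=(2\pi)^{-1}e^{-|v|^2/2}$ in \eqref{def.gm} (the constant is not restated in this excerpt; it is the one consistent with the factors $(2\pi)^{-1/4}$ above). Writing $e_i=(2\pi)^{-1/4}p_i(v)\sqrt{\mu}$, one has $\langle e_i,e_j\rangle=\mathbb{E}[p_i(Z)p_j(Z)]$ for a standard Gaussian $Z$ on $\mathbb{R}^3$, which can be evaluated by Gauss--Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# 1D Gauss-Hermite rule rewritten for E[f(Z)], Z ~ N(0,1):
# E[f(Z)] = pi^{-1/2} * sum_k w_k f(sqrt(2) x_k)
x, w = hermgauss(10)
nodes, weights = np.sqrt(2.0) * x, w / np.sqrt(np.pi)

# tensorize to three dimensions
V1, V2, V3 = np.meshgrid(nodes, nodes, nodes, indexing="ij")
W = weights[:, None, None] * weights[None, :, None] * weights[None, None, :]

# polynomial parts p_i of the basis e_i = (2*pi)^{-1/4} p_i(v) sqrt(mu)
absv2 = V1**2 + V2**2 + V3**2
p = [np.ones_like(V1), V1, V2, V3, (absv2 - 3.0) / np.sqrt(6.0)]

# Gram matrix <e_i, e_j> = E[p_i(Z) p_j(Z)] under the assumed normalization
G = np.array([[np.sum(W * p[i] * p[j]) for j in range(5)] for i in range(5)])
print(np.max(np.abs(G - np.eye(5))))  # machine zero: the quadrature is exact here
```

Since the integrands are polynomials of degree at most four per variable, a ten-node rule is exact and the Gram matrix equals the identity up to rounding.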
Note that the integral operator $K$ can be written as
\begin{align*}
Kf(v)=\int_{\mathbb{R}^3}k(v,\eta)f(\eta)\,{\rm d} \eta,
\end{align*}
with a symmetric kernel $k(v,\eta)$.
As in \cite{Guo-03,GS}, we introduce a smooth cutoff function $0\leq\chi_m\leq 1$ with $0<m\leq 1$ such that
\begin{equation*}
\chi_m(s)=1~~\mbox{for}~s\leq m;\qquad \chi_m(s)=0~~\mbox{for}~s\geq2m.
\end{equation*}
Then we define
\begin{align}
(K^mf)(v)&=\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\chi_m(|v-u|)\sqrt{\mu(u)\mu(u')}f(v')\,{\rm d}\omega {\rm d} u\nonumber\\
&\quad+\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\chi_m(|v-u|)\sqrt{\mu(u)\mu(v')}f(u')\,{\rm d}\omega {\rm d} u\nonumber\\
&\quad-\int_{\mathbb{R}^3}\int_{\mathbb{S}^2}B(v-u,\omega)\chi_m(|v-u|)\sqrt{\mu(v)\mu(u)}f(u)\,{\rm d}\omega {\rm d} u\nonumber\\
&=:K_2^mf(v)-K^m_1f(v),\nonumber
\end{align}
and
$K^c=K-K^m$.
Correspondingly, one can write
\begin{align}
(K^mf)(v)=\int_{\mathbb{R}^3} k^m(v,\eta) f(\eta)\,{\rm d}\eta,\quad
(K^cf)(v)=\int_{\mathbb{R}^3} k^c(v,\eta) f(\eta)\,{\rm d}\eta.\nonumber
\end{align}
The following estimates on $K^m$ and $K^c$ can be found in \cite{DHWY}.
\begin{lemma}
Let $-3<\gamma\leq 1$. Then, for any $0<m\leq1$, it holds that
\begin{equation}\label{2.2}
|(K^mg)(v)|\leq Cm^{3+\gamma}e^{-\frac{|v|^2}{6}}\|g\|_{L^\infty},
\end{equation}
where $C$ is a generic constant independent of $m$.
The kernels $k^m(v,\eta)$ and $k^c(v,\eta)$ satisfy that for $0\leq a\leq 1$,
\begin{align*}
|k^m(v,\eta)|\leq
C\Big\{|v-\eta|^\gamma+|v-\eta|^{-\frac{3-\gamma}2}\Big\}e^{-\frac{|v|^2+|\eta|^2}{16}},
\end{align*}
and
\begin{align}\label{2.4}
|k^c(v,\eta)|&\leq\frac{Cm^{a(\gamma-1)}}{|v-\eta|^{1+\frac{(1-a)}{2}(1-\gamma)}}\frac{1}{(1+|v|+|\eta|)^{a(1-\gamma)}}e^{-\frac{|v-\eta|^2}{10}}e^{-\frac{||v|^2-|\eta|^2|^2}{16|v-\eta|^2}}\nonumber\\
&\quad+C|v-\eta|^\gamma [1-\chi_m(|v-\eta|)] e^{-\frac{|v|^2}{4}}e^{-\frac{|\eta|^2}{4}},
\end{align}
where
$C$ is a generic constant independent of $m$ and $a$.
\end{lemma}
Particularly, since the constant $C$ in
\eqref{2.4} does not depend on $a\in[0,1]$, we have the following estimates on $k^c(v,\eta)$
by taking $a=1$ and $a=0$.
\begin{lemma}[\cite{DHWY}]
Let $-3<\gamma\leq 1$. One has
\begin{align}\label{2.5}
|k^c(v,\eta)|&\leq\frac{Cm^{\gamma-1}}{|v-\eta|(1+|v|+|\eta|)^{1-\gamma}}e^{-\frac{|v-\eta|^2}{10}}e^{-\frac{||v|^2-|\eta|^2|^2}{16|v-\eta|^2}},
\end{align}
and
\begin{align}\label{2.7}
|k^c(v,\eta)|&\leq
C |v-\eta|^\gamma e^{-\frac{|v|^2}{4}}e^{-\frac{|\eta|^2}{4}}
+C
|v-\eta|^{-\frac{3-\gamma}2}e^{-\frac{|v-\eta|^2}{10}}e^{-\frac{||v|^2-|\eta|^2|^2}{16|v-\eta|^2}}.
\end{align}
Moreover, it holds that
\begin{equation}\label{2.8}
\int_{\mathbb{R}^3}|k^c(v,\eta)|\cdot \frac{(1+|v|^2)^{\frac{\beta}{2}}e^{q|v|^2}}{(1+|\eta|^2)^{\frac{\beta}{2}}e^{q|\eta|^2}}\,{\rm d}\eta\leq
Cm^{\gamma-1}(1+|v|)^{\gamma-2},
\end{equation}
and
\begin{align*}
\int_{\mathbb{R}^3}|k^c(v,\eta)|\cdot \frac{(1+|v|^2)^{\frac{\beta}{2}}e^{q|v|^2}}{(1+|\eta|^2)^{\frac{\beta}{2}}e^{q|\eta|^2}}\,{\rm d}\eta\leq
C(1+|v|)^{-1},
\end{align*}
where $\beta\geq0$ is an arbitrary constant and $0\leq q<1/8$. Here the constant $C$ in all the estimates above is independent of $m$.
\end{lemma}
In what follows we recall the back-time trajectory in phase space associated with the diffuse-reflection boundary condition \eqref{1.2}, first introduced in \cite{Guo2}. First of all, for each boundary point $x\in \partial\Omega$, we define the velocity space of outgoing particles:
\begin{equation}
\mathcal{V}(x)=\{v'\in\mathbb{R}^3:~v'\cdot n(x)>0\},\nonumber
\end{equation}
associated with the probability measure ${\rm d}\sigma={\rm d}\sigma(x):=
\mu(v')|v'\cdot n(x)|\,{\rm d} v'$.
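That ${\rm d}\sigma$ is indeed a probability measure can be checked directly. Assuming the normalization $\mu(v)=(2\pi)^{-1}e^{-|v|^2/2}$ in \eqref{def.gm} (not restated in this excerpt), the half-space integral factorizes into a two-dimensional Gaussian mass and a one-dimensional flux integral, and a quick quadrature sketch confirms $\int_{\mathcal{V}(x)}{\rm d}\sigma=1$:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule (avoids version-dependent numpy helpers)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# take n(x) = e_3; then d(sigma) = mu(v')|v'.n| dv' on {v'.n > 0} factorizes
t = np.linspace(-12.0, 12.0, 24001)
gauss2d = trap(np.exp(-t**2 / 2.0), t) ** 2   # (v_1, v_2) Gaussian mass
u = np.linspace(0.0, 12.0, 24001)
flux = trap(np.exp(-u**2 / 2.0) * u, u)       # v_3-integral of e^{-v_3^2/2} v_3
total = gauss2d * flux / (2.0 * np.pi)        # with mu = (2*pi)^{-1} e^{-|v|^2/2}
print(total)                                  # ~ 1
```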
Given $(t,x,v)$, let $[{X}(s; t,x,v),V(s;t,x,v)]$ be the backward bi-characteristics for the Boltzmann equation, which is determined by
\begin{align}
\begin{cases}
\displaystyle \frac{{\rm d} {X}(s; t,x,v)}{{\rm d} s}=V(s; t,x,v),\\[2mm]
\displaystyle \frac{{\rm d} V(s; t,x,v)}{{\rm d} s}=0,\\[2mm]
[X(t; t,x,v),V(t; t,x,v)]=[x,v].
\end{cases}\nonumber
\end{align}
The solution is then given by
\begin{align}
[X(s;t,x,v),V(s;t,x,v)]=[x-v(t-s),v].\nonumber
\end{align}
For each $(x,v)$ with $x\in \bar{\Omega}$ and $v\neq 0,$ we define the {\it backward exit time} $t_{\mathbf{b}}(x,v)\geq 0$ to be the last moment at which the
back-time straight line $[X(s;0,x,v),V(s;0,x,v)]$ remains in $\bar{\Omega}$:
\begin{equation}
t_{\mathbf{b}}(x,v)=\inf \{\tau \geq 0:x-v\tau\notin\bar{\Omega}\}.\nonumber
\end{equation}
We therefore have $x-t_{\mathbf{b}}{v}\in \partial \Omega $ and $\xi (x-t_{\mathbf{b}}v)=0.$ We also define
\begin{equation}
x_\mathbf{b}(x,v)=x-t_{\mathbf{b}}v\in \partial \Omega .\nonumber
\end{equation}
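To make $t_{\mathbf{b}}$ and $x_{\mathbf{b}}$ concrete, the following sketch computes them in closed form when $\Omega$ is taken to be the unit ball (purely for illustration; the $\Omega$ in the paper is a general bounded domain), by solving $|x-\tau v|^2=1$ for $\tau>0$:

```python
import numpy as np

def exit_time_ball(x, v):
    """Backward exit time t_b(x,v) = inf{tau >= 0 : x - tau*v not in closed unit ball},
    for Omega = {|x| < 1}: the positive root of |x - tau*v|^2 = 1."""
    x, v = np.asarray(x, float), np.asarray(v, float)
    a = v @ v
    b = x @ v             # |x - tau*v|^2 = a*tau^2 - 2*b*tau + |x|^2
    c = x @ x - 1.0
    disc = b * b - a * c  # > 0 whenever |x| < 1 and v != 0
    return (b + np.sqrt(disc)) / a

x = np.array([0.5, 0.0, 0.0])
v = np.array([-1.0, 0.0, 0.0])  # backward ray x - tau*v moves in the +x direction
tb = exit_time_ball(x, v)
xb = x - tb * v                 # boundary point x_b(x,v)
print(tb, np.linalg.norm(xb))   # tb = 0.5 and |x_b| = 1 for this ray
```

For this ray $n(x_{\mathbf{b}})=x_{\mathbf{b}}$ and $v\cdot n(x_{\mathbf{b}})=-1\leq 0$, consistent with the sign condition noted below.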
Note that $v\cdot n(x_{\mathbf{b}})=v\cdot n({x}_{\mathbf{b}}(x,v)) \leq 0$ always holds true. Let $x\in \bar{\Omega}$, $(x,v)\notin \gamma _{0}\cup \gamma_{-}$ and
$(t_{0},x_{0},v_{0})=(t,x,v)$. For $v_{k+1}\in {\mathcal{V}}_{k+1}:=\{v_{k+1}\in\mathbb{R}^3:~v_{k+1}\cdot n({x}_{k+1})>0\}$, the back-time cycle is defined as
\begin{equation}
\left\{\begin{aligned}
X_{cl}(s;t,x,v)&=\sum_{k}\mathbf{1}_{[t_{k+1},t_{k})}(s)\{x_{k}-v_k(t_{k}-s)\},\\[1.5mm]
V_{cl}(s;t,x,v)&=\sum_{k}\mathbf{1}_{[t_{k+1},t_{k})}(s)v_{k},\nonumber
\end{aligned}\right.
\end{equation}
with
\begin{equation}
({t}_{k+1},{x}_{k+1},v_{k+1})
=({t}_{k}-{t}_{\mathbf{b}}({x}_{k},v_{k}), {x}_{\mathbf{b}}({x}_{k},v_{k}),v_{k+1}).\nonumber
\end{equation}
Define the near-grazing set of $\gamma_{+}$ as
\begin{align}\label{cut}
\gamma^{\varepsilon'}_{+}=\left\{(x,v)\in\gamma_{+}:~ |v\cdot n(x)|<{\varepsilon'}~\mbox{or}~|v|\geq\f1{\varepsilon'}~\mbox{or}~|v|\leq{\varepsilon'}\right\}.
\end{align}
Then we have
\begin{lemma}[\cite{Guo2}]\label{lemT}
Let $\varepsilon'>0$ be a small positive constant, then it holds that
\begin{multline*}
\int_0^t |f(\tau)\mathbf{1}_{\gamma_+\setminus \gamma_{+}^{\varepsilon'}}|_{L^1(\gamma)}{\rm d}\tau\\
\leq C_{\varepsilon',\Omega} \bigg\{\|f(0)\|_{L^1}+\int_0^t \Big[\|f(\tau)\|_{L^1}+\|[\partial_{\tau}+v\cdot\nabla_x] f(\tau)\|_{L^1}\Big]{\rm d} \tau\bigg\},
\end{multline*}
where the positive constant $C_{\varepsilon',\Omega}>0$ depends only on $\varepsilon'$ and $\Omega$.
\end{lemma}
We conclude this section with the following iteration lemma, which will play a crucial role later on. The proof of this lemma can be found in \cite{DHWZ}.
\begin{lemma}
Let $\{a_i\}_{i=0}^\infty $ be a sequence with each $a_i\geq0$.
For an integer $k\geq 0$, we define a new sequence $\{A_i^k\}_{i=0}^\infty$ by
$$
A_i^k=\max\{a_i, a_{i+1},\cdots, a_{i+k}\},\quad i=0,1,\cdots.
$$
\begin{itemize}
\item[(i)] Let $D\geq0$ be a constant. If
$$
a_{i+1+k}\leq \f18 A_i^{k}+D, \quad i=0,1,\cdots,
$$
then it holds that
\begin{equation}\label{A.1}
A_i^k\leq \left(\f18\right)^{\left[\frac{i}{k+1}\right]}\cdot\max\{A_0^k, \ A_1^k, \cdots, \ A_k^k \}+\frac{8+k}{7} D,
\end{equation}
for any $i\geq k+1$.
\item[(ii)] Let $0\leq \eta<1$ with $\eta^{k+1}\geq\frac14$. If
$$
a_{i+1+k}\leq \f18 A_i^{k}+C_k \cdot \eta^{i+k+1},\quad i=0,1,\cdots
$$
then it holds that
\begin{align}\label{A.1-1}
A_i^k\leq \left(\f18\right)^{\left[\frac{i}{k+1}\right]}\cdot\max\{A_0^k, \ A_1^k, \cdots, \ A_k^k \}+2C_k\frac{8+k}{7} \eta^{i+k}
\end{align}
for any $i\geq k+1$.
\end{itemize}
\end{lemma}
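The decay mechanism in part (i) can be illustrated numerically: taking the extreme sequence $a_{i+1+k}=\f18 A_i^k+D$ allowed by the hypothesis, the bound \eqref{A.1} is verified for sample choices of $k$, the initial data and $D$ (the specific values below are arbitrary test inputs, not quantities from the paper):

```python
def check_iteration_lemma(k, init, D, imax=80):
    """Iterate the worst case a_{i+1+k} = (1/8) A_i^k + D allowed by the
    hypothesis of part (i), then test the claimed bound (A.1)."""
    a = list(init)                        # the given values a_0, ..., a_k
    A = lambda i: max(a[i:i + k + 1])     # A_i^k = max{a_i, ..., a_{i+k}}
    for i in range(imax):
        a.append(A(i) / 8.0 + D)          # saturate the hypothesis
    M = max(A(i) for i in range(k + 1))   # max{A_0^k, ..., A_k^k}
    for i in range(k + 1, imax - k):
        bound = (1.0 / 8.0) ** (i // (k + 1)) * M + (8.0 + k) / 7.0 * D
        if A(i) > bound + 1e-12:
            return False
    return True

print(check_iteration_lemma(k=2, init=[100.0, 1.0, 1.0], D=1.0))          # True
print(check_iteration_lemma(k=4, init=[3.0, 50.0, 0.0, 7.0, 2.0], D=0.3)) # True
```

Note that the asymptotic level $8D/7$ of the saturated recursion lies below the constant $\frac{8+k}{7}D$ in \eqref{A.1}, which is why the bound absorbs the transient.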
\section{Existence of time-periodic solutions}
\subsection{Linear problem}
We start from the following linear problem with time-periodic inhomogeneous source term and boundary data:
\begin{align}\label{3.0.1}
\begin{cases}
\partial_tf+ v\cdot \nabla_xf+Lf=g,\\
f(t,x,v)|_{\gamma_{-}}=P_{\gamma}f+r.
\end{cases}
\end{align}
Here the boundary operator $P_{\gamma}$ is defined by
\begin{equation}
P_{\gamma}f(t,x,v)=\sqrt{\mu(v)}\int_{v'\cdot n(x)>0} f(t,x,v') \sqrt{\mu(v')} |v'\cdot n(x)|\,{\rm d} v'.\nonumber
\end{equation}
Both the inhomogeneous terms $g=g(t,x,v)$ and $r=r(t,x,v)$ are periodic in time with period $T>0$. Recalling the weight function in \eqref{def.wnot}, we write
$w(v)=w_{q,\beta}(v)$ for brevity. We define
\begin{equation}
h(t,x,v)=w(v) f(t,x,v)\nonumber.
\end{equation}
Then the equation for $h$ reads:
\begin{align*}
\begin{cases}
\partial_th+v\cdot \nabla_x h+\nu(v) h=K_{w} h+wg,\\[2mm]
\displaystyle h(t,x,v)|_{\gamma_-}=\frac{1}{\tilde{w}(v)} \int_{v'\cdot n(x)>0} h(t,x,v') \tilde{w}(v') {\rm d}\sigma'+wr(t,x,v),
\end{cases}
\end{align*}
where
$$
\tilde{w}(v)\equiv \frac{1}{w(v)\sqrt{\mu(v)}},\quad
K_wh=wK(\frac{h}{w}).
$$
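For the reader's convenience, the boundary condition for $h$ follows from the one for $f$ in \eqref{3.0.1} by direct substitution: since $f=h/w$ and ${\rm d}\sigma'=\mu(v')|v'\cdot n(x)|\,{\rm d} v'$, one has on $\gamma_-$
\begin{align*}
h=w(v)\big(P_{\gamma}f+r\big)
&=w(v)\sqrt{\mu(v)}\int_{v'\cdot n(x)>0}\frac{h(t,x,v')}{w(v')}\sqrt{\mu(v')}\,|v'\cdot n(x)|\,{\rm d} v'+w(v)r\\
&=\frac{1}{\tilde{w}(v)}\int_{v'\cdot n(x)>0}h(t,x,v')\,\frac{1}{w(v')\sqrt{\mu(v')}}\,\mu(v')|v'\cdot n(x)|\,{\rm d} v'+wr\\
&=\frac{1}{\tilde{w}(v)}\int_{v'\cdot n(x)>0}h(t,x,v')\,\tilde{w}(v')\,{\rm d}\sigma'+wr,
\end{align*}
using $w(v)\sqrt{\mu(v)}=1/\tilde{w}(v)$ in the first factor, which is exactly the weighted boundary condition stated above.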
The proof of Theorem \ref{thm1.1} heavily relies on the solvability of the linearized time-periodic problem \eqref{3.0.1}.
\begin{proposition}\label{prop3.2}
Let $-3<\gamma\leq 1$, $0\leq q<\f18$ and $\beta>\max\{3,3-\gamma\}$. Assume that $g$ and $r$ are time-periodic functions with period $T>0$, and satisfy the zero-mass condition
\begin{align}\label{3.0.3}
\int_{\Omega}\int_{\mathbb{R}^3}g(t,x,v)\sqrt{\mu(v)}\,{\rm d} v{\rm d} x=\int_{\gamma_-}r(t,x,v)\sqrt{\mu(v)}\,{\rm d}\gamma=0,
\end{align}
for all $t\in \mathbb{R}$, and $L^{\infty}$ bounds
$$\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|wr(t)|_{L^\infty(\gamma_-)}<\infty.$$ Then there exists a unique time-periodic solution $f=f(t,x,v)$ with the same period $T$ to the linearized Boltzmann equation \eqref{3.0.1}, such that
$$
\int_{\Omega\times\mathbb{R}^3} f(t,x,v) \sqrt{\mu} \,{\rm d} v{\rm d} x=0
$$
for all $t\in \mathbb{R}$, and
\begin{multline}\label{3.0.4}
\sup_{0\leq t\leq T}\|wf(t)\|_{L^\infty} +\sup_{0\leq t\leq T}{|wf(t)|_{L^\infty(\gamma)}}\\
\leq C\sup_{0\leq t\leq T} |wr(t)|_{L^\infty(\gamma_-)}+C\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^\infty}.
\end{multline}
Moreover, if $\Omega$ is convex, and $g$ is continuous in
$\mathbb{R}\times\Omega\times\mathbb{R}^3$ and {$r$ is continuous in $\mathbb{R}\times\gamma_-$,}
then $f(t,x,v)$ is also continuous away from the grazing set $\mathbb{R}\times\gamma_0$.
\end{proposition}
The following two subsections will be devoted to the proof of Proposition \ref{prop3.2}.
\subsection{A priori $L^\infty$ estimate}
To prove Proposition \ref{prop3.2},
we start from
the a priori $L^{\infty}$ estimate on solutions to the following time-periodic problems:
\begin{equation}\label{3.1.1}
\begin{cases}
\partial_th^{i+1}+v\cdot\nabla_x h^{i+1}+(\varepsilon+\nu(v)) h^{i+1}=\lambda K_w^m h^i+\lambda K^c_w h^i +wg,\\[3mm]
\displaystyle h^{i+1}(t,x,v)|_{\gamma_-}=\frac{1}{\tilde{w}(v)} \int_{v'\cdot n(x)>0} h^i({t},x,v') \tilde{w}(v') {\rm d}\sigma'+w(v)r({t},x,v),
\end{cases}
\end{equation}
for $i=0,1,2,\cdots$, where $h^0:=h^0(t,x,v)$ is given. Here $0\leq \lambda\leq 1$ and $\varepsilon>0$ are given parameters, and $g(t,x,v)$ and $r(t,x,v)$ are both time-periodic functions with period $T>0$. Before doing that, we need some preparations. The following lemma gives the mild formulation of $h^{i+1}$. As the proof is more or less the same as that of \cite[Lemma 24]{Guo2},
we omit it for brevity.
\begin{lemma}
Let $0\leq \lambda\leq 1$ and $\varepsilon>0$. For any $t\in [0,T]$, for almost every $(x,v)\in \bar{\Omega}\times \mathbb{R}^3\backslash (\gamma_0\cup \gamma_-) $ and for any $s\leq t$,
we have
\begin{equation}
\label{3.1.2}
h^{i+1}(t,x,v)
=\sum_{\ell=1}^{4}J_\ell+\sum_{\ell=5}^{14}\mathbf{1}_{\{t_1> s\}} J_\ell
\end{equation}
with
\begin{align*}
&J_1=\mathbf{1}_{\{t_1\leq s\}} e^{-(\varepsilon+{\nu}(v))(t-s)} h^{i+1}(s,x-v(t-s),v),\\
&J_2+J_3+J_4=\int_{\max\{{t}_1,s\}}^t e^{-{\nu}(v)(t-\tau)}\Big[\lambda K_w^mh^{i}+\lambda K^c_wh^{i}+wg\Big](\tau,x-v(t-\tau),v){\rm d} \tau,\\
&J_5=e^{-(\varepsilon+{\nu}(v))(t-t_1)}w(v) r(t_1,x_1,v),
\end{align*}
\begin{align*}
&J_6=\frac{e^{-(\varepsilon+{\nu}(v))(t-t_1)}}{\tilde{w}(v)} \int_{\Pi _{j=1}^{k-1}\mathcal{V}_{j}}
\sum_{l=1}^{k-2} \mathbf{1}_{\{t_{l+1}>s\}} w(v_l)r(t_{l+1},x_{l+1},v_{l}){\rm d} \Sigma_{l}({t}_{l+1}),\\
&J_7=\frac{e^{-(\varepsilon+{\nu}(v))(t-{t}_1)}}{\tilde{w}(v)} \int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1} \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} h^{i+1-l}(s,{x}_l-{v}_l({t}_l-s),v_l) {\rm d}\Sigma_{l}(s),
\end{align*}
\begin{multline*}
J_8+J_9+J_{10}=\frac{e^{-(\varepsilon+{\nu}(v))(t-{t}_1)}}{\tilde{w}(v)} \int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1}\int_s^{{t}_l} \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}}\\
[\lambda K_w^mh^{i-l}+\lambda K^c_wh^{i-l}+wg](\tau,{x}_l-{v}({t}_l-\tau),v_l) {\rm d}\Sigma_l(\tau),
\end{multline*}
\begin{multline*}
J_{11}+J_{12}+J_{13}=\frac{e^{-(\varepsilon+{\nu}(v))(t-{t}_1)}}{\tilde{w}(v)}\int_{\Pi_{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1}\int_{{t}_{l+1}}^{{t}_l} \mathbf{1}_{\{{t}_{l+1}>s\}} \\
[\lambda K_w^mh^{i-l}+\lambda K^c_wh^{i-l}+wg](\tau,{x}_l-{v}({t}_l-\tau),v_l) {\rm d}\Sigma_l(\tau),
\end{multline*}
\begin{align*}
J_{14}=\frac{e^{-(\varepsilon+{\nu}(v))(t-{t}_1)}}{\tilde{w}(v)} \int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \mathbf{1}_{\{{t}_{k}>s\}}
h^{i+1-k}(t_k,{x}_k,v_{k-1}) {\rm d}\Sigma_{k-1}({t}_k).
\end{align*}
Here we have denoted
\begin{align*}
&{\rm d}\Sigma_l(\tau) = \big\{\Pi_{j=l+1}^{k-1}{\rm d}{\sigma}_j\big\}\cdot \big\{\tilde{w}(v_l) e^{-(\varepsilon+{\nu}(v_l))({t}_l-\tau)} {\rm d}{\sigma}_l\big\}\\
&\qquad\qquad\qquad\cdot \big\{\Pi_{j=1}^{l-1} e^{-(\varepsilon+{\nu}(v_j))({t}_j-{t}_{j+1})} {\rm d}{\sigma}_j\big\},\nonumber
\end{align*}
and ${\rm d}\sigma_j=\mu(v_j)\{n(x_j)\cdot v_j\}{\rm d} v_j$.
\end{lemma}
Next, the following lemma is due to \cite{Guo2}; it gives a quantitative smallness estimate, in terms of the number of reflections, on the measure of the set of velocities for which the particle cannot reach the initial time plane.
\begin{lemma}\label{lem.smbd}
Let $T>0$. Let $n$ be sufficiently large. There exist constants $\hat{C}_1$ and $\hat{C}_2$ independent of $n$ such that for $k=\hat{C}_1(nT)^{\frac54}$ and $(t,x,v)\in[0,T]\times\bar{\Omega}\times\mathbb{R}^3$, it holds that
\begin{align}\label{3.1.3}
\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \mathbf{1}_{\{{t}_k>-nT\}}~ \Pi _{j=1}^{k-1} {\rm d}{\sigma} _{j}\leq \left(\frac12\right)^{\hat{C}_2(nT)^{\frac54}}.
\end{align}
\end{lemma}
\begin{proposition}\label{prop3.1}
Let $-3<\gamma\leq 1, \varepsilon>0$, {$0\leq q<1/8$} and $\beta>3$. Assume that $h^i(t,x,v)$ are all time-periodic functions with period $T>0$ and satisfy $$\sup_{0\leq t\leq T}\{\|h^i(t)\|_{L^\infty}+|h^i(t)|_{L^\infty{{(\gamma)}}}\}<\infty,$$ for $i=0,1,2,\cdots$. Then there exist
two universal constants $C>0$ and $n>1$ large enough, independent of $i,\lambda$ and $\varepsilon$, such that for $k=\hat{C}_1(nT)^{\frac54}$, it holds, for $i\geq k$, that
\begin{align}\label{3.1.4}
\sup_{0\leq t\leq T}&\|h^{i+1}(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|h^{i+1}(t)|_{L^{\infty}{(\gamma)}}\nonumber\\
&\leq \frac18 \max_{0\leq l\leq k}\{\sup_{0\leq t\leq T} \|h^{i-l}(t)\|_{L^\infty}\}+C \max_{0\leq l\leq k}\left\{\left\|\frac{h^{i-l}}{\langle v\rangle^{|\gamma|} w}\right\|_{L^2([0,T];L^2)}\right\}\nonumber\\
&\quad+C\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}.
\end{align}
Here we have denoted $\langle v\rangle:=(1+|v|^2)^{1/2}$. Moreover, if $h^i\equiv h$ for $i=1,2,\cdots$, i.e., $h$ is a solution, then \eqref{3.1.4} is reduced to the following form
\begin{multline}\label{3.1.5}
\sup_{0\leq t\leq T}\|h(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|h(t)|_{L^{\infty}{(\gamma)}}\\
\leq C\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}+C\left\|\frac{h}{\langle v \rangle^{|\gamma|}w}\right\|_{L^2([0,T];L^2)}.
\end{multline}
\end{proposition}
\begin{proof}
Let $s=-nT$ in \eqref{3.1.2} with $n> 1$ large enough such that \eqref{3.1.3} holds true. We first estimate $J_1$. Note that by periodicity, we have $$h^{i+1}(s,x-(t-s)v,v)=h^{i+1}(0,x-(t-s)v,v).$$ If $0\leq\gamma\leq1$, then $\nu(v)\geq \nu_0>0$ for some constant $\nu_0$, and it is direct to get
\begin{align}\label{3.1.6}
|J_1|\leq e^{-\nu_0(t+nT)}\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^{\infty}}.
\end{align}
If $-3<\gamma<0$, then $\nu(v)\sim(1+|v|)^{\gamma}$ no longer has a uniform positive lower bound for large $|v|$.
In this case we note that
$$
0\leq t_{{\mathbf{b}}}(x,v)\leq \frac{d_\Omega}{|v|},
$$
where $d_\Omega:=\sup_{x,y\in{\Omega}}|x-y|$ is the diameter of $\Omega$. Then for $|v|>\frac{d_\Omega}{nT}$, it holds that
$$
t_1-s=t-t_{{\mathbf{b}}}(x,v)+nT>0.
$$
In other words, $J_1$ appears only when the particle velocity $|v|$ is rather small, so that we have
\begin{align}\label{3.1.7}
|J_1|&\leq \mathbf{1}_{\{t_1\leq s\}}\mathbf{1}_{\{|v|\leq \frac{d_{\Omega}}{nT}\}}e^{-\nu(v)(t-s)}\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^{\infty}}\nonumber\\
&\leq \mathbf{1}_{\{t_1\leq s\}}\mathbf{1}_{\{|v|\leq 1\}}e^{-\nu(v)(t-s)}\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^{\infty}}\notag\\
&\leq Ce^{-\nu_0(t+nT)}\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^{\infty}},
\end{align}
for suitably large $n$, where for simplicity of notation we still denote by $\nu_0>0$ the strictly positive infimum of $\nu(v)$ over $|v|\leq 1$. For contributions coming from $g$ and $r$, we notice that
$$
\tilde{w}(v)=\frac{1}{\sqrt{2\pi}} \frac{e^{(\frac14-q)|v|^2}}{(1+|v|^2)^{\frac{\beta}{2}}},
$$
so it holds that
\begin{align}\label{3.1.8}
\frac{1}{\tilde{w}(v)}\leq \sqrt{2\pi} (1+|v|^2)^{\frac{\beta}{2}} e^{-(\frac14-q)|v|^2}\leq C e^{-\frac18|v|^2}.
\end{align}
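The elementary bound \eqref{3.1.8} only uses $q<\f18$, which leaves a Gaussian margin $e^{-(\f18-q)|v|^2}$ to absorb the polynomial factor $(1+|v|^2)^{\frac\beta2}$. A quick numerical sketch (with arbitrary sample values of $\beta$ and $q$, and dropping the harmless factor $\sqrt{2\pi}$) confirms that the ratio is uniformly bounded in $v$:

```python
import numpy as np

def sup_ratio(beta, q, vmax=60.0, npts=600001):
    """sup over r = |v| of (1+r^2)^{beta/2} e^{-(1/4-q) r^2} / e^{-r^2/8};
    finite precisely because q < 1/8 keeps Gaussian decay e^{-(1/8-q) r^2}."""
    r = np.linspace(0.0, vmax, npts)  # the ratio is radial in v
    vals = (1.0 + r**2) ** (beta / 2.0) * np.exp(-(0.125 - q) * r**2)
    return float(vals.max())

# sample (beta, q) pairs with q < 1/8; each supremum is a finite constant C
for beta, q in [(4.0, 0.0), (4.0, 0.1), (7.0, 0.12)]:
    print(beta, q, sup_ratio(beta, q))
```

Enlarging the truncation radius does not change the supremum, confirming that the maximum is attained at finite $|v|$.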
Moreover,
we have
\begin{equation}\label{3.1.9}
\left\{\begin{aligned}
&\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} e^{\frac{5|v_m|^2}{16}}\ \Pi_{j=1}^{k-1} {\rm d}{\sigma}_j\leq C,
\\
&\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1} \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} e^{\frac{5|v_m|^2}{16}}\ \Pi_{j=1}^{k-1} {\rm d}{\sigma}_j\leq Ck,\\
&\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1} \mathbf{1}_{\{{t}_{l+1}> s\}} e^{\frac{5|v_m|^2}{16}} \Pi_{j=1}^{k-1} {\rm d}{\sigma}_j\leq Ck,
\end{aligned}\right.
\end{equation}
for all $1\leq m\leq k-1.$ Combining this with the periodicity of $r$ and $g$, we get that
\begin{equation}\label{3.1.10}
\left\{\begin{aligned}
&|J_4|+|J_{10}|+|J_{13}|\leq Ck\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}, \\
&|J_5|+|J_6|\leq Ck\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}.
\end{aligned}\right.
\end{equation}
Next, we shall estimate $J_7$. If $0\leq\gamma\leq 1$, we use the fact that $\nu(v)\geq\nu_0>0$ as well as \eqref{3.1.8} {and \eqref{3.1.9}} to get
\begin{align}\label{3.1.11}
|J_7|&\leq C{e^{-\f18|v|^2}}e^{-\nu_0(t+nT)}\max_{1\leq l\leq k-1}\big\{\sup_{0\leq t\leq T}\|h^{i+1-l}(t)\|_{L^{\infty}}\big\}\notag \\
&\qquad\qquad\qquad\qquad\times\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1} \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} \tilde{w}(v_l)\ \Pi_{j=1}^{k-1} {\rm d}{\sigma}_j\nonumber\\
&\leq Ck{e^{-\f18|v|^2}}e^{-\nu_0(t+nT)}\max_{1\leq l\leq k-1}\big\{\sup_{0\leq t\leq T}\|h^{i+1-l}(t)\|_{L^{\infty}}\big\}.
\end{align}
If $-3<\gamma<0$, we again note that $\nu(v)$ no longer has a positive lower bound. In this case, it holds from Young's inequality that
$$
\nu(v)(\tau_1-\tau_2)+\frac{|v|^2}{16}\geq c (\tau_1-\tau_2)^{\alpha},
$$
for any $\tau_1>\tau_2$, where we have taken
$
\alpha=\frac{2}{2+|\gamma|},
$ and
$c>0$ is a constant independent of $\tau_1$, $\tau_2$ and $v$. In the sequel $c>0$ may take different values at different places. So, from \eqref{3.1.8} we have
\begin{align}
\frac{e^{-\nu(v)(t-t_1)}}{\tilde{w}(v)}\leq Ce^{-\frac{|v|^2}{16}}e^{-c(t-t_1)^{\alpha}},\nonumber
\end{align}
and
\begin{align}
|J_7|\leq& Ce^{-\frac{|v|^2}{16}}e^{-c(t-t_1)^{\alpha}}\max_{1\leq l\leq k-1}\big\{\sup_{0\leq t\leq T}\|h^{i+1-l}(t)\|_{L^{\infty}}\big\}\nonumber\\
&\times\sum_{l=1}^{k-1}\int_{\Pi_{j=1}^{l}{\mathcal{V}}_{j}}\mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}}\tilde{w}(v_l)e^{-\nu(v_l)(t_l-s)}{\rm d}\sigma_l\Pi_{j=1}^{l-1}e^{-\nu(v_j)(t_j-t_{j+1})}{\rm d} \sigma_j.\nonumber
\end{align}
For each $l$, we take
$
|v_m|=\max\{|v_1|,\cdots,|v_l|\}.
$
Then it holds that
$$
\Pi_{j=1}^{l-1}e^{-\nu(v_j)(t_j-t_{j+1})}\times e^{-\nu(v_l)(t_l-s)}\tilde{w}(v_l)
\leq e^{-\nu(v_m)(t_1-s)}e^{\frac{|v_m|^2}{4}}\leq e^{-c(t_1-s)^{\alpha}}e^{\frac{5|v_m|^2}{16}}.
$$
Thus one has
\begin{align}\label{3.1.12}
|J_7|&\leq e^{-\frac{|v|^2}{16}}e^{-c(t-s)^{\alpha}}\max_{1\leq l\leq k-1}\big\{\sup_{0\leq t\leq T}\|h^{i+1-l}(t)\|_{L^{\infty}}\big\}\nonumber\\
&\quad\times\sum_{l=1}^{k-1}\sum_{m=1}^l\int_{\Pi_{j=1}^{l}{\mathcal{V}}_{j}}\mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}}e^{\frac{5|v_m|^2}{16}}\Pi_{j=1}^l{\rm d} \sigma_j\nonumber\\
&\leq Ck^2e^{-\frac{|v|^2}{16}}e^{-c(t+nT)^{\alpha}}\max_{1\leq l\leq k-1}\big\{\sup_{0\leq t\leq T}\|h^{i+1-l}(t)\|_{L^{\infty}}\big\}.
\end{align}
Here we have used the elementary fact that $a^{\alpha}+b^{\alpha}\geq(a+b)^{\alpha}$ for $a$, $b\geq 0$ and $0\leq \alpha\leq 1$.
For $J_{14}$, it follows from \eqref{3.1.3} and \eqref{3.1.8} that
\begin{align}\label{3.1.13}
|J_{14}|\leq Ce^{-\frac{|v|^2}{16}}\left(\f12\right)^{\hat{C}_2(nT)^{\f54}}\sup_{0\leq t\leq T}\|h^{i+1-k}(t)\|_{L^{\infty}}.
\end{align}
For the contribution from $K^m$, we use \eqref{2.2} to obtain
\begin{equation}\label{3.1.14}
|J_2|\leq{Cm^{3+\gamma}w(v)e^{-\frac{1}{6}|v|^2}\sup_{0\leq t\leq T}\|h^{i}(t)\|_{L^{\infty}}}\leq Cm^{3+\gamma}e^{-\frac{|v|^2}{48}}\sup_{0\leq t\leq T}\|h^{i}(t)\|_{L^{\infty}}.
\end{equation}
Similarly, we use \eqref{2.2}, \eqref{3.1.8} and \eqref{3.1.9} to get
\begin{align}\label{3.1.15}
|J_8|\leq& Cm^{3+\gamma}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&\times\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1} \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}}\int_s^{t_l}e^{-\nu(v_l)(t_l-\tau)}\nu(v_l){\rm d}\tau\, \nu^{-1}(v_l)\tilde{w}(v_l)\ \Pi_{j=1}^{k-1} {\rm d}{\sigma}_j\nonumber\\
\leq &Ckm^{3+\gamma}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\},
\end{align}
and
\begin{align}\label{3.1.16}
|J_{11}|\leq& Cm^{3+\gamma}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&\times\int_{\Pi _{j=1}^{k-1}{\mathcal{V}}_{j}} \sum_{l=1}^{k-1} \mathbf{1}_{\{{t}_{l+1}>s\}} \int_{t_{l+1}}^{t_l}e^{-\nu(v_l)(t_l-\tau)}\nu(v_l){\rm d} \tau\, \nu^{-1}(v_l)\tilde{w}(v_l)\ \Pi_{j=1}^{k-1} {\rm d}{\sigma}_j\nonumber\\
\leq &Ckm^{3+\gamma}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}.
\end{align}
It remains to estimate the terms involving $K^c$. Firstly, we have
\begin{align}\label{3.1.17}
|J_9| &\leq C e^{-\f18|v|^2} \sum_{l=1}^{k-1}\int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}} {\rm d}{\sigma}_{l-1}\cdots {\rm d}{\sigma}_1 \int_{\mathcal{V}_l}\int_{\mathbb{R}^3} \int_s^{{t}_l} e^{-\nu(v_l)(t-\tau)} \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} \nonumber\\
&\qquad\qquad\qquad\qquad\times \tilde{w}(v_l) |k^c_w(v_l,v') h^{i-l}(\tau,{x}_l-{{v_l}}({t}_l-\tau),v')|{\rm d} \tau{\rm d} v' {\rm d}{\sigma}_l \nonumber\\
&=C e^{-\f18|v|^2} \sum_{l=1}^{k-1}\int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}} {\rm d}{\sigma}_{l-1}\cdots {\rm d}{\sigma}_1 \int_{\mathcal{V}_l\cap \{|v_l|\geq N\}}\int_{\mathbb{R}^3}\int_s^{{t}_l} (\cdots){\rm d} \tau{\rm d} v' {\rm d}{\sigma}_l \nonumber\\
&\quad+C e^{-\f18|v|^2} \sum_{l=1}^{k-1}\int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}} {\rm d}{\sigma}_{l-1}\cdots {\rm d}{\sigma}_1 \int_{\mathcal{V}_l\cap \{|v_l|\leq N\}}\int_{\mathbb{R}^3} \int_s^{{t}_l} (\cdots){\rm d} \tau{\rm d} v' {\rm d}{\sigma}_l\nonumber\\
&:=\sum_{l=1}^{k-1} (J_{91l}+J_{92l}).
\end{align}
For $J_{91l}$, we use \eqref{3.1.9} to obtain that
\begin{align}\label{3.1.18}
\sum_{l=1}^{k-1}J_{91l}&\leq Ck e^{-\f18|v|^2}\max_{1\leq l\leq k-1}\bigg\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^\infty}\int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}} {\rm d}{\sigma}_{l-1}\cdots {\rm d}{\sigma}_1
\nonumber\\
&\qquad\times\int_{\mathcal{V}_l\cap \{|v_l|\geq N\}} \int_s^{{t}_l} e^{-{\nu}(v_l)(t_l-\tau)}\nu(v_l)e^{-\frac{|v_l|^2}{32}}{\rm d}\tau\, e^{\frac{5|v_l|^2}{16}}{\rm d}\sigma_l\bigg\}\nonumber\\
&\leq Ck e^{-\f18|v|^2} e^{-\frac1{32}N^2}\max_{1\leq l\leq k-1}\big\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^\infty}\big\}.
\end{align}
For $J_{92l}$, it holds that
\begin{align}
J_{92l}&\leq C e^{-\f18|v|^2} \int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}} \Pi _{j=1}^{l-1}{\rm d}{\sigma}_{j}
\int_{\mathcal{V}_l\cap \{|v_l|\leq N\}}\int_{\mathbb{R}^3} \int_{{t}_l-\frac1N}^{{t}_l}(\cdots){\rm d} \tau{\rm d} v' {\rm d}{\sigma}_l\nonumber\\
&\quad+C e^{-\f18|v|^2} \int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}}
\Pi _{j=1}^{l-1}{\rm d}{\sigma}_{j}
\int_{\mathcal{V}_l\cap \{|v_l|\leq N\}} \int_s^{{t}_l-\frac1N} e^{-\nu(v_l)(t_l-\tau)} e^{-\f18|v_l|^2}{\rm d}\tau {\rm d} v_l\nonumber\\
&\qquad\quad\times \int_{|v'|\geq 2N} |k_w^c(v_l,v')| e^{\frac{|v_l-v'|^2}{64}} {\rm d} v' e^{-\frac{N^2}{64}}\cdot \max_{1\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^\infty}\}\nonumber\\
&\quad+C e^{-\f18|v|^2} \int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}}
\Pi _{j=1}^{l-1}{\rm d}{\sigma}_{j}
\int_s^{{t}_l-\frac1N} {\rm d} \tau\int_{\mathcal{V}_l\cap \{|v_l|\leq N\}}\int_{|v'|\leq 2N} \nonumber\\
&\qquad\quad\times \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} e^{-\frac18|v_l|^2} |k^c_w(v_l,v') h^{i-l}(\tau,x_l-v_l({t}_l-\tau),v')|{\rm d} v' {\rm d} v_l. \nonumber
\end{align}
Then, by \eqref{2.7} we have
\begin{align}
J_{92l}
&\leq C e^{-\f18|v|^2} \int_{\Pi _{j=1}^{l-1}{\mathcal{V}}_{j}}
\Pi _{j=1}^{l-1}{\rm d}{\sigma}_{j}
\bigg\{\int_s^{{t}_l-\frac1N}\int_{\mathcal{V}_l\cap \{|v_l|\leq N\}}\int_{|v'|\leq 2N} \nonumber\\
&\qquad\quad\times \mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} e^{-\frac18|v_l|^2} |k^c_w(v_l,v') h^{i-l}(\tau,x_l-v_l({t}_l-\tau),v')|{\rm d} v' {\rm d} v_l{\rm d}\tau\bigg\}\nonumber\\
&\qquad+\frac{C}{N}e^{-\f18|v|^2}\cdot\max_{1\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}{(t)}\|_{L^\infty}\}.
\label{3.1.19}
\end{align}
By H\"older's inequality, the integral term on the right-hand of \eqref{3.1.19}
\begin{equation}
\label{add.pf1}
\int_s^{{t}_l-\frac1N}\int_{\mathcal{V}_l\cap \{|v_l|\leq N\}}\int_{|v'|\leq 2N}(\cdots)\, {\rm d} v' {\rm d} v_l{\rm d}\tau:=\iiint_{\mathcal {D}}(\cdots)\, {\rm d} v' {\rm d} v_l{\rm d}\tau
\end{equation}
is bounded by
\begin{align}
&C_N\left\{
\iiint_{\mathcal {D}}
e^{-\frac18|v_l|^2} |k^c_w(v_l,v')|^2 {\rm d} v' {\rm d} v_l {\rm d}\tau\right\}^{1/2}\nonumber\\
&\quad \times \left\{
\iiint_{\mathcal {D}}
\mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} \left|\frac{h^{i-l}(\tau,{x}_l-v_l({t}_l-\tau),v')}{\langle v'\rangle^{|\gamma|}w(v')}\right|^2 {\rm d} v' {\rm d} v_l {\rm d}\tau\right\}^{1/2}\nonumber\\
&\leq C_Nn^{1/2} m^{\gamma-1}\bigg\{
\iiint_{\mathcal {D}}
\mathbf{1}_{\{{t}_{l+1}\leq s<{t}_l\}} \left|\frac{h^{i-l}(\tau,{x}_l-v_l({t}_l-\tau),v')}{\langle v'\rangle^{|\gamma|} w(v')}\right|^2 {\rm d} v' {\rm d} v_l{\rm d}\tau \bigg\}^{1/2}.\nonumber
\end{align}
Here we have used \eqref{2.5} in the last inequality. Note that $y_l:={x}_l-v_l({t}_l-\tau)\in \Omega$ for $s\leq \tau\leq t_l-\f1N$. Making the change of variables $v_l\rightarrow y_l$, we obtain that \eqref{add.pf1} is bounded by
\begin{align}
C_Nn^{1/2}m^{\gamma-1}\left\{\int_s^{t_l}\left\|\frac{h^{i-l}(\tau)}{\langle v\rangle^{|\gamma|}w}\right\|_{L^2}^2{\rm d}\tau\right\}^{1/2}.\nonumber
\end{align}
We use periodicity of $h^{i-l}$ to further bound the above term by
$$
C_Nn^{1/2}m^{\gamma-1}\bigg\{\int_s^T\left\|\frac{h^{i-l}(\tau)}{\langle v\rangle^{|\gamma|}w}\right\|_{L^2}^2{\rm d}\tau\bigg\}^{1/2}\leq C_N nm^{\gamma-1}\left\|\frac{h^{i-l}}{\langle v\rangle^{|\gamma|}w}\right\|_{L^2([0,T];L^2)}.
$$
Combining this with \eqref{3.1.17}, \eqref{3.1.18} and \eqref{3.1.19}, we get
\begin{align}\label{3.1.20}
|J_9|\leq& \frac{Ck}{N}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1 }\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&+{C_{N,n,m}}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1}\left\{\left\|\frac{h^{i-l}}{\langle v\rangle^{|\gamma|}w}\right\|_{L^2([0,T];L^2)}\right\}.
\end{align}
Similarly, for $J_{12}$ one has
\begin{align}\label{3.1.21}
|J_{12}|\leq& \frac{Ck}{N}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1 }\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&+{C_{N,n,m}}e^{-\frac{|v|^2}{8}}\max_{1\leq l\leq k-1}\left\{\left\|\frac{h^{i-l}}{\langle v\rangle^{|\gamma|}w}\right\|_{L^2([0,T];L^2)}\right\}.
\end{align}
Collecting all estimates \eqref{3.1.6}, \eqref{3.1.7}, \eqref{3.1.10}, \eqref{3.1.11}, \eqref{3.1.12}, \eqref{3.1.13}, \eqref{3.1.14}, \eqref{3.1.15}, \eqref{3.1.16}, \eqref{3.1.20} and \eqref{3.1.21}, we get that for $t\in[0,T]$,
\begin{multline}\label{3.1.22}
|h^{i+1}(t,x,v)|\leq \int_{\max\{t_1,s\}}^te^{-\nu(v)(t-\tau)}\int_{\mathbb{R}^3}|k^c_w(v,v')h^i(\tau,x-(t-\tau)v,v')|{\rm d} v'{\rm d} \tau\\
+A_i(t,v),
\end{multline}
where we have denoted
\begin{align}
A_i(t,v):=&C{k^2}e^{-\frac{|v|^2}{48}}\bigg\{m^{3+\gamma}+e^{-c(t+nT)^{\alpha}}\notag\\%\left(\f12\right)
&\qquad\qquad\quad +2^{-\hat{C}_2(nT)^{\f54}}+\f1N\bigg\}\
\max_{0\leq l\leq k-1}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&+Ce^{-c(t+nT)}\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^{\infty}}\notag \\
&+Ck\left\{\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}\right\}\nonumber\\
&+{C_{N,n,m}}e^{-\frac{|v|^2}{8}}\max_{0\leq l\leq k-1}\left\{\left\|\frac{h^{i-l}}{\langle v\rangle^{|\gamma|}w}\right\|_{L^{2}([0,T];L^2)}\right\},\nonumber
\end{align}
and
\begin{equation}
\label{def.kno}
k=\hat{C}_1(nT)^{\f54}\sim (nT)^{\frac{5}{4}}.
\end{equation}
Denoting $x':=x-(t-\tau)v$ and $t_1':=t_1(\tau,x',v')$, we use \eqref{3.1.22} for $h^{i}(\tau,x',v')$ to evaluate
\begin{align}\label{3.1.23}
|h^{i+1}(t,x,v)|\leq &A_i(t,v)+\int_{\max\{t_1,s\}}^te^{-\nu(v)(t-\tau)}\int_{\mathbb{R}^3}|k^c_w(v,v')A_{i-1}(\tau,v')|{\rm d} v'{\rm d} \tau\nonumber\\
&+\int_{\max\{t_1,s\}}^te^{-\nu(v)(t-\tau)}{\rm d}\tau\int_{\mathbb{R}^3}\int_{\max\{t_1',s\}}^\tau
\int_{\mathbb{R}^3}\bigg\{e^{-\nu(v')(\tau-\tau')}\nonumber\\
&\quad\times|k^c_w(v,v')k^c_w(v',v'')h^{i-1}(\tau',x'-(\tau-\tau')v',v'')|\bigg\}{\rm d} v'' {\rm d}\tau'{\rm d} v'\nonumber\\
=&A_i(t,v)+B_1+B_2,
\end{align}
where $B_1$ and $B_2$ denote the two integral terms on the right-hand side, respectively.
It follows from \eqref{2.8} that
\begin{align}\label{3.1.24}
B_1\leq &C{k^2}\left\{m^{\gamma-1}e^{-c nT}+m^{3+\gamma}+e^{-c(t+nT)^{\alpha}}
+2^{-\hat{C}_2(nT)^{\f54}}+\f1N\right\}\notag\\
&\qquad\times\max_{0\leq l\leq k}
\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&+Ck m^{\gamma-1}\left\{\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}\|wr(t)\|_{L^{\infty}(\gamma_-)}\right\}\nonumber\\
&+{C_{N,n,m}}\max_{0\leq l\leq k}\left\{\left\|\frac{h^{i-l}}{\langle v \rangle^{|\gamma|}w}\right\|_{L^{2}([0,T];L^2)}\right\}.
\end{align}
Finally, we estimate $B_2$. If $|v|>N$, we have from \eqref{2.8} that
\begin{align}\label{3.1.25}
B_2\leq Cm^{2(\gamma-1)}(1+|v|)^{-2}\sup_{0\leq t\leq T}\|h^{i-1}(t)\|_{L^{\infty}}\leq C\frac{m^{2(\gamma-1)}}{N^2} \sup_{0\leq t\leq T}\|h^{i-1}(t)\|_{L^{\infty}}.
\end{align}
If $|v|\leq N$, we denote the integrand of $B_2$
as $U(\tau',v',v'';\tau,v)$, and split the integral domain
with respect to ${\rm d}\tau'{\rm d} v''{\rm d} v'$ into the following four parts:
\begin{align*}
\cup_{i=1}^4\mathcal{O}_i:=&\{|v'|\geq 2N\}
\cup\{|v'|\leq 2N, |v''|>3N\}\\
&\cup\{|v'|\leq 2N, |v''|\leq 3N, \tau-\f1N\leq \tau'\leq \tau\}\nonumber\\
&\cup\{|v'|\leq 2N, |v''|\leq 3N, \max\{t_1',s\}\leq \tau'\leq \tau-\f1N\}.
\end{align*}
Over $\mathcal{O}_1\cup\mathcal{O}_2$, we have either $|v-v'|\geq N$ or $|v'-v''|\geq N $, so that one of the following is valid:
\begin{equation*}
|k^c_w(v,v')|\leq e^{-\frac{N^2}{64}}e^{\frac{|v-v'|^2}{64}}|k^{c}_w(v,v')|
\ \text{or}\
|k^c_w(v',v'')|\leq e^{-\frac{N^2}{64}}e^{\frac{|v'-v''|^2}{64}}{|k^c_w(v',v'')|.}
\end{equation*}
Recall \eqref{2.5}. Then it holds that
\begin{equation*}
\int_{\mathbb{R}^3}|k^c_w(v,v')|e^{\frac{|v-v'|^2}{64}}{\rm d} v'\leq Cm^{\gamma-1}\nu(v),
\end{equation*}
or
$$
\int_{\mathbb{R}^3}|k^c_w(v',v'')|e^{\frac{|v'-v''|^2}{64}}{\rm d} v''\leq Cm^{\gamma-1}\nu(v').
$$
Therefore one has
\begin{multline}\label{3.1.26}
\int_{\max\{t_1,s\}}^te^{-\nu(v)(t-\tau)}\int_{\mathcal{O}_1\cup\mathcal{O}_2}U(\tau',v',v'';\tau,v){\rm d} v''{\rm d}\tau'{\rm d} v'{\rm d} \tau\\
\leq Cm^{2(\gamma-1)}e^{-\frac{N^2}{64}}\sup_{0\leq t\leq T}\|h^{i-1}(t)\|_{L^{\infty}}.
\end{multline}
Over $\mathcal{O}_3$, it is direct to obtain
\begin{multline}\label{3.1.27}
\int_{\max\{t_1,s\}}^te^{-\nu(v)(t-\tau)}\int_{\mathcal{O}_3}U(\tau',v',v'';\tau,v){\rm d} v''{\rm d}\tau'{\rm d} v'{\rm d} \tau\\
\leq C\frac{m^{2(\gamma-1)}}{N}\sup_{0\leq t\leq T}\|h^{i-1}(t)\|_{L^{\infty}}.
\end{multline}
For $\mathcal{O}_4$, we have, from \eqref{2.5}, that
\begin{align}\label{3.1.28}
&\int_{\mathcal{O}_4}U(\tau',v',v'';\tau,v){\rm d} v''{\rm d}\tau'{\rm d} v'\nonumber\\
&\leq C_N\bigg\{
\int_{\mathcal{O}_4}
|k^c_w(v,v')k^c_w(v',v'')|^2{\rm d} v'{\rm d} v''{\rm d}\tau'\bigg\}^{\f12}\nonumber\\
&\qquad\qquad\times\bigg\{
\int_{\mathcal{O}_4}
\mathbf{1}_{\{\max\{t_1',s\}\leq \tau'\leq \tau\}}\left|\frac{h^{i-1}(\tau',y',v'')}{\langle v''\rangle^{|\gamma|} w(v'')}\right|^2{\rm d} v'{\rm d} v''{\rm d}\tau'\bigg\}^{\f12}\nonumber\\
&\leq {C_{N,n,m}}\bigg\{
\int_{\mathcal{O}_4}
\mathbf{1}_{\{\max\{t_1',s\}\leq \tau'\leq \tau\}}\left|\frac{h^{i-1}(\tau',y',v'')}{\langle v''\rangle^{|\gamma|} w(v'')}\right|^2{\rm d} v'{\rm d} v''{\rm d}\tau'\bigg\}^{\f12},
\end{align}
where we have denoted $y':=x'-(\tau-\tau')v'$. Making the change of variables $v'\rightarrow y'$, the right-hand side of \eqref{3.1.28} is further bounded by
$$
C_{N,n,m}\left\{\int_s^T\left\|\frac{h^{i-1}(\tau')}{\langle v \rangle^{|\gamma|} w}\right\|^2_{L^2}{\rm d}\tau'\right\}^{1/2}
\leq {C_{N,n,m}}\left\|\frac{h^{i-1}}{\langle v\rangle^{|\gamma|} w}\right\|_{L^2([0,T],L^2)}.
$$
Then it holds that
\begin{align}
\int_{\max\{t_1,s\}}^t\int_{\mathcal{O}_4}U(\tau',v',v'';\tau,v){\rm d} v''{\rm d}\tau'{\rm d} v'{\rm d}\tau\leq C_{N,n,m}
\left\|\frac{h^{i-1}}{\langle v\rangle^{|\gamma|} w}\right\|_{L^2([0,T],L^2)}.\nonumber
\end{align}
This estimate,
together with \eqref{3.1.25}, \eqref{3.1.26} and \eqref{3.1.27}, yields that
\begin{align}
B_2\leq \frac{Cm^{2(\gamma-1)}}{N}\sup_{0\leq t\leq T}\|h^{i-1}(t)\|_{L^{\infty}}+C_{N,n,m}
\left\|\frac{h^{i-1}}{\langle v\rangle^{|\gamma|} w}\right\|_{L^2([0,T],L^2)}.\nonumber
\end{align}
Combining this with \eqref{3.1.23} and \eqref{3.1.24}, we get, for $t\in [0,T]$, that
\begin{align}\label{3.1.29}
|h^{i+1}(t,x,v)|\leq &Ce^{-c nT}\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^{\infty}}+\eta\max_{0\leq l\leq k}\{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^{\infty}}\}\nonumber\\
&+C{n^{5/4}}m^{\gamma-1}\{\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}\}\nonumber\\
&+
C_{N,n,m}
\sup_{0\leq l\leq k}\left\|\frac{h^{i-l}}{\langle v \rangle^{|\gamma|} w}\right\|_{L^{2}([0,T];L^2)},
\end{align}
where we have denoted
$$
\eta:=C{n^{5/2}}\bigg\{m^{\gamma-1}e^{-c nT}+m^{3+\gamma}+e^{-c (nT)^{\alpha}}+\left(\frac{1}{2}\right)^{\hat{C}_2(nT)^{\f54}}+\frac{m^{2(\gamma-1)}}{N}\bigg\}.
$$
We now take
$$
m=\left(\frac{1}{32C}\right)^{\frac{1}{3+\gamma}}{n^{-\frac{5}{2(3+\gamma)}}},
$$
choose $n$ suitably large, and then choose $N$ large enough, so that it holds that
$$
{C}e^{-c nT}\leq \f12,\quad
\eta\leq \frac{1}{16}.
$$
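We remark that the above choice of $m$ is made precisely so that the second term in $\eta$ equals $\frac1{32}$:
$$
Cn^{5/2}m^{3+\gamma}=Cn^{5/2}\cdot\frac{1}{32C}\,n^{-\frac52}=\frac{1}{32}.
$$
The remaining terms in $\eta$ can indeed be made small: the terms $m^{\gamma-1}e^{-cnT}$, $e^{-c(nT)^{\alpha}}$ and $\left(\frac12\right)^{\hat{C}_2(nT)^{\f54}}$ decay in $n$ fast enough to dominate the polynomial prefactor $Cn^{5/2}$ as well as the polynomial growth of $m^{\gamma-1}$ in $n$, while the last term $\frac{m^{2(\gamma-1)}}{N}$ is made small by choosing $N$ large after $n$ has been fixed.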
Then we obtain \eqref{3.1.4} from \eqref{3.1.29}. Finally, \eqref{3.1.5} directly follows from \eqref{3.1.4}. Therefore, the proof of Proposition \ref{prop3.1} is complete.
\end{proof}
\subsection{Approximate solutions}
The construction of the approximate solutions is rather delicate. For the reader's convenience, we first outline the procedure in the following four steps.
\medskip
\noindent\underline{Step 1}. Construct the solution $f^{{j},\varepsilon}$ to the following time-periodic problem:
\begin{equation}\label{3.2.1}\left\{
\begin{aligned}
&\partial_tf^{{j},\varepsilon}+v\cdot\nabla_x f^{{j},\varepsilon}+(\varepsilon+\nu(v))f^{{j},\varepsilon}=g,\\
&f^{{j},\varepsilon}(t,x,v)|_{\gamma_-}=(1-\f1{{j}})P_{\gamma}f^{{j},\varepsilon}+r.
\end{aligned}\right.
\end{equation}
\medskip
\noindent\underline{Step 2}. Construct the solution $f^{\varepsilon}$ to the following time-periodic problem:
\begin{equation}\label{3.2.2}\left\{
\begin{aligned}
&\partial_tf^{\varepsilon}+v\cdot\nabla_x f^{\varepsilon}+(\varepsilon+\nu(v))f^{\varepsilon}=g,\\
&f^{\varepsilon}(t,x,v)|_{\gamma_-}=P_{\gamma}f^\varepsilon+r,
\end{aligned}\right.
\end{equation}
by passing to the limit ${j}\rightarrow\infty$.
\medskip
\noindent\underline{Step 3}. Make the uniform-in-$\lambda$ a priori estimates on the solution $f^{\lambda,\varepsilon}$ to the following time-periodic problem:
\begin{equation}\label{3.2.3}\left\{
\begin{aligned}
&\partial_tf^{\lambda,\varepsilon}+v\cdot\nabla_x f^{\lambda,\varepsilon}+(\varepsilon+\nu(v))f^{\lambda,\varepsilon}=\lambda Kf^{\lambda,\varepsilon}+g,\\
&f^{\lambda,\varepsilon}(t,x,v)|_{\gamma_-}=P_{\gamma}f^{\lambda,\varepsilon}+r,
\end{aligned}\right.
\end{equation}
and bootstrap from $\lambda=0$ to $\lambda=1$. Then the solution $f^{\varepsilon}$ to
\begin{equation}\label{3.2.4}\left\{
\begin{aligned}
&\partial_tf^{\varepsilon}+v\cdot\nabla_x f^{\varepsilon}+(\varepsilon+\nu(v))f^{\varepsilon}=Kf^{\varepsilon}+g,\\
&f^{\varepsilon}(t,x,v)|_{\gamma_-}=P_{\gamma}f^{\varepsilon}+r,
\end{aligned}\right.
\end{equation}
is therefore constructed. We remark that the zero-mass condition \eqref{3.0.3} is not necessary up to the present step.
\medskip
\noindent\underline{Step 4}. Take the limit $\varepsilon\rightarrow 0.$ Note that in the limit process, the artificial damping term guarantees that the following key zero-mass condition
\begin{align}\label{3.2.5}
\int_{\Omega}\int_{\mathbb{R}^3}\partial_tf^{\varepsilon}(t,x,v)\sqrt{\mu(v)}{\rm d} v{\rm d} x=\int_{\Omega}\int_{\mathbb{R}^3}f^{\varepsilon}(t,x,v)\sqrt{\mu(v)}{\rm d} v{\rm d} x=0,
\end{align}
holds true for any $t\in \mathbb{R}$. In fact, let
$$
\rho^{\varepsilon}(t):=\int_{\Omega}\int_{\mathbb{R}^3}f^{\varepsilon}(t,x,v)\sqrt{\mu(v)}{\rm d} v{\rm d} x.
$$
Taking the inner product of \eqref{3.2.4} with $\sqrt{\mu(v)}$ over $\Omega\times\mathbb{R}^3$ and using the zero-mass condition \eqref{3.0.3}, we get
$$
\frac{{\rm d} \rho^{\varepsilon}}{{\rm d} t}+\varepsilon\rho^{\varepsilon}=0.
$$
Since $\rho^{\varepsilon}(t)$ is periodic in time, we then obtain $\rho^{\varepsilon}(t)\equiv0$.
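Indeed, solving this ODE gives $\rho^{\varepsilon}(t)=\rho^{\varepsilon}(0)e^{-\varepsilon t}$, so that the periodicity $\rho^{\varepsilon}(T)=\rho^{\varepsilon}(0)$ forces
$$
\left(1-e^{-\varepsilon T}\right)\rho^{\varepsilon}(0)=0,
$$
and hence $\rho^{\varepsilon}(0)=0$, since $\varepsilon>0$ and $T>0$.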
\medskip
In what follows, we proceed with the proof along the lines outlined above. The first lemma addresses the issue stated in Step 1. For the choice of ${j}$ in the second line of \eqref{3.2.1}, one can fix ${j_0}>1$ to be large enough such that
$$
\frac18 \left(1-\frac2{{j}}+\frac{3}{2{j}^2}\right)^{-\frac{k+1}{2}}\leq \frac12
$$
holds true for any ${j}\geq {j_0}$, where $k\sim (nT)^{5/4}$ is defined in \eqref{def.kno}. Then we only consider ${j}\geq {j_0}$ in the problem \eqref{3.2.1}.
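We remark that such a ${j_0}$ always exists for the given $k$: the above requirement is equivalent to
$$
\left(1-\frac2{{j}}+\frac{3}{2{j}^2}\right)^{\frac{k+1}{2}}\geq \frac14,
\quad\text{i.e.}\quad
1-\frac2{{j}}+\frac{3}{2{j}^2}\geq 4^{-\frac{2}{k+1}},
$$
and since $1-\frac2{{j}}+\frac{3}{2{j}^2}$ is increasing for ${j}\geq 2$ with limit $1>4^{-\frac{2}{k+1}}$, the requirement is fulfilled for all ${j}$ sufficiently large.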
\begin{lemma}\label{lem3.2.1}
Let $-3<\gamma\leq 1$, $\varepsilon>0$, $0\leq q<1/8$ and $\beta>3$. Assume that $g$ and $r$ are time-periodic functions with period $T>0$ and satisfy
$$
\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|wr(t)|_{L^\infty(\gamma_-)}<\infty.
$$
Then there exists a unique solution $f^{{j},\varepsilon}$ to \eqref{3.2.1}, which is time-periodic with period $T$, and satisfies
\begin{multline}\label{3.2.6}
\sup_{0\leq t \leq T}\|wf^{{j},\varepsilon}(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|wf^{{j},\varepsilon}(t)|_{L^\infty{(\gamma)}}\\
\leq C_{\varepsilon,{j}}\Big( \sup_{0\leq t\leq T}|wr(t)|_{L^\infty(\gamma_-)}+\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^\infty} \Big),
\end{multline}
where the positive constant $C_{\varepsilon,{j}}>0$ depends only on $\varepsilon$ and ${j}$. Moreover, if the domain $\Omega$ is convex, $g$ is continuous in
$\mathbb{R}\times\Omega\times \mathbb{R}^3$, and $r$ is continuous in $\mathbb{R}\times\gamma_-$,
then the solution $f^{{j},\varepsilon}(t,x,v)$ is also continuous away from the grazing set $\mathbb{R}\times\gamma_0$.
\end{lemma}
\begin{proof}
For given $\varepsilon>0$ and ${j}\geq {j_0}$, we shall construct the solution to \eqref{3.2.1}. To do so, we consider the approximation sequence $\{f^i(t,x,v)\}_{i=0}^\infty$ iteratively solved by
\begin{equation}\label{3.2.7}
\left\{\begin{aligned}
&\partial_tf^{i+1}+v\cdot\nabla_x f^{i+1}+(\varepsilon+\nu(v))f^{i+1}=g,\\
&f^{i+1}(t,x,v)|_{\gamma_-}=(1-\f1{{j}})P_{\gamma}f^{i}+r,
\end{aligned}\right.
\end{equation}
with $f^0\equiv0$.
Here we have dropped $\varepsilon$ and ${j}$ for brevity. Indeed, the solution to \eqref{3.2.7} can be constructed by the method of characteristics.
Let
$$
h^{i+1}(t,x,v)=w(v)f^{i+1}(t,x,v).
$$
Then for any $t\in \mathbb{R}$ and {almost every} $(x,v)\in \bar{\Omega}\times \mathbb{R}^3\setminus(\gamma_0\cup\gamma_-)$, one can write
\begin{align}\label{3.2.8}
h^{i+1}(t,x,v)=&e^{-(\varepsilon+\nu(v))t_{{\mathbf{b}}}(x,v)} w(v)\left[(1-\f1{{j}})P_{\gamma}f^i+r\right](t-t_{{\mathbf{b}}}(x,v),x_{{\mathbf{b}}}(x,v),v)\nonumber\\
&+\int_{t-t_{{\mathbf{b}}}(x,v)}^te^{-(\varepsilon+\nu(v))(t-s)}wg(s,x-(t-s)v,v){\rm d} s.
\end{align}
Note that for $(x,v)\in \gamma_- $,
it is direct to write
\begin{align}\label{3.2.8-1}
h^{i+1}(t,x,v)=w(v)\left[(1-\f1{{j}})P_{\gamma}f^i+r\right](t,x,v).
\end{align}
Now we use the induction argument to show that
\begin{equation}\label{3.2.9}
h^{i}(t,x,v) \text{ is time-periodic with period }T>0,
\end{equation}
and the following estimate holds true:
\begin{multline}\label{3.2.10}
\sup_{0\leq t\leq T}\|h^{i}(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|h^{i}(t)|_{L^{\infty}{(\gamma)}}\\
\leq C_{{j},\varepsilon,i}\left(
\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}\right).
\end{multline}
Indeed, for $i=0$, it is obvious that \eqref{3.2.9} and \eqref{3.2.10} are satisfied.
Assume that \eqref{3.2.9} and \eqref{3.2.10} hold for some $i\geq 0$.
Then \eqref{3.2.8} implies that
\begin{align}\label{3.2.11}
&h^{i+1}(t+T,x,v)\notag\\
&=e^{-(\varepsilon+\nu(v))t_{{\mathbf{b}}}} w(v)\left[(1-\f1{{j}})P_{\gamma}f^i+r\right](t+T-t_{{\mathbf{b}}},x_{{\mathbf{b}}},v)\nonumber\\
&\quad +\int_{t+T-t_{{\mathbf{b}}}}^{t+T}e^{-(\varepsilon+\nu(v))(t+T-s)}wg(s,x-(t+T-s)v,v){\rm d} s.
\end{align}
Note that by the induction assumption that both $f^i$ and $r$ are time-periodic functions with period $T$, the first term on the right-hand side of \eqref{3.2.11} is equal to
$$
e^{-(\varepsilon+\nu(v))t_{{\mathbf{b}}}} w(v)\left[(1-\f1{{j}})P_{\gamma}f^i+r\right](t-t_{{\mathbf{b}}},x_{{\mathbf{b}}},v).$$
For the second term, making the change of variables $s\rightarrow s-T$, we get that
\begin{align}
&\int_{t+T-t_{{\mathbf{b}}}}^{t+T}e^{-(\varepsilon+\nu(v))(t+T-s)}wg(s,x-(t+T-s)v,v){\rm d} s\nonumber\\
&=\int_{t-t_{{\mathbf{b}}}}^{t}e^{-(\varepsilon+\nu(v))(t-s)}wg(s+T,x-(t-s)v,v){\rm d} s\nonumber\\
&=\int_{t-t_{{\mathbf{b}}}}^{t}e^{-(\varepsilon+\nu(v))(t-s)}wg(s,x-(t-s)v,v){\rm d} s,\nonumber
\end{align}
where in the last line we have used the fact that $g$ is periodic in time with period $T$. Therefore, it follows from \eqref{3.2.11} that
$$
h^{i+1}(t+T,x,v)\equiv h^{i+1}(t,x,v),
$$
so \eqref{3.2.9} holds true for $i+1$. Moreover, to show \eqref{3.2.10} for $i+1$, we deduce from \eqref{3.2.8} that
\begin{align}
&\sup_{0\leq t\leq T}\{\|h^{i+1}(t)\|_{L^{\infty}}+|h^{i+1}(t)|_{L^{\infty}(\gamma_+)}\}\nonumber\\
&\leq C\sup_{0\leq t\leq T}\{|h^{i}(t)|_{L^{\infty}(\gamma_+)}+|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\}\nonumber\\
&\leq C_{{j},i}\sup_{0\leq t\leq T}\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\},\nonumber
\end{align}
and also one obtains by \eqref{3.2.8-1} that
\begin{align}
\sup_{0\leq t\leq T}|h^{i+1}(t)|_{L^{\infty}(\gamma_-)}&\leq C\sup_{0\leq t\leq T}|h^{i}(t)|_{L^{\infty}(\gamma_+)}+C\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}\nonumber\\
&\leq C_{{j},i}\sup_{0\leq t\leq T}\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\}.\nonumber
\end{align}
Combining the above two estimates gives the proof of \eqref{3.2.10} for $i+1$. Therefore, by induction, \eqref{3.2.9} and \eqref{3.2.10} are satisfied for all $i$.
Then each $h^{i}(t,x,v)$ is well-defined in $L^{\infty}$ and time-periodic with period $T>0$. Moreover, if $\Omega$ is convex, then $t_{{\mathbf{b}}}(x,v)$ and $x_{{\mathbf{b}}}(x,v)$ are smooth away from $\gamma_0$. If $g$ and $r$ are further continuous, then each $f^{i}(t,x,v)$ is also continuous away from the grazing set $\mathbb{R}\times\gamma_0$.
Next, we need to obtain the uniform-in-$i$ estimate on the solution sequence $f^{i}$. We first treat it in the $L^2$ setting. Taking the inner product of \eqref{3.2.7} with $f^{i+1}$ over $[0,T]\times\Omega\times \mathbb{R}^3$ and using the periodicity of $f^{i+1}$, we obtain that
\begin{align}\label{3.2.12}
&\frac{1}{2}\int_0^T|f^{i+1}(s)|_{L^2(\gamma_+)}^2\,{\rm d} s+\int_0^T\varepsilon\|f^{i+1}(s)\|_{L^2}^2+\f34\|\nu^{1/2}f^{i+1}(s)\|_{L^{2}}^2{\rm d} s\nonumber
\nonumber\\
&\leq \frac{1}{2}(1-\f2{{j}}+\frac{3}{2{j}^2})\int_0^T|f^i(s)|_{L^2(\gamma_+)}^2{\rm d} s\notag\\
&\quad+\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2+C_{{j}}|r(s)|_{L^{2}(\gamma_-)}^2{\rm d} s,
\end{align}
where we have used the fact that
$
|P_{\gamma}f^i|_{L^{2}(\gamma_-)}=|P_{\gamma}f^i|_{L^2(\gamma_+)}\leq |f^i|_{L^{2}(\gamma_+)}.
$
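This fact can be verified directly from the diffuse-reflection form of $P_{\gamma}$, with $c_{\mu}$ denoting the usual normalizing constant:
$$
P_{\gamma}f(t,x,v)=c_{\mu}\sqrt{\mu(v)}\int_{n(x)\cdot v'>0}f(t,x,v')\sqrt{\mu(v')}\,|n(x)\cdot v'|\,{\rm d} v'.
$$
Since $P_{\gamma}f$ depends on $v$ only through the factor $\sqrt{\mu(v)}$, and $\int_{\pm n(x)\cdot v>0}\mu(v)|n(x)\cdot v|\,{\rm d} v$ takes the same value on the two half-spaces, the norms over $\gamma_-$ and $\gamma_+$ coincide; the inequality then follows from the Cauchy--Schwarz inequality together with the normalization of $c_{\mu}$.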
For the difference $f^{i+1}-f^i$,
in a similar way we have
\begin{align}\label{add.iter}
&\frac{1}{2}\int_0^T|[f^{i+1}-f^i](s)|_{L^2(\gamma_+)}^2\,{\rm d} s\notag\\
&\quad +\int_0^T\varepsilon\|[f^{i+1}-f^i](s)\|_{L^2}^2+\f34\|\nu^{1/2}[f^{i+1}-f^i](s)\|_{L^{2}}^2{\rm d} s\nonumber
\nonumber\\
&\leq \frac{1}{2}(1-\f2{{j}}+\frac{3}{2{{j}}^2})\int_0^T|[f^i-f^{i-1}](s)|_{L^2(\gamma_+)}^2{\rm d} s,
\end{align}
and hence, by iteration, the right-hand side of \eqref{add.iter} is further bounded by
\begin{multline}\label{3.2.13}
\frac{1}{2}(1-\f2{{j}}+\frac{3}{2{{j}}^2})^i\int_0^T|[f^1-f^{0}](s)|_{L^2(\gamma_+)}^2{\rm d} s\\
\leq \frac{1}{2}(1-\f2{{j}}+\frac{3}{2{{j}}^2})^i\cdot\bigg\{C_{{j}}\int_0^T|r(s)|^2_{L^{2}(\gamma_-)}+\|\nu^{-1/2}g(s)\|^2_{L^2}{\rm d} s\bigg\},
\end{multline}
where in the second line we have used \eqref{3.2.12} for $i=0$ as well as $f^0\equiv 0$.
As ${j}_0>1$ is chosen to be large enough, one has
$
0<1-\f2{{j}}+\frac{3}{2{{j}}^2}<1
$
for any ${j}\geq {j}_0$. It then follows from \eqref{add.iter} and \eqref{3.2.13} that
$\{f^i\}_{i=0}^{\infty}$ is a Cauchy sequence in $L^2$. Moreover,
for any $i\geq 0$, it holds that
\begin{equation*}
\int_0^T\|\nu^{1/2}f^{i}(s)\|_{L^2}^2+|f^i(s)|_{L^2(\gamma_+)}^2{\rm d} s\leq C_{{j}} \int_0^T|r(s)|^2_{L^{2}(\gamma_-)}+\|\nu^{-1/2}g(s)\|^2_{L^2}{\rm d} s,
\end{equation*}
and hence
the following uniform-in-$i$ estimate holds true:
\begin{multline}\label{3.2.14}
\int_0^T\|\nu^{1/2}f^{i}(s)\|_{L^2}^2+|f^i(s)|_{L^2(\gamma_+)}^2{\rm d} s\\
\leq C_{{j}}\big\{\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}+\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}\big\}^2.
\end{multline}
Next, we turn to the uniform estimate in the $L^{\infty}$ setting, making use of the results obtained in the previous subsection. Note that
Proposition \ref{prop3.1} remains valid if the boundary condition of the problem \eqref{3.1.1} is replaced by
\begin{equation*}
h^{i+1}(t,x,v)|_{\gamma_-}=\frac{1-\frac{1}{{j}}}{\tilde{w}(v)} \int_{v'\cdot n(x)>0} h^i({t},x,v') \tilde{w}(v') {\rm d}\sigma'+w(v)r({t},x,v),
\end{equation*}
namely, we have only changed $1$ to $1-1/{j}$.
Correspondingly one can deduce the mild formulation \eqref{3.1.2}, and prove Lemma \ref{lem.smbd} and Proposition \ref{prop3.1}. Particularly, all constants in \eqref{3.1.4} and \eqref{3.1.5} are independent of ${j}$. Then, using \eqref{3.1.4}, we obtain that
\begin{align}
&\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^\infty}+\sup_{0\leq t\leq T}|h^{i+1}(t)|_{L^\infty(\gamma)}\nonumber\\
&\quad\leq \frac18 \max_{0\leq l\leq k} \{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^\infty}\}+C\sup_{0\leq t\leq T}\Big\{ |wr(t)|_{L^\infty(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^\infty}\Big\}\nonumber\\
&\qquad+C\sup_{0\leq l\leq k} \{ \|\nu^{1/2}f^{i-l}\|_{L^2([0,T];L^2)}\}.\nonumber
\end{align}
It then follows from \eqref{3.2.14} that
\begin{align}\label{3.2.15}
&\sup_{0\leq t\leq T}\|h^{i+1}(t)\|_{L^\infty}{+\sup_{0\leq t\leq T}|h^{i+1}(t)|_{L^\infty(\gamma)}}\nonumber\\
&\quad\leq \frac18 \max_{0\leq l\leq k} \{\sup_{0\leq t\leq T}\|h^{i-l}(t)\|_{L^\infty}\}
+C_{{j}}\sup_{0\leq t\leq T}\big\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\big\}.
\end{align}
Applying \eqref{A.1} to \eqref{3.2.15}, it holds that for $i\geq k+1$,
\begin{align}\label{3.2.16}
&\sup_{0\leq t\leq T}\|h^{i}(t)\|_{L^{\infty}}{+\sup_{0\leq t\leq T}|h^{i}(t)|_{L^\infty(\gamma)}}\nonumber\\
&\quad\leq \f18\max_{1\leq l\leq 2k}\{\sup_{0\leq t\leq T}\|h^{l}(t)\|_{L^{\infty}}\}\nonumber\\
&\qquad+\frac{8+k}{7}C_{{j}}\big\{\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}+\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}\big\}\nonumber\\
&\quad\leq C_{{j}}\big\{\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}+\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}\big\},
\end{align}
where we have used \eqref{3.2.10} for $i=1,\cdots,2k$ in the last inequality. Combining \eqref{3.2.16} with \eqref{3.2.10}, we obtain that for $i\geq 1$,
\begin{align}\label{3.2.17}
&\sup_{0\leq t\leq T}\|h^{i}(t)\|_{L^{\infty}}{+\sup_{0\leq t\leq T}|h^{i}(t)|_{L^\infty(\gamma)}}\nonumber\\
&\quad\leq C_{{j}}\big\{\sup_{0\leq t\leq T}|wr(t)|_{L^{\infty}(\gamma_-)}+\sup_{0\leq t\leq T}\|\nu^{-1}wg(t)\|_{L^{\infty}}\big\}.
\end{align}
Similarly to the derivation of \eqref{3.2.17}, one can apply \eqref{3.1.4} to $h^{i+2}-h^{i+1}$ to get
\begin{align}\label{3.2.18}
&\sup_{0\leq t\leq T}\|[h^{i+2}-h^{i+1}](t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|[h^{i+2}-h^{i+1}](t)|_{L^{\infty}{(\gamma)}}\nonumber\\
&\leq \f18\max_{0\leq l\leq k}\{\sup_{0\leq t\leq T}\|[h^{i+1-l}-h^{i-l}](t)\|_{L^{\infty}}\}\notag\\
&\qquad+C\max_{0\leq l\leq k}\{\|\nu^{1/2}[f^{i+1-l}-f^{i-l}]\|_{L^{2}([0,T];L^2)}\}\nonumber\\
&\leq \f18\max_{0\leq l\leq k}\{\sup_{0\leq t\leq T}\|[h^{i+1-l}-h^{i-l}](t)\|_{L^{\infty}}\}\notag\\
&\qquad+C_{{j}}\eta_{{j}}^{i-k}\left\{\int_0^T|r(s)|^2_{L^{2}(\gamma_-)}+\|\nu^{-1/2}g(s)\|^2_{L^2}{\rm d} s\right\}^{1/2}\nonumber\\
&\leq \f18\max_{0\leq l\leq k}\{\sup_{0\leq t\leq T}\|[h^{i+1-l}-h^{i-l}](t)\|_{L^{\infty}}\}\notag\\
&\qquad+
C_{{j}}\sup_{0\leq t\leq T}\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\}\eta_{{j}}^{i+k+1},
\end{align}
where we have denoted
$
\eta_{{j}}:=\sqrt{1-\f2{{j}}+\frac{3}{2{{j}}^2}}.
$
Let ${j}_0>1$ be suitably large such that
$
\f18\eta_{{j}}^{-k-1}\leq\f12
$
for any ${j}\geq {j}_0$. Then, applying \eqref{A.1-1} to \eqref{3.2.18}, we obtain that for $i\geq k+1$,
\begin{align}\label{3.2.19}
&\sup_{0\leq t\leq T}\|[h^{i+2}-h^{i+1}](t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|[h^{i+2}-h^{i+1}](t)|_{L^{\infty}{(\gamma)}}\nonumber\\
&\leq \left(\f18\right)^{[\frac{i}{k+1}]}\max_{0\leq l\leq 2k+1}\{\sup_{0\leq t\leq T}\|h^{l}(t)\|_{L^{\infty}}\}\notag\\
&\qquad+C_{{j}}\sup_{0\leq t\leq T}\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\}\cdot\eta_{{j}}^i\nonumber\\
&\leq C_{{j}}\big\{\left(\f18\right)^{[\frac{i}{k+1}]}+\eta_{{j}}^i\big\}\sup_{0\leq t\leq T}\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\}.
\end{align}
Hence, from \eqref{3.2.19}, we see that $\{h^{i}\}$ is also a Cauchy sequence in $L^{\infty}$. Let $h(t,x,v)$ be the limit function of $h^{i}$ in $L^\infty$. It is straightforward to check that $f:=\frac{h}{w}$ solves \eqref{3.2.1} for ${j}\geq {j}_0$. Furthermore, since each $f^i$ is a time-periodic function with period $T$ and $h^i=wf^i$ converges to $h$ in $L^{\infty}$, then $f=\frac{h}{w}$ is also periodic in time with the same period $T$. If $\Omega$ is convex, the continuity of $f$ directly follows from the continuity of $f^i$. Moreover, taking the limit $i \rightarrow \infty$ in \eqref{3.2.14}, we get that
\begin{align}\label{3.2.20}
\|\nu^{1/2}f\|_{L^{2}([0,T];L^2)}\leq C_{{j}}\sup_{0\leq t\leq T}\{|wr(t)|_{L^{\infty}(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^{\infty}}\}.
\end{align}
Then the $L^{\infty}$ bound \eqref{3.2.6} directly follows from \eqref{3.1.5} and \eqref{3.2.20}. The proof of Lemma \ref{lem3.2.1} is therefore complete.
\end{proof}
As mentioned before, Lemma \ref{lem3.2.1} is the first step for obtaining the approximate solutions $f^{{j},\varepsilon}$ to \eqref{3.2.1}. We now turn to the second step, establishing the solvability of the problem \eqref{3.2.2} by letting ${j}\to \infty$. For brevity, in the following lemma we omit the dependence of $f^{{j},\varepsilon}$ on $\varepsilon$.
\begin{lemma}\label{lem3.2.2}
Let $-3<\gamma\leq 1$, $\varepsilon>0$, $0\leq q<1/8$ and $\beta>3$. Under the same assumption as in Lemma \ref{lem3.2.1}, there exists a unique time-periodic solution $f(t,x,v)$ to \eqref{3.2.2} satisfying the estimate
\begin{multline}\label{3.2.2-2}
\sup_{0\leq t\leq T}\big\{\|wf(t)\|_{L^\infty} +|wf(t)|_{L^\infty{(\gamma)}}\big\} \\
\leq C \sup_{0\leq t\leq T}\big\{ |wr(t)|_{L^\infty(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^\infty} \big\}.
\end{multline}
Furthermore, if $\Omega$ is convex, $g$ is continuous in
$\mathbb{R}\times\Omega\times\mathbb{R}^3$ and
$r$ is continuous in $\mathbb{R}\times \gamma_-$,
then $f(t,x,v)$ is also continuous away from the grazing set $\mathbb{R}\times\gamma_0$.
\end{lemma}
\begin{proof}
We shall first obtain the uniform-in-${j}$ estimate on the solutions $f^{{j}}$ to \eqref{3.2.1} and then show that $h^{{j}}:=wf^{{j}}$ is Cauchy in $L^{\infty}$.
To obtain the $L^\infty$ estimates, we first need $L^2$ estimates. Taking the inner product of \eqref{3.2.1} with $f^{{j}}$ over $[0,T]\times \Omega\times \mathbb{R}^3$ gives that
\begin{align}
&\int_0^T\varepsilon\|f^{{j}}(s)\|_{L^2}^2+\f12\|\nu^{1/2}f^{{j}}(s)\|_{L^2}^2+\f12|f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\leq C\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2{\rm d} s+\f12\int_0^T\big|(1-\f1{{j}})P_{\gamma}f^{{j}}+r\big|_{L^2(\gamma_-)}^2{\rm d} s\nonumber\\
&\leq C\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2{\rm d} s+\frac{1+\eta}{2}\int_0^T|P_\gamma f^{{j}}(s)|_{L^{2}(\gamma_+)}^2{\rm d} s+C_\eta\int_0^T|r(s)|^2_{L^{2}(\gamma_-)}{\rm d} s,\nonumber
\end{align}
which further implies that
\begin{align}\label{3.2.21}
&\int_0^T\varepsilon\|f^{{j}}(s)\|_{L^2}^2+\f12\|\nu^{1/2}f^{{j}}(s)\|_{L^2}^2+\f12|(I-P_\gamma)f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\leq C\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2{\rm d} s+\frac{\eta}{2}\int_0^T|P_\gamma f^{{j}}(s)|_{L^{2}(\gamma_+)}^2{\rm d} s+C_\eta\int_0^T|r(s)|^2_{L^{2}(\gamma_-)}{\rm d} s,
\end{align}
where $\eta>0$ can be arbitrarily small. To estimate the second term on the right-hand side of \eqref{3.2.21}, using the same idea as in \cite{EGKM}, we recall
the near-grazing set $\gamma_+^{\varepsilon'}$ defined in \eqref{cut}
and split $P_{\gamma}f^j=P_{\gamma}(f^j\mathbf{1}_{\gamma^{\varepsilon'}_+})+P_{\gamma}(f^j\mathbf{1}_{\gamma_+\setminus\gamma^{\varepsilon'}_+}).$ By a direct computation, we have
\begin{align}
|P_{\gamma}(f^j\mathbf{1}_{\gamma^{\varepsilon'}_+})|_{L^2(\gamma_-)}\leq C\varepsilon'|f^j|_{L^2(\gamma_+)}\leq C\varepsilon'|P_{\gamma}f^j|_{L^2(\gamma_+)}+C\varepsilon'|(I-P_{\gamma})f^j|_{L^2(\gamma_+)},\nonumber
\end{align}
and
\begin{align}
|P_{\gamma}(f^j\mathbf{1}_{\gamma_+\setminus\gamma^{\varepsilon'}_+})|_{L^2(\gamma_-)}^2=&\int_{\gamma_-}\mu(v)|n(x)\cdot v|\nonumber\\
&\times\left(\int_{n(x)\cdot v'>0}e^{-\frac{|v'|^2}{8}}f^j\mathbf{1}_{\gamma_+\setminus\gamma^{\varepsilon'}_+}e^{\frac{|v'|^2}{8}}\sqrt{\mu(v')}|n(x)\cdot v'|{\rm d} v'\right)^{2}{\rm d} \gamma\nonumber\\
\leq &C|e^{-\frac{|v|^2}{8}}f^j\mathbf{1}_{\gamma_+\setminus\gamma^{\varepsilon'}_+}|_{L^2(\gamma_+)}^2.\nonumber
\end{align}
From the first equation of \eqref{3.2.1}, we have
\begin{align}
(\partial_t+v\cdot \nabla_x )e^{-\frac{1}{4}|v|^2}(f^{{j}})^2=2e^{-\frac{1}{4}|v|^2}gf^{{j}}-2[\varepsilon+ \nu(v)] e^{-\frac{1}{4}|v|^2} (f^{{j}})^2,\nonumber
\end{align}
which implies that
$$
\|(\partial_t+v\cdot \nabla_x )e^{-\frac{1}{4}|v|^2}(f^{{j}})^2\|_{L^1}\leq C\|e^{-\frac{|v|^2}{16}}f^{{j}}\|_{L^2}^2+C\|e^{-\frac{|v|^2}{16}}g\|_{L^2}^2.
$$
Thus, from the trace Lemma \ref{lemT}, it follows that
\begin{align}
&\int_0^T|e^{-\frac{1}{8}|v|^2}f^{{j}}(s)\mathbf{1}_{\gamma_+\setminus\gamma_+^{\varepsilon'}}|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&=\int_0^T\left|[e^{-\frac{1}{8}|v|^2}f^{{j}}]^2(s)\mathbf{1}_{\gamma_+\setminus\gamma_+^{\varepsilon'}}\right|_{L^1(\gamma_+)}{\rm d} s
\nonumber\\
&\lesssim_{\varepsilon',\Omega}\int_0^T\|(\partial_t+v\cdot \nabla_x )e^{-\frac{1}{4}|v|^2}(f^{{j}})^2\|_{L^1}+\|e^{-\f14|v|^2}(f^{{j}})^2(s)\|_{L^1}{\rm d} s\notag\\
&\qquad+\|e^{-\f14|v|^2}(f^{{j}})^2(0)\|_{L^1}\nonumber\\
&\lesssim_{\varepsilon',\Omega}\int_0^T\|e^{-\frac{|v|^2}{16}}f^{{j}}(s)\|_{L^2}^2{\rm d} s+\int_0^T\|e^{-\frac{|v|^2}{16}}g(s)\|_{L^2}^2{\rm d} s+\sup_{0\leq t\leq T}\|e^{-\frac{|v|^2}{16}}f^{{j}}(t)\|_{L^{\infty}}^2.\nonumber
\end{align}
Collecting these estimates, we have
\begin{align}\label{3.2.22}
&\int_0^T|P_{\gamma}f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\quad\leq C\varepsilon'\int_0^T|P_{\gamma}f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s+C\varepsilon'\int_0^T|(I-P_{\gamma})f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\qquad+C\int_0^T|e^{-\frac{1}{8}|v|^2}f^{{j}}\mathbf{1}_{\gamma_+\setminus\gamma_+^{\varepsilon'}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\quad\leq C\varepsilon'\int_0^T|P_{\gamma}f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s+C\varepsilon'\int_0^T|(I-P_{\gamma})f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\qquad+C_{\varepsilon'}\int_0^T\|e^{-\frac{|v|^2}{16}}f^{{j}}(s)\|_{L^2}^2+\|e^{-\frac{|v|^2}{16}}g(s)\|_{L^2}^2{\rm d} s+C_{\varepsilon'}\sup_{0\leq t\leq T}\|e^{-\frac{|v|^2}{16}}f^{{j}}(t)\|_{L^{\infty}}^2\nonumber\\
&\quad\leq C\int_0^T|(I-P_{\gamma})f^{{j}}(s)|_{L^2(\gamma_+)}^2+\|e^{-\frac{|v|^2}{16}}f^{{j}}(s)\|_{L^2}^2+\|e^{-\frac{|v|^2}{16}}g(s)\|_{L^2}^2{\rm d} s\nonumber\\
&\qquad+C\sup_{0\leq t\leq T}\|e^{-\frac{|v|^2}{16}}f^{{j}}(t)\|_{L^{\infty}}^2.
\end{align}
Here we have taken $\varepsilon'>0$ suitably small. Plugging \eqref{3.2.22} back into \eqref{3.2.21},
we get that
\begin{align}\label{3.2.24}
&\int_0^T\varepsilon\|f^{{j}}(s)\|_{L^2}^2+\|\nu^{1/2}f^{{j}}(s)\|_{L^2}^2+|(I-P_\gamma)f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\leq {C}\eta\int_0^T\|\nu^{1/2}f^{{j}}(s)\|_{L^2}^2+|(I-P_\gamma)f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s
+{C}\eta \sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{{j}}(t)\|_{L^{\infty}}^2\nonumber\\
&\quad+{C}\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2{\rm d} s+{C}\int_0^T|r(s)|^2_{L^{2}(\gamma_-)}{\rm d} s.
\end{align}
Then, for any $\eta$ with $0<\eta\leq \eta_1:=\frac{1}{2C}$, it follows from \eqref{3.2.24} that
\begin{align}\label{3.2.25}
&\int_0^T\varepsilon\|f^{{j}}(s)\|_{L^2}^2+\f12\|\nu^{1/2}f^{{j}}(s)\|_{L^2}^2+\f12|(I-P_\gamma)f^{{j}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\leq
C\eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{{j}}(t)\|_{L^{\infty}}^2+C_{\eta}\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2{\rm d} s+C_{\eta}\int_0^T|r(s)|^2_{L^{2}(\gamma_-)}{\rm d} s\nonumber\\
&\leq C\eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{{j}}(t)\|_{L^{\infty}}^2+C_{\eta}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}^2.
\end{align}
On the other hand, by applying the $L^{\infty}$ estimate \eqref{3.1.5} to $h^{{j}}:=wf^{{j}}$, one has
\begin{multline*}
\sup_{0\leq t\leq T}\{\|h^{{j}}(t)\|_{L^{\infty}}+|h^{{j}}(t)|_{L^{\infty}{(\gamma)}}\}\\
\leq C\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}
+C\|\nu^{1/2}f^{{j}}\|_{L^{2}([0,T];L^2)}.
\end{multline*}
Plugging \eqref{3.2.25} in the above estimate gives
\begin{multline*}
\sup_{0\leq t\leq T}\{\|h^{{j}}(t)\|_{L^{\infty}}+|h^{{j}}(t)|_{L^{\infty}{(\gamma)}}\}\\
\leq C{\eta^{1/2}}\sup_{0\leq t\leq T}\|h^{{j}}(t)\|_{L^{\infty}}+C_{\eta}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}.
\end{multline*}
Further letting $\eta>0$ be small enough, it then follows that
\begin{equation}\label{3.2.26}
\sup_{0\leq t\leq T}\{\|h^{{j}}(t)\|_{L^{\infty}}+|h^{{j}}(t)|_{L^{\infty}{(\gamma)}}\}
\leq C\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}.
\end{equation}
This completes the uniform-in-${j}$ $L^\infty$ estimates.
Next, we need to show that $h^{{j}}:=wf^{{j}}$ is Cauchy in $L^{\infty}$. For this, we consider the difference $h^{{{j}}_2}-h^{{j}_1}$. Note that $f^{{j}_2}-f^{{j}_1}=w^{-1}\big(h^{{j}_2}-h^{{j}_1}\big)$ solves
\begin{equation}
\left\{\begin{aligned}
&\partial_t(f^{{j}_2}-f^{{{j}}_1})+v\cdot\nabla_x(f^{{j}_2}-f^{{j}_1})+[\varepsilon+\nu(v)](f^{{j}_2}-f^{{j}_1})=0,\\
&(f^{{j}_2}-f^{{j}_1})|_{\gamma_-}=(1-\frac{1}{{j}_2})P_{\gamma}(f^{{j}_2}-f^{{j}_1})+(\f1{{j}_1}-\f1{{j}_2})P_{\gamma}f^{{j}_1}.\nonumber
\end{aligned}\right.
\end{equation}
Then, by similar energy estimates made above, it holds that
\begin{align}
&\int_0^T\|\nu^{1/2}(f^{{j}_2}-f^{{j}_1})(s)\|_{L^2}^2{\rm d} s\nonumber\\
&\leq \eta\sup_{0\leq t\leq T}\|(h^{{j}_2}-h^{{j}_1})(t)\|_{L^{\infty}}^2+C_{\eta}\int_0^T|(\f1{{j}_2}-\f1{{j}_1})P_{\gamma}f^{{j}_1}(s)|_{L^{2}(\gamma_-)}^2{\rm d} s\nonumber\\
&\leq \eta\sup_{0\leq t\leq T}\|(h^{{j}_2}-h^{{j}_1})(t)\|_{L^{\infty}}^2\notag\\
&\qquad+C_{\eta}\bigg(\frac{1}{{j}_1^2}+\frac{1}{{j}_2^2}\bigg)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}^{{2}},\nonumber
\end{align}
where we have used \eqref{3.2.26} in the last inequality. Again, applying {\eqref{3.1.5}}
to the difference $h^{{j}_2}-h^{{j}_1}$, we get that
\begin{align}
&\sup_{0\leq t\leq T}\|(h^{{j}_2}-h^{{j}_1})(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|(h^{{j}_2}-h^{{j}_1})(t)|_{L^{\infty}{(\gamma)}}\nonumber\\
&\leq C\sup_{0\leq t\leq T}\bigg|w(\f1{{j}_2}-\f1{{j}_1})P_{\gamma}f^{{j}_1}\bigg|_{L^{\infty}(\gamma_-)}+C\|\nu^{1/2}(f^{{j}_2}-f^{{j}_1})\|_{L^2([0,T];L^2)}\nonumber\\
&\leq C{\eta^{1/2}}\sup_{0\leq t\leq T}\|(h^{{j}_2}-h^{{j}_1})(t)\|_{L^{\infty}}\notag\\
&\qquad+C_{\eta}\bigg(\frac{1}{{j_1}}+\frac{1}{{j_2}}\bigg)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}.\nonumber
\end{align}
Taking $\eta>0$ suitably small, the above estimate yields that $h^{{j}}$ is Cauchy in $L^{\infty}$. Let $h(t,x,v)$ be the limit function of $h^{{j}}$. It is direct to check that $f:=\frac{h}{w}$ solves \eqref{3.2.2}, and the estimate \eqref{3.2.2-2} follows from \eqref{3.2.26}. Moreover, since each $f^{{j}}$ is time-periodic with period $T$, $f$ is also time-periodic with the same period $T$. The continuity follows in a similar way. Thus, the proof of Lemma \ref{lem3.2.2} is complete.
\end{proof}
We now move to the third step, treating the existence and uniform estimates of solutions to the linear problem \eqref{3.2.4}, where the linear collision term is involved. For the proof, we follow the same strategy as in \cite{DHWZ}.
\begin{lemma}\label{lem3.2.3}
Let $-3<\gamma\leq 1$, $\varepsilon>0$, {$0\leq q<1/8$} and $\beta>3$. Under the same assumption as in Lemma \ref{lem3.2.1}, the linear problem \eqref{3.2.4} admits a unique time-periodic solution $f^{\varepsilon}(t,x,v)$ with period $T$, satisfying the following estimate:
\begin{multline}\label{3.2.27}
\sup_{0\leq t\leq T}\big\{\|wf^{\varepsilon}(t)\|_{L^\infty} +|wf^{\varepsilon}(t)|_{L^\infty{(\gamma)}}\big\}\\
\leq C_{\varepsilon}\sup_{0\leq t\leq T}\big\{ |wr(t)|_{L^\infty(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^\infty} \big\}.
\end{multline}
Moreover, if $\Omega$ is convex, $g$ is continuous in
$\mathbb{R}\times\Omega\times\mathbb{R}^3$, and $r$ is continuous in $\mathbb{R}\times\gamma_-$,
then $f^\varepsilon(t,x,v)$ is also continuous away from $\mathbb{R}\times\gamma_0$.
\end{lemma}
\begin{proof}
The proof relies on the following uniform-in-$\lambda$ estimate on the solution $f^{\lambda,\varepsilon}$ to the modified linear problem \eqref{3.2.3} {for $0\leq \lambda\leq 1$}:
\begin{multline}\label{3.2.28}
\sup_{0\leq t\leq T}\big\{\|wf^{\lambda,\varepsilon}(t)\|_{L^\infty} +|wf^{\lambda,\varepsilon}(t)|_{L^\infty{(\gamma)}}\big\} \\
\leq C_{\varepsilon} \sup_{0\leq t\leq T}\big\{ |wr(t)|_{L^\infty(\gamma_-)}+\|\nu^{-1}wg(t)\|_{L^\infty} \big\},
\end{multline}
where the positive constant $C_{\varepsilon}$ is independent of $\lambda$ but may depend on $\varepsilon$. Once \eqref{3.2.28} is established, one can use the same bootstrap argument as in \cite{DHWZ} to complete the whole proof of Lemma \ref{lem3.2.3}, particularly deriving the estimate \eqref{3.2.27}. Thus, for brevity of presentation, in what follows we only show the uniform estimate \eqref{3.2.28}.
Taking the inner product of \eqref{3.2.3} with $f^{\lambda,\varepsilon}$ over $[0,T]\times\Omega\times\mathbb{R}^3$ gives that
\begin{multline}\label{3.2.29}
\int_0^T\varepsilon\|f^{\lambda,\varepsilon}(s)\|_{L^2}^2+\|\nu^{1/2}f^{\lambda,\varepsilon}(s)\|_{L^2}^2+\frac{1}{2}|f^{\lambda,\varepsilon}(s)|_{L^2(\gamma_+)}^2{\rm d} s\\
\leq \int_0^T\langle \lambda Kf^{\lambda,\varepsilon}(s),f^{\lambda,\varepsilon}(s)\rangle+\f12\big|P_{\gamma}f^{\lambda,\varepsilon}(s)+r(s)\big|_{L^2(\gamma_-)}^2 \\
+\frac{\varepsilon}{4}\|f^{\lambda,\varepsilon}(s)\|_{L^2}^2+\frac{1}{\varepsilon}\|g(s)\|_{L^2}^2{\rm d} s.
\end{multline}
Note that due to the non-negativity of $L=\nu-K$,
$$
\langle \lambda Kf^{\lambda,\varepsilon},f^{\lambda,\varepsilon}\rangle\leq \lambda\|\nu^{1/2}f^{\lambda,\varepsilon}\|_{L^2}^2,
$$
for any {$0\leq \lambda\leq 1$.}
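For completeness, this one-line bound is a direct consequence of the non-negativity of $L=\nu-K$:
$$
\langle \lambda Kf^{\lambda,\varepsilon},f^{\lambda,\varepsilon}\rangle
=\lambda\langle Kf^{\lambda,\varepsilon},f^{\lambda,\varepsilon}\rangle
\leq \lambda\langle \nu f^{\lambda,\varepsilon},f^{\lambda,\varepsilon}\rangle
=\lambda\|\nu^{1/2}f^{\lambda,\varepsilon}\|_{L^2}^2,
$$
since $0\leq\langle Lf,f\rangle=\langle \nu f,f\rangle-\langle Kf,f\rangle$ and $\lambda\geq 0$.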
Then from \eqref{3.2.29}, we have
\begin{align}\label{3.2.30}
&\frac{3\varepsilon}{4}\int_0^T\|f^{\lambda,\varepsilon}(s)\|_{L^2}^2{\rm d} s+\f12\int_0^T|(I-P_{\gamma})f^{\lambda,\varepsilon}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\leq \frac{\eta}{2}\int_0^T\big|P_{\gamma}f^{\lambda,\varepsilon}(s)\big|_{L^2({\gamma_+})}^2 +C_{\eta}\int_0^T|r(s)|_{L^{2}(\gamma_-)}^2{\rm d} s+\frac{1}{\varepsilon}\int_0^T\|g(s)\|_{L^2}^2{\rm d} s.
\end{align}
Here $\eta>0$ can be chosen to be arbitrarily small. Similarly to the derivation of \eqref{3.2.22},
we have that
\begin{align}\label{3.2.31}
&\int_0^T|P_{\gamma}{f^{\lambda,\varepsilon}}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\quad \leq C\int_0^T\|{e^{-\frac{|v|^2}{16}}}f^{\lambda,\varepsilon}(s)\|_{L^2}^2+|(I-P_\gamma )f^{\lambda,\varepsilon}(s)|_{L^2(\gamma_+)}^2+\|{e^{-\frac{|v|^2}{16}}}g(s)\|_{L^2}^2{\rm d} s\notag\\
&\qquad+C\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{\lambda,\varepsilon}(t)\|_{L^{\infty}}^2.
\end{align}
Substituting \eqref{3.2.31} into \eqref{3.2.30} gives that for any small constant $\eta>0$,
\begin{align}\label{3.2.32}
&\frac{\varepsilon}{2}\int_0^T\|f^{\lambda,\varepsilon}(s)\|_{L^2}^2{\rm d} s+\f14\int_0^T|(I-P_{\gamma})f^{\lambda,\varepsilon}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\
&\leq C\eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{\lambda,\varepsilon}(t)\|_{L^{\infty}}^2+C_{\eta,\varepsilon}\int_0^T|r(s)|_{L^{2}(\gamma_-)}^2{\rm d} s+C_{\eta,\varepsilon}\int_0^T\|g(s)\|_{L^2}^2{\rm d} s\nonumber\\
&\leq C\eta\sup_{0\leq t\leq T}\|wf^{\lambda,\varepsilon}(t)\|_{L^{\infty}}^2+C_{\eta,\varepsilon}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^{\infty}}+|wr(t)|_{L^{\infty}(\gamma_-)}\}^2.
\end{align}
Applying the $L^{\infty}$ estimate \eqref{3.1.5} to $h^{\lambda,\varepsilon}:=wf^{\lambda,\varepsilon}$, we have
\begin{align}
&\sup_{0\leq t\leq T}\{\|h^{\lambda,\varepsilon}(t)\|_{L^\infty}+|h^{\lambda,\varepsilon}(t)|_{L^{\infty}{(\gamma)}}\}\nonumber \\
&\leq C\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}+C\left\|f^{\lambda,\varepsilon}\right\|_{L^2([0,T];L^2)}\nonumber\\
&\leq C{\eta^{1/2}}\sup_{0\leq t\leq T}\|h^{\lambda,\varepsilon}(t)\|_{L^{\infty}}+{C_{\eta,\varepsilon}}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\},\nonumber
\end{align}
where we have used \eqref{3.2.32} in the second inequality. Letting $\eta>0$ be small enough, it then follows from the above estimate that
\begin{equation*}
\sup_{0\leq t\leq T}\{\|h^{\lambda,\varepsilon}(t)\|_{L^\infty}+|h^{\lambda,\varepsilon}(t)|_{L^{\infty}{(\gamma)}}\}
\leq C_{\varepsilon}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}.
\end{equation*}
This shows \eqref{3.2.28} and then completes the proof of Lemma \ref{lem3.2.3}.
\end{proof}
\subsection{Solution to the linear inhomogeneous problem}
The last step is concerned with the limit procedure $\varepsilon\rightarrow 0$.
\medskip
\noindent{\it Proof of Proposition \ref{prop3.2}:} Taking the inner product of \eqref{3.2.4} with $f^{\varepsilon}$ over $[0,T]\times\Omega\times\mathbb{R}^3$, we get that for any $\eta>0$,
\begin{align}\label{3.2.34}
&\varepsilon\int_0^T\|f^{\varepsilon}(s)\|_{L^2}^2{\rm d} s+\int_0^T\langle Lf^{\varepsilon}(s), f^\varepsilon(s)\rangle{\rm d} s+
\f12\int_0^T|(I-P_{\gamma})f^{\varepsilon}(s)|_{L^2(\gamma_+)}^2{\rm d} s\notag\\
&\leq\eta \int_{0}^T\|\nu^{1/2}f^\varepsilon(s)\|_{L^2}^2{\rm d} s +\eta\int_0^T|P_{\gamma}f^{\varepsilon}(s)|_{L^{2}(\gamma_+)}^2{\rm d} s\notag\\
&\qquad +C_\eta\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2{\rm d} s+C_\eta\int_0^T|r(s)|_{L^{2}(\gamma_-)}^2{\rm d} s.
\end{align}
By the coercivity estimate \eqref{c}, it holds that
$$\int_0^T\langle Lf^\varepsilon(s), f^\varepsilon(s)\rangle{\rm d} s\geq c_0\int_0^T\|\nu^{1/2}(I-P)f^\varepsilon(s)\|_{L^2}^2{\rm d} s,
$$
where the projection $P$ is defined in \eqref{P}.
For the estimate on $P_{\gamma}f^{\varepsilon}$, it is direct to see that
$$
(\partial_t+v\cdot \nabla_x)\big(e^{-\frac{|v|^2}{4}}(f^{\varepsilon})^2\big)=2e^{-\frac{|v|^2}{4}}gf^{\varepsilon}-2e^{-\frac{|v|^2}{4}}f^{\varepsilon}Lf^{\varepsilon}
-2\varepsilon e^{-\frac{|v|^2}{4}}(f^{\varepsilon})^2.
$$
Then it follows that
$$
\|(\partial_t+v\cdot \nabla_x)\big(e^{-\frac{|v|^2}{4}}(f^{\varepsilon})^2\big)\|_{L^1}\leq C\|{e^{-\frac{|v|^2}{16}}}g\|_{L^2}^2+C\|{e^{-\frac{|v|^2}{16}}}f^{\varepsilon}\|_{L^2}^2.
$$
Thus, similarly to the derivation of \eqref{3.2.31}, it holds that
\begin{align}\label{3.2.35}
&\int_0^T|P_{\gamma}f^{\varepsilon}(s)|^2_{L^2(\gamma_+)}{\rm d} s\nonumber\\
&\quad \leq C\int_0^T\|{e^{-\frac{|v|^2}{16}}}f^{\varepsilon}(s)\|_{L^2}^2+|(I-P_\gamma )f^{\varepsilon}(s)|_{L^2(\gamma_+)}^2+\|{e^{-\frac{|v|^2}{16}}}g(s)\|_{L^2}^2{\rm d} s\notag\\
&\qquad+C\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{\varepsilon}(t)\|_{L^{\infty}}^2.
\end{align}
For the macroscopic part $Pf^{\varepsilon}$, we note that $f^{\varepsilon}$ satisfies the zero-mass condition \eqref{3.2.5}. Then from {\cite[Lemma 6.1]{EGKM}}
there exists a functional $G_{f^{\varepsilon}}(t)$ with the property $|G_{f^{\varepsilon}}(t)|{\lesssim\|f^\varepsilon(t)\|_{L^2}^2}$ such that
\begin{align}\label{3.2.36}
&\int_0^t\|\nu^{1/2}Pf^{\varepsilon}(s)\|_{L^2}^2\,{\rm d} s\notag \\
&\lesssim\bigg(G_{f^{\varepsilon}}(t)-G_{f^{\varepsilon}}(0)\bigg)+\int_0^t\|\nu^{1/2}(I-P)f^{\varepsilon}(s)\|_{L^2}^2{\rm d} s\nonumber\\
&\qquad +\int_0^t\|g(s)\|_{L^2}^2{\rm d} s
+\int_0^t|r(s)|^2_{L^2(\gamma_-)}{\rm d} s+\int_0^t|(I-P_{\gamma})f^{{\varepsilon}}(s)|_{L^2(\gamma_+)}^2{\rm d} s.
\end{align}
In particular, taking $t=T$ in \eqref{3.2.36} and utilizing the periodicity of $f^{\varepsilon}$, we get
\begin{align}\label{3.2.37}
&\int_0^T\|\nu^{1/2}Pf^{\varepsilon}(s)\|_{L^2}^2\,{\rm d} s\notag\\
&\leq C\int_0^T\|\nu^{1/2}(I-P)f^{\varepsilon}(s)\|_{L^2}^2{\rm d} s+C\int_0^T\|g(s)\|_{L^2}^2{\rm d} s\nonumber\\
&\qquad +C\int_0^T|r(s)|_{L^2(\gamma_-)}^2{\rm d} s+C\int_0^T|(I-P_{\gamma})f^{{\varepsilon}}(s)|_{L^2(\gamma_+)}^2{\rm d} s.
\end{align}
A suitable combination of \eqref{3.2.34}, \eqref{3.2.35} and \eqref{3.2.37} yields that
\begin{align}\label{3.2.38}
&\int_0^T\|\nu^{1/2}f^{\varepsilon}(s)\|_{L^2}^2+|f^{\varepsilon}(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\ &\leq \eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{\varepsilon}(t)\|_{L^{\infty}}^2 +C_{\eta}\int_0^T\|\nu^{-1/2}g(s)\|_{L^2}^2+\|g(s)\|_{L^2}^2+|r(s)|_{L^{2}(\gamma_-)}^2{\rm d} s\nonumber\\
&\leq\eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}f^{\varepsilon}(t)\|_{L^{\infty}}^2+C_{\eta}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}^2,
\end{align}
where $\eta>0$ can be chosen to be arbitrarily small. Moreover, in terms of the $L^{\infty}$ estimate \eqref{3.1.5}, it holds that
\begin{align}\label{3.2.39}
&\sup_{0\leq t\leq T}\{\|wf^{\varepsilon}(t)\|_{L^{\infty}}+|wf^{\varepsilon}(t)|_{L^{\infty}({\gamma})}\}\nonumber\\
&\leq C\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}+C\|\nu^{1/2}f^{\varepsilon}\|_{L^2([0,T];L^2)}\nonumber\\
&\leq C{\eta^{1/2}}\sup_{0\leq t\leq T}\|wf^{\varepsilon}(t)\|_{L^{\infty}}+C_{\eta}\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\},
\end{align}
where we have used \eqref{3.2.38} in the last inequality. Then taking $\eta>0$ suitably small in \eqref{3.2.39}, we get the desired estimate.
To pass to the limit $\varepsilon\rightarrow0^+$, we consider the difference $f^{\varepsilon_1}-f^{\varepsilon_2}$ with $0<\varepsilon_1, \varepsilon_2\ll 1$.
We see that $f^{\varepsilon_1}-f^{\varepsilon_2}$ solves the problem:
\begin{equation}\left\{
\begin{aligned}
&\partial_t(f^{\varepsilon_1}-f^{\varepsilon_2})+v\cdot \nabla_x (f^{\varepsilon_1}-f^{\varepsilon_2})+L(f^{\varepsilon_1}-f^{\varepsilon_2})=\varepsilon_2f^{\varepsilon_2}-\varepsilon_1f^{\varepsilon_1},\\
&f^{\varepsilon_1}-f^{\varepsilon_2}|_{\gamma_-}=P_{\gamma}(f^{\varepsilon_1}-f^{\varepsilon_2}).\nonumber
\end{aligned}\right.
\end{equation}
Similar as before, direct energy estimates show that
\begin{align*}
&\int_0^T\|\nu^{1/2}(f^{\varepsilon_1}-f^{\varepsilon_2})(s)\|_{L^2}^2+|(f^{\varepsilon_1}-f^{\varepsilon_2})(s)|_{L^2(\gamma_+)}^2{\rm d} s\nonumber\\ &\leq \eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}(f^{\varepsilon_1}-f^{\varepsilon_2})(t)\|_{L^{\infty}}^2+C_{\eta}(\varepsilon_1^2+\varepsilon_2^{2})\int_0^T\|\nu^{-1/2}f^{\varepsilon_1}(s)\|_{L^2}^2\notag\\
&\qquad\qquad\qquad\qquad\qquad\qquad +\|\nu^{-1/2}f^{\varepsilon_2}(s)\|_{L^2}^2
+\|f^{\varepsilon_1}(s)\|_{L^2}^2+\|f^{\varepsilon_2}(s)\|_{L^2}^2{\rm d} s\nonumber\\
&\leq\eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}(f^{\varepsilon_1}-f^{\varepsilon_2})(t)\|_{L^{\infty}}^2\notag\\
&\quad +C_{\eta}(\varepsilon_1^2+\varepsilon_2^2)\sup_{0\leq t\leq T}\{\|wf^{\varepsilon_1}(t)\|_{L^\infty}+\|wf^{\varepsilon_2}(t)\|_{L^\infty}\}^2\nonumber\\
&\leq \eta\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}(f^{\varepsilon_1}-f^{\varepsilon_2})(t)\|_{L^{\infty}}^2\notag\\
&\quad +C_{\eta}(\varepsilon_1^2+\varepsilon_2^2)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}^2.
\end{align*}
Then applying the $L^{\infty}$ estimate \eqref{3.1.5} to $h^{\varepsilon_1}-h^{\varepsilon_2}:=w(f^{\varepsilon_1}-f^{\varepsilon_2})$, we get that in the case of $0\leq\gamma\leq 1$,
\begin{align}\label{3.2.41}
&\sup_{0\leq t\leq T}\|(h^{\varepsilon_1}-h^{\varepsilon_2})(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|(h^{\varepsilon_1}-h^{\varepsilon_2})(t)|_{L^{\infty}({\gamma})}\nonumber\\
&\leq C(\varepsilon_1+\varepsilon_2)\sup_{0\leq t\leq T}\big\{\|\nu^{-1}h^{\varepsilon_1}(t)\|_{L^{\infty}}+\|\nu^{-1}h^{\varepsilon_2}(t)\|_{L^{\infty}}\big\}\notag\\
&\quad +C\|\nu^{1/2}(f^{\varepsilon_1}-f^{\varepsilon_2})\|_{L^2([0,T];L^2)}\nonumber\\
&\leq C\eta\sup_{0\leq t\leq T}\|(h^{{\varepsilon_1}}-h^{{\varepsilon_2}})(t)\|_{L^{\infty}}\notag\\
&\quad +C_{\eta}(\varepsilon_1+\varepsilon_2)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\},\nonumber\\
&\leq C(\varepsilon_1+\varepsilon_2)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\},
\end{align}
and in the case of $-3<\gamma<0$,
\begin{align}\label{3.2.42}
&\sup_{0\leq t\leq T}\|\nu(h^{\varepsilon_1}-h^{\varepsilon_2})(t)\|_{L^{\infty}}+\sup_{0\leq t\leq T}|\nu(h^{\varepsilon_1}-h^{\varepsilon_2})(t)|_{L^{\infty}{(\gamma)}}\nonumber\\
&\leq C(\varepsilon_1+\varepsilon_2)\sup_{0\leq t\leq T}\big\{\|h^{\varepsilon_1}(t)\|_{L^{\infty}}+\|h^{\varepsilon_2}(t)\|_{L^{\infty}}\big\}+C\|\nu^{1/2}(f^{\varepsilon_1}-f^{\varepsilon_2})\|_{L^2([0,T];L^2)}\nonumber\\
&\leq C{\eta^{1/2}}\sup_{0\leq t\leq T}\|{e^{-\frac{|v|^2}{16}}}(h^{{\varepsilon_1}}-h^{{\varepsilon_2}})(t)\|_{L^{\infty}}\notag\\&\quad +C_{\eta}(\varepsilon_1+\varepsilon_2)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\}\nonumber\\
&\leq C(\varepsilon_1+\varepsilon_2)\sup_{0\leq t\leq T}\{\|\nu^{-1}wg(t)\|_{L^\infty}+|wr(t)|_{L^\infty(\gamma_-)}\},
\end{align}
{by taking $\eta>0$ suitably small. }Therefore, from \eqref{3.2.41} and \eqref{3.2.42} we have respectively shown that $f^{\varepsilon}$ is Cauchy in $L^{\infty}_{w}$ for $0\leq \gamma\leq 1$, and Cauchy in $L^{\infty}_{\nu w}$ for $-3<\gamma<0$. Let $f(t,x,v)$ be the limit function of $f^{\varepsilon}(t,x,v)$ in the corresponding function space. It is direct to check that $f(t,x,v)$ satisfies \eqref{3.0.1}. Finally, the time-periodicity and continuity of $f$ directly follow from the time-periodicity and continuity of $f^{\varepsilon}$. The proof of Proposition \ref{prop3.2} is therefore complete. \qed
\subsection{Proof of Theorem \ref{thm1.1}}
We consider the solution sequence $\{f^j(t,x,v)\}$ iteratively solved from
\begin{align*}
\begin{cases}
\partial_t f^{j+1}+v\cdot \nabla_x f^{j+1}+Lf^{j+1}={-}L_{\sqrt{\mu}f^{*}}f^j+\Gamma(f^j,f^j),\\[1.5mm]
f^{j+1}|_{\gamma_-}=P_\gamma f^{j+1}+\frac{\mu_{\theta}-\mu}{\sqrt{\mu}}\int_{v'\cdot n(x)>0}f^j\sqrt{\mu} \{v'\cdot n(x)\} {\rm d} v' +r,
\end{cases}
\end{align*}
for $j=0,1,2\cdots$, where we have set $f^0\equiv0$. Here we have denoted
$$
r(t,x,v)=\frac{\mu_{\theta}-\mu_{\bar{\theta}}}{\sqrt{\mu}}\int_{v'\cdot n(x)>0} F^{*}(x,v')\{v'\cdot n(x)\} {\rm d} v',
$$
and
$$
{L_{\sqrt{\mu}f^{*}}f^j=-\frac{1}{\sqrt{\mu}}[Q(\sqrt{\mu}f^*,\sqrt{\mu}f^j)+Q(\sqrt{\mu}f^j,\sqrt{\mu}f^*)].}
$$
A direct calculation shows that
\begin{align}\label{3.3.2}
\int_{\Omega\times\mathbb{R}^3} \Gamma(f^j,f^j)\sqrt{\mu(v)} {\rm d} v{\rm d} x=\int_{\Omega\times\mathbb{R}^3}
L_{\sqrt{\mu}f^{*}}f^j\sqrt{\mu(v)} {\rm d} v{\rm d} x=0,
\end{align}
and
\begin{equation}\label{3.3.3}
\int_{v\cdot n(x)<0} [\mu_\theta(v)-\mu(v)] \{v\cdot n(x)\} {\rm d} v =\int_{v\cdot n(x)<0} [\mu_{\theta}(v)-\mu_{\bar{\theta}}(v)] \{v\cdot n(x)\} {\rm d} v=0.
\end{equation}
Furthermore, one can verify that
\begin{align}\label{3.3.4}
\|\nu^{-1}wL_{\sqrt{\mu}f^{*}}f^j\|_{L^{\infty}}+\|\nu^{-1} w \Gamma(f^j,f^j)\|_{L^\infty}\leq C \delta\|wf^j\|_{L^\infty}+C\|wf^j\|^2_{L^\infty},
\end{align}
and
\begin{equation}\label{3.3.5}
\left| w\bigg\{r +\frac{\mu_\theta-\mu}{\sqrt{\mu}}\int_{v'\cdot n(x)>0} f^j\sqrt{\mu} \{v'\cdot n(x)\} {\rm d} v'\bigg\} \right|_{L^\infty{(\gamma_-)}}
\leq C\delta_1+C\delta |f^j|_{L^\infty(\gamma_+)}.
\end{equation}
Recall \eqref{3.3.2}, \eqref{3.3.3}, \eqref{3.3.4} and \eqref{3.3.5}. Then, by applying \eqref{3.0.4} to $f^{j+1}$, we get
\begin{multline}\label{3.3.6}
\sup_{0\leq s\leq T}\{\|wf^{j+1}(s)\|_{L^\infty}+|wf^{j+1}(s)|_{L^\infty{(\gamma)}}\}\\
\leq C\delta_1+C\sup_{0\leq s\leq T}\big\{\|f^j(s)\|_{L^\infty}^2+\delta\|wf^j(s)\|_{L^\infty}+\delta|wf^j(s)|_{L^\infty(\gamma_+)}\big\}.
\end{multline}
From \eqref{3.3.6}, it is direct to prove by an induction argument that
\begin{equation}\label{3.3.7}
\sup_{0\leq s\leq T}\|wf^{j}(s)\|_{L^\infty}+\sup_{0\leq s\leq T}|wf^{j}(s)|_{L^\infty{(\gamma)}}\leq 2C\delta_1,
\end{equation}
for $j=1,2,\cdots$, provided that $\delta>0$ is suitably small, where $C$ is a generic constant independent of $j$. For the convergence of the approximation sequence $f^j$, we consider the difference $f^{j+1}-f^j$ which satisfies
\begin{multline}
\partial_t(f^{j+1}-f^j)+v\cdot \nabla_x (f^{j+1}-f^j)+L(f^{j+1}-f^j)\nonumber\\
={-}L_{\sqrt{\mu}f^{*}}(f^j-f^{j-1})+\Gamma(f^{j}-f^{j-1},f^{j})+\Gamma(f^{j-1},f^{j}-f^{j-1}),\nonumber
\end{multline}
with the boundary condition
\begin{align}
(f^{j+1}-f^j)|_{\gamma_-}&=P_\gamma (f^{j+1}-f^j)\notag\\
&\quad+\frac{\mu_\theta-\mu}{\sqrt{\mu}}\int_{v'\cdot n(x)>0} (f^j-f^{j-1})\sqrt{\mu} \{v'\cdot n(x)\} {\rm d} v'.\nonumber
\end{align}
Once again, applying \eqref{3.0.4} to $f^{j+1}-f^j$ gives that
\begin{align}\label{3.3.8}
&\sup_{0\leq s\leq T}\big\{\|w(f^{j+1}-f^j)(s)\|_{L^\infty}+|w(f^{j+1}-f^j)(s)|_{L^\infty{(\gamma)}}\big\}\nonumber\\
&\leq C\bigg(\delta+\sup_{0\leq s\leq T}\big\{\|wf^{j}(s)\|_{L^\infty}+\|wf^{j-1}(s)\|_{L^\infty}\}\bigg)\nonumber\\
&\quad\times \sup_{0\leq s\leq T}\big\{\|w(f^{j}-f^{j-1})(s)\|_{L^\infty}+|w(f^j-f^{j-1})(s)|_{L^\infty(\gamma_+)}\big\}\nonumber\\
&\leq C\delta \sup_{0\leq s\leq T}\big\{\|w(f^{j}-f^{j-1})(s)\|_{L^\infty}+|w(f^j-f^{j-1})(s)|_{L^\infty(\gamma_+)}\big\}\nonumber\\
&\leq \f12\sup_{0\leq s \leq T}\big\{\|w(f^{j}-f^{j-1})(s)\|_{L^\infty}+|w(f^j-f^{j-1})(s)|_{L^\infty(\gamma_+)}\big\},
\end{align}
where we have used \eqref{3.3.7} in the second inequality and also we have taken $\delta>0$ small enough such that $C\delta\leq
1/2$. Hence, $f^j(t,x,v)$ is a Cauchy sequence in $L^\infty_w$. Let
$
f^{{per}}(t,x,v)=\lim_{j\rightarrow\infty} f^j(t,x,v)
$
in $L^\infty_w$.
It is direct to check that
$$
F^{per}(t,x,v)=\mu+\sqrt{\mu} f^{{per}}(t,x,v)
$$
is the time-periodic solution to the boundary-value problem \eqref{1.1} and \eqref{1.7}, and also \eqref{add.thmcon} and \eqref{1.5} are satisfied. The proof of \eqref{1.4} for the non-negativity of $F^{per}(t,x,v)$ is left to the next section. The uniqueness and continuity of $f^{{per}}(t,x,v)$ can be obtained in a usual way, cf.~\cite{DHWZ}. This completes the proof of Theorem \ref{thm1.1}. \qed
\section{Asymptotic stability}
This section is concerned with the large-time behavior of solutions to the initial-boundary value problem \eqref{ibvp} whenever $F_0(x,v)$ is sufficiently close to $F^{per}(0,x,v)$ at initial time. As a byproduct, the result about the dynamical stability of the non-trivial time-periodic profile $F^{per}(t,x,v)$ in turn yields its non-negativity.
As in obtaining the existence of the time-periodic solution $F^{per}(t,x,v)$, we first need to study the linear inhomogeneous problem in the following Proposition \ref{prop4.1}. Since its proof is more or less the same as that of {\cite[Proposition 7.1]{EGKM}}
for $0\leq \gamma\leq 1$ and that of \cite[Proposition 4.4]{DHWZ}
for $-3<\gamma<0$, the full details are omitted for brevity.
\begin{proposition}\label{prop4.1}
Let $-3<\gamma\leq 1$, $0\leq q<\f18$ and $\beta>\max\{3,3-\gamma\}$. Let
$$
\|wf_0\|_{L^{\infty}}+\|\nu^{-1}wg\|_{L^{\infty}}<\infty,
$$
and
\begin{align*}
\int_{\Omega}\int_{\mathbb{R}^3} f_0(x,v)\sqrt{\mu(v)}\,{\rm d} x{\rm d} v=\int_{\Omega} \int_{\mathbb{R}^3} g(t,x,v)\sqrt{\mu(v)}\,{{\rm d} x{\rm d} v}=0.
\end{align*}
Then if
$$
\sup_{0\leq t\leq T}|\theta(t,\cdot)-1|_{L^\infty(\partial\Omega)}
$$ is sufficiently small, the linear inhomogeneous initial-boundary value problem:
\begin{equation}\left\{
\begin{aligned}
&\partial_tf+ v\cdot \nabla_xf+Lf=g,\quad t>0,x\in\Omega, v\in \mathbb{R}^3,\\
&f(t,x,v)|_{\gamma_{-}}=P_{\gamma}f+{\frac{\mu_{\theta}-{\mu}}{\sqrt{\mu}}\int_{v'\cdot n(x)>0}f\sqrt{\mu}\{n(x)\cdot v'\}\,{\rm d} v'},\nonumber\\
&f(t,x,v)|_{t=0}=f_0(x,v),
\end{aligned}\right.
\end{equation}
admits a unique solution $f(t,x,v)$ satisfying that
\begin{multline}\label{4.2}
\sup_{0\leq s\leq t}e^{c {s}^\rho}\{\|wf(s)\|_{L^{\infty}}+|wf(s)|_{L^{\infty}{(\gamma)}}\}\\
{\leq C\|wf_0\|_{L^{\infty}}+C\sup_{0\leq s\leq t}e^{c s^\rho}\|\nu^{-1}wg(s)\|_{L^{\infty}}},
\end{multline}
for any $t>0$, where $\rho$ is defined in \eqref{def.rho}, and $c>0$ is a generic small constant.
Moreover, if $\Omega$ is convex, $f_0(x,v)$ is continuous except on $\gamma_0$, $g$ is continuous in the interior of $[0,\infty)\times\Omega\times \mathbb{R}^3$,
\begin{align*}
f_0(x,v)|_{\gamma_-}=P_{\gamma}f_0+{\frac{\mu_{\theta}-{\mu}}{\sqrt{\mu}}\int_{v'\cdot n(x)>0}f_0\sqrt{\mu}\{n(x)\cdot v'\}{\rm d} v'},
\end{align*}
and $\theta(t,x) $ is continuous over $\mathbb{R}\times\partial \Omega$, then the solution $f(t,x,v)$ is also continuous over $[0,\infty)\times \{\bar{\Omega}\times \mathbb{R}^{3}\setminus\gamma_0\}$.
\end{proposition}
\noindent{\it Proof of Theorem \ref{thm1.2}:} We construct the solution via the following iteration:
\begin{align*}
\begin{cases}
\displaystyle \partial_t f^{j+1}+v\cdot \nabla_x f^{j+1}+Lf^{j+1}={-}L_{\sqrt{\mu}f^{per}}f^j+\Gamma(f^j,f^j),\\
\displaystyle f^{j+1}|_{\gamma_-}=P_\gamma f^{j+1}+\frac{\mu_{\theta}-\mu}{\sqrt{\mu}}\int_{v'\cdot n(x)>0}f^{{j+1}}\sqrt{\mu} \{v'\cdot n(x)\} {\rm d} v',\\
f^{j+1}(0,x,v)=f_0(x,v),
\end{cases}
\end{align*}
for $j=0,1,2\cdots$, where we have set $f^0\equiv0$, and also
$$
L_{\sqrt{\mu}f^{per}}f^j:=-\frac{1}{\sqrt{\mu}}[Q(\sqrt{\mu}f^{per},\sqrt{\mu}f^j)+Q(\sqrt{\mu}f^j,\sqrt{\mu}f^{per})].
$$
Similarly to the estimates \eqref{3.3.2}-\eqref{3.3.5}, we have
\begin{align}
&\int_{\Omega\times\mathbb{R}^3} \Gamma(f^j,f^j)\sqrt{\mu(v)} {\rm d} v{\rm d} x=\int_{\Omega\times\mathbb{R}^3}
L_{\sqrt{\mu}f^{per}}f^j\sqrt{\mu(v)} {\rm d} v{\rm d} x=0,\nonumber
\end{align}
{and
\begin{align}
\|\nu^{-1}wL_{\sqrt{\mu}f^{per}}f^j\|_{L^\infty}+\|\nu^{-1}w\Gamma(f^j,f^j)\|_{L^{\infty}}\leq C\delta'\|wf^j\|_{L^{\infty}}+C\|wf^j\|^2_{L^{\infty}}.\nonumber
\end{align}}
Then we can apply the linear time-decay property \eqref{4.2} to $f^{j+1}$ to obtain that
\begin{align}\label{4.5}
&\sup_{0\leq s\leq t}e^{c s^{\rho}}\{\|wf^{j+1}(s)\|_{L^{\infty}}+|wf^{j+1}(s)|_{L^{\infty}(\gamma)}\}\nonumber\\
&\leq C \|wf_0\|_{L^{\infty}}+C\delta'\sup_{0\leq s\leq t}e^{c s^\rho}\|wf^j(s)\|_{L^{\infty}}+C\sup_{0\leq s\leq t}e^{c s^{\rho}}\|wf^j(s)\|_{L^{\infty}}^2.
\end{align}
From \eqref{4.5}, we can also use the induction argument to show that
$$
\sup_{0\leq s\leq t}e^{c s^{\rho}}\{\|wf^{j+1}(s)\|_{L^{\infty}}+|wf^{j+1}(s)|_{L^{\infty}(\gamma)}\}\leq 2C \|wf_0\|_{L^{\infty}},
$$
provided that both $\delta'>0$ and $\|wf_0\|_{L^{\infty}}$ are suitably small. Similarly to the derivation of \eqref{3.3.8}, one can show that $\{f^j\}_{j=1}^{\infty}$ is a Cauchy sequence {in $L^\infty_w$}, and then we obtain the solution $f(t,x,v)$ as the limit of $f^j(t,x,v)$.
The uniqueness and continuity are standard, and the positivity can be shown by the same method as in \cite{EGKM}. Therefore, we complete the proof of Theorem \ref{thm1.2}.\qed
\medskip
\noindent{\bf Acknowledgments.} Renjun Duan is partially supported by the General Research Fund (Project No.~14302817). Yong Wang is partly supported by NSFC Grant No.~11771429, 11688101, and 11671237.
\section{Introduction}
A Markov decision process (MDP for short) is a five-tuple $({\bf X},{\bf U},\{U(x):x\in {\bf X}\},Q,c)$, where ${\bf X}$ is the state space, ${\bf U}$ is the action set, $\{U(x):x\in {\bf X}\}$ is the family of feasible action sets, $Q$ is the transition kernel and $c$ is the cost-per-stage function. Owing to its wide application in different areas, it has been well studied in the last few decades.
To measure different types of risk in different real models, various cost functionals (i.e. choices of $c$) have been proposed for MDPs. In this paper we are interested in the risk-sensitive case, i.e., given an appropriate policy $\pi=\{u_t\}$, the cost functional parameterized by $\varepsilon$ is defined as
$$\displaystyle J^\varepsilon(x;\pi)=\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_x[\exp(\varepsilon^{-1}{\cal J})] \text{ and } {\cal J}:=\sum_{k=1}^Nc_k(X_k,u_k)+c_N(X_{N+1}).$$
Classical risk-sensitive MDPs have been intensively studied since the seminal paper \cite{Howard1972}.
In particular, the average cost criterion has attracted a lot of researchers since it is quite different from the classical risk-neutral average cost problem (e.g. see \cite{Cavazos2000,Cavazos2011,Jask2007,Dima1999,Ghosh2014}). As far
as applications are concerned, portfolio management is considered in \cite{Biele1999}, revenue problems are treated in \cite{Barz2007}, and applications of risk-sensitive control in finance can be found in \cite{Baue2011}. In recent years, partially observable risk-sensitive MDPs have been considered in \cite{Baue2017}, and a class of risk-sensitive MDPs with certain costs has been investigated in \cite{Baue2013}.
For the general finite $\varepsilon>0$ case, dynamic programming is an efficient method to find the optimal control and to derive the equation for the cost functional under the optimal control.
As $\varepsilon\rightarrow \infty$, one can see that
$$J^\varepsilon(x,\pi)=\mathbb{E}^{\varepsilon,\pi}_x({\cal J})+\frac{\varepsilon^{-1}} 2\mathbb{V}_x^{\varepsilon,\pi}({\cal J})+O(\varepsilon^{-2})$$
where $\mathbb{V}^{\varepsilon,\pi}_x({\cal J})$ is the variance of ${\cal J}$ under $\mathbb{P}^{\varepsilon,\pi}_x$. Thus for large $\varepsilon$, the control problem is the so-called variance minimization problem, in which one seeks an optimal strategy minimizing the variance cost $\mathbb{V}_x^{\varepsilon,\pi}({\cal J})$ among the set of strategies under which the mean cost $\mathbb{E}^{\varepsilon,\pi}_x({\cal J})$ attains its minimum (see \cite{Kawai1987} for example).
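For the reader's convenience, the expansion above is the standard cumulant expansion; a sketch (assuming that ${\cal J}$ has sufficiently many exponential moments under $\mathbb{P}^{\varepsilon,\pi}_x$) reads
$$
\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_x\big[e^{\varepsilon^{-1}{\cal J}}\big]
=\varepsilon\Big(\varepsilon^{-1}\,\mathbb{E}^{\varepsilon,\pi}_x({\cal J})+\frac{\varepsilon^{-2}}{2}\,\mathbb{V}^{\varepsilon,\pi}_x({\cal J})+O(\varepsilon^{-3})\Big)
=\mathbb{E}^{\varepsilon,\pi}_x({\cal J})+\frac{\varepsilon^{-1}}{2}\,\mathbb{V}^{\varepsilon,\pi}_x({\cal J})+O(\varepsilon^{-2}),
$$
obtained by expanding the cumulant generating function $t\mapsto\log\mathbb{E}[e^{tX}]=t\,\mathbb{E}X+\frac{t^2}{2}\,\mathbb{V}X+O(t^3)$ at $t=\varepsilon^{-1}\rightarrow 0$.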
The case of small $\varepsilon$ is totally different. When $\varepsilon\rightarrow0^+$, the decision-makers are significantly sensitive to all possible risks, including rare ones that were previously ignored. As a consequence, one may assume that the transition rate of the dynamics depends on $\varepsilon$ proportionally. For example, the so-called small-noise model for stochastic differential equations is introduced in \cite{Flem2006}, where the Brownian motion is scaled by $\sqrt{\varepsilon}$. The author investigated the limit behavior of the value function of risk-sensitive type as $\varepsilon$ tends to 0. The procedure of deriving the convergence as $\varepsilon\rightarrow 0^+$ is also called the vanishing viscosity method. Moreover, a similar idea applied to Markov chains can also be found in two-time-scale problems (e.g. the two-time-scale Markov chains in \cite{Yin2012}).
In this paper, our main efforts are devoted to such a case as well. Different from the model in \cite{Flem2006}, our cost functional is parametrized by an additional discounting $\tau$, i.e.
\begin{equation}\label{jCJ}\displaystyle J_{\tau,t}^\varepsilon(x;\pi)=\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_{t,x}[\exp(\varepsilon^{-1}{\cal J}_{\tau,t})] \text{ and } {\cal J}_{\tau,t}:=\sum_{k=t}^Nc_{\tau,k}(X^\varepsilon_k,u_k)+c_\tau(X^\varepsilon_{N+1}).\end{equation}
and the corresponding value function is
$V_t^\varepsilon(x;\pi)=J^\varepsilon_{t,t}(x;\pi)$, where $\mathbb{E}^{\varepsilon,\pi}_{t,x}$ is the conditional expectation given $X^\varepsilon_t=x$ under the policy $\pi$.
If $c_{\tau,t}(x,u)=c_t(x,u)$ is independent of $\tau$, then we have the following recursion,
$$V^\varepsilon_{t}(x;\pi)=c_t(x,u)+\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_{t,x}\big[\exp\big(\varepsilon^{-1} V^\varepsilon_{t+1}(X^\varepsilon_{t+1};\pi)\big)\big],$$ so, by applying the Bellman principle, one can see that the problem is time-consistent, i.e. if an optimal control
can be constructed for that initial pair, then it will stay optimal hereafter. If $c$ is in an exponential discounting form, i.e. $c_{\tau,t}(x,u)=\lambda^{t-\tau}c(x,u)$ for some $0<\lambda<1$, we have
$$V_t^\varepsilon(x,\pi)=c(x,u)+\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_{t,x}[\exp(\varepsilon^{-1}\lambda{\cal J}_{t+1,t+1})].$$ Due to the non-linear structure on the right-hand side, the Bellman principle fails and the time-consistency is lost (a concrete calculation is made in Example \ref{example1}). We also notice that even if we take $\varepsilon\rightarrow 0$, the Bellman principle still fails because, in the definition of the cost functional, the risk-sensitivity parameter $\varepsilon$ is the same as the rate $\varepsilon$ in the transition of the Markov chain. Similarly, one can find that the problem is time-inconsistent if $c$ is not of exponential discounting form. Therefore, different from a classical control problem, the risk-sensitive control problem is time-inconsistent if the cost functional involves a discounting factor $\tau$, even if it is in an exponential discounting form. Hence, we are motivated to analyze the time-inconsistent risk-sensitive control problem for practical purposes.
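To make the failure concrete, note that the discounting factor cannot be pulled out of the log-exponential functional; a sketch via Jensen's inequality, with $Y:={\cal J}_{t+1,t+1}$ and assuming the relevant exponential moments are finite, reads
$$
\mathbb{E}^{\varepsilon,\pi}_{t,x}\big[e^{\varepsilon^{-1}\lambda Y}\big]
=\mathbb{E}^{\varepsilon,\pi}_{t,x}\big[\big(e^{\varepsilon^{-1}Y}\big)^{\lambda}\big]
\leq\Big(\mathbb{E}^{\varepsilon,\pi}_{t,x}\big[e^{\varepsilon^{-1}Y}\big]\Big)^{\lambda},\qquad 0<\lambda<1,
$$
since $x\mapsto x^{\lambda}$ is concave on $(0,\infty)$. Hence $\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_{t,x}[e^{\varepsilon^{-1}\lambda Y}]\leq\lambda\,\varepsilon\log\mathbb{E}^{\varepsilon,\pi}_{t,x}[e^{\varepsilon^{-1}Y}]$, with equality only when $Y$ is deterministic, so the right-hand side cannot be expressed through $V^\varepsilon_{t+1}$ alone.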
In a time-inconsistent problem, the optimal control which minimizes the value function now does not stay optimal in the future. The detailed calculation for a new recursion involving $\tau$ can be seen in Section \ref{eequilirium}. To deal with time-inconsistency, we have to find a time-inconsistent equilibrium which is locally optimal only in some appropriate sense. After the breakthroughs in \cite{Yong2012b} and \cite{Ekeland2008,Ekeland2010}, there have been many works on time-inconsistent control concerning MDPs and continuous-time models in the last decade (e.g. see
\cite{Hu2010,Qi2017,Yong2012a,Yong2012b,Wei2017,Bjok2017,Mei2017}).
To the best of the author's knowledge, few works have concentrated on convergence results for time-inconsistent control problems with risk-sensitive cost functionals as the risk-sensitivity parameter $\varepsilon$ goes to $0$. This paper aims to fill that gap.
Compared to previous works on time-consistent risk-sensitive problems, such as \cite{Ghosh2014,Ghosh2017}, time-inconsistency brings new features and mathematical difficulties. One of the main difficulties caused by time-inconsistency in the general state space $\mathbb{R}^d$ is the existence of time-inconsistent equilibrium strategies. For non-degenerate stochastic diffusions in $\mathbb{R}^d$, the existence and uniqueness of time-inconsistent equilibria can be found in \cite{Yong2012a}. For the degenerate case, the existence is still an open problem, due to the lack of first-order regularity of the viscosity solution of a degenerate second-order HJB equation. More explicitly, for a time-inconsistent problem on $\mathbb{R}^d$, the identification of a time-inconsistent equilibrium requires that the HJB equation admit a classical solution, which is not necessarily true for a degenerate problem. To avoid this gap, most existing works are concerned only with the verification theorem (i.e. necessary conditions) for a strategy to be a time-inconsistent equilibrium (e.g. see \cite{Bjok2014,Bjok2017}). In our problem, this restriction is critical, since the limiting dynamic at $\varepsilon=0$ is necessarily degenerate. Thus in this paper, the dynamics are assumed to take values in a countable state space with the discrete topology. We hope to investigate more general cases in future papers.
\smallskip
In view of these developments, one may ask why we should be concerned with controlled Markov chains with time-inconsistent and risk-sensitive costs. There are several reasons to study such problems. Firstly, controlled Markov chains are the simplest controlled Markovian systems and have broad applications in real life. Numerous systems can be formulated as controlled Markov chains and/or Markov decision processes. Thus considering such systems is not only necessary but also has broad impact.
In addition, controlled Markov chains can be used to build numerical schemes for stochastic control problems.
Moreover, as discussed in the previous paragraph, handling the regularity issues in time-inconsistent problems is complicated but necessary. As will be seen in this paper, treating controlled Markov chains valued in a countable state space with simple structure enables us to deal with the regularity issue effectively, without complicated conditions. Together with the aforementioned approximation, this may lead to future work on numerical approximation of time-inconsistent problems, which is of practical concern.
\smallskip
Let ${\bf X}$ be a space with countably many states, and let the control space ${\bf U}$ be a complete metric space with metric $|\cdot,\cdot|_U$. Without loss of generality, we suppose that ${\bf X}$ is the set of integers. Let $M({\bf X})$ be the set of all functions on ${\bf X}$,
let $B({\bf X})$ be the set of functions bounded from below, and write $P({\bf X})$ for the set of all probability measures on ${\bf X}$.
A function $f\in M({\bf X})$ is called inf-finite if the set $\{x\in{\bf X}:f(x)\leq K\}$ has finitely many elements for every $K\in {\bf R}$ (the set of real numbers). Let $C({\bf U})$ be the set of continuous functions on ${\bf U}$. A function $f$ on ${\bf U}$ is called inf-compact if the set $\{u\in{\bf U}:f(u)\leq K\}$ is compact for every $K\in {\bf R}$.
\smallskip
The set of admissible policies $\Pi$ is assumed to be the collection of all deterministic Markov policies, i.e.
$$\Pi=\{\pi=\tilde u_1\oplus\cdots\oplus\tilde u_T: \tilde u_t=u_t(\cdot) \text{ is a measurable function from ${\bf X}$ to ${\bf U}$}\}.$$
Write $\mathbb{T}:=\{1,\cdots,T\}$ and $\pi_{t}:=\tilde u_t\oplus\cdots\oplus \tilde u_{T}$. Here the notation $\tilde u$ denotes the strategy $u(\cdot)\in {\cal U}$.
\vspace{0.2cm}
Given a deterministic policy $\pi\in\Pi$, the transition probability is
\begin{equation}\label{transition}\mathbb{P}^{\varepsilon,\pi}(X_{t+1}=j|X_t=i,X_{t-1},\cdots,X_1)=q^\varepsilon_{t}(j;i,u_t(i)),\end{equation}
where $q^\varepsilon_{t}(j;i,u)\geq 0$ and $\sum_{j\in{\bf X}}q^\varepsilon_{t}(j;i,u)=1$.
\vspace{0.2cm}
For each $(\tau,t)\in\mathbb{T}\times\mathbb{T}$, let $f_{\tau,t}:{\bf X}\times{\bf U}\mapsto{\bf R}$ and $g_{\tau}:{\bf X}\mapsto{\bf R}$. Define the time-inconsistent $\varepsilon$-risk-sensitive cost functional by
\begin{equation}\label{cost}J^\varepsilon_{\tau,t}(x;\pi_t)=\varepsilon\log\mathbb{E}^{\varepsilon,\pi_t}_{t,x}\exp\[\varepsilon^{-1}\(\sum_{s=t}^{T}f_{\tau,s}(X_s,u_s(X_s))+g_\tau(X_{T+1})\)\]\end{equation}
and the value function at $t\in\mathbb{T}$ is
\begin{equation}\label{value}V^\varepsilon_t(x;\pi_t):=J^\varepsilon_{t,t}(x;\pi_t).\end{equation}
We define the limit cost and value function as $\varepsilon\rightarrow0^+$ by
\begin{equation}\label{cost-0}J_{\tau,t}(x;\pi_t)=\mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0^+}J^\varepsilon_{\tau,t}(x;\pi_t)
\end{equation}
and
\begin{equation}\label{value-0}V_t(x;\pi_t):=J_{t,t}(x;\pi_t).\end{equation}
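On a finite toy model, the cost functional \eqref{cost} can be evaluated by brute-force path enumeration, and one can watch $J^\varepsilon$ grow as $\varepsilon$ decreases toward the limit \eqref{cost-0}. All data below (horizon, transitions, costs, policy) are invented for illustration:

```python
import itertools
import math

# Brute-force evaluation of the cost functional (cost) on an invented toy
# model: 2 states, horizon T = 2, eps-independent transitions, and a fixed
# Markov policy (u_s(x) = x).  All numbers are placeholders.
states = (0, 1)
T = 2
q = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.4, (1, 1): 0.6}  # q[(j, i)] = P(next=j | cur=i)
u_s = {0: 0, 1: 1}                                         # policy table

def f(tau, s, x, u):                # running cost f_{tau,s}(x,u), invented
    return (s - tau) * 0.1 + 0.5 * abs(x - u)

def g(tau, x):                      # terminal cost g_tau(x), invented
    return 0.2 * tau * x

def J(tau, t, x, eps):
    # J^eps_{tau,t}(x; pi) = eps * log E[ exp(eps^{-1} * total path cost) ]
    total = 0.0
    for path in itertools.product(states, repeat=T - t + 1):  # (X_{t+1},...,X_{T+1})
        prob, cost, cur = 1.0, f(tau, t, x, u_s[x]), x
        for s, nxt in enumerate(path, start=t + 1):
            prob *= q[(nxt, cur)]
            cost += f(tau, s, nxt, u_s[nxt]) if s <= T else g(tau, nxt)
            cur = nxt
        total += prob * math.exp(cost / eps)
    return eps * math.log(total)

print(J(1, 1, 0, 0.3), J(1, 1, 0, 0.02))  # J^eps grows as eps decreases
```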
The dependence of the transition matrix on $\varepsilon$ serves to identify transitions to states that occur rarely. For example, suppose that for some $j_0\in{\bf X}$, $q_t^\varepsilon(j_0;i,u)=e^{-\varepsilon^{-1}p_{ij_0}(u)}$ for some appropriate numbers $p_{ij}(u)>0$. When $\varepsilon$ is small, $j_0$ is a rare state, which occurs with probability 0 in the limit dynamic (i.e. the limit of the Markov chain as $\varepsilon\rightarrow0$). Thus in a classical optimization problem whose cost function is independent of $\varepsilon$, the state $j_0$ is ignored by the decision maker. For a risk-sensitive problem with cost \eqref{cost}, however, such a rare state cannot be ignored even though $j_0$ disappears in the limit dynamic. The rate $\varepsilon$ in the transition matrix corresponds to the risk-sensitivity rate $\varepsilon$ in the cost functional.
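The retention of rare states can be checked numerically. In the invented toy below, the rare state $j_0=2$ has transition probability $e^{-p/\varepsilon}$, and the quantity $\varepsilon\log\mathbb{E}[\exp(\varepsilon^{-1}h)]$ converges to $\max_j[h(j)-I(j)]$, where the rate is $I(j_0)=p$ and $I=0$ on the states with $\varepsilon$-independent positive probability, so a large cost $h(j_0)$ on the rare state survives in the limit:

```python
import math

# Invented toy: from the current state, transitions to j in {0,1} have fixed
# probability, while the rare state j0 = 2 has q^eps(j0) = exp(-p/eps)
# (remaining mass renormalized).  We check that eps*log E[exp(h/eps)]
# converges to max_j [h(j) - I(j)], with I(0) = I(1) = 0 and I(2) = p.
p = 1.0
h = {0: 0.3, 1: 0.5, 2: 2.0}   # large cost on the rare state (invented)

def Lam(eps):
    q2 = math.exp(-p / eps)
    q = {0: 0.6 * (1 - q2), 1: 0.4 * (1 - q2), 2: q2}
    return eps * math.log(sum(q[j] * math.exp(h[j] / eps) for j in q))

limit = max(h[0], h[1], h[2] - p)   # the rare state dominates: h(2) - p = 1.0
print(Lam(0.05), limit)
```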
\smallskip
As mentioned before, the dependence of the cost functions $f$ and $g$ on a discounting factor $\tau$ generally makes the problem time-inconsistent. We will therefore look for a time-inconsistent equilibrium that satisfies a local optimality condition. The following is the definition of a time-inconsistent risk-sensitive equilibrium.
\begin{definition}{\rm (1)
A $T$-step strategy
$\pi^{\varepsilon,*}\in \Pi$ is called a {\it time-inconsistent $\varepsilon$-risk-sensitive equilibrium} ($\varepsilon$-equilibrium for short) if the following {\it step-optimality} holds:
\begin{equation}\label{opt1}J^\varepsilon_{t,t}(x; \pi^{\varepsilon,*}_{t})\leq J^\varepsilon_{t,t}(x;\tilde u\oplus \pi^{\varepsilon,*}_{t+1})\text{ for any } t\in\mathbb{T},~\tilde u\in{\cal U}.\end{equation}
Recall $\tilde u=u(\cdot)\in{\cal U}$.
(2) A $T$-step strategy
$\pi^*\in \Pi$ is called a {\it time-inconsistent risk-sensitive equilibrium} if the following {\it step-optimality} holds:
\begin{equation}\label{opt2}J_{t,t}(x; \pi^*_{t})\leq J_{t,t}(x;\tilde u\oplus \pi^*_{t+1})\text{ for any } t\in\mathbb{T},~\tilde u\in{\cal U}.\end{equation}
}
\end{definition}
From the definition, we can see that we restrict ourselves to Markov policies only, even though our problem is time-inconsistent. This is in fact a natural consequence of the step-optimality. From the detailed derivation of the equilibrium in Section 3 (see e.g. \eqref{nequi}), the step-optimal strategy in each step is in a feedback form of the step number $t$ and the state value $x_t$ only, independent of the past history $x_k$ for $k< t$. Thus we only need to consider Markov strategies to guarantee step-optimality. The reader is referred to \cite{Yong2012b} for more details.
By the step-optimality \eqref{opt1} and \eqref{opt2}, given all strategies after the $k$th step (i.e. $\pi_{k+1}^*$), the $k$th-step strategy $u_k$ is the optimal strategy in that step under the cost functional with discounting factor $\tau=k$. If we suppose that different players take actions in different steps, then the $k$th player chooses his strategy to minimize the cost functional under the discounting factor $\tau=k$, given the strategies of the players thereafter.
Our main goal in this paper is to derive the time-inconsistent $\varepsilon$-risk-sensitive equilibrium and the time-inconsistent risk-sensitive equilibrium. Moreover, we will prove the convergence of time-inconsistent $\varepsilon$-equilibria to a time-inconsistent equilibrium as $\varepsilon\rightarrow 0^+$.
The paper is arranged as follows. In Section \ref{sec:PR}, we review some results on the large deviation principle (LDP) which will be used in this paper, and present some preliminary lemmas. In Section \ref{sec:TE}, we derive the time-inconsistent risk-sensitive equilibria
and the corresponding recursive Hamiltonian sequences for both cases. Then in Section \ref{sec:con}, we prove the convergence of $\varepsilon$-equilibria as $\varepsilon\rightarrow0^+$. Finally, two illustrative examples are presented in Section \ref{sec:exp} and some concluding remarks are made in Section \ref{sec:conrem}.
\section{Preliminary Results}\label{sec:PR}
\subsection{Large Deviation Principle}
In this subsection, we review some well-known results on the large deviation principle. On a complete separable metric space $\mathscr{Y}$, a function $I:\mathscr{Y}\mapsto[0,\infty]$ is called a (good) rate function if it is inf-compact. Let $Y_n$ be a sequence of $\mathscr{Y}$-valued random variables on some appropriate probability space.
$\{Y_n\}$ is said to satisfy the LDP with rate function $I$ if
(1) for any closed subset $C$ of $\mathscr{Y}$,
$$\mathop{\overline{\rm lim}}_{n\rightarrow\infty}\frac1n\log\mathbb{P}(Y_n\in C)\leq-\inf_CI.$$
(2) for any open subset $O$ of $\mathscr{Y}$,
$$\mathop{\underline{\rm lim}}_{n\rightarrow\infty}\frac1n\log\mathbb{P}(Y_n\in O)\geq-\inf_OI.$$
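The definition can be illustrated on the textbook example of the empirical mean of fair coin flips (a standard example, not a model from this paper), where the rate function is given by Cramér's theorem and the binomial tail can be computed exactly:

```python
import math
from math import comb, log

# Sanity check of the LDP on Y_n = S_n/n with S_n ~ Binomial(n, 1/2):
# (1/n) log P(Y_n >= a) -> -I(a), where the Cramer rate function is
# I(a) = a*log(a/p) + (1-a)*log((1-a)/(1-p)), here with p = 1/2, a = 0.7.
p, a = 0.5, 0.7

def I(a):
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

def tail_rate(n):
    # exact binomial tail probability, then its exponential decay rate
    prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(math.ceil(a * n), n + 1))
    return log(prob) / n

print(tail_rate(400), -I(a))  # close for large n
```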
Roughly speaking, the large deviation principle concerns the rate at which the probabilities of rare events decay to zero. Thus the corresponding risk-sensitive problem is a certain type of robust control problem. Let us now recall some results on the LDP which will be used in this paper. For more details and proofs, see \cite{PDRE1997}.
\begin{theorem}\label{pret} {\rm (1)}
$\{Y_n\}$ satisfies the {\it LDP } with rate function $I$ if and only if $I$ is a rate function (i.e. inf-compact) and for any $h\in C_b(\mathscr{Y})$ (i.e. bounded continuous functions on $\mathscr{Y}$),
$$\mathop{\overline{\rm lim}}_{n\rightarrow\infty}\frac1n\log\mathbb{E}\(\exp[nh (Y_n)]\)=\sup_{\mathscr{Y}}[h-I].$$
{\rm (2)}
$\{Y_n\}$ satisfies the {\it LDP} with rate function $I$ if and only if $\{Y_n\}$ is exponentially tight, i.e. for any $a>0$, there exists a compact subset $K_a$ of $\mathscr{Y}$ such that
$$\frac1n\log\mathbb{P}(Y_n\in K_a^c)\leq -a$$
and for any bounded continuous function $h$ on $\mathscr{Y}$,
$$\mathop{\overline{\rm lim}}_{n\rightarrow\infty}\frac1n\log\mathbb{E}\(\exp[nh (Y_n)]\)=\sup_{\mathscr{Y}}[h-I].$$
{\rm (3)} If there exists a positive, inf-compact function ${\cal V}$ on $\mathscr{Y}$ (i.e. Lyapunov function) satisfying
\begin{equation}\label{lya}\sup_n\frac1n\log\mathbb{E}\(\exp[n{\cal V} (Y_n)]\)<\infty,\end{equation}
then $\{Y_n\}$ is exponentially tight.
{\rm (4)} Let $P(\mathscr{Y})$ be the set of probability measures on $\mathscr{Y}$. The following variational equality (i.e. Varadhan's equality) holds,
\begin{equation}\label{entropy}\log\int_{\mathscr{Y}} e^h d\mu=\sup_{\nu\in P(\mathscr{Y})}\(\int_\mathscr{Y} h d\nu- {\cal R}(\nu\Vert\mu)\),\quad\text{ for any }h\in C_b(\mathscr{Y})\end{equation}
where the {\it relative entropy} ${\cal R}(\cdot\Vert\cdot)$ is defined by
$${\cal R}(\nu\Vert\mu):=\int_{\mathscr{Y}}\log\(\frac{d\nu}{d\mu}\)d\nu,\quad\mu,\nu\in P(\mathscr{Y}) .$$
Moreover, if \eqref{lya} holds, then \eqref{entropy} holds for any $h\in o({\cal V})$ (i.e. $\lim_{|y|\rightarrow\infty}|h(y)|/{\cal V}(y)=0$).
\end{theorem}
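On a finite space, \eqref{entropy} can be verified directly: the supremum is attained at the Gibbs tilt $d\nu^*\propto e^{h}d\mu$. A short numerical check with invented data:

```python
import math

# Numerical check, on a finite space with invented mu and h, of the
# variational equality log sum(e^h mu) = sup_nu [ sum(h nu) - R(nu||mu) ],
# whose supremum is attained at the Gibbs tilt nu* proportional to e^h * mu.
mu = [0.2, 0.5, 0.3]
h = [1.0, -0.5, 2.0]

lhs = math.log(sum(m * math.exp(x) for m, x in zip(mu, h)))

def objective(nu):
    # sum h dnu - R(nu||mu), with the convention 0*log 0 = 0
    return (sum(n * x for n, x in zip(nu, h))
            - sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0))

Z = sum(m * math.exp(x) for m, x in zip(mu, h))
nu_star = [m * math.exp(x) / Z for m, x in zip(mu, h)]   # Gibbs tilt
print(lhs, objective(nu_star))                            # equal
print(objective([1/3, 1/3, 1/3]) <= lhs)                  # any other nu is below
```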
\subsection{Preliminary Lemmas}
Let's recall the transition probability
$$\mathbb{P}^{\varepsilon,\pi}(X_{t+1}=j|X_t=i, X_{t-1},\cdots,X_1 )=q^\varepsilon_{t}(j;i,u_t(i)).$$
For each $t\in\mathbb{T}$ and $\varepsilon>0$, define $\Lambda^\varepsilon_t,\Lambda_t:{\bf X}\times{\bf U}\times B({\bf X})\mapsto {\bf R}$ by
\begin{equation}\label{Lambdae}\Lambda^\varepsilon_t(x,u;h):=\varepsilon\log\(\sum_{z\in{\bf X}}\exp\left\{\varepsilon^{-1} h(z)\right\}q^\varepsilon_{t}(z;x,u)\)\text{ and }\Lambda_t(x,u;h)=\lim_{\varepsilon\rightarrow 0^+}\Lambda^\varepsilon_t(x,u;h).\end{equation}
When $h$ is bounded, $\Lambda^\varepsilon_t(x,u;h)$ is clearly well-defined; $\Lambda_t(x,u;h)$ is well-defined because of the following assumption.
\vspace{0.2cm}
{\bf Assumption (A)}: {
{\rm(A1)} There exists an inf-finite, positive function ${\cal V}:{\bf X}\mapsto {\bf R}$ such that for each $(x,u)\in{\bf X}\times{\bf U}$,
$$\sup_{0<\varepsilon\leq \varepsilon_0}\Lambda^\varepsilon_t(x,u;\lambda_0{\cal V})<\infty,\quad \text{for some } \lambda_0,\varepsilon_0>0.$$
{\rm(A2)} Given any $t\in\mathbb{T}$ and $h\in B({\bf X})$, $\Lambda_t^\varepsilon(x,\cdot;h)$ is a continuous function of $u\in{\bf U}$. Moreover for each $(x,u)\in{\bf X}\times{\bf U}$, there exists a rate function $I_t(\cdot;x,u):{\bf X}\mapsto {\bf R}$ such that for any $h\in B({\bf X})$, and $u^\varepsilon\rightarrow u$ in $
{\bf U}$
%
\begin{equation}\label{rate}\lim_{\varepsilon\downarrow0}\Lambda^\varepsilon_t(x,u^\varepsilon;h)=\sup_{z\in{\bf X}}[h(z)-I_t(z;x,u)]=\Lambda_t(x,u;h).\end{equation}
{\rm(A3)} There exists a $\lambda_0>0$ and a constant $K_u$ depending on $u$ only such that for any $\lambda\in(0,\lambda_0)$ and each $u\in {\bf U}$,
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\sup_{0<\varepsilon\leq\varepsilon_0}\Lambda^\varepsilon_t(x,u;\lambda{\cal V})}{\lambda{\cal V}(x)}< K_u.$$
}
For the positive function ${\cal V}$ on ${\bf X}$ in (A1), we define a subset $B_{\cal V}({\bf X})$ of $B({\bf X})$ by
$$B_{\cal V}({\bf X}):=\{h\in B({\bf X}):\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{h(x)}{{\cal V}(x)}=0\}.$$
We also write
$$\mathscr{B}_{\cal V}:=\left\{\{h^\varepsilon\}\subset B_{\cal V} ({\bf X}): h^\varepsilon \text{ is uniformly bounded below and } \sup_\varepsilon h^\varepsilon\in B_{\cal V} ({\bf X}) \right\}.$$
\begin{remark}\label{remarkA}
{\rm (1) By Theorem \ref{pret}, (A1) and (A2) are sufficient for $\{X_{t+1}^\varepsilon|X_{t}^\varepsilon=x,u\}$ to satisfy the LDP with rate function $I_t(\cdot;x,u)$. Moreover, for $h\in B_{\cal V} ({\bf X})$, $\Lambda_t^\varepsilon(x,u;h)$ and $\Lambda_t(x,u;h)$ are well-defined and \eqref{rate} holds as well.
(2) (A2) says that the convergence to the rate function $I_t$ is uniform on any compact subset of ${\bf U}$. We can conclude that $\Lambda_t^\varepsilon(x,u;h)$ converges to $\Lambda_t(x,u;h)$ uniformly on any compact subset of ${\bf U}$. Moreover, $\Lambda_t(x,u;h)$ is continuous on any compact subset of ${\bf U}$ for fixed $x$ and $h$ (see Proposition 1.2.7 in \cite{PDRE1997}).
(3) If (A1) and (A2) hold, the definition of $\Lambda^\varepsilon_t(x,u;h)$ and $\Lambda_t(x,u;h)$ can be extended to all $h\in B_{\cal V} ({\bf X})$ and (A2) is true for all $h\in B_{\cal V} ({\bf X})$. }
\end{remark}
\vspace{0.2cm}
In this paper, $ B_{\cal V} ({\bf X})$ is equipped with the following metric,
$$w(h,h'):=\sup_{\bf X}\frac{|h-h'|}{{\cal V}}.$$
The following lemma says that $( B_{\cal V} ({\bf X}),w)$ is a complete metric space.
\begin{lemma}\label{lemmapoint}Given ${\cal V}$ defined in {\rm (A1)}, the following statements hold.
{\rm (1)} If $h_n\in B_{\cal V} ({\bf X})$ with $w(h_n,h_m)\rightarrow 0$ as $n,m\rightarrow\infty$, then there exists an $h\in B_{\cal V} ({\bf X})$ such that
$w(h_n,h)\rightarrow 0$, i.e. $( B_{\cal V} ({\bf X}),w)$ is a complete metric space.\smallskip
{\rm (2)} If $h_n$ is uniformly bounded below with $\sup_n h_n\in B_{\cal V} ({\bf X})$, then $\{h_n\}$ has a convergent subsequence in $( B_{\cal V} ({\bf X}),w)$. As a result, if $\{h_n\}\in\mathscr{B}_{{\cal V}}$ and $h_n$ converges to $h$ pointwise, then $h_n$ converges to $h$ in $( B_{\cal V} ({\bf X}),w)$.
\end{lemma}
\begin{proof} (1) For such $\{h_n\}$, it is easy to see that there exists an $h\in M({\bf X})$ such that $h_n$ converges to $h$ pointwise. Now we show that the convergence holds in the metric sense as well.
For any $\delta>0$, there exists an $N_\delta>0$ such that
$$w(h_n,h_m)<\delta,\text{ for any } n,m\geq N_\delta.$$
Note that for any $m\geq N_\delta$,
$$\begin{array}{ll}\displaystyle\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{|h(x)|}{{\cal V}(x)}&\!\!\!\displaystyle\leq \mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\(\frac{|h_m(x)|}{{\cal V}(x)}+\lim_{n\rightarrow\infty}\frac{|h_n(x)-h_m(x)|}{{\cal V}(x)}\)\\
\noalign{\smallskip}&\!\!\!\displaystyle\leq \lim_{n\rightarrow\infty}\sup_{x\in{\bf X}}\frac{|h_n(x)-h_m(x)|}{{\cal V}(x)}<\delta.\end{array}$$
By the arbitrariness of $\delta>0$, we have $h\in B_{\cal V} ({\bf X})$.
For any fixed $\delta>0$, let $n_k$ satisfy
$$w(h_n,h_m)<\frac \delta{2^k},\text{ for any } n,m\geq n_k.$$
Then one can easily see that
$$\sum_{k=1}^\infty w(h_{n_{k+1}},h_{n_{k}})<\delta.$$
It follows that for any $n>N_\delta$,
$$\begin{array}{ll}\displaystyle\sup_{x\in{\bf X}}\frac{|h(x)-h_n(x)|}{{\cal V}(x)}&\!\!\!\displaystyle\leq \sup_{x\in{\bf X}}\sum_{k=1}^\infty\frac{|h_{n_{k+1}}(x)-h_{n_k}(x)|}{{\cal V}(x)}+\sup_{x\in{\bf X}}\frac{|h_{n}(x)-h_{n_1}(x)|}{{\cal V}(x)} \\
\noalign{\smallskip}&\!\!\!\displaystyle \leq\sum_{k=1}^\infty w(h_{n_{k+1}},h_{n_{k}})+w(h_{n_{1}},h_{n})\leq2\delta.\end{array}$$
That is,
$$\lim_{n\rightarrow\infty }w(h_n,h)=0.$$
(2) By the hypothesis, one can easily see that $\{h_n\}$ has a pointwise convergent subsequence with limit $h$. We still denote the subsequence by $\{h_n\}$. Obviously we have
$h\in B_{\cal V} ({\bf X})$ since $h_n$ is uniformly bounded below and $ h\leq \sup_n h_n\in B_{\cal V} ({\bf X})$.
Note that, since $h_n$ is uniformly bounded below and ${\cal V}$ is inf-finite, for any $\delta>0$ there exists an $x_\delta>0$ such that
$$\frac{\sup_n|h_n(x)|}{{\cal V}(x)}\leq \delta\text{ for } |x|\geq x_\delta.$$
Then by the pointwise convergence, it follows that
$$\lim_{n\rightarrow\infty}\sup_{x\in{\bf X}}\frac{|h_n(x)-h(x)|}{{\cal V}(x)}\leq \lim_{n\rightarrow\infty}\sup_{|x|\leq x_\delta}\frac{|h_n(x)-h(x)|}{{\cal V}(x)}+2\delta=2\delta.$$
By the arbitrariness of $\delta>0$, we have
$$\lim_{n\rightarrow\infty}w(h_n,h)=0.$$
\end{proof}
\vspace{0.2cm}
We now prove the well-posedness of $\Lambda$ and $\Lambda^\varepsilon$ on the space $ B_{\cal V} ({\bf X})$.
\begin{lemma}\label{corlambdah} Under Assumption {\rm (A)}, for any $\{h^\varepsilon\}\in \mathscr{B}_{\cal V}$ and each $u\in{\bf U}$,
$\{\Lambda^\varepsilon_t(\cdot,u;h^\varepsilon)\}\in \mathscr{B}_{{\cal V}}$.
Therefore, for any $h\in B_{\cal V} ({\bf X})$, $\Lambda_t(\cdot,u;h)\in B_{{\cal V}}({\bf X})$ for each $u\in{\bf U}$.
\end{lemma}
\begin{proof}
It is easy to see that $\Lambda^\varepsilon_t(\cdot,u;h^\varepsilon)$ is uniformly bounded below. Note that by \eqref{entropy},
$$\begin{array}{ll}\noalign{\smallskip}\Lambda_t^\varepsilon(x,u;h^\varepsilon)&\!\!\!\displaystyle=\sup_{\nu\in P({\bf X})}\(\int_{{\bf X}}h^\varepsilon d\nu-\varepsilon {\cal R}(\nu\Vert q_t^\varepsilon(\cdot;x,u))\)\\
\noalign{\smallskip}&\!\!\!\displaystyle\leq\sup_{\nu\in P({\bf X})}\(\int_{{\bf X}}\lambda{\cal V} d\nu-\varepsilon {\cal R}(\nu\Vert q_t^\varepsilon(\cdot;x,u))\)+\sup_{\nu\in P({\bf X})}\int_{{\bf X}}(h^\varepsilon-\lambda{\cal V}) d\nu\\
\noalign{\smallskip}&\!\!\!\displaystyle\leq \Lambda_t^\varepsilon(x,u;\lambda{\cal V})+\sup_{{\bf X}}[h^\varepsilon-\lambda{\cal V}].\end{array}$$
By (A3),
\begin{equation}\label{heun}\begin{array}{ll}\displaystyle\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\sup_\varepsilon\Lambda_t^\varepsilon(x,u;h^\varepsilon)}{{\cal V}(x)}\leq \mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac1{{\cal V}(x)}\(\sup_\varepsilon\Lambda_t^\varepsilon(x,u;\lambda{\cal V})+\sup_{{\bf X}}[\sup_\varepsilon h^\varepsilon-\lambda{\cal V}]\)\leq \lambda K_u.\end{array}\end{equation}
By the arbitrariness of $\lambda\in(0,\lambda_0)$, it follows that $\{\Lambda_t^\varepsilon(\cdot,u;h^\varepsilon)\}\in \mathscr{B}_{\cal V} $ for each $u\in {\bf U}$.
\end{proof}
\vspace{0.2cm}
Now we are ready to present the Hamiltonians used in our paper. Define ${\cal A}^\varepsilon_t[\cdot],~{\cal A}_t[\cdot]: B_{\cal V} ({\bf X})\mapsto M({\bf X})$ by
$${\cal A}_{t}^\varepsilon[h](x):=\inf_{u\in{\bf U}}\[f_{t,t}(x,u)+\Lambda^\varepsilon_t(x,u;h)\]\quad
\text{and}\quad{\cal A}_{t}[h](x):=\inf_{u\in{\bf U}}\[f_{t,t}(x,u)+\Lambda_t(x,u;h)\].$$
The following lemma will guarantee that ${\cal A}_t^\varepsilon$ and ${\cal A}_t$ map $ B_{\cal V} ({\bf X})$ into $ B_{\cal V} ({\bf X})$ under the following assumption.
\smallskip
{\bf Assumption (B)}: {\it
For each fixed $u\in{\bf U}$, $f_{\tau,t}(\cdot,u),~ g_\tau(\cdot)\in B_{\cal V} ({\bf X})$.
For each fixed $i\in{\bf X}$, $f_{t,t}(i,\cdot)$ is continuous and inf-compact.
}
\begin{lemma} Under Assumptions {\rm (A)} and {\rm (B)}, for any $h\in B_{\cal V} ({\bf X})$, ${\cal A}_{t}^\varepsilon[h],~{\cal A}_{t}[h]\in B_{\cal V} ({\bf X})$.
\end{lemma}
\begin{proof} Since $f_{t,t}$ and $h$ are bounded below, so are ${\cal A}_{t}[h]$ and ${\cal A}_{t}^\varepsilon[h]$ by their definitions. Since $${\cal A}_{t}[h](x)\leq f_{t,t}(x,u_0)+\Lambda_t(x,u_0;h) \text{ for some } u_0\in{\bf U},$$ by Lemma \ref{corlambdah} and Assumption (B), ${\cal A}_{t}[h]\in B_{\cal V} ({\bf X}).$ Similarly, ${\cal A}^\varepsilon_{t}[h]\in B_{\cal V} ({\bf X}).$ Moreover, the infima are attained by Assumptions (A) and (B).
\end{proof}
Given any $h\in B_{\cal V} ({\bf X})$, define
$$\Box\eta_t^\varepsilon(\cdot;h):x\mapsto \mathop{\rm argmin}_{u\in{\bf U}}[f_{t,t}(x,u)+\Lambda^\varepsilon_t(x,u;h)]\subset{\bf U}.$$
If $\Box\eta_t^\varepsilon(x;h)\neq\emptyset$ for any $x\in{\bf X}$, we say $\eta^\varepsilon_t(\cdot;h)$ is a choice of $\Box\eta_t^\varepsilon(\cdot;h)$ if
$$\eta^\varepsilon_t(x;h)\in\Box\eta_t^\varepsilon(x;h)\quad\text{ for any }x\in {\bf X}.$$
We write this as $\eta^\varepsilon_t(\cdot;h)\in \Box\eta^\varepsilon_t(\cdot;h)$. Since ${\bf X}$ is a countable state space, $\eta^\varepsilon_t(\cdot;h)$ is automatically measurable. Similarly, we can define $\Box\eta_t$ and a choice $\eta_t$ of it.
\vspace{0.2cm}
Define ${\cal H}^\varepsilon_{\tau,t}[\cdot],~{\cal H}_{\tau,t}[\cdot]: B_{\cal V} ({\bf X})\mapsto M({\bf X}\times{\bf U})$ by
$${\cal H}_{\tau,t}^\varepsilon[h](x,u):=f_{\tau,t}(x,u)+\Lambda^\varepsilon_t(x,u;h)\quad
\text{and}\quad{\cal H}_{\tau,t}[h](x,u):=f_{\tau,t}(x,u)+\Lambda_t(x,u;h).$$
It is easy to see that
$${\cal A}_{t}^\varepsilon[h](x)=\inf_{u\in{\bf U}}{\cal H}^\varepsilon_{t,t}[h](x,u)\quad\text{and}\quad{\cal A}_{t}[h](x)=\inf_{u\in{\bf U}}{\cal H}_{t,t}[h](x,u).$$
\vspace{0.2cm}
\vspace{0.2cm}
From their definitions, we know that ${\cal H}_{\tau,t}^\varepsilon$ and ${\cal H}_{\tau,t}$ map $ B_{\cal V} ({\bf X})$ into $M({\bf X}\times{\bf U})$. We impose the following assumption to guarantee that ${\cal H}_{\tau,t}^\varepsilon[h](\cdot,u),~{\cal H}_{\tau,t}[h](\cdot,u)\in B_{\cal V} ({\bf X})$ for any $h\in B_{\cal V} ({\bf X})$ and fixed $u\in{\bf U}$.
{\bf Assumption (C)}: {\it
Let $$B_{t,\lambda}(x):=\{u\in{\bf U}: f_{t,t}(x,u)\leq \lambda {\cal V}(x)\}.$$ There exist constants $\lambda_0>0$ and $K_1$ such that for any $\lambda\in(0,\lambda_0)$,
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\sup_{u\in B_{t,\lambda}(x)}f_{\tau,t}(x,u)}{ \lambda {\cal V}(x)}\leq K_1$$
and
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\sup_{0<\varepsilon<\varepsilon_0}\sup_{u\in B_{t,\lambda}(x)}\Lambda^\varepsilon_t(x,u;\lambda{\cal V})}{\lambda {\cal V}(x)}\leq K_1.$$
}
\begin{remark}
{\rm We can see that
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\sup_{u\in B_{t,\lambda}(x)}\Lambda_t(x,u;\lambda{\cal V})}{\lambda {\cal V}(x)}\leq K_1.$$
Moreover, since $f_{t,t}(\cdot,u_0)\in B_{\cal V} ({\bf X})$ for each fixed $u_0$,
any $u_0\in{\bf U}$ belongs to $B_{t,\lambda}(x)$ if $|x|$ is large. Thus
(A3) is a consequence of Assumptions (B) and (C).
}
\end{remark}
\begin{lemma} \label{BBcV}Under Assumptions {\rm (A)}, {\rm (B)} and {\rm (C)}, the following statements hold.\smallskip

{\rm (1)} For any $h\in B_{\cal V} ({\bf X})$, $\eta_t(\cdot;h)\in\Box\eta_t(\cdot;h)$ and $\eta^\varepsilon_t(\cdot;h)\in\Box\eta^\varepsilon_t(\cdot;h)$,
$$f_{\tau,t}(\cdot,\eta_t(\cdot;h)),f_{\tau,t}(\cdot,\eta^\varepsilon_t(\cdot;h))\in B_{\cal V} ({\bf X}). $$

{\rm (2)} For any $h_1,h_2\in B_{\cal V} ({\bf X})$, $\eta_t(\cdot;h_2)\in\Box\eta_t(\cdot;h_2)$ and $\eta^\varepsilon_t(\cdot;h_2)\in\Box\eta^\varepsilon_t(\cdot;h_2)$, $$\Lambda^\varepsilon_t(\cdot,\eta^\varepsilon_t(\cdot;h_2);h_1),~\Lambda_t(\cdot,\eta_t(\cdot;h_2);h_1)\in B_{\cal V} ({\bf X}).$$
\end{lemma}
\begin{proof}
(1) Recall the definition $$B_{t,\lambda}(x):=\{u\in{\bf U}:f_{t,t}(x,u)\leq \lambda {\cal V}(x)\}$$
and, for some fixed $u_0\in{\bf U}$ and $\delta>0$, define
$$B_t(x):=\{u\in{\bf U}: f_{t,t}(x,u)\leq f_{t,t}(x,u_0)+\Lambda_t(x,u_0;h)-\inf_{\bf X} h+\delta\}.$$
Since $f_{t,t}(\cdot,u_0)\in B_{\cal V} ({\bf X})$ and $\Lambda_t(\cdot,u_0;h)\in B_{\cal V} ({\bf X})$ by Lemma \ref{corlambdah}, we have $B_t(x)\subset B_{t,\lambda}(x)$ for large $|x|$.
By the definition of $\eta_t$ and $\eta_t^\varepsilon$, it follows that $$\eta_t^\varepsilon(x;h),\eta_t(x;h)\in B_{t,\lambda}(x) \text{ when $|x|$ is large}.$$
As a consequence, for each $\lambda\in(0,\lambda_0)$,
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{f_{\tau,t}(x,\eta_t(x;h))}{{\cal V}(x)}\leq \mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\sup_{u\in B_{t,\lambda}(x)}f_{\tau,t}(x,u)}{ {\cal V}(x)}\leq \lambda K_1.$$
%
By the arbitrariness of $\lambda\in(0,\lambda_0)$, it follows that $f_{\tau,t}(x,\eta_t(x;h))\in B_{\cal V} ({\bf X})$.
Similarly, we can prove that $f_{\tau,t}(x,\eta^\varepsilon_t(x;h))\in B_{\cal V} ({\bf X})$.\vspace{0.2cm}
(2) Let $h_1,h_2\in B_{\cal V} ({\bf X})$. Then
$$\eta_t(x;h_2),\eta^\varepsilon_t(x;h_2)\in B_{t,\lambda}(x)\text{ when $|x|$ is large. }$$
By the definition of $ \Lambda_t(\cdot,u;\lambda{\cal V})$, we have
$-I_t(z;x,u)\leq \Lambda_t(x,u;\lambda{\cal V})-\lambda{\cal V}(z).$
Note that
\begin{equation}\label{useI}\Lambda_t(x,u;h)=\sup_{z\in{\bf X}}[h(z)-I_t(z;x,u)]\leq \sup_{z\in{\bf X}}[h(z)-\lambda{\cal V}(z)]+\Lambda_t(x,u;\lambda{\cal V}).\end{equation}
Therefore, $$\begin{array}{ll}\Lambda_t(x,\eta_t(x;h_2);h_1)&\!\!\!\displaystyle\leq \sup_{z\in{\bf X}}[h_1(z)-\lambda{\cal V}(z)]+\Lambda_t(x,\eta_t(x;h_2);\lambda{\cal V}) \\
\noalign{\smallskip}&\!\!\!\displaystyle\leq \sup_{z\in{\bf X}}[h_1(z)-\lambda{\cal V}(z)]+\sup_{u\in B_{t,\lambda}(x)}\Lambda_t(x,u;\lambda{\cal V})\end{array}$$
and for any $\lambda\in(0,\lambda_0)$, by Assumption (C),
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty}\frac{\Lambda_t(x,\eta_t(x;h_2);h_1)}{{\cal V}(x)}\leq \lambda K_1.$$
By the arbitrariness of $\lambda\in(0,\lambda_0)$, it follows that $\Lambda_t(\cdot,\eta_t(\cdot;h_2);h_1)\in B_{\cal V} ({\bf X})$ for any $h_1,h_2\in B_{\cal V} ({\bf X})$. Similarly, $\Lambda^\varepsilon_t(\cdot,\eta^\varepsilon_t(\cdot;h_2);h_1)\in B_{\cal V} ({\bf X})$.
\end{proof}
\section{Time-inconsistent Equilibrium}\label{sec:TE}
In this section, we will derive the time-inconsistent equilibrium strategy step by step. The section will be divided into several subsections.
\subsection{Optimal Control for 1-step Transition}
In this subsection, we review the 1-step optimal control problem with risk-sensitive cost. Consider $\{X^\varepsilon_1, X^\varepsilon_2\}$ with controlled transition probability
$$\mathbb{P}(X^\varepsilon_2=j|X^\varepsilon_1=i;u)=q_t^\varepsilon(j;i,u).$$
Let
$$\Lambda^\varepsilon (x,u;h)=\varepsilon\log\mathbb{E}\(\exp[\varepsilon^{-1}h(X^\varepsilon_2)]\big|X^\varepsilon_1=x;u\).$$
Given functions $\hat f :{\bf X}\times {\bf U}\mapsto{\bf R}$ and $\hat g:{\bf X}\mapsto{\bf R}$, define the cost functional
$$\hat J(x;\tilde u):=\varepsilon\log\mathbb{E}\(\exp\[\varepsilon^{-1}\big(\hat f(X^\varepsilon_1,u(X^\varepsilon_1))+\hat g(X^\varepsilon_2)\big)\]\big|X^\varepsilon_1=x\).$$
{\bf Problem-(CON)}: to find a $\tilde u^*\in{\cal U}$ such that
$$\hat J(x;\tilde u^*)=\inf_{ \tilde u\in{\cal U}}\hat J(x;\tilde u).$$
By the definition of $\Lambda^\varepsilon$, we have
$$\hat J(x;\tilde u)=\hat f(x,u(x))+\Lambda^\varepsilon(x,u(x);\hat g).$$
As a result,
$$\hat V(x)=\inf_{u\in{\bf U}}[\hat f(x,u)+\Lambda^\varepsilon(x,u;\hat g)]\text{ and } u^*(x)\in\mathop{\rm argmin}_{u\in{\bf U}}[\hat f(x,u)+\Lambda^\varepsilon(x,u;\hat g)].$$
Note that the optimal strategy $u^*(\cdot)$ might not be unique. The existence of $\tilde u^*=u^*(\cdot)$ is guaranteed by Assumptions (A) and (B).
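Problem-(CON) can be solved by direct enumeration when ${\bf X}$ and (a discretization of) ${\bf U}$ are finite. The sketch below uses an invented two-state model and a three-point control grid standing in for ${\bf U}$:

```python
import math

# A minimal instance of Problem-(CON) on an invented finite model: state
# space {0,1}, a finite control grid standing in for U, eps-independent
# transitions.  We compute hat J(x; u) = hat f(x,u) + Lam^eps(x,u; hat g)
# and minimize over the grid to obtain hat V and a feedback control u*.
eps = 0.25
U = (0.0, 0.5, 1.0)                         # control grid (placeholder)
q = lambda j, i, u: u if j == 1 else 1 - u  # P(X_2 = j | X_1 = i; u), invented
f_hat = lambda x, u: (u - 0.3) ** 2         # invented running cost
g_hat = {0: 0.0, 1: 1.0}                    # invented terminal cost

def Lam(x, u, h):
    return eps * math.log(sum(q(j, x, u) * math.exp(h[j] / eps) for j in (0, 1)))

def solve(x):
    costs = {u: f_hat(x, u) + Lam(x, u, g_hat) for u in U}
    u_star = min(costs, key=costs.get)
    return u_star, costs[u_star]            # (feedback control, value hat V(x))

print(solve(0))
```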
\vspace{0.2cm}
\subsection{Time-inconsistent Strategy}\label{eequilirium}
Now we are ready to introduce the recursion process of finding the time-inconsistent equilibria. We start with the last step first and move backward to the first step.
\vspace{0.2cm}
{\it $T$-th step strategy.} In the last step, the control is determined by solving a classical optimal control problem with discounting factor $\tau=T$.
{\bf Problem-$T$}: to find $\tilde u^{\varepsilon,*}_T\in{\cal U}$ such that
$$J^\varepsilon_{T,T}(x;\tilde u_{T}^{\varepsilon,*})=\inf_{\tilde u\in{\cal U}} J^\varepsilon_{T,T}(x;\tilde u).$$
By the definition of $\Lambda^\varepsilon_T$, one can see
$$J^\varepsilon_{T,T}(x;\tilde u)=f_{T,T}(x,u(x))+\Lambda_T^\varepsilon(x,u(x);g_T).$$
Thus the optimal control in this step is in the following feedback form
\begin{equation}\label{nequi}\begin{array}{ll} u^{\varepsilon,*}_T(x)&\!\!\!\displaystyle\in\mathop{\rm argmin}_{u\in {\bf U}}\left\{f_{T,T}(x,u)+\Lambda_T^\varepsilon(x,u;g_T)\right\}=\Box\eta_T^\varepsilon(x;g_T).\end{array}\end{equation}
By Assumptions (A) and (B), the optimal feedback control exists. The value function is
$$V^\varepsilon_{T}(x)=J^\varepsilon_{T,T}(x;\tilde u_{T}^{\varepsilon,*})=\inf_{u\in {\bf U}}\left\{f_{T,T}(x,u)+\Lambda_T^\varepsilon(x,u;g_T)\right\}={\cal A}_T^\varepsilon[g_T](x).$$
Since the minimum point may not be unique, let $\eta^\varepsilon_T(\cdot;g_T)$ be a choice of $\Box\eta_T^\varepsilon(\cdot;g_T)$. We choose
\begin{equation}\label{nequiunique}u_T^{\varepsilon,*}(x)=\eta^\varepsilon_T(x;g_T).\end{equation}
Given the optimal control we find this step, now for any $\tau\in\mathbb{T}$, let
\begin{equation}\label{ThetaN}\Theta^\varepsilon_{\tau,T}(x):=f_{\tau,T}(x,\eta^\varepsilon_T(x;g_T))+\Lambda_T^\varepsilon(x,\eta^\varepsilon_T(x;g_T);g_\tau)={\cal H}^\varepsilon_{\tau,T}[g_\tau](x,\eta^\varepsilon_T(x;g_T)).\end{equation}
It is easy to see that
\begin{equation}\label{ThetaJ}\Theta^\varepsilon_{\tau,T}(x)=J^\varepsilon_{\tau,T}(x;\eta^\varepsilon_T(\cdot;g_T)),\end{equation}
i.e. $\Theta^\varepsilon_{\tau,T}$ is the value of the cost functional at time $T$ if we use the discounting factor $\tau$ and the feedback control $\eta^\varepsilon_T(\cdot;g_T)$. Note that $$\Theta^\varepsilon_{T,T}(x)=\inf_{\tilde u\in{\cal U}} J^\varepsilon_{T,T}(x;\tilde u).$$
\vspace{0.2cm}
{\it $(T-1)$-th step strategy.} In the $(T-1)$th step, we know the $T$th-step strategy is $\tilde u_T^{\varepsilon,*}=\eta^\varepsilon_T(\cdot;g_T)$ defined by \eqref{nequiunique} under discounting factor $\tau=T$. In this step, however, the strategy is based on the new discounting factor $\tau=T-1$. Thus we solve the following optimal control problem.\smallskip
{\bf Problem-$(T-1)$}: to find $\tilde u^{\varepsilon,*}_{T-1}\in{\cal U}$ such that
$$J^\varepsilon_{T-1,T-1}(x;\tilde u_{T-1}^{\varepsilon,*}\oplus \tilde u_{T}^{\varepsilon,*})=\inf_{\tilde u\in{\cal U}} J^\varepsilon_{T-1,T-1}(x;\tilde u\oplus \tilde u^{\varepsilon,*}_T).$$
Note that
$$J^\varepsilon_{T-1,T-1}(x;\tilde u\oplus \tilde u^{\varepsilon,*}_T)=f_{T-1,T-1}(x,u(x))+\Lambda^\varepsilon_{T-1}(x,u(x);J^\varepsilon_{T-1,T}(\cdot;\tilde u_T^{\varepsilon,*}))$$
and by \eqref{ThetaJ}, $$\Theta^\varepsilon_{T-1,T}(x)=J^\varepsilon_{T-1,T}(x;\tilde u_T^{\varepsilon,*}).$$
Similarly we can take $\eta^\varepsilon_{T-1}(\cdot;\Theta^\varepsilon_{T-1,T})$, a possible choice of $\Box \eta^\varepsilon_{T-1}(\cdot;\Theta^\varepsilon_{T-1,T})$ and let
\begin{equation}\label{n-1equi}\begin{array}{ll} u^{\varepsilon,*}_{T-1}(x)&\!\!\!\displaystyle=\eta^\varepsilon_{T-1}(x;\Theta^\varepsilon_{T-1,T}).\end{array}\end{equation}
The value function
$$V^\varepsilon_{T-1}(x)=J^\varepsilon_{T-1,T-1}(x;\tilde u_{T-1}^{\varepsilon,*}\oplus \tilde u_{T}^{\varepsilon,*})=\inf_{u\in {\bf U}}\left\{f_{T-1,T-1}(x,u)+\Lambda_{T-1}^\varepsilon(x,u;\Theta^\varepsilon_{T-1,T})\right\}={\cal A}^\varepsilon_{T-1}[\Theta^\varepsilon_{T-1,T}](x).$$
Here $\Lambda_{T-1}^\varepsilon(x,u;\Theta^\varepsilon_{T-1,T})$ is well-defined since $\Theta_{T-1,T}^\varepsilon\in B_{\cal V} ({\bf X})$ by Lemma \ref{BBcV}.
Now for any $\tau\in\mathbb{T}$, let
\begin{equation}\label{ThetaN-1}\begin{array}{ll}\Theta^\varepsilon_{\tau,T-1}(x)&\!\!\!\displaystyle:=f_{\tau,T-1}(x,\eta^\varepsilon_{T-1}(x;\Theta^\varepsilon_{T-1,T}))+\Lambda_{T-1}^\varepsilon(x,\eta^\varepsilon_{T-1}(x;\Theta^\varepsilon_{T-1,T});\Theta^\varepsilon_{\tau,T})\\
\noalign{\smallskip}&\!\!\!\displaystyle={\cal H}^\varepsilon_{\tau,T-1}[\Theta^\varepsilon_{\tau,T}](x,\eta^\varepsilon_{T-1}(x;\Theta^\varepsilon_{T-1,T})).\end{array}\end{equation}
It is easy to see that
$$\Theta^\varepsilon_{\tau,T-1}(x)=J^\varepsilon_{\tau,T-1}(x;\tilde u_{T-1}^{\varepsilon,*}\oplus\tilde u_{T}^{\varepsilon,*}).$$
\vspace{0.2cm}
{\it $t$-th step strategy.} Before the $t$-th step, the strategies $\tilde u_{t+1,\mathbb{T}}^{\varepsilon,*}=\eta^\varepsilon_{t+1}(\cdot;\Theta^\varepsilon_{t+1,t+2})\oplus\cdots\oplus\eta^\varepsilon_{T}(\cdot;g_T)$ have already been identified. In this step, we use the new discounting factor $\tau=t$. Thus we solve the following optimal control problem.
{\bf Problem-$t$}: to find $\tilde u^{\varepsilon,*}_t\in{\cal U}$ such that
$$J^\varepsilon_{t,t}(x;\tilde u_{t}^{\varepsilon,*}\oplus \tilde u_{t+1,\mathbb{T}}^{\varepsilon,*})=\inf_{\tilde u\in{\cal U}} J^\varepsilon_{t,t}(x;\tilde u\oplus \tilde u_{t+1,\mathbb{T}}^{\varepsilon,*}).$$
Similarly, we take one of the possibly multiple choices,
\begin{equation}\label{kequi}\begin{array}{ll} u^{\varepsilon,*}_{t}(x)&\!\!\!\displaystyle=\eta^\varepsilon_{t}(x;\Theta^\varepsilon_{t,t+1})\end{array}\end{equation}
and the value function
$$V^\varepsilon_{t}(x)=J^\varepsilon_{t,t}(x;\tilde u_{t}^{\varepsilon,*}\oplus \tilde u_{t+1,\mathbb{T}}^{\varepsilon,*})=\inf_{u\in {\bf U}}\left\{f_{t,t}(x,u)+\Lambda_{t}^\varepsilon(x,u;\Theta^\varepsilon_{t,t+1})\right\}={\cal A}_t^\varepsilon[\Theta^\varepsilon_{t,t+1}](x).$$
Now for any $\tau\in\mathbb{T}$, let
\begin{equation}\label{Thetak}\Theta^\varepsilon_{\tau,t}(x):=f_{\tau,t}(x,\eta^\varepsilon_{t}(x;\Theta^\varepsilon_{t,t+1}))+\Lambda_{t}^\varepsilon(x,\eta^\varepsilon_{t}(x;\Theta^\varepsilon_{t,t+1});\Theta^\varepsilon_{\tau,t+1})={\cal H}^\varepsilon_{\tau,t}[\Theta^\varepsilon_{\tau,t+1}](x,\eta^\varepsilon_t(x;\Theta^\varepsilon_{t,t+1})).\end{equation}
It is easy to see that
$$\Theta^\varepsilon_{\tau,t}(x)=J^\varepsilon_{\tau,t}(x;\tilde u_t^{\varepsilon,*}\oplus\cdots\oplus\tilde u_T^{\varepsilon,*}).$$
\vspace{0.2cm}
By repeating this process recursively until the first step, we obtain a $T$-step strategy $\eta_\mathbb{T}^\varepsilon=\eta^\varepsilon_1\oplus\cdots\oplus\eta^\varepsilon_T$ and a sequence of functions $\{\Theta^\varepsilon_{\tau,t}:(\tau,t)\in\mathbb{T}\times\mathbb{T}\}$ by the following recursion,
\begin{equation}\label{HJBmain}\left\{\begin{array}{ll}\noalign{\smallskip}&\!\!\!\displaystyle\Theta^\varepsilon_{\tau,t}(x)={\cal H}_{\tau,t}^\varepsilon[\Theta^\varepsilon_{\tau,t+1}](x,\eta^\varepsilon_t(x;\Theta_{t,t+1}^\varepsilon)),\quad \tau,t\in\mathbb{T}\\
\noalign{\smallskip}&\!\!\!\displaystyle \eta^\varepsilon_t(\cdot;\Theta_{t,t+1}^\varepsilon)\in\Box\eta^\varepsilon_t(\cdot;\Theta_{t,t+1}^\varepsilon)\\
\noalign{\smallskip}&\!\!\!\displaystyle \Theta^\varepsilon_{\tau,T+1}(x)=g_\tau(x).\end{array}\right.\end{equation}
Similarly, we can construct a
$T$-step strategy $\eta_\mathbb{T}=\eta_1\oplus\cdots\oplus\eta_T$ and a sequence of functions $\{\Theta_{\tau,t}:(\tau,t)\in\mathbb{T}\times\mathbb{T}\}$ by the following recursion,
\begin{equation}\label{HJBmain-0}\left\{\begin{array}{ll}\noalign{\smallskip}&\!\!\!\displaystyle\Theta_{\tau,t}(x)={\cal H}_{\tau,t}[\Theta_{\tau,t+1}](x,\eta_t(x;\Theta_{t,t+1})),\quad \tau,t\in\mathbb{T}\\
\noalign{\smallskip}&\!\!\!\displaystyle \eta_t(\cdot;\Theta_{t,t+1})\in\Box\eta_t(\cdot;\Theta_{t,t+1})\\
\noalign{\smallskip}&\!\!\!\displaystyle \Theta_{\tau,T+1}(x)=g_\tau(x).\end{array}\right.\end{equation}
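For a finite (truncated) model, the backward construction \eqref{HJBmain-0} can be sketched in a few lines of Python. The data layout (dictionaries keyed by $(\tau,t,x,u)$) and the finite truncation of the state space are our own illustrative assumptions, not part of the paper:

```python
def backward_recursion(T, X, U, f, g, I):
    """Sketch of the recursion (HJBmain-0): build eta_t and Theta_{tau,t}
    backward in time. f[(tau,t,x,u)] is the running cost, g[(tau,x)] the
    terminal cost, I[(z,x,u)] the rate function of the transition kernel;
    Lambda_t(x,u;h) = max_z [h(z) - I(z;x,u)] on the truncated state set X."""
    TT = range(1, T + 1)
    # Terminal condition: Theta_{tau,T+1}(x) = g_tau(x)
    Theta = {(tau, T + 1, x): g[(tau, x)] for tau in TT for x in X}
    eta = {}

    def lam(tau, t, x, u):
        # Limit operator Lambda_t(x,u; Theta_{tau,t+1})
        return max(Theta[(tau, t + 1, z)] - I[(z, x, u)] for z in X)

    for t in reversed(TT):
        for x in X:
            # eta_t(x): minimizer of f_{t,t}(x,u) + Lambda_t(x,u;Theta_{t,t+1})
            u_star = min(U, key=lambda u: f[(t, t, x, u)] + lam(t, t, x, u))
            eta[(t, x)] = u_star
            # Theta_{tau,t}(x) = H_{tau,t}[Theta_{tau,t+1}](x, eta_t(x))
            for tau in TT:
                Theta[(tau, t, x)] = f[(tau, t, x, u_star)] + lam(tau, t, x, u_star)
    return eta, Theta
```

Note that the construction runs in reverse time, and each $\Theta_{\tau,t}$ is evaluated at the single control $\eta_t(x)$ selected under the diagonal discounting factor $\tau=t$, exactly as in the recursion above.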
\begin{remark}
{\rm (1) One can see that the construction of $\eta^\varepsilon_{\mathbb{T}}$ ($\eta_{\mathbb{T}}$, resp.) proceeds in reverse order. Moreover, if the choice $\eta_t^\varepsilon$ ($\eta_t$, resp.) changes, then $\eta_s^\varepsilon$ ($\eta_s$, resp.) for $s<t$ has to change correspondingly. \vspace{0.2cm}
(2) If $f_{\tau,t}$ and $g_\tau$ are independent of $\tau$, i.e., in the time-consistent case, then ${\cal H}^\varepsilon_{\tau,t}[h](x,u)={\cal A}^\varepsilon_{t}[h](x)$ for any $u\in\Box\eta^\varepsilon_t(x;\Theta^\varepsilon_{t,t+1})$. Thus $\Theta^\varepsilon_{\tau,t}=\Theta^\varepsilon_{t,t}$ for any $\tau\in\mathbb{T}$,
and the value function $V^\varepsilon_t(x)=\Theta^\varepsilon_{t,t}(x)$ satisfies the recursion
$$V^\varepsilon_t={\cal A}^\varepsilon_t[V^\varepsilon_{t+1}], \quad\text{with}\quad V^\varepsilon_{T+1}=g.$$
One can see that the Hamiltonian recursion is independent of the choice of the optimal control in each step now.
}
\end{remark}
\bigskip
Now we are ready to introduce our first main theorem.
\begin{theorem} Under Assumptions {\rm (A), (B)} and {\rm (C)}, the following statements hold.
{\rm (1)} For any choice of $\eta^\varepsilon_\mathbb{T}:=\eta^\varepsilon_1\oplus\cdots\oplus\eta^\varepsilon_T$ constructed in Section \ref{eequilirium},
the recursive sequence $\{\Theta^\varepsilon_{\tau,t}(\cdot):(\tau,t)\in\mathbb{T}\times\mathbb{T}\}$ from \eqref{HJBmain} is well-defined in $ B_{\cal V} ({\bf X})$. Moreover $\eta^\varepsilon_\mathbb{T}$ is a time-inconsistent $\varepsilon$-risk-sensitive equilibrium.
{\rm (2)} For any choice of $\eta_\mathbb{T}:=\eta_1\oplus\cdots\oplus\eta_T$ constructed in Section \ref{eequilirium},
the recursive sequence $\{\Theta_{\tau,t}(\cdot):(\tau,t)\in\mathbb{T}\times\mathbb{T}\}$ from \eqref{HJBmain-0} is well-defined in $ B_{\cal V} ({\bf X})$. Moreover $\eta_\mathbb{T}$ is a time-inconsistent risk-sensitive equilibrium.
{\rm (3)} Any time-inconsistent $\varepsilon$-risk-sensitive equilibrium $\eta^\varepsilon_\mathbb{T}$, coupled with $\Theta^\varepsilon_{\tau,t}(x)=J^\varepsilon_{\tau,t}(x;\eta^\varepsilon_{t,\mathbb{T}})$, solves \eqref{HJBmain}.
{\rm (4)} Any time-inconsistent risk-sensitive equilibrium $\eta_\mathbb{T}$, coupled with $\Theta_{\tau,t}(x)=J_{\tau,t}(x;\eta_{t,\mathbb{T}})$, solves \eqref{HJBmain-0}.
\end{theorem}
\begin{proof}
(1) and (2). By Lemma \ref{BBcV}, $\{\Theta^\varepsilon_{\tau,t}(\cdot):(\tau,t)\in\mathbb{T}\times\mathbb{T}\}$ and $\{\Theta_{\tau,t}(\cdot):(\tau,t)\in\mathbb{T}\times\mathbb{T}\}$ are well-defined in $ B_{\cal V} ({\bf X})$.
By the construction process of $\eta^\varepsilon_\mathbb{T}$, one can see that
$$\begin{array}{ll}\Theta^\varepsilon_{t,t}(x)&\!\!\!\displaystyle={\cal H}^\varepsilon_{t,t}[\Theta^\varepsilon_{t,t+1}](x,\eta^\varepsilon_t(x;\Theta^\varepsilon_{t,t+1}))\\
\noalign{\smallskip}&\!\!\!\displaystyle={\cal A}^\varepsilon_t[\Theta^\varepsilon_{t,t+1}](x)\\
\noalign{\smallskip}&\!\!\!\displaystyle=\inf_{u\in{\bf U}}[f_{t,t}(x,u)+\Lambda_t^\varepsilon(x,u;\Theta_{t,t+1}^\varepsilon)]\end{array}$$
and $\Theta^\varepsilon_{t,t}(x)=J^\varepsilon_{t,t}(x;\tilde u^{\varepsilon,*}_{t,\mathbb{T}})$.
The optimality \eqref{opt1} holds directly. Thus $\eta^\varepsilon_\mathbb{T}$ is a time-inconsistent $\varepsilon$-risk-sensitive equilibrium. A similar argument applies to $\eta_\mathbb{T}$ as well.
(3) and (4). If $\eta_\mathbb{T}$ is a time-inconsistent risk-sensitive equilibrium strategy, by the optimality \eqref{opt2}, $$u_t^*(\cdot)=\eta_t(\cdot;\Theta_{t,t+1})\in\Box\eta_t(\cdot;\Theta_{t,t+1}).$$
By $\Theta_{\tau,t}(x)=J_{\tau,t}(x;\eta_{t,\mathbb{T}})$, it is easy to see that
$$\Theta_{\tau,t}(x)={\cal H}_{\tau,t}[\Theta_{\tau,t+1}](x,\eta_t(x;\Theta_{t,t+1})),\quad\tau,t\in\mathbb{T}.$$
Thus $\{\eta_t,\Theta_{\tau,t}\}$ solves \eqref{HJBmain-0}. Similar results hold for $\eta_\mathbb{T}^\varepsilon$.
\end{proof}
\section{The Convergence of $\varepsilon$-equilibria}\label{sec:con}
In this section, we focus on the convergence of $\varepsilon$-equilibria as $\varepsilon\rightarrow 0^+$, i.e., whether the solutions of \eqref{HJBmain} converge to some solution of \eqref{HJBmain-0} as $\varepsilon\rightarrow 0^+$. We need the following two lemmas.
\begin{lemma}\label{uniformconvergencelemma}Under Assumption {\rm (A)}, if $\{h^\varepsilon\}\in \mathscr{B}_{\cal V}$ and $h^\varepsilon\rightarrow h$ pointwise,
then $\Lambda_t^\varepsilon(x,u;h^\varepsilon)$ converges to $\Lambda_t(x,u;h)$ uniformly on any compact set of ${\bf U}$.
\end{lemma}
\begin{proof}
Let $u^\varepsilon\rightarrow u$. Without loss of generality, we assume that the uniform lower bound of $h^\varepsilon$ is 1, and write $\hat h=\sup_\varepsilon h^\varepsilon$ and $C_m=\sup_{x>m}(\hat h-\lambda_0{\cal V})$. It is easy to see that
$\lim_{m\rightarrow\infty}C_m=-\infty.$ Let $$D_m:=\mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0^+}\varepsilon\log\(\sum_{z>m}\exp\left\{\varepsilon^{-1} \hat h(z)\right\}q^\varepsilon_{t}(z;x,u^\varepsilon) \).$$
Note that
\begin{equation}\label{uniforminter-0}\begin{array}{ll}\displaystyle \varepsilon\log\(\sum_{z\geq m }\exp\left\{\varepsilon^{-1} \hat h(z)\right\}q^\varepsilon_{t}(z;x,u^\varepsilon) \)&\!\!\!\displaystyle\leq \varepsilon\log\(\sum_{z\geq m }\exp\left\{\varepsilon^{-1} \lambda_0{\cal V}(z)\right\}q^\varepsilon_{t}(z;x,u^\varepsilon) \)+C_m\\
\noalign{\smallskip}&\!\!\!\displaystyle
\leq \Lambda_t^\varepsilon(x,u^\varepsilon;\lambda_0{\cal V})+C_m. \end{array}\end{equation}
%
Thus we have
\begin{equation}\label{uniforminter}\lim_{m\rightarrow\infty}D_m=-\infty. \end{equation}
Note that for any $m>0$, we have
\begin{equation}\label{uniformboundh}\mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\sup_{|x|\leq m}|h^\varepsilon-h|=0.\end{equation}
By \eqref{uniformboundh},
we have
$$\begin{array}{ll}\noalign{\smallskip}&\!\!\!\displaystyle\mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\Lambda_t^\varepsilon(x,u^\varepsilon;h^\varepsilon)\\
\noalign{\smallskip}&\!\!\!\displaystyle \quad=\mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\varepsilon\log\(\sum_{z\in{\bf X}}\exp\{\varepsilon^{-1} h^\varepsilon(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)\)\\
\noalign{\smallskip}&\!\!\!\displaystyle \quad\leq \mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\varepsilon\log\(\sum_{z\leq m}\exp\left\{\varepsilon^{-1} \sup_{|z|\leq m}|h^\varepsilon-h|\right\}\exp\{\varepsilon^{-1} h(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)+ \sum_{z>m}\exp\{\varepsilon^{-1} \hat h(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)\)\\
\noalign{\smallskip}&\!\!\!\displaystyle\leq \mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\(\sup_{|z|\leq m}|h^\varepsilon-h|+\Lambda_t(x,u;h)\)\bigvee \mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\varepsilon\log\(\sum_{z>m}\exp\{\varepsilon^{-1} \hat h(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)\)\\
\noalign{\smallskip}&\!\!\!\displaystyle\leq \Lambda_t(x,u;h)\vee D_m.\end{array}$$
On the other hand,
$$\begin{array}{ll}\noalign{\smallskip}&\!\!\!\displaystyle\mathop{\underline{\rm lim}}_{\varepsilon\rightarrow0+}\Lambda_t^\varepsilon(x,u^\varepsilon;h^\varepsilon)\\
\noalign{\smallskip}&\!\!\!\displaystyle \quad\geq\mathop{\underline{\rm lim}}_{\varepsilon\rightarrow0+}\varepsilon\log\(\sum_{z\leq m}\exp\{\varepsilon^{-1} h^\varepsilon(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)\)\\
\noalign{\smallskip}&\!\!\!\displaystyle \quad\geq \mathop{\underline{\rm lim}}_{\varepsilon\rightarrow0+}\varepsilon\log\(\sum_{z\leq m}\exp\left\{-\varepsilon^{-1} \sup_{|z|\leq m}|h^\varepsilon-h|\right\}\exp\{\varepsilon^{-1} h(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)\)\\
\noalign{\smallskip}&\!\!\!\displaystyle \quad\geq -\mathop{\overline{\rm lim}}_{\varepsilon\rightarrow0+}\sup_{|z|\leq m}|h^\varepsilon-h|+\mathop{\underline{\rm lim}}_{\varepsilon\rightarrow0+}\varepsilon\log\(\sum_{z\in {\bf X}}\exp\{\varepsilon^{-1} h(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)-\sum_{z> m}\exp\{\varepsilon^{-1} \hat h(z)\}q^\varepsilon_{t}(z;x,u^\varepsilon)\)\\
\noalign{\smallskip}&\!\!\!\displaystyle\quad\geq \Lambda_t(x,u;h),\quad\text{whenever } D_m<\Lambda_t(x,u;h).\end{array}$$
By the arbitrariness of $m$ and \eqref{uniforminter}, it follows that $$\lim_{\varepsilon\rightarrow0+}\Lambda_t^\varepsilon(x,u^\varepsilon;h^\varepsilon)=\Lambda_t(x,u;h).$$
Since the convergent sequence $u^\varepsilon\rightarrow u$ is arbitrary, this is equivalent to
\begin{equation}\label{uniformconvergen h}\lim_{\varepsilon\downarrow0}\Lambda_t^\varepsilon(x,u;h^\varepsilon)=\Lambda_t(x,u;h),\quad\text{ uniformly on any compact set of }{\bf U}.\end{equation}
\end{proof}
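The mechanism behind the lemma above is the Laplace (log-sum-exp) asymptotics: when $q^\varepsilon_t(z;x,u)\approx e^{-I(z;x,u)/\varepsilon}$, the quantity $\varepsilon\log\sum_z e^{\varepsilon^{-1}h(z)}q^\varepsilon_t(z;x,u)$ converges to $\sup_z[h(z)-I(z;x,u)]$. A small numerical check on finite toy data of our own (normalization of the kernel is ignored since it vanishes in the limit):

```python
import math

def log_sum_exp(vals):
    # Numerically stable log(sum(exp(v))) to avoid overflow for small eps
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def Lambda_eps(eps, h, I):
    # eps * log sum_z exp(h(z)/eps) * exp(-I(z)/eps), kernel left unnormalized
    return eps * log_sum_exp([(h[z] - I[z]) / eps for z in range(len(h))])

h = [0.3, 1.0, -0.5]     # toy test function
I = [0.0, 0.4, 0.1]      # toy rate function
limit = max(h[z] - I[z] for z in range(3))   # sup_z [h(z) - I(z)] = 0.6
for eps in (1.0, 0.1, 0.01):
    print(eps, Lambda_eps(eps, h, I))        # approaches 0.6 as eps -> 0
```

The truncation argument in the proof handles exactly the failure mode this toy ignores: on an infinite state space the tail of the sum must be controlled by the growth condition $\hat h\leq\lambda_0{\cal V}+C$.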
The following lemma concerns a stability result for the Hamiltonians.
\begin{lemma}\label{strategiesconvergence}
Under Assumptions {\rm (A), (B)} and {\rm (C)}, the following statements hold.\smallskip
{\rm (1)} Suppose $\{h^\varepsilon\}\in \mathscr{B}_{\cal V}$ and $h^\varepsilon\rightarrow h$ pointwise. Then $\{\eta^\varepsilon_t(\cdot;h^\varepsilon)\}_{0<\varepsilon<\varepsilon_0}$ is compact in the sense of pointwise convergence, and the limit of any convergent subsequence (as $\varepsilon\rightarrow0$) belongs to $\Box\eta_t(\cdot;h)$. \smallskip
{\rm (2)} Let $\{h_1^\varepsilon\},\{h_2^\varepsilon\}\in \mathscr{B}_{\cal V} $ with $h_1^\varepsilon\rightarrow h_1$ and $h_2^\varepsilon\rightarrow h_2$ pointwise. For any convergent subsequence $\{\eta^{\varepsilon_n}_t(\cdot;h_2^{\varepsilon_n})\}$ ($\varepsilon_n\rightarrow0^+$) with limit $\eta_t^0(\cdot;h_2)$,
\begin{equation}\label{Hconvergence}\lim_{n\rightarrow\infty}{\cal H}^{\varepsilon_n}_{\tau,t}[h_1^{\varepsilon_n}](x;\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n}))={\cal H}_{\tau,t}[h_1](x;\eta_t^0(x;h_2)) \text{ for any }x\in{\bf X}.\end{equation}
Moreover
$\{{\cal H}^{\varepsilon}_{\tau,t}[h_1^{\varepsilon}](\cdot;\eta^{\varepsilon}_t(\cdot;h_2^{\varepsilon}))\}\in \mathscr{B}_{\cal V} .$
\end{lemma}
\begin{proof}(1)
Recall $$\Box\eta_t^\varepsilon(x;h^\varepsilon)=\mathop{\rm argmin}_{u\in{\bf U}}[f_{t,t}(x,u)+\Lambda_t^\varepsilon(x,u;h^\varepsilon)].$$
Let $$B_t(x):=\{u\in{\bf U}:f_{t,t}(x,u)\leq f_{t,t}(x,u_0)+\sup_\varepsilon\Lambda_t^\varepsilon(x,u_0;\lambda{\cal V})+\sup_{{\bf X}}[\sup_\varepsilon h^\varepsilon-\lambda{\cal V}]-\inf_{\varepsilon} \inf_{\bf X} h^\varepsilon\}$$
By \eqref{uniformconvergen h} and \eqref{heun}, for any fixed $x$, we can see that $\Box\eta_t^\varepsilon(x;h^\varepsilon)\subset B_t(x)$ and $B_t(x)$ is compact. Thus for any sequence of choices $\{\eta_t^{\varepsilon_n}(x;h^{\varepsilon_n})\}$ ($\varepsilon_n\rightarrow0$), there exists a convergent subsequence with limit $\eta_t^0(x;h)$.
Note that
$$|{\cal A}^\varepsilon_{t}[h^\varepsilon](x)-{\cal A}_{t}[h](x)|\leq \sup_{u\in B_t(x)}|\Lambda_t^\varepsilon(x,u;h^\varepsilon)-\Lambda_t(x,u;h)|.$$
Therefore by \eqref{uniformconvergen h},
$$\lim_{n\rightarrow\infty}{\cal A}^{\varepsilon_n}_{t}[h^{\varepsilon_n}](x)={\cal A}_{t}[h](x).$$
Moreover,
$$\begin{array}{ll}\displaystyle{\cal A}_{t}[h](x)&\!\!\!\displaystyle=\lim_{n\rightarrow\infty}{\cal A}^{\varepsilon_n}_{t}[h^{\varepsilon_n}](x)\\
\noalign{\smallskip}&\!\!\!\displaystyle =\lim_{n\rightarrow\infty}[f_{t,t}(x,\eta_t^{\varepsilon_n}(x;h^{\varepsilon_n}))-f_{t,t}(x,\eta_t(x;h))+\Lambda^{\varepsilon_n}_{t}(x,\eta_t^{\varepsilon_n}(x;h^{\varepsilon_n});h^{\varepsilon_n})-\Lambda_{t}(x,\eta_t(x;h);h)]\\
\noalign{\smallskip}&\!\!\!\displaystyle\quad+f_{t,t}(x,\eta_t(x;h))+\Lambda_{t}(x,\eta_t(x;h);h)\\
\noalign{\smallskip}&\!\!\!\displaystyle=f_{t,t}(x,\eta_t(x;h))+\Lambda_{t}(x,\eta_t(x;h);h).\end{array}$$
The last step holds by \eqref{uniformconvergen h} and the continuity of $f_{t,t}$. Thus $\eta_t(x;h)$ is a minimum point of $u\mapsto f_{t,t}(x,u)+\Lambda_t(x,u;h)$ for fixed $x\in{\bf X}$.
Since ${\bf X}$ has only countably many states, by the classical diagonalization method one can extract a convergent subsequence $\eta_t^{\varepsilon_n}$ such that the convergence holds for every $x\in{\bf X}$, i.e.,
$$\lim_{\varepsilon_n\rightarrow 0}\eta_t^{\varepsilon_n}(x;h^{\varepsilon_n})=\eta_t(x;h),\quad\text{for any }x\in{\bf X}.$$
(2)
Note that $$\begin{array}{ll}{\cal H}^{\varepsilon_n}_{\tau,t}[h_1^{\varepsilon_n}](x;\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n}))&\!\!\!\displaystyle=f_{\tau,t}(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n}))+\Lambda_t^{\varepsilon_n}(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1^{\varepsilon_n})\\
\noalign{\smallskip}&\!\!\!\displaystyle =f_{\tau,t}(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n}))+\Lambda_t(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1)\\
\noalign{\smallskip}&\!\!\!\displaystyle\quad+\Lambda_t^{\varepsilon_n}(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1^{\varepsilon_n})-\Lambda_t(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1)\end{array}$$
Since $f_{\tau,t}$ is continuous,
$$\lim_{n\rightarrow \infty}f_{\tau,t}(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n}))=f_{\tau,t}(x,\eta_t(x;h_2)).$$
By (A2),
$$\lim_{n\rightarrow\infty}\Lambda_t(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1)=\Lambda_t(x,\eta_t(x;h_2);h_1).$$
By \eqref{uniformconvergen h},
$$\begin{array}{ll}&\!\!\!\displaystyle\lim_{n\rightarrow \infty}|\Lambda_t^{\varepsilon_n}(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1^{\varepsilon_n})-\Lambda_t(x,\eta^{\varepsilon_n}_t(x;h_2^{\varepsilon_n});h_1)|\\
\noalign{\smallskip}&\!\!\!\displaystyle\quad\leq \lim_{n\rightarrow \infty}\sup_{u\in B_t(x)}|\Lambda_t^{\varepsilon_n}(x,u;h_1^{\varepsilon_n})-\Lambda_t(x,u;h_1)|=0.\end{array}$$
Therefore \eqref{Hconvergence} holds.
It is easy to see that $\{{\cal H}^{\varepsilon}_{\tau,t}[h_1^{\varepsilon}](x;\eta^{\varepsilon}_t(x;h_2^{\varepsilon}))\}_\varepsilon$ is uniformly bounded below. Now we prove that $\sup_\varepsilon{\cal H}^{\varepsilon}_{\tau,t}[h_1^{\varepsilon}](x;\eta^{\varepsilon}_t(x;h_2^{\varepsilon}))\in B_{\cal V} ({\bf X})$.\smallskip
By (A3), $\eta_t^\varepsilon(x;h^\varepsilon_2)\in B'_t(x)$
where $$B'_t(x):=\{u\in{\bf U}:f_{t,t}(x,u)\leq f_{t,t}(x,u_0)+\sup_\varepsilon\Lambda_t^\varepsilon(x,u_0;\lambda{\cal V})+\sup_{{\bf X}}[\sup_\varepsilon h_2^\varepsilon-\lambda{\cal V}]-\inf_{\varepsilon} \inf_{\bf X} h_2^\varepsilon\}$$ and $B'_t(x)\subset B_{t,\lambda}(x)$ when $|x|$ is large, where $B_{t,\lambda}(x)$ is defined from Assumption (C) as $$B_{t,\lambda}(x)=\{u\in{\bf U}:f_{t,t}(x,u)\leq \lambda {\cal V}(x) \}.$$
Simple calculation yields
$$\begin{array}{ll}{\cal H}^{\varepsilon}_{\tau,t}[h_1^{\varepsilon}](x;\eta^{\varepsilon}_t(x;h_2^{\varepsilon}))&\!\!\!\displaystyle\leq \sup_{u\in B_{t,\lambda}(x)}[f_{\tau,t}(x,u)+\Lambda_t^\varepsilon(x,u;h_1^\varepsilon)]\\
\noalign{\smallskip}&\!\!\!\displaystyle \leq \sup_{u\in B_{t,\lambda}(x)}\(f_{\tau,t}(x,u)+\Lambda_t^\varepsilon(x,u;\lambda{\cal V})+\sup_{z}[h_1^\varepsilon(z)-\lambda{\cal V}(z)]\)\end{array}$$
By Assumption (B),
$$\mathop{\overline{\rm lim}}_{|x|\rightarrow\infty }\frac1{{\cal V}(x)}\sup_\varepsilon{\cal H}^{\varepsilon}_{\tau,t}[h_1^{\varepsilon}](x;\eta^{\varepsilon}_t(x;h_2^{\varepsilon}))\leq K_1\lambda.$$
By the arbitrariness of $\lambda$, we have $\{{\cal H}^{\varepsilon}_{\tau,t}[h_1^{\varepsilon}](x;\eta^{\varepsilon}_t(x;h_2^{\varepsilon}))\}\in \mathscr{B}_{\cal V} $.
\end{proof}
Now we are ready to establish the convergence of time-inconsistent $\varepsilon$-equilibria to time-inconsistent equilibrium as $\varepsilon\rightarrow0^+$.
\begin{theorem}Under Assumptions {\rm (A),~(B)} and {\rm (C)},
as $\varepsilon\rightarrow 0^+$, the sequence of time-inconsistent $\varepsilon$-equilibria $\{\eta_\mathbb{T}^\varepsilon\}$ is compact (in pointwise convergence sense) and the limit $\eta_\mathbb{T}$ of any convergent subsequence $\{\eta^{\varepsilon_n}_\mathbb{T}\}$ is a time-inconsistent equilibrium strategy. At the same time, $\{\Theta^{\varepsilon_n}_{\tau,t}\}$ defined in \eqref{opt1} using $\eta^{\varepsilon_n}_\mathbb{T}$ converges to $\{\Theta_{\tau,t}\}$ defined in \eqref{opt2} using $\eta_\mathbb{T}$ in $( B_{\cal V} ({\bf X}),w)$.
\end{theorem}
\begin{proof}
At the $T$-th step, we take a subsequence
$\{\eta_T^{\varepsilon_n}(\cdot;g_T)\}$ with limit $\eta_T$. Note that
$$\Theta^{\varepsilon_n}_{\tau,T}(x)={\cal H}_{\tau,T}^{\varepsilon_n}[g_\tau](x;\eta_T^{\varepsilon_n}(x;g_T)).$$
By Lemma \ref{strategiesconvergence}, we know that
$\Theta_{\tau,T}^{\varepsilon_n}(x)$ is uniformly bounded below and $\sup_{\varepsilon_n}\Theta^{\varepsilon_n}_{\tau,T}\in B_{{\cal V}}({\bf X})$, with limit $\Theta_{\tau,T}^{0}$ in the pointwise sense. By Lemma \ref{lemmapoint}, $\Theta_{\tau,T}^{\varepsilon_n}$ converges to $\Theta_{\tau,T}^{0}$ in $( B_{\cal V} ({\bf X}),w)$.
At the $(T-1)$-th step, we take a subsequence of $\{\eta_{T-1}^{\varepsilon_n}(\cdot;\Theta^{\varepsilon_n}_{T-1,T})\}$ (still written as the same sequence) with limit $\eta_{T-1}$.
Note that
$$\Theta_{\tau,T-1}^{\varepsilon_n}(x)={\cal H}_{\tau,T-1}^{\varepsilon_n}[\Theta^{\varepsilon_n}_{\tau,T}](x;\eta_{T-1}^{\varepsilon_n}(x;\Theta^{\varepsilon_n}_{T-1,T})).$$
By Lemma \ref{strategiesconvergence},
$\{\Theta_{\tau,T-1}^{\varepsilon_n}\}\in \mathscr{B}_{\cal V} $ and it converges to $\Theta_{\tau,T-1}(x)$ pointwise as $n\rightarrow\infty$. Thus $\Theta_{\tau,T-1}^{\varepsilon_n}$ converges to $\Theta_{\tau,T-1}$ in $( B_{\cal V} ({\bf X}),w)$. We repeat this process until the first step. Then the proof is complete.
\end{proof}
The following corollary is obvious.
\begin{corollary}
{\rm (1)} Under Assumptions {\rm (A),(B)} and {\rm (C)}, if the solution $\Theta$ of \eqref{opt2} is unique, then $\Theta_{\tau,t}^\varepsilon$ converges to $\Theta_{\tau,t}$ in $( B_{\cal V} ({\bf X}),w)$, i.e.
$$\lim_{\varepsilon\rightarrow0^+}w(\Theta_{\tau,t}^\varepsilon,\Theta_{\tau,t})=0.$$
{\rm (2)} Under Assumptions {\rm (A)} and {\rm (B)}, if the cost functional is independent of the discounting factor $\tau$, i.e., in the time-consistent case, the solution $\Theta$ ($\Theta^\varepsilon$ resp.) is independent of the choices $\eta$ ($\eta^\varepsilon$ resp.) as well. As a result, the solution $\Theta$ of \eqref{opt2} is unique and $V_t^\varepsilon=\Theta^\varepsilon_{t,t}$ converges to $V_t=\Theta_{t,t}$ in $( B_{\cal V} ({\bf X}),w)$ for each fixed $t$.
\end{corollary}
A similar result for stochastic diffusions in $\mathbb{R}^d$ is known as the vanishing viscosity procedure in \cite{Flem2006}. In our case, however, we deal with a discrete-time, countable-state MDP. Compared to \cite{Flem2006}, our result has its own interesting features because our problem is time-inconsistent.
\section{ Illustrative Examples}\label{sec:exp}
In this section, we present two illustrative examples. First, we give an example in which the assumptions can be verified.
\begin{example}\label{example1} {\rm
Consider a sequence of random variables defined by $$X^\varepsilon_{t+1}=X^\varepsilon_t+u+\xi^\varepsilon_t,$$
where the control $u$ is taken in ${\bf U}=\{-1,1\}$ and the distribution of $\xi_t^\varepsilon$ is
$$\mathbb{P}(\xi^\varepsilon_t=x)=\left\{\begin{array}{ll}\kappa\exp\{-\varepsilon^{-1}|x|^2\},\qquad &\!\!\!\displaystyle\text{if } x\neq 0\\ [2mm]
\noalign{\smallskip}\displaystyle 1-\kappa\sum_{z\neq 0}\exp\{-\varepsilon^{-1}|z|^2\},\qquad &\!\!\!\displaystyle\text{if } x=0.\end{array} \right.$$
for some small $\kappa>0$.
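The large-deviation scaling of this noise can be checked directly: $\varepsilon\log\mathbb{P}(\xi^\varepsilon_t=x)\rightarrow -|x|^2$ for $x\neq 0$. A quick numerical verification on a truncated support (the truncation range and the value of $\kappa$ below are illustrative choices of our own):

```python
import math

def xi_pmf(eps, kappa, support):
    # P(xi = x) per the example: kappa * exp(-|x|^2 / eps) off zero,
    # with the remaining mass placed at x = 0
    pmf = {x: kappa * math.exp(-x * x / eps) for x in support if x != 0}
    pmf[0] = 1 - sum(pmf.values())
    return pmf

support = range(-50, 51)
for eps in (0.5, 0.1, 0.02):
    pmf = xi_pmf(eps, 0.1, support)
    # the rate at x = 3: eps * log P(xi = 3) should approach -9
    print(eps, eps * math.log(pmf[3]))
```

The prefactor $\kappa$ contributes only $\varepsilon\log\kappa\rightarrow0$, which is why the rate function $I(z;x,u)=(z-x-u)^2$ below does not depend on it.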
Simple calculation yields that
$$I(z;x,u)=(z-x-u)^2\text{ and
}\Lambda(x,u;h)=\sup_{z\in{\bf X}}[h(z)-(z-x-u)^2].$$
Let ${\cal V}(x)=|x|^2$. Taking $\varepsilon_0$ small, (A1) holds. Since ${\bf U}$ is compact, (A2) holds. Note that
$$\begin{array}{ll}\Lambda^\varepsilon(x,u;\lambda{\cal V})&\!\!\!\displaystyle=\varepsilon\log\left\{\kappa\sum_{z\neq 0}\exp\{\varepsilon^{-1}(\lambda|x+z|^2-|z|^2)\}+\exp\{\varepsilon^{-1}\lambda|x|^2\}(1-\kappa\sum_{z\neq 0}\exp\{-\varepsilon^{-1}|z|^2\})\right\}\\
\noalign{\smallskip}&\!\!\!\displaystyle\leq \max\(\sup_{z}[\lambda |x+z|^2-|z|^2],\lambda|x|^2\)\leq \lambda K_1 {\cal V}(x).\end{array}$$
Therefore (A3) holds.
Let $f_{\tau,t}(\cdot,u),g_\tau(\cdot)\in B_{\cal V} ({\bf X})$. Since ${\bf U}$ is compact, Assumptions (B) and (C) hold trivially because of (A3).
Because the infimum or supremum can be attained, simple calculation shows that the Hamiltonians are
$$\left\{\begin{array}{ll}\noalign{\smallskip}&\!\!\!\displaystyle {\cal H}_{\tau,t}[h](x,u)=f_{\tau,t}(x,u)+\max_{z\in{\bf X}}[h(z)-(z-x-u)^2]\\
\noalign{\smallskip}&\!\!\!\displaystyle {\cal A}_t[h](x)=\min_{u\in{\bf U}}\(f_{t,t}(x,u)+\max_{z\in{\bf X}}[h(z)-(z-x-u)^2]\).\end{array}\right.$$
We can easily get the recursion sequence defined in \eqref{HJBmain-0}.
If $f$ and $g$ are independent of the discounting factor $\tau$, then the value function $V_t$ satisfies
$$\left\{\begin{array}{ll}&\!\!\!\displaystyle V_t(x)=\min_{u\in{\bf U}}\[f_{t}(x,u)+\max_{z\in{\bf X}}[V_{t+1}(z)-(z-x-u)^2]\],\\
\noalign{\smallskip}&\!\!\!\displaystyle V_{T+1}(x)=g(x).\end{array}\right.$$
This is the time-consistent case, which is equivalent to a discrete min-max control problem.\smallskip
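This min-max recursion can be solved by straightforward backward induction. Below is a Python sketch on a truncated state set; the specific costs, terminal value, and truncation range are illustrative assumptions, not taken from the paper:

```python
def minmax_dp(T, states, f, g):
    """Backward induction for
    V_t(x) = min_{u in {-1,1}} [ f(t,x,u) + max_z (V_next(z) - (z-x-u)**2) ]
    on a truncated state set, with terminal value V_{final} = g."""
    V = {x: g(x) for x in states}          # terminal condition
    policy = {}
    for t in range(T, 0, -1):
        newV = {}
        for x in states:
            best_val, best_u = None, None
            for u in (-1, 1):
                # inner max over the successor state z (the adversary's move)
                val = f(t, x, u) + max(V[z] - (z - x - u) ** 2 for z in states)
                if best_val is None or val < best_val:
                    best_val, best_u = val, u
            newV[x] = best_val
            policy[(t, x)] = best_u
        V = newV
    return V, policy

# Illustrative instance: running cost |x|, terminal cost x^2
states = list(range(-5, 6))
V1, pol = minmax_dp(3, states, lambda t, x, u: abs(x), lambda x: x * x)
```

The truncation is only for computability; on the full countable space the inner maximum is finite because the quadratic penalty $(z-x-u)^2$ dominates any $V\in B_{\cal V}({\bf X})$ with ${\cal V}(x)=|x|^2$.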
Now we assume that the cost $c_{\tau,t}$ is exponentially discounted, i.e., $c_{\tau,t}(x,u)=\lambda^{t-\tau}c(x,u)$ for some $\lambda\in(0,1)$. Suppose the problem were time-consistent with a (global) optimal strategy $\pi$. Due to the non-linear structure of the cost functional, in general
$J_{t-1,t}(x;\pi)\neq \lambda J_{t,t}(x;\pi)$. For example, one can see that
$J_{\tau,T}(x)=\lambda^{T-\tau}g(x)$ and
$$\begin{array}{ll}\lambda J_{T-1,T-1}(x;\pi)&\!\!\!\displaystyle=\lambda\(c(x,u)+\sup_{z}\{\lambda g(z)-(z-x-u)^2\}\) \\
\noalign{\smallskip}&\!\!\!\displaystyle\neq \lambda c(x,u)+\sup_{z}\{\lambda^2 g(z)-(z-x-u)^2\}=J_{T-2,T-1}(x;\pi).\end{array} $$
That is,
$$\begin{array}{ll}V_{t-1}(x)&\!\!\!\displaystyle=\inf_{u}\(c(x,u)+\sup_z\big[ J_{t-1,t}(z;\pi)-(z-x-u)^2\big] \)\\
\noalign{\smallskip}&\!\!\!\displaystyle\neq\inf_{u}\(c(x,u)+\sup_z\big[ \lambda J_{t,t}(z;\pi)-(z-x-u)^2\big] \)\\
\noalign{\smallskip}&\!\!\!\displaystyle =\inf_{u}\(c(x,u)+\sup_z\big[ \lambda V_t(z)-(z-x-u)^2\big] \). \end{array}$$
This contradicts the global optimality of $\pi$ that we supposed. Thus it is impossible to find a globally optimal strategy even if the discounting is exponential. This matches what we claimed in the introduction and motivates us to investigate time-inconsistent problems.
}
\end{example}
\begin{example}
{\rm In a regime-switching financial model, the stock market may switch between two states (i.e., bull and bear) under some probability law. We assume that the investors' actions can affect the transition of the stock market between different states, but might bring some serious consequences with small probability. For example, due to the actions $u$ taken by investors, there appears a third state (i.e., crisis) with a rare occurrence rate which is proportional to the parameter $\varepsilon$. When $\varepsilon$ is small, the rare occurrence may have a negligible effect on general investors, but a strong effect on risk-sensitive ones.
Let $X^u_t$, the state of the stock market, be a controlled Markov chain with state space $\{1,2,3\}$. The transition probabilities are
$$q^\varepsilon(1;1,u)=1-p_1(u)-e^{-\frac {\lambda(1,u)}{\varepsilon}},\quad q^\varepsilon(2;1,u)=p_1(u)-e^{-\frac {\lambda(1,u)}{\varepsilon}},\quad q^\varepsilon(3;1,u)=2e^{-\frac {\lambda(1,u)}{\varepsilon}},$$
$$q^\varepsilon(1;2,u)=p_2(u)-e^{-\frac {\lambda(2,u)}{\varepsilon}},\quad q^\varepsilon(2;2,u)=1-p_2(u)-e^{-\frac {\lambda(2,u)}{\varepsilon}},\quad q^\varepsilon(3;2,u)=2e^{-\frac {\lambda(2,u)}{\varepsilon}},$$
$$q^\varepsilon(1;3,u)=1-p_3(u)-e^{-\frac {\lambda(3,u)}{\varepsilon}},\quad q^\varepsilon(2;3,u)=p_3(u)-e^{-\frac {\lambda(3,u)}{\varepsilon}},\quad q^\varepsilon(3;3,u)=2e^{-\frac {\lambda(3,u)}{\varepsilon}},$$
where $\lambda(i,u)\geq 0$ and $u\in\{0,1\}$ represents whether the investor takes action on the system. From the transition law, the first two states are typical and the third one occurs rarely. If $\lambda(i,0)=0$ and $\lambda(1,1),\lambda(2,1)>0$, the rare occurrence of state 3 is due to the investor's action. Now suppose that risk-sensitive decision-makers make their decisions with a cost functional similar to \eqref{cost-0}.
When $\varepsilon\rightarrow 0^+$, simple calculation implies that
$$\left\{\begin{array}{ll}&\!\!\!\displaystyle I(3;x,u)=\lambda (x,u),\qquad I(1;3,u)=I(2;3,u)=0;\\[2mm]
&\!\!\!\displaystyle I(z;x,u)=0, \qquad\text{if } x,z\in\{1,2\}.\end{array}\right.$$
Then one can see that
$$\begin{array}{ll}\Lambda(x,u;h)=\max[h(1),h(2),h(3)-\lambda(x,u)].
\end{array}$$
If the third state did not exist, i.e., $\lambda(x,u)=0$ for any $x,u$, one could conclude that
$\Lambda(x,u;h)=\max[h(1),h(2),h(3)]$ is independent of $u$. This essentially says that a risk-sensitive investor at time $t$ takes action $u$ only to minimize the one-step cost $f_{t,t}(x,u)$. Due to the possibility of the rare state, risk-sensitive investors have to change their strategies accordingly.
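The convergence $\Lambda^\varepsilon\rightarrow\Lambda$ can also be checked numerically for this kernel. The probabilities $p_i(u)$, rates $\lambda(i,u)$, and test function $h$ below are illustrative values of our own, chosen positive so that the kernel remains a probability vector for small $\varepsilon$:

```python
import math

def q_eps(eps, x, u, p, lam):
    # Transition row from state x (states 1..3), following the example's kernel
    e = math.exp(-lam[(x, u)] / eps)
    if x == 1:
        return {1: 1 - p[1](u) - e, 2: p[1](u) - e, 3: 2 * e}
    if x == 2:
        return {1: p[2](u) - e, 2: 1 - p[2](u) - e, 3: 2 * e}
    return {1: 1 - p[3](u) - e, 2: p[3](u) - e, 3: 2 * e}

def Lambda_eps(eps, x, u, h, p, lam):
    q = q_eps(eps, x, u, p, lam)
    return eps * math.log(sum(math.exp(h[z] / eps) * q[z] for z in (1, 2, 3)))

# Illustrative parameters (not from the paper)
p = {1: lambda u: 0.3, 2: lambda u: 0.4, 3: lambda u: 0.5}
lam = {(x, u): (0.8 if u == 1 and x in (1, 2) else 5.0)
       for x in (1, 2, 3) for u in (0, 1)}
h = {1: 0.2, 2: 0.5, 3: 2.0}
for eps in (0.5, 0.1, 0.02):
    print(eps, Lambda_eps(eps, 1, 1, h, p, lam))
# Limit: max(h[1], h[2], h[3] - lam[(1,1)]) = max(0.2, 0.5, 1.2) = 1.2
```

As $\varepsilon$ shrinks, the rare third state contributes through the shifted term $h(3)-\lambda(x,u)$, which is exactly how the crisis state re-enters the risk-sensitive investor's decision.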
}
\end{example}
\section{Concluding Remarks}\label{sec:conrem}
We have explored time-inconsistent risk-sensitive MDPs with countable state space. Due to the time-inconsistency of the risk-sensitive cost functional, the theory of time-inconsistent equilibria and the convergence of value functions as $\varepsilon\rightarrow0^+$ have some unique interesting features; e.g., the convergence of $\varepsilon$-equilibria is required for the convergence of value functions. Therefore, our results enrich the general theory of risk-sensitive MDPs and time-inconsistent control problems. For our time-inconsistent risk-sensitive MDPs, a Hamiltonian recursion for each $\varepsilon>0$ has been derived and the convergence of the solution sequences as $\varepsilon\rightarrow0^+$ has been proved. Examples are presented to show that our assumptions are general.
The theory is still in its infancy and can be improved in several aspects. For example, can we obtain similar results for a general state space such as ${\bf X}={\bf R}^d$? The main difficulty lies in the first-order regularity of the viscosity solutions of non-linear PDEs. We hope to report on this in a future paper.\vspace{0.2cm}
\paragraph{Acknowledgements}
The author is grateful to the two anonymous referees for their helpful suggestions, which have greatly improved the manuscript. The author would also like to thank Professor Fran\c{c}ois Dufour for his valuable comments on an early version of the manuscript.
\bibliographystyle{amsplain}
\section{Introduction}
Accompanying the revolution of pre-trained models in many NLP applications, such as sentiment analysis~\cite{xu2019bert}, question answering~\cite{yang2019end}, information retrieval~\cite{yang2019simple}, and text generation~\cite{raffel2019exploring}, many related technologies have been deployed on the cloud by industrial service providers to process user data from personal customers, small businesses, and large enterprises. However, the convenience of on-cloud pre-training technology also comes with a series of privacy challenges due to the sensitive nature of user data. For example, the input text or even text vector representations in user requests can leak private information, which may cause a specific user to be identified~\cite{schwartz2011pii, zhu2020deep}. This lack of privacy guarantees may impede privacy-conscious users from releasing their data to service providers, so service providers may be unable to evolve their models with user data. Besides, unintended data disclosure and other privacy breaches may result in litigation, fines, and reputation damage for service providers. These concerns motivate our proposal of \textit{THE-X}, which enables privacy-preserving inference for transformers.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{overview.pdf}
\caption{An overview of our \textit{THE-X}. The transformer-based model could inference on encrypted data with our \textit{THE-X}, enabling theory-guaranteed privacy protection for users. }
\label{fig:overview}
\end{figure}
Specifically, we identify two challenges for the privacy-preserving inference of pre-trained models. The first challenge is how to protect users' plain-text data (e.g., clinic records or shopping history) from access by third-party service providers. Prior work has applied Differential Privacy (DP)~\cite{dwork2006calibrating} and its variants to address similar privatization issues - originally for statistical databases and more recently for DL~\cite{abadi2016deep} and NLP~\cite{qu2021natural, basu2021benchmarking, fernandes2019generalised, lyu2020differentially, basu2021privacy}. However, this solution may suffer from eavesdropping attackers: a handful of studies~\cite{zhu2020deep, zhao2020idlg} demonstrated that it is possible to recover raw data from gradient leakage, and the privacy protection is never theory-guaranteed. The second challenge is the performance concern: recent works like \textit{TextHide}~\cite{huang2020texthide} and \textit{FedNLP}~\cite{lin2021fednlp} leverage federated learning~\cite{yang2019federated} to train models on encrypted data, at the cost of a considerable performance drop. Focusing on the privacy of training data, they have not fully explored privacy-preserving inference.
To address the concerns above, we depict one practice of privacy-preserving inference in Figure \ref{fig:overview}, where a fine-tuned language model is converted into a cloud-service mode with \textit{THE-X} and processes users' data with its eyes blind. During inference, the content of the user query is anonymous to the transformer model. The computation results are also ciphertext, which can only be decrypted with the user's private key.
In addition, we need a theory-guaranteed encryption solution like homomorphic encryption (HE)~\cite{gentry2009fully} to convince both service providers and users of the privacy security in production scenarios. The semantic security of HE is guaranteed by lattice-based cryptography, and HE computation results on ciphertext decrypt to the same results as in plaintext, avoiding any performance reduction. The basic idea of homomorphic encryption is to perform computations on encrypted data without first decrypting it, which fully ensures privacy in cloud-serving scenarios. It allows user data to be encrypted and outsourced to commercial cloud environments for processing.
However, due to the complex operations (e.g., the GELU activation) in transformer-based models, the popular partially homomorphic encryption solutions, which support only addition or multiplication, cannot easily be adapted to pre-trained models. Based on the HE transformer backend~\cite{boemer2019ngraph1, boemer2019ngraph2, boemer2020mp2ml}, we design a series of approximation components to fulfill the whole inference pipeline of the mainstream transformer backbone. We evaluate \textit{THE-X} for BERT-tiny on the GLUE benchmark~\cite{wang2018glue} and the CONLL2003 task~\cite{sang2003introduction}. Our results show that \textit{THE-X} can achieve privacy-preserving inference with an averaged performance reduction of only 1.49\%.
Our contributions include:
\begin{itemize}
\item We are the first work to explore the privacy-preserving transformer inference with HE.
\item We design a practical and effective approximation workflow for converting transformer-based models into a function that consists of fully HE operations.
\item A thorough set of experiments confirms the negligible performance reduction with our proposed \textit{THE-X} approximation.
\end{itemize}
\section{Background}
\subsection{Security and Privacy Concern of Pre-trained Models}
Pre-trained models like BERT~\cite{devlin2018bert} and GPT-3~\cite{brown2020language} rely heavily on the use of plain-text data to reach human-like performance. Despite their remarkable achievements, these state-of-the-art models cannot be directly applied to some sensitive use cases, involving medical records~\cite{christoph2015secure}, search histories~\cite{shen2007privacy} and other personally identifiable information (PII).
To avoid direct computation on plain-text data, recent works like \textit{TextHide}~\cite{huang2020texthide} and \textit{DP-finetuning}~\cite{kerrigan2020differentially} introduce classical federated learning and differential privacy (DP) to protect the sensitive data. However, \textit{TextHide}~\cite{huang2020texthide} can only be applied to sentence-level tasks: due to its mix-up operation, it fails to model token-level tasks like named entity recognition or semantic role labelling. \textit{DP-finetuning} greatly sacrifices the performance of the fine-tuned model, e.g., by 20\% perplexity for a generation model like GPT-2.
\subsection{Practical Homomorphic Encryption}
The classic definition of homomorphic encryption is a form of encryption that permits computations to be performed on encrypted data without first decrypting it. The computation results are retained in encrypted form and can be decrypted into output identical to that produced by the same computations on the unencrypted data. Let $F$ be a function or the entire pre-trained model, $E$ an encryption function, and $D$ a decryption function. Then for any allowed plain-text input $x$, we have:
\begin{equation}
F(x) = D(g(E(x))),
\end{equation}
where $g$ is a constructed function that plays the same role as $F$, except on encrypted data. Figure \ref{fig:overview} shows how a user performs inference using a cloud-deployed pre-trained model that is not trusted. First, the pre-trained model receives a ciphertext encrypted with the user's private key and performs the inference function $g$ on the ciphertext. Then, the server sends an encrypted result to the user, which can only be decrypted with the user's key. At no point does the cloud service provider gain access to the plain text.
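To make the algebra of this relation concrete, consider a toy sketch: unpadded ("textbook") RSA is multiplicatively homomorphic, so $E(a)\cdot E(b) \bmod n$ decrypts to $a\cdot b \bmod n$. This is not the CKKS scheme used later in the paper, and textbook RSA is insecure in practice; the snippet only illustrates $F(x) = D(g(E(x)))$ for a single multiplication.

```python
# Toy illustration of F(x) = D(g(E(x))) using textbook RSA's
# multiplicative homomorphism. NOT secure, NOT CKKS -- a sketch only.

n, e, d = 3233, 17, 2753          # classic toy key pair: n = 61 * 53

def E(m):                          # encrypt
    return pow(m, e, n)

def D(c):                          # decrypt
    return pow(c, d, n)

def g(c1, c2):                     # server-side computation on ciphertexts
    return (c1 * c2) % n

a, b = 7, 12
# the server multiplies ciphertexts; the client decrypts the product
assert D(g(E(a), E(b))) == (a * b) % n
```

The server only ever sees `E(a)` and `E(b)`, yet the decrypted result equals the plaintext product, which is exactly the contract the cloud model $g$ must satisfy.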
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{workflow.pdf}
\caption{The Approximation Workflow of \textit{THE-X}. To replace the non-polynomial operations, we split the fine-tuning stage into several subphases. Given a pre-trained checkpoint, we drop the pooler of the pre-trained model and replace softmax and GeLU. Afterward, we follow the standard fine-tuning for classification or regression tasks. We add LayerNorm approximation into the fine-tuned model and distill knowledge from original LN layers. After dropping the original LN, we convert the model into fully HE-supported ops with the HE transformer.}
\label{fig:workflow}
\end{figure*}
The Intel HE transformer for nGraph~\cite{boemer2019ngraph1, boemer2019ngraph2} is a Homomorphic Encryption backend to the deep learning models. Currently, it supports the CKKS~\cite{cheon2017homomorphic} encryption scheme, implemented by the Simple Encrypted Arithmetic Library (SEAL)~\cite{sealcrypto} from Microsoft Research. It is a research tool to demonstrate the feasibility of HE on deep learning.
\subsection{Challenges of Transformer Inference with HE}
Some HE schemes support only a single algebraic operation, such as addition or multiplication; these are known as "partially homomorphic" schemes (PHE). Other schemes, called "fully homomorphic" (FHE), support both addition and multiplication. Note that composing addition and multiplication suffices to construct polynomial functions, and hence polynomial approximations to non-polynomial functions such as GELU~\cite{hendrycks2016gaussian} or LayerNorm~\cite{xu2019understanding}. Notably, the restriction to addition and multiplication prevents the exact computation of any comparison-based operation such as Max or Min, as well as common functions such as the exponential or sigmoid. Finally, "leveled homomorphic" schemes (LHE) support addition and multiplication only up to a fixed computational depth.
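The polynomial-approximation idea above can be sketched numerically: fit a polynomial to a non-polynomial function on a bounded interval, then evaluate it with Horner's rule, which uses only additions and multiplications and is therefore HE-evaluable. The sigmoid, degree, and interval below are illustrative choices, not the approximations used by \textit{THE-X}.

```python
import numpy as np

# Least-squares polynomial surrogate of the sigmoid on [-5, 5],
# evaluated with Horner's rule (only + and *, hence HE-friendly).

x = np.linspace(-5.0, 5.0, 1001)
sigmoid = 1.0 / (1.0 + np.exp(-x))
coeffs = np.polyfit(x, sigmoid, deg=7)   # highest-degree coefficient first

def horner(c, t):
    # evaluate the polynomial using additions and multiplications only
    acc = np.zeros_like(t)
    for ck in c:
        acc = acc * t + ck
    return acc

err = np.abs(horner(coeffs, x) - sigmoid).max()
assert err < 0.1   # crude, but computable entirely under add/mul
```

Comparison-based operations such as Max admit no such exact polynomial form, which is why they need the interactive protocol described later.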
\section{\textit{THE-X}: Formal Description}
There are two core ideas in \textit{THE-X}. The first is to incorporate the user device into the HE inference, and the second is to use "simplified computation" to approximate the non-polynomial functions.
In the following, we will describe how to enable homomorphic encryption of transformer-based models with \textit{THE-X}.
\subsection{Approximation Workflow}
\setlength{\algomargin}{1em}
\begin{algorithm}
\caption{Approximation Workflow}
\label{alg:app}
\DontPrintSemicolon
\SetAlgoLined
\SetKwFor{ForAll}{for all}{do}{end}
\KwData{labeled task data $\mathcal{D}$.}
\KwIn{pre-trained Transformer model $\mathcal{M}$, softmax estimation model $\mathcal{S}$.}
\nl $\widehat{\mathcal{M}} \leftarrow \mathcal{M} \odot (\mathcal{S}, ReLU) $. \\
\tcp*{\small{replace GELU and Softmax}}
\nl \While{not done}{
\nl sample batches $(x_i, y_i)$ from $\mathcal{D}$, \\
\nl let $(x_i, y_i)$ optimize $\widehat{\mathcal{M}}$ with $\mathcal{S}$ frozen. \\
}
\nl $\tilde{\mathcal{M}} \leftarrow \widehat{\mathcal{M}} \oplus \tilde{\mathcal{N}}$. \\
\tcp*{\small{add the layernorm approximation}}
\nl \While{not done}{
\nl sample batches $(x_i, y_i)$ from $\mathcal{D}$, \\
\nl freeze the parameters of $\tilde{\mathcal{M}}$ except $\tilde{\mathcal{N}}$. \\
\nl compute $k$-th layernorm output $O_k$, $\tilde{O_k}$. \\
\nl compute loss $\ell_k$ = MSELoss($O_k$, $\tilde{O_k}$). \\
\nl update $\tilde{\mathcal{N}}$ with loss $\mathcal{L} = \sum_k{\ell_k}$.
}
\nl $\Bar{\mathcal{M}} \leftarrow \tilde{\mathcal{M}} \ominus \mathcal{N}$. \\ \tcp*{\small{discard the origin layernorm}}
\KwRet $\Bar{\mathcal{M}}$.
\end{algorithm}
First, we present the approximation workflow of \textit{THE-X}, which consists of two stages: Standard Finetuning and LN Distill as depicted in Figure \ref{fig:workflow}. Given a pre-trained model $\mathcal{M}$ and corresponding downstream data, we aim to produce a fully HE supported $\Bar{\mathcal{M}}$ which is fine-tuned and ready for deployment.
The two-stage optimization of Algorithm \ref{alg:app} aims to find the best approximation checkpoint. For computational efficiency, pre-trained models can also be fine-tuned together with the layernorm approximation, which needs only a single optimization loop. We discuss the scheduling of the different approximation workflows in Sec \ref{sec:schedule}. There are three major non-polynomial functions in the transformer block, which we study in detail below.
\subsubsection{Gaussian Error Linear Units (GELU)}
Involving a computation of the Gaussian error function, the Gaussian Error Linear Unit (GELU)~\cite{hendrycks2016gaussian} is not suitable to serve as an activation function under HE: the Gaussian kernel includes unsupported functions such as the exponential. In transformer implementations, GELU is defined via a fast approximation, where the $\tanh$ function is still non-polynomial and unsupported by HE:
\begin{equation}
G(x) = 0.5x(1 + \tanh[\sqrt{2/\pi}(x + 0.044715x^3)]).
\end{equation}
We illustrate the numerical comparison between GELU and ReLU in Figure \ref{fig:GELU}, where the outputs of GELU are very close to those of ReLU. Hence, we propose to replace the GELU layer with the ReLU activation function. Apart from the $Max$ function in ReLU, all other computations are well supported by HE. To enable the computation of $Max$, we implement the first key idea: incorporating the user device into the inference. The server conveys the ciphertext input to the user for local $Max$ computation. Upon receiving the connection, the user device decrypts the ciphertext, calls the local $Max$ function, and returns the re-encrypted results to the server. Despite the communication cost, no plaintext is leaked during the TLS connection and semantic security is guaranteed.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{GLEU.pdf}
\caption{The activation results of GELU compared with ReLU. With an input around zero, the activation results are very close. With a larger or smaller input value, the activation results tend to converge.}
\label{fig:GELU}
\end{figure}
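The closeness of the two activations noted above can be checked numerically; the sketch below compares the tanh-based GELU formula against ReLU on a dense grid (the interval and tolerances are illustrative).

```python
import numpy as np

# Numerical check that the tanh-based GELU stays close to ReLU,
# motivating the replacement in the approximation workflow.

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x**3)))

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-6.0, 6.0, 2001)
gap = np.abs(gelu(x) - relu(x))

assert gap.max() < 0.2                    # worst case is small (near |x| ~ 0.75)
assert gap[np.abs(x) > 4].max() < 1e-3    # the two converge for large |x|
```

The largest gap sits around $|x|\approx 0.75$ and the activations agree to high precision outside $[-4, 4]$, consistent with Figure \ref{fig:GELU}.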
\subsubsection{Softmax}
The second non-polynomial function is softmax, which includes the exponential and division computation.
\begin{equation}
Softmax(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}.
\end{equation}
A first thought for approximating softmax is to find alternatives to the softmax operation in the transformer, including Taylor series approximation~\cite{vincent2014efficient} and softmax-free linear attention~\cite{lu2021soft}. However, both have limitations: the Taylor series can only approximate the exponential operation, while softmax-free linear attention uses Newton's iteration to approximate division, whose approximation error is unbounded in full-scale attention settings.
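For reference, the Newton-iteration trick mentioned above computes a reciprocal using only subtraction and multiplication: $y_{k+1} = y_k(2 - s\,y_k)$ converges to $1/s$ whenever the starting point satisfies $0 < s\,y_0 < 2$. The sketch below (with an illustrative starting point) shows why it is HE-friendly, and also why its error depends on the input range being known in advance.

```python
import numpy as np

# Add/mul-only reciprocal via Newton's iteration: y <- y * (2 - s*y).
# Converges quadratically to 1/s provided 0 < s*y0 < 2, so the input
# scale must be bounded in advance -- the limitation noted in the text.

def newton_reciprocal(s, y0=0.01, iters=20):
    y = np.full_like(s, y0, dtype=float)
    for _ in range(iters):
        y = y * (2.0 - s * y)
    return y

s = np.array([1.0, 4.0, 50.0, 150.0])     # all satisfy s * y0 < 2
assert np.allclose(newton_reciprocal(s), 1.0 / s, rtol=1e-6)
```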
For these considerations, we have no choice but to design an estimation network with addition and multiplication.
\begin{equation}
\label{eq:softmax_agent}
S(x_i) = x_i \ast T( \sum_j ReLU(((x_j)/2 + 1)^3) ).
\end{equation}
Equation \ref{eq:softmax_agent} is the formal description of our softmax estimation network. As in the approximation of GELU, the ReLU operation here is realized through communication with the client. Instead of a division operation, we approximate the reciprocal with a three-layer linear neural network, denoted $T$.
To obtain a better estimation of softmax, we randomly generate input tensors with values in $[-3, 3]$ and use their softmax scores as MSE targets. We then optimize $T$ for 100k steps with a learning rate of 1e-3 until the MSE loss drops to 1e-6.
An under-explored problem here is the \textit{infinite value of masked attention}: the input of softmax is always the masked attention scores. To prevent attention to masked tokens, the original transformer model fills the masked attention scores with negative infinity before softmax. When fed an infinite value, the softmax estimation model may face a numerical disaster. We discuss this phenomenon and the corresponding solution in Sec \ref{sec:infinite}.
\subsubsection{LayerNorm}
Recall that the layer normalization~\cite{ba2016layer} in transformer is implemented over a mini-batch of inputs, which could be formulated as:
\begin{equation}
y = \frac{x - E[x]}{\sqrt{Var[x] + \epsilon}} \ast \gamma + \beta.
\end{equation}
The mean and standard deviation are calculated with division operations, for which approximation is needed. $\gamma$ and $\beta$ are learnable affine-transform parameters. To avoid introducing new parameters, we keep the learnable parameters while obtaining the mean and standard deviation by regression.
\begin{equation}
\hat{y} = x \circ \boldsymbol{\gamma} + \beta.
\end{equation}
The new parameter $\boldsymbol{\gamma}$ predicts the value of the standard deviation by regression from the original $\hat{\gamma}$. We find that this simple linear replacement is sufficient for values with a small scale of bias. Here $\boldsymbol{\gamma}, \beta \in \mathbb{R}$ and $\circ$ denotes the Hadamard product.
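A quick numpy sketch shows why this linear surrogate can work: when the per-sample mean and standard deviation of the activations drift only slightly, LayerNorm is close to a fixed affine map of $x$, so its scale and shift can be fitted by regression. The least-squares fit below is a stand-in for the LN-distill stage, and the data distribution is illustrative.

```python
import numpy as np

# If hidden states have nearly constant per-sample mean/std, LayerNorm
# (x - mu_i)/sd_i is approximately a fixed affine map x*gamma' + beta'.
# We fit the two scalars by least squares as a stand-in for LN-distill.

rng = np.random.default_rng(0)
x = 2.0 + 5.0 * rng.standard_normal((4096, 128))   # synthetic hidden states
mu = x.mean(axis=1, keepdims=True)
sd = x.std(axis=1, keepdims=True)
y = (x - mu) / sd                                  # LN with gamma=1, beta=0

A = np.stack([x.ravel(), np.ones(x.size)], axis=1)
(gamma_p, beta_p), *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)
mse = np.mean((x * gamma_p + beta_p - y) ** 2)

assert mse < 0.05 * y.var()   # the affine surrogate explains LN closely here
```

When the activation scale is not controlled (the attention-overflow issue discussed later), the per-sample statistics vary wildly and this fit degrades, which is why regularization matters.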
The layer normalization is applied in each multi-head attention block and after the output dense layer, so the approximation error tends to accumulate when the transformer stacks many layers.
We treat the layernorm approximation as an individual stage, shown in Figure \ref{fig:workflow} as \textit{LN-Distill}, which learns from the original LN layers. A challenge here is \textit{attention overflow}: the attention-score input before normalization may have an unbounded scale, leading to numerical problems. We discuss the details of attention overflow in Sec \ref{sec:attention overflow}.
\subsubsection{Other Practical Replacement}
After the approximation workflow, the fine-tuned model consists of only addition and multiplication operations and is thus fully compatible with homomorphic encryption. We power the model with the HE transformer backend. Since the backend only works with TensorFlow checkpoints, any pre-trained transformer built with PyTorch must first be converted into TensorFlow format. Some other details are worth mentioning here.
\begin{itemize}
\item For the $softmax(\frac{QK^T}{\sqrt{d_k}}) V$ operation in the attention score computation, we absorb the value of $\frac{1}{\sqrt{d_k}}$ into the weights of the query projection layer.
\item We use a full-kernel convolution layer instead of a linear projection due to the lack of a supported dense operation.
\item All matrix multiplication will be converted into the element-wise style.
\item We drop the pooler layer due to its unsupported tanh operation.
\end{itemize}
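The first replacement above is an exact identity, not an approximation: scaling the query projection weights by $1/\sqrt{d_k}$ ahead of time yields the same attention logits as dividing $QK^T$ by $\sqrt{d_k}$ at runtime, removing one division from the encrypted graph. A small numpy check (with illustrative dimensions):

```python
import numpy as np

# Pre-scaling the query weights by 1/sqrt(d_k) gives the same logits
# as dividing Q K^T by sqrt(d_k) at inference time.

rng = np.random.default_rng(1)
d_model, d_k, L = 16, 8, 5
W_q = rng.standard_normal((d_model, d_k))
W_k = rng.standard_normal((d_model, d_k))
h = rng.standard_normal((L, d_model))          # a toy hidden-state sequence

scores = (h @ W_q) @ (h @ W_k).T / np.sqrt(d_k)          # standard form
scores_absorbed = (h @ (W_q / np.sqrt(d_k))) @ (h @ W_k).T  # absorbed form

assert np.allclose(scores, scores_absorbed)
```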
\subsection{Privacy-preserving Inference}
In this section, we describe the behavior of HE models during privacy-preserving inference. Note that inference is completed by the joint effort of the server and the user device.
\begin{algorithm}
\caption{Inference with HE}
\label{alg:infer}
\DontPrintSemicolon
\SetAlgoLined
\SetKwFor{ForAll}{for all}{do}{end}
\KwIn{user plain text query $\mathcal{P}_q$, private key $\mathcal{K}$ generated under server protocol, encrypted server model $\mathcal{M}$. }
\nl client computes embeddings: $\mathcal{E}_q \leftarrow \mathcal{P}_q $. \\
\nl client encrypts query embeddings: $\mathcal{C}_q \leftarrow Encrypt(\mathcal{E}_q, \mathcal{K})$. \\
\nl server forwards the model: $\mathcal{C}_i$ = $\mathcal{M}(\mathcal{C}_q)$. \\
\nl client handles activation: $\mathcal{C}_a$ = $ReLU(\mathcal{C}_i)$. \\
\nl server continues forwarding: $\mathcal{C}_o = \mathcal{M}(\mathcal{C}_a)$. \\
\nl client decrypts results: $\mathcal{P}_o = Decrypt(\mathcal{C}_o, \mathcal{K})$. \\
\end{algorithm}
Notably, the server in Algorithm \ref{alg:infer} lacks support for the ReLU operation, so it exchanges the activation results with the client. However, all communication between client and server is in ciphertext, which ensures the privacy of user queries and prevents eavesdropping attackers from recovering private text data.
\section{Experiments}
In this section, we design both sequence-level and token-level tasks to evaluate the approximation performance of our \textit{THE-X} solution. We also discuss several identified factors that greatly affect the approximation workflow.
\begin{table*}[htbp]
\centering
\caption{Performance on the GLUE\footnotemark[1] tasks for both the baseline (standard fine-tuning) and \textit{THE-X} with BERT-tiny, measured on the development sets. We report the best results from a hyper-parameter search. $|\mathcal{D}|$ denotes the number of training examples. \textit{THE-X} suffers only a small utility loss: < 1.5\% in most tasks. `P/S corr.' is the Pearson/Spearman correlation and `m/mm' denotes the accuracy scores on the matched/mismatched sets.}
\resizebox{\linewidth}{!}{
\begin{tabular}{cccccccccc}
\toprule
Tasks & $|\mathcal{D}|$ & Type & Metrics & Baseline & ReLU & ReLU-S & ReLU-S-L & HE & Perf $\downarrow$ \\
\midrule
SST-2 & 67k & Sentiment & Acc. & 82.45 & 82.40 & 82.34 & 82.11 & 82.11 & 0.34 \\
MRPC & 3.7k & Paraphrase & F1/Acc. & 81.57/70.10 & 81.69/70.34 & 80.81/69.85 & 79.93/68.87 & 79.94/68.87 & 1.63/1.23 \\
STS-B & 7k & Similarity & P/S corr. & 72.83/73.66 & 72.89/73.03 & 74.19/74.27 & 68.38/70.96 & 68.39/70.97 & 4.44/2.69 \\
QQP & 364k & Paraphrase & F1 & 80.28/84.03 & 79.55/82.89 & 79.38/83.36 & 78.28/83.75 & 78.33/83.63 & 1.95/0.40 \\
MNLI & 393k & NLI & m/mm & 69.75/70.75 & 69.51/70.60 & 68.61/69.13 & 68.59/69.41 & 68.47/69.08 & 1.28/1.67 \\
QNLI & 108k & NLI & ACC. & 78.38 & 78.35 & 78.33 & 78.33 & 78.20 & 0.18 \\
RTE & 2.5k & NLI & ACC. & 58.56 & 58.32 & 58.27 & 58.12 & 58.12 & 0.44 \\
\midrule
\multicolumn{4}{c}{Average Perf $\downarrow$} & 0.00 & 0.25 & 0.34 & 1.42 & 1.48 & \textbf{1.48} \\
\bottomrule
\end{tabular}}
\label{tab:main_result}%
\end{table*}
\begin{table}[htbp]
\centering
\caption{Performance on the CONLL2003 task for both the baseline and \textit{THE-X} with BERT-tiny, measured on the development sets. We find that the replacement with ReLU has only a slight effect on performance and even achieves an F1 score that is 0.12 better than the original GELU activation.}
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccc}
\toprule
Metrics & Precision & Recall & F1 & Perf $\downarrow$ \\
\midrule
Raw & 82.34 & 84.85 & 83.57 & 0 \\
ReLU & 82.29 & 85.13 & 83.69 & -0.12 \\
ReLU-S & 82.08 & 84.73 & 83.38 & 0.19 \\
ReLU-S-L & 79.65 & 83.79 & 81.67 & 1.90 \\
HE & 79.65 & 83.79 & 81.67 & \textbf{1.90} \\
\bottomrule
\end{tabular}
}
\label{tab:NER_results}
\end{table}
\subsection{Evaluation Tasks}
GLUE~\cite{wang2018glue}, the General Language Understanding Evaluation benchmark, is a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. We choose a subset of GLUE\footnote[1]{CoLA task is not reported because of the limited capacity of BERT-tiny.} tasks, which include: MRPC~\cite{dolan2005automatically}, SST-2~\cite{socher2013recursive}, QQP\footnote[2]{https://www.quora.com/profile/Ricky-Riche-2/First-Quora-Dataset-Release-Question-Pairs}, STS-B~\cite{cer2017semeval}, MNLI~\cite{williams2017broad}, QNLI~\cite{rajpurkar2016squad}, and RTE~\cite{dagan2005pascal, haim2006second, giampiccolo2007third, bentivogli2009fifth}.
Following previous work~\cite{devlin2018bert, turc2019well}, we exclude the WNLI task from the GLUE benchmark. We also use the well-known CoNLL-2003~\cite{sang2003introduction} named entity recognition task as an additional token-level evaluation. In conclusion, we include a wide variety of NLU tasks, covering both sequence-level and token-level tasks, in both regression and classification format.
\subsection{Experiment Settings}
For computation efficiency and energy-saving consideration, we use the released BERT-tiny~\cite{turc2019well} as our demo model, which is a standard transformer-based language model with only 2 layers and a hidden size of 128. We provide four settings to evaluate different parts of our approximation components.
\begin{itemize}
\item \textbf{Baseline.} In this setting, we make no replacement or approximation. We use the raw pre-trained checkpoint to fine-tune on downstream tasks.
\item \textbf{ReLU.} We fine-tune the pre-trained model with all GELU activation replaced with ReLU.
\item \textbf{ReLU-S.} In addition to ReLU, we fine-tune the model with the softmax operation replaced by the softmax estimation model.
\item \textbf{ReLU-S-L.} We implement full approximation including a layer normalization replacement.
\item \textbf{HE.} We convert the fine-tuned checkpoint with HE-transformer and power the inference with SEAL backend.
\end{itemize}
\textbf{Implementation.} To reduce the variance of results under different settings, we choose hyper-parameters from a fixed set during approximation fine-tuning and HE inference runtime.
\begin{itemize}
\item For fine-tuning the approximation components, we choose a batch size from \{4, 8, 16, 32, 128\} and a learning rate from \{1e-4, 3e-4, 3e-5, 5e-5\}, as mentioned in the original BERT code~\cite{turc2019well}. We use an Adam optimizer with a weight decay chosen from \{0.05, 0.1, 0.2, 0.4, 0.5\}.
\item For HE evaluation, we use the HE-transformer backend, where two parameters are recommended for search by Intel: the poly modulus and the coeff modulus. We choose the poly modulus degree from \{1024, 2048, 4096, 8192, 16384\} and the coeff modulus from \{20, 30, 60\}.
\end{itemize}
\subsection{Approximation Results}
Table \ref{tab:main_result} shows the results of the baseline and \textit{THE-X} on the GLUE benchmark. The averaged performance reduction of \textit{THE-X} is 1.48\% when compared to the baseline model. We observe the most performance reduction comes from the approximation of layernorm, which incurs a reduction of 1.08\%. The softmax estimation model contributes the least performance drop among the approximation components, for only 0.09\% on average, indicating the softmax function could be well imitated by neural networks. We also find the average performance reduction of HE is quite negligible, where the slight drop may be due to the sequence truncation.
The results of \textit{THE-X} on the token-level NER task are reported in Table \ref{tab:NER_results}. Replacing GELU with ReLU even improves the F1 score; we assume this slight improvement may come from unexpected bias. The layernorm approximation again incurs the largest performance reduction; we assume token-level tasks need more detailed patterns in the attention scores. Overall, \textit{THE-X} still works well on the token-level task, with a mere F1 reduction of 1.9\%.
Across different types of tasks, we find that \textit{THE-X} yields the best performance on classification tasks, including paraphrase, sentiment, and NLI; among these, the performance of QNLI drops the least, by only 0.18\%. The performance drops most on regression tasks such as the similarity task STS-B, by 4.44\% Pearson correlation and 2.69\% Spearman correlation. We assume regression tasks need higher numerical precision than classification tasks.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{mask_value.pdf}
\caption{Performance on the CONLL2003 task with different mask values. We find the “Negative Infinity” value of the 0 mask greatly reduces approximation performance. In \textit{THE-X}, using a mask value in $[-5, -3]$ might be a sensible default.}
\label{fig:mask_value}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{weight_decay.pdf}
\caption{Performance on all tasks with different weight decay values, measured on the development sets. Metrics are marked on the y-axis and weight decay values are marked on the x-axis.}
\label{fig:weight_decay}
\end{figure*}
\subsection{Negative Infinity}
\label{sec:infinite}
Recall that in Equation \ref{eq:softmax_agent} we replace softmax with a neural estimation model. To prevent attention to masked tokens, the original transformer model fills the masked attention scores with negative infinity before softmax, which is where the numerical disaster occurs in our approximation method. To solve this problem, we give an empirical study in Figure \ref{fig:mask_value} of how "negative" the masked attention scores should be. Although the F1 score of the raw fine-tuned model is nearly unchanged across different attention mask values, the approximation method is extremely sensitive to these numerical changes. We assume the softmax estimation model fails to deal with large input values, leading to a considerable performance drop. However, when the attention mask value becomes too small, it acts as a bias on the attention scores, which also leads to a certain performance drop. We recommend using a moderate mask value between $-5$ and $-2$.
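The intuition that a moderate mask value suffices can be sketched numerically: with logits on a typical $[-1, 1]$ scale, a mask value such as $-4$ already drives the masked probabilities near zero, so "negative infinity" is not numerically necessary (the sizes and values below are illustrative).

```python
import numpy as np

# Compare exact softmax under "infinite" masking vs a moderate mask
# value: with logits in [-1, 1], a mask of -4 suppresses the masked
# positions almost as well as -1e9.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
logits = rng.uniform(-1.0, 1.0, size=8)
masked = np.array([5, 6, 7])                 # positions to suppress

z_inf, z_mod = logits.copy(), logits.copy()
z_inf[masked] = -1e9                         # "negative infinity" masking
z_mod[masked] = -4.0                         # moderate mask value

p_inf, p_mod = softmax(z_inf), softmax(z_mod)
assert p_mod[masked].sum() < 0.05            # mask still effective
assert np.abs(p_inf - p_mod).max() < 0.05    # distributions nearly agree
```

The residual mass on masked positions scales like $e^{\text{mask}}$, which explains why values below roughly $-5$ buy no accuracy while stressing the estimation network.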
\subsection{Attention Overflow}
\label{sec:attention overflow}
Another challenge for \textit{THE-X} is the attention-score input of layer normalization. In most cases, the scale of the multi-head attention output is densely concentrated around $[-1, 1]$. Before normalization, however, we observe that the attention scores are widely scattered, with some extreme values reaching $10^4$, which is difficult for our LN-distill stage. To prevent overflowing attention scores, we use the \textbf{weight decay} of the Adam optimizer as regularization.
In Figure \ref{fig:weight_decay}, we present the attention overflow phenomenon across different tasks. Without any regularization, our approximation method yields uncontrolled attention scores, leading to poor performance. As the weight decay increases, the attention scores tend to converge, which benefits the approximation results. We also observe that a larger weight decay may harm performance on NLI tasks, so the regularization can be seen as a trade-off between better approximation results and a higher performance upper bound. For the NER task, a larger weight decay may even benefit performance while also boosting our approximation method.
\subsection{Schedule of Approximation workflow}
\label{sec:schedule}
A remaining question is how to organize the several optimization steps for the best approximation performance. We investigate four scheduling plans:
\begin{itemize}
\item \textbf{Two Stages.} We freeze the softmax estimation model during standard fine-tuning and select the best checkpoint for the second stage: distilling the layer normalization network.
\item \textbf{Joint FT S.} We optimize the softmax estimation model during standard fine-tuning and apply the LN-distillation after.
\item \textbf{Joint FT LN.} We apply one-pass optimization with the softmax estimation model frozen but update the other parameters including layer normalization network. No further LN-distill will be implemented.
\item \textbf{Joint FT S + LN.} A total one-pass optimization with all approximation parameters updated with the model together.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{schedule.pdf}
\caption{Performance on all tasks with different organizations of approximation workflow. Jointly fine-tuning the softmax estimation model or approximated layernorm leads to a performance drop across all tasks.}
\label{fig:schedule}
\end{figure}
As illustrated in Figure \ref{fig:schedule}, we observe that fine-tuning the different approximation components individually (i.e., \textit{Two Stages}) is a good default for keeping the best approximation performance. For the regression task STS-B, jointly fine-tuning the softmax estimation model and the approximated layernorm even fails to fulfill the approximation pipeline, pulling the performance down to 0.4\%. We assume that fine-tuning different components together becomes a \textit{bi-level} optimization problem, in which it is hard to achieve satisfying results. In conclusion, the softmax estimation model and the approximated layernorm are both critical to the performance of \textit{THE-X} and deserve individual optimization.
\section{Conclusions}
We present \textit{THE-X}, a practical approach that enables pre-trained transformer models to run inference under homomorphic encryption. It requires several approximation components to replace the original operations in the transformer model. It imposes a slight performance cost but enjoys the full advantage of homomorphic encryption: theory-guaranteed user privacy.
We see this as a first step in combining homomorphic encryption with pre-trained models to address emerging privacy issues. We hope our work motivates further research, including better approximation solutions for different NLP applications.
\section{Acknowledgments}
This paper is supported in part by the NSFC through grant No.\,U20B2053. We also thank the Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing for its support.
\section*{}
\vspace{-1cm}
\footnotetext{\textit{$^{a}$~Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warszawa, Poland. Tel: +48 2255 32908; E-mail: fdutka@fuw.edu.pl}}
\footnotetext{\textit{$^{b}$~Institute of Physical Chemistry, Polish Academy of Sciences, Kasprzaka 44/52, 01-224 Warszawa, Poland. }}
\footnotetext{\textit{$^{c}$~Faculty of Physics, Institute of Acoustics, Adam Mickiewicz University, Umultowska 85, 61-614 Pozna\'n, Poland. }}
\footnotetext{\dag~Electronic Supplementary Information (ESI) available: one movie and two animations.}
\section{Introduction}
Capillary bridges play an important role in many physical phenomena, including agglomeration \cite{Balakin2015,Wang2015,Balakin2013}, mechanical strengthening \cite{Herminghaus2005,Sheel2008,Pakpour2012}, surface adhesion \cite{Xue2015,Slater2014}, rheological response \cite{Koos2011,Koos2012,Hoffmann2014}, capillary gripping \cite{Fan2015,Vasudev2008} and self-assembly \cite{Arutinov2014,Broesch2014,Mastrangeli2015}. They are often utilized in materials science, e.g. for fabricating new materials using capillary suspensions \cite{Velankar2015,Dittmann2015,Schneider2016}, new structures \cite{Park2012,Wang2015b,Tisserant2015}, in nanolithography \cite{Chaix2006,Fabie2010,Eichelsdoerfer2014}, microplating \cite{Hunyh2013}, for pattern formation \cite{Xu2006}, and in printed electronics technologies \cite{Kumar2015}. It is thus essential to know their properties and behavior. Therefore, a great effort is made to understand the mechanisms of their formation and shape development \cite{Anachkov2016,Wu2016,Gogelein2010}, rupturing \cite{Perales2011,Alexandrou2010,Yang2010,Men2011} and evaporation \cite{Cho2016,Neeson2014}.
Capillary bridges exist at solid contacts between spheres \cite{Bayramli1987,Adams2002,Lian2016}, rods \cite{Mollot1993,Duprat2012,Sauret2015}, plates \cite{Dejam2015,Cheng2016}, a mix of these \cite{Rabinovich2005,Dutka2007,Guzowski2010,Dormann2014}, or other shapes \cite{Wang2016,Dutka2006}.
They also form when a particle is pulled out of a suspension. The liquid then rises to a certain height above the bulk planar level and forms a concave meniscus, called a capillary bridge between a particle and a planar liquid surface. Such capillary bridges can be used for determining the surface tension \cite{Huh1976,Bayramli1982,He2015,Ettelaie2015}, in a way similar to the methods that employ pulling the so-called Wilhelmy plate \cite{Butt2006,Butt2009b,Jorgensen2015}, a cylindrical fiber \cite{Takahashi1990,Quere1999,Gennes2004}, or a toroidal ring \cite{Hubbard2002,Drelich2002}.
In this article, we study the formation of a solid-planar liquid surface bridge and its morphological transition into a solid-solid liquid capillary bridge. We analyze the development of these capillary bridges formed on a beaded chain being pulled out from a liquid, Fig.\,\ref{fig_scheme}. This research originates from the observation of the formation of liquid bridges during a novel process of fabricating one-dimensional colloidal assemblies in the presence of dipolar interactions \cite{Rozynek2016}. We observed that the assembly process proceeded unevenly when the beads forming the chain were small (micrometers) and smoothly when the beads had sub-millimeter size, see Supplementary Movie 1. The way this process proceeded was related to the mechanism of capillary bridge formation. Here we provide a theoretical description of that mechanism, and we show that for monodisperse spherical beads forming a chain, the morphological phase transition between the two types of bridges can be either continuous or discontinuous. The transition is continuous when the diameter of the spheres is larger than the capillary length $\lambda=\sqrt{\gamma/\rho_\un{l} g}$, where $\gamma$ is the surface tension coefficient, $\rho_\un{l}$ the liquid density, and $g$ the gravitational acceleration. Otherwise, the transition is discontinuous, as is the capillary force acting on the chain.
In order to validate our theoretical predictions experimentally, we performed experiments with two beaded chains composed of either large ($2 \, \un{mm}$) or small ($30 \, \un{\mu m}$) spheres (capillary length $\lambda=1.45 \, \un{mm}$). We either glued the spheres together to form a permanent chain, or assembled the particles by employing the process described in reference \cite{Rozynek2016}. In short, the process requires the sum of the attractive dipolar and capillary forces between neighbouring particles, $F_\un{ss}$, to be sufficiently strong to overcome the capillary force $F_\un{sp}$ stemming from a sphere-planar liquid surface bridge. Capillary bridges stabilize the growing chain.
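The quoted capillary length can be checked directly from its definition; a minimal sketch in Python, using the silicone-oil parameters reported in Sec.\,\ref{sec_experiment} ($\gamma=19.7\,\un{mN/m}$, $\rho_\un{l}=950\,\un{kg/m^3}$):

```python
import math

def capillary_length(gamma, rho_l, g=9.81):
    """Capillary length lambda = sqrt(gamma / (rho_l * g)), all in SI units."""
    return math.sqrt(gamma / (rho_l * g))

# Silicone-oil parameters quoted in the experimental section.
lam = capillary_length(gamma=19.7e-3, rho_l=950.0)
print(f"lambda = {lam * 1e3:.2f} mm")  # -> lambda = 1.45 mm
```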
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \tw]{Fig1.eps}
\caption{Scheme illustrating the capillary forces acting on the lower spherical particle of radius $R$: sphere-planar liquid surface capillary force directed downwards $\vec{F}_\un{sp}$, and sphere-sphere capillary force directed upwards $\vec{F}_\un{ss}$. We assume the contact angle $\theta=0$, i.e. the bridge is tangential to the sphere (left panel). A magnified image of two spheres aligned along the direction of pulling, which is upwards. The upper particle is the beginning of a chain. The lower particle is just being pulled out from liquid. The liquid layer forming a capillary sphere-sphere bridge is well-resolved (right panel). \label{fig_scheme}}
\end{center}
\end{figure}
This article is structured as follows: in Sec.\,\ref{sec_sphereplane} we introduce the theoretical model that is then used to describe the formation of the sphere-planar liquid surface bridge, calculate the shape of the bridge in the presence of the gravitational field, and evaluate the corresponding capillary force. We track the morphological phase transition to the sphere-sphere bridge phase. The transition can be continuous or discontinuous, depending on the ratio of the sphere radius to the capillary length, which is discussed in Sec.\,\ref{sec_transitions}. Both the sphere-planar liquid surface and sphere-sphere capillary forces are evaluated and discussed in Sec.\,\ref{sec_forces}. In Sec.\,\ref{sec_experiment}, we compare our theoretical predictions with the experiment, and we close the paper with a short summary in Sec.\,\ref{sec_summary}.
\section{Liquid bridge between a sphere and a planar liquid surface \label{sec_sphereplane}}
The initial level of the planar liquid surface is assumed to be at $z=0$, see inset in Fig.\,\ref{fig_sp2}. The chain is composed of spheres of radius $R$ that touch one another and are aligned along the $z$-axis. The system is assumed to have cylindrical symmetry around the chain axis, and the radius of the container is $r_\un{max}$. Upon pulling out the chain, the sphere-planar liquid surface bridge emerges, and, because the volumes of both the liquid and the spheres are fixed, the liquid level decreases from $z=0$ to $z=z_\un{min}<0$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \tw]{Fig2.eps}
\caption{Schematic shape of a sphere-planar liquid surface bridge formed while a beaded chain is pulled out of a liquid bath of size $r_\un{max}$; the inset shows the initial flat configuration of the liquid surface. The chain consists of touching spherical beads of radius $R$ aligned along the chain axis. The shape of the bridge is cylindrically symmetric and described by a function $r(z)$. The maximum level of the liquid meniscus is $z_1$, and the minimum is $z_\un{min}$. Upon increasing the height $h$, the characteristic angle $\beta$ decreases. The system is placed in a gravitational field $\vec{g}$. \label{fig_sp2}}
\end{center}
\end{figure}
The energy of the sphere-planar liquid surface bridge, as compared to the initial planar configuration of the liquid surface, is a sum of capillary and gravitational terms
\begin{align} \label{eq_functional}
\begin{split}
E_\un{sp}[r(z)] &= 2 \pi \int_{z_\un{min}}^{z_1} \dd z \Bigg[ \gamma \, r(z) \sqrt{1+r'(z)^2} \\
& \hspace{2.5cm} + \frac{1}{2} \rho_\un{l} g \, z \Big(r(z)^2-r_1(z)^2 \Big)\Bigg] \\
& - \pi r_\un{max}^2 \gamma + 2 \pi R (h-z_1) (\gamma_\un{sg}-\gamma_\un{sl}) \, ,
\end{split}
\end{align}
with the constant liquid volume constraint
\begin{align} \label{eq_constraint}
\begin{split}
\Delta V[r(z)] = & \pi \int_{z_\un{min}}^{z_1} \dd z \, r(z)^2 + \frac{\pi}{3}(h-z_1)^2(3R-h+z_1) \\
& - \pi r_\un{max}^2 |z_\un{min}| = 0 \, .
\end{split}
\end{align}
Parameters $\gamma$, $\gamma_\un{sg}$, $\gamma_\un{sl}$ are the liquid-gas, sphere-gas, and sphere-liquid surface tension coefficients, respectively. The density of the liquid is $\rho_\un{l}$, $g = 9.81 \, \un{m/s^2}$ is the gravitational acceleration, and $r=r_1(z)$ describes the surface of the spheres already pulled out above the planar liquid surface. The quantity $\Delta V$ is the difference between the volume corresponding to the sphere-planar liquid surface bridge configuration and the one corresponding to the initial flat surface configuration. Note that in both configurations, the volume of the spheres is taken into account.
We assume the contact angle $\theta=0$, i.e., the bridge meniscus is tangential to the sphere at $z=z_1$. This approximates experimental situations in which very small values of the contact angle are observed. From Young's equation, one has
\beq
\gamma_\un{sg}-\gamma_\un{sl} = \gamma \cos \theta = \gamma \, ,
\eeq
and the last term in Eq.\,(\ref{eq_functional}) describing the energy of the sphere covered with the thin liquid film ($z>z_1$) reduces to $2 \pi R (h-z_1) \gamma$.
For a given height $h$, the equilibrium profile $r(z)=r_\un{eq}(z)$ minimizes the functional $E_\un{sp}[r(z)]$, Eq.\,(\ref{eq_functional}), under the constant volume constraint
\beq
0 = \left. \frac{\delta \Big(E_\un{sp}[r(z)] - \Delta p \, \Delta V[r(z)] \Big)}{\delta r(z)} \right|_{r(z)=r_\un{eq}(z)}\, ,
\eeq
which gives the equation for the shape of the interface
\begin{align} \label{eq_shape}
\frac{1}{r_\un{eq} (1+r'_\un{eq}(z)^2)^{1/2}} - \frac{r''_\un{eq}(z)}{(1+r'_\un{eq}(z)^2)^{3/2}} = \frac{\Delta p}{\gamma} - \frac{z}{\lambda^2} \, .
\end{align}
The expression on the left-hand side of the above equation represents twice the mean curvature, and $\Delta p$ is the Lagrange multiplier, which equals the difference between the pressures of the inner liquid phase and the outer gas phase (the Laplace pressure). If the hydrostatic pressure corresponding to height $z$, i.e., $z \, \rho_\un{l} \, g$, is small compared to the Laplace pressure $\Delta p$, then the liquid meniscus forms a constant mean curvature surface \cite{Langbein2002,Boucher1980} and can be described analytically by elliptic integrals \cite{Kralchevsky2001,Honschoten2010}. In our analysis we take into account the presence of the gravitational field, and then the shape of the bridge can be determined only numerically.
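For completeness, a sketch of the variation leading to the shape equation (boundary terms and the $r_1$-dependent pieces, which do not involve $r(z)$, are omitted): with the integrand per unit azimuthal angle $f = \gamma r\sqrt{1+r'^2} + \tfrac12\rho_\un{l} g z\, r^2 - \tfrac12\Delta p\, r^2$, the Euler-Lagrange equation reads

```latex
\begin{align*}
0 &= \frac{\partial f}{\partial r} - \frac{\dd}{\dd z}\frac{\partial f}{\partial r'}
   = \gamma\sqrt{1+r'^2} + \rho_\un{l} g z\, r - \Delta p\, r
     - \gamma\,\frac{\dd}{\dd z}\left(\frac{r r'}{\sqrt{1+r'^2}}\right) \\
  &= \gamma\left[\frac{1}{\sqrt{1+r'^2}} - \frac{r r''}{(1+r'^2)^{3/2}}\right]
     + \rho_\un{l} g z\, r - \Delta p\, r \, ,
\end{align*}
```

and dividing by $\gamma r$ recovers Eq.\,(\ref{eq_shape}) with $\lambda^2=\gamma/(\rho_\un{l} g)$.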
\subsection{Equilibrium and metastable states}
For large containers, $r_\un{max} \gg \lambda, R$, one can consider the limiting case of an infinite system, $r_\un{max} \to \infty$. In this case, there is no fixed volume constraint, and thus the pressure difference vanishes, $\Delta p =0$.
The equilibrium shape of the interface $r_{0}(z)$ fulfills the equation
\begin{align} \label{eq_shape0}
\frac{1}{r_0 (1+r_0'(z)^2)^{1/2}} - \frac{r_0''(z)}{(1+r_0'(z)^2)^{3/2}} = - \frac{z}{\lambda^2} \, .
\end{align}
For contact angle $\theta=0$, the boundary conditions at the sphere, $z_1=h-R(1+\cos \beta)$, for a given angle $\beta$, Fig.\,\ref{fig_sp2}, are
\begin{align}
\begin{split}
r_0(z_1) &= R \sin \beta \, , \\
r_0'(z_1) & = \cot \beta \, . \\
\end{split}
\end{align}
One might expect that for small spheres gravity is not significant, so that the term $-z/\lambda^2$ on the rhs of Eq.\,(\ref{eq_shape0}) can be ignored. The equilibrium shape would then be the catenoid $r_0(z)=w \cosh\big((z-z_0)/w\big)$, with two integration constants $w$ and $z_0$. Note that for angles $\beta<\pi/2$, the bridge has its minimal width $w=r_0(z_0)$ at $z=z_0$. The slope of this interface at $z=0$ equals $r_0'(0)=-\sinh (z_0/w)$, and because $w, z_0>0$ are finite, it cannot be infinite. Hence the condition that both the position and the slope of the interface diverge at $z=0$ cannot be satisfied, and the catenoid is not an acceptable equilibrium shape of the bridge for our considerations. It turns out that the term $-z/\lambda^2$ in Eq.\,(\ref{eq_shape0}) ensures that $r_0'(0) \to -\infty$, and it cannot be neglected.
The procedure of numerical integration of Eq.\,(\ref{eq_shape0}) is described in Appendix\,\ref{app_shape}. It turns out that there exists a particular value of the height, $h=h_\un{sp}$ (the spinodal height), such that for $h<h_\un{sp}$ there exist two solutions of Eq.\,(\ref{eq_shape0}), corresponding to different angles $\beta$, Fig.\,\ref{fig_eqmet}. On the other hand, for large heights $h> h_\un{sp}$, the solution of Eq.\,(\ref{eq_shape0}) ceases to exist. We checked that when two solutions exist ($h<h_\un{sp}$), the solution corresponding to the larger value of the angle $\beta$ always has lower energy than the one corresponding to the smaller value of $\beta$, Fig.\,\ref{fig_wykb}. Hence, the solution with larger $\beta$ describes the equilibrium capillary bridge, while the one with smaller $\beta$ describes the metastable capillary bridge. For the equilibrium solution, the angle $\beta$ is a decreasing function of $h$, and for the metastable solution it is an increasing function of $h$, Fig.\,\ref{fig_wykb}.
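The downward integration can be sketched as follows (an illustrative shooting-type setup, not the actual procedure of Appendix\,\ref{app_shape}; the values of $h$ and $\beta$ are hypothetical): starting from the tangential-contact conditions on the sphere, the profile equation with $\Delta p=0$ is integrated towards $z=0$ until the interface flattens out, i.e., $r_0'\to-\infty$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative downward integration of the profile equation with Delta p = 0:
#   r'' = (1 + r'^2)/r + (z/lam^2) * (1 + r'^2)^(3/2).
# All lengths in metres; h and beta are hypothetical values, not fitted data.
R, lam = 2e-3, 1.45e-3                  # sphere radius, capillary length
h, beta = 1.0e-3, np.deg2rad(150.0)

z1 = h - R * (1.0 + np.cos(beta))       # contact-line height z1 = h - R(1 + cos beta)
r1 = R * np.sin(beta)                   # r0(z1) = R sin(beta)
rp1 = 1.0 / np.tan(beta)                # r0'(z1) = cot(beta), tangential contact

def rhs(z, y):
    r, rp = y
    return [rp, (1.0 + rp**2) / r + (z / lam**2) * (1.0 + rp**2) ** 1.5]

def flattening(z, y):                   # stop once the slope steepens to r' = -30
    return y[1] + 30.0
flattening.terminal = True

sol = solve_ivp(rhs, (z1, 1e-7), [r1, rp1], events=flattening, max_step=z1 / 200)
# The profile widens (r grows) and steepens (r' decreases) towards z = 0.
r_end, rp_end = sol.y[0, -1], sol.y[1, -1]
```

Scanning $\beta$ at fixed $h$ in such a setup is what reveals the two solution branches and the spinodal height $h_\un{sp}$.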
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig3.eps}
\caption{Shapes of equilibrium (full line) and metastable (dashed line) sphere-planar liquid surface bridges for height $h=175 \, \un{\mu m}$, radius of spheres $R=30 \, \un{\mu m}$, and capillary length $\lambda=1.45 \, \un{mm}$. \label{fig_eqmet}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig4.eps}
\caption{The plot of angle $\beta$ and the energy of a sphere-planar liquid surface bridge $E_\un{sp}$ (inset) as a function of height $h$ for equilibrium (full line) and metastable (dashed line) solutions. The sphere radius $R=30 \, \un{\mu m}$, and the capillary length $\lambda=1.45 \, \un{mm}$. \label{fig_wykb}}
\end{center}
\end{figure}
\section{Continuous and discontinuous transitions \label{sec_transitions}}
We note that some of the solutions of Eq.\,(\ref{eq_shape0}) are non-physical: when the presence of the lower neighbouring sphere is taken into account, the liquid-gas interface crosses this sphere, i.e., there exists $0<z<z_1$ for which $r_0(z)<r_1(z)$. In order to track the distance between the interface and the neighbouring lower sphere, the parameter
\beq
d_2(h) = \min_{0<z<z_1} \sqrt{r_0(z)^2+(z-(h-3R))^2}-R
\eeq
is introduced. It turns out that for $R<\lambda/2$, this distance is always positive for equilibrium bridges, while for $R>\lambda/2$, it reaches zero at a certain height $h_\un{tr}<h_\un{sp}$, Fig.\,\ref{fig_wykd2RR}.
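Numerically, $d_2(h)$ is just a minimum over the sampled profile; a minimal sketch with purely hypothetical numbers (the lower sphere is centred at $z=h-3R$):

```python
import numpy as np

# Discrete version of d2(h): minimal distance between a sampled profile r0(z)
# and the surface of the lower neighbouring sphere centred at z = h - 3R.
def gap_d2(z, r0, h, R):
    return np.min(np.hypot(r0, z - (h - 3.0 * R))) - R

# Hypothetical numbers, for illustration only.
R, h = 2e-3, 7e-3
z = np.linspace(0.0, h - 2.0 * R, 301)
zc = h - 3.0 * R                          # centre of the lower sphere
touching = R + 10.0 * np.abs(z - zc)      # profile grazing the sphere's equator
clear = np.full_like(z, 5.0 * R)          # profile staying far from the sphere
print(gap_d2(z, touching, h, R))          # ~0: the interface touches the sphere
print(gap_d2(z, clear, h, R))             # > 0: physically admissible solution
```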
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig5.eps}
\caption{The distance $d_2(h)$ for equilibrium bridges (full lines) and metastable bridges (dashed lines), for spheres with radius $R=30 \, \un{\mu m}<\lambda/2$ (green), and $R=2 \, \un{mm} >\lambda/2$ (orange). The capillary length equals $\lambda=1.45 \, \un{mm}$. \label{fig_wykd2RR}}
\end{center}
\end{figure}
Thus, for large spheres ($R>\lambda/2$), a continuous morphological transition takes place at the height $h_\un{tr}$ for which $d_2(h_\un{tr})=0$. The sphere-planar liquid surface bridge transforms continuously into (1) the bridge between two neighbouring spheres and (2) the bridge between the lower sphere and the planar liquid surface, ESI Animation 1. On the contrary, for small spheres ($R<\lambda/2$), the morphological transition is discontinuous. It takes place at the height $h_\un{tr}=h_\un{sp}$ at which the sphere-planar liquid surface bridge ceases to exist, ESI Animation 2. At this height, the sphere-sphere bridge and the bridge connecting the lower sphere with the planar liquid surface form discontinuously. Thus, in the $(R,h)$ space, the particular value of the radius $R=\lambda/2$ corresponds to a tricritical point $(R_\un{tric}=0.5 \lambda,h_\un{tric}=1.63 \lambda)$, i.e., the point where the line of discontinuous morphological phase transitions meets the line of continuous morphological phase transitions \cite{Yeomans1992}, Fig.\,\ref{fig_wtricrit}. We note that during the process of pulling the chain out of the liquid, the above transition takes place periodically, with period $\Delta h =2R$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig6.eps}
\caption{Plot of the height $h_\un{tr}$, at which the morphological phase transition takes place, as a function of radius of the spheres $R$ forming a chain. For $R<\lambda/2$, the transition is discontinuous, and for $R>\lambda/2$, the transition is continuous. The value $R=\lambda/2$ determines the tricritical point $(R_\un{tric}=0.5 \lambda,h_\un{tric}=1.63 \lambda)$. \label{fig_wtricrit}}
\end{center}
\end{figure}
The shape of the liquid bridge between two adjacent spheres that is formed in the continuous morphological transition is described by Eq.\,(\ref{eq_shape0}). Once this bridge is formed, its volume and shape remain unchanged; there is no liquid flow along the surface of the spheres \cite{Bayramli1987,Adams2002,Lian2016}. On the other hand, in the case of the sphere-sphere bridge formed in the discontinuous transition, we assume that its shape minimizes the surface free energy, so that, also in this case, the shape of the bridge is described by Eq.\,(\ref{eq_shape0}). The volume of the liquid bridge between two adjacent spheres is determined at the transition. It is a function of the radius of the spheres $R$, and does not depend on $h$, Fig.\,\ref{fig_spsp_volumes}. In situations in which the velocity of the chain being pulled out of the liquid cannot be neglected, the volume of the bridge depends on the velocity, Fig.\,\ref{fig_Exp2}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig7.eps}
\caption{The volume of the sphere-sphere bridge $V_\un{tr}$ formed during the morphological phase transitions as a function of the radius of the spheres $R$. The radius $R=\lambda/2$, where $\lambda$ is a capillary length, corresponds to the tricritical point. The inset shows the capillary force $\vec{F}_2$ acting upward on the lower sphere. \label{fig_spsp_volumes}}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig8.eps}
\caption{Two sequences of photos illustrating the variations of the shape of the sphere-planar liquid surface bridge which take place in the process of pulling the chain of spheres out of the liquid. The upper panel corresponds to the chain vertical velocity $v=0.02 \, \un{mm/s}$, and the lower panel corresponds to $v=2 \, \un{mm/s}$. In both cases, $R=2 \, \un{mm}$. Note that $R>\lambda/2 = 0.725 \, \un{mm}$, and we observe a continuous morphological transition in which the sphere-sphere bridge is formed. Grey circles with yellow edges were added to mark both the shape and the position of the spheres. \label{fig_Exp2}}
\end{center}
\end{figure}
\section{Capillary forces \label{sec_forces}}
During the process of pulling the chain of spheres out from the liquid bath, i.e., for increasing values of parameter $h$, the shape of the bridge undergoes modifications, which induces modifications of the capillary force acting downward on the chain. The capillary force $\vec{F}_\un{sp}(h) = F_\un{sp}(h) \, \vec{e}_z$, where $\vec{e}_z$ denotes the unit vector directed upwards, can be calculated (see Appendix \ref{app_force}) on the basis of the surface free energy in Eq.\,(\ref{eq_functional})
\begin{align} \label{eq_Fsp}
\begin{split}
F_\un{sp}(h) = & - \frac{{\rm d}E_\un{sp}[r_0(z)]}{{\rm d} h} \\
= & - 2 \pi R \gamma \sin^2 \beta - \rho_\un{l}g z_1 \, \pi R^2 \sin^2 \beta \\
&+ \rho_\un{l} g \int_0^{z_1} \dd z \, \pi r_1(z)^2 \, .
\end{split}
\end{align}
The first term describes the contribution to the total capillary force acting at the three-phase (liquid-gas-sphere) contact line. The second term is the product of the hydrostatic pressure at the height $z_1$ of the contact line and the cross-section area of the sphere at $z=z_1$. The third term corresponds to the buoyancy force acting upwards.
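For orientation, the magnitude of the contact-line term can be estimated with the material parameters of Sec.\,\ref{sec_experiment} (using them here is an assumption); at $\beta=\pi/2$ it already corresponds to a few tens of milligrams on a microscale, which is the scale of the mass differences measured later:

```python
import math

# Largest possible magnitude of the contact-line term 2*pi*R*gamma*sin^2(beta),
# attained at beta = pi/2, for the parameters quoted in the experimental section.
gamma, R, g = 19.7e-3, 2e-3, 9.81        # N/m, m, m/s^2
F_line = 2.0 * math.pi * R * gamma
print(f"{F_line * 1e3:.3f} mN ~ {F_line / g * 1e6:.1f} mg on the microscale")
```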
In the case of the sphere-sphere bridge, one can distinguish two forces \cite{Adams2002}, $\vec{F}_1 = F_1 \, \vec{e}_z$ and $\vec{F}_2 = F_2 \, \vec{e}_z$. Force $\vec{F}_1$ acts on the upper sphere, is directed downward, and
\begin{align}
\begin{split}
F_1 = & \, - 2 \pi R \gamma \sin^2 \beta_1 - \rho_\un{l}g \, z_1 \pi R^2 \sin^2 \beta_1 \\
& +\rho_\un{l} g \int_{h-2R}^{z_1} \dd z \, \pi r_1(z)^2 \, .
\end{split}
\end{align}
Force $\vec{F}_2$ acts on the lower sphere, is directed upward, and
\begin{align}
\begin{split}
F_2 =& \, 2 \pi R \gamma \sin^2 \beta_2 + \rho_\un{l}g \, z_2 \pi R^2 \sin^2 \beta_2 \\
& +\rho_\un{l} g \int_{z_2}^{h-2R} \dd z \, \pi r_1(z)^2 \, ,
\end{split}
\end{align}
where $\beta_1$ and $\beta_2$ are the angles between the vertical direction and the radii directed to the three-phase contact lines, located at heights $z_1$ and $z_2$, respectively (see inset in Fig.\,\ref{fig_spsp_volumes}). Both forces have the same structure as the capillary force acting between the sphere and the planar liquid surface, Eq.\,(\ref{eq_Fsp}). For the continuous morphological transition ($R>\lambda/2$), one can check that
\beq
F_\un{sp}(h_\un{tr}) = F_\un{sp}(h_\un{tr}-2R)+F_1+F_2 \, .
\eeq
On the other hand, the sum of the forces for the sphere-sphere bridge case equals the weight of the liquid bridge (see Appendix \ref{app_volume})
\beq
F_1+F_2 = -\rho_l g V_\un{tr} \, ,
\eeq
where $V_\un{tr}$ denotes the volume of the sphere-sphere bridge. Finally, we obtain
\beq \label{Ftr}
F_\un{sp}(h_\un{tr}) = F_\un{sp}(h_\un{tr}-2R)-\rho_\un{l} g V_\un{tr} \, .
\eeq
Thus, in the case $R>\lambda/2$ (and for very small velocities), one can obtain the volume of the sphere-sphere bridge from the capillary force measurements, Fig.\,\ref{fig_FspR2}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig9.eps}
\caption{Capillary force in the sphere-planar liquid surface bridge $F_\un{sp}$ as a function of height $h$ for equilibrium (solid line) and metastable (dashed line) bridges. The black solid line joins the morphological transition point $\un{Tr}$, of height $h=h_\un{tr}$, with the point $\un{A}$, of height $h_\un{tr}-2R$, where the new period begins. The spinodal point $\un{Sp}$ denotes the height above which, in the system without the second lower sphere, the bridge ceases to exist. The radius of the sphere is $R=2 \, \un{mm} > \lambda/2$, where the capillary length equals $\lambda=1.45 \, \un{mm}$. In the case of the continuous transition, the difference $F_\un{sp}(h_\un{tr}-2R)-F_\un{sp}(h_\un{tr})$ equals the weight of the liquid bridge. \label{fig_FspR2}}
\end{center}
\end{figure}
In the case of discontinuous transitions (small radii), the weight of the sphere-sphere bridge does not compensate for the difference between the sphere-planar liquid surface forces that appears at the transition,
\beq
F_\un{sp}(h_\un{tr}) < F_\un{sp}(h_\un{tr}-2R)-\rho_\un{l} g V_\un{tr} \, ,
\eeq
and one observes a discontinuity in the force acting on the chain. For example, for $R=30 \, \un{\mu m}$ and $\lambda=1.45 \, \un{mm}$, the dimensionless weight of the sphere-sphere bridge, $\rho_\un{l} g V_\un{tr}/(2 \pi R \gamma) = 3 \cdot 10^{-4}$, is three orders of magnitude smaller than the difference between the sphere-planar liquid surface forces, Fig.\,\ref{fig_FspR003}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig10.eps}
\caption{Capillary force in the sphere-planar liquid surface bridge $F_\un{sp}$ as a function of height $h$ for equilibrium (solid line) and metastable (dashed line) bridges. The black solid line joins the morphological transition point $\un{Tr}$, of height $h=h_\un{tr}$, with the point $\un{A}$, of height $h_\un{tr}-2R$, where the new period begins. The dimensionless weight of the sphere-sphere bridge, $\rho_\un{l} g V_\un{tr}/(2 \pi R \gamma) = 3 \cdot 10^{-4}$, does not compensate for the difference $F_\un{sp}(h_\un{tr}-2R)-F_\un{sp}(h_\un{tr})$, and one observes a discontinuity in the force acting on the chain. The radius of the sphere is $R=30 \, \un{\mu m} < \lambda/2$, where the capillary length equals $\lambda=1.45 \, \un{mm}$. \label{fig_FspR003}}
\end{center}
\end{figure}
We note that the absolute value of the capillary force acting upward (see inset in Fig.\,\ref{fig_spsp_volumes}) is smaller than the maximum of the absolute value of the sphere-planar liquid surface force, $|F_2|<\max_{h} |F_\un{sp}|$. Thus, when a chain is formed by assembling spheres from a suspension (as, for example, in the assembly route described in reference \cite{Rozynek2016}), one needs, on top of the capillary force, an additional attractive force acting between the spheres (e.g., the dipolar force).
\section{Comparison with experiment \label{sec_experiment}}
To check the validity of our model, we prepared a chain of spheres of radius $R=2 \, \un{mm}$ ($R>\lambda/2$, where $\lambda=1.45 \, \un{mm}$). The adjacent spheres in the chain were glued together. The chain was pulled out of a $10 \, \un{cSt}$ silicone oil bath very slowly, with $v=0.02 \, \un{mm/s}$, to ensure quasistatic conditions. A microscale was used as a very precise dynamometer, Fig.\,\ref{fig_Exp1}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig11.eps}
\caption{The experimental setup: (a) A chain of glued steel spheres of radius $R=2 \, \un{mm}$ is pulled out of $10 \, \un{cSt}$ silicone oil by a stepper motor at constant velocity. The weight difference is measured by a microscale during the experiment; (b) Close-up of the spheres, the silicone oil bath, and the camera recording the experiment. \label{fig_Exp1}}
\end{center}
\end{figure}
For a given height $h$, the weight on the microscale, $m(h) \, g$, was equal to the weight of the silicone oil reduced by: the weight of the liquid in the sphere-sphere bridges, the absolute value of the sphere-planar liquid surface capillary force $|F_\un{sp}(h)|$, and the buoyancy force. During the process of pulling out the spheres, the buoyancy force decreases by an amount proportional to the volume of the spheres drawn above the $z=0$ level
\beq
\Delta F_\un{b} (h) = - \rho_\un{l} \, g \int_0^{h} \dd z \, \pi r_1(z)^2 \, .
\eeq
Thus, the mass difference $\Delta m(\Delta h) = m(h)- m(h_\un{tr}-2R)$, where $\Delta h = h-(h_\un{tr}-2R)$, equals
\begin{align}
\begin{split}
\Delta m(\Delta h)
= \frac{1}{g} \Big(F_\un{sp}(h)-F_\un{sp}(h_\un{tr}-2R)\Big) - \rho_\un{l} \int_{h_\un{tr}-2R}^{h} \dd z \, \pi r_1(z)^2 \, .
\end{split}
\end{align}
After one period, using Eq.\,(\ref{Ftr}), we get
\begin{align}
\Delta m(2R) = -\rho_\un{l} V_\un{tr} - \rho_\un{l} \frac{4}{3} \pi R^3 \, .
\end{align}
The first term is the mass of one sphere-sphere liquid bridge, and the second term is the mass of the liquid displaced by one sphere. The comparison of the experimental data with the theoretical curve is shown in Fig.\,\ref{fig_thexp}. We notice that the agreement is particularly good for $\Delta h/2R < 0.5$.
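The quoted numbers can be cross-checked directly. A sketch of the bookkeeping in Python; the bridge mass and volume obtained below are inferred from the measured $\Delta m(2R)=-33.27\,\un{mg}$, not independently reported values:

```python
import math

# Split the measured Delta m(2R) into the liquid displaced by one sphere and
# the mass of one sphere-sphere bridge: Delta m(2R) = -rho_l*(V_tr + 4/3 pi R^3).
rho_l, R = 950.0, 2e-3                               # kg/m^3, m
dm_2R = -33.27e-6                                    # kg, measured value

m_displaced = rho_l * (4.0 / 3.0) * math.pi * R**3   # liquid displaced, ~31.83 mg
m_bridge = -dm_2R - m_displaced                      # rho_l * V_tr, ~1.44 mg
V_tr = m_bridge / rho_l                              # inferred bridge volume, ~1.5e-9 m^3
print(f"{m_displaced*1e6:.2f} mg displaced, {m_bridge*1e6:.2f} mg in the bridge")
```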
\begin{figure}[htb]
\begin{center}
\includegraphics[width= \tw]{Fig12.eps}
\caption{Comparison of theoretical predictions and experimental measurements of the mass difference $\Delta m$ during the process of pulling $R=2 \, \un{mm}$ steel spheres out of a $10 \, \un{cSt}$ silicone oil bath by the height $\Delta h = h-(h_\un{tr}-2R)$. For $\Delta h = 2R$, the absolute value of the mass difference equals the sum of the mass of one sphere-sphere liquid bridge and the mass of the liquid displaced by one sphere, $\Delta m (2R) = - 33.27 \, \un{mg}$. Measurements were averaged over four spheres; for the theoretical calculations, the surface tension was taken to be $\gamma=19.7 \, \un{mN/m}$, the oil density $\rho_\un{l}=950 \, \un{kg/m^3}$, and the maximum radius of the system in the numerical calculations $r_\un{max}=10 \, \un{mm}$. \label{fig_thexp}}
\end{center}
\end{figure}
\section{Summary \label{sec_summary}}
We described theoretically the mechanism of capillary bridge formation on a beaded chain pulled out from a liquid. Two types of capillary bridges come into play. The first type is the bridge connecting a sphere with the planar liquid surface; in the process of pulling the chain of spheres out of the liquid, this bridge appears first. Then, when the next sphere is pulled out of the liquid, this bridge transforms into the bridge between the adjacent spheres and the bridge connecting the lower sphere with the surface of the liquid. We showed that this morphological transition changes its order depending on the ratio of the sphere radius to the capillary length, $R/\lambda$. For $R/\lambda > 1/2$, it is continuous, and for $R/\lambda < 1/2$, it is discontinuous, with the particular value $R/\lambda = 1/2$ corresponding to the tricritical point. The shape of the meniscus of the bridge is given by the solution of Eq.\,(\ref{eq_shape0}), in which the mean curvature on its lhs depends on the local height of the meniscus. There are two solutions of this equation, and the one corresponding to the larger value of $\beta$ always has the smaller surface free energy. The metastable bridge corresponding to the smaller angle $\beta$ could be observed in the reverse experiment, in which a sphere is pushed towards a flat liquid surface.
Besides the shapes of the bridges, the accompanying capillary forces acting in the system were also calculated. It turns out that the constant capillary force binding two adjacent spheres is not sufficiently strong to prevent the breaking of the chain in the process of pulling it out of the liquid: it is smaller than the maximal value of the sphere-planar liquid surface capillary force acting downwards, which is attained at a height smaller than the height of the morphological phase transition. Thus, an additional attractive force acting between the adjacent spheres is needed to ensure the stability of the chain. This can be the dipolar force \cite{Rozynek2016}; in the experiment reported here, the spheres were simply glued together.
We compared our theoretical predictions with experimental data corresponding to the case $R>\lambda/2$. The observed morphological transition is continuous, as expected on theoretical grounds, and the plot of the theoretically predicted capillary force fits well the experimental data, in particular for larger $R/\lambda$ values.
In our theoretical model, we assumed zero contact angle. Relaxing this constraint can lead to stabilization of the bridges that are metastable in the $\theta=0$ case. Such a change of stability would make the phase diagram and the bridge-formation scenarios more complicated. This is left for further analysis.
Summarizing, we have explained theoretically the mechanisms of capillary bridge formation in the process of pulling a chain of spheres out of a liquid bath. We predicted the existence of a morphological phase transition between two types of bridges, which can be continuous or discontinuous. The transition is continuous when the diameter of the spheres is larger than the capillary length. In the opposite case, the transition is discontinuous, and so is the capillary force acting on the chain. Thus, the capillary length sets the lower limit for the diameter of the spheres for which pulling a beaded chain out of a liquid dispersion is a smooth process.
\subsection{Energy Estimate}
As usual, we denote by
\begin{equation}\label{2.1a}
\eta^*_\delta{}={}\frac{m^2}{2\rho}{}+{}h_\delta(\rho),\qquad
q^*_\delta{}={}\frac{m^3}{2\rho^2}{}+{}m h'_\delta(\rho),
\end{equation}
the mechanical energy pair of system \eqref{eq:NS} with $\e=0$,
where $h_\delta(\rho):=\rho e_\delta(\rho)$ for the internal energy $e_\delta(\rho):=\int_0^\rho\frac{p_\delta(s)}{s^2}\,ds$.
Note that $(\bar{\rho},0)$ is the only constant equilibrium state of the system.
For the mechanical energy pair $(\eta^*_\delta, q^*_\delta)$ in \eqref{2.1a},
we denote
\begin{equation}
\label{E}
\bar{\eta}^*_\delta(\rho,m)
{}={}\eta^*_\delta(\rho,m){}-{}\eta^*_\delta(\bar{\rho},0){}-{}(\eta^*_\delta)_\rho(\bar{\rho},0)(\rho-\bar{\rho}),
\end{equation}
which is the total energy relative to the constant equilibrium state $(\bar{\rho},0)$.
\begin{proposition}\label{2.1}
Let
$$
E_0:=\sup_{\e>0}\int_a^b \bar{\eta}^*_\delta(\rho_0^\e(r), m_0^\e(r))r^{n-1}dr <\infty.
$$
Then, for the viscosity approximate solution $(\rho, m)=(\rho, \rho u)$ determined by Theorem {\rm \ref{theorem:exist}}
for each fixed $\e>0$,
we have
\begin{eqnarray}
&&
\sup_{t\in[0,T]}\int_a^b \big(\frac{1}{2}\rho u^2
+ \bar{h}_\delta(\rho, \bar{\rho})\big) r^{n-1} dr
\notag\\
&&\qquad
+{}\e\int_{Q_T}\Big( h''_\delta(\rho)|\rho_r|^2{}+{}\rho|u_r|^2+(n-1)\frac{\rho u^2}{r^2}\Big)\,r^{n-1}drdt{}
{}\le {}E_0,
\label{Energy_est}
\end{eqnarray}
where
\begin{equation}\label{Energy_est-a}
\bar{h}_\delta(\rho, \bar{\rho})=h_\delta(\rho)-h_\delta(\bar{\rho})-h'_\delta(\bar{\rho})(\rho-\bar{\rho})
\ge c_1\rho(\rho^\theta-\bar{\rho}^\theta)^2, \qquad \theta=\frac{\gamma-1}{2},
\end{equation}
for some constant $c_1=c_1(\bar{\rho},\gamma)>0$.
Furthermore, for any $t\in[0,T]$,
the measure of the set $\{\rho(t,\cdot)>\frac{3}{2}\bar{\rho}\}$ is less than $c_2E_0$
for some $c_2=c_2(\bar{\rho},\gamma)>0$.
\end{proposition}
\begin{proof}
We multiply the first equation in \eqref{eq:NS} by $(\bar{\eta}^*_\delta)_\rho r^{n-1}$ and
the second equation in \eqref{eq:NS} by $(\bar{\eta}^*_\delta)_m r^{n-1}$,
and then add them up to obtain
\begin{eqnarray*}
\label{eq:energy-1}
&&\big(\bar{\eta}^*_\delta r^{n-1}\big)_t
{}+{}\big((q^*_\delta-(\eta^*_\delta)_\rho(\bar{\rho},0)m)r^{n-1}\big)_r{}\notag\\
&&= \e r^{n-1}\big( \rho_{rr}{}+{}\frac{n-1}{r}\rho_r\big)\big((\eta^*_\delta)_\rho-(\eta^*_\delta)_\rho(\bar{\rho},0)\big)
{}+{}\e r^{n-1}\big(m_r+\frac{n-1}{r}m\big)_r(\eta^*_\delta)_m,
\end{eqnarray*}
that is,
\begin{eqnarray}
\label{eq:energy-2}
&&(\bar{\eta}^*_\delta r^{n-1})_t{}+{}\big((q^*_\delta-(\eta^*_\delta)_\rho(\bar{\rho},0)m)r^{n-1}\big)_r{}
+{} (n-1)\e m (\eta^*_\delta)_mr^{n-3}\notag\\[2mm]
&&= \e ( \rho_{r}r^{n-1})_r\big((\eta^*_\delta)_\rho-(\eta^*_\delta)_\rho(\bar{\rho},0)\big)
+\e (m_rr^{n-1})_r (\eta^*_\delta)_m.
\end{eqnarray}
Integrating both sides of \eqref{eq:energy-2} over $Q_t$ for any $t\in (0,T]$ and
using the boundary conditions \eqref{eq:bc}, we have
\begin{eqnarray*}
&&
\int_a^b \bar{\eta}^*_\delta r^{n-1}\,dr
{}+{}\e\int_{Q_t}\Big(
(\rho_r,m_r)\grad^2 \bar{\eta}^*_\delta(\rho_r,m_r)^\top{}+{}(n-1)\frac{m^2}{\rho r^2}\Big)
r^{n-1}\,drdt{}\leq{}E_0. \qquad \label{Energy_est-b}
\end{eqnarray*}
Note that $(\rho_r,m_r)\grad^2\bar{\eta}^*_\delta(\rho_r,m_r)^\top$ is a positive definite
quadratic form in $(\rho_r,m_r)$ that dominates $h''_\delta(\rho)|\rho_r|^2$ and $\rho|u_r|^2$, so that
\begin{equation}
\label{est:energy_2}
\int_a^b \bar{\eta}^*_\delta r^{n-1}\,dr
{}+{}
\e \int_{Q_T} \left( (2\delta{}+{}\kappa\gamma\rho^{\gamma-2})|\rho_r|^2{}+{}\rho|u_r|^2
+(n-1)\frac{\rho u^2}{r^2}\right)\,r^{n-1}drdt{}\leq{}E_0.
\end{equation}
Estimate \eqref{est:energy_2} also implies
\begin{equation*}
\sup_{t\in[0,T]}\int_a^b\big(\frac{1}{2}\rho u^2
+ \bar{h}_\delta(\rho,\bar{\rho})\big)r^{n-1}\,dr{}\leq{} E_0.
\end{equation*}
The function
$\bar{h}_\delta(\rho,\bar{\rho})$
is nonnegative, quadratic in $\rho-\bar{\rho}$ for $\rho$ near $\bar{\rho}$,
and grows like $\rho^{\max\{\gamma, 2\}}$
for large $\rho$.
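For the regularized pressure $p_\delta(\rho)=\kappa\rho^\gamma+\delta\rho^2$ (an explicit form consistent with the coefficient $2\delta+\kappa\gamma\rho^{\gamma-2}$ appearing in \eqref{est:energy_2}), this behavior can be read off from the direct computation:
\[
h_\delta(\rho){}={}\frac{\kappa}{\gamma-1}\rho^\gamma{}+{}\delta\rho^2,\qquad
h''_\delta(\rho){}={}\kappa\gamma\rho^{\gamma-2}{}+{}2\delta,
\]
so that, by the Taylor expansion around $\bar{\rho}$,
$\bar{h}_\delta(\rho,\bar{\rho})\geq \frac{1}{2}\big(\min_{[\bar{\rho}/2,2\bar{\rho}]}h''_\delta\big)(\rho-\bar{\rho})^2$
for $\rho\in[\bar{\rho}/2,2\bar{\rho}]$.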
In particular, there exists $c_1=c_1(\bar{\rho},\gamma)>0$ such that \eqref{Energy_est-a} holds.
Thus, for any $t\in[0,T]$,
the measure of the set $\{\rho(t,\cdot)>\frac{3}{2}\bar{\rho}\}$ is less than $c_2 E_0$
for some $c_2>0$.
\end{proof}
\medskip
With the basic energy estimate \eqref{Energy_est}, we have
\begin{lemma}\label{2.1-b}
There exists $C=C(\e, T, E_0)>0$ such that
\begin{equation} \label{est:rho_int_2_gamma}
\int_0^T \|\rho(t,\cdot)\|_{L^\infty(a,b)}^{2\max\{2,\gamma\}}\,dt{}\leq{} C.
\end{equation}
\end{lemma}
\begin{proof}
If the measure of the set $\{\rho(t,\cdot)>\frac{3}{2}\bar{\rho}\}$ is zero, then
$\frac{3}{2}\bar{\rho}$ is a uniform upper bound for $\rho(t,r)$.
Otherwise, for $r\in (a,b)$, let $r_0\in (a,b)$ be the point closest to $r$ such that
$\rho(t,r_0)=\frac{3}{2}\bar{\rho}$.
Clearly, $|r-r_0|\leq c(\bar{\rho})E_0$.
With such a choice of $r_0$, we have
\begin{eqnarray}
\label{est:rho_bounded}
&&|\rho^\gamma(t,r)-\rho^\gamma(t,r_0)|\\
&&\leq \gamma\Big|\int_{r_0}^r\rho^{\gamma-1}(t,y) \rho_y(t,y)\,dy\Big| \notag \\
&&\leq C\,\Big|\int_{r_0}^r \rho^\gamma(t,y)y^{n-1}\,dy \Big|^{\frac{1}{2}}
\Big(\int_a^b\rho^{\gamma-2}(t,y)|\rho_y(t,y)|^2y^{n-1}\,dy\Big)^{\frac{1}{2}} \notag\\
\label{est:rho_gamma_energy}
&&\leq{} C\Big(\int_a^b\rho^{\gamma-2}(t,r)|\rho_r(t,r)|^2r^{n-1}\,dr\Big)^{\frac{1}{2}}.
\end{eqnarray}
Then estimate \eqref{est:energy_2} yields
\begin{equation}
\int_0^T \|\rho(t,\cdot)\|_{L^\infty(a,b)}^{2\gamma}\,dt{}\leq{} C,
\end{equation}
where $C$ stands for a generic function of
the parameters: $\gamma,\e,\delta, T, E_0$, and $\bar{\rho}$.
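In more detail, squaring \eqref{est:rho_gamma_energy} and using
$\|\rho(t,\cdot)\|^\gamma_{L^\infty(a,b)}\leq \big(\frac{3}{2}\bar{\rho}\big)^\gamma+\sup_{r}|\rho^\gamma(t,r)-\rho^\gamma(t,r_0)|$, we find
\[
\|\rho(t,\cdot)\|^{2\gamma}_{L^\infty(a,b)}
{}\leq{} C\Big(1{}+{}\int_a^b\rho^{\gamma-2}(t,r)|\rho_r(t,r)|^2r^{n-1}\,dr\Big),
\]
and the right-hand side is integrable in $t$ over $[0,T]$ by \eqref{est:energy_2},
with integral of order $T+\e^{-1}E_0$.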
Repeating the argument with $\rho^2$ instead of $\rho^\gamma$, we conclude
\eqref{est:rho_int_2_gamma}.
\end{proof}
From now on, $C>0$ denotes a universal constant that may {\it depend on the parameter $\e>0$} in \S 2.2--\S 2.3,
while $M>0$ denotes another universal constant that, like $E_0$, is {\it independent of the parameter $\e$} from \S 3 on.
Both constants may also depend on $T>0$, $E_0$, and other parameters; we specify their dependence whenever needed.
\begin{subsection}{Maximum Principle Estimates}
Furthermore, we have
\begin{lemma}
\label{lemma:uniform}
There exists $C=C(a,T,E_0)$ such that, for any $t\in[0,T],$
\begin{equation}
\label{est:u_max}
\|u\|_{L^\infty(Q_t)}\leq{} C\big(\|u_0+R(\rho_0)\|_{L^\infty(a,b)}{}
+{} \|u_0-R(\rho_0)\|_{L^\infty(a,b)}{}+{}\|R(\rho)\|_{L^\infty(Q_t)}\big),
\end{equation}
where
\begin{equation}
\label{Riemann.invariants}
R(\rho){}={}\int_0^\rho\frac{\sqrt{p_\delta'(s)}}{s}\,ds.
\end{equation}
\end{lemma}
\begin{proof}
The characteristic speeds of system \eqref{eq:NS} without the artificial viscosity terms are
\[
\lambda_1{}={}u-\sqrt{p_\delta'(\rho)},\qquad \lambda_2{}={}u+\sqrt{p_\delta'(\rho)},
\]
and the corresponding right-eigenvectors are
\[
r_1{}={}\left[\begin{array}{c}
1\\\lambda_1 \end{array}\right],
\qquad r_2{}={}\left[\begin{array}{c}
1\\\lambda_2 \end{array}\right].
\]
The Riemann invariants $(w,z)$, defined by the conditions $\grad w\cdot r_1{}={}0$ and $\grad z\cdot r_2{}={}0$,
are given by
\[
w{}={}\frac{m}{\rho}{}+{}R(\rho),\qquad z{}={}\frac{m}{\rho}-R(\rho),
\]
with $R$ defined in \eqref{Riemann.invariants}.
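When $\delta=0$, so that $p'_\delta(\rho)=\kappa\gamma\rho^{\gamma-1}$ (using the same explicit form of $p_\delta$ as in \eqref{est:energy_2}), the integral in \eqref{Riemann.invariants} can be evaluated in closed form:
\[
R(\rho){}={}\int_0^\rho\sqrt{\kappa\gamma}\,s^{\theta-1}\,ds{}={}\frac{\sqrt{\kappa\gamma}}{\theta}\,\rho^\theta,
\qquad \theta{}={}\frac{\gamma-1}{2},
\]
which recovers the classical Riemann invariants $u\pm\frac{\sqrt{\kappa\gamma}}{\theta}\rho^\theta$ for the isentropic Euler equations.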
They are quasi-convex:
\begin{equation}
\label{quasi}
\grad^\perp w \grad^2 w (\grad^\perp w)^\top \geq0,\qquad
-\grad^\perp z \grad^2 z(\grad^\perp z)^\top\geq 0,
\end{equation}
where $\grad^2$ is the Hessian with respect to $(\rho,m)$ and $\grad^\perp{}={}(\partial_m,-\partial_\rho).$
Let us multiply the first equation in \eqref{eq:NS} by $w_{\rho}(\rho,m)$,
the second in \eqref{eq:NS}
by $w_{m}(\rho,m)$, and add them to obtain
\begin{eqnarray*}
&&w_{t}{}+{}\lambda_2 w_{r}{}+{}\frac{n-1}{r} u \sqrt{p_\delta'(\rho)}\\
&&={}-\e\big(\rho_r (w_{\rho})_r{}+{}m_r (w_m)_r\big){}+{}\e w_{rr}
+{} \frac{(n-1)\e}{r}\big(w_{r}{}-{}\frac{1}{r}mw_{m} \big),
\end{eqnarray*}
where $\lambda_2$ is as above.
Then
\begin{eqnarray*}
&&w_{t}{}+{}\big(\lambda_2-\frac{(n-1)\e}{r}\big) w_{r}{}-{}\e w_{rr}\\
&&={}-\e (\rho_r,m_r)\grad^2 w(\rho_r,m_r)^\top
{}-{} \frac{n-1}{r}u\sqrt{p_\delta'(\rho)}-{}(n-1)\e\frac{u}{r^2}.
\end{eqnarray*}
We write
$$
(\rho_r, m_r){}={}\alpha\grad w{}+{}\beta\grad^\perp w,
$$
with
\[
\alpha{}={}\frac{w_{r}}{|\grad w|^2},\qquad
\beta{}={}\frac{\rho_r w_{m}-m_r w_{\rho}}{|\grad w|^2}.
\]
Then we can further write
\begin{eqnarray}\label{RI}
&&w_{t}{}+{}\lambda w_{r}{}-{}\e w_{rr}\notag\\
&&={}-\e \beta^2 \grad^\perp w \grad^2 w(\grad^\perp w)^\top
{}-{} \frac{n-1}{r}u \sqrt{p_\delta'(\rho)}-{}(n-1)\e\frac{u}{r^2},
\end{eqnarray}
where
\begin{eqnarray*}
\lambda{}={}\lambda_2{}-{}\frac{(n-1)\e}{r}
{}+{}\frac{\e\alpha}{|\grad w|^2} \grad w\grad^2w (\grad w)^\top
{}+{}\frac{2\e \beta }{|\grad w|^2} \grad^\perp w \grad^2w(\grad w)^\top .
\end{eqnarray*}
By setting
\[
\tilde{w}(t,r)
{}={}w(t,r){}-{}(n-1)\int_0^t\Big\|\frac{\sqrt{p_\delta'(\rho(\tau,r))}u(\tau,r)}{r}
{}+{}\frac{\e u(\tau,r)}{r^2} \Big\|_{L^\infty(a,b)}\,d\tau,
\]
and using the quasi-convexity property \eqref{quasi} and the classical maximum principle
applied to the parabolic equation \eqref{RI}, we obtain
\[
\max_{Q_t}\tilde{w}
{}\leq{} \max\big\{ \max_{(a,b)}w_{0}{},{}\max_{[0,t]\times (\{a\}\cup\{b\})}\tilde{w}\big\},
\]
or
\begin{multline*}
\max_{Q_t} w{}\leq{}\max_{(a,b)} w_{0}{}+{}\|R(\rho)\|_{L^\infty(Q_t)}
{}+{}C(\bar{\rho},a)\int_0^t\Big(1+\|\rho(\tau,\cdot)\|^{\frac{1}{2}\max\{1,\gamma-1\}}_{L^\infty(a,b)}\Big)\|u(\tau,\cdot)\|_{L^\infty(a,b)}\,d\tau.
\end{multline*}
Similarly, we have
\begin{multline*}
\max_{Q_t} (-z{})\leq{}\max_{(a,b)} (-z_{0}){}+{}\|R(\rho)\|_{L^\infty(Q_t)}
{}+{}C\int_0^t\Big(1+\|\rho(\tau,\cdot)\|^{\frac{1}{2}\max\{1, \gamma-1\}}_{L^\infty(a,b)}\Big)\|u(\tau,\cdot)\|_{L^\infty(a,b)}\,d\tau.
\end{multline*}
Since $\rho\geq0$, it follows that
\begin{eqnarray}
\max_{Q_t} |u|&\leq& \max_{(a,b)} |w_{0}|{}+{}\max_{(a,b)} |z_{0}|{}+{}\|R(\rho)\|_{L^\infty(Q_t)}\notag \\
&& {}+{}C(a) \int_0^t\Big(1
+\|\rho(\tau,\cdot)\|^{\frac{1}{2}\max\{1,\gamma-1\}}_{L^\infty(a,b)}\Big)\|u(\tau,\cdot)\|_{L^\infty(a,b)}
\,d\tau.\label{est:u_max_0.1}
\end{eqnarray}
By \eqref{est:rho_int_2_gamma} and $\max\{1, \gamma-1\}<4\gamma$, we have
$$
\int_0^T \|\rho(\tau,\cdot)\|^{\frac{1}{2}\max\{1, \gamma-1\}}_{L^\infty}\,d\tau
{}\leq{}C.
$$
Then \eqref{est:u_max} follows from \eqref{est:u_max_0.1} and the Gronwall inequality.
\end{proof}
\end{subsection}
\begin{subsection}{Lower Bound on $\rho$}
\begin{lemma}\label{lemma:high-derivative-1}
There exists $C{}={}C ( \|(\rho_0,u_0)\|_{L^\infty(a,b)}, \|(\rho_0,m_0)\|_{H^1(a,b)}, \gamma)$
such that
\begin{equation}
\sup_{t\in[0,T]}\int_a^b \big(|\rho_r|^2{}+{}|m_r|^2\big)\,dr
{}+{}\int_{Q_T} \big(|\rho_{rr}|^2
{}+{}|m_{rr}|^2)\,drdt {}\leq{} C.
\label{2.10a}
\end{equation}
\end{lemma}
\begin{proof} We multiply the first equation in \eqref{eq:NS} by $\rho_{rr}$ and
the second by $m_{rr}$ to obtain
\begin{eqnarray*}
&&-\partial_t\Big(\frac{|\rho_r|^2+|m_r|^2}{2}\Big)
{}-{}\e\big(|\rho_{rr}|^2{}+{}|m_{rr}|^2\big){}+{}
(\rho_t\rho_r)_r{}+{}(m_tm_r)_r \notag \\
&&=-m_r\rho_{rr}{}-{}\frac{(n-1)}{r}m\rho_{rr}
{}-{}(\rho u^2{}+{}p_\delta)_rm_{rr}{}-{}\frac{n-1}{r}\rho u^2 m_{rr}
\notag \\
&&\quad+\frac{(n-1)\e}{r}\rho_r\rho_{rr}{}+{}\big(\frac{(n-1)\e}{r} m\big)_r m_{rr}.
\end{eqnarray*}
We integrate this over $Q_t$ to obtain
\begin{eqnarray}
&&\int_a^b \Big(\frac{|\rho_r|^2+|m_r|^2}{2}\Big)\Big|^t_0\,dr
{}+{}\e \int_{Q_t} (|\rho_{rr}|^2{}+{}|m_{rr}|^2)\,drdt \notag \\
&&{}={}
\int_{Q_t}\big(m_r\rho_{rr}{}+{}\frac{n-1}{r}m\rho_{rr}\big)\,drdt
{}+{}\int_{Q_t}(\rho u^2{}+{}p_\delta)_rm_{rr}\,drdt \notag \\
&&\quad {}+{}(n-1)\int_{Q_t}\big(\frac{\rho u^2}{r}m_{rr}
- \frac{\e}{r}\rho_r\rho_{rr}\big)\,drdt
{}-{}(n-1)\e\int_{Q_t}\big(\frac{m}{r}\big)_r m_{rr}\,drdt.\qquad\quad \label{est:higher-0}
\end{eqnarray}
We first estimate the term $\int_{Q_t}(\rho u^2{}+{}p_\delta)_rm_{rr}\,drd\tau$. Consider
\begin{eqnarray}
&&\left| \int_{Q_t} p'_\delta (\rho)\rho_rm_{rr}\,drd\tau\right|\notag \\
&&\leq \Delta \int_{Q_t}|m_{rr}|^2\,drd\tau{}
+{}C_\Delta\int_{Q_t} \big(2\delta \rho+\kappa\gamma\rho^{\gamma-1}\big)^2 |\rho_r|^2\,drd\tau \notag \\
&&\leq \Delta \int_{Q_t}|m_{rr}|^2\,drd\tau{}
+{}C_\Delta\int_0^t\Big(\big(1+\|\rho(\tau,\cdot)\|_{L^\infty}^{2\gamma}\big)\int_a^b |\rho_r|^2\,dr\Big)d\tau,
\label{est:higher-0.1}
\end{eqnarray}
where $\Delta>0$ will be chosen later.
Consider $(\rho u^2)_rm_{rr}{}=u^2\rho_r m_{rr}+ 2\rho u u_rm_{rr}$. We estimate
\begin{eqnarray*}
&&\int_{Q_t}|u^2\rho_r m_{rr}|\,drd\tau \notag\\
&&\leq \Delta\int_{Q_t} |m_{rr}|^2\,drd\tau
{}+{}
C_\Delta \int_0^t\Big( \|u(\tau,\cdot)\|^4_{L^\infty}\int_a^b |\rho_r(\tau,r)|^2\,dr\Big) d\tau \\
&&\leq \Delta\int_{Q_t} |m_{rr}|^2\,drd\tau +
C_\Delta \int_0^t\Big(\|u(\tau,\cdot)\|_{L^\infty}^4
\int_a^b h''_\delta(\rho)|\rho_r(\tau,r)|^2\,dr\Big) d\tau.
\end{eqnarray*}
Using the uniform estimates \eqref{est:u_max}, we obtain
\begin{equation}
\label{est:u-uniform-1}
\|u(\tau,\cdot)\|_{L^\infty(a,b)}^4{}\leq{} \|u\|_{L^\infty(Q_\tau)}^4
{}\leq{} C(\bar{\rho},a,\|(\rho_0,u_0)\|_{L^\infty(a,b)})
\big(1{}+{}\| \rho\|_{L^\infty(Q_\tau)}^{2\max\{1, \gamma-1 \} }\big).
\end{equation}
Inserting this into the above inequality, we have
\begin{eqnarray*}
&&\int_{Q_t}|u^2\rho_rm_{rr}|\,drd\tau \notag\\
&&\leq{} \Delta\int_{Q_t} |m_{rr}|^2\,drd\tau
{}+{}
C_\Delta \int_0^t\Big((1+ \sup_{s\in[0,\tau]}\|\rho(s,\cdot)\|_{L^\infty}^{2\max\{1, \gamma-1\}})
\int_a^b h''_\delta(\rho)|\rho_r(\tau,r)|^2\,dr\Big) d\tau.
\end{eqnarray*}
On the other hand, using the estimate similar to \eqref{est:rho_bounded}, we can write
\begin{eqnarray}
\label{est:rho_uniform:1}
\|\rho(t,\cdot)\|_{L^\infty}^{\max\{4,\gamma+2\}}{}\leq C\Big(1+ \int_a^b |\rho_r(t,r)|^2\,dr\Big)
\qquad\mbox{for $t\in[0,T]$}.
\end{eqnarray}
Using \eqref{est:rho_uniform:1} and $\gamma\in(1,3]$, we obtain
\begin{eqnarray}
&&\int_{Q_t}|u^2\rho_rm_{rr}|\,drd\tau \notag\\
&&\leq \Delta\int_{Q_t} |m_{rr}|^2\,drd\tau\notag \\
&&\quad +
C_\Delta \int_0^t\Big(\big(1+ \sup_{s\in[0,\tau]}\int_a^b |\rho_r(s,r)|^2 dr\big)
\int_a^b h''_\delta(\rho)|\rho_r(\tau,r)|^2\,dr\Big) d\tau.
\quad\label{est:high-2}
\end{eqnarray}
Furthermore, we have
\begin{eqnarray}
\int_{Q_t}|\rho uu_rm_{rr}|\,drd\tau &\leq& \Delta\int_{Q_t} |m_{rr}|^2\,drd\tau\notag \\
&&+ C_\Delta\int_0^t\Big(\|(\rho u^2)(\tau,\cdot)\|_{L^\infty}\int_a^b\rho(\tau,r)|u_r(\tau,r)|^2\,dr\Big) d\tau.
\label{2.14b}
\end{eqnarray}
Arguing as in \eqref{est:u-uniform-1} and \eqref{est:rho_uniform:1}, we obtain
\begin{eqnarray}
\|(\rho u^2)(\tau,\cdot)\|_{L^\infty}
&\leq& C\Big(1+\sup_{s\in[0,\tau]}\|\rho(s,\cdot)\|_{L^\infty}^{\max\{2,\gamma\}}\Big) \notag \\
&\leq& C\Big(1+
\sup_{s\in[0,\tau]}\int_a^b |\rho_r(s,r)|^2\,dr\Big).
\label{est:rho_u^2}
\end{eqnarray}
Inserting this into \eqref{2.14b}, we obtain
\begin{eqnarray}
&&\int_{Q_t}|\rho uu_rm_{rr}|\,drd\tau \notag\\
&&\leq{} \Delta\int_{Q_t} |m_{rr}|^2\,drd\tau \notag\\
&&\quad
+C_\Delta \int_0^t\Big(\big(1+ \sup_{s\in[0,\tau]}\int_a^b|\rho_r(s,r)|^2 dr\big)
\int_a^b \rho(\tau,r)|u_r(\tau,r)|^2\,dr\Big) d\tau. \label{est:high-3}
\end{eqnarray}
Combining \eqref{est:higher-0.1}, \eqref{est:high-2}, and \eqref{est:high-3}, we obtain
\begin{eqnarray}
\left| \int_{Q_t} (\rho u^2 {}+{}p_\delta)_rm_{rr}\,drd\tau\right| &\leq&
\Delta\int_{Q_t} |m_{rr}|^2\,drd\tau \notag\\
&&+C_\Delta \int_0^t\Phi_1(\tau)\Big(1+ \sup_{s\in[0,\tau]}\int_a^b |\rho_r(s,r)|^2 dr\Big) d\tau,
\label{2.17} \notag
\end{eqnarray}
where
\[
\Phi_1(\tau){}={}\int_a^b
\Big(h''_\delta(\rho)|\rho_r(\tau,r)|^2{}+{}\rho(\tau,r)|u_r(\tau,r)|^2\Big)\,dr
\]
is an $L^1(0,T)$--function with the norm depending on $a,\e$, and $E_0$;
see \eqref{Energy_est} and \eqref{est:rho_int_2_gamma}.
Consider now
\begin{eqnarray*}
\left|\int_{Q_t}\frac{(n-1)\rho u^2}{r}m_{rr}\,drd\tau\right| &\leq& \Delta \int_{Q_t}|m_{rr}|^2\,drd\tau
{}+{}C_\Delta\int_0^t\Big(\|(\rho u^2)(\tau,\cdot)\|_{L^\infty}\int_a^b (\rho u^2)(\tau,r)\,dr\Big)d\tau\\
&\leq& \Delta \int_{Q_t}|m_{rr}|^2\,drd\tau
{}+{}C_\Delta\int_0^t \Big(1+ \sup_{s\in[0,\tau]}\int_a^b|\rho_r(s,r)|^2\,dr\Big)d\tau,
\end{eqnarray*}
where, in the last inequality, we have used \eqref{Energy_est} and \eqref{est:rho_u^2}.
All the other terms in \eqref{est:higher-0} can be estimated by similar arguments.
Thus, we obtain
\begin{eqnarray*}
&&\sup_{\tau\in[0,t]}\int_a^b \big(|\rho_r(\tau,r)|^2{}+{}|m_r(\tau,r)|^2\big)\,dr{}
+{}\e\int_{Q_t}\big(|\rho_{rr}|^2{}+{}|m_{rr}|^2\big)\,drd\tau \\
&&\leq \Delta\int_{Q_t}\big(|\rho_{rr}|^2{}+{}|m_{rr}|^2\big)\,drd\tau \\
&&\quad {}+{}
C_\Delta\int_0^t\big(1+\Phi(\tau)\big)
\Big(1+\sup_{s\in[0,\tau]}\int_a^b\big(|\rho_r(s,r)|^2{}+{}|m_r(s,r)|^2\big)\,dr\Big)\,d\tau,
\end{eqnarray*}
where $\Phi(\tau)=\Phi_1(\tau)+ \|\rho(\tau,\cdot)\|_{L^\infty}^{2\max\{2, \gamma\}}$.
Choosing $\Delta$ small enough and using a Gronwall-type argument and Lemma \ref{2.1-b},
we complete the proof.
\end{proof}
As a corollary, we can first bound $\|\rho\|_{L^\infty(Q_T)}$,
which follows directly from \eqref{2.10a} and \eqref{est:rho_uniform:1}, and
then bound $\|u\|_{L^\infty(Q_T)}$ from Lemma \ref{lemma:uniform}.
\begin{lemma}
\label{lemma:2.4}
There exists an a priori bound for $\|(\rho,u)\|_{L^\infty(Q_T)}$
in terms of the parameters $T,E_0, \|(\rho_0,u_0)\|_{L^\infty(a,b)}$, and
$\|(\rho_0,u_0)\|_{H^1(a,b)}$.
\end{lemma}
Define
\[
\phi(\rho){}={}\left\{
\begin{array}{ll}
\frac{1}{\rho}-\frac{1}{\tilde{\rho}} {}+{}\frac{\rho-\tilde{\rho}}{\tilde{\rho}^2},& \rho<\tilde{\rho},\\[2mm]
0, & \rho\geq\tilde{\rho}.
\end{array}
\right.
\]
\begin{lemma}
There exists $C>0$ depending on $\|\phi(\rho_0)\|_{L^1(a,b)}$
and the other parameters of the problem
such that
\begin{eqnarray}
\label{est:rho_r_integral}
\sup_{t\in[0,T]}\int_a^b\phi(\rho(t,\cdot))\,dr
{}+{}\int_{Q_T}\frac{|\rho_r|^2}{\rho^3}\,drdt{}\leq{} C.
\end{eqnarray}
\end{lemma}
\begin{proof}
Indeed, multiplying the first equation in \eqref{eq:NS} by $\phi'(\rho)$, we have
\begin{eqnarray*}
&&\phi_t{}+{}(u\phi)_r{}-{}\e \, \phi_{rr}{}+{}(n-1)\e \frac{|\rho_r|^2}{\rho^3}\chi_{\{\rho<\tilde{\rho}\}}\\
&&={}2\big( \frac{1}{\rho}-\frac{1}{\tilde{\rho}}\big)u_r\chi_{\{\rho<\tilde{\rho}\}}{}+{}
\frac{n-1}{r}\rho u\big(\frac{1}{\rho^2}-\frac{1}{\tilde{\rho}^2}\big)\chi_{\{\rho<\tilde{\rho}\}}
+\frac{(n-1)\e}{r}\big(\frac{1}{\rho^2}-\frac{1}{\tilde{\rho}^2}\big)\rho_r\chi_{\{\rho<\tilde{\rho}\}}.
\end{eqnarray*}
Integrating the above equation in $(t,r)$ and using the boundary conditions \eqref{eq:bc},
we have
\begin{eqnarray}
&&\sup_{t\in[0,T]}\int_a^b\phi(\rho)\,dr
{}+{}\e (n-1)\int_{Q_T\cap\{\rho<\tilde{\rho}\}}\frac{|\rho_r|^2}{\rho^3}\,drdt \notag \\
{}&&\leq{} \left|\int_{Q_T\cap\{\rho<\tilde{\rho}\}}2\big(\frac{1}{\rho}-\frac{1}{\tilde{\rho}}\big)u_r\,drdt \right|
{}+{}\left|\int_{Q_T\cap \{\rho<\tilde{\rho}\}}\frac{n-1}{r}\rho u\left(\frac{1}{\rho^2}-\frac{1}{\tilde{\rho}^2}\right)\,drdt \right| \notag \\[1mm]
{}&&\quad +{}\left|\int_{Q_T\cap\{\rho<\tilde{\rho}\}} \frac{(n-1)\e}{r}\rho_r\left(\frac{1}{\rho^2}-\frac{1}{\tilde{\rho}^2}\right)\,drdt \right| \notag \\
\label{est_00}
{}&&={}I_1{}+{}I_2{}+{}I_3.
\end{eqnarray}
Integrating by parts, we have
\begin{eqnarray}
I_1{}\leq{}2\int_{Q_T\cap\{\rho<\tilde{\rho}\}} \left|\frac{\rho_r u}{\rho^2}\right|drdt
{}\leq{}\frac{\e}{8}\int_{Q_T\cap \{\rho<\tilde{\rho}\}}\frac{|\rho_r|^2}{\rho^3}\,drdt
{}+{}C_\e\int_{Q_T\cap \{\rho<\tilde{\rho}\}}\frac{|u|^2}{\rho}\,drdt. \notag
\end{eqnarray}
Since $\rho^{-1}\leq\phi(\rho)+\frac{2}{\tilde{\rho}}$ for $\rho<\tilde{\rho}$, $u$ is bounded in $L^\infty$,
and $|\{\rho(t,\cdot)<\tilde{\rho}\}|$ is bounded independently of $t$,
the last term in the above inequality is bounded by
\[
C\Big(1{}+{}\int_{Q_T}\phi(\rho)\,drdt\Big).
\]
Thus, we have
\begin{eqnarray}
I_1&\leq&2\int_{Q_T\cap\{\rho<\tilde{\rho}\}} \Big|\frac{\rho_r u}{\rho^2}\Big|\,drdt\notag\\
&\leq& \Delta\int_{Q_T\cap \{\rho<\tilde{\rho}\}}\frac{|\rho_r|^2}{\rho^3}\,drdt
{}+{} C_\Delta\left(1+\int_{Q_T}\phi(\rho)\,drdt\right).
\end{eqnarray}
Also, by similar arguments,
\begin{eqnarray}
I_2{}={}\left|\int_{Q_T\cap\{\rho<\tilde{\rho}\}}\frac{n-1}{r}\left(\frac{\rho u}{\tilde{\rho}^2}
{}-{}\frac{u}{\rho}\right)\,drdt{}\right|
\leq{} C\left(1{}+{}\int_{Q_T}\phi(\rho)\,drdt\right),
\end{eqnarray}
and
\begin{eqnarray}
I_3&\leq & C\int_{Q_T\cap \{\rho<\tilde{\rho}\}}\left| \frac{\e\rho_r}{\rho^2} \right|\,drdt\notag\\
&\leq&\Delta\int_{Q_T\cap \{\rho<\tilde{\rho}\}}\frac{|\rho_r|^2}{\rho^3}\,drdt
+ C_\Delta\left(1+\int_{Q_T}\phi(\rho)\,drdt\right).
\end{eqnarray}
Combining the last three estimates in \eqref{est_00}, choosing $\Delta>0$ sufficiently small,
and using a Gronwall-type inequality, we obtain
the desired {\it a priori} estimate.
\end{proof}
Then we have the following estimate:
\begin{eqnarray}
\label{est:v_int}
\int_0^T\Big\|\frac{1}{\rho(t,\cdot)}\Big\|_{L^\infty(a,b)}\,dt
{}&\leq&{}C\left(1{}+{}\big(\int_{Q_T}\frac{|\rho_r|^2}{\rho^3}\,drdt\big)^{\frac{1}{2}}
\big(\int_{Q_T}\phi(\rho)\,drdt\big)^{1/2}\right) \notag \\
{}&\leq&{}C \left(1{}+{}\big(\int_{Q_T}\frac{|\rho_r|^2}{\rho^3}\,drdt\big)^{\frac{1}{2}}\right).
\end{eqnarray}
\begin{lemma}
There exists $C$ depending on $\|\phi(\rho_0)\|_{L^1(a,b)}$ and the other parameters as in Lemma {\rm 2.4}
such that
\begin{equation}
\label{est:u_r_max}
\int_0^T\Big\|(\frac{m_r}{\rho}, \frac{\rho_r}{\rho}, u_r)(t,\cdot)\Big\|_{L^\infty(a,b)}\,dt{}\leq{}C,
\end{equation}
and
\begin{equation}\label{rho-lower-bound}
C^{-1}\le \rho(t,r)\le C.
\end{equation}
\end{lemma}
\begin{proof}
Indeed, by the Sobolev embedding and \eqref{est:v_int}, we have
\begin{eqnarray*}
\int_0^T\Big\|\frac{m_r(t,\cdot)}{\rho(t,\cdot)}\Big\|_{L^\infty(a,b)}\,dt
{}&\leq&{}\int_0^T\|m_r(t,\cdot)\|_{L^\infty(a,b)}\|\rho^{-1}(t,\cdot)\|_{L^\infty(a,b)}\,dt \notag \\
& \leq & C \int_0^T \Big(\int_a^b|m_{rr}|^2\,dr\Big)^{\frac{1}{2}}
\Big(1{}+{}\big(\int_a^b\frac{|\rho_r|^2}{\rho^3} \,dr\big)^{\frac{1}{2}}\Big)\,dt,
\end{eqnarray*}
which is bounded by \eqref{2.10a} and \eqref{est:rho_r_integral}.
The estimate for $\frac{\rho_r}{\rho}$ is the same. The estimate for $u_r$ follows
from $u_r{}={}\frac{m_r}{\rho}{}-{}\frac{u\rho_r}{\rho}$, the estimates above, and Lemma \ref{lemma:2.4}.
Now we can obtain a uniform estimate for $v{}={}\frac{1}{\rho}$. Notice that $v$ satisfies the inequality:
\begin{equation*}
v_t{}+{}\big(u-\frac{\e(n-1)}{r}\big)v_r{}-{}\e v_{rr}{}\leq{} \big(u_r{}+{}\frac{(n-1)u}{r}\big)v.
\end{equation*}
By the maximum principle, we have
\begin{equation}
\label{est:v_max}
\max_{Q_T}v
{}\leq{}C\max\{\|v_0\|_{L^\infty(a,b)},\bar{v}\} e^{C\int_0^T\|(u_r, u)(\tau,\cdot)\|_{L^\infty(a,b)}\,d\tau}
{}\leq{}C\max\{\|v_0\|_{L^\infty(a,b)},\bar{v}\},
\end{equation}
where the last inequality follows from Lemma \ref{lemma:2.4} and \eqref{est:u_r_max}.
\end{proof}
\smallskip
The estimates in Lemma \ref{lemma:2.4} and \eqref{est:v_max}
are the required {\it a priori} estimates,
which completes the proof of Theorem \ref{theorem:exist}.
\end{subsection}
\end{section}
\medskip
\begin{section}{Proof of Theorem \ref{main}}
In this section, we provide a complete proof of Theorem \ref{main}.
As indicated earlier,
the constant $M$ is a universal constant, {\it independent of $\e>0$},
from now on.
\begin{subsection}{A Priori Estimates Independent of $\e$}
We will need the following estimate.
\begin{lemma}
\label{claim:local_integrability}
Let $l=0,\cdots, n-1$, and $a_1\in(a,1]$. There exists $M{}={}M(\gamma,a_1,E_0)$
such that,
for any $T>0$,
\begin{equation}
\sup_{t\in[0,T]}\int_{a_1}^b \rho(t,r)^\gamma\,r^ldr
{}\leq M\big(1{}+{}\bar{\rho}^\gamma b^n\big).
\end{equation}
\end{lemma}
\begin{proof} The proof is based on the energy estimate \eqref{Energy_est}. Let
$$
\hat{e}(\rho){}={}\rho^\gamma -\bar{\rho}^\gamma-\gamma\bar{\rho}^{\gamma-1}(\rho-\bar{\rho}).
$$
Using the Young inequality, we find that there exists $M(\gamma)>0$ such that
$$
\rho^\gamma\leq M(\gamma)\big(\hat{e}(\rho){}+{}\bar{\rho}^\gamma\big).
$$
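This bound can be checked by splitting into two regimes: for $\rho\geq K\bar{\rho}$ with $K=K(\gamma)$ chosen so that $\gamma K^{1-\gamma}\leq\frac{1}{2}$,
\[
\hat{e}(\rho){}\geq{}\rho^\gamma{}-{}\gamma\bar{\rho}^{\gamma-1}\rho
{}\geq{}\big(1-\gamma K^{1-\gamma}\big)\rho^\gamma{}\geq{}\frac{1}{2}\rho^\gamma,
\]
while, for $\rho\leq K\bar{\rho}$, $\rho^\gamma\leq K^\gamma\bar{\rho}^\gamma$ and $\hat{e}(\rho)\geq0$ by the convexity of $\rho\mapsto\rho^\gamma$.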
Then we have
\begin{equation*}
\int_{a_1}^b\rho^\gamma\,r^ldr{}
\leq{} M\Big(\int_{a_1}^b \hat{e}(\rho)\,r^ldr{}+{}\bar{\rho}^\gamma b^{l+1}\Big).
\end{equation*}
Since $0<a(\e)<1<b(\e)<\infty$, we have
\[
\int_{a_1}^b \hat{e}(\rho(t,r))\,r^ldr{}
\leq{}{a_1}^{l+1-n}\sup_{\tau\in[0,t]}\int_a^b \bar{\eta}^*_\delta(\rho(\tau,r),m(\tau,r))\,r^{n-1}dr{}\leq{}{a_1}^{1-n}E_0,
\]
by Proposition \ref{2.1}, with $E_0$ independent of $\e$.
This implies that, for all $l=0, \cdots, n-1$,
\begin{equation*}
\int_{a_1}^b\rho^\gamma\,r^ldr{}
\leq{} M\big(a_1^{1-n} E_0 +{}\bar{\rho}^\gamma b^{n}\big)
\le M\big( 1+{}\bar{\rho}^\gamma b^{n}\big).
\end{equation*}
\end{proof}
\begin{lemma}
\label{lemma 3.2}
There exists $M=M(T)$, independent of $\e$, such that
\begin{equation}
\label{est:delta}
\int_0^T\int_r^b\rho^3\,y^{n-1}dydt{}\leq{}M\Big(1+ \frac{ b^n}{\e}\Big)\qquad \mbox{for any $r\in(a,b)$}.
\end{equation}
\end{lemma}
\begin{proof}
Consider first the case $\gamma\in(1,2)$. We estimate
\begin{eqnarray*}
\e\int_0^T\int_r^b\rho^3{}\, y^{n-1}dydt{}
&\leq&{}M\e\int_0^T\sup_{(r,b)}\rho^{3-\gamma}(t,\cdot)\,dt \notag \\
&\leq& M +M\e\int_0^T\int_r^b\rho^{3-\frac{3\gamma}{2}}|(\rho^{\frac{\gamma}{2}})_y|\,dydt \notag \\
&\leq& M+M\e\int_0^T\int_r^b\rho^{6-3\gamma}(y^{n-1})^{-1}\,dydt\\
{}&=&{}
M+M\e\int_0^T\int_r^b \rho^{6-3\gamma} (y^{n-1})^{2-\gamma}(y^{n-1})^{\gamma-3}\,dydt\notag \\
&\leq& M{}+{}\frac{\e}{2}\int_0^T\int_r^b\rho^3\,y^{n-1}dydt,\notag
\end{eqnarray*}
where, in the last inequality, we have used the Jensen inequality. It follows from the above computation that
\[
\e\int_0^T\int_r^b \rho^3\,y^{n-1}dydt{}\leq{}M(T)\qquad \mbox{for all $r\in(a,b)$},
\]
which yields \eqref{est:delta} in this case.
Let now $\gamma\in[2,3]$. First, we notice that
\begin{eqnarray*}
\sup_{t\in[0,T]} \int_r^b \rho\,y^{n-1}dy
&\leq& \sup_{t\in[0,T]} \left( \int_a^b \rho^\gamma\,r^{n-1}dr \right)^{\frac{1}{\gamma}}
\left(\int_r^b y^{n-1}dy\right)^{\frac{\gamma-1}{\gamma}}\\
&\leq & Mb^{\frac{n(\gamma-1)}{\gamma}}\le M b^n
\end{eqnarray*}
since $b>1$.
Then we argue as above:
\begin{eqnarray*}
\int_0^T\int_r^b \rho^3\,y^{n-1}dydt
&\leq& \int_0^T\Big(\sup_{(r,b)}\rho^2(t,\cdot)\int_r^b\rho\,y^{n-1}dy\Big) dt \\
&\leq& M b^{n}\left(1{}+{}\int_0^T\int_r^b \rho|\rho_y|\,dydt\right)\\
&=& M b^{n}\left(1{}
+{}\int_0^T\int_r^b \rho^{2-\frac{\gamma}{2}}\rho^{\frac{\gamma-2}{2}}|\rho_y|\,dydt\right)\\
&\leq& M b^n\left(1{}+{}\frac{1}{\e}{}+{}\int_0^T\int_r^b\rho^{4-\gamma}(y^{n-1})^{-1}\,dydt\right)\\
&\leq& M b^n\Big(1+\frac{1}{\e}\Big)\\
&\leq& M \frac{b^n}{\e},
\end{eqnarray*}
where, in the last inequality, we have used the Jensen inequality
with powers $\frac{\gamma}{4-\gamma}$ and $\frac{\gamma}{2\gamma-4}$
and the energy estimate \eqref{Energy_est}.
\end{proof}
\begin{lemma}\label{lemma:3.2} Let $K$ be a compact subset of $(a,b)$.
Then, for $T>0$, there exists $M{}={}M(K,T)$ independent of $\e$ such that
\begin{equation}
\label{HI_est_1}
\int_0^T\int_K(\rho^{\gamma+1}+\delta \rho^3)\,drdt{}\leq{}M.
\end{equation}
\end{lemma}
\begin{proof} We divide the proof into five steps.
\smallskip
1. Let $\omega(r)$ be a smooth, positive function compactly supported in $(a,b)$.
We multiply the momentum equation in \eqref{eq:NS} by $\omega$ to obtain
\begin{eqnarray}\label{3.3}
&&(\rho u \omega)_t{}+{}\big((\rho u^2+p_\delta)\omega\big)_r
{}+{}\frac{n-1}{r}\rho u^2\omega{}-{}\e\big(\omega(m_r{}+{}\frac{n-1}{r}m)\big)_r \notag \\
&&{}={}\big(\rho u^2+p_\delta{}-{}\e(m_r{}+{}\frac{n-1}{r}m)\big)\omega_r.
\end{eqnarray}
Integrating \eqref{3.3} in the spatial variable over $(r,b)$ yields
\begin{eqnarray}\label{3.4}
\Big(\int_r^b\rho u\omega\,dy\Big)_t
{}+{}\int_r^b\frac{n-1}{y}\rho u^2\omega\,dy
{}+{}\e\omega\big(m_r{}+{}\frac{n-1}{r}m\big)
{}={}\omega(\rho u^2{}+{}p_\delta){}+{}f_1,
\end{eqnarray}
where
\[
f_1{}={}\int_r^b\big(\rho u^2 +p_\delta -\e(m_y{}+{}\frac{n-1}{y}m)\big)\omega_y\,dy.
\]
\smallskip
2. Multiplying \eqref{3.4} by $\rho$ and using the continuity equation \eqref{eq:NS}, we have
\begin{eqnarray*}
&&\Big(\rho\int_r^b\rho u\omega\,dy\Big)_t {}+{}\Big((\rho u)_r{}+{}\frac{n-1}{r}\rho u
{}-{}\e(\rho_{rr}{}+{}\frac{n-1}{r}\rho_r)\Big)\int_r^b\rho u\omega\,dy \notag \\
&&\quad +\rho\int_r^b\frac{n-1}{y}\rho u^2\omega\,dy{}+{}\e\rho\omega\big(m_r{}+{}\frac{n-1}{r}m\big)\\
&&{}={}(\rho^2 u^2 + \rho p_\delta)\omega{}+{}\rho f_1,
\end{eqnarray*}
and
\begin{eqnarray}
&&\Big(\rho\int_r^b\rho u\omega\,dy\Big)_t{}+{}\Big(\rho u\int_r^b\rho u\omega\,dy\Big)_r\notag \\
&&{}+{}\e\Big( -\big(\rho_{rr}{}+{}\frac{n-1}{r}\rho_r\big)\int_r^b\rho u\omega\,dy
{}+{}\rho\omega\big(m_r{}+{}\frac{n-1}{r}m\big)\Big)\notag\\
&&{}={}\rho p_\delta \omega {}+{}f_2,
\end{eqnarray}
where
\[
f_2{}={}\rho f_1{}-{}\frac{n-1}{r}\rho u\int_r^b\rho u\omega\,dy-\rho\int_r^b\frac{n-1}{y}\rho u^2\omega\,dy.
\]
Notice that
\begin{eqnarray*}
&&-\big(\rho_{rr}{}+{}\frac{n-1}{r}\rho_r\big)\int_r^b\rho u\omega\,dy
{}+{}\rho\omega\big(m_r{}+{}\frac{n-1}{r}m\big)\notag\\
&&={} -\Big(\rho_r\int_r^b\rho u \omega\,dy\Big)_r{}-{}\rho u\rho_r\omega
{}-{}\left(\frac{n-1}{r}\rho\int_r^b\rho u \omega\,dy\right)_r{}-{}\frac{n-1}{r}\rho^2 u\omega \notag \\
{}&&\quad +{}\frac{n-1}{r^2}\rho\int_r^b\rho u \omega\,dy{}
+{}\rho^2 u_r\omega
{}+{}\rho u\rho_r\omega {}+{}\frac{n-1}{r}\rho^2 u\omega \notag \\
{}&&={}
-\big(\rho\int_r^b\rho u\omega\,dy\big)_{rr}{}-{}(\rho^2 u\omega)_r{}
-{}\left(\frac{n-1}{r}\rho\int_r^b\rho u\omega\,dy\right)_r \notag \\
{}&&\quad +{}\rho^2 u_r\omega{}+{}\frac{n-1}{r^2}\rho\int_r^b\rho u\omega\,dy.
\end{eqnarray*}
It then follows that
\begin{eqnarray}\label{3.6a}
&&\Big(\rho\int_r^b\rho u\omega\,dy\Big)_t{}+{}\Big(\rho u\int_r^b\rho u\omega\,dy\Big)_r{}
-{}\e\Big(\rho\int_r^b\rho u\omega\,dy\Big)_{rr}{}-{}\e(\rho^2 u\omega)_r \notag \\
&&\quad{}-{}\e\left(\frac{n-1}{r}\rho\int_r^b\rho u\omega\,dy\right)_r
{}+{}\e\rho^2 u_r\omega\notag\\
&&{}={}p_\delta\rho\omega{}+{}f_3,
\end{eqnarray}
where
$f_3{}={}f_2{}-{}\e\frac{n-1}{r^2}\rho\int_r^b\rho u\omega\,dy$.
\smallskip
3. We multiply \eqref{3.6a} by $\omega$ to obtain
\begin{eqnarray}
&&\Big(\rho\omega\int_r^b\rho u\omega\,dy\Big)_t{}
+{}\Big(\rho u\omega\int_r^b\rho u\omega\,dy\Big)_r{}
-{}\e\Big(\omega\big(\rho\int_r^b\rho u\omega\,dy\big)_{r}\Big)_r {}\notag \\
&&\quad +{}\e\Big(\rho\omega_r\int_r^b\rho u\omega\,dy\Big)_r
{}-{}\e(\rho^2 u\omega^2)_r{}-{}\e\left(\frac{n-1}{r}\rho\omega\int_r^b\rho u\omega\,dy\right)_r
\notag\\
&&\quad +{}\e\rho^2u_r\omega^2{}+{}\e\rho^2 u\omega \omega_r\notag\\[2mm]
&&={}p_\delta\rho\omega^2{}+{}f_4, \label{3.7a}
\end{eqnarray}
where
$f_4{}={} \omega f_3{}+{}\rho u\omega_r\int_r^b\rho u\omega\,dy{}
+{}\e\rho\omega_{rr}\int_r^b\rho u\omega\,dy{}
-{}\frac{(n-1)\e}{r}\rho\omega_r\int_r^b\rho u \omega\,dy$.
\smallskip
We integrate \eqref{3.7a} over $[0,T]\times[a,b]$ to obtain
\begin{eqnarray}
&&\int_{Q_T} \big(\delta\rho^3+\kappa\rho^{\gamma+1}\big)\omega^2\,drdt{}\notag\\
&&={} \int_{Q_T}\big(\e\rho^2u_r\omega^2{}+{}\e\rho^2 u\omega\omega_r\big)\,drdt \notag \\
&&\quad +{}\int_a^b\Big(\rho\omega\int_r^b\rho u\omega\,dy\Big)\Big|^T_0\,dr
{}-{}\int_{Q_T} f_4\,drdt \notag \\
&&\leq{} \e\int_{Q_T}\rho^3\omega^2\,drdt{}
+{}\e M\int_{Q_T}\big(\rho|u_r|^2\omega^2{}+{}\rho|u|^2|\omega_r|^2\big)\,drdt \notag \\
&&\quad +{}\int_a^b\Big(\rho\omega\int_r^b\rho u\omega\,dy\Big)\big|^T_0\,dr {}
-{}\int_{Q_T} f_4\,drdt \notag \\
\label{eq:current-1}
&&\leq \e\int_{Q_T}\rho^3\omega^2\,drdt{}+{}M(\mbox{supp}\,\omega,T,E_0).
\end{eqnarray}
The last inequality follows easily from \eqref{Energy_est}--\eqref{est:energy_2}
and the formula for $f_4$.
\medskip
4. Claim:
There exists $M=M(\mbox{supp}\,\omega,T,E_0)$ such that
\begin{equation}\label{3.8a}
\e\int_{Q_t} \rho^3\omega^2\, drdt{}\leq{} M{}+{}M\e\int_{Q_t}\rho^{\gamma+1}\omega^2\,drdt.
\end{equation}
\medskip
If $\gamma\geq 2$, the claim is trivial.
Let $\gamma{}<{}\beta{}\leq{} 3$. We estimate
\begin{eqnarray}\label{3.11}
&&\e\int_{Q_T}\rho^\beta\omega^2\,drdt\\
&&\leq {}\e\sup_{\mbox{supp}\,\omega}\big(\rho^{\beta-\gamma}\omega^2\big)
\int_{Q_T\cap\, {\mbox{supp}\,\omega}}\rho^\gamma{}drdt \notag\\
&&\leq{} \e M\sup_{\mbox{supp}\,\omega}\big(\rho^{\beta-\gamma}\omega^2\big)\notag \\
&&\leq{}\e M\int_{Q_T}\rho^{\beta-\gamma-\frac{\gamma}{2}}|(\rho^{\frac{\gamma}{2}})_r|\omega^2{}drdt
+{}\e M\int_{Q_T} \rho^{\beta-\gamma}\omega|\omega_r| drdt\notag \\
&&\leq \e M\Big(\int_{Q_T\cap\, {\mbox{supp}\,\omega}}\rho^\gamma drdt {}
+{} \int_{Q_T}|(\rho^{\frac{\gamma}{2}})_r|^2\omega^2drdt{}
+{}\int_{Q_T} \rho^{2\beta-3\gamma}\omega^2drdt\Big)\notag \\
&&\leq M\Big( 1{}+{}\e \int_{Q_T}\rho^{2\beta-3\gamma}\omega^2 drdt\Big).
\end{eqnarray}
If $2\beta-3\gamma\leq\gamma+1$, the estimate of the claim follows.
Otherwise, since $2\beta-3\gamma<\beta$ (note that $\beta\leq 3$),
we can iterate \eqref{3.11} with $\beta$ replaced by $2\beta-3\gamma$
and improve \eqref{3.11}:
\begin{eqnarray}
\e\int_{Q_T}\rho^\beta\omega^2 drdt
{}\leq{} M\Big(1 {}+{}\e \int_{Q_T}\rho^{4\beta-9\gamma}\omega^2 drdt\Big).
\end{eqnarray}
If $4\beta-9\gamma$ is still larger than $\gamma+1$,
we iterate the estimate again.
In this way, we obtain
a recurrence relation $\beta_n{}={}2\beta_{n-1}-3\gamma$, $\beta_0=\beta\leq3$, and the estimate
\[
\e\int_{Q_T}\rho^\beta\omega^2 drdt
{}\leq{} M(n)\Big(1 {}+{}\e \int_{Q_T} \rho^{\beta_n}\omega^2drdt \Big).
\]
Solving the recurrence relation, we obtain
$$
\beta_n{}={}2^n(\beta-3\gamma){}+{}3\gamma.
$$
Since $\beta\leq3<3\gamma$, the sequence $\beta_n$ is strictly decreasing, so that $\beta_n<\gamma+1$
for some $n$.
Then the expected estimate is obtained.
\medskip
5. Now returning to \eqref{eq:current-1}, we have
\[
\int_{Q_T}\big(\rho^{\gamma+1}+\delta\rho^3\big)\omega^2 drdt {}\leq{}M(\mbox{supp}\,\omega, T, E_0)
\]
for all small $\e>0$.
\end{proof}
The following lemma holds for weak entropies $\eta$ (also {\it cf.} \cite{DiPerna}).
\begin{lemma}
\label{Energy_control_lemma}
Let $\eta^*(\rho,m)$ be the mechanical energy of system \eqref{eq:Eu},
and let $(\eta_\psi,q_\psi)$ be an entropy pair
\eqref{eta}--\eqref{q} with the generating function $\psi(s)$ satisfying
\[
\sup_s|\psi''(s)|{}<{}\infty.
\]
Then,
for any $(\rho,m){}\in{}\mathbb{R}^2_+$ and any vector $\bar{a}{}={}(a_1,a_2)$,
\begin{equation}
|\bar{a}\grad^2\eta_\psi\bar{a}^\top|{}\leq{} M_\psi\, \bar{a}\grad^2\eta^*\bar{a}^\top
\qquad \mbox{for some $M_\psi>0$}.
\end{equation}
\end{lemma}
\begin{lemma}\label{HI:2}
Let $K\subset (a,b)$ be compact.
There exists $M{}={}M(K,T)$ independent of $\e$
such that, for any $\e>0$,
\begin{align*}
\int_0^T\int_K\big(\rho|u|^3{}+{}\rho^{\gamma+\theta}\big)\,drdt
{}\leq{}M\big(1+\bar{\rho}^\gamma b^n
{}+{}\frac{\delta}{\e}b^n\big).
\end{align*}
\end{lemma}
\begin{proof} We divide the proof into five steps.
\smallskip
1. Let $(\check{\eta},\check{q})$ be an entropy pair corresponding to $\psi(s){}={}\frac{1}{2}s|s|$.
Define
\begin{eqnarray*}
&&\tilde{\eta}(\rho,m){}={}\check{\eta}(\rho,m){}-{}\grad_{(\rho,m)}\check{\eta}(\bar{\rho},0)\cdot(\rho-\rb,m)\ge 0,\\
&&\tilde{q}(\rho,m){}={}\check{q}(\rho,m){}-{}\grad_{(\rho,m)}\check{\eta}(\bar{\rho},0)\cdot(m,\frac{m^2}{\rho}{}+{}p).
\end{eqnarray*}
Note that the entropy pair $(\check{\eta}, \check{q})$ is defined for system \eqref{eq:Eu} with
pressure $p{}={}\kappa\rho^\gamma$, rather than $p_\delta$.
Then $(\tilde{\eta},\tilde{q})$ is still an entropy pair of \eqref{eq:Eu}.
We multiply the continuity equation in \eqref{eq:NS} by $\tilde{\eta}_\rho r^{n-1}$
and the momentum equation in \eqref{eq:NS} by $\tilde{\eta}_m r^{n-1}$, and then add them to obtain
\begin{eqnarray}
\label{eq:int_ii_1}
&&(\tilde{\eta} r^{n-1})_t{}+{}(\tilde{q} r^{n-1})_r{}
+{}(n-1)r^{n-2}\big(-\check{q}{}+{}m\check{\eta}_\rho{}+{}\frac{m^2}{\rho}\check{\eta}_m{}+{}\check{\eta}_m(\bar{\rho},0) p(\rho)\big) \notag \\
&&{}={} \e r^{n-1}\Big((\rho_{rr}{}+{}\frac{n-1}{r}\rho_r)\tilde{\eta}_\rho{}+{}(m_r{}+{}\frac{n-1}{r}m)_r\tilde{\eta}_m\Big){}-{}(\delta\rho^2)_r\tilde{\eta}_mr^{n-1}.
\end{eqnarray}
\smallskip
2. It can be checked directly that, for some constant $M=M(\gamma)>0$,
\begin{eqnarray}
\label{id:0.1}
&&\tilde{q}(\rho,m){}\geq{} \frac{1}{M}(\rho|u|^3{}+{}\rho^{\gamma+\theta})
{}-{}M(\rho{}+{}\rho|u|^2{}+{}\rho^\gamma),\\[2mm]
\label{id:0.2}&&-\check{q}{}+{}m(\check{\eta}_\rho{}+{}u\check{\eta}_m){}\leq{}0,\\[2mm]
&&|\check{\eta}_m|{}\leq{}M\big(|u|{}+{}\rho^\theta\big), \quad |\check{\eta}_\rho|{}\leq{}M \big(|u|^2{}+{}\rho^{2\theta}\big),\\[2mm]
&&|\tilde{\eta}|{}\leq{}M\big(\rho+\rho|u|^2+\rho^\gamma\big),
\quad \rho|\tilde{\eta}_\rho+u\tilde{\eta}_m|{}\leq{}M\big(\rho+\rho|u|^2+\rho^\gamma\big),
\end{eqnarray}
and, for $\check{\eta}_\rho{}+{}u\check{\eta}_m$ considered as a function of $(\rho,u)$,
\begin{eqnarray}
|\big(\check{\eta}_\rho{}+{}u\check{\eta}_m\big)_\rho|{}\leq{}M\big(\rho^{\theta-1}|u|{}+{}\rho^{2\theta-1}\big),
\qquad |\big(\check{\eta}_\rho{}+{}u\check{\eta}_m\big)_u|{}\leq{} M\big(|u|{}+{}\rho^\theta\big).
\end{eqnarray}
Also see \cite{CP} for these inequalities.
\smallskip
Moreover, note that, at $r=b$,
\begin{equation}
\label{id:0.4}
\tilde{q}(\bar{\rho},0){}={}\check{q}(\bar{\rho},0){}={}c_0(\gamma)\bar{\rho}^{\gamma+\theta},
\quad \;|\check{\eta}_m(\bar{\rho},0)|{}={}c_1(\gamma)\bar{\rho}^\theta,\;\quad
\check{\eta}_\rho(\bar{\rho},0){}={}0,
\end{equation}
for some positive $c_i(\gamma), i=0,1$, depending only on $\gamma$.
\medskip
3. We integrate equation \eqref{eq:int_ii_1} over $(0,T)\times(r,b)$
to find
\begin{eqnarray}
\int_0^T\tilde{q}(\tau,r)r^{n-1}\,d\tau{}
&=&{}c(\theta)\bar{\rho}^{\gamma+\theta}b^{n-1}T{}+{}\int_r^b\left(\tilde{\eta}(T,y){}-{}\tilde{\eta}(0,y)\right)\,y^{n-1}dy \notag \\
{}&&+{}(n-1)\int_0^T\int_r^b\left( -\check{q}{}+{}m\check{\eta}_\rho{}+{}\frac{m^2}{\rho}\check{\eta}_m\right)\,y^{n-2}dyd\tau \notag \\
&&{}+{}(n-1)\int_0^T\int_r^by^{n-2} \check{\eta}_m(\bar{\rho},0)\big(p(\rho)-p(\bar{\rho})\big)\,dyd\tau \notag\\
{}&&+{}\int_0^T\int_r^b\e y^{n-1}\Big( (\rho_{yy}+\frac{n-1}{y}\rho_y)\tilde{\eta}_\rho{}+{}(m_y{}+{}\frac{n-1}{y}m)_y\tilde{\eta}_m\Big)\,dyd\tau \notag \\
&&+ \int_0^T\int_r^b \delta\rho^2\big( (\tilde{\eta}_m)_\rho\rho_y{}+{}(\tilde{\eta}_m)_u u_y\big)y^{n-1}\,dyd\tau\notag \\
{}&&+{}(n-1)\int_0^T\int_r^b \delta \rho^2\tilde{\eta}_m y^{n-2}\,dyd\tau\notag \\
\label{eq:nash-1}
{}&=&{}I_1{}+{}\cdots{}+{}I_7.
\end{eqnarray}
4. Now we estimate the terms in \eqref{eq:nash-1}.
Clearly,
$$
|I_1|{}\leq{}M \bar{\rho}^{\gamma+\theta}b^{n-1}\leq M \bar{\rho}^{\gamma}b^n,
$$
since $\bar{\rho}<1$ and $b>1$ for small $\e>0$.
\medskip
Notice that $|\tilde{\eta}(\rho,m)|{}\leq{} \eta^*(\rho,m)$.
It then follows that
\begin{eqnarray*}
|I_2|& \leq &
\int_r^b|\tilde{\eta}(\rho(T,r),m(T,r))|\,r^{n-1}dr\\
&\leq &
\int_a^b \eta^*(\rho(T,r),m(T,r))\,r^{n-1}dr.
\end{eqnarray*}
By the energy estimate \eqref{est:energy_2}, $|I_2(t,r)|{}\leq{}E_0$.
\medskip
The term $I_3$ is nonpositive by \eqref{id:0.2} and can be dropped.
\medskip
Using Step 2, we have
\begin{equation}
|I_4(t,r)|{}\leq{} M(a_1,T)\big(1{}+{}\bar{\rho}^\gamma b^n\big) \qquad \mbox{for any}\,\, (t,r)\in[0,T]\times [a_1,b].
\end{equation}
\medskip
5. Consider $I_5$. We write
\begin{eqnarray*}
&&r^{n-1}(\rho_{rr}{}+{}\frac{n-1}{r}\rho_r)\tilde{\eta}_\rho{}={}(r^{n-1}\rho_r)_r\tilde{\eta}_\rho,\\
&&r^{n-1}(m_r{}+{}\frac{n-1}{r}m)_r\tilde{\eta}_m{}={}(r^{n-1}m_r)_r\tilde{\eta}_m{}-{}(n-1)r^{n-3}m\tilde{\eta}_m,
\end{eqnarray*}
and employ integration by parts (note that $\tilde{\eta}_\rho(\bar{\rho},0){}={}\tilde{\eta}_m(\bar{\rho},0){}={}0$) to obtain
\begin{eqnarray}
I_5{}&=&{}-\e\int_0^t\int_r^b\big(\rho_y (\tilde{\eta}_\rho)_y{}
+{}m_y(\tilde{\eta}_m)_y\big)\,y^{n-1}dyd\tau{}
-{}(n-1)\e\int_0^t\int_r^b m\tilde{\eta}_m\,y^{n-3}dyd\tau \notag \\
& &+ \e \int_0^t\tilde{\eta}_r(\tau,r)\,r^{n-1}d\tau\notag\\
&=& J_1{}+{}J_2{}+{}J_3.
\end{eqnarray}
Using the energy estimate \eqref{Energy_est}
and Lemma \ref{Energy_control_lemma}, we have
$$
|J_1(t,r)|{}\leq{}M E_0.
$$
Also, using Step 2 and \eqref{id:0.4}, we have
\[
|m\tilde{\eta}_m|{}
\leq{}M\big(\rho|u|^2{}+{}\rho^\gamma{}+{}\rho|\tilde{\eta}_m(\bar{\rho},0)|\big){}
\leq{} M\big(\eta^*(\rho,m){}+{}\bar{\rho}^{2\theta}\rho\big).
\]
It follows by the energy estimate \eqref{Energy_est} that
\[
\big|\int_a^b J_2\, \omega\, dr\big|{}\leq{} M(\mbox{supp}\, \omega,T)\big(1{}+{}\bar{\rho}^\gamma b^n\big),
\]
for any nonnegative smooth function $\omega$
with $\mbox{supp} \, \omega\subset (a, b)$.
We write
\[
\tilde{\eta}_r{}={}\rho_r\tilde{\eta}_\rho{}+{}m_r\tilde{\eta}_m{}={}\rho_r(\tilde{\eta}_\rho+u\tilde{\eta}_m){}+{}\rho\tilde{\eta}_mu_r.
\]
Then we consider the integral
\begin{eqnarray*}
\int_a^b J_3\,\omega\,dr
{}&=&{} \e\int_{Q_T}\rho\big(\tilde{\eta}_\rho{}+{}u\tilde{\eta}_m\big) \omega_r \, r^{n-1}drd\tau
{}-{}(n-1)\e\int_{Q_T}\rho\big(\tilde{\eta}_\rho{}+{}u\tilde{\eta}_m\big)\omega\,r^{n-2}drd\tau \notag \\
{}&&-{} \e\int_{Q_T} \rho \big(\rho_r(\tilde{\eta}_\rho+u\tilde{\eta}_m)_\rho{}+{}
u_r(\tilde{\eta}_\rho+u\tilde{\eta}_m)_u{}-{}\tilde{\eta}_mu_r\big)\omega\,r^{n-1}drd\tau.
\end{eqnarray*}
Noticing that $\tilde{\eta}_\rho{}+{}u\tilde{\eta}_m{}={}\check{\eta}_\rho+u\check{\eta}_m{}+{}const.$ and
using Step 2 and estimates \eqref{Energy_est}--\eqref{est:energy_2}
and \eqref{3.8a},
we obtain
\begin{eqnarray*}
\big| \int_a^b J_3(t,r)\,\omega \,dr\big|
\leq M(a_1,T,\|\omega\|_{C^1})
{}+{}\frac{1}{2}\int_{Q_T}\big(\rho|u|^3{}+{}\rho^{\gamma+\theta}\big)\omega\,r^{n-1}drd\tau.
\end{eqnarray*}
To estimate $I_6$,
employing that $|(\tilde{\eta}_m)_\rho|\leq M\rho^{\theta-1}$, $|(\tilde{\eta}_m)_u|{}\leq{} M$,
and the energy estimate \eqref{Energy_est}, we have
\[
|I_6|{}\leq{} M\frac{\delta^2}{\e}\int_0^T\int_r^b \rho^3\,r^{n-1}drd\tau{}
\leq{}M\frac{\delta^2}{\e^2}b^n\leq{}M\frac{\delta}{\e}b^n,
\]
where we have used the result of Lemma \ref{lemma 3.2}
and $\frac{\delta}{\e}<1$ for small $\e>0$ in the last inequality.
\medskip
The last term $I_7$ is estimated in a similar fashion:
\[
\big|\int_a^b I_7\,\omega \, dr\big|{}\leq{} M(\mbox{supp}\, \omega)\frac{\delta^2}{\e}b^n
\leq{} M(\mbox{supp}\, \omega)\frac{\delta}{\e}b^n,
\]
since $\delta<1$ for small $\e>0$.
Finally, we multiply equation \eqref{eq:nash-1} by the nonnegative smooth function $\omega$,
integrate it over $(a,b)$, and use estimate \eqref{id:0.1}, together with
the above estimates for $I_j, j=1,\cdots, 7$, and an appropriate choice of $\delta$ to obtain
\begin{eqnarray*}
&&\int_{Q_t}\big(\rho|u|^3+\rho^{\gamma+\theta}\big)\,\omega\,r^{n-1}drd\tau\\
&& \leq M\big(1+\bar{\rho}^\gamma b^n {}+{}\frac{\delta}{\e}b^n\big)
+ \frac{1}{2} \int_{Q_t}\big(\rho|u|^3{}+{}\rho^{\gamma+\theta}\big)\,\omega\,r^{n-1}drd\tau.
\end{eqnarray*}
This completes the proof.
\end{proof}
\end{subsection}
\smallskip
\begin{subsection}{Weak Entropy Dissipation Estimates}
Let $a=a(\e)\to0$ and $b=b(\e)\to \infty$ as $\e\to 0$.
We choose $\bar{\rho}{}={}\bar{\rho}(\e){}\to{}0$ and $\delta{}={}\delta(\e){}\to{}0$ such that
\begin{equation}
\label{size delta}
\bar{\rho}^\gamma b^n+
\frac{\delta}{\e}b^{n} {}\leq{} M
\qquad \mbox{uniformly in $\e$}.
\end{equation}
With this choice of $(\bar{\rho},\delta)$,
the estimates in the lemmas of \S 3.1 are uniform as $\e{}\to{}0$.
Given a sequence of initial data functions as in Theorem \ref{main},
denote by $(\rho^\e,m^\e)$ the corresponding solution of the viscosity equations \eqref{eq:NS}
on $Q^\e{}={}[0,\infty)\times[a(\e),b(\e)]$ with $\bar{\rho}=\bar{\rho}(\e)$ as above.
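One admissible scaling satisfying \eqref{size delta} can be verified numerically. The exponents below are a hypothetical illustrative choice (taking $b=\e^{-1}$, $\bar{\rho}^\gamma=\e^{n+1}$, $\delta=\e^{n+2}$), not the specific choice required elsewhere in the analysis.

```python
def scaling_choice(eps, n=3):
    """Evaluate rho_bar^gamma * b^n + (delta/eps) * b^n for one
    hypothetical scaling with b -> infinity and rho_bar, delta -> 0:
    b = eps^(-1), rho_bar^gamma = eps^(n+1), delta = eps^(n+2).
    Each term then equals eps, so the sum stays bounded as eps -> 0.
    """
    b = eps**-1.0
    rho_bar_gamma = eps**(n + 1)     # i.e. rho_bar = eps^{(n+1)/gamma}
    delta = eps**(n + 2)
    return rho_bar_gamma * b**n + (delta / eps) * b**n
```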
\begin{proposition}
Let $(\eta,q)$ be an entropy pair of system \eqref{eq:Eu} with form \eqref{eta}--\eqref{q}
for a smooth, compactly supported function $\psi(s)$ on $\mathbb{R}$.
Then the entropy dissipation measures
\begin{equation}\label{dissipation-measures}
\eta(\rho^\e,m^\e)_t + q(\rho^\e,m^\e)_r
\qquad \mbox{are compact in }\,\, H_{loc}^{-1}.
\end{equation}
\end{proposition}
\begin{proof} We divide the proof into seven steps.
\medskip
1. Denote $\eta^\e{}={}\eta(\rho^\e,m^\e)$, $q^\e{}={}q(\rho^\e,m^\e)$, and $m^\e{}={}\rho^\e u^\e$.
We compute
\begin{eqnarray}
\label{approx_entropy}
\eta^\e_t{}+{}q^\e_r
{}&=&{}-\frac{n-1}{r}\rho^\e u^\e\left(\eta^\e_{\rho} +u^\e\eta^\e_m \right)
{}+{} \e\frac{n-1}{r}\Big(\rho^\e_r\eta^\e_{\rho}{}+{}r\big(\frac{1}{r}m^\e\big)_r\eta^\e_m \Big) \notag \\[2mm]
{}&&-{}\e\big(\rho^\e_r(\eta^\e_\rho)_r{}+{} m^\e_r(\eta^\e_m)_r\big){}+{}\e\eta^\e_{rr}{}-{}
(\delta\rho^2)_r\eta^\e_m\notag \\[2mm]
{}&=&{}I^\e_1{}+\cdots +{}I^\e_5.
\end{eqnarray}
\smallskip
2. We notice that
\begin{equation}
|I^\e_1(t,r)|{}\leq{}M\rho^\e|u^\e|\big(1+(\rho^\e)^\theta\big)
{}\leq{}M\big(\rho^\e|u^\e|^2+\rho^\e{}+{}(\rho^\e)^\gamma\big),
\end{equation}
bounded in $L^1\left(0,T; L^1_{loc}(0,\infty)\right)$, independent of $\e$
(all of the functions are extended by $0$ outside $(a,b)$).
\medskip
3. Next,
\begin{eqnarray}
I^\e_2{}={}\e\frac{n-1}{r^2}\big(\eta^\e-m^\e\eta^\e_m\big) + \e\big(\frac{n-1}{r}\eta^\e \big)_r
=: I_{2a}^\e+I_{2b}^\e.
\end{eqnarray}
Since
\[
|\eta^\e{}-{}m^\e\eta^\e_m|{}\leq{}M \big(\rho^\e{}+{}\rho^\e|u^\e|^2\big),
\]
then
\begin{equation}\label{3.26a}
I_{2a}^\e \to 0 \qquad \mbox{in $L^1_{loc}(\R_+^2)$ as $\e\to 0$}.
\end{equation}
On the other hand, if $\omega$ is smooth and compactly supported on $\mathbb{R}^2_+$, then
\begin{eqnarray}
\left|\int_{Q^\e} I_{2b}^\e\,\omega(t,r)\,drdt\right|
&=&\e\left|\int\frac{n-1}{r}\eta^\e\omega_r\,drdt\right|\notag\\
&\leq& \e M(\mbox{supp}\,\omega)\|\rho^\e\|_{L^{\gamma+1}(\mbox{supp}\,\omega)}\|\omega\|_{H^1(\mathbb{R}^2_+)}.
\notag
\end{eqnarray}
Since $\|\rho^\e\|_{L^{\gamma+1}(\mbox{supp}\,\omega)}$ is bounded, independent of $\e$ (see \eqref{HI_est_1}),
the above estimate shows that
\begin{equation}\label{3.26b}
I_{2b}^\e{}\to{}0\qquad \mbox{in $H^{-1}_{loc}(\mathbb{R}^2_+)$ as $\e\to 0$}.
\end{equation}
\medskip
4. For $I_3^\e$, we use Lemma \ref{Energy_control_lemma} to obtain
\begin{eqnarray}
|I_3^\e|&=&\e |\langle \nabla^2\eta(\rho^\e, m^\e)(\rho_r^\e,m^\e_r), (\rho_r^\e,m^\e_r)\rangle|\notag\\
&\le & M_\psi\, \e \langle \nabla^2\eta^*(\rho^\e, m^\e)(\rho_r^\e,m^\e_r), (\rho_r^\e,m^\e_r)\rangle.
\label{I-3a}
\end{eqnarray}
Combining \eqref{I-3a} with Proposition \ref{2.1} and Lemma \ref{Energy_control_lemma}, we
conclude that
\begin{equation}\label{I-3b}
I_3^\e \qquad \mbox{is uniformly bounded in $L^1(0,T; L^1_{loc}(0,\infty))$}.
\end{equation}
\medskip
5. To show that $I^\e_4{}\to{}0$ in $H^{-1}_{loc}$ as $\e\to 0$,
we need the following claim, adopting the arguments from \cite{LPS}.
{\it Claim}:
{\it Let $K\subset (0,\infty)$ be a compact subset. Then, for any $0<\Delta<1$ and $\e>0$,
\begin{equation}
\label{small_rho_est_1}
\int_0^T\int_K\e^{\frac{3}{2}}|\rho^\e_r|^2\,drdt{}\leq{}M\big(\sqrt{\e}\Delta^{\frac{\gamma}{2}}{}+{}\Delta+\e\big).
\end{equation}
In particular,
$$
\int_0^T\int_K\e^{\frac{3}{2}}|\rho^\e_r|^2\,drdt{}\to{}0,
$$
and
$$
\e\eta^\e_r{}\to{}0 \qquad\,\,\,\mbox{in}\,\, L^p(0,T;L^p_{loc}(0,\infty)) \quad\, \mbox{for}\,\,
p:=2-\frac{2}{\gamma+1}\in{}(1,2).
$$
}
We now prove the claim.
For simplicity of notation, we suppress the superscript $\e$ on all of the functions.
Define
\[
\phi(\rho){}={}\left\{\begin{array}{ll}
\frac{\rho^2}{2}, & \rho<\Delta,\\[2mm]
\frac{\Delta^2}{2}{}+{}\Delta(\rho-\Delta), &\rho\geq\Delta,
\end{array}
\right.
\]
so that
\begin{eqnarray*}
&&\phi''(\rho){}={}\chi_{\{\rho<\Delta\}}(\rho),\\
&&\rho\phi'(\rho)-\phi(\rho){}={}\frac{\rho^2}{2} \qquad \mbox{for $\rho<\Delta$},\\
&&\rho\phi'(\rho)-\phi(\rho){}={}\frac{\Delta^2}{2} \qquad\mbox{for $\rho\geq\Delta$,}
\end{eqnarray*}
where $\chi_{A}(\rho)$ is the indicator function that is $1$ when $\rho\in A$ and $0$ otherwise.
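The identities for the truncation $\phi$ can be verified directly; the following minimal numerical check mirrors them (the sample values of $\rho$ and $\Delta$ are illustrative only).

```python
def phi(rho, Delta):
    """Quadratic-linear truncation used in the claim."""
    if rho < Delta:
        return 0.5 * rho**2
    return 0.5 * Delta**2 + Delta * (rho - Delta)

def phi_prime(rho, Delta):
    # phi'(rho) = min(rho, Delta)
    return min(rho, Delta)

def legendre_term(rho, Delta):
    """rho * phi'(rho) - phi(rho): equals rho^2/2 below Delta
    and Delta^2/2 at or above Delta."""
    return rho * phi_prime(rho, Delta) - phi(rho, Delta)
```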
\smallskip
Let $\omega(r)$ be a nonnegative smooth, compactly supported function on $(0,\infty)$.
We compute from the continuity equation (the first equation in \eqref{eq:NS}):
\begin{eqnarray}
&& (\phi\omega)_t+(\phi u \omega)_r{}-{}\phi u\omega_r{}
-{}\frac{1}{2}\big(\rho^2\chi_{\{\rho<\Delta\}}{}+{}\Delta^2\chi_{\{\rho\geq\Delta\}} \big)\omega u_r{}
+{}\frac{n-1}{r}\rho u\min\{\rho,\Delta\} \notag \\
&&=\e(\phi'\omega\rho_r)_r{}-{}\e\min\{\rho,\Delta\}\omega'\rho_r
{}+{}\frac{(n-1)\e}{r}\omega\min\{\rho,\Delta\}\rho_r{}
-{}\e\omega|\rho_r|^2\chi_{\{\rho<\Delta\}}. \label{3.32}
\end{eqnarray}
Integrating \eqref{3.32} over $(0,T)\times(0,\infty)$, we obtain
\begin{eqnarray}
&&\int_0^T\int\e\omega|\rho_r|^2\chi_{\{\rho<\Delta\}}\,drdt\notag\\
&&={}-\int \phi\omega\big|^T_0\,dr{}+{}\int_0^T\int\phi u\omega_r\,dr dt \notag \\
&&\quad +\frac{1}{2}\int_0^T\int \big(\rho^2\chi_{\{\rho<\Delta\}}{}+{}\Delta^2\chi_{\{\rho\geq\Delta\}}\big)\omega u_r\,drdt{}
-{}\int_0^T\int \frac{n-1}{r}\rho u\min\{\rho,\Delta\}\,drdt \notag \\
&&\quad -\int_0^T\int\e\min\{\rho,\Delta\}\omega'\rho_r\,drdt{}
+{}\int_0^T\int\frac{(n-1)\e}{r}\omega\min\{\rho,\Delta\}\rho_r\,drdt\notag\\
&&={}J_1{}+ \cdots +{}J_6.
\end{eqnarray}
We estimate the integrals on the right:
\begin{eqnarray}
|J_1|{}\leq{}M(\mbox{supp}\,\omega)\Big(\Delta^2+\Delta\int_0^T\int_{\mbox{supp}\,\omega}\rho\,drdt\Big)
{}\leq M(\mbox{supp}\,\omega,T)\Delta;
\end{eqnarray}
\begin{eqnarray}
|J_2|{}&\leq&{}\int_0^T\int_{\mbox{supp}\,\omega}\big(\Delta|\rho u|\chi_{\{\rho<\Delta\}}{}
+{}(\Delta^2+\Delta\rho)|u|\chi_{\{\rho>\Delta\}}\big)\,drdt \notag \\
{}&\leq&{} \Delta\int_0^T\int_{\mbox{supp}\,\omega}\big(\rho{}+{}\rho|u|^2\big)\,drdt{}\notag\\
{}&\leq&{}M(\mbox{supp}\,\omega,T)\Delta;
\end{eqnarray}
\begin{eqnarray}
|J_3|{}\leq{}\frac{\Delta^{\frac{3}{2}}}{\sqrt{\e}}\int_0^T\int_{\mbox{supp}\,\omega} \big(\rho{}+{}\e\rho|u_r|^2\big)\,drdt
{}\leq{}M(\mbox{supp}\,\omega,T)\frac{\Delta}{\sqrt{\e}};
\end{eqnarray}
\begin{eqnarray}
|J_4|{}\leq{}M(\mbox{supp}\,\omega)\Delta\int_0^T\int_{\mbox{supp}\,\omega}\big(\rho{}+{}\rho|u|^2\big)\,drdt
{}\leq M(\mbox{supp}\,\omega,T)\Delta;
\end{eqnarray}
\begin{eqnarray}
|J_5|{}&\leq&{}\sqrt{\e} \Delta^{\frac{\gamma}{2}}\int_0^T\int_{\mbox{supp}\,\omega}
\sqrt{\e}\rho^{\frac{\gamma-2}{2}}|\rho_r|\,drdt{}+{}\e\int_0^T\int_{\mbox{supp}\,\omega}\rho |\rho_r|\chi_{\{\rho<\Delta\}}\omega'\,drdt \notag\\[2mm]
{}&\leq&{}
\frac{\e}{4}\int_0^T\int\rho^{\gamma-2}|\rho_r|^2\omega\,drdt{}+{}2\e\int_0^T\int_{\mbox{supp}\,\omega}\rho^2\frac{|\omega'|^2}{\omega}\,drdt
{}+{}\sqrt{\e}\Delta^{\frac{\gamma}{2}}M(\mbox{supp}\,\omega,T) \notag \\[2mm]
{}&\leq&{} \frac{\e}{4}\int_0^T\int|\rho_r|^2\omega\,drdt{}+{}\e M(\mbox{supp}\,\omega,T) \notag\\[2mm]
&&+ \sqrt{\e}\Delta^{\frac{\gamma}{2}}M(\mbox{supp}\,\omega, T).
\end{eqnarray}
Moreover, $J_6$ is estimated in the same way as $J_5$.
Thus, estimate \eqref{small_rho_est_1} is proved.
Now we prove the second part of the claim.
\smallskip
Notice that
\begin{eqnarray}
|\eta_r|{}\leq{}M\big(|\rho_r||\eta_\rho+u\eta_m|{}+{}\rho |u_r|\big)
{}\leq{}M\big(|\rho_r|(1+\rho^\theta){}+{}\rho|u_r|\big).
\end{eqnarray}
Let $q\in (1,2)$ be chosen later. We compute
\begin{eqnarray}
\int_0^T\int_K \e^q|\eta_r|^q\,drdt
{}&\leq&{}M\int_0^T\int_K\e^q|\rho_r|^q\,drdt{}+{}
\int_0^T\int_K\e^q\big||\rho_r|\rho^\theta{}+{}\rho|u_r|\big|^q\,drdt \notag \\[2mm]
{}&\leq&{} \Delta{}+{}\frac{M}{\Delta}\int_0^T\int_K\e^{2q}|\rho_r|^2\,drdt \notag \\[2mm]
&&{}+{}M
\int_0^T\int_K\e^q\rho^{\frac{q}{2}}\big(|\rho^{\frac{\gamma-2}{2}}\rho_r|^q
{}+{}|\rho^{\frac{1}{2}}u_r|^q\big)\,drdt \notag \\[2mm]
{}&\leq&{}\Delta{}+{}\frac{M}{\Delta}\int_0^T\int_K\e^{\frac{3}{2}}|\rho_r|^2\,drdt \notag\\[2mm]
&&{}+{}\e^{q-1}M\int_0^T\int_K\big(\e(\rho^{\gamma-2}|\rho_r|^2{}+{}\rho|u_r|^2){}+{}\e\rho^{\frac{q}{2-q}}\big)\,drdt \notag \\[2mm]
{}&\leq&{}\Delta{}+{}\frac{M}{\Delta}\int_0^T\int_K\e^{\frac{3}{2}}|\rho_r|^2\,drdt{}+{}\e^{q-1}C(T,K),
\end{eqnarray}
provided that
$\frac{2}{2-q}={}\gamma+1$, which holds if and only if $q{}={}2 - \frac{2}{\gamma+1}$.
Combining this with estimate \eqref{small_rho_est_1}, we arrive at the conclusion of the claim.
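The exponent bookkeeping behind the choice of $q$ is a one-line computation; the sketch below checks it for a few illustrative values of $\gamma$.

```python
def hoelder_exponent(gamma):
    """The exponent q with 2/(2-q) = gamma + 1, i.e. q = 2 - 2/(gamma+1).

    For gamma > 1 this always lies strictly between 1 and 2.
    """
    return 2.0 - 2.0 / (gamma + 1.0)
```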
\smallskip
6. Consider the last term $I_5^\e$. This term is bounded in $L^1(0,T; L^1_{loc}(0,\infty))$.
Indeed, for a compact set $K\subset(0,\infty)$,
using the energy estimates \eqref{Energy_est} and Lemma \ref{lemma 3.2}, we obtain
\begin{eqnarray*}
\int_0^T\int_K|I_5|\,drdt{}&\leq&{}M_\psi\int_0^T\int_K \delta\rho|\rho_r|\,drdt\\
&\leq& M(\psi,K)\Big(1 {}+{}\frac{\delta^2}{\e}\int_0^T\int_K \rho^{4-\gamma}\,drdt\Big)\\
&\leq& M(\psi,K)\Big(1{}+{}\frac{\delta^2}{\e} +\frac{\delta^2}{\e}\int_0^T\int_K\rho^3\,drdt\Big)\\
&\leq& M(\psi,K)\Big(1{}+{}\frac{\delta^2}{\e} +\frac{\delta^2}{\e}b^n\Big).
\end{eqnarray*}
From the choice of $\delta$, the term on the right is uniformly bounded in $\e$.
\smallskip
7. Combining Steps 1--6, we conclude
\begin{equation}
\eta(\rho^\e, m^\e)_t{}+{}q(\rho^\e, m^\e)_r{}={}f^\e{}+{}g^\e,
\end{equation}
where $f^\e$ is bounded in $L^1\left(0,T; L^1_{loc}(0,\infty)\right)$ and $g^\e{}\to{}0$ in
$W^{-1,q}_{loc}(\mathbb{R}^2_+)$ for some $q{}\in{}(1,2)$.
This implies that, for $1<q_1<2$,
\begin{equation}\label{h-3}
\eta(\rho^\e, m^\e)_t{}+{}q(\rho^\e, m^\e)_r{}
\quad\mbox{are confined to a compact subset of }\,\,
W^{-1,q_1}_{loc}.
\end{equation}
\smallskip
On the other hand, using formulas \eqref{eta}--\eqref{q}
and the estimates in Proposition \ref{2.1} and Lemma \ref{HI:2},
we obtain that, for any smooth, compactly supported function $\psi(s)$ on $\R$,
$$
\eta(\rho^\e,m^\e),\,q(\rho^\e,m^\e) \qquad \mbox{are
uniformly bounded in } L^{q_2}_{loc}(\R_+^2),
$$
for $q_2=\gamma+1>2$ when $\gamma>1$.
This implies that, for some $q_2>2$,
\begin{equation}\label{h-4}
\eta(\rho^\e, m^\e)_t{}+{}q(\rho^\e, m^\e)_r{}
\qquad\mbox{are uniformly bounded in }\,\, W^{-1,q_2}_{loc}.
\end{equation}
The interpolation compactness theorem ({\it cf.} \cite{Chen1,DCL})
indicates that, for $q_1>1$, $q_2\in(q_1, \infty]$, and $q_0\in [q_1,
q_2)$,
\[
\begin{array}{l}
\big(\mbox{compact set of}\: W^{-1,q_1}_{loc}(\R_+^2)\big)
\cap \big(\mbox{bounded set of}\: W^{-1,q_2}_{loc}(\R_+^2)\big)\\[1mm]
\subset \big(\mbox{compact set of}\: W^{-1,q_0}_{loc}(\R_+^2)\big),
\end{array}
\]
which is a generalization of Murat's lemma in \cite{Murat,Tartar}.
Combining this interpolation compactness theorem for $1<q_1<2,
q_2>2$, and $q_0=2$ with the facts in \eqref{h-3}--\eqref{h-4}, we
conclude the result.
\end{proof}
\end{subsection}
\begin{subsection}{Strong Convergence and the Entropy Inequality}
The {\it a priori} estimates and compactness properties we have obtained in \S 3.1--\S 3.2
imply that the viscous solutions satisfy the compensated compactness
framework in Chen-Perepelitsa \cite{CP}.
Then the compactness theorem established in \cite{CP} for the case $\gamma>1$ (also see LeFloch-Westdickenberg \cite{LW})
yields that
\[
(\rho^\e,m^\e){}\to{}(\rho,m) \qquad \mbox{a.e. $(t,r)\in\mathbb{R}^2_+\quad $ in $L^{p}_{loc}\left(\mathbb{R}^2_+\right)\times L^{q}_{loc}\left(\mathbb{R}^2_+\right)$}
\]
for $p\in [1,\gamma+1)$ and $q\in [1, \frac{3(\gamma+1)}{\gamma+3})$.
This requires the uniform bound \eqref{HI_est_1}, the estimate of Lemma \ref{HI:2}, and the inequality:
\[
|m|^q{}={}\rho^{\frac{q}{3}}|u|^q \rho^{\frac{2q}{3}}{}\leq{}\rho|u|^3{}+{}\rho^{\gamma+1}
\]
for $q{}={}\frac{3(\gamma+1)}{\gamma+3}$.
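The inequality above is Young's inequality applied to the factorization $|m|^q=\big(\rho^{q/3}|u|^q\big)\rho^{2q/3}$, using that $2q/(3-q)=\gamma+1$ for this $q$. A numerical spot-check (the sample values are illustrative only):

```python
def momentum_bound_holds(rho, u, gamma):
    """Check |m|^q <= rho*|u|^3 + rho^(gamma+1) with q = 3(gamma+1)/(gamma+3).

    Young's inequality with conjugate exponents 3/q and 3/(3-q) gives
    |m|^q = (rho^{q/3}|u|^q) * rho^{2q/3}
          <= rho*|u|^3 + rho^{2q/(3-q)},
    and the exponent 2q/(3-q) equals gamma + 1 for this choice of q.
    """
    q = 3.0 * (gamma + 1.0) / (gamma + 3.0)
    # Verify the exponent identity before testing the inequality.
    assert abs(2.0 * q / (3.0 - q) - (gamma + 1.0)) < 1e-12
    m = rho * abs(u)
    return m**q <= rho * abs(u)**3 + rho**(gamma + 1.0)
```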
From the same estimates, we also obtain the convergence of the energy as $\e\to 0$:
$$
\eta^*(\rho^\e, m^\e)\to \eta^*(\rho, m)
\qquad \mbox{in $L^{1}_{loc}\left(\mathbb{R}^2_+\right)$}.
$$
Since the energy $\eta^*(\rho,m)$ is a convex function,
by passing to the limit in \eqref{est:energy_2}, we obtain
\[
\int_{t_1}^{t_2} \int_0^\infty
\eta^*(\rho, m)
(t,r)\,r^{n-1}drdt{}\leq{}
(t_2-t_1)\int_0^\infty \eta^*(\rho_0, m_0)
(r)\,r^{n-1}dr,
\]
which implies that, for {\it a.e.} $t\ge 0$,
\begin{equation}\label{energy-control-a}
\int_{\R_+}
\eta^*(\rho, m)
(t,r)\,r^{n-1}dr
{}\leq{}
\int_0^\infty \eta^*(\rho_0, m_0)
(r)\,r^{n-1}dr.
\end{equation}
This implies that there is no concentration formed in the density $\rho$ at the
origin $r=0$.
Furthermore, we multiply both sides of \eqref{eq:energy-2} by a smooth function $\varphi(t)\in C_0^1(\R_+)$ with $\varphi(0)=0$,
integrate it over $\R_+^2$, and pass to the limit $\e\to 0$ to obtain
$$
\int_{\R_+^2}\eta^*(\rho, m)\varphi'(t) \, dr dt \ge 0,
$$
which, together with \eqref{energy-control-a}, concludes \eqref{finite_energy}.
Finally, the energy estimates \eqref{Energy_est}--\eqref{est:energy_2}, estimate \eqref{HI_est_1},
and Lemma \ref{HI:2}
imply the equi-integrability of the sequences
$$
\eta_\psi^\e, \quad q_\psi^\e, \quad m^\e\partial_\rho\eta_{\psi}^\e,\quad
\frac{(m^\e)^2}{\rho^\e}\partial_m\eta_{\psi}^\e,
$$
for any $\psi(s)$ that is convex with subquadratic growth at infinity:
$\lim_{s\to\infty}\frac{|\psi(s)|}{s^2}=0$.
Passing to the limit in \eqref{approx_entropy}, multiplied by $r^n$ and integrated against
a smooth, compactly supported function on $(0,\infty)\times(0,\infty)$, we obtain \eqref{entropy_sol}.
\end{subsection}
\begin{subsection}{Limit in the Equations}
Let $\varphi(t,r)$ be a smooth, compactly supported function on $[0,\infty)\times[0,b(\e))$,
with $\varphi_r(t,r){}={}0$ for all $r$ close to $0$.
Assume that the viscosity solutions $(\rho^\e,$ $m^\e)$ are extended by $0$ outside of $[a(\e),b(\e)]$.
Multiplying the first equation
in \eqref{eq:NS} by $r^{n-1}\varphi$ and then integrating it over $\mathbb{R}^2_+$, we have
\begin{eqnarray*}
&&
\int_{\mathbb{R}^2_+} \Big(\rho^\e\varphi_t{}+{}m^\e\varphi_r{}+{}\e\rho^\e(\varphi_{rr}
{}+{}\frac{n-1}{r}\varphi_r)\Big)\,r^{n-1}drdt{}\notag\\
&&\,\,\, +{}\int_{\mathbb{R}_+}\rho^\e_0(r)\varphi(0,r)\,r^{n-1}dr{}={}0.
\label{weak_form_eq_1}
\end{eqnarray*}
Note that, by the energy inequality, $\int_0^1(\rho^\e)^\gamma\,r^{n-1}dr$
is bounded, independent of $\e$, which implies that there is no concentration of mass at $r{}={}0.$
\smallskip
Passing to the limit in the above equation, we deduce
\begin{eqnarray*}
\int_{\mathbb{R}^2_+} \big(\rho \varphi_t{}+{}m\varphi_r\big){}\,r^{n-1}drdt
{}+{}\int_{\mathbb{R}_+}\rho_0(r)\varphi(0,r)\,r^{n-1}dr{}={}0,
\end{eqnarray*}
which can be extended to hold for all smooth, compactly supported functions $\varphi(t,r)$
on $[0,\infty)\times[0,\infty)$ with $\varphi_r(t,0){}={}0$.
\smallskip
Consider now the momentum equation in \eqref{eq:Eu}. Let $\varphi(t,r)$ be
a smooth, compactly supported function on $[0,\infty)\times(a(\e),b(\e))$.
Multiplying the second equation
in \eqref{eq:NS} by $r^{n-1}\varphi$ and then integrating over $\mathbb{R}^2_+$, we obtain
\begin{eqnarray*}
&&\int_{\mathbb{R}^2_+} \Big(m^\e\varphi_t{}+{}\frac{(m^\e)^2}{\rho^\e}\varphi_r
{}+{}p_\delta(\rho^\e)\big(\varphi_r{}+{}\frac{n-1}{r}\varphi\big){}+{}\e m^\e\varphi_{rr}\Big)\,r^{n-1}drdt{}\\
&& +{}\int_{\mathbb{R}_+} m^\e_0(r)\varphi(0,r)\,r^{n-1}dr{}={}0.
\end{eqnarray*}
Passing to the limit, we find
\begin{eqnarray*}
\label{weak_form_eq_2}
\int_{\mathbb{R}^2_+} \Big(m\varphi_t{}+{}\frac{m^2}{\rho}\varphi_r{}+{}p(\rho)\big(\varphi_r{}+{}\frac{n-1}{r}\varphi\big)\Big)\,r^{n-1}drdt
{}+{}\int_{\mathbb{R}_+} m_0(r)\varphi(0,r)\,r^{n-1}dr{}={}0.
\end{eqnarray*}
Note that the term containing $\delta \rho^2$ converges to zero by Lemma \ref{lemma 3.2}
since $\delta=\delta(\e)\to 0$ as $\e\to 0$.
This equation can be extended to hold for all smooth, compactly supported functions $\varphi(t,r)$
on $[0,\infty)\times[0,\infty)$ with $\varphi(t,0){}={}\varphi_r(t,0){}={}0$,
since $(\frac{m^2}{\rho}+\rho^\gamma)(t,r)r^{n-1}\in L^1_{loc}([0,\infty)\times [0,\infty))$.
\end{subsection}
\end{section}
\vspace{.25in}
\noindent
{\bf Acknowledgements:} The research of
Gui-Qiang G. Chen was supported in part by
the UK EPSRC Science and Innovation
Award to the Oxford Centre for Nonlinear PDE (EP/E035027/1),
the UK EPSRC Award to the EPSRC Centre for Doctoral Training
in PDEs (EP/L015811/1),
the NSFC under a joint project Grant 10728101, and
the Royal Society--Wolfson Research Merit Award (UK).
The research of Mikhail Perepelitsa was
supported in part by the NSF Grant DMS-1108048.
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences,
Cambridge, for support and hospitality during the 2014 Programme on
{\it Free Boundary Problems and Related Topics} where work on this paper was undertaken.
\bigskip
\section{Introduction}
Mean curvature flow is a geometric evolution equation for surfaces $\Sigma\subset\R^3$, under which each point $x\in\Sigma$ moves in the inward normal direction proportionally to the mean curvature of $\Sigma$ at $x$. More generally, mean curvature flow can be defined as the gradient flow for the area functional. This flow has many applications in geometry and topology, as well as in image denoising.
Under mean curvature flow, spheres and cylinders will shrink by dilations. However, these are not the only examples of such \emph{self-shrinkers}. Angenent found a self-shrinking torus in 1989 \cite{a92}, and since then many other examples have been constructed \cite{dk17, dln18, kkm18, km14, m15, mo11, n09, n10, n14}. These surfaces are important because they model singularities that develop under mean curvature flow. For example, if the initial surface is convex, then it will become rounder as it shrinks, eventually becoming close to a sphere before it shrinks to a point. On the other hand, if the initial surface is not convex, then it may develop a singularity that looks like a different self-shrinker.
Colding and Minicozzi \cite{cm12} show that spheres and cylinders are stable, in the sense that if we perturb a sphere or a cylinder, then the resulting mean curvature flow will still develop a spherical or cylindrical singularity, respectively. Other self-shrinkers are unstable. Nonetheless, the space of unstable variations, defined appropriately, is still finite-dimensional. As in Morse theory, this dimension is called the \emph{index} of the self-shrinker. Note, however, that there are different conventions for the index in the literature; the disagreement is about whether or not to include translations and dilations, which give rise to singularities of the same shape but at different places and times. We exclude translations and dilations in this work.
\subsection{Results}
In this paper, we compute the index of the Angenent torus. However, the computational methods in this paper apply equally well to any rotationally symmetric immersed self-shrinker. See \cite{dk17} for infinitely many examples of such self-shrinkers. More generally, we expect that combining the methods in this paper with finite element methods could be used to compute the indices of self-shrinkers that do not have rotational symmetry.
\begin{result}
Excluding dilations and translations, the index of the Angenent torus is $5$.
\end{result}
Computing the index of a self-shrinker amounts to counting the number of negative eigenvalues of a certain differential operator called the \emph{stability operator}, which acts on functions that represent normal variations of the self-shrinker. In this paper, we construct a suitable finite-dimensional approximation to the stability operator, and then we compute the eigenvalues and eigenvectors of this matrix. We recover the facts \cite{cm12} that the variation corresponding to dilation has eigenvalue $-1$ and that the variations corresponding to translations have eigenvalue $-\frac12$. Additionally, we discover that, surprisingly, two other variations also have eigenvalue $-1$, and these variations have simple explicit formulas.
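The index computation described above reduces to counting negative eigenvalues of a symmetric matrix approximating the stability operator. The sketch below is a generic illustration of this counting step on a toy 1-D discretized Schr\"odinger-type operator with an attractive potential; the matrix, potential, and parameters are hypothetical stand-ins, not the actual discretization of the Angenent torus stability operator used in the paper.

```python
import numpy as np

def count_negative_eigenvalues(h_matrix, tol=1e-10):
    """Count eigenvalues below -tol of a real symmetric matrix."""
    eigenvalues = np.linalg.eigvalsh(h_matrix)
    return int(np.sum(eigenvalues < -tol))

def toy_operator(n_points=200, well_depth=5.0):
    """Discretize -d^2/dx^2 - well_depth * exp(-x^2) on [-10, 10]
    with Dirichlet boundary conditions, via second-order finite
    differences. An attractive 1-D well always has at least one
    bound state, so the matrix has at least one negative eigenvalue."""
    x = np.linspace(-10.0, 10.0, n_points)
    dx = x[1] - x[0]
    main = 2.0 / dx**2 - well_depth * np.exp(-x**2)
    off = -np.ones(n_points - 1) / dx**2
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
```

A finer grid, or a basis adapted to the rotational symmetry, would be needed for the actual stability operator; the counting step itself is unchanged.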
\subsection{Relationship to other work}
One can view this paper in several ways.
\begin{itemize}
\item This paper can be viewed as a sequel to \cite{bk19}, where we compute the Angenent torus and its entropy. These results are the starting point for the index computation in the current work.
\item This paper can be viewed as a numerical implementation of \cite{l16}, where Liu gives a formula for the stability operator of rotationally symmetric self-shrinkers and shows, in particular, that the index of the Angenent torus is at least $3$.
\item This paper can be viewed as a numerical companion to \cite{bk20a}, where we prove upper bounds on the index of self-shrinking tori. The numerical discovery of the two new variations with eigenvalue $-1$ inspired a simple new formula for the stability operator \cite[Theorem 3.7]{bk20a}. In turn, this formula gives a two-line proof that these numerically discovered variations do indeed have eigenvalue $-1$ \cite[Theorem 6.1]{bk20a}.
\end{itemize}
\subsection{Outline}
In Section~\ref{sec:preliminaries}, we introduce notation and give basic properties of mean curvature flow, self-shrinkers, and the stability operator, both in general and in the rotationally symmetric case. In Section~\ref{sec:methods}, we discuss the numerical methods we use to compute the eigenvalues and eigenfunctions of the stability operator. We present our results in Section~\ref{sec:results}, listing the first several eigenvalues of the stability operator and presenting plots of the corresponding variations of the Angenent torus cross-section. We obtain the index by counting the variations with negative eigenvalues; we present three-dimensional plots of all of these variations in Figure~\ref{fig:3d}. Next, in Section~\ref{sec:error}, we estimate the error in our numerical computations by giving plots that show the rate at which our numerically computed eigenvalues converge as we increase the number of sample points. We summarize the first few computed eigenvalues and our error estimates in Table~\ref{tab:errorplots}. Finally, in Section~\ref{sec:future}, we give a few promising directions for future work. Additionally, we include Appendix~\ref{sec:asymptotics}, where, rather than looking at small eigenvalues, we instead take a cursory look at eigenvalue asymptotics.
\section{Preliminaries: Mean curvature flow and self-shrinkers}\label{sec:preliminaries}
In this section, we introduce mean curvature flow for surfaces in $\R^3$, define self-shrinkers and their index, and discuss the rotationally symmetric case. For a more detailed introduction in general dimension, see the companion paper \cite{bk20a}. We also refer the reader to \cite{cm12, cmp15, h90, l16}.
\subsection{Notation for surfaces and mean curvature flow}
Let $\Sigma\subset\R^3$ be an immersed oriented surface. Let $\n$ denote the unit normal vector to $\Sigma$. Given a point $x\in\Sigma$ and a vector $v\in T_x\R^3$, let $v^\perp$ denote the scalar projection of $v$ onto $\n$, namely $v^\perp=\langle v,\n\rangle$. Let $v^\top$ denote the projection of $v$ onto $T_x\Sigma$, namely $v^\top=v-v^\perp\n$.
\begin{definition}
Let $A_\Sigma$ denote the \emph{second fundamental form} of $\Sigma$. That is, given $v,w\in T_x\Sigma$, let
\begin{equation*}
A_\Sigma(v,w)=(\nabla_vw)^\perp.
\end{equation*}
\end{definition}
\begin{definition}
Let $H_\Sigma$ denote the \emph{mean curvature} of $\Sigma$, defined with the normalization convention $H_\Sigma=-\tr A_\Sigma$.
\end{definition}
That is, if $e_1,e_2$ is an orthonormal frame at a particular point $x\in\Sigma$, then, at that point $x$, $H_\Sigma=-A_\Sigma(e_1,e_1)-A_\Sigma(e_2,e_2)$.
\begin{definition}
A family of surfaces $\Sigma_t$ evolves under \emph{mean curvature flow} if
\begin{equation*}
\dot x=-H_\Sigma\n.
\end{equation*}
That is, each point on $\Sigma$ moves with speed $H_\Sigma$ in the inward normal direction.
\end{definition}
\subsection{Self-shrinkers}
A surface $\Sigma$ is a self-shrinker if it evolves under mean curvature flow by dilations. For this paper, however, we will restrict this terminology to refer only to surfaces that shrink to the origin in one unit of time.
\begin{definition}
A surface $\Sigma$ is a \emph{self-shrinker} if $\Sigma_t=\sqrt{-t}\,\Sigma$ is a mean curvature flow for $t<0$.
\end{definition}
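For example, the round sphere of radius $2$ centered at the origin is a self-shrinker; this is a standard computation rather than a result of this paper. With our conventions, the sphere of radius $R$ with outward unit normal has $H=2/R$, so the spheres $\Sigma_t=\sqrt{-t}\,\Sigma$ have radius $R(t)=2\sqrt{-t}$ and satisfy
\begin{equation*}
\dot R(t)=-\frac1{\sqrt{-t}}=-\frac2{R(t)}=-H_{\Sigma_t},
\end{equation*}
which is exactly the mean curvature flow equation for the radial motion of a sphere.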
We have an extremely useful variational formulation for self-shrinkers as critical points of Huisken's $F$-functional.
\begin{definition}
The \emph{$F$-functional} takes a surface and computes its weighted area via the formula
\begin{equation*}
F(\Sigma)=\frac1{4\pi}\int_\Sigma e^{-\abs x^2/4}\,d\vol_\Sigma.
\end{equation*}
\end{definition}
The role of the normalization constant $\frac1{4\pi}$ is to ensure that if $\Sigma$ is a plane through the origin, then $F(\Sigma)=1$.
Since varying a surface in a tangential direction does not change the surface, we can define the critical points of $F$ solely in terms of normal variations.
\begin{definition}
$\Sigma$ is a \emph{critical point} of $F$ if for any $f\colon\Sigma\to\R$ with compact support, $F$ does not change to first order as we vary $\Sigma$ by $f$ in the normal direction. More precisely, if we let $\Sigma_s=\{x+sf\n\mid x\in\Sigma\}$, then we have $\dd s\bigr\rvert_{s=0}F(\Sigma_s)=0$.
\end{definition}
\begin{proposition}[\cite{cm12}]\label{prop:shrinkercritical}
$\Sigma$ is a self-shrinker if and only if $\Sigma$ is a critical point of $F$.
\end{proposition}
The definition of the $F$-functional is not invariant under translation and dilation: The Gaussian weight is centered around the origin in $\R^3$, and the length scale of the Gaussian is designed so that the critical surfaces become extinct in exactly one unit of time. Colding and Minicozzi introduce a related concept called the entropy, which coincides with the $F$-functional on self-shrinkers but is invariant under translation and dilation.
\begin{definition}
The \emph{entropy} of a surface $\Sigma\subset\R^3$ is the supremum of the $F$-functional evaluated on all translates and dilates of $\Sigma$, that is $\sup_{x_0,t_0}F(x_0+\sqrt{t_0}\Sigma)$.
\end{definition}
If $\Sigma$ is a self-shrinker, defined as above to shrink to the origin in one unit of time, then the supremum among translates and dilates is attained at $\Sigma$ itself, so the entropy of $\Sigma$ coincides with $F(\Sigma)$. However, entropy-decreasing variations of $\Sigma$ and $F$-decreasing variations of $\Sigma$ are not quite the same: when we ask about entropy-decreasing variations, we exclude the ``trivial'' $F$-decreasing variations of translation and dilation.
\subsection{The stability operator}
Given a critical point of a flow, the next natural question to ask is about the stability of that critical point. If we perturb a self-shrinker, will the resulting surface flow back to the self-shrinker under the gradient flow for the $F$-functional, or will it flow to a different critical point? What is the maximum dimension of a space of unstable variations? As in Morse theory, answering this question amounts to computing the eigenvalues of the Hessian of the $F$-functional.
\begin{definition}\label{def:stability}
Let $\Sigma$ be a self-shrinker. The \emph{stability operator} $L_\Sigma$ is a differential operator acting on functions $f\colon\Sigma\to\R$ that is the Hessian of $F$ in the following sense.
\begin{itemize}
\item For any $f\colon\Sigma\to\R$,
\begin{equation}\label{eq:stability}
\ddtwo s\Bigr\rvert_{s=0}F(\Sigma_s)=\frac1{4\pi}\int_\Sigma f(-L_\Sigma)f\,e^{-\abs x^2/4}\,d\vol_\Sigma,
\end{equation}
where $\Sigma_s=\{x+sf\n\mid x\in\Sigma\}$ is the normal variation corresponding to $f$, and
\item $L_\Sigma$ is symmetric with respect to the Gaussian weight, in the sense that $\int_\Sigma f_1L_\Sigma f_2\,e^{-\abs x^2/4}\,d\vol_\Sigma=\int_\Sigma f_2L_\Sigma f_1\,e^{-\abs x^2/4}\,d\vol_\Sigma$.
\end{itemize}
\end{definition}
The reason for the odd choice of sign is so that the differential operator $L_\Sigma$ has the same leading terms as the Laplacian $\Delta_\Sigma=\div_\Sigma\grad_\Sigma$. More precisely, Colding and Minicozzi \cite{cm12} compute that
\begin{equation*}
L_\Sigma=\L_\Sigma+\abs{A_\Sigma}^2+\tfrac12,
\end{equation*}
where
\begin{equation*}
\L_\Sigma f=e^{\abs x^2/4}\div_\Sigma\left(e^{-\abs x^2/4}\grad_\Sigma f\right).
\end{equation*}
Consequently, we make the following sign convention for eigenvalues.
\begin{definition}[Sign convention for eigenvalues]
We say that $f\neq0$ is an eigenfunction of a differential operator $L$ with eigenvalue $\lambda$ if $-Lf = \lambda f$.
\end{definition}
We conclude that eigenfunctions of the stability operator $L_\Sigma$ with negative eigenvalues are unstable variations of the self-shrinker $\Sigma$: If we vary $\Sigma$ in that direction, then the gradient flow for $F$ will take the surface away from $\Sigma$. Meanwhile, eigenfunctions of the stability operator $L_\Sigma$ with positive eigenvalues are stable variations of $\Sigma$: There exists a gradient flow line that approaches $\Sigma$ from that direction.
Translating $\Sigma$ in the direction $v\in\R^3$ corresponds to the normal variation $f=v^\perp$, and dilating $\Sigma$ corresponds to the normal variation $f=H_\Sigma$. Colding and Minicozzi compute the corresponding eigenvalues of the stability operator.
\begin{proposition}[\cite{cm12}]
For any vector $v\in\R^3$, we have $L_\Sigma v^\perp=\frac12v^\perp$. Meanwhile, for dilation, we have $L_\Sigma H_\Sigma=H_\Sigma$.
\end{proposition}
Thus, assuming these functions are nonzero, $v^\perp$ and $H_\Sigma$ are eigenfunctions of $L_\Sigma$, giving us $4$ independent eigenfunctions. With our sign convention, the eigenvalue corresponding to $v^\perp$ is $-\frac12$, and the eigenvalue corresponding to $H_\Sigma$ is $-1$. Additionally, because $F$ is invariant under rotations about the origin, a variation of $\Sigma$ corresponding to a rotation about the origin will be an eigenfunction with eigenvalue $0$.
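As a sanity check, this is easy to verify on the self-shrinking sphere of radius $2$ (a standard computation): there $H_\Sigma\equiv1$ and $\abs{A_\Sigma}^2=\tfrac12$, so
\begin{equation*}
L_\Sigma H_\Sigma=\L_\Sigma H_\Sigma+\left(\abs{A_\Sigma}^2+\tfrac12\right)H_\Sigma=0+\left(\tfrac12+\tfrac12\right)\cdot1=H_\Sigma,
\end{equation*}
confirming that, with our sign convention, $H_\Sigma$ is an eigenfunction with eigenvalue $-1$.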
Because $L_\Sigma$ has the same symbol as $\Delta_\Sigma$, it has a finite number of negative eigenvalues, at least in the case of compact $\Sigma$. Usually, one defines the index of a critical point of a gradient flow to be the number of negative eigenvalues of the Hessian. However, because translations and dilations do not change the shape of the self-shrinker, we exclude them in this context.
\begin{definition}
The \emph{index} of a self-shrinker $\Sigma$ is the number of negative eigenvalues of the stability operator $L_\Sigma$, excluding those eigenvalues corresponding to translations and dilations.
\end{definition}
Assuming that $\Sigma$ is not invariant under any translations or dilations, its index is simply $4$ less than the usual Morse index.
Under mild assumptions, Colding and Minicozzi show that the only self-shrinkers with index zero are planes, round spheres, and round cylinders \cite{cm12}.
\subsection{Rotationally symmetric self-shrinkers}
If $\Sigma\subset\R^3$ is a hypersurface with $SO(2)$ rotational symmetry, we can understand it in terms of its cross-sectional curve $\Gamma$. We refer the reader to \cite{l16}.
We will use cylindrical coordinates $(r,\theta,z)$ on $\R^3$.
\begin{definition}
We say that a hypersurface is \emph{rotationally symmetric} if it is invariant under rotations about the $z$-axis.
\end{definition}
If $\Sigma$ is rotationally symmetric, we let $\Gamma$ denote its $\theta=0$ cross-section, which we also think of as being a curve in the half-plane $\{(r,z)\mid r\ge0,z\in\R\}$.
We can write the $F$-functional in terms of $\Gamma$.
\begin{proposition}\label{prop:FGamma}
If $\Sigma$ is a rotationally symmetric hypersurface with cross-section $\Gamma$, then
\begin{equation*}
F(\Sigma)=\frac12\int_\Gamma re^{-\abs x^2/4}\,d\ell,
\end{equation*}
where $d\ell$ denotes integration with respect to arc length.
\end{proposition}
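As a quick numerical illustration of Proposition~\ref{prop:FGamma} (our own sketch, separate from the supplementary notebook), we can evaluate the cross-section integral for the self-shrinking sphere of radius $2$, whose cross-section is a semicircle and whose $F$-value is $4/e\approx1.4715$:

```python
import numpy as np

# Cross-section of the round sphere of radius 2 (a self-shrinker):
# the semicircle r = 2 sin(phi), z = 2 cos(phi), phi in [0, pi],
# with arc length element d(ell) = 2 d(phi).
phi = np.linspace(0.0, np.pi, 100001)
r = 2 * np.sin(phi)
z = 2 * np.cos(phi)
# integrand (1/2) r e^{-|x|^2/4} times d(ell)/d(phi) = 2
integrand = 0.5 * r * np.exp(-(r**2 + z**2) / 4) * 2

# composite trapezoid rule in phi
h = phi[1] - phi[0]
F = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# exact value: F = (1/(4 pi)) * (area 16 pi) * e^{-1} = 4/e
exact = 4 / np.e
```

The agreement with $4/e$ is a useful end-to-end check of both the cross-section formula and the quadrature.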
To simplify our notation, we will let $\sigma$ denote this weight.
\begin{definition}
Let $\sigma\colon\R_{\ge0}\times\R\to\R$ denote the weight
\begin{equation*}
\sigma=\frac12re^{-\abs x^2/4}.
\end{equation*}
\end{definition}
With this notation, our expression $\frac12\int_\Gamma re^{-\abs x^2/4}\,d\ell$ for $F(\Sigma)$ is simply the length of the curve $\Gamma$ with respect to the conformally changed metric $\sigma^2(dr^2+dz^2)$.
\begin{definition}
Let $g^\sigma$ denote the metric $\sigma^2(dr^2+dz^2)$ on the half-plane $\{(r,z)\mid r\ge0,z\in\R\}$.
\end{definition}
If $\Sigma$ is a self-shrinker, then $\Sigma$ is a critical point for $F$, so $\Gamma$ is a critical point for $g^\sigma$-length. In other words, $\Gamma$ is a geodesic with respect to $g^\sigma$.
\subsection{The stability operator for rotationally symmetric self-shrinkers}
Varying the cross-section $\Gamma$ only yields rotationally symmetric variations of $\Sigma$. To understand the stability operator $L_\Sigma$ in terms of $\Gamma$, we must understand non-rotationally symmetric variations of $\Sigma$ as well. Liu \cite{l16} does so by decomposing normal variations $f\colon\Sigma\to\R$ into their Fourier components. The stability operator $L_\Sigma$ commutes with this Fourier decomposition, so we can decompose $L_\Sigma$ into its Fourier components $L_k$, which are operators acting on functions on $\Gamma$.
We begin with the rotationally symmetric part of the Fourier decomposition.
\begin{definition}
Let $\Sigma$ be a rotationally symmetric self-shrinker with cross-section $\Gamma$. For any $u\colon\Gamma\to\R$, we have a corresponding rotationally symmetric function $f\colon\Sigma\to\R$. Define the operator $L_0$ by
\begin{equation*}
L_0u=L_\Sigma f.
\end{equation*}
\end{definition}
Note that, if $f$ is a rotationally symmetric function as above, then the normal variation $\Sigma_s=\{x+sf\n\mid x\in\Sigma\}$ has cross-section $\Gamma_s=\{x+su\n\mid x\in\Gamma\}$, and equation~\eqref{eq:stability} in Definition~\ref{def:stability} becomes
\begin{equation}\label{eq:l0}
\ddtwo s\Bigr\rvert_{s=0}\int_{\Gamma_s}\sigma\,d\ell=\int_\Gamma u(-L_0)u\,\sigma\,d\ell,
\end{equation}
where $d\ell$ denotes integration with respect to the usual Euclidean arc length.
We now define the other Fourier components of $L_\Sigma$.
\begin{definition}
For each non-negative integer $k$, let $L_k$ be the operator
\begin{equation*}
L_k=L_0-\frac{k^2}{r^2}.
\end{equation*}
\end{definition}
\begin{proposition}[\cite{l16}]
For any $u\colon\Gamma\to\R$ and any $k\ge0$, we have
\begin{align*}
L_\Sigma(u\cos k\theta)&=(L_ku)\cos k\theta,\\
L_\Sigma(u\sin k\theta)&=(L_ku)\sin k\theta.
\end{align*}
\end{proposition}
Thus, the eigenfunctions of $L_\Sigma$ are of the form $u\cos k\theta$ and $u\sin k\theta$, where $u$ is an eigenfunction of $L_k$. Consequently, we can determine the eigenvalues and eigenfunctions of the stability operator $L_\Sigma$ by determining the eigenvalues and eigenfunctions of $L_k$ for all $k\ge0$.
In the case where $\Sigma$ is a rotationally symmetric torus, we have the following formula for $L_k$.
\begin{proposition}[\cite{bk20a}]\label{prop:lk}
Let $\Sigma$ be a rotationally symmetric torus with cross-section $\Gamma$. Then
\begin{equation*}
L_k=\sigma\Delta_\Gamma^\sigma\sigma+1+\frac{1-k^2}{r^2},
\end{equation*}
where $\sigma$ is the weight $\frac12re^{-\abs x^2/4}$ and $\Delta_\Gamma^\sigma$ is the Laplacian on $\Gamma$ with respect to the conformally changed metric $g^\sigma=\sigma^2(dr^2+dz^2)$.
\end{proposition}
See also Appendix~\ref{sec:asymptotics}, where we rewrite $L_k$ as a Schr\"odinger operator, and \cite{l16}, where Liu gives the general formula for $L_k$.
\section{Numerical methods}\label{sec:methods}
In \cite{bk19}, we computed the Angenent torus cross-section $\Gamma$ by viewing it as a geodesic with respect to the metric $g^\sigma=\sigma^2(dr^2+dz^2)$. The result is a discrete approximation $\Gamma_d=\{q_0,q_1,\dotsc,q_M=q_0\}$ to the curve $\Gamma$, where the $q_m$ are equally spaced with respect to the metric $g^\sigma$, as illustrated in Figure \ref{fig:curvedots}. As we will see, having the points $q_m$ be equally spaced in this way is particularly well-suited for computing the index of the Angenent torus. Additionally, we computed the entropy $F(\Sigma)$ of the Angenent torus, which is the length of the curve $\Gamma$ with respect to the metric $g^\sigma$.
\begin{figure}
\centering
\includegraphics{figures/curvedots.pdf}
\caption{The Angenent torus cross-section (blue curve) and the discrete approximation to it (orange dots) computed in \cite{bk19}.}
\label{fig:curvedots}
\end{figure}
We now proceed to compute a matrix approximation to the differential operator $L_0$, using equation~\eqref{eq:l0}. We can first rewrite this equation in terms of the metric $g^\sigma$.
\begin{proposition}
Let $\Gamma$ be the cross-section of a self-shrinking torus, and let $u\colon\Gamma\to\R$. Then
\begin{equation}\label{eq:l0length}
\ddtwo s\Bigr\rvert_{s=0}\ell^\sigma(\Gamma_s)=\int_\Gamma u(-L_0)u\,d\ell^\sigma,
\end{equation}
where $\Gamma_s=\{x+su\n\mid x\in\Gamma\}$ is the normal variation corresponding to $u$, the expression $\ell^\sigma(\Gamma_s)$ denotes the length of the curve $\Gamma_s$ with respect to the conformally changed metric $g^\sigma=\sigma^2(dr^2+dz^2)$, and $d\ell^\sigma$ denotes integration with respect to $g^\sigma$ arc length.
\end{proposition}
In other words, with respect to the metric $g^\sigma$, the operator $-L_0$ is the Hessian of the length functional.
The task now is to make discrete approximations to all of the terms in equation~\eqref{eq:l0length}. We begin by approximating length the same way as in \cite{bk19}.
\subsection{Discrete length and its Hessian}
\begin{definition}
Given two points $q_m$ and $q_{m+1}$, we approximate the distance between them with respect to the metric $g^\sigma$ by setting $\sigma_{\text{mid}}$ to be $\sigma$ evaluated at the midpoint $(q_m+q_{m+1})/2$, and then computing the discrete approximation to the distance to be
\begin{equation}\label{eq:discretedistance}
\dist^\sigma_d(q_m,q_{m+1}):=\sigma_{\text{mid}}\norm{q_{m+1}-q_m},
\end{equation}
where $\norm\cdot$ denotes the usual Euclidean norm.
\end{definition}
\begin{definition}
Given a discrete curve $\Gamma_d=\{q_0,q_1,\dotsc,q_M=q_0\}$ that approximates a curve $\Gamma$, we approximate its length $\ell^\sigma(\Gamma)$ with the discrete length functional
\begin{equation}\label{eq:discretelength}
\ell^\sigma_d(\Gamma_d)=\sum_{m=0}^{M-1}\dist^\sigma_d(q_m,q_{m+1}).
\end{equation}
\end{definition}
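In Python, the discrete distance \eqref{eq:discretedistance} and discrete length \eqref{eq:discretelength} can be computed as follows (a minimal sketch with our own function names, not the code of the supplementary notebook):

```python
import numpy as np

def sigma(q):
    # the weight sigma(r, z) = (1/2) r exp(-(r^2 + z^2)/4)
    r, z = q
    return 0.5 * r * np.exp(-(r**2 + z**2) / 4)

def dist_sigma(q0, q1):
    # discrete g^sigma distance: sigma at the midpoint times Euclidean length
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    return sigma((q0 + q1) / 2) * np.linalg.norm(q1 - q0)

def length_sigma(points):
    # discrete length of the closed curve q_0, ..., q_{M-1} (with q_M = q_0)
    M = len(points)
    return sum(dist_sigma(points[m], points[(m + 1) % M]) for m in range(M))
```

Summing midpoint-weighted Euclidean segment lengths in this way is what makes $\ell^\sigma_d$ a smooth function of the points $q_m$, so its Hessian is well defined.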
The computation of the Angenent torus cross-section $\Gamma_d=\{q_0,q_1,\dotsc,q_M=q_0\}$ in \cite{bk19} is set up so that $\Gamma_d$ is a critical point for the functional $\ell^\sigma_d$, analogously to the fact that the true cross-sectional curve $\Gamma$ is a critical point for the length functional $\ell^\sigma$. The corresponding critical value $\ell^\sigma_d(\Gamma_d)$ is an approximation of $\ell^\sigma(\Gamma)$, which is the entropy of the Angenent torus.
Thus, we can approximate the left-hand side of \eqref{eq:l0length} with
\begin{equation*}
\ddtwo s\Bigr\rvert_{s=0}\ell^\sigma(\Gamma_s)\approx\ddtwo s\Bigr\rvert_{s=0}\ell_d^\sigma(\Gamma_{d;s}),
\end{equation*}
where $\Gamma_{d;s}$ is a variation of the discrete curve $\Gamma_d$. More precisely, $\Gamma_{d;s}=\{q_{0;s},q_{1;s},\dotsc,q_{M;s}=q_{0;s}\}$, where $q_{m;s}=q_m+sv_m$ for a sequence of vectors $v_0,v_1,\dotsc,v_M=v_0$ representing a discrete vector field $v_d$ on $\Gamma_d$.
We can think of $\ddtwo s\bigr\rvert_{s=0}\ell_d^\sigma(\Gamma_{d;s})$ as giving us the Hessian of $\ell^\sigma_d(q_0,q_1,\dotsc,q_M=q_0)$.
\begin{definition}\label{def:hd}
Let $\Gamma_d$ be a discrete curve. Viewing $\ell^\sigma_d$ as a function $(\R^2)^M=\R^{2M}\to\R$, let $H_d$ denote the Hessian of $\ell^\sigma_d$ at $\Gamma_d$, a $2M\times2M$ matrix.
\end{definition}
Then, by definition of the Hessian, if we view $v_d$ as a vector in $(\R^2)^M=\R^{2M}$, we have
\begin{equation}\label{eq:hessvd}
\ddtwo s\Bigr\rvert_{s=0}\ell_d^\sigma(\Gamma_{d;s})=v_d^TH_dv_d.
\end{equation}
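One generic way to obtain $H_d$ numerically, or to sanity-check a symbolic computation of it, is a central finite-difference approximation of the Hessian. The following sketch is our own and assumes only that the functional is smooth:

```python
import numpy as np

def hessian_fd(f, x, h=1e-5):
    # central finite-difference Hessian of f : R^n -> R at the point x
    x = np.asarray(x, float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # second-order accurate mixed partial d^2 f / dx_i dx_j
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H
```

Applying this to $\ell^\sigma_d$, viewed as a function of the $2M$ coordinates of $\Gamma_d$, recovers $H_d$, and $v_d^TH_dv_d$ is then the second derivative in \eqref{eq:hessvd}.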
\subsection{The outward unit normal of a discrete curve}
Recall, however, that we restricted our attention to normal variations of $\Gamma$, because tangential variations do not change length. In other words, we considered only those variations $\Gamma_s=\{x+sv\mid x\in\Gamma\}$ where $v=u\n$ for some function $u\colon\Gamma\to\R$. To do the same for our discrete variations $v_d$ of our discrete curve $\Gamma_d$, we must appropriately define a normal vector field $\n_d=\{\n_0,\n_1,\dotsc,\n_M=\n_0\}$.
In this situation, there is a very natural way to do so by considering what happens to the discrete length $\ell^\sigma_d$ if we vary a single point $q_m$ of $\Gamma_d$ while leaving all of the other points fixed. Because $\Gamma_d$ is a critical point for $\ell^\sigma_d$, the first derivative is zero. Meanwhile, the second derivative is the $m$th $2\times2$ submatrix on the diagonal of the Hessian $H_d$, which we denote by $H_m$.
We expect that if we move $q_m$ in a tangential direction, either towards $q_{m-1}$ or $q_{m+1}$, then $\ell^\sigma_d$ will not change very much. On the other hand, if we move $q_m$ in a normal direction, then $\ell^\sigma_d$ will increase. Thus, we expect $H_m$ to have an eigenvalue close to zero, whose eigenvector approximates the direction tangent to the curve, and a positive eigenvalue, whose eigenvector approximates the direction normal to the curve. We obtain the following definition of the normal vector field $\n_d$.
\begin{definition}\label{def:nm}
Let $\Gamma_d$ be a discrete curve. Let $H_1,\dotsc,H_M$ be the $2\times2$ blocks along the diagonal of $H_d$. We define the \emph{outward unit normal vector field} $\n_d=\{\n_0,\n_1,\dotsc,\n_M=\n_0\}$ by letting $\n_m$ be the unit eigenvector corresponding to the larger eigenvalue of $H_m$, with the sign chosen so that $\n_m$ points outwards.
\end{definition}
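Definition~\ref{def:nm} translates directly into code (a sketch; the sign-fixing hint vector is our own device for choosing the outward direction):

```python
import numpy as np

def outward_normal(H_m, outward_hint):
    # H_m: the symmetric 2x2 diagonal block of the Hessian at q_m;
    # outward_hint: any vector making an acute angle with the true
    # outward direction, e.g. q_m minus a point enclosed by the curve
    w, V = np.linalg.eigh(H_m)          # eigenvalues in ascending order
    n = V[:, -1]                        # unit eigenvector of the larger eigenvalue
    if np.dot(n, outward_hint) < 0:     # fix the sign so n points outwards
        n = -n
    return n
```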
A discrete function $u_d\colon\Gamma_d\to\R$ is just a discrete set of values $u_0,u_1,\dotsc,u_M=u_0$, with $u_m:=u_d(q_m)$. Now that we have our outward unit normal vector field, we can define $v_m=u_m\n_m$. Viewing $u_d$ as a vector in $\R^M$ and $v_d$ as a vector in $\R^{2M}$, this formula defines a linear transformation $\mbf N\colon\R^M\to\R^{2M}$.
\begin{definition}
The matrix $\mbf N$ is a $2M\times M$ block diagonal matrix whose $2\times1$ diagonal blocks are the $\n_m$.
\end{definition}
Using $\mbf N$, we can rewrite equation~\eqref{eq:hessvd} as
\begin{equation}\label{eq:hessud}
\ddtwo s\Bigr\rvert_{s=0}\ell^\sigma_d(\Gamma_{d;s})=u_d^T\left(\mbf N^TH_d\mbf N\right)u_d,
\end{equation}
where the variation $\Gamma_{d;s}$ is defined by $q_{m;s}=q_m+su_m\n_m$.
\subsection{Integrating over a discrete curve}
We now turn to the right-hand side of \eqref{eq:l0length}. To approximate it, we must first understand how to approximate integration with respect to $g^\sigma$ arc length, which is relatively straightforward because the points $q_m$ are equally spaced with respect to $g^\sigma$ arc length. We can view a discrete function $u_d\colon\Gamma_d\to\R$ as an approximation of a function $u\colon\Gamma\to\R$. Intuitively, because the $q_m$ are equally spaced, the $u_m$ should be weighted equally, so the average value of the $u_m$ should approximate the average value of $u$. In other words, we expect
\begin{equation*}
\frac1{\ell^\sigma(\Gamma)}\int_\Gamma u\,d\ell^\sigma\approx\frac1M\sum_{m=0}^{M-1}u_m.
\end{equation*}
To understand this approximation more precisely, we can let $q(t)$ denote the parametrization of $\Gamma$ with respect to $g^\sigma$ arc length. Because the $q_m$ are equally spaced, setting $\Delta t=\ell^\sigma(\Gamma)/M$, we have that $q_m$ is an approximation for $q(m\Delta t)$, and hence $u_m$ is an approximation for $u(q(m\Delta t))$. Thus,
\begin{multline*}
\int_\Gamma u\,d\ell^\sigma=\int_0^{\ell^\sigma(\Gamma)}u(q(t))\,dt=\sum_{m=0}^{M-1}\int_{\left(m-\frac12\right)\Delta t}^{\left(m+\frac12\right)\Delta t}u(q(t))\,dt\\
\approx\sum_{m=0}^{M-1}u(q(m\Delta t))\Delta t\approx\sum_{m=0}^{M-1}u_m\frac{\ell^\sigma(\Gamma)}M\approx\frac{\ell^\sigma_d(\Gamma_d)}M\sum_{m=0}^{M-1}u_m.
\end{multline*}
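This equal-weight quadrature is in fact highly accurate for smooth integrands on a closed curve, since it is the trapezoid rule on a periodic function. As a quick illustration (our own toy example on the unit circle rather than on $\Gamma$):

```python
import numpy as np

# Equal-weight quadrature on an arc-length-equispaced closed curve:
# integrate u = 1 + cos(t)^2 over the unit circle (total length 2 pi)
M = 64
t = 2 * np.pi * np.arange(M) / M        # M equally spaced sample points
u = 1 + np.cos(t)**2
approx = (2 * np.pi / M) * u.sum()      # (length / M) times the sum of samples
exact = 3 * np.pi                       # integral of 1 + cos^2 over [0, 2 pi]
```

Even with modest $M$, the periodic trapezoid rule reproduces the exact value to machine precision for this integrand.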
\subsection{The discrete stability operator}
Applying these approximations to \eqref{eq:l0length}, we can approximate $L_0$ with a discrete operator $L_{0;d}$.
\begin{definition}
Viewing a discrete function $u_d=\{u_0,u_1,\dotsc,u_M=u_0\}$ as an element of $\R^M$, let $L_{0;d}$ be the $M\times M$ symmetric matrix satisfying
\begin{equation*}
\ddtwo s\Bigr\rvert_{s=0}\ell^\sigma_d(\Gamma_{d;s})=\frac{\ell^\sigma_d(\Gamma_d)}M\sum_{m=0}^{M-1}u_m(-L_{0;d}u_d)_m
\end{equation*}
for all $u_d$, where $\Gamma_{d;s}$ is defined by $q_{m;s}=q_m+su_m\n_m$.
\end{definition}
\begin{proposition}
\begin{equation*}
-L_{0;d}=\frac M{\ell^\sigma_d(\Gamma_d)}\mbf N^TH_d\mbf N.
\end{equation*}
\end{proposition}
\begin{proof}
Using equation~\eqref{eq:hessud} for the left-hand side and rewriting the right-hand side, we have
\begin{equation*}
u_d^T\left(\mbf N^TH_d\mbf N\right)u_d=u_d^T\left(-\frac{\ell^\sigma_d(\Gamma_d)}ML_{0;d}\right)u_d.
\end{equation*}
The result follows using the fact that the Hessian $H_d$ is symmetric.
\end{proof}
From here, it is easy to approximate the $k$th Fourier component of the stability operator. Recall that $-L_k=-L_0+\frac{k^2}{r^2}$. Letting $r_m$ denote the $r$-coordinate of $q_m$, we can approximate the operator $\frac{k^2}{r^2}$ with the diagonal matrix whose entries are $\frac{k^2}{r_m^2}$.
\begin{definition}
Let $L_{k;d}$ be the $M\times M$ matrix defined by
\begin{equation*}
-L_{k;d}=-L_{0;d}+k^2R_d^{-2},
\end{equation*}
where $R_d$ is the diagonal $M\times M$ matrix whose entries are the $r_m$.
\end{definition}
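Assembling $-L_{k;d}$ and reading off its eigenvalues is then immediate (a sketch; \texttt{L0d\_neg} denotes the matrix $-L_{0;d}$ and \texttt{r} the vector of $r$-coordinates, both hypothetical names of our own):

```python
import numpy as np

def neg_Lk(L0d_neg, r, k):
    # -L_{k;d} = -L_{0;d} + k^2 R_d^{-2}, with R_d = diag(r_0, ..., r_{M-1})
    return L0d_neg + np.diag(k**2 / np.asarray(r, float)**2)

def eigenvalues(Lk_neg):
    # sign convention: -L u = lambda u, so the eigenvalues lambda of L_{k;d}
    # are the eigenvalues of the symmetric matrix -L_{k;d}, in ascending order
    return np.linalg.eigvalsh(Lk_neg)
```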
By computing the eigenvalues of these matrix approximations $L_{k;d}$ of $L_k$, we can estimate the eigenvalues of $L_k$. Counting the number of negative eigenvalues will give us the index of the Angenent torus.
\subsection{Implementation}
In practice, we do not compute the Hessian $H_d$ all at once. Instead, our first step is to use \texttt{sympy} to symbolically compute the Hessian of $\dist^\sigma_d\colon\R^2\times\R^2\to\R$, giving us a $4\times4$ matrix of expressions. Then, for each $m$, we evaluate these expressions at $(q_m,q_{m+1})$, giving us a $4\times4$ matrix that we denote by $H_{m,m+1}$. While we could assemble the $H_{m,m+1}$ into the full matrix $H_d$ by placing the $4\times4$ blocks $H_{m,m+1}$ in appropriate locations along the diagonal and adding them together, we instead first compute the normal vectors and use them to reduce the dimension of the problem.
Recall that we compute the normal vector $\n_m$ by seeing how $\ell^\sigma_d(\Gamma_d)$ changes as we vary the single point $q_m$. Since $\dist^\sigma_d(q_{m-1},q_m)$ and $\dist^\sigma_d(q_m,q_{m+1})$ are the only two terms in $\ell^\sigma_d(\Gamma_d)$ in which $q_m$ appears, we only need to know $H_{m-1,m}$ and $H_{m,m+1}$. Adding the bottom right $2\times2$ block of $H_{m-1,m}$ to the top left $2\times2$ block of $H_{m,m+1}$, we obtain the $2\times2$ matrix $H_m$ from Definition~\ref{def:nm}, so $\n_m$ is the unit eigenvector corresponding to the larger eigenvalue of this $2\times2$ matrix.
Next, we would like to reduce the dimension of $H_{m,m+1}$ by considering normal variations only. To do so, we assemble $\n_m$ and $\n_{m+1}$ into a $4\times2$ block diagonal matrix $\mbf N_{m,m+1}$, from which we obtain the $2\times2$ matrix
\begin{equation*}
-L_{0;m,m+1}:=\frac M{\ell^\sigma_d(\Gamma_d)}\mbf N_{m,m+1}^TH_{m,m+1}\mbf N_{m,m+1}.
\end{equation*}
Essentially, $-L_{0;m,m+1}$ represents how the distance $\dist^\sigma_d(q_m,q_{m+1})$ changes if we vary $q_m$ and $q_{m+1}$ in the normal directions, weighted by $\frac M{\ell^\sigma_d(\Gamma_d)}\approx\dist^\sigma_d(q_m,q_{m+1})^{-1}$.
Finally, we assemble the $2\times2$ matrices $-L_{0;m,m+1}$ into the $M\times M$ matrix $-L_{0;d}$ by placing $-L_{0;m,m+1}$ into the $2\times2$ block formed by the $m$th and $(m+1)$st rows and columns, and then summing over $m$. From here, it is an easy matter to obtain $-L_{k;d}:=-L_{0;d}+k^2R_d^{-2}$ by using the $r$-coordinates of the $q_m$ to assemble the diagonal matrix $R_d^{-2}$. Once we have the matrices $-L_{k;d}$, we can compute their eigenvalues and eigenvectors with \texttt{numpy}.
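A minimal version of the symbolic step might look as follows (our own sketch; the supplementary notebook's code may differ in its details):

```python
import sympy as sp

r0, z0, r1, z1 = sp.symbols('r0 z0 r1 z1')
# discrete distance between q_m = (r0, z0) and q_{m+1} = (r1, z1):
# sigma at the midpoint times the Euclidean distance
rm, zm = (r0 + r1) / 2, (z0 + z1) / 2
sigma_mid = sp.Rational(1, 2) * rm * sp.exp(-(rm**2 + zm**2) / 4)
dist = sigma_mid * sp.sqrt((r1 - r0)**2 + (z1 - z0)**2)

xs = [r0, z0, r1, z1]
# the 4x4 symbolic Hessian H_{m,m+1} of the discrete distance
H_sym = sp.Matrix([[sp.diff(dist, a, b) for b in xs] for a in xs])
H_num = sp.lambdify(xs, H_sym, 'numpy')  # evaluate at (q_m, q_{m+1})
```

Evaluating \texttt{H\_num} at consecutive pairs of points produces the blocks $H_{m,m+1}$ described above.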
See the supplementary materials for a Jupyter notebook implementation.
\section{Results}\label{sec:results}
\begin{figure}
\centerline{\includegraphics[trim = 0 6.8in 0 0, clip]{figures/k0variations}}
\caption{The first few eigenvalues and eigenfunctions of $L_0$. The eigenfunctions are pictured as variations (orange dashed) of the Angenent torus cross-section (blue solid). The variation corresponding to $\lambda_1=-1$ is dilation, and the variation corresponding to $\lambda_2=-\frac12$ is vertical translation.}
\label{fig:k0}
\end{figure}
\begin{figure}
\centerline{\includegraphics{figures/k1variations}}
\caption{The first few eigenvalues and eigenfunctions of $L_1$. The variation corresponding to $\lambda_0=-1$ is $\sigma^{-1}$ \cite[Section 6]{bk20a}. The variation corresponding to $\lambda_1=-\frac12$ is horizontal translation, and the variation corresponding to $\lambda_2=0$ is rotation about the origin.}
\label{fig:k1}
\end{figure}
\begin{figure}
\centerline{\includegraphics{figures/k2variations}}
\caption{The first few eigenvalues and eigenfunctions of $L_2$.}
\label{fig:k2}
\end{figure}
We estimate the first few eigenvalues and eigenfunctions of $L_k$ by computing the eigenvalues and eigenvectors of $L_{k;d}$ with $M=2048$. We present these variations of the Angenent torus cross-section in Figures~\ref{fig:k0}--\ref{fig:k2}. Recall that for $k>0$, each eigenfunction $u$ of $L_k$ corresponds to two eigenfunctions $u\cos k\theta$ and $u\sin k\theta$ of the stability operator $L_\Sigma$. The variations with negative eigenvalues, excluding translation and dilation, contribute to the index. We present three-dimensional plots of the variations with negative eigenvalues in Figure~\ref{fig:3d}. There are $9$ such variations. One of the variations is dilation, and three are translations, so the index of the Angenent torus is $5$.
\begin{figure}
\newcommand\plot[1]{\includegraphics[trim = 30mm 54mm 30mm 52mm, clip, scale=.6]{figures/#1\res res}}
\newcommand\plotv[1]{\plot{variation3D#1}}
\centerline{
\begin{tabular}{ccc}
\plot{torus3D}&\plot{torus3D}&\plot{torus3D}\\\hline
$k=0$&$k=1$&$k=2$\\\hline\\
$\lambda_0\approx-3.740$&$\lambda_0=-1$&$\lambda_0\approx-0.488$\\
\plotv{00}&\plotv{10c}&\plotv{20c}\\
$\lambda_1=-1$&$\lambda_0=-1$&$\lambda_0\approx-0.488$\\
\plotv{01}&\plotv{10s}&\plotv{20s}\\
$\lambda_2=-\frac12$&$\lambda_1=-\frac12$\\
\plotv{02}&\plotv{11c}&\\
&$\lambda_1=-\frac12$\\
&\plotv{11s}
\end{tabular}
}
\caption{The Angenent torus (top row) and its variations with negative eigenvalues. In the first column, we have dilation with eigenvalue $-1$ and vertical translation with eigenvalue $-\frac12$. In the second column, we have the pair of variations with eigenvalue $-1$ discussed in \cite[Section 6]{bk20a}, and the two horizontal translations with eigenvalue $-\frac12$.}
\label{fig:3d}
\end{figure}
\section{Error analysis}\label{sec:error}
To have confidence in our index result, we must approximate the eigenvalues of $L_k$ to sufficient accuracy to know that values we compute to be negative are indeed negative, and that the lowest computed positive eigenvalue of each $L_k$ is indeed positive. Eigenvalues close to zero could pose a problem, but the only such eigenvalue is $\lambda_2$ for $k=1$. Fortunately, for this eigenvalue, we know that its true value is exactly $\lambda_2=0$, corresponding to the variations that rotate the Angenent torus about the $x$ or $y$ axes.
To estimate the error in our values, we perform our computation with different numbers of points $M$. Specifically, we use $M\in\{128, 256, 512, 1024, 2048\}$. When we know the true value, we can directly see how fast our estimate converges by plotting the logarithm of the error against the logarithm of $M$. When the true value is unknown, we estimate it with the value that results in the best linear fit on a log-log plot. We present our results in Figure~\ref{fig:errorplots} and Table~\ref{tab:errorplots}.
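The fitting step can be sketched as follows (our own illustration of the idea; the function name, the grid search, and its parameters are hypothetical, and the actual fitting code may differ):

```python
import numpy as np

def estimate_true_value(Ms, vals):
    # Assume the error behaves like C * M^p; pick the limit v that makes
    # log|vals - v| most linear in log M, then read off the slope p.
    Ms = np.asarray(Ms, float)
    vals = np.asarray(vals, float)
    logM = np.log(Ms)

    def misfit(v):
        err = np.abs(vals - v)
        if np.any(err == 0):          # avoid log(0) on the search grid
            return np.inf
        logs = np.log(err)
        slope, intercept = np.polyfit(logM, logs, 1)
        return np.sum((logs - slope * logM - intercept) ** 2)

    # brute-force search on a fine grid around the finest-resolution estimate
    span = abs(vals[-1] - vals[-2])
    grid = vals[-1] + np.linspace(-4 * span, 4 * span, 4001)
    v = min(grid, key=misfit)
    slope, _ = np.polyfit(logM, np.log(np.abs(vals - v)), 1)
    return v, slope                   # slope estimates the convergence order p
```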
\begin{figure}
\centerline{\includegraphics[scale=.9]{figures/errorplots}}
\caption{Log-log plots showing the rate at which the error in our eigenvalue estimates decreases as the number of points $M$ increases.}
\label{fig:errorplots}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|c|ccc}
&&\makecell{value computed\\with $M=2048$}&\makecell{true value (known or\\ estimated from fit)}&error\\\hline
\multirow4*{$k=0$}
&$ \lambda_{0} $& $-3.73965698$&$ -3.73976151 $&$ \phantom-1.0\times10^{-4}$\\
&$ \lambda_{1} $&$ -0.99998145 $&$ -1 $&$ \phantom-1.9\times10^{-5}$\\
&$ \lambda_{2} $&$ -0.49999650 $&$ -\frac12 $&$ \phantom-3.5\times10^{-6}$\\
&$ \lambda_{3} $&$ \phantom-0.99199758 $&$ \phantom-0.99199444 $&$ \phantom-3.1\times10^{-6}$\\\hline
\multirow4*{$k=1$}
&$ \lambda_{0} $&$ -0.99997152 $&$ -1 $&$ \phantom-2.8\times10^{-5}$\\
&$ \lambda_{1} $&$ -0.49993807 $&$ -\frac12 $&$ \phantom-6.2\times10^{-5}$\\
&$ \lambda_{2} $&$ \phantom-0.00000351 $&$ \phantom-0 $&$ \phantom-3.5\times10^{-6}$\\
&$ \lambda_{3} $&$ \phantom-1.72697331 $&$ \phantom-1.72695473 $&$ \phantom-1.9\times10^{-5}$\\\hline
\multirow4*{$k=2$}
&$ \lambda_{0} $&$ -0.48762926 $&$ -0.48764694 $&$ \phantom-1.8\times10^{-5}$\\
&$ \lambda_{1} $&$ \phantom-0.86403182 $&$ \phantom-0.86402875 $&$ \phantom-3.1\times10^{-6}$\\
&$ \lambda_{2} $&$ \phantom-2.06670611 $&$ \phantom-2.06671610 $&$ -1.0\times10^{-5}$\\
&$ \lambda_{3} $&$ \phantom-3.71233427 $&$ \phantom-3.71235546 $&$ -2.1\times10^{-5}$\\\hline
\multirow4*{$k=3$}
&$ \lambda_{0} $&$ \phantom-0.11296571 $&$ \phantom-0.11294858 $&$ \phantom-1.7\times10^{-5}$\\
&$ \lambda_{1} $&$ \phantom-1.86256149 $&$ \phantom-1.86256033 $&$ \phantom-1.2\times10^{-6}$\\
&$ \lambda_{2} $&$ \phantom-3.51166663 $&$ \phantom-3.51168935 $&$ -2.3\times10^{-5}$\\
&$ \lambda_{3} $&$ \phantom-5.53593246 $&$ \phantom-5.53597822 $&$ -4.6\times10^{-5}$\\
\end{tabular}
\vspace\baselineskip
\caption{For the first four eigenvalues of each of the first four $L_k$, we give the error between the eigenvalue computed with $M=2048$ and either the true value if known or our estimate of the true value based on the fits in Figure~\ref{fig:errorplots}.}
\label{tab:errorplots}
\end{table}
The slopes of the best fit lines in Figure~\ref{fig:errorplots} are all between $-1.978$ and $-2.002$, suggesting a quadratic rate of convergence. This rate of convergence matches the expected quadratic rate of convergence for the entropy of the Angenent torus that we found in \cite{bk19}. We observe that the computed eigenvalues sometimes overestimate and sometimes underestimate the true value, and that the magnitude of the error varies. In all cases, however, there is more than enough accuracy to determine the sign of the eigenvalues. Because the $j$th eigenvalue of $L_k$ increases with both $j$ and $k$, we know that Table~\ref{tab:errorplots} lists all of the negative eigenvalues; there can be no others.
\section{Future work}\label{sec:future}
There are several directions for future work, of varying levels of complexity.
\subsection{Other rotationally symmetric self-shrinkers}
The methods in \cite{bk19} and in this paper can be immediately applied to any other rotationally symmetric surface, of which there are infinitely many examples \cite{dk17}. Mramor's work \cite{m20} suggests that the entropy of these examples should grow to infinity. Meanwhile, our work \cite{bk20a} shows that the index should grow at least linearly with the entropy, but our upper bound on the index allows for faster growth. Numerically computing the entropies and indices of these examples could give some insight about whether these index bounds are optimal in terms of asymptotic growth rate, or if they could be improved.
\subsection{Self-shrinkers without symmetry}
Rotational symmetry allows us to reduce the dimension of the problem: Rather than computing variations of a critical surface in $\R^3$, we can compute variations of a critical curve in the half-plane, which allows us to work with ordinary differential equations rather than partial differential equations. However, by using numerical methods for working with partial differential equations, we could analyze the general problem without rotational symmetry in much the same way. Namely, we could discretize the problem by triangulating the surface. We could then approximate the $F$-functional by summing the weighted areas of the triangles, giving us a functional on a finite-dimensional space. Finally, we could compute the critical points of this functional and compute the Hessian.
\subsection{Error analysis}
We have strong numerical evidence that the values we obtained in \cite{bk19} and in this paper are accurate. When true values are known, our methods find them. Even when true values are unknown, we observe a quadratic rate of convergence as we increase the number of points. Additionally, the value of the entropy in \cite{bk19} has since been reproduced using different numerical methods \cite{bdn19}. Nonetheless, numerical evidence does not constitute a proof, so it would be good to prove error bounds on our estimates.
The starting point would be to consider our estimate $\dist^\sigma_d(q_m,q_{m+1})$ for the distance between two points $q_m$ and $q_{m+1}$ in the half-plane $Q=\{(r,z)\mid r\ge0\}$ with respect to the metric $g^\sigma$. We would like to bound the difference between this estimate and the true distance $\dist^\sigma(q_m,q_{m+1})$. One can do so by computing the Taylor polynomials of the distance squared at the diagonal of $Q\times Q$. From there, we could bound the error for the length functional, critical curve, Hessian, and so forth. One caveat is that variations of the discrete curve do a poor job of capturing variations of the true curve that are highly oscillatory, as we illustrate in Appendix~\ref{sec:asymptotics}. This issue is resolved by the fact that, as we show in \cite{bk20a}, highly oscillatory variations must increase length and therefore cannot contribute to the index.
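As a concrete starting point, the discrete length functional can be sketched as below, assuming for illustration that $g^\sigma$ is conformal to the Euclidean metric with a factor $\sigma$ evaluated at segment midpoints; the paper's actual metric and the proposed error bounds are more involved:

```python
import numpy as np

def discrete_length(points, sigma):
    """Length of the polygonal curve through `points`, assuming the
    metric is conformal to the Euclidean one with factor sigma(q)
    evaluated at each segment midpoint (an illustrative choice)."""
    points = np.asarray(points, dtype=float)
    segs = np.diff(points, axis=0)             # q_{m+1} - q_m
    mids = 0.5 * (points[:-1] + points[1:])    # segment midpoints
    return float(np.sum(sigma(mids) * np.linalg.norm(segs, axis=1)))

# Sanity check: sigma = 1 recovers the ordinary Euclidean length.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(discrete_length(square, lambda q: np.ones(len(q))))  # 4.0
```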
\bibliographystyle{plain}
\section{Introduction}
Bayesian parameter estimation is a problem usually encountered when testing physical models against some experimental data. In these cases, inference from the posterior of the model is often done using a sampling algorithm. In realistic scenarios, the likelihood entering the posterior is usually a complex hierarchical model, involving costly computations of intermediate quantities and possibly multiple likelihoods from different data sets. Each part of the calculation may have its own parameter dependencies, from the underlying parameters of main physical interest to nuisance parameters describing the experimental model or uncertainties in different parts of the theoretical calculation. Different subsets of parameters may have very different computational costs, depending on which parts of the model have to be recalculated when each parameter is varied. If subsets of the parameters can be updated with low computational cost, so that there is a significant speed hierarchy, this can be exploited to greatly improve sampling efficiency. However, the simplest applications of standard sampling algorithms usually fail to exploit this property.
\texttt{Cobaya}\xspace (COde for BAYesian Analysis; meaning ``guinea pig'' in Spanish) is a general framework for defining a pipeline of interdependent calculations, automatically exploiting the hierarchical nature of the model when sampling from its parameters. The main advantage of \texttt{Cobaya}\xspace resides in its ability to automatically analyse the dependency structure of the different model components, to block parameters according to which parts of the computation (directly or indirectly) depend on them, and to sort these parameter blocks in an optimal way, so that looping over them takes the least amount of time and no part of the calculation is unnecessarily repeated (intermediate results are automatically cached). This also allows for oversampling of fast blocks according to their speed, and for the use of sampling algorithms (such as fast dragging~\cite{Neal04}) that can efficiently exploit the presence of very fast parameter subspaces. \texttt{Cobaya}\xspace includes its own MCMC algorithms (adapted from \texttt{CosmoMC}\xspace \cite{Lewis:2013hha}) and an interface to the nested sampler \texttt{PolyChord}\xspace \cite{2015MNRAS.450L..61H,2015MNRAS.453.4384H}. In addition to Monte Carlo samplers, \texttt{Cobaya}\xspace includes an importance-weighting tool, and interfaces to popular minimizers for posterior and likelihood maximization (from \texttt{scipy}\xspace \cite{scipy}, and the \texttt{Py-BOBYQA}\xspace implementation \cite{2018arXiv180400154C} of the \texttt{BOBYQA}\xspace algorithm \cite{Powell2009}).\footnote{The structure is, however, general for samplers where posterior derivatives are not available: additional samplers could be used by providing a compatible sampler class implementation.} \texttt{Cobaya}\xspace is also ready for high-performance computing on clusters, featuring MPI parallelization and batch job generation.
Calculations can be separated into physically distinct but interdependent theoretical calculations and likelihood evaluations, where any part of the calculation can be provided by separately-maintained external Python packages each with its own parameter dependencies. This allows theory modellers and analysers of experimental data to provide and maintain different cleanly separated parts of the calculation rather than relying on a single monolithic package that does everything.
Though \texttt{Cobaya}\xspace is a completely generic framework, it includes out-of-the-box support for cosmological parameter estimation via a series of interfaces to cosmological theory codes (\texttt{CAMB}\xspace \cite{Lewis:1999bs,Howlett:2012mh} and \texttt{CLASS}\xspace \cite{Lesgourgues:2011re,Blas:2011rf}) and data likelihoods. In that regard, it compares to \texttt{CosmoMC}\xspace \cite{Lewis:2002ah,Lewis:2013hha}, and later \texttt{MontePython}\xspace \cite{2013JCAP...02..001A,Brinckmann:2018cvx} and \texttt{CosmoSIS}\xspace \cite{Zuntz:2014csq}, amongst others. \texttt{Cobaya}\xspace does not re-distribute most of the original likelihoods, data sets or theory codes, but provides installers for stock distributions. \texttt{Cobaya}\xspace can also be used as a simple cosmological likelihood wrapper, providing stand-alone cosmological posteriors that can be incorporated into the user's own statistical or machine-learning pipeline.
\texttt{Cobaya}\xspace is distributed as a standard \texttt{Python}\xspace package\footnote{\url{https://pypi.org/project/cobaya/}}, which allows for easy management of dependencies and simple roll-out of updates. Extensive documentation is provided at \url{https://cobaya.readthedocs.io}. Users are also welcome to contribute through the central git repository at \url{https://github.com/CobayaSampler/cobaya}.
In the rest of the paper, we describe the probabilistic model underlying \texttt{Cobaya}\xspace's framework (Sec.~\ref{sec:probmodel}), comment on our main goals and consequent design choices (Sec.~\ref{sec:goals}), and describe the main structure of the code (Sec.~\ref{sec:structure}). We then briefly discuss use on high-performance computers (Sec.~\ref{sec:hpc}) and describe the specific realistic case of using \texttt{Cobaya}\xspace\ for cosmological data analysis (Sec.~\ref{sec:cosmo}). We explain the technical details of our new optimized parameter blocking in Appendix~\ref{app:blocking}, and display some of the source code necessary to reproduce the use case presented in Sec.\ \ref{sec:cosmo} in Appendix \ref{sec:cosmo_demo_source}.
\section{Probabilistic model}\label{sec:probmodel}
Bayesian inference on model parameters is usually done by performing Monte Carlo sampling on the posterior $\prob{\theta\,|\,\mathcal{D},\mathcal{M}}$, where $\mathcal{M}$ is some model parameterized by $\theta$ following some \emph{prior} $\prior{\theta\,|\,\mathcal{M}}$, and $\mathcal{D}$ is some data set we aim to model using $\mathcal{M}$. The probability of observing the data $\mathcal{D}$ given some particular set of model parameters is the \emph{likelihood} $\like{\mathcal{D}\,|\,\mathcal{M}(\theta)}$. These quantities are related through \emph{Bayes' theorem}:
\begin{equation}
\label{eq:bayestheorem}
\prob{\theta\,|\,\mathcal{D},\mathcal{M}} =
\frac{\like{\mathcal{D}\,|\,\mathcal{M}(\theta)}\prior{\theta\,|\,\mathcal{M}}}
{\prob{\mathcal{D}\,|\,\mathcal{M}}}
\eqpunctsp ,
\end{equation}
where $\prob{\mathcal{D}\,|\,\mathcal{M}}$ is the marginal likelihood or \emph{evidence}, which equals the integral of the numerator in the right-hand side.
In simple physical applications, the data that appears in the likelihood is expressed in terms of some summary \emph{observable} $\vector{O}_\mathcal{D}$. In those cases, the likelihood consists of a comparison between the observed $\vector{O}_\mathcal{D}$ and the one computed from a deterministic \emph{theoretical model} $\mathcal{T}$, $\vector{O}_\mathcal{T}$, filtered through the simulated pipeline $\mathcal{F}_\mathcal{E}$ of the experiment following some \emph{experimental model} $\mathcal{E}$. For a simple Gaussian likelihood, this would look like
\begin{equation}
\label{eq:like}
\vector{O}_\mathcal{D} \sim \normal{\mathcal{F}_\mathcal{E}(\vector{O}_\mathcal{T})}{\Sigma_{\mathcal{E},\mathcal{T}}}
\eqpunctsp ,
\end{equation}
where the covariance matrix $\Sigma_{\mathcal{E},\mathcal{T}}$, which may depend on both the theoretical and the experimental model, parameterizes the expected deviations of the observed summary from the filtered theoretical prediction.
This simple example illustrates how a natural \emph{speed hierarchy} appears in the parameter space: whenever we vary parameters of the theoretical model, we would have to recompute both the theoretical prediction for the observable $\vector{O}_\mathcal{T}$ \emph{and} the likelihood, whereas a variation in the experimental model parameters only requires recomputation of the likelihood (as long as we cache and reuse the computed $\vector{O}_\mathcal{T}$). The likelihood calculation for fixed theoretical model is also often much faster than computing the theoretical prediction $\vector{O}_\mathcal{T}$. Exploiting this speed hierarchy efficiently can save a considerable amount of time when sampling from the posterior (see appendix \ref{app:blocking}).
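The caching logic just described can be made concrete with a toy class (our own sketch, not \texttt{Cobaya}\xspace's actual caching layer): the slow observable is recomputed only when the theory parameters change, so steps in fast experimental parameters reuse the cached value.

```python
class CachedTheory:
    """Toy caching sketch: recompute the slow observable O_T only
    when the theory parameters change, so that steps in fast
    experimental parameters reuse the cached result."""

    def __init__(self, compute):
        self._compute = compute   # slow function of theory params
        self._key = None          # last parameter values seen
        self._value = None        # cached observable
        self.evaluations = 0      # number of slow computations

    def observable(self, **theory_params):
        key = tuple(sorted(theory_params.items()))
        if key != self._key:      # slow path: parameters changed
            self._value = self._compute(**theory_params)
            self._key, self.evaluations = key, self.evaluations + 1
        return self._value        # fast path: cached result

# Varying only fast parameters between calls triggers no recomputation.
theory = CachedTheory(lambda omega: omega ** 2)
for fast_param in (0.1, 0.2, 0.3):
    loglike = -(theory.observable(omega=3.0) - fast_param) ** 2
print(theory.evaluations)  # 1
```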
In general, physical inference pipelines may have several components and dependencies, without a sharp distinction between a theoretical and an experimental model. In \texttt{Cobaya}\xspace we allow the experimental likelihoods to communicate with one or more theory codes (and possibly other likelihoods) to calculate the required observables and any necessary derived parameters. Theory codes can depend on sampled parameters and on the outputs of other theory codes, so we allow for general (non-circular) dependencies between the different components so that the calculation can be cleanly modularized. This also allows any resulting hierarchy in the speed of the likelihood calculation under changes in different parameter variations to be exploited efficiently.
When various theoretical codes are available that can produce the same quantities, interfaces can be choice-agnostic so that they can be exchanged easily for testing and comparison.
\section{Goals and design choices}\label{sec:goals}
We aimed at the following development and usability goals, and made design choices accordingly:
\paragraph{Modularity:} The different actors in the Bayesian model (prior, likelihood, theory model, Monte Carlo sampler) are individual objects which share as little information as possible: the prior and the sampler know which parameters are sampled, but they do not know about any other fixed parameter of the model; the likelihood, in turn, deals only with input/output parameters and observables, not caring about their prior; the sampler does not know which individual likelihood (or theory code) understands which parameters, and lets a \emph{model} wrapper manage that. These design choices impose some compromises, but they are a fair price for making the code more easily extendable and maintainable.
\paragraph{Rapid prototyping:} We have attempted to lower as much as possible the barrier to adding new priors, likelihoods, theory codes or samplers, or to modifying existing ones, all without touching \texttt{Cobaya}\xspace's source code or having to write a lot of \textit{wrapping} code. In order to achieve this, we have developed an API (i.e.\ \emph{application programming interface}) that removes the need for the user to write specific code to handle communication between different parts of the pipeline, to cache intermediate results, or to exploit the parameter speed hierarchy. The user just needs to describe what a pipeline component needs and provides, and the actual calculation of the provided quantities. Creating such an API has been made much easier by writing our code in \texttt{Python}\xspace, which allows us to discover parameter dependencies of generic functions (used as priors, likelihoods or parameter redefinitions) via its \textit{introspection} capabilities. Python is able to interface with libraries written in other languages, allowing use of external compiled theory codes and likelihoods, and also has useful computational capabilities (e.g.\ handling infinities directly). Though Python, as an interpreted language, is in general slower than compiled languages such as C or Fortran, this does not necessarily have a significant impact on computational speed, since most heavy computations are usually performed by calling functions in optimized packages based on a compiled back-end library (such as \texttt{numpy}\xspace). Computations that are not easy to offload (e.g.\ those containing long explicit loops) can also be implemented using popular just-in-time compilers or language extensions such as NUMBA \cite{Lam:2015:NLP:2833157.2833162} or Cython \cite{behnel2010cython}, or coded in C/Fortran and interfaced to Python.
\paragraph{Supporting external components:}
Using Python makes it simple to load theory or likelihood codes from other Python packages. \texttt{Cobaya}\xspace supports direct use of theory and likelihood classes defined in other packages simply by referencing their fully-qualified name in the package, making it straightforward for modellers or experimental collaborations to release their own code packages that can be used directly. External likelihood codes can either inherit directly from \texttt{Cobaya}\xspace\ classes (which also support direct instantiation and use outside of Cobaya), or include a separate compatible wrapping class that can be used with \texttt{Cobaya}\xspace.
More information on design choices can be found in the documentation at \url{https://cobaya.readthedocs.io}.
\section{Structure of the code}\label{sec:structure}
\texttt{Cobaya}\xspace's main structure is shown in Fig.~\ref{fig:structure}. The main two classes are the Bayesian \texttt{Model}, and the Monte Carlo \texttt{Sampler} (or, more broadly, any analysis tool that operates on the model). This section describes the elements shown there.
\begin{figure}[ht]
\includegraphics[width=0.8\columnwidth]{img_diagram.pdf}
\caption{Simplified structure of \texttt{Cobaya}\xspace's source, showing classes (squares) and parameters (ellipses). See section \ref{sec:structure} for a description of each class and parameter role. The arrows between \texttt{TheoryCollection} and \texttt{LikelihoodCollection} represent computed quantities and parameters that can be exchanged arbitrarily between theories and likelihoods.
}
\label{fig:structure}
\end{figure}
\subsection{Input}
For each particular run, the input must describe the model and its parameter space in enough detail, and also specify the analysis tool that will be used. Ideally, the syntax of this input would reproduce the structure of the code as closely as possible (making it easier to swap different input chunks), and should be as literate (self-descriptive), human-readable and easy to remember as possible.
\texttt{Cobaya}\xspace's input takes the form of a \texttt{Python}\xspace dictionary, and can be serialized in plain text in the YAML format \cite{YAML} (as long as its elements can be serialized in plain text). We show an example in Fig.~\ref{fig:input}, including specification of parameter roles and dynamic parameter definition, as well as definition of the likelihood and prior for the simple case where everything is just a basic Python function.
\begin{figure}[ht]
\centering
\lstinputlisting[language=yaml, frame=single]{src_input_example.yaml}
\caption{Example input in plain text (YAML). It defines a Gaussian-ring likelihood with radius $1$ and standard deviation $0.02$, over the combination of a uniform prior $(x,y)\in(0,2)^2$ (notice the two possible different specifications used for \texttt{x} and \texttt{y}) and a Gaussian prior of standard deviation $0.3$ along the $x=y$ direction (n.b.: simple 1D priors are defined in \texttt{params}, while multidimensional ones are defined in \texttt{prior}). The likelihood, the multidimensional prior and the derived parameters \texttt{r} and \texttt{theta} are given as \texttt{Python}\xspace functions (here source strings, but can be assigned \texttt{Python}\xspace functions directly when working in a \texttt{Python}\xspace file or shell -- for source strings \texttt{scipy.stats} and \texttt{numpy} are pre-imported as \texttt{stats} and \texttt{np} resp.). The results of this MCMC sample will be written in a folder called \texttt{chains} with file name prefix \texttt{ring}, as per the \texttt{output} option. The resulting densities can be seen in Fig.~\ref{fig:results}.}
\label{fig:input}
\end{figure}
Describing complex models and pipelines can take a large amount of input. To facilitate these cases, we have implemented an inheritance system by which defaults for every class are automatically loaded. Options only need to be specified when their values are different from the default ones. In the example of Fig.\ \ref{fig:input}, for the MCMC sampler only the stopping criterion is specified, while the rest of the options are either inherited from the defaults of this sampler (which can be retrieved via a shell script ``\texttt{cobaya-doc mcmc}''), or automated from other options (e.g.\ a diagonal proposal covariance matrix constructed from the parameters' \texttt{proposal} property).
\subsection{Bayesian \texttt{Model}}
The Bayesian \texttt{Model} consists of the Bayesian prior and likelihood (this last one including theory and experimental likelihood components), as described in section \ref{sec:probmodel}. It also contains a \texttt{Parameterization} layer that manages the flow of parameters to and from the likelihood and prior, and computes the dynamically-defined ones, and a \texttt{Provider} that handles the exchange of parameters and computed quantities between different theory and likelihood components.
A \texttt{Model} instance can be passed as an argument to a sampler, or it can be integrated by the user into an external pipeline (using its API to access the member classes shown in Fig.\ \ref{fig:structure} and described below).
\subsubsection{Parameterization}
The \texttt{Parameterization} class controls the flow of parameters into and out of the \texttt{Model}, taking into account their roles with respect to different parts of the code.
On their way out of the model (i.e.\ from the point of view of the \texttt{Sampler}), parameters can play three different roles:
\begin{itemize}
\item \emph{Sampled} parameters are the ones whose value is to be varied and explored by the \texttt{Sampler} or the user-defined pipeline. They are identified in the input by having a defined prior.
\item \emph{Fixed} parameters are those whose value is not going to change, and are needed as input by some piece of the \texttt{Model}.
\item \emph{Derived} parameters are arbitrary functions of the rest of the parameters at every step, and are tracked and stored for the user's convenience. Functions defining them can be provided \emph{on the fly} (like \texttt{r} and \texttt{theta} in the example in Fig.~\ref{fig:input}) or can be implicitly defined inside the code of a theory or likelihood.
\end{itemize}
The \texttt{Parameterization} class processes the parameters to turn them into \emph{input} parameters for the likelihoods and theories, and requests from them the \emph{output} parameters that are needed to compute \emph{derived} parameters that cannot be computed directly from inputs.
The parameterization layer also manages other properties of the parameters, such as their labels (used for plots).
\subsubsection{Prior}
Priors for the sampled parameters can be specified in two different ways:
\begin{itemize}
\item The \texttt{prior} keyword for each parameter defines a separable product of 1D priors. They are interfaced directly from the 1D continuous distributions of \texttt{scipy.stats}\footnote{\url{https://docs.scipy.org/doc/scipy/reference/stats.html\#continuous-distributions}}. In the example in Fig.\ \ref{fig:input}, these are the \emph{uniform} priors $(x,y)\in(0,2)^2$.
\item Additional, possibly-multi-dimensional priors can be defined under the global \texttt{prior} block. In the example in Fig.\ \ref{fig:input}, this is the Gaussian prior along $x=y$.
\end{itemize}
\subsubsection{Likelihood and theory}
\label{sec:liketheo}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.5\textwidth}
\begin{python}
import numpy as np
import scipy.stats as stats
from cobaya.theory import Theory
from cobaya.likelihood import Likelihood
class ToPolarCoords(Theory):
params = {'x': None, 'y': None}
def get_can_provide(self):
return ['r', 'theta']
def calculate(self, state, want_derived=True,
**params_values):
x = params_values['x']
y = params_values['y']
state['r'] = np.sqrt(x**2 + y**2)
state['theta'] = np.arctan(y / x)
class GaussianRingLikelihood(Likelihood):
params = {'mean_radius': None,
'std_radius': None}
def get_requirements(self):
return ['r']
def logp(self, **params_values):
r = self.provider.get_result('r')
mean_radius = \
params_values['mean_radius']
std_radius = params_values['std_radius']
return stats.norm.logpdf(
r, loc=mean_radius, scale=std_radius)
\end{python}
\end{subfigure}\hfill
\begin{subfigure}{0.48\textwidth}
\lstinputlisting[language=yaml, frame=single, linewidth=0.99\linewidth]{src_input_external.yaml}
\end{subfigure}
\caption{
Example similar to the one in Fig.\ \ref{fig:input}, now using \texttt{Cobaya}\xspace classes to split the computation in two: the transformation between orthogonal and polar coordinates, and the likelihood in terms of polar coordinates. Here we let the mean radius of the Gaussian ring vary over a narrow prior, to illustrate \texttt{Cobaya}\xspace's automated blocking: when sampling using MCMC or \texttt{PolyChord}\xspace, jumps in the $(x, y)$ directions will be alternated with jumps in \texttt{mean\_radius}. After every jump in $(x, y)$, the resulting intermediate product $r$ is cached, so that $r$ does not need to be recomputed when only \texttt{mean\_radius} is varied. In this trivial example, the intermediate quantity $r$ exchanged between the theory and the likelihood is just a real number, but it could be any arbitrarily complicated, high-dimensional numerical quantity, as well as a general Python object. \texttt{Cobaya}\xspace knows about the interdependency between the likelihood, which needs $r$, and the theory, which computes $r$, via the respective declarations in the methods \texttt{Likelihood.get\_requirements} and \texttt{Theory.get\_can\_provide}.}
\label{fig:external}
\end{figure*}
In general there can be multiple theory and likelihood components in a single model, with each theory component calculating some quantity required as input to the likelihoods or to another theory component. Likelihoods are just a special subclass of a general Theory class that directly return likelihood values. Both inherit a caching layer that increases efficiency when steps in the parameter space are blocked in particular ways, such that only the piece that depends on the varied parameters (directly or via a dependency) needs to be recomputed.
A model class instance holds a list of the required likelihood and theory component instances, and uses their interface methods or introspection to work out their dependencies and hence required execution order in order to calculate the final likelihood. Components may also calculate derived parameters that are useful to include in the output parameter chain (or may be used by other components).
In \texttt{Cobaya}\xspace's input, likelihoods and theory components can be specified as \texttt{Python}\xspace functions, mentions to \emph{internal} classes (i.e.\ those distributed with \texttt{Cobaya}\xspace, either as stand-alone versions or as wrappers to external code/data), inherited classes, or mentions to classes distributed in external packages.
A simple example illustrating the use of \texttt{Cobaya}\xspace Theory and Likelihood classes can be seen in Fig.\ \ref{fig:external}. This example highlights one of the biggest advantages of \texttt{Cobaya}\xspace's Bayesian model API with respect to most generalist statistical languages: the specification of what is needed and what is provided by each stage of the pipeline is enough to produce effective posterior exploration. Automatically, \texttt{Cobaya}\xspace will separate the sampled parameters into two blocks according to their dependencies, [\texttt{x}, \texttt{y}] and [\texttt{mean\_radius}]; it will sort them optimally (in this simple case, [\texttt{mean\_radius}] obviously comes last); it will prompt the sampler, either MCMC or \texttt{PolyChord}\xspace, to vary the parameters block-wise; and it will make sure that the intermediate quantity \texttt{r} is cached, so that variations of \texttt{mean\_radius} alone do not require its recomputation.
\subsection{Sampler}
Monte Carlo samplers in \texttt{Cobaya}\xspace take models and explore their \emph{sampled} parameters.
\texttt{Cobaya}\xspace implements adaptive fast-slow-optimized MCMC samplers, translated from \texttt{CosmoMC}\xspace \cite{Lewis:2002ah,Lewis:2013hha}. This includes a Metropolis-Hastings MCMC \cite{Metropolis53,Hastings70} sampler, with optional oversampling of fast parameters, and a variation of it that performs \emph{dragging} \cite{Neal04} along the fastest directions. In many physical applications, complex interdependencies between multiple theory codes and experimental likelihoods produce a slow-fast parameter hierarchy. Efficient exploitation of this hierarchy, while keeping the ability to propose steps in random directions of the parameter space, is crucial to efficient sampling \cite{Lewis:2002ah, Lewis:2013hha}.
Previous implementations have required a manual specification of the parameter blocking and oversampling configuration. For \texttt{Cobaya}\xspace\ we have implemented a new automated optimization scheme that blocks parameters according to their role in the different stages of the likelihood, measures the computational cost of varying parameters in each of these blocks, and determines the optimal parameter block sorting so that varying all parameters in a row takes the smallest amount of time possible, taking into account both parameter mixing (necessary to explore possible degeneracies) and the amount of oversampling requested by the user. For details on this algorithm see Appendix \ref{app:blocking}.
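The idea of cycling over sorted parameter blocks, with extra proposals for the fast blocks, can be illustrated with a deliberately simplified toy sampler. This sketch is ours and omits everything that makes the real MCMC efficient (adaptive covariance learning, dragging, convergence checks):

```python
import numpy as np

def block_metropolis(logpost, start, blocks, steps, seed=0):
    """Toy block-wise Metropolis sketch: each cycle loops over the
    parameter blocks (sorted slow -> fast, as the automatic blocking
    would arrange them), proposing each block `oversample` times so
    that cheap blocks are explored more often."""
    rng = np.random.default_rng(seed)
    x = np.array(start, dtype=float)
    logp = logpost(x)
    chain = []
    for _ in range(steps):
        for idx, width, oversample in blocks:
            for _ in range(oversample):
                prop = x.copy()
                prop[idx] += width * rng.standard_normal(len(idx))
                logp_prop = logpost(prop)
                if np.log(rng.random()) < logp_prop - logp:  # accept
                    x, logp = prop, logp_prop
                chain.append(x.copy())
    return np.array(chain)

# 2D standard normal: "slow" block [0] once, "fast" block [1] 3x per cycle.
samples = block_metropolis(lambda v: -0.5 * v @ v, [0.0, 0.0],
                           [([0], 2.4, 1), ([1], 2.4, 3)], 500)
```

In a realistic pipeline the proposal on the fast block would be nearly free, since the slow theory output is cached between fast-block steps.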
Chain sampling convergence is quantified using a modified version of the Gelman-Rubin $R-1$ statistic \cite{Gelman92,Brooks98}, either over several chains run in parallel, or over the latest chunks of a single chain. The $R-1$ statistic quantifies the variance in the means of parameters estimated from different chains (or chain subsets), and termination of the chain can be specified by giving a target small value for $R-1$. Where tail exploration is important, an equivalent stopping target can also be set for the dispersion of confidence limits.
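For a single parameter, a simplified version of this between-chain diagnostic can be computed as follows; this sketch is ours, while the production statistic generalizes it over all parameters using covariance matrices of the chain means:

```python
import numpy as np

def gelman_rubin_Rminus1(chains):
    """Simplified single-parameter R-1: variance of the per-chain
    means, in units of the mean within-chain variance."""
    chains = np.asarray(chains, dtype=float)
    between = chains.mean(axis=1).var(ddof=1)   # variance of chain means
    within = chains.var(axis=1, ddof=1).mean()  # mean in-chain variance
    return between / within

# Chains drawn from the same distribution give R-1 close to zero.
rng = np.random.default_rng(1)
print(gelman_rubin_Rminus1(rng.standard_normal((4, 10000))))  # small
```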
In addition, \texttt{Cobaya}\xspace contains a wrapper for the nested sampler \texttt{PolyChord}\xspace \cite{2015MNRAS.450L..61H,2015MNRAS.453.4384H}, which can also estimate model evidences and explore complicated multi-modal likelihood surfaces. \texttt{PolyChord}\xspace\ can also exploit the parameter speed hierarchy determined by \texttt{Cobaya}\xspace's parameter blocking. \texttt{Cobaya}\xspace provides an installer for \texttt{PolyChord}\xspace, as well as for all supported external dependencies. Wrappers for additional samplers may be implemented in the future.
Under the \textit{sampler} category, \texttt{Cobaya}\xspace also includes interfaces to minimizers (those in \texttt{scipy}\xspace \cite{scipy} and the \texttt{Py-BOBYQA}\xspace implementation \cite{2018arXiv180400154C} of the \texttt{BOBYQA}\xspace algorithm \cite{Powell2009}), a simple test function to evaluate different quantities of the model at fixed points, and an importance-reweighting tool.
\subsection{Analysis -- interface to \texttt{GetDist}\xspace}
\texttt{Cobaya}\xspace manages Monte Carlo samples as wrapped \texttt{DataFrame} objects from \texttt{Pandas}\xspace~\cite{mckinney-proc-scipy-2010}. When written to the hard drive, they can be stored in plain text as parameter tables, including the corresponding probabilities and sample weights. Both the sample objects and output files can be easily loaded and analysed with the user's tool of choice.
The suggested analysis package is \texttt{GetDist}\xspace\footnote{\url{https://github.com/cmbant/getdist/}} \cite{Lewis:2019xzd}, which can load \texttt{Cobaya}\xspace results transparently. \texttt{GetDist}\xspace provides summary statistics including confidence intervals, density estimates (via optimized kernel density estimation), and convergence diagnostics, plotting tools, and a graphical user interface. Examples of \texttt{GetDist}\xspace outputs can be seen in Fig.~\ref{fig:results}, obtained from the model described by the input in Fig.~\ref{fig:input}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{img_triangle.pdf}
\includegraphics[width=0.9\columnwidth]{img_rtheta.pdf}
\caption{Results from sampling using the inputs shown in Fig.\ \ref{fig:input}, analysed by \texttt{GetDist}\xspace: The upper figure shows a \emph{triangle plot} combining 2D posterior contours (enclosing 68\% and 95\% of the probability) and marginalized 1D posteriors for the sampled parameters $(x,y)$. The lower plot shows the 1D posteriors for the derived parameters $(r,\theta)$. All the posteriors are shown normalized to the same maximum.}
\label{fig:results}
\end{figure}
\section{High performance computing with \texttt{Cobaya}\xspace}
\label{sec:hpc}
\texttt{Cobaya}\xspace is tailored for high-performance computing on clusters. The overhead of \texttt{Cobaya}\xspace per posterior evaluation is $\alt 0.2\,\mathrm{ms}$, with no strong dependence on the dimensionality for $d<100$; this means that for theory or likelihood computations that take more than a few milliseconds to compute, any number of evaluations while sampling their posterior in \texttt{Cobaya}\xspace would not take significantly longer than evaluating the same code in a simple loop the same number of times. For higher dimensions, the components of the overhead that have a stronger dependence on dimensionality start taking over, but usually so does computation time of likelihoods that depend on an increasingly high number of parameters.
\subsection{Parallelization}
\texttt{Cobaya}\xspace takes advantage of hybrid MPI+threading parallelization. All samplers are MPI-aware and use communication between parallel processes to improve sampling efficiency (e.g.\ MCMC uses all chains to learn a better covariance matrix and to assess convergence). Each single MPI process carries an independent \texttt{Model}, which takes advantage of threading to accelerate its likelihood and theory computations. Some of the sampler-intrinsic operations can also take advantage of threading (e.g.\ recomputing the proposal covariance matrix). For most sampling algorithms MPI usage is a very small fraction of the computing time, so interconnect speed is largely irrelevant and the code will run well on cheap commodity (or virtualized) clusters.
Threading is usually leveraged via \texttt{numpy}\xspace's \cite{numpy} interface to a threading-aware linear algebra library such as \texttt{OpenBLAS}\xspace\footnote{\url{https://www.openblas.net/}}, or via the use of externally linked compiled codes where necessary.
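As a minimal illustration of this threading model (the thread count shown is an arbitrary example; the environment variable must be set before \texttt{numpy}\xspace first loads its BLAS backend):

```python
import os
# An example thread count; OPENBLAS_NUM_THREADS / MKL_NUM_THREADS work
# analogously for the respective BLAS backends.
os.environ.setdefault("OMP_NUM_THREADS", "4")

import numpy as np

# A large matrix product is dispatched to the underlying BLAS
# (e.g. OpenBLAS), which parallelizes it across threads without any
# explicit threading code on the Python side.
rng = np.random.default_rng(0)
a = rng.standard_normal((1000, 1000))
b = rng.standard_normal((1000, 1000))
c = a @ b
```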
\subsection{Batch runs}
In physical applications, it is usual to test a number of theoretical models against different combinations of data sets. Running the sampler for every combination in this \texttt{grid} of theory--data pairs requires a large amount of highly redundant input, since it differs little between cases in the grid.
In order to facilitate that, we have incorporated a piece of code first implemented for \texttt{CosmoMC}\xspace, which can define such a \texttt{grid} starting from common definitions plus two lists of variations: theoretical models and data sets. This code prepares input files containing all possible (or allowed) combinations, creates a set of nested \texttt{model/data\_set} folder structures to store the results, and generates scripts to be submitted to a cluster queue.
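Schematically (with hypothetical model and data-set names, and not \texttt{Cobaya}\xspace's actual grid-definition syntax), expanding two lists of variations into a grid of runs amounts to a Cartesian product:

```python
from itertools import product

# Hypothetical names, for illustration only.
models = ["LCDM", "LCDM+mnu", "LCDM+r"]
datasets = [["planck_TT"], ["planck_TT", "BAO"], ["planck_TT", "BAO", "SN"]]

# Every (model, data-set combination) pair becomes one run, stored
# under a nested model/data_set folder structure:
grid = {f"{m}/{'_'.join(d)}": {"model": m, "data": d}
        for m, d in product(models, datasets)}

print(len(grid))  # 9 runs from 3 models x 3 data-set combinations
```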
\subsection{Cloud computing}
Users without access to a cluster may be able to pay for on-demand online computing resources, or cloud computing. We have explored and documented\footnote{\url{https://cobaya.readthedocs.io/en/latest/cluster_amazon.html}} \texttt{Cobaya}\xspace's usage in one popular solution, Amazon's EC2.
\section{Cosmology}
\label{sec:cosmo}
Although \texttt{Cobaya}\xspace\ is a general analysis tool, its development was originally motivated by its use in cosmology, using various standard codes for calculating cosmological observables and a variety of publicly available likelihoods from different data sets. We have included tools specific to this use case and have used it for testing on real-world problems. The use of any sampler in a cosmological context presents a series of challenges, some already presented in section \ref{sec:goals}, and others intrinsic to cosmology:
\\
\textbf{Input complexity:} To deal with the large amount of information needed to describe cosmological models and data sets, we have created a graphical application to generate input based on the user's choices for each piece of the cosmological model (including priors commonly used in the literature) and the data sets used. Also, this tool automatically selects an optimal sampler configuration for each particular problem (including picking an appropriate proposal covariance matrix for MCMC from a database of standard cosmological runs).
\\
\textbf{External dependencies:}
In general, we do not modify and re-distribute existing publicly available theory and likelihood codes, but instead interface stock versions through light wrappers\footnote{We do re-implement a few dependencies (e.g.\ the $H_0$, BAO and DES cosmological likelihoods), especially when the cost of interfacing very simple external code outweighs that of maintaining our own version, or because of other potential benefits, such as extensibility, re-usability for other experimental pipelines, etc.}. This allows us to keep \texttt{Cobaya}\xspace's source light and more easily maintainable, letting us focus on the statistical and user experience aspects. We mitigate the added complication of having to download external code by providing a one-line installer for all those external packages, which works across multiple systems (GNU/Linux, macOS and Windows, subject to OS compatibility of each of those external codes). The installer script can take an input file and install everything necessary to run it, including theory and likelihood codes, and data sets.
\\
\textbf{User-modified external code:} By not repackaging and re-distributing our own version of external codes, we impose no entry barrier for user-modified versions: users just need to provide paths to the installation folders of their modified versions.
\\
\textbf{Alternative external codes with the same role:} We have made the interface between experimental likelihoods and cosmological theory codes agnostic to the theory code used (as long as the quantities requested by other parts of the code can be computed by all of the alternatives).
\\
\textbf{Computational cost:} For cosmological data probing non-background cosmology, a likelihood evaluation for given cosmological parameters typically requires running at least a Boltzmann code to calculate the perturbation transfer functions (taking a second or more, depending on the number of cores and the specific model), plus possibly additional steps such as modelling non-linear evolution, calculating angular power spectra, correlation functions, etc.\ from the transfer functions. The second step can be fast in linear theory, but can take seconds when non-linear modelling or complex observables are involved. The original \texttt{CosmoMC}\xspace\ code provided a split between cosmological parameters affecting the transfer functions and parameters governing the initial power spectrum, allowing the latter to be sampled very quickly for fixed values of the transfer function parameters~\cite{Lewis:2002ah}, but assumed linear theory. In \texttt{Cobaya}\xspace\ the parameter dependencies and possible speed hierarchy can also be exploited by splitting the theory calculation up into a transfer function calculation, and subsequent calculations involving initial power spectrum parameters and non-linear evolution parameters. \texttt{Cobaya}\xspace's \texttt{CAMB}\xspace\ wrapper explicitly supports this with non-linear modelling based on variations of \textsc{halofit}~\cite{Smith:2002dz,Takahashi:2012em,Mead:2015yca}, generalizing the approach used in \texttt{CosmoMC}\xspace. Although the speed hierarchy is modest once non-linear corrections to the CMB lensing\footnote{A Limber approximation for the non-linear corrections could likely speed this up with very small loss of accuracy, but is not currently implemented.} and other spectra are included, the saving can be significant since at the very least the cost of computing the transfer functions does not have to be paid every time power spectrum and non-linear evolution parameters are varied.
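The saving from this split can be estimated with simple cost accounting; the sketch below uses illustrative, made-up per-evaluation timings (not measurements of \texttt{CAMB}\xspace or \texttt{Cobaya}\xspace):

```python
# Illustrative per-evaluation costs in seconds (made-up values):
t_transfer = 2.0   # Boltzmann transfer-function computation (slow)
t_spectra = 0.25   # power spectra + non-linear modelling (fast)

def cost_with_split(n_slow_steps, oversample_fast):
    """Total cost when each slow step is followed by `oversample_fast`
    fast-parameter steps that reuse the cached transfer functions."""
    n_fast_steps = n_slow_steps * oversample_fast
    return n_slow_steps * (t_transfer + t_spectra) + n_fast_steps * t_spectra

# Without the split, every one of 5000 steps pays the full price:
naive = 5000 * (t_transfer + t_spectra)
# With the split, 1000 slow steps plus 4 fast steps each gives the
# same 5000 posterior evaluations at a fraction of the cost:
split = cost_with_split(1000, 4)
print(naive, split)  # 11250.0 3250.0
```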
Despite the small additional Python overhead, \texttt{Cobaya}\xspace cosmology MCMC runs using the $\Lambda$CDM model with \texttt{CAMB}\xspace and {\it Planck} data~\cite{PCP2018} have comparable computation cost to using \texttt{CosmoMC}\xspace. In particular, assuming a good starting guess of the proposal covariance matrix,\footnote{This is done to reduce random noise in the test, by removing the very variable initial fraction of samples during which most of the proposal covariance matrix is learned. Also notice that in \texttt{CosmoMC}\xspace (and also in \texttt{MontePython}\xspace and \texttt{CosmoSIS}\xspace at the moment of writing) the proposal covariance matrix guess needs to be given by hand, while in \texttt{Cobaya}\xspace it is automatically selected from a database for a range of cosmological configurations.} both \texttt{Cobaya}\xspace and \texttt{CosmoMC}\xspace consistently reach Gelman-Rubin convergence of $R-1\alt 0.02$ in 8 hours on four cores per chain and 4 chains. Equivalence with \texttt{CosmoMC}\xspace as the \textit{de facto} standard in the field should be enough, especially when considering the additional advantages of \texttt{Cobaya}\xspace. In particular, \texttt{Cobaya}\xspace's model API (see section \ref{sec:liketheo}) and its optimal-sorting algorithm (see Appendix \ref{app:blocking}) guarantee that as pipelines become more complex and are further modularised, \texttt{Cobaya}\xspace will consistently outperform simpler approaches.
For example, \texttt{Cobaya}\xspace\ would be more efficient when varying a larger number of primordial power spectrum parameters with non-linear modelling, e.g.\ when performing primordial power spectrum reconstruction (which is explicitly supported in \texttt{Cobaya}\xspace\ by using a separate primordial power spectrum Theory class; currently compatible with \texttt{CAMB}\xspace only). For application to future data the calculation could be further modularized. For example, the non-linear modelling could be moved into a separate Theory class taking inputs from a Boltzmann code and calculating non-linear power spectra. This could be implemented in different ways and used consistently by likelihoods using different class implementations for cross-comparison. The calculation of actual observables from the underlying power spectra, for example by codes such as CCL~\cite{Chisari:2018vrw}, could be implemented as a separate Theory class taking non-linear power spectra and outputs from a Boltzmann Theory code, and using them to calculate observables such as tomographic lensing correlation functions required as the theoretical model input used by likelihoods.
\subsection{An example use case in Cosmology}
\label{sec:cosmo_use_case}
As a demonstration of some of \texttt{Cobaya}\xspace's capabilities when applied to a cosmological scenario, we present here an inference problem in which we search for a primordial power spectrum feature of the form
\begin{equation}
\label{eq:feature}
\frac{\Delta P(k)}{P_0(k)} = A \exp\left\{-\frac{1}{2} \left[\frac{\log_{10}\left(\frac{k}{k_c}\right)}{w}\right]^2\right\} \sin\left(2 \pi \frac{k}{l}\right)
\,,
\end{equation}
where $P_0(k)$ is a nearly-scale-invariant power-law scalar power spectrum; this feature has been injected into a fiducial cosmological model. This ansatz has four parameters: a relative \emph{amplitude} $A$ of the feature, a \emph{wavelength} $l$, and an \emph{envelope centre} $k_c$ and \emph{envelope log-width} $w$. We forecast how well the feature can be characterized using highly idealized versions of the Planck and forthcoming Simons Observatory (SO) data~\cite{Ade:2018sbj,PL2018}. To use the maximal effective sky area at each multipole we separate the likelihood into low- and high-$\ell$ ranges, with different sensitivities, sky fractions, and polarization combinations (we will see that a single likelihood wrapper can incorporate all of these).
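The ansatz above is straightforward to implement; a minimal, self-contained sketch (using, for illustration, the fiducial parameter values quoted later in this section):

```python
import numpy as np

def feature_ratio(k, A, l, k_c, w):
    """Relative feature Delta P(k) / P_0(k): a linear oscillation of
    wavelength l modulated by a log-Gaussian envelope of centre k_c
    and log-width w, with relative amplitude A."""
    envelope = np.exp(-0.5 * (np.log10(k / k_c) / w) ** 2)
    return A * envelope * np.sin(2 * np.pi * k / l)

# Fiducial values used later in the text (k, l, k_c in Mpc^-1):
k = np.logspace(-4, 0, 500)
ratio = feature_ratio(k, A=0.1, l=0.008, k_c=0.2, w=0.1)
```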
We try to constrain/recover the injected feature parameters in two scenarios: one using standard lensed CMB power spectra, and one in which we assume that a certain amount of delensing~\cite{Sherwin:2015baa,Larsen:2016wpa,Green:2016cjr} is possible. Large-scale lensing modes shift the scales of small-scale oscillation modes, leading to them being smoothed out in the lensed power spectrum. If lensing multipoles $L\alt 200$ can be recovered using an external tracer like the Cosmic Infrared Background (CIB) or reconstructed internally, this smoothing effect can be substantially reduced by delensing. As a simplified model we simply assume that the lensing power can be reduced by a constant factor of $0.7$ using a combination of internal delensing, CIB, and large-scale structure\footnote{The detectability of small-scale features is not very sensitive to whether or not the $L>200$ lensing modes are also delensed, since they largely contribute to adding power to both spectra.}. We do not attempt to model the statistics of the delensed power spectrum in detail, and take the log likelihood of both delensed and lensed spectra to be given by an $f_{\rm sky}$-scaled ideal full-sky mean log likelihood.
We demonstrate here how \texttt{Cobaya}\xspace can be used to build a pipeline for this use case. To do this, we take advantage of \texttt{Cobaya}\xspace's \texttt{CAMB}\xspace interface for computing the CMB power spectrum given a primordial spectrum, which we will define in a separate Theory class; we will also create a generic Likelihood class using data that we have generated based on a fiducial CMB power spectrum containing a feature. The step-by-step building of this pipeline would proceed in the following way (the source code of a possible implementation is included in Appendix \ref{sec:cosmo_demo_source}):
\begin{itemize}
\item We define a Theory class that computes the primordial power spectrum including the feature (see \texttt{FeaturePrimordialPk} in Fig.\ \ref{fig:src_demo_theory}). This class, inheriting from the \texttt{Cobaya}\xspace generic Theory prototype class, only needs to define two methods: one returning what this class can compute, whose name starts with \texttt{get\_} (in this case \texttt{get\_primordial\_scalar\_pk}); and another called \texttt{calculate}, which takes the parameters defining the power spectrum and computes it (in the example of Fig.\ \ref{fig:src_demo_theory}, simply a wrapper around a Python function that defines a linearly-oscillating feature with a Gaussian envelope on top of a nearly-scale-invariant power spectrum). The \emph{class attribute} \texttt{params} defines which parameters, among all those declared in the input, are to be passed to this class (and also which derived parameters are made available by it). The rest of the class attributes (in our example \texttt{n\_samples\_wavelength} and \texttt{k\_pivot}) define parameters that can be set in the declaration of the class in the \texttt{Cobaya}\xspace input (see Fig.\ \ref{fig:src_demo_input_delensed}), together with their default values if they are omitted.
\item Regardless of the definition of the parameters passed to the new Theory class, we may want to redefine them, e.g.\ to sample them over a log-uniform prior. We show how to do that for the feature \texttt{amplitude}, its \texttt{wavelength} and the \texttt{centre} of its envelope in Fig.\ \ref{fig:src_demo_input_delensed}: we define the respective dummy log-parameters in the \texttt{params} block, indicate that they shall not be passed to the pipeline with the option ``drop: True'', and define the original ones as a function of them. These redefinitions are especially useful when we cannot easily modify the source code of the original Theory code or likelihood (which is a common occurrence for complex codes such as official experimental likelihoods).
\item We may want to impose additional priors that depend on more than one parameter. In our example, we would like to ban combinations of amplitude and position and width of the feature envelope that produce no significant trace in the observable CMB multipole range. To do that, we can define the corresponding log-prior function e.g.\ in the same file as the Theory code (see \texttt{logprior\_high\_k} in Fig.\ \ref{fig:src_demo_theory}), and reference it in the input as shown in Fig.\ \ref{fig:src_demo_input_delensed}.
\item The likelihood segment of the pipeline is usually the most complicated one for research cases that do not use public data as-is, or public likelihood codes. Users can take two alternative approaches: on the one hand, they can write generic (i.e.\ non-\texttt{Cobaya}\xspace-dependent) code to generate or load some data and compute their likelihood (possibly assuming a parameterized experimental model), and then wrap it in a \texttt{Cobaya}\xspace Likelihood class; alternatively they can inherit from one of \texttt{Cobaya}\xspace's cosmological likelihoods and override the necessary methods. In both cases, the methods that need to be defined or overridden are \texttt{get\_requirements}, declaring which cosmological observables/quantities will be requested from the Theory code(s), and \texttt{logp}, which retrieves said quantities and passes them to the user-defined probability function (or does this computation directly). These two methods for our use case can be seen in Fig.\ \ref{fig:src_demo_likelihood}. Here, we inherit from one of \texttt{Cobaya}\xspace's likelihoods, \texttt{CMBlikes}, which takes care of loading the necessary data and computing a mean log-likelihood given the polarized CMB power spectra. We override \texttt{get\_requirements} to add the \emph{results} object of \texttt{CAMB}\xspace (\texttt{CAMBdata}, from which we will extract the partially-lensed CMB spectra), to the list of CMB polarizations and multipole ranges defined by the data with which the likelihood was initialized (and set here via a \texttt{super} call). This shows the flexibility of \texttt{Cobaya}\xspace's wrappers: even if \texttt{Cobaya}\xspace does not know about a particular observable, it can still be retrieved from the theory code by hand. We also need to override the \texttt{logp} method of this class to retrieve the partially-lensed power spectra instead of the fully-lensed ones.
Notice that likelihoods inheriting from an existing cosmological \texttt{Cobaya}\xspace likelihood can work outside the \texttt{Cobaya}\xspace pipeline: they can e.g.\ be used as standalone objects that can evaluate data probabilities given an arbitrary realization of the required observable. As with the Theory codes, we make use of \emph{class attributes} to specify which options can be defined in the input at run time, which allows us to write a single likelihood wrapper for our four likelihoods, with the respective data specified via \texttt{dataset\_file}, and the lensing scenario defined by the value of \texttt{Alens\_delens} ($0.3$ for the delensed case, and $1.0$ for the non-delensed one).
\item On the sampler side, here MCMC, thanks to our use of \texttt{Cobaya}\xspace's class wrappers, the parameter speed hierarchy is taken advantage of automatically: parameters are split into a \emph{primordial power spectrum} block (those sent to \texttt{FeaturePrimordialPk}) and a \emph{CMB transfer functions} block (those sent to \texttt{CAMB}\xspace), and the latter, being the slowest, are placed first. Since we are exploring both feature and baseline $\Lambda$CDM parameters, and the latter will have approximately similar posteriors in delensed and non-delensed cases (since we are injecting a feature for which we don't expect strong correlations with background cosmology), we may want to perform an initial run with $\Lambda$CDM cosmology and a liberal convergence criterion, that we will use to generate a covariance matrix (a standard output file of \texttt{Cobaya}\xspace's MCMC sampler). We can then set this file as the proposal \texttt{covmat} for the \texttt{mcmc} sampler (see Fig.\ \ref{fig:src_demo_input_delensed}). The \texttt{mcmc} sampler will then create the total covariance matrix, using the provided one (including correlations) for the $\Lambda$CDM parameters, and the \texttt{proposal} property of each parameter for the feature parameters.
\end{itemize}
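The log-uniform reparametrization described above looks schematically as follows when written as an equivalent Python-dict input (the ``prior'', ``drop'', ``value''-as-lambda-string and ``latex'' keys are \texttt{Cobaya}\xspace's documented parameter syntax; the parameter name and prior range are just illustrative choices for our example, shown here for a single parameter):

```python
# Schematic params block for a log-uniform prior on the feature
# amplitude: the dummy log-parameter is sampled, dropped before being
# passed to the pipeline, and the physical parameter derived from it.
params = {
    "log10_amplitude": {
        "prior": {"min": -3.0, "max": 0.0},  # uniform in log10(A)
        "drop": True,                        # not passed to the pipeline
    },
    "amplitude": {
        # Cobaya evaluates this string as a function of other parameters:
        "value": "lambda log10_amplitude: 10**log10_amplitude",
        "latex": "A",
    },
}
```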
Notice that when defining this pipeline, we did not need to modify \texttt{Cobaya}\xspace or \texttt{CAMB}\xspace, but simply write the code carrying out the calculations in a non-\texttt{Cobaya}\xspace-dependent way, and write \texttt{Cobaya}\xspace wrappers declaring what these calculations require and what they make available. The advantages of this approach are, on the one hand, that it is more accessible for researchers who do not know the inner workings of \texttt{Cobaya}\xspace (or \texttt{CAMB}\xspace), and on the other, that distributing a small set of files (for reproducibility and archiving purposes\footnote{Reproducibility is ensured despite possible backwards-incompatible changes in the codes used, since \texttt{Cobaya}\xspace stores both its version and those of the external codes used (here \texttt{CAMB}\xspace) in the output files.}) is more practical and scalable than distributing whole modified versions of codes.\footnote{The source code to reproduce this example can be downloaded from \url{https://github.com/CobayaSampler/paper_cosmo_demo}}
In order to test our pipeline, we have produced both a delensed and a non-delensed MCMC run based on a fiducial cosmological model containing a feature as defined in Eq.\ \eqref{eq:feature} with fiducial parameter values $A=0.1$, $l=0.008\,\mathrm{Mpc}^{-1}$, $k_c=0.2\,\mathrm{Mpc}^{-1}$, $w=0.1\,\mathrm{Mpc}^{-1}$. For the rest of the cosmological model, we assume Planck-like $\Lambda$CDM \cite{Aghanim:2018eyx} with parameter values given by the best fit to the low-$\ell$ TT+EE, high-$\ell$ TT+TE+EE and lensing power spectrum Planck 2018 likelihoods. We vary both feature and $\Lambda$CDM parameters, with a prior for the feature parameters centered around the fiducial values (the aim of this demonstration is to characterize the injected feature, not to detect its presence). We show the result for the delensed scenario in Fig.\ \ref{fig:cosmo_demo} (for the sake of brevity, the marginalized 1- and 2-d posteriors for the rest of the cosmological parameters are not shown).\footnote{The sampling process took 72h (wall-clock time) to converge (Gelman-Rubin $R-1<0.01$) running 4 parallel chains with 4 threads each, on an Intel Xeon CPU E5-2640v3. The running time is significantly higher than in the test case discussed in Sec.\ \ref{sec:cosmo} due to the increased precision necessary to run this pipeline (at the levels of transfer function computation, CMB multipole integration, and lensing potential computation), which makes the full pipeline in this setting take $\approx 7\,\mathrm{s}$.} Not surprisingly, we find a degeneracy along which the feature is simultaneously (1) centred at higher $k$, (2) wider in envelope, and (3) larger in amplitude.
We cannot usually map the end of this degeneracy because it would involve considering amplitudes $\Delta P(k) / P(k)\sim 1$, which would contradict the assumption implicit in Eq.\ \eqref{eq:feature} that the feature is a small perturbation on top of vanilla slow-roll dynamics producing a nearly-scale-invariant power-law primordial spectrum.
\begin{figure*}[ht]
\centering
\includegraphics[width=.8\textwidth]{img_cosmo_demo.pdf}
\caption{1- and 2-d marginalized posteriors for the feature parameters in the delensed scenario ($\Lambda$CDM cosmological parameters were sampled but are not shown here). There exists a degeneracy towards simultaneous large amplitude, wide envelope and large-$k$ center reaching the prior boundaries (see discussion in main text).}
\label{fig:cosmo_demo}
\end{figure*}
In the non-delensed case, not shown in Fig.\ \ref{fig:cosmo_demo}, the MCMC chains mostly reproduce the prior region in which the feature does not leave a significant imprint on top of baseline $\Lambda$CDM (i.e.\ small feature amplitudes, envelopes centered outside the CMB window, or envelopes being too narrow). This is due to the small imprint the fiducial feature leaves on top of $\Lambda$CDM when full lensing is considered.
This example use case is presented as a demonstration of an array of \texttt{Cobaya}\xspace's features, and it should not be taken as proof that delensing significantly enhances primordial feature recovery. While this is probably true for a small region of the parameter space, one would need to perform a more detailed study, using more realistically-generated data and more realistic delensing, and characterizing the $\chi^2$ distribution over a number of simulations to assess detectability. The resulting posterior would also be more irregular than the smooth one shown in Fig.\ \ref{fig:cosmo_demo}, and possibly multimodal, in which case it would make sense to use \texttt{PolyChord}\xspace instead as a sampler.
\section{Conclusion}
\texttt{Cobaya}\xspace is a general modularized framework for Bayesian analysis of models with complex pipelines, with simple and structured input specification, and a minimal API for interfacing external theories and likelihoods. Model posteriors can be sampled, maximized and importance-reweighted. \texttt{Cobaya}\xspace's main advantage is its ability to automatically account for interdependencies between the different components of the pipeline, and to use these dependencies and the computational costs of varying each parameter to optimally sort the parameter blocks of the model so that the posterior is efficiently explored, all without the need for specific code by the user.
\texttt{Cobaya}\xspace is well suited for high-performance computing thanks to its low overhead, MPI parallelization, and batch-running of grids of jobs. In a future release, we plan to implement HPC-enabled containerization capabilities (\texttt{Docker}+\texttt{Shifter} and \texttt{Singularity}), as well as to improve the batch-running tools.
\section{Acknowledgements}
Thanks to J.\ Lesgourgues and W.\ Handley for support on interfacing \texttt{CLASS}\xspace and \texttt{PolyChord}\xspace, respectively. Thanks to G.\ Ca\~nas Herrera, A.\ Finke, X. Garrido, S.\ Heimersheim, L.\ Hergt, M.S.\ Madhavacheril, V.\ Miranda, T.\ Morton, J.\ Zunz and many others for help debugging and patching \texttt{Cobaya}\xspace.
We acknowledge support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement No. [616170].
AL is also supported by the UK STFC grants ST/P000525/1 and ST/T000473/1.
\chapter{Introduction}
\heads
Graded manifolds are geometrical objects introduced by Kostant,
{\kost} and Berezin, Leites, {\berezin}, as the natural
mathematical tool in order to study supersymmetric problems. In
particular, the formalism of graded manifolds provides the possibility
to deal with classical dynamics of particles with spin. Indeed, it has
been proved, {\bmarinov}, that the extension of ordinary phase spaces
by anti-commuting variables yields the classical phase spaces of
particles with spin. Thus, Kostant's article, {\kost}, can also be
considered as a treatment of super-Hamiltonian systems and of their
geometrical prequantization. We mention {\ibortsolano}, {\montedyo},
{\monterma}, {\ruipma} as the most important references related, more
or less, to the global problems of Lagrangian supermechanics.
Our aim here is to investigate several aspects of graded
differential geometry, unexplored until today, and to use them in order
to establish a theory of connections on graded principal bundles.
This work is mainly motivated by the need to make clear why graded
connection theory is convenient for the description of supersymmetric
gauge theories, and how one could use this theory in practice, in
order to obtain, for example, the supersymmetric standard model
in the same spirit as one obtains the standard model of fundamental
interactions from ordinary gauge theory. The notion of graded
principal bundle first appeared in {\alm}, {\almdyo}: there,
the notion of connection on such bundles has been also presented,
but in a rather academic way, i.e. using jet bundles, bundles of
connections and Atiyah's exact sequence, conveniently generalized
in this context. This point of view turned out to be particularly
useful for mathematics, as it inspired and affected more recent
developments in connection theory on far more sophisticated types
of supermanifolds, see {\bbr}.
Our analysis focuses on the investigation of the geometry of graded
principal bundles, as well as on the reformulation of the notion of
graded connection in terms of graded distributions and Lie
superalgebra-valued graded differential forms. We believe that our
approach makes the theory of graded connections more accessible and
closer to applications in supersymmetric Yang-Mills theory, thus
making it possible for this elegant and economical theory to be
applied to specific problems of supersymmetric physics.
Throughout the paper, we follow the original terminology of
Kostant, {\kost} and we avoid the use of the term
``supermanifold" or ``Lie supergroup". This strict distinction
between the terms ``supermanifold" and ``graded manifold" is
justified if one wishes to avoid confusion with DeWitt's or
Rogers's supermanifold theory, {\bruzzo}, {\arcad}, {\roger},
{\rogers}, {\dewitt}. In this latter approach, one constructs the
supermanifold following the pattern of ordinary differential geometry
(using supercharts, superdifferentiability, etc.). In this way, the
supermanifold is a topological space, and in particular, a Banach
manifold, {\bruzzo}. In the Kostant-Leites approach, the supermanifold is
a pair which consists of a usual differentiable manifold equipped with
a sheaf satisfying certain properties.
Despite the fact that Kostant's graded manifolds seem to be more
abstract, we have chosen to follow Kostant's formalism for several
reasons. The most important is that in this formalism we avoid the
complications of DeWitt's supergeometry coming from
superdifferentiability of mappings. Furthermore, Kostant's graded
manifold theory is closer to ordinary manifold theory, since it
constitutes a special kind of sheaf theory over ordinary manifolds.
Finally, the algebraic concepts entering in graded manifold theory
often significantly simplify the calculus on graded manifolds.
The article is organized as follows. In section 2, we recall
the fundamentals of graded manifold theory, {\kost}, which
will be necessary for the subsequent analysis, introducing at
the same time some new concepts and tools, useful in computing
pull-backs of graded differential forms. The next section deals
with graded Lie theory. After a review of the basics of the theory,
{\kost}, {\alm}, we show how one can obtain a right action of a
graded Lie group from a left one (and vice versa); we also explain
how a graded Lie group action gives rise to graded manifold morphisms
and derivations. A graded analog of the adjoint action of a Lie group
on itself is analyzed, and the parallelizability theorem for graded
Lie groups is proved. In section 4 we investigate the relation between
graded distributions and graded Lie group actions. The
free action in graded geometry is the central notion of this section,
for which we provide two equivalent characterizations. The main result is
that each free action of a graded Lie group induces a regular graded
distribution. We also study quotient graded structures, completing the
work of {\alm} on this subject.
The notion and the geometry of graded principal bundle are analyzed
in sections 5 and 6. Here, we adopt a definition of this object
slightly different from that in {\almdyo}, completing, in some sense,
the definition of the latter reference. We prove that several
properties of ordinary principal bundles remain valid in the graded
context, making of course the appropriate modifications and
generalizations. For example, we prove that the orbits of the structure
graded Lie group are identical to the fibres of the projection and
that a graded principal bundle is globally trivial if and only if
it admits a global graded section. As an application of the tools
developed until now, we provide an alternative definition of the
graded principal
bundle, which is often very useful, as we explain in section 6. After the
introduction and study of the main properties of Lie superalgebra-valued
graded differential forms in section 7, we are ready to introduce the
notion of graded connection. Our definition is guided by the
geometrical structure of the graded principal bundle: the sheaf of
vertical derivations coincides with the graded distribution induced by
the action of the structure graded Lie group. We discuss two
equivalent definitions of the graded connection and we show how one
can construct a graded connection locally. As a direct application, we
establish the existence theorem for graded connections.
The graded curvature is the subject of section 9. We prove the
structure equation and the Bianchi identity for the curvature of a
graded connection. We show finally that this notion controls the
involutivity of the horizontal graded distribution determined by a
graded connection.
The previous analysis of the curvature also shows that the graded
connection $\biomega$ (as well as its curvature $F^{\biomega}$)
decomposes as $\biomega=\biomega_{0}+\biomega_{1}$, where
$\biomega_{0}$ is an even graded differential 1-form with values in
the even part of the Lie superalgebra $\frak g$ of the structure
group, while $\biomega_{1}$ is an odd graded differential 1-form with
values in $\frak g_{1}$, the odd part of $\frak g$. Furthermore, the
restriction of $\biomega_{0}$ to the underlying differentiable
manifold (which is an ordinary principal bundle) gives rise to a usual
connection, and precisely this observation suggests interpreting
$\biomega$, in physics terminology, as a supersymmetric gauge
potential incorporating the usual gauge potential $\biomega_{0}$ and
its supersymmetric partner $\biomega_{1}$.
\vskip0.2cm
{\bf Notational conventions.} For an algebra $A$, $A^{\ast}$ denotes
its full dual and $A^{\circ}$ its finite dual (in contrast to
Kostant's conventions where $A\hbox{\kern-1.3pt\lower0.7pt\hbox{$\mathchar"6013$}}$ denotes the full, and $A^{\ast}$
the finite dual). If $\hbox{$\script A$}$ is a sheaf of algebras over a differentiable
manifold $M$, then $\hbox{\bbm 1}_{\fam4 \rsfs A}$ denotes the unit of $\hbox{$\script A$}(M)$ and
$m_{\fam4 \rsfs A}$ the algebra multiplication. Throughout this article
the term ``graded commutative" means ``$\hbox{\bf\char'132}_{2}$-graded commutative",
unless otherwise stated. If $E$ is a $\hbox{\bf\char'132}_{2}$-graded vector
space, then $E_{0}$ and $E_{1}$ stand for its even and odd subspaces:
$E=E_{0}\;{\mathchar"2208}\; E_{1}$. For an element $v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} E_{i}, i=0,1$,
$|v|=i$ denotes the $\hbox{\bf\char'132}_{2}$-degree of $v$. Elements belonging only to
$E_{0}$ or $E_{1}$ are called homogeneous (even or odd, respectively).
\chapter{Graded manifold theory}
In this section, we review the basic notions of graded
manifold theory, {\kost}. We also introduce some new concepts
following the pattern of ordinary differential geometry as well as
tools that will be useful in the sequel. More precisely,
the notions of vertical and projectable derivations are
introduced; push-forward (and pull-back) of derivations under
isomorphisms of graded manifolds is defined and some useful
formulas for the computation of pull-backs of forms are
established. We begin with the notion of the graded manifold, {\kost}.
A ringed space $(M,\hbox{$\script A$})$ is called a graded manifold of dimension
$(m,n)$ if $M$ is a differentiable manifold of dimension $m$, $\hbox{$\script A$}$ is
a sheaf of graded commutative algebras, there exists a sheaf
epimorphism $\varrho\colon\kern2pt\hbox{$\script A$}\rightarrow C^{\infty}$ and for each open
$U$, an open covering $\{V_{i}\}$ such that $\hbox{$\script A$}(V_{i})$
is isomorphic as a graded commutative algebra to
$C^{\infty}(V_{i})\;{\mathchar"220A}\;\Lambda\hbox{\bf\char'122}^{n}$, in a manner compatible with
the restriction morphisms of the sheaves involved.
Here, $C^{\infty}$ stands for the sheaf of differentiable functions on
$M$, equipped with its trivial grading:
$\big(C^{\infty}(U)\big)_{0}=C^{\infty}(U)$ for each open
subset $U\subset M$. We write ${\rm dim\kern1pt}(M,\hbox{$\script A$})=(m,n)$.
An open $U\subset M$, for which $\hbox{$\script A$}(U)\cong
C^{\infty}(U)\;{\mathchar"220A}\;\Lambda\hbox{\bf\char'122}^{n}$, is called an $\hbox{$\script A$}$-splitting
neighborhood. A graded coordinate system on an $\hbox{$\script A$}$-splitting
neighborhood $U$ is a collection
$(x^{i},s^{j})=(x^{1},\ldots,x^{m},s^{1},\ldots,s^{n})$ of homogeneous
elements of $\hbox{$\script A$}(U)$ with $|x^{i}|=0,|s^{j}|=1$, such that
$(\tilde{x}^{1},\ldots,\tilde{x}^{m})$ is an ordinary coordinate
system on the open $U$, where $\tilde{x}^{i}=\varrho(x^{i})
\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} C^{\infty}(U)$ and $(s^{1},\ldots,s^{n})$ are algebraically
independent elements, that is, $\prod_{j=1}^{n}s^{j}\neq 0$.
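The purely odd factor $\Lambda\hbox{\bf\char'122}^{n}$ of a splitting neighborhood is small enough to model directly. The following Python sketch (the dict-of-frozensets encoding and all names are ours, purely illustrative) implements the product of $\Lambda{\bf R}^{n}$ together with a body map playing the r\^ole of the epimorphism $\varrho$:

```python
# A finite-dimensional sketch of the Grassmann factor Lambda R^n of a
# splitting neighborhood.  Elements are dicts mapping frozensets of
# generator indices to real coefficients; frozenset() encodes the unit 1.
# This encoding is an illustrative choice of ours, not the paper's.

def gmul(a, b):
    """Product in Lambda R^n, with the sign of sorting the juxtaposed
    index sequences (so s^i s^j = -s^j s^i and s^j s^j = 0)."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if I & J:                     # repeated generator: s^j s^j = 0
                continue
            seq = sorted(I) + sorted(J)
            sign = 1
            for i in range(len(seq)):
                for j in range(i + 1, len(seq)):
                    if seq[i] > seq[j]:   # count inversions of the merge
                        sign = -sign
            K = I | J
            out[K] = out.get(K, 0) + sign * x * y
    return {K: c for K, c in out.items() if c != 0}

def gen(j):
    """The generator s^j."""
    return {frozenset([j]): 1}

def body(a):
    """The analogue of rho: the coefficient of 1.  Its kernel consists
    exactly of the nilpotent elements of the algebra."""
    return a.get(frozenset(), 0)
```

Here `gmul(gen(1), gen(2))` and `gmul(gen(2), gen(1))` differ by a sign, `gmul(gen(1), gen(1))` vanishes, the product of all $n$ generators is nonzero, and `body` is multiplicative, mirroring the fact that $\varrho$ is an algebra epimorphism.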
It can also be shown, {\kost}, that if $\hbox{$\script A$}^{1}(U)$ is the set of nilpotent
elements of $\hbox{$\script A$}(U)$, then the sequence
$$0\longrightarrow\hbox{$\script A$}^{1}(U)\buildrel\over\longrightarrow
\hbox{$\script A$}(U)\buildrel\over\longrightarrow
C^{\infty}(U)\longrightarrow 0\eqno\eqname\exactsequence$$
is exact.
Given two graded manifolds $(X,\hbox{$\script A$})$ and $(Y,\hbox{$\script B$})$, one can form their
product $(Z,\hbox{$\script C$})=(X,\hbox{$\script A$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})$ which is also a graded manifold,
{\ruipma}, where $Z=X\;{\mathchar"2202}\; Y$ and the sheaf $\hbox{$\script C$}$ is given by
$\hbox{$\script C$}(Z)=\hbox{$\script A$}(X)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script B$}(Y)$; in the previous tensor product, $\pi$ means
the completion of $\hbox{$\script A$}(X)\;{\mathchar"220A}\;\hbox{$\script B$}(Y)$ with respect to
Grothendieck's $\pi$-topology, {\grot}, {\ruipma}.
One can define a morphism between two graded manifolds as being a
morphism of ringed spaces compatible with the sheaf epimorphism
$\varrho$, {\ruipma}, but often it is more convenient to do this
in a more concise way: a morphism $\sigma\colon\kern2pt(M,\hbox{$\script A$})\rightarrow(N,\hbox{$\script B$})$
between two graded manifolds is just a morphism
$\sigma^{\ast}\colon\kern2pt\hbox{$\script B$}(N)\rightarrow\hbox{$\script A$}(M)$ of graded commutative
algebras.
A very useful object in graded manifold theory is the finite dual
$\hbox{$\script A$}(M)^{\circ}$ of $\hbox{$\script A$}(M)$ defined as
$$\hbox{$\script A$}(M)^{\circ}=\{a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)^{\ast}\,|\,a\,\hbox{vanishes on
an ideal of $\hbox{$\script A$}(M)$ of finite codimension}\}.$$
Using general algebraic techniques (see for example {\mont}), one
readily verifies that $\hbox{$\script A$}(M)^{\circ}$ is a graded cocommutative
coalgebra, the coproduct $\hbox{$\Delta^{\circ}_{\script A}$}$ and counit $\epsilon_{\fam4 \rsfs
A}^{\circ}$ on $\hbox{$\script A$}(M)^{\circ}$ being given by $\hbox{$\Delta^{\circ}_{\script A}$} a(f\;{\mathchar"220A}\;
g)=a(fg)$, $\epsilon_{\fam4 \rsfs A}^{\circ}(a)=a(\hbox{\bbm 1}_{\fam4 \rsfs A})$, $\forall
a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)^{\circ}$, $f,g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)$.
The set of group-like elements of this coalgebra contains only
elements of the form $\delta_{p}$ for $p\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} M$, where
$\delta_{p}\colon\kern2pt\hbox{$\script A$}(M)\rightarrow\hbox{\bf\char'122}$ is defined by
$$\delta_{p}(f)=\varrho(f)(p)=\tilde{f}(p).\eqno\eqname\deltap$$
Furthermore, if $\sigma\colon\kern2pt(M,\hbox{$\script A$})\rightarrow(N,\hbox{$\script B$})$ is a
morphism of graded manifolds, then the element
$\sigma_{\ast}a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(N)^{\ast}$ defined for $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)^{\circ}$ by
$$\sigma_{\ast}a(g)=a(\sigma^{\ast}g),
\forall g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(N)\eqno\eqname\sigmorph$$
vanishes on an ideal of finite codimension. We thus obtain a morphism
of graded coalgebras
$\sigma_{\ast}\colon\kern2pt\hbox{$\script A$}(M)^{\circ}\rightarrow\hbox{$\script B$}(N)^{\circ}$ which
respects the group-like elements and induces a differentiable map
$\sigma_{\ast}|_{M}\colon\kern2pt M\rightarrow N$.
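The defining relation $\sigmorph$ and the induced map on points can be mimicked with finite sets standing in for the underlying manifolds and plain functions for sheaf sections; a purely even toy model, with all names our own:

```python
# Toy model of sigma_* a (g) = a(sigma^* g): finite sets stand in for
# the (purely even) underlying manifolds, Python functions for sections,
# and callables on functions for dual elements.

def delta(p):
    """The group-like element delta_p: evaluation at the point p."""
    return lambda f: f(p)

def pullback(phi, g):
    """sigma^* g = g o phi, for the map phi: M -> N on points."""
    return lambda p: g(phi(p))

def pushforward(phi, a):
    """sigma_* a, defined by (sigma_* a)(g) = a(sigma^* g)."""
    return lambda g: a(pullback(phi, g))
```

In this model `pushforward(phi, delta(p))` acts exactly like `delta(phi(p))`, which is the statement that $\sigma_{\ast}$ respects group-like elements and induces the map $\sigma_{\ast}|_{M}$ on points.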
Another very important property of the graded coalgebra
$\hbox{$\script A$}(M)^{\circ}$ is that the set of its primitive elements with respect
to the group-like element $\delta_{p}$, i.e. elements
$v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)^{\circ}$ for which $\hbox{$\Delta^{\circ}_{\script A}$}(v)=v\;{\mathchar"220A}\;\delta_{p}+
\delta_{p}\;{\mathchar"220A}\; v$, is equal to the set of derivations at $p$
on $\hbox{$\script A$}(M)$, that is, the set of elements $v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)^{\ast}$ for which
$$v(fg)=(vf)(\delta_{p}g)+(-1)^{|v||f|}(\delta_{p}f)(vg), \forall
f,g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(M)\eqno\eqname\tangentvector$$
for homogeneous $v$ and $f$. This set is a subspace of
$\hbox{$\script A$}(M)^{\circ}$ which we call tangent space of $(M,\hbox{$\script A$})$ at $p\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} M$
and we denote it by $T_{p}(M,\hbox{$\script A$})$. One easily verifies that the morphisms
of graded manifolds preserve the subspaces of primitive elements:
if $v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{p}(M,\hbox{$\script A$})$, then $\sigma_{\ast}v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{q}(N,\hbox{$\script B$})$
where $q=\sigma_{\ast}|_{M}(p)$. Hence, we have a well-defined notion of
the tangent (or the differential) of the morphism $\sigma$ at any
point $p\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} M$ and we adopt the notation
$$T_{p}\sigma\colon\kern2pt T_{p}(M,\hbox{$\script A$})\rightarrow T_{q}(N,\hbox{$\script B$}),\quad
T_{p}\sigma(v)=\sigma_{\ast}v.\eqno\eqname\tangent$$
In this context, the set $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ of derivations of $\hbox{$\script A$}(U)$ plays
the r\^ole of graded vector fields on $(U,\hbox{$\script A$}|_{U}), U\subset M$ open.
The difference with the ordinary differential geometry is that we
cannot evaluate directly a derivation $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ at a point
$p\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} U$ in order to obtain a tangent vector belonging to $T_{p}(M,\hbox{$\script A$})$.
Instead, we may associate to each $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ a tangent vector
$\tilde{\xi}_{p}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{p}(M,\hbox{$\script A$}), \forall p\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} U$ in the following way:
for each $f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(U)$, we define $$\tilde{\xi}_{p}(f)=\delta_{p}(\xi
f).\eqno\eqname\derivation$$
If $U$ is an open subset of $M$ and $(m,n)={\rm dim\kern1pt}(M,\hbox{$\script A$})$,
the set of derivations $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ is a free left $\hbox{$\script A$}(U)$-module of
dimension $(m,n)$. If $U$ is an $\hbox{$\script A$}$-splitting neighborhood,
$(x^{i},s^{j})$ a graded coordinate system on $U$ and
$\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$, then there exist elements $\xi^{i},\xi^{j}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(U)$
such that
$$\xi=\sum_{i=1}^{m}\xi^{i}{\hbox{\cyr\char'144\kern0.3pt}\over{\hbox{\cyr\char'144\kern0.3pt} x^{i}}}+
\sum_{j=1}^{n}\xi^{j}{\hbox{\cyr\char'144\kern0.3pt}\over{\hbox{\cyr\char'144\kern0.3pt} s^{j}}},$$
where the derivations $\hbox{\cyr\char'144\kern0.3pt}/\hbox{\cyr\char'144\kern0.3pt} x^{i}$ and
$\hbox{\cyr\char'144\kern0.3pt}/\hbox{\cyr\char'144\kern0.3pt} s^{j}$ are defined by
$${\hbox{\cyr\char'144\kern0.3pt}\over{\hbox{\cyr\char'144\kern0.3pt} s^{k}}}(s^{\ell})=\delta_{k}^{\ell}\hbox{\bbm 1}_{U},\,
{\hbox{\cyr\char'144\kern0.3pt}\over{\hbox{\cyr\char'144\kern0.3pt} s^{k}}}(x^{i})=0,\,
{\hbox{\cyr\char'144\kern0.3pt}\over{\hbox{\cyr\char'144\kern0.3pt} x^{i}}}(s^{k})=0,\,
{\hbox{\cyr\char'144\kern0.3pt}\over{\hbox{\cyr\char'144\kern0.3pt} x^{i}}}(x^{j})=\delta_{i}^{j}\hbox{\bbm 1}_{U},\eqno\eqname\defderiv$$
$\hbox{\bbm 1}_{U}$ being the unit of $\hbox{$\script A$}(U)$. Clearly, this
decomposition is not valid in general for derivations belonging to
$\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(M)$. Therefore, we give the following definition:
\math{Definition.}{\parallel}{\sl A graded manifold $(M,\hbox{$\script A$})$ is called
parallelizable if the set of derivations $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(M)$ admits a global
basis on $\hbox{$\script A$}(M)$ consisting of $m$ even and $n$ odd derivations.}
\vskip0.3cm
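On the purely odd part, the coordinate derivations of equation $\defderiv$ act combinatorially: $\hbox{\cyr\char'144\kern0.3pt}/\hbox{\cyr\char'144\kern0.3pt} s^{k}$ deletes $s^{k}$ from a monomial and picks up the Koszul sign of moving it to the front. A minimal sketch, assuming a dict-of-frozensets encoding of $\Lambda{\bf R}^{n}$ (an illustrative choice of ours):

```python
# Sketch of the odd coordinate derivation d/ds^k on Lambda R^n.
# A basis monomial s^{i_1}...s^{i_r} (i_1 < ... < i_r) is encoded as a
# frozenset of indices; an element is a dict {frozenset: coefficient}.

def dds(k, a):
    """d/ds^k: drop the generator s^k from each monomial containing it,
    with the sign (-1)^(number of generators preceding s^k)."""
    out = {}
    for I, x in a.items():
        if k not in I:
            continue                       # monomial without s^k is killed
        sign = (-1) ** sum(1 for i in I if i < k)
        J = I - {k}
        out[J] = out.get(J, 0) + sign * x
    return {J: c for J, c in out.items() if c != 0}
```

On $s^{1}s^{2}$ one recovers the graded Leibniz rule, e.g. $\hbox{\cyr\char'144\kern0.3pt}/\hbox{\cyr\char'144\kern0.3pt} s^{2}(s^{1}s^{2})=-s^{1}$; distinct odd coordinate derivations anticommute, and each squares to zero, as expected of odd elements of the derivation module.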
The difficulty one encounters in ordinary manifold theory when
pushing forward (or pulling back) a vector field by means of a
differentiable mapping is also present in the context of graded
manifolds. As in the case of ordinary manifolds, this is possible
only if we use isomorphisms of the graded manifold structure. More precisely:
\math{Definition.}{\pushfor}{\sl Let
$\sigma\colon\kern2pt(M,\hbox{$\script A$})\rightarrow(N,\hbox{$\script B$})$ be an isomorphism between the
graded manifolds $(M,\hbox{$\script A$})$ and $(N,\hbox{$\script B$})$ and let $U\subset M$ be an open
subset such that $U=\sigma_{\ast}^{-1}(V), V\subset N$ open. If
$\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$, then we define the push-forward
$\sigma_{\ast}\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(V)$ as
$\sigma_{\ast}\xi=(\sigma^{\ast})^{-1}\hbox{\lower5.8pt\hbox{\larm\char'027}}\xi\hbox{\lower5.8pt\hbox{\larm\char'027}}\sigma^{\ast}$.
For the pull-back of $\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(V)$, we define
$\sigma^{\ast}\eta=(\sigma^{-1})_{\ast}\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$.}
\vskip0.3cm
In many situations some derivations may be related
through a morphism (which is not necessarily an isomorphism) in a
manner similar to that of the pull-back. We give the following
definition:
\math{Definition.}{\proj}{\sl Let
$\sigma\colon\kern2pt(M,\hbox{$\script A$})\rightarrow(N,\hbox{$\script B$})$ be a morphism of graded
manifolds. We call two derivations $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(M)$ and $\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(N)$
$\sigma$-related if for each $f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(N)$ we have
$\sigma^{\ast}(\eta f)=\xi(\sigma^{\ast}f)$. In particular, if we fix
$\xi$ and $\sigma$ is an epimorphism, the derivation $\eta$, if it
exists, is unique; in such a case we write $\eta=\sigma_{\ast}\xi$, we call $\xi$ a
$\sigma_{\ast}$-projectable derivation and $\sigma_{\ast}\xi$ its
projection by means of $\sigma$. We call $\xi$ a vertical
derivation if it is $\sigma_{\ast}$-projectable and $\sigma_{\ast}\xi=0$.}
\vskip0.3cm
It is easy to verify that the set $\hbox{$\script P$\kern-2pt\callig ro\kern2pt}(\sigma_{\ast},\hbox{$\script A$})(M)$ of
$\sigma_{\ast}$-projectable derivations is a left $\hbox{$\script B$}(N)$-module and
that the set $\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\sigma_{\ast},\hbox{$\script A$})(M)$
of vertical derivations is a left $\hbox{$\script A$}(M)$-module. Indeed,
for $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(N)$ and $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script P$\kern-2pt\callig ro\kern2pt}(\sigma_{\ast},\hbox{$\script A$})(M)$ let us
define $g\cdot\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(M)$ as $g\cdot\xi=(\sigma^{\ast}g)\xi$. Then
for $f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(N)$ we find: $(g\cdot
\xi)(\sigma^{\ast}f)=(\sigma^{\ast}g)\xi(\sigma^{\ast}f)=\sigma^{\ast}g\cdot
\sigma^{\ast}[\sigma_{\ast}\xi(f)]=\sigma^{\ast}[g(\sigma_{\ast}\xi)f]$,
which proves that $g\cdot\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script P$\kern-2pt\callig ro\kern2pt}(\sigma_{\ast},\hbox{$\script A$})(M)$ as well. We
proceed analogously for $\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\sigma_{\ast},\hbox{$\script A$})(M)$. The corresponding
sheaves are denoted by
$\hbox{$\script P$\kern-2pt\callig ro\kern2pt}(\sigma_{\ast},\hbox{$\script A$}),\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\sigma_{\ast},\hbox{$\script A$})$.
The final part of this section is devoted to a short review
of the basic properties of graded differential forms from {\kost}.
We also develop some useful tools for computing pull-backs of
forms using the previously mentioned concepts of push-forward
and $\sigma$-related derivations.
Let $(M,\hbox{$\script A$})$ be a graded manifold of dimension $(m,n)$. For an open
$U\subset M$ we consider the tensor algebra $\yfra T(U)$ of
$\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ with respect to its $\hbox{$\script A$}(U)$-module structure and the ideal
$\yfra J(U)$ of $\yfra T(U)$ generated by homogeneous elements of the form
$\xi\;{\mathchar"220A}\;\eta+(-1)^{|\xi||\eta|}\eta\;{\mathchar"220A}\;\xi$, for
$\xi,\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$. Let also $\yfra T^{r}(U)\cap\yfra J(U)=
\yfra J^{r}(U)$. We call the set $\Omega^{r}(U,\hbox{$\script A$})$ of
elements of ${\rm Hom}_{{\fam4 \rsfs A}(U)}(\yfra T^{r}(U),\hbox{$\script A$}(U))$
which vanish on $\yfra J^{r}(U)$ the set of graded differential
$r$-forms ($r\geq 1$) on $U\subset M$. For $r=0$ we define
$\Omega^{0}(U,\hbox{$\script A$})=\hbox{$\script A$}(U)$ and we denote the direct sum
$\displaystyle\bigoplus_{r=0}^{\infty}\Omega^{r}(U,\hbox{$\script A$})$ by
$\Omega(U,\hbox{$\script A$})$.
If $\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r}(U,\hbox{$\script A$})$ and $\xi_{1},\ldots,\xi_{r}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$,
then we denote the evaluation of $\alpha$ on the $\xi$'s by
$(\xi_{1}\;{\mathchar"220A}\;\ldots\;{\mathchar"220A}\;\xi_{r}|\alpha)$, or simply
$(\xi_{1},\ldots,\xi_{r}|\alpha)$.
Clearly, the elements of $\Omega(U,\hbox{$\script A$})$ have a
$(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2})$-bidegree and further we may define an algebra
structure on $\Omega(U,\hbox{$\script A$})$ which thus becomes a bigraded commutative
algebra over $\hbox{$\script A$}(U)$, {\kost}. Here, we mention only the bigraded
commutativity relation for this structure:
if $\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{i_{\alpha}}(U,\hbox{$\script A$})_{j_{\alpha}},
\beta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{i_{\beta}}(U,\hbox{$\script A$})_{j_{\beta}}$, then
$\alpha\beta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{i_{\alpha}+i_{\beta}}(U,\hbox{$\script A$})_{j_{\alpha}+j_{\beta}}$
and $\alpha\beta=(-1)^{i_{\alpha}i_{\beta}+j_{\alpha}j_{\beta}}
\beta\alpha$.
Next consider the linear map
$d\colon\kern2pt\Omega^{0}(U,\hbox{$\script A$})\rightarrow\Omega^{1}(U,\hbox{$\script A$})$ defined by
$$(\xi|dg)=\xi(g), \,\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U), g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(U).\eqno\eqname\orismosd$$
In a graded coordinate system $(x^{i},s^{j}), i=1,\ldots,
m,j=1,\ldots,n$ on $U$, we obtain
$$dg=\sum_{i=1}^{m}dx^{i}{\hbox{\cyr\char'144\kern0.3pt} g\over\hbox{\cyr\char'144\kern0.3pt}
x^{i}}+\sum_{j=1}^{n}ds^{j}{\hbox{\cyr\char'144\kern0.3pt} g\over\hbox{\cyr\char'144\kern0.3pt} s^{j}}.\eqno\eqname\dfison$$
One can extend this linear map to a derivation
$d\colon\kern2pt\Omega(U,\hbox{$\script A$})\rightarrow\Omega(U,\hbox{$\script A$})$
of bidegree (1,0) such that $d^{2}=0$ and $d|_{\Omega^{0}(U,\fam4 \rsfs A)}$
is given by equation $\dfison$. We call $d$ the exterior differential on graded
differential forms.
Interior products and Lie derivatives with
respect to elements of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ also make sense in the graded setting.
Indeed, if $\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r+1}(U,\hbox{$\script A$})$, then for
$\xi,\xi_{1},\ldots,\xi_{r}$ homogeneous elements of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$
we define
$$(\xi_{1},\ldots,\xi_{r}|{\bit
i}(\xi)\alpha)=(-1)^{|\xi|\sum_{i=1}^{r}|\xi_{i}|}
(\xi,\xi_{1},\ldots,\xi_{r}|\alpha)\eqno\eqname\interiorproduct$$
and we thus obtain a linear map
${\bit i}(\xi)\colon\kern2pt\Omega(U,\hbox{$\script A$})\rightarrow\Omega(U,\hbox{$\script A$})$
of bidegree $(-1,|\xi|)$. This is the interior product with respect
to $\xi$.
Lie derivatives are defined as usual by means of Cartan's algebraic
formula: $\hbox{$\bit L$}_{\xi}=d\hbox{\lower5.8pt\hbox{\larm\char'027}}{\bit i}(\xi)+{\bit i}(\xi)\hbox{\lower5.8pt\hbox{\larm\char'027}} d$,
thus $\hbox{$\bit L$}_{\xi}$ has bidegree $(0,|\xi|)$. Furthermore, it can be
proved that the morphism of graded commutative algebras
$\sigma^{\ast}\colon\kern2pt\hbox{$\script B$}(W)\rightarrow\hbox{$\script A$}(U),
U=\sigma^{-1}_{\ast}(W)\subset M$ coming from a morphism
$\sigma\colon\kern2pt(M,\hbox{$\script A$})\rightarrow(N,\hbox{$\script B$})$ of graded manifolds,
can be extended to a unique morphism of bigraded commutative algebras
$\sigma^{\ast}\colon\kern2pt\Omega(W,\hbox{$\script B$})\rightarrow\Omega(U,\hbox{$\script A$})$, which
commutes with the exterior differential.
As in the case of derivations (see relation $\derivation$), one can define
for each graded differential form $\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r}(U,\hbox{$\script A$})$ a
multilinear form $\tilde{\alpha}_{p}$ (with real values) on the
tangent space $T_{p}(M,\hbox{$\script A$})$ for each $p\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} U$. It suffices to set
$$\big((\tilde{\xi}_{1})_{p},\ldots,
(\tilde{\xi}_{r})_{p}|\tilde{\alpha}_{p}\big)=\delta_{p}
(\xi_{1},\ldots,\xi_{r}|\alpha).\eqno\eqname\multiliforms$$
The set of forms on $U$ obtained in this way is denoted by
$\Omega^{r}_{\fam4 \rsfs A}(U)$.
We now establish a general method for the calculation of pull-backs of
graded differential forms.
\math{Proposition.}{\pullback}{\sl Let $\sigma\colon\kern2pt(M,\hbox{$\script A$})\rightarrow
(N,\hbox{$\script B$})$ be a morphism of graded manifolds, $W\subset N$ and $U=
\sigma^{-1}_{\ast}(W)\subset M$. Then, if $\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r}(W,\hbox{$\script B$})$ and
$\xi_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ and $\eta_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(W)$ are $\sigma$-related
for $i=1,\ldots,r$, we have:
$$(\xi_{1},\ldots,\xi_{r}|\sigma^{\ast}\alpha)=
\sigma^{\ast}(\eta_{1},\ldots,\eta_{r}|\alpha).\eqno\eqname\pullba$$
In particular, when $\sigma$ is an isomorphism, the previous relation
holds for each $\xi_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$, setting $\eta_{i}=\sigma_{\ast}
\xi_{i}$.}
\indent \undertext{\it Proof}. Using the fact that $\sigma^{\ast}$ is a
morphism of bigraded commutative algebras and that $\Omega(W,\hbox{$\script B$})$
is the exterior algebra of $\Omega^{1}(W,\hbox{$\script B$})$, it is sufficient to
prove this formula for $\alpha=df, f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(W)$. If $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$
and $\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(W)$ are $\sigma$-related, then
$(\xi|\sigma^{\ast}\alpha)=(\xi|d(\sigma^{\ast}f))=
\xi(\sigma^{\ast}f)=\sigma^{\ast}(\eta
f)=\sigma^{\ast}(\eta|df)=\sigma^{\ast}(\eta|\alpha)$. In particular,
when $\sigma$ is an isomorphism, each $\xi$ is $\sigma$-related to
$\eta=\sigma_{\ast}\xi$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
One easily proves that pull-backs on forms belonging to
$\Omega^{r}_{\fam4 \rsfs B}(W)$ can be expressed using the
familiar formulas of ordinary differential geometry, taking as the
tangent of the morphism $\sigma$ the linear mapping defined through
relations $\sigmorph$, $\tangentvector$.
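In the purely even case Proposition $\pullback$ reduces to the chain rule, which can be checked mechanically. The sketch below (a toy of ours) takes the hypothetical morphism $\sigma(t)=(t,t^{2})$ from ${\bf R}$ to ${\bf R}^{2}$, for which $\xi=d/dt$ is $\sigma$-related to $\eta=\partial/\partial x+2x\,\partial/\partial y$, and verifies relation $\pullba$ for $\alpha=df$ using dual numbers, which differentiate polynomial expressions exactly:

```python
# Check (xi | sigma^* df) = sigma^*(eta | df) for the even toy morphism
# sigma(t) = (t, t^2), xi = d/dt on R, eta = d/dx + 2x d/dy on R^2;
# xi and eta are sigma-related.  Dual numbers a + b*eps (eps^2 = 0)
# give exact derivatives of polynomial expressions.

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def xi_side(f, t):
    """(xi | sigma^* df)(t) = d/dt f(t, t^2)."""
    return f(Dual(t, 1.0), Dual(t * t, 2.0 * t)).b

def eta_side(f, t):
    """sigma^*(eta | df)(t) = (f_x + 2x f_y) evaluated at (t, t^2)."""
    fx = f(Dual(t, 1.0), Dual(t * t, 0.0)).b
    fy = f(Dual(t, 0.0), Dual(t * t, 1.0)).b
    return fx + 2.0 * t * fy
```

For $f(x,y)=xy$ both sides give $3t^{2}$, in agreement with $d/dt\,f(t,t^{2})=d/dt\,t^{3}$.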
\chapter{Elements of graded Lie theory}
As we will see later in detail, graded Lie theory plays a central
r\^ole in the geometry of graded principal bundles, just as the ordinary
theory of Lie groups does for differentiable principal bundles. This
section deals with the notion and elementary properties of graded
Lie groups; more information can be found in {\kost}. Furthermore,
we prove some facts about the theory of actions of graded Lie groups,
which, to the best of our knowledge, have not previously appeared in the
literature. We first give the definition of a graded Lie group,
{\alm}, {\kost}.
\math{Definition.}{\gradedliegroup}{\sl A graded Lie group $(G,\hbox{$\script A$})$
is a graded manifold such that $G$ is an ordinary Lie group, the
algebra $\hbox{$\script A$}(G)$ is equipped with the structure of a graded
Hopf algebra with antipode and furthermore, the algebra epimorphism
$\varrho\colon\kern2pt\hbox{$\script A$}(G)\rightarrow C^{\infty}(G)$ is a morphism of
graded Hopf algebras.}
\math{Remark.}{\completionpi}{\it In the graded Hopf algebra structure of
the previous definition, all tensor products are completions of the
usual ones with respect to Grothendieck's $\pi$-topology, {\rm{\alm}},
{\rm{\ruipma}}.}
\vskip0.2cm
We denote by $\hbox{$\Delta_{\script A}$}$, $\epsilon_{\fam4 \rsfs A}$, $s_{\fam4 \rsfs A}$ the
coproduct, counit and antipode of $\hbox{$\script A$}(G)$, respectively.
It is possible to prove that the finite dual $\hbox{$\script A$}(G)^{\circ}$
also inherits a graded Hopf algebra structure from $\hbox{$\script A$}(G)$.
The algebra multiplication on $\hbox{$\script A$}(G)^{\circ}$ is given by
the convolution product:
$$(a\;{\mathchar"220C}\; b)=(a\;{\mathchar"220A}\; b)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}, \,\forall a,b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ},
\eqno\eqname\convolution$$
and the unit of $\hbox{$\script A$}(G)^{\circ}$ with respect to $\;{\mathchar"220C}\;$ is
the counit of $\hbox{$\script A$}(G)$. One proves easily that the set of primitive
elements of $\hbox{$\script A$}(G)^{\circ}$ with respect to $\delta_{e}$ ($e$ is the
identity of $G$), that is, the tangent space $T_{e}(G,\hbox{$\script A$})$, is a graded Lie
algebra, the bracket being given by $[u,v]=u\;{\mathchar"220C}\;
v-(-1)^{|u||v|}v\;{\mathchar"220C}\; u$, for homogeneous elements $u,v$. We call
$T_{e}(G,\hbox{$\script A$})$ the Lie superalgebra of $(G,\hbox{$\script A$})$ and denote it by $\frak
g$. Clearly, $\frak g=\frak g_{0}\;{\mathchar"2208}\;\frak g_{1}$, where $\frak
g_{0}=T_{e}G$.
Using also the fact that $\varrho\colon\kern2pt\hbox{$\script A$}(G)\rightarrow C^{\infty}(G)$
is a morphism of graded Hopf algebras, one readily verifies that the
convolution product $\;{\mathchar"220C}\;$ is compatible with the group structure of
$G$ in the sense that $\delta_{g}\;{\mathchar"220C}\;\delta_{h}=\delta_{gh}$.
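In the purely even case, the convolution $\convolution$ and the compatibility $\delta_{g}\;{\mathchar"220C}\;\delta_{h}=\delta_{gh}$ can be checked on a toy model in which a finite group stands in for $G$ (no graded structure, all signs trivial; the model and names are ours):

```python
# Toy check of the convolution (a * b) = (a x b) o Delta in the purely
# even case: S_3 (permutations as tuples) stands in for G, and dual
# elements are callables on functions G -> R.

from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):
    """(p o q)(i) = p[q[i]] -- the group law of S_3."""
    return tuple(p[q[i]] for i in range(len(q)))

def delta(g):
    """Group-like element delta_g: evaluation at g."""
    return lambda f: f(g)

def conv(a, b):
    """(a * b)(f) = (a x b)(Delta f), where Delta f(g, h) = f(gh):
    apply b in the second slot, then a in the first."""
    return lambda f: a(lambda g: b(lambda h: f(compose(g, h))))
```

In this model `conv(delta(g), delta(h))` agrees with `delta(compose(g, h))` on every function, and the delta at the identity is the unit for the convolution, in agreement with the counit of $\hbox{$\script A$}(G)$ being the unit of $\hbox{$\script A$}(G)^{\circ}$.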
A very important property of the finite dual $\hbox{$\script A$}(G)^{\circ}$ is given
by the following, {\kost}.
\math{Theorem.}{\liehopfalgebra}{\sl For each graded Lie group
$(G,\hbox{$\script A$})$, the finite dual $\hbox{$\script A$}(G)^{\circ}$ has the structure of a
Lie-Hopf algebra. In fact, $\hbox{$\script A$}(G)^{\circ}=\hbox{\bf\char'122}(G)\hbox{\bb\char'076} E(\frak g)$,
where $\hbox{\bf\char'122}(G)$ is the group algebra of $G$, $E(\frak g)$ is the
universal enveloping algebra of $\frak g$ and $\hbox{\bb\char'076}$ is the smash
product of $\hbox{\bf\char'122}(G)$ and $E(\frak g)$ with respect to the adjoint
representation of $G$ on the superalgebra $\frak g$.}
\vskip0.3cm
For the adjoint representation of $G$ on $\frak g$, see also below in
this section.
There exist, in this setting, analogs of left and right translations
on a Lie group.
\math{Definition.}{\leftrighttrans}{\sl Let $(G,\hbox{$\script A$})$ be a graded Lie
group and $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$. Set $r_{a}=(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}$
and $\ell_{a}=(a\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}$. We call the endomorphisms $r_{a}$
and $\ell_{a}$ of $\hbox{$\script A$}(G)$ right and left translations respectively on
$(G,\hbox{$\script A$})$.}
\vskip0.3cm
In order to justify this terminology, we first discuss some
important properties of these endomorphisms.
\math{Proposition.}{\leftrightprop}{\sl
\item{1.} $r_{\epsilon_{\fam4 \rsfs A}}=\ell_{\epsilon_{\fam4 \rsfs A}}=id$
\item{2.} $r_{a\;{\mathchar"220C}\; b}=r_{a}\hbox{\lower5.8pt\hbox{\larm\char'027}} r_{b}$,
$\ell_{a\;{\mathchar"220C}\; b}=(-1)^{|a||b|}\ell_{b}\hbox{\lower5.8pt\hbox{\larm\char'027}}\ell_{a}$
\item{3.} $r_{b}\hbox{\lower5.8pt\hbox{\larm\char'027}}\ell_{a}=(-1)^{|a||b|}\ell_{a}\hbox{\lower5.8pt\hbox{\larm\char'027}} r_{b}, \forall
a,b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$
\item{4.} If $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ is group-like, then $r_{a}$ and
$\ell_{a}$ are graded algebra isomorphisms.
}
\vskip0.3cm
We postpone the proof until the study of actions of graded Lie groups
(see below in this section), where it will become clear that the
proposition follows immediately from the general techniques of actions (note
that Proposition {\leftrightprop} has already appeared in {\alm} without
proof).
Part (4) of this proposition tells us that if $a$ is group-like,
$a=\delta_{g}$ for some $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$, then there exist morphisms of graded
manifolds $R_{g}\colon\kern2pt(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$,
$L_{g}\colon\kern2pt(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$ such that
$R^{\ast}_{g}=r_{a}$ and $L^{\ast}_{g}=\ell_{a}$. It is interesting
to calculate the coalgebra morphisms
$R_{g\ast}\colon\kern2pt\hbox{$\script A$}(G)^{\circ}\rightarrow\hbox{$\script A$}(G)^{\circ}$ and
$L_{g\ast}\colon\kern2pt\hbox{$\script A$}(G)^{\circ}\rightarrow\hbox{$\script A$}(G)^{\circ}$.
Consider for example $R_{g\ast}$. If $b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ and
$f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)$, we find:
$$R_{g\ast}b(f)=b(r_{a}f)=b\big(\sum_{i}
(-1)^{|a||I^{i}f|}(I^{i}f) a(J^{i}f)\big)
=\sum_{i}(b\;{\mathchar"220A}\; a)(I^{i}f\;{\mathchar"220A}\; J^{i}f),$$
where we have set $\hbox{$\Delta_{\script A}$} f=\sum_{i}I^{i}f\;{\mathchar"220A}\; J^{i}f$. Hence,
$R_{g\ast}b=b\;{\mathchar"220C}\; a$ and similarly $L_{g\ast}b=a\;{\mathchar"220C}\; b$.
This means that $r_{a}$ and $\ell_{a}$ correspond to right and left
translations, as one can see at the coalgebra level.
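The same purely even toy model (a finite group in place of $G$; all names ours) makes Proposition $\leftrightprop$ concrete for group-like $a=\delta_{g}$: $r_{a}$ becomes $f\mapsto f(\cdot\,g)$, $\ell_{a}$ becomes $f\mapsto f(g\,\cdot)$, and properties (1)--(3), with trivial signs, reduce to associativity of the group law.

```python
# Even toy model of r_a = (id x a) o Delta and l_a = (a x id) o Delta
# for a = delta_g: S_3 stands in for G, functions on G for A(G).

from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def rtrans(g, f):
    """(r_{delta_g} f)(x) = f(xg)."""
    return lambda x: f(compose(x, g))

def ltrans(g, f):
    """(l_{delta_g} f)(x) = f(gx)."""
    return lambda x: f(compose(g, x))
```

Since $\delta_{g}\;{\mathchar"220C}\;\delta_{h}=\delta_{gh}$, property (2) reads $r_{\delta_{gh}}=r_{\delta_{g}}\hbox{\lower5.8pt\hbox{\larm\char'027}} r_{\delta_{h}}$, and property (3) says that left and right translations commute; both checks below are instances of associativity.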
Next, we introduce the graded analog of actions (see also {\alm}).
Let $(G,\hbox{$\script A$})$ be a graded Lie group and $(Y,\hbox{$\script B$})$ a graded
manifold. We give the following definition.
\math{Definition.}{\action}{\sl We say that the graded Lie group
$(G,\hbox{$\script A$})$ acts on the graded manifold $(Y,\hbox{$\script B$})$ on the right if there exists a
morphism $\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ of graded
manifolds such that the corresponding morphism of graded commutative
algebras $\Phi^{\ast}\colon\kern2pt\hbox{$\script B$}(Y)\rightarrow\hbox{$\script B$}(Y)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$ defines
a structure of right $\hbox{$\script A$}(G)$-comodule on $\hbox{$\script B$}(Y)$. Using the notion
of left comodule, we define analogously a left action of $(G,\hbox{$\script A$})$ on $(Y,\hbox{$\script B$})$.}
\vskip0.3cm
More explicitly, if $\Phi$ is a right action and $\Psi$ a left action,
then the morphisms
$\Phi^{\ast}$, $\Psi^{\ast}$ satisfy the following properties:
$$(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=(\Phi^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast},
\,\,(id\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs
A})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=id,\eqno\eqname\rightcomodulestructure$$
$$(\hbox{$\Delta_{\script A}$}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=(id\;{\mathchar"220A}\;\Psi^{\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast},
\,\,(\epsilon_{\fam4 \rsfs A}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=id.\eqno\eqname\leftcomodulestructure$$
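As a reading aid, the right-comodule axioms $\rightcomodulestructure$ may be restated in Sweedler-type notation for coactions; this is an expository convention only, not used elsewhere in the text.

```latex
% Sweedler sketch (expository convention): write the coaction as
%   \Phi^{*}f = \sum f_{(0)} \otimes f_{(1)},
% with f_{(0)} in B(Y) and f_{(1)} in A(G).  The two axioms then read:
\sum f_{(0)} \otimes \Delta_{A}\bigl(f_{(1)}\bigr)
   \;=\; \sum \Phi^{*}\bigl(f_{(0)}\bigr) \otimes f_{(1)},
\qquad
\sum f_{(0)}\,\epsilon_{A}\bigl(f_{(1)}\bigr) \;=\; f .
```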
Let now $\Psi^{r}\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ be the
morphism of graded manifolds defined by
$$\Psi^{r\ast}=(id\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}}
T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast},\eqno\eqname\psirightactiondef$$
where $T$ is the twist morphism, $T(a\;{\mathchar"220A}\; b)=(-1)^{|a||b|}b\;{\mathchar"220A}\;
a$.
\math{Lemma.}{\psirightactionlemma}{\sl The morphism $\Psi^{r}$
defined by $\psirightactiondef$ is a right action of $(G,\hbox{$\script A$})$ on
$(Y,\hbox{$\script B$})$.}
\undertext{\it Proof}. It suffices to prove that relations
$\rightcomodulestructure$ are valid for the morphism $\Psi^{r\ast}$.
We compute:
$(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{r\ast}=
\big(id\;{\mathchar"220A}\;(s_{\fam4 \rsfs
A}\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}} T\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}\big)\hbox{\lower5.8pt\hbox{\larm\char'027}} T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=
\big(id\;{\mathchar"220A}\; (s_{\fam4 \rsfs A}\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}} T\big)\hbox{\lower5.8pt\hbox{\larm\char'027}}
T\hbox{\lower5.8pt\hbox{\larm\char'027}}(\hbox{$\Delta_{\script A}$}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=
\big(id\;{\mathchar"220A}\; (s_{\fam4 \rsfs A}\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}} T\big)\hbox{\lower5.8pt\hbox{\larm\char'027}}
T\hbox{\lower5.8pt\hbox{\larm\char'027}}\break(id\;{\mathchar"220A}\;\Psi^{\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=
(id\;{\mathchar"220A}\; s_{\fam4 \rsfs A}\;{\mathchar"220A}\; s_{\fam4 \rsfs
A})\hbox{\lower5.8pt\hbox{\larm\char'027}}(T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}
T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=(\Psi^{r\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{r\ast}$.
On the other hand,
$$(id\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{r\ast}=
(id\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}} s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}}
T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=(id\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}} T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}=id,$$
which completes the proof.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
Similarly, for a right action $\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})
\rightarrow(Y,\hbox{$\script B$})$ one can define, in a
canonical way, a left action
$\Phi^{\ell}\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})\rightarrow(Y,\hbox{$\script B$})$ as
$\Phi^{\ell\ast}=(s_{\fam4 \rsfs A}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}} T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}$.
Then, the restriction $\Phi_{\ast}|_{Y\;{\mathchar"2202}\; G}\colon\kern2pt Y\;{\mathchar"2202}\;
G\rightarrow Y$ defines a right action of $G$ on the manifold $Y$ and
furthermore, for the canonically associated left action $\Phi^{\ell}$,
the restriction $\Phi^{\ell}_{\ast}|_{G\;{\mathchar"2202}\; Y}\colon\kern2pt G\;{\mathchar"2202}\;
Y\rightarrow Y$ is a left action given by
$\Phi^{\ell}_{\ast}|_{G\;{\mathchar"2202}\; Y}(g,y)=\Phi_{\ast}|_{Y\;{\mathchar"2202}\;
G}(y,g^{-1})$, as one expects. We have analogous facts for the left
action $\Psi$. Observe here that the possibility of defining
$\Phi^{\ell\ast}$ and $\Psi^{r\ast}$ as morphisms of graded
commutative algebras depends crucially on the fact that the antipode
$s_{\fam4 \rsfs A}\colon\kern2pt\hbox{$\script A$}(G)\rightarrow\hbox{$\script A$}(G)$ is a morphism of graded
commutative algebras.
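At the level of the underlying manifolds, the assertion that $\Phi^{\ell}_{\ast}|_{G\;{\mathchar"2202}\; Y}(g,y)=\Phi_{\ast}|_{Y\;{\mathchar"2202}\; G}(y,g^{-1})$ is a left action reduces to a familiar calculation, sketched here in plain group notation (writing $y\cdot g$ for the underlying right action):

```latex
% Classical check: if (y,g) \mapsto y \cdot g is a right action of G on Y,
% then g \cdot y := y \cdot g^{-1} is a left action:
(g_{1}g_{2})\cdot y \;=\; y\cdot (g_{1}g_{2})^{-1}
 \;=\; \bigl(y\cdot g_{2}^{-1}\bigr)\cdot g_{1}^{-1}
 \;=\; g_{1}\cdot\bigl(g_{2}\cdot y\bigr).
```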
\math{Remark.}{\paratirisidyo}{\it The right action $\Phi^{\ell r}$
canonically associated to $\Phi^{\ell}$ equals $\Phi$:
$\Phi^{\ell r\ast}=(id\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}} T\hbox{\lower5.8pt\hbox{\larm\char'027}}(s_{\fam4 \rsfs
A}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}} T\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=(id\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}}
T^{2}\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\; s_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=\Phi^{\ast}$,
since $T^{2}=id, s_{\fam4 \rsfs A}^{2}=id$.}
\vskip0.2cm
For a right action $\Phi$, one may introduce for each
$a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ and $b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)^{\circ}$, two linear
maps $(\Phi^{\ast})_{a}\colon\kern2pt\hbox{$\script B$}(Y)\rightarrow\hbox{$\script B$}(Y)$ and
$(\Phi^{\ast})_{b}\colon\kern2pt\hbox{$\script B$}(Y)\rightarrow\hbox{$\script A$}(G)$ as follows:
$$(\Phi^{\ast})_{a}=(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\quad\hbox{and}\quad
(\Phi^{\ast})_{b}=(b\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}.\eqno\eqname\linearmapsena$$
Similarly, for a left action $\Psi$ one defines
$$(\Psi^{\ast})_{a}=(a\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}\quad\hbox{and}\quad
(\Psi^{\ast})_{b}=(id\;{\mathchar"220A}\; b)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Psi^{\ast}.\eqno\eqname\linearmapsdyo$$
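For orientation, when $a$ and $b$ are delta functionals of points, these maps are simply the pullbacks of the partial maps of the action. In ordinary (ungraded) notation this sketch reads:

```latex
% Ungraded sketch: for a right action \Phi : Y \times G \to Y, the maps
% attached to delta functionals are the pullbacks of the partial maps
% \Phi_g : y \mapsto \Phi(y,g) and \Phi_y : g \mapsto \Phi(y,g):
\bigl(\Phi^{*}\bigr)_{\delta_{g}} f\,(y) \;=\; f\bigl(\Phi(y,g)\bigr),
\qquad
\bigl(\Phi^{*}\bigr)_{\delta_{y}} f\,(g) \;=\; f\bigl(\Phi(y,g)\bigr),
\qquad f \in C^{\infty}(Y).
```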
The following theorem clarifies the r\^ole of these maps.
\math{Theorem.}{\thewrima}{\sl
\item{1.} $(\Phi^{\ast})_{\epsilon_{\fam4 \rsfs
A}}=(\Psi^{\ast})_{\epsilon_{\fam4 \rsfs A}}=id$
\item{2.} $(\Phi^{\ast})_{a_{1}\;{\mathchar"220C}\;
a_{2}}=(\Phi^{\ast})_{a_{1}}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a_{2}}$,
$(\Psi^{\ast})_{a_{1}\;{\mathchar"220C}\;
a_{2}}=(-1)^{|a_{1}||a_{2}|}
(\Psi^{\ast})_{a_{2}}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Psi^{\ast})_{a_{1}}$
\item{3.}$(\Phi^{\ast})_{b}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a}=
(-1)^{|a||b|}r_{a}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{b}$,
$(\Psi^{\ast})_{b}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Psi^{\ast})_{a}=
(-1)^{|a||b|}\ell_{a}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Psi^{\ast})_{b}$
\item{4.} If $a,b$ are group-like elements, then $(\Phi^{\ast})_{a}$
is an isomorphism and $(\Phi^{\ast})_{b}$ is a morphism of graded
commutative algebras. In particular, if $a=\delta_{g},b=\delta_{y}$,
then we write the corresponding morphisms of graded manifolds as
$\Phi_{g}\colon\kern2pt\break(Y,\hbox{$\script B$})\rightarrow(Y,\hbox{$\script B$})$ and
$\Phi_{y}\colon\kern2pt(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$, so
$\Phi_{g}^{\ast}=(\Phi^{\ast})_{\delta_{g}}$ and
$\Phi_{y}^{\ast}=(\Phi^{\ast})_{\delta_{y}}$. Similarly for
$(\Psi^{\ast})_{a},(\Psi^{\ast})_{b}$.
\item{5.} If $a$ is a primitive element with respect to $\delta_{e}$,
then $(\Phi^{\ast})_{a},(\Psi^{\ast})_{a}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$. We call these
derivations the induced (by the action and the element $a$)
derivations on $\hbox{$\script B$}(Y)$.
}
\undertext{\it Proof}.
(1) Evident, by the defining properties of the left and right action.
(2) We prove this property only for the right action $\Phi$; one
proceeds in a similar way for the left action $\Psi$. We have:
$$\eqalign{(\Phi^{\ast})_{a_{1}\;{\mathchar"220C}\; a_{2}}&=(id\;{\mathchar"220A}\;(a_{1}\;{\mathchar"220C}\;
a_{2}))\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=(id\;{\mathchar"220A}\; a_{1}\;{\mathchar"220A}\;
a_{2})\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\cr
\hfill&=(id\;{\mathchar"220A}\; a_{1}\;{\mathchar"220A}\;
a_{2})\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}
=(id\;{\mathchar"220A}\; a_{1})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;
a_{2})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\cr
\hfill&=(\Phi^{\ast})_{a_{1}}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a_{2}}.\cr}$$
(3) Again, we give the proof only for the right action.
$$\eqalign{(\Phi^{\ast})_{b}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a}&=
(b\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\cr
\hfill&=(-1)^{|a||b|}(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}(b\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\cr
\hfill&=(-1)^{|a||b|}(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}\hbox{\lower5.8pt\hbox{\larm\char'027}}(b\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=(-1)^{|a||b|}r_{a}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{b}.\cr}$$
(4) The fact that these maps are morphisms is evident because
they are compositions of morphisms when $a,b$ are group-like.
Furthermore, $(\Phi^{\ast})_{a}$ and $(\Psi^{\ast})_{a}$
are isomorphisms because their inverses exist, as one can check
from parts (1) and (2).
(5) A derivation $\xi$ on the graded commutative algebra $\hbox{$\script B$}(Y)$ has
the property $\xi(fg)=\xi(f)g+(-1)^{|\xi||f|}f\xi(g)$ for each
homogeneous element $f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)$. This can be restated as follows:
$\xi\hbox{\lower5.8pt\hbox{\larm\char'027}} m_{\fam4 \rsfs B}=m_{\fam4 \rsfs B}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\xi\;{\mathchar"220A}\;
id+id\;{\mathchar"220A}\;\xi)$. We call such an endomorphism $\xi$ primitive;
with this terminology, the set of derivations coincides with the set
of primitive endomorphisms. We must then prove that
$(\Phi^{\ast})_{a}$ and $(\Psi^{\ast})_{a}$ are primitive when
$a$ is primitive with respect to $\delta_{e}$. Consider for example
$\xi=(\Phi^{\ast})_{a}$. We have:
$$\eqalign{\xi\hbox{\lower5.8pt\hbox{\larm\char'027}} m_{\fam4 \rsfs B}&
=(m_{\fam4 \rsfs B}\;{\mathchar"220A}\; a\hbox{\lower5.8pt\hbox{\larm\char'027}} m_{\fam4 \rsfs A})\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;
T\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast}\;{\mathchar"220A}\;\Phi^{\ast})\cr
\hfill&=(m_{\fam4 \rsfs B}\;{\mathchar"220A}\;(a\;{\mathchar"220A}\;\delta_{e}+\delta_{e}\;{\mathchar"220A}\;
a))\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\; T\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast}\;{\mathchar"220A}\;\Phi^{\ast})\cr
\hfill&=m_{\fam4 \rsfs B}\hbox{\lower5.8pt\hbox{\larm\char'027}}[(id\;{\mathchar"220A}\;
a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\;{\mathchar"220A}\; id]+m_{\fam4 \rsfs B}\hbox{\lower5.8pt\hbox{\larm\char'027}}
[id\;{\mathchar"220A}\;(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}]\cr
\hfill&=m_{\fam4 \rsfs B}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\xi\;{\mathchar"220A}\; id+id\;{\mathchar"220A}\;\xi).\cr}$$
One proceeds similarly for $(\Psi^{\ast})_{a}$. We note finally that
if $a$ is homogeneous, then
$|(\Phi^{\ast})_{a}|=|(\Psi^{\ast})_{a}|=|a|$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.2cm
\math{Corollary.}{\porisma}{\sl Proposition {\leftrightprop}.}
\undertext{\it Proof}. Since the coproduct $\hbox{$\Delta_{\script A}$}$ on the Hopf algebra
$\hbox{$\script A$}(G)$ has the properties $(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}=(\hbox{$\Delta_{\script A}$}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}$ and $(id\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs
A})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}=(\epsilon_{\fam4 \rsfs A}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}=id$, it defines
left and right actions $L$ and $R$ respectively of $(G,\hbox{$\script A$})$ on
itself. Choosing thus $(Y,\hbox{$\script B$})=(G,\hbox{$\script A$})$ in the previous theorem, we may
write $(L^{\ast})_{a}=\ell_{a}$, $(R^{\ast})_{a}=r_{a}$; Proposition
{\leftrightprop} is then immediate.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
Next, let $(G,\hbox{$\script A$})$ be a graded Lie group and
$\ell\colon\kern2pt\hbox{$\script A$}(G)\rightarrow\hbox{$\script A$}(G)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$
the linear map defined by $$\ell=[m_{\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;
s_{\fam4 \rsfs A})\;{\mathchar"220A}\; id]\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;
T)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}.\eqno\eqname\linearmap$$
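It may help to unravel $\linearmap$ in Sweedler notation (again an expository convention, with Koszul signs suppressed):

```latex
% Sweedler sketch of the map \ell (Koszul signs suppressed): with
% \Delta_{A} f = \sum f_{(1)} \otimes f_{(2)} and the coproduct iterated,
\ell(f) \;=\; \sum f_{(1)}\, s_{A}\bigl(f_{(3)}\bigr) \otimes f_{(2)} ,
% so for an ordinary Lie group G, \ell is the pullback of conjugation:
\ell(f)\,(h,g) \;=\; \sum f_{(1)}(h)\, f_{(2)}(g)\, f_{(3)}\bigl(h^{-1}\bigr)
 \;=\; f\bigl(h\,g\,h^{-1}\bigr).
```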
\math{Proposition.}{\adjointaction}{\sl The linear map $\ell$ is a
morphism of graded commutative algebras defining thus a morphism of
graded manifolds which we denote by
$AD\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$. Furthermore,
$AD$ is a left action of $(G,\hbox{$\script A$})$ on itself.}
\undertext{\it Proof}. $\ell$ is a morphism of graded algebras, being a
composition of morphisms; so we can write
$\ell=AD^{\ast}$ for a morphism of graded manifolds
$AD\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$.
We now check relations $\leftcomodulestructure$ for $AD^{\ast}$.
For the first one, the following identity is the key to
the proof: $(id\;{\mathchar"220A}\; id\;{\mathchar"220A}\; id\;{\mathchar"220A}\; \hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;
id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}(\hbox{$\Delta_{\script A}$}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\break\hbox{$\Delta_{\script A}$}=(id\;{\mathchar"220A}\;
id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}\;{\mathchar"220A}\; id)
\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}$. Indeed, applying both sides of this
identity to the same $f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)$, we obtain, after a long and
cumbersome calculation, the first of $\leftcomodulestructure$.
For the second of $\leftcomodulestructure$, we proceed as follows:
$$\eqalign{(\epsilon_{\fam4 \rsfs A}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\ell&=
(\epsilon_{\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}} m_{\fam4 \rsfs A}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\; s_{\fam4 \rsfs A}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;
T)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}\cr
\hfill&=(\epsilon_{\fam4 \rsfs A}\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}}
s_{\fam4 \rsfs A}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\; T)
\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}\cr
\hfill&=(\epsilon_{\fam4 \rsfs A}\;{\mathchar"220A}\; id\;{\mathchar"220A}\;\epsilon_{\fam4 \rsfs
A})\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}=(\epsilon_{\fam4 \rsfs A}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}=id,\cr}$$
which completes the proof.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
We call the action $AD$ the adjoint action of $(G,\hbox{$\script A$})$ on itself. As in
ordinary Lie theory, the adjoint action respects the primitive
elements with respect to $\delta_{e}$ in the sense of the following
proposition.
\math{Proposition.}{\adjointactionprop}{\sl Let $AD_{\ast
a}\colon\kern2pt\hbox{$\script A$}(G)^{\circ}\rightarrow\hbox{$\script A$}(G)^{\circ}, a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ be
defined as $AD_{\ast a}(b)=AD_{\ast}(a\;{\mathchar"220A}\; b)$. Then, for an
element $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ group-like or primitive with respect to
$\delta_{e}$, $AD_{\ast a}$ restricts to a linear map of the Lie
superalgebra $\frak g$ into itself.}
\undertext{\it Proof}. Consider first the case where $a$ is a
group-like element, $a=\delta_{g},g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$. If $v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, we have:
$AD_{\ast a}(v)=AD_{\ast}(a\;{\mathchar"220A}\; v)=a\;{\mathchar"220C}\; v\;{\mathchar"220C}\; a^{-1}$, because for
group-like elements the antipode $s_{\fam4 \rsfs A}^{\circ}$ of
$\hbox{$\script A$}(G)^{\circ}$ is given by $s_{\fam4 \rsfs
A}^{\circ}a=a^{-1}=\delta_{g^{-1}}$. It is then immediate to verify
that if $\hbox{$\Delta^{\circ}_{\script A}$}$ is the coproduct of $\hbox{$\script A$}(G)^{\circ}$, we have:
$\hbox{$\Delta^{\circ}_{\script A}$}(AD_{\ast a}(v))=AD_{\ast
a}(v)\;{\mathchar"220A}\;\delta_{e}+\delta_{e}\;{\mathchar"220A}\; AD_{\ast a}(v)$ which means
that $AD_{\ast a}(v)$ belongs also to $\frak g$. Proceeding in the
same way for the case where $a$ is primitive with respect to
$\delta_{e}$, we find $AD_{\ast a}(v)=a\;{\mathchar"220C}\; v-(-1)^{|a||v|}v\;{\mathchar"220C}\;
a=[a,v]\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$. Thus, for an element $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$
group-like or primitive with respect to $\delta_{e}$, we obtain
$AD_{\ast a}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}{\rm End\kern1pt}\frak g$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
The previous proof makes clear that if $a=\delta_{g}$, $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$,
then $AD_{\ast a}$ is an isomorphism of the Lie
superalgebra $\frak g$. Indeed, in this case
$AD_{\ast a}=R_{g\ast}^{-1}\hbox{\lower5.8pt\hbox{\larm\char'027}} L_{g\ast}$. For
the case where $a$ is primitive with respect to $\delta_{e}$,
$AD_{\ast a}$ coincides with the adjoint representation of $\frak
g$ on itself, $AD_{\ast a}=ad(a)$, where $ad(a)(b)=[a,b],\forall
b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$.
\math{Remark.}{\epeksigisidyo}{\it One can define a linear map
$\Psi_{\ast a}\colon\kern2pt\hbox{$\script B$}(Y)^{\circ}\rightarrow\hbox{$\script B$}(Y)^{\circ}$
for each left action $\Psi\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})\rightarrow(Y,\hbox{$\script B$})$
and $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ by $\Psi_{\ast a}(b)=\Psi_{\ast}(a\;{\mathchar"220A}\; b)$,
$\forall b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)^{\circ}$. It is then easily verified that for a
group-like element $a$, $\Psi_{\ast a}$ is an isomorphism of graded coalgebras;
furthermore, $\Psi_{\ast a}=\Psi_{g\ast}$ if $a=\delta_{g}$ (see
Theorem {\thewrima}). We have analogous facts for a right action.}
\vskip0.2cm
Graded Lie groups provide a wide and important class of
parallelizable graded manifolds, as the following theorem asserts.
\math{Theorem.}{\paralleltheorem}{\sl Each graded Lie group $(G,\hbox{$\script A$})$
is a parallelizable graded manifold.}
\undertext{\it Proof}. Let
$\Phi\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$ be the right action
such that $\Phi^{\ast}=\hbox{$\Delta_{\script A}$}$ (see the proof of Corollary {\porisma}).
Then, $(\Phi^{\ast})_{a}=(id\;{\mathchar"220A}\; a)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}=r_{a}$. If
$a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, then we know by Theorem {\thewrima} that
$(\Phi^{\ast})_{a}$ is a derivation on $\hbox{$\script A$}(G)$. For $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$,
the tangent vector $\widetilde{(\Phi^{\ast})_{a}}(g)\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{g}(G,\hbox{$\script A$})$ is
calculated by means of $\derivation$:
$\widetilde{(\Phi^{\ast})_{a}}(g)=\delta_{g}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a}
=\delta_{g}\;{\mathchar"220C}\; a=L_{g\ast}(a)$; but $L_{g\ast}$ is an isomorphism
by Proposition {\leftrightprop}.
This means that if $\{a^{i},b^{j}\}$
is a basis of the Lie superalgebra $\frak g$, $a^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak
g_{0},b^{j}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g_{1}$, then
$\left\{\widetilde{(\Phi^{\ast})_{a^{i}}}(g),
\widetilde{(\Phi^{\ast})_{b^{j}}}(g)\right\}$
is a basis of $T_{g}(G,\hbox{$\script A$})$ for each $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$. By Proposition 2.12.1 of
{\kost}, we conclude that
$\{(\Phi^{\ast})_{a^{i}},(\Phi^{\ast})_{b^{j}}\}$ is a global basis of
$\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(G)$ for its left $\hbox{$\script A$}(G)$-module structure.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\chapter{Actions, graded distributions and quotient structures}
Two important notions in the study of graded Lie group actions on
graded manifolds are those of graded distributions and quotient graded
manifolds. In this section, we investigate the relation between graded
distributions and free actions, suitably defined in the graded setting.
In addition, we find a necessary and sufficient condition in order
that the quotient defined by the action of a graded Lie group be a
graded manifold.
We first introduce the notion of the graded distribution (see also
{\riccigia}).
\math{Definition.}{\gdistribution}{\sl Let $(M,\hbox{$\script A$})$ be a graded
manifold of dimension $(m,n)$. We call graded distribution of
dimension $(p,q)$ on $(M,\hbox{$\script A$})$ a sheaf $U\rightarrow\hbox{$\script E$}(U)$ of free
$\hbox{$\script A$}(U)$-modules such that each $\hbox{$\script E$}(U)$ is a graded submodule
of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)$ of dimension $(p,q)$. The distribution will be called
involutive if for each $\xi,\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script E$}(U)$, we have $[\xi,\eta]\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script E$}(U),
\forall U\subset M$ open.}
\vskip0.3cm
Thus, for each open $U\subset M$, there exist elements
$\xi_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\big(\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)\big)_{0},i=1,\ldots,p$,
$\eta_{j}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\big(\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(U)\big)_{1},j=1,\ldots,q$ such that
$$\hbox{$\script E$}(U)=\hbox{$\script A$}(U)\cdot\xi_{1}\;{\mathchar"2208}\;\cdots\;{\mathchar"2208}\;\hbox{$\script A$}(U)\cdot\xi_{p}
\;{\mathchar"2208}\;\hbox{$\script A$}(U)\cdot\eta_{1}\;{\mathchar"2208}\;\cdots\;{\mathchar"2208}\;\hbox{$\script A$}(U)\cdot\eta_{q}.$$
Given the graded distribution $\hbox{$\script E$}$ on $(M,\hbox{$\script A$})$, one obtains, for each
$x\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} M$, a graded subspace $E_{x}$ of $T_{x}(M,\hbox{$\script A$})$, calculating
the tangent vectors $\tilde{\xi}_{x}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{x}(M,\hbox{$\script A$})$ via relation
$\derivation$, for each $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script E$}(U)$, $x\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} U$. Clearly,
$E_{x}=(E_{x})_{0}\;{\mathchar"2208}\;(E_{x})_{1}$ with
${\rm dim\kern1pt}(E_{x})_{0}=\epsilon_{0}(x)\leq p$ and
${\rm dim\kern1pt}(E_{x})_{1}=\epsilon_{1}(x)\leq q$. Therefore, we make the
following distinction:
\math{Definition.}{\regdistribution}{\sl A graded distribution $\hbox{$\script E$}$ of
dimension $(p,q)$ on the graded manifold $(M,\hbox{$\script A$})$ is called 0-regular
if $\epsilon_{0}(x)=p$, and 1-regular if $\epsilon_{1}(x)=q$, for each
$x\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} M$. We say that $\hbox{$\script E$}$ is regular if $\epsilon_{0}(x)=p$ and
$\epsilon_{1}(x)=q$, for each $x\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} M$.}
\vskip0.3cm
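To fix ideas, here is a standard local illustration (under the usual coordinate description of graded manifolds; it is not taken from the text):

```latex
% Illustrative example: on a graded domain with even coordinates
% x^{1},\dots,x^{m} and odd coordinates \theta^{1},\dots,\theta^{n},
% the sheaf generated over the structure sheaf by the first p even and
% first q odd coordinate derivations,
\mathcal{E}(U) \;=\; \bigoplus_{i=1}^{p} \mathcal{A}(U)\cdot
   \frac{\partial}{\partial x^{i}}
 \;\oplus\; \bigoplus_{j=1}^{q} \mathcal{A}(U)\cdot
   \frac{\partial}{\partial\theta^{j}},
% is a graded distribution of dimension (p,q).  It is involutive, since
% coordinate derivations (super)commute, and regular, since their values
% at each point are linearly independent.
```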
For the subsequent analysis, a graded generalization of free actions
will be necessary.
\math{Definition.}{\freeaction}{\sl We call the right action
$\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ free, if for each
$y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$ the morphism $\Phi_{y}\colon\kern2pt(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ is such
that $\Phi_{y\ast}\colon\kern2pt\hbox{$\script A$}(G)^{\circ}\rightarrow\hbox{$\script B$}(Y)^{\circ}$ is
injective.}
\vskip0.3cm
Similarly, one defines a free left action. It is clear that if the
graded Lie group $(G,\hbox{$\script A$})$ acts freely on $(Y,\hbox{$\script B$})$, then we obtain a
free action of $G$ on $Y$, but if only the restriction
$\Phi_{\ast}|_{Y\;{\mathchar"2202}\; G}$ is a free action then, in general, the
action $\Phi$ is not free.
An equivalent characterization of free actions in the graded setting
is provided by the following proposition, stated for the case of a
right action.
\math{Proposition.}{\freeactionequiv}{\sl The action
$\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ is free if and only if
the morphism of graded manifolds $\tilde{\Phi}=
(\Phi\;{\mathchar"2202}\;\pi_{1})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Delta\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})
\rightarrow(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})$ is such that
$\tilde{\Phi}_{\ast}$ is injective.
Here, $\Delta$ denotes the diagonal morphism on $(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$
and $\pi_{1}$ is the projection on the first factor.}
\undertext{\it Proof}. Consider elements
$a=\delta_{g}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}, b=\delta_{y}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)^{\circ}$
group-like and $u\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{y}(Y,\hbox{$\script B$}), w\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{g}(G,\hbox{$\script A$})$ primitive.
Then, a simple calculation gives
$$\tilde{\Phi}_{\ast}(b\;{\mathchar"220A}\; a)=\Phi_{y\ast}(a)\;{\mathchar"220A}\;
b\eqno\eqname\relena$$
$$\tilde{\Phi}_{\ast}(u\;{\mathchar"220A}\; a+b\;{\mathchar"220A}\; w)=
[\Phi_{g\ast}(u)+\Phi_{y\ast}(w)]\;{\mathchar"220A}\; b+
\Phi_{g\ast}(b)\;{\mathchar"220A}\; u.\eqno\eqname\reldyo$$
Suppose now that $\Phi$ is a free action; then the morphism
$\Phi_{y\ast}\colon\kern2pt\hbox{$\script A$}(G)^{\circ}\rightarrow\hbox{$\script B$}(Y)^{\circ}$ is
injective which implies immediately, thanks to $\relena$ and
$\reldyo$, that $\tilde{\Phi}_{\ast}$ is injective on all
group-like and primitive elements. By Proposition 2.17.1 of
{\kost}, this is a necessary and sufficient condition for the
morphism $\tilde{\Phi}_{\ast}$ to be injective on the whole
graded coalgebra $\hbox{$\script A$}(G)^{\circ}$. The converse is immediate
again by $\relena$ and $\reldyo$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
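At the level of the underlying manifolds, relation $\relena$ recovers the classical picture: the restriction of $\tilde{\Phi}$ is $(y,g)\mapsto(\Phi(y,g),y)$, whose injectivity is precisely ordinary freeness. In plain group notation:

```latex
% Classical analog: for a right action (y,g) \mapsto y \cdot g of G on Y,
% set \tilde{\Phi}(y,g) = (y \cdot g,\; y).  Then
\tilde{\Phi}(y,g) = \tilde{\Phi}(y',g')
 \;\Longleftrightarrow\;
 y = y' \ \hbox{and}\ y\cdot g = y\cdot g' ,
% so \tilde{\Phi} is injective exactly when every stabilizer is trivial,
% i.e.\ when the action of G on Y is free.
```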
\vskip0.3cm
Consider now a right action
$\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$; by Theorem
{\thewrima}, we have a linear map ${\bit I}_{\Phi}\colon\kern2pt\frak
g\rightarrow\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$ defined as ${\bit
I}_{\Phi}(a)=(\Phi^{\ast})_{a}$. We thus obtain a subspace
$\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y)={\rm im\kern1pt}{\bit I}_{\Phi}$ of the Lie superalgebra of
derivations on $\hbox{$\script B$}(Y)$. As a matter of fact, $\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y)$ is a
graded Lie subalgebra of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$. Indeed, one readily verifies
that for all $a,b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$ we have
$[(\Phi^{\ast})_{a},(\Phi^{\ast})_{b}]=(\Phi^{\ast})_{[a,b]}$, which
means that ${\bit I}_{\Phi}([a,b])=[{\bit I}_{\Phi}(a),{\bit
I}_{\Phi}(b)]$. The following theorem provides an important property
of free actions on graded manifolds.
\math{Theorem.}{\freeactiontheorem}{\sl Let $\Phi$ be a free right action
of the graded Lie group $(G,\hbox{$\script A$})$ on the graded manifold $(Y,\hbox{$\script B$})$,
${\rm dim\kern1pt}(G,\hbox{$\script A$})=(m,n)$. Then $\Phi$ induces a
regular and involutive graded distribution $\hbox{$\script E$}$ on $(Y,\hbox{$\script B$})$ of
dimension $(m,n)$.}
\undertext{\it Proof}.
$\bullet$ {\it Step 1}. Let us first calculate the kernel of the Lie
superalgebra morphism ${\bit I}_{\Phi}\colon\kern2pt\frak
g\rightarrow\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$ when $\Phi$ is a free action. To this end, the
following general property of actions is useful:
$$\widetilde{(\Phi^{\ast})_{a}}(y)=\Phi_{y\ast}(a),\;\forall
y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y,\;\forall a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g.\eqno\eqname\generalprop$$
For the proof of $\generalprop$, we note only that, by relation
$\derivation$, $\widetilde{(\Phi^{\ast})_{a}}(y)=
\delta_{y}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a}=\Phi_{\ast}(\delta_{y}\;{\mathchar"220A}\; a)$.
Suppose now that ${\bit I}_{\Phi}(a)=0$, that is, $(\Phi^{\ast})_{a}=0$;
by $\generalprop$, this implies that
$\Phi_{y\ast}(a)=0,\forall y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$. Since $\Phi$ is a free
action, we know by Definition {\freeaction}, that $\Phi_{y\ast}$ is
injective for all $y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$, which implies that $a=0$. As a result,
${\rm ker\kern1pt}{\bit I}_{\Phi}=0$, i.e.\ ${\bit I}_{\Phi}$ is injective; hence,
$\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y)$ is a graded Lie subalgebra of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$ whose
even and odd dimensions are $m$ and $n$ respectively:
$(\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y))_{0}\cong\frak g_{0}$,
$(\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y))_{1}\cong\frak g_{1}$.
$\bullet$ {\it Step 2}. Let now $\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}$ be the correspondence
$U\rightarrow\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(U)$, where $\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(U)=P_{YU}
\big(\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y)\big)$, $U\subset Y$ and
$P_{UV}\colon\kern2pt\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(U)\rightarrow\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(V)$ are the restriction
maps for the sheaf $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$. Clearly,
$\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}$ is a subpresheaf of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$. Consider now the
subpresheaf $\hbox{$\script E$}=\hbox{$\script B$}\cdot\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}$ of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$, $\hbox{$\script E$}(U)=\hbox{$\script B$}(U)\cdot
\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(U)=P_{YU}\big(\hbox{$\script B$}(Y)\cdot\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(Y)\big)$. $\hbox{$\script E$}(U)$
is the set of finite linear combinations of elements of $
\hbox{$\yfra D\yfra e\yfra r$}_{\Phi}\hbox{$\script B$}(U)$ with coefficients in $\hbox{$\script B$}(U)$. In order to prove
that $\hbox{$\script E$}$ is a sheaf, let us consider an open set $U\subset Y$, an open
covering $\{U_{\alpha}\}_{\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Lambda}$ of $U$ and elements
$D_{\alpha}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script E$}(U_{\alpha})$ such that $P_{U_{\alpha}U_{\alpha\beta}}
(D_{\alpha})=P_{U_{\beta}U_{\alpha\beta}}(D_{\beta})$,
$\forall\alpha,\beta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Lambda$ when $U_{\alpha\beta}=U_{\alpha}\cap
U_{\beta}\neq\kenosyn$. Then by the sheaf properties of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$,
there exists an element $D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(U)$ such that
$P_{UU_{\alpha}}(D)=D_{\alpha}$. But if we write
$D_{\alpha}=\sum_{i}f^{i}_{\alpha}P_{YU_{\alpha}}
(\Phi^{\ast})_{e_{i}}$, where $f^{i}_{\alpha}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(U_{\alpha})$ and
$\{e_{i}\}$ is a basis of $\frak g$, then by Step 1 we easily find
that $f^{i}_{\alpha}=f^{i}|_{U_{\alpha}}$ for some $f^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(U)$, because the
$D_{\alpha}$'s coincide on the intersections $U_{\alpha\beta}$. This
means that $D_{\alpha}=P_{YU_{\alpha}}(E)$, where
$E=\sum_{i}F^{i}(\Phi^{\ast})_{e_{i}}$ with $F^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)$ such that
$F^{i}|_{U}=f^{i}$, $\forall i$. It is then immediate that
$D=P_{YU}(E)\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script E$}(U)$.
$\bullet$ {\it Step 3}. It is evident that the sheaf $\hbox{$\script E$}$
constructed above has the properties of a graded distribution.
In fact, $\hbox{$\script E$}(U)$ is a graded submodule of $\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(U)$ of dimension
$(m,n)$ for each open set $U\subset Y$. This distribution is
clearly regular thanks to relation $\generalprop$ and to the fact that
the action is free. It remains to show that
it is involutive. To this end, consider two elements
$\xi=\sum_{i}f^{i}P_{YU}(\Phi^{\ast})_{a^{i}}$ and
$\eta=\sum_{j}g^{j}P_{YU}(\Phi^{\ast})_{b^{j}}$ of $\hbox{$\script E$}(U)$,
with $f^{i},g^{j}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(U), a^{i},b^{j}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$.
Then, direct calculation shows that
$$\eqalign{[\xi,\eta]&=\sum_{i,j}f^{i}
\big(P_{YU}(\Phi^{\ast})_{a^{i}}
g^{j}\big)P_{YU}(\Phi^{\ast})_{b^{j}}\cr
\hfill&\hskip0.4cm-(-1)^{|\xi||\eta|}\sum_{i,j}g^{j}
\big(P_{YU}(\Phi^{\ast})_{b^{j}}f^{i}\big)
P_{YU}(\Phi^{\ast})_{a^{i}}\cr
\hfill&\hskip0.4cm+\sum_{i,j}(-1)^{|a^{i}||g^{j}|}f^{i}g^{j}
P_{YU}(\Phi^{\ast})_{[a^{i},b^{j}]},\cr}$$
from which the involutivity is evident.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
We now focus our attention on graded quotient structures defined by
equivalence relations on graded manifolds (see {\alm} for a general
treatment of this subject). A special case of equivalence relation is
provided by the action of a graded Lie group on a graded manifold and
this will be the interesting one for us.
\math{Definition.}{\regularaction}{\sl We call a right action
$\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ regular if the
morphism $\tilde{\Phi}=(\Phi\;{\mathchar"2202}\;\pi_{1})\hbox{\lower5.8pt\hbox{\larm\char'027}}\Delta\colon\kern2pt(Y,\hbox{$\script B$})
\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})$ defines
$(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$ as a closed graded submanifold of
$(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})$.}
\vskip0.3cm
Recall here, {\kost}, that $(R,\hbox{$\script D$})$ is a graded submanifold of
$(Y,\hbox{$\script B$})$ if $\hbox{$\script D$}(R)^{\circ}\subset\hbox{$\script B$}(Y)^{\circ}$ and there exists a
morphism of graded manifolds $i\colon\kern2pt(R,\hbox{$\script D$})\rightarrow(Y,\hbox{$\script B$})$ such
that $i_{\ast}\colon\kern2pt\hbox{$\script D$}(R)^{\circ}\rightarrow\hbox{$\script B$}(Y)^{\circ}$ is simply
the inclusion; $(R,\hbox{$\script D$})$ will be called closed if, furthermore,
${\rm dim\kern1pt}(R,\hbox{$\script D$})<{\rm dim\kern1pt}(Y,\hbox{$\script B$})$. Then, the action is regular if the subset
$\tilde{\Phi}_{\ast}\big(\hbox{$\script B$}(Y)^{\circ}\;{\mathchar"220A}\;\hbox{$\script A$}(G)^{\circ}\big)\subset
\hbox{$\script B$}(Y)^{\circ}\;{\mathchar"220A}\;\hbox{$\script B$}(Y)^{\circ}$ defines a graded submanifold of
$(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})$ in the sense of Kostant, {\kost}.
The following theorem generalizes in a natural way to the graded
case a fundamental result about quotients defined by actions in
ordinary manifold theory.
\math{Theorem.}{\regularactiontheorem}{\sl
The action $\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$
is regular if and only if the quotient $(Y/G,\hbox{$\script B$}/\hbox{$\script A$})$ is a
graded manifold.}
\undertext{\it Proof}. Thanks to Theorem 2.6 of {\alm},
it suffices to prove that the projections
$p_{i}\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(Y,\hbox{$\script B$})\rightarrow(Y,\hbox{$\script B$}), i=1,2$ on the first
and second factors restricted to the image of $(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$
under $\tilde{\Phi}$ are submersions.
In other words, we must show that the morphisms of graded coalgebras
$p_{i\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\tilde{\Phi}_{\ast}\colon\kern2pt\hbox{$\script B$}(Y)^{\circ}\;{\mathchar"220A}\;
\hbox{$\script A$}(G)^{\circ}\rightarrow\hbox{$\script B$}(Y)^{\circ},i=1,2$ restricted to primitive
elements are surjective.
Consider an arbitrary primitive element $V=u\;{\mathchar"220A}\; a+b\;{\mathchar"220A}\; w$,
for $a=\delta_{g}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ},b=\delta_{y}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)^{\circ}$
group-like and $u\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{y}(Y,\hbox{$\script B$})$, $w\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{g}(G,\hbox{$\script A$})$ primitive.
Using relation $\reldyo$, we find easily:
$p_{1\ast}\tilde{\Phi}_{\ast}(V)=\Phi_{g\ast}(u)+\Phi_{y\ast}(w)$
and $p_{2\ast}\tilde{\Phi}_{\ast}(V)=u$, which proves that
$p_{i}\hbox{\lower5.8pt\hbox{\larm\char'027}}\tilde{\Phi}$ are submersions, $i=1,2$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
Note here that if $U\subset Y/G$ is an open subset, then the sheaf
$\hbox{$\script B$}/\hbox{$\script A$}$ is given by
$$(\hbox{$\script B$}/\hbox{$\script A$})(U)=\{f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(\check{\pi}^{-1}(U))\,|\,\Phi^{\ast}f=
f\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}\},\eqno\eqname\phiastf$$
where $\check{\pi}\colon\kern2pt Y\rightarrow Y/G$ is the projection, {\alm}.
Furthermore, the dimension of the quotient graded manifold is given
by ${\rm dim\kern1pt}(Y/G,\hbox{$\script B$}/\hbox{$\script A$})=2{\rm dim\kern1pt}(Y,\hbox{$\script B$})-
{\rm dim\kern1pt}({\rm im\kern1pt}\tilde{\Phi})$, where ${\rm im\kern1pt}\tilde{\Phi}$
denotes the closed graded submanifold defined by $\tilde{\Phi}$. When
$\Phi$ is a free action, then by Proposition {\freeactionequiv}, we
obtain ${\rm dim\kern1pt}(Y/G,\hbox{$\script B$}/\hbox{$\script A$})={\rm dim\kern1pt}(Y,\hbox{$\script B$})-{\rm dim\kern1pt}(G,\hbox{$\script A$})$.
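Explicitly, if we grant that for a free action $\tilde{\Phi}$ identifies
$(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$ with its image, so that
${\rm dim\kern1pt}({\rm im\kern1pt}\tilde{\Phi})={\rm dim\kern1pt}(Y,\hbox{$\script B$})+{\rm dim\kern1pt}(G,\hbox{$\script A$})$,
the count behind the last equality reads:
$$\eqalign{{\rm dim\kern1pt}(Y/G,\hbox{$\script B$}/\hbox{$\script A$})&=2{\rm dim\kern1pt}(Y,\hbox{$\script B$})-
\big[{\rm dim\kern1pt}(Y,\hbox{$\script B$})+{\rm dim\kern1pt}(G,\hbox{$\script A$})\big]\cr
\hfill&={\rm dim\kern1pt}(Y,\hbox{$\script B$})-{\rm dim\kern1pt}(G,\hbox{$\script A$}),\cr}$$
the dimensions being added and subtracted componentwise as pairs.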
Finally, we make some comments about graded isotropy subgroups,
recalling their construction from {\kost} in a more concise way.
Consider a right action $\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})
\rightarrow(Y,\hbox{$\script B$})$ and $b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)^{\circ}$ a group-like element,
$b=\delta_{y}$, $y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$.
Let $H_{y}(G,\frak g)$ be the set of elements $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$
with the property
$$\Phi_{\ast}(\delta_{y}\;{\mathchar"220A}\; a)=\epsilon_{\fam4 \rsfs A}^{\circ}
(a)\delta_{y}.\eqno\eqname\isotropyrelationena$$
Setting $G_{y}=H_{y}(G,\frak g)\cap G$ and
$\frak g_{y}=H_{y}(G,\frak g)\cap\frak g$, we find that $(\frak g_{y})_{0}$ is
the Lie algebra of $G_{y}$. It is then clear that we can form the
Lie-Hopf algebra $\hbox{\bf\char'122}(G_{y})\hbox{\bb\char'076} E(\frak g_{y})$ because $\frak
g_{y}$ is stable under the adjoint action of $G_{y}$, see Proposition
{\adjointactionprop} and relation $\isotropyrelationena$. By Proposition
3.8.3 of {\kost}, $\hbox{\bf\char'122}(G_{y})\hbox{\bb\char'076} E(\frak g_{y})$ corresponds to a
graded Lie subgroup of $(G,\hbox{$\script A$})$. We denote this subgroup by
$(G_{y},\hbox{$\script A$}_{\kern-1.5pt y})$ and call it the graded isotropy subgroup
of $(G,\hbox{$\script A$})$ at the point $y$.
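On group-like and primitive elements, relation $\isotropyrelationena$
takes a familiar form: for $a=\delta_{g}$ it reads
$$\Phi_{\ast}(\delta_{y}\;{\mathchar"220A}\;\delta_{g})=\delta_{y},$$
so that $G_{y}$ is the usual isotropy group of $y$, while for a
primitive $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$ it reads $\Phi_{y\ast}(a)=0$, by relation
$\generalprop$.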
\chapter{Graded principal bundles}
Graded principal bundles were first introduced in {\alm}, {\almdyo}.
Here, we discuss this notion with slight modifications, suggested by
the requirement that the definition of a graded principal bundle
correctly reproduce the notion of an ordinary principal bundle.
\math{Definition.}{\gpb}{\sl A graded principal bundle over a graded
manifold $(X,\hbox{$\script C$})$ consists of a graded manifold $(Y,\hbox{$\script B$})$ and an action
$\Phi$ of a graded Lie group $(G,\hbox{$\script A$})$ on $(Y,\hbox{$\script B$})$ with the following
properties:
\item{1.} $\Phi$ is a free right action
\item{2.} the quotient $(Y/G,\hbox{$\script B$}/\hbox{$\script A$})$ is a graded manifold, isomorphic
to $(X,\hbox{$\script C$})$, such that the natural projection $\pi\colon\kern2pt(Y,\hbox{$\script B$})
\rightarrow(X,\hbox{$\script C$})$ is a submersion
\item{3.} $(Y,\hbox{$\script B$})$ is locally trivial, that is, each point of $X$ has
an open neighbourhood $U\subset X$ for which there exists an isomorphism
of graded manifolds
$\phi\colon\kern2pt(V,\hbox{$\script B$}|_{V})\rightarrow(U\;{\mathchar"2202}\; G,\hbox{$\script C$}|_{U}\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$})$,
$V=\pi_{\ast}^{-1}(U)\subset Y$, such that the isomorphism
$\phi^{\ast}$ of graded algebras is a morphism of $\hbox{$\script A$}(G)$-comodules,
where the $\hbox{$\script A$}(G)$-comodule structures on $\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$ and $\hbox{$\script B$}(V)$
are given by $id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}$ and $\Phi^{\ast}$ respectively.
Furthermore, we require that $\phi^{\ast}=m_{\fam4 \rsfs
B}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\pi^{\ast}\;{\mathchar"220A}\;\psi^{\ast})$, where
$\psi\colon\kern2pt(V,\hbox{$\script B$}|_{V})\rightarrow(G,\hbox{$\script A$})$ is a morphism of graded
manifolds.
}
\vskip0.3cm
The fact that $\phi^{\ast}$ is a morphism of $\hbox{$\script A$}(G)$-comodules,
that is, $$(\phi^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})=\Phi^{\ast}
\hbox{\lower5.8pt\hbox{\larm\char'027}}\phi^{\ast}\eqno\eqname\comodmorphi$$
implies that $\psi^{\ast}$ is also a morphism of $\hbox{$\script A$}(G)$-comodules:
$$(\psi^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}=\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\psi^{\ast}.
\eqno\eqname\comodmorpsi$$
One easily verifies that the underlying differentiable manifolds of
Definition {\gpb} form an ordinary principal bundle; furthermore, if
the graded manifolds are trivial, in the sense that
$\hbox{$\script A$}=C^{\infty}_{G},\hbox{$\script B$}=C^{\infty}_{Y}, \hbox{$\script C$}=C^{\infty}_{X}$,
then we recover the definition of an ordinary principal bundle.
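For orientation, we sketch how the comodule conditions read in the
trivial case (a classical check, with $\Phi$ also denoting the
underlying action): the trivialization becomes a smooth map
$\phi\colon\kern2pt V\rightarrow U\;{\mathchar"2202}\; G$, $\phi(y)=(\pi(y),\psi(y))$,
and relations $\comodmorphi$ and $\comodmorpsi$ reduce to the familiar
equivariance properties
$$\phi\big(\Phi(y,g)\big)=\phi(y)\cdot g,\qquad
\psi\big(\Phi(y,g)\big)=\psi(y)g,\;\forall y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} V,\;\forall g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G,$$
where $G$ acts on $U\;{\mathchar"2202}\; G$ by right translation on the second factor.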
We refer the reader to {\kobay} for a general and systematic treatment
of the subject of principal bundles in differential geometry.
Let us now compute the graded isotropy subgroups in the case
where the action $\Phi$ is free (as is the case, for example, for
a graded principal bundle).
The Lie group $G_{y}$, which is defined as $G_{y}=\{g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G\,|\,\Phi_{\ast}
(b\;{\mathchar"220A}\;\delta_{g})=b\}$, is equal to $\{e\}$ because the action of $G$ on
$Y$ is free. On the other hand, $\frak g_{y}=\{a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g\,|\,
\Phi_{\ast}(b\;{\mathchar"220A}\; a)=0\}=0$, again because the action is
free (the morphism $\Phi_{y\ast}$ is injective). Hence, in this case
the graded isotropy subgroup $(G_{y},\hbox{$\script A$}_{\kern-1.5pt y})$ is simply
$(e,{\fam4 \rsfs R})$, where $\fam4 \rsfs R$ is the trivial sheaf over the
identity $e\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$, ${\fam4 \rsfs R}(e)=\hbox{\bf\char'122}$.
In order to calculate the quotient graded manifold
$(G/G_{y},\hbox{$\script A$}/\hbox{$\script A$}_{\kern-1.5pt y})$ which represents the orbit of the
point $y$ under the action of $(G,\hbox{$\script A$})$, we need the expression of
the canonical right action $\Phi$ of the subgroup $(G_{y},\hbox{$\script A$}_{\kern-1.5pt
y})$ on $(G,\hbox{$\script A$})$. If $i\colon\kern2pt(G_{y},\hbox{$\script A$}_{\kern-1.5pt
y})\rightarrow(G,\hbox{$\script A$})$ is the inclusion, we have:
$\Phi^{\ast}=(id\;{\mathchar"220A}\; i^{\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}$ and for the case of the free
action, where $(G_{y},\hbox{$\script A$}_{\kern-1.5pt y})=(e,{\fam4 \rsfs R})$, one finds that
$i^{\ast}=\epsilon_{\fam4 \rsfs A}$ and finally $\Phi^{\ast}=id$. In view of
relation $\phiastf$, it is straightforward that $\hbox{$\script A$}/\hbox{$\script A$}_{\kern-1.5pt y}=\hbox{$\script A$}$.
Therefore,
\math{Property.}{\idiotita}{\sl The orbits of a free action of
$(G,\hbox{$\script A$})$ are always isomorphic as graded manifolds to $(G,\hbox{$\script A$})$.}
\vskip0.3cm
In the case of a graded principal bundle, the orbit
$(\hbox{$\script O$}_{y},\hbox{$\script B$}_{y})$ of $(G,\hbox{$\script A$})$
through $y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$ will be called the fibre of $(Y,\hbox{$\script B$})$ over
$x=\pi_{\ast}|_{Y}(y)\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} X$. Using the graded version of the submersion
theorem, {\ruipma}, one can justify this terminology as follows.
The pre-image $\pi^{-1}(x,{\fam4 \rsfs R})$ of the closed graded submanifold
$(x,{\fam4 \rsfs R})\hookrightarrow(X,\hbox{$\script C$})$ is a closed graded submanifold of
$(Y,\hbox{$\script B$})$ whose underlying differentiable manifold is
$\pi_{\ast}^{-1}(x)$. So, if we write $\pi^{-1}(x,{\fam4 \rsfs R})=
(\pi_{\ast}^{-1}(x),\hbox{$\script D$})$, then for each $z\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\pi_{\ast}^{-1}(x)$ we have:
$T_{z}(\pi_{\ast}^{-1}(x),\hbox{$\script D$})={\rm ker\kern1pt} T_{z}\pi$. We already know that
$\pi_{\ast}^{-1}(x)=\hbox{$\script O$}_{y}$, the orbit under $G$ of a point $y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$
such that $\pi_{\ast}|_{Y}(y)=x$. Furthermore, if
$\delta_{z}=\Phi_{\ast}(\delta_{y}\;{\mathchar"220A}\;\delta_{g})$, $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$ and
$v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{g}(G,\hbox{$\script A$})$, then $V=\Phi_{\ast}(\delta_{y}\;{\mathchar"220A}\; v)\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}
T_{z}(\hbox{$\script O$}_{y},\hbox{$\script B$}_{y})$ and $\pi_{\ast}(V)=0$. Consequently,
$T_{z}(\hbox{$\script O$}_{y},\hbox{$\script B$}_{y})\subset{\rm ker\kern1pt} T_{z}\pi$. By a simple argument
on dimensions, we obtain that $T_{z}(\hbox{$\script O$}_{y},\hbox{$\script B$}_{y})=
T_{z}(\pi_{\ast}^{-1}(x),\hbox{$\script D$})$. We conclude that the tangent bundles
of $\pi^{-1}(x,{\fam4 \rsfs R})$ and $(\hbox{$\script O$}_{y},\hbox{$\script B$}_{y})$ are identical; but
then, Theorem 2.16 of {\kost} tells us that these graded manifolds
coincide.
Next, we discuss an elementary example of graded principal bundle, the
product bundle. In this case, one can directly verify the axioms of
Definition {\gpb}. Nevertheless, there also exist graded
principal bundles to which Definition {\gpb} cannot be directly
applied, even though this is possible for the corresponding ordinary
principal bundles. For such cases, one may use an equivalent
definition of the graded principal bundle, see next section.
\math{Example.}{\paradeigmaena}{Consider a graded manifold $(X,\hbox{$\script C$})$,
a graded Lie group $(G,\hbox{$\script A$})$ and their product
$(Y,\hbox{$\script B$})=(X,\hbox{$\script C$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$. One has a canonical right action
$\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ defined as
$\Phi^{\ast}=id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}$. This action is free: if $\delta_{y}=
\delta_{x}\;{\mathchar"220A}\;\delta_{g}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(X)^{\circ}\;{\mathchar"220A}\;\hbox{$\script A$}(G)^{\circ}$ is
group-like and $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$, then $\Phi_{y\ast}(a)=\Phi_{\ast}
(\delta_{x}\;{\mathchar"220A}\;\delta_{g}\;{\mathchar"220A}\; a)=
\delta_{x}\;{\mathchar"220A}\;(\delta_{g}\;{\mathchar"220C}\; a)=\delta_{x}\;{\mathchar"220A}\; L_{g\ast}(a)$,
which implies that $\Phi_{y\ast}$ is an injective
morphism of graded coalgebras, because $L_{g\ast}$ is an
isomorphism. Evidently, the quotient $Y/G$ is equal to $X$ and
the sheaf $\hbox{$\script B$}/\hbox{$\script A$}$
over $X$ is given by the elements $f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$ for which
$\Phi^{\ast}f=f\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}$, $U\subset X$ open.
If we decompose $f$ as $f=\sum_{i}f_{i}\;{\mathchar"220A}\; h_{i}$, $f_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(U),
h_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)$, we easily obtain $\hbox{$\Delta_{\script A}$} h_{i}=h_{i}\;{\mathchar"220A}\;
\hbox{\bbm 1}_{\fam4 \rsfs A}$; hence the $h_{i}$'s satisfy $\epsilon_{\fam4 \rsfs
A}(h_{i})\hbox{\bbm 1}_{\fam4 \rsfs A}=h_{i}$. We conclude that $f$ is of the form
$f=\sum_{i}\epsilon_{\fam4 \rsfs A}(h_{i})f_{i}\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs
A}=f_{\fam4 \rsfs C}\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}$ and finally
$(\hbox{$\script B$}/\hbox{$\script A$})(U)\cong\hbox{$\script C$}(U)$, which proves that the quotient $(Y/G,\hbox{$\script B$}/\hbox{$\script A$})$
is isomorphic to $(X,\hbox{$\script C$})$. Further, the identity map
$\phi^{\ast}=id\colon\kern2pt\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)\rightarrow\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$
admits the decomposition of Definition {\gpb} (it suffices to choose
$\pi=\pi_{1},\psi=\pi_{2}$, the projections on the first and second
factors respectively) and satisfies trivially the relation
$\comodmorphi$.}
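Note that the last assertion of Example {\paradeigmaena} is indeed a
one-line computation: with $\phi^{\ast}=id$ and
$\Phi^{\ast}=id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}$, both sides of relation $\comodmorphi$
reduce to the same map,
$$(\phi^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}(id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$})=id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}=\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\phi^{\ast}.$$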
\chapter{The geometry of graded principal bundles}
In this section we analyze three aspects of the geometry of graded
principal bundles: the relation between the sheaf of vertical
derivations and the graded distribution induced by the action of the
structure group, a criterion of global triviality of the graded
principal bundle, and, finally, a way to reformulate Definition {\gpb}
avoiding the use of local trivializations.
For this and the subsequent sections, we will adopt the following
notation in order to simplify the discussion: if $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$ and
$V\subset Y$ is an open set, then the restriction
$P_{YV}(\Phi^{\ast})_{a}$ (Theorem {\freeactiontheorem}) will be
simply denoted by $(\Phi^{\ast})_{a}$.
It is well-known that if $Y(X,G)$ is an ordinary principal bundle,
then the set of vertical vectors at $y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$ is equal to the set
of induced vectors at the same point. In the previous section,
we saw that the same is true for graded principal bundles;
however, it is not evident that this property remains valid for
the sheaves of vertical and induced derivations. Nonetheless,
as the following theorem confirms, this is indeed the case.
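In the notation of relation $\generalprop$, the pointwise statement
recalled above reads
$${\rm ker\kern1pt} T_{y}\pi=\Phi_{y\ast}(\frak g),\;\forall y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y,$$
and Theorem {\ekerpi} below upgrades this equality of tangent
subspaces to an equality of sheaves of derivations.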
\math{Theorem.}{\ekerpi}{\sl Let $(Y,\hbox{$\script B$})$ be a graded principal bundle
over $(X,\hbox{$\script C$})$ with structure group $(G,\hbox{$\script A$})$. If $\hbox{$\script E$}$ is the natural
graded distribution induced by the free action of $(G,\hbox{$\script A$})$ on
$(Y,\hbox{$\script B$})$, then $\hbox{$\script E$}$ is equal to the sheaf of vertical derivations,
$\hbox{$\script E$}=\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})$.}
\undertext{\it Proof}. We show first that
$\hbox{$\script E$}\subset\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})$; to this end, it is sufficient to prove
that for each $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, we have $\pi_{\ast}(\Phi^{\ast})_{a}=0$.
Indeed, if $f$ is a homogeneous element of $\hbox{$\script C$}(U)$, $U\subset X$ open,
we obtain:
$$\pi^{\ast}\big[\pi_{\ast}(\Phi^{\ast})_{a}(f)\big]=(\Phi^{\ast})_{a}
(\pi^{\ast}f)=(id\;{\mathchar"220A}\; a)(\pi^{\ast}f\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A})=0,$$
since $a$ is primitive with respect to $\delta_{e}$.
Now the following argument on dimensions completes the proof. A
derivation $D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})(V)$, $V=\pi_{\ast}^{-1}(U)$,
is characterized by the property:
$\pi^{\ast}[(\pi_{\ast}D)f]=D(\pi^{\ast}f)=0$,
$\forall f\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(U)$. This means in terms of coordinates that $D$ does
not depend on the graded coordinates on $V$ obtained by pulling back
the graded coordinates of $U$ via $\pi^{\ast}$. Using the fact that
$\pi^{\ast}$ is an injection, we find that the dimension of
$\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})(V)$ equals ${\rm dim\kern1pt}(Y,\hbox{$\script B$})-{\rm dim\kern1pt}(X,\hbox{$\script C$})={\rm dim\kern1pt}(G,\hbox{$\script A$})=
{\rm dim\kern1pt}\hbox{$\script E$}(V)$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
The fact that a local trivialization $\phi$ is an isomorphism
of $\hbox{$\script A$}(G)$-comodules is expressed by relation $\comodmorphi$, but it is
also reflected in the induced derivations. The following lemma makes
this precise, providing a relation between them.
\math{Lemma.}{\indvectorfields}{\sl If $\phi\colon\kern2pt(V,\hbox{$\script B$}|_{V})
\rightarrow(U,\hbox{$\script C$}|_{U})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$, $V=\pi_{\ast}^{-1}(U)$,
is a local trivialization of $(Y,\hbox{$\script B$})$,
then the following relation is true for each $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$:
$$\phi_{\ast}(\Phi^{\ast})_{a}=id\;{\mathchar"220A}\; (R^{\ast})_{a}.$$}
\indent\undertext{\it Proof}. We show first that $\phi_{\ast}
(\Phi^{\ast})_{a}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(G)$. Indeed, if $f_{\fam4 \rsfs C}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(U)$, we
obtain:
$$\phi_{\ast}(\Phi^{\ast})_{a}(f_{\fam4 \rsfs C}\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs
A})=\big((\phi^{-1})^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast})_{a}\hbox{\lower5.8pt\hbox{\larm\char'027}}\phi^{\ast}
\big)(f_{\fam4 \rsfs C}\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A})=
(\phi^{\ast})^{-1}(\Phi^{\ast})_{a}(\pi^{\ast}f_{\fam4 \rsfs C})=0.$$
It is then sufficient to calculate $\phi_{\ast}(\Phi^{\ast})_{a}$
on elements of $\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$ of the form $\hbox{\bbm 1}_{\fam4 \rsfs C}
\;{\mathchar"220A}\; f_{\fam4 \rsfs A}$ for $f_{\fam4 \rsfs A}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)$. Taking into
account relation $\comodmorpsi$ and writing $\hbox{$\Delta_{\script A}$}
f_{\fam4 \rsfs A}=\sum_{i}I^{i}f_{\fam4 \rsfs A}\;{\mathchar"220A}\; J^{i}f_{\fam4 \rsfs A}$,
one finds:
$$\eqalign{\phi_{\ast}(\Phi^{\ast})_{a}(\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;
f_{\fam4 \rsfs A})&=(\phi^{-1})^{\ast}(id\;{\mathchar"220A}\; a)(\psi^{\ast}\;{\mathchar"220A}\; id)
\hbox{$\Delta_{\script A}$} f_{\fam4 \rsfs A}\cr
\hfill&=\sum_{i}(-1)^{|a||I^{i}f_{\fam4 \rsfs A}|}
(\phi^{\ast})^{-1}\psi^{\ast}(I^{i}f_{\fam4 \rsfs A})a(J^{i}f_{\fam4 \rsfs
A})\cr
\hfill&=\sum_{i}(-1)^{|a||I^{i}f_{\fam4 \rsfs A}|}(\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;
I^{i}f_{\fam4 \rsfs A})a(J^{i}f_{\fam4 \rsfs A})\cr
\hfill&=\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;(id\;{\mathchar"220A}\; a)\hbox{$\Delta_{\script A}$} f_{\fam4 \rsfs A}
=(id\;{\mathchar"220A}\;(R^{\ast})_{a})(\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\; f_{\fam4 \rsfs A}),\cr}$$
where we have used that $\phi^{\ast}(\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;
I^{i}f_{\fam4 \rsfs A})=\psi^{\ast}(I^{i}f_{\fam4 \rsfs A})$ implies
$(\phi^{-1})^{\ast}\psi^{\ast}(I^{i}f_{\fam4 \rsfs A})=
\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\; I^{i}f_{\fam4 \rsfs A}$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
Next we discuss the notion of section of a graded principal bundle
and we show that graded and ordinary sections exhibit several
analogous properties.
\math{Definition.}{\gradedsectiondef}{\sl Let $U\subset X$ be an open
set of the base manifold $(X,\hbox{$\script C$})$ of a graded principal bundle $(Y,\hbox{$\script B$})$.
A graded section of $(Y,\hbox{$\script B$})$ over $U$ is a morphism of graded
manifolds $s\colon\kern2pt(U,\hbox{$\script C$}|_{U})\rightarrow(Y,\hbox{$\script B$})$ having the property
$s^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\pi^{\ast}=id$. We also write $\pi\hbox{\lower5.8pt\hbox{\larm\char'027}}
s=id\colon\kern2pt(U,\hbox{$\script C$}|_{U})\rightarrow(U,\hbox{$\script C$}|_{U})$.}
\vskip0.3cm
A first property of graded sections is that to each local
trivialization one can canonically associate a graded section.
More precisely:
\math{Lemma.}{\gradedsectionlem}{\sl Let $\phi\colon\kern2pt(V,\hbox{$\script B$}|_{V})
\rightarrow(U,\hbox{$\script C$}|_{U})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$, $V=\pi_{\ast}^{-1}(U)$, be a local
trivialization. Then, if $E\colon\kern2pt\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)\rightarrow\hbox{$\script C$}(U)$ is
defined as $E(f_{\fam4 \rsfs C}\;{\mathchar"220A}\; f_{\fam4 \rsfs A})=\delta_{e}
(f_{\fam4 \rsfs A})f_{\fam4 \rsfs C}$, the map $E\hbox{\lower5.8pt\hbox{\larm\char'027}}(\phi^{-1})^{\ast}
\colon\kern2pt\hbox{$\script B$}(V)\rightarrow\hbox{$\script C$}(U)$ defines a morphism of graded manifolds
with the properties of a graded section.}
\undertext{\it Proof}. The fact that $E$ is a morphism of graded
algebras is evident because we may write $E=id\;{\mathchar"220A}\;\delta_{e}$.
Therefore, there exists a morphism of graded manifolds
$s\colon\kern2pt(U,\hbox{$\script C$}|_{U})\rightarrow(Y,\hbox{$\script B$})$ such that
$s^{\ast}=E\hbox{\lower5.8pt\hbox{\larm\char'027}}(\phi^{-1})^{\ast}$. Now if $f_{\fam4 \rsfs C}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(U)$,
we have: $(s^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\pi^{\ast})(f_{\fam4 \rsfs
C})=E\big((\phi^{-1})^{\ast}\pi^{\ast}f_{\fam4 \rsfs C}\big)$ and since
$\phi^{\ast}(f_{\fam4 \rsfs C}\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs
A})=\pi^{\ast}f_{\fam4 \rsfs C}$, we finally obtain $s^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}
\pi^{\ast}=id$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
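In classical terms the section constructed in Lemma
{\gradedsectionlem} is the familiar one: since $E=id\;{\mathchar"220A}\;\delta_{e}$
evaluates the group factor at the identity, the underlying smooth map
of $s$ is
$$x\mapsto\phi_{\ast}^{-1}(x,e),\;\forall x\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} U,$$
where $\phi_{\ast}$ denotes the underlying diffeomorphism of the local
trivialization.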
\vskip0.3cm
Conversely now, consider a graded section $s\colon\kern2pt(U,\hbox{$\script C$}|_{U})
\rightarrow(Y,\hbox{$\script B$})$. We wish to show that there exists a local
trivialization $\phi\colon\kern2pt(V,\hbox{$\script B$}|_{V})\rightarrow(U,\hbox{$\script C$}|_{U})
\;{\mathchar"2202}\;(G,\hbox{$\script A$})$ canonically associated to $s$,
$V=\pi_{\ast}^{-1}(U)\subset Y$. To this end, we first define
a morphism of graded algebras $\tilde{\phi}^{\ast}\colon\kern2pt\hbox{$\script B$}(V)
\rightarrow\hbox{$\script C$}(U)\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$}(G)$ by $\tilde{\phi}^{\ast}=(s^{\ast}
\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}$. Let $\tilde{\phi}_{\ast}=\Phi_{\ast}
\hbox{\lower5.8pt\hbox{\larm\char'027}}(s_{\ast}\;{\mathchar"220A}\; id)\colon\kern2pt\hbox{$\script C$}(U)^{\circ}\;{\mathchar"220A}\;\hbox{$\script A$}(G)^{\circ}
\rightarrow\hbox{$\script B$}(V)^{\circ}$ be the corresponding morphism of graded
coalgebras. Clearly, the differentiable mapping
$\tilde{\phi}_{\ast}|_{U\;{\mathchar"2202}\; G}\colon\kern2pt U\;{\mathchar"2202}\; G\rightarrow V$ is
bijective. Consider now the tangent of $\tilde{\phi}$ at an arbitrary point
$(x,g)\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} U\;{\mathchar"2202}\; G$. If $z=u\;{\mathchar"220A}\;\delta_{g}+\delta_{x}\;{\mathchar"220A}\; w
\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{(x,g)}(U\;{\mathchar"2202}\; G,\hbox{$\script C$}|_{U}\hbox{$\kern-1pt\hat{\;{\mathchar"220A}\;}_{\hskip-0.1cm\pi}\kern1pt$}\hbox{$\script A$})$ is a tangent vector at
$(x,g)$, $u\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{x}(U,\hbox{$\script C$}|_{U})$, $w\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} T_{g}(G,\hbox{$\script A$})$, then:
$$\tilde{\phi}_{\ast}(z)=\Phi_{g\ast}(s_{\ast}u)+
(\Phi_{s_{\ast}x})_{\ast}(w).$$
This implies that $T_{(x,g)}\tilde{\phi}$ is injective: applying
$\pi_{\ast}$ annihilates the second, vertical, summand and sends the
first one to $u$ (since $\pi\hbox{\lower5.8pt\hbox{\larm\char'027}} s=id$), so $\tilde{\phi}_{\ast}(z)=0$
forces $u=0$ and then $w=0$, because $\Phi_{y\ast}$ is injective for each
$y\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} Y$ (the action is free). Thus, $T_{(x,g)}\tilde{\phi}$
is an injection between two vector spaces of the same dimension and
hence an isomorphism. Using now Theorem 2.16 of {\kost}, we conclude
that $\tilde{\phi}$ is an isomorphism of graded manifolds.
Let now $\pi_{1}\colon\kern2pt(U,\hbox{$\script C$}|_{U})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(U,\hbox{$\script C$}|_{U})$ and
$\pi_{2}\colon\kern2pt(U,\hbox{$\script C$}|_{U})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$ be the
projections. Then:
\math{Proposition.}{\localtr}{\sl Let $\phi\colon\kern2pt(V,\hbox{$\script B$}|_{V})
\rightarrow(U,\hbox{$\script C$}|_{U})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$, $V=\pi_{\ast}^{-1}(U)$,
be the morphism of graded manifolds defined as $\phi^{\ast}=
m_{\fam4 \rsfs B}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\pi^{\ast}\;{\mathchar"220A}\;\psi^{\ast})$, where $\psi^{\ast}=
(\tilde{\phi}^{\ast})^{-1}\hbox{\lower5.8pt\hbox{\larm\char'027}}\pi_{2}^{\ast}$. Then, $\phi$ is a local
trivialization of the graded principal bundle $(Y,\hbox{$\script B$})$.}
\undertext{\it Proof}. We show first that $\phi^{\ast}$ is an
isomorphism. To this end, consider the composition
$\tilde{\phi}^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\phi^{\ast}$:
$$\eqalign{\tilde{\phi}^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\phi^{\ast}&=
(s^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}} m_{\fam4 \rsfs
B}\hbox{\lower5.8pt\hbox{\larm\char'027}}(\pi^{\ast}\;{\mathchar"220A}\;\psi^{\ast})\cr
\hfill&=m_{\fam4 \rsfs C\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}}[(s^{\ast}\;{\mathchar"220A}\;
id)\;{\mathchar"220A}\;(s^{\ast}\;{\mathchar"220A}\;
id)]\hbox{\lower5.8pt\hbox{\larm\char'027}}(\Phi^{\ast}\;{\mathchar"220A}\;\Phi^{\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}(\pi^{\ast}
\;{\mathchar"220A}\;\psi^{\ast})\cr
\hfill&=m_{\fam4 \rsfs C\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}}[(s^{\ast}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\pi^{\ast}\;{\mathchar"220A}\;(s^{\ast}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\psi^{\ast}]\cr
\hfill&=m_{\fam4 \rsfs C\fam4 \rsfs A}\hbox{\lower5.8pt\hbox{\larm\char'027}}[(s^{\ast}\;{\mathchar"220A}\;
id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\pi^{\ast}\;{\mathchar"220A}\;\pi^{\ast}_{2}].\cr}$$
Using now the fact that $(s^{\ast}\;{\mathchar"220A}\; id)\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}
\pi^{\ast}=\pi_{1}^{\ast}$, we find that $\tilde{\phi}^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}
\phi^{\ast}=id$, which proves that $\phi^{\ast}$ is also an isomorphism.
It remains to show relation $\comodmorpsi$, or equivalently,
$\psi_{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}}\Phi_{\ast}=m_{\fam4 \rsfs A}^{\circ}\hbox{\lower5.8pt\hbox{\larm\char'027}}
(\psi_{\ast}\;{\mathchar"220A}\; id)$. Since $\tilde{\phi}_{\ast}$ is an isomorphism
between $\hbox{$\script B$}(V)^{\circ}$ and $\hbox{$\script C$}(U)^{\circ}\;{\mathchar"220A}\;\hbox{$\script A$}(G)^{\circ}$, we
may write each element $b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(V)^{\circ}$ as $b=\Phi_{\ast}
(\sum_{i}s_{\ast}c^{i}\;{\mathchar"220A}\; a^{i}_{0})$, for $c^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(U)^{\circ}$ and
$a^{i}_{0}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$. If $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$, we compute:
$$\eqalign{\psi_{\ast}\Phi_{\ast}(b\;{\mathchar"220A}\; a)&=
\psi_{\ast}\Phi_{\ast}\big(\Phi_{\ast}(\sum_{i}s_{\ast}c^{i}\;{\mathchar"220A}\;
a^{i}_{0})\;{\mathchar"220A}\; a\big)=\psi_{\ast}\Phi_{\ast}
\big(\sum_{i}s_{\ast}c^{i}\;{\mathchar"220A}\;(a^{i}_{0}\;{\mathchar"220C}\; a)\big)\cr
\hfill&=\psi_{\ast}\Phi_{\ast}(s_{\ast}\;{\mathchar"220A}\; id)\big(\sum_{i}c^{i}
\;{\mathchar"220A}\;(a^{i}_{0}\;{\mathchar"220C}\; a)\big)=
\pi_{2\ast}\big(\sum_{i}c^{i}\;{\mathchar"220A}\;(a^{i}_{0}\;{\mathchar"220C}\; a)\big)\cr
\hfill&=\sum_{i}c^{i}(\hbox{\bbm 1}_{\fam4 \rsfs C})a^{i}_{0}\;{\mathchar"220C}\; a=
\psi_{\ast}b\;{\mathchar"220C}\; a,\cr}$$
since $\psi_{\ast}b=\pi_{2\ast}(\sum_{i}c^{i}\;{\mathchar"220A}\; a^{i}_{0})=
\sum_{i}c^{i}(\hbox{\bbm 1}_{\fam4 \rsfs C})a^{i}_{0}$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
By Lemma {\gradedsectionlem} and Proposition {\localtr}, setting
$U=X$, the classical criterion for the global triviality of a
principal bundle carries over verbatim to the graded setting.
\math{Corollary-Theorem.}{\porismatheorima}{\sl A graded principal
bundle $(Y,\hbox{$\script B$})$ is globally isomorphic to the product $(X,\hbox{$\script C$})
\;{\mathchar"2202}\;(G,\hbox{$\script A$})$ if and only if it admits a global section
$s\colon\kern2pt(X,\hbox{$\script C$})\rightarrow(Y,\hbox{$\script B$})$.}
\vskip0.3cm
We observe here that Lemma {\gradedsectionlem} and
Proposition {\localtr} remain valid if we replace the graded principal
bundle $(Y,\hbox{$\script B$})$ by a graded manifold $(Y,\hbox{$\script B$})$ on which the graded Lie
group $(G,\hbox{$\script A$})$ acts freely to the right in such a way that the
quotient $(X,\hbox{$\script C$})=(Y/G,\hbox{$\script B$}/\hbox{$\script A$})$ is a graded manifold, and the projection
$\pi\colon\kern2pt(Y,\hbox{$\script B$})\rightarrow(X,\hbox{$\script C$})$ is a submersion. In that case,
one can construct via Lemma {\gradedsectionlem} and Proposition {\localtr}
the local trivializations of Definition {\gpb}. In other words:
\math{Theorem.}{\orismoskaithewrima}{\sl A graded principal
bundle is a graded manifold $(Y,\hbox{$\script B$})$ together with a free right
action $\Phi\colon\kern2pt(Y,\hbox{$\script B$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(Y,\hbox{$\script B$})$ of a graded Lie
group $(G,\hbox{$\script A$})$ such that:
\item{1.} the quotient $(X,\hbox{$\script C$})=(Y/G,\hbox{$\script B$}/\hbox{$\script A$})$ is a graded manifold
\item{2.} the projection $\pi\colon\kern2pt(Y,\hbox{$\script B$})\rightarrow(X,\hbox{$\script C$})$ is a
submersion.
}
\vskip0.3cm
As an immediate application, we examine whether the principal bundles
formed by Lie groups and closed Lie subgroups possess graded
analogs.
\math{Example.}{\paradeigmadyo}{ Consider a graded Lie group $(G,\hbox{$\script A$})$
and a closed graded Lie subgroup $(H,\hbox{$\script D$})$ of $(G,\hbox{$\script A$})$. The natural
right action $\Phi\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(H,\hbox{$\script D$})\rightarrow(G,\hbox{$\script A$})$ is
given by $\Phi^{\ast}=(id\;{\mathchar"220A}\; i^{\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\hbox{$\Delta_{\script A}$}$, where $i\colon\kern2pt
(H,\hbox{$\script D$})\rightarrow(G,\hbox{$\script A$})$ is the inclusion. Furthermore, we know
that the quotient $(G/H,\hbox{$\script A$}/\hbox{$\script D$})$ is a graded manifold and the
projection $(G,\hbox{$\script A$})\rightarrow(G/H,\hbox{$\script A$}/\hbox{$\script D$})$ is a submersion, {\kost}.
Now if $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$, the morphism $\Phi_{g\ast}\colon\kern2pt\hbox{$\script D$}(H)^{\circ}\rightarrow
\hbox{$\script A$}(G)^{\circ}$ is given by $\Phi_{g\ast}(d)=\delta_{g}\;{\mathchar"220C}\; d=
L_{g\ast}(d)$, $\forall d\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script D$}(H)^{\circ}$. As a result,
$\Phi_{g\ast}$ is injective, so the action $\Phi$ is free and
Theorem {\orismoskaithewrima} holds: $(G,\hbox{$\script A$})$ is a graded
principal bundle over $(G/H,\hbox{$\script A$}/\hbox{$\script D$})$ with typical fibre $(H,\hbox{$\script D$})$.}
\chapter{Lie superalgebra-valued graded differential forms}
Let $(Y,\hbox{$\script B$})$ be a graded manifold and $\frak g$ a Lie superalgebra.
A $\frak g$-valued graded differential form on $(Y,\hbox{$\script B$})$ is an
element of $\Omega(Y,\hbox{$\script B$})\;{\mathchar"220A}\;\frak g$. It is clear that the set
$\hbox{$\Omega(Y,\B,{\frak g})$}=\Omega(Y,\hbox{$\script B$})\;{\mathchar"220A}\;\frak g$ of these forms constitutes a
$(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2})$-graded vector space; however, it is more
convenient to introduce a $(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2})$-grading as follows:
if $\alpha\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\Omega(Y,\B,{\frak g})$}$ and ${\rm deg\kern1pt}(\alpha)=(i_{\alpha},j_{\alpha},
k_{\alpha})$ is its $(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2})$-degree, then we
set $|\alpha|=(i_{\alpha},j_{\alpha}+k_{\alpha})\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2}$.
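For instance, a homogeneous form $\alpha$ with
${\rm deg\kern1pt}(\alpha)=(2,1,1)$ has total degree
$|\alpha|=(2,1+1)=(2,0)\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2}$,
and therefore behaves as an even object in all the sign rules
that follow.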
If $\{e_{i}\}$ is a basis of the Lie superalgebra $\frak g$ and
$\alpha,\beta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\Omega(Y,\B,{\frak g})$}$, then we may write $\alpha=\sum_{i}\alpha^{i}
\;{\mathchar"220A}\; e_{i}$ and $\beta=\sum_{i}\beta^{i}\;{\mathchar"220A}\; e_{i}$, for
$\alpha^{i}$, $\beta^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega(Y,\hbox{$\script B$})$. In the case where $\alpha$
and $\beta$ are homogeneous with ${\rm deg\kern1pt}(\alpha)=(i_{\alpha},j_{\alpha},
k_{\alpha})$, ${\rm deg\kern1pt}(\beta)=(i_{\beta},j_{\beta},k_{\beta})$, we define
the $\frak g$-valued graded differential form $[\alpha,\beta]$ of
degree ${\rm deg\kern1pt}([\alpha,\beta])=(i_{\alpha}+i_{\beta},j_{\alpha}+
j_{\beta},k_{\alpha}+k_{\beta})$ as:
$$[\alpha,\beta]=\sum_{i,j}(-1)^{j_{\beta}k_{\alpha}}\alpha^{i}\beta^{j}
\;{\mathchar"220A}\;[e_{i}^{\alpha},e_{j}^{\beta}],\eqno\eqname\formediff$$
if $\alpha=\sum_{i}\alpha^{i}\;{\mathchar"220A}\; e_{i}^{\alpha}$ with
$|\alpha^{i}|=(i_{\alpha},j_{\alpha})$ and
$|e_{i}^{\alpha}|=k_{\alpha}$; similarly for $\beta$.
We extend this definition to non-homogeneous elements by linearity.
Clearly, equation $\formediff$ gives the same result for every basis of
the Lie superalgebra $\frak g$. Thus, we have a bilinear map
$[-,-]\colon\kern2pt\Omega^{i}(Y,\hbox{$\script B$})_{j}
\;{\mathchar"220A}\;\frak g_{k}\;{\mathchar"2202}\;\Omega^{i\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}(Y,\hbox{$\script B$})_{j\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}\;{\mathchar"220A}\;
\frak g_{k\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}\rightarrow\Omega^{i+i\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}(Y,\hbox{$\script B$})_{j+j\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}
\;{\mathchar"220A}\;\frak g_{k+k\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}$ with the following properties:
\math{Proposition.}{\propert}{\sl If $\alpha,\beta,\gamma\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\Omega(Y,\B,{\frak g})$}$ are
homogeneous, then we have:
\vskip0.1cm
\item{1.} $[\alpha,\beta]=-(-1)^{|\alpha||\beta|}[\beta,\alpha]$
\item{2.}
$\frak S(-1)^{|\alpha||\gamma|}\big[[\alpha,\beta],\gamma\big]=0$.
In the previous relations, we have set $|\alpha||\beta|=i_{\alpha}
i_{\beta}+(j_{\alpha}+k_{\alpha})(j_{\beta}+k_{\beta})$ and $\frak S$
means the cyclic sum on the argument which follows.}
\undertext{\it Proof}. Routine calculations using relation
$\formediff$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
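As a simple illustration of relation $\formediff$, consider
decomposable forms $\alpha=\alpha^{1}\;{\mathchar"220A}\; e_{1}$ and
$\beta=\beta^{1}\;{\mathchar"220A}\; e_{2}$ with $|\alpha^{1}|=(i_{\alpha},j_{\alpha})$,
$|e_{1}|=k_{\alpha}$, $|\beta^{1}|=(i_{\beta},j_{\beta})$ and
$|e_{2}|=k_{\beta}$. The double sum then reduces to the single term
$$[\alpha,\beta]=(-1)^{j_{\beta}k_{\alpha}}\alpha^{1}\beta^{1}
\;{\mathchar"220A}\;[e_{1},e_{2}],$$
whose degree is indeed $(i_{\alpha}+i_{\beta},j_{\alpha}+j_{\beta},
k_{\alpha}+k_{\beta})$.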
We realize that the space $\hbox{$\Omega(Y,\B,{\frak g})$}$ possesses the structure of a
$(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2})$-graded Lie algebra, inherited from the Lie
superalgebra structure of $\frak g$.
The action of elements of $\hbox{$\Omega(Y,\B,{\frak g})$}$ on derivations can now be seen as
follows: if $\alpha=\sum_{i}\alpha^{i}\;{\mathchar"220A}\; e_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\Omega(Y,\B,{\frak g})$}$ is an
$r$-form and $\xi_{1},\ldots,\xi_{r}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$, we set $(\xi_{1},
\ldots,\xi_{r}|\alpha)=\sum_{i}(\xi_{1},\ldots,\xi_{r}|\alpha^{i})
\;{\mathchar"220A}\; e_{i}$. Accordingly, one can extend the exterior differential
$d$ to a differential on $\frak g$-valued graded differential forms,
also noted by $d$ in the following manner: if $\alpha=\sum_{i}
\alpha^{i}\;{\mathchar"220A}\; e_{i}$, then we set $d\alpha=\sum_{i}d\alpha^{i}
\;{\mathchar"220A}\; e_{i}$. The exterior differential $d\colon\kern2pt\hbox{$\Omega(Y,\B,{\frak g})$}\rightarrow\hbox{$\Omega(Y,\B,{\frak g})$}$
defined previously is a derivation of degree $|d|=(1,0)$. By
straightforward verification, we find that if $\alpha$ and $\beta$ are
$\frak g$-valued graded differential forms and $\alpha$ is
homogeneous, then $d[\alpha,\beta]=[d\alpha,\beta]+(-1)^{|\alpha||d|}
[\alpha,d\beta]$.
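Since $|d|=(1,0)$, the convention $|\alpha||\beta|=i_{\alpha}i_{\beta}
+(j_{\alpha}+k_{\alpha})(j_{\beta}+k_{\beta})$ of Proposition {\propert}
gives $|\alpha||d|=i_{\alpha}$, so for an $r$-form $\alpha$
($i_{\alpha}=r$) the previous Leibniz rule reads explicitly:
$$d[\alpha,\beta]=[d\alpha,\beta]+(-1)^{i_{\alpha}}[\alpha,d\beta].$$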
In a similar way, one can extend the pull-back of graded differential forms
under a morphism of graded manifolds $\sigma\colon\kern2pt(Y,\hbox{$\script B$})\rightarrow
(Z,{\fam4 \rsfs Y})$ to a linear map $\sigma^{\ast}\colon\kern2pt\Omega(Z,{\fam4 \rsfs
Y},\frak g)\rightarrow\hbox{$\Omega(Y,\B,{\frak g})$}$ which commutes with the exterior
differential, that is, $d\hbox{\lower5.8pt\hbox{\larm\char'027}}\sigma^{\ast}=\sigma^{\ast}\hbox{\lower5.8pt\hbox{\larm\char'027}} d$,
and preserves the bracket $[-,-]$:
$\sigma^{\ast}[\alpha,\beta]=
[\sigma^{\ast}\alpha,\sigma^{\ast}\beta]$. We have analogous
generalizations for the Lie derivative. The following properties
of the bracket and the Lie derivative on $\frak g$-valued graded
differential forms will be useful; the proof proceeds by a
straightforward calculation with graded differential forms and Lie
superalgebra elements, and is left as an exercise for the reader.
\math{Proposition.}{\idiothtalie}{\sl If $\alpha,\beta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\Omega(Y,\B,{\frak g})$}$ and
$\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$ are homogeneous, then
\item{1.} $\hbox{$\bit L$}_{\xi}[\alpha,\beta]=[\hbox{$\bit L$}_{\xi}\alpha,\beta]+(-1)^{|{\bit
L}_{\xi}||\alpha|}[\alpha,\hbox{$\bit L$}_{\xi}\beta]$
\item{2.} $\big(id\;{\mathchar"220A}\; ad(v)\big)[\alpha,\beta]=\big[(id\;{\mathchar"220A}\;
ad(v))\alpha,\beta\big]+(-1)^{|v||\alpha|}
\big[\alpha,(id\;{\mathchar"220A}\; ad(v))\beta\big]$, $\forall v\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$.}
\vskip0.3cm
Finally, one can restate Proposition {\pullback} for the case of
$\frak g$-valued graded differential forms. For example, if $\sigma$
is an isomorphism of graded manifolds, relation $\pullba$ becomes:
$$(\xi_{1},\ldots,\xi_{r}|\sigma^{\ast}\alpha)
=(\sigma^{\ast}\;{\mathchar"220A}\; id)(\sigma_{\ast}\xi_{1},\ldots,\sigma_{\ast}
\xi_{r}|\alpha).\eqno\eqname\neostypos$$
One can also define multilinear forms on the tangent spaces of a
graded manifold taking their values in the Lie superalgebra $\frak g$,
using a simple modification of $\multiliforms$.
\chapter{Graded connections}
We have seen that on each graded principal bundle there always
exists a natural distribution induced by the action of the structure
group which is equal to the sheaf of vertical derivations. The choice of a
connection is essentially the choice of a complementary distribution.
More precisely:
\math{Definition.}{\connexiong}{\sl Let $(Y,\hbox{$\script B$})$ be a graded principal
bundle with structure group $(G,\hbox{$\script A$})$, over the graded manifold
$(X,\hbox{$\script C$})$. A graded connection on $(Y,\hbox{$\script B$})$ is a regular distribution
$\hbox{$\script H$}\subset\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$ of dimension ${\rm dim\kern1pt}\hbox{$\script H$}={\rm dim\kern1pt}(X,\hbox{$\script C$})$ such that:
\item{1.} $\hbox{$\script H$}\;{\mathchar"2208}\;\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})=\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$,
$\pi\colon\kern2pt(Y,\hbox{$\script B$})\rightarrow(X,\hbox{$\script C$})$ is the projection
\item{2.} $\hbox{$\script H$}$ is $(G,\hbox{$\script A$})$-invariant.
}
\vskip0.3cm
Let us explain the second statement in the previous definition:
$\hbox{$\script H$}$ will be called $(G,\hbox{$\script A$})$-invariant if, for each open
$U\subset X$ and $D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(\pi_{\ast}^{-1}(U))$, the derivations
$\Phi_{g}^{\ast}D$ and $[(\Phi^{\ast})_{a},D]=\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}D$
belong also to $\hbox{$\script H$}(\pi_{\ast}^{-1}(U))$, $\forall g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$,
$\forall a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$. In order to put these conditions in a more
compact form, we introduce the following notation:
$$(\Phi^{\ast})_{a}D=\left\{\matrix{\Phi^{\ast}_{g}D,\quad\hbox{if}\quad
a=\delta_{g}, g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G\cr
\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}D,\quad\hbox{if}\quad a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g.\hfill\cr}\right.$$
Then, $\hbox{$\script H$}$ will be $(G,\hbox{$\script A$})$-invariant if
$(\Phi^{\ast})_{a}D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(\pi_{\ast}^{-1}(U))$, for each element $a$
group-like or primitive with respect to $\delta_{e}$. We can now
reformulate this notion in terms of $\frak g$-valued graded
differential forms.
Given the graded connection $\hbox{$\script H$}$ on $(Y,\hbox{$\script B$})$, we have:
$\hbox{$\script H$}(Y)\;{\mathchar"2208}\;\hbox{$\script E$}(Y)=\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$, $\hbox{$\script E$}=\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})$
(see Theorem {\ekerpi}), and each derivation $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$
decomposes as $\xi=\xi^{H}+\sum_{i}f^{i}(\Phi^{\ast})_{e_{i}}$,
where $\xi^{H}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(Y)$, $\{e_{i}\}$ is a basis of $\frak g$
and $f^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)$. Then, we define
a 1-form $\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$},\frak g)$ setting
$(\xi|\biomega)=\sum_{i}f^{i}\;{\mathchar"220A}\; e_{i}$. Let us now calculate
$(\Phi^{\ast})_{a}\biomega$, where by definition
$$(\Phi^{\ast})_{a}\biomega=\left\{\matrix{\Phi^{\ast}_{g}\biomega,
\quad\hbox{if}\quad a=\delta_{g}, g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G\cr
\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega,\quad\hbox{if}\quad a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g.\hfill\cr}
\right.$$
Consider first the case $a=\delta_{g}$. Then, by equation $\neostypos$
we obtain
$$(\xi|\Phi_{g}^{\ast}\biomega)=
(\Phi_{g}^{\ast}\;{\mathchar"220A}\; id)(\Phi_{g\ast}\big(\sum_{i}f^{i}
(\Phi^{\ast})_{e_{i}}\big)|\biomega)$$
and using the fact that $\Phi_{g\ast}(\Phi^{\ast})_{e_{i}}=
(\Phi^{\ast})_{AD_{g^{-1}\ast}(e_{i})}$, we obtain:
$$(\xi|\Phi_{g}^{\ast}\biomega)=(\Phi_{g}^{\ast}\;{\mathchar"220A}\; id)
\big(\sum_{i}\Phi^{\ast}_{g^{-1}}f^{i}\;{\mathchar"220A}\;
AD_{g^{-1}\ast}(e_{i})\big)=(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})(\xi|\biomega),
$$
or
$$\Phi_{g}^{\ast}\biomega=(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega.
\eqno\eqname\graconnexion$$
If now $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, we have:
$$(\xi|\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega)=(-1)^{|a||\xi|}
\Big[\big((\Phi^{\ast})_{a}\;{\mathchar"220A}\; id\big)(\xi|\biomega)-
\big([(\Phi^{\ast})_{a},\xi]|\biomega\big)\Big]\eqno\eqname\xrhsimo$$
and it is sufficient to examine two cases:
\item{1.} $\xi=\xi^{H}$ (horizontal derivation):
it is then immediate that $(\xi|\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega)=0$
\item{2.} $\xi=(\Phi^{\ast})_{b}, b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$
(vertical derivation): $$(\xi|\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega)=
(-1)^{|a||b|+1}\big((\Phi^{\ast})_{[a,b]}|\biomega\big)=
-\big((\Phi^{\ast})_{b}|(id\;{\mathchar"220A}\; AD_{\ast
a})\biomega\big)$$
where we have defined $\big(\xi|(id\;{\mathchar"220A}\; AD_{\ast a})
\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega\big)=(-1)^{|a||\xi|}
(id\;{\mathchar"220A}\; AD_{\ast a})(\xi|\biomega)$.
We may thus write:
$$\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega
=-(id\;{\mathchar"220A}\; AD_{\ast a})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega.\eqno\eqname\liead$$
Summarizing the previous results on the $\frak g$-valued graded
differential form $\biomega$, we have:
\item{1.}
$(\sum_{i}f^{i}(\Phi^{\ast})_{e_{i}}|\biomega)=
\sum_{i}f^{i}\;{\mathchar"220A}\; e_{i}$ and $|\biomega|=(1,0)$
\item{2.} $(\Phi^{\ast})_{a}\biomega=
(id\;{\mathchar"220A}\; AD_{\ast s^{\circ}_{\fam4 \rsfs
A}(a)})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega$,
for each $a$ group-like or primitive with respect to $\delta_{e}$.
Conversely now, if $\biomega$ is a $\frak g$-valued graded
differential form with the previous two properties, then
${\rm ker\kern1pt}\biomega=\hbox{$\script H$}$ is a regular distribution on $(Y,\hbox{$\script B$})$ such that
$\hbox{$\script H$}\;{\mathchar"2208}\;\hbox{$\script V$\kern-3.5pt\callig er\kern2pt}(\pi_{\ast},\hbox{$\script B$})=\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}$. Consider $D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(V)$,
$V=\pi_{\ast}^{-1}(U)$; if $a=\delta_{g}$ and $\biomega_{V}$
means the pull-back of $\biomega$ under the inclusion
$(V,\hbox{$\script B$}|_{V})\hookrightarrow(Y,\hbox{$\script B$})$, we obtain:
$(D|\Phi_{g}^{\ast}\biomega_{V})=(\Phi_{g}^{\ast}\;{\mathchar"220A}\;
id)(\Phi_{g\ast}D|\biomega_{V})=
(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})(D|\biomega_{V})=0$;
thus, $\Phi_{g\ast}D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(V)$. Replacing $g$ by $g^{-1}$, this
gives $\Phi_{g}^{\ast}D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(V)$. On the other hand, if $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$,
we find: $(D|\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega_{V})=-(-1)^{|a||D|}
([(\Phi^{\ast})_{a},D]|\biomega_{V})=0$. This means that
$\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}D$ belongs also to $\hbox{$\script H$}(V)$. In summary:
$(\Phi^{\ast})_{a}D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(V)$. We have thus proved the proposition.
\math{Proposition.}{\connexiongequiv}{\sl The graded connection
$\hbox{$\script H$}$ of Definition {\connexiong} is described equivalently by a
$\frak g$-valued graded differential 1-form $\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}
\Omega^{1}(Y,\hbox{$\script B$},\frak g)$ of total $\hbox{\bf\char'132}_{2}$ degree zero such that:
\item{1.}
$(\sum_{i}f^{i}(\Phi^{\ast})_{e_{i}}|\biomega)=
\sum_{i}f^{i}\;{\mathchar"220A}\; e_{i}$,
\item{2.} $(\Phi^{\ast})_{a}\biomega=
(id\;{\mathchar"220A}\; AD_{\ast s^{\circ}_{\fam4 \rsfs A}(a)})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega,$
for each element $a$ group-like or primitive with respect to
$\delta_{e}$.
We call $\biomega$ graded connection form.}
\vskip0.3cm
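For orientation, note that conditions 1 and 2 above are the graded
analogues of the classical requirements on a principal connection
form $\omega$, namely
$$\omega(\sigma_{a})=a,\qquad R_{g}^{\ast}\omega=
{\rm Ad\kern1pt}(g^{-1})\hbox{\lower5.8pt\hbox{\larm\char'027}}\omega,$$
where $\sigma_{a}$ denotes the fundamental vector field of
$a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$ and $R_{g}$ the right translation by $g$.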
Graded principal bundles are by
definition locally isomorphic to products of open graded submanifolds
of the base space by the structure graded Lie group. So, it is quite
natural to ask how one can construct graded connections on the trivial
graded principal bundle of Example {\paradeigmaena}.
\math{Example.}{\paradeigmatria}{Let $(Y,\hbox{$\script B$})=(X,\hbox{$\script C$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$ be
as in Example {\paradeigmaena}. The right action $\Phi$ of $(G,\hbox{$\script A$})$
on $(Y,\hbox{$\script B$})$ is such that $\Phi^{\ast}=id\;{\mathchar"220A}\;\hbox{$\Delta_{\script A}$}$. Consider now a
$\frak g$-valued 1-form $\beta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(X,\hbox{$\script C$},\frak g)$ and a
basis $\{e_{k}\}$ of $\frak g$. One can write $\beta=\sum_{k}
\beta^{k}\;{\mathchar"220A}\; e_{k}$, $\beta^{k}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(X,\hbox{$\script C$})$ with
$|\beta|=(1,0)$. If $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script C$}(X)$ and $\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(G)$, we
define a $\frak g$-valued 1-form $\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$},\frak g)$
as:
$$(\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}+\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;\eta|\biomega)=
\sum_{k}(\xi|\beta^{k})\;{\mathchar"220A}\;((L^{\ast})_{e_{k}}|\theta)+\hbox{\bbm 1}_{\fam4 \rsfs
C}\;{\mathchar"220A}\;(\eta|\theta).\eqno\eqname\syndesh$$
In the previous relation $L\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})\rightarrow(G,\hbox{$\script A$})$ is
the left action of $(G,\hbox{$\script A$})$ on itself, $L^{\ast}=\hbox{$\Delta_{\script A}$}$ and
$\theta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(G,\hbox{$\script A$},\frak g)$ the graded Maurer-Cartan form on
$(G,\hbox{$\script A$})$ defined as $((R^{\ast})_{a}|\theta)=\hbox{\bbm 1}_{\fam4 \rsfs A}\;{\mathchar"220A}\;
a$, $\forall a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, where $R\colon\kern2pt(G,\hbox{$\script A$})\;{\mathchar"2202}\;(G,\hbox{$\script A$})
\rightarrow(G,\hbox{$\script A$})$ is the right action of $(G,\hbox{$\script A$})$ on itself,
$R^{\ast}=\hbox{$\Delta_{\script A}$}$. Recall that the derivations $(L^{\ast})_{e_{k}}$ and
$(R^{\ast})_{a}$ are given by Theorem {\thewrima}. We now verify
that the form $\biomega$ in $\syndesh$ has the properties of a
graded connection form.
(1) If $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, then $(\Phi^{\ast})_{a}=id
\;{\mathchar"220A}\;(R^{\ast})_{a}$ and $((\Phi^{\ast})_{a}|\biomega)=
\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;((R^{\ast})_{a}|\theta)=\hbox{\bbm 1}_{\fam4 \rsfs B}\;{\mathchar"220A}\;
a$. Further, $|\biomega|=(1,0)$ because $|\beta|=(1,0)$ and
$|\theta|=(1,0)$.
(2) Consider now an element $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$; we shall calculate
$\Phi^{\ast}_{g}\biomega$. To this end, we use formula $\neostypos$,
as well as the fact that each graded Lie group is parallelizable
(Theorem {\paralleltheorem}), so it is sufficient to take
$\eta=(R^{\ast})_{a}$, $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$. Thus, if we set $D=\xi\;{\mathchar"220A}\;
\hbox{\bbm 1}_{\fam4 \rsfs A}+\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;(R^{\ast})_{a}$ and
$a\hbox{\kern-1.3pt\lower0.7pt\hbox{$\mathchar"6013$}}=AD_{g^{-1}\ast}(a)$, we find
$$\eqalign{(D|\Phi^{\ast}_{g}\biomega)&=
(\Phi^{\ast}_{g}\;{\mathchar"220A}\; id)(\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}+\hbox{\bbm 1}_{\fam4 \rsfs
C}\;{\mathchar"220A}\;(R^{\ast})_{a\hbox{\kern-1.3pt\lower0.9pt\hbox{\tenprm\char'23}}}|\biomega)\cr
\hfill&=\sum_{k}(\xi|\beta^{k})\;{\mathchar"220A}\;(R^{\ast}_{g}\;{\mathchar"220A}\;
id)((L^{\ast})_{e_{k}}|\theta)+\hbox{\bbm 1}_{\fam4 \rsfs B}\;{\mathchar"220A}\;
AD_{g^{-1}\ast}(a)\cr
\hfill&=\sum_{k}(\xi|\beta^{k})\;{\mathchar"220A}\;(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})
((L^{\ast})_{e_{k}}|\theta)+\hbox{\bbm 1}_{\fam4 \rsfs B}\;{\mathchar"220A}\; AD_{g^{-1}\ast}(a)\cr
\hfill&=(\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}+\hbox{\bbm 1}_{\fam4 \rsfs
C}\;{\mathchar"220A}\;(R^{\ast})_{a}|(id\;{\mathchar"220A}\;
AD_{g^{-1}\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega).\cr}$$
Note that in the previous calculation we used that
$R^{\ast}_{g}\theta=(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})\hbox{\lower5.8pt\hbox{\larm\char'027}}\theta$ and
$R_{g\ast}(L^{\ast})_{e_{k}}=(L^{\ast})_{e_{k}}$, which can be
verified straightforwardly.
(3) Let finally $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$; we calculate the Lie derivative
$\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega=(\Phi^{\ast})_{a}\biomega$. One can
use formula $\xrhsimo$, which is valid for $\biomega$ because
$((\Phi^{\ast})_{a}|\biomega)=\hbox{\bbm 1}_{\fam4 \rsfs B}\;{\mathchar"220A}\; a$. If we set now
$D=\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}+\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;(R^{\ast})_{b}$
and $x=|a||\beta^{k}|$, we have:
$$\eqalign{(D|(\Phi^{\ast})_{a}\biomega)&=(-1)^{|a||\xi|}
((\Phi^{\ast})_{a}\;{\mathchar"220A}\; id)(\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs
A}|\biomega)-(-1)^{|a||b|}(\hbox{\bbm 1}_{\fam4 \rsfs
C}\;{\mathchar"220A}\;(R^{\ast})_{[a,b]}|\biomega)\cr
\hfill&=(-1)^{x}(\xi|\beta^{k})\;{\mathchar"220A}\; ((R^{\ast})_{a}\;{\mathchar"220A}\; id)
((L^{\ast})_{e_{k}}|\theta)-(-1)^{|a||b|}\hbox{\bbm 1}_{\fam4 \rsfs B}\;{\mathchar"220A}\;[a,b]\cr
\hfill&=-(-1)^{x}(\xi|\beta^{k})\;{\mathchar"220A}\;(id\;{\mathchar"220A}\; AD_{\ast a})
((L^{\ast})_{e_{k}}|\theta)-(-1)^{|a||b|}\hbox{\bbm 1}_{\fam4 \rsfs B}\;{\mathchar"220A}\;[a,b]\cr
\hfill&=-(\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}|(id\;{\mathchar"220A}\; AD_{\ast
a})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega)-(\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;(R^{\ast})_{b}|(id\;{\mathchar"220A}\;
AD_{\ast a})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega)\cr}$$
which implies that $(\Phi^{\ast})_{a}\biomega=-(id\;{\mathchar"220A}\; AD_{\ast
a})\hbox{\lower5.8pt\hbox{\larm\char'027}}\biomega$. In the previous calculation, summation over the
repeated index $k$ is understood. Note also that we used the property
$(R^{\ast})_{a}\theta=-(id\;{\mathchar"220A}\; AD_{\ast a})\hbox{\lower5.8pt\hbox{\larm\char'027}}\theta$ of the
graded Maurer-Cartan form. }
\vskip0.2cm
\math{Remark.}{\tmisi}{\it Let $s\colon\kern2pt(X,\hbox{$\script C$})\rightarrow(Y,\hbox{$\script B$})$ be a
graded section for the previous example. Then we have a morphism
$s^{\ast}\colon\kern2pt\hbox{$\script C$}(X)\;{\mathchar"220A}\;\hbox{$\script A$}(G)\rightarrow\hbox{$\script C$}(X)$ of graded commutative algebras and therefore a morphism $\sigma^{\ast}\colon\kern2pt
\hbox{$\script A$}(G)\rightarrow\hbox{$\script C$}(X)$ given by $\sigma^{\ast}(f_{\fam4 \rsfs
A})=s^{\ast}(\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\; f_{\fam4 \rsfs A})$. This defines
a morphism of graded manifolds $\sigma\colon\kern2pt(X,\hbox{$\script C$})\rightarrow(G,\hbox{$\script A$})$.
Let now $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script C$}(X)$ and $\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script A$}(G)$ be two
$\sigma$-related derivations; then the derivations
$\xi\hbox{\kern-1.3pt\lower0.7pt\hbox{$\mathchar"6013$}}=\xi\;{\mathchar"220A}\;\hbox{\bbm 1}_{\fam4 \rsfs A}+\hbox{\bbm 1}_{\fam4 \rsfs C}\;{\mathchar"220A}\;\eta$
and $\xi$ are $s$-related and one can use Proposition {\pullback}
in order to calculate the pull-back $s^{\ast}\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}
(X,\hbox{$\script C$},\frak g)$. The result is:
\vskip0.2cm
\centerline{$s^{\ast}\biomega=\sum_{k}\beta^{k}(\sigma^{\ast}\;{\mathchar"220A}\;
id)((L^{\ast})_{e_{k}}|\theta)+\sigma^{\ast}\theta,$}
\vskip0.1cm
\noindent as one easily finds.}
\vskip0.2cm
One can use Example {\paradeigmatria} in order to construct graded
connections on general graded principal bundles. Indeed, let
$(Y,\hbox{$\script B$})$ be such a bundle with base space $(X,\hbox{$\script C$})$ and structure group
$(G,\hbox{$\script A$})$, $\{U_{i}\}_{i\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Lambda}$ a locally finite open
covering of $X$ and $\{f_{i}\}$ a graded partition of unity
subordinate to $\{U_{i}\}$: $f_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script C$}(X)_{0}$, ${\rm supp}f_{i}
\subset U_{i}$ and $\sum_{i}f_{i}=\hbox{\bbm 1}_{\fam4 \rsfs C}$, {\kost}. Let also
$V_{i}=\pi_{\ast}^{-1}(U_{i})$ and $\biomega_{i}$ be the graded
connection 1-forms that one can construct on
$(V_{i},\hbox{$\script B$}|_{V_{i}})\cong(U_{i},\hbox{$\script C$}|_{U_{i}})\;{\mathchar"2202}\;(G,\hbox{$\script A$})$ as in the
previous example (see also Definition {\gpb}). If now $D\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$,
then we define $(D|\biomega_{i})\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(V_{i})\;{\mathchar"220A}\;\frak g$ as
$(D|\biomega_{i})=(D_{i}|\biomega_{i})$, $D_{i}=D|_{V_{i}}$. Then we
have also $(D|\biomega_{i})\cdot(\pi^{\ast}f_{i}|_{V_{i}})\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(V_{i})
\;{\mathchar"220A}\;\frak g$ and so, there exists an element of $\hbox{$\script B$}(Y)\;{\mathchar"220A}\;\frak
g$, which we denote by $(D|\biomega_{i})\pi^{\ast}f_{i}$, such that
$[(D|\biomega_{i})\pi^{\ast}f_{i}]|_{V_{i}}=(D|\biomega_{i})
(\pi^{\ast}f_{i}|_{V_{i}})$ and whose support is a subset of $V_{i}$.
If now $\{\tilde{U}_{k}\}$ is another open covering of $X$, then
setting $W_{k}=\pi_{\ast}^{-1}(\tilde{U}_{k})$ we have an open
covering of $Y$ and for $k$ fixed, $W_{k}\cap V_{i}$
is non-empty only for finitely many of the $V_{i}$. Taking the
restrictions $[(D|\biomega_{i})\pi^{\ast}f_{i}]|_{W_{k}}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(W_{k})
\;{\mathchar"220A}\;\frak g$, only finitely many terms will be non-zero as $i$ runs
over $\Lambda$. Thus, the sum $\sum_{i}[(D|\biomega_{i})
\pi^{\ast}f_{i}]|_{W_{k}}$ is finite and well-defined as an element of
$\hbox{$\script B$}(W_{k})\;{\mathchar"220A}\;\frak g$. Furthermore, the restrictions of such
elements to the intersections $W_{k}\cap W_{\ell}$ coincide and this
means, by the sheaf properties of $\hbox{$\script B$}$, that there exists a unique
element of $\hbox{$\script B$}(Y)\;{\mathchar"220A}\;\frak g$, whose restriction to $W_{k}$ gives
the previous element of $\hbox{$\script B$}(W_{k})\;{\mathchar"220A}\;\frak g$. We denote this
unique element by $(D|\biomega)$; by its linearity in $D$ it
determines a $\frak g$-valued graded 1-form $\biomega$. The form
$\biomega$ is a graded connection form; this follows without
difficulty from the properties $\Phi_{g}^{\ast}\pi^{\ast}f_{i}=
\pi^{\ast}f_{i}$ and $(\Phi^{\ast})_{a}\pi^{\ast}f_{i}=0$, $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$,
$a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$, as well as from the fact that for each $i\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Lambda$,
$\biomega_{i}$ is a graded connection. We have thus proved the following:
\math{Existence Theorem for graded connections.}{\yparxh}{\sl On each
graded principal bundle there exist infinitely many graded
connections.}
\vskip0.2cm
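In compact form, the gluing construction above reads (with the locally finite sum understood in the sense explained before; the notation $\mathfrak{Der}\,\mathcal{B}(Y)$ stands for the derivations of the structure sheaf, written with the document's own fraktur macros elsewhere):

```latex
(D|\biomega)=\sum_{i\in\Lambda}(D|\biomega_{i})\,\pi^{\ast}f_{i},
\qquad D\in\mathfrak{Der}\,\mathcal{B}(Y),
```

so that $\biomega$ is glued from the local connection forms $\biomega_{i}$ by the graded partition of unity.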
\chapter{Graded curvature}
In ordinary differential geometry, one can define in a canonical way,
for each connection, a Lie algebra-valued 2-form, the curvature
form. In this section, we will define the curvature in the graded
setting, using the notion of graded connection, previously developed.
Let then $\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$},\frak g)$ be a graded connection form on
the graded principal bundle $(Y,\hbox{$\script B$})$. Fix a basis
$\{e_{k}\}$ of the Lie superalgebra $\frak g$; then for each
derivation $\xi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$ one can find $\xi^{H}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(Y)$ and
$f^{k}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script B$}(Y)$ such that $\xi=\xi^{H}+\sum_{k}f^{k}
(\Phi^{\ast})_{e_{k}}$. We have thus a canonical projection
$\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)\rightarrow\hbox{$\script H$}(Y)$, $\xi\mapsto\xi^{H}$;
we call $\xi^{H}$ the horizontal part of $\xi$. Clearly, this
mechanism of taking the horizontal part of a derivation can be
applied in the same way with $Y$ replaced by an open subset $V\subset Y$.
Consider now a $\frak g$-valued graded differential form
$\phi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r}(Y,\hbox{$\script B$},\frak g)$ and let $\phi^{H}$ be
defined as $(\xi_{1},\ldots,\xi_{r}|\phi^{H})=
(\xi_{1}^{H},\ldots,\xi_{r}^{H}|\phi)$,
$\xi_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$, $i=1,\ldots,r$.
\math{Definition.}{\covariante}{\sl The covariant exterior derivative
of $\phi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r}(Y,\hbox{$\script B$},\frak g)$ is the $\frak g$-valued graded
differential form $D^{\biomega}\phi\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{r+1}(Y,\hbox{$\script B$},\frak g)$
defined as $D^{\biomega}\phi=(d\phi)^{H}$.
The curvature of the graded connection $\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}
\Omega^{1}(Y,\hbox{$\script B$},\frak g)$ is the covariant exterior
derivative $D^{\biomega}\biomega$; we use the notation
$F^{\biomega}=D^{\biomega}\biomega$.}
\vskip0.3cm
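Unwinding the two definitions (the horizontal part of a form and $D^{\biomega}\phi=(d\phi)^{H}$), the covariant exterior derivative is characterized on derivations by

```latex
(\xi_{1},\ldots,\xi_{r+1}\,|\,D^{\biomega}\phi)
  =(\xi_{1}^{H},\ldots,\xi_{r+1}^{H}\,|\,d\phi),
\qquad
\xi_{1},\ldots,\xi_{r+1}\in\mathfrak{Der}\,\mathcal{B}(Y);
```

in particular, $D^{\biomega}\phi$ vanishes whenever one of its arguments is a vertical derivation.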
We will next study in more detail the properties of $F^{\biomega}$.
As a general observation, we may say that $F^{\biomega}$ has,
formally, properties analogous to those of the ordinary curvature.
\math{Theorem.}{\structureth}{\sl The graded curvature
$F^{\biomega}$ is given by the relation
$$F^{\biomega}=d\biomega+{{1}\over{2}}[\biomega,\biomega].
\eqno\eqname\structureequ$$
This is the graded structure equation.}
\undertext{\it Proof}. It is sufficient to prove that for
each $\xi_{1}$, $\xi_{2}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\yfra D\yfra e\yfra r$}\hbox{$\script B$}(Y)$, the following is true:
$$(\xi_{1}^{H},\xi_{2}^{H}|d\biomega)=(\xi_{1},\xi_{2}|d\biomega)+
{{1}\over{2}}(\xi_{1},\xi_{2}|[\biomega,\biomega]).\eqno\eqname\structureequdyo$$
As we have seen $|\biomega|=(1,0)$; however, $\biomega$ is not a
homogeneous element with respect to the $(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2}\;{\mathchar"2208}\;
\hbox{\bf\char'132}_{2})$-grading of $\hbox{$\Omega(Y,\B,{\frak g})$}$. More precisely, $\biomega=\biomega_{0}+
\biomega_{1}$, where ${\rm deg\kern1pt}(\biomega_{0})=(1,0,0)$,
${\rm deg\kern1pt}(\biomega_{1})=(1,1,1)$ and if $\{e_{i},e_{j}\}$ is a
basis of $\frak g$ with $|e_{i}|=0$, $|e_{j}|=1$, we may write:
$\biomega=\sum_{i}\biomega^{i}\;{\mathchar"220A}\; e_{i}+\sum_{j}\biomega^{j}
\;{\mathchar"220A}\; e_{j}$, with $\biomega^{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$})_{0}$, $\biomega^{j}
\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$})_{1}$. In particular, $\biomega^{i}$, $\biomega^{j}$
vanish on horizontal derivations and if we decompose $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$ as
$a=\sum_{i}a^{i}e_{i}+\sum_{j}a^{j}e_{j}$, we find immediately
$((\Phi^{\ast})_{a}|\biomega^{i})=a^{i}\hbox{\bbm 1}_{\fam4 \rsfs B}$,
$((\Phi^{\ast})_{a}|\biomega^{j})=a^{j}\hbox{\bbm 1}_{\fam4 \rsfs B}$.
Let us now calculate the term ${{1}\over{2}}(\xi_{1},\xi_{2}|
[\biomega,\biomega])$.
$$\eqalign{(\xi_{1},\xi_{2}|[\biomega,\biomega])&=
\big(\xi_{1},\xi_{2}\big|\big[{\textstyle\sum_{i}}\biomega^{i}\;{\mathchar"220A}\;
e_{i}+{\textstyle\sum_{j}}\biomega^{j}\;{\mathchar"220A}\; e_{j},
{\textstyle\sum_{p}}\biomega^{p}\;{\mathchar"220A}\;
e_{p}+{\textstyle\sum_{q}}\biomega^{q}\;{\mathchar"220A}\; e_{q}\big]\big)\cr
\hfill&={\textstyle\sum_{i,p}}
\Big((\xi_{1}|\biomega^{i})(\xi_{2}|\biomega^{p})+
(-1)^{1+|\xi_{1}||\xi_{2}|}(\xi_{2}|\biomega^{i})
(\xi_{1}|\biomega^{p})\Big)
\;{\mathchar"220A}\;[e_{i},e_{p}]\cr
\hfill&+{\textstyle\sum_{i,q}}\Big((\xi_{1}|\biomega^{i})
(\xi_{2}|\biomega^{q})+(-1)^{1+|\xi_{1}||\xi_{2}|}(\xi_{2}|\biomega^{i})
(\xi_{1}|\biomega^{q})\Big)\;{\mathchar"220A}\;[e_{i},e_{q}]\cr
\hfill&+{\textstyle\sum_{j,p}}
\Big((-1)^{|\xi_{2}|}(\xi_{1}|\biomega^{j})
(\xi_{2}|\biomega^{p})+(-1)^{x}
(\xi_{2}|\biomega^{j})(\xi_{1}|\biomega^{p})\Big)\;{\mathchar"220A}\;[e_{j},e_{p}]\cr
\hfill&-{\textstyle\sum_{j,q}}
\Big((-1)^{|\xi_{2}|}(\xi_{1}|\biomega^{j})
(\xi_{2}|\biomega^{q})+(-1)^{x}
(\xi_{2}|\biomega^{j})(\xi_{1}|\biomega^{q})\Big)\;{\mathchar"220A}\;[e_{j},e_{q}].\cr}
\eqno\eqname\makrinari$$
In the previous calculation, the indices $i,p$ label the even elements
while $j,q$ the odd ones. We also used the fact that if
$\beta_{1},\beta_{2}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$})$, then
$(\xi_{1},\xi_{2}|\beta_{1}\beta_{2})=
(-1)^{|\xi_{2}||\beta_{1}|}(\xi_{1}|\beta_{1})(\xi_{2}|\beta_{2})+
(-1)^{1+|\xi_{1}||\xi_{2}|+|\xi_{1}||\beta_{1}|}(\xi_{2}|\beta_{1})
(\xi_{1}|\beta_{2})$, see relation 4.1.9 of {\kost}, and we have set
$x=1+|\xi_{1}||\xi_{2}|+|\xi_{1}|$. On the other hand, the term
$(\xi_{1},\xi_{2}|d\biomega)$ is calculated via
$(\xi_{1},\xi_{2}|d\biomega)=(\xi_{1}\;{\mathchar"220A}\; id)(\xi_{2}|\biomega)-
(-1)^{|\xi_{1}||\xi_{2}|}(\xi_{2}\;{\mathchar"220A}\; id)(\xi_{1}|\biomega)-
([\xi_{1},\xi_{2}]|\biomega)$, which is an immediate generalization of
4.3.10 of {\kost}. We distinguish now the following cases:
\item{1.} $\xi_{1}$, $\xi_{2}$: horizontal
$\Rightarrow\xi_{1}=\xi_{1}^{H},\xi_{2}=\xi_{2}^{H}$.
The graded structure equation holds, since
${1\over 2}(\xi_{1},\xi_{2}|[\biomega,\biomega])=0$ and
$(\xi_{i}|\biomega)=0$.
\item{2.} $\xi_{1}$: horizontal, $\xi_{2}=(\Phi^{\ast})_{a}$,
$a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$. The left-hand side of $\structureequdyo$ is zero
because $\xi_{2}^{H}=0$. The right-hand side of the same equation
reads: $(\xi_{1},(\Phi^{\ast})_{a}|d\biomega)=
(\xi_{1}\;{\mathchar"220A}\; id)((\Phi^{\ast})_{a}|\biomega)-([\xi_{1},
(\Phi^{\ast})_{a}]|\biomega)=0$, because by the definition of the
graded connection, if $\xi$ is horizontal, $[\xi,(\Phi^{\ast})_{a}]$ is
horizontal too. Furthermore, it is clear that in this case,
$\makrinari$ gives $(\xi_{1},\xi_{2}|[\biomega,\biomega])=0$.
\item{3.} $\xi_{1}=(\Phi^{\ast})_{a}$, $\xi_{2}=(\Phi^{\ast})_{b}$,
$a,b\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$. Clearly, the left-hand side of $\structureequdyo$ is
zero. Examining now the cases $|a|=|b|=0$, $|a|=|b|=1$ and
$|a|=1,|b|=0$, we find always that the right-hand side is also
zero.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
By the graded structure equation, it is clear that $\biomega$ and
$F^{\biomega}$ have the same $\hbox{\bf\char'132}_{2}$ total degree:
$|F^{\biomega}|=(2,0)$; therefore,
$F^{\biomega}=F^{\biomega}_{0}+F^{\biomega}_{1}$ with
$F^{\biomega}_{0}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{2}(Y,\hbox{$\script B$})_{0}\;{\mathchar"220A}\;\frak g_{0}$,
$F^{\biomega}_{1}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{2}(Y,\hbox{$\script B$})_{1}\;{\mathchar"220A}\;\frak g_{1}$.
Bianchi's identity is a well-known property satisfied by the curvature
in differential geometry. Using the generalization of the covariant
derivative in the graded setting and the previous theorem, we may
establish an analogous property in the context of graded manifolds.
\math{Proposition. (Bianchi's Identity)}{\bianchi}
{$D^{\biomega}F^{\biomega}=0$.}
\undertext{\it Proof}. Let us first calculate the differential
$dF^{\biomega}$. Using the graded structure equation and the fact that
$\big[[\biomega,\biomega],\biomega\big]=0$ (Jacobi identity), we find easily:
$$dF^{\biomega}={1\over 2}d[\biomega,\biomega]
={1\over 2}([d\biomega,\biomega]-[\biomega,d\biomega])
=[d\biomega,\biomega]=[F^{\biomega},\biomega].$$
Thus, $dF^{\biomega}=[F^{\biomega},\biomega]$ and
$(\xi_{1},\xi_{2},\xi_{3}|D^{\biomega}F^{\biomega})=
(\xi_{1}^{H},\xi_{2}^{H},\xi_{3}^{H}|[F^{\biomega},\biomega])=0$,
because $\biomega$ vanishes on horizontal derivations.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
We will show now that the graded curvature $F^{\biomega}$ satisfies
the second property of the connection $\biomega$ described in
Proposition {\connexiongequiv}. To this end, consider first
$a=\delta_{g}$, $g\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$} G$. Then:
$$\eqalign{\Phi^{\ast}_{g}F^{\biomega}&=\Phi^{\ast}_{g}
(d\biomega+{1\over 2}[\biomega,\biomega])
=d\Phi^{\ast}_{g}\biomega+{1\over
2}\Phi^{\ast}_{g}[\biomega,\biomega]\cr
\hfill&=(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})d\biomega+{1\over 2}
(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})[\biomega,\biomega]
=(id\;{\mathchar"220A}\; AD_{g^{-1}\ast})F^{\biomega}.\cr}$$
Suppose now that $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g$ is homogeneous; then, thanks to Proposition
{\idiothtalie}, a direct calculation gives:
$$\eqalign{(\Phi^{\ast})_{a}F^{\biomega}&=
\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}F^{\biomega}
=\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}d\biomega+{1\over
2}\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}[\biomega,\biomega]\cr
\hfill&=d\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega+{1\over 2}
\big([\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega,\biomega]+
[\biomega,\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}\biomega]\big)\cr
\hfill&=-d(id\;{\mathchar"220A}\; AD_{\ast a})\biomega+
{1\over 2}\big([-(id\;{\mathchar"220A}\; AD_{\ast a})\biomega,\biomega]+
[\biomega,-(id\;{\mathchar"220A}\; AD_{\ast a})\biomega]\big)\cr
\hfill&=-\big((id\;{\mathchar"220A}\; AD_{\ast a})d\biomega+{1\over 2}
(id\;{\mathchar"220A}\; AD_{\ast a})[\biomega,\biomega]\big)
=-(id\;{\mathchar"220A}\; AD_{\ast a})F^{\biomega},\cr}$$
because $|\biomega|=(1,0)$, $|\hbox{$\bit L$}_{(\Phi^{\ast})_{a}}|=(0,|a|)$. We
have thus proved the following:
\math{Property.}{\toidioisxuei}{\sl If $a\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script A$}(G)^{\circ}$ is a
group-like or primitive element with respect to $\delta_{e}$, we have
$(\Phi^{\ast})_{a}F^{\biomega}=(id\;{\mathchar"220A}\; AD_{\ast s^{\circ}_{\fam4 \rsfs
A}(a)})\hbox{\lower5.8pt\hbox{\larm\char'027}} F^{\biomega}$.}
\vskip0.3cm
We finally prove that the graded curvature provides a criterion for
checking whether the horizontal distribution is involutive or not.
\math{Theorem.}{\involutiveH}{\sl The graded horizontal distribution
is involutive if and only if the graded curvature $F^{\biomega}$ of
the connection $\biomega$ is zero:
$[\hbox{$\script H$},\hbox{$\script H$}]\subset\hbox{$\script H$}\Leftrightarrow F^{\biomega}=0$.}
\undertext{\it Proof}. It is sufficient to check the involutivity
of $\hbox{$\script H$}(Y)$. If $\xi,\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(Y)$ and $\hbox{$\script H$}$ is involutive, then:
$([\xi,\eta]|\biomega)=0\Rightarrow(\xi,\eta|d\biomega)+{1\over
2}(\xi,\eta|[\biomega,\biomega])=0\Rightarrow(\xi,\eta|F^{\biomega})=0$.
But $F^{\biomega}$ vanishes identically on vertical derivations by its
definition (relation $\structureequdyo$); thus $F^{\biomega}=0$.
Conversely, suppose that $F^{\biomega}=0$. Then for each
$\xi,\eta\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(Y)$, we find: $(\xi,\eta|F^{\biomega})=0\Rightarrow
(\xi,\eta|d\biomega)+{1\over 2}(\xi,\eta|[\biomega,\biomega])=0
\Rightarrow(\xi,\eta|d\biomega)=0\Rightarrow-([\xi,\eta]|\biomega)=0$,
which implies that the derivation $[\xi,\eta]$ is also horizontal,
$[\xi,\eta]\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\hbox{$\script H$}(Y)$.\hbox{\kern0.3cm\vrule height6.7pt width6.7pt depth-0.2pt}
\vskip0.3cm
\chapter{Concluding remarks}
Consider a graded principal bundle $(Y,\hbox{$\script B$})$ equipped with a connection
form $\biomega\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$},\frak g)$. We know from the general
theory of graded differential forms, {\kost}, that there always exists
an algebra morphism $\kappa\colon\kern2pt\Omega(Y,\hbox{$\script B$})\rightarrow\Omega(Y)$
defined as follows: if $i\colon\kern2pt(Y,C^{\infty})\rightarrow(Y,\hbox{$\script B$})$ is
the morphism of
graded manifolds determined by $\hbox{$\script B$}(Y)\raise0.5pt\hbox{$\kern2pt\scriptstyle\ni\kern2pt$} f\mapsto\tilde{f}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}
C^{\infty}(Y)$ (see exact sequence $\exactsequence$), then $i^{\ast}$
is just $\kappa$. On the other hand, the decomposition $\frak g=\frak
g_{0}\;{\mathchar"2208}\;\frak g_{1}$ induces a canonical projection
$\epi_{0}\colon\kern2pt\frak g\rightarrow\frak g_{0}$. So we have a linear map
$\kappa_{0}=\kappa\;{\mathchar"220A}\;\epi_{0}\colon\kern2pt\Omega(Y,\hbox{$\script B$},\frak
g)\rightarrow\Omega(Y)\;{\mathchar"220A}\;\frak g_{0}$. Explicitly, if
$\alpha=\sum_{i}\alpha^{i}\;{\mathchar"220A}\; e_{i}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega(Y,\hbox{$\script B$},\frak g)$, then
$\kappa_{0}(\alpha)=\sum_{i}\kappa\big((\alpha^{i})_{0}\big)
\;{\mathchar"220A}\;(e_{i})_{0}$, where $(\alpha^{i})_{0}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega(Y,\hbox{$\script B$})_{0}$ and
$(e_{i})_{0}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\frak g_{0}$ are the even elements in the expansion
of $\alpha$; moreover, one easily checks that $\kappa_{0}$ is not a
$(\hbox{\bf\char'132}\;{\mathchar"2208}\;\hbox{\bf\char'132}_{2})$-graded Lie algebra morphism.
As we have seen in the proof of Theorem {\structureth}, $\biomega$ can
be decomposed as $\biomega=\biomega_{0}+\biomega_{1}$, where
$\biomega_{0}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$})_{0}\;{\mathchar"220A}\;\frak g_{0}$ and
$\biomega_{1}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{1}(Y,\hbox{$\script B$})_{1}\;{\mathchar"220A}\;\frak g_{1}$. Then clearly,
$\kappa_{0}(\biomega)=\kappa_{0}(\biomega_{0})=\sum_{i}\kappa
(\biomega^{i})\;{\mathchar"220A}\; e_{i}$, where $i$ labels the even elements.
Using the fact that the derivations induced on $(Y,\hbox{$\script B$})$ and
$(Y,C^{\infty})$ by the right actions of $(G,\hbox{$\script A$})$ and $(G,C^{\infty})$
respectively (according to Theorem {\thewrima}) are $i$-related, as
well as the defining properties of a graded connection form, one can
prove that $\kappa_{0}(\biomega)$ is a connection form on the ordinary
principal bundle $(Y,C^{\infty})$. Furthermore, the curvatures of
$\biomega$ and $\kappa_{0}(\biomega)$ are related through $\kappa_{0}
(F^{\biomega})=F^{\kappa_{0}(\biomega)}$.
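The last relation can be seen as follows (a sketch, using that $\kappa=i^{\ast}$ commutes with $d$, that $\kappa$ is an algebra morphism, and that it annihilates the odd form components $\biomega^{j}$):

```latex
\kappa_{0}(F^{\biomega})
 =\kappa_{0}\big(d\biomega+\tfrac12[\biomega,\biomega]\big)
 =d\,\kappa_{0}(\biomega)+\tfrac12\,[\kappa_{0}(\biomega),\kappa_{0}(\biomega)]
 =F^{\kappa_{0}(\biomega)},
```

since the $[\biomega_{1},\biomega_{1}]$ contribution to the even part of $[\biomega,\biomega]$ consists of products of odd elements and is therefore killed by $\kappa$.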
The previous observations suggest that the connection theory on graded
principal bundles is the suitable framework for the mathematical
formulation of super-gauge field theories. The fact that the graded
connection $\biomega$ always splits as $\biomega=\biomega_{0}+
\biomega_{1}$, with $\kappa_{0}(\biomega_{0})$ being an ordinary connection
form, incorporates automatically the idea of supersymmetric partners:
$\biomega_{0}$ corresponds to the gauge potential of an ordinary
Yang-Mills theory, while $\biomega_{1}$ corresponds to its
supersymmetric partner. A very interesting feature of this approach is
that there is no graded connection form $\biomega$ for which one of
the terms $\biomega_{0}$ or $\biomega_{1}$ is zero. In physics
terminology, all gauge potentials have super-partners. The same is
also true for the curvature $F^{\biomega}$, the super-gauge field,
since $F^{\biomega}=F^{\biomega}_{0}+F^{\biomega}_{1}$,
$F^{\biomega}_{0}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{2}(Y,\hbox{$\script B$})_{0}\;{\mathchar"220A}\;\frak g_{0}$,
$F^{\biomega}_{1}\raise0.5pt\hbox{$\kern2pt\scriptstyle\in\kern2pt$}\Omega^{2}(Y,\hbox{$\script B$})_{1}\;{\mathchar"220A}\;\frak g_{1}$.
Thus, our approach sets the gauge potentials (resp. fields) and
their supersymmetric partners on the same footing: they both ``live"
in a Lie superalgebra $\frak g$ as components of the same connection
form (resp. curvature form). This is an essential difference between
our approach and the standard treatment of this problem by means of
DeWitt's or Rogers' supermanifolds (see {\grasso} and references
therein), where the connections take values in the even part of the
$\hbox{\bf\char'132}_{2}$-graded Lie module corresponding to a Lie supergroup.
\vskip0.5cm
\noindent {\bf Acknowledgements.}
\vskip0.1cm
I would like to thank Professor R. Coquereaux
for his critical reading of the manuscript and for many stimulating
discussions.
\refout
\end
\section{Introduction}
Models of inflation \cite{Guth:1980} considering a nonzero coupling $\xi$ between the scalar inflaton field and the gravitational Ricci scalar have been studied for more than 20 years.
A nonminimal coupling was originally introduced to solve the graceful exit problem in the old inflationary scenario \cite{La:1989za}, and it was soon realized that it could also improve the chaotic inflationary scenario \cite{Salopek:1988qh,Futamase:1987ua,Fakir:1990eg}. We will motivate the use of a nonzero $\xi$ later on. In the context of inflation a nonminimal coupling $\xi$ can not only relax the initial conditions for chaotic inflation, but it can also weaken the constraints on the inflaton potential. For successful chaotic inflation the quartic self-coupling of a minimally coupled inflaton field should take the unnaturally small value $\lambda \sim 10^{-13}$, but a (large) nonminimal coupling $\xi$ modifies this condition to $\lambda/|\xi|^2\sim 10^{-13}$. Therefore $\lambda$ can increase by many orders of magnitude if $|\xi|$ is sufficiently large. This allows the Higgs boson itself to be the inflaton field \cite{Salopek:1988qh}, an appealing idea that was revived recently by Bezrukov and Shaposhnikov \cite{Bezrukov:2007ep}. \\
The constraints on the inflaton potential are obtained from observations of the temperature fluctuations in the CMB \cite{Komatsu:2010fb}. These temperature fluctuations are ultimately a consequence of quantum fluctuations of the inflaton field, which are in turn described by the theory of gauge invariant cosmological perturbations \cite{Bardeen:1980kt,Mukhanov:1981xt,Hawking:1982cz,Starobinsky:1982ee,Guth:1982ec,Bardeen:1983qw}. Scalar perturbations of the metric and inflaton field, coupled through the Einstein equations, beautifully combine into a single gauge invariant variable often referred to as the comoving curvature perturbation.\\
At first the theory of cosmological perturbations was derived for a minimally coupled scalar field. However, Mukhanov et al. \cite{Mukhanov:1990me} correctly pointed out that a nonminimal coupling of the inflaton field to gravity can be removed by performing a conformal transformation of the metric $g_{\mu\nu,E}=\omega^2 g_{\mu\nu}$. Therefore it is in principle sufficient to know the standard results (\textit{e.g.} the primordial power spectrum) in the frame where the scalar-gravity coupling is minimal, the so-called Einstein frame. The Jordan frame (\textit{i.e.} nonminimal coupling) results are then obtained by performing the conformal transformation. Although the Jordan and Einstein frames are physically equivalent at the classical level, it is not obvious that the frames are also equivalent at the level of (quantum) fluctuations. However, Makino and Sasaki \cite{Makino:1991sg} and Fakir et al. \cite{Fakir:1992cg} proved that the comoving curvature perturbation is not only gauge invariant, but also invariant under a conformal transformation. This means that it is for example possible to calculate the action for the comoving curvature perturbation in the Einstein frame and then obtain the action in the Jordan frame by performing a conformal transformation of the metric, which was done by Hwang \cite{Hwang:1996np}. Earlier, Hwang and Noh \cite{Hwang:1996xh} had already found the field equation for the comoving curvature perturbation in the Jordan frame, by considering the linearized Einstein equations for a nonminimally coupled scalar field in the uniform field and curvature gauges.
\\
In this paper it is our goal to derive the gauge invariant free action for the nonminimally coupled inflaton field. To avoid any confusion regarding gauge freedom or conformal invariance between Jordan and Einstein frames, we do not fix a gauge or perform a conformal transformation in the derivation. Instead we will keep using all dynamical and constraint fields in the action and work exclusively in the Jordan frame. As we will see we obtain in a straightforward and unambiguous way the completely gauge invariant action for the nonminimally coupled scalar field, where as expected the only dynamical degrees of freedom are the (scalar) comoving curvature perturbation and the (tensor) graviton. We will use the canonical approach put forward recently in Ref. \cite{Prokopec:2010be}. This new approach is fundamental and very general since it keeps all the constraint fields and can in principle be generalized to arbitrary order in field perturbations. Moreover we will perform our calculations in $D$ dimensions, anticipating dimensional regularization in future loop calculations.\\
Our work is motivated by a number of points. First of all the gauge invariant action for cosmological perturbations is crucial in order to calculate quantum corrections to the inflaton potential. The quantum corrected inflaton potential determines whether or not the conditions for slow-roll inflation are met. In this paper we consider a nonminimally coupled inflaton field and we show that we can consistently calculate the free action. The next step is to derive the higher order gauge invariant action, which is outlined in \cite{Prokopec:2010be}. A second motivation for our work is to establish the physical equivalence of the Jordan and Einstein frames at the level of the free action. The main complication in this respect is the fact that the Einstein and Jordan frames are related by nonlinear field transformations. In fact we will show that the two frames are also physically equivalent when considering field fluctuations up to quadratic order. Chisholm \cite{Chisholm1961469} and Kamefuchi et al. \cite{Kamefuchi1961529} already proved almost 50 years ago that, although the field equations may differ in detail under point-transformations of the fields (\textit{i.e.} transformations without time derivatives of fields), the (Euler-Lagrange) form of these equations and of the stress-energy tensor remains identical. Thereby the (quantum) equivalence of two frames related by nonlinear field transformations is established. In this paper we would like to understand how this equivalence works in detail for the case of the conformal transformation when applied to Einstein's gravity coupled nonminimally to scalar matter.\\
As for the motivation of the use of a nonzero nonminimal coupling $\xi$: if $\xi$ does not take the conformal value $\frac16$ (which is the case), then $\xi$ will run with the energy scale, see Ref. \cite{Bilandzic:2007nb}. In other words, if we pick $\xi$ to be zero at some scale, it will not be zero at another energy scale. Moreover, if we pick $\xi$ to be large at some scale, then $\xi$ will remain large since the running is generically logarithmic. In the end of course it is Nature who decides which value $\xi$ takes at some energy scale. Fortunately, if $\xi$ is nonzero, then we should in principle be able to observe this. Minimal and nonminimal inflationary models are physically different since the matter and gravitational fields propagate differently if $\xi$ is nonzero. This is true in both the Einstein and Jordan frame. \\
The outline of the paper is the following: in section \ref{sec:canonicalaction} we formulate the action for the nonminimally coupled inflaton field in canonical form. We derive the background Friedmann and field equations and show the relation to the background fields in the Einstein frame. In section \ref{sec:freeaction} we perturb the action up to quadratic order in field fluctuations and perform a diagonalization procedure of this action. Our final and most important result is the completely gauge invariant free action for the nonminimally coupled inflaton field. By performing the conformal transformation we show that this action, including both dynamical and constraint fields, is physically equivalent to the quadratic action in the Einstein frame. Finally in section \ref{sec:Higgsinflation} we generalize our result to the case of Higgs inflation, where the Higgs boson itself is the inflaton field. We briefly discuss the idea and the current status of Higgs inflation and show that a scalar field theory with a local $SU(N)$ or $O(N)$ symmetry contains one dynamical inflaton field.
\section{Canonical action for the nonminimally coupled inflaton field}
\label{sec:canonicalaction}
We start with the $D$-dimensional action for a scalar field $\Phi$ that is coupled to the Ricci scalar $R$ through some function $F(\Phi)$,
\begin{equation}
S=\int d^D x \sqrt{-g}\left\{-R^{(D)} F(\Phi)-\frac12 g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi -V(\Phi)\right\}.
\label{nonminimalaction}
\end{equation}
The metric convention is $(-,+,+,+)$ and we will work in units where $\hbar=c=1$. For a nonminimally coupled inflaton field $F(\Phi)=\frac12 M_P^2 -\frac12 \xi \Phi^2$ (with $M_P^2=(8\pi G_N)^{-1}$), where $\xi=+\frac{D-2}{4(D-1)}$ is the conformal coupling value and $\xi=0$ corresponds to minimal coupling. In the following we will keep $F(\Phi)$ completely general. To avoid any confusion, for the rest of this paper we label the $D$-dimensional Ricci scalar with an index $D$.\\
In the Lagrangian formulation the action \eqref{nonminimalaction} is invariant under coordinate transformations of the metric field $g_{\mu\nu}$. It is precisely this coordinate invariance, however, which makes the extraction of the true dynamical fields problematic. Because we are interested in the dynamical fields in the context of cosmological perturbations, we therefore want to break the general covariance of the metric by separating spacetime into spatial surfaces of constant time. To this end we use the ADM \cite{Arnowitt:1962hi} decomposition of the metric with the line element
\begin{equation}
ds^2=-N^2dt^2+g_{ij}(dx^i+N^idt)(dx^j+N^jdt),
\label{ADMlineelement}
\end{equation}
where $N$ and $N^i$ are called the lapse and shift functions, respectively. Under a time change $dt$ the corresponding change in a coordinate $x^i$ is $Ndt$ in the direction perpendicular to the spatial surface, and $N^idt$ in the direction parallel to the surface. This geometrical interpretation shows that the lapse and shift functions correspond to coordinate changes, which seems to leave the spatial metric $g_{ij}$ as the true dynamical field. In fact we can determine this precisely in the Hamiltonian formulation of gravity, which is obtained using the ADM metric. The ADM formalism is therefore necessary for a first-principles quantization and can be used to check the correctness of any other quantization procedure. After a series of steps (presented in Appendix \ref{secappendixADMaction}), where we derive the canonical momenta and substitute these back into the action, we obtain the action for a nonminimally coupled scalar field in canonical form,
\begin{equation}
S=\int d^D x \left[p^{ij}\partial_0 g_{ij}+p_{\Phi}\partial_0{\Phi}-N\mathcal{H}-N_i\mathcal{H}^{i}\right],
\label{canonicalactionnonminimal}
\end{equation}
where
\begin{align}
\nonumber \mathcal{H}=&-\sqrt{g}RF+\frac{1}{\sqrt{g} F}\left[p^{ij}g_{ik}g_{jl}p^{kl}-\frac{1}{D-2}\frac{\left(1+2\frac{F^{\prime 2}}{F}\right)}{\Omega}p^2\right]\\
&+\frac{1}{\sqrt{g}}\frac{1}{2\Omega}p_{\Phi}^2+\sqrt{g}\frac{1}{2}g^{ij}\partial_i\Phi\partial_j\Phi+\sqrt{g}V(\Phi)
-\frac{1}{\sqrt{g}F}\frac{2}{D-2}\frac{1}{\Omega}F'p p_{\Phi}+2\sqrt{g}g^{ij}\nabla_i\nabla_jF\label{Nconstraint}\\
\mathcal{H}^{i}=&\partial^i\Phi p_{\Phi}-2\nabla_jp^{ij}\label{Niconstraint}.
\end{align}
$p^{ij}$ and $p_{\Phi}$ are the canonical momenta conjugate to $g_{ij}$ and $\Phi$ respectively and $p\equiv g_{ij}p^{ij}$. The action \eqref{canonicalactionnonminimal} is a new result and indeed reduces to the well-known canonical action for a minimally coupled scalar field if we set $F=\frac12 M_P^2\equiv 1$. The canonical action shows that the only dynamical field is $g_{ij}$, whereas the lapse $N$ and shift $N^i$ functions appear as Lagrange multipliers to the constraints. Since $p^{ij}$ is a densitized tensor the covariant derivative is understood as $\nabla_j p^{ij}=\partial_j p^{ij}+\Gamma^{i}_{jl} p^{jl}$, where $\Gamma^{i}_{jl}$ only depends on spatial derivatives of the spatial metric $g_{ij}$. The Ricci scalar $R$ in \eqref{Nconstraint} is the ``spatial'' Ricci scalar and only depends on (spatial derivatives of) $\Gamma^{i}_{jl}$. In the canonical action indices are raised and lowered by the spatial metric $g_{ij}$. Furthermore we use shorthand notation where $F=F(\Phi)$ and $F'=dF/d\Phi$, and we define the convenient variable
\begin{equation}
\Omega= 1+2\frac{D-1}{D-2}\frac{F^{\prime 2 }}{F}.\label{definitionOmega}
\end{equation}
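As a consistency check, setting $F=\frac12 M_P^2\equiv 1$, so that $F'=0$ and $\Omega=1$, removes the $p\,p_{\Phi}$ cross term and the $\nabla_i\nabla_jF$ term, and Eq. \eqref{Nconstraint} reduces to the familiar Hamiltonian constraint for gravity with a minimally coupled scalar field,
\begin{equation*}
\mathcal{H}\Big|_{F=1}=-\sqrt{g}R+\frac{1}{\sqrt{g}}\left[p^{ij}g_{ik}g_{jl}p^{kl}-\frac{1}{D-2}p^2\right]+\frac{1}{2\sqrt{g}}p_{\Phi}^2+\sqrt{g}\frac{1}{2}g^{ij}\partial_i\Phi\partial_j\Phi+\sqrt{g}V(\Phi).
\end{equation*}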
As a consequence of the nonminimal coupling between $\Phi$ and $R$, the latter containing double derivatives of the metric, the momenta $p$ and $p_{\Phi}$ are coupled in the Hamiltonian $\mathcal{H}$ in Eq. \eqref{Nconstraint}. Since this leads to coupled Hamilton equations for $p^{ij}$ and $p_{\Phi}$, we would like to decouple the momenta. We do this by introducing a shifted momentum
\begin{align}
\hat{p}_{\Phi}&\equiv p_{\Phi}-\frac{2}{D-2}\frac{F'}{F}p.
\label{nonminimalscalarmomentumshifted2}
\end{align}
Since the shift in the momentum depends only on $F(\Phi)$ and $p$, the transformation is canonical, and the resulting Hamilton equations of motion are equivalent whether expressed in terms of $\hat{p}_{\Phi}$ or $p_{\Phi}$. In terms of the shifted momentum $\hat{p}_{\Phi}$ we can rewrite the canonical action \eqref{canonicalactionnonminimal} as
\begin{equation}
S=\int d^D x \left[p^{ij}\partial_0 g_{ij}+\hat{p}_{\Phi}\partial_0{\Phi}+\frac{2}{D-2}\frac{F'}{F}p\partial_0\Phi -N\mathcal{H}-N_i\mathcal{H}^{i}\right],
\label{canonicalactionnonminimalshifted2}
\end{equation}
where
\begin{align}
\nonumber \mathcal{H}=&-\sqrt{g}RF+\frac{1}{\sqrt{g} F}\left[p^{ij}g_{ik}g_{jl}p^{kl}-\frac{1}{D-2}p^2\right]\\
&+\frac{1}{\sqrt{g}}\frac{1}{\Omega}\frac{1}{2}\hat{p}_{\Phi}^2+\sqrt{g}\frac{1}{2}g^{ij}\partial_i\Phi\partial_j\Phi +\sqrt{g}V(\Phi)
+2\sqrt{g}g^{ij}\nabla_i\nabla_jF\label{Nconstraintshifted2}\\
\mathcal{H}^{i}=&\partial^i\Phi (\hat{p}_{\Phi}+\frac{2}{D-2}\frac{F'}{F}p)-2\nabla_jp^{ij}\label{Niconstraintshifted2}.
\end{align}
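The decoupling can be cross-checked with a few lines of computer algebra. The following standalone \texttt{sympy} sketch (in our own shorthand: \texttt{Fp} stands for $F'$, \texttt{phat} for $\hat{p}_{\Phi}$, with the common factor $1/\sqrt{g}$ stripped off and the untouched $p^{ij}g_{ik}g_{jl}p^{kl}$ term omitted) verifies that the substitution \eqref{nonminimalscalarmomentumshifted2} cancels the $p\,p_{\Phi}$ cross term of \eqref{Nconstraint} and reproduces the momentum-dependent terms of \eqref{Nconstraintshifted2}:

```python
# Cross-check of the momentum decoupling; symbol names are ours, not the paper's.
import sympy as sp

D, F, Fp, p, phat = sp.symbols('D F Fp p phat', positive=True)
Omega = 1 + 2*(D - 1)/(D - 2)*Fp**2/F        # Eq. (definitionOmega)

# shifted momentum, Eq. (nonminimalscalarmomentumshifted2)
pPhi = phat + 2*Fp*p/((D - 2)*F)

# momentum-dependent terms of sqrt(g)*H in Eq. (Nconstraint)
T_old = (-(1 + 2*Fp**2/F)/((D - 2)*F*Omega)*p**2
         + pPhi**2/(2*Omega)
         - 2*Fp*p*pPhi/((D - 2)*F*Omega))

# the same terms in the decoupled form, Eq. (Nconstraintshifted2)
T_new = -p**2/((D - 2)*F) + phat**2/(2*Omega)

assert sp.cancel(T_old - T_new) == 0          # cross term gone, coefficients match
```

The identity rests entirely on the definition \eqref{definitionOmega} of $\Omega$.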
The Hamiltonian $\mathcal{H}$ has simplified dramatically because of the shifted momentum. On the other hand, there are additional terms in the kinetic part of the action \eqref{canonicalactionnonminimalshifted2} and in the momentum density $\mathcal{H}^{i}$. Our goal is to perturb the action up to second order in fluctuations around a FLRW background. Therefore we split each field into a classical background plus a small perturbation as
\begin{align}
p^{ij}&=\frac{\mathcal{P}(t)}{2(D-1) a(t)}\left(\delta^{ij}+\pi^{ij}(t,\bvec{x})\right)\label{perturbedpij}\\
\hat{p}_{\Phi}&=\hat{\mathcal{P}}_{\phi}(t)\left(1+\hat{\pi}_{\varphi}(t,\bvec{x})\right)\\
g_{ij}&=a(t)^2\left(\delta_{ij}+h_{ij}(t,\bvec{x})\right)\label{perturbedgij}\\
\Phi&=\phi(t)+\varphi(t,\bvec{x})\\
N&=\bar{N}(t)+n(t,\bvec{x})\label{perturbedN}.
\end{align}
The shift $N^i$ is a pure fluctuation, \textit{i.e.} its background value is zero. Note that we keep working with hatted quantities $\hat{\mathcal{P}}_{\phi}$ and $\hat{\pi}_{\varphi}$ to clarify that these are not the canonical momenta conjugate to $\phi$ and $\varphi$.
\subsection{Background equations}
To recover the background equations we perturb the action \eqref{canonicalactionnonminimalshifted2} up to linear order in the perturbations \eqref{perturbedpij}-\eqref{perturbedN} and demand that the resulting expressions vanish. This yields the Hamilton equations of motion, which can equivalently be derived from the background action
\begin{equation}
S^{(0)}=\int d^Dx \left\{\mathcal{P}\partial_0 a+\hat{\mathcal{P}}_{\phi}\partial_0\phi+\frac{1}{D-2}\frac{F'}{F}a\mathcal{P}\partial_0 \phi-\bar{N}\mathcal{H}^{(0)}\right\},
\label{backgroundactionshift2}
\end{equation}
where
\begin{equation}
\mathcal{H}^{(0)}=-\frac{1}{F}\frac{1}{a^{D-3}}\frac{\mathcal{P}^2}{4(D-1)(D-2)}+\frac{\hat{\mathcal{P}}_{\phi}^2}{2\bar{\Omega} a^{D-1}}+a^{D-1}V,
\label{backgroundhamiltonianshift2}
\end{equation}
where $F(\phi)$ and $\bar{\Omega}(\phi)$ are functions of the background fields only. Varying this action with respect to $\mathcal{P}$ and $\hat{\mathcal{P}}_{\phi}$ gives
\begin{align}
\mathcal{P}&=-2(D-1)(D-2)F a^{D-2}\left[H+\frac{1}{D-2}\frac{F'}{F}\dot{\phi}\right]\label{Ponshell}\\
\hat{\mathcal{P}}_{\phi}&=\bar{\Omega} a^{D-1}\dot{\phi}\label{Pphionshell},
\end{align}
where a dotted derivative corresponds to $\dot{a}\equiv \bar{N}^{-1} da/dt$ and we have identified the Hubble parameter as $H\equiv \dot{a}/a$. Since $\bar{N}$ can be picked arbitrarily, the action \eqref{backgroundactionshift2} is time reparametrization invariant, a remnant of the diffeomorphism invariance of the original action \eqref{nonminimalaction}. Equations \eqref{Ponshell} and \eqref{Pphionshell} are the on-shell expressions for $\mathcal{P}$ and $\hat{\mathcal{P}}_{\phi}$. A variation of the background action \eqref{backgroundactionshift2} with respect to $a$ and $\phi$ gives
\begin{align}
\dot{\mathcal{P}}&=\frac{1}{D-2}\frac{F'}{F}\mathcal{P}\dot{\phi}-\frac{1}{F}\frac{D-3}{a^{D-2}}\frac{\mathcal{P}^2}{4(D-1)(D-2)}+ \frac{D-1}{\bar{\Omega}}\frac{\hat{\mathcal{P}}_{\phi}^2}{2a^{D}}-(D-1)a^{D-2}V
\label{fieldequationP2}\\
\dot{\hat{\mathcal{P}}}_{\phi}&=-\frac{1}{D-2}\frac{F'}{F}\left(a\mathcal{P}\right)^{\cdot}+\left(\frac{1}{F}\right)'\frac{1}{a^{D-3}}\frac{\mathcal{P}^2}{4(D-1)(D-2)}-\left( \frac{1}{\bar{\Omega}}\right)'\frac{\hat{\mathcal{P}}_{\phi}^2}{2a^{D-1}}-a^{D-1}V_{,\phi},
\label{fieldequationphi2}
\end{align}
where $V_{,\phi}=dV/d\phi$. Finally we can vary the background action with respect to $\bar{N}$ to find the constraint equation
\begin{equation}
\frac{1}{F}\frac{1}{a^{D-3}}\frac{\mathcal{P}^2}{4(D-1)(D-2)}=\frac{\hat{\mathcal{P}}_{\phi}^2}{2\bar{\Omega} a^{D-1}}+a^{D-1}V.
\label{constraintonshell}
\end{equation}
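As an independent check, the variations leading to Eqs. \eqref{Ponshell} and \eqref{Pphionshell} can be verified symbolically. The following standalone \texttt{sympy} sketch uses our own symbol names (\texttt{da}, \texttt{dphi} for $\partial_0 a$, $\partial_0\phi$ and \texttt{Om} for $\bar{\Omega}$):

```python
# Symbolic variation of the background action (backgroundactionshift2)
# with respect to the momenta; notation is ours, not the paper's.
import sympy as sp

a, F, Om, Nb = sp.symbols('a F Om Nb', positive=True)
D, Fp, V, P, Pphi, da, dphi = sp.symbols('D Fp V P Pphi da dphi')

# background Hamiltonian, Eq. (backgroundhamiltonianshift2)
H0 = (-P**2/(4*(D - 1)*(D - 2)*F*a**(D - 3))
      + Pphi**2/(2*Om*a**(D - 1)) + a**(D - 1)*V)

# integrand of Eq. (backgroundactionshift2)
L = P*da + Pphi*dphi + Fp*a*P*dphi/((D - 2)*F) - Nb*H0

P_sol    = sp.solve(sp.diff(L, P), P)[0]
Pphi_sol = sp.solve(sp.diff(L, Pphi), Pphi)[0]

# dotted (reparametrization-invariant) derivatives
H, phid = da/(Nb*a), dphi/Nb

# compare with Eqs. (Ponshell) and (Pphionshell)
P_target = -2*(D - 1)*(D - 2)*F*a**(D - 2)*(H + Fp*phid/((D - 2)*F))
assert sp.simplify(sp.expand(P_sol - P_target)) == 0
assert sp.simplify(Pphi_sol - Om*a**(D - 1)*phid) == 0
```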
If we insert the canonical momenta \eqref{Ponshell} and \eqref{Pphionshell} in Eqs. \eqref{fieldequationP2}-\eqref{constraintonshell} we obtain the background Friedmann and field equations
\begin{align}
&H^2=\frac{1}{(D-1)(D-2)F}\left[\frac12 \dot{\phi}^2+V-2(D-1)H\dot{F}\right]
\label{nonminimalH2}\\
&\dot{H}=\frac{1}{(D-2)F}\left(-\frac12\dot{\phi}^2+H\dot{F}-\ddot{F}\right)
\label{nonminimaldotH}\\
&\ddot{\phi}+(D-1)H\dot{\phi}-(D-1)\left(D H^2+2\dot{H}\right)F'+V_{,\phi}=0,
\label{fieldequation}
\end{align}
where we recognize the background Ricci scalar
\begin{equation}
R=(D-1)\left(D H^2+2\dot{H}\right).
\label{ricciscalarFLRW}
\end{equation}
Eqs. \eqref{nonminimalH2}-\eqref{fieldequation} agree with the Friedmann and field equations obtained from a variation of the action \eqref{nonminimalaction} with respect to $g^{\mu\nu}$ and $\phi$, see for example Ref. \cite{Hwang:1996xh}. Note that in all the above equations the minimal result is recovered by setting $F=\frac12 M_P^2\equiv 1$. Furthermore, only two out of the three equations \eqref{nonminimalH2}-\eqref{fieldequation} are independent.
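This reduction can likewise be verified with a short standalone \texttt{sympy} sketch (again in our own symbol names: \texttt{phid} for $\dot{\phi}$, \texttt{Fp} for $F'$), which confirms that the constraint \eqref{constraintonshell}, evaluated on the momenta \eqref{Ponshell} and \eqref{Pphionshell}, is equivalent to the Friedmann equation \eqref{nonminimalH2}:

```python
# Constraint (constraintonshell) on the on-shell momenta vs. Eq. (nonminimalH2).
import sympy as sp

D, F, Fp, H, phid, V, a = sp.symbols('D F Fp H phid V a', positive=True)
Omega = 1 + 2*(D - 1)/(D - 2)*Fp**2/F        # Eq. (definitionOmega)

P    = -2*(D - 1)*(D - 2)*F*a**(D - 2)*(H + Fp*phid/((D - 2)*F))  # Eq. (Ponshell)
Pphi = Omega*a**(D - 1)*phid                                       # Eq. (Pphionshell)

# lhs - rhs of the Hamiltonian constraint, Eq. (constraintonshell)
constraint = (P**2/(4*(D - 1)*(D - 2)*F*a**(D - 3))
              - Pphi**2/(2*Omega*a**(D - 1)) - a**(D - 1)*V)

# Friedmann equation (nonminimalH2), multiplied through by (D-1)(D-2)F,
# with F-dot written as F'*phid
friedmann = (D - 1)*(D - 2)*F*H**2 - (phid**2/2 + V - 2*(D - 1)*H*Fp*phid)

assert sp.simplify(sp.expand(constraint/a**(D - 1) - friedmann)) == 0
```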
\subsection{\label{section:classicalequivalence} Classical equivalence of Jordan and Einstein frames}
The Einstein frame is the frame in which the inflaton field is minimally coupled to gravity. In the Einstein frame the action in canonical form is \cite{Prokopec:2010be}
\begin{equation}
S^{(0)}=\int d^Dx \left\{\mathcal{P}_E\partial_0 a_E+\mathcal{P}_{\phi,E}\partial_0\phi_E-\bar{N}_E\mathcal{H}_E^{(0)}\right\},
\label{backgroundactionEinstein}
\end{equation}
where
\begin{equation}
\mathcal{H}_E^{(0)}=-\frac{1}{a_E^{D-3}}\frac{\mathcal{P}_E^2}{4(D-1)(D-2)}+\frac{\mathcal{P}_{\phi,E}^2}{2 a_E^{D-1}}+a_E^{D-1}V_E.
\label{backgroundhamiltonianEinstein}
\end{equation}
The subscript $E$ denotes the quantities in the Einstein frame. This action can be obtained from Eq. \eqref{backgroundactionshift2} by setting $F=\frac12 M_P^2\equiv 1$. We can always make a transformation from the Einstein frame to the Jordan frame (with nonminimal coupling) by performing a conformal transformation of the metric,
\begin{equation}
g_{\mu\nu,E}=\omega^2 g_{\mu\nu},
\label{conformaltransformation}
\end{equation}
where $\omega=\omega\left(\Phi(x)\right)$. It is a well known fact that the Einstein and Jordan frames are physically equivalent at the level of the background equations of motion. Let us now establish this physical equivalence for the background Einstein and Jordan frame actions. Thus, we want to find out how the background fields in the Einstein frame action \eqref{backgroundactionEinstein} should be rescaled in order to arrive at the Jordan frame action \eqref{backgroundactionshift2}. Considering the ADM metric \eqref{ADMlineelement} the background lapse $\bar{N}_E$ and scale factor $a_E$ transform under the conformal transformation \eqref{conformaltransformation} as
\begin{align}
\bar{N}_E&=\bar{\omega} \bar{N} \label{relationlapse}\\
a_E&=\bar{\omega} a \label{relationscalefactor},
\end{align}
where we have decomposed the conformal factor $\omega$ in a background part plus a small (quantum) fluctuation
\begin{equation}
\omega(\Phi(x))=\bar{\omega}(t)+\delta\omega(t,\bvec{x}).\label{conformaldecomposition}
\end{equation}
Now, in order to arrive at the Jordan frame Hamiltonian \eqref{backgroundhamiltonianshift2} from Eq. \eqref{backgroundhamiltonianEinstein} we see that the momenta, field derivative and the potential in the Einstein and Jordan frames are related as
\begin{align}
\mathcal{P}_E&=\frac{1}{\bar{\omega}}\mathcal{P} \label{relationEinsteinframemmomentumP}\\
\mathcal{P}_{\phi,E}&=\sqrt{\frac{\bar{\omega}^{D-2}}{\bar{\Omega}}}\hat{\mathcal{P}}_{\phi} \label{relationEinsteinframemmomentumPphi}\\
V_E(\phi_E)&=\frac{1}{\bar{\omega}^D}V(\phi_E(\phi))\label{relationpotential},
\end{align}
if we make the identification
\begin{equation}
F(\phi)= \bar{\omega} ^{D-2}.
\end{equation}
With these field redefinitions the Hamiltonian in the Jordan frame \eqref{backgroundhamiltonianshift2} can be derived from the Einstein frame Hamiltonian \eqref{backgroundhamiltonianEinstein}. Furthermore we can verify that $\mathcal{P}_E\partial_0 a_E\rightarrow \mathcal{P}\partial_0 a+\frac{1}{D-2}\frac{F'}{F}a\mathcal{P}\partial_0 \phi$ under the field redefinition of $\mathcal{P}_E$. What remains to be checked is the relation between $\partial_0\phi_E$ and $\partial_0\phi$. Since the canonical momentum $\hat{\mathcal{P}}_{\phi}$ depends on $\partial_0\phi$ in a specific way, we should find the expression for the canonical momentum $\mathcal{P}_{\phi,E}$ in terms of $\partial_0\phi_E$. By varying the action \eqref{backgroundactionEinstein} with respect to $\mathcal{P}_E$ and $\mathcal{P}_{\phi,E}$ we can find the definition of the canonical momenta in the Einstein frame,
\begin{align}
\mathcal{P}_E&=-2(D-1)(D-2)a_E^{D-2}H_E\label{EinsteinframemmomentumP}\\
\mathcal{P}_{\phi,E}&=a_E^{D-1}\dot{\phi}_E\label{EinsteinframemmomentumPphi},
\end{align}
where $H_E=\dot{a}_E/a_E$ and $\dot{\phi}_E=\bar{N}_E^{-1}\partial_0 \phi_E$ is the dotted derivative in the Einstein frame. We now compare Eqs. \eqref{EinsteinframemmomentumP} and \eqref{EinsteinframemmomentumPphi} to the momenta in the Jordan frame \eqref{Ponshell} and \eqref{Pphionshell} and use the relations between Jordan and Einstein frame momenta in Eqs. \eqref{relationEinsteinframemmomentumP} and \eqref{relationEinsteinframemmomentumPphi}. This will give us the relation between the Hubble parameter and the background field in the Jordan and Einstein frames,
\begin{align}
H_E&=\frac{1}{\bar{\omega}}\left(H+\frac{\dot{\bar{\omega}}}{\bar{\omega}}\right)=\frac{1}{F^{\frac{1}{D-2}}}\left(H+\frac{1}{D-2}\frac{F'}{F}\dot{\phi}\right)\label{relationHubble}\\
\dot{\phi}_E&=\frac{1}{\bar{\omega}}\sqrt{\frac{\bar{\Omega}}{\bar{\omega}^{D-2}}}\dot\phi=\frac{1}{F^{\frac{1}{D-2}}}\sqrt{\frac{\bar{\Omega}}{F}}\dot\phi,
\label{relationdotphi}
\end{align}
where the dotted derivatives on the left- and right-hand sides are the reparametrization invariant dotted derivatives in the Einstein and Jordan frames, respectively\footnote{Note that Eq. \eqref{relationdotphi} corresponds to the nonlinear field redefinition which is commonly used in the Lagrange formulation to bring the kinetic terms in the Einstein frame into canonical form. See for example Ref. \cite{Hwang:1996np}, or Ref. \cite{Bezrukov:2007ep} for the specific nonminimal coupling term $\xi R \Phi^2$.}. Using the relation for $\dot{\phi}_E$ in Eq. \eqref{relationdotphi}, and the field redefinitions in Eqs. \eqref{relationlapse}-\eqref{relationpotential}, we finally find that we can derive the Jordan frame action \eqref{backgroundactionshift2} from the Einstein frame action \eqref{backgroundactionEinstein}. Since the background fields in the Jordan and Einstein frames are related by time-dependent rescalings, we thereby establish the physical equivalence between the two frames at the classical level both on- and off-shell. In the next section we will establish the equivalence of the Jordan and Einstein frame actions up to second order in (quantum) fluctuations.
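The Hubble relation \eqref{relationHubble} is a direct consequence of the rescalings \eqref{relationlapse} and \eqref{relationscalefactor} together with the frame-dependent dotted derivatives; a minimal symbolic check (our notation, with \texttt{w} standing for $\bar{\omega}$) reads:

```python
# Check of Eq. (relationHubble) from a_E = w*a, N_E = w*N; each frame's
# dotted derivative carries that frame's own lapse.
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)
w = sp.Function('w')(t)
N = sp.Function('N')(t)

aE, NE = w*a, w*N                 # Eqs. (relationscalefactor), (relationlapse)
H  = sp.diff(a, t)/(N*a)          # Jordan-frame Hubble rate
wd = sp.diff(w, t)/(N*w)          # Jordan-frame (log w)-dot
HE = sp.diff(aE, t)/(NE*aE)       # Einstein-frame Hubble rate

assert sp.simplify(HE - (H + wd)/w) == 0
```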
{\section{\label{sec:freeaction} Free action for cosmological perturbations} }
In this section we will derive the free action for gauge invariant cosmological perturbations for all dynamical and constraint fields. A common approach is to fix a gauge by setting either scalar field or metric perturbations to zero, and then to solve for the lapse and shift perturbations from the linearized constraint equations, see for example Ref. \cite{Maldacena:2002vr} and Ref. \cite{Feng:2010ya} for minimal and nonminimal coupling, respectively. In this paper we do not solve any linearized constraint equations, nor do we use gauge freedom to set some fields to zero. Instead we keep all the fields up to second order in fluctuations \eqref{perturbedpij}-\eqref{perturbedN}. We find for the action \eqref{canonicalactionnonminimalshifted2} up to second order in perturbations
\begin{align}
\nonumber S^{(2)}=&\int d^Dx\biggl\{\frac{\mathcal{P}}{2(D-1)}\left(2\pi^{ij}h_{ij}\partial_0 a+a \pi^{ij}\partial_0 h_{ij}\right)+\hat{\mathcal{P}}_{\phi}\hat{\pi}_{\varphi}\partial_0 \varphi\\
\nonumber &+\frac{a\mathcal{P}}{(D-1)(D-2)}\biggl[\frac{F'}{F}\left(\pi^{ij}h_{ij}\partial_0\phi+\pi^{ij}\delta_{ij}\partial_0\varphi+h\partial_0 \varphi\right)+\frac12 (D-1)\left(\frac{F'}{F}\right)''\varphi^2\partial_0\phi\\
&+\left(\frac{F'}{F}\right)'\varphi\left((D-1)\partial_0\varphi+(\pi^{ij}\delta_{ij}+h)\partial_0\phi\right)\biggr]-\bar{N}\mathcal{H}^{(2)}-n\mathcal{H}^{(1)}-N_i\mathcal{H}^{i(1)}\biggr\},
\label{quadraticnonminimalaction}
\end{align}
where $h\equiv h^{ij}\delta_{ij}$. We refer the reader to Appendix A of Ref. \cite{Prokopec:2010be} for some intermediate steps in the derivation. In this quadratic action indices are raised and lowered by the Kronecker delta $\delta_{ij}$. The Hamiltonian up to first order in perturbations is
\begin{align}
\nonumber \mathcal{H}^{(1)}=&-a^{D-3}F\left[\partial_i\partial_{j}h^{ij}-\nabla^2 h\right]-\frac{1}{a^{D-3}}\frac{1}{F}\frac{\mathcal{P}^2}{2(D-1)^2(D-2)}\left[\pi^{ij}\delta_{ij}-\frac14 (D-5)h-\frac12 (D-1)\frac{F'}{F}\varphi\right]\\
&+\frac{1}{a^{D-1}}\frac{\hat{\mathcal{P}}_{\phi}^2}{2\bar{\Omega}}\left[2\hat{\pi}_{\varphi}-\frac12 h-\frac{\bar{\Omega}'}{\bar{\Omega}}\varphi\right]+a^{D-1}\left[\frac12 h V+V_{,\phi}\varphi\right] +2a^{D-3}F'\nabla^{2}\varphi,
\label{linearnonminimalhamiltonian}
\end{align}
where $\nabla^2=\partial_i \partial^{i}=\delta^{ij}\partial_i\partial_j$. The Hamiltonian up to second order in perturbations is
\begin{align}
\nonumber \mathcal{H}^{(2)}=&-Fa^{D-3}\left[-\frac14 h\nabla^2h+\frac12 h\partial^{i}\partial^{j}h_{ij}-\frac12 h_{ij}\partial^{i}\partial^{l}h_{jl}+\frac14 h^{ij}\nabla^2h_{ij}+\frac{F'}{F}\varphi(\partial^{i}\partial^{j}h_{ij}-\nabla^2h)\right]\\
\nonumber &+\frac{\mathcal{P}^2}{4(D-1)^2a^{D-3}F}\biggl[ \frac12 \pi^{ij}A_{ijkl}\pi^{kl}+\frac{\pi^{ij}}{D-2}(2(D-3)h_{ij}-h\delta_{ij}) +h_i^jh_j^{i}\\
\nonumber & -\frac{D-1}{D-2}\left(\frac14 h_i^jh_j^{i}+\frac18 h^2\right)
-\frac{2F}{D-2}\left(\frac{1}{F}\right)'\varphi(\pi^{ij}\delta_{ij}+h-\frac14 (D-1)h)-\frac{F}{2}\frac{D-1}{D-2}\left(\frac{1}{F}\right)''\varphi^2\biggr]\\
\nonumber & +\frac{\hat{\mathcal{P}}_{\phi}^2}{2a^{D-1}\bar{\Omega}}\left[\hat{\pi}_{\varphi}^2+\frac14 h_i^jh_j^{i}+\frac18 h^2-h\hat{\pi}_{\varphi}+ \bar{\Omega}\left(\frac{1}{\bar{\Omega}}\right)'\varphi(2\hat{\pi}_{\varphi}-\frac12 h)+\frac{\bar{\Omega}}{2}\left(\frac{1}{\bar{\Omega}}\right)''\varphi^2 \right]\\
\nonumber &+a^{D-3}\frac12 \partial^{i}\varphi\partial_i\varphi+a^{D-1}\left[\left(-\frac14 h_i^jh_j^{i}+\frac18 h^2\right)V+\frac12 h \varphi V_{,\phi}+\frac12 V_{,\phi\phi}\varphi^2\right]\\
&+2\partial_i\left(a^{D-3}\left[\frac12 h F'\partial^{i} \varphi -F'h^{ij}\partial_j\varphi+F'' \varphi \partial^{i} \varphi\right]\right),
\label{quadraticnonminimalhamiltonian}
\end{align}
where $A_{ijkl}=\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}-\frac{2}{D-2}\delta_{ij}\delta_{kl}$. Note that the final term in Eq. \eqref{quadraticnonminimalhamiltonian} is a total derivative and therefore vanishes, but we give this term explicitly for later use. Finally the momentum density up to first order in perturbations is
\begin{align}
\mathcal{H}^{i(1)}=\partial^{i}\varphi\frac{1}{a^2}\left(\hat{\mathcal{P}}_{\phi}+\frac{a\mathcal{P}}{D-2}\frac{F'}{F}\right)-\frac{\mathcal{P}}{(D-1)a}\left(\partial_j\pi^{ij}+\partial_jh^{ij}-\frac12 \partial^{i}h\right).
\label{linearnonminimalNiconstraint}
\end{align}
Now that we have found the free action \eqref{quadraticnonminimalaction} we want to make a few remarks:
\begin{itemize}
\item The action \eqref{quadraticnonminimalaction} is quite complicated due to many coupled fields;
\item It is unclear what the dynamical degrees of freedom in Eq. \eqref{quadraticnonminimalaction} are;
\item The action \eqref{quadraticnonminimalaction} is not explicitly gauge invariant.
\end{itemize}
Some clarification is in order. The action \eqref{quadraticnonminimalaction} contains many different fields (e.g. $\pi^{ij}$, $h^{ij}$, $\hat{\pi}_{\varphi}$, $\varphi$, $n$ and $N_i$) coupled in a nontrivial way. We know however that $n$ and $N_i$ are non-dynamical and impose constraints on $h^{ij}$ and $\varphi$. Furthermore $n$ and $N_i$ are completely arbitrary and need to be fixed by imposing gauge conditions \cite{Prokopec:2010be}. The 14-dimensional phase space of $h^{ij}$ and $\varphi$ is therefore reduced to a $14-4-4=6$-dimensional physical phase space. Indeed, a well known result from cosmological perturbation theory is that there is only one dynamical scalar degree of freedom and two dynamical tensor degrees of freedom, corresponding to a 6-dimensional phase space. This is not at all obvious from the action \eqref{quadraticnonminimalaction}. Finally we remark that, being derived from the diffeomorphism invariant action \eqref{nonminimalaction}, the action \eqref{quadraticnonminimalaction} should be gauge invariant (\textit{i.e.} invariant under infinitesimal coordinate transformations). However it is difficult to see this from Eq. \eqref{quadraticnonminimalaction}. All fields transform in a specific way under a coordinate transformation, and only a special combination of the fields is gauge invariant.\\
In order to extract the three dynamical degrees of freedom and show the explicit gauge invariance of the action, we will only have to do one thing: decouple all fields by defining shifted fields that diagonalize the action. As it turns out, the shifted fields will all be gauge invariant and there will only be three dynamical degrees of freedom. As a bonus, the action acquires a nice and simple form. As a start it is convenient to use the scalar-vector-tensor decomposition of the spatial metric~\footnote{\label{footnote:notation}Our notation differs from the one used in most literature where $h=\text{Tr}(h_{ij})=2(D-1)\psi+2\nabla^2 E, h-\nabla^2\tilde{h}=2(D-1)\psi$, see \cite{Mukhanov:1990me}.}, see Ref. \cite{Prokopec:2010be},
\begin{align}
h_{ij}=\frac{\delta_{ij}}{D-1}h+\left(\partial_i\partial_j-\frac{\delta_{ij}}{D-1}\nabla^2\right)\tilde{h}+\partial_{\left(i\right.}h^T_{\left.j\right)}+h_{ij}^{TT},
\label{scalarvectortensordecomposition}
\end{align}
with
\begin{align}
\partial^{i}h_i^T=0,~~~~~~~~~~~~~\partial^{i}h_{ij}^{TT}=0=\partial^{j}h_{ij}^{TT}.
\end{align}
Furthermore we decompose the shift vector $N^{i}$ in its longitudinal and transverse components,
\begin{equation}
N_i=\partial_i S+N_{i}^T,~~~~~~~~~~~~~~~~\text{with}~~~~~~~~~~\partial^{i}N_i^T=0.
\label{shiftdecomposition}
\end{equation}
The action can now be diagonalized by defining shifted fields (see Appendix \ref{sec:appendixDiagonalisingaction} for a derivation and definitions of introduced variables)
\begin{align}
\hat{\pi}_{\varphi}&=\hat{\tilde{\pi}}_{\varphi}-\frac12 \hat{I}_{\varphi}\\
\pi^{ij}&=\tilde{\pi}^{ij}-\frac12\left(I_{ij}-\delta_{ij} I\right)\\
n&=\tilde{n}-\frac{\bar{N}}{2W}I_n\\
\nabla^2S&=\nabla^2\tilde{S}-\frac{1}{2(D-1)}\frac{1}{1-\alpha}\left[J-(D-2)\bar{N}a^2\nabla^2 (\dot{\tilde{h}}-J_{h_{ij}}\tilde{h})\right]\\
\partial_{\left(i\right.}N^T_{\left.j\right)}&=\partial_{\left(i\right.}\tilde{N}^T_{\left.j\right)}+\frac{a^2\bar{N}}{2}\left(\partial_{\left(i\right.}\dot{h}^T_{\left.j\right)} -J_{h_{ij}}\partial_{\left(i\right.}h^T_{\left.j\right)}\right),
\end{align}
and the Sasaki-Mukhanov variable~\footnote{From Footnote \ref{footnote:notation} it follows that Eq. \eqref{comovingCutvperturbation} yields the better known form $\tilde{\varphi}=\varphi-\frac{\dot{\phi}}{H}\psi$.}
\begin{equation}
\tilde{\varphi}=\varphi-
z_0(h-\nabla^2\tilde{h})\label{comovingCutvperturbation},
\end{equation}
where
\begin{equation}
z_0=\frac{\dot{\phi}}{2(D-1)H}.
\label{definitionz0}
\end{equation}
After tedious calculations, of which we present some intermediate results in Appendix \ref{sec:appendixDiagonalisingaction}, we obtain the free action
\begin{align}
\nonumber S^{(2)}&=\int d^{D-1}x \bar{N}dt a^{D-1}\Biggl\{\frac{z^2}{z_0^2}\left[\frac12\dot{\tilde{\varphi}}^2-\frac12 \left(\frac{\partial_i\tilde{\varphi}}{a}\right)^2+\frac12 \frac{z_0}{a^{D-1} z^2}\left[a^{D-1}\frac{z^2}{z_0^2}\dot{z_0}\right]^{\cdot}\tilde{\varphi}^2\right]+\frac{F}{4}\biggl[(\dot{h}_{ij}^{TT})^2-\Bigl(\frac{\partial h_{ij}^{TT}}{a}\Bigr)^2\biggr]\\
&-\frac{\hat{\mathcal{P}}_{\phi}^2}{2a^{2(D-1)}\bar{\Omega}}\hat{\tilde{\pi}}_{\varphi}^2-\frac{\mathcal{P}^2}{4(D-1)^2a^{2(D-2)}F}\tilde{\pi}^{ij} \frac{A_{ijkl}}{2}\tilde{\pi}^{kl}+\frac{W}{a^{D-1}\bar{N}^2}\tilde{n}^2+\frac{F}{a^4 \bar{N}^2}\left((1-\alpha)[\nabla^2 \tilde{S}]^2+[\partial_{\left(i\right.}\tilde{N}^T_{\left.j\right)}]^2\right)\Biggr\},
\label{freeAction}
\end{align}
where
\begin{equation}
z^2=\frac{1}{4(D-1)^2}\frac{\bar{\Omega} \dot{\phi}^2}{(H+\frac{1}{D-2}\frac{F'}{F}\dot{\phi})^2}.
\end{equation}
Note that upon setting $F=\frac12 M_P^2\equiv 1$ we have $z^2\rightarrow z_0^2$, and we recover the well known result from gauge invariant cosmological perturbation theory for a minimally coupled scalar field (see also Ref. \cite{Prokopec:2010be}).\\
The action \eqref{freeAction} is our most important result. When we compare this new free action to the original free action from Eq. \eqref{quadraticnonminimalaction} we can make the following remarks:
\begin{itemize}
\item All shifted fields are decoupled in the action \eqref{freeAction};
\item The only dynamical degrees of freedom in \eqref{freeAction} are 1 scalar and 2 tensor degrees of freedom;
\item All shifted fields in Eq. \eqref{freeAction} are gauge-invariant up to linear order in coordinate transformations.
\end{itemize}
For a proof of the third point we refer the reader to Appendix \ref{sec:appendixgaugeinvariance}. Thus our tedious diagonalization procedure has paid off: we have obtained a simple, explicitly gauge invariant action with one propagating scalar field $\tilde{\varphi}$ and a propagating graviton $h^{TT}_{ij}$. A variation of the action with respect to the non-dynamical $\tilde{\pi}^{ij}$ and $\hat{\tilde{\pi}}_{\varphi}$ gives the linearized Hamilton equations of motion. On the other hand the variation with respect to $\tilde{n}$, $\tilde{S}$ and $\tilde{N}^{T}$ gives the solutions of the linearized constraint equations. Therefore the free action \eqref{freeAction} contains all the properties of linearized inflationary perturbations, as well as the transition between the Hamilton and Lagrange formalism.\\
In the gauge invariant form \eqref{freeAction} the scalar field $\tilde{\varphi}$ can be quantized and the true scalar propagator can be extracted from the action. If we also knew the gauge invariant cubic and quartic vertices (which requires calculating the action up to fourth order in perturbations), we would be able to calculate quantum corrections to the inflaton potential. We would have to work much harder to make the action gauge invariant when we include these higher order interaction terms, and we leave this for future work. We emphasize that the action \eqref{freeAction} is gauge invariant up to linear order in coordinate transformations. If we included higher order terms the free action would still have the same form as Eq. \eqref{freeAction}, but the gauge invariant fields would then also contain combinations of higher order field perturbations. This will affect, for example, canonical quantization. This fact makes the construction of a fully gauge invariant formalism a worthy effort. As a final comment we note that one can extend our treatment to models which contain non-canonical kinetic terms, such as the DBI model \cite{Easson:2009wc}.
\subsection{\label{section:quantumequivalence} Quantum equivalence of Jordan and Einstein frames}
In section \ref{section:classicalequivalence} we showed the classical equivalence of the Jordan and Einstein frame actions in Hamiltonian form. Now we want to demonstrate the quantum equivalence of the Jordan and Einstein frames at the level of the free action. Let us first consider the dynamical scalar $\tilde{\varphi}$ in the Jordan frame free action Eq. \eqref{freeAction}. If we redefine the field $\tilde{\varphi}$ in terms of the comoving curvature perturbation in the Jordan frame $\mathcal{R}$
\begin{equation}
\mathcal{R}=-\frac{\tilde{\varphi}}{z_0}
\end{equation}
the scalar action becomes
\begin{equation}
S^{(2)}_{\mathcal{R}}=\int d^{D-1}x \bar{N}dt a^{D-1}z^2\left[\frac12\dot{\mathcal{R}}^2-\frac12 \left(\frac{\partial_i\mathcal{R}}{a}\right)^2\right].
\label{Jordanframeactioncomoving}
\end{equation}
On the other hand the form of the action for a minimally coupled scalar field (see Ref. \cite{Prokopec:2010be}) is
\begin{align}
\nonumber S^{(2)}_{\mathcal{R}_E}=&\int d^{D-1}x \bar{N}_E dt a_E^{D-1}z_E^2\left[\frac12\dot{\mathcal{R}}_E^2-\frac12 \left(\frac{\partial_i\mathcal{R}_E}{a_E}\right)^2\right],\\
z_E^2=&\frac{1}{4(D-1)^2}\frac{\dot{\phi}_E^2}{H_E^2},
\label{Einsteinframeactioncomoving}
\end{align}
where the dotted derivative here means $\dot{\mathcal{R}}_E=\bar{N}_E^{-1}\partial_0\mathcal{R}_E$ and $\mathcal{R}_E$ is the comoving curvature perturbation in the Einstein frame. The prefactor in the action \eqref{Einsteinframeactioncomoving} only depends on the background fields and can therefore be transformed to a physically equivalent prefactor by performing a conformal transformation. Indeed, we can derive the action in the Jordan frame \eqref{Jordanframeactioncomoving} from the action in the Einstein frame \eqref{Einsteinframeactioncomoving} by a redefinition of the background fields as in Eqs. \eqref{relationlapse}, \eqref{relationscalefactor}, \eqref{relationHubble} and \eqref{relationdotphi}. The free actions \eqref{Jordanframeactioncomoving} and \eqref{Einsteinframeactioncomoving} are however only truly physically equivalent if the comoving curvature perturbation does not change under a conformal transformation, \textit{i.e.} $\mathcal{R}=\mathcal{R}_E$. This can be proved in the following way. If we decompose the conformal factor and the metric in a background plus a perturbed part as in Eqs. \eqref{conformaldecomposition} and \eqref{perturbedgij} we can show that (see also Appendix \ref{sec:appendixconformalinvariance})
\begin{equation}
(h_E-\nabla^2\tilde{h}_E)=(h-\nabla^2\tilde{h})+2(D-1)\frac{\delta\omega}{\bar{\omega}}.
\end{equation}
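This follows directly from linearizing the spatial part of the conformal transformation \eqref{conformaltransformation}: with $a_E=\bar{\omega}a$,
\begin{equation*}
a_E^2\left(\delta_{ij}+h_{ij,E}\right)=\left(\bar{\omega}+\delta\omega\right)^2a^2\left(\delta_{ij}+h_{ij}\right)
\quad\Longrightarrow\quad
h_{ij,E}=h_{ij}+2\frac{\delta\omega}{\bar{\omega}}\delta_{ij}+\mathcal{O}(\delta\omega^2),
\end{equation*}
so that tracing over the $D-1$ spatial indices gives $h_E=h+2(D-1)\delta\omega/\bar{\omega}$, while the pure-trace shift leaves $\tilde{h}$ unchanged.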
Now we want to know how the scalar inflaton fluctuation $\varphi_E$ in the Einstein frame is related to $\varphi$ in the Jordan frame. Suppose we have a scalar field $\Phi_E$ in the Einstein frame action which is a function of the scalar field $\Phi$ in the Jordan frame. We decompose this $\Phi_E$ into a background part plus a small quantum fluctuation by substituting $\Phi=\phi+\varphi$. This gives
\begin{equation}
\Phi_E(\phi+\varphi)=\Phi_E(\phi)+\frac{\partial \Phi_E}{\partial\phi}\varphi=\Phi_E(\phi)+\frac{\partial_0\Phi_E}{\partial_0 \phi}\varphi\equiv \phi_E+\varphi_E.
\end{equation}
This leads to the convenient relation at the level of linearized perturbations
\begin{equation}
\frac{\varphi_E}{\dot{\phi}_E}=\bar{\omega}\frac{\varphi}{\dot{\phi}}=\bar{\omega}\frac{\delta\omega}{\dot{\bar{\omega}}}\label{relationvarphi},
\end{equation}
where the extra factor of $\bar{\omega}$ appears because of the difference between the dotted derivatives in the Einstein and the Jordan frame. The last relation is true since the conformal transformation is a function of $\Phi$, \textit{i.e.} $\omega=\omega(\Phi)$. With this relation and the Hubble parameter in the Einstein frame from Eq. \eqref{relationHubble}, we find
\begin{equation}
\frac{H_E}{\dot{\phi}_E}\varphi_E=\frac{H}{\dot{\phi}}\varphi+\frac{\delta\omega}{\bar{\omega}},
\end{equation}
such that
\begin{equation}
\mathcal{R}_E=(h_E-\nabla^2\tilde{h}_E)-2(D-1)\frac{H_E}{\dot{\phi}_E}\varphi_E =(h-\nabla^2\tilde{h})-2(D-1)\frac{H}{\dot{\phi}}\varphi=\mathcal{R}.
\end{equation}
Thus the comoving curvature perturbation is invariant under a conformal transformation up to linear order in perturbations. This was first proved by Makino and Sasaki\cite{Makino:1991sg} and Fakir et al. \cite{Fakir:1992cg}. In fact one can show that the comoving curvature perturbation is conformally invariant in the fully nonlinear approach, see Ref. \cite{Chiba:2008ia}. Therefore we have established the equivalence of the Jordan and Einstein frame scalar actions at the classical level as well as at the level of quadratic (quantum) fluctuations.\\
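The cancellation underlying $\mathcal{R}_E=\mathcal{R}$ can again be made explicit with a short symbolic check (our notation: \texttt{w}, \texttt{wdot}, \texttt{dw} stand for $\bar{\omega}$, $\dot{\bar{\omega}}$, $\delta\omega$):

```python
# Check that (H_E/phidot_E)*varphi_E = (H/phidot)*varphi + dw/w, the key step
# in the frame invariance of the comoving curvature perturbation.
import sympy as sp

D, H, w, wdot, dw = sp.symbols('D H w wdot dw', positive=True)

varphi_over_phid = dw/wdot            # Eq. (relationvarphi): varphi/phidot = dw/wdot
HE = (H + wdot/w)/w                   # Eq. (relationHubble)
ratio_E = HE*(w*varphi_over_phid)     # (H_E/phidot_E)*varphi_E

assert sp.simplify(ratio_E - (H*varphi_over_phid + dw/w)) == 0

# hence the 2(D-1)*dw/w shifts of the metric and scalar pieces cancel in R_E
RE_minus_R = 2*(D - 1)*dw/w - 2*(D - 1)*(ratio_E - H*varphi_over_phid)
assert sp.simplify(RE_minus_R) == 0
```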
Now that we have checked the physical equivalence for the scalar sector, let us see how the rest of the action \eqref{freeAction} transforms under the conformal transformation. First of all, let us give the Einstein frame action for the graviton and constraint fields,
\begin{align}
\nonumber S^{(2)}=\int d^{D-1}x \bar{N}_Edt a_E^{D-1}\Biggl\{&\frac{1}{4}\biggl[(\dot{h}_{ij,E}^{TT})^2-\Bigl(\frac{\partial h_{ij,E}^{TT}}{a_E}\Bigr)^2\biggr]-\frac{\hat{\mathcal{P}}_{\phi,E}^2}{2a_E^{2(D-1)}}\tilde{\pi}_{\varphi,E}^2-\frac{\mathcal{P}_E^2}{4(D-1)^2a_E^{2(D-2)}}\tilde{\pi}_E^{ij} \frac{A_{ijkl}}{2}\tilde{\pi}_E^{kl}\\
&+\frac{W_E}{a_E^{D-1}\bar{N}_E^2}\tilde{n}_E^2+\frac{1}{a_E^4 \bar{N}_E^2}\left((1-\alpha_E)[\nabla^2 \tilde{S}_E]^2+[\partial_{\left(i\right.}\tilde{N}^T_{\left.j\right),E}]^2\right)\Biggr\},
\label{freeActionEinstein}
\end{align}
where $W_E$ and $\alpha_E$ are defined in Eqs. \eqref{definitionW} and \eqref{definitionalpha} of Appendix \ref{sec:appendixDiagonalisingaction}, where the subscript $E$ denotes that these quantities depend on the Einstein frame background fields. Now we perform the conformal transformation of the action \eqref{freeActionEinstein} using Eqs. \eqref{relationlapse}, \eqref{relationscalefactor} and \eqref{relationEinsteinframemmomentumP} - \eqref{relationpotential}. We find that the Einstein frame action transforms to the Jordan frame action \eqref{freeAction} if the graviton and constraint fields transform as
\begin{align}
\tilde{n}_E&=\bar{\omega}\tilde{n}\label{conformaltransfn}\\
\tilde{S}_E&=\bar{\omega}^2 \tilde{S}\\
\tilde{N}^T_{i,E}&=\bar{\omega}^2\tilde{N}^T_{i}\label{conformaltransformNiT}\\
h_{ij,E}^{TT}&=h_{ij}^{TT}\label{conformaltransfhij}\\
\tilde{\pi}_{\varphi,E}&=\hat{\tilde{\pi}}_{\varphi}\\
\tilde{\pi}_E^{ij}&=\tilde{\pi}^{ij}\label{conformaltransfpiij}.
\end{align}
In Appendix \ref{sec:appendixconformalinvariance} we show that the graviton and constraint fields transform in precisely this way under a conformal transformation\footnote{The gauge invariant lapse perturbation $\tilde{n}$ can be made invariant under a conformal transformation if we define the lapse perturbation as $N=\bar{N}(t)(1+n(t,\bvec{x}))$ as compared to Eq. \eqref{perturbedN}. Moreover, the lapse part of the free action would be time reparametrization invariant. Furthermore we could have defined the shift perturbations as $N_i=a\bar{N}n_i$, where the diagonalized shift perturbation $\tilde{n}_i$ would also be invariant under a conformal transformation and the action as a whole would be time reparametrization invariant. Note also that, with these definitions of the perturbed fields, every spatial derivative in the action \eqref{freeAction} as well as \eqref{quadraticnonminimalaction} contains a factor $a^{-1}$.}. Therefore the complete free Jordan frame \eqref{freeAction} and Einstein frame \eqref{Einsteinframeactioncomoving}+\eqref{freeActionEinstein} actions are physically equivalent.
{\section{Higgs inflation}\label{sec:Higgsinflation} }
Recently Bezrukov and Shaposhnikov \cite{Bezrukov:2007ep} revived the old idea by Salopek, Bond and Bardeen \cite{Salopek:1988qh} that the Higgs boson can be the inflaton field if it is nonminimally coupled to gravity. The requirement for Higgs inflation is a large nonminimal coupling $|\xi|\gg 1$, which ensures the flatness of the Higgs potential for large field values. Since then there has been much debate about whether or not quantum corrections destroy the flatness of the Higgs potential, thereby spoiling Higgs inflation. One- and two-loop corrections have been calculated in both the Einstein \cite{Bezrukov:2008ej,DeSimone:2008ei,Bezrukov:2009db} and Jordan \cite{Barvinsky:2008ia,Barvinsky:2009fy,Barvinsky:2009ii} frames. Although there is some debate about the calculational methods, all loop calculations predict that Higgs inflation is valid if the Higgs mass lies in a specific range, testable at the LHC.\\
In Refs. \cite{Bezrukov:2008ej,DeSimone:2008ei,Bezrukov:2009db,Barvinsky:2008ia,Barvinsky:2009fy,Barvinsky:2009ii} the validity of Higgs inflation was tested in the inflationary regime, where the Higgs boson has a large expectation value $\langle H \rangle\geq M_P/\sqrt{|\xi|}$ and slowly rolls down the inflaton potential. Recently, however, the validity of Higgs inflation was questioned \cite{Burgess:2009ea,Lerner:2009na,Hertzberg:2010dc,Burgess:2010zq} in the small-field limit, where the Higgs expectation value is $\langle H \rangle=v=246~\text{GeV}$. Hertzberg \cite{Hertzberg:2010dc} considered the general case of a theory with multiple scalar fields. It was found that, for the pure gravity and kinetic sectors, the small-field effective theory has a cut-off at an energy scale of $M_P$ if there is only one scalar field, but when more than one scalar field is involved the cut-off is $M_P/|\xi|$. In the Jordan frame the cut-off $M_P/|\xi|$ can be almost directly read off from the scalar-graviton interaction term. However, when considering scalar-scalar scattering via graviton exchange, the lowest order diagrams add up to zero for a single scalar field, so the actual cut-off scale is $M_P$. In the Einstein frame this is even clearer: after the conformal transformation the cut-off appears in a dimension-6 scalar kinetic term, but this term can be removed via a nonlinear field redefinition.\\
In the case of multiple scalar fields the above reasoning no longer applies. In the Jordan frame the lowest order diagrams do not vanish because the scalar fields are not identical, giving the cut-off $M_P/|\xi|$. In the Einstein frame the unitarity violating kinetic term cannot be removed by a field redefinition, because it is in general not possible to bring the kinetic term into canonical form for multiple scalar fields (see Ref. \cite{Kaiser:2010ps} for more details). The arguments above apply to the pure gravity and kinetic sectors of the theory, but even for the single field case Hertzberg \cite{Hertzberg:2010dc} finds that scalar self-interactions due to the non-polynomial potential in the Einstein frame most likely cause unitarity problems at the scale $M_P/|\xi|$.
\\
Now we switch to the Standard Model. In this case the Higgs doublet contains in principle 4 scalar fields, but the 3 Goldstone bosons are eaten up by the $W^{\pm}$ and $Z$ bosons. Therefore one might wonder if the cut-off shows up in the terms containing these gauge bosons. Indeed, Burgess, Lee and Trott \cite{Burgess:2010zq} showed that in the Standard Model the cut-off scale $M_P/|\xi|$ appears in the Higgs-gauge interactions.\\
Now the crucial point is that the cut-off $M_P/|\xi|$ of the small-field effective theory is very close to the energy scale at the end of inflation $H_{\text{end}}\simeq \sqrt{\lambda/12} M_P/|\xi|$ (where $0.11<\lambda\lesssim 0.27$ at the electroweak scale), which is also the point where the small-field limit becomes valid. This means that higher order operators, needed to solve the unitarity problems at the cut-off scale $M_P/|\xi|$ in the small-field effective theory, will affect the inflationary theory and thereby destroy Higgs inflation. Therefore it seems that Higgs inflation is ruled out as a valid theory.\\
In contrast to the previous arguments, Bezrukov et al. \cite{Bezrukov:2010jz} very recently showed that the effective cut-off actually depends on the expectation value of the Higgs inflaton field. An intermediate region was identified for field values $M_P/|\xi|<\langle \phi \rangle < M_P/\sqrt{|\xi|}$, where the cut-off scale is $\Lambda = |\xi|\langle\phi\rangle^2/M_P$. The authors showed that all relevant energy scales throughout the evolution of the universe are below the corresponding cut-off scale. Still, quantum corrections could spoil the unitarity of Higgs inflation, and a systematic way of obtaining quantum loop corrections has been proposed.\\
Considering the ongoing discussion about the unitarity of Higgs inflation, we would like to make a few remarks. First of all, there are so far no rigorous calculations of quantum corrections to the Higgs potential or Higgs-gauge interactions in the small-field limit ($\langle \phi \rangle < M_P/|\xi|$) or the intermediate region ($M_P/|\xi|<\langle \phi \rangle < M_P/\sqrt{|\xi|}$). Secondly, the cut-off scale is found in the Jordan frame by considering Higgs-graviton interactions. As we have shown before, the inflaton perturbation actually combines with the scalar part of the metric to form one `gauge' invariant variable. Therefore, in order to consistently calculate quantum corrections to either the Higgs potential or Higgs-gauge interactions, we need to construct the completely diffeomorphism\footnote{We use the terminology ``diffeomorphism invariance'' here instead of the previously used ``gauge invariance'' in order to avoid confusion with the well known concept of gauge freedom in the Standard Model.} invariant Higgs action. In the previous section we derived the free action for a single inflaton field. In this section we apply this to the Standard Model Higgs action with a nonminimal coupling to gravity. The action reads
\begin{equation}
S=\int d^D x \sqrt{-g}\left\{-\left(\frac{M_P^2}{2}-\xi H^{\dagger} H\right)R-g^{\mu\nu}(D_{\mu} H)^{\dagger}D_{\nu} H -\lambda \left(H^{\dagger} H-\frac{v^2}{2}\right)^2\right\},
\label{Higgsaction}
\end{equation}
where $H$ is the complex Higgs doublet with vev $\langle H \rangle_0=v/\sqrt{2}$ and
\begin{equation}
D_{\mu}H=\left(\partial_{\mu}-igA_{\mu}^{a}\tau^{a}-i\frac12 g' B_{\mu}\right)H,
\end{equation}
is the covariant derivative with $A_{\mu}^{a}$ and $B_{\mu}$ the $SU(2)$ and $U(1)$ gauge bosons with coupling constants $g$ and $g'$, and $\tau^{a}=\sigma^{a}/2$.\\
Now, in conventional chaotic inflationary scenarios the inflaton field is a real scalar field with a large classical expectation value. The Higgs doublet in Eq. \eqref{Higgsaction} contains two complex scalar fields, so it is not clear which field plays the role of the inflaton. This becomes more obvious when we choose the following decomposition of the Higgs doublet
\begin{equation}
H=\frac{\Phi}{\sqrt{2}}\exp{({i\tau^{a} \alpha^{a}})}\cdot \left(\begin{array}{c} 0\\ 1 \end{array}\right),
\label{Higgsdecomposition}
\end{equation}
where $\Phi$ and the $\alpha^{a}$ are now four real scalar fields and the projection vector $(0,1)^T$ ensures that $H$ is a doublet. In this decomposition it is easy to see that $H^{\dagger} H=\frac12 \Phi^2$. Furthermore, by a redefinition $\tilde{A}_{\mu}^{a}=A_{\mu}^{a}-\frac{1}{g}\partial_{\mu}\alpha^{a}-i\alpha^{a}A_{\mu}^{b}[\tau^{a},\tau^{b}]$ we can absorb the three would-be Goldstone bosons $\alpha^{a}$ into the gauge bosons $A_{\mu}^{a}$. In fact, we can always perform an $SU(2)$ rotation on the Higgs doublet in Eq. \eqref{Higgsdecomposition} such that the three would-be Goldstone bosons disappear, which corresponds to fixing the unitary gauge. If we now define
\begin{align}
W^{\pm}_{\mu}&=\frac{1}{\sqrt{2}}\left(A_{\mu}^{1}\mp i A_{\mu}^{2}\right)\\
Z_{\mu}^{0}&=\frac{1}{\sqrt{g^2+g^{\prime 2}}}\left(g A_{\mu}^{3}-g'B_{\mu}\right),
\end{align}
the action \eqref{Higgsaction} becomes
\begin{align}
S=\int d^D x \sqrt{-g}\Biggl\{& -\frac12(M_P^2-\xi \Phi^2)R-\frac12 g^{\mu\nu}\partial_{\mu} \Phi\partial_{\nu}\Phi -\frac14\lambda (\Phi^2-v^2)^2\\
&-\frac{m_W^2}{v^2}g^{\mu\nu}W_{\mu}^{+}W_{\nu}^{-}\Phi^2+\frac12 \frac{m_Z^2}{v^2} g^{\mu\nu}Z^0_{\mu}Z^0_{\nu}\Phi^2\Biggr\},
\label{Higgsactionunitary}
\end{align}
with $m_W^2=\frac14 g^2 v^2$ and $m_Z^2=\frac14 (g^2+g^{\prime 2})v^2$. We see that the first part of the action \eqref{Higgsactionunitary} is equal to the action \eqref{nonminimalaction} with the identification $F(\Phi)=\frac12(M_P^2-\xi \Phi^2)$ and $V(\Phi)=\frac14\lambda (\Phi^2-v^2)^2$. The second part contains the Higgs-gauge interaction terms. If we now want to calculate the free gauge invariant action for the Higgs sector of the SM we can simply do the expansions \eqref{perturbedpij}-\eqref{perturbedN} with $\Phi=\phi(t)+\varphi(t,\bvec{x})$, which will result in \eqref{freeAction}. The field $\varphi$ is in this case identified with the Higgs boson and $\phi$ is the classical background field with vev $v$. Since the gauge bosons $W^{\pm}_{\mu}$ and $Z^{0}_{\mu}$ are pure fluctuations, the free action \eqref{freeAction} will have additional terms
\begin{equation}
S^{(2)}_{\text{add}}=\int d^{D-1}x dt a^{D-1}\Biggl\{-\frac{m_W^2}{v^2}\bar{g}^{\mu\nu}W_{\mu}^{+}W_{\nu}^{-}\phi^2+\frac12 \frac{m_Z^2}{v^2} \bar{g}^{\mu\nu}Z^0_{\mu}Z^0_{\nu}\phi^2\Biggr\},
\end{equation}
where $\bar{g}^{\mu\nu}=\text{diag}(-\bar{N}^{-2},a^{-2}\delta^{ij})$ is the background ADM metric. So in the end we have shown that the free Standard Model with a nonminimally coupled Higgs boson, which has a local $SU(2)$ symmetry, can be written in terms of one diffeomorphism invariant scalar $\tilde{\varphi}$ \eqref{comovingCutvperturbation} and three mass terms for the gauge bosons. The free action can be used to extract the diffeomorphism invariant propagators for the Higgs inflaton field and gauge bosons. If we want to calculate quantum corrections to the free propagators and the Higgs potential in an invariant manner, we need to find the gauge invariant action up to third and fourth order in perturbations. We will leave this for future work. The analysis in this section shows that, when the backreaction from the $W^{\pm}$ and $Z$ bosons is neglected, the single scalar field and $SU(2)$ Higgs doublet lead to identical quadratic actions for cosmological perturbations in nonminimally coupled models.\\
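As a cross-check on the identifications $m_W^2=\frac14 g^2 v^2$ and $m_Z^2=\frac14 (g^2+g^{\prime 2})v^2$, one can expand the covariant kinetic term of \eqref{Higgsaction} in the unitary gauge, $H=(0,\Phi/\sqrt{2})^T$; this is a standard computation, reproduced here for convenience:

```latex
(D_{\mu}H)^{\dagger}D^{\mu}H
  = \frac12\,\partial_{\mu}\Phi\,\partial^{\mu}\Phi
  + \frac{g^{2}}{4}\,W^{+}_{\mu}W^{-\mu}\,\Phi^{2}
  + \frac{g^{2}+g^{\prime 2}}{8}\,Z^{0}_{\mu}Z^{0\mu}\,\Phi^{2}
  = \frac12\,(\partial\Phi)^{2}
  + \frac{m_W^{2}}{v^{2}}\,W^{+}_{\mu}W^{-\mu}\,\Phi^{2}
  + \frac12\,\frac{m_Z^{2}}{v^{2}}\,Z^{0}_{\mu}Z^{0\mu}\,\Phi^{2},
```

so that setting $\Phi \to v$ reproduces the physical gauge-boson masses, matching the coefficients $m_W^2/v^2$ and $\frac12 m_Z^2/v^2$ appearing in the action \eqref{Higgsactionunitary} and in $S^{(2)}_{\text{add}}$.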
The approach that we used in this section can be applied in a much more general setting than the Standard Model. In fact, any theory with a local $SU(N)$ or $SO(N)$ symmetry (for example GUT theories) can be written in terms of one dynamical scalar and a number of massive gauge bosons if we use the suitable generalization of \eqref{Higgsdecomposition}. Therefore it is always possible in these theories to have a single light inflaton field, thus opening the way for an inflationary era.
\bigskip
\bigskip
{\section{Discussion}\label{sec:discussion}}
In this paper we derived the free perturbation action for the nonminimally coupled inflaton field in the Jordan frame. By working in the ADM formalism we could split the metric tensor into scalar, vector, tensor and constraint fields. By performing a diagonalization procedure we obtained the main result \eqref{freeAction}. In this form the action is explicitly gauge invariant up to linear order in coordinate transformations and the only propagating degrees of freedom are the scalar inflaton field and the graviton. We showed that the Jordan frame action can be derived from the Einstein frame action by performing a conformal transformation of the metric, thereby establishing the physical equivalence of the two frames at the level of quadratic fluctuations.\\
In order to calculate quantum corrections to the inflaton potential we need an unambiguous, gauge invariant action. So far we have obtained the free gauge invariant action; what remains is the higher order gauge invariant action, with which one can perform definite calculations that will tell us whether or not nonminimal inflation is spoiled by quantum corrections. The one-loop effective inflaton potential was already found in de Sitter space in the limit where $|\xi|\ll 1$, see Ref. \cite{Bilandzic:2007nb}; what remains to be done is to calculate $V_{\text{eff}}$ in the limit $|\xi|\gg 1$ for more general cosmological backgrounds \cite{Janssen:2009pb,Sloth:2006az,Seery:2007wf,Burgess:2009bs}. These thorough calculations are especially needed in the context of the Higgs inflation model, which at the moment seems to possess troublesome large quantum corrections to the small field effective theory. We hope to give a definite conclusion concerning this question in future work.
\bigskip
\bigskip
\section*{Acknowledgements}
We would like to thank Sander Mooij, Marieke Postma and Damien George for useful discussions, and especially for their significant contribution to section \ref{sec:Higgsinflation}. This research was supported by the Dutch Foundation for `Fundamenteel Onderzoek der Materie' (FOM) under the program ``Theoretical particle physics in the era of the LHC'', program number FP 104.
\section{Introduction}
In the near future, robots will interact, explore, and cooperate with human beings, and they will permeate people's lives and our societies. These applications call for new social functions in robotic systems. Several studies have explored social navigation based on robots being aware of people \cite{sisbot2007human}\cite{nonaka2004evaluation} \cite{shi2008human}. Several studies introduced the concept of social distance, or proxemics, as a personal space that influences robot navigation performance \cite{mumm2011human} \cite{rios2015proxemics}. These kinds of studies highlighted the need to consider the social and cognitive side of people for effective navigation. However, they do not consider people's awareness of the robot, which we propose as a new metric for robot navigation.
To insert human parameters (social parameters) into planning problems, a human-aware motion planner has been used not only to provide safe robot paths, but also to synthesize socially-acceptable and legible paths in the presence of humans \cite{sisbot2007human}. Another study characterizes the concepts of comfort, naturalness, and sociability of navigation performance \cite{kruse2013human}. A popular method considers the above social characteristics to build a cost map in combination with a conventional sampling-based planner \cite{mainprice2011planning}. On the other hand, there exist other approaches that utilize human teaching, in which the robot learns to navigate from demonstrations or feedback \cite{chernova2009interactive}. Although these kinds of studies have shown the possibility of social navigation, real-time motion planning in human-crowded environments has still not reached a satisfactory level, especially when navigating in crowds.
\begin{figure}[t]
\centering
{\includegraphics[width=1.0\linewidth]{front.png}}
\caption{Ambiguous situation in navigation: who has the lowest trust in the robot? A: a person riding a bike at high speed? B: a couple talking to each other? C: a person who is quite close to the robot? By incorporating notions of awareness, a robot can navigate confidently in each situation while maintaining a level of comfort for the surrounding people.}
\label{Figure:Problem definition}
\end{figure}
For planning motions of mobile robots in dynamic situations, many studies regarding the prediction of pedestrians have been conducted, from a social force model \cite{helbing1995social} to machine learning-based estimations \cite{scovanner2009learning} \cite{keller2014will}. However, they did not consider a human's awareness state, which can be a major factor in human-robot interaction. Although the study presented in \cite{nonaka2004evaluation} uses mental states, it does not use them for navigation.
In fact, a human mental state is quite difficult to examine because it is fundamentally uncertain, owing to the unobservable and unpredictable nature of mental states; the same state could produce many different actions, while many different mental states could lead to the same action. This ambiguous problem can, however, be simplified if it is confined to the problem of navigation under suitable assumptions. We assume that a mental state can be determined only by eye contact between two agents. This assumption can be justified by studies from the social cognitive field \cite{macrae2002you} \cite{baker2014modeling} \cite{hollands2002look}, which confirm that eye contact plays an important role in anticipating others' intentions during navigation. Based on this point, the proposed idea is quite simple: equip robots with this eye contact model to predict the future movement of humans, by defining a mental state through the concept of awareness.
Many researchers have already pointed out that probabilistic representations of target states and reasoning over them are quite effective for navigation of mobile robots in dynamic environments. In situations of probabilistic decision making, Partially Observable Markov Decision Processes (POMDPs) have been widely used in robot navigation and human interactions \cite{foka2007real}. Since finding optimal control strategies in POMDP cases is computationally intractable due to the continuous and high dimensional belief space, POMDPs have usually been applied to topological navigation \cite{pineau2003point}.
In this paper, we propose a navigation planning framework for mobile robots based on human detection, POMDP, and human awareness estimated from an eye contact (gaze) model.
More specifically, the proposed approach can be applied to human-crowded environments, including moving pedestrians and dynamic obstacles. The main contribution of this study is to integrate human state estimation, obtained from real-time human detection and tracking, with navigation planning in order to manage the uncertainties, in both position and awareness, arising from people. In particular, the concept of human awareness of a robot is incorporated in the state model and the reward model of the POMDP to improve social navigation performance in a way inspired by humans.
This paper is organized as follows. In Section II, we provide background for the proposed methods, covering the Markov Decision Process (MDP) and POMDP models. Section III presents the framework of the proposed model, including detailed algorithms and the method for measuring human awareness based on gaze detection. In Section IV, the proposed methodology is validated in simulation and on real hardware. We discuss the simulation results in terms of the effect of awareness on the navigation process.
\section{Background}
\subsection{MDP $\&$ POMDP}
The basic concept of an MDP is that of a decision-making problem formulated as a set of states and actions with defined costs. A crucial assumption in an MDP is that the transition of states is a Markov process: the future distribution of states is conditionally independent of the history of states and affected only by the current state. Highly probable states are determined by a reward (so-called ``value'') function. The goal of the agent is to select the action that generates the maximum value over a predetermined time horizon.
A Partially Observable MDP, or POMDP, extends the MDP so that it can be applied to the real world, which involves many uncertainties and disturbances. States usually cannot be measured directly, so we have to rely on imperfect sensors to perceive the environment. A POMDP model can be described by the tuple ($S, \pi, A, T, Z, O, R$): a finite set of states $S=\{s_1,\cdots s_{|S|}\}$, an initial probability distribution $\pi$ over these states, a finite set of actions $A=\{a_1,\cdots a_{|A|}\}$, a finite set of observations $Z=\{z_1,\cdots z_{|Z|}\}$, a transition function $T$, an observation function $O$, and a reward function $R$.
The transition model $T(s',a,s)$ specifies the conditional probability of shifting from state $s$ to $s'$ by applying action $a$, and $O(s',a,z)$ is the observation mapping that gives the probability of observing $z$ in state $s'$ after executing action $a$. The transition model and observation model can thus be written as $T(s',a,s)= P(s'|s,a)$ and $O(s',a,z) = P(z|s',a)$,
where $s \in S$, $a \in A$, $z \in Z$.
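Given $T$ and $O$, the belief over states is updated after each action $a$ and observation $z$ via Bayes' rule, $b'(s') \propto O(s',a,z)\sum_{s}T(s',a,s)\,b(s)$. A minimal sketch in Python (the two-state transition and observation numbers are illustrative, not taken from this work):

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """Bayesian belief update: b'(s') ∝ O(s', a, z) * sum_s T(s, a, s') * b(s).

    T[s, a, s'] = P(s'|s, a) and O[s', a, z] = P(z|s', a)."""
    predicted = T[:, a, :].T @ b           # marginalize over current states s
    unnormalized = O[:, a, z] * predicted  # weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Toy two-state, one-action example (illustrative numbers only).
T = np.zeros((2, 1, 2)); T[0, 0] = [0.9, 0.1]; T[1, 0] = [0.2, 0.8]
O = np.zeros((2, 1, 2)); O[0, 0] = [0.8, 0.2]; O[1, 0] = [0.3, 0.7]
b = np.array([0.5, 0.5])
b_new = belief_update(b, a=0, z=0, T=T, O=O)
```

The local planner described later maintains exactly such a belief, except that exact updates are replaced by sampled scenarios for tractability.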
\subsection{DESPOT}
An on-line approach to solving a POMDP is to interleave planning and execution: the optimal action is computed and executed based on the current belief state, which is updated recursively over time. These on-line methods apply algorithmic techniques for computational efficiency, for example heuristic search and branch-and-bound pruning \cite{ross2008online}, Monte Carlo sampling as in POMCP \cite{silver2010monte}, and DESPOT \cite{somani2013despot}, which are among the fastest recent on-line POMDP methods.
We adopted DESPOT as the on-line POMDP solver. The key concept of DESPOT is to evaluate all policies under $K$ sampled scenarios. Under each scenario, a policy traces out a path in the belief tree consisting of a particular sequence of actions and observations. A DESPOT is the tree that keeps only the belief-tree nodes and edges generated by all policies under the sampled scenarios. For a belief tree of height $H$, the DESPOT is much sparser, containing only $O(|A|^HK)$ nodes compared to the $O(|A|^H|Z|^H)$ nodes of the original belief tree, which leads to a dramatic improvement in computational efficiency for moderate $K$ values. Equally importantly, it can be proven that a small DESPOT tree is sufficient to generate a near-optimal policy, which admits a compact representation, with bounded regret \cite{somani2013despot}.
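The size reduction can be made concrete with a quick back-of-the-envelope computation; the values of $|A|$, $|Z|$, $H$ and $K$ below are hypothetical, chosen only to illustrate the scaling:

```python
# Hypothetical problem size: |A| actions, |Z| observations, tree height H,
# and K sampled scenarios.
num_actions, num_obs, height, K = 3, 10, 5, 500

full_tree = num_actions**height * num_obs**height  # O(|A|^H |Z|^H) nodes
despot = num_actions**height * K                   # O(|A|^H K) nodes kept

ratio = full_tree / despot                         # reduction factor: 200x here
```

Even for this modest example the full belief tree has tens of millions of nodes while the DESPOT keeps about a hundred thousand, which is what makes on-line planning feasible.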
\begin{figure}[b]
\centering
{\includegraphics[width=0.98\linewidth]{flowchart_new.png}}
\caption{Overall flowchart of the proposed framework: the global planner calculates a global path while the local planner seeks the optimal action policy. States for the local planner (POMDP) are determined from the combination of the human position and awareness, obtained from the human tracker (dotted block), and the dynamic occupancy grid. Finally, the global path and the optimal action policy are delivered to the robot navigation controller.}
\label{Figure:OverallFramework}
\end{figure}
\section{Methods}
\subsection{Overall framework}
Our framework consists of two layered planners: a global path planner based on MDP and a local planner based on POMDP, as shown in Fig.~\ref{Figure:OverallFramework}. The global planner generates a collision-free path based on the environment, which can be known beforehand or perceived in advance and is described as a static occupancy grid map containing only static obstacles. The local planner, on the other hand, acts as a reactive planner to cope with the various situations that can occur during real-time navigation. Human tracking continuously detects and tracks people in a local window, a relatively small area around the robot, so that the robot can react to the movement of people. The POMDP-based local planner searches for the action policy that maximizes the expected reward based on the belief states estimated from observations. The proposed framework discretizes the global path into bounded sub-path segments until the robot arrives at the final goal position. Based on the action policy calculated by the local planner, the robot navigation controller finds the input command for the robot to reach the desired position.
\begin{algorithm}[t]
\caption{MDP Planner ($\mathcal{M}$)}
\begin{algorithmic}
\Procedure {MDPsolve}{}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require {$S$$\,$(state), $A$$\,$(Action), $R$$\,$(Reward), $T$$\,$(Transition)}
\Ensure{$\pi^*(s)$ : The Optimal action policy}
\State $V_0(s) \leftarrow 0$, $\,$ $k \leftarrow 0$
\Repeat
\State $k \leftarrow k+1$
\ForAll {$s \in S$}
\State $V_k(s) =\max_{a \in A} \sum_{s^{'}}T(s,a,s^{'})\left[R(s,a,s^{'})+\gamma V_{k-1}(s^{'})\right]$
\EndFor
\Until{$\forall s \; \|V_k(s)-V_{k-1}(s) \| < \epsilon $}
\ForAll {$s \in S$}
\State $\pi^{*}(s)=\arg\max_{a \in A} \sum_{s^{'}}T(s,a,s^{'})\left[R(s,a,s^{'})+\gamma V_{k}(s^{'})\right]$
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
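Algorithm 1 can be sketched compactly in Python; the three-state chain and its rewards below are illustrative stand-ins for the occupancy grid, not the actual map:

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, eps=1e-6):
    """Solve an MDP by value iteration.

    T[s, a, s'] are transition probabilities, R[s, a, s'] rewards.
    Returns the converged value function V and the greedy policy pi*."""
    S, A, _ = T.shape
    V = np.zeros(S)
    while True:
        # Q[s, a] = sum_s' T[s, a, s'] * (R[s, a, s'] + gamma * V[s'])
        Q = np.einsum('sap,sap->sa', T, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < eps:
            break
        V = V_new
    return V_new, Q.argmax(axis=1)

# Toy deterministic 3-state chain: action 0 = stay, action 1 = move right;
# state 2 is an absorbing goal, and any transition into it pays reward 1.
T = np.zeros((3, 2, 3))
for s in range(3):
    T[s, 0, s] = 1.0                  # stay in place
    T[s, 1, min(s + 1, 2)] = 1.0      # move right, absorb at the goal
R = np.zeros((3, 2, 3))
R[:, :, 2] = 1.0                      # reward for ending up in the goal cell
V, pi = value_iteration(T, R)
```

As expected, the greedy policy moves right in every non-goal state, mirroring how $\pi^*(s)$ points each free grid cell toward the goal.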
\subsection{Global Path Planning: MDP}
The 2D environment can be represented by an occupancy grid $o(i,j)$, where $i$ and $j$ are the 2D coordinates of the grid map \cite{elfes1989using}. The occupancy value is 0 if the cell is free, and 1 or 2 if it is occupied by an obstacle or a human, respectively. Assuming that no human is present and that mapping and localization are done in advance, the MDP-based global planner finds a collision-free path over this grid occupancy, as shown in Fig. \ref{Figure:MDP_path}. The start position is the current robot position, and the goal position can be set through a graphical user interface. In effect, this MDP planner obtains a mapping from the occupancy grid to an action policy ($\mathcal{M}:\Re^1 \rightarrow \Re^1$). Value iteration can be used to solve the MDP problem, as shown in Algorithm 1. Since we have optimal solutions for all the lattice points of the grid map, the robot is able to cope flexibly with the environment. Every cell that is not occupied by an obstacle is then assigned a desired action policy:
\begin{equation}
\bar{S}(i,j)=
\Big\{ \begin{array}{cl} \pi^*(s) \in A & \textrm{if} \quad o(i,j) = 0 \quad (Free) \\ \phi & \textrm{otherwise} \quad (Occupied) \end{array}
\end{equation}
where $A = \{E,EN,N,NW,W,WS,S,SE\}$ is the set of eight possible actions and $o(i,j)$ denotes the occupancy value. Starting from the robot's current position, a desired path can be generated by choosing consecutive grid cells until the goal position is reached. This path can be represented as
\begin{equation}
\bm{P} = \left\{ \left(p^1_x, p^1_y \right), \left(p^2_x, p^2_y \right)\cdots, \left(p^n_x, p^n_y \right) \right\}_{ \mathcal{D}_{grid}}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width = 0.88\linewidth]{global_path.png}
\caption{ Global path (MDP solution) in a grid space, and the robot view.}
\label{Figure:MDP_path}
\end{figure}
where $\mathcal{D}_{grid}$ is the resolution of the grid, set to 0.75 (m) in this study. We selected this value so that the actual mobile robot hardware fits inside one 0.75 (m) $\times$ 0.75 (m) cell, which is reasonable for real hardware implementation. Based on this path information, we construct a smooth path with cubic spline interpolation. Fig. \ref{Figure:MDP_path} shows the collision-free path around static obstacles.
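Since $\bar{S}(i,j)$ stores an action for every free cell, generating the grid path amounts to repeatedly applying the stored action until the goal cell is reached. A sketch of this step (the action-to-offset table and the toy east-corridor policy are illustrative, not the paper's map):

```python
# Map each of the eight action labels to a grid offset (row, col).
# The particular offsets assume row 0 is at the top; this is an assumption.
ACTIONS = {'E': (0, 1), 'N': (-1, 0), 'W': (0, -1), 'S': (1, 0),
           'EN': (-1, 1), 'NW': (-1, -1), 'WS': (1, -1), 'SE': (1, 1)}

def extract_path(policy, start, goal, max_steps=100):
    """Follow the stored optimal action at each free cell until the goal.

    `policy` maps a free cell (i, j) to one of the eight action labels."""
    path, cell = [start], start
    while cell != goal and len(path) <= max_steps:
        di, dj = ACTIONS[policy[cell]]
        cell = (cell[0] + di, cell[1] + dj)
        path.append(cell)
    return path

# Toy policy: a straight east corridor where every cell points East.
policy = {(0, j): 'E' for j in range(4)}
path = extract_path(policy, start=(0, 0), goal=(0, 4))
```

The resulting cell sequence corresponds to $\bm{P}$ above and is then smoothed by the cubic spline step.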
\subsection{People Detection \& Tracking}
In order to successfully navigate a human-robot coexisting environment, real-time detection and tracking of people are essential. In practice, tracking a human is quite a challenging task due to the limited sensor visibility, noise in sensor readings, possible target occlusions, and confusion between multiple targets. Sensor fusion of RGB-D camera and laser scan data is used to detect and track humans around the robot; this fusion improves the accuracy of detecting and tracking people.
\subsubsection{Vision-based Detection}
For vision-based detection, a state-of-the-art deep learning based real-time object detection algorithm, You Only Look Once (YOLO) \cite{redmon2016you}, is implemented to detect the presence of people. This algorithm has the advantage of accurately recognizing a human even when only part of the person's shape is visible. In addition, the method is known for its rapid detection capability, with a time latency of less than 25 milliseconds under optimal settings, far faster than conventional vision-based approaches. We obtain the number of humans present in the camera region and the approximate 3D position of each detection. Using point cloud data from the RGB-D camera, 3D bounding boxes of the humans can be obtained, as described in Fig. \ref{Figure:human_recognition}.
\begin{figure}[t]
\centering
{\includegraphics[width=1.0\linewidth]{human_recognition.png}}
\caption{ (a) Robot in grid space with the global path and human in front of it. (b) Snapshot of real laboratory. (c) Image-based human detection with YOLO. (d) Cropped face from face recognition.}
\label{Figure:human_recognition}
\end{figure}
\subsubsection {Laser Sensor-based Detection}
Laser scan data can also be utilized to detect human legs through pattern recognition. Human leg patterns can be classified into three categories - Leg Apart (LA), Forward Straddle (FS), and Single Leg (SL) - with feature-based classifiers, as described in \cite{bellotto2009multisensor}. We adopted this work to find all possible human leg patterns from the laser scan information. An advantage of using laser sensors is that the algorithm can rapidly detect human legs even when people are moving at high speed, whereas vision-based methods might fail to detect such movements due to their processing and computational costs.
\subsubsection{Measurement Fusion- Data processing}
The main reason for using a sensor fusion method is to simultaneously exploit the advantages of both types of observations. To combine the vision-based and laser-based measurements, the measurement fusion method is known to produce better estimation results. We assume that the YOLO detections are much more reliable, because the laser detection algorithm only provides possible candidates for human legs rather than confirmed human leg information. This means that the laser-based observations need to be filtered based on the YOLO detections. Consequently, once the robot detects the number of humans around it, denoted $N$, the data processing method extracts the same number of human legs out of all candidates from the laser sensors. This processing can be combined with gaze control, since the two sensors' fields of view are different. The outputs of the data processing are delivered to the Kalman filter.
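One simple way to realize this data processing step is nearest-neighbour gating: for each of the $N$ vision-confirmed humans, keep the closest laser leg candidate and discard the rest as clutter. The sketch below follows that idea; the 0.5 m gating distance is an assumed parameter, not a value from the paper:

```python
import math

def fuse_detections(yolo_positions, leg_candidates, gate=0.5):
    """For each vision-confirmed human, keep the nearest laser leg candidate
    within `gate` metres; unmatched candidates are discarded as clutter."""
    fused = []
    remaining = list(leg_candidates)
    for person in yolo_positions:
        if not remaining:
            break
        nearest = min(remaining, key=lambda c: math.dist(person, c))
        if math.dist(person, nearest) <= gate:
            fused.append(nearest)
            remaining.remove(nearest)   # each candidate matches at most once
    return fused

# Two confirmed people; the middle laser candidate is clutter (e.g. a chair leg).
people = [(1.0, 0.0), (3.0, 1.0)]
legs = [(1.1, 0.05), (5.0, 5.0), (2.9, 1.1)]
matches = fuse_detections(people, legs)
```

Exactly $N$ leg positions survive, one per detected person, and these are what the Kalman filters below track.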
\subsubsection{Kalman filter}
The position of each detected human is individually tracked over time using the Kalman filter technique \cite{julier1997new}. The Kalman filter algorithm consists of two steps: prediction and correction (update). The first step predicts the current state from the previous states, and the second step uses the sensor measurement to update or correct the estimate from the previous step.
Each filter estimates the position of one human candidate, $x_k$, over time as one element of the target cluster $X_k=\{x^1_k,x^2_k, \cdots x_k^N\}$, where $N$ is the total number of targets at time step $k$. A state includes the 2D position and velocity of the target, $x_k^i=(x,y,\dot{x}, \dot{y})^T$, and the Kalman filter uses the linear dynamic model and measurement model formulated as
\begin{eqnarray}
\dot{x}&=&Ax+Bu+w \\
z&=&Hx+v
\end{eqnarray}
where $A$ is the state transition matrix, $B$ is the input matrix, $u$ is the input variable, and $w$ is white Gaussian noise with covariance $Q$. The measurement variable $z$ is modeled with the observation matrix $H$ and the observation noise $v$, whose covariance is $R$. Based on this formulation, the Kalman filter iteratively estimates the state variable $x$ from successive measurements $z$. The system model parameters used in this study can be found in \cite{leigh2015person}.
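A discrete-time sketch of one such constant-velocity filter for a single tracked person follows; the time step and the noise covariances $Q$ and $R$ are placeholder values, not the parameters of \cite{leigh2015person}:

```python
import numpy as np

dt = 0.1                                       # assumed update period (s)
A = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])    # only position is measured
Q = 0.01 * np.eye(4)                           # process noise (placeholder)
R = 0.05 * np.eye(2)                           # measurement noise (placeholder)

def kf_step(x, P, z):
    """One predict-correct cycle for the state x = (x, y, vx, vy)."""
    x_pred = A @ x                             # prediction step
    P_pred = A @ P @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # correction (update) step
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
for k in range(1, 30):                         # person walking east at 1 m/s
    x, P = kf_step(x, P, z=np.array([k * dt, 0.0]))
```

Although only positions are measured, the filter infers the walking velocity, which the local planner needs to predict where each pedestrian will be.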
\subsubsection{Face Recognition}
\label{face_recognition}
The face recognition package from \cite{ageitgey2013} is also implemented in our system. It uses a Histogram of Oriented Gradients (HOG) \cite{dalal2005histograms} to detect faces and face landmark estimation \cite{kazemi2014one} to extract face features. The extracted features are then used to train a deep convolutional neural network to recognize faces. Thus human faces can be recognized from the video stream, which enables capturing the human gaze. This face recognition is used only for gaze detection, not for human detection and tracking.
\subsubsection{Gaze detection (Awareness)}
In order to observe $p_{gaze}$, given the face image from the above algorithm, a gaze tracker using a simple image-gradient-based eye center algorithm is applied to track the gaze of a human \cite{timm2011accurate}. This algorithm provides relatively accurate results at low computational cost.
The robot recognizes eye contact from the time a person spends looking at it, detected when both eyes are directed at the center of the robot's camera. The proposed method therefore accumulates the time during which both eyes fall within the central rectangular region $C(R)$, with width $w$ and height $h$, as shown in Fig.~\ref{Figure:gaze detection model description}. If the measured duration exceeds a time threshold $\epsilon$, the awareness variable is activated. This threshold was set to 5 seconds through trial and error. Based on this detection model, we can judge whether a person is aware of the robot, described by the awareness variable defined as
\begin{equation}
G(t)=
\Big\{ \begin{array}{cc} 1 & \int_{t_0}^{t}{p_{gaze}}(\tau)\,d\tau> \epsilon \\ -1 & \textrm{otherwise} \end{array}
\end{equation}
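As a minimal sketch of this rule, assuming $p_{gaze}$ is sampled as a binary signal at a fixed period (the function name and sampling scheme are ours, not part of the gaze tracker's interface):

```python
def awareness(gaze_samples, dt, epsilon=5.0):
    """Return the awareness variable G.

    gaze_samples: sequence of binary p_gaze detections (1 if both eyes fall
    within the center region C(R) in that frame, else 0).
    dt: sampling period in seconds; epsilon: time threshold in seconds.
    """
    # Discrete approximation of the integral of p_gaze over time
    stared = sum(gaze_samples) * dt
    return 1 if stared > epsilon else -1
```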
\begin{figure}[t]
\centering
{\includegraphics[width=1.0\linewidth]{gaze_final.png}}
\caption{Gaze detection model}
\label{Figure:gaze detection model description}
\end{figure}
If a person is aware of the robot, i.e., when the awareness variable is active, the robot is allowed to approach closer than to those who did not make eye contact. This behavior is obtained by differentiating the reward model between pedestrians: in our model, the collision reward is set individually for each observed pedestrian to change the permitted distance between human and robot.
\subsection{Local planner: POMDP}
\subsubsection{States}
As a local planner, we utilize the on-line POMDP solver DESPOT to obtain the optimal action policy based on the current belief. The state variable contains a robot state and pedestrian states. The robot state includes the robot position, $R_{pos}$, which can be measured directly by sensors in a local window describing the surroundings in the form of an occupancy grid. Each pedestrian state contains the current position and awareness, represented as $p_{pos}$ and $p_{awareness}$, respectively. The dimension of the state varies with the number of pedestrians in the local planner, $N$. In summary, the state of the POMDP model can be described as
\begin{equation}
\begin{split}
\bm{S} &=\{S_{\textrm{robot}}, \, S^1_{\textrm{ped}}, \, \cdots, \, S^N_{\textrm{ped}} \}. \\
S_{\textrm{robot}} &= [R_{pos}] \\
S_{\textrm{ped}} &=[p_{pos}, \, p_{awareness} ]
\end{split}
\label{Eq : POMDP STATE MODEL}
\end{equation}
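As an illustrative sketch, the state layout above could be represented as follows (the class and field names are our own, not part of the DESPOT interface):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PedestrianState:
    pos: Tuple[float, float]        # p_pos
    awareness: int                  # p_awareness: G in {+1, -1}

@dataclass
class POMDPState:
    robot_pos: Tuple[float, float]  # R_pos
    # One entry per pedestrian in the local window; length N varies over time
    pedestrians: List[PedestrianState] = field(default_factory=list)
```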
\subsubsection{Observations}
The observation information is measured by a scanning laser range finder and an RGB-D camera. The laser sensor has a $270^{\circ}$ field of view, and its range is assumed to be 5 meters, in order to sense humans as well as static and dynamic obstacles. From this sensor model, noisy observations of the robot and pedestrian states are obtained.
In a simulation environment, each sensor has its own noise model, which is based on its specification. Based on sensor information, we can update our belief over the state variable at each time step.
\subsubsection{Actions (Policy Set)}
Simplifying current mobile robot movements, we use three possible action policies: Go, Wait, and Avoid. The ``Go'' policy makes the robot move forward along the global path, while ``Wait'' means not moving for one time step. These two policies are included in the POMDP action set. Lastly, the ``Avoid'' action moves the robot to a collision-free position when it cannot proceed along the path due to pedestrians or dynamic obstacles. This action is only activated when the robot cannot move for a predefined amount of time, and it makes the global planner regenerate the global path from the dynamic occupancy grid. The default step size of this action is one grid resolution. Reducing the dimension of the action set lessens the computational burden of the DESPOT algorithm.
\begin{equation}
\textrm{A} = \{\textrm{Go},\, \textrm{Wait}, \, \textrm{Avoid}\}
\end{equation}
\begin{algorithm}[t]
\caption{Proposed Planner (Main loop)}
\begin{algorithmic}[1]
\State SetStartGoal()
\State $P\leftarrow$ MDPSolve(), $k \leftarrow 0$, $N\leftarrow \text{size}(P)$
\While{Not Goal}
\State StateUpdate() \Comment{Static/Dynamic Obs}
\Repeat
\For {$k \leftarrow k+1$}
\State TrackerUpdate() \Comment{People}
\State BeliefUpdate() \Comment{Particle Filtering}
\State Action$^*$ $\leftarrow$ POMDPSolve() \Comment{DESPOT}
\If {Action$^*$ $=$'Avoid'}
\State Go To Line 2
\EndIf
\State RobotControl(Action$^*$)
\EndFor
\Until{$k<N$}
\EndWhile
\end{algorithmic}
\end{algorithm}
\subsubsection{Rewards}
Establishing a reward model is a sensitive problem because it encodes the characteristics of the desired behavior. Our proposed reward consists of a reward for reaching the goal, a penalty for collision, and a time penalty. The reward for the goal state is set to the highest value. If the robot collides with a human or a wall, a penalty is given. Lastly, since navigation time is also a performance indicator, it is treated as a penalty. Thus, the reward function is written as
\begin{equation}
R(s) = w_gR_{goal}(s)+w_cR_{col}(s)+w_tR_{time}(s)
\end{equation}
where $w_g$, $w_c$, and $w_t$ are the weighting factors of each reward term. In particular, the human-collision reward model must be designed carefully, depending on the degree of awareness defined above. A potential field approach \cite{ge2000new} is used to model the collision function based on the awareness variable and the distance between robot and human. This collision reward enables the planner to account for the awareness of each pedestrian: the collision reward model varies with awareness by changing the permitted approach range in equation (\ref{equation:R_col}). By differentiating this range, the robot flexibly navigates among pedestrians with different levels of awareness. $R_{\textrm{Aware}}$ is set smaller than $R_{\textrm{Non-Aware}}$ to allow the robot higher proximity to humans who are aware of it.
\begin{equation}
R_{col}=
\Big\{ \begin{array}{cl} R_{col} & dist(R_{pos},P_{pos}) \leq \rho_{Aware} \\ 0 & \textrm{otherwise} \end{array}
\label{equation:R_col}
\end{equation}
\begin{equation}
\rho_{Aware}=\Big(\frac{1+G(k)}{2}\Big){R_{\textrm{Aware}}}+\Big(\frac{1-G(k)}{2}\Big){R_{\textrm{Non-Aware}}}
\label{Equation: Reward+collision}
\end{equation}
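A minimal sketch of this awareness-dependent collision penalty, following the description that aware pedestrians ($G=+1$) are assigned the smaller radius $R_{\textrm{Aware}}$; the radius and penalty values below are hypothetical placeholders:

```python
import math

def collision_radius(G, R_aware=1.0, R_non_aware=2.0):
    """Permitted approach distance: an aware pedestrian (G = +1) gets the
    smaller radius R_aware; an unaware one (G = -1) gets R_non_aware."""
    return ((1 + G) / 2) * R_aware + ((1 - G) / 2) * R_non_aware

def collision_reward(robot_pos, ped_pos, G, penalty=-100.0):
    """Penalty if the robot is inside the pedestrian's permitted radius."""
    d = math.dist(robot_pos, ped_pos)
    return penalty if d <= collision_radius(G) else 0.0
```

In the full reward, this term would be summed over the pedestrians in the local window and weighted by $w_c$.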
\begin{figure*}[t]
\centering
{\includegraphics[width=1.0\linewidth]{subplot4-eps-converted-to.pdf}}
\caption{Time series of the grid map of the local window (10 $\times$ 10). The robot continues along its pre-calculated MDP path when the nearby pedestrians are aware of it; otherwise, the robot executes the ``Wait'' action when a pedestrian who is not aware of it is nearby. }
\label{Figure:Simulation result}
\end{figure*}
\subsection{Belief Update} \label{Section:Belief Update}
For every time step, POMDP maintains distribution over the states, or belief space. This belief space can be updated with the following equation:
\begin{equation}
b^{'}(s^{'}) = \eta Z(s^{'},a,o)\sum_{s \in S} T(s,a,s^{'})b(s)
\end{equation}
where $\eta$ is a normalizing factor. DESPOT does not obtain the exact belief space, but calculates an approximate belief by a set of $K$ particles. Each particle corresponds to a sampled POMDP state, which can be written as,
\begin{align}
B_t &:= (s^1_t,s^2_t, \cdots , s^K_t) \\
s^i_{t+1} &\sim p(s_{t+1}|s_t,a_{t+1},o_{t+1})
\end{align}
where each $s^i_t$ represents the state vector of the $i$th particle. A standard particle filter is applied to update the belief, with $K$, the number of particles, equal to 5000. This filtering is an approximation of the Bayes filter approach.
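The particle-based belief update can be sketched as a bootstrap particle filter, where `transition` and `obs_likelihood` stand in for the POMDP models $T(s,a,s')$ and $Z(s',a,o)$ and are assumptions of this sketch:

```python
import random

def update_belief(particles, action, observation, transition, obs_likelihood):
    """One bootstrap particle-filter step over sampled POMDP states.

    transition(s, a): samples a successor state from T(s, a, .)
    obs_likelihood(s_next, a, o): returns Z(s_next, a, o)
    """
    # Propagate every particle through the transition model
    proposed = [transition(s, action) for s in particles]
    # Weight each propagated particle by the observation likelihood
    weights = [obs_likelihood(s, action, observation) for s in proposed]
    total = sum(weights)
    if total == 0.0:
        # Degenerate case (no particle explains the observation): keep proposal
        return proposed
    # Resample K particles with replacement, proportionally to the weights
    return random.choices(proposed, weights=weights, k=len(particles))
```

The weighting by `obs_likelihood` plays the role of the $\eta Z(s',a,o)$ factor, and the resampling step keeps the particle count fixed at $K$.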
\subsection{Navigation Control}
For basic navigation functions, the ROS navigation stack is utilized. When the global path planner finds a desired trajectory, the stack generates safe velocity commands that control the wheels of the mobile base to follow the trajectory, using information from odometry, sensor streams, and the goal pose.
\section{Experimental Results}
\subsection{System Description}
A Toyota Human Support Robot (HSR) is used as the hardware platform for both simulation and experiments. The mobile base consists of two driven omni-wheels and three casters located at the front and rear. It can smoothly change the direction of navigation and is capable of avoiding obstacles. The maximum speed of the HSR is approximately 0.22 m/s, the maximum step size of the mobile base is 5 mm, and the maximum incline is $5^{\circ}$. The HSR carries a variety of sensors. For vision, two stereo cameras are mounted at the position of the eyes, a wide-angle camera is on the forehead, and a depth camera (Asus Xtion) is placed on top of the head to obtain the RGB-D video stream. Furthermore, a laser range scanner (Hokuyo UST-20LX) is mounted at the front of the mobile base. The robot uses two computers: one runs the main operating programs for basic functions, and the other, which has a GPU (NVIDIA Jetson TK1), is dedicated to running YOLO for object detection. All sub-programs operating the robot communicate and exchange information via the ROS interface.
\subsection{Pedestrian Model (Human)}
One important feature of the simulation is how pedestrian movements are modeled. In our simulation, some pedestrians follow their own paths, while others move randomly to collision-free space. They can even leave the grid map, in which case the robot no longer takes them into consideration. The key feature of a pedestrian, the awareness variable $G$, is determined by gaze detection. Although continuously tracking people's faces is difficult in real situations, it is assumed that the camera can keep track of faces and thereby follow the movement of the pupils. During simulation, the awareness variables are set manually for each pedestrian. In contrast, for the real experiment, we assumed that once $G$ is activated it never turns off, so the robot continues to regard the person as being aware of it.
\subsection{Scenario Analysis}
We conducted both simulation and real platform experiments. Since the actual experiments limited how closely every situation could be analyzed, the evaluation of the algorithm focuses on the simulation results.
Each scenario has its own start and goal positions. To test the robot's navigation performance under different pedestrian movements, we fixed the initial conditions and repeated the scenarios. The initial and goal positions of the robot are set to (0,1) and (7,7), respectively. In each scenario, 3 or 4 pedestrians roam within the grid space. The initial positions of the pedestrians are set manually near the goal position. One pedestrian moves along a predefined path and the others walk in random directions within the local window. Each pedestrian has a mental state, the awareness variable $G$, chosen randomly as a constant for each scenario.
Fig. \ref{Figure:Simulation result} shows one scenario that illustrates the desired navigation strategy. The robot is supposed to follow the global path ($\bm{P}$), and the DESPOT solver generates the optimal action policy depending on the robot's belief states. In this scenario, it is assumed that pedestrians 1 and 2 (shown as a blue triangle and a red circle in Fig. \ref{Figure:Simulation result}) are aware of the robot, while pedestrian 3 (green cross) is not. Looking closely, at $t=7$ and $t=10$ the robot executed the ``Go'' action even though aware pedestrians were in front of it, because it assumed they would not move into its path. At $t=12$, the robot chose the ``Wait'' action when the pedestrian who was not aware of it was near the predefined path. At the next time step, $t=13$, we confirmed that the robot moved along the path once the untrustworthy pedestrian was a certain distance away. This scenario is a good example of the robot changing its navigation policy according to whether people are aware of it.
\subsection{Performance Analysis}
The proposed algorithm was tested in simulation with twenty-five sampled scenarios sharing the same initial conditions to ensure reliability.
\subsubsection{Navigation time}
Due to the randomness of pedestrian movements and the corresponding reactions of the robot, the navigation time differs between scenarios. Since navigation time is an important factor, we analyzed the average navigation time to evaluate performance. If the robot never stops and keeps moving along the path, it can reach the goal position in 14 steps. As shown in Fig. \ref{Figure:Simulation result2}(a), the average navigation time (the number of steps to reach the goal) is 20.54 with a standard deviation of 3.97. Considering that there are 4 pedestrians, this appears to be a reasonable navigation time.
\begin{figure}[h]
\centering
{\includegraphics[width=0.95\linewidth]{performance_edit-eps-converted-to.pdf}}
\caption{Navigation performance: (a) navigation time; (b) the distance between the closest pedestrian and the robot for each case. In both cases, there is one pedestrian with awareness and one without.}
\label{Figure:Simulation result2}
\end{figure}
\subsubsection{Proximity to pedestrians}
Navigation performance is quite difficult to evaluate. To measure the social efficiency of the proposed navigation method, we analyzed the average distance to the closest pedestrian when the robot executes the ``Wait'' action. This distance shows how the robot reacts to nearby pedestrians based on awareness. In Fig.~\ref{Figure:Simulation result2}(b), the distance data are divided into two categories according to the presence of awareness. The average distances between pedestrian and robot for the aware and non-aware cases were 1.64 and 2.25, respectively. We confirmed that the robot keeps a larger distance from a person who is not aware of it, while achieving higher proximity to those who are. In other words, if pedestrians make eye contact with the robot during navigation, the robot can trust them and navigate closer to them. As a result, by acquiring this distinctive proximity behavior, the social navigation ability of the robot improved.
\section{CONCLUDING REMARKS}
An awareness-based navigation planning method is proposed for the coexistence of humans and mobile robots. The proposed motion planner utilizes the concept of awareness as a simplified version of a human mental state. MDP and POMDP planners are integrated to increase social navigation performance while reducing real-time computational cost. To achieve social interaction ability, a human detection and tracking system including a gaze detection model is implemented to obtain human positions and human awareness of the robot, a key factor for socially acceptable navigation. By adopting the concept of awareness, robots can react to and handle dynamic situations in a social manner, which is a key characteristic of human navigation. The simulation results and actual experiments with the HSR robot showed that the proposed planner enables a robot to handle ambiguous situations flexibly. If a person is aware of the robot, the robot is allowed to approach closer than to those who do not make eye contact and are therefore assumed to be unaware of it.
However, several directions for future work remain. Our proposed on-line POMDP planner can currently only select actions from a discrete set; a next step is to extend this study to continuous action sets. In addition, hardware-implementation issues remain for mobile robots detecting awareness or other mental states: human intention recognition technologies such as gaze tracking or facial expression detection should be improved to estimate human mental states accurately. Lastly, the actual navigation of a robot should be evaluated from a human-subject perspective; in other words, a metric of social acceptability for navigation needs to be defined. Since the performance of social navigation is difficult to measure, conducting human-subject experiments and analyzing feedback from pedestrians can provide good guidance for building the desired strategies for a motion planner.
\section*{ACKNOWLEDGMENT}
This research was partially supported by a NASA Space Technology Research Fellowship (NSTRF), Grant number NNX15AQ42H.
\bibliographystyle{IEEEtran}
\section{Introduction}
The Chajnantor area, a high altitude site in the Chilean Altiplano, is recognized as one of the best sites in the world for the millimeter, submillimeter, and mid-infrared astronomical observations \citep{bustos14}. Located at $23^{\circ}$00'54"S $+67^{\circ}$45'45"W and 5080 m in altitude, this area has averages of 555 mbar in atmospheric pressure and 273 K in temperature. Cerro Chajnantor, located next to the plateau, has a summit altitude of 5640 m and has averages of 518 mbar in atmospheric pressure and 268 K in temperature. The combination of high altitude and extreme dryness makes the area one of the most accessible sites for submillimeter astronomy in the world.
The outstanding observing conditions at Chajnantor have prompted the installation of world-class astronomical observatories. These include the Atacama Large Millimeter Array (ALMA) \citep{wotten09}, the Atacama Pathfinder Experiment (APEX) \citep{gusten06}, the Atacama Cosmology Telescope (ACT) \citep{kosowsky03}, the Cosmic Background Imager (CBI) \citep{padin02}, and future experiments such as the Tokyo Atacama Observatory (TAO) \citep{motohara11}, Simons Observatory (SO) \citep{ade19}, CCAT-prime \citep{ccat-paper}, and the Leighton Chajnantor Telescope (LCT)\footnote{http://www.astro.caltech.edu/twiki\_cstc/bin/view}.
In the context of atmospheric characterization of the site, the first measurements of observing conditions on the Chajnantor Plateau began in April 1995 \citep{radford98}. Since then, many atmospheric characterization instruments have been deployed to the site. These instruments have covered multiple time epochs and geographical locations, depending on the specific interests of each research group or astronomical observatories.
There are a number of publications in the literature referring to measurements of precipitable water vapor (PWV) and weather conditions at the Chajnantor area and their implications for astronomical observations. As an example of these studies, based on data from radiosondes launched from the Chajnantor Plateau, \cite{giovanelli01} suggested that peaks such as Cerro Chajnantor would be drier than the Chajnantor Plateau. Observations at terahertz frequencies from Sairecabur (5500\,m) indicated this was correct \citep{marrone04, marrone05}; in addition, simultaneous $350$ $\mu m$ measurements from the Chajnantor Plateau and Cerro Chajnantor provided confirmation of these observations. \cite{delgado99, radford08} derived a conversion scheme to obtain PWV from radiometric measurements of the $183$ GHz water emission line, while longer-term climatology studies in the Chajnantor area were presented in \cite{bustos00, otarola05, otarola19}. Estimates and forecasts of PWV for the Chajnantor area using the Geostationary Operational Environmental Satellite (GOES) were presented in \cite{marin15}, and seasonal and intraseasonal variations of PWV were studied in \cite{marin17}. In \cite{holdaway04}, the relationship between global warming and PWV for the area was studied over a period of six years with inconclusive results. An atmospheric measurement campaign was performed on Cerro Toco, a site located within the Chajnantor area, by \cite{turner10} in 2009, and their data were used in \cite{cortes16} to assess the vertical distribution of PWV in the area. The atmospheric transparency was studied in depth, spanning long periods of time, by \cite{radford00, giovanelli01, radford11, radford16}. In addition, PWV ratios between the plateau and Cerro Chajnantor were presented in \cite{bustos14} for a time span of five days and expanded over longer periods in \cite{cortes16}, including estimates for atmospheric scale heights.
Studying the dynamics of the atmospheric conditions is crucial for short wavelength radio astronomy, since they directly affect data quality in the form of attenuation and differential phase delay of the astronomical signals as they reach the telescopes. The timescale for these effects is very broad and includes intra-day variations, seasonal patterns, and even long-term features such as the El Ni\~no--Southern Oscillation (ENSO). On short timescales, of order seconds, atmospheric variability can affect radio seeing on large-aperture radio telescopes through anomalous refraction \citep{olmi01}. This paper is a revision and refinement of a previous study of these atmospheric dynamics and of the tropospheric distribution of water vapor over the Chajnantor area. To achieve this goal and to improve the quality of our data, we revisited the methodology for the conversion of the submillimeter tipping radiometer (tipper) data into PWV presented in \cite{cortes16}. As explained below, the conversion method now includes the surface temperature at the instrument location at the time of each measurement, which reduces the uncertainty in the results. The new scheme is applied to all the tipper data that exist for the area, expanding our local atmospheric database to span from 1997 to 2017. We validate the results of the conversion scheme by comparing them with data from other instruments fielded in the area that are based on various technologies. We demonstrate that the tippers do not need another instrument as a reference for absolute PWV calibration, as their PWV-converted results are only a few percent off the data from the considered measurement standards. This is relevant because the tipping instruments can now be treated as independent integrated PWV measurement devices if they are coupled with an appropriate atmospheric model for the site under study, as discussed in \cite{cortes16}.
Ultimately, we hope that existing and future observatories will find this information useful in the planning of their logistical activities at the site, therefore maximizing their scientific return.
\section{Instruments and software tools}
In this section, we provide details on the instruments that were used for this study. The data used in this work were obtained with $183$ GHz radiometers and with $350$ $\mu$m tipping radiometers. These are different in their technical characteristics and measurement techniques, location, altitude, and deployment time coverage. A summary of these instruments including specific location, altitude, and nomenclature is presented in Table \ref{tab-instr}; more details about these instruments are shown in Table 1 \citep{cortes16} and in citations therein. The following are noteworthy comments about these measuring devices:
\begin{table}[h]
\caption{Instruments used in this study with their location, altitude, time span, and ID for reference in this paper.}
\centering
\begin{tabular}{lcccc}
\hline\hline
Instrument & Location & Altitude (m) &Time & ID \\
& & & span& \\ \hline \hline \\
APEX & Chajnantor & 5107 & 2006 & APEX \\
radiometer & Plateau & & to 2017 &\\ \\ \hline \\
& Chajnantor & 5080 & 1997 &TA-1 \\
& Plateau & & to 2005 & \\
& (NRAO) & & \\ \\
Tipper & Chajnantor & 5080 &2005 &TA-2 \\
radiometer A & Plateau & & to 2010 &\\
& (CBI) & & \\ \\
& Chajnantor & 5107 &2011& TA-3\\
& Plateau & & to now& \\
& (APEX) & & \\ \\ \hline \\
& Chajnantor & 5080 &2000 & TB-1 \\
& Plateau & & to 2005 &\\
& (NRAO) & & \\ \\
Tipper & Chajnantor & 5080 & 2005 &TB-2 \\
radiometer B & Plateau & & to 2009& \\
& (CBI) & & \\ \\
& Cerro & 5612 & 2009 &TB-3 \\
& Chajnantor & & to now & \\ \\ \hline \\
WVR UDEC & Cerro & 5612 & 2011 & UdeC \\
& Chajnantor & & to now&\\
\hline
\end{tabular}
\label{tab-instr}
\end{table}
Water vapor radiometers (WVR): The APEX and Universidad de Concepción (UdeC) WVRs provide measurements of the atmospheric brightness temperature of the $183$ GHz water line over a number of defined bandpasses to spectrally characterize the emission. These instruments are based on Schottky-diode mixers coupled with a baseband analog filter bank \citep{delgado99}. Because these systems operate at room temperature, the receiver noise temperature for these instruments is usually above $1500$ K. Once the spectral data are taken, an atmospheric model is typically used to fit the observations and estimate the PWV. Data from the APEX WVR were obtained from the APEX website\footnote{http://www.apex-telescope.org/}.
Tipping radiometers of $350$ $\mu$m: These instruments measure the sky brightness at different air masses in order to estimate the atmospheric opacity. They are built with bandpass filtered pyroelectric detectors and integrate the incoming radiation over a $103$ GHz band, centered at $850$ GHz. The calibration uses measurements of absorbers at known physical temperatures to determine the antenna temperature at each air mass. The measurements are fitted to an exponential function to estimate the radiometric temperature of the sky and the atmospheric opacity \citep{radford16}. The data from tipping radiometers and UdeC WVR are available online\footnote{http://doi.org/10.5281/zenodo.3880373}.
The computational tools used in this work were Atmospheric Model 9.0 (AM) \citep{paine16}, Python version 2.7.2, and Tool for Operations on Catalogues And Tables (TOPCAT) version 4.1. We note that the atmospheric transmission at microwaves (ATM) model has been extensively used in atmospheric monitoring at observatories, such as APEX and ALMA. We compared the AM and ATM models and their differences are $<3\%$ over the bands of interest.
\section{Improved conversion from 350 $\mu$m tipper opacity to PWV}
A major goal of our research is to bring all the existing atmospheric data for the Chajnantor area to a common physical unit, PWV. This, in turn, aims at understanding the distribution of the water vapor over the area along with its temporal variability. With this study we also attempt to gain insight into climate variations or trends, taking advantage of the more than $20$ years of data collected at the site.
The WVR mounted on the APEX telescope is considered a comparison standard for PWV measurements at Chajnantor and has been operational since 2006. However, the measurements carried out by the 350 $\mu$m tipper radiometers long predate those from APEX, starting in 1997, and can be used to establish a longer time baseline. The submillimeter tipper delivers atmospheric opacity in nepers [Np], and a method to convert from tipper opacity to PWV in millimeters of integrated column was presented in \cite{cortes16}. The method models the mechanical and radiometric functionality of the instrument, and a conversion from measured opacity to PWV was devised using the AM software to solve for the atmospheric radiative emission. A basic block diagram of the procedure to convert from tipper opacity to PWV is shown in Figure \ref{figure110}.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{blockes_c.png}}
\caption{Illustrative block diagram describing the procedure to convert from measured tipper opacity to PWV. The one-end pointed rectangle blocks are considered inputs to the procedure, while the full rectangles are procedural actions. The circles represent experimental data coming from the submillimeter tipper.}
\label{figure110}
\end{figure}
In the first version of the tipper opacity to PWV conversion method \citep{cortes16}, the ground temperature ($T_{gnd}$) input to the AM model was set to a constant value, which was taken as the historical average ground temperature for each site that was analyzed. However, $T_{gnd}$ is considered as the tropospheric end of the AM layered radiative transfer configuration that models the atmosphere and, therefore, it has significant impact on the simulation results and modeling of the atmospheric conditions. We know from local weather data that $T_{gnd}$ varies from $-23^\circ$C to $+14.5^\circ$C (APEX public database) in the Chajnantor area, and not taking the diurnal oscillation into account introduces unnecessary uncertainties in the conversion process. Following \cite{cortes17}, we started using the actual tropospheric temperature at the instrument location as a variable input to the AM model runs. The temperature values used were measured at each site by local weather stations, concurrently with the opacity measurements available for conversion.
The method to convert from $350\>\mu m$ tipper opacity to PWV includes the components explained in~\cite{cortes16} and adds new procedures, which are detailed below. The original components can be summarized as follows: a) A multilayered atmospheric model for each site is created, which is an input configuration file for the AM software, and is entirely based on National Aeronautics and Space Administration, Modern-Era Retrospective analysis for Research and Applications, Version 2 (NASA MERRA-2) reanalysis data \citep{molod15}. b) The use of the AM software simulates the mechanical and radiometric operation of the tipper, which gives an effective temperature for the atmosphere averaged over the 350 $\mu$m band for a given value of PWV, and is weighted by the spectral response of the filter located at the input of the tipper. c) We extract the opacity and atmospheric brightness temperature by fitting an exponential function to the simulated temperature versus tipper zenith angle. From now on, we detail the new procedures of the proposed methodology with the aim of reducing the uncertainty in the conversion to PWV and obtaining an accurate database available for climate studies:
d) Opacity to PWV conversion: This step includes a relevant difference from our previous work \citep{cortes16}, in that the relationship between opacity and PWV was presented as a linear characteristic and the average $T_{gnd}$ was used as a fixed value input to the simulation. Introducing $T_{gnd}$ as a swept variable input and extending the PWV range input to the simulation, we found a quadratic relationship between the PWV and tipper opacity, as shown in Figure \ref{figure3}. In the upper panel of the figure, the conversion between the PWV and tipper opacity for the Chajnantor Plateau is presented, while the lower panel corresponds to the modeled conversion for the summit of Cerro Chajnantor. For both plots, there is an opacity pedestal at zero PWV, which is known as the dry opacity component that includes other atmospheric gases, such as oxygen and ozone. We note that the estimated dry opacity from the simulations for the plateau and the Cerro Chajnantor are 0.272 Np and 0.271 Np, respectively. The impact of $T_{gnd}$ in the emissivity of the water molecule was recently noted and discussed in~\cite{radford16}, in agreement with our analysis and supporting the changes in our conversion scheme for the submillimeter tippers.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{opacity_cbi_curve.png}}\\ \resizebox{\hsize}{!}{\includegraphics{opacity_cc_curve.png}}
\caption{Tipper opacity simulations. The black dots indicate values extracted from the AM simulations. The lines correspond to weighted quadratic fits to the simulated data for each $T_{gnd}$. The upper panel presents the situation for the Chajnantor Plateau, while the lower panel shows the data and fits for the summit of Cerro Chajnantor. The blue (red) line in the middle of the distribution corresponds to the results for the average ground temperature at the plateau (Cerro Chajnantor) of $272.4$ K ($268.6$ K), as previously used in \citep{cortes16}.}
\label{figure3}
\end{figure}
The 350 $\mu$m tipper opacity is measured at a certain ground temperature, and we use that specific ground temperature to derive the conversion to PWV, as depicted in Figure \ref{figure3}: the conversion is obtained by interpolating the model grid points using the actual $T_{gnd}$ and the measured tipper opacity.
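The interpolation step can be sketched as follows; the coefficient grid below is purely illustrative (the actual per-$T_{gnd}$ quadratic coefficients come from the AM simulations of Figure \ref{figure3}), but the mechanics of interpolating in ground temperature and then inverting the quadratic are as described above.

```python
import numpy as np

# Hypothetical coefficient grid: tau = a*PWV**2 + b*PWV + c at each ground
# temperature. The numbers are illustrative placeholders, except for the
# dry-opacity pedestal, which is close to the simulated 0.272 Np.
T_GRID = np.array([260.0, 270.0, 280.0])   # ground temperature grid (K)
A = np.array([0.010, 0.011, 0.012])
B = np.array([0.900, 0.950, 1.000])
C = np.array([0.272, 0.272, 0.272])        # dry opacity pedestal (Np)

def opacity_to_pwv(tau, t_gnd):
    """Convert a measured tipper opacity (Np) to PWV (mm).

    Linearly interpolate the quadratic coefficients at the measured ground
    temperature, then solve a*w**2 + b*w + (c - tau) = 0 for the positive
    root w (the PWV)."""
    a = np.interp(t_gnd, T_GRID, A)
    b = np.interp(t_gnd, T_GRID, B)
    c = np.interp(t_gnd, T_GRID, C)
    disc = b * b - 4.0 * a * (c - tau)
    return (-b + np.sqrt(disc)) / (2.0 * a)
```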
e) WVR sampling normalization: This is another important difference from our previous work. The APEX radiometer output rate is one sample per minute, while the tipping radiometer provides a measurement every 13 minutes, time-stamped at the middle of the measurement cycle. In our previous work \citep{cortes16}, the data from both instruments were matched in time in order to be processed, which induced a delay between the two atmospheric observations and therefore an extra source of uncertainty. In this version of the analysis, an averaging window filter with a length of 13 samples was applied to the WVR measurements, considering only the samples prior to the time stamp. The outcome is a more precise and representative comparison between the results of both instruments.
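A minimal sketch of this windowing step, assuming a purely trailing (causal) 13-sample average over the 1-minute WVR samples; aligning the window to the tipper's mid-cycle time stamp is left out for brevity, and the function name is our own.

```python
import numpy as np

def causal_window_average(samples, window=13):
    """Trailing (causal) moving average: each output uses only the current
    sample and the window-1 samples before it, mirroring the 13-minute
    tipper cycle. Outputs before a full window exist are NaN."""
    samples = np.asarray(samples, dtype=float)
    out = np.full(samples.shape, np.nan)
    for i in range(window - 1, len(samples)):
        out[i] = samples[i - window + 1 : i + 1].mean()
    return out
```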
As mentioned above, the PWV output of the tipper data that pass through the conversion procedure is readily available for atmospheric analysis. The only correction applied to the PWV data is the cross calibration between both tippers to match their results when they are colocated. Both tippers shared the same site over the years $2005$ and $2006$, which was an opportunity to evaluate the measurement consistency between the tipping instruments. This campaign showed that the tippers were consistent to within $1.2\%$ in PWV, derived from the slope of the PWV versus PWV relation, and that factor was applied to the TB tipper to cross calibrate both data outputs as follows: $PWV_{TB}^{corr} = PWV_{TB}/1.012$. Once this factor was applied, the PWV versus PWV diagram for this concurrent measurement session delivered the results shown in~Figure \ref{fig8}. The slope between both instrument outputs was $1.001 \pm 0.003$, with a Pearson correlation factor of $0.94$.
\begin{figure}[hbt]
\resizebox{\hsize}{!}{\includegraphics{ta_tb.png}}
\caption{Corrected tipper data for a shared location measurement campaign, for tippers TA and TB, at the Chajnantor Plateau (CBI) site, covering years 2005 and 2006. The cyan circles are the PWV quantiles used in the linear regression. The red circles indicate the quartiles (25\%, 50\%, and 75\%). Error bars are the y-axis standard deviation for each quantile.}
\label{fig8}
\end{figure}
The error in the slope was calculated using Equation \ref{eq2}, which estimates the standard error of the slope in a simple linear regression \citep{crunk14}:
\begin{equation}
\hspace{20mm}STD_{slope}\>=\> \displaystyle\frac{\sqrt{\displaystyle\sum\frac{(Y_{model}-Y_{data})^2}{N-2}}}{\displaystyle\sqrt{\sum(x-\bar{x})^2}}
\label{eq2}
,\end{equation}
where $Y_{model}$ is the value predicted by the linear regression, $Y_{data}$ is the measured value, $N$ is the number of data points involved in the linear regression, $x$ is the regressor value, and $\bar{x}$ is the mean of the $x$ values.
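Equation \ref{eq2} translates directly into code; the sketch below assumes plain NumPy-compatible arrays for the regressor and for the measured and modeled values, and the function name is our own.

```python
import numpy as np

def slope_standard_error(x, y_data, y_model):
    """Standard error of the slope (Equation 2): RMS residual with N-2
    degrees of freedom, divided by the spread of the regressor."""
    x, y_data, y_model = map(np.asarray, (x, y_data, y_model))
    n = len(x)
    resid_rms = np.sqrt(np.sum((y_model - y_data) ** 2) / (n - 2))
    return resid_rms / np.sqrt(np.sum((x - x.mean()) ** 2))
```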
\section{Estimating PWV ratios}
Estimating the PWV ratio between datasets from different sites or instruments is crucial if the aim is to assess the distribution of water vapor over a certain area, to compare how much drier one site is than another, or to evaluate the scale height for water vapor over a certain site. The method we applied so far to estimate the PWV ratio, detailed in \cite{cortes16}, is to fit a line to a time-matched PWV versus PWV diagram and extract the slope directly from the fit. It was found that the resulting estimate for the slope is very sensitive to how clumped the data are near the origin of the diagram, as well as to what exact range is assumed as the linear regime of the system prior to the analysis. The nature of the PWV datasets for the Chajnantor area is such that the data in the PWV versus PWV diagram are mostly clumped near the origin, and small variations in the distribution of the data for low values of PWV can have significant effects on the result for the PWV ratio, which may lead to wrong interpretations about the physics governing a certain site.
In this paper, we applied a variation to the method of estimating the PWV ratio. Instead, we used the cumulative distribution of PWV of the data on each axis of the diagram. Each cumulative distribution is divided into $20$ quantiles; therefore, each quantile for the involved sites forms a pair in the PWV versus PWV diagram and is used for parameter extraction with the linear regression algorithm of choice. We believe this method uses a more faithful statistical representation of each dataset compared to the previous method, providing a more robust estimate for the sought PWV ratio. The data used for the linear regression include up to $75\%$ of the full dataset because the remaining data are found to be dominated by measurement uncertainty and, for high PWV, the submillimeter tippers respond nonlinearly. The linear regime used for the analysis has an upper limit of $3$ mm of PWV, which covers the range useful for millimeter and submillimeter radio astronomy. Figure~\ref{fig8} shows the results of the new method, in which cyan circles denote some of the $20$ quantiles mentioned before (10\%, 20\%, 30\%, 40\%, 60\%, and 70\%, specifically), while the red circles indicate the quartiles, for reference.\\
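The quantile-pairing scheme above can be sketched as follows, under the stated choices of $20$ quantiles and a $75\%$ upper cut; the helper name is ours, and error weighting per quantile is omitted for brevity.

```python
import numpy as np

def quantile_pair_slope(pwv_a, pwv_b, n_quantiles=20, upper_fraction=0.75):
    """Estimate the PWV ratio between two sites from matched quantiles.

    Both datasets are reduced to the same set of quantiles; only quantiles
    up to `upper_fraction` of the distribution are kept (the rest is
    dominated by measurement uncertainty and tipper nonlinearity). A line
    through the quantile pairs gives the sought PWV ratio as its slope."""
    probs = np.linspace(0.0, 1.0, n_quantiles + 1)[1:]
    probs = probs[probs <= upper_fraction]
    qa = np.quantile(pwv_a, probs)
    qb = np.quantile(pwv_b, probs)
    slope, intercept = np.polyfit(qa, qb, 1)
    return slope
```

Because quantiles are computed per dataset, the estimate does not depend on how the time-matched points clump near the origin of the scatter diagram.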
\section{Validation of submillimeter tipper data from revised conversion procedure}
In our previous work \citep{cortes16}, the PWV measurements from the WVR at APEX were considered the calibration standard for our atmospheric characterization database. In this work, it was found that the submillimeter tipper radiometer measurements require only a small correction $(<2\%)$ to match the APEX WVR measurements, that is, to produce a PWV versus PWV diagram with a slope of unity. Therefore, we can safely consider TA and TB, namely the submillimeter tipper radiometers, as two instruments providing measurement outputs independent of any external calibrator. The implication of this finding is that the tippers can operate independently, installed at multiple sites of low PWV, and obtain scientifically valid PWV data provided that the suggested conversion method explained in Section 3 is followed. To confirm this statement, in the following we compare the measurements from the tippers against other local instruments using PWV versus PWV scatter plots.
The tipper TA was colocated with the APEX WVR for a span of three years. A comparison between the PWV measured by the APEX WVR and the PWV converted from the opacity of tipper TA is shown in Figure~\ref{figure7}. The slope from the data, estimated using the procedure described in the previous section, is 1.011, which indicates a $1.1 \%$ difference between both measurements. The Pearson correlation coefficient for this dataset was calculated as described in~\cite{cortes16}, giving a value of $0.92$.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{apex_ta3.png}}
\caption{Comparison between the APEX WVR measurements and the PWV from tipper TA-3 for 2011-2014, when both instruments coincided in both site and time. The cyan circles are the PWV quantiles used in the linear regression. The red circles indicate the quartiles ($25\%$, $50\%$, and $75\%$). Error bars are the y-axis standard deviation for each quantile. As noted in the text, TA-3 shows a small offset with respect to APEX, which shows the measuring independence of the tipper instrument when the proposed conversion is used.}
\label{figure7}
\end{figure}
Figure \ref{figure10} shows a visual comparison between the results of the conversion method for the TB-3 tipping radiometer located at the summit of Cerro Chajnantor and data from the UdeC WVR, when concurrently deployed at the same site. The WVR data from the APEX observatory are reported as a reference, and the PWV variations are well correlated for both sites. Interestingly, looking at Figure \ref{figure10}, we notice what could be the appearance of atmospheric inversion layers during certain periods of the data. These can be observed when the PWV at Cerro Chajnantor drops dramatically compared to the PWV at the plateau, which is also mentioned in \cite{bustos14}. The appreciable difference in absolute PWV between both sites is mainly due to the vertical distribution of water vapor, which drops monotonically as altitude increases.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{comp_ap_cc_udec.png}}
\caption{PWV reported for a period of five days from the Cerro Chajnantor summit (data from UdeC-WVR and TB-3) and from the Chajnantor Plateau (data from the APEX WVR). Note the correlation between the UdeC and TB-3 data, both located at the Cerro Chajnantor summit, and the instantaneous difference with the PWV at the plateau measured by the WVR at APEX.}
\label{figure10}
\end{figure}
We assessed the improvement in data quality of the revised methodology by calculating the standard deviation of the various procedures applied to the same dataset. We refer to this as the conversion error of the method, and we compare the errors given by the old methodology against those from the revised method. The results show an improvement of $10\>\%$ with the new method, as expected from the impact of the conversion curves shown in Figure~\ref{figure3}.
\section{Distribution of PWV\ from site comparisons}
In this analysis, we consider $20$ years of time-overlapped data from various instruments fielded in the Chajnantor area. Using the revised method presented in Section 3 to convert opacity to PWV, we recalculate our quantitative comparison between the atmospheric conditions of sites of interest for the deployment of millimeter and submillimeter astronomy instrumentation. The results of the analysis are presented by comparing the atmospheric conditions for two sites at a time, according to the procedure previously described in Section 4. As mentioned before, the two datasets are appropriately downsampled in time to bring them to the same cadence.
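The downsampling to a common cadence can be sketched with pandas; the 13-minute period matches the tipper cycle, and the function name is our own.

```python
import numpy as np
import pandas as pd

def match_cadence(series_fast, period="13min"):
    """Downsample a high-cadence PWV series (e.g., 1-minute WVR data) to
    the slower instrument cadence by averaging within each period."""
    return series_fast.resample(period).mean()
```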
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{apex_ta2.png}}
\caption{PWV determined from the APEX WVR vs. TA-2 converted PWV at the CBI site, over the period of 2006 to 2010. The APEX site is $27$ m above the CBI site. The cyan circles are the PWV quantiles used in the linear regression. The red circles indicate the quartiles (25\%, 50\%, and 75\%). Error bars are the y-axis standard deviation for each quantile.}
\label{figure9}
\end{figure}
The APEX WVR data versus TA-2 tipper converted PWV scatter plot is presented in Figure \ref{figure9}. The TA-2 was deployed at the CBI site; therefore, it was $27$ m lower in altitude than the APEX site and about $2.5$ km away in linear distance. It is relevant to note that the CBI site is very close to the ALMA array center, only $720$ meters away; consequently, this site also faithfully represents the atmospheric characteristics of the submillimeter interferometer site. In Figure \ref{figure9}, the time-matched linear regression gives a $1.9\%$ excess in PWV for the CBI site compared to the APEX site, which agrees with what is expected from a standard atmosphere and the difference in their altitudes. The Pearson correlation coefficient for the linear regression in Figure \ref{figure9} is $0.86$.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{apex_cc.png}}
\caption{PWV comparison between Cerro Chajnantor and the plateau. Data from 2009 to 2012 are included in this figure. The cyan circles are the PWV quantiles used in the linear regression. The red circles indicate the quartiles (25\%, 50\%, and 75\%). Error bars are the y-axis standard deviation for each quantile. The slope in this graph denotes the significant drop in PWV at the Cerro Chajnantor summit compared to the Chajnantor Plateau, with a $505$~m altitude difference.}
\label{figure8}
\end{figure}
Figure~\ref{figure8} shows PWV versus PWV scatter-plot results for APEX and TB-3. TB-3 is the third location of the submillimeter tipping radiometer, at the summit of Cerro Chajnantor. We note that the location of TB-3 is $505$~m higher than the APEX site. In Figure \ref{figure8}, the slope of $0.72$ indicates an overall year-round reduction of $28\%$ in PWV when ascending in altitude from the Chajnantor Plateau to the Cerro Chajnantor summit. This result is consistent with the ratio of $0.7$ between these altitudes reported by \cite{otarola19}, using data from radiosondes launched from Antofagasta, and by \cite{bustos14}, who report only five days of data.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{ntx_am.png}}
\caption{Atmospheric transmission for $30-1200$ GHz calculated with the AM software for the median measured conditions at the Chajnantor plateau and at the summit of Cerro Chajnantor.}
\label{figure34}
\end{figure}
The effect of the $28\%$ difference between the Chajnantor Plateau and the Cerro Chajnantor summit on the atmospheric transmittance is shown in Figure \ref{figure34}. This image is the result of a simulation using the AM software for both sites. The input parameters in AM were a zenith angle of $0^{\circ}$, a frequency window from 30 GHz to 1200 GHz, and mean ground temperatures of $272.6$ K and $268$ K for the Chajnantor Plateau and the Cerro Chajnantor summit, respectively. The amounts of PWV considered were $1$ mm for the Chajnantor Plateau and $0.72$ mm for the Cerro Chajnantor summit, taken from the 50th percentile of our long-term analysis for both sites. As can be seen in Figure~\ref{figure34}, the impact on transmittance is most relevant at the higher frequency bands in the submillimeter regime and thus supports the installation of submillimeter astronomical equipment at the summit of Cerro Chajnantor.
A summary with the results for the comparison between the characterized sites is presented in Table \ref{tabla-instr}. These results are calculated from PWV versus PWV derived slopes and by applying the proposed conversion method to the submillimeter tipper data. The instruments at the Chajnantor Plateau are APEX and TA-2 (CBI site). The instrument at Cerro Chajnantor summit is TB-3.
\begin{table*}[h]
\caption{Ratios of PWV between instruments at multiple sites in the Chajnantor area. The table includes the time span of concurrent measurements, derived slopes between sites, the altitude difference between sites, and the inferred scale height.}
\centering
\begin{tabular}{lcccc}
\hline\hline
Instrument & Period & Slope & Altitude & Scale \\
pairs & (yrs) & & diff (m)& height (m)\\
\hline\\
{\bf{ $\displaystyle\frac{\rm{TA-2}}{\rm{APEX}}$}} & {{ 06-10 }} & {\bf{ $1.019\pm0.002$ }} & -27 & $1434\pm149$ \\ \\
{\bf{ $\displaystyle\frac{\rm{TB-3}}{\rm{APEX }}$}} & {{09-12}} & {\bf{$0.720\pm0.003$}} & 505& $1537\pm19$\\ \\
{\bf{$\displaystyle\frac{ \rm{TB-3}}{\rm{TA-2 }}$}} & {{ 09-10}} & {\bf{$0.710\pm0.01$}} & 532 &$1553\pm63$ \\ \\
\hline
\end{tabular}
\label{tabla-instr}
\end{table*}
The comparison between sites is consistent with the exponential decay in PWV of a standard atmosphere. This decay can be modeled as follows:
\begin{equation}
\hspace{30mm}\displaystyle PWV\>=\> PWV_{0}\cdot {\rm e}^{-\frac{\Delta h}{h_{0}}}
\label{eq1}
,\end{equation}
\noindent where $PWV_{0}$ is the PWV measured at the lowest altitude, $h_{0}$ is the scale height, and $\Delta h$ is the difference in altitude between the two sites. Using the values reported in Table~\ref{tabla-instr}, we calculate a water vapor scale height for the Chajnantor Plateau of $1537$~meters. This scale height agrees with previous results from \cite{bustos00, giovanelli01, bustos14, cortes16, radford16, kuo17}.
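Inverting Equation \ref{eq1} gives the scale height directly from a measured PWV ratio and altitude difference; a minimal sketch:

```python
import numpy as np

def scale_height(pwv_ratio, delta_h_m):
    """Water-vapor scale height h0 from PWV = PWV0 * exp(-dh/h0):
    the measured ratio PWV/PWV0 equals exp(-dh/h0), so
    h0 = -dh / ln(ratio)."""
    return -delta_h_m / np.log(pwv_ratio)
```

With the TB-3/APEX slope of $0.720$ and the $505$~m altitude difference, this reproduces the $1537$~m entry of Table~\ref{tabla-instr}.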
\section{Twenty-year water vapor study for the Chajnantor Plateau}
In this section, we present the $20$ years of PWV data for the Chajnantor Plateau, from $1997$ to $2017$. We present our PWV dataset in full, gathering information from various types of instruments and converting all measurements to the same unit. The APEX radiometer was included in this database for completeness.
With the aim of understanding the general annual PWV cycles on a month-by-month basis at the Chajnantor Plateau, we present all the median PWV results per instrument in the period from $1997$ to $2017$ in Figure~\ref{figure19}. The PWV from the APEX radiometer (with the highest data cadence) is included in Figure~\ref{figure19} to compare its monthly results against those of the other instruments, which have been converted to the same unit, showing the consistency of the results from all instruments.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{pwv_plateau_instruments.png}}
\caption{Median PWV per month per instrument located at the Chajnantor Plateau, over a $20$-year span. Note the highest values and dispersion of the median PWV in January and February, and the lowest PWV in June, July, and August. This plot shows the consistency between the measurements of the instruments.}
\label{figure19}
\end{figure}
At the plateau, January and February are consistently the wettest months of the year, while June through August, the driest months, offer the best conditions for submillimeter astronomy observations. Taking the median of the data per month over the 20-year span of our study gives an overall view of the year-round climate at the Chajnantor site. Such an analysis is shown in Figure~\ref{figure55}. We note again the drastic increase in PWV for the months of December through March, which is a clear signature of a local effect called the Altiplanic Winter. This effect enhances the east-to-west water vapor circulation from the wet side of the Andes (northern Argentina, Paraguay, and southern Brazil) toward the west and through the Chajnantor area.
The upper panel of Figure~\ref{figure30} shows quartiles, that is, the monthly variations of three percentiles of the distribution ($25\%$, $50\%,$ and $75\%$). The shape of the three curves is similar to that in Figure~\ref{figure19}. This figure allows a robust statistical assessment of the atmospheric quality of the Chajnantor Plateau.
With the aim of characterizing the most extreme months in the Chajnantor area, a cumulative fraction plot is presented in the bottom panel of Figure~\ref{figure30}. This is an assessment of the conditions for the Chajnantor Plateau (solid line), with a year-round 50th percentile value near $1$~mm for the site. In addition, we find the median values for the extreme months of January and August, with 50th percentile values of $2.56$~mm and $0.72$~mm, respectively.
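The cumulative fractions and the tabulated quartiles follow from standard empirical-distribution operations; a sketch (function names ours):

```python
import numpy as np

def cumulative_fraction(pwv):
    """Empirical cumulative distribution: for each sorted PWV value, the
    fraction of time the site is at or below that PWV."""
    pwv_sorted = np.sort(np.asarray(pwv, dtype=float))
    frac = np.arange(1, len(pwv_sorted) + 1) / len(pwv_sorted)
    return pwv_sorted, frac

def pwv_quartiles(pwv):
    """25th, 50th, and 75th percentile PWV, as reported in the tables."""
    return np.percentile(pwv, [25, 50, 75])
```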
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{plateau_quartiles.png}}\\
\resizebox{\hsize}{!}{\includegraphics{cumulative_plateau.png}}\\
\caption{Twenty years of PWV study in Chajnantor Plateau. Monthly quartiles (25\%, 50\%, and 75\%) of PWV measurements over the full period (upper panel). On the plateau, the best observing conditions are shown over June, July, and August. Cumulative distribution fraction of PWV for Chajnantor Plateau is shown in the lower panel. Total fraction, along with the extreme cases of January and August, are included to limit the typical range of PWV for the remaining months.}
\label{figure30}
\end{figure}
\begin{table}[h]
\caption{Summary of PWV cumulative fractions for the Chajnantor Plateau. The extreme cases of January and August are also shown.}
\centering
\begin{tabular}{lccc}
\hline\hline
Period of time & $25\>\%$ & $50\>\%$ & $75\>\%$ \\ \hline
Year-round & 0.60 mm & 1.05 mm & 1.98 mm \\
January & 1.28 mm & 2.56 mm & 3.97 mm \\
August & 0.44 mm & 0.72 mm & 1.19 mm \\
\hline
\end{tabular}
\label{tab-acum-plateau}
\end{table}
The results from Figure \ref{figure19} are also consistent with the results from \cite{marin17}. These authors used the Climate Forecast System Reanalysis (CFSR) \citep{saha10} as the source of the PWV data for Chajnantor and the APEX radiometer data ($2006$ to $2010$) to validate the analysis. In \cite{marin17}, the authors argued for a connection between the Madden-Julian Oscillation (MJO) pattern and the variability of the PWV at Chajnantor. Given that the MJO starts in the western Pacific Ocean and propagates eastward, sometimes reaching the South American coast, we believe a study of the MJO could be used as a tool to predict wet events at Chajnantor. Such a study would therefore be a good addition for the planning of scientific activities in the area.
The long-term median climatology of the Chajnantor Plateau is shown in Figures~\ref{figure19} and~\ref{figure30}. A possibility for studying large-scale anomalies in the climate of the Chajnantor area is to review the evolution of the PWV over the span of this study. We decided to assess such evolution by plotting the PWV for each month over the years in a single plot, as shown in Figure~\ref{figure20}. Each subplot in this figure corresponds to a single month, and each data point corresponds to the median PWV of that month in our consolidated database. In addition, a linear regression and its mathematical expression are presented in each subplot. Each linear regression is a function of time (years), and with these expressions we can estimate the amount of PWV for each month in the future, given the data collected in this period. All the slopes are small; therefore, they do not indicate a clear tendency (increase or decrease) in the amount of PWV over the period of study, or in the future, for the Chajnantor Plateau.
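Each subplot's trend line reduces to a first-order polynomial fit of the monthly medians against the year; a sketch, with the helper name ours:

```python
import numpy as np

def monthly_trend(years, monthly_medians):
    """Linear regression of the median PWV of one calendar month against
    the year, as in each subplot of the per-month evolution figure; the
    slope (mm/yr) flags any long-term drying or wetting trend."""
    slope, intercept = np.polyfit(years, monthly_medians, 1)
    return slope, intercept
```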
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{meses_f.png}}
\caption{Median PWV per month over 1997-2017 at the Chajnantor Plateau. The dots indicate the median value of PWV in each year, while the straight line indicates the trend of the PWV over the period of the study, allowing a projection of future PWV values. The median of all the monthly slopes is $-0.001\pm0.014$, which is consistent with zero. We do not see a long-term increase or decrease in the atmospheric variable of interest for Chajnantor.}
\label{figure20}
\end{figure}
\section{Cerro Chajnantor summit}
Cerro Chajnantor is one of the highest peaks in the Chajnantor area. The atmospheric conditions at the peak differ from those at the Chajnantor Plateau, as expected given the altitude difference. In addition, the appearance of inversion layers affects the instantaneous value of PWV, drying out the summit of Cerro Chajnantor at a higher rate than the plateau.
\begin{figure}[h!]
\resizebox{\hsize}{!}{\includegraphics{median_cc_ll.png}}
\caption{Comparison between the Cerro Chajnantor summit and the Chajnantor Plateau. The differences per month in the median values of PWV between both sites are shown. July and August show the smallest difference between both sites.}
\label{figure55}
\end{figure}
Median PWV values over the full span of our study are used to compare the atmospheric conditions between the Cerro Chajnantor summit and the Chajnantor Plateau, as shown in Figure~\ref{figure55}. The Chajnantor Plateau always has greater values of PWV in comparison with the Cerro Chajnantor summit. The months of December, January, and February show the highest values of PWV and the largest error bars. February is considered a special case since less data are available for this month and the data are highly variable. High PWV (bad weather) drives the instruments into nonlinear conditions and turns them off when safety conditions are triggered, which might explain the anomalous values of PWV for that month. Ratios of PWV with data from both sites are presented in the upper panel of Figure~\ref{figure56}. Equation \ref{eq1} is used to calculate the water vapor scale height for the Chajnantor area, using an altitude difference of $505$ m. As expected, the water vapor scale height is highly time variable.
\begin{figure}[h!]
\resizebox{\hsize}{!}{\includegraphics{ratio_sh.png}}
\caption{Chajnantor summit vs. plateau PWV ratio and scale heights. The upper panel shows the ratio $PWV_{cc}/PWV_{plateau}$ per month. The lower panel shows the scale height derived from the ratios in the upper panel using Equation (\ref{eq1}), considering a $505$ m altitude difference.}
\label{figure56}
\end{figure}
Cumulative fraction plots for the Chajnantor summit are shown in Figure~\ref{figure303}, confirming the very good submillimeter observing conditions at the site. In summary, it offers less than $0.5$ mm of PWV for at least $30\%$ of the time during the austral winter, which is outstanding for ground-based submillimeter access. Again, we hope this information can be used appropriately for observation planning by all the instruments located in the area. The PWV quartiles were extracted from the cumulative fractions in Figure~\ref{figure303} and are presented in Table \ref{tab-acum}. The reported quartiles in Table \ref{tab-acum} are in good agreement with the results recently shown in \cite{radford16, otarola19}.\\
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{cumulative_cerro.png}}
\resizebox{\hsize}{!}{\includegraphics{quantiles_cerro.png}}
\caption{PWV data for the summit of Cerro Chajnantor. Five years of study are included in this analysis. The upper panel shows the cumulative distribution fraction of PWV. The lower panel shows the monthly quartiles (25\%, 50\%, and 75\%) of the PWV measurements for the summit.}
\label{figure303}
\end{figure}
\begin{table}[h]
\caption{Summary of PWV cumulative fractions for the summit of Cerro Chajnantor. The extreme cases of January and August are added for reference.}
\centering
\begin{tabular}{lccc}
\hline\hline
Period of time & $25\>\%$ & $50\>\%$ & $75\>\%$ \\ \hline
Year-round & 0.36 mm & 0.67 mm & 1.28 mm \\
January & 0.53 mm & 1.08 mm & 2.04 mm \\
August & 0.36 mm & 0.54 mm & 0.79 mm\\
\hline
\end{tabular}
\label{tab-acum}
\end{table}
\section{Conclusions}
In this paper, we present the assembly of a $20$-year-long database of the PWV conditions in the Chajnantor area, covering $1997$ to $2017$. Multiple instruments measuring different variables were used to compile such a large dataset; however, they were appropriately calibrated and converted to perform the largest possible climatic evaluation of the site. In addition to their scientific value, the results of this study can be used to plan observatory operations. The millimeter and submillimeter telescopes, such as ALMA, APEX, the Atacama Submillimeter Telescope Experiment (ASTE), NANTEN2, and the future CCAT-prime and LCT, will be able to plan the use of instruments at specific wavelengths depending on the month; that is, a 350 $\mu$m or near-terahertz instrument should only be available during austral winter months.
The methodology to convert atmospheric opacity now includes the use of the instantaneous ground temperature as an input parameter, which contributed to producing more accurate results, as seen in Figure~\ref{figure3}. The cadences of the various instruments were also matched in order to appropriately use the data for a comparative analysis of the various sites. It was found that more robust and statistically significant results for PWV ratios between different sites and instruments are obtained if the PWV versus PWV scatter plots with time-matched samples are replaced with cumulative fractions for each site or instrument.
Regarding the comparison between the Chajnantor Plateau and the summit of Cerro Chajnantor, we found a $28\%$ decrease in PWV at the summit with respect to the plateau, confirming the summit as a great submillimeter astronomy site. This result is in good agreement with previous works \citep{bustos14, radford16, cortes16, otarola19}. In addition, we found differences in the PWV within the plateau: the CBI site, which is $27$~m below the APEX site, shows a $1.9\%$ excess in PWV compared to the APEX site, which is consistent with a standard atmosphere model. We calculated a year-round atmospheric scale height for the Chajnantor area of $1537$~m. This scale height also agrees with previous works \citep{cortes16, radford16, kuo17}, which reported a scale height of $1475$~m for the same area.
Given the conversion method presented here, the tipper radiometers are validated and can be used to characterize other sites of interest for the installation of future telescopes and their logistical considerations. Using the appropriate atmospheric model for the site under measurement, the system does not require external calibrators to deliver PWV, as had been assumed in the past.
Interestingly, our long-term study of the PWV conditions at the Chajnantor Plateau did not show evidence of PWV trends, neither an increase nor a decrease, over the 20 years of evaluation.
The AM configuration files used in this paper can be requested by e-mail from F. Cort\'es (fercortes@udec.cl).
\section*{Acknowledgements}
We thank the people and observatories who contributed to this project. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut fur Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
R. Reeves and F. Cort\'es acknowledge support from Fondo Gemini-Conicyt programa de astronom\'ia/pci folio $32140030$.
K. Cort\'es acknowledges support from Conicyt with her PhD fellowship.
R. Bustos acknowledges support from UCSC internal funds FAA Nº 259/2019 and Fondo Astronom\'ia QUIMAL-CONICYT 180003.
R. Reeves acknowledges support from CONICYT project Basal AFB-170002 and from Fondo Astronomía QUIMAL- CONICYT 140005 / QUIMAL-CONICYT 160012.
\small{
\bibliographystyle{cys}
\section{Introduction}
\label{s1}
\subsection{$p$-Adic mathematical physics.}\label{s1.1}
It is well known that apart from the {\it ``usual'' mathematical physics\/}
(the ``${\mathbb C}$-case'', where all functions and distributions are complex or
real valued and defined on spaces with real or complex coordinates),
there is a {\it $p$-adic mathematical physics\/}, where all functions
and distributions are defined on the field ${\mathbb Q}_p$ of $p$-adic numbers
(for the definition of the field ${\mathbb Q}_p$, see Sec.~\ref{s2} below).
There are many papers studying applications of
$p$-adic analysis to physical problems (in string theory, in quantum mechanics),
to stochastics, to the theory of dynamical systems, and to cognitive sciences
and psychology~\cite{Al-Gor-Kh}--\cite{Al-Kh1},
~\cite{Al-Kh-Ti}--\cite{Bik-V},~\cite{Ev},~\cite{Fr-W1},~\cite{Kh1}--~\cite{Kh5},
~\cite{Koch3},~\cite{Par},~\cite{Vl-V-Z}--~\cite{V2} (see also the
references therein).
Note that the theory of $p$-adic distributions (generalized functions)
plays an important role in solving mathematical problems of
$p$-adic analysis and its applications. Fundamental results on the
$p$-adic theory of distributions can be found in~\cite{Br},~\cite{G-Gr-P},
~\cite{Kh1},~\cite{Taib3},~\cite{Vl-V-Z}.
Note that to deal with {\it nonlinear singular problems\/} of $p$-adic
mathematical physics, algebraic nonlinear theories of distributions
were constructed in~\cite{Al-Kh-Sh3}--~\cite{Al-Kh-Sh5}.
Since there exists a $p$-adic analysis connected with mappings from
${\mathbb Q}_p$ into ${\mathbb Q}_p$, and an analysis connected with mappings from ${\mathbb Q}_p$
into the field of complex numbers ${\mathbb C}$, there exist two types of
$p$-adic physics models.
It is known that for the $p$-adic analysis related to the mapping
${\mathbb Q}_p \to {\mathbb C}$, the operation of partial differentiation is
{\it not defined\/}, and the Vladimirov fractional operator
$D^{\alpha}=f_{-\alpha}*$ plays a corresponding role~\cite[IX]{Vl-V-Z}, where
$f_{\alpha}$ is the $p$-adic {\it Riesz kernel\/} (\ref{56}), $*$ is a
convolution. Moreover, a large number of $p$-adic models use the Vladimirov
fractional operator and the theory of $p$-adic distributions
~\cite{Al-K},~\cite{Al-Kh1},~\cite{Av-Bik-Koz-O},~\cite{Bik-V},
~\cite{Fr-W1},~\cite{Kh1}--~\cite{Kh4},~\cite{Koch3},~\cite{Par},~\cite{Vl-V-Z};
further generalizations can be found in~\cite{Koz1},~\cite{Koz2}.
However, in general, $D^{\alpha}\varphi \not\in {{\mathcal D}}({\mathbb Q}_p)$ for
$\varphi \in {{\mathcal D}}({\mathbb Q}_p)$, and consequently, the operation $D^{\alpha}f$
is well defined only for some distributions $f\in {{\mathcal D}}'({\mathbb Q}_p)$.
For example, in general, $D^{-1}$ is not defined in the space of
test functions ${{\mathcal D}}({\mathbb Q}_p)$~\cite[IX.2]{Vl-V-Z}.
We recall that similar problems arise for the ``${\mathbb C}$-case'' of
fractional operators~\cite{Rub}, \cite{Sam3}, \cite{Sam-Kil-Mar}. Namely,
in general, the Schwartzian test function space ${{\mathcal S}}({\mathbb R}^n)$
{\it is not invariant\/} under fractional operators.
A solution of this problem (in the ``${\mathbb C}$-case'') was suggested by
P.~I.~Lizorkin in the excellent papers~\cite{Liz1}--~\cite{Liz3}
(see also~\cite{Sam1},~\cite{Sam2}).
Namely, in~\cite{Liz1}--~\cite{Liz3} a new type of spaces
{\it invariant\/} under fractional operators was introduced.
We recall the definition of one type of the Lizorkin space
(for details, see~\cite{Liz3}, \cite{Sam3}, \cite{Sam-Kil-Mar}).
Denote by ${\mathbb N}$, ${\mathbb R}$, ${\mathbb C}$ the sets of
positive integers, real numbers and complex
numbers, respectively, and set ${{\mathbb N}}_0=\{0\}\cup{{\mathbb N}}$. For
$\alpha=(\alpha_1,\dots,\alpha_n)\in {{\mathbb N}}_0^n$ and
$x=(x_1,\dots,x_n)\in {\mathbb R}^n$ we assume
$|\alpha|=\sum_{k=1}^n\alpha_k$ and
$x^{\alpha}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. We shall
denote partial derivatives of the order $|\alpha|$ by
$\partial_x^{\alpha}=\frac{\partial^{|\alpha|}}
{\partial{x_1}^{\alpha_1}\cdots\partial{x_n}^{\alpha_n}}$.
Now let us consider the following subspace of test functions
\begin{equation}
\label{5}
\Psi({\mathbb R}^n)=\{\psi\in {\mathcal S}({\mathbb R}^n):
(\partial_{\xi}^{j}\psi)(0)=0, \; |j|=0,1,2,\dots\}.
\end{equation}
The space of functions
\begin{equation}
\label{6}
\Phi({\mathbb R}^n)=\{\phi: \phi=F[\psi], \, \psi\in \Psi({\mathbb R}^n)\}
\subset {\mathcal S}({\mathbb R}^n),
\end{equation}
is called the {\it Lizorkin space\/}, where $F$ is the Fourier transform.
This space admits a simple characterization: $\phi\in \Phi({\mathbb R}^n)$
if and only if $\phi\in {\mathcal S}({\mathbb R}^n)$ and
\begin{equation}
\label{6.1}
\int_{{\mathbb R}^n}x^j\phi(x)\,d^nx=0, \quad |j|=0,1,2,\dots.
\end{equation}
Thus $\Phi({\mathbb R}^n)$ is the subspace of Schwartzian test functions,
for which all the moments are equal to zero.
It is well known that $\Phi({\mathbb R}^n)$ is invariant under the Riesz
fractional operator $D^{\alpha}$, $\alpha \in {\mathbb C}$, given by the formula
\begin{equation}
\label{7}
\big(D^{\alpha}\phi\big)(x)
\stackrel{def}{=}(-\Delta)^{\alpha/2}\phi(x)
=\kappa_{-\alpha}(x)*\phi(x), \quad \phi \in \Phi({\mathbb R}^n),
\end{equation}
where the {\it Riesz kernel\/} is defined as
$\kappa_{\alpha}(x)
=\frac{\Gamma(\frac{n-\alpha}{2})}
{2^{\alpha}\pi^{\frac{n}{2}}\Gamma(\frac{\alpha}{2})}|x|^{\alpha-n}$.
Here $|x|=\sqrt{x_1^2+\cdots+x_n^2}$, \ $|x|^{\alpha}$ is a homogeneous
distribution of degree~$\alpha$, and $\Delta$ is the Laplacian.
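For orientation we also record the Fourier-side form of (\ref{7})
(a standard heuristic, stated here with the normalization
$F[\phi](\xi)=\int_{{\mathbb R}^n}\phi(x)e^{-ix\cdot\xi}\,d^nx$,
which is not fixed in the text above):
$$
F\big[D^{\alpha}\phi\big](\xi)=|\xi|^{\alpha}F[\phi](\xi),
\quad \phi \in \Phi({\mathbb R}^n).
$$
Since $F[\phi]$ vanishes at $\xi=0$ together with all its derivatives,
multiplication by $|\xi|^{\alpha}$ maps the class (\ref{5}) into itself;
this explains why $\Phi({\mathbb R}^n)$ is invariant under $D^{\alpha}$
for all $\alpha\in {\mathbb C}$.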
Note that fractional operators in the ``${\mathbb C}$-case'',
as well as in the $p$-adic case have many applications and are intensively
used in mathematical physics~\cite{Eid-Koch}, \cite{Sam3},~\cite{Sam-Kil-Mar}.
The last two fundamental books contain exhaustive bibliographies.
We recall also that in the ``${\mathbb C}$-case'' the {\it Tauberian theorems\/}
have numerous applications, in particular, in mathematical physics.
Tauberian theorems typically connect the asymptotic
behavior of a function (distribution) at zero with the asymptotic
behavior of its Fourier, Laplace, or other integral transform at infinity.
The inverse theorems are usually called ``Abelian'' \cite{D-Zav1},
\cite{Kor},~\cite{Vl-D-Zav} (see also the references cited therein).
Multidimensional Tauberian theorems for distributions
are treated in the fundamental book~\cite{Vl-D-Zav}, some of them
are connected with the fractional operator. In~\cite{Vl-D-Zav},
as a rule, theorems of this
type were proved for distributions whose supports belong to
a cone in ${\mathbb R}^n$ (semiaxis for $n=1$). This is related to the
fact that such distributions form a convolution algebra.
In this case the kernel of the fractional operator is a distribution
whose support belongs to the cone in ${\mathbb R}^n$ or a semiaxis for $n=1$
~\cite[\S2.8.]{Vl-D-Zav}.
\subsection{Contents of the paper.}\label{s1.2}
In this paper the $p$-adic Lizorkin type spaces and multidimensional
fractional operators and pseudo-differential operators
on these spaces are constructed. Since the Lizorkin
spaces are invariant under fractional operators, they are ``natural''
domains of definition for these operators and can play
a key role in models involving fractional operators.
In this paper we also prove $p$-adic analogs of Tauberian theorems
for the Lizorkin distributions. Tauberian theorems of this type are
connected with the fractional operators.
Taking into account the fact that kernels of the fractional
operators are defined on the whole space ${\mathbb Q}_p^n$ (by virtue of the
$p$-adic field nature), Tauberian theorems proved in this paper
are not direct analogs of Tauberian theorems from~\cite{Vl-D-Zav}.
Some $p$-adic Tauberian theorems for distributions in ${{\mathcal D}}'({\mathbb Q}_p^n)$
were first proved in~\cite{Kh-Sh1},~\cite{Kh-Sh2}. Since the space of
distributions ${{\mathcal D}}'({\mathbb Q}_p^n)$ is not invariant under Vladimirov's
operator, the Tauberian theorems mentioned in~\cite{Kh-Sh1},~\cite{Kh-Sh2}
were proved only under certain restrictions.
In this respect the present paper gives a more natural framework
for such results.
In Sec.~\ref{s2}, we recall some facts from the $p$-adic
theory of distributions.
In Subsec.~\ref{s3.1} we introduce the $p$-adic Lizorkin spaces of test
functions $\Phi_{\times}({\mathbb Q}_p^n)$ and distributions
$\Phi_{\times}'({\mathbb Q}_p^n)$ of the first kind, and in Subsec.~\ref{s3.2}
the $p$-adic Lizorkin spaces of test functions $\Phi({\mathbb Q}_p^n)$ and
distributions $\Phi'({\mathbb Q}_p^n)$ of the second kind. It is easy to see
that the $p$-adic Lizorkin space $\Phi({\mathbb Q}_p^n)$ is an analog of
the Lizorkin space $\Phi({\mathbb R}^n)$ defined by (\ref{6}).
The Lizorkin spaces $\Phi_{\times}({\mathbb Q}_p^n)$ and $\Phi({\mathbb Q}_p^n)$
admit characterizations (\ref{50}) and (\ref{54}), respectively.
In Subsec.~\ref{s3.3}, by Lemmas~\ref{lem2.2},~\ref{lem2.3}, we prove
that the Lizorkin spaces $\Phi_{\times}({\mathbb Q}_p^n)$ and $\Phi({\mathbb Q}_p^n)$
are dense in ${\mathcal L}^{\rho}({\mathbb Q}_p^n)$, $1<\rho<\infty$.
In fact, for $n=1$ and $\rho=2$ this statement was proved
in~\cite[IX.4.]{Vl-V-Z}. Note that for $\rho=2$ the statements
of Lemmas~\ref{lem2.2},~\ref{lem2.3} are almost obvious, but
for $\rho\ne 2$, as in the ``${\mathbb C}$-case''~\cite{Sam3},
these statements are nontrivial. Our proofs of these lemmas
almost word for word follow the proofs developed for the
``${\mathbb C}$-case'' in~\cite{Sam3}.
In Sec.~\ref{s4} two types of the multidimensional fractional
operators are constructed.
In Subsec.~\ref{s4.1}, we recall some facts on the Vladimirov
one-dimensional fractional operator and introduce the Vladimirov
multidimensional operator $D^{\alpha}_{\times}$ as the direct product
of one-dimensional fractional Vladimirov's operators $D^{\alpha_j}_{x_j}$.
Next, we define this operator in the Lizorkin space of distributions
$\Phi_{\times}'({\mathbb Q}_p^n)$ for all $\alpha\in {\mathbb C}^n$.
In Subsec.~\ref{s4.2} we recall some facts on the multidimensional
fractional operator $D^{\alpha}_{x}$ introduced by
Taibleson~\cite[\S2]{Taib1},~\cite[III.4.]{Taib3}
in the space of distributions ${{\mathcal D}}'({\mathbb Q}_p^n)$ for $\alpha\in {\mathbb C}$,
$\alpha\ne -n$ and define this operator in the Lizorkin space of
distributions $\Phi'({\mathbb Q}_p^n)$ for all $\alpha\in {\mathbb C}$.
The Lizorkin space $\Phi_{\times}({\mathbb Q}_p^n)$ is invariant under
the Vladimirov fractional operator (Lemma~\ref{lem4.1}), while
the Lizorkin space $\Phi({\mathbb Q}_p^n)$ is invariant under the Taibleson
fractional operator (Lemma~\ref{lem4.1}).
These fractional operators form Abelian groups on
the corresponding Lizorkin spaces (see (\ref{63})).
In fact, in order to define the one-dimensional fractional Vladimirov
operators $D^{-1}$, the one-dimensional Lizorkin space of test functions
$\Phi({\mathbb Q}_p)$ was introduced in~\cite[IX.2]{Vl-V-Z} (compare with (\ref{54})).
For $n=1$, according to~\cite[IX,(5.7),(5.8)]{Vl-V-Z} and~\cite{Koz0},
the eigenfunctions (\ref{62.1}) of Vladimirov's operator $D^{\alpha}$,
$\alpha>0$, satisfy condition (\ref{54}) and, consequently, belong
to the Lizorkin space $\Phi({\mathbb Q}_p)$.
Moreover, our results imply that these functions (\ref{62.1}) are also
eigenfunctions of the operator $D^{\alpha}$ for $\alpha<0$ (see Remark~\ref{rem1}).
In Subsec.~\ref{s4.3}, by analogy with the ``${\mathbb C}$-case'' ~\cite{Sam3},
~\cite{Sam-Kil-Mar}, two types of $p$-adic Laplacians are discussed.
Note that such types of $p$-adic Laplacians were introduced in~\cite{Kh0}.
In Sec.~\ref{s5}, a class of pseudo-differential operators $A$
(\ref{64.3}) on the Lizorkin spaces are introduced. The Lizorkin spaces
are {\it invariant\/} under our pseudo-differential operators.
The fractional operator $D^{\alpha}_{x}$, $\alpha\in {\mathbb C}$ belongs to
this class of pseudo-differential operators.
The family of pseudo-differential operators $A$ with symbols
${\mathcal A}(\xi)\ne 0$, $\xi\in {\mathbb Q}_p^n\setminus \{0\}$ forms an Abelian group.
In this subsection solutions of pseudo-differential equations
$Af=g$, $g\in \Phi'({\mathbb Q}_p^n)$ are also constructed.
In Sec.~\ref{s6}, we recall a notion of a $p$-adic {\it quasi-asymptotics\/}
from our papers~\cite{Kh-Sh1},~\cite{Kh-Sh2}.
In Sec.~\ref{s7}, a few multidimensional Tauberian type
theorems (Theorems~\ref{th5}--~\ref{th10}, Corollary~\ref{cor6})
for distributions are proved.
Theorem~\ref{th5} and Corollary~\ref{cor6} are related to the Fourier
transform and hold for distributions from ${{\mathcal D}}'({\mathbb Q}_p^n)$.
Theorems~\ref{th7}--~\ref{th9} are related to the fractional operators
and hold for distributions from the Lizorkin spaces $\Phi_{\times}'({\mathbb Q}_p^n)$
and $\Phi'({\mathbb Q}_p^n)$. Theorem~\ref{th10} is related to the
pseudo-differential operator (\ref{64.3}) in the Lizorkin space
$\Phi'({\mathbb Q}_p^n)$.
\section{$p$-Adic distributions.}\label{s2}
We shall use the notations and results from~\cite{Vl-V-Z}.
We denote by ${\mathbb Z}$ the set of integers.
Recall that the field ${\mathbb Q}_p$ of $p$-adic numbers is defined as the
completion of the field of rational numbers ${\mathbb Q}$ with respect to the
non-Archimedean $p$-adic norm $|\cdot|_p$. This norm is defined as
follows: $|0|_p=0$; if an arbitrary rational number $x\ne 0$ is
represented as $x=p^{\gamma}\frac{m}{n}$, where $\gamma=\gamma(x)\in {\mathbb Z}$,
and $m$ and $n$ are not divisible by $p$, then $|x|_p=p^{-\gamma}$.
This norm in ${\mathbb Q}_p$ satisfies the strong triangle inequality
$|x+y|_p\le \max(|x|_p,|y|_p)$.
Denote by ${\mathbb Q}_p^{*}={\mathbb Q}_p\setminus\{0\}$ the multiplicative group
of the field ${\mathbb Q}_p$.
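As a concrete illustration, the $p$-adic norm of a rational number can be
computed directly (a minimal computational sketch in Python, not part of the
original text; the name \texttt{padic\_norm} is ours):

```python
# A sketch (not from the paper): |x|_p = p^{-gamma} for x = p^gamma * m/n
# with m and n coprime to p, and |0|_p = 0.
from fractions import Fraction

def padic_norm(x, p):
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    gamma, num, den = 0, x.numerator, x.denominator
    while num % p == 0:   # extract powers of p from the numerator
        num //= p
        gamma += 1
    while den % p == 0:   # ... and from the denominator
        den //= p
        gamma -= 1
    return Fraction(p) ** (-gamma)
```

For instance, $|18|_3=3^{-2}$ and $|5/6|_3=3$; on sample pairs one checks
the strong triangle inequality $|x+y|_p\le \max(|x|_p,|y|_p)$ directly.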
The space ${\mathbb Q}_p^n={\mathbb Q}_p\times\cdots\times{\mathbb Q}_p$ consists of points
$x=(x_1,\dots,x_n)$, where $x_j \in {\mathbb Q}_p$, $j=1,2\dots,n$, \ $n\ge 2$.
The $p$-adic norm on ${\mathbb Q}_p^n$ is
\begin{equation}
\label{8}
|x|_p=\max_{1 \le j \le n}|x_j|_p, \quad x\in {\mathbb Q}_p^n.
\end{equation}
Denote by $B_{\gamma}^n(a)=\{x: |x-a|_p \le p^{\gamma}\}$ the ball
of radius $p^{\gamma}$ with the center at a point $a=(a_1,\dots,a_n)\in {\mathbb Q}_p^n$,
and set $B_{\gamma}^n(0)=B_{\gamma}^n$, \ $\gamma \in {\mathbb Z}$.
Here
\begin{equation}
\label{9}
B_{\gamma}^n(a)=B_{\gamma}(a_1)\times\cdots\times B_{\gamma}(a_n),
\end{equation}
where $B_{\gamma}(a_j)=\{x_j: |x_j-a_j|_p \le p^{\gamma}\}$ is a disc
of radius $p^{\gamma}$ with the center at a point $a_j\in {\mathbb Q}_p$,
$j=1,2\dots,n$.
On ${\mathbb Q}_p$ there exists the Haar measure, i.e., a positive measure $dx$
invariant under shifts, $d(x+a)=dx$, and normalized
by the equality $\int_{|x|_p\le 1}\,dx=1$.
The invariant measure $dx$ on the field ${\mathbb Q}_p$ is extended to an
invariant measure $d^n x=dx_1\cdots dx_n$ on ${\mathbb Q}_p^n$ in the standard way.
A complex-valued function $f$ defined on ${\mathbb Q}_p^n$ is called
{\it locally-constant} if for any $x\in {\mathbb Q}_p^n$ there exists
an integer $l(x)\in {\mathbb Z}$ such that
$$
f(x+x')=f(x), \quad x'\in B_{l(x)}^n.
$$
Denote by ${{\mathcal E}}({\mathbb Q}_p^n)$ and ${{\mathcal D}}({\mathbb Q}_p^n)$ the
linear spaces of locally-constant ${\mathbb C}$-valued functions on ${\mathbb Q}_p^n$
and locally-constant ${\mathbb C}$-valued functions with compact supports
(so-called test functions), respectively; ${{\mathcal D}}={{\mathcal D}}({\mathbb Q}_p)$,
${{\mathcal E}}={{\mathcal E}}({\mathbb Q}_p)$.
If $\varphi \in {{\mathcal D}}({\mathbb Q}_p^n)$, according to Lemma~1 from~\cite[VI.1.]{Vl-V-Z},
there exists $l\in {\mathbb Z}$, such that
$$
\varphi(x+x')=\varphi(x), \quad x'\in B_l^n, \quad x\in {\mathbb Q}_p^n.
$$
The largest of such numbers $l=l(\varphi)$ is called the
{\it parameter of constancy} of the function $\varphi$.
Let us denote by ${{\mathcal D}}^l_N({\mathbb Q}_p^n)$ the finite-dimensional space of
test functions from ${{\mathcal D}}({\mathbb Q}_p^n)$ having supports in the ball $B_N^n$
and with parameters of constancy $\ge l$.
Any function $\varphi \in {{\mathcal D}}^l_N({\mathbb Q}_p^n)$ is represented in the
following form
\begin{equation}
\label{9.4}
\varphi(x)=\sum_{\nu=1}^{p^{n(N-l)}}
\varphi(b^{\nu})\Delta_{l}(x_1-b_1^{\nu})\cdots\Delta_{l}(x_n-b_n^{\nu}),
\quad x\in {\mathbb Q}_p^n,
\end{equation}
where $\Delta_{l}(x_j-b_j^{\nu})$ is the characteristic function
of the ball $B_{l}(b_j^{\nu})$, and the points
$b^{\nu}=(b_1^{\nu},\dots,b_n^{\nu})\in B_N^n$ do not depend on
$\varphi$~\cite[VI,(5.2')]{Vl-V-Z}.
Denote by ${{\mathcal D}}'({\mathbb Q}_p^n)$ the set of all linear functionals
(distributions) on ${{\mathcal D}}({\mathbb Q}_p^n)$. It follows from~\cite[VI.3.]{Vl-V-Z}
that any linear functional $f$ is continuous on ${{\mathcal D}}({\mathbb Q}_p^n)$.
Let us introduce in ${{\mathcal D}}({\mathbb Q}_p^n)$ a {\it canonical
$\delta$-sequence} $\delta_k(x)\stackrel{def}{=}p^{nk}\Omega(p^k|x|_p)$,
and a {\it canonical $1$-sequence}
$\Delta_k(x)\stackrel{def}{=}\Omega(p^{-k}|x|_p)$, $k \in {\mathbb Z}$, \
$x\in {\mathbb Q}_p^n$, where
\begin{equation}
\label{10}
\Omega(t)=\left\{
\begin{array}{lcr}
1, &&\quad 0 \le t \le 1, \\
0, &&\quad t>1. \\
\end{array}
\right.
\end{equation}
Here $\Delta_k(x)$ is the characteristic function of the ball $B_{k}^n$.
It is clear~\cite[VI.3., VII.1.]{Vl-V-Z} that
$\delta_k \to \delta$, $k \to \infty$ in ${{\mathcal D}}'({\mathbb Q}_p^n)$
and $\Delta_k \to 1$, $k \to \infty$ in ${{\mathcal E}}({\mathbb Q}_p^n)$.
The convolution $f*g$ for distributions $f,g\in{{\mathcal D}}'({\mathbb Q}_p^n)$ is
defined (see~\cite[VII.1.]{Vl-V-Z}) as
\begin{equation}
\label{11}
\langle f*g,\varphi\rangle
=\lim_{k\to \infty}\langle f(x)\times g(y),\Delta_k(x)\varphi(x+y)\rangle
\end{equation}
if the limit exists for all $\varphi\in {{\mathcal D}}({\mathbb Q}_p^n)$,
where $f(x)\times g(y)$ is the direct product of distributions.
The Fourier transform of $\varphi\in {{\mathcal D}}({\mathbb Q}_p^n)$ is defined by the
formula
$$
F[\varphi](\xi)=\int_{{\mathbb Q}_p^n}\chi_p(\xi\cdot x)\varphi(x)\,d^nx,
\quad \xi \in {\mathbb Q}_p^n,
$$
where $\chi_p(\xi\cdot x)=\chi_p(\xi_1 x_1)\cdots \chi_p(\xi_n x_n)
=e^{2\pi i\sum_{j=1}^{n}\{\xi_j x_j\}_p}$, \
$\xi\cdot x$ is the scalar product of vectors, and the function
$\chi_p(\xi_j x_j)=e^{2\pi i\{\xi_j x_j\}_p}$ for every fixed
$\xi_j \in {\mathbb Q}_p$ is an additive character of the field ${\mathbb Q}_p$, \
$\{\xi_j x_j\}_p$ is the fractional part of a number $\xi_j x_j$, \
$j=1,\dots,n$~\cite[VII.2.,3.]{Vl-V-Z}. It is known that the
Fourier transform is a linear isomorphism of ${{\mathcal D}}({\mathbb Q}_p^n)$ onto
${{\mathcal D}}({\mathbb Q}_p^n)$.
Moreover, according to~\cite[Lemma~A.]{Taib1},~\cite[III,(3.2)]{Taib3},
~\cite[VII.2.]{Vl-V-Z},
\begin{equation}
\label{12}
\varphi(x) \in {{\mathcal D}}^l_N({\mathbb Q}_p^n) \quad \text{iff} \quad
F\big[\varphi(x)\big](\xi) \in {{\mathcal D}}^{-N}_{-l}({\mathbb Q}_p^n).
\end{equation}
We define the Fourier transform $F[f]$ of a distribution
$f\in {{\mathcal D}}'({\mathbb Q}_p^n)$ by the relation~\cite[VII.3.]{Vl-V-Z}
\begin{equation}
\label{13}
\langle F[f],\varphi\rangle=\langle f,F[\varphi]\rangle,
\quad \forall \, \varphi\in {{\mathcal D}}({\mathbb Q}_p^n).
\end{equation}
Let $A$ be a matrix and $b\in {\mathbb Q}_p^n$. Then for a distribution
$f\in{{\mathcal D}}'({\mathbb Q}_p^n)$ the following relation holds~\cite[VII,(3.3)]{Vl-V-Z}:
\begin{equation}
\label{14}
F[f(Ax+b)](\xi)
=|\det{A}|_p^{-1}\chi_p\big(-A^{-1}b\cdot \xi\big)F[f(x)]\big(A^{-1}\xi\big),
\quad \det{A} \ne 0.
\end{equation}
In particular, if $f\in{{\mathcal D}}'({\mathbb Q}_p)$, \ $a\in {\mathbb Q}_p^{*}$, \
$b\in {\mathbb Q}_p$ then
$$
F[f(ax+b)](\xi)
=|a|_p^{-1}\chi_p\Big(-\frac{b}{a}\xi\Big)F[f(x)]\Big(\frac{\xi}{a}\Big).
$$
According to~\cite[IV,(3.1)]{Vl-V-Z},
\begin{equation}
\label{14.1}
F[\Delta_{k}](x)=\delta_{k}(x), \quad k\in {\mathbb Z}, \qquad x \in {\mathbb Q}_p^n.
\end{equation}
In particular, $F[\Omega](x)=\Omega(x)$.
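This identity is easy to test numerically (a sketch of ours, not from the
text): for $\xi=ap^{-m}$ with $a$ prime to $p$ and $m\ge 1$, the fractional
part $\{\xi x\}_p$, $x\in B_0$, depends only on $x \bmod p^m$, so
$F[\Omega](\xi)$ reduces to an exact finite average of roots of unity:

```python
# A sketch (not from the paper): F[Omega](xi) = \int_{Z_p} chi_p(xi x) dx
# at xi = a / p**m.  For m >= 1 the integrand depends only on x mod p**m,
# so the Haar integral is the exact average over x = 0, ..., p**m - 1;
# for m <= 0 (i.e. |xi|_p <= 1) the integrand is identically 1.
import cmath

def fourier_omega(a, m, p):
    if m <= 0:
        return 1.0
    q = p ** m
    return sum(cmath.exp(2j * cmath.pi * ((a * x) % q) / q)
               for x in range(q)) / q
```

The result is $1$ for $|\xi|_p\le 1$ and (numerically) $0$ for $|\xi|_p>1$,
i.e., $F[\Omega]=\Omega$.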
If for distributions $f,g\in {{\mathcal D}}'({\mathbb Q}_p^n)$ a convolution $f*g$
exists then~\cite[VII,(5.4)]{Vl-V-Z}
\begin{equation}
\label{15}
F[f*g]=F[f]F[g].
\end{equation}
It is well known (see, e.g.,~\cite[III.2.]{Vl-V-Z}) that any
{\it multiplicative character\/} $\pi$ of the field ${\mathbb Q}_p$
can be represented as
\begin{equation}
\label{16}
\pi(x)\stackrel{def}{=}\pi_{\alpha}(x)=|x|_p^{\alpha-1}\pi_{1}(x),
\quad x \in {\mathbb Q}_p,
\end{equation}
where $\pi(p)=p^{1-\alpha}$ and $\pi_{1}(x)$ is a
{\it normed multiplicative character\/} such that
\begin{equation}
\label{16.1}
\pi_1(x)=\pi_{1}(|x|_px), \quad \pi_1(p)=\pi_1(1)=1, \quad
|\pi_1(x)|=1.
\end{equation}
We denote $\pi_{0}(x)=|x|_p^{-1}$.
\begin{Definition}
\label{de1} \rm
Let $\pi_{\alpha}$ be a multiplicative character of the
field ${\mathbb Q}_p$.
{(a)} (~\cite[Ch.II,\S 2.3.]{G-Gr-P},~\cite[VIII.1.]{Vl-V-Z})
A distribution $f \in {{\mathcal D}}'({\mathbb Q}_p)$ is called
{\it homogeneous} of degree $\pi_{\alpha}$ if for all
$\varphi \in {{\mathcal D}}({\mathbb Q}_p)$ and $t \in {\mathbb Q}_p^*$ we have the relation
$$
\Bigl\langle f,\varphi\Big(\frac{x}{t}\Big) \Bigr\rangle
=\pi_{\alpha}(t)|t|_p \langle f,\varphi \rangle,
$$
i.e., $f(tx)=\pi_{\alpha}(t)f(x)$, $t \in {\mathbb Q}_p^{*}$.
{(b)} We say that a distribution $f \in {{\mathcal D}}'({\mathbb Q}_p^n)$ is
{\it homogeneous} of degree $\pi_{\alpha}$ if for all $t \in {\mathbb Q}_p^*$
we have
\begin{equation}
\label{17}
f(tx)=f(tx_1,\dots,tx_n)=\pi_{\alpha}(t)f(x),
\quad x=(x_1,\dots,x_n)\in {\mathbb Q}_p^{n}.
\end{equation}
A {\it homogeneous} distribution of degree $\pi_{\alpha}(x)=|x|_p^{\alpha-1}$
($\alpha \ne 0$) is called homogeneous of degree~$\alpha-1$.
\end{Definition}
For every multiplicative character $\pi_{\alpha}(x)\ne \pi_{0}(x)=|x|_p^{-1}$,
$x\ne 0$, a {\it homogeneous\/} distribution $\pi_{\alpha}\in {{\mathcal D}}'({\mathbb Q}_p)$
of degree $\pi_{\alpha}(x)$ is defined by~\cite[VIII,(1.6)]{Vl-V-Z}
$$
\langle \pi_{\alpha},\varphi \rangle
=\int_{B_0}|x|_p^{\alpha-1}\pi_1(x)\big(\varphi(x)-\varphi(0)\big)\,dx
\qquad\qquad\qquad\qquad
$$
\begin{equation}
\label{24}
\qquad\qquad
+\int_{{\mathbb Q}_p\setminus B_0}|x|_p^{\alpha-1}\pi_1(x)\varphi(x)\,dx
+\varphi(0)I_0(\alpha),
\end{equation}
for all $\varphi\in {{\mathcal D}}({\mathbb Q}_p)$, where
$$
I_0(\alpha)=\int_{B_0}|x|_p^{\alpha-1}\pi_1(x)\,dx
=\left\{
\begin{array}{rcl}
0, \quad \pi_1(x) &\not\equiv& 1, \\
\frac{1-p^{-1}}{1-p^{-\alpha}}, \quad \pi_1(x) &\equiv& 1. \\
\end{array}
\right.
$$
Here $\alpha \ne \mu_j=\frac{2\pi i}{\ln p}j$, \ $j\in {\mathbb Z}$.
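For the trivial character $\pi_1\equiv 1$ and ${\rm Re}\,\alpha>0$, the value
of $I_0(\alpha)$ is obtained by decomposing $B_0\setminus\{0\}$ into the
spheres $S_\gamma=\{|x|_p=p^{\gamma}\}$, $\gamma\le 0$, of Haar measure
$p^{\gamma}(1-p^{-1})$. The following numerical sketch (ours, not from the
text) checks the resulting geometric series:

```python
# A sketch (not from the paper): for pi_1 == 1 and Re(alpha) > 0,
#   I_0(alpha) = sum_{gamma <= 0} p^{gamma*(alpha-1)} * p^gamma * (1 - 1/p)
#              = (1 - p^{-1}) / (1 - p^{-alpha}),
# summing |x|_p^{alpha-1} over the spheres S_gamma = {|x|_p = p^gamma}.
def I0_by_spheres(p, alpha, depth=200):
    return sum((1 - 1 / p) * p ** (g * alpha) for g in range(0, -depth, -1))
```

For example, for $p=5$, $\alpha=0.7$ the truncated sum agrees with
$(1-p^{-1})/(1-p^{-\alpha})$ to machine precision.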
\begin{Definition}
\label{de1.1} \rm
{(a)} (\cite{Al-Kh-Sh1},~\cite{Al-Kh-Sh2}) A distribution
$f_m\in {{\mathcal D}}'({\mathbb Q}_p)$ is said to be {\it associated
homogeneous {\rm(}in the wide sense{\rm)}\/} of
degree~$\pi_{\alpha}$ and order~$m$, \ $m \in {\mathbb N}_{0}$, if
$$
\Bigl\langle f_m,\varphi\Big(\frac{x}{t}\Big)\Bigr\rangle
=\pi_{\alpha}(t)|t|_p \langle f_m,\varphi \rangle
+\sum_{j=1}^{m}\pi_{\alpha}(t)|t|_p\log_p^j|t|_p
\langle f_{m-j},\varphi \rangle
$$
for all $\varphi \in {{\mathcal D}({\mathbb Q}_p)}$ and $t \in {\mathbb Q}_p^*$, where
$f_{m-j}\in {{\mathcal D}}'({\mathbb Q}_p)$ is an associated homogeneous distribution
of degree~$\pi_{\alpha}$ and order $m-j$, \ $j=1,2,\dots,m$, i.e.,
$$
f_m(tx)=\pi_{\alpha}(t)f_m(x)
+\sum_{j=1}^{m}\pi_{\alpha}(t)\log_p^j|t|_pf_{m-j}(x), \quad t \in {\mathbb Q}_p^*.
$$
If $m=0$, the above sum is considered to be empty.
{(b)} We say that a distribution
$f \in {{\mathcal D}}'({\mathbb Q}_p^n)$ is {\it associated homogeneous
{\rm(}in the wide sense{\rm)}\/} of degree $\pi_{\alpha}$ and order~$m$, \
$m \in {\mathbb N}_{0}$, if for all $t \in {\mathbb Q}_p^*$ we have
\begin{equation}
\label{18}
f_m(tx)=f_m(tx_1,\dots,tx_n)=\pi_{\alpha}(t)f_m(x)
+\sum_{j=1}^{m}\pi_{\alpha}(t)\log_p^j|t|_pf_{m-j}(x),
\end{equation}
where $f_{m-j}\in {{\mathcal D}}'({\mathbb Q}_p^n)$ is an associated homogeneous
distribution of degree~$\pi_{\alpha}$ and order $m-j$, \ $j=1,2,\dots,m$.
An {\it associated homogeneous {\rm(}in the wide sense{\rm)}\/} distribution
of degree $\pi_{\alpha}(t)=|t|_p^{\alpha-1}$ and order~$m$ is called
{\it associated homogeneous} of degree~$\alpha-1$ and order~$m$.
{(c)} An associated homogeneous distribution (in the wide sense) of order
$m=1$ is called an {\it associated homogeneous} distribution (see~\cite{Ge-Sh}
and~\cite{Al-Kh-Sh1},~\cite{Al-Kh-Sh2}).
\end{Definition}
The theorem describing all one-dimensional
{\it associated homogeneous {\rm(}in the wide sense{\rm)}\/} distributions
was proved in~\cite{Al-Kh-Sh1},~\cite{Al-Kh-Sh2}.
According to~\cite{Al-Kh-Sh1},~\cite{Al-Kh-Sh2},~\cite[\S 3]{Al-Kh-Sh3},
an associated homogeneous distribution of
degree~$\pi_{\alpha}(x)=|x|_p^{\alpha-1}\pi_1(x) \ne |x|_p^{-1}$
and order $m$, \ $m\in {\mathbb N}$ is defined as
$$
\langle \pi_{\alpha}(x)\log_p^m|x|_p,\varphi(x) \rangle
=\int_{B_0}|x|_p^{\alpha-1}\pi_1(x)\log_p^m|x|_p
\big(\varphi(x)-\varphi(0)\big)\,dx
$$
$$
+\int_{{\mathbb Q}_p\setminus B_0}|x|_p^{\alpha-1}\pi_1(x)\log_p^m|x|_p\varphi(x)\,dx
\qquad\qquad
$$
\begin{equation}
\label{19.3}
\quad
+\varphi(0)\int_{B_0}|x|_p^{\alpha-1}\pi_1(x)\log_p^m|x|_p\,dx,
\quad \forall \, \varphi\in {{\mathcal D}}({\mathbb Q}_p),
\end{equation}
where
$I_{0}(\alpha;m)=\int_{B_0}|x|_p^{\alpha-1}\pi_1(x)\log_p^m|x|_p\,dx
=\frac{d^m I_{0}(\alpha)}{d\alpha^m} \log_p^m e$.
In~\cite{Al-Kh-Sh1},~\cite{Al-Kh-Sh2},~\cite[\S 3]{Al-Kh-Sh3} an
associated homogeneous distribution of
degree $\pi_{0}(x)=|x|_p^{-1}$ and order $m$, $m\in {\mathbb N}$ is defined as
$$
\Bigl\langle P\Big(\frac{\log_p^{m-1}|x|_p}{|x|_p}\Big),\varphi \Bigr\rangle
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
$$
\begin{equation}
\label{19.5}
=\int_{B_0}\frac{\log_p^{m-1}|x|_p}{|x|_p}\big(\varphi(x)-\varphi(0)\big)\,dx
+\int_{{\mathbb Q}_p\setminus B_0}\frac{\log_p^{m-1}|x|_p}{|x|_p}\varphi(x)\,dx,
\end{equation}
for all $\varphi\in {{\mathcal D}}({\mathbb Q}_p)$.
The integrals
\begin{equation}
\label{25}
\Gamma_p(\alpha)\stackrel{def}{=}\Gamma_p(|x|_p^{\alpha-1})
=\int_{{\mathbb Q}_p} |x|_p^{\alpha-1}\chi_p(x)\,dx
=\frac{1-p^{\alpha-1}}{1-p^{-\alpha}},
\end{equation}
\begin{equation}
\label{25.1}
\Gamma_p(\pi_{\alpha})\stackrel{def}{=}F[\pi_{\alpha}](1)
=\int_{{\mathbb Q}_p} |x|_p^{\alpha-1}\pi_{1}(x)\chi_p(x)\,dx
\qquad\qquad\qquad
\end{equation}
are called $p$-adic $\Gamma$-{\it functions\/}
~\cite[VIII,(2.2),(2.17)]{Vl-V-Z}.
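Formula (\ref{25}) can be recovered from the sphere decomposition of ${\mathbb Q}_p$:
on $S_\gamma$ with $\gamma\le 0$ the character $\chi_p$ equals $1$, the sphere
$\gamma=1$ contributes $\int_{S_1}\chi_p(x)\,dx=-1$, and the character sum over
$S_\gamma$ vanishes for $\gamma\ge 2$. A numerical sketch (ours, not from the
text), valid for $0<{\rm Re}\,\alpha<1$:

```python
# A sketch (not from the paper): Gamma_p(alpha) via the sphere decomposition,
#   spheres with gamma <= 0 contribute (1 - 1/p) * p^{gamma * alpha},
#   the sphere gamma = 1 contributes -p^{alpha - 1},
#   spheres with gamma >= 2 contribute 0.
def gamma_p(p, alpha):
    return (1 - p ** (alpha - 1)) / (1 - p ** (-alpha))

def gamma_p_by_spheres(p, alpha, depth=200):
    tail = sum((1 - 1 / p) * p ** (g * alpha) for g in range(0, -depth, -1))
    return tail - p ** (alpha - 1)
```

One can also check numerically the reflection formula
$\Gamma_p(\alpha)\Gamma_p(1-\alpha)=1$.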
If $\pi_{\alpha}^1(x)$, $\pi_{\beta}^2(x)$ are multiplicative characters
then the following relation holds~\cite[VIII,(3.6)]{Vl-V-Z}:
\begin{equation}
\label{25.4}
\big(\pi_{\alpha}^1*\pi_{\beta}^2\big)(x)
={{\mathcal B}}_p(\pi_{\alpha}^1,\pi_{\beta}^2)
|x|_p^{\alpha+\beta-1}\pi_{1}^1(x)\pi_{1}^2(x),
\quad x \in {\mathbb Q}_p,
\end{equation}
where
\begin{equation}
\label{25.5}
{{\mathcal B}}_p(\pi_{\alpha}^1,\pi_{\beta}^2)
=\frac{\Gamma_p(\pi_{\alpha}^1)\Gamma_p(\pi_{\beta}^2)}
{\Gamma_p(\pi_{\alpha}^1\pi_{\beta}^2|x|_p)},
\end{equation}
is the ${{\mathcal B}}$-function.
The multidimensional homogeneous distribution
$|x|_p^{\alpha-n}\in {{\mathcal D}}'({\mathbb Q}_p^n)$
of degree $\alpha-n$ is constructed as follows.
If ${\rm Re}\,\alpha>0$ then the function $|x|_p^{\alpha-n}$
generates a regular functional
\begin{equation}
\label{63.0}
\langle |x|_p^{\alpha-n},\varphi \rangle
=\int_{{\mathbb Q}_p^n}|x|_p^{\alpha-n}\varphi(x)\,d^nx,
\quad \forall \, \varphi\in {{\mathcal D}}({\mathbb Q}_p^n).
\end{equation}
If ${\rm Re}\,\alpha \le 0$ this distribution is defined by means
of analytic continuation~\cite[(*)]{Taib1},~\cite[III,(4.3)]{Taib3},
~\cite[VIII,(4.2)]{Vl-V-Z}:
$$
\langle |x|_p^{\alpha-n},\varphi \rangle
=\int_{B_0^n}|x|_p^{\alpha-n}\big(\varphi(x)-\varphi(0)\big)\,d^nx
\qquad\qquad\qquad\qquad
$$
\begin{equation}
\label{63.1}
\qquad\qquad
+\int_{{\mathbb Q}_p^n\setminus B_0^n}|x|_p^{\alpha-n}\varphi(x)\,d^nx
+\varphi(0)\frac{1-p^{-n}}{1-p^{-\alpha}},
\end{equation}
for all $\varphi\in {{\mathcal D}}({\mathbb Q}_p^n)$, \ $\alpha\ne \mu_j=\frac{2\pi i}{\ln p}j$,
$j\in {\mathbb Z}$, where $|x|_p$, \ $x\in {\mathbb Q}_p^n$ is given by (\ref{8}).
The distribution $|x|_p^{\alpha-n}$ is a meromorphic function of the complex
variable $\alpha$; it has simple poles at the points $\mu_j$, $j\in {\mathbb Z}$,
with residues $\frac{1-p^{-n}}{\ln p}\delta(x)$.
Similarly to the one-dimensional case (\ref{19.5}), one can construct the
distribution $P(\frac{1}{|x|_p^{n}})$ called the principal value of the
function~$\frac{1}{|x|_p^{n}}$:
\begin{equation}
\label{63.1*}
\Bigl\langle P\Big(\frac{1}{|x|_p^{n}}\Big),\varphi \Bigr\rangle
=\int_{B_0^n}\frac{\varphi(x)-\varphi(0)}{|x|_p^{n}}\,d^nx
+\int_{{\mathbb Q}_p^n\setminus B_0^n}\frac{\varphi(x)}{|x|_p^{n}}\,d^nx,
\end{equation}
for all $\varphi\in {{\mathcal D}}({\mathbb Q}_p^n)$.
It is easy to show that this distribution is
{\it associated homogeneous\/} of degree $-n$ and order $1$
(see~\cite{Al-Kh-Sh1},~\cite{Al-Kh-Sh2}).
The Fourier transform of $|x|_p^{\alpha-n}$ is given by the
formula~\cite{Sm},~\cite[Theorem~2.]{Taib1},~\cite[III,Theorem~(4.5)]{Taib3},
~\cite[VIII,(4.3)]{Vl-V-Z}
\begin{equation}
\label{63.2}
F[|x|_p^{\alpha-n}]=\Gamma^{(n)}_p(\alpha)|\xi|_p^{-\alpha},
\quad \alpha \ne 0,\,n
\end{equation}
where the $n$-dimensional $\Gamma$-{\it function\/} $\Gamma^{(n)}_p(\alpha)$
is given by the following formulas~\cite{Sm},~\cite[Theorem~1.]{Taib1},
~\cite[III,Theorem~(4.2)]{Taib3},~\cite[VIII,(4.4)]{Vl-V-Z}:
$$
\Gamma_p^{(n)}(\alpha)\stackrel{def}{=}
\lim_{k\to\infty}
\int_{p^{-k}\le |x|_p\le p^{k}} |x|_p^{\alpha-n}\chi_p(u\cdot x)\,d^nx
\qquad\qquad\qquad\qquad
$$
\begin{equation}
\label{63.3}
\qquad\qquad
=\int_{{\mathbb Q}_p^n} |x|_p^{\alpha-n}\chi_p(x_1)\,d^nx
=\frac{1-p^{\alpha-n}}{1-p^{-\alpha}}
\end{equation}
where $|u|_p=1$. Here $\Gamma_p^{(1)}(\alpha)=\Gamma_p(\alpha)$.
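The same sphere decomposition works in ${\mathbb Q}_p^n$: the sphere
$\{|x|_p=p^{\gamma}\}$ has measure $p^{n\gamma}(1-p^{-n})$, the integral of
$\chi_p(x_1)$ over it equals this measure for $\gamma\le 0$, equals $-1$ for
$\gamma=1$, and vanishes for $\gamma\ge 2$. A brief numerical sketch (ours,
not from the text):

```python
# A sketch (not from the paper): Gamma_p^{(n)}(alpha) via the analogous
# sphere-by-sphere computation in Q_p^n; the sphere {|x|_p = p^gamma}
# has measure p^{n*gamma} * (1 - p^{-n}).
def gamma_p_n(p, n, alpha):
    return (1 - p ** (alpha - n)) / (1 - p ** (-alpha))

def gamma_p_n_by_spheres(p, n, alpha, depth=200):
    tail = sum((1 - p ** (-n)) * p ** (g * alpha)
               for g in range(0, -depth, -1))
    return tail - p ** (alpha - n)
```

For $n=1$ this reduces to $\Gamma_p(\alpha)$.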
\section{The $p$-adic Lizorkin spaces}
\label{s3}
\subsection{The Lizorkin space of the first kind.}\label{s3.1}
Consider the subspaces of the space of test functions ${\mathcal D}({\mathbb Q}_p^n)$
$$
\Psi_{\times}=\Psi_{\times}({\mathbb Q}_p^n)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\qquad
$$
$$
\qquad
=\{\psi(\xi)\in {\mathcal D}({\mathbb Q}_p^n):
\psi(\xi_1,\dots,\xi_{j-1},0,\xi_{j+1},\dots,\xi_{n})=0, \, j=1,2,\dots,n\}
$$
and
$$
\Phi_{\times}=\Phi_{\times}({\mathbb Q}_p^n)
=\{\phi: \phi=F[\psi], \, \psi\in \Psi_{\times}({\mathbb Q}_p^n)\}.
$$
Obviously, $\Psi_{\times}, \Phi_{\times} \ne \emptyset$.
Since the Fourier transform is a linear isomorphism
of ${{\mathcal D}}({\mathbb Q}_p^n)$ onto ${{\mathcal D}}({\mathbb Q}_p^n)$, we have
$\Psi_{\times}, \, \Phi_{\times}\subset {\mathcal D}({\mathbb Q}_p^n)$.
The space $\Phi_{\times}$ admits the following characterization:
$\phi\in \Phi_{\times}$ if and only if $\phi\in {\mathcal D}({\mathbb Q}_p^n)$ and
\begin{equation}
\label{50}
\int_{{\mathbb Q}_p}\phi(x_1,\dots,x_{j-1},x_{j},x_{j+1},\dots,x_{n})\,dx_j=0,
\quad j=1,2,\dots,n.
\end{equation}
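The characterization (\ref{50}) can be seen heuristically as follows
(a sketch, not a proof): writing $\phi=F[\psi]$, $\psi\in \Psi_{\times}$,
and using $\int_{{\mathbb Q}_p}\chi_p(x_j\xi_j)\,dx_j=\delta(\xi_j)$
in the distributional sense (cf. (\ref{14.1})), we obtain
$$
\int_{{\mathbb Q}_p}\phi(x)\,dx_j
=\int_{{\mathbb Q}_p^{n-1}}\chi_p\Big(\sum_{k\ne j}x_k\xi_k\Big)
\psi(\xi)\Big|_{\xi_j=0}\prod_{k\ne j}d\xi_k=0,
$$
since $\psi$ vanishes on the hyperplane $\xi_j=0$.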
The space $\Phi_{\times}$ is called the $p$-adic {\it Lizorkin
space of test functions of the first kind\/}. By analogy with
the ${\mathbb C}$-case~\cite[2.2.]{Sam3},~\cite[\S 25.1.]{Sam-Kil-Mar},
$\Phi_{\times}$ can be equipped with the topology of the space
${\mathcal D}({\mathbb Q}_p^n)$ which makes $\Phi_{\times}$ a complete space.
The space $\Phi_{\times}'=\Phi_{\times}'({\mathbb Q}_p^n)$ is called
the $p$-adic {\it Lizorkin space of distributions of the first kind\/}.
Let $\Psi^{\perp}_{\times}({\mathbb Q}_p^n)
=\{f\in {\mathcal D}'({\mathbb Q}_p^n): \langle f,\psi\rangle=0,
\, \forall \, \psi\in \Psi_{\times}\}$, i.e., $\Psi^{\perp}_{\times}({\mathbb Q}_p^n)$
is the set of functionals from ${\mathcal D}'({\mathbb Q}_p^n)$ concentrated on
the set $\cup_{j=1}^n\{x\in {\mathbb Q}_p^n: x_j=0\}$.
Let $\Phi^{\perp}_{\times}({\mathbb Q}_p^n)=\{f\in {\mathcal D}'({\mathbb Q}_p^n):
\langle f,\phi\rangle=0, \, \forall \, \phi\in \Phi_{\times}\}$.
Thus $\Phi^{\perp}_{\times}$ and $\Psi^{\perp}_{\times}$ are subspaces
of functionals in ${\mathcal D}'$ orthogonal to $\Phi_{\times}$ and $\Psi_{\times}$,
respectively. It is clear that the set $\Psi^{\perp}_{\times}$ consists
of linear combinations of functionals of the form
$f(\xi_1,\dots,\widehat{\xi_j},\dots,\xi_n)$, $j=1,2,\dots,n$,
where the hat \ $\widehat{\,}$ \ over $\xi_j$ denotes deletion
of the corresponding variable from the vector $\xi=(\xi_1,\dots,\xi_n)$.
The set $\Phi^{\perp}_{\times}$ consists of linear combinations
of functionals of the form
$g(x_1,\dots,\widehat{x_j},\dots,x_n)\times\delta(x_j)$, $j=1,2,\dots,n$.
\begin{Proposition}
\label{pr1}
The spaces of linear and continuous functionals $\Phi'_{\times}$
and $\Psi'_{\times}$ can be identified with the quotient spaces
$$
\Phi'_{\times}={\mathcal D}'/\Phi^{\perp}_{\times}, \qquad
\Psi'_{\times}={\mathcal D}'/\Psi^{\perp}_{\times}
$$
modulo the subspaces $\Phi^{\perp}_{\times}$ and $\Psi^{\perp}_{\times}$,
respectively.
\end{Proposition}
\begin{proof}
This proposition can be proved in the same way as~\cite[Proposition~2.5.]{Sam3}.
It follows from the well-known assertion:
if $E$ is a topological vector space with a closed subspace $M$, then $M'$
can be identified with the quotient space $E'/M^{\perp}$, where
$M^{\perp}=\{f\in E': \langle f,\varphi\rangle=0,\, \forall \, \varphi\in M\}$.
\end{proof}
Analogously to (\ref{13}), we define the Fourier transform of
distributions $f\in \Phi_{\times}'({\mathbb Q}_p^n)$ and $g\in \Psi_{\times}'({\mathbb Q}_p^n)$
by the relations:
\begin{equation}
\label{51}
\begin{array}{rcl}
\displaystyle
\langle F[f],\psi\rangle=\langle f,F[\psi]\rangle,
&& \forall \, \psi\in \Psi_{\times}({\mathbb Q}_p^n), \\
\displaystyle
\langle F[g],\phi\rangle=\langle g,F[\phi]\rangle,
&& \forall \, \phi\in \Phi_{\times}({\mathbb Q}_p^n). \\
\end{array}
\end{equation}
By definition, $F[\Phi_{\times}({\mathbb Q}_p^n)]=\Psi_{\times}({\mathbb Q}_p^n)$ and
$F[\Psi_{\times}({\mathbb Q}_p^n)]=\Phi_{\times}({\mathbb Q}_p^n)$, i.e., the relations
(\ref{51}) are well defined. Moreover,
$F[\Phi_{\times}'({\mathbb Q}_p^n)]=\Psi_{\times}'({\mathbb Q}_p^n)$ and
$F[\Psi_{\times}'({\mathbb Q}_p^n)]=\Phi_{\times}'({\mathbb Q}_p^n)$.
\subsection{The Lizorkin space of the second kind.}\label{s3.2}
Now we consider the spaces
$$
\Psi=\Psi({\mathbb Q}_p^n)
=\{\psi(\xi)\in {\mathcal D}({\mathbb Q}_p^n): \psi(0)=0\}
$$
and
$$
\Phi=\Phi({\mathbb Q}_p^n)=\{\phi: \phi=F[\psi], \, \psi\in \Psi({\mathbb Q}_p^n)\}.
$$
Here $\Psi, \Phi\subset {\mathcal D}({\mathbb Q}_p^n)$.
The space $\Phi({\mathbb Q}_p^n)$ is called the $p$-adic {\it Lizorkin space of
test functions of the second kind\/}. Similarly to $\Phi_{\times}$,
the space $\Phi$ can be equipped with the topology of the space
${\mathcal D}({\mathbb Q}_p^n)$ which makes $\Phi$ a complete space.
Since the Fourier transform is a linear isomorphism of ${{\mathcal D}}({\mathbb Q}_p^n)$ onto
${{\mathcal D}}({\mathbb Q}_p^n)$, in view of (\ref{12}) the following lemma holds.
\begin{Lemma}
\label{lem1}
{\rm (a)} $\phi\in \Phi({\mathbb Q}_p^n)$ iff $\phi\in {\mathcal D}({\mathbb Q}_p^n)$ and
\begin{equation}
\label{54}
\int_{{\mathbb Q}_p^n}\phi(x)\,d^nx=0.
\end{equation}
{\rm (b)} $\phi \in {{\mathcal D}}^l_N({\mathbb Q}_p^n)\cap\Phi({\mathbb Q}_p^n)$, i.e.,
$$
\int_{B^n_{N}}\phi(x)\,d^nx=0,
$$
iff $\psi=F^{-1}[\phi]\in {{\mathcal D}}^{-N}_{-l}({\mathbb Q}_p^n)\cap\Psi({\mathbb Q}_p^n)$,
i.e.,
$$
\psi(\xi)=0, \qquad \xi \in B^n_{-N}.
$$
\end{Lemma}
In fact, for $n=1$, this lemma was proved in~\cite[IX.2.]{Vl-V-Z}.
Unlike the ${\mathbb C}$-case situation (\ref{5}), (\ref{6}),
any function $\psi(\xi)\in \Psi$ vanishes not only at $\xi=0$ but also
in some ball $B^n \ni 0$, since $\psi$ is locally constant.
It follows from (\ref{54}) that the space $\Phi({\mathbb Q}_p^n)$ does not
contain real-valued functions everywhere different from zero.
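Part (a) of Lemma~\ref{lem1} has a transparent finite analogue which may help intuition: for a function on a cyclic group, the Fourier coefficient at frequency zero is the total sum of its values, so the moment condition (\ref{54}) is precisely the vanishing of the transform at the origin. A minimal Python sketch of this toy model (the cyclic group is only a stand-in, not the $p$-adic transform itself):

```python
import numpy as np

# Toy finite model of Lemma lem1(a): on the cyclic group Z/N the Fourier
# coefficient at frequency 0 is the sum of all values, so "phi has zero
# mean" is exactly "the transform of phi vanishes at the origin".
p, M = 3, 4
N = p ** M
rng = np.random.default_rng(0)
phi = rng.standard_normal(N)
phi -= phi.mean()            # enforce the analogue of the moment condition (54)
psi = np.fft.fft(phi)        # finite analogue of F^{-1}[phi]
assert abs(psi[0]) < 1e-10   # the transform vanishes at the origin
```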
Let $\Phi'=\Phi'({\mathbb Q}_p^n)$ denote the topological dual of the space
$\Phi({\mathbb Q}_p^n)$. We call it the $p$-adic {\it Lizorkin space of distributions
of the second kind\/}.
By $\Psi^{\perp}$ and $\Phi^{\perp}$ we denote the
subspaces of functionals in ${\mathcal D}'$ orthogonal to $\Psi$ and
$\Phi$, respectively. Thus
$\Psi^{\perp}=\{f\in {\mathcal D}'({\mathbb Q}_p^n): f=C\delta, \, C\in {\mathbb C}\}$ and
$\Phi^{\perp}=\{f\in {\mathcal D}'({\mathbb Q}_p^n): f=C, \, C\in {\mathbb C}\}$.
\begin{Proposition}
\label{pr2}
$$
\Phi'={\mathcal D}'/\Phi^{\perp}, \qquad \Psi'={\mathcal D}'/\Psi^{\perp}.
$$
\end{Proposition}
This assertion is proved in the same way as Proposition~\ref{pr1}.
The space $\Phi'({\mathbb Q}_p^n)$ can be obtained from ${\mathcal D}'({\mathbb Q}_p^n)$ by
``sifting out'' constants. Thus two distributions in ${\mathcal D}'({\mathbb Q}_p^n)$
differing by a constant are indistinguishable as elements of $\Phi'({\mathbb Q}_p^n)$.
We define the Fourier transform of distributions $f\in \Phi'({\mathbb Q}_p^n)$
and $g\in \Psi'({\mathbb Q}_p^n)$ by an analog of formula (\ref{51}).
It is clear that $F[\Phi'({\mathbb Q}_p^n)]=\Psi'({\mathbb Q}_p^n)$
and $F[\Psi'({\mathbb Q}_p^n)]=\Phi'({\mathbb Q}_p^n)$.
Let $\Psi'_{M}({\mathbb Q}_p^n)$ be a class of multipliers in $\Psi({\mathbb Q}_p^n)$
and $\Phi'_{*}({\mathbb Q}_p^n)$ a class of convolutes in $\Phi({\mathbb Q}_p^n)$.
It is clear that a distribution $f\in \Psi'({\mathbb Q}_p^n)$ is a multiplier in
$\Psi({\mathbb Q}_p^n)$ if and only if $f\in {\mathcal E}({\mathbb Q}_p^n\setminus\{0\})$.
Thus $\Phi'_{*}({\mathbb Q}_p^n)=F[\Psi'_{M}({\mathbb Q}_p^n)]$.
Since ${\mathcal E}({\mathbb Q}_p^n)\subset \Psi'_M({\mathbb Q}_p^n)$, according to the
theorem from~\cite[VII.3.]{Vl-V-Z}, the class of all compactly supported
distributions $f\in {\mathcal D}'({\mathbb Q}_p^n)$ is a subset of $\Phi'_{*}({\mathbb Q}_p^n)$.
\subsection{Density of the Lizorkin spaces in ${\mathcal L}^{\rho}({\mathbb Q}_p^n)$.}\label{s3.3}
Repeating the proof of the assertions from~\cite{Sam2},~\cite[2.2.,2.4.]{Sam3}
practically word for word, we obtain the following $p$-adic analogs
of these assertions.
\begin{Lemma}
\label{lem2.1}
Let $g(\cdot)\in {\mathcal L}^{1}({\mathbb Q}_p^n)$ and $f(\cdot)\in {\mathcal L}^{\rho}({\mathbb Q}_p^n)$,
$1<\rho<\infty$. Then
$$
h_{t}(x)=\int_{{\mathbb Q}_p^n}g(y)f(x-ty)\,d^ny
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad
$$
\begin{equation}
\label{55}
=\frac{1}{|t|_p^n}
\int_{{\mathbb Q}_p^n}g\Big(\frac{\xi}{t}\Big)f(x-\xi)\,d^n\xi
\stackrel{{\mathcal L}^{\rho}}{\to}0,
\quad |t|_p \to \infty, \quad t\in {\mathbb Q}_p^{*},
\quad x\in {\mathbb Q}_p^{n}.
\end{equation}
\end{Lemma}
\begin{proof}
If $\rho=2$, taking into account the Parseval equality~\cite[VII,(4.4)]{Vl-V-Z},
formula (\ref{14}), and using the Riemann-Lebesgue lemma~\cite[VII.3.]{Vl-V-Z},
we have
\begin{equation}
\label{55.1}
||h_{t}||_{2}=||F[h_{t}]||_{2}=\bigg(\int_{{\mathbb Q}_p^n}
\big|F[g](ty) \, F[f](y) \, \big|^2\,d^ny\bigg)^{\frac{1}{2}}\to 0,
\quad |t|_p \to \infty.
\end{equation}
Here the passage to the limit under the integral sign is justified
by the Lebesgue dominated convergence theorem~\cite[IV.4]{Vl-V-Z}.
Let now $\rho \ne 2$. In view of the Young inequality~\cite[III,(1.7)]{Taib3},
$h_{t}(x)\in {\mathcal L}^{\rho}({\mathbb Q}_p^n)$ and
\begin{equation}
\label{55.2}
||h_{t}||_{\rho} \le ||g||_{1} \, ||f||_{\rho},
\end{equation}
where the last estimate is uniform in $t$.
Clearly, it is sufficient to prove (\ref{55}) for $f\in {\mathcal D}({\mathbb Q}_p^n)$.
Let $r>1$ be such that $\rho$ lies between $2$ and $r$. Using
the H\"older inequality and taking into account that $f\in {\mathcal D}({\mathbb Q}_p^n)$,
we obtain
\begin{equation}
\label{55.3}
||h_{t}||_{\rho} \le ||h_{t}||_{r}^{1-\lambda} \, ||h_{t}||_{2}^{\lambda},
\end{equation}
where $\frac{1}{\rho}=\frac{1-\lambda}{r}+\frac{\lambda}{2}$
(i.e., $\lambda=\frac{2(\rho-r)}{\rho(2-r)}$). Since the lemma holds for
$\rho=2$, i.e., $||h_{t}||_{2}\to 0$ as $|t|_p \to \infty$ by (\ref{55.1}),
the estimate (\ref{55.2}) applied with $r$ in place of $\rho$ yields
$$
||h_{t}||_{\rho} \le \big(||g||_{1} \, ||f||_{r}\big)^{1-\lambda} \,
||h_{t}||_{2}^{\lambda}\to 0, \quad |t|_p \to \infty,
\quad t\in {\mathbb Q}_p^{*}.
$$
The lemma is thus proved.
\end{proof}
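The key interpolation step (\ref{55.3}) is the log-convexity of the ${\mathcal L}^q$-norms in $1/q$, which holds over an arbitrary measure space and hence, in particular, for counting measure on a finite set. A short Python check with illustrative values of $r$ and $\rho$ (a numerical sanity test, not part of the proof):

```python
import numpy as np

# Check of the interpolation inequality (55.3) for counting measure:
# with 1/rho = (1-lam)/r + lam/2 one has
#   ||h||_rho <= ||h||_r^(1-lam) * ||h||_2^lam.
rng = np.random.default_rng(1)
h = rng.standard_normal(1000)

def lp(v, q):
    """l^q norm with respect to counting measure."""
    return float((np.abs(v) ** q).sum() ** (1.0 / q))

r, rho = 7.0, 3.0                      # rho lies between 2 and r
lam = (1.0 / rho - 1.0 / r) / (1.0 / 2.0 - 1.0 / r)
assert 0.0 <= lam <= 1.0
assert lp(h, rho) <= lp(h, r) ** (1.0 - lam) * lp(h, 2.0) ** lam + 1e-9
```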
\begin{Lemma}
\label{lem2.1*}
Let $g(\cdot)\in {\mathcal L}^{1}({\mathbb Q}_p^{n-m})$, $1\le m\le n-1$, and
$f(\cdot)\in {\mathcal L}^{\rho}({\mathbb Q}_p^n)$, \ $1<\rho<\infty$. Then
\begin{equation}
\label{55*}
h_{t}(x)=\int_{{\mathbb Q}_p^{n-m}}g(y)f(x',x''-ty)\,d^{n-m}y
\stackrel{{\mathcal L}^{\rho}}{\to}0,
\quad |t|_p \to \infty, \quad t\in {\mathbb Q}_p^{*},
\end{equation}
where $x'=(x_1,\dots,x_m)\in {\mathbb Q}_p^{m}$,
$x''=(x_{m+1},\dots,x_n)\in {\mathbb Q}_p^{n-m}$, \ $1\le m \le n-1$.
\end{Lemma}
\begin{proof}
If $\rho=2$, just as above, using the Parseval
equality~\cite[VII,(4.4)]{Vl-V-Z}, and formula (\ref{14}), we have
\begin{equation}
\label{55.1*}
||h_{t}||_{2}=||F[h_{t}]||_{2}=\bigg(\int_{{\mathbb Q}_p^n}
\big|F[g](ty'') \, F[f](y) \, \big|^2\,d^ny\bigg)^{\frac{1}{2}}\to 0,
\quad |t|_p \to \infty.
\end{equation}
Let now $\rho \ne 2$. In view of the Young inequality, we have the
uniform estimate
\begin{equation}
\label{55.2*}
||h_{t}||_{{\mathcal L}^{\rho}({\mathbb Q}_p^n)} \le
||g||_{{\mathcal L}^{1}({\mathbb Q}_p^{n-m})} \, ||f||_{{\mathcal L}^{\rho}({\mathbb Q}_p^n)}.
\end{equation}
Let $r>1$ be such that $\rho$ lies between $2$ and $r$.
Assuming that $f\in {\mathcal D}({\mathbb Q}_p^n)$, using inequality (\ref{55.3}),
and taking into account that $||h_{t}||_{2}\to 0$, $|t|_p \to \infty$,
we obtain
$$
||h_{t}||_{{\mathcal L}^{\rho}({\mathbb Q}_p^n)} \le
\big(||g||_{{\mathcal L}^{1}({\mathbb Q}_p^{n-m})} \, ||f||_{{\mathcal L}^{r}({\mathbb Q}_p^n)}\big)^{1-\lambda}
\, ||h_{t}||_{{\mathcal L}^{2}({\mathbb Q}_p^n)}^{\lambda}\to 0, \quad |t|_p \to \infty.
$$
The lemma is thus proved.
\end{proof}
\begin{Lemma}
\label{lem2.2}
The space $\Phi({\mathbb Q}_p^n)$ is dense in ${\mathcal L}^{\rho}({\mathbb Q}_p^n)$, $1<\rho<\infty$.
\end{Lemma}
\begin{proof}
Since ${\mathcal D}({\mathbb Q}_p^n)$ is dense in ${\mathcal L}^{\rho}({\mathbb Q}_p^n)$, $1<\rho<\infty$
(see~\cite[VI.2.]{Vl-V-Z}), it is sufficient to approximate the
function $\varphi\in {\mathcal D}({\mathbb Q}_p^n)$ by functions
$\phi_{t}\in \Phi({\mathbb Q}_p^n)$ in the norm of ${\mathcal L}^{\rho}({\mathbb Q}_p^n)$.
Consider a family of functions
$$
\psi_{t}(\xi)=(1-\Delta_{t}(\xi))F^{-1}[\varphi](\xi) \in \Psi({\mathbb Q}_p^n),
$$
where $\Delta_{t}(\xi)=\Omega(|t\xi|_p)$ is the characteristic function of
the ball $B_{\log_{p}|t|_p^{-1}}^n$, $\xi\in {\mathbb Q}_p^n$, \ $t\in {\mathbb Q}_p^{*}$, \
and the function $\Omega$ is defined by (\ref{10}).
In view of (\ref{15}), we have
$$
\phi_{t}(x)=F[\psi_{t}](x)=F[\big(1-\Delta_{t}(\xi)\big)](x)*\varphi(x)
\qquad\qquad\qquad\qquad
$$
$$
\qquad\qquad\qquad
=\delta(x)*\varphi(x)-F[\Delta_{t}(\xi)](x)*\varphi(x) \in \Phi({\mathbb Q}_p^n).
$$
According to (\ref{14.1}),
$F[\Delta_{t}(\xi)](x)=\frac{1}{|t|_p^{n}}\Omega\big(\frac{|x|_p}{|t|_p}\big)$,
i.e., the last relation can be rewritten as follows
$$
\phi_{t}(x)=\varphi(x)-\int_{{\mathbb Q}_p^n}\Omega(|y|_p)\varphi(x-ty)d^ny.
$$
Applying Lemma~\ref{lem2.1} to the last relation, we see that
$||\phi_{t}-\varphi||_{\rho}\to 0$ as $|t|_p \to \infty$.
\end{proof}
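For a concrete instance of this approximation, take $n=1$, $\varphi(x)=\Omega(|x|_p)$ and $t=p^{-m}$, so $|t|_p=p^m$. Then $\varphi-\phi_{t}=h_t$ with $h_t(x)=\int_{{\mathbb Q}_p}\Omega(|y|_p)\varphi(x-ty)\,dy$, and one computes $h_t=p^{-m}$ on the ball $B_m$ and $h_t=0$ outside, so that $\|\phi_t-\varphi\|_{\rho}=p^{-m(1-1/\rho)}\to0$. The Python sketch below verifies this by exact summation over cosets (exact because the integrand is constant on cosets of $p^{m}{\mathbb Z}_p$):

```python
from fractions import Fraction

# p = 3, varphi = indicator of Z_p, t = p^{-m}, so |t|_p = p^m.  Then
# h_t(x) = int_{Z_p} varphi(x - t*y) dy equals p^{-m} on the ball
# B_m = p^{-m} Z_p, and ||h_t||_rho = p^{-m(1-1/rho)}.
p, m, rho = 3, 2, 2.0
t = Fraction(1, p ** m)

def in_Zp(x):
    """|x|_p <= 1 iff p does not divide the reduced denominator of x."""
    return Fraction(x).denominator % p != 0

norm_rho = 0.0
for a in range(p ** m):                # cosets x = a/p^m + Z_p cover B_m
    x = Fraction(a, p ** m)
    # integrate over Z_p by summing over cosets y + p^m Z_p, each of
    # Haar measure p^{-m}; the integrand is constant on these cosets
    h = Fraction(sum(in_Zp(x - t * y) for y in range(p ** m)), p ** m)
    assert h == Fraction(1, p ** m)    # h_t = p^{-m} everywhere on B_m
    norm_rho += float(h) ** rho        # each x-cell has Haar measure 1
norm = norm_rho ** (1.0 / rho)
assert abs(norm - p ** (-m * (1.0 - 1.0 / rho))) < 1e-12
```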
\begin{Lemma}
\label{lem2.3}
The space $\Phi_{\times}({\mathbb Q}_p^n)$ is dense in ${\mathcal L}^{\rho}({\mathbb Q}_p^n)$,
$1<\rho<\infty$.
\end{Lemma}
\begin{proof}
The proof of this lemma is based on the same calculations
as those carried out above.
In this case we set $\varphi\in {\mathcal D}({\mathbb Q}_p^n)$ and
$$
\psi_{t}(\xi)=(1-\Delta_{t}(\xi_1)-\cdots-\Delta_{t}(\xi_n))F^{-1}[\varphi](\xi)
\in \Psi_{\times}({\mathbb Q}_p^n),
$$
where $\Delta_{t}(\xi_j)=\Omega(|t\xi_j|_p)$ is the characteristic
function of the ball $B_{\log_{p}|t|_p^{-1}}$, $\xi_j\in {\mathbb Q}_p$, \
$t\in {\mathbb Q}_p^{*}$, \ $j=1,\dots,n$.
By (\ref{15}), we obtain
$$
\phi_{t}(x)
=\varphi(x)
-\big(\delta(x_2,\dots,x_n)\times F[\Delta_{t}(\xi_1)](x_1)\big)*\varphi(x)
\qquad\qquad\qquad\quad
$$
$$
\qquad\quad
\cdots
-\big(\delta(x_1,\dots,x_{n-1})\times F[\Delta_{t}(\xi_n)](x_n)\big)*\varphi(x)
\in \Phi_{\times}({\mathbb Q}_p^n).
$$
Since $F[\Delta_{t}(\xi_j)](x_j)
=\frac{1}{|t|_p}\Omega\big(\frac{|x_j|_p}{|t|_p}\big)$, $x_j\in {\mathbb Q}_p$,
\ $j=1,\dots,n$, the last relation can be rewritten as
$$
\phi_{t}(x)=\varphi(x)
-\int_{{\mathbb Q}_p}\Omega(|y_1|_p)\varphi(x_1-ty_1,x_2,\dots,x_n)dy_1
\qquad\qquad
$$
$$
\qquad\qquad
\cdots
-\int_{{\mathbb Q}_p}\Omega(|y_n|_p)\varphi(x_1,\dots,x_{n-1},x_n-ty_n)dy_n.
$$
According to Lemma~\ref{lem2.1*},
$$
h_{j,t}(x)=\int_{{\mathbb Q}_p}\Omega(|y_j|_p)
\varphi(x_1,\dots,x_{j-1},x_j-ty_j,x_{j+1},\dots,x_n)dy_j
\stackrel{{\mathcal L}^{\rho}}{\to}0
$$
as $|t|_p \to \infty$, \ $j=1,\dots,n$. Thus
$||\phi_{t}-\varphi||_{\rho}
\le ||h_{1,t}||_{\rho}+\cdots+||h_{n,t}||_{\rho}\to 0$
as $|t|_p \to \infty$.
\end{proof}
For $n=1$ and $\rho=2$ the statements of Lemmas~\ref{lem2.2},~\ref{lem2.3}
coincide with the lemma from~\cite[IX.4.]{Vl-V-Z}.
\section{Fractional operators}
\label{s4}
\subsection{The Vladimirov operator.}\label{s4.1}
Let us introduce a distribution from the space ${\mathcal D}'({\mathbb Q}_p)$
\begin{equation}
\label{56}
f_{\alpha}(z)=\frac{|z|_p^{\alpha-1}}{\Gamma_p(\alpha)},
\quad \alpha \ne \mu_j, \quad \alpha \ne 1+\mu_j,
\quad z \in {\mathbb Q}_p,
\end{equation}
called the {\it Riesz kernel\/}~\cite[VIII.2.]{Vl-V-Z}, where
$\mu_j=\frac{2\pi i}{\ln p}j$, $j\in {\mathbb Z}$, \ $|z|_p^{\alpha-1}$
is the homogeneous distribution $\pi_{\alpha}(z)=|z|_p^{\alpha-1}$
of degree~$\alpha-1$
defined by (\ref{24}), and the $\Gamma$-function $\Gamma_p(\alpha)$ is given
by (\ref{25}).
The distribution $f_{\alpha}(z)$ is a meromorphic function of the complex
variable $\alpha$ with simple poles at the points $\alpha=\mu_j$,
$\alpha=1+\mu_j$, $j\in {\mathbb Z}$.
According to~\cite[VIII,(2.20)]{Vl-V-Z}, we define $f_{0}(\cdot)$
as a distribution from ${\mathcal D}'({\mathbb Q}_p)$:
\begin{equation}
\label{56.1}
f_{0}(z)\stackrel{def}{=}\lim_{\alpha \to 0}f_{\alpha}(z)=\delta(z),
\quad z \in {\mathbb Q}_p,
\end{equation}
where the limit is understood in the weak sense.
Using~\cite[IX,(2.3)]{Vl-V-Z}, we define $f_{1}(\cdot)$
as a distribution from $\Phi'({\mathbb Q}_p)$:
\begin{equation}
\label{56.2}
f_{1}(z)\stackrel{def}{=}\lim_{\alpha \to 1}f_{\alpha}(z)
=-\frac{p-1}{\log p}\log|z|_p,
\quad z \in {\mathbb Q}_p,
\end{equation}
where the limit is understood in the weak sense.
It is easy to see that if $\alpha \ne 1$ then the Riesz kernel
$f_{\alpha}(z)$ is a {\it homogeneous\/} distribution of
degree~$\alpha-1$, and if $\alpha=1$ then the Riesz kernel
is an {\it associated homogeneous\/} distribution of degree
$0$ and order $1$ (see Definitions~\ref{de1},~\ref{de1.1}).
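Away from $z=0$ the Riesz kernel is an ordinary function, so its homogeneity can be tested pointwise. The Python sketch below does this for rational arguments, together with the reflection identity $\Gamma_p(\alpha)\,\Gamma_p(1-\alpha)=1$; it assumes the standard normalization $\Gamma_p(\alpha)=\frac{1-p^{\alpha-1}}{1-p^{-\alpha}}$ for the $\Gamma$-function of (\ref{25}), which is not reproduced in this section and is therefore an assumption here:

```python
from fractions import Fraction

# Pointwise checks for f_alpha(z) = |z|_p^(alpha-1) / Gamma_p(alpha), z != 0.
# ASSUMPTION: Gamma_p(a) = (1 - p^(a-1)) / (1 - p^(-a)).
p = 5

def vp(x):
    """p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def norm_p(x):
    return float(p) ** (-vp(x))

def gamma_p(a):
    return (1 - p ** (a - 1)) / (1 - p ** (-a))

def f_alpha(a, z):
    return norm_p(z) ** (a - 1) / gamma_p(a)

# reflection identity Gamma_p(a) * Gamma_p(1 - a) = 1
for a in (0.3, 1.7, 2.5 + 1j):
    assert abs(gamma_p(a) * gamma_p(1 - a) - 1) < 1e-12

# homogeneity of degree a - 1:  f_alpha(t z) = |t|_p^(a-1) f_alpha(z)
a, t, z = 0.7, Fraction(3, 25), Fraction(7, 5)
assert abs(f_alpha(a, t * z) - norm_p(t) ** (a - 1) * f_alpha(a, z)) < 1e-12
```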
It is well known that
\begin{equation}
\label{57.1}
f_{\alpha}(z)*f_{\beta}(z)=f_{\alpha+\beta}(z), \qquad
\alpha, \ \beta, \ \alpha+\beta \ne 1,
\end{equation}
in the sense of the space ${{\mathcal D}}'({\mathbb Q}_p)$~\cite[VIII,(2.20),(3.8),(3.9)]{Vl-V-Z}.
Formulas (\ref{57.1}), (\ref{56.2}), i.e., in fact, results
of~\cite[IX.2]{Vl-V-Z}, imply that
\begin{equation}
\label{57.2}
f_{\alpha}(z)*f_{\beta}(z)=f_{\alpha+\beta}(z), \qquad
\alpha, \ \beta \in {\mathbb C},
\end{equation}
in the sense of distributions from ${\Phi}'({\mathbb Q}_p)$.
Let $\alpha=(\alpha_1,\dots,\alpha_n)$, $\alpha_j\in {\mathbb C}$, $j=1,\dots,n$,
and $|\alpha|=\alpha_1+\cdots+\alpha_n$. We denote by
\begin{equation}
\label{58}
f_{\alpha}(x)=f_{\alpha_1}(x_1)\times\cdots\times f_{\alpha_n}(x_n),
\end{equation}
the {\it multi-Riesz kernel\/}, where the one-dimensional Riesz kernel
$f_{\alpha_j}(x_j)$, $j=1,\dots,n$ is defined by (\ref{56})--(\ref{56.2}).
If $\alpha_j \ne 1$, $j=1,\dots,n$ then the Riesz kernel
$$
f_{\alpha}(x)=\frac{|x_1|_p^{\alpha_1-1}}{\Gamma_p(\alpha_1)}
\times\cdots\times \frac{|x_n|_p^{\alpha_n-1}}{\Gamma_p(\alpha_n)}
$$
is a {\it homogeneous\/} distribution of degree~$|\alpha|-n$
(see Definition~\ref{de1}.(b)).
If $\alpha_1=\cdots=\alpha_k=1$, \ $\alpha_{k+1},\cdots,\alpha_{n}\ne 1$
then
$$
f_{\alpha}(x)
=(-1)^k\frac{(p-1)^k}{\log^k p}\log|x_1|_p\times\cdots\times\log|x_k|_p
\qquad\qquad\qquad\qquad
$$
\begin{equation}
\label{58.1}
\qquad\qquad\qquad\quad
\times\frac{|x_{k+1}|_p^{\alpha_{k+1}-1}}{\Gamma_p(\alpha_{k+1})}
\times\cdots\times \frac{|x_n|_p^{\alpha_n-1}}{\Gamma_p(\alpha_n)}.
\end{equation}
Thus, if exactly $k$ of the exponents $\alpha_1,\dots,\alpha_n$ are equal
to $1$ and the remaining $n-k$ are different from $1$, then the Riesz kernel
$f_{\alpha}(x)$ is an {\it associated homogeneous\/} distribution of
degree~$|\alpha|-n$ and order $k$, \ $k=1,\dots,n$
(see Definition~\ref{de1.1}.(b)).
For example, if $n=2$ and $\alpha_1=\alpha_2=1$ then we have
$f_{(1,1)}(x_1,x_2)=\frac{(p-1)^2}{\log^2 p}\log|x_1|_p\log|x_2|_p$, \
$x=(x_1,x_2)\in {\mathbb Q}_p^2$ and
$$
f_{(1,1)}(tx_1,tx_2)=\frac{(p-1)^2}{\log^2 p}
\Big(\log|x_1|_p\log|x_2|_p
\qquad\qquad\qquad\qquad\qquad\qquad
$$
$$
\qquad\qquad
+(\log|x_1|_p+\log|x_2|_p)\log|t|_p+\log^2|t|_p\Big), \quad t\in {\mathbb Q}^*_p.
$$
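The expansion above is just the additivity $\log|t x_j|_p=\log|t|_p+\log|x_j|_p$ multiplied out; the overall constant $\frac{(p-1)^2}{\log^2 p}$ factors out of the identity. A short Python check with exact $p$-adic valuations (illustrative rational values; the constant is set to $1$ since it cancels):

```python
import math
from fractions import Fraction

# Check of the expansion of f_{(1,1)}(t x_1, t x_2): an associated
# homogeneous function of degree 0 and order 2.  The overall constant
# factors out of the identity, so it is set to 1 here.
p, C = 3, 1.0

def logn(x):
    """log |x|_p for a nonzero rational x."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return -v * math.log(p)

def f11(x1, x2):
    return C * logn(x1) * logn(x2)

t, x1, x2 = Fraction(9), Fraction(2, 3), Fraction(5)
lhs = f11(t * x1, t * x2)
rhs = C * (logn(x1) * logn(x2)
           + (logn(x1) + logn(x2)) * logn(t)
           + logn(t) ** 2)
assert abs(lhs - rhs) < 1e-9
```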
Define the multi-dimensional Vladimirov operator of the first kind
$D^{\alpha}_{\times}: \phi(x) \to D^{\alpha}_{\times}\phi(x)$
as the convolution
$$
\Big(D^{\alpha}_{\times}\phi\Big)(x)\stackrel{def}{=}f_{-\alpha}(x)*\phi(x)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad
$$
\begin{equation}
\label{59}
=\langle f_{-\alpha_1}(\xi_1)\times\cdots\times f_{-\alpha_n}(\xi_n),
\phi(x-\xi)\rangle,
\quad x\in {\mathbb Q}_p^n,
\end{equation}
where $\phi\in \Phi_{\times}({\mathbb Q}_p^n)$. Here
$D^{\alpha}_{\times}=D^{\alpha_1}_{x_1}\times\cdots\times D^{\alpha_n}_{x_n}$,
where $D^{\alpha_j}_{x_j}=f_{-\alpha_j}(x_j)*$, $j=1,2,\dots,n$.
It is known that in the general case,
$(D^{\alpha}_{\times}\varphi)(x) \not\in {{\mathcal D}}({\mathbb Q}_p^n)$
for $\varphi \in {{\mathcal D}}({\mathbb Q}_p^n)$~\cite[IX]{Vl-V-Z}, i.e.,
the Bruhat--Schwartz space ${\mathcal D}({\mathbb Q}_p^n)$ is not invariant
under the operator $D^{\alpha}_{\times}$.
\begin{Lemma}
\label{lem4}
The Lizorkin space of the first kind $\Phi_{\times}({\mathbb Q}_p^n)$ is
invariant under the Vladimirov fractional operator
$D^{\alpha}_{\times}$. Moreover,
$$
D^{\alpha}_{\times}(\Phi_{\times}({\mathbb Q}_p^n))=\Phi_{\times}({\mathbb Q}_p^n).
$$
\end{Lemma}
\begin{proof}
Taking into account formula~\cite[VIII,(2.1)]{Vl-V-Z}
\begin{equation}
\label{60}
F[f_{\alpha_j}(x_j)](\xi)=|\xi_j|_p^{-\alpha_j},
\quad j=1,\dots,n
\end{equation}
and (\ref{59}), (\ref{15}), we see that
$$
F[D^{\alpha}_{\times}\phi](\xi)
=|\xi_1|_p^{\alpha_1}\times\cdots\times|\xi_n|_p^{\alpha_n}
F[\phi](\xi), \quad \phi \in \Phi_{\times}({\mathbb Q}_p^n).
$$
Since $F[\phi](\xi)\in \Psi_{\times}({\mathbb Q}_p^n)$ and
$|\xi_1|_p^{\alpha_1}\times\cdots\times|\xi_n|_p^{\alpha_n}
F[\phi](\xi)\in \Psi_{\times}({\mathbb Q}_p^n)$ for any
$\alpha=(\alpha_1,\dots,\alpha_n)\in {\mathbb C}^n$
then $D^{\alpha}_{\times}\phi \in \Phi_{\times}({\mathbb Q}_p^n)$, i.e.,
$D^{\alpha}_{\times}(\Phi_{\times}({\mathbb Q}_p^n))\subset \Phi_{\times}({\mathbb Q}_p^n)$.
Moreover, any function from $\Psi_{\times}({\mathbb Q}_p^n)$ can be represented
as $\psi(\xi)=|\xi_1|_p^{\alpha_1}\times\cdots\times|\xi_n|_p^{\alpha_n}
\psi_1(\xi)$, $\psi_1 \in \Psi_{\times}({\mathbb Q}_p^n)$.
This implies that
$D^{\alpha}_{\times}(\Phi_{\times}({\mathbb Q}_p^n))=\Phi_{\times}({\mathbb Q}_p^n)$.
\end{proof}
In view of (\ref{60}), (\ref{15}), formula (\ref{59}) can be rewritten as
\begin{equation}
\label{61}
\big(D^{\alpha}_{\times}\phi\big)(x)
=F^{-1}\big[|\xi_1|_p^{\alpha_1}\times\cdots\times|\xi_n|_p^{\alpha_n}
F[\phi](\xi)\big](x),
\quad \phi \in \Phi_{\times}({\mathbb Q}_p^n).
\end{equation}
The operator $D^{\alpha}_{\times}=f_{-\alpha}(x)*$ is called
the operator of fractional partial differentiation of order
$|\alpha|$, for $\alpha_j>0$, $j=1,\dots,n$; the operator of
fractional partial integration of order $|\alpha|$, for
$\alpha_j<0$, $j=1,\dots,n$; for $\alpha_1=\cdots=\alpha_n=0$, \
$D^{0}_{\times}=\delta(x)*$ is the identity operator.
According to formulas (\ref{59}), (\ref{11}), we define the
Vladimirov fractional operator $D^{\alpha}_{\times}f$, \ $\alpha\in {\mathbb C}^n$
of a distribution $f\in \Phi_{\times}'({\mathbb Q}_p^n)$ by the relation
\begin{equation}
\label{62}
\langle D^{\alpha}_{\times}f,\phi\rangle\stackrel{def}{=}
\langle f, D^{\alpha}_{\times}\phi\rangle,
\qquad \forall \, \phi\in \Phi_{\times}({\mathbb Q}_p^n).
\end{equation}
In view of (\ref{62}) and Lemma~\ref{lem4},
$D^{\alpha}_{\times}(\Phi_{\times}'({\mathbb Q}_p^n))=\Phi_{\times}'({\mathbb Q}_p^n)$.
Moreover, in view of (\ref{57.2}), the family of operators $D^{\alpha}_{\times}$,
$\alpha \in {\mathbb C}^n$ forms an Abelian group: if $f \in \Phi_{\times}'({\mathbb Q}_p^n)$ then
\begin{equation}
\label{63}
\begin{array}{rcl}
\displaystyle
D^{\alpha}_{\times}D^{\beta}_{\times}f&=&
D^{\beta}_{\times}D^{\alpha}_{\times}f=D^{\alpha+\beta}_{\times}f, \medskip \\
\displaystyle
D^{\alpha}_{\times}D^{-\alpha}_{\times}f
&=&f,
\qquad \alpha,\beta \in {\mathbb C}^n, \\
\end{array}
\end{equation}
where $\alpha+\beta=(\alpha_1+\beta_1,\dots,\alpha_n+\beta_n)\in {\mathbb C}^n$.
\begin{Example}
\label{ex1} \rm
If $\alpha_j>0$, $j=1,\dots,n$ then the fractional integration formula
for the delta function holds
$$
D^{-\alpha}_{\times}\delta(x)=\frac{|x_1|_p^{\alpha_1-1}}{\Gamma_p(\alpha_1)}
\times\cdots\times \frac{|x_n|_p^{\alpha_n-1}}{\Gamma_p(\alpha_n)}.
$$
\end{Example}
\subsection{The Taibleson operator.}\label{s4.2}
Let us introduce the distribution from ${{\mathcal D}}'({\mathbb Q}_p^n)$
\begin{equation}
\label{63.4}
\kappa_{\alpha}(x)=\frac{|x|_p^{\alpha-n}}{\Gamma_p^{(n)}(\alpha)},
\quad \alpha \ne 0, \, n, \qquad x\in {\mathbb Q}_p^n,
\end{equation}
called the multidimensional {\it Riesz kernel\/}~\cite[\S2]{Taib1},
~\cite[III.4.]{Taib3}, where the function $|x|_p$, \ $x\in {\mathbb Q}_p^n$
is given by (\ref{8}).
The Riesz kernel has a removable singularity at $\alpha=0$ and according
to~\cite[\S2]{Taib1},~\cite[III.4.]{Taib3},~\cite[VIII.2]{Vl-V-Z}, we have
$$
\langle \kappa_{\alpha}(x),\varphi(x)\rangle
=\frac{g_{\alpha}}{\Gamma_p^{(n)}(\alpha)}
+\frac{1-p^{-n}}{(1-p^{-\alpha})\Gamma_p^{(n)}(\alpha)}\varphi(0)
\qquad\qquad
$$
$$
\qquad\qquad
=g_{\alpha}\frac{1-p^{-\alpha}}{1-p^{\alpha-n}}
+\frac{1-p^{-n}}{1-p^{\alpha-n}}\varphi(0),
\quad \varphi\in {{\mathcal D}}({\mathbb Q}_p^n),
$$
where $g_{\alpha}$ is an entire function of $\alpha$ (depending on $\varphi$).
Passing to the limit in the above relation, we obtain
$$
\langle \kappa_{0}(x),\varphi(x)\rangle\stackrel{def}{=}
\lim_{\alpha\to 0}\langle \kappa_{\alpha}(x),\varphi(x)\rangle=\varphi(0),
\quad \forall \, \varphi\in {{\mathcal D}}({\mathbb Q}_p^n).
$$
Thus we define $\kappa_{0}(\cdot)$ as a distribution from ${{\mathcal D}}'({\mathbb Q}_p^n)$:
\begin{equation}
\label{63.5}
\kappa_{0}(x)\stackrel{def}{=}\lim_{\alpha\to 0}\kappa_{\alpha}(x)=\delta(x).
\end{equation}
Next, using (\ref{63.0}), (\ref{63.4}), and taking into account
(\ref{54}), we define $\kappa_{n}(\cdot)$ as a distribution from the
{\it Lizorkin space of distributions\/} $\Phi'({\mathbb Q}_p^n)$:
$$
\langle \kappa_{n}(x),\phi \rangle\stackrel{def}{=}
\lim_{\alpha \to n}\langle \kappa_{\alpha}(x),\phi \rangle
=\lim_{\alpha \to n}
\int_{{\mathbb Q}_p^n}\frac{|x|_p^{\alpha-n}}{\Gamma^{(n)}_p(\alpha)}\phi(x)\,d^nx
\qquad\qquad
$$
$$
=-\lim_{\beta \to 0}\big(1-p^{-n-\beta}\big)
\int_{{\mathbb Q}_p^n}\frac{|x|_p^{\beta}-1}{p^{\,\beta}-1}\phi(x)\,d^nx
\qquad
$$
$$
=-\frac{1-p^{-n}}{\log p}\int_{{\mathbb Q}_p^n}\log|x|_p\phi(x)\,d^nx,
\quad \forall \, \phi\in \Phi({\mathbb Q}_p^n),
$$
where $|\alpha-n|\le 1$. Similarly to the one-dimensional
case~\cite[IX.2]{Vl-V-Z}, the passage to the limit under the integral
sign is justified by the Lebesgue dominated convergence theorem~\cite[IV.4]{Vl-V-Z}.
Thus,
\begin{equation}
\label{63.7}
\kappa_{n}(x)\stackrel{def}{=}\lim_{\alpha \to n}\kappa_{\alpha}(x)
=-\frac{1-p^{-n}}{\log p}\log|x|_p.
\end{equation}
Thus the Riesz kernel $\kappa_{\alpha}(x)$ is a well-defined distribution
from the Lizorkin space $\Phi'({\mathbb Q}_p^n)$ for all $\alpha \in {\mathbb C}$.
According to Definitions~\ref{de1}.(b) and~\ref{de1.1}.(b),
if $\alpha \ne n$ then $\kappa_{\alpha}(x)$ is a {\it homogeneous\/}
distribution of degree~$\alpha-n$, and if $\alpha=n$ then
$\kappa_{\alpha}(x)$ is an {\it associated homogeneous\/}
distribution of degree $0$ and order $1$.
With the help of (\ref{63.2}), (\ref{63.5}), we obtain the
formulas~\cite[(**)]{Taib1},~\cite[III,(4.6)]{Taib3},
~\cite[VIII,(4.9),(4.10)]{Vl-V-Z}:
\begin{equation}
\label{63.8}
\kappa_{\alpha}(x)*\kappa_{\beta}(x)=\kappa_{\alpha+\beta}(x),
\quad \alpha, \, \beta, \, \alpha+\beta \ne n,
\end{equation}
which holds in the sense of the space ${{\mathcal D}}'({\mathbb Q}_p^n)$.
Taking into account formula (\ref{63.7}), it is easy to see that
\begin{equation}
\label{63.9}
\kappa_{\alpha}(x)*\kappa_{\beta}(x)=\kappa_{\alpha+\beta}(x),
\quad \alpha, \beta \in {\mathbb C},
\end{equation}
in the sense of the Lizorkin space $\Phi'({\mathbb Q}_p^n)$.
Define the multi-dimensional Taibleson operator on the Lizorkin space
$\Phi({\mathbb Q}_p^n)$ as the convolution:
\begin{equation}
\label{59**}
\big(D^{\alpha}_{x}\phi\big)(x)\stackrel{def}{=}\kappa_{-\alpha}(x)*\phi(x)
=\langle \kappa_{-\alpha}(\xi),\phi(x-\xi)\rangle,
\quad x\in {\mathbb Q}_p^n,
\end{equation}
$\phi\in \Phi({\mathbb Q}_p^n)$, \ $\alpha \in {\mathbb C}$.
\begin{Lemma}
\label{lem4.1}
The Lizorkin space of the second kind $\Phi({\mathbb Q}_p^n)$ is
invariant under the Taibleson fractional operator
$D^{\alpha}_{x}$ and $D^{\alpha}_{x}(\Phi({\mathbb Q}_p^n))=\Phi({\mathbb Q}_p^n)$.
\end{Lemma}
\begin{proof}
The proof of Lemma~\ref{lem4.1} is carried out in the
same way as the proof of Lemma~\ref{lem4}.
In view of formula (\ref{63.2}),
$F[\kappa_{\alpha}(x)](\xi)=|\xi|_p^{-\alpha}$.
Consequently, using (\ref{15}), we have
$$
F[D^{\alpha}_{x}\phi](\xi)=|\xi|_p^{\alpha}F[\phi](\xi),
\quad \phi \in \Phi({\mathbb Q}_p^n).
$$
Thus $F[\phi](\xi), \, |\xi|_p^{\alpha}F[\phi](\xi)\in \Psi({\mathbb Q}_p^n)$ for all
$\alpha\in {\mathbb C}$, and $D^{\alpha}_{x}\phi \in \Phi({\mathbb Q}_p^n)$. That is,
$D^{\alpha}_{x}(\Phi({\mathbb Q}_p^n))\subset \Phi({\mathbb Q}_p^n)$.
Since any function from $\Psi({\mathbb Q}_p^n)$ can be represented
as $\psi(\xi)=|\xi|_p^{\alpha}\psi_1(\xi)$, $\psi_1 \in \Psi({\mathbb Q}_p^n)$,
we have $D^{\alpha}_{x}(\Phi({\mathbb Q}_p^n))=\Phi({\mathbb Q}_p^n)$.
\end{proof}
In view of (\ref{63.2}), (\ref{15}), formula (\ref{59**}) can be
represented in the form
\begin{equation}
\label{61**}
\big(D^{\alpha}_{x}\phi\big)(x)
=F^{-1}\big[|\xi|^{\alpha}_pF[\phi](\xi)\big](x),
\quad \phi \in \Phi({\mathbb Q}_p^n).
\end{equation}
According to (\ref{59**}), (\ref{11}), we define
$D^{\alpha}_{x}f$ for a distribution $f\in \Phi'({\mathbb Q}_p^n)$ by the relation
\begin{equation}
\label{62**}
\langle D^{\alpha}_{x}f,\phi\rangle\stackrel{def}{=}
\langle f, D^{\alpha}_{x}\phi\rangle,
\qquad \forall \, \phi\in \Phi({\mathbb Q}_p^n).
\end{equation}
It is clear that $D^{\alpha}_{x}(\Phi'({\mathbb Q}_p^n))=\Phi'({\mathbb Q}_p^n)$
and that the family of operators $D^{\alpha}_{x}$, $\alpha \in {\mathbb C}$, has the
group properties of the form (\ref{63}) on the space of distributions
$\Phi'({\mathbb Q}_p^n)$.
\begin{Example}
\label{ex2} \rm
If $\alpha>0$ then the fractional integration formula for the
delta function holds
$$
D^{-\alpha}_{x}\delta(x)=\frac{|x|_p^{\alpha-n}}{\Gamma_p^{(n)}(\alpha)}.
$$
\end{Example}
\begin{Remark}
\label{rem1} \rm
In~\cite[IX.5.]{Vl-V-Z}, an orthonormal complete basis in
${{\mathcal L}}^2({\mathbb Q}_p)$ of eigenfunctions of Vladimirov's operator
$D^{\alpha}=f_{-\alpha}*$, $\alpha>0$, was constructed. Another
orthonormal complete basis in ${{\mathcal L}}^2({\mathbb Q}_p)$ of eigenfunctions
of the operator $D^{\alpha}$, $\alpha>0$
\begin{equation}
\label{62.1}
\Theta_{\gamma j a}(x)=p^{-\gamma/2}\chi_p\big(p^{-1}j(p^{\gamma}x-a)\big)
\Omega\big(|p^{\gamma}x-a|_p\big), \quad x\in {\mathbb Q}_p,
\end{equation}
$\gamma\in {\mathbb Z}$, $a\in I_p={\mathbb Q}_p/{\mathbb Z}_p$, $j=1,2,\dots,p-1$, was
later constructed by S.~V.~Kozyrev in~\cite{Koz0}.
Here elements of the group $I_p={\mathbb Q}_p/{\mathbb Z}_p$ can be
represented in the form
$$
a=p^{-\gamma}\big(a_{0}+a_{1}p^{1}+\cdots+a_{\gamma-1}p^{\gamma-1}\big),
\quad \gamma\in {\mathbb N},
$$
where $a_j=0,1,\dots,p-1$, \ $j=0,1,\dots,\gamma-1$.
Thus
\begin{equation}
\label{62.2}
D^{\alpha}\Theta_{\gamma j a}(x)=p^{\alpha(1-\gamma)}\Theta_{\gamma j a}(x),
\quad \alpha>0.
\end{equation}
Since, according to~\cite[IX,(5.7),(5.8)]{Vl-V-Z},~\cite{Koz0},
$\int_{{\mathbb Q}_p}\Theta_{\gamma j a}(x)\,dx=0$, the eigenfunctions
$\Theta_{\gamma j a}(x)$ of Vladimirov's operator $D^{\alpha}$,
$\alpha>0$ belong to the Lizorkin space $\Phi({\mathbb Q}_p)$ (see Lemma~\ref{lem1}).
Since the Lizorkin space is invariant under the Vladimirov operator,
$\Theta_{\gamma j a}(x)$ {\it are also eigenfunctions\/} of Vladimirov's
operator $D^{\alpha}$ for $\alpha<0$, i.e., relation (\ref{62.2})
holds for any $\alpha\in {\mathbb C}$.
\end{Remark}
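On the slice $\gamma=0$, $a=0$ the wavelets reduce to $\Theta_{0j0}(x)=\chi_p(p^{-1}jx)\,\Omega(|x|_p)$, and since $\chi_p(p^{-1}u)$ depends only on $u$ modulo $p$, integrals over ${\mathbb Z}_p$ reduce to exact averages over residues. The Python sketch below checks, for this slice only, the zero-mean property used above and the orthonormality in $j$:

```python
import cmath

# For Theta_{0 j 0}(x) = chi_p(p^{-1} j x) Omega(|x|_p), the inner product
# <Theta_{0 j 0}, Theta_{0 k 0}> = int_{Z_p} chi_p(p^{-1}(j-k) x) dx is an
# exact average of p-th roots of unity over the residues mod p.
p = 5

def inner(j, k):
    return sum(cmath.exp(2j * cmath.pi * (j - k) * r / p)
               for r in range(p)) / p

for j in range(1, p):
    # zero mean: taking k = 0 pairs Theta against the constant 1 on Z_p
    assert abs(inner(j, 0)) < 1e-12
    for k in range(1, p):
        expected = 1.0 if j == k else 0.0
        assert abs(inner(j, k) - expected) < 1e-12
```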
\subsection{$p$-Adic Laplacians.}\label{s4.3}
By analogy with the ``${\mathbb C}$-case''~\cite{Sam3},~\cite{Sam-Kil-Mar},
and the $p$-adic case~\cite{Kh0},~\cite[X.1,Example~2]{Vl-V-Z},
using the fractional operators one can introduce the $p$-adic Laplacians.
The Laplacian of the first kind is an operator
$$
-\widehat\Delta f(x)\stackrel{def}{=}\sum_{k=1}^n\big(D^{2}_{x_k}f\big)(x),
\quad f\in \Phi'({\mathbb Q}_p^n)
$$
with the symbol $-\sum_{k=1}^n|\xi_k|_p^{2}$, $\xi_k\in {\mathbb Q}_p$, $k=1,2,\dots,n$;
the Laplacian of the second kind is an operator
$$
-\Delta f(x)\stackrel{def}{=}\big(D^{2}_{x}f\big)(x),
\quad f\in \Phi'({\mathbb Q}_p^n)
$$
with the symbol $-|\xi|_p^{2}$, $\xi\in {\mathbb Q}_p^n$.
Moreover, one can define powers of the Laplacian by the formula
$$
(-\Delta)^{\alpha/2}f(x)\stackrel{def}{=}\big(D^{\alpha}_{x}f\big)(x),
\quad f\in \Phi'({\mathbb Q}_p^n), \quad \alpha \in {\mathbb C}.
$$
\section{Pseudo-differential operators and equations.}\label{s5}
Similarly to the representation (\ref{61**}), one can consider
a class of pseudo-differential operators in the Lizorkin space
of the test functions $\Phi({\mathbb Q}_p^n)$
$$
(A\phi)(x)=F^{-1}\big[{\mathcal A}(\xi)\,F[\phi](\xi)\big](x)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad
$$
\begin{equation}
\label{64.3}
=\int_{{\mathbb Q}_p^n}\int_{{\mathbb Q}_p^n}\chi_p\big((y-x)\cdot \xi\big)
{\mathcal A}(\xi)\phi(y)\,d^n\xi\,d^ny,
\quad \phi \in \Phi({\mathbb Q}_p^n)
\end{equation}
with symbols ${\mathcal A}(\xi)\in {\mathcal E}({\mathbb Q}_p^n\setminus \{0\})$.
In view of Subsec.~\ref{s3.2}, functions
$F[\phi](\xi)$ and ${\mathcal A}(\xi)F[\phi](\xi)$ belong to $\Psi({\mathbb Q}_p^n)$,
and, consequently, $(A\phi)(x)\in \Phi({\mathbb Q}_p^n)$. Thus the pseudo-differential
operators (\ref{64.3}) are well defined and the Lizorkin space
$\Phi({\mathbb Q}_p^n)$ is invariant under them.
If we define a conjugate pseudo-differential operator $A^{T}$ as
\begin{equation}
\label{64.5}
(A^{T}\phi)(x)=F^{-1}[{\mathcal A}(-\xi)F[\phi](\xi)](x)
=\int_{{\mathbb Q}_p^n}\chi_p(-x\cdot \xi){\mathcal A}(-\xi)F[\phi](\xi)\,d^n\xi
\end{equation}
then one can define the operator $A$ on the Lizorkin space of distributions:
for $f \in \Phi'({\mathbb Q}_p^n)$ we have
\begin{equation}
\label{64.4}
\langle Af,\phi\rangle=\langle f,A^{T}\phi\rangle,
\qquad \forall \, \phi \in \Phi({\mathbb Q}_p^n).
\end{equation}
It is clear that
\begin{equation}
\label{64.3*}
Af=F^{-1}[{\mathcal A}\,F[f]]\in \Phi'({\mathbb Q}_p^n),
\end{equation}
i.e., the Lizorkin space of distributions $\Phi'({\mathbb Q}_p^n)$
is invariant under pseudo-differential operators $A$.
If $A, B$ are pseudo-differential operators with symbols
${\mathcal A}(\xi), {\mathcal B}(\xi)\in {\mathcal E}({\mathbb Q}_p^n\setminus \{0\})$, respectively,
then the operator $AB$ is well defined and represented by the formula
$$
(AB)f=F^{-1}[{\mathcal A}{\mathcal B}\,F[f]]\in \Phi'({\mathbb Q}_p^n).
$$
If ${\mathcal A}(\xi)\ne 0$, $\xi\in {\mathbb Q}_p^n\setminus \{0\}$ then we define
the inverse pseudo-differential operator by the formula
$$
A^{-1}f=F^{-1}[{\mathcal A}^{-1}\,F[f]], \quad f\in \Phi'({\mathbb Q}_p^n).
$$
Thus the family of pseudo-differential operators $A$ with symbols
${\mathcal A}(\xi)\ne 0$, $\xi\in {\mathbb Q}_p^n\setminus \{0\}$ forms an Abelian group.
If the symbol ${\mathcal A}(\xi)$ of the operator $A$ is an {\it associated homogeneous\/}
function then the operator $A$ is called an {\it associated homogeneous
pseudo-differential operator\/}.
According to formulas (\ref{61**}), (\ref{63.4})--(\ref{63.7}),
and Definitions~\ref{de1},~\ref{de1.1}
the operator $D^{\alpha}_{x}$, $\alpha\ne -n$, is
a {\it homogeneous\/} pseudo-differential operator of
degree~$\alpha$ with the symbol ${\mathcal A}(\xi)=|\xi|_p^{\alpha}$,
and $D^{-n}_{x}$ is an {\it associated homogeneous\/} pseudo-differential
operator of degree $-n$ and order $1$ with the symbol
${\mathcal A}(\xi)=P(|\xi|_p^{-n})$ (see (\ref{63.1*})).
Let us consider a pseudo-differential equation
\begin{equation}
\label{64.3**}
Af=g, \qquad g\in \Phi'({\mathbb Q}_p^n),
\end{equation}
where $A$ is a pseudo-differential operator (\ref{64.3}), $f$
is the desired distribution.
\begin{Theorem}
\label{th4.2}
If the symbol of a pseudo-differential operator $A$ is such that
${\mathcal A}(\xi)\ne 0$, $\xi\in {\mathbb Q}_p^n\setminus \{0\}$ then the equation
{\rm (\ref{64.3**})} has the unique solution
$$
f(x)=F^{-1}\Big[\frac{F[g](\xi)}{{\mathcal A}(\xi)}\Big](x)=(A^{-1}g)(x)\in \Phi'({\mathbb Q}_p^n).
$$
\end{Theorem}
\begin{proof}
Applying the Fourier transform to the left-hand and right-hand sides of
equation $Af=g$, in view of representation (\ref{64.3*}), we obtain that
${\mathcal A}(\xi)F[f](\xi)=F[g](\xi)$. Since according to Subsec.~\ref{s3.2},
$F[\Phi'({\mathbb Q}_p^n)]=\Psi'({\mathbb Q}_p^n)$, $F[\Psi'({\mathbb Q}_p^n)]=\Phi'({\mathbb Q}_p^n)$,
and ${\mathcal A}(\xi)$ is a multiplier in $\Psi({\mathbb Q}_p^n)$, we have
$F[f](\xi)={\mathcal A}^{-1}(\xi)F[g](\xi)\in \Psi'({\mathbb Q}_p^n)$. Thus
$f(x)=F^{-1}[{\mathcal A}^{-1}(\xi)F[g](\xi)](x)=(A^{-1}g)(x)\in \Phi'({\mathbb Q}_p^n)$
is a solution of the problem (\ref{64.3**}).
Now we study solutions of the homogeneous equation $Af=0$.
Let $f\in {{\mathcal D}}'({\mathbb Q}_p^n)$ and $Af=0$, i.e., according to (\ref{64.4}),
$\langle Af,\phi\rangle=\langle f,A^{T}\phi\rangle=0$, for all
$\phi\in \Phi({\mathbb Q}_p^n)$. Since $A^{T}(\Phi({\mathbb Q}_p^n))=\Phi({\mathbb Q}_p^n)$,
we have $\langle f,\phi\rangle=0$, for all $\phi\in \Phi({\mathbb Q}_p^n)$, and
consequently, $f\in \Phi^{\perp}$ (see Proposition~\ref{pr2}). Thus the
solutions of the homogeneous equation $Af=0$ are trivial as elements
of the space $\Phi'({\mathbb Q}_p^n)$, which proves the uniqueness.
\end{proof}
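The inversion recipe of Theorem~\ref{th4.2} (divide by the symbol on the Fourier side and transform back) can be illustrated numerically. The following Python sketch is a finite, non-$p$-adic analogue: the discrete FFT on ${\mathbb Z}/N{\mathbb Z}$ stands in for $F$, and the nonvanishing symbol and the sizes are illustrative assumptions, not part of the $p$-adic theory.

```python
import numpy as np

# Finite analogue of Theorem th4.2: for a Fourier-multiplier operator A
# with nonvanishing symbol, A f = g is solved by f = F^{-1}[F[g]/symbol].
# (The DFT on Z/NZ replaces the p-adic Fourier transform; illustration only.)
N = 64
k = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies 0..N/2-1, -N/2..-1
symbol = 1.0 + k**2                     # nonvanishing symbol (assumption)

rng = np.random.default_rng(0)
f_true = rng.standard_normal(N)

g = np.fft.ifft(symbol * np.fft.fft(f_true)).real   # g = A f_true
f_rec = np.fft.ifft(np.fft.fft(g) / symbol).real    # f = A^{-1} g

assert np.allclose(f_rec, f_true)
```

Because the symbol never vanishes, the division on the Fourier side is well defined and the reconstruction is exact up to rounding, mirroring the uniqueness statement of the theorem.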
Let $P_N(z)=\sum_{k=0}^Na_kz^{k}$ be a polynomial, where
$a_k\in {\mathbb C}$ are constants.
Let us consider the equation
\begin{equation}
\label{64.1}
P_N\big(D^{\alpha}_{x}\big)f=g, \qquad g\in \Phi'({\mathbb Q}_p^n),
\end{equation}
where $\big(D^{\alpha}_{x}\big)^k\stackrel{def}{=}D^{\alpha k}_{x}$,
$\alpha \in {\mathbb C}$ and $f$ is the desired distribution.
\begin{Theorem}
\label{th4}
If $P_N(z)\ne 0$ for all $z>0$ then equation {\rm (\ref{64.1})} has
the unique solution
\begin{equation}
\label{64.2}
f(x)=F^{-1}\Big[\frac{F[g](\xi)}{P_N\big(|\xi|_p^{\alpha}\big)}\Big](x)
\in \Phi'({\mathbb Q}_p^n).
\end{equation}
In particular, the unique solution of the equation
$$
D^{\alpha}_{x}f=g, \qquad g\in \Phi'({\mathbb Q}_p^n),
$$
is given by the formula
$f=D^{-\alpha}_{x}g\in \Phi'({\mathbb Q}_p^n)$.
\end{Theorem}
\begin{proof}
According to formulas (\ref{63.1})--(\ref{63.2}), (\ref{63.4})--(\ref{63.7}),
$$
F[\kappa_{\alpha}(x)]=|\xi|_p^{-\alpha}, \quad \alpha \in {\mathbb C}
$$
in $\Phi'({\mathbb Q}_p^n)$. Consequently, applying the Fourier transform to
the left-hand and right-hand sides of relation (\ref{64.1}), we obtain
(\ref{64.2}). Here we must take into account the fact that
$\frac{1}{P_N(|\xi|_p^{\alpha})}$ is a multiplier in $\Psi({\mathbb Q}_p^n)$.
Thus (\ref{64.2}) is the solution of the problem (\ref{64.1}).
In view of the proof of Theorem~\ref{th4.2}, the homogeneous problem
(\ref{64.1}) has only a trivial solution.
\end{proof}
In a similar way we can prove the following theorem.
\begin{Theorem}
\label{th4.1}
If $P_N(z)\ne 0$ for all $z>0$ then
the equation
$$
P_N\big(D^{\alpha}_{\times}\big)f=g, \qquad g\in \Phi'_{\times}({\mathbb Q}_p^n),
$$
where $\alpha \in {\mathbb C}^n$, has the unique solution
$$
f(x)=F^{-1}\Big[\frac{F[g](\xi)}
{P_N\big(|\xi_1|_p^{\alpha_1}\cdots |\xi_n|_p^{\alpha_n}\big)}\Big](x)
\in \Phi'_{\times}({\mathbb Q}_p^n).
$$
In particular, the unique solution of the equation
$$
D^{\alpha}_{\times}f=g, \qquad g\in \Phi'_{\times}({\mathbb Q}_p^n),
$$
is given by the formula $f=D^{-\alpha}_{\times}g\in \Phi'_{\times}({\mathbb Q}_p^n)$.
\end{Theorem}
Now we prove an analog of the statement for the Vladimirov
fractional operator~\cite[IX.1,Example~4]{Vl-V-Z}.
\begin{Proposition}
\label{pr4}
Let $A$ be a pseudo-differential operator with a symbol ${\mathcal A}(\xi)$
and $0\ne z\in {\mathbb Q}_p^n$. Then the additive character $\chi_p(z\cdot x)$
is an eigenfunction of the operator $A$ with the eigenvalue ${\mathcal A}(-z)$,
i.e.,
$$
A\chi_p(z\cdot x)={\mathcal A}(-z)\chi_p(z\cdot x).
$$
\end{Proposition}
\begin{proof}
Since $F[\chi_p(z\cdot x)]=\delta(\xi+z)$, $z\ne 0$, we have
${\mathcal A}(\xi)\delta(\xi+z)={\mathcal A}(-z)\delta(\xi+z)$. Thus
$$
A\chi_p(z\cdot x)=F^{-1}[{\mathcal A}(\xi)F[\chi_p(z\cdot x)](\xi)](x)
\qquad\qquad\qquad\qquad
$$
$$
\qquad\quad
={\mathcal A}(-z)F^{-1}[\delta(\xi+z)](x)={\mathcal A}(-z)\chi_p(z\cdot x).
$$
\end{proof}
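Proposition~\ref{pr4} has a transparent finite counterpart: on the cyclic group ${\mathbb Z}/N{\mathbb Z}$ a discrete additive character is an eigenfunction of every Fourier-multiplier operator, with eigenvalue given by the symbol at the character's frequency. The Python sketch below checks this; the sign appearing in the $p$-adic eigenvalue ${\mathcal A}(-z)$ depends on the Fourier convention, and the sizes and symbol here are assumptions of the illustration.

```python
import numpy as np

# Discrete analogue of Proposition pr4: a plane-wave character is an
# eigenfunction of any Fourier-multiplier operator A, the eigenvalue
# being the symbol evaluated at the character's frequency.
N = 32
z = 5                                   # frequency of the character
x = np.arange(N)
chi = np.exp(2j * np.pi * z * x / N)    # discrete additive character

symbol = 1.0 + np.arange(N) ** 2        # arbitrary symbol (assumption)
A_chi = np.fft.ifft(symbol * np.fft.fft(chi))

# A chi = symbol[z] * chi
assert np.allclose(A_chi, symbol[z] * chi)
```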
\section{Distributional quasi-asymptotics}
\label{s6}
We recall some facts from our papers~\cite{Kh-Sh1},~\cite{Kh-Sh2},
where we introduced the notion of {\it quasi-asymptotics\/}~\cite{D-Zav1},~\cite{Vl-D-Zav}
adapted to the $p$-adic case.
\begin{Definition}
\label{de4.1} \rm
(~\cite{Kh-Sh1},~\cite{Kh-Sh2}) A continuous complex-valued function
$\rho(z)$ on the multiplicative group ${\mathbb Q}_p^*$ such that for any
$z\in {\mathbb Q}_p^*$ the limit
$$
\lim_{|t|_p \to \infty}\frac{\rho(tz)}{\rho(t)}=C(z)
$$
exists is called an {\it automodel {\rm(}or regularly varying{\rm)}\/}
function.
\end{Definition}
It is easy to see that the function $C(z)$ satisfies the functional
equation $C(ab)=C(a)C(b)$, \ $a,b\in {\mathbb Q}_p^*$. According
to~\cite[Ch.II,\S 1.4.]{G-Gr-P},~\cite[III.2.]{Vl-V-Z}, the solution
of this equation is a multiplicative character $\pi_{\alpha}$ of the
field ${\mathbb Q}_p$ defined by (\ref{16}), (\ref{16.1}), i.e.,
\begin{equation}
\label{64}
C(z)=|z|_p^{\alpha-1}\pi_{1}(z), \quad z\in {\mathbb Q}_p^*.
\end{equation}
In this case we say that an {\it automodel\/} function $\rho(x)$ has the
degree $\pi_{\alpha}$. In particular, if
$\pi_{\alpha}(z)=|z|_p^{\alpha-1}$ we say that the {\it automodel\/}
function has the degree $\alpha-1$.
If an {\it automodel\/} function $\rho(t)$,
$t\in {\mathbb Q}_p^*$ has the degree $\pi_{\alpha}$ then the {\it automodel\/}
function $|t|_p^{\beta}\rho(t)$ has the degree
$\pi_{\alpha}\pi_{0}^{-\beta}=\pi_{1}(t)|t|_p^{\alpha+\beta}$,
where $\pi_{0}(t)=|t|_p^{-1}$.
For example, the functions $|t|_p^{\alpha-1}\pi_1(t)$ and
$|t|_p^{\alpha-1}\pi_1(t)\log_{p}^{m}|t|_p$ are {\it automodel\/}
of degree $\pi_{\alpha}$.
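For readers who wish to experiment with such automodel functions, the following minimal Python sketch computes $|r|_p=p^{-v_p(r)}$ for nonzero rationals and checks that $\rho(t)=|t|_p^{\alpha-1}$ satisfies $\rho(tz)/\rho(t)=|z|_p^{\alpha-1}$, in accordance with (\ref{64}) for the character $\pi_\alpha(z)=|z|_p^{\alpha-1}$; the chosen prime, exponent, and test points are arbitrary assumptions.

```python
from fractions import Fraction

def padic_abs(r: Fraction, p: int) -> Fraction:
    """|r|_p = p^{-v_p(r)} for a nonzero rational r."""
    if r == 0:
        raise ValueError("|0|_p = 0; the valuation is +infinity")
    v = 0
    num, den = r.numerator, r.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return Fraction(p) ** (-v)

p, alpha = 3, 2
# rho(t) = |t|_p^(alpha-1) is automodel: rho(tz)/rho(t) = |z|_p^(alpha-1)
z = Fraction(9, 2)                      # |9/2|_3 = 1/9
for t in (Fraction(1, 27), Fraction(5), Fraction(81)):
    ratio = padic_abs(t * z, p) ** (alpha - 1) / padic_abs(t, p) ** (alpha - 1)
    assert ratio == padic_abs(z, p) ** (alpha - 1)
```

Since $|\cdot|_p$ is multiplicative, the ratio is exactly $|z|_p^{\alpha-1}$ for every $t$, so the limit in Definition~\ref{de4.1} is attained identically for this $\rho$.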
\begin{Definition}
\label{de4} \rm
(~\cite{Kh-Sh1},~\cite{Kh-Sh2})
Let $f\in {{\mathcal D}}'({\mathbb Q}_p^n)$. If there exists an {\it automodel\/} function
$\rho(t)$, $t\in {\mathbb Q}_p^{*}$ of degree $\pi_{\alpha}$ such that
$$
\frac{f(tx)}{\rho(t)} \to g(x)\not\equiv 0, \quad |t|_p \to \infty,
\quad \text{in} \quad {{\mathcal D}}'({\mathbb Q}_p^n)
$$
then we say that the distribution $f$ has the {\it quasi-asymptotics\/}
$g(x)$ of degree $\pi_{\alpha}$ at infinity with respect to $\rho(t)$,
and write
$$
f(x) \stackrel{{{\mathcal D}}'}{\sim} g(x), \quad |x|_p \to \infty \ \big(\rho(t)\big).
$$
If for any $\alpha$ we have
$$
\frac{f(tx)}{|t|_p^{\alpha}} \to 0, \quad |t|_p \to \infty,
\quad \text{in} \quad {{\mathcal D}}'({\mathbb Q}_p^n)
$$
then we say that the distribution $f$ has a {\it quasi-asymptotics\/}
of degree $-\infty$ at infinity and write $f(x) \stackrel{{{\mathcal D}}'}{\sim} 0$, \
$|x|_p \to \infty$.
\end{Definition}
\begin{Lemma}
\label{lem6}
{\rm (~\cite{Kh-Sh1},~\cite{Kh-Sh2})}
Let $f\in {{\mathcal D}}'({\mathbb Q}_p^n)$. If $f(x)\stackrel{{{\mathcal D}}'}{\sim} g(x)\not\equiv 0$,
as $|x|_p \to \infty$ with respect to the {\it automodel\/} function
$\rho(t)$ of degree $\pi_{\alpha}$ then $g(x)$ is a homogeneous
distribution of degree $\pi_{\alpha}$ {\rm(}with respect to
Definition~{\rm\ref{de1}.(b)}{\rm)}.
\end{Lemma}
\begin{proof}
This lemma is proved by repeating, practically word for word, the proof
of the corresponding assertion from the book~\cite{Vl-D-Zav}.
Let $a\in {\mathbb Q}_p^{*}$.
In view of Definition~\ref{de4.1} and (\ref{64}), we obtain
$$
\langle g(ax),\varphi(x)\rangle
=\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tax)}{\rho(t)},\varphi(x)\Bigr\rangle
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad
$$
$$
\qquad
=\pi_{\alpha}(a)\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tax)}{\rho(ta)},\varphi(x)\Bigr\rangle
=\pi_{\alpha}(a)\langle g(x),\varphi(x)\rangle,
$$
for all $a\in {\mathbb Q}_p^{*}$, \ $\varphi \in {{\mathcal D}}({\mathbb Q}_p^n)$. Thus
$g(ax)=\pi_{\alpha}(a)g(x)$ for all $a\in {\mathbb Q}_p^{*}$.
\end{proof}
For $n=1$, as follows from the theorem describing all
one-dimensional {\it homogeneous} distributions
~\cite[Ch.II,\S 2.3.]{G-Gr-P},~\cite[VIII.1.]{Vl-V-Z},
and Lemma~\ref{lem6}, if $f(x)\in {{\mathcal D}}'({\mathbb Q}_p)$ has the
quasi-asymptotics of degree $\pi_{\alpha}$ at infinity then
\begin{equation}
\label{65}
f(x)\stackrel{{{\mathcal D}}'}{\sim} g(x)=\left\{
\begin{array}{lcr}
C\pi_{\alpha}(x), && \pi_{\alpha}\ne \pi_{0}=|x|_p^{-1}, \\
C\delta(x), && \pi_{\alpha}=\pi_{0}=|x|_p^{-1}, \\
\end{array}
\right.
\quad |x|_p \to \infty,
\end{equation}
where $C$ is a constant, and the distribution $\pi_{\alpha}(x)$
is defined by (\ref{24}).
\begin{Definition}
\label{de5} \rm
(~\cite{Kh-Sh1},~\cite{Kh-Sh2})
Let $f\in {{\mathcal D}}'({\mathbb Q}_p^n)$. If there exists an {\it automodel\/} function
$\rho(t)$, $t\in {\mathbb Q}_p^{*}$ of degree $\pi_{\alpha}$ such that
$$
\frac{f\big(\frac{x}{t}\big)}{\rho(t)} \to g(x)\not\equiv 0,
\quad |t|_p \to \infty,
\quad \text{in} \quad {{\mathcal D}}'({\mathbb Q}_p^n)
$$
then we say that the distribution $f$ has the {\it quasi-asymptotics\/}
$g(x)$ of degree $\big(\pi_{\alpha}\big)^{-1}$ at zero with respect to
$\rho(t)$, and write
$$
f(x) \stackrel{{{\mathcal D}}'}{\sim} g(x), \quad |x|_p \to 0 \ \big(\rho(t)\big).
$$
If for any $\alpha$ we have
$$
\frac{f\big(\frac{x}{t}\big)}{|t|_p^{\alpha}} \to 0,
\quad |t|_p \to \infty,
\quad \text{in} \quad {{\mathcal D}}'({\mathbb Q}_p^n)
$$
then we say that the distribution $f$ has a {\it quasi-asymptotics\/}
of degree $-\infty$ at zero, and write $f(x) \stackrel{{{\mathcal D}}'}{\sim} 0$, \
$|x|_p \to 0$.
\end{Definition}
\begin{Example}
\label{ex3} \rm
Let $f_{m}\in {{\mathcal D}}'({\mathbb Q}_p)$ be an {\it associated homogeneous
{\rm(}in the wide sense{\rm)}\/} distribution of degree~$\pi_{\alpha}(x)$
and order~$m$ defined by (\ref{19.3}), (\ref{19.5}).
In view of Definition~\ref{de1.1}, we have the asymptotic formulas:
$$
f_{m}(tx)=\pi_{1}(t)|t|_p^{\alpha-1}f_m(x)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
$$
$$
\qquad\qquad
+\sum_{j=1}^{m}\pi_{1}(t)|t|_p^{\alpha-1}\log_p^j|t|_pf_{m-j}(x),
\quad |t|_p \to \infty,
$$
$$
f_{m}\Big(\frac{x}{t}\Big)=\pi_{1}^{-1}(t)|t|_p^{-\alpha+1}f_m(x)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
$$
$$
\qquad
+\sum_{j=1}^{m}(-1)^{j}\pi_{1}^{-1}(t)|t|_p^{-\alpha+1}\log_p^j|t|_pf_{m-j}(x),
\quad |t|_p \to \infty.
$$
Here the coefficients of the {\it leading term\/} of both
asymptotics are homogeneous distributions $f_0$ and $(-1)^m f_0$
of degree~$\pi_{\alpha}(x)$ defined by the relation from
Definition~\ref{de1.1}.
According to the last relations and Definitions~\ref{de4},~\ref{de5},
one can easily see that
$$
\begin{array}{rclrclrcl}
\displaystyle
f_{m}(x) &\stackrel{{{\mathcal D}}'}{\sim}& f_0(x), &&|x|_p \to \infty
&& \big(|t|_p^{\alpha-1}\pi_{1}(t)\log_{p}^m|t|_p\big), \smallskip \\
\displaystyle
f_{m}(x) &\stackrel{{{\mathcal D}}'}{\sim}& (-1)^m f_0(x), &&|x|_p \to 0
&& \big(|t|_p^{-\alpha+1}\pi_{1}^{-1}(t)\log_{p}^m|t|_p\big).
\end{array}
$$
\end{Example}
\section{The Tauberian theorems}
\label{s7}
\begin{Theorem}
\label{th5}
{\rm (~\cite{Kh-Sh2})}
A distribution $f\in {{\mathcal D}}'({\mathbb Q}_p^n)$ has a quasi-asymptotics of
degree $\pi_{\alpha}$ at infinity with respect to the automodel
function $\rho(t)$, $t\in {\mathbb Q}_p^*$, i.e.,
$$
f(x) \stackrel{{{\mathcal D}}'}{\sim} g(x), \quad |x|_p \to \infty
\ \big(\rho(t)\big)
$$
if and only if its Fourier transform has a quasi-asymptotics of
degree $\pi_{\alpha}^{-1}\pi_{0}^{n}=\pi_{\alpha+n}^{-1}$ at zero
with respect to the automodel function $|t|_p^n\rho(t)$, i.e.,
$$
F[f(x)](\xi) \stackrel{{{\mathcal D}}'}{\sim} F[g(x)](\xi), \quad |\xi|_p \to 0
\ \big(|t|_p^n\rho(t)\big).
$$
\end{Theorem}
\begin{proof}
Let us prove the necessity. Let $f(x) \stackrel{{{\mathcal D}}'}{\sim} g(x)$, \
$|x|_p \to \infty$ \ $\big(\rho(t)\big)$, i.e.,
\begin{equation}
\label{43}
\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tx)}{\rho(t)}, \varphi(x)\Bigr\rangle
=\langle g(x), \varphi(x)\rangle, \quad \forall \ \varphi \in {{\mathcal D}}({\mathbb Q}_p^n),
\end{equation}
where $\rho(t)$ is an automodel function of degree $\pi_{\alpha}$.
In view of formula (\ref{14}),
$F[f(x)](\frac{\xi}{t})=|t|_p^nF[f(tx)](\xi)$,
\ $x,\xi\in {\mathbb Q}_p^n$, \ $t\in {\mathbb Q}_p^*$, we have
$$
\Bigl\langle F[f(x)]\Big(\frac{\xi}{t}\Big),\varphi(\xi)\Bigr\rangle
=|t|_p^n\bigl\langle F[f(tx)](\xi),\varphi(\xi)\bigr\rangle
=|t|_p^n\bigl\langle f(tx),F[\varphi(\xi)](x)\bigr\rangle,
$$
$\varphi \in {{\mathcal D}}({\mathbb Q}_p^n)$.
Hence, taking into account relation (\ref{43}), we obtain
$$
\lim_{|t|_p \to \infty}
\Bigl\langle \frac{F[f(x)](\frac{\xi}{t})}{|t|_p^n\rho(t)},
\varphi(\xi)\Bigr\rangle
=\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tx)}{\rho(t)},F[\varphi(\xi)](x)\Bigr\rangle
\qquad
$$
$$
=\bigl\langle g(x),F[\varphi(\xi)](x)\bigr\rangle
=\bigl\langle F[g(x)](\xi),\varphi(\xi)\bigr\rangle,
\quad \forall \ \varphi \in {{\mathcal D}}({\mathbb Q}_p^n),
$$
i.e., the distribution $F[f(x)](\xi)$ has the quasi-asymptotics
$F[g(x)](\xi)$ of degree $\pi_{\alpha+n}^{-1}$ at zero with respect
to $|t|_p^n\rho(t)$.
The sufficiency can be proved similarly.
\end{proof}
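The only analytic ingredient of the proof is the scaling property $F[f(x)](\frac{\xi}{t})=|t|_p^nF[f(tx)](\xi)$ from (\ref{14}). A real-line analogue ($n=1$) of this identity can be checked numerically; the Gaussian test function, the grid, and the tolerance below are assumptions of the illustration, not part of the $p$-adic statement.

```python
import numpy as np

# Real-line analogue (n = 1) of the scaling identity used in the proof:
# F[f](xi / t) = |t| * F[f(t x)](xi), checked for a Gaussian by a
# Riemann sum (exponentially accurate for rapidly decaying smooth f).
def fourier(fx, xi, x):
    dx = x[1] - x[0]
    return np.sum(fx * np.exp(-2j * np.pi * xi * x)) * dx

x = np.linspace(-20.0, 20.0, 4001)
t, xi = 3.0, 0.7

lhs = fourier(np.exp(-x**2), xi / t, x)              # F[f](xi/t)
rhs = abs(t) * fourier(np.exp(-(t * x)**2), xi, x)   # |t| F[f(tx)](xi)
assert abs(lhs - rhs) < 1e-8
```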
For $n=1$, Theorem~\ref{th5}, Lemma~\ref{lem6},
and formula (\ref{60}) imply the following corollary.
\begin{Corollary}
\label{cor6}
A distribution $f\in {{\mathcal D}}'({\mathbb Q}_p)$ has a quasi-asymptotics of
degree $\pi_{\alpha}(x)$ at infinity, i.e.,
\begin{equation}
\label{43*}
f(x) \stackrel{{{\mathcal D}}'}{\sim}
g(x)=\left\{
\begin{array}{lcr}
C|x|_p^{\alpha-1}\pi_1(x),
&& \pi_{\alpha}\ne \pi_{0}=|x|_p^{-1}, \\
C\delta(x),
&& \pi_{\alpha}=\pi_{0}=|x|_p^{-1}, \\
\end{array}
\right.
\quad |x|_p \to \infty,
\end{equation}
if and only if its Fourier transform $F[f]$ has a
quasi-asymptotics of degree $\pi_{\alpha+1}^{-1}(\xi)$
at zero, i.e.,
$$
F[f(x)](\xi) \stackrel{{{\mathcal D}}'}{\sim}F[g(x)](\xi)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
$$
$$
\qquad\qquad
=\left\{
\begin{array}{lcr}
C\Gamma_p(\pi_{\alpha})|\xi|_p^{-\alpha}\pi_1^{-1}(\xi),
&& \pi_{\alpha}\ne \pi_{0}=|x|_p^{-1}, \\
C,
&& \pi_{\alpha}=\pi_{0}=|x|_p^{-1}, \\
\end{array}
\right.
\quad |\xi|_p \to 0,
$$
where the distribution $\pi_{\alpha}(x)=|x|_p^{\alpha-1}\pi_1(x)$
is given by {\rm (\ref{24})}.
\end{Corollary}
\begin{Theorem}
\label{th7}
Let $f \in \Phi_{\times}'({\mathbb Q}_p^n)$. Then
$$
f(x) \stackrel{\Phi_{\times}'}{\sim} g(x), \quad |x|_p \to \infty
\quad \big(\rho(t)\big)
$$
if and only if
$$
D^{\beta}_{\times}f(x) \stackrel{\Phi_{\times}'}{\sim} D^{\beta}_{\times}g(x),
\quad |x|_p \to \infty \quad \big(|t|_p^{|-\beta|}\rho(t)\big),
$$
where $\beta=(\beta_1,\dots,\beta_n)\in {\mathbb C}^n$, \
$|\beta|=\beta_1+\cdots+\beta_n$.
\end{Theorem}
\begin{proof}
Let $\beta_j \ne -1$, $j=1,\dots,n$. In this case the Riesz kernel
$f_{-\beta}(x)$ is a {\it homogeneous\/} distribution of degree~$|-\beta|-n$.
According to Lemma~\ref{lem4} and formulas (\ref{58}), (\ref{59}), (\ref{62}),
we have
$$
\bigl\langle \big(D^{\beta}_{\times}f\big)(tx),\phi(x)\bigr\rangle
=\bigl\langle \big(f*f_{-\beta}\big)(tx),\phi(x)\bigr\rangle
\qquad\qquad\qquad\qquad\qquad\qquad
$$
$$
=|t|_p^{-n}
\Bigl\langle f(x),\Bigl\langle f_{-\beta}(y),\phi\Big(\frac{x+y}{t}\Big)
\Bigr\rangle\Bigr\rangle
=|t|_p^{n}
\bigl\langle f(tx),
\bigl\langle f_{-\beta}(ty),\phi(x+y)\bigr\rangle \bigr\rangle
$$
$$
=|t|_p^{|-\beta|}
\bigl\langle f(tx),
\bigl\langle f_{-\beta}(y),\phi(x+y)\bigr\rangle \bigr\rangle
=|t|_p^{|-\beta|}
\bigl\langle f(tx),\big(D^{\beta}_{\times}\phi\big)(x)\bigr\rangle,
$$
for all $\phi \in \Phi_{\times}({\mathbb Q}_p^n)$.
Thus
$$
\Bigl\langle \frac{\big(D^{\beta}_{\times}f\big)(tx)}{|t|_p^{|-\beta|}\rho(t)},
\phi(x)\Bigr\rangle
=\Bigl\langle \frac{f(tx)}{\rho(t)},
\big(D^{\beta}_{\times}\phi\big)(x)\Bigr\rangle.
$$
Next, passing to the limit in the above relation, as
$|t|_p \to \infty$, we obtain
$$
\lim_{|t|_p \to \infty}
\Bigl\langle \frac{\big(D^{\beta}_{\times}f\big)(tx)}{|t|_p^{|-\beta|}\rho(t)},
\phi(x)\Bigr\rangle
=\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tx)}{\rho(t)},
\big(D^{\beta}_{\times}\phi\big)(x)\Bigr\rangle
$$
That is, $\lim_{|t|_p \to \infty}
\frac{(D^{\beta}_{\times}f)(tx)}{|t|_p^{|-\beta|}\rho(t)}
=D^{\beta}_{\times}g(x)$
in $\Phi_{\times}'({\mathbb Q}_p^n)$ if and only if
$\lim_{|t|_p \to \infty}\frac{f(tx)}{\rho(t)}=g(x)$ in
$\Phi_{\times}'({\mathbb Q}_p^n)$.
Thus this case of the theorem is proved.
Consider the case where among $\beta_1,\dots,\beta_n$ there are $k$
indices with $\beta_j=-1$ and $n-k$ indices with $\beta_j\ne -1$. In this case
the Riesz kernel $f_{-\beta}(x)$ is an {\it associated homogeneous\/}
distribution of degree~$|-\beta|-n$ and order $k$, \ $k=1,\dots,n$.
Let $\beta_1=\cdots=\beta_k=-1$, \ $\beta_{k+1},\cdots,\beta_{n}\ne -1$.
Then according to (\ref{58.1}),
$$
f_{-\beta}(ty)=|t|_p^{|-\beta|-n}
(-1)^k\frac{(p-1)^k}{\log^k p}(\log|y_1|_p+\log|t|_p)\times
\qquad\qquad\qquad
$$
$$
\cdots\times(\log|y_k|_p+\log|t|_p)
\times\frac{|y_{k+1}|_p^{-\beta_{k+1}-1}}{\Gamma_p(-\beta_{k+1})}
\times\cdots\times \frac{|y_n|_p^{-\beta_{n}-1}}{\Gamma_p(-\beta_n)}
$$
$$
=|t|_p^{|-\beta|-n}f_{-\beta}(y)
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad
$$
$$
\qquad\qquad\quad
+|t|_p^{|-\beta|-n}(-1)^k\frac{(p-1)^k}{\log^k p}
\frac{|y_{k+1}|_p^{-\beta_{k+1}-1}}{\Gamma_p(-\beta_{k+1})}
\times\cdots\times\frac{|y_n|_p^{-\beta_{n}-1}}{\Gamma_p(-\beta_n)}
$$
$$
\times
\bigg(\Big(\log|y_2|_p\times\cdots\times\log|y_k|_p
+\cdots+\log|y_1|_p\times\cdots\times\log|y_{k-1}|_p\Big)\log|t|_p
$$
\begin{equation}
\label{43**}
+\cdots+\Big(\log|y_1|_p+\cdots+\log|y_k|_p\Big)\log^{k-1}|t|_p
+\log^{k}|t|_p\bigg).
\end{equation}
It is easy to verify that in view of characterization (\ref{50}),
$$
\bigl\langle f_{-\beta}(ty), \phi(x+y)\bigr\rangle
=|t|_p^{|-\beta|-n}\bigl\langle f_{-\beta}(y), \phi(x+y)\bigr\rangle
\qquad\qquad\qquad
$$
\begin{equation}
\label{43***}
\qquad\qquad\qquad
=|t|_p^{|-\beta|-n}\big(D^{\beta}_{\times}\phi\big)(x),
\quad \phi \in \Phi_{\times}({\mathbb Q}_p^n).
\end{equation}
For example, taking into account (\ref{50}), we obtain
$$
\Bigl\langle
\times_{j=2}^{k}\log|x_j-y_j|_p\times
\times_{i=k+1}^{n}|x_{i}-y_{i}|_p^{-\beta_{i}-1},
\int_{{\mathbb Q}_p}\phi(y_1,y_2,\dots,y_n)\,dy_1 \Bigr\rangle=0,
$$
for all $\phi \in \Phi_{\times}({\mathbb Q}_p^n)$.
In a similar way, one can prove that all terms in (\ref{43**}),
with the exception of $|t|_p^{|-\beta|-n}f_{-\beta}(y)$,
{\it do not give any contribution\/} to the functional
$\langle f_{-\beta}(ty),\phi(x+y)\rangle$, where
$\beta_1=\cdots=\beta_k=-1$, \ $\beta_{k+1},\cdots,\beta_{n}\ne -1$.
Thus repeating the above calculations almost word for word and using
(\ref{43***}), we prove this case of the theorem.
\end{proof}
\begin{Theorem}
\label{th8}
Let $f \in \Phi'({\mathbb Q}_p^n)$. Then
$$
f(x) \stackrel{\Phi'}{\sim} g(x), \quad |x|_p \to \infty
\quad \big(\rho(t)\big)
$$
if and only if
$$
D^{\beta}f(x) \stackrel{\Phi'}{\sim} D^{\beta}g(x),
\quad |x|_p \to \infty \quad \big(|t|_p^{-\beta}\rho(t)\big),
$$
where $\beta\in {\mathbb C}$.
\end{Theorem}
\begin{proof}
Let $\beta \ne -n$. Since the Riesz kernel
$\kappa_{-\beta}(x)$ is a {\it homogeneous\/} distribution of
degree~$-\beta-n$, according to Lemma~\ref{lem4.1} and formulas
(\ref{63.4}), (\ref{59**}), (\ref{62**}),
we have
$$
\bigl\langle \big(D^{\beta}f\big)(tx),\phi(x)\bigr\rangle
=\bigl\langle \big(f*\kappa_{-\beta}\big)(tx),\phi(x)\bigr\rangle
\qquad\qquad\qquad\qquad\qquad\qquad\quad
$$
$$
=|t|_p^{-n}
\Bigl\langle f(x),\Bigl\langle \kappa_{-\beta}(y),\phi\Big(\frac{x+y}{t}\Big)
\Bigr\rangle\Bigr\rangle
=|t|_p^{n}
\bigl\langle f(tx),
\bigl\langle \kappa_{-\beta}(ty),\phi(x+y)\bigr\rangle \bigr\rangle
$$
$$
=|t|_p^{-\beta}
\bigl\langle f(tx),
\bigl\langle \kappa_{-\beta}(y),\phi(x+y)\bigr\rangle \bigr\rangle
=|t|_p^{-\beta}
\bigl\langle f(tx),\big(D^{\beta}\phi\big)(x)\bigr\rangle,
\qquad\qquad
$$
for all $\phi \in \Phi({\mathbb Q}_p^n)$.
Passing to the limit in the above relation, as
$|t|_p \to \infty$, we obtain
$$
\lim_{|t|_p \to \infty}
\Bigl\langle \frac{\big(D^{\beta}f\big)(tx)}{|t|_p^{-\beta}\rho(t)},
\phi(x)\Bigr\rangle
=\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tx)}{\rho(t)},\big(D^{\beta}\phi\big)(x)\Bigr\rangle
$$
That is, $\lim_{|t|_p \to \infty}
\frac{(D^{\beta}f)(tx)}{|t|_p^{-\beta}\rho(t)}=D^{\beta}g(x)$
in $\Phi'({\mathbb Q}_p^n)$ if and only if
$\lim_{|t|_p \to \infty}\frac{f(tx)}{\rho(t)}=g(x)$ in $\Phi'({\mathbb Q}_p^n)$.
Thus this case of the theorem is proved.
Let $\beta=-n$. In this case the Riesz kernel $\kappa_{n}(x)$ is an
{\it associated homogeneous\/} distribution of degree~$0$ and order $1$.
According to (\ref{63.7}), we have
$$
\kappa_{n}(ty)=-\frac{1-p^{-n}}{\log p}\log|y|_p
-\frac{1-p^{-n}}{\log p}\log|t|_p.
$$
In view of (\ref{54}),
$$
\bigl\langle \kappa_{n}(ty), \phi(x+y)\bigr\rangle
=\bigl\langle \kappa_{n}(y), \phi(x+y)\bigr\rangle
\qquad\qquad\qquad\qquad\qquad\qquad\qquad
$$
$$
-\frac{1-p^{-n}}{\log p}\log|t|_p
\bigl\langle 1, \phi(x+y)\bigr\rangle
=\big(D^{-n}\phi\big)(x),
\quad \phi \in \Phi({\mathbb Q}_p^n).
$$
Thus repeating the above calculations almost word for word and using
the last relation, we prove this case of the theorem.
\end{proof}
\begin{Theorem}
\label{th9}
A distribution $f\in \Phi'({\mathbb Q}_p)$ has a quasi-asymptotics at infinity
with respect to an automodel function $\rho(t)$ of degree $\pi_{\alpha}$
if and only if there exists a positive integer $N>-\alpha+1$ such that
$$
\lim_{|x|_p \to \infty}\frac{D^{-N}f(x)}{|x|_p^{N}\rho(x)}=A \ne 0,
$$
i.e., the {\rm(}fractional{\rm)} primitive $D^{-N}f(x)$ of
order $N$ has an asymptotics at infinity {\rm(}understood in the usual
sense{\rm)} of degree $\pi_{\alpha+N}$.
\end{Theorem}
\begin{proof}
By setting $\beta=-N$, $N>-\alpha+1$ in Theorem~\ref{th8}, we obtain that
relation (\ref{43*}) holds if and only if
\begin{equation}
\label{80}
D^{-N}f(x)\stackrel{\Phi'}{\sim}D^{-N}g(x)
=C\left\{
\begin{array}{lcr}
D^{-N}\big(|x|_p^{\alpha-1}\pi_1(x)\big), && \pi_{\alpha}\ne \pi_{0}, \\
D^{-N}\big(\delta(x)\big), && \pi_{\alpha}=\pi_{0}, \\
\end{array}
\right.
\end{equation}
as $|x|_p \to \infty$ \, $\big(|t|_p^{N}\rho(t)\big)$,
where $\pi_{0}=|x|_p^{-1}$.
If $\pi_{\alpha}\ne \pi_{0}=|x|_p^{-1}$, with the help of formulas
(\ref{25.4}), (\ref{25.5}), (\ref{57.2}), we find that
\begin{equation}
\label{81}
D^{-N}g(x)=C\frac{|x|_p^{N-1}}{\Gamma_p(N)}*\big(|x|_p^{\alpha-1}\pi_{1}(x)\big)
=C\frac{\Gamma_p(\pi_{\alpha})}{\Gamma_p(\pi_{\alpha+N})}
|x|_p^{\alpha+N-1}\pi_{1}(x),
\end{equation}
where the $\Gamma$-functions are given by (\ref{25.1}), (\ref{25}).
If $\pi_{\alpha}=\pi_{0}=|x|_p^{-1}$ then
\begin{equation}
\label{82}
D^{-N}g(x)=C\frac{|x|_p^{N-1}}{\Gamma_p(N)}*\delta(x)
=C\frac{|x|_p^{N-1}}{\Gamma_p(N)}.
\end{equation}
Formulas (\ref{80}), (\ref{81}), (\ref{82}) imply that
$$
\lim_{|t|_p \to \infty}
\Bigl\langle \frac{\big(D^{-N}f\big)(tx)}{|t|_p^{N}\rho(t)},
\phi(x)\Bigr\rangle
=C\frac{\Gamma_p(\pi_{\alpha})}{\Gamma_p(\pi_{\alpha+N})}
\bigl\langle |x|_p^{\alpha+N-1}\pi_{1}(x),\phi(x)\bigr\rangle,
$$
for all $\phi \in \Phi({\mathbb Q}_p)$. Since $\alpha+N-1>0$, we have
\begin{equation}
\label{83}
\lim_{|t|_p \to \infty}
\frac{\big(D^{-N}f\big)(tx)}{|t|_p^{N}\rho(t)}
=C\frac{\Gamma_p(\pi_{\alpha})}{\Gamma_p(\pi_{\alpha+N})}
|x|_p^{\alpha+N-1}\pi_{1}(x).
\end{equation}
By using Definition~\ref{de4.1} and formula (\ref{64}),
relation (\ref{83}) can be rewritten in the following form
$$
A=C\frac{\Gamma_p(\pi_{\alpha})}{\Gamma_p(\pi_{\alpha+N})}
=\lim_{|t|_p \to \infty}
\frac{\big(D^{-N}f\big)(tx)}{|t|_p^{N}\rho(t)|x|_p^{\alpha+N-1}\pi_{1}(x)}
\qquad\qquad\qquad\qquad\qquad
$$
$$
=\lim_{|tx|_p \to \infty}
\frac{\big(D^{-N}f\big)(tx)}{|tx|_p^{N}\rho(tx)}
\lim_{|t|_p \to \infty}
\frac{\rho(tx)}{|x|_p^{\alpha-1}\pi_{1}(x)\rho(t)}
=\lim_{|y|_p \to \infty}
\frac{\big(D^{-N}f\big)(y)}{|y|_p^{N}\rho(y)}.
$$
\end{proof}
\begin{Theorem}
\label{th10}
Let ${\mathcal A}(\xi)\in {\mathcal E}({\mathbb Q}_p^n\setminus \{0\})$ be the symbol of
a {\it homogeneous\/} pseudo-differential operator $A$ of degree~$\pi_{\beta}$,
and $f \in \Phi'({\mathbb Q}_p^n)$. Then
$$
f(x) \stackrel{\Phi'}{\sim} g(x), \quad |x|_p \to \infty
\quad \big(\rho(t)\big)
$$
if and only if
$$
(Af)(x) \stackrel{\Phi'}{\sim} (Ag)(x),
\quad |x|_p \to \infty \quad \big(\pi_{\beta}^{-1}(t)\rho(t)\big).
$$
\end{Theorem}
\begin{proof}
Since the Lizorkin space $\Phi({\mathbb Q}_p^n)$ is invariant under the
pseudo-differential operator $A$ (see Sec.~\ref{s5}),
according to formulas (\ref{64.3*}), (\ref{64.5}), and
(\ref{14}), (\ref{17}), we have
$$
\bigl\langle \big(Af\big)(tx),\phi(x)\bigr\rangle
=|t|_p^{-n}\Bigl\langle f(x),A^{T}\phi\Big(\frac{x}{t}\Big)\Bigr\rangle
\qquad\qquad\qquad\qquad\qquad
$$
$$
=|t|_p^{-n}\Bigl\langle f(x),
F^{-1}\big[{\mathcal A}(-\xi)F[\phi\Big(\frac{x}{t}\Big)](\xi)\big](x)\Bigr\rangle
\qquad\qquad\quad
$$
$$
=\frac{1}{\pi_{\beta}(t)}\bigl\langle f(x),
F^{-1}\big[{\mathcal A}(-t\xi)F[\phi(x)](t\xi)\big](x)\bigr\rangle
\qquad\qquad\quad
$$
$$
=\frac{|t|_p^{-n}}{\pi_{\beta}(t)}\Bigl\langle f(x),
F^{-1}\big[{\mathcal A}(-\xi)F[\phi(x)](\xi)\big]\Big(\frac{x}{t}\Big)\Bigr\rangle
\qquad\qquad\quad
$$
$$
\qquad
=\frac{1}{\pi_{\beta}(t)}\bigl\langle f(tx),
F^{-1}\big[{\mathcal A}(-\xi)F[\phi(x)](\xi)\big](x)\bigr\rangle,
\quad \forall \, \phi \in \Phi({\mathbb Q}_p^n).
$$
Passing to the limit in the above relation, as
$|t|_p \to \infty$, we obtain
$$
\lim_{|t|_p \to \infty}
\Bigl\langle \frac{(Af)(tx)}{\pi_{\beta}^{-1}(t)\rho(t)},
\phi(x)\Bigr\rangle
=\lim_{|t|_p \to \infty}
\Bigl\langle \frac{f(tx)}{\rho(t)},\big(A^{T}\phi\big)(x)\Bigr\rangle,
$$
i.e., in view of (\ref{64.5}),
$\lim_{|t|_p \to \infty}\frac{(Af)(tx)}{\pi_{\beta}^{-1}(t)\rho(t)}=Ag(x)$
in $\Phi'({\mathbb Q}_p^n)$ if and only if
$\lim_{|t|_p \to \infty}\frac{f(tx)}{\rho(t)}=g(x)$ in
$\Phi'({\mathbb Q}_p^n)$.
Thus the theorem is proved.
\end{proof}
\begin{center}
{\bf Acknowledgements }
\end{center}
The authors would like to thank Yu.~N.~Drozzinov, S.~V.~Kozyrev,
I.~V.~Volovich, and B.~I.~Zavialov for fruitful discussions.
\section{Introduction}
The evolution of lattice gauge theory techniques has greatly enhanced our
understanding of quark-gluon dynamics in QCD.
Heavy-light mesons provide an ideal laboratory
for lattice QCD studies.
The static approximation ($m_Q \rightarrow \infty$) in which the heavy
quark propagator is replaced by a straight time-like Wilson line
provides a framework which allows a
quantitative study of masses, decay constants, mixing amplitudes,
and electroweak form factors.[1]
Since heavy-light mesons have only one dynamical
light (valence) quark, these systems are also well suited to the
study of constituent quark ideas [2] and the chiral
quark model [3].
In view of the success of the nonrelativistic (NR) potential
model for heavy $Q\bar Q$ mesons,
one interesting question for heavy-light systems is the
nature and extent of the deviation from the NR potential
picture as one of the quarks becomes light.
Here we present results of a numerical lattice study of this question.
Our findings support a surprisingly simple answer.
The Coulomb gauge wave functions obtained in lattice QCD agree,
within the accuracy of our calculations, with the results of a simple
relativistic generalization of the NR quarkonium potential model.
It is only necessary to replace the NR kinetic energy term in the
Hamiltonian by its relativistic form, leaving the NR
potential unchanged. The
only adjustable parameter is the quark
mass parameter $\mu$.
This description holds down to fairly small values of the current
quark mass, corresponding to a pion mass of approximately
$300~{\rm MeV}/c^2$, well into
the region where the NR description fails.
\section{Wavefunctions in Lattice QCD}
In lattice QCD, the properties of hadronic states are studied
using correlation functions of operators which couple to the state.
Originally
local operators were used. More recently smeared (non-local) operators have
been found to improve the ability to extract the masses of meson and baryon
ground states [4]. Many of the present studies have been
done with configurations and propagators
fixed in Coulomb gauge and operators which smear the position
of the quark field uniformly over a spatial cube of variable size.
However a constant cube of any size is a very crude approximation to the
ground state wave function [5].
Hence, the propagator generally has significant
contamination from higher states
out to times large compared to the inverse of the energy splitting between the
ground state and the lowest excited state.
This is a particular problem in the study of
heavy-light correlators because they become noisy
rather rapidly in time.
Unfortunately, this is an unavoidable feature of heavy-light
systems [6,7].
Recently a multistate smearing technique
has been proposed [6]
which allows the extraction of
the properties of heavy-light states
from relatively short times.
The details of the multistate smearing method have been
presented elsewhere[6]. By choosing an appropriate orthonormal set of smearing
functions and diagonalizing the corresponding matrix of correlators, one
obtains the wave functions of not only the lowest lying state in a given
channel,
but also of radially excited states. Here we define the wave function to be
the vacuum-to-one-particle matrix element,
\begin{equation}
\Psi(\vec{r}) = \sum_a\langle 0|q_a(\vec{r},0) Q_a^{\dag}(0,0)|B\rangle
\end{equation}
where $|B\rangle$ is the state of interest. The sum is over color, and
spin labels are suppressed.
Here we discuss the wave functions obtained for the 1S and 2S
levels of the S-wave pseudoscalar meson as well as a preliminary study of
the 1P state.[8] The main emphasis will be on the remarkable quantitative
agreement between the lattice QCD wave functions and those obtained from
a simple relativistic quark model Hamiltonian. The results for heavy-light
decay constants and spectroscopy will be presented at this conference
by Eichten.[9]
The investigation used an existing set of 50 configurations
(each separated by 2000 sweeps) generated by ACPMAPS on
a $16^3\times 32$ lattice at $\beta = 5.9$. The configurations
were fixed to Coulomb gauge and light quark propagators
with $\kappa = 0.158$ were used.
Only the four lowest energy smearing
functions were included ($N = 4$).
\section{Relativistic Quark Model}
The optimized wave functions obtained from our lattice data by the multistate
smearing method turn out to be, within errors, the same as the eigenfunctions
of a lattice version of the spinless, relativistic quark model Hamiltonian,
which
we will now define. In the absence of gauge fields, the free quark Hamiltonian
can be exactly diagonalized by introducing momentum space creation and
annihilation
operators for quarks and antiquarks. In the continuum,
\begin{eqnarray}
H_0 = \int\frac{d^3p}{(2\pi)^3} \sqrt{\vec{p}^2+\mu^2} \sum_i[
\alpha^{\dag}_i(p)\alpha_i(p)\nonumber\\
+ \beta^{\dag}_i(p)\beta_i(p)]\label{eq:H0}
\end{eqnarray}
where the sum is over spin and color labels. In terms of the covariant quark
propagator, the particle and antiparticle operators are associated with
propagation
forward and backward in time, respectively. Since $H_0$ contains no pair
creation
($\alpha^{\dag}\beta^{\dag}$) terms, it is possible to formulate the eigenvalue
problem as that of a one-body operator,
$H_0 \rightarrow \sqrt{\mu^2-\nabla^2}$.
If we now turn on the gauge interaction and introduce a heavy-quark, static
color
source, the description of the bound light quark becomes, in principle,
drastically
more complicated. We know that, in the limit $\mu\gg\Lambda_{QCD}$ where the
dynamical
quark becomes heavy, the primary effect of the color source is to introduce a
static,
confining potential $V(r)$ whose form is well-measured and consistently given
by both
$Q\bar{Q}$ phenomenology and lattice QCD,
\begin{equation}
H_0 \rightarrow H = H_0 + V(r) \label{eq:SRQM}
\end{equation}
At this stage, the Hamiltonian can still be regarded as a one-body
operator[10]. As the mass of the quark becomes light, one expects more
complicated effects
arising from the gauge interaction which render the Hamiltonian eigenvalue
problem
intractable. These effects include the creation of gluons and light $q\bar{q}$
pairs,
as well as the exchange of transverse and non-instantaneous gluons with the
static
source. From the numerical results presented in the next section, we conclude
that
these effects are relatively small, and that the heavy-light meson system is
well-described
by the Hamiltonian (\ref{eq:SRQM}), which we will refer to as the spinless relativistic
quark
model (SRQM).
\begin{figure}[htb]
\psfig{figure=figure1.ps,height=3.0in,width=2.8in}
\caption{Comparison of the 1S state in LQCD ($\times$'s) with the NRQM
(+'s) and the SRQM (boxes).}
\label{fig:1S}
\end{figure}
The construction of explicit eigenfunctions of the SRQM Hamiltonian is easily
accomplished by a numerical procedure. First the operator $H$ is discretized on
a 3D
lattice by replacing the spatial derivatives with finite differences.
The potential energy $V(r)$ is just the static energy
measured on the same configurations used to study
the heavy-light spectrum.
Then the resolvent
operator $(E-H)^{-1}$ acting on a source vector $\chi$ is computed by a
numerical
matrix inversion (conjugate gradient) algorithm. Finally, the parameter $E$ is
varied
to find the poles in the output vector $(E-H)^{-1}\chi$. The location of the
pole is
an eigenvalue of $H$, and its residue is the corresponding eigenfunction. In
the next
section we compare the wave functions obtained in this way from the SRQM
Hamiltonian
with the lattice QCD results.
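As a hedged illustration of this procedure (a one-dimensional toy version written for this note, not the production 3D code; the lattice size, mass, and string-tension values are purely illustrative), one can build the square-root kinetic term exactly in the plane-wave basis and locate a pole of the resolvent by scanning $E$:

```python
import numpy as np

# 1D toy version of the SRQM eigenvalue procedure: discretize
# H = sqrt(mu^2 - d^2/dx^2) + V, with the square-root kinetic term
# built exactly in momentum space, then scan E and look for the
# blow-up of (E - H)^{-1} chi at an eigenvalue of H.
L, mu, sigma = 64, 0.23, 0.1                 # sites, quark mass, string tension (illustrative)
k = 2 * np.pi * np.fft.fftfreq(L)            # lattice momenta
F = np.fft.fft(np.eye(L)) / np.sqrt(L)       # unitary DFT matrix
K = (F.conj().T @ np.diag(np.sqrt(mu**2 + 4 * np.sin(k / 2)**2)) @ F).real
r = np.minimum(np.arange(L), L - np.arange(L))   # distance on the periodic lattice
H = K + np.diag(sigma * r)                   # kinetic term plus linear confining potential

chi = np.exp(-0.5 * r)                       # smooth source vector
E_grid = np.linspace(0.0, 2.0, 400)
response = [np.linalg.norm(np.linalg.solve(E * np.eye(L) - H, chi)) for E in E_grid]
E_pole = E_grid[int(np.argmax(response))]    # pole location approximates an eigenvalue of H
```

On the 3D lattice the same scan is carried out with a conjugate gradient solver in place of the dense solve.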
\section{Comparison of Wavefunctions}
\begin{figure}[htb]
\psfig{figure=figure2.ps,height=3.0in,width=2.8in}
\caption{Comparison of the 2S state in LQCD ($\times$'s) with the NRQM
(+'s) and the SRQM (boxes).}
\label{fig:2S}
\end{figure}
\begin{figure}[htb]
\psfig{figure=figure3.ps,height=3.0in,width=2.8in}
\caption{The 1P state in LQCD extracted from T=2 (+'s), T=4 (boxes), and
T=6 ($\times$'s).}
\label{fig:1P}
\end{figure}
Using the four-state smeared correlator described in
Section 2, an initial study of the S-wave
channel was carried out.
After some iterative improvement of the smearing functions, it was found that
the value $\mu = .23$ for the dimensionless mass parameter in the SRQM
Hamiltonian gave the best agreement with the lattice QCD wave functions with
$\beta = 5.9, \kappa = .158$. In Fig.~1 the LQCD wave function
is plotted with the SRQM wave function. For comparison, the
nonrelativistic (NR) Schr\"odinger wave function (obtained by replacing the
relativistic kinetic term by $p^2/2m$) is also plotted. The mass
parameter in the NR Hamiltonian was adjusted to give the same slope at
the origin in the ground state wave function. Notice that, for large r,
the QCD and SRQM wave functions both fall exponentially. On the other
hand, the NR wave function falls faster
than exponentially, as $\exp(-\alpha r^{3/2})$, as expected from the
behavior of the analytic solution in a pure linear potential (Airy function).
Remarkably, by including the relativistic kinetic term, the SRQM wave
functions are brought into excellent agreement with those of lattice QCD,
without changing the potential from its nonrelativistic form.
In Fig.~2 we plot the excited 2S state from LQCD along with the corresponding
wave functions from the SRQM and the NR model.
The QCD wave function is somewhat more peaked at the origin; however,
the overall agreement between QCD and the SRQM is excellent.
Here, there are no adjustable parameters, the mass parameter being already
fixed from the 1S state fit.
Finally, in Fig.~3 we show some preliminary results of a study
of the 1P state. Here the solid line is the 1P wave function from the SRQM.
The data points depict the evolution of the P-wave LQCD radial wavefunction
extracted from time slices $T=2$ (+'s), $T=4$ (boxes), and
$T=6$ ($\times$'s), starting with an approximate guess for the initial smearing
function. The ansatz for the initial smearing function used here was a simple
$re^{-\alpha r}$ form. As the LQCD wave function evolves in Euclidean time,
it appears to approach a true eigenstate whose wavefunction again agrees
remarkably well with the SRQM result, with no adjustable parameters.
\section{Discussion}
Additional studies are in
progress using a variety of lattice sizes, gauge coupling strengths, and light
quark masses. Preliminary results of these studies are fully consistent with
the conclusions presented here.
The agreement of lattice QCD with the SRQM wave functions suggests
that the relativistic propagation of the light valence quark is
the most important effect which must be included in a description of
heavy-light mesons.
Other field theoretic effects such as the presence
of multibody components of the wavefunction (containing gluons along
with light $q\bar{q}$ pairs arising, in quenched approximation, from the
propagation of the valence quark backward in time) are of less
quantitative importance in determining the shape of
the valence quark wave function.
Further numerical studies of the connection between lattice QCD
and the relativistic quark model are planned.
\section{Acknowledgements}
We thank George Hockney, Aida El Khadra, Andreas Kronfeld, and
Paul Mackenzie for joint lattice efforts without which this analysis
would not have been possible. We also thank Chris Quigg for
useful discussions. The numerical calculations were performed on the
Fermilab ACPMAPS computer system developed by the CR\&D department in
collaboration with the theory group. This work was supported in part by
the U.~S.~Department of Energy under grant No. DE-AS05-89ER40518.
\section{Introduction}
\label{sec-INT}
The concept {\em entropy} arises across a broad range of topics
within the mathematical sciences, with different nuances and applications.
There is a substantial literature (see Section~\ref{sec:concept}) on topics linking entropy and graphs,
but our focus seems different from these.
In this paper we use the word only with its most elementary meaning:
for any probability distribution $\mathbf{p} = (p_s)$ on any finite set $S$,
its {\em entropy} is the number
\begin{equation}
\mathrm{ent}(\mathbf{p}) = - \sum_s p_s \log p_s .
\label{ent-def}
\end{equation}
For an $S$-valued random variable $X$ we abuse notation by writing
$\mathrm{ent}(X)$ for the entropy of the distribution of $X$.
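In computational terms (a minimal Python sketch, using natural logarithms):

```python
import math

def ent(p):
    """Entropy -sum_s p_s log p_s of a finite distribution p, as in (1)."""
    return -sum(x * math.log(x) for x in p if x > 0)

# e.g. the uniform distribution on a set of size A has entropy log A
```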
Consider an $N$-vertex undirected graph.
Instead of the usual conventions about vertex-labels
(unlabelled; labelled by a finite set independent of $N$; labelled by integers
$1,\ldots,N$) our convention is that there is a fixed
(i.e. independent of $N$) alphabet ${\mathbf A}$ of size $2 \le A < \infty$ and that each vertex
has a different ``name", which is a length-$O(\log N)$ string
$\mathbf{a} = (a_1,\ldots,a_m)$ of letters from ${\mathbf A}$.
We will consider probability distributions over such graphs-with-vertex-names, in the $N \to \infty$ ``sparse graph limit" where the number of edges is $O(N)$.
In other words we study random graphs-with-vertex-names $\GG_N$ whose average degree is $O(1)$.
In this particular context (see Section~\ref{sec:TSU} for discussion) one expects
that the entropy should grow as
\begin{equation}
\mathrm{ent}(\GG_N) \sim c N \log N,
\label{c-def}
\end{equation}
where $c$ is thereby interpretable as
an ``entropy rate".
Note the intriguing curiosity that
the numerical value of the entropy rate $c$ does not depend on the base
of the logarithms, because there is a ``log" on both sides of the
definition (\ref{c-def}), and indeed we will mostly avoid specifying the base.
In Section~\ref{sec:easy} we define and analyze a variety of models for which calculation of entropy rates is straightforward.
In Section~\ref{sec:hard} we study one more complicated model. This is the {\em mathematical} content of the paper.
Our motivation for studying entropy in this specific setting is discussed verbally in Section~\ref{sec:BP}, and this discussion
is the main {\em conceptual} contribution of the paper.
The discussion is independent of the subsequent mathematics but may be helpful in formulating interesting probability models for future study.
Section~\ref{sec:backg} gives some (elementary) technical background.
Section \ref{sec:final} contains final remarks and open problems.
\section{Remarks on data compression for graphical\\ structures}
\label{sec:BP}
The well-known textbook~\cite{MR2239987} provides an account of the classical Shannon setting of data compression for sequential data,
motivated by English language text modeled as a stationary random sequence.
What is the analog for graph-structured data?
This is plainly a vague question.
Real-world data rarely consists only of the abstract mathematical structure -- unlabelled vertices and edges -- of a graph; typically a considerable amount of context-dependent extra information is also present.
Two illustrative examples:\\
(i) Phylogenetic trees on species; here part of the data is the names of the species
and the names of clades; \\
(ii) Road networks; here part of the data is the names or numbers of the roads
and some indication of the locations where roads meet.
\smallskip
\noindent
Our setting is designed as one simple abstraction of ``extra information",
in which the (only) extra information is
the ``names" attached to vertices.
Note
that in many examples one expects some association between the names and the graph structure, in that the names of two vertices which are adjacent
will on average be ``more similar" in some sense than
the names of two non-adjacent vertices.
This is very clear in the phylogenetic tree example, because of the
genus-species naming convention.
So when we study toy probability models later, we want models featuring
such association.
Let us remind the reader of two fundamental facts from
information theory \cite{MR2239987}.
\noindent
(a) In the general setting (\ref{ent-def}), there
exists a coding (e.g.\ Huffman code) $f_\mathbf{p}: S \to {\mathbf B}$
such that, for $X$ with distribution $\mathbf{p}$,
\[ \mathrm{ent}(\mathbf{p}) \le {\mathbb E} \ \mathrm{len} (f_\mathbf{p}(X)) \le \mathrm{ent}(\mathbf{p}) + 1 \]
and no coding can improve on the lower bound.
Here ${\mathbf B}$ denotes the set of finite binary strings
$\mathbf{b} = b_1b_2 \ldots b_m$, $\mathrm{len} (\mathbf{b}) = m$ denotes the length of a string, and entropy is computed to base $2$.
Recall that a coding is just a $1 - 1$ function.
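As a toy illustration of fact (a) (the code and the example distribution are ours, not part of the development): a Huffman code built with a binary heap has expected length between the base-$2$ entropy and that entropy plus one.

```python
import heapq
import math

def huffman_lengths(p):
    """Codeword lengths of a binary Huffman code for distribution p."""
    depth = [0] * len(p)
    # heap entries: (weight, unique id for tie-breaking, symbols in this subtree)
    heap = [(w, i, [i]) for i, w in enumerate(p)]
    heapq.heapify(heap)
    nxt = len(p)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:   # merging two subtrees lengthens every codeword in them
            depth[s] += 1
        heapq.heappush(heap, (w1 + w2, nxt, s1 + s2))
        nxt += 1
    return depth

p = [0.4, 0.3, 0.2, 0.1]                      # an arbitrary example distribution
H2 = -sum(x * math.log2(x) for x in p)        # entropy to base 2
mean_len = sum(x * l for x, l in zip(p, huffman_lengths(p)))
# fact (a): H2 <= mean_len <= H2 + 1
```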
\noindent
(b) In the classical Shannon setting, one considers
a stationary ergodic sequence ${\mathbf X} = (X_i)$ with values in a finite alphabet.
Such a sequence has an {\em entropy rate}
\[
H := \lim_{k \to \infty} k^{-1} \mathrm{ent}(X_1,\ldots,X_k) .
\]
Moreover there exist coding functions $f$ (e.g.\ Lempel-Ziv) which are {\em universal} in the sense that for every such stationary ergodic sequence,
\[ \lim_{m \to \infty}
m^{-1} {\mathbb E} \ \mathrm{len} (f(X_1, \ldots,X_m)) = H . \]
\noindent
The important distinction is that in (a) the coding function $f_\mathbf{p}$ depends on the distribution of ${\mathbf X}$ but in (b) the coding function $f$ is a function on finite sequences which does not depend on the distribution of ${\mathbf X}$.
In our setting of graphs with vertex-names we can in principle apply (a), but it will typically be very unrealistic to imagine that observed real-world data is a realization from some {\em known} probability distribution on such graphs.
At the other extreme, for many reasons one cannot expect there to exist, in our setting,
``universal" algorithms analogous to (b).
For instance, the vertex-names $(\mathbf{a}, \mathbf{a}^*)$ across some edges might be related by a deterministic cryptographic function.
Also note it is difficult to imagine a definition analogous to ``stationary" in our setting.
So it seems necessary to rely on heuristic algorithms for compression,
where {\em heuristic} means only that there is no good theoretical guarantee on compressed length.
One could of course compare different heuristic algorithms at an empirical level by testing them on real-world data.
As a theoretical complement, one could test an algorithm's efficiency by trying to prove that, for some wide range of qualitatively different probability models for $\GG_N$, the
algorithm behaves optimally in the sense of compressing to
mean length $(c + o(1)) N \log N$ where $c$ is the entropy rate
(\ref{c-def}).
And the contribution of this paper is to provide a collection of
probability models for which we know the numerical value of $c$.
\subsection{Remarks on the technical setup}
\label{sec:TSU}
The discussion above did not involve
two extra assumptions made in Section \ref{sec-INT},
that the graphs are sparse and that
the length of names is $O(\log N)$
(note the length must be at least order $\log N$ to allow the names
to be distinct).
These extra assumptions create a more focussed setting for data compression that is
mathematically interesting for two reasons.
If the entropies of the two structural components -- the unlabelled graph, and the set of names -- were of different orders, then only the larger one would be important;
but these extra assumptions make both entropies be of the same order, $N \log N$.
So both of these two structural components and their association become relevant for compression.
A second, more technical, reason is that natural models
of sparse random graphs $\GG_n$ invariably have a well-defined limit
$\GG_\infty$ in the sense of
{\em local weak convergence} \cite{MR2023650,MR2354165}
of unlabelled graphs, and the limit $\GG_\infty$ automatically has a property,
{\em unimodularity}, directly analogous to stationarity for
random sequences.
This addresses part of the ``difficult to imagine a definition analogous to stationary in our setting" issue raised above, but it remains difficult
to extend this notion to encompass the vertex-names.
\subsection{Related work}
\label{sec:concept}
We have given a verbal argument that the Section \ref{sec-INT} setting of sparse graphs with vertex-names
is a worthwhile setting for future theoretical study of data compression in graphical structures.
It is perhaps surprising that this precise setting has apparently not been
considered previously.
The large literature on what is called ``graph entropy", recently surveyed
in \cite{MR2737460}, deals with statistics of a single unlabelled graph,
which is quite different from our setting.
Data compression for graphs
with a fixed alphabet is considered in \cite{MR1984489}.
In a different direction, the case of {\em sequences} of length $N$ with increasing-sized alphabets
is considered in \cite{MR2095850,MR2817014}.
Closest to our topic is \cite{SZP}, discussing entropy and
explicit compression algorithms for Erd\H{o}s-R\'enyi\ random graphs.
But all of this literature deals with settings that seem ``more mathematical" than ours, in the sense
of being less closely related to compression of
real-world graphical structures involving extra information.
On the applied side, there is considerable discussion of
heuristic compression algorithms designed to exploit expected features of graphs arising in particular contexts, for instance
WWW links
\cite{Boldi03thewebgraph}
and social networks
\cite{Chierichetti:2009}.
What we proposed in the previous section as future research is to try to bridge the gap between that work and mathematical theory by seeking to devise and study general purpose heuristic algorithms.
On a more speculative note, we have a lot of sympathy with the view
expressed by John Doyle and co-authors
\cite{doyle-anderson},
who argue that the ``organized complexity" one sees in real world
evolved biological and technological networks is essentially different from the ``disorganized complexity" produced by probability models of random graphs.
At first sight it is unclear how one might try to demonstrate this distinction at some statistical level. But producing a heuristic algorithm that codes some class of real-world networks to lengths
smaller than the entropy of typical probability models of such networks would be rather convincing.
\section{A little technical background}
\label{sec:backg}
Here are some elementary facts \cite{MR2239987} about entropy,
in the setting (\ref{ent-def}) of an $S$-valued r.v. $X$,
which we will use without comment.
\begin{eqnarray*}
\mathrm{ent}(X) & \le& \log |S| \\
\mathrm{ent}(X,Y) &\le & \mathrm{ent}(X) + \mathrm{ent}(Y)\\
\mathrm{ent}(X) & \ge & \mathrm{ent}(h(X)) \mbox{ for any } h:S \to S^\prime.
\end{eqnarray*}
These inequalities are equalities if and only if, respectively:
$X$ has uniform distribution on $S$;
$X$ and $Y$ are independent;
$h$ is $1 - 1$ on the range of $X$.
\noindent
Also, if $\bar{\theta} = \sum_s q_s \theta_s$,
where $q = (q_s)$ and each $\theta_s$ is a probability distribution, then
\begin{equation}
\mathrm{ent}(\bar{\theta}) \le \mathrm{ent}(q) + \sum_s q_s \ \mathrm{ent}(\theta_s)
\label{H-mixing}
\end{equation}
with equality if and only if the supports of the $\theta_s$ are essentially disjoint. In random variable notation,
\begin{equation}
\mathrm{ent}(X) = \mathrm{ent} (f(X)) + {\mathbb E} \mathrm{ent}(X \vert f(X))
\label{H-condit}
\end{equation}
where the random variable $\mathrm{ent}(X \vert Y)$ denotes entropy of the conditional distribution.
(Note this is what a probabilist would call ``conditional entropy", though information theorists use that phrase to mean ${\mathbb E} \, \mathrm{ent}(X \vert Y)$).
Write
\[ \mbox{${\mathcal E}$}(p) = - p \log p - (1-p) \log (1-p) \]
for the entropy of the Bernoulli($p$) distribution.
We will often use the fact
\begin{equation}
\mbox{${\mathcal E}$}(p) \sim p \log \sfrac{1}{p} \mbox{ as } p \downarrow 0.
\label{Bplim}
\end{equation}
We will also often use the following three basic crude estimates.
First,
\begin{equation}
\mbox{ if } K_m \to \infty \mbox{ and } \sfrac{K_m}{m} \to 0
\mbox{ then } \log {m \choose K_m} \sim K_m \log \sfrac{m}{K_m} .
\label{mK}
\end{equation}
Second, for $X(n,p)$ with Binomial$(n,p)$ distribution,
if $0\leq x_n \leq np$ and $x_n/n \to x \in [0,p]$ then
\begin{equation}
\log \Pr(X(n,p) \le x_n) = -n \Lambda_p(x_n/n) + \textrm{O}(\log n ), \label{BinLD}
\end{equation}
where $\Lambda_p(x) := x \log \sfrac{x}{p} + (1-x) \log \sfrac{1-x}{1-p}$.
The first order term is standard from large deviation theory and the second order
estimate follows from finer but still easy analysis; see for example
Lemma~2.1 of~\cite{kamc10}.
Third, write $G[N,M]$ for the number of graphs on
vertex-set $1,\ldots,N$ with at most $M$ edges.
It easily follows from~\eqref{BinLD} that
\begin{equation}
\mbox{if } \sfrac{M}{N} \to \zeta \in [0,\infty) \mbox{ then }
\frac{\log G[N,M]}{N \log N} \to \zeta. \label{GMN}
\end{equation}
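Both estimates (\ref{mK}) and (\ref{BinLD}) can be checked numerically at moderate sizes (a sketch; the particular values of $m$, $K_m$, $n$ and $x_n$ are arbitrary):

```python
import math

def log_binom(m, k):
    # log C(m, k) via log-Gamma, avoiding huge integers
    return math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)

# binomial-coefficient estimate: log C(m, K) ~ K log(m/K) when K -> inf, K/m -> 0;
# the leading correction to the ratio is of size 1/log(m/K)
m, K = 10**8, 10**4
ratio = log_binom(m, K) / (K * math.log(m / K))

# large-deviation estimate: log P(X(n,p) <= x_n) = -n Lambda_p(x_n/n) + O(log n)
def Lambda(p, x):
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

n, p, xn = 2000, 0.5, 600
# p = 1/2, so each sample point has probability 2^{-n}
log_tail = math.log(sum(math.comb(n, j) for j in range(xn + 1))) - n * math.log(2)
err = log_tail + n * Lambda(p, xn / n)        # O(log n) in size
```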
\section{Easy examples}
\label{sec:easy}
Standard models of random graphs on vertices
labelled $1, \ldots, N$ can be adapted to our setting of vertex-names in several ways. In particular, one could either\\
(i) re-write the integer label in binary, that is as a binary string; or\\
(ii) replace the labels by distinct random strings as names.\\
These two schemes are illustrated in the first two examples below.
We present the results in a fixed format:
a name for the model as a subsection heading, a definition of the model $\GG_N$,
typically involving parameters $\alpha, \beta, \ldots$, and a Proposition giving a formula for the entropy rate $c = c(\alpha, \beta, \ldots)$ such that
\[ \mathrm{ent}(\GG_N) \sim c N \log N \mbox{ as } N \to \infty .\]
Model descriptions and calculations sometimes implicitly assume $N$ is sufficiently large.
These particular models are ``easy" in the specific sense that independence of edges allows us to write down an exact expression for entropy; then calculations establish the asymptotics.
We also give two general results, Lemmas~\ref{L1} and~\ref{L2},
showing that graphs with short edges, or with
similar names between connected vertices, have entropy rate zero.
\subsection{Sparse Erd\H{o}s-R\'enyi, default binary names}
{\bf Model.} $N$ vertices, whose names are the integers
$1,\ldots, N$ written as binary strings of length $\lceil \log_2 N \rceil$.
Each of the ${N \choose 2}$ possible edges is present independently with probability $\alpha/N$, where $0<\alpha<\infty$.
\smallskip \noindent
{\bf Entropy rate formula.}
$c(\alpha) = \sfrac{\alpha}{2}$.
\smallskip \noindent {\bf Proof.\ }
The entropy equals
${N \choose 2} \mbox{${\mathcal E}$}(\alpha/N)$;
letting $N \to \infty$ and using (\ref{Bplim}) gives the formula.
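Numerically (an illustrative sketch, with arbitrary $\alpha$ and $N$), the exact entropy ${N \choose 2} \mbox{${\mathcal E}$}(\alpha/N)$ approaches its limiting rate rather slowly:

```python
import math

def bern_ent(p):
    # entropy of the Bernoulli(p) distribution, natural logarithms
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

alpha, N = 2.0, 10**7
exact = math.comb(N, 2) * bern_ent(alpha / N)     # exact entropy of G_N
rate = exact / (N * math.log(N))                  # approaches alpha/2 = 1
```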
\subsection{Sparse Erd\H{o}s-R\'enyi, random $A$-ary names}
\label{sec:SpERb}
{\bf Model.} As above, $N$ vertices, and
each of the ${N \choose 2}$ possible edges is present independently with probability $\alpha/N$. Take $L_N \sim \beta \log_A N$ for
$1 < \beta < \infty$ and take the vertex names
as a uniform random choice of $N$ distinct $A$-ary strings of length $L_N$.
\smallskip \noindent
{\bf Entropy rate formula.}
$c(\alpha, \beta) = \beta - 1 + \frac{\alpha}{2}$.
\smallskip \noindent {\bf Proof.\ }
The entropy equals
$\log{A^{L_N} \choose N} +{N \choose 2} \mbox{${\mathcal E}$}(\alpha/N)$.
The first term $\sim (\beta - 1) N \log N$ by (\ref{mK}) and the second
term $\sim \frac{\alpha}{2} N \log N$ as in the previous model.
\smallskip \noindent
{\bf Remark.} One might have naively guessed that the formula would involve $\beta$
instead of $\beta -1$, on the grounds that the entropy of the sequence of names is $\sim \beta N \log N$, but this is the rate in a third model where
a vertex name is a pair $(i,\mathbf{a})$, where $1 \le i \le N$ and $\mathbf{a}$ is the random string.
This model distinction becomes more substantial for the model to be studied
in Section~\ref{sec:hard}.
\subsection{Small Worlds Random Graph}
\label{sec:SW}
{\bf Model.}
Start with $N=n^2$ vertices arranged in an $n\times n$ discrete torus,
where the name of each vertex is its coordinate-pair $(i,j)$ written
as two binary strings of lengths $\lceil \log_2 n \rceil$.
Add the usual edges of the degree-$4$ nearest neighbor torus graph.
Fix parameters
$0 < \alpha, \gamma < \infty$.
For each pair $(w,v)$ in the remaining set of ${N \choose 2}-2N$ possible edges, add the edge independently
with probability $p_N(||w-v||_2)$, where $p_N(r)=a r^{-\gamma}$ and $a:=a_{N,\gamma}$ is chosen such that the mean degree contributed by these random edges
$\to \alpha$ as $N \to \infty$
(see (\ref{sw-d1},\ref{sw-d2}) for explicit expressions);
the Euclidean distance $||w-v||_2$ is taken using the torus convention.
\smallskip \noindent
{\bf Entropy rate formula.}
\begin{eqnarray*}
c(\alpha, \gamma) &=& \alpha/2, \quad 0 < \gamma < 2 \\
&=& \alpha/4, \quad \gamma = 2 \\
&=& 0, \quad \quad 2< \gamma < \infty .
\end{eqnarray*}
\smallskip \noindent
{\bf Remark.} The different cases arise because for $\gamma < 2$ the
edge-lengths are order $n$ whereas for $\gamma > 2$ they are $O(1)$.
\smallskip \noindent {\bf Proof.\ }
Write $r_{i,j}=\sqrt{i^2+j^2}$ and $p_{i,j}=p_N(r_{i,j})$. The degree $D(v)$ of vertex $v$ in $\GG_N$ (assuming $n$ is odd --
the even case is only a minor modification) has mean
\begin{align}
{\mathbb E} D(v)-4 &= 4\sum_{i,j=1}^{(n-1)/2} p_N(r_{i,j})+ 4\sum_{i=2}^{(n-1)/2} p_N(r_{i,0}) \notag \\
&= a\left( 4 \sum_{i,j=1}^{(n-1)/2} (i^2+j^2)^{-\gamma/2}+ 4 \sum_{i=2}^{(n-1)/2} i^{-\gamma}\right) . \label{sw1}
\end{align}
Similarly, the entropy
of $\GG_N$ is exactly
\begin{equation}
\label{2}
\mathrm{ent}(\GG_N)=\frac{N}{2}\left(4\sum_{i,j=1}^{(n-1)/2} \mbox{${\mathcal E}$}(p_N(r_{i,j}))+ 4\sum_{i=2}^{(n-1)/2} \mbox{${\mathcal E}$}(p_N(r_{i,0}))\right).
\end{equation}
One can analyze these expressions separately in the three cases.
First consider the ``critical" case $\gamma = 2$. Here the quantity in parentheses in (\ref{sw1})
is $\sim \int_1^{(n-1)/2} 2\pi r^{-1} dr \sim 2 \pi \log n \sim \pi\log N$.
We therefore take
\begin{equation}
a = a_{N,2} \sim \sfrac{\alpha}{\pi \log N}
\label{sw-d1}
\end{equation}
so that $\GG_N$ has mean degree $\to 4+\alpha$.
Evaluating the entropy similarly,
where in the second line the ``$\log a$" term is asymptotically negligible,
\begin{eqnarray*}
\mathrm{ent}(\GG_N) &\sim& \frac{N}{2}
\int_1^{(n-1)/2} 2\pi r \ \mbox{${\mathcal E}$}(ar^{-2}) dr \\
&\sim& N \pi
\int_1^{(n-1)/2} r \cdot ar^{-2} \cdot ( - \log a + 2 \log r ) \ dr \\
&\sim& 2N \pi a
\int_1^{(n-1)/2} r^{-1} \log r \ dr \\
&\sim& 2N \pi a \cdot \sfrac{1}{2} \log^2 n \\
&\sim &\sfrac{\alpha}{4} N \log N
\end{eqnarray*}
giving the asserted entropy rate formula in this case $\gamma = 2$.
In the case $\gamma < 2$,
more elaborate though straightforward calculations
(see appendix)
show that to have the mean degree $\to \alpha+4$ we take
\begin{equation}
a = a_{N,\gamma} \sim \alpha \kappa_\gamma N^{-1 + \gamma/2} ;
\quad \kappa_\gamma = \frac{2 - \gamma}{2^{1+\gamma}
\int_0^{\pi/4} \sec^{2-\gamma}(\theta) d\theta} .
\label{sw-d2}
\end{equation}
and then establish the asserted entropy rate $\alpha/2$.
In the case $\gamma > 2$ the mean length of the edges of $\GG_N$
becomes $O(1)$.
One could repeat calculations for this case, but the asserted zero
entropy rate follows from the more general Lemma \ref{L1} later,
as explained in Section~\ref{sec:zero_rate}.
\smallskip \noindent
{\bf Remark.}
The case $\gamma < 2$ suggests a general principle that
models with ``long edges" should have the same entropy rates as if
the edges were uniform random subject to the same degree distribution.
But there seems to be no general formulation
of such a result without explicit dependence assumptions.
\subsection{Edge-probabilities depending on Hamming distance}
We first describe a general model, then the specialization that
we shall analyze.
{\bf General model.}
Fix an alphabet ${\mathbf A}$ of size $A$.
For each $N$ choose $L_N$ such that $N \le A^{L_N}$,
and suppose $L_N \sim \beta \frac{\log N}{\log A}$ for some
$\beta \in [1,\infty)$.
Take $N$ vertex-names as a uniform random choice of distinct length-$L_N$ strings from ${\mathbf A}$.
Write $d_H(\mathbf{a},\mathbf{a}^\prime) = |\{i: a_i \neq a^\prime_i\}|$ for Hamming distance between names.
For each $N$ let $\mathbf{w} = \mathbf{w}^N$ be a sequence of decreasing weights
$1 = w(1) \ge w(2) \ge \ldots \ge w(L_N) \ge 0$.
We want the probability of an edge between vertices
$(\mathbf{a},\mathbf{a}^\prime)$ to be proportional to $w(d_H(\mathbf{a},\mathbf{a}^\prime))$.
For each vertex $\mathbf{a}$, the expectation of the sum of
$w(d_H(\mathbf{a},\mathbf{a}^\prime))$ over other vertices $\mathbf{a}^\prime$ equals
\begin{equation}
\mu_N:=
\frac{N-1}{1 - A^{-L_N}}
\sum_{u=1}^{L_N} {L_N \choose u} \left(\frac{A-1}{A}\right)^u \left(\frac{1}{A}\right)^{L_N-u} \ w(u). \label{Hamm-1}
\end{equation}
Fix $0 < \alpha < \infty$, and make a random graph $\GG_N$ with mean degree $\alpha$ by specifying that, conditional on the set of vertex-names, each possible edge
$(\mathbf{a},\mathbf{a}^\prime)$ is present independently with probability
$\alpha w(d_H(\mathbf{a},\mathbf{a}^\prime)) / \mu_N$.
Note that in order for this model to make sense, we need $\mu_N\geq\alpha$, which is not guaranteed
by the description of the model.
Intuitively, we expect that the lengths (measured by Hamming distance) of edges
will be around the $\ell_N$ maximizing $\binom{L_N}{\ell_N} \ w^N(\ell_N) $, and that for all (suitably regular) choices of $\mathbf{w}^N$ with
$\ell_N/L_N \to d \in [0,1]$ the entropy rate will involve $\mathbf{w}^N$ only via the limit $d$.
Stating and proving a general such result seems messy, so
we will study only the special case
\begin{eqnarray}
w(u)&=&1, \ 1\leq u\leq M_N; \label{wu} \\
&=& 0 ,\ M_N <u \le L_N \nonumber \\
M_N/L_N &\to & d\in(0,1 - \sfrac{1}{A}) \\
1\leq &\beta&< \frac{\log A}{\Lambda_{1 - 1/A}(d) } \label{betaUB}
\end{eqnarray}
for $\Lambda_p(d)$ as at (\ref{BinLD}).
Here condition (\ref{betaUB}) is needed, as we will see at (\ref{munb}), to
make $\mu_N \to \infty$.
Note that for the case $d = 0$ one could use Lemma \ref{L1} later and the accompanying conditioning argument in Section~\ref{sec:zero_rate} to show that the entropy rate of $\GG_N$ equals the rate ($\beta - 1$) for the
set of vertex-names.
The opposite case
$(1 - 1/A) \le d \le 1$ is essentially the model of Section~\ref{sec:SpERb}
and the rate becomes $\beta - 1 + \alpha/2$: as expected, these rates are the $d \to 0$ and the $d \to 1 - 1/A$ limits of the rates in our formula below.
\smallskip \noindent
{\bf Entropy rate formula.}
In the special case (\ref{wu} - \ref{betaUB}),
\[
c(A, \alpha, \beta, d)=\beta-1+\frac{\alpha}{2}\left(1-\frac{\beta\Lambda_{1-1/A}(d)}{\log A}\right) .
\]
To establish this formula, first observe that
for Binomial $X(\cdot,\cdot)$ as at (\ref{BinLD})
\baln{
\mu_N=\frac{N-1}{1 - A^{-L_N}} \Pr(1\leq X(L_N,1 - 1/A) \leq M_N), \label{bdt2}
}
and so by (\ref{BinLD})
\begin{equation}
\frac{\log \mu_N}{\log N} \to 1 - \frac{ \beta \ \Lambda_{1-1/A}(d) }{\log A} .
\label{munb}
\end{equation}
So condition (\ref{betaUB}) ensures that $\mu_N \to \infty$ and therefore the model makes sense.
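A quick numerical consistency check on the normalization in (\ref{Hamm-1}) (a sketch; the parameter values are illustrative): with the cutoff removed ($M_N = L_N$) the sum must give $\mu_N = N-1$ exactly, and for a chosen cutoff one can verify $\mu_N \ge \alpha$ directly.

```python
import math

A, L, N, alpha = 4, 30, 10**5, 2.0    # illustrative parameter values

def mu(M):
    # mu_N from the weighted sum, with w(u) = 1 for u <= M and 0 otherwise
    pmf = lambda u: math.comb(L, u) * ((A - 1) / A)**u * (1 / A)**(L - u)
    return (N - 1) / (1 - A**(-L)) * sum(pmf(u) for u in range(1, M + 1))

mu_full = mu(L)     # no cutoff: every other vertex has weight 1, so mu = N - 1
mu_cut = mu(15)     # with a cutoff at Hamming distance 15
```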
Write $\mathbf{Names}$ (to avoid overburdening the reader with symbols) for the random unordered set of vertex-names,
and use (\ref{H-condit}) to write
\[ \mathrm{ent}(\GG_N) = \mathrm{ent}(\Names) + {\mathbb E} \, \mathrm{ent}(\GG_N \vert \Names) . \]
As in Section~\ref{sec:SpERb} the contribution to the entropy rate from the first term is
$\beta - 1$. For the second term, write
\[ \mathrm{ent}(\GG_N \vert \Names) = \sum_{\mathbf{a}\not=\mathbf{a}^\prime}
\mbox{${\mathcal E}$}\left(\sfrac{\alpha}{\mu_N}\right) {\rm 1\hspace{-0.90ex}1}_{(\mathbf{a} \in \Names, \mathbf{a}^\prime \in \Names)}
{\rm 1\hspace{-0.90ex}1}_{(d_H(\mathbf{a},\mathbf{a}^\prime) \le M_N)}
\]
where the sum is over unordered pairs $\{\mathbf{a},\mathbf{a}^\prime\}$ in ${\mathbf A}^{L_N}$.
Take expectation to get
\[ {\mathbb E} \, \mathrm{ent}(\GG_N \vert \Names) =
\frac{\binom{N}{2}}{1 - A^{-L_N}}
\sum_{u=1}^{M_N} {L_N \choose u} \left(\frac{A-1}{A}\right)^u \left(\frac{1}{A}\right)^{L_N-u} \mbox{${\mathcal E}$}\left(\frac{\alpha}{\mu_N}\right) .
\]
But from the definition (\ref{Hamm-1}) of $\mu_N$ this simplifies to
$\frac{N}{2} \mu_N \mbox{${\mathcal E}$}(\alpha/\mu_N)$,
and then from (\ref{Bplim})
\[ {\mathbb E} \, \mathrm{ent}(\GG_N \vert \Names) \sim \frac{\alpha N}{2}\ \log \mu_N. \]
Appealing to (\ref{munb}) establishes the entropy rate formula.
\subsection{Non-uniform and uniform random trees}
\label{sec:tree}
{\bf Model.}
Construct a random tree $\mbox{${\mathcal T}$}_N$ on vertices $1,\ldots, N$ as follows.
Take $V_3, V_4, \ldots, V_N$ independent uniform on
$\{1,\ldots,N\}$. Link vertex $2$ to vertex $1$.
For $k = 3,4,\ldots, N$ link vertex $k$ to vertex $\min(k-1, V_k)$.
\smallskip \noindent
{\bf Entropy rate formula.}
$c = 1/2$.
\smallskip \noindent {\bf Proof.\ }
\[ \mathrm{ent}(\mbox{${\mathcal T}$}_N) = \sum_{k=3}^N \mathrm{ent}(W_k) \]
where $W_k = \min(k-1, V_k)$ has entropy
\[ \mathrm{ent}(W_k) = \sfrac{k-2}{N} \log N + \sfrac{N-k+2}{N} \log \sfrac{N}{N-k+2} . \]
The sum of the first terms is $\sim \frac{1}{2} N \log N$, and the sum of the second terms is of smaller order.
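The two sums can be evaluated exactly for moderate $N$ (an illustrative sketch): the first contributes $\sim \frac{1}{2} N \log N$ and the second only $O(N)$, so the ratio approaches $1/2$ from above.

```python
import math

# Exact entropy of T_N summed from the formula for ent(W_k); the
# ratio to N log N approaches 1/2, the second terms contributing O(N).
N = 10**5
total = 0.0
for k in range(3, N + 1):
    m = N - k + 2                       # P(W_k = k-1) = m/N
    total += ((k - 2) / N) * math.log(N) + (m / N) * math.log(N / m)
rate = total / (N * math.log(N))        # close to 1/2
```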
\smallskip \noindent
{\bf Remark.} This tree arose in \cite{MR1085326}, where it was shown (by an indirect argument) that
if one first constructs $\mbox{${\mathcal T}$}_N$, then applies a uniform random permutation to the vertex-labels,
the resulting random tree $\mbox{${\mathcal T}$}^*_N$ is uniform on the set of all labelled trees. Cayley's formula tells us there are $N^{N-2}$ labelled trees, so
$\mathrm{ent}(\mbox{${\mathcal T}$}^*_N) = (N-2) \log N$ and so
$\mbox{${\mathcal T}$}^*_N$ has entropy rate $c = 1$.
\subsection{Conditions for zero entropy rate}
\label{sec:zero_rate}
Here we will give two complementary conditions under which the
entropy rate is zero.
Lemma \ref{L1} concerns the case where we start with deterministic vertex-names, and add random edges which mostly link a vertex to some of the
``closest" vertices, specifically to vertices amongst the
($o(N^\varepsilon)$ for all $\varepsilon > 0$) closest vertices.
Lemma \ref{L2} concerns the case where we start with a deterministic graph
on unlabelled vertices, and add random vertex-labels such that vertices linked by an edge mostly have names that differ in only $o(\log N)$ places.
Note that these lemmas may then be applied conditionally.
That is, if we start with a random unordered set of names, and then
(conditional on the set of names) add random edges in a way satisfying the
assumptions of Lemma \ref{L1},
then the entropy rate of the resulting $\GG_N$ will equal the entropy rate of the
original random unordered set of names.
Similarly, if we start with a random graph
on unlabelled vertices, then (conditional on the graph) add random names
in a way satisfying the
assumptions of Lemma \ref{L2},
then the entropy rate of the resulting $\GG_N$ will equal the entropy rate of the original random unlabelled graph.
\begin{Lemma}\label{L1}
For each $N$, suppose we take $N$ vertices with deterministic names
(w.l.o.g. $1 \le i \le N$ written as binary strings, to fit our set-up) and suppose for each $i$ we are given an ordering
$j(i,1), j(i,2), \ldots, j(i,N-1)$ of the other vertices.
Say that an edge $(i, j = j(i,\ell))$ with $i<j$ has length $\ell$.
Consider a sequence of random graphs $\GG_N$ whose distribution is arbitrary subject to \\
(i) The number $E_N$ of edges satisfies
$\Pr(E_N>N\beta)=\textrm{o}(N^{-1} \log N)$ for some constant $\beta < \infty$;\\
(ii) For some $M_N$ such that
$\log(M_N)=\textrm{o}(\log N)$, the r.v. \\
\hspace*{0.5in} $X_N := \mbox{
number of edges with length greater than $M_N$} $ \\
satisfies $\Pr( X_N > N\delta)= \textrm{o}(1)$ for all $\delta>0$.\\
Then the entropy rate is $ c = 0$.
\end{Lemma}
\smallskip \noindent
{\bf Remark.} The lemma applies to the $\gamma>2$ case of the ``small worlds" model
in Section~\ref{sec:SW}. Take the ordering induced by the
natural distance between vertices. In this case, $E_N$ is a sum of independent indicators
with ${\mathbb E} E_N \sim c_0 N$ for some constant $c_0$. Standard concentration results (e.g.~\cite{MR2248695} Theorem 2.15)
imply (i) for any $\beta>c_0$, and (ii) follows since for any sequence
$M_N\to\infty$ we have ${\mathbb E} X_N= \textrm{O}(N M_N^{1-\gamma/2})=\textrm{o}(N)$.
\smallskip \noindent {\bf Proof.\ }
We first show that the result holds with (i) replaced by \\
(i$'$) $\Pr(E_N>N\beta)=0$,\\
and then use this modified statement to prove the lemma.
Assume now that $\GG_N$ satisfies (i$'$) and (ii) and write
$\GG_N$, considered as an edge-set, as a disjoint union
$\GG_N^\prime \cup \GG_N^{\prime \prime}$,
where $\GG_N^\prime$ consists of the edges of length $\le M_N$.
Because $\GG_N^\prime$ contains at most $\beta N$ edges out of a set of
at most $NM_N$ edges,
\begin{eqnarray*}
\mathrm{ent} (\GG_N^\prime) &\le& \log(\beta N) + \log {NM_N \choose \beta N} \\
&=& o(N \log N) \mbox{ by } (\ref{mK}).
\end{eqnarray*}
Now fix $\delta > 0$ and condition on whether the number $X_N$ of edges of $\GG_N^{\prime \prime}$ is bigger or smaller than $\delta N$.
Using (\ref{H-mixing}) we get
\[
\mathrm{ent}(\GG_N^{\prime \prime})
\le \log 2 + \log G[N,\delta N] + \Pr(X_N > \delta N) \log G[N,\beta N] . \]
Now $\Pr(X_N > \delta N) \to 0$ by assumption (ii), and then using (\ref{GMN}) we get
\[ \mathrm{ent}(\GG_N^{\prime \prime}) \le (\delta + o(1)) N \log N. \]
Because $\delta > 0 $ is arbitrary we conclude
\[ \mathrm{ent}(\GG_N) \le \mathrm{ent} (\GG_N^\prime) + \mathrm{ent}(\GG_N^{\prime \prime})
= o(N \log N) . \]
Now assume that $\GG_N$ satisfies the weaker hypotheses (i) and (ii).
Defining $\widehat{\GG_N}$ to have the conditional distribution of
$\GG_N $ given $E_N\leq \beta N$,
it is clear that
$\widehat{\GG_N}$ satisfies (i$'$).
We will show that it also satisfies (ii), implying (by the previous result) that the entropy rate of $\widehat{\GG}_N$ is zero.
Let $\delta>0$. Conditioning on the event ($A_N$, say) that $E_N\leq \beta N$,
\baln{
\Pr(X_N>\delta N) = \Pr( X_N>\delta N| A_N)\Pr(A_N) + \Pr(X_N>\delta N|A_N^c)\Pr(A_N^c). \label{t5}
}
By (ii), the left-hand side of \eqref{t5} is $\textrm{o}(1)$, and
by (i), $\Pr(A_N^c)=\textrm{o}(1)$, so also $\Pr(A_N) \to 1$. Thus
$\Pr( X_N>\delta N| A_N)$ must be $\textrm{o}(1)$, as desired.
To complete the proof, use \eqref{H-mixing} to write
\bal{
\mathrm{ent}(\GG_N)&\leq \mbox{${\mathcal E}$}(\Pr(A_N))+ \mathrm{ent}(\GG_N|A_N)\Pr(A_N)+\mathrm{ent}(\GG_N|A_N^c)\Pr(A_N^c) \\
&\leq \log 2+\mathrm{ent}(\widehat{\GG}_N)+ \Pr(A_N^c) \binom{N}{2}\log 2 .
}
The entropy rate of $\widehat{\GG}_N$ is zero, and assumption (i) is
exactly that $\Pr(A_N^c)=\textrm{o}(N^{-1} \log N)$, so $\mathrm{ent}(\GG_N) =
\textrm{o}(N\log N)$, as desired. \ \ \rule{1ex}{1ex}
\begin{Lemma}
\label{L2}
Take a deterministic graph on $N$ unlabelled vertices, and let $c_N$
denote the number of components and $e_N$ the number of edges. Construct $\GG_N$ by assigning random distinct vertex-names $\mathbf{a}(v)$
of length $\textrm{O}(\log N)$ to vertices $v$, their distribution being arbitrary subject to
\[ \sum_{\textrm{edges } (\mathbf{a}, \mathbf{a}^\prime)} d_H(\mathbf{a}, \mathbf{a}^\prime) = \textrm{o}(N \log N)
\mbox{ in probability}. \]
If $e_N = O(N)$ and $c_N = o(N)$ then $\GG_N$ has
entropy rate zero.
\end{Lemma}
\smallskip \noindent {\bf Proof.\ }
By a straightforward truncation argument
we may assume there is a deterministic bound
\[ \sum_{\textrm{edges } (\mathbf{a}, \mathbf{a}^\prime)} d_H(\mathbf{a}, \mathbf{a}^\prime)
\le s_N = \textrm{o}(N \log N) .\]
The name-lengths are $\le \beta \log N$ for some $\beta$.
Consider first the case where there is a single component.
Take an arbitrary spanning tree with arbitrary root, and write the edges of the tree in breadth-first order as $e_1,\ldots, e_{N-1}$.
We can specify $\GG_N$ by specifying first
the name of the root; then
for each edge $e_i = (v,v^\prime)$ directed away from the root,
specify the coordinates where $\mathbf{a}(v^\prime)$ differs from $\mathbf{a}(v)$
and specify the values of $\mathbf{a}(v^\prime)$ at those coordinates.
Write $\SS$ for the random set of all these differing coordinates.
Conditional on $\SS = S$ the entropy of $\GG_N$ is at most
$(|S| + \beta \log N ) \log A$, where the $\beta \log N$ term
arises from the root name.
So using \eqref{H-mixing}
\[ \mathrm{ent}(\GG_N) \le \mathrm{ent}(\SS) + (s_N + \beta \log N) \log A . \]
With $c_N$ components the same argument shows
\[ \mathrm{ent}(\GG_N) \le \mathrm{ent}(\SS) + (s_N + c_N \beta \log N) \log A . \]
The second term is $\textrm{o}(N \log N)$ by assumption, and
\[ \mathrm{ent}(\SS) \le \log \left (\sum_{i \le s_N}
\binom{\beta N \log N}{i} \ \right) = \textrm{o}(N \log N) , \]
the final relation by e.g.\ the $p = 1/2$ case of (\ref{BinLD}). \ \ \rule{1ex}{1ex}
\subsection{Summary}
The reader will recognize the models in this section as standard random graph models,
adapted to our setting in one of several ways.
One can take a model of dynamic growth, adding one vertex at a time, and then assign the $k$'th vertex a name, e.g.\ the
``default binary" or ``random A-ary" used in the Erd\H{o}s-R\'enyi\ models.
Alternatively, as in the ``Hamming distance" model, one can
start with $N$ vertices with assigned names and then add edges according to some probabilistic rule involving the names of end-vertices.
Roughly speaking, for any existing random graph model where one can calculate anything, one can calculate the entropy rate for such adapted models.
But this is an activity perhaps best left for future Ph.D. theses.
We are more interested in models where the graph structure and the name structure each simultaneously influence the other, rather than starting by specifying one structure
and having that influence the other.
It is not so easy to devise such tractable models, but the next section
presents our attempt.
\section{A hybrid model}
\label{sec:hard}
In this section we study a model for which calculation of the entropy rate is less straightforward.
It incidentally reveals a connection between our setting and the more familiar setting of ``graph entropy".
\subsection{The model}
\label{sec:hybrid_model}
In outline, the graph structure is again sparse Erd\H{o}s-R\'enyi\
$\GG(N,\alpha/N)$, but we construct it inductively over vertices, and make the vertex-names copy parts of the names of previous vertices that the current vertex is linked to.
Here are the details.
\smallskip
\noindent
{\bf Model: Erd\H{o}s-R\'enyi\ with hybrid names.} Take $L_N \sim \beta \log_A N$ for $1<\beta<\infty$.
Vertex $1$ is given a uniform random length-$L_N$ $A$-ary name.
For $1 \le n \le N-1$:
\begin{quote}
vertex $n+1$ is given an edge to each vertex $ i \le n$ independently with probability $\alpha/N$. Write $Q_n \ge 0$ for the number of such edges,
and $\mathbf{a}^1, \ldots, \mathbf{a}^{Q_n}$ for the names of the linked vertices.
Take an independent uniform random length-$L_N$ $A$-ary string $\mathbf{a}^0$.
Assign to vertex $n+1$ the name obtained by, independently for each
coordinate $1 \le u \le L_N$, making a uniform random choice from the
$Q_n+1$ letters
$a^0_u, a^1_u, \ldots,a^{Q_n}_u$.
\end{quote}
See Figure 1.
This model gives a family $(\GG_N)$ parametrized by $(A,\beta,\alpha)$.
Note that this scheme for defining ``hybrid" names could be used
with any sequential construction of a random graph, for instance
preferential attachment models.
\setlength{\unitlength}{0.33in}
\begin{picture}(10,4)(-3,0)
\put(0,1){\circle*{0.2}}
\put(2,3){\circle*{0.2}}
\put(5,2){\circle*{0.2}}
\put(-0.6,1.26){d\underline{af}cbb}
\put(1.4,3.26){bfc\underline{ca}d}
\put(4.6,2.26){bafcac}
\put(0,1){\vector(5,1){4.7}}
\put(2,3){\vector(3,-1){2.7}}
\end{picture}
{\bf Figure 1.}
{\small Schematic for the hybrid model.
A vertex (right) arrives with some ``original name"
$\underline{b}bdab\underline{c}$ and is attached to two previous vertices with names
$dafcbb$ and $bfccad$.
The name given to the new vertex is obtained by copying for each
position the letter in that position in a uniform random choice from the three names. Choosing the underlined letters gives the name shown in the figure.}
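To make the description concrete, here is a short Python sketch of the construction (the function name and default parameter values are our own illustrative choices, not part of the model's specification).

```python
import math
import random

def hybrid_model(N, A=4, beta=2.0, alpha=1.0, rng=None):
    # Erdos-Renyi with hybrid names: vertex n+1 links to each earlier
    # vertex independently w.p. alpha/N, then each letter of its name is
    # a uniform choice among the corresponding letters of its own fresh
    # uniform "original" name and the names of the linked vertices.
    rng = rng or random.Random()
    L = max(1, round(beta * math.log(N) / math.log(A)))
    names, edges = [], []
    for n in range(N):
        linked = [i for i in range(n) if rng.random() < alpha / N]
        edges.extend((n, i) for i in linked)
        original = [rng.randrange(A) for _ in range(L)]
        name = [rng.choice([original[u]] + [names[i][u] for i in linked])
                for u in range(L)]
        names.append(name)
    return names, edges
```

Keeping the construction order $n$ alongside each name gives the ordered model $\GG^{ord}_N$ studied next; discarding it gives the unordered model.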
\subsection{The ordered case}
This model illustrates a distinction mentioned in
Section~\ref{sec:SpERb}.
In the construction above, the $n$th vertex is assigned a name, say $\mathbf{a}^n$, during the construction, but in the final graph $\GG_N$ we do not see the value of $n$ for a vertex.
The ``ordered" model $(\GG^{ord}_N)$ in which we do see the value of $n$ for each vertex, by making the name be
$(n,\mathbf{a}^n)$, is a different model whose analysis is conceptually more straightforward, so we will start with that model.
We return to the unordered model in Section~\ref{sec-unordered}.
\smallskip \noindent
{\bf Entropy rate formula} for $(\GG^{ord}_N)$.
\begin{equation}
\frac{\alpha}{2} + \beta \sum_{k \ge 0} \frac{\alpha^k J_k(\alpha) h_A(k) }{k! \log A }
\label{hybrid-rate}
\end{equation}
where
\[ J_k(\alpha) := \int_0^1 x^k e^{-\alpha x} dx \]
and the constants $h_A(k)$ are defined at (\ref{hA-def}).
Write $\GG_{N,n}$ for the partial graph obtained after vertex $n$ has been assigned its edges to previous vertices and then its name.
We will show that, for deterministic $e_{N,n}$ defined at (\ref{eNn}) below,
as $N \to \infty$
the entropies of the conditional distributions satisfy
\begin{equation}
\max_{1 \le n \le N-1}
{\mathbb E} \left| \mathrm{ent}(\GG_{N,n+1} \vert \GG_{N,n}) - e_{N,n} \right|
= \textrm{o}(\log N) .
\label{eNn1}
\end{equation}
By the chain rule (\ref{H-condit}) this immediately implies
\[ \mathrm{ent}(\GG_N^{ord}) - \sum_{n=1}^{N-1} e_{N,n} = \textrm{o}(N \log N) \]
which will establish the entropy rate formula.
The key ingredient is the following technical lemma;
note that the measures $\mu^{\textbf{i}}$ below depend on the
realization of $\GG^{ord}_N$ and are therefore random quantities.
Write ``ave" for average, and write
\[ ||\Theta||\ta{k} : = \sfrac{1}{2} \sum_{\mathbf{a} \in {\mathbf A}^k}
\left| \Theta(\mathbf{a}) - A^{-k} \right| \]
for the variation distance between a probability distribution $\Theta$
on ${\mathbf A}^k$ and the uniform distribution.
\begin{Lemma}
\label{L3a}
Write $(n,\mathbf{a}^n), \ 1 \le n \le N$ for the vertex-names of $\GG^{ord}_N$.
For each $k\geq 1$ and $\textbf{i}:=(i_1,\ldots,i_k)$ with $1 \le i_1<\cdots < i_k \le N$,
write $\mu^{\textbf{i}}$
for the empirical distribution of
$(a^{i_1}_u,\ldots, a^{i_k}_u), 1 \le u \le L_N$.
That is, the probability distribution on $ {\mathbf A}^k$
\[ \mu^{\textbf{i}}(x_1,\ldots, x_k) := L_N^{-1} \sum_{u=1}^{L_N}
{\rm 1\hspace{-0.90ex}1}_{\left(a^{i_1}_u = x_1,\ldots, a^{i_k}_u = x_k\right)}. \]
Then
\baln{
\Delta\ta{k}_N :=
\max_{2 \le n \le N}
{\mathbb E} ||\underset{1 \le i_1<\cdots < i_k \le n}{\mathrm{ave}} \ \mu^{\textbf{i}} ||\ta{k}\leq C \left(\frac{A^{k/2}}{\sqrt{\log N}}+\frac{k^2}{N}\right)\label{089}
}
for a constant $C$ not depending on $k, N$.
\end{Lemma}
We defer the proof to Section \ref{sec:p3}.
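Lemma \ref{L3a} can also be illustrated by simulation: in the hybrid model, the averaged pair-empirical distribution should be close to uniform on ${\mathbf A}^2$. The Python sketch below (helper names ours; pairs are sampled rather than exhaustively averaged) is a check of the $k=2$ case only, not a proof.

```python
import math
import random
from collections import Counter

def hybrid_names(N, A, beta, alpha, rng):
    # names only, from the ordered hybrid model of this section
    L = max(1, round(beta * math.log(N) / math.log(A)))
    names = []
    for n in range(N):
        linked = [i for i in range(n) if rng.random() < alpha / N]
        names.append([rng.choice([rng.randrange(A)] +
                                 [names[i][u] for i in linked])
                      for u in range(L)])
    return names, L

def delta2(names, L, A, rng, npairs=300):
    # variation distance between the pair-averaged empirical letter
    # distribution mu^(i,j) and the uniform distribution on A x A
    counts = Counter()
    for _ in range(npairs):
        i, j = rng.sample(range(len(names)), 2)
        for u in range(L):
            counts[(names[i][u], names[j][u])] += 1
    tot = npairs * L
    return 0.5 * sum(abs(counts[(a, b)] / tot - A ** -2)
                     for a in range(A) for b in range(A))
```

For moderate $N$ the observed distance is small, consistent with the $1/\sqrt{\log N}$ decay in the lemma (though of course this convergence is too slow to verify numerically).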
Fix $N$ and $n$, and consider
$\mathrm{ent}(\GG_{N,n+1} \vert \GG_{N,n})$,
the entropy of the conditional distribution.
Conditioning on the
edges of vertex $n+1$ in $\GG_{N,n+1}$, and using the chain rule~\eqref{H-condit}, we find
\baln{
&\mathrm{ent}(\GG_{N,n+1} \vert \GG_{N,n})=n \mbox{${\mathcal E}$}(\alpha/N) \notag \\
& \quad +\hspace{-5mm}\mathop{\sum_{k=0,\ldots, n}}_{1\leq i_1<\cdots<i_k\leq n}\hspace{-5mm}\left(\frac{\alpha}{N}\right)^k\left(1-\frac{\alpha}{N}\right)^{n-k}
\mathrm{ent}(\mathbf{a}^{n+1}\vert \GG_{N,n}, n+1\rightarrow \{i_1,\ldots,i_k\}), \label{155}
}
where $n+1\rightarrow\{i_1,\ldots,i_k\}$ denotes the event that vertex $n+1$ connects
to vertices $i_1,\ldots,i_k$ and no others.
The contribution to the entropy from the choice of edges is
$n \mbox{${\mathcal E}$}(\alpha/N)$, which as in previous models contributes
(after summing over $n$) the first term
$\alpha/2$ of the entropy rate formula, so in the following we need
consider only the contribution from names, that is the sum in~\eqref{155}.
Consider the contribution to the sum~\eqref{155} from $k=2$,
that is on the event
$\{ Q_n = 2\}$ that vertex
$n+1$ links to exactly two previous vertices.
Conditional on these being a particular pair $1 \le i < j \le n$,
with names $\mathbf{a}^i, \mathbf{a}^j$, the contribution to entropy is exactly
\[ \mathrm{ent}(\mathbf{a}^{n+1}\vert \GG_{N,n}, n+1\rightarrow \{i,j\})=L_N \sum_{(a,a^\prime) \in {\mathbf A} \times {\mathbf A}}
g_2(a,a^\prime) \ \mu^{(i,j)}(a,a^\prime) \]
where
\begin{eqnarray*}
g_2(a,a^\prime) &=& \mbox{${\mathcal E}$}_A(\sfrac{A+1}{3A}, \sfrac{A+1}{3A},
\sfrac{1}{3A}, \sfrac{1}{3A}, \ldots \ldots \sfrac{1}{3A})
\mbox{ if } a^\prime \neq a \\
&=& \mbox{${\mathcal E}$}_A(\sfrac{2A+1}{3A}, \sfrac{1}{3A},
\sfrac{1}{3A}, \sfrac{1}{3A}, \ldots \ldots \sfrac{1}{3A})
\mbox{ if } a^\prime = a
\end{eqnarray*}
and where $\mbox{${\mathcal E}$}_A(\mathbf{p})$ is the entropy of a distribution
$\mathbf{p} = (p_1,\ldots,p_A)$.
Now unconditioning on the pair $(i,j)$, the contribution to
$\mathrm{ent}(\GG_{N,n+1} \vert \GG_{N,n})$ from the event $\{ Q_n = 2\}$, that is
the $k=2$ term of the sum~\eqref{155}, equals
\[ L_N \sum_{1 \le i < j \le n} \frac{\alpha^2}{N^2}
\ \left(1 - \sfrac{\alpha}{N}\right)^{n-2}
\quad
\sum_{(a,a^\prime) \in {\mathbf A} \times {\mathbf A}}
g_2(a,a^\prime) \ \mu^{(i,j)}(a,a^\prime) \]
\begin{equation}
= \frac{L_N \alpha^2 \binom{n}{2}}{N^2}
\ \left(1 - \sfrac{\alpha}{N}\right)^{n-2}
\sum_{(a,a^\prime) \in {\mathbf A} \times {\mathbf A}}
g_2(a,a^\prime) \ \underset{1 \le i < j \le n}{\mathrm{ave}} \ \mu^{(i,j)}(a,a^\prime) .
\label{LNa-sum}
\end{equation}
Lemma~\ref{L3a} now tells us that the sum in (\ref{LNa-sum}) differs from
\[ h_A(2) := A^{-2} \sum_{(a,a^\prime) \in {\mathbf A} \times {\mathbf A}}
g_2(a,a^\prime) \]
by at most
$2g_2^* \Delta_N\ta{2} $
where $g_2^*\leq\log A$ is the maximum possible value of $g_2(\cdot, \cdot)$ and $\Delta_N\ta{2} $ is as defined in Lemma~\ref{L3a}.
So to first order as $N \to \infty$, the
quantity (\ref{LNa-sum}) is
\[ e_{N,n,2} := \beta \log_A N \times \alpha^2 h_A(2) \sfrac{\binom{n}{2}}{N^2}
\ \exp(-\alpha n/N),\]
with an error bounded by
\bal{
(2\log A) \, L_N\binom{n}{2}\left(\frac{\alpha}{N}\right)^2\left(1-\frac{\alpha}{N}\right)^{n-2}\Delta_N\ta{2}.
}
A similar argument applies to
the terms in the sum~\eqref{155} for a general number $k$ of links.
In brief, we define
\[ e_{N,n,k} := \beta \log_A N \times \alpha^k h_A(k) \sfrac{\binom{n}{k}}{N^k}
\ \exp(-\alpha n/N) \]
where
\begin{equation}
h_A(k) := A^{-k} \sum_{(a_1,\ldots,a_k) \in {\mathbf A}^k}
\mathrm{ent}(\mathbf{p}^{[a_1,\ldots,a_k]}),
\label{hA-def}
\end{equation}
and where $\mathbf{p}^{[a_1,\ldots,a_k]}$ is the probability distribution $\mathbf{p}$ on ${\mathbf A}$
defined by
\[ p^{[a_1,\ldots,a_k]}(a) = \frac{1 + A\times |\{i: a_i = a\}|}{(1+k)A} . \]
Also for $k = 0$ we set $h_A(0) = \log A$, the entropy of the uniform distribution on~${\mathbf A}$.
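The constants $h_A(k)$ and the rate (\ref{hybrid-rate}) are easy to evaluate numerically. The Python sketch below (helper names ours) enumerates ${\mathbf A}^k$ directly and computes $J_k(\alpha)$ via the integration-by-parts recursion $J_0(\alpha)=(1-e^{-\alpha})/\alpha$, $J_k(\alpha)=(k J_{k-1}(\alpha)-e^{-\alpha})/\alpha$; truncating the sum at a modest kmax is harmless because the terms decay factorially.

```python
import math
from collections import Counter
from itertools import product

def h(A, k):
    # h_A(k) = A^{-k} * sum over k-tuples of ent(p^{[a_1..a_k]}),
    # where p(a) = (1 + A*#{i: a_i = a}) / ((1+k)A); h_A(0) = log A
    if k == 0:
        return math.log(A)
    total = 0.0
    for tup in product(range(A), repeat=k):
        cnt = Counter(tup)
        for a in range(A):
            p = (1 + A * cnt[a]) / ((1 + k) * A)
            total -= p * math.log(p)
    return total / A ** k

def hybrid_rate(A, beta, alpha, kmax=10):
    # alpha/2 + beta * sum_k alpha^k J_k(alpha) h_A(k) / (k! log A)
    J = (1 - math.exp(-alpha)) / alpha  # J_0(alpha)
    rate = alpha / 2 + beta * J * h(A, 0) / math.log(A)
    for k in range(1, kmax + 1):
        J = (k * J - math.exp(-alpha)) / alpha  # integration by parts
        rate += beta * alpha ** k * J * h(A, k) / (math.factorial(k) * math.log(A))
    return rate
```

Since $h_A(k) \le \log A$ and $\sum_k \alpha^k J_k(\alpha)/k! = 1$, the rate always lies between $\alpha/2$ and $\alpha/2 + \beta$.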
Repeating the argument from the case $k=2$,
we find that~\eqref{155} is, to first order, $\sum_{k\geq0} e_{N,n,k}$, with error of order
\baln{
L_N\sum_{k=0}^n \binom{n}{k}\left(\frac{\alpha}{N}\right)^k\left(1-\frac{\alpha}{N}\right)^{n-k}\Delta_N\ta{k}. \label{156}
}
Applying
Lemma~\ref{L3a} to bound $\Delta_N\ta{k}$
and then using simple properties of the binomial
distribution yields that~\eqref{156} is $\textrm{o}(\log N)$.
So we are now in the setting of~(\ref{eNn1}) with
\begin{equation}
e_{N,n} = \sum_{k \ge 0} e_{N,n,k} .
\label{eNn}
\end{equation}
Because
\[ \sum_{n=1}^{N-1} \sfrac{\binom{n}{k}}{N^k}
\ \exp(-\alpha n/N)
\sim \sfrac{N}{k!} J_k(\alpha)\]
calculating $\sum_{n=1}^{N-1} e_{N,n}$ gives the stated entropy rate formula.
\subsection{Proof of Lemma \ref{L3a}}
\label{sec:p3}
Fix $N$.
Recall that the construction of $\GG^{ord}_N$ involves an ``original name process" -- letters
of the name of vertex $n$ may be copied from previous names or may come from an ``original name", independent uniform
for different $n$.
Consider a single coordinate, w.l.o.g. coordinate $1$, of the
vertex-names of $\GG^{ord}_N$.
For each vertex $n$ this is either from the original name of $n$ or a copy of some
previous vertex-name, so inductively the letter at vertex $n$ is a copy of the letter
originating at some vertex $1 \le C^N_1(n) \le n$;
and similarly the letter at general coordinate $u$ is a copy from some vertex
$C^N_u(n)$.
Because the copying process is independent of the name origination process,
it is clear that the (unconditional) distribution of each name $\mathbf{a}^n$ is
uniform on length-$L_N$ words.
Moreover it is clear that, for $1 \le i < j \le N$,
\begin{equation}\label{CNu}
\begin{split}
&\mbox{ the two names
$\mathbf{a}^i$ and $\mathbf{a}^j$ are independent uniform} \\
&\hspace{1cm}\mbox{on the event }
\{C^N_u(i) \ne C^N_u(j) \ \forall u\}.
\end{split}
\end{equation}
The proof of Lemma~\ref{L3a} rests upon the
following lemma, whose proof we defer
to the end of Section~\ref{sec:sparseER}.
\begin{Lemma}
\label{L4}
For $(I,J)$ uniform on $\{1 \le i < j \le n\}$, write
$\theta_{N,n} = \Pr (C^N_1(I) = C^N_1(J))$. Then
\[ \max_{2 \le n \le N} \theta_{N,n} = \textrm{O}(1/N)
\mbox{ as } N \to \infty .\]
\end{Lemma}
We first use this lemma to prove Lemma~\ref{L3a} in the case where $k=2$.
For $(I, J)$ as in Lemma~\ref{L4},
\begin{eqnarray}
\Delta_N\ta{2} &=&
\sfrac{1}{2}\max_{2 \le n \le N} {\mathbb E} \sum_{\textbf{x}\in \textbf{A}^2}
\left|
{\mathbb E} \big( L_N^{-1}\sum_{u=1}^{L_N} {\rm 1\hspace{-0.90ex}1}_{(a^{I}_u = x_1, a^{J}_u = x_2)} | \GG_{N,n} \big)
-A^{-2} \right| \nonumber\\
&\leq& \sfrac{1}{2}\max_{2 \le n \le N} \sum_{\textbf{x}\in \textbf{A}^2}
{\mathbb E} \left|L_N^{-1}\sum_{u=1}^{L_N} {\rm 1\hspace{-0.90ex}1}_{(a^{I}_u = x_1, a^{J}_u = x_2)}-A^{-2}\right|. \label{166}
\end{eqnarray}
By Lemma~\ref{L4} and~\eqref{CNu}, the two names $\mathbf{a}^I, \mathbf{a}^J$
are independent uniform on ${\mathbf A}^{L_N}$ outside an event
of probability $\textrm{O}(1/N)$. Under this event, we bound
the total variation distance appearing in $\Delta_N\ta{2}$ by 1, leading to the
second summand in the bound~\eqref{089}.
If the two names are independent, then
because the sum below has Binomial$(L_N,A^{-2})$ distribution with variance $< L_NA^{-2}$,
\begin{equation}
{\mathbb E} \left|
L_N^{-1} \sum_{u=1}^{L_N} {\rm 1\hspace{-0.90ex}1}_{(a^{I}_u = x_1, a^{J}_u = x_2)} - A^{-2} \right| \leq L_N^{-1/2} A^{-1},
\label{LN1}
\end{equation}
which contributes the first summand in the bound~\eqref{089}.
The proof of Lemma~\ref{L3a} for general $k$ is similar.
Taking $I_1,\ldots, I_k$ independent
and uniform on the set
$\{1\leq i_1 <\cdots<i_k\leq n\}$, we have the analog of (\ref{166}):
\bal{
\Delta_N\ta{k}\leq \sfrac{1}{2}\max_{2 \le n \le N} \sum_{\textbf{x}\in \textbf{A}^k}
{\mathbb E} \left|L_N^{-1}\sum_{u=1}^{L_N} {\rm 1\hspace{-0.90ex}1}_{(a^{I_1}_u = x_1,\ldots, a^{I_k}_u = x_k)}-A^{-k}\right|.
}
The names $\mathbf{a}^{I_1},\ldots, \mathbf{a}^{I_k}$ are independent outside of the ``bad" event that
some pair among the $k$ random vertices
has the same $C^N_1(\cdot)$ value. But the probability of this bad event
is bounded by $\binom{k}{2}$ times the chance for a given pair,
which, after applying Lemma~\ref{L4}, leads to the second summand of the bound~\eqref{089}.
And the upper bound for the term analogous to~\eqref{LN1}
becomes $L_N^{-1/2} A^{-k/2}$.
\ \ \rule{1ex}{1ex}
\subsection{Structure of the directed sparse Erd\H{o}s-R\'enyi\ graph}
\label{sec:sparseER}
In order to prove Lemma~\ref{L4} and later results,
we study the original name variables $C_u^N(i)$ defined at~\eqref{CNu}.
It will first help to collect some facts about the structure of a directed sparse Erd\H{o}s-R\'enyi\ random graph.
Write (omitting the dependence on~$N$)
\[ \mbox{${\mathcal T}$}_n = \{1 \le j \le n: \ \exists g \ge 0 \mbox{ and a path }
n = v_0 > v_1 > \ldots > v_g = j \mbox{ in } \GG_N \} . \]
We visualize $\mbox{${\mathcal T}$}_n$ as the vertices of the tree of descendants of $n$
although it may not be a tree. The next result collects two facts about the
structure of $\mbox{${\mathcal T}$}_n$ including that for large $N$ it is a tree with high probability.
\begin{Lemma}\label{LTn}
For $m<n$ and $\mbox{${\mathcal T}$}_n$ as above,\\
(a) $\Pr( \mbox{${\mathcal T}$}_n \cap \mbox{${\mathcal T}$}_m \ne \emptyset) \le \frac{(\alpha e^\alpha)^2
+ \alpha e^\alpha}{N} $. \\
(b) $\Pr( \mbox{${\mathcal T}$}_n \mbox{ is not a tree }) \le \frac{ (\alpha e^\alpha)^3}{2N}$.
\end{Lemma}
{\bf Proof.\ }
First note that
for $1 \le j < n \le N$ the mean number of decreasing paths from
$n$ to $j$ of length $g \ge 1$ equals
$\binom{n-j-1}{g-1} (\alpha/N)^g$.
Because $n-j-1 \le N$, this is bounded by
$\frac{\alpha}{N} \ \frac{\alpha^{g-1}}{(g-1)!}$,
and summing over $g$ gives
\begin{equation}
\Pr(j \in \mbox{${\mathcal T}$}_n) \le
{\mathbb E} (\mbox{number of decreasing paths from $n$ to $j$}) \le
\alpha e^{\alpha}/N .
\label{Edec}
\end{equation}
We break the event $\mbox{${\mathcal T}$}_n \cap \mbox{${\mathcal T}$}_m\ne \emptyset$
into a disjoint union according to the largest element in the intersection:
$\max \mbox{${\mathcal T}$}_n\cap \mbox{${\mathcal T}$}_m=j$ for $j=1,\ldots,m$.
Now note for $j\leq m-1$, we can write
\baln{
\Pr (\max \mbox{${\mathcal T}$}_n\cap \mbox{${\mathcal T}$}_m=j)\leq {\mathbb E} \sum_{x_n^j,y_m^j}
{\rm 1\hspace{-0.90ex}1}_{(x_n^j \mbox{ is path in } \GG_N)}
{\rm 1\hspace{-0.90ex}1}_{(y_m^j \mbox{ is path in } \GG_N)} ,
\label{11}
}
where the sum is over edge-disjoint decreasing paths $x_n^j$ from $n$ to $j$
and $y_m^j$ from $m$ to $j$. Since the paths are edge-disjoint,
the indicators appearing in the sum~\eqref{11} are independent and so we find
\bal{
\Pr (\max \mbox{${\mathcal T}$}_n\cap \mbox{${\mathcal T}$}_m=j)&\leq \sum_{x_n^j,y_m^j}
\Pr(x_n^j \mbox{ is path in } \GG_N)
\Pr(y_m^j \mbox{ is path in } \GG_N) \\
&\leq \sum_{x_n^j}\Pr(x_n^j \mbox{ is path in } \GG_N) \sum_{y_m^j}\Pr(y_m^j \mbox{ is path in } \GG_N) \\
&\leq (\alpha e^{\alpha}/N)^2;
}
where the sums
in the second line are over all paths from $n$ (respectively $m$) to $j$, and the final
inequality follows from~\eqref{Edec}.
Now part~(a) of the lemma follows by summing over $j<m$ and adding the
corresponding bound~\eqref{Edec} for the case $j=m$.
Part (b) is proved in a similar fashion.
If $\mbox{${\mathcal T}$}_n$ is not a tree then for some $j_2 \in \mbox{${\mathcal T}$}_n$ and some
$j_1 < j_2$ there are two edge-disjoint paths from $j_2$ to $j_1$.
For a given pair $(j_2,j_1)$ the mean number of such path-pairs is bounded by
$\Pr (j_2 \in \mbox{${\mathcal T}$}_n) \times (\alpha e^{\alpha}/N)^2$.
By (\ref{Edec}) this is bounded by $(\alpha e^{\alpha}/N)^3$, and summing over pairs $(j_2,j_1)$ gives the stated bound.
\ \ \rule{1ex}{1ex}
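The objects $\mbox{${\mathcal T}$}_n$ are simple to simulate, which gives a sanity check on Lemma \ref{LTn}. In the Python sketch below (helper names ours), the traversal meets an already-visited vertex precisely when some vertex is reachable from $n$ along two distinct decreasing paths, that is, when $\mbox{${\mathcal T}$}_n$ is not a tree.

```python
import random

def directed_sparse_er(N, alpha, rng):
    # directed edges (n, i), i < n, each present independently w.p. alpha/N
    return {n: [i for i in range(1, n) if rng.random() < alpha / N]
            for n in range(1, N + 1)}

def descendants(n, children):
    # T_n = vertices reachable from n by decreasing paths; T_n fails to
    # be a tree exactly when an explored edge leads to a vertex already
    # seen, i.e. some vertex is reached along two distinct paths
    seen, stack, is_tree = {n}, [n], True
    while stack:
        v = stack.pop()
        for w in children[v]:
            if w in seen:
                is_tree = False
            else:
                seen.add(w)
                stack.append(w)
    return seen, is_tree
```

For instance, with children $\{1: [\,], 2: [1], 3: [1,2]\}$ the two decreasing paths $3 \to 1$ and $3 \to 2 \to 1$ make $\mbox{${\mathcal T}$}_3$ fail to be a tree.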
\smallskip\noindent
{\bf Remark.} Note that for part (a) of the lemma we could
also appeal to the more sophisticated
inequalities of~\cite{MR799280} concerning disjoint occurrence of events,
which would give the stronger bound $\Pr (\max \mbox{${\mathcal T}$}_n\cap \mbox{${\mathcal T}$}_m=j)\leq\Pr(j \in \mbox{${\mathcal T}$}_m) \times \Pr(j \in \mbox{${\mathcal T}$}_n)$.
\paragraph{Proof of Lemma \ref{L4}.}
Lemma~\ref{L4} follows from Lemma~\ref{LTn}(a)
and the observation that
$\{C_1^N(i)=C_1^N(j)\}\subseteq\{\mbox{${\mathcal T}$}_i\cap\mbox{${\mathcal T}$}_j\ne \emptyset\}$.
\subsection{Making the vertex labels distinct}
\label{sec:unord}
In the ordered model studied above, the vertex-names are $(n,\mathbf{a}^n), 1 \le n \le N$. In order to study the
unordered model described at the start of Section~\ref{sec:hard},
we first must address the fact that
the vertex-names $\mathbf{a}^n, 1 \le n \le N$ may not be distinct.
\begin{Lemma}
\label{L7}
Let $\GG_N$ be random graphs-with-vertex-names, where
(following our standing assumptions)
the names have length $\log N/\log A\leq L_N=O(\log N)$, and suppose that
for some deterministic sequence $k_N=o(N)$, the number of vertices that have non-unique names in $\GG_N$,
say $V_N$, satisfies
$\Pr (V_N\geq k_N)=o(1)$.
Let $\GG^*_N$ be a modification with unique names obtained by re-naming some or all of the non-uniquely-named vertices. Then
$| \mathrm{ent}(\GG^*_N) - \mathrm{ent}(\GG_N)| = o(N \log N)$.
\end{Lemma}
\begin{proof}
The chain rule \eqref{H-condit} implies that
\bal{
\mathrm{ent}(\GG^*_N)& \le \mathrm{ent}(\GG_N)+{\mathbb E} \mathrm{ent}(\GG^*_N | \GG_N),
}
so we want to show that ${\mathbb E} \mathrm{ent}(\GG^*_N | \GG_N)$ is $o(N \log N)$.
Considering the number of ways of relabeling $V_N$ vertices,
\bal{
{\mathbb E} \mathrm{ent}(\GG^*_N | \GG_N)&\leq {\mathbb E} \log \left(V_N!\binom{A^{L_N}}{V_N}\right), \\
&\leq {\mathbb E} (\log A^{V_N L_N}) {\rm 1\hspace{-0.90ex}1}_{(V_N<k_N)} +{\mathbb E}(\log A^{L_N V_N}) {\rm 1\hspace{-0.90ex}1}_{(V_N\geq k_N)}, \\
&\leq \log(A) L_N [k_N + N \Pr(V_N\geq k_N)]=o(N\log N),
}
as desired.
\end{proof}
\noindent{\bf Remark.} The analogous lemma holds if instead we replace the labels of any random subset of vertices of $\GG_N$
to form $\GG_N^*$, provided the subset size satisfies the same assumptions as $V_N$.
\begin{Lemma}\label{LVN}
For $\GG^{ord}_N$ and $\GG_N$,
\baln{\label{evn}
{\mathbb E} |\{n \, : \, \mathbf{a}^n = \mathbf{a}^m \mbox{ for some } m \ne n\}| = o(N).
}
\end{Lemma}
{\bf Proof.\ }
As in Lemma~\ref{L4}, the proof is based on studying
the originating vertex $C_i(n)$ (now dropping the notational dependence on $N$)
of the letter ultimately
copied to coordinate $i$ of vertex $n$ through the ``trees" $\mbox{${\mathcal T}$}_n$.
Given $\mbox{${\mathcal T}$}_n$, the copying mechanism that determines the name $\mathbf{a}^n$
evolves independently for each coordinate, and this implies
the {\em conditional independence property:} for $1 \le m < n \le N$,
the events $\{C_i(n) = C_i(m)\}, 1 \le i \le L_N$ are
conditionally independent given $\mbox{${\mathcal T}$}_m$ and $\mbox{${\mathcal T}$}_n$.
Because
\bal{
&\Pr ( a^n_i = a^m_i \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m) \\
&\qquad=\Pr ( C_i(n) = C_i(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m) +
\sfrac{1}{A} \Pr ( C_i(n) \ne C_i(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m)
}
the conditional independence property implies
\begin{equation}\label{c-i}
\begin{split}
&\Pr ( \mathbf{a}^n = \mathbf{a}^m \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m) \\
&\quad=\left[ \Pr ( C_1(n) = C_1(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m) +
\sfrac{1}{A} \Pr ( C_1(n) \ne C_1(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m)\right]^{L_N}.
\end{split}
\end{equation}
Now we always have $C_1(n) \in \mbox{${\mathcal T}$}_n$ so trivially
\begin{equation}
\Pr ( C_1(n) = C_1(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m) = 0
\mbox{ on } \mbox{${\mathcal T}$}_n \cap \mbox{${\mathcal T}$}_m = \emptyset .
\label{C1C}
\end{equation}
We show below that when the sets do intersect we have
\baln{\label{LC1n}
\Pr ( C_1(n) = C_1(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m) \le \sfrac{1}{2}
\mbox{ on } \{ \mbox{$\mbox{${\mathcal T}$}_n$ and $\mbox{${\mathcal T}$}_m$ are trees} \}.
}
Assuming~\eqref{LC1n}: since
$A \ge 2$ we have $p + (1-p)/A \le 3/4$ for all $p \le 1/2$,
and now combining (\ref{c-i}, \ref{C1C}, \ref{LC1n}) we find
\[
\Pr ( \mathbf{a}^n = \mathbf{a}^m \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m)
\le (\sfrac{3}{4})^{L_N} {\rm 1\hspace{-0.90ex}1}_{( \mbox{${\mathcal T}$}_n \cap \mbox{${\mathcal T}$}_m \ne \emptyset )}
\mbox{ on }
\{ \mbox{$\mbox{${\mathcal T}$}_n$ and $\mbox{${\mathcal T}$}_m$ are trees} \}.
\]
Now take expectation, appeal to part~(a) of Lemma~\ref{LTn}, and sum over $m$ to conclude
\[
\Pr (\mbox{$\mbox{${\mathcal T}$}_n$ is a tree}, \mathbf{a}^n = \mathbf{a}^m \mbox{ for some } m \ne n \mbox{ for which
$\mbox{${\mathcal T}$}_m$ is a tree} ) \]
\[ \le (\sfrac{3}{4})^{L_N} \ ((\alpha e^\alpha)^2
+ \alpha e^\alpha)
\to 0.
\]
Now any $n$ for which the name $\mathbf{a}^n$ is not unique is either in the set
of $n$ defined by the event above, or in one of the two following sets:
\[ \{n: \mbox{$\mbox{${\mathcal T}$}_n$ is not a tree} \} \]
\[ \{n: \mbox{$\mbox{${\mathcal T}$}_n$ is a tree, $ \mathbf{a}^n = \mathbf{a}^m$ for some $m\ne n$ for which $\mbox{${\mathcal T}$}_m$ is not a tree
} \]
\vspace*{-0.25in}
\[ \mbox{ but $\mathbf{a}^n \ne\mathbf{a}^m$ for all $m\ne n$ for which $\mbox{${\mathcal T}$}_m$ is a tree} \}. \]
The cardinality of the final set is at most the cardinality of the previous set,
which by part~(b) of Lemma~\ref{LTn} has expectation
$O(1)$.
Combining these bounds gives (\ref{evn}).
It remains only to prove~\eqref{LC1n}.
For $v \in \mbox{${\mathcal T}$}_n$ write $R_v(n)$ for the event that the path of copying
of coordinate $1$ from $C_1(n)$ to $n$ passes through $v$.
We may assume there is at least one edge from $n$ into $[1,n-1]$
(otherwise we are in the setting of (\ref{C1C})). Since, given $\mbox{${\mathcal T}$}_n$, the chance that
vertex $n$ adopts the label of any given neighbor in $\mbox{${\mathcal T}$}_n$ is bounded by $1/2$, we see
\begin{equation}
\Pr(R_v(n) | \mbox{${\mathcal T}$}_n) \le \sfrac{1}{2}, \quad v \in \mbox{${\mathcal T}$}_n .
\label{Rvn}
\end{equation}
Similarly by (\ref{C1C}) we may assume $ \mbox{${\mathcal T}$}_n \cap \mbox{${\mathcal T}$}_m \ne \emptyset$.
By hypothesis $\mbox{${\mathcal T}$}_n$ and $\mbox{${\mathcal T}$}_m$ are trees, and so there is a subset
$\mbox{${\mathcal M}$} \subseteq \mbox{${\mathcal T}$}_n \cap \mbox{${\mathcal T}$}_m$ of ``first meeting" points $v$ with the property that the path from $v$ to $n$ in $\mbox{${\mathcal T}$}_n$ does not meet the
path from $v$ to $m$ in $\mbox{${\mathcal T}$}_m$ and
\[ \{ C_1(n) = C_1(m)\} = \cup_{v \in \mbox{${\mathcal M}$}} \left[R_v(n) \cap R_v(m) \right] \]
with a disjoint union on the right. So
\begin{equation}
\Pr ( C_1(n) = C_1(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m)
= \sum_{v \in \mbox{${\mathcal M}$}} \Pr(R_v(n) \vert \mbox{${\mathcal T}$}_n) \times \Pr(R_v(m) \vert \mbox{${\mathcal T}$}_m) . \label{CCm}
\end{equation}
Now $v \to \Pr(R_v(n) \vert \mbox{${\mathcal T}$}_n)$ and $v \to \Pr(R_v(m) \vert \mbox{${\mathcal T}$}_m)$
are sub-probability distributions on $\mbox{${\mathcal M}$}$ and the former satisfies (\ref{Rvn}). Hence (\ref{CCm}) implies (\ref{LC1n}).
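Spelled out, (\ref{Rvn}) and the fact that $v \to \Pr(R_v(m) \vert \mbox{${\mathcal M}$}_m)$ restricted to $\mbox{${\mathcal M}$}$ is a sub-probability distribution give
\[ \Pr ( C_1(n) = C_1(m) \vert \mbox{${\mathcal T}$}_n, \mbox{${\mathcal T}$}_m)
\le \sfrac{1}{2} \sum_{v \in \mbox{${\mathcal M}$}} \Pr(R_v(m) \vert \mbox{${\mathcal T}$}_m) \le \sfrac{1}{2} . \]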
\ \ \rule{1ex}{1ex}
\subsection{The unordered model and its entropy rate}
\label{sec-unordered}
The model we introduced as $\GG_N$ in
section \ref{sec:hybrid_model}
does not quite fit our default setting because the vertex-names will
typically not be all distinct.
However, if we take the ordered model $\GG_N^{ord}$ and then
arbitrarily
rename the non-unique names, to obtain a model $\GG_N^{ord*}$ say,
then Lemmas \ref{L7} and \ref{LVN} imply that only a proportion $o(1)$ of vertices
are renamed and the entropy rate is unchanged:
\[ (\GG_N^{ord*}) \mbox{ has entropy rate (\ref{hybrid-rate}) } . \]
Now we can ``ignore the order", that is replace the names
$\{(n,\mathbf{a}^n)\}$ by the now-distinct names $\{\mathbf{a}^n\}$,
to obtain a model $\GG_N^*$, say.
In this section we will obtain the entropy rate formula for
$(\GG_N^*)$ as
\baln{
\mbox{(entropy rate for $\GG^*_N$)}= \mbox{ (entropy rate for $\GG^{ord*}_N$)} - 1. \label{aut}
}
The remainder of this section is devoted to the proof of~\eqref{aut}.
Write $\HH_N$ for the Erd\H{o}s-R\'enyi\ graph arising in the construction of
$\GG^{ord*}_N$; that is, each vertex $n+1$ is linked to each earlier
vertex $i$ with probability $\alpha/N$, and we regard the created edges as directed edges
$(n+1,i)$.
Now delete the vertex-labels; consider the resulting graph $\HH^{unl}_N$
as a random unlabelled directed acyclic graph.
Given a realization of $\HH^{unl}_N$ there is some number
$1 \le M(\HH^{unl}_N) \le N!$ of possible vertex orderings consistent with the edge-directions of the realization.
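For a small directed acyclic graph, $M(\cdot)$ can be computed by brute force. The following sketch (a hypothetical helper, not part of the model) counts orderings under the convention above, where a directed edge $(v,w)$ points from the later vertex $v$ to the earlier vertex $w$:

```python
from itertools import permutations

def num_consistent_orderings(n, edges):
    # Count orderings of vertices 0..n-1 in which, for every directed
    # edge (v, w), the head w appears before the tail v (w is "earlier").
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[w] < pos[v] for (v, w) in edges):
            count += 1
    return count
```

With no edges all $n!$ orderings are consistent, while a directed path forces a single ordering, matching the bounds $1 \le M \le N!$ above.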
\begin{Lemma}\label{L5b}
In the notation above,
\baln{
\mathrm{ent}(\GG^{ord*}_N) = \mathrm{ent}(\GG_N^*) + {\mathbb E} \log M(\HH^{unl}_N). \label{009}
}
\end{Lemma}
\smallskip \noindent {\bf Proof.\ }
According to the chain rule (\ref{H-condit}),
\bal{
\mathrm{ent}(\GG^{ord*}_N)=\mathrm{ent}(\GG_N^*)+{\mathbb E} \mathrm{ent}(\GG^{ord*}_N|\GG_N^*).
}
We only need to show
\bal{
\mathrm{ent}(\GG^{ord*}_N|\GG_N^*)=\log M(\HH^{unl}_N),
}
which follows from two facts: given $\GG_N^*$, all possible vertex orderings
consistent with the edge-directions of $\HH^{unl}_N$ are equally likely
and there are $M(\HH^{unl}_N)$ of these orderings. The latter fact is
obvious from the definition and to see the former,
consider two such orderings; there is a permutation taking one to the other.
Given a realization of $\GG^{ord*}_N$ associated with the realization of
$\HH^{unl}_N$, applying the same permutation gives a different realization of
$\GG^{ord*}_N$ associated with the same realization of
$\HH^{unl}_N$. These two realizations of $\GG^{ord*}_N$ have the same probability, and map to the same element of
$\GG_N^*$, and (here we are using that the second part of the labels are all distinct)
this is the only way that different realizations of $\GG^{ord*}_N$
can map to the same element of $\GG_N^*$.
\ \ \rule{1ex}{1ex}
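The chain rule invoked in the proof can be checked numerically on any small joint distribution; a minimal sketch with made-up probabilities:

```python
from math import log2

def H(dist):
    # Shannon entropy (bits) of a distribution given as {outcome: prob}.
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# hypothetical joint distribution p(x, y) on {0,1} x {0,1}
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}   # marginal of X
H_cond = sum(px[x] * H({y: p[(x, y)] / px[x] for y in (0, 1)})
             for x in (0, 1))                     # H(Y | X), computed directly
# chain rule: H(X, Y) = H(X) + H(Y | X)
```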
So it remains only to prove
\begin{Proposition}
\label{P1}
\[ {\mathbb E} \log M(\HH^{unl}_N) \sim N \log N . \]
\end{Proposition}
\smallskip \noindent {\bf Proof.\ }
Choose $K_N \sim N^\varepsilon$ for small $\varepsilon > 0$ and partition the labels
$[1,N]$ into $K_N$ consecutive intervals
$I_1, I_2,\ldots$ each containing $N/K_N$ labels.
Consider a realization of the (labeled) Erd\H{o}s-R\'enyi\ graph $\HH_N$.
The number $V_i$ of edges with both end-vertices in $I_i$ has
Binomial($ \binom{N/K_N}{2}, \alpha/N$) distribution with mean
$\sim \frac{\alpha N}{2K_N^2}$,
and from standard large deviation bounds
(e.g.~\cite{MR2248695} Theorem 2.15)
\[ \Pr ( V_i \le \sfrac{\alpha N}{K_N^2}, \mbox{ all } 1 \le i \le K_N) \to 1 . \]
For a realization $\HH_N$ satisfying these inequalities we have
\[ M(\HH^{unl}_N) \ge \left(
\left(\frac{N}{K_N} - \frac{2\alpha N}{K_N^2} \right) !
\ \right)^{K_N} . \]
This holds because we can create permutations consistent with
$\HH^{unl}_N $ by, on each interval $I_i$, first placing the
(at most $ \frac{2\alpha N}{K_N^2} $) labels involved in the edges with both ends in $I_i$ in increasing order, then placing the remaining labels in arbitrary order.
So
\begin{eqnarray*}
{\mathbb E} \log M(\HH^{unl}_N) &\ge& (1 - o(1)) \log
\left(
\left(\frac{N}{K_N} - \frac{2\alpha N}{K_N^2} \right) !
\ \right)^{K_N} \\
&\sim& K_N \times \sfrac{N}{K_N} \log \sfrac{N}{K_N} \\
&\sim& (1-\varepsilon) N \log N
\end{eqnarray*}
establishing Proposition~\ref{P1}.
\ \ \rule{1ex}{1ex}
\smallskip \noindent{\bf{Remark.}}
Proposition~\ref{P1} and Lemma~\ref{L5b}
are in the spirit of the
graph entropy literature, but we could not
find these results there. As discussed in Section~\ref{sec:concept}, this literature
is largely concerned with the complexity of the structure of an unlabeled graph, or
in the case of~\cite{SZP}, the entropy of probability distributions on unlabeled graphs.
A quantity of interest in these settings is the ``automorphism group" of the graph which
is closely related to $M(\HH^{unl}_N)$ here.
For example, an analog of~\eqref{009} is
shown in Lemma~1 of~\cite{SZP} and Theorem~1
there uses this lemma to
relate the entropy rate of an Erd\H{o}s-R\'enyi\ graph on $N$ vertices
with edge probability $p_N$ and distinguished vertices to that of the
same model where the vertex labels are ignored.
Their result is very close to Proposition~\ref{P1}, but~\cite{SZP} only
considers
edge probabilities $p_N$ satisfying $N p_N/\log(N)$ bounded away from zero, which
falls outside our setting.
\section{Open problems}
\label{sec:final}
Aside from the (quite easy) Lemmas \ref{L1} and \ref{L2}, our results concern specific models. Are there interesting ``general" results in this topic?
Here are two possible avenues for exploration.
Given a random graph with vertex-names $\GG = \GG_N$, there is an associated
random unlabeled graph $\GG^{\mbox{{\footnotesize unl}}}$ and an associated random unordered set of names $\mathbf{Names}$, and obviously
\[ \mathrm{ent}(\GG) \ge \max(\mathrm{ent}(\GG^{unl}), \mathrm{ent}(\mathbf{Names})) . \]
Lemmas \ref{L1} and \ref{L2}, applied conditionally as indicated in Section \ref{sec:zero_rate},
give sufficient conditions for
$\mathrm{ent}(\GG) = \mathrm{ent}(\GG^{unl})$ or $\mathrm{ent}(\GG) = \mathrm{ent}(\mathbf{Names})$.
In general one could ask
``given $\GG^{unl}$ and $\mathbf{Names}$, how random is the assignment of names to vertices?"
The standard notion of {\em graph entropy} enters here, as a statistic of the ``completely random" assignment, and the appropriate conditional graph entropy within a model would then provide a measure of relative randomness.
Another question concerns measures of strength of association of names across edges.
One could just take the space ${\mathbf A}^{L_N}$ of possible names, consider
the empirical distribution across edges $(v,w)$ of the pair of names
$(\mathbf{a}(v), \mathbf{a}(w))$ as a distribution on the product space ${\mathbf A}^{L_N} \times {\mathbf A}^{L_N}$
and compare with the product measure using
some quantitative measure of dependence.
But neither of these procedures quite gets to grips with the issue of finding
conceptually interpretable quantitative measures
of dependence between graph structure and name structure, which we propose as an open problem.
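The second procedure can be made concrete with a standard dependence measure; here is a hypothetical helper computing the empirical mutual information (in nats) between the names at the two endpoints of the edges, which is zero exactly when the empirical pair distribution is a product measure:

```python
from collections import Counter
from math import log

def edge_name_mutual_information(edges, name):
    # Empirical mutual information between name(v) and name(w)
    # over directed edges (v, w); a crude measure of association.
    pairs = [(name[v], name[w]) for (v, w) in edges]
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return sum((c / n) * log((c / n) / ((left[a] / n) * (right[b] / n)))
               for (a, b), c in joint.items())
```

Perfectly matched names across edges give mutual information $\log(\mbox{number of names})$, while a product-form empirical distribution gives $0$.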
A second issue concerns ``local" upper bounds for the entropy rate.
In the classical context of sequences $X_1,\ldots,X_n$ from ${\mathbf A}$,
an elementary consequence of subadditivity
is that (without any further assumptions) one can upper bound
$\mathrm{ent}(X_1,\ldots,X_n)$ in terms of the ``size-$k$ random window" entropy
\[ \mbox{${\mathcal E}$}_{n,k} := \mathrm{ent}(X_U, X_{U+1}, \ldots,X_{U+k-1}); \quad U
\mbox{ uniform on } [1,n-k+1] \]
and this is optimal in the sense that for a stationary ergodic sequence the
``global" entropy rate is actually equal to the quantity
\[ \lim_{k \to \infty} \lim_{n \to \infty} k^{-1} \mbox{${\mathcal E}$}_{n,k} \]
arising from this ``local" upper bound.
In our setting we would like some analogous result saying that, for the entropy
$\mbox{${\mathcal E}$}_{N,k}$ of the restriction of $\GG_N$ to some ``size-$k$" neighborhood of a random vertex, there is always an upper
bound for the entropy rate $c$ of the form
\[ c \le \lim_{k \to \infty} \lim_{N \to \infty} \frac{ \mbox{${\mathcal E}$}_{N,k}}{k \log N} \]
and that this is an equality under some ``no long-range dependence" condition analogous to
ergodicity. But results of this kind seem hard to formulate, because
of the difficulty in specifying which vertices and edges are to be included in the ``size-$k$" neighborhood.
\paragraph{Acknowledgement.}
The hybrid model arose from a conversation with Sukhada Fadnavis.
\subsection{Astronomical data}
Astronomical data can be found in two formats: spectra and images. Spectra consist of nearly continuous measures of energy fluxes emitted or absorbed by an object as a function of wavelength. Their fine-grained resolutions allow reliable categorization of objects by matching observed energy curves to theoretical ones. This information, in turn, is used to validate theories of how and when a given type of object formed, how it evolved and what physical processes are involved in its formation and evolution. In spite of providing such rich and fine-grained information, spectra are hardly scalable: a spectrograph requires hours of exposure in order to collect enough photons across all wavelengths. Spectral data, thus, is expensive and scarce. There are datasets of spectra that have become significantly large, such as the one from the Sloan Digital Sky Survey (SDSS)~\cite{Abazajian09}, and that can be used as ground-truth for fully supervised tasks. Still, there is an inherent bias in how objects were chosen for spectroscopic observation.
In order to build datasets that are more representative of the universe at large, images are used. Astronomical images can be understood as a discretized version of spectra, in which energy flux measures are grouped in broader ranges of wavelength. Image-based sky surveys yield information about numbers of objects that are orders of magnitude larger than spectroscopic ones. Even though this information usually comes at the cost of larger uncertainties, it is vital for defining distributions of objects at large scales, for capturing transient phenomena (such as asteroids and supernova explosions), and also for detecting unusual phenomena (the so-called outliers) that may be more carefully studied with a spectrograph afterwards. Besides, images provide rich morphological information that is usually absent in spectra, and that can be used for studying formation and evolution of objects in different ways.
Astronomical image acquisition is carried out by collecting photons with CCD cameras mounted to telescopes containing a set of filters with different passbands. The resulting images are three-dimensional arrays wherein the depth dimension corresponds to the amount of filters used, and each pixel corresponds to the count of photons that passed through a filter in that position. Tasks such as object segmentation (using traditional, unsupervised computer vision techniques) and estimation of properties of objects are performed over these images. After this, a catalog of objects of unknown classes with position coordinates and estimates of properties is generated. The most used tool for this kind of low-level processing is called SExtractor~\cite{Bertin96}.
Raw astronomical images usually come in more than three channels, making them impossible to visualize using the standard color model for digital images, RGB. However, there are algorithms~\cite{Lupton03} that combine information from multiple channels, enhancing relevant details from each passband, and generate RGB composite images. These are mostly used for scientific communication, but are now also used for research at the interface between Deep Learning and Astronomy. In this work, we use RGB composite images made available by various sky surveys.
\subsection{Self-supervised Learning}
Self-supervised learning consists of using unlabeled data and some attribute that can be easily and automatically generated from these data for training models on a pretext task, with the objective of learning representations that are useful for other tasks.
Some methods of producing pretext tasks for image data are:
\begin{itemize}
\item Clustering~\cite{Caron18}: features extracted from a model are clustered and their cluster assignments are used as pseudo-labels for iteratively training the model.
\item Image Rotation~\cite{Gidaris18}: four copies of images are generated by rotating them by $0^{\circ}$, $90^{\circ}$, $180^{\circ}$ and $270^{\circ}$, and a model is trained to predict which rotation was applied to each image.
\item Relative Patch Location~\cite{Doersch15}: pairs of patches from images are generated, and a model is trained to predict the position of the second patch relative to the first.
\end{itemize}
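For instance, the image-rotation pretext task above can be sketched in a few lines (using NumPy; a hypothetical helper, not tied to any specific pipeline):

```python
import numpy as np

def rotation_pretext_examples(image):
    # Produce the four rotated copies of an image together with the
    # pseudo-labels 0, 1, 2, 3 for rotations of 0, 90, 180, 270 degrees.
    copies = [np.rot90(image, k) for k in range(4)]
    labels = list(range(4))
    return copies, labels
```

A model is then trained to predict the pseudo-label from each rotated copy.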
In Astronomy, most data is unlabeled, but useful properties of the objects, such as brightness and size, can be readily computed from analytical models, without supervision. This yields an ideal setting for using self-supervision, where computed properties are used as targets for regression tasks. In this way, unlabeled data are included in the learning process, leading to representations that are closer to the distribution of the observed objects in the universe at large.
Astronomical properties alone are commonly used as input features for object classification~\cite{Sesar17, Costa-Duarte19}. However, we believe that combining images with computable properties of astronomical objects into a single representation through self-supervised learning leads to more powerful, representative features that can achieve better performance on more difficult tasks, such as outlier detection or clustering.
\subsection{Pretext task}
\label{sec:method_pretext}
A pretext task is a task in which known attributes of each data point are used as intermediate targets for training models in a supervised manner, with the objective of learning representations that can be used in higher-level tasks such as classification or clustering. We choose the prediction of magnitudes from unlabeled images of astronomical objects as our pretext task.
Magnitude is a dimensionless measure of the brightness of an object in a given passband. Magnitudes are defined in a logarithmic scale, such that a difference of 1 in magnitude corresponds to a factor of $100^{1/5}$ in brightness, and magnitude values found in modern sky surveys are roughly inside the range [0, 30], where lower values correspond to brighter objects. To ease training and convergence, we rescale magnitudes by dividing them by 30. Magnitudes are computed with preprocessing tools such as SExtractor~\cite{Bertin96} and made available by the sky survey teams for the scientific community.
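As a quick illustration of the scale (hypothetical helper functions, mirroring the definitions above):

```python
def brightness_ratio(m1, m2):
    # Flux ratio f1/f2 implied by magnitudes m1 and m2; a difference of
    # 1 magnitude corresponds to a factor of 100**(1/5); lower = brighter.
    return 100 ** ((m2 - m1) / 5)

def rescale_magnitude(mag):
    # Rescaling used for the regression targets: divide by 30 so that
    # values fall roughly in [0, 1].
    return mag / 30.0
```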
We use a subset of the first data release of the Southern Photometric Local Universe Survey (S-PLUS)~\cite{Oliveira19}, a sky survey aimed at imaging the Southern sky in twelve filters. There are, therefore, twelve magnitudes per object to be used as targets. The first S-PLUS data release includes both image cutouts and a catalog of detected objects, their estimated properties and their corresponding uncertainties. Estimates of magnitudes in the catalog have a precision of $0.01$ and come with uncertainty estimates.
Table~\ref{table:magnitudes} shows magnitude values and their uncertainties for a randomly sampled object. Rows are ordered by passbands from smallest (u) to largest (z) wavelengths. The filters u, g, r, i and z are a widely used set of broad filters, known as the \emph{ugriz} system. They refer to ultraviolet, green, red, near-infrared, and infrared. The others are narrow filters whose objective is to capture information about phenomena that happen at specific wavelengths. It can be seen that uncertainties vary significantly per filter.
\begin{table}[!ht]
\centering
\caption{An example of magnitude values \newline used as targets for the pretext task}
\label{table:magnitudes}
\begin{tabular}{lc}
\toprule
u & 19.87 \textpm 0.04 \\
f378 & 19.93 \textpm 0.06 \\
f395 & 19.95 \textpm 0.09 \\
f410 & 19.42 \textpm 0.06 \\
f430 & 19.34 \textpm 0.05 \\
g & 19.16 \textpm 0.02 \\
f515 & 19.09 \textpm 0.04 \\
r & 18.96 \textpm 0.02 \\
f660 & 18.93 \textpm 0.02 \\
i & 18.82 \textpm 0.02 \\
f861 & 18.78 \textpm 0.03 \\
z & 18.80 \textpm 0.03 \\
\bottomrule
\end{tabular}
\end{table}
We filter out every object for which any of the magnitudes has an uncertainty higher than $0.1$. This uncertainty threshold keeps a balance between having a dataset of reasonable size and removing examples with noisy targets. We also filter out all labeled objects, which will later be used for downstream tasks, in order to avoid leaking from the pretext task to downstream tasks.
The resulting unlabeled set contains 205321 objects, which are split into 80\% for training, 10\% for validation and 10\% for testing. Results of the pretext training are described in Section~\ref{sec:results_pretext}.
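The split can be sketched as follows (a hypothetical helper; the actual splitting procedure is not specified beyond the proportions):

```python
import random

def train_val_test_split(items, seed=0):
    # Shuffle and split into 80% train, 10% validation, 10% test.
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```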
\subsection{Downstream tasks}
\label{sec:method_downstream}
Six datasets with varying degrees of difficulty are selected for downstream classification tasks. Two of them are extracted from the Southern Photometric Local Universe Survey (S-PLUS)~\cite{Oliveira19}, the sky survey also used in the pretext task, and the other four from the Sloan Digital Sky Survey (SDSS)~\cite{Abazajian09}, a sky survey that sweeps the Northern sky in five filters. Each dataset is briefly described below.
\vspace{5pt}
\noindent \textbf{Star/Galaxy (SG):} 50090 images divided into two classes: \emph{Stars} (27981) and \emph{Galaxies} (22109), extracted from Data Release 1 of S-PLUS. This dataset is considered the easiest among the six, since it has a reasonable amount of data, classes are balanced, and it contains easily identifiable examples of stars and galaxies.
\medskip
\noindent \textbf{Star/Galaxy/Quasar (SGQ):} 54000 images divided into three balanced classes: \emph{Stars} (18000), \emph{Galaxies} (18000), and \emph{Quasars} (18000), also extracted from Data Release 1 of S-PLUS. This dataset is a harder variant of the \emph{SG} dataset, since it contains fainter examples of galaxies and stars, and also contains quasars (quasi-stellar radio sources). Quasars appear point-like in images and are easily confused with stars. See examples in Fig.~\ref{fig:sgq}.
\medskip\noindent \textbf{Merging Galaxies (MG):} 15766 images divided into two classes: \emph{Merging} (5778) and \emph{Non-interacting} (9988) galaxies, extracted from Data Release 7 of SDSS. This dataset is sufficiently large, but its instances are not as clearly separable as stars and galaxies. See examples in Fig.~\ref{fig:mg}.
\medskip\noindent \textbf{Galaxy Morphology, 2-class (EF-2):} 3604 images divided into two classes: \emph{Elliptical} (289) and \emph{Spiral} (3315) galaxies, extracted from the EFIGI~\cite{baillard2011efigi} dataset, which is based on SDSS. This dataset is highly unbalanced towards images of \emph{Spiral} galaxies.
\medskip\noindent \textbf{Galaxy Morphology, 4-class (EF-4):} 4389 images divided into four classes: \emph{Elliptical} (289), \emph{Spiral} (3315), \emph{Lenticular} (537) and \emph{Irregular} (248) galaxies, extracted from the EFIGI dataset. The additional classes make the classification problem harder, since the objects are not as clearly identifiable as in the 2-class subset and classes are highly unbalanced as well.
\medskip\noindent \textbf{Galaxy Morphology, 15-class (EF-15):} 4327 images divided into fifteen classes: \emph{Elliptical:-5} (227), \emph{Spiral:0} (196), \emph{Spiral:1} (257), \emph{Spiral:2} (219), \emph{Spiral:3} (517), \emph{Spiral:4} (472), \emph{Spiral:5} (303), \emph{Spiral:6} (448), \emph{Spiral:7} (285), \emph{Spiral:8} (355), \emph{Spiral:9} (263), \emph{Lenticular:-3} (189), \emph{Lenticular:-2} (196), \emph{Lenticular:-1} (152), and \emph{Irregular:10} (248) galaxies, also extracted from the EFIGI dataset. The numbers after the names come from the Hubble Sequence~\cite{hubble1982realm}, which is a standard taxonomy used in astronomy for galaxy morphology. This is the hardest dataset among those evaluated, since it has only a few hundred observations for each class. See examples for EF-2, EF-4 and EF-15 in Fig.~\ref{fig:ef}.
\vspace{5pt}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.47\textwidth]{images/sgq}
\caption{Sample images from the SGQ dataset.}
\label{fig:sgq}
\vspace{-0.15cm}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.47\textwidth]{images/mg}
\caption{Sample images from the MG dataset.}
\label{fig:mg}
\vspace{-0.15cm}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.47\textwidth]{images/ef}
\caption{Sample images from the EF-2, EF-4 and EF-15 datasets.}
\label{fig:ef}
\vspace{-0.15cm}
\end{figure}
Labels for the S-PLUS datasets were obtained by matching astronomical coordinates of the objects between the S-PLUS image catalog and the SDSS spectra catalog, which contains orders of magnitude fewer samples. Because of that, even though the image catalog may include millions of detected objects, only thousands of them can be reliably labeled and used for supervised training.
\subsection{Training and evaluation setup}
\label{sec:method_training}
In~\cite{VISAPP}, we extensively analyzed training setups for a variety of astronomical object classification tasks. Five CNN architectures, three optimizers, two values of L2 regularization ($\lambda$)~\cite{krogh1992simple} and two values of max-norm constraint ($\gamma$)~\cite{srebro2005rank} were considered. We found that VGG16~\cite{Simonyan14} yielded the best results for all tasks in this context, both when training from scratch and when fine-tuning based on ImageNet weights, and that ImageNet was beneficial in most cases. We also found that VGG16 combined with $\lambda=0$ and $\gamma=2$ yielded the best results for most tasks.
Thus, in this work, we use a VGG16 backbone with $\lambda=0$ and $\gamma=2$ for all tasks. As in~\cite{VISAPP}, we add a 2048-unit fully-connected layer with max-norm constraint, followed by a Dropout layer~\cite{srivastava2014dropout} with $0.5$ probability of dropping units, followed by a final fully-connected layer with $n$ units, where $n$ is either the number of magnitudes or the number of classes. The output of the last layer is passed either through a softmax function, if the task is classification, or through a modified ReLU function that saturates to 1 for $x \ge 1$, if the task is regression. As mentioned earlier, regression targets are divided by 30. For the classification task, cross entropy was used as the loss function, while for the regression task, mean absolute error (MAE) was used.
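The modified ReLU mentioned above amounts to clipping the output to $[0, 1]$; a minimal elementwise sketch:

```python
def saturating_relu(x):
    # ReLU that saturates to 1: returns 0 for x <= 0, x on (0, 1),
    # and 1 for x >= 1, matching targets rescaled into [0, 1].
    return min(max(x, 0.0), 1.0)
```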
For the pretext (regression) task, SGD with an initial learning rate of $10^{-3}$ is used. For downstream (classification) tasks, we consider five schemes for comparison:
\begin{itemize}
\item training from scratch;
\item extracting features from a model pre-trained on ImageNet;
\item extracting features from a model pre-trained on the pretext task;
\item fine-tuning a model pre-trained on ImageNet;
\item fine-tuning a model pre-trained on the pretext task.
\end{itemize}
When training from scratch, all layers are trained for up to 200 epochs using Adam. When extracting features, all the convolutional layers are kept frozen and the top layers are trained for up to 100 epochs using Adam. When fine-tuning, first all the convolutional layers are frozen and only the top layers are trained for 10 epochs using Adam, then the convolutional layers are unfrozen and trained along with the top layers for up to 200 epochs using SGD with a learning rate of $10^{-4}$. In all cases, training is automatically stopped if validation loss diverges from training loss for more than 10 epochs. Each combination of dataset and scheme is run three times and the average and the standard deviation of the results are reported.
\subsection{Low-data regime}
\label{sec:method_low_data}
We carry out an experiment in which we observe how performance varies with the size of the training set. In order to do that, we start out with a small subset of 100 samples of the training data, and train models with incrementally larger subsets of data until the entire training set is used. From 100 to 1000 samples, an incremental step of 100 is used, to account for variance in the very low-data regime. From 1000 to 3000, a step of 500 is used. Finally, from 10000 to 40000, a step of 10000 is used. This yields a total of 18 training runs per dataset and per scheme. The validation set is fixed.
Training from scratch was carried out using SGD with a learning rate of $10^{-4}$, which we empirically found to be more stable than Adam when using little data. This experiment is run only for the S-PLUS datasets, SG and SGQ.
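The 18 training-set sizes described above can be enumerated as follows (assuming inclusive endpoints and no duplicate at 1000):

```python
# Incremental training-set sizes for the low-data experiment:
# 100..1000 step 100, then 1500..3000 step 500, then 10000..40000 step 10000.
sizes = (list(range(100, 1001, 100))
         + list(range(1500, 3001, 500))
         + list(range(10000, 40001, 10000)))
```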
\subsection{Pretext task}
\label{sec:results_pretext}
Training was carried out using the dataset described in Section~\ref{sec:method_pretext}, with architectures and parameters as specified in Section~\ref{sec:method_training}. An MAE of $0.0034$ was achieved on the validation set for the prediction of magnitudes in the [0,1] range. This corresponds to an MAE of $0.1$ on the original magnitude scale. Given that the magnitudes have uncertainties of up to $0.1$, we consider it a very reasonable error.
\subsection{Downstream tasks}
\begin{table*}
\centering
\caption{Accuracies obtained on the validation sets for six classification tasks, trained using five different schemes.}
\label{table:acc}
\begin{tabular}{lccccc}
\toprule
& from scratch & \multicolumn{2}{c}{feature extraction} & \multicolumn{2}{c}{fine-tuning} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6}
& & ImageNet & magnitudes & ImageNet & magnitudes \\ \cmidrule(lr){1-6}
EF-15 & 0.4227 \textpm 0.0256 & 0.2642 \textpm 0.0212 & \textbf{0.3271 \textpm 0.0141} & 0.3683 \textpm 0.0222 & \textbf{0.4048 \textpm 0.0245} \\
EF-2 & 0.9683 \textpm 0.0082 & \textbf{0.9697 \textpm 0.0040} & 0.9580 \textpm 0.0056 & \textbf{0.9893 \textpm 0.0016} & 0.9687 \textpm 0.0045 \\
EF-4 & 0.8729 \textpm 0.0153 & \textbf{0.8000 \textpm 0.0271} & 0.7801 \textpm 0.0013 & \textbf{0.8774 \textpm 0.0035} & 0.8322 \textpm 0.0080 \\
MG & 0.6336 \textpm 0.0000 & 0.7825 \textpm 0.0006 & \textbf{0.8041 \textpm 0.0045} & \textbf{0.9580 \textpm 0.0042} & 0.9360 \textpm 0.0016 \\
SGQ & 0.8722 \textpm 0.0038 & 0.6624 \textpm 0.0014 & \textbf{0.7255 \textpm 0.0015} & 0.8760 \textpm 0.0011 & \textbf{0.8828 \textpm 0.0008} \\
SG & 0.9897 \textpm 0.0035 & 0.9366 \textpm 0.0005 & \textbf{0.9799 \textpm 0.0010} & 0.9901 \textpm 0.0006 & \textbf{0.9928 \textpm 0.0004} \\
\bottomrule
\end{tabular}
\end{table*}
Training and evaluation were carried out according to the setup described in Section~\ref{sec:method_training}, using the datasets described in Section~\ref{sec:method_downstream}. Table~\ref{table:acc} shows validation accuracies for classifiers trained on the five schemes: from scratch, using features extracted from a model pre-trained on ImageNet, using features extracted from a model pre-trained on magnitudes, fine-tuning a model pre-trained on ImageNet and fine-tuning a model pre-trained on magnitudes. Each scheme was run three times to account for randomness in the optimization process. Reported results are the average and standard deviation of the three runs.
For fine-tuning, models pre-trained on magnitudes perform better by a significant margin for half of the datasets, including both datasets from S-PLUS, but perform worse for most datasets from SDSS.
\begin{figure}[!hb]
\centering
\includegraphics[width=0.88\columnwidth]{images/embeddings_tsne50}
\caption{Projections of features extracted from validation sets using models pre-trained on ImageNet and on magnitudes.}
\label{fig:projections}
\end{figure}
For feature extraction, models pre-trained on magnitudes perform better by large margins for 2/3 of the datasets, including both datasets from S-PLUS and two datasets from SDSS. Figure~\ref{fig:projections} shows two-dimensional projections of features extracted from validation sets of some of the datasets, created with t-SNE~\cite{maaten2008visualizing} with perplexity $= 50$. Class separation is visibly better for the features extracted using the model trained on magnitudes, when compared with those extracted using the model trained on ImageNet.
\begin{figure*}[!ht]
\begin{minipage}[l]{1.0\columnwidth}
\centering
\includegraphics[width=\linewidth]{images/accuracies_SG}
\end{minipage}
\hfill{}
\begin{minipage}[r]{1.0\columnwidth}
\centering
\includegraphics[width=\linewidth]{images/accuracies_SGQ}
\end{minipage}
\caption{Accuracies on the validation sets of the SG (left) and SGQ (right) datasets as a function of training set size.}
\label{fig:acc_curves}
\end{figure*}
\subsection{Low-data regime}
Training and evaluation were carried out according to the setup described in Section~\ref{sec:method_low_data}. Figure~\ref{fig:acc_curves} shows accuracy curves as a function of the training set size for the datasets SG and SGQ. Dashed lines refer to classifiers trained on features extracted from the pretext task or from the ImageNet model, and solid lines refer to classifiers trained with fine-tuning from the pretext task or from the ImageNet model.
For feature extraction, the gap between pre-training on magnitudes and on ImageNet remains large and nearly constant regardless of training set size. For fine-tuning, the gap between pre-training on magnitudes and on ImageNet is larger for small training sets and decreases as more training data is added. Table~\ref{table:acc_littledata} shows accuracy values for a training set size of 0.25\% of the total, which corresponds to nearly 100 training examples for each dataset. For such a small training set, it can be seen that pre-training on magnitudes can yield accuracies that are up to 10 pp higher when compared to ImageNet.
\begin{table}[!ht]
\centering
\caption{Accuracies on the validation sets \newline for a training set of approximately 100 examples.}
\label{table:acc_littledata}
\begin{tabular}{lccc}
\toprule
& & SG & SGQ \\ \cmidrule(lr){3-4}
& ImageNet & 0.8702 & 0.5529 \\
feature extraction & magnitudes & \textbf{0.9727} & \textbf{0.6051} \\
& diff. & +0.1015 & +0.052 \\ \cmidrule(lr){2-4}
& ImageNet & 0.9326 & \textbf{0.6485} \\
fine-tuning & magnitudes & \textbf{0.9713} & 0.5138 \\
& diff. & +0.0387 & -0.1346 \\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
\label{sec:intro}
\input{010_intro.tex}
\section{Background}
\label{sec:related}
\input{020_background.tex}
\section{Method}
\label{sec:method}
\input{030_method.tex}
\section{Results}
\label{sec:results}
\input{040_results.tex}
\section{Discussion}
\label{sec:discussion}
\input{050_discussion.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{060_conclusion.tex}
\section*{Acknowledgment}
\noindent This study was financed in part by FAPESP (2015/22308-2, 2017/25835-9 and 2018/25671-9) and the Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior - Brasil (CAPES) - Finance Code 001.
\IEEEtriggeratref{16}
\IEEEtriggercmd{\enlargethispage{-6.5in}}
\bibliographystyle{IEEEtran}
\section{Introduction}
\subsection{}
A theory of quantum supergroups was developed systematically by
Yamane \cite{Y1, Y2} after the classical work of Drinfeld, Jimbo and
Lusztig. Recently the interest in quantum supergroups has been
revived (see \cite{CW, CHW1, CHW2}) thanks to their categorification
\cite{HW} by Hill and one of the authors using the spin nilHecke and
quiver Hecke superalgebras \cite{W, EKL, KKT}. The work on quantum
supergroups of {\em anisotropic type} (meaning no isotropic odd
simple roots) has in turn motivated further progress on
categorification. The conjecture in \cite{HW} that cyclotomic (spin)
quiver Hecke superalgebras categorify the integrable modules of the
supergroup has recently been proved by Kang, Kashiwara, and Oh
\cite{KKO13}. The validity of this conjecture at rank one, in which
case quiver Hecke superalgebras reduce to spin nilHecke algebras,
was already noted in \cite{HW} as an easy upgrading of the
difficult categorification result of Ellis, Khovanov, and Lauda
\cite{EKL}. Yet another recent development is the categorification
of the modified covering quantum group in rank one
(see Ellis-Lauda \cite{EL}).
A basic observation in \cite{HW} is that the parity functor $\Pi$
categorifies a formal super sign $\pi$ subject to $\pi^2=1$. This
leads to the formulation of the so-called covering quantum group
$\cqg$ in \cite{HW, CW, CHW1}, which allows a second formal
parameter $\pi$ such that $\pi^2=1$ besides the usual quantum
parameter $v$. The specialization of $\cqg$ at $\pi=1$, denoted by
$\cqg |_{\pi=1}$, recovers the usual quantum group while the
specialization of $\cqg$ at $\pi=-1$, denoted by $\cqg |_{\pi=-1}$,
recovers the quantum supergroup of anisotropic type. In contrast to
the versions of quantum supergroups over $\mathbb C(v)$ studied in
the literature, our covering (or super) quantum groups have a
well-developed representation theory such as weight modules and
integrable modules over ${\Q(v)}$, thanks to the enlarged Cartan
subalgebras \cite{CHW1}; moreover, they admit integral forms. Under
a mild bar-consistent condition on the super Cartan datum, the half
covering quantum group $\chqg$ ($\cong \cqg^{-}$) and the associated
integrable modules admit a novel bar involution which sends $v
\mapsto \pi v^{-1}$ and then admit canonical bases \cite{CHW2}.
The (covering) quantum supergroups are quantizations of Lie
superalgebras associated to the anisotropic type super Cartan datum
introduced in \cite{Kac}. It has been known that Lie superalgebras
associated to the super Cartan datum have representation theory
similar to that of Kac-Moody algebras associated to the same super
Cartan datum with $\mathbb Z_2$-grading forgotten; in particular, the
character formulas for the integrable modules of these Lie algebras
and superalgebras coincide. In the (only) finite type, this reduces
to the well-known fact that the finite-dimensional modules of Lie
superalgebra $\mathfrak{osp}(1|2n)$ and Lie algebra $\mathfrak{so}(2n+1)$ have
the same characters. Such a similarity continues to hold at the
quantum level. But a conceptual explanation for all these
coincidences has been missing (see an earlier attempt \cite{La} in
finite type).
\subsection{}
The goal of this paper is to establish (somewhat surprising) direct links at several levels
between quantum groups and supergroups
associated to bar-consistent super Cartan datum, which
provide a conceptual explanation of the above coincidences.
We construct automorphisms (called twistors) denoted by $\Psi, \dot\Psi$ of
the half covering quantum group $\chqg$ and the modified covering quantum group $\cmqg$,
respectively.
The construction of twistors requires
an extension of scalars to include a square root of $-1$, denoted by ${\mathbf{t}}$ in this paper.
The twistor switches $\pi$ and $-\pi$, and hence specializes to an isomorphism between
the half (and resp., modified) quantum group and its super counterpart.
As an immediate consequence, we obtain an equivalence of categories of weight modules for quantum
group $\cqg|_{\pi=1}$ and supergroup $\cqg|_{\pi=-1}$.
We also formulate an {\em extended covering quantum group} with enlarged Cartan subalgebra
and construct its twistor.
Symbolically, we summarize the role of the twistor in the case of modified covering quantum group
in the following commutative diagram:
\begin{center}
\begin{tikzpicture}[scale=1]
\draw[->] (.15,1.3) arc (-50:220:.3);
\draw (-.05, 2.1) node {$\qquad\qquad \;\; \dot\Psi \; (\pi \mapsto -\pi)$};
\draw (.1,1) node {$\cmqg[{\mathbf{t}}]$};
\draw (-2,-1) node {$\cmqg[{\mathbf{t}}] |_{\pi=1}$};
\draw (2,-1) node {$\cmqg[{\mathbf{t}}] |_{\pi=-1}$};
\draw[->>] (-.2,.7) -- (-1.6,-.6);
\draw[->>] (.2,.7) -- (1.6,-.6);
\draw[right hook->] (-1.9,-.6) -- (-.5,.7);
\draw[left hook->] (1.9,-.6) -- (.5,.7);
\draw[double,<->] (-1,-1) -- (1,-1) node[midway, above] {$\simeq$};
\end{tikzpicture}
\end{center}
Alternatively, one can view the modified quantum group $\cmqg
|_{\pi=1}$ and the modified quantum supergroup $\cmqg |_{\pi=-1}$ as
two different rational forms of a common algebra $\cmqg[{\mathbf{t}}]
|_{\pi=1}$. The two rational forms admit their own distinct integral
forms. Remarkably the distinction between super and non-super
algebras becomes blurred at the quantum level, even though a clear
distinction exists between Lie algebras and Lie superalgebras (for
example, there are ``more" integrable modules for Lie algebras than
for the corresponding Lie superalgebras \cite{Kac}).
As an application, the twistor $\Psi$ induces a transformation
on the crystal lattice of $\chqg$ which behaves well with the crystal structure.
By careful bookkeeping, we provide a purely algebraic proof
of \cite[Proposition~6.7]{CHW2} that the crystal
lattice of $\chqg$ is invariant
under an anti-automorphism $\varrho$ which fixes the Chevalley generators.
Furthermore, the twistor $\Psi$ is shown to match
Lusztig-Kashiwara's canonical basis for $\chqg|_{\pi=1}$ \cite{Lu1, K} with
the canonical basis for the half quantum supergroup $\chqg|_{\pi=-1}$ constructed in \cite{CHW2}, up to
integer powers of ${\mathbf{t}}$. Let us add that this does not give a new proof of the existence
of the canonical basis for $\chqg$ or for the integrable modules of $\cqg$.
\subsection{}
Although it is not used explicitly in this paper, the
connection between (one-parameter) quantum groups and two-parameter
$(v,t)$-quantum groups developed by two of the authors \cite{FL12}
plays a basic role in our evolving understanding of the links
between quantum groups and supergroups. A connection between
(one-parameter) quantum groups and quantum supergroups can indeed be
formulated by a ``twisted lift" to two-parameter quantum groups
which is followed by a ``specialization" of the second parameter $t$
to ${\mathbf{t}}$ with ${\mathbf{t}}^2=-1$. But we have decided to adopt the more
intrinsic and self-contained approach as currently formulated in
this paper.
The isomorphism result on modified quantum (super)groups $\cmqg[{\mathbf{t}}]
|_{\pi=1}$ and $\cmqg[{\mathbf{t}}] |_{\pi=-1}$ in this joint work was
announced in \cite{FL13}, where the isomorphism in the rank one
case was established somewhat differently from here.
A version of our equivalence of categories of weight modules for
$\cqg|_{\pi=1}$ and $\cqg|_{\pi=-1}$ also appeared in \cite{KKO13}
with a very different proof. Note that the notion of weight modules
in {\em loc. cit.} is nonstandard and subtle, and the
multi-parameter algebras formulated therein over $\mathbb C(v)$ or
$\mathbb C(v)^\pi$ do not seem to admit rational forms, integral forms, or
modified counterparts as ours do. A construction similar to the
twistor $\widehat{\Psi}$ for our extended covering quantum group
(see Proposition~\ref{prop:qgiso}) also appeared in \cite{KKO13}.
In contrast to {\em loc. cit.}, our formula for $\widehat{\Psi}$ is
very explicit; the twistor $\dot\Psi$ here preserves the integral
forms (see Theorem~\ref{thm:modauto}), and this allows us to
specialize $v$ to be a root of unity without difficulty.
\subsection{}
The paper is organized as follows.
In Section~\ref{sec:half}, after recalling some preliminaries,
we formulate and establish a twistor $\Psi$ of the half covering quantum group
$\chqg[{\mathbf{t}}]$,
which restricts to an isomorphism between the super and non-super half quantum groups.
Here we make a crucial use of a new multiplication on $\chqg[{\mathbf{t}}]$ twisted by a distinguished bilinear form,
and the general idea of such twisted multiplication goes back to \cite{FL12}.
In Section~\ref{sec:CB}, we use the twistor $\Psi$ to compare the
crystal lattices between the $\pi=1$ and $\pi=-1$ cases. In
particular, we give an algebraic proof that the crystal lattice for
$\chqg$ is preserved by an anti-involution $\varrho$. (This was
stated in \cite[Proposition~6.7]{CHW2}.) Then we show that the
twistor $\Psi$ matches the canonical basis elements of the half
quantum supergroup $\chqg |_{\pi=-1}$ and those of half quantum
group $\chqg |_{\pi=1}$, up to integer powers of ${\mathbf{t}}$.
In Section~\ref{sec:modified}, we construct a twistor of the
modified covering quantum group. This restricts to an isomorphism
between the super and non-super modified quantum groups. An
immediate corollary is an equivalence of categories of weight
modules for the super and non-super quantum groups. A further
consequence is an equivalence of BGG categories of modules for
Kac-Moody Lie algebras and Lie superalgebras. Finally we construct
an alternative twistor relating quantum groups to quantum
supergroups upon enlarging the Cartan subalgebras.
\vspace{.2cm} \noindent \textbf{Acknowledgements.} Y.L. is supported
in part by the NSF grant DMS-1160351, while S.C. and W.W. are
partially supported by the NSF grant DMS-1101268. S.C. was also
supported by a semester fellowship at University of Virginia. S.C.
and W.W. thank Institute of Mathematics, Academia Sinica, Taipei for
providing an excellent working environment and support, where part
of this project was carried out. W.W. thanks Shun-Jen Cheng and
Maria Gorelik for helpful discussions regarding the work of
Lanzmann.
\section{The twistor of half covering quantum group}
\label{sec:half}
\subsection{The preliminaries}
We review some basic definitions which can be found in \cite{CHW1, CHW2} and references therein.
\begin{dfn}
\label{dfn:scd}
A {\em Cartan datum} is a pair $(I,\cdot)$ consisting of a finite
set $I$ and a $\mathbb Z$-valued symmetric bilinear form $\nu,\nu'\mapsto \nu\cdot\nu'$
on the free abelian group $\mathbb Z[I]$ satisfying
\begin{enumerate}
\item[(a)] $d_i=\frac{i\cdot i}{2}\in \mathbb Z_{>0}, \quad \forall i\in I$;
\item[(b)]
$a_{ij}=2\frac{i\cdot j}{i\cdot i}\in \mathbb Z_{\leq 0}$, for $i\neq j$ in $I$.
\end{enumerate}
A Cartan datum is called a {\em super Cartan datum of anisotropic
type} if there is a partition $I=I_{\bar{0}}\coprod I_{\bar{1}}$ which
satisfies the condition
\begin{enumerate}
\item[(c)] $2\frac{i\cdot j}{i\cdot i} \in 2\mathbb Z$ if $i\in I_{\bar{1}}$ and $j \in I$.
\end{enumerate}
A super Cartan datum of anisotropic type is called {\em
bar-consistent}
if it additionally satisfies
\begin{enumerate}
\item[(d)] $d_i\equiv p(i) \mod 2, \quad \forall i\in I.$
\end{enumerate}
\end{dfn}
We will always assume $I_{\bar{1}}\neq\emptyset$ without loss of
generality. We note that (d) is almost always satisfied for super
Cartan data of finite or affine type (with one exception, denoted
$A^{(4)}(0,2n)$, which corresponds to a Dynkin diagram with two
short roots of opposite parity at both of its ends). A super Cartan
datum is always assumed to be bar-consistent in this paper.
We note that a bar-consistent super Cartan datum satisfies
%
\begin{equation} \label{eq:even}
i\cdot j\in 2\mathbb Z \quad \text{ for all }i,j\in I.
\end{equation}
The $i\in I_{\bar{0}}$ are called even, $i\in I_{\bar{1}}$ are called odd. We
define a parity function $p:I\rightarrow\set{0,1}$ so that $i\in
I_{\overline{ p(i)}}$. We extend this function to the homomorphism
$p:\mathbb Z[I]\rightarrow \mathbb Z_2$. Then $p$ induces a {\em parity $\mathbb Z_2$-grading} on
$\mathbb Z[I]$.
We define the height function ${\operatorname{ht}}$ on $\mathbb Z[I]$
by letting ${\operatorname{ht}}(\sum_{i\in I} c_i i)=\sum_{i\in I} c_i$.
A {\em super root datum} associated to a super Cartan datum
$(I,\cdot)$ consists of
\begin{enumerate}
\item[(a)]
two finitely generated free abelian groups $Y$, $X$ and a
perfect bilinear pairing $\ang{\cdot, \cdot}:Y\times X\rightarrow \mathbb Z$;
\item[(b)]
an embedding $I\subset X$ ($i\mapsto i'$) and an embedding $I\subset
Y$ ($i\mapsto i$) satisfying
\item[(c)] $\ang{i,j'}=\frac{2 i\cdot j}{i\cdot i}$ for all $i,j\in I$.
\end{enumerate}
We will assume that the image of the embedding $I\subset X$
(respectively, the image of the embedding $I\subset Y$) is linearly
independent in $X$ (respectively, in $Y$);
in the terminology of \cite{L93}, this means the datum is both $X$-regular and
$Y$-regular.
If $V$ is a vector space graded by $\mathbb Z[I]$, $X$, or $Y$, we will use the weight notation
$|x|=\mu$ if $x\in V_\mu$. If $V$ is a $\mathbb Z_2$-graded vector space, we will
use the parity notation $p(x)=a$ if $x\in V_a$.
Let $v$ and $t$ be formal parameters, and let $\pi$ be an indeterminate
such that
$$
\pi^2=1.
$$
For a ring $R$ with $1$, we will form a new ring
$R^\pi=R[\pi]/(\pi^2-1)$. Given an $R^\pi$-module (or algebra) $M$,
the {\em specialization of $M$ at $\pi=\pm 1$} means the $R$-module
(or algebra) $M|_{\pi=\pm 1} \stackrel{\text{def}}{=}R_{\pm}\otimes_{R^\pi} M$,
where $R_\pm =R$ is viewed
as an $R^\pi$-module on which $\pi$ acts as $\pm 1$.
Assume 2 is invertible in $R$; i.e. $\frac{1}{2}\in R$.
We define
\begin{equation}\label{eq:pi idempotent}
\varepsilon_{+}=\frac{1+ \pi}{2},\qquad\varepsilon_{-}=\frac{1- \pi}{2},
\end{equation}
and note that $R^\pi=R\varepsilon_+\oplus R\varepsilon_-$.
In particular, since $\pi \varepsilon_{\pm}=\pm \varepsilon_{\pm}$
for an $R^\pi$-module $M$, we see that
\[M|_{\pi=\pm 1}\cong \varepsilon_{\pm } M.\]
Similarly, for an $R$-module $M$, we define
\[M[t^{\pm 1}]=R[t^{\pm 1}]\otimes_R M.\]
Let ${\mathbf{t}}^2=-1 \in R$. Let us define the specialization of $t$ at ${\mathbf{t}}$
to be
\[
M[{\mathbf{t}}]=R[{\mathbf{t}}]\otimes_{R[t^{\pm 1}]} M[t^{\pm 1}]=R[{\mathbf{t}}]\otimes_R M.
\]
(Note that the results herein may be reformulated in a context where ${\mathbf{t}}$ is replaced by an
indeterminate solution to the equation $t^4=1$.)
Recall $\pi^2=1$. For $k \in \mathbb Z_{\ge 0}$ and $n\in \mathbb Z$,
we introduce a $(v,\pi)$-variant of quantum integers, quantum factorial and quantum binomial coefficients:
\begin{equation}
\label{eq:nvpi}
\begin{split}
\bra{k}_{v,\pi} &
=\frac{(\pi v)^k-v^{-k}}{\pi v-v^{-1}} \in \mathbb Z[v^{\pm 1}]^\pi,
\\
\bra{k}_{v,\pi}^! &= \prod_{l=1}^k \bra{l}_{v,\pi} \in \mathbb Z[v^{\pm 1}]^\pi,
\\
\bbinom{n}{k}_{v,\pi} &=\frac{\prod_{l=n-k+1}^n \big( (\pi v)^{l}
-v^{-l} \big)}{\prod_{l=1}^k \big( (\pi v)^{l}- v^{-l} \big)} \in
\mathbb Z[v^{\pm 1}]^\pi.
\end{split}
\end{equation}
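For instance, directly from \eqref{eq:nvpi} one computes
\begin{equation*}
\bra{2}_{v,\pi}=\pi v+v^{-1},
\qquad
\bra{3}_{v,\pi}=v^{2}+\pi+v^{-2},
\end{equation*}
which specialize at $\pi=1$ to the usual quantum integers $v+v^{-1}$ and $v^{2}+1+v^{-2}$, and at $\pi=-1$ to their super analogues.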
We will use the notation
$$
v_i=v^{d_i}, \quad t_i=t^{d_i}, \quad \pi_i=\pi^{d_i}, \quad \text{ for } i\in I.
$$
Let $(I,\cdot)$ be a super Cartan datum.
The {\em half covering quantum group} $\chqg$ \cite[\S 1]{CHW1} is the ${\Q(v)^\pi}$-algebra with generators
$\theta_i$ ($i\in I$) and relations
\begin{equation}\label{eq:serrerel}
\sum_{k=0}^{b_{ij}} (-1)^k\pi^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i, \pi_i}
\theta_i^{b_{ij}-k}\theta_j\theta_i^k=0 \quad \text{for all }i\neq j\in I,
\end{equation}
where
$$
b_{ij} = 1- a_{ij}.
$$
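For instance, when $a_{ij}=0$ (so that $b_{ij}=1$), the relation \eqref{eq:serrerel} reduces to
\begin{equation*}
\theta_i\theta_j=\pi^{p(i)p(j)}\theta_j\theta_i,
\end{equation*}
so two odd generators attached to orthogonal simple roots commute at $\pi=1$ but anticommute at $\pi=-1$.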
As first noted in \cite{HW},
the $\mathbb Q$-algebra $\chqg$ admits a bar involution $\bar{\phantom{c}}$ such that
\begin{equation}
\label{eq:bar}
\overline{\theta_i} =\theta_i\; (\forall i\in I), \qquad \overline{\pi} =\pi, \qquad \overline{v} =\pi v^{-1}.
\end{equation}
We define the divided powers
\begin{equation}\label{eq:thetadivpow}
\theta_i^{(n)}=\frac{\theta_i^n}{\bra{n}^!_{v_i,\pi_i}}.
\end{equation}
These elements generate a $\mathbb Z[v^{\pm 1}]^\pi$-subalgebra
of $\chqg$, denoted by ${}_\mathbb Z\chqg$. (In this paper, the notation $\mathbb Z[v^{\pm 1}]$ stands
for the ring of Laurent polynomials in $v$.) Note that $\theta_i^{(n)}$ is bar invariant.
By specialization at $\pi=\pm 1$, we obtain the usual half quantum
group $\chqg|_{\pi=1}$ and the half quantum supergroup
$\chqg|_{\pi=-1}$, respectively. By leaving $\pi$ as an
indeterminate, we can simultaneously address both cases.
The algebra $\chqg$ has a $\mathbb Z[I]\times \mathbb Z_2$-grading obtained by setting $|\theta_i|=i$
and $p(\theta_i)=p(i)$, for $i \in I$.
The algebra $\chqg$ is known \cite{HW, CHW1}
to be equipped with a nondegenerate symmetric bilinear form $(\cdot,\cdot)$
such that
\[(1,1)=(\theta_i,\theta_i)=1,\quad (\theta_ix,y)=(x,e_i'(y)),\]
where $e_i':\chqg\rightarrow \chqg$ is the map satisfying
\begin{equation} \label{eq:e'}
e_i'(1)=0, \quad e_i'(\theta_j)=\delta_{ij},\quad e_i'(xy)=e_i'(x)y+\pi^{p(i)p(x)}v^{-i\cdot |x|} xe_i'(y).
\end{equation}
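For instance, applying \eqref{eq:e'} to a quadratic monomial gives
\begin{equation*}
e_i'(\theta_j\theta_k)=\delta_{ij}\theta_k+\pi^{p(i)p(j)}v^{-i\cdot j}\delta_{ik}\theta_j,
\qquad \text{for } i,j,k\in I.
\end{equation*}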
There exists \cite{CHW2} a (non-super) algebra anti-automorphism of $\chqg$ such that
\begin{equation} \label{eq:rho}
\varrho(\theta_i)=\theta_i\; (\forall i\in I),
\qquad \varrho(xy)=\varrho(y)\varrho(x), \quad \forall x,y \in \chqg.
\end{equation}
\subsection{A twisted multiplication}
\label{subsec:twist}
Fix once and for all a total order $<$ on $I$.
Recall the notation $d_i, a_{ij}$ from Definition~\ref{dfn:scd}.
Let $\phi:\mathbb Z[I]\times \mathbb Z[I]\rightarrow \mathbb Z$
be the bilinear form defined by: for $i,j \in I$,
%
\begin{equation}\label{eq:phidef}
\phi(i,j)=\begin{cases}
d_ia_{ij}&\text{ if } j<i,\\
d_i &\text{ if } j=i,\\
-2p(i)p(j)&\text{ if } j>i.\\
\end{cases}
\end{equation}
Set
\begin{equation*}
\delta_{i<j} =\begin{cases}
0, & \text{ if } i\not < j,
\\
1, & \text{ if } i<j.
\end{cases}
\end{equation*}
By abuse of notation we regard $\mathbb Z_2 =\{0,1\} \subset \mathbb Z$, and so by \eqref{eq:even} we have
\begin{equation*}
\phi(i,j)- \phi(j,i) =(-1)^{\delta_{i<j}} \big(i\cdot j + 2p(i)p(j) \big) \in 2\mathbb Z,\qquad \text{for }i\neq j.
\end{equation*}
In particular, we always have
\begin{equation} \label{eq:phisymmetrized}
\phi(i,j)- \phi(j,i) \equiv i\cdot j + 2p(i)p(j) \mod 4,\qquad \text{for }i\neq j.
\end{equation}
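As an illustration, consider a rank-two bar-consistent super Cartan datum $I=\{i,j\}$ ordered so that $i<j$, with $i$ odd, $d_i=1$, $j$ even, $d_j=2$, and $i\cdot j=-2$ (so $a_{ij}=-2$ and $a_{ji}=-1$). Then \eqref{eq:phidef} gives
\begin{equation*}
\phi(i,j)=-2p(i)p(j)=0,
\qquad
\phi(j,i)=d_ja_{ji}=-2,
\end{equation*}
whence $\phi(i,j)-\phi(j,i)=2\equiv i\cdot j+2p(i)p(j)=-2 \mod 4$, consistent with \eqref{eq:phisymmetrized}.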
Recall that $\tpchqg$ denotes the ${\Q(v)[t^{\pm 1}]^\pi}$-algebra ${\Q(v)[t^{\pm 1}]^\pi}\otimes_{{\Q(v)^\pi}}\chqg$.
Define a new multiplication $*$ on $\tpchqg$ by setting
\begin{equation} \label{eq:x*y}
x*y=t^{\phi(|x|,|y|)} xy,
\end{equation}
for homogeneous $x,y\in \tpchqg$ and then extending it bilinearly.
Since $\phi$ is bilinear, one verifies that $(\tpchqg, *)$ is a $\mathbb Z[I]$-graded associative algebra
generated by $\theta_i$. We will use the notation $x^{*n}=\underbrace{x*x*\ldots*x}_{n}$
for powers taken with respect to this product.
We note that
\begin{equation}\label{eq:varrho*}
\varrho(x*y)=t^{\phi(|x|,|y|)-\phi(|y|,|x|)}\varrho(y)*\varrho(x), \quad \forall x, y\;\mbox{homogeneous}.
\end{equation}
\begin{prop}
The algebra $(\tpchqg,*)$ has a presentation as the $\mathbb Q(v)[t^{\pm 1}]^\pi$-algebra
with generators $\theta_i$ $(i\in I)$ and relations
\begin{equation}
\label{eq:Serre*}
\sum_{k=0}^{b_{ij}} (-1)^k \pi^{\binom{k}{2}p(i)+kp(i)p(j)}
t^{k(b_{ij}-k)d_i + (b_{ij}-k)\phi(i,j)+k\phi(j,i)}
\bbinom{b_{ij}}{k}_{v_i,\pi_i}
\!\!\theta_i^{*\, b_{ij}-k}*\theta_j*\theta_i^{*k}=0,
\end{equation}
for all $i\neq j\in I$.
\end{prop}
\begin{proof}
The relation \eqref{eq:Serre*} for $(\tpchqg,*)$ can be derived directly from the
Serre relation \eqref{eq:serrerel} for $\chqg$, and vice versa. As the computation is straightforward,
we skip the details.
\end{proof}
\begin{rem}
The twisted $*$-product on $\tpchqg$ is a variant of the transformation defined
in \cite[\S 4]{FL12} to relate one-parameter quantum group to two-parameter quantum group.
The precise formula for the bilinear form $\phi$ is new, and it plays a crucial role in this paper.
\end{rem}
\subsection{The twistor $\Psi$}
Recall that we set ${\mathbf{t}}^2=-1$ and
that $\schqg$ is the ${\Q(v,{\mathbf{t}})^\pi}$-algebra ${\Q(v,{\mathbf{t}})^\pi}\otimes_{{\Q(v)[t^{\pm 1}]^\pi}}\tpchqg$.
By specializing $t$ and twisting $v$, we obtain
the following $\mathbb Q({\mathbf{t}})$-algebra isomorphism which plays a fundamental role in this paper.
\begin{thm}\label{thm:halfiso}
There is a $\mathbb Q({\mathbf{t}})$-algebra isomorphism $\Psi:\chqg[{\mathbf{t}}] \rightarrow (\chqg[{\mathbf{t}}], *)$
satisfying
\begin{equation} \label{eq:psi}
\Psi(\theta_i)=\theta_i\, (i\in I), \quad \Psi(v)={\mathbf{t}}^{-1} v, \quad \Psi(\pi)=-\pi,\quad \Psi(xy)=\Psi(x)*\Psi(y).
\end{equation}
\end{thm}
The transformation $\Psi$ is called the {\em twistor} on $\schqg$.
\begin{proof}
Set
\[
S_{ij} :=\sum_{k=0}^{b_{ij}} (-1)^k (-\pi)^{\binom{k}{2}p(i)+kp(i)p(j)}
\bbinom{b_{ij}}{k}_{{\mathbf{t}}_i^{-1} v_i,(-\pi)_i}
\theta_i^{*\, b_{ij}-k}*\theta_j*\theta_i^{*k}.
\]
To show such a $\mathbb Q({\mathbf{t}})$-linear map $\Psi$ exists, it suffices to show that
the images of the generators satisfy \eqref{eq:serrerel} with respect to $*$;
that is,
\begin{equation} \label{eq:Sij}
S_{ij} =0 \quad \text{for all }i\neq j\in I.
\end{equation}
To that end, fix $i\neq j\in I$. Unraveling the definition of $*$, we have
\begin{align*}
S_{ij}=\sum_{k=0}^{b_{ij}} &(-1)^k(-\pi)^{\binom{k}{2}p(i)+kp(i)p(j)} \bbinom{b_{ij}}{k}_{{\mathbf{t}}_i^{-1} v_i,(-\pi)_i}\\
&\times {\mathbf{t}}^{(\binom{k}{2}+\binom{b_{ij}-k}{2}+k(b_{ij}-k))d_i
+ (b_{ij}-k)\phi(i,j)+k\phi(j,i)}\theta_i^{b_{ij}-k}\theta_j\theta_i^{k}.
\end{align*}
One verifies that $\binom{k}{2}+\binom{b_{ij}-k}{2}=\binom{b_{ij}}{2}-k(b_{ij}-k)$
and $\bbinom{b_{ij}}{k}_{{\mathbf{t}}_i^{-1} v_i,(-\pi)_i}
={\mathbf{t}}^{k(b_{ij}-k)d_i}\bbinom{b_{ij}}{k}_{v_i,\pi_i}$.
Using these identities, we rewrite the above identity for $S_{ij}$ as
\begin{align}
{\mathbf{t}}^{-\binom{b_{ij}}{2}d_i} S_{ij}
&= \sum_{k=0}^{b_{ij}} (-1)^k(-\pi)^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{{\mathbf{t}}_i^{-1} v_i,(-\pi)_i}
{\mathbf{t}}^{(b_{ij}-k)\phi(i,j)+k\phi(j,i)}\theta_i^{b_{ij}-k}\theta_j\theta_i^{k}
\notag \\
&= \sum_{k=0}^{b_{ij}} (-1)^k(-\pi)^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i,\pi_i}
{\mathbf{t}}^{\clubsuit}\theta_i^{b_{ij}-k}\theta_j\theta_i^{k},
\label{eq:tS}
\end{align}
where
\begin{equation} \label{eq:club}
\clubsuit=k(b_{ij}-k)d_i + (b_{ij}-k)\phi(i,j)+k\phi(j,i).
\end{equation}
Now let us consider $\clubsuit$.
First assume that $i<j$.
Then we find that
\begin{align*}
\clubsuit
&= k(b_{ij}-k)d_i - 2(b_{ij}-k)p(i)p(j)+kd_ia_{ij}
\\
&= -2\binom{k}{2}d_i+2kp(i)p(j)-2b_{ij}p(i)p(j).
\end{align*}
Next assume that $i>j$.
Then we have
\begin{align*}
\clubsuit
&= k(b_{ij}-k)d_i + (b_{ij}-k)d_ia_{ij}-2kp(i)p(j)
\\
&=
-2\binom{k}{2}d_i+a_{ij}(b_{ij}-2k)d_i-2kp(i)p(j).
\end{align*}
Note that $2a_{ij} d_i \equiv 0 \mod 4$, thanks to \eqref{eq:even}.
In either case when $i<j$ or $i>j$, we see that
\[
\clubsuit\equiv 2\binom{k}{2} d_i+2kp(i)p(j)+c(i,j) \mod 4,
\]
where
\begin{equation*}
c(i,j) =
\begin{cases}
2b_{ij}p(i)p(j), & \text{ if }i<j,
\\
-2d_i \binom{a_{ij}}{2}, & \text{ if } i>j.
\end{cases}
\end{equation*}
Recall ${\mathbf{t}}^2=-1$. By the bar-consistent condition we have $2d_i\equiv 2p(i) \mod 4$,
and thus
${\mathbf{t}}^\clubsuit={\mathbf{t}}^{c(i,j)}(-1)^{\binom{k}{2}p(i)+kp(i)p(j)}$.
Then we can rewrite \eqref{eq:tS}
and apply the Serre relation \eqref{eq:serrerel} for $\chqg$ to conclude that
\[{\mathbf{t}}^{-\binom{b_{ij}}{2}d_i-c(i,j)} S_{ij}=\sum_{k=0}^{b_{ij}}
(-1)^k\pi^{\binom{k}{2}p(i)+kp(i)p(j)}
\bbinom{b_{ij}}{k}_{v_i,\pi_i}
\theta_i^{b_{ij}-k}\theta_j\theta_i^{k}=0.
\]
Therefore, \eqref{eq:Sij} is verified and $\Psi$ is well defined.
Finally, to see that $\Psi$ is an isomorphism, we note that
a similar argument can be used to show that a map $\Phi: (\schqg, *) \rightarrow \schqg$ satisfying
\[\Phi(\theta_i)=\theta_i, \quad \Phi (v)={\mathbf{t}} v, \quad \Phi(\pi)=-\pi,\quad \Phi(x*y)=\Phi(x)\Phi(y),\]
is well defined as well; clearly $\Phi$ is the inverse of $\Psi$.
\end{proof}
Theorem~\ref{thm:halfiso} provides a way to compare the super and non-super half quantum groups via
$\Psi$. Indeed, recall the idempotents $\varepsilon_{\pm}$
from \eqref{eq:pi idempotent}. Then from $\Psi(\pi)=-\pi$, we see that
$\Psi(\varepsilon_{\pm})=\varepsilon_{\mp}$. In particular, $\Psi(\varepsilon_{\pm}\schqg)=\varepsilon_{\mp}\schqg$, in effect
swapping the super and non-super specializations at $\pi=-1$ and $\pi=1$.
\begin{cor}\label{cor:half super to non}
There is a $\mathbb Q({\mathbf{t}})$-linear isomorphism $\Psi:\schqg|_{\pi=1}\rightarrow \schqg|_{\pi=-1}$.
\end{cor}
Using the identification $\schqg|_{\pi=\pm 1}\cong \varepsilon_{\pm}\schqg$,
we have inclusions $\schqg|_{\pi=\pm 1}\hookrightarrow \schqg$.
Theorem~\ref{thm:halfiso} and Corollary~\ref{cor:half super to non} can be summarized symbolically in the following
diagram:
\begin{center}
\begin{tikzpicture}[scale=1]
\draw[<->] (-1.6,1.) -- (1.4,1.) node[midway,above] {$\Psi$};
\draw (-2,1) node {$\chqg[{\mathbf{t}}]$};
\draw (2.3,1) node {$(\chqg[{\mathbf{t}}],*)$};
\draw (-2,-1) node {$\chqg[{\mathbf{t}}] |_{\pi=1}$};
\draw (2.3,-1) node {$\chqg[{\mathbf{t}}] |_{\pi=-1}$};
\draw[right hook->] (-1.9,-.6) -- (-1.9,.6);
\draw[<<-] (2,-.6) -- (2,.6);
\draw[right hook->] (2.4,-.6) -- (2.4,.6);
\draw[<<-] (-2.3,-.6) -- (-2.3,.6);
\draw[snake=snake,-] (-.9,-1) -- (.9,-1);
\draw[<-] (-1.2,-1) -- (-.9,-1);
\draw[<-] (1.4,-1) -- (.9,-1);
\end{tikzpicture}
\end{center}
For $i_1,\ldots, i_n \in I$, we denote
\begin{align*}
\mathbf{N}(i_1+\ldots +i_n) &=\sum_{1\leq r<s\leq n} i_r\cdot i_s,
\\
\mathbf{p}(i_1+\ldots +i_n) &=\sum_{1 \leq r<s \leq n}p(i_r)p(i_s).
\end{align*}
By convention, $\mathbf{N}(i_1)=\mathbf{p}(i_1)=0$.
Note that $\mathbf{N}(\cdot)$ is always an even integer by \eqref{eq:even}.
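For instance, directly from the definitions,
\begin{equation*}
\mathbf{N}(i+j)=i\cdot j,
\qquad
\mathbf{p}(i+j)=p(i)p(j),
\qquad \text{for } i,j\in I;
\end{equation*}
in particular $\mathbf{N}(2i)=2d_i$ and $\mathbf{p}(2i)=p(i)$.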
The following proposition on the $\mathbb Q({\mathbf{t}})$-linear involution
$\varrho$ of $\schqg$ will be used in the next section.
\begin{prop} \label{prop:rhopsi}
The involutions $\Psi\varrho\Psi^{-1}$ and $\varrho$ on $\schqg$ are
equal up to a sign on each weight space. More precisely, we have
\begin{equation}
\label{eq;rhopsi}
\Psi\varrho\Psi^{-1}(x) =(-1)^{\frac{\mathbf{N}(\nu)}{2}+\mathbf{p}(\nu)} \varrho(x), \quad \text{ for } x \in \chqg_\nu.
\end{equation}
\end{prop}
\begin{proof}
We prove the formula \eqref{eq;rhopsi} by induction on the height ${\operatorname{ht}}(|x|)$.
The formula clearly holds when ${\operatorname{ht}}(|x|) \le 1$.
Now assume that the formula holds for homogeneous $x$ and $y$ with ${\operatorname{ht}}(|x|)\ge 1$
and ${\operatorname{ht}}(|y|)\ge 1$; it remains to verify it for $x*y$. Recall ${\mathbf{t}}^2=-1$.
Then by \eqref{eq:rho}, \eqref{eq:phisymmetrized}, \eqref{eq:x*y}, \eqref{eq:varrho*}, and \eqref{eq:psi}, we have
\begin{align*}
\Psi \varrho \Psi^{-1} (x*y)
&= \Psi \big( \varrho (\Psi^{-1}(y))\, \varrho (\Psi^{-1}(x) ) \big)
\\
&= \Psi \varrho \Psi^{-1}(y) * \Psi \varrho \Psi^{-1}(x)
\\
&= (-1)^{\frac{\mathbf{N}(|y|)}{2}+\mathbf{p}(|y|)+\frac{\mathbf{N}(|x|)}{2}+\mathbf{p}(|x|)}
\varrho (y) * \varrho (x)
\\
&= (-1)^{\frac{\mathbf{N}(|y|)}{2}+\mathbf{p}(|y|)+\frac{\mathbf{N}(|x|)}{2}+\mathbf{p}(|x|)}
{\mathbf{t}}^{\phi(|y|,|x|) -\phi(|x|,|y|)} \varrho (x*y)
\\
&= (-1)^{\frac{\mathbf{N}(|x*y|)}{2}+\mathbf{p}(|x*y|)}\varrho(x*y).
\end{align*}
Hence the formula \eqref{eq;rhopsi} holds for $x*y$. This completes the induction.
Since $\mathbf{N}$ and $\mathbf{p}$ only depend on the weight,
$\Psi\varrho\Psi^{-1}$ and $\varrho$ are proportional on each weight space.
The proposition is proved.
\end{proof}
\section{Comparison of crystal lattices and canonical bases}
\label{sec:CB}
\subsection{Comparing crystal lattices}
For $x\in \chqg_\nu$, there is a unique decomposition of the form
\begin{equation} \label{eq:string}
x=\sum_{n\geq 0} \theta_i^{(n)}x_n,
\end{equation}
such that $x_n=0$ for all but finitely many $n$,
$x_n\in \chqg_{\nu- ni}$, and $e_i'(x_n)=0$ for all $n$.
We will refer to this as its {\em $i$-string decomposition}.
Then we define Kashiwara operators
\[{\tilde{e}}_i x=\sum_{n\geq 1} \theta_{i}^{(n-1)}x_n,\]
\[{\tilde{f}}_i x=\sum_{n\geq 0} \theta_{i}^{(n+1)}x_n.\]
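For instance, the $i$-string decomposition of $1$ is $1=\theta_i^{(0)}\cdot 1$, whence
\begin{equation*}
{\tilde{f}}_i^{\,n} 1=\theta_i^{(n)},
\qquad
{\tilde{e}}_i \theta_i^{(n)}=\theta_i^{(n-1)},
\qquad n\geq 1.
\end{equation*}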
Let $\mathcal A\subset \mathbb Q(v)$ be the ring of rational functions with no poles at $v=0$
and so $\mathcal A^\pi=\mathcal A[\pi]\subset {\Q(v)^\pi}$.
The crystal lattice ${\mathcal L}$ of $\chqg$ is the $\mathcal A^\pi$-lattice
generated by \[B=\set{{\tilde{f}}_{i_1}\ldots {\tilde{f}}_{i_n} 1\mid \forall i_1,\ldots, i_n\in I, \forall n}.\]
According to \cite{CHW2},
the set ${\mathcal{B}} :=(B\cup \pi B)+v{\mathcal L}$ is a $\mathbb Q$-basis of ${\mathcal L}/v{\mathcal L}$,
called the (maximal) crystal basis for $\chqg$.
We note the following useful properties of ${\mathcal L}$
(with the same proof as usual \cite{K}).
\begin{lem}\label{lem:latticefacts}
Let $x=\sum_{n\ge 0} \theta_i^{(n)} x_n$ be the $i$-string decomposition
of $x\in \chqg$. Then,
\begin{enumerate}
\item $x\in {\mathcal L}$ if and only if $x_n\in {\mathcal L}$
for all $n$.
\item If $x+v{\mathcal L}\in {\mathcal{B}}$, then
$x=\theta_i^{(n)} x_n$ mod $v{\mathcal L}$ for some $n$
and $x_n+v{\mathcal L}\in {\mathcal{B}}$.
\item If ${\tilde{e}}_j x=0$ for all $j\in I$ then $x=0$; if ${\tilde{e}}_j x\neq 0$ then
${\tilde{f}}_j {\tilde{e}}_j x=x$.
\end{enumerate}
\end{lem}
To take advantage of Theorem~\ref{thm:halfiso}, we need to extend
scalars to include ${\mathbf{t}}$. We let $\mathcal A[{\mathbf{t}}]=\mathbb Q({\mathbf{t}})\otimes_\mathbb Q \mathcal A$, the subring of $\mathbb Q(v,{\mathbf{t}})$
of rational functions with no poles at $v=0$. Then
set ${\mathcal L}[{\mathbf{t}}]=\mathcal A[{\mathbf{t}}]^\pi\otimes_{\mathcal A^\pi} {\mathcal L}$.
The isomorphism $\Psi$ in Theorem \ref{thm:halfiso}, which sends $v\mapsto {\mathbf{t}}^{-1} v$
and $\pi\mapsto -\pi$, clearly preserves the $\mathbb Q({\mathbf{t}})$-algebra $\mathcal A[{\mathbf{t}}]^\pi$.
\begin{lem}\label{lem:psilattice} The following properties hold:
\begin{enumerate}
\item
$\Psi(\theta_i^{(n)})=\theta_i^{(n)}$ for $n\ge 1$;
\item $e_i'(\Psi(x))={\mathbf{t}}^{\phi(i,|x|-i)}\Psi(e_i'(x))$ for all homogeneous
$x\in \schqg$ and $i\in I$;
\item
Let $x\in \schqg_\nu$ with its $i$-string decomposition \eqref{eq:string} for a given $i\in I$.
Then $\Psi(x)$ has the following $i$-string decomposition
\[\Psi(x)=\sum_{n\geq 0} {\mathbf{t}}^{\phi(ni,\nu)-n^2d_i} \theta_i^{(n)}\Psi(x_n).\]
\end{enumerate}
\end{lem}
\begin{proof}
Recall the definitions \eqref{eq:nvpi} of $\bra{n}_{v,\pi}$ and \eqref{eq:psi} of $\Psi$. We have
\[
\Psi \big( \bra{n}_{v,\pi} \big) = \bra{n}_{{\mathbf{t}}^{-1} v,-\pi}={\mathbf{t}}^{n-1}\bra{n}_{v,\pi}.
\]
We prove (1) by induction on $n$. The case when $n=1$ is clear. Assume
$\Psi(\theta_i^{(n-1)})=\theta_i^{(n-1)}$. By definition of the divided power \eqref{eq:thetadivpow}, we have
\begin{align*}
\Psi(\theta_i^{(n)}) &= \Psi \big(\bra{n}_{v_i,\pi_i}^{-1} \theta_i \theta_i^{(n-1)} \big)
\\
&= {\mathbf{t}}_i^{1-n} \bra{n}_{v_i,\pi_i}^{-1} \Psi(\theta_i) * \Psi \big( \theta_i^{(n-1)} \big)
\\
&= {\mathbf{t}}_i^{1-n} \bra{n}_{v_i,\pi_i}^{-1} {\mathbf{t}}_i^{n-1} \theta_i \theta_i^{(n-1)}
=\theta_i^{(n)}.
\end{align*}
Now let us verify (2). It is trivial if ${\operatorname{ht}} |x|\leq 1$.
Otherwise, it suffices to show that if (2) holds for $x,y\in \schqg$,
then it holds for $xy$.
By \eqref{eq:e'} we compute
\begin{align*}
e_i'(\Psi(xy))&={\mathbf{t}}^{\phi(|x|,|y|)}e_i'(\Psi(x)\Psi(y))
\\
&={\mathbf{t}}^{\phi(|x|,|y|)} \big( e_i'(\Psi(x))\Psi(y) +\pi^{p(i)p(x)}v^{-i\cdot |x|} \Psi(x) e_i'(\Psi(y)) \big)
\\
&={\mathbf{t}}^{\phi(i,|y|)}e_i'(\Psi(x))*\Psi(y)+\pi^{p(i)p(x)}v^{- i\cdot |x|}
{\mathbf{t}}^{\phi(|x|,i)}\Psi(x)*e_i'(\Psi(y))
\\
&\stackrel{(\star)}{=}{\mathbf{t}}^{\phi(i,|y|)+\phi(i,|x|-i)}\Psi(e_i'(x)y)+ \pi^{p(i)p(x)} v^{-i\cdot |x|}
{\mathbf{t}}^{\phi(|x|,i)+\phi(i,|y|-i)}\Psi(xe_i'(y))
\\
&\stackrel{(\star\star)}{=}{\mathbf{t}}^{\phi(i,|y|)+\phi(i,|x|-i)}\Psi(e_i'(x)y)+(-\pi)^{p(i)p(x)}({\mathbf{t}}^{-1} v)^{- i\cdot |x|}
{\mathbf{t}}^{\phi(i,|x|)+\phi(i,|y|-i)}\Psi(xe_i'(y))
\\
&={\mathbf{t}}^{\phi(i,|x|+|y|-i)}\Psi \big(e_i'(x)y+\pi^{p(i)p(x)}v^{-i\cdot |x|} xe_i'(y) \big)\\
&={\mathbf{t}}^{\phi(i,|xy|-i)}\Psi(e_i'(xy)),
\end{align*}
where the equation $(\star)$ follows from the inductive assumption and \eqref{eq:psi}
and $(\star\star)$ follows from \eqref{eq:phisymmetrized}.
Finally, we prove (3). Such an identity for $\Psi(x)$ follows
by the definition of $\Psi$ and (1), and the claim that this is an $i$-string decomposition
follows from (2).
\end{proof}
\begin{prop} \label{prop:latticeinv}
The isomorphism $\Psi$ preserves the lattice ${\mathcal L}[{\mathbf{t}}]$,
i.e., $\Psi({\mathcal L}[{\mathbf{t}}])={\mathcal L}[{\mathbf{t}}]$. Furthermore,
$\Psi$ induces an isomorphism $\Psi_0$ on ${\mathcal
L}[{\mathbf{t}}]/v{\mathcal L}[{\mathbf{t}}]$ such that
\[
\Psi_0(x)={\mathbf{t}}^{\ell(x)}x \qquad \forall x\in \mathcal B,
\]
where $\ell(x)$ is some integer depending on $x$.
\end{prop}
\begin{proof}
We first observe that $\Psi({\mathcal L}[{\mathbf{t}}])\subseteq {\mathcal L}[{\mathbf{t}}]$;
this follows by induction on height, using Lemma~\ref{lem:latticefacts}(1) and (3)
together with Lemma~\ref{lem:psilattice}(3). On the other hand,
Lemma \ref{lem:psilattice} can be rewritten in terms of $\Psi^{-1}$ (essentially by replacing
${\mathbf{t}}$ with ${\mathbf{t}}^{-1}$ in (2) and (3)) and so a similar argument
shows $\Psi^{-1} ({\mathcal L}[{\mathbf{t}}])\subseteq {\mathcal L}[{\mathbf{t}}]$.
Therefore $\Psi({\mathcal L}[{\mathbf{t}}]) ={\mathcal L}[{\mathbf{t}}]$.
Let $x+ v{\mathcal L}[{\mathbf{t}}] \in \mathcal B$. We proceed by induction on the height of $x$.
First note that $\Psi_0(1+v{\mathcal L}[{\mathbf{t}}])=1+v{\mathcal L}[{\mathbf{t}}]$
and $\Psi_0(\pi+v{\mathcal L}[{\mathbf{t}}])=-\pi+v{\mathcal L}[{\mathbf{t}}]$, so the proposition holds with
$\ell(1+v{\mathcal L}[{\mathbf{t}}])=0$ and $\ell(\pi+v{\mathcal L}[{\mathbf{t}}])=2$.
If ${\operatorname{ht}} |x|\geq 1$,
then by Lemma~\ref{lem:latticefacts}(2) and (3), there is an $i\in I$
such that we can write $x+ v{\mathcal L}[{\mathbf{t}}]=\theta_i^{(n)}x_n+ v{\mathcal L}[{\mathbf{t}}]$ with
$x_n+ v{\mathcal L}[{\mathbf{t}}] \in {\mathcal{B}}$ and $n>0$.
Then by induction on the height and Lemma~\ref{lem:psilattice}(3),
we have
\begin{equation*}\label{eq:psinoughtcrybas}
\Psi_0(x+ v{\mathcal L}[{\mathbf{t}}])={\mathbf{t}}^{\phi(ni,\nu)-n^2d_i+\ell(x_n+v{\mathcal L}[{\mathbf{t}}])}x+ v{\mathcal L}[{\mathbf{t}}].
\end{equation*}
The proposition is proved.
\end{proof}
It was stated in \cite[Proposition 6.7]{CHW2} that ${\mathcal L}$ is
$\varrho$-invariant. In contrast to the non-super setting (treated
by Lusztig and Kashiwara), this is not easy to verify algebraically
with the tools in {\em loc. cit.}, because the bilinear form on
${\mathcal L}/v{\mathcal L}$ is not positive definite. Here we are
in a position to furnish an algebraic proof of \cite[Proposition~6.7]{CHW2}.
\begin{prop}
The involution $\varrho$ preserves ${\mathcal L}$, i.e.,
$\varrho({\mathcal L})={\mathcal L}$.
\end{prop}
\begin{proof}
Since $\frac{1}{2}\in \mathcal A$, we note that
\[\mathcal L=\varepsilon_+ \mathcal L\oplus \varepsilon_- \mathcal L
\cong \mathcal L|_{\pi=1}\oplus \mathcal L|_{\pi=-1}.\]
We similarly have a decomposition $\varrho=\varrho_+\oplus\varrho_-$
where $\varrho_\pm(x)=\varrho(\varepsilon_\pm x)$, and by definition
we see that under the isomorphism
$\varepsilon_\pm \schqg\cong \schqg|_{\pi=\pm 1}$, $\varrho_\pm$ corresponds
to $\varrho|_{\pi=\pm 1}$.
Since it is known \cite{K, L93} that $\varrho|_{\pi=1}({\mathcal
L}|_{\pi=1})={\mathcal L}|_{\pi=1}$, it suffices to show that
\[
\varrho|_{\pi=-1} ({\mathcal L}|_{\pi=-1})={\mathcal L}|_{\pi=-1}
.\]
Since $\Psi (\pi) =-\pi$, we have $\Psi({\mathcal L}[{\mathbf{t}}]|_{\pi=1})={\mathcal L}[{\mathbf{t}}]|_{\pi=-1}$.
Let $x\in{\mathcal L}|_{\pi=-1}$.
Since $x\in{\mathcal L}|_{\pi=-1}\subset {\mathcal L}[{\mathbf{t}}]|_{\pi=-1}$, by Proposition~\ref{prop:rhopsi} we have
$$
\varrho|_{\pi=-1} (x)
=(-1)^{\frac{\mathbf{N}(|x|)}{2}+\mathbf{p}(|x|)}
\Psi\varrho|_{\pi=1} \Psi^{-1}(x)
\in {\mathcal L}[{\mathbf{t}}]|_{\pi=-1}.
$$
On the other hand,
by definition we have $\varrho(x)\in \chqg|_{\pi=-1}$, and hence
$$
\varrho|_{\pi=-1}(x)\in {\mathcal L}[{\mathbf{t}}]|_{\pi=-1}\cap \chqg|_{\pi=-1}={\mathcal L}|_{\pi=-1}.
$$
The proposition is proved.
\end{proof}
\subsection{Comparing canonical bases}
The bar involution on $\chqg$ in \eqref{eq:bar} extends trivially to an involution
$\bar{\phantom{c}}$ of $\tpchqg$ and $\schqg$
by letting $\overline{t}=t$ and $\overline{{\mathbf{t}}} ={\mathbf{t}}$ respectively.
\begin{lem} \label{lem:bar}
The map $\Psi$ commutes with the bar map on $\schqg$, i.e.,
$\bar{\phantom{c}}\circ\Psi=\Psi\circ\bar{\phantom{c}}$.
\end{lem}
\begin{proof}
By the definition of $\Psi$ given in Theorem~\ref{thm:halfiso},
the only nontrivial thing to check is the commutativity when acting on $v$. Indeed, recalling ${\mathbf{t}}^2=-1$, we have
\[
\overline{\Psi(v)}={\mathbf{t}}^{-1}\pi v^{-1}=-\pi({\mathbf{t}}^{-1} v)^{-1}=\Psi(\overline{v}).\]
The lemma is proved.
\end{proof}
As proven in \cite{CHW2} (generalizing the approach of \cite{K}),
there exists a globalization map $G:{\mathcal L}[{\mathbf{t}}]/v{\mathcal
L}[{\mathbf{t}}]\rightarrow {\mathcal L}[{\mathbf{t}}]\cap \bar{{\mathcal L}}[{\mathbf{t}}]$
such that for each $b\in {\mathcal{B}}$, $G(b)$ is the unique bar-invariant
vector in ${\mathcal L}[{\mathbf{t}}]$ such that $G(b)+v{\mathcal L}[{\mathbf{t}}]=b$.
The set $\set{G(b):b\in {\mathcal{B}}}$ is called the canonical $\pi$-basis
for $\chqg$.
Specializing $\pi=1$ yields the usual canonical basis of
Lusztig and Kashiwara, while specializing $\pi=-1$ yields a
(signed) canonical basis for the half quantum supergroup.
Even though we have established a connection at the level of
crystal lattices and crystal bases, it is somewhat surprising that
$\Psi$ also provides a direct and precise link between
the canonical bases for the two specializations.
Recall $\ell(\cdot)$ from Proposition~\ref{prop:latticeinv}, which
is integer-valued but may not be even-integer-valued in general.
\begin{thm}\label{thm:CB comparison}
For any $b\in {\mathcal{B}}$, we have
\[
\Psi(G(b))={\mathbf{t}}^{\ell(b)}G(b).
\]
In particular, $\Psi(G(b)|_{\pi=1})$ is proportional to $G(b)|_{\pi=-1}$.
\end{thm}
\begin{proof}
It follows by Lemma~\ref{lem:bar} that $\Psi(G(b))$ is bar-invariant.
It follows by the definition of the maps and Proposition~\ref{prop:latticeinv} that
\[\Psi(G(b))+ v{\mathcal L}[{\mathbf{t}}] =\Psi(b) ={\mathbf{t}}^{\ell(b)} b.\]
Since $\overline{{\mathbf{t}}}={\mathbf{t}}$, the uniqueness of $G(b)$ gives ${\mathbf{t}}^{-\ell(b)}\Psi(G(b))=G(b)$, and thus
$\Psi(\varepsilon_+G(b))=\varepsilon_{-}{\mathbf{t}}^{\ell(b)}G(b)$.
\end{proof}
\begin{example}
Let $(I,\cdot)$ be the super Cartan datum associated to $\mathfrak{osp}(1|4)$
with $I=\set{\mathbf{1},\mathbf{2}}$ (where $\mathbf{1}$ is the odd simple root) and Dynkin diagram given by
$$\qquad\qquad\qquad
\xy
(-25,0)*{\bullet};(-15,0)*{\circ}**\dir{=};
(-20,0)*{<};
(-25,3)*{\mathbf{1}};(-15,3)*{\mathbf{2}}
\endxy
$$
Then
\[p(\mathbf{1})=1,\quad p(\mathbf{2})=0;\]
\[\mathbf{1}\cdot \mathbf{1}=2,\quad \mathbf{1}\cdot \mathbf{2}
=\mathbf{2}\cdot \mathbf{1}=-2,\quad \mathbf{2}\cdot \mathbf{2}=4;\]
\[\phi(\mathbf{1},\mathbf{1})=1,\quad \phi(\mathbf{1},\mathbf{2})= 0,
\quad \phi(\mathbf{2},\mathbf{1})=-2,\quad
\phi(\mathbf{2},\mathbf{2})=2.\]
It is an easy computation that
\[{\tilde{f}}_{\mathbf{1}}{\tilde{f}}_{\mathbf{2}}{\tilde{f}}_{\mathbf{1}} 1
=\theta_{\mathbf{1}}(\theta_{\mathbf{2}}\theta_{\mathbf{1}}-v^2\theta_{\mathbf{1}}\theta_{\mathbf{2}})+
v^2\theta^{(2)}_{\mathbf{1}}\theta_{\mathbf{2}}.\]
In particular, $G({\tilde{f}}_{\mathbf{1}}{\tilde{f}}_{\mathbf{2}}{\tilde{f}}_{\mathbf{1}} 1+v{\mathcal L})=
\theta_{\mathbf{1}}\theta_{\mathbf{2}}\theta_{\mathbf{1}}$,
and $\Psi \big(G({\tilde{f}}_{\mathbf{1}}{\tilde{f}}_{\mathbf{2}}{\tilde{f}}_{\mathbf{1}} 1+v{\mathcal L}) \big)
={\mathbf{t}}^{-1}\theta_{\mathbf{1}}\theta_{\mathbf{2}}\theta_{\mathbf{1}}$.
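The last equality may be checked directly. Since $\Psi(\theta_{\mathbf{1}})=\theta_{\mathbf{1}}$ and
$\Psi(\theta_{\mathbf{2}})=\theta_{\mathbf{2}}$, and assuming the twisted multiplication of
Theorem~\ref{thm:halfiso} is given on homogeneous elements by $x*y={\mathbf{t}}^{\phi(|x|,|y|)}xy$, we have
\[
\Psi(\theta_{\mathbf{1}}\theta_{\mathbf{2}}\theta_{\mathbf{1}})
=\theta_{\mathbf{1}}*\theta_{\mathbf{2}}*\theta_{\mathbf{1}}
={\mathbf{t}}^{\phi(\mathbf{1},\mathbf{2})+\phi(\mathbf{1}+\mathbf{2},\mathbf{1})}
\theta_{\mathbf{1}}\theta_{\mathbf{2}}\theta_{\mathbf{1}}
={\mathbf{t}}^{0+(1-2)}\theta_{\mathbf{1}}\theta_{\mathbf{2}}\theta_{\mathbf{1}}
={\mathbf{t}}^{-1}\theta_{\mathbf{1}}\theta_{\mathbf{2}}\theta_{\mathbf{1}}.
\]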
\end{example}
\section{The twistor of modified covering quantum group}
\label{sec:modified}
\subsection{The modified covering quantum group}
To facilitate the definition of modified covering quantum group
next, we recall the definition of the covering quantum group $\cqg$.
We recall that $b_{ij} = 1- a_{ij}.$
\begin{dfn} \cite{CHW1}
\label{dfn:cqg}
The covering quantum group
$\cqg$ associated to a super root datum $(Y, X, I, \cdot)$ is the $\mathbb Q(v)^{\pi}$-algebra with generators
$E_i, F_i$, $K_\mu$, and $J_\mu$, for $i\in I$ and $\mu\in Y$, subject to the
relations:
\begin{equation}\label{eq:JKrels}
J_\mu J_\nu=J_{\mu+\nu},\quad K_\mu K_\nu=K_{\mu+\nu},\quad K_0=J_0=J_\nu^2=1,\quad
J_\mu K_\nu=K_\nu J_\mu,
\end{equation}
\begin{equation}\label{eq:Jweightrels}
J_\mu E_i=\pi^{\ang{\mu,i'}} E_i J_\mu,\quad J_\mu F_i=\pi^{-\ang{\mu,i'}} F_i J_\mu,
\end{equation}
\begin{equation}\label{eq:Kweightrels}
K_\mu E_i=v^{\ang{\mu,i'}} E_i K_\mu,\quad K_\mu F_i=v^{-\ang{\mu,i'}} F_i K_\mu,
\end{equation}
\begin{equation}\label{eq:commutatorrelation}
E_iF_j-\pi^{p(i)p(j)}F_jE_i=\delta_{ij}\frac{J_{d_i i}K_{d_i i}-K_{-d_i i}}{\pi_i v_i- v_i^{-1}},
\end{equation}
\begin{equation}\label{eq:Eserrerel}
\sum_{k=0}^{b_{ij}} (-1)^k\pi^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i,\pi_i}
E_i^{b_{ij}-k}E_jE_i^k=0 \;\; (i\neq j),
\end{equation}
\begin{equation}\label{eq:Fserrerel}
\sum_{k=0}^{b_{ij}} (-1)^k\pi^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i,\pi_i}
F_i^{b_{ij}-k}F_jF_i^k=0 \;\; (i\neq j),
\end{equation}
for $i,j\in I$ and $\mu,\nu\in Y$.
\end{dfn}
Again by specialization at $\pi=\pm 1$, we obtain the usual quantum
group $\cqg|_{\pi=1}$ (with extra central elements) and the super
quantum group $\cqg|_{\pi=-1}$.
We extend scalars and set $\tpcqg=\mathbb Q(v)[t^{\pm 1}]^\pi\otimes_{\mathbb Q(v)^\pi} \cqg$.
We endow $\cqg$ with a $\mathbb Z[I]$-grading by setting
\begin{equation}
|E_i|=i,\quad |F_i|=-i,\quad |J_\mu|=|K_\mu|=0,
\end{equation}
and also endow $\cqg$ with a $\mathbb Z_2$-grading by setting
\begin{equation}
p(E_i)=p(F_i)=p(i),\quad p(J_\mu)=p(K_\mu)=0.
\end{equation}
The definition of the covering quantum group $\cqg$ is compatible
with that of the modified covering quantum group $\cmqg$, which we now introduce.
\begin{dfn}
The {\it modified covering quantum group} $\cmqg$
associated to the root datum $(Y, X, I, \cdot)$ is defined to be
the associative ${\Q(v)^\pi}$-algebra without unit which is
generated by the symbols $1_{\lambda}, E_i1_{\lambda}$ and $F_i1_{\lambda}$, for $\lambda\in X$
and $i \in I$, subject to the relations:
\begin{eqnarray}
&1_{\lambda} 1_{\lambda'} =\delta_{\lambda, \lambda'} 1_{\lambda},
\vspace{6pt}\label{eq:modified idemp rel}\\
& (E_i1_{ \lambda }) 1_{\lambda'} = \delta_{\lambda, \lambda'} E_i1_{ \lambda}, \quad
1_{\lambda'} (E_i1_{\lambda}) = \delta_{\lambda', \lambda+ i'} E_i1_{\lambda},
\vspace{6pt}\label{eq:modified E rel}\\
& (F_i1_{ \lambda}) 1_{\lambda'}= \delta_{\lambda, \lambda'} F_i1_{ \lambda}, \quad
1_{\lambda'} (F_i1_{\lambda})= \delta_{\lambda', \lambda- i'} F_i1_{\lambda},
\vspace{6pt}\label{eq:modified F rel}\\
& (E_iF_j-\pi^{p(i)p(j)}F_jE_i)1_{\lambda}=\delta_{ij}\bra{\ang{i, \lambda}}_{v_i,\pi_i}1_\lambda,
\vspace{6pt}
\label{eq:modified comm rel}\\
& \sum_{k=0}^{b_{ij}}(-1)^k \pi^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i,\pi_i}
E^{b_{ij}-k}_iE_jE^{k}_i1_{\lambda}=0
\;\; (i\neq j),
\vspace{6pt}
\label{eq:modified E Serre}\\
&\sum_{k=0}^{b_{ij}}(-1)^k \pi^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i,\pi_i}
F^{b_{ij}-k}_iF_jF^{k}_i1_{\lambda}=0
\;\; (i\neq j),
\label{eq:modified F Serre}
\end{eqnarray}
where $i,j \in I$, $\lambda, \lambda'\in X$, and we use the notation $xy1_\lambda=(x1_{\lambda+|y|})(y1_\lambda)$
for $x,y\in \cqg$.
\end{dfn}
As with the half covering quantum groups, if we set $\pi=1$ then $\cmqg|_{\pi=1}$
is the modified quantum group of Lusztig, whereas if $\pi=-1$ then $\cmqg|_{\pi=-1}$
is the modified quantum supergroup.
The algebra $\cmqg$ has a (left) $\cqg$-action given by
\[
E_i\cdot x1_\lambda=(E_i1_{\lambda+|x|})x1_\lambda,\quad F_i
\cdot x1_\lambda=(F_i1_{\lambda+|x|})x1_\lambda,
\]
%
\[
K_\nu\cdot x1_\lambda=v^{\ang{\nu,\lambda+|x|}} x1_\lambda,
\quad J_{\nu}\cdot x1_\lambda=\pi^{\ang{\nu,\lambda+|x|}}x1_{\lambda}.
\]
There is also a similar right $\cqg$-action on $\cmqg$.
Then as in \cite{Lu1}, $\cmqg$ can be identified with the ${\Q(v)^\pi}$-algebra on the symbols $x1_\lambda$
for $x\in \cqg$ and $\lambda\in X$ satisfying
\begin{equation}\label{eq:altmodifiedrel}
x1_\lambda y1_\mu=\delta_{\lambda,\mu+|y|}xy1_\mu,\quad
K_\nu1_\lambda=v^{\ang{\nu,\lambda}} 1_\lambda,\quad J_{\nu}1_\lambda
=\pi^{\ang{\nu,\lambda}}1_{\lambda}.
\end{equation}
Denote by $_{\mathbb Z} \!\cmqg$ the $\mathbb Z[v^{\pm 1}]^\pi$-subalgebra of $\cmqg$ generated by
$1_{\lambda}, E_i^{(n)} 1_{\lambda}$ and $F_i^{(n)}1_{\lambda}$, for $n \ge 1$, $\lambda\in X$
and $i \in I$ (here we recall the definition of divided powers \eqref{eq:thetadivpow}).
Then $_{\mathbb Z} \!\cmqg$ is a $\mathbb Z[v^{\pm 1}]^\pi$-form of $\cmqg$.
\subsection{The twistor $\dot{\Psi}$}
Recall the bilinear form $\phi(\cdot,\cdot):\mathbb Z[I]\times
\mathbb Z[I]\rightarrow \mathbb Z$ from \eqref{eq:phidef}, and that we have an
embedding $\mathbb Z[I]\hookrightarrow X$ given by $i\mapsto i'$. Fix once
and for all a transversal $C \subset X$ for the coset
representatives of $X/\mathbb Z[I]$. Then we define the bilinear pairing
$\dot\phi(\cdot,\cdot):\mathbb Z[I]\times X\rightarrow \mathbb Z$ by
\begin{equation}
\dot{\phi}(\nu,\mu'+\lambda)=\phi(\nu,\mu),
\quad \ \text{for all }\nu,\mu\in \mathbb Z[I], \; \lambda\in C.
\end{equation}
The map $\dot\Psi$ in the following theorem can be viewed as a counterpart,
in the setting of the modified covering quantum group, of the isomorphism $\Psi$
in Theorem~\ref{thm:halfiso}. Note that, in contrast to $\Psi$,
we do not need to use a twisted multiplication in this setting.
By base change we set, as usual, $\cmqg[{\mathbf{t}}] = \mathbb Q(v,{\mathbf{t}})^\pi \otimes_{\Q(v)^\pi} \cmqg$ and
$_{\mathbb Z} \!\cmqg[{\mathbf{t}}] =\mathbb Z[v^{\pm 1}, {\mathbf{t}}]^\pi \otimes_{\mathbb Z[v^{\pm 1}]^\pi} {}_{\mathbb Z} \!\cmqg$.
\begin{thm}\label{thm:modauto}
\begin{enumerate}
\item
There is an automorphism
$\dot{\Psi}$ of the $\mathbb Q({\mathbf{t}})$-algebra $\cmqg[{\mathbf{t}}]$ of order $4$
such that, for all $i\in I$ and $\lambda \in X$,
\[
\dot\Psi(1_\lambda)=1_\lambda,\quad \dot\Psi(E_i1_\lambda)
={\mathbf{t}}^{d_i\ang{i,\lambda}-\dot{\phi}(i,\lambda)} E_i1_{\lambda},\quad
\dot\Psi(F_i1_\lambda)={\mathbf{t}}^{\dot{\phi}(i,\lambda)} F_i1_{\lambda},
\]
\[
\dot\Psi(\pi)=-\pi,\quad \dot\Psi(v)={\mathbf{t}}^{-1}v.\]
\item The automorphism $\dot{\Psi}$ preserves the $\mathbb Z[v^{\pm 1},{\mathbf{t}}]^\pi$-form $_{\mathbb Z} \!\cmqg[{\mathbf{t}}]$.
\end{enumerate}
\end{thm}
\begin{proof}
(1) Once we verify that $\dot\Psi$ is a well-defined endomorphism,
checking the images of the generators shows that it is an
automorphism of order four. To verify that $\dot\Psi$ is well defined, it
suffices to check that the images of the generators satisfy the
relations. It is straightforward to verify \eqref{eq:modified idemp
rel}--\eqref{eq:modified F rel}, and we leave this as an exercise to
the reader.
Let us check \eqref{eq:modified comm rel}. We compute
\begin{align*}
(& {\mathbf{t}}^{d_i\ang{i,\lambda-j'} -\dot{\phi}(i,\lambda-j')} E_i1_{\lambda-j'})({\mathbf{t}}^{\dot{\phi}(j,\lambda)} F_j1_{\lambda})
\\
&\; - (-\pi)^{p(i)p(j)} ({\mathbf{t}}^{\dot{\phi}(j,\lambda+i')} F_j1_{\lambda+i'})
({\mathbf{t}}^{d_i\ang{i,\lambda}-\dot{\phi}(i,\lambda)}E_i1_{\lambda})\\
&\quad ={\mathbf{t}}^{d_i\ang{i,\lambda-j'}-\dot{\phi}(i,\lambda-j')+\dot{\phi}(j,\lambda)}
(E_iF_j-(-\pi)^{p(i)p(j)}{\mathbf{t}}^{i\cdot j + \phi(j,i)-\phi(i,j)}F_jE_i)1_\lambda
\\
&\quad ={\mathbf{t}}^{d_i\ang{i,\lambda-j'}-\dot{\phi}(i,\lambda-j')+\dot{\phi}(j,\lambda)}
(E_iF_j-\pi^{p(i)p(j)}F_jE_i)1_\lambda\\
&\quad =\delta_{ij}{\mathbf{t}}_i^{\ang{i,\lambda}-1}\bra{\ang{i,\lambda}}_{v_i,\pi_i}\\
&\quad =\delta_{ij}\bra{\ang{i,\lambda}}_{{\mathbf{t}}_i^{-1}v_i,-\pi_i}.
\end{align*}
Next, let us check the Serre relations. As the proof of \eqref{eq:modified F Serre} is
similar, we will only check \eqref{eq:modified E Serre}.
Let us set
\[E_{ij}(k)=\dot\Psi(E_i^{b_{ij}-k}E_jE_i^k1_\lambda).\]
We want to verify that
\begin{equation}\label{eq:dotpsimodifiedserre}
\sum_{k=0}^{b_{ij}}(-1)^k (-\pi)^{\binom{k}{2}p(i)
+kp(i)p(j)}\bbinom{b_{ij}}{k}_{{\mathbf{t}}^{-1}v_i,-\pi_i}E_{ij}(k)=0.
\end{equation}
First note that
\[
\dot\Psi(E_i^s1_\lambda)
=\prod_{t=1}^s{\mathbf{t}}^{d_i\ang{i,\lambda+(t-1)i'}-\dot{\phi}(i,\lambda+(t-1)i')}E_i1_{\lambda+(t-1)i'}
={\mathbf{t}}^{\binom{s}{2}d_i+s(d_i\ang{i,\lambda}-\dot{\phi}(i,\lambda))}E_i^s1_\lambda.
\]
By using the factorization
$E_i^{b_{ij}-k}E_jE_i^k1_\lambda
=E_i^{b_{ij}-k}1_{\lambda+j'+ki'} E_j1_{\lambda+ki'}E_i^k1_\lambda$
and the identity $\binom{k}{2}+\binom{b_{ij}-k}{2}+k(b_{ij}-k)=\binom{b_{ij}}{2}$, we compute that
\[E_{ij}(k)={\mathbf{t}}^{\spadesuit_{ij}(k)+\heartsuit_{ij}}E^{b_{ij}-k}_iE_jE^{k}_i1_{\lambda},\]
where
\begin{align*}
\heartsuit_{ij}&=b_{ij}(i\cdot j + d_i\ang{i,\lambda}-\dot{\phi}(i,\lambda))
+d_j\ang{j,\lambda}-\dot{\phi}(j,\lambda)+\binom{b_{ij}}{2},
\\
\spadesuit_{ij}(k)&=-k\phi(j,i)-(b_{ij}-k)\phi(i,j).
\end{align*}
Then
\begin{align*}
\bbinom{b_{ij}}{k}_{{\mathbf{t}}^{-1}v_i,-\pi_i}E_{ij}(k)
={\mathbf{t}}^{\heartsuit_{ij}+\spadesuit_{ij}(k)
+k(b_{ij}-k)d_i}\bbinom{b_{ij}}{k}_{v_i,\pi_i}E_i^{b_{ij}-k}E_jE_i^k1_{\lambda}.
\end{align*}
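The exponent $k(b_{ij}-k)d_i$ in the last equality can be traced back to the identity
$\bra{n}_{{\mathbf{t}}^{-1}v,-\pi}={\mathbf{t}}^{n-1}\bra{n}_{v,\pi}$:
writing $\bra{n}!_{v,\pi}$ for the associated factorial and assuming
$\bbinom{b}{k}_{v,\pi}=\bra{b}!_{v,\pi}\big/\big(\bra{k}!_{v,\pi}\bra{b-k}!_{v,\pi}\big)$, one obtains
\[
\bra{n}!_{{\mathbf{t}}^{-1}v,-\pi}={\mathbf{t}}^{\binom{n}{2}}\bra{n}!_{v,\pi},
\qquad
\bbinom{b}{k}_{{\mathbf{t}}^{-1}v,-\pi}
={\mathbf{t}}^{\binom{b}{2}-\binom{k}{2}-\binom{b-k}{2}}\bbinom{b}{k}_{v,\pi}
={\mathbf{t}}^{k(b-k)}\bbinom{b}{k}_{v,\pi}.
\]
Applied in the subscripted variables (with ${\mathbf{t}}_i={\mathbf{t}}^{d_i}$), the exponent
$k(b_{ij}-k)$ becomes $k(b_{ij}-k)d_i$.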
Recall $\clubsuit$ from \eqref{eq:club}.
Since $\phi(i_1,i_2)\in 2\mathbb Z$ for $i_1\neq i_2\in I$, we see that
\[
\spadesuit_{ij}(k)+k(b_{ij}-k)d_i
\equiv \clubsuit
\equiv 2\binom{k}{2} + 2k p(i) p(j)+c(i,j)\mod 4.
\]
Then we see that
\begin{align*}
\sum_{k=0}^{b_{ij}}&(-1)^k (-\pi)^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{{\mathbf{t}}^{-1}v_i,-\pi_i}E_{ij}(k)
\\&={\mathbf{t}}^{\heartsuit_{ij}+c(i,j)}\sum_{k=0}^{b_{ij}}(-1)^k \pi^{\binom{k}{2}p(i)+kp(i)p(j)}
\bbinom{b_{ij}}{k}_{v_i,\pi_i}
E^{b_{ij}-k}_iE_jE^{k}_i1_{\lambda}=0.
\end{align*}
This finishes the verification of the Serre relations, whence (1).
Part (2) follows immediately from (1) by noting that $\dot\Psi$ preserves the divided powers up to
some integer power of ${\mathbf{t}}$.
\end{proof}
Since $\dot\Psi(\pi) = -\pi$, we obtain the following variant of Theorem~\ref{thm:modauto}.
\begin{thm}
\label{thm-Udot}
The automorphism $\dot{\Psi}$ of $\cmqg[{\mathbf{t}}]$
induces an isomorphism of $\mathbb Q({\mathbf{t}})$-algebras $\cmqg[{\mathbf{t}}]|_{\pi=1} \cong \cmqg[{\mathbf{t}}]|_{\pi=-1}$
and an isomorphism of $\mathbb Z[{\mathbf{t}}]$-algebras ${}_\mathbb Z\cmqg[{\mathbf{t}}]|_{\pi=1}\cong {}_\mathbb Z\cmqg[{\mathbf{t}}]|_{\pi=-1}$.
In particular, we have embeddings
$\cmqg|_{\pi=\pm 1}\hookrightarrow \cmqg[{\mathbf{t}}]|_{\pi=\mp 1}$ and
${}_\mathbb Z\cmqg|_{\pi=\pm 1}\hookrightarrow {}_\mathbb Z\cmqg[{\mathbf{t}}]|_{\pi=\mp 1}$.
\end{thm}
Hence, one may view the algebras
$\cmqg|_{\pi= 1}$ and $\cmqg|_{\pi=-1}$ as two different rational forms
of the algebra $\cmqg[{\mathbf{t}}]|_{\pi=1}$ (or equivalently, of $\cmqg[{\mathbf{t}}]|_{\pi=-1}$).
We shall refer to the automorphism $\dot\Psi$ in Theorem~\ref{thm:modauto}
as a {\em twistor} on $\cmqg[{\mathbf{t}}]$.
\begin{rem}
\begin{enumerate}
\item The definition of the covering quantum group $\cqg$ and the
modified covering quantum group $\cmqg$
makes sense for super Cartan datum
without the bar-consistent condition (d) in Definition~\ref{dfn:scd}.
But the above theorems require the bar-consistent condition.
\item
The integer $\dot{\phi}(i,\lambda)$ admits a geometric interpretation
(compare the integers $e_{\mu, n\alpha_i}$ and $f_{\mu, n\alpha_i}$ in~\cite[5.1]{Li12}).
\end{enumerate}
\end{rem}
\subsection{Category equivalences}
Recall that a $\cmqg$-module $M$ over ${\Q(v)}$ is called {\em unital}
if each $m\in M$ is a finite sum of the form $m=\sum_{\lambda\in
X}1_\lambda m$.
When specializing $\pi$ to $\pm 1$, we obtain the definition of a
unital $\cmqg|_{\pi=\pm 1}$-module over ${\Q(v)}$. We denote the
category of unital modules over ${\Q(v)}$ of $\cmqg$ (resp.,
$\cmqg|_{\pi=1}$, $\cmqg|_{\pi=- 1}$) by $\dot{\mathcal C}$ (resp.,
$\dot{\mathcal C}_{\pi=1}$, $\dot{\mathcal C}_{\pi=- 1}$). We have $\dot{\mathcal C}
=\dot{\mathcal C}_{\pi=1} \oplus \dot{\mathcal C}_{\pi=- 1}$. We denote the category
of unital modules over the field $\mathbb Q(v,{\mathbf{t}})$ of $\cmqg$ (resp.,
$\cmqg|_{\pi=1}$, $\cmqg|_{\pi=- 1}$) by $\dot{\mathcal C}^{\mathbf{t}}$ (resp.,
$\dot{\mathcal C}^{\mathbf{t}}_{\pi=1}$, $\dot{\mathcal C}^{\mathbf{t}}_{\pi=- 1}$).
The following is an immediate consequence
of Theorem \ref{thm-Udot}.
\begin{prop}\label{prop:dotcatequiv}
The twistor $\dot\Psi$ induces a category equivalence between
$\dot{\mathcal C}^{\mathbf{t}} _{\pi=1}$ and $\dot{\mathcal C}^{\mathbf{t}} _{\pi=-1}$.
\end{prop}
The main novelty in the definition
of the covering quantum group $\cqg$ is the additional generators $J_\mu$ in the Cartan subalgebra,
which lead to a natural formulation of the integral form of $\cqg$ and weight modules of $\cqg$
(see \cite{CHW1}).
A $\cqg$-module $M$ over ${\Q(v)}$ (resp.,
$\cqg[{\mathbf{t}}]$-module over $\mathbb Q(v,{\mathbf{t}})$)
is called a {\em weight module} of $\cqg$ (resp., $\cqg[{\mathbf{t}}]$) if
$M=\oplus_{\lambda\in X} M_\lambda$ with
\[
M_\lambda=\set{
m\in M \mid K_\nu m=v^{\ang{\nu,\lambda}} m,\ J_\nu m=\pi^{\ang{\nu,\lambda}} m,
\ \ \forall\ \nu\in Y}.
\]
We denote the category of weight modules of $\cqg$ (resp.,
$\cqg[{\mathbf{t}}]$) over the respective fields as above by ${\mathcal C}$ (resp.,
${\mathcal C}^{\mathbf{t}}$). Similarly, we have categories of weight modules
of $\cqg[{\mathbf{t}}]|_{\pi=\pm 1}$ over $\mathbb Q(v,{\mathbf{t}})$ denoted by
${\mathcal C}^{\mathbf{t}}_{\pi =\pm 1}$.
One can suitably formulate the BGG category $\mathcal O^{\mathbf{t}}$,
$\mathcal O^{\mathbf{t}}_{\pi=1}$, $\mathcal O^{\mathbf{t}}_{\pi=-1}$
as subcategories of ${\mathcal C}^{\mathbf{t}}$, ${\mathcal C}^{\mathbf{t}}_{\pi=1}$ and ${\mathcal C}^{\mathbf{t}}_{\pi=-1}$, respectively.
Recall the definition of the highest weight $\cqg$-modules $V(\lambda)$ over ${\Q(v)}$, for $\lambda \in X$;
see \cite[Proposition 2.6.5]{CHW1}. Then $V(\lambda)_{\pi=\pm 1}$
is a simple $\cqg|_{\pi=\pm 1}$-module. Let
$X^+ =\{\lambda \in X \mid \ang{i, \lambda} \in \mathbb Z_{\ge 0}, \forall i\in I\}$ be the set of dominant weights.
Then $\{V(\lambda)_{\pi=1} \mid \lambda \in X^+\}$ and $\{V(\lambda)_{\pi=-1} \mid \lambda \in X^+\}$
form a complete list of
pairwise non-isomorphic simple integrable modules of $\cqg|_{\pi=1}$ and of $\cqg|_{\pi=-1}$, respectively.
\begin{prop}
\label{Weight-A}
\begin{enumerate}
\item
The categories ${\mathcal C}^{\mathbf{t}}_{\pi=1}$ and ${\mathcal C}^{\mathbf{t}}_{\pi=-1}$ are equivalent.
\item
The characters of the integrable $\cqg|_{\pi=-1}$-module $V(\lambda)_{\pi=-1}$
and the integrable $\cqg|_{\pi=1}$-module $V(\lambda)_{\pi=1}$ coincide, for each $\lambda\in X^+$.
\end{enumerate}
\end{prop}
\begin{proof}
The equivalence in (1) follows by a sequence of category equivalences:
$$
{\mathcal C}^{\mathbf{t}}_{\pi=1} \cong \dot{\mathcal C}^{\mathbf{t}} _{\pi=1}
\cong \dot{\mathcal C}^{\mathbf{t}} _{\pi=-1} \cong {\mathcal C}^{\mathbf{t}}_{\pi=-1},
$$
where the second equivalence follows by Proposition~\ref{prop:dotcatequiv},
the first equivalence is easy \cite[\S 23.1.4]{L93}, and
the third equivalence is completely analogous.
The category equivalences above send each object $M$ to the same underlying
space $M$ with the weight structure unchanged; in particular, the equivalence
from ${\mathcal C}^{\mathbf{t}}_{\pi=1}$ to ${\mathcal C}^{\mathbf{t}}_{\pi=-1}$ sends
$V(\lambda)_{\pi=1}$ to $V(\lambda)_{\pi=-1}$ while preserving weight space
decompositions, which proves (2). The proposition is proved.
\end{proof}
\begin{cor} \label{cor:catO}
The BGG categories $\mathcal O^{\mathbf{t}}_{\pi=1}$ of modules over
the quantum group $\cqg[{\mathbf{t}}]|_{\pi=1}$ and $\mathcal O^{\mathbf{t}}_{\pi=-1}$
over the quantum supergroup $\cqg[{\mathbf{t}}] |_{\pi=-1}$ are equivalent via $\dot\Psi$.
\end{cor}
\begin{rem}
Thanks to the isomorphism of the integral forms $_{\mathbb Z} \!\cmqg|_{\pi=1}$ and $_{\mathbb Z} \!\cmqg|_{\pi=-1}$
in Theorem~\ref{thm-Udot},
a version of category equivalence similar to Proposition~\ref{Weight-A}
holds when specializing $v$ to a root of unity.
\end{rem}
\begin{rem}
Proposition~\ref{Weight-A}(2) was stated in \cite{CHW1} without proof; another proof was
given in \cite{KKO13}. A version of Proposition~\ref{Weight-A}(1), on the equivalence of the categories of weight modules
of somewhat different algebras over $\mathbb C(v)^\pi$,
also appeared in \cite{KKO13} with a very different proof.
Note that the notion of weight modules in {\em loc. cit.} is nonstandard and subtle, and
the algebras formulated therein over $\mathbb C(v)$ (or $\mathbb C(v)^\pi$)
do not seem to admit rational forms or integral forms or modified forms as ours;
in particular, their formulation does not make sense when $v$ is a root of unity.
\end{rem}
\begin{rem}
Let $X^{\text{ev}} = \{\lambda \in X \mid \ang{i,\lambda} \in 2\mathbb Z, \forall i
\in I_{\overline{1}} \}$. Denote by $\mathcal O^{\mathbf{t}}_{\pi=1,v=1}$ (resp.,
$\mathcal O^{\mathbf{t}}_{\pi=-1,v=1}$) the BGG category of
$X^{\text{ev}}$-weighted modules over the Lie algebra (resp.,
Lie superalgebra) associated to the super root datum $(Y,X, I,
\cdot)$. Using the technique of quantization of Lie bialgebras,
Etingof-Kazhdan \cite{EK} established an equivalence of categories
between $\mathcal O^{\mathbf{t}}_{\pi=1,v=1}$ and $\mathcal O^{\mathbf{t}}_{\pi=1}$.
As a super analogue, Geer \cite{Ge} similarly established a
quantization of Lie bisuperalgebras (Geer's super analogue was
formulated for the basic Lie superalgebras of finite type, but it makes
sense in the Kac-Moody setting, as done by Etingof-Kazhdan). This leads to an
equivalence of categories between $\mathcal O^{\mathbf{t}}_{\pi=-1,v=1}$ and
$\mathcal O^{\mathbf{t}}_{\pi=-1}$ (where the restriction to the weights in
$X^{\text{ev}}$ is necessary; see the classification of integrable
modules in \cite{K}). When combining with our category equivalence
in Corollary~\ref{cor:catO}, we obtain an equivalence of highest
weight categories between BGG categories for Lie algebras and
superalgebras. This equivalence provides an irreducible character
formula in $\mathcal O^{\mathbf{t}}_{\pi=-1,v=1}$ whenever the corresponding
irreducible module of $\mathcal O^{\mathbf{t}}_{\pi=1,v=1}$ admits a solution
of the Kazhdan-Lusztig conjecture (by Beilinson-Bernstein,
Brylinski-Kashiwara, Kashiwara-Tanisaki).
\begin{center}
\begin{tikzpicture}[scale=1]
\draw[->] (-1.6,1.) -- (1.4,1.) node[midway,above] {$\dot\Psi$};
\draw (-2,1) node {$\mathcal O^{\mathbf{t}}_{\pi=1}$};
\draw (2.3,1) node {$\mathcal O^{\mathbf{t}}_{\pi=-1}$};
\draw (-2,-1) node {$\mathcal O^{\mathbf{t}}_{\pi=1,v=1}$};
\draw (2.3,-1) node {$\mathcal O^{\mathbf{t}}_{\pi=-1,v=1}$};
\draw (2.1,0) node {G};
\draw (-2.3,0) node {EK};
\draw[->] (-1.9,-.6) -- (-1.9,.6);
\draw[->] (2.4,-.6) -- (2.4,.6);
\draw[snake=snake,-] (-.9,-1) -- (.9,-1);
\draw[<-] (-1.2,-1) -- (-.9,-1);
\draw[<-] (1.4,-1) -- (.9,-1);
\end{tikzpicture}
\end{center}
\end{rem}
\subsection{Extended covering quantum groups}
We first work with a formal parameter $t$.
Let $\mathbf{T}$ be the group algebra (in multiplicative form) of the group
$\mathbb Z[I]\times Y$, that is, the $\mathbb Q(v)[t^{\pm 1}]^\pi$-algebra with generators
$T_{\mu}, \Upsilon_{\nu}$, for $\mu\in Y$ and $\nu\in \mathbb Z[I]$, and relations
%
\begin{equation}\label{eq:Trelns}
T_\mu T_{\mu'}=T_{\mu+\mu'},\quad \Upsilon_\nu \Upsilon_{\nu'}=\Upsilon_{\nu+\nu'},
\quad T_\mu \Upsilon_\nu=\Upsilon_\nu T_\mu, \quad T_0=\Upsilon_0=1.
\end{equation}
We define an action of $\mathbf{T}$ on $\tpcqg$ by
\begin{equation}
T_\mu\cdot x=t^{\ang{\mu,\eta'}} x,\quad \Upsilon_\nu\cdot x=t^{\phi(\nu,\eta)} x\text{ for all }x\in \tpcqg_\eta.
\end{equation}
Then we form the semidirect product $\mathbb Q(v)[t^{\pm 1}]^\pi$-algebra
$\widehat\cqg[t^{\pm 1}]=\mathbf{T} \ltimes \tpcqg$ with respect to the above action of $\mathbf T$;
that is, $TxT^{-1}=T\cdot x$ for all $T\in \mathbf{T}$ and $x \in \tpcqg$.
By specialization, we obtain a $\mathbb Q(v,{\mathbf{t}})^\pi$-algebra
$\widehat\cqg[{\mathbf{t}}]$, which is called the {\em extended covering quantum group}.
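Concretely, in $\widehat\cqg[t^{\pm 1}]$ (and hence also after specialization), the defining
relation $TxT^{-1}=T\cdot x$ unwinds on the generators of $\tpcqg$ to
\[
T_\mu E_i T_\mu^{-1}=t^{\ang{\mu,i'}}E_i,\qquad
T_\mu F_i T_\mu^{-1}=t^{-\ang{\mu,i'}}F_i,
\]
\[
\Upsilon_\nu E_i \Upsilon_\nu^{-1}=t^{\phi(\nu,i)}E_i,\qquad
\Upsilon_\nu F_i \Upsilon_\nu^{-1}=t^{-\phi(\nu,i)}F_i,
\]
while $T_\mu$ and $\Upsilon_\nu$ commute with each $K_\lambda$ and $J_\lambda$, as these have weight $0$.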
\begin{prop}\label{prop:qgiso}
There is a $\mathbb Q({\mathbf{t}})$-algebra automorphism $\widehat{\Psi}$ on $\widehat\cqg[{\mathbf{t}}]$
such that
\[
\widehat{\Psi}(E_i)={\mathbf{t}}_i^{-1}\Upsilon_i^{-1} T_{d_i i}E_i,\quad
\widehat{\Psi}(F_i)
=F_i\Upsilon_i,\quad
\widehat{\Psi}(K_\nu)=T_{-\nu} K_\nu,\quad
\widehat{\Psi}(J_\nu)=T_\nu^{2} J_\nu,\]
\[
\widehat{\Psi}(T_\nu)=T_\nu,\quad
\widehat{\Psi}(\Upsilon_\nu)=\Upsilon_\nu, \quad
\widehat{\Psi}(v)={\mathbf{t}}^{-1} v, \quad \widehat{\Psi}(\pi)=-\pi.\]
\end{prop}
The automorphism $\widehat{\Psi}$ will be
called the {\em twistor} on $\widehat\cqg[{\mathbf{t}}]$.
\begin{proof}
We first show that such a map is well defined by showing that relations
\eqref{eq:JKrels}-\eqref{eq:Fserrerel} and \eqref{eq:Trelns} are satisfied
by the images of the generators. The relations
\eqref{eq:JKrels}-\eqref{eq:Kweightrels}
and \eqref{eq:Trelns} are straightforward to verify,
and we leave this to the reader.
Let us verify \eqref{eq:commutatorrelation}. On one hand, we have
\begin{align}
\notag \widehat{\Psi}(E_i) & \widehat{\Psi}(F_j)-\widehat{\Psi}(\pi)^{p(i)p(j)}\widehat{\Psi}(F_j)\widehat{\Psi}(E_i)\\
\notag &={\mathbf{t}}_i^{-1}\Upsilon_i^{-1} T_{d_i i}E_iF_j\Upsilon_j
-(-\pi)^{p(i)p(j)}F_j\Upsilon_j{\mathbf{t}}^{-1}_i\Upsilon_i^{-1} T_{d_i i}E_i\\
\notag &={\mathbf{t}}^{-d_i+d_j-\phi(j,i)}\Upsilon_i^{-1}\Upsilon_jT_{d_i i}
\Big( E_iF_j-{\mathbf{t}}^{i\cdot j+\phi(j,i)-\phi(i,j)}(-\pi)^{p(i)p(j)}F_jE_i \Big)\\
\label{eq:PsicommLHS}&={\mathbf{t}}^{-d_i+d_j-\phi(j,i)}\Upsilon_i^{-1}\Upsilon_jT_{d_i i}(E_iF_j-\pi^{p(i)p(j)}F_jE_i),
\end{align}
where the last equality follows from \eqref{eq:phisymmetrized} and ${\mathbf{t}}^2=-1$.
On the other hand,
\begin{align}
\notag \delta_{ij}\frac{\widehat{\Psi}(J_{d_i i})
\widehat{\Psi}(K_{d_i i})-\widehat{\Psi}(K_{-d_i i})}{\widehat{\Psi}(\pi_i)\widehat{\Psi}(v_i)-\widehat{\Psi}(v_i)^{-1}}
&=\delta_{ij}\frac{T_{d_i i} J_{d_i i}K_{d_i i}-T_{d_i i} K_{-d_i i}}{(-{\mathbf{t}})^{-d_i} \pi_iv_i-{\mathbf{t}}_i v_i^{-1}}\\
\label{eq:PsicommRHS}&=\delta_{ij}
{\mathbf{t}}_i^{-1}T_{d_i i}\frac{ J_{d_i i}K_{d_i i}-K_{-d_i i}}{\pi_i v_i-v_i^{-1}}.
\end{align}
Then comparing \eqref{eq:PsicommLHS} and \eqref{eq:PsicommRHS}, we see that they
are equal for all $i,j\in I$, whence \eqref{eq:commutatorrelation}.
It remains to check the Serre relations \eqref{eq:Eserrerel} and
\eqref{eq:Fserrerel}. As these computations are entirely similar,
we only prove \eqref{eq:Fserrerel}.
Recalling \eqref{eq:club}, we see that
\[(F_i\Upsilon_i)^{b_{ij}-k}(F_j\Upsilon_j)(F_i\Upsilon_i)^{k}
={\mathbf{t}}^{\binom{b_{ij}}{2}-k(b_{ij}-k)d_i+\clubsuit}F_i^{b_{ij}-k}F_jF_i^k\Upsilon_{b_{ij}i+j}.
\]
Hence, as in the proof of Theorem~\ref{thm:halfiso}, we have
\begin{align*}
&\sum_{k=0}^{b_{ij}} (-1)^k(-\pi)^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{{\mathbf{t}}^{-1}v_i,-\pi_i}
(F_i\Upsilon_i)^{b_{ij}-k}(F_j\Upsilon_j)(F_i\Upsilon_i)^k\\&
=\parens{\sum_{k=0}^{b_{ij}} (-1)^k\pi^{\binom{k}{2}p(i)+kp(i)p(j)}\bbinom{b_{ij}}{k}_{v_i,\pi_i}
F_i^{b_{ij}-k}F_jF_i^k}{\mathbf{t}}^{\binom{b_{ij}}{2}+c(i,j)}\Upsilon_{b_{ij}i+j}=0.
\end{align*}
The proposition is proved.
\end{proof}
\begin{rem} \label{rem:UUdot}
Here is a heuristic way of thinking about the extended covering quantum group and its twistor.
The algebra $\cqg$ acts on $\cmqg$ via
\[1\mapsto \sum_{\lambda\in X} 1_\lambda,\; E_i\mapsto \sum_{\lambda\in X} E_i 1_\lambda,\;
F_i\mapsto \sum_{\lambda\in X} F_i 1_\lambda,\;
K_\nu\mapsto \sum_{\lambda\in X} v^{\ang{\nu,\lambda}}1_\lambda,\;
J_\nu\mapsto \sum_{\lambda\in X} \pi^{\ang{\nu,\lambda}}1_\lambda.\]
Then $\dot\Psi$ induces an alternate
$\cqg$-module structure on $\cmqg$ via
\begin{align*}
\notag1\mapsto \sum &1_\lambda,\quad E_i\mapsto \sum_{\lambda\in X}
{\mathbf{t}}^{d_i \ang{i,\lambda}-\dot{\phi}(i,\lambda)}E_i 1_\lambda,\quad
F_i\mapsto \sum_{\lambda\in X} {\mathbf{t}}^{\dot{\phi}(i,\lambda)}F_i 1_\lambda,\\
&\label{eq:new action on dot U}
K_\nu\mapsto \sum_{\lambda\in X} ({\mathbf{t}}^{-1}v)^{\ang{\nu,\lambda}}1_\lambda,\quad
J_\nu\mapsto \sum_{\lambda\in X} (-\pi)^{\ang{\nu,\lambda}}1_\lambda.
\end{align*}
Merging these two actions
leads to the introduction of new semisimple elements $T_\nu$ and $\Upsilon_\mu$ such that
$
T_\nu\mapsto \sum_{\lambda\in X} {\mathbf{t}}^{\ang{\nu,\lambda}}1_\lambda
$
and
$\Upsilon_\mu\mapsto \sum_{\lambda\in X}{\mathbf{t}}^{\dot{\phi}(\mu,\lambda)} 1_\lambda.
$
\end{rem}
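To make the weight-space heuristics above concrete, one can check the defining commutator relation numerically for the ordinary quantum group $U_q(sl_2)$ acting on a finite-dimensional irreducible representation. This is only a toy illustration: the $\pi$- and ${\mathbf{t}}$-twistings discussed in this paper are ignored, and the representation matrices below follow one standard convention rather than anything taken from the text.

```python
import numpy as np

# Toy check of [E, F] = (K - K^{-1})/(q - q^{-1}) on the (n+1)-dimensional
# irrep of U_q(sl2), with quantum integer [m] = (q^m - q^{-m})/(q - q^{-1}).
# K acts diagonally on weight vectors, as in the idempotent picture above.

def qint(m, q):
    """Quantum integer [m]_q."""
    return (q**m - q**(-m)) / (q - q**(-1))

def rep_sl2(n, q):
    """Matrices of E, F, K on the irrep with highest weight n, basis v_0,...,v_n."""
    dim = n + 1
    E, F, K = np.zeros((dim, dim)), np.zeros((dim, dim)), np.zeros((dim, dim))
    for k in range(dim):
        K[k, k] = q**(n - 2 * k)               # K v_k = q^{n-2k} v_k
        if k > 0:
            E[k - 1, k] = qint(n - k + 1, q)   # E v_k = [n-k+1] v_{k-1}
        if k < n:
            F[k + 1, k] = qint(k + 1, q)       # F v_k = [k+1] v_{k+1}
    return E, F, K

q = 1.7
for n in range(1, 6):
    E, F, K = rep_sl2(n, q)
    assert np.allclose(E @ F - F @ E, (K - np.linalg.inv(K)) / (q - 1 / q))
```

On each weight space the assertion amounts to the elementary identity $[k+1][n-k]-[k][n-k+1]=[n-2k]$.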
\begin{rem}
A construction similar to the twistor $\widehat{\Psi}$ in Proposition~\ref{prop:qgiso} appeared in \cite{KKO13}.
In contrast to {\em loc.\ cit.}, our formula for $\widehat{\Psi}$ is fully explicit.
\end{rem}
By specialization, the twistor on $\widehat\cqg[{\mathbf{t}}]$
leads to an isomorphism between the extended super and non-super quantum groups.
\begin{cor}\label{cor:isoquantumgrps}
The $\mathbb Q({\mathbf{t}})$-algebras $\widehat\cqg[{\mathbf{t}}] |_{\pi=1}$ and $\widehat\cqg[{\mathbf{t}}] |_{\pi=-1}$
are isomorphic under $\widehat{\Psi}$.
\end{cor}
The twistor $\Psi: \chqg[{\mathbf{t}}] \rightarrow (\chqg [{\mathbf{t}}], *)$ in Theorem~\ref{thm:halfiso}
is intimately related to the twistor $\widehat{\Psi}: \widehat \cqg[{\mathbf{t}}] \rightarrow \widehat \cqg[{\mathbf{t}}]$
in Proposition~\ref{prop:qgiso}, as we shall describe.
There is an injective $\mathbb Q(v)[t^{\pm 1}]^\pi$-algebra homomorphism (see \cite[\S 2.1]{CHW1})
\begin{equation}\label{eq:minusmap}
(\cdot)^-: \tpchqg \longrightarrow \tpcqg,
\end{equation}
such that $\theta_i^-=F_i$ for all $i\in I.$
\begin{lem}
There is an injective $\mathbb Q(v)[t^{\pm 1}]^\pi$-algebra homomorphism
%
\begin{equation*}
\chi: (\tpchqg, *) \longrightarrow \widehat\cqg[t^{\pm 1}]
\end{equation*}
such that
$$
\chi(x) = x^-\Upsilon_{\nu}, \qquad \forall x \in \tpchqg_\nu.
$$
\end{lem}
\begin{proof}
One checks directly from the definitions that, for homogeneous $x,y\in \tpchqg$,
\[
(x*y)^-\Upsilon_{|x|+|y|}=x^-\Upsilon_{|x|}
y^-\Upsilon_{|y|}.
\]
The lemma is proved.
\end{proof}
Now specializing $t$ to ${\mathbf{t}}$ for $\chi$ and $(\cdot)^-$ above, we obtain
an injective $\mathbb Q(v, {\mathbf{t}})^\pi$-algebra homomorphism
$
\chi: (\schqg, *) \longrightarrow \widehat\cqg[{\mathbf{t}}]$,
and an injective $\mathbb Q(v,{\mathbf{t}})^\pi$-algebra homomorphism
$(\cdot)^-: \schqg \longrightarrow \cqg[{\mathbf{t}}]$.
The following proposition can be verified directly from the definitions; we leave the details to the reader.
\begin{prop}
We have a
commutative diagram of $\mathbb Q({\mathbf{t}})$-algebra homomorphisms:
\begin{center}
\begin{tikzpicture}
\draw (.2,0) node{$\schqg$};
\draw (0,-1.7) node{$(\schqg,*)$};
\draw (2.5,0) node{$\cqg[{\mathbf{t}}]$};
\draw (2.5,-1.7) node{$\widehat \cqg[{\mathbf{t}}]$};
\draw[right hook-latex,thick] (.7,0)--(2,0) node[midway,above]{$(\cdot)^-$};
\draw[right hook-latex,thick] (.7,-1.7)--(2,-1.7) node[midway,above]{$\chi$};
\draw[-latex,thick] (.2,-.3) -- (.2,-1.4)node[midway,left]{$\Psi$};
\draw[-latex,thick] (2.5,-.3) -- (2.5,-1.4)node[midway,right]{$\widehat{\Psi}$};
\end{tikzpicture}
\end{center}
\end{prop}
\section*{Introduction}
Analysing various kinematic domains of Virtual Compton Scattering (VCS),
one may obtain clues to a range of problems in strong interaction dynamics.
If $s\gg-t$ and $Q^2$ is fixed, the amplitude of VCS may be described
(Fig.1a) by the sum of $t$--channel Reggeon (R) exchanges \cite{dg},
\begin{equation} {\cal M}=\sum_{R} s^{\alpha_R(t)} \beta_R(t,Q^2) e^{-{1\over 2}i
\pi \alpha_R(t)},\end{equation}
where the sum is taken over all possible Reggeons with positive
charge parity, $i.e.$ Pomeron ($\alpha(0)\approx 1$), $f-a_2$
($\alpha(0)\approx 1/2$) and pion ($\alpha(0)\approx 0$) trajectories.
This implies that at asymptotically high energies ($s\to\infty$), the energy
behavior of the Compton amplitude is governed by Pomeron exchange.
However, as I demonstrate below, if $s$ and $Q^2$ are in the range of a
few (GeV/c)$^2$, and $t\approx-m_\pi^2$, where $m_\pi$ is the pion mass,
a relative contribution from the $\pi^0$--exchange becomes
large, exceeding even the diffractive (Pomeron) contribution at low $t$.
Taking advantage of different quantum numbers and phases of $t$--channel
exchanges, it is possible to separate these contributions \cite{afanas94}
and measure the form factor of $\gamma^*\pi^0\to\gamma$ transition
($F_{\gamma^*\gamma\pi^0}(Q^2)$) as a
function of $Q^2$. Theoretically, the
leading--twist QCD contribution to the
$F_{\gamma^*\gamma\pi^0}$ transition form factor is given by the quark
triangle diagram (Fig.1b) with no hard gluon exchange, making this process
zeroth order in the QCD running coupling constant $\alpha_s$,
with the non--perturbative dynamics contained in the pion
distribution amplitude
$\varphi_\pi(x)$. \cite{bl} The function $\varphi_\pi(x)$
cannot be predicted by perturbative QCD (except for its asymptotic behavior)
and it is crucial for
understanding whether or not one may apply a perturbative QCD description
to exclusive processes at given energies. The data on
$F_{\gamma^*\gamma\pi^0}(Q^2)$ obtained at $e^+e^-$ colliders
\cite{cello,cleo} currently available for $Q^2$ up to 8 (GeV/c)$^2$
indicate that
$\varphi_\pi(x)$ is close to its asymptotic form, and therefore
`soft', nonperturbative mechanisms are dominant in this energy range,
in agreement with theoretical predictions based on QCD Sum Rules.\cite{rr}
This conclusion is so important that it is desirable to have an
independent measurement of $F_{\gamma^*\gamma\pi^0}(Q^2)$,
and such a measurement via VCS was proposed earlier at
Jefferson Lab,\cite{loi} for transferred momenta
$Q^2=1.0\div 3.5$ (GeV/c)$^2$.
Higher values of $Q^2$ are also possible for
the Jefferson Lab energy upgrade.
As $Q^2$ increases, and $s$ stays large, in the low--$t$ limit one may
observe scaling behavior of the VCS amplitude predicted recently by
X.~Ji \cite{ji} and A.~Radyushkin. \cite{deeply}
\begin{figure}
\hskip 1. in\psfig{figure=regge1.ps,height=1.5 in}
\vskip -1.9 in
\hskip 3. in\psfig{figure=anomaly1.ps,height=1.5 in}
\hskip 1.3 in a)
\hskip 1.5 in b)
\caption{a) VCS amplitude in terms of $t$--channel Reggeon exchanges, where
$R$ stands for the Pomeron, $f-a_2$, and pion trajectories, and the dashed
blob denotes the two--photon transition form factor;
b) Quark triangle diagram for $\gamma^*M\to\gamma$ transition, where
$M$ is a pseudoscalar meson.}
\label{fig:regge}
\end{figure}
\section*{Transition Form Factors of Leading $t$--channel Exchanges}
Regge phenomenology proved very successful at describing energy dependence
of hadronic total cross sections and differential
cross--sections at low $t$ and high $s$. For VCS, it predicts the slope
of the Regge trajectory $\alpha_R(t)$ to be independent of the photon
virtuality $Q^2$. This is an important result of Regge theory which needs
to be tested experimentally, as was indicated earlier (see, $e.g.$,
Ref. \cite{brodsky}).
As far as form factors of $\gamma^*\,\mathrm{Reggeon}\rightarrow\gamma$ transitions are
concerned, one needs a microscopic theory of Reggeon exchanges to be able
to predict their $Q^2$--dependence. This task is challenging
for the `soft' Pomeron exchange limited to a few GeV energy scale, and
this problem is still far from being solved in QCD.
However, QCD predictions for the two--photon transition
form factors
related to the $t$--channel of VCS are available for the case of
$\pi^0$ exchange. For a review of theoretical approaches,
see Ref.~\cite{rr}. I would also like to mention here effective quark models for
$F_{\gamma^*\gamma\pi^0}$ based on the extended NJL--model \cite{ibg} and
the model of dynamical dressing of propagators and vertices,\cite{offshell}
the latter also addressed
the off-shell behavior of $F_{\gamma^*\gamma\pi^0}$ for the proposed Jefferson
Lab experiment. \cite{loi}
The lightest meson which can
be exchanged in the $t$--channel of VCS is $\pi^0$. It may be possible
to extract
the corresponding form factor doing a Chew--Low extrapolation like
for the charged pion form factor
measurements \cite{mack} scheduled at Jefferson Lab.
\section*{Exchange of $\pi^0$}
In real Compton scattering on the proton at high $s$ and low $t$, the
main contribution to the cross section is known to be diffractive, due to
the Pomeron exchange.
I will demonstrate here that the situation is different for the
case of virtual photons, because of the strong enhancement of
the $\pi^0$--exchange term.
At small
$t$, the $\pi$--trajectory is determined by the pion Born term.
The matrix element of the corresponding transition is
\begin{eqnarray}
{\cal M}_{\gamma^*p\to\gamma p}^{(\pi^0)} = e^2 F_{\gamma^*\gamma\pi^0}(Q^2)
g_{\pi NN} F_{\pi NN}(t)
D_\pi(t)\epsilon_{\mu\nu\alpha\beta}\varepsilon_\mu\varepsilon_\nu'^{*}
q_\alpha q_\beta ' {\bar u'}\gamma_5 u,
\end{eqnarray}
where $\varepsilon (\varepsilon')$ is the polarization 4--vector of initial
(final) photon, and $q (q')$ is its momentum ($Q^2\equiv-q^2$), and
$u(u')$ is a bispinor of the initial (final) proton. The pion propagator
has the form $i D_\pi(t)=(t-m_\pi^2)^{-1}$, and I also assumed a conventional
monopole form for the cut--off form factor
$F_{\pi NN}(t)=\Lambda^2/(\Lambda^2-t)$.
Define four coincidence structure functions (SF) for the (unpolarized)
$p(e,e'\gamma)p$ cross section as
\begin{eqnarray}
{d^5\sigma\over d E' d\Omega_e d \Omega_p}&=&{\alpha^3\over 16\pi^3}{E'\over E}
{|{\bf p}|_{c.m.}\over m W}{1\over Q^2} {1\over 1-\epsilon}
[\sigma_T+\epsilon \sigma_L+ \epsilon \cos(2\varphi) \sigma_{TT}\nonumber \\
& &+\sqrt{2\epsilon (1+\epsilon)} \cos(\varphi) \sigma_{LT}],\\
\epsilon^{-1}&\equiv &1-2{{\bf q}_{lab}^2\over q^2} \tan^2{\theta_e\over 2},
\end{eqnarray}
where $E(E')$ is the initial (final) electron energy, $\theta_e$ is the
electron scattering angle, $\varphi$ is the
azimuthal angle, and $m$ is the proton mass. The matrix element given by Eq.(2)
yields the following contributions to SF:
\begin{eqnarray}
\sigma_T^{(\pi^0)}&=&[(|{\bf q}|-q_0\cos\theta)^2+
(|{\bf q}|\cos\theta-q_0)^2] X, \\
\sigma_L^{(\pi^0)}&=& 2 Q^2 \sin^2(\theta) X, \\
\sigma_{LT}^{(\pi^0)}&=&2 \sqrt{Q^2} (|{\bf q}|-q_0\cos\theta)\sin(\theta) X,\\
\sigma_{TT}^{(\pi^0)}&=& Q^2 \sin^2(\theta) X, \\
X&=&{-t\over (t-m_\pi^2)^2} [q_0' F_{\pi NN}(t) g_{\pi NN}
F_{\gamma^*\gamma\pi^0}(Q^2)]^2,
\end{eqnarray}
where the c.m. energy of the initial (final) photon is given
by $q_0={q^2+W^2-m^2\over 2 W}$ ($q'_0={W^2-m^2\over 2 W}$),
and $\theta$ is the c.m. angle of outgoing photon.
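As a numerical illustration (not from the paper), one can evaluate the angular factors of Eqs.(5)--(8) at sample kinematics, dropping the common factor $X$ of Eq.(9), and confirm the suppression pattern quoted below: $\sigma_{LT}$ vanishes linearly in $\theta$, while $\sigma_L$ and $\sigma_{TT}$ vanish quadratically. The sample values of $W$ and $Q^2$ are illustrative choices only.

```python
import math

# Angular factors of the pi^0-exchange structure functions, Eqs. (5)-(8),
# with the common factor X of Eq. (9) dropped. Energies in GeV; gamma* p
# c.m.-frame kinematics; sample numbers only.

m = 0.938                                   # proton mass
W, Q2 = 2.5, 2.0                            # c.m. energy, photon virtuality

q0 = (-Q2 + W**2 - m**2) / (2 * W)          # c.m. energy of the virtual photon (q^2 = -Q^2)
qabs = math.sqrt(q0**2 + Q2)                # c.m. momentum of the virtual photon

def sf_ratios(theta):
    """(sigma_T, sigma_L, sigma_LT, sigma_TT) divided by the common factor X."""
    sT = (qabs - q0 * math.cos(theta))**2 + (qabs * math.cos(theta) - q0)**2
    sL = 2 * Q2 * math.sin(theta)**2
    sLT = 2 * math.sqrt(Q2) * (qabs - q0 * math.cos(theta)) * math.sin(theta)
    sTT = Q2 * math.sin(theta)**2
    return sT, sL, sLT, sTT

sT1, sL1, sLT1, sTT1 = sf_ratios(0.01)
sT2, sL2, sLT2, sTT2 = sf_ratios(0.02)
assert abs(sT2 / sT1 - 1) < 0.01     # sigma_T stays finite at forward angles
assert abs(sLT2 / sLT1 - 2) < 0.01   # sigma_LT ~ theta
assert abs(sL2 / sL1 - 4) < 0.01     # sigma_L ~ theta^2
assert abs(sTT2 / sTT1 - 4) < 0.01   # sigma_TT ~ theta^2
```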
The $\pi^0$ pole contributes the most to
transverse photoabsorption in VCS; the corresponding SF
is shown in Fig.~\ref{fig:sigmat}. Contributions to the other structure
functions are suppressed
at small $t$ by the factor of $\theta$ for $\sigma_{LT}$ and $\theta^2$ for
$\sigma_{L}$ and $\sigma_{TT}$.
The overall factor $-t/(t-m_\pi^2)^2$ from Eq.(9) has a pole
at the unphysical point $t=m_\pi^2$, vanishes at $t=0$, and reaches its maximum
in the physical region at $t=-m_\pi^2$. As can be seen from Fig.~\ref{fig:sigmat},
dependence of the $\pi^0$ contribution on $\theta$
changes dramatically when
going from real to virtual photons. When $Q^2=0$, it is \emph{suppressed} at
forward angles ($i.e.$, low $t$); in contrast, for $Q^2\neq 0$, it is
\emph{peaked} at forward angles. This result is due to the Lorentz structure of
the $\gamma^*\gamma\pi^0$--vertex defined by Eq.(2):
$\epsilon_{\mu\nu\alpha\beta}\varepsilon_\mu\varepsilon_\nu'^{*}
q_\alpha q_\beta '$.
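The claimed behavior of the overall factor $-t/(t-m_\pi^2)^2$ is easy to verify numerically; the following sketch (illustrative, not from the paper) checks that it vanishes at $t=0$ and peaks at $t=-m_\pi^2$ with peak value $1/(4m_\pi^2)$.

```python
import numpy as np

# The kinematic factor -t/(t - m_pi^2)^2 multiplying the pi^0-exchange
# structure functions: pole at the unphysical point t = m_pi^2, zero at t = 0,
# maximum in the physical (spacelike) region at t = -m_pi^2.

m_pi2 = 0.140**2                              # m_pi^2 in GeV^2
t = np.linspace(-0.2, -1e-4, 20001)           # physical region t < 0
f = -t / (t - m_pi2)**2

t_peak = t[np.argmax(f)]
assert abs(t_peak + m_pi2) < 1e-3             # maximum at t = -m_pi^2
assert abs(f.max() - 1 / (4 * m_pi2)) < 1e-3  # peak value 1/(4 m_pi^2)
```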
\begin{figure}[t]
\vskip -.7 in
\hskip 1.5 in \psfig{figure=plst.ps,height=3.5 in}
\vskip -.7 in
\caption{Contribution of the $\pi^0$--exchange Born diagram to the
(dimensionless) structure function $\sigma_T$ of transverse virtual
photoabsorption; the invariant mass of the final $\gamma p$ state is
taken to be $W=2.5$ GeV.
\label{fig:sigmat}}
\vskip -.2 in
\end{figure}
The enhancement of $\pi^0$ exchange makes it large enough to reach,
and even exceed, the magnitude of the diffractive term. For instance,
assuming the VMD--model for the $\gamma^*\,\mathrm{Pomeron}\rightarrow\gamma$ transition form
factor, the $\pi^0$ contribution to $\sigma_T$ is evaluated to be
25\% higher than the Pomeron contribution at $Q^2=$ 2 (GeV/c)$^2$, and three times higher at
$Q^2=$ 3 (GeV/c)$^2$. (These results are obtained at $W=$ 2.5 GeV, $t=-m_\pi^2$.)
It creates favorable conditions for extracting the form factor
$F_{\gamma^*\gamma\pi^0}$ from VCS experiments.
\section*{Separation of the $t$--channel exchanges}
One can attempt to disentangle various $t$--channel exchanges in VCS.
Indeed, the Pomeron a) has quantum numbers of vacuum (except for its spin),
b) contributes almost purely to the imaginary part of VCS amplitude at low
$t$, and c) does not flip the nucleon spin. On the other hand, the pion
a) is isovector and pseudoscalar, b) contributes to the real part of
VCS amplitude at low $t$, and c) flips the nucleon spin.
These circumstances may be used to design the experiments in
order to separate these mechanisms, especially if
nuclear targets are used. For instance,
coherent VCS on $^4$He excludes pseudoscalar $t$--channel exchanges
($\pi^0,\eta,\eta'$), thus providing useful information on $Q^2$
evolution of the diffractive (Pomeron) term. Coherent VCS on deuterium target
would rule out the $\pi^0$--exchange, but keep exchange of other pseudoscalars
with zero isospin. On the other hand, if the hadronic target undergoes an
isovector transition of any kind ($e.g.$ threshold deuteron dissociation),
it would be completely due to the $\pi^0$ exchange.
In addition,
both diffractive and pseudoscalar $t$--channel exchanges are strongly
suppressed in interference between VCS and the Bethe--Heitler amplitudes
for the case of unpolarized particles. If, however, spin effects are
included, the asymmetry and/or recoil polarization due to the
proton polarized normal to the reaction plane would be caused by interference
between diffractive and $\pi^0$ terms (in which case the electron beam
polarization is not required), while the sideways (in--plane)
asymmetry/polarization with longitudinally polarized electrons would be
mainly caused by interference between
$\pi^0$--exchange and the Bethe--Heitler amplitudes.
\section*{Summary}
$\bullet$ Exchanges of $\pi^0$ and the Pomeron in the $t$--channel provide the
largest contributions to the amplitude of VCS on the proton at small $t$ and
$s$ in the region of a few GeV$^2$.
$\bullet$ It is possible to separate these contributions and study
$Q^2$--dependence of the corresponding form factors.
$\bullet$ For the form factor $F_{\gamma^*\gamma\pi^0}$, this gives information
about the pion distribution amplitude and QCD corrections. It also
discriminates between predictions of effective quark models.
$\bullet$ Further increasing energies and momentum transfers, and keeping
$t$ small, one may observe a transition to the scaling limit of VCS predicted
and studied theoretically by X. Ji \cite{ji} and A. Radyushkin.\cite{deeply}
\section*{Acknowledgments}
I would like to thank Vincent Breton and the other members of the organizing
committee for the perfectly organized workshop at Clermont--Ferrand.
Discussions with A. Radyushkin, N. Isgur, S. Brodsky, and P. Bertin are
gratefully acknowledged. This work was supported by the US Department of
Energy under contract DE--AC05--84ER40150.
\section*{References}
\section{Introduction}
The simulation of many-body states is one of the most promising and long-awaited applications of quantum computing. In particular, quantum computers are expected to efficiently prepare certain entangled multipartite states, which can help us in the study of quantum many-body systems, or in variational quantum algorithms. The advent of the first generations of both analog and digital quantum computers has triggered a strong interest in the preparation of such states. For instance, GHZ states of up to tens of qubits have been prepared with trapped ions \cite{sackett_2000, Leibfried_2005, Monz_2011, Friis_2018, pogorelov2021compact}, Rydberg atoms \cite{Omran_2019}, superconducting qubits \cite{DiCarlo_2010, Song_2017, Gong_2019, Song_2019}, photons \cite{Bouwmeester_1999, Pan_2001, Zhong_2018, Wang_2018} or nuclear spins \cite{laflamme1998nmr, neumann2008multipartite}.
Tensor network states (TNS) constitute an especially appealing family of multipartite states \cite{cirac2020matrix}. On the one hand, they are expected to efficiently approximate the ground states of local Hamiltonians. On the other, many paradigmatic states in the realm of quantum information or condensed matter physics are simple examples of TNS. The best-known class of such states is that of Matrix Product States (MPS) \cite{Werner}, which corresponds to a one-dimensional geometry. Higher-dimensional generalizations are known as Projected Entangled Pair States (PEPS) \cite{Verstraete_2004}. In both cases, they are characterized by the bond dimension, $D$, which is directly related to the size of the tensors building such states. Those states possess a special property, namely that they are the ground state of a local, frustration-free Hamiltonian. This implies that, in case of no degeneracy, one can easily check the successful preparation of the state by measuring a set of local observables. Thus, such states naturally play an essential role in the certification of quantum computers \cite{universalblind, reichardt2012classical, reichard2013nature, Wiebe_2014, Wiebe_2014_2, Wiebe_2015, aharonov2017interactive, fitzsimons2015post, Hangleiter_2017, mahadev2018classical, brakerski2019cryptographic, aaronson2016complexitytheoretic, Boixo_2018}.
The preparation of TNS has been actively pursued in recent years and, in particular, methods that operate efficiently in terms of the number of qudits (or lattice size), $N$, have been devised.\footnote{By efficient we mean that the computational time grows at most polynomially with $N$. We will also use the shortcut ``exponential time'' meaning that the time scales exponentially with $N$.} Matrix Product States can be sequentially generated \cite{SchoenSequentallyGen} in a time that scales linearly with $N$. In fact, MPS of up to ten qubits have been recently prepared in a superconducting setup \cite{Besse_2020, Schwartz434, Istrati_2020, Takedaeaaw4530}. Certain kinds of PEPS (the so-called sequentially generated) can also be prepared on the same time scale \cite{Ba_uls_2008} and proposals for the generation of sequentially generated PEPS have been recently put forward \cite{Pichler_2017, Gimeno_Segovia_2019}. While containing many paradigmatic examples of TNS, those states are fine-tuned in the sense that a small change in the tensors defining the state may lead to another PEPS outside that class. In fact, those tensors are strongly restricted by the fact that the states have to be sequentially generated.
In \cite{Ge_2016}, a very efficient quantum algorithm to generate a wide range of PEPS was introduced. That class of states is stable under deformations of the tensors and thus they are not fine-tuned. The algorithm is based on the adiabatic method and the circuit depth scales as $O(\log(N))$. The algorithm needs, however, the existence of a gap, $\Delta$, along the adiabatic path, something which is difficult to ensure since checking that typically requires a computational time that scales exponentially with $N$. Additionally, it is devised for digital quantum computers, but not analog ones.
In this paper we consider a family of states on arbitrary lattices that is very closely related to that introduced in \cite{Ge_2016}. We show how the computation of the gap can be expressed as a semidefinite programming problem (SDP), and how this allows us to efficiently compute lower bounds $\delta\le \Delta$ on the gap. We also extend the adiabatic algorithm to continuous time, which is more suitable for analog quantum computers. This ensures that states for which $\delta>0$ can be prepared in a time scaling as $O(\log(N))$, which compares very favourably with respect to existing methods, which scale as $O(\mathrm{poly}(N))$. We show that for such families of states, it is possible to predict the expectation values of many observables beyond those appearing as terms of the parent Hamiltonian. In fact, there is an exponential number of such observables, which forms a complete set in the set of operators acting on the many-body Hilbert space. This naturally leads to certification protocols based on interactive proofs (see \cite{universalblind, reichardt2012classical, reichard2013nature, Wiebe_2014, Wiebe_2014_2, Wiebe_2015, aharonov2017interactive, mahadev2018classical, brakerski2019cryptographic, aaronson2016complexitytheoretic, Boixo_2018} for previous works on interactive verification schemes) which are inspired by the difficulty of sampling from PEPS \cite{Schuch_2007, Haferkamp_2020}. While they require a certification time that grows exponentially with $N$ \cite{aaronson2016complexitytheoretic, Boixo_2018}, we propose efficient versions that, however, rely on stronger standard
complexity assumptions. We also explain that, in case they can be spoofed, this would immediately lead to new classical algorithms to estimate physically relevant expectation values of observables in the class of states we consider.
This paper is structured as follows. In Section \ref{section_states} we present the class of states and their corresponding parent Hamiltonian. The states depend on two positive parameters, $t, \beta \geq 0$, that can be viewed as time and inverse temperature, respectively. We show that, by construction, these states can be smoothly connected to a product state. We also extend the adiabatic quantum algorithm of \cite{Ge_2016} to continuous time, and show how the computational time scales with $\delta$ and $N$.
In Section \ref{section_GapsCorr} we first show how one can efficiently find lower bounds on the gap of the parent Hamiltonian by means of an SDP. In particular, for every value of $t$ we can find a maximum value of $\beta(t)$ such that for all $\beta < \beta(t)$ the gap of the Hamiltonian can be lower bounded by a constant that does not depend on the system size. We then introduce sets of operators whose expectation values can be easily computed and that are complete, in the sense that they provide a tomography of the state. Finally, we propose verification protocols in Section \ref{section_verification} and discuss some possible complexity arguments.
\section{States and parent Hamiltonian}\label{section_states}
In this section, we introduce the family of states that we will consider in the present work, which is closely connected to that analyzed in \cite{Ge_2016}. It is built in terms of sets of local commuting operators, together with two parameters $t,\beta\ge 0$. For $t=\beta=0$, they are product states, whereas otherwise they are entangled and can be efficiently expressed as PEPS. Based on that fact, we will explicitly construct a frustration-free parent Hamiltonian, which will play an important role in the procedure to prepare the states.
We then analyse how to prepare this family of states adiabatically and study the scaling of the computational time as a function of the system size and a lower bound on the gap. Lastly, we show that the presented family includes many physically relevant examples of TNS.
\subsection{Setting}
We will consider a rather general setup, although we will give examples later for regular lattices. We consider $N$ qudits, with Hilbert space ${\cal H}_d=\mathds{C}^d$ and computational basis $\{|0\rangle,\ldots,|d-1\rangle\}$, located at the vertices ${\cal V}$ of a graph ${\cal G}({\cal V},{\cal E})$ with edges ${\cal E}$ of bounded degree $z$ (that is, the maximal number of edges starting from a given vertex). We define ${\cal H}={\cal H}_d^{\otimes N}$, and denote the set of Hermitian operators acting on ${\cal H}$
by ${\cal A}$. For any $O\in {\cal A}$, we define its support $\lambda(O)\subset{\cal V}$ as the subset of vertices such that $O=\mathrm{tr}_i (O) \otimes {\openone}_i/d$ if and only if $i\notin \lambda(O)$, where $\mathrm{tr}_i$ is the trace with respect to the qudit at vertex $i$.
The graph $\mathcal G$ defines a natural distance $d(i,j)$ between two vertices $i,j\in {\cal V}$ as the minimal number of edges connecting them. We also define the radius of a subset of vertices $\lambda\subset {\cal V}$,
\begin{equation}
r(\lambda) = \min_{i\in {\cal V}} \max_{j\in\lambda} d(i,j).
\end{equation}
We denote by ${\cal A}_r\subset {\cal A}$ the set of Hermitian operators acting on $\mathcal H$ whose support has a radius of at most $r$.
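As a concrete illustration of these definitions (a sketch under the stated conventions, with vertices as hashable labels and the graph given by adjacency lists), the distance $d(i,j)$ and the radius $r(\lambda)$ can be computed by breadth-first search:

```python
from collections import deque

# Graph distance d(i,j) and radius r(lambda) = min_i max_{j in lambda} d(i,j)
# of a subset of vertices, via breadth-first search. Vertices are hashable
# labels; `adjacency` maps each vertex to its list of neighbors.

def distances_from(source, adjacency):
    """BFS distances from `source` to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adjacency[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def radius(subset, adjacency):
    """Smallest over all centers i of the largest distance from i to the subset."""
    return min(max(distances_from(i, adjacency)[j] for j in subset)
               for i in adjacency)

# 1D chain 0-1-2-3-4-5-6
adj = {k: [x for x in (k - 1, k + 1) if 0 <= x <= 6] for k in range(7)}
assert radius({2, 3, 4}, adj) == 1   # best center: vertex 3
assert radius({0, 6}, adj) == 3      # best center: vertex 3
```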
\subsection{States}
We call $\mathbb{K}_{r,M}\subset {\cal A}$ the set of operators that can be written as
\begin{equation}
\label{decomposition}
K = \sum_{n=1}^M \kappa_n \, , \quad [\kappa_n,\kappa_m]=0 \, \, , \, n,m=1,\ldots,M \ ,
\end{equation}
where $\kappa_n\in {\cal A}_r$, $\|\kappa_n\|_{\infty}\le 1$ and $\kappa_n \succeq 0$. That is, it is the set of operators that can be written as a sum of $M$ commuting, positive semidefinite, subnormalized operators whose supports have radius at most $r$.
Apart from the trivial cases, where the operators $\kappa_n$ act on single vertices or when they are products of the same single-qudit operator, one can easily construct non-trivial sets $\mathbb{K}_{r,M}$. In Appendix \ref{appendix_operators} we briefly review some of them.
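An elementary member of such a set, in the same spirit but not taken from the appendix, is a chain of commuting two-site projectors; the sketch below verifies the defining properties of \eqref{decomposition} for $\kappa_n=(1+Z_nZ_{n+1})/2$ on a few qubits:

```python
import numpy as np

# kappa_n = (1 + Z_n Z_{n+1})/2 on a qubit chain: each term is a projector
# (so positive semidefinite with norm 1), supported on two adjacent sites
# (radius 1), and all terms commute pairwise, as required in Eq. (2).

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def site_op(op, site, N):
    """Embed a single-site operator at position `site` of an N-qubit chain."""
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

N = 4
kappas = [(np.eye(2**N) + site_op(Z, n, N) @ site_op(Z, n + 1, N)) / 2
          for n in range(N - 1)]

for a in kappas:
    assert np.all(np.linalg.eigvalsh(a) >= -1e-12)   # kappa_n >= 0
    assert np.allclose(a @ a, a)                     # projector, so norm <= 1
    for b in kappas:
        assert np.allclose(a @ b, b @ a)             # pairwise commuting
```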
Given $r_{1,2},M_{1,2}\in \mathbb{N}$, $K_{1,2}\in \mathbb{K}_{r_{1,2},M_{1,2}}$, we define the family of states
\begin{equation}
\label{Psibetat}
\lvert\Psi(\beta,t)\rangle = \frac{1}{Z(\beta,t)} e^{\beta K_1} e^{i t K_2} \lvert\varphi_1 \rangle \otimes \ldots \otimes \lvert \varphi_N\rangle\ ,
\end{equation}
where $\beta,t\ge 0$, $Z$ is a normalization constant and $\ket{\varphi_i}$ are arbitrary product states. This family of states obviously contains all product states if we take $\beta= t = 0$. For $\beta=0$, we have $Z(0,t)=1$. Note that we only explicitly denote the dependence of $\ket{\Psi(\beta,t)}$ on $\beta$ and $t$, while omitting the dependence on $K_{1,2}$ and $\ket{\varphi_i}$, in order to ease the notation.
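For small systems the definition \eqref{Psibetat} can be evaluated directly by exact linear algebra. The sketch below builds $\ket{\Psi(\beta,t)}$ on three qubits; the choices $K_1=\sum_n(1+Z_nZ_{n+1})/2$, $K_2=\sum_n(1+X_nX_{n+1})/2$ and the product state $\ket{0\cdots0}$ are our own toy inputs, not taken from the paper.

```python
import numpy as np

# |Psi(beta,t)> = (1/Z) e^{beta K1} e^{i t K2} |000> on a 3-qubit chain,
# with K1 a sum of commuting ZZ projectors and K2 of commuting XX projectors.
# K1 and K2 need not commute with each other.

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def two_site(op, n, N):
    """op_n op_{n+1} embedded in an N-qubit chain."""
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, op if k in (n, n + 1) else I2)
    return out

def herm_exp(scale, H):
    """exp(scale * H) for Hermitian H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(scale * w)) @ V.conj().T

N, beta, t = 3, 0.4, 0.7
dim = 2**N
K1 = sum((np.eye(dim) + two_site(Z, n, N)) / 2 for n in range(N - 1))
K2 = sum((np.eye(dim) + two_site(X, n, N)) / 2 for n in range(N - 1))

phi = np.zeros(dim); phi[0] = 1.0                # product state |000>
psi = herm_exp(beta, K1) @ herm_exp(1j * t, K2) @ phi
psi = psi / np.linalg.norm(psi)                  # division by Z(beta,t)

# the state is entangled across the first cut for generic beta, t
s = np.linalg.svd(psi.reshape(2, dim // 2), compute_uv=False)
assert s[1] > 1e-3
```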
\begin{figure*}
\centering
\includegraphics[width=0.80\linewidth]{lattice_complete2.pdf}
\caption{Example of operators $e^{i\kappa_{2,n}t}$ (straight lines) and $e^{\beta\kappa_{1,m}}$ (wiggled lines), each acting on two adjacent sites. We denote by $L$ the linear size of the lattice. (a) All operators $e^{i\kappa_{2,n}t}$ that act on site $j$. (b) Subset of vertices $\mu_j$, where $\mu_j = \{j, j \pm 1, j \pm L\}$. (c) Subset of vertices $\nu_j$, where $\nu_j = \{j, j \pm 1, j \pm L, j \pm 2, j \pm L \pm 1, j \pm L \mp 1, j \pm 2L\}$.}
\label{fig:vertices_operators}
\end{figure*}
\subsection{Parent Hamiltonian} \label{section_parent}
We now show that any state \eqref{Psibetat} is the unique ground state of a local, frustration-free Hamiltonian, which we construct explicitly. We denote by $\kappa_{\alpha,n}$ the operators appearing in the decomposition \eqref{decomposition} of $K_{\alpha}$, for $\alpha = 1,2$. For $j\in \mathcal V$, let $\mu_j:=\{n\,|\,j\in\lambda(\kappa_{2,n})\}$ be the index set of all terms $\kappa_{2,n}$ which act non-trivially on site $j$, and $\nu_j=\{m\,|\,\exists\, n\in\mu_j:\,\lambda(\kappa_{1,m})\cap\lambda(\kappa_{2,n})\ne\emptyset\}$ the index set of all terms $\kappa_{1,m}$ which overlap with one of the previous terms. See Figure \ref{fig:vertices_operators} for an illustration of such sets of vertices.
Then, for $j=1,\dots,N$, we define
\begin{equation}
\label{hj}
h_j = O_j^\dagger \Pi_j O_j\ ,
\end{equation}
where $\Pi_{j} = \mathds{1} - | \varphi_j \rangle\langle \varphi_j|_j$ acts on the qudit at vertex $j$, and
\begin{equation}
\label{eq:def-Oj}
O_j = \prod_{n\in \mu_j} e^{-i \kappa_{2,n} t} \prod_{m\in \nu_j} e^{-\beta \kappa_{1,m}}\ ,
\end{equation}
which is invertible. With this definition, we have $h_j=h_j^\dagger \succeq 0$ and $h_j |\Psi(\beta,t)\rangle=0$.
We define the parent Hamiltonian of $\ket{\Psi(\beta,t)}$ as
\begin{equation}
\label{eq:def-Hbetat}
H(\beta,t) = \sum_{j=1}^N h_j\ .
\end{equation}
Note that we have now also suppressed the dependence of $h_j$ on $\beta$ and $t$ for convenience.
Let us show that, indeed, \eqref{Psibetat} is the unique ground state of such an operator. Since $h_j\ket{\Psi(\beta,t)}=0$, we have that
$H(\beta,t) |\Psi(\beta,t)\rangle=0$, and since
$H(\beta,t)\succeq0$, this implies that $\ket{\Psi(\beta,t)}$ is a ground state of $H(\beta,t)$, with ground state energy $0$.
Conversely, if $H(\beta,t)|\Psi'\rangle=0$, then $h_j\ket{\Psi'}=0$ and thus $\Pi_j e^{-i t K_2} e^{-\beta K_1} |\Psi'\rangle=0$ for all $j$, which in turn means that $\ket{\Psi'}$ is proportional to $\ket{\Psi(\beta,t)}$; we thus see that $\ket{\Psi(\beta,t)}$ is the unique ground state of $H(\beta,t)$.
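This argument can be checked numerically for a small instance. The sketch below uses our own toy inputs (qubits, $ZZ$/$XX$ projector sums for $K_{1,2}$, and $\ket{\varphi_j}=\ket{0}$) and the full products $e^{-itK_2}e^{-\beta K_1}$ instead of the local sets $\mu_j,\nu_j$, giving a non-local stand-in for $H(\beta,t)$ with the same kernel; it verifies that $\ket{\Psi(\beta,t)}$ is its unique zero-energy ground state.

```python
import numpy as np

# H~ = O^dag (sum_j Pi_j) O with O = e^{-i t K2} e^{-beta K1} and
# Pi_j = 1 - |0><0|_j: positive semidefinite, annihilates |Psi(beta,t)>,
# and (since O is invertible and the product state spans the kernel of
# sum_j Pi_j) has a unique, gapped zero-energy ground state.

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def embed(ops, N):
    """Tensor product placing ops[k] (default identity) at each site k."""
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, ops.get(k, I2))
    return out

def herm_exp(scale, H):
    """exp(scale * H) for Hermitian H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(scale * w)) @ V.conj().T

N, beta, t = 3, 0.4, 0.7
dim = 2**N
K1 = sum((np.eye(dim) + embed({n: Z, n + 1: Z}, N)) / 2 for n in range(N - 1))
K2 = sum((np.eye(dim) + embed({n: X, n + 1: X}, N)) / 2 for n in range(N - 1))

phi = np.zeros(dim); phi[0] = 1.0                        # |000>
psi = herm_exp(beta, K1) @ herm_exp(1j * t, K2) @ phi
psi = psi / np.linalg.norm(psi)

O = herm_exp(-1j * t, K2) @ herm_exp(-beta, K1)
Pi = sum(np.eye(dim) - embed({j: np.diag([1.0, 0.0])}, N) for j in range(N))
H = O.conj().T @ Pi @ O

w, V = np.linalg.eigh(H)
assert abs(psi.conj() @ H @ psi) < 1e-10     # zero energy on |Psi(beta,t)>
assert w[0] > -1e-10 and w[1] > 1e-6         # unique ground state, gapped
assert abs(np.vdot(V[:, 0], psi)) > 0.999    # ground vector is |Psi(beta,t)>
```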
\subsection{Adiabatic preparation}\label{section_Adiabatic}
The existence of a smooth path of Hamiltonians connecting $H(\beta,t)$, and thus $\ket{\Psi(\beta,t)}$, to a simple product state at $H(0,0)$ implies that these states can be prepared adiabatically, by starting with the product state $\ket{\Psi(0,0)}=\ket{\varphi_1}\otimes\cdots\otimes\ket{\varphi_N}$ and adiabatically changing the Hamiltonian from $H(0,0)$ to $H(\beta,t)$. In fact, the first step of the procedure -- changing the Hamiltonian from $H(0,0)$ to $H(0,t)$ -- corresponds to applying a unitary transformation
$U=e^{itK_2}$ to $\ket{\Psi(0,0)}$.
This transformation can be implemented \emph{exactly} in time $t$ by evolving with $K_2$ (rather than $H$). Alternatively, using the fact that $K_2$ is a sum of local commuting terms, $U$ can be decomposed into a finite-depth local unitary circuit (where the number of layers only depends on the structure of the interaction), which can be realized exactly on a digital quantum computer or simulator in constant time.
The task that needs to be implemented adiabatically is the second part of the preparation, that is, the interpolation from $\ket{\Psi(0,t)}$ to $\ket{\psi(\beta,t)}$. Generally, the time required for a faithful adiabatic evolution will depend on the magnitude of the spectral gap of $H(\beta',t)$ along the interpolation $\beta'\in[0;\beta]$. As it turns out, for the given type of interpolation, we can devise an efficient way of checking the presence of such a gap numerically, which we present in Section~\ref{sec:section_gaps}.
Once we have established a lower bound on such a gap, we can use any standard bound for adiabatic evolutions \cite{Jansen_2007, albash_lidar}, which gives an adiabatic runtime scaling as $T = O(N^2 \Delta^{-3} \epsilon^{-1})$, with $\Delta$ a lower bound on the gap, and $\epsilon$ the error in the final state.
Moreover, for the situation at hand, we can improve the scaling of the adiabatic preparation by making use of the locality of the Hamiltonian, combined with the version of the adiabatic theorem proven in Ref.~\cite{Ge_2016}. To this end, we construct an alternative interpolation from $H(0,t)$ to $H(\beta,t)$ where in each step, we only change \emph{one} of the terms in the Hamiltonian; importantly, the method derived in Section~\ref{sec:section_gaps} to prove gaps still applies in that case. By changing the imaginary time in a suitably smooth way along this interpolation, we then obtain that the adiabatic time required per Hamiltonian term changed scales logarithmically with the desired accuracy. We can now concatenate the interpolation for all the individual Hamiltonian terms and use the fact that changes performed on distant terms can be equally well carried out in parallel due to an effective light cone through Lieb-Robinson-bounds~\cite{LRbounds,Ge_2016}.
Combining these two results yields an adiabatic preparation scheme for $\ket{\Psi(\beta,t)}$ where the adiabatic time scales \emph{poly-logarithmically} with the desired accuracy and the problem size $N$, and thus exponentially better than other known methods for preparing MPS and PEPS.
\subsection{Connection to tensor networks}\label{section_peps}
Let us now show that the states \eqref{Psibetat} can be efficiently described as PEPS \cite{cirac2020matrix} with the same connectivity as ${\cal G}({\cal V},{\cal E})$ and a bond dimension $D$ which is upper bounded by a function of $r_{1,2}$, $z$, and $d$, but which does not depend on $N$ or $M$. To see this, let us consider an edge $(i,j) \in \cal E$. Since the operators $\kappa_{1,n}$ commute pairwise, we can express the product of all operators that act on both $i$ and $j$ as a single one, $\prod_{n:\, i,j \in \lambda(\kappa_{1,n})} e^{i t \kappa_{1,n}} = e^{it \sum_{n:\, i,j \in \lambda(\kappa_{1,n})}\kappa_{1,n}} $. We can bound the number of terms that appear in the sum in the exponent by the number of operators that act on qudit $i$. Since the individual operators $\kappa_{1,n}$ act within a radius $r_1$, note that $e^{it \sum_{n:\, i \in \lambda(\kappa_{1,n})}\kappa_{1,n}}$ acts on, at most, all the qudits that are at a distance less than or equal to $2r_1$ from qudit $i$. The number of such neighbors can be bounded by
\begin{equation}
x \le z \cdot \sum_{i=0}^{2r_1-1} \left(z-1\right)^{i} = z\frac{1-(z-1)^{2r_1}}{2-z} \ = \ O(z^{2r_1})\ .
\end{equation}
Finally, note that an operator that acts on $x$ qudits increases the bond dimension at most by $O(d^{x/2})$.\footnote{This can be easily checked by iteratively performing a singular value decomposition on the operator and representing it as a Matrix Product Operator (MPO) acting on the $x$ qudits.}
Iterating this for every edge (which overcounts interactions) gives an upper bound $O(d^{z^{2r_1}})$ for the required bond dimension.
Since a similar argument is valid for the operators $\kappa_{2,m}$, we conclude that the states of the form \eqref{Psibetat} can be described by a tensor network of bond dimension at most $O(d^{z^{2(r_1+r_2)}})$. Importantly, the bond dimension remains bounded independent of the system size since $r_{1,2}$ and $z$ are size-independent.
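As an illustration of how these combinatorial bounds behave, the following sketch evaluates the neighbour count and a resulting (loose) bond-dimension estimate for small parameters. The helper names and the way the two radii are combined are our own simplification for illustration, not the exact constants of the $O(\cdot)$ statements above.

```python
# Illustrative evaluation of the combinatorial bounds above. The way the two
# radii r1, r2 are combined here is a simplification for illustration only.
def neighbours_within(z, dist):
    """Upper bound on the number of sites within graph distance `dist` of a
    given site, on a graph with coordination number z (cf. the bound on x)."""
    return z * sum((z - 1)**i for i in range(dist))  # z * sum_{i=0}^{dist-1}(z-1)^i

def bond_dim_bound(d, z, r1, r2):
    """Loose bond-dimension estimate: one merged gate per edge, acting on at
    most x qudits, each raising the bond dimension by at most d**(x/2)."""
    x = neighbours_within(z, 2 * r1) + neighbours_within(z, 2 * r2)
    return d**(x / 2)

# 1D chain (z = 2), nearest-neighbour terms (r1 = r2 = 1), qubits (d = 2):
D_bound = bond_dim_bound(2, 2, 1, 1)
```

Note that the bound is independent of the system size $N$, in line with the argument above.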
In the following, we particularize the above results to standard tensor network classes -- in particular, injective MPS and PEPS, and a set of (possibly) non-injective PEPS with unique ground state -- on regular lattices in one or higher dimensions. In one dimension, these are MPS; in higher dimensions, PEPS.
\subsubsection{Injective MPS}
Matrix Product States (MPS) are the simplest TN \cite{Werner,cirac2020matrix}. They can be written as
\begin{equation}
|\Psi\rangle = \sum_{s_1,\ldots, s_N=0}^{d-1} A_{s_1}^1 \ldots A_{s_N}^N |s_1,\ldots, s_N\rangle.
\end{equation}
We consider a special subclass, so-called injective MPS. They fulfill $d=d_0^2$, and are constructed with two qudits $L_n$ and $R_n$, each of dimension $d_0$, per vertex, and
\begin{equation}
\label{MPS}
|\Psi\rangle = \bigotimes_{n=1}^N Q_n |\Phi\rangle \ ,
\end{equation}
where $|\Phi\rangle$ is a state where, at each vertex, the qudit $R_n$ is in a maximally entangled state with the qudit $L_{n+1}$ on the vertex to its right (and thus $L_n$ in a maximally entangled state with $R_{n-1}$), and $0<Q_n\le {\openone}$ are invertible operators that transform $\mathds{C}^{d_0}\otimes \mathds{C}^{d_0}\to \mathds{C}^{d}$. Generically, MPS become injective by blocking $\le D^4$ original qudits \cite{SanzWieland}.
MPS obviously fall within the family of states defined in \eqref{Psibetat} for the special case of a 1D lattice as graph. Here, for each $n$, $\kappa_{2,n}$ acts on $R_n$ and $L_{n+1}$ in such a way that $e^{i\kappa_{2,n}}$ creates the maximally entangled states on those qudits. The operators $\kappa_{1,n}$ are given by $\kappa_{1,n}=-\log(Q_n)/\beta$ and act on a single vertex each. This is illustrated in Figure~\ref{fig:mps}.
\begin{figure}
\includegraphics[width=\linewidth]{mps_lnrn.pdf}
\caption{MPS construction in terms of the operator $e^{i \kappa_{2,n}}$, which creates an entangled pair between sites $R_n$ and $L_{n+1}$, and $e^{-\beta \kappa_{1,n}}=Q_n$, which maps the virtual sites $L_n, R_n$ to the physical qudit at position $n$. }
\label{fig:mps}
\end{figure}
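The equivalence between the construction \eqref{MPS} and the standard MPS form can be checked numerically. The following sketch (with generic random maps $Q_n$ in place of any particular model, and $N=3$, $d_0=2$ chosen for concreteness) builds $\bigotimes_n Q_n\ket\Phi$ directly and compares it with the trace formula with tensors $A^n_s[l,r]=\bra{s}Q_n\ket{l,r}$:

```python
import numpy as np

# Minimal numerical sketch: the state (x)_n Q_n |Phi> on a 3-site ring equals
# an MPS with tensors A^n_s[l, r] = <s| Q_n |l, r>. The maps Q_n here are
# generic random matrices (no particular model is assumed).
rng = np.random.default_rng(0)
N, d0 = 3, 2
d = d0**2
Qs = [rng.normal(size=(d, d0 * d0)) for _ in range(N)]
A = [Q.reshape(d, d0, d0) for Q in Qs]        # A[n][s] is a d0 x d0 matrix

# MPS amplitudes: Psi(s) = Tr(A^1_{s_1} ... A^N_{s_N}) / d0^{N/2}
Psi = np.zeros([d] * N)
for idx in np.ndindex(*([d] * N)):
    M = np.eye(d0)
    for n, s in enumerate(idx):
        M = M @ A[n][s]
    Psi[idx] = np.trace(M) / d0**(N / 2)

# Direct construction: |Phi> has qudits ordered (L_1, R_1, ..., L_N, R_N) with
# (R_n, L_{n+1}) maximally entangled, i.e. L_n = k_{n-1}, R_n = k_n on a ring.
Phi = np.zeros([d0] * (2 * N))
for ks in np.ndindex(*([d0] * N)):
    idx = []
    for n in range(N):
        idx += [ks[(n - 1) % N], ks[n]]
    Phi[tuple(idx)] = d0**(-N / 2)
T = Phi.reshape([d0 * d0] * N)                # merge each (L_n, R_n) pair
Psi2 = np.einsum('ai,bj,ck,ijk->abc', Qs[0], Qs[1], Qs[2], T)
```

Both constructions yield the same amplitudes, confirming that applying the maps $Q_n$ to the entangled-pair state reproduces the MPS form.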
\subsubsection{Injective PEPS}
Injective PEPS are the generalization of \eqref{MPS} to higher spatial dimensions. The state has the same form, but now there are $z_n$ qudits at site $n$, where $z_n$ is the coordination number of vertex $n$ (i.e., the number of edges connected to it). The state $\ket\Phi$ contains entangled pairs along all edges of the lattice. As for MPS, one readily sees that these states lie in the family defined in \eqref{Psibetat}.
\subsubsection{PEPS corresponding to classical models}
Consider a Gibbs state of some classical Hamiltonian $H_{\rm cl}(s_1, \ldots, s_n) = \sum_{\langle i_1 \ldots i_k \rangle } h_{i_1 \ldots i_k}(s_{i_1}, \ldots, s_{i_k})$, where $s_i \in \{ 0,1,\ldots,d-1 \}$, and $\langle i_1 \ldots i_k \rangle $ denotes the regions of neighbouring particles coupled by the interaction. We define the state
\begin{equation}
\ket{ \Psi} = \frac{1}{\sqrt{Z}} \sum_{s_1, \ldots , s_n} e^{-\beta H_{\rm cl}(s_1, \ldots, s_n)/2} \ket{s_1, \ldots, s_n}\ ,
\label{eq_ising_psi}
\end{equation}
where $Z$ is the classical partition function. We can rewrite this state as follows \cite{Verstraete_2006}:
\begin{equation}
\ket{ \Psi} = \frac{1}{\sqrt{Z}} e^{-\beta \hat H_{\rm cl}/2} \left(\sum_{s=0}^{d-1} \ket{s}\right)^{\otimes n},
\end{equation}
where $\hat H_{\rm cl}$ is an operator with eigenstates $\ket{s_1, \ldots, s_n}$ and eigenvalues $H_{\rm cl}(s_1, \ldots, s_n)$. This state is of the type of Eq.~\eqref{Psibetat}, where $K_1 = \hat H_{\rm cl}/2$ and $K_2 =0$. A description of these states in terms of PEPS can be easily obtained. For the special case of two-body nearest-neighbor interactions, the bond dimension equals the dimension of the physical degree of freedom, see~\cite{Verstraete_2006}.
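As a minimal check of this correspondence, the following sketch builds the state \eqref{eq_ising_psi} for a small classical Ising chain (an illustrative choice of $H_{\rm cl}$) and verifies that it is normalized and that computational-basis measurements reproduce the classical Gibbs distribution:

```python
import itertools, math

# Toy classical Hamiltonian: nearest-neighbour Ising chain, open boundaries
# (an illustrative choice; any local classical Hamiltonian works the same way).
beta, n = 0.7, 4
def H_cl(s):
    spin = [2 * x - 1 for x in s]            # map {0,1} -> {-1,+1}
    return -sum(spin[i] * spin[i + 1] for i in range(n - 1))

configs = list(itertools.product((0, 1), repeat=n))
Z = sum(math.exp(-beta * H_cl(s)) for s in configs)        # partition function
amp = {s: math.exp(-beta * H_cl(s) / 2) / math.sqrt(Z) for s in configs}

norm2 = sum(a * a for a in amp.values())                   # <Psi|Psi> = 1
p = {s: amp[s]**2 for s in configs}                        # measurement statistics
```

The squared amplitudes are exactly the classical Gibbs weights $e^{-\beta H_{\rm cl}(s)}/Z$, as used in the argument above.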
\section{Gaps and Expectation values}\label{section_GapsCorr}
In this section we establish two key results, both for the efficient preparation of the states \eqref{Psibetat} and for their verification. First, we develop an effective method to compute lower bounds on the gap of the parent Hamiltonians \eqref{eq:def-Hbetat} for some range of parameters $t,\beta$. This will ensure that the corresponding ground states can be efficiently prepared with the adiabatic algorithm presented in the next section. Second, we determine the expectation values of a complete set of operators in those states, so that one can use them to check that the state has been successfully prepared.
\subsection{Gaps} \label{sec:section_gaps}
We will now explain how to efficiently obtain a lower bound on the gap of the parent Hamiltonians constructed in the previous subsection, for specific points in parameter space as well as uniform bounds for a whole parameter regime. To start with, note that $H(\beta=0,t)$ is a sum of commuting projectors $h_j$ for any $t$, and thus has a gap $\Delta=1$. It thus remains to supply methods to bound the gap in the case where $\beta>0$.
Let us first discuss how to obtain such a bound for a specific point $H(\beta,t)=\sum h_j$ in parameter space.
To this end, consider the semidefinite program (SDP):
\begin{subequations}\label{sdp}
\begin{equation}
\delta = \max_{a_{ij}, c_{ij}, x} x \label{sdp1}
\end{equation}
subject to
\begin{align}
\!\!\forall i\neq j:\hspace*{5.7cm}&\nonumber\\
h_ih_j+h_jh_i+ a_{ij}h_i^2 + a_{ji}h_j^2
-c_{ij} h_i -c_{ji} h_j & \succeq 0
\label{hnm}
\\
\forall i\!: \quad \sum_{j \ne i} a_{ij} &= 1 \label{sdp3} \\
\forall i\!: \quad \sum_{j \ne i} c_{ij} & = x \label{sdp4}
\end{align}
\end{subequations}
For any feasible point of the SDP (and in particular the optimum in \eqref{sdp1}), summing \eqref{hnm} over all $i\ne j$ yields (counting each pair twice, and up to a factor $2$)
\begin{equation}
\label{eq:sdp-sum-of-all}
\sum_{i\ne j}h_ih_j + \sum_i h_i^2 - x \sum_i h_i\ \succeq 0 \, ,
\end{equation}
or equivalently
\begin{equation}
H^2 - x H \succeq 0 \, ,
\end{equation}
which says that $H$ has no eigenvalues in the interval $(0;x)$. Since the ground state energy is $0$ by construction, this implies the existence of a spectral gap $\Delta\ge\delta$ above the unique ground state.
Since \eqref{sdp} is an SDP with the dimension of the constraints independent of $N$,
it can be solved efficiently. Note that the SDP can be simplified considerably by setting
\begin{equation}
\label{sdp-only-overlapping}
a_{ij}=c_{ij}=0\mbox{\quad for\ }\lambda(h_i)\cap\lambda(h_j)=\emptyset
\end{equation}
and observing that \eqref{hnm} is trivially satisfied in those cases, leaving a number of equations linear in $N$. Another possible simplification amounts to first solving, for each \emph{individual pair} $i\ne j$, the SDP which minimizes $a_{ij}+a_{ji}$ subject to $h_ih_j+h_jh_i + a_{ij}h_i^2 + a_{ji} h_j^2\succeq 0$; then, if for all $i$, $\sum_{j\ne i}a_{ij}<y_i$, there is a gap (since $h_i$ is relatively bounded by $h_i^2$, so we can add positive contributions to both $a_{ij}$ and $c_{ij}$ while still satisfying \eqref{hnm}); a bound on the gap can either be computed directly from $y_i$ or through the SDP \eqref{sdp} while keeping $a_{ij}$ fixed.
Finally, note that for $\beta=0$ (where the $h_i$ are commuting projectors), condition \eqref{hnm} holds for any choice of $a_{ij}=c_{ij}$, and thus indeed gives $\delta=1$.
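This $\beta=0$ statement is easy to verify numerically. In the sketch below (a toy choice: $h_i$ projects qubit $i$ onto $\ket1$, so all terms commute and are diagonal), the choice $a_{ij}=c_{ij}=1/(N-1)$ satisfies \eqref{hnm}--\eqref{sdp4} with $x=1$, certifying the gap $\delta=1$:

```python
import itertools

N = 3
# Toy terms: h_i projects qubit i onto |1> (diagonal, mutually commuting), so
# all eigenvalues can be read off the diagonal, indexed by bit strings s.
def h(i, s):
    return float(s[i])

states = list(itertools.product((0, 1), repeat=N))
a = c = 1.0 / (N - 1)          # a_{ij} = c_{ij} = 1/(N-1), hence x = sum_j c = 1

# condition (hnm): h_i h_j + h_j h_i + a h_i^2 + a h_j^2 - c h_i - c h_j >= 0
feasible = all(
    2 * h(i, s) * h(j, s)
    + a * (h(i, s)**2 - h(i, s)) + a * (h(j, s)**2 - h(j, s)) >= 0
    for i, j in itertools.permutations(range(N), 2) for s in states)

# consequence: H^2 - x H >= 0 with x = 1, i.e. no spectrum in (0, 1)
energies = [sum(h(i, s) for i in range(N)) for s in states]
gapped = all(E * E - E >= 0 and (E == 0 or E >= 1) for E in energies)
```

Here the diagonal structure makes positive semidefiniteness an entrywise check; for non-commuting terms one would instead feed the constraints to an SDP solver.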
Better bounds on the gap can be obtained by relaxing \eqref{hnm} to only hold when summed over specific groups of terms (which still implies \eqref{eq:sdp-sum-of-all}); a natural such case would be the variant of the SDP obtained by first grouping adjacent terms into $\tilde h_i=\sum_j h_j$ (with the sum over terms in some neighborhood of $i$) and then setting up the SDP for the $\tilde h_i$. Furthermore, one can replace $h_i$ (also after blocking) by projectors with the same range as $h_i$, since those relatively bound $h_i$ and thus have a system-size independent gap if and only if the original Hamiltonian does.
Having described how to efficiently obtain a lower bound on the gap for a given point $H(\beta,t)$, how can we use this to build methods for certifying a gap over a whole range of parameters? The idea is based on the continuity of the SDP conditions (which should come as no surprise, given the finite dimension and the smooth dependence on $\beta$ of all objects involved). Given a certified gap $\delta$ for some $H(\beta,t)$
using the SDP \eqref{sdp} (with corresponding optimal parameters $a^\ast_{ij}$ and $c^\ast_{ij}$),
we show in Appendix \ref{appendix_continuity_gap} that the SDP for $H(\beta+\tau,t)$ has a feasible point with $a'_{ij}=a^\ast_{ij}$, and where
\begin{equation}
c_{ij}' = e^{-2|\nu_j|\tau} c^\ast_{ij} - (1-e^{-2|\nu_j|\tau}) - a^\ast_{ij}\big(e^{-2|\nu_j|\tau}-
e^{-2|\nu_i{\setminus}\nu_j|\tau}\big)
\end{equation}
for $0\le a^\ast_{ij}\le 1$, and a variant thereof if $a^\ast_{ij}<0$ or $a^\ast_{ij}>1$, see Eqs.~\eqref{app-sdp-cprime}. Importantly, $c'_{ij}$ changes uniformly continuously as $\tau$ is increased starting from $\tau=0$. Thus, this
allows one to obtain a lower bound $\delta(\tau)$ on the gap of $H(\beta+\tau,t)$ by virtue of \eqref{sdp4}; importantly, the lower bound is uniform for a given interaction geometry, independent of the system size or the specific chosen model, and changes uniformly continuously with $\tau$. We now take the $\tau_0$ for which the lower bound closes, i.e.\ $\delta(\tau_0) = 0$, and re-run the SDP for $H(\beta+\tau_0,t)$: If that SDP also certifies a gap, this proves that $H(\beta+\tau,t)$ is gapped for all $0\le\tau\le\tau_0$.
By starting from $\beta=0$ (for which \eqref{sdp} trivially holds), we can thus establish the existence of a gap (and obtain explicit lower bounds to it) for a finite regime $0\le\beta\le\beta_0$ by evaluating the SDP \eqref{sdp} only at a finite number of points (where the bound can be improved by increasing the density of points).
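The resulting sweep over $\beta$ can be sketched schematically as follows. Here `solve_sdp` and the continuity decay rate are illustrative stand-ins (not the actual SDP \eqref{sdp} or the bound of Appendix~\ref{appendix_continuity_gap}): we pretend the certified gap decays as $1-\beta/2$ and that continuity guarantees $\delta(\tau)\ge\delta-\tau$.

```python
# Schematic of the gap-certification sweep over beta. `solve_sdp` and the
# continuity slope are illustrative stand-ins, NOT the actual SDP or the
# continuity bound derived in the appendix.
def solve_sdp(beta):
    return max(0.0, 1.0 - beta / 2)   # pretend-certified gap at this beta

beta, beta_max = 0.0, 1.5
certified, sdp_calls = True, 0
while beta < beta_max:
    delta = solve_sdp(beta)
    sdp_calls += 1
    if delta <= 0:
        certified = False             # gap could not be certified at this point
        break
    tau0 = delta                      # continuity bound closes at tau0: delta(tau0) = 0
    beta = min(beta + tau0, beta_max)
```

The whole interval $[0,\beta_{\max}]$ is certified with only finitely many SDP evaluations; a finer grid of evaluation points yields tighter lower bounds.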
Let us now briefly discuss the suitability of the method for the TN of the previous section. For translationally invariant (or suitably uniform) injective MPS, it was proven by Fannes, Nachtergaele, and Werner~\cite{Werner} that the parent Hamiltonian is always gapped. Indeed, they showed that by blocking a number of sites proportional to the correlation length (for a fixed bond dimension), which in turn can be bounded as a function of the gap, and replacing the blocked Hamiltonian terms $\tilde h_i$ by projectors, Eq.~\eqref{hnm} is fulfilled (with all $a_{ij}$ and $c_{ij}$ equal) -- and thus the SDP will yield a gap $\delta>0$ -- already with the restriction \eqref{sdp-only-overlapping} imposed.
In higher dimensions, an analogous result has been obtained for PEPS which are unique ground states of local Hamiltonians (in particular, injective PEPS)~\cite{Kastoryano_2018}: Whenever the Hamiltonian is gapped in the thermodynamic limit, the SDP with condition \eqref{eq:sdp-sum-of-all} will be satisfied by the projector-valued Hamiltonian obtained after blocking a number of sites which only depends on the gap and the geometry of the system, but not on the system size or details of the model.
Finally, regarding the PEPS corresponding to classical models in dimensions higher than one, for sufficiently high temperatures (small $\beta$) the SDP method will give a gap (as the Hamiltonian at $\beta=0$ is trivial). Note that the corresponding classical model may have a phase transition at sufficiently low temperatures, which implies that the correlation length will diverge and thus the gap will vanish in the thermodynamic limit ($N\to\infty$). Hence, the SDP automatically also allows one to determine an upper bound on that critical temperature, which becomes more accurate as we consider larger regions, both by blocking and by relaxing (or omitting) the restriction~\eqref{sdp-only-overlapping}. The continuity bound of Appendix~\ref{appendix_continuity_gap} thus at the same time provides a means of determining upper bounds on the critical temperature of classical statistical models.
\subsection{Expectation values} \label{sec:section_expectation}
Computing expectation values of ground states of local Hamiltonians is hard in general \cite{localQMA}. However, for ground states of frustration-free Hamiltonians, there are certain observables for which such values are straightforward to compute. In this section, we find a complete set of operators for which this can be done and which will form the basis of the verification protocols presented below.
As argued in Section \ref{section_parent}, for $\ket\Psi$ given in \eqref{Psibetat} we have $h_n|\Psi\rangle=0$, and thus trivially $\langle \Psi|h_n|\Psi\rangle=0$. We will now define a set of observables for which one can also compute the expectation value in $\ket\Psi$. We will restrict to qubits ($d=2$) and will take $|\varphi_j\rangle=|0\rangle$, although it is straightforward to extend the method to qudits and other states. We will denote
the Pauli operators acting on the $j$-th qubit,
$j=1,\ldots,N$,
by $\sigma_{\alpha}^j$ with $\alpha=x,y,z$;
for instance, $\sigma_z|0\rangle=|0\rangle$.
The key idea is to notice that for any $\lambda\subset {\cal V}$,
\begin{equation}
\label{Olambdaqprop}
O_\lambda |\Psi\rangle = \big(\bigotimes_{j\in\lambda} |0\rangle_j\langle 0| \big) O_\lambda |\Psi\rangle,
\end{equation}
where
\begin{equation}
O_\lambda = \prod_{n\in \mu(\lambda)} e^{-i \kappa_{2,n} t} \prod_{m\in \nu(\lambda)} e^{-\beta \kappa_{1,m}}
\end{equation}
with $\mu(\lambda)=\bigcup_{j\in\lambda}\mu_j$,
$\nu(\lambda)=\bigcup_{j\in\lambda}\nu_j$,
and where the sets $\mu_j,\nu_j$ have been defined in Section \ref{section_parent}.
The first set of operators is defined as $Z^+_\lambda = (Z_\lambda + Z_\lambda^\dagger)/2$ and
$Z^-_\lambda = (Z_\lambda - Z_\lambda^\dagger)/2i$, where
\begin{equation}
\label{Zlambda}
Z_\lambda = O_\lambda^{-1} \left(\bigotimes_{j\in\lambda}\sigma_z^j \right) O_\lambda
\end{equation}
-- that is, $\ket{\Psi}$ is a right (left) eigenvector of $Z_\lambda$ ($Z_\lambda^\dagger$), using \eqref{Olambdaqprop}.
We then have
\begin{subequations}
\begin{eqnarray}
\label{Zplus}
\langle \Psi| Z^+_\lambda |\Psi\rangle &=& 1\ ,\\
\langle \Psi| Z^-_\lambda |\Psi\rangle &=& 0\ ,
\end{eqnarray}
\end{subequations}
and thus $Z^\pm_\lambda$ have fixed expectation values.
The second set is larger. Given any $P$ supported in $\lambda\subset {\cal V}$ with the property that there exists $j \in \lambda$ such that ${}_j\!\bra{0} P \ket{0}_j = 0$, we define
\begin{equation}
\label{eq:q-lambda}
Q_\lambda = O_\lambda^\dagger P O_\lambda.
\end{equation}
Again, using \eqref{Olambdaqprop} we can compute the expectation value of those operators
\begin{equation}
\langle \Psi| Q_\lambda |\Psi\rangle = 0.
\end{equation}
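These identities can be verified on a small example. The sketch below uses two qubits and assumes, as a toy convention (the precise ordering conventions for \eqref{Psibetat} and $O_\lambda$ in the general construction may differ), that $\ket\Psi\propto e^{\beta\kappa_{1}}e^{it\kappa_{2}}\ket{00}$, so that the corresponding $O_\lambda$ returns qubit $0$ to $\ket0$:

```python
import numpy as np

def expmH(H, z):                      # e^{z H} for Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(z * w)) @ V.conj().T

rng = np.random.default_rng(1)
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.diag([1., -1.]).astype(complex)
beta, t = 0.6, 0.9
k1 = np.kron(np.diag([0., 1.3]), I2)          # kappa_1 on qubit 0 (toy choice)
R = rng.normal(size=(4, 4))
k2 = (R + R.T) / 2                            # Hermitian kappa_2 on both qubits
O = expmH(k2, -1j * t) @ expmH(k1, -beta)     # O_lambda for lambda = {0}
Psi = expmH(k1, beta) @ expmH(k2, 1j * t) @ np.array([1, 0, 0, 0], dtype=complex)
Psi /= np.linalg.norm(Psi)                    # by construction, O @ Psi ~ |00>

Z = np.linalg.inv(O) @ np.kron(sz, I2) @ O    # Z_lambda
Zp = (Z + Z.conj().T) / 2                     # Z^+
Zm = (Z - Z.conj().T) / 2j                    # Z^-
Q = O.conj().T @ np.kron(sx, I2) @ O          # P = sigma_x^0 has <0|P|0>_0 = 0
```

Numerically, $\langle Z^+\rangle=1$ and $\langle Z^-\rangle=\langle Q\rangle=0$, as claimed.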
The set of operators defined above, taken jointly for all $\lambda\subseteq\mathcal V$, is complete in ${\cal A}$ in the sense that their expectation values completely determine the state. To show this, we just have to exhibit a subset of $4^N$ linearly independent operators from that set.
To this end, let us start from the set of all operators $P$ which are a product of Pauli and identity operators, and associate to each of them an operator $Q\equiv Q(P)$ using one of the constructions above.
Each of these operators can be written as
\begin{equation}
\label{eq:P-pauli-substring}
P=\bigotimes_{j\in \lambda} \sigma^j_{\alpha_j}\ ,
\end{equation}
where $\alpha_j=x,y,z$, and $\lambda\subseteq {\cal V}$ is the set of sites on which $P$ acts non-trivially.
In case $\alpha_j=z$ for all $j\in\lambda$, we define $Q(P)=Z^+_\lambda$; otherwise, $Q(P)= Q_\lambda$, Eq.~\eqref{eq:q-lambda}. Finally, for $P={\openone}$, let $Q(P)={\openone}$.
In this way, starting from all products of Pauli operators and the identity, we have obtained a set of operators $Q_n$,
$n=1,\ldots,4^N$.
This set is linearly independent iff the matrix $B_{n,m}={\rm tr}(Q_n^\dagger
Q_m)/2^N$ is not singular. Trivially, for $\beta=0$, $B_{n,m}=\delta_{n,m}$, which is not singular. Since the operators $O_\lambda$ used to define the map $P\mapsto Q$ are analytic in $\beta>0$, the determinant of $B_{n,m}$ will be analytic as well, and thus it can only vanish at countably many isolated points. Thus, generically it will be non-zero, and the measure-zero set of points where it vanishes can be circumvented by taking a nearby value of $\beta$.
The fact that the set of operators $Z_\lambda$ and $Q_\lambda$ is (over)complete means that any observable can be expanded as a linear combination thereof. If we are able to obtain the corresponding coefficients, then we can compute the expectation value of all physical quantities. In practice, this will be difficult since the operators $O_\lambda$ overlap non-trivially with each other, so we will typically need an exponential number of terms in the expansion. Nonetheless, there may be a way of truncating that expansion, which could lead to new algorithms in terms of tensor networks. Furthermore, since we know the expectation values of a basis of operators, we possess full tomographic information on the state. However, as before, this does not directly help in computing other expectation values.
The operators $Q_\lambda$ are supported on a set of vertices that is larger than $\lambda$ (roughly speaking, on all vertices that lie at a distance up to $2r$ from that set). It is possible to define other observables which have a smaller support and for which we can still compute the expectation values. This is relevant for more practical applications, like the verification protocols introduced below, where we want to make statements about measurements performed within the support of $Q_\lambda$, and we want them to include as few qubits as possible. For that, given $j\in{\cal V}$ and an operator $P'$ supported on $\lambda$ with $j\notin\lambda$, we define
\begin{subequations}
\label{Qjx}
\begin{eqnarray}
Q_{j,1}&=&O_j^\dagger \sigma_x^j P' O_j\ ,\\
Q_{j,2}&=&O_j^\dagger \sigma_y^j P' O_j\ ,\\
Q_{j,3}&=&O_j^\dagger ({\openone}-\sigma_z^j) P' O_j\ ,
\end{eqnarray}
\end{subequations}
which only act on the joint support of $O_j$ and $P'$.
Again, using \eqref{Olambdaqprop} we find the expectation value of those operators to be
\begin{equation}
\langle \Psi| Q_{j,\alpha} |\Psi\rangle = 0.
\end{equation}
In summary, we have defined a set of observables whose expectation values are either zero or one. We can choose a subset thereof where $|\lambda|\le c$, with $c$ a constant independent of $N$. In such a case, since we know $O_\lambda$ we can efficiently write those observables as linear combinations of Pauli operators in the support of $O_\lambda$ (like \eqref{Zlambda}), or even smaller (like \eqref{Qjx}).
\section{Verification schemes} \label{section_verification}
In this section we discuss different ways of exploiting the state preparation procedure for the state \eqref{Psibetat} as a verification scheme.
First, we will analyse a quantum state verification method and show how to certify that the state has been created successfully by performing local measurements and using the fact that there exists a parent Hamiltonian that is both gapped and frustration free \cite{Cramer_2010}.
Then, we will consider the scenario of classical verification of quantum computation, where the goal is to make sure that someone else is in possession of a quantum computer solely through classical communication~\cite{reichardt2012classical, reichard2013nature, aaronson2016complexitytheoretic, Boixo_2018,mahadev2018classical, brakerski2019cryptographic}. We will restrict here to the case of qubits, although the arguments can be easily extended to qudits.
\subsection{Quantum state verification} \label{section_quantum_verification}
Unique ground states of frustration-free Hamiltonians, like the ones we are dealing with here, can be trivially certified with local measurements. This is achieved by just performing local quantum tomographies to make sure that the expectation values $\langle h_j\rangle=0$ for all $h_j$ defined in~\eqref{hj}. Indeed, if this is the case, then $\langle H\rangle=0$ [cf.~\eqref{eq:def-Hbetat}], and since $H\ge 0$ and has a unique ground state, this implies that the measurement must have been performed on that state.
In practice, since we can only perform a finite number of measurements, the estimates for $\langle h_j\rangle$ will not be exactly zero; additionally, measurement errors will give rise to errors in those quantities as well. However, one can still estimate the success probability of the preparation in different ways. The most straightforward one is to relate the obtained expectation value $\langle H\rangle=\mathrm{tr}(H\rho)$ (with the corresponding estimated error) to the overlap of the prepared state $\rho$ with the target ground state $\ket\Psi$. It is straightforward to show that the fidelity obeys
\begin{equation}
\label{bound}
\langle \Psi|\rho|\Psi\rangle \ge 1- \frac{\mathrm{tr} (H\rho)}{\Delta} \ge 1- \frac{\mathrm{tr} (H\rho)}{\delta}\ ,
\end{equation}
where $\Delta$ is the spectral gap of $H$, and $\delta$ the lower bound obtained in the previous section. Thus, if we can obtain this expectation value (with an error bar) and we compute $\delta$, we directly obtain a lower bound to the overlap.
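The bound \eqref{bound} can be sanity-checked on a toy diagonal example (commuting projectors with gap $\Delta=1$, unique ground state $\ket{0\cdots0}$, and a random diagonal test state $\rho$; all choices here are illustrative):

```python
import itertools, random

random.seed(0)
N = 4
states = list(itertools.product((0, 1), repeat=N))
E = {s: sum(s) for s in states}     # H = sum of projectors onto qubit=1:
                                    # gap Delta = 1, ground state |0...0>, energy 0
w = [random.random() for _ in states]
p = dict(zip(states, [wi / sum(w) for wi in w]))   # diagonal test state rho

fidelity = p[(0,) * N]                             # <Psi|rho|Psi>
energy = sum(p[s] * E[s] for s in states)          # tr(H rho)
```

For this diagonal family the bound $\langle\Psi|\rho|\Psi\rangle\ge 1-\mathrm{tr}(H\rho)/\Delta$ holds for every choice of weights, since the energy of every excited configuration is at least $\Delta$.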
Neglecting measurement errors, for a finite number of measurements, the estimate of $ \langle \Psi|\rho|\Psi\rangle $ will have some error bar.
To use the bound \eqref{bound}, we need the error in $\mathrm{tr}(H\rho)=\langle H\rangle=\sum_j \langle h_j\rangle$ to be sufficiently below $\delta$. Assuming that we perform independent measurements, and thus have independent errors with the same standard deviation $\sigma$ for the estimators of all the $h_j$, the error $\epsilon_j$ for each term must satisfy $\delta/\sqrt{N}\sim\epsilon_j \sim \sigma /\sqrt{L_j}$, where $L_j$ is the number of measurements performed to estimate $\langle h_j\rangle$. If, in addition, we assume $L_j = L_{j'}$ for all $j,j'$, we obtain a conservative estimate for the total number of measurements of
$L_\mathrm{tot} \lesssim N L_j \sim \sigma^2N^2/\delta^2$. Note that instead of performing independent measurements, one could measure qubits belonging to non-overlapping regions in parallel \cite{Cotler_2020}, which might lead to a reduction of the total number of rounds.
The verification can also be analyzed as an adversarial game,
where the
\prover prepares a state, $\rho$, gives it to \verifier, and claims that it is indeed $\ket\Psi$. \verifier can then perform measurements to gain confidence that this is indeed the case. Such a protocol has been analyzed in Ref.~\cite{Hangleiter_2017} under the assumption that \verifier can measure by projecting onto the eigenbasis of the local Hamiltonian terms $h_j$. In Appendix \ref{appendix_bounds_verification}, we perform a similar analysis, but assuming that \verifier can only perform Pauli measurements, which might be a more realistic assumption for current experimental setups.
\subsection{Classical verification of quantum computation}
Let us now turn towards a different kind of verification, in which the verifier is fully classical and communicates with the quantum prover through a classical channel~\cite{reichardt2012classical, reichard2013nature,mahadev2018classical, brakerski2019cryptographic, aaronson2016complexitytheoretic, Boixo_2018}. Such protocols can also be formulated as an adversary game: Here, the prover claims that he can efficiently carry out quantum computations on a quantum computer. The verifier has to make sure that this is the case by communicating classically with the prover, that is, asking him to perform certain tasks on his quantum computer and report the results. Of particular interest is the case where both prover and verifier have limited additional resources, namely
they can perform classical computations with a computational time that scales at most polynomially with the number of qubits. In this setting, the verifier can pose challenges to the prover which he can only accomplish if he has a quantum computer, but not with his limited classical resources, and the challenge is to find a way which allows the verifier with her limited classical resources to verify this.
Recently, several protocols achieving this task have been proposed whose security is based on standard complexity assumptions \cite{mahadev2018classical, brakerski2019cryptographic, aaronson2016complexitytheoretic}. While the former protocols \cite{mahadev2018classical, brakerski2019cryptographic} are best suited to fault-tolerant quantum computers, the latter \cite{aaronson2016complexitytheoretic} is very attractive since it can already be used to verify existing NISQ devices \cite{arute2019quantum}. However, it requires the verifier to carry out classical computations whose runtime scales exponentially with the number of qubits, though with a relatively small exponent, which makes it comparatively practical. Apart from their use to certify quantum computers that can only be used remotely by classical means, one of the most appealing applications of such protocols is in the context of certified random number generators \cite{brakerski2019cryptographic}.
Ground states of frustration-free quantum Hamiltonians that can be efficiently prepared, like the ones presented in this work, may offer an alternative route to this kind of verification; specifically, one can exploit the task of correctly reproducing the expectation values of the observables introduced in Section \ref{sec:section_expectation} as a challenge for the prover. In the following, we will describe such verification protocols and analyze their security and the underlying complexity-theoretic assumptions in different regimes. As we will see, the straightforward application of this idea requires exponential time. More efficient versions of the protocol are possible, but the underlying complexity assumptions are less tangible and thus their security remains unclear.
\subsubsection{Protocol}
The verification protocol consists of three steps:
(\emph{i})~The verifier sends the prover instructions for preparing the state $\ket{\Psi}$.
(\emph{ii})~This step consists of $R$ rounds: in each round, the verifier sends the prover a set of observables; the prover then prepares the state $\ket\Psi$, measures the observables, and reports the outcome.
(\emph{iii}) The verifier
performs certain checks on the accumulated measurement outcomes to verify that the prover has indeed prepared the state $\ket\Psi$ and is thus in possession of a quantum computer.
For step (\emph{i}), the verifier just has to give the circuit that prepares the state to the prover, which she can e.g.\ obtain by trotterizing the adiabatic evolution. Alternatively, she can directly give the time dependent Hamiltonian $H(t,\beta)$, together with the initial states $\ket{\varphi_i}$, to the prover, e.g.\ in case he is in possession of an
analog quantum computer. An honest prover will then be able to efficiently prepare the state $\ket\Psi$ in a time that scales as $\log(N)$, as discussed in Section~\ref{section_Adiabatic}.
For each round of step (\emph{ii}), the verifier sends the prover a list of bases $\bm\alpha=(\alpha_1,\dots,\alpha_N)$, $\alpha_j=x,y,z$, in which the individual qubits should be measured; the $\bm\alpha$ will generally be drawn at random from some distribution which is dictated by the verification step (\emph{iii}). The prover then prepares the state and measures qubit $j$ in the Pauli basis $\alpha_j$, and obtains results $\bm s = (s_1,\dots,s_N)$, $s_j=\pm1$. For an honest prover, these results are drawn from a distribution
\begin{equation}
\label{eq:P0-distribution}
P_0(\bm s | \bm \alpha)= \langle \Psi| \bigotimes_{j=1}^N \frac12(\openone+s_j\sigma_{\alpha_j}^{j})
\ket\Psi\ .
\end{equation}
After receiving the measurement basis, the prover prepares $\ket\Psi$ and performs the measurements. Importantly, since $\ket\Psi$ can be prepared in time $O(\log(N))$, each round can in principle be carried out in time $\log(N)$ as long as the prover has the ability to measure and communicate the results in parallel.
Step (\emph{ii}) allows for several natural generalizations. In particular, we can allow for measurements beyond the Pauli basis, we can enable the verifier to make \emph{adaptive} queries, where the choice of $\bm\alpha$ in any round can depend on the results of the previous round, and we can split each round into several sub-rounds of communication, where the state is prepared once and then a sequence of adaptive measurements is performed on it, measuring only a subset of the qubits at each time.
In step (\emph{iii}), the verifier uses the samples obtained from the prover to compute certain quantities which serve to verify that the prover is indeed sampling from the correct distribution $P_0$. To this end, the verifier can e.g.\ use some of the quantities $Q$ constructed in Section~\ref{sec:section_expectation} for which $\langle Q\rangle=0,1$ (or variants thereof), or the Hamiltonian terms $h_j$ for which $\bra\Psi h_j\ket\Psi=0$ (which can be used to replace the $Z^-_{\{j\}}$). Let us now consider one such observable $Q$ supported on $\lambda\subset \mathcal V$. It can be decomposed as
\begin{equation}
\label{eq:O-Pauli-expansion}
Q = \sum_{\bm\gamma} o(\bm\gamma) \bigotimes_{j\in\lambda}
\sigma_{\gamma_j}^j\ ,
\end{equation}
where $\bm\gamma=(\gamma_1,\dots,\gamma_{|\lambda|})$, $\gamma_j=x,y,z$.
The verifier can then estimate $\langle Q\rangle$ using an estimator
\begin{equation}
\label{eq:Obar}
\bar Q = \sum_{\bm\gamma} o(\bm\gamma) \bar s(\bm\gamma)\ ,
\end{equation}
where $\bar s(\bm\gamma)$ is the average outcome where the prover measured spin $j$ in the basis $\sigma_{\gamma_j}$ for all $j\in \lambda$, and an arbitrary basis for all $j\notin\lambda$;
that is, if we denote the set of rounds where $\alpha_j=\gamma_j$ for all $j\in \lambda$ by $\mathcal R(\bm\gamma)$, and
the result of the $r$'th round by $\bm s^r=(s_1^r,\dots,s_N^r)$, we have
\begin{equation}
\label{eq:O-sbar}
\bar s(\bm\gamma) = \frac{1}{|\mathcal R(\bm\gamma)|} \sum_{r\in\mathcal R(\bm\gamma)} \prod_{j\in\lambda} s_{j}^r\ .
\end{equation}
If the samples have been taken according to $P_0$,
$\bar Q \to \bra\Psi Q\ket\Psi$
in the limit $|\mathcal R(\bm\gamma)|\to\infty$, which can be used as a means of verification (see Appendix~\ref{appendix_bounds_verification} for a quantitative analysis). Note that alternatively, we can determine the average also using only rounds $\mathcal R(\bm\gamma)$ where sites $j \notin \lambda$ have only been measured in some specific bases.
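As a toy illustration of this estimation step, a minimal sketch might look as follows; the observable, its coefficients, and the prover's data below are all made up for the example and are not derived from any particular state $\ket\Psi$:

```python
import numpy as np

N = 4                    # total number of qubits (toy size)
lam = (0, 1)             # support of Q
o = {('x', 'x'): 0.5, ('z', 'z'): 0.5}   # made-up coefficients o(gamma)

rng = np.random.default_rng(0)
rounds = []              # list of (alpha, s): bases and +/-1 outcomes per round
for _ in range(2000):
    alpha = rng.choice(list('xyz'), size=N)
    s = rng.choice([-1, 1], size=N)
    if alpha[0] == alpha[1]:
        s[1] = s[0]      # toy prover: perfect correlation on lam when bases match
    rounds.append((alpha, s))

def sbar(gamma):
    """Average of prod_{j in lam} s_j over the rounds R(gamma),
    i.e. the rounds where alpha_j = gamma_j for all j in lam."""
    vals = [np.prod([s[j] for j in lam])
            for alpha, s in rounds
            if all(alpha[j] == g for j, g in zip(lam, gamma))]
    return np.mean(vals)

Qbar = sum(c * sbar(g) for g, c in o.items())   # the estimator for <Q>
print(Qbar)   # -> 1.0 for this toy prover
```

Since the synthetic prover enforces perfect correlation between the two qubits of $\lambda$ whenever their bases match, each $\bar s(\bm\gamma)$ equals $1$ and the estimator gives $\bar Q=1$.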
\subsubsection{Complexity Analysis}
Let us now analyze under which conditions the protocol allows the verifier to conclude that the
prover must indeed be in possession of a quantum computer, what potential classical cheating strategies might be, and what limitations to such strategies exist.
\paragraph{Sampling from the quantum distribution is hard.\label{para:hardness-true}}
The first cheating strategy would be to find a way to classically sample from the correct distribution $P_0$, Eq.~\eqref{eq:P0-distribution} -- in that case, the verifier would have no way of detecting this. However, there are several strong complexity-theoretic arguments against this possibility. First, note that the adaptive version of the protocol contains measurement-based quantum computing: The cluster state is clearly in the given class (with $\beta=0$), and an adaptive protocol with $\mathrm{poly}(N)$ queries per round would implement a general quantum computation. Thus, classically sampling from the correct $P_0$ in polynomial time is impossible unless $\textsf{BQP}=\textsf{BPP}$. However, it is also known that even sampling from the output of a circuit of commuting gates measured in a fixed basis (that is, our non-adaptive protocol for $\beta=0$), a class known as \textsf{IQP}, is computationally hard: If such sampling were possible up to a constant error (in $\ell_1$ norm), the polynomial hierarchy would collapse to its third level~\cite{Bremner_2010,Bremner_2016}.
We thus conclude that there is strong complexity-theoretic evidence that it is impossible for the prover to classically sample from the correct distribution $P_0$ (up to constant error) in polynomial time.
\paragraph{Reproducing all $\langle Q\rangle$ is hard.}
The second cheating strategy would be to sample from some different distribution $P'$ which is chosen such that all estimators for $\langle Q\rangle$ computed from $P'$ are correct.
Any estimator $\bar Q$, Eq.~\eqref{eq:Obar}, supported on a subset $\lambda$ of sites can be computed in many different ways, which differ by the measurement settings over which we average for the sites not contained in $\lambda$, see Eq.~\eqref{eq:O-sbar} and the discussion below it. We will thus additionally impose that the estimator $\bar Q$ converges to the same value for all those ways to compute it; this can be easily ensured by the verifier by computing $\bar Q$ in all different ways, or just in a randomly chosen one. (Alternatively, this can be ensured by computing the marginal probability distributions in different ways and checking their consistency, that is, by checking that the measurement setting $\alpha_i$ of qubit $i$ does not affect the distribution of the measurement results $s_j$ for any of the other qubits $j\ne i$; this can be seen as a kind of non-signalling condition on the distribution.)
These conditions however imply that $P'=P_0$, the correct distribution derived from the quantum state. The reason is that for any given $\bm\alpha$, we can reconstruct $P'(\,\cdot\,|\bm \alpha)$ from the expectation values of all Pauli measurements given by arbitrary \emph{substrings} of $\bm\alpha$ (that is, where the Paulis in some positions have been replaced by identities, as in Eq.~\eqref{eq:P-pauli-substring}) through an $N$-fold Hadamard transform (using the consistency condition above); the latter, in turn, can be reconstructed from all $\langle Q\rangle$ as they are related by an invertible transformation, as discussed in Section~\ref{sec:section_expectation}.
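As a hypothetical illustration of this reconstruction step, consider a two-qubit example (the Bell state and its Pauli expectation values below are assumptions chosen purely for the example): given the expectation values of all Pauli substrings of $\bm\alpha$, the conditional distribution follows from the $N$-fold Hadamard transform,

```python
import itertools
import numpy as np

# Reconstructing P(s|alpha) from Pauli-substring expectation values via an
# N-fold Hadamard (Walsh) transform. Example: the Bell state (|00>+|11>)/sqrt(2)
# measured in alpha = (z, z); exp_val[T] is the exact <prod_{j in T} Z_j>.
N = 2
exp_val = {(): 1.0, (0,): 0.0, (1,): 0.0, (0, 1): 1.0}

def prob(s):
    """P(s|alpha) = 2^{-N} sum_T <sigma_T> prod_{j in T} s_j."""
    total = 0.0
    for r in range(N + 1):
        for T in itertools.combinations(range(N), r):
            total += exp_val[T] * np.prod([s[j] for j in T])
    return total / 2**N

for s in itertools.product([1, -1], repeat=N):
    print(s, prob(s))   # (1,1)->0.5, (1,-1)->0.0, (-1,1)->0.0, (-1,-1)->0.5
```

The recovered distribution is uniquely fixed by the expectation values, which is exactly the point of the argument above.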
We thus find that this is not a viable cheating strategy, since no other distribution which yields all the correct $\bar Q$ exists. Note that together with the hardness results mentioned under point \ref{para:hardness-true}, this implies that even sampling from a $P'$ which approximates $\langle Q\rangle$ for all $Q$ sufficiently well is hard. The required accuracy in $Q$ has to be chosen such that the expectation values of Pauli strings have exponential accuracy in $N$; since the number of samples required to this end is in fact determined by the latter (the transformation \eqref{eq:Obar} is exact), this generally requires an exponential number of samples.
\paragraph{A space-bounded verification protocol.}
These considerations imply that our protocol can be used for verification in a setting where both prover and verifier have bounded (polynomial) storage space, but the verification procedure can take an exponential number of rounds, and the verifier can use exponential time: In each round, the prover is only granted $\mathrm{poly}(N)$ time (or logarithmic time with suitable parallel processing power). By performing an exponential number of queries, the verifier can get an exponentially precise estimate $\bar Q$ for any of the observables $Q$ (either a randomly selected one, or all of them); importantly, the expansion coefficient $o(\bm\gamma)$ of $Q$ in the Pauli basis, Eq.~\eqref{eq:O-Pauli-expansion}, is a trace of a product of local operators and can thus be computed in \textsf{PSPACE}, and the summands in \eqref{eq:Obar} can be sampled and added sequentially one by one; the verifier thus only requires polynomial storage space. The prover, on the other hand, cannot sample from the correct distribution in the available polynomial (or logarithmic) time, due to the hardness results discussed in point \ref{para:hardness-true}; at the same time, his limited memory prevents him from pre-computing all possible outcomes after having learned the adiabatic circuit for preparing $\ket\Psi$ in step (\emph{i}) (even though the exponential time used for the overall protocol would allow for it).
Let us note that what makes our setup special is not the fact that the knowledge of all $\langle Q\rangle$ allows one to reconstruct the probability distribution -- such
a complete tomography is possible in any scenario.
Rather,
what makes our construction special is that the verifier knows the required expectation values $\langle Q\rangle$ for all operators right away, whereas in a general tomography scheme, a computationally costly reconstruction procedure is required to obtain $P$ from the measured expectation values.
However, in order to compute a general expectation value $\langle Q\rangle$, the verifier still needs to collect an exponential amount of data from the prover: While it is sufficient to sample a small number of randomly chosen $Q$'s, most $Q$'s still have an exponential number of terms in their Pauli expansion \eqref{eq:O-Pauli-expansion}.
The situation here can thus in some sense be regarded as the reverse of that introduced in Ref.~\cite{aaronson2016complexitytheoretic}: While in that case, the verifier requires a polynomial number of measurements from the prover, it takes her exponential time to check if they correspond to the correct probability (by computing the cross-entropy). In our case, the correct expectation values can be trivially computed, but one requires an exponential number of samples to obtain them. Note that computing the coefficients $o(\bm\gamma)$ of $Q$, which are needed to estimate its expectation value from the samples, might require exponential resources as well.
\paragraph{Sampling strategies to reproduce $\langle Q\rangle$ for local $Q$.}
As we have seen, checking some $\langle Q\rangle$, chosen at random from the set of \emph{all} $Q$, allows one to verify that the prover is in possession of a quantum computer. This, however, requires exponential resources, since a typical $Q$ involves a significant fraction of the Pauli terms acting on its support, that is, an exponential number, and moreover we require an exponential accuracy. A practical verification protocol must thus resort to testing only \emph{local} $Q$'s, that is, those which are supported on at most order $\log(N)$ sites, to an accuracy $1/\mathrm{poly}(N)$, as those can be sampled in $\mathrm{poly}(N)$ time.
Thus, the question which arises is whether there is a way for the prover to efficiently sample from a distribution $P'$ which reproduces those local $\langle Q \rangle$ correctly, up to polynomial accuracy.
First, let us notice here that, if the prover indeed measures in the Pauli basis, reaching a polynomial accuracy on local $\langle Q \rangle$ expectation values uniquely identifies the ground state $\ket\Psi$ among all multi-qubit states. The reason is that we can include the terms of the parent Hamiltonian $H=\sum h_j$ in the set of $Q$'s, and since $\ket\Psi$ is the unique ground state of $H$ and $H$ is gapped, this implies $\|\ket\Phi-\ket\Psi\|<1/\mathrm{poly}(N)$ if the $\langle h_j\rangle$ are $1/\mathrm{poly}(N)$-close, see Eq.~\eqref{bound}. Hence, to entirely establish the security of the protocol, it would be enough to prove that any distribution $P'$ reproducing $\langle Q \rangle$ with the desired accuracy must be obtainable by performing Pauli measurements on a multi-qubit state. Notice that a possible way to achieve that would be to prove a robust self-testing statement based on the $\langle Q \rangle$ expectation values (see \cite{Supic2020selftestingof} for a review of self-testing techniques).
Second, even if such a distribution $P'$ exists, there are constraints on the design of algorithms to sample from it efficiently. For instance, one might assume that it should be possible to characterize the space of solutions of the equations $|\bar Q-\langle Q\rangle|\le 1/\mathrm{poly}(N)$, which locally constrain $P'$, use them to find ways to sample locally correctly, and patch those ways of sampling together to obtain a sampling algorithm which works globally. However, such a strategy, if based only on the final conditions on $\langle Q\rangle$, is bound to fail, unless further properties of $P_0$ are taken into account: Specifically, we can choose the $Q$ to enforce 3-SAT clauses or some other classical \textsf{NP}-hard problem (such as spin glasses on a 2D lattice), in which case such an algorithm is bound to fail as it would have to solve the \textsf{NP} problem. (It is a fingerprint of hard instances of \textsf{NP} problems that the local constraints cannot be patched together easily.) Note that also knowing the precise local reduced density matrices does not necessarily make the problem easier, as the general quantum marginal problem is \textsf{QMA}-hard~\cite{liu:marginal-qma,broadbent:marginal-qma}.
Thus, a successful cheating strategy most likely will have to use the full knowledge of the adiabatic path used to prepare the state, or equivalently knowledge of $K_1$ and $K_2$. One approach could be to classically simulate the adiabatic evolution -- for instance, one could try to adapt the Monte Carlo algorithm by Bravyi and Terhal~\cite{bravyi:stoq-ffree} for the simulation of adiabatic quantum computation along a path which is both frustration free (which we have) and sign-problem free (which we don't necessarily have), followed by a measurement in the $\sigma_z$ basis (which we don't have). Indeed, it is plausible that such an algorithm will allow one to sample correctly in cases where the final Hamiltonian is classical and only needs to be sampled in the $\sigma_z$ basis. On the other hand, it will likely break down if one of the conditions is not met, which will manifest itself in a sign problem in the Monte Carlo method. In fact, we cannot expect a cheating strategy based on simulating the adiabatic evolution to work, since such an algorithm (if working as desired) will precisely sample from the \emph{correct} distribution $P_0$, which we have previously established to be computationally hard. Thus, in order to attack the protocol with such a strategy, one would have to devise a method to simulate the adiabatic evolution in a way where local expectation values are reproduced correctly, yet global properties would not
(with the goal of circumventing hardness results);
it is unclear what a route to accomplish this would look like.
\begin{acknowledgements}
We thank Adam Bouland, Zeph Landau, and Umesh Vazirani for helpful discussions. This work has received support from the European Union's Horizon 2020 program through the ERC-AdG QUENOCOBA (No.~742102) and the ERC-CoG SEQUAM (No.~863476),
from the DFG (German Research Foundation) under Germany's
Excellence Strategy (EXC2111-390814868), and from the Austrian Science Fund (FWF)
through
Project number 414325145 within SFB F7104.
J.T.\ thanks the Alexander von Humboldt foundation and the Google Research Scholar Program for support.
\end{acknowledgements}
\bibliographystyle{h-physrev}
\section{INTRODUCTION}
One of the central questions in the interpretation of quantum
mechanics from a realist perspective is whether the
indeterminacies or uncertainties in quantum mechanics are of an
\emph{ontological} or \emph{gnoseological} character. They are
gnoseological if the observables of the system possess exact
values that quantum mechanics is unable to predict and can only
provide probability distributions for them. In this case the
uncertainty is in our knowledge of the system and not in the
system itself, and one would wish for a deterministic theory with
hidden variables that are averaged to produce the same
statistical predictions as quantum mechanics. The
indeterminacies are ontological if the observables do not assume
exact values but instead they are diffuse by nature and the
indeterminacies are in nature and not in our knowledge. In this
case quantum mechanics can be considered to be a complete theory
and not the statistical average of a better theory. The
Einstein-Podolsky-Rosen argument\cite{epr} was originally
designed in order to prove that quantum mechanics is not
complete, although later developments favour an interpretation of
the argument where the values of the ``elements of physical
reality'' are nonlocally determined (a special case of
contextuality). Indeed, the experimental violations\cite{exp} of
Bell's inequality\cite{bell} have established such nonlocal
effects in the valuation of observables.
The existence of definite context independent values for the
observables was shown by Bell\cite{bell2} and by Kochen and
Specker\cite{koch} to be in conflict with quantum mechanics on
logical grounds, that is, in conflict with the geometrical
structure of the Hilbert space, more than with the postulates of
quantum mechanics. The Kochen-Specker theorem is a complicated
argument requiring 117 vectors in a three dimensional Hilbert
space. A simpler proof was produced by Peres\cite{per} with only
33 vectors and Penrose\cite{pen} found a beautiful geometrical
representation for these vectors. With another goal, not trying
to minimize the number of directions, a proof of the theorem was
given\cite{gill} involving continuous sets of directions.
Analysing spin observables for systems of two and three
particles, Mermin\cite{mer} presented physical examples of the
Bell-Kochen-Specker contradiction.
The original Einstein-Podolsky-Rosen argument involves
observables of position and momentum and D. Bohm\cite{Bohm}
presented the same argument but in terms of spin observables.
Since then, spin observables were preferred for
Einstein-Podolsky-Rosen and Bell-Kochen-Specker type of
arguments. This preference is not only because these involve a
finite dimensional Hilbert space with simpler mathematics, but
mainly because spin observables are more adequate for real
experimental tests. However, spin is an essentially quantum
mechanical observable with almost no analogue in classical
mechanics and therefore it is interesting to exhibit these
arguments also for position and momentum observables in order to
emphasize the drastic differences between classical and quantum
mechanics. With these observables, a ``frame function'' was
constructed and a rigorous proof of the Bell-Kochen-Specker
theorem was provided\cite{zimba}. Furthermore, illustrations of
the Bell-Kochen-Specker contradiction were built involving Sign
and Reflection observables\cite{fleming} and also for unitary
operators, functions of position and momentum\cite{clifton}. In
all these cases, the operators involved are not trivial functions
of position and momentum and it would be convenient to devise a
simple proof. In this work, the incompatibility of quantum
mechanics with the assumption of the existence of noncontextual
values for position and momentum of just one particle is shown.
The proof is very simple (\emph{a posteriori}) and it involves
the simplest physical system (one spinless free particle) and
therefore it makes the Bell-Kochen-Specker result more easily
accessible to non-experts.
In this work we will investigate the possibility of existence of
definite values for position and momentum observables of a single
particle. Of course these putative values are not provided by
quantum mechanics and it does not matter whether they are
deterministic, as in a hidden variable theory, or are random
values distributed according to some inherent randomness in
nature (zitterbewegung). In an ensemble of systems, we only
require that the proposed putative values should be distributed
according to the distributions predicted by quantum mechanics.
\section{THE PUTATIVE VALUE}
Let us assume that we can assign to any Hilbert space operator $A$
a numerical value $\overline{A}$ called the \emph{putative value},
with the following properties.
\begin{itemize}
\item \emph{Completeness}: the set $\{\overline{A}\}$ of all
possible putative values is the spectrum of the corresponding
operator.
\item \emph{Functional consistency:} the putative value
preserves functional relations in the sense that for any
function $F$ it is $\overline{F(A)}=F(\overline{A})$.
\item \emph{Context independence:} the putative value
assumed by an operator is independent of the context in
which the corresponding observable is placed. Different contexts
are defined by different sets of commuting operators.
\end{itemize}
It can be proved\cite{ish} that the functional consistency
condition has the important consequence that the putative values
for \emph{commuting} operators are additive and multiplicative.
That is,
\begin{equation}\label{aditandmul}
[A,B]=0\ \rightarrow\ \overline{A+B}=
\overline{A}+\overline{B}\ \mbox{ and }\
\overline{A\cdot B}=\overline{A}\cdot \overline{B}\ .
\end{equation}
These relations can be generalized to functions that can be
expanded as power series: $\overline{
F(A,B)}=F(\overline{A},\overline{B})$, however we will not use
the general form. In fact we will only need the additive
property. This additive property is not necessarily true for non
commuting operators. For instance if $A=J_{x}$ and $B=J_{y}$ are
two components of angular momentum, then $A+B=\sqrt{2}J_{u}$ is
also a component of angular momentum in a different direction but
the spectrum of $\sqrt{2}J_{u}$ is clearly not equal to the sum
of the spectra of $J_{x}$ and $J_{y}$.
We will now investigate whether it is possible to assign putative
values $\overline{X}$ and $\overline{P}$ to the position and
momentum observables $X$ and $P$ of a particle, in a way
compatible with quantum mechanics. This compatibility means that
the putative values should be distributed according to the
probability functions provided by quantum mechanics. Of course,
quantum mechanics can not predict or compute these putative
values but we want to know if their \emph{existence} is allowed
by quantum mechanics. We just want to see if we can \emph{think}
that these values exist. It is also irrelevant whether these
values can be calculated by a deterministic hidden variables
theory or they are assigned randomly.
Let us assume that the position observable is divided by some
length scale $\lambda$ (Compton length, for instance) and
momentum is multiplied by $\lambda/\hbar$ making them
dimensionless. Therefore their associated values are pure numbers
and the addition of position with momentum is not meaningless.
Let us consider the operators $X_{1},X_{2},P_{1},P_{2}$
corresponding to the observables of position and momentum of a
particle in a plane and let
$\overline{X}_{1},\overline{X}_{2},\overline{P}_{1},\overline{P}_{2}$
be their putative values. Let us now build several linear
combinations of these operators that can be grouped in
intersecting subsets of commuting operators. In Fig.1 we see some
of these linear combinations and we notice that all operators
joined by a straight line commute. Let us consider the two
operators $A=X_{1}-X_{2}+P_{1}+P_{2}$ and
$B=X_{1}+X_{2}+P_{1}-P_{2}$. Since they commute, their putative
values are such that
\begin{equation}\label{putval1}
\overline{A+B}=\overline{A}+\overline{B}\ .
\end{equation}
The left hand side of this equation is
$\overline{A+B}=\overline{2X_{1}+2P_{1}}$, and the right hand
side is
$\overline{A}+\overline{B}=\overline{X_{1}-X_{2}+P_{1}+P_{2}}
+\overline{X_{1}+X_{2}+P_{1}-P_{2}}
=\overline{X_{1}-X_{2}}+\overline{P_{1}+P_{2}}
+\overline{X_{1}+X_{2}}+\overline{P_{1}-P_{2}} =
\overline{X}_{1}-\overline{X}_{2}+\overline{P}_{1}+\overline{P}_{2}
+\overline{X}_{1}+\overline{X}_{2}+\overline{P}_{1}-\overline{P}_{2}
=2\overline{X}_{1}+2\overline{P}_{1}$. Therefore the equation
above becomes
\begin{equation}\label{putval2}
\overline{X_{1}+P_{1}}=\overline{X}_{1}+\overline{P}_{1} \ .
\end{equation}
We have used the additive property of the putative values of
\emph{commuting} observables and we have shown that, even though
$X_{1}$ and $P_{1}$ \emph{do not commute}, their putative values
are also additive. This is indeed very suspicious and for most
experts this would be sufficient reason to deny the existence of
the putative values. Anyway we will prove that this result is in
contradiction with quantum mechanics, but before doing this, some
comments are convenient.
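The operator identities used in the derivation above, $[A,B]=0$ and $A+B=2(X_{1}+P_{1})$, can also be checked numerically. The finite-dimensional harmonic-oscillator truncation below is purely an illustrative device; note that the cross-factor commutators vanish identically, so the truncation does not spoil the check:

```python
import numpy as np

d = 8                                        # truncation dimension (illustrative)
a = np.diag(np.sqrt(np.arange(1, d)), 1)     # truncated annihilation operator
X = (a + a.T) / np.sqrt(2)                   # dimensionless position
P = (a - a.T) / (1j * np.sqrt(2))            # dimensionless momentum
I = np.eye(d)

# two-particle operators via tensor products
X1, X2 = np.kron(X, I), np.kron(I, X)
P1, P2 = np.kron(P, I), np.kron(I, P)
A = X1 - X2 + P1 + P2
B = X1 + X2 + P1 - P2

print(np.linalg.norm(A @ B - B @ A))          # ~ 0: A and B commute
print(np.linalg.norm(A + B - 2 * (X1 + P1)))  # ~ 0: A + B = 2(X1 + P1)
```

Both norms vanish up to floating-point rounding, confirming that $A$ and $B$ commute while summing to $2(X_{1}+P_{1})$.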
\begin{figure*}
\setlength{\unitlength}{1cm}
\begin{picture}(20,8)
\thicklines \put(4,1){\framebox(9,4)} \put(4,3){\line(1,0){9}}
\multiput(4,1)(3,0){4}{\circle*{0.3}}
\multiput(4,5)(3,0){4}{\circle*{0.3}}
\multiput(4,3)(9,0){2}{\circle*{0.3}}
\put(3.3,5.5){$\mathbf{X_{1}-X_{2}}$}
\put(6.8,5.5){$\mathbf{X_{1}}$} \put(9.8,5.5){$\mathbf{X_{2}}$}
\put(12.3,5.5){$\mathbf{X_{1}+X_{2}}$}
\put(3.3,0.3){$\mathbf{P_{1}+P_{2}}$}
\put(6.8,0.3){$\mathbf{P_{1}}$} \put(9.8,0.3){$\mathbf{P_{2}}$}
\put(12.3,0.3){$\mathbf{P_{1}-P_{2}}$}
\put(-0.3,3){$\mathbf{A=X_{1}-X_{2}+P_{1}+P_{2}}$}
\put(13.5,3){$\mathbf{B=X_{1}+X_{2}+P_{1}-P_{2}}$}
\end{picture}
\caption[FIGURE 1.]{ Set of operators used to show that, although
X and P do not commute, their putative values are additive.
Notice that all operators joined by a straight line
commute.\vspace{2cm}}
\end{figure*}
The property of context independence is necessary in the above
argument because we assume that the value of, for instance, the
operator $X_{1}+X_{2}$ in the upper right corner of Fig.1, is the
same when we consider it as a member of the set
$\{X_{1}-X_{2},X_{1},X_{2},X_{1}+X_{2}\}$ as the value it takes
when it is a member of the set $\{X_{1}+X_{2},B,P_{1}-P_{2}\}$.
Without this assumption, Eq.(\ref{putval2}) could not be obtained.
We could have taken other sets of commuting operators leading to
similar results. For instance in the top line of Fig.1 we could
take the operators $X_{1}+P_{2},X_{1},P_{2},X_{1}-P_{2}$ and the
appropriate set in the lower line. Also, choosing the signs
properly, instead of an addition we could get a subtraction in
Eq.(\ref{putval2}) or any linear combination of the operators.
Considering the projection of position and momentum along two
arbitrary orthogonal directions in three dimensional space, would
lead us to the conclusion that for any linear combination we have
$\overline{\alpha\textbf{X}+\beta\textbf{P}}=
\alpha\overline{\textbf{X}}+\beta\overline{\textbf{P}}$. Notice
that in order to obtain these results it is important that the
commutator of $X$ and $P$ is a \emph{constant} and a subtle
cancellation of the commutator in different directions is made.
This cancellation is no longer possible when we use a similar
scheme in order to try to prove that
$\overline{X^{2}+P^{2}}=\overline{X^{2}}+\overline{P^{2}}$.
Unfortunately several attempts to prove this failed. If we could
prove this, then we would obtain an immediate contradiction
because the spectrum of $X^{2}+P^{2}$ is discrete whereas the
spectra of $X^{2}$ and $P^{2}$ are continuous. An immediate
contradiction would also follow if we could prove that
$\overline{XP}=\overline{X}\ \overline{P}$ because in one case the
spectrum is complex and in the other it is real.
\section{CONTRADICTION WITH QUANTUM MECHANICS}
If the putative values for position $X$ and momentum $P$ exist,
then they must be such that, for the operator $S=X+P$, we have
$\overline{S}=\overline{X+P}=\overline{X}+\overline{P}$. We will
now see that this is in contradiction with quantum mechanics. For
this, let us consider an ensemble of systems described by quantum
mechanics by a Hilbert space element $\psi$. Let
$\{\varphi_{x}\}\ ,\{\phi_{p}\}\ ,\{\eta_{s}\}$ be the
eigenvectors of the operators $X,P,S$ corresponding to the
eigenvalues $x,p,s$. According to quantum mechanics, these three
observables will have the probability distribution functions
\begin{eqnarray}
\rho(x) &=& |\langle\varphi_{x},\psi\rangle|^{2} \ ,\\
\varpi(p) &=& |\langle\phi_{p},\psi\rangle|^{2}\ , \\
\sigma(s)&=& |\langle\eta_{s},\psi\rangle|^{2}\ .
\end{eqnarray}
The assumptions made are the usual ones when we deal with
position and momentum observables. However in order to be more
rigorous we should state that the operators $X,P$ and $S$ are
unbounded (this follows from their commutation relations) and they
have no eigenvectors \emph{in} the Hilbert space. The solution to
this problem is to define a \emph{Rigged} Hilbert space (known as
a Gel'fand triplet in mathematics) that contains the sets of
generalized eigenvectors $\{\varphi_{x}\}\ ,\{\phi_{p}\}\
,\{\eta_{s}\}$ that can be used as bases in order to expand any
Hilbert space element\cite{bal}. The modulus squared of the
expansion coefficients are interpreted in quantum mechanics as
the probability distributions given in the equations above. As an
example, the generalized eigenvectors in the coordinate
representation, where $X=x$ and $P=-i\partial_{x}$, are given by
\begin{eqnarray}
\varphi_{x_{0}}(x) &=& \delta (x-x_{0}) \ ,\\
\phi_{p}(x) &=& \frac{1}{\sqrt{2\pi}}\exp\left(ipx\right)\ , \\
\eta_{s} (x)&=& \frac{i^{1/4}}{\sqrt{2\pi}}
\exp\left(-i\left(\frac{(x-s)^{2}}{2}-\frac{s^{2}}{4}\right)\right)\
.
\end{eqnarray}
One can easily check that they satisfy their corresponding
eigenvalue equations and that they are ``delta function''
normalized. We will not use these functions but we present them
just in order to clarify that the probability distributions in
Eqs.(4) to (6) are mathematically well defined.
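The eigenvalue equations for $\phi_{p}$ and $\eta_{s}$ can also be verified symbolically; the check below uses the computer-algebra package SymPy purely as an illustration:

```python
import sympy as sp

# Symbolic check of the eigenvalue equations in the coordinate
# representation, where X = x and P = -i d/dx.
x, p, s = sp.symbols('x p s', real=True)

phi_p = sp.exp(sp.I * p * x) / sp.sqrt(2 * sp.pi)
eta_s = sp.root(sp.I, 4) / sp.sqrt(2 * sp.pi) * sp.exp(
    -sp.I * ((x - s)**2 / 2 - s**2 / 4))

# P phi_p = p phi_p
assert sp.simplify(-sp.I * sp.diff(phi_p, x) - p * phi_p) == 0
# (X + P) eta_s = s eta_s
assert sp.simplify(x * eta_s - sp.I * sp.diff(eta_s, x) - s * eta_s) == 0
print("eigenvalue equations verified")
```

(The eigenvalue equation for $\varphi_{x_{0}}$ involves the Dirac delta and is checked directly from $x\,\delta(x-x_{0})=x_{0}\,\delta(x-x_{0})$.)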
In the ensemble of systems, the putative values of position and
momentum $\overline{X}$ and $\overline{P}$ are distributed
according to $\rho(x)$ and $\varpi(p)$. The sum of these two
independent random variables,
$\overline{S}=\overline{X+P}=\overline{X}+\overline{P}$, is then
distributed, according to the theory of random variables, by the
convolution
\begin{eqnarray}
\sigma_{pv}(s)&=& \int\!\!dx\ \rho(x)\ \varpi(s-x) \nonumber \\
&=& \int\!\!\!dx\!\!\!\int\!\!\!dp\ \rho(x)\ \varpi(p)
\ \delta\left(s-(x+p)\right)\ .
\end{eqnarray}
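A characteristic consequence of this convolution, which is computed explicitly later in this section, is that the means and variances of the two random variables add. This can be illustrated by direct sampling; the densities below are arbitrary stand-ins for $\rho$ and $\varpi$, not derived from any particular state $\psi$:

```python
import numpy as np

rng = np.random.default_rng(1)
xbar = rng.normal(0.0, 1.0, 200_000)     # rho: standard normal (stand-in)
pbar = rng.uniform(-1.0, 1.0, 200_000)   # varpi: uniform on [-1,1] (stand-in)
s = xbar + pbar                          # putative values of S

print(s.mean())   # approx 0: means add
print(s.var())    # approx 1 + 1/3: variances add for independent sums
```

This additivity of variances is precisely the putative-value prediction $\Delta^{2}_{pv}(s)=\Delta^{2}(x)+\Delta^{2}(p)$ derived below.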
Now we will see that this putative value prediction for the
distribution is different from the quantum mechanical prediction
in Eq.(6), that can be written as
\begin{equation}\label{sigmamq}
\sigma(s)= \int\!\!\!dx\!\!\!\int\!\!\!dp\
\langle\varphi_{x},\psi\rangle \langle\psi,\phi_{p}\rangle\
\langle\phi_{p},\eta_{s}\rangle\langle\eta_{s},\varphi_{x}\rangle
\ ,
\end{equation}
and appears formally quite different from the putative value
distribution. The formal difference between these two
distributions is such that, presumably,
$\sigma_{pv}(s)\neq\sigma(s)$ for all states $\psi$. In
particular, it is easy to show that for some states both
distributions have different dispersion, that is,
$\Delta^{2}_{pv}(s)\neq\Delta^{2}_{mq}(s)$. The quantum
mechanical prediction is:
\begin{eqnarray*}
\Delta^{2}_{mq}(s) &=& \langle S^{2}\rangle-\langle S\rangle^{2} =
\langle (X+P)^{2}\rangle-\langle X+P\rangle^{2}\\
&=& \langle X^{2}+P^{2}+XP+PX\rangle-\langle X\rangle^{2}-\langle P\rangle^{2}
-2\langle X\rangle\langle P\rangle\\
&=& \Delta^{2}(x) + \Delta^{2}(p)+
\langle XP+PX\rangle-2\langle X\rangle\langle P\rangle\ ,
\end{eqnarray*}
and the putative value prediction is
\begin{eqnarray*}
\Delta^{2}_{pv}(s) &=&\int\!\!\!ds\ s^{2} \sigma_{pv}(s)
-\left(\int\!\!\!ds\ s\ \sigma_{pv}(s)\right)^{2}\\
&=&\int\!\!ds\ s^{2} \int\!\!\!dx\!\!\!\int\!\!\!dp\ \rho(x)\
\varpi(p)\ \delta\left(s-(x+p)\right)
-\left(\int\!\!\!ds\ s \int\!\!\!dx\!\!\!\int\!\!\!dp\ \rho(x)\ \varpi(p)
\ \delta\left(s-(x+p)\right) \right)^{2} \\
&=& \int\!\!\!dx\!\!\!\int\!\!\!dp\ (x+p)^{2}\rho(x)\ \varpi(p)
-\left(\int\!\!\!dx\!\!\!\int\!\!\!dp\ (x+p)\ \rho(x)\ \varpi(p)
\right)^{2} \\
&=& \int\!\!\!dx\!\!\!\int\!\!\!dp\ (x^{2}+p^{2}+2xp)\ \rho(x)\
\varpi(p)
-\left(\int\!\!\!dx\ x\ \rho(x)+ \int\!\!\!dp\ p\ \varpi(p)
\right)^{2}\\
&=& \Delta^{2}(x) + \Delta^{2}(p)\ .
\end{eqnarray*}
Clearly, at least for any state with non vanishing
correlation\cite{rob} $\langle XP+PX\rangle-2\langle
X\rangle\langle P\rangle\neq 0$, the assumption of the existence
of the putative values is in contradiction with quantum
mechanics.
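This discrepancy is easy to exhibit numerically. As a sketch (the chirped-Gaussian state below is an assumption, chosen only because it has nonvanishing correlation $\langle XP+PX\rangle-2\langle X\rangle\langle P\rangle=a$), one can discretize $X=x$ and $P=-i\partial_{x}$ on a grid:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
a = 1.0
psi = np.exp(-(1 - 1j * a) * x**2 / 2)       # "chirped" Gaussian (assumed state)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize on the grid

rho = np.abs(psi)**2
mean_x = np.sum(x * rho) * dx
var_x = np.sum(x**2 * rho) * dx - mean_x**2

dpsi = np.gradient(psi, dx)                  # P psi = -i psi'
mean_p = np.real(np.sum(np.conj(psi) * (-1j) * dpsi) * dx)
var_p = np.sum(np.abs(dpsi)**2) * dx - mean_p**2   # <P^2> = int |psi'|^2

# correlation term <XP+PX> - 2<X><P>
corr = (2 * np.real(np.sum(np.conj(psi) * x * (-1j) * dpsi) * dx)
        - 2 * mean_x * mean_p)

var_s_qm = var_x + var_p + corr   # quantum-mechanical dispersion of S
var_s_pv = var_x + var_p          # putative-value (convolution) prediction
print(var_x, var_p, corr)         # approx 0.5, 1.0, 1.0
print(var_s_qm, var_s_pv)         # approx 2.5 vs 1.5
```

For $a=1$ the quantum dispersion is $\Delta^{2}_{mq}(s)=5/2$, while the putative-value prediction gives $\Delta^{2}_{pv}(s)=3/2$, in line with the analytic comparison above.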
\section{CONCLUSION}
If the predictions of quantum mechanics are correct, then we have
proved that position and momentum observables cannot be assigned
any context independent value. The proof presented here involves
elementary observables of one single particle and provides a very
simple illustration of the Bell-Kochen-Specker contradiction.
Context \emph{dependent} putative values are not prohibited and
all attempts to replace standard quantum mechanics by some form
of hidden variables theories must necessarily include the context
dependence in the deterministic assignment of values to the
observables. This necessity makes such deterministic theories
less appealing. One of the main reasons for developing hidden
variables theories was to bring the quantum world closer to the
classical expectations but the necessary contextuality goes in
the other direction.
\section{Acknowledgements}
I would like to thank M. Hoyuelos and A. Jacobo for comments.
This work received partial support from ``Consejo Nacional de
Investigaciones Cient{\'\i}ficas y T{\'e}cnicas'' (CONICET), Argentina.
\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newtheorem{Thm}{Theorem}[section]
\newtheorem{Defi}[Thm]{Definition}
\newtheorem{Cor}[Thm]{Corollary}
\newtheorem{Lemma}[Thm]{Lemma}
\newtheorem{Prop}[Thm]{Proposition}
\newtheorem{Rem}[Thm]{Remark}
\newtheorem{Conj}[Thm]{Conjecture}
\newtheorem{Prelim}[Thm]{Preliminary}
\newenvironment{thm}[0]{\begin{Thm}\noindent}%
{\end{Thm}}
\newenvironment{defi}[0]{\begin{Defi}\noindent\rm}%
{\end{Defi}}
\newenvironment{cor}[0]{\begin{Cor}\noindent}%
{\end{Cor}}
\newenvironment{lemma}[0]{\begin{Lemma}\noindent}%
{\end{Lemma}}
\newenvironment{prop}[0]{\begin{Prop}\noindent}%
{\end{Prop}}
\newenvironment{rem}[0]{\begin{Rem}\noindent\rm}%
{\end{Rem}}
\newenvironment{conj}[0]{\begin{Conj}\noindent}%
{\end{Conj}}
\newenvironment{prelim}[0]{\begin{Prelim}\noindent}%
{\end{Prelim}}
\def\par\noindent{\it Proof.}{\ }{\ }{\par\noindent{\it Proof.}{\ }{\ }}
\def~\hfill$\square$\medbreak{~\hfill$\square$\medbreak}
\def\medbreak\noindent{\medbreak\noindent}
\def\text#1{\;\;\;\;{\rm \hbox{#1}}\;\;\;\;}
\def\msy#1{{\mathbb #1}}
\def\frak#1{\mathfrak #1}
\def\inp#1#2{\langle#1\,,\,#2\rangle}
\def\hinp#1#2{\langle#1\,|\,#2\rangle}
\def\pijl#1{{\buildrel #1 \over \longrightarrow}}
\def\sideremark#1{\ifvmode\leavevmode\fi\vadjust{\vbox to0pt{\vss
\hbox to 0pt{\hskip\hsize\hskip1em%
\vbox{\hsize2cm\tiny\raggedright\pretolerance10000
\noindent #1\hfill}\hss}\vbox to8pt{\vfil}\vss}}}
\newcommand{\edz}[1]{\sideremark{#1}}
\begin{document}
\setcounter{section}{0}
\title[Fourier series on compact symmetric spaces]{Fourier series on compact symmetric spaces: K-finite
functions of small support}
\author{Gestur \'Olafsson}
\thanks{Research of \'Olafsson was supported by NSF grants DMS-0402068
and DMS-0801010}
\address{Department of Mathematics, Louisiana State University, Baton Rouge,
LA 70803, U.S.A.}
\email{olafsson@math.lsu.edu}
\author{Henrik Schlichtkrull}
\address{Department of Mathematics, University of Copenhagen, Universitetsparken 5,
DK-2100 K{\o}benhavn {\O}, Denmark}
\email{schlicht@math.ku.dk}
\subjclass{ 43A85, 53C35, 22E46}
\keywords{Symmetric space; Fourier transform; Paley-Wiener theorem}
\begin{abstract}
The Fourier coefficients of a function $f$
on a compact symmetric space
$U/K$ are given by integration of $f$ against
matrix coefficients of irreducible representations of $U$.
The coefficients depend on a spectral parameter $\mu$,
which determines the representation, and they can be
represented by elements $\widehat f(\mu)$
in a common Hilbert space ${\mathcal H}$.
We obtain a theorem of Paley-Wiener type which
describes the size of the support of $f$
by means of the exponential type
of a holomorphic ${\mathcal H}$-valued extension of $\widehat f$,
provided $f$ is $K$-finite and of sufficiently small support.
The result was obtained previously
for $K$-invariant functions, to which case we reduce.
\end{abstract}
\maketitle
\eqsection{Introduction}
\noindent
The present paper is a continuation of our article \cite{OS}.
We consider a Riemannian symmetric space $X$ of compact type,
realized as the homogeneous space $U/K$ of a compact Lie group $U$.
Up to covering, $U$ is the connected component of the group
of isometries of $X$. As an example,
we mention the sphere $S^n$, for which $U={\rm SO}(n+1)$
and $K={\rm SO}(n)$. In the cited paper, we considered
$K$-invariant functions on $U/K$. The Fourier series of
a $K$-invariant function $f$ is
\begin{equation}
\label{e: K-invariant Fourier series}
\sum_{\mu}a_\mu\psi_\mu(x),
\end{equation}
where $\psi_\mu$
is the zonal spherical function
associated with the representation of $U$ with highest weight
$\mu$, and where the Fourier coefficients $a_\mu$ are given by
\begin{equation}\label{eq-sphericalFT}
a_\mu= d(\mu)\tilde f(\mu)=
d(\mu)\int_{U/K} f(x)\overline{\psi_\mu(x)}\,dx,
\end{equation}
with $d(\mu)$ being the representation dimension,
and $dx$ being the normalized invariant measure on $U/K$.
The main result of \cite{OS} is a local Paley-Wiener
theorem, which gives a necessary and sufficient condition
on the coefficients in the series
(\ref{e: K-invariant Fourier series})
for it to be the Fourier series of a smooth
$K$-invariant function $f$
supported in a geodesic ball of a given
sufficiently small radius $r$ around the
origin in $U/K$. The condition is that $\mu \mapsto a_\mu$
extends to a holomorphic function of exponential type $r$
satisfying a certain invariance under the action of the Weyl group.
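Schematically, and up to the precise normalizations of \cite{OS},
this is the familiar Paley-Wiener shape: the holomorphic extension
$a$ of $\mu\mapsto a_\mu$ satisfies, for each $k\in{\msy N}$, an
estimate of the form
$$|a(\lambda)|\leq C_k(1+|\lambda|)^{-k}e^{r|{\rm Re}\,\lambda|},
\quad\quad \lambda\in{\frak a}^*_{\msy C},$$
with constants $C_k>0$; compare Definition \ref{d: PW space} below.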
We refer to \cite{BOP,BOP2,C06,G01} for previous results on special cases.
The case of distributions will be treated in \cite{OS08b}.
In the present paper we consider the general case where
the $K$-invariance is replaced by
$K$-finiteness. Instead of being scalars, the Fourier
coefficients take values in the Hilbert space ${\mathcal H}=L^2(K/M)$,
where $M$ is a certain subgroup of $K$.
In the case $U/K=S^n$, we have $K/M=S^{n-1}$.
Our main result is Theorem \ref{t: main} below, which describes
the set of Fourier coefficients of $K$-finite smooth functions
on $U/K$, supported in a ball of a given sufficiently small radius.
The corresponding result for Riemannian
symmetric spaces of the non-compact type is due
to Helgason, see \cite{Sig73}.
Our method is by reduction to the $K$-invariant case.
For the reduction we
use Kostant's description of the spherical principal series
of a semisimple Lie group \cite{Kostant}. A similar reduction
was found by Torasso \cite{Torasso} for Riemannian symmetric
spaces of the non-compact type, thus providing an alternative
proof of the mentioned theorem of Helgason.
\eqsection{Basic notation}
\label{s: notation}\noindent
We recall some basic notation from \cite{OS}.
We are considering a Riemannian symmetric space $U/K$, where
$U$ is a connected compact semisimple Lie group and
$K$ a closed symmetric subgroup. By definition this means
that there exists a nontrivial involution $\theta$ of $U$ such that
$K_0\subset K\subset U^\theta$. Here $U^\theta$ denotes the
subgroup of $\theta$-fixed points, and
$K_0:=U^\theta_0$ its identity component. The
base point in $U/K$ is denoted by $x_0=eK$.
The Lie algebra of $U$ is denoted ${\frak u}$, and by
${\frak u}={\frak k}\oplus{\frak q}$ we denote the
Cartan decomposition associated with the involution $\theta$.
We endow $U/K$ with the Riemannian structure induced
by the negative of the Killing form on ${\frak q}$.
Let ${\frak a}\subset{\frak q}$ be a maximal abelian subspace,
${\frak a}^*$ its dual space, and ${\frak a}^*_{\msy C}$ the complexified
dual space. The set of non-zero weights for ${\frak a}$ in ${\frak u}_{\msy C}$
is denoted by $\Sigma$. The roots $\alpha\in\Sigma\subset{\frak a}^*_{\msy C}$
are purely imaginary valued on ${\frak a}$.
The corresponding Weyl group, generated by the reflections
in the roots, is denoted $W$. We make a fixed choice of
a positive system $\Sigma^+$ for $\Sigma$, and define
$\rho\in i{\frak a}^*$ to be half the sum of the roots in $\Sigma^+$,
counted with multiplicities.
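Explicitly,
$$\rho=\frac{1}{2}\sum_{\alpha\in\Sigma^+}m_\alpha\,\alpha,$$
where $m_\alpha$ denotes the dimension of the weight space of
$\alpha$ in ${\frak u}_{\msy C}$.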
The centralizer of ${\frak a}$ in $K$ is denoted $M=Z_K({\frak a})$.
Some care has to be taken because we are not assuming $K$
is connected. We recall that if $U$ is simply connected,
then $U^\theta$ is connected and $K=K_0$, see \cite{Sig}, p. 320.
We recall also that in general $K=MK_0$, see \cite{OS}, Lemma 5.2.
In the following we shall need to complexify $U$ and $U/K$.
Since $U$ is compact there exists a unique (up to isomorphism)
connected complex Lie group $U_{\msy C}$ with Lie algebra ${\frak u}_{\msy C}$
which contains $U$ as a real Lie subgroup.
Let ${\frak g}$ denote the real form ${\frak k}+i{\frak q}$ of ${\frak u}_{\msy C}$,
and let $G$ denote the connected real Lie subgroup of $U_{\msy C}$
with this Lie algebra.
Then ${\frak g}_{\msy C}={\frak u}_{\msy C}$ as complex vector
spaces, and $U_{\msy C}$ complexifies $G$ as well as $U$. In particular,
the almost complex structures that ${\frak u}$ and ${\frak g}$ induce
on $U_{\msy C}$ are identical. For this reason we shall
denote $U_{\msy C}$ also by $G_{\msy C}$.
The Cartan involutions of ${\frak u}$ and $U$
extend to involutions of ${\frak g}_{\msy C}$ and $G_{\msy C}$, which we shall denote
again by $\theta$, and which leave ${\frak g}$ and $G$ invariant.
The corresponding Cartan decomposition of ${\frak g}$ is
${\frak g}={\frak k}+i{\frak q}$. It follows that $K_0=G^\theta$ is maximal compact in $G$,
and $G/K_0$ is a Riemannian symmetric space of the non-compact type.
We denote by ${\frak g}={\frak k}\oplus i{\frak a}\oplus {\frak n}$ and
$G=K_0AN$ the Iwasawa decompositions of ${\frak g}$ and $G$
associated with $\Sigma^+$. Here $A=\exp(i{\frak a})$ and
$N=\exp{\frak n}$.
Furthermore, we
let $H\colon G\rightarrow i{\frak a}$ denote the {\it Iwasawa projection}
$$K_0AN\ni k\exp Y n=g\mapsto H(g)=Y.$$
Let $K_{0{\msy C}}$, $A_{\msy C}$, and $N_{\msy C}$ denote the connected subgroups
of $G_{\msy C}$ with Lie algebras ${\frak k}_{\msy C}$, ${\frak a}_{\msy C}$ and ${\frak n}_{\msy C}$,
and put $K_{\msy C}=K_{0{\msy C}}K$. Then $G_{\msy C}/K_{\msy C}$
is a symmetric space, and it
carries a natural complex structure with respect to which
$U/K$ and $G/K_0$ are totally real submanifolds
of maximal dimension.
\begin{lemma}\label{l: complex Iwasawa map}
There exists an open $K_{\msy C}\times K$-invariant
neighborhood $\mathcal{V}^a$ of the neutral element
$e$ in $G_{\msy C}$, and a holomorphic map
\begin{equation}\label{e: complex H}
H\colon \mathcal{V}^a\rightarrow{\frak a}_{\msy C},
\end{equation}
which agrees with the Iwasawa projection on $\mathcal{V}^a\cap G$,
such that
\begin{equation}\label{e: Iwa}
u\in K_{\msy C}\exp(H(u))N_{\msy C}
\end{equation}
for all $u\in\mathcal{V}^a$.
\end{lemma}
\begin{proof}(See \cite{clerc} or \cite{Stanton}.)
We first assume that $K=U^\theta$. Then $K_{\msy C}=G_{\msy C}^\theta$.
Since ${\frak g}_{\msy C}={\frak n}_{\msy C}\oplus{\frak a}_{\msy C}\oplus{\frak k}_{\msy C}$,
there exists an open
neighborhood $T_{{\frak n}_{\msy C}}\times T_{{\frak a}_{\msy C}}$ of $(0,0)$
in ${\frak n}_{\msy C}\times{\frak a}_{\msy C}$ such that
the map
$$
{\frak n}_{\msy C}\times{\frak a}_{\msy C}\ni(X,Y)\mapsto
\exp X\exp Y\cdot x_0\in G_{\msy C}/K_{\msy C}
$$
is a biholomorphic diffeomorphism of
$T_{{\frak n}_{\msy C}}\times T_{{\frak a}_{\msy C}}$
onto an open neighborhood $\mathcal{V}$ of
$x_0=eK_{\msy C}$ in $G_{\msy C}/K_{\msy C}$. We assume, as we may, that
$T_{{\frak n}_{\msy C}}$ and $T_{{\frak a}_{\msy C}}$ are invariant under the
complex conjugation with respect to the real form ${\frak g}$.
We denote by $\mathcal{V}^a$
the open set $\{x\mid x^{-1} K_{\msy C}\in \mathcal{V}\}\subset G_{\msy C}$.
The map
$$K_{\msy C}\times T_{{\frak a}_{\msy C}}\times T_{{\frak n}_{\msy C}}\ni(k,Y,X)\mapsto
k\exp Y\exp X\in \mathcal{V}^a\subset G_{\msy C}$$
is then a biholomorphic diffeomorphism.
In particular, the map
$H\colon \mathcal{V}^a\rightarrow{\frak a}_{\msy C}$ defined by
$$k\exp Y\exp X\mapsto Y$$
for $k\in K_{\msy C}$, $Y\in T_{{\frak a}_{\msy C}}$ and $X\in T_{{\frak n}_{\msy C}}$,
is holomorphic and satisfies (\ref{e: Iwa}).
The conjugation with respect to ${\frak g}$ lifts to an
involution of $G_{\msy C}$ that leaves $G$ pointwise fixed.
Moreover, since this conjugation commutes with $\theta$,
it stabilizes $K_{\msy C}$. Hence it stabilizes $\mathcal{V}^a$.
Let $u\in\mathcal{V}^a\cap G$
and write $u=k\exp Y\exp X$ with
$k\in K_{\msy C}$, $Y\in T_{{\frak a}_{\msy C}}$ and $X\in T_{{\frak n}_{\msy C}}$.
It follows that
$k$, $Y$ and $X$ are fixed by the conjugation.
In particular, $Y\in i{\frak a}$ and $X\in {\frak n}$, and
hence $k=u\exp(-X)\exp(-Y)\in G\cap K_{\msy C}=K_0$.
Therefore, $u=k\exp Y\exp X$ is the Iwasawa decomposition,
and $H(u)=Y$ the Iwasawa projection, of $u$.
We postpone the verification of right-$K$-invariance and
consider the general case where $K_0\subset K\subset
U^\theta$. We retain the
sets $T_{{\frak n}_{\msy C}}$ and $T_{{\frak a}_{\msy C}}$
from above and recall that $K_{\msy C}=K_{0{\msy C}}K$ is an open subgroup
of the previous $K_{\msy C}$.
Again we define
$\mathcal{V}^a=K_{{\msy C}}\exp(T_{{\frak a}_{\msy C}})\exp(T_{{\frak n}_{\msy C}})$.
This is an open subset of the previous $\mathcal{V}^a$.
The restriction of the previous $H$ to this set
is obviously holomorphic, agrees with Iwasawa on $\mathcal{V}^a\cap G$,
and it is easily seen to satisfy (\ref{e: Iwa}).
Finally, we note that $\mathcal{V}^a$ contains an ${\rm Ad}(K)$-invariant
open neighborhood $V$ of $e$ in $G_{\msy C}$. Hence,
for each $k\in K$, the set $\mathcal{V}^a k$ is left-$K_{\msy C}$-invariant
and contains $V$.
The intersection $\cap_{k\in K} \mathcal{V}^a k$ is $K_{\msy C}\times K$
invariant and contains $V$. The interior of this set has
all the properties requested of $\mathcal{V}^a$.
~\hfill$\square$\medbreak\end{proof}
We call the map in (\ref{e: complex H}) the
{\it complexified Iwasawa projection}.
A particular set $\mathcal{V}^a$ as above can be constructed as follows.
Let
$$\Omega =\{X\in {\frak a} \mid
( \forall \alpha\in\Sigma)\,\, |\alpha (X)|<\pi/2\}.$$
The set
$$\mathcal{V}=\mathrm{Cr}(G/K_0)=G\,\exp \Omega\, K_{\msy C}\subset G_{\msy C}/K_{\msy C},$$
called the {\it complex crown} of $G/K_0$,
was introduced in
\cite{AG90}.
Its preimage in $G_{\msy C}$
is open and contained in
$N_{\msy C} A_{\msy C} K_{\msy C}\subset G_{\msy C}$.
This is shown for all classical groups in \cite{KrSt}, Theorem 1.8,
and in general in \cite{H02}, Theorem 3.21. See also \cite{GM},
\cite{M03}.
Let $\mathcal{V}^a=\{x^{-1}\mid x\in \mathcal{V}\}\subset G_{\msy C}$.
The existence of the holomorphic Iwasawa projection $\mathcal{V}^a\rightarrow{\frak a}_{\msy C}$ is
established in \cite{KrSt}, Theorem 1.8, with a proof that
can be repeated in the general case.
It follows that $\mathcal{V}^a$
has all the properties mentioned in Lemma \ref{l: complex Iwasawa
map}.
One important property of the crown is that it is $G$-invariant
and that all spherical functions on $G/K_0$ extend to holomorphic
functions on the crown (it is in fact maximal with this property,
see \cite{kos}, Theorem 5.1).
However, this property plays no role in the present article,
where we shall just assume that $\mathcal{V}^a$ has the properties in
Lemma \ref{l: complex Iwasawa map}, and $\mathcal{V}=(\mathcal{V}^a)^{-1}$.
\eqsection{Fourier analysis}
\label{s: Fourier analysis}\noindent
In this section
we develop a local Fourier theory for
$U/K$ based on elementary representation
theory. The theory essentially
originates from Sherman \cite{ShermanBull}.
An irreducible unitary representation $\pi$ of $U$ is
said to be {\it spherical} if there exists a non-zero
$K$-fixed vector $e_\pi$ in the representation space $V_\pi$.
The vector $e_\pi$ (if it exists) is unique up to multiplication
by scalars. After normalization to unit length we obtain
the matrix coefficient
$$\psi_\pi(u)=\langle \pi(u)e_\pi,e_\pi\rangle$$
which is the corresponding \textit{zonal spherical function}.
{}From the point of view of
representation theory it is natural to define the
Fourier transform of an integrable function $f$ on
$U/K$ to be the map that associates the vector
$$\pi(f) e_\pi
= \int_{U} f(u\cdot x_0) \pi(u)e_{\pi} \, du
=\int_{U/K} f(x) \pi(x)e_{\pi} \, dx\in V_\pi,$$
to each spherical representation, with a fixed choice of
the unit vector $e_\pi$ for each $\pi$
(see \cite{OS08a} for a discussion of the noncompact case).
The corresponding Fourier series is
\begin{equation}
\label{Fourier series}
\sum_\pi d(\pi)\,
\langle \pi(f)e_\pi ,\pi(x)e_\pi \rangle
\end{equation}
for $x\in U/K$.
It converges to $f$ in $L^2$ if $f$ belongs to $L^2(U/K)$, and it
converges uniformly if $f$ has a sufficient number of continuous
derivatives (see \cite{taylor}).
In the case of the sphere $S^2$, the expansion of $f$
in spherical harmonics $Y^m_l(x)$ (with integral labels $|m|\leq l$)
is obtained from this expression when we
express $\pi(x)e_\pi$ by means of
an orthonormal basis for the
$(2l+1)$-dimensional representation space $V_\pi=V_l$.
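For $U/K=S^2$ this recovers, in classical notation with
$L^2$-normalized spherical harmonics $Y^m_l$, the expansion
$$f(x)=\sum_{l=0}^\infty\sum_{|m|\leq l}
\Big(\int_{S^2} f(y)\overline{Y^m_l(y)}\,dy\Big)\,Y^m_l(x),$$
where the inner sum is the contribution of the $(2l+1)$-dimensional
space $V_l$.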
For the purpose of Fourier analysis it is convenient
to embed all the representation spaces $V_\pi$,
where $\pi$ is spherical, in a common
Hilbert space ${\mathcal H}$, independent of $\pi$, such that
$\widehat f$ can be viewed as an ${\mathcal H}$-valued function on the set of
equivalence classes of irreducible spherical representations.
This can be achieved as follows.
Recall that in the classification of Helgason,
a spherical representation $\pi=\pi_\mu$
is labeled by an element $\mu\in {\frak a}^*_{\msy C}$,
which is the restriction, from a compatible maximal torus,
of the highest weight of $\pi$ (see \cite{GGA}, p. 538).
We denote by $\Lambda^+(U/K)\subset{\frak a}^*_{\msy C}$ the set of
these restricted highest weights, so that $\mu\mapsto\pi_\mu$
sets up a bijection from $\Lambda^+(U/K)$ onto the set of
equivalence classes of irreducible $K$-spherical
representations.
According to the theorem of
Helgason, every $\mu\in\Lambda^+(U/K)$
satisfies
\begin{equation}
\label{e: Helgason condition}
\frac{\langle\mu,\alpha\rangle}{\langle\alpha,\alpha\rangle}
\in{\msy Z}^+,
\end{equation}
for all $\alpha\in\Sigma^+$, where the brackets
denote the inner product induced by the Killing form. Furthermore,
if $U$ is simply connected, then an element
$\mu\in{\frak a}^*_{\msy C}$ belongs to $\Lambda^+(U/K)$ if
and only if it satisfies (\ref{e: Helgason condition}).
For the description in the general case, one must supplement
(\ref{e: Helgason condition}) by the assumptions that $\pi_\mu$
descends to $U$ and that the $K_0$-fixed vector is also $K$-fixed.
For each $\mu\in\Lambda^+(U/K)$ we fix an irreducible unitary
spherical representation
$(\pi_\mu,V_\mu)$ of $U$
and a unit $K$-fixed vector $e_\mu\in V_\mu$.
Furthermore, we fix a highest weight vector $v_\mu$
of weight $\mu$, such that $\langle v_\mu,e_\mu\rangle=1$.
The following lemma is proved in \cite{GGA}, p. 535,
in the case that $U$ is simply connected.
\begin{lemma}
\label{l: K-span}
Let $\mu\in\Lambda^+(U/K)$. Then
$\pi_\mu(m)v_\mu=v_\mu$ for all $m\in M$, and the
vectors
$\pi_\mu(k) v_\mu$, where $k\in K_0$, span the space $V_\mu$.
\end{lemma}
\begin{proof} Let $m\in M$ be given.
Since $m$ centralizes ${\frak a}$ and normalizes
${\frak n}$, it follows that $\pi_\mu(m)v_\mu$ is again
a highest weight vector of the same weight.
Hence $\pi_\mu(m)v_\mu=c v_\mu$. By taking
inner products with $e_\mu$, which is $M$-fixed,
it follows that $c=1$.
The statement about the span follows directly from the
Iwasawa decomposition $G=K_0AN$.~\hfill$\square$\medbreak
\end{proof}
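In more detail, the last step is as follows. Since $A$ acts on
$v_\mu$ by scalars and $N$ fixes $v_\mu$, the Iwasawa decomposition
$G=K_0AN$ gives
$${\rm span}\,\pi_\mu(K_0)v_\mu={\rm span}\,\pi_\mu(G)v_\mu,$$
and the latter span, being a non-zero invariant subspace, equals
$V_\mu$ by irreducibility.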
It follows from Lemma \ref{l: K-span} that
the map $V_\mu \rightarrow L^2(K/M)$, $v\mapsto \langle v , \pi_\mu(\cdot )v_\mu
\rangle$, is injective.
We shall use the space ${\mathcal H}=L^2(K/M)$ as our common model
for the spherical representations. It will be convenient
to use an anti-linear embedding of $V_\mu$. Hence we
define for $\mu\in\Lambda^+(U/K)$
\begin{equation}\label{eq-hv}
h_v(k)=\langle\pi_\mu(k)v_\mu,v\rangle,\quad (k\in K)
\end{equation}
and ${\mathcal H}_\mu=\{h_v\mid v\in V_\mu\}$.
Then $v\mapsto h_v$ is a $K$-equivariant anti-isomorphism
$V_\mu\rightarrow{\mathcal H}_\mu\subset{\mathcal H}$.
Notice that $h_{e_\mu}=1$, the constant function on $K/M$.
Hence $1$ belongs to ${\mathcal H}_\mu$ for all $\mu\in\Lambda^+(U/K)$.
Although we shall not use it in the sequel, we also note
that every $K$-finite function in ${\mathcal H}=L^2(K/M)$ belongs to
${\mathcal H}_\mu$ for some $\mu$ (this can be seen from results
explained below, notably Lemma \ref{l: embedding} and
equation (\ref{e: Kostant for psi}), where for a given $K$-type
$\delta$ one chooses $\mu$ such that $P(-\mu-\rho)$
is non-singular).
According to the chosen embedding of $V_\mu$ in ${\mathcal H}$, we define
the {\it Fourier transform} of an integrable function $f$
on $U/K$ by $$\tilde f(\mu)=
\int_{U/K} f(u)\, h_{\pi_\mu(u)e_\mu} \,du\in{\mathcal H}$$
for $\mu\in\Lambda^+(U/K)$, that is
\begin{equation}
\tilde f(\mu,b)=
\int_{U/K} f(u)\, \langle\pi_\mu(k)v_\mu ,\pi_\mu(u)e_\mu\rangle \,du,
\label{e: Fourier transform}
\end{equation}
for $b=kM\in K/M$.
If $f$ is $K$-invariant, then $\widetilde{f}(\mu,b)$ is
independent of~$b$. Integration
over $K$ then shows that this definition
agrees with the spherical Fourier transform in (\ref{eq-sphericalFT}).
It is easily seen that the Fourier transform $f\mapsto \widetilde{f}(\mu )$
is intertwining for the left regular actions of $K$ on $U/K$
and $K/M$, respectively. In particular, it maps $K$-finite
functions on $U/K$ to $K$-finite functions on $K/M$.
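As a simple consistency check, let $f=\psi_\nu$ for some
$\nu\in\Lambda^+(U/K)$. The Schur orthogonality relations, together
with the normalizations $\|e_\mu\|=1$ and $\langle v_\mu,e_\mu\rangle=1$,
yield
$$\tilde\psi_\nu(\mu,kM)=\int_{U/K}\psi_\nu(u)\,
\overline{\langle\pi_\mu(u)e_\mu,\pi_\mu(k)v_\mu\rangle}\,du
=\frac{\delta_{\mu\nu}}{d(\mu)},$$
since $\langle\pi_\mu(k)v_\mu,e_\mu\rangle=\langle v_\mu,e_\mu\rangle=1$
for $k\in K$. Thus $\tilde\psi_\nu(\mu)$ is the constant function
$d(\mu)^{-1}\delta_{\mu\nu}$ on $K/M$, in accordance with
{\rm(\ref{eq-sphericalFT})}.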
We now invoke the complex group $G_{\msy C}$
and the complexified Iwasawa projection defined
in the preceding section. Let $\mathcal{V}^a\subset G_{\msy C}$
and $H\colon\mathcal{V}^a\rightarrow{\frak a}_{\msy C}$ be as in Lemma
\ref{l: complex Iwasawa map}, and let $\mu\in\Lambda^+(U/K)$.
Since $\pi_\mu$ extends to a holomorphic representation
of $G_{\msy C}$, it follows from Lemma \ref{l: complex Iwasawa map} that
$\langle\pi_\mu(u)v_\mu,e_\mu\rangle=e^{\mu(H(u))}$
for all $u\in \mathcal{V}^a$.
Let $\mathcal{V}=\{x^{-1}\mid x\in \mathcal{V}^a\}\subset G_{\msy C}$. Then
\begin{equation}\label{e: Iwasawa expression}
\langle \pi_\mu(k)v_\mu, \pi_\mu(u)e_\mu\rangle
=e^{\mu(H(u^{-1}k))}
\end{equation}
for $k\in K$, $u\in U\cap\mathcal{V}$ and $\mu\in\Lambda^+(U/K)$.
\begin{lemma}
\label{l: holo ext}
Let $f$ be an integrable function on $U/K$ with
support in $U\cap\mathcal{V}$.
Then
\begin{equation}
\label{e: Fourier with H}
\tilde f(\mu,k)=
\int_{U/K} f(u)\, e^{\mu(H(u^{-1}k))} \,du,
\end{equation}
for all $k\in K/M$,
and the Fourier transform $\mu\mapsto \tilde f(\mu)$
extends to a holomorphic ${\mathcal H}$-valued function on
${\frak a}^*_{\msy C}$, also denoted by $\tilde f$,
satisfying the same equation {\rm (\ref{e: Fourier with H})}.
Moreover,
\begin{equation}
\label{e: pi(f) inversion}
\pi_\mu(f)e_\mu=\int_{K/M} \tilde f(-\mu-2\rho,k)
\pi_\mu(k)v_\mu\, dk
\end{equation}
for all $\mu\in\Lambda^+({U/K})$.
\end{lemma}
The measure on $K/M$ used in (\ref{e: pi(f) inversion})
is the quotient of the normalized Haar measures on $K$ and $M$.
\smallskip
\begin{proof} The expression (\ref{e: Fourier with H}) follows
immediately from (\ref{e: Fourier transform})
and (\ref{e: Iwasawa expression}). The integrand in
(\ref{e: Fourier with H}) depends holomorphically on $\mu$,
locally uniformly with respect to $u$ and $k$.
Hence an analytic continuation is
defined by this formula.
In order to establish the identity (\ref{e: pi(f) inversion})
it suffices to show that
$$\pi_\mu(u)e_\mu=\int_{K/M} e^{-(\mu+2\rho)H(u^{-1}k)}
\pi_\mu(k)v_\mu \,dk$$
for $u\in U\cap\mathcal{V}$. The latter identity is easily shown
to hold for $u\in G$ (use \cite{GGA}, p. 197, Lemma 5.19,
and the fact that $K/M=K_0/(M\cap K_0)$).
By analytic continuation it then holds for $u\in \mathcal{V}_0$,
the identity component of $\mathcal{V}$. Since $\mathcal{V}=\mathcal{V}_0K_{\msy C}$,
it follows for all $u\in\mathcal{V}$.
~\hfill$\square$\medbreak
\end{proof}
\begin{Cor}
\label{c: Sherman}
{\rm (Sherman)}
Assume $f\in L^2(U/K)$ has
support
contained in $U\cap\mathcal{V}$.
Then the sum
$$
\sum_{\mu\in\Lambda^+(U/K)} d(\mu)
\int_{K/M} \tilde f(-\mu-2\rho,k) \,
\langle\pi_\mu(k)v_\mu,\pi_\mu(x)e_\mu \rangle \,dk,\quad x\in U/K,
$$
converges to $f$ in $L^2(U/K)$, and it
converges uniformly if $f$ has a sufficient number of continuous
derivatives.
\end{Cor}
\begin{proof}
(See \cite{ShermanBull}).
Follows immediately
from (\ref{Fourier series}) by insertion of
(\ref{e: pi(f) inversion}).~\hfill$\square$\medbreak
\end{proof}
In \cite{ShermanActa} the inversion formula of
Corollary \ref{c: Sherman} is
extended to a formula for
functions on $U/K$ without
restriction on the support
(for symmetric spaces of rank one).
We shall not use this extension here.
For the special case of the sphere $U/K=S^n$,
see also \cite{ShermanTAMS}, \cite{Strichartz}
and \cite{Y}.
\eqsection{The spherical principal series}\noindent
The space ${\mathcal H}=L^2(K/M)=L^2(K_0/(M\cap K_0))$
is the representation space for
the spherical principal series for $G$.
We denote by $\sigma_\lambda$ this series of
representations, given by
\begin{equation}
\label{e: spherical principal series}
[\sigma_\lambda(g)\psi](k)=e^{-(\lambda+\rho)H(g^{-1}k)}
\psi(\kappa(g^{-1}k))
\end{equation}
for $\lambda\in{\frak a}_{\msy C}^*$, $g\in G$, $\psi\in{\mathcal H}$ and $k\in K_0$.
Here $\kappa\colon G\rightarrow K_0$ is the Iwasawa projection
$kan\mapsto k$.
Let $\mu\in\Lambda^+(U/K)$.
By extending $\pi_\mu$ to a holomorphic
representation of $G_{\msy C}$ and then restricting to $G$, we obtain
a finite dimensional
representation of $G$, which we again denote by~$\pi_\mu$.
We now have the following well-known result.
It relates the embedding of $V_\mu$ into ${\mathcal H}$,
which motivated (\ref{e: Fourier transform}),
to the principal series representations.
\begin{lemma}
\label{l: embedding}
Let $\mu\in\Lambda^+(U/K)$. The map $v\mapsto h_v$ defined by
{\rm (\ref{eq-hv})} provides a
$G$-equivariant embedding of the contragredient of
$\pi_\mu$ into $\sigma_{-\mu-\rho}$.
\end{lemma}
\begin{proof}
Recall that the contragredient representation
can be realized on the conjugate Hilbert space $\bar V_\mu$
by the operators $\pi_\mu(g^{-1})^*$, and
notice that $v\mapsto h_v$ is {\it linear} from $\bar V_\mu$
to ${\mathcal H}$.
Since $v_\mu$ is
a highest weight vector it follows easily from
(\ref{e: spherical principal series}) that
$$\sigma_{-\mu-\rho}(g)h_v=h_{\pi_\mu(g^{-1})^*v}$$
for $g\in G$.
~\hfill$\square$\medbreak
\end{proof}
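In more detail: for $g\in G$ and $k\in K_0$, write
$g^{-1}k=\kappa(g^{-1}k)\exp(H(g^{-1}k))n$ with $n\in N$. Since
$v_\mu$ has weight $\mu$ and is fixed by $N$,
$$[\sigma_{-\mu-\rho}(g)h_v](k)
=e^{\mu(H(g^{-1}k))}\langle\pi_\mu(\kappa(g^{-1}k))v_\mu,v\rangle
=\langle\pi_\mu(g^{-1}k)v_\mu,v\rangle
=h_{\pi_\mu(g^{-1})^*v}(k).$$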
The space $C^\infty(K/M)\subset {\mathcal H}$ carries
the family of representations, also denoted by $\sigma_\lambda$,
of ${\frak g}_{\msy C}$ obtained by differentiation and complexification.
Thus, although the representations $\sigma_\lambda$ of $G$
in general do not complexify to global representations of $U$,
the infinitesimal representations $\sigma_\lambda$ of ${\frak u}$
are defined for all $\lambda\in{\frak a}^*_{\msy C}$. We denote by
${\mathcal H}^\infty_\lambda$ the space
$C^\infty(K/M)$ equipped with the representation $\sigma_\lambda$
of ${\frak u}_{\msy C}={\frak g}_{\msy C}$, and with the left regular representation of
$K$.
\begin{lemma}
\label{l: u-homo}
The Fourier transform $f\mapsto \tilde f(\mu)$ defines a
$({\frak u},K)$-homomorphism
{}from $C^\infty(U/K)$ to ${\mathcal H}^\infty_{-\mu-\rho}$,
for all $\mu\in\Lambda^+(U/K)$. Moreover, the holomorphic extension,
defined in Lemma \ref{l: holo ext},
restricts to a $({\frak u},K)$-homomorphism
from
$$\{f\in C^\infty(U/K)\mid \mathrm{supp}\, f\subset U\cap \mathcal{V}\}$$
to ${\mathcal H}^\infty_{-\mu-\rho}$ for all $\mu\in{\frak a}^*_{\msy C}$.
\end{lemma}
\begin{proof} Since $\pi_\mu$ is a unitary representation of $U$
it follows from Lemma \ref{l: embedding} that
$\sigma_{-\mu-\rho}(X)h_v=h_{\pi_\mu(X)v}$ for $X\in{\frak u}$, $v\in V_\mu$.
The first statement now follows, since $$\tilde f(\mu)=\int_{U/K}
f(u) h_{\pi_\mu(u)e_\mu}\,du.$$
It follows from Lemma \ref{l: uniqueness} and Theorem \ref{t: into}
below, that the second statement can be derived from the first by analytic
continuation with respect to $\mu$, provided the support of $f$
is sufficiently small. However, we prefer to give an independent
proof, which only requires assumptions on the support of $f$
as stated in the lemma.
Since the Fourier transform in (\ref{e: Fourier with H}) is clearly
$K$-equivariant,
it suffices to prove the intertwining property
\begin{equation}\label{eq-intertwiningProperty}
[L(X)f]^\sim(\mu)=\sigma_{-\mu-\rho}(X)\tilde f(\mu)
\end{equation}
for $X\in{\frak q}$. By definition
$$[L(X)f](u)=\frac{d}{dt}\Big|_{t=0} f(\exp(-tX)u)$$
and hence by invariance of the measure
$$[L(X)f]^\sim(\mu,k)=
\int_{U/K} f(u) \,\frac{d}{dt}\Big|_{t=0}\,
e^{\mu(H(u^{-1}\exp(-tX)k))}\,du.$$
Let ${\frak p}=i{\frak q}$ so that ${\frak g}={\frak k}+{\frak p}$ is the Cartan
decomposition of ${\frak g}$, and write $X=iY$ for $Y\in{\frak p}$.
Since the complexified Iwasawa map $H$ is holomorphic, it
follows that
$$\frac{d}{dt}\Big|_{t=0}\,
e^{\mu(H(u^{-1}\exp(-tX)k))}=i\frac{d}{dt}\Big|_{t=0}\,
e^{\mu(H(u^{-1}\exp(-tY)k))}.$$
Furthermore
$$H(u^{-1}\exp(-tY)k)=H(u^{-1}\kappa(\exp(-tY)k))+H(\exp(-tY)k)
$$
and hence we derive
\begin{eqnarray}
&&[L(X)f]^\sim(\mu,k)\nonumber\\
&&\quad\quad=
i\frac{d}{dt}\Big|_{t=0} \left[ e^{\mu(H(\exp(-tY)k))}
\int_{U/K} f(u)
e^{\mu(H(u^{-1}\kappa(\exp(-tY)k)))}\,du\right]
\nonumber\\
&&\quad\quad=
i\frac{d}{dt}\Big|_{t=0} \left[e^{\mu(H(\exp(-tY)k))}
\tilde f(\mu,\kappa(\exp(-tY)k))\right]\nonumber\\
&&\quad\quad=
i\frac{d}{dt}\Big|_{t=0} \left[\sigma_{-\mu-\rho}(\exp(tY))
\tilde f(\mu)\right](k).\nonumber
\end{eqnarray}
Since by definition $\sigma_{-\mu-\rho}(X)=i\sigma_{-\mu-\rho}(Y)$,
the last expression is exactly
$\sigma_{-\mu-\rho}(X)\tilde f(\mu)$ evaluated at $k$.
~\hfill$\square$\medbreak\end{proof}
We recall that there exist normalized standard intertwining
operators between the principal series:
$${\mathcal A}(w,\lambda)\colon {\mathcal H}\rightarrow{\mathcal H}, \quad w\in W\, ,$$
such that
\begin{equation}\label{e: intertwining property of A}
\sigma_{w\lambda}(g)\circ{\mathcal A}(w,\lambda)=
{\mathcal A}(w,\lambda)\circ\sigma_\lambda(g)
\end{equation}
for all $g\in G$.
The normalization is such that
\begin{equation}\label{e: normalization of A}
{\mathcal A}(w,\lambda)1=1
\end{equation}
for the constant function $1$ on $K/M$. The map
$\lambda\mapsto {\mathcal A}(w,\lambda)$ is meromorphic with values
in the space of bounded linear operators on ${\mathcal H}$.
We need the following property of the {\it Poisson kernel},
which is defined for $x\in G$ and $k\in K_0$ by
$e^{-(\lambda+\rho)H(x^{-1}k)}$. By Lemma
\ref{l: complex Iwasawa map} it is also defined for $x\in\mathcal{V}$ and $k\in K$.
\begin{lemma}
\label{l: A on Poisson}
The identity
\begin{equation}
\label{e: A on e}
{\mathcal A}(w,\lambda)e^{-(\lambda+\rho)H(x^{-1}\cdot)}
=e^{-(w\lambda+\rho)H(x^{-1}\cdot)},
\end{equation}
of functions in ${\mathcal H}$, holds for all $x\in\mathcal{V}$.
\end{lemma}
\begin{proof}
The identity is well-known for $x\in G$. In fact in this case
it follows easily from (\ref{e: spherical principal series}),
(\ref{e: intertwining property of A}) and (\ref{e: normalization of A}).
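For the reader's convenience we write out this computation. By (\ref{e: spherical principal series}) applied to the constant function $1$, one has $e^{-(\lambda+\rho)H(x^{-1}\,\cdot\,)}=\sigma_\lambda(x)1$ for $x\in G$, and hence
$$
{\mathcal A}(w,\lambda)\,e^{-(\lambda+\rho)H(x^{-1}\,\cdot\,)}
={\mathcal A}(w,\lambda)\sigma_\lambda(x)1
=\sigma_{w\lambda}(x){\mathcal A}(w,\lambda)1
=\sigma_{w\lambda}(x)1
=e^{-(w\lambda+\rho)H(x^{-1}\,\cdot\,)}.
$$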
The map
$x\mapsto e^{\mu (H(x^{-1}\cdot))}$ is holomorphic
${\mathcal H}$-valued on $\mathcal{V}$ for each $\mu\in{\frak a}^*_{\msy C}$,
because the complexified Iwasawa projection
is holomorphic. Hence
(\ref{e: A on e}) holds
for $x\in \mathcal{V}_0$ by analytic continuation,
and then for $x\in\mathcal{V}$ by the obvious
left-$K_{\msy C}$-invariance
of both sides with respect to $x^{-1}$.
~\hfill$\square$\medbreak\end{proof}
\eqsection{The $K$-finite Paley-Wiener space}
\noindent
For each irreducible representation $\delta$ of $K_0$ we denote
by ${\mathcal H}_\delta$ the finite dimensional subspace of ${\mathcal H}$
consisting of the functions that generate
an isotypical representation of type $\delta$. Likewise, for
each finite set $F$ of $K_0$-types, we denote by ${\mathcal H}_F$ the sum
of the spaces ${\mathcal H}_\delta$ for $\delta\in F$.
Obviously, the intertwining operators ${\mathcal A}(w,\lambda)$ preserve
each subspace ${\mathcal H}_F$. Although we do not need it in the sequel,
we remark that
$\lambda\mapsto {\mathcal A}(w,\lambda)|_{{\mathcal H}_F}$ is a rational map
from ${\frak a}_{\msy C}^*$ into the space of linear operators on the finite
dimensional space ${\mathcal H}_F$, for each $F$, see \cite{W72}.
Note that since $K/K_0$ is finite, a function on $K/M=K_0/(K_0\cap M)$
is $K_0$-finite if and only if it is $K$-finite. We use the
notations ${\mathcal H}_\delta$ and ${\mathcal H}_F$ also for an irreducible
representation $\delta$ of $K$, and for a set $F$ of $K$-types.
\begin{defi}
\label{d: PW space}
For $r>0$ the {\it $K$-finite Paley-Wiener space}
${\rm PW}_{K,r}({\frak a})$ is
the space of holomorphic functions $\varphi$ on ${\frak a}_{\msy C}^*$
with values in ${\mathcal H}=L^2(K/M)$
satisfying the following.
\item{(a)} There exists a finite set $F$ of $K$-types such that
$\varphi(\lambda)\in {\mathcal H}_F$ for all $\lambda\in{\frak a}_{\msy C}^*$.
\item{(b)} For each $k\in{\msy N}$ there exists a constant $C_k>0$ such that
$$\|\varphi(\lambda)\|\leq C_k(1+|\lambda|)^{-k} e^{r|{\rm Re}\,\lambda|}$$
for all $\lambda\in{\frak a}_{\msy C}^*.$
\item{(c)} The identity
$\varphi(w(\mu+\rho)-\rho)={\mathcal A}(w,-\mu-\rho)\varphi(\mu)$
holds for all $w\in W$, and for generic $\mu\in{\frak a}_{\msy C}^*$.
\end{defi}
We note that the norm on ${\frak a}^*_{\msy C}$ used in (b) is induced by
the negative of the Killing form on ${\frak a}$.
In particular we see that ${\rm PW}_{K,r}({\frak a})={\rm PW}_{K_0,r}({\frak a})$,
that is, the $K$-finite Paley-Wiener space is the same for all the
spaces $U/K$ where $K_0\subset K\subset U^\theta$.
Notice that the Paley-Wiener space ${\rm PW}_r({\frak a})$ defined in \cite{OS}
can be identified with the space of functions $\varphi$
in ${\rm PW}_{K,r}({\frak a})$, for which $\varphi(\lambda)$
is a constant function on $K/M$ for each $\lambda$.
This follows from the normalization
(\ref{e: normalization of A}).
The functions in the Paley-Wiener space are uniquely determined by their
restriction to $\Lambda^+(U/K)$, at least when $r$ is sufficiently small.
This is seen in the following lemma.
\begin{lemma}
\label{l: uniqueness}
There exists $R>0$ such that if
$\varphi\in{\rm PW}_{K,r}({\frak a})$ for some $r<R$ and
$\varphi(\mu)=0$ for all $\mu\in\Lambda^+(U/K)$, then $ \varphi=0$.
\end{lemma}
\begin{proof} The relevant value of $R$ is
the same as in \cite{OS} Thm. 4.2 (iii) and Remark 4.3.
The lemma follows easily from application of
\cite{OS}, Section 7, to the function
$\lambda\mapsto \langle\varphi(\lambda,\,\cdot\,), \psi\rangle$
for each $\psi\in{\mathcal H}$.~\hfill$\square$\medbreak
\end{proof}
Obviously ${\rm PW}_{K,r}({\frak a})$ is $K$-invariant, where
$K$ acts by the left regular representation
on functions on $K/M$. The following lemma shows that it
is also a $({\frak u},K)$-module.
\begin{lemma}
\label{l: (u,K)}
Let $r>0$, $\varphi\in{\rm PW}_{K,r}({\frak a})$ and $X\in{\frak u}_{\msy C}$. Then
the function $\psi=\sigma(X)\varphi$ defined by
$$\psi(\lambda)=\sigma_{-\lambda-\rho}(X)(\varphi(\lambda))\in{\mathcal H}$$
for each $\lambda\in{\frak a}^*_{\msy C}$, belongs to ${\rm PW}_{K,r}({\frak a})$.
\end{lemma}
\begin{proof}
Recall that $\sigma_{-\lambda-\rho}(X)$ is defined
by complexification of the infinitesimal action of
${\frak g}$ on the smooth functions in ${\mathcal H}$, and note that
$\varphi(\lambda)$ is smooth on $K/M$, since it is $K$-finite.
Hence we may assume $X\in{\frak g}$.
It is easily seen that $\psi(\lambda)$
is $K_0$-finite, of types which occur in the tensor product of
the adjoint representation $\mathrm{Ad}$ of $K_0$ on ${\frak g}$
with types from $F$. Hence condition (a) is valid for
the function $\psi$.
Condition (c) follows immediately from the intertwining
property of ${\mathcal A}(w,\lambda)$. It remains to verify
holomorphicity in $\lambda$, and the estimate
in (b) for $\psi$.
By definition both the holomorphicity
and norm in the estimate (b)
refer to the Hilbert space ${\mathcal H}=L^2(K/M)$. However,
because of condition (a)
and since ${\mathcal H}_F$ is finite dimensional, it is equivalent
to require holomorphicity of $\psi(\lambda)(x)$ pointwise for
each $x\in K/M$, and likewise to require the exponential
estimate for $\psi(\lambda)(x)$ pointwise with respect to $x$.
Thus let an element $x=kM\in K/M$ be fixed, where $k\in K_0$.
Note that by (\ref{e: spherical principal series})
\begin{equation}%
(\sigma(X)\varphi)(\lambda)(k)
=
\frac{d}{dt}\Big|_{t=0}\, e^{-(\lambda+\rho)(H(\exp(-tX)k))}\,
\varphi(\lambda)(\kappa(\exp(-tX)k)).
\nonumber
\end{equation}
Differentiating with the Leibniz rule, we obtain a sum of
two terms.
The first term is
\begin{equation}
\label{e: first term}
\frac{d}{dt}\Big|_{t=0} \left( e^{-(\lambda+\rho)(H(\exp(-tX)k))} \right)
\varphi(\lambda)(k).
\end{equation}
Let $\alpha(Z)=H(\exp(Z)k)\in i{\frak a}$ for $Z\in{\frak g}$, then $\alpha(0)=0$ and
it follows that (\ref{e: first term}) equals
$$
(\lambda+\rho)(d\alpha_0(X))\,\varphi(\lambda)(k)
$$
where $d\alpha_0$ is the differential of $\alpha$ at $0$.
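Explicitly, since $H(\exp(-tX)k)=\alpha(-tX)$ and $\alpha(0)=0$, the chain rule gives
$$
\frac{d}{dt}\Big|_{t=0}\, e^{-(\lambda+\rho)(\alpha(-tX))}
=-(\lambda+\rho)(d\alpha_0(-X))\,e^{-(\lambda+\rho)(\alpha(0))}
=(\lambda+\rho)(d\alpha_0(X)).
$$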
It is now obvious that (\ref{e: first term})
is holomorphic and satisfies the same growth
estimate as $\varphi(\lambda)(k)$.
Hence (b) is valid for the
first term.
The second term is
\begin{equation}
\label{e: second term}
\frac{d}{dt}\Big|_{t=0}
\varphi(\lambda)(\kappa(\exp(-tX)k)),
\end{equation}
which we rewrite as follows.
Let
$$\beta(Z)=\kappa(\exp(Z)k)k^{-1}\in K_0$$
for $Z\in{\frak g}$, then $\beta(0)=e$ and
$$\varphi(\lambda)(\kappa(\exp(-tX)k))=\varphi(\lambda)(\beta(-tX)k).$$
It follows that (\ref{e: second term}) equals
$$
L(d\beta_0(X))(\varphi(\lambda))(k)
$$
where $d\beta_0(X)\in T_eK_0={\frak k}$.
The linear operator $L(d\beta_0(X))$ preserves
the finite dimensional space ${\mathcal H}_F$ and hence
restricts to a bounded linear operator on that space.
It follows that (\ref{e: second term})
is holomorphic in $\lambda$ and satisfies~(b).
~\hfill$\square$\medbreak\end{proof}
\eqsection{Fourier transform maps into Paley-Wiener space}
\noindent
In this section we prove the following result. Let $C_K^\infty(U/K)$
denote the space of $K$-finite smooth functions on $U/K$, and for
each $r>0$ let
$$C^\infty_{K,r}(U/K)=\{f\in C_K^\infty(U/K)\mid
{\rm supp}\, f \subset{\rm Exp}(\bar B_r(0))\}$$
where $\bar B_r(0)$ denotes the closed ball in ${\frak q}$
of radius $r$ and center
$0$, and ${\rm Exp}$ denotes the exponential map of $U/K$.
\begin{thm}
\label{t: into}
There exists a number $R>0$ such that
${\rm Exp}(\bar B_R(0))\subset U\cap\mathcal{V}$ and such that
the following holds
for every $r<R$:
If $f\in C^\infty_{K,r}(U/K)$,
then the holomorphic extension of $\tilde f$ from
Lemma \ref{l: holo ext} belongs to
${\rm PW}_{K,r}({\frak a})$.
\end{thm}
In the proof we shall reduce to the case where $K=K_0$.
The following lemma prepares the way for this reduction.
The projection $p: U/K_0\rightarrow U/K$ is a covering map. Hence we
can choose $R>0$ such that $p$ restricts to a
diffeomorphism of the open ball ${\rm Exp}(B_R(0))$ in $U/K_0$
onto the open ball ${\rm Exp}(B_R(0))$ in $U/K$.
It follows that for each $r<R$
a bijection $F\mapsto f$ of $C^\infty_{K_0,r}(U/K_0)$ onto
$C^\infty_{K,r}(U/K)$ is defined by
$$f(u)=\sum_{v\in K/K_0} F(uv),\quad u\in U$$
for $F\in C^\infty_{K_0,r}(U/K_0)$,
where for each $u$ at most one term is non-zero.
The inverse map is given by
$$F(u)= \begin{cases} f(p(u)), & u\in {\rm Exp}(B_R(0)),\\0,
&{\rm otherwise,}
\end{cases}$$
for $f\in C^\infty_{K,r}(U/K)$. Let $\mathcal{V}^a\subset G_{\msy C}$ be as in
Lemma \ref{l: complex Iwasawa map}, and note that this set
also satisfies the assumptions of that lemma for the
symmetric space $U/K_0$. As before, let $\mathcal{V}=\{x^{-1}|x\in\mathcal{V}^a\}$.
\begin{lemma}
\label{l: K to K0} Let $f\in C^\infty_{K,r}(U/K)$ and
$F\in C^\infty_{K_0,r}(U/K_0)$ be as above.
Then $f$ is supported in $U\cap\mathcal{V}$ if and only if
$F$ is supported in $U\cap\mathcal{V}$. In this case, the analytically continued
Fourier transforms of these functions satisfy
$$\tilde f(\mu)=c\tilde F(\mu)$$
for all $\mu\in{\frak a}^*_{\msy C}$, where $c$ is the index of $K_0$ in $K$.
\end{lemma}
\begin{proof} It follows from the definition of the map $F\mapsto f$ that
$$\tilde f(\mu,k)=\int_U \sum_{v\in K/K_0} F(uv)e^{\mu(H(u^{-1}k))}\,du
=c\tilde F(\mu,k)
$$
by right-$K$-invariance of the
Haar measure and left-$K$-invariance of $H$.~\hfill$\square$\medbreak
\end{proof}
We can now give the proof of Theorem \ref{t: into}.
\smallskip
\begin{proof}
Property (a) in Definition \ref{d: PW space}
follows immediately
from the fact that the Fourier transform is $K$-equivariant.
Moreover, the transformation law for the
Weyl group in Property (c) follows easily
from Lemma \ref{l: A on Poisson} by integration
over $U\cap\mathcal{V}$ against $f(u)$.
For the proof of Property (b), with $r$ bounded by a suitable value $R$,
we reduce to the case that $K$ is connected.
We assume that $R$ is sufficiently small as
described above Lemma \ref{l: K to K0}.
Then according to the lemma, given a function $f\in C^\infty_{K,r}(U/K)$,
the function $F\in C^\infty_{K_0,r}(U/K_0)$ has the same
Fourier transform up to a constant. The reduction now follows
since ${\rm PW}_{K,r}({\frak a})={\rm PW}_{K_0,r}({\frak a})$,
as mentioned below Definition \ref{d: PW space}.
For the rest of this proof we assume $K=K_0$.
It is known from \cite{OS}, Thm. 4.2(i),
that the estimate in Property (b)
holds for $K$-invariant functions on $U/K$.
We prove the property in general by reduction to that case.
In particular, we can
use the same value of $R>0$ (see \cite{OS}, Remark 4.3).
Fix an irreducible $K$-representation $(\delta,V_\delta)$.
It suffices to prove the result for functions $f$ that
transform isotypically under $K$ according to this type.
We shall use Kostant's description in \cite{Kostant}
of the $K$-types
in the spherical principal series. We draw the results we
need directly from the exposition in
\cite{GASS}, Chapter 3.
In particular, we denote by $H^*_\delta$ the
finite dimensional subspace of
the enveloping algebra ${\mathcal U}({\frak g})$ which is the image
under symmetrization of the space of harmonic polynomials
on ${\frak p}$ of type $\delta$, and we denote by
$E_\delta$ the space
$$E_\delta={\rm Hom}_K(V_\delta,H^*_\delta),$$
of linear $K$-intertwining maps
$V_{\delta}\rightarrow H^*_\delta$.
It is known that $E_\delta$ has the same dimension as $V^M_\delta$.
We denote by ${\rm Hom}^*(V_\delta^M,E_\delta)$ the space
of {\it anti-linear} maps $V_\delta^M\rightarrow E_\delta$.
The principal result we need is Theorem 2.12 of \cite{GASS}, p. 250,
according to which there exists
a rational function
$P=P^\delta$ on ${\frak a}_{\msy C}^*$ with values
in ${\rm Hom}^*(V_\delta^M,E_\delta)$
such that
\begin{equation}
\label{e: Kostant}
\int_{K/M} e^{-(\lambda+\rho)H(x^{-1}k)}
\langle v,\delta(k)v'\rangle\,dk
=
[L(P(\lambda)(v')(v))\varphi_\lambda](x)
\end{equation}
for all $v\in V_\delta$, $v'\in V_{\delta}^M$ and $x\in G/K$,
and for $\lambda\in{\frak a}_{\msy C}^*$ away
from the singularities of $P(\lambda)$.
Here $L$ denotes the action of the enveloping algebra
from the left on functions on $G/K$, and
$\varphi_\lambda$ denotes the spherical function
$$
\varphi_\lambda(x)=
\int_{K/M} e^{-(\lambda+\rho)H(x^{-1}k)}\,dk$$
on $G/K$.
The equality
(\ref{e: Kostant}) is valid for $x\in U\cap\mathcal{V}$
by analytic continuation. Let $f\in C^\infty_{K,r}(U/K)_\delta$,
where $r<R$ and the subscript $\delta$ indicates that $f$
is $K$-finite of this type.
Then
$$f(x)=d(\delta)\int_K \chi_\delta(l) f(lx)\,dl$$
for all $x\in U$, where $\chi_\delta$ is the character of $\delta$.
It follows that
$$
\tilde f(\mu,k)= d(\delta)
\int_{U/K}\int_K \chi_\delta(l) f(lu) \, dl \,\,e^{\mu(H(u^{-1}k))} \,du
$$
and hence by Fubini and invariance of measures
$$\tilde f(\mu,k)= d(\delta)
\int_{U/K}\int_{K/M}\int_M \chi_\delta(lmk^{-1}) \,dm\,
e^{\mu(H(u^{-1}l))} \, dl\, f(u) \,du.
$$
The inner expression
$\int_M \chi_\delta(lmk^{-1})\,dm$
is a finite sum of matrix coefficients of the form
$\langle \delta(l)v,\delta(k)v'\rangle$ with $v\in V_\delta$ and
$v'\in V^M_{\delta}$, and hence it follows from (\ref{e: Kostant})
that $\tilde f(\mu,k)$ for generic $\mu\in{\frak a}^*_{\msy C}$
is a finite sum of expressions of the form
$$\int_{U/K} [L(P(-\mu-\rho)(\delta(k)v')(v))
\varphi_{-\mu-\rho}](u) f(u)\,du$$
with $v$ and $v'$ independent of $\mu$ and $k$.
In these expressions the right invariant differential operators
$L(P(-\mu-\rho)(\delta(k)v')(v))$ can be thrown over, by taking
adjoints.
Since the spherical function is $K$-invariant, we finally obtain
\begin{equation}
\label{e: Lf}
\int_{U/K}
\varphi_{-\mu-\rho}(u)
\int_K [L(P(-\mu-\rho)(\delta(k)v')(v))^*f](yu)\,dy \,du.
\end{equation}
Notice that
(\ref{e: Lf}) is the spherical Fourier transform
from \cite{OS}, Section~6.
It follows that $\tilde f(\mu,k)$, for $\mu$ generic and $k\in K$,
is a finite sum in which
each term has the form of the spherical Fourier transform
applied to the $K$-integral of a derivative of $f$ by a differential
operator with coefficients that depend rationally on $\mu$
and continuously on $k$.
The application of
a differential operator to $f$ does not increase the support,
hence it follows from the estimates in \cite{OS} that each
term is a rational multiple of a function of $\mu$ of
exponential type, with estimates which are
uniform with respect to $k$. It then follows from
\cite{GASS} Lemma 5.13, p. 288, and its proof,
that the Fourier transform
$\tilde f(\mu, k)$ itself is of the same exponential type.
We have established Property (b) in Definition \ref{d: PW space}
for $\tilde f$.
~\hfill$\square$\medbreak
\end{proof}
\eqsection{Fourier transform maps onto Paley-Wiener space}
\noindent
Let $\varphi\in {\rm PW}_{K,r}({\frak a})$ for some $r>0$ and consider
the function $f$ on $U/K$ defined by the Fourier series
\begin{equation}
\label{e: inverse PW}
f(x)=
\sum_{\mu\in\Lambda^+(U/K)} d(\mu)
\int_{K/M} \varphi(-\mu-2\rho,k) \,
\langle\pi_\mu(k)v_\mu,\pi_\mu(x)e_\mu \rangle \,dk.
\end{equation}
It follows from the estimate in Property (b) of
Definition \ref{d: PW space} that the sum converges
and defines a smooth function on $U/K$ (see \cite{Sugiura}).
\begin{thm}
\label{t: onto}
There exists a number $R>0$ such that
${\rm Exp}(\bar B_R(0))\subset U\cap\mathcal{V}$ and such that
the following holds
for every $r<R$.
For each $\varphi\in {\rm PW}_{K,r}({\frak a})$
the function $f$ on $U/K$ defined by (\ref{e: inverse PW})
belongs to
$C^\infty_{K,r}(U/K)$ and has Fourier transform
$\tilde f=\varphi$.
\end{thm}
\begin{proof}
Again we first reduce to the case that $K$ is connected.
Assuming that the theorem is valid in that case, we find
a number $R>0$ such that
every function $\varphi\in {\rm PW}_{K_0,r}({\frak a})$, where $r<R$,
is of the form $\tilde F$ for some $F\in C^\infty_{K_0,r}(U/K_0)$.
We may assume that $R$ is as small as explained above
Lemma~\ref{l: K to K0}. Let $\varphi\in {\rm PW}_{K,r}({\frak a})$ be given
and recall that ${\rm PW}_{K,r}({\frak a})={\rm PW}_{K_0,r}({\frak a})$. Let
$F\in C^\infty_{K_0,r}(U/K_0)$ with $\tilde F=c^{-1}\varphi$,
and construct $f\in C^\infty_{K,r}(U/K)$ as in Lemma
\ref{l: K to K0}. It follows from the lemma that
$\tilde f=c\tilde F=\varphi$,
and then it follows from Corollary \ref{c: Sherman}
that $f$ is the function given by (\ref{e: inverse PW}).
This completes the reduction.
For the rest of this proof, we assume that $K=K_0$.
The value of $R$ that we shall use
is the same as in \cite{OS}, Thm. 4.2(ii) and Remark 4.3.
We may assume that $\varphi(\lambda,\cdot)$ is isotypical of a given
$K$-type $\delta$ for all $\lambda\in{\frak a}^*_{\msy C}$.
For $v\in V_\delta$ and $v'\in V^M_\delta$
we denote by $\psi_{v,v'}$
the matrix coefficient
$$\psi_{v,v'}(k)=\langle v,\delta(k)v'\rangle$$
on $K/M$. By the Frobenius reciprocity
theorem it follows that
these functions $\psi_{v,v'}$ span the space
${\mathcal H}_\delta$. Moreover, it follows from the definition of
the standard intertwining operators
by means of integrals over quotients of
$\theta(N)$, that these operators
act on each function $\psi_{v,v'}$
only through the second variable. That is,
there exists a linear map
$$B(w,\lambda)\colon V^M_\delta \rightarrow V^M_\delta$$
such that
\begin{equation}
\label{e: A on psi}
{\mathcal A}(w,\lambda)\psi_{v,v'}=\psi_{v,B(w,\lambda)v'}
\end{equation}
for all $v,v'$.
Notice that the dependence of $B(w,\lambda)$ on $\lambda$
is anti-meromorphic.
It follows (by using a basis for $V_\delta$)
that we can write $\varphi(\mu,k)$
as a finite sum of functions of the form
$$\psi_{v,v'(\mu)}(k)$$
where $v\in V_\delta$ is fixed and where
$v'\colon {\frak a}^*_{\msy C}\rightarrow V_\delta^M$ is anti-holomorphic
of exponential type $r$ and satisfies the
transformation relation in Definition \ref{d: PW space} (c), that is,
\begin{equation}
\label{e: W on v'}
v'(w(\mu+\rho)-\rho)=B(w,-\mu-\rho)v'(\mu)
\end{equation}
for $w\in W$.
Since the Poisson transformation for $G/K$ is equivariant
for the left action and injective for generic $\lambda$,
it follows from (\ref{e: Kostant}), by applying the inverse
Poisson
transform on both sides, that
\begin{equation}
\label{e: Kostant for psi}
\psi_{v,v'}=
\sigma_\lambda(P(\lambda)(v')(v))1
\end{equation}
for all $v\in V_\delta$, $v'\in V_\delta^M$
(see also \cite{GASS}, Thm. 3.1, p. 251), and
for all $\lambda$ for which
$P(\lambda)$ is non-singular. Here $1$ denotes the
constant function with value 1 on $K/M$.
We apply (\ref{e: Kostant for psi}) for $\lambda=-\mu-\rho$
generic
to the function
$\psi_{v,v'(\mu)}$
and thus obtain
our Paley-Wiener function $\varphi(\mu,\cdot)$
as a finite sum of
elements of the form
$$
\sigma_{-\mu-\rho}(P(-\mu-\rho)(v'(\mu))(v))1.
$$
The function $P\colon {\frak a}^*_{\msy C} \rightarrow {\rm Hom}^*(V^M_\delta,E_\delta)$
satisfies the following transformation property:
\begin{equation}
\label{e: W on P}
P(w\lambda)\circ B(w,\lambda)=P(\lambda).
\end{equation}
Indeed, it follows from
(\ref{e: Kostant for psi}), (\ref{e: A on psi})
and (\ref{e: intertwining property of A})
that
$$\sigma_{w\lambda}(P(w\lambda)(B(w,\lambda)v')(v))1
=\sigma_{w\lambda}(P(\lambda)(v')(v))1$$
for all $v$ and $v'$, and generic $\lambda$. The identity
(\ref{e: W on P}) follows, since the map $u\mapsto \sigma_\nu(u)1$
is injective from $H^*_\delta$ to ${\mathcal H}$ for generic $\nu$
according to \cite{GASS}, Thm. 3.1, p. 251
(alternatively, (\ref{e: W on P}) follows from
\cite{GASS}, Thm. 3.5, p. 254).
It follows from (\ref{e: W on P}) combined with (\ref{e: W on v'})
that the function
$$\mu\mapsto u(\mu):=P(-\mu-\rho)(v'(\mu))(v)\in
H^*_\delta$$ satisfies
$u(w(\mu+\rho)-\rho)=u(\mu)$ for generic $\mu$, that
is, the shifted function $\lambda\mapsto u(\lambda-\rho)$
is $W$-invariant. Notice that $u$ is a rational multiple of
a holomorphic function of $\mu$,
since $P(-\mu-\rho)$ is antilinear in $v'$, and $v'$
is antiholomorphic in~$\mu$.
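For the reader's convenience we spell out this invariance. With $\lambda=-\mu-\rho$ we have $-(w(\mu+\rho)-\rho)-\rho=w\lambda$, and hence, by (\ref{e: W on v'}) and (\ref{e: W on P}),
$$
u(w(\mu+\rho)-\rho)
=P(w\lambda)\bigl(v'(w(\mu+\rho)-\rho)\bigr)(v)
=P(w\lambda)\bigl(B(w,\lambda)v'(\mu)\bigr)(v)
=P(\lambda)\bigl(v'(\mu)\bigr)(v)
=u(\mu).
$$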
It follows from \cite{GASS}, Prop. 4.1, p. 264, that
$\lambda\mapsto P(-\lambda)$ is non-singular
on an open neighborhood of the set where
$${\rm Re}\,\langle\lambda,\alpha\rangle\geq 0$$
for all roots $\alpha\in\Sigma^+$. Hence
$u(\lambda-\rho)$ is holomorphic on this set.
By the above-mentioned $W$-invariance
the function is then holomorphic everywhere. Since it is a
rational multiple of a function of exponential type $r$,
we conclude from \cite{GASS}, Lemma 5.13, p. 288,
that it has exponential type $r$.
Since $H^*_\delta$ is finite dimensional
we thus obtain an expression for
$\varphi(\lambda,\cdot)$ as a finite sum
of functions of the form
$$\varphi_i(\lambda) \sigma_\lambda(u_i)1,$$
with scalar valued functions $\varphi_i$ on ${\frak a}^*_{\msy C}$
which are $W$-invariant
(for the action twisted by $\rho$)
and of exponential type $r$,
and with $u_i\in H^*_\delta$.
According to the theorem proved in \cite{OS}, each function
$\varphi_i$ is the spherical Fourier transform of a
$K$-invariant smooth function $f_i\in C^\infty_r(U/K)$.
The function $L(u_i)f_i$ also belongs to
$C^\infty_r(U/K)$, and by Lemma \ref{l: u-homo}
it has Fourier transform $\varphi_i(\lambda)\sigma_\lambda(u_i)1$.
We conclude that if $f$ is the sum of the $L(u_i)f_i$, then
$\tilde f=\varphi$, as desired.
Finally, it follows from Corollary \ref{c: Sherman}
that $f$ is identical to
the function given by the Fourier series (\ref{e: inverse PW}).
~\hfill$\square$\medbreak\end{proof}
We combine Theorems \ref{t: into} and \ref{t: onto} to obtain
the following.
\begin{thm}
\label{t: main}
There exists a number $R>0$ such that the Fourier transform
is a bijection of $C^\infty_{K,r}(U/K)$ onto
${\rm PW}_{K,r}({\frak a})$ for all $r<R$.
\end{thm}
We note the following corollary, which is analogous to a
result of Torasso in the non-compact case (see
\cite{GASS}, Cor. 5.19, p. 291).
\begin{cor}
There exists $r>0$ such that each function in
$C^\infty_{K,r}(U/K)$ is a finite linear combination of
derivatives of $K$-invariant
functions in $C^\infty_r(U/K)$ by members
of $\,\mathcal U({\frak g})$, acting from the left.
\end{cor}
\begin{proof} More precisely, the proof above shows that
if $f\in C^\infty_{K,r}(U/K)$ is $K$-finite
of isotype $\delta$, then $f=\sum_i L(u_i)f_i$
with $u_i\in H^*_\delta$ and $f_i\in C^\infty_r(U/K)^K$.
\end{proof}
\eqsection{Final remarks}\noindent
Every function $f\in C^\infty(U/K)$ can be expanded in
a sum of $K$-types,
\begin{equation}
\label{e: K-type expansion}
f=\sum_{\delta\in\widehat K} f_\delta
\end{equation}
where $f_\delta\in C^\infty_\delta(U/K)$
is obtained from $f$ by left convolution
with the character of $\delta$ (suitably normalized).
It is easily seen that $f$ is supported in a given
closed geodesic ball $B$ around $x_0$ if and only if
each $f_\delta$ is supported in $B$.
The following is then a consequence of Theorem
\ref{t: main}.
\begin{cor}\label{c: non-K-finite}
There exists $R>0$ with the following
property. Let $f\in C^\infty_R(U/K)$ and $r<R$.
Then $f\in C^\infty_r(U/K)$ if and only if
the Fourier transform $\tilde f_\delta$ of
each of the functions $f_\delta$ allows a
holomorphic continuation satisfying the
growth estimate (b) of Definition \ref{d: PW space}
(with constants depending on $\delta$).
\end{cor}
For example, in the case of the sphere $S^2$, the expansion
(\ref{e: K-type expansion}) of $f$ reads $f=\sum_{m\in\mathbb Z} f_m$,
and the Fourier transform of $f_m$ is the map
\begin{equation}
\label{e: spherical harmonic}
l\mapsto \left\{\begin{array}{cl} c_{l,m} & \mathrm{for}\, \, l\geq |m|\cr
0&\mathrm{for}\, \, 0\leq l<|m|
\end{array}
\right.
\end{equation}
where $c_{l,m}$ are the coefficients of
the spherical harmonics expansion
$$f=\sum_{l=0}^\infty (2l+1)\sum_{|m|\leq l} c_{l,m}Y^m_l.$$
The condition in Corollary
\ref{c: non-K-finite} is thus that the map
(\ref{e: spherical harmonic}) has a holomorphic extension
to $l\in\mathbb C$ of the proper exponential type, for each $m\in\mathbb Z$.
It is an obvious question whether the assumption of
$K$-finiteness can be removed in Theorem \ref{t: main}.
It is not difficult to remove it from Theorem \ref{t: onto}.
Assume that $\varphi$ satisfies Properties (b) and (c)
in Definition \ref{d: PW space} for a suitably
small value of $r$. Define a function $f:U/K\rightarrow \mathbb{C}$
by (\ref{e: inverse PW}). Using the arguments from \cite{Sugiura,taylor}
it follows that $f\in C^\infty(U/K)$. By expanding
$f$ as in (\ref{e: K-type expansion})
it follows from Corollary \ref{c: non-K-finite}
that $f$ has support inside the ball of radius $r$.
It also follows that $\tilde f=\varphi$.
The nontrivial part would be to remove the assumption
from Theorem \ref{t: into}. At this point we do not know if the Fourier
transform actually maps all non-$K$-finite functions of small support
into the space of functions satisfying the estimate in
Property (b). The ingredients in our proof, in particular
the matrices $P(\lambda )$, depend on the $K$-types. We would like
to point out that for the noncompact dual $G/K$, this direction
is proved in \cite{GASS}, p. 278, using the Radon transform.
It has been suggested to us by Simon Gindikin that \cite{Simon}
might be used in such an argument for $U/K$.
\section{Introduction}
\label{sec:intro}
Convolutional neural networks (CNN) \cite{lecun1998gradient} allow us to extract powerful features that can be used for tasks such as image classification and segmentation. However, these features are usually domain specific in that they are not discriminative enough for datasets coming from other domains, resulting in poor classification performance. Consequently, unsupervised domain adaptation techniques have emerged \cite{ganin2016domain,tzeng2015simultaneous,liu2016coupled,wang2018deep} to address the domain shift phenomenon between a source dataset and a target dataset. Common techniques use adversarial learning in order to make extracted features from the source and target datasets indistinguishable. The extracted features from the target dataset are then passed through a trained classifier (pre-trained on the source dataset) to predict the labels of the target test-set \cite{2017arXiv170205464T}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/eg.png}
\caption{{\bf Metric Learning.} The result of minimizing the triplet loss on the MNIST dataset. Each cluster corresponds to examples belonging to a single digit label.}
\label{fig:clusters}
\end{figure}
Recently, metric-based methods were introduced to address the problem of unsupervised domain adaptation \cite{hsu2017learning,pinheiro2017unsupervised}. Namely, an example is classified by computing its similarity to prototype representations of each category \cite{pinheiro2017unsupervised}. Further, a category-agnostic clustering network was proposed by \cite{hsu2017learning} to cluster new datasets through transfer learning.
\begin{figure*}[t!]
\centering
\subfigure{\label{fig:a}\includegraphics[width=0.45\textwidth]{figures/p1_out.pdf}}
\subfigure{\label{fig:b}\includegraphics[width=0.45\textwidth]{figures/p2_out.pdf}}
\caption{{\bf Domain Adaptation.} Left: the {\bf blue} dots represent the MNIST embeddings after optimizing Eq. \eqref{eq:triplet}. Right: the {\bf orange} dots represent the USPS embeddings before minimizing the domain shift adversarially by Eq. \eqref{eq:da}.}
\end{figure*}
In this paper, we introduce M-ADDA, a metric-based adversarial discriminative domain adaptation framework. First, M-ADDA trains the source model using metric learning by optimizing the triplet loss \cite{hoffer2015deep} on the source dataset. As a result, if $K$ is the number of classes, the dataset is clustered into $K$ clusters, each composed of examples having the same label (see Fig. \ref{fig:clusters}). The goal is to obtain an embedding of the target dataset where the k-nearest neighbors (kNN) of each example belong to the same class and where examples from different classes are separated by a large margin. A major strength of this approach is its non-parametric nature \cite{weinberger2009distance}, as it does not implicitly make parametric (and possibly limiting) assumptions about the input distributions.
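As a concrete illustration, the loss driving this clustering can be sketched as follows (a minimal NumPy version of the standard hinge-form triplet loss; the margin value and the squared-distance convention are assumptions for illustration, not necessarily the exact form of Eq. \eqref{eq:triplet}):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull the positive (same label) toward the anchor and push the
    # negative (different label) at least `margin` farther away.
    d_ap = np.sum((anchor - positive) ** 2)  # squared distance anchor-positive
    d_an = np.sum((anchor - negative) ** 2)  # squared distance anchor-negative
    return max(0.0, d_ap - d_an + margin)
```

Minimizing this loss over many sampled triplets is what produces the $K$ well-separated clusters shown in Fig. \ref{fig:clusters}.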
\begin{figure*}[t!]
\centering
\subfigure{\label{fig:a}\includegraphics[width=0.45\textwidth]{figures/p1_out.pdf}}
\subfigure{\label{fig:c}\includegraphics[width=0.45\textwidth]{figures/p3_out.pdf}}
\caption{{\bf Domain Adaptation.} Left: the {\bf blue} dots represent the MNIST embeddings after optimizing Eq. \eqref{eq:triplet}. Right: the {\bf orange} dots represent the USPS embeddings after minimizing the domain shift adversarially and optimizing Eq. \eqref{eq:dacluster}.}
\end{figure*}
Next, we align the distributions of the extracted source and target features using the adversarial learning method of ADDA \cite{2017arXiv170205464T}, which addresses the domain discrepancy between the datasets. Early methods for domain adaptation minimize correlation distances or the maximum mean discrepancy to ensure both datasets share a common feature space~\cite{tzeng2014deep,long2015learning,sun2016return,sun2016deep}; however, adversarial learning approaches have shown state-of-the-art performance for domain adaptation. While the feature distributions become more similar during training, we also train a network that maps the extracted features to embeddings such that they are clustered into $K$ clusters. Concurrently, we encourage the clusters to have large margins between them. To this end, the network is trained by minimizing the distance between each target example's embedding and the closest cluster center of the source embeddings. This approach is simple to implement and achieves competitive results on digit datasets such as MNIST \cite{lecun1998mnist} and USPS \cite{le1989handwritten}.
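The center-matching step can be sketched as follows (a simplified NumPy version; the function name is illustrative, and in practice the centers would come from the source embeddings produced by the triplet-trained model rather than being given, so this is a sketch of the idea behind Eq. \eqref{eq:dacluster}, not its exact form):

```python
import numpy as np

def center_matching_loss(target_emb, source_centers):
    # target_emb: (n, d) target embeddings; source_centers: (K, d) centers.
    # Pairwise squared distances from every target point to every center, (n, K).
    d2 = ((target_emb[:, None, :] - source_centers[None, :, :]) ** 2).sum(axis=-1)
    # Each target embedding is pulled toward its nearest source cluster center.
    return d2.min(axis=1).mean()
```

Minimizing this quantity with respect to the target encoder's parameters encourages the target embeddings to settle into the cluster structure learned on the source dataset.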
To summarize our contributions, (1) we propose a novel metric-learning framework that uses the triplet loss to cluster the source dataset for the task of domain adaptation; (2) we propose a new loss function that regularizes the embeddings of the target dataset to encourage them to form clusters; and (3) we show a large improvement over ADDA \cite{2017arXiv170205464T} on a standard unsupervised domain adaptation benchmark. Note that ADDA uses a similar architecture but a different loss function than M-ADDA.
In section \ref{sec:related}, we review the related works and other similar approaches. In section \ref{sec:proposed}, we introduce our framework and the new loss terms for domain adaptation. In section \ref{sec:exps}, we present experimental results illustrating the efficacy of our approach on the digits dataset. Finally, we conclude the paper in section \ref{sec:conclusion}.
\begin{figure*}
\centering
\subfigure{\label{fig:a}\includegraphics[width=1.0\textwidth]{figures/pretrain.pdf}}
\caption{{\bf Training the source model.} We pre-train the source encoder and decoder by optimizing the triplet loss in Eq. \eqref{eq:triplet}. The source encoder extracts the features from the source dataset and the decoder maps the features to the embedding space where clusters are formed.}
\label{fig:pretrain}
\end{figure*}
\section{Related Work}\label{sec:related}
{\it Metric learning} has shown great success in many visual classification tasks \cite{weinberger2009distance,song2016deep,hoffer2015deep}. The goal is to learn a distance metric such that examples belonging to the same label are as close as possible in some embedding space and samples from different labels are as far from one another as possible. It can be used for unsupervised learning such as clustering \cite{xing2003distance} and supervised learning such as k-nearest neighbor algorithms \cite{han2001text,weinberger2009distance}. Recently, triplet networks \cite{hoffer2015deep} and Siamese networks \cite{bertinetto2016fully} were proposed as powerful models for metric learning and have been successfully applied to few-shot learning and learning with little data. However, to the best of our knowledge, we are the first to apply triplet-network-based metric learning to domain adaptation.
A topic closely related to domain adaptation is {\it transfer learning}, which has received tremendous attention recently. It allows us to solve tasks where labels are scarce by learning from relevant tasks for which labels are abundant \cite{cao2013practical,shi2017transfer,deselaers2012weakly} and by identifying a common structure between multiple tasks \cite{finn2017model}. A common transfer learning strategy is to use pre-trained networks, such as those trained on ImageNet \cite{krizhevsky2012imagenet}, and fine-tune them on new tasks. While this approach can significantly improve performance on many visual tasks, it performs poorly when the pre-trained network is used on a dataset drawn from a different distribution than the one it was trained on. This is because the model has learned features specific to one domain that might not be meaningful for other domains.
To address this challenge, a large set of domain adaptation methods has been proposed over the years \cite{2017arXiv170205464T,ganin2016domain,tzeng2015simultaneous,liu2016coupled}, whose goal is to determine a common latent space between two domains, often referred to as a source dataset and a target dataset. The general setting is to use a model trained to extract features from the source dataset, and then encourage features extracted from the target dataset to be similar to the source features \cite{ghifary2016deep,bousmalis2016domain,ganin2014unsupervised,tzeng2017adversarial,saito2017asymmetric}. Auto-encoder based methods~\cite{ghifary2016deep,bousmalis2017unsupervised} train one or several auto-encoders for the source and target datasets. Then, a classifier is trained on the latent representation of the source dataset, and the same classifier is used to label the target dataset. Approaches based on adversarial networks~\cite{goodfellow2014generative} use a generator model to transform the examples' feature representations from one domain to another~\cite{bousmalis2017unsupervised,shrivastava2017learning,russo2017source}.
Another group of domain adaptation methods~\cite{tzeng2017adversarial,tzeng2014deep,li2016revisiting,sun2016deep} minimizes the difference between the distributions of the features extracted from the source and target data. They achieve this by minimizing point estimates of a given metric between the source and target distributions using maximum or mean discrepancy metrics. Current state-of-the-art techniques use the adversarial learning approach to encourage the feature representations from the two datasets to be indistinguishable (i.e., have a common distribution) \cite{2017arXiv170205464T}. Close to our method are the recent similarity-based approaches of \cite{hsu2017learning,pinheiro2017unsupervised}, which, respectively, transfer a class-agnostic prior to new datasets and classify examples by computing their similarity to a prototype representation of each category. Our approach uses a regularized metric learning method with k-nearest neighbors as a non-parametric framework. This can be more powerful than ADDA, which uses a model making parametric assumptions (introducing limitations) about the input distribution \cite{weinberger2009distance}.
Another class of domain adaptation methods are self-ensembling methods, which augment the source dataset by applying various label-preserving transformations to the images~\cite{laine2016temporal,tarvainen2017mean,french2018self,sajjadi2016regularization}. Using the augmented dataset, they train several deep network models and use an ensemble of those networks for the domain adaptation task. Laine et al.~\cite{laine2016temporal} propose two models: the $\Pi$-model and the temporal model. In the $\Pi$-model, every unlabelled sample is fed to a classifier twice with different dropout, noise and image translation parameters. The temporal model records the average of the historical network predictions per sample and forces subsequent predictions to be close to that average. Tarvainen et al.~\cite{tarvainen2017mean} improve the temporal model by averaging the network weights rather than the class predictions. This results in two networks: the student and the teacher. The student network is trained via gradient descent, while the weights of the teacher are the historical exponential moving average of the weights of the student network. The unsupervised loss is the mean squared difference between the predictions of the student and the teacher under different dropout, noise and image translation parameters. French et al.~\cite{french2018self} combine the previous two methods with additional modifications and engineering, and obtain state-of-the-art results on many domain adaptation tasks for image datasets. However, this method relies on heavy engineering, with many label-preserving transformations to augment the data. In contrast, we show that our method significantly improves over ADDA by making simple changes to its framework.
\section{Proposed Approach: M-ADDA}\label{sec:proposed}
We propose M-ADDA which performs two main steps:
\begin{enumerate}
\item train a source model on the source dataset with the triplet loss function, using metric learning (as in Figure \ref{fig:pretrain}); then
\item adapt the distributions of the features extracted from the source and target datasets, while simultaneously regularizing the predicted target dataset embeddings to form clusters (see Figure \ref{fig:adversarial}).
\end{enumerate}
\begin{figure*}
\centering
\subfigure{\label{fig:a}\includegraphics[width=1.0\textwidth]{figures/adversarial.pdf}}
\caption{{\bf Training the target model.} We adversarially adapt the encoded features' distributions between the source and target encoder using Eq. \eqref{eq:da} while using the source cluster centers to optimize Eq. \eqref{eq:cluster}. The label of each target embedding is the mode of the labels of the nearest source embedding neighbors.}
\label{fig:adversarial}
\end{figure*}
Our M-ADDA framework consists of a source model and a target model. The two models have the same architecture, and they both have an encoder that extracts features from the input dataset and a decoder to map the extracted features to embeddings. Consider a source dataset $(X_S, Y_S)$, and a target dataset $(X_T, Y_T)$ where the data $X_S$ and $X_T$ are drawn from two different distributions.
\subsubsection{Training the source model.}
The source model $f_{\theta_S}(\cdot)$, parameterised by $\theta_S$, is first trained on the source dataset by optimizing the following triplet loss:
\begin{equation}
\begin{aligned}
\mathcal{L}(\theta_S)= \sum_{(a_i, p_i, n_i)}\max(&||f_{\theta_S}(a_i) - f_{\theta_S}(p_i)||^2 - \\ & ||f_{\theta_S}(a_i) - f_{\theta_S}(n_i)||^2 + m,0)
\label{eq:triplet}
\end{aligned}
\end{equation}
where $a_i$ is an anchor example (picked randomly), $p_i$ is an example with the same label as the anchor, and $n_i$ is an example with a different label from the anchor. Optimizing Eq.~\eqref{eq:triplet} encourages the embedding of $a_i$ to be closer to that of $p_i$ than to that of $n_i$ by at least the margin $m$. If the anchor is close enough to the positive example $p_i$ and farther from the negative example $n_i$ by a margin of at least $m$, the $\max$ function returns zero; therefore, the corresponding triplet $(a_i,p_i,n_i)$ does not contribute to the loss function. Otherwise, the $\max$ function returns $||f_{\theta_S}(a_i) - f_{\theta_S}(p_i)||^2 - ||f_{\theta_S}(a_i) - f_{\theta_S}(n_i)||^2 + m$. Minimizing this term moves $a_i$ towards $p_i$ and away from $n_i$ in the embedding space. After optimizing this loss long enough, samples with the same label are pulled together and samples with different labels are pushed apart. As a result, points of the same label form a single cluster, which allows us to efficiently classify examples using k-nearest neighbors (see Figure \ref{fig:pretrain}).
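As a concrete sketch, the loss of Eq.~\eqref{eq:triplet} can be written in a few lines of NumPy; the batched layout and function name are ours, not the paper's implementation:

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, margin=1.0):
    """Triplet loss: hinge on the gap between the anchor-positive and
    anchor-negative squared distances.
    f_a, f_p, f_n: (N, d) arrays of anchor/positive/negative embeddings."""
    d_ap = np.sum((f_a - f_p) ** 2, axis=1)  # ||f(a_i) - f(p_i)||^2
    d_an = np.sum((f_a - f_n) ** 2, axis=1)  # ||f(a_i) - f(n_i)||^2
    return np.sum(np.maximum(d_ap - d_an + margin, 0.0))
```

A triplet whose negative is already farther from the anchor than the positive by at least the margin contributes zero, exactly as described above.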
Algorithm \ref{alg:source} shows the procedure of training the source model on the source dataset for one epoch. Given a batch $(X_B, Y_B)$, for each unique element $y_i$ in $Y_B$, we obtain an anchor $a_i$ whose label is $y_i$, a positive example $p_i$ whose label is $y_i$, and a negative example $n_i$ whose label is not $y_i$. Note that set($Y_B$) returns the unique elements of $Y_B$. In our experiments, we obtained the negative example uniformly at random. However, other methods are possible such as greedily picking the triplet with the largest loss (as computed by Eq. (\ref{eq:triplet})), and non-uniformly picking triplets based on their individual loss values. Finally, for each triplet, we compute the loss and update the parameters of the source model to minimize Eq. (\ref{eq:triplet}).
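The triplet-forming loop of Algorithm~\ref{alg:source} can be sketched as follows, with the uniformly random negative used in our experiments (index-based; variable names are illustrative):

```python
import numpy as np

def sample_triplets(y, rng):
    """Form (anchor, positive, negative) index triplets from batch labels y:
    all same-label pairs, each paired with a random different-label negative."""
    triplets = []
    for label in np.unique(y):                # set(Y_B)
        pos_idx = np.flatnonzero(y == label)
        neg_idx = np.flatnonzero(y != label)
        if len(neg_idx) == 0:                 # no negatives available
            continue
        for i in pos_idx:                     # all pairs {a_i, p_i} with label y_i
            for j in pos_idx:
                if i != j:
                    triplets.append((i, j, rng.choice(neg_idx)))
    return triplets
```

Loss-based (hard or weighted) negative mining, mentioned above as an alternative, would only change how the last index is drawn.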
\subsubsection{Training the target model.} Next, we define $C$ as the set of centers of the source embedding clusters (represented as red dots in Figure \ref{fig:adversarial}). Each center in $C$ corresponds to a single label in the source dataset and is computed by taking the mean of the source embeddings belonging to that label. Then, we train the target model, parametrized by $\theta_T$, by optimizing the following two loss terms:
\begin{equation}
\mathcal{L}(\theta_T, \theta_D) = \underbrace{\mathcal{L}_A(\theta_{T_E}, \theta_{D})}_{\text{Adapt}} + \underbrace{\mathcal{L}_C(\theta_{T})}_{\text{C-Magnet}}
\label{eq:dacluster}
\end{equation}
where $\theta_{T_E}$ corresponds to the parameters of the target model's encoder, and $\theta_D$ is the parameter set of a discriminator model used to adapt the distributions of the features extracted from the source ($S$) and target ($T$) datasets. We achieve this by optimizing:
\begin{equation}
\begin{aligned}
\mathcal{L}_A(\theta_{T_E}, \theta_D)= \min_{\theta_{D}} \max_{\theta_{T_E}} - &\sum_{i\in S} \log{D_{\theta_{D}}(E_{\theta_{S_E}}(X_{S_i}))} \;- \\& \sum_{i\in T} \log{(1 - D_{\theta_{D}}(E_{\theta_{T_E}}(X_{T_i})))},
\label{eq:da}
\end{aligned}
\end{equation}
where $\theta_{S_E}$ is the set of parameters of the source model's encoder, and $D(\cdot)$ is the discriminator model, trained to maximize the probability that the features extracted by the source model's encoder come from the source dataset and that the features extracted by the target model's encoder come from the target dataset. In other words, the discriminator $D(\cdot)$ tries to distinguish between the features extracted from the source dataset and those extracted from the target dataset, assigning a high value (close to one) to a source feature vector and a low value (close to zero) to a target feature vector. Simultaneously, the encoder of the target model is trained to confuse the discriminator into predicting that the target features come from the source dataset. This adversarial learning approach encourages the distributions of the features extracted by $E_{\theta_{S_E}}(X_{S_i})$ and $E_{\theta_{T_E}}(X_{T_i})$ to be indistinguishable. For brevity, the loss functions are shown for a single source example $X_{S_i}$ and target example $X_{T_i}$.
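In code, the two sides of the adversarial game in Eq.~\eqref{eq:da} reduce to binary cross-entropy terms; the inverted-label form of the encoder objective below is the variant popularized by GAN training and is our assumption about the implementation:

```python
import numpy as np

def discriminator_loss(d_src, d_tgt, eps=1e-8):
    """Discriminator side: push D(source features) -> 1 and
    D(target features) -> 0; d_src, d_tgt are D's outputs in (0, 1)."""
    return -np.mean(np.log(d_src + eps)) - np.mean(np.log(1.0 - d_tgt + eps))

def target_encoder_loss(d_tgt, eps=1e-8):
    """Target-encoder side: fool D into scoring target features as source."""
    return -np.mean(np.log(d_tgt + eps))
```

A perfect discriminator drives the first loss towards zero, while a successful target encoder drives the second towards zero.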
In parallel, we minimize the center magnet loss term defined as,
\begin{equation}
\mathcal{L}_C(\theta_T)= \sum_{i\in T} \min_j ||f_{\theta_T}(X_{T_i}) - C_j||^2,
\label{eq:cluster}
\end{equation}
which pulls the embedding of each target example $X_{T_i}$ towards the closest cluster center in $C$ (see Figure \ref{fig:adversarial}). The cluster center for a class is obtained by taking the Euclidean mean of all source embeddings belonging to that class. Since MNIST and USPS both have 10 classes, $|C|=10$. This regularization term encourages the target dataset embeddings to form clusters similar to those formed by the source dataset embeddings. It is useful when minimizing $\mathcal{L}_A(\theta_{T_E}, \theta_D)$ alone fails to cluster the target embeddings in the same way as the source embeddings. For example, in Fig. \ref{fig:b} we see that the target embeddings become scattered around the center when minimizing $\mathcal{L}_A(\theta_{T_E}, \theta_D)$ only. However, by simultaneously minimizing $\mathcal{L}_C(\theta_T)$ we obtain a better formation of clusters, as seen in Fig. \ref{fig:c}.
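The centers $C$ and the center magnet term of Eq.~\eqref{eq:cluster} can be sketched as follows (NumPy; function names are ours):

```python
import numpy as np

def cluster_centers(emb_src, y_src):
    """The set C: Euclidean mean of the source embeddings of each class."""
    return np.stack([emb_src[y_src == c].mean(axis=0)
                     for c in np.unique(y_src)])

def center_magnet_loss(emb_tgt, centers):
    """Pull each target embedding towards its nearest cluster center."""
    # (N, |C|) squared distances from every embedding to every center
    d2 = np.sum((emb_tgt[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.sum(d2.min(axis=1))
```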
\begin{algorithm}
\caption{Training the source model on the source dataset (single epoch).}\label{alg:source}
\begin{algorithmic}[1]
\INPUTS
\STATE Source model $f_{\theta_S}(\cdot)$, and source images and labels $(X_S, Y_S)$.
\ENDINPUTS
\FOR{$\{X_B, Y_B\} \in (X_S, Y_S)$}
\FOR{$y_i \in \text{set( }Y_B\text{) }$}
\STATE $AP \gets \text{All image pairs whose label is } y_i$.
\FOR{each $\{a_i,p_i\} \in AP$ }
\STATE $n_i \gets \text{A random sample in } X_B \text{ whose label is not } y_i$.
\STATE $L \gets \text{The loss in Eq (\ref{eq:triplet}) using } \{a_i, p_i, n_i\} \text{ and } f_{\theta_S}(\cdot)$.
\STATE Update the parameters $\theta_S$ by backpropagating through $L$.
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Training the target model on the target dataset (single epoch).}\label{alg:da}
\begin{algorithmic}[1]
\INPUTS
\STATE Target model $f_{\theta_T}(\cdot)$, and source and target images and labels $(X_S, Y_S, X_T, Y_T)$.
\ENDINPUTS
\FOR{$\{X_{S_B}, Y_{S_B},X_{T_B}, Y_{T_B}\} \in (X_S, Y_S, X_T, Y_T)$}
\STATE Maximize Eq. (\ref{eq:da}) w.r.t. $\theta_D$ using $\{X_{S_B}, Y_{S_B},X_{T_B}, Y_{T_B}\}$
\STATE Minimize Eq. (\ref{eq:da}) w.r.t. $\theta_T$ using $\{X_{S_B}, Y_{S_B},X_{T_B}, Y_{T_B}\}$
\ENDFOR
\STATE $E_S \gets \text{The embeddings of the source dataset extracted by } f_{\theta_S}(\cdot)$
\STATE $C \gets \text{The cluster centers of } E_S \text{, obtained by taking the Euclidean mean for each class.}$
\FOR{$\{X_{T_B}, Y_{T_B}\} \in (X_T, Y_T)$}
\STATE $L \gets \text{The loss computed using Eq. \ref{eq:cluster} and cluster centers $C$}$
\STATE Update parameters $\theta_T$ by backpropagating through $L$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Predicting the labels of the test images.}\label{alg:c}
\begin{algorithmic}[1]
\INPUTS
\STATE Target model $f_{\theta_T}(\cdot)$, source model $f_{\theta_S}(\cdot)$, and source and target images and labels.
\ENDINPUTS
\STATE $E_S \gets \text{The embeddings of the source dataset extracted by } f_{\theta_S}(\cdot)$
\FOR{$\{X_{T_B}, Y_{T_B}\} \in (X_T, Y_T)$}
\STATE $E_{T_B} \gets \text{The embeddings of $X_{T_B}$ extracted by } f_{\theta_T}(\cdot)$
\STATE $P_{T_B} \gets \text{The mode label of the k-nearest $E_S$ samples.}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
Algorithm \ref{alg:da} shows the procedure for training the target model on the target dataset. Lines 4-5 use Eq. (\ref{eq:da}) to make the target features and the source features indistinguishable. Lines 7-12 update the target model parameters by encouraging the target embeddings to move towards the closest source cluster center. As shown in Algorithm \ref{alg:c}, the prediction stage consists of two steps. First, we extract the embeddings of the source dataset examples using the pre-trained source model. Then, the label of an example $X_{T_i}$ is the mode label of its k-nearest source embeddings. This non-parametric approach allows us to implicitly learn powerful features that are used to compute the similarities between examples.
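The prediction step of Algorithm~\ref{alg:c} is a plain k-nearest-neighbor vote, sketched below (NumPy; the default $k$ and the function name are illustrative):

```python
import numpy as np

def knn_predict(emb_tgt, emb_src, y_src, k=5):
    """Label each target embedding with the mode of the labels of its
    k nearest source embeddings."""
    # (N_tgt, N_src) squared distances between target and source embeddings
    d2 = np.sum((emb_tgt[:, None, :] - emb_src[None, :, :]) ** 2, axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]   # indices of the k closest sources
    labels = y_src[nearest]                   # (N_tgt, k) neighbour labels
    return np.array([np.bincount(row).argmax() for row in labels])
```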
\section{Experiments}\label{sec:exps}
\begin{table}[t]
\centering
\caption{{\bf Digits Adaptation}. We evaluate our method on the unsupervised domain adaptation task on the digits datasets, using the setup in \cite{2017arXiv170205464T}.}
\begin{tabular}{ |l|c|c| }
Method & MNIST $\rightarrow$ USPS & USPS $\rightarrow$ MNIST \\\hline\hline
Source only (ADDA \cite{2017arXiv170205464T})& 0.752 & 0.571 \\\hline
Source only (Ours)& 0.601 & 0.679\\\hline
Gradient reversal \cite{ganin2016domain}& 0.771 & 0.730 \\\hline
Domain confusion \cite{tzeng2015simultaneous}& 0.791 & 0.665 \\\hline
CoGAN \cite{liu2016coupled}& 0.912& 0.891 \\\hline
ADDA \cite{2017arXiv170205464T}& 0.894 & 0.901 \\\hline\hline
M-ADDA (Ours) & {\bf \;0.952 } & {\bf \;0.940 } \\\hline
\end{tabular}
\label{table:results}
\end{table}
\begin{figure}[t]
\centering
\subfigure{\label{fig:a}\includegraphics[width=70mm]{figures/da.png}}
\caption{{\bf Dataset.} Example images taken from the 2 digit domains we used in our benchmark.}
\label{fig:digits}
\end{figure}
\begin{table}[t]
\centering
\caption{{\bf Digits Adaptation}. We evaluate our method using the setup in \cite{bousmalis2016domain,bousmalis2017unsupervised}.}
\begin{tabular}{ |l|c|c| }
Method & MNIST $\rightarrow$ USPS & USPS $\rightarrow$ MNIST \\\hline\hline
Source only (Ours)& 0.60 & 0.68\\\hline
DSN \cite{bousmalis2016domain}& 0.91 & -\\\hline
PixelDA \cite{bousmalis2017unsupervised} & 0.96 & -\\\hline
SimNet \cite{pinheiro2017unsupervised}& 0.96 & 0.96 \\\hline\hline
M-ADDA (Ours) & {\bf 0.98} & {\bf 0.97 } \\\hline
\end{tabular}
\label{table:resultsBig}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/plots/SRC_uspsBig2mnistBig.pdf}~
\includegraphics[width=0.5\textwidth]{figures/plots/ACC_uspsBig2mnistBig.pdf}
\caption{{\bf Optimizing the triplet loss.} (left) the Triplet loss value during the training of the source model on the USPS and MNIST datasets; (right) The classification accuracy obtained on the target datasets.}\label{fig:loss}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/qualitative/src_mnistBig2uspsBig.pdf}~
\includegraphics[width=0.5\textwidth]{figures/qualitative/tgt_mnistBig2uspsBig.pdf}
\caption{{\bf M-ADDA results.} (left) The t-SNE components of the source embeddings on the MNIST dataset after training the source model. (right) The t-SNE components of the target embeddings of the USPS dataset after training the target model. The stars represent the cluster centers of the source embeddings. The colors represent different labels.}\label{fig:both}
\end{figure}
\begin{table}[t]
\centering
\caption{{\bf Ablation studies}. Impact of the loss terms on the classification accuracy of the target model.}
\begin{tabular}{ |l|c|c| }
Method & MNIST $\rightarrow$ USPS & USPS $\rightarrow$ MNIST \\\hline\hline
Center Magnet Only& 0.77 & 0.85 \\\hline
Adversarial Adaptation Only & 0.93 & 0.92\\\hline
M-ADDA & {\bf \;0.98 } & {\bf \;0.97 } \\\hline
\end{tabular}
\label{table:ablation}
\end{table}
To illustrate the performance of our method on the unsupervised domain adaptation task, we apply it to the standard digits benchmark, using accuracy as the evaluation metric. We consider two domains: MNIST and USPS. Each consists of 10 classes representing the digits 0 through 9 (we show some digit examples in Figure \ref{fig:digits}). We follow the experimental setup in \cite{2017arXiv170205464T}, where 2000 images are sampled from MNIST and 1800 from USPS for training. Since our task is unsupervised domain adaptation, all the images in the target domain are unlabeled. In each experiment, we ran Algorithm \ref{alg:source} for 200 epochs to train the source model. Then, we report the accuracy on the target test set after running Algorithm \ref{alg:da} for 200 epochs.
We use model architectures similar to those in \cite{2017arXiv170205464T}. The encoder module is the modified LeNet architecture provided in the Caffe source code \cite{lecun1998gradient}. The decoder is a simple linear model that transforms the encoded features into 256-unit embedding vectors. The discriminator consists of 3 fully connected layers: two layers with 500 hidden units, followed by the final discriminator output. Each of the 500-unit layers uses a ReLU activation function.
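For concreteness, the discriminator described above amounts to the following forward pass (NumPy sketch; the sigmoid on the final output and the weight layout are our assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_forward(z, params):
    """Two 500-unit ReLU layers followed by a scalar output.
    params = [(W1, b1), (W2, b2), (W3, b3)]; z has shape (N, feat_dim)."""
    h = relu(z @ params[0][0] + params[0][1])
    h = relu(h @ params[1][0] + params[1][1])
    return sigmoid(h @ params[2][0] + params[2][1])
```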
Table \ref{table:results} shows the results of our experiments on the digits datasets. Our method achieves a clear improvement over the previous state-of-the-art method, ADDA \cite{2017arXiv170205464T}. This suggests that metric learning allows us to achieve good results for domain adaptation. Further, Table \ref{table:resultsBig} shows the results of our experiments using the setup in \cite{bousmalis2016domain,bousmalis2017unsupervised}, where the full training sets of both MNIST and USPS were used. Our method beats recent state-of-the-art methods on the USPS-MNIST domain adaptation task. However, it would be interesting to assess the efficacy of M-ADDA on more complicated tasks such as the VisDA dataset challenge \cite{peng2017visda}. Fig. \ref{fig:loss} (left) shows the Triplet loss value during the training of the source model on the USPS and MNIST datasets. Further, Fig. \ref{fig:loss} (right) shows the classification accuracy obtained on the target datasets with respect to the number of epochs. Higher accuracy was obtained on USPS when the model was trained on MNIST, which is expected since MNIST contains more training examples.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/plots/TGT_uspsBig2mnistBig.pdf}~
\includegraphics[width=0.5\textwidth]{figures/plots/TGT_mnistBig2uspsBig.pdf}
\caption{{\bf Ablation studies.} (left) The classification accuracy on MNIST using variations of the loss function (\ref{eq:dacluster}); (right) The classification accuracy on USPS using variations of the loss function (\ref{eq:dacluster}). NOCENTER refers to optimizing Eq. (\ref{eq:da}) only, and NODISC refers to optimizing Eq. (\ref{eq:cluster}) only. The blue lines refer to the result of optimizing Eq. (\ref{eq:dacluster}).}\label{fig:acc}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/qualitative/src_mnist2usps_nodisc.pdf}~
\includegraphics[width=0.5\textwidth]{figures/qualitative/tgt_mnist2usps_nodisc.pdf}
\caption{{\bf Center magnet optimization only.} The stars represent the cluster centers of the source embeddings.}\label{fig:centeronly}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/qualitative/src_mnist2usps_nocenter.pdf}~
\includegraphics[width=0.5\textwidth]{figures/qualitative/tgt_mnist2usps_nocenter.pdf}
\caption{{\bf Adversarial optimization only.} The stars represent the cluster centers of the source embeddings.}\label{fig:daonly}
\end{figure}
In Table \ref{table:ablation}, we compare two main variations of training the target model. Center Magnet only updates the target model using only Eq. (\ref{eq:cluster}), ignoring the adversarial training part of Eq. (\ref{eq:da}). Training the target model with the center magnet loss alone results in poor performance. This is expected, since performance then depends highly on the initial clustering. We see in Fig. \ref{fig:centeronly} (right) that several source cluster centers (represented as stars) attract samples of different labels. For example, samples with the pink label are clustered with those of the green label; similarly, those with the orange label are clustered with those of the teal label. This is expected, since the target model is encouraged to move the embeddings to the nearest cluster centers without matching the extracted feature distributions between the source and target datasets.
Using only the adversarial adaptation loss improves the results significantly, since making the distributions of the features extracted from the source and target datasets similar is crucial. However, we see in Fig. \ref{fig:daonly} (right) that some samples are far from any cluster center, which makes their class labels ambiguous; for example, the pink and yellow samples lying midway between the yellow and pink cluster centers. The center magnet loss regularizes the model against these ambiguities. As a result, we see in Fig. \ref{fig:both} (right) that better clusters are formed when we optimize the full loss function defined in Eq. \eqref{eq:dacluster}. This suggests that M-ADDA has strong potential for the task of unsupervised domain adaptation.
\section{Conclusion}\label{sec:conclusion}
We propose M-ADDA, a metric-learning based method, to address the task of unsupervised domain adaptation. The framework consists of two main steps. First, a triplet loss is used to pre-train the source model on the source dataset. Then, we adversarially train a target model to adapt the distribution of its extracted features to match that of the source model. In parallel, we optimize a center magnet loss to regularize the output embeddings of the target model so that they form clusters with a structure similar to that of the source model's output embeddings. We showed that this approach can perform significantly better than ADDA \cite{2017arXiv170205464T} on the MNIST and USPS digits adaptation datasets. For future work, it would be interesting to apply these methods to more complicated datasets such as those in the VisDA challenge.
\bibliographystyle{splncs04}
\section{Introduction}\label{SecIntro}
A porous medium consists of a solid matrix saturated with a fluid that circulates freely through the pores \cite{BIOT56A,BOURBIE,CARCIONE07}. Such media are involved in many applications, modeling for instance natural rocks, engineering composites \cite{GIBSON94} and biological materials \cite{COWIN89}. The most widely-used model describing the propagation of mechanical waves in porous media has been proposed by Biot in 1956 \cite{BIOT56A,BIOT56B}. It includes two classical waves (one "fast" compressional wave and one shear wave), in addition to a second "slow" compressional wave, which is highly dependent on the saturating fluid. This slow wave was observed experimentally in 1980 \cite{PLONA80}, thus confirming the validity of Biot's theory.
Two frequency regimes have to be distinguished when dealing with poroelastic waves. In the low-frequency range (LF), the flow inside the pores is of Poiseuille type \cite{BIOT56A}. The viscous effects are then proportional to the relative velocity of the motion between the fluid and the solid components. In the high-frequency range (HF), modeling the dissipation is a more delicate task. Biot first presented an expression for particular pore geometries \cite{BIOT56B}. In 1987, Johnson-Koplik-Dashen (JKD) published a general expression for the HF dissipation in the case of random pores \cite{JKD87}, where the viscous efforts depend on the square root of the frequency. No particular difficulties are raised by the HF regime if the solution is computed in the space-frequency domain \cite{Carcione96b,Santos05}. On the contrary, the computation of HF waves in the space-time domain is much more challenging. Time fractional derivatives are then introduced, involving convolution products \cite{LUBICH86}. The past of the solution must be stored, which dramatically increases the computational cost of the simulations.
The present work proposes an efficient numerical model to simulate transient poroelastic waves over the full frequency range of Biot's model. In the high-frequency range, only two numerical approaches have been proposed in the literature to integrate the Biot-JKD equations directly in the time domain. The first approach consists of a straightforward discretization of the fractional derivatives defined by a convolution product in time \cite{MASSON10}. In the example given in \cite{MASSON10}, the solution is stored over $20$ time steps. The second approach is based on the diffusive representation of the fractional derivative \cite{LU05}. The convolution product is replaced by a continuum of memory variables satisfying local differential equations \cite{MATIGNON10}. This continuum is then discretized using Gaussian quadrature formulae \cite{YUAN02,DIETHELM08,BIRK10}, resulting in the Biot-DA (diffusive approximation) model. In the example proposed in \cite{LU05}, $25$ memory variables are used, which is equivalent, in terms of memory requirement, to storing $25$ time steps. The idea of using memory variables to avoid convolution products is close to the strategy commonly used in viscoelasticity \cite{EMMERICH87}.
The concern for realism also leads us to consider anisotropic porous media. Transverse isotropy is commonly used in practice. It is often induced by Backus averaging, which replaces isotropic layers much thinner than the wavelength by a homogeneous transversely isotropic medium \cite{GELINSKY97}. To our knowledge, the earliest numerical work combining the low-frequency Biot model and transverse isotropy is based on an operator splitting in conjunction with a Fourier pseudospectral method \cite{CARCIONE96}. Recently, a Cartesian-grid finite-volume method has been developed \cite{LEVEQUE13}. One of the first works combining anisotropic media and the high-frequency range is proposed in \cite{HANYGA05}. However, the diffusive approximation proposed in the latter article has three limitations. Firstly, the quadrature formulae make the convergence towards the original fractional operator very slow. Secondly, at low frequencies, the Biot-DA model does not converge towards the Biot-LF model. Lastly, the number of memory variables required for a given accuracy is not specified.
The present work extends and improves our previous contributions to the modeling of poroelastic waves. In \cite{CHIAVASSA10}, we addressed 1D equations in the low-frequency range, introducing a splitting of the PDE. 2D generalizations for isotropic media required the implementation of space-time mesh refinement \cite{CHIAVASSA11,CHIAVASSA13}. Diffusive approximations of the fractional derivatives in the high-frequency range were introduced in \cite{BLANC_JCP13} and generalized to 2D in \cite{BLANC_JASA13}. Compared with \cite{BLANC_JASA13}, the originality of the present paper is threefold:
\begin{enumerate}
\item incorporation of anisotropy. The numerical scheme and the discretization of the interfaces need to be largely modified accordingly;
\item new procedure to determine the coefficients of the diffusive approximation. In \cite{BLANC_JCP13,BLANC_JASA13}, we used a classical least-squares optimization. It is much more accurate than the Gauss-Laguerre technique proposed in \cite{LU05}. However, some coefficients are negative, which prevents us from concluding about the well-posedness of the diffusive model. Here, we fix this problem by using optimization with a positivity constraint, based on Shor's algorithm. Moreover, the accuracy of this new method is greatly improved compared with the linear optimization;
\item theoretical analysis. A new result about the eigenvalues of the diffusion matrix is introduced and the energy analysis is extended to anisotropy.
\end{enumerate}
This article is organized as follows. The original Biot-JKD model is outlined in $\S$ \ref{sec:phys} and the diffusive representation of fractional derivatives is described. The energy decrease is proven, and a dispersion analysis is carried out. In $\S$ \ref{sec:DA}, an approximation of the diffusive model is presented, leading to the Biot-DA system. The properties of this system are also analyzed: energy, hyperbolicity and dispersion. The determination of the quadrature coefficients involved in the Biot-DA model is investigated in $\S$ \ref{sec:DA:coeff}. Gaussian quadrature formulae and optimization methods are successively proposed and compared, the latter being finally preferred. The numerical modeling of the Biot-DA system is addressed in $\S$ \ref{sec:num}, where the equations of evolution are split into two parts: the propagative part is discretized using a fourth-order finite-difference scheme, and the diffusive part is solved exactly. An immersed interface method is implemented to account for the jump conditions and for the geometry of the interfaces on a Cartesian grid when dealing with heterogeneous media. Numerous numerical experiments are presented in $\S$ \ref{sec:exp}, validating the method developed in this paper. In $\S$ \ref{sec:conclu}, a conclusion is drawn and some future lines of research are suggested.
\section{Physical modeling}\label{sec:phys}
\subsection{Biot model}\label{sec:phys:Biot}
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & \hspace{0.8cm}(b)\\
\includegraphics[scale=0.4]{isotropie_transverse.eps} &
\hspace{0.8cm}
\includegraphics[scale=0.7]{Interface2.eps}
\end{tabular}
\end{center}
\caption{medium under study. (a): the physical properties are symmetric about the axis $z$ that is normal to the plane $(x,y)$ of isotropy. (b): interface $\Gamma$ separating two poroelastic media $\Omega_0$ and $\Omega_1$. The normal and tangential vectors at a point $P$ along $\Gamma$ are denoted by ${\bf n}$ and ${\bf t}$, respectively.}
\label{fig:interface}
\end{figure}
We consider a transversely isotropic porous medium, consisting of a solid matrix saturated with a fluid that circulates freely through the pores \cite{BIOT56A,BOURBIE,CARCIONE07}. The subscripts $1$, $3$ represent the $x$, $z$ axes, where $z$ is the symmetry axis (figure \ref{fig:interface}). The perturbations propagate with a wavelength $\lambda$.
The Biot model involves 15 positive physical parameters: the density $\rho_f$, the dynamic viscosity $\eta$ and the bulk modulus $K_f$ of the fluid, the density $\rho_s$ and the bulk modulus $K_s$ of the grains, the porosity $0\leqslant\phi\leqslant1$, the tortuosities ${\cal T}_1\geqslant 1$, ${\cal T}_3\geqslant 1$, the absolute permeabilities at null frequency $\kappa_1$, $\kappa_3$, and the symmetric positive definite drained elastic matrix $\mbox{\boldmath$C$}$
\begin{equation}
\mbox{\boldmath$C$} = \left( \begin{array}{cccc}
c_{11} & c_{13} & 0 & 0 \\
c_{13} & c_{33} & 0 & 0 \\
0 & 0 & c_{55} & 0 \\
0 & 0 & 0 & \displaystyle \frac{c_{11} - c_{12}}{2}
\end{array}
\right).
\label{eq:drained_elastic_matrix}
\end{equation}
The linear Biot model is valid if the following hypotheses are satisfied \cite{BIOT62}:
\begin{itemize}
\item[${\cal H}_1$]: the wavelength $\lambda$ is large in comparison with the characteristic radius of the pores $r$;
\item[${\cal H}_2$]: the amplitudes of the waves in the solid and in the fluid are small;
\item[${\cal H}_3$]: the single fluid phase is continuous;
\item[${\cal H}_4$]: the solid matrix is purely elastic;
\item[${\cal H}_5$]: the thermo-mechanical effects are neglected, which is justified when the saturating fluid is a liquid.
\end{itemize}
In the validity domain of homogenization theory (${\cal H}_1$), two frequency ranges have to be distinguished. The frontier between the low-frequency (LF) range and the high-frequency (HF) range is reached when the viscous forces are comparable to the inertial effects. The transition frequencies are given by \cite{BIOT56A}
\begin{equation}
\displaystyle f_{ci} = \frac{\eta\,\phi}{2\,\pi\,{\cal T}_i\,\kappa_i\,\rho_f}=\frac{\omega_{ci}}{2\,\pi},\quad i=1,3.
\label{eq:fc}
\end{equation}
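As a quick numerical illustration of (\ref{eq:fc}), the sketch below evaluates a transition frequency. The parameter values are placeholders for a water-saturated rock, not those of the media studied later in this paper:

```python
import math

def transition_frequency(eta, phi, T, kappa, rho_f):
    """Transition frequency f_ci separating the LF and HF ranges, eq. (fc)."""
    return eta * phi / (2.0 * math.pi * T * kappa * rho_f)

# Placeholder values (illustrative only, not taken from the tables of this paper)
eta = 1.0e-3     # dynamic viscosity of water (Pa.s)
phi = 0.2        # porosity
T = 2.0          # tortuosity T_i
kappa = 1.0e-12  # absolute permeability kappa_i (m^2)
rho_f = 1000.0   # fluid density (kg/m^3)

f_c = transition_frequency(eta, phi, T, kappa, rho_f)
```

For these values, $f_c \approx 15.9$ kHz, so the HF regime is reached at ultrasonic frequencies.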
Denoting $\mbox{\boldmath$u_s$}$ and $\mbox{\boldmath$u_f$}$ the solid and fluid displacements, the unknowns in a velocity-stress formulation are the solid velocity $\mbox{\boldmath$v_s$} = \frac{\partial \,\mbox{\scriptsize\boldmath$u_s$}}{\partial\,t}$, the filtration velocity $\mbox{\boldmath$w$} = \frac{\partial\,\mbox{\scriptsize\boldmath${\cal W}$}}{\partial\,t} = \frac{\partial}{\partial\,t}\phi\,(\mbox{\boldmath$u_f$}-\mbox{\boldmath$u_s$})$, the elastic symmetric stress tensor $\underline{\mbox{\boldmath$\sigma$}}$ and the acoustic pressure $p$. Under the hypothesis of small perturbations (${\cal H}_2$), the symmetric strain tensor $\underline{\mbox{\boldmath$\varepsilon$}}$ is
\begin{equation}
\underline{\mbox{\boldmath$\varepsilon$}} = \frac{1}{2}\,(\nabla{\mbox{\boldmath$u_s$}} + \nabla{\mbox{\boldmath$u_s$}}^T).
\label{eq:strain_tensor}
\end{equation}
Using the Voigt notation, the stress tensor and the strain tensor are arranged into vectors $\mbox{\boldmath$\sigma$}$ and $\mbox{\boldmath$\varepsilon$}$
\begin{equation}
\displaystyle \mbox{\boldmath$\sigma$}=(\sigma_{11}\,,\,\sigma_{33}\,,\,\sigma_{13})^T, \quad \displaystyle \mbox{\boldmath$\varepsilon$}=(\varepsilon_{11}\,,\,\varepsilon_{33}\,,\,2\,\varepsilon_{13})^T.
\label{eq:notation_voigt}
\end{equation}
Setting
\begin{subnumcases}{\label{eq:def_Cu}}
\displaystyle \xi = -\nabla.\mbox{\boldmath${\cal W}$},\quad\mbox{\boldmath$C^u$} = \mbox{\boldmath$C$} + m\,\mbox{\boldmath$\beta$}\,\mbox{\boldmath$\beta$}^T,\label{eq:def_Cu_a}\\
\displaystyle \mbox{\boldmath$\beta$} = (\beta_1\,,\,\beta_1\,,\,\beta_3)^T,\quad \beta_1 = 1 - \frac{c_{11} + c_{12} + c_{13}}{3\,K_s},\quad\beta_3 = 1 - \frac{2\,c_{13} + c_{33}}{3\,K_s},\label{eq:def_Cu_b}\\
\displaystyle K=K_s\,(1 + \phi\,(K_s/K_f - 1)),\quad m=\frac{K_s^2}{K-(2\,c_{11} + c_{33} + 2\,c_{12} + 4\,c_{13})/9},\label{eq:def_Cu_c}
\end{subnumcases}
where $\mbox{\boldmath$C^u$}$ is the undrained elastic matrix and $\xi$ the rate of fluid flow, the poroelastic linear constitutive laws are \cite{CARCIONE07}
\begin{equation}
\displaystyle \mbox{\boldmath$\sigma$} = \mbox{\boldmath$C^u$}\,\mbox{\boldmath$\varepsilon$} - m\,\mbox{\boldmath$\beta$}\,\xi, \quad \displaystyle p = m\,\left( \xi - \mbox{\boldmath$\beta$}^T\,\mbox{\boldmath$\varepsilon$} \right).
\label{eq:biot_comportement}
\end{equation}
Using (\ref{eq:def_Cu_a}) and (\ref{eq:def_Cu_b}), we obtain equivalently
\begin{equation}
\mbox{\boldmath$\sigma$} = \mbox{\boldmath$C$}\,\mbox{\boldmath$\varepsilon$} - \mbox{\boldmath$\beta$}\,p, \quad p = m\,\left( \xi - \mbox{\boldmath$\beta$}^T\,\mbox{\boldmath$\varepsilon$} \right).
\label{eq:biot_comportement_bis}
\end{equation}
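The equivalence between (\ref{eq:biot_comportement}) and (\ref{eq:biot_comportement_bis}) follows directly from $\mbox{\boldmath$C^u$} = \mbox{\boldmath$C$} + m\,\mbox{\boldmath$\beta$}\,\mbox{\boldmath$\beta$}^T$. It can also be checked numerically, as in the sketch below, where all values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary data in Voigt notation: a symmetric positive definite drained
# matrix C, effective stress coefficients beta, modulus m, strain eps, flow xi.
A = rng.standard_normal((3, 3))
C = A @ A.T + 3.0 * np.eye(3)
beta = rng.standard_normal(3)
m = 2.5
eps = rng.standard_normal(3)
xi = 0.7

Cu = C + m * np.outer(beta, beta)            # undrained matrix C^u, eq. (def_Cu_a)
p = m * (xi - beta @ eps)                    # acoustic pressure

sigma_undrained = Cu @ eps - m * beta * xi   # eq. (biot_comportement)
sigma_drained = C @ eps - beta * p           # eq. (biot_comportement_bis)
```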
The symmetry of $\underline{\mbox{\boldmath$\sigma$}}$ implies compatibility conditions between spatial derivatives of the stresses and the pressure, leading to the Beltrami-Michell equation \cite{COUSSY95,RICE76}
\begin{equation}
\begin{array}{l}
\displaystyle \frac{\partial^2\sigma_{13}}{\partial\,x\,\partial\,z} = \Theta_0\,\frac{\partial^2\sigma_{11}}{\partial\,x^2} + \Theta_1\,\frac{\partial^2\sigma_{33}}{\partial\,x^2} + \Theta_2\,\frac{\partial^2p}{\partial\,x^2} + \Theta_3\,\frac{\partial^2\sigma_{11}}{\partial\,z^2} + \Theta_0\,\frac{\partial^2\sigma_{33}}{\partial\,z^2} + \Theta_4\,\frac{\partial^2p}{\partial\,z^2},\\
[15pt]
\displaystyle \Theta_0 = -c_{55}\,\frac{c_{13}}{c_{11}\,c_{33} - c_{13}^2},\quad\Theta_1 = -\frac{c_{11}}{c_{13}}\,\Theta_0,\quad\Theta_2 = \beta_1\,\Theta_0 + \beta_3\,\Theta_1,\\
[15pt]
\displaystyle \Theta_3 = -\frac{c_{33}}{c_{13}}\,\Theta_0,\quad\Theta_4 = \beta_3\,\Theta_0 + \beta_1\,\Theta_3.
\end{array}
\label{eq:beltrami_ani}
\end{equation}
If the medium is isotropic and in the elastic limit ($\beta_1 = \beta_3 = 0$), we recover the usual equation of Barr\'e de Saint-Venant.
Introducing the densities
\begin{equation}
\rho = \phi\,\rho_f + (1-\phi)\,\rho_s,\quad \rho_{wi} = \frac{{\cal T}_i}{\phi}\,\rho_f,\quad i=1,3,
\end{equation}
the conservation of momentum yields
\begin{subnumcases}{\label{eq:biot_dynamique}}
\displaystyle \rho\,\frac{\partial\,\mbox{\boldmath$v_s$}}{\partial\,t} + \rho_f\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} = \nabla\,.\,\underline{\mbox{\boldmath$\sigma$}},\label{eq:biot_dynamique_a}\\
[3pt]
\displaystyle \rho_f\,\frac{\partial\,\mbox{\boldmath$v_s$}}{\partial\,t} + \mathrm{diag}\left(\rho_{wi}\right)\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} + \mathrm{diag}\left(\frac{\eta}{\kappa_i}\,F_i(t)\right)*\mbox{\boldmath$w$} = -\nabla\,p,\label{eq:biot_dynamique_b}
\end{subnumcases}
where $\mathrm{diag}\left(d_i\right)$ denotes the $2\times 2$ diagonal matrix $({d_1 \atop 0} {0 \atop d_3})$, $*$ denotes the time convolution product and $F_i(t)$ are viscous operators. In LF, the flow in the pores is of Poiseuille type, and the dissipation efforts in (\ref{eq:biot_dynamique_b}) are given by
\begin{equation}
F_i(t) \equiv F_i^{LF}(t) = \delta(t) \Longleftrightarrow F_i^{LF}(t)*w_i(x,z,t) = w_i(x,z,t),\quad i=1,3,
\label{eq:F_lf}
\end{equation}
where $\delta$ is the Dirac distribution, which amounts to Darcy's law.
\subsection{High frequency dissipation: the JKD model}\label{sec:phys:JKD}
In HF, a Prandtl boundary layer occurs at the surface of the pores, where the effects of viscosity are significant. Its width is inversely proportional to the square root of the frequency. Biot (1956) presented an expression of the dissipation process for particular pore geometries \cite{BIOT56B}. A general expression of the viscous operator for random networks of pores with constant radii has been proposed by Johnson, Koplik and Dashen (1987) \cite{JKD87}. This function is the simplest one fitting the LF and HF limits and leading to a causal model. The only additional parameters are the viscous characteristic lengths $\Lambda_i$. We take \cite{MASSON10}
\begin{equation}
P_i = \frac{4\,{\cal T}_i\,\kappa_i}{\phi\,\Lambda_i^2},\quad\Omega_i = \frac{\omega_{ci}}{P_i} = \frac{\eta\,\phi^2\,\Lambda_i^2}{4\,{\cal T}_i^2\,\kappa_i^2\,\rho_f},\quad i=1,3,
\label{eq:coef_hf}
\end{equation}
where $P_i$ is the Pride number. The Pride number describes the geometry of the pores: $P_i=1/2$ corresponds to a set of non-intersecting canted tubes, whereas $P_i=1/3$ describes a set of canted slabs of fluids \cite{CARCIONE07}. Based on the Fourier transform in time, $\widehat{F}_i(\omega) = {\cal F}\left( F_i(t) \right) = \int _{\mathbb R} F_i(t)e^{-j\omega t}\,dt$, the viscous operators given by the JKD model are \cite{JKD87}
\begin{equation}
\widehat{F}_i^{JKD}(\omega) \displaystyle = \left( 1+j\,\omega\,\frac{4\,{\cal T}_i^2\,\kappa_i^2\,\rho_f}{\eta\,\Lambda_i^2\,\phi^2}\right)^{1/2} = \left( 1+j\,P_i\,\frac{\omega}{\omega_{ci}}\right)^{1/2}= \frac{1}{\sqrt{\Omega_i}}\,(\Omega_i +j\,\omega)^{1/2}.\\
\label{eq:F_omega}
\end{equation}
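The two limits of $\widehat{F}_i^{JKD}$ are easily checked numerically: $\widehat{F}_i^{JKD}(0) = 1$ recovers the LF Poiseuille regime (\ref{eq:F_lf}), while $|\widehat{F}_i^{JKD}(\omega)| \sim \sqrt{\omega/\Omega_i}$ as $\omega \rightarrow \infty$. A sketch, with an arbitrary value of $\Omega_i$:

```python
import cmath, math

def F_jkd(omega, Omega):
    """JKD viscous operator in the frequency domain, eq. (F_omega)."""
    return cmath.sqrt(Omega + 1j * omega) / math.sqrt(Omega)

Omega = 1.0e5                 # arbitrary value of Omega_i (rad/s)
lf = F_jkd(0.0, Omega)        # low-frequency limit: exactly 1
hf = F_jkd(1.0e10, Omega)     # far above Omega_i: |F| ~ sqrt(omega/Omega_i)
```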
Therefore, the terms $F_i(t)*w_i(x,z,t)$ involved in (\ref{eq:biot_dynamique_b}) are
\begin{equation}
\begin{array}{ll}
F_i^{JKD}(t)*w_i(x,z,t) & \displaystyle = {\cal F}^{-1}\left( \frac{1}{\sqrt{\Omega_i}}\,(\Omega_i + j\,\omega)^{1/2}\widehat{w}_i(x,z,\omega)\right),\\
[13pt]
& \displaystyle = \frac{1}{\sqrt{\Omega_i}}\,(D+\Omega_i)^{1/2}w_i(x,z,t).
\end{array}
\label{eq:F_t}
\end{equation}
In the last relation of (\ref{eq:F_t}), $(D+\Omega_i)^{1/2}$ is an operator. $D^{1/2}$ is a fractional derivative in time of order $1/2$, generalizing the usual derivative characterized by $\frac{\partial\,w_i}{\partial\,t} = {\cal F}^{-1}\left( j\,\omega\,\widehat{w}_i\right)$. The notation $\left(D+\Omega_i\right) ^{1/2}$ accounts for the shift $\Omega_i$ in (\ref{eq:F_t}).
\subsection{The Biot-JKD equations of evolution}\label{sec:phys:EDP}
The system (\ref{eq:biot_dynamique}) is rearranged by separating $\frac{\partial\,\mbox{\scriptsize\boldmath$v_s$}}{\partial\,t}$ and $\frac{\partial\,\mbox{\scriptsize\boldmath$w$}}{\partial\,t}$ and using the definitions of $\mbox{\boldmath$\varepsilon$}$ and $\xi$. Taking
\begin{equation}
\gamma_i = \frac{\eta}{\kappa_i}\,\frac{\rho}{\chi_i}\,\frac{1}{\sqrt{\Omega_i}},\quad i=1,3,
\end{equation}
one obtains the following system of evolution equations
\begin{subnumcases}{\label{eq:2D_ani_syst_hyp_jkd_scalar}}
\displaystyle \frac{\partial\,v_{s1}}{\partial\,t} - \frac{\rho_{w1}}{\chi_1}\,\left( \frac{\partial\,\sigma_{11}}{\partial\,x}+\frac{\partial\,\sigma_{13}}{\partial\,z}\right) - \frac{\rho_f}{\chi_1}\,\frac{\partial\,p}{\partial\,x} = \frac{\rho_f}{\rho}\,\gamma_1\,(D+\Omega_1)^{1/2}\,w_1 + G_{v_{s1}},\label{eq:2D_ani_syst_hyp_jkd_scalar_a}\\
[5pt]
\displaystyle \frac{\partial\,v_{s3}}{\partial\,t} - \frac{\rho_{w3}}{\chi_3}\,\left( \frac{\partial\,\sigma_{13}}{\partial\,x}+\frac{\partial\,\sigma_{33}}{\partial\,z}\right) - \frac{\rho_f}{\chi_3}\,\frac{\partial\,p}{\partial\,z} = \frac{\rho_f}{\rho}\,\gamma_3\,(D+\Omega_3)^{1/2}\,w_3 + G_{v_{s3}},\label{eq:2D_ani_syst_hyp_jkd_scalar_b}\\
[5pt]
\displaystyle \frac{\partial\,w_1}{\partial\,t} + \frac{\rho_f}{\chi_1}\,\left( \frac{\partial\,\sigma_{11}}{\partial\,x}+\frac{\partial\,\sigma_{13}}{\partial\,z}\right) + \frac{\rho}{\chi_1}\,\frac{\partial\,p}{\partial\,x} = -\gamma_1\,(D+\Omega_1)^{1/2}\,w_1 + G_{w_1},\label{eq:2D_ani_syst_hyp_jkd_scalar_c}\\
[5pt]
\displaystyle \frac{\partial\,w_3}{\partial\,t} + \frac{\rho_f}{\chi_3}\,\left( \frac{\partial\,\sigma_{13}}{\partial\,x}+\frac{\partial\,\sigma_{33}}{\partial\,z}\right) + \frac{\rho}{\chi_3}\,\frac{\partial\,p}{\partial\,z} = -\gamma_3\,(D+\Omega_3)^{1/2}\,w_3 + G_{w_3},\label{eq:2D_ani_syst_hyp_jkd_scalar_d}\\
[5pt]
\displaystyle \frac{\partial\,\sigma_{11}}{\partial\,t} - c_{11}^u\,\frac{\partial\,v_{s1}}{\partial\,x} -c_{13}^u\,\frac{\partial\,v_{s3}}{\partial\,z} - m\,\beta_1\,\left( \frac{\partial\,w_1}{\partial\,x} + \frac{\partial\,w_3}{\partial\,z}\right) = G_{\sigma_{11}},\label{eq:2D_ani_syst_hyp_jkd_scalar_e}\\
[5pt]
\displaystyle \frac{\partial\,\sigma_{13}}{\partial\,t}-c_{55}^u\,\left( \frac{\partial\,v_{s3}}{\partial\,x}+\frac{\partial\,v_{s1}}{\partial\,z}\right) = G_{\sigma_{13}},\label{eq:2D_ani_syst_hyp_jkd_scalar_f}\\
[5pt]
\displaystyle \frac{\partial\,\sigma_{33}}{\partial\,t}-c_{13}^u\,\frac{\partial\,v_{s1}}{\partial\,x}- c_{33}^u\,\frac{\partial\,v_{s3}}{\partial\,z} - m\,\beta_3\,\left( \frac{\partial\,w_1}{\partial\,x} + \frac{\partial\,w_3}{\partial\,z}\right) = G_{\sigma_{33}},\label{eq:2D_ani_syst_hyp_jkd_scalar_g}\\
[5pt]
\displaystyle \frac{\partial\,p}{\partial\,t} + m\, \left( \beta_1\,\frac{\partial\,v_{s1}}{\partial\,x}+\beta_3\,\frac{\partial\,v_{s3}}{\partial\,z} +\frac{\partial\,w_1}{\partial\,x} +\frac{\partial\,w_3}{\partial\,z} \right) = G_p.\label{eq:2D_ani_syst_hyp_jkd_scalar_h}
\end{subnumcases}
The source terms $G_{v_{s1}}$, $G_{v_{s3}}$, $G_{w_1}$, $G_{w_3}$, $G_{\sigma_{11}}$, $G_{\sigma_{13}}$, $G_{\sigma_{33}}$ and $G_p$ have been introduced to model the forcing.
\subsection{The diffusive representation}\label{sec:phys:DR}
The shifted fractional derivatives in (\ref{eq:F_t}) can be written \cite{HANYGA01}
\begin{equation}
(D+\Omega_i )^{1/2}w_i(x,z,t) = \int _0 ^t \frac{e^{-\Omega_i (t-\tau)}}{\sqrt{\pi\,(t-\tau)}}\,\left(\frac{\partial \,w_i}{\partial \,t}(x,z,\tau)+\Omega_i\,w_i(x,z,\tau)\,\right)d\tau,\hspace{0.5cm}i=1,3.
\label{eq:Dfrac}
\end{equation}
The operators $(D+\Omega_i)^{1/2}$ are not local in time and involve the entire time history of $\mbox{\boldmath$w$}$. Based on Euler's Gamma function, the diffusive representation of the totally monotone function $\frac{1}{\sqrt{\pi\,t}}$ is \cite{MATIGNON10}
\begin{equation}
\displaystyle \frac{1}{\sqrt{\pi\,t}} = \frac{1}{\pi}\,\int _0 ^\infty\,\frac{1}{\sqrt{\theta}}\,e^{-\theta t}d\theta .
\label{eq:fonction_diffu}
\end{equation}
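The identity (\ref{eq:fonction_diffu}) can be verified numerically. In the sketch below, the substitution $\theta = u^2$ removes the integrable singularity at $\theta = 0$; the truncation bound $U$ is chosen for a time $t$ of order one:

```python
import math

def kernel_via_diffusive(t, U=50.0, n=20000):
    """(1/pi) int_0^inf e^(-theta t)/sqrt(theta) dtheta, eq. (fonction_diffu),
    computed with theta = u^2 and the composite trapezoid rule on [0, U]."""
    du = U / n
    s = 0.5 * (1.0 + math.exp(-U * U * t))       # end points of the trapezoid
    for k in range(1, n):
        u = k * du
        s += math.exp(-u * u * t)
    return (2.0 / math.pi) * s * du

t = 2.0
approx = kernel_via_diffusive(t)
exact = 1.0 / math.sqrt(math.pi * t)             # left-hand side 1/sqrt(pi t)
```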
Substituting (\ref{eq:fonction_diffu}) into (\ref{eq:Dfrac}) gives
\begin{equation}
(D+\Omega_i)^{1/2}w_i(x,z,t) = \frac{1}{\pi}\, \int _0 ^{\infty}\frac{1}{\sqrt{\theta}}\,\psi_{i}(x,z,\theta,t)\,d\theta,
\label{eq:derivee_frac}
\end{equation}
where the memory variables are defined as
\begin{equation}
\psi_{i}(x,z,\theta,t)=\int_0^t e^{-(\theta + \Omega_i)(t-\tau)}\,\left(\frac{\partial \,w_i}{\partial \,t}(x,z,\tau)+\Omega_i\,w_i(x,z,\tau)\,\right)\,d\tau.
\label{eq:variable_diffu}
\end{equation}
For the sake of clarity, the dependence on $\Omega_i$ and $w_i$ is omitted in $\psi_{i}$. From (\ref{eq:variable_diffu}), it follows that the two memory variables $\psi_{i}$ satisfy the ordinary differential equation
\begin{subnumcases}{\label{eq:EDO_psi}}
\displaystyle \frac{\partial\,\psi_{i}}{\partial\,t} = -(\theta + \Omega_i)\,\psi_{i} + \frac{\partial \,w_i}{\partial \,t}+\Omega_i\,w_i,\label{eq:EDO_psi_a} \\
[5pt]
\displaystyle \psi_{i}(x,z,\theta,0) = 0.\label{eq:EDO_psi_b}
\end{subnumcases}
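The equivalence between the convolution (\ref{eq:variable_diffu}) and the ordinary differential equation (\ref{eq:EDO_psi}) can be illustrated numerically. The sketch below integrates (\ref{eq:EDO_psi_a}) with a Runge-Kutta 4 scheme for an arbitrary history $w_i(t) = \sin t$ and arbitrary values of $\theta$ and $\Omega_i$, and compares the result with a direct quadrature of (\ref{eq:variable_diffu}):

```python
import math

theta, Omega = 0.8, 1.5                 # arbitrary abscissa and shift
a = theta + Omega                       # decay rate theta + Omega_i
w, dw = math.sin, math.cos              # chosen filtration velocity w_i(t)

def rhs(t, psi):
    """Right-hand side of the memory-variable ODE, eq. (EDO_psi_a)."""
    return -a * psi + dw(t) + Omega * w(t)

T, n = 3.0, 3000
dt = T / n

# RK4 integration of the ODE from psi(0) = 0
psi, t = 0.0, 0.0
for _ in range(n):
    k1 = rhs(t, psi)
    k2 = rhs(t + 0.5 * dt, psi + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, psi + 0.5 * dt * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt

# Direct trapezoid evaluation of the convolution (variable_diffu) at time T
def g(tau):
    return math.exp(-a * (T - tau)) * (dw(tau) + Omega * w(tau))

conv = (0.5 * (g(0.0) + g(T)) + sum(g(k * dt) for k in range(1, n))) * dt
```

Both evaluations agree to quadrature accuracy, as expected.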
The diffusive representation therefore transforms a non-local problem (\ref{eq:Dfrac}) into a continuum of local problems (\ref{eq:derivee_frac}). It should be emphasized at this point that no approximation has been made so far. The computational advantages of the diffusive representation will be seen in $\S$ \ref{sec:DA} and \ref{sec:exp}, where the discretization of (\ref{eq:derivee_frac}) and (\ref{eq:EDO_psi_a}) will yield a numerically tractable formulation.
\subsection{Energy of Biot-JKD}\label{sec:phys:NRJ}
Now, we express the energy of the Biot-JKD model (\ref{eq:2D_ani_syst_hyp_jkd_scalar}).
\begin{proposition}[\bf Decrease of the energy]
Let us consider the Biot-JKD model (\ref{eq:2D_ani_syst_hyp_jkd_scalar}) without forcing, and let us denote
\begin{equation}
E=E_1+E_2+E_3,
\label{eq:E1E2E3_JKD}
\end{equation}
with
\begin{equation}
\begin{array}{l}
\displaystyle E_1 = \frac{1}{2}\,\int_{\mathbb{R}^2}\left(\rho\,\mbox{\boldmath$v_s$}^T\,\mbox{\boldmath$v_s$} + 2\,\rho_f\,\mbox{\boldmath$v_s$}^T\,\mbox{\boldmath$w$} + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\rho_{wi}\right)\,\mbox{\boldmath$w$}\right)\,dx\,dz,\\
[15pt]
\displaystyle E_2 = \frac{1}{2}\,\int_{\mathbb{R}^2}\left(\left( \mbox{\boldmath$\sigma$} + p\,\mbox{\boldmath$\beta$} \right)^T\,\mbox{\boldmath$C$}^{-1}\,\left( \mbox{\boldmath$\sigma$} + p\,\mbox{\boldmath$\beta$} \right) + \frac{1}{m}\,p^2\right)\,dx\,dz,\\
[15pt]
\displaystyle E_3 = \frac{1}{2}\,\int_{\mathbb{R}^2}\frac{\eta}{\pi}\,\int_0^{\infty}(\mbox{\boldmath$w$}-\mbox{\boldmath$\psi$})^T\,\mathrm{diag}\left(\frac{1}{\kappa_i\,\sqrt{\Omega_i\,\theta}\,(\theta+2\,\Omega_i)}\right)\,(\mbox{\boldmath$w$}-\mbox{\boldmath$\psi$})\,d\theta\,dx\,dz.
\end{array}
\label{eq:energieJKD}
\end{equation}
Then, $E$ is an energy which satisfies
\begin{equation}
\begin{array}{l}
\displaystyle \frac{\textstyle d\,E}{\textstyle d\,t} = -\int_{\mathbb{R}^2}\frac{\eta}{\pi}\,\int_0^{\infty} \displaystyle \left\lbrace \mbox{\boldmath$\psi$}^T\,\mathrm{diag}\left(\frac{\theta+\Omega_i}{\kappa_i\,\sqrt{\Omega_i\,\theta}\,(\theta+2\,\Omega_i)}\right)\,\mbox{\boldmath$\psi$}\right.\\
[15pt]
\hspace{1.1cm}
\left. \displaystyle + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\frac{\Omega_i}{\kappa_i\,\sqrt{\Omega_i\,\theta}\,(\theta+2\,\Omega_i)}\right)\,\mbox{\boldmath$w$} \right\rbrace \,d\theta\,dx\,dz \;\leqslant\; 0 .
\end{array}
\label{eq:dEdtJKD}
\end{equation}
\label{prop:nrjJKD}
\end{proposition}
Proposition \ref{prop:nrjJKD} is proven in \ref{annexe:proof_nrj}. It calls for the following comments:
\begin{itemize}
\item the Biot-JKD model is stable;
\item when the viscosity of the saturating fluid is neglected ($\eta=0$), the energy of the system is conserved;
\item the terms $E_1$ and $E_2$ in (\ref{eq:energieJKD}) have a clear physical significance: $E_1$ is the kinetic energy, and $E_2$ is the strain energy;
\item the energy analysis is valid for continuously variable parameters.
\end{itemize}
\subsection{Dispersion analysis}\label{sec:phys:dispersion}
In this section, we derive the dispersion relation of the waves which propagate in a poroelastic medium. This relation describes the frequency dependence of phase velocities and attenuations of waves. For this purpose, we search for a general plane wave solution of (\ref{eq:2D_ani_syst_hyp_jkd_scalar})
\begin{equation}
\left\lbrace
\begin{array}{l}
\mbox{\boldmath$V$}=(v_1\,,\,v_3\,,\,w_1\,,\,w_3)^T = \mbox{\boldmath$V_0$}\,e^{j(\omega t-\mbox{\scriptsize\boldmath$k$}.\mbox{\scriptsize\boldmath$r$})},\\
[5pt]
\mbox{\boldmath$T$}=(\sigma_{11}\,,\,\sigma_{13}\,,\,\sigma_{33}\,,\,-p)^T = \mbox{\boldmath$T_0$}\,e^{j(\omega t-\mbox{\scriptsize\boldmath$k$}.\mbox{\scriptsize\boldmath$r$})},
\end{array}
\right.
\label{eq:plane_wave}
\end{equation}
where $\mbox{\boldmath$k$} = k\,(\cos(\varphi),\;\sin(\varphi))^T$ is the wavevector, $k$ is the wavenumber, $\mbox{\boldmath$V_0$}$ and $\mbox{\boldmath$T_0$}$ are the polarizations, $\mbox{\boldmath$r$} = (x,\;z)^T$ is the position, $\omega = 2\,\pi\,f$ is the angular frequency and $f$ is the frequency. By substituting equation (\ref{eq:plane_wave}) into equations (\ref{eq:2D_ani_syst_hyp_jkd_scalar_e})-(\ref{eq:2D_ani_syst_hyp_jkd_scalar_h}), we obtain the $4\times 4$ linear system:
\begin{equation}
\begin{array}{ccc}
\omega\,\mbox{\boldmath$T$} = -k & \underbrace{\left( \begin{array}{cccc}
c_{11}^u\,c_{\varphi} & c_{13}^u\,s_{\varphi} & \beta_1\,m\,c_{\varphi} & \beta_1\,m\,s_{\varphi}\\
[5pt]
c_{55}^u\,s_{\varphi} & c_{55}^u\,c_{\varphi} & 0 & 0\\
[5pt]
c_{13}^u\,c_{\varphi} & c_{33}^u\,s_{\varphi} & \beta_3\,m\,c_{\varphi} & \beta_3\,m\,s_{\varphi}\\
[5pt]
\beta_1\,m\,c_{\varphi} & \beta_3\,m\,s_{\varphi} & m\,c_{\varphi} & m\,s_{\varphi}
\end{array}\right)} & \mbox{\boldmath$V$},\\
& \mbox{\boldmath${\cal C}$} &
\end{array}
\label{eq:dispersion_matC}
\end{equation}
where $c_{\varphi} = \cos(\varphi)$ and $s_{\varphi} = \sin(\varphi)$. Then, substituting (\ref{eq:plane_wave}) into (\ref{eq:2D_ani_syst_hyp_jkd_scalar_a})-(\ref{eq:2D_ani_syst_hyp_jkd_scalar_d}) gives another $4\times 4$ linear system:
\begin{equation}
\begin{array}{ccccc}
-k & \underbrace{\left( \begin{array}{cccc}
c_{\varphi} & s_{\varphi} & 0 & 0\\
[5pt]
0 & c_{\varphi} & s_{\varphi} & 0\\
[5pt]
0 & 0 & 0 & c_{\varphi}\\
[5pt]
0 & 0 & 0 & s_{\varphi}
\end{array}\right) } & \mbox{\boldmath$T$} = \omega & \underbrace{\left( \begin{array}{cccc}
\rho & 0 & \rho_f & 0\\
[5pt]
0 & \rho & 0 & \rho_f\\
[5pt]
\rho_f & 0 & \displaystyle \frac{\widehat{Y}_1^{JKD}(\omega)}{j\,\omega} & 0\\
[5pt]
0 & \rho_f & 0 & \displaystyle \frac{\widehat{Y}_3^{JKD}(\omega)}{j\,\omega}
\end{array}\right) } & \mbox{\boldmath$V$},\\
& \mbox{\boldmath${\cal L}$} & & \mbox{\boldmath$\Gamma$} &
\end{array}
\label{eq:dispersion_matGammaL}
\end{equation}
where $\widehat{Y}_1^{JKD}$ and $\widehat{Y}_3^{JKD}$ are the viscodynamic operators \cite{NORRIS86}:
\begin{equation}
\widehat{Y}_i^{JKD} = j\,\omega\,\rho_{wi} + \frac{\eta}{\kappa_i}\,\widehat{F}_i^{JKD}(\omega),\quad i=1,3.
\label{eq:operateur_visco_jkd}
\end{equation}
Since the matrix $\mbox{\boldmath$\Gamma$}$ is invertible, the equations (\ref{eq:dispersion_matC}) and (\ref{eq:dispersion_matGammaL}) lead to the eigenproblem
\begin{equation}
\mbox{\boldmath$\Gamma$}^{-1}\,\mbox{\boldmath${\cal L}$}\,\mbox{\boldmath${\cal C}$}\,\mbox{\boldmath$V$} = \left( \frac{\omega}{k} \right)^2\,\mbox{\boldmath$V$}.
\label{eq:syst_dispersion_gen}
\end{equation}
The equation (\ref{eq:syst_dispersion_gen}) is solved numerically, leading to two quasi-compressional waves denoted $qP_f$ (fast) and $qP_s$ (slow), and to one quasi-shear wave denoted $qS$ \cite{CARCIONE96}. The wavenumbers thus obtained depend on the frequency and on the angle $\varphi$. One of the eigenvalues is zero with multiplicity two, and the non-zero eigenvalues correspond to the wave modes $\pm k_{pf}(\omega,\varphi)$, $\pm k_{ps}(\omega,\varphi)$ and $\pm k_{s}(\omega,\varphi)$. Therefore, three waves propagate symmetrically along the directions $\cos(\varphi)\,x + \sin(\varphi)\,z$ and $-\cos(\varphi)\,x - \sin(\varphi)\,z$.
The wavenumbers give the phase velocities $c_{pf}(\omega,\varphi) = \omega/\Re\mbox{e}(k_{pf})$, $c_{ps}(\omega,\varphi) = \omega/\Re\mbox{e}(k_{ps})$, and $c_{s}(\omega,\varphi) = \omega/\Re\mbox{e}(k_{s})$, with $0<c_{ps}<c_{pf}$ and $0<c_{s}$. The attenuations $\alpha_{pf}(\omega,\varphi) = -\Im\mbox{m}(k_{pf})$, $\alpha_{ps}(\omega,\varphi) = -\Im\mbox{m}(k_{ps})$ and $\alpha_{s}(\omega,\varphi) = -\Im\mbox{m}(k_{s})$ are also deduced. Both the phase velocities and the attenuations of Biot-LF and Biot-JKD are strictly increasing functions of the frequency. The high-frequency limits ($\omega \rightarrow \infty$ in (\ref{eq:syst_dispersion_gen})) of phase velocities $c_{pf}^{\infty}(\varphi)$, $c_{ps}^{\infty}(\varphi)$ and $c_s^{\infty}(\varphi)$ are obtained by diagonalizing the left-hand side of (\ref{eq:2D_ani_syst_hyp_jkd_scalar}).
Various authors have illustrated the effect of the JKD correction on the phase velocity and on the attenuation \cite{MASSON06}. In figure \ref{fig:dispersion_bf_freq}, the physical parameters are those of medium $\Omega_0$ (cf table \ref{table:para_phy_ani}), where the transition frequencies are $f_{c1} = 25.5$ kHz, $f_{c3} = 85$ kHz. The dispersion curves are shown in terms of the frequency at $\varphi = 0$ rad. The high-frequency limits of the phase velocities of the quasi-compressional waves are $c_{pf}^{\infty}(0) = 5244$ m/s and $c_{ps}^{\infty}(0) = 975$ m/s, which justifies the denominations ``fast'' and ``slow''.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
phase velocity of the $P_f$ wave & attenuation of the $P_f$ wave\\
\includegraphics[scale=0.34]{vitesse_phase_pf.eps} &
\includegraphics[scale=0.34]{attenuation_pf.eps}\\
[10pt]
phase velocity of the $S$ wave & attenuation of the $S$ wave\\
\includegraphics[scale=0.34]{vitesse_phase_s.eps} &
\includegraphics[scale=0.34]{attenuation_s.eps}\\
[10pt]
phase velocity of the $P_s$ wave & attenuation of the $P_s$ wave\\
\includegraphics[scale=0.34]{vitesse_phase_ps.eps} &
\includegraphics[scale=0.34]{attenuation_ps.eps}
\end{tabular}
\end{center}
\caption{dispersion curves in terms of the frequency. Comparison between the Biot-LF and Biot-JKD models at $\varphi = 0$ rad. The vertical dotted line denotes the critical frequency separating the low-frequency and high-frequency regimes. The horizontal dotted lines in the left column denote the maximal phase velocity at infinite frequency.}
\label{fig:dispersion_bf_freq}
\end{figure}
Figure \ref{fig:dispersion_bf_freq} calls for the following comments \cite{BOURBIE}:
\begin{itemize}
\item when $f<f_{ci}$, the Biot-JKD and Biot-LF dispersion curves are very similar as might be expected, since $\widehat{F}^{JKD}_i(0) = \widehat{F}^{LF}_i(0) = 1$;
\item the frequency evolution of the phase velocity and of the attenuation is radically different for the three waves, whatever the chosen model (LF or JKD): the effect of viscous losses is negligible on the fast wave, small on the shear wave, whereas it is very important on the slow wave;
\item when $f\ll f_{ci}$, the slow compressional wave is almost static \cite{CHANDLER81,RICE76}. When $f>f_{ci}$, the slow wave propagates but is greatly attenuated.
\end{itemize}
Taking
\begin{equation}
\mbox{\boldmath$U$}_1 = \left( \begin{array}{cccc}
1 & 0 & 0 & 0\\
[5pt]
0 & 0 & 1 & 0\\
[5pt]
0 & 0 & 0 & 1\\
[5pt]
0 & 0 & 0 & 0
\end{array}\right) ,\quad \mbox{\boldmath$U$}_3 = \left( \begin{array}{cccc}
0 & 0 & 1 & 0\\
[5pt]
0 & 1 & 0 & 0\\
[5pt]
0 & 0 & 0 & 0\\
[5pt]
0 & 0 & 0 & 1
\end{array}\right) ,
\label{eq:matrix_energy_velocity}
\end{equation}
the energy velocity vector $\mbox{\boldmath$V$}_e$ is \cite{CARCIONE96,CARCIONE93}:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle \mbox{\boldmath$V$}_e = \frac{\left\langle \mbox{\boldmath$P$} \right\rangle}{\left\langle E_s + E_k \right\rangle} = \frac{\left\langle \mbox{\boldmath$P$} \right\rangle}{\left\langle E \right\rangle},\\
[10pt]
\displaystyle \left\langle \mbox{\boldmath$P$} \right\rangle = -\frac{1}{2}\,\Re\mbox{e}\left(
\left(
\overrightarrow{e_x}\,(\mbox{\boldmath$U$}_1.\mbox{\boldmath$T$})^T + \overrightarrow{e_z}\,(\mbox{\boldmath$U$}_3.\mbox{\boldmath$T$})^T
\right).\overline{\mbox{\boldmath$V$}}
\right),\\
[10pt]
\displaystyle \left\langle E \right\rangle = \frac{1}{4}\,\Re\mbox{e}\left( \left( 1 + \frac{(\omega/k)^2}{\left| \omega/k \right| ^2} \right) \mbox{\boldmath$V$}^T\,\mbox{\boldmath$\Gamma$}\,\overline{\mbox{\boldmath$V$}} \right) ,
\end{array}
\right.
\label{eq:energy_velocity}
\end{equation}
where $\overline{\mbox{\boldmath$V$}}$ is the complex conjugate of $\mbox{\boldmath$V$}$, $\left\langle \mbox{\boldmath$P$} \right\rangle$ is the Umov-Poynting vector, $\left\langle E_k \right\rangle$ and $\left\langle E_s \right\rangle$ are the average kinetic and strain energy densities, and $\left\langle E \right\rangle$ is the mean energy density. The theoretical wavefronts are the locus of the tip of the energy velocity vector $\mbox{\boldmath$V$}_e$ multiplied by the propagation time. We will use this property in $\S$ \ref{sec:exp} to validate the simulations.
\section{The Biot-DA (diffusive approximation) model}\label{sec:DA}
The aim of this section is to approximate the Biot-JKD model, using a numerically tractable approach.
\subsection{Diffusive approximation}\label{sec:DA:DA}
The diffusive representation of fractional derivatives (\ref{eq:derivee_frac}) is approximated by using a quadrature formula on $N$ points, with weights $a_{\ell}^i$ and abscissae $\theta_{\ell}^i$ ($i=1,3$):
\begin{equation}
\begin{array}{ll}
(D+\Omega_i)^{1/2}w_i(x,z,t) & \displaystyle = \frac{1}{\pi}\,\int_0^{\infty}\frac{1}{\sqrt{\theta}}\psi_{i}(x,z,\theta,t)\,d\theta
\simeq \sum\limits_{\ell=1}^N a_{\ell}^i\,\psi_{i}(x,z,\theta_{\ell}^i,t),\\
[10pt]
& \displaystyle \equiv \sum_{\ell=1}^N a_{\ell}^i\,\psi_{\ell}^i(x,z,t).
\end{array}
\label{eq:DA}
\end{equation}
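The role of the weights and abscissae is clearly seen in the frequency domain: from (\ref{eq:EDO_psi_a}), $\widehat{\psi}_{\ell}^i = (j\,\omega+\Omega_i)\,\widehat{w}_i/(\theta_{\ell}^i+\Omega_i+j\,\omega)$, so the sum in (\ref{eq:DA}) should reproduce the symbol $(\Omega_i+j\,\omega)^{1/2}$ of (\ref{eq:F_omega}). The sketch below checks this with a crude midpoint rule built from the substitution $\theta = u^2$; this naive choice of coefficients is for illustration only, and is replaced by the optimized coefficients of $\S$ \ref{sec:DA:coeff}:

```python
import cmath, math

Omega, omega = 2.0, 3.0        # arbitrary shift and angular frequency
a = Omega + 1j * omega

U, n = 2000.0, 200000          # theta = u^2 removes the 1/sqrt(theta) singularity
du = U / n
acc = 0.0 + 0.0j
for k in range(n):
    u = (k + 0.5) * du
    theta_l = u * u                 # abscissa theta_l
    a_l = 2.0 * du / math.pi        # weight: d(theta)/(pi sqrt(theta)) = 2 du/pi
    acc += a_l * a / (theta_l + a)  # hat(psi_l)/hat(w) at this node

exact = cmath.sqrt(a)               # symbol of (D + Omega_i)^{1/2}
```

The truncation error decreases like $1/U$, which is exactly why tailored quadrature coefficients are needed in practice.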
From (\ref{eq:EDO_psi_a}), the $2\,N$ memory variables $\psi_{\ell}^i$ satisfy the ordinary differential equations
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle \frac{\partial\,\psi_{\ell}^i}{\partial\,t} = -(\theta_{\ell}^i + \Omega_i)\,\psi_{\ell}^i + \frac{\partial\,w_i}{\partial\,t} + \Omega_i\,w_i,\\
[8pt]
\displaystyle \psi_{\ell}^i(x,z,0) = 0.
\end{array}
\right.
\label{eq:EDO_psi_DA_ani}
\end{equation}
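As a simple illustration of why (\ref{eq:EDO_psi_DA_ani}) is numerically tractable: if the forcing $\frac{\partial\,w_i}{\partial\,t} + \Omega_i\,w_i$ is frozen over a time step, each memory variable can be advanced exactly by an exponential update. The sketch below uses arbitrary values (the scheme actually used is described in $\S$ \ref{sec:num}) and shows the update converging to the steady state:

```python
import math

theta_l, Omega = 0.8, 1.5        # arbitrary abscissa and shift
a = theta_l + Omega
f = 2.0                          # frozen forcing dw/dt + Omega*w

dt = 0.05
decay = math.exp(-a * dt)
psi = 0.0
for _ in range(2000):
    # exact solution of psi' = -a*psi + f over one step of length dt
    psi = decay * psi + f * (1.0 - decay) / a
```

The iterates converge to the steady state $f/a$ regardless of the step size, with no stability restriction coming from the stiff relaxation rates.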
\subsection{The Biot-DA first-order system}\label{sec:DA:EDP}
The fractional derivatives involved in the Biot-JKD system (\ref{eq:2D_ani_syst_hyp_jkd_scalar}) are replaced by their diffusive approximation (\ref{eq:DA}), with the evolution equations (\ref{eq:EDO_psi_DA_ani}). After some algebraic operations, the Biot-DA system is written as a first-order system in time and space, which is used in the numerical simulations of $\S$ \ref{sec:exp} ($j=1,\cdots,N$)
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle \frac{\partial\,v_{s1}}{\partial\,t} - \frac{\rho_{w1}}{\chi_1}\,\left( \frac{\partial\,\sigma_{11}}{\partial\,x}+\frac{\partial\,\sigma_{13}}{\partial\,z}\right) - \frac{\rho_f}{\chi_1}\,\frac{\partial\,p}{\partial\,x} = \frac{\rho_f}{\rho}\,\gamma_1\,\sum\limits_{\ell=1}^N a_{\ell}^1\,\psi_{\ell}^1 + G_{v_{s1}},\\
[10pt]
\displaystyle \frac{\partial\,v_{s3}}{\partial\,t} - \frac{\rho_{w3}}{\chi_3}\,\left( \frac{\partial\,\sigma_{13}}{\partial\,x}+\frac{\partial\,\sigma_{33}}{\partial\,z}\right) - \frac{\rho_f}{\chi_3}\,\frac{\partial\,p}{\partial\,z} = \frac{\rho_f}{\rho}\,\gamma_3\,\sum\limits_{\ell=1}^N a_{\ell}^3\,\psi_{\ell}^3 + G_{v_{s3}},\\
[10pt]
\displaystyle \frac{\partial\,w_1}{\partial\,t} + \frac{\rho_f}{\chi_1}\,\left( \frac{\partial\,\sigma_{11}}{\partial\,x}+\frac{\partial\,\sigma_{13}}{\partial\,z}\right) + \frac{\rho}{\chi_1}\,\frac{\partial\,p}{\partial\,x} = -\,\gamma_1\,\sum\limits_{\ell=1}^N a_{\ell}^1\,\psi_{\ell}^1 + G_{w_1},\\
[10pt]
\displaystyle \frac{\partial\,w_3}{\partial\,t} + \frac{\rho_f}{\chi_3}\,\left( \frac{\partial\,\sigma_{13}}{\partial\,x}+\frac{\partial\,\sigma_{33}}{\partial\,z}\right) + \frac{\rho}{\chi_3}\,\frac{\partial\,p}{\partial\,z} = -\,\gamma_3\,\sum\limits_{\ell=1}^N a_{\ell}^3\,\psi_{\ell}^3 + G_{w_3},\\
[10pt]
\displaystyle \frac{\partial\,\sigma_{11}}{\partial\,t} - c_{11}^u\,\frac{\partial\,v_{s1}}{\partial\,x} -c_{13}^u\,\frac{\partial\,v_{s3}}{\partial\,z} - m\,\beta_1\,\left( \frac{\partial\,w_1}{\partial\,x} + \frac{\partial\,w_3}{\partial\,z}\right) = G_{\sigma_{11}},\\
[10pt]
\displaystyle \frac{\partial\,\sigma_{13}}{\partial\,t}-c_{55}^u\,\left( \frac{\partial\,v_{s3}}{\partial\,x}+\frac{\partial\,v_{s1}}{\partial\,z}\right) = G_{\sigma_{13}},\\
[10pt]
\displaystyle \frac{\partial\,\sigma_{33}}{\partial\,t}-c_{13}^u\,\frac{\partial\,v_{s1}}{\partial\,x}- c_{33}^u\,\frac{\partial\,v_{s3}}{\partial\,z} - m\,\beta_3\,\left( \frac{\partial\,w_1}{\partial\,x} + \frac{\partial\,w_3}{\partial\,z}\right) = G_{\sigma_{33}},\\
[10pt]
\displaystyle \frac{\partial\,p}{\partial\,t} + m\, \left( \beta_1\,\frac{\partial\,v_{s1}}{\partial\,x}+\beta_3\,\frac{\partial\,v_{s3}}{\partial\,z} +\frac{\partial\,w_1}{\partial\,x} +\frac{\partial\,w_3}{\partial\,z} \right) = G_p,\\
[10pt]
\displaystyle \frac{\partial\,\psi_j^1}{\partial\,t} + \frac{\rho_f}{\chi_1}\left( \frac{\partial\,\sigma_{11}}{\partial\,x} + \frac{\partial\,\sigma_{13}}{\partial\,z}\right) + \frac{\rho}{\chi_1}\,\frac{\partial\,p}{\partial\,x} = \Omega_1\,w_1 - \gamma_1\,\sum\limits_{\ell=1}^N a_{\ell}^1\,\psi_{\ell}^1 - (\theta_j^1 + \Omega_1)\,\psi_j^1 + G_{w_1},\\
[10pt]
\displaystyle \frac{\partial\,\psi_j^3}{\partial\,t} + \frac{\rho_f}{\chi_3}\left( \frac{\partial\,\sigma_{13}}{\partial\,x} + \frac{\partial\,\sigma_{33}}{\partial\,z}\right) + \frac{\rho}{\chi_3}\,\frac{\partial\,p}{\partial\,z} = \Omega_3\,w_3 - \gamma_3\,\sum\limits_{\ell=1}^N a_{\ell}^3\,\psi_{\ell}^3 - (\theta_j^3 + \Omega_3)\,\psi_j^3 + G_{w_3}.
\end{array}
\right.
\label{eq:2D_ani_syst_hyp_ad}
\end{equation}
Taking the vector of unknowns
\begin{equation}
\mbox{\boldmath$U$} = (v_{s1}\,,\,v_{s3}\,,\,w_1\,,\,w_3\,,\,\sigma_{11}\,,\,\sigma_{13}\,,\,\sigma_{33}\,,\,p\,,\,\psi_1^1\,,\,\psi_1^3\,,\,\cdots\,,\,\psi_N^1\,,\,\psi_N^3)^T,
\label{eq:vect_U_ad}
\end{equation}
and the forcing
\begin{equation}
\mbox{\boldmath$G$} = \left( G_{v_{s1}}\,,\,G_{v_{s3}}\,,\,G_{w_1}\,,\,G_{w_3}\,,\,G_{\sigma_{11}}\,,\,G_{\sigma_{13}}\,,\,G_{\sigma_{33}}\,,\,G_p\,,\,G_{w_1}\,,\,G_{w_3}\,,\,\cdots\,,\,G_{w_1}\,,\,G_{w_3} \right)^T,
\label{eq:forcing_DA}
\end{equation}
the system (\ref{eq:2D_ani_syst_hyp_ad}) is written in the form:
\begin{equation}
\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,t} + \mbox{\boldmath$A$}\,\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,x} + \mbox{\boldmath$B$}\,\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,z} = -\mbox{\boldmath$S$}\,\mbox{\boldmath$U$} + \mbox{\boldmath$G$},
\label{eq:2D_ani_syst_hyp_tens_ad}
\end{equation}
where $\mbox{\boldmath$A$}$ and $\mbox{\boldmath$B$}$ are the $(2\,N+8)\times(2\,N+8)$ propagation matrices and $\mbox{\boldmath$S$}$ is the diffusive matrix (given in \ref{annexe:matABS}). The number of unknowns increases linearly with the number of memory variables. Only the matrix $\mbox{\boldmath$S$}$ depends on the coefficients of the diffusive approximation.
\subsection{Properties}\label{sec:DA:prop}
Some properties are stated to characterize the first-order differential system (\ref{eq:2D_ani_syst_hyp_ad}). First, one notes that the only difference between the Biot-LF model, the Biot-JKD model and the Biot-DA model occurs in the viscous operators
\begin{equation}
\widehat{F}_i(\omega) = \left\lbrace \begin{array}{ll}
\displaystyle \widehat{F}_i^{LF}(\omega) = 1 \qquad\; & \mbox{Biot-LF},\\
[5pt]
\displaystyle \widehat{F}_i^{JKD}(\omega) = \frac{1}{\sqrt{\Omega_i}}\,(\Omega_i + j\,\omega)^{1/2} & \mbox{Biot-JKD},\\
[10pt]
\displaystyle \widehat{F}_i^{DA}(\omega) = \frac{\Omega_i + j\,\omega}{\sqrt{\Omega_i}}\,\sum\limits_{\ell=1}^N \frac{a_{\ell}^i}{\theta_{\ell}^i + \Omega_i + j\,\omega} & \mbox{Biot-DA}.
\end{array}\right.
\label{eq:fonction_correction}
\end{equation}
The dispersion analysis of the Biot-DA model is obtained by replacing the viscous operators $\widehat{F}_i^{JKD}(\omega)$ by $\widehat{F}_i^{DA}(\omega)$ in (\ref{eq:operateur_visco_jkd}). One of the eigenvalues of $\mbox{\boldmath$\Gamma$}^{-1}\,\mbox{\boldmath${\cal L}$}\,\mbox{\boldmath${\cal C}$}$ (\ref{eq:syst_dispersion_gen}) is still zero with multiplicity two, and the other non-zero eigenvalues correspond to the wave modes $\pm k_{pf}(\omega,\varphi)$, $\pm k_{ps}(\omega,\varphi)$ and $\pm k_{s}(\omega,\varphi)$. Consequently, the diffusive approximation does not introduce spurious waves.
\begin{proposition}
The eigenvalues of the matrix $\mbox{\boldmath$M$} = \cos(\varphi)\,\mbox{\boldmath$A$} + \sin(\varphi)\,\mbox{\boldmath$B$}$ are
\begin{equation}
sp(\mbox{\boldmath$M$}) = \left\lbrace 0\,,\,\pm c_{pf}^{\infty}(\varphi)\,,\,\pm c_{ps}^{\infty}(\varphi)\,,\,\pm c_{s}^{\infty}(\varphi)\right\rbrace,
\end{equation}
with $0$ being of multiplicity $2\,N+2$.
\end{proposition}
\noindent
The non-zero eigenvalues do not depend on the viscous operators $\widehat{F}_i(\omega)$. Consequently, the high-frequency limits of the phase velocities $c_{pf}^{\infty}(\varphi)$, $c_{ps}^{\infty}(\varphi)$ and $c_s^{\infty}(\varphi)$, defined in $\S$ \ref{sec:phys:dispersion}, are the same for the Biot-LF, Biot-JKD and Biot-DA models. An argument similar to that of \cite{LEVEQUE13} shows that the matrix $\mbox{\boldmath$M$}$ is diagonalizable for all $\varphi$ in $[0,2\,\pi[$, with real eigenvalues. The three models are therefore hyperbolic.
\begin{proposition}[\bf Decrease of the energy]
An energy analysis of (\ref{eq:2D_ani_syst_hyp_ad}) is performed. Let us consider the Biot-DA model (\ref{eq:2D_ani_syst_hyp_ad}) without forcing, and let us denote
\begin{equation}
E=E_1+E_2+E_3,
\label{eq:E1E2E3_AD}
\end{equation}
where $E_1$, $E_2$ are defined in equations (\ref{eq:energieJKD}) and
\begin{equation}
\begin{array}{l}
\displaystyle E_3 = \frac{1}{2}\,\int_{\mathbb{R}^2}\frac{\eta}{\pi}\,\sum\limits_{\ell=1}^N(\mbox{\boldmath$w$}-\mbox{\boldmath$\psi_{\ell}$})^T\,\mathrm{diag}\left(\frac{a_{\ell}^i}{\kappa_i\,\sqrt{\Omega_i\,\theta_{\ell}^i}\,(\theta_{\ell}^i+2\,\Omega_i)}\right)\,(\mbox{\boldmath$w$}-\mbox{\boldmath$\psi_{\ell}$})\,dx\,dz.
\end{array}
\label{eq:energieAD}
\end{equation}
Then, $E$ satisfies
\begin{equation}
\begin{array}{l}
\displaystyle \frac{\textstyle d\,E}{\textstyle d\,t} = -\int_{\mathbb{R}^2}\frac{\eta}{\pi}\,\sum\limits_{\ell=1}^N \displaystyle \left\lbrace \mbox{\boldmath$\psi_{\ell}$}^T\,\mathrm{diag}\left(\frac{a_{\ell}^i\,(\theta_{\ell}^i+\Omega_i)}{\kappa_i\,\sqrt{\Omega_i\,\theta_{\ell}^i}\,(\theta_{\ell}^i+2\,\Omega_i)}\right)\,\mbox{\boldmath$\psi_{\ell}$}\right. \\
[20pt]
\hspace{1.1cm}\displaystyle \left. + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\frac{a_{\ell}^i\,\Omega_i}{\kappa_i\,\sqrt{\Omega_i\,\theta_{\ell}^i}\,(\theta_{\ell}^i+2\,\Omega_i)}\right)\,\mbox{\boldmath$w$} \right\rbrace \,dx\,dz.
\end{array}
\label{eq:dEdtAD}
\end{equation}
\label{prop:nrjAD}
\end{proposition}
\noindent
The proof of proposition \ref{prop:nrjAD} is similar to that of proposition \ref{prop:nrjJKD} and will not be repeated here. Proposition \ref{prop:nrjAD} calls for the following comments:
\begin{itemize}
\item only $E_3$ and the time evolution of $E$ are modified by the diffusive approximation;
\item the abscissae $\theta_{\ell}^i$ are always positive, as explained in $\S$ \ref{sec:DA:coeff}, but not necessarily the weights $a_{\ell}^i$. Consequently, in the general case, we cannot say that the Biot-DA model is stable. However, in the particular case where the coefficients $\theta_{\ell}^i$, $a_{\ell}^i$ are all positive, $E$ is an energy, and $\frac{d\,E}{d\,t} < 0$: the Biot-DA model is therefore stable in this case.
\end{itemize}
\begin{proposition}
Let us assume that the abscissae $\theta_{\ell}^i$ have been sorted in increasing order
\begin{equation}
\theta_1^i < \theta_2^i < \cdots < \theta_N^i,\quad i=1,3,
\label{eq:poids_croissant}
\end{equation}
and that the coefficients $\theta_{\ell}^i$, $a_{\ell}^i$ of the diffusive approximation (\ref{eq:DA}) are positive. Then zero is an eigenvalue with multiplicity $6$ of $\mbox{\boldmath$S$}$. Moreover, the $2\,N + 2$ non-zero eigenvalues of $\mbox{\boldmath$S$}$ (denoted $s_{\ell}^i$, $\ell = 1,\cdots,N+1$) are real positive, and satisfy
\begin{equation}
\begin{array}{l}
0 < s_1^i < \theta_1^i + \Omega_i < \cdots < s_N^i < \theta_N^i + \Omega_i < s_{N+1}^i,\quad i=1,3.
\end{array}
\label{eq:vpS}
\end{equation}
\label{prop:diffusive_part_vpS}
\end{proposition}
\noindent
Proposition \ref{prop:diffusive_part_vpS} is proven in \ref{annexe:proof_vpS}. As we will see in $\S$ \ref{sec:num}, proposition \ref{prop:diffusive_part_vpS} ensures the stability of the numerical method. Positivity of the quadrature abscissae and weights is again the fundamental hypothesis.
\subsection{Determining the Biot-DA parameters}\label{sec:DA:coeff}
For the sake of clarity, the space coordinates and the subscripts due to the anisotropy are omitted. The quadrature coefficients aim to approximate improper integrals of the form
\begin{equation}
(D+\Omega)^{1/2}w(t) = \frac{1}{\pi}\,\int_0^{\infty}\frac{1}{\sqrt{\theta}}\psi(t,\theta)\,d\theta \simeq \sum\limits_{\ell=1}^N a_{\ell}\,\psi(t,\theta_{\ell}).
\label{eq:quadrature_AD}
\end{equation}
Moreover, the positivity of the quadrature coefficients is crucial for the stability of the Biot-DA model and its numerical implementation, as shown in propositions \ref{prop:nrjAD} and \ref{prop:diffusive_part_vpS}. Two approaches can be employed for this purpose. While the most usual one is based on orthogonal polynomials, the second approach is associated with an optimization procedure applied to the viscous operators (\ref{eq:fonction_correction}).
\subsubsection{Gaussian quadratures\label{section:gauss_jacobi}}
Various orthogonal polynomials exist to evaluate the improper integral (\ref{eq:quadrature_AD}). The first method, proposed in \cite{YUAN02}, is to use the Gauss-Laguerre quadrature formula, which approximates improper integrals over $\mathbb{R}^+$. Slow convergence of this method is explained and corrected in \cite{DIETHELM08}, where the Gauss-Laguerre quadrature is replaced by a Gauss-Jacobi quadrature, more suitable for functions which decrease algebraically. A final improvement, proposed in \cite{BIRK10}, consists in using a modified Gauss-Jacobi quadrature formula, recasting the improper integral (\ref{eq:quadrature_AD}) as
\begin{equation}
\displaystyle \frac{1}{\pi}\,\int_0^{\infty} \frac{1}{\sqrt{\theta}}\,\psi(\theta)\,d\theta=\frac{1}{\pi}\,\int_{-1}^{+1}(1-\tilde{\theta})^\gamma(1+\tilde{\theta})^\delta\tilde{\psi}(\tilde{\theta})\,d\tilde{\theta}\simeq \frac{1}{\pi}\,\sum\limits_{\ell = 1}^N\tilde{a}_\ell\,\tilde{\psi}(\tilde{\theta}_\ell),
\label{eq:quadrature_derivee_frac_birk}
\end{equation}
with the modified memory variable $\tilde{\psi}$ defined as
\begin{equation}
\tilde{\psi}(\tilde{\theta})=\frac{4}{(1-\tilde{\theta})^{\gamma-1}(1+\tilde{\theta})^{\delta+3}}\,\left(\frac{1+\tilde{\theta}}{1-\tilde{\theta}}\right)\psi\left(\left(\frac{1-\tilde{\theta}}{1+\tilde{\theta}}\right)^2\right).
\label{eq:fonction_gauss_jacobi}
\end{equation}
The abscissae $\tilde{\theta}_{\ell}$, which are the zeros of the Gauss-Jacobi polynomials, and the weights $\tilde{a}_{\ell}$ can be computed by standard routines \cite{NUM_RECIPES}. In \cite{BIRK10}, the author proves that for fractional derivatives of order $1/2$, the optimal coefficients to use are $\gamma = 1$ and $\delta = 1$. The coefficients of the diffusive approximation $\theta_{\ell}$ and $a_{\ell}$ (\ref{eq:quadrature_AD}) are therefore related to the coefficients $\tilde{\theta}_{\ell}$ and $\tilde{a}_{\ell}$ (\ref{eq:quadrature_derivee_frac_birk}) by
\begin{equation}
\theta_{\ell} = \left(\frac{1-\tilde{\theta}_{\ell}}{1+\tilde{\theta}_{\ell}}\right)^2,\quad a_{\ell} = \frac{1}{\pi}\,\frac{4\,\tilde{a}_{\ell}}{(1-\tilde{\theta}_{\ell})\,(1+\tilde{\theta}_{\ell})^3}.
\label{eq:coef_quad_birk}
\end{equation}
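For completeness, the construction (\ref{eq:coef_quad_birk}) can be sketched with a few lines of numpy. The fragment below is our own minimal implementation (not the routines of \cite{NUM_RECIPES}): it computes the Gauss-Jacobi rule with $\gamma = \delta = 1$, i.e. the weight $1-\tilde{\theta}^2$ on $[-1,1]$, by the Golub-Welsch algorithm, then applies the map (\ref{eq:coef_quad_birk}):

```python
import numpy as np

def modified_gauss_jacobi(N):
    """Abscissae theta_l and weights a_l of the diffusive approximation,
    via the Gauss-Jacobi rule with gamma = delta = 1 (Golub-Welsch):
    nodes are eigenvalues of the symmetric tridiagonal Jacobi matrix."""
    n = np.arange(1, N)
    # recurrence coefficients of the weight (1-x)(1+x) on [-1,1]
    beta = n * (n + 2.0) / ((2.0 * n + 1.0) * (2.0 * n + 3.0))
    J = np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    x, V = np.linalg.eigh(J)              # Gauss-Jacobi nodes in (-1, 1)
    w = (4.0 / 3.0) * V[0, :] ** 2        # mu0 = int_{-1}^{1} (1 - x^2) dx = 4/3
    theta = ((1.0 - x) / (1.0 + x)) ** 2  # map of eq. (coef_quad_birk)
    a = 4.0 * w / (np.pi * (1.0 - x) * (1.0 + x) ** 3)
    return theta, a
```

The recurrence coefficients $\beta_n = n(n+2)/((2n+1)(2n+3))$ and the moment $\mu_0 = 4/3$ are those of the weight $1-x^2$; by construction all $\theta_{\ell}$ and $a_{\ell}$ come out strictly positive, in agreement with the text.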
By construction, they are strictly positive.
\subsubsection{Optimization procedures\label{section:solvopt}}
In \cite{BLANC_JCP13,BLANC_JASA13}, we proposed a different method to determine the coefficients $\theta_{\ell}$ and $a_{\ell}$ of the diffusive approximation (\ref{eq:quadrature_AD}). This method is based on the frequency expressions of the viscous operators and takes into account the frequency content of the source. Our requirement is therefore to approximate the viscous operator $\widehat{F}^{JKD}(\omega)$ by $\widehat{F}^{DA}(\omega)$ (\ref{eq:fonction_correction}) in the frequency range of interest $I = [\omega_{\min},\omega_{\max}]$, centered on the central angular frequency of the source. This leads to the minimization of the quantity $\chi^2$ with respect to the abscissae $\theta_{\ell}$ and to the weights $a_{\ell}$
\begin{equation}
\chi^2 = \sum\limits_{k=1}^K\left| \frac{\widehat{F}^{DA}(\omega_k)}{\widehat{F}^{JKD}(\omega_k)} - 1 \right| ^2
= \sum\limits_{k=1}^K\left| \sum\limits_{\ell=1}^N a_{\ell}\,\frac{(\Omega + j\,\omega_k)^{1/2}}{\theta_{\ell}+\Omega + j\,\omega_k} - 1 \right| ^2,
\label{eq:chi2}
\end{equation}
where the $K$ angular frequencies $\omega_k$ are distributed in $I$ on a logarithmic scale
\begin{equation}
\omega_k = \omega_{\min}\,\left(\frac{\omega_{\max}}{\omega_{\min}}\right) ^{\frac{k-1}{K-1}},\qquad k=1\cdots K.
\label{eq:tilde_omega_k}
\end{equation}
In \cite{BLANC_JCP13,BLANC_JASA13}, the abscissae $\theta_{\ell}$ were set a priori on a logarithmic scale, as in (\ref{eq:tilde_omega_k}). Only the weights $a_{\ell}$ were optimized, with a linear least-squares minimization of (\ref{eq:chi2}). Some negative weights were obtained, which represents a major drawback, at least theoretically, since the stability of the Biot-DA model cannot be guaranteed.
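As an illustration, this linear step can be sketched in a few lines of Python. The placement of the abscissae over $[\omega_0/10,\,10\,\omega_0]$ below is an assumption on our part (the cited papers should be consulted for the exact choice); the weights solve a real least-squares problem obtained by stacking the real and imaginary parts of the residual in (\ref{eq:chi2}):

```python
import numpy as np

def linear_weights(Omega, f0, N=3):
    """Linear variant of the chi^2 minimization: abscissae theta_l fixed,
    log-spaced; only the weights a_l are fitted (they may be negative)."""
    K = 2 * N
    w0 = 2.0 * np.pi * f0
    omega = (w0 / 10.0) * 100.0 ** (np.arange(K) / (K - 1.0))         # K log-spaced points
    theta = (w0 / 10.0) * 100.0 ** (np.arange(N) / max(N - 1.0, 1.0))  # assumed placement
    M = np.sqrt(Omega + 1j * omega)[:, None] \
        / (theta[None, :] + Omega + 1j * omega[:, None])
    # real least squares on stacked real/imag parts: min |M a - 1|^2, a real
    A = np.vstack([M.real, M.imag])
    b = np.concatenate([np.ones(K), np.zeros(K)])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    chi2 = np.sum(np.abs(M @ a - 1.0) ** 2)
    return theta, a, chi2
```

Since the trivial candidate $a_{\ell}=0$ gives $\chi^2 = K$, the fitted residual is necessarily smaller; nothing, however, forces the fitted $a_{\ell}$ to be positive, which is precisely the drawback discussed above.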
To remove this drawback and improve the minimization of $\chi^2$, a nonlinear constrained optimization is developed, where both the abscissae and the weights are optimized. The coefficients $\theta_{\ell}$ and $a_{\ell}$ are now constrained to be positive. An additional constraint $\theta_{\ell} \leqslant \theta_{max}$ is also introduced to ensure the computational accuracy in the forthcoming numerical method ($\S$ \ref{sec:num}). Setting
\begin{equation}
\theta_{\ell} = (\theta'_{\ell})^2,\quad a_{\ell} = (a'_{\ell})^2,
\label{eq:coef_min_shor}
\end{equation}
the number of constraints decreases from $3\,N$ to $N$, leading to the following minimization problem:
\begin{equation}
\displaystyle \min\limits_{(\theta_{\ell}',a_{\ell}')} \chi^2,\quad\theta_{\ell}'\leqslant \sqrt{\theta_{\max}}.
\label{eq:pb_opti_2}
\end{equation}
The constrained minimization problem (\ref{eq:pb_opti_2}) is nonlinear and non-quadratic with respect to the abscissae $\theta_{\ell}'$. To solve it, we implement the program SolvOpt \cite{KAPPEL00,SHOR85}, already used in viscoelasticity \cite{REKIK11}. Since Shor's algorithm is iterative, it requires an initial estimate $\theta_{\ell}'^0$, $a_{\ell}'^0$ of the coefficients which satisfies the constraints of the minimization problem (\ref{eq:pb_opti_2}). For this purpose, $\theta_{\ell}^0$ and $a_{\ell}^0$ are initialized with the method based on the modified Gauss-Jacobi quadrature formula (\ref{eq:coef_quad_birk}).
Different initial guesses have been used, derived from the Gauss-Legendre and Gauss-Jacobi methods, leading to the same final coefficients $\theta_{\ell}$ and $a_{\ell}$.
In what follows, we always use the parameters
$\omega_{\min} = \omega_0/10,\quad\omega_{\max}=10\,\omega_0,\quad\theta_{\max}=100\,\omega_0,\quad K=2\,N,$
where $\omega_0=2\,\pi\,f_0$ is the central angular frequency of the source.
\subsubsection{Discussion\label{section:choix_opti}}
To compare the quadrature methods presented in $\S$ \ref{section:gauss_jacobi} and \ref{section:solvopt}, we first define the error of model $\varepsilon_{mod}$ as
\begin{equation}
\varepsilon_{mod} = \left\| \frac{\widehat{F}^{DA}(\omega)}{\widehat{F}^{JKD}(\omega)}-1\right\|_{L_2} = \left( \int_{\omega_{\min}}^{\omega_{\max}}\left| \frac{\widehat{F}^{DA}(\omega)}{\widehat{F}^{JKD}(\omega)}-1\right|^2\,d\omega \right)^{1/2}.
\label{eq:norme2_relative_error}
\end{equation}
The variation of $\varepsilon_{mod}$ in terms of the number $N$ of memory variables, for $f_0 = 200$ kHz and $f_c = 3.84$ kHz, is represented on figure \ref{fig:erreur_opti}-a. The Gauss-Jacobi method converges very slowly, and the error is always larger than $1$ \% even for $N=50$. Moreover, for values of $N\leqslant10$, the error is always larger than $60$ \%. For both the linear and the nonlinear optimizations, the errors decrease rapidly with $N$. Nevertheless, the nonlinear procedure outperforms the results obtained in the linear case. For $N=8$ for instance, the relative error of the nonlinear optimization ($\varepsilon_{mod} \simeq 7.16\,10^{-3}$ \%) is $514$ times smaller than the error of the linear optimization ($\varepsilon_{mod} \simeq 3.68$ \%). For larger values of $N$, the system is poorly conditioned and the order of convergence deteriorates; in practice, this is not penalizing since large values of $N$ are not used. An example of a priori parametric determination of $N$ in terms of both the frequency range and the desired accuracy is also given in figure \ref{fig:erreur_opti}-b for the nonlinear procedure. The case $N=0$ corresponds to the Biot-LF model.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.33]{erreur_fonction_N.eps} &
\includegraphics[scale=0.33]{N_fonction_f0.eps}
\end{tabular}
\end{center}
\caption{(a): relative error $\varepsilon_{mod}$ in terms of $N$ for both the modified Gauss-Jacobi quadrature and the nonlinear constrained optimization. (b): required values of $N$ in terms of $f_0/f_{c1}$ and the required accuracy $\varepsilon_{mod}$ for the nonlinear optimization.}
\label{fig:erreur_opti}
\end{figure}
It is also important to compare the influence of the quadrature coefficients on the physical observables. For that purpose, we represent on figure \ref{fig:fig_opti_dispersion} the phase velocity and the attenuation of the slow wave of the Biot-DA model, obtained with the different quadrature methods. As expected, the results given by the Gauss-Jacobi method are extremely poor. On the contrary, the linear and nonlinear procedures represent very accurately the variations of these quantities over the considered range of frequencies, even for the small value $N=3$. Based on these results and the positivity requirement, the nonlinear constrained optimization is therefore considered as the best way to determine the coefficients of the diffusive approximation. This method is used in all that follows.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.30]{compare_cps_3_zoom.eps} &
\includegraphics[scale=0.30]{compare_attps_3_zoom.eps}\\
\end{tabular}
\end{center}
\caption{phase velocity (a) and attenuation (b) of the slow quasi-compressional wave. Comparison between the Biot-DA model and the Biot-JKD model for $N=3$.}
\label{fig:fig_opti_dispersion}
\end{figure}
\section{Numerical modeling}\label{sec:num}
\subsection{Splitting}\label{sec:num:splitting}
In order to integrate the Biot-DA system (\ref{eq:2D_ani_syst_hyp_tens_ad}), a uniform grid is introduced, with mesh sizes $\Delta\,x$, $\Delta\,z$ and time step $\Delta\,t$. The approximation of the exact solution $\mbox{\boldmath$U$}(x_i = i\,\Delta\,x,z_j = j\,\Delta\,z,t_n = n\,\Delta\,t)$ is denoted by $\mbox{\boldmath$U$}_{ij}^n$, with $0 \leqslant i \leqslant N_x$, $0 \leqslant j \leqslant N_z$. If $\Delta\,x = \Delta\,z$, a straightforward discretization of (\ref{eq:2D_ani_syst_hyp_tens_ad}) by an explicit time scheme typically leads to the following stability condition
\begin{equation}
\Delta t \leqslant \min\left( \Upsilon\,\frac{\Delta x}{\max\limits_{\varphi\in[0,\pi/2]}c_{pf}^{\infty}(\varphi)},\frac{2}{R(\mbox{\boldmath$S$})}\right),
\label{eq:CFL_direct}
\end{equation}
where $R(\mbox{\boldmath$S$})$ is the spectral radius of $\mbox{\boldmath$S$}$, and $\Upsilon > 0$ is obtained by a Von-Neumann analysis when $\mbox{\boldmath$S$}=\mbox{\boldmath$0$}$. The first term of (\ref{eq:CFL_direct}), which depends on the propagation matrices $\mbox{\boldmath$A$}$ and $\mbox{\boldmath$B$}$, is the classical CFL condition. The second term of (\ref{eq:CFL_direct}) depends only on the diffusive matrix $\mbox{\boldmath$S$}$. From proposition \ref{prop:diffusive_part_vpS}, we deduce that the spectral radius of $\mbox{\boldmath$S$}$ satisfies
\begin{equation}
R(\mbox{\boldmath$S$}) > \max\limits_{\ell=1,\cdots,N}(\theta_{\ell}^1 + \Omega_1,\theta_{\ell}^3 + \Omega_3)
\label{chapter3:eq:spectral_radius_S}
\end{equation}
if the coefficients $\theta_{\ell}^i$ and $a_{\ell}^i$ of the diffusive approximation are positive. With highly dissipative fluids, the second term of (\ref{eq:CFL_direct}) can be so small that numerical computations are intractable.
A more efficient strategy is adopted here, based on the second-order Strang splitting \cite{LEVEQUE02}. It consists in splitting the original system (\ref{eq:2D_ani_syst_hyp_tens_ad}) into a propagative part
\begin{equation}
\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,t} + \mbox{\boldmath$A$}\,\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,x} + \mbox{\boldmath$B$}\,\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,z} = \mbox{\boldmath$0$},\qquad(\mbox{\boldmath$H$}_p)
\label{eq:propagative_part}
\end{equation}
and a diffusive part with forcing
\begin{equation}
\frac{\partial\,\mbox{\boldmath$U$}}{\partial\,t} = -\mbox{\boldmath$S$}\,\mbox{\boldmath$U$} + \mbox{\boldmath$G$}, \qquad\qquad\qquad(\mbox{\boldmath$H$}_d)
\label{eq:diffusive_part}
\end{equation}
where $\mbox{\boldmath$H$}_p$ and $\mbox{\boldmath$H$}_d$ are the operators associated with each part. One solves alternatively the propagative part and the diffusive part:
\begin{equation}
\mbox{\boldmath$U$}^{n+1} = \mbox{\boldmath$H$}_d\left(t_{n+1},\frac{\Delta t}{2}\right)\circ\mbox{\boldmath$H$}_p(\Delta t)\circ\mbox{\boldmath$H$}_d\left(t_n,\frac{\Delta t}{2}
\right)\,\mbox{\boldmath$U$}^{n}.
\label{eq:strang_splitting}
\end{equation}
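The second-order accuracy of the splitting (\ref{eq:strang_splitting}) can be checked on a toy linear system with non-commuting matrices. The sketch below is unrelated to the Biot matrices; it uses nilpotent $A$ and $B$ so that the sub-step propagators $e^{A\,h}=I+A\,h$ and $e^{B\,h}=I+B\,h$ are exact, and measures the error ratio under step halving:

```python
import numpy as np

# Strang splitting for dU/dt = (A + B) U with non-commuting nilpotent A, B.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
I = np.eye(2)

def strang_propagator(t, n):
    """n Strang steps of size h = t/n: exp(A h/2) exp(B h) exp(A h/2)."""
    h = t / n
    step = (I + A * h / 2.0) @ (I + B * h) @ (I + A * h / 2.0)
    return np.linalg.matrix_power(step, n)

t = 1.0
# exact propagator: A + B = [[0,1],[1,0]], so exp((A+B)t) = cosh(t) I + sinh(t) (A+B)
exact = np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])
err = [np.abs(strang_propagator(t, n) - exact).max() for n in (64, 128)]
ratio = err[0] / err[1]   # ~ 4 for a second-order method
```

Since $[A,B]\neq 0$ here, the splitting error does not vanish, and halving the step divides the global error by approximately four, confirming second-order accuracy.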
The discrete operator $\mbox{\boldmath$H$}_p$ associated with the propagative part (\ref{eq:propagative_part}) is an ADER 4 (Arbitrary DERivatives) scheme \cite{DUMBSER04}. This scheme is fourth-order accurate in space and time, is dispersive of order 4 and dissipative of order 6 \cite{STRIKWERDA99}, and has a stability limit $\Upsilon=1$. On Cartesian grids, ADER 4 amounts to a fourth-order Lax-Wendroff scheme. A general expression of the ADER scheme, together with its numerical analysis, can be found in section 4-3 of \cite{THESE_BLANC}.
The solution of (\ref{eq:diffusive_part}) is given by
\begin{equation}
\begin{array}{ll}
\displaystyle \mbox{\boldmath$H$}_d\left(t_k,\frac{\Delta t}{2}\right)\,\mbox{\boldmath$U$}(t_0) & \displaystyle = e^{-\mbox{\scriptsize\boldmath$S$}\,\Delta t/2}\,\mbox{\boldmath$U$}\left(t_0\right) + \int_{t_0}^{t_0 + \Delta t/2} e^{-\mbox{\scriptsize\boldmath$S$}\,(t_0 + \Delta t/2-\tau)}\,\mbox{\boldmath$G$}(\tau)\,d\tau,\\
[12pt]
& \displaystyle \simeq e^{-\mbox{\scriptsize\boldmath$S$}\,\frac{\Delta t}{2}}\,\mbox{\boldmath$U$}(t_0) + (\mbox{\boldmath$I$} - e^{-\mbox{\scriptsize\boldmath$S$}\,\frac{\Delta t}{2}})\,\mbox{\boldmath$S$}^{-1}\,\mbox{\boldmath$G$}(t_k),
\end{array}
\label{eq:int_diffu_part_F}
\end{equation}
with $k=n$ or $n+1$. The exponential matrix $e^{-\mbox{\scriptsize\boldmath$S$}\,\Delta t/2}$ is computed numerically using the $(6,6)$ Pad\'e approximation in the ``scaling and squaring method'' \cite{MOLER03}. Proposition \ref{prop:diffusive_part_vpS} ensures that the numerical integration of the diffusive step (\ref{eq:diffusive_part}) is unconditionally stable \cite{BLANC_JCP13}. Without forcing, i.e. $\mbox{\boldmath$G$} = \mbox{\boldmath$0$}$, the integration of the diffusive part (\ref{eq:diffusive_part}) is exact.
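A minimal sketch of the $(6,6)$ Pad\'e evaluation with scaling and squaring is given below. It is illustrative only: production codes \cite{MOLER03} refine the choice of scaling and of the Pad\'e degree as a function of $\|X\|$.

```python
import numpy as np
from math import factorial

def expm_pade66(X):
    """Matrix exponential e^X by the (6,6) Pade approximant combined with
    scaling and squaring: e^X = (e^{X/2^s})^{2^s}."""
    nrm = np.linalg.norm(X, 1)
    s = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0  # ||X/2^s|| <= 1/2
    Y = X / 2.0 ** s
    m = 6
    Num = np.zeros_like(Y)
    Den = np.zeros_like(Y)
    P = np.eye(Y.shape[0])
    for j in range(m + 1):
        # Pade coefficients of exp: c_j = (2m-j)! m! / ((2m)! j! (m-j)!)
        c = factorial(2 * m - j) * factorial(m) \
            / (factorial(2 * m) * factorial(j) * factorial(m - j))
        Num += c * P                 # numerator polynomial N(Y)
        Den += c * (-1) ** j * P     # denominator D(Y) = N(-Y)
        P = P @ Y
    E = np.linalg.solve(Den, Num)    # Pade approximant of e^Y
    for _ in range(s):               # undo the scaling by repeated squaring
        E = E @ E
    return E
```

With the norm of the scaled argument bounded by $1/2$, the $(6,6)$ approximant is accurate to machine precision before squaring.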
The full algorithm is therefore stable under the optimal CFL stability condition
\begin{equation}
\Delta t = \Upsilon\,\frac{\Delta x}{\max\limits_{\varphi\in[0,\pi/2]}c_{pf}^{\infty}(\varphi)},\quad\Upsilon\leqslant 1,
\label{eq:CFL_dsplit}
\end{equation}
which is always independent of the Biot-DA model coefficients. Since the matrices $\mbox{\boldmath$A$}$, $\mbox{\boldmath$B$}$ and $\mbox{\boldmath$S$}$ do not commute, the order of convergence decreases from $4$ to $2$. Using a fourth-order ADER scheme is nevertheless advantageous, compared with the second-order Lax-Wendroff scheme: the stability limit is improved, and numerical artifacts (dispersion, attenuation, anisotropy) are greatly reduced.
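The structure of such one-step schemes is conveniently illustrated in 1D. The fragment below implements the classical second-order Lax-Wendroff scheme for the scalar advection equation $u_t + a\,u_x = 0$ on a periodic grid; it is a simplified stand-in for ADER 4, with grid parameters of our own choosing:

```python
import numpy as np

# Second-order Lax-Wendroff for u_t + a u_x = 0, periodic in [0, 1).
nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = dx * np.arange(nx)
u = np.sin(2.0 * np.pi * x)
nsteps = int(round(1.0 / (a * dt)))   # one full period: u should return to u0
for _ in range(nsteps):
    up = np.roll(u, -1)               # u_{j+1} (periodic)
    um = np.roll(u, 1)                # u_{j-1} (periodic)
    u = u - 0.5 * cfl * (up - um) + 0.5 * cfl ** 2 * (up - 2.0 * u + um)
err = np.abs(u - np.sin(2.0 * np.pi * x)).max()
```

After one period the profile returns to its initial position, up to small dispersion and dissipation errors; the fourth-order ADER scheme reduces these numerical artifacts much further, as stated above.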
\subsection{Immersed interface method}\label{sec:num:IIM}
Let us consider two transversely isotropic homogeneous poroelastic media $\Omega_0$ and $\Omega_1$ separated by a stationary interface $\Gamma$, as shown in figure \ref{fig:interface}. The governing equations (\ref{eq:2D_ani_syst_hyp_tens_ad}) in each medium have to be completed by a set of jump conditions. The simple case of perfect bonding and perfect hydraulic contact along $\Gamma$ is considered here, modeled by the jump conditions \cite{GUREVICH99}:
\begin{equation}
\left[ \mbox{\boldmath$v_s$} \right] = {\bf 0},\quad \left[ \mbox{\boldmath$w$}.\mbox{\boldmath$n$} \right] = 0,\quad \left[ \underline{\mbox{\boldmath$\sigma$}}.\mbox{\boldmath$n$} \right] = \mbox{\boldmath$0$},\quad \left[ p \right] = 0.
\label{eq:jump_condition}
\end{equation}
The discretization of the interface conditions requires special care. A straightforward stair-step representation of interfaces introduces first-order geometrical errors and yields spurious numerical diffractions. In addition, the jump conditions (\ref{eq:jump_condition}) are not enforced numerically if no special treatment is applied. Lastly, the smoothness requirements to solve (\ref{eq:propagative_part}) are not satisfied, decreasing the convergence rate of the ADER scheme.
To remove these drawbacks while maintaining the efficiency of Cartesian grid methods, immersed interface methods constitute a possible strategy \cite{LEVEQUE97,LI94,CHIAVASSA11}. The latter studies can be consulted for a detailed description of this method. The basic principle is as follows: at the irregular nodes where the ADER scheme crosses an interface, modified values of the solution are used on the other side of the interface instead of the usual numerical values.
Calculating these modified values is a complex task involving high-order derivation of the jump conditions (\ref{eq:jump_condition}), high-order derivation of the Beltrami-Michell equation (\ref{eq:beltrami_ani}) and algebraic manipulation, such as singular value decompositions. All these time-consuming procedures can be carried out during a preprocessing stage, and only small matrix-vector multiplications need to be performed during the simulation. After optimizing the code, the extra CPU cost can be practically negligible, i.e. lower than 1\% of that required by the time-marching procedure.
Compared with $\S$ 3-3 of \cite{CHIAVASSA11}, the modifications induced by anisotropy concern
\begin{itemize}
\item step 1: the derivation of the jump conditions,
\item step 2: the derivation of the Beltrami-Michell equation.
\end{itemize}
These modifications are tedious and hence will not be repeated here. They are deduced from the new expressions (\ref{eq:beltrami_ani}) and (\ref{eq:2D_ani_syst_hyp_ad}).
\subsection{Summary of the algorithm}\label{sec:num:algo}
The numerical method can be summed up as follows:
\begin{enumerate}
\item pre-processing step
\begin{itemize}
\item diffusive coefficients: initialisation (\ref{eq:coef_quad_birk}), nonlinear optimisation (\ref{eq:coef_min_shor})-(\ref{eq:pb_opti_2})
\item numerical scheme: ADER matrices for (\ref{eq:propagative_part}), exponential of the diffusive matrix (\ref{eq:int_diffu_part_F})
\item immersed interface method: detection of irregular points, computation of extrapolation matrices
\end{itemize}
\item time iterations
\begin{itemize}
\item immersed interface method: computation of modified values near interfaces
\item diffusive half-step $\mbox{\boldmath$H$}_d$ (\ref{eq:int_diffu_part_F})
\item propagative step $\mbox{\boldmath$H$}_p$ (\ref{eq:propagative_part}), using modified values near interfaces
\item diffusive half-step $\mbox{\boldmath$H$}_d$ (\ref{eq:int_diffu_part_F})
\end{itemize}
\end{enumerate}
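The interplay of the diffusive half-steps and the propagative step can be exercised on a scalar toy problem $u_t + a\,u_x = -s\,u$, a deliberately simplified analogue of the algorithm above in which the diffusive half-step reduces to an exact exponential factor and the propagative step to a Lax-Wendroff update (all parameters below are ours):

```python
import numpy as np

# Strang splitting for u_t + a u_x = -s u on a periodic grid:
# exact diffusive half-steps exp(-s dt/2) around a Lax-Wendroff step.
nx, a, s, cfl = 200, 1.0, 2.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = dx * np.arange(nx)
u = np.sin(2.0 * np.pi * x)
half = np.exp(-s * dt / 2.0)                 # exact diffusive half-step
for _ in range(int(round(1.0 / (a * dt)))):  # integrate up to t = 1 (one period)
    u *= half
    up, um = np.roll(u, -1), np.roll(u, 1)
    u = u - 0.5 * cfl * (up - um) + 0.5 * cfl ** 2 * (up - 2.0 * u + um)
    u *= half
u_exact = np.exp(-s) * np.sin(2.0 * np.pi * x)  # exact damped, advected profile
err = np.abs(u - u_exact).max()
```

Because the two half-steps combine into the exact factor $e^{-s\,\Delta t}$ per step, the splitting error reduces here to the propagative error alone, mirroring the exactness of the diffusive part noted in $\S$ \ref{sec:num:splitting}.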
\section{Numerical experiments}\label{sec:exp}
\noindent
{\it Configuration}\label{sec:exp:config}
In order to demonstrate the ability of the present method to be applied to a wide range of applications, the numerical tests will be run on two different transversely isotropic porous media. The medium $\Omega_0$ is composed of thin layers of epoxy and glass, strongly anisotropic if the wavelengths are large compared to the thickness of the layers \cite{CARCIONE96}. The medium $\Omega_1$ is water-saturated Berea sandstone, a sedimentary rock commonly encountered in petroleum engineering. The grains are predominantly sand sized and composed of quartz bonded by silica \cite{CARCIONE96,DAI95}.
The values of the physical parameters are given in table \ref{table:para_phy_ani}. The viscous characteristic lengths $\Lambda_1$ and $\Lambda_3$ are obtained by setting the Pride numbers $P_1 = P_3 = 0.5$. We also report in this table some useful values, such as phase velocities, critical frequencies, and quadrature parameters computed for each medium. The central frequency of the source is $f_0=200$ kHz, and the quadrature coefficients $\theta_{\ell}^i$, $a_{\ell}^i$, $i=1,3,$ are determined by nonlinear constrained optimization with $N=3$ memory variables. The error of model $\varepsilon_{mod}$ (\ref{eq:norme2_relative_error}) is also given. We note that the transition frequencies $f_{c1}$ and $f_{c3}$ are the same for both $\Omega_0$ and $\Omega_1$. In this particular case, the coefficients of the diffusive approximation are therefore also the same.
\begin{table}[htbp]
\begin{center}\footnotesize
\begin{tabular}{llll}
& Parameters & $\Omega_0$ & $\Omega_1$\\
\hline
\rule[-1mm]{0mm}{3mm} Saturating fluid & $\rho_f$ (kg/m$^3$) & $1040$ & $1040$\\
\rule[-1mm]{0mm}{3mm} & $\eta$ (Pa.s) & $10^{-3}$ & $10^{-3}$\\
\rule[-1mm]{0mm}{3mm} & $K_f$ (GPa) & $2.5$ & $2.5$\\
\rule[-1mm]{0mm}{3mm} Grain & $\rho_s$ (kg/m$^3$) & $1815$ & $2500$\\
\rule[-1mm]{0mm}{3mm} & $K_s$ (GPa) & $40$ & $80$\\
\rule[-1mm]{0mm}{3mm} Matrix & $\phi$ & $0.2$ & $0.2$\\
\rule[-1mm]{0mm}{3mm} & ${\cal T}_1$ & $2$ & $2$\\
\rule[-1mm]{0mm}{3mm} & ${\cal T}_3$ & $3.6$ & $3.6$\\
\rule[-1mm]{0mm}{3mm} & $\kappa_1$ (m$^2$) & $6.\,10^{-13}$ & $6.\,10^{-13}$\\
\rule[-1mm]{0mm}{3mm} & $\kappa_3$ (m$^2$) & $10^{-13}$ & $10^{-13}$\\
\rule[-1mm]{0mm}{3mm} & $c_{11}$ (GPa) & $39.4$ & $71.8$\\
\rule[-1mm]{0mm}{3mm} & $c_{12}$ (GPa) & $1$ & $3.2$\\
\rule[-1mm]{0mm}{3mm} & $c_{13}$ (GPa) & $5.8$ & $1.2$\\
\rule[-1mm]{0mm}{3mm} & $c_{33}$ (GPa) & $13.1$ & $53.4$\\
\rule[-1mm]{0mm}{3mm} & $c_{55}$ (GPa) & $3$ & $26.1$\\
\rule[-1mm]{0mm}{3mm} & $\Lambda_1$ (m) & $6.93\,10^{-6}$ & $2.19\,10^{-7}$\\
\rule[-1mm]{0mm}{3mm} & $\Lambda_3$ (m) & $3.79\,10^{-6}$ & $1.20\,10^{-7}$\\\hline
\rule[-1mm]{0mm}{4mm} Dispersion & $c_{pf}^{\infty}(0)$ (m/s) & $5244.40$ & $6004.31$\\
\rule[-1mm]{0mm}{4mm} & $c_{pf}(f_0,0)$ (m/s) & $5227.10$ & $5988.50$\\
\rule[-1mm]{0mm}{4mm} & $c_{pf}^{\infty}(\pi/2)$ (m/s) & $3583.24$ & $5256.03$\\
\rule[-1mm]{0mm}{4mm} & $c_{pf}(f_0,\pi/2)$ (m/s) & $3581.42$ & $5245.84$\\
\rule[-1mm]{0mm}{4mm} & $c_{ps}^{\infty}(0)$ (m/s) & $975.02$ & $1026.45$\\
\rule[-1mm]{0mm}{4mm} & $c_{ps}(f_0,0)$ (m/s) & $901.15$ & $949.33$\\
\rule[-1mm]{0mm}{4mm} & $c_{ps}^{\infty}(\pi/2)$ (m/s) & $604.41$ & $745.59$\\
\rule[-1mm]{0mm}{4mm} & $c_{ps}(f_0,\pi/2)$ (m/s) & $534.88$ & $661.32$\\
\rule[-1mm]{0mm}{4mm} & $c_{s}^{\infty}(0)$ (m/s) & $1368.36$ & $3484.00$\\
\rule[-1mm]{0mm}{4mm} & $c_{s}(f_0,0)$ (m/s) & $1361.22$ & $3470.45$\\
\rule[-1mm]{0mm}{4mm} & $c_{s}^{\infty}(\pi/2)$ (m/s) & $1388.53$ & $3522.07$\\
\rule[-1mm]{0mm}{4mm} & $c_{s}(f_0,\pi/2)$ (m/s) & $1381.07$ & $3508.05$\\
\rule[-1mm]{0mm}{4mm} & $f_{c1}$ (Hz) & $2.55\,10^{4}$ & $2.55\,10^{4}$\\
\rule[-1mm]{0mm}{4mm} & $f_{c3}$ (Hz) & $8.50\,10^{4}$ & $8.50\,10^{4}$\\ \hline
\rule[-1mm]{0mm}{4mm} Optimization & $\theta_1^1$ (rad/s) & $1.64\,10^5$ & $1.64\,10^5$\\
\rule[-1mm]{0mm}{4mm} & $\theta_2^1$ (rad/s) & $2.80\,10^6$ & $2.80\,10^6$\\
\rule[-1mm]{0mm}{4mm} & $\theta_3^1$ (rad/s) & $3.58\,10^7$ & $3.58\,10^7$\\
\rule[-1mm]{0mm}{4mm} & $a_1^1$ (rad$^{1/2}$/s$^{1/2}$) & $5.58\,10^2$ & $5.58\,10^2$\\
\rule[-1mm]{0mm}{4mm} & $a_2^1$ (rad$^{1/2}$/s$^{1/2}$) & $1.21\,10^3$ & $1.21\,10^3$\\
\rule[-1mm]{0mm}{4mm} & $a_3^1$ (rad$^{1/2}$/s$^{1/2}$) & $7.32\,10^3$ & $7.32\,10^3$\\
\rule[-1mm]{0mm}{4mm} & $\varepsilon_{mod}^1$ (\%) & $1.61$ & $1.61$\\
\rule[-1mm]{0mm}{4mm} & $\theta_1^3$ (rad/s) & $3.14\,10^5$ & $3.14\,10^5$\\
\rule[-1mm]{0mm}{4mm} & $\theta_2^3$ (rad/s) & $5.06\,10^7$ & $5.06\,10^7$\\
\rule[-1mm]{0mm}{4mm} & $\theta_3^3$ (rad/s) & $4.50\,10^6$ & $4.50\,10^6$\\
\rule[-1mm]{0mm}{4mm} & $a_1^3$ (rad$^{1/2}$/s$^{1/2}$) & $7.57\,10^2$ & $7.57\,10^2$\\
\rule[-1mm]{0mm}{4mm} & $a_2^3$ (rad$^{1/2}$/s$^{1/2}$) & $8.79\,10^3$ & $8.79\,10^3$\\
\rule[-1mm]{0mm}{4mm} & $a_3^3$ (rad$^{1/2}$/s$^{1/2}$) & $1.38\,10^3$ & $1.38\,10^3$\\
\rule[-2mm]{0mm}{5mm} & $\varepsilon_{mod}^3$ (\%) & $0.53$ & $0.53$\\ \hline
\end{tabular}
\end{center}
\caption{Physical parameters of the transversely isotropic media used in the numerical experiments. The phase velocities $c_{pf}(f_0,\varphi)$, $c_{ps}(f_0,\varphi)$ and $c_{s}(f_0,\varphi)$ are computed at $f = f_0 = 200$ kHz when the wavevector $\mbox{\boldmath$k$}$ makes an angle $\varphi$ with the horizontal $x$-axis, and $c_{pf}^{\infty}(\varphi)$, $c_{ps}^{\infty}(\varphi)$, $c_{s}^{\infty}(\varphi)$ denote the high-frequency limits of the phase velocities.}
\label{table:para_phy_ani}
\end{table}
In all the numerical simulations, the time step is computed from the physical parameters of the media through relations (\ref{eq:CFL_dsplit}), setting the CFL number $\Upsilon=0.95$. The numerical experiments are performed on an Intel Core i7 processor at $2.80$ GHz.
In the first and third tests, the computational domain $[-0.15,0.15]^2$ m is discretized with $N_x = N_z = 2250$ grid nodes in each direction, which amounts to 20 points per slow compressional wavelength in $\Omega_0$. In the other tests, the computational domain $[-0.1,0.1]^2$ m is discretized with $N_x = N_z = 1500$, which also amounts to 20 points per slow compressional wavelength in $\Omega_0$ and in $\Omega_1$.\\
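For orientation, the quoted time steps can be recovered from the grid spacing and the fastest phase velocity of table \ref{table:para_phy_ani}. The sketch below assumes the classical one-dimensional stability relation $\Delta t = \Upsilon\,\Delta x / c_{\max}$, which is only a plausible stand-in for relation (\ref{eq:CFL_dsplit}):

```python
# Hedged sketch: assumes dt = CFL * dx / c_max, a plausible stand-in
# for relation (eq:CFL_dsplit); the function name is ours.
def time_step(domain_length, n_nodes, c_max, cfl=0.95):
    dx = domain_length / n_nodes   # grid spacing
    return cfl * dx / c_max        # stability-limited time step

# Test 1: domain [-0.15, 0.15] m, 2250 nodes, fastest wave c_pf^inf(0) of Omega_0
dt1 = time_step(0.30, 2250, 5244.40)   # ≈ 2.41e-8 s

# Tests 2-3: domain [-0.1, 0.1] m, 1500 nodes, fastest wave of Omega_0/Omega_1
dt2 = time_step(0.20, 1500, 6004.31)   # ≈ 2.11e-8 s
```

With these inputs the sketch returns $\Delta t \simeq 2.41\,10^{-8}$ s and $2.11\,10^{-8}$ s, matching the values quoted in the tests below, which supports the assumed form of the relation.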
\noindent
{\it Test 1: homogeneous medium}\label{sec:exp:test1}
In the first test, the homogeneous medium $\Omega_0$ (table \ref{table:para_phy_ani}) is excited by a source point located at $(0 \mbox{ m},0 \mbox{ m})$. The only non-zero component of the forcing $\mbox{\boldmath$F$}$ (\ref{eq:forcing_DA}) is $G_{\sigma_{13}} = g(t)\,h(x,z)$, where $g(t)$ is a Ricker signal with central frequency $f_0$ and time-shift $t_0=1/f_0=10^{-5}$ s:
\begin{equation}
g(t) =
\left\lbrace
\begin{array}{ll}
\displaystyle \left( 2\,\pi^2\,f_0^2\,\left( t-t_0\right) ^2-1\right) \,\exp\left( -\pi^2\,f_0^2\,(t-t_0)^2\right) \; & \displaystyle \mbox{if}\;0\leqslant t\leqslant 2\,t_0,\\
[10pt]
\displaystyle 0 & \displaystyle \mbox{otherwise},
\end{array}
\right.
\label{eq:ricker}
\end{equation}
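Equation (\ref{eq:ricker}) can be evaluated directly; in this minimal sketch the function name is ours, and the default $f_0 = 10^{5}$ Hz simply reflects $t_0 = 1/f_0 = 10^{-5}$ s as stated above:

```python
import math

def ricker(t, f0=1.0e5):
    """Ricker wavelet (eq:ricker): central frequency f0, time-shift t0 = 1/f0."""
    t0 = 1.0 / f0
    if 0.0 <= t <= 2.0 * t0:
        a = (math.pi * f0 * (t - t0)) ** 2   # pi^2 f0^2 (t - t0)^2
        return (2.0 * a - 1.0) * math.exp(-a)
    return 0.0
```

At the shift $t = t_0$ the wavelet reaches its trough $g(t_0) = -1$, and it vanishes identically outside $[0, 2\,t_0]$.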
and $h(x,z)$ is a truncated Gaussian centered at the point $(0,0)$, of radius $R_0=6.56\,10^{-3}$ m and width $\Sigma=3.28\,10^{-3}$ m:
\begin{equation}
h(x,z) =
\left\lbrace
\begin{array}{ll}
\displaystyle \frac{1}{\pi\,\Sigma^2}\,\exp\left(-\frac{x^2+z^2}{\Sigma^2}\right) \; & \mbox{if}\;0\leqslant x^2+z^2\leqslant R_0^2,\\
[10pt]
\displaystyle 0 & \mbox{otherwise}.
\end{array}
\right.
\label{eq:gaussienne}
\end{equation}
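The spatial envelope (\ref{eq:gaussienne}) is equally direct; again the function name is ours:

```python
import math

def gaussian_source(x, z, R0=6.56e-3, Sigma=3.28e-3):
    """Truncated Gaussian h(x, z) of eq:gaussienne, centred at (0, 0)."""
    r2 = x * x + z * z
    if r2 <= R0 * R0:
        return math.exp(-r2 / Sigma ** 2) / (math.pi * Sigma ** 2)
    return 0.0   # cut off beyond the radius R0
```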
The time step is $\Delta t=2.41\,10^{-8}$ s. We use a truncated Gaussian for $h(x,z)$ rather than a Dirac distribution to avoid spurious numerical artifacts localized around the source point. This source generates cylindrical waves of all types: fast and slow quasi-compressional waves and quasi-shear waves, denoted by $qP_f$, $qP_s$ and $qS$, respectively, in figure \ref{fig:2D_homogene_ani}. All three waves are observed in the pressure field: contrary to the isotropic case, where the shear wave carries no pressure, the $qS$ wave is visible in the pressure field.
A comparison is made with the theoretical wavefronts deduced from the dispersion analysis (section \ref{sec:phys:dispersion}) and the resolution of (\ref{eq:syst_dispersion_gen}). They are denoted by black dotted lines in figure \ref{fig:2D_homogene_ani}. The computed waves are well positioned at the final instant $t_1 \simeq 2.72\,10^{-5}$ s (corresponding to 1125 time steps). No special treatment of outgoing waves (with PML, for instance) is applied, since the simulation is stopped before the waves have reached the edges of the computational domain. The cusp of the shear wave is seen in the numerical solution.\\
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
$p$ & zoom\\
\hspace{-0.3cm}
\includegraphics[scale=0.36]{p_homogene_ani.eps} &
\hspace{-0.3cm}
\includegraphics[scale=0.40]{p_homogene_ani_zoom.eps}
\end{tabular}
\end{center}
\caption{test 1. Fast and slow quasi-compressional waves, respectively $qP_f$ and $qP_s$, and quasi-shear wave $qS$ emitted by a source point at $(0 \mbox{ m},0 \mbox{ m})$. Pressure at $t_1 \simeq 2.72\,10^{-5}$ s.}
\label{fig:2D_homogene_ani}
\end{figure}
\noindent
{\it Test 2: diffraction of a plane wave by a plane interface}\label{sec:exp:test2}
In the second test, the source is a plane fast compressional wave traveling in the positive direction of the $x$-axis, whose wavevector $\mbox{\boldmath$k$}$ is parallel to the direction of propagation. Its time evolution is the same Ricker signal as in the first test (\ref{eq:ricker}). We use periodic boundary conditions at the top and at the bottom of the domain. The validity of the method is checked in the particular case of heterogeneous transversely isotropic media, where a semi-analytical solution can be obtained easily. The media $\Omega_0$ and $\Omega_1$ are separated by a vertical plane interface at $x = 0$ m. The incident $P_f\,$-wave ($Ip_f$) propagates in the medium $\Omega_1$. The time step is $\Delta t=2.11\,10^{-8}$ s. Figure \ref{fig:validation_interface_ani_carte} shows a snapshot of the pressure at $t_1 \simeq 1.48\,10^{-5}$ s (corresponding to 750 time steps) on the whole computational domain. The reflected fast and slow quasi-compressional waves, denoted respectively $Rp_f$ and $Rp_s$, propagate in the medium $\Omega_1$; the transmitted fast and slow quasi-compressional waves, denoted respectively $Tp_f$ and $Tp_s$, propagate in the medium $\Omega_0$.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.40]{p_droite_init_ani.eps} &
\includegraphics[scale=0.40]{p_droite_ani.eps}\\
\end{tabular}
\end{center}
\caption{test 2. Snapshot of pressure at initial time (a) and at $t_1 \simeq 1.48\,10^{-5}$ s (b). The plane interface is denoted by a straight black line, separating $\Omega_1$ (on the left) and $\Omega_0$ (on the right).}
\label{fig:validation_interface_ani_carte}
\end{figure}
In this case, we compute the exact solution of Biot-DA thanks to Fourier tools and the poroelastic equations; a general overview of the analytical solution is given in \ref{SecAppExact}. Figure \ref{fig:validation_interface_ani_coupe} shows the excellent agreement between the analytical and the numerical values of the pressure along the line $z=0$ m. Despite the relative simplicity of this configuration (1D evolution of the waves and absence of shear waves), it can be viewed as a validation of the numerical method, which is fully 2D whatever the geometrical setting.\\
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
Pressure & Zoom on the slow compressional waves\\
\includegraphics[scale=0.33]{droite_coupe_P_ani.eps} &
\includegraphics[scale=0.33]{droite_coupe_P_zoom_ani.eps}\\
\end{tabular}
\end{center}
\caption{test 2. Pressure along the line $z=0$ m; the vertical line denotes the interface. Comparison between the numerical values (circles) and the analytical values (solid line) of $p$ at $t_1 \simeq 1.48\,10^{-5}$ s.}
\label{fig:validation_interface_ani_coupe}
\end{figure}
\noindent
{\it Test 3: source point and plane interface or sinusoidal interface}\label{sec:exp:test3}
In the previous test, the configuration was fully 1D, but more complex geometries can be handled on a Cartesian grid thanks to the immersed interface method. As an example, the media $\Omega_0$ and $\Omega_1$ are separated by a plane interface inclined at $15$ degrees to the horizontal $x$-axis, passing through the point $(0 \mbox{ m},-0.004 \mbox{ m})$. The homogeneous medium $\Omega_1$ is excited by the source point described in test 1. This source emits cylindrical waves which interact with the medium $\Omega_0$. The time step is $\Delta t = 2.11\,10^{-8}$ s. The snapshot of the pressure at $t_1 \simeq 2.53\,10^{-5}$ s (corresponding to 1200 time steps) and the time evolution of the pressure at receivers $R_0$ $(0.042 \mbox{ m},0.0068 \mbox{ m})$ in $\Omega_0$ and $R_1$ $(0.048 \mbox{ m},0.0071 \mbox{ m})$ in $\Omega_1$ are represented in figure \ref{fig:droite_incline}.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.37]{p_droite_incline_ani.eps} &
\includegraphics[scale=0.37]{p_droite_incline_ani_R.eps}
\end{tabular}
\end{center}
\caption{test 3. Snapshot of the pressure at $t_1 \simeq 2.53\,10^{-5}$ s (a). The plane interface separating the media $\Omega_0$ and $\Omega_1$ is denoted by a straight black line. Time evolution of the pressure (b) at receiver $R_0$ in $\Omega_0$ (red) and at receiver $R_1$ in $\Omega_1$ (blue).}
\label{fig:droite_incline}
\end{figure}
The plane interface can easily be replaced, for instance, by a sinusoidal interface with equation
\begin{equation}
-\sin\theta\,(x-x_s) + \cos\theta\,(z-z_s) - A_s\,\sin\left(\omega_s\,\left(\cos\theta\,(x-x_s) + \sin\theta\,(z-z_s)\right)\right) = 0,
\label{eq:sinus}
\end{equation}
with $x_s = 0$ m, $z_s = -0.027$ m, $A_s = 0.01$ m, $\omega_s = 50\,\pi$ rad/m, $\theta = \pi/12$ rad. The snapshot of the pressure and the time evolution of the pressure at receivers $R_0$ $(0.036 \mbox{ m},-0.031 \mbox{ m})$ in $\Omega_0$ and $R_1$ $(0.036 \mbox{ m},-0.014 \mbox{ m})$ in $\Omega_1$ are represented in figure \ref{fig:sinus_incline}.
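Equation (\ref{eq:sinus}) describes the interface as the zero level set of a sinusoid expressed in axes rotated by $\theta$ about $(x_s,z_s)$; a minimal sketch evaluating it (function name ours):

```python
import math

def interface_level_set(x, z, xs=0.0, zs=-0.027, As=0.01,
                        ws=50.0 * math.pi, theta=math.pi / 12):
    """Left-hand side of eq:sinus: zero exactly on the sinusoidal interface.
    (u, v) are the coordinates rotated by theta about (xs, zs)."""
    u = math.cos(theta) * (x - xs) + math.sin(theta) * (z - zs)
    v = -math.sin(theta) * (x - xs) + math.cos(theta) * (z - zs)
    return v - As * math.sin(ws * u)
```

In particular $u = v = 0$ at $(x_s, z_s)$, so the point $(x_s, z_s)$ lies on the interface; the sign of the returned value indicates on which side of the interface a grid node lies.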
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.37]{p_sinus_incline_ani.eps} &
\includegraphics[scale=0.37]{p_sinus_incline_ani_R.eps}
\end{tabular}
\end{center}
\caption{test 3. Snapshot of the pressure at $t_1 \simeq 2.53\,10^{-5}$ s (a). The sinusoidal interface separating the media $\Omega_0$ and $\Omega_1$ is denoted by a black line. Time evolution of the pressure (b) at receiver $R_0$ in $\Omega_0$ (red) and at receiver $R_1$ in $\Omega_1$ (blue).}
\label{fig:sinus_incline}
\end{figure}
In both cases, no spurious diffraction is induced by the stair-step representation of the interface, thanks to the immersed interface method. Moreover, classical wave conversions and scattering phenomena are observed: reflected, transmitted and Stoneley waves. The non-circular shape of the transmitted waves illustrates the strong anisotropy of the medium $\Omega_0$.\\
\noindent
{\it Test 4: multiple ellipsoidal scatterers}\label{sec:exp:test4}
To illustrate the ability of the proposed numerical strategy to handle complex geometries, $200$ elliptical scatterers of medium $\Omega_1$, with major and minor radii of $0.025$ m and $0.02$ m, are randomly distributed in a matrix of medium $\Omega_0$. The computational domain is $[-0.8,0.8]^2$ m, hence the surface concentration of scatterers is $25$ \%. A uniform distribution of scatterers is used. The source is the same plane fast compressional wave as in the second test, and we use periodic boundary conditions at the top and at the bottom of the domain.
The time step is $\Delta t=3.37\,10^{-7}$ s. The pressure is represented at the initial time and at time $t_1 \simeq 1.43\,10^{-4}$ s (corresponding to 425 time steps) in figure \ref{fig:ellipses_ani}. This simulation took approximately $11.5$ h of preprocessing and $8.5$ h of time-stepping. Similar numerical experiments are also performed for surface concentrations of scatterers of $10$ \% and $15$ \%.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.42]{pression_multidiffusion_init.eps}&
\includegraphics[scale=0.42]{pression_multidiffusion.eps}
\end{tabular}
\end{center}
\caption{test 4. Multiple scattering in random media. Snapshot of $p$ at the initial time (a) and at time $t_1 \simeq 1.43\,10^{-4}$ s (b). The matrix is $\Omega_0$, whereas the $200$ scatterers are $\Omega_1$.}
\label{fig:ellipses_ani}
\end{figure}
At each time step, the components of $\mbox{\boldmath$U$}_{ij}^n$ are stored inside the subdomain containing the inclusions. For this purpose, a uniform network consisting of $N_l = 800$ lines and $N_c = 25$ columns of receivers is placed in the subdomain. The position of the receivers is given by $(x_i,z_j)$, where $i=0,\cdots,N_c - 1$ and $j=0,\cdots,N_l - 1$. The field $\mbox{\boldmath$U$}_{ij}^n$ recorded on each array (each line of receivers), represented in figure \ref{fig:sismo}-a, corresponds to a field propagating along one horizontal line of receivers. A main wave train is clearly visible, followed by a coda. Averaging the time histories of these $N_l$ arrays gives a coherent field propagating in the $x$ direction:
\begin{equation}
\overline{\mbox{\boldmath$U$}}_i^n = \frac{1}{N_l}\,\sum\limits_{j=0}^{N_l - 1}\mbox{\boldmath$U$}_{ij}^n.
\label{eq:coherent_field}
\end{equation}
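With the recordings stored as an array of shape $(N_t, N_c, N_l)$ (a layout assumption of ours), the averaging (\ref{eq:coherent_field}) is a single mean over the last axis; applied to incoherent noise, it already illustrates the $1/\sqrt{N_l}$ reduction of the coda:

```python
import numpy as np

# Hypothetical layout: U[n, i, j] = field at time step n, column i, line j.
# Incoherent noise stands in for the coda; the seed makes the sketch deterministic.
rng = np.random.default_rng(0)
Nt, Nc, Nl = 100, 25, 800
U = rng.standard_normal((Nt, Nc, Nl))

# eq:coherent_field -- average over the N_l lines of receivers
U_coherent = U.mean(axis=2)   # shape (Nt, Nc); noise std drops by 1/sqrt(Nl)
```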
On the coherent seismogram thus obtained, represented in figure \ref{fig:sismo}-b, the coda has disappeared, and the main wave train behaves like a plane wave propagating in a homogeneous (but dispersive and attenuating) medium. The coherent phase velocity $c(\omega)$, represented in figure \ref{fig:effective_properties}-a, is computed by applying a $\mathfrak{p}-\omega$ transform to the space-time data of the coherent field (\ref{eq:coherent_field}), where $\mathfrak{p} = 1/c$ is the slowness of the waves \cite{MECHAN81,MOKHTAR88}. The horizontal lines represent a simple average of the phase velocities weighted by the concentration. The coherent attenuation $\alpha(\omega)$ is estimated from the decrease in the amplitude spectrum of the coherent field during the propagation of the waves; see figure \ref{fig:effective_properties}-b. An error estimate is also deduced, represented in figure \ref{fig:effective_properties} by vertical lines.
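The sketch below is not the $\mathfrak{p}-\omega$ transform of \cite{MECHAN81,MOKHTAR88}; it is a simplified two-receiver, frequency-domain stand-in illustrating the same principle on synthetic spectra: attenuation from the amplitude decay and phase velocity from the phase lag. All names and values are illustrative:

```python
import numpy as np

c_true, alpha_true = 1500.0, 2.0     # m/s, Np/m (illustrative)
x1, x2 = 0.100, 0.101                # receiver offsets (m); dx kept small
w = 2.0 * np.pi * np.linspace(1e4, 1e5, 10)   # angular frequencies

# Spectra of a damped plane wave U(x, w) = exp(-alpha x) exp(-i w x / c)
U1 = np.exp(-alpha_true * x1) * np.exp(-1j * w * x1 / c_true)
U2 = np.exp(-alpha_true * x2) * np.exp(-1j * w * x2 / c_true)

dx = x2 - x1
alpha_est = np.log(np.abs(U1) / np.abs(U2)) / dx   # amplitude decay
dphi = -np.angle(U2 / U1)    # phase lag w*dx/c, kept below pi by small dx
c_est = w * dx / dphi        # phase velocity at each frequency
```

On these synthetic spectra the estimates recover $c$ and $\alpha$ exactly; on real coherent fields, phase unwrapping and noise make the full $\mathfrak{p}-\omega$ transform the more robust tool.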
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.35]{sismo_ligne.eps} &
\includegraphics[scale=0.35]{sismo_somme.eps}
\end{tabular}
\end{center}
\caption{test 4. Incident plane $qP_f$-wave in a medium with $25\%$ inclusion concentration. (a): pressure recorded along an array, (b): coherent pressure obtained after summation.}
\label{fig:sismo}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b)\\
\includegraphics[scale=0.51]{vitesse_phase_effective.eps} &
\includegraphics[scale=0.51]{attenuation_effective.eps}
\end{tabular}
\end{center}
\caption{test 4. Effective phase velocity (a) and effective attenuation (b) at various inclusion concentrations. The vertical lines represent the error bars. The horizontal lines in (a) give the average phase velocity weighted by the concentration.}
\label{fig:effective_properties}
\end{figure}
\section{Conclusion}\label{sec:conclu}
An explicit finite-difference method has been developed here to simulate transient poroelastic waves in the full range of validity of the Biot-JKD model, which involves order $1/2$ fractional derivatives. A diffusive representation transforms the fractional derivatives, non-local in time, into a continuum of local problems, approximated by quadrature formulae. The Biot-JKD model is then replaced by an approximate Biot-DA model, much more tractable numerically. The coefficients of the diffusive approximation are determined by a nonlinear constrained optimization procedure, leading to a small number of memory variables. The hyperbolic Biot-DA system of partial differential equations is discretized using various tools of scientific computing: Strang splitting, a fourth-order ADER scheme, and an immersed interface method. This enables the propagation of transient waves in transversely isotropic porous media to be treated efficiently and accurately.
Some future lines of research are suggested:
\begin{itemize}
\item \emph{Multiple scattering.} Many theoretical methods of multiple scattering have been developed to determine the effective wavenumber of media with random scatterers; see for instance the Independent Scattering Approximation and the Waterman-Truell method \cite{WATERMAN61}. The main drawback of these methods is that their validity is restricted to small concentrations of scatterers, typically less than 10 \%. On the contrary, numerical methods do not suffer from such a limitation if suitable efforts are made. In particular, the errors due to the discretization (numerical dispersion, numerical dissipation, spurious diffraction at interfaces, ...) must be much smaller than the physical quantities of interest. In \cite{CHEKROUN12}, numerical simulations were used in the elastic case to estimate the accuracy of standard theoretical models, and also to show the improvement induced by recent models of multiple scattering \cite{CONOIR10}. As shown in test 4 of $\S$ \ref{sec:exp}, the numerical tools presented here make possible a similar study of random poroelastic media and comparisons with theoretical models \cite{TOURNAT04,LUPPE08}.
However, realistic configurations would involve approximately $1500$ scatterers, and sizing the experiments leads to $N_x \times N_z = 10000^2$ grid nodes and $10000$ time iterations. Consequently, the numerical method has to be parallelized, for instance with the Message Passing Interface (MPI).
\item \emph{Thermal boundary layer.} In cases where the saturating fluid is a gas, the effects of thermal expansion of both the pore fluid and the matrix have to be taken into account. In the HF regime, the thermal exchanges between the fluid and solid phases occur in a small layer close to the surface of the pores. In this case, a dynamic thermal permeability is introduced \cite{LAFARGE97}, leading in the time domain to an additional shifted fractional derivative of order $1/2$. The numerical method developed in this paper can be applied without difficulty by introducing additional memory variables.
\item \emph{Fractional derivatives in space.} The Biot theory is very efficient at predicting the macroscopic behavior of long-wavelength sound propagation in porous media with relatively simple microgeometries. However, it falls far short of correctly describing the coarse-grained dynamics of the medium when the microgeometry becomes more complex, for instance fractal. For rigid-framed porous media permeated by a viscothermal fluid, a generalized macroscopic nonlocal theory of sound propagation has been developed to take into account not only temporal dispersion, but also spatial dispersion \cite{NEMATI12}. In this case, the coefficients depend on the frequency and on the wavenumber. In the space-time domain, this introduces not only time-fractional derivatives, but also space-fractional derivatives. Numerical modeling of space-fractional differential equations has been addressed by several authors \cite{LIU04,TADJERAN06}, using a Gr\"unwald-Letnikov approximation. The diffusive approximation of such derivatives constitutes an interesting challenge.
\end{itemize}
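The Gr\"unwald-Letnikov approximation mentioned above can be sketched in one variable for an order-$1/2$ derivative and checked against the closed form $D^{1/2}t = 2\sqrt{t/\pi}$; the function name and grid are ours:

```python
import numpy as np

def gl_half_derivative(f_vals, h, alpha=0.5):
    """Grunwald-Letnikov approximation of the order-alpha fractional
    derivative at the last grid point:
        D^alpha f(t_n) ~ h^(-alpha) * sum_k g_k f(t_{n-k}),
    with g_k = (-1)^k binom(alpha, k) built by recurrence."""
    n = len(f_vals)
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1.0 - alpha) / k
    return h ** (-alpha) * np.dot(g, f_vals[::-1])

# Check on f(t) = t at t = 1: exact half-derivative is 2*sqrt(t/pi) ~ 1.1284
t = np.linspace(0.0, 1.0, 4001)
approx = gl_half_derivative(t, t[1] - t[0])
```

The scheme is first-order accurate in $h$, so on this grid the approximation agrees with $2/\sqrt{\pi}$ to about three digits; the same coefficients apply to the space-fractional case.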
\section*{Acknowledgments}
The authors wish to thank Dr Mathieu Chekroun (LAUM, France) for his insights about multiple scattering and for computing the coherent phase velocity and attenuation with the $\mathfrak{p}-\omega$ transform in test 4.
\begin{appendix}
\section{Proof of proposition \ref{prop:nrjJKD}} \label{annexe:proof_nrj}
Equation (\ref{eq:biot_dynamique_a}) is multiplied by $\mbox{\boldmath$v_s$}^T$ and integrated over $\mathbb{R}^2$:
\begin{equation}
\int_{\mathbb{R}^2}\left( \rho\,\mbox{\boldmath$v_s$}^T\,\frac{\partial\,\mbox{\boldmath$v_s$}}{\partial\,t} + \rho_f\,\mbox{\boldmath$v_s$}^T\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} - \mbox{\boldmath$v_s$}^T\,(\nabla.\underline{\mbox{\boldmath$\sigma$}})\right)\,dx\,dz = 0.
\label{eq:nrj_proof1}
\end{equation}
The first term in (\ref{eq:nrj_proof1}) is written
\begin{equation}
\int_{\mathbb{R}^2}\rho\,\mbox{\boldmath$v_s$}^T\,\frac{\partial\,\mbox{\boldmath$v_s$}}{\partial\,t}\,dx\,dz = \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\rho\,\mbox{\boldmath$v_s$}^T\,\mbox{\boldmath$v_s$}\,dx\,dz.
\label{eq:nrj_proof2}
\end{equation}
Integrating by parts and using (\ref{eq:biot_comportement_bis}), we obtain
\begin{equation}
\begin{array}{l}
\displaystyle -\int_{\mathbb{R}^2}\mbox{\boldmath$v_s$}^T\,(\nabla.\underline{\mbox{\boldmath$\sigma$}})\,dx\,dz \displaystyle = \int_{\mathbb{R}^2}\mbox{\boldmath$\sigma$}^T\,\frac{\partial\,\mbox{\boldmath$\varepsilon$}}{\partial\,t}\,dx\,dz = \displaystyle \int_{\mathbb{R}^2}\mbox{\boldmath$\sigma$}^T\,\left(\mbox{\boldmath$C$}^{-1}\,\frac{\partial\,\mbox{\boldmath$\sigma$}}{\partial\,t} + \mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,\frac{\partial\,p}{\partial\,t}\right)\,dx\,dz,\\
[12pt]
\hspace{0.6cm} \displaystyle = \frac{d}{dt}\,\frac{1}{2}\, \int_{\mathbb{R}^2}\mbox{\boldmath$\sigma$}^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\sigma$}\,dx\,dz + \int_{\mathbb{R}^2}\mbox{\boldmath$\sigma$}^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,\frac{\partial\,p}{\partial\,t}\,dx\,dz,\\
[15pt]
\hspace{0.6cm} \displaystyle = \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\left(\mbox{\boldmath$\sigma$}^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\sigma$} + 2\,\mbox{\boldmath$\sigma$}^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,p\right)\,dx\,dz - \int_{\mathbb{R}^2}\left( \frac{\partial\,\mbox{\boldmath$\sigma$}}{\partial\,t}\right) ^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,p\,dx\,dz.
\end{array}
\label{eq:nrj_proof3}
\end{equation}
Equation (\ref{eq:biot_dynamique_b}) is multiplied by $\mbox{\boldmath$w$}^T$ and integrated
\begin{equation}
\begin{array}{l}
\displaystyle \int_{\mathbb{R}^2} \left\lbrace \rho_f\,\mbox{\boldmath$w$}^T\,\frac{\partial\,\mbox{\boldmath$v_s$}}{\partial\,t} + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\rho_{wi}\right)\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} + \mbox{\boldmath$w$}^T\,\nabla p\right.\\
[13pt]
\left.+ \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\frac{\eta}{\kappa_i}\,\frac{1}{\Omega_i}\,(D+\Omega_i)^{1/2}\right)\,\mbox{\boldmath$w$} \right\rbrace \,dx\,dz = 0.
\end{array}
\label{eq:nrj_proof4}
\end{equation}
The second term in (\ref{eq:nrj_proof4}) can be written
\begin{equation}
\int_{\mathbb{R}^2}\mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\rho_{wi}\right)\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t}\,dx\,dz = \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\rho_{wi}\right)\,\mbox{\boldmath$w$}\,dx\,dz.
\label{eq:nrj_proof5}
\end{equation}
Integrating by parts the third term of (\ref{eq:nrj_proof4}), we obtain
\begin{equation}
\begin{array}{l}
\displaystyle \int_{\mathbb{R}^2}\mbox{\boldmath$w$}^T\,\nabla p\,dx\,dz \displaystyle = -\int_{\mathbb{R}^2}p\,\nabla . \mbox{\boldmath$w$}\,dx\,dz,\\
[15pt]
\hspace{0.1cm} \displaystyle = \int_{\mathbb{R}^2}p\,\frac{\partial\,\xi}{\partial\,t}\,dx\,dz = \int_{\mathbb{R}^2}p\,\left( \frac{1}{m}\,\frac{\partial\,p}{\partial\,t} + \mbox{\boldmath$\beta$}^T\,\frac{\partial\,\mbox{\boldmath$\varepsilon$}}{\partial\,t} \right)\,dx\,dz,\\
[15pt]
\hspace{0.1cm} = \displaystyle \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\frac{1}{m}\,p^2\,dx\,dz + \int_{\mathbb{R}^2}p\,\mbox{\boldmath$\beta$}^T\,\left( \mbox{\boldmath$C$}^{-1}\,\frac{\partial\,\mbox{\boldmath$\sigma$}}{\partial\,t} + \mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,\frac{\partial\,p}{\partial\,t}\right)\,dx\,dz,\\
[20pt]
\hspace{0.1cm} = \displaystyle \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\frac{1}{m}\,p^2\,dx\,dz + \int_{\mathbb{R}^2}\mbox{\boldmath$\beta$}^T\,\mbox{\boldmath$C$}^{-1}\,\frac{\partial\,\mbox{\boldmath$\sigma$}}{\partial\,t}\,p\,dx\,dz + \int_{\mathbb{R}^2}\mbox{\boldmath$\beta$}^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,p\,\frac{\partial\,p}{\partial\,t}\,dx\,dz,\\
[20pt]
\hspace{0.1cm} = \displaystyle \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\frac{1}{m}\,p^2\,dx\,dz + \int_{\mathbb{R}^2}\mbox{\boldmath$\beta$}^T\,\mbox{\boldmath$C$}^{-1}\,\frac{\partial\,\mbox{\boldmath$\sigma$}}{\partial\,t}\,p\,dx\,dz + \frac{d}{dt}\,\frac{1}{2}\,\int_{\mathbb{R}^2}\mbox{\boldmath$\beta$}^T\,\mbox{\boldmath$C$}^{-1}\,\mbox{\boldmath$\beta$}\,p^2\,dx\,dz.
\end{array}
\label{eq:nrj_proof6}
\end{equation}
We add (\ref{eq:nrj_proof1}) and the first three terms of (\ref{eq:nrj_proof4}). Using the symmetry of $\mbox{\boldmath$C$}$, there remains
\begin{equation}
\int_{\mathbb{R}^2}\rho_f\,\left( \mbox{\boldmath$v_s$}^T\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} + \mbox{\boldmath$w$}^T\,\frac{\partial\,\mbox{\boldmath$v_s$}}{\partial\,t}\right) \,dx\,dz = \frac{d}{dt}\,\frac{1}{2}\int_{\mathbb{R}^2}2\,\rho_f\,\mbox{\boldmath$v_s$}^T\,\mbox{\boldmath$w$}\,dx\,dz.
\label{eq:nrj_proof7}
\end{equation}
Equations (\ref{eq:derivee_frac}) and (\ref{eq:nrj_proof1})-(\ref{eq:nrj_proof7}) yield
\begin{equation}
\frac{d}{dt}\,(E_1 + E_2) = -\int_{\mathbb{R}^2}\int_0^{\infty}\frac{\eta}{\pi\,\sqrt{\theta}}\,\mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\frac{1}{\kappa_i\,\sqrt{\Omega_i}}\right)\,\mbox{\boldmath$\psi$}\,d\theta\,dx\,dz.
\label{eq:nrj_proof8}
\end{equation}
To calculate the right-hand side of (\ref{eq:nrj_proof8}), equation (\ref{eq:EDO_psi_a}) is multiplied successively by $\mbox{\boldmath$w$}^T$ and $\mbox{\boldmath$\psi$}^T$:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle \mbox{\boldmath$w$}^T\,\frac{\partial\,\mbox{\boldmath$\psi$}}{\partial\,t} - \mbox{\boldmath$w$}^T\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\theta + \Omega_i\right)\,\mbox{\boldmath$\psi$} - \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\Omega_i\right)\,\mbox{\boldmath$w$} = \mbox{\boldmath$0$},\\
[10pt]
\displaystyle \mbox{\boldmath$\psi$}^T\,\frac{\partial\,\mbox{\boldmath$\psi$}}{\partial\,t} - \mbox{\boldmath$\psi$}^T\,\frac{\partial\,\mbox{\boldmath$w$}}{\partial\,t} + \mbox{\boldmath$\psi$}^T\,\mathrm{diag}\left(\theta + \Omega_i\right)\,\mbox{\boldmath$\psi$} - \mbox{\boldmath$\psi$}^T\,\mathrm{diag}\left(\Omega_i\right)\,\mbox{\boldmath$w$} = \mbox{\boldmath$0$}.
\end{array}
\right.
\label{eq:nrj_proof10}
\end{equation}
Subtracting the second equation of (\ref{eq:nrj_proof10}) from the first yields
\begin{equation}
\begin{array}{ll}
\mbox{\boldmath$\psi$}^T\,\mathrm{diag}\left(\theta + 2\,\Omega_i\right)\,\mbox{\boldmath$w$} = & \displaystyle \frac{\partial}{\partial\,t}\,\frac{1}{2}\,(\mbox{\boldmath$w$}-\mbox{\boldmath$\psi$})^T\,(\mbox{\boldmath$w$}-\mbox{\boldmath$\psi$})\\
[10pt]
& + \mbox{\boldmath$\psi$}^T\,\mathrm{diag}\left(\theta + \Omega_i\right)\,\mbox{\boldmath$\psi$} + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\Omega_i\right)\,\mbox{\boldmath$w$}.
\end{array}
\label{eq:nrj_proof11}
\end{equation}
Substituting (\ref{eq:nrj_proof11}) in (\ref{eq:nrj_proof8}) leads to the relation (\ref{eq:dEdtJKD})
\begin{equation}
\begin{array}{ll}
\displaystyle \frac{d}{dt}\,(E_1 + E_2 + E_3) = & \displaystyle -\int_{\mathbb{R}^2}\int_0^{\infty}\frac{\eta}{\pi\,\sqrt{\theta}}\, \displaystyle \left\lbrace \mbox{\boldmath$\psi$}^T\,\mathrm{diag}\left(\frac{\theta+\Omega_i}{\kappa_i\,\sqrt{\Omega_i}\,(\theta+2\,\Omega_i)}\right)\,\mbox{\boldmath$\psi$}\right.\\
[15pt]
& \displaystyle \left. + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\frac{\Omega_i}{\kappa_i\,\sqrt{\Omega_i}\,(\theta+2\,\Omega_i)}\right)\,\mbox{\boldmath$w$} \right\rbrace\,d\theta\,dx\,dz.
\end{array}
\label{eq:nrj_proof12}
\end{equation}
It remains to prove that $E$ (\ref{eq:E1E2E3_JKD}) is a positive definite quadratic form. Concerning $E_1$, we write
\begin{equation}
\rho\,\mbox{\boldmath$v_s$}^T\,\mbox{\boldmath$v_s$} + \mbox{\boldmath$w$}^T\,\mathrm{diag}\left(\rho_{wi}\right)\,\mbox{\boldmath$w$} + 2\,\rho_f\,\mbox{\boldmath$v_s$}^T\,\mbox{\boldmath$w$} = \mbox{\boldmath$X_1$}^T\,\,\mbox{\boldmath$H_1$}\,\mbox{\boldmath$X_1$} + \mbox{\boldmath$X_3$}^T\,\,\mbox{\boldmath$H_3$}\,\mbox{\boldmath$X_3$},
\end{equation}
where
\begin{equation}
\mbox{\boldmath$X_i$} = (v_{si}\;w_i)^T,\quad\mbox{\boldmath$H_i$} = \left(\begin{array}{cc}
\rho & \rho_f\\
[10pt]
\rho_f & \rho_{wi}
\end{array}\right),\quad i=1,3.
\end{equation}
Taking ${\cal S}_i$ and ${\cal P}_i$ to denote the sum and the product of the eigenvalues of matrix $ \mbox{\boldmath$H_i$}$, we obtain
\begin{equation}
\left\lbrace
\begin{array}{l}
{\cal P}_i = \det\, \mbox{\boldmath$H_i$} = \rho\,\rho_{wi} - \rho_f^2 = \chi_i > 0,\\
[10pt]
{\cal S}_i = \mbox{tr}\, \mbox{\boldmath$H_i$} = \rho + \rho_{wi} > 0.
\end{array}
\right.
\end{equation}
The eigenvalues of $\mbox{\boldmath$H_i$}$ are therefore positive, which proves that $E_1$ is a positive definite quadratic form. The terms $E_2$, $E_3$ and $-\frac{dE}{dt}$ are obviously positive definite quadratic forms, because the matrices involved are positive definite.$\qquad\square$
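The determinant/trace criterion used above is immediate to check numerically; the densities below are illustrative values satisfying $\chi_i = \rho\,\rho_{wi} - \rho_f^2 > 0$, not those of table \ref{table:para_phy_ani}:

```python
import numpy as np

# Illustrative densities (kg/m^3), chosen so that chi_i = rho*rho_wi - rho_f^2 > 0
rho, rho_f, rho_wi = 2210.0, 1000.0, 3000.0

H = np.array([[rho, rho_f],
              [rho_f, rho_wi]])

# For a symmetric 2x2 matrix: det > 0 and trace > 0 <=> both eigenvalues > 0
assert np.linalg.det(H) > 0 and np.trace(H) > 0
eigvals = np.linalg.eigvalsh(H)
assert np.all(eigvals > 0)   # hence X^T H X is a positive definite form
```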
\section{Matrices of propagation and dissipation} \label{annexe:matABS}
The matrices in (\ref{eq:2D_ani_syst_hyp_tens_ad}) are
\begin{equation}
\mbox{\boldmath$A$} = \left(
\begin{array}{ccc}
\mbox{\boldmath$0$}_{4,4} & \mbox{\boldmath$A$}_1 & \mbox{\boldmath$0$}_{4,2N}\\
[10pt]
\mbox{\boldmath$A$}_2 & \mbox{\boldmath$0$}_{4,4} & \mbox{\boldmath$0$}_{4,2N}\\
[10pt]
\mbox{\boldmath$0$}_{2N,4} & \mbox{\boldmath$A$}_3 & \mbox{\boldmath$0$}_{2N,2N}
\end{array}
\right) ,\quad
\mbox{\boldmath$A$}_3 =
\left(
\begin{array}{cccc}
\displaystyle \frac{\rho_f}{\chi_1} & 0 & 0 & \displaystyle \frac{\rho}{\chi_1}\\
[10pt]
0 & \displaystyle \frac{\rho_f}{\chi_3} & 0 & 0\\
[10pt]
\vdots & \vdots & \vdots & \vdots \\
[10pt]
\displaystyle \frac{\rho_f}{\chi_1} & 0 & 0 & \displaystyle \frac{\rho}{\chi_1}\\
[10pt]
0 & \displaystyle \frac{\rho_f}{\chi_3} & 0 & 0
\end{array}
\right),
\label{eq:matrixA_ani_ad}
\end{equation}
$$
\mbox{\boldmath$A$}_1 = \left(
\begin{array}{cccc}
\displaystyle -\frac{\rho_{w1}}{\chi_1} & 0 & 0 & \displaystyle -\frac{\rho_f}{\chi_1}\\
[10pt]
0 & \displaystyle -\frac{\rho_{w3}}{\chi_3} & 0 & 0\\
[10pt]
\displaystyle \frac{\rho_f}{\chi_1} & 0 & 0 & \displaystyle \frac{\rho}{\chi_1}\\
[10pt]
0 & \displaystyle \frac{\rho_f}{\chi_3} & 0 & 0
\end{array}
\right) ,\quad
\mbox{\boldmath$A$}_2 = \left(
\begin{array}{cccc}
-c_{11}^u & 0 & -\beta_1\,m & 0\\
[10pt]
0 & -c_{55}^u & 0 & 0\\
[10pt]
-c_{13}^u & 0 & -\beta_3\,m & 0\\
[10pt]
\beta_1\,m & 0 & m & 0
\end{array}
\right),
$$
\begin{equation}
\mbox{\boldmath$B$} = \left(
\begin{array}{ccc}
\mbox{\boldmath$0$}_{4,4} & \mbox{\boldmath$B$}_1 & \mbox{\boldmath$0$}_{4,2N}\\
[10pt]
\mbox{\boldmath$B$}_2 & \mbox{\boldmath$0$}_{4,4} & \mbox{\boldmath$0$}_{4,2N}\\
[10pt]
\mbox{\boldmath$0$}_{2N,4} & \mbox{\boldmath$B$}_3 & \mbox{\boldmath$0$}_{2N,2N}
\end{array}
\right) ,\quad
\mbox{\boldmath$B$}_3 =
\left(
\begin{array}{cccc}
0 & \displaystyle \frac{\rho_f}{\chi_1} & 0 & 0\\
[10pt]
0 & 0 & \displaystyle \frac{\rho_f}{\chi_3} & \displaystyle \frac{\rho}{\chi_3}\\
[10pt]
\vdots & \vdots & \vdots & \vdots \\
[10pt]
0 & \displaystyle \frac{\rho_f}{\chi_1} & 0 & 0\\
[10pt]
0 & 0 & \displaystyle \frac{\rho_f}{\chi_3} & \displaystyle \frac{\rho}{\chi_3}
\end{array}
\right),
\label{eq:matrixB_ani_ad}
\end{equation}
$$
\mbox{\boldmath$B$}_1 = \left(
\begin{array}{cccc}
0 & \displaystyle -\frac{\rho_{w1}}{\chi_1} & 0 & 0\\
[10pt]
0 & 0 & \displaystyle -\frac{\rho_{w3}}{\chi_3} & \displaystyle -\frac{\rho_f}{\chi_3}\\
[10pt]
0 & \displaystyle \frac{\rho_f}{\chi_1} & 0 & 0\\
[10pt]
0 & 0 & \displaystyle \frac{\rho_f}{\chi_3} & \displaystyle \frac{\rho}{\chi_3}
\end{array}
\right),\quad
\mbox{\boldmath$B$}_2 = \left(
\begin{array}{cccc}
0 & -c_{13}^u & 0 & -\beta_1\,m\\
[10pt]
-c_{55}^u & 0 & 0 & 0\\
[10pt]
0 & -c_{33}^u & 0 & -\beta_3\,m\\
[10pt]
0 & \beta_3\,m & 0 & m
\end{array}
\right),
$$
and $\mbox{\boldmath$S$}$ is the diffusive matrix
\begin{equation}
\mbox{\boldmath$S$} = \left( \begin{array}{ccc}
{\bf 0}_{4,4} & {\bf 0}_{4,4} & {\bf S}_1\\
[10pt]
{\bf 0}_{4,4} & {\bf 0}_{4,4} & {\bf 0}_{4,2N}\\
[10pt]
{\bf S}_3 & {\bf 0}_{2N,4} & {\bf S}_2
\end{array}
\right),\quad
\mbox{\boldmath$S$}_3 =
\left(
\begin{array}{cccc}
0 & 0 & -\Omega_1 & 0\\
[10pt]
0 & 0 & 0 & -\Omega_3 \\
[10pt]
\vdots & \vdots & \vdots & \vdots \\
[10pt]
0 & 0 & -\Omega_1 & 0\\
[10pt]
0 & 0 & 0 & -\Omega_3
\end{array}
\right) ,
\label{eq:matrixS_ani_ad}
\end{equation}
\vspace{10pt}
$$
\mbox{\boldmath$S$}_1 =
\left(
\begin{array}{ccccc}
\displaystyle -\frac{\rho_f}{\rho}\,\gamma_1\,a_1^1 & 0 & \cdots & \displaystyle -\frac{\rho_f}{\rho}\,\gamma_1\,a_N^1 & 0 \\
[10pt]
0 & \displaystyle -\frac{\rho_f}{\rho}\,\gamma_3\,a_1^3 & \cdots & 0 & \displaystyle -\frac{\rho_f}{\rho}\,\gamma_3\,a_N^3 \\
[10pt]
\displaystyle \gamma_1\,a_1^1 & 0 & \cdots & \displaystyle \gamma_1\,a_N^1 & 0 \\
[10pt]
0 & \displaystyle \gamma_3\,a_1^3 & \cdots & 0 & \displaystyle \gamma_3\,a_N^3
\end{array}
\right),
$$
\vspace{10pt}
$$
\mbox{\boldmath$S$}_2 =\left(
\begin{array}{ccccc}
\gamma_1\,a_1^1 + (\theta_1^1+\Omega_1) & 0 & \cdots & \gamma_1\,a_N^1 & 0 \\
[10pt]
0 & \gamma_3\,a_1^3 + (\theta_1^3+\Omega_3) & \cdots & 0 & \gamma_3\,a_N^3 \\
[5pt]
\vdots & \vdots & \vdots & \vdots & \vdots \\
[10pt]
\gamma_1\,a_1^1 & 0 & \cdots & \gamma_1\,a_N^1 + (\theta_N^1+\Omega_1) & 0 \\
[10pt]
0 & \gamma_3\,a_1^3 & \cdots & 0 & \gamma_3\,a_N^3 + (\theta_N^3+\Omega_3)
\end{array}
\right).
$$
\section{Proof of proposition \ref{prop:diffusive_part_vpS}} \label{annexe:proof_vpS}
We denote by $\mbox{\boldmath$P$}_{{\cal B}}$ the change-of-basis matrix satisfying
\begin{equation}
\mbox{\boldmath$U$} = \mbox{\boldmath$P$}_{{\cal B}}\,\left( \mbox{\boldmath$U$}_1\,,\,\mbox{\boldmath$U$}_3\,,\,\mbox{\boldmath$\sigma$}\,,\,p \right)^T,
\label{eq:diffu_change_of_basis}
\end{equation}
with
\begin{equation}
\mbox{\boldmath$U$}_i = (v_{si}\,,\,w_i\,,\,\psi_1^i\,,\,\cdots\,,\,\psi_N^i)^T,\quad i=1,3.
\label{eq:vecteur_etat_reduced}
\end{equation}
The matrix $\mbox{\boldmath$P$}_{{\cal B}}$ is thus invertible, and the matrices $\mbox{\boldmath$S$}$ (\ref{annexe:matABS}) and $\mbox{\boldmath$S$}_{{\cal B}} = \mbox{\boldmath$P$}_{{\cal B}}^{-1}\,\mbox{\boldmath$S$}\,\mbox{\boldmath$P$}_{{\cal B}}$ are similar. The matrix $\mbox{\boldmath$S$}_{{\cal B}}$ reads
\begin{equation}
\mbox{\boldmath$S$}_{{\cal B}} = \left(
\begin{array}{cccc}
\tilde{\mbox{\boldmath$S$}_1} & \mbox{\boldmath$0$}_{N+2,N+2} & \mbox{\boldmath$0$}_{N+2,3} & \mbox{\boldmath$0$}_{N+2,1}\\
[10pt]
\mbox{\boldmath$0$}_{N+2,N+2} & \tilde{\mbox{\boldmath$S$}_3} & \mbox{\boldmath$0$}_{N+2,3} & \mbox{\boldmath$0$}_{N+2,1}\\
[10pt]
\mbox{\boldmath$0$}_{3,N+2} & \mbox{\boldmath$0$}_{3,N+2} & \mbox{\boldmath$0$}_{3,3} & \mbox{\boldmath$0$}_{3,1}\\
[10pt]
\mbox{\boldmath$0$}_{1,N+2} & \mbox{\boldmath$0$}_{1,N+2} & \mbox{\boldmath$0$}_{1,3} & 0
\end{array}
\right)
\end{equation}
with ($i=1,3$)
\begin{equation}
\tilde{\mbox{\boldmath$S$}_i} = \left( \begin{array}{cc|cccc}
0 & 0 & \displaystyle -\frac{\rho_f}{\rho}\,\gamma_i\,a_1^i & \displaystyle -\frac{\rho_f}{\rho}\,\gamma_i\,a_2^i & \cdots & \displaystyle -\frac{\rho_f}{\rho}\,\gamma_i\,a_N^i\\
[10pt]
\rule[-2mm]{0mm}{2mm} 0 & 0 & \gamma_i\,a_1^i & \gamma_i\,a_2^i & \cdots & \gamma_i\,a_N^i\\ \hline
\rule[0mm]{0mm}{4mm} 0 & -\Omega_i & \gamma_i\,a_1^i + (\theta_1^i + \Omega_i) & \gamma_i\,a_2^i & \cdots & \gamma_i\,a_N^i\\
[10pt]
0 & -\Omega_i & \gamma_i\,a_1^i & \gamma_i\,a_2^i + (\theta_2^i + \Omega_i) & \cdots & \gamma_i\,a_N^i\\
[10pt]
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
[10pt]
0 & -\Omega_i & \gamma_i\,a_1^i & \gamma_i\,a_2^i & \cdots & \gamma_i\,a_N^i + (\theta_N^i + \Omega_i)
\end{array}
\right).
\label{eq:matrix_Sx_reduced}
\end{equation}
\medskip
The characteristic polynomial of $\mbox{\boldmath$S$}$ is
\begin{equation}
P_{\mbox{\boldmath$S$}}(s) = s^4\,P_{\tilde{\mbox{\boldmath$S$}_1}}(s)\,P_{\tilde{\mbox{\boldmath$S$}_3}}(s),
\label{eq:poly_cara_S}
\end{equation}
where $P_{\tilde{\mbox{\boldmath$S$}_i}}(s)$ denotes the characteristic polynomial of the matrix $\tilde{\mbox{\boldmath$S$}_i}$, i.e. $P_{\tilde{\mbox{\boldmath$S$}_i}}(s) = \det(\tilde{\mbox{\boldmath$S$}_i} - s\,\mbox{\boldmath$I$}_{N+2})$, with $\mbox{\boldmath$I$}_{N+2}$ the identity matrix of size $N+2$. This $(N+2)$-determinant is expanded along its first column. The row $I$ and the column $J$ of the $(N+1)$-determinant thus obtained are denoted $L_I$ and $C_J$, respectively ($0\leqslant I,J \leqslant N$). The following algebraic manipulations are then performed successively:
\begin{equation}
\begin{array}{ll}
(i) & L_{\ell} \leftarrow L_{\ell} - L_0,\quad \ell = 1,\cdots,N,\\
[5pt]
(ii) & C_0 \leftarrow C_0\,\prod\limits_{\ell=1}^N(\theta_{\ell}^i+\Omega_i-s),\\
(iii) & C_0 \leftarrow C_0 - (s-\Omega_i)\,\mathop{\prod\limits_{k=1}^N}\limits_{k\neq \ell} (\theta_k^i+\Omega_i-s)\,C_{\ell},\quad \ell = 1,\cdots,N.
\end{array}
\end{equation}
One deduces
\begin{equation}
P_{\tilde{\mbox{\boldmath$S$}_i}}(s) = -s\,{\cal Q}_i(s) = s^2\,\prod\limits_{\ell=1}^N(\theta_{\ell}^i+\Omega_i-s) + \gamma_i\,s\,(s-\Omega_i)\,\sum\limits_{\ell=1}^N a_{\ell}^i\,\mathop{\prod\limits_{k=1}^N}\limits_{k\neq \ell} (\theta_k^i+\Omega_i-s).
\label{eq:poly_carac_Sx_4}
\end{equation}
From equation (\ref{eq:poly_carac_Sx_4}), one has $P_{\tilde{\mbox{\boldmath$S$}_i}}(0) = 0$ while ${\cal Q}_i(0) \neq 0$; therefore $0$ is an eigenvalue of the matrix $\tilde{\mbox{\boldmath$S$}_i}$ with multiplicity $1$. In what follows, we use the positivity of the coefficients $\theta_{\ell}^i$, $a_{\ell}^i$ of the diffusive approximation. In the limit $s \rightarrow 0^+$, one has asymptotically
\begin{equation}
P_{\tilde{\mbox{\boldmath$S$}_i}}(s) \mathop{\sim}\limits_{s\rightarrow 0^+} -\gamma_i\,\Omega_i\,s\,\sum\limits_{\ell=1}^N a_{\ell}^i\,\mathop{\prod\limits_{k=1}^N}\limits_{k\neq \ell} (\theta_k^i+\Omega_i) \Rightarrow \mbox{sgn}\left(P_{\tilde{\mbox{\boldmath$S$}_i}}(0^+)\right) = -1.
\label{eq:vp_Sx_zero}
\end{equation}
Moreover, using (\ref{eq:poids_croissant}), one has at the quadrature abscissae, for all $\ell=1,\cdots,N$,
\begin{equation}
P_{\tilde{\mbox{\boldmath$S$}_i}}(\theta_{\ell}^i + \Omega_i) = \gamma_i\,\theta_{\ell}^i\,(\theta_{\ell}^i + \Omega_i)\,a_{\ell}^i\,\mathop{\prod\limits_{k=1}^N}\limits_{k\neq \ell} (\theta_k^i-\theta_{\ell}^i) \Rightarrow \mbox{sgn}\left(P_{\tilde{\mbox{\boldmath$S$}_i}}(\theta_{\ell}^i + \Omega_i)\right) = (-1)^{\ell+1}.
\label{eq:vp_Sx_theta}
\end{equation}
Finally, the following limit holds
\begin{equation}
P_{\tilde{\mbox{\boldmath$S$}_i}}(s) \mathop{\sim}\limits_{s\rightarrow +\infty} (-1)^N\,s^{N+2} \Rightarrow \mbox{sgn}\left(P_{\tilde{\mbox{\boldmath$S$}_i}}(+\infty)\right) = (-1)^N.
\label{eq:vp_Sx_infini}
\end{equation}
We introduce the following intervals
\begin{equation}
I_N^i = ]\theta_N^i + \Omega_i,+\infty[,\; I_{\ell}^i = ]\theta_{\ell}^i+\Omega_i,\theta_{\ell+1}^i + \Omega_i],\;\mbox{for}\,\ell = 1,\cdots,N-1,\; I_0^i = ]0,\theta_1^i + \Omega_i].
\label{eq:interval_eigen_S}
\end{equation}
The real-valued continuous function $P_{\tilde{\mbox{\boldmath$S$}_i}}$ changes sign on each interval $I_{\ell}^i$. Consequently, by the intermediate value theorem, $P_{\tilde{\mbox{\boldmath$S$}_i}}$ has at least one zero in each interval. Since $P_{\tilde{\mbox{\boldmath$S$}_i}}$ has at most $N+1$ distinct zeros in $]0,+\infty[$, each interval contains exactly one zero: $\exists\, ! \, s_{\ell}^i \in I_{\ell}^i$ such that $P_{\tilde{\mbox{\boldmath$S$}_i}}(s_{\ell}^i)=0$, $\ell=1,\cdots,N+1$. Using equation (\ref{eq:poly_cara_S}), the characteristic polynomial of $\mbox{\boldmath$S$}$ is therefore
\begin{equation}
P_{\mbox{\boldmath$S$}}(s) = s^6\,\prod\limits_{\ell=1}^{N+1}(s-s_{\ell}^1)\,(s-s_{\ell}^3),
\label{eq:poly_carac_Sx_5}
\end{equation}
which concludes the proof.$\qquad\square$
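The closed form (\ref{eq:poly_carac_Sx_4}) can be checked numerically. The sketch below assembles the matrix $\tilde{\mbox{\boldmath$S$}_i}$ of (\ref{eq:matrix_Sx_reduced}) for a small $N$ and compares $\det(\tilde{\mbox{\boldmath$S$}_i}-s\,\mbox{\boldmath$I$})$ with the closed-form polynomial; the values chosen for $\theta_\ell^i$, $a_\ell^i$, $\gamma_i$, $\Omega_i$ and $\rho_f/\rho$ are hypothetical positive placeholders, not physical coefficients.

```python
import numpy as np

# Numerical check of the characteristic polynomial of the reduced diffusive
# block S_i, for hypothetical positive parameters.
N = 3
rng = np.random.default_rng(0)
theta = rng.uniform(1.0, 5.0, N)      # theta_l^i > 0 (quadrature abscissae)
a = rng.uniform(0.1, 1.0, N)          # a_l^i > 0 (quadrature weights)
gamma, Omega, r = 0.7, 0.3, 0.4       # gamma_i, Omega_i, rho_f / rho

# Assemble the (N+2) x (N+2) matrix S_i row by row
S = np.zeros((N + 2, N + 2))
S[0, 2:] = -r * gamma * a
S[1, 2:] = gamma * a
for l in range(N):
    S[2 + l, 1] = -Omega
    S[2 + l, 2:] = gamma * a
    S[2 + l, 2 + l] += theta[l] + Omega   # diagonal shift theta_l + Omega

# Closed form:
# P(s) = s^2 prod_l (theta_l + Omega - s)
#      + gamma s (s - Omega) sum_l a_l prod_{k != l} (theta_k + Omega - s)
def P(s):
    prod_all = np.prod(theta + Omega - s)
    terms = sum(a[l] * prod_all / (theta[l] + Omega - s) for l in range(N))
    return s**2 * prod_all + gamma * s * (s - Omega) * terms

# Compare against det(S - s I) at a few test points
for s in [0.11, 0.9, 2.5, -1.3]:
    det = np.linalg.det(S - s * np.eye(N + 2))
    assert abs(det - P(s)) < 1e-8 * (1.0 + abs(det)), (s, det, P(s))
print("closed-form characteristic polynomial matches det(S - sI)")
```

The check also confirms that $s=0$ is a simple root, in agreement with the multiplicity argument above.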
\section{Semi-analytical solution of Test 2} \label{SecAppExact}
In Test 2, we consider the interaction of a plane wave with a plane interface at normal incidence. This 1D case can be solved semi-analytically by using Fourier analysis and the poroelastic constitutive equations. Note that no shear wave is involved here. The general overview of the algorithm is as follows:
\begin{itemize}
\item write the potentials of the fluid and elastic motions in terms of the potentials of the fast and slow compressional waves; to do so, diagonalize the vectorial Helmholtz decomposition of the Biot equations \cite{BOURBIE}. The coefficient
\begin{equation}
Y(\omega)=\frac{\displaystyle \left((1-\phi)\,\rho_s+\rho_f\,\beta\,({\cal T}-1)\right)\,\omega^2-\left(\lambda_f+2\,\mu-m\,\beta^2\right)\,k^2+i\,\omega\,\phi\,\beta\,\frac{\eta}{\kappa}}{\displaystyle \rho_f\,({\cal T}\,\beta-\phi)\,\omega^2-i\,\omega\,\phi\,\beta\,\frac{\eta}{\kappa}\,F(\omega)}
\label{CoeffY}
\end{equation}
is introduced by the change of basis, where $k$ is the wavenumber and $F(\omega)$ is the JKD frequency correction;
\item deduce each field (velocity, stress, pressure) from the expressions of potentials and from the poroelastic constitutive equations;
\item for a given Fourier mode, the reflected and transmitted waves can be written
\begin{equation}
{\bf U}(x,\,\omega)=
\left(
\begin{array}{c}
\mp 1\\
0\\
\pm \phi(1-Y)\\
0\\
\displaystyle
\frac{k}{\omega}\left(\lambda_f+2\mu+\beta\,m\,\phi(Y-1)\right)\\
0\\
\displaystyle
\frac{k}{\omega}\left(\lambda_f+\beta\,m\,\phi(Y-1)\right)\\
[6pt]
\displaystyle
-\frac{k}{\omega}m\,\left(\beta+\phi(Y-1)\right)
\end{array}
\right)\,e^{i(\omega t-kx)}\,{\hat g}(\omega),
\label{ModeFourier}
\end{equation}
where ${\hat g}(\omega)$ is the Fourier transform of the source (\ref{eq:ricker}). The sign $\pm$ depends on whether the wave is a right-going incident or transmitted wave ($+$) or a left-going reflected wave ($-$);
\item the four reflected and transmitted slow and fast waves (\ref{ModeFourier}) must each be multiplied by a reflection or transmission coefficient. The four coefficients are computed by applying the jump conditions and solving the resulting linear system;
\item an inverse Fourier transform yields an approximate time-domain solution.
\end{itemize}
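The last step of this algorithm (coefficient-weighted Fourier synthesis followed by an inverse transform) can be sketched as follows. The source spectrum, wave speed, receiver position and reflection coefficient below are placeholder values, and the Ricker spectrum is only an assumed analytical form; only the structure of the synthesis matters here.

```python
import numpy as np

# Sketch of the final step: one component of U(x, omega), already multiplied
# by its (placeholder) reflection coefficient, is transformed back to the
# time domain. All physical parameters here are illustrative assumptions.
def ricker_spectrum(omega, f0=10.0):
    # Assumed form of the Fourier transform of a Ricker wavelet, central
    # frequency f0 (stands in for g_hat(omega) of the source)
    w0 = 2.0 * np.pi * f0
    return (omega**2 / w0**2) * np.exp(-(omega**2) / w0**2)

nt, dt = 1024, 1e-3
omega = 2.0 * np.pi * np.fft.rfftfreq(nt, dt)   # angular frequencies >= 0
x, c = 50.0, 1500.0                             # receiver position, wave speed
k = omega / c                                   # wavenumber of the Fourier mode
R = 0.3                                         # placeholder reflection coefficient

# U(x, omega) = R * exp(-i k x) * g_hat(omega); the exp(i omega t) factor is
# handled implicitly by the inverse FFT
U_hat = R * np.exp(-1j * k * x) * ricker_spectrum(omega)
u_t = np.fft.irfft(U_hat, n=nt) / dt            # approximate time-domain field
```

In the actual algorithm the scalar $R$ is replaced by the four frequency-dependent coefficients obtained from the jump conditions, and the synthesis is repeated for each component of ${\bf U}$.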
\end{appendix}
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\begin{comment}
Designing a secure SoC platform is a challenging and complex activity. SoC design flow involves multiple designs and verification stages. Modern SoC IPs are heavily customized on various fields e.g. Signal Processing, Video and Image Encoding, Quantum and Artificial Intelligence Computations, etc. Hundreds of third-party IP designer companies, SoC Integrators, and foundries are involved in these whole SoC development chain. This complex design and integration process opens a big door for several security attacks and limitations \cite{tehranipoor_trojan},
\end{comment}
Often, non-trusted third-party IP cores or EDA tools are integrated at different stages of the SoC development life cycle and are susceptible to numerous attacks, such as HT injection, IP piracy, cloning, tampering and unauthorized access \cite{trojan_1}.
To prevent malicious attackers from gaining unauthorized access and leaking sensitive information, modern SoCs are equipped with a sandboxing mechanism in which applications and the OS are executed in an isolated trusted environment \cite{trustzone_white}. ARM TrustZone is the industry-leading sandboxing mechanism, widely available in mobile and heterogeneous SoC devices, for providing a trusted execution environment (TEE) in which an untrusted IP core is executed on an isolated secure processor with separate memory, cache and bus systems. Leveraging ARM TrustZone technology, TEEs have been extended for security measures in many academic projects and industrial applications such as Samsung Knox \cite{samsung_knox}, Android's Keystore \cite{android_tee}, OP-TEE \cite{op_tee}, Xilinx TrustZone \cite{xilinx_trustzone}, etc.
An SoC CPU that includes a Trusted Execution Environment (TEE) technology, e.g. ARM TrustZone, Knox or Xilinx TrustZone, does not provide a root of trust based secure mechanism in which any running software is verified and trustworthy. Instead of root of trust based authorization, existing TEE technologies focus only on isolating the environment by partitioning the CPU, memory and system bus. Some current TrustZone-based technologies assume the availability of a secure storage device for storing secret keys, which can only be accessed by the secure-world entity and serves as the root of trust. Unfortunately, on many modern SoC and mobile devices such a secure storage device is not available. To establish a root of trust based trusted environment, the device key should be stored securely and remain available after a reboot. In many reconfigurable SoCs, e.g. Xilinx Zynq-7000 SoC \cite{trustzone_white}, UltraScale+, FreeScale, etc., the secure key is stored either in battery-backed RAM (BBRAM) or in an eFuse medium. However, this method has several bottlenecks and is not recommended for the following reasons: 1) these mechanisms still need secure random key generation by a random number generator (RNG), which serves as the root of trust; 2) eFuse is a non-changeable memory, so the key cannot be updated; 3) BBRAM needs a physical battery to be placed for storage.
In this paper, we propose a novel and efficient trusted technology, \textbf{TrustToken}, to overcome the above disadvantages. We use an asymmetric cryptographic solution that can generate identification tokens at runtime without using any non-volatile memory or secure boot system. We are inspired and motivated by the Google Chromium sandboxing concept \cite{google_sandbox} to establish a secure execution environment in an SoC by assigning tokens to each IP core. \textbf{TrustToken} is a security mechanism that allows a non-trusted third-party IP to execute in a closed and monitored environment. If a malicious attacker is able to exploit the access control of the IP in a way that lets him arbitrarily alter the IP design, \textbf{TrustToken} helps prevent this incident from causing damage to the system. This is achieved by wrapping a non-trusted IP with a security wrapper, shown in Fig. \ref{trust_wrapper}. This security wrapper is connected to the \textbf{TrustToken} controller, which performs the security evaluation of each connected non-trusted IP core.
Token IDs are randomly generated by exploiting hardware process variation and assigned by the \textbf{TrustToken} controller to every connected IP at the boot stage.
The \textbf{TrustToken} controller also incorporates a Physical Unclonable Function (PUF) IP block to generate root of trust based unique token keys. Each IP core has its own token, and the \textbf{TrustToken} controller compares this token to grant or deny access to the rest of the SoC system. This token acts like a security ID card for the untrusted IP core and must be provided with each data-transaction access request.
\begin{figure}[h]
\centerline{\includegraphics[width=7cm]{wrapper.pdf}}
\caption{\textbf{Proposed TrustToken architecture} }
\label{trust_wrapper}
\end{figure}
\begin{comment}
In conclusion we state some key contributions for our protocol framework:
\begin{enumerate}[leftmargin=*]
\item Provides IP isolation by restricting IP to only allowable interactions.
\item Enforces root of trust based runtime asymmetric authorization mechanism by using a lighweight hybrid ring oscillator PUF. This approach doesnot require any non-volatile memory like eFuse or BBRAM.
\item Efficient implementation of secure isolation by assigning secure token ID for every non-trusted IP core.
\end{enumerate}
We implemented and evaluated the performance
of the proposed \textbf{TrustToken} on a Zynq-7000 based FPGA SoC and compared our works with ARM TrustZone based Xilinx TrustZone technology. We also present a thorough evaluation of our architecture, a quantitive analysis of the generated random keys, and the performance of the root of the trust. We didn't consider any hardware-level attacks which fall outside of the protection capabilities of the TrustZone, such as Side-Channel Attacks, Fault Injection, IP Privacy, Cloning, and Probing.
\end{comment}
\section{Related Work }
In academic research, the first isolation mechanism was proposed by Huffmire in \cite{physicalisolation_huffmire}. In this security mechanism, named ``moats and drawbridges,'' a reconfigurable SoC is configured by creating a fence around each IP with a block of wires (moats). A fenced region can only communicate with another region via a ``drawbridge'' tunnel. Hategekimana et al. \cite{isolation_1} proposed integrating non-trusted IPs into systems within a hardware sandbox to prevent malicious attacks by HTs. The weakness of sandbox-based security is that it only permits interactions matching pre-defined access-control definitions, and hence only specific violations are restricted.
\begin{comment}
Hategekimana et al. \cite{isolation_7} presented a protocol to build secure SoC kernels of FPGA-accelerators in heterogeneous SoC systems enforcing the Hardware Sandbox concept. The main drawback is that it has been implemented with increased overhead and latency, which seems unfit for applying real-world scenarios. Also, these proposed methods don't provide any protection outside of predefined sandboxing conditions, making the protocol ineffective in runtime conditions.
\end{comment}
Zhao et al. proposed a prototype that extends ARM TrustZone technology to establish root of trust based authorization (secure storage, randomness, and secure boot) by using mobile SRAM PUFs \cite{Zhao_trustzone_sram}. One disadvantage of this method is that SRAM is a poor, unreliable PUF solution and needs additional error-correction code (ECC) algorithms employed with a helper-data mechanism, which increases the post-processing cost. In \cite{Zhao_trustzone_token}, Zhao et al. extended their previous work and proposed a two-factor authentication protocol on a mobile SoC platform by integrating separate Hash, RSA, and AES modules. The implementation latency of this work was high, making it unfit for real-world SoC IP security measures. In \cite{basak_2017}, Abhishek et al. proposed a security-wrapper-based framework around a third-party IP core; within the security wrapper, security policies were added to prevent malicious activities in the system. This work focuses mainly on verification and debug purposes and, due to high overhead and latency, is not suited to runtime Trojan mitigation or the prevention of software-level attacks. In \cite{ray_policy_2015}, the authors describe a secure architecture of multiple IPs integrated with the OS kernel and applications, which lacks a proper implementation.
A major drawback of the ARM TrustZone architecture is that both worlds share the same peripherals, such as Ethernet, DRAM, UART, etc., which are susceptible to row-hammer and denial-of-service attacks \cite{pinto_arm_2019}.
The primary security concern of ARM TrustZone technology is its weak and inefficient authentication mechanism.
Several research works have been published indicating unauthorized gains of kernel-level privilege on ARM TrustZone platforms from the normal-world environment \cite{pinto_arm_2019}.
\begin{comment}
Also, a trusted kernel region can have several vulnerabilities which can damage the whole TEE. \cite{Li_attack_1}. Benhani et al. \cite{benhani_trustzone} have published a paper where it demonstrates several attacks on TrustZone from the simple CAD command with some simple lines of code.
\end{comment}
\section{Background}
\subsection{Physical Unclonable Function}
A Physical Unclonable Function exploits the manufacturing variation of a silicon nanocircuit to generate unique, device-specific keys \cite{kawser_puf}. A PUF can be used for cryptographic operations such as authentication, random number generation, authorization, etc. The idea behind the PUF is that devices that are identical by design will have different electrical characteristics due to manufacturing variation.
\begin{comment}
This variation is unpredictable and can not be estimated through observation, neither optical nor SEM. A PUF can be considered a black box, where an input challenge pair generates an output response pair. Due to manufacturing variation, the generated output response should be unique and can be used as a unique ID or authentication key. The most common existing PUF designs are Arbiter PUF, Ring Oscillator, XOR, SRAM, etc.
\end{comment}
To evaluate the performance of PUF-generated keys, three common metrics are used: randomness, bit error rate, and uniqueness.
A strong PUF design supports many challenge-response pairs (CRPs) generated from a single device, whereas weak PUFs normally support a relatively small number of CRPs. Compared to other cryptographic primitives such as AES, SHA, MD5, or hash functions, PUFs require only limited hardware resources (LUTs, gates).
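As an illustration, the three metrics named above can be estimated from response bit-strings as sketched below. The device responses here are simulated random data and the flip rate is an arbitrary assumption, so the numbers only illustrate the computation, not a real PUF.

```python
import numpy as np

# Sketch of the three common PUF quality metrics, computed on hypothetical
# response bit-strings (one row per device, one column per response bit).
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(8, 256))        # 8 devices, 256-bit responses

# Randomness: fraction of ones in each device's response (ideal: 0.5)
randomness = responses.mean(axis=1)

# Uniqueness: mean normalized inter-device Hamming distance (ideal: 50%)
n, bits = responses.shape
hd = [np.count_nonzero(responses[i] ^ responses[j]) / bits
      for i in range(n) for j in range(i + 1, n)]
uniqueness = float(np.mean(hd))

# Bit error rate: intra-device Hamming distance between a reference read-out
# and a noisy re-evaluation of the same device (ideal: 0%)
noisy = responses[0] ^ (rng.random(bits) < 0.02)     # ~2% simulated bit flips
ber = np.count_nonzero(responses[0] ^ noisy) / bits
```

For truly random responses, uniqueness concentrates near the ideal 50%, while the bit error rate tracks the injected flip probability.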
\vspace{-2.5 mm }
\subsection{ARM TRUSTZONE}
ARM TrustZone technology refers to a secure execution environment (SEE) \cite{trustzone_white} in which trusted and non-trusted software and hardware are isolated from each other. It is also referred to as a Trusted Execution Environment (TEE), and it has a monitor that controls the interactions between these two separate worlds. A TrustZone TEE uses two physically separate processors dedicated to the trusted and non-trusted worlds in an embedded security system. The major drawback of this architecture is that the two worlds share the same peripherals, such as Ethernet, DRAM, UART, etc. ARM TrustZone is a combination of IP blocks that allows the partitioning of I/O peripherals, processors, and memory into the two different worlds.
In the ARM TrustZone platform, two NS-bit registers are dedicated to implementing the isolation of a software process \cite{trustzone_white}.
\begin{comment}
\begin{figure}[h]
\centerline{\includegraphics[width=9cm]{trustzone.pdf}}
\caption{The architechture of the ARM Trustzone \cite{trustzone_white}}
\label{trustzone}
\end{figure}
\end{comment}
\subsection{Hardware Trojan and Design For Trust}
A hardware Trojan (HT) is a malicious, intentional modification of a system circuit that causes a deviation from the expected behavior once the circuit is deployed \cite{trojan_2}. HTs pose a severe threat to SoC design, as they can leak sensitive information or change the specification of a circuit at runtime. An HT can create an emergency situation by degrading the overall functionality of the system. Often an HT is deployed in stealth mode and activated only under rare conditions, making its harmful effects very difficult to catch at the verification stage. Many researchers have proposed classifications of hardware Trojans and their structure based on their characteristics. One of the most useful classifications is according to the activation (referred to as the Trojan trigger) and payload mechanisms \cite{trojan_1}.
\begin{comment}
Trojan activation or trigger is an optional or hidden part that a malicious attacker inserts. It observes the various data events in the circuit and activates malicious attacks in a rare node condition called the payload condition. As a result, the payload is undetected almost the entire period, and the whole IC acts as a Trojan-free circuit while running. Trigger conditions sometimes occur after repeated sequential elements such as rare-event triggered counters or after n-bit counting of a sequential state counter.
\subsection{PUF as Root of Trust}
Computer chips play a pivotal role in the makeup of smart system designs, be it in smart cars, robotics, AI systems etc. As such the integrity (security) of system designs can not be completely guaranteed by default since these chips can be tampered with maliciously. Along side encryption techniques, Physical Unclonable Function (PUF), is provisioned for system design security\cite{}.
PUF as Root of Trust is PUF-based security technique for chips, where random numbes generated by the PUF acts as chip fingerprint and unique identification (UI) code\cite{}.This technology is can be incorporated as a built-in hardware root of trust (RoT) simultaneously with other security techniques. Implementing this technology can help secure both data (for privacy) and intellectual property (chip know-how) of a company thereby reducing the risk of illegal copying and enormous commercial losses\cite{}.
\end{comment}
\section{Threat Model and System Properties}
\label{sec:threat}
\noindent
Before digging into the proposed architecture, we consider some threat models. Our threat model covers two explicit scenarios: hardware Trojans and illegal software access.
For the first threat model, we consider every IP as non-trusted and assume that hidden malicious Trojan components may be inserted inside the IP. They can act under rare conditions. We assume that they are only activated at runtime and executed covertly from the interior design of the IP. We assume that the SoC IP integrator division is trusted. All other entities in the supply chain, such as designers, foundries, manufacturers, and validators, can insert a Trojan at any level of the IC life cycle. In this scenario, \textbf{TrustToken} can provide protection against unauthorized data access, access-control violations, modifications, and the leaking of any sensitive information from the IP core to the outside world.
In the second scenario, we assume a malicious attacker who can gain illegal software access from the embedded SoC world and leak, modify, or steal sensitive information. For example, Figure \ref{fig:illegal_access} shows an example scenario of malicious unauthorized access. In this figure, four software-level applications are running on two separate CPUs placed in the same SoC system. Four custom IPs are added at the hardware level; each can be accessed from the software side and is marked with the same color as the corresponding application. The access request to IP core 4 by software application 3 can be flagged as illegal and isolated by the proposed architecture model.
\begin{figure}[h]
\includegraphics[width=6cm]{illegal_access.pdf}
\caption{Illegal software access request by Application 3 running on CPU}
\label{fig:illegal_access}
\end{figure}
In this article, we do not consider physical attacks performed with physical equipment, which are out of scope. The attack scenarios do not cover attacks related to hardware components, such as side-channel attacks, probing attacks, snooping, timing attacks, denial-of-service, fault injection, etc. In summary, our architecture takes the following threat models into account:
\begin{enumerate}[leftmargin=*]
\item Any malicious HT hiding inside an IP core and trying to execute in the runtime environment. We assume that a hidden HT can bypass existing CAD tools and remain undetected until its payload condition is triggered.
\item Any malicious HT trying to perform illegal access control or unauthorized data transfers. We consider that attackers can overwrite the data of a specific data channel and intentionally change the computational output. We also assume that a malicious attacker could cause potential data leakage by changing the operating mode of the IP core.
\item Any malicious attacker located in the CPU core trying to gain unauthorized access or to leak sensitive information of other applications.
\end{enumerate}
\begin{comment}
\subsection{System Properties}
In our proposed design, we have achieved these desired system properties :
\begin{enumerate}[leftmargin=*]
\item \textbf{Trusted.} The secure key generation for the root of trust, i.e. TrustTokens, is guaranteed to be protected against any physical attacks as they exploit silicon manufacturing variation and cannot be predicted by any malicious attacker. The root of trust based secure key generation is completely isolated from the software and non-trusted IP cores.
\item \textbf{Low-cost Solution.} Uses of PUFs eliminates the requirement of hardware-based secure storage (e.g., non-volatile memory) and random number generation IP cores, which could potentially reduce production cost and could be implemented with low resource overhead.
\item \textbf{Flexible.} The proposed architecture includes token-based flexible Trust wrappers, which can easily adopt different IP cores and can be extended depending on tenant requirements.
\end{enumerate}
\subsection{Design assumptions}
While developing our proposed solution, we have taken some key points under consideration.
\begin{enumerate}[leftmargin=*]
\item \textbf{Heterogeneous.} Our proposed protocol targets the heterogeneous SoC platform and observe the implementations on this platform. We assume that our proposed protocol is designed to perform in run time environment.
\item \textbf{Adaptability.} Although this article insist on building Token based
security features for non-trusted IPs in heterogeneous
SoC platform, this protocol can be easily adapted
only in PL fabric-based system without the use of
processor system.
\item \textbf{Bus Specification.} For developing our protocol, we have considered all
the interfaces established by AMBA bus specifications.
We predict that our security protocol will adopt
all necessary standard AMBA interfaces like AXI,
AHB, APB, etc. We have chosen the APB interface for
our implementation and added the necessary signals
to build our security framework.
\item \textbf{Runtime Secure Key Generation . } We considered that generated authorization tokens will be generated in runtime condition exploiting process variation and will placed in secure blockram inside TrustToken controller. We also assume that secure blockram storage is trusted and cannot be tampered. Further, we considered that SoC and processors can boot securely.
\end{enumerate}
\end{comment}
\noindent
\section {Proposed Architechture }
\begin{figure*}[h!]
\centerline{\includegraphics[width=13cm]{main2.pdf}}
\caption{Overview of the proposed TrustToken architecture framework. Consist of TrustToken Controller, TrustWrapper and TokenGenerator}
\label{fig:architechture}
\end{figure*}
The goal of TrustToken is to provide a root of trust based runtime isolation mechanism, which allows an SoC owner to establish secure and flexible communication among IP cores without any additional secure storage service or system.
Figure \ref{fig:architechture} illustrates the detailed architecture of our proposed design, which includes the following components: the TrustToken controller, the TrustWrappers, and the TokenGenerator.
\begin{comment}
The primary task of the TrustToken controller block is to provide the foundation for building a root of a trust-based secure isolation system: primary tokens generated from the PUFs in a secure boot session and maintaining the integrity of non-trusted IPs with a runtime checking mechanism. TrustWrapper is an AMBA specified extended APB bus wrapper which carries the mandatory TrustToken and other necessary important signals.
This controller block also provides the secure boot session for the PUF-generated keys, important for software-level OS and secure services. TrustToken controller block also distributes the tokens to the designated IPs cores. To implement the secure enclave isolation, TrustWrapper enforces token-based data integrity and confidentiality check-in each communication transaction initiated by an non-trusted IP core. TrustWrapper is an AMBA specified extended APB bus wrapper which carries the mandatory TrustToken and other necessary important signals. The signals are specified and discussed deliberately in Section \ref{sub:isolation}. TrustWrapper should instantiate all non-trusted IP cores in the design phase, and the IP integrator should specify the trust integrity property in the hardware design. TokenGenerator block consists of a custom hybrid ring oscillator PUF design and can generate secret and unique identification tokens by exploiting manufacturing variations of the SoC hardware. Generated tokens are provided to the TrustToken Controller block for future security measures.
\end{comment}
\paragraph{\textbf{TrustToken Controller.}}
The \textbf{TrustToken} controller is a separate, centralized IP dedicated to generating unique tokens/IDs for the IPs and maintaining the security policies of the connected world. An IP integrator has to change the token parameter named \textbf{\textit{ar\_integrity}} to assert the validity check (Fig.~\ref{data_bits}). When this value is assigned LOW, it disables the isolation feature. When HIGH, it imposes the isolation mechanism on the IP and executes the IP in a non-trusted zone after successful authorization.
After the PUF module generates the keys, they are delivered to the central TrustToken controller for token-ID assignment. The central TrustToken controller acts as the security headquarters of the whole SoC system and is responsible for distributing all token IDs provided by the integrated PUF module. The token ID is carried by the \textbf{\textit{ar\_token}} signal, which is 256 bits wide. The controller also assigns a specific ID to each non-trusted IP, denoted \textbf{\textit{ar\_id}}. The \textbf{TrustToken} controller randomly distributes the keys among the connected IPs and stores the allocated token IDs, together with the tokens, for future verification. Whenever an IP requests READ/WRITE access, the controller compares the received token ID with the securely stored token list and, after the authorization check, either enables the data channel for communication or restricts it immediately.
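To make the verification step concrete, the following Python sketch models the controller's token check. It is illustrative only: the class and method names are assumptions and do not appear in the RTL design; only the \textit{ar\_id}/\textit{ar\_token} signal names and the 256-bit token width come from the text.

```python
# Illustrative software model of the central TrustToken controller's
# authorization step (a sketch, not the hardware implementation).
class TrustTokenController:
    def __init__(self):
        # Securely stored (ar_id -> ar_token) pairs, populated at
        # secure boot from the PUF-generated keys.
        self._token_table = {}

    def register_ip(self, ar_id: int, ar_token: int) -> None:
        """Assign a 256-bit PUF token to a non-trusted IP core."""
        assert ar_token < 2 ** 256, "ar_token is 256 bits wide"
        self._token_table[ar_id] = ar_token

    def authorize(self, ar_id: int, ar_token: int) -> bool:
        """Compare the received token against the stored one; only a
        match enables the data channel for this transaction."""
        return self._token_table.get(ar_id) == ar_token
```

A mismatched or unknown (ID, token) pair is simply denied, mirroring the immediate channel restriction described above.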
\begin{comment}
\begin{figure}[h!]
\centerline{\includegraphics[width=8cm]{main1.pdf}}
\caption{Central TrustToken Controller connected with TrustWrapper and TokenGenerator. }
\label{fig:architechture}
\end{figure}
\end{comment}
\paragraph{\textbf{Trust Wrapper.}}
\label{sub:isolation}
In our proposed architecture, every IP is wrapped in a security wrapper labeled TrustWrapper. TrustWrapper has two operating interfaces: secured and non-secured. Every non-trusted IP core tagged as non-secured is assigned two additional bus signals: ID and Token. Instead of adding a register-level isolation mechanism or a separate bus protocol for secure isolation, we rely on adding extra bus signals to the existing AMBA bus protocol specification. Adding a separate bus protocol for isolation could create new vulnerabilities and would force modification of the interconnect bridge logic for security-check operations.
Further, a uniform, unique bus protocol carrying the IPs' ID and token information would need its own security mechanism and would have to support every possible bus protocol specification, e.g., bandwidth, channel length, burst size, and streaming methods. Each data transaction initiated by a non-trusted IP core triggers an authorization request to the central TrustToken controller: the non-trusted IP must provide valid and unexpired security information (ID and token) to the controller block through the security wrapper.
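The extra signals that TrustWrapper appends to an ordinary APB transaction can be pictured as a simple record. The sketch below is illustrative: the 256-bit \textit{ar\_token} width and the \textit{ar\_id}/\textit{ar\_integrity} signals come from the text, while the standard-APB fields and all names are assumptions.

```python
from dataclasses import dataclass

# Sketch of a TrustWrapper transaction: a standard APB request plus
# the TrustToken signals (field names are illustrative assumptions).
@dataclass
class TrustWrapperRequest:
    addr: int          # standard APB address phase
    wdata: int         # standard APB write data
    ar_id: int         # controller-assigned IP identifier
    ar_token: int      # 256-bit PUF-derived token
    ar_integrity: int  # 1 = isolation enforced, 0 = disabled

    def validate_widths(self) -> bool:
        # The token must fit in 256 bits; ar_integrity is one bit.
        return self.ar_token < 2 ** 256 and self.ar_integrity in (0, 1)
```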
\begin{figure}[h]
\centerline{\includegraphics[width=7cm]{data_bits2.pdf}}
\caption{TrustWrapper data ports: Proposed TrustToken signals with their relative port width.}
\label{data_bits}
\end{figure}
\begin{comment}
\textbf{TrustToken} architecture provides secure isolation depending on the value of the INTEGRITY LEVEL indicated in the TrustWrapper. Figure \ref{data_bits} shows the signal specifications with their relative data port widths. The signal \textbf{\textit{ar\_integrity}} is used to alter the INTEGRITY LEVEL state of non-trusted IPs. When the INTEGRITY LEVEL is set to the HIGH state, the \textbf{TrustToken} controller enforces the isolation mechanism for the connected non-trusted IP. A major contribution of the proposed protocol is that even altering the INTEGRITY LEVEL requires valid authorization.
\end{comment}
\paragraph{\textbf{Token Generator. }}
\noindent
Due to its low overhead and latency, we implement the enhanced ring-oscillator-based PUF proposed in \cite{kawser_puf}, which is more stable than a traditional ring oscillator PUF. Ring-oscillator-based PUFs show promising latency and resource-utilization results compared with SRAM PUFs, arbiter PUFs, TRNGs, and other crypto cores.
Our custom ring-oscillator-based PUF generates 256-bit keys and has sufficient uniqueness and randomness for our goal of providing heterogeneous SoC security. Following the foundational PUF work \cite{kawser_puf}, a strong PUF is defined by the following security properties:
\begin{enumerate}
\item It is impossible to clone the PUF circuit physically.
\item It supports enough challenge--response pairs (CRPs) that an adversary cannot mount a brute-force attack within a realistic time.
\end{enumerate}
By this definition, the proposed design can be considered a strong PUF and is a good candidate for the proposed SoC security mechanism.
\paragraph{\textbf{Secure Transition between Integrity Levels.}}
Benhani et al. \cite{benhani_trustzone} showed that a simple malicious TCL script in a CAD tool can compromise the logic state of the AWPROT or ARPROT signal in an AXI peripheral or the AXI interconnect on a Zynq-based SoC platform. Such a script can force AWPROT/ARPROT to a HIGH logic state, making the TrustZone controller execute the IP in the non-secure world and leading to denial-of-service attacks. To counter this problem, we propose a secure transition of the integrity-level logic: every transition of the \textbf{\textit{ar\_integrity}} signal must undergo successful authentication by the central \textbf{TrustToken} controller.
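A minimal software model of this rule is sketched below (all names are illustrative assumptions): a request to flip the integrity level must first pass the same token check as a data transaction, so a script cannot silently toggle the signal.

```python
# Sketch of the proposed secure integrity-level transition: any
# change of ar_integrity must itself pass token authorization,
# so a malicious CAD/TCL script cannot flip it unnoticed.
def set_integrity_level(token_table, ar_id, ar_token, new_level):
    """Return the accepted integrity level, or raise on bad auth."""
    if token_table.get(ar_id) != ar_token:
        raise PermissionError("integrity-level change rejected")
    assert new_level in ("HIGH", "LOW")
    return new_level
```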
\section{Security Rules}
\label{sec:proposed}
In this section, we describe the formal specifications of the security rules for the \textbf{TrustToken} framework.
The security formalism defines the security elements and access-control primitives implemented in the system. Both hardware- and software-level components are integrated into the security primitives because software processes offload their computations to hardware IP cores. The security tuple $\mathbb{S}$ is characterized as follows:
\begin{equation*}
\mathbb{S} := \{U, P, O, T, I, A, D, M\}
\end{equation*}
\begin{itemize} [leftmargin=*]
\item $U$ = $\{u_1, u_2, u_3, .... ,u_n\}$ is the set of users in a system.
\item $P$ = $\{P_1, P_2, P_3, .... ,P_n\}$ is the set of process sets, where each user has its corresponding process set $P_i$ = $\{p_{i1}, p_{i2}, p_{i3}, .... ,p_{im}\}$.
\item $O$ = $\{o_1, o_2, o_3, .... ,o_k\}$ is the set of objects. In our proposed framework, objects correspond to various types of non-trusted IP cores.
\item $T$ = $\{T_1, T_2, T_3, .... ,T_n\}$ is the set of secret Tokens.
\item $I$= $\{I_1, I_2, I_3, .... ,I_n\}$ is the set of assigned IDs to each non-trusted IP core.
\item $A$ = $\{HIGH,LOW\}$ is the set of integrity access attributes, where $HIGH$ and $LOW$ denote the high and low integrity states, respectively.
\item $D$ = $\{yes,no\}$ is the set of decisions.
\item $M$ = $\{M_1, M_2, M_3, .... ,M_n\}$ is the set of access matrices. Each user has its corresponding access matrix; each matrix has $m\times k$ elements, where each element is a 3-bit access attribute $a = a_2a_1a_0$ with $a_2 \rightarrow r$, $a_1 \rightarrow w$, $a_0 \rightarrow e$.
\end{itemize}
Since most modern operating systems allow multiple user accounts on a single CPU, we include the set of users in the security tuple. Each user can execute multiple processes, and each user has an associated process set. The integrity access attributes comprise the HIGH and LOW states. To ensure the security of the system, we define the following security rules:
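The 3-bit access attributes that populate the access matrices $M_i$ can be encoded compactly. The Python sketch below is illustrative (helper names are assumptions); it implements the mapping $a = a_2a_1a_0$ with $a_2 \rightarrow r$, $a_1 \rightarrow w$, $a_0 \rightarrow e$.

```python
# Sketch of the 3-bit access attribute a = a2 a1 a0 used in the
# access matrices M_i (a2 -> read, a1 -> write, a0 -> execute).
def encode_attr(r: bool, w: bool, e: bool) -> int:
    return (r << 2) | (w << 1) | int(e)

def decode_attr(a: int) -> dict:
    return {"r": bool(a & 0b100),
            "w": bool(a & 0b010),
            "e": bool(a & 0b001)}
```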
\noindent
\textbf{Rule 1.} For each $u \in U$, there is a function $F_u \colon P \rightarrow M$ which must be one-to-one.
Rule 1 ensures secure isolation of hardware access requests, as a process under one user cannot gain unauthorized access belonging to another user.
\vspace{-0.5mm}
\textbf{Rule 2.} An access request is a 6-tuple $\tau := (u, p, o, t, i, a)$ where $u \in U$, $p \in P_i$, $o \in O$, $t \in T$, $i \in I$ and $a \in A$.
Rule 2 defines an access request in which a process under a user account requests a data transaction with a hardware IP core.\\
\noindent
\textbf{Rule 3.} Confidentiality-preserving rule: if a process $p \in P$ has an access attribute $a$ over an object $o \in O$ and the decision is $d \in D$, confidentiality is preserved if $a_2 = r$, or $a_0 = e$, or both.
\vspace{-0.5mm}
\noindent
\textbf{Rule 4.} Integrity-preserving rule: if a process $p \in P$ has an access attribute $a$ over an object $o \in O$ and the decision is $d \in D$, integrity is preserved if $a_1 = w$, or $a_0 = e$, or both.
\noindent
\textbf{Rule 5.} The access request of a process $p$ $\in$ $P$ over an object $o$ $\in$ $O$ is granted if the decision is $d$ $\in$ $D$ and $d$ = $yes$.
\noindent
\textbf{Rule 6.} Only the central Trust Controller, or an IP integrator in the design phase, has access to modify the access matrices $M_i \in M$.
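As an illustrative summary (not part of the formal model), Rules 2 and 5 can be sketched in Python. The request layout and function names are assumptions; a request is checked for completeness (all six components of $\tau$) and then granted only when the attribute it asks for is permitted by the user's access matrix.

```python
# Sketch combining Rules 2 and 5: a request must be a complete
# tuple (u, p, o, t, i, a), and it is granted only when the
# permitted attribute from the access matrix covers it.
def grant(request, access_matrix):
    """request = (u, p, o, t, i, a); access_matrix maps (p, o) to
    the 3-bit attribute actually permitted for that pair."""
    if len(request) != 6:
        return "no"                      # incomplete request (Rule 2)
    u, p, o, t, i, a = request
    permitted = access_matrix.get((p, o), 0)
    # Grant only if every bit asked for in `a` is permitted (Rule 5).
    return "yes" if (a & permitted) == a else "no"
```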
\vspace{-0.5mm}
\noindent
\begin{comment}
\vspace{1.5mm}
\noindent
\textbf{Rule 6.} Only the Central Trust Controller or an IP integrator in design phase has the access to modify the access matrix $M_i$ $\in$ $M$.
An unauthorized modification of access permission is prevented by rule 6, as only the user can change the entries of its own access matrix.
\vspace{1.5mm}
\noindent
\textbf{Theorem 1.} \textit{An access request and its corresponding permission is secured if it obeys the rule 2 and rule 5.}
\begin{proof}
Let, an access $\tau_1$ to an accelerator is not a 4-tuple, such as $\tau_1 := (u_1, o_1, a_1)$ or $\tau_1 := (p_1, o_1, a_1)$. Besides, $\tau_2$ is a 4-tuple, $\tau_2 := (u_2, p_2, o_2, a_2)$. As for $\tau_1$, either user id or process id of the request is unknown, the request can not be verified whereas the request $\tau_2$ can be verified. So, an access request must obey the rule 2 to be a complete request. To grant a permission, if the request is complete, the decision $d$ is determined by accessing the element $a$ from the corresponding access matrix $M$. Hence, an access request is securely processed by the rule 2 and rule 5.
\end{proof}
\noindent
\textbf{Theorem 2.} \textit{The confidentiality and integrity of the system is preserved if all access requests obey the rule 3 and rule 4.}
\begin{proof}
Let, two access attributes are $a'=a'_2a'_1a'_0$ and $a''=a''_2a''_1a''_0$. If $a'_2 = w$ and $a'_1 = r$, the confidentiality and integrity of the system may be compromised for the access $a'$ as the read and write access is altered. Besides, if $a''_2 = r$ and $a'_1 = w$, the confidentiality and integrity of the system is not compromised for the access $a''$, as the read and write access is same as defined. Hence, the rule 3 and rule 4 preserves confidentiality and integrity of the system.
\end{proof}
\noindent
\textbf{Theorem 3.} \textit{A system is secured if and only if it's all accesses to objects are secured, and the system obeys the rule 6.}
\begin{proof}
The accesses to objects in a system are secured by the rule 1-5 and theorem 1 and theorem 2. Rule 6 defines the authority to modify the access matrix entries. Let, $u_1, u_2 \in U$ are two users. If $u_1$ tries to modify $M_2$ and $u_2$ tries to modify $M_1$, there is a possibility of security breach in the system. Therefore, rule 6 ensures the prevention of unauthorized modification in $M$ and maintain security.
\end{proof}
\end{comment}
\begin{comment}
\begin{algorithm}[t]
\caption{Algorithm of Software IP Management Module}
\label{alg:token_algo}
\algsetup{linenosize=\scriptsize}
\small
\begin{algorithmic}[1]
\REQUIRE $O_{id}$: Set of Object IDs, $P_i$: Set of processes, $SC$: Set of Security Contexts of processes, $\tau$: an access request tuple, $D_{in}$: Original Input Data, $b$: Signal from HIMM cache miss, $M_i$: Access Matrix
\ENSURE $C_{id}$: Set of Context ID for processes, $D_{out}$: Data out
\STATE $C_{id}\gets \phi$
\FORALL {$p_{ij} \in P_i$}
\IF{$SC_j(p_{ij})$ is valid}
\STATE $c_j \gets f(SC_j)$ /* Context ID generation function*/
\STATE $C_{id} \gets C_{id}\cup \{c_j\}$
\ENDIF
\ENDFOR
\FORALL {$\tau_i \in \tau$}
\STATE $a_i \gets M(p_{ij}, o_{id})$ \label{line:data_out}
\STATE $D_{out} \gets D_{in}\cup o_{id}\cup a_i$
\ENDFOR
\IF{$b = 1$}
\FOR {$p_i \in \tau_i$}
\STATE $SC_i \gets f^{-1}(c_i)$ /* Inverse function of \textit{f} */
\STATE $a_i \gets F(SC_i)$ /* Access SELinux Policy Server*/
\STATE $M(p_{ij}, o_{id}) \gets a_i$
\STATE Go to line~\ref{line:data_out}
\ENDFOR
\ENDIF
\RETURN $D_{out}, C_{id}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Algorithm of Trust Token Module}
\label{alg:SIMM_algo}
\algsetup{linenosize=\scriptsize}
\small
\begin{algorithmic}[1]
\REQUIRE $O_{id}$: Set of IP IDs, $P_i$: Set of processes, $SC$: Set of TrustToken and ID of processes, $\tau$: an access request tuple, $D_{in}$: Original Input Data, $M_i$: Access Matrix
\ENSURE $C_{id}$: Set of Context ID for processes, $D_{out}$: Data out
\STATE $C_{id}\gets \phi$
\FORALL {$p_{ij} \in P_i$}
\IF{$SC_j(p_{ij})$ is valid}
\STATE $c_j \gets f(SC_j)$ /* TrustToken and ID generation function*/
\STATE $C_{id} \gets C_{id}\cup \{c_j\}$
\ENDIF
\ENDFOR
\FORALL {$\tau_i \in \tau$}
\STATE $a_i \gets M(p_{ij}, o_{id})$ \label{line:data_out}
\STATE $D_{out} \gets D_{in}\cup o_{id}\cup a_i$
\ENDFOR
\RETURN $D_{out}, C_{id}$
\end{algorithmic}
\end{algorithm}
\end{comment}
\vspace{-1.5mm}
\noindent
\section{Protocol Evaluation and Case Studies}
In this section, we examine the protocol's resilience under the attack scenarios described below.
\subsection{Scenario 1: Compromising ID Signals}
Token-based authorization aims to protect against malicious CAD or RTL modification attacks. As stated above, modifying only a few commands in a script was previously enough to achieve a hardware modification and gain access control over a non-trusted IP. In our design, the PUF-based token ID is produced only at run time and exploits the device's manufacturing variation; since the token ID is never saved in memory, we assume this protects against attacks that alter the token ID. In Section \ref{sec:threat} we discussed a possible software-level attack introduced from an arbitrary application core: a malicious adversary configures a secured IP core and attempts to gain access to a victim IP by initiating a transaction request from a different IP core. However, the central Trust Controller keeps a record of all assigned IDs and tokens together with their respective source and destination IPs. Because the attacker issues an illegal access request from an outside IP core, the attempt is compared with the saved credentials and blocked on a mismatch.
\subsection{Scenario 2: Compromising Access Control}
In Xilinx TrustZone \cite{xilinx_trustzone}, the security check is performed at the AXI interconnect level, and the interconnect crossbar is responsible for checking the security status of every transaction on the connected AXI bus, which creates a substantial security risk: a malicious attacker intending to break the security layer can control the AXI interconnect crossbar by modifying a few security signals. The proposed design overcomes this defect, as the Trust Token Controller enforces a robust system that makes it very difficult for an access-control attack to take over the internal signals of the central \textbf{TrustToken} controller. The central \textbf{TrustToken} controller is itself protected with a PUF-based token-ID key and hence restricts any unauthorized access control over this IP.
\subsection{Scenario 3: Compromising INTEGRITY LEVEL}
Whether a non-trusted IP connected to the central \textbf{TrustToken} controller is securely isolated is determined by the status of the INTEGRITY LEVEL signal. As stated in the threat-model section, only an IP integrator can define the INTEGRITY LEVEL at the hardware level. Any runtime modification of this signal requires proper authorization, which protects against hidden CAD or RTL script attacks; moreover, to alter the protection level, a malicious attacker would have to present the PUF-based token ID of the non-trusted IP. Benhani et al. \cite{benhani_trustzone} showed that merely modifying the Arm TrustZone AWPROT/ARPROT signal lets a malicious attacker create a significant denial-of-service (DoS) interruption in the SoC. The proposed secure transition model overcomes this scenario, since an alteration request must also pass through an additional authorization layer.
\section{Performance Evaluation and Implementation Summary}
This section describes the experimental setup used to implement our proposed architecture and the overhead calculations used to evaluate the robustness of the \textbf{TrustToken} framework. The main goals were to implement the architecture efficiently and to measure the overhead and latency of data transactions.
\subsection{Evaluation Infrastructure}
To implement the protocol framework, we used the Zybo Z7-20 (Xilinx XC7Z020) FPGA board throughout this article. The board has two ARM Cortex-A9 processors clocked at 667 MHz, 1 GB of memory, and Zynq-7000 FPGA fabric. Overhead analysis and performance reports were generated with Xilinx's Vivado Design Suite. All experiments reported in this article were performed on the Xilinx XC7Z020 PL fabric.
\subsection{Protocol Performance}
We evaluated the proposed TrustToken protocol by implementing and synthesizing it on the Zybo Z7-20 board. For the evaluation, we attached TrustWrappers around four crypto IP cores (AES, DES, TRNG, and RSA), each assigned the HIGH integrity state. We also launched five applications on the ARM processor to access the crypto cores' computational results. In our implementation, we successfully introduced a trusted execution environment with the TrustToken model and observed the results. In Section \ref{sec:threat}, we considered a possible software-level attack in which a malicious attacker from Application 3 (mapped to the TRNG hardware IP core) tries to establish an unauthorized access path to the RSA IP core. We implemented this scenario, and the attack was prevented by the TrustToken module.
To compare protocol performance, we also built a Vivado CAD-tool-based Xilinx TrustZone enclave around the four crypto cores following \cite{xilinx_trustzone_cad} and compared it with the proposed TrustToken protocol. Against Xilinx TrustZone, we successfully launched a simple CAD tool attack by modifying the \textbf{AWPROT} signal at run time. The same attempt failed against the proposed method, demonstrating the protocol's resilience against CAD tool attacks.
\subsection{Token Keys Performance Evaluation}
Fig.~\ref{hamming} shows the Hamming-distance results calculated from the PUF keys. The Hamming distance clusters between 40 and 60 percent, which demonstrates the stability and effectiveness of the keys and is very close to the ideal PUF characteristic \cite{kawser_puf}.
\begin{figure}[h]
\centerline{\includegraphics[width=5cm]{Hamming_Distance.pdf}}
\caption{Hamming Distance between the PUF keys.}
\label{hamming}
\end{figure}
\begin{table}
\caption{Characterizations of the Ring Oscillator PUF}
\label{table:PUF}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Parameters & Values \\
\hline
\hline
No of Oscillators & 512 \\
No of Keys Generated & 256 \\
Length of a single generated Key & 256 bits\\
Length of a single Challenge Input & 2 bytes\\
Randomness & 46.62\% \\
Uniqueness & 48.18\%\\
Reliability & 100\% \\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{table:PUF} summarizes the overall characterization of the PUF. Our internal PUF design includes 512 oscillators and generates 256-bit-wide keys.
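The uniqueness figure in Table \ref{table:PUF} is, in essence, the average pairwise fractional Hamming distance between keys (ideally 50\%). A Python sketch of that computation is given below; it is illustrative only and not the evaluation script used for the reported numbers.

```python
from itertools import combinations

# Sketch of the inter-key Hamming-distance metric: uniqueness is the
# average pairwise fractional Hamming distance between PUF keys.
def hamming_pct(k1: int, k2: int, width: int = 256) -> float:
    """Fractional Hamming distance between two keys, in percent."""
    return 100.0 * bin(k1 ^ k2).count("1") / width

def uniqueness(keys, width: int = 256) -> float:
    """Average pairwise Hamming distance over all key pairs."""
    pairs = list(combinations(keys, 2))
    return sum(hamming_pct(a, b, width) for a, b in pairs) / len(pairs)
```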
\begin{comment}
\subsection{TrustToken validation overhead}
In our proposed mechanism, the isolation mechanism with TrustToken initiates a validation overhead in each data transaction request. This overhead is mainly contributed by the TrustWrapper authorization delay time. TrustWrapper and Central TrustToken Controller have a bi-directional handshake authentication mechanism and generates a latency from 1 to 2 cycle depending on the Intregrity Level of the non-trusted IP cores.
\end{comment}
\subsection{Resource Overhead}
After successful implementation, we obtained the utilization report from the Vivado platform, shown in Table \ref{table:utlization}. The deployed design shows encouraging results with low resource utilization; BUFG utilization is only 6.25 percent.
\begin{table}[ht]
\caption{Utilization Report}
\centering
\begin{tabular}{|c|c|c|}
\hline
Resource & Available & Utilization (\%) \\ [0.5ex]
\hline
LUT & 53200 & 618 (1.16\%) \\
FF & 106400 & 44 (0.04\%) \\
BUFG & 32 & 2 (6.25\%) \\[1ex]
\hline
\end{tabular}
\label{table:utlization}
\end{table}
\begin{comment}
\begin{table}[ht]
\caption{Power Dissipation}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Type & Dynamic Power & Static Power & Total Power \\
\hline
\rule[-2ex]{0pt}{2.2ex} \textbf{TrustToken} & 1.400W & 0.137W & 1.537W \\
\hline
\end{tabular}
\label{table:power}
\end{table}
Also, in Table \ref{table:power} we have included a power dissipation summary of the whole synthesized design. In terms of power, the proposed protocol consumes significantly less power, which assures the proposed protocol design to be deployed with minimum power consumption.
\end{comment}
\section{Conclusion }
This paper proposes a token-based secure SoC architecture for non-trusted IPs in a heterogeneous SoC platform. The \textbf{TrustToken} architecture uses root-of-trust-based authorization and provides an extra layer of security against unauthorized access control and attacks. The protocol uses a custom ring-oscillator-based PUF module to generate keys and exploits the reconfigurable nature of SoC-based FPGA platforms. Our implementation shows low latency and overhead in generating and distributing the PUF keys. We have shown that the proposed protocol uses a constrained LUT and BUFG budget of an SoC architecture and effectively provides state-of-the-art prevention of illegal software access and data leakage without heavy resource utilization. The protocol is also promising for FPGA-based hardware accelerator domains such as cloud computing, machine learning, and image processing.
\begin{comment}
To improve the reliability and tolerance of the PUF, an encoding scheme and "helper data" will be added with future works. Such changes will reduce the PUF response error due to changed environmental conditions, including temperature, voltage, and aging effects.
\end{comment}
\printbibliography
\end{document}
In \cite{Dix} Dixmier proved the existence of non-normal traces on
the von Neumann algebra $B(H)$. Dixmier's original construction
involves singular dilation invariant positive linear functionals
$\omega$ on $\ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$. This construction was altered by A.
Connes \cite{CN} (see also Definition 5.2 below) who defined
non-normal traces via the composition of the Cesaro mean and a
state on $C_b([0,\ensuremath{\infty})) / C_0([0,\ensuremath{\infty}))$. In \cite{DPSS},
\cite{DPSSS} and \cite{DPSSS2} the traces of Dixmier in \cite{Dix}
were broadly generalized as singular symmetric functionals on
Marcinkiewicz function (respectively, operator) spaces $M(\psi)$
on $[0,\ensuremath{\infty})$ (respectively, on a semifinite von Neumann algebra).
The symmetric functionals in \cite{DPSSS} and \cite{DPSSS2}
involve Banach limits, that is, singular translation invariant
positive linear functionals $L'$ on $\ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$. We extend
the construction of Dixmier in Definition 1.7 and Connes
in Definition 5.2 (verified in Theorem 6.3) by extending
the notion of Banach limits to $C_b([0,\ensuremath{\infty}))$.
The identification of the commutative specialization of \mbox{\text{(Connes-)}}Dixmier traces
as singular symmetric
functionals has some pivotal consequences.
The established
theory of Banach limits \cite{GL} and singular symmetric
functionals on Marcinkiewicz spaces \cite{DPSS}, \cite{DPSSS},
\cite{DPSSS2} can be applied to questions concerning the
\mbox{\text{(Connes-)}} Dixmier trace, a central notion in Connes'
non-commutative geometry \cite{CN}. Conversely, ideas in Connes'
non-commutative geometry, such as measurability of operators
\cite[IV.2.$\beta$, Definition 7]{CN}, lend themselves to
generalization to abstract Marcinkiewicz spaces (Definition 3.2
and Definition 3.5). As a result, we have been able to present a
new characterization of measurable operators (see Theorem 5.12,
Remark 5.13 and Theorem 6.6).
\medskip \noindent The paper is structured as follows.
Section 1 introduces
Banach limits, almost convergence (extending the notions of G. Lorentz \cite{GL})
and the theory of singular symmetric functionals on
the Marcinkiewicz space $M(\psi)$ defined by a concave function $\psi$
\cite{DPSS}, \cite{DPSSS}.
The construction of singular symmetric functionals on $M(\psi)$
\cite{DPSSS} (Definition 1.6 below) is extended by Definition 1.7.
Section 2 introduces sufficient conditions to identify
the singular symmetric functionals of \cite{DPSSS}
with those of Definition 1.7, see Theorem 2.3 and Theorem 2.7.
A result in \cite{DPSSS}, on
the Riesz semi-norm of a function $x$ in a Marcinkiewicz space $M(\psi)$
as the supremum of the values $\{ f(x) \}$ where $\{ f \}$ is a set of
singular symmetric functionals on $M(\psi)$, is extended in Theorem 2.8.
Section 3 contains
an analysis of various notions of a measurable element of a
Marcinkiewicz space $M(\psi)$, introduced in Definitions 3.2 and
3.5, and their coincidence (Theorem 3.7 and Corollary 3.9, see
also Theorem 3.14).
The results of Section 2 and Section 3 concern singular
symmetric functionals on $M(\psi)$ parameterised by the set
of strictly increasing, invertible, differentiable
and unbounded functions $\kappa: [0,\infty) \to [0,\infty)$.
Section 4 summarises the conditions on the function
$\kappa$ used in Section 2 and Section 3. Theorem 4.4,
which extends the existence results of \cite{DPSS},
demonstrates an equivalence between the growth
of the concave function $\psi$ and the existence
of functions $\kappa$ which satisfy the hypotheses
of results in Section 2 and 3.
A subset of the collection of extended Banach
limits, called Cesaro-Banach limits (Definition 5.4) is studied
further in Section 5. It is demonstrated that this subset is coincident
with
the generalized limits employed by Connes to construct the
\mbox{Connes-Dixmier} traces used in non-commutative geometry.
Theorem 5.6 identifies (the commutative specialization of)
\mbox{Connes-Dixmier} traces as a sub-class of the singular
symmetric functionals studied in \cite{DPSSS}, \cite{DPSSS2}.
Results on \mbox{Connes-Dixmier} traces then follow from the
general theory of singular symmetric functionals on Marcinkiewicz
spaces developed in the preceding sections (Theorem 5.12).
Section 6 considers the special example of the Marcinkiewicz
space $M(\psi)$ where $\psi(t)=\log(1+t)$
(recognized from non-commutative geometry
as the space $\mathcal{L}^{(1,\infty)}$). Here, we
summarize and present our results (Theorems 6.1, 6.2, 6.3, 6.4 and 6.6)
for Dixmier and Connes-Dixmier traces on the operator
Marcinkiewicz spaces associated with semifinite von Neumann
algebras of type \emph{I} and \emph{II}. In particular, Theorems 6.1, 6.2, 6.3, 6.4
and 6.6 apply to the operator ideals and traces
of non-commutative geometry \cite{CN}.
\section{Preliminaries}
\subsection{Banach Limits, Almost Convergence and Almost
Piecewise Linearity}
\medskip \noindent Let $H$ be one of the semigroups $\ensuremath{\mathbb{N}} := \{1,2,... \}$ or
$\ensuremath{\mathbb{R}}_+ := [0,\ensuremath{\infty})$ equipped with the topology and order induced
by the locally compact additive group $\ensuremath{\mathbb{R}}$. Let $C_b(H)$ be the
space of bounded continuous functions on $H$. Define the
translation operator \display{T_s(f)(t) = f(t+s) \ \ensuremath{\ \, \forall \,} s,t \in H,
f \in C_b(H).} An element $L \in C_b(H)^*$ is called translation
invariant if \display{L(T_s(f)) = L(f) \ensuremath{\ \, \forall \,} s \in H, f \in C_b(H).}
A \textbf{Banach limit} $L$ on $C_b(H)$ is a translation invariant
positive linear functional on $C_b(H)$ such that $L(1) = 1$. This
extends the notion of a Banach limit investigated in \cite{GL} in
the context of the semigroup $\ensuremath{\mathbb{N}}$ of all natural numbers. Let
$BL(H)$ denote \textbf{the set of all Banach limits on} $C_b(H)$.
It is easy to see that every $L \in BL(H)$ vanishes on compactly
supported elements from $C_b(H)$ and that
$$
\liminf_{t \to \ensuremath{\infty}} f(t) \leq L(f) \leq \limsup_{t \to \ensuremath{\infty}} f(t)
$$
for any positive $f \in C_b(H)$.
We extend the notion of almost convergent sequences \cite{GL}.
\begin{dfnS}
A function $f \in C_b(H)$ is said to be \textbf{\emph{almost convergent
at infinity}} if $L_1(f) = L_2(f) \ensuremath{\ \, \forall \,} L_1,L_2 \in BL(H)$.
\end{dfnS}
Let $f \in C_b(H)$ be almost convergent at infinity. We denote the value $A :=
L(f) \ensuremath{\ \, \forall \,} L \in BL(H)$ by
$$
\text{F-}\lim f = A
$$
following
G. Lorentz \cite{GL}. In particular we write
${\text{F-}\lim_{n\to \infty}
a_n}$ for $\alpha = \{a_n \}_{n=1}^\ensuremath{\infty} \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$ and
${\text{F-}\lim_{t\to \infty}
g(t)}$ for $g \in C_b([0,\ensuremath{\infty}))$.
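As a quick illustration (this example is standard and is not part of the original text), almost convergence is strictly weaker than ordinary convergence:

```latex
Let $x=\{(-1)^n\}_{n=1}^\ensuremath{\infty} \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$. Since
$x+T_1(x)=0$, translation invariance gives, for every $L\in BL(\ensuremath{\mathbb{N}})$,
$$
L(x) = -L(T_1(x)) = -L(x), \qquad \text{hence} \qquad L(x)=0 .
$$
Thus the divergent sequence $\{(-1)^n\}$ is almost convergent at infinity with
$\text{F-}\lim_{n\to\infty} (-1)^n = 0$.
```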
\bigskip \noindent
Let $\alpha = \{a_n \}_{n=1}^\ensuremath{\infty} \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$. Let
$\chi_E$ be the characteristic function for $E \subset [0,\ensuremath{\infty})$.
Define the \textbf{piecewise linear extension map} \display{\ensuremath{p} :
\ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}}) \to C_b([0,\ensuremath{\infty}))} by \display{\ensuremath{p}(\alpha)(t) =
\sum_{n=0}^\infty \Big( a_n + (a_{n+1}-a_n)(t-n) \Big)
\chi_{[n,n+1)}(t),} where $a_0=0$ by definition. The following
lemma is an elementary application of the definition, hence the
proof is omitted.
\begin{lemmaS}
The map $\ensuremath{p} : \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}}) \to C_b([0,\ensuremath{\infty}))$ is a positive
linear isometry with the following properties
\begin{prop2list}{10}{2}{3}
\item $\ensuremath{p}(1_{\ell^\ensuremath{\infty}}) = 1$,
\item $\nm{\ensuremath{p}(\alpha)} = \nm{\alpha}$ for all $\alpha \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$,
\item $T_{k}(\ensuremath{p}(\alpha)) = \ensuremath{p}(T_k(\alpha))$
for all $\alpha \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$ and $k \in \ensuremath{\mathbb{N}}$.
\end{prop2list}
\end{lemmaS}
\noindent Let $g \in C_b([0,\ensuremath{\infty}))$. Define the
\textbf{restriction map} $r_{\ensuremath{\mathbb{N}}}$ and \textbf{averaging map}
$E_{\ensuremath{\mathbb{N}}}$, acting from $ C_b([0,\ensuremath{\infty}))$ onto $\ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$ by
\display{\bd{g} := \{ g(n) \}_{n=1}^\ensuremath{\infty} \ \ , \ \ E_\ensuremath{\mathbb{N}}(g) := \{
\int_{n-1}^{n} g(s)ds \}_{n=1}^\ensuremath{\infty}.} The following lemma is an
elementary application of the definitions.
\begin{lemmaS}
The maps $r_{\ensuremath{\mathbb{N}}}, E_{\ensuremath{\mathbb{N}}} : C_b([0,\ensuremath{\infty})) \to \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$ are positive
linear surjections with the following properties
\begin{prop2list}{10}{2}{3}
\item $\bd{1} = E_{\ensuremath{\mathbb{N}}}(1) = 1_{\ell^\ensuremath{\infty}}$,
\item $\nm{\bd{g}} \leq \nm{g}$ and $\nm{E_{\ensuremath{\mathbb{N}}}(g)} \leq \nm{g}$
for all $g \in C_b([0,\ensuremath{\infty}))$,
\item $\bd{T_{a + k}(g)} = T_k \bd{T_a(g)}$ and
$E_{\ensuremath{\mathbb{N}}}(T_{a + k}(g)) = T_k E_{\ensuremath{\mathbb{N}}}(T_a(g))$ \\
\hspace*{0.4cm} for all $a \in [0,\ensuremath{\infty})$, $g \in C_b([0,\ensuremath{\infty}))$ and $k \in \ensuremath{\mathbb{N}}$.
\end{prop2list}
\end{lemmaS}
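\medskip \noindent For concreteness, the restriction and averaging maps may be sketched numerically (an editorial illustration; the test function is an arbitrary choice and the integrals are approximated by midpoint sums):

```python
import math

def restrict(g, N):
    """r_N: sample g at the integers 1, ..., N."""
    return [g(n) for n in range(1, N + 1)]

def average(g, N, steps=2000):
    """E_N: approximate the integral of g over [n-1, n], n = 1, ..., N."""
    h = 1.0 / steps
    return [sum(g(n - 1 + (j + 0.5) * h) for j in range(steps)) * h
            for n in range(1, N + 1)]

g = lambda t: math.sin(t) / (1.0 + t)  # a bounded continuous test function

# property (ii) of the lemma: neither map increases the sup-norm
sup_g = max(abs(g(k / 100.0)) for k in range(1001))  # sup |g| on [0, 10]
print(max(abs(v) for v in restrict(g, 10)) <= sup_g)  # True
print(max(abs(v) for v in average(g, 10)) <= sup_g)   # True
```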
\noindent The following notion will play an important role in Section 2.
\begin{dfnS}
Let $g \in C_b([0,\ensuremath{\infty}))$. We say $g$ is \textbf{\emph{almost piecewise linear
at infinity}} if $L(g - \ensuremath{p} \bd{g}) = 0 \ensuremath{\ \, \forall \,} L \in BL(\ensuremath{\mathbb{R}}_+)$.
\end{dfnS}
\subsection{Singular symmetric functionals on Marcinkiewicz
spaces}
\noindent We introduce the notation of \cite{DPSSS}. Let $m$ be
the Lebesgue measure on $[0,\ensuremath{\infty})$. Let $x$ be a measurable
function on $[0,\ensuremath{\infty})$. Define the \textbf{decreasing
rearrangement of} $x$ by \display{x^*(t) = \inf \inset{s \geq
0}{m(\{ |x| > s\}) \leq t} , \ t > 0.} Let $\Omega_\ensuremath{\infty}$ denote
the set of concave functions $\psi : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ such
that $\lim_{t \to 0^+} \psi(t) = 0$ and $\lim_{t \to \ensuremath{\infty}} \psi(t)
= \ensuremath{\infty}$. Important functions belonging to $\Omega_\ensuremath{\infty}$ include
$t$, $\log(1+t)$, $t^\alpha$ and $(\log(1+t))^{\alpha}$ for $0 <
\alpha < 1$, and $\log(1+\log(1+t))$. Let $\psi \in \Omega_\ensuremath{\infty}$.
Define the \textbf{weighted mean function} \display{\phi(x)(t) :=
\frac{1}{\psi(t)} \int_0^{t} x^*(s) ds,\ t>0} and denote by
$M(\psi)$ the \textbf{Marcinkiewicz space} of measurable functions
$x$ on $[0,\ensuremath{\infty})$ such that \display{\nm{x}_{M(\psi)} := \sup_{t
> 0} \phi(x)(t) = \nm{\phi(x)}_\ensuremath{\infty}
< \ensuremath{\infty}.}
\noindent The norm closure of $M(\psi) \cap L^1([0,\ensuremath{\infty}))$ in
$M(\psi)$ is denoted by $M_1(\psi)$. For every $\psi\in
\Omega_\ensuremath{\infty}$, we have $M_1(\psi)\neq M(\psi)$. We define the
\textbf{Riesz semi-norm} on $M(\psi)$ by \display{\rho_1(x) :=
\inf \inset{ \nm{x-y}_{M(\psi)}}{y \in M_1(\psi)}=\limsup _{t\to
\infty}\phi(x)(t),} (see \cite[Proposition 2.1]{DPSSS}). The
Banach space $(M(\psi),\nm{.}_{M(\psi)})$ is an example of a
rearrangement invariant space \cite{LT}, also termed a symmetric
space \cite{KPS}. Let $M_+(\psi)$ denote the set of positive
functions of $M(\psi)$.
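\medskip \noindent As a numerical illustration of these definitions (an editorial sketch; the function $x$ and the weight $\psi$ below are hypothetical choices), the decreasing rearrangement of a sampled function is realised by sorting, and $\phi(x)(t)$ by a Riemann sum. For $x(s) = 1/(1+|s-50|)$ on $[0,100]$ one has $x^*(t) = 1/(1+t/2)$, so with $\psi(t)=\log(1+t)$ the exact value is $\phi(x)(10) = 2\log 6/\log 11 \approx 1.494$.

```python
import math

def phi_at(samples, h, psi, t):
    """phi(x)(t) = (1/psi(t)) * integral_0^t x*(s) ds, where the decreasing
    rearrangement x* is realised by sorting the |x| samples."""
    xs = sorted((abs(v) for v in samples), reverse=True)
    k = int(round(t / h))
    return sum(xs[:k]) * h / psi(t)

h = 0.001
psi = lambda t: math.log(1.0 + t)            # an element of Omega_infty
x = lambda s: 1.0 / (1.0 + abs(s - 50.0))    # non-monotone on [0, 100]
samples = [x((j + 0.5) * h) for j in range(int(100.0 / h))]

print(phi_at(samples, h, psi, 10.0))  # ~ 2 log 6 / log 11 ~ 1.494
```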
\begin{dfnS}
A positive homogeneous functional $f:M_+(\psi)\to [0,\infty)$ is
\text{(i)} \textbf{\emph{symmetric}} if $f(x) \leq f(y)$ for all $x,y \in M_+(\psi)$ such
that $\int_0^t x^*(s)ds \leq \int_0^t y^*(s)ds \ensuremath{\ \, \forall \,} t \in
[0,\ensuremath{\infty})$, and \text{(ii)} \textbf{\emph{supported at infinity}}, or \textbf{\emph{singular}} on
$M(\psi)$, if $f(|x|) = 0$ for all $x \in M_1(\psi)$.
\end{dfnS}
If such a functional is {\it additive}, then it
can be extended by linearity to a bounded linear positive
functional on $M(\psi)$. Let $M_+(\psi)_{\mathrm{sym},\ensuremath{\infty}}^*$
denote the \textbf{cone of additive symmetric functionals on
$M_+(\psi)$ supported at infinity}, \cite[Section 2]{DPSSS}. Not
every Marcinkiewicz space $M(\psi)$, $\psi\in\Omega_\ensuremath{\infty}$, admits
non-trivial additive singular symmetric functionals. %
Necessary
and sufficient conditions for the existence of such functionals on
a Marcinkiewicz space $M(\psi)$ may be found in \cite[Theorem
3.4]{DPSS} and will be considered in Theorem 2.7 and Theorem 4.4
below.
\bigskip \noindent
Let $\kappa : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ be an increasing, continuous
and unbounded function. Define the $\kappa$\textbf{-weighted mean
function on} $M(\psi)$
$$
\phi_\kappa(x)(t) := \phi(x)(\kappa(t)) =
\frac{1}{\psi(\kappa(t))} \int_0^{\kappa(t)} x^*(s) ds,\quad t>0.
$$
Then, as $\phi(x) \in C_b([0,\ensuremath{\infty}))$ for each $x \in M(\psi)$, we
also have $\phi_\kappa(x) \in C_b([0,\ensuremath{\infty}))$ for each such $x$,
and the sequences \display{ \bd{\phi(x)} = \{ \phi(x)(n)
\}_{n=1}^\ensuremath{\infty} \ \ \ \text{and} \ \ \ \bd{\phi_\kappa(x)} = \{
\phi_\kappa(x)(n) \}_{n=1}^\ensuremath{\infty}} are bounded.
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ be
increasing, continuous and unbounded. Let $x \in M_+(\psi)$ and
$L' \in BL(\ensuremath{\mathbb{N}})$. Define \display{f_{L',\kappa}(x) := L' (
\bd{\phi_\kappa(x)}) = L' \Big( \{ \phi(x)(\kappa(n))
\}_{n=1}^\ensuremath{\infty} \Big) .}
\end{dfnS}
\noindent In \cite{DPSSS} necessary and sufficient conditions were
found on the sequence $\{ \kappa(n) \}_{n=1}^\ensuremath{\infty}$ and the
function $\psi\in \Omega_\ensuremath{\infty}$ such that $f_{L',\kappa} \in M_+(\psi)_{\mathrm{sym},\ensuremath{\infty}}^*$
for all $L' \in BL(\ensuremath{\mathbb{N}})$.
It is natural to introduce the following extension.
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ be
increasing, continuous and unbounded. Let $x \in M_+(\psi)$ and $L
\in BL(\ensuremath{\mathbb{R}}_+)$. Define \display{f_{L,\kappa}(x) :=
L(\phi_\kappa(x)). }
\end{dfnS}
The analysis of the functionals $f_{L,\kappa}$ on $M_+(\psi)$ begins in Section 2.2.
We finish the preliminaries with the following proposition and remark.
\begin{propS}
\ Let $\psi \in \Omega_\ensuremath{\infty}$, $L \in BL(\ensuremath{\mathbb{R}}_+)$ and $ L' \in
BL(\ensuremath{\mathbb{N}})$. Let $\kappa_1, \kappa_2 : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ be increasing continuous
and unbounded functions such that
$\kappa_1-\kappa_2$ is bounded. Then
$f_{L,\kappa_1}(x)=f_{L,\kappa_2}(x)$ and
$f_{L',\kappa_1}(x)=f_{L',\kappa_2}(x)$ for all $x\in
M_+(\psi)$.
\par
\vspace*{1\belowdisplayskip}
\proof{Since
$\nm{x}_{M(\psi)} \geq \psi(t)^{-1}\int_0^tx^*(s)ds \ge x^*(t)t\psi(t)^{-1}$
for all $t>0$, we have
$$
-\|x\|_{M(\psi)}\frac{\psi'(t)}{\psi(t)} \le
-\frac{\int_0^tx^*(s)ds\psi'(t)}{\psi^2(t)} \le \phi(x)'(t) \le
\frac{x^*(t)}{\psi(t)} \le \frac{\|x\|_{M(\psi)}}{t}.
$$
Hence for any $t > 0$,
$$
- \nm{x}_{M(\psi)} \log(\psi(t'))|_{\kappa_1(t)}^{\kappa_2(t)} \leq
\phi(x)(t') |_{\kappa_1(t)}^{\kappa_2(t)} \leq \nm{x}_{M(\psi)}
\log(t')|_{\kappa_1(t)}^{\kappa_2(t)}
$$
or
$$
-\nm{x}_{M(\psi)} \log \frac{\psi(\kappa_2(t))}{\psi(\kappa_1(t))} \leq
\phi_{\kappa_2}(x)(t) - \phi_{\kappa_1}(x)(t) \leq \nm{x}_{M(\psi)}
\log \frac{\kappa_2(t)}{\kappa_1(t)} \eqno{(1.1)}
$$
Let $f$ be an unbounded concave function. Then
$\Big|\frac{f(\kappa_2(t))}{f(\kappa_1(t))} - 1 \Big| =
\frac{|f(\kappa_2(t)) - f(\kappa_1(t))|}{f(\kappa_1(t))} \leq A
\frac{|\kappa_2(t) - \kappa_1(t)|}{f(\kappa_1(t))} \leq \frac{A
B}{f(\kappa_1(t))}$ for some $A,B > 0$ and all $t$ sufficiently large,
since a concave function on $[0,\ensuremath{\infty})$ is Lipschitz on $[1,\ensuremath{\infty})$,
$\kappa_1(t)\to\ensuremath{\infty}$, and $\kappa_2 - \kappa_1$ is
bounded. Hence $\lim_{t \to \ensuremath{\infty}}
\frac{f(\kappa_2(t))}{f(\kappa_1(t))} = 1$. Then $\lim_{t \to
\ensuremath{\infty}} \log \Big| \frac{\kappa_2(t)}{\kappa_1(t)} \Big| = \lim_{t
\to \ensuremath{\infty}} \log \Big| \frac{\psi(\kappa_2(t))}{\psi(\kappa_1(t))}
\Big| = 0$ and $\phi_{\kappa_2}(x) - \phi_{\kappa_1}(x) \in
C_0([0,\ensuremath{\infty}))$ by (1.1). Since $L\in BL({\Bbb R}_+)$
(respectively, $ L'\in BL({\Bbb N}))$ vanishes on functions
(respectively, sequences) tending to 0 at infinity, we conclude
that $f_{L,\kappa_2}(x) - f_{L,\kappa_1}(x) = L(\phi_{\kappa_2}(x)
- \phi_{\kappa_1}(x)) = 0$ (respectively, $f_{L',\kappa_2}(x)
-f_{L',\kappa_1}(x) = L'(\{\phi_{\kappa_2}(x)(n) -
\phi_{\kappa_1}(x)(n)\}_{n=1}^\ensuremath{\infty}) = 0$). }
\end{propS}
\REMS{Proposition 1.8 introduces the notion of
equivalence classes of continuous increasing unbounded functions
that result in the same functional on $M_+(\psi)$.
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa_1, \kappa_2 : [0,\ensuremath{\infty}) \to
[0,\ensuremath{\infty})$ be continuous increasing and unbounded functions. We
define an equivalence relation $\sim_\psi$ by \display{\kappa_1
\sim_\psi \kappa_2 \text{ \ if \ } f_{L,\kappa_1}(x) =
f_{L,\kappa_2}(x) \ensuremath{\ \, \forall \,} L \in BL(\ensuremath{\mathbb{R}}_+) \ensuremath{\ \, \forall \,} x \in M_+(\psi).} Let
$[\kappa]$ denote the equivalence class, with respect to the
relation $\sim_\psi$, of a continuous increasing and unbounded
function $\kappa : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$. It easily follows from
Proposition 1.8 that the class $[\kappa]$ contains a strictly
increasing, invertible, unbounded function $\hat{\kappa}$ such
that $\hat{\kappa}(0)=0$. The function $\hat{\kappa}$ can be
chosen to be differentiable or even piecewise linear if required.
Hence, to analyse all functionals $f_{L,\kappa}$ on $M_+(\psi)$
where $\kappa : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ is a continuous increasing
unbounded function, it is sufficient to consider the set
$\mathcal{K}$ \textbf{of strictly increasing, invertible,
differentiable, unbounded functions} $\hat{\kappa} : [0,\ensuremath{\infty}) \to
[0,\ensuremath{\infty})$ \textbf{such that} $\hat{\kappa}(0)=0$.
}
\section{Symmetric Functionals involving Banach Limits}
This section demonstrates that: (i) the sets of functionals
$\inset{f_{L',\kappa}}{L' \in BL(\ensuremath{\mathbb{N}})}$ (Definition 1.6) and
$\inset{f_{L,\kappa}}{L \in BL(\ensuremath{\mathbb{R}}_+)}$ (Definition 1.7) provide
the same set of functionals on $M_+(\psi)$ supported at infinity
for any given $\kappa \in \mathcal{K}$ of sufficient regularity
with respect to $\psi$ (Theorem 2.3); (ii) necessary and
sufficient conditions exist on the function $\kappa \in
\mathcal{K}$ such that $f_{L',\kappa}, f_{L,\kappa} \in
M_+(\psi)_{\mathrm{sym},\ensuremath{\infty}}^*$ for all $L' \in BL(\ensuremath{\mathbb{N}})$ and $L
\in BL(\ensuremath{\mathbb{R}}_+)$ (Theorem 2.7); and (iii) the Riesz semi-norm
$\rho_1(x)$ of $x \in M(\psi)$ is the supremum of the values
$\inset{ f_{L,\kappa}(|x|)}{L \in BL(\ensuremath{\mathbb{R}}_+)}$ given certain
conditions on $\kappa$ and $\psi$ (Theorem 2.8).
\subsection{Definitions and Results}
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in\mathcal{K}$. Then
$\kappa$ is said to have \textbf{\emph{restricted growth with
respect to}} $\psi$ if
$$\text{F-}\lim_{n \to \ensuremath{\infty}} \frac{\psi(\kappa(n))}{\psi(\kappa(n+1))} = 1.$$
\end{dfnS}
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$. Denote by $\mathrm{R}(\psi)$
\textbf{\emph{the set of all $\kappa \in \mathcal{K}$ that have
restricted growth with respect to}} $\psi$.
\end{dfnS}
It is immediate that the set $\mathrm{R}(\psi)$ is non-empty:
the concave function $\psi$ is invertible, and $\psi^{-1}$
belongs to $\mathrm{R}(\psi) \subset \mathcal{K}$.
The rationale for introducing the set
R$(\psi)$ is provided by the following result.
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathrm{R}(\psi)$.
\begin{prop2list}{6}{2}{2}
\item Let $L \in BL(\ensuremath{\mathbb{R}}_+)$.
Then there exists
$L' \in BL(\ensuremath{\mathbb{N}})$ such that
\display{f_{L,\kappa}(x) = f_{L',\kappa}(x) \ \ensuremath{\ \, \forall \,} x \in M_+(\psi).}
\item Let $L' \in BL(\ensuremath{\mathbb{N}})$. Then
there exists $L \in BL(\ensuremath{\mathbb{R}}_+)$ such that
\display{f_{L,\kappa}(x) = f_{L',\kappa}(x) \ \ensuremath{\ \, \forall \,} x \in M_+(\psi).}
\end{prop2list}
\end{thmS}
The proof of Theorem 2.3 appears in Section 2.2.
Theorem 2.3 says that the sets $\inset{ f_{L,\kappa} }{ L \in
BL(\ensuremath{\mathbb{R}}_+)}$ and $\inset{ f_{L',\kappa} }{ L' \in BL(\ensuremath{\mathbb{N}}) }$ are
identical as sets of functionals on $M_+(\psi)$ when $\kappa \in \mathrm{R}(\psi)$.
This has an
important corollary.
\begin{corS}
Let $\psi \in \Omega_\ensuremath{\infty}$, $\kappa \in \mathrm{R}(\psi)$
and $x \in M_+(\psi)$.
Then
\display{ \text{F-}\lim_{t \to \ensuremath{\infty}} \phi_\kappa(x)(t) = A}
if and only if
\display{ \text{F-}\lim_{n \to \ensuremath{\infty}} \phi_\kappa(x)(n) = A}
for some $A \geq 0$.
\par
\vspace*{1\belowdisplayskip} \proof{Immediate from Theorem 2.3.}
\end{corS}
The condition that $\kappa$ has restricted growth with respect to $\psi$
identifies the two sets of functionals as above. However, the condition is not sufficient
to ensure \emph{additivity} of the functionals.
\begin{dfnS} We say that $\kappa\in \mathcal{K}$ is of \textbf{\emph{exponential increase}} if $\exists \, C > 0$ such that
$\forall t > 0$
\display{ \kappa(t + C) > 2 \kappa(t).}
\end{dfnS}
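\medskip \noindent Definition 2.5 is easy to probe numerically. The following sketch (an editorial illustration with hypothetical choices of $\kappa$) confirms that $\kappa(t)=e^t-1$ is of exponential increase, while $\kappa(t)=t^2$ is not, since $(t+C)^2 \leq 2t^2$ once $t \geq C/(\sqrt2-1)$.

```python
import math

def exp_increase_holds(kappa, C, ts):
    """Check kappa(t + C) > 2 * kappa(t) on a sample of t values."""
    return all(kappa(t + C) > 2.0 * kappa(t) for t in ts)

ts = [0.1 * k for k in range(1, 300)]  # t in (0, 30)

# kappa(t) = e^t - 1 with C = 0.8 > log 2:
# kappa(t + C) = e^C e^t - 1 > 2 e^t - 2 = 2 kappa(t) for every t
ok_exp = exp_increase_holds(lambda t: math.exp(t) - 1.0, 0.8, ts)

# kappa(t) = t^2 with C = 5 fails once t >= 5 / (sqrt(2) - 1) ~ 12.07
ok_sq = exp_increase_holds(lambda t: t * t, 5.0, ts)
print(ok_exp, ok_sq)  # True False
```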
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$. Denote by $\mathrm{R_{exp}}(\psi)$
\textbf{\emph{the set of elements of $\mathrm{R}(\psi)$ that are of
exponential increase}}.
\end{dfnS}
The rationale for introducing the functions of exponential
increase is provided by the following result.
\begin{thmS}Let $\psi \in \Omega_\ensuremath{\infty}$ and
$\kappa \in \mathrm{R}(\psi)$. Then the following statements
are equivalent
\begin{prop2list}{10}{2}{4}
\item $f_{L',\kappa} \in M_+(\psi)_{\mathrm{sym},\ensuremath{\infty}}^*$
$\ensuremath{\ \, \forall \,} L' \in BL(\ensuremath{\mathbb{N}})$,
\item $f_{L,\kappa} \in M_+(\psi)_{\mathrm{sym},\ensuremath{\infty}}^*$
$\ensuremath{\ \, \forall \,} L \in BL(\ensuremath{\mathbb{R}}_+)$,
\item $\kappa \in \mathrm{R_{exp}}(\psi)$.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{(i) $\Leftrightarrow$ (iii) \ Let $p_n :=
\kappa(n)$ define the sequence $\{ p_n \}_{n=1}^\ensuremath{\infty}$. Then (i)
is equivalent to: (a) the existence of $m \in \ensuremath{\mathbb{N}}$ such that $2p_n
\leq p_{n+m}$, and (b) $\text{F-}\lim_{n \to \ensuremath{\infty}}
\frac{\psi(p_n)}{\psi(p_{n+1})} = 1$ by \cite[Theorem 3.8]{DPSSS}.
Since $\kappa$ is increasing, it is evident that the condition (a)
is equivalent to the assertion that $\kappa$ is of exponential
increase.
Condition (b) is exactly the condition that $\kappa$ has restricted growth
with respect to $\psi$.
(i) $\Leftrightarrow$ (ii) is immediate from Theorem 2.3. }
\end{thmS}
Theorem 2.3 and Theorem 2.7 will allow us, in following sections,
to apply the results on singular symmetric functionals in
\cite{DPSS} and \cite{DPSSS}
to the construction of Connes in \cite{CN}.
One of the results that we shall apply,
the following and final result for this section,
is a more precise version of \cite[Theorem 4.1]{DPSSS}.
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$. Let $\kappa\in \mathcal{K}$ be
such that
$$\lim_{n \to \ensuremath{\infty}} \frac{\psi(\kappa(n))}{\psi(\kappa(n+1))} = 1.$$
Then
$$\rho_1(x) = \sup \inset{f_{L,\kappa}(|x|)}{L \in BL(\ensuremath{\mathbb{R}}_+)} \quad \forall x\in M(\psi).$$
\par
\vspace*{1\belowdisplayskip}
\proof{
Let $\kappa : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ be continuous, unbounded,
and increasing such that
$$
\lim_{n \to \ensuremath{\infty}} \frac{\psi(\kappa(n))}{\psi(\kappa(n+1))} = 1
\eqno{(2.1)}
$$
Let $x \in M(\psi)$. Without loss of generality, we assume
$\rho_1(x) =1$ (if $\rho_1(x) = 0$ then $\phi_\kappa(|x|) \in
C_0([0,\ensuremath{\infty}))$ and both sides vanish). Clearly
$$
q(x) := \sup \inset{L(\phi_{\kappa}(|x|))}{L \in BL(\ensuremath{\mathbb{R}}_+)} \leq
\limsup_{t \to \ensuremath{\infty}} \phi_{\kappa}(|x|)(t) = \rho_1(x) = 1 \eqno{(2.2)}
$$
Let
$$
a_{n} := \phi_{\kappa}(x)(n) = \frac{1}{\psi(\kappa(n))}
\int_0^{\kappa(n)} x^*(s)ds.
$$
By (2.2) there exists an increasing sequence $t_k \to \ensuremath{\infty}$ such that
$$
\lim_{k \to \ensuremath{\infty}} \phi_{\kappa}(|x|)(t_k) = 1 \eqno{(2.3)}
$$
Let $n_k \in \ensuremath{\mathbb{N}}$ be such that $\kappa(n_k-1) \leq \kappa(t_k) \leq
\kappa(n_k)$. Then,
$$
\phi_{\kappa}(|x|)(t_k) \leq \frac{\psi(\kappa(n_k))}{\psi(\kappa(n_k-1))}
\frac{1}{\psi(\kappa(n_k))} \int_0^{\kappa(n_k)} x^*(s)ds
= a_{n_k} \frac{\psi(\kappa(n_k))}{\psi(\kappa(n_k-1))}.
$$
Hence
$$
a_{n_k} \geq \phi_{\kappa}(|x|)(t_k) \frac{\psi(\kappa(n_k-1))}{\psi(\kappa(n_k))}.
$$
Let $\epsilon > 0$. Then by (2.1) and (2.3) there exists $K$ such
that $\ensuremath{\ \, \forall \,} k > K$,
$$
a_{n_k} > 1 - \frac{\epsilon}{3}.
$$
Now, for $i=1,2,...$ let $k_i$ be the smallest integer greater
than both $k_{i-1}$ and $K$ such that
$$
\frac{\psi(\kappa(n_{k_i}+i))}{\psi(\kappa(n_{k_i}))} < 1 + \frac{\epsilon}{3}.
$$
The integer $k_i$ exists for each $i$ by equation (2.1). Hence for
all $j = 1,...,i$
$$
a_{n_{k_i}+j} \geq \frac{\psi(\kappa(n_{k_i}))}{\psi(\kappa(n_{k_i}+i))}
\frac{1}{\psi(\kappa(n_{k_i}))} \int_0^{\kappa(n_{k_i})} x^*(s)ds
> \frac{1}{1+\epsilon/3} a_{n_{k_i}} > \frac{1 - \epsilon/3}{1+\epsilon/3}
> 1 - \epsilon.
$$
Then, applying Sucheston's Theorem \cite{S}
and using Theorem 2.3 above, we obtain the existence of $L \in
BL(\ensuremath{\mathbb{R}}_+)$ such that $L(\phi_\kappa(x)) \geq 1 - \epsilon$. Hence
$q(x) \geq 1 - \epsilon$ for arbitrary $\epsilon > 0$ and $q(x) =
\rho_1(x) = 1$. }
\end{thmS}
\subsection{Technical Results}
This section culminates in the proof of Theorem 2.3.
\begin{lemmaS}
Let $L \in BL(\ensuremath{\mathbb{R}}_+)$. Then
\display{L'(\alpha) := L(\ensuremath{p}(\alpha)) \ \ensuremath{\ \, \forall \,} \alpha \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})}
defines an element of $BL(\ensuremath{\mathbb{N}})$. \par
\vspace*{1\belowdisplayskip}
\proof{Let $L \in BL(\ensuremath{\mathbb{R}}_+)$.
Then $L'$ is linear as $L$ and $p$ are linear,
$\nm{L'(\alpha)} \leq \nm{L}\nm{\ensuremath{p}(\alpha)} = \nm{\alpha}$
and $L'(1) = 1$ by Lemma 1.2.
Let $k \in \ensuremath{\mathbb{N}}$. Then
$L'(T_k(\alpha)) = L(\ensuremath{p}(T_k(\alpha))) = L(T_k(\ensuremath{p}(\alpha))) = L(\ensuremath{p}(\alpha)) = L'(\alpha)$
by Lemma 1.2(iii) and translation invariance of $L$.}
\end{lemmaS}
\begin{lemmaS}
Let $L' \in BL(\ensuremath{\mathbb{N}})$. Then
\display{L(g) := L'(E_{\ensuremath{\mathbb{N}}}(g)) \ \ \ensuremath{\ \, \forall \,} g \in C_b([0,\ensuremath{\infty}))}
defines an element of $BL(\ensuremath{\mathbb{R}}_+)$.
\par
\vspace*{1\belowdisplayskip}
\proof{
Let $L' \in BL(\ensuremath{\mathbb{N}})$.
Then $L$ is linear as $L'$ and $E_{\ensuremath{\mathbb{N}}}$ are linear,
$\nm{L(g)} \leq \nm{L'}\nm{E_{\ensuremath{\mathbb{N}}}(g)} \leq \nm{g}$
and $L(1) = L'(E_{\ensuremath{\mathbb{N}}}(1)) = L'(1) = 1$ by Lemma 1.3.
It remains to be shown $L$ is translation invariant.
Let $a \in (0,1)$. Then
$$
\begin{array}{rl}
L(T_a(g)) = & L'(E_{\ensuremath{\mathbb{N}}}(T_a(g))) = L'(\{\int_{n-1}^ng(s+a)ds\}_{n=1}^\infty)
= L'(\{\int_{n-1+a}^{n+a}g(s)ds\}_{n=1}^\infty) \\
= & L'(\{\int_{n-1+a}^{n}g(s)ds\}_{n=1}^\infty)+L'(\{\int_{n}^{n+a}g(s)ds\}_{n=1}^\infty)\\
= & L'(\{\int_{n+a}^{n+1}g(s)ds\}_{n=1}^\infty)+L'(\{\int_{n}^{n+a}g(s)ds\}_{n=1}^\infty)\\
= & L'(\{\int_n^{n+1}g(s)ds\}_{n=1}^\infty) = L'(T_1(E_{\ensuremath{\mathbb{N}}}(g)))=L(g).
\end{array}
$$
Let $[b]$ denote the greatest integer not exceeding $b > 0$.
Then $T_b=T_{[b]}\circ T_a$ where $0\le a=b-[b]<1$.
The translation invariance of $L$ in the general case
follows from Lemma 1.3(iii).
}
\end{lemmaS}
\begin{lemmaS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathrm{R}(\psi)$. Let
$x \in M_+(\psi)$ and define
\display{j_n(x) = \frac{1}{\psi(\kappa(n))} \int_{\kappa(n)}^{\kappa(n+1)}x^*(s)ds}
and
\display{K_n(x) = \sup_{t \in [n,n+1]} \Big| \phi_\kappa(x)(t)
- \phi_\kappa(x)(n) \Big|.}
Then
\display{
L'( \{ j_n(x) \}_{n=1}^\ensuremath{\infty} ) = 0 = L'( \{ K_n(x) \}_{n=1}^\ensuremath{\infty} ) \ \ensuremath{\ \, \forall \,} L' \in BL(\ensuremath{\mathbb{N}}).}
\par
\vspace*{1\belowdisplayskip}
\proof{Let $x \in M_+(\psi)$.
We abbreviate notation by setting $g(t) := \phi_\kappa(x)(t)$
and $\alpha_n = \frac{\psi(\kappa(n))}{\psi(\kappa(n+1))}$.
Let $h_n(x) = g(n+1) - \alpha_n g(n)$. Note $0 \leq \alpha_n \leq 1 \ensuremath{\ \, \forall \,} n$
as $\psi \circ \kappa$ is increasing.
Then $\text{F-}\lim_n \alpha_n = 1$ and $\text{F-}\lim_n |1-\alpha_n| = 0$
by hypothesis on $\kappa$.
Hence $L'(h_n(x)) = L'(T_1 \bd{g}) - (\text{F-}\lim_n \alpha_n )L'(\bd{g}) =
L'(\bd{g}) - L'(\bd{g}) = 0$
by \cite[Lemma 3.4]{DPSSS} and translation invariance of $L'$. Moreover
\display{L'(j_n(x)) = 1 \cdot
L'(j_n(x)) = (\text{F-}\lim_n \alpha_n) L'(j_n(x))
= L'(\alpha_n j_n(x)) = L'(h_n(x)) = 0}
again by \cite[Lemma 3.4]{DPSSS}. Now
\begin{eqnarray*}
K_n(x) & = & \sup_{t \in [n,n+1]} \Big|\frac1{\psi(\kappa(t))}
\int_0^{\kappa(n)}x^*ds+\frac1{\psi(\kappa(t))}
\int_{\kappa(n)}^{\kappa(t)}x^*ds-
\frac1{\psi(\kappa(n))}\int_0^{\kappa(n)}x^*ds
\Big| \\
& \le & \sup_{t \in [n,n+1]} \Big|\phi_\kappa(x)(n) \Big( \frac{\psi(\kappa(n))}
{\psi(\kappa(t))}-1 \Big) \Big|+ \sup_{t \in [n,n+1]}
\Big|
\frac1{\psi(\kappa(t))}
\int_{\kappa(n)}^{\kappa(t)}x^*ds
\Big| \\
& \le & \Big|\phi_\kappa(x)(n) \Big( \frac{\psi(\kappa(n))}
{\psi(\kappa(n+1))}-1 \Big) \Big|+
\Big|
\frac1{\psi(\kappa(n))}
\int_{\kappa(n)}^{\kappa(n+1)}x^*ds
\Big| \\
& \le & \nm{x}_{M(\psi)} | 1-\alpha_n | + j_n(x)
\end{eqnarray*}
Hence $L'( \{ K_n(x) \}) \leq \nm{x}_{M(\psi)}\, L'(\{ |1-\alpha_n| \})
+ L'( \{j_n(x) \}) = 0$, since $\text{F-}\lim_n |1-\alpha_n| = 0$ and
$L'(\{ j_n(x) \}) = 0$ by the results above.
}
\end{lemmaS}
\begin{propS}
\ Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathrm{R}(\psi)$.
Then $\phi_\kappa(x)$ is almost piecewise linear at infinity
for all $x \in M_+(\psi)$.
\par
\vspace*{1\belowdisplayskip} \proof{We abbreviate the notation by setting $g :=
\phi_\kappa(x)$, $C_n = \sup_{t \in [n,n+1]} |g(t) - \ensuremath{p}
\bd{g}(t)|$ and $K_n = \sup_{t \in [n,n+1]} |g(t) - g(n)|$,
$n=0,1,2,...$ Let $f = \ensuremath{p}(\{ C_{n-1} \}_{n=1}^\ensuremath{\infty})$. Then $|g(t) -
\ensuremath{p} \bd{g}(t)| \leq 2f(t+1/2)$ for all $t \geq 0$. Hence $L(|g - \ensuremath{p} \bd{g}|) \leq 2
L(T_{1/2}f) = 2 L(f)$. We now evaluate $L(f)$. By Lemma 2.9 $L(f)
= L'(\{ C_n \}_{n=1}^\ensuremath{\infty})$ for some $L' \in BL(\ensuremath{\mathbb{N}})$. Consider
\vspace*{-0.5cm} \display{\begin{array}{rl} C_n = & \sup_{t \in
[n,n+1)} \Big| g(t) - \Big( g(n)
+ (g(n+1) - g(n))(t-n) \Big) \Big| \\
\leq & \sup_{t \in [n,n+1)} | g(t) - g(n)|
+ |g(n+1) - g(n)| \ \leq \ 2 K_n.
\end{array}}
Hence $L(f) = L'(\{ C_n \}_{n=1}^\ensuremath{\infty}) \leq 2
L'(\{K_n\}_{n=1}^\ensuremath{\infty}) = 0$ by Lemma 2.11.}
\end{propS}
We are now in a position to prove Theorem 2.3.
\subsubsection*{Proof of Theorem 2.3}
\noindent (i) \ Let $L \in BL(\ensuremath{\mathbb{R}}_+)$ and $L' \in BL(\ensuremath{\mathbb{N}})$ as in
Lemma 2.9. By Proposition 2.12 $L(\phi_\kappa(x) - \ensuremath{p}
\bd{\phi_\kappa(x)}) = 0$, hence $L(\phi_\kappa(x)) = L(\ensuremath{p}
\bd{\phi_\kappa(x)}) = L'(\bd{\phi_\kappa(x)})$.
\medskip \noindent (ii) \ Let $L' \in BL(\ensuremath{\mathbb{N}})$.
Let $L \in BL(\ensuremath{\mathbb{R}}_+)$ as in Lemma 2.10, $L(\phi_\kappa(x)) =
L'(E_{\ensuremath{\mathbb{N}}}(\phi_\kappa(x)))$. Since
$E_{\ensuremath{\mathbb{N}}}(\phi_\kappa(x))(n)=\phi_\kappa(x)(\xi_n)$ for some
$\xi_n\in[n-1,n]$ for each $n \in \ensuremath{\mathbb{N}}$ then \display{
|\phi_\kappa(x)(n)-E_{\ensuremath{\mathbb{N}}}(\phi_\kappa(x))(n)|\le\sup_{t\in[n-1,n]}|\phi_\kappa(x)(t)-\phi_\kappa(x)(n)|=K_n(x)}
for each $n \in \ensuremath{\mathbb{N}}$. Consequently, by Lemma 2.11,
$$
\begin{array}{rcl}
|L'(\bd{\phi_\kappa(x)}) - L'(E_{\ensuremath{\mathbb{N}}}(\phi_\kappa(x)))| & \le
& L'(\{|\phi_\kappa(x)(n)-E_{\ensuremath{\mathbb{N}}}(\phi_\kappa(x))(n)|\}_{n=1}^\infty) \\
& \le & L'(\{K_n(x)\}_{n=1}^\ensuremath{\infty}) = 0
\end{array}
$$
or $L(\phi_\kappa(x))=L'(\bd{\phi_\kappa(x)})$ as required. \quad
$\Box$
\section{Measurability in Marcinkiewicz Spaces}
\noindent Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathcal{K}$.
Having constructed the family $\inset{f_{L,\kappa}}{L \in
BL(\ensuremath{\mathbb{R}}_+)}$ of functionals on $M_+(\psi)$, it is natural to
consider elements $x$ in $M_+(\psi)$ such that
$$
f_{L_1,\kappa}(x) = f_{L_2,\kappa}(x) \ensuremath{\ \, \forall \,} L_1,L_2 \in BL(\ensuremath{\mathbb{R}}_+).
$$
It is obvious that the equation above holds if and only if
$\phi_\kappa(x)$ is almost convergent (see Definition 1.1) and
consequently, in this case
$$
f_{L,\kappa}(x) = \text{F-} \lim_{t \to \ensuremath{\infty}} \phi_\kappa(x)(t) = A
$$
for some $A \geq 0$ and all $L \in BL(\ensuremath{\mathbb{R}}_+)$. Necessary and
sufficient conditions for almost convergence, even for sequences
\cite{GL}, are somewhat complicated. In studying the function
$\phi_\kappa(x)$ it is preferable to consider the notions of
Cesaro convergence (defined below) and ordinary convergence and to
`squeeze' almost convergence in between. In this section we: (i)
establish that Cesaro convergence is weaker than almost convergence,
which in turn is weaker than ordinary convergence (Remark 3.1,
Corollary 3.4), and then (ii) consider Tauberian conditions (see
\cite[Section 6.1]{Hardy}) on the function $\phi_\kappa(x)$ such
that Cesaro convergence implies ordinary convergence and hence the
notions of Cesaro, almost and ordinary convergence are identical
for $\phi_\kappa(x)$ (Theorem 3.7, Corollary 3.10).
\subsection{Definitions and Results}
Let $\{a_n \}_{n=1}^\ensuremath{\infty} \in \ell^\ensuremath{\infty}(\ensuremath{\mathbb{N}})$. Define
\display{b_n(p) = \frac{1}{n} \sum_{i=0}^{n-1} a_{p+i}}
for $p \in \ensuremath{\mathbb{N}}$. We recall
from \cite{GL} that $\{ a_n \}_{n=1}^\ensuremath{\infty}$ is almost convergent
(see Definition 1.1) if and only if
\display{L'(\{ a_n \}) = \text{F-}\lim_n a_n = \lim_n b_n(p) = A}
for all $L' \in BL(\ensuremath{\mathbb{N}})$ where $\lim_n b_n(p) = A$ uniformly with respect to $p \in \ensuremath{\mathbb{N}}$.
\medskip \noindent A \textbf{sequence} $\{ a_n \}_{n=1}^\ensuremath{\infty}$ is called
\begin{prop2list}{16}{2}{4}
\item \textbf{Cesaro convergent} if $\lim_n b_n(1) = A$
\item \textbf{almost convergent} if $\lim_n b_n(p) = A$
uniformly with respect to $p \in \ensuremath{\mathbb{N}}$
\item \textbf{convergent} if $\lim_n a_n = A$
\end{prop2list}
for some $A \geq 0$. We denote by C, F and S \textbf{the sets of
all Cesaro convergent sequences, almost convergent sequences and
convergent sequences}, respectively.
\REMS{Since \display{\lim_n a_n = A \ \ensuremath{\Rightarrow} \ \text{F-}\lim_n
a_n = \lim_n b_n(p) = A \ \ensuremath{\Rightarrow} \ \lim_n b_n(1) = A} we have
the inclusion of sets ${ \rm S \subset \rm F \subset \rm
C.}$ }
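\medskip \noindent Both inclusions in Remark 3.1 are strict. The sequence $a_n = (-1)^n$ lies in $\rm F \setminus S$, while a standard example of a sequence in $\rm C \setminus F$ is the $0$--$1$ sequence with runs of lengths $1,2,3,\dots$: its Cesaro means tend to $1/2$, but windows of fixed length sitting inside long runs average to $0$ or to $1$, so $b_n(p)$ cannot converge uniformly in $p$. The following sketch (an editorial illustration) checks this numerically.

```python
def blocks(N):
    """Runs of 1s and 0s of lengths 1, 2, 3, ...; run k consists of
    ones when k is odd and zeros when k is even."""
    a, bit, run = [], 1, 1
    while len(a) < N:
        a += [bit] * run
        bit, run = 1 - bit, run + 1
    return a[:N]

a = blocks(200000)

# the Cesaro mean b_n(1) is close to 1/2 for large n ...
cesaro = sum(a[:150000]) / 150000.0
print(cesaro)  # ~ 0.5

# ... but length-100 window means hit both 1 and 0 for large p:
# run 399 (ones) occupies indices [79401, 79800),
# run 400 (zeros) occupies indices [79800, 80200)
w1 = sum(a[79450:79550]) / 100.0
w0 = sum(a[79850:79950]) / 100.0
print(w1, w0)  # 1.0 0.0
```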
\vspace*{0cm}
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in\mathcal{K}$. Let $x \in
M_+(\psi)$. We say $x$ is
\begin{prop2list}{16}{2}{4}
\item \emph{C}$_\kappa$\textbf{\emph{-measurable}} if
$\bd{\phi_\kappa(x)} \in \rm C$, \item \emph{F}$_\kappa$
\textbf{\emph{-measurable}} if $\bd{\phi_\kappa(x)} \in \rm F$,
\item \emph{S}$_\kappa$\textbf{\emph{-measurable}} if
$\bd{\phi_\kappa(x)} \in \rm S$.
\end{prop2list}
\end{dfnS}
\bigskip \noindent Define for $\mu > 0$, \display{C(g)(\mu) = \frac{1}{\mu}
\int_{0}^\mu g(t) dt.} A \textbf{function} $g\in C_b([0,\ensuremath{\infty}))$ is called
\begin{prop2list}{16}{2}{4}
\item \textbf{Cesaro convergent} if $\lim_{t \to \ensuremath{\infty}} C(g)(t) = A$
\item \textbf{almost convergent at infinity} if
F-$\lim_{t \to \ensuremath{\infty}} g(t) = A$
\item \textbf{convergent at infinity} if $\lim_{t \to \ensuremath{\infty}} g(t) = A$
\end{prop2list}
for some $A \geq 0$. We denote by $\mathcal{C}$, $\mathcal{F}$ and
$\mathcal{S}$ \textbf{the sets of all Cesaro convergent functions,
almost convergent functions and functions convergent at infinity},
respectively.
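\medskip \noindent As an illustration (an editorial sketch with a hypothetical test function): $g(t)=\sin t$ belongs to $\mathcal{F}\setminus\mathcal{S}$. Indeed $C(g)(\mu)=(1-\cos\mu)/\mu\to0$, and the mean of $g$ over any window $[p,p+T]$ is bounded by $2/T$ uniformly in $p$, while $g$ itself has no limit at infinity.

```python
import math

def window_mean(g, p, T, steps=100000):
    """(1/T) * integral_p^{p+T} g(t) dt by a midpoint sum;
    window_mean(g, 0, mu) is the Cesaro mean C(g)(mu)."""
    h = T / steps
    return sum(g(p + (j + 0.5) * h) for j in range(steps)) * h / T

# Cesaro mean of sin: C(sin)(mu) = (1 - cos(mu)) / mu -> 0
c = window_mean(math.sin, 0.0, 1000.0)
print(c)  # near 0

# window means are uniformly small: |(cos(p) - cos(p + T)) / T| <= 2 / T
worst = max(abs(window_mean(math.sin, p, 200.0)) for p in (0.0, 7.3, 10000.0))
print(worst)  # <= 0.01 up to quadrature error
```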
\vspace*{0cm}
\begin{thmS}
Let $g \in C_b([0,\ensuremath{\infty}))$. Then
\display{ [a,b] \subset \inset{L(g)}{L \in BL(\ensuremath{\mathbb{R}}_+)} }
where
\display{\quad a=\liminf_{t\to\ensuremath{\infty}}C(g)(t),\
b=\limsup_{t\to\ensuremath{\infty}}C(g)(t).}
\par
\vspace*{1\belowdisplayskip} \proof{Suppose the result is false. Then there exists $c
\in [a,b]$ such that $c \not= L(g)$ for any $L \in BL(\ensuremath{\mathbb{R}}_+)$. By
continuity of $C(g)$ there exists a sequence $t_n\to\ensuremath{\infty}$ as $n
\to \ensuremath{\infty}$ such that $C(g)(t_n) \to c$. Let us consider
$C_b([0,\ensuremath{\infty}))^*$ equipped with the weak$^*$-topology. Then the
unit ball $B$ of $C_b([0,\ensuremath{\infty}))^*$ is weak$^*$-compact. Hence, the
sequence of functionals $\delta_{t_n}(f)=f(t_n),\ n=1,2,...,$ has
a limit point $V \in B$. In fact, this limit point belongs to the
weak$^*$ compact subset $B_1$ of positive elements $\gamma$ of the
unit ball $B\subset C_b([0,\ensuremath{\infty}))^*$ such that $\gamma(1)=1$.
From weak$^*$ convergence the state $V$ has the following
properties: (i) $V(h)=\lim_n h(t_n)=0$ for every function $h \in
C_0([0,\ensuremath{\infty}))$, and (ii) $V(C(g))=\lim_n C(g)(t_n)=c$.
Define the functional $L(f):=V(C(f))$ for $f \in C_b([0,\ensuremath{\infty}))$.
It is immediate that $L(g) = c$ by property (ii). Hence,
once we verify that $L$ belongs to $BL(\ensuremath{\mathbb{R}}_+)$, the supposition on $c$
is contradicted and the result is proven.
We show the functional $L$ is translation
invariant. Indeed, for any $f\in C_b([0,\ensuremath{\infty}))$
$$
C(T_af)(\mu)-C(f)(\mu)=\frac1\mu\int_0^\mu [f(t+a)-f(t)]dt=
\frac1\mu\left[\int_0^a f(t)dt +\int_\mu^{\mu+a}f(t)dt\right]
\to 0
$$
for $\mu\to\infty$. Hence
translation invariance of $L$ follows by property (i).
Trivially $L(1) = V(C(1)) = V(1) = 1$. Hence
$L \in BL(\ensuremath{\mathbb{R}}_+)$.
}
\end{thmS}
\begin{corS}
Let $\mathcal{C}$, $\mathcal{F}$ and $\mathcal{S}$ be the sets defined
as above. Then
\display{\mathcal{S} \subset \mathcal{F} \subset \mathcal{C}.}
\proof{The inclusion $\mathcal{S} \subset \mathcal{F}$ is immediate.
The inclusion $\mathcal{F} \subset \mathcal{C}$ is immediate
from Theorem 3.3.}
\end{corS}
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathcal{K}$. Let $x
\in M_+(\psi)$. We say $x$ is
\begin{prop2list}{16}{2}{4}
\item $\mathcal{C}_\kappa$\textbf{\emph{-measurable}} if
$\phi_\kappa(x) \in \mathcal{C}$, \item
$\mathcal{F}_\kappa$\textbf{\emph{-measurable}} if $\phi_\kappa(x)
\in \mathcal{F}$, \item
$\mathcal{S}_\kappa$\textbf{\emph{-measurable}} if $\phi_\kappa(x)
\in \mathcal{S}$, \item $\mathcal{S}$\textbf{\emph{-measurable}}
if $\phi(x) \in \mathcal{S}$.
\end{prop2list}
\end{dfnS}
\vspace*{0cm} \REMS{We draw the reader's attention to the fact
that since $\kappa$ is continuous, $x$ is
$\mathcal{S}_\kappa$-measurable if and only if $x$ is
$\mathcal{S}$-measurable. The same (simple) analysis does not
work with the notion of $\rm S_\kappa$-measurability introduced in
Definition 3.2. Nevertheless, it is established in the following
theorem that the equivalence of ${\rm S}_\kappa$-measurability of
an element $x$ with $\mathcal{S}$-measurability of $x$ holds under
a natural restriction on $x$.}
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathrm{R}(\psi)$. Let
$x \in M_+(\psi)$ be such that
$$
t \, \phi_\kappa(x)'(t) > -H
$$
for some $H>0$ and all $t > 0$. Then the following statements are
equivalent
\begin{prop2list}{16}{2}{3}
\item $x$ is $\rm C_\kappa$-measurable, \item $x$ is
$\mathcal{C}_\kappa$-measurable, \item $x$ is $\rm
F_\kappa$-measurable, \item $x$ is
$\mathcal{F}_\kappa$-measurable, \item $x$ is $\rm
S_\kappa$-measurable, \item $x$ is $\mathcal{S}$-measurable.
\end{prop2list}
\end{thmS}
The proof of Theorem 3.7 appears in Section 3.2.
The hypothesis on the derivative $\phi_\kappa(x)'$, which depends on
$x\in M_+(\psi)$,
can be made independent of $x$ by a stronger hypothesis on the function $\kappa$.
We recall that $\kappa \in \mathcal{K}$ is an invertible differentiable
function such that $\kappa(0) = 0$.
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in \mathcal{K}$. We say
$\kappa$ has \textbf{\emph{dominated growth with respect to}}
$\psi$ if $\exists C > 0$ such that $\forall t > 0$
$$
\frac{(\psi\circ\kappa)'(t)}{\psi\circ\kappa(t)} < \frac{C}{t}.
$$
Denote by $\mathrm{D}(\psi)$ \textbf{\emph{the set of $\kappa \in
\mathcal{K}$ that have dominated growth with respect to $\psi$}}.
\end{dfnS}
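\medskip \noindent Dominated growth asks that $t\,(\psi\circ\kappa)'(t)/(\psi\circ\kappa)(t)$ stay bounded. A numerical sketch (an editorial illustration; the choices of $\psi$ and $\kappa$ are hypothetical): with $\psi(t)=\log(1+t)$, the quotient is identically $1$ for $\kappa=\psi^{-1}$, whereas for $\kappa(t)=e^{e^t}-e$ one has $\psi(\kappa(t))\approx e^t$, so the quotient grows like $t$ and the condition fails.

```python
import math

def t_log_deriv(f, t, h=1e-6):
    """t * f'(t) / f(t), with f' approximated by a central difference."""
    return t * (f(t + h) - f(t - h)) / (2.0 * h) / f(t)

psi = lambda t: math.log(1.0 + t)
k1 = lambda t: math.exp(t) - 1.0               # psi^{-1}: psi(k1(t)) = t
k2 = lambda t: math.exp(math.exp(t)) - math.e  # k2(0) = 0, psi(k2(t)) ~ e^t

q1 = [t_log_deriv(lambda s: psi(k1(s)), t) for t in (1.0, 3.0, 5.0)]
q2 = [t_log_deriv(lambda s: psi(k2(s)), t) for t in (1.0, 3.0, 5.0)]
print(q1)  # ~ [1, 1, 1]: bounded, so dominated growth holds
print(q2)  # grows roughly like t: the condition fails
```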
\noindent It is immediate that the set $\mathrm{D}(\psi)$ is non-empty
since it contains $\psi^{-1}$.
The rationale for introducing the set
D$(\psi)$ is provided by the following result.
\begin{corS}
Let $\psi \in \Omega_\ensuremath{\infty}$, $\kappa \in \mathrm{D}(\psi)$ and $x
\in M_+(\psi)$. Then the following statements are equivalent
\begin{prop2list}{16}{2}{3}
\item $x$ is $\rm C_\kappa$-measurable, \item $x$ is
$\mathcal{C}_\kappa$-measurable, \item $x$ is $\rm
F_\kappa$-measurable, \item $x$ is
$\mathcal{F}_\kappa$-measurable, \item $x$ is $\rm
S_\kappa$-measurable, \item $x$ is $\mathcal{S}$-measurable.
\end{prop2list}
\end{corS}
The proof of Corollary 3.9 also appears in Section 3.2.
In terms of the functionals $f_{L,\kappa}$ of Definition 1.7
the preceding result may be reformulated as follows.
\begin{corS}
Let $\psi \in \Omega_\ensuremath{\infty}$, $\kappa \in \mathrm{D}(\psi)$
and $x \in M_+(\psi)$.
Then the following statements are equivalent
\begin{prop2list}{10}{2}{4}
\item $x$ is $\mathcal{C}_\kappa$-measurable,
\item $f_{L,\kappa}(x)$ is independent of $L \in BL(\ensuremath{\mathbb{R}}_+)$,
\item $f_{L,\kappa}(x) = \lim_{t \to \ensuremath{\infty}} \phi(x)(t) \ \ensuremath{\ \, \forall \,} L \in BL(\ensuremath{\mathbb{R}}_+).$
\end{prop2list}
\end{corS}
The equivalence of statements (ii) and (iii) in the above
Corollary is a new and surprising result. The significance
of the result is best seen
in the context of the work of A. Connes.
To this end we introduce notions relevant to \cite{CN}.
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in\mathcal{K}$. Then we
say $x \in M_+(\psi)$ is
\begin{prop2list}{10}{2}{4}
\item $\kappa$\textbf{\emph{-measurable}} if $f_{L,\kappa}(x)$ is
independent of $L \in BL(\ensuremath{\mathbb{R}}_+)$, and \item
\textbf{\emph{Tauberian}} if $$\lim_{t \to \ensuremath{\infty}}\phi(x)(t) =
\lim_{t \to \ensuremath{\infty}} \frac{1}{\psi(t)} \int_0^t x^*(s)ds = A$$ for
some $A \geq 0$.
\end{prop2list}
\end{dfnS}
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$. Denote by ${\cal M}_\kappa^+(\psi)$
(respectively, $\mathcal{T}_+(\psi)$) \textbf{\emph{the set of
$\kappa$-measurable}} (respectively, \textbf{\emph{Tauberian}})
\textbf{\emph{elements of $M_+(\psi)$}}. We also define the set
${\cal M}_+(\psi) := \cap_{\kappa \in \mathrm{R_{exp}}(\psi)}
{\cal M}_\kappa^+(\psi)$ called \textbf{\emph{the set of
measurable positive elements of the Marcinkiewicz space}}
$M(\psi)$.
\end{dfnS}
\begin{thmS}
Let ${\cal M}_\kappa^+(\psi)$ and
${\cal M}_+(\psi)$ be defined as above.
Then
\begin{prop2list}{10}{2}{4}
\item ${\cal M}_\kappa^+(\psi)$ is a closed, symmetric
subcone of $M_+(\psi)$ when $\kappa \in \mathrm{R_{exp}}(\psi)$,
\item ${\cal M}_+(\psi)$ is a closed, symmetric
subcone of $M_+(\psi)$.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{(i) Closedness, symmetry and additivity follow
from the fact that $f_{L,\kappa}$ is an additive singular symmetric
functional on $M_+(\psi)$ by Theorem 2.7. (ii) Follows from (i)
since ${\cal M}_+(\psi)= \cap_{\kappa \in \mathrm{R_{exp}}(\psi)} {\cal
M}_\kappa^+(\psi)$. }
\end{thmS}
\medskip \noindent A consequence of Theorem 3.7 is the following result,
which connects
Proposition IV.2.$\beta$.4 and Proposition IV.2.$\beta$.6 of \cite{CN}.
We shall elaborate on this result in Section 5 and the implications
of the result for non-commutative geometry in the concluding section.
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$. Then
$$
\mathcal{T}_+(\psi) = \mathcal{M}^+_\kappa(\psi)
$$
for all $\kappa \in \mathrm{D}(\psi)$ and, if there exists
$\kappa \in \mathrm{D}(\psi)$ of exponential increase,
$$
\mathcal{T}_+(\psi) = \mathcal{M}_+(\psi)
= \mathcal{M}^+_\kappa(\psi)
$$
for all $\kappa \in \mathrm{D}(\psi)$.
\par
\vspace*{1\belowdisplayskip} \proof{The first result is immediate from Corollary 3.9.
Suppose $\kappa_1 \in \mathrm{D}(\psi)$ is of exponential
increase. Then $\kappa_1 \in \mathrm{R_{exp}}(\psi)$ by
Proposition 3.20(i) of the next section. Hence $\mathcal{T}_+(\psi)
\subset \mathcal{M}_+(\psi) \subset
\mathcal{M}^+_{\kappa_1}(\psi) = \mathcal{T}_+(\psi) =
\mathcal{M}^+_{\kappa}(\psi)$ for any $\kappa \in \mathrm{D}(\psi)$,
where the last equality is given by the first result.}
\end{thmS}
\REMS{ It was shown in \cite{DPSSS} that if
$$
\liminf_{t\to\ensuremath{\infty}} \frac{\psi(2t)}{\psi(t)}=1 \ \ \mbox{but} \ \
\limsup_{t\to\ensuremath{\infty}} \frac{\psi(2t)}{\psi(t)}=2
$$
then there exists $x_0 > 0 $ in $M(\psi)\setminus M_1(\psi)$ such
that all additive symmetric functionals defined on $M(\psi)$
vanish on $x_0$. However, if $\phi(x_0)(t) \to 0$ as $t \to \ensuremath{\infty}$
then $\rho_1(x_0) = 0$ and $x_0 \in M_1(\psi)$, which is a
contradiction. This example shows that the set $\mathcal{M}_+(\psi)$
of measurable elements and the set $\mathcal{T}_+(\psi)$ of
Tauberian elements are not the same in general and the set
$\mathrm{D}(\psi)$ can fail to admit an element of exponential
increase. Necessary and sufficient conditions on the concave
function $\psi$ such that $\mathrm{D}(\psi)$ admits an element of
exponential increase are given in Proposition 3.20 of the next
section.}
\subsection{Technical Results}
Let $\{ a_n \}_{n \in \ensuremath{\mathbb{N}}} \subset \ensuremath{\mathbb{R}}$ be a sequence and let
$s_n = \sum_{m=1}^n a_m$ denote
the $n^{\mathrm{th}}$ partial sum.
Hardy's section on Tauberian theorems for Cesaro summability \cite[Section 6.1]{Hardy}
contains the following result.
\medskip \noindent \textbf{THEOREM 64} If $\lim_{n \to \ensuremath{\infty}} \frac{1}{n} \sum_{m=1}^n s_m = A$
and $na_n > -H$ for some $A \in \ensuremath{\mathbb{R}}$ and $H > 0$, then $\lim_{n \to \ensuremath{\infty}} s_n = A$.
\medskip \noindent We recall that any sequence $\{ b_n \}_{n \in \ensuremath{\mathbb{N}}}$ is the
sequence of partial sums of the sequence $\{ a_n := b_n - b_{n-1} \}_{n\in \ensuremath{\mathbb{N}}}$ with the
convention $b_0=0$. Hence a trivial corollary of Theorem 64 is the following.
\begin{thmS} Let $\{ b_n \}_{n \in \ensuremath{\mathbb{N}}}$ be a sequence such that $b_n \geq 0$
and $n(b_n - b_{n-1}) > -H$ for some $H > 0$.
Then $\lim_{n \to \ensuremath{\infty}} \frac{1}{n} \sum_{m=1}^n b_m = A$ for some $A \geq 0$
if and only if $\lim_{n \to \ensuremath{\infty}} b_n = A$.
\end{thmS}
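The content of Theorem 64 can be illustrated numerically (the particular sequence is our choice): for $a_n = (-1)^n/n$ one has $na_n = (-1)^n > -2$, the Cesaro means of the partial sums converge to $-\log 2$, and Theorem 64 then forces $s_n \to -\log 2$ as well.

```python
import math

# Tauberian illustration: a_n = (-1)^n / n satisfies n*a_n = (-1)^n > -2.
# The Cesaro mean of the partial sums s_n converges to -log 2, and
# Theorem 64 then forces s_n itself to converge to the same value.
N = 200000
s, cesaro_sum = 0.0, 0.0
for n in range(1, N + 1):
    s += (-1) ** n / n       # partial sum s_n
    cesaro_sum += s          # running sum of the s_m
A = -math.log(2.0)
assert abs(s - A) < 1e-4                # s_N is close to -log 2
assert abs(cesaro_sum / N - A) < 1e-3   # and so is the Cesaro mean
```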
\noindent A continuous analogue of Theorem 64 is given in \cite[Section 6.8]{Hardy}. It has
the following corollary.
\begin{thmS} Let $b(t)$ be a positive piecewise differentiable function
such that $tb'(t) > -H$ for some $H > 0$ and almost all $t > 0$.
Then $\lim_{t \to \ensuremath{\infty}} \frac{1}{t} \int_0^t b(s)ds = A$ for some
$A \geq 0$ if and only if
$\lim_{t \to \ensuremath{\infty}} b(t) = A$.
\end{thmS}
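A numerical illustration of the continuous statement (the particular function $b$ is our choice, not taken from \cite{Hardy}): $b(t) = 2 + \sin(t)/(1+t)$ satisfies $tb'(t) = t\cos(t)/(1+t) - t\sin(t)/(1+t)^2 > -2$ for all $t > 0$, and both $b(t)$ and its Cesaro mean tend to $A = 2$.

```python
import math

# Continuous Tauberian illustration: b(t) = 2 + sin(t)/(1+t) has
# t b'(t) bounded below by -2, and both b(t) and the Cesaro mean
# (1/t) int_0^t b(s) ds converge to A = 2 as t -> infinity.
def b(t):
    return 2.0 + math.sin(t) / (1.0 + t)

T, n = 2000.0, 400000
h = T / n
# midpoint rule for (1/T) * int_0^T b(s) ds
mean = sum(b((i + 0.5) * h) for i in range(n)) * h / T
assert abs(mean - 2.0) < 1e-3
assert abs(b(T) - 2.0) < 1e-3
```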
These theorems are sufficient to prove Theorem 3.7 with the following
lemma.
\begin{lemmaS}
Let $b(t)$ be a piecewise differentiable function such that
$tb'(t) > -H$ for some $H > 0$ and almost all $t > 0$. Then
$n(b(n) - b(n-1)) > -2H$ for all $n \in \ensuremath{\mathbb{N}}$. \par \vspace*{1\belowdisplayskip}
\proof{Let $n \geq 2$. Then $b(n) - b(n-1) \geq \inf_{t \in
[n-1,n]} b'(t) > \inf_{t \in [n-1,n]} -Ht^{-1} \geq - H(n-1)^{-1}
\geq -2Hn^{-1}$. For $n = 1$ the quantity $b(1) - b(0)$ is a single
fixed number, so the bound holds after enlarging $H$ if necessary.}
\end{lemmaS}
\subsubsection*{Proof of Theorem 3.7} The scheme of implications
shall be \\
\centerline{ \begin{tabular}[r]{c@{}c@{}c@{}c@{}c}
\text{ (i) } & $\Leftrightarrow$ & \text{ (iii) } &
$\Leftrightarrow$ & \text{ (v) } \\[-6pt]
& & $\Updownarrow$ & & \\[-6pt]
\text{ (ii) } & $\Leftrightarrow$ & \text{ (iv) } &
$\Leftrightarrow$ & \text{ (vi). }
\end{tabular} } \\
\noindent \text{(i)} $\Leftarrow$ \text{(iii)} $\Leftarrow$
\text{(v)} is Remark 3.1
and (v) $\Leftarrow$ (i) is provided by
Lemma 3.18 and Theorem 3.16 using $b(t) = \phi_\kappa(x)(t)$.
\noindent \text{(iii)} $\Leftrightarrow$ \text{(iv)} is Corollary 2.4.
\noindent \text{(ii)} $\Leftarrow$ \text{(iv)} $\Leftarrow$
\text{(vi)} is Corollary 3.4 and (vi) $\Leftarrow$ (ii) is
provided by Theorem 3.17 using $b(t) = \phi_\kappa(x)(t)$ and
Remark 3.6. \vspace*{-.5cm}
\begin{flushright} $\Box$ \end{flushright}
The following propositions suffice to prove Corollary 3.9.
\begin{propS}
Let $\psi \in \Omega_\ensuremath{\infty}$, $\kappa \in \mathrm{D}(\psi)$ with constant
$C$ as in Definition 3.8, and $x \in M_+(\psi)$. Then
$t\phi_\kappa(x)'(t) > - C \nm{x}_{M(\psi)}$ for all $t > 0$.
\par
\vspace*{1\belowdisplayskip} \proof{From the proof of Proposition 1.8
$$
\phi(x)'(t) \geq - \frac{\psi'(t)}{\psi(t)} \nm{x}_{M(\psi)}.
$$
The substitution $t \mapsto \kappa(t)$, multiplication of both
sides by the positive number $t\kappa'(t)$ for $t > 0$ and the
elementary property $(f\circ\kappa)'(t) = f'(\kappa(t))\kappa'(t)$
yields
$$
t\phi_\kappa(x)'(t) \geq
- t\frac{(\psi \circ \kappa)'(t)}{\psi \circ \kappa(t)} \nm{x}_{M(\psi)}.
$$
The result now follows from the hypothesis $\kappa \in
\mathrm{D}(\psi)$. }
\end{propS}
\begin{propS}Let $\psi \in \Omega_\infty$. Then
\begin{prop2list}{10}{2}{2}
\item $\mathrm{D}(\psi) \subset \mathrm{R}(\psi)$
\end{prop2list}
and the following statements are equivalent
\begin{prop2list}{10}{2}{2}
\item[\text{(ii)}] the set $\mathrm{D}(\psi)$ contains an element
$\kappa$ of exponential increase \emph{;} \item[\text{(iii)}] $\exists \,
C>0$ such that
$$
\frac{\psi(2t)}{\psi(t)}=1+O\left(\frac1{\psi(t)^{1/{C}}}\right);
$$
\item[\text{(iv)}] $\psi^{-1}(t^C)$ is of exponential increase
for some $C \geq 1$.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{(i) Let $\kappa \in \mathrm{D}(\psi)$. Then by
Definition 3.8
$$
\log\left(\frac{(\psi \circ \kappa)(t+T)}{(\psi \circ
\kappa)(t)}\right)= \int_t^{t+T}\frac{(\psi \circ
\kappa)'(s)}{(\psi \circ \kappa)(s)}ds <
C\int_t^{t+T}s^{-1}ds=C\log\frac{(t+T)}{t}.
$$
Consequently
$$
\frac{(\psi \circ \kappa)(t+T)}{(\psi \circ \kappa)(t)}
<\left(\frac{t+T}{t}\right)^C=1+O(t^{-1}) \mbox{ for large }t.
\eqno{(3.1)}
$$
Taking $t=n$ and $T=1$ we get (i).
\smallskip \noindent (ii) $\Rightarrow$ (iii) Substituting $t=1$ and $T=u-1$ into (3.1)
we get
$$
(\psi \circ \kappa)(u)<(\psi \circ \kappa)(1)u^C=O(u^C)
\eqno{(3.2)}
$$
Then, taking $T=D$ where $D$ is such that $\kappa(t+D)>2\kappa(t)$
for all $t>0$ (such a $D$ exists since $\kappa$ is of exponential increase),
$$
1<\frac{\psi(2\kappa(t))}{\psi(\kappa(t))}< \frac{(\psi \circ
\kappa)(t+T)}{(\psi \circ \kappa)(t)}<1+O(t^{-1})<
1+O\left(\frac1{(\psi \circ \kappa)(t)^{1/C}}\right),
$$
where the last inequality follows from (3.2). We obtain the result
by the substitution $\kappa(t)\to t$.
\smallskip \noindent (iii) $\Rightarrow$ (iv) Let $t=\psi^{-1}(u)$. Then for some
$H > 0$ and all sufficiently large $u$ we have
$$
\frac{\psi(2\psi^{-1}(u))}{u}<1+\frac{H}{u^{1/{C}}},
$$
or
$$
\psi(2\psi^{-1}(u))<u+Hu^\frac{C-1}{C}.
$$
Applying $\psi^{-1}(\cdot)$ to both sides of the last inequality
we get
$$
2\psi^{-1}(u)<\psi^{-1}(u+Hu^\frac{C-1}{C}).%
$$
If $C\le1$ then $\psi^{-1}(u+Hu^\frac{C-1}{C})\le\psi^{-1}(u+H)$ for
$u>1$ and in this case $\psi^{-1}(u)$ is of exponential increase.
\noindent If $C>1$ then replacing $u$ by $u^C$ we get
$$
2\psi^{-1}(u^C)<\psi^{-1}(u^C+Hu^{C-1})\le\psi^{-1}((u+H/C)^C).
$$
Consequently, $\psi^{-1}(u^C)$ is of exponential increase.
\smallskip \noindent (iv) $\Rightarrow$ (ii) is immediate: $\kappa(t) :=
\psi^{-1}(t^C)$ satisfies $(\psi\circ\kappa)(t) = t^C$, hence $\kappa \in
\mathrm{D}(\psi)$, and $\kappa$ is of exponential increase by hypothesis.
}
\end{propS}
\medskip We can now prove Corollary 3.9.
\subsubsection*{Proof of Corollary 3.9}
Let $\kappa \in \mathrm{D}(\psi)$. Then $\kappa \in
\mathrm{R}(\psi)$ by Proposition 3.20(i) and for each $x \in
M_+(\psi)$ there exists $H = C\nm{x}_{M(\psi)} > 0$ such that
$t\phi_\kappa(x)'(t) > - H$ by Proposition 3.19. Hence the conditions
of Theorem 3.7 are satisfied. \vspace*{-.5cm}
\begin{flushright} $\Box$ \end{flushright}
\section{Summary and Examples}
Let $\psi \in \Omega_\ensuremath{\infty}$.
For the convenience of the reader, we summarize the hypotheses on
$\kappa$ that have appeared in the previous sections.
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa \in\mathcal{K}$. Then we
say $\kappa$
\begin{prop2list}{10}{2}{3}
\item has \textbf{\emph{restricted growth with respect to}} $\psi$
if \display{\text{F-}\lim_{n \to \ensuremath{\infty}}
\frac{\psi(\kappa(n))}{\psi(\kappa(n+1))} = 1,}
and \textbf{\emph{the set of $\kappa$ with restricted growth with
respect to $\psi$ is denoted}} $\mathrm{R}(\psi)$. \item has
\textbf{\emph{strong restricted growth with respect to}} $\psi$ if
\display{\lim_{n \to \ensuremath{\infty}}
\frac{\psi(\kappa(n))}{\psi(\kappa(n+1))} = 1,}
and \textbf{\emph{the set of $\kappa$ with strong restricted
growth with respect to $\psi$ is denoted}} $\mathrm{SR}(\psi)$.
\item has \textbf{\emph{dominated growth with respect to}} $\psi$
if $\exists \, C > 0$ such that $\forall t > 0$
\display{\frac{(\psi\circ\kappa)'(t)}{\psi\circ\kappa(t)} <
\frac{C}{t},} and \textbf{\emph{the set of $\kappa$ with dominated
growth with respect to $\psi$ is denoted}} $\mathrm{D}(\psi)$.
\item is of \textbf{\emph{exponential increase}} if $\exists \, C
> 0$ such that $\forall t > 0$ \display{ \kappa(t + C) > 2
\kappa(t),} and \textbf{\emph{the set of $\kappa$ of exponential
increase is denoted}} $\mathcal{K}_{\mathrm{exp}}$.
\end{prop2list}
\end{dfnS}
We denote $\mathrm{X_{\mathrm{exp}}}(\psi) = \mathrm{X}(\psi) \cap
\mathcal{K}_{\mathrm{exp}}$, where X is D, SR, or R. The
conditions (i), (ii) and (iii) are increasingly stronger
conditions by Proposition 3.20, hence $\mathrm{D}(\psi) \subset
\mathrm{SR}(\psi) \subset \mathrm{R}(\psi)$. We recall that
$\kappa \in \mathrm{R}(\psi)$ was sufficient for Theorem 2.3,
$\kappa \in \mathrm{R_{exp}}(\psi)$ was necessary and sufficient
for Theorem 2.7, $\kappa \in \mathrm{SR}(\psi)$ was sufficient for
Theorem 2.8, and $\kappa \in \mathrm{D}(\psi)$ was sufficient for
Corollary 3.9. Hence $\kappa \in \mathrm{D_{exp}}(\psi)$ is the
strongest hypothesis on $\kappa$ and implies Theorems 2.3, 2.7,
2.8, 3.14 and Corollary 3.9.
We now point out some explicit examples of functions $\psi$ and
$\kappa$ for which $\kappa \in \mathrm{D_{exp}}(\psi)$.
Indeed, the functions given in Example
4.3 below appear in \cite{CN}. Consequently, Theorems 2.3, 2.7, 2.8,
3.14 and Corollary 3.9 apply to the functionals on Marcinkiewicz
operator spaces used in \cite{CN}. We shall elaborate on this in
our concluding section.
\EXS{Let $\psi : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ be a continuous, concave
and invertible function such that the inverse $\psi^{-1} :
[0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ is of exponential increase. Then $\psi^{-1}
\in \mathrm{D_{exp}}(\psi)$.}
\EXS{Define the function $\psi : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ by
\display{\psi(t) := \log(1+t).} Then $\psi$ is continuous, concave
and invertible. The function \display{\psi^{-1}(t) = e^t - 1} is
of exponential increase. Hence Example 4.2 applies and $\psi^{-1}
\in \mathrm{D_{exp}}(\psi)$. The function ${\kappa}$
given by
\display{{\kappa}(t) := e^t}
is an element of the equivalence class $[\psi^{-1}]$ by Remark 1.9 and
hence provides the same set of functionals as $\psi^{-1}$.}
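The exponential increase asserted in Example 4.3 can be checked directly (our numerical companion): $\psi^{-1}(t+1) = e \cdot e^t - 1 > 2e^t - 2 = 2\psi^{-1}(t)$ for all $t > 0$, since $(e-2)e^t > -1$.

```python
import math

# Check of Example 4.3: psi(t) = log(1+t) has inverse psi^{-1}(t) = e^t - 1,
# which is of exponential increase with constant C = 1:
#     psi^{-1}(t + 1) = e*e^t - 1 > 2e^t - 2 = 2*psi^{-1}(t)  for t > 0.
def psi_inv(t):
    return math.exp(t) - 1.0

for t in [0.01, 0.5, 1.0, 5.0, 20.0]:
    assert psi_inv(t + 1.0) > 2.0 * psi_inv(t)
```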
Let $\psi \in \Omega_\ensuremath{\infty}$. We conclude the summary with a result
on the non-emptiness of the sets $\mathrm{X_{\mathrm{exp}}}(\psi)$
where X is D, SR, or R.
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$. The following statements (set A) are
equivalent \\
\begin{prop2list}{12}{2}{3}
\item[\text{A(i)}] $\liminf_{t \to \ensuremath{\infty}} \frac{\psi(2t)}{\psi(t)}
= 1 \ ;$ \item[\text{A(ii)}]
$\mathrm{R_{exp}}(\psi)$ is non-empty. \\
\end{prop2list}
The following statements (set B) are
equivalent \\
\begin{prop2list}{12}{2}{3}
\item[\text{B(i)}] $\lim_{t \to \ensuremath{\infty}} \frac{\psi(2t)}{\psi(t)} = 1
\ ;$ \item[\text{B(ii)}]
$\mathrm{SR_{exp}}(\psi)$ is non-empty. \\
\end{prop2list}
The following statements (set C) are
equivalent \\
\begin{prop2list}{12}{2}{3}
\item[\text{C(i)}] $\frac{\psi(2t)}{\psi(t)} - 1 =
O(\psi(t)^{-1/C})$ for some $C > 0$ \emph{;} \item[\text{C(ii)}]
$\mathrm{D_{exp}}(\psi)$ is non-empty.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{ Set A. \ Follows from \cite{DPSS} Theorem 3.4(i)
and \cite{DPSSS} Lemma 3.9 with Theorem 2.7.
\medskip \noindent Set B. \ (i) $\Rightarrow$ (ii) Let $\beta(t) := 2^t$.
It is immediate that $\beta \in \mathcal{K}_{\mathrm{exp}}$, and $\lim_{n
\to \ensuremath{\infty}} \psi(2^{n+1}) / \psi(2^n) =1$ by the hypothesis on $\psi$,
so that $\beta \in \mathrm{SR_{exp}}(\psi)$.
\smallskip \noindent (ii) $\Rightarrow$ (i) The hypothesis implies
for any $m \in \ensuremath{\mathbb{N}}$ \display{\lim_{n \to \ensuremath{\infty}}
\frac{\psi(\kappa(n))}{\psi(\kappa(n+m))} = \lim_{n \to \ensuremath{\infty}}
\frac{\psi(\kappa(n+m-1))}{\psi(\kappa((n+m-1) +1))} \cdots
\frac{\psi(\kappa(n))}{\psi(\kappa(n+1))} = 1.} Let $m'$ be any
integer greater than the $C > 0$ such that $\kappa(t + C) > 2
\kappa(t)$ for $t > 0$. Then
\display{\frac{\psi(\kappa(n))}{\psi(\kappa(n+1 + m'))} \leq
\frac{\psi(\kappa(n))}{\psi(2\kappa(n + 1))} \leq
\frac{\psi(t)}{\psi(2t)} \leq 1 } for all $\kappa(n) \leq t \leq
\kappa(n+1)$. Since $\kappa$ is of exponential increase then
$\kappa(n) \to \ensuremath{\infty}$ as $n \to \ensuremath{\infty}$. Hence \display{1 = \lim_{n
\to \ensuremath{\infty}} \frac{\psi(\kappa(n))}{\psi(\kappa(n+1+m'))} \leq
\lim_{t \to \ensuremath{\infty}} \frac{\psi(t)}{\psi(2t)} \leq 1.}
\medskip \noindent Set C. \ Follows from Proposition 3.20. }
\end{thmS}
\REMS{The example $\psi(t)=(\log(1+t))^C,\ C>1$ for large $t$ and
linear for small $t$ shows that the constant $1/C$ in Theorem 4.4
C(i) cannot \mbox{be replaced with 1.}}
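The preceding remark can be accompanied by a numerical experiment (ours): for $\psi(t) = (\log(1+t))^2$ the excess $\psi(2t)/\psi(t) - 1$ behaves like $2\log 2/\log(1+t)$, so multiplied by $\psi(t)^{1/2}$ it stays bounded (the exponent $1/2$ works in C(i)), while multiplied by $\psi(t)$ it diverges (the exponent $1$ fails).

```python
import math

# For psi(t) = (log(1+t))^2 the excess psi(2t)/psi(t) - 1 ~ 2*log(2)/log(1+t).
# Scaled by psi(t)^{1/2} it stays bounded near 2*log 2; scaled by psi(t)
# it grows without bound along the sample points.
def psi(t):
    return math.log(1.0 + t) ** 2

scaled_half, scaled_one = [], []
for t in [1e3, 1e6, 1e9, 1e12]:
    excess = psi(2.0 * t) / psi(t) - 1.0
    scaled_half.append(excess * math.sqrt(psi(t)))
    scaled_one.append(excess * psi(t))
# excess * psi(t)^{1/2} approaches 2*log(2) ~ 1.386 ...
assert abs(scaled_half[-1] - 2.0 * math.log(2.0)) < 0.1
# ... while excess * psi(t) grows along the sample
assert scaled_one[-1] > scaled_one[0] > 1.0
```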
\section{Generalization of the Connes-Dixmier construction}
\subsection{Connes-Dixmier Functionals on Marcinkiewicz Spaces}
The Connes-Dixmier construction of \cite[IV.2]{CN}, which we shall
continue to clothe in the language of singular symmetric
functionals on Marcinkiewicz spaces until the concluding section,
generates singular symmetric functionals on $M_+(\psi)$ supported
at infinity for the specific function $\psi(t)=\log(1+t)$. We
recall the idea of A. Connes' method.
\begin{dfnS}
Let $SC_b^*([0,\infty))$ denote the set of all positive linear
functionals $\gamma$ on $C_b([0,\ensuremath{\infty}))$ such that $\gamma(1)=1$
and $\gamma(f) = 0$ for all $f$ in $C_0([0,\ensuremath{\infty}))$.
\end{dfnS}
A. Connes defines a symmetric functional supported at infinity on
the cone of positive elements of the Marcinkiewicz space
$M(\log(1+t))$ by the formula
$$\tau_\gamma(x) :=
\gamma \left(
\frac{1}{\log(1 + \lambda)}\int_0^\lambda \phi(x)(u) d\log(1+u)
\right)
$$
for all $x \in M_+(\log(1+t))$, where
$\gamma\in SC_b^*([0,\infty))$ and
$$
\phi(x)(u)=\frac{1}{\log(1+u)} \int_0^u x^*(s)ds.
$$
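Before generalizing, a simple worked example (ours, not from \cite{CN}): for $x^*(s) = 1/(1+s)$ one has $\phi(x)(u) = 1$ identically, so the inner average in Connes' formula equals $1$ for every $\lambda$, and hence $\tau_\gamma(x) = \gamma(1) = 1$ for every $\gamma \in SC_b^*([0,\ensuremath{\infty}))$.

```python
import math

# For x*(s) = 1/(1+s):  int_0^u x*(s) ds = log(1+u), so phi(x)(u) = 1 for
# every u > 0, and the inner average
#   (1/log(1+lam)) int_0^lam phi(x)(u) d log(1+u)
# equals 1 for every lam; hence tau_gamma(x) = gamma(1) = 1 for any state.
def phi(u):
    integral_x_star = math.log(1.0 + u)      # int_0^u ds/(1+s)
    return integral_x_star / math.log(1.0 + u)

def inner_average(lam, n=100000):
    # midpoint rule for (1/log(1+lam)) int_0^lam phi(u) du/(1+u)
    h = lam / n
    total = sum(phi((i + 0.5) * h) / (1.0 + (i + 0.5) * h) for i in range(n)) * h
    return total / math.log(1.0 + lam)

for lam in [1.0, 10.0, 1000.0]:
    assert abs(inner_average(lam) - 1.0) < 1e-5
```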
We generalize the construction to any Marcinkiewicz space
$M(\psi)$ of Lebesgue measurable functions, $\psi \in \Omega_\ensuremath{\infty}$, and
demonstrate that the functionals so constructed form a sub-class of the
functionals of the form $f_{L,\kappa}$ already studied in this
paper.
Let $k \in \mathcal{K}$.
Define
$$
M_k(g)(\lambda) := \frac{1}{k(\lambda)}\int_0^\lambda g(s) dk(s)
$$
where $g \in C_b([0,\ensuremath{\infty}))$ and $\lambda > 0$.
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $k \in\mathcal{K}$. Let $\gamma \in
SC_b^*([0,\ensuremath{\infty}))$. Then \display{\tau_{\gamma,k}(x) = \gamma \circ
M_k(\phi(x)) \ \ensuremath{\ \, \forall \,} x \in M_+(\psi)} is called a
\textbf{\emph{Connes-Dixmier functional on $M_+(\psi)$}}.
\end{dfnS}
\begin{dfnS}
Let $\gamma \in SC_b^*([0,\ensuremath{\infty}))$ and $C$ be the Cesaro operator
of Section 3.1. We call a positive linear functional on
$C_b([0,\ensuremath{\infty}))$ of the form
$$
L_\gamma := \gamma \circ C
$$
a \textbf{\emph{Cesaro-Banach limit on $C_b([0,\ensuremath{\infty}))$}}. Let
$CBL(\ensuremath{\mathbb{R}}_+)$ denote \textbf{\emph{the set of Cesaro-Banach limits
on $C_b([0,\ensuremath{\infty}))$}}.
\end{dfnS}
\REMS{The proof of Theorem 3.3 demonstrates that a Cesaro-Banach
limit $L_\gamma$ has the property of translation invariance and
$L_\gamma(1) = 1$. Hence $L_\gamma \in BL(\ensuremath{\mathbb{R}}_+)$ for all $\gamma
\in SC_b^*([0,\ensuremath{\infty}))$ and the set of Cesaro-Banach limits is a
subset of the set $BL(\ensuremath{\mathbb{R}}_+)$,
$$CBL(\ensuremath{\mathbb{R}}_+) \subset BL(\ensuremath{\mathbb{R}}_+).$$
}
\noindent Let $k \in \mathcal{K}$. Define the continuous bounded
function \display{g_k(t) := g(k(t))} for any $t > 0$ and $g \in
C_b([0,\ensuremath{\infty}))$. Clearly, $g \mapsto g_k$ is a $*$-automorphism of
$C_b([0,\ensuremath{\infty}))$.
Let $\gamma \in SC_b^*([0,\ensuremath{\infty}))$. Then the functional $\gamma_k$
on $C_b([0,\ensuremath{\infty}))$ defined by \display{\gamma_k(g) := \gamma(g_k)
\ \ensuremath{\ \, \forall \,} g \in C_b([0,\ensuremath{\infty}))} has the properties $\gamma_k(1)=1$ and
$\gamma_k(f) = 0$ for all $f \in C_0([0,\ensuremath{\infty}))$. Hence $\gamma_k$
is an element of the set $SC_b^*([0,\ensuremath{\infty}))$.
\begin{propS} Let $k \in\mathcal{K}$. Then
$$
\gamma \circ M_k(g) = \gamma_{k} \circ C(g_{k^{-1}})
$$
for all $g \in C_b([0,\ensuremath{\infty}))$.
\par
\vspace*{1\belowdisplayskip}
\proof{Using the substitution $s = k^{-1}(t)$,
$$
M_k(g)(\lambda) = \frac{1}{k(\lambda)}\int_0^\lambda g(s) dk(s)
=\frac{1}{k(\lambda)}\int_0^{k(\lambda)} g(k^{-1}(t)) dt
= C(g_{k^{-1}})(k(\lambda)).
$$
Hence $\gamma(M_k(g)) = \gamma_k(C(g_{k^{-1}}))$.
}
\end{propS}
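The identity of Proposition 5.5 is easy to test numerically (the particular $k$ and $g$ below are our illustrative choices): for $k(t) = \log(1+t)$ and $g(s) = 1/(1+s)$ both sides evaluate to $(1 - (1+\lambda)^{-1})/\log(1+\lambda)$.

```python
import math

# Numerical check of the substitution identity of Proposition 5.5,
#     M_k(g)(lambda) = C(g o k^{-1})(k(lambda)),
# for the illustrative choices k(t) = log(1+t), g(s) = 1/(1+s).
def k(t):
    return math.log(1.0 + t)

def k_inv(t):
    return math.exp(t) - 1.0

def g(s):
    return 1.0 / (1.0 + s)

def midpoint(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lam = 50.0
# M_k(g)(lam) = (1/k(lam)) int_0^lam g(s) k'(s) ds,  with  k'(s) = 1/(1+s)
lhs = midpoint(lambda s: g(s) / (1.0 + s), 0.0, lam) / k(lam)
# C(g o k^{-1})(u) = (1/u) int_0^u g(k^{-1}(t)) dt  evaluated at  u = k(lam)
u = k(lam)
rhs = midpoint(lambda t: g(k_inv(t)), 0.0, u) / u
assert abs(lhs - rhs) < 1e-6
```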
\vspace*{0.1cm}
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $k \in \mathcal{K}$.
\begin{prop2list}{10}{2}{4}
\item Let $\gamma \in SC_b^*([0,\ensuremath{\infty}))$. Then there
exists a Cesaro-Banach limit $L := L_{\gamma_k} \in CBL(\ensuremath{\mathbb{R}}_+)$
such that
$$
\tau_{\gamma,k}(x) = f_{L,k^{-1}}(x) ,\quad \forall x \in
M_+(\psi).
$$
\item Let $L \in CBL(\ensuremath{\mathbb{R}}_+)$.
Then there exists an element
$\gamma \in SC_b^*([0,\ensuremath{\infty}))$ such that
$$
f_{L,k^{-1}}(x) = \tau_{\gamma,k}(x),\quad \forall
x \in M_+(\psi).
$$
\end{prop2list}
\par
\vspace*{1\belowdisplayskip}
\proof{Immediate from Proposition 5.5.}
\end{thmS}
\noindent The result implies the following important
identification. The set of Connes-Dixmier functionals arising from
the function $k \in \mathcal{K}$ is the set
$$
\inset{\tau_{\gamma,k}}{\gamma \in SC_b^*([0,\ensuremath{\infty}))} =
\inset{f_{L,k^{-1}}}{ L \in CBL(\ensuremath{\mathbb{R}}_+)}. \eqno{(*)}
$$
\subsection{Subsets of Banach Limits and the Cesaro Limit Property}
Let $\kappa \in\mathcal{K}$. The identification $(*)$ above introduces
to our analysis the set of functionals
$$
\inset{ f_{L,\kappa} }{ L \in \Lambda }
$$
where $\Lambda$ is a subset of $BL(\ensuremath{\mathbb{R}}_+)$. We consider, in this
section, a sufficient condition on a subset $\Lambda \subset
BL(\ensuremath{\mathbb{R}}_+)$ such that the statements of Theorem 2.8 and Corollary
3.9 can be extended to the set of functionals $ \inset{
f_{L,\kappa} }{ L \in \Lambda } $.
\begin{dfnS}
Let $\Lambda \subset BL(\ensuremath{\mathbb{R}}_+)$. We say $\Lambda$ has the \textbf{\emph{Cesaro
limit property}} if, for each $g \in C_b([0,\ensuremath{\infty}))$, \display{ \{ a
, b \} \subset \inset{L(g)}{L \in \Lambda} } where $
a=\liminf_{t\to\ensuremath{\infty}}C(g)(t)$ and $b=\limsup_{t\to\ensuremath{\infty}}C(g)(t).$
\end{dfnS}
\medskip \noindent Let $\kappa \in \mathcal{K}$ and $\Lambda \subset BL(\ensuremath{\mathbb{R}}_+)$.
Define a seminorm on $M(\psi)$ by setting for $x \in M(\psi)$
$$
\nm{x}_{\kappa,\Lambda} := \sup \inset{f_{L,\kappa}(|x|)}{L \in
\Lambda}.
$$
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and
$\kappa \in \mathrm{D}(\psi)$.
Let $\Lambda
\subset BL(\ensuremath{\mathbb{R}}_+)$ have the Cesaro limit property.
Then there exists $0 < c < 1$ such that
$$
c \rho_1(x) \leq \nm{x}_{\kappa,\Lambda} \leq
\rho_1(x),\quad \forall x \in M(\psi).
$$
\par
\vspace*{1\belowdisplayskip}
\proof{As
$$
\rho_1(x) = \limsup_{t \to \ensuremath{\infty}} \phi(x)(t)
= \limsup_{t \to \ensuremath{\infty}} \phi_\kappa(x)(t),
$$
there exists a sequence of positive numbers $\{ t_k \}_{k=1}^\infty$ with $t_k \to \ensuremath{\infty}$ as $k \to \ensuremath{\infty}$
such that
$$
\lim_{k \to \ensuremath{\infty}} \frac{\sigma(t_k)}{t_k} = \rho_1(x) \eqno{(5.1)}
$$
where $\sigma(t) = t \phi_\kappa(x)(t) \ , \ t > 0$. We write
$\sigma(t) = \frac{t}{f(t)} \int_0^{\kappa(t)} x^*(s)ds$ where
$f(t) = \psi \circ \kappa(t)$. Then $\sigma'(t) =
(1-t\frac{f'(t)}{f(t)}) \phi_\kappa(x)(t) + \frac{t}{f(t)}
\kappa'(t) x^*(\kappa(t))$. The hypothesis on $\kappa$ implies
$t\frac{f'(t)}{f(t)} < H$ for some $H \geq 1$ and all $t > 0$.
Hence
$$\sigma'(t) > (1-H) \phi_\kappa(x)(t) $$
Let $s \in [t_k,et_k]$. Then
$$
\sigma(s) - \sigma(t_k) = \int_{t_k}^s \sigma'(t)dt
> (1-H) \int_{t_k}^s \phi_\kappa(x)(t)dt
\geq (1-H) \int_{t_k}^{et_k} \phi_\kappa(x)(t)dt
$$
since $1-H \leq 0$, and
\begin{eqnarray*}
\frac{1}{et_k} \int_{t_k}^{et_k} \frac{\sigma(s)}{s}ds
& > & \frac{1}{et_k} \Big( \sigma(t_k) + (1-H)\int_{t_k}^{et_k} \phi_\kappa(x)(t)dt \Big)
\int_{t_k}^{et_k} \frac{ds}{s} \\
& = & \frac{\sigma(t_k)}{et_k} + (1-H)\frac{1}{et_k}\int_{t_k}^{et_k} \phi_\kappa(x)(t)dt \\
& \geq & \frac{1}{e} \frac{\sigma(t_k)}{t_k} + (1-H)C(\phi_\kappa(x))(et_k)
\end{eqnarray*}
as $\int_{t_k}^{et_k} \frac{ds}{s} = \log\frac{et_k}{t_k} = \log e = 1$.
Combining the previous inequality with
$$
C(\phi_\kappa(x))({et_k})
= \frac{1}{et_k} \int_0^{et_k}
\frac{\sigma(s)}{s} ds
\geq \frac{1}{et_k} \int_{t_k}^{et_k}
\frac{\sigma(s)}{s} ds
$$
yields
$$
C(\phi_\kappa(x))({et_k}) > \frac{1}{e} \frac{\sigma(t_k)}{t_k} + (1-H)C(\phi_\kappa(x))(et_k).
$$
Hence, after rearrangement,
$$
H \, C(\phi_\kappa(x))({et_k}) > \frac{1}{e} \frac{\sigma(t_k)}{t_k}
$$
and
$$
\limsup_{t \to \ensuremath{\infty}} C(\phi_\kappa(x))(t) \geq \limsup_{k \to
\ensuremath{\infty}} C(\phi_\kappa(x))({et_k}) \geq \frac{1}{He} \limsup_{k \to
\ensuremath{\infty}} \frac{\sigma(t_k)}{t_k} \stackrel{(5.1)}{=} c \rho_1(x)
$$
where $c = (eH)^{-1}$. By the Cesaro limit property, $\exists L \in \Lambda$ such that
$L(\phi_\kappa(x)) = \limsup_{t \to \ensuremath{\infty}} C(\phi_\kappa(x))(t)$.
Hence $f_{L,\kappa}(x) \geq c \rho_1(x)$ for some $L \in
\Lambda$. The reverse inequality $f_{L,\kappa}(x) \leq \rho_1(x)$
for all $L \in \Lambda$ is obvious. }
\end{thmS}
\bigskip \noindent We now extend the notion of measurability given in Definition 3.5.
Let $\Lambda \subset BL(\ensuremath{\mathbb{R}}_+)$. Define the set
\display{\mathcal{F}_\Lambda = \inset{g \in C_b([0,\ensuremath{\infty}))}{L_1(g)
= L_2(g) \ensuremath{\ \, \forall \,} L_1,L_2 \in \Lambda}. } Let $g \in
\mathcal{F}_\Lambda$. We denote the value $A = L(g) \ensuremath{\ \, \forall \,} L \in \Lambda$ by
$$
\text{F$_\Lambda$-}\lim_{t \to \ensuremath{\infty}} g(t) = A.
$$
\begin{dfnS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $\kappa\in \mathcal{K}$. Let $x \in
M_+(\psi)$. We say $x$ is
$\mathcal{F}_{\Lambda,\kappa}$-measurable if $\phi_\kappa(x) \in
\mathcal{F}_\Lambda$.
\end{dfnS}
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and
$\kappa \in \mathrm{D}(\psi)$. Let $\Lambda \subset BL(\ensuremath{\mathbb{R}}_+)$
have
the Cesaro limit property.
Then the following statements are equivalent
\begin{prop2list}{16}{2}{3}
\item $x$ is $\mathcal{C}_\kappa$-measurable, \item $x$ is
$\mathcal{F}_\kappa$-measurable, \item $x$ is
$\mathcal{F}_{\Lambda,\kappa}$-measurable, \item $x$ is
$\mathcal{S}$-measurable.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{(iv) $\ensuremath{\Rightarrow}$ (ii) $\ensuremath{\Rightarrow}$ (iii) is
immediate. (iii) $\ensuremath{\Rightarrow}$ (i) is immediate from the Cesaro limit
property. (i) $\ensuremath{\Rightarrow}$ (ii) $\ensuremath{\Rightarrow}$ (iv) by Corollary 3.9. }
\end{thmS}
\subsection{Results on Connes-Dixmier Functionals}
We now concentrate
on the subset $CBL(\ensuremath{\mathbb{R}}_+) \subset BL(\ensuremath{\mathbb{R}}_+)$.
\begin{propS}
The set of Cesaro-Banach limits $CBL(\ensuremath{\mathbb{R}}_+)$
has the Cesaro limit property.
\par
\vspace*{1\belowdisplayskip}
\proof{Immediate from the proof of Theorem 3.3.}
\end{propS}
\noindent With the identification $(*)$ of Section 5.1,
the results of Section 5.2 can be applied to the set
of Connes-Dixmier functionals as follows.
\begin{thmS}
Let $\psi \in \Omega_\ensuremath{\infty}$ and $k^{-1} \in \mathrm{D}(\psi)$. Then
\begin{prop2list}{10}{2}{4}
\item[\text{A.}] for each $x \in M(\psi)$,
$$
\rho_1(x) \simeq \sup \inset{\tau_{\gamma,k}(|x|)}{\gamma \in
SC_b^*([0,\ensuremath{\infty}))};
$$
\item[\text{B.}]
the following statements are equivalent
\begin{prop2list}{10}{2}{4}
\item $x$ is $\mathcal{C}_{k^{-1}}$-measurable, \item
$\tau_{\gamma,k}(x)$ is independent of $\gamma \in
SC_b^*([0,\ensuremath{\infty}))$, \item $$\tau_{\gamma,k}(x) = \lim_{t \to \ensuremath{\infty}}
\frac{1}{\psi(t)} \int_0^t x^*(s)ds \ \ \ensuremath{\ \, \forall \,} \gamma \in
SC_b^*([0,\ensuremath{\infty})).$$
\end{prop2list}
\end{prop2list}
\par
\vspace*{1\belowdisplayskip}
\proof{
Proposition 5.11, Theorem 5.8 and Theorem 5.10.
}
\end{thmS}
\REMS{We recall for the reader the particular
case of Connes' construction in \cite{CN}.
The pair of functions used in \cite{CN} is
$(\psi(t), k(t)) = (\log(1+t),\log(1+t))$. It is
trivial to check that
$k^{-1} \in \mathrm{D_{exp}}(\psi)$, so the hypothesis of
Theorem 5.12 is satisfied.
We note that the claim contained in Theorem 5.12 B.
(i) $\Leftrightarrow$ (ii)
generalises to arbitrary Marcinkiewicz spaces the assertion
proved by A. Connes for the choices $(\psi(t),k(t)) = (\log(1+t),\log(1+t))$
\cite[Proposition IV.2.$\beta$.6]{CN}.
The claim in Theorem 5.12 B. (ii) $\Leftrightarrow$ (iii) is new even for
$\psi(t) = \log(1+t)$.
}
\section{Application to Non-Commutative Geometry}
We conclude the paper by reducing the results to the setting of
singular traces on semifinite von Neumann algebras \cite{DPSS},
which includes, as the type $I$ case, the setting for
non-commutative geometry \cite[VI.2]{CN}.
\medskip We introduce the notation of \cite{DPSS} Section 4.
Let $(\mathcal{N},\tau)$ be the pair of a semifinite von Neumann
algebra $\mathcal{N}$ with a faithful normal semifinite trace
$\tau$. Let $\chi_E$ denote the characteristic function of a
measurable set $E \subset [0,\ensuremath{\infty})$. Define the generalised
singular values of the operator $r \in \mathcal{N}$ with respect
to $\tau$ \cite{FK}, \display{\mu_t(r) = \inf \inset{s \geq
0}{\tau(\chi_{(s,\ensuremath{\infty})}(|r|)) \leq t} .} The function $t \mapsto \mu_t(r)$,
mapping $[0,\ensuremath{\infty})$ to $[0,\ensuremath{\infty})$, is non-increasing and right
continuous. Define the Marcinkiewicz space $M(\log(1+t))$ as the set
of Lebesgue measurable functions $x : [0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$
such that
$$
\nm{x}_{M(\log(1+t))} := \sup_{t > 0} \, \frac{1}{\log(1+t)} \int_0^t x^*(s) ds < \infty
$$
where $x^*$ is the decreasing rearrangement of $x$, see Section
1.2. Define $M_1(\log(1+t))$ as the closure of $L^1([0,\ensuremath{\infty})) \cap
M(\log(1+t))$ in the norm $\nm{.}_{M(\log(1+t))}$. Define the
Marcinkiewicz (normed) operator ideal associated to the
Marcinkiewicz space $M(\log(1+t))$ by \display{
\begin{array}{l}
\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau) := \inset{r \in \mathcal{N}}{ \mu_t(r) \in M(\log(1+t))} \\
\mathcal{L}_0^{(1,\ensuremath{\infty})}(\mathcal{N},\tau) := \inset{r \in \mathcal{N}}{
\mu_t(r) \in M_1(\log(1+t))}
\end{array}
}
with norm
$$
\nm{r}_{(1,\ensuremath{\infty})} := \nm{\mu_t(r)}_{M(\log(1+t))} \ \text{ for } \
r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau).
$$
We note the separable ideal $\mathcal{L}_0^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ is
the closure in the norm $\nm{.}_{(1,\ensuremath{\infty})}$ of the ideal $
\mathcal{L}^1(\mathcal{N},\tau)$ of all $\tau$-integrable elements from
$\mathcal{N}$.
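As a concrete illustration of the Marcinkiewicz norm (our example): for the decreasing function $x^*(s) = 1/(1+s)$, which lies in $M(\log(1+t))$ but is not Lebesgue integrable on $[0,\ensuremath{\infty})$, one has $\int_0^t x^*(s)\,ds = \log(1+t)$, so the defining quotient equals $1$ for every $t$ and the norm is $1$.

```python
import math

# For x*(s) = 1/(1+s):  int_0^t x*(s) ds = log(1+t), so the quotient
# (1/log(1+t)) int_0^t x*(s) ds equals 1 for every t > 0, and the
# supremum, i.e. the Marcinkiewicz norm of x, equals 1.
def quotient(t, n=100000):
    # midpoint rule for int_0^t ds/(1+s), divided by log(1+t)
    h = t / n
    integral = sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(n)) * h
    return integral / math.log(1.0 + t)

for t in [0.1, 1.0, 100.0, 10000.0]:
    assert abs(quotient(t) - 1.0) < 1e-4
```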
\bigskip We recall that $\mathcal{K}$ is the set of strictly increasing, invertible,
differentiable and unbounded functions mapping $[0,\ensuremath{\infty}) \to [0,\ensuremath{\infty})$ and $BL(\ensuremath{\mathbb{R}}_+)$
is the set of translation invariant positive linear functionals on $C_b([0,\ensuremath{\infty}))$, see Section 1.1 and Remark 1.9.
Let $k \in \mathcal{K}$ and $L \in BL(\ensuremath{\mathbb{R}}_+)$. Define a
functional on $\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ by
\display{ F_{L,k}(r) := L \Big( \frac{1}{\log(1 + k^{-1}(t))} \int_0^{k^{-1}(t)} \mu_s(r)ds \Big) }
for all positive
elements $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$. We
extend $F_{L,k}$ to the positive part of $\mathcal{N}$ by setting
$F_{L,k}(r)=\infty$ for all positive elements $r\in
\mathcal{N}\backslash \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$. A
linear functional $F$ on the von Neumann algebra $\mathcal{N}$ is
called singular (with respect to the faithful normal semi-finite
trace $\tau$) if $F$ vanishes on $\mathcal{L}^1(\mathcal{N},\tau)\cap
\mathcal{N}$. We recall the notation F-$\lim_{n \to \ensuremath{\infty}} a_n = A$,
introduced by G. Lorentz in \cite{GL}, denotes almost convergence of
a sequence $\{ a_n \}_{n \in \ensuremath{\mathbb{N}}}$ to the value $A \in \ensuremath{\mathbb{R}}$, see Definition 1.1.
\begin{thmS}[Trace Theorem]
\medskip \noindent Let $(\mathcal{N},\tau)$ be a semifinite von Neumann factor $\mathcal{N}$
with faithful normal semifinite trace $\tau$. Then $F_{L,k}$ is a
singular trace on $\mathcal{N}$ if and only if $k^{-1}$ satisfies
\begin{prop2list}{10}{2}{3}
\item \text{F-}$\lim_{n \to \ensuremath{\infty}} \frac{\log(k^{-1}(n))}{\log(k^{-1}(n+1))} = 1$,
\item $\exists \, C > 0$ such that $k^{-1}(t + C) > 2k^{-1}(t)$ for all $t > 0$.
\end{prop2list}
\par \vspace*{1\belowdisplayskip} \proof{Let $\alpha := \log(1+t)$. By construction $F_{L,k}(r) := f_{L,k^{-1}}(\mu_t(r))$,
where $f_{L,k^{-1}}$ is given in Definition 1.7,
and condition (i) and (ii) are equivalent to
$k^{-1} \in \mathrm{R_{exp}}(\alpha)$, see Definitions 2.1 and 2.5.
The
functional $f_{L,k^{-1}}$ is a positive homogeneous functional on
\mbox{$M_+(\alpha)$} satisfying (i) and (ii) of Definition 1.5.
We claim $F_{L,k}$ is a singular trace on $\mathcal{N}$ if and
only if $f_{L,k^{-1}} \in M_+(\alpha)^*_{\mathrm{sym},\ensuremath{\infty}}$.
$(\Rightarrow)$ Suppose the functional $F_{L,k}$ is a singular trace on
$\mathcal{N}$; in particular, it is additive. Let
$\mathcal{N}$ be a type II (respectively, I) factor. Then by
Theorem 4.4 (respectively, Theorem 4.5) from \cite{DPSS} the
functional $f_{L,k^{-1}}$ on $M(\alpha)$ is additive. Hence
$f_{L,k^{-1}} \in M_+(\alpha)^*_{\mathrm{sym},\ensuremath{\infty}}$.
$(\Leftarrow)$ Suppose the functional $f_{L,k^{-1}}$ on $M(\alpha)$ is
symmetric. Then by Theorem 4.2 of \cite{DPSS} the
functional $F_{L,k}$ as a functional on
$\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ is additive. Let $r \in
\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ be positive and $u \in
\mathcal{N}$ be unitary. Then
$$F_{L,k}(uru^*) = f_{L,k^{-1}}(\mu_t(uru^*))
= f_{L,k^{-1}}(\mu_t(r)) = F_{L,k}(r).
$$
Hence $F_{L,k}$ defines a trace. The fact that
$F_{L,k}$ is a singular trace is immediate.
The result follows as $f_{L,k^{-1}} \in
M_+(\alpha)^*_{\mathrm{sym},\ensuremath{\infty}}$ if and only if $k^{-1} \in
\mathrm{R_{exp}}(\alpha)$ by Theorem 2.8.
}
\end{thmS}
\bigskip \noindent Define the dilation operator
\display{D_a(g)(b) = g(ab) \ \ensuremath{\ \, \forall \,} a,b \in (0,\ensuremath{\infty}), g \in
C_b([0,\ensuremath{\infty})).} An element $\omega \in C_b([0,\ensuremath{\infty}))^*$ is called
dilation invariant if \display{\omega(D_{a}(g)) = \omega(g) \ensuremath{\ \, \forall \,} a
\in (0,\ensuremath{\infty}), g \in C_b([0,\ensuremath{\infty})).} We recall $SC_b^*([0,\infty))$
denotes the set of all positive linear functionals $\gamma$ on
$C_b([0,\ensuremath{\infty}))$ such that $\gamma(1)=1$ and $\gamma(f) = 0$ for
all $f$ in $C_0([0,\ensuremath{\infty}))$, see Definition 5.1. Define
$$
D(\ensuremath{\mathbb{R}}_+) := \inset{\omega \in SC_b^*([0,\infty))}{ \ \omega
\text{ is dilation invariant}}.
$$
Let $\omega \in D(\ensuremath{\mathbb{R}}_+)$. Define the functional $Tr_{\omega}$ on
$\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ by setting
\display{Tr_{\omega}(r) := \omega \Big( \frac{1}{\log(1+t)} \int_0^t \mu_s(r) ds \Big) }
for all positive $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$. We shall refer to the
functional $Tr_{\omega}$ as a Dixmier trace.
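As a numerical illustration of the averaging inside these functionals, take the model singular-value function $\mu_s(r)=1/(1+s)$ (an assumed prototype, not data from the text). Then $(\log(1+t))^{-1}\int_0^t\mu_s(r)\,ds$ converges, so every Dixmier trace returns the same value, here $1$. A minimal sketch:

```python
import math

def mu(s):
    # model singular-value function mu_s(r) = 1/(1+s); an assumed
    # prototype element of L^(1,infinity), chosen for illustration
    return 1.0 / (1.0 + s)

def log_average(t, n=200_000):
    """(1/log(1+t)) * int_0^t mu(s) ds, computed by the midpoint rule
    on the logarithmic grid u = log(1+s), where ds = e^u du."""
    u_max = math.log1p(t)
    h = u_max / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += mu(math.expm1(u)) * math.exp(u) * h
    return total / u_max

for t in (1e2, 1e4, 1e6):
    print(t, log_average(t))   # equals 1 for this mu, for every t
```

Since the averaged function converges at infinity, any dilation-invariant $\omega$ returns its limit, so $Tr_\omega(r)=1$ independently of $\omega$.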
\medskip \noindent Let $\alpha(t) = \log(1+t)$. Define
$$
M_\alpha(g)(\lambda) := \frac{1}{\log(1+\lambda)}\int_0^\lambda
g(s)\, d\log(1+s), \ \lambda>0$$ for $g \in C_b([0,\ensuremath{\infty}))$ as in
Section 5.1. Let $\gamma \in SC_b^*([0,\ensuremath{\infty}))$. Define the
functional $tr_{\gamma}$ on
$\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ by setting
$$
tr_{\gamma}(r) := \gamma \circ M_\alpha
\Big( \frac{1}{\log(1+t)} \int_0^t \mu_s(r) ds \Big)
$$
for all positive $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$.
We shall refer to the functional $tr_{\gamma}$ as
a Connes-Dixmier trace, after its introduction and use by A. Connes in \cite{CN}.
\medskip \noindent After Cesaro and Hardy \cite[Section 1.3]{Hardy}, define the Cesaro mean by
$$
C(g)(\lambda) = \frac{1}{\lambda} \int_0^\lambda g(s) ds, \ \lambda>0
$$
for $g \in C_b([0,\ensuremath{\infty}))$. Let $\gamma \in SC_b^*([0,\ensuremath{\infty}))$. The
composition $\gamma \circ C$ is called a Cesaro-Banach limit, see
Definition 5.3, and the set of Cesaro-Banach limits, denoted
$CBL(\ensuremath{\mathbb{R}}_+)$, is a proper subset of $BL(\ensuremath{\mathbb{R}}_+)$.
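The role of the Cesaro mean is to damp bounded oscillation before a state $\gamma$ is applied. For instance, the Cesaro mean of $g=\cos$ is $\sin(\lambda)/\lambda\to 0$ although $g$ itself has no limit at infinity, so every Cesaro-Banach limit assigns it the value $0$. A small numerical check (the choice $g=\cos$ and the quadrature are ours):

```python
import math

def cesaro_mean(g, lam, n=200_000):
    """(1/lam) * int_0^lam g(s) ds, via the midpoint rule."""
    h = lam / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h / lam

# cos is bounded but has no limit at infinity; its Cesaro mean
# sin(lam)/lam decays, so any Cesaro-Banach limit maps it to 0
for lam in (1e2, 1e3, 1e4):
    print(lam, cesaro_mean(math.cos, lam))
```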
\medskip The identification of Dixmier and Connes-Dixmier traces corresponding
to the pair $(\mathcal{N},\tau)$ of a semifinite von Neumann algebra $\mathcal{N}$
and a faithful normal semifinite trace $\tau$ is as follows.
\begin{thmS} Let $(\mathcal{N},\tau)$ be a semifinite von Neumann algebra $\mathcal{N}$
with faithful normal semifinite trace $\tau$. Then
\begin{eqnarray*}
\inset{Tr_\omega}{\omega \in D(\ensuremath{\mathbb{R}}_+)}
& = & \inset{F_{L,\alpha}}{L \in BL(\ensuremath{\mathbb{R}}_+)} \\[4pt]
\inset{ \, tr_\gamma \, }{\gamma \in SC_b^*([0,\ensuremath{\infty}))}
& = & \inset{F_{L,\alpha}}{L \in CBL(\ensuremath{\mathbb{R}}_+)},
\end{eqnarray*}
where $\alpha(t) = \log(1+t)$.
\par
\vspace*{1\belowdisplayskip}
\proof{
By construction $F_{L,k}(r) := f_{L,k^{-1}}(\mu_t(r))$,
where $f_{L,k^{-1}}$ is given in Definition 1.7.
By Remark 1.9, $f_{L,\log(1+t)^{-1}}=f_{L,e^t-1}\equiv
f_{L,\exp}$. Therefore, to prove the first equality, it is
sufficient to show that
(i) for a given $\omega \in D(\ensuremath{\mathbb{R}}_+)$, there exists an $L\in
BL(\ensuremath{\mathbb{R}}_+)$ such that
$$L(\phi(x)(e^t))=\omega(\phi(x)(t)),\quad 0\leq x\in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau);$$
where $\phi(x)(t) := (\log(1+t))^{-1} \int_0^t x(s)ds$.
(ii) for a given $L\in BL(\ensuremath{\mathbb{R}}_+)$, there exists an $\omega \in
D(\ensuremath{\mathbb{R}}_+)$ such that the equality above holds.
To establish (i), fix an $\omega \in D(\ensuremath{\mathbb{R}}_+)$ and define
$L(g):=\omega(g_1(\log(t)))$, $g\in C_b([0,\infty))$, $t\ge 0$,
where we set $g_1(s):=0$ if $s<-1$, $g_1(s)=g(s)$ if $s\ge 0$,
$g_1(s)=(1+s)g(0),\ -1\le s<0$. Clearly, $g_1$ is continuous on
$\ensuremath{\mathbb{R}}$. We show that $L\in BL(\ensuremath{\mathbb{R}}_+)$. It is evident that $L$ is a
positive linear functional on $C_b([0,\infty))$ which takes value
$1$ on $g(t)\equiv 1$ and vanishes on $C_0([0,\infty))$. Thus, it
remains to show that $L$ is translation invariant. Fix an $a>0$
and consider $L(T_a(g))=\omega((T_a(g))_1(\log(t)))$. For all
sufficiently large $t>0$, the value $(T_a(g))_1(\log(t))$
coincides with $g(\log(t)+a)$. On the other hand, since $\omega
\in D(\ensuremath{\mathbb{R}}_+)$, we have
$$\omega(g_1(\log(t)))=\omega(D_{e^a}g_1(\log(t)))=\omega(g_1(\log(t\cdot
e^a)))=\omega(g_1(\log(t)+a)).
$$
As the value $g_1(\log(t)+a)$ also coincides with $g(\log(t)+a)$
for all sufficiently large $t>0$, we conclude that
$L(T_a(g))=L(g)$.
To show (ii), fix an $L\in BL(\ensuremath{\mathbb{R}}_+)$ and define
$\omega(g):=L(g(e^t))$, $g\in C_b([0,\infty))$. Again, it is clear
that $\omega$ is a positive linear functional on $C_b([0,\infty))$
which takes value $1$ on $g(t)\equiv 1$ and vanishes on
$C_0([0,\infty))$. To show that $\omega$ is dilation invariant,
fix an arbitrary $\lambda> 0$. The translation invariance of $L$
immediately yields that for every $r\in [0,\infty)$
$$
L(g(e^t))=L(T_r(g(e^t)))=L(g(e^{t+r}))
$$
and so, setting $r:=-\min\{0,\log(\lambda)\}$, we obtain
$$
\omega(D_\lambda g)=L((D_\lambda g)(e^{t+r}))=L(g(\lambda
e^{t+r}))=L(g(e^{t+(r+\log(\lambda))})),\quad g\in C_b([0,\ensuremath{\infty})).
$$
By construction $r+\log(\lambda)\ge 0$ and again appealing to the
translation invariance of $L$, we conclude $ \omega(D_\lambda
g)=L(g(e^{t}))=\omega(g)$. This completes the proof of the first
equality.
The second equality follows from Theorem 5.6 as $tr_\gamma(r) =
\tau_{\gamma, \log(1+t)}(\mu_s(r))$ where $\tau_{\gamma,\log(1+t)}$
is a Connes-Dixmier functional, see Definition 5.2.
}
\end{thmS}
Theorem 6.2 completes the identification suggested by the results
of \cite{CPS2}. It follows from Theorem 6.2:
\begin{thmS} Let $(\mathcal{N},\tau)$ be a semifinite von Neumann algebra $\mathcal{N}$
with faithful normal semifinite trace $\tau$. Then
\begin{prop2list}{10}{2}{3}
\item the functionals $Tr_\omega$ and $tr_\gamma$ on $\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$
define singular traces on $\mathcal{N}$ for all
$\omega \in D(\ensuremath{\mathbb{R}}_+)$ and $\gamma \in SC_b^*([0,\ensuremath{\infty}))$,
\item the set of Connes-Dixmier traces is a subset of the set of Dixmier traces.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{The implication $(\Leftarrow)$ in the statement of
Theorem 6.1 does not require that $\mathcal{N}$ be a factor. Hence
(i) and (ii) follow from Theorem 6.2 and Example 4.3.}
\end{thmS}
Let $(\mathcal{N},\tau)$ be a semifinite von Neumann algebra $\mathcal{N}$
with faithful normal semifinite trace $\tau$.
The identification in Theorem 6.2 allows the results of previous sections
to be applied to the Marcinkiewicz operator ideal
$\mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ as follows.
\begin{thmS}
Let $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$. Then
\display{\nm{r}_0 := \inf_{r' \in \mathcal{L}^{(1,\ensuremath{\infty})}_0(\mathcal{N},\tau)}
\nm{r-r'}_{(1,\ensuremath{\infty})}
= \sup_{\omega \in D(\ensuremath{\mathbb{R}}_+)} Tr_\omega(|r|)}
and
\display{\nm{r}_0
\simeq \sup_{\gamma \in SC_b^*([0,\ensuremath{\infty}))} tr_\gamma(|r|).}
\par
\vspace*{1\belowdisplayskip} \proof{The result follows from Remark 5.13, Theorem 5.12.A, Theorem 6.2 and
Theorem 2.8, since $\nm{r}_0 \equiv \rho_1(\mu_t(r))$. }
\end{thmS}
\bigskip \noindent In the
following definition (iii) follows A. Connes (see
\cite[IV.2.$\beta$, Proposition 6, Definition 7]{CN}).
\begin{dfnS}
Let $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ be positive. Then we
say $r$ is
\begin{prop2list}{10}{2}{4}
\item $\mathcal{M}$-measurable if
$$\lim_{\lambda \to \ensuremath{\infty}} M_\alpha \Big(
\frac{1}{\log(1+t)}
\int_0^t \mu_s(r)ds \Big)(\lambda) = A
$$
for some $A \geq 0$, \item $\mathcal{F}$-measurable if
$Tr_\omega(r)$ is independent of $\omega \in D(\ensuremath{\mathbb{R}}_+)$, \item
measurable if $tr_\gamma(r)$ is independent of $\gamma \in
SC_b^*([0,\ensuremath{\infty}))$, \item Tauberian if
$$\lim_{t \to \ensuremath{\infty}} \frac{1}{\log(1+t)}
\int_0^t \mu_s(r)ds = A$$
for some $A \geq 0$.
\end{prop2list}
\end{dfnS}
\begin{thmS}
Let $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ be positive. Then the
following statements are equivalent
\begin{prop2list}{10}{2}{4}
\item $r$ is $\mathcal{M}$-measurable,
\item $r$ is $\mathcal{F}$-measurable,
\item $r$ is measurable,
\item $r$ is Tauberian.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{From the proof of Proposition 5.5, $\mu_t(r)$ is
$\mathcal{M}$-measurable if and only if $\mu_t(r)$ is
$\mathcal{C}_{\alpha^{-1}}$-measurable (see Definition 3.5). Hence
the result follows directly from Remark 5.13 and Theorem 5.10. }
\end{thmS}
\REMS{As mentioned in Remark 5.13, the equivalence of the
statements (i) and (iii) in Theorem 6.6 is a result stated and
proved by A. Connes \cite[IV.2.$\beta$, Proposition 6]{CN} for the
special case $(\mathcal{N},\tau) = (B(H),Tr)$ where $H$ is a
separable Hilbert space and $Tr$ is the canonical trace. That a
positive element $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$
is measurable if and only if $r$ is Tauberian is a new result. }
Theorem 6.6 has the following corollary, which concludes
the paper by linking measurable operators to the
results of \cite{CPS2}.
\begin{corS}
Let $r \in \mathcal{L}^{(1,\ensuremath{\infty})}(\mathcal{N},\tau)$ be positive. Define
$$
\zeta_r(s) = \tau(r^s)
$$
for any $s \in \ensuremath{\mathbb{C}}$ with Re$(s) > 1$.
Then the
following statements are equivalent
\begin{prop2list}{10}{2}{4}
\item $r$ is measurable,
\item $$
Tr_{\omega}(r) = \lim_{s \to 1^+} (s-1)\zeta_r(s)
= 2 \Gamma( \textstyle{\frac{1}{2}})^{-1} \lim_{\epsilon \to 0^+} \epsilon \tau(e^{-{(\epsilon r)}^{-2}})
$$
for all $\omega \in D(\ensuremath{\mathbb{R}}_+)$.
\end{prop2list}
\par
\vspace*{1\belowdisplayskip} \proof{The result follows from Theorem 6.6, Corollary 3.7 of \cite{CPS2} and
Proposition 4.2 of \cite{CPS2}. }
\end{corS}
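The residue formula in (ii) can be checked numerically in a commutative prototype: take $r$ with eigenvalue sequence $\mu_n=1/n$ (our choice for illustration), so that $\zeta_r(s)=\sum_{n\ge 1} n^{-s}$ is the Riemann zeta function and both sides of (ii) equal $1$. A sketch with an Euler--Maclaurin tail correction:

```python
def zeta(s, N=1000):
    """zeta_r(s) = sum n^(-s) for the model spectrum mu_n = 1/n,
    evaluated by truncation plus the Euler-Maclaurin tail
    N^(1-s)/(s-1) - N^(-s)/2 (accurate for s > 1)."""
    head = sum(n ** (-s) for n in range(1, N + 1))
    return head + N ** (1 - s) / (s - 1) - 0.5 * N ** (-s)

for s in (1.1, 1.01, 1.001):
    print(s, (s - 1) * zeta(s))   # approaches the residue 1 as s -> 1+
```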
\section{Introduction}
The human heart does not beat at a constant rate, even for a
subject in repose. Rather, there is strong variability of the
heart rate. The complexity of this heart rate variability (HRV)
presents a major challenge that has attracted continuing
attention. Many of the explanations proposed are by analogy with
paradigms used in physics to describe complexity, including:
deterministic chaos \cite{Ott:93}; the statistical theory of
turbulence \cite{Frisch:95}; fractal Brownian motion
\cite{Mandelbrot:82}; and critical phenomena \cite{Jensen:98}.
They have led to new approaches and time-series analysis
techniques including a variety of entropies
\cite{Kurths:95,Pincus:91,Costa:02}, dimensional analysis
\cite{Raab:06}, the correlation of local energy fluctuations on
different scales \cite{Kiyono:05}, the analysis of long range
correlation \cite{Peng:95}, spectral scaling
\cite{Hausdorff:96,Pilgram:99}, the multiscale time asymmetry
index \cite{Costa:05}, and multifractal cascades
\cite{Ivanov:99,Ivanov:01}. All these measures allow one to
describe HRV as a non-stationary, irregular, complex fluctuating
process. Depending on the technique in use there has been a very
wide range of conclusions about the regulatory mechanism of heart
rate, ranging from a stochastic feedback configuration
\cite{Ivanov:98b} to the physical system being in a critical state
\cite{Kiyono:05}. HRV can also be considered in terms of the
interactions between coupled oscillators of widely differing
frequencies \cite{Stefanovska:99a}.
Although we now have this huge variety of tools and approaches
for the analysis of HRV, only the last-mentioned has enabled us
to understand the origins of some of the time-scales embedded
in HRV. Each time-scale (frequency) in the coupled oscillator
model \cite{Stefanovska:99a} is represented by a separate
self-oscillator that interacts with the others, and each of the
oscillators represents a particular physiological function. The
frequency variations in HRV can therefore be attributed to the
effects of respiration ($\sim$0.25\,Hz), and myogenic
($\sim$0.1\,Hz), neurogenic ($\sim$0.03\,Hz) and endothelial
($\sim$0.01\,Hz) activity. HRV also contains a fast (short
time-scale) noisy component which forms a noise background in
the HRV spectrum and can be modelled as a white noise source
\cite{Stefanovska:99a}. Its properties are currently an open
question, and one that is important for both understanding and
modelling HRV. A practical difficulty in experimental
investigations is the presence of a strong perturbation,
respiration, that occurs continuously and exerts a particularly
strong influence in modulating the heart rate. This modulation
involves several mechanisms: via mechanical movements of the
chest, chemo-reflex, and couplings to neuronal control centres
\cite{Eckberg:03}. Spontaneous respiration gives rise to a
complex non-periodic signal, and this complexity is inevitably
reflected in HRV \cite{West:05r}. So, in order to understand
the properties of the fast noise, one would ideally remove the
respiratory perturbation and consider the residual HRV which
would then reflect fluctuations of the intrinsic dynamics of
the heart control system.
\begin{figure}[th]
\centerline {\includegraphics[width=14cm]{figure1.eps}}
\caption{\label{fig1} $RR$-intervals for
(a) normal (spontaneous) and (b) intermittent respirations.
Respiration signals (arbitrary units) are shown by dashed lines.
}
\end{figure}
\begin{figure}[ht]
\centerline{\includegraphics[width=12cm]{figure2a.eps}}
\vspace{0.2cm}
\centerline{\includegraphics[width=12cm]{figure2b.eps}}
\caption{\label{fig2} (a) An ECG signal and (b) the
corresponding HRV ($RR$ intervals) signal. In (a) the R-peaks
are marked by $\opencircle$; the ECG signal is shown in
arbitrary units. }
\end{figure}
Consideration of the intrinsic activity of the heart control
system on short-time scales is important for general
understanding of the function of the cardio-vascular system,
leads potentially to diagnostics of causes of arrhythmia
involving problems with neuronal control \cite{Doessel:06}, and
can be a benchmark for modeling HRV. In this paper we present
the results of an experimental study of the intrinsic dynamics
of the heart regulatory system and discuss these results in the
context of modelling the fast noise component. A number of open
problems are identified.
\section{Experimental results}
We analyse the dynamics of the control system in the absence of
explicit perturbations by temporarily {\it removing} the
continuing perturbations caused by respiration
[figure~\ref{fig1}(b)]. To do so, we perform experiments
involving modest breath-holding (apn{\oe}a) intervals. Note
that during long breath-holding the normal state of the
cardiovascular system is significantly modified
\cite{Parkes:06}. The idea of the experiments came from the
observation that spontaneous apn{\oe}a occurs during repose.
Apn{\oe}a intervals of up to 30 sec were used, enabling us to
avoid either anoxia or hyper-ventilation \cite{Parkes:06}.
Respiration-free intervals were produced by {\it intermittent
respiration}, involving an alternation between several normal
(non-deep) breaths and then a breath-hold following the last
expiration, as indicated by the dashed line in
figure~\ref{fig1}(b). The respiratory amplitude was kept close
to normal to avoid hyper-ventilation, and there were relatively
long intervals of apn{\oe}a when the heart dynamics was not
perturbed by respiration. It is precisely these intervals that
are our main object of analysis. The durations of both
respiration and apn{\oe}a intervals were fixed at 30 sec.
Measurements were carried out for 5 relaxed supine subjects,
and they were approved by the Research Ethics Committee of
Lancaster University. Note that the measurements presented have
been selected from a larger number of measurements to form a
set recorded under almost identical conditions of time and
duration, with the subjects avoiding either coffee or a meal
for at least 2 hours beforehand. They were 4 males and 1
female, aged in the range 29--36 years, non-smokers, without
any history of heart disease. We stress that the aim of the
current investigation was exploratory: to study typical
behaviour of the internal regulatory system; we have not
performed a large-scale trial of the kind widely used in
medicine when a large number of subjects is necessary because
of the need for subsequent statistical analysis of the data.
The electrocardiogram (ECG) and respiration signals were
recorded \cite{Stefanovska:99a} over 45-60 minutes. The ECG
signals were transformed to HRV by using the marked events
method for extraction of the $RR$-intervals which are shown in
figure~\ref{fig2}.
Figure~\ref{fig1} shows $RR$-intervals found for the different types of
respiration. It is evident that respiration changes the heart rhythm very
significantly. Immediately after exhalation (b), there is an apn{\oe}a interval
where the heart rhythm fluctuates around some level. These fluctuations
correspond to the intrinsic dynamics of the heart control system. It is clear
from (a) that heart rate is {\it continuously} perturbed during normal
respiration, whereas in (b) one can distinguish an interval of intrinsic
dynamics corresponding to apn{\oe}a. Thus, the $j$th interval of
apn{\oe}a is characterized by the time series $\{RR_i\}$; here $i=1,2\ldots$
labels the $i$th $RR$-interval. Finally, we form the sets $\{RR_i\}^j$ for
analysis, treating them as realizations of a random walk and analysing
their dynamical properties accordingly.
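In practice, passing from the marked R-peaks to the sets analysed here amounts to two difference operations; a minimal sketch (the peak times below are synthetic placeholders, not measured data):

```python
import numpy as np

# synthetic R-peak times (seconds) for one apnoea interval, standing
# in for the marked-events output of the ECG processing
r_peaks = np.array([0.00, 0.92, 1.86, 2.77, 3.71, 4.62])

rr = np.diff(r_peaks)    # RR_i: the beat-to-beat intervals {RR_i}^j
d_rr = np.diff(rr)       # increments Delta RR_i = RR_{i+1} - RR_i
print(rr)
print(d_rr)
```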
To reveal dynamics additional to $RR$-intervals, the
differential increments $\Delta RR_i=RR_{i+1}-RR_{i}$ were
analyzed. The differences between $RR$-intervals and their
increments are illustrated in figure~\ref{fig3}. Each apn{\oe}a
time-series $\{RR_i\}^j$ exhibits a trend that is describable
by the slope $a$ of a linear function $RR^j_i \propto a_j \, i$,
where $i$ is a heart beat number and $j$ marks $j$th apn{\oe}a
interval. The trend can be characterized by the distribution of
slopes $P(a)$ shown in figure~\ref{fig4} (a). For all
measurements the distributions are broad and their mean values
differ from zero. Thus the non-stationary nature of HRV on
short time-scales is clearly apparent. Note that the
distributions $P(a)$ for the increments $\Delta RR$ are
significantly narrower [figure~\ref{fig4} (b)] and that they
are very well fitted to a normal distribution; however, the
mean values of the slopes differ from zero.
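The trend estimate behind figure~\ref{fig4}(a) is an ordinary least-squares slope computed per apn{\oe}a set. The sketch below applies it to synthetic segments (the drift and noise levels are our assumptions, chosen only for illustration) and recovers the spread of the imposed slopes:

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_slope(rr):
    """Least-squares slope a_j of RR_i ~ a_j * i over one apnoea set."""
    i = np.arange(len(rr))
    return np.polyfit(i, rr, 1)[0]

# synthetic apnoea segments: 30 beats each, a random per-segment drift
# plus beat-to-beat noise (levels are assumed, for illustration only)
slopes = [
    segment_slope(0.9
                  + rng.normal(0.0, 2e-3) * np.arange(30)
                  + rng.normal(0.0, 5e-3, 30))
    for _ in range(200)
]
print(np.mean(slopes), np.std(slopes))
```

The recovered slope distribution is broad relative to its mean, mirroring the spread seen in $P(a)$.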
\begin{figure}[t]
\centerline{ \includegraphics[width=14cm]{figure3.eps}}
\caption{\label{fig3} (a) $RR$-intervals and (b) increments
$\Delta RR$ corresponding to apn{\oe}a intervals are shown. For
convenience of presentation, the difference between a given
value and the first value of each $j$th apn{\oe}a interval is
drawn in each case: $\widetilde {RR}_i^j=RR_i^j-RR_1^j$ and
$\Delta \widetilde {RR}_i^j=\Delta RR_i^j-\Delta RR_1^j$. }
\end{figure}
\begin{figure}[ht]
\centerline{ \includegraphics[width=14cm]{figure4.eps}}
\caption{\label{fig4} Distributions of trend slopes $P(a)$ of
the sets (a) $\{RR_i\}^j$ and (b) $\{\Delta RR_i\}^j$. }
\end{figure}
Because the dynamics of $RR$-intervals is evidently
non-stationary, we have applied detrended fluctuation analysis
(DFA) \cite{Peng:95} for estimation of the scaling exponents
$\beta$ for the apn{\oe}a sets $\{ RR_i\}^j$. In doing so, we
adapted the DFA method \cite{Peng:95} for short time-series and
used non-overlapped windows (see Appendix for details). Because
the time-series were short, time windows of length 4--15
$RR$-intervals were used to calculate $\beta$. For all measured
subjects, this procedure yielded values of $\beta$ lying within
the range $\beta \in(1.3:1.7)$, with a mean value of $1.45$. If
$RR$-intervals in the sets $\{ RR_i\}^j$ are replaced by
realizations of Brown noise (the integral of white noise)
keeping the same lengths of the apn{\oe}a intervals, then the
calculation gives $\beta = 1.46 \pm 0.07$. Additionally, a
surrogate analysis was performed for each subject by random
shuffling of the time indices $i$ of $RR_i$-intervals, to
confirm the importance of time-ordering of the $RR$-intervals.
For each realization (set $\{ RR_i\}^j$), 100 surrogate sets
were generated, 100 values of $\beta$ were obtained, and the
mean value $\beta_s$ was calculated. Values of $\beta_s$ for
the surrogate sets lie in the same limits as those for the
original sets, but with a small bias between $\beta$,
calculated using original sets, and $\beta_s$ (see the Appendix
for values of $\beta$ and $\beta_s$). This means that there is
a correlation between $RR$-intervals, but it is weak.
Summarizing the DFA results, we can claim that the scaling
exponent $\beta$ is similar to that for free diffusion of a
Brownian particle, but there is nonetheless some correlation
between the $RR$-intervals. We also applied aggregation
analysis \cite{West:05r} in a similar manner and arrived at
qualitatively the same conclusion. Note that, in contrast to
the initial idea of the DFA and aggregation analyses, which
were used for revealing long-range correlations in time series,
we have used these approaches to analyse the diffusion velocity
because they can cope with trends. Long-range correlations
cannot be revealed in the measurements described here.
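The adapted DFA procedure can be sketched as follows: integrate the series, detrend linearly in non-overlapping windows of 4--15 beats, and regress $\log F(n)$ on $\log n$. (This is a generic first-order DFA; the Appendix version referred to in the text may differ in detail.) Applied to a Brown-noise surrogate it returns an exponent near the value $1.46\pm0.07$ quoted above:

```python
import numpy as np

def dfa_exponent(x, windows=range(4, 16)):
    """First-order DFA: cumulative sum, linear detrending in
    non-overlapping windows of size n, slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))              # profile
    ns, fs = [], []
    for n in windows:
        m = len(y) // n                        # number of whole windows
        if m < 2:
            continue
        t = np.arange(n)
        f2 = 0.0
        for k in range(m):
            seg = y[k * n:(k + 1) * n]
            coef = np.polyfit(t, seg, 1)       # local linear trend
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        ns.append(n)
        fs.append(np.sqrt(f2 / m))
    return np.polyfit(np.log(ns), np.log(fs), 1)[0]

rng = np.random.default_rng(1)
brown = np.cumsum(rng.standard_normal(4000))   # Brown-noise surrogate
print(dfa_exponent(brown))                     # near 1.5
```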
\begin{figure}[t]
\centerline {\includegraphics[width=14cm]{figure5.eps} }
\caption{\label{fig5} Examples of the autocorrelation function
$\rho ( k)$ (a) with and (b) without an oscillatory component.
The crosses indicate $\rho( k )$ calculated on the basis of the
increments $\Delta RR$. The solid line corresponds to the
approximating curve $\rho^{a}( k )=\exp(-\gamma k )\cos(2\pi
\Omega k)$. }
\end{figure}
To estimate the strength of the correlation, {\it stationary}
time-series of the increments $\{\Delta RR_i\}^j$ were
considered. The autocorrelation function $\rho( k )$ was
calculated as
\begin{eqnarray}
\label{acfRR}
\rho( k )=\frac{1}{(M-1)\sigma} \sum_{j=1}^{N} \sum_{i=1}^{m^j-k}\widehat{RR}_i^j\widehat{RR}_{i+k}^j ; \\
M=\sum_{j=1}^{N} (m^j-k), \ \ \ \ \ \ \ \ \ \
\sigma=\frac{1}{M-1}\sum_{j=1}^{N} \sum_{i=1}^{m^j-k}
\left(\widehat{RR}_i^j\right)^2 . \nonumber
\end{eqnarray}
Here $\widehat{RR}_i^j=\Delta RR_i^j-\langle \Delta RR^j
\rangle$; the brackets $\langle \rangle$ denote calculation of
the mean value; $i$ and $j$ correspond to the heart beat number
and apn{\oe}a interval respectively, $k=0,1,\ldots$, $m^j$ is
the number of increments $\Delta RR$ in the $j^{\rm th}$
apn{\oe}a; $N$ is the total number of apn{\oe}a intervals.
Figure~\ref{fig5} presents examples of autocorrelation
functions. One of them has pronounced oscillations. An
approximation of $\rho( k)$ by the function $\rho^{a}( k
)=\exp(-\gamma k )\cos(2\pi \Omega k)$ demonstrates that
oscillations occur with frequency near $0.1$ Hz, presumably
corresponding to myogenic processes \cite{Stefanovska:99a} or
(perhaps equivalently) to the Mayer wave associated with blood
pressure feedback \cite{Malpas:02,Julien:06}. Further
investigations via parametric spectral analysis for each
apn{\oe}a interval show that these oscillations are of an
on-off nature, i.e.\ observed for parts of the apn{\oe}a
intervals, and not in all of the measurements as can be seen in
figure~\ref{fig5} (b). Examples of apn{\oe}a intervals with and
without oscillations are shown in figure~\ref{fig5add}. When an
oscillatory component is present then its contribution to
$\rho( k )$ is much weaker than the contribution of the noisy
component. The latter is characterized by a very short memory
as demonstrated by fast decay of $\rho( k )$.
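Equation (\ref{acfRR}) pools the lag-$k$ products over all apn{\oe}a segments after demeaning each segment separately. A sketch on synthetic increments (unit white noise plus a weak oscillation at 0.1 cycles per beat, our stand-in for the $\sim$0.1~Hz component) recovers the cosine signature seen in figure~\ref{fig5}(a):

```python
import numpy as np

rng = np.random.default_rng(2)

def pooled_acf(segments, kmax):
    """Pooled autocorrelation over segments, each demeaned separately,
    following the pooled, per-segment-demeaned form of rho(k)."""
    rho = []
    for k in range(kmax + 1):
        num = den = 0.0
        for seg in segments:
            x = seg - seg.mean()
            num += np.sum(x[:len(x) - k] * x[k:])
            den += np.sum(x[:len(x) - k] ** 2)
        rho.append(num / den)
    return np.array(rho)

# synthetic increments: white noise plus an oscillation at 0.1
# cycles/beat with a random phase per segment (illustration only)
segs = [np.cos(2 * np.pi * 0.1 * np.arange(30) + rng.uniform(0, 2 * np.pi))
        + rng.standard_normal(30)
        for _ in range(60)]
rho = pooled_acf(segs, 12)
print(rho[0], rho[5], rho[10])   # 1, then negative at the half period,
                                 # positive again at the full period
```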
\begin{figure}[t]
\centerline {\includegraphics[width=14cm]{figure5add.eps} }
\caption{\label{fig5add} Examples of apn{\oe}a intervals with
(a) and without (b) oscillation of HRV. The circles correspond
to the values of the increments $\Delta RR_i$ and the solid
lines connecting points are guides to the eye. The dashed lines
in figure (a) are added to reveal oscillations. The middle and
upper $\Delta RR_i$ time-series are shifted by 0.1 and 0.2
(sec), respectively.
}
\end{figure}
The properties of $\Delta RR$ can also be characterized by the
probability density function $P(\Delta RR)$ shown in
figure~\ref{fig6} (a). Figure (b) shows the probability density
function $P(RR)$ of RR-intervals for comparison. Following
\cite{Peng:93}, the $\alpha$-stable distribution has been
widely used to fit the distribution of increments $\Delta RR$,
and {\it strongly} non-Gaussian distributions were observed
\cite{Peng:93}. We perform a similar fit, applying the specialized
software \cite{Nolan:98}. Since the distributions $P(\Delta
RR)$ are almost symmetrical, our attention was concentrated on
the tails, which were characterized by a stability index
$\alpha \in (0,2]$. The case of $\alpha=2$ corresponds to a
Gaussian and, if $\alpha <2$, the tails are wider than
Gaussian. Fitting to our results yields a stability index
$\alpha \in (1.8:2)$, and the goodness-of-fit test (modified
KS-test taking into account the weight to the tails
\cite{Nolan:98}) supports the fitting. Note that, although the
autocorrelation function $\rho( k )$ cannot be used for the
theoretical description of an $\alpha$-stable process
\cite{Samorodnitsky:94}, $\rho( k )$ is nonetheless applicable
for finite time-series.
\begin{figure}[t]
\noindent \includegraphics[width=7cm]{figure6a.eps}\ \ \ \ \ \ \includegraphics[width=7cm]{figure6b.eps}\\
\caption{\label{fig6} (Color online) Normalized probability
density functions (a) $P(\Delta RR)$ of increments of
$RR$-intervals and (b) $P(RR)$ of $RR$-intervals. In (a) the
full (blue) and dashed (red) lines are Gaussian and stable
distributions, respectively, fitted to the data. The insets
show the same distributions plotted with logarithmic ordinate
scales; the circles correspond to $P(\Delta RR)$. The stable
distribution in (a) is characterized by $\alpha=1.86$. In (b)
the full (blue) line is a Gaussian distribution fitted to the
data.}
\end{figure}
If we consider the same length of realization using a Gaussian
random variable instead, we find $\alpha=1.99 \pm 0.01$. This
means that the calculation of $\alpha$ is very robust. In
addition we carried out a stability test and it too supported
the fitting results. The obtained values of $\alpha \in
(1.8:2)$ differ significantly from the previously reported
values $\alpha \in (1.5:1.7)$ for 24h time-series of
$RR$-intervals \cite{Peng:93}.
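This robustness check can be reproduced even with a much cruder estimator than the software of \cite{Nolan:98}: for a symmetric $\alpha$-stable law $|\varphi(t)|=\exp(-(c|t|)^{\alpha})$, so the slope of $\log(-\log|\hat\varphi(t)|)$ against $\log t$ estimates $\alpha$. For Gaussian data the sketch below returns a value close to $2$ (the estimator and the $t$-grid are our assumptions, not the method used for figure~\ref{fig6}):

```python
import numpy as np

rng = np.random.default_rng(3)

def ecf_alpha(x, ts=(0.5, 1.0, 1.5, 2.0)):
    """Crude stability-index estimate from the empirical characteristic
    function: for symmetric stable laws |phi(t)| = exp(-(c|t|)^alpha),
    so log(-log|phi|) is linear in log t with slope alpha."""
    ts = np.asarray(ts)
    phi = np.array([np.abs(np.mean(np.exp(1j * t * x))) for t in ts])
    y = np.log(-np.log(phi))
    return np.polyfit(np.log(ts), y, 1)[0]

x = rng.standard_normal(100_000)   # Gaussian sample: true alpha = 2
print(ecf_alpha(x))                # close to 2
```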
Combining all the results, we conclude that the short-time
dynamics of $RR$-intervals can be described as a stochastic
process with stationary increments. This type of stochastic
processes was discussed by A. N. Kolmogorov
\cite{Kolmogorov:40} and applied to the description of a number
of different problems (see e.g.
\cite{Doob:90,Yaglom:62,Rytov:89} for further details). So, HRV
during apn{\oe}a interval cab be presented in the following
form
\begin{eqnarray}
\label{SPSI}
RR_i= RR_{i-1} +\Delta RR_i,
\end{eqnarray}
where $\Delta RR_i$ is a stationary discrete time stochastic
process. Note that the DFA calculation excludes a linear trend,
which is taken into account in Eq. (\ref{SPSI}) as non-zero
mean value of the increments, $\mu_j=\langle \Delta RR_i
\rangle_j$; in general case, $\mu_j$ is a random function of
$j$th apn{\oe}a interval. If one represents $RR$-intervals as a
sum of the linear trend and a random component:
\begin{eqnarray}
\label{SPLT}
RR_i= \mu_j \, i +\xi_i,
\end{eqnarray}
then $\xi_i$ corresponds to the non-stationary process (\ref{SPSI}) with zero mean value of increments. In other words, the superimposed random component of HRV during apn{\oe}a intervals is described by a non-stationary random process.
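The decomposition (\ref{SPSI})--(\ref{SPLT}) is easy to see in simulation: with stationary Gaussian increments of mean $\mu_j$ (the parameter values below are our assumptions), subtracting the linear trend leaves a driftless random walk whose variance grows linearly with beat number, i.e.\ a non-stationary residual:

```python
import numpy as np

rng = np.random.default_rng(4)

n_beats, n_runs = 30, 2000
mu, sigma = 2e-3, 1e-2     # per-beat drift and noise scale (assumed)

# random walk with stationary increments: RR_i = RR_{i-1} + Delta RR_i
inc = mu + sigma * rng.standard_normal((n_runs, n_beats))
rr = 0.9 + np.cumsum(inc, axis=1)

# subtract the deterministic linear trend; xi_i is a driftless walk
i = np.arange(1, n_beats + 1)
xi = rr - (0.9 + mu * i)
v = xi.var(axis=0)
print(v[4] / (5 * sigma**2), v[24] / (25 * sigma**2))   # both near 1
```

The ratios confirm $\mathrm{var}(\xi_i)\approx i\,\sigma^2$: the residual diffuses rather than fluctuating around a set point.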
Increments $\Delta RR_i$ are characterized by a random
$\alpha$-stable process of short memory, with a weak
intermittent oscillatory component of frequency $\sim$\,0.1~Hz.
In the zeroth approximation the increments can {\it safely} be
represented by an uncorrelated Gaussian random process but, in
the next approximation, a weak correlation must be included,
allowing for an intermittent oscillatory component, and for
weak non-Gaussianity of the distribution of increments $\Delta
RR$. These additions reveal, on the one hand, that the
previously reported observation of a non-Gaussian distribution
of increments \cite{Peng:93} is a property of the intrinsic
heart rate regulatory system, but on the other hand, that the
scaling ranges of the stability index $\alpha$ differ
significantly in the presence or absence of external
perturbations (including respiration) acting on the regulatory
system. Consequently an explanation of the scalings reported
in \cite{Peng:93,Peng:95} should include analyses of the effect
of external perturbations and respiration, and not an analysis
of heart rate alone.
\section{Discussion}
\subsection{Non-stationarity of $RR$-intervals during apn{\oe}a}
The results presented indicate that there is no firm set point
for the heart control system, and that the heart rhythm
exhibits diffusive behaviour. The slowest dynamics can be
described by a linear trend during apn{\oe}a intervals and its
presence can be treated as a slow regulatory/adaptation
component of the control system. The presence of the slow
time-scales is an established property of HRV \cite{Camm:96}
and their presence, even in the absence of the respiratory
perturbation, can be interpreted as an expected property.
On short time-scales of order several seconds, HRV also shows diffusive dynamics. This can be interpreted in two ways. One possibility is that the control system does not firmly trace the base (slow) rhythm, because if it did, short-time fluctuations would ``jump'' around the base rhythm and, consequently, be stationary. Such a picture corresponds to zero
action of the control system if the heart rate is in a ``safe''
(for the whole cardiovascular system) interval, e.g. $RR \in
[RR_{low}:RR_{high}]$. Another possible explanation could be
that the control system is tracing the base rhythm but the
short-time fluctuations have a non-stationary character. It is
natural to expect that there could be other possible
explanations, and additional investigations are needed to reach
an understanding of the diffusive dynamics on short-time
scales.
In section 2 it is suggested that we should consider the
non-stationarity and diffusive dynamics of $RR$-intervals
within the framework of a stochastic process with independent
increments. This allows one to consider $RR$-intervals as realizations of the so-called auto-regressive process that is widely used in time-series analysis \cite{Box:94}. This means
that the direct spectral estimation of $RR$-intervals,
currently used as one of the basic techniques \cite{Camm:96},
is not applicable here and that one must use the theory of
stochastic processes with stationary increments for their
spectral decomposition \cite{Rytov:89}. If in the presence of
respiration, the short-time stochastic component of HRV
preserves non-stationarity then spectral estimation based on
$RR$-intervals is not correct, and increments must be used
instead. Note that the properties of short-time fluctuations in the presence of respiration are far from being completely understood.
\subsection{Non-Gaussianity and correlations of increments $\Delta RR$}
The theories of both stochastic processes with stationary
increments and of auto-regressive analysis place some
limitations on the analysed time-series. The first approach
requires the existence of finite second-order moments, whereas
the second approach assumes uncorrelated statistics of
increments. Formally, however, non-Gaussianity of the
increments distribution means that the second-order moments do
not exist \cite{Samorodnitsky:94}, but non-Gaussianity can
still be incorporated into the auto-regressive description
\cite{Nikias:95}. And {\it vice versa}, the presence of correlations in the increments dynamics requires a modification of the standard auto-regressive approach, though such a modification can be incorporated naturally into the general theory. In the
current investigation we ignore these issues. We calculate the
auto-correlation function and use model (\ref{SPSI}), because
the finite length of the time-series guarantees the existence
of the second-order moments, and the simplicity of (\ref{SPSI})
means that the inclusion of the correlations is a trivial
extension.
Our consideration has the formal character of time-series
analysis because we do not incorporate any preliminary
information about the possible dynamics of $RR$-intervals. The
analysis is based on the use of a set of relatively short
time-series, a fact that defines our choice of simple
statistical measures. One cannot exclude the possibility that
the use of other approaches to such data might provide
additional insight into HRV dynamics. For example, the
fractional Brownian motion approach \cite{West:05r,Heneghan:00}
and the theory of discrete non-stationary non-Markov random
process \cite{Yulmetyev:02} represent different paradigms,
which are based on assumptions about the origin of the data.
Note that, despite the long history of their development and application, the fractional Brownian motion and stable random process approaches are not standardized tools, whereas the non-Markov random process approach is less popular. There is no definite recipe for choosing a set of
measures which can uniquely specify (or provide a good
description of) the properties of a renewal (discrete time)
stochastic process.
\subsection{Modelling}
Another way of attempting to understand the results is to try
to reproduce the observed data properties from an appropriate
model. In the context of our experiments, the modelling should
consist of a simulation of the electrical activity of
sinoatrial node (SAN) where the heart beats are initiated. For
modelling, one option is to use a bottom-up approach, which is
currently a very popular technique within the framework of the
complexity paradigm. In fact, available SAN cellular models
allow one to incorporate many details of physiological
processes like the openings and closures of specific ion
channels \cite{Wilders:07}. However, despite the complexity of
the models (40--100 variables), many important features are still missing. For example, the fundamentally stochastic
dynamics of ion channels is represented by equations that are
deterministic. Heterogeneity of the SAN cellular locations and
intercell communications are among other important open issues
\cite{Ponard:07,Dobrzynski:05}.
An alternative option is the top-down approach using
integrative phenomenological models. In contrast to detailed
cell models, a toy model of the heart as a whole unit can be
developed. It is known that an isolated heart, and a heart in
the case of a brain-dead patient \cite{Neumann:01} demonstrate
nearly periodic behaviour. So, it is reasonable to assume that
the observed HRV is induced by the neuronal heart control
system, which is a part of the central nervous system. The
control system includes a primary site for regulation located
in the medulla \cite{Klabunde:04}, consisting of a set of
neural networks with connections to the hypothalamus and the
cortex. The control is realized via two branches of the nervous
system: the parasympathetic (vagal) and the sympathetic
branches. Although many details of the control system are still
missing \cite{Armour:04,Doessel:06}, it is currently accepted
that the vagal branch operates on faster time scales than the
sympathetic one, and that each branch has a specific
co-operative action on the heart rate and the dynamics of SAN
cells.
Let us consider an integrate-and-fire (IF) model as a model of
a SAN cell in the leading pacemaker. These cells are
responsible for initiating the activity of SAN cells and,
consequently, that of the whole heart \cite{Dobrzynski:05}. The
dynamics of the IF model describes the membrane potential
$U(t)$ of the cell by the following equations
\begin{eqnarray}
\label{IFmodel}
\frac{dU}{dt} = \frac{1}{\tau} ~~~~~~~~~~~~~~~~~~~~~ & \mbox{if} \ \ \ U(t) < U_t \\
U(t)=U_r, \ \ t^*=t & \mbox{if} \ \ \ U(t)=U_t \ \ \ \mbox{and} \ \ \ \frac{dU}{dt} > 0.
\end{eqnarray}
Here $1/\tau$ defines a slope of integration, $U_t$ is the
threshold potential, $U_r$ is the resting (hyperpolarization)
potential; the time $t^*$ corresponds to the cell firing, and
it is the difference between two successive firings that
determines the instantaneous heart period or $RR$-interval,
RR_i=t^*_i-t^*_{i-1}$. It is known \cite{Klabunde:04} that increasing sympathetic activity combined with decreasing vagal activity leads to a decrease in the heart period (i.e. an increase in the heart rate), and {\it vice versa}. Direct stimulation of the
sympathetic branch leads to an increase of the integration
slope $1/\tau$ and a lowering of the threshold potential $U_t$,
whereas vagal activation has the opposite effects, and
additionally, lowers the resting potential $U_r$. Thus, the
neuronal activities can be taken into account as modulations of
the parameters of the IF model (\ref{IFmodel}). For reproducing
HRV during apn{\oe}a, therefore, it is enough to represent any of
the parameters $\tau$, $U_t$ or $U_r$ as a stochastic variable
of the form (\ref{SPSI}), for example,
$U_t(t^*_i)=U_t(t^*_{i-1})+\xi_i$, where $\xi_i$ are random
numbers having the stable distribution.
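The modulation just described can be sketched in a few lines. For the linear ramp of (\ref{IFmodel}) the inter-firing interval is simply $RR_i = \tau\,(U_t - U_r)$, so a random walk in the threshold $U_t$ translates directly into non-stationary $RR$-intervals. The parameters below are hypothetical, and a Gaussian stand-in is used in place of the stable law:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters, for a sketch only (arbitrary units)
U_r = 0.0      # resting potential
U_t0 = 1.0     # initial threshold potential
tau = 1.0      # integration time constant: base period = tau * (U_t - U_r)
sigma = 0.02   # scale of the threshold fluctuations
N = 200        # number of beats

# IF model (IFmodel): dU/dt = 1/tau with reset to U_r at threshold, so each
# firing interval is RR_i = tau * (U_t - U_r).  Modulating the threshold as
# U_t(t*_i) = U_t(t*_{i-1}) + xi_i makes RR a random walk of type (SPSI).
U_t = U_t0
RR = np.empty(N)
for i in range(N):
    RR[i] = tau * (U_t - U_r)              # closed-form interval of the ramp
    U_t += sigma * rng.standard_normal()   # Gaussian stand-in for a stable law

dRR = np.diff(RR)  # the increments inherit the threshold noise directly
```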
However, the use of more realistic (than IF) models with
oscillatory dynamics, for example Fitzhugh-Nagumo
\cite{FitzHugh:61} or Morris-Lecar \cite{Morris:81} models,
makes the reproduction of the experimental results a more
difficult but interesting task. Currently it is unclear whether
it is possible to obtain a stable distribution of increments by considering Gaussian fluctuations alone, or whether one should use fluctuations characterized by a stable
distribution. This point demands further investigation.
\section{Conclusion}
In summary, our experimental modification of the respiration
process reveals that the intrinsic dynamics of the heart rate
regulatory system exhibits stochastic features and can be
viewed as the integrated action of many weakly interacting
components. Even on a short time scale (less than half a
minute) the heart rate is non-stationary and exhibits diffusive
dynamics with superimposed intermittent $\sim$\,0.1~Hz
oscillations. The intrinsic dynamics can be described as a
stochastic process with independent increments and can be
understood within the framework of many-body dynamics as used
in statistical physics. The large number of independent regulatory perturbations produces a noisy regulatory background,
so that the dynamics of the regulatory rhythm is close to
classical Brownian motion. However there are indications of
non-Gaussianity of increments and weak but important
correlations on short time-scales. The reproduction of these
features, especially the non-Gaussianity property, is an open
problem even in simple toy models.
These results are important both for understanding the general
principles of regulation in biological systems, and for
modeling cardiovascular dynamics. Furthermore, the results
presented may possibly lead to a new clinical classification of
states of the cardiovascular system by analysing the intrinsic
dynamics of the heart control system as suggested in
\cite{Doessel:06}.
\ack
The research was supported by the Engineering and Physical Sciences
Research Council (UK) and by the Wellcome Trust.
\section{Motivation}
\label{Motivation}
In earlier papers \cite{Graft00,Graft01} I argued that the orthodox quantum mechanics solution for EPR is incorrect and that an alternative calculation is required. My previous treatment, although correct, was based on a macroscopic, classical model (spinning disks). I argued that it fully embodies the logic of the quantum solution because it delivers probabilities and correlations predicted by quantum mechanics for joint sampling, and can therefore be useful in explicating issues of separation. However, the argument can be dismissed by those not disposed to see the logical equivalence, or those demanding a purely quantum approach. The previous treatment, by starting with the final quantum eigenvalue probabilities, missed the chance to show the important role of quantum state projection. Here I fill this lacuna of my previous treatment by using quantum mechanics from the beginning.
A second important motivation is to demonstrate that there is not one unique quantum mechanics prediction for EPR. The literature almost universally refers to ``\textit{the} quantum mechanics prediction'', as if application of L{\"u}ders' extension of Von Neumann's projection postulate is the only possible calculation. In fact, however, as shown here, there are several possible calculations, corresponding to the projection rule that applies to the physics of the experiment.
The scheme of the paper is as follows. The quantum joint prediction for EPR correlation is derived and shown to involve a single sampling. The solution for separated measurement, where there are two samplings, is then developed using orthodox quantum mechanics with L{\"u}ders' rule for state projection on measurement. The separated solution is shown to duplicate the predictions of the quantum joint solution. However, it is shown that this solution requires superluminal transmission of information and therefore is physically impossible. Alternative predictions respecting special relativity are developed using both Von Neumann projection and no projection at all. Conditions for the proper application of state projection rules are considered and L{\"u}ders' rule is identified as the primary culprit in the EPR paradox. It cannot be applied to EPR.
\section{The EPR paradox}
\label{The EPR paradox}
Difficulties in the foundations of quantum mechanics first exposed by Einstein, Podolsky, and Rosen~\cite{Einstein00} continue to bedevil our understanding of composite systems with spatiotemporally separated components. The cloud of paradoxical considerations that accompany these difficulties can be termed `the EPR paradox'. The EPR paradox has diverse facets and sequelae but our purposes here allow us to state matters simply as follows. Consider a pair of spin-$\nicefrac{1}{2}$ particles in the (anticorrelated) singlet state. When the particles are measured separately at stations A and B for their spin values along two arbitrarily chosen directions $a$ and $b$, the correlation of the two results is predicted to be $-cos(a-b)$, no matter how far apart spatially the measurements may be performed. Experiments are said to confirm this prediction. However, it is easy to show that no local model can produce such correlations~\cite{Bell00}. If nonlocal effects are introduced, superluminal transmission is required and Lorentz invariance is violated. That is the paradox. Einstein famously called the postulated nonlocal effect ``spooky'', Schr{\"o}dinger called it ``sinister'', and Margenau called it ``monstrous''.
Notwithstanding the apparent horror, the current consensus is that the paradox is resolved by accepting the existence of `quantum nonlocality', whereby the correlations are explained. Concerns about the conflict with special relativity are set aside by appeal to the no-signaling theorem. I show that such expedients are superfluous and that the paradox can be dissolved without appealing to nonlocality. I also argue that the concerns about a conflict with special relativity cannot be set aside.
\section{The quantum joint prediction for an EPR experiment}
\label{The quantum joint prediction for an EPR experiment}
Here I develop the quantum joint prediction for an EPR experiment and show that it involves a single sampling. This derivation is typical of and consistent with the derivations invariably encountered in support of quantum nonlocality.
Consider a source emitting pairs of spin-$\nicefrac{1}{2}$ particles in the (anticorrelated) singlet state. The $4x4$ density matrix for this state is:
\begin{equation}
\begin{aligned}
\rho_{singlet}^{4x4} = \frac{1}{2}\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\end{aligned}
\end{equation}
Note that here and throughout the paper, density matrices are given in the Z representation, and the particles are propagating along the Y axis. Measurement angles are chosen relative to the Z axis in the X-Z plane.
The particles separate and are measured at the two stations A and B, configured respectively with the measurement angles $a$ and $b$. The measurement operators measure the outcomes along the measurement axis and are given by:
\begin{equation}
\begin{aligned}
M_A^{2x2} = \begin{bmatrix} cos(a) & sin(a) \\ sin(a) & -cos(a) \end{bmatrix}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
M_B^{2x2} = \begin{bmatrix} cos(b) & sin(b) \\ sin(b) & -cos(b) \end{bmatrix}
\end{aligned}
\end{equation}
Now, to get the expectation of the product of the outcomes, orthodox quantum mechanics creates a tensor product of the two measurement operators, then applies the resulting $4x4$ operator to the input singlet state, and finally takes the trace of the result. This is represented as follows and a simple calculation shows the result to be $-cos(a-b)$:
\begin{equation}
\langle AB \rangle = Tr([M_A^{2x2}{\otimes}M_B^{2x2}][\rho_{singlet}^{4x4}]) = -cos(a-b)
\end{equation}
\smallskip
Note that the dependence of the trace on only the difference of the two measurement angles implies rotational invariance.
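The trace identity (4) is easy to verify numerically. The following sketch builds the operators of equations (1)--(3) with numpy and checks the result on a grid of angles:

```python
import numpy as np

def M(theta):
    # Spin measurement operator along angle theta in the X-Z plane (eqs 2-3)
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Singlet density matrix in the Z representation (eq 1)
rho_singlet = 0.5 * np.array([[0,  0,  0, 0],
                              [0,  1, -1, 0],
                              [0, -1,  1, 0],
                              [0,  0,  0, 0]])

def joint_expectation(a, b):
    # <AB> = Tr[(M_A tensor M_B) rho_singlet]  (eq 4)
    return np.trace(np.kron(M(a), M(b)) @ rho_singlet)

# <AB> = -cos(a-b), depending only on the angle difference
for a in np.linspace(0, 2 * np.pi, 7):
    for b in np.linspace(0, 2 * np.pi, 7):
        assert np.isclose(joint_expectation(a, b), -np.cos(a - b))
```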
Dephasing of $\rho_{singlet}^{4x4}$ (decoherence) may occur, and if it occurs the EPR correlations may be washed out. I neglect this effect here and assume that coherence is maintained to focus on the operator separation.
The expectation values for the individual results at the two sides are also given by orthodox quantum mechanics:
\begin{equation}
\langle A \rangle = Tr([M_A^{2x2}{\otimes}\textbf{1}^{2x2}][\rho_{singlet}^{4x4}]) = 0
\end{equation}
\begin{equation}
\langle B \rangle = Tr([\textbf{1}^{2x2}{\otimes}M_B^{2x2}][\rho_{singlet}^{4x4}]) = 0
\end{equation}
Here the $2x2$ measurement operators are expanded to $4x4$ operators for application to the $4x4$ singlet by taking the tensor product of the operator and the $2x2$ identity matrix, per orthodox practice.
Because both $\langle A \rangle$ and $\langle B \rangle$ are equal to 0, the expectation $\langle AB \rangle$ gives the correlation between the two measurement outcomes, each of which can yield the value -1 or 1 (the eigenvalues of the measurement operators).
From these expectation values, it is easy to calculate the probabilities of the eigenvalues of the joint outcomes. For example, the probability of both measurements yielding the value 1 is given by $P_{A = 1,B = 1}$. The four probabilities are found to be:
\begin{equation}
P_{A = -1,B = -1} = P_{A = 1,B = 1} = (1 + \langle AB \rangle) / 4
\end{equation}
\begin{equation}
P_{A = -1,B = 1} = P_{A = 1,B = -1} = (1 - \langle AB \rangle) / 4
\end{equation}
My previous analysis \cite{Graft00} started here with a classical system generating these same probabilities.
To generate actual outcomes that can be compared to those of an EPR experiment, sample the probability distribution $P_{i,j}$:
\begin{equation}
Sample(P_{i,j}) \implies O_A, O_B \in \{1, -1\}
\end{equation}
For a large number of source emissions $N$ in an EPR experiment, collect the series of outcomes from each side and correlate them, as follows:
\begin{equation}
{\langle AB \rangle}_{exp} = \frac{1}{N} \sum_{n=1}^{N} {O_A}_n{O_B}_n \approx -cos(a-b)
\end{equation}
The resulting experimental correlation yields the predicted value $-cos(a-b)$. This result is demonstrated by a simulation model that reproduces all the steps given above (with $N = 10000$). The results of the simulation are shown in Figure 1. The measurement angle $b$ at side B is scanned over $0$--$2\pi$ while holding $a$ constant. Two values of $a$ are depicted to demonstrate the rotational invariance. The simulation source code is available on-line \cite{Graft02}.
Two important things to note about the quantum joint prediction are that it involves only one sampling, equation (9), and that this sampling relies upon a tensor product of the two measurement operators so that the sampling has access to both of the selected measurement angles. In an EPR experiment, however, there are two samplings, each having access to only the local measurement angle. The joint prediction therefore cannot be used for EPR and we seek a separated solution that properly represents the two private samplings.
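The single joint sampling of equation (9) and the correlation (10) can be sketched as follows; the angles and $N$ below are illustrative, and the sketch simply re-traces the steps above rather than the cited simulation code:

```python
import numpy as np

rng = np.random.default_rng(2)

def joint_sample(a, b, N):
    # Joint eigenvalue probabilities, eqs (7)-(8), from <AB> = -cos(a-b)
    E = -np.cos(a - b)
    p_same = (1 + E) / 4        # P(-1,-1) = P(1,1)
    p_diff = (1 - E) / 4        # P(-1,1) = P(1,-1)
    outcomes = np.array([(-1, -1), (1, 1), (-1, 1), (1, -1)])
    probs = [p_same, p_same, p_diff, p_diff]
    idx = rng.choice(4, size=N, p=probs)   # the single joint sampling, eq (9)
    return outcomes[idx, 0], outcomes[idx, 1]

a, b, N = 0.0, np.pi / 3, 10000
O_A, O_B = joint_sample(a, b, N)
corr = np.mean(O_A * O_B)       # eq (10): approaches -cos(a-b) = -0.5
```

Note that the single call to \texttt{rng.choice} is what makes this a joint sampling with access to both angles at once.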
\newpage
\centerline {Figure 1. Correlation resulting from quantum joint}
\centerline {measurement in a simulated EPR experiment}
\begin{center}\includegraphics[scale=0.3]{fig1.png}\end{center}
\section{Separating the quantum joint prediction using L{\"u}ders'\\rule}
\label{Separating the quantum joint prediction using L{\"u}ders' projection}
To separate the quantum joint solution we need to account for two samplings each restricted to knowledge of the local measurement angle. We can do this by considering two sequential measurements, specifically, a measurement at side A using the measurement angle $a$, followed by a measurement at side B using $b$. In the degenerate case of the measurements occurring at exactly the same time, the measurement order is chosen arbitrarily. Refer to Section \ref{The separated solution using L{\"u}ders' projection requires superluminal transmission of information; it cannot be used for EPR}
for further discussion of the relativistic aspects.
I now give the orthodox quantum solution for successive measurements in an EPR experiment. Recall the input singlet state and measurement operators previously described in equations (1), (2), and (3). For the first measurement A, the expectation value $\langle A \rangle$ is given by:
\begin{equation}
\langle A \rangle = Tr([M_A^{2x2}{\otimes}\textbf{1}^{2x2}][\rho_{singlet}^{4x4}]) = 0
\end{equation}
Because the expectation value is 0, the probabilities for our measurement results are given by:
\begin{equation}
P_{A = -1} = P_{A = 1} = 0.5
\end{equation}
Sampling the distribution $P_{i}$ now yields an outcome for the measurement at A:
\begin{equation}
Sample(P_{i}) \implies O_A \in \{1, -1\}
\end{equation}
Now apply the orthodox state projection postulate in the form of L{\"u}ders' rule \cite{Luders}. The input singlet state is rotationally invariant, so it can be expressed in the $a$ measurement basis as follows:
\begin{equation}
\psi_{singlet} = \frac{1}{\sqrt{2}}(\ket{-1_a,1_a} - \ket{1_a,-1_a})
\end{equation}
The $a$ subscripts denote the $a$ measurement basis. If the measurement at A produces a value of $-1_a$, then L{\"u}ders' rule gives the renormalized projected state as:
\begin{equation}
\psi_{L} = \ket{-1_a,1_a}
\end{equation}
This is separable so the projected B state is $\ket{1_a}$, and the corresponding $2x2$ density matrix is $\ket{1_a}\bra{1_a}$. Therefore the projected density matrix in the Z basis for the case of $O_A = -1$ is given by:
\begin{equation}
O_A = -1:~~\rho_B^{2x2} = \ket{1_a}\bra{1_a} = \begin{bmatrix} cos^2(a/2) & sin(a/2)cos(a/2) \\ sin(a/2)cos(a/2) & sin^2(a/2) \end{bmatrix}
\end{equation}
This projection is derived by straightforward application of the orthodox L{\"u}ders rule, also sometimes referred to as the generalized Born rule, especially in the field of quantum information. There is much more to say about state projection rules, but for now we follow the orthodox approach.
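The closed form (16) can be checked against a direct eigendecomposition; a small numerical sketch follows (the projector is insensitive to the eigenvector's sign ambiguity):

```python
import numpy as np

def M(theta):
    # Measurement operator along theta (eqs 2-3)
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def projected_rho_B(a):
    # Luders projection for O_A = -1: B collapses to |1_a><1_a|,
    # where |1_a> is the +1 eigenvector of M(a)
    w, v = np.linalg.eigh(M(a))
    ket = v[:, np.argmax(w)]
    return np.outer(ket, ket)

# Compare with the closed form of eq (16)
a = 0.7
c, s = np.cos(a / 2), np.sin(a / 2)
expected = np.array([[c * c, s * c],
                     [s * c, s * s]])
assert np.allclose(projected_rho_B(a), expected)
```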
Expand $\rho_B^{2x2}$ to $\rho_B^{4x4}$. This is done so that the input state for the second measurement can be assigned as the original $4x4$ singlet state when representing the case of no projection. The expanded density matrix is renormalized to preserve a unity trace.
\begin{equation}
O_A = -1:~~\rho_B^{4x4} = \frac{1}{2}(\textbf{1}^{2x2}{\otimes}\rho_B^{2x2})
\end{equation}
Expand the tensor product:
\small
\begin{equation}
\begin{multlined}
O_A = -1:~~\rho_B^{4x4} =\\
\\
\frac{1}{2}\begin{bmatrix} cos^2(a/2) & sin(a/2)cos(a/2) & 0 & 0\\ sin(a/2)cos(a/2) & sin^2(a/2) & 0 & 0 \\ 0 & 0 & cos^2(a/2) & sin(a/2)cos(a/2) \\ 0 & 0 & sin(a/2)cos(a/2) & sin^2(a/2)
\end{bmatrix}
\end{multlined}
\end{equation}
\normalsize
Similarly, the expression for the projected state when the measurement at side A yields a value of $1_a$ is given by:
\small
\begin{equation}
\begin{multlined}
O_A = 1:~~\rho_B^{4x4} =\\
\\
\frac{1}{2}\begin{bmatrix} sin^2(a/2) & -sin(a/2)cos(a/2) & 0 & 0\\ -sin(a/2)cos(a/2) & cos^2(a/2) & 0 & 0 \\ 0 & 0 & sin^2(a/2) & -sin(a/2)cos(a/2) \\ 0 & 0 & -sin(a/2)cos(a/2) & cos^2(a/2)
\end{bmatrix}
\end{multlined}
\end{equation}
\normalsize
The expectation value of the result at side B can now be given in terms of the projected state:
\begin{equation}
\langle B \rangle = Tr([\textbf{1}^{2x2}{\otimes}M_B^{2x2}][\rho_B^{4x4}])
\end{equation}
The probabilities for the B measurement result are given by:
\begin{equation}
P_{B = 1} = \frac{1 + \langle B \rangle}{2}
\end{equation}
\begin{equation}
P_{B = -1} = \frac{1 - \langle B \rangle}{2}
\end{equation}
Sampling the distribution $P_j$ now yields an outcome for the measurement at B:
\begin{equation}
Sample(P_{j}) \implies O_B \in \{1, -1\}
\end{equation}
For a large number of source emissions $N$ in an EPR experiment, collect the series of outcomes from each side and correlate them, as follows:
\begin{equation}
{\langle AB \rangle}_{exp} = \frac{1}{N} \sum_{n=1}^{N} {O_A}_n{O_B}_n \approx -cos(a-b)
\end{equation}
Again, the resulting experimental correlation yields the predicted value $-cos(a-b)$. This result is demonstrated by a simulation model that reproduces all the steps given above (with $N = 10000$). The results of the simulation are equivalent to those shown in Figure 1, with differences due only to stochasticity. The simulation source code is available on-line \cite{Graft03}. The form of projection to be used is selectable, that is, between L{\"u}ders projection, Von Neumann projection, and no projection (see below).
The separated solution successfully combines two private samplings to deliver the EPR correlations. The solution appeals to the orthodox L{\"u}ders rule of state projection to generate the projected state. However, all is not well.
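The separated calculation can be sketched end to end. Per trial: sample side A with probabilities (12), project B's state per (16)--(19), compute $\langle B \rangle$ (the 2x2 form of (20)), and sample side B per (21)--(23). Angles and $N$ below are illustrative; note that the projection step hands side B both $a$ and $O_A$, which is precisely the point taken up in the next section:

```python
import numpy as np

rng = np.random.default_rng(3)

def M(theta):
    # Measurement operator along theta (eqs 2-3)
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def separated_trial(a, b):
    # Measurement at A: <A> = 0, so P(A = +1) = P(A = -1) = 1/2 (eqs 11-13)
    O_A = 1 if rng.random() < 0.5 else -1
    # Luders projection: B collapses to the eigenvector of M(a) opposite to O_A
    w, v = np.linalg.eigh(M(a))
    ket = v[:, np.argmax(w)] if O_A == -1 else v[:, np.argmin(w)]
    rho_B = np.outer(ket, ket)                 # 2x2 form of eqs (16)-(19)
    # Measurement at B: <B> = Tr[M_B rho_B], then sample (eqs 20-23)
    EB = np.trace(M(b) @ rho_B)
    p_plus = min(max((1 + EB) / 2, 0.0), 1.0)  # clip float round-off
    O_B = 1 if rng.random() < p_plus else -1
    return O_A, O_B

a, b, N = 0.0, np.pi / 3, 10000
pairs = np.array([separated_trial(a, b) for _ in range(N)])
corr = np.mean(pairs[:, 0] * pairs[:, 1])      # eq (24): approaches -0.5
```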
\section{The separated solution using L{\"u}ders' projection requires superluminal transmission of information; it cannot be used for EPR}
\label{The separated solution using L{\"u}ders' projection requires superluminal transmission of information; it cannot be used for EPR}
Here I consider only the case where the two sides share an inertial frame and make their measurements in that inertial frame. It would be possible to consider relatively moving frames for the two measurement stations, however, nobody claims that quantum entanglement requires relatively moving frames, so allowing for it is an unnecessary complication. It may be interesting to go further and analyze the effects of relativistic motion, but it is not directly germane to the argument here.
In our context, there are always three operational domains relating the measurement times at the two sides (taking the measurement time to be the time of its completion if it is extended in time). In the first domain, measurement A precedes measurement B in the shared frame. In the second domain, measurement B precedes measurement A in the shared frame. In these two domains denote the first measurement by $t_0$ and the second by $t_1$. In the third domain, the measurements are simultaneous, to a certain experimental precision. In this domain, for the purposes of calculation, arbitrarily designate one side's measurement time as $t_0$ and the other's as $t_1$. It is interesting that quantum mechanics has nothing to say about which side projects the other in the case of simultaneous measurements, and I accordingly do not speculate on how the symmetry is broken physically.
In the following material I refer to the measurement times $t_0$ and $t_1$ defined above. It is important to realize that a case of simultaneous measurements (as defined) is not necessarily a case of joint measurement yielding entangled statistics. It is possible to perform marginal measurements simultaneously. This is commonly overlooked.
\newpage
Orthodox quantum theory employing L{\"u}ders projection, as demonstrated in Section \ref{Separating the quantum joint prediction using L{\"u}ders' projection} for the case of EPR with separation, entails that the measurement at side A produces a projected state at side B. The projected state is one of the two states given in equations (18) and (19). The projected state must be locally present at side B, to be available for the local measurement at side B.
It is obvious from perusal of equations (18) and (19) that the projected state contains information about both the measurement angle $a$ at side A ($a$ appears directly in the density matrices) and information about the outcome (the outcome determines the signs in the entries of the density matrices). Therefore, in terminology more typical for philosophical discussions of quantum foundations, both parameter independence and outcome independence are violated by the separated quantum solution for EPR. Earlier in the history of the EPR paradox it was thought that violation of either parameter \textit{or} outcome independence could suffice to account for the form of the EPR correlations, and much ink was spilled on debating which of the two was operative in EPR. However, it is now clear, not only from the derivation of Section \ref{Separating the quantum joint prediction using L{\"u}ders' projection}, but from other analyses \cite{Graft00,Graft01,Pawlowski}, that \textit{both} parameter and outcome independence must be violated to account for the EPR correlations.
Transmission of information is not problematic as long as the transmission is not required to occur at superluminal speeds; superluminal transmission would violate special relativity, something that theorists must eschew to maintain a consistent axiom set for physics. However, it is easy to see that the quantum separated solution using L{\"u}ders' rule requires superluminal transmission in the paradigmatic EPR experiment with a large separation between the measurement sides.
Suppose that the measurement sides A and B are separated in space by a very large amount. We can arbitrarily suppose that the separation is one light year (any large separation suffices). Consider without loss of generality the case where side A's measurement time is designated as $t_0$ (we could think of side B measuring at time $t_0$ instead; the general argument is not changed). Suppose now that at time $t_0$ side A randomly chooses a measurement angle and performs the measurement, producing an outcome. According to special relativity, due to the physical separation, the information about the measurement angle and the outcome cannot appear at side B until at least a year has elapsed. However, the measurement at side B can be performed at an arbitrarily short time $t_1$ after the side A measurement at time $t_0$. The timing is determined by the location of the pair emission source between the two sides; we can place it so that $t_1 - t_0$ is arbitrarily small.
Now, if the EPR correlations are to be obtained in such a scenario, as the current consensus believes, then parameter and outcome information become available in the projected state localized to side B within the arbitrarily small inter-measurement time, which is much shorter than the required time (one year) to physically transmit the information. Therefore, the general separated solution using L{\"u}ders' rule requires superluminal transmission of information, violating special relativity.
One could argue that, although the information is indeed present at side B at time $t_1$, there is no way for side B to extract it (per the no-signaling theorem), but this is irrelevant \cite{no-signaling}, and I emphasize that the information \textit{must have been present} at side B at time $t_1$.
While special relativity prevents us from applying L{\"u}ders projection to EPR, there are other physical scenarios where it may be applied, and so L{\"u}ders projection is not prohibited outright in all physical scenarios. Several scenarios can be conceived where L{\"u}ders projection may be validly applied (the list is not intended to be exhaustive):
\newpage
\begin{itemize}
\item Separable source states. Information transfer is not needed for separable source states. In EPR, the source state is not separable.
\item Nondegenerate spectra. L{\"u}ders' rule extends Von Neumann's projection rule to account for degenerate spectra. If there is no degeneracy, then L{\"u}ders' rule collapses to Von Neumann's rule. In EPR, we are dealing with degenerate spectra.
\item Decomposition/analysis of a true joint measurement. If we have a scenario such as the one in Section \ref{The quantum joint prediction for an EPR experiment}, we could apply an alternative analysis in terms of L{\"u}ders' rule, although the usefulness of such an analysis is not clear, as we should prefer to faithfully represent the samplings defined by the physics of the experiment. In EPR, we do not have a true joint measurement.
\item Cases where the physical arrangement does not require superluminal speeds. For example, if our measurement stations are very close to one another, we could consistently assert that subluminal information transmission occurs, in which case we should be able to identify a physical mechanism for the transmission. In the general case of EPR, however, we deal with large separations, such that if transmission occurs, it must be superluminal.
\end{itemize}
Before considering alternative quantum predictions for EPR that satisfy special relativity, I point out that it is possible to empirically test whether L{\"u}ders projection actually occurs in an experimental arrangement \cite{Adler,Hegerfeldt}. Hegerfeldt and Sala Mayato \cite{Hegerfeldt} argue that different forms of projection ``may appear naturally, depending on the realization of a particular measurement apparatus'' and ``Their applicability depends on the circumstances, i.e., the details of the measurement apparatus.'' The fact that the applicable form of projection has never been determined experimentally for EPR is surprising and can be viewed as a disquieting state of affairs in quantum foundations research.
\section{Alternative quantum predictions for correlation in an EPR experiment}
\label{Alternative quantum predictions for correlation in an EPR experiment}
\subsection{Von Neumann projection}
\label{Von Neumann projection}
As shown, L{\"u}ders' rule cannot be applied to EPR experiments, leading us to search for alternatives that can yield a correct separated solution. Von Neumann's projection rule produces a mixture upon measurement \cite{VNnote}, while L{\"u}ders' rule produces a pure state. In general, the pure state may be a superposition, but in EPR the L{\"u}ders-projected Hilbert subspace corresponds to a single eigenvalue. In our context, Von Neumann projection produces a mixture of the states $\ket{1_a}$ and $\ket{-1_a}$. The separated solution then proceeds as follows.
Recall the input singlet state and measurement operators previously described in equations (1), (2), and (3). For the first measurement A, the expectation value $\langle A \rangle$ is given (as shown before in Section \ref{The quantum joint prediction for an EPR experiment}) by:
\begin{equation}
\langle A \rangle = \mathrm{Tr}([M_A^{2\times2}{\otimes}\textbf{1}^{2\times2}][\rho_{singlet}^{4\times4}]) = 0
\end{equation}
Because the expectation value is 0, the probabilities for our measurement results are given by:
\begin{equation}
P_{A = -1} = P_{A = 1} = 0.5
\end{equation}
Sampling the distribution $P_i$ now yields an outcome for the measurement at A:
\begin{equation}
Sample(P_{i}) \implies O_A \in \{1, -1\}
\end{equation}
The $4\times4$ density matrix for the projected state at B applying Von Neumann projection is a mixture of the two possible L{\"u}ders-projected states given by equations (18) and (19):
\begin{equation}
\rho_B^{4x4} = \frac{1}{4}\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1
\end{bmatrix}
\end{equation}
Von Neumann projection in the case of EPR creates a mixture equivalent to a normalized identity matrix.
The expectation value of the result at side B can now be given in terms of the projected state:
\begin{equation}
\langle B \rangle = \mathrm{Tr}([\textbf{1}^{2\times2}{\otimes}M_B^{2\times2}][\rho_B^{4\times4}]) = 0
\end{equation}
The probabilities for the B measurement result are given by:
\begin{equation}
P_{B = -1} = P_{B = 1} = 0.5
\end{equation}
Sampling the distribution $P_j$ yields an outcome for the measurement at B:
\begin{equation}
Sample(P_{j}) \implies O_B \in \{1, -1\}
\end{equation}
For a large number of source emissions $N$ in an EPR experiment, collect the series of outcomes from each side and correlate them, as follows:
\begin{equation}
{\langle AB \rangle}_{exp} = \frac{1}{N} \sum_{n=1}^{N} {O_A}_n{O_B}_n \approx 0
\end{equation}
The prediction applying Von Neumann projection is that no correlation will be observed. This calculation is confirmed by a simulation whose results are shown in Figure 2. The source code of the simulation is available on-line \cite{Graft03}. The form of projection to be used is selectable, i.e., between L{\"u}ders projection, Von Neumann projection, and no projection.
At first it seems that we can say that no information (specifically, information about the measurement angle and outcome) is transferred by Von Neumann projection, and that therefore special relativity cannot be violated even for very large separations. However, one can see that at least one bit of information must be transferred to signal projection of the singlet state to the normalized $4\times4$ identity matrix, as required. Operationally, side B cannot distinguish a singlet state from a normalized $4\times4$ identity matrix by its measurement (satisfying no-signaling), so we might be inclined to dismiss this as inconsequential, and console ourselves with the fact that Von Neumann projection nevertheless gives the correct prediction for EPR (no correlation). Theoretically, however, even one bit of information transferred superluminally forces us to reject Von Neumann projection for EPR. The only correct solution for EPR must exclude all information transfer, that is, it must exclude projection completely.
\bigskip
\centerline {Figure 2. Correlation resulting from quantum separated measurement}
\centerline {using Von Neumann projection in a simulated EPR experiment}
\begin{center}\includegraphics[scale=0.3]{fig2.png}\end{center}
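For concreteness, both traces can be checked directly with a short script. This is a sketch only: in place of equations (1)--(3), which fix the measurement direction, it assumes both sides measure spin along the $z$-axis, so $M_A = M_B = \sigma_z$; all function names are mine.

```python
# Sketch: check <A> = Tr[(M_A x 1)(rho_singlet)] = 0 on the singlet and
# <B> = Tr[(1 x M_B)(rho_B)] = 0 on the Von Neumann mixture rho_B = 1/4.
# Assumption (mine): both sides measure spin along z, so M_A = M_B = sigma_z.

def kron(A, B):
    """Kronecker product of square matrices given as lists of lists."""
    m = len(B)
    size = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(size)]
            for i in range(size)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

sigma_z = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

# Singlet |psi> = (|01> - |10>)/sqrt(2) and its density matrix |psi><psi|.
r = 2 ** -0.5
psi = [0.0, r, -r, 0.0]
rho_singlet = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

# Von Neumann projection of the singlet: the maximally mixed state 1/4.
rho_B = [[0.25 if i == j else 0.0 for j in range(4)] for i in range(4)]

exp_A = trace(matmul(kron(sigma_z, I2), rho_singlet))  # <A> on the singlet
exp_B = trace(matmul(kron(I2, sigma_z), rho_B))        # <B> on the mixture
```

Both expectation values vanish, giving $P_{A=\pm1} = P_{B=\pm1} = 0.5$, in agreement with the equations above.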
\subsection{Null projection}
When all information transfer is excluded by the EPR conditions, projection of any form cannot occur. To calculate this case, we need only suppose that side B measures the original singlet state that it receives from the source. Following the previous calculations, the steps are obvious and are presented without further comment.
\bigskip
\begin{equation}
\langle A \rangle = \mathrm{Tr}([M_A^{2\times2}{\otimes}\textbf{1}^{2\times2}][\rho_{singlet}^{4\times4}]) = 0
\end{equation}
\begin{equation}
P_{A = -1} = P_{A = 1} = 0.5
\end{equation}
\begin{equation}
Sample(P_{i}) \implies O_A \in \{1, -1\}
\end{equation}
\begin{equation}
\langle B \rangle = \mathrm{Tr}([\textbf{1}^{2\times2}{\otimes}M_B^{2\times2}][\rho_{singlet}^{4\times4}]) = 0
\end{equation}
\begin{equation}
P_{B = -1} = P_{B = 1} = 0.5
\end{equation}
\begin{equation}
Sample(P_{j}) \implies O_B \in \{1, -1\}
\end{equation}
\begin{equation}
{\langle AB \rangle}_{exp} = \frac{1}{N} \sum_{n=1}^{N} {O_A}_n{O_B}_n \approx 0
\end{equation}
Again, the calculation predicts that no correlation will be observed. The source code of a simulation demonstrating this is available on-line \cite{Graft03}, with the form of projection selectable among L{\"u}ders projection, Von Neumann projection, and no projection. The result is equivalent to that shown in Figure 2, up to stochastic variation.
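A minimal reconstruction of such a selectable-projection simulation is sketched below. This is my own illustrative sketch, not the cited code \cite{Graft03}; it assumes equal analyzer settings on both sides, for which L{\"u}ders projection yields perfect anticorrelation, while Von Neumann and null projection yield independent 50/50 outcomes at side B, following the separated calculations above.

```python
import random

def epr_run(projection, N=10000, seed=1):
    """Correlate outcomes at A and B for one choice of projection rule.

    Assumes equal analyzer settings at both stations (illustrative only):
    - 'luders':     B's outcome is the singlet's perfect anticorrelation of A's.
    - 'vonneumann': B measures the maximally mixed state 1/4, independently.
    - 'none':       B measures the unprojected singlet; per the separated
                    calculation its marginal is likewise 50/50, independent.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(N):
        o_a = rng.choice((1, -1))          # P(A = 1) = P(A = -1) = 0.5
        if projection == 'luders':
            o_b = -o_a                     # single-eigenvalue subspace
        else:                              # 'vonneumann' or 'none'
            o_b = rng.choice((1, -1))      # P(B = 1) = P(B = -1) = 0.5
        total += o_a * o_b
    return total / N
```

With $N = 10{,}000$ trials, the L{\"u}ders mode returns exactly $-1$, while the other two modes return values near $0$, as in Figure 2.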
\section{Discussion}
\label{Discussion}
L{\"u}ders' rule was developed for the treatment of ensembles \cite{Luders}, so its application to individual projection events is already problematic. Furthermore, when L{\"u}ders' rule is blindly applied to physical scenarios for which it is not valid, such as EPR, nonlocality is in effect postulated by fiat; in reality, L{\"u}ders' rule is not only incorrect for EPR but is not needed to account for correctly designed and analyzed experiments. Prediction using L{\"u}ders' rule is not a unique, necessary quantum mechanical calculation for EPR. Alternative quantum mechanical calculations giving different results are available and required.
A short recounting of some previous work on the topic of this paper is valuable to acknowledge important prior work and to locate my arguments in their proper historical context.
In one of the first reactions to the work of Einstein, Podolsky, and Rosen \cite{Einstein00}, Henry Margenau in 1935 famously rejected the projection postulate in any form for all physical scenarios \cite{Margenau}. He wrote: ``...by the removal of a single postulate commonly accepted, the real difficulty in Einstein-Podolsky-Rosen's conclusion disappears.'' Margenau argued that no significant quantum calculation requires the projection postulate. Philip Pearle in 1967 also weighed in against the projection postulate \cite{Pearle}, proposing an alternate interpretation of quantum mechanics lacking projection. Pearle did not, however, address the treatment of the EPR paradox to be expected from his alternate interpretation of quantum mechanics.
Both Margenau and Pearle perhaps go too far in prohibiting projection unconditionally, because we can conceive of physical scenarios where projection may occur. As I have shown here, the correct procedure is to employ the projection rule (including the option of no projection) applicable to the physics of the experiment. For example, a true joint sampling could use L{\"u}ders' rule, while a paradigmatic EPR experiment could not. Importantly, correct projection rules for a given arrangement can be determined empirically.
A recent author who has addressed our topic extensively is Khrennikov \cite{Khrennikov_0,Khrennikov_1,Khrennikov_2,Khrennikov_3,Khrennikov_4}. In \cite{Khrennikov_0} and \cite{Khrennikov_1} Khrennikov argues that Von Neumann projection is crucially different from L{\"u}ders projection and that Von Neumann projection must be used for EPR, dissolving any paradox. Naturally, I am sympathetic to this view, with the caveat that I believe null projection to be more appropriate for EPR. Khrennikov, however, does not address the conflict of this view with the accepted interpretation of the experimental results, whereas I have argued that the experiments, when correctly designed and analyzed, do not show nonlocal correlations.
In \cite{Khrennikov_2} Khrennikov again draws the distinction between Von Neumann and L{\"u}ders projection and claims that neither EPR nor quantum information ``work properly'' when only Von Neumann projection is used. Seemingly contradicting himself, Khrennikov then argues that, in fact, it is a theorem that Von Neumann projection implies L{\"u}ders projection, so there is no problem for the research program of quantum information. Ambiguity continues in \cite{Khrennikov_3}, and as if to correct the contradiction, he cautions that ``conditions of this theorem are the subject of further analysis.'' In \cite{Khrennikov_4} Khrennikov accepts that Von Neumann projection does not entail L{\"u}ders projection.
Khrennikov has done important work in drawing attention to the role of quantum state projection and to the distinction between Von Neumann and L{\"u}ders projection, and in arguing that L{\"u}ders projection is not correct for EPR. Indeed, L{\"u}ders projection is not correct for EPR; however, whereas Khrennikov argues that if Von Neumann projection is applied, ``no trace of quantum nonlocality would be found'', I argue here that even Von Neumann projection requires a nonlocal transmission of one bit of information (to signal projection of the singlet state to a mixture), and that null projection is the fully correct way to analyze EPR. I also go much further in asserting that nonlocal correlations cannot be obtained in EPR experiments, and that the experiments purporting to show nonlocal correlations are incorrectly designed, analyzed, and/or interpreted.
The correct application to EPR of either null projection or Von Neumann projection rather than L{\"u}ders projection blocks quantum correlations and nonlocality, thereby dissolving the paradox. Quantum nonlocality is seen as a mistake. The misapplication of L{\"u}ders projection is seen as the source of apparent nonlocality. We must be careful to apply L{\"u}ders' rule only to appropriate physical scenarios.
The dissolution of the EPR paradox developed here not only removes this terrible paradox that has troubled us for so long, but also restores the consistency of our axiom set for physics \cite{Graft00}, allowing for consistent and coherent discourse.
\renewcommand\refname{References and Notes}
In many situations, it is desirable to construct an estimator for
$p$ with guaranteed absolute precision and confidence level. For
this purpose, we have
\begin{theorem} \label{coverage_abs} Let $0 < \varepsilon < \frac{1}{2}, \; 0 < \delta < 1, \;
\zeta > 0$ and $\rho > 0$. Let $n_1 < n_2 < \cdots < n_s$ be the ascending
arrangement of all distinct elements of {\small $\left \{ \left \lceil \left
( \frac{24 \varepsilon - 16 \varepsilon^2}{9} \right )^{ 1 - \frac{i}{\tau} } \frac{\ln
\frac{1}{\zeta \delta}}{2 \varepsilon^2} \right \rceil : i = 0, 1, \cdots, \tau \right \}$}
with {\small $\tau = \left \lceil \frac{ \ln \frac{9}{24 \varepsilon - 16 \varepsilon^2} } {
\ln (1 + \rho) } \right \rceil$.} For $\ell = 1, \cdots, s$, define $K_\ell
= \sum_{i = 1}^{n_\ell} X_i, \; \widehat{\boldsymbol{p}}_\ell = \frac{ K_\ell
}{n_\ell}$ and $\boldsymbol{D}_\ell$ such that $\boldsymbol{D}_\ell = 1$ if {\small
$\left ( \left | \widehat{\boldsymbol{p}}_\ell - \frac{1}{2} \right | - \frac{2 \varepsilon }{3} \right
)^2 \geq \frac{1}{4} + \frac{ \varepsilon^2 n_\ell } {2 \ln (\zeta \delta) }$}; and
$\boldsymbol{D}_\ell = 0$ otherwise. Suppose the stopping rule is that
sampling is continued until $\boldsymbol{D}_\ell = 1$ for some $\ell \in
\{1, \cdots, s\}$. Define $\boldsymbol{\widehat{p}} = \frac{\sum_{i=1}^{\mathbf{n}}
X_i}{\mathbf{n}}$ where $\mathbf{n}$ is the sample size when the
sampling is terminated. Define {\small \[ \mathscr{Q}^+ =
\bigcup_{\ell = 1}^s \left \{ \frac{k}{n_\ell} + \varepsilon \in \left (0,
\frac{1}{2} \right ) : k \in \mathbb{Z} \right \} \bigcup \left \{ \frac{1}{2} \right \},
\qquad \mathscr{Q}^- = \bigcup_{\ell = 1}^s \left \{ \frac{k}{n_\ell} - \varepsilon
\in \left (0, \frac{1}{2} \right ) : k \in \mathbb{Z} \right \} \bigcup \left \{
\frac{1}{2} \right \}.
\]}
Then, a sufficient condition to guarantee $\Pr \left \{ \left |
\boldsymbol{\widehat{p}} - p \right | < \varepsilon \mid p \right \}
> 1 - \delta$ for any $p \in (0, 1)$ is that {\small \begin{eqnarray} & &
\sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \geq p + \varepsilon, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in
\mathscr{Q}^-, \label{2D1}\\
& & \sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \leq p - \varepsilon, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}^+ \label{2D2}
\end{eqnarray}}
where both (\ref{2D1}) and (\ref{2D2}) are satisfied if $0 < \zeta <
\frac{1}{2(\tau + 1)}$.
\end{theorem}
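To illustrate the scheme, the sample-size sequence $n_1 < \cdots < n_s$ and the decision variable $\boldsymbol{D}_\ell$ of Theorem \ref{coverage_abs} can be computed as follows (a sketch; the function and variable names are ours):

```python
import math

def abs_error_schedule(eps, delta, zeta, rho):
    """Distinct sample sizes n_1 < ... < n_s of Theorem `coverage_abs`."""
    c = (24 * eps - 16 * eps ** 2) / 9          # < 1 for 0 < eps < 1/2
    tau = math.ceil(math.log(1 / c) / math.log(1 + rho))
    base = math.log(1 / (zeta * delta)) / (2 * eps ** 2)
    return sorted({math.ceil(c ** (1 - i / tau) * base)
                   for i in range(tau + 1)})

def stop(p_hat, n, eps, delta, zeta):
    """Decision variable D_l: True when the stopping inequality holds."""
    lhs = (abs(p_hat - 0.5) - 2 * eps / 3) ** 2
    rhs = 0.25 + eps ** 2 * n / (2 * math.log(zeta * delta))  # log < 0
    return lhs >= rhs
```

At the final sample size $n_s = \left\lceil \ln\frac{1}{\zeta\delta}/(2\varepsilon^2) \right\rceil$ the right-hand side of the stopping inequality is nonpositive, so the procedure always terminates by stage $s$.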
\subsection{Control of Absolute and Relative Errors}
To construct an estimator satisfying a mixed criterion in terms of
absolute and relative errors with a prescribed confidence level, we
have
\begin{theorem} \label{coverage_mixed} Let $0 < \delta < 1, \; \zeta > 0$ and $\rho >
0$. Let $\varepsilon_a$ and $\varepsilon_r$ be positive numbers such that $0 <
\varepsilon_a < \frac{3}{8}$ and {\small $\frac{6 \varepsilon_a}{3 - 2 \varepsilon_a } < \varepsilon_r
< 1$}. Let $n_1 < n_2 < \cdots < n_s$ be the ascending arrangement of
all distinct elements of {\small $\left \{ \left \lceil \left [ \frac{3}{2} \left
( \frac{1}{\varepsilon_a} - \frac{1}{\varepsilon_r} - \frac{1}{3} \right ) \right ]^{ \frac{i } {
\tau } } \frac{ 4(3 + \varepsilon_r) } {9 \varepsilon_r } \ln \frac{1} { \zeta \delta } \right
\rceil: i = 0, 1, \cdots, \tau \right \}$ } with {\small $\tau = \left \lceil \frac{
\ln \left [ \frac{3}{2} \left ( \frac{1}{\varepsilon_a} - \frac{1}{\varepsilon_r} - \frac{1}{3}
\right ) \right ] } { \ln (1 + \rho) } \right \rceil$. } For $\ell = 1, \cdots,
s$, define {\small $K_\ell = \sum_{i = 1}^{n_\ell} X_i, \;
\widehat{\boldsymbol{p}}_\ell = \frac{K_\ell}{n_\ell},$} {\small
\[ \boldsymbol{D}_\ell = \begin{cases} 0 & \mathrm{for} \; \frac{1}{2} - \frac{2}{3} \varepsilon_a -
\sqrt{ \frac{1}{4} + \frac{ n_\ell \varepsilon_a^2 } {2 \ln (\zeta \delta) } } <
\widehat{\boldsymbol{p}}_\ell < \frac{ 6(1 - \varepsilon_r) (3 - \varepsilon_r) \ln (\zeta
\delta) } { 2 (3 - \varepsilon_r)^2 \ln (\zeta \delta) - 9 n_\ell \varepsilon_r^2} \; \mathrm{or}\\
& \quad \;\; \frac{1}{2} + \frac{2}{3} \varepsilon_a - \sqrt{ \frac{1}{4} + \frac{
n_\ell \varepsilon_a^2 } {2 \ln (\zeta \delta) } } < \widehat{\boldsymbol{p}}_\ell < \frac{ 6(1
+ \varepsilon_r) (3 + \varepsilon_r) \ln (\zeta \delta) } { 2 (3 +
\varepsilon_r)^2 \ln (\zeta \delta) - 9 n_\ell \varepsilon_r^2},\\
1 & \mathrm{else}
\end{cases}
\]}
for $\ell = 1, \cdots, s - 1$ and $\boldsymbol{D}_s = 1$. Suppose the stopping
rule is that sampling is continued until {\small $\boldsymbol{D}_\ell = 1$}
for some $\ell \in \{1, \cdots, s\}$. Let {\small $\widehat{\boldsymbol{p}} =
\frac{\sum_{i=1}^{\mathbf{n}} X_i} {\mathbf{n}}$} where $\mathbf{n}$ is
the sample size when the sampling is terminated. Define {\small
$p^\star = \frac{\varepsilon_a}{\varepsilon_r}$} and {\small
\[ \mathscr{Q}_a^+ = \bigcup_{\ell = 1}^s \left \{ \frac{k}{n_\ell} + \varepsilon_a
\in \left (0, p^\star \right ) : k \in \mathbb{Z} \right \} \cup \left \{ p^\star
\right \}, \qquad \mathscr{Q}_a^- = \bigcup_{\ell = 1}^s \left \{
\frac{k}{n_\ell} - \varepsilon_a \in \left (0, p^\star \right ) : k \in \mathbb{Z} \right
\} \cup \left \{ p^\star \right \},
\]
\[
\qquad \mathscr{Q}_r^+ = \bigcup_{\ell = 1}^s \left \{ \frac{k}{n_\ell (1 +
\varepsilon_r)} \in \left (p^\star, 1 \right ) : k \in \mathbb{Z} \right \}, \quad \qquad
\quad \mathscr{Q}_r^- = \bigcup_{\ell = 1}^s \left \{ \frac{k}{n_\ell (1 -
\varepsilon_r)} \in \left (p^\star, 1 \right ) : k \in \mathbb{Z} \right \}.
\]}
Then, {\small $\Pr \left \{ \left | \widehat{\boldsymbol{p}} - p
\right | < \varepsilon_a \; \mathrm{or} \; \left |
\frac{\widehat{\boldsymbol{p}} - p } {p }
\right | < \varepsilon_r \mid p \right \} > 1 - \delta$
}
for any $p \in (0, 1)$ provided that {\small \begin{eqnarray} & & \sum_{\ell
= 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \geq p + \varepsilon_a, \; \boldsymbol{D}_{\ell - 1}
= 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2} \qquad \forall p \in
\mathscr{Q}_a^-, \label{mix1}\\
& & \sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \leq p - \varepsilon_a, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_a^+, \label{mix2}\\
& & \sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \geq p (1 + \varepsilon_r),
\; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_r^+, \label{mix3}\\
& & \sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \leq p (1 -
\varepsilon_r), \; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} <
\frac{\delta}{2} \qquad \forall p \in \mathscr{Q}_r^- \label{mix4}
\end{eqnarray}} where these conditions are satisfied if $0 < \zeta < \frac{1}{2(\tau + 1)}$. \end{theorem}
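The sample-size sequence of Theorem \ref{coverage_mixed} can be computed similarly (a sketch; names are ours):

```python
import math

def mixed_schedule(eps_a, eps_r, delta, zeta, rho):
    """Distinct sample sizes of Theorem `coverage_mixed`.

    Requires 0 < eps_a < 3/8 and 6*eps_a/(3 - 2*eps_a) < eps_r < 1,
    which makes the ratio c below exceed 1.
    """
    c = 1.5 * (1 / eps_a - 1 / eps_r - 1 / 3)
    tau = math.ceil(math.log(c) / math.log(1 + rho))
    base = 4 * (3 + eps_r) / (9 * eps_r) * math.log(1 / (zeta * delta))
    return sorted({math.ceil(c ** (i / tau) * base) for i in range(tau + 1)})
```

Here the sequence grows geometrically by a factor of roughly $(1+\rho)$ per stage, from $n_1 \approx \frac{4(3+\varepsilon_r)}{9\varepsilon_r}\ln\frac{1}{\zeta\delta}$ up to about $c$ times that value.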
\subsection{Control of Relative Error}
In many situations, it is desirable to design a sampling scheme to
estimate $p$ such that the estimator satisfies a relative error
criterion with a prescribed confidence level. By virtue of the
function {\small \[ g(\varepsilon, \gamma) = 1 - \sum_{i= 0}^{\gamma - 1}
\frac{1}{i!} \left ( \frac{\gamma}{ 1 + \varepsilon} \right )^i \exp \left ( - \frac{\gamma}{ 1 +
\varepsilon} \right ) + \sum_{i= 0}^{\gamma - 1} \frac{1}{i!} \left ( \frac{\gamma}{ 1 -
\varepsilon} \right )^i \exp \left ( - \frac{\gamma}{ 1 - \varepsilon} \right ),
\]}
we have developed a simple sampling scheme as described by the
following theorem.
\begin{theorem} \label{coverage_rev_a} Let $0 < \varepsilon < 1, \; 0 < \delta < 1, \; \zeta >
0$ and $\rho > 0$. Let $\gamma_1 < \gamma_2 < \cdots < \gamma_s$ be the ascending
arrangement of all distinct elements of {\small $\left \{ \left \lceil \left
[ \frac{3}{2} \left ( \frac{1}{\varepsilon} + 1 \right ) \right ]^{ \frac{i} { \tau } } \frac{
4(3 + \varepsilon) } {9 \varepsilon } \ln \frac{1} { \zeta \delta } \right \rceil: i = 0, 1,
\cdots, \tau \right \}$ } with {\small $\tau = \left \lceil \frac{ \ln \left [
\frac{3}{2} \left ( \frac{1}{\varepsilon} + 1 \right ) \right ] } { \ln (1 + \rho) } \right
\rceil$.} Let $\widehat{\boldsymbol{p}}_\ell = \frac{ \sum_{i = 1}^{\mathbf{n}_\ell}
X_i } { \mathbf{n}_\ell }$ where $\mathbf{n}_\ell$ is the minimum
number of samples such that $\sum_{i = 1}^{\mathbf{n}_\ell} X_i =
\gamma_\ell$. For $\ell = 1, \cdots, s$, define $\boldsymbol{D}_\ell$ such that
$\boldsymbol{D}_\ell = 1$ if {\small $\widehat{\boldsymbol{p}}_\ell \geq 1 + \frac{ 2
\varepsilon}{3 + \varepsilon} + \frac{ 9 \varepsilon^2 \gamma_\ell } { 2 (3 + \varepsilon)^2 \ln (\zeta
\delta)}$}; and $\boldsymbol{D}_\ell = 0$ otherwise. Suppose the stopping rule
is that sampling is continued until $\boldsymbol{D}_\ell = 1$ for some $\ell
\in \{1, \cdots, s\}$. Define estimator $\widehat{\boldsymbol{p}} =
\frac{\sum_{i=1}^{\mathbf{n}} X_i}{\mathbf{n}}$ where $\mathbf{n}$ is
the sample size when the sampling is terminated. Then, {\small $\Pr
\left \{ \left | \frac{ \widehat{\boldsymbol{p}} - p } { p } \right | \leq \varepsilon \mid p \right
\} \geq 1 - \delta$} for any $p \in (0, 1)$ provided that $\zeta > 0$ is
sufficiently small to guarantee $g(\varepsilon, \gamma_s) < \delta$ and
{\small \begin{eqnarray} & & \ln ( \zeta \delta )
< \left [ \frac{ \left ( 1 + \varepsilon + \sqrt{ 1 + 4 \varepsilon + \varepsilon^2 } \right )^2}
{ 4 \varepsilon^2 } + \frac{1}{2} \right ] \left [ \frac{\varepsilon}{ 1 + \varepsilon} - \ln (1 +
\varepsilon) \right ], \label{cona}\\
& & \sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \leq (1 - \varepsilon) p, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} \leq \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_r^-, \label{rev1}\\
& & \sum_{\ell = 1}^s \Pr \{ \widehat{\boldsymbol{p}}_\ell \geq (1 + \varepsilon) p, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} \leq \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_r^+ \label{rev2} \end{eqnarray}} where {\small
$\mathscr{Q}_r^+ = \bigcup_{\ell = 1}^s \left \{\frac{\gamma_\ell}{m (1 + \varepsilon)
} \in (p^*, 1) : m \in \mathbb{N} \right \} $} and {\small $\mathscr{Q}_r^- =
\bigcup_{\ell = 1}^s \left \{\frac{\gamma_\ell}{m (1 - \varepsilon) } \in (p^*, 1)
: m \in \mathbb{N} \right \}$} with $p^* \in (0, z_{s-1})$ denoting the
unique number satisfying \[ g(\varepsilon, \gamma_s) + \sum_{\ell = 1}^{s -
1} \exp \left ( \frac{\gamma_\ell}{z_\ell} \frac{ (p^* - z_\ell)^2 } {2 \left (
\frac{2 p^*}{3} + \frac{z_\ell}{3} \right ) \left ( \frac{2 p^*}{3} +
\frac{z_\ell}{3} - 1 \right ) } \right ) = \delta \]
where {\small $z_\ell = 1 + \frac{ 2 \varepsilon}{3 + \varepsilon} + \frac{ 9 \varepsilon^2 \gamma_\ell } { 2 (3
+ \varepsilon)^2 \ln (\zeta \delta)}$} for $\ell = 1, \cdots, s - 1$.
\end{theorem}
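The function $g(\varepsilon, \gamma)$ is a two-sided Poisson tail probability; for large $\gamma$ it should be evaluated in log-space to avoid underflow of $e^{-\gamma/(1\pm\varepsilon)}$ (a sketch; names are ours):

```python
import math

def poisson_cdf(lam, m):
    """P(X <= m) for X ~ Poisson(lam), with each term computed in log-space."""
    return sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
               for i in range(m + 1))

def g(eps, gamma):
    """g(eps, gamma) of the text:
    1 - P(Pois(gamma/(1+eps)) <= gamma-1) + P(Pois(gamma/(1-eps)) <= gamma-1)."""
    return (1 - poisson_cdf(gamma / (1 + eps), gamma - 1)
            + poisson_cdf(gamma / (1 - eps), gamma - 1))
```

Since $g(\varepsilon, \gamma) \to 0$ as $\gamma \to \infty$, the condition $g(\varepsilon, \gamma_s) < \delta$ holds once $\gamma_s$ is sufficiently large, which can be arranged by taking $\zeta$ small.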
In this section, we have proposed a multistage inverse sampling plan for estimating a binomial parameter, $p$, with relative precision. In some situations, the cost of the sampling operation may be high, since samples are obtained one by one when inverse sampling is involved. In view of this fact, it is desirable to develop multistage estimation methods that do not use inverse sampling. For this purpose, we have
\begin{theorem} \label{noinverse} Let $0 < \varepsilon < 1, \; 0 < \delta < 1$ and $\zeta
> 0 $. Let $\tau$ be a positive integer. For $\ell = 1, 2, \cdots$, let $\widehat{\boldsymbol{p}}_\ell = \frac{ \sum_{i =
1}^{n_\ell} X_i } { n_\ell }$, where $n_\ell$ is deterministic and
stands for the sample size at the $\ell$-th stage. For $\ell = 1, 2,
\cdots$, define $\boldsymbol{D}_\ell$ such that $\boldsymbol{D}_\ell = 1$ if {\small
$\widehat{\boldsymbol{p}}_\ell \geq \frac{ 6(1 + \varepsilon) (3 + \varepsilon) \ln (\zeta \delta_\ell)
} { 2 (3 + \varepsilon)^2 \ln (\zeta \delta_\ell) - 9 n_\ell \varepsilon^2}$}; and
$\boldsymbol{D}_\ell = 0$ otherwise, where $\delta_\ell = \delta$ for $1 \leq \ell
\leq \tau$ and $\delta_\ell = \delta 2^{\tau - \ell}$ for $\ell > \tau$.
Suppose the stopping rule is that sampling is continued until
$\boldsymbol{D}_\ell = 1$ for some stage with index $\ell$. Define estimator
$\widehat{\boldsymbol{p}} = \widehat{\boldsymbol{p}}_{\boldsymbol{l}}$, where $\boldsymbol{l}$ is the index of
stage at which the sampling is terminated. Then, $\Pr \{ \boldsymbol{l} <
\infty \} = 1$ and {\small $\Pr \left \{ \left | \frac{ \widehat{\boldsymbol{p}} - p } { p
} \right | \leq \varepsilon \mid p \right \} \geq 1 - \delta$} for any $p \in (0,
1)$ provided that $2 (\tau + 1 ) \zeta \leq 1$ and $\inf_{\ell > 0}
\frac{n_{\ell + 1}}{n_\ell} > 0$.
\end{theorem}
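The stopping threshold implicit in the definition of $\boldsymbol{D}_\ell$ can be tabulated directly; it decreases to $0$ as $n_\ell$ grows, which is why $\Pr \{ \boldsymbol{l} < \infty \} = 1$ (a sketch; names are ours):

```python
import math

def stop_threshold(n, ell, eps, delta, zeta, tau):
    """Stopping threshold for p_hat at stage ell of Theorem `noinverse`:
    D_ell = 1 iff p_hat_ell >= this value."""
    delta_ell = delta if ell <= tau else delta * 2 ** (tau - ell)
    L = math.log(zeta * delta_ell)      # negative, since zeta * delta < 1
    return (6 * (1 + eps) * (3 + eps) * L
            / (2 * (3 + eps) ** 2 * L - 9 * n * eps ** 2))
```

Note that shrinking $\delta_\ell = \delta 2^{\tau - \ell}$ beyond stage $\tau$ raises the threshold at a fixed sample size, which is the price paid for allowing an unbounded number of stages.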
\subsection{Fixed-width Confidence Intervals}
In some literature, the estimation of $p$ has been formulated as the
problem of constructing a fixed-width confidence interval $(\boldsymbol{L},
\boldsymbol{U})$ such that $\boldsymbol{U} - \boldsymbol{L} \leq 2 \varepsilon$ and that $\Pr \left
\{ \boldsymbol{L} < p < \boldsymbol{U} \mid p \right \}
> 1 - \delta$ for any $p \in (0, 1)$ with prescribed $\varepsilon \in (0, \frac{1}{2})$ and $\delta \in (0,
1)$. For completeness, we shall develop multistage sampling schemes
in this setting.
Making use of the Clopper-Pearson confidence interval
\cite{Clopper}, we have established the following sampling scheme.
\begin{theorem} \label{FW1} For $\alpha \in (0, 1)$ and integers $0 \leq k \leq n$,
define
\[ \mathcal{L} (n, k, \alpha) = \left\{\begin{array}{ll}
0 \;\;\;& {\rm if}\; k=0\\
\underline{p} \; \;\;\;&
{\rm if}\; k > 0
\end{array} \right.\;\;\;{\rm and}\;\;\;
\mathcal{U} (n, k, \alpha) = \left\{\begin{array}{ll}
1 & {\rm if}\; k= n\\
\overline{p} \;
& {\rm if}\; k < n
\end{array} \right.
\]
with $\underline{p} \in (0,1)$ satisfying $\sum_{j=k}^n {n \choose
j} \underline{p}^j (1- \underline{p})^{n-j} = \frac{\alpha}{2}$ and
$\overline{p} \in (0,1)$ satisfying $\sum_{j=0}^{k} {n \choose j}
\overline{p}^j (1- \overline{p})^{n-j} = \frac{\alpha}{2}$. Let $\zeta > 0$ and $\rho > 0$. Let $n_1 < n_2 < \cdots < n_s$ be the
ascending arrangement of all distinct elements of {\small $ \left \{
\left \lceil \left ( \frac{ 2 \varepsilon^2} { \ln \frac{1}{1- 2 \varepsilon} } \right )^{ 1 -
\frac{i}{\tau} } \frac{ \ln \frac{1}{\zeta \delta} } { 2 \varepsilon^2 } \right \rceil : i = 0,
1, \cdots, \tau \right \}$} with {\small $\tau = \left \lceil \frac{ \ln \left (
\frac{1}{ 2 \varepsilon^2} \ln \frac{1}{1- 2 \varepsilon} \right ) } { \ln (1 + \rho)} \right
\rceil$. } For $\ell = 1, \cdots, s$, define $K_\ell = \sum_{i =
1}^{n_\ell} X_i$ and $\boldsymbol{D}_\ell$ such that $\boldsymbol{D}_\ell = 1$ if
$\mathcal{U} (n_\ell, K_\ell, \zeta \delta) - \mathcal{L} (n_\ell, K_\ell, \zeta
\delta) \leq 2 \varepsilon$; and $\boldsymbol{D}_\ell = 0$ otherwise. Suppose the
stopping rule is that sampling is continued until $\boldsymbol{D}_\ell = 1$
for some $\ell \in \{1, \cdots, s\}$. Define {\small $\boldsymbol{L} =
\mathcal{L} \left (\mathbf{n}, \sum_{i=1}^{\mathbf{n}} X_i, \zeta \delta \right
)$} and {\small $\boldsymbol{U} = \mathcal{U} \left (\mathbf{n},
\sum_{i=1}^{\mathbf{n}} X_i, \zeta \delta \right )$}, where $\mathbf{n}$ is
the sample size when the sampling is terminated. Define {\small \[
\mathscr{Q}_L = \bigcup_{\ell = 1}^s \left \{ \mathcal{L} (n_\ell, k, \zeta
\delta) \in \left ( 0, 1 \right ) : 0 \leq k \leq n_\ell \right \}, \qquad
\mathscr{Q}_U = \bigcup_{\ell = 1}^s \left \{ \mathcal{U} (n_\ell, k, \zeta
\delta) \in \left ( 0, 1 \right ) : 0 \leq k \leq n_\ell \right \}.
\]}
Then, a sufficient condition to guarantee $\Pr \left \{ \boldsymbol{L} < p <
\boldsymbol{U} \mid p \right \}
> 1 - \delta$ for any $p \in (0, 1)$ is that {\small \begin{eqnarray} & &
\sum_{\ell = 1}^s \Pr \{ \mathcal{L} (n_\ell, K_\ell, \zeta \delta) \geq p,
\; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_L, \label{2D1F}\\
& & \sum_{\ell = 1}^s \Pr \{ \mathcal{U} (n_\ell, K_\ell, \zeta \delta)
\leq p, \; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} <
\frac{\delta}{2} \qquad \forall p \in \mathscr{Q}_U \label{2D2F}
\end{eqnarray}}
where both (\ref{2D1F}) and (\ref{2D2F}) are satisfied if $0 < \zeta <
\frac{1}{2(\tau + 1)}$.
\end{theorem}
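The bounds $\mathcal{L}$ and $\mathcal{U}$ above are the usual Clopper-Pearson limits and can be computed by bisection directly from their defining tail equations (a sketch; names are ours):

```python
import math

def binom_tail_ge(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p); increasing in p for k > 0."""
    return sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

def binom_tail_le(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p); decreasing in p for k < n."""
    return sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(0, k + 1))

def clopper_pearson(n, k, alpha):
    """(L, U) of the theorem: roots of the two tail equations, by bisection."""
    if k == 0:
        lo = 0.0
    else:
        a, b = 0.0, 1.0
        for _ in range(60):
            m = (a + b) / 2
            a, b = (m, b) if binom_tail_ge(n, k, m) < alpha / 2 else (a, m)
        lo = (a + b) / 2
    if k == n:
        hi = 1.0
    else:
        a, b = 0.0, 1.0
        for _ in range(60):
            m = (a + b) / 2
            a, b = (m, b) if binom_tail_le(n, k, m) > alpha / 2 else (a, m)
        hi = (a + b) / 2
    return lo, hi
```

For example, $n = 100$, $k = 50$, $\alpha = 0.05$ gives approximately $(0.398, 0.602)$, of width $0.203$; the stopping rule continues sampling until this width falls below $2\varepsilon$.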
Making use of Chernoff-Hoeffding inequalities \cite{Chernoff,
Hoeffding}, we have established the following sampling scheme.
\begin{theorem} \label{FW2} For $\alpha \in (0, 1)$ and integers $0 \leq k \leq n$,
define
\[
\mathcal{L} (n, k, \alpha) = \left \{\begin{array}{ll}
\underline{p} \;\;\;& {\rm for}\; 0 < k < n,\\
\left( \frac{\alpha}{2} \right)^{\frac{1}{n}} \; \;\;\;&
{\rm for} \; k = n,\\
0 \; \;\;\;&
{\rm for} \; k = 0
\end{array} \right . \qquad \mathcal{U} (n, k, \alpha) = \left \{\begin{array}{ll}
\overline{p} \;\;\;& {\rm for}\; 0 < k < n,\\
1 - \left( \frac{\alpha}{2} \right)^{\frac{1}{n}} \; \;\;\;&
{\rm for} \; k = 0,\\
1 \; \;\;\;&
{\rm for} \; k = n
\end{array} \right .
\]
with $\underline{p} \in (0, \frac{k}{n})$ satisfying $\mathscr{M}_{\mathrm{B}} \left
( \frac{k}{n}, \underline{p} \right ) = \frac{ \ln (\zeta \delta) } { n}$ and $\overline{p}
\in (\frac{k}{n}, 1)$ satisfying $\mathscr{M}_{\mathrm{B}} \left ( \frac{k}{n},
\overline{p} \right ) = \frac{ \ln (\zeta \delta) } { n}$, where
$\mathscr{M}_{\mathrm{B}} (\cdot,\cdot)$ is a function such that
$\mathscr{M}_{\mathrm{B}} (z,\theta) = z \ln \frac{\theta}{z} + (1 - z) \ln \frac{1 -
\theta}{1 - z}$ for $z \in (0,1)$ and $\theta \in (0, 1)$. Let $\zeta > 0$
and $\rho
> 0$. Let $n_1 < n_2 < \cdots < n_s$ be the ascending arrangement of
all distinct elements of {\small $ \left \{ \left \lceil \left ( \frac{ 2
\varepsilon^2} { \ln \frac{1}{1- 2 \varepsilon} } \right )^{ 1 - \frac{i}{\tau} } \frac{ \ln
\frac{1}{\zeta \delta} } { 2 \varepsilon^2 } \right \rceil : i = 0, 1, \cdots, \tau \right \}$}
with {\small $\tau = \left \lceil \frac{ \ln \left ( \frac{1}{ 2 \varepsilon^2} \ln
\frac{1}{1- 2 \varepsilon} \right ) } { \ln (1 + \rho)} \right \rceil$. } For $\ell =
1, \cdots, s$, define $K_\ell = \sum_{i = 1}^{n_\ell} X_i$ and
$\boldsymbol{D}_\ell$ such that $\boldsymbol{D}_\ell = 1$ if $\mathcal{U} (n_\ell,
K_\ell, \zeta \delta) - \mathcal{L} (n_\ell, K_\ell, \zeta \delta) \leq 2 \varepsilon$;
and $\boldsymbol{D}_\ell = 0$ otherwise. Suppose the stopping rule is that
sampling is continued until $\boldsymbol{D}_\ell = 1$ for some $\ell \in
\{1, \cdots, s\}$. Define {\small $\boldsymbol{L} = \mathcal{L} \left (\mathbf{n},
\sum_{i=1}^{\mathbf{n}} X_i, \zeta \delta \right )$} and {\small $\boldsymbol{U} =
\mathcal{U} \left (\mathbf{n}, \sum_{i=1}^{\mathbf{n}} X_i, \zeta \delta \right
)$}, where $\mathbf{n}$ is the sample size when the sampling is
terminated. Define {\small \[ \mathscr{Q}_L = \bigcup_{\ell = 1}^s \left
\{ \mathcal{L} (n_\ell, k, \zeta \delta) \in \left ( 0, 1 \right ) : 0 \leq k
\leq n_\ell \right \}, \qquad \mathscr{Q}_U = \bigcup_{\ell = 1}^s \left \{
\mathcal{U} (n_\ell, k, \zeta \delta) \in \left ( 0, 1 \right ) : 0 \leq k \leq
n_\ell \right \}.
\]}
Then, a sufficient condition to guarantee $\Pr \left \{ \boldsymbol{L} < p <
\boldsymbol{U} \mid p \right \}
> 1 - \delta$ for any $p \in (0, 1)$ is that {\small \begin{eqnarray} & &
\sum_{\ell = 1}^s \Pr \{ \mathcal{L} (n_\ell, K_\ell, \zeta \delta) \geq p,
\; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_L, \label{2D1FC}\\
& & \sum_{\ell = 1}^s \Pr \{ \mathcal{U} (n_\ell, K_\ell, \zeta \delta)
\leq p, \; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} <
\frac{\delta}{2} \qquad \forall p \in \mathscr{Q}_U \label{2D2FC}
\end{eqnarray}}
where both (\ref{2D1FC}) and (\ref{2D2FC}) are satisfied if $0 < \zeta
< \frac{1}{2(\tau + 1)}$.
\end{theorem}
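For $0 < k < n$, the bounds defined through $\mathscr{M}_{\mathrm{B}}$ can likewise be computed by bisection, using the fact that $\theta \mapsto \mathscr{M}_{\mathrm{B}}(z, \theta)$ increases from $-\infty$ to $0$ on $(0, z)$ and decreases from $0$ to $-\infty$ on $(z, 1)$; the boundary cases $k = 0$ and $k = n$ are given in closed form in the theorem. A sketch follows (names are ours; the argument `alpha` plays the role of $\zeta\delta$):

```python
import math

def kl_bernoulli(z, theta):
    """M_B(z, theta) = z ln(theta/z) + (1 - z) ln((1 - theta)/(1 - z))."""
    return z * math.log(theta / z) + (1 - z) * math.log((1 - theta) / (1 - z))

def kl_bounds(n, k, alpha):
    """(L, U) solving M_B(k/n, p) = ln(alpha)/n on either side of k/n,
    for 0 < k < n, by bisection."""
    z, c = k / n, math.log(alpha) / n        # c < 0
    a, b = 1e-12, z
    for _ in range(60):                      # M_B(z, .) increases on (0, z)
        m = (a + b) / 2
        a, b = (m, b) if kl_bernoulli(z, m) < c else (a, m)
    lo = (a + b) / 2
    a, b = z, 1 - 1e-12
    for _ in range(60):                      # M_B(z, .) decreases on (z, 1)
        m = (a + b) / 2
        a, b = (a, m) if kl_bernoulli(z, m) < c else (m, b)
    hi = (a + b) / 2
    return lo, hi
```

The returned interval always brackets the sample proportion $k/n$, and each endpoint satisfies its defining equation to high accuracy.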
Making use of Massart's inequality \cite{Massart:90}, we have
established the following sampling scheme.
\begin{theorem}
\label{FW3} For $\alpha \in (0, 1)$ and integers $0 \leq k \leq n$, define
{\small \[ \mathcal{L} (n, k, \alpha) = \max \left \{ 0, \; \frac{k}{n} +
\frac{3}{4} \; \frac{ 1 - \frac{2k}{n} - \sqrt{ 1 + \frac{9}{ 2 \ln
\frac{2}{\alpha} } \; k ( 1- \frac{k}{n}) } } {1 + \frac{9 n}{ 8 \ln
\frac{2}{\alpha} } } \right \},
\]} and {\small \[ \mathcal{U} (n, k, \alpha) = \min \left \{ 1, \; \frac{k}{n} +
\frac{3}{4} \; \frac{ 1 - \frac{2k}{n} + \sqrt{ 1 + \frac{9}{ 2 \ln
\frac{2}{\alpha} } \; k ( 1- \frac{k}{n}) } } {1 + \frac{9 n}{ 8 \ln
\frac{2}{\alpha} } } \right \}.
\]} Let $\zeta > 0$ and $\rho
> 0$. Let $n_1 < n_2 < \cdots < n_s$ be the ascending arrangement of
all distinct elements of {\small $ \left \{ \left \lceil \frac{8}{9} \left (
\frac{3}{4 \varepsilon} + 1 \right )^{ \frac{i}{\tau} } \left ( \frac{3}{4 \varepsilon} -
1 \right ) \ln \frac{1}{\zeta \delta} \right \rceil : i = 0, 1, \cdots, \tau \right
\}$} with {\small $\tau = \left \lceil \frac{ \ln \left ( \frac{3}{4 \varepsilon} +
1 \right ) } { \ln (1 + \rho)} \right \rceil$. } For $\ell = 1, \cdots, s$,
define $K_\ell = \sum_{i = 1}^{n_\ell} X_i$ and $\boldsymbol{D}_\ell$ such
that $\boldsymbol{D}_\ell = 1$ if \[
1 - \frac{9}{ 2 \ln (\zeta \delta) } \;
K_\ell \left ( 1- \frac{K_\ell }{n_\ell} \right ) \leq \varepsilon^2 \left [
\frac{4}{3} - \frac{3 n_\ell}{ 2 \ln (\zeta \delta) } \right ]^2, \] and
$\boldsymbol{D}_\ell = 0$ otherwise. Suppose the stopping rule is that
sampling is continued until $\boldsymbol{D}_\ell = 1$ for some $\ell \in
\{1, \cdots, s\}$. Define {\small $\boldsymbol{L} = \mathcal{L} \left (\mathbf{n},
\sum_{i=1}^{\mathbf{n}} X_i, \zeta \delta \right )$} and {\small $\boldsymbol{U} =
\mathcal{U} \left (\mathbf{n}, \sum_{i=1}^{\mathbf{n}} X_i, \zeta \delta \right
)$}, where $\mathbf{n}$ is the sample size when the sampling is
terminated. Define {\small \[ \mathscr{Q}_L = \bigcup_{\ell = 1}^s \left
\{ \mathcal{L} (n_\ell, k, \zeta \delta) \in \left ( 0, 1 \right ) : 0 \leq k
\leq n_\ell \right \}, \qquad \mathscr{Q}_U = \bigcup_{\ell = 1}^s \left \{
\mathcal{U} (n_\ell, k, \zeta \delta) \in \left ( 0, 1 \right ) : 0 \leq k \leq
n_\ell \right \}.
\]}
Then, a sufficient condition to guarantee $\Pr \left \{ \boldsymbol{L} < p <
\boldsymbol{U} \mid p \right \}
> 1 - \delta$ for any $p \in (0, 1)$ is that {\small \begin{eqnarray} & &
\sum_{\ell = 1}^s \Pr \{ \mathcal{L} (n_\ell, K_\ell, \zeta \delta) \geq p,
\; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} < \frac{\delta}{2}
\qquad \forall p \in \mathscr{Q}_L, \label{2D1mF}\\
& & \sum_{\ell = 1}^s \Pr \{ \mathcal{U} (n_\ell, K_\ell, \zeta \delta)
\leq p, \; \boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_\ell = 1 \mid p \} <
\frac{\delta}{2} \qquad \forall p \in \mathscr{Q}_U \label{2D2mF}
\end{eqnarray}}
where both (\ref{2D1mF}) and (\ref{2D2mF}) are satisfied if $0 < \zeta
< \frac{1}{2(\tau + 1)}$.
\end{theorem}
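As a concrete illustration (not part of the theorem statement), the following Python sketch evaluates the Massart-type limits $\mathcal{L}(n, k, \alpha)$ and $\mathcal{U}(n, k, \alpha)$ and the stopping inequality that defines $\boldsymbol{D}_\ell$; all function names are our own.

```python
import math

def massart_lower(n, k, alpha):
    # L(n, k, alpha): lower confidence limit, clipped at 0
    t = math.log(2.0 / alpha)
    root = math.sqrt(1.0 + 9.0 * k * (1.0 - k / n) / (2.0 * t))
    return max(0.0, k / n + 0.75 * (1.0 - 2.0 * k / n - root) / (1.0 + 9.0 * n / (8.0 * t)))

def massart_upper(n, k, alpha):
    # U(n, k, alpha): upper confidence limit, clipped at 1
    t = math.log(2.0 / alpha)
    root = math.sqrt(1.0 + 9.0 * k * (1.0 - k / n) / (2.0 * t))
    return min(1.0, k / n + 0.75 * (1.0 - 2.0 * k / n + root) / (1.0 + 9.0 * n / (8.0 * t)))

def decision(n, k, eps, zeta_delta):
    # D_ell = 1 exactly when the stopping inequality of the theorem holds
    ln_zd = math.log(zeta_delta)          # zeta * delta < 1, so ln_zd < 0
    lhs = 1.0 - 9.0 * k * (1.0 - k / n) / (2.0 * ln_zd)
    rhs = eps ** 2 * (4.0 / 3.0 - 3.0 * n / (2.0 * ln_zd)) ** 2
    return lhs <= rhs
```

For $k/n = 0.3$ at $\alpha = 0.05$ the limits bracket the sample proportion, and for fixed $k/n$ the decision variable switches from 0 to 1 as $n_\ell$ grows.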
\bigskip
It should be noted that the interval estimation methods described in
Theorems \ref{FW1}--\ref{FW3} can be made less conservative by using
tight bounds of $C (p, \varepsilon) = 1 - \Pr \{ \boldsymbol{L} < p < \boldsymbol{U} \mid
p \}$ for $p \in [a, b] \subseteq \Theta$ in Theorem \ref{bbsplit}.
Based on such bounds, a branch-and-bound type strategy described in
section 2.8 of \cite{Chen_EST} can be used to facilitate the search
of an appropriate value of $\zeta$ such that the coverage probability
associated with interval $(\boldsymbol{L}, \boldsymbol{U})$ is no less than $1 -
\delta$.
\begin{theorem} \label{bbsplit} Let $\mathcal{L}_\ell = \mathcal{L} (\widehat{\boldsymbol{p}}_\ell,
\zeta, \delta)$ and $\mathcal{U}_\ell = \mathcal{U} (\widehat{\boldsymbol{p}}_\ell, \zeta,
\delta)$ for $\ell = 1, \cdots, s$. Then, \begin{eqnarray*} C (p, \varepsilon) & \leq & \Pr
\{ \boldsymbol{L} \geq a \mid b \} + \Pr \{ \boldsymbol{U} \leq b \mid a \} \\
& \leq & \sum_{\ell = 1}^s \Pr \{ \mathcal{L}_\ell \geq a, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_{\ell } = 1 \mid b \} + \sum_{\ell
= 1}^s \Pr \{ \mathcal{U}_\ell \leq b, \; \boldsymbol{D}_{\ell - 1} = 0, \;
\boldsymbol{D}_{\ell } = 1 \mid a \}, \end{eqnarray*} \begin{eqnarray*} C (p, \varepsilon) & \geq & \Pr \{
\boldsymbol{L} \geq b \mid a \} + \Pr \{ \boldsymbol{U} \leq a \mid b \} \\
& \geq & \sum_{\ell = 1}^s \Pr \{ \mathcal{L}_\ell \geq b, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_{\ell } = 1 \mid a \} + \sum_{\ell
= 1}^s \Pr \{ \mathcal{U}_\ell \leq a, \; \boldsymbol{D}_{\ell - 1} = 0, \;
\boldsymbol{D}_{\ell } = 1 \mid b \} \end{eqnarray*} for any $p \in [a, b]$. Moreover,
if the open interval $(a, b)$ contains no element of the supports of
$\boldsymbol{L}$ and $\boldsymbol{U}$, then \begin{eqnarray*} C (p, \varepsilon) & \leq & \Pr
\{ \boldsymbol{L} \geq b \mid b \} + \Pr \{ \boldsymbol{U} \leq a \mid a \} \\
& \leq & \sum_{\ell = 1}^s \Pr \{ \mathcal{L}_\ell \geq b, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_{\ell } = 1 \mid b \} + \sum_{\ell
= 1}^s \Pr \{ \mathcal{U}_\ell \leq a, \; \boldsymbol{D}_{\ell - 1} = 0, \;
\boldsymbol{D}_{\ell } = 1 \mid a \}, \end{eqnarray*} \begin{eqnarray*} C (p, \varepsilon) & \geq & \Pr
\{ \boldsymbol{L} > a \mid a \} + \Pr \{ \boldsymbol{U} < b \mid b \} \\
& \geq & \sum_{\ell = 1}^s \Pr \{ \mathcal{L}_\ell > a, \;
\boldsymbol{D}_{\ell - 1} = 0, \; \boldsymbol{D}_{\ell } = 1 \mid a \} + \sum_{\ell
= 1}^s \Pr \{ \mathcal{U}_\ell < b, \; \boldsymbol{D}_{\ell - 1} = 0, \;
\boldsymbol{D}_{\ell } = 1 \mid b \} \end{eqnarray*} for any $p \in (a, b)$. \end{theorem}
\bigskip
We would like to note that Theorems 1 and 2 of \cite{Chen_EST} play
important roles in establishing the theorems of this
section. As can be seen from Theorems 1--6, the confidence
requirements can be satisfied by choosing $\zeta$ sufficiently
small. The application of the double-decision-variable and
single-decision-variable methods is straightforward. To determine $\zeta$
as large as possible, and thus make the sampling schemes as
efficient as possible, computational techniques such as bisection
confidence tuning, domain truncation, and triangular partition
developed in \cite{Chen_EST} can be applied.
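For instance, if one has a routine that certifies whether a candidate $\zeta$ meets the coverage requirement (built, say, from the bounds of Theorem \ref{bbsplit}), bisection confidence tuning can be sketched as follows; `coverage_ok` is a hypothetical oracle assumed monotone in $\zeta$:

```python
def tune_zeta(coverage_ok, zeta_max, iters=30):
    # Bisection: return (approximately) the largest zeta in (0, zeta_max]
    # for which coverage_ok(zeta) is True; assumes monotonicity in zeta.
    lo, hi = 0.0, zeta_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if coverage_ok(mid):
            lo = mid          # coverage holds: try a larger zeta
        else:
            hi = mid          # coverage fails: shrink zeta
    return lo
```

After 30 iterations the returned value is within $2^{-30}\,\zeta_{\max}$ of the largest admissible $\zeta$.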
With regard to the tightness of the double-decision-variable method,
we can develop results similar to Theorems 13, 18 and 23 of
\cite{Chen_EST}; with regard to the asymptotic performance of our
sampling schemes, we can develop results similar to Theorems 14, 19
and 24 of \cite{Chen_EST}.
\section{Estimation of Bounded-variable Means}
The method proposed for estimating binomial parameters can be
generalized for estimating means of random variables bounded in interval $[0, 1]$.
Formally, let $Z \in [0, 1]$ be a random variable with expectation $\mu =
\mathbb{E} [Z]$. We can estimate $\mu$ based on i.i.d. random samples
$Z_1, Z_2, \cdots$ of $Z$ by virtue of the following results.
\begin{theorem} \label{coverage_abs} Let $0 < \varepsilon < \frac{1}{2}$ and $0 < \delta < 1$.
Let $n_1 < n_2 < \cdots < n_s$ be a sequence of sample sizes such that
{\small $n_s \geq \frac{ \ln \frac{2 s} { \delta} } { 2 \varepsilon^2 }$}. Define
$\widehat{\boldsymbol{\mu}}_\ell = \frac{ \sum_{i=1}^{n_\ell} Z_i }{n_\ell}$ for
$\ell = 1, \cdots, s$. Suppose the stopping rule is that sampling is
continued until {\small $\left ( \left | \widehat{\boldsymbol{\mu}}_\ell - \frac{1}{2}
\right | - \frac{2 \varepsilon }{3} \right )^2 \geq \frac{1}{4} - \frac{ \varepsilon^2 n_\ell }
{2 \ln (2 s \slash \delta) }$} for some $\ell \in \{1, \cdots, s \}$. Define
$\boldsymbol{\widehat{\mu}} = \frac{\sum_{i=1}^{\mathbf{n}} Z_i}{\mathbf{n}}$ where
$\mathbf{n}$ is the sample size when the sampling is terminated.
Then, $\Pr \left \{ \left | \boldsymbol{\widehat{\mu}} - \mu \right | < \varepsilon \right \}
\geq 1 - \delta$. \end{theorem}
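The stopping rule of Theorem \ref{coverage_abs} can be transcribed directly; in this sketch `draw` is any callable returning i.i.d. samples of $Z$, and the names are ours. If $n_s$ meets the stated lower bound, the right-hand side of the inequality is nonpositive at stage $s$, so the procedure always terminates.

```python
import math

def estimate_mean(draw, eps, delta, sample_sizes):
    # Multistage estimator for the mean of a [0,1]-valued variable:
    # stop at the first n_ell with
    #   (|mu_hat - 1/2| - 2*eps/3)^2 >= 1/4 - eps^2 * n_ell / (2 ln(2s/delta))
    s = len(sample_sizes)
    t = 2.0 * math.log(2.0 * s / delta)
    total, n_used = 0.0, 0
    for n_ell in sample_sizes:
        total += sum(draw() for _ in range(n_ell - n_used))
        n_used = n_ell
        mu_hat = total / n_used
        if (abs(mu_hat - 0.5) - 2.0 * eps / 3.0) ** 2 >= 0.25 - eps ** 2 * n_used / t:
            return mu_hat, n_used
    return mu_hat, n_used   # reached only if n_s violates the bound on n_s
```

With a degenerate variable $Z \equiv 1$, $\varepsilon = 0.1$, $\delta = 0.05$ and stages $(50, 100, 200, 300)$, the rule fires at the second stage.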
\bigskip
This theorem can be shown by a variation of the argument for Theorem
1.
\begin{theorem} \label{coverage_mix_bound} Let $0 < \delta < 1, \; 0 < \varepsilon_a <
\frac{3}{8}$ and {\small $\frac{6 \varepsilon_a}{3 - 2 \varepsilon_a } < \varepsilon_r < 1$}.
Let $n_1 < n_2 < \cdots < n_s$ be a sequence of sample sizes such that
{\small $n_s \geq 2 \left ( \frac{1}{ \varepsilon_r } + \frac{1} {3} \right ) \left
( \frac{1}{ \varepsilon_a } - \frac{1}{ \varepsilon_r } - \frac{1} {3} \right ) \ln \left (
\frac{2s}{\delta} \right )$}. Define $\widehat{\boldsymbol{\mu}}_\ell = \frac{
\sum_{i=1}^{n_\ell} Z_i }{n_\ell}$ for $\ell = 1, \cdots, s$. Define
{\small \[ \boldsymbol{D}_\ell = \begin{cases} 0 & \mathrm{for} \; \frac{1}{2} - \frac{2}{3}
\varepsilon_a - \sqrt{ \frac{1}{4} + \frac{ n_\ell \varepsilon_a^2 } {2 \ln \frac{\delta}{2s} } }
< \widehat{\boldsymbol{\mu}}_\ell < \frac{ 6(1 - \varepsilon_r) (3 - \varepsilon_r) \ln
\frac{\delta}{2s} } { 2 (3 - \varepsilon_r)^2 \ln \frac{\delta}{2s} - 9 n_\ell \varepsilon_r^2} \; \mathrm{or}\\
& \quad \;\; \frac{1}{2} + \frac{2}{3} \varepsilon_a - \sqrt{ \frac{1}{4} + \frac{
n_\ell \varepsilon_a^2 } {2 \ln \frac{\delta}{2s} } } < \widehat{\boldsymbol{\mu}}_\ell < \frac{
6(1 + \varepsilon_r) (3 + \varepsilon_r) \ln \frac{\delta}{2s} } { 2 (3 +
\varepsilon_r)^2 \ln \frac{\delta}{2s} - 9 n_\ell \varepsilon_r^2},\\
1 & \mathrm{else}
\end{cases}
\]}
for $\ell = 1, \cdots, s - 1$ and $\boldsymbol{D}_s = 1$. Suppose the stopping
rule is that sampling is continued until $\boldsymbol{D}_\ell = 1$ for some
$\ell \in \{1, \cdots, s \}$. Define $\boldsymbol{\widehat{\mu}} =
\frac{\sum_{i=1}^{\mathbf{n}} Z_i}{\mathbf{n}}$ where $\mathbf{n}$ is
the sample size when the sampling is terminated.
Then, $\Pr \left \{ \left | \boldsymbol{\widehat{\mu}} - \mu \right | < \varepsilon_a \; \text{or} \; \left | \boldsymbol{\widehat{\mu}} - \mu \right | < \varepsilon_r \mu \right \}
\geq 1 - \delta$. \end{theorem}
\bigskip
This theorem can be shown by a variation of the argument for Theorem
2. In the general case that $Z$ is a random variable bounded in
$[a, b]$, it is useful to estimate the mean $\mu = \mathbb{E} [ Z]$
based on i.i.d. samples of $Z$ with a mixed criterion. For this
purpose, we shall introduce the function
\[
\mathcal{M}(z, \mu) = \begin{cases} \frac{ (\mu - z)^2 } {2 \left ( \frac{2 \mu}{3} +
\frac{z}{3} \right ) \left ( \frac{2 \mu}{3} + \frac{z}{3} - 1 \right ) } & \text{for}
\; 0 \leq z \leq 1 \; \text{and} \; \mu \in (0, 1),\\
- \infty & \text{for} \; 0 \leq z \leq 1 \; \text{and} \; \mu \notin (0, 1)
\end{cases}
\]
and propose the following multistage estimation method.
\begin{theorem} \label{coverage_mix_general} Let $0 < \delta < 1, \; \varepsilon_a > 0$ and
$0 < \varepsilon_r < 1$. Let $n_1 < n_2 < \cdots < n_s$ be a sequence of
sample sizes such that {\small $n_s \geq \frac{(b - a)^2}{ 2 \varepsilon_a^2}
\ln \left ( \frac{2s}{\delta} \right )$}. Define $\widehat{\boldsymbol{\mu}}_\ell = \frac{
\sum_{i=1}^{n_\ell} Z_i }{n_\ell}, \; \widetilde{\boldsymbol{\mu}}_\ell =
\frac{ \widehat{\boldsymbol{\mu}}_\ell - a }{b - a}$,
\[
\underline{\boldsymbol{\mu}}_\ell =
\frac{1}{b - a} \left ( \min \left \{ \widehat{\boldsymbol{\mu}}_\ell - \varepsilon_a, \;
\frac{\widehat{\boldsymbol{\mu}}_\ell}{ 1 + \mathrm{sgn} (\widehat{\boldsymbol{\mu}}_\ell) \varepsilon_r }
\right \} - a \right ), \qquad \overline{\boldsymbol{\mu}}_\ell = \frac{1}{b - a} \left ( \max \left \{
\widehat{\boldsymbol{\mu}}_\ell + \varepsilon_a, \; \frac{\widehat{\boldsymbol{\mu}}_\ell}{ 1 -
\mathrm{sgn} (\widehat{\boldsymbol{\mu}}_\ell) \varepsilon_r } \right \} - a \right )
\]
for $\ell = 1, \cdots, s$. Suppose the stopping rule is that sampling
is continued until $\mathcal{M}(\widetilde{\boldsymbol{\mu}}_\ell,
\underline{\boldsymbol{\mu}}_\ell) \leq \frac{1}{n_\ell} \ln \frac{\delta}{2s}$ and
$\mathcal{M}(\widetilde{\boldsymbol{\mu}}_\ell, \overline{\boldsymbol{\mu}}_\ell) \leq
\frac{1}{n_\ell} \ln \frac{\delta}{2s}$ for some $\ell \in \{1, \cdots, s \}$.
Define $\boldsymbol{\widehat{\mu}} = \frac{\sum_{i=1}^{\mathbf{n}} Z_i}{\mathbf{n}}$
where $\mathbf{n}$ is the sample size when the sampling is
terminated.
Then, $\Pr \left \{ \left | \boldsymbol{\widehat{\mu}} - \mu \right | < \varepsilon_a \; \text{or} \; \left | \boldsymbol{\widehat{\mu}} - \mu \right | < \varepsilon_r |\mu| \right \}
\geq 1 - \delta$. \end{theorem}
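A minimal sketch of $\mathcal{M}$ and the resulting stopping test follows (function names are ours). Note that $\mathcal{M} \leq 0$ on its domain, matching the negative threshold $\frac{1}{n_\ell} \ln \frac{\delta}{2s}$.

```python
import math

def M(z, mu):
    # M(z, mu) as defined above; nonpositive on its domain, -inf off it
    if not (0.0 < mu < 1.0):
        return float("-inf")
    w = 2.0 * mu / 3.0 + z / 3.0        # w in (0, 1), so w - 1 < 0
    return (mu - z) ** 2 / (2.0 * w * (w - 1.0))

def stop_here(n, mu_tilde, mu_lower, mu_upper, delta, s):
    # Stopping rule: both M values must fall below (1/n) ln(delta / (2 s))
    thresh = math.log(delta / (2.0 * s)) / n
    return M(mu_tilde, mu_lower) <= thresh and M(mu_tilde, mu_upper) <= thresh
```

Since $\mathcal{M}$ is fixed once the samples are drawn while the threshold rises toward zero as $1/n_\ell$, the test eventually passes as the sample size grows.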
\section{A Link between
Binomial and Bounded Variables}
There exists an inherent connection between a binomial parameter and
the mean of a bounded variable. In this regard, we have
\begin{theorem} \label{Link} Let $Z$ be a random variable bounded in $[0, 1]$. Let
$U$ be a random variable uniformly distributed over $[0, 1]$. Suppose
$Z$ and $U$ are independent. Then,
\[ \mathbb{E} [Z] = \Pr \{ Z \geq U \}.
\] \end{theorem}
\begin{pf} Let $F_{Z, U}$ be the joint distribution of $Z$ and $U$. Let
$F_Z$ be the cumulative distribution function of $Z$. Since $Z$ and
$U$ are independent, using Riemann-Stieltjes integration, we have
\[
\Pr \{ Z \geq U \} = \int_{z = 0}^1 \int_{u = 0}^z d F_{Z, U} =
\int_{z = 0}^1 \int_{u = 0}^z d u \; d F_Z = \int_{z = 0}^1 z \; d
F_Z = \mathbb{E} [Z].
\]
\end{pf}
To see why Theorem \ref{Link} reveals a relationship between the
mean of a bounded variable and a binomial parameter, we define
\[
X = \begin{cases} 1 & \text{for} \; Z \geq U,\\
0 & \text{otherwise}. \end{cases}
\]
Then, by Theorem \ref{Link}, we have $\Pr \{ X = 1 \} = 1 - \Pr \{ X
= 0 \} = \mathbb{E} [Z]$. This implies that $X$ is a Bernoulli random
variable and $\mathbb{E} [Z]$ is actually a binomial parameter. As a
consequence, the techniques of estimating a binomial parameter can
be useful for estimating the mean of a bounded variable. Specifically,
for a sequence of i.i.d. random samples $Z_1, Z_2, \cdots$ of bounded
variable $Z$ and a sequence of i.i.d. random samples $U_1, U_2, \cdots$
of uniform variable $U$ such that $Z_i$ is independent of
$U_i$ for all $i$, we can define a sequence of i.i.d. random samples
$X_1, X_2, \cdots$ of Bernoulli random variable $X$ by
\[
X_i = \begin{cases} 1 & \text{for} \; Z_i \geq U_i,\\
0 & \text{otherwise}. \end{cases}
\]
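This reduction is easy to verify numerically. The sketch below draws $Z$ from a Beta distribution (chosen purely for illustration, with $\mathbb{E}[Z] = 2/7$) and checks that the sample mean of the induced Bernoulli variable approximates $\mathbb{E}[Z]$:

```python
import random

def bernoulli_sample(draw_z, rng):
    # X = 1 if Z >= U with U ~ Uniform[0,1] independent of Z, else 0
    return 1 if draw_z() >= rng.random() else 0

rng = random.Random(0)
draw_z = lambda: rng.betavariate(2, 5)   # bounded in [0,1], mean 2/7
n = 200000
x_bar = sum(bernoulli_sample(draw_z, rng) for _ in range(n)) / n
```

By Theorem \ref{Link}, `x_bar` estimates $\Pr\{Z \geq U\} = \mathbb{E}[Z] = 2/7 \approx 0.286$.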
\section{Conclusion}
We have established a new multistage approach for estimating the
mean of a bounded variable. Our approach can provide an estimator
for the unknown mean which rigorously guarantees prescribed levels
of precision and confidence. Our approach is also very flexible in
the sense that the precision can be expressed in terms of different
types of margins of error.
\bigskip
\section{Introduction}
The cooling-flow problem in clusters of galaxies has been one of the most
notorious issues in galaxy formation.
The cooling time ($t_c$) of gas within the central $100-200$~kpc of many
clusters is less than a Hubble time
\citep[e.g. ][]{CowieBinney77, FabianNulsen77}.
If there is no compensating heat source distributing
thermal energy over that same region, that gas ought to cool, condense, and relax
toward the cluster's center in a so-called ``cooling flow,''
but exhaustive searches in other wave bands have failed
to locate the $10^{12}-10^{13} \, M_\odot$ of stars or cool gas that should have
accumulated
\citep[e.g. ][]{ODea1994, Antonucci1994, McNamaraJaffe1994}.
Nevertheless, something unusual is happening in clusters with $t_c \ll H_0^{-1}$.
Significantly smaller amounts of gas have been detected in the form of
CO \citep{Edge2002, EdgeFrayer2003} or HI \citep{1994ApJ...436..669O},
vibrationally excited H$_2$ \citep{Donahue2000, Falcke1998, JaffeBremer97},
and evidence for star formation
\citep[e.g. ][]{Cardiel1998,Crawford1999, VD97, ODea2004}
are common in these systems, and {\em Chandra} observations have
shown that radio lobes sometimes carve out huge cavities in
the X-ray emitting gas at the centers of such clusters
\citep[e.g., ][]{McNamaraA2597_2001, Fabian2000_NGC1275, Blanton2003}.
This association of star formation, line emission, and
relativistic plasma with cooling-flow clusters has fed
speculation that feedback from active galactic nuclei
modulates the condensation of hot gas, greatly
reducing the mass-cooling rates naively inferred from
X-ray imaging \citep[e.g., ][]{Bohringer2002, Quilis2001}.
However, active feedback sources
are not found in every cluster with $t_c \ll H_0^{-1}$.
For example, the nearby cooling-flow sample of
\citet{Peres1998}
consists of twenty-three clusters with
$\dot{M} > 100 \, M_\odot \, {\rm yr}^{-1}$ inferred from {\em ROSAT}
imaging. Of these, thirteen have both an emission-line nebula
and a strong radio source, two have no emission lines but a strong
radio source (A2029, A3112), and three have emission lines but a
weak radio source (A478, A496, A2142) leaving five with no
emission lines and little or no radio activity (A1651, A2244,
A1650, A1689, A644).
To test the idea that feedback from either an AGN, star formation,
or some combination of the two suppresses cooling in the cores of
clusters with $t_c \ll H_0^{-1}$, we observed two objects from
this last set of five with {\em Chandra}: A1650 ($z=0.0845$) and A2244 ($z=0.0968$).
These clusters are luminous X-ray
sources, with bolometric
$L_x \sim 8 \times 10^{44} h_{70}^{-2}$ erg s$^{-1}$ and
estimated gas $T_x$ of 5.5-7.0 keV \citep{David1993}.
Here we compare those clusters with an archival sample of clusters of
similar X-ray luminosities ($L_x=0.4-30 \times 10^{44}$ erg s$^{-1}~h_{70}^{-2}$) and temperatures
($T_x=2.9-7.4$ keV), with
$t_c \ll H_0^{-1}$, with evidence for active feedback in the form of central
radio emission, and in most cases, with emission-line nebulae as well
\citep{Donahue2005B}. We will refer to these clusters as ``active clusters.''
All of the clusters in the \citet{Donahue2005B} sample and the two clusters
discussed in this paper have single, optically luminous, brightest central galaxies
residing at the centroid of their X-ray emission.
\S~2 describes the observations and calibration procedures.
\S~3 describes the data analysis and the extraction of entropy
profiles, and \S~4 discusses our results, which we summarize
in \S~5. For this paper we assume $H_0=70$ km s$^{-1}$ Mpc$^{-1}$
and a flat universe where $\Omega_M=0.3$.
\section{Observations and Calibration}
The observation dates, flare-free exposure times, and count rates between 0.5-9.5 keV within
a 4' radius aperture are reported in
Table~\ref{log}. The back-illuminated CCD on the Chandra X-ray Observatory
\citep{Chandra2002}, the ACIS-S3 detector, was used for its sensitivity
to soft X-rays. Its field of view ($8\arcmin \times 8\arcmin$)
extends to about 10\% of the virial radius of each cluster, limiting our analysis
to the cluster cores.
\begin{deluxetable}{lccc}
\tablecaption{Chandra Observations \label{log}}
\tablehead{
\colhead{Cluster} & \colhead{Observation Date} & \colhead{Exposure Time} &
\colhead{ACIS-S Count Rate} \\
\colhead{} & \colhead{} & \colhead{(s)} & \colhead{(ct s$^{-1}$)} }
\startdata
Abell 1650 & Aug 3-4, 2003 & 27,260 & 4.36 \\
Abell 2244 & Oct 10-11, 2003 & 56,965 & 4.35 \\
\enddata
\end{deluxetable}
We processed these datasets using the {\em Chandra} calibration software
CALDB 2.29 and CIAO 3.1, released in July 2004\footnote{Chandra Interactive
Analysis of Observations (CIAO), http://cxc.harvard.edu/ciao/}.
Neither observation experienced flares.
We used Chandra deep background observations for our background spectra.\footnote{M. Markevitch, author of http://asc.harvard.edu/cal/Acis/Cal\_prods/bkgrnd/acisbg/COOKBOOK}
Source and background spectra were extracted using identical concentric
annuli containing a minimum of 20,000 counts per source spectrum.
Bright point sources were excluded from the event files
before spectral extraction. The spectra were binned to a minimum of 25 counts per
energy bin.
Using XSPEC v11.3.1, we fit the projected and deprojected spectra from 0.7-7.0 keV to
MekaL models \citep{MekaL} with Galactic absorption attenuating the
soft X-rays \citep{MM1983}.
Since the best-fit absorption overlapped the Galactic values of $N_{\rm H}=1.56 \times 10^{20}$ and $2.3 \times 10^{20}$ cm$^{-2}$
for A1650 and A2244 respectively \citep{DickeyLockman1990}, we fixed $N_{\rm H}$ at
those values for this analysis. The positions of the Fe-K lines were consistent with the cluster redshifts from galaxy velocities in \citet{StrubleRood1999}.
We computed 90\% uncertainties ($\Delta \chi^2 =2.71$) for the temperature,
normalization, and metallicity at each
annulus. We constrained the metallicity to be
constant across 2-3 annuli. The reduced $\chi^2$ values for the fits
were typically 1.10-1.15. More details about our data analysis strategy and further analyses
are described in \citet{Donahue2005B}, where
we also analyze Chandra archival observations of nine other cooling-flow clusters that have central
radio sources and emission-line nebulae.
Neither cluster exhibits a strong temperature gradient across the core. Abell 2244
is nearly isothermal with $kT = 5.5 \pm 0.5$ keV at every radius $<4'$, and Abell 1650
varies from $5.5 \pm 0.5$ keV in the core to $7.0 \pm 1.0$ keV at 4', statistically
consistent with but somewhat higher than the temperature profile over similar radii obtained with
XMM measurements.
(\citet{TakahashiYamashita2004} adopted a lower redshift ($z=0.0801$) to fit the XMM data,
which may indicate a calibration uncertainty.) These small inner temperature gradients contrast with those of most other cooling flow clusters,
which tend to be more pronounced. Both cluster cores contain a significant metallicity gradient,
ranging from 0.6-0.8 solar in the center to a more typical 0.2-0.3 solar outside the core,
consistent with \citet{TakahashiYamashita2004}. This metallicity
pattern is typical of the other cooling flow clusters we studied.
\section{Data Analysis}
The goal of our data analysis was to determine whether or not clusters without
obvious signatures of feedback were systematically different from those with radio sources
that do show signatures of feedback. Our primary results are that these two clusters
do not show any evidence for ghost cavities and have higher central entropy levels
than clusters showing evidence for feedback.
In order to search for cavities in the intracluster medium, we adaptively smoothed
the X-ray data to a minimum significance of 5-sigma with both a Gaussian and a top-hat kernel.
We found no ``ghost bubbles.'' On scales larger than about 50 kpc from the center, both clusters exhibited regular, nearly round intensity contours. We also did not see evidence for
filaments, such as that found tracing the H$\alpha$ emission in
Abell 1795 \citep{2001MNRAS.321L..33F} or M~87 \citep{Sparks2004}.
We determined the entropy profiles of these clusters by computing the
adiabatic constant $K = kTn_e^{-2/3}$ at each radius to quantify the
specific entropy. The temperature ($kT$) profiles were measured as described
in \S2. The electron density profiles ($n_e$) were derived by deprojecting
the 0.5-2.0 keV surface brightness profiles within
annuli having $5\arcsec$ widths using the technique of \citet{KCC1983}. The
uncertainties of the deprojected count rate profiles were estimated
from 1000 Monte Carlo bootstrap resamplings of the original surface brightness profiles.
A spatially dependent conversion of 0.5-2.0 keV count rates to electron
densities was obtained from the X-ray spectroscopy. For this paper,
we assumed that the temperature and the count-rate conversion factor in the
central bin were constants.
\begin{figure}
\includegraphics*[width=0.5\textwidth,angle=0]{f1.ps}
\caption{Entropy profiles for Abell 1650 and Abell 2244. All gas at $\sim5$ keV with entropy $\lesssim170$ keV cm$^2$ has $t_c < H_0^{-1}$. The hatched region shows the locus of
entropy profiles for active clusters from the sample of \cite{Donahue2005B}.
\label{EntropyProfiles}}
\end{figure}
Figure~\ref{EntropyProfiles} shows that the entropy profiles of Abell 1650 and Abell 2244 are systematically different from the nine cooling-flow clusters in the sample of active clusters from \cite{Donahue2005B}.
The two radio-quiet clusters have flatter entropy profiles with larger values of central entropy.
To quantify this difference, we fit both a simple power law of $K = K_{100}(r/100~\rm{kpc})^{\alpha}$
and the same power law plus a central entropy $K =K_0 + K_{100}(r/100~\rm{kpc})^{\alpha}$ to the
entropy profiles, as was done for the active clusters in \citet{Donahue2005B}.
Table~\ref{table:entropyfits}
gives the best fits. We find that $\alpha \approx 0.6-0.8$ and $K_0 \approx 30-50 \,{\rm keV \, cm^2}$
in the radio-quiet clusters, in contrast to $\alpha \sim 1$ and $K_0 \approx 10 \,{\rm keV \, cm^2}$
for the active clusters. Figure~\ref{figure:EntPower} shows central entropy values
plotted as a function of 20~cm radio power, from the NVSS \citep{NVSS}. Abell 2244 has a weak, off-center
radio source that may not be associated with the cluster, plotted as an upper limit.
\begin{deluxetable}{lccccc}
\tablecaption{Entropy Profile Fit Results\label{table:entropyfits}}
\tablehead{
\colhead{Cluster} & \colhead{$K_0$} & \colhead{$K_{100}$} & \colhead{$\alpha$} &
\colhead{Total $\chi^2$ }&\colhead{ N } \\
\colhead{} & \colhead{keV cm$^2$} & \colhead{keV cm$^2$} & \colhead{} & \colhead{} & \colhead{(d.o.f.)} }
\startdata
Abell 1650 & $27\pm5$ & $150\pm7$ & $0.80\pm0.07$ & 12 & 47 \\
& $=0.00$ & $177$ & $0.56\pm0.02$ & 28 & 48 \\
Abell 2244 & $48\pm5$ & $ 102\pm8$ & $0.97\pm0.08$ & 7 & 31 \\
& $=0.00$ & $162\pm3$ & $0.54\pm0.02$ & 42 & 32 \\
Active Sample & $8\pm4$ & $150\pm50$ & $1.2\pm0.2$ & & \\
& $=0.00$\tablenotemark{*} & $144\pm24$ & $0.96\pm0.15$ & & \\
\enddata
\tablenotetext{*}{The fits set to $0.00$ entropy in the cores for the sample in Donahue
et al. (2005B) were quite poor, except for the case of Abell 2029.}
\end{deluxetable}
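The fitted profiles in Table \ref{table:entropyfits} are simple to evaluate; for example, the following sketch (not part of our analysis pipeline) reproduces $K(100~{\rm kpc}) = K_0 + K_{100}$ for the Abell 2244 best fit:

```python
def entropy_model(r_kpc, K0, K100, alpha):
    # K(r) = K0 + K100 * (r / 100 kpc)^alpha, in keV cm^2
    return K0 + K100 * (r_kpc / 100.0) ** alpha
```

With the Abell 2244 values $(K_0, K_{100}, \alpha) = (48, 102, 0.97)$, the model gives $150$ keV cm$^2$ at 100 kpc and flattens toward $K_0$ at small radii.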
\begin{figure}
\includegraphics*[width=0.5\textwidth,angle=0]{f2.eps}
\caption{ Central entropy $K_0$ versus radio power $\nu L_\nu$ from 20 cm observations \citep{NVSS}.
(See also \citet{LedlowOwen1995} and \citet{SBO1995}, and 6
cm upper limits for A2244 and A1650 from \citet{Burns1990}.)
\label{figure:EntPower} }
\end{figure}
\section{Discussion}
The significance of elevated central entropy in Abell~1650 and Abell~2244 is that
a larger central entropy implies a longer central cooling time compared to
clusters in \cite{Donahue2005B} of similar temperature. Assuming pure free-free
cooling, the cooling time for gas of temperature $T$ and entropy $K$ is
\begin{equation}
t_c \approx 10^8 \, {\rm yr} \left( \frac {K} {10 \, {\rm keV \, cm^2}} \right)^{3/2}
\left( \frac {kT} {5 \, {\rm keV}} \right)^{-1} \; \; .
\end{equation}
Thus, these two clusters, which show no evidence for feedback, have a central cooling
time $\sim 1 \,{\rm Gyr}$, while those that do show evidence for feedback have a central
cooling time $\sim 0.1 \, {\rm Gyr}$. According to the definition of \citet{Peres1998}, Abell~1650
and Abell~2244 were properly classified as cooling-flow clusters because $t_c < 5$~Gyr.
However, one does not expect to see significant cooling and condensation of gas
in these clusters for at least another $\sim 5 \times 10^8 \,{\rm yr}$. In other words, evidence
for feedback is seen in those clusters that can trigger it on a $\sim 10^8 \, {\rm yr}$ timescale
and not in clusters in which gas is not currently expected to be condensing. Here we discuss the
implications of this finding.
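In code, this cooling-time scaling reads as follows (a sketch; the sample values assume the central entropies $\sim 30$--$50~{\rm keV\,cm^2}$ and $kT \approx 5.5$ keV found above):

```python
def cooling_time_gyr(K_keV_cm2, kT_keV):
    # t_c ~ 1e8 yr * (K / 10 keV cm^2)^(3/2) * (kT / 5 keV)^(-1),
    # i.e. 0.1 Gyr at K = 10 keV cm^2 and kT = 5 keV (free-free cooling)
    return 0.1 * (K_keV_cm2 / 10.0) ** 1.5 * (5.0 / kT_keV)
```

A radio-quiet core with $K \approx 40~{\rm keV\,cm^2}$ at $kT = 5.5$ keV then has $t_c \approx 0.7$ Gyr, versus $\approx 0.1$ Gyr for an active core at $K \approx 10~{\rm keV\,cm^2}$.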
The most straightforward interpretation of the cooling-time dichotomy between active
and radio-quiet clusters is that radiative cooling in cluster cores triggers AGN feedback
when the central gas begins to condense. \citet{Donahue2005B} find that all nine of
their active clusters have very similar core entropy profiles, suggesting that this set
of clusters has settled into a quasi-steady configuration that is episodically heated by
AGN outbursts on a $\sim 10^8$~year timescale. \citet{VoitDonahue2005} show that
outflows of $\sim 10^{45} \, {\rm erg \, s^{-1}}$ naturally maintain the
observed characteristics of the entropy profiles in these clusters.
If that is the correct interpretation, then it is possible that Abell~1650 and Abell~2244 have
unusually long cooling times because they each experienced unusually strong AGN outbursts
$\gtrsim 1$~Gyr in the past. Raising the central entropy to the observed $\sim 30-50 \,
{\rm keV cm^2}$ levels would require an AGN outflow $\sim 10^{46} \, {\rm erg \, s^{-1}}$ \citep{VoitDonahue2005}. Such outbursts are rare but not unprecedented.
\citet{McNamara2005} have recently observed an outburst of this magnitude in
MS0735+7421, which now has a central entropy $\sim 30 \, {\rm keV \, cm^2}$.
The long cooling time following such an outburst would account for why we do
not see any sign of X-ray cavities in these two clusters.
It is also possible that the central gas in Abell~1650 and Abell~2244 has never cooled
to the point at which it can trigger a strong AGN outburst. That could happen, for example, if
frequent merger shocks have been able to support the core entropy at the $\sim 50 \,
{\rm keV \, cm^2}$ level for several Gyr, or if electron thermal conduction can resupply the
thermal energy radiated by the central gas. One can evaluate the efficacy of thermal
conduction by comparing the size of a radiatively cooling system to the Field length
\begin{equation}
\lambda_{\rm F} = \left( \frac {\kappa T} {n_e^2 \Lambda} \right)^{1/2}
\approx 4 \, {\rm kpc} \, \left( \frac {K} {10 \,{\rm keV \, cm^2}} \right)^{3/2} f_c^{1/2}
\; \; ,
\end{equation}
where $\Lambda$ is the usual cooling function and $\kappa = 6 \times 10^{-7} \, f_c T^{5/2}
\, {\rm erg \, s^{-1} \, cm^{-1} \, K^{-7/2}}$ is the Spitzer conduction coefficient with suppression
factor $f_c$. The approximation assumes free-free cooling ($\Lambda \propto T^{1/2}$), which conveniently
makes $\lambda_{\rm F}$ a function of entropy alone. At radii $\sim 100 \, {\rm kpc}$, we find that
$\lambda_{\rm F} \sim r$ in all the cooling-flow systems we have studied, implying that conduction
can plausibly balance cooling there, as long as $f_c \sim 1$. At radii $\sim 10 \, {\rm kpc}$ in systems
with signs of feedback, we find $\lambda_{\rm F} < r$ even for $f_c = 1$, implying that conduction
cannot balance cooling at small radii, in agreement with the findings of \citet{2004MNRAS.347.1130V}.
At those same small radii in the two systems without signs of feedback, we find $\lambda_{\rm F}
\approx r$ for $f_c \approx 1$, suggesting that these systems are potentially stabilized by
thermal conduction, which would account for their modest temperature gradients.
One speculation that emerges from this brief analysis of thermal conduction is that there
is a critical entropy profile $K(r) \approx 10 \, {\rm keV \, cm^2} \, f_c^{-1/3} (r/4 \, {\rm kpc})^{2/3}$
dividing conductively stabilized systems from those that require feedback. Clusters with
central entropy profiles below this line will continue to cool until some other heat source
intervenes, while conduction stabilizes those clusters above the line. One would then expect
the cluster population to bifurcate into systems with strong central temperature gradients
and feedback and those without either. Furthermore, a very powerful AGN outburst
could induce a transition from a feedback-stabilized state to a conductively-stabilized state by
raising the central entropy level to $\gtrsim 30 \, {\rm keV \, cm^2}$.
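Both the Field-length scaling and the critical entropy profile it implies can be checked in a few lines (a sketch; $f_c$ is the conduction suppression factor, and setting $\lambda_{\rm F} = r$ recovers the critical profile exactly):

```python
def field_length_kpc(K_keV_cm2, f_c=1.0):
    # lambda_F ~ 4 kpc * (K / 10 keV cm^2)^(3/2) * f_c^(1/2)
    return 4.0 * (K_keV_cm2 / 10.0) ** 1.5 * f_c ** 0.5

def critical_entropy(r_kpc, f_c=1.0):
    # K(r) ~ 10 keV cm^2 * f_c^(-1/3) * (r / 4 kpc)^(2/3),
    # obtained by solving lambda_F(K) = r for K
    return 10.0 * f_c ** (-1.0 / 3.0) * (r_kpc / 4.0) ** (2.0 / 3.0)
```

For instance, $K = 10~{\rm keV\,cm^2}$ gives $\lambda_{\rm F} = 4$ kpc, which is less than $r \sim 10$ kpc (conduction loses to cooling), while $K \gtrsim 30~{\rm keV\,cm^2}$ gives $\lambda_{\rm F} > 10$ kpc.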
Another potential heat source that has been suggested as a solution to the cooling-flow problem
is annihilation of dark matter particles such as neutralinos \citep{QinWu01,Totani2004}. In the
model of \citet{Totani2004}, the annihilation rate peaks in the center because of a spike in the
density profile owing to the central black hole. The steady heating rate in this model is not linked
as directly to baryon cooling as the AGN feedback model suggested here, but it is an interesting
alternative mechanism that could be explored further.
\section{Conclusions}
In order to test whether AGN heating compensates for radiative cooling in the
cores of clusters of galaxies, we have
used {\em Chandra} to observe a small sample consisting of two clusters with central cooling
times $< H_0^{-1}$,
yet no evidence for prominent AGN activity: Abell 1650 and Abell 2244. The X-ray properties of the
cores of these clusters indeed appear systematically different from cores with
more prominent radio emission. While the central cooling times are shorter
than a Hubble time and they have strong metallicity gradients,
they do not have significant central temperature gradients,
and their central entropy levels are markedly higher than in clusters with stronger radio
emission, corresponding to central cooling times of a billion years. Also, there is no
evidence in the X-ray surface brightness maps for fossil X-ray cavities produced by
a relatively recent episode of AGN heating. In contrast to the central cores of the clusters
with stronger radio emission, these cores may be stabilized by conduction if it is
operating at close to the Spitzer rate. We suggest that a tremendous AGN outburst, such
as the one shocking the ICM in MS0735+74 \citep{McNamara2005}, may have elevated the central
entropy of these clusters some $10^9$ years ago. Whether or not conduction is operative
in stabilizing these clusters cannot be determined, but it is energetically feasible.
Further theoretical development and a larger study are required to test whether the
timescales are consistent with the entropy profiles of a larger population.
The fact that these clusters with no evident feedback have
higher central entropy than clusters with obvious feedback suggests that
rare but influential AGN outbursts can dramatically change the original distribution of
entropy in clusters of galaxies. Alternatively, the intracluster gas of these clusters
may have started out with higher initial
entropy than the ICM in the active clusters, and it has not cooled to the point of sparking strong
AGN feedback.
\acknowledgements
Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Numbers SAO GO3-4159X and AR3-4017A issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Basic Procedures}
Before stating algorithms, we describe some elementary procedures which will be used as subroutines in our algorithms.
\textbf{\texttt{getPivots}}$(T,d,r)$ takes as input a set $T$ of points with distance function $d$ and a radius $r$. Starting with $P=\emptyset$, it performs a single pass over $T$. Whenever it finds a point $q$ which is not within distance $r$ from any point in $P$, it adds $q$ to $P$. Finally, it returns $P$. Thus, $P$ is a maximal subset of $T$ of points separated pairwise by distance more than $r$. We call points in $P$ \textit{pivots}. By Lemma~\ref{lem_pigeonhole}, if there is a set of $k$ points whose clustering cost for $T$ is at most $r/2$, then $|P|\leq k$. Moreover, due to maximality of $P$, its clustering cost for $T$ is at most $r$. Note that \texttt{getPivots}$()$ runs in time $O(|P|\cdot|T|)$.
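As a concrete illustration, here is a minimal Python sketch of \texttt{getPivots}$()$; the names and the brute-force distance checks are illustrative, not the paper's implementation:

```python
def get_pivots(T, d, r):
    """Single pass over T: keep any point farther than r from all pivots so far."""
    P = []
    for q in T:
        if all(d(q, p) > r for p in P):
            P.append(q)
    return P  # maximal subset of T pairwise separated by distance > r
```

For example, on the line metric with $r=2$, the stream $0,1,5,6,11$ yields pivots $0,5,11$, and every stream point is within distance $r$ of some pivot.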
\textbf{\texttt{getReps}}$(T,d,g,P,r)$ takes as input a set $T$ of points with distance function $d$, a group assignment function $g$, a subset $P\subseteq T$, and a radius $r$. For each $p\in P$, initializing $N(p)=\{p\}$, it includes in $N(p)$ one point, from each group, which is within distance $r$ from $p$ whenever such a point exists. Note that this is done while performing a single pass over $T$. This procedure runs in time $O(|P|\cdot|T|)$.
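A matching Python sketch of \texttt{getReps}$()$, under the assumption that $N(p)$ is seeded with $p$ itself for $p$'s own group (illustrative names, brute-force distances):

```python
def get_reps(T, d, g, P, r):
    """One pass over T: for each pivot p, keep one representative per group within r of p."""
    N = {p: {g(p): p} for p in P}      # seed each pivot's set with the pivot itself
    for q in T:
        for p in P:
            if g(q) not in N[p] and d(q, p) <= r:
                N[p][g(q)] = q         # first point of this group seen near p
    return {p: set(reps.values()) for p, reps in N.items()}
```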
Informally, if $P$ is a good but infeasible set of centers, then \texttt{getReps}$()$ finds representatives $N(p)$ of the groups in the vicinity of each $p\in P$. This, while increasing the clustering cost by at most $r$, gives us enough flexibility to construct a feasible set of centers. The procedure \texttt{HittingSet}$()$ that we describe next finds a feasible set from a collection of sets of representatives.
\textbf{\texttt{HittingSet}}$(\mathcal{N},g,\overline{k})$ takes as input a collection $\mathcal{N}=\{N_1,\ldots,N_K\}$ of pairwise disjoint sets of points, a group assignment function $g$, and a vector $\overline{k}=(k_1,\ldots,k_m)$ of capacities of the $m$ groups. It returns a feasible set $S$ intersecting as many $N_i$'s as possible. This reduces to finding a maximum cardinality matching in an appropriately constructed bipartite graph. It is important to note that this procedure does the post-processing: it doesn't make any pass over the input stream of points. This procedure runs in time $O(K^2\cdot\max_i|N_i|)$.
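The reduction can be sketched as follows: one side of the bipartite graph holds the sets $N_1,\ldots,N_K$, the other holds one slot per unit of group capacity, and a maximum matching is found by augmenting paths (Kuhn's algorithm). This is an illustrative sketch, not the max-flow implementation used in our experiments; it returns a set with at most $k_j$ points from group $j$, and padding the set up to exact capacities is omitted.

```python
def hitting_set(Ns, g, caps):
    """Pick a set S with at most caps[j] points of group j, hitting as many N_i as possible."""
    slots = [j for j, kj in enumerate(caps) for _ in range(kj)]   # one slot per capacity unit
    point_of = [{g(x): x for x in Ni} for Ni in Ns]               # one point per group in N_i
    match = [-1] * len(slots)                                     # slot -> matched set index

    def augment(i, seen):
        # Try to match set N_i to a free slot, rematching earlier sets if needed.
        for s, grp in enumerate(slots):
            if grp in point_of[i] and s not in seen:
                seen.add(s)
                if match[s] == -1 or augment(match[s], seen):
                    match[s] = i
                    return True
        return False

    for i in range(len(Ns)):
        augment(i, set())
    return {point_of[match[s]][slots[s]] for s in range(len(slots)) if match[s] != -1}
```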
For interested readers, the pseudocodes of these procedures, an explanation of \texttt{HittingSet}$()$, and the proof of its running time appear in Appendix~\ref{app:alg}.
\section{Research Directions}
One research direction is to improve
the theoretical bounds, e.g., get a better approximation ratio in the
distributed setting or prove a better hardness result. Another interesting
direction is to use fair $k$-center for fair rank aggregation using the number
of inversions between two rankings as the metric.
\subsection{A Distributed Algorithm}
In the distributed model of computation, the set $X$ of points to be clustered is distributed equally among $\ell$ processors. Each processor is allowed a restricted access to the metric $d$: it may compute the distance between only its own points. Each processor performs some computation on its set of points and sends a summary of small size to a coordinator. From the summaries, the coordinator then computes a feasible set $S$ of points which covers all the $n$ points in $X$ within a small radius. Let $X_i$ denote the set of points distributed to processor $i$.
\begin{algorithm}[tb]
\caption{Summary computation by the $i$'th processor}
\label{alg_slave}
\begin{algorithmic}
\State {\bfseries Input:} Set $X_i$, metric $d$ restricted to $X_i$, group assignment function $g$ restricted to $X_i$.
\State /* Compute local pivots. */
\State $p^i_1$ $\gets$ an arbitrary point in $X_i$.
\For{$j=2$ {\bfseries to} $k+1$}
\State $p^i_j\gets\argmax_{p\in X_i}\min_{j':1\leq j'<j}d(p,p^i_{j'})$.
\EndFor
\State $P_i\gets\{p^i_1,\ldots,p^i_k\}$.
\State $r_i\gets\min_{j':1\leq j'\leq k}d(p^i_{k+1},p^i_{j'})/2$.
\State /* Compute local representative sets. */
\State $\{L(p):p\in P_i\}$ $\gets$ \texttt{getReps}$(X_i,d,g,P_i,2r_i)$.
\State $L_i\gets\bigcup_{p\in P_i}L(p)$.
\State /* Send message to coordinator. */
\State Send $(P_i,L_i)$ to the coordinator.
\end{algorithmic}
\end{algorithm}
The algorithm executed by each processor $i$ is given by Algorithm~\ref{alg_slave}, which consists of two main steps. In the first step, the processor uses Gonzalez's farthest-point heuristic to find $k+1$ points. The first $k$ of those constitute the set $P_i$, which we will call the set of \textit{local pivots}. The point $p^i_{k+1}$ is the farthest point from the set of local pivots, and it is at distance $2r_i$ from that set. Thus, every point in $X_i$ is within distance $2r_i$ from the set of local pivots. This gives the following observation.
\begin{observation}\label{obs_Pi_Xi}
The clustering cost of $P_i$ for $X_i$ is $2r_i$.
\end{observation}
In the second step, for each local pivot $p\in P_i$, the processor computes a set $L(p)$ of local representatives in the vicinity of $p$. Finally, the set $P_i$ of local pivots and the union $L_i=\bigcup_{p\in P_i}L(p)$ of local representative sets is sent to the coordinator. Since $L(p)$ contains at most one point from any group, it has at most $m-1$ points other than $p$. Since $|P_i|=k$ we have the following observation.
\begin{observation}\label{obs_msg}
Each processor sends at most $km$ points to the coordinator.
\end{observation}
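Gonzalez's farthest-point step of Algorithm~\ref{alg_slave} can be sketched in a few lines of Python (illustrative names; the brute-force scan stands in for the processor's restricted distance access):

```python
def local_pivots(X, d, k):
    """Gonzalez: pick k+1 farthest-point iterates; return k local pivots and r_i."""
    p = [X[0]]                               # arbitrary starting point
    for _ in range(k):                       # repeatedly add the farthest point
        p.append(max(X, key=lambda q: min(d(q, pj) for pj in p)))
    r = min(d(p[k], pj) for pj in p[:k]) / 2 # p[k] is the (k+1)-st point
    return p[:k], r
```

By construction, every point of \texttt{X} is within distance $2r_i$ of the returned pivots.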
Moreover, the separation between the local pivots is bounded as follows.
\begin{lemma}\label{lem_tau}
For every processor $i$, we have $r_i\leq\text{OPT}\leq\tau$.
\end{lemma}
\begin{proof}
Suppose $r_i>\text{OPT}$. Then $\{p^i_1,\ldots,p^i_{k+1}\}\subseteq X_i$ is a set of $k+1$ points separated pairwise by distance more than $2\,\text{OPT}$. But $S^*$ is a set of at most $k$ points whose clustering cost for $X_i$ is at most $\text{OPT}$. This contradicts Lemma~\ref{lem_pigeonhole}, so $r_i\leq\text{OPT}$. Finally, $\text{OPT}\leq\tau$ holds by the choice of the guess $\tau$.
\end{proof}
Observation~\ref{obs_Pi_Xi} allows us to define a covering function $\text{cov}$ from $X$, the input set of points, to $\bigcup_{i=1}^{\ell}P_i$, the set of local pivots, as follows.
\begin{definition}\label{def_cov}
Let $p$ be an arbitrary point in $X$. Suppose $p$ is processed by processor $i$, that is, $p\in X_i$. Then $\text{cov}(p)$ is an arbitrary local pivot in $P_i$ within distance $2r_i$ from $p$.
\end{definition}
Since the processors send only a small number of points to the coordinator, it is quite possible that the optimal set $S^*$ of centers is lost in this process. In the next lemma, we show that the set of points received by the coordinator nevertheless contains a good and feasible set of centers.
\begin{lemma}\label{lem_5tau}
The set $L=\bigcup_{i=1}^{\ell}L_i$ contains a feasible set, say $B$, whose clustering cost for $\bigcup_{i=1}^{\ell}P_i$ is at most $5\tau$.
\end{lemma}
\begin{proof}
Consider any $c\in S^*$, and suppose it is processed by processor $i$. Then $d(c,\text{cov}(c))\leq2r_i$ by Definition~\ref{def_cov}. Recall that $L(\text{cov}(c))$, the output of \texttt{getReps}$()$, contains one point from every group which has a point within distance $2r_i$ from $\text{cov}(c)$. Therefore, $L(\text{cov}(c))\subseteq L_i$ contains some point, say $c'$, from the same group as $c$ (possibly $c$ itself), such that $d(c',\text{cov}(c))\leq2r_i$. Then $d(c,c')\leq4r_i\leq4\tau$ by the triangle inequality and Lemma~\ref{lem_tau}. Let $B=\{c':c\in S^*\}$. Clearly, $B\subseteq\bigcup_{i=1}^{\ell}L_i$. Since $B$ has at most as many points from any group as $S^*$, and $S^*$ is feasible, $B$ is feasible. The clustering cost of $B$ for $S^*$ is at most $4\tau$. The clustering cost of $S^*$ for $\bigcup_{i=1}^{\ell}P_i$ is at most $\tau$, because $\bigcup_{i=1}^{\ell}P_i\subseteq X$. By Lemma~\ref{lem_cost_triangle}, the clustering cost of $B$ for $\bigcup_{i=1}^{\ell}P_i$ is at most $5\tau$, as required.
\end{proof}
\begin{algorithm}[tb]
\caption{Coordinator's algorithm}
\label{alg_master}
\begin{algorithmic}
\State $X'\gets\emptyset$, $L\gets\emptyset$.
\State /* Receive messages from processors. */
\For{$i=1$ {\bfseries to} $\ell$}
\State Receive $(P_i,L_i)$ from processor $i$.
\State $X'\gets X'\cup P_i$, $L\gets L\cup L_i$.
\EndFor
\State /* Coordinator now has access to $d$ and $g$ restricted to $X'\cup L$, and capacity vector $\overline{k}=(k_1,\ldots,k_m)$. */
\State /* Compute global pivots. */
\State $P$ $\gets$ \texttt{getPivots}$(X',d,10\tau)$.
\State /* Compute global representative sets. */
\State $\{N(q):q\in P\}$ $\gets$ \texttt{getReps}$(L,d,g,P,5\tau)$.
\State /* Compute solution. */
\State $S$ $\gets$ \texttt{HittingSet}$(\{N(q):q\in P\},g,\overline{k})$.
\State {\bfseries Output} $S$.
\end{algorithmic}
\end{algorithm}
The algorithm executed by the coordinator is given by Algorithm~\ref{alg_master}. The coordinator constructs a maximal subset $P$ of the set of pivots $X'=\bigcup_{i=1}^{\ell}P_i$ returned by the processors such that points in $P$ are pairwise separated by distance more than $10\tau$. $P$ is called the set of global pivots. For each global pivot $q\in P$, the coordinator computes a set $N(q)\subseteq L=\bigcup_{i=1}^{\ell}L_i$ of its global representatives, all of which are within distance $5\tau$ from $q$. Due to the separation between points in $P$, the sets $N(q)$ are pairwise disjoint. Finally, a feasible set $S$ intersecting as many $N(q)$'s as possible is found and returned. (As before, it will be clear that $S$ intersects all the $N(q)$'s.)
\begin{theorem}
The coordinator returns a feasible set whose clustering cost is at most $17\tau$. This is a $17(1+\varepsilon)$-approximation when $\tau\in[\text{OPT},(1+\varepsilon)\text{OPT}]$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem_5tau}, $L$ contains a feasible set, say $B$, whose clustering cost for $X'$ is at most $5\tau$. For each $q\in P\subseteq X'$, let $b_q$ denote a point in $B$ that is within distance $5\tau$ from $q$. Since the points in $P$ are separated pairwise by distance more than $10\tau$, the $b_q$'s are all distinct.
By the property of \texttt{getReps}$()$, the set $N(q)$ returned by it contains a point, say $b'_q$, from the same group as $b_q$.
Let $B'=\{b'_q:q\in P\}$. This set $B'$ intersects $N(q)$ for each $q\in P$. Since $b'_q$ and $b_q$ are from the same group and $b_q$'s are all distinct, $B'$ contains at most as many points from any group as $B$ does. Since $B$ is feasible, so is $B'$. To summarize, there exists a feasible set, namely $B'$, intersecting all the $N(q)$'s.
Recall that $S$, the output of \texttt{HittingSet}$()$, is a feasible set intersecting as many $N(q)$'s as possible. Thus, $S$ also intersects all the $N(q)$'s.
Now, the clustering cost of $S$ for $P$ is at most $5\tau$, because $S$ intersects $N(q)$ for each $q\in P$. The clustering cost of $P$ for $X'$ is at most $10\tau$ by the maximality of the set returned by \texttt{getPivots}$()$. The clustering cost of $X'=\bigcup_{i=1}^{\ell}P_i$ for $X=\bigcup_iX_i$ is at most $2\tau$ because the clustering cost of each $P_i$ for $X_i$ is at most $2r_i\leq2\tau$. These facts and Lemma~\ref{lem_cost_triangle} together imply that the clustering cost of $S$, the output of the coordinator, for $X$ is at most $17\tau$.
\end{proof}
\begin{comment}
\begin{proof}
By Lemma~\ref{lem_5tau}, $L$ contains a feasible set, say $B$, whose clustering cost for $X'$ is at most $5\tau$. For each $q\in P\subseteq X'$, let $b_q$ denote a point in $B$ that is within distance $5\tau$ from $q$. Let $B'=\{b_q:q\in P\}$. We may assume $b_q\in N(q)$ for $q\in P$, because otherwise $N(q)$ contains a point from the same group as $b_q$, and we can replace $b_q$ by that point in $B$ while still maintaining $B$'s clustering cost guarantee. Thus, $B'$ intersects $N(q)$ for each $q\in P$. Moreover, $B'$ being a subset of $B$ is feasible. Recall that $S$, the output of \texttt{HittingSet}$()$, is a feasible set intersecting as many $N(q)$'s as possible. Thus, $S$ also intersects all the $N(q)$'s.
Now, the clustering cost of $S$ for $P$ is at most $5\tau$, because $S$ intersects $N(q)$ for each $q\in P$. The clustering cost of $P$ for $X'$ is at most $10\tau$ by the maximality of the set returned by \texttt{getPivots}$()$. The clustering cost of $X'=\bigcup_{i=1}^{\ell}P_i$ for $X=\bigcup_iX_i$ is at most $2\tau$ because the clustering cost of each $P_i$ for $X_i$ is at most $2r_i\leq2\tau$. These facts and Lemma~\ref{lem_cost_triangle} together imply that the clustering cost of $S$, the output of the coordinator, for $X$ is at most $17\tau$.
\end{proof}
\end{comment}
We note here that even though our distributed algorithm has the same approximation guarantee as Kale's one-pass algorithm,
it is inherently a different algorithm. Ours is extremely parallel whereas Kale's is extremely sequential. We now prove a bound on the running time.
\begin{theorem}
The running time of the distributed algorithm is $O(kn/\ell+mk^2\ell)$. By an appropriate choice of $\ell$, the number of processors, this can be made $O(m^{1/2}k^{3/2}n^{1/2})$.
\end{theorem}
\begin{proof}
For each processor $i$, computing local pivots as well as the call to \texttt{getReps}$()$ takes $O(|P_i|\cdot|X_i|)=O(kn/\ell)$ time each. For the coordinator, the separation between the global pivots and Lemma~\ref{lem_pigeonhole} together enforce $|P|\leq k$. Observation~\ref{obs_msg} implies $|L|\leq \ell\cdot\max_i|L_i|\leq mk\ell$. Therefore, \texttt{getPivots}$()$ takes time $O(|P|\cdot|X'|)=O(k^2\ell)$ and \texttt{getReps}$()$ takes time $O(|P|\cdot|L|)=O(mk^2\ell)$. The call to \texttt{HittingSet}$()$ takes time $O(k^2\max_q|N(q)|)=O(mk^2)$, thus limiting the coordinator's running time to $O(mk^2\ell)$. Choosing $\ell=\Theta(\sqrt{n/(mk)})$ minimizes the total running time to $O(m^{1/2}k^{3/2}n^{1/2})$.
\end{proof}
\begin{comment}
\begin{proof}
Consider any one processor $i$. It gets the set $X_i$ of size $n/\ell$. It takes $O(kn/\ell)$ time to compute the set $P_i$ of $k$ pivots. The call to \texttt{getReps}$()$ also takes $O(|P_i|\cdot|X_i|)=O(kn/\ell)$ time. Thus, each processor takes $O(kn/\ell)$ time, and since all the processors run in parallel, they finish their tasks in $O(kn/\ell)$ time.
The coordinator executes \texttt{getPivots}$()$ on the set $X'=\bigcup_{i=1}^{\ell}P_i$ of size at most $\ell\cdot\max_i|P_i|\leq k\ell$ and computes the set $P$ of global pivots. Recall that points in $P$ are separated pairwise by distance more than $10\tau$. On the other hand, by Lemma 4, there exists a set of at most $k$ points whose clustering cost for $P$ is at most $5\tau$. Therefore, by Lemma~\ref{lem_pigeonhole}, $|P|\leq k$. Thus, the running time of the \texttt{getPivots}$()$ call is $O(|P|\cdot|X'|)=k^2\ell$. Next, the running time of the call to \texttt{getReps}$()$ is $O(|P|\cdot|L|)$. Here, $|L_i|\leq mk$ by Observation~\ref{obs_msg}, and hence, $|L|=|\bigcup_{i=1}^{\ell}L_i|\leq mk\ell$. Thus, \texttt{getReps}$()$ takes time $O(k\cdot mk\ell)=O(mk^2\ell)$, and this dominates the time taken by \texttt{getPivots}$()$. Finally, we pass at most $k$ sets of size at most $m$, namely the $N(q)$'s, to \texttt{HittingSet}$()$. Therefore, \texttt{HittingSet}$()$ takes time $O(mk^2)$, which is also dominated by the time taken by \texttt{getReps}$()$. Thus, the coordinator takes $O(mk^2\ell)$ time.
To summarize, the running time of our distributed algorithm is $O(kn/\ell+mk^2\ell)$. By choosing $\ell=\Theta(\sqrt{n/(mk)})$, the running time is minimized to $O(m^{1/2}k^{3/2}n^{1/2})$.
\end{proof}
\end{comment}
\section{Experiments}
All experiments are run on an HP EliteBook 840 G6 with an Intel\textsuperscript{\textregistered} Core\texttrademark~i7-8565U
CPU @ 1.80GHz having 4 cores and 15.5 GiB of RAM, running Ubuntu 18.04 and Anaconda. We make our code available on GitHub\footnote{\url{https://github.com/sagark4/fair_k_center}}.
We perform our experiments on a massive synthetic dataset, several real datasets, and small synthetic datasets.
The same implementation is used for the large synthetic dataset and the real datasets, but a slightly different implementation is used for small synthetic datasets.
Before presenting the experiments, we first discuss the implementation details that are common to all three experiments.
Specific details are mentioned along with the corresponding experimental setup.
For all our algorithms, if the solution size is less than $k$, then we extend the
solution using an arbitrary solution of size $k$ (which also certifies the simple upper bound).
In the case of the distributed algorithm, an arbitrary solution is computed using only the points received by the coordinator. Also, one extra pass is spent on computing the solution cost.
In the processors' algorithm, we return $r_i$ along with $(P_i,L_i)$.
No randomness is used for any optimization, making our algorithms completely deterministic. Access to distance between two points is via a method \texttt{get\_distance()}, whose implementation
depends on the dataset.
We use the code shared by Kleindessner et al.\ for their algorithm on GitHub\footnote{\url{https://github.com/matthklein/fair_k_center_clustering}}, exactly as is, for all datasets. In their code, the distance is assumed to be stored in an $n \times n$ distance matrix.
As mentioned in the introduction, we give new implementations for
existing algorithms---those of Chen et al.\ and Kale (we choose to implement Kale's two-pass algorithm only, because it is the better of his two).
Instead of using a matroid intersection subroutine, whose running time can be superquadratic in $n$,
we reduce the postprocessing steps of these algorithms to finding a maximum
matching in an appropriately constructed graph (for details, see \texttt{HittingSet()} in Appendix~\ref{app:alg}). We further reduce maximum matching to max-flow which is computed using Python package NetworkX.
This results in a postprocessing time of
$O(k^2n)$ for Chen et al.~and $O(k^3)$ for Kale.
This step alone makes Chen et al.'s algorithm practical for much larger $n$
than observed by Kleindessner et al.
\paragraph*{Handling the guesses}
For all algorithms (except Kleindessner et al.'s), we use $\varepsilon = 0.1$. For Chen et al.'s algorithm, we use geometric guessing starting with the lower bound given by the farthest point heuristic
(call this Gonzalez's lower bound).
For our two-pass algorithm and Kale's algorithm, we use geometric guessing starting with the simple lower bound until the upper bound given by an arbitrary solution.
The values for the guesses $\tau$ in the coordinator's algorithm are scaled down by a factor of $5.1$.
Concretely, let $r_1$ be the maximum among the $r_i$'s.
Then the guesses take values in $\frac{r_1}{5.1},\frac{1.1\cdot r_1}{5.1},\frac{(1.1)^2\cdot r_1}{5.1},\ldots$, until a feasible solution is found.
The factor of $5.1$ ensures that when \texttt{getPivots()} is run with the parameter $10\tau < 2r_1$, we end up picking at least $k$ pivots from $X'$.
We now proceed to present our experiments.
To show the effectiveness of our algorithms on massive datasets, we run them on
a 100 GB synthetic dataset, which is a collection of 4,000,000 points in
1000-dimensional Euclidean space, where each coordinate is a uniformly random real in
$(0, 10000)$.
Each point is assigned one of the four groups uniformly at random,
and capacity of each group is set to $2$.
Just reading this data file takes more than four minutes.
Our two-pass algorithm takes 1.95
hours and our distributed
algorithm takes 1.07 hours; both compute a solution of almost the same cost, even though their theoretical guarantees are different. Here, we use a block size of $10000$ in the distributed algorithm, i.e., the number of
processors is $\ell = 400$.
\paragraph*{For the above dataset and the real datasets:} The input is read from the input file and attributes are
read from the attribute file, one data point at a time, and fed to the
algorithms. This is done in order to be able to handle the 100 GB dataset. Using Python's multiprocessing library, we are able to use
four cores of the
processor\footnote{\url{https://www.praetorian.com/blog/multi-core-and-distributed-programming-in-python}}.
\subsection{Real Datasets}
We use three real world
datasets: Celeb-A~\cite{liu_etal15}, Sushi~\cite{sushi_dataset}, and
Adult~\cite{adults_dataset}, with $n = 1000$ by selecting the first 1000 data
points (see Table~\ref{tab:real}).
\begin{table*}[t]
\caption{Comparison of solution quality of algorithms for fair $k$-center on
real datasets. Each column after the third corresponds to an algorithm and
shows the ratio of its cost to Gonzalez's lower bound. Note that this is not
the approximation ratio. Our two-pass algorithm is the best for the majority of
settings. A dark shaded cell marks the best-cost algorithm and a lightly shaded cell the second best.}
\label{tab:real}
\centering
\begin{tabular}{|l|l|p{2cm}|l|l|p{2cm}|l|l|}
\hline
Dataset & Capacities & Gonzalez's Lower Bound & Chen et al. & Kale & Kleindessner et al. & Two pass & Distributed \\ \hline
CelebA & [2, 2] & \hfill 30142.4 & 1.9 & 1.9 & \cellcolor{gray!25!white}1.85 & \cellcolor{gray!80!white}1.76 & \cellcolor{gray!80!white}1.76 \\ \hline
CelebA & [2, 2, 2, 2] & \hfill 28247.3 & 2.0 & 2.0 & \cellcolor{gray!25!white}1.9 & \cellcolor{gray!80!white}1.88 & \cellcolor{gray!80!white}1.88 \\ \hline
SushiA & [2, 2] & \hfill 11.0 & 2.18 & 2.18 & 2.27 & \cellcolor{gray!80!white}2.0 & \cellcolor{gray!40!white}2.09 \\ \hline
SushiA & [2] * 6 & \hfill 8.5 & \cellcolor{gray!25!white}2.35 & \cellcolor{gray!25!white}2.35 & \cellcolor{gray!80!white}2.24 & \cellcolor{gray!25!white}2.35 & \cellcolor{gray!80!white}2.24 \\ \hline
SushiA & [2] * 12 & \hfill 7.5 & \cellcolor{gray!25!white}2.13 & \cellcolor{gray!25!white}2.13 & \cellcolor{gray!80!white}2.0 & 2.4 & 2.4 \\ \hline
SushiB & [2, 2] & \hfill 36.5 & \cellcolor{gray!80!white}1.81 & \cellcolor{gray!80!white}1.81 & 2.11 & \cellcolor{gray!80!white}1.81 & \cellcolor{gray!25!white}1.86 \\ \hline
SushiB & [2] * 6 & \hfill 34.0 & 2.0 & \cellcolor{gray!25!white}1.82 & 2.12 & \cellcolor{gray!80!white}1.79 & 2.0 \\ \hline
SushiB & [2] * 12 & \hfill 32.0 & \cellcolor{gray!80!white}1.94 & \cellcolor{gray!80!white}1.94 & \cellcolor{gray!25!white}2.09 & \cellcolor{gray!80!white}1.94 & \cellcolor{gray!80!white}1.94 \\ \hline
Adult & [2, 2] & \hfill 4.9 & 2.04 & 2.13 & 2.44 &\cellcolor{gray!80!white} 1.9 & \cellcolor{gray!25!white}2.02 \\ \hline
Adult & [2] * 5 & \hfill 3.92 & 2.66 & 2.66 & \cellcolor{gray!80!white}2.02 & 2.36 & \cellcolor{gray!25!white}2.35 \\ \hline
Adult & [2] * 10 & \hfill 2.76 & 2.75 & \cellcolor{gray!80!white}2.41 & \cellcolor{gray!25!white}2.48 & \cellcolor{gray!25!white}2.48 & 2.75 \\ \hline
\end{tabular}
\end{table*}
The Celeb-A dataset is a set of 202,599 images of human faces with attributes
including male/female and young/not-young, which we use. We use Keras to
extract features from each image~\cite{feature_extraction} via the pretrained
neural network VGG16, which returns a 15360-dimensional real vector for each
image. We use the $\ell_1$ distance as the metric and two settings of groups:
male/female with capacity of $2$ each (denoted by $[2,2]$ in
Table~\ref{tab:real}), and \{male, female\} $\times$ \{young, not-young\} with
capacity of $2$ each (denoted by $[2, 2, 2, 2]$ in Table~\ref{tab:real}).
The Sushi dataset records the preferences of 5000 individuals for different
types of sushi, with attributes of male/female and six possible age groups. In
SushiB, a preference is given by a score, whereas in SushiA, a preference is
given by an order. For SushiB, we use the $\ell_1$ distance, whereas for SushiA, we
use the number of inversions, i.e., the distance between two sushi rankings is
the number of pairs $\{i, j\}$ such that sushi $i$ is preferred over sushi
$j$ by one ranking and not the other. For both SushiA and SushiB, we use three
different group settings: gender only, age group only, and the combination
of gender and age group. This results in $2$, $6$, and $12$ groups,
respectively, and the capacities appear as $[2, 2]$, $[2] * 6$, and $[2] * 12$,
respectively, in Table~\ref{tab:real}.
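The inversion distance used for SushiA can be computed directly from the two position vectors; a quadratic-time Python sketch (illustrative representation: \texttt{rank[i]} is the position of sushi $i$ in the ranking):

```python
from itertools import combinations

def inversions(rank_a, rank_b):
    """Number of pairs {i, j} ordered differently by the two rankings."""
    return sum(1 for i, j in combinations(range(len(rank_a)), 2)
               if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0)
```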
Motivated by Kleindessner et al., we consider the Adult dataset~\cite{adults_dataset}, which is extracted from US census data and
contains a male/female attribute and six numerical attributes that we use as
features. We normalize this dataset to have zero mean and standard deviation of
one and use the $\ell_1$ distance as the metric. There are two attributes that can
be used to generate groups: gender and race (Black, White, Asian Pacific
Islander, American Indian Eskimo, and Other). Individually and in combination,
this results in $2$, $5$, and $10$ groups, respectively.
For comparison, see Table~\ref{tab:real}. On the majority of
settings, our two-pass algorithm outputs a solution with cost smaller than the
rest. We reiterate for emphasis that in addition to being at least as good as
the best in terms of solution quality, our algorithms can handle massive
datasets.
For the distributed algorithm, we use a block size of 25, i.e., the number of
processors is $1000/25 = 40$; theoretically, using $\approx\sqrt{n}$ processors gives the maximum speedup.
\subsection{Synthetic Datasets}
Motivated by the experiments in Kleindessner et al.,
we use the Erd\H{o}s-R{\'e}nyi graph metric to compare the running time and cost of our algorithms with existing algorithms.
For a fixed natural number $n$, a random metric on $n$ points is generated as follows. First, a random undirected graph on $n$ vertices is sampled in which each edge is independently picked with probability $2\log n/n$.
Second, every edge is assigned a uniformly random weight in $(0,1000)$.
The points in the metric correspond to the vertices of the graph, and the pairwise distances between the points are given by the shortest path distance.
In addition, if $m$ is the number of groups, then each point in the metric is assigned a group in $\{1,2,\ldots,m\}$ uniformly and independently at random.
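The construction above can be sketched in Python; Floyd--Warshall suffices at these instance sizes (illustrative code, not our experimental harness):

```python
import math
import random

def random_graph_metric(n, seed=0):
    """Erdos-Renyi graph with p = 2 ln(n)/n, uniform (0, 1000) edge weights;
    the metric is given by all-pairs shortest paths (Floyd-Warshall)."""
    rng = random.Random(seed)
    p = 2 * math.log(n) / n
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                    # include edge {i, j}
                d[i][j] = d[j][i] = rng.uniform(0, 1000)
    for k in range(n):                              # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The result is symmetric, zero on the diagonal, and satisfies the triangle inequality, so it is a valid (possibly infinite, if the graph is disconnected) metric.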
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{running_time.pdf}
\includegraphics[scale=0.55]{running_time_zoom.pdf}
\vspace{-0.5cm}
\caption{Comparing Running Times}
\label{fig:runtimes}
\end{figure}
Figure \ref{fig:runtimes} shows plots of running time against instance size $n$; the bottom one is a zoom-in of the top one to the lower four plots.
In this experiment, $n$ takes values in $\{100,150,200,\ldots,350\}$.
The number of groups is fixed to $5$ and the capacity of each group is $2$.
For each fixing of $n$, we run the five algorithms on $20$ independent random metric instances of size $n$ to compute the average running time.
Our two-pass algorithm and Kleindessner et al.'s algorithm are the fastest.
Our distributed algorithm is faster than Chen et al.'s algorithm but slower than Kale's.
\begin{figure}[h]
\centering
\includegraphics[trim={0.0cm 0.5cm 0 0.73cm},clip,scale=0.65]{approximation.pdf}
\caption{Comparing Approximation Ratios}
\label{fig:approx}
\end{figure}
Figure \ref{fig:approx} shows the ratios of the cost of various algorithms to Gonzalez's lower bound.
For this comparison, the instance size is fixed to $500$ and capacities are $[5, 5, 5], [2, 2, 11], [2, 2, 8, 8], [3, 3, 3, 11], [1, 2, 3, 4, 5]$, $[3, 3, 4, 4, 5], [4, 4, 5, 5, 5, 10], [2, 2, 2, 2, 2, 2].$
Here again, for every fixing of the capacities, the algorithm is run on $20$ independent random metric instances to compute the average costs.
Chen et al.'s algorithm achieves the least cost for almost all settings, and Kleindessner et al.'s algorithm gives the highest cost on the majority (5 out of 8) of settings.
Our two-pass algorithm and Kale's algorithm perform similarly to each other and are quite close to Chen et al.'s. Our distributed algorithm is somewhere in between Chen et al.'s and Kleindessner et al.'s.
Note that the ratio of the costs of any two algorithms is at most $1.167$.
In the implementation of our two-pass algorithm, we use geometric guessing starting with the simple lower bound until the algorithm returns a success instead of running all guesses. This is done for a fair comparison in terms of running time.
\subsection{Handling the Guesses}
\label{subsec:guesses}
Given an arbitrarily small parameter $\varepsilon$, a lower bound $L
\le \opt$, and an upper bound $U \ge \opt$, we run our algorithms for guess
$\tau \in \{L, L(1+\varepsilon), L(1+\varepsilon)^2, \ldots, U\}$, which means at most
$\log_{1+\varepsilon}(U/L)$ guesses. We call this method geometric guessing starting at $L$ until $U$. For the guess $\tau \in [\opt, (1 + \varepsilon)\opt]$, our
algorithms compute a solution successfully.
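A Python sketch of the guess enumeration (illustrative); consecutive guesses differ by a factor $1+\varepsilon$, so some guess lands in $[\opt, (1+\varepsilon)\opt]$ whenever $\opt \in [L, U]$:

```python
def geometric_guesses(L, U, eps):
    """Yield L, L(1+eps), L(1+eps)^2, ..., ending with the first value >= U."""
    tau = L
    while tau < U:
        yield tau
        tau *= 1 + eps
    yield tau
```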
In the distributed algorithm, by Lemma~\ref{lem_tau}, for each
processor, $r_i \le \opt$. Therefore,
$\max_i r_i \le \opt$. We then run Algorithm~\ref{alg_master} with geometric guessing starting at $\max_i r_i$ until it
successfully finds a solution.
For the two-pass algorithm, let $P$ be the set of first
$k + 1$ points; then $L = \min_{x_1, x_2 \in P} d(x_1, x_2)/2$ is a
lower bound (call this the simple lower bound). Note that no passes need to be spent to compute the simple lower bound. We also need an upper bound
$U \ge \opt$. One can compute an arbitrary solution and its cost---which will
be an upper bound---by spending two more passes (call this the simple upper bound). This results in a four-pass
algorithm. To obtain a truly two-pass algorithm with space usage
$O(km \log(1/\eps)/\eps)$, one can use Guha's trick~\cite{guha09}: start
$O(\log(1/\eps)/\eps)$ guesses, and if a run with guess $\tau$ fails, continue the run with guess $\tau/\varepsilon$, treating the old summary as the initial
stream for this guess; see also \cite{kale19} for details. But obtaining and using an upper bound is convenient and easy to implement in practice.
\section{Introduction}
Data summarization is a central problem in the area of machine learning, where
we want to compute a small summary of the data. For example, if the input data
is enormous, we do not want to run our machine learning algorithm on the whole
input but on a small \emph{representative} subset. How we select such a
representative summary is quite important. It is well known that if the input is biased, then the
machine learning algorithms trained on this data will exhibit the same bias.
This is a classic example of \emph{selection bias}, here exhibited by the
algorithms themselves. Currently used algorithms for data summarization have
been shown to be biased with respect to attributes such as gender, race, and age
(see, e.g., \cite{kay_etal15}), and this motivates the fair data summarization
problem. Recently, the fair $k$-center
problem was shown to be useful in computing a fair summary~\cite{kleindessner_etal19a}.
In this paper, we continue the study of
fair $k$-center and add to the series of works on fairness in machine learning
algorithms. Our main results are streaming and distributed algorithms for fair
$k$-center. These models are extremely suitable for handling massive datasets.
The fact that the data summarization problem arises precisely when the input is huge makes our
work all the more relevant!
Suppose the input is a set of real vectors with a gender attribute and you want to compute a
summary of $k$ data points such that both\footnote{sincere apologies
to the people who identify with neither} genders are represented equally. Say
we are given a summary $S$. The cost we pay for not including a point in $S$ is
its Euclidean distance from $S$. Then the cost of $S$ is the
largest cost of a point. We want to compute a summary with minimum cost that is
also fair, i.e., contains $k/2$ women and $k/2$ men. In one sentence, we want
to compute a fair summary such that the point that is farthest from this summary
is not too far. Fair $k$-center models this task: let the number of points
in the input be $n$, the number of \emph{groups} be $m$, target summary size be
$k$, and we want to select a summary $S$ such that $S$ contains $k_j$ points belonging to Group~$j$, where $\sum_j k_j = k$.
And we want to minimize $\max_x d(x, S) = \max_x \min_{x'\in S} d(x, x')$, where $d$ denotes the distance function. Note that each point
belongs to exactly one of the $m$ groups; for the case of gender, $m = 2$.
We call the special case
where
$m = 1$ and $k_1 = k$ simply $k$-center throughout this paper.
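In code, the objective and the fairness constraint read as follows (a minimal sketch; \texttt{group} and \texttt{caps} are hypothetical encodings of the group membership and the targets $k_j$):

```python
from collections import Counter

def clustering_cost(points, S, dist):
    """max_x d(x, S) = max_x min_{x' in S} d(x, x')."""
    return max(min(dist(x, c) for c in S) for x in points)

def is_fair(S, group, caps):
    """Does S contain exactly caps[j] points of group j, for every j?"""
    counts = Counter(group[x] for x in S)
    return all(counts.get(j, 0) == kj for j, kj in caps.items())
```

For instance, on the real line with two groups, \texttt{caps = \{0: k//2, 1: k//2\}} encodes the equal-representation example above.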
For $k$-center, there are simple greedy algorithms with an approximation ratio of $2$~\cite{gonzalez85,hochbaum85}, and
getting an approximation ratio better than $2$ is
NP-hard~\cite{hsu79}. The NP-hardness result also applies to the more general fair $k$-center.
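For concreteness, the greedy farthest-point traversal of Gonzalez can be sketched as follows (a standard textbook sketch, not the algorithm of this paper):

```python
def gonzalez(points, k, dist):
    """Greedy 2-approximation for k-center: start anywhere, then
    repeatedly add the point farthest from the current centers."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda x: min(dist(x, c) for c in centers))
        centers.append(farthest)
    return centers
```

Each of the $k-1$ iterations scans all $n$ points, giving $O(nk)$ time.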
The best algorithm known for fair $k$-center is a
$3$-approximation algorithm that runs in time $O(n^2\log n)$~\cite{chen_etal16}.
A linear-time algorithm with approximation guarantee of $O(2^m)$, which is
constant if $m$ is, was given recently~\cite{kleindessner_etal19a}. Both of
these algorithms work only in the traditional random access machine model, which
is suitable only if the input is small enough to fit into fast memory. We give
a two-pass streaming algorithm that achieves an approximation ratio arbitrarily
close to $3$. In the streaming setting, input is thought to arrive one point at
a time, and the algorithm has to process each point quickly, using as little working
memory as possible---ideally linear in the size of a feasible solution, which is
$k$ for fair $k$-center. Our algorithm processes each incoming input point in
$O(k)$ time and uses space $O(km)$, which is $O(k)$ if the number of groups $m$
is very small. This improves the space usage of the existing streaming
algorithm~\cite{kale19} almost quadratically, from $O(k^2)$, while also
matching the best approximation ratio achieved by Chen et al. We also give the first
distributed, constant approximation algorithm where the input is divided among
multiple processors, each of which performs one round of computation and sends a message
of size $O(km)$ to a central processor, which then computes the final solution. Both
rounds of computation are linear time. All the approximation, communication,
space usage, and running-time guarantees are provable. To complement our
distributed algorithm, we prove that any distributed algorithm, even randomized, that works by
each processor sending a subset of its input to a central processor which outputs
the solution, needs to essentially communicate the whole input to achieve an approximation ratio better than
$4$. This, in fact, applies even to the special case of $k$-center, showing that the
known $4$-approximation algorithm~\cite{malkomes1_etal15} for $k$-center is
optimal.
We perform experiments on real and synthetic datasets and show that our
algorithms are as fast as the linear-time algorithm of Kleindessner et al.,
while achieving improved approximation ratio, which matches that of Chen et al.
Note that this comparison is possible only for small datasets, since those
algorithms work neither in the streaming nor in the distributed setting. We also
run our algorithms on a really large synthetic dataset of size 100GB, and show that
their running time is only one order of magnitude more than the time taken just to read the input dataset
from secondary memory.
As a further contribution, we give faster implementations of existing
algorithms---those of Kale and Chen et al.
\subsection*{Related work}
Chen et al.\ gave the first polynomial-time algorithm that achieves a $3$-approximation. Kale achieves almost the same ratio using just two passes and also gives a one-pass $(17 + \varepsilon)$-approximation algorithm, both using
$O(k^2)$ space.
One way to compute a fair summary, incomparable to ours, is to use a determinantal
measure of diversity~\cite{celis_etal18a}. Fair clustering has been studied
under another notion of fairness, where each cluster must be balanced with
respect to all the groups (no over- or under-representation of any
group)~\cite{chierichetti_etal17}, and this line of work also has received a lot
of attention in a short span of
time~\cite{BeraCFN19,AhmadianE0M19,bandyapadhyay_et_al19,schmidt_etal20,jia_etal20}.
The $k$-median clustering problem with fairness constraints was first considered
by~\cite{hajiaghayi_etal10}, and with more general matroid constraints it was
studied by~\cite{krishnaswamy_etal11}. The works of Chen et al.\ and Kale
actually apply to matroid constraints as well.
There has been a lot of work on fairness, and we
refer the reader to the overviews in~\cite{kleindessner_etal19a,celis_etal18a}.
\section{Distributed $k$-Center Lower Bound}
In this section we present the formal details of the lower bound discussed in Section~4 of the main paper.
For a natural number $n$, $[n]$ denotes the set $\{1,2,\ldots,n\}$.
\paragraph{The metric $\mathcal{M}(n')$.}
The points in the metric space on $n = 9n'+7$ points are given by
\[S := \{a^*,b_1^*,b_2^*,c^*,a,b,c\} \cup S_1 \cup S_2 \cup S_3,\]
where $|S_1| = |S_2| = |S_3| = 3n'$.
Note that $S_1,S_2,S_3$ are pairwise disjoint and are also disjoint from $\{a^*,b_1^*,b_2^*,c^*,a,b,c\}$. See Figure \ref{metric} for a drawing of the metric; importantly, $x$ is not a point in the metric but is only used to define the pairwise distances.
\begin{figure}[h]
\centering{\includegraphics[width=.4\textwidth]{metric1.pdf}}
\caption{\footnotesize The underlying metric for $n'=2$}
\label{metric}
\end{figure}
The pairwise distances are given in \Cref{metric-pairwisedist}.
Note that if a table entry is indexed by sets, then the entry corresponds to the distance between distinct points in those sets.
\begin{table}[htp]
\ra{1.5}
\centering
\adjustbox{max width = \textwidth}{
\begin{tabular}{@{}c|ccccccccccc@{}}
& $a^*$ & $b_1^*$ & $b_2^*$ & $c^*$ & $a$ & $b$ & $c$ & $S_1$ & $S_2$ & $S_3$\\
\toprule
$a^*$& $0$ & $1$& $1$ &$2$ & $1$ & $2$ & $3$ & $1$ & $2$ & $3$\\
$b_1^*$ & $1$ & $0$ & $2$ &$1$ & $2$ & $1$ & $2$ & $2$ & $1$ & $2$ \\
$b_2^*$ & $1$ & $2$ &$0$ &$1$ & $2$ & $1$ & $2$ & $2$ & $1$ & $2$ \\
$c^*$ & $2$ & $1$ & $1$ &$0$ & $3$ & $2$ & $1$ & $3$ & $2$ & $1$ \\
$a$ & $1$ & $2$ & $2$ &$3$ & $0$ & $3$ & $4$ & $2$ & $3$ & $4$ \\
$b$ & $2$ & $1$ & $1$ &$2$ & $3$ & $0$ & $3$ & $3$ & $2$ & $3$ \\
$c$ & $3$ & $2$ & $2$ &$1$ & $4$ & $3$ & $0$ & $4$ & $3$ & $2$ \\
$S_1$ & $1$ & $2$ & $2$ &$3$ & $2$ & $3$ & $4$ & $2$ & $2$ & $2$ \\
$S_2$ & $2$ & $1$ & $1$ &$2$ & $3$ & $2$ & $3$ & $2$ & $2$ & $2$\\
$S_3$ & $3$ & $2$ & $2$ &$1$ & $4$ & $3$ & $2$ & $2$ & $2$ & $2$
\end{tabular}}
\caption{Pairwise Distances}
\label{metric-pairwisedist}
\end{table}
\paragraph{Input Distribution $\mathcal{D}$ on the Processors' Inputs.}
For $i\in [3]$, let $S_i^1, S_i^2, S_i^3$ be an arbitrary equi-partition of $S_i$.
Moreover, let $\pi:S \rightarrow [n]$ be a uniformly random bijection. Define the sets
$Y_1^j = \pi\left(\{b_1^*,b_2^*,a\} \cup S_1^j\right)$, $Y_2^j = \pi\left(\{a^*,c^*,b\} \cup S_2^j\right)$ and $Y_3^j = \pi\left(\{b_1^*,b_2^*,c\} \cup S_3^j\right)$, for $j\in [3]$.
By the constraints on the sizes of $S_1,S_2,S_3$, we have that each $Y_i^j$ has the same size.
For simplicity of notation, we rename $Y_i^j$ to $Y_{3(i-1)+j}$.
The $i$-th processor's input is $Y_{\Gamma(i)}$, where $\Gamma: [9]\rightarrow [9]$ is a uniformly random permutation.
Note that neither the processors nor the coordinator know the permutations $\pi$ and $\Gamma$.
\begin{theorem}\label{thm:lb}
Consider any distributed algorithm for the $9$ processor $3$-center problem on $\mathcal{M}(n')$ and input distribution $\mathcal{D}$, in which each processor communicates an $\ell$-sized subset of its input points, and the coordinator outputs $3$ of the received points.
If $\ell \leq n'/18$, then with probability at least $1.4\cdot 10^{-4}$, the output is no better than a $4$-approximation.
\end{theorem}
Here, although the probability with which the coordinator fails to output a better-than-$4$-approximation is only $1.4\cdot 10^{-4}$, it can be \emph{amplified} to $1-\varepsilon$, for any $\varepsilon > 0$.
We discuss the amplification result before presenting the proof of the above theorem.
\begin{theorem}
Let $\varepsilon > 0$, $\beta = 1-1.4\cdot 10^{-4}$ and $\alpha = \left.\left\lceil \frac{100}{91}\cdot \log \left(\frac{1}{\varepsilon}\right) \middle/ \log\left(\frac{1}{\beta}\right) \right\rceil\right.$.
Consider any distributed algorithm for the $9$ processor $3\alpha$-center problem, in which each processor communicates an $\ell$-sized subset of its input points, and the coordinator outputs $3$ of the received points.
There exists a metric space and a distribution on input points, where each processor receives $(n'+3)\alpha$ points, such that if $\ell \leq n'\alpha/1800$, then with probability at least $1-\varepsilon$, the output is no better than a $4$-approximation.
\end{theorem}
\begin{proof}
The underlying metric space consists of $\alpha$ disjoint copies of $\mathcal{M}(n')$ separated by an arbitrarily large distance from one another.
Each processor receives the corresponding set of points from each copy, as in the description of $\mathcal{D}$.
First, note that since each processor communicates at most $n'\alpha/1800$ points in total, for any processor there are at most $\alpha/100$ copies from which it communicates more than $n'/18$ points.
Summing over the $9$ processors, there are at least $\left(1-\frac{9}{100}\right)\cdot \alpha$ copies from which no processor sends more than $n'/18$ points.
Theorem~\ref{thm:lb} implies that, on each of these at least $91\alpha/100$ independent copies, the coordinator's output is no better than a $4$-approximation with probability at least $1-\beta$.
Using the independence between the copies further, we can conclude that the coordinator's output fails to be any better than a $4$-approximation with probability at least $1-\beta^{91\alpha/100} \geq 1 - \varepsilon$, which follows from the inequality
$91\alpha/100 \geq \left.\log \left(\frac{1}{\varepsilon}\right) \middle/ \log \frac{1}{\beta}\right.$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:lb}]
For $i\in [9]$, define $B_i$ to be the event that the $i$-th processor communicates a point in $\pi(\{a^*,b_1^*,b_2^*,c^*,a,b,c\})$.
Moreover, define the event $G = \cap_{i=1}^9 B_i^c$, where $B_i^c$ is the complement of $B_i$.
First, we claim that $\Pr[B_i] \leq \ell/n'$.
Indeed, this is true since all points in each processor's input are equidistant and the labels of the input points have been randomly permuted.
This claim in conjunction with the union bound implies that $\Pr[G] \geq 1-\frac{9\ell}{n'}$.
We proceed to show that there exist three processors, say $1,2$ and $3$, such that for every fixing of the permutation $\Gamma$, the probability conditioned on $G$ that the coordinator's output is a subset of the points received from processors $1,2,3$ is at least $\frac{1}{\binom{9}{3}}$.
For every $i\in [9]$, let $M_i$ be the set of points communicated by the $i$-th processor.
Define $M = \cup_{i=1}^9 M_i$, and denote by $O$ the set of output points.
In addition, let $E_{\gamma}$ be the event that $\Gamma = \gamma$.
We now define a function $p$ on subsets $I = \{i_1,i_2,i_3\} \subset [9]$ and permutations $\gamma$ such that $p(I,\gamma) = \Pr[O \subseteq (M_{i_1} \cup M_{i_2} \cup M_{i_3}) \mid G,E_{\gamma}]$.
For every fixing of $\gamma$, since all points in $O$ are chosen from at most three processors, we can say that $\sum_{I\subset [9]: |I|=3} p(I,\gamma) \geq 1$.
Moreover, we have that $p(I,\gamma) = p(I,\gamma')$ for every distinct $\gamma,\gamma'$.
This follows from symmetry and the fact that $\pi$ is a uniformly random bijection.
Let $p^*$ be the largest value of $p$ for any fixing of $\gamma$, and we can assume without loss of generality $p^*$ is achieved for processors $I = \{1,2,3\}$.
As the number of subsets of $[9]$ of size $3$ is $\binom{9}{3}$, we can conclude that $p^* \geq \frac{1}{\binom{9}{3}}$.
Let $E_O$ be the event that $O \subseteq \pi(S_1)$ or $O \subseteq \pi(S_3)$. Conditioned on $E_O$, the cost of the output is $4$.
We now have that $\Pr[E_O] \geq \Pr[G] \cdot \Pr[E_O|G]$.
Using the facts that $S_1 \subset Y_1 \cup Y_2 \cup Y_3$ and $S_3 \subset Y_7 \cup Y_8 \cup Y_9$, we have
\begin{align*}
\Pr[E_O\mid G] \geq \Pr[\Gamma(\{1,2,3\}) = \{1,2,3\}] \cdot p^* + \Pr[\Gamma(\{7,8,9\}) = \{1,2,3\}] \cdot p^* \geq 2 \cdot \frac{3! \cdot 6!}{9!} \cdot \frac{1}{\binom{9}{3}}.
\end{align*}
Since $\Pr[G] \geq 1-\frac{9\ell}{n'}$ and $\ell \leq n'/18$, we get that $\Pr[E_O] \geq 1.4\cdot 10^{-4}$, as desired.
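Concretely, since $3!\cdot 6!/9! = 1/84$ and $\binom{9}{3} = 84$, the bounds above combine to give
\[\Pr[E_O] \;\geq\; \Pr[G]\cdot\Pr[E_O\mid G] \;\geq\; \frac{1}{2}\cdot 2\cdot\frac{1}{84}\cdot\frac{1}{84} \;=\; \frac{1}{7056} \;\approx\; 1.417\cdot 10^{-4}\text{.}\]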
\end{proof}
\section{Distributed $k$-Center Lower Bound: Take 2}
In this section we present the formal details of the lower bound discussed in Section~4 of the main paper.
For a natural number $n$, $[n]$ denotes the set $\{1,2,\ldots,n\}$.
\paragraph{The metric space $\mathcal{M}(n')$.}
The point set of this metric space on $n = 9n'+7$ points is given by
\[S := \{a^*,b_1^*,b_2^*,c^*,a,b,c\} \cup S_1 \cup S_2 \cup S_3,\]
where $|S_1| = |S_2| = |S_3| = 3n'$.
Note that $S_1,S_2,S_3$ are pairwise disjoint and are also disjoint from $\{a^*,b_1^*,b_2^*,c^*,a,b,c\}$. We will call the points $\{a^*,b_1^*,b_2^*,c^*,a,b,c\}$ \textit{critical}. The metric $d:S\times S\longrightarrow\mathbb{R}$ is the shortest-path-length metric induced by the graph shown in Figure \ref{metric} (where $x$ is not a point in $S$ but is only used to define the pairwise distances). The pairwise distances are given in \Cref{metric-pairwisedist}.
Note that if a table entry is indexed by sets, then the entry corresponds to the distance between distinct points in those sets. The following observation can be verified by a case-by-case analysis.
\begin{observation}\label{obs_lb_opt}
The sets $\{a^*,b_1^*,c^*\}$ and $\{a^*,b_2^*,c^*\}$ are the only optimum solutions of the $3$-center problem on $\mathcal{M}(n')$ and they have unit clustering cost. The clustering cost of any subset of $S_1$ is $4$ due to point $c$. Similarly, the clustering cost of any subset of $S_3$ is $4$ due to point $a$.
\end{observation}
\begin{figure}[h]
\centering{\includegraphics[width=.4\textwidth]{metric1.pdf}}
\caption{\footnotesize The underlying metric for $n'=2$}
\label{metric}
\end{figure}
\begin{table}[htp]
\ra{1.5}
\centering
\adjustbox{max width = \textwidth}{
\begin{tabular}{@{}c|ccccccccccc@{}}
& $a^*$ & $b_1^*$ & $b_2^*$ & $c^*$ & $a$ & $b$ & $c$ & $S_1$ & $S_2$ & $S_3$\\
\toprule
$a^*$& $0$ & $1$& $1$ &$2$ & $1$ & $2$ & $3$ & $1$ & $2$ & $3$\\
$b_1^*$ & $1$ & $0$ & $2$ &$1$ & $2$ & $1$ & $2$ & $2$ & $1$ & $2$ \\
$b_2^*$ & $1$ & $2$ &$0$ &$1$ & $2$ & $1$ & $2$ & $2$ & $1$ & $2$ \\
$c^*$ & $2$ & $1$ & $1$ &$0$ & $3$ & $2$ & $1$ & $3$ & $2$ & $1$ \\
$a$ & $1$ & $2$ & $2$ &$3$ & $0$ & $3$ & $4$ & $2$ & $3$ & $4$ \\
$b$ & $2$ & $1$ & $1$ &$2$ & $3$ & $0$ & $3$ & $3$ & $2$ & $3$ \\
$c$ & $3$ & $2$ & $2$ &$1$ & $4$ & $3$ & $0$ & $4$ & $3$ & $2$ \\
$S_1$ & $1$ & $2$ & $2$ &$3$ & $2$ & $3$ & $4$ & $2$ & $2$ & $2$ \\
$S_2$ & $2$ & $1$ & $1$ &$2$ & $3$ & $2$ & $3$ & $2$ & $2$ & $2$\\
$S_3$ & $3$ & $2$ & $2$ &$1$ & $4$ & $3$ & $2$ & $2$ & $2$ & $2$
\end{tabular}}
\caption{Pairwise Distances}
\label{metric-pairwisedist}
\end{table}
\paragraph{Input Distribution $\mathcal{D}$ on the Processors' Inputs.}
For $i\in [3]$, let $S_i^1, S_i^2, S_i^3$ be an arbitrary equi-partition of $S_i$ (and therefore, $|S_i^j|=n'$ for all $i,j$).
Define the sets
$Y_1^j = \{b_1^*,b_2^*,a\} \cup S_1^j$, $Y_2^j = \{a^*,c^*,b\} \cup S_2^j$ and $Y_3^j = \{b_1^*,b_2^*,c\} \cup S_3^j$, for $j\in [3]$.
Observe that each $Y_i^j$ contains exactly $n'+3$ points separated pairwise by distance $2$, and moreover, $3$ of the $n'+3$ points are critical.
We assign the sets $Y_i^j$ randomly to the nine processors after a random relabeling. Formally, we pick a uniformly random bijection $\pi:S\longrightarrow[n]$ as the relabeling and another uniformly random bijection $\Gamma:[3]\times[3]\longrightarrow[9]$, independent of $\pi$, as the assignment. We assign the set $\pi(Y_i^j)$ to processor $\Gamma(i,j)$ for every $i,j$. When a processor or the coordinator queries the distance between $p$ and $q$ where $p,q\in[n]$, it gets $d(\pi^{-1}(p),\pi^{-1}(q))$ as an answer.
Note that neither the processors nor the coordinator know $\pi$ or $\Gamma$.
\begin{theorem}\label{thm:lb}
Consider any distributed algorithm for the $9$ processor $3$-center problem on $\mathcal{M}(n')$ and input distribution $\mathcal{D}$, in which each processor communicates an $\ell$-sized subset of its input points, and the coordinator outputs $3$ of the received points.
If $\ell \leq n'/54$, then with probability at least $1/84$, the output is no better than a $4$-approximation.
\end{theorem}
Here, although the probability with which the coordinator fails to output a better-than-$4$-approximation is only $1/84$, it can be \emph{amplified} to $1-\varepsilon$, for any $\varepsilon > 0$.
We discuss the amplification result before presenting the proof of the above theorem.
\begin{theorem}
Let $\varepsilon > 0$, $\beta = 1-1.4\cdot 10^{-4}$ and $\alpha = \left.\left\lceil \frac{100}{91}\cdot \log \left(\frac{1}{\varepsilon}\right) \middle/ \log\left(\frac{1}{\beta}\right) \right\rceil\right.$.
Consider any distributed algorithm for the $9$ processor $3\alpha$-center problem, in which each processor communicates an $\ell$-sized subset of its input points, and the coordinator outputs $3$ of the received points.
There exists a metric space and a distribution on input points, where each processor receives $(n'+3)\alpha$ points, such that if $\ell \leq n'\alpha/1800$, then with probability at least $1-\varepsilon$, the output is no better than a $4$-approximation.
\end{theorem}
\begin{proof}
The underlying metric space consists of $\alpha$ disjoint copies of $\mathcal{M}(n')$ separated by an arbitrarily large distance from one another.
The point set of each copy is distributed independently to the nine processors as described earlier. Observation \ref{obs_lb_opt} implies that the optimum set of $3\alpha$ centers has unit cost. Also, in order to get a better than $4$-approximation, the coordinator must output a better than $4$-approximate solution from every copy. We prove that this is unlikely.
Since each processor communicates at most $n'\alpha/1800$ points to the coordinator, for any processor there are at most $\alpha/100$ copies from which it sends more than $n'/18$ points.
Thus, there are $\left(1-\frac{9}{100}\right)\cdot \alpha=91\alpha/100$ copies from which none of the $9$ processors sends more than $n'/18$ points.
Theorem~\ref{thm:lb} implies that the coordinator's output from any of these $91\alpha/100$ copies is no better than a $4$-approximation with probability at least $1-\beta$.
Using the independence between the copies further, we can conclude that the coordinator's output fails to be any better than a $4$-approximation with probability at least $1-\beta^{91\alpha/100} \geq 1 - \varepsilon$, which follows from the inequality
$91\alpha/100 \geq \left.\log \left(\frac{1}{\varepsilon}\right) \middle/ \log \frac{1}{\beta}\right.$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:lb}}]
Consider any one of the nine processors. It gets the set $\pi(Y_i^j)$ for a uniformly random $(i,j)\in[3]\times[3]$. Furthermore, since $\pi$ is a uniformly random labeling and points in $Y_i^j$ are pairwise equidistant, the processor is neither able to identify the three critical points in its input, nor does it get any information about $(i,j)$. Formally,
\begin{observation}\label{obs_info}
Conditioned on the set of labels that the processor receives, all subsets of three labels are equally likely to be the set of labels of the three critical points, and independently, the processor is equally likely to have received $Y_i^j$ for all $(i,j)$.
\end{observation}
The first consequence of the above observation is that the probability that at least one of the three critical points appears in the set of at most $\ell$ points the processor communicates is at most $3\ell/|Y_i^j|=3\ell/(n'+3)\leq3\ell/n'$. For a given processor $r\in[9]$, let $O_r$ be the set of points it sends to the coordinator, and define $B_r$ to be the event that $O_r$ contains a critical point. Then $\Pr[B_r]\leq3\ell/n'$. Next, define $G$ to be the event that no processor sends a critical point to the coordinator, that is, $G = \cap_{r=1}^9 B_r^c$, where $B_r^c$ is the complement of $B_r$. Then by the union bound, $\Pr[G] \geq 1-9\cdot\frac{3\ell}{n'}\geq 1/2$, because $\ell\leq n'/54$.
The second consequence of Observation \ref{obs_info} is that the sets $O_r$ are all independent of the random assignment $\Gamma$, even conditional on $G$. Therefore the coordinator fails to get any information about $\Gamma$ from the nine sets $O_1,\ldots,O_9$ it receives. Formally,
\begin{observation}\label{obs_gamma_info}
Conditioned on $O_1,\ldots,O_9$ and also on $G$, the bijection $\Gamma$ is equally likely to be any of the $9!$ bijections from $[3]\times[3]$ to $[9]$.
\end{observation}
Suppose the coordinator outputs $O$, a set of three points, on receiving $O_1,\ldots,O_9$. Then $O\subseteq O_{r_1}\cup O_{r_2}\cup O_{r_3}$ for some $r_1,r_2,r_3\in[9]$. Define $G'$ to be the event that $\{r_1,r_2,r_3\}=\Gamma(\{(1,1),(1,2),(1,3)\})$ or $\{r_1,r_2,r_3\}=\Gamma(\{(3,1),(3,2),(3,3)\})$. In words, $G'$ is the event that the coordinator's output is contained
in $Y_1^1\cup Y_1^2\cup Y_1^3$ or in $Y_3^1\cup Y_3^2\cup Y_3^3$.
Note that the event $G'\cap G$ implies that the coordinator's output is contained
in $S_1^1\cup S_1^2\cup S_1^3=S_1$ or in $S_3^1\cup S_3^2\cup S_3^3=S_3$. Therefore, by Observation \ref{obs_lb_opt}, event $G'\cap G$ implies that the coordinator fails to output a better than $4$-approximation. We are now left to bound $\Pr[G'\cap G]$ from below.
By Observation \ref{obs_gamma_info} and some elementary combinatorics,
\[\Pr\left[\{r_1,r_2,r_3\}=\Gamma(\{1\}\times\{1,2,3\})\mid G\right]=\Pr\left[\{r_1,r_2,r_3\}=\Gamma(\{3\}\times\{1,2,3\})\mid G\right]=\frac{3!\cdot6!}{9!}=\frac{1}{84}\text{,}\]
and therefore, $\Pr[G'\mid G]=1/42$. Thus,
\[\Pr[G'\cap G]=\Pr[G'\mid G]\cdot\Pr[G]\geq\frac{1}{42}\cdot\frac{1}{2}=\frac{1}{84}\text{,}\]
which means that the coordinator fails to output a better than $4$-approximation with probability at least $1/84$, as required.
\end{proof}
\subsection{The Formal Proof}
We now present the formal details of the lower bound.
For a natural number $n$, $[n]$ denotes the set $\{1,2,\ldots,n\}$.
\paragraph{The metric space $\mathcal{M}(n')$.}
The point set of this metric space on $n = 9n'+7$ points is given by
\[S := \{a^*,b_1^*,b_2^*,c^*,a,b,c\} \cup S_1 \cup S_2 \cup S_3,\]
where $|S_1| = |S_2| = |S_3| = 3n'$.
Let $C := \{a^*,b_1^*,b_2^*,c^*,a,b,c\}$. We call the points in $C$ \emph{critical}. Note that $S_1,S_2,S_3$ are pairwise disjoint and are also disjoint from $C$. The metric $d:S\times S\longrightarrow\mathbb{R}$ is the shortest-path-length metric induced by the graph shown in Figure~\ref{metric} (where $x$ is not a point in $S$ but is only used to define the pairwise distances). The pairwise distances are given in Table~\ref{metric-pairwisedist}.
Note that if a table entry is indexed by sets, then the entry corresponds to the distance between distinct points in those sets. The following observation can be verified by a case-by-case analysis.
\begin{observation}\label{obs_lb_opt}
The sets $\{a^*,b_1^*,c^*\}$ and $\{a^*,b_2^*,c^*\}$ are the only optimum solutions of the $3$-center problem on $\mathcal{M}(n')$ and they have unit clustering cost. The clustering cost of any subset of $S_1$ is $4$ due to point $c$. Similarly, the clustering cost of any subset of $S_3$ is $4$ due to point $a$.
\end{observation}
\begin{table}[htp]
\ra{1.5}
\centering
\adjustbox{max width = \textwidth}{
\begin{tabular}{@{}c|ccccccccccc@{}}
& $a^*$ & $b_1^*$ & $b_2^*$ & $c^*$ & $a$ & $b$ & $c$ & $S_1$ & $S_2$ & $S_3$\\
\toprule
$a^*$& $0$ & $1$& $1$ &$2$ & $1$ & $2$ & $3$ & $1$ & $2$ & $3$\\
$b_1^*$ & $1$ & $0$ & $2$ &$1$ & $2$ & $1$ & $2$ & $2$ & $1$ & $2$ \\
$b_2^*$ & $1$ & $2$ &$0$ &$1$ & $2$ & $1$ & $2$ & $2$ & $1$ & $2$ \\
$c^*$ & $2$ & $1$ & $1$ &$0$ & $3$ & $2$ & $1$ & $3$ & $2$ & $1$ \\
$a$ & $1$ & $2$ & $2$ &$3$ & $0$ & $3$ & $4$ & $2$ & $3$ & $4$ \\
$b$ & $2$ & $1$ & $1$ &$2$ & $3$ & $0$ & $3$ & $3$ & $2$ & $3$ \\
$c$ & $3$ & $2$ & $2$ &$1$ & $4$ & $3$ & $0$ & $4$ & $3$ & $2$ \\
$S_1$ & $1$ & $2$ & $2$ &$3$ & $2$ & $3$ & $4$ & $2$ & $2$ & $2$ \\
$S_2$ & $2$ & $1$ & $1$ &$2$ & $3$ & $2$ & $3$ & $2$ & $2$ & $2$\\
$S_3$ & $3$ & $2$ & $2$ &$1$ & $4$ & $3$ & $2$ & $2$ & $2$ & $2$
\end{tabular}}
\caption{Pairwise Distances}
\label{metric-pairwisedist}
\end{table}
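Observation~\ref{obs_lb_opt} can also be checked mechanically. The following sketch (a hypothetical encoding, instantiated for $n'=1$) transcribes Table~\ref{metric-pairwisedist} and brute-forces all $3$-subsets:

```python
from itertools import combinations

CLASSES = ["a*", "b1*", "b2*", "c*", "a", "b", "c", "S1", "S2", "S3"]
# Pairwise distances between (distinct) points of each class, from the table.
D = [[0, 1, 1, 2, 1, 2, 3, 1, 2, 3],
     [1, 0, 2, 1, 2, 1, 2, 2, 1, 2],
     [1, 2, 0, 1, 2, 1, 2, 2, 1, 2],
     [2, 1, 1, 0, 3, 2, 1, 3, 2, 1],
     [1, 2, 2, 3, 0, 3, 4, 2, 3, 4],
     [2, 1, 1, 2, 3, 0, 3, 3, 2, 3],
     [3, 2, 2, 1, 4, 3, 0, 4, 3, 2],
     [1, 2, 2, 3, 2, 3, 4, 2, 2, 2],
     [2, 1, 1, 2, 3, 2, 3, 2, 2, 2],
     [3, 2, 2, 1, 4, 3, 2, 2, 2, 2]]

def build_points(nprime):
    pts = [(c, 0) for c in CLASSES[:7]]            # the seven critical points
    for s in ("S1", "S2", "S3"):
        pts += [(s, t) for t in range(3 * nprime)]
    return pts

def dist(p, q):
    return 0 if p == q else D[CLASSES.index(p[0])][CLASSES.index(q[0])]

def cost(pts, T):
    return max(min(dist(x, c) for c in T) for x in pts)

pts = build_points(1)                              # n = 9*1 + 7 = 16 points
optima = [set(T) for T in combinations(pts, 3) if cost(pts, T) == 1]
```

With $n'=1$ the brute force runs over $\binom{16}{3}=560$ subsets; the same check scales to any small $n'$.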
\paragraph{Input Distribution $\mathcal{D}$ on the Processors' Inputs.}
For $i\in [3]$, let $S_i^1, S_i^2, S_i^3$ be an arbitrary equi-partition of $S_i$ (and therefore, $|S_i^j|=n'$ for all $i,j$).
Define the sets
$Y_1^j = \{b_1^*,b_2^*,a\} \cup S_1^j$, $Y_2^j = \{a^*,c^*,b\} \cup S_2^j$ and $Y_3^j = \{b_1^*,b_2^*,c\} \cup S_3^j$, for $j\in [3]$.
Observe that each $Y_i^j$ contains exactly $n'+3$ points separated pairwise by distance $2$, and moreover, three of the $n'+3$ points are critical.
We assign the sets $Y_i^j$ randomly to the nine processors after a random relabeling. Formally, we pick a uniformly random bijection $\pi:S\longrightarrow[n]$ as the relabeling and another uniformly random bijection $\Gamma:[3]\times[3]\longrightarrow[9]$, independent of $\pi$, as the assignment. We assign the set $\pi(Y_i^j)$ to processor $\Gamma(i,j)$ for every $i,j$. When a processor or the coordinator queries the distance between $p$ and $q$ where $p,q\in[n]$, it gets $d(\pi^{-1}(p),\pi^{-1}(q))$ as an answer.
Note that neither the processors nor the coordinator knows $\pi$ or $\Gamma$. Let the random variable $\mathcal{P}=(\mathcal{P}_1,\ldots,\mathcal{P}_9)$ denote the sequence of label sets induced by $\pi$ and $\Gamma$, where $\mathcal{P}_r$ is the set of labels of points assigned to processor $r$, that is, $\mathcal{P}_{\Gamma(i,j)}=\pi(Y_i^j)$.
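For concreteness, sampling a processor-input assignment from $\mathcal{D}$ can be sketched as follows (a hypothetical encoding; the point names and the seed are illustrative only):

```python
import random

def sample_D(nprime, seed=None):
    """Build the sets Y_i^j, draw a random relabeling pi of all n = 9n'+7
    points, draw an independent random assignment Gamma, and hand the set
    pi(Y_i^j) to processor Gamma(i, j)."""
    rng = random.Random(seed)
    critical = ["a*", "b1*", "b2*", "c*", "a", "b", "c"]
    S = {i: [f"S{i}_{t}" for t in range(3 * nprime)] for i in (1, 2, 3)}
    part = {(i, j): S[i][(j - 1) * nprime : j * nprime]       # S_i^j, size n'
            for i in (1, 2, 3) for j in (1, 2, 3)}
    core = {1: ["b1*", "b2*", "a"], 2: ["a*", "c*", "b"], 3: ["b1*", "b2*", "c"]}
    Y = {(i, j): core[i] + part[(i, j)] for i in (1, 2, 3) for j in (1, 2, 3)}
    points = critical + S[1] + S[2] + S[3]
    pi = dict(zip(points, rng.sample(range(1, len(points) + 1), len(points))))
    gamma = dict(zip(sorted(Y), rng.sample(range(1, 10), 9)))  # Gamma(i, j)
    return {gamma[key]: sorted(pi[p] for p in Y[key]) for key in Y}
```

Note that the nine label sets overlap (e.g., the labels of $b_1^*$ and $b_2^*$ appear in all six sets $\pi(Y_1^j)$ and $\pi(Y_3^j)$), as in the construction above.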
\begin{lemma}\label{thm:lb}
Consider any deterministic distributed algorithm for the $9$ processor $3$-center problem on $\mathcal{M}(n')$ and input distribution $\mathcal{D}$, in which each processor communicates an $\ell$-sized subset of its input points, and the coordinator outputs $3$ of the received points.
If $\ell \leq (n'+3)/54$, then with probability at least $1/84$, the output is no better than a $4$-approximation.
\end{lemma}
Although the probability with which the coordinator fails to output a better-than-$4$-approximation is only $1/84$, it can be \emph{amplified} to $1-\varepsilon$, for any $\varepsilon > 0$.
We discuss the amplification result before presenting the proof of the above lemma.
\begin{lemma}\label{lem:amplifiedlb}
Let $\varepsilon>0$ and $c<1/486$ be arbitrary constants, and let
\[\alpha=\left\lceil\frac{84\ln(1/\varepsilon)}{1-486c}\right\rceil\text{.}\]
Then there exists an instance of the $(3\alpha)$-center problem such that, in the distributed setting with $9$ processors, each communicating at most a $c$ fraction of its input points to the coordinator, the coordinator fails to output a better than $4$-approximation with probability at least $1-\varepsilon$.
\end{lemma}
\begin{proof}
The underlying metric space consists of $\alpha$ disjoint copies of $\mathcal{M}(n')$ separated by an arbitrarily large distance from one another.
The point set of each copy is distributed to the nine processors as described earlier, and these distributions are independent. Thus, each processor receives $\alpha\cdot(n'+3)$ points. Observation~\ref{obs_lb_opt} implies that in this instance, the optimum set of $3\alpha$ centers (the union of optimum sets of $3$ centers in each copy) has unit cost. Also, in order to get a better than $4$-approximation, the coordinator must output a better than $4$-approximate solution from every copy. We prove that this is unlikely.
By our assumption, each processor sends at most $c\alpha\cdot(n'+3)$ points to the coordinator, where $c<1/486$. Therefore, for each processor, there exist at most $54c\alpha$ copies from which it sends more than $(n'+3)/54$ points to the coordinator. Since we have $9$ processors, there exist at most $9\times54c\alpha=486c\alpha$ copies from which more than $(n'+3)/54$ points are sent by some processor. From each of the remaining $(1-486c)\alpha$ copies, no processor sends more than $(n'+3)/54$ points. By Lemma~\ref{thm:lb}, the coordinator succeeds in producing a better than $4$-approximation on each of these copies, independently, with probability at most $1-1/84$. Therefore, the probability that the coordinator succeeds in all the $(1-486c)\alpha$ copies is bounded as
\[\left(1-\frac{1}{84}\right)^{(1-486c)\alpha}\leq\exp\left(-\frac{1-486c}{84}\cdot\alpha\right)\leq\varepsilon\text{,}\]
where the last inequality follows by substituting the value of $\alpha$. Thus, the coordinator fails to produce a better than $4$-approximation with probability at least $1-\varepsilon$.
\end{proof}
\begin{comment}
\begin{lemma}
Let $\epsilon > 0$, $\beta = 1-1.4\cdot 10^{-4}$ and $\alpha = \left.\left\lceil \frac{100}{91}\cdot \log \left(\frac{1}{\varepsilon}\right) \middle/ \log\left(\frac{1}{\beta}\right) \right\rceil\right.$.
Consider any distributed algorithm for the $9$ processor $3\alpha$-center problem, in which each processor communicates an $\ell$-sized subset of its input points, and the coordinator outputs $3$ of the received points.
There exists a metric space and a distribution on input points, where each processor receives $(n'+3)\alpha$ points, such that if $\ell \leq n'\alpha/1800$, then with probability at least $1-\epsilon$, the output is no better than a $4$-approximation.
\end{lemma}
\begin{proof}
The underlying metric space consists of $\alpha$ disjoint copies of $\mathcal{M}(n')$ separated by an arbitrarily large distance from one another.
The point set of each copy is distributed independently to the nine processors as described earlier. Observation \ref{obs_lb_opt} implies that the optimum set of $3\alpha$ centers has unit cost. Also, in order to get a better than $4$-approximation, the coordinator must output a better than $4$-approximate solution from every copy. We prove that this is unlikely.
Since each processor communicates at most $n'\alpha/1800$ points to the coordinator, for any processor there are at most $\alpha/100$ copies from which it sends more than $n'/18$ points.
Thus, there are $\left(1-\frac{9}{100}\right)\cdot \alpha=91\alpha/100$ copies from which none of the $9$ processors sends more than $n'/18$ points.
Lemma~\ref{thm:lb} implies that the coordinator's output from any of these $91\alpha/100$ copies is no better than a $4$-approximation with probability at least $1-\beta$.
Using the independence between the copies further, we can conclude that the coordinator's output fails to be any better than a $4$-approximation with probability at least $1-\beta^{91\alpha/100} \geq 1 - \varepsilon$, which follows from the inequality
$91\alpha/100 \geq \left.\log \left(\frac{1}{\varepsilon}\right) \middle/ \log \frac{1}{\beta}\right.$.
\end{proof}
\end{comment}
\begin{proof}[\textbf{Proof of Lemma~\ref{thm:lb}}]
Consider any one of the nine processors. It gets the set $\pi(Y_i^j)$ for a uniformly random $(i,j)\in[3]\times[3]$. Since $\pi$ is a uniformly random labeling and points in $Y_i^j$ are pairwise equidistant, the processor is not able to identify the three critical points in its input. This happens even if we condition on the values of $\Gamma$. Formally,
conditioned on $\Gamma$ and $\mathcal{P}$,
all subsets of $\mathcal{P}_r$ of size $3$ are equally likely to be the set of labels of the three critical points in processor $r$'s input, i.e., $Y_i^j$ where $(i,j)=\Gamma^{-1}(r)$.
As a consequence, the probability that at least one of the three critical points appears in the set of at most $\ell$ points the processor communicates is at most $3\ell/|Y_i^j|=3\ell/(n'+3)$, even when we condition on $\Gamma$.
For a given processor $r\in[9]$, let $O_r$ be the set of labels it sends to the coordinator, and define $B_r$ to be the event that $O_r$ contains the label of a critical point. Then $\Pr[B_r\mid\Gamma,\mathcal{P}]\leq3\ell/(n'+3)$. Next, define $G$ to be the event that no processor sends the label of any critical point to the coordinator, that is, $G = \cap_{r=1}^9 B_r^c$, where $B_r^c$ is the complement of $B_r$. Then by the union bound and the fact that $\ell\leq (n'+3)/54$, we have for every partition $P$ of the label set and every bijection $\gamma:[3]\times[3]\longrightarrow[9]$,
\begin{equation}\label{eqn_G}
\Pr[G\mid\Gamma=\gamma,\mathcal{P}=P] \geq 1-9\cdot\frac{3\ell}{n'+3}\geq \frac{1}{2}\text{.}
\end{equation}
Suppose the coordinator outputs $O$, a set of three labels, on receiving $O_1,\ldots,O_9$. Then $O\subseteq O_{r_1}\cup O_{r_2}\cup O_{r_3}$ for some $r_1,r_2,r_3\in[9]$. Observe that $O_1,\ldots,O_9$, $O$, and $\{r_1,r_2,r_3\}$ are all completely determined\footnote{If $O$ intersects less than three of the $O_r$'s, then we define $\{r_1,r_2,r_3\}$ to be the lexicographically smallest set such that $O\subseteq O_{r_1}\cup O_{r_2}\cup O_{r_3}$.} by $\mathcal{P}$. In contrast, due to the random labeling $\pi$, the mapping $\Gamma$ is independent of $\mathcal{P}$. Therefore,
\begin{observation}\label{obs_gamma_info}
Conditioned on $\mathcal{P}$, the bijection $\Gamma$ is equally likely to be any of the $9!$ bijections from $[3]\times[3]$ to $[9]$.
\end{observation}
Next, define $G'$ to be the event that $\{r_1,r_2,r_3\}$ is either $\Gamma(\{(1,1),(1,2),(1,3)\})$ or $\Gamma(\{(3,1),(3,2),(3,3)\})$. In words, $G'$ is the event that the coordinator outputs labels of three points, all of which are contained
in $Y_1^1\cup Y_1^2\cup Y_1^3$ or in $Y_3^1\cup Y_3^2\cup Y_3^3$.
Note that the event $G'\cap G$ implies that the coordinator's output is contained
in $S_1^1\cup S_1^2\cup S_1^3=S_1$ or in $S_3^1\cup S_3^2\cup S_3^3=S_3$. Therefore, by Observation \ref{obs_lb_opt}, event $G'\cap G$ implies that the coordinator fails to output a better than $4$-approximation. We are now left to bound $\Pr[G'\cap G]$ from below.
Since the set $\{r_1,r_2,r_3\}$ is completely determined by $\mathcal{P}$, the event $G'$ is completely determined by $\mathcal{P}$ and $\Gamma$: for any $\mathcal{P}$, there exist exactly $2\cdot3!\cdot6!$ values of $\Gamma$ which cause $G'$ to happen. Formally,
\begin{observation}\label{obs_G'}
For every partition $P$ of the label set, there exist exactly $2\cdot3!\cdot6!$ bijections $\gamma:[3]\times[3]\longrightarrow[9]$ such that $\Pr[G'\mid\mathcal{P}=P,\Gamma=\gamma]=1$, whereas $\Pr[G'\mid\mathcal{P}=P,\Gamma=\gamma']=0$ for all the other bijections $\gamma':[3]\times[3]\longrightarrow[9]$.
\end{observation}
Therefore, we have,
\begin{eqnarray*}
\Pr[G\cap G'] & = & \sum_{P,\gamma}\Pr[G\cap G'\mid\mathcal{P}=P,\Gamma=\gamma]\cdot\Pr[\mathcal{P}=P,\Gamma=\gamma]\\
& = & \sum_{(P,\gamma):\Pr[G'\mid\mathcal{P}=P,\Gamma=\gamma]=1}\Pr[G\mid\mathcal{P}=P,\Gamma=\gamma]\cdot\Pr[\Gamma=\gamma\mid\mathcal{P}=P]\cdot\Pr[\mathcal{P}=P]\\
& \geq & \sum_P\sum_{\gamma:\Pr[G'\mid\mathcal{P}=P,\Gamma=\gamma]=1}\frac{1}{2}\cdot\frac{1}{9!}\cdot\Pr[\mathcal{P}=P]\\
& = & \frac{1}{2}\cdot\frac{1}{9!}\cdot\sum_P\left|\{\gamma:\Pr[G'\mid\mathcal{P}=P,\Gamma=\gamma]=1\}\right|\cdot\Pr[\mathcal{P}=P]\\
& = & \frac{2\cdot3!\cdot6!}{2\cdot9!}\cdot\sum_P\Pr[\mathcal{P}=P]\\
& = & \frac{1}{84}\text{.}
\end{eqnarray*}
Here, we used Observation \ref{obs_G'} for the second and fourth equality, and Equation (\ref{eqn_G}) and Observation \ref{obs_gamma_info} for the inequality. Thus, the coordinator fails to output a better than $4$-approximation with probability at least $1/84$, as required.
\end{proof}
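The count $2\cdot3!\cdot6!$ and the final value $1/84$ can be verified by brute force over all $9!$ bijections; in the sketch below the output triple $\{r_1,r_2,r_3\}$ is fixed arbitrarily:

```python
import math
from itertools import permutations

# Brute-force check: among all 9! bijections gamma from [3]x[3] to [9],
# exactly 2*3!*6! map the first row {(1,j)} or the third row {(3,j)}
# onto a fixed triple of processor indices.
target = {1, 2, 3}                       # an arbitrary fixed output triple

count = 0
for perm in permutations(range(1, 10)):  # perm lists gamma row-major:
    # positions 0..2 hold gamma(1,*), positions 6..8 hold gamma(3,*)
    if set(perm[0:3]) == target or set(perm[6:9]) == target:
        count += 1

assert count == 2 * math.factorial(3) * math.factorial(6)
# Combined with Pr[G | P, gamma] >= 1/2 and Gamma uniform over 9! bijections:
assert count / (2 * math.factorial(9)) == 1 / 84
```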
Using Lemma~\ref{lem:amplifiedlb} along with Yao's lemma, we get our main lower-bound theorem.
\begin{theorem}\label{thm:yao}
There exists $c > 0$ such that for any $\varepsilon > 0$, with $k = \Theta(\log (1/\varepsilon))$, the following holds: for any randomized distributed algorithm for $k$-center in which each processor communicates at most $cn$ points to the coordinator, who outputs a subset of those points as the solution, the output is no better than a $4$-approximation with probability at least $1 - \varepsilon$.
\end{theorem}
\section{Distributed $k$-Center Lower Bound}
\label{sec:lb}
Malkomes et al.~\cite{malkomes1_etal15} generalized the greedy algorithm~\cite{gonzalez85} to obtain a $4$-approximation algorithm for the $k$-center problem in the distributed setting.
Here we prove a lower bound for the $3$-center problem with $9$ processors for a special class of distributed algorithms:
If each processor communicates less than a constant fraction of its input points, then with constant probability, the output of the coordinator is no better than a $4$-approximation to the optimum.
\begin{figure}[h]
\centering{\includegraphics[scale=0.6]{metric1.pdf}}
\caption{\footnotesize The underlying metric for $n'=2$}
\label{metric}
\end{figure}
Figure~\ref{metric} shows a graph metric with $9n'+7$ points for which the lower bound holds, where the point $x$ is not a part of the metric but is only used to define the distances.
Note that $|S_1|=|S_2|=|S_3| = 3n'$ and $x$ is at distance of $1$ from each point in $S_1 \cup S_2 \cup S_3$.
For $i\in \{1,2,3\}$, let $S_i^1,S_i^2,S_i^3$ denote an arbitrary equipartition of
$S_i$.
There are $9$ processors, whose inputs are given by $Y_1^j = \{b_1^*,b_2^*,a\} \cup S_1^j$, $Y_2^j = \{a^*,c^*,b\} \cup S_2^j$ and $Y_3^j = \{b_1^*,b_2^*,c\} \cup S_3^j$, for $j\in \{1,2,3\}$.
The goal is to solve the $3$-center problem on the union of their inputs.
(Observe that the optimum solution is $\{a^*,c^*,b^*_1\}$ with cost $1$.)
Each processor is allowed to send a subset of their input points to the coordinator, who outputs three of the received points.
For this class of algorithms, we show that if each processor communicates less than $(n' + 3)/54$ points, then the output of the coordinator is no better than a $4$-approximation to the optimum with probability at least $1/84$.
Using standard amplification arguments, we can generate a metric instance for the ($3\alpha$)-center problem on which, with probability at least $1-\varepsilon$, the algorithm outputs no better than a $4$-approximation ($\alpha \approx \log(1/\varepsilon)$).
We first discuss the intuition behind the proof.
The key observation is that all points in each $Y_i^j$ are pairwise equidistant.
Therefore, sending a uniformly random subset of the inputs is the best strategy for each processor.
Since each processor communicates only a small fraction of its input points, the probability that the coordinator receives any of the points in $\{a^*,b_1^*,b_2^*,c^*,a,b,c\}$ is negligible.
Conditioned on the coordinator not receiving these points, all the received points are a subset of $S_1\cup S_2 \cup S_3$.
As all points in $S_1\cup S_2 \cup S_3$ are pairwise equidistant, the best strategy for the coordinator is to output $3$ points at random.
Hence, with constant probability, all the points in the output belong to $S_1$ or all of them belong to $S_3$.
This being the case, the output has cost $4$, whereas the optimum cost is $1$.
\input{lb-proof.tex}
\section{Algorithms}
\input{basic_procs.tex}
\input{two_pass.tex}
\input{distributed.tex}
\input{guesses.tex}
\input{lb.tex}
\input{experiments1.tex}
\input{experiments2.tex}
\input{conclusion.tex}
\bibliographystyle{alpha}
\section{Preliminaries}
\label{sec:prelim}
The input to fair $k$-center is a set $X$ of $n$ points in a metric space given by a distance function $d$. We denote this metric space by $(X,d)$. Each point belongs to one of $m$ groups, say $\{1,\ldots,m\}$. Let $g:X\longrightarrow\{1,\ldots,m\}$ denote this group assignment function. Further, for each group $j$, we are given a capacity $k_j$. Let $k=\sum_{j=1}^mk_j$. We call a subset $S\subseteq X$ \textit{feasible} if for every $j$, the set $S$ contains at most $k_j$ points from group $j$. The goal is to compute a feasible set of centers that (approximately) minimizes the clustering cost, formally defined as follows.
\begin{definition}\label{def_cost}
Let $A,B\subseteq X$, then the \textit{clustering cost} of $A$ for $B$ is defined as $\max_{b\in B}\min_{a\in A}d(a,b)$.
\end{definition}
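For illustration, the definition can be transcribed directly into code; here the metric $d$ is assumed to be available as a callable, and the points on the real line are a toy example, not part of any instance used later:

```python
# Clustering cost per the definition above: max over b in B of the
# distance from b to its nearest point in A (A need not be a subset of B).
def clustering_cost(A, B, d):
    return max(min(d(a, b) for a in A) for b in B)

d = lambda x, y: abs(x - y)        # toy metric on the real line
cost = clustering_cost([0, 10], [1, 9, 10], d)
assert cost == 1                   # 1 and 9 are each within distance 1 of a center
```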
Note here that we allow $A$ to not be a subset of $B$. The following lemmas follow easily from the fact that the distance function $d$ satisfies the triangle inequality.
\begin{lemma}\label{lem_cost_triangle}
Let $A,B,C\subseteq X$. The clustering cost of $A$ for $C$ is at most the clustering cost of $A$ for $B$ plus the clustering cost of $B$ for $C$.
\end{lemma}
\begin{lemma}\label{lem_pigeonhole}
Suppose for a set $T$ of points there exists a set $S$ of $k$ centers, not necessarily a subset of $T$, whose clustering cost for $T$ is at most $\rho$. If $P\subseteq T$ is a set of points separated pairwise by distance more than $2\rho$, then $|P|\leq k$.
\end{lemma}
\begin{proof}
If $|P|>k$ then some two points in $P$ must share one of the $k$ centers, and must therefore be both within distance $\rho$ from that common center. Then by the triangle inequality, they cannot be separated by distance more than $2\rho$.
\end{proof}
We denote by $S^*$ a feasible set which has the minimum clustering cost for $X$, and by $\text{OPT}$ the minimum clustering cost. We assume that our algorithms have access to an estimate $\tau$ of $\text{OPT}$. When $\tau$ is at least $\text{OPT}$, our algorithms compute a solution of cost at most $\alpha\tau$ for a constant $\alpha$. Thus, when $\tau\in[\text{OPT},(1+\varepsilon)\text{OPT}]$, our algorithms compute a $(1+\varepsilon)\alpha$-approximate solution. In Section~\ref{subsec:guesses} we describe how to efficiently compute such a $\tau$.
\section{Algorithms}
\label{app:alg}
The definition of clustering cost (Definition 1) immediately implies the following observations.
\begin{observation}\label{obs_inclusion}
Let $A\supseteq A'$ and $B\subseteq B'$ be sets of points in a metric space given by a distance function $d$. The clustering cost of $A$ for $B$ is at most the clustering cost of $A'$ for $B'$.
\end{observation}
\begin{observation}\label{obs_union}
Let $A_1,A_2,B_1,B_2$ be sets of points in a metric space given by a distance function $d$. Suppose the clustering cost of each $A_i$ for $B_i$ is at most $\tau$. Then the clustering cost of $A_1\cup A_2$ for $B_1\cup B_2$ is at most $\tau$.
\end{observation}
The following lemma follows easily from the triangle inequality.
\begin{lemma}[Lemma 1 from the paper, restated]
Let $A,B,C\subseteq X$. The clustering cost of $A$ for $C$ is at most the clustering cost of $A$ for $B$ plus the clustering cost of $B$ for $C$.
\end{lemma}
\begin{proof}
Let $d$ be the metric and let $r_{AB}$ and $r_{BC}$ denote the clustering costs of $A$ for $B$ and of $B$ for $C$ respectively. For every $a\in A$, there exists $b\in B$ such that $d(a,b)\leq r_{AB}$. But for this $b$, there exists $c\in C$ such that $d(b,c)\leq r_{BC}$. Thus, for every $a\in A$, there exists a $c\in C$ such that $d(a,c)\leq r_{AB}+r_{BC}$, by the triangle inequality. This proves the claim.
\end{proof}
The pseudocodes of procedures \texttt{getPivots}$()$, \texttt{getReps}$()$, and \texttt{HittingSet}$()$ are given by Algorithms~\ref{alg_pivots},~\ref{alg_representatives}, and~\ref{alg_bmatching} respectively.
\begin{algorithm}[htb]
\caption{\texttt{getPivots}$(T,d,r)$}
\label{alg_pivots}
\begin{algorithmic}
\State {\bfseries Input:} Set $T$ with metric $d$, radius $r$.
\State $P\gets\{p\}$ where $p$ is an arbitrary point in $T$.
\For{{\bfseries each} $q\in T$ (in an arbitrary order)}
\If{$\min_{p\in P}d(p,q)>r$}
\State $P\gets P\cup\{q\}$.
\EndIf
\EndFor
\State {\bfseries Return} $P$.
\end{algorithmic}
\end{algorithm}
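A minimal Python rendering of \texttt{getPivots}$()$ may look as follows; the callable metric $d$ and the one-dimensional usage example are illustrative assumptions:

```python
# Greedy pivot selection: keep every streamed point that is farther
# than r from all pivots chosen so far (single pass, arbitrary order).
def get_pivots(T, d, r):
    stream = iter(T)
    P = [next(stream)]             # start from an arbitrary first point
    for q in stream:
        if min(d(p, q) for p in P) > r:
            P.append(q)
    return P

d = lambda x, y: abs(x - y)
assert get_pivots([0, 1, 5, 6, 11], d, 2) == [0, 5, 11]
```

By construction the returned pivots are pairwise more than $r$ apart and every point of $T$ is within distance $r$ of some pivot, matching Observation~\ref{obs_getPivots}.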
\begin{algorithm}[htb]
\caption{\texttt{getReps}$(T,d,g,P,r)$}
\label{alg_representatives}
\begin{algorithmic}
\State {\bfseries Input:} Set $T$ with metric $d$, group assignment function $g$, subset $P\subseteq T$, radius $r$.
\For{{\bfseries each} $p\in P$}
\State $I_p\gets\{p\}$.
\EndFor
\For{{\bfseries each} $q\in T$ (in an arbitrary order)}
\For{{\bfseries each} $p\in P$}
\If{$d(p,q)\leq r$ and $I_p$ doesn't contain a point from $q$'s group}
\State $I_p\gets I_p\cup\{q\}$.
\EndIf
\EndFor
\EndFor
\State {\bfseries Return} $\{I_p:p\in P\}$.
\end{algorithmic}
\end{algorithm}
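\texttt{getReps}$()$ admits a similar sketch; the parity-based group function in the usage example is purely illustrative:

```python
# For each pivot p, collect at most one representative per group among
# the points within distance r of p (single pass over T).
def get_reps(T, d, g, P, r):
    I = {p: {p} for p in P}
    covered = {p: {g(p)} for p in P}   # groups already represented at p
    for q in T:
        for p in P:
            if d(p, q) <= r and g(q) not in covered[p]:
                I[p].add(q)
                covered[p].add(g(q))
    return I

d = lambda x, y: abs(x - y)
g = lambda x: x % 2                    # toy groups: parity of the point
reps = get_reps([0, 1, 2, 7, 8], d, g, [0, 8], 2)
assert reps[0] == {0, 1} and reps[8] == {7, 8}
```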
\begin{algorithm}[htb]
\caption{\texttt{HittingSet}$(\mathcal{N},g,\overline{k})$}
\label{alg_bmatching}
\begin{algorithmic}
\State {\bfseries Input:} Collection $\mathcal{N}=(N_1,\ldots,N_K)$ of pairwise disjoint sets of points, group assignment function $g$, vector $\overline{k}=(k_1,\ldots,k_m)$ of capacities.
\State Construct bipartite graph $G=(\mathcal{N},V,E)$ as follows.
\State $V$ $\gets$ $\biguplus_{j=1}^mV_j$, where $V_j$ is a set of $k_j$ vertices.
\For{{\bfseries each} $N_i$ and {\bfseries each} group $j$}
\If{$\exists$ $p\in N_i$ such that $g(p)=j$}
\State Connect $N_i$ to all vertices in $V_j$.
\EndIf
\EndFor
\State Find the maximum cardinality matching $H$ of $G$.
\State $C\gets\emptyset$.
\For{{\bfseries each} edge $(N_i,v)$ of $H$}
\State Let $p$ be a point in $N_i$ from group $j$, where $v\in V_j$.
\State $C\gets C\cup\{p\}$.
\EndFor
\State {\bfseries Return} $C$.
\end{algorithmic}
\end{algorithm}
\begin{observation}\label{obs_getPivots}
The procedure \texttt{getPivots}$()$ performs a single pass over the input set $T$. The set $P$ returned by \texttt{getPivots}$(T,d,r)$ contains points separated pairwise by distance more than $r$. The clustering cost of $P$ for $T$ is at most $r$. Therefore, by Lemma 2 from the paper, if there is a set of $k$ points whose clustering cost for $T$ is at most $r/2$, then $|P|\leq k$.
\end{observation}
\begin{observation}\label{obs_representatives}
The procedure \texttt{getReps}$()$ executes a single pass over the input set $T$. The points in each set $I_p$ returned by \texttt{getReps}$(T,d,g,P,r)$ belong to distinct groups and are all within distance $r$ from $p$. For every point $q$ within distance $r$ from $p\in P$, $I_p$ contains a point in the same group as $q$ (possibly $q$ itself).
\end{observation}
The procedure \texttt{HittingSet}$()$ constructs the following bipartite graph. The left side vertex set contains $K$ vertices: one for each $N_i$. The right side vertex set is $V=\biguplus_{j=1}^mV_j$, where $V_j$ contains $k_j$ vertices for each group $j$. If $N_i$ contains a point from group $j$, then its vertex is connected to all of $V_j$. Each matching $H$ in this bipartite graph encodes a feasible subset $C$ of $\biguplus_{i=1}^KN_i$ as follows. For each edge $e=(N_i,v)\in H$ where $v\in V_j$, add to $C$ the point from $N_i$ belonging to group $j$. Observe that since $|V_j|=k_j$ and $H$ is a matching, $C$ contains at most $k_j$ points from group $j$. Moreover, $|C|=|H|$, and hence, a maximum cardinality matching in the bipartite graph encodes a set $C$ intersecting as many of the $N_i$'s as possible.
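To make the encoding concrete, the sketch below implements the matching step with a plain augmenting-path search (Kuhn's algorithm) rather than a matching library; the function and variable names are ours, and the \texttt{(group, copy)} slots mirror the $k_j$ vertices of $V_j$:

```python
# Hitting-set via bipartite matching: one "slot" per unit of capacity.
def hitting_set(N_list, g, k):
    slots = [(j, t) for j in k for t in range(k[j])]
    # N_list[i] may be matched to any slot of a group it contains
    adj = [[s for s in slots if any(g(p) == s[0] for p in N_i)]
           for N_i in N_list]
    match = {}                           # slot -> index of the matched set

    def augment(i, seen):
        for s in adj[i]:
            if s in seen:
                continue
            seen.add(s)
            if s not in match or augment(match[s], seen):
                match[s] = i
                return True
        return False

    for i in range(len(N_list)):
        augment(i, set())

    # decode the matching into a feasible set of points
    return [next(p for p in N_list[i] if g(p) == j)
            for (j, _), i in match.items()]

# Toy instance: three disjoint sets, parity groups, capacities 2 even / 1 odd.
g = lambda p: p % 2
C = hitting_set([{1, 2}, {4}, {3}], g, {0: 2, 1: 1})
assert len(C) == 3                       # the matching hits all three sets
```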
In our implementation, we enhance the efficiency of \texttt{HittingSet}$()$ as follows. For each group, we introduce only one vertex in the right side vertex set and construct the bipartite graph like \texttt{HittingSet}$()$, directing edges from left to right. We further connect a source to the left side vertices with unit capacity edges, and the right side vertices to a sink with edges of capacity $k_j$. We find the maximum (integral) source-to-sink flow using the Ford-Fulkerson algorithm.
For each $i$ and $j$, if the edge $(N_i,j)$ exists and carries nonzero flow, then we include in $C$ the point in $N_i$ that belongs to group $j$. Our runtime is bounded as follows.
\begin{lemma}
The runtime of \texttt{HittingSet}$()$ is $O(K^2\cdot\max_i|N_i|)$.
\end{lemma}
\begin{proof}
The number of edges in the constructed bipartite graph is $O(K\cdot\max_i|N_i|)$, whereas the value of the max-flow is at most $K$. The runtime of the Ford-Fulkerson algorithm is of the order of the number of edges times the value of the max-flow. Therefore, the runtime of \texttt{HittingSet}$()$, which is dominated by the runtime of the Ford-Fulkerson algorithm, is $O(K^2\cdot\max_i|N_i|)$.
\end{proof}
\subsection{A Two-Pass Algorithm}
\begin{algorithm}[tb]
\caption{Two-pass algorithm}
\label{alg_2pass}
\begin{algorithmic}
\State {\bfseries Input}: Metric space $(X,d)$, group assignment function $g$, capacity vector $\overline{k}$.
\State /* Pass 1: Compute pivots. */
\State $P$ $\gets$ \texttt{getPivots}$(X,d,2\tau)$.
\State /* Pass 2: Compute representatives. */
\State $\{N(q):q\in P\}$ $\gets$ \texttt{getReps}$(X,d,g,P,\tau)$.
\State /* Compute solution. */
\State $S$ $\gets$ \texttt{HittingSet}$(\{N(q):q\in P\},g,\overline{k})$.
\State {\bfseries Output} $S$.
\end{algorithmic}
\end{algorithm}
Recall that $\tau$ is an upper bound on the minimum clustering cost. Our two-pass algorithm given by Algorithm~\ref{alg_2pass} consists of three steps. First, the algorithm constructs a maximal subset $P\subseteq X$ of pivots separated pairwise by distance more than $2\tau$ by executing one pass on the stream of points. In another pass, the algorithm computes a representative set $N(q)$ of each pivot $q\in P$. Points in the representative set of a pivot are within distance $\tau$ from the pivot. Due to the separation of $2\tau$ between the pivots, these representative sets are pairwise disjoint. Finally, a feasible set $S$ intersecting as many $N(q)$'s as possible is found and returned. (It will soon be clear that $S$ intersects all the $N(q)$'s.)
The algorithm needs working space only to store the pivots and their representative sets. By substituting $S=S^*$ in Lemma~\ref{lem_pigeonhole}, the number of pivots is at most $k$, that is, $|P|\leq k$. Since $N(q)$ contains at most one point from any group, it has at most $m-1$ points other than $q$. Thus,
\begin{observation}
The two-pass algorithm needs just enough working space to store $km$ points.
\end{observation}
The calls to \texttt{getPivots} and \texttt{getReps} both take time $O(|P|\cdot|X|)=O(kn)$, with $O(|P|)=O(k)$ update time per point. The call to \texttt{HittingSet} takes time $O(|P|^2\cdot\max_{q\in P}|N(q)|)=O(mk^2)$. Thus,
\begin{observation}
The two-pass algorithm runs in time $O(kn+mk^2)$, which is $O(kn)$ when $m$, the number of groups, is constant.
\end{observation}
We now prove the approximation guarantee.
\begin{theorem}
The two-pass algorithm returns a feasible set whose clustering cost is at most $3\tau$. This is a $3(1+\varepsilon)$-approximation when $\tau\in[\text{OPT},(1+\varepsilon)\text{OPT}]$.
\end{theorem}
\begin{proof}
Recall that $S^*$ is a feasible set having clustering cost at most $\tau$. For each $q\in P$ let $c_q\in S^*$ denote a point such that $d(q,c_q)\leq\tau$. Since the points in $P$ are separated by distance more than $2\tau$, the points $c_q$ are all distinct. Recall that $N(q)$, the output of \texttt{getReps}$()$, contains one point from every group which has a point within distance $\tau$ from $q$. Therefore, $N(q)$ contains a point, say $b_q$, from the same group as $c_q$ such that $d(q,b_q)\leq\tau$. Consider the set $B=\{b_q:q\in P\}$. This set intersects $N(q)$ for each $q$. Furthermore, $B$ contains exactly as many points from any group as $\{c_q:q\in P\}\subseteq S^*$, and therefore, $B$ is feasible. Thus, there exists a feasible set, namely $B$, intersecting all the pairwise disjoint $N(q)$'s. Recall that $S$, the output of \texttt{HittingSet}$()$, is a feasible set intersecting as many $N(q)$'s as possible. Thus, $S$ also intersects all the $N(q)$'s.
Now, the clustering cost of $S$ for $P$ is at most $\tau$, because $S$ intersects $N(q)$ for each $q\in P$. The clustering cost of $P$ for $X$ is at most $2\tau$ by the maximality of the set returned by \texttt{getPivots}$()$. These facts and Lemma~\ref{lem_cost_triangle} together imply that the clustering cost of $S$, the output of the algorithm, for $X$ is at most $3\tau$.
\end{proof}
\section{Introduction}
In the last few years the number of observed exoplanets has increased dramatically, owing to
missions like CoRoT (Baglin et al. \cite{baglin06}) and \textit{Kepler} (Koch et al. \cite{koch10}), as well as many other ground based and space missions
(see the exoplanet encyclopedia for a complete summary \footnote[1]{http://exoplanet.eu/index.php}).
Precise characterization of exoplanet host stars becomes increasingly important in the framework of detailed studies of the observed planetary systems. Constraints on the parameters and internal structure of the star can be obtained by comparing models with photometric and
spectroscopic observations (Southworth \cite{southworth11}; Basu et al. \cite{basu12}), but the best precision is obtained from asteroseismology, when the stellar
oscillations may be observed and analysed. This was the case, for example,
for the exoplanet host stars (hereafter EHS) $\iota$ Hor (Vauclair et al. \cite{vauclair08}) and $\mu$ Arae
(Soriano \& Vauclair \cite{soriano10}), both observed with HARPS,
as well as the EHS HAT-P-7, HAT-P-11 and TrES-2 (Christensen-Dalsgaard et al. \cite{jcd10}),
observed with \textit{Kepler}.
Among the EHS, the star HD 52265 is one of the most precisely observed for asteroseismology, as it was the only EHS observed as a main target by CoRoT.
This G0V metal-rich main sequence star has an orbiting jupiter-mass planet at 0.5 AU with a period of
119 days (Naef et al. \cite{naef01}; Butler et al. \cite{butler00}).
It was continuously observed between December 13, 2008 and March 3, 2009, that is, for 117 consecutive days. As a result, 31 p-mode frequencies were reported,
between 1500--2550 $\mu$Hz, corresponding to $\ell$ = 0, 1 and 2 (Ballot et al. \cite{ballot11}). From this analysis, a large separation of
$<\Delta \nu>$ = 98.4 $\pm$ 0.1 $\mu$Hz and a small separation of $<\delta \nu_{02}>$=8.1 $\pm$ 0.2 $\mu$Hz were found,
and a complete asteroseismic analysis including mode lifetimes was presented. An extensive study of the seismic rotation of HD 52265 was performed by Stahn \cite{stahn11} and Gizon et al. \cite{gizon12}.
Spectroscopic observations of this star were done by several groups, who gave different values of the observed triplet ([Fe/H], log $g$, T$_\mathrm{eff}$).
Their results are given in Table 1. Some of these groups also observed lines of other elements, and gave detailed relative abundances. The results show an overall surmetallicity in this star, similar for most heavy elements, with a small dispersion.
The Hipparcos parallax is 34.54 $\pm$ 0.40 mas (van Leeuwen \cite{leeuwen07}),
which leads to a luminosity value log L/L$_{\sun}=0.29 \pm 0.05$.
A spectroscopic follow up was also done with the Narval spectropolarimeter installed on the Bernard Lyot telescope
at Pic du Midi Observatory (France) during December 2008 and January 2009, i.e., during CoRoT observations.
No magnetic signature was observed.
Preliminary modeling of this star, using spectroscopic constraints, was done by Soriano et al. (\cite{soriano07}), in preparation for the CoRoT observations.
Evolutionary tracks were computed using the Toulouse-Geneva evolution code (TGEC).
According to the spectroscopic constraints, eight models with masses between 1.18 and 1.30 M$_{\sun}$ and metallicities ranging
from 0.19 to 0.27 were chosen for further analysis, and adiabatic p-modes frequencies were computed. Echelle diagrams for each selected model
were presented as well as their corresponding large and small separations. A large separation of $\sim$ 100 $\mu$Hz
was predicted from these models, except for the Takeda et al. (\cite{takeda05}) values, which corresponded to a smaller
large separation ($\sim$ 75 $\mu$Hz).
The detailed CoRoT observations allow going further in this analysis, using the precise seismic results. First of all, the
Takeda et al. (\cite{takeda05}) values, which correspond to a more evolved star, are excluded.
Now we present a complete asteroseismic analysis for HD 52265, and give precise results on the stellar parameters.
The method and models used for the asteroseismic comparisons with observations are described in section 2. All the models,
as described in section 2.1, include element gravitational settling. The seismic tests are discussed in section 2.2 and we
discuss the influence of radiative accelerations on heavy elements in section 2.3. The results are given in section 3.
Section 3.1 is devoted to the results obtained without taking surface effects into account. We test the influence of
varying the initial chemical composition and the introduction or not of radiative levitation on heavy
elements. An analysis of surface effects and the consequences on the results is given in section 3.2.
A first discussion of the results is given in section 3.3. Finally, in section 4 the results obtained using automatic
codes to find the best models for this star from seismology (SEEK and AMP) are presented and compared with the previously
obtained solutions. Summary and discussion are given in section 5.
\begin{table}[h]
\caption{Summary of previous spectroscopic studies of HD 52265}
\label{prevspec}
\centering
\begin{tabular}{c c c l}
\hline\hline
[Fe/H] & T$_\mathrm{eff}$ & log $g$ & Reference \\
\hline
0.27 $\pm$ 0.02 & 6162 $\pm$ 22 & 4.29 $\pm$ 0.04 & Gonzalez et al. 2001 \\
0.23 $\pm$ 0.05 & 6103 $\pm$ 52 & 4.28 $\pm$ 0.12 & Santos et al. 2004 \\
0.19 $\pm$ 0.03 & 6069 $\pm$ 15 & 4.12 $\pm$ 0.09 & Takeda et al. 2005 \\
0.19 $\pm$ 0.03 & 6076 $\pm$ 44 & 4.26 $\pm$ 0.06 & Fischer \& Valenti 2005 \\
0.24 $\pm$ 0.02 & 6179 $\pm$ 18 & 4.36 $\pm$ 0.03 & Gillon \& Magain 2006 \\
0.19 $\pm$ 0.05 & 6100 $\pm$ 60 & 4.35 $\pm$ 0.09 & Ballot et al. 2011 \\
\hline
\end{tabular}
\end{table}
\section{Computations with TGEC}
\subsection{Stellar models}
Stellar models were computed using the TGEC code, (Hui-Bon-Hoa \cite{hui08}; Th\'eado et al. \cite{theado12}), with the OPAL
equation of state and opacities (Rogers \& Nayfonov \cite{rogers02}; Iglesias \& Rogers \cite{iglesias96}), and the NACRE nuclear reaction
rates (Angulo et al. \cite{angulo99}). Convection was treated
using the mixing length theory. For all models, the mixing length parameter was adjusted to that of the solar case, i.e. $\alpha = 1.8$
without overshooting and without extra-mixing. Gravitational settling of helium and metals was included using the Paquette prescription
(Paquette et al. \cite{paquette86}; Michaud et al. \cite{michaud04}). Radiative accelerations of metals were also introduced using the
SVP method (Single Valued Parameters approximation, see Alecian \& LeBlanc \cite{alecian02}, LeBlanc \& Alecian \cite{leblanc04} and
Th\'eado et al. \cite{theado09}). As most stellar evolution codes neglect these radiative accelerations, we analysed the effect of including them or not on the seismic results.
Evolutionary tracks were computed for two metallicity values and two different initial helium abundances. The metallicity values
were chosen as [Fe/H] = 0.23 and 0.30, so that after diffusion the final model value lies inside the observed range. Here [Fe/H]
represents the global overmetallicity with respect to the Sun, defined as $[log(Z/X)_*-log(Z/X)_{\sun}]$, where $Z$ and $X$ are
computed at the stellar surface. Considering the small dispersion of the detailed abundances, this value may be compared with the
observed [Fe/H]. A discussion of the computed detailed abundance variations is given in section 2.3. The initial helium values are
labeled as Y$_{\sun}$ and Y$_{G}$, where Y$_{\sun}$ is the solar helium value taken from Grevesse \& Noels \cite{grevesse93} and
Y$_{G}$ is a helium abundance which increases with Z as expected if the local medium follows the general trend observed for the
chemical evolution of galaxies (cf. Izotov \& Thuan \cite{izotov04} and \cite{izotov10}).
Adiabatic oscillation frequencies were computed for many models along the evolutionary tracks
using the PULSE code (Brassard \& Charpinet \cite{brassard08}). We computed these frequencies for degrees
$\ell$ = 0 to $\ell$ = 3. For comparisons with the observations (seismic tests), we used for consistency the same frequency interval for the computed
frequencies as for the observed ones, i.e., between 1500 and 2550 $\mu$Hz, as discussed below.
\subsection{Seismic Tests}
A well-known characteristic of p modes in spherical stars is that modes of the same degree with successive radial order $n$
are nearly equally spaced in frequency (e.g. Tassoul \cite{tassoul80}). The large separation is defined as:
\begin{equation}
\Delta \nu _{n,\ell} = \nu_{\mathrm{n+1,\ell}} - \nu_{\mathrm{n,\ell}}
\end{equation}
In real stars, this large separation varies slightly with frequency, so that an average value has to be used for comparisons between models and observations; one has to be careful to use the same frequency range in both cases. Taking this into account, the fit between the computed and measured large separations is the first step of the comparison process. The large separation gives access to the stellar average density (Ulrich \cite{ulrich86}; White et al. \cite{white11}).
A second characteristic of p modes is that the difference between $(n,\ell)$ modes and $(n-1, \ell+2)$ ones varies very slowly with frequency. The small separations are defined as:
\begin{equation}
\delta \nu _{n,\ell} = \nu_{\mathrm{n,\ell}} - \nu_{\mathrm{n-1,\ell+2}}
\end{equation}
These small separations are most sensitive to the stellar core and may give information on the extension of the convective core in some stars
(Tassoul \cite{tassoul80}; Roxburgh \& Vorontsov \cite{roxburgh94}; Gough \cite{gough86}; Soriano \& Vauclair \cite{soriano08}).
Provided that the stellar chemical composition is precisely known, the knowledge of both the large and the small separations, which may be plotted in the so-called C-D diagrams, gives good constraints on the stellar parameters (Christensen-Dalsgaard \cite{jcd84}; White et al. \cite{white11}). However, whereas the stellar metallicity can be precisely derived from spectroscopy, the helium content of solar-type stars is not directly known from observations. This lack of knowledge leads to significant uncertainties on the evolutionary tracks, and thus on the derived stellar parameters.
We analysed these uncertainties in detail by computing models with various chemical compositions. For each stellar evolutionary track that we computed, we first searched for the model with an average large separation of $<\Delta \nu>$ = 98.4 $\pm$ 0.1 $\mu$Hz. As the large separation continuously decreases while the star evolves along the main sequence, only one model is found with the observed value (within the uncertainties). Then, for each set of computations done with a given initial chemical composition ([Fe/H] and Y), we looked for the model which best fitted the small separations observed between modes of $\ell$ = 2 and 0, $<\delta \nu_{02}>$ = 8.1 $\pm$ 0.2 $\mu$Hz. We then proceeded with detailed comparisons of observed and computed \'echelle diagrams.
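The selection of the single matching model along a track can be sketched as follows. The track data, target value, and function interface are illustrative assumptions; the sketch relies only on the monotonic decrease of the large separation along the main sequence noted above.

```python
import numpy as np

def select_model_age(ages, dnu_track, target=98.4):
    """Along one evolutionary track, pick (by linear interpolation in age)
    the single model whose average large separation matches the target.
    Relies on <Delta nu> decreasing monotonically along the main sequence.

    ages, dnu_track: arrays sampled along the track (hypothetical data).
    """
    ages = np.asarray(ages, dtype=float)
    dnu = np.asarray(dnu_track, dtype=float)
    # np.interp requires increasing abscissae; <Delta nu> decreases, so reverse
    return float(np.interp(target, dnu[::-1], ages[::-1]))

# Hypothetical track: <Delta nu> falls linearly from 110 to 90 muHz over 1-4 Gyr
ages = np.linspace(1.0, 4.0, 31)
dnus = np.linspace(110.0, 90.0, 31)
age_best = select_model_age(ages, dnus, target=98.4)  # -> 2.74 Gyr here
```

The same one-dimensional search can then be repeated over tracks of different mass to bracket the observed $<\delta \nu_{02}>$.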
The final comparison between the models and the seismic observations requires taking into account the surface effects induced by the wave behavior in the outer stellar layers. We computed the frequency shift induced by such effects, using the recipe proposed by Kjeldsen et al. (\cite{kjeldsen08}). In this case, the large separations are modified, as discussed in Section 3.2, which leads to corrections in the results.
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{newpulse_logg_fe023yg.eps}
\includegraphics[width=0.48\textwidth]{newpulse_logg_fe023ysol.eps}
\includegraphics[width=0.48\textwidth]{logg_fe030yg.eps}
\includegraphics[width=0.48\textwidth]{logg_fe030ysol.eps}
\caption{Evolutionary tracks in the log $g$ vs log $T_\mathrm{eff}$ plane for the various sets of metallicity and helium abundance, for $\alpha=1.8$ (see text for details).
The symbols indicate the error boxes of Gonzalez et al. (\cite{gonzalez01}) (\textit{asterisks}), Santos et al. (\cite{santos04}) (\textit{diamonds}),
Gillon \& Magain (\cite{gillon06}) (\textit{squares}), Fisher \& Valenti (\cite{fisher05}) (\textit{triangles}), and Ballot et al. (\cite{ballot11}) (\textit{crosses}).
The straight thick line represents the iso-$<\Delta \nu>$ line, with $<\Delta \nu>$=98.4 $\mu$Hz.}
\label{logtracks1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{newpulse_logl_fe023yg.eps}
\includegraphics[width=0.48\textwidth]{newpulse_logl_fe023ysol.eps}
\includegraphics[width=0.48\textwidth]{logl_fe030yg.eps}
\includegraphics[width=0.48\textwidth]{logl_fe030ysol.eps}
\caption{HR diagrams for $\alpha=1.8$ (see text for details). The horizontal extent of the error boxes (effective temperatures) is the same as in Figure~\ref{logtracks1}; the vertical extent corresponds to the luminosity uncertainty.
The straight thick line represents the iso-$<\Delta \nu>$ line, with $<\Delta \nu>$=98.4 $\mu$Hz.}
\label{logtracks2}
\end{figure*}
\subsection{Test on atomic diffusion: radiative accelerations}
Atomic diffusion is a very important process inside stars: it can modify the atmospheric abundances and also
have strong implications for the internal structure of stars (Richard, Michaud \& Richer \cite{richard01};
Michaud et al. \cite{michaud04}; Th\'eado et al. \cite{theado09}).
At the present time, most stellar evolution codes include the computation of gravitational settling, but not the
computation of radiative accelerations of heavy elements. These accelerations, which oppose gravitation, are negligible
for the Sun but they rapidly become important for more massive stars (Michaud et al. \cite{michaud76}).
The variations with depth of the radiative accelerations on specific elements can lead to their accumulation or depletion in various
layers inside stars. The presence of a heavy iron layer above layers with smaller molecular weights creates an inverse $\mu$-gradient,
unstable towards thermohaline convection (Vauclair \cite{vauclair04}). This induced mixing must also be taken into account
in stellar modeling computations (Th\'eado et al. \cite{theado09}).
An improved TGEC version including radiative accelerations
on C, N, O, Ca and Fe was recently developed (Th\'eado et al. \cite{theado12}). This new version was used to compare
the oscillation frequencies computed with and without introducing the radiative accelerations in the models. We found
that, for solar-type stars (masses less than 1.30 M$_{\sun}$), the difference in the computed frequencies is small. Two models
with the same mass of 1.28 M$_{\sun}$ and the same average large separations of 98.26 $\mu$Hz, one computed with the
radiative accelerations and one neglecting them, present differences in the average small separations of the order of 0.01 $\mu$Hz. These
differences between the two models decrease with decreasing stellar mass. We conclude that the radiative accelerations may be neglected
in the following computations.
The result that radiative accelerations have no important consequences in the present models is consistent with the fact that the
detailed abundances do not show large relative variations for the observed elements (Section 1). Indeed, in the case of gravitational
settling, the abundances of heavy elements all decrease in a similar way, in spite of their different masses, due to the slowing-down
effect of the Coulomb forces. The diffusion velocities typically vary as A/Z$^2$, where A is the mass number and Z the charge; this ratio
is similar for various elements under stellar conditions. This behavior would be modified if radiative accelerations were important.
\begin{table*}
\caption{Examples of models with $\alpha$=1.8, without surface effects.}
\label{fe019yg}
\centering
\begin{tabular}{c c c c c c c c c c c c c}
\hline\hline
[Fe/H]$_{i}$ & Y$_{i}$ & M/M$_{\sun}$ & Age & [Fe/H]$_{S}$ & Y$_{S}$ & log $g$ & log $T_\mathrm{eff}$ & log $(L/L_{\sun})$ & R/R$_{\sun}$ &
M/R$^{3}$ & $<\Delta \nu>$ & $<\delta \nu_{02}>$ \\
 & & & [Gyr] & & & & [K] & & & [solar units] & [$\mu$Hz] & [$\mu$Hz] \\
\hline
0.23 & 0.293 & 1.18 & 3.682 & 0.16 & 0.246 & 4.267 & 3.778 & 0.310 & 1.328 & 0.50 & 98.31 & 7.08 \\
0.23 & 0.293 & 1.20 & 3.204 & 0.16 & 0.246 & 4.271 & 3.782 & 0.332 & 1.333 & 0.51 & 98.38 & 7.54 \\
0.23 & 0.293 & 1.22 & 2.820 & 0.16 & 0.248 & 4.273 & 3.787 & 0.355 & 1.341 & 0.51 & 98.36 & 7.85 \\
0.23 & 0.293 & 1.24 & 2.416 & 0.15 & 0.242 & 4.276 & 3.791 & 0.375 & 1.347 & 0.51 & 98.34 & 8.33 \\
\hline
\hline
0.23 & 0.271 & 1.22 & 3.756 & 0.16 & 0.228 & 4.272 & 3.776 & 0.312 & 1.343 & 0.51 & 98.33 & 7.11 \\
0.23 & 0.271 & 1.24 & 3.283 & 0.16 & 0.228 & 4.275 & 3.780 & 0.334 & 1.349 & 0.51 & 98.31 & 7.80 \\
0.23 & 0.271 & 1.26 & 2.865 & 0.16 & 0.230 & 4.278 & 3.785 & 0.355 & 1.355 & 0.51 & 98.34 & 7.98 \\
0.23 & 0.271 & 1.28 & 2.461 & 0.16 & 0.227 & 4.281 & 3.789 & 0.375 & 1.361 & 0.51 & 98.35 & 8.30 \\
\hline
\hline
0.30 & 0.303 & 1.20 & 3.209 & 0.23 & 0.257 & 4.271 & 3.778 & 0.316 & 1.333 & 0.51 & 98.42 & 7.35 \\
0.30 & 0.303 & 1.22 & 2.820 & 0.23 & 0.258 & 4.273 & 3.783 & 0.339 & 1.341 & 0.51 & 98.35 & 7.75 \\
0.30 & 0.303 & 1.24 & 2.431 & 0.23 & 0.261 & 4.276 & 3.787 & 0.361 & 1.347 & 0.51 & 98.38 & 8.11 \\
0.30 & 0.303 & 1.26 & 2.072 & 0.22 & 0.254 & 4.282 & 3.792 & 0.382 & 1.354 & 0.51 & 98.38 & 8.59 \\
\hline
\hline
0.30 & 0.271 & 1.24 & 3.771 & 0.23 & 0.231 & 4.274 & 3.771 & 0.299 & 1.349 & 0.51 & 98.32 & 7.11 \\
0.30 & 0.271 & 1.26 & 3.293 & 0.23 & 0.231 & 4.277 & 3.776 & 0.320 & 1.356 & 0.51 & 98.37 & 7.64 \\
0.30 & 0.271 & 1.28 & 2.865 & 0.23 & 0.231 & 4.280 & 3.780 & 0.341 & 1.363 & 0.51 & 98.34 & 7.87 \\
0.30 & 0.271 & 1.30 & 2.476 & 0.24 & 0.233 & 4.282 & 3.784 & 0.362 & 1.369 & 0.51 & 98.39 & 8.25 \\
\hline
\end{tabular}
\end{table*}
\section{Results}
\subsection{Computations without surface effects }
The computations of evolutionary tracks, with two initial metallicity values, [Fe/H] = 0.23 and 0.30, and two different helium abundances, the solar value Y$_{\sun}$ = 0.271 and Y$_{G}$ equal to 0.293 and 0.303 respectively for the two metallicity values, lead to four different sets of tracks, each covering a mass range from 1.10 to 1.30 $M_{\sun}$. A few of these tracks are presented in Figures 1 and 2. Error boxes of five of the spectroscopic studies given in Table 1 are also drawn in these figures.
As explained in previous sections, we found, along each evolutionary track, a model which has an average large separation consistent with the observed one,
computed in the same frequency range, of $\sim$ [1.5, 2.5] mHz.
The location of all these models in the log $g$ - log $T_\mathrm{eff}$ plane as well as in the HR diagrams
is indicated in Figures~\ref{logtracks1} and \ref{logtracks2} with iso-$<\Delta \nu>$ = 98.4 $\mu$Hz lines. For each case, we also computed the average small separation, $<\delta \nu_{02}>$ (Table 2). We can see that for models with the same large separation, the small separation increases with increasing mass, so that in each case (i.e. for each set of chemical composition) there is a model that is consistent with both the large and small separations. However, when comparing the absolute model frequencies with the observed ones, we find that we must shift the computed frequencies by about 20 $\mu$Hz to obtain the best fit. This offset is attributed to surface effects. The \'echelle diagrams corresponding to these best models are given in Figure 3.
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{ED_fe023ygm124_fitm_newpulse.eps}%
\includegraphics[width=0.5\textwidth]{ED_fe023ysolm128_fitm_newpulse.eps}
\includegraphics[width=0.5\textwidth]{ED_fe030ygm124_fitm.eps}%
\includegraphics[width=0.5\textwidth]{ED_fe030ysolm130_fitm.eps}
\caption{Echelle diagrams for the best model found for each set of evolutionary tracks calculated with $\alpha=1.8$. In this set of models, the surface effects are not included. The
model frequencies (\textit{crosses}) are compared to the observed frequencies, represented as diamonds ($\ell$=0), asterisks ($\ell$=1), and triangles ($\ell$=2). To obtain these best fits we had to shift the model frequencies by 27, 25, 24, and 22 $\mu$Hz respectively (top left, top right, bottom left, bottom right; see text).}
\label{edfitm181}
\end{figure*}
\begin{table*}
\caption{Examples of models with $\alpha$=1.8, including surface effects. Here $<\Delta \nu^*>$ represents the large separations corrected following Kjeldsen et al. (\cite{kjeldsen08}).}
\label{fe019ygse}
\centering
\begin{tabular}{c c c c c c c c c c c c c c}
\hline\hline
[Fe/H]$_{i}$ & Y$_{i}$ & M/M$_{\sun}$ & Age & [Fe/H]$_{S}$ & Y$_{S}$ & log $g$ & log $T_\mathrm{eff}$ & log $(L/L_{\sun})$ & R/R$_{\sun}$ &
M/R$^{3}$ & $<\Delta \nu^*>$ & $<\Delta \nu>$ & $<\delta \nu_{02}>$ \\
 & & & [Gyr] & & & & [K] & & & [solar units] & [$\mu$Hz] & [$\mu$Hz] & [$\mu$Hz] \\
\hline
0.23 & 0.293 & 1.18 & 3.548 & 0.16 & 0.247 & 4.275 & 3.779 & 0.306 & 1.315 & 0.52 & 99.67 & 98.27 & 7.27 \\
0.23 & 0.293 & 1.20 & 3.069 & 0.16 & 0.247 & 4.279 & 3.784 & 0.328 & 1.321 & 0.52 & 99.68 & 98.27 & 7.83 \\
0.23 & 0.293 & 1.22 & 2.670 & 0.16 & 0.250 & 4.282 & 3.788 & 0.350 & 1.327 & 0.52 & 99.86 & 98.19 & 8.11 \\
0.23 & 0.293 & 1.24 & 2.251 & 0.16 & 0.244 & 4.287 & 3.792 & 0.369 & 1.331 & 0.52 & 100.08& 98.19 & 8.47 \\
\hline
\hline
0.23 & 0.271 & 1.20 & 4.130 & 0.16 & 0.228 & 4.276 & 3.773 & 0.287 & 1.325 & 0.52 & 99.51 & 98.19 & 6.85 \\
0.23 & 0.271 & 1.22 & 3.622 & 0.16 & 0.228 & 4.279 & 3.777 & 0.309 & 1.331 & 0.52 & 99.56 & 98.18 & 7.26 \\
0.23 & 0.271 & 1.24 & 3.149 & 0.16 & 0.229 & 4.283 & 3.781 & 0.330 & 1.337 & 0.52 & 99.61 & 98.18 & 7.69 \\
0.23 & 0.271 & 1.26 & 2.730 & 0.16 & 0.232 & 4.286 & 3.786 & 0.351 & 1.343 & 0.52 & 99.62 & 98.07 & 8.11 \\
\hline
\hline
0.30 & 0.303 & 1.20 & 3.089 & 0.23 & 0.258 & 4.278 & 3.780 & 0.313 & 1.322 & 0.52 & 99.64 & 98.32 & 7.70 \\
0.30 & 0.303 & 1.22 & 2.685 & 0.23 & 0.260 & 4.282 & 3.784 & 0.335 & 1.328 & 0.52 & 99.72 & 98.13 & 7.94 \\
0.30 & 0.303 & 1.24 & 2.296 & 0.24 & 0.263 & 4.285 & 3.788 & 0.357 & 1.334 & 0.52 & 99.78 & 98.15 & 8.34 \\
0.30 & 0.303 & 1.26 & 1.907 & 0.23 & 0.257 & 4.290 & 3.793 & 0.376 & 1.337 & 0.52 & 100.23& 98.20 & 8.84 \\
\hline
\hline
0.30 & 0.271 & 1.24 & 3.637 & 0.23 & 0.231 &4.282 & 3.773 & 0.295 & 1.338 & 0.52 & 99.56 & 98.27 & 7.30 \\
0.30 & 0.271 & 1.26 & 3.173 & 0.23 & 0.232 & 4.284 & 3.777 & 0.317 & 1.345 & 0.52 & 99.50 & 98.16 & 7.64 \\
0.30 & 0.271 & 1.28 & 2.730 & 0.23 & 0.233 & 4.288 & 3.781 & 0.337 & 1.351 & 0.52 & 99.64 & 98.09 & 8.08 \\
0.30 & 0.271 & 1.30 & 2.326 & 0.23 & 0.236 & 4.291 & 3.785 & 0.357 & 1.355 & 0.52 & 99.80 & 98.20 & 8.52 \\
\hline
\end{tabular}
\end{table*}
\subsection{Computations including surface effects}
We know that stellar modeling fails to represent properly the near-surface layers of the stars. As a consequence,
there is a systematic offset between observed and computed frequencies. This offset is independent of the angular degree $\ell$ and
increases with frequency. This has been studied in the case of the Sun, and a similar offset is expected to occur
in other stars. Using the Sun as a reference, Kjeldsen et al. (\cite{kjeldsen08}) suggested that
for other stars, the near-surface correction on the frequencies may be approximated by:
\begin{equation}
\nu _\mathrm{obs}(n) - \nu_\mathrm{best}(n) = a \left[\frac{\nu_\mathrm{obs}(n)}{\nu_{0}}\right]^{b}
\end{equation}
where $\nu_\mathrm{obs}(n)$ are the observed $\ell$=0 frequencies with radial order $n$, $\nu_\mathrm{best}(n)$ are the calculated frequencies for
the best model, i.e., the model that best describes the star while still failing to reproduce the near-surface layers correctly,
and $\nu_{0}$ is a constant reference frequency, chosen as the frequency of maximum amplitude in the power spectrum. The parameter $a$ may be derived as a function of the parameter $b$, which has to be adjusted to the solar case.
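A minimal sketch of this correction procedure is given below. The exponent $b \simeq 4.9$ is the solar-calibrated value quoted by Kjeldsen et al. (\cite{kjeldsen08}); note that, for simplicity, the sketch evaluates the power law at the model frequencies rather than the observed ones, which is only a first-order approximation of the original recipe, and the frequency values used in the check are synthetic.

```python
import numpy as np

def fit_surface_amplitude(nu_obs, nu_model, nu0, b=4.9):
    """Fit the amplitude a of the power-law near-surface correction
    nu_obs - nu_best = a*(nu/nu0)**b by matching the mean offset.
    The model frequencies are used in the bracket (first-order
    approximation; the original relation uses the observed ones)."""
    nu_obs, nu_model = np.asarray(nu_obs), np.asarray(nu_model)
    return float(np.mean(nu_obs - nu_model) / np.mean((nu_model / nu0) ** b))

def apply_surface_correction(nu_model, nu0, a, b=4.9):
    """Shift model frequencies by the fitted near-surface correction."""
    return np.asarray(nu_model) + a * (np.asarray(nu_model) / nu0) ** b

# Synthetic check: build "observed" frequencies with a known amplitude
nu_best = np.linspace(1500.0, 2500.0, 11)   # hypothetical l=0 model frequencies
nu0 = 2090.0                                # frequency of maximum amplitude
nu_obs = nu_best + 4.0 * (nu_best / nu0) ** 4.9
a_fit = fit_surface_amplitude(nu_obs, nu_best, nu0)  # recovers a = 4.0
```

Applying the fitted correction to the model frequencies then yields the shifted frequencies whose new average large separation is the $<\Delta \nu^*>$ discussed in the text.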
We used this method to find the frequency corrections that we have to apply to our models, which lead to a new average large separation $<\Delta \nu^*>$, slightly larger than the observed one.
We then proceed as before to derive models for the same sets of values of the helium abundance and metallicity, using $\alpha = 1.8$ (Table 3).
Figure~\ref{edse18} presents \'echelle diagrams obtained with the new best models. In these graphs, the frequency corrections have been applied to the new model frequencies in order to compare them directly with the observations.
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{ED_fe023ygm122_SE.eps}%
\includegraphics[width=0.5\textwidth]{ED_fe023ysolm126_SE.eps}
\includegraphics[width=0.5\textwidth]{ED_fe030ygm123_SE.eps}%
\includegraphics[width=0.5\textwidth]{ED_fe030ysolm128_SE.eps}
\caption{Echelle diagrams for the best model, including near-surface corrections, as proposed by Kjeldsen et al. \cite{kjeldsen08}
found for each set of evolutionary tracks calculated with $\alpha=1.8$, in comparison with
observed frequencies. The symbols are the same ones as in Figure 3.}
\label{edse18}
\end{figure*}
\subsection{Best models and discussion}
We give in Table~\ref{fe023yg} the parameters of the best models that we obtained for the four different sets of chemical composition. These models have been computed with a mixing length parameter of 1.8, the frequencies have been corrected for surface effects and the computed large and small separations are the closest to the observed ones in the sample. We also give the $\chi^2$ values for the comparisons of the three $\ell$ = 0, 1, 2 lines in the \'echelle diagrams.
Figure~\ref{comp-best} displays the large and small separations as a function of frequency for the observations and the four best models. The pattern observed in the large separations is well reproduced by the models. For the small separations, the agreement is also very good, except for two points at large frequencies (2284 $\mu$Hz and 2479 $\mu$Hz). This suggests that the uncertainties given in Ballot et al. (\cite{ballot11}) for these points were underestimated.
Several concluding points can already be derived from a first analysis of Table 4. First of all, the stellar gravity is obtained, as usual, with a precision of order $0.1\%$. The mass and age depend basically on the chosen value for the initial helium content.
For a low helium value, the mass is 1.26 to 1.28 $M_{\sun}$, and the age 2.73 Gyr,
whereas for a larger helium value the mass is slightly smaller (around 1.22 - 1.23 $M_{\sun}$) as well as the age (2.48 to 2.68 Gyr).
In any case the radius and luminosity are known with a precision of order $1\%$.
We can go further by comparing the effective temperatures of the models with the spectroscopic observations (Figure 6).
As found before for the cases of $\iota$ Hor (Vauclair et al. \cite{vauclair08}) and $\mu$ Arae (Soriano \& Vauclair \cite{soriano10}),
the effective temperatures of the best models are smaller for smaller initial helium and larger for smaller metallicity.
In the present case, we find that the model with the largest metallicity and the smallest helium, represented by a black square in Figure 6, lies at the coolest
limit of the observational boxes and thus may be excluded from the sample on spectroscopic grounds.
We derive the stellar parameters from the mean of the three remaining models, as given in Table 5.
\begin{table*}
\caption{Best models obtained with the TGEC code, including near surface corrections.}
\label{fe023yg}
\centering
\begin{tabular}{c c c c c c c c c c c }
\hline\hline
M/M$_{\sun}$ & $L/L_{\sun}$ & R/R$_{\sun}$ & log $g$ & T$_\mathrm{eff}$ [K] & age [Gyr] & [Fe/H]$_{i}$ & Y$_{i}$ & [Fe/H]$_{S}$ & Y$_{S}$ & $\chi^{2}$ \\
\hline
1.22 & 2.239 & 1.327 & 4.282 & 6143 & 2.670 & 0.23 & 0.293 & 0.16 & 0.250 & 5.06 \\
1.23 & 2.219 & 1.330 & 4.283 & 6120 & 2.476 & 0.30 & 0.303 & 0.23 & 0.262 & 3.34 \\
1.26 & 2.244 & 1.343 & 4.286 & 6109 & 2.730 & 0.23 & 0.271 & 0.16 & 0.232 & 6.42 \\
1.28 & 2.173 & 1.351 & 4.288 & 6043 & 2.730 & 0.30 & 0.271 & 0.23 & 0.233 & 3.51 \\
\hline
\end{tabular}
\end{table*}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{delta_nu_newpulse_SE.eps}
\includegraphics[width=0.5\textwidth]{d02_obs_newpulse_SE.eps}
\caption{Comparisons between the large separations (top) and the small separations (bottom) of the four best models,
indicated by squares for models with [Fe/H]$_{s}$=0.23 ([Fe/H]$_{i}$=0.30) and triangles for models with
[Fe/H]$_{s}$=0.16 ([Fe/H]$_{i}$=0.23). Empty symbols indicate a helium abundance Y$_{G}$, and filled symbols a solar helium abundance (see text for details). Observations are represented as white diamonds.}
\label{comp-best}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{logg_boxes_newpulseSE.eps}
\caption{Location of the best models in the log g vs log $T_\mathrm{eff}$ plane. Triangles indicate models with [Fe/H]$_{s}$=0.16 ([Fe/H]$_{i}$=0.23),
and squares indicate models with [Fe/H]$_{s}$=0.23 ([Fe/H]$_{i}$=0.30).
Filled symbols are used to show the models with a solar helium value. Error boxes correspond to Gonzalez et al. (\cite{gonzalez01}) (\textit{asterisks}),
Santos et al. (\cite{santos04}) (\textit{diamonds}), Gillon \& Magain (\cite{gillon06}) (\textit{squares}), Fisher \& Valenti (\cite{fisher05}) (\textit{triangles}),
and Ballot et al. (\cite{ballot11}) (\textit{crosses}) spectroscopic studies.}
\label{edfitm182}
\end{figure}
\section{Scaling relations and automatic fits}
\subsection{Scaling relations}
The empirical scaling relations proposed by Kjeldsen \& Bedding (\cite{kjeldsen95}) can give approximate values of the mass and radius of a star from the observed average large separation, the frequency at the maximum of the power spectrum and the observed effective temperature:
\begin{equation}
\frac{M}{M_{\sun}} = \left(\frac{135\,\mu\mathrm{Hz}}{<\Delta \nu>}\right)^{4} \left(\frac{\nu_\mathrm{max}}{3050\,\mu\mathrm{Hz}}\right)^{3} \left(\frac{T_\mathrm{eff}}{5777\,\mathrm{K}}\right)^{3/2}
\end{equation}
\begin{equation}
\frac{R}{R_{\sun}} = \left(\frac{135\,\mu\mathrm{Hz}}{<\Delta \nu>}\right)^{2} \left(\frac{\nu_\mathrm{max}}{3050\,\mu\mathrm{Hz}}\right) \left(\frac{T_\mathrm{eff}}{5777\,\mathrm{K}}\right)^{1/2}
\end{equation}
For HD 52265, the frequency at the maximum amplitude is $\nu_{max}$ = 2090 $\pm$ 20 $\mu$Hz (Ballot et al. \cite{ballot11}). With a large separation of 98.4 $\mu$Hz and an effective
temperature of 6100 K, we obtain from these relations $M = 1.23 M_{\sun}$ and $R = 1.32 R_{\sun}$. With a large separation corrected for the surface effects, of 99.4 $\mu$Hz, we obtain $M = 1.19 M_{\sun}$ and $R = 1.29 R_{\sun}$. In spite of the uncertainties, these results are in good agreement with our own results.
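For reference, the two scaling relations above can be evaluated directly with the HD 52265 values quoted in the text; the solar reference values are those appearing in the equations, and small differences in the last digit with respect to the quoted results are rounding effects.

```python
def scaling_mass_radius(dnu, nu_max, teff):
    """Kjeldsen & Bedding (1995) scaling relations, with solar references
    dnu_sun = 135 muHz, nu_max_sun = 3050 muHz, Teff_sun = 5777 K."""
    mass = (135.0 / dnu) ** 4 * (nu_max / 3050.0) ** 3 * (teff / 5777.0) ** 1.5
    radius = (135.0 / dnu) ** 2 * (nu_max / 3050.0) * (teff / 5777.0) ** 0.5
    return mass, radius

# HD 52265 values quoted in the text
m, r = scaling_mass_radius(dnu=98.4, nu_max=2090.0, teff=6100.0)
# m is close to 1.23 M_sun and r to 1.32 R_sun, consistent with the text
```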
\subsection{Results from the SEEK code}
Computations have been done for this star using the SEEK automatic code (Quirion et al. \cite{quirion10}, and Gizon et al. \cite{gizon12}).
This code makes use of
a large grid of stellar models computed with the Aarhus Stellar Evolution Code (ASTEC). It searches for the best model corresponding to a seismically observed
star, with the help of Bayesian statistics. The input parameters are the large and small average separations, the spectroscopic observables (T$_{eff}$, log $g$, [Fe/H])
and the absolute magnitude. The output gives the stellar mass, the radius and the age. In the case of HD 52265 the values given by the SEEK code for the mass and radius
are slightly larger than our results, and the age is smaller: $M = 1.27 \pm 0.03\, M_{\sun}$, $R = 1.34 \pm 0.02\, R_{\sun}$, and age = $2.37 \pm 0.29$ Gyr.
The difference between the SEEK results and ours may be related to a different initial helium, to slightly different values of the average large and small separations as given by Gizon et al. (\cite{gizon12}), or to the fact that the SEEK results correspond to a secondary
maximum of probability, as discussed below.
\subsection{Results from the Asteroseismic Modeling Portal}
We also performed computations for HD 52265 using the Asteroseismic Modeling Portal (https://amp.ucar.edu/). The AMP provides a web-based interface for
deriving stellar parameters for Sun-like stars from asteroseismic data. AMP was developed at the High Altitude Observatory and the Computational \& Information Systems Laboratory of the National Center for Atmospheric Research (Metcalfe et al. \cite{metcalfe09}). It uses the ASTEC and ADIPLS codes (Christensen-Dalsgaard 2008a,b) coupled with a parallel genetic algorithm (Metcalfe \& Charbonneau \cite{metcalfe03}). Two different computations were done: the first one, AMP(a), by S. Vauclair, using all the observed frequencies as given by Ballot et al. (\cite{ballot11}), and the second one, AMP(b), by S. Mathur, using only the most reliable frequencies (the ($\ell$=0, $n$=14), ($\ell$=1, $n$=14), and ($\ell$=2, $n$=15) modes were excluded). The final results are very close to the parameters found using the TGEC code (Table 6).
Interestingly enough, the code also found solutions for a mass of 1.27 $M_{\sun}$ with a small Y (about 0.26) and a small age (about 2.7 Gyr), but
the $\chi^2$ tests showed that these results corresponded to secondary maxima, not to the best solution.
\begin{table}[h]
\caption{Final results for the parameters of the exoplanet-host star HD 52265 obtained with the TGEC code.}
\label{TGEC}
\centering
\begin{tabular}{ c c }
\hline\hline
M/M$_{\sun}$ = 1.24 $\pm$ 0.02 & $[Fe/H]_i$ = 0.27 $\pm$ 0.04 \\
R/R$_{\sun}$ = 1.33 $\pm$ 0.02 & Y$_i$ = 0.28 $\pm$ 0.02 \\
L/L$_{\sun}$ = 2.23 $\pm$ 0.03 & $[Fe/H]_s$ = 0.20 $\pm$ 0.04 \\
log g = 4.284 $\pm$ 0.002 & Y$_s$ = 0.25 $\pm$ 0.02\\
Age (Gyr) = 2.6 $\pm$ 0.2 & T$_{eff}$ (K) = 6120 $\pm$ 20\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{Final results for the parameters of the exoplanet-host star HD 52265 obtained from AMP automatic analysis, using all observed frequencies (a) or only the most reliable frequencies (b).}
\label{AMP}
\centering
\begin{tabular}{ c c c}
\hline\hline
& AMP(a) & AMP(b) \\
\hline
M/M$_{\sun}$ & 1.22 & 1.20 \\
R/R$_{\sun}$ & 1.321 & 1.310 \\
L/L$_{\sun}$ & 2.058 & 2.128 \\
log g & 4.282 & 4.282 \\
$[Fe/H]$ & 0.23 & 0.215 \\
Y & 0.280 & 0.298 \\
Age (Gyr) & 3.00 & 2.38 \\
T$_{eff}$ & 6019 & 6097 \\
\hline
\end{tabular}
\end{table}
\section{Summary and Conclusions}
In the present paper, we performed a detailed analysis of the exoplanet-host star HD 52265, which was observed by CoRoT during 117 consecutive days as one of the main targets. The high-quality observational results obtained for this star (Ballot et al. \cite{ballot11}) allowed a precise determination of its parameters, using classical comparisons between models computed with the TGEC and observational data. In our computations, we included atomic diffusion of helium and heavy elements. We found that, for the computed stellar models, the effects of radiative accelerations on individual elements are small and may be neglected. This result is consistent with the fact that detailed abundance analyses show similar enhancements in heavy elements compared to the Sun. We iterated the model computations so as to find a final surface [Fe/H]$_S$ in the observational range. We also compared these results with those obtained using approximate scaling relations (Kjeldsen \& Bedding \cite{kjeldsen95}) and automatic codes such as SEEK (Quirion et al. \cite{quirion10}) and AMP (Metcalfe et al. \cite{metcalfe09}). Although the detailed physics included in the models differs, these results are in good agreement.
The good agreement between the results obtained with the TGEC code and with AMP for Sun-like stars was already demonstrated for the star $\mu$ Arae. The results for this star, which were published separately (Soriano \& Vauclair \cite{soriano10} for TGEC; Do\u{g}an et al. \cite{dougan12} for AMP), are presented in the appendix for comparison. Altogether, these works represent a real success of asteroseismic studies of Sun-like stars.
\begin{acknowledgements}
Many thanks are due to Travis Metcalfe for fruitful discussions and for introducing SV to AMP. Computational resources were provided
by TeraGrid allocation TG-AST090107 through the Asteroseismic Modeling Portal (AMP). NCAR is partially supported by the National Science Foundation.
We also thank the referee for important and constructive remarks.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Dissipative dynamical systems are ubiquitous in Nature. For example, in mechanical systems they occur when energy is pumped into the system through an external driving force acting on it, or energy is drained by means of mechanisms such as friction. The process by which dissipation manifests itself in phase space is through the contraction and/or expansion of volumes of ensembles of initial conditions as they evolve in time. These systems display a rich variety of dynamical behavior governed by objects such as limit cycles, attractors, slow manifolds, etc., and their understanding is of paramount importance to unravel the underlying dynamics at play \cite{guck1983,nayfeh1995,strogatz,kuehn2015,meiss2017}.
The goal we pursue in this work is to illustrate how the method of Lagrangian descriptors (LDs) can be used to explore the phase space of dynamical systems subject to dissipation. We believe that this technique can be of great help in gaining insight into the dynamical mechanisms at play in such systems, and can provide some guidance for carrying out a detailed analysis of any system of interest. The advantage of applying LDs is that it is a scalar-based trajectory diagnostic tool that is very simple to implement computationally. This technique was first developed a decade ago for the study of Lagrangian transport and mixing in geophysical flows \cite{madrid2009,mendoza2010}, and in this context it has been applied, for instance, to the real-time assessment of oil spills \cite{gg2016,guillermo2021} and also to the path planning of transoceanic autonomous underwater vehicle missions \cite{ramos2018}. The identification of hyperbolic trajectories and their stable and unstable manifolds in the time-dependent vector fields defined by ocean currents plays a crucial role in the analysis of these problems, and it was the main focus of these studies. However, other geometrical objects (limit cycles, attractors and slow manifolds) that are relevant for dissipative systems have not been analyzed in the literature with this tool. The contribution of this work is aimed at filling this gap.
The capability of LDs to reveal phase space structure is addressed here through many classical examples from nonlinear dynamics. The contents of this paper are outlined as follows. In Section \ref{sec:sec1} we provide a brief introduction to the method of Lagrangian descriptors and describe the numerical setup we have used throughout this work when exploring the phase space of dissipative systems. Section \ref{sec:sec2} is devoted to presenting the results obtained from our analysis of several well-known model dynamical systems by means of LDs. Firstly, in Sec. \ref{subsec:sec1} we look at simple saddle systems that display different expansion and contraction rates, so that their dynamics are characterized by two distinct time scales. Then, in Sec. \ref{subsec:sec2} we move on to illustrate how LDs can be used to detect bifurcations, in particular Andronov-Hopf bifurcations that give rise to the birth of limit cycles. Moreover, we demonstrate that LDs are capable of revealing the limit cycle structure of the classical van der Pol oscillator. Section \ref{subsec:sec3} is focused on the analysis of systems with slow manifolds, and Sec. \ref{subsec:sec4} describes how this tool succeeds in highlighting the intricate and fractal structure of attractors and strange attractors in the Duffing oscillator system. We extend our study of dissipative dynamics further in Sec. \ref{subsec:sec5}, where we illustrate the potential of LDs for the analysis of transition ellipsoids in Hamiltonian systems subject to dissipative forces. The development of techniques such as LDs for identifying these geometrical objects in phase space is of paramount importance, since they greatly facilitate the dynamical study of trajectories evolving across an index-1 saddle of the underlying potential energy surface that describes the model system. Finally, in Sec. \ref{sec:conc} we present the conclusions of this work and an outlook for future research.
\section{Lagrangian Descriptors}
\label{sec:sec1}
This section briefly introduces the method of Lagrangian descriptors (LDs) that we have used in this work to analyze the phase space structures that govern the dynamics of dissipative systems. This technique is a scalar-based diagnostic tool that was originally introduced in the context of fluid mechanics to study transport and mixing in geophysical flows \cite{madrid2009,mendoza2010,mancho2013lagrangian}, but in the past years it has been found to provide useful information for the analysis of the high dimensional phase space of chemical systems, see e.g. \cite{Agaoglou2019,ldbook2020} and references therein. Although the method was initially developed to analyze continuous-time dynamical systems \cite{mancho2013lagrangian,lopesino2017} with general time dependence, it has also been applied to maps \cite{carlos,GG2019b}, stochastic dynamical systems \cite{balibrea2016} and holomorphic dynamics \cite{GG2020a}.
The idea behind the method of Lagrangian descriptors is very simple. Take any initial condition and accumulate along its trajectory the values attained by a positive scalar function that depends on the phase space variables. This calculation is carried out both forward and backward in time as the system evolves. Once this computation is done for a grid of initial conditions, the scalar output obtained from the method will highlight the location of the invariant stable and unstable manifolds intersecting the chosen slice of phase space. These manifolds will appear as ``singular features'' of the scalar field, that is, they are detected at points where the values of LDs display an abrupt change, which indicates distinct dynamical behavior. Forward integration of trajectories detects stable manifolds, while backward evolution does the same for unstable manifolds. It is also important to remark here that this technique also reveals the structure of KAM tori, but in this case the tori regions correspond to smooth values of the LD function. In fact, tori can be visualized by computing long-term time averages of LDs, as discussed in \cite{lopesino2017,GG2019b}.
Consider a general time-dependent dynamical system given by the evolution law:
\begin{equation}
\dfrac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x},t) \;,\quad \mathbf{x} \in \mathbb{R}^{n} \;,\; t \in \mathbb{R} \;,
\end{equation}
where the vector field defining the flow satisfies $\mathbf{f}(\mathbf{x},t) \in C^{r}\left(\mathbb{R}^n\right) \times C(\mathbb{R})$, with $r \geq 1$. The method of LDs was originally defined in \cite{madrid2009} as the computation of the arclength of trajectories starting at the initial condition $\mathbf{x}(t_0)$ as they evolve forward and backward for a fixed integration time $\tau > 0$. In this work we will use the definition of LDs based on the $p$-norm, which was first presented in \cite{lopesino2017} to provide a rigorous mathematical foundation for this technique. Given a value $p \in (0,1]$ and fixed forward and backward integration times $\tau_f > 0$ and $\tau_b > 0$, the diagnostic is given by:
\begin{equation}
\mathcal{L}_p(\mathbf{x}_{0},t_0,\tau_f,\tau_b) = \int^{t_0+\tau_f}_{t_0-\tau_b} \sum_{k=1}^{n} |f_{k}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt \;,
\end{equation}
where $\mathbf{x}_0 = \mathbf{x}(t_0)$ is any initial condition. Notice that it can be decomposed into its backward and forward components:
\begin{equation}
\mathcal{L}_p^{(b)}(\mathbf{x}_{0},t_0,\tau_b) = \int^{t_0}_{t_0-\tau_b} \sum_{k=1}^{n} |f_{k}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt \quad,\quad \mathcal{L}_p^{(f)}(\mathbf{x}_{0},t_0,\tau_f) = \int^{t_0+\tau_f}_{t_0} \sum_{k=1}^{n} |f_{k}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt \;.
\end{equation}
Although this alternative definition of LDs does not have as intuitive a physical interpretation as that of arclength, it has been shown to provide many advantages. For example, it allows for a rigorous mathematical analysis of the notion of ``singular structures'' and for establishing a mathematical connection between this notion and invariant stable and unstable manifolds in phase space. In \cite{lopesino2017} it was shown that forward integration reveals stable manifolds, while backward evolution unveils unstable manifolds, at points where the scalar field $\mathcal{L}_p$ is non-differentiable, so that its gradient displays jumps \cite{lopesino2017,demian2017,naik2019a}. The fact that the method detects manifolds at points where the function is non-differentiable allows for an easy extraction of these geometrical objects by means of edge detection algorithms similar to those used in image processing, such as Sobel filters. Here, we have applied for this purpose the simple approach of computing the norm of the gradient vector of the scalar field, $||\nabla \mathcal{L}_p||$, and also its laplacian $\Delta \mathcal{L}_p$.
We finish this overview of the method by pointing out that there exist many systems where initial conditions may escape to infinity at a very fast rate, or even in finite time (blow up), when we integrate them forward or backward. In these situations, one must adapt LDs so that the underlying geometry of the phase space can still be recovered when applying this tool. To do so, it suffices to accumulate the value of LDs along a trajectory for the whole integration time chosen, or until the trajectory leaves a fixed region of the phase space, whichever happens first. This procedure has been shown to successfully circumvent the issue caused by escaping trajectories on the values attained by the LD scalar field \cite{GG2019a,GG2019b,katsanikas2020b,katsanikas2020c}.
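The computation just described can be sketched in a few lines of code. The following pure-Python snippet is an illustrative sketch, not the implementation used to produce the figures in this paper; all function names, the fixed-step RK4 integrator, and the parameter choices are ours. It accumulates the $p$-norm integrand along a trajectory forward and backward in time, optionally stopping the accumulation once the trajectory leaves a prescribed disk, as in the variable-time strategy described above.

```python
# Minimal pure-Python sketch of the p-norm Lagrangian descriptor.
# Names and numerical choices are illustrative assumptions.

def rk4_step(f, x, t, h):
    """One classical Runge-Kutta step for dx/dt = f(x, t); h may be negative."""
    k1 = f(x, t)
    k2 = f([xi + 0.5*h*ki for xi, ki in zip(x, k1)], t + 0.5*h)
    k3 = f([xi + 0.5*h*ki for xi, ki in zip(x, k2)], t + 0.5*h)
    k4 = f([xi + h*ki for xi, ki in zip(x, k3)], t + h)
    return [xi + h/6.0*(a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def lagrangian_descriptor(f, x0, t0, tau_f, tau_b, p=0.5, h=0.01, r_escape=None):
    """Accumulate sum_k |f_k|^p along the trajectory through x0, forward
    for tau_f and backward for tau_b; if r_escape is given, accumulation
    stops once the trajectory leaves the disk of that radius."""
    total = 0.0
    for sign, tau in ((1.0, tau_f), (-1.0, tau_b)):
        x, t = list(x0), t0
        for _ in range(int(round(tau/h))):
            total += sum(abs(c)**p for c in f(x, t))*h  # left Riemann sum
            x = rk4_step(f, x, t, sign*h)
            t += sign*h
            if r_escape is not None and sum(c*c for c in x) > r_escape**2:
                break
    return total
```

For the linear saddle studied below, the value returned by this routine agrees with the closed-form expression for $\mathcal{L}_p$ up to the quadrature error, which vanishes as the step size $h$ is decreased.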
\section{Results}
\label{sec:sec2}
\subsection{Dissipative Saddle Systems}
\label{subsec:sec1}
\subsubsection{The Linear Saddle}
Let us begin our study of the phase space of dissipative systems by analyzing the simplest case of a linear saddle with different expansion and contraction rates. Consider the dynamical system:
\begin{equation}
\begin{cases}
\dot{x} = \lambda x \\[.2cm]
\dot{y} = -\mu y
\end{cases}
\label{saddle}
\end{equation}
where $\lambda > 0$ is the parameter that measures expansion about the origin, and $\mu > 0$ determines the contraction rate. This system is dissipative, since the divergence of the vector field is $\lambda-\mu$, which is not zero when $\lambda \neq \mu$. The analytical solution of this system has the expression:
\begin{equation}
x(t) = x_0 \, e^{\lambda t} \quad,\quad y(t) = y_0 \, e^{-\mu t}
\label{anal_sol}
\end{equation}
where $(x_0,y_0)$ is the initial condition of the trajectory at time $t_0 = 0$. For this system, the origin is a hyperbolic equilibrium point and its stable and unstable manifolds are given by:
\begin{equation}
\mathcal{W}^{s}(0,0) = \left\{(x,y) \in \mathbb{R}^2 \; \Big| \; x = 0 \right\} \quad,\quad \mathcal{W}^{u}(0,0) = \left\{(x,y) \in \mathbb{R}^2 \; \Big| \; y = 0 \right\}
\end{equation}
If we apply the $p$-norm definition of LDs to Eq. \eqref{saddle}, it is straightforward to show that the manifolds of the system are revealed at points where the scalar field $\mathcal{L}_p$ is non-differentiable. This was shown for the first time in \cite{lopesino2017}. Consider a grid of initial conditions $(x_0,y_0)$ in the plane at time $t_0 = 0$ and integrate them forward and backward for time intervals $\tau_f$ and $\tau_b$ respectively. Then, LDs yield:
\begin{equation}
\mathcal{L}_p(x_0,y_0,t_0=0,\tau_f,\tau_b) = \int_{-\tau_b}^{\tau_f} |\dot{x}|^p + |\dot{y}|^p \, dt = \dfrac{\lambda^{p-1} |x_0|^p}{p} \left(e^{p\lambda \tau_f} - e^{-p\lambda \tau_b}\right) - \dfrac{\mu^{p-1} |y_0|^p}{p} \left(e^{-p\mu \tau_f} - e^{p\mu \tau_b}\right)
\label{estim}
\end{equation}
and this expression is non-differentiable at points $(0,y_0)$, which correspond to the stable manifold, and also at $(x_0,0)$, which is the unstable manifold. Recall that the forward component of LDs highlights the stable manifold whereas the backward contribution reveals the unstable manifold. Notice however that both coordinate axes are singular, even if we only integrate forward or backward:
\begin{equation}
\mathcal{L}_p^{(f)}(x_0,y_0,t_0=0,\tau_f,0) = \int_{0}^{\tau_f} |\dot{x}|^p + |\dot{y}|^p \, dt = \dfrac{\lambda^{p-1} |x_0|^p}{p} \left(e^{p\lambda \tau_f} - 1\right) - \dfrac{\mu^{p-1} |y_0|^p}{p} \left(e^{-p\mu \tau_f} - 1\right)
\end{equation}
\begin{equation}
\mathcal{L}_p^{(b)}(x_0,y_0,t_0=0,0,\tau_b) = \int_{-\tau_b}^{0} |\dot{x}|^p + |\dot{y}|^p \, dt = \dfrac{\lambda^{p-1} |x_0|^p}{p} \left(1 - e^{-p\lambda \tau_b}\right) - \dfrac{\mu^{p-1} |y_0|^p}{p} \left(1 - e^{p\mu \tau_b}\right)
\end{equation}
If the integration times are large enough, i.e. $\tau_f \gg 1$ and $\tau_b \gg 1$, we get:
\begin{equation}
\mathcal{L}_p^{(f)}(x_0,y_0,t_0=0,\tau_f,0) \approx \dfrac{\lambda^{p-1} |x_0|^p}{p} e^{p\lambda \tau_f} + \dfrac{\mu^{p-1} |y_0|^p}{p} \approx \dfrac{\lambda^{p-1} |x_0|^p}{p} e^{p\lambda \tau_f}
\end{equation}
\begin{equation}
\mathcal{L}_p^{(b)}(x_0,y_0,t_0=0,0,\tau_b) \approx \dfrac{\lambda^{p-1} |x_0|^p}{p} + \dfrac{\mu^{p-1} |y_0|^p}{p} e^{p\mu \tau_b} \approx \dfrac{\mu^{p-1} |y_0|^p}{p} e^{p\mu \tau_b}
\end{equation}
and this result illustrates that, in forward integration, the singularity that dominates is that corresponding to the stable manifold, while backward integration enhances the unstable manifold.
However, an issue arises in this system when it comes to visualizing both manifolds simultaneously with LDs in the same picture, since there are two different timescales at play when $\lambda \neq \mu$. There is a competition between the exponential growth, described by $e^{\lambda t}$, and the decay rate identified with $e^{-\mu t}$, and this can obscure phase space structure. We illustrate this behavior in Fig. \ref{ld_saddle_sameTime}, where we have calculated LDs for $p = 1/2$ and the system parameters $\lambda = 1$ and $\mu = 2$, using an integration time $\tau_f = \tau_b = 8$. The values of LDs are normalized by subtracting the minimum and dividing by the difference between the maximum and minimum. In panels A) and B) we depict $\mathcal{L}_{p}^{(f)}$ and $\mathcal{L}_{p}^{(b)}$ respectively, and we can see how the method nicely highlights the location of the stable and unstable manifolds respectively. However, when we plot $\mathcal{L}_{p} = \mathcal{L}_{p}^{(f)} + \mathcal{L}_{p}^{(b)}$ in C), only the unstable manifold is visible, although we would like to obtain both manifolds in the same picture. This happens because $\lambda < \mu$. Indeed, if we compute LDs backward in time for an initial condition $(0,z_0)$ on the stable manifold using $\tau_b = \tau$, and integrate a symmetric initial condition $(z_0,0)$ on the unstable manifold forward for $\tau_f = \tau$, this yields:
\begin{equation}
\mathcal{L}_p^{(f)}(z_0,0,t_0=0,\tau,0) = \dfrac{\lambda^{p-1} |z_0|^p}{p} \left(e^{p\lambda \tau} - 1\right) \quad,\quad \mathcal{L}_p^{(b)}(0,z_0,t_0=0,0,\tau) = \dfrac{\mu^{p-1} |z_0|^p}{p} \left(e^{p\mu \tau} - 1\right)
\end{equation}
If we divide these two quantities we can compare their magnitude:
\begin{equation}
\dfrac{\mathcal{L}_p^{(f)}(z_0,0,t_0=0,\tau,0)}{\mathcal{L}_p^{(b)}(0,z_0,t_0=0,0,\tau)} = \left(\dfrac{\lambda}{\mu}\right)^{p-1} \dfrac{e^{p\lambda \tau} - 1}{e^{p\mu \tau} - 1}
\end{equation}
and, whenever $\tau \gg 1$, this gives:
\begin{equation}
\dfrac{\mathcal{L}_p^{(f)}(z_0,0,t_0=0,\tau,0)}{\mathcal{L}_p^{(b)}(0,z_0,t_0=0,0,\tau)} \approx \left(\dfrac{\lambda}{\mu}\right)^{p-1} e^{p\left(\lambda-\mu\right) \tau}
\end{equation}
Therefore, if $\lambda < \mu$, it is clear that $\mathcal{L}_p^{(b)}$ dominates, while if $\lambda > \mu$ then $\mathcal{L}_p^{(f)}$ becomes the dominating term. This explains why in Fig. \ref{ld_saddle_sameTime} C) the values of LDs on the unstable manifold dominate those attained on the stable manifold, and hence we cannot see the stable manifold when we plot the quantity $\mathcal{L}_p^{(f)} + \mathcal{L}_p^{(b)}$. Nevertheless, we would like to point out that we can easily extract the location of the points corresponding to the stable and unstable manifolds by computing the gradients of $\mathcal{L}_p^{(f)}$ and $\mathcal{L}_p^{(b)}$ respectively, and overlaying both results in the same plot. This is shown in Fig. \ref{ld_saddle_sameTime} D), where we have used the standard coloring scheme widely applied in the literature, that is, red for the unstable manifold and blue for the stable manifold.
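The dominance argument above is easy to check numerically. The following sketch (function names are ours) compares the exact ratio of the forward and backward components on symmetric manifold points with its large-$\tau$ asymptotic form; the $|z_0|^p/p$ factors cancel in the quotient.

```python
import math

def ld_ratio_exact(lam, mu, p, tau):
    """Exact ratio L_p^(f)(z0,0)/L_p^(b)(0,z0) for the linear saddle;
    the common |z0|^p/p prefactor cancels."""
    return (lam**(p - 1)*(math.exp(p*lam*tau) - 1.0)) / \
           (mu**(p - 1)*(math.exp(p*mu*tau) - 1.0))

def ld_ratio_asymptotic(lam, mu, p, tau):
    """Large-tau approximation (lam/mu)^(p-1) * exp(p*(lam - mu)*tau)."""
    return (lam/mu)**(p - 1)*math.exp(p*(lam - mu)*tau)
```

For $\lambda = 1$, $\mu = 2$, $p = 1/2$ and moderately large $\tau$, the two expressions agree to within a fraction of a percent, and the ratio is well below one, confirming that the backward component dominates when $\lambda < \mu$.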
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.22]{figure1a}
B)\includegraphics[scale=0.22]{figure1b}
C)\includegraphics[scale=0.22]{figure1c}
D)\includegraphics[scale=0.22]{figure1d}
\end{center}
\caption{Phase space of system in Eq. \eqref{saddle} for the model parameters $\lambda =1$, $\mu = 2$ as revealed by LDs using the $p$-norm with $p = 1/2$. A) Forward component of LDs calculated for $\tau_f = 8$. B) Backward component of LDs calculated for $\tau_b = 8$. C) Total LD value obtained by adding the forward and backward contributions. D) Invariant stable (blue) and unstable (red) manifolds extracted from the scalar output of LDs by means of applying the laplacian $\Delta \mathcal{L}_p$.}
\label{ld_saddle_sameTime}
\end{figure}
This visualization problem in the total LD scalar field was first discussed in detail in \cite{lopesino2017}, where it was shown that one can circumvent this issue by changing the value of $p$ that defines the $p$-norm used to compute LDs. The argument that was developed in that paper assumed that $\tau_f = \tau_b$, that is, all initial conditions are integrated for the same time forwards and backwards. However, it seems more natural to proceed as follows. Fix a value $p \in (0,1]$ to define the $p$-norm of LDs, and then adjust the forward and backward integration times $\tau_f$ and $\tau_b$ so that the time scales of the problem compensate and the contributions of the forward and backward components of LDs become comparable. We will determine here a condition for $\tau_f$ and $\tau_b$ that solves the issue of depicting all the phase space structures simultaneously in the same simulation. Take the initial conditions $(z_0,0)$ and $(0,z_0)$ on the unstable and stable manifolds respectively, and integrate the one on $\mathcal{W}^{u}$ forward in time for $\tau_f$, and the one located on $\mathcal{W}^s$ backward in time for $\tau_b$. Impose that the values of LDs obtained are comparable, that is:
\begin{equation}
\mathcal{L}_p^{(f)}(z_0,0,t_0=0,\tau_f,0) \approx \mathcal{L}_p^{(b)}(0,z_0,t_0=0,0,\tau_b) \quad \Leftrightarrow \quad \dfrac{\lambda^{p-1} |z_0|^p}{p} \left(e^{p\lambda \tau_f} - 1\right) \approx \dfrac{\mu^{p-1} |z_0|^p}{p} \left(e^{p\mu \tau_b} - 1\right)
\end{equation}
Taking $\tau_f,\tau_b \gg 1$ and simplifying, this gives:
\begin{equation}
\left(\dfrac{\mu}{\lambda}\right)^{p-1} \approx e^{p\left(\lambda\tau_f - \mu\tau_b\right)} \quad \Leftrightarrow \quad \tau_b \approx \dfrac{\lambda}{\mu} \tau_f + \dfrac{1-p}{\mu p} \ln\left(\dfrac{\mu}{\lambda}\right)
\label{intTimes_rel}
\end{equation}
Therefore, we have found a simple formula that determines how to set the forward and backward integration times in terms of the Lyapunov exponents of the saddle system. In particular, for the case $\lambda = \mu$ the formula indicates that forward and backward integration times should be chosen to be equal. In order to validate that this approach helps improve the visualization of the total LD field, we calculate LDs for the model parameters $\lambda = 1$, $\mu = 2$, using a forward integration time of $\tau_f = 8$ and a corresponding backward integration time of $\tau_b = 4.346$ determined from the formula in Eq. \eqref{intTimes_rel}. We display the total LD function in Fig. \ref{ld_saddle_difTime} C), and this confirms that we can adjust the forward and backward integration accordingly to obtain a complete picture of the phase space of the system, despite the different expansion and contraction rates; hence the timescale difference is captured by the method.
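As a quick check of Eq. \eqref{intTimes_rel}, the following sketch (the function name is ours) evaluates the balanced backward integration time and reproduces the value used in the simulation, as well as the symmetric case $\lambda = \mu$.

```python
import math

def balanced_tau_b(lam, mu, p, tau_f):
    """Backward time balancing the forward/backward LD contributions of
    the linear saddle, Eq. (intTimes_rel):
    tau_b = (lam/mu)*tau_f + (1-p)/(mu*p) * ln(mu/lam)."""
    return (lam/mu)*tau_f + (1.0 - p)/(mu*p)*math.log(mu/lam)
```

For $\lambda = 1$, $\mu = 2$, $p = 1/2$ and $\tau_f = 8$ this gives $\tau_b \approx 4.346$, the value used in Fig. \ref{ld_saddle_difTime}, while for $\lambda = \mu$ it returns $\tau_b = \tau_f$.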
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.2]{figure2a}
B)\includegraphics[scale=0.2]{figure2b}
C)\includegraphics[scale=0.2]{figure2c}
\end{center}
\caption{Phase space of system in Eq. \eqref{saddle} for the model parameters $\lambda = 1$, $\mu = 2$ as revealed by LDs using the $p$-norm with $p = 1/2$. A) Forward component of LDs calculated for $\tau_f = 8$. B) Backward component of LDs calculated for $\tau_b = 4.346$. C) Total LD value obtained by adding the forward and backward contributions.}
\label{ld_saddle_difTime}
\end{figure}
\subsubsection{A Nonlinear Saddle Example}
Next, we move on to analyze the nonlinear system with a saddle point at the origin:
\begin{equation}
\begin{cases}
\dot{x} = \mu x \\[.2cm]
\dot{y} = \lambda \left(y - x^2\right)
\end{cases}
\label{nonlin_saddle}
\end{equation}
and focus our analysis on the case where $\lambda = -2$ and $\mu = 1$. It is a simple exercise to show that this system can be reduced to a non-autonomous (in $x$) first order linear ODE that can be solved by means of an integrating factor. In fact, the solution is:
\begin{equation}
x(t) = x_0 e^{t} \quad,\quad y(t) = \frac{1}{2}x_0^{2}e^{2t}+\left(y_0-\frac{1}{2}x_0^{2}\right)e^{-2t}
\end{equation}
where $(x_0,y_0)$ is the initial condition of the trajectory at time $t_0 = 0$. This system has an unstable equilibrium point at the origin and its stable and unstable manifolds are given by:
\begin{equation}
\mathcal{W}^{s}(0,0) = \left\{(x,y) \in \mathbb{R}^2 \; \Big| \; x = 0\right\} \quad,\quad
\mathcal{W}^{u}(0,0) = \left\{(x,y) \in \mathbb{R}^2 \; \Big| \; y = \dfrac{x^2}{2}\right\}
\label{nonlin_saddle_mani}
\end{equation}
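The closed-form solution above can be verified directly. The sketch below (function name and test points are ours) checks it against the vector field by central finite differences, and confirms that trajectories approach the parabola $y = x^2/2$, i.e. the unstable manifold, as $t$ grows.

```python
import math

def nonlin_saddle_solution(x0, y0, t):
    """Closed-form solution of xdot = x, ydot = -2*(y - x^2)
    (Eq. (nonlin_saddle) with mu = 1, lambda = -2)."""
    x = x0*math.exp(t)
    y = 0.5*x0*x0*math.exp(2.0*t) + (y0 - 0.5*x0*x0)*math.exp(-2.0*t)
    return x, y
```

A finite-difference derivative of this solution matches the right-hand side of the ODE to high accuracy, and for large $t$ the ratio $y/x^2$ tends to $1/2$.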
We will show that the method of LDs detects the stable manifold when we integrate forward, whereas the unstable manifold is highlighted by the backward integration. For this computation, note that we cannot obtain a closed-form expression for the integrals. Hence, we need to perform an asymptotic approximation similar to the one explained in \cite{lopesino2017}. Consider first the forward component of LDs:
\begin{equation}
\mathcal{L}_p^{(f)}(x_0,y_0,t_0=0,\tau_f,0) = \int_{0}^{\tau_f}|\dot{x}|^p + |\dot{y}|^p \, dt \sim \dfrac{|x_0|^p}{p} \left(e^{p\tau_f} - 1\right) + \dfrac{|x_0|^{2p}}{2p} \left(e^{2p\tau_f} - 1\right)
\end{equation}
As we can see, this expression is non-differentiable when $x_0=0$, which coincides with the stable manifold of the system. Similarly, the backward component of LDs can be approximated by:
\begin{equation}
\mathcal{L}_p^{(b)}(x_0,y_0,t_0=0,0,\tau_b) = \int_{-\tau_b}^{0}|\dot{x}|^p + |\dot{y}|^p \, dt \sim \dfrac{|x_0|^p}{p} \left(1-e^{-p\tau_b}\right) + \dfrac{2^{p-1}}{p}\left|y_0-\dfrac{1}{2}x_0^2\right|^p \left(e^{2p\tau_b} - 1\right)
\end{equation}
Since $\tau_b\gg 1$, the leading order singularity in the gradient of the backward component appears in the term that contains $|y_0-x_0^2/2|$. Therefore, it reveals the equation of the unstable manifold of the system. This analytic derivation demonstrates that the method successfully highlights the stable and unstable manifolds of the saddle point at the origin. We illustrate numerically in Fig. \ref{ld_nonlin} that the manifolds are detected. Indeed, panel A) displays the total LD values, and in B) we overlay the LD field with the stable (blue) and unstable (red) manifolds described in Eq. \eqref{nonlin_saddle_mani}. As a validation, we extract the manifold locations by calculating the gradient of the LD function, $||\nabla \mathcal{L}_p||$, which allows us to detect the singular features (image edges) of the scalar output provided by LDs.
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.2]{figure3a}
B)\includegraphics[scale=0.2]{figure3b}
C)\includegraphics[scale=0.2]{figure3c}
\end{center}
\caption{Phase space of system in Eq. \eqref{nonlin_saddle} for the model parameters $\lambda = -2$, $\mu = 1$ as revealed by LDs using the $p$-norm with $p = 1/2$. A) Total LD calculated by integrating forward and backward for $\tau_f = 26$ and $\tau_b = 25$, respectively. B) Analytical comparison of the LD scalar values with the stable (blue) and unstable (red) manifolds given by Eq. \eqref{nonlin_saddle_mani} for the saddle point at the origin of the system described in Eq. \eqref{nonlin_saddle}. C) Extraction of the stable and unstable manifolds from the LD values shown in panel A) by means of applying a Sobel-type filter, that is, using the gradient of the LD function, $||\nabla \mathcal{L}_p||$, in order to detect the singular features (image edges) of the scalar output provided by LDs.}
\label{ld_nonlin}
\end{figure}
\subsection{Detection of Limit Cycles}
\label{subsec:sec2}
\subsubsection{Andronov-Hopf Bifurcations}
Our next goal is to show how Lagrangian descriptors can be used to detect the presence of limit cycles. We look first at the system described by the 2D normal form for a supercritical Andronov-Hopf bifurcation \cite{marsden76}, which is given by the following equations:
\begin{equation}
\begin{cases}
\dot{x} = \beta x - y - \sigma x \left(x^2 + y^2\right) \\[.2cm]
\dot{y} = x + \beta y - \sigma y \left(x^2 + y^2\right)
\end{cases}
\label{hopf_nf}
\end{equation}
We will consider here the case where $\sigma > 0$. If one writes Eq. \eqref{hopf_nf} in polar coordinates, it is straightforward to show that the system becomes:
\begin{equation}
\begin{cases}
\dot{r} = r(\beta - \sigma r^2) \\[.2cm]
\dot{\theta} = 1
\end{cases}
\label{hopf_polar}
\end{equation}
and therefore, it is clear that when $\beta > 0$ the system has a stable limit cycle at the circle of radius $\sqrt{\beta/\sigma}$, while for $\beta < 0$ the limit cycle disappears, and the origin becomes an asymptotically stable equilibrium point. One needs to be careful about the integration of initial conditions, because in this case, many trajectories will blow up in finite time. This is nicely exemplified if we solve analytically the system in Eq. \eqref{hopf_polar} for the situation where $\beta = 0$. For this parameter value, the analytical solution of the system \eqref{hopf_polar} is:
\begin{equation}
r(t) = \frac{r_0}{\sqrt{2\sigma tr^2_0+1}} \quad,\quad \theta(t) = \theta_0 + t
\label{analSolLimCy}
\end{equation}
where $(r_0,\theta_0)$ is the initial condition of the trajectory at time $t_0 = 0$. We can clearly see that this solution blows up at time $t = -1/\left(2\sigma r_0^2\right) < 0$, so that one would run into trouble when computing trajectories backward in time. This issue is also present in the solutions to the dynamical system in Eq. \eqref{hopf_polar} for $\beta \neq 0$, and can be easily checked by integrating the ODE using separation of variables.
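The behavior just described is easy to confirm numerically. The following sketch (function names are ours) evaluates the radial part of the analytical solution in Eq. \eqref{analSolLimCy} together with its backward blow-up time.

```python
import math

def hopf_radius(t, r0, sigma=1.0):
    """r(t) = r0/sqrt(2*sigma*t*r0^2 + 1), the beta = 0 solution of
    rdot = -sigma*r^3; only valid for t > -1/(2*sigma*r0^2)."""
    return r0/math.sqrt(2.0*sigma*t*r0*r0 + 1.0)

def blowup_time(r0, sigma=1.0):
    """Backward time at which the trajectory escapes to infinity."""
    return -1.0/(2.0*sigma*r0*r0)
```

A finite-difference derivative of `hopf_radius` recovers $\dot{r} = -\sigma r^3$, and the radius grows without bound as $t$ approaches the blow-up time from above, which is why backward trajectories must be stopped.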
Our goal now is to apply LDs to the system in Eq. \eqref{hopf_nf} in order to reveal its phase space. Since trajectories can blow up in finite time, we use variable-time integration: initial conditions are integrated forward and backward for $\tau_f = \tau_b = 8$, or until they leave a circular region of radius $R = 4$ centered at the origin, whichever happens first. In this way, we avoid problems when computing LDs. We run experiments for the model parameter values $\sigma = 1$ and $\beta = -0.5,\, 0 ,\, 0.5$, and present the results of these simulations in Fig. \ref{ld_andronov}. In panel A), which corresponds to $\beta = -0.5$, the origin is a stable focus, and the LD values highlight that dynamical behavior. Panel B), for $\beta = 0$, shows the critical value of the parameter at which the system undergoes a supercritical Andronov-Hopf bifurcation: the origin is still a weakly asymptotically stable focus and a limit cycle is about to emerge. However, the LD scalar field seems to indicate the existence of a limit cycle at points where its values are very large. This is not correct, and we will comment on this issue later. Lastly, in panel C) we show how LDs nicely capture the location of the limit cycle present in the phase space of the system for $\beta = 0.5$.
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.19]{figure4a}
B)\includegraphics[scale=0.19]{figure4b}
C)\includegraphics[scale=0.19]{figure4c}
\end{center}
\caption{Phase space of the dynamical system in Eq. \eqref{hopf_nf}, as revealed by applying the $p$-norm definition of LDs with $p = 1/2$ using an integration time of $\tau_f = \tau_b = 8$, for different values of the model parameters. A) $\beta = -0.5$ and $\sigma = 1$. B) $\beta = 0$ and $\sigma = 1$. C) $\beta = 0.5$ and $\sigma = 1$. Notice the false positive displayed by the method in panel B), since in this case ($\beta = 0$) there is no limit cycle in the system. More details about this issue are given in the main text.}
\label{ld_andronov}
\end{figure}
At this point we would like to discuss the false positive given by LDs for the case where $\beta = 0$. First, we calculate the forward component of LDs:
\begin{equation}
\mathcal{L}_p^{(f)}(r_0,\theta_0,t_0=0,\tau_f,0) = \int_{0}^{\tau_f} |\dot{r}|^p + |\dot{\theta}|^p \, dt =
\begin{cases}
\dfrac{\sigma^{p-1}r_0^{3p-2}}{2-3p}\big[(2\sigma\tau_f r_0^2+1)^{1-3p/2}-1\big]+\tau_f \,,
\quad\mbox{if }p\neq2/3 \\[0.6cm]
\dfrac{\sigma^{-1/3}}{2}\ln(2\sigma\tau_f r_0^2+1) + \tau_f \,,
\quad\mbox{if }p=2/3
\end{cases}
\end{equation}
Thus, it is clear that there are no singularities in the forward integration. However, if we integrate backward, we obtain that:
\begin{equation}
\mathcal{L}_p^{(b)}(r_0,\theta_0,t_0=0,0,\tau_b) = \int_{-\tau_b}^{0} |\dot{r}|^p + |\dot{\theta}|^p \, dt =
\begin{cases}
\dfrac{\sigma^{p-1}r_0^{3p-2}}{2-3p}\big[1-(-2\sigma\tau_b r_0^2+1)^{1-3p/2}\big] + \tau_b \,,
\quad\mbox{if }p\neq2/3 \\[0.6cm]
-\dfrac{\sigma^{-1/3}}{2}\ln(-2\sigma\tau_b r_0^2+1) + \tau_b \,,
\quad\mbox{if }p=2/3
\end{cases}
\end{equation}
and these expressions are non-differentiable when $r_0 = 1/\sqrt{2\sigma\tau_b}$. It is important to observe that this is a false positive, since we know that the system in Eq. \eqref{hopf_polar} does not have a limit cycle for $\beta=0$. Notice that this false positive shrinks as the integration time gets large, and it tends to disappear as $\tau_b$ goes to infinity. This situation could be interpreted in the sense that the method is identifying the critical value of $\beta$ for which the system's phase space undergoes a bifurcation, and a limit cycle is about to be born. This potential issue of a false positive shows the importance of carefully checking the output provided by the method when analyzing the dynamics of a system. In summary, if a structure tends to disappear as the integration time gets large, it is reasonable to expect that this structure does not have any dynamical significance, although this should always be verified by other means before drawing any further conclusions.
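The location and shrinking of this false positive can be made concrete with a small sketch (the function name is ours). The backward blow-up time of an initial condition at radius $r_0$ is $t^* = -1/(2\sigma r_0^2)$, so $|t^*| < \tau_b$ exactly when $r_0 > 1/\sqrt{2\sigma\tau_b}$: the kink sits at the boundary between initial conditions that do and do not blow up within the backward integration window, and its radius decays like $\tau_b^{-1/2}$.

```python
import math

def false_positive_radius(sigma, tau_b):
    """Radius where the backward LD field is non-differentiable for
    beta = 0: initial conditions whose backward blow-up time is exactly
    tau_b, i.e. r0 = 1/sqrt(2*sigma*tau_b)."""
    return 1.0/math.sqrt(2.0*sigma*tau_b)
```

For $\sigma = 1$ and $\tau_b = 8$, the values used in the simulation, the kink sits at $r_0 = 0.25$, and it shrinks monotonically to zero as $\tau_b$ is increased.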
\subsubsection{The van der Pol Oscillator}
We focus our attention now on the classical van der Pol oscillator system, which is a paradigmatic example of a system displaying limit cycle behavior \cite{vdp1922,vdp1926,strogatz,meiss2017}. This dynamical system is described by the following second order differential equation:
\begin{equation}
\ddot{x} + \mu \left(x^2-1\right) \dot{x} + x = 0
\end{equation}
where $\mu \geq 0$. As a system of first order ODEs, this can be rewritten as:
\begin{equation}
\begin{cases}
\dot{x} = y \\[.2cm]
\dot{y} = - x + \mu \left(1-x^2\right) y
\end{cases}
\label{vdp_sys}
\end{equation}
In order to compute LDs for this system we will again use the strategy of stopping trajectories if they escape a certain phase space region, since backward integration also suffers from blow up in finite time. For this purpose we set a circle of radius $R = 20$ about the origin, and compute LDs forward and backward for $\tau_f = \tau_b = 50$, or until the trajectory leaves this circle, whichever happens first. To illustrate how the limit cycle of the system grows and its shape is distorted as $\mu$ is increased, we have simulated the system for $\mu = 0.1,\, 0.5,\, 1.5,\, 3$. Results for the total LD field are shown in Fig. \ref{ld_vdp_lowmu}, where the panels from left to right correspond to increasing values of $\mu$. It is clear from these plots that the method successfully recovers the geometry and location of the limit cycle for the van der Pol oscillator, and how it deforms as the model parameter value is increased.
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.26]{figure5a}
B)\includegraphics[scale=0.26]{figure5b}
C)\includegraphics[scale=0.26]{figure5c}
D)\includegraphics[scale=0.26]{figure5d}
\end{center}
\caption{Phase space of the van der Pol oscillator in Eq. \eqref{vdp_sys}, as revealed by applying the $p$-norm definition of LDs with $p = 1/2$ using an integration time of $\tau_f = \tau_b = 50$, for different values of the model parameters. A) $\mu = 0.1$. B) $\mu = 0.5$. C) $\mu = 1.5$. D) $\mu = 3$. In order to avoid the issue of trajectories escaping in finite time, we stop them when they exit a circle of radius $R = 20$ centered at the origin.}
\label{ld_vdp_lowmu}
\end{figure}
\subsection{Analysis of Systems with Slow Manifolds}
\label{subsec:sec3}
In this subsection we look at the capability of LDs to reveal slow manifolds in systems that display a separation of time scales \cite{kuehn2015}. Slow manifolds are relevant in many applications, such as those that involve the dynamical study of gliding animals \cite{nave2019}. We focus our attention first on the dynamical system we introduced in Eq. \eqref{nonlin_saddle} which has been widely studied in the literature as a basic model for slow-manifold dynamics \cite{brunton2016,lusch2018}. It is well known that for $\lambda \ll \mu < 0$ this system has a slow manifold at the curve:
\begin{equation}
y = \dfrac{\lambda}{\lambda - 2\mu} x^2 \;.
\label{slow_mani}
\end{equation}
We will study here the case where $\mu = -0.05$ and $\lambda = -1$. In this situation, the $y$-coordinate rapidly evolves until $\dot{y}$ becomes very small, and from that point on, trajectories follow the asymptotically attracting slow manifold, so that the dynamics in $x$ becomes dominant. To reveal the location of the slow manifold we apply LDs and integrate trajectories for $\tau_f = \tau_b = 5$. In Fig. \ref{ld_slowmani} we display from left to right the total, forward and backward LD scalar fields respectively, and for comparison we overlay in magenta the location of the slow manifold given by Eq. \eqref{slow_mani}. This test illustrates that the method correctly highlights the slow manifold location in the phase space of the system.
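A direct invariance check of the curve in Eq. \eqref{slow_mani} can be sketched as follows (the function name is ours). On the manifold $y = \frac{\lambda}{\lambda-2\mu}x^2$, the chain rule requires $\dot{y} = (dy/dx)\,\dot{x}$, and the residual of this identity under the vector field of Eq. \eqref{nonlin_saddle} vanishes identically.

```python
def slow_manifold_residual(lam, mu, x):
    """Invariance check of y = lam/(lam - 2*mu)*x^2 under
    xdot = mu*x, ydot = lam*(y - x^2): the residual
    ydot - (dy/dx)*xdot should vanish for every x."""
    c = lam/(lam - 2.0*mu)
    y = c*x*x
    ydot = lam*(y - x*x)
    return ydot - 2.0*c*x*(mu*x)
```

Evaluating the residual for the parameters used in the simulation ($\lambda = -1$, $\mu = -0.05$) gives zero up to rounding error for any $x$, confirming that the parabola is an invariant curve.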
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.19]{figure6a}
B)\includegraphics[scale=0.19]{figure6b}
C)\includegraphics[scale=0.19]{figure6c}
\end{center}
\caption{Phase space of the system in Eq. \eqref{nonlin_saddle} with model parameters $\lambda = -1$ and $\mu = -0.05$, as revealed by applying the $p$-norm definition of LDs with $p = 1/2$ using an integration time of $\tau_f = \tau_b = 5$. A) Total LDs. B) Forward LDs. C) Backward LDs. The magenta curve represents the slow manifold given by Eq. \eqref{slow_mani}.}
\label{ld_slowmani}
\end{figure}
In order to provide further evidence of the capability of this tool for revealing slow manifolds, we focus next on two classical model dynamical systems that have been widely studied in the literature. First we analyze the classical mechanics problem of a bead in a rotating hoop. A detailed analysis of the stability of this system can be found in \cite{strogatz}. We are interested in solving the dynamical system which is given by:
\begin{equation}
\begin{cases}
\dot{\phi} = \Omega \\[.2cm]
\dot{\Omega} = \dfrac{1}{\varepsilon} \left(f(\phi) - \Omega\right)
\end{cases}
\label{bead}
\end{equation}
where $f(\phi) = \left(\mu \cos(\phi) - 1\right) \sin(\phi)$. We will look at the case where $\mu > 1$, for which the system has two unstable equilibrium points at $\phi = 0$ and $\phi = \pi$, and two stable equilibria at $\phi = \pm\arccos\left(1/\mu\right)$. We will study the slow manifold that exists in the system for $\varepsilon = 0.02$ and $\mu = 2.3$. We compute LDs by integrating trajectories forward and backward for the same time $\tau_f = \tau_b = 10$. The output of these numerical experiments is presented in Fig. \ref{ld_bead}. As before, we have superimposed in the picture the slow manifold of the system in magenta, which is given by the curve $\Omega = f(\phi)$, in order to validate the results obtained. We have also depicted the stable equilibria with yellow dots, while magenta dots represent unstable equilibria. Notice that the method reproduces the slow manifold for this problem as well, and it also highlights the stable and unstable manifolds of the saddle equilibrium points of the system. We check this fact by extracting the geometrical features of the phase space from the LD scalar field by applying the laplacian operator to it. Observe that with this approach we can also obtain an accurate approximation of the slow manifold from the ridges of the laplacian field $\Delta \mathcal{L}_p$.
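The equilibrium structure used above can be verified with a short sketch (function names are ours): the zeros of $f(\phi)$ give the equilibria of the reduced slow dynamics $\dot{\phi} = f(\phi)$, and the sign of $f'$ at each zero determines its stability.

```python
import math

def bead_force(phi, mu):
    """Slow-manifold drift f(phi) = (mu*cos(phi) - 1)*sin(phi)."""
    return (mu*math.cos(phi) - 1.0)*math.sin(phi)

def bead_equilibria(mu):
    """Equilibria of phidot = f(phi) on [0, pi] for mu > 1."""
    return [0.0, math.acos(1.0/mu), math.pi]
```

For $\mu = 2.3$, a finite-difference estimate of $f'$ is negative at $\phi = \arccos(1/\mu)$ (stable well) and positive at $\phi = 0$ (unstable), consistent with the stability assignments in the text.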
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.28]{figure7a}
B)\includegraphics[scale=0.28]{figure7b}
C)\includegraphics[scale=0.28]{figure7c}
D)\includegraphics[scale=0.28]{figure7d}
\end{center}
\caption{Phase space reconstruction of the bead on a rotating hoop problem in Eq. \eqref{bead} for the system parameters $\varepsilon = 0.02$ and $\mu = 2.3$, by means of LDs with $p = 1/2$. Trajectories are integrated forward and backward for a time $\tau_f = \tau_b = 10$. A) Total LD function; B) Forward component of LDs; C) Backward component of LDs; D) Stable (blue) and unstable (red) manifold extraction using the laplacian operator on the LD scalar output, that is, $\Delta \mathcal{L}_p$. In panels A)-C) we have depicted the slow manifold of the system in magenta. Yellow dots correspond to stable equilibria, and magenta dots represent saddle points.}
\label{ld_bead}
\end{figure}
We finish our analysis of the detection of slow manifolds with Lagrangian descriptors by revisiting the van der Pol oscillator problem in Eq. \eqref{vdp_sys} where $\mu \gg 1$. In this case, in order to visualize the phase space of the system we will carry out a Li\'{e}nard transformation. Consider the following definitions:
\begin{equation}
F(x) = \dfrac{1}{3}x^3 - x \quad,\quad w = \dfrac{1}{\mu}y + F(x) = \dfrac{1}{\mu}\dot{x} + F(x)
\end{equation}
then we can write Eq. \eqref{vdp_sys} in the new variables $x$ and $w$ as:
\begin{equation}
\begin{cases}
\dot{x} = \mu \left(w - F(x)\right) \\[.2cm]
\dot{w} = -\dfrac{1}{\mu} x
\end{cases}
\label{lienard}
\end{equation}
This system has a slow manifold at the curve $w = F(x) = x^3/3 - x$, and in order to test how LDs perform when it comes to detecting this structure, we run a simulation using $\mu = 10$ and we integrate trajectories for $\tau_f = \tau_b = 50$ or until they leave a circle of radius $R = 6$ centered at the origin. Results are shown in Fig. \ref{ld_vdp_lienard}, and we have overlaid in magenta the curve that represents the true slow manifold of the system. This comparison between the output of LDs and the location of the slow manifold underlines the success of the method in highlighting this object. Furthermore, we can extract this information from the gradient of LDs, and obtain an approximation of the slow manifold from the scalar field values of the LD output. Notice that this analysis is also revealing the slow and fast branches of the underlying limit cycle of the system.
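A quick numerical sanity check of Eq. \eqref{lienard} and its slow manifold can be coded as follows; the solver settings and the initial condition are illustrative choices of ours, not those used for the figure.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 10.0
F = lambda x: x**3 / 3.0 - x

def lienard(t, state):
    # van der Pol oscillator in Lienard coordinates, Eq. (lienard)
    x, w = state
    return [mu * (w - F(x)), -x / mu]

# A generic initial condition collapses quickly onto the slow branches
# w = F(x) and then settles into the relaxation limit cycle.
sol = solve_ivp(lienard, (0.0, 50.0), [2.0, 1.0],
                rtol=1e-9, atol=1e-11, dense_output=True)
```

Sampling the late-time trajectory shows that $|w - F(x)|$ is small most of the time (the slow phases), with brief excursions during the fast jumps between the two branches of the cubic.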
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.28]{figure8a}
B)\includegraphics[scale=0.28]{figure8b}
C)\includegraphics[scale=0.28]{figure8c}
D)\includegraphics[scale=0.28]{figure8d}
\end{center}
\caption{Phase space visualization of the van der Pol oscillator under a Li\'enard transformation given by Eq. \eqref{lienard} for the system parameter $\mu = 10$, by means of LDs with $p = 1/2$. Trajectories are integrated forward and backward for a time $\tau_f = \tau_b = 50$. A) Total LD function; B) Forward component of LDs; C) Backward component of LDs; D) Approximation to the stable (blue) and unstable (red) branches of the slow manifold displayed by the system using the gradient operator on the LD scalar output. In panel A) we have depicted the slow manifold of the system in magenta.}
\label{ld_vdp_lienard}
\end{figure}
\subsection{Identification of Attractors}
\label{subsec:sec4}
The next dynamical topic that we would like to address is the capability of the method of Lagrangian descriptors to detect attractors and their basins of attraction. To illustrate this fact, we have chosen the Duffing equation as the model problem to study \cite{duffing1918,guck1983,korsch2008,kovacic2011}. The dynamics of the periodically forced and damped Duffing oscillator is described by Newton's second law:
\begin{equation}
\ddot{x} + \delta \dot{x} - \alpha x + \beta x^3 = \gamma \cos(\omega t)
\end{equation}
which can be rewritten as a system of first order ODEs in the form:
\begin{equation}
\begin{cases}
\dot{x} = y \\[.2cm]
\dot{y} = - \delta y + \alpha x - \beta x^3 + \gamma \cos(\omega t)
\end{cases}
\label{duffing}
\end{equation}
We will consider in our simulations that $\beta = 1$. We begin by illustrating how in the conservative unforced case, that is, $\gamma = \delta = 0$, we get the classical phase space picture of a double well system with a saddle point at the origin and two homoclinic orbits surrounding the center equilibria. To do so, we calculate LDs using equal forward and backward integration times $\tau_f = \tau_b = 20$ and depict the results in Fig. \ref{duffi_cons}. In panel A) we present the total LD field, and in B) we have extracted the stable and unstable manifolds as ridges of the LD output by means of the gradient. The location of the centers is marked with yellow dots, and the magenta dot at the origin corresponds to the saddle point. Notice how LDs nicely highlight the figure eight shape formed by the homoclinic orbits that enclose the center equilibria of the system.
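For reproducibility, a minimal LD computation for the conservative Duffing system can be sketched as below; again we assume the $p$-norm definition $\mathcal{L}_p = \int \sum_i |\dot{x}_i|^p\,dt$, which may differ slightly from the paper's conventions. A useful consistency check is the time-reversal symmetry $(x, y, t) \mapsto (x, -y, -t)$ of the conservative system, which forces the forward LD at $(x, y)$ to equal the backward LD at $(x, -y)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, p = 1.0, 1.0, 0.5

def duffing(t, state):
    # Unforced, undamped Duffing oscillator (gamma = delta = 0)
    x, y = state
    return [y, alpha * x - beta * x**3]

def ld(state0, tau, sign=+1.0):
    # p-norm LD accumulated along the forward/backward orbit
    def rhs(t, aug):
        dx = duffing(t, aug[:2])
        return [sign * dx[0], sign * dx[1],
                abs(dx[0]) ** p + abs(dx[1]) ** p]
    sol = solve_ivp(rhs, (0.0, tau), [*state0, 0.0], rtol=1e-9, atol=1e-11)
    return sol.y[2, -1]
```

The system is also symmetric under $(x, y) \mapsto (-x, -y)$, so the LD field inherits that symmetry as well; both properties are cheap numerical checks of an implementation.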
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.28]{figure9a}
B)\includegraphics[scale=0.28]{figure9b}
\end{center}
\caption{Phase space of the Duffing system in Eq. \eqref{duffing} with no forcing and no damping, that is $\gamma = \delta = 0$, highlighted by LDs with $p = 1/2$ using a forward and backward integration time of $\tau = 20$. A) Total LD values; B) Stable (blue) and unstable (red) manifolds extracted from the gradient of the LD scalar field. We have marked with yellow dots the stable equilibria, and with a magenta dot the saddle point at the origin.}
\label{duffi_cons}
\end{figure}
In the next experiment we continue working with the unforced Duffing oscillator ($\gamma = 0$) but switch on the damping in the system in order to show that LDs are capable of detecting the boundaries of the basins of attraction of both centers, which now turn into stable foci. We set the damping coefficient to $\delta = 0.3$, and for this parameter value, the eigenvalues at the origin are $\lambda_1 \approx 0.8612$ and $\lambda_2 \approx -1.1612$. Notice that this implies that we have two different time scales at play in the neighborhood of the origin, and this can hinder the simultaneous visualization of the manifolds when we plot the scalar output of the total LD (forward plus backward), as we discussed for the example of the dissipative saddle. This becomes evident in Fig. \ref{duffi_diss} A) when we compute the total LDs (forward plus backward contributions) for $\tau_f = \tau_b = 25$. Panels B) and C) show the forward and backward components of LD respectively, and in them we can see that the method reveals the location of the stable and unstable manifolds of the system. These manifolds form the boundaries of the basins of attraction of the stable foci depicted with yellow dots in panel D). Despite the fact that it is difficult to visualize both manifold structures in the total LD field because of the different time scales at play, we can easily extract their locations by analyzing the gradient of the forward and backward components of LD separately and plot this information together, as we illustrate in D).
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.28]{figure10a}
B)\includegraphics[scale=0.28]{figure10b}
C)\includegraphics[scale=0.28]{figure10c}
D)\includegraphics[scale=0.28]{figure10d}
\end{center}
\caption{Phase space of the Duffing system in Eq. \eqref{duffing} with no forcing ($\gamma = 0$), and a damping coefficient of $\delta = 0.3$, as displayed by LDs with $p = 1/2$ using a forward and backward integration time of $\tau = 25$. A) Total LD values; B) Forward LD; C) Backward LD; D) Stable (blue) and unstable (red) manifolds extracted from the gradient of the LD scalar field. We have marked with yellow dots the stable equilibria, and with a magenta dot the saddle point at the origin. For guidance, we have included arrows to mark the flow directions on the manifolds.}
\label{duffi_diss}
\end{figure}
We continue the analysis of attractors with LDs by demonstrating that this tool can be used to reconstruct the intricate geometry of strange attractors. To do so, we explore two different cases for the model parameters and introduce forcing into the system, making it nonautonomous. We look at two different situations, one where $\alpha = 1$, $\delta = 0.3$, $\gamma = 0.5$ and $\omega = 1.2$ (displayed in the left column of Fig. \ref{str_att}), and for the other we select $\alpha = 0$, $\delta = 0.05$, $\gamma = 7.5$ and $\omega = 1$ (right column of Fig. \ref{str_att}). The second case corresponds to the setting for the Ueda attractor \cite{ueda1979,ueda1980}. The existence and properties of these strange attractors for the Duffing equation have been widely studied in the literature, see e.g. \cite{moon1979,holmes1980,guck1983,kovacic2011} and references therein. In order to reveal the strange attractor and its stable and unstable manifolds at time $t = 0$ we compute LDs with an integration time $\tau_f = \tau_b = 20$. Since the system is nonautonomous, the attractor is going to evolve and change shape with time, so our analysis is going to provide a snapshot of it at the initial time $t = 0$. If we want to obtain a picture of the attractor at any other time, say $t = t_1$, one just needs to compute LDs using that instant $t = t_1$ as the initial time. We compare our results with the classical technique used to reconstruct strange attractors that consists in calculating a Poincar\'e section by strobing the trajectory of an initial condition at integer multiples of the forcing period. For this test we have chosen $(1,0)$ as the initial condition at time $t = 0$, and we have recorded its location every period of the forcing term for $15000$ periods. In Fig. 
\ref{str_att} A) and B) we depict the forward component of LDs, while in the panels below, C) and D), the geometry of the stable (blue) and unstable (red) manifolds has been recovered from the LD values with a gradient-type filter. Finally, and as a validation of the results obtained with LDs, we present in E) and F) the corresponding Poincar\'e map in purple.
It is important to remark here that the reconstruction of the attractor by means of a Poincar\'e section approach requires an extremely large integration time, which would be prone to the accumulation of numerical error in the simulation. On the other hand, with the two examples we have discussed above, LDs provide a clear advantage for recovering the structure and location of the attractor, since our experiments were carried out with a small integration time of $\tau_f = \tau_b = 20$, which roughly corresponds to a couple of periods of the forcing, a number that by no means is comparable to the large time interval needed for the Poincar\'e map. Moreover, Poincar\'e maps can only be constructed for periodically forced systems, whereas LDs can be applied in a straightforward manner to both aperiodic and periodic forcing scenarios.
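The stroboscopic Poincar\'e map used for this comparison is straightforward to implement; the sketch below uses the first parameter set and, for brevity, far fewer forcing periods than the $15000$ used for the figure.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma_, omega = 1.0, 1.0, 0.3, 0.5, 1.2

def duffing(t, state):
    # Periodically forced and damped Duffing oscillator, Eq. (duffing)
    x, y = state
    return [y, -delta * y + alpha * x - beta * x**3 + gamma_ * np.cos(omega * t)]

def strobe(state0, n_periods):
    # Poincare map: record the state at integer multiples of the forcing period
    T = 2.0 * np.pi / omega
    points = np.empty((n_periods, 2))
    state, t0 = np.asarray(state0, dtype=float), 0.0
    for k in range(n_periods):
        sol = solve_ivp(duffing, (t0, t0 + T), state, rtol=1e-8, atol=1e-10)
        state, t0 = sol.y[:, -1], t0 + T
        points[k] = state
    return points

pts = strobe((1.0, 0.0), 200)
```

Plotting `pts` after discarding a short transient traces out the strange attractor shown in the left column of Fig. \ref{str_att}.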
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.26]{figure11a}
B)\includegraphics[scale=0.26]{figure11b}
C)\includegraphics[scale=0.28]{figure11c}
D)\includegraphics[scale=0.28]{figure11d}
E)\includegraphics[scale=0.28]{figure11e}
F)\includegraphics[scale=0.28]{figure11f}
\end{center}
\caption{Strange attractors for the Duffing system in Eq. \eqref{duffing} at time $t = 0$ as revealed by LDs, and comparison with Poincar\'e sections. The first column corresponds to the model parameters $\alpha = 1$, $\delta = 0.3$, $\gamma = 0.5$ and $\omega = 1.2$, whereas the second column uses the values $\alpha = 0$, $\delta = 0.05$, $\gamma = 7.5$ and $\omega = 1$ (Ueda attractor). A) and B) depict the backward component of LDs with $p = 1/2$ using an integration time of $\tau = 20$ and $\tau = 50$ respectively. In panel C) we show the stable (blue) and unstable (red) manifolds extracted by means of the gradient of the LD scalar field. D) displays the unstable manifold extracted from the Laplacian operator. E) and F) illustrate the Poincar\'e section obtained by strobing the location of a trajectory, starting from the initial condition $(1,0)$, every period of the forcing term for $15000$ periods.}
\label{str_att}
\end{figure}
\subsection{Transition Ellipsoids in Hamiltonian systems with two degrees of freedom subject to dissipation}
\label{subsec:sec5}
We would like to finish this work by exploring the capability that the method of Lagrangian descriptors brings for the dynamical analysis of Hamiltonian systems with two degrees of freedom (DoF) subject to damping effects. In many applications, such as astrodynamics \cite{jaffe2002,koon2011}, structural mechanics \cite{collins2012,zhong2018snap}, ship capsizing \cite{naik2017}, chemical reactions \cite{Uzer2002}, etc., it is of crucial importance to address transition events across an index-1 saddle critical point of a potential energy surface (PES). The phase space objects responsible for controlling this transport mechanism are the stable and unstable manifolds of the normally hyperbolic invariant manifold (NHIM) that exists in the vicinity of the index-1 saddle \cite{Wiggins94}. For 2 DoF Hamiltonian systems, the NHIM is an unstable periodic orbit, that is, it has the geometry of $S^1$, and its stable and unstable manifolds have the topology of cylinders ($S^1\times \mathbb{R}$). These 'tube manifolds' are known in the literature as spherical cylinders \cite{almeida1990,deleon1991} and act as conduits, i.e. a transportation network, that trajectories moving across the index-1 saddle follow along their evolution. Moreover, they are codimension-1 objects in the energy hypersurface of the Hamiltonian system, and thus, they are barriers to transport that trajectories cannot cross \cite{wiggins2001}. These cylinders partition the energy surface into 'reactive' and 'non-reactive' trajectories. All trajectories starting from an initial condition inside the tube manifolds are termed 'reactive', which means that they will cross the index-1 saddle region of the PES, whereas the initial conditions outside the tubes will not move across the index-1 saddle.
However, when damping is added to the Hamiltonian system, it has been recently shown that the topology of the manifolds of the NHIM changes significantly and transforms from a cylindrical geometry into an ellipsoid \cite{zhong2020a,zhong2020b,zhong_thesis}. In particular, the stable cylinder of a conservative Hamiltonian system, which characterizes the trajectories crossing from one side to the other of the index-1 saddle in forward time, becomes an ellipsoid, known as a transition ellipsoid. This phase space structure determines all transition events in forward time in the dissipative setup. Furthermore, the unstable periodic orbit associated with the index-1 saddle equilibrium point that existed in the conservative system disappears and is replaced by a focus-type asymptotic orbit tending to the equilibrium point itself.
In order to demonstrate that LDs can be used as a simple mathematical tool to detect the location and geometry of this ellipsoid, we will analyze an uncoupled Hamiltonian system with 2 DoF defined from a double well potential in the $x$ DoF, and a harmonic oscillator in $y$. To this model, we will add linear damping to Hamilton's equations of motion. Consider the Hamiltonian function:
\begin{equation}
H(x,y,p_x,p_y) = \dfrac{p_x^2}{2m_1} + \dfrac{p_y^2}{2m_2} + \dfrac{a}{4}x^4 - \dfrac{b}{2}x^2 + \dfrac{\omega^2}{2}y^2
\label{ham2dof}
\end{equation}
where $m_1$ and $m_2$ are the masses of the $x$ and $y$ DoF respectively, and $\omega$ is the angular frequency of the harmonic oscillator. For our analysis we will set $m_1 = m_2 = a = b = \omega = 1$. We can easily include the effect of dissipation in the system by writing Hamilton's equations in the form:
\begin{equation}
\begin{cases}
\dot{x} = \dfrac{\partial H}{\partial p_x} = \dfrac{p_x}{m_1} \\[.3cm]
\dot{p}_x = -\dfrac{\partial H}{\partial x} - \gamma_x p_x = bx - ax^3 - \gamma_x p_x \\[.3cm]
\dot{y} = \dfrac{\partial H}{\partial p_y} = \dfrac{p_y}{m_2} \\[.3cm]
\dot{p}_y = -\dfrac{\partial H}{\partial y} - \gamma_y p_y = -\omega^2 y - \gamma_y p_y
\end{cases}
\label{ham_eqs}
\end{equation}
where $\gamma_x,\gamma_y > 0$ measure the dissipation strength in each DoF. In this work we will look at the case where dissipation has the same influence on both DoF, that is, $\gamma_x = \gamma_y = \gamma$.
In order to analyze the dynamics of the system in Eq. \eqref{ham_eqs}, we define the following Poincar\'e surfaces of section (PSOS):
\begin{equation}
\begin{split}
\Sigma_1 &= \left\lbrace (x,y,p_x,p_y) \in \mathbb{R}^4 \; \Big| \; y = 0 \; ,\; p_y > 0 \; \right\rbrace \\[.2cm]
\Sigma_2 &= \left\lbrace (x,y,p_x,p_y) \in \mathbb{R}^4 \; \Big| \; p_y = 0 \; ,\; p_x > 0 \; \right\rbrace \\[.2cm]
\Sigma_3 &= \left\lbrace (x,y,p_x,p_y) \in \mathbb{R}^4 \; \Big| \; x = -0.4 \; ,\; p_x > 0 \; \right\rbrace
\end{split}
\label{psecs}
\end{equation}
The main reasons for choosing these phase space slices are the following. First, $\Sigma_1$ serves to obtain a global picture of the basins of attraction of the stable equilibrium points located at the wells of the system. Second, we will use the section $\Sigma_2$ to probe the ellipsoidal geometry of the stable manifolds, since it cuts the ellipsoid longitudinally across its equator. Finally, $\Sigma_3$ is a plane transverse to the ellipsoid that yields a circular cross-section, which gives us the classical picture known as reactive islands \cite{almeida1990}. This name comes from the fact that all initial conditions chosen inside the closed curve obtained in this cut are reactive trajectories that cross the index-1 saddle region in forward time, and any other initial condition outside it never crosses the saddle along its evolution.
We start by looking at the effect of increasing the strength of the damping on the geometry of the invariant manifolds. It is straightforward to show that the conservative Hamiltonian system ($\gamma = 0$) has an equilibrium point of saddle$\times$center stability type at the origin. When dissipation is included, its stability changes to saddle$\times$focus. We consider here three different values for the damping coefficient, $\gamma = 0.1,\, 0.25,\, 1$. If we compute LDs in the PSOS $\Sigma_1$, see Fig. \ref{ld_dwell_sec_y_0}, the basins of attraction of the wells in the system are revealed. The spiraling of the unstable manifold that emanates from the equilibrium point at the origin and ends at the equilibrium points in the left and right wells is reduced as the damping increases, and this explains why it takes less time for trajectories to approach the stable equilibria. Notice also how the stable manifolds get closer to the origin for large values of the friction, which indicates that the number of trajectories that can cross from well to well over the saddle point of the PES is significantly reduced for large damping values. This implies that the transition ellipsoid becomes smaller in size. Notice that the stable manifold in the total LD scalar field displayed in the left column of Fig. \ref{ld_dwell_sec_y_0} becomes harder to locate as dissipation is increased, but this is expected in the output of LDs, as we have discussed for other examples in this work, since the time scales of the system separate. If we wanted to fix this visualization issue, we could integrate initial conditions forward and backward for different times. However, this is not necessary because the manifolds can be directly extracted from the forward and backward components of LDs by means of applying the gradient or Laplacian to the LD scalar field, or other edge detection algorithms used in image processing.
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.27]{figure12a}
B)\includegraphics[scale=0.27]{figure12b}
C)\includegraphics[scale=0.27]{figure12c}
D)\includegraphics[scale=0.27]{figure12d}
E)\includegraphics[scale=0.27]{figure12e}
F)\includegraphics[scale=0.27]{figure12f}
\end{center}
\caption{Lagrangian descriptors calculated for the double well system in Eq. \eqref{ham_eqs} on the Poincar\'e section $\Sigma_1$ defined in Eq. \eqref{psecs}, using $p = 1/2$ and an integration time of $\tau_f = \tau_b = 15$. The initial energy of the system is $H_0 = 0.05$. Panels in each row correspond, from top to bottom, to dissipation values of $\gamma = 0.1, \, 0.25, \, 1$, respectively. We have depicted the initial energy boundary of the system as a magenta curve. The first column displays the total LD values, and the column on the right shows the stable (blue) and unstable (red) manifolds extracted from the LD scalar field by means of applying the Laplacian operator.}
\label{ld_dwell_sec_y_0}
\end{figure}
In order to clearly visualize the ellipsoidal geometry of the stable manifold we calculate LDs on the configuration space PSOS $\Sigma_2$. We depict in Fig. \ref{ld_dwell_sec_py_0} the LD values (left column) and the stable (blue) and unstable (red) manifolds obtained from the Laplacian of the LD function (right column). In this phase space slice, the ellipsoidal geometry is nicely captured by LDs, confirming its existence in the system. Also, it shrinks in size as the damping gets large. To finish, we analyze the system further to validate that the structure highlighted by LDs is in fact the stable manifold, and that this transition ellipsoid controls the transport of trajectories across the index-1 saddle. To do so, we calculate LDs on the PSOS $\Sigma_3$ using a dissipation of $\gamma = 0.25$, and show the results of the forward LD in Fig. \ref{3d_ld_trajs} A). This slice intersects the ellipsoid transversely in a circle. To test if this closed curve corresponds to the ellipsoid boundary, we select three different initial conditions: the red circle outside the ellipsoid, the blue diamond inside the manifold, and the magenta square on the boundary of the stable manifold. We evolve them forward in time and plot their trajectories in 3D in panel B) and projected onto configuration space in C). This numerical experiment convincingly shows that LDs correctly detect the boundary of the transition ellipsoid, since the red initial condition does not cross the index-1 saddle and is attracted by the stable equilibrium point of the left well. On the other hand, the blue trajectory moves across the saddle point, and then it ends its evolution at the right well equilibrium point. Finally, the magenta trajectory, which starts exactly at the boundary of the ellipsoid, behaves as expected, and asymptotically tends towards the saddle point at the origin. 
It does so following a spiral-like trajectory that evidences the saddle-focus stability nature of the saddle equilibrium.
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.27]{figure13a}
B)\includegraphics[scale=0.27]{figure13b}
C)\includegraphics[scale=0.27]{figure13c}
D)\includegraphics[scale=0.27]{figure13d}
E)\includegraphics[scale=0.27]{figure13e}
F)\includegraphics[scale=0.27]{figure13f}
\end{center}
\caption{Lagrangian descriptors calculated for the double well system in Eq. \eqref{ham_eqs} on the Poincar\'e section $\Sigma_2$ defined in Eq. \eqref{psecs}, using $p = 1/2$ and an integration time of $\tau_f = \tau_b = 15$. The initial energy of the system is $H_0 = 0.05$. Panels in each row correspond, from top to bottom, to dissipation values of $\gamma = 0.1, \, 0.25, \, 1$, respectively. We have depicted the initial energy boundary of the system as a magenta curve. The first column displays the total LD values, and the column on the right shows the stable (blue) and unstable (red) manifolds extracted from the LD scalar field by means of applying the Laplacian operator.}
\label{ld_dwell_sec_py_0}
\end{figure}
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.27]{figure14a}
B)\includegraphics[scale=0.28]{figure14b}
C)\includegraphics[scale=0.27]{figure14c}
\end{center}
\caption{Computation of LDs and visualization of three-dimensional dynamics for the double well system in Eq. \eqref{ham_eqs} with initial energy $H_0 = 0.05$ and dissipation strength $\gamma = 0.25$. A) LDs on the Poincar\'e section $\Sigma_3$ defined in Eq. \eqref{psecs}, using $p = 1/2$ and an integration time of $\tau_f = \tau_b = 15$. We have marked three different initial conditions: the red circle is outside the transition ellipsoid, the blue diamond is inside, and the magenta square is on its boundary. The magenta curve depicts the initial energy boundary of the system. B) Three-dimensional dynamics in forward time of the initial conditions selected in panel A) as they evolve from $t_0 = 0$ to $t = 20$. We have superposed LDs calculated on the phase space slices $\Sigma_2$ and $\Sigma_3$, and the green surface represents the initial energy shell. C) Configuration space projection of the trajectories in panel B), superposed with LDs calculated on the section $\Sigma_2$. Potential energy contours are depicted as black curves and the outer grey area corresponds to the forbidden region for the initial energy of the system.}
\label{3d_ld_trajs}
\end{figure}
\section{Conclusions}
\label{sec:conc}
In this work we have demonstrated how the method of Lagrangian descriptors can be successfully implemented to reveal the relevant phase space structures of dissipative dynamical systems. This study substantially broadens the applicability spectrum of this tool and exemplifies how a simple scalar diagnostic technique can provide relevant insights in order to understand dynamical phenomena in real problems.
Our analysis of different classical models from nonlinear dynamics has shown that this technique has the capability to detect the presence of limit cycles, basins of attraction, strange attractors and slow manifolds. Moreover, by including the possibility of integrating trajectories forward and backward for different times, we have generalized the definition of LDs, and this allows us to account for the separation of time scales displayed in these types of problems. On the other hand, in the context of Hamiltonian systems subject to dissipative forces, where the topology of the stable and unstable manifolds changes due to the presence of friction, we have explored this effect on a double well system with two degrees of freedom. The study of this problem by means of LDs convincingly illustrates the advantage that this tool brings for the location of transition ellipsoids, which characterize the phase space trajectories that are allowed to move across the index-1 saddle separating two well regions of the underlying potential energy surface.
In the future, our work will focus on applying Lagrangian descriptors to develop a more detailed understanding of the impact of dissipation on the geometry of invariant manifolds and their interactions, and of the role that these phase space structures play in governing the dynamics of the many Hamiltonian systems subject to friction that arise in real problems and applications.
\section*{Acknowledgments}
VJGG would like to acknowledge the financial support received from the EPSRC Grant No. EP/P021123/1 and the Office of Naval Research Grant No. N00014-01-1-0769 for his research visits over the past years to the School of Mathematics, University of Bristol. Many fruitful discussions with Prof. Stephen Wiggins during this period have inspired the development of this work.
\bibliographystyle{natbib}
\section{Introduction}
Image restoration, including image denoising, deblurring, inpainting, computed tomography, etc., is one of the central problems in imaging science. It aims at recovering a high-quality image from a given measurement which is degraded during the process of imaging, acquisition, and communication. The image restoration problem is typically modeled as the following linear inverse problem:
\begin{align}\label{Linear_IP}
{\boldsymbol f}={\boldsymbol \mA}\boldsymbol{u}+{\boldsymbol\zeta},
\end{align}
where ${\boldsymbol f}$ is the degraded measurement or the observed image, ${\boldsymbol\zeta}$ is a certain additive noise, and ${\boldsymbol \mA}$ is some linear operator which takes different forms for different image restoration problems.
Since the operator ${\boldsymbol \mA}$ is in general ill-conditioned or non-invertible, it is necessary to use a regularization on the images to be restored. Most widely used examples include the variational approaches including total variation (TV) \cite{L.I.Rudin1992} and its nonlocal variants \cite{X.Zhang2010}, the inf-convolution \cite{A.Chambolle1997}, the total generalized variation (TGV) \cite{K.Bredies2014,K.Bredies2010}, and the combined first and second order total variation \cite{M.Bergounioux2010,K.Papafitsoros2014}. Apart from the variational approaches, the applied harmonic analysis approaches including curvelets \cite{E.Candes2006}, Gabor frames \cite{H.Ji2017}, shearlets \cite{G.Kutyniok2011}, complex tight framelets \cite{B.Han2014}, and wavelet frames \cite{J.F.Cai2009,J.F.Cai2008,J.F.Cai2012,R.H.Chan2003} are also widely used in the literature. The common concept of these methods is to use a sparse regularization on discrete images under a discrete linear transformation to regularize smooth image components while preserving image singularities such as edges, ridges, and corners. However, in many applications, the true singularities lie in the continuous domain, and the discretization error will lead to the \emph{basis mismatch} \cite{Y.Chi2010,G.Ongie2016,J.Ying2017} between the true singularities and the discrete image grid. Such a basis mismatch would destroy the sparse structure of the image, and thus can degrade the restoration quality \cite{J.Ying2017}.
Recently, continuous domain regularization is emerging as a powerful alternative to the discrete domain sparse regularization \cite{B.N.Bhaskar2013,E.J.Candes2014,Y.Chen2014,G.Ongie2018}. By such an ``off-the-grid'' approach, we can exploit the sparsity prior in continuous domain, which enables us to alleviate the basis mismatch due to the discretization \cite{G.Ongie2018}. To the best of our knowledge, this off-the-grid regularization stems from Prony's method \cite{Prony1795}, which relates a superposition of a few sinusoids to a structured low rank matrix for Dirac stream retrieval. Hence, we can adopt the so-called \emph{structured low rank matrix (SLRM) approach} \cite{Y.Chen2014,G.Ongie2017,J.C.Ye2017} for the restoration of spectrally sparse signals whose Fourier transform is a Dirac stream \cite{J.F.Cai2016,J.F.Cai2018a,J.F.Cai2019}. Apart from the spectrally sparse signal restoration, the SLRM can also be used to restore Fourier samples of a one dimensional piecewise constant signal \cite{T.Blu2008,M.Vetterli2002}, as in this case the Fourier samples of the derivative become the superposition of a few sinusoids. However, even though the SLRM can be easily applied to the case of isolated singularities \cite{E.J.Candes2013,W.Xu2014}, the extension to the multi-dimensional (piecewise smooth) image restoration is not straightforward \cite{G.Ongie2016}. Since the image singularities such as the edges and ridges in general form a continuous curve in a two dimensional domain, it is in general challenging to construct a structured low rank matrix from the Fourier samples.
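The Prony-type correspondence mentioned above is easy to illustrate numerically: uniform samples of a superposition of $r$ complex sinusoids generate a Hankel matrix of rank exactly $r$, and any vector in its right null space acts as an annihilating filter. The frequencies, amplitudes, and matrix sizes below are arbitrary illustrative values.

```python
import numpy as np

# Uniform samples of a superposition of r = 3 complex sinusoids
r, N, L = 3, 64, 20
freqs = np.array([0.11, 0.23, 0.38])
amps = np.array([1.0, -0.7, 0.5])
n = np.arange(N)
x = (amps * np.exp(2j * np.pi * np.outer(n, freqs))).sum(axis=1)

# Hankel matrix H[i, j] = x[i + j] of size (N - L + 1) x L
hankel = np.array([x[i:i + L] for i in range(N - L + 1)])
rank = np.linalg.matrix_rank(hankel, tol=1e-8)
```

Here `rank` equals $r = 3$, and the right singular vector associated with the smallest singular value annihilates the samples by (windowed) convolution, i.e. `hankel @ h` is numerically zero; this is the annihilation relation that the SLRM approach exploits.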
In this paper, we introduce a new structured low rank matrix framework for the piecewise smooth image restoration. Our framework is inspired by the $k$th order total generalized variation ($\mathrm{TGV}^k$) \cite{K.Bredies2014,K.Bredies2010}, which is known to be effective in restoring the piecewise polynomial image with sharp edges \cite{W.Guo2014}. Specifically, following the SLRM framework for the piecewise constant image \cite{G.Ongie2016}, we assume that the image singularities (including both jumps and hidden jumps) are located in the zero level set of a band-limited periodic function (called the \emph{annihilating polynomial}). Then we can derive that the gradient can be decomposed into another vector field and the residual, and the Fourier samples of the residual and the symmetric gradient of this vector field can be annihilated by the convolution with the Fourier coefficients of the annihilating polynomial (called the \emph{annihilating filter}). From these annihilation relations, we deduce that the multi-fold Hankel matrices generated from the Fourier samples of a piecewise smooth image are low rank, which in turn enables a balance between the derivatives of order $1,\ldots,k$ via a combined rank minimization of multi-fold Hankel matrices.
As a by-product of the proposed structured low rank matrix framework, we further introduce a wavelet frame based sparse regularization model for the piecewise smooth image restoration via the continuous domain regularization. Briefly speaking, if we can associate a signal/image with a low rank Hankel matrix, its right singular vectors form tight frame filter banks under which the canonical coefficients have a group sparsity according to the index of filters \cite{J.F.Cai2020}. Then motivated by \cite{D.Guo2018}, we assume that the right singular vectors are estimated from a pre-restoration process, and we propose a \emph{sparse regularization} via the wavelet frame analysis approach (e.g. \cite{J.F.Cai2009/10}) as an image restoration model via the continuous domain regularization. Notice that there are several wavelet frame based approaches for the piecewise smooth image restoration \cite{J.F.Cai2016a,J.K.Choi2020,H.Ji2016} in the literature. However, while these existing approaches focus on the sparse approximation of a discrete image, our approach comes from the relaxation of the structured low rank matrices generated by the Fourier samples for the continuous domain regularization.
\subsection{SLRM for piecewise smooth image restoration}\label{SLMAPS}
Our SLRM framework for the piecewise smooth function is mostly related to the recent extension of the SLRM framework to two dimensional functions in \cite{G.Ongie2018,G.Ongie2015a,G.Ongie2015,G.Ongie2016,H.Pan2014}. Briefly speaking, if the singularity curves of a piecewise constant/holomorphic function, i.e. the supports of the first order (real/complex) derivatives of a target image, lie in the zero level set of an annihilating polynomial, the Fourier transform of the derivatives can be annihilated by the convolution with the annihilating filter. This annihilation relation in turn relates the Fourier samples of the derivatives to structured low rank matrices. Based on this framework, the SLRM framework for the piecewise constant image restoration is proposed and studied in \cite{G.Ongie2015a,G.Ongie2015,G.Ongie2016,G.Ongie2017}, together with a restoration guarantee \cite{G.Ongie2018}.
This annihilation relation of the gradient can be easily extended to the higher order derivatives whenever the jump discontinuities of the corresponding derivatives are located in the zero level set of a trigonometric polynomial \cite{G.Ongie2015a}. For instance, we can derive an annihilation relation for the piecewise linear function by considering the Fourier transform of the second order derivatives. Based on this idea, the authors in \cite{Y.Hu2019} proposed a so-called generalized structured low rank (GSLR) approach for the piecewise smooth image restoration. Inspired by the $2$-fold inf-convolution \cite{A.Chambolle1997}, the GSLR approach restores a superposition of a piecewise constant layer and a piecewise linear layer whose first and second order derivatives correspond to low rank Hankel matrices in the frequency domain respectively. By balancing the first order and second order derivatives, the GSLR approach has demonstrated significant improvements in the piecewise smooth image restoration tasks over the existing approaches.
Note that it is not difficult to extend the previous GSLR framework to generic piecewise smooth functions. More precisely, by considering the annihilation relation of higher order derivatives, we can extend the GSLR to the so-called $k$-fold inf-convolution ($k\geq3$) for functions with higher regularity. However, in a piecewise smooth function, there could exist singularities on which the derivatives have jump discontinuities. In this case, the annihilating polynomial describing the image singularities cannot annihilate the $k$th order derivatives \cite[Proposition 4]{G.Ongie2015a}. Since we then need to consider the power of the annihilating polynomial, or equivalently, increase the size of the filter to guarantee the annihilation relation for higher order derivatives, the GSLR framework can degrade the low rank (multi-fold) Hankel matrix structure of the higher order derivatives.
Unlike the GSLR framework in \cite{Y.Hu2019} based on the inf-convolution, the proposed SLRM framework is based on the total generalized variation framework \cite{K.Bredies2010}. Briefly speaking, we decompose the gradient into a vector field and a residual, such that the Fourier samples of the residual and of the symmetric gradient of the vector field both correspond to structured low rank matrices. In other words, since our proposed SLRM framework uses the Fourier samples of successive first order derivatives, it does not require powers of the annihilating polynomial describing the image singularities, which enables us to obtain better low rank Hankel matrix structures corresponding to piecewise smooth functions. In addition, since the TGV takes the inf-convolution as a special case, it can also be verified that the proposed SLRM framework is more general than the GSLR framework for piecewise smooth functions.
\subsection{Organization and notation of paper}
The rest of this paper is organized as follows. In \cref{OurFramework}, we present the proposed structured low rank matrix framework for piecewise smooth functions. We first describe the one dimensional case to convey the idea clearly, and then extend the idea to the two dimensional framework. In \cref{ImageRestoration}, we present an image restoration model based on the wavelet frame as an application of the proposed structured low rank matrix framework, followed by an alternating minimization algorithm, and some numerical results are presented in \cref{Experiments} to demonstrate the performance of our framework in piecewise smooth image restoration. Finally, \cref{Conclusion} concludes this paper with a few future directions. All technical proofs are postponed to the appendices.
Throughout this paper, all two dimensional images and two dimensional $k$ tensors defined on the discrete grid will be denoted by the bold faced lower case letters. Note that a two dimensional discrete image and a two dimensional discrete $k$ tensor can also be identified with a vector and a $2^k$-tuple of vectors (as well as a sequence or a $2^k$-tuple of sequences supported on the grid) respectively whenever convenient. All matrices will be denoted by the bold faced upper case letters, and the $m$th row and the $n$th column of a matrix ${\boldsymbol Z}$ will be denoted by ${\boldsymbol Z}^{(m,:)}$ and ${\boldsymbol Z}^{(:,n)}$, respectively. Denote by
\begin{align}\label{ImageGrid}
{\mathbb O}=\left\{-\lfloor N/2\rfloor,\ldots,\lfloor(N-1)/2\rfloor\right\}^2
\end{align}
with $N\in{\mathbb N}$, the $N\times N$ sampling grid. The space of complex valued functions on ${\mathbb O}$ and the space of complex $k$-tensor valued functions on ${\mathbb O}$ are denoted by ${\mathscr V}\simeq{\Bbb C}^{|{\mathbb O}|}$ and ${\mathscr V}_k\simeq{\Bbb C}^{|{\mathbb O}|\times{2^k}}$, respectively. Notice that ${\mathscr V}={\mathscr V}_0$. Given two rectangular grids ${\mathbb K}$ and ${\mathbb M}$, we define
\begin{align*}
{\mathbb K}:{\mathbb M}=\left\{{\boldsymbol k}\in{\mathbb K}:{\boldsymbol k}+{\mathbb M}\subseteq{\mathbb K}\right\}=\left\{{\boldsymbol k}\in{\mathbb K}:{\boldsymbol k}+\boldsymbol{m}\in{\mathbb K}~\text{for all}~\boldsymbol{m}\in{\mathbb M}\right\}.
\end{align*}
Operators on both images and tensors are denoted as the bold faced calligraphic letters. For instance, letting $\boldsymbol{v}\in{\mathscr V}$ and ${\mathbb K}$ be a rectangular $K_1\times K_2$ grid, the corresponding Hankel matrix ${\boldsymbol \mH}\boldsymbol{v}$ is an $M_1\times M_2$ matrix ($M_1=|{\mathbb O}:{\mathbb K}|$ and $M_2=|{\mathbb K}|$) generated by concatenating $K_1\times K_2$ patches of $\boldsymbol{v}$ into row vectors. Note that, in the sense of multi-indices, we have
\begin{align*}
\left({\boldsymbol \mH}\boldsymbol{v}\right)({\boldsymbol k},\boldsymbol{m})=\boldsymbol{v}({\boldsymbol k}+\boldsymbol{m}),~~~~~{\boldsymbol k}\in{\mathbb O}:{\mathbb K},~~\text{and}~~\boldsymbol{m}\in{\mathbb K}.
\end{align*}
With a slight abuse of notation, we also use
\begin{align}\label{kFoldHankel}
{\boldsymbol \mH}\boldsymbol{q}=\left[\begin{array}{ccc}
\left({\boldsymbol \mH}\boldsymbol{q}_1\right)^T&\cdots&\left({\boldsymbol \mH}\boldsymbol{q}_{2^k}\right)^T
\end{array}\right]^T\in{\Bbb C}^{2^kM_1\times M_2}
\end{align}
to denote the $k$-fold Hankel matrix constructed from $\boldsymbol{q}=\left(\boldsymbol{q}_1,\ldots,\boldsymbol{q}_{2^k}\right)\in{\mathscr V}_k$.
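To make the indexing concrete, the patch-based Hankel construction above can be sketched in a few lines of NumPy. The helper names \texttt{valid\_offsets} and \texttt{hankel} are our own illustrative choices, and the $4\times4$ input is a toy example, not data from the paper.

```python
import numpy as np

def valid_offsets(O_shape, K_shape):
    # The set O:K = {k : k + K ⊆ O}, as 0-based offsets into an array of shape O_shape.
    return [(i, j)
            for i in range(O_shape[0] - K_shape[0] + 1)
            for j in range(O_shape[1] - K_shape[1] + 1)]

def hankel(v, K_shape):
    # Rows indexed by k ∈ O:K, columns by m ∈ K; entry (k, m) = v(k + m).
    rows = []
    for (i, j) in valid_offsets(v.shape, K_shape):
        patch = v[i:i + K_shape[0], j:j + K_shape[1]]
        rows.append(patch.reshape(-1))  # concatenate the K1×K2 patch into a row vector
    return np.array(rows)

v = np.arange(16).reshape(4, 4)
H = hankel(v, (2, 2))
# H has |O:K| = 9 rows and |K| = 4 columns
```

The $k$-fold version in \cref{kFoldHankel} is then obtained by stacking such blocks vertically, one per tensor component.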
\section{Structured low rank matrix framework for piecewise smooth functions}\label{OurFramework}
In this section, we introduce our structured low rank matrix framework for piecewise smooth functions. For simplicity, we consider the piecewise linear function throughout this paper. Note, however, that it is not difficult to extend the proposed framework to general piecewise smooth functions.
\subsection{SLRM framework for one dimensional signals}\label{1DFramework}
We first establish the structured low rank matrix framework of the following one dimensional piecewise linear model
\begin{align}\label{signalModel}
u(x)=\sum_{j=1}^{K-1}\left(\alpha_jx+\beta_j\right)1_{[x_j,x_{j+1})}(x),
\end{align}
from its Fourier sample
\begin{align*}
\widehat{u}(k)={\mathscr F}(u)(k)=\int_{-\infty}^{\infty}u(x)e^{-2\pi ikx}\mathrm{d} x,~~~~~k\in\left\{-\left\lfloor N/2\right\rfloor,\ldots,\left\lfloor(N-1)/2\right\rfloor\right\},~~~N\in{\mathbb N}.
\end{align*}
In \cref{signalModel}, $\alpha_j,\beta_j\in{\Bbb C}$, $1_{[x_j,x_{j+1})}$ denotes the characteristic function on the interval $[x_j,x_{j+1})$: $1_{[x_j,x_{j+1})}(x)=1$ if $x\in[x_j,x_{j+1})$, and $0$ otherwise, and $-1/2<x_1<x_2<\cdots<x_K<1/2$ are the locations of the singularities; by singularities we mean both the jumps and the hidden jumps (jumps of derivatives).
In the sense of distribution, the derivative $u'$ satisfies
\begin{align}\label{uDeri}
u'(x)=\sum_{j=1}^{K-1}\alpha_j1_{[x_j,x_{j+1})}(x)+\sum_{j=1}^K\left[{\mathscr T}_j(u)(x_j)\delta(x-x_j)\right]
\end{align}
where ${\mathscr T}_j(u)(x_j)=\left(\alpha_j-\alpha_{j-1}\right)x_j+\left(\beta_j-\beta_{j-1}\right)$ with $\alpha_0=\alpha_K=\beta_0=\beta_K=0$. Letting
\begin{align}\label{p1D}
p(x)=\sum_{j=1}^{K-1}\alpha_j1_{[x_j,x_{j+1})}(x),
\end{align}
we have
\begin{align*}
u'(x)-p(x)=\sum_{j=1}^K{\mathscr T}_j(u)(x_j)\delta(x-x_j).
\end{align*}
Hence, the Fourier transform of $u'-p$ is a linear combination of complex sinusoids:
\begin{align}
{\mathscr F}(u'-p)(\xi)=\sum_{j=1}^K{\mathscr T}_j(u)(x_j)e^{-2\pi ix_j\xi},~~~~~\xi\in{\Bbb R}.
\end{align}
In addition, since the derivative of $p$ is also a Dirac stream:
\begin{align}
p'(x)=\sum_{j=1}^K\left(\alpha_j-\alpha_{j-1}\right)\delta(x-x_j),
\end{align}
its Fourier transform is expressed as
\begin{align}
{\mathscr F}(p')(\xi)=\sum_{j=1}^K\left(\alpha_j-\alpha_{j-1}\right)e^{-2\pi ix_j\xi},~~~~~\xi\in{\Bbb R},
\end{align}
i.e. a linear combination of complex sinusoids.
We introduce the following trigonometric polynomial
\begin{align}\label{1DAnnihilatingPolynomial}
\varphi(x)=\prod_{j=1}^K\left(e^{-2\pi ix}-e^{-2\pi ix_j}\right):=\sum_{k=0}^K\mathbf{a}(k)e^{-2\pi ikx}.
\end{align}
Since $\varphi(x_j)=0$ for $j=1,\ldots,K$, it follows that $\varphi\left(u'-p\right)=0$ and $\varphi p'=0$ in the sense of distribution. In the frequency domain, since we have
\begin{align*}
\widehat{\varphi}(\xi)={\mathscr F}(\varphi)(\xi)=\sum_{k=0}^K\mathbf{a}(k)\delta(\xi+k),
\end{align*}
it follows that
\begin{align}
\left({\mathscr F}(u'-p)\ast\widehat{\varphi}\right)(\xi)&=\sum_{k=0}^K{\mathscr F}(u'-p)(\xi+k)\mathbf{a}(k)=0,\label{1DAnni1}\\
\left({\mathscr F}(p')\ast\widehat{\varphi}\right)(\xi)&=\sum_{k=0}^K{\mathscr F}(p')(\xi+k)\mathbf{a}(k)=0,\label{1DAnni2}
\end{align}
for $\xi\in{\Bbb R}$. Therefore, the Fourier transforms of both $u'-p$ and $p'$ are annihilated by the convolution under the Fourier coefficients of $\varphi$.
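The annihilating filter can be computed by expanding the product in \cref{1DAnnihilatingPolynomial} in the variable $z=e^{-2\pi ix}$. The following NumPy sketch (with our own helper name \texttt{annihilating\_filter} and hypothetical singularity locations) verifies that the resulting $\varphi$ vanishes at each $x_j$.

```python
import numpy as np

def annihilating_filter(xs):
    # Coefficients a(0..K) of φ(x) = Π_j (e^{-2πix} - e^{-2πix_j}) = Σ_k a(k) e^{-2πikx}.
    a = np.array([1.0 + 0j])              # constant polynomial 1 in z = e^{-2πix}
    for xj in xs:
        root = np.exp(-2j * np.pi * xj)
        a = np.convolve(a, [-root, 1.0])  # multiply by (z - z_j); index k = power of z
    return a

xs = [-0.25, 0.0, 0.25]                   # hypothetical singularity locations
a = annihilating_filter(xs)

def phi(x):
    return sum(a[k] * np.exp(-2j * np.pi * k * x) for k in range(len(a)))

# φ vanishes at each singularity location, up to roundoff
assert all(abs(phi(xj)) < 1e-12 for xj in xs)
```

For these particular roots the expansion gives $\mathbf{a}=(-1,1,-1,1)$, since $(z-i)(z-1)(z+i)=z^3-z^2+z-1$.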
In many practical cases, we consider the contiguous Fourier samples (i.e. the samples on a regular grid $\left[N\right]:=\left\{-\lfloor N/2\rfloor,\ldots,\lfloor(N-1)/2\rfloor\right\}$), so \cref{1DAnni1,1DAnni2} become the following systems of linear equations
\begin{align}
\sum_{k=0}^K{\mathscr F}(u'-p)(l+k)\mathbf{a}(k)&=0,\label{1DAnniDis1}\\
\sum_{k=0}^K{\mathscr F}(p')(l+k)\mathbf{a}(k)&=0.\label{1DAnniDis2}
\end{align}
where $l$ is such that $l+k\in[N]$ for all $k=0,\ldots,K$. In the matrix-vector multiplication form, \cref{1DAnniDis1,1DAnniDis2} lead to
\begin{align*}
{\boldsymbol \mH}\left({\mathscr F}\left(u'-p\right)\big|_{[N]}\right)\mathbf{a}={\mathbf 0}~~~\text{and}~~~{\boldsymbol \mH}\left({\mathscr F}\left(p'\right)\big|_{[N]}\right)\mathbf{a}={\mathbf 0},
\end{align*}
which shows that the two $(N-K)\times(K+1)$ Hankel matrices ${\boldsymbol \mH}\left({\mathscr F}\left(u'-p\right)\big|_{[N]}\right)$ and ${\boldsymbol \mH}\left({\mathscr F}\left(p'\right)\big|_{[N]}\right)$ have nontrivial nullspaces. In particular, for $M>K$, we can see that
\begin{align*}
e^{-2\pi imx}\varphi(x)=\sum_{k=m}^{K+m}\mathbf{a}(k-m)e^{-2\pi ikx},~~~~~m=0,\ldots,M-K-1
\end{align*}
is also an annihilating polynomial, which shows that, if $K\leq(N-M+1)\wedge M$, we have
\begin{align*}
\mathrm{rank}\left({\boldsymbol \mH}\left({\mathscr F}\left(u'-p\right)\big|_{[N]}\right)\right)\leq K~~~\text{and}~~~\mathrm{rank}\left({\boldsymbol \mH}\left({\mathscr F}\left(p'\right)\big|_{[N]}\right)\right)\leq K.
\end{align*}
Therefore, both ${\boldsymbol \mH}\left({\mathscr F}\left(u'-p\right)\big|_{[N]}\right)\in{\Bbb C}^{(N-M+1)\times M}$ and ${\boldsymbol \mH}\left({\mathscr F}\left(p'\right)\big|_{[N]}\right)\in{\Bbb C}^{(N-M+1)\times M}$ are \emph{rank-deficient,} which shows that it is possible to convert the piecewise regularity in the continuous domain into the low rank Hankel matrices corresponding to the discrete Fourier samples.
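This rank deficiency is straightforward to confirm numerically. The sketch below uses a hypothetical Dirac stream with $K=3$ singularities (so that its Fourier transform is a sum of three complex sinusoids, as for $p'$ above), samples it on $[N]$ with $N=32$, and checks that the $(N-M+1)\times M$ Hankel matrix with $M=8$ has numerical rank $K=3$ rather than full column rank.

```python
import numpy as np

# Fourier samples of the Dirac stream p'(x) = Σ_j c_j δ(x - x_j): a sum of K sinusoids.
xs = np.array([-0.25, 0.0, 0.25])       # hypothetical singularity locations (K = 3)
cs = np.array([1.0, -2.0, 1.0])         # hypothetical jump amplitudes
N, M = 32, 8                            # N Fourier samples, assumed filter length M > K
ks = np.arange(-(N // 2), (N - 1) // 2 + 1)
F = np.array([np.sum(cs * np.exp(-2j * np.pi * xs * k)) for k in ks])

# Hankel matrix whose l-th row is (F(l), ..., F(l+M-1))
H = np.array([F[l:l + M] for l in range(N - M + 1)])
rank = np.linalg.matrix_rank(H, tol=1e-8)
# The numerical rank equals the number of singularities K = 3, not min(N-M+1, M) = 8
```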
\subsection{SLRM framework for two dimensional images}\label{2DFramework}
Now we aim to establish the structured low rank matrix framework for the following two dimensional piecewise linear function model
\begin{align}\label{uModel}
u(\mathbf{x})=\sum_{j=1}^Ju_j(\mathbf{x})1_{\Omega_j}(\mathbf{x}):=\sum_{j=1}^J\left(\boldsymbol{\alpha}_j^T\mathbf{x}+\beta_j\right)1_{\Omega_j}(\mathbf{x}),~~~~~~~\mathbf{x}\in{\Bbb R}^2,
\end{align}
from its Fourier samples
\begin{align}\label{Forward}
\widehat{u}({\boldsymbol k})={\mathscr F}(u)({\boldsymbol k})=\int_{{\Bbb R}^2}u(\mathbf{x})e^{-2\pi i{\boldsymbol k}\cdot\mathbf{x}}\mathrm{d}\mathbf{x},~~~~~~{\boldsymbol k}\in{\mathbb O},
\end{align}
where the sample grid ${\mathbb O}$ is defined as \cref{ImageGrid}. In \cref{uModel}, $\boldsymbol{\alpha}_j=\left(\alpha_{j1},\alpha_{j2}\right)\in{\Bbb C}^2$, $\beta_j\in{\Bbb C}$ and $1_{\Omega_j}$ denotes the characteristic function on a domain $\Omega_j$; $1_{\Omega_j}(\mathbf{x})=1$ if $\mathbf{x}\in\Omega_j$, and $0$ otherwise. Without loss of generality, we assume that $\Omega_j$ lies in $[-1/2,1/2)^2$ for simplicity. (However, it is not difficult to generalize the setting to an arbitrary rectangular region $[-L_1/2,L_1/2)\times[-L_2/2,L_2/2)$.) We further assume that \cref{uModel} is expressed with the smallest number of characteristic functions such that $\Omega_j$'s are pairwise disjoint. Then the singularities of $u$, including both jump discontinuities of $u$ and hidden jump discontinuities of $u$ (jumps of derivatives), coincide with $\Gamma=\bigcup_{j=1}^J\partial\Omega_j$, which will be called the \emph{singularity set} of $u$ throughout this paper.
Generally, it is difficult to directly establish the SLRM framework without any further information on the singularity set $\Gamma$. Inspired by the two dimensional FRI framework for the piecewise constant function \cite{G.Ongie2015a,G.Ongie2015,G.Ongie2016,G.Ongie2017}, we assume that there exists a finite rectangular and symmetric\footnote{Throughout this paper, we only consider a finite and symmetric grid for the trigonometric polynomials.} index set ${\mathbb K}$ such that
\begin{align}\label{MajorAssumption}
\Gamma\subseteq\left\{\mathbf{x}\in{\Bbb R}^2:\varphi(\mathbf{x})=0\right\}~~~~~\text{with}~~~~~\varphi(\mathbf{x})=\sum_{{\boldsymbol k}\in{\mathbb K}}\mathbf{a}({\boldsymbol k})e^{-2\pi i{\boldsymbol k}\cdot\mathbf{x}}.
\end{align}
Throughout this paper, we call any function $\varphi(\mathbf{x})$ of the form \cref{MajorAssumption} a \emph{trigonometric polynomial}, and the zero level set $\left\{\mathbf{x}\in{\Bbb R}^2:\varphi(\mathbf{x})=0\right\}$ a \emph{trigonometric curve}. For a trigonometric polynomial $\varphi$ in \cref{MajorAssumption}, the degree of $\varphi$ is defined as an ordered pair of degrees in each coordinate, and is denoted by $\deg(\varphi)$. In particular, the $\varphi$ in \cref{MajorAssumption} with the smallest degree is called the \emph{minimal polynomial}. The algebraic properties of trigonometric polynomials and curves, including the existence of the minimal polynomial, are studied in \cite{G.Ongie2016}, to which interested readers can refer for details.
Under this setting, we present \cref{Th1} to establish the following \emph{(linear) annihilation relation} for the piecewise linear function. The proof can be found in \ref{ProofTh1}.
\begin{theorem}\label{Th1} Let $u(\mathbf{x})$ be defined as in \cref{uModel} where the singularity set $\Gamma$ satisfies \cref{MajorAssumption}. Then for some vector field $p=(p_1,p_2)$, we have
\begin{align}
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla u-p\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0,\label{FirstAnnihil}\\
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla_sp\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0,\label{SecondAnnihil}
\end{align}
where ${\boldsymbol{\xi}}\in{\Bbb R}^2$. Here, $\nabla_s$ is a symmetric gradient defined for $p=(p_1,p_2)$ as
\begin{align}\label{SymGrad}
\nabla_sp=\frac{1}{2}\left(\nabla p+\nabla p^T\right)=\left[\begin{array}{cc}
\partial_1p_1&\displaystyle{\frac{1}{2}\left(\partial_2p_1+\partial_1p_2\right)}\\
\displaystyle{\frac{1}{2}\left(\partial_2p_1+\partial_1p_2\right)}&\partial_2p_2
\end{array}\right]
\end{align}
and the Fourier transform ${\mathscr F}$ is performed to each component of $\nabla u-p$ and $\nabla_sp$.
\end{theorem}
Based on \cref{Th1}, we call $\varphi$ satisfying \cref{FirstAnnihil,SecondAnnihil} an \emph{annihilating polynomial} (for $u$ under a vector field $p$), and the Fourier coefficients $\mathbf{a}:=\left\{\mathbf{a}({\boldsymbol k}):{\boldsymbol k}\in{\mathbb K}\right\}$ an \emph{annihilating filter}. Note that the gradient of $u$ is decomposed into $\nabla u-p$ and $p$ such that the Fourier transforms of both $\nabla u-p$ and $\nabla_sp$ are annihilated by the same annihilating filter $\mathbf{a}$. In addition, since the proof of \cref{Th1} tells us that $p$ is the piecewise constant part of $\nabla u$, \cref{SecondAnnihil} can be viewed as an extension of the annihilation relation for piecewise constant functions to piecewise constant vector fields.
\begin{rmk}\label{RK1} According to the proof, \cref{Th1} can also be written as follows: for a function $u$ defined as in \cref{uModel}, $\nabla u$ can be decomposed into
\begin{align}\label{GraduDecompose}
\nabla u=p+\mathrm{d}\boldsymbol{\nu}
\end{align}
with a piecewise constant vector field $p$ and a Radon vector measure $\boldsymbol{\nu}$ supported on $\Gamma$ such that
\begin{align}
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\boldsymbol{\nu}\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0,\label{FirstAnnihilRe}\\
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla_sp\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0.\label{SecondAnnihilRe}
\end{align}
We mention that the decomposition \cref{GraduDecompose} is well-defined in the sense that ${\mathscr F}(\nabla_s\boldsymbol{\nu})$ and ${\mathscr F}(\nabla_sp)$ do not share the minimal annihilating polynomial. To see this, let $\varphi$ in \cref{MajorAssumption} be the minimal polynomial for $\Gamma$. For $F\in C_0^{\infty}({\Bbb R}^2,\mathrm{Sym}^2({\Bbb R}^2))$, we have
\begin{align*}
\langle\varphi\nabla_s\boldsymbol{\nu},F\rangle=-\int_{\Gamma}\left[\nabla_s^T\left(\varphi F\right)\right]\cdot\mathrm{d}\boldsymbol{\nu}=-\int_{\Gamma}\left[F\left(\nabla\varphi\right)\right]\cdot\mathrm{d}\boldsymbol{\nu}-\int_{\Gamma}\left(\varphi\nabla_s^TF\right)\cdot\mathrm{d}\boldsymbol{\nu}.
\end{align*}
Since $\varphi$ is the minimal polynomial for a trigonometric curve $\Gamma$, $\nabla\varphi\neq0$ a.e. on $\Gamma$ \cite[Proposition A.4]{G.Ongie2016}. Hence, $\varphi\nabla_s\boldsymbol{\nu}\neq0$, or equivalently, ${\mathscr F}(\nabla_s\boldsymbol{\nu})$ does not satisfy the annihilation relation with $\mathbf{a}$. In fact, $\nabla_s\boldsymbol{\nu}$ is annihilated by $\varphi^2$, and since $\varphi$ is the minimal polynomial for $\Gamma$, $\varphi^2$ is the minimal polynomial for annihilating $\nabla_s\boldsymbol{\nu}$.
\end{rmk}
We claim that the linear annihilation relations \cref{FirstAnnihil} and \cref{SecondAnnihil} can be seen as balancing the annihilations of the first and second order derivatives. To see this, we first consider an extreme case: a piecewise constant function. In this case, it would be favorable to choose $p=0$, so that \cref{FirstAnnihil} reduces to
\begin{align}\label{Extreme1}
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla u\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})=0,
\end{align}
which is the annihilation relation for $\nabla u$ in \cite{G.Ongie2015a,G.Ongie2015,G.Ongie2016,G.Ongie2017}. For the second extreme case, we note that
\begin{align*}
\nabla^2u=\nabla_s\left(\nabla u\right)=\nabla_s\left(\nabla u-p\right)+\nabla_sp.
\end{align*}
In addition, when $\nabla u$ is a piecewise constant vector field (i.e. $u$ is continuous on $\Gamma$), we can simply choose $p=\nabla u$. Then \cref{SecondAnnihil} becomes
\begin{align}\label{Extreme2}
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla_sp\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})=\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla^2u\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})=0,
\end{align}
the annihilation relation for the second derivatives of $u$. The vector field $p$ thus balances \cref{Extreme1,Extreme2} so as to annihilate the jumps of both $u$ and $\nabla u$ under the annihilating polynomial $\varphi$. Hence, \cref{FirstAnnihil,SecondAnnihil} are closely related to the following TGV:
\begin{align}\label{TGV2}
\mathrm{TGV}(u)=\inf_{p}\gamma_1\left\|\nabla u-p\right\|_1+\gamma_2\left\|\nabla_s p\right\|_1,
\end{align}
which in turn measures the jump discontinuities of $u$ and $p$ in our setting. In particular, if $u$ is continuous on $\partial\Omega_j$ for some $j$, this $\partial\Omega_j$ will not be reflected in \cref{FirstAnnihil}.
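As a two dimensional sanity check of the piecewise constant extreme \cref{Extreme1}, take the hypothetical example $u=1_{\Omega}$ with $\Omega=[-1/4,1/4)^2$ and $p=0$. The separable polynomial $\varphi(\mathbf{x})=(e^{-4\pi ix_1}+1)(e^{-4\pi ix_2}+1)$ vanishes on $\Gamma$ (its zero set is the union of the lines $x_1=\pm1/4$ and $x_2=\pm1/4$), and its coefficient filter is the outer product of $(1,0,1)$ with itself. The sketch below checks the annihilation against the closed-form Fourier transform of $\nabla u$.

```python
import numpy as np

# Piecewise constant extreme: u = 1_Ω with Ω = [-1/4, 1/4)², p = 0.
# Filter a(k1, k2) = outer product of a1 = [1, 0, 1] with itself.
a1 = np.array([1.0, 0.0, 1.0])
A = np.outer(a1, a1)

h = lambda xi: np.exp(1j * np.pi * xi / 2) - np.exp(-1j * np.pi * xi / 2)
g = lambda xi: 0.5 * np.sinc(xi / 2)      # ∫_{-1/4}^{1/4} e^{-2πiξx} dx

# F(∂₁u)(ξ) = h(ξ₁) g(ξ₂)  and  F(∂₂u)(ξ) = g(ξ₁) h(ξ₂)
F1 = lambda x1, x2: h(x1) * g(x2)
F2 = lambda x1, x2: g(x1) * h(x2)

def annihilate(F, x1, x2):
    return sum(A[k1, k2] * F(x1 + k1, x2 + k2)
               for k1 in range(3) for k2 in range(3))

for x1, x2 in [(0.0, 0.0), (0.7, -1.3), (2.4, 0.1)]:
    assert abs(annihilate(F1, x1, x2)) < 1e-12
    assert abs(annihilate(F2, x1, x2)) < 1e-12
```

The cancellation is exact because $h(\xi)+h(\xi+2)=0$, which is the one dimensional annihilation in each coordinate.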
We also note that if we restrict $p$ to the range of $\nabla$, i.e. $p=\nabla u_2$, then by letting $u_1=u-u_2$, we can rewrite \cref{FirstAnnihil,SecondAnnihil} as
\begin{align}
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla u_1\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0\label{PCAnnihil}\\
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla^2u_2\right)({\boldsymbol{\xi}}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0\label{PLAnnihil},
\end{align}
which is exactly the GSLR framework in \cite{Y.Hu2019}; it promotes a similar balance through the decomposition $u=u_1+u_2$ with a piecewise constant $u_1$ and a piecewise linear $u_2$. We can easily see that \cref{PCAnnihil,PLAnnihil} are closely related to the following inf-convolution
\begin{align}\label{InfConv2}
\left(J_1\square J_2\right)(u):=\inf_{u=u_1+u_2}\gamma_1\|\nabla u_1\|_1+\gamma_2\|\nabla^2u_2\|_1.
\end{align}
Note that when $p=\nabla u_2$ and $u_1=u-u_2$, the TGV \cref{TGV2} becomes the above inf-convolution; that is, the TGV takes the inf-convolution as a special case. Likewise, we can see that \cref{FirstAnnihil,SecondAnnihil} in \cref{Th1} take \cref{PCAnnihil,PLAnnihil} in \cite{Y.Hu2019} as a special case.
\begin{rmk}\label{RK2} We further mention that \cref{Th1} is different from the GSLR framework in \cite{Y.Hu2019}. To see this, we assume that the singularity set $\Gamma$ satisfies \cref{MajorAssumption} with the minimal polynomial $\varphi$, i.e. ${\mathbb K}$ is the smallest support of the Fourier coefficients $\mathbf{a}$. Since $\nabla\varphi\neq0$ a.e. on $\Gamma$ \cite[Proposition A.4]{G.Ongie2016}, $\varphi$ may not be able to annihilate the second order derivatives of a piecewise linear function in general \cite{G.Ongie2015a}. Hence, \cref{PCAnnihil,PLAnnihil} prefer to decompose $u=u_1+u_2$ where $u_1$ is a piecewise constant function and $u_2$ is a piecewise linear spline (a continuous piecewise linear function). Meanwhile, the proposed framework is established by annihilating the jumps of each derivative successively, which enables it to cover a broader range of piecewise linear functions. Hence, our framework is more general than the GSLR in \cite{Y.Hu2019}.
\end{rmk}
Next, we present two examples to illustrate this point.
\begin{example}\label{Ex1} We consider one dimensional examples. Let $u(x)=\left(x+5/4\right)1_{[-1/4,0)}(x)+\left(5/4-x\right)1_{[0,1/4)}(x)$. Then we can see that
\begin{align*}
\varphi(x)=\left(e^{-2\pi ix}-e^{\pi i/2}\right)\left(e^{-2\pi ix}-1\right)\left(e^{-2\pi ix}-e^{-\pi i/2}\right):=\sum_{k=0}^3\mathbf{a}(k)e^{-2\pi ikx}
\end{align*}
is a minimal polynomial for the singularity set $\left\{-1/4,0,1/4\right\}$. Since
\begin{align*}
u'(x)=1_{[-1/4,0)}(x)-1_{[0,1/4)}(x)+\delta(x+1/4)-\delta(x-1/4)
\end{align*}
we can choose $p=1_{[-1/4,0)}-1_{[0,1/4)}$, i.e.
\begin{align*}
p'(x)=\delta(x+1/4)-2\delta(x)+\delta(x-1/4).
\end{align*}
Hence, it follows that
\begin{align*}
\sum_{k=0}^3{\mathscr F}(u'-p)(\xi+k)\mathbf{a}(k)=0~~~~~\text{and}~~~~~\sum_{k=0}^3{\mathscr F}(p')(\xi+k)\mathbf{a}(k)=0.
\end{align*}
By letting $u_1(x)=1_{[-1/4,1/4)}(x)$ and $u_2(x)=\left(x+1/4\right)1_{[-1/4,0)}(x)+\left(1/4-x\right)1_{[0,1/4)}(x)$, we also have
\begin{align*}
u_1'(x)&=\delta(x+1/4)-\delta(x-1/4),\\
u_2''(x)&=\delta(x+1/4)-2\delta(x)+\delta(x-1/4),
\end{align*}
which leads to
\begin{align*}
\sum_{k=0}^3{\mathscr F}(u_1')(\xi+k)\mathbf{a}(k)=0~~~~~\text{and}~~~~~\sum_{k=0}^3{\mathscr F}(u_2'')(\xi+k)\mathbf{a}(k)=0.
\end{align*}
In other words, both frameworks can establish annihilation relations if a given function can be decomposed into a piecewise constant function and a piecewise linear spline.
\end{example}
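The annihilation relations in \cref{Ex1} can be checked numerically from the closed-form Fourier transforms of the Dirac streams above. The filter $\mathbf{a}=(-1,1,-1,1)$ used below is obtained by a direct expansion of $\varphi$ in \cref{Ex1}: with $z=e^{-2\pi ix}$, $\varphi=(z-i)(z-1)(z+i)=z^3-z^2+z-1$.

```python
import numpy as np

a = np.array([-1, 1, -1, 1], dtype=complex)   # Fourier coefficients of φ from Example 1

def F_delta(x0):
    # Fourier transform of δ(x - x0)
    return lambda xi: np.exp(-2j * np.pi * x0 * xi)

def annihilate(F, xi):
    return sum(a[k] * F(xi + k) for k in range(4))

# u' - p = δ(x + 1/4) - δ(x - 1/4)  and  p' = δ(x + 1/4) - 2δ(x) + δ(x - 1/4)
F_up = lambda xi: F_delta(-0.25)(xi) - F_delta(0.25)(xi)
F_pp = lambda xi: F_delta(-0.25)(xi) - 2 * F_delta(0.0)(xi) + F_delta(0.25)(xi)

for xi in np.linspace(-5, 5, 11):
    assert abs(annihilate(F_up, xi)) < 1e-12
    assert abs(annihilate(F_pp, xi)) < 1e-12
```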
\begin{example}\label{Ex2} Let $u(x)=x1_{[-1/4,1/4)}(x)$ with
\begin{align*}
\varphi(x)=\left(e^{-2\pi ix}-e^{\pi i/2}\right)\left(e^{-2\pi ix}-e^{-\pi i/2}\right):=\sum_{k=0}^2\mathbf{a}(k)e^{-2\pi ikx}
\end{align*}
being a minimal polynomial for the singularity set $\left\{-1/4,1/4\right\}$. In this case, we have
\begin{align*}
u'(x)=1_{[-1/4,1/4)}(x)-\frac{1}{4}\delta(x+1/4)-\frac{1}{4}\delta(x-1/4).
\end{align*}
If we choose $p=1_{[-1/4,1/4)}$, then we have
\begin{align*}
p'(x)=\delta(x+1/4)-\delta(x-1/4),
\end{align*}
which leads to
\begin{align*}
\sum_{k=0}^2{\mathscr F}(u'-p)(\xi+k)\mathbf{a}(k)=0~~~~~\text{and}~~~~~\sum_{k=0}^2{\mathscr F}(p')(\xi+k)\mathbf{a}(k)=0.
\end{align*}
However, we cannot find a piecewise constant $u_1$ and a piecewise linear spline $u_2$ such that $u=u_1+u_2$,
\begin{align*}
\sum_{k=0}^2{\mathscr F}(u_1')(\xi+k)\mathbf{a}(k)=0~~~~~\text{and}~~~~~\sum_{k=0}^2{\mathscr F}(u_2'')(\xi+k)\mathbf{a}(k)=0.
\end{align*}
Notice that we have
\begin{align*}
u''(x)=\delta(x+1/4)-\delta(x-1/4)-\frac{1}{4}\delta'(x+1/4)-\frac{1}{4}\delta'(x-1/4),
\end{align*}
and $\varphi'(x)\neq0$ for $x=\pm1/4$, so $u''\varphi\neq0$. In fact, we have $u''\varphi^2=0$, so that
\begin{align*}
\sum_{k=0}^4{\mathscr F}(u'')(\xi+k){\boldsymbol b}(k)=0,
\end{align*}
where
\begin{align*}
\varphi^2(x)=\left(e^{-2\pi ix}-e^{\pi i/2}\right)^2\left(e^{-2\pi ix}-e^{-\pi i/2}\right)^2:=\sum_{k=0}^4{\boldsymbol b}(k)e^{-2\pi ikx}.
\end{align*}
Hence, these two examples again illustrate that the proposed framework is more general than the GSLR framework in \cite{Y.Hu2019}.
\end{example}
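Numerically, the failure of $\varphi$ and the success of $\varphi^2$ in \cref{Ex2} can be checked from the closed-form Fourier transforms. Here $\mathbf{a}=(1,0,1)$ and ${\boldsymbol b}=(1,0,2,0,1)$ come from expanding $\varphi=z^2+1$ and $\varphi^2=z^4+2z^2+1$ in $z=e^{-2\pi ix}$, and $u''=\delta(x+1/4)-\delta(x-1/4)-\frac14\delta'(x+1/4)-\frac14\delta'(x-1/4)$ is the second distributional derivative of $u(x)=x1_{[-1/4,1/4)}(x)$.

```python
import numpy as np

a = np.array([1, 0, 1], dtype=complex)        # coefficients of φ  (z² + 1, z = e^{-2πix})
b = np.array([1, 0, 2, 0, 1], dtype=complex)  # coefficients of φ² (z⁴ + 2z² + 1)

def F_u2(xi):
    # Fourier transform of u'' = δ(x+1/4) - δ(x-1/4) - (1/4)δ'(x+1/4) - (1/4)δ'(x-1/4)
    d  = lambda x0: np.exp(-2j * np.pi * x0 * xi)                    # FT of δ(x - x0)
    dp = lambda x0: 2j * np.pi * xi * np.exp(-2j * np.pi * x0 * xi)  # FT of δ'(x - x0)
    return d(-0.25) - d(0.25) - 0.25 * dp(-0.25) - 0.25 * dp(0.25)

xi = 0.3
s_phi  = sum(a[k] * F_u2(xi + k) for k in range(len(a)))
s_phi2 = sum(b[k] * F_u2(xi + k) for k in range(len(b)))
# φ fails to annihilate u'', while φ² succeeds
assert abs(s_phi) > 1e-3
assert abs(s_phi2) < 1e-10
```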
In our setting, the Fourier transform of $u$ is sampled on the grid ${\mathbb O}$ in \cref{ImageGrid} with $N\in{\mathbb N}$ large enough to guarantee a high image resolution. Hence, \cref{FirstAnnihil,SecondAnnihil} become the following finite systems of linear equations
\begin{align}
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla u-p\right)(\boldsymbol{m}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0,\label{FirstSystem}\\
\sum_{{\boldsymbol k}\in{\mathbb K}}{\mathscr F}\left(\nabla_sp\right)(\boldsymbol{m}+{\boldsymbol k})\mathbf{a}({\boldsymbol k})&=0,\label{SecondSystem}
\end{align}
where $\boldsymbol{m}\in{\mathbb O}:{\mathbb K}$. In the matrix-vector multiplication form, we have
\begin{align}
{\boldsymbol \mH}\left(\left({\mathscr F}(\nabla u-p)\right)\big|_{{\mathbb O}}\right)\mathbf{a}&={\mathbf 0},\label{FirstRankDeficient}\\
{\boldsymbol \mH}\left(\left({\mathscr F}(\nabla_sp)\right)\big|_{{\mathbb O}}\right)\mathbf{a}&={\mathbf 0}.\label{SecondRankDeficient}
\end{align}
Hence, both ${\boldsymbol \mH}\left(\left({\mathscr F}(\nabla u-p)\right)\big|_{{\mathbb O}}\right)$ and ${\boldsymbol \mH}\left(\left({\mathscr F}(\nabla_sp)\right)\big|_{{\mathbb O}}\right)$ have nontrivial nullspaces. In addition, even when the two multi-fold Hankel matrices are built with a larger assumed filter support ${\mathbb K}'\supseteq{\mathbb K}$, both of them still have nontrivial nullspaces. To see this, let $\varphi$ be the minimal polynomial for the singularity set with coefficients $\mathbf{a}$ supported on ${\mathbb K}$. Then for any trigonometric polynomial $\eta$ with coefficients ${\boldsymbol b}$ supported on ${\mathbb K}':{\mathbb K}$ such that $\mathbf{a}\ast{\boldsymbol b}$ is supported on ${\mathbb K}'$, it follows that
\begin{align*}
\left(\eta\varphi\right)(\mathbf{x}):=\sum_{{\boldsymbol k}\in{\mathbb K}'}\left(\mathbf{a}\ast{\boldsymbol b}\right)({\boldsymbol k})e^{-2\pi i{\boldsymbol k}\cdot\mathbf{x}}
\end{align*}
is an annihilating polynomial and $\mathbf{a}\ast{\boldsymbol b}$ is an annihilating filter, due to the associativity of convolution. In particular, by letting
\begin{align*}
{\boldsymbol b}({\boldsymbol k})=\boldsymbol{\delta}({\boldsymbol k}-\boldsymbol{m})=\left\{\begin{array}{cl}
1&\text{if}~{\boldsymbol k}=\boldsymbol{m}\vspace{0.4em}\\
0&\text{otherwise}
\end{array}\right.~~~~~{\boldsymbol k}\in{\mathbb K}':{\mathbb K},~~\text{and}~~\boldsymbol{m}\in{\mathbb K}':{\mathbb K},
\end{align*}
we can see that
\begin{align*}
e^{-2\pi i\boldsymbol{m}\cdot\mathbf{x}}\varphi(\mathbf{x})=\sum_{{\boldsymbol k}\in\boldsymbol{m}+{\mathbb K}}\mathbf{a}({\boldsymbol k}-\boldsymbol{m})e^{-2\pi i{\boldsymbol k}\cdot\mathbf{x}},~~~~~\boldsymbol{m}\in{\mathbb K}':{\mathbb K},
\end{align*}
is also an annihilating polynomial, or equivalently, the translation $\mathbf{a}(\cdot-\boldsymbol{m})$ also satisfies \cref{FirstRankDeficient,SecondRankDeficient}. Based on this observation, we present \cref{Prop1} to demonstrate the \emph{low rank} properties of multi-fold Hankel matrices, which will establish the correspondence between the Hankel matrices and the complexity of the singularity set of $u$.
\begin{proposition}\label{Prop1} Let $u(\mathbf{x})$ be defined as in \cref{uModel} with the singularity set $\Gamma=\bigcup_{j=1}^J\partial\Omega_j$ satisfying \cref{MajorAssumption}, where $\varphi$ is the minimal polynomial with coefficients on ${\mathbb K}$. For an assumed filter support ${\mathbb K}'$ strictly containing ${\mathbb K}$, we have
\begin{align}
\mathrm{rank}\left({\boldsymbol \mH}\left(\left({\mathscr F}(\nabla u-p)\right)\big|_{{\mathbb O}}\right)\right)&\leq|{\mathbb K}'|-|{\mathbb K}':{\mathbb K}|,\label{FirstUpperBound}\\
\mathrm{rank}\left({\boldsymbol \mH}\left(\left({\mathscr F}(\nabla_sp)\right)\big|_{{\mathbb O}}\right)\right)&\leq|{\mathbb K}'|-|{\mathbb K}':{\mathbb K}|.\label{SecondUpperBound}
\end{align}
Hence, both ${\boldsymbol \mH}\left(\left({\mathscr F}(\nabla u-p)\right)\big|_{{\mathbb O}}\right)$ and ${\boldsymbol \mH}\left(\left({\mathscr F}(\nabla_sp)\right)\big|_{{\mathbb O}}\right)$ are rank deficient.
\end{proposition}
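The bound in \cref{Prop1} can be probed numerically in the hypothetical piecewise constant extreme $u=1_{[-1/4,1/4)^2}$ with $p=0$, where the separable polynomial $(e^{-4\pi ix_1}+1)(e^{-4\pi ix_2}+1)$ serves as the annihilating polynomial with support ${\mathbb K}=\{0,1,2\}^2$. Taking the assumed support ${\mathbb K}'=\{0,1,2,3\}^2$ gives the bound $|{\mathbb K}'|-|{\mathbb K}':{\mathbb K}|=16-4=12$, which the sketch below checks on the $2$-fold Hankel matrix of the Fourier samples of $\nabla u-p=\nabla u$.

```python
import numpy as np

# Closed-form Fourier transform of ∇u for u = 1_{[-1/4,1/4)²}: F(∂₁u) = h(ξ₁)g(ξ₂), etc.
h = lambda xi: np.exp(1j * np.pi * xi / 2) - np.exp(-1j * np.pi * xi / 2)
g = lambda xi: 0.5 * np.sinc(xi / 2)

N = 16
grid = np.arange(-(N // 2), (N - 1) // 2 + 1)
X1, X2 = np.meshgrid(grid, grid, indexing="ij")
F1 = h(X1) * g(X2)      # F(∂₁u) sampled on the N×N grid O
F2 = g(X1) * h(X2)      # F(∂₂u) sampled on O

def hankel(v, K):
    rows = [v[i:i + K, j:j + K].reshape(-1)
            for i in range(v.shape[0] - K + 1)
            for j in range(v.shape[1] - K + 1)]
    return np.array(rows)

Kp = 4                   # assumed filter support K' (4×4), strictly containing K (3×3)
H = np.vstack([hankel(F1, Kp), hankel(F2, Kp)])   # 2-fold Hankel matrix
rank = np.linalg.matrix_rank(H, tol=1e-8)
# Proposition bound: rank ≤ |K'| - |K':K| = 16 - 4 = 12
assert rank <= 12
```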
In summary, for $u(\mathbf{x})$ defined as in \cref{uModel}, the two multi-fold Hankel matrices
\begin{align*}
{\boldsymbol \mH}\left(\left({\mathscr F}(\nabla u-p)\right)\big|_{{\mathbb O}}\right)~~~\text{and}~~~{\boldsymbol \mH}\left(\left({\mathscr F}(\nabla_sp)\right)\big|_{{\mathbb O}}\right)
\end{align*}
are of low rank, which enables us to convert the piecewise regularity in the continuous domain into low rank multi-fold Hankel matrices corresponding to the discrete Fourier samples. Notice that the GSLR framework in \cite{Y.Hu2019} promotes the decomposition $u=u_1+u_2$ where the Fourier samples of $\nabla u_1$ and $\nabla^2u_2$ correspond to the low rank Hankel matrices. In contrast, the proposed SLRM framework is established by decomposing the gradient of $u$ into $\nabla u-p$ and $p$ such that the Fourier samples of $\nabla u-p$ and $\nabla_sp$ correspond to the low rank Hankel matrices. See \cref{SLRMComparison} for the schematic illustrations.
\begin{figure}[ht]
\centering
\subfloat[GSLR framework]{\label{GSLRFramework}\includegraphics[width=1\textwidth]{GSLRSchematic.pdf}}\\
\subfloat[Proposed SLRM framework]{\label{ProposedSLRM}\includegraphics[width=1\textwidth]{ProposedSLRM.pdf}}
\caption{Schematic diagrams for comparison between the GSLR framework in \cite{Y.Hu2019} and the proposed SLRM framework.}\label{SLRMComparison}
\end{figure}
\section{Application to image restoration}\label{ImageRestoration}
\subsection{Continuous domain regularization for piecewise smooth image restoration}\label{ImageRestorationModel}
Let ${\boldsymbol f}\in{\mathscr V}$ be a degraded measurement modeled as
\begin{align}\label{Linear_IP2}
{\boldsymbol f}={\boldsymbol \mA}\boldsymbol{v}+{\boldsymbol\zeta},
\end{align}
where $\boldsymbol{v}={\mathscr F}(u)\big|_{{\mathbb O}}$ with $u$ defined as in \cref{uModel}, and ${\boldsymbol\zeta}$ is some measurement error\footnote{Here, with a slight abuse of notation, we assume the linear operator ${\boldsymbol \mA}$ acts on the Fourier samples in what follows.}. According to \cref{OurFramework}, the (multi-fold) Hankel matrices corresponding to
\begin{align*}
{\mathscr F}(\nabla u-p)({\boldsymbol{\xi}})&=\left(2\pi i\xi_1\widehat{u}({\boldsymbol{\xi}})-\widehat{p}_1({\boldsymbol{\xi}}),2\pi i\xi_2\widehat{u}({\boldsymbol{\xi}})-\widehat{p}_2({\boldsymbol{\xi}})\right),\\
{\mathscr F}(\nabla_sp)({\boldsymbol{\xi}})&=\left[\begin{array}{cc}
2\pi i\xi_1\widehat{p}_1({\boldsymbol{\xi}})&\pi i\left(\xi_2\widehat{p}_1({\boldsymbol{\xi}})+\xi_1\widehat{p}_2({\boldsymbol{\xi}})\right)\vspace{0.45em}\\
\pi i\left(\xi_2\widehat{p}_1({\boldsymbol{\xi}})+\xi_1\widehat{p}_2({\boldsymbol{\xi}})\right)&2\pi i\xi_2\widehat{p}_2({\boldsymbol{\xi}})
\end{array}\right],
\end{align*}
are low rank. Hence, we can consider
\begin{align}\label{RankMinimization}
\begin{split}
\min_{\boldsymbol{v},\boldsymbol{q}}\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right)+\gamma\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)\right)~~~~~\text{subject to}~~{\boldsymbol f}={\boldsymbol \mA}\boldsymbol{v}+{\boldsymbol\zeta}
\end{split}
\end{align}
to restore $\boldsymbol{v}\in{\mathscr V}$, as a continuous domain regularization for piecewise smooth image restoration. Here, $\boldsymbol{q}={\mathscr F}(p)\big|_{{\mathbb O}}\in{\mathscr V}_1$, and ${\boldsymbol \mD}:{\mathscr V}\to{\mathscr V}_1$ and ${\boldsymbol \mE}:{\mathscr V}_1\to{\mathscr V}_2$ are defined as
\begin{align}
\left({\boldsymbol \mD}\boldsymbol{v}\right)({\boldsymbol k})&=\left(2\pi ik_1\boldsymbol{v}({\boldsymbol k}),2\pi ik_2\boldsymbol{v}({\boldsymbol k})\right),\label{Ddef}\\
\left({\boldsymbol \mE}\boldsymbol{q}\right)({\boldsymbol k})&=\left[\begin{array}{cc}
2\pi ik_1\boldsymbol{q}_1({\boldsymbol k})&\pi i\left(k_2\boldsymbol{q}_1({\boldsymbol k})+k_1\boldsymbol{q}_2({\boldsymbol k})\right)\vspace{0.45em}\\
\pi i\left(k_2\boldsymbol{q}_1({\boldsymbol k})+k_1\boldsymbol{q}_2({\boldsymbol k})\right)&2\pi ik_2\boldsymbol{q}_2({\boldsymbol k})
\end{array}\right],\label{Edef}
\end{align}
for ${\boldsymbol k}=(k_1,k_2)\in{\mathbb O}$, respectively.
When $\boldsymbol{q}={\mathbf 0}$, \cref{RankMinimization} reduces to
\begin{align}\label{TVRankMinimization}
\min_{\boldsymbol{v}}~\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mD}_1\boldsymbol{v}\right)\right)~~~~~\text{subject to}~~{\boldsymbol f}={\boldsymbol \mA}\boldsymbol{v}+{\boldsymbol\zeta}
\end{align}
where ${\boldsymbol \mD}_1={\boldsymbol \mD}$. When $\boldsymbol{q}={\boldsymbol \mD}\boldsymbol{v}$, \cref{RankMinimization} becomes
\begin{align}\label{2ndTVRankMinimization}
\min_{\boldsymbol{v}}~\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mD}_2\boldsymbol{v}\right)\right)~~~~~\text{subject to}~~{\boldsymbol f}={\boldsymbol \mA}\boldsymbol{v}+{\boldsymbol\zeta}
\end{align}
where ${\boldsymbol \mD}_2:{\mathscr V}\to{\mathscr V}_2$ is defined as
\begin{align}\label{D2Def}
\left({\boldsymbol \mD}_2\boldsymbol{v}\right)({\boldsymbol k})=\left({\boldsymbol \mE}\left({\boldsymbol \mD}\boldsymbol{v}\right)\right)({\boldsymbol k})=\left[\begin{array}{cc}
-4\pi^2k_1^2\boldsymbol{v}({\boldsymbol k})&-4\pi^2k_1k_2\boldsymbol{v}({\boldsymbol k})\vspace{0.45em}\\
-4\pi^2k_1k_2\boldsymbol{v}({\boldsymbol k})&-4\pi^2k_2^2\boldsymbol{v}({\boldsymbol k})
\end{array}\right]
\end{align}
for ${\boldsymbol k}\in{\mathbb O}$. Finally, when $\boldsymbol{q}={\boldsymbol \mD}\boldsymbol{v}_2$ for some $\boldsymbol{v}_2\in{\mathscr V}$, by letting $\boldsymbol{v}_1=\boldsymbol{v}-\boldsymbol{v}_2$, we obtain
\begin{align}\label{GSLRRankMinimization}
\min_{\boldsymbol{v}_1,\boldsymbol{v}_2}~\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mD}_1\boldsymbol{v}_1\right)\right)+\gamma\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mD}_2\boldsymbol{v}_2\right)\right)~~~~~\text{subject to}~~{\boldsymbol f}={\boldsymbol \mA}(\boldsymbol{v}_1+\boldsymbol{v}_2)+{\boldsymbol\zeta},
\end{align}
which is a rank minimization model based on the GSLR framework.
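As a sanity check of the definitions \cref{Ddef,Edef,D2Def}, the following sketch (hypothetical NumPy code on a small centered frequency grid) implements ${\boldsymbol \mD}$, ${\boldsymbol \mE}$, and ${\boldsymbol \mD}_2$ and verifies the composition ${\boldsymbol \mD}_2={\boldsymbol \mE}{\boldsymbol \mD}$ entrywise.

```python
import numpy as np

# A small centered frequency grid standing in for O.
k1, k2 = np.meshgrid(np.arange(-4, 5), np.arange(-4, 5), indexing="ij")

def D(v):
    """(D v)(k) = (2*pi*i*k1*v(k), 2*pi*i*k2*v(k)): Fourier samples of the gradient."""
    return np.stack([2j * np.pi * k1 * v, 2j * np.pi * k2 * v])

def E(q):
    """(E q)(k): Fourier samples of the symmetrized gradient, as a 2x2 matrix field."""
    q1, q2 = q
    off = 1j * np.pi * (k2 * q1 + k1 * q2)
    return np.stack([np.stack([2j * np.pi * k1 * q1, off]),
                     np.stack([off, 2j * np.pi * k2 * q2])])

def D2(v):
    """(D2 v)(k) = -4*pi^2 * [[k1^2, k1*k2], [k1*k2, k2^2]] * v(k)."""
    return -4 * np.pi ** 2 * np.stack([np.stack([k1 * k1 * v, k1 * k2 * v]),
                                       np.stack([k1 * k2 * v, k2 * k2 * v])])

rng = np.random.default_rng(0)
v = rng.standard_normal(k1.shape) + 1j * rng.standard_normal(k1.shape)
print(np.allclose(E(D(v)), D2(v)))  # True
```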
From the two extreme cases \cref{TVRankMinimization,2ndTVRankMinimization}, we can see that \cref{RankMinimization} aims to restore piecewise smooth functions by decomposing ${\boldsymbol \mD}\boldsymbol{v}$ into ${\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}$ and $\boldsymbol{q}$, thereby balancing the low rank multi-fold Hankel matrices of the Fourier samples of the first order derivatives and of the second order derivatives. In addition, we can also see that \cref{RankMinimization} includes the GSLR rank minimization model \cref{GSLRRankMinimization}, which directly decomposes $\boldsymbol{v}=\boldsymbol{v}_1+\boldsymbol{v}_2$, as a special case. Since our SLRM framework is more general than the GSLR framework (see remark \ref{RK2} and examples \ref{Ex1} and \ref{Ex2}), it can be expected that \cref{RankMinimization} is able to restore a wider range of piecewise smooth functions than \cref{GSLRRankMinimization}.
\subsection{From low rank model to tight frame approach}\label{ProposedApproach}
Though \cref{RankMinimization} is an NP-hard problem, there are numerous tractable approaches available, including the convex nuclear norm relaxation (e.g. \cite{J.F.Cai2016,M.Fazel2013}), the iterative reweighted least squares (IRLS) for the Schatten $p$-norm minimization \cite{M.Fornasier2011,Y.Hu2019,K.Mohan2012,G.Ongie2017}, etc. Inspired by the SVD of a low rank Hankel matrix, we propose another relaxation of \cref{RankMinimization}, similar to \cite[Theorem 3.2]{J.F.Cai2020} where the continuous domain regularization for the piecewise constant image restoration is studied. For this purpose, we present the main idea of the image restoration model in \cref{Th2}. The proof is postponed to \ref{ProofTh2}.
We begin by introducing some notation. For a set of $K_1\times K_2$ filters $\mathbf{a}_1,\ldots,\mathbf{a}_{M_2}$ supported on ${\mathbb K}$, we define ${\boldsymbol \mW}$ and ${\boldsymbol \mW}^*$ (the adjoint of ${\boldsymbol \mW}$) as
\begin{align}
{\boldsymbol \mW}&=\left[{\boldsymbol \mS}_{\mathbf{a}_1(-\cdot)}^T,{\boldsymbol \mS}_{\mathbf{a}_2(-\cdot)}^T,\ldots,{\boldsymbol \mS}_{\mathbf{a}_{M_2}(-\cdot)}^T\right]^T,\label{OurAnalysis}\\
{\boldsymbol \mW}^*&=\left[{\boldsymbol \mS}_{\overline{\mathbf{a}}_1},{\boldsymbol \mS}_{\overline{\mathbf{a}}_2},\ldots,{\boldsymbol \mS}_{\overline{\mathbf{a}}_{M_2}}\right],\label{OurSynthesis}
\end{align}
where ${\boldsymbol \mS}_{\mathbf{a}}$ is a discrete convolution under the periodic boundary condition:
\begin{align*}
\left({\boldsymbol \mS}_{\mathbf{a}}\boldsymbol{v}\right)({\boldsymbol k})=\left(\mathbf{a}\ast\boldsymbol{v}\right)({\boldsymbol k})=\sum_{\boldsymbol{m}\in{\mathbb Z}^2}\mathbf{a}({\boldsymbol k}-\boldsymbol{m})\boldsymbol{v}(\boldsymbol{m}).
\end{align*}
In other words, both ${\boldsymbol \mW}$ and ${\boldsymbol \mW}^*$ are concatenations of discrete convolutions.
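The adjoint relation between \cref{OurAnalysis,OurSynthesis} boils down to the fact that the adjoint of the correlation ${\boldsymbol \mS}_{\mathbf{a}(-\cdot)}$ is the convolution ${\boldsymbol \mS}_{\overline{\mathbf{a}}}$. A minimal 1-D check (hypothetical NumPy code, with periodic convolutions realized through the FFT):

```python
import numpy as np

N, taps = 64, 8
rng = np.random.default_rng(1)
a = rng.standard_normal(taps) + 1j * rng.standard_normal(taps)  # a filter on K
a_pad = np.zeros(N, dtype=complex)
a_pad[:taps] = a

def conv(b, v):
    """Periodic convolution (b * v)(k) = sum_m b(k - m) v(m), via the DFT."""
    return np.fft.ifft(np.fft.fft(b) * np.fft.fft(v))

a_rev = np.roll(a_pad[::-1], 1)  # a(-.) modulo N
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)

lhs = np.vdot(w, conv(a_rev, v))           # <S_{a(-.)} v, w>
rhs = np.vdot(conv(np.conj(a_pad), w), v)  # <v, S_{conj(a)} w>
print(np.allclose(lhs, rhs))  # True
```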
\begin{theorem}\label{Th2} Let $u(\mathbf{x})$ be defined as in \cref{uModel}, and let $\boldsymbol{v}={\mathscr F}(u)\big|_{{\mathbb O}}\in{\mathscr V}$ and $\boldsymbol{q}={\mathscr F}(p)\big|_{{\mathbb O}}\in{\mathscr V}_1$ be the Fourier samples. Let ${\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\in{\Bbb C}^{2M_1\times M_2}$ and ${\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)\in{\Bbb C}^{4M_1\times M_2}$ be multi-fold Hankel matrices with ${\boldsymbol \mD}$ and ${\boldsymbol \mE}$ defined as in \cref{Ddef,Edef}. Assume that ${\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)$ and ${\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)$ satisfy
\begin{align}\label{Assumption}
\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right)=r_1\ll2M_1\wedge M_2~~\text{and}~~\mathrm{rank}\left({\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)\right)=r_2\ll4M_1\wedge M_2.
\end{align}
Consider the full SVDs ${\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)={\boldsymbol X}_1\boldsymbol{\Sigma}_1{\boldsymbol Y}_1^*$ and ${\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)={\boldsymbol X}_2\boldsymbol{\Sigma}_2{\boldsymbol Y}_2^*$, and define $\mathbf{a}_{1l}=M_2^{-1/2}{\boldsymbol Y}_1^{(:,l)}$ and $\mathbf{a}_{2l}=M_2^{-1/2}{\boldsymbol Y}_2^{(:,l)}$ by reformulating each column vector into a $K_1\times K_2$ filter supported on ${\mathbb K}$. Then ${\boldsymbol \mW}_1$ and ${\boldsymbol \mW}_2$, defined as in \cref{OurAnalysis} using the filters $\left\{\mathbf{a}_{11},\ldots,\mathbf{a}_{1M_2}\right\}$ and $\left\{\mathbf{a}_{21},\ldots,\mathbf{a}_{2M_2}\right\}$, satisfy
\begin{align}
{\boldsymbol \mW}_1^*{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)&=\sum_{l=1}^{M_2}{\boldsymbol \mS}_{\overline{\mathbf{a}}_{1l}}\left({\boldsymbol \mS}_{\mathbf{a}_{1l}(-\cdot)}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right)={\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q},\label{LowRankHankelTightFrame1}\\
{\boldsymbol \mW}_2^*{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)&=\sum_{l=1}^{M_2}{\boldsymbol \mS}_{\overline{\mathbf{a}}_{2l}}\left({\boldsymbol \mS}_{\mathbf{a}_{2l}(-\cdot)}\left({\boldsymbol \mE}\boldsymbol{q}\right)\right)={\boldsymbol \mE}\boldsymbol{q},\label{LowRankHankelTightFrame2}
\end{align}
and for ${\boldsymbol k}\in{\mathbb O}:{\mathbb K}$, we have
\begin{align}
\left({\boldsymbol \mS}_{\mathbf{a}_{1l}}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right)({\boldsymbol k})&={\mathbf 0},~~~~~l=r_1+1,\ldots,M_2\label{TightFrameSparse1}\\
\left({\boldsymbol \mS}_{\mathbf{a}_{2l}}\left({\boldsymbol \mE}\boldsymbol{q}\right)\right)({\boldsymbol k})&={\mathbf 0},~~~~~l=r_2+1,\ldots,M_2\label{TightFrameSparse2},
\end{align}
where the discrete convolution is performed on each component of ${\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}$ and ${\boldsymbol \mE}\boldsymbol{q}$, respectively. Consequently, if ${\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)$ (respectively ${\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)$) is of low rank, then its right singular vectors form a tight frame under which ${\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}$ (respectively ${\boldsymbol \mE}\boldsymbol{q}$) is sparsely represented.
\end{theorem}
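A 1-D analogue of the tight frame identities \cref{LowRankHankelTightFrame1,LowRankHankelTightFrame2} can be verified numerically: filters formed from the columns of any unitary matrix, scaled by $M_2^{-1/2}$, satisfy ${\boldsymbol \mW}^*{\boldsymbol \mW}={\boldsymbol \mI}$. The sketch below is hypothetical NumPy code, not the 2-D implementation used in the experiments.

```python
import numpy as np

N, M2 = 64, 8
rng = np.random.default_rng(2)

# Any unitary Y plays the role of the right singular vector matrix.
Y, _ = np.linalg.qr(rng.standard_normal((M2, M2))
                    + 1j * rng.standard_normal((M2, M2)))
filters = Y.T / np.sqrt(M2)  # rows: a_1, ..., a_{M2}, each with M2 taps

def conv(b, v):
    """Periodic convolution via the DFT (b zero-padded to length N)."""
    return np.fft.ifft(np.fft.fft(b, N) * np.fft.fft(v))

v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
recon = np.zeros(N, dtype=complex)
for a in filters:
    a_pad = np.zeros(N, dtype=complex)
    a_pad[:M2] = a
    a_rev = np.roll(a_pad[::-1], 1)                # a(-.)
    recon += conv(np.conj(a_pad), conv(a_rev, v))  # S_{conj(a)} S_{a(-.)} v
print(np.allclose(recon, v))  # True
```

The reconstruction is exact because $\sum_l\mathbf{a}_l\mathbf{a}_l^*=M_2^{-1}{\boldsymbol Y}{\boldsymbol Y}^*=M_2^{-1}{\boldsymbol \mI}$, so the squared frequency responses of the filters sum to one.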
In words, \cref{Th2} tells us that if the SVDs of the multi-fold Hankel matrices are known as oracles, we can explicitly construct tight frames under which ${\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}$ and ${\boldsymbol \mE}\boldsymbol{q}$ are sparsely represented, and the sparsity of the canonical coefficients can be grouped according to the filters. Hence, motivated by the idea in \cite{D.Guo2018}, assume that $\widetilde{\boldsymbol{v}}\in{\mathscr V}$ and $\widetilde{\boldsymbol{q}}\in{\mathscr V}_1$ are a priori estimates of $\boldsymbol{v}$ and $\boldsymbol{q}$ with the SVDs
\begin{align*}
{\boldsymbol \mH}\left({\boldsymbol \mD}\widetilde{\boldsymbol{v}}-\widetilde{\boldsymbol{q}}\right)=\widetilde{{\boldsymbol X}}_1\widetilde{\boldsymbol{\Sigma}}_1\widetilde{{\boldsymbol Y}}_1^*~~\text{and}~~{\boldsymbol \mH}\left({\boldsymbol \mE}\widetilde{\boldsymbol{q}}\right)=\widetilde{{\boldsymbol X}}_2\widetilde{\boldsymbol{\Sigma}}_2\widetilde{{\boldsymbol Y}}_2^*.
\end{align*}
Then we define the tight frame transforms ${\boldsymbol \mW}_1$ and ${\boldsymbol \mW}_2$ in \cref{OurAnalysis} via
\begin{align*}
\mathbf{a}_{1l}=M_2^{-1/2}\widetilde{{\boldsymbol Y}}_1^{(:,l)}~~~\text{and}~~~\mathbf{a}_{2l}=M_2^{-1/2}\widetilde{{\boldsymbol Y}}_2^{(:,l)},~~~~l=1,\ldots,M_2.
\end{align*}
Since \cref{TightFrameSparse1,TightFrameSparse2} are then only approximately true under these ${\boldsymbol \mW}_1$ and ${\boldsymbol \mW}_2$, we instead drop the group sparsity pattern in the canonical coefficients in favor of a better sparse approximation. This leads us to solve
\begin{align}\label{ProposedModel}
\min_{\boldsymbol{v},\boldsymbol{q}}\frac{1}{2}\left\|{\boldsymbol \mA}\boldsymbol{v}-{\boldsymbol f}\right\|_2^2+\left\|\boldsymbol{\gamma}_1\cdot{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right\|_1+\left\|\boldsymbol{\gamma}_2\cdot{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)\right\|_1,
\end{align}
where the $\ell_1$ norms take the form of
\begin{align*}
\left\|\boldsymbol{\gamma}_1\cdot{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right\|_1&=\sum_{l=1}^{M_2}\gamma_{1l}\left\|{\boldsymbol \mS}_{\mathbf{a}_{1l}(-\cdot)}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right\|_1\\
\left\|\boldsymbol{\gamma}_2\cdot{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)\right\|_1&=\sum_{l=1}^{M_2}\gamma_{2l}\left\|{\boldsymbol \mS}_{\mathbf{a}_{2l}(-\cdot)}\left({\boldsymbol \mE}\boldsymbol{q}\right)\right\|_1
\end{align*}
to reflect the different weights according to the index of filters. Finally, we reflect the singular values of the pre-restored Hankel matrices in the regularization parameters as
\begin{align*}
\gamma_{kl}=\frac{\nu_k}{\widetilde{\boldsymbol{\Sigma}}_k^{(l,l)}+\varepsilon},~~~~~k=1,2~~\text{and}~~l=1,\ldots,M_2
\end{align*}
with some $\nu_k>0$ and a small $\varepsilon>0$ to avoid division by zero. By relaxing the sparsity of the tight frame coefficients over the entire range of the ${\boldsymbol \mW}_k$'s (not necessarily in groups as in \cref{TightFrameSparse1,TightFrameSparse2}), we expect to achieve more flexibility, thereby improving the restoration performance.
Since the wavelet frame based relaxation model \cref{ProposedModel} is inspired by \cref{RankMinimization} via the SVDs of ${\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)$ and ${\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)$, we can say that our relaxation model \cref{ProposedModel} restores piecewise smooth functions by decomposing ${\boldsymbol \mD}\boldsymbol{v}$ into ${\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}$ and $\boldsymbol{q}$, thereby balancing the low rank Hankel matrices of the Fourier samples of the first order derivatives and of the second order derivatives. In addition, we further note that, following \cite{J.F.Cai2020}, it is also possible to consider the following data driven tight frame model
\begin{align}\label{PSDDTFModel}
\begin{split}
&~~~\min_{\boldsymbol{v},\boldsymbol{q},\left\{{\boldsymbol c}_j,{\boldsymbol \mW}_j\right\}_{j=1}^2}\frac{1}{2}\left\|{\boldsymbol \mA}\boldsymbol{v}-{\boldsymbol f}\right\|_2^2+\frac{\mu_1}{2}\left\|{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)-{\boldsymbol c}_1\right\|_2^2+\left\|\boldsymbol{\gamma}_1\cdot{\boldsymbol c}_1\right\|_0\\
&\hspace{15.00em}+\frac{\mu_2}{2}\left\|{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)-{\boldsymbol c}_2\right\|_2^2+\left\|\boldsymbol{\gamma}_2\cdot{\boldsymbol c}_2\right\|_0\\
&~~~~~\text{subject to}~~{\boldsymbol \mW}_1^*{\boldsymbol \mW}_1={\boldsymbol \mW}_2^*{\boldsymbol \mW}_2={\boldsymbol \mI},
\end{split}
\end{align}
with the weighted $\ell_0$ norm $\|\boldsymbol{\gamma}_k\cdot{\boldsymbol c}_k\|_0$ ($k=1,2$) encoding the number of nonzero entries in ${\boldsymbol c}_k$, to learn tight frames and restore the Fourier samples $\boldsymbol{v}$ simultaneously. Even though it is not clear at this point whether the adaptive tight frame system will lead to better restoration results, throughout this paper we only consider the model \cref{ProposedModel} rather than the data driven tight frame model \cref{PSDDTFModel}, for the following reasons. First of all, given that the ${\boldsymbol \mW}_k$'s are properly estimated, it may not be necessary to further learn them at additional computational cost. Second, since \cref{ProposedModel} is convex whereas \cref{PSDDTFModel} is nonconvex, we can expect better behavior of, and stronger theoretical support for, the numerical algorithms. Most importantly, from the viewpoint of wavelet frame based image restoration, the model \cref{ProposedModel} is an \emph{analysis approach} \cite{J.F.Cai2009/10}, while the data driven tight frame model \cref{PSDDTFModel} can be classified as a \emph{balanced approach} \cite{J.F.Cai2008,R.H.Chan2003}. (See \ref{PreliminariesTightFrame} for a brief description of the wavelet frame based models.) It is well known that the analysis approach reflects the structure of a target image better than other approaches (e.g. \cite{J.F.Cai2012}). Since we derive the wavelet frame based approach from the structured low rank matrix framework, it can be expected that the convex relaxation model \cref{ProposedModel} will reflect our SLRM framework for piecewise smooth functions better than the data driven tight frame model \cref{PSDDTFModel}.
\subsection{Alternating minimization algorithm}\label{AlternatingMinimizationAlgorithm}
Among the numerous algorithms which can solve the convex model \cref{ProposedModel}, we adopt the ADMM \cite{J.Eckstein1992} or the split Bregman algorithm \cite{W.Guo2014}, which converts \cref{ProposedModel} into several subproblems with closed form solutions and comes with a convergence guarantee \cite{J.F.Cai2009/10}. More precisely, let ${\boldsymbol c}_1={\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)$ and ${\boldsymbol c}_2={\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)$. Then \cref{ProposedModel} can be rewritten as
\begin{align*}
&\min_{\boldsymbol{v},\boldsymbol{q},{\boldsymbol c}_1,{\boldsymbol c}_2}\frac{1}{2}\left\|{\boldsymbol \mA}\boldsymbol{v}-{\boldsymbol f}\right\|_2^2+\left\|\boldsymbol{\gamma}_1\cdot{\boldsymbol c}_1\right\|_1+\left\|\boldsymbol{\gamma}_2\cdot{\boldsymbol c}_2\right\|_1\\
&\text{subject to}~~~{\boldsymbol c}_1={\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right),~~\text{and}~~{\boldsymbol c}_2={\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right).
\end{align*}
Under this reformulation, the overall algorithm is summarized in \cref{Alg1}.
\begin{algorithm}[t!]
\begin{algorithmic}
\STATE{\textbf{Initialization:} $\boldsymbol{v}^{0}$, $\boldsymbol{q}^{0}$, ${\boldsymbol c}_{1}^{0}$, ${\boldsymbol c}_{2}^{0}$, ${\boldsymbol d}_1^0$, ${\boldsymbol d}_2^0$}
\FOR{$n=0$, $1$, $2$, $\cdots$}
\STATE{\textbf{(1)} Update $\boldsymbol{v}$ and $\boldsymbol{q}$:
\begin{align}
\left[\begin{array}{c}\boldsymbol{v}^{n+1}\\
\boldsymbol{q}^{n+1}\end{array}\right]&=\operatornamewithlimits{argmin}_{\boldsymbol{v},\boldsymbol{q}}\frac{1}{2}\left\|{\boldsymbol \mA}\boldsymbol{v}-{\boldsymbol f}\right\|_2^2+\frac{\beta}{2}\left\|{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)-{\boldsymbol c}_1^n+{\boldsymbol d}_1^n\right\|_2^2+\frac{\beta}{2}\left\|{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)-{\boldsymbol c}_2^n+{\boldsymbol d}_2^n\right\|_2^2\label{vqsubprob}
\end{align}
\textbf{(2)} Update ${\boldsymbol c}_1$ and ${\boldsymbol c}_2$:
\begin{align}
{\boldsymbol c}_1^{n+1}&=\operatornamewithlimits{argmin}_{{\boldsymbol c}_1}\left\|\boldsymbol{\gamma}_1\cdot{\boldsymbol c}_1\right\|_1+\frac{\beta}{2}\left\|{\boldsymbol c}_1-{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}^{n+1}-\boldsymbol{q}^{n+1}\right)-{\boldsymbol d}_1^n\right\|_2^2\label{c1sub}\\
{\boldsymbol c}_2^{n+1}&=\operatornamewithlimits{argmin}_{{\boldsymbol c}_2}\left\|\boldsymbol{\gamma}_2\cdot{\boldsymbol c}_2\right\|_1+\frac{\beta}{2}\left\|{\boldsymbol c}_2-{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}^{n+1}\right)-{\boldsymbol d}_2^n\right\|_2^2\label{c2sub}
\end{align}
\textbf{(3)} Update ${\boldsymbol d}_1$ and ${\boldsymbol d}_2$:
\begin{align}
{\boldsymbol d}_1^{n+1}&={\boldsymbol d}_1^n+{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}^{n+1}-\boldsymbol{q}^{n+1}\right)-{\boldsymbol c}_1^{n+1}\\
{\boldsymbol d}_2^{n+1}&={\boldsymbol d}_2^n+{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}^{n+1}\right)-{\boldsymbol c}_2^{n+1}.
\end{align}}
\ENDFOR
\end{algorithmic}
\caption{Split Bregman Algorithm for \cref{ProposedModel}}\label{Alg1}
\end{algorithm}
For \cref{vqsubprob}, since ${\boldsymbol \mW}_k^*{\boldsymbol \mW}_k={\boldsymbol \mI}$ for $k=1,2$, we solve the following system of linear equations
\begin{align}\label{vqsystem}
\left[\begin{array}{cc}
{\boldsymbol \mK}_{11}&{\boldsymbol \mK}_{21}^*\\
{\boldsymbol \mK}_{21}&{\boldsymbol \mK}_{22}
\end{array}\right]\left[\begin{array}{c}
\boldsymbol{v}\\
\boldsymbol{q}
\end{array}\right]=\left[\begin{array}{c}
{\boldsymbol f}_1\\
{\boldsymbol f}_2
\end{array}\right]
\end{align}
where
\begin{align*}
\begin{array}{rl}
{\boldsymbol \mK}_{11}\hspace{-0.8em}&={\boldsymbol \mA}^*{\boldsymbol \mA}+\beta{\boldsymbol \mD}^*{\boldsymbol \mD}\\
{\boldsymbol \mK}_{21}\hspace{-0.8em}&=-\beta{\boldsymbol \mD}\\
{\boldsymbol \mK}_{22}\hspace{-0.8em}&=\beta{\boldsymbol \mI}+\beta{\boldsymbol \mE}^*{\boldsymbol \mE}
\end{array}~~~\text{and}~~~\begin{array}{rl}
{\boldsymbol f}_1\hspace{-0.8em}&={\boldsymbol \mA}^*{\boldsymbol f}+\beta{\boldsymbol \mD}^*\left[{\boldsymbol \mW}_1^*\left({\boldsymbol c}_1^n-{\boldsymbol d}_1^n\right)\right]\vspace{0.88em}\\
{\boldsymbol f}_2\hspace{-0.8em}&=-\beta{\boldsymbol \mW}_1^*\left({\boldsymbol c}_1^n-{\boldsymbol d}_1^n\right)+\beta{\boldsymbol \mE}^*\left[{\boldsymbol \mW}_2^*\left({\boldsymbol c}_2^n-{\boldsymbol d}_2^n\right)\right].
\end{array}
\end{align*}
Depending on the form of ${\boldsymbol \mA}$, various methods can be used to solve \cref{vqsystem} efficiently. For example, when ${\boldsymbol \mA}$ is a pointwise multiplication in the frequency domain (e.g. image denoising, image deblurring, and CS restoration), so are the constituent operators ${\boldsymbol \mK}_{11}$, ${\boldsymbol \mK}_{21}$, and ${\boldsymbol \mK}_{22}$, and thus we can solve \cref{vqsystem} pointwise via Cramer's rule. For a more general ${\boldsymbol \mA}$, we can use a distributed optimization based method \cite{S.Boyd2011}.
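For illustration, when the system decouples, the per-frequency blocks (of size $3\times3$: one unknown for $\boldsymbol{v}({\boldsymbol k})$ and two for $\boldsymbol{q}({\boldsymbol k})$) can be solved in one batched call. The sketch below is hypothetical NumPy code with random well conditioned blocks standing in for the actual ${\boldsymbol \mK}_{ij}$ entries; for systems this small, Cramer's rule gives the same solution in closed form.

```python
import numpy as np

n_freq = 256  # number of frequencies in O (illustrative)
rng = np.random.default_rng(3)

# Random well conditioned 3x3 blocks standing in for the per-frequency systems.
mats = rng.standard_normal((n_freq, 3, 3)) + 1j * rng.standard_normal((n_freq, 3, 3))
mats += 3 * np.eye(3)  # diagonal boost keeps the blocks well conditioned here
rhs = rng.standard_normal((n_freq, 3)) + 1j * rng.standard_normal((n_freq, 3))

sol = np.linalg.solve(mats, rhs)  # one small solve per frequency, batched
print(np.allclose(np.einsum("nij,nj->ni", mats, sol), rhs))  # True
```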
The closed form solutions for \cref{c1sub,c2sub} are expressed in terms of soft thresholding:
\begin{align}
{\boldsymbol c}_1^{n+1}&={\boldsymbol \mT}_{\boldsymbol{\gamma}_1/\beta}\left({\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}^{n+1}-\boldsymbol{q}^{n+1}\right)+{\boldsymbol d}_1^n\right),\label{c1explicit}\\
{\boldsymbol c}_2^{n+1}&={\boldsymbol \mT}_{\boldsymbol{\gamma}_2/\beta}\left({\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}^{n+1}\right)+{\boldsymbol d}_2^n\right).\label{c2explicit}
\end{align}
More precisely, the soft thresholding operator ${\boldsymbol \mT}_{\boldsymbol{\gamma}}\left({\boldsymbol c}\right)$ for ${\boldsymbol c}\in{\mathscr V}_k^{M_2}$ and $\boldsymbol{\gamma}=\left[\begin{array}{ccc}
\gamma_1&\cdots&\gamma_{M_2}
\end{array}\right]^T$ is defined in the following componentwise manner:
\begin{align*}
{\boldsymbol \mT}_{\boldsymbol{\gamma}}\left({\boldsymbol c}\right)_{l,m}({\boldsymbol k})=\max\left\{\left|{\boldsymbol c}_{l,m}({\boldsymbol k})\right|-\gamma_l,0\right\}\frac{{\boldsymbol c}_{l,m}({\boldsymbol k})}{|{\boldsymbol c}_{l,m}({\boldsymbol k})|}
\end{align*}
for ${\boldsymbol k}\in{\mathbb O}$, $l=1,\ldots,M_2$, and $m=1,\ldots,2^k$, with the convention that $0/0=0$.
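The componentwise rule above can be sketched as follows (hypothetical NumPy code acting on a flat complex array; in the algorithm it is applied to each component ${\boldsymbol c}_{l,m}({\boldsymbol k})$ with the weight $\gamma_l$ of its filter):

```python
import numpy as np

def soft_threshold(c, gamma):
    """Complex soft thresholding max(|c| - gamma, 0) * c / |c|, with 0/0 = 0."""
    mag = np.abs(c)
    safe = np.where(mag > 0, mag, 1.0)              # avoid dividing by zero
    return np.maximum(mag - gamma, 0.0) / safe * c  # zero entries stay zero

c = np.array([3 + 4j, 0.5j, 0.0])
print(np.allclose(soft_threshold(c, 1.0), [2.4 + 3.2j, 0, 0]))  # True
```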
\section{Numerical results}\label{Experiments}
In this section, we conduct numerical simulations in the context of restoration from partial Fourier samples as a proof-of-concept study. Specifically, to compare the performance of the direct rank minimization \cref{RankMinimization} and the wavelet frame relaxation \cref{ProposedModel}, we compare the relaxation model
\begin{align}\label{ProposedCSMRI}
\min_{\boldsymbol{v},\boldsymbol{q}}\frac{1}{2}\left\|{\boldsymbol \mR}_{{\mathbb M}}\boldsymbol{v}-{\boldsymbol f}\right\|_2^2+\left\|\boldsymbol{\gamma}_1\cdot{\boldsymbol \mW}_1\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right\|_1+\left\|\boldsymbol{\gamma}_2\cdot{\boldsymbol \mW}_2\left({\boldsymbol \mE}\boldsymbol{q}\right)\right\|_1
\end{align}
solved by \cref{Alg1} with the following Schatten $0$-norm relaxation \cite{M.Fornasier2011,Y.Hu2019,K.Mohan2012,G.Ongie2017} (SLRM model) of \cref{RankMinimization}:
\begin{align}\label{LRHTGVIRLSCSMRI}
\min_{\boldsymbol{v},\boldsymbol{q}}\frac{1}{2}\left\|{\boldsymbol \mR}_{{\mathbb M}}\boldsymbol{v}-{\boldsymbol f}\right\|_2^2&+\gamma_1\left\|{\boldsymbol \mH}\left({\boldsymbol \mD}\boldsymbol{v}-\boldsymbol{q}\right)\right\|_0+\gamma_2\left\|{\boldsymbol \mH}\left({\boldsymbol \mE}\boldsymbol{q}\right)\right\|_0,
\end{align}
where ${\boldsymbol \mR}_{{\mathbb M}}$ denotes a restriction onto the known sample grid ${\mathbb M}$. In addition, since it has been demonstrated in \cite{Y.Hu2019} that the GSLR framework outperforms the structured low rank matrix approaches which consider either the first order derivatives or the second order derivatives alone, we also compare with the following GSLR model in \cite{Y.Hu2019}:
\begin{align}\label{LRHInfConvIRLSCSMRI}
\min_{\boldsymbol{v}_1,\boldsymbol{v}_2}\frac{1}{2}\left\|{\boldsymbol \mR}_{{\mathbb M}}\left(\boldsymbol{v}_1+\boldsymbol{v}_2\right)-{\boldsymbol f}\right\|_2^2&+\gamma_1\left\|{\boldsymbol \mH}\left({\boldsymbol \mD}_1\boldsymbol{v}_1\right)\right\|_0+\gamma_2\left\|{\boldsymbol \mH}\left({\boldsymbol \mD}_2\boldsymbol{v}_2\right)\right\|_0.
\end{align}
In \cref{LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}, $\left\|{\boldsymbol Z}\right\|_0$ is the Schatten $0$-norm of a matrix ${\boldsymbol Z}$ defined as
\begin{align*}
\left\|{\boldsymbol Z}\right\|_0=\ln\det\left(\left({\boldsymbol Z}^*{\boldsymbol Z}\right)^{1/2}+\varepsilon{\boldsymbol I}\right)
\end{align*}
with a small constant $\varepsilon>0$, and \cref{LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI} are solved by the iterative reweighted least squares method described in \cite{Y.Hu2019}.
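Since the eigenvalues of $\left({\boldsymbol Z}^*{\boldsymbol Z}\right)^{1/2}$ are the singular values of ${\boldsymbol Z}$ (padded with zeros up to the number of columns), the surrogate equals $\sum_i\ln\left(\sigma_i({\boldsymbol Z})+\varepsilon\right)$. A minimal sketch (hypothetical NumPy code):

```python
import numpy as np

def schatten0(Z, eps=1e-3):
    """Smoothed rank surrogate ln det((Z^* Z)^{1/2} + eps*I) = sum_i ln(sigma_i + eps),
    summed over all Z.shape[1] eigenvalues of (Z^* Z)^{1/2}."""
    s = np.zeros(Z.shape[1])
    sv = np.linalg.svd(Z, compute_uv=False)
    s[: sv.size] = sv
    return float(np.sum(np.log(s + eps)))

# Sanity check on a matrix with prescribed singular values.
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((20, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
s = np.array([5.0, 3.0, 1.0, 0.5, 0.2, 0.1])
Z = U @ np.diag(s) @ V.T
print(np.isclose(schatten0(Z), np.sum(np.log(s + 1e-3))))  # True
```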
Finally, to further study the improvements over the conventional on-the-grid approaches, we compare with the piecewise linear framelet (Fra) model (e.g. \cite{J.F.Cai2009/10})
\begin{align}\label{FrameCSMRI}
\min_{\boldsymbol{u}}\frac{1}{2}\left\|{\boldsymbol \mR}_{{\mathbb M}}{\boldsymbol \msF}\boldsymbol{u}-{\boldsymbol f}\right\|_2^2+\left\|\boldsymbol{\gamma}\cdot{\boldsymbol \mW}\boldsymbol{u}\right\|_1,
\end{align}
the TGV model \cite{K.Bredies2010,F.Knoll2011}
\begin{align}\label{TGVCSMRI}
\min_{\boldsymbol{u},\boldsymbol{p}}\frac{1}{2}\left\|{\boldsymbol \mR}_{{\mathbb M}}{\boldsymbol \msF}\boldsymbol{u}-{\boldsymbol f}\right\|_2^2+\gamma_1\left\|\boldsymbol \nabla\boldsymbol{u}-\boldsymbol{p}\right\|_1+\gamma_2\left\|\boldsymbol \nabla_s\boldsymbol{p}\right\|_1,
\end{align}
and the inf-convolution (IF) model \cite{A.Chambolle1997}
\begin{align}\label{InfConvCSMRI}
\min_{\boldsymbol{u}_1,\boldsymbol{u}_2}\frac{1}{2}\left\|{\boldsymbol \mR}_{{\mathbb M}}{\boldsymbol \msF}\left(\boldsymbol{u}_1+\boldsymbol{u}_2\right)-{\boldsymbol f}\right\|_2^2+\gamma_1\left\|\boldsymbol \nabla\boldsymbol{u}_1\right\|_1+\gamma_2\left\|\boldsymbol \nabla^2\boldsymbol{u}_2\right\|_1,
\end{align}
where ${\boldsymbol \msF}$ denotes the two dimensional discrete Fourier transform. Throughout this paper, we use the split Bregman algorithm (e.g. \cite{J.F.Cai2009/10,T.Goldstein2009,W.Guo2014}) to solve \cref{FrameCSMRI,TGVCSMRI,InfConvCSMRI}. All experiments are implemented in MATLAB $\mathrm{R}2014\mathrm{a}$ running on a laptop with $64\mathrm{GB}$ RAM and an Intel(R) Core(TM) $\mathrm{i}7$-$8750\mathrm{H}$ CPU at $2.20\mathrm{GHz}$ with $6$ cores.
Throughout the experiments, we test two synthetic images (``Ellipses'' and ``Rectangles'') and two natural images (``Airplane'' and ``Car''), all taking values in $[0,1]$, as shown in \cref{OriginalImages}. The data ${\boldsymbol f}$ is synthesized by randomly sampling $20\%$ of the $256\times256$ Fourier samples via the variable density sampling method described in \cite{M.Lustig2007}. Complex white Gaussian noise with standard deviation $1$ is also added to generate a noisy partial sampling. For \cref{ProposedCSMRI,LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}, we use a $K\times K$ square patch for simplicity. Specifically, we choose $K=31$ for ``Ellipses'', $K=25$ for ``Rectangles'', and $K=51$ for ``Airplane'' and ``Car'', depending on the geometry of the target images. More precisely, in \cref{ProposedCSMRI,LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}, we use the discrete Fourier transforms of the images restored by the TGV model \cref{TGVCSMRI} and the inf-convolution model \cref{InfConvCSMRI}, respectively, to compute the SVDs of the multi-fold Hankel matrices with $K\times K$ patches, which are used for the ${\boldsymbol \mW}_k$'s in \cref{ProposedCSMRI} and as initializations in \cref{LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}. For \cref{FrameCSMRI}, we choose ${\boldsymbol \mW}$ to be the undecimated tensor product piecewise linear B-spline framelet transform with $1$ level of decomposition \cite{B.Dong2013}. For \cref{TGVCSMRI,InfConvCSMRI}, we use the forward difference with the periodic boundary condition for the difference operators $\boldsymbol \nabla$, $\boldsymbol \nabla_s$, and $\boldsymbol \nabla^2$. In all of the experiments, we have manually tuned the regularization parameters to achieve the optimal restoration results in each scenario. For the quantitative comparison, we compute the signal-to-noise ratio (SNR), the high frequency error norm (HFEN) \cite{S.Ravishankar2011}, and the structural similarity index (SSIM) \cite{Z.Wang2004}.
Note that for \cref{ProposedCSMRI,LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}, the restored image is computed via the inverse DFT of the restored Fourier samples.
\begin{figure}[t]
\centering
\subfloat[Ellipses]{\label{Logan}\includegraphics[width=3.50cm]{LoganOriginal.pdf}}\hspace{0.001cm}
\subfloat[Rectangles]{\label{Rectangle}\includegraphics[width=3.50cm]{RectangleOriginal.pdf}}\hspace{0.001cm}
\subfloat[Airplane]{\label{Airplane}\includegraphics[width=3.50cm]{AirplaneOriginal.pdf}}\hspace{0.001cm}
\subfloat[Car]{\label{Car}\includegraphics[width=3.50cm]{CarOriginal.pdf}}\vspace{-0.25cm}\\
\subfloat[Ellipses-observed]{\label{LoganZeroPad}\includegraphics[width=3.50cm]{LoganZeroPad.pdf}}\hspace{0.001cm}
\subfloat[Rectangles-observed]{\label{RectangleZeroPad}\includegraphics[width=3.50cm]{RectangleZeroPad.pdf}}\hspace{0.001cm}
\subfloat[Airplane-observed]{\label{AirplaneZeroPad}\includegraphics[width=3.50cm]{AirplaneZeroPad.pdf}}\hspace{0.001cm}
\subfloat[Car-observed]{\label{CarZeroPad}\includegraphics[width=3.50cm]{CarZeroPad.pdf}}
\caption{Visualization of test images and observed images. All images are displayed in the window level $[0,1]$ for fair comparison.}\label{OriginalImages}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Images&Indices&Zero fill&Model \cref{ProposedCSMRI}&SLRM \cref{LRHTGVIRLSCSMRI}&GSLR \cref{LRHInfConvIRLSCSMRI}&Fra \cref{FrameCSMRI}&TGV \cref{TGVCSMRI}&IF \cref{InfConvCSMRI}\\ \hline
\multirow{3}{*}{Ellipses}&SNR&$13.75$&$\textbf{36.13}$&$32.18$&$30.93$&$27.41$&$29.67$&$28.36$\\ \cline{2-9}
&HFEN&$0.7746$&$\textbf{0.0375}$&$0.0713$&$0.0789$&$0.1533$&$0.1029$&$0.1351$\\ \cline{2-9}
&SSIM&$0.4162$&$\textbf{0.9943}$&$0.9784$&$0.9746$&$0.8966$&$0.9547$&$0.9335$\\ \hline
\multirow{3}{*}{Rectangles}&SNR&$15.68$&$\textbf{35.00}$&$30.74$&$30.42$&$24.30$&$29.28$&$27.43$\\ \cline{2-9}
&HFEN&$0.7914$&$\textbf{0.0876}$&$0.1456$&$0.1770$&$0.4444$&$0.1878$&$0.2366$\\ \cline{2-9}
&SSIM&$0.6130$&$\textbf{0.9803}$&$\textbf{0.9815}$&$0.9742$&$0.9391$&$0.9552$&$0.9211$\\ \hline
\multirow{3}{*}{Airplane}&SNR&$21.55$&$\textbf{36.23}$&$33.84$&$33.12$&$31.24$&$33.40$&$32.92$\\ \cline{2-9}
&HFEN&$0.4402$&$\textbf{0.0530}$&$0.0724$&$0.0753$&$0.1135$&$0.0803$&$0.0889$\\ \cline{2-9}
&SSIM&$0.7386$&$\textbf{0.9822}$&$0.9706$&$0.9693$&$0.9564$&$0.9669$&$0.9638$\\ \hline
\multirow{3}{*}{Car}&SNR&$15.88$&$\textbf{27.42}$&$25.88$&$24.81$&$22.72$&$25.69$&$25.16$\\ \cline{2-9}
&HFEN&$0.5528$&$\textbf{0.1057}$&$0.1447$&$0.1460$&$0.2290$&$0.1530$&$0.1669$\\ \cline{2-9}
&SSIM&$0.5627$&$\textbf{0.9569}$&$0.9291$&$0.9282$&$0.8761$&$0.9218$&$0.9112$\\ \hline
\end{tabular}
\caption{Comparison of SNRs, HFENs, and SSIMs.}\label{TableResults}
\end{table}
\cref{TableResults} summarizes the SNR, HFEN, and SSIM of the aforementioned approaches. \cref{LoganResults,RectangleResults,AirplaneResults,CarResults} display the visual comparisons (first row), together with the zoom-in views (second row) and the error maps (third row), of \cref{ProposedCSMRI} against \cref{FrameCSMRI,TGVCSMRI,InfConvCSMRI,LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}. Throughout this paper, all restored images are displayed in the window level $[0,1]$, and all error maps in the window level $[0,0.2]$, for fair comparison. We can see that the relaxed model \cref{ProposedCSMRI} consistently performs best, and the SLRM model \cref{LRHTGVIRLSCSMRI} performs second best in almost every index in all scenarios; the improvements are visually observable as well. Both observations demonstrate that our proposed SLRM framework performs well for piecewise smooth image restoration.
At first glance, since the models \cref{ProposedCSMRI,LRHTGVIRLSCSMRI} can be regarded as different relaxations of the structured low rank matrix framework for continuous domain regularization, the results also suggest that the off-the-grid regularization performs better than the on-the-grid regularization, as it reduces the basis mismatch between the true support (or the true singularity) in continuum and the discrete grid, leading to the improvements in the indices. In fact, due to such a basis mismatch, the conventional on-the-grid approaches (\cref{FrameCSMRI,TGVCSMRI,InfConvCSMRI}) suffer from errors concentrated near the image singularities, as well as distorted shapes compared to \cref{ProposedCSMRI}, as can be seen in the error maps and the zoom-in views of \cref{LoganResults,RectangleResults,AirplaneResults,CarResults}.
Most importantly, the numerical results illustrate the following. First, from the comparison between \cref{LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}, we can see that the proposed SLRM framework performs better than the GSLR framework in \cite{Y.Hu2019} for piecewise smooth image restoration. Second, from the comparison between \cref{ProposedCSMRI,LRHTGVIRLSCSMRI}, we can further see that the relaxation into the wavelet frame analysis approach achieves further improvements over the direct rank minimization. As previously mentioned, since the proposed SLRM framework takes the GSLR framework in \cite{Y.Hu2019} as a special case, we can restore a broader range of piecewise smooth images. In addition, it should be noted that the annihilation of the Fourier samples of the second order derivatives in general requires a larger annihilating filter than that of the first order derivatives \cite{G.Ongie2015a}, which may degrade the low rank structure of the corresponding multi-fold Hankel matrix. As a consequence, the GSLR framework can introduce staircase artifacts in the smooth regions compared to the proposed SLRM framework, as can be seen in the zoom-in views of \cref{LoganResults,RectangleResults,AirplaneResults,CarResults}. Finally, since the weights corresponding to the derivatives in the image domain amplify the noise in the high frequencies, the direct rank minimization approaches (\cref{LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}) are likely to produce artifacts near the image singularities, which correspond to the high frequency components, whereas the relaxation into the wavelet frame analysis approach (\cref{ProposedCSMRI}) achieves a denoising effect in spite of the amplified noise, leading to better restoration results with fewer artifacts near the edges, as can be seen in the error maps in \cref{LoganResults,RectangleResults,AirplaneResults,CarResults}.
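The low rank property underlying these comparisons can be illustrated in one dimension: the DFT of a sparse stream of Diracs (e.g., the derivative of a piecewise constant signal) is a sum of complex exponentials in the frequency index, so a Hankel matrix of sliding windows of those samples has rank equal to the number of Diracs. The toy sketch below is our own illustration with arbitrary sizes $N$ and $K$, not the multi-fold 2-D construction used in the experiments.

```python
import numpy as np

def hankel_from_spectrum(fhat, K):
    """(N-K+1) x K Hankel-structured matrix of sliding length-K windows
    of the spectrum; its null space contains the annihilating filter."""
    N = len(fhat)
    return np.array([fhat[i:i + K] for i in range(N - K + 1)])

# The DFT of r Diracs is a sum of r complex exponentials in the frequency
# index, so the Hankel matrix of those samples has rank r.
N, K = 64, 8
spikes = np.zeros(N)
spikes[5], spikes[20] = 1.0, -0.7   # e.g., gradient of a piecewise constant signal
H = hankel_from_spectrum(np.fft.fft(spikes), K)
```

Increasing the number of Diracs raises the rank accordingly, which is why signals with richer singularity sets call for larger filter sizes $K$.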
\begin{figure}[t]
\centering
\subfloat[]{\label{LoganOriginal}\includegraphics[width=2.25cm]{LoganOriginal.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHTGV}\includegraphics[width=2.25cm]{LoganLRHTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHTGVIRLS}\includegraphics[width=2.25cm]{LoganLRHTGVIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHInfConvIRLS}\includegraphics[width=2.25cm]{LoganLRHInfConvIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganFra}\includegraphics[width=2.25cm]{LoganFrame.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganTGV}\includegraphics[width=2.25cm]{LoganTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganInfConv}\includegraphics[width=2.25cm]{LoganInfConv.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{LoganOriginalZoom}\includegraphics[width=2.25cm]{LoganOriginalZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHTGVZoom}\includegraphics[width=2.25cm]{LoganLRHTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHTGVIRLSZoom}\includegraphics[width=2.25cm]{LoganLRHTGVIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHInfConvIRLSZoom}\includegraphics[width=2.25cm]{LoganLRHInfConvIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganFraZoom}\includegraphics[width=2.25cm]{LoganFrameZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganTGVZoom}\includegraphics[width=2.25cm]{LoganTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganInfConvZoom}\includegraphics[width=2.25cm]{LoganInfConvZoom.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{LoganMask}\includegraphics[width=2.25cm]{LoganMask.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHTGVError}\includegraphics[width=2.25cm]{LoganLRHTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHTGVIRLSError}\includegraphics[width=2.25cm]{LoganLRHTGVIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganLRHInfConvIRLSError}\includegraphics[width=2.25cm]{LoganLRHInfConvIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganFraError}\includegraphics[width=2.25cm]{LoganFrameError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganTGVError}\includegraphics[width=2.25cm]{LoganTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{LoganInfConvError}\includegraphics[width=2.25cm]{LoganInfConvError.pdf}}
\caption{Visual comparisons for ``Ellipses''. \cref{LoganOriginal}: original image, \cref{LoganLRHTGV}: model \cref{ProposedCSMRI}, \cref{LoganLRHTGVIRLS}: SLRM \cref{LRHTGVIRLSCSMRI}, \cref{LoganLRHInfConvIRLS}: GSLR \cref{LRHInfConvIRLSCSMRI}, \cref{LoganFra}: framelet \cref{FrameCSMRI}, \cref{LoganTGV}: TGV \cref{TGVCSMRI}, \cref{LoganInfConv}: infimal convolution \cref{InfConvCSMRI}. \cref{LoganOriginalZoom,LoganLRHTGVZoom,LoganLRHTGVIRLSZoom,LoganLRHInfConvIRLSZoom,LoganFraZoom,LoganTGVZoom,LoganInfConvZoom}: zoom-in views of \cref{LoganOriginal,LoganLRHTGV,LoganLRHTGVIRLS,LoganLRHInfConvIRLS,LoganFra,LoganTGV,LoganInfConv}. Yellow arrows indicate the region worth noting for comparisons among \cref{ProposedCSMRI,LRHTGVIRLSCSMRI,LRHInfConvIRLSCSMRI}, and red arrows indicate the region worth noting for comparisons among \cref{ProposedCSMRI,FrameCSMRI,TGVCSMRI,InfConvCSMRI}. \cref{LoganMask}: sample region, \cref{LoganLRHTGVError,LoganLRHTGVIRLSError,LoganLRHInfConvIRLSError,LoganFraError,LoganTGVError,LoganInfConvError}: error maps of \cref{LoganLRHTGV,LoganLRHTGVIRLS,LoganLRHInfConvIRLS,LoganFra,LoganTGV,LoganInfConv}.}\label{LoganResults}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[]{\label{RectangleOriginal}\includegraphics[width=2.25cm]{RectangleOriginal.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHTGV}\includegraphics[width=2.25cm]{RectangleLRHTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHTGVIRLS}\includegraphics[width=2.25cm]{RectangleLRHTGVIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHInfConvIRLS}\includegraphics[width=2.25cm]{RectangleLRHInfConvIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleFra}\includegraphics[width=2.25cm]{RectangleFrame.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleTGV}\includegraphics[width=2.25cm]{RectangleTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleInfConv}\includegraphics[width=2.25cm]{RectangleInfConv.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{RectangleOriginalZoom}\includegraphics[width=2.25cm]{RectangleOriginalZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHTGVZoom}\includegraphics[width=2.25cm]{RectangleLRHTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHTGVIRLSZoom}\includegraphics[width=2.25cm]{RectangleLRHTGVIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHInfConvIRLSZoom}\includegraphics[width=2.25cm]{RectangleLRHInfConvIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleFraZoom}\includegraphics[width=2.25cm]{RectangleFrameZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleTGVZoom}\includegraphics[width=2.25cm]{RectangleTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleInfConvZoom}\includegraphics[width=2.25cm]{RectangleInfConvZoom.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{RectangleMask}\includegraphics[width=2.25cm]{RectangleMask.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHTGVError}\includegraphics[width=2.25cm]{RectangleLRHTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHTGVIRLSError}\includegraphics[width=2.25cm]{RectangleLRHTGVIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleLRHInfConvIRLSError}\includegraphics[width=2.25cm]{RectangleLRHInfConvIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleFraError}\includegraphics[width=2.25cm]{RectangleFrameError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleTGVError}\includegraphics[width=2.25cm]{RectangleTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{RectangleInfConvError}\includegraphics[width=2.25cm]{RectangleInfConvError.pdf}}
\caption{Visual comparisons for ``Rectangles''. \cref{RectangleOriginal}: original image, \cref{RectangleLRHTGV}: model \cref{ProposedCSMRI}, \cref{RectangleLRHTGVIRLS}: SLRM \cref{LRHTGVIRLSCSMRI}, \cref{RectangleLRHInfConvIRLS}: GSLR \cref{LRHInfConvIRLSCSMRI}, \cref{RectangleFra}: framelet \cref{FrameCSMRI}, \cref{RectangleTGV}: TGV \cref{TGVCSMRI}, \cref{RectangleInfConv}: infimal convolution \cref{InfConvCSMRI}. \cref{RectangleOriginalZoom,RectangleLRHTGVZoom,RectangleLRHTGVIRLSZoom,RectangleLRHInfConvIRLSZoom,RectangleFraZoom,RectangleTGVZoom,RectangleInfConvZoom}: zoom-in views of \cref{RectangleOriginal,RectangleLRHTGV,RectangleLRHTGVIRLS,RectangleLRHInfConvIRLS,RectangleFra,RectangleTGV,RectangleInfConv}. Red arrows indicate the region worth noting. \cref{RectangleMask}: sample region, \cref{RectangleLRHTGVError,RectangleLRHTGVIRLSError,RectangleLRHInfConvIRLSError,RectangleFraError,RectangleTGVError,RectangleInfConvError}: error maps of \cref{RectangleLRHTGV,RectangleLRHTGVIRLS,RectangleLRHInfConvIRLS,RectangleFra,RectangleTGV,RectangleInfConv}.}\label{RectangleResults}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[]{\label{AirplaneOriginal}\includegraphics[width=2.25cm]{AirplaneOriginal.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHTGV}\includegraphics[width=2.25cm]{AirplaneLRHTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHTGVIRLS}\includegraphics[width=2.25cm]{AirplaneLRHTGVIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHInfConvIRLS}\includegraphics[width=2.25cm]{AirplaneLRHInfConvIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneFra}\includegraphics[width=2.25cm]{AirplaneFra.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneTGV}\includegraphics[width=2.25cm]{AirplaneTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneInfConv}\includegraphics[width=2.25cm]{AirplaneInfConv.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{AirplaneOriginalZoom}\includegraphics[width=2.25cm]{AirplaneOriginalZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHTGVZoom}\includegraphics[width=2.25cm]{AirplaneLRHTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHTGVIRLSZoom}\includegraphics[width=2.25cm]{AirplaneLRHTGVIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHInfConvIRLSZoom}\includegraphics[width=2.25cm]{AirplaneLRHInfConvIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneFraZoom}\includegraphics[width=2.25cm]{AirplaneFraZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneTGVZoom}\includegraphics[width=2.25cm]{AirplaneTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneInfConvZoom}\includegraphics[width=2.25cm]{AirplaneInfConvZoom.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{AirplaneMask}\includegraphics[width=2.25cm]{AirplaneMask.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHTGVError}\includegraphics[width=2.25cm]{AirplaneLRHTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHTGVIRLSError}\includegraphics[width=2.25cm]{AirplaneLRHTGVIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneLRHInfConvIRLSError}\includegraphics[width=2.25cm]{AirplaneLRHInfConvIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneFraError}\includegraphics[width=2.25cm]{AirplaneFraError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneTGVError}\includegraphics[width=2.25cm]{AirplaneTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{AirplaneInfConvError}\includegraphics[width=2.25cm]{AirplaneInfConvError.pdf}}
\caption{Visual comparisons for ``Airplane''. \cref{AirplaneOriginal}: original image, \cref{AirplaneLRHTGV}: model \cref{ProposedCSMRI}, \cref{AirplaneLRHTGVIRLS}: SLRM \cref{LRHTGVIRLSCSMRI}, \cref{AirplaneLRHInfConvIRLS}: GSLR \cref{LRHInfConvIRLSCSMRI}, \cref{AirplaneFra}: framelet \cref{FrameCSMRI}, \cref{AirplaneTGV}: TGV \cref{TGVCSMRI}, \cref{AirplaneInfConv}: infimal convolution \cref{InfConvCSMRI}. \cref{AirplaneOriginalZoom,AirplaneLRHTGVZoom,AirplaneLRHTGVIRLSZoom,AirplaneLRHInfConvIRLSZoom,AirplaneFraZoom,AirplaneTGVZoom,AirplaneInfConvZoom}: zoom-in views of \cref{AirplaneOriginal,AirplaneLRHTGV,AirplaneLRHTGVIRLS,AirplaneLRHInfConvIRLS,AirplaneFra,AirplaneTGV,AirplaneInfConv}. Red arrows indicate the region worth noting. \cref{AirplaneMask}: sample region, \cref{AirplaneLRHTGVError,AirplaneLRHTGVIRLSError,AirplaneLRHInfConvIRLSError,AirplaneFraError,AirplaneTGVError,AirplaneInfConvError}: error maps of \cref{AirplaneLRHTGV,AirplaneLRHTGVIRLS,AirplaneLRHInfConvIRLS,AirplaneFra,AirplaneTGV,AirplaneInfConv}.}\label{AirplaneResults}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[]{\label{CarOriginal}\includegraphics[width=2.25cm]{CarOriginal.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHTGV}\includegraphics[width=2.25cm]{CarLRHTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHTGVIRLS}\includegraphics[width=2.25cm]{CarLRHTGVIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHInfConvIRLS}\includegraphics[width=2.25cm]{CarLRHInfConvIRLS.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarFra}\includegraphics[width=2.25cm]{CarFra.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarTGV}\includegraphics[width=2.25cm]{CarTGV.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarInfConv}\includegraphics[width=2.25cm]{CarInfConv.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{CarOriginalZoom}\includegraphics[width=2.25cm]{CarOriginalZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHTGVZoom}\includegraphics[width=2.25cm]{CarLRHTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHTGVIRLSZoom}\includegraphics[width=2.25cm]{CarLRHTGVIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHInfConvIRLSZoom}\includegraphics[width=2.25cm]{CarLRHInfConvIRLSZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarFraZoom}\includegraphics[width=2.25cm]{CarFraZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarTGVZoom}\includegraphics[width=2.25cm]{CarTGVZoom.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarInfConvZoom}\includegraphics[width=2.25cm]{CarInfConvZoom.pdf}}\vspace{-0.75em}\\
\subfloat[]{\label{CarMask}\includegraphics[width=2.25cm]{CarMask.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHTGVError}\includegraphics[width=2.25cm]{CarLRHTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHTGVIRLSError}\includegraphics[width=2.25cm]{CarLRHTGVIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarLRHInfConvIRLSError}\includegraphics[width=2.25cm]{CarLRHInfConvIRLSError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarFraError}\includegraphics[width=2.25cm]{CarFraError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarTGVError}\includegraphics[width=2.25cm]{CarTGVError.pdf}}\hspace{0.001cm}
\subfloat[]{\label{CarInfConvError}\includegraphics[width=2.25cm]{CarInfConvError.pdf}}
\caption{Visual comparisons for ``Car''. \cref{CarOriginal}: original image, \cref{CarLRHTGV}: model \cref{ProposedCSMRI}, \cref{CarLRHTGVIRLS}: SLRM \cref{LRHTGVIRLSCSMRI}, \cref{CarLRHInfConvIRLS}: GSLR \cref{LRHInfConvIRLSCSMRI}, \cref{CarFra}: framelet \cref{FrameCSMRI}, \cref{CarTGV}: TGV \cref{TGVCSMRI}, \cref{CarInfConv}: infimal convolution \cref{InfConvCSMRI}. \cref{CarOriginalZoom,CarLRHTGVZoom,CarLRHTGVIRLSZoom,CarLRHInfConvIRLSZoom,CarFraZoom,CarTGVZoom,CarInfConvZoom}: zoom-in views of \cref{CarOriginal,CarLRHTGV,CarLRHTGVIRLS,CarLRHInfConvIRLS,CarFra,CarTGV,CarInfConv}. Red arrows indicate the region worth noting. \cref{CarMask}: sample region, \cref{CarLRHTGVError,CarLRHTGVIRLSError,CarLRHInfConvIRLSError,CarFraError,CarTGVError,CarInfConvError}: error maps of \cref{CarLRHTGV,CarLRHTGVIRLS,CarLRHInfConvIRLS,CarFra,CarTGV,CarInfConv}.}\label{CarResults}
\end{figure}
\section{Conclusion and future directions}\label{Conclusion}
In this paper, we have introduced a new structured low rank matrix framework for piecewise smooth image restoration. Following the previous structured low rank matrix framework for piecewise constant images \cite{G.Ongie2016}, we assume that the image singularities lie in the zero level set of a band-limited periodic function. Inspired by the total generalized variation \cite{K.Bredies2010}, we derive the annihilation relation of the Fourier samples of the gradient and the (symmetric) gradient tensor, which in turn leads to low rank multi-fold Hankel matrices that balance the first order and the second order derivatives. In addition, as a by-product of the proposed structured low rank matrix framework, we introduce a wavelet frame based sparse regularization model for piecewise smooth image restoration via continuous domain regularization, built from the SVDs of the low rank multi-fold Hankel matrices. Finally, the numerical experiments show that the proposed wavelet frame based model outperforms the conventional on-the-grid approaches as well as the existing rank minimization approaches for piecewise smooth image restoration.
For future work, we plan to generalize the frameworks in \cite{G.Ongie2016} to piecewise smooth image restoration. More precisely, we need to develop sampling guarantees for the unique restoration of a continuous domain piecewise smooth image from its uniform low pass Fourier samples, based on the proposed annihilation relations. Unfortunately, it is not clear to us at this moment under what conditions we can restore the singularity set from uniform Fourier samples, as the annihilation relations involve a blind deconvolution problem due to the additional unknown vector field $p$ in \cref{Th1}. Nevertheless, this is definitely a future direction we would like to work on. We are also interested in proving the restoration guarantee of \cref{RankMinimization} (or its convex nuclear norm relaxation) given that ${\boldsymbol \mA}={\boldsymbol \mR}_{{\mathbb M}}$, i.e. the sample grid ${\mathbb M}$ is drawn from the image grid ${\mathbb O}$ uniformly at random, which would generalize the theoretical frameworks in \cite{G.Ongie2018} to piecewise smooth image restoration.
Finally, to broaden the scope of applications, the idea in this paper can likely be applied to various image restoration tasks such as blind deconvolution (especially in the frequency domain, which is related to the retinex \cite{E.H.Land1971}) and sparse angle CT. In fact, it is not clear at this moment how to extend this approach to sparse angle CT restoration, as the measurements can be viewed as Fourier samples on a discrete radial grid. Nevertheless, this is also an important future direction for broadening the scope of applications. In addition, observing the successive structure in our SLRM framework, we are also interested in developing a deep learning framework, motivated by the recent works on deep learning techniques combined with structured low rank matrix approaches \cite{M.Jacob2019,T.H.Kim2019,A.Pramanik2019a,A.Pramanik2019}.
\section{Introduction}
The Schr\"{o}dinger equation with fractional spatial dispersion was
originally derived for the wave function of particles moving by L\'{e}vy
flights, using the Feynman-integral formulation of fundamental quantum
mechanics \cite{Lask1,Lask2}. While experimental realization of fractional
quantum mechanics has not been reported yet, it was proposed to emulate it
in terms of classical photonics, using the commonly known similarity of the
Schr\"{o}dinger equations and equations for the paraxial diffraction of
optical beams \cite{EXP3,PROP}. A universal method for the emulation of the
fractional diffraction is to use the basic 4\textit{f} configuration, which
makes it possible to perform the spatial Fourier transform of the beam,
apply the phase shift, which is tantamount to the action of the fractional
diffraction, by means of an appropriate phase plate, and finally transform
the beam back from the Fourier space \cite{EXP3}. In addition to that,
implementations of the fractional Schr\"{o}dinger equations were proposed in
L\'{e}vy crystals \cite{EXP1} and polariton condensates \cite{EXP2}.
Theoretical studies initiated by the above-mentioned scheme were developed
in various directions, including the interplay of the fractional diffraction
with parity-time ($\mathcal{PT}$) symmetric potentials \cite{PTS}-\cite%
{ghosts}, propagation of Airy waves in the fractional geometry \cite%
{Yingji1,Yingji2}, and adding the natural Kerr nonlinearity to the
underlying setting, thus introducing fractional nonlinear Schr\"{o}dinger
equations (FNLSEs). The work with nonlinear models has produced many
predictions, such as the modulational instability of continuous waves (CWs)
\cite{Conti} and diverse types of optical solitons \cite{Jorge1}-\cite%
{review}. These are quasi-linear \textquotedblleft accessible solitons\textquotedblright\ \cite%
{Frac1,Frac2}, gap solitons maintained by lattice potentials \cite{Frac5a}-%
\cite{Frac5}, self-trapped vortices \cite{Frac6,Frac7}, multi-peak \cite%
{Frac8}-\cite{Frac11} and cluster \cite{Frac12} modes, fractional solitons
in discrete systems \cite{Frac13}, localized states featuring spontaneously
broken symmetry \cite{Frac15,Frac16,Frac17}, solitons in dual-core couplers
\cite{Frac18,Frac19}, solitary states supported by the quadratic
nonlinearity \cite{Thirouin,quadratic}, and dark modes \cite{we}. Also
studied were dissipative solitons in the fractional version of the complex
Ginzburg-Landau equation \cite{Frac14}. Many of these results were reviewed
in Ref. \cite{review}.
The objective of the present work is to introduce one-dimensional settings
for binary immiscible fields under the action of the fractional diffraction.
The immiscibility naturally gives rise to stable patterns in the form of
domain walls (DW), alias grain boundaries, which separate half-infinite
domains filled by the immiscible field components. In areas of traditional
physical phenomenology, DWs are well known as basic patterns in thermal
convection \cite{Manneville}-\cite{PLA}. Grain boundaries of a different
physical origin occur in various condensed-matter settings \cite{grain1}-%
\cite{grain3}. In optics, DWs were predicted and experimentally observed in
bimodal light propagation in fibers \cite{optical-DW,optical-DW2}. Similar
states were predicted in binary Bose-Einstein condensates (BECs), provided
that the inter-component repulsion is stronger than the self-repulsion of
each component, which provides for the immiscibility \cite{Mineev}-\cite%
{BEC-DW}.
The interplay of the two-component immiscibility, that maintains DWs, with
fractional diffraction may naturally appear in optics, considering the
fractional bimodal propagation of light in a self-defocusing spatial
waveguide. A similar model, based on a system of fractional Gross-Pitaevskii
equations (FGPEs) \cite{review}, may also naturally emerge in a binary BEC
composed of repulsively interaction particles which move by L\'{e}vy
flights. We construct DW solutions for coupled FNLSEs and verify their
stability by means of numerical methods. Some results -- in particular,
scaling relations which determine the DW's width as a function of basic
parameters of the system -- are obtained in an analytical form.
The paper is organized as follows. The model is formulated in Section 2,
which also includes analytical expressions for CW, i.e., spatially uniform
states, that may be linked by DW patterns, thus supporting them. Analytical
results for the DWs are collected in Section 3. Numerical results are
reported in Section 4, and the paper is concluded by Section 5.
\section{The model and CW states}
\subsection{Basic equations}
In terms of the optical bimodal propagation in the spatial domain, the
scaled system of coupled FNLSEs for amplitudes of copropagating
electromagnetic waves $u\left( x,z\right) $ and $v\left( x,z\right) $ with
orthogonal polarizations is
\begin{eqnarray}
i\frac{\partial u}{\partial z} &=&\frac{1}{2}\left( -\frac{\partial ^{2}}{%
\partial x^{2}}\right) ^{\alpha /2}u+(|u|^{2}+\beta |v|^{2})u-\lambda v,
\notag \\
i\frac{\partial v}{\partial z} &=&\frac{1}{2}\left( -\frac{\partial ^{2}}{%
\partial x^{2}}\right) ^{\alpha /2}v+(|v|^{2}+\beta |u|^{2})v-\lambda u,
\label{system}
\end{eqnarray}%
where $z$ is the propagation distance, $x$ is the transverse coordinate, and
the cubic terms, with normalized coefficients $1$ and $\beta >0$, represent,
respectively, the defocusing nonlinearity of the self-phase-modulation (SPM)
and cross-phase-modulation (XPM) types. The optical self-defocusing occurs,
in particular, in semiconductor waveguides \cite{semi}. In the BEC model,
the SPM and XPM terms represent repulsive interactions between two atomic
states in the binary condensate. In the latter case, the system of scaled
FGPEs is written in the form of Eq. (\ref{system}), with $z$ replaced by the
temporal variable, $t$.
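For readers who wish to simulate Eq. (\ref{system}), a standard split-step Fourier scheme applies: the linear substep (fractional diffraction plus the $\lambda $-coupling) diagonalizes exactly in Fourier space through the combinations $u\pm v$, while the nonlinear substep is an exact phase rotation. The following is a minimal sketch; the grid size, step size, and function names are our illustrative choices, not taken from the paper.

```python
import numpy as np

N, L = 128, 16 * np.pi
kgrid = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # transverse wavenumbers

def split_step(u, v, alpha, beta, lam, dz):
    # Linear substep, exact in Fourier space: for s = u+v and d = u-v,
    # i s_z = (1/2)|k|^alpha s - lam*s  and  i d_z = (1/2)|k|^alpha d + lam*d.
    frac = 0.5 * np.abs(kgrid) ** alpha
    s, d = np.fft.fft(u + v), np.fft.fft(u - v)
    s *= np.exp(-1j * (frac - lam) * dz)
    d *= np.exp(-1j * (frac + lam) * dz)
    u, v = np.fft.ifft(s + d) / 2, np.fft.ifft(s - d) / 2
    # Nonlinear substep: exact SPM/XPM phase rotation.
    pu = np.exp(-1j * (np.abs(u) ** 2 + beta * np.abs(v) ** 2) * dz)
    pv = np.exp(-1j * (np.abs(v) ** 2 + beta * np.abs(u) ** 2) * dz)
    return u * pu, v * pv
```

A spatially uniform symmetric state $u=v=A$ is an exact CW solution of Eq. (\ref{system}), so both substeps reduce to pure phase rotations and $|u|$, $|v|$ are preserved, which provides a quick sanity check of the scheme.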
In optics, two natural values of the XPM coefficient are $\beta =2$ for
components $u$ and $v$ representing circular polarizations of light, or $%
\beta =2/3$ in the case of linear polarizations \cite{Agrawal}. The value of
$\beta $ may be varied in broader limits (in particular, the case of $\beta
=3$ plays an essential role below) in photonic crystals \cite%
{PhotCryst,PhotCryst2}. In binary BEC, the effective XPM coefficient can be
readily adjusted by means of the Feshbach resonance \cite{Inguscio,Feshbach}.
In the case of orthogonal linear polarizations in optics [corresponding to $%
\beta =2/3$ in Eq. (\ref{system})], the nonlinear interaction between the
components includes, in addition to the XPM terms, also the four-wave mixing
(FWM), represented by terms $\left( 1/3\right) v^{2}u^{\ast }$ and $\left(
1/3\right) u^{2}v^{\ast }$ in FNLSEs (\ref{system}) for $u$ and $v$ (where $%
\ast $ stands for complex conjugate), although these terms are usually
suppressed by the phase-velocity-birefringence effect \cite{Agrawal}. In any
case, the FWM terms appearing in the optical system with orthogonal linear
polarizations are not relevant in the present context, as the condition of
the immiscibility of the two components holds only for $\beta >1$ [see Eq. (%
\ref{max}) below], eliminating the case of $\beta =2/3$. The optical system
with orthogonal circular polarizations corresponds, as said above, to $\beta
=2$, which admits the immiscibility, but the FWM terms do not appear in the
latter case. Normally, they do not appear either in the BEC model based on
the system of coupled FGPEs, therefore FWM terms are not considered here.
The linear-coupling terms with coefficient $\lambda \geq 0$ in Eq. (\ref%
{system}) account for mixing between the optical modes, or between the two
atomic states in BEC. In the former case, the linear mixing between circular
polarizations may be imposed by the birefringence \cite{Agrawal}, and in the
latter case the mutual conversion of atomic states in BEC\ may be driven by
resonant radiofrequency radiation \cite{radio}.
The fractional-diffraction operator with a positive L\'{e}vy index (LI) $%
\alpha $ is defined as the \textit{Riesz derivative }\cite%
{Agrawal-Riesz,Baleanu,Riesz},
\begin{gather}
\left( -\frac{\partial ^{2}}{\partial x^{2}}\right) ^{\alpha /2}u(x)\equiv
\notag \\
\frac{1}{2\pi }\int_{-\infty }^{+\infty }|p|^{\alpha }dp\int_{-\infty
}^{+\infty }d\xi \,e^{ip(x-\xi )}u(\xi )\equiv \frac{1}{\pi }\int_{0}^{+\infty
}p^{\alpha }dp\int_{-\infty }^{+\infty }d\xi \cos \left( p(x-\xi )\right)
u(\xi ), \label{Riesz}
\end{gather}%
which is built as the juxtaposition of the direct and inverse Fourier
transform, with the fractional diffraction acting at the intermediate stage.
While there are different definitions of fractional derivatives, this one
naturally appears in quantum mechanics \cite{Lask1,Lask2} and optics \cite%
{EXP3}. Normally, the LI takes values $1<\alpha \leq 2$, but, in the case of
the self-defocusing sign of the nonlinearity, when the system is not subject
to the wave collapse (implosion driven by self-attraction), it is also
possible to consider values $0<\alpha \leq 1$. The usual (non-fractional)
diffraction naturally corresponds to $\alpha =2$ in Eq. (\ref{system}).
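On a periodic computational grid, definition (\ref{Riesz}) amounts to a Fourier multiplier: transform, multiply by $|p|^{\alpha }$, transform back. A minimal sketch of this (our own discretization, with arbitrary grid parameters) is:

```python
import numpy as np

def riesz(u, L, alpha):
    """Apply (-d^2/dx^2)^(alpha/2) on a periodic grid of length L, per
    Eq. (Riesz): FFT, multiply by |p|^alpha, inverse FFT."""
    p = 2 * np.pi * np.fft.fftfreq(len(u), d=L / len(u))
    return np.fft.ifft(np.abs(p) ** alpha * np.fft.fft(u))
```

For $\alpha =2$ the multiplier is $p^{2}$ and the operator reduces to the usual $-\partial _{x}^{2}$; for a pure harmonic $\sin (mx)$ on $[0,2\pi )$ it simply multiplies by $|m|^{\alpha }$.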
Stationary solutions to Eqs. (\ref{system}) with propagation constant $k<0$
are looked for as%
\begin{equation}
\left\{ u\left( x,z\right) ,v\left( x,z\right) \right\} =e^{ikz}\left\{
U(x),V(x)\right\} , \label{uv}
\end{equation}%
where $U(x)$ and $V(x)$ are real functions which satisfy the following
system of equations:%
\begin{eqnarray}
kU+\frac{1}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right) ^{\alpha
/2}U+(U^{2}+\beta V^{2})U-\lambda V &=&0, \notag \\
kV+\frac{1}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right) ^{\alpha
/2}V+(V^{2}+\beta U^{2})V-\lambda U &=&0. \label{UV}
\end{eqnarray}%
The energy (Hamiltonian) of the stationary state (\ref{UV}) with the Riesz
derivatives defined as per Eq. (\ref{Riesz}) is%
\begin{gather}
E=\frac{1}{4\pi }\int_{-\infty }^{+\infty }dx\int_{0}^{+\infty }p^{\alpha
}dp\int_{-\infty }^{+\infty }d\xi \cos \left( p(x-\xi )\right) \left[
U(x)U(\xi )+V(x)V(\xi )\right] \notag \\
+\int_{-\infty }^{+\infty }dx\left[ \frac{1}{4}\left( U^{4}+V^{4}+2\beta
U^{2}V^{2}\right) -\lambda UV\right] . \label{E}
\end{gather}
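On a periodic grid, the double integral in the first term of Eq. (\ref{E}) collapses, by Parseval's theorem, to $\frac{1}{4}\int \left[ U\left( -\partial _{x}^{2}\right) ^{\alpha /2}U+V\left( -\partial _{x}^{2}\right) ^{\alpha /2}V\right] dx$, which is convenient for monitoring $E$ numerically. A sketch with our own discretization conventions:

```python
import numpy as np

def energy(U, V, L, alpha, beta, lam):
    """Discrete version of Eq. (E) on a periodic grid of length L: the kinetic
    double integral is evaluated spectrally via the Riesz multiplier |p|^alpha."""
    N = len(U)
    dx = L / N
    p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    ri = lambda f: np.fft.ifft(np.abs(p) ** alpha * np.fft.fft(f)).real
    kin = 0.25 * dx * np.sum(U * ri(U) + V * ri(V))
    pot = dx * np.sum(0.25 * (U**4 + V**4 + 2 * beta * U**2 * V**2) - lam * U * V)
    return kin + pot
```

For spatially uniform states the kinetic term vanishes and $E$ reduces to the potential terms of Eq. (\ref{E}) times the domain length, which gives an easy consistency check.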
Stability of stationary solutions, obtained in the form of expression (\ref%
{uv}), against small perturbations was investigated by means of the usual
approach, looking for the perturbed solution as%
\begin{eqnarray}
u\left( x,z\right) &=&e^{ikz}\left[ U(x)+e^{\gamma z}a(x)+e^{\gamma ^{\ast
}z}b^{\ast }(x)\right] , \notag \\
v\left( x,z\right) &=&e^{ikz}\left[ V(x)+e^{\gamma z}c(x)+e^{\gamma ^{\ast
}z}d^{\ast }(x)\right] , \label{pert}
\end{eqnarray}%
where $\left\{ a(x),b(x),c(x),d(x)\right\} $ are components of an eigenmode
of infinitesimal perturbations, and $\gamma $ is the respective eigenvalue
(which may be a complex number). The substitution of the perturbed
expression (\ref{pert}) in Eq. (\ref{system}) and linearization leads to the
system of coupled equations,%
\begin{eqnarray}
\left( -k+i\gamma \right) a &=&\frac{1}{2}\left( -\frac{\partial ^{2}}{%
\partial x^{2}}\right) ^{\alpha /2}a+\left( 2U^{2}+\beta V^{2}\right)
a+U^{2}b+\beta UV(c+d)-\lambda c, \notag \\
\left( -k-i\gamma \right) b &=&\frac{1}{2}\left( -\frac{\partial ^{2}}{%
\partial x^{2}}\right) ^{\alpha /2}b+\left( 2U^{2}+\beta V^{2}\right)
b+U^{2}a+\beta UV(c+d)-\lambda d, \notag \\
&& \label{linearized} \\
\left( -k+i\gamma \right) c &=&\frac{1}{2}\left( -\frac{\partial ^{2}}{%
\partial x^{2}}\right) ^{\alpha /2}c+\left( 2V^{2}+\beta U^{2}\right)
c+V^{2}d+\beta UV(a+b)-\lambda a, \notag \\
\left( -k-i\gamma \right) d &=&\frac{1}{2}\left( -\frac{\partial ^{2}}{%
\partial x^{2}}\right) ^{\alpha /2}d+\left( 2V^{2}+\beta U^{2}\right)
d+V^{2}c+\beta UV(a+b)-\lambda b. \notag
\end{eqnarray}%
The underlying DW solution is stable if the numerical solution of Eq. (\ref%
{linearized}) yields only purely imaginary eigenvalues, i.e., ones with zero
real parts.
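In practice, the eigenvalue problem is solved by assembling a $4N\times 4N$ matrix on a grid of $N$ points. The following Python sketch shows one way to do this (not the production code); the $c,d$ rows are written using the $u\leftrightarrow v$ symmetry of the system, and only the trivial zero background, which is trivially stable, is used as the illustration.

```python
import numpy as np

def riesz_matrix(N, dx, alpha):
    """Dense matrix of (-d^2/dx^2)^(alpha/2) on a periodic grid."""
    p = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    F = np.fft.fft(np.eye(N), axis=0)              # DFT matrix
    return np.real(np.linalg.inv(F) @ np.diag(np.abs(p) ** alpha) @ F)

def growth_rates(U, V, dx, alpha, k, beta, lam):
    """Eigenvalues gamma of the linearized system: writing it as
    i*gamma*(a, b, c, d)^T = M (a, b, c, d)^T gives gamma = -i * eig(M)."""
    N = len(U)
    I = np.eye(N)
    H = k * I + 0.5 * riesz_matrix(N, dx, alpha)
    dU, dV, X = np.diag(U**2), np.diag(V**2), np.diag(beta * U * V)
    A = H + 2.0 * dU + beta * dV                   # diagonal block for a, b
    C = H + 2.0 * dV + beta * dU                   # diagonal block for c, d
    M = np.block([
        [ A,            dU,             X - lam * I,    X            ],
        [-dU,          -A,             -X,            -(X - lam * I) ],
        [ X - lam * I,  X,              C,              dV           ],
        [-X,          -(X - lam * I),  -dV,           -C            ],
    ])
    return -1j * np.linalg.eigvals(M)

# illustration: for the zero background at k = -1 all growth rates
# must be purely imaginary
N, dx = 32, 0.5
Z = np.zeros(N)
gam = growth_rates(Z, Z, dx, alpha=1.5, k=-1.0, beta=3.0, lam=0.0)
```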
\subsection{Continuous-wave (CW) solutions and the immiscibility condition}
The spatially uniform version of Eq. (\ref{UV}), with $U,V=\mathrm{const}$,\
gives rise to two asymmetric (partly immiscible, with $U\neq V$) CW
solutions, labeled by subscripts $+$ and $-$, which are mirror images of
each other:%
\begin{eqnarray}
\left\{
\begin{array}{c}
U_{+} \\
V_{+}%
\end{array}%
\right\} &=&\frac{1}{\sqrt{2}}\left\{
\begin{array}{c}
\sqrt{-\frac{k}{2}+\frac{\lambda }{\beta -1}}+\sqrt{-\frac{k}{2}-\frac{%
\lambda }{\beta -1}} \\
\sqrt{-\frac{k}{2}+\frac{\lambda }{\beta -1}}-\sqrt{-\frac{k}{2}-\frac{%
\lambda }{\beta -1}}%
\end{array}%
\right\} , \label{CW+} \\
\left\{
\begin{array}{c}
U_{-} \\
V_{-}%
\end{array}%
\right\} &=&\frac{1}{\sqrt{2}}\left\{
\begin{array}{c}
\sqrt{-\frac{k}{2}+\frac{\lambda }{\beta -1}}-\sqrt{-\frac{k}{2}-\frac{%
\lambda }{\beta -1}} \\
\sqrt{-\frac{k}{2}+\frac{\lambda }{\beta -1}}+\sqrt{-\frac{k}{2}-\frac{%
\lambda }{\beta -1}}%
\end{array}%
\right\} . \label{CW-}
\end{eqnarray}%
Note that the total density of solutions (\ref{CW+}) and (\ref{CW-}) is%
\begin{equation}
U_{+}^{2}+V_{+}^{2}=U_{-}^{2}+V_{-}^{2}=-k. \label{density}
\end{equation}%
While in the limit of $\lambda =0$ (no linear mixing), the obvious CW states
are completely immiscible, with $V_{+}=U_{-}=0$, the partly immiscible
states given by Eqs. (\ref{CW+}) and (\ref{CW-}) were found only recently in
Ref. \cite{PLA}. Parallel to the asymmetric CW states (\ref{CW+}) and (\ref%
{CW-}) there is the mixed (symmetric) one, with%
\begin{equation}
U_{0}=V_{0}=\sqrt{(\lambda -k)/(1+\beta )}. \label{CW0}
\end{equation}
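The CW branches (\ref{CW+}), (\ref{CW-}), and (\ref{CW0}) are easy to verify by direct substitution in the spatially uniform version of Eq. (\ref{UV}), in which the fractional term vanishes on constants. A short Python check, with illustrative parameter values:

```python
import numpy as np

def cw_plus(k, lam, beta):
    """Asymmetric CW state of Eq. (CW+); Eq. (CW-) is its mirror image."""
    s = np.sqrt(-k / 2.0 + lam / (beta - 1.0))
    d = np.sqrt(-k / 2.0 - lam / (beta - 1.0))
    return (s + d) / np.sqrt(2.0), (s - d) / np.sqrt(2.0)

def residual(U, V, k, lam, beta):
    # uniform version of the first line of Eq. (UV); the second line
    # follows by the U <-> V swap
    return k * U + (U**2 + beta * V**2) * U - lam * V

k, lam, beta = -1.0, 0.2, 3.0            # satisfies beta - 1 > 2*lam/|k|
Up, Vp = cw_plus(k, lam, beta)
U0 = np.sqrt((lam - k) / (1.0 + beta))   # mixed state, Eq. (CW0)
```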
For given $k$, i.e., for given CW density [see Eq. (\ref{density})], CW
states (\ref{CW+}) and (\ref{CW-}) exist under the following condition:%
\begin{equation}
\beta -1>\left( \beta -1\right) _{\mathrm{immisc}}\equiv 2\lambda /|k|.
\label{max}
\end{equation}%
In the absence of the linear mixing, $\lambda =0$, Eq. (\ref{max}) amounts
to the commonly known immiscibility condition \cite{Mineev}, $\beta >\beta _{%
\mathrm{immisc}}=1$. At $\lambda >0$, Eq. (\ref{max}) demonstrates that the
linear mixing pushes the immiscibility threshold to higher values, as was
first demonstrated in Ref. \cite{Merhasin} under normalization condition $%
k=-1$. Precisely at the threshold, i.e., at $\beta -1=2\lambda /|k|$, Eqs. (%
\ref{CW+}), (\ref{CW-}) and (\ref{CW0}) yield the following magnitude of the
CW fields,%
\begin{equation}
U_{\mathrm{thresh}}=V_{\mathrm{thresh}}=\sqrt{\lambda /(\beta -1)}.
\label{threshold}
\end{equation}
As shown in Ref. \cite{Merhasin}, the meaning of the immiscibility condition (%
\ref{max}) can be understood in terms of energy: at $\beta -1>\left( \beta
-1\right) _{\mathrm{immisc}}$, for a given density of the CW state [see Eq. (%
\ref{density})], the energy density of the partly immiscible state,
determined as per the second line of Eq. (\ref{E}), is lower than that of
the mixed one (\ref{CW0}); hence the asymmetric CW solutions (\ref{CW+}) and
(\ref{CW-}) play the role of the system's ground state, while the mixed CW
state (\ref{CW0}) is unstable. On the other hand, at $\beta -1<\left( \beta
-1\right) _{\mathrm{immisc}}$ the mixed CW state is the only existing one,
being the (stable) ground state in that case.
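This energy argument is easy to reproduce numerically by comparing, at equal density $-k$, the local energy density [the second line of Eq. (\ref{E})] of the asymmetric state (\ref{CW+}) with that of the equal-density mixed configuration $U=V=\sqrt{-k/2}$. A hedged sketch with illustrative parameters taken on the immiscible side of threshold (\ref{max}):

```python
import numpy as np

def energy_density(U, V, beta, lam):
    """Local (potential) part of Eq. (E) per unit length; the fractional
    term vanishes for constant fields."""
    return 0.25 * (U**4 + V**4 + 2.0 * beta * U**2 * V**2) - lam * U * V

k, lam, beta = -1.0, 0.2, 3.0            # beta - 1 = 2 > 2*lam/|k| = 0.4
s = np.sqrt(-k / 2.0 + lam / (beta - 1.0))
d = np.sqrt(-k / 2.0 - lam / (beta - 1.0))
Up, Vp = (s + d) / np.sqrt(2.0), (s - d) / np.sqrt(2.0)  # Eq. (CW+)
Um = np.sqrt(-k / 2.0)                   # mixed configuration at the same density -k
e_asym = energy_density(Up, Vp, beta, lam)
e_mixed = energy_density(Um, Um, beta, lam)
```

For these values the asymmetric state indeed has the lower energy density, in accordance with its role as the ground state above the immiscibility threshold.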
\section{Domain-wall (DW) solutions: Analytical findings}
DW states exist when Eq. (\ref{UV}) maintains the (partly) immiscible CW
states, as given by Eqs. (\ref{CW+}) and (\ref{CW-}). The DW solution links
two different CW states, that fill the space at $x\rightarrow \pm \infty $,
according to the following boundary conditions:
\begin{eqnarray}
\lim_{x\rightarrow +\infty }\left\{
\begin{array}{c}
U(x) \\
V(x)%
\end{array}%
\right\} &=&\left\{
\begin{array}{c}
U_{-} \\
V_{-}%
\end{array}%
\right\} , \notag \\
&& \label{link} \\
\lim_{x\rightarrow -\infty }\left\{
\begin{array}{c}
U(x) \\
V(x)%
\end{array}%
\right\} &=&\left\{
\begin{array}{c}
U_{+} \\
V_{+}%
\end{array}%
\right\} . \notag
\end{eqnarray}
An essential fact is that, in the particular case of $\beta =3$, the system
of two equations (\ref{UV}) can be exactly reduced to a single equation, by
the substitution of%
\begin{equation}
\left\{
\begin{array}{c}
U(x) \\
V(x)%
\end{array}%
\right\} =\frac{1}{2}\left\{
\begin{array}{c}
\sqrt{\lambda -k}-W(x), \\
\sqrt{\lambda -k}+W(x),%
\end{array}%
\right\} \label{ansatz}
\end{equation}%
where $W(x)$ is a real odd function of $x$ satisfying the equation%
\begin{equation}
(k+\lambda )W+\frac{1}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right)
^{\alpha /2}W+W^{3}=0, \label{W}
\end{equation}%
which is supplemented by the boundary conditions%
\begin{equation}
\lim_{x\rightarrow \pm \infty }W(x)=\pm \sqrt{-k-\lambda }
\label{lim}
\end{equation}%
[note that it follows from Eq. (\ref{max}) with $\beta =3$ that the value $%
\sqrt{-k-\lambda }$ in Eq. (\ref{lim}) is real].
In the case of the fractional diffraction (for $\alpha <2$), Eqs. (\ref%
{ansatz}) and (\ref{W}) represent a new result for $\beta =3$, while in the
case of the usual diffraction, i.e., at $\alpha =2$, the respective solution
was recently found in a fully explicit form \cite{PLA}:
\begin{equation}
\left\{
\begin{array}{c}
U(x) \\
V(x)%
\end{array}%
\right\} _{\alpha =2,\beta =3}=\frac{1}{2}\left\{
\begin{array}{c}
\sqrt{-k+\lambda }-\sqrt{-k-\lambda }\tanh \left( \sqrt{-k-\lambda }x\right)
\\
\sqrt{-k+\lambda }+\sqrt{-k-\lambda }\tanh \left( \sqrt{-k-\lambda }x\right)%
\end{array}%
\right\} . \label{exact}
\end{equation}%
The existence of the relevant solution to Eq. (\ref{W}) with $\alpha <2$ is
corroborated by the numerical results for dark solitons as solutions of the
FNLSE, which were reported (in a different context) in Ref. \cite{we}.
Furthermore, as $\left( k+\lambda \right) $ is the single control parameter
in Eq. (\ref{W}), an exact property of the solutions with all values of $%
\alpha $ is a scaling relation for the DW's width, $L$:%
\begin{equation}
L\sim \left( -k-\lambda \right) ^{-1/\alpha }. \label{L}
\end{equation}%
In particular, Eq. (\ref{L}) agrees with the exact solution (\ref{exact}) in
the case of $\alpha =2$.
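The exact solution (\ref{exact}) can be checked by direct substitution in Eq. (\ref{UV}) at $\alpha =2$, $\beta =3$, using the analytic second derivative of the $\tanh $ profile. A short Python check with illustrative $k$ and $\lambda $:

```python
import numpy as np

k, lam = -1.0, 0.2
A = np.sqrt(-k - lam)                 # asymptotic amplitude of the tanh profile
x = np.linspace(-5.0, 5.0, 201)
t = np.tanh(A * x)
U = 0.5 * (np.sqrt(-k + lam) - A * t)  # Eq. (exact), first component
V = 0.5 * (np.sqrt(-k + lam) + A * t)
Uxx = A**3 * t * (1.0 - t**2)          # exact d^2U/dx^2 of the tanh profile
Vxx = -Uxx
# residuals of Eq. (UV) at alpha = 2, beta = 3, where (-d^2/dx^2)^1 U = -Uxx
resU = k * U - 0.5 * Uxx + (U**2 + 3.0 * V**2) * U - lam * V
resV = k * V - 0.5 * Vxx + (V**2 + 3.0 * U**2) * V - lam * U
```

The half-width of this profile, $x_{1/2}=\mathrm{artanh}(1/2)/\sqrt{-k-\lambda }$, exhibits the $\left( -k-\lambda \right) ^{-1/2}$ scaling of Eq. (\ref{L}) explicitly for $\alpha =2$.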
An approximate scaling relation for the DW can be constructed in the case
when propagation constant $k$ is taken close to the threshold value (\ref%
{max}), i.e., setting%
\begin{equation}
k=-\frac{2\lambda }{\beta -1}-q, \label{q}
\end{equation}%
with
\begin{equation}
0<q\ll 2\lambda /(\beta -1). \label{small-q}
\end{equation}%
In this case, Eqs. (\ref{link}) and (\ref{CW+}), (\ref{CW-}) show that the
DW links boundary values with a small difference between them,%
\begin{equation}
\left\{ U(x=\pm \infty ),V(x=\pm \infty )\right\} \approx \left\{ \sqrt{%
\lambda /(\beta -1)}\mp \sqrt{q}/2,\sqrt{\lambda /(\beta -1)}\pm \sqrt{q}%
/2\right\} , \label{a}
\end{equation}%
where the main term is the same as in Eq. (\ref{threshold}). Further,
straightforward analysis of Eq. (\ref{UV}) demonstrates that, under
condition (\ref{small-q}), the DW's width scales with the variation of $q$ as%
\begin{equation}
L\sim q^{-1/\alpha }, \label{WDS}
\end{equation}%
cf. Eq. (\ref{L}).
Finally, in the case of $\lambda =0$ in Eq. (\ref{UV}), the near-threshold
case is defined, instead of Eq. (\ref{small-q}), simply as
\begin{equation}
0<\beta -1\ll 1, \label{<<}
\end{equation}%
see Eq. (\ref{max}). In this case, the analysis of Eq. (\ref{UV}) leads to
the following asymptotic scaling relation for the DW's width,%
\begin{equation}
L\sim \left( \beta -1\right) ^{-1/\alpha }, \label{beta-1}
\end{equation}%
cf. Eqs. (\ref{L}) and (\ref{WDS}).
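All three relations (\ref{L}), (\ref{WDS}), and (\ref{beta-1}) follow from
the same balance argument. As a sketch (with the identification of the small
parameter hedged in each case), write the reduced equation in the generic
form of Eq. (\ref{W}) with a small coefficient $\varepsilon $ in front of
the linear term; the substitution $x=L\xi $, together with the homogeneity
of the Riesz derivative, yields%
\begin{equation}
\varepsilon W+\frac{1}{2L^{\alpha }}\left( -\frac{\partial ^{2}}{\partial
\xi ^{2}}\right) ^{\alpha /2}W+W^{3}=0.
\end{equation}%
The balance of the linear and fractional-diffraction terms then requires $%
L^{-\alpha }\sim |\varepsilon |$, i.e., $L\sim |\varepsilon |^{-1/\alpha }$,
with $\varepsilon =k+\lambda $ for Eq. (\ref{L}), $\varepsilon \sim q$ for
Eq. (\ref{WDS}), and $\varepsilon \sim \left( \beta -1\right) |k|$ for Eq. (%
\ref{beta-1}).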
In the case of the normal diffraction, $\alpha =2$, the situation valid
under condition (\ref{<<}) was considered in Ref. \cite{optical-DW}, where
the scaling was obtained in the form of $L\sim (\beta -1)^{-1/2}$, cf. Eq. (%
\ref{beta-1}). Also in agreement with Eq. (\ref{beta-1}), DWs do not exist
in the system with the Manakov nonlinearity \cite{Manakov}, $\beta =1$.
\section{Numerical results}
Numerical solutions of Eq. (\ref{UV}) for stationary DWs were produced by
means of the well-known Newton conjugate-gradient method \cite{JYang}. The
results are presented below separately for the systems without and with the
linear coupling ($\lambda =0$ and $\lambda >0$, respectively), and also, in
a brief form for an asymmetric generalization of Eq. (\ref{UV}), with
unequal diffraction coefficients and/or values of the LI for components $U$
and $V$.
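The full Newton-conjugate-gradient machinery of Ref. \cite{JYang} is not reproduced here, but the idea can be illustrated by a plain Newton iteration for the reduced equation (\ref{W}) at $\alpha =2$, with the boundary values pinned to the $\tanh $ asymptotics; the grid and parameters below are illustrative only.

```python
import numpy as np

def solve_W(kplam, X=4.0, N=201, iters=10):
    """Plain Newton iteration for Eq. (W) at alpha = 2 on [-X, X], with
    Dirichlet boundary values pinned to the tanh asymptotics; a bare-bones
    stand-in for the Newton-conjugate-gradient method of Yang's book."""
    x = np.linspace(-X, X, N)
    dx = x[1] - x[0]
    A = np.sqrt(-kplam)
    W = A * np.tanh(A * x)                 # initial guess = exact alpha = 2 profile
    D2 = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
          - 2.0 * np.eye(N)) / dx**2       # second-difference matrix
    J0 = kplam * np.eye(N) - 0.5 * D2
    for _ in range(iters):
        F = kplam * W - 0.5 * (D2 @ W) + W**3
        J = J0 + 3.0 * np.diag(W**2)       # Jacobian of the discrete system
        F[0] = F[-1] = 0.0                 # keep boundary points fixed
        J[0, :] = 0.0; J[0, 0] = 1.0
        J[-1, :] = 0.0; J[-1, -1] = 1.0
        W = W - np.linalg.solve(J, F)
    return x, W

x, W = solve_W(-1.0)   # k + lambda = -1, so the exact profile is tanh(x)
```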
\subsection{The system without the linear coupling ($\protect\lambda =0$)}
First, we consider the basic system of stationary equations (\ref{UV}) with $%
\lambda =0$. Results produced by the numerical solution of this system are
summarized in Fig. \ref{fig1} for $\beta =3$, because this value of the XPM
coefficient, as shown above, simplifies the system, allowing one to reduce
it to the single equation (\ref{W}). In this and similar figures, the
propagation constant is fixed as $k=-1$, which is always possible by means
of rescaling. It is seen that the DW patterns are truly robust ones, as the
variation of the LI in broad limits, from $\alpha =2$ down to $\alpha =0.2$,
produces relatively mild changes in the shape of the DWs, which persist, as
solutions of Eq. (\ref{UV}), at all values of the LI (numerical results are
not presented for very small values of $\alpha $, as the numerical method
encounters technical problems in that case).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.60\textwidth]{fig1}
\end{center}
\caption{A set of stationary profiles of the two components of the
numerically generated DW (domain-wall) solutions of Eq. (\protect\ref{UV})
with $\protect\lambda =0$, $\protect\beta =3$, $k=-1$ and indicated values
of the LI (L\'{e}vy index) varying from $\protect\alpha =2$ (which
corresponds to the normal non-fractional diffraction) down to $\protect%
\alpha =0.2$.}
\label{fig1}
\end{figure}
Furthermore, the numerical solution of the system of linearized equations (%
\ref{linearized}) for small perturbations produces completely stable spectra
of eigenvalues for all stationary DW patterns (not shown here in detail, as
they do not exhibit essential peculiarities). The stability of the DWs was
also corroborated by direct simulations of the system of underlying
equations (\ref{system}), see a typical example displayed in Fig. \ref{fig2}
for $\alpha =1$.
\begin{figure}[tbp]
\begin{center}
\subfigure[]{\includegraphics[scale=0.9]{fig2a.pdf}} \subfigure[]{%
\includegraphics[scale=0.9]{fig2b.pdf}}
\end{center}
\caption{An example of the stable evolution of the two components of the DW
[shown in panels (a) and (b)] with $k=-1$, produced by simulations of Eq. (%
\protect\ref{system}) with $\protect\alpha =1$, $\protect\beta =3$, and $%
\protect\lambda =0$. The stability of the DW is corroborated by a purely
stable spectrum of eigenvalues produced, for the same DW state, by numerical
solution of Eq. (\protect\ref{linearized}) (not shown here).}
\label{fig2}
\end{figure}
Similar results, i.e., a family of stable DW solutions, are produced by the
numerical analysis of Eqs. (\ref{system}), (\ref{UV}), and (\ref{linearized}%
), for values of the XPM coefficient $\beta \neq 3$, as shown by a set of
profiles in Fig. \ref{fig3} for $\alpha =1$ (it is a typical value of the LI
corresponding to the fractional diffraction).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.60\textwidth]{fig3}
\end{center}
\caption{A set of DW solutions with $k=-1$, produced by Eq. (\protect\ref{UV}%
) with $\protect\lambda =0$, $\protect\alpha =1$ and values of XPM
coefficient varying in interval $1.05\leq \protect\beta \leq 3$. All the
solutions are stable, according to the calculation of eigenvalues [see Eq. (%
\protect\ref{linearized})] and direct simulations of Eq. (\protect\ref%
{system}).}
\label{fig3}
\end{figure}
The set of DW profiles is displayed in Fig. \ref{fig3} for $\beta \geq 1.05$%
, as, for very small values of $\beta -1$, the width of the DW diverges, in
accordance with the scaling relation given, for $\lambda =0$, by Eq. (\ref%
{beta-1}). For different values of $\alpha $ the DW families are
characterized by dependences of their width $L$ on $(\beta -1)$, as shown in
the top left panel of Fig. \ref{fig4}. The width was identified, from the
numerical solution, as the distance between points where $U(x)$ and $V(x)$
take values $1/2$, i.e., half of the asymptotic values $U(x\rightarrow
-\infty )=V(x\rightarrow +\infty )=1$ for $k=-1$, see Eqs. (\ref{CW+}) and (\ref{CW-}).
In the same Fig. \ref{fig4}, the numerically found dependences are
compared to the analytically predicted asymptotic scaling relations given by
Eq. (\ref{beta-1}). It is seen that the prediction is indeed very close to
the numerical results for sufficiently small values of $\left( \beta
-1\right) $.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.80\textwidth]{fig4}
\end{center}
\caption{The top left panel: Width $L$ of the DW for $k=-1$, $\protect%
\lambda =0$, and three different values of the LI, $\protect\alpha =2$
(which corresponds to the normal non-fractional diffraction), $\protect%
\alpha =1.5$, and $\protect\alpha =1,$ vs. the proximity, $\left( \protect%
\beta -1\right) $, to the DW's existence threshold, $\protect\beta =1$. The
other panels show the comparison of the respective numerically found curves,
$L(\protect\beta -1)$, to the asymptotic analytically predicted scaling
relation (\protect\ref{beta-1}).}
\label{fig4}
\end{figure}
\subsection{The system including the linear coupling ($\protect\lambda >0$)}
The inclusion of the linear coupling in Eq. (\ref{UV}) neither destroys DW
solutions nor destabilizes them, but makes their shapes more complex, even
in the case of $\beta =3$, when substitution (\ref{ansatz}) reduces the
system of two equations (\ref{UV}) to the single equation (\ref{W}). First,
Fig. \ref{fig5}(a) demonstrates that the linear coupling with strength $%
\lambda =0.5$ produces a relatively weak effect on the shape of the DW
states if the LI takes values in the interval of $1\leq \alpha \leq 2$. On
the other hand, Fig. \ref{fig5}(b) demonstrates a more conspicuous effect of
the same linear coupling for smaller values of the LI, \textit{viz}., in the
interval of $0.1\leq \alpha <1$: the corresponding DWs become essentially
broader, in comparison to their counterparts found at $\lambda =0$. These
results are summarized by Fig. \ref{fig5}(c), where, in a broader spatial
domain, it is shown that the DW solutions eventually converge to asymptotic
values at $|x|\rightarrow \infty $, which are, according to Eqs. (\ref{lim})
and (\ref{ansatz}),
\begin{equation}
U_{\pm }=\frac{1}{2}\left( \sqrt{-k+\lambda }\pm \sqrt{-k-\lambda }\right)
\approx \left\{ 0.97,0.26\right\} \label{U+-}
\end{equation}%
for the case of $k=-1$, $\lambda =0.5$, $\beta =3$, although at small values
of $\alpha $ the convergence is very slow. These findings are readily
explained by the scaling relation (\ref{L}), which predicts the growth of
the DW's width with the increase of the linear-coupling constant $\lambda $
and decrease of the LI, $\alpha $.
\begin{figure}[tbp]
\subfigure[]{\includegraphics[scale=0.3]{fig5a.pdf}} \subfigure[]{%
\includegraphics[scale=0.3]{fig5b.pdf}}\newline
\subfigure[]{\includegraphics[scale=0.3]{fig5c.pdf}}
\caption{Shapes of stable DW solutions produced by Eqs. (\protect\ref{ansatz}%
) and (\protect\ref{W}) for $\protect\beta =3$, $\protect\lambda =0.5$, $k=-1$%
, and the LI varying in intervals indicated in the panels. (a) Narrow DWs
for $1\leq \protect\alpha \leq 2$; (b) broad DWs for $0.1\leq \protect\alpha %
<1$ [for the comparison's sake, the DW profile with $\protect\alpha =2$,
i.e., in the case of the normal diffraction, is also included in (b)]; (c)
both narrow and broad DWs, displayed in a much larger spatial domain for the
entire range of the values of the LI, $0.2\leq \protect\alpha \leq 2$.}
\label{fig5}
\end{figure}
The effects of the linear coupling on the DW states at $\beta \neq 3$ are
qualitatively similar to those displayed in Fig. \ref{fig5} for $\beta =3$.
In all the cases, the DW solutions remain stable if the linear coupling is
incorporated. Further, for comparison with the $L(\beta -1)$ dependences which
are displayed for $\lambda =0$ in Fig. \ref{fig4}, similar dependences for $%
\lambda =0.5$ are presented in Fig. \ref{fig6}. In this case, the
dependences end close to $\beta -1=1$, in accordance with Eq. (\ref{max}),
which gives $\left( \beta -1\right) _{\mathrm{immisc}}=1$ for $\lambda =0.5$
and $k=-1$. The fact that all the curves yield the same DWs' width at $\beta
=3$ is explained by the above finding that this value plays a special role,
simplifying the DW solutions.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.60\textwidth]{fig6}
\end{center}
\caption{The numerically found dependences of width $L$ of the DW on $\left(
\protect\beta -1\right) $ for $k=-1$, $\protect\lambda =0.5$, and three
different values of the LI, $\protect\alpha =2$ (which corresponds to the
normal non-fractional diffraction), $\protect\alpha =1.5$, and $\protect%
\alpha =1$. }
\label{fig6}
\end{figure}
\subsection{DWs in a system with unequal diffraction coefficients and
different values of the L\'{e}vy index}
A relevant generalization of the system of coupled equations which give rise
to DW solutions is one with different diffraction coefficients, $D_{1}\neq
D_{2}$, in the two equations, as recently proposed in Ref. \cite{PLA} (in
the case of the normal diffraction). In the optical system, unequal
coefficients $D_{1,2}\equiv \cos ^{2}\theta _{1,2}$ are determined by
different angles $\theta _{1,2}$ between carrier wave vectors of the two
components of the light waves and the common propagation direction. The
effective diffraction coefficients are also unequal in the BEC model
including two heteronuclear components with different atomic masses, $%
m_{1,2}\sim 1/D_{1,2}$, but in the latter case only the system with $\lambda
=0$ is a physically relevant one (two different atomic species cannot
transform into each other).
The extension of Eq. (\ref{UV}) with $D_{1}\neq D_{2}$ takes the form of
\begin{eqnarray}
kU+\frac{D_{1}}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right)
^{\alpha /2}U+(U^{2}+\beta V^{2})U-\lambda V &=&0, \notag \\
kV+\frac{D_{2}}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right)
^{\alpha /2}V+(V^{2}+\beta U^{2})V-\lambda U &=&0. \label{DD}
\end{eqnarray}%
The numerical solution of Eq. (\ref{DD}) demonstrates that the asymmetry of
the diffraction coefficients makes the shapes of the two components of the
DWs mutually asymmetric, but does not destroy them. An example of the so
deformed shape of the DWs is displayed in Fig. \ref{fig7}, for both cases of
$\lambda =0$ and $\lambda >0$. This solution and all others produced by the
asymmetric system remain completely stable, as shown by the solution of the
respectively modified Eq. (\ref{linearized}), as well as by direct
simulations of the asymmetric version of Eq. (\ref{system}) (not shown here
in detail).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.60\textwidth]{fig7}
\end{center}
\caption{Examples of stable asymmetric DWs with $D_{1}\neq D_{2}$, produced
by the numerical solution of Eq. (\protect\ref{DD}) with $\protect\alpha =1$%
, $\protect\beta =3$, $k=-1,$ and two different values of the
linear-coupling constant, $\protect\lambda =0$ and $\protect\lambda =0.5$ in
the top and bottom panels, respectively. The two field components, $U$ and $
V $, are displayed for three pairs of values $D_{1,2}$, which are indicated,
from top to bottom, in the panels.}
\label{fig7}
\end{figure}
Furthermore, it is also possible to consider the system with different
values of the LI, $\alpha _{1}\neq \alpha _{2}$, in the equations for the
two components:
\begin{eqnarray}
kU+\frac{D_{1}}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right)
^{\alpha _{1}/2}U+(U^{2}+\beta V^{2})U-\lambda V &=&0, \notag \\
kV+\frac{D_{2}}{2}\left( -\frac{\partial ^{2}}{\partial x^{2}}\right)
^{\alpha _{2}/2}V+(V^{2}+\beta U^{2})V-\lambda U &=&0, \label{LI}
\end{eqnarray}%
where we set $\alpha _{1}=2$ (non-fractional diffraction) and $\alpha _{2}=1$%
. Such a system may be realized in terms of BEC, considering immiscible
components which represent usual particles ($U$) and those moving by the L\'{%
e}vy flights ($V$). This system also supports stable DW states, as shown in
Fig. \ref{fig8}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.60\textwidth]{fig8}
\end{center}
\caption{Examples of stable asymmetric DWs with different values of the LI
in the two components, $\protect\alpha _{1}=2$ and $\protect\alpha _{2}=1$,
produced by the numerical solution of Eq. (\protect\ref{LI}) with $\protect%
\beta =3$, $k=-1,$ and $\protect\lambda =0$. The field components, $U$ and $V
$, are displayed for three pairs of values of the diffraction coefficients, $%
D_{1,2}$, which are indicated, from top to bottom, in the plot.}
\label{fig8}
\end{figure}
\section{Conclusion}
The objective of this work is to demonstrate that the variety of states
maintained by the interplay of the fractional diffraction and
self-defocusing cubic nonlinearity, which may be realized in optics and BEC,
can be expanded by predicting stable DWs (domain walls) in the two-component
system of immiscible fields. The numerical results clearly demonstrate that,
in the entire range of values of the respective LI (L\'{e}vy index), $\alpha
<2$, which determines the fractional diffraction, and at all values of the
relative XPM/SPM coefficient $\beta $ exceeding the immiscibility
threshold given by Eq. (\ref{max}), the DWs exist and are stable. The same
is true for DWs in the system including the linear mixing between the
components, in addition to the XPM interaction between them (which makes the
immiscibility incomplete). The main characteristic of DW structures is their
width. The present analysis demonstrates that the fractional diffraction
essentially affects the scaling which determines the dependence of the width
on the system's parameters. Numerical results for the scaling corroborate
analytical findings, represented by Eqs. (\ref{L}), (\ref{WDS}), and (\ref%
{beta-1}). It is seen that the decrease of $\alpha $ leads to a steep increase
of the scaling exponents, $\sim 1/\alpha $, which is explained by the fact
that the fractional diffraction is represented by the nonlocal operator [the
Riesz derivative, defined as per Eq. (\ref{Riesz})]. It is also demonstrated
that the DW solutions are essentially simplified in the special case of $%
\beta =3$. Currently available techniques should make it possible to create
the predicted DW patterns in experiments that may be performed for the
bimodal light propagation in the temporal domain \cite{arXiv}.
As an extension of the present analysis, it may be natural to study static and
dynamical states built as multi-DW patterns. Another relevant direction is
the consideration of two-dimensional settings, such as radial DWs (also in
static and dynamical states) between immiscible light waves in bulk
waveguides, cf. Refs. \cite{DOPO}-\cite{Bookman}.
\section*{Acknowledgment}
This work was supported, in part, by the Israel Science Foundation through
the grant No. 1695/22.
\section{Introduction}
Recently multiferroics with the coexistence of ferroelectricity
and magnetism have been intensively studied not only due to
potential applications but also from basic
interest~\cite{Fiebig}. In particular, BiMnO$_3$ is regarded as a
true multiferroic material with both ferromagnetism and
ferroelectricity. $\rm BiMnO_3$ possesses a distorted perovskite
structure with monoclinic symmetry due to highly polarizable
bismuth ions and Jahn-Teller (JT) active manganese ions. Its
magnetic moment certainly arises from the $3d^4$ electrons of the $\rm
Mn^{3+}$ ion, while the ferroelectric polarization arises from the
Bi $6s^2$ lone pair~\cite{NAHill}. The saturated magnetization for
a polycrystalline sample was reported to be 3.6 $\mu_{\rm B}/{\rm
Mn}$, and the ferromagnetic transition temperature to be 105
K~\cite{Chiba}. While the ferroelectric remanent polarization of
BiMnO$_3$ was measured to be 0.043 $\rm \mu C/cm^2$ at 200 K for a
polycrystalline sample~\cite{Moreira450K}, there seems to be no
consensus on the ferroelectric transition temperature $T_{\rm
FE}$. Moreira dos Santos $et$ $al.$, for example, reported $T_{\rm
FE}\,\sim\,$450 K where a reversible structural transition occurs
without a symmetry change~\cite{Moreira450K}; however, Kimura $et$
$al.$ suggested $T_{\rm FE}\,\sim\,$770 K where a
centrosymmetric-noncentrosymmetric structural transition occurs
~\cite{TKimura}.
In BiMnO$_3$ orbital degrees of freedom are also active besides
magnetic and electric ones; the half-filled $e_g$ level of $\rm
Mn^{3+}$ ion induces the JT distortion and an anisotropic charge
distribution with the $3d_{\rm 3z^2-r^2}$ orbital prevails
locally. Then long range orbital order, which is compatible with
the structure, would exist in the system. The detailed structure
of BiMnO$_3$ at room temperature was investigated by neutron
powder diffraction and the compatible orbital order was
proposed~\cite{AMoreira}. It is noted that no noticeable
structural change associated with the ferromagnetic transition was
observed. Regarding the orbital order in $\rm BiMnO_3$, there
remain unresolved issues such as the temperature dependence of
the degree of ordering at high temperatures and the determination
of the transition temperature itself, $T_{\rm o}$. Moreover, the
relationship between the ferroelectricity and the orbital order in
this multiferroic compound is clearly of interest.
In order to address these issues we took advantage of resonant
x-ray scattering (RXS) which can probe local environments of a
specific element. RXS occurs as a second order process by first
exciting a core electron into an empty valence state, as the
incident photon energy is tuned to an absorption edge, and then
de-exciting the electron back to the original state. Direct
involvement of the valence state in the scattering process usually
gives rise to enhanced sensitivity to anisotropic charge
distributions, and the atomic scattering factor is no longer a
scalar but takes a tensorial form called anisotropic tensor
susceptibility (ATS). ATS then allows reflections at some of
normally forbidden positions. Pioneering works regarding ATS were
done in the
eighties~\cite{Templeton,Dmitrienko,Blume,HoDGibbs,Finkelstein},
and more recently Murakami $et$ $al.$ successfully applied RXS to
several orbital ordered compounds~\cite{LSMOMurakami,YMurakami}.
Here we wish to present detailed RXS measurements for $\rm
BiMnO_3$ to reveal a role of the orbital order in this important
multiferroic compound. The energy profile, azimuthal angle
dependence, and temperature dependence of an ATS reflection peak
will be presented and discussed.
\section{Experimental}
In order to carry out RXS measurements single crystalline samples
are needed, but BiMnO$_3$ is notoriously difficult to synthesize
in a single crystalline form. Thus we resorted to epitaxial films;
an epitaxial film of BiMnO$_3$ was grown on a SrTiO$_3$(111)
substrate (cubic Miller indices) by the pulsed laser deposition
technique. The thickness of the film was approximately 40 nm. The
detailed growth conditions and structural characterizations are
presented elsewhere~\cite{Yang}. The crystal structure of the thin
film was still monoclinic as in bulk. The growth direction or
surface-normal direction, which is parallel to the [111] direction
of the substrate, is perpendicular to the monoclinic $ab$-plane.
Note that with this orientation the modulation vector of the
orbital ordering of BiMnO$_3$ is along the surface-normal. (See
below.) Thus the orientation of the film was deliberately chosen
in order to make the axis of azimuthal rotation (direction of the
scattering vector) coincide with the surface-normal, so that
azimuthal scans are easily performed. Resonant x-ray scattering
experiments were performed on a 4-circle diffractometer installed
at the 3C2 beam line in the Pohang Light Source. The photon energy
near Mn $K$-edge was calibrated using a manganese foil and the
resolution was about 2 eV. The $\sigma$ polarized photon (electric
field perpendicular to the scattering plane) was injected after
being focused by a bent mirror and monochromatized by a Si(111)
double crystal. The linearly polarized $\sigma'$ (electric field
perpendicular to the scattering plane) and $\pi'$ (electric field
in the scattering plane) components of scattered photons were
separated
with a flat pyrolytic graphite crystal with the scattering angle
of 69 $^{\rm o}$ for the Mn $K$-edge.
\section{Crystal Structure and Structure factor of ${\bf\rm BiMnO_3}$}
From the neutron powder diffraction with bulk samples at room
temperature the space group (monoclinic $C2$) and local bond
lengths and angles of BiMnO$_3$ were all
determined~\cite{AMoreira}, and it turned out that this monoclinic
structure is maintained in thin films~\cite{Yang}. The
crystallographic unit cell is illustrated in Fig.~\ref{BMOfig}(a).
Note that $a$, $b$ and $c$ denote the crystallographic axes (and
the lattice parameters) for the monoclinic unit cell, and
$\bar{x}$, $\bar{y}$ and $\bar{z}$ represent the axes of the
perovskite pseudo-cubic cell. The structure also determines the
orientation of JT-distorted MnO$_6$'s and the compatible ordered
pattern of the $e_g$ orbitals of Mn$^{3+}$. The monoclinic
$ab$-planes at $z=0$ and $z=\frac{1}{2}$ contain manganese ions
with the $e_g$ orbital elongated along the pseudo-cubic
$\bar{z}$-axis, i.e., of type $d_{3\bar{z}^2-r^2}$, while the orbitals in the
$ab$-planes at $z=\frac{3}{4}+\epsilon$ and
$z=\frac{1}{4}-\epsilon$ are of type $d_{3\bar{y}^2-r^2}$ and
$d_{3\bar{x}^2-r^2}$, respectively. Here $z$ denotes the
coordinate along the $c$-axis and $\epsilon$ designates a small
displacement. As illustrated in Fig.~\ref{BMOfig}(b), this orbital
order is rather peculiar in the sense that each plane parallel to
the $ab$-plane and separated by $c/4$ contains only one kind of
the $e_g$ orbital within the plane and the stacking sequence goes
as
$d_{3\bar{z}^2-r^2}$/$d_{3\bar{x}^2-r^2}$/$d_{3\bar{z}^2-r^2}$/$d_{3\bar{y}^2-r^2}$/$\ldots$($\equiv
\rm Z/X/Z/Y/\ldots$). This long-range orbital order constitutes a
superstructure of the system, and the periodic arrangement of the
anisotropic charge distributions gives rise to measurable
intensities at (00L) reflections with odd L through anisotropic
tensor susceptibility (ATS) scattering, where normal charge
scattering is almost absent (not completely absent because of the
nonzero displacement $\epsilon$).
With the crystal structure just described the structure factor
$\mathcal{S}_{\rm (00L)}$ for BiMnO$_3$ (odd L) can be written as
\begin{equation} \label{eqSF}
\mathcal{S}_{\rm (00L)} \,=\, 2 \mathcal{F}_z + 2 \mathcal{F}_x e^{i 2\pi {\rm L}(\frac{1}{4}-\epsilon)} + 2 \mathcal{F}_z e^{i \pi {\rm L}} + 2 \mathcal{F}_y e^{i 2\pi {\rm L}(\frac{3}{4}+\epsilon)} + S({\rm Bi, O}),
\end{equation}
where $\mathcal{F}_x$, $\mathcal{F}_y$, and $\mathcal{F}_z$
represent the ATS of the manganese ion with anisotropic $e_g$
orbitals elongated along $\bar{x}$, $\bar{y}$, and $\bar{z}$,
respectively, and $S({\rm Bi, O})$ is symbolic of the contribution
of Thomson scattering (normal charge scattering) due to bismuth
and oxygen ions. $\mathcal{F}_z$, for example, may be written,
based on the pseudocubic axes shown in Fig.~\ref{BMOfig}, as
follows:
\begin{equation}
\mathcal{F}_z = \left(
\begin{array}{ccc}
f_{\bot} & 0 & 0 \\
0 & f_{\bot} & 0 \\
0 & 0 & f_{\parallel} \\
\end{array}
\right)\end{equation} where $f_{\bot}$ and $f_{\parallel}$ stand
for the resonant scattering amplitude with photon polarization
perpendicular and parallel to the $e_g$ elongation direction,
respectively.
Then Eq.~(\ref{eqSF}) becomes
\begin{equation}\label{eqSF2}
\mathcal{S}_{\rm (00L)} \,\simeq\, \underbrace{2(-1)^{\frac{\rm L-1}{2}}i f_{\Delta} \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)}_{\rm ATS~contribution} + \underbrace{8\pi {\rm L}(-1)^{\frac{\rm L-1}{2}}\epsilon\, f_{\rm Mn} + S({\rm Bi, O})}_{\rm Thomson~scattering}\,,
\end{equation}
where $f_{\rm Mn}$ denotes the isotropic part of the scattering
factor of manganese, and $f_{\Delta}$ $(\equiv\,
f_{\parallel}-f_{\bot})$ denotes the anisotropic portion.
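The phase algebra leading from Eq.~(\ref{eqSF}) to Eq.~(\ref{eqSF2}) can be checked numerically. The sketch below uses purely hypothetical values for $f_{\parallel}$ and $f_{\bot}$ and ignores the Bi/O term; it is an illustration of the cancellation, not part of the original analysis. For odd L the $\mathcal{F}_z$ layers cancel, the Mn contribution reduces exactly to the ATS term when $\epsilon=0$, and the residual (Thomson-like) part is of order $\epsilon$:

```python
import numpy as np

# Hypothetical tensor amplitudes (arbitrary units), NOT fitted values.
f_par, f_perp = 1.0 + 0.3j, 0.4 + 0.1j
f_delta = f_par - f_perp

# ATS tensors for e_g orbitals elongated along the pseudo-cubic x, y, z axes.
Fx = np.diag([f_par, f_perp, f_perp])
Fy = np.diag([f_perp, f_par, f_perp])
Fz = np.diag([f_perp, f_perp, f_par])

def S_mn(L, eps):
    """Mn part of Eq. (eqSF): layers at z = 0, 1/4-eps, 1/2, 3/4+eps."""
    return (2*Fz
            + 2*Fx*np.exp(2j*np.pi*L*(0.25 - eps))
            + 2*Fz*np.exp(2j*np.pi*L*0.5)
            + 2*Fy*np.exp(2j*np.pi*L*(0.75 + eps)))

def ats_part(L):
    """Leading anisotropic term of Eq. (eqSF2) for odd L."""
    return 2*(-1)**((L - 1)//2)*1j*f_delta*np.diag([1.0, -1.0, 0.0])
```

For $\epsilon=0$ the Mn sum equals the ATS matrix exactly; switching on a small $\epsilon$ produces the residual charge-scattering piece linear in $\epsilon$.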
Note that these (00L) reflections with odd L are allowed even in
the absence of resonant scattering because the slight displacement
$\epsilon$ of Mn ions from the exact position prohibits perfect
cancellation of the various contributions; the Bi and O term
$S({\rm Bi, O})$ in Eq.~(\ref{eqSF}) also contributes to this
residual charge scattering. ATS reflections are usually so weak
that observing them by resonant x-ray scattering (RXS) is difficult
in the presence of background. The anomalous ATS reflections from
the BiMnO$_3$ film, however, were measurable because the charge
scattering background at these (00L) peaks turned out to be smaller
than the intensity of the fundamental (004) peak by a factor of
$10^{-5}$.
\section{Resonant x-ray scattering}
The experimental configuration for RXS is shown schematically in
Fig.~2(a). Again $\sigma$ denotes the polarization of the incident
photons, and $\sigma'$ and $\pi'$ those of the scattered photons.
The azimuthal rotation and the analyzer configuration are also
indicated in the figure. The fluorescence was measured as a function of photon
energy to determine the Mn $K$ absorption edge. The absorption
from the manganese ions results in a sharp rise of the fluorescence
yield above 6.55 keV, marking the Mn $K$ absorption edge, as
presented in Fig.~\ref{BMOENE}(b). The integrated intensity of the (003)
reflection was measured as a function of photon energy as shown in
Fig.~\ref{BMOENE}(c). As expected from the resonant nature, the
intensity is sharply enhanced at 6.552 keV in contrast to the
normal charge scattering intensity which would decrease instead
from absorption. Note that at off-resonance energies, appreciable
integrated intensity remains, instead of decaying to zero, as a
result of normal charge scattering described above. Another
interesting point is that, as shown in the two energy profiles at
different temperatures (300 K, 600 K), the resonant contribution
decreases with temperature in contrast to the off-resonant part
which remains nearly constant.
Even though the peak occurs at the expected position in reciprocal
space, this fact alone is not sufficient to conclude that orbital
ordering is indeed the origin of the scattering. The hallmark of
resonant scattering due to orbital ordering is the polarization
dependence and azimuthal variation of the peak intensity. In contrast to
conventional x-ray scattering which neither depends on the
azimuthal angle nor shows polarization conversion, resonant
scattering would exhibit the characteristic oscillations,
depending on the photon polarization, with respect to the
azimuthal angle, i.e., the rotation angle around the scattering
vector. Thus in order to further confirm the orbital order as the
origin of this resonant reflection, the azimuthal angle dependence
and polarization analysis were carried out. Fig.~\ref{BMOPol}
presents the azimuthal angle dependence of the (003) integrated
intensity, normalized to the fundamental (004) peak intensity in
order to remove sample-shape effects. The zero of the azimuthal
angle $\psi$ is defined such that ${\bf k_i}\times{\bf k_f}$
coincides with the $+b$ axis, where ${\bf k_i}$ and ${\bf k_f}$
represent the propagation vectors of the incident and scattered
beams, respectively. For the $\sigma\rightarrow\pi'$ channel, a
clear three-fold oscillation is observed, while the intensity does
not exhibit any azimuthal dependence for
$\sigma\rightarrow\sigma'$. No doubt this polarization dependence
and the three-fold oscillation seen in the $\sigma\rightarrow\pi'$
channel are due to orbital ordering. It is noted, however, that
these results are not trivially connected to the orbital order of
BiMnO$_3$: one further step is required, namely taking into account
the effects of twins, which always exist in the film. A detailed
account is given in the section that follows.
The temperature variation of the orbital order was elucidated by
measuring the (003) integrated intensities at two different photon
energies, 6.532 keV and 6.552 keV, as shown in Fig.~\ref{BMOTemp}.
While the data obtained at off-resonance energy 6.532 keV are
solely due to Thomson scattering (normal charge scattering), those
RXS data at 6.552 keV are caused by the ATS reflection. The
temperature profile of the RXS data shows two distinct features:
one is that the peak intensity decreases rather rapidly with
temperature and disappears around 770 K. The other is that an
anomaly in the temperature variation appears around 440 K, below
which there is a sudden upturn in the slope. The former
corresponds to the transition temperature of the
centrosymmetric-noncentrosymmetric structural phase transition in
bulk~\cite{TKimura}. The (003) integrated intensity at
off-resonance energy 6.532 keV shows a normal behavior, i.e.,
decreases with temperature due to thermal vibrations, and will
disappear at the structural phase transition point. The lower
temperature of 440 K also coincides with a transition temperature
found in bulk; in this case the phase transition involved is one
without a symmetry change. Which of these two temperatures marks
the onset of ferroelectricity is currently under debate. At high
temperatures, it should be
noted, it is not trivial at all to prove ferroelectricity of a
material. At any rate, the present RXS results prove that there is
a strong connection between the ferroelectricity and orbital order
in BiMnO$_3$. Of course, the orbital order also determines the
exchange interactions and consequently the magnetic ordering.
\section{Calculation of azimuthal angle dependence}
The azimuthal angle dependence of the scattering intensity from
the anomalous part proportional to $2(-1)^{\frac{L-1}{2}}i$ of
Eq.~(\ref{eqSF2}) can be calculated for the experimental
configuration shown in Fig.~2(a). The conversion matrix
($\mathcal{V}$) from the laboratory frame ($x_{\rm L}y_{\rm
L}z_{\rm L}$) to the pseudo-cubic crystallographic frame
($\bar{x}\bar{y}\bar{z}$) is given by
\begin{equation}
\mathcal{V} = \frac{1}{\sqrt{6}}\left(
\begin{array}{ccc}
-\sqrt{3} & -1 & \sqrt{2} \\
\sqrt{3} & -1 & \sqrt{2} \\
0 & 2 & \sqrt{2} \\
\end{array}
\right).
\end{equation}
The polarization vector of the $\sigma$, $\sigma'$, and $\pi{'}$
polarizations under azimuthal rotation by the amount of $\psi$ can
be written in the laboratory frame as follows:
\begin{eqnarray}
\left|\sigma\right>_{\psi} &=& \left|\sigma'\right>_{\psi} = \mathcal{R}(\psi)\left(\begin{array}{c} 1 \\
0
\\ 0
\\\end{array}\right), \\
\left|\pi'\right>_{\psi} &=& \mathcal{R}(\psi)\left(\begin{array}{c} 0 \\ -\sin{\theta} \\
\cos{\theta} \\\end{array}\right),
\end{eqnarray}
where $\theta$ (half the scattering angle $2\theta$) is the Bragg angle, and
$\mathcal{R}(\psi)$ is the azimuthal rotation matrix given by
\begin{equation}
\mathcal{R}(\psi) = \left(
\begin{array}{ccc}
\cos{\psi} & -\sin{\psi} & 0 \\
\sin{\psi} & \cos{\psi} & 0 \\
0 & 0 & 1 \\
\end{array}
\right).
\end{equation}
From these quantities the scattering intensity caused by ATS with
$\sigma$ as the polarization of the incident photons is easily
calculated for the two channels, $\sigma\,\rightarrow\,\sigma'$
and $\sigma\,\rightarrow\,\pi'$:
\begin{equation}
I_{\sigma\rightarrow\sigma', \pi'} = \left|\left<\sigma', \pi'\right|_{\psi}\mathcal{V}^{\dag} \left(
\begin{array}{ccc}
2 f_{\Delta} & 0 & 0 \\
0 & -2 f_{\Delta} & 0 \\
0 & 0 & 0 \\
\end{array} \right) \mathcal{V} \left|\sigma\right>_\psi \right|^2
\end{equation}
Simplifying this equation yields the scattering intensities as
\begin{eqnarray}
I_{\sigma\rightarrow\sigma'} &=&
\frac{4}{3}f_{\Delta}^2\sin^2{2\psi}, \\
I_{\sigma\rightarrow\pi'} &=&
\frac{4}{3}f_{\Delta}^2(\sqrt{2}\cos{\theta}\cos{\psi} +
\cos{2\psi}\sin{\theta})^2.
\end{eqnarray}
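As an illustration, the matrix element $\left<\sigma',\pi'\right|_{\psi}\mathcal{V}^{\dag}\,{\rm diag}(2f_{\Delta},-2f_{\Delta},0)\,\mathcal{V}\left|\sigma\right>_{\psi}$ can be evaluated numerically and compared with the closed forms above. The sketch below (with an arbitrary overall scale for $f_{\Delta}$) is our own cross-check, not part of the original analysis:

```python
import numpy as np

f_delta = 1.0    # arbitrary overall scale for f_Delta

# Conversion matrix V (laboratory -> pseudo-cubic) and ATS tensor.
V = np.array([[-np.sqrt(3), -1.0, np.sqrt(2)],
              [ np.sqrt(3), -1.0, np.sqrt(2)],
              [ 0.0,         2.0, np.sqrt(2)]])/np.sqrt(6)
F = 2.0*f_delta*np.diag([1.0, -1.0, 0.0])
M = V.T @ F @ V                      # scattering tensor in the lab frame

def R(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def intensities(psi, theta):
    """|<pol'|M|sigma>|^2 for the two analyzer channels."""
    sig = R(psi) @ np.array([1.0, 0.0, 0.0])
    piP = R(psi) @ np.array([0.0, -np.sin(theta), np.cos(theta)])
    return (sig @ M @ sig)**2, (piP @ M @ sig)**2

def closed_forms(psi, theta):
    """Closed-form intensities quoted in the text."""
    I_ss = (4.0/3.0)*f_delta**2*np.sin(2*psi)**2
    I_sp = (4.0/3.0)*f_delta**2*(np.sqrt(2)*np.cos(theta)*np.cos(psi)
                                 + np.cos(2*psi)*np.sin(theta))**2
    return I_ss, I_sp
```

Since $\mathcal{V}$ is real and orthogonal, $\mathcal{V}^{\dag}=\mathcal{V}^{T}$, and the direct evaluation reproduces the quoted $\sin^2 2\psi$ and $(\sqrt{2}\cos\theta\cos\psi+\cos 2\psi\sin\theta)^2$ forms.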
There is one more factor to be considered before comparison with
the experimental data is attempted, that is, the existence of
twins has to be taken into account. The films grown on
SrTiO$_3$(111) include three kinds of stacking sequences of
orbitals, i.e., $\rm Z/X/Z/Y/\ldots$, $\rm X/Y/X/Z/\ldots$, and
$\rm Y/Z/Y/X/\ldots$. Note that the growth direction, which is
the cubic [111] direction, is perpendicular to the monoclinic (001)
or $ab$-planes. Thus the measured intensities are the ones
obtained after averaging over the twins:
\begin{eqnarray}
I_{\sigma\rightarrow\sigma'}^{avr} &=& \frac{2}{3}f_{\Delta}^2, \\
I_{\sigma\rightarrow\pi'}^{avr} &=&
\frac{f_{\Delta}^2}{3}(3+\cos{2\theta} +
2\sqrt{2}\sin{2\theta}\cos{3\psi}).
\end{eqnarray}
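One way to implement the twin average numerically is to assume that the three stacking variants are equivalent to rigid rotations of the single-domain pattern by $\pm 120^{\circ}$ about the scattering vector, i.e., $\psi \rightarrow \psi \pm 2\pi/3$ (our reading of the geometry). Averaging the single-domain intensities over these three settings indeed reproduces the twin-averaged expressions:

```python
import numpy as np

f2 = 1.0    # f_Delta^2, arbitrary units

def I_ss(psi):
    """Single-domain sigma -> sigma' intensity."""
    return (4.0/3.0)*f2*np.sin(2*psi)**2

def I_sp(psi, theta):
    """Single-domain sigma -> pi' intensity."""
    return (4.0/3.0)*f2*(np.sqrt(2)*np.cos(theta)*np.cos(psi)
                         + np.cos(2*psi)*np.sin(theta))**2

psi = np.linspace(0.0, 2*np.pi, 49)
theta = 0.4                                  # arbitrary Bragg angle (rad)
shifts = (0.0, 2*np.pi/3, -2*np.pi/3)        # the three twin settings

avg_ss = np.mean([I_ss(psi + d) for d in shifts], axis=0)
avg_sp = np.mean([I_sp(psi + d, theta) for d in shifts], axis=0)

# Twin-averaged closed forms quoted in the text:
ref_ss = (2.0/3.0)*f2*np.ones_like(psi)
ref_sp = (f2/3.0)*(3 + np.cos(2*theta)
                   + 2*np.sqrt(2)*np.sin(2*theta)*np.cos(3*psi))
```

Only the $\cos 3\psi$ harmonic survives the three-fold average, which is why the $\sigma\rightarrow\sigma'$ channel becomes flat while $\sigma\rightarrow\pi'$ retains a three-fold oscillation.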
After twin averaging, the calculated expressions as a function of
$\psi$ agree with the experimental results depicted in
Fig.~\ref{BMOPol}. The data for the $\sigma\rightarrow\sigma'$
channel do not show any $\psi$ dependence, whereas the
$\sigma\rightarrow\pi'$ channel exhibits a characteristic three-fold
symmetry.
\section{Conclusions}
The resonant x-ray scattering technique was used with epitaxial
thin films to probe the orbital ordering of a multiferroic
compound, $\rm BiMnO_3$. ATS reflection was observed at (003) in
the reciprocal space as expected, and the characteristic
three-fold oscillation showed up as a function of azimuthal angle.
The experimental results agree with the calculation based on the
orbital ordering when twin effects are taken into account. The
peak intensity was followed as a function of temperature to high
temperatures; it rapidly decreases with temperature and disappears
at $T_{\rm o}\,\approx\,$770 K. Thus orbital ordering occurs
concurrent with the centrosymmetric-noncentrosymmetric structural
phase transition seen in bulk. The peak intensity also shows an
anomaly at 440 K, where again a phase transition was observed in
bulk. Although there is no consensus about the ferroelectric phase
transition point in BiMnO$_3$, being either 440 K or 770 K, the
orbital ordering is closely connected to the ferroelectricity in
the system in either case.
\acknowledgments
We wish to thank SRC-MOST/KOSEF for financial support. Synchrotron
measurements were done at the Pohang Light Source, operated by
POSTECH and MOST.
\section{Introduction}
The aim of this paper is to demonstrate that a vector meson
dominance model generalized to comply with Bjorken scaling (GVMD)
contains the Color Coherent Phenomena anticipated previously
within the parton model and/or for the hard PQCD regime.
This explains the ability of GVMD
to describe large sets of data and demonstrates once more that
small-$x$ physics of deep inelastic processes is a nontrivial interplay of
soft and hard QCD physics.
Simple vector meson dominance (VMD) models (for reviews see
\cite{Fey}, \cite{Bauer}, \cite{Shaw-book} and references therein)
successfully describe processes involving photon-hadron interactions
at low ``photon mass'' $Q^2$. However, such models, containing a
finite number of vector mesons, do not reproduce the approximate
Bjorken scaling of $\sigma_{\gamma^{\ast} N}$ at large $Q^2$.
In order to restore Bjorken scaling within GVMD, the
photon was represented as an infinite sum of vector mesons
\cite{Shaw-book}. The convergence of the sum within this model
requires that the cross section of the
meson-nucleon interaction decrease with the mass of the meson. In
such models nuclear shadowing dies away as $Q^2$ increases, which
contradicts recent experiments \cite{exp1}, which reveal only a
weak dependence of $F_{2A}(x,Q^2)$ on $Q^2$.
This problem does not occur in generalized vector dominance (GVMD)
models, where non-diagonal transitions among mesons have been
introduced. The model we use conjectures that there are transitions
only between mesons with neighboring masses \cite{Schildknecht}, \cite{Shaw}.
The scattering matrix, {\bf S}, then, acquires non-diagonal
elements in the basis of vector mesons. The model describes reasonably
well Bjorken scaling of $\sigma_{\gamma^{\ast} N}$
\cite{Schildknecht}, \cite{Shaw} and nuclear shadowing in DIS \cite{Shaw}.
Alternatively, one can expand the photonic state in states with a
particular cross section of interaction with the target
\cite{Walker}.
This is the so-called method of cross section fluctuations. The
advantage of this approach is that different states, having
different interaction cross sections, do not mix; therefore the
scattering matrix {\bf S} (or {\bf T}) is diagonal in such a basis.
The physical ground for this approach is the fact that
configurations of a different geometrical size are
present in the photon. It is known that the strength
of the interaction of such a configuration depends
on its geometrical size, consequently the photon can
be represented as a superposition of eigenstates of
the {\bf T}-matrix, $|\psi_{n}\rangle$:
\begin{equation}
|\gamma \rangle=\sum c_{n}|\psi_{n} \rangle \label{dec},
\end{equation}
which interact with the target with different strengths:
\begin{equation}
{\it {\bf T}}|\psi_{n} \rangle=\sigma_{n}|\psi_{n} \rangle .
\end{equation}
It is convenient to introduce the distribution over cross
sections $P_{\gamma}(\sigma)$, which gives the probability
for the photon to interact with the target with the cross
section $\sigma$. Having found the set of eigenvalues $\sigma_{n}$
and corresponding coefficients $c_{n}$, one can reconstruct
the distribution $P_{\gamma}(\sigma)$ according to the rule
\cite{Walker},\cite{rule}:
\begin{equation}
P_{\gamma}(\sigma)=\sum_{n} |c_{n}|^2 \delta (\sigma-\sigma_{n}) \label{P} .
\end{equation}
There are no methods allowing for a calculation
of the distribution $P_{\gamma}(\sigma,Q^2)$ from the first principles,
except for $\sigma \rightarrow 0$. In the present paper we evaluate
$P_{\gamma}(\sigma,Q^2)$ within the framework of the GVMD model
and find a conspiracy between hard and soft physics
suggested initially within the parton model \cite{Bjorken}:
although the effective cross section decreases as ${1/Q^2}$,
the probability of configurations interacting with a typical hadronic
cross section is also ${1/Q^2}$.
Thus the significant contribution of non-diagonal transitions resolves the
Gribov puzzle \cite{Gribov}, i.e., the contradiction between pre-QCD
ideas and the approximate Bjorken scaling of deep inelastic processes.
We study
asymptotic properties of $P_{\gamma}(\sigma,Q^2)$ at small $\sigma$ and
compare predictions obtained in the GVMD model and QCD.
Once again we want to point out that the very fact of existence of
the distribution over cross sections $P_{\gamma}(\sigma,Q^2)$ proves that
there are configurations of different strengths in the photon.
\section{Evaluation of $P_{\gamma}(\sigma,Q^2)$ within GVMD model }
At high energies a photon interacts with a target by means of its
hadronic components. The generalized vector meson dominance model
that we use \cite{Schildknecht}, \cite{Shaw} can be expressed as the following decomposition:
\begin{equation}
|\gamma\rangle=\sum_{n}^{\infty} \frac{e}{f_{n}} \frac{M_{n}^2}
{M_{n}^2+Q^2} |\rho_{n}\rangle \label{vmd}.
\end{equation}
Parameters of the model are chosen such that
\begin{equation}
M_{n}^2=M_{0}^2 (1+2n) ,
\end{equation}
where $M_{0}$=0.77 GeV, assuming $|\rho_{0}\rangle=|\rho\rangle$, and
\begin{equation}
\frac{M_{n}^2}{f_{n}^2}=\frac{M_{0}^2}{f_{0}^2} ,
\end{equation}
where $f_{0}^{2}/4\pi=2.36$ .
Note that
for simplicity we consider only one kind of
vector mesons here, discarding contributions of $\omega$ and $\phi$ mesons.
The inclusion of additional flavors would not change qualitative conclusions
of this paper.
The total photoabsorption cross section for transverse photons on
nucleons is \cite{Schildknecht}, \cite{Shaw}:
\begin{equation}
\sigma_{T}(s,Q^2)=\sum_{nm} \frac{e}{f_{n}}
\frac{M_{n}^2}{M_{n}^2+Q^2}
\Sigma_{nm}\frac{e}{f_{m}} \frac{M_{m}^2}{M_{m}^2+Q^2} \label{ttot}.
\end{equation}
$\Sigma_{nm}$ is the scattering matrix in the basis of vector mesons.
Assuming that there are transitions only between the mesons with
neighboring masses, this matrix takes up a symmetrical
tridiagonal form \cite{Schildknecht}, \cite{Shaw} :
\begin{equation}
\Sigma_{nm}=\sigma_{0}\delta_{nm}-\frac{\sigma_{0}}{2}
\frac{M_{n}}{M_{n+1}}(1-0.3\frac{M_{0}^2}{M_{n}^2})\delta_{nm\pm1} ,
\label{sigmaexpl}
\end{equation}
where $\sigma_{0}$=25 mb is the total cross section of interaction of
vector mesons with the target.
Non-diagonal elements of the matrix $\Sigma_{nm}$ are chosen in such
a way that at large masses $M_{n}$ their first terms
cancel diagonal elements. Their second terms have the form of a
simple (nongeneralized)
vector meson dominance model with the cross section inversely
proportional to the mass squared of the mesons. Thus, the model
\cite{Schildknecht}, \cite{Shaw} reproduces approximate Bjorken scaling at relatively
large $Q^2$ in spite of the fact that the cross section of the
diagonal transitions does not decrease with $M^2_{n}$.
We want to stress that within conventional approaches such as the
parton model, PQCD, nonrelativistic quark models of a hadron,
cross sections of the scattering of excited hadronic states
off a hadron target are not very different (may be larger) than
the cross section of the ground state off the same target.
The scattering matrix {\bf T} is diagonal in the basis of
eigenvectors $|\psi_{n}\rangle$ with eigenvalues $\sigma_{n}$
giving possible cross sections. Thus, the problem of relating
the two formalisms reduces to finding
eigenvalues and eigenvectors
of $\Sigma_{nm}$. Having found the representation of the vector of
state of each meson in the basis of eigenvectors, and
using \ (\ref{vmd}), one can find coefficients $c_{n}$
in decomposition \ (\ref{dec}) and, thus, reconstruct
$P_{\gamma}(\sigma,Q^2)$ according to \ (\ref{P}).
The numerical solution to the problem was carried out
with the help of the computer software {\it Mathematica} for 10 mesons.
The graphic solution is given in Fig.~1 for $Q^2$=0, 1, 2 GeV$^2$
(we plot $f_{0}^2/e^{2}\cdot P_{\gamma}(\sigma,Q^2)$).
For such $Q^2$ it is sufficient to take into account only the first ten mesons.
A more universal characteristic, which depends only weakly on the
number of vector mesons, is the set of moments of the
distribution $P_{\gamma}(\sigma,Q^2)$, defined as:
\begin{equation}
\langle \sigma^{n} \rangle=\int P_{\gamma}(\sigma,Q^2) \sigma^{n} d \sigma .
\end{equation}
We give the first five moments computed for our specific example of
10 vector mesons in Table \ (\ref{Table1}), although the moments are
not sensitive to the number of mesons taken into consideration.
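The diagonalization described above is straightforward to reproduce. The sketch below (Python/NumPy rather than {\it Mathematica}) constructs $\Sigma_{nm}$ of Eq.~(\ref{sigmaexpl}) for 10 mesons and builds the eigen cross sections and weights, with couplings taken from Eq.~(\ref{vmd}) and the overall factor $e^2/f_0^2$ dropped; the symmetric placement of the off-diagonal elements is our reading of Eq.~(\ref{sigmaexpl}), so treat this as illustrative:

```python
import numpy as np

M0sq, sigma0, L = 0.77**2, 25.0, 10     # M_0^2 (GeV^2), sigma_0 (mb), mesons
Msq = M0sq*(1 + 2*np.arange(L))         # M_n^2 = M_0^2 (1 + 2n)

# Symmetric tridiagonal scattering matrix, Eq. (sigmaexpl).
Sigma = sigma0*np.eye(L)
for n in range(L - 1):
    off = -0.5*sigma0*np.sqrt(Msq[n]/Msq[n+1])*(1 - 0.3*M0sq/Msq[n])
    Sigma[n, n+1] = Sigma[n+1, n] = off

def weights(Q2):
    """Eigen cross sections sigma_k and weights |c_k|^2 at given Q^2."""
    a = np.sqrt(M0sq/Msq)*Msq/(Msq + Q2)    # (f_0/f_n) M_n^2/(M_n^2 + Q^2)
    w, U = np.linalg.eigh(Sigma)            # sigma_k and eigenvectors
    c = U.T @ a                             # projections of the photon state
    return w, c**2

def moment(Q2, p):
    """p-th moment <sigma^p> of P_gamma(sigma, Q^2)."""
    w, P = weights(Q2)
    return np.sum(P*w**p)
```

By construction $\sum_k |c_k|^2\sigma_k$ coincides with the double sum of Eq.~(\ref{ttot}), and all eigenvalues lie between 0 and $2\sigma_0$.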
{}From Fig. 1 one can see the following general tendencies of behavior
of $P_{\gamma}(\sigma,Q^2)$:
\begin{enumerate}
\item{$P_{\gamma}(\sigma,Q^2) \propto 1/ \sigma$ at small $\sigma$}
\item{$P_{\gamma}(\sigma,Q^2) \rightarrow 0$ at large $\sigma$}
\item{$P_{\gamma}(\sigma,Q^2)$ decreases {\it on average} with
increase of $\sigma$}
\item{$\langle \sigma^{n} \rangle \propto 1 / Q^2$ for n=1,2 \dots}
\end{enumerate}
Generally speaking, though the exact analytical form of the
distribution $P_{\gamma}(\sigma,Q^2)$ depends on the number of vector
mesons taken into account (i.e., on the dimension of $\Sigma_{nm}$), the
general tendencies listed above reflect universal properties of the
distribution $P_{\gamma}(\sigma,Q^2)$.
For calculation purposes it is useful to have an analytical
expression of $P_{\gamma}(\sigma,Q^2)$ defined for all $\sigma$. We
suggest the following parameterization, which reflects the general
properties of $P_{\gamma}(\sigma,Q^2)$:
\begin{eqnarray}
P_{\gamma}(\sigma,Q^2)&=&N\Big(\frac{\sigma_{0}}{\sigma}\,
\Theta\Big(\frac{1}{C} \frac{\mu_{1}^2}{\mu_{1}^2+Q^2}- \sigma / \sigma_{0}\Big)+\frac{\mu^2}{\mu^2+Q^2}\,\Pi(\sigma/\sigma_{0}) \Big), \nonumber\\
\Pi(\sigma/\sigma_{0})&=&6.03 \, \exp\big(-15.15 \,(\sigma/\sigma_{0}-0.7)^2\big), \nonumber\\
\mu_{1}^2&=&0.32 \ {\rm GeV}^2, \qquad \mu^2=0.39 \ {\rm GeV}^2, \nonumber\\
C&=&0.589, \qquad N=0.387 . \label{param}
\end{eqnarray}
Here $\Theta$ is a step function. The parameterization reproduces
the first three moments of the actual distribution
$P_{\gamma}(\sigma,Q^2)$ quite accurately for 0 $< Q^2 <$ 1 GeV$^2$:
the total cross section is given to an accuracy of 4\%, the second
moment to 8\%, and the third moment to 15\%.
One can see that this distribution contains a nontrivial interplay
between hard and soft physics. The first term is a ``hard'' piece
with a cross section proportional to $1/Q^2$ and a probability
which does not depend on $Q^2$. The second term is a ``soft'' piece
with a large cross section and a probability which dies away as
$1/Q^2$. While both terms contribute to the total cross section, the
dominant contribution to the higher moments comes from the soft part of
$P_{\gamma}(\sigma,Q^2)$, which guarantees that nuclear shadowing
is a leading twist effect. One can also see this from the discrete
versions of the distribution $P_{\gamma}(\sigma,Q^2)$ for 10 and 20
vector mesons.
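The moments of the parameterization~(\ref{param}) can be evaluated by simple numerical quadrature. The sketch below works in the dimensionless variable $s=\sigma/\sigma_0$ and in units of $e^2/f_0^2$ (our normalization reading), and illustrates that the ratio $\langle\sigma^2\rangle/\langle\sigma\rangle$ of the parameterized distribution decreases between $Q^2=0$ and $Q^2=1$ GeV$^2$:

```python
import numpy as np

mu1sq, musq, C, N = 0.32, 0.39, 0.589, 0.387   # parameters of Eq. (param)

def P_param(s, Q2):
    """Parameterized distribution in s = sigma/sigma_0."""
    s_cut = (1.0/C)*mu1sq/(mu1sq + Q2)          # hard-piece cutoff
    hard = np.where(s < s_cut, 1.0/s, 0.0)
    soft = (musq/(musq + Q2))*6.03*np.exp(-15.15*(s - 0.7)**2)
    return N*(hard + soft)

def moment(Q2, p, ns=20001, smax=2.0):
    """<s^p> by trapezoidal quadrature (the integrand is finite for p >= 1)."""
    s = np.linspace(1e-6, smax, ns)
    f = P_param(s, Q2)*s**p
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(s))
```

The hard piece contributes $s_{\rm cut}$ to $\langle s\rangle$ but only $s_{\rm cut}^2/2$ to $\langle s^2\rangle$, so the higher moments are dominated by the soft Gaussian bump near $s\simeq 0.7$, whose weight falls as $1/Q^2$.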
We have shown that this particular GVMD model leads to the existence
of a nontrivial distribution over cross sections, which agrees with
the notion that the photon consists of $q\bar{q}$ configurations
having different geometrical sizes and therefore interacting with
different cross sections. This phenomenon is called
{\it cross section fluctuations}.
\section{Nuclear shadowing}
In the total cross section of deep inelastic $\gamma^{\ast} \, A$
scattering nuclear shadowing is predominantly inelastic shadowing,
that is due to high mass intermediate states \cite{Gribov}. The
leading contribution (double rescattering) can be expressed in terms
of the cross section of inclusive forward diffractive dissociation
of the virtual photon into states ``$X$'':
\mbox{$\gamma^{\ast} +N \rightarrow X+N$}.
The natural assumption that any hadron state interacts with
sufficiently heavy nuclei with the same cross section
$\pi R_{A}^2$ leads to the Gribov relationship
between the total photoabsorption cross section and the cross
section of the process $e^{+}e^{-} \rightarrow {\rm hadrons}$,
which contradicts Bjorken scaling \cite{Gribov2}.
The idea of how to resolve the Gribov puzzle was suggested
by Bjorken \cite{Bjorken}.
He applied the parton model to the light cone wave function of the
energetic photon and concluded
that the momentum of the quark (antiquark) in the photon should
be aligned along the momentum of the photon. Such rare (probability
$\propto 1/Q^2$), large-size, asymmetric configurations
give the dominant contribution to the cross section; this is the
aligned jet model \cite{Bjorken}.
Thus most quark-antiquark configurations in the photon
are sterile within the parton model approximation.
In QCD this picture is
modified by identifying sterile states as
colorless quark-gluon configurations of a spatially
small size having small interaction cross
sections and by including QCD evolution of sterile states.
QCD evolution leads to a fast increase with energy of the cross section
of interaction of quark configurations having large $k_{t}$ and
to the parton bremsstrahlung \cite{physrep88}.
Thus the QCD improved aligned jet model predicts a nontrivial
competition between
two contributions to $P_{\gamma}(\sigma,Q^2)$. The soft piece
corresponds to a usual hadronic cross section but with
probability $\propto 1/Q^2$. The hard piece of $P_{\gamma}(\sigma,Q^2)$
comes from cross sections $\sigma \propto 1/Q^2$ whose probability
does not decrease with $Q^2$.
Another useful representation of the
cross section of inclusive forward diffractive dissociation
of the virtual photon into states ``$X$'', and of
deep inelastic $\gamma^{\ast} \, A$ scattering, can be given
in terms of cross section fluctuations \cite{MP}.
Within this representation nuclear shadowing in DIS is a
consequence of significant cross section fluctuations.
Nuclear shadowing in DIS is given by a series in the moments
$\langle \sigma^{n} \rangle$; the first term in this series is
\mbox{$\propto \langle \sigma^{2} \rangle / \langle \sigma \rangle$}
\cite{MP},\cite{KL}.
The diagonal vector meson dominance model assumes that the cross
section of the interaction of hadronic components of the photon
falls off as the inverse mass squared of the component,
$\sigma_{h}\propto 1/m^2_{h}$.
So within this model
$\langle \sigma \rangle \propto 1/Q^2$ and
$\langle \sigma^2 \rangle \propto \ln(Q^2)/Q^4$, which makes
\begin{equation}
\frac{\langle \sigma^2 \rangle}{\langle \sigma \rangle} \propto \frac{\ln(Q^2)}{Q^2}.
\end{equation}
Therefore within this model nuclear shadowing dies out at large $Q^2$
as a power of $1/Q^2$ which contradicts the
experimental data.
The generalized vector meson dominance model that we use resembles the
aligned jet model in the sense that the meson-nucleon cross section
does not depend on the mass of the meson, which preserves the
scaling of shadowing. The cancellation of diagonal terms by
off-diagonal ones brings an additional factor of $1/m_{h}^2$,
which restores Bjorken scaling. Therefore, these two properties
exhibited by the GVMD model are possible only when the photon
has both ``hard'' and ``soft'' components.
The ability of the GVMD model to describe the experimental
data stems from the fact that the model takes into account basic
coherent QCD phenomena: cross section fluctuations, which arise due
to non-diagonal transitions, and the Color Transparency Phenomenon,
which was taken into account in the particular choice of
non-diagonal terms.
Using the exact form of $\Sigma_{nm}$ (\ref{sigmaexpl}) one can present
with good accuracy the total photoabsorption cross
section at $Q^2 > 1$ GeV$^2$:
\begin{equation}
\sigma_{T}(s,Q^2>1\,{\rm GeV}^2)=
\frac{e^2}{f^2_{0}}\sigma_{0}\sum_{n}\frac{M^4_{0}}
{(M^2_{n}+Q^2)^2}\Big(\frac{M^2_{n}}{M^2_{n}+Q^2}+0.3\Big) ,
\end{equation}
which exhibits $1/Q^2$ behavior.
Using the formalism of cross section fluctuations, one can show that
the second moment of the cross section in the adopted model is given as
\begin{equation}
\langle \sigma^2(s,Q^2) \rangle=\sum_{nmk}
\frac{e}{f_{n}} \frac{M_{n}^2}{M_{n}^2+Q^2}\Sigma_{nk}
\Sigma_{km}\frac{e}{f_{m}} \frac{M_{m}^2}{M_{m}^2+Q^2} \label{tot}.
\end{equation}
It turns out that $\Sigma_{nk}\Sigma_{km} \propto 1/M^2_{n}$ for
large $n$, $m$, $k$, which leads to
\begin{equation}
\langle \sigma^2(s,Q^2) \rangle \propto \frac{1}{Q^2}
\end{equation}
for $Q^2 > 1$ GeV$^2$.
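As a consistency check, $\langle\sigma^2\rangle$ computed directly from Eq.~(\ref{tot}) must coincide with the eigenstate sum $\sum_k |c_k|^2\sigma_k^2$. The sketch below (same conventions as before: symmetric off-diagonals, overall $e^2/f_0^2$ dropped) verifies this identity numerically:

```python
import numpy as np

M0sq, sigma0, L = 0.77**2, 25.0, 10
Msq = M0sq*(1 + 2*np.arange(L))
Sigma = sigma0*np.eye(L)
for n in range(L - 1):
    off = -0.5*sigma0*np.sqrt(Msq[n]/Msq[n+1])*(1 - 0.3*M0sq/Msq[n])
    Sigma[n, n+1] = Sigma[n+1, n] = off

def coupling(Q2):
    # (f_0/f_n) M_n^2/(M_n^2 + Q^2), overall factor e/f_0 dropped
    return np.sqrt(M0sq/Msq)*Msq/(Msq + Q2)

def m2_matrix(Q2):
    """<sigma^2> directly from Eq. (tot): a . Sigma^2 . a"""
    a = coupling(Q2)
    return a @ Sigma @ Sigma @ a

def m2_eigen(Q2):
    """<sigma^2> from the eigen decomposition of Sigma."""
    w, U = np.linalg.eigh(Sigma)
    c = U.T @ coupling(Q2)
    return np.sum(c**2 * w**2)
```

The agreement is exact (up to rounding) because $\Sigma^{2}$ and $\Sigma$ share the same eigenvectors.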
Numerical calculations were carried out for 10 vector mesons. The
dependence of
$\langle \sigma^2(s,Q^2) \rangle / \langle \sigma(s,Q^2) \rangle$ on
$Q^2$ is presented in Fig.~2, which shows that for $Q^2 > 5$ GeV$^2$
shadowing becomes independent of $Q^2$.
Note that since our aim in this paper is to investigate soft QCD physics,
we ignore QCD evolution due to hard radiation. In comparisons with
experimental data, QCD evolution should be taken into account.
\section{Asymptotic behavior of $P_{\gamma}(\sigma,Q^2)$ at small $\sigma$}
Using the explicit form of $\Sigma_{nm}$ given by
Eq.\ (\ref{sigmaexpl}), one can write the total
photoabsorption cross section \ (\ref{ttot}):
\begin{equation}
\sigma_{T}(s,Q^2)=\sum_{n} \frac{e}{f_{n}}
\frac{M_{n}^2}{M_{n}^2+Q^2}(\sigma_{0}
\frac{e}{f_{n}} \frac{M_{n}^2}{M_{n}^2+Q^2}+2c_{n}
\frac{e}{f_{n+1}} \frac{M_{n+1}^2}{M_{n+1}^2+Q^2}) \label{start},
\end{equation}
where
\begin{equation}
c_{n}=-\frac{\sigma_{0}}{2} \frac{M_{n}}{M_{n+1}}(1-0.3
\frac{M_{0}}{M_{n+1}}) .
\end{equation}
Numerical studies show that if one wants to study the limit
of small $\sigma$, one should increase the number of vector mesons
included in sum \ (\ref{start}) (the dimensionality of $\Sigma_{nm}$).
As a result the eigenvalues (possible cross sections) $\sigma_{n}$
fill the interval [0,2$\sigma_{0}$] more and more densely. Therefore,
the minimal eigenvalue (cross section) decreases with increase of the
number of vector mesons $L$.
Note that when $L$ is a large number, $\frac{M_{0}^2}{M_{L}^2+Q^2}$ is
a small parameter, hence the expression for $\sigma_{T}(s,Q^2)$ can be
simplified further:
\begin{equation}
\sigma_{T}(s,Q^2)=2e^2 \frac{M_{0}^2}{f_{0}^2} \sum_{L}
\frac{M_{L}^2}{(M_{L}^2+Q^2)^2} \sigma_{0} (0.15
\frac{M_{0}^2}{M_{L}^2}+\frac{M_{0}^2}{M_{L}^2+Q^2}) .
\end{equation}
The ultimate goal of this section
is to express $\sigma_{T}(s,Q^2)$ as an integral over cross sections.
We have explained that large masses $M^2_{L}$ correspond to small
cross sections, therefore we use the following fit for the calculated
minimal cross section (eigenvalue of $\Sigma_{nm}$) for
large $M^2_{L}$ (large $L$) :
\begin{equation}
\sigma(M^2_{L})=A\sigma_{0} \frac{M_{0}^2}{M^2_{L}} \label{model}.
\end{equation}
Numerical work with $\Sigma_{nm}$ reveals that $A$ does not
depend significantly on the number of mesons $L$ once it is large
enough: for example, for $L=85$, $A=2.12$; for $L=100$, $A=2.06$;
for $L=200$, $A=1.88$. In our numerical analysis we use {\it the
averaged} value $A=2$.
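The extraction of $A$ can be sketched numerically: the snippet below computes the minimal eigenvalue of the tridiagonal $\Sigma_{nm}$ as a function of the number of mesons and forms $A=\sigma_{\rm min}M_{L}^2/(\sigma_0 M_0^2)$. The exact numbers depend on the convention adopted for the off-diagonal elements, so this is an illustration rather than a reproduction of the quoted values:

```python
import numpy as np

M0sq, sigma0 = 0.77**2, 25.0

def min_eig(L):
    """Smallest eigenvalue of the L-meson Sigma matrix and the largest M_n^2."""
    Msq = M0sq*(1 + 2*np.arange(L))
    S = sigma0*np.eye(L)
    for n in range(L - 1):
        off = -0.5*sigma0*np.sqrt(Msq[n]/Msq[n+1])*(1 - 0.3*M0sq/Msq[n])
        S[n, n+1] = S[n+1, n] = off
    return np.linalg.eigvalsh(S)[0], Msq[-1]

for L in (85, 100, 200):
    smin, Mmax = min_eig(L)
    A = smin*Mmax/(sigma0*M0sq)   # fit sigma_min = A sigma_0 M_0^2 / M_L^2
    print(L, A)
```

By Cauchy interlacing, enlarging the matrix can only lower the minimal eigenvalue, which is the statement that the smallest available cross section decreases as more mesons are included.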
Since for large masses (large $L$) the relative difference between
neighboring cross sections $\sigma(M^2_{L})$ is small, it is accurate
to replace this sum by an integral over cross sections $\sigma$
(keeping in mind that we integrate over small cross sections):
\begin{equation}
\sigma_{T}(s,Q^2)=\int d\sigma \frac{e^2}{f_{0}^2}\sigma_{0}
\frac{1}{(\sigma / \sigma_{0})^2}\frac{AM^4_{0}}{(AM^2_{0}
\sigma_{0} / \sigma+Q^2)^2} \cdot \Big(0.15+\frac{1}{\sigma /
\sigma_{0}} \frac{AM^2_{0}}{AM^2_{0}\sigma_{0} / \sigma+Q^2} \Big) .
\end{equation}
According to the definition of $P_{\gamma}(\sigma,Q^2)$:
\begin{equation}
\sigma_{T}(s,Q^2)=\int d \sigma P_{\gamma}(\sigma) \sigma .
\end{equation}
Comparing these two expressions, we notice that at small $\sigma$:
\begin{equation}
P_{\gamma}(\sigma,Q^2)= \frac{e^2}{f_{0}^2}\frac{1}{(\sigma /
\sigma_{0})^3}\frac{AM^4_{0}}{(AM^2_{0}\sigma_{0} /
\sigma+Q^2)^2} \cdot \Big(0.15+\frac{1}{\sigma /
\sigma_{0}} \frac{AM^2_{0}}{AM^2_{0}\sigma_{0} / \sigma+Q^2} \Big) .
\end{equation}
One can see that this formula supports our statement that
$P_{\gamma}(\sigma,Q^2) \propto 1/ \sigma$ at small $\sigma$.
It shows that configurations with small cross sections are present
in the photon with a sizable probability, which leads to Color
Transparency Phenomena.
To make a numerical estimate of the coefficient of proportionality, we
present the distribution $P_{\gamma}(\sigma,Q^2)$ as
\begin{equation}
P_{\gamma}(\sigma,Q^2)=e^2\frac{1}{\sigma}I_{VMD}(Q^2) \label{limvmd},
\end{equation}
where the coefficient $I_{VMD}(Q^2)$ as a function of $Q^2$ is given
in Table~\ref{Table2}. It was calculated for $\sigma/\sigma_{0}=0.1$.
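As a cross-check of the $1/\sigma$ behavior, the expression above can be evaluated numerically. The sketch below is illustrative only: the parameter values for $M_0^2$, $\sigma_0$ and $f_0^2$ are placeholder $\rho$-meson-like scales, not the ones used to produce Table~\ref{Table2}; only the $\sigma$-dependence is meaningful.

```python
import numpy as np

# Illustrative evaluation of P_gamma(sigma, Q^2) from the GVMD expression above.
# The parameter values below are placeholders (rho-meson-like scales), NOT the
# ones used in the paper; only the sigma-dependence is meaningful here.
A = 2.0                      # averaged coefficient from the Sigma_nm fits
M0_sq = 0.6                  # M_0^2 in GeV^2 (assumed)
sigma0 = 25.0                # sigma_0 in mb (assumed)
f0_sq = 14.0                 # f_0^2 (assumed)
e_sq = 4.0 * np.pi / 137.0   # e^2

def P_gamma(sigma, Q2):
    """GVMD distribution P_gamma(sigma, Q^2) at small sigma."""
    r = sigma / sigma0
    denom = A * M0_sq / r + Q2               # A M_0^2 sigma_0/sigma + Q^2
    return (e_sq / f0_sq) * (1.0 / r**3) * A * M0_sq**2 / denom**2 \
        * (0.15 + (1.0 / r) * A * M0_sq / denom)

def I_VMD(Q2, r=0.1):
    """Coefficient in P_gamma = e^2 (1/sigma) I_VMD(Q^2), read off at sigma/sigma_0 = r."""
    sigma = r * sigma0
    return sigma * P_gamma(sigma, Q2) / e_sq

# sigma * P_gamma(sigma) is nearly sigma-independent at small sigma, i.e.
# P_gamma is proportional to 1/sigma in that region.
print(round(I_VMD(2.0), 3), round(I_VMD(10.0), 3))
```

With these placeholder parameters, $\sigma P_\gamma$ varies by only a few percent between $\sigma/\sigma_0=0.001$ and $0.002$, consistent with the stated $1/\sigma$ limit.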
There is an alternative approach to the calculation
of the asymptotic (limiting) behavior of $P_{\gamma}(\sigma,Q^2)$
based on QCD, which expresses the answer in terms of the photon
wave function and the interaction cross section of a small-size
$q\bar{q}$ configuration \cite{Str1}:
\begin{equation}
P(\sigma,Q^2)=\frac{d b^2}{d \sigma(b^2)}
\int_{0}^{1}\sum_{\lambda_{1},
\lambda_{2}}|
\psi(z,b=0)_{\lambda_{1},\lambda_{2}}|^2\frac{dz}{4}N_{C}
\label{genfor} .
\end{equation}
Here $\psi(z,\vec{b})_{\lambda_{1},\lambda_{2}}$ is the light-cone wave
function of the photon made of two quarks with momentum fractions $z$
and $1-z$ and helicities $\lambda_{1}$ and $\lambda_{2}$; $\sigma(b^2)$
is the cross section of the $q\bar{q}$ pair of transverse size $b$.
In the momentum space the standard expression for the transversely
polarized photon wave function is \cite{Str2}, \cite{BrLe}:
\begin{eqnarray}
\psi(z,\kappa_{\perp})_{\lambda_{1},\lambda_{2}}&=&ee_{f}\frac{1}
{Q^2+\frac{\kappa_{\perp}^2+m^2}{z(1-z)}} \, \frac{1}{z(1-z)}
\Big(-\delta(\uparrow \uparrow)m(\epsilon^{1}+i \epsilon^{2})+
\delta(\downarrow \downarrow)m(\epsilon^{1}-i \epsilon^{2}) \nonumber\\
&+&\delta(\uparrow \downarrow)((1-2z)\vec{\epsilon}\cdot
\vec{\kappa_{\perp}}+i\vec{\epsilon}\times \vec{\kappa_{\perp}})+
\delta(\downarrow \uparrow)((1-2z)\vec{\epsilon}\cdot
\vec{\kappa_{\perp}}
-i\vec{\epsilon}\times \vec{\kappa_{\perp}}) \Big) .
\end{eqnarray}
Here $e_{f}$ is the electric charge of the quark with flavor
$f$; $\kappa_{\perp}$ is the transverse momentum of the
$q\bar{q}$ pair; $\vec{\epsilon}$ is the polarization vector
of the photon. The arrows stand for the different helicities of the quarks.
Then we perform a Fourier transform into the impact parameter space:
\begin{equation}
\psi(z,b)_{\lambda_{1},\lambda_{2}}=\frac{1}{(2\pi)^2}
\int d^2 \kappa_{\perp} \psi(z,\kappa_{\perp})_{\lambda_{1},
\lambda_{2}}e^{-i \vec{\kappa_{\perp}} \cdot \vec{b_{\perp}}} \label{Fourier1}
\end{equation}
and study the limit $b \rightarrow 0$. The analysis shows that only the
spin-flip components of the wave function give the leading contribution
in the limit $b \rightarrow 0$:
\begin{eqnarray}
\psi(z,b \rightarrow 0)_{\lambda_{1},\lambda_{2}}=
-\frac{i}{2\pi}ee_{f}\frac{1}{b}
\Big(\delta(\uparrow \downarrow)((1-2z)\vec{\epsilon}\cdot
\vec{n_1}&+&i\vec{\epsilon}\times \vec{n_1}) \nonumber\\
+\delta(\downarrow \uparrow)((1-2z)\vec{\epsilon}\cdot
\vec{n_1}&-&i\vec{\epsilon}\times \vec{n_1}) \Big),
\end{eqnarray}
where $\vec{n_1}$ is a unit vector in the $x$-direction. One
immediately sees that the photon wave function is singular
in the limit $b \rightarrow 0$. It is precisely this property that
leads to the singular behavior of $P_{\gamma}(\sigma,Q^2)$ at small $\sigma$.
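The $1/b$ singularity can be checked numerically. The sketch below uses the standard Hankel-transform identity $\int_0^\infty \kappa^2 J_1(\kappa b)/(\kappa^2+\varepsilon^2)\,d\kappa=\varepsilon K_1(\varepsilon b)$ with $\varepsilon^2=z(1-z)Q^2+m^2$, which gives the spin-flip part of the transform up to overall factors; the quark mass value is an assumption, and $K_1$ is evaluated from its integral representation.

```python
import numpy as np

# Modified Bessel K_1 via its integral representation, evaluated by quadrature
# (numpy-only, adequate for the small arguments used below).
def K1(x):
    t = np.linspace(0.0, 20.0, 200001)
    return np.trapz(np.exp(-x * np.cosh(t)) * np.cosh(t), t)

def psi_flip(b, z, Q2, m=0.3):
    """Spin-flip radial profile of the transverse photon wave function in
    impact-parameter space, up to overall factors: eps*K1(eps*b), the order-1
    Hankel transform of kappa/(Q^2 + (kappa^2+m^2)/(z(1-z)))."""
    eps = np.sqrt(z * (1.0 - z) * Q2 + m**2)
    return eps * K1(eps * b)

# K1(x) ~ 1/x as x -> 0, so b * psi_flip(b) -> 1: the wave function diverges
# like 1/b at small transverse size, the source of the singular P(sigma).
for b in (1e-2, 1e-3, 1e-4):
    print(b, b * psi_flip(b, z=0.5, Q2=4.0))
```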
Upon averaging over the two possible polarizations of the
photon (not written explicitly) and the two possible helicities of the
quarks, and integrating over $z$, one arrives at the following expression:
\begin{equation}
\int_{0}^{1}\sum_{\lambda_{1},\lambda_{2}}| \psi(z,b
\rightarrow 0 )_{\lambda_{1},\lambda_{2}}|^2 dz=e^2 e_{f}^2
\frac{1}{b^2} I^{\prime}_{QCD}(Q^2) \label{wf} ,
\end{equation}
where $I^{\prime}_{QCD}(Q^2)$ is some numerical factor.
The effective cross section of a $q\bar{q}$ pair interacting with a
nucleon, $\sigma(b^2)$, as a function of the impact parameter $b$, is
given by \cite{Str1}:
\begin{equation}
\sigma(b^2)=\frac{\pi^2}{3}b^2
xG_{T}(x,\tilde{Q}^2)\alpha_{s}(\tilde{Q}^2)
\label{cross}.
\end{equation}
For $x=10^{-3}$ the effective scale is $\tilde{Q}^2=9/b^2$ \cite{Koepf}.
For small but finite cross sections and, therefore, small $b^2$, the
combination $xG_{T}(x,\tilde{Q}^2)\alpha_{s}(\tilde{Q}^2)$ depends weakly
on $\tilde{Q}^2$; we do not consider this effect in the paper.
Therefore, at small $b^2$, one can write:
\begin{equation}
\frac{d \sigma(b^2)}{d b^2}=
\frac{\pi^2}{3} xG_{T}(x,\tilde{Q}^2)\alpha_{s}(\tilde{Q}^2) \label{deriv}.
\end{equation}
Combining Eqs.~(\ref{genfor}), (\ref{wf}) and (\ref{deriv}),
we obtain in the limit $\sigma \rightarrow 0$:
\begin{equation}
P_{\gamma}(\sigma,Q^2)=e^2\frac{1}{\sigma}I_{QCD}(Q^2) \label{qcdpred},
\end{equation}
where the coefficient $I_{QCD}(Q^2)$ as a function of $Q^2$ is given
in Table~\ref{Table2}.
It was calculated for $\sigma/\sigma_{0}=0.1$ to allow a comparison
with the analogous prediction of GVMD.
One can see from Table~\ref{Table2} that the GVMD model and QCD give
answers of the same order of magnitude.
Since the wave function of the $\rho$-meson
is $\frac{1}{\sqrt 2}(|u \bar{u} \rangle -|d \bar{d} \rangle)$, one
should use $e_{f}^2=1/2$ when summing over quark flavors.
Comparing Eqs.~(\ref{qcdpred}) and (\ref{limvmd}), one can see that
both the GVMD model and QCD predict singular behavior of $P(\sigma)$
as $\sigma \rightarrow 0$: $P_{\gamma}(\sigma) \propto 1/\sigma$. Thus
we conclude that the GVMD model exhibits the phenomenon of Color
Transparency known from QCD, which means that, with a
noticeable probability, the photon contains hadronic configurations
that interact weakly with the target.
\section{Hard diffractive electroproduction of photons}
Another application of GVMD, important for the interpretation
of hard diffractive phenomena at HERA, is
diffractive photon production in deep inelastic scattering
initiated by a highly virtual photon:
$\gamma^{\ast}(Q^2)+p \rightarrow \gamma^{\ast}(Q_{f}^2)+p$.
Within GVMD we found that the ratio of the imaginary
parts of the amplitudes for real-photon
and virtual-photon production is:
\begin{eqnarray}
\frac{{\rm Im}A(\gamma^{\ast}(Q^2)+p\rightarrow \gamma(Q^2=0)+p)}{{\rm Im}A(\gamma^{\ast}(Q^2)+p\rightarrow \gamma^{\ast}(Q^2)+p)}&=& \nonumber\\
\Big(\sum_{nm} \frac{e}{f_{n}}\frac{M_{n}^2}{M_{n}^2+Q^2}\Sigma_{nm}\frac{e}{f_{m}}\Big) &\Big/& \nonumber\\
\Big(\sum_{nm} \frac{e}{f_{n}}\frac{M_{n}^2}{M_{n}^2+Q^2}\Sigma_{nm}\frac{e}{f_{m}} \frac{M_{m}^2}{M_{m}^2+Q^2}\Big) .
\end{eqnarray}
This ratio is shown in Fig.~3; one can see that it grows with $Q^2$.
Although PQCD predicts that this ratio is approximately
$Q^2$-independent
at small $x$ and estimates it to
be $\approx 2$ \cite{freund}, the two
answers are rather close over a wide range of $Q^2$.
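The qualitative growth of this ratio with $Q^2$ can be illustrated with a toy diagonal version of the sums above. The sketch below assumes, purely for illustration, a diagonal $\Sigma_{nn}=\sigma_n$ with $\sigma_n=A\sigma_0 M_0^2/M_n^2$, a $\rho$-like tower $M_n^2=M_0^2(1+2n)$, and couplings $1/f_n^2 \propto 1/M_n^2$; none of these choices reproduces the actual $\Sigma_{nm}$ matrix of the paper.

```python
import numpy as np

# Toy diagonal version of the GVMD ratio above; all model choices here are
# illustrative assumptions, not the paper's full Sigma_nm matrix.
A, sigma0, M0_sq, N = 2.0, 25.0, 0.6, 200
n = np.arange(N)
Mn_sq = M0_sq * (1 + 2 * n)            # rho-like mass tower (assumed)
coupling = 1.0 / Mn_sq                 # ~ e^2/f_n^2 up to a constant (assumed)
sigma_n = A * sigma0 * M0_sq / Mn_sq   # minimal cross sections, as in the fit

def ratio(Q2):
    """Im A(real photon out) / Im A(virtual photon out), diagonal toy model."""
    w = Mn_sq / (Mn_sq + Q2)           # propagator factor M_n^2/(M_n^2+Q^2)
    num = np.sum(coupling * w * sigma_n)
    den = np.sum(coupling * w**2 * sigma_n)
    return num / den

# The denominator carries an extra factor w <= 1, so the ratio exceeds 1
# and grows as Q^2 suppresses the propagator factors.
print([round(ratio(q2), 2) for q2 in (1.0, 5.0, 20.0)])
```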
\section{Conclusions}
We have shown that, using the generalized vector meson dominance
model, one can reconstruct the distribution function
$P_{\gamma}(\sigma,Q^2)$, which gives the probability for the
photon to interact with a target with the cross section $\sigma$.
The very existence of such a distribution supports the idea that
the photon consists of $q\bar{q}$ configurations of different
geometrical sizes even within the nonperturbative QCD regime,
which therefore interact with the target with different cross sections.
We have studied the limiting behavior of $P_{\gamma}(\sigma,Q^2)$ at
$\sigma \rightarrow 0$ and shown that $P(\sigma,Q^2) \propto 1/\sigma$.
That is a very remarkable property which means that there is a
sizable probability of finding a small configuration in the photon.
Moreover, the relative probability to find such a small configuration
in the photon increases when the photon virtuality $Q^2$ grows.
We conclude that the GVMD model describes
Bjorken scaling and nuclear shadowing well because it takes into
account basic coherent QCD phenomena: non-diagonal transitions result
in cross-section fluctuations of the virtual photon, and the specific form
of the non-diagonal terms, chosen in line with the parton model, leads
to a significant probability of spatially small configurations in the
photon wave function and to the
Color Transparency phenomenon.
As in the parton model, we found within GVMD that although the
contribution
of soft QCD to the photon wave function decreases as
$Q^2$ increases, it gives a comparable contribution to the total cross
section and the
dominant contribution to the moments of $P(\sigma,Q^2)$.
Since the model gives approximate Bjorken scaling of
nuclear shadowing, the photon must have ``hard''
and ``soft'' components, as suggested by the aligned jet model
and later confirmed by QCD. This reveals the duality between the
parton language and the GVMD description of the photon's hadronic structure.
Also within the GVMD model we give predictions for the ratio
of the imaginary parts of the amplitudes
of the process $\gamma^{\ast}(Q^2)+ p \rightarrow \gamma^{\ast}
(Q_{f}^2) +p$, which turns out to be consistent with PQCD over
a wide range of $Q^2$.
\section{Acknowledgments}
We thank G. Shaw for very helpful discussions.
We are thankful to D. Schildknecht for pointing out reference \cite{Schildknecht}.
This work was
supported by the U.S. Department of Energy grant number DE-FG02-93ER-40771 and US-Israeli Bi-National Science Foundation Grant number 9200126.
\bibliographystyle{unsrt}
\section{Planning Framework}
\label{sec:approach}
To solve Problem 1, we propose to leverage (i) reachability analysis, which allows us to predict future states of the obstacles in the presence of uncertainty during the periods between intermittent information and (ii) potential fields to navigate the robot in the environment while avoiding the reachable sets associated with the obstacles. Nevertheless, a key challenge in employing reachability analysis (RA) is that
it is computationally expensive, which limits its applicability in online operations. To mitigate this challenge, we adopt a learning-based approach that keeps RA offline, enabling fast online planning and control.
The proposed learning-based framework is summarized in Fig.~\ref{fig:highLevelApproach} and consists of 1) an offline phase where a neural network (NN) controller is trained under RA and potential fields and 2) an online phase in which the NN controller, given the most recently received data, generates safe control actions to quickly navigate the dynamic environment. In what follows, we describe in detail the components of the proposed method depicted in Fig.~\ref{fig:highLevelApproach}.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{images/highLevelApproachv4.png}
\caption{Architecture of our proposed approach.}
\label{fig:highLevelApproach}
\vspace{-10pt}
\end{figure}
\subsection{Reachability Analysis for Dynamic Obstacles} \label{sec:reachability}
In this section, we discuss how reachability analysis methods can be used to predict future states of the dynamic obstacles.
In particular, given a dynamic obstacle $i$, as defined in \eqref{eq:odynamics}, along with a set of initial conditions $\mathcal{O}_{t_0}^i$ at time $t_0$, our goal is to compute the set of possible states that obstacle $i$ can be in from a time instant $t_1 \geq t_0$ until a time instant $t_2 \geq t_1$.
Hereafter, we call such sets reachable sets, denote them by $\mathcal{O}_{[t_0,t_1,t_2]}^i$, and define them as follows:
\begin{align} \label{eq:RA}
\text{$\mathcal{O}_{[t_0,t_1,t_2]}^i$} = \{ \bm{o} \in \mathcal{O} \mid &~\exists \bm{o}_{0} \in \mathcal{O}_{t_0}^i, \text{$t' \in [t_1,t_2]$}, \text{$\bm{d}_{[t_0,t']}$} \in \mathcal{D}, \\
\text{$\bm{w}_{[t_0,t']}$} \in \mathcal{W},
&~\bm{o}=G_{t_0,\text{$t'$}}(\bm{o}_0,\bm{d}_{[t_0,t']},\bm{w}_{[t_0,t']})\}, \nonumber
\end{align}
where $G_{t_0,\text{$t'$}}$ is the system's transition function, i.e., $\bm{o}$ is the state the system will be in at time $t'$ when it starts in state $\bm{o}_{0}$ at time $t_0$ and the input signal $\bm{d}_{[t_0,t']}$ and process noise signal $\bm{w}_{[t_0,t']}$ are applied. There exist various methods to compute the reachable set \eqref{eq:RA}, such as Taylor models \cite{Chen2013}, Hamilton-Jacobi theory \cite{Bansal2017}, $\delta$-reachability \cite{Soonho2015}, and ellipsoids \cite{Kurzhanskiy2006}. Our framework is agnostic to the method used, provided that we can extract the reachable sets over time.
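As a concrete illustration of \eqref{eq:RA}, the sketch below computes a sampled over-approximation of the reachable set for a toy planar obstacle $\dot{\bm{o}}=\bm{v}+\bm{d}$ with known nominal velocity $\bm{v}$ and disturbance $\|\bm{d}\|\leq d_{\max}$ (no process noise). The dynamics, numbers, and disc representation are illustrative choices, not one of the cited reachability methods.

```python
import numpy as np

# Sampled over-approximation of O_[t0,t1,t2] for a toy obstacle
# o_dot = v + d, ||d|| <= d_max: each time slice t' is a disc of radius
# r0 + d_max*(t'-t0) around the nominal position o0 + v*(t'-t0).
def reachable_set(o0, v, r0, d_max, t0, t1, t2, n=101):
    """List of (center, radius) discs whose union covers O_[t0,t1,t2]."""
    assert t0 <= t1 <= t2
    o0, v = np.asarray(o0, float), np.asarray(v, float)
    return [(o0 + v * (tp - t0), r0 + d_max * (tp - t0))
            for tp in np.linspace(t1, t2, n)]

def in_reachable_set(p, discs):
    p = np.asarray(p, float)
    return any(np.linalg.norm(p - c) <= r for c, r in discs)

# Advancing the lower time limit t1 discards slices the obstacle has already
# left, so the set can only shrink as the current time advances toward t2.
early = reachable_set([0, 0], [1, 0], r0=0.1, d_max=0.1, t0=0.0, t1=0.0, t2=5.0)
late  = reachable_set([0, 0], [1, 0], r0=0.1, d_max=0.1, t0=0.0, t1=1.0, t2=5.0)
print(in_reachable_set([0, 0], early), in_reachable_set([0, 0], late))
```

This shrinkage property is exactly the containment $\mathcal{O}^{i}_{[t_0,t,t_0+T]} \supseteq \mathcal{O}^{i}_{[t_0,t',t_0+T]}$ for $t'\geq t$ used later in the safety discussion.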
\subsection{Potential Fields for Navigating Dynamic Environments}\label{sec:potentialfields}
In what follows, we design potential field controllers to navigate the robot through dynamic environments. The key idea is to design a controller capable of avoiding all reachable obstacle states from current time $t$ until the next batch of information arrives, denoted as $\mathcal{O}_{[t_s,t,t_{s+1}]}$, and therefore, the dynamic obstacles within the time interval $[t,t_{s+1}]$; recall that $t_s$ and $t_{s+1}$ stand for consecutive time instants where the robot receives information about the obstacles.
To design a potential field controller for dynamic environments, we first define the following potential field:
\begin{align}
U(\bm{x}_p) = U_{\text{att}}(\bm{x}_{p})+\sum_{i=1}^{n} U_{i,\text{rep}}(\bm{x}_{p}), \label{eqn:composedField}
\end{align
where $U_{\text{att}}(\bm{x}_{p})$ denotes an attractive field driving the robot towards the goal location $\bm{x}_{g}$ and $U_{i,\text{rep}}(\bm{x}_{p})$ is a repulsive field pushing the robot away from dynamic obstacle $i$. The attractive field is designed to take small values near the robot's goal and large values far from the goal while the repulsive field is constructed to take large values near the obstacles and small values far from them. The control policy is to simply perform gradient descent down this field until the goal is reached.
Letting $\bm{x}_{p}$ denote the current robot position, the attractive field is defined as:
\begin{align}
U_{\text{att}}(\bm{x}_{p}) = \frac{1}{2}K_p \left (\lvert \lvert \bm{x}_p-\bm{x}_{g}\rvert \rvert^2 \right).
\label{eqn:pfeqnAtt}
\end{align}
Since the control action is to follow the negative gradient of the composed field, potential field controllers can make the system get stuck in \textit{dynamic minima}. These occur when a dynamic obstacle repels the robot in the same direction that the obstacle is moving; see, e.g. Fig.~\ref{fig:dynamicMinima}. To minimize the effects of dynamic minima, we leverage the temporal properties of reachability analysis. Every time obstacle information is provided to the robot, we construct the obstacles' forward reachable set defined in \eqref{eq:RA}. Exploiting the conic structure of the reachable sets, we design repulsive potential fields that tend to push the robot behind the obstacle, thus mitigating the dynamic minima behavior seen in Fig.~\ref{fig:dynamicMinima}. For instance, Fig.~\ref{fig:repFieldOfObst} shows an example of the repulsive forces from the reachable set of an obstacle starting at position $[0,0]$ and moving in the positive $x$ direction.
\begin{comment}
\begin{figure}[t
\centering
\begin{subfigure}{0.46\textwidth}
\includegraphics[]{images/image00000005.png}
\label{fig:mean and std of net14}
\end{subfigure}
\begin{subfigure}{0.46\textwidth}
\includegraphics[]{images/image00000009.png}
\label{fig:mean and std of net24}
\end{subfigure}
\begin{subfigure}{0.46\textwidth}
\includegraphics[]{images/image00000022.png}
\label{fig:mean and std of net34}
\end{subfigure}
\begin{subfigure}{0.46\textwidth}
\includegraphics[]{images/image00000039.png}
\label{fig:mean and std of net44}
\end{subfigure}
\caption{Example of a dynamic minima in potential field planning. The robot's path is in green, the goal point is in black, and the obstacle is the blue circle.}
\label{fig:dynamicMinima}
\vspace{-10pt}
\end{figure}
\end{comment}
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/image00000005.png}\label{fig:mean and std of net14}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/image00000009.png}\label{fig:mean and std of net24}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/image00000022.png}\label{fig:mean and std of net34}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/image00000039.png}\label{fig:mean and std of net44}}
\caption{Example of a dynamic minimum in potential field planning. The robot's path is in green, the goal point is in black, and the obstacle is the blue circle.}
\label{fig:dynamicMinima}
\vspace{-10pt}
\end{figure}
We formally describe this process for obstacle $\bm{o}_i$ as follows. Let $t$ be the current global time, let $\bm{x}_p$ be the robot's location at time $t$, and let $t_0 \leq t$ be the time of the most recently received obstacle state, denoted $\bm{o}_{t_0}^i$. Also, let $\mathcal{O}^{i}_{[t_0,t,t_0+T]}$ be the set of reachable states of $\bm{o}_i$ from time $t$ to $t_0+T$ (see \eqref{eq:RA}), given that $\bm{o}_i$ was at state $\bm{o}_{t_0}^i$ at time $t_0\leq t$.
Finally, define $\bar{\bm{o}}^{i}_{t}(\bm{x}_p)$ to be the closest point in $\mathcal{O}^{i}_{[t_0,t,t_0+T]}$ to the robot's position $\bm{x}_p$:
\begin{align}
\bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \coloneqq \text{argmin}_{\bm{o} \in \text{$\mathcal{O}^{i}_{[t_0,t,t_0+T]}$}} \left \lVert \bm{x}_{p} - \bm{o} \right \rVert.
\end{align}
The repulsive field that we use to compute the avoidance command for obstacle $\bm{o}_{i}$ at robot location $\bm{x}_p$ and time $t$ is:
\begin{align}
U_{t,\text{rep}}^{i}(\bm{x}_{p}) = \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_{p} - \bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \right \rVert -\delta} \right)^2, \label{eqn:augmentedRepFieldSingleObst}
\end{align}
where $\delta$ is given from \eqref{eq:safety}.
\begin{figure}[ht!]
\centering
\includegraphics[width=\linewidth]{images/repFieldFig1_cropped3.PNG}
\vspace{-20pt}
\caption{Repulsive forces from the forward reachable set (blue) of an obstacle.}
\label{fig:repFieldOfObst}
\vspace{-10pt}
\end{figure}
\subsection{Compositional Controller}
\label{sec:compositionAndGuarantees}
The final step for our control scheme is composing the attractive and repulsive fields.
By combining the attractive \eqref{eqn:pfeqnAtt} and repulsive \eqref{eqn:augmentedRepFieldSingleObst} field equations, based on the composition defined in \eqref{eqn:composedField}, the composed field becomes:
\begin{align}
U_{t}(\bm{x}_p) &= \left ( \frac{1}{2}K_p \left (\lvert \lvert \bm{x}_{p}-\bm{x}_{g}\rvert \rvert^2 \right ) \right ) + \nonumber \\
& \; \; \sum_{i=1}^n \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_{p} - \bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \right \rVert -\delta}\right)^2. \label{eqn:fullfield}
\end{align}
Our controller uses the same principle of following the negative gradient down the composed potential field as the standard potential field setup. Using the linearity of the gradient, the control output becomes:\footnote{This is standard in potential field planning but it is critical for the compositionality of our approach, so we write out these steps.}
\begin{align}
\bm{u} = &- \nabla \left(\frac{1}{2}K_p \left (\lvert \lvert \bm{x}_{p}-\bm{x}_{g}\rvert \rvert^2 \right) \right) - \nonumber \\
& \; \; \sum_{i=1}^n \nabla \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_{p} - \bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \right \rVert -\delta}\right)^2. \label{eqn:fullgrad}
\end{align}
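A minimal sketch of this composed controller, with each obstacle's reachable set abstracted as an axis-aligned box so that the closest point $\bar{\bm{o}}^i_t$ is a simple projection. Gains and geometry are illustrative, and the robot is assumed to stay at least $\delta$ away from every set.

```python
import numpy as np

# Sketch of the composed controller: negative gradient of the attractive field
# plus, per obstacle, the repulsive gradient evaluated at the closest point of
# the obstacle's reachable set (here an axis-aligned box). Gains are illustrative.
K_p, K_r, delta = 1.0, 1.0, 0.5

def closest_point_in_box(x, lo, hi):
    """Projection of x onto the box [lo, hi]: the closest-point argmin above."""
    return np.clip(x, lo, hi)

def control(x, x_goal, boxes):
    """Composed control; assumes ||x - o_bar|| > delta for every box."""
    u = -K_p * (x - x_goal)                     # -grad of (1/2)K_p||x - x_g||^2
    for lo, hi in boxes:
        diff = x - closest_point_in_box(x, lo, hi)
        d = np.linalg.norm(diff)
        # -grad of (1/2)K_r (1/(d-delta))^2 = +K_r/(d-delta)^3 * diff/d
        u += K_r / (d - delta) ** 3 * diff / d
    return u

boxes = [(np.array([-1.0, -1.0]), np.array([2.0, 1.0]))]   # one reachable-set box
x_goal = np.array([-5.0, 0.0])
print(control(np.array([2.6, 0.2]), x_goal, boxes))  # repulsion dominates near the box
print(control(np.array([6.0, 0.2]), x_goal, boxes))  # attraction dominates far away
```

Because the repulsive terms are simply summed, adding or removing boxes requires no change to the controller, mirroring the compositionality exploited by the neural network below.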
\begin{comment}
In what follows, we provide conditions under which the controller in \eqref{eqn:fullgrad} satisfies the safety constraint \eqref{eq:safety}.
\begin{theorem}\label{thm:safety}
Consider a robot with dynamics $\dot{{\bm{x}}}(t)={\bm{u}}$ navigating in an environment with dynamic obstacles governed by \eqref{eq:odynamics} under the control policy given in \eqref{eqn:fullgrad}\footnote{\textcolor{red}{The dynamics assumption is in potential field planning proofs. See e.g. (3) in \cite{Vasilopoulos2018}.}}. Let $t_0$ be the last time instant where the robot received information $\mathcal{I}(t_0)$.
If the robot's state at time $t_0$ lies at least $\delta$ distance away from the forward reachable set up to time $T$ of every obstacle, then \eqref{eq:safety} is satisfied for all $t\in[t_0,T]$.
\end{theorem}
\begin{proof}
Hereafter, for simplicity of notation, we drop the dependence of the reachable sets \eqref{eq:RA} on the time interval $[t_0,T]$. Consider the following positive function $V({\bm{x}}(t),\mathcal{O}_{1},\hdots,\mathcal{O}_{n})=U({\bm{x}}(t),\mathcal{O}_{1},\hdots,\mathcal{O}_{n})$, where $U({\bm{x}}(t),\mathcal{O}_{1},\hdots,\mathcal{O}_{n})$ is the composed potential field defined in equation \eqref{eqn:fullfield}. For convenience of notation we write these two fields as $V(\bm{x}(t))$ and $U(\bm{x}(t))$.
By the definition of $V({\bm{x}}(t))$ we have that $V({\bm{x}}(t))>0$ and that $\lim V({\bm{x}}(t))=\infty$ as the distance between the robot and at least one obstacle approaches $\delta$, i.e., when safety constraint \eqref{eq:safety} is violated. By assumption, $V({\bm{x}}(t_0))$ has a positive and finite value. Thus, to show that the safety constraint will never be violated within $[t_0, T]$, it suffices show that $V({\bm{x}}(t))$ is non-increasing with time.
To show this, we compute the time derivative of $V({\bm{x}}(t))$ as follows:
\begin{align}\label{eq:timeder}
&\dot{V}(\bm{x}(t)) = \frac{dV({\bm{x}}(t))}{d{\bm{x}}}\frac{d{\bm{x}}}{dt} = \frac{dV({\bm{x}})}{d{\bm{x}}}{\bm{u}} \\ &=\frac{dV({\bm{x}})}{d{\bm{x}}}(-\frac{dU({\bm{x}})}{d{\bm{x}}}) = -\lVert\frac{dV({\bm{x}})}{d{\bm{x}}}\rVert^2\leq 0
\end{align}
completing the proof.\end{proof}
\end{comment}
Next, we discuss intuitively under what conditions the controller in \eqref{eqn:fullgrad} satisfies the safety constraint \eqref{eq:safety}.
\vspace{5pt}
{\em{Safety Discussion:}} First, note that if the reachable set of obstacle $i$ from times $t$ to $t_0+T$, $\mathcal{O}^{i}_{[t_0,t,t_0+T]}$, is convex, the repulsive field vector produced by that obstacle, $\nabla \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_{p} - \bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \right \rVert -\delta}\right)^2$, points away from every point of the obstacle's forward reachable set.\footnote{This can be shown by contradiction. Assume that there is a point of the reachable set which the repulsive field points towards. Then it is not possible for the reachable set to be convex.}
For a collision to occur, the repulsive field $U_{t}(\bm{x}_p)$ must take an infinite value, since the distance from the robot to the obstacle appears in the denominator of each obstacle's repulsive field. So to prove safety it suffices to show that $U_{t}(\bm{x}_p)$ is always upper bounded.
At time $t$, as the robot approaches the reachable set of obstacle $i$, denoted by $\mathcal{O}^{i}_{[t_0,t,t_0+T]}$, the repulsive force generated by that set becomes so large as to render the forces from every other obstacle's reachable set (assuming the distance between the reachable sets is large enough) and from the goal negligible. Thus, as the robot approaches the reachable set of obstacle $i$, it holds that:
\begin{align}
U_{t}(\bm{x}_p) &\approx \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_{p} - \bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \right \rVert -\delta} \right)^2 , \\
\bm{u} &\approx -\nabla \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_{p} - \bar{\bm{o}}^{i}_{t}(\bm{x}_{p}) \right \rVert -\delta}\right)^2 .
\end{align}
Assuming simple robot dynamics of the form $\dot{{\bm{x}}}(t)={\bm{u}}$, as e.g. in \cite{Vasilopoulos2018}, this control command pushes the robot farther away from every point of $\mathcal{O}^{i}_{[t_0,t,t_0+T]}$. In addition, the reachable sets of the obstacles do not grow, since $\mathcal{O}^{i}_{[t_0,t,t_0+T]} \supseteq \mathcal{O}^{i}_{[t_0,t',t_0+T]}, ~\forall t' \geq t$. Thus $U_{t}(\bm{x}_p)$ is non-increasing and hence upper bounded.
By construction of the reachable sets, the above argument assures safety up to a time threshold $T$, which depends on how far the forward reachable sets are extended. If $T=\infty$ then the safety guarantee holds for all time; as $T$ decreases, safety is gradually sacrificed but the controller produces less conservative paths. In practice, this threshold should be picked to ensure safety until the next batch of intermittent information arrives. The threshold thus also gives a way to balance conservative against aggressive paths.
\subsection{Compositional Neural Network Controller} \label{sec:neural_net}
\begin{comment}
\begin{figure*}[h!]
\centering
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf/img1AllConts.png}\label{fig:uuvpath1}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf/img2AllConts.png}\label{fig:uuvpath2}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf/img3AllConts.png}\label{fig:uuvpath3}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf/img4AllConts.png}\label{fig:uuvpath4}}
\vspace*{-3mm}
\caption{UUV crossing a shipping channel with ship velocities $5.14$m/s. Each ship is represented as a red box, each ship reachable set is represented as a light blue square, and the goal point is the green box. }
\label{fig:UUVpath}
\vspace{-10pt}
\end{figure*}
\end{comment}
\begin{figure*}[h!]
\centering
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img1AllConts.png}\label{fig:uuvpath1}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img2AllConts.png}\label{fig:uuvpath2}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img3AllConts.png}\label{fig:uuvpath3}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img4AllConts.png}\label{fig:uuvpath4}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img5AllConts.png}\label{fig:uuvpath5}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img6AllConts.png}\label{fig:uuvpath6}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img7AllConts.png}\label{fig:uuvpath7}}\hspace{1em}%
\subfigure[]{\includegraphics[width=0.23\textwidth]{images/uuvpath_NNAndPf_8Pane/img8AllConts.png}\label{fig:uuvpath8}}
\vspace*{-3mm}
\caption{UUV crossing a shipping channel with ship velocities of $5.14$~m/s. Each ship is represented as a red box, each ship reachable set is represented as a light blue square, and the goal point is the green box.}
\label{fig:UUVpath}
\vspace{-10pt}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{images/minDistsOverTracesAllControllers_barChart_cropped.PNG}
\caption{Minimum distance between the UUV and the ship reachable sets across all $20$ trials, shifted so that the y-axis corresponds to the boundary of the safety threshold $\delta=5$m.}
\label{fig:UUVminDists}
\vspace{-10pt}
\end{figure}
The attractive gradient in \eqref{eqn:fullgrad} is computed using standard gradient techniques. However, each of the $n$ repulsive field gradients requires running reachability analysis, which is generally not computationally feasible at runtime. To get around this issue, we train a neural network to compute the repulsive field gradient of each obstacle.
The NN computes the repulsive field gradient of each obstacle separately. This is possible due to the compositionality inherent in the linearity of the gradient operator, as shown in \eqref{eqn:fullgrad}, and the fact that all obstacles share the same dynamics by assumption. The compositionality is a key aspect of our approach, since it means we do not need to know the number of obstacles at training time and the NN-based planner can generalize to arbitrary numbers of obstacles.
For each obstacle, the NN takes as input the obstacle information and the time since that information was received and outputs the gradient of the repulsive field given in \eqref{eqn:augmentedRepFieldSingleObst}. Thus, the reachability analysis is baked into the weights of the NN during training, along with the repulsive field calculations. This ensures that our planning framework incorporates the uncertainties from the reachability analysis while running in crowded environments in real time.
We generate the neural network training data by fixing the robot position at $\bm{x}_p$ and iterating over a grid of obstacle states $\bm{o}$, robot headings $\bm{x}_{h}$, and times $t$. For each tuple of these values, we compute the gradient of the repulsive field from \eqref{eqn:augmentedRepFieldSingleObst} for robot location $\bm{x}_p$ assuming obstacle information $\bm{o}$ was received $t$ seconds prior:
\begin{align}
\nabla \left( \frac{1}{2} K_{r} \left(\frac{1}{\left \lVert \bm{x}_p - \bar{\bm{o}}_{t}(\bm{x}_p) \right \rVert - \delta}\right)^2 \right) \label{eqn:NNOutputForTraining}
\end{align}
The control output at the current time $t_0+t$ is then:
\begin{align}
\bm{u} &= -\nabla \left(\frac{1}{2}K_p (\lvert \lvert \bm{x}_p-\bm{x}_{g}\rvert \rvert^2)\right) - \sum_{i=1}^{n} \text{NN}(\bm{x}_p,\bm{x}_h,\bm{o}_i,t) \label{eq:NNControl}
\end{align}
where NN$(\bm{x}_p,\bm{x}_h,\bm{o}_i,t)$ is the neural network's output for robot location $\bm{x}_p$ and heading $\bm{x}_h$, given obstacle information $\bm{o}_i$ received $t$ seconds earlier, at time $t_0$.
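A sketch of the offline data-generation loop described above. The reachable set is stood in for by a toy inflated disc (nominal drift plus $d_{\max}$ growth), so the expensive reachability computation is folded into the label exactly as the training pipeline intends; the grids, gains, and obstacle model are all illustrative assumptions.

```python
import numpy as np

# Offline training-data generation: grid over obstacle states and elapsed times,
# with the repulsive-field gradient as the label. The reachable set is a toy
# inflated disc here; in the full pipeline it comes from reachability analysis.
K_r, delta, r0, d_max, T = 1.0, 0.5, 0.2, 0.2, 5.0
x_p = np.zeros(2)                            # robot fixed at the origin
radius = r0 + d_max * T                      # disc over-approximating the set

def rep_gradient(o, t):
    """Label: gradient of the repulsive field for obstacle info o received t ago."""
    center = o[:2] + o[2:] * t               # o = (position, velocity), toy model
    dist_c = np.linalg.norm(x_p - center)
    dist = max(dist_c - radius, delta + 1e-3)   # distance to the set, clamped
    unit = (x_p - center) / max(dist_c, 1e-9)
    return -K_r / (dist - delta) ** 3 * unit    # gradient points toward the set

inputs, labels = [], []
for ox in np.linspace(-5.0, 5.0, 11):
    for oy in np.linspace(-5.0, 5.0, 11):
        for t in np.linspace(0.0, T, 6):
            o = np.array([ox, oy, 0.5, 0.0])    # fixed toy obstacle velocity
            inputs.append(np.concatenate([o, [t]]))
            labels.append(rep_gradient(o, t))
X, Y = np.array(inputs), np.array(labels)
print(X.shape, Y.shape)   # one (state, time) -> gradient pair per grid point
```

A small regression network fit to $(X,Y)$ then replaces the per-obstacle gradient term at runtime, summed over obstacles as in the control law above.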
\begin{comment}
\begin{corollary}
Let $\theta_u(\bm{x})$ denote the angle of the control command computed in Eq.~\eqref{eq:NNControl} and let $\theta_{\nabla U}(\bm{x})$ denote the angle of the gradient of the potential field in Eq.~\eqref{eqn:fullfield}.\footnote{As in Theorem \ref{thm:safety}, we are dropping the dependence of $\theta_u$ on $\bm{o}_1,\hdots,\bm{o}_n,t_0$ and the dependence of $\nabla U$ on $\mathcal{O}_1,\hdots,\mathcal{O}_n$ for convenience of notation.} Let $t_0$ be the last time instant where the robot received information $\mathcal{I}(t_0)$.
If the robot's state at time $t_0$ lies at least $\delta$ distance away from the forward reachable set up to time $T$ of every obstacle and:
\begin{align}
\theta_u(\bm{x}) \in & \left[ \theta_{\nabla U}(\bm{x})-\frac{3\pi}{2},\theta_{\nabla U}(\bm{x})-\frac{\pi}{2} \right] \\
&\forall \bm{x} \in \mathbb{R}^3,
t\in[t_0,T],\bm{o}_{i} \in \mathbb{R}^3 \nonumber
\end{align}
then the control scheme satisfies \eqref{eq:safety} for all $t\in[t_0,T]$.
\end{corollary}
\begin{proof} Recall that Theorem \ref{thm:safety} used the fact that $\bm{u}=-\frac{V({\bm{x}}(t))}{d{\bm{x}}}$ to arrive at
\begin{align}
\frac{dV({\bm{x}}(t))}{d{\bm{x}}}\bm{u} \leq 0
\end{align}
NNs in general are not guaranteed to produce fully accurate values, however, from the properties of the inner product, we have:
\begin{align}
\frac{dV({\bm{x}}(t))}{d{\bm{x}}}\bm{u} = \lVert \nabla V(\bm{x}) \rVert \lVert \bm{u} \rVert \text{cos}(\theta)
\end{align}
where $\theta$ is the angle between the vectors $\nabla V(\bm{x}(t))$ and $u$.
It follows that:
\begin{align}
\frac{dV({\bm{x}}(t))}{d{\bm{x}}} \bm{u} \leq 0 \iff \frac{-3\pi}{2} \leq \theta \leq \frac{-\pi}{2}
\end{align}
So for our NN based controller to be safe from $[t_0,T]$, it is sufficient to hold that:
\begin{align}
\theta_u(\bm{x}) \in & \left[ \theta_{\nabla U}(\bm{x})-\frac{3\pi}{2},\theta_{\nabla U}(\bm{x})-\frac{\pi}{2} \right] \\
&\forall \bm{x} \in \mathbb{R}^3,
t\in[t_0,T],\bm{o}_{i} \in \mathbb{R}^3 \nonumber
\end{align}
completing the proof.
\end{proof}
\end{comment}
\section{Background}
\label{sec:background}
\subsection{Potential Fields}
Artificial potential fields are a well-known method for path planning in robotics, particularly for navigating to a goal region while avoiding static obstacles. They work by defining an attractive field around the goal region and a repulsive field around each obstacle; see equations \ref{eqn:pfeqnAtt} and \ref{eqn:pfeqnRep}, respectively, for the exact field equations.
\begin{align}
U_{att}(q) = \frac{1}{2}K_p (\lvert \lvert q-q_{g}\rvert \rvert^2)
\label{eqn:pfeqnAtt} \\
U_{rep}(q) = \sum_i \frac{1}{2} K_r (\frac{1}{\lvert \lvert q-o_i \rvert \rvert})^2 \label{eqn:pfeqnRep}
\end{align}
These fields are then summed, resulting in the composed field:
\begin{align}
U= U_{att}(q) + U_{rep}(q) \label{eqn:composedField}
\end{align}
The motion planning algorithm then computes the next heading of the robot by following the negative gradient of the composed field, which is computed as follows:
\begin{align}
\nabla U &= \nabla U_{att}(q) + \nabla U_{rep}(q)
\end{align}
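For reference, the gradients above have simple closed forms for point obstacles. The following Python sketch (an illustration only; the gain defaults are the values used in our UUV case study) computes the next heading as the normalized negative gradient of the composed field:

```python
import numpy as np

def grad_attractive(q, q_g, K_p=5.0):
    # Gradient of U_att = (1/2) K_p ||q - q_g||^2
    return K_p * (q - q_g)

def grad_repulsive(q, obstacles, K_r=15000.0):
    # Gradient of U_rep = sum_i (1/2) K_r ||q - o_i||^{-2}:
    #   grad U_rep = -K_r * sum_i (q - o_i) / ||q - o_i||^4
    g = np.zeros_like(q)
    for o in obstacles:
        d = q - o
        g -= K_r * d / np.linalg.norm(d) ** 4
    return g

def next_heading(q, q_g, obstacles, K_p=5.0, K_r=15000.0):
    # The robot follows the negative gradient of the composed field.
    step = -(grad_attractive(q, q_g, K_p) + grad_repulsive(q, obstacles, K_r))
    return step / np.linalg.norm(step)
```

Note that the repulsive gradient points toward each obstacle, so the negative-gradient step pushes the robot away from obstacles and toward the goal.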
\subsubsection{Properties}
Before moving on, we note some properties of potential fields which will be important later. Potential fields are popular because they are simple and fast to compute. They are also highly compositional: the obstacle repulsive fields are simply summed when computing the composed field. However, they are a reactive way of path planning, considering only the instantaneous state of the robot and the environment; they do not account for what may happen in the future. In addition, they can get stuck in local minima of the composed field. Finally, dynamic environments also present the issue of dynamic minima. This occurs mainly when an obstacle repels the robot in the same direction that the obstacle is moving. See Fig.~\ref{fig:dynamicMinima} for one example of this phenomenon. We describe how to address these issues in Section~\ref{sec:approach}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.2375\textwidth}
\centering
\includegraphics[width=\textwidth]{images/image00000005.png}
\caption{The robot approaches the obstacle, which is moving right}
\label{fig:dynamicMinima1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2375\textwidth}
\centering
\includegraphics[width=\textwidth]{images/image00000009.png}
\caption{The obstacle begins pushing the robot to the right}
\label{fig:dynamicMinima2}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.2375\textwidth}
\centering
\includegraphics[width=\textwidth]{images/image00000022.png}
\caption{The obstacle pushes the robot in the same direction as the motion of the obstacle.}
\label{fig:dynamicMinima3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2375\textwidth}
\centering
\includegraphics[width=\textwidth]{images/image00000039.png}
\caption{The robot gets pushed by the obstacle, taking a much longer path than is ideal}
\label{fig:dynamicMinima4}
\end{subfigure}
\caption{Example of a dynamic minimum in potential field planning.}
\label{fig:dynamicMinima}
\end{figure}
\subsection{Reachability Analysis}
\section{Conclusion and Future Work}
\label{sec:discussion}
In this work, we have presented a fast path planning framework for operating in dynamic environments with intermittent obstacle information. We combine reachability analysis and artificial potential fields as the backbone of our planning framework and leverage neural networks to ensure our control loop runs in real time. We have applied our approach to a simulation of a UUV crossing a shipping channel and presented experiments with ground vehicles. Thanks to our technique, the robot is able to avoid both collisions and entrapment in dynamic minima.
In the future, we will consider extensions of the proposed framework to account for additional sources of uncertainty, such as localization uncertainty or model uncertainty of the ego-vehicle. We also plan to consider cases in which some vehicles are cooperative while others are not. Additionally, we will investigate the use of verification tools to provide safety guarantees for the proposed learning-based control framework. Finally, we plan to investigate incorporating this controller into a confidence monitoring framework by leveraging other sources of measurements on these robots, such as the expected measurements from range sensors during a communication-denied mission in a dynamic environment.
\section*{Acknowledgments}
This work was supported in part by AFRL and DARPA FA8750-18-C-0090, ARO W911NF-20-1-0080, and ONR N00014-17-1-2012.
\section{Experimental Analysis}
\label{sec:evaluation}
To evaluate our planning framework, we ran our approach on a case study of a UUV crossing a crowded shipping channel in a simulation environment and on unmanned ground vehicles (UGV) in a lab environment. Videos of the experiments can be found in the supplemental material.
\subsection{UUV Case Study}
\begin{comment}
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/uuvpath_NNAndPf/img1.png}
\label{fig:uuvpath1}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/uuvpath_NNAndPf/img2.png}
\label{fig:uuvpath2}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/uuvpath_NNAndPf/img3.png}
\label{fig:uuvpath3}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/uuvpath_NNAndPf/img4.png}
\label{fig:uuvpath4}
\end{subfigure}
\vspace*{-3mm}
\caption{UUV crossing a shipping channel. Each ship is represented as a red box and the goal point is the green box. }
\label{fig:UUVpath}
\end{figure*}
\end{comment}
\begin{figure*}
\subfigure[Sequence of snapshots for the ground vehicle going to its goal (green square).]{\includegraphics[width=0.78\textwidth]{images/exps/exp_cross_combined_v2.png}\label{fig:exp_setup_cross}}\hspace{1em}%
\subfigure[Robots' paths]{\includegraphics[width=0.19\textwidth]{images/exps/exp_cross.png}\label{fig:exp_path_cross}}
\vspace{-5pt}
\caption{Experiment results with a ground vehicle and two dynamic obstacles moving in opposite directions.}
\label{fig:exp_cross}
\end{figure*}
\begin{figure*}
\subfigure[Sequence of snapshots for the ground vehicle going to its goal (green square).]{\includegraphics[width=0.78\textwidth]{images/exps/exp_parallel_combined_v2.png}\label{fig:exp_setup_parallel}}\hspace{1em}%
\subfigure[Robots' paths]{\includegraphics[width=0.19\textwidth]{images/exps/exp_parallel.png}\label{fig:exp_path_parallel}}
\vspace{-5pt}
\caption{Experiment results with a ground vehicle and two dynamic obstacles moving in the same direction.}
\label{fig:exp_parallel}
\end{figure*}
\begin{figure}[h]
\centering
\subfigure[Minimum distance for the case shown in Fig.~\ref{fig:exp_cross}.]{\includegraphics[width=0.23\textwidth]{images/exps/dist_cross.png}\label{fig:exp_dist_cross}}\hspace{1em}%
\subfigure[Minimum distance for the case shown in Fig.~\ref{fig:exp_parallel}.]{\includegraphics[width=0.23\textwidth]{images/exps/dist_parallel.png}\label{fig:exp_dist_parallel}}
\caption{Minimum distance between the ground vehicle and two dynamic obstacles over time.}
\label{fig:experiment}
\vspace{-10pt}
\end{figure}
For our UUV case study, the scenario we considered is as follows: a UUV needs to cross a shipping channel in which ships travel at a constant $5$m/s with noise bounded by $0.05$m/s, in opposite directions, similar to how cars travel on a two-lane road. In addition, all ships have a $0.01$ radian noise bound on their headings. Each ship is $75$m long and $25$m wide, and there is a $375$m gap between successive ships. The total width of the channel is $230$m. The UUV surfaces around $500$m before the channel to receive the location, heading, and speed of each of the ships. The UUV then dives down to a depth of $45$m and travels to a predefined waypoint directly across the channel. The UUV used for the simulation is $2$m long, $15$cm in radius, and travels at a maximum speed of $2.5$m/s. To ensure a distance of at least two UUV lengths from the ships, the safety threshold is set to $\delta=5$m. The simulations were run on a real-time, physics-based ROS Gazebo UUV simulator \cite{Manhes2016}.
The obstacle forward reachable sets were extended out by $T=600$ seconds using Flow* \cite{Chen2013}. We found that this was enough to prevent most collisions while still allowing the UUV to quickly cross the channel. The potential field constants were $K_p=5$ and $K_r=15000$. The NN architecture consisted of 15 fully-connected ReLU layers, each with 16 neurons, along with linear layers on the input and output, and was trained using around 8 million data points\footnote{This number depends on the a priori known size of the environment, the maximum distance between the robot and obstacles, the range of obstacle speeds and the maximum time between intermittent information.}. We trained the NN using the Keras library.
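The trained network itself is not reproduced here; as a purely illustrative stand-in with random weights, the following NumPy sketch builds the stated architecture (a linear input layer, 15 fully-connected ReLU layers of 16 neurons each, and a linear output layer) and runs its forward pass:

```python
import numpy as np

def make_mlp(in_dim, out_dim, hidden=16, depth=15, seed=0):
    # Linear input layer, `depth` ReLU hidden layers of `hidden` neurons,
    # linear output layer -> depth + 2 weight matrices in total.
    rng = np.random.default_rng(seed)
    dims = [in_dim] + [hidden] * (depth + 1) + [out_dim]
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if 0 < i < len(params) - 1:   # ReLU on the hidden layers only
            h = np.maximum(h, 0.0)
    return h
```

In the actual pipeline the weights come from supervised training (in our case with Keras) on the reachability-based repulsive-field targets; the sketch only fixes the layer shapes.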
We ran $20$ trials of the UUV crossing the shipping channel, varying the initial positions of the ships traveling in each direction. Our planning framework safely guided the UUV through the channel while avoiding the ship reachable sets (and hence the ships themselves). The closest the UUV came to any ship across all trials was $7.5$m. One example path can be seen in blue in Fig.~\ref{fig:UUVpath}. We also plotted the minimum distance to the ship reachable sets for each of the 20 trials in Fig.~\ref{fig:UUVminDists}. Each scenario took around $12$ minutes and the UUV got the ship information $190$ seconds into the scenario. So the UUV had to plan using information which was up to $9$ minutes old to complete the scenarios. Finally, the average runtime of each control iteration was $27$ms and the largest runtime was $81$ms, whereas computing the reachable set of each ship $600$ seconds into the future took Flow* an average of $67$ seconds. In fact, this large runtime prohibits the direct application of reachability tools for online safety analysis in dynamic environments.
We also ran the same $20$ trials using the controller from \eqref{eqn:fullgrad} with precomputed reachable sets for each ship, which we refer to as the reach set PF planner. The closest the UUV came to any ship reachable sets was $18.8$m. One example path can be seen in orange in Fig.~\ref{fig:UUVpath}. The minimum distance to the ship reachable sets for each of the 20 trials can be seen in Fig.~\ref{fig:UUVminDists}. This demonstrates both that the control technique proposed in \eqref{eqn:fullgrad} is safe and effective and that our NN reasonably approximates the repulsive field gradients of the ship reachable sets.
Finally, we ran the same 20 trials using the controller from \eqref{eqn:fullgrad} assuming no uncertainty in the ship speeds and headings. One example path can be seen in purple in Fig.~\ref{fig:UUVpath}. The minimum distance to the ship reachable sets for each of the 20 trials can be seen in Fig.~\ref{fig:UUVminDists}. In $11$ of the $20$ trials the UUV came closer than the allowed $\delta = 5$m to the reachable sets, and in one of the trials the UUV actually collided with one of the reachable sets. This collision can be seen in Fig.~\ref{fig:uuvpath3}.
\subsection{UGV Experiments}
To demonstrate the applicability of the approach to different types of vehicles, we conducted unmanned ground vehicle experiments. The experiments were performed on a Clearpath Jackal UGV, with two TurtleBot2 UGVs used as mobile obstacles. In the offline stage, the reachable sets of the dynamic obstacles were extended out by 8 seconds using Flow* \cite{Chen2013}, as explained in Section~\ref{sec:reachability}, and an NN was trained to quickly compute the desired repulsive fields for each obstacle at runtime, as presented in Section~\ref{sec:neural_net}. The NN was trained using around 3 million data points and had an architecture of 4 fully-connected ReLU layers, each with 16 neurons, along with linear layers on the input and output. The potential field constants were $K_p=5$ and $K_r=10$.
At runtime, the robot was tasked to reach its goal location at $ [2.5, 0] $m, moving with a desired speed of $ 0.5 $m/s, while avoiding the other two vehicles, which moved in straight lines, also at a speed of $ 0.5 $m/s. The ground vehicle observed the position and heading of the mobile obstacles only at the beginning of the operation and then lost its connection. We also assumed here that the robot did not have any onboard sensors to detect the obstacles. A Vicon motion capture system was used to detect the state of the ground vehicle and the initial position and heading of the dynamic obstacles.
We tested our approach on two different scenarios. In the first case, the mobile obstacles were moving in opposite directions in between the robot's start and goal locations. Fig.~\ref{fig:exp_setup_cross} demonstrates the behavior of the ground vehicle and mobile obstacles over time. Fig.~\ref{fig:exp_path_cross} shows the actual paths of these vehicles and Fig.~\ref{fig:exp_dist_cross} shows the minimum distance between the ground vehicle and the obstacles over time. For this case study, the minimum distance from any of the obstacles was recorded as $58.26$cm, which is larger than the safety threshold $ \delta = 51 $cm (the sum of the ground-vehicle and obstacle radii), meaning that the ground vehicle was able to take the necessary actions to avoid both obstacles. The second scenario we tested was with two obstacles moving in the same direction, as shown in Fig.~\ref{fig:exp_setup_parallel}. The resulting paths of the vehicles are shown in Fig.~\ref{fig:exp_path_parallel} and the minimum distance to the obstacles over time is shown in Fig.~\ref{fig:exp_dist_parallel}. Similar to the previous case, the ground vehicle successfully avoided collision with both obstacles, with the closest distance recorded as 61.60cm.
\section{Introduction}
\label{ref:intro}
Safe motion and task planning for multi-robot systems in known and static environments has been studied extensively under the assumption that all robots are connected and synchronously exchanging information about their local states and actions \cite{van2008}. However, when robots are deployed in communication- and sensor-limited dynamic environments, data about the current state of the environment may not always be available. For example, consider unmanned underwater vehicles (UUVs), which have been growing in popularity in recent years \cite{Fletcher2000}. UUVs must deal with vessels that typically are not aware of their presence in the water. In fact, UUVs obtain information about the states of the other vessels only when they surface; see, e.g., Fig.~\ref{fig:intro}. Hence, when they are underwater they can only rely on old, stale data to plan their motion and avoid colliding with these vessels. This sensory limitation may threaten the safety of such systems. For example, on February 8, 2021, a Japanese submarine collided with a commercial ship off the country's Pacific coast.
The submarine surfaced right under the ship, unaware of its presence, and suffered damage to its diving and communication capabilities. More generally, this type of problem also appears when robots are operating in communication/sensor limited environments, like unmanned aerial or ground vehicles operating in cluttered and adversarial environments (e.g., during military operations).
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images/intro2.png}
\vspace{-15pt}
\caption{Pictorial representation of a UUV tasked to navigate through a shipping channel under intermittent communication.}
\label{fig:intro}
\vspace{-20pt}
\end{figure}
In this paper, we address the problem of controlling a mobile robot to safely navigate communication- and sensor-limited dynamic environments. The environment considered is (i) \textit{dynamic} in the sense that it is occupied with non-cooperative mobile obstacles/vehicles that move according to known but noisy dynamics; and (ii) \textit{communication/sensor limited} in the sense that the robot cannot sense or communicate with other mobile obstacles/vehicles. We assume that the robot receives intermittent information about the obstacle locations and knows obstacle dynamics and control policies which are subject to unknown process noise.
The major challenge of safe navigation in such environments lies in the intermittent nature of the data related to other vehicles, which requires the robot to plan over stale information. To safely plan with stale information, the robot must reason about the future dynamic obstacle configurations while taking their motion uncertainties into consideration until fresh data are available.
To address this challenge,
we design a novel reachability-based potential field method to avoid collisions with future obstacle configurations. Specifically, reachability analysis is employed to compute the future states of obstacles based on the most recent information the robot received while potential fields are used to avoid these reachable sets. Since reachability analysis is too computationally expensive to perform at runtime, we use this reachability-based potential field method to train
a neural network (NN) control framework at design time. At runtime, given intermittent information, this NN component generates control actions to safely navigate the dynamic environment.
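In schematic terms, one iteration of this runtime loop can be sketched as follows; `nn_repulsive_grad` is a hypothetical stand-in for the trained network, mapping the robot state, the last-received obstacle state, and the time elapsed since that information arrived to a repulsive-field gradient (summed per obstacle, reflecting the compositional structure of potential fields):

```python
import numpy as np

def control_step(x, x_g, stale_obstacles, t_elapsed, nn_repulsive_grad, K_p=5.0):
    # Attractive term pulls toward the goal; the NN term replaces the
    # expensive reachability-based repulsive gradient at runtime.
    grad = K_p * (x - x_g)
    for o_s in stale_obstacles:
        grad += nn_repulsive_grad(x, o_s, t_elapsed)
    u = -grad                      # follow the negative composed gradient
    return u / max(np.linalg.norm(u), 1e-9)
```

This is a sketch of the control-loop structure only; the NN interface and gain are illustrative assumptions, not the exact implementation.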
\textbf{Related work:}
Motion planning in dynamic environments has been addressed through methods such as covariant hamiltonian optimization for motion planning
\cite{Zucker2013,Byravan2014,Men2020}, real-time adaptive motion planning
\cite{Vannoy2008,Mcleod2016,guzzi2013human,aoude2013probabilistically,renganathan2020towards} and reinforcement learning \cite{Liu2020}. However, these methods require real-time sensor information about the dynamic obstacles. Dimension reduction methods have also been proposed \cite{Vemula2016}, but they do not handle noise in the obstacle dynamics.
Relevant approaches to motion planning under intermittent information have been proposed recently. For instance, \cite{Khodayi2019,Kantaros2019,aragues2021intermittent} propose distributed intermittent connectivity controllers for multi-robot systems that are tasked with navigation and estimation tasks in communication-denied environments. Conditions on the time interval between intermittent communication events to ensure stability and consensus in multi-agent systems are also developed in \cite{Xu2019}. Mission planning problems under uncontrolled intermittent communication have also been studied recently, in which localization \cite{Bopardikar2016,Penin2019,Yel2019} and object tracking \cite{Koohifar2018} rely on intermittent information. The above works consider cooperative multi-agent systems, whereas our work considers robots navigating in environments occupied with uncertain, dynamic, and uncooperative obstacles. Additionally, reachability analysis has been employed for control design \cite{Seo2019,Ding2011,Malone2017,Pendleton2017,Desai2020, Akametalu2015, Chiang2017}, monitoring \cite{Yel2019,Franco2020}, and providing run-time safety guarantees \cite{Althoff2014,Herbert2019}. The most relevant works to ours are: \cite{Malone2017}, where reachability analysis and potential fields are combined for planning, and \cite{Chiang2017}, which combines stochastic reachability analysis and RRT for path planning. Differently from our work, \cite{Malone2017} assumes periodic observations, which does not address the challenges of intermittent information, and \cite{Chiang2017} does not account for the computational complexity of stochastic reachability analysis.
\textbf{Contribution:} The contribution of this paper is four-fold: 1) we present a novel algorithm that combines reachability analysis (RA) with potential fields for safe navigation in uncertain and dynamic environments in the presence of intermittent information; 2) we propose an NN-based compositional method that leverages the time elapsed since the last intermittent data to perform fast planning bypassing the use of expensive RA at runtime; 3) we implement and validate our approach with realistic simulations and experiments on UUVs and ground vehicles in cluttered dynamic environments; 4) we use comparative experiments against planning methods in dynamic environments that ignore motion uncertainties to demonstrate the safety-related benefits of our approach.
\section{Problem Description}
\label{sec:formulation}
Consider a robot governed by the following dynamics:
\begin{equation}\label{eq:rdynamics}
\dot{\bm{x}}=f(\bm{x}(t),\bm{u}(t)),
\end{equation}
where $\bm{x}(t)$ and $\bm{u}(t)$ denote the state (e.g., position and heading) of the robot and the control input at time $t$, respectively. The robot is assumed to operate in an environment with an a priori unknown number $n\geq0, n\in\mathbb{N}$ of dynamic, non-cooperative obstacles governed by the following dynamics:
\begin{equation}\label{eq:odynamics}
\dot{\bm{o}}_i=g(\bm{o}_i(t),\bm{d}_i(t),\bm{w}_i(t)), \;\; i=1,\ldots,n
\end{equation}
where $\bm{o}_i(t) \in \mathcal{O}$, $\bm{d}_i(t)\in \mathcal{D}$ and $\bm{w}_i(t) \in \mathcal{W}$ denote the state, control input and process noise of obstacle $i$ at time $t$, respectively. Hereafter, we assume that all obstacles follow the same control policy, which is known to the robot.
In addition, the system noise is considered unknown but bounded by an a priori known bound $\lVert \bm{w}_i(t) \rVert \leq \delta_w$.
The key assumption of this work is that the robot gets intermittent information about the obstacle state $\bm{o}_i(t_s)$,
which could be planned (e.g., when the UUV surfaces) or at unknown time instants $t_s$ (e.g., a UGV entering a communication denied environment).
The problem we address in this paper is as follows:
\begin{problem}[Safe Planning with Intermittent Information]\label{problem}
Given a robot with dynamics in \eqref{eq:rdynamics}, design a planner to guide it to a goal region ${\bm{x}}_g$ while always avoiding dynamic vehicles modeled as in \eqref{eq:odynamics}. This is specified in the following safety condition:
\begin{align}\label{eq:safety}
\left \lVert \bm{x}(t) - \bm{o}_{i}(t) \right \rVert \geq \delta, \; \; \forall t\geq0, \; \forall i \in \{ 1,\hdots,n\}
\end{align}
with safety threshold $\delta$ under intermittent obstacle-related information $\mathcal{I}(t_s)$ provided at unknown time instants $t_s$.
\end{problem}
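The safety condition in \eqref{eq:safety} can be checked pointwise at any time instant; a minimal sketch:

```python
import numpy as np

def is_safe(x, obstacle_states, delta):
    # ||x(t) - o_i(t)|| >= delta must hold for every obstacle i.
    return all(np.linalg.norm(np.asarray(x) - np.asarray(o)) >= delta
               for o in obstacle_states)
```

The planner's task is to guarantee this predicate holds for all $t$, even though the obstacle states are only intermittently observed.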
\input{approach}
\input{evaluation}
\input{conclusion}
\bibliographystyle{IEEEtran}
\section{Introduction}
Contact binary stars are common: According to the only currently
available unbiased statistics --
a by-product of the OGLE microlensing
project -- as discussed in Rucinski \shortcite{ruc97a}
and Rucinski \shortcite{ruc98b},
the spatial frequency of contact binaries among the main-sequence,
galactic-disk
stars of spectral types F to K (intrinsic colors $0.4 < V-I_C < 1.4$)
is about 1/100 to 1/80 (counting contact binaries as single
objects, not as two stars). Most of them have orbital periods within
$0.25 < P < 0.7$ days, and they are very rare for
$P > 1.3 - 1.5$ days \cite{ruc98a}. These properties, as
well as the spatial distribution extending all the way to the
galactic bulge, with moderately large $z$ distances from the
galactic plane, and the kinematic properties \cite{GB88} suggest an Old
Disk population of Turn-Off-Point binaries, i.e.\ a population
characterized by conditions conducive to rapid synchronization and
formation of contact systems from close, but detached, binaries.
The contact binaries are less common in open clusters which are
younger than the galactic disk \cite{ruc98b}, a property indicating
that they form over time of a few Gyrs. It is
obviously of great interest to identify binaries which are related to,
or precede the contact system stage, as the relative numbers would
give us information on durations of the pre- and in-contact stages.
Lucy \shortcite{luc76} and Lucy \& Wilson
\shortcite{LW79} were the first to point out
the observational importance of contact
systems with unequally deep eclipses as possible
exemplification of binaries which are to become contact systems or
are in the ``broken-contact'' phase of the theoretically predicted
Thermal Relaxation Oscillation (TRO) evolution of contact
binary stars, as discussed by Lucy \shortcite{luc76},
Flannery \shortcite{fla76} and
Robertson \& Eggleton \shortcite{RE77}. Lucy \& Wilson called
such contact systems the B-type -- as contrasted to the
previously recognized W-type and A-type contact systems --
because of the light curves
resembling those of the $\beta$~Lyrae-type binaries.
While the A-type are the
closest to the theoretical model of contact binaries with perfect
energy exchange and temperature equalization, the W-type show
relatively small (but still unexplained)
deviations in the sense that {\it less-massive components\/}
have slightly higher surface brightnesses (or temperatures).
Systems of the B-type introduced by Lucy \& Wilson
show large deviations from the contact model in that
{\it more massive components\/} are hotter than predicted by
the contact model. Thus, the energy transfer is inhibited
or absent and the components of the B-type systems behave more like
independent (or thermally de-coupled) ones. While
light-curve-synthesis solutions suggest good geometrical
contact, it has been suggested that these may be semi-detached
binaries with hotter, presumably more-massive
components filling their Roche lobes (we will call
these SH following Eggleton \shortcite{egg96}).
The same OGLE statistics that gave indications of the very high spatial
frequency of contact binaries suggests that
short-period binaries which simultaneously are in contact and
show unequally-deep eclipses are relatively rare in space:
Among 98 contact systems in the volume
limited sample, only 2 have unequally deep minima indicating
components of different effective temperatures \cite{ruc97b}.
Both of these
systems (called there ``poor-thermal-contact'' or ``PTC'' systems,
but which could be as well called B-type contact systems)
have periods longer than 0.37 day and both show the first
maximum (the one following the deeper eclipse) to be the higher of
the two. This type of
asymmetry is dominant in the spatially much larger (magnitude limited)
sample of systems available in the OGLE survey. As already pointed out
by Lucy \& Wilson \shortcite{LW79}, this sense of asymmetry can be
explained most easily as a manifestation of mass-transfer
from the more-massive to the less-massive component.
We add here that this can happen also in a {\it non-contact\/}
SH system,
with the continuum light emission from the interaction volume
between stars contributing to the strong curvature of the light-curve
maxima and mimicking the photometric effects
of the tidally-elongated (contact) structure.
Exactly this type of asymmetry is observed in a system which
is absolutely crucial in the present context, V361~Lyr;
it has been studied by Ka\l u\.zny \shortcite{kal90}
and Ka\l u\.zny \shortcite{kal91}, and later convincingly shown by
Hilditch et al.\ \shortcite{hil97} to be
a semi-detached binary with matter flowing from the more massive to the
less-massive component. The light curve asymmetry in the case of
V361~Lyr is particularly large and stable. A similar asymmetry
and somewhat similar mass-transfer effects (albeit involving
much more massive
components) are observed in the early-type system SV~Cen \cite{ruc92}
where a very large period change provides direct evidence of
tremendous mass transfer.
The subject of this paper, the close binary
W~Crv (GSC 05525--00352, BD$-12$~3565) is a relatively
bright ($V=11.1$, $B-V=0.66$) system with the
orbital period of 0.388 day. For a long time, this was the short-period
record holder among systems which appear to be
in good geometrical contact, yet which show strongly
unequally-deep eclipses indicating poor thermal contact.
It served as one of the systems exemplifying the
definition of contact systems of the B-type
by Lucy \& Wilson \shortcite{LW79}, although most often its
type of variability has been characterized as EB or $\beta$ Lyrae-type.
A system photometrically
similar to W~Crv with the period of 0.37 days, \#3.012,
has been identified in the OGLE sample \cite{ruc97b}, but it is
too faint for spectroscopic studies.
Our radial velocity data which we describe in this paper
are the first spectroscopic results for W~Crv.
Thus, it would be natural to combine them with the previous
photometric studies. However, we will claim below
that W~Crv is more complex than the current
light-curve synthesis codes can handle. The previous analyses of the
system, without any spectroscopic constraints on
the mass-ratio ($q$), encountered severe
difficulties. A recent extensive study of several light curves of
W~Crv by Odell \shortcite{ode96}, solely based on
photometric data found that the
mass-ratio was practically indeterminable ($0.5 < q < 2$), admitting
solutions ranging between the Algol systems (SC, for semi-detached
with the cool, lower-mass component filling its Roche lobe)
on one hand and all possible configurations which are
conventionally used to explain
the B-type light curves (SH, i.e.\ the broken-contact or pre-contact
semi-detached systems as well as poor-thermal-contact systems)
on the other hand.
A value of $q=0.9$ and the more massive component being eclipsed
at primary minimum were assumed by Odell, mostly on plausibility grounds.
For a comprehensive summary of the theoretical issues related
to pre- and in-contact evolution, the reader is referred
to the review of Eggleton \shortcite{egg96};
observational data for B-type systems similar to W~Crv were collected
and discussed in a five-part series by Ka\l u\.zny, concluded
with Ka\l u\.zny \shortcite{kal86}, and in studies by
Hilditch \& King \shortcite{hil86}, Hilditch et al.\ \shortcite{hil88}
and Hilditch \shortcite{hil89}.
\section{Radial velocity observations}
\begin{figure}
\centerline{\psfig{figure=SRfig1.ps,height=6cm}}
\vspace{0.25cm}
\caption{\label{fig1}
The radial velocity observations of W~Crv versus the
orbital phase. The hotter, more massive component eclipsed in the primary
minimum is marked by filled circles. The data are listed in
Table~\ref{tab1} and the sine-curve
fits (broken lines) correspond to elements given in Table~\ref{tab2}.}
\end{figure}
The radial velocity observations of W~Crv were
obtained in February -- April 1997
at David Dunlap Observatory, University of Toronto using the 1.88 metre
telescope and a Cassegrain spectrograph.
The spectral region of 210 \AA\ centered
on 5185 \AA\ was observed at the spectral
scale of 0.2 \AA/pixel or 12 km~s$^{-1}$/pixel. The
entrance slit of the spectrograph of 1.8 arcsec on the sky
was projected into about 3.5 pixels or 42 km~s$^{-1}$. The exposure
times were typically 10 to 15 minutes.
The radial velocity data are listed in Table~\ref{tab1}
and are shown graphically in Figure~\ref{fig1}. The component velocities
have been determined by fitting gaussian curves to peaks in
the broadening function obtained
through a de-convolution process, as described in Lu \& Rucinski
\shortcite{LR99}. The mean standard deviations from the sine-curve
variations are 7.7 km~s$^{-1}$ for the primary (more-massive,
subscript 1) component and 17.2 km~s$^{-1}$ for the secondary
(less-massive, subscript 2) component. These deviations
give the upper limits to the measurement uncertainties because they
contain the deviations of the component velocities
from the simplified model of circular orbits without any proximity
effects (i.e.\ without allowance for non-coinciding photometric
and dynamic centres of the components).
\begin{table}
\caption{\label{tab1} Radial velocity observations of W~Crv}
\begin{tabular}{@{}rrrrrr}
JD(hel) & Phase& V$_{pri}$ & O--C & V$_{sec}$ & O--C \\
2450000+& & km~s$^{-1}$ & km~s$^{-1}$ & km~s$^{-1}$ & km~s$^{-1}$ \\
489.811 & 0.564 & 38.8 & 1.7 & $-109.5$ & $-10.9$ \\
489.822 & 0.591 & 57.7 & $-0.9$ & $-136.4$ & $-6.3$ \\
489.835 & 0.624 & 76.0 & $-5.9$ & $-175.0$ & $-11.0$ \\
489.846 & 0.653 & 97.5 & $-0.6$ & $-187.4$ & 0.4 \\
489.858 & 0.685 & 113.7 & 1.6 & $-205.4$ & 2.9 \\
489.869 & 0.713 & 121.9 & 1.9 & $-208.3$ & 11.5 \\
489.881 & 0.744 & 120.4 & $-3.4$ & $-257.9$ & $-32.6$ \\
489.892 & 0.772 & 114.3 & $-8.2$ & $-228.2$ & $-4.7$ \\
489.903 & 0.802 & 130.1 & $13.6$ & $-215.1$ & $-0.5$ \\
489.914 & 0.830 & 113.5 & $7.1$ & $-198.7$ & 1.2 \\
520.674 & 0.091 & $-84.1$ & $10.3$ & 130.9 & 37.2 \\
520.684 & 0.118 & $-118.2$ & $-4.5$ & 147.7 & 25.8 \\
520.697 & 0.149 & $-140.0$ & $-7.6$ & 162.7 & 13.5 \\
520.707 & 0.177 & $-154.1$ & $-8.8$ & 169.3 & 1.3 \\
520.721 & 0.212 & $-156.7$ & $-0.8$ & 205.7 & 22.1 \\
520.732 & 0.240 & $-153.7$ & 5.9 & 187.4 & $-1.6$ \\
520.744 & 0.271 & $-153.1$ & 5.6 & 166.6 & $-21.0$ \\
520.756 & 0.303 & $-160.0$ & $-7.9$ & 157.5 & $-20.5$ \\
520.769 & 0.335 & $-141.1$ & $-0.8$ & 145.7 & $-15.1$ \\
520.779 & 0.363 & $-111.0$ & 14.8 & 153.2 & 13.6 \\
535.675 & 0.746 & 120.8 & $-3.0$ & $-237.4$ & $-12.1$ \\
535.686 & 0.774 & 116.1 & $-6.2$ & $-233.6$ & $-10.5$ \\
539.716 & 0.159 & $-124.0$ & 13.3 & 133.4 & $-22.9$ \\
539.728 & 0.190 & $-163.9$ & $-14.0$& 148.6 & $-26.2$ \\
539.741 & 0.223 & $-160.2$ & $-2.4$ & 153.7 & $-32.6$ \\
539.752 & 0.252 & $-160.8$ & $-0.9$ & 178.4 & $-11.0$ \\
\end{tabular}
\end{table}
The individual observations as well as the
{\it observed minus calculated\/} $(O-C)$ deviations from the
sine-curve fits to the radial velocities of the individual components
are given in Table~\ref{tab1}. When finding the parameters of
the fits, we assumed only the value of the
period, following Odell \shortcite{ode96},
and determined the mean velocity $V_0$,
the two amplitudes $K_1$ and $K_2$ as well as the moment of the
primary minimum $T_0$. The remaining quantities in that table have
been derived from the amplitudes $K_i$. The errors of the
parameters have been determined by a bootstrap experiment
based on 10,000 solutions with randomly selected observations
with repetitions.
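The sine-curve fitting and the bootstrap error estimation described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data generated from the Table~\ref{tab2} elements of the primary; the linearized sine-plus-cosine parametrization, the variable names and the use of NumPy are our assumptions, not the actual reduction pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Reported elements for the primary of W Crv (Table 2) and residual scatter
V0_TRUE, K_TRUE, NOISE = -20.1, 140.8, 7.7        # km/s

# Synthetic radial velocities at 26 random orbital phases
phase = rng.uniform(0.0, 1.0, 26)
v = V0_TRUE - K_TRUE * np.sin(2 * np.pi * phase) + rng.normal(0.0, NOISE, 26)

def sine_fit(phase, v):
    """Least-squares fit of v = V0 + b sin(2 pi phase) + c cos(2 pi phase);
    the semi-amplitude is K = sqrt(b^2 + c^2)."""
    A = np.column_stack([np.ones_like(phase),
                         np.sin(2 * np.pi * phase),
                         np.cos(2 * np.pi * phase)])
    (v0, b, c), *_ = np.linalg.lstsq(A, v, rcond=None)
    return v0, np.hypot(b, c)

v0_fit, k_fit = sine_fit(phase, v)

# Bootstrap with repetitions, as in the text (2000 resamplings here)
k_boot = [sine_fit(phase[idx], v[idx])[1]
          for idx in (rng.integers(0, 26, 26) for _ in range(2000))]
k_err = np.std(k_boot)
print(f"K1 = {k_fit:.1f} +/- {k_err:.1f} km/s   (input {K_TRUE})")
```

With 26 points and a 7.7 km~s$^{-1}$ scatter, the fitted semi-amplitude recovers the input $K_1$ to within a few km~s$^{-1}$, comparable to the quoted $\pm 2.0$ km~s$^{-1}$ uncertainty.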
\begin{table}
\caption{\label{tab2} Circular orbit solution for W~Crv}
\begin{tabular}{@{}cccc}
Parameter & Units & Value & Comment \\
$T_0$ & JD(hel) & $2450489.9781 \pm 0.0015$ & \\
$P$ & days & 0.388081 & assumed \\
$K_1$ & km~s$^{-1}$ & $140.8 \pm 2.0$ & \\
$K_2$ & km~s$^{-1}$ & $206.4 \pm 3.7$ & \\
$V_0$ & km~s$^{-1}$ & $-20.1 \pm 1.8$ & \\
$q$ & & $0.682 \pm 0.016$ & derived\\
$(a_1+a_2) \sin i$ & $R_\odot$ & $2.66 \pm 0.04$ & derived \\
$M_1 \sin^3 i$ & $M_\odot$ & $1.00 \pm 0.06$ & derived \\
$M_2 \sin^3 i$ & $M_\odot$ & $0.68 \pm 0.05$ & derived \\
\end{tabular}
\end{table}
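The derived quantities in Table~\ref{tab2} follow from $K_1$, $K_2$ and $P$ through the standard spectroscopic relations $q = K_1/K_2$, $(a_1+a_2)\sin i = (K_1+K_2)\,P/2\pi$ and $(M_1+M_2)\sin^3 i = P\,(K_1+K_2)^3/2\pi G$. A short Python check (the constants and variable names are ours):

```python
import math

# Fitted spectroscopic elements of W Crv (Table 2), in SI units
P = 0.388081 * 86400.0           # orbital period [s]
K1, K2 = 140.8e3, 206.4e3        # semi-amplitudes [m/s]

G = 6.674e-11                    # gravitational constant [m^3 kg^-1 s^-2]
M_SUN, R_SUN = 1.989e30, 6.957e8 # solar mass [kg] and radius [m]

q = K1 / K2                                      # mass ratio M2/M1
a_sini = (K1 + K2) * P / (2.0 * math.pi) / R_SUN # (a1 + a2) sin i [R_sun]
m_tot = P * (K1 + K2) ** 3 / (2.0 * math.pi * G) # (M1 + M2) sin^3 i [kg]
m1 = m_tot * K2 / (K1 + K2) / M_SUN              # M1 sin^3 i [M_sun]
m2 = m_tot * K1 / (K1 + K2) / M_SUN              # M2 sin^3 i [M_sun]

print(f"q = {q:.3f}")
print(f"(a1+a2) sin i = {a_sini:.2f} R_sun")
print(f"M1 sin^3 i = {m1:.2f} M_sun, M2 sin^3 i = {m2:.2f} M_sun")
```

This reproduces the tabulated values $q=0.682$, $(a_1+a_2)\sin i = 2.66\,R_\odot$, $M_1\sin^3 i = 1.00\,M_\odot$ and $M_2\sin^3 i = 0.68\,M_\odot$.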
Among the spectroscopic elements in Table~\ref{tab2},
the mass-ratio, $q = 0.682 \pm 0.016$,
is the most important datum for proper interpretation of the
light curves. Without external information
on the mass-ratio, strong inter-parametric correlations
in the light-curve analyses are known to frequently produce entirely
wrong solutions (except for cases of total eclipses).
Before attempting a combined solution,
we note that the spectroscopic data, as given in
Table~\ref{tab2}, describe the following system:
The more-massive component is eclipsed in the deeper
eclipse and hence is the hotter of the two. Judging by the
relative depths of the eclipses, and noting the small light
contribution of the secondary component (even if it fills its
Roche lobe), we estimate -- on the basis of the systemic colour at light
maxima $(B-V)=0.66$ -- that the effective temperatures of the
components are approximately 5700~K and 4900~K.
The mass of the primary component is
$M_1\,\sin^3 i = 1.00\, M_\odot$, so that the primary is apparently
a solar-type star, and the orbital inclination
cannot be far from $i=90^\circ$, although not exactly so as total
eclipses are not observed. Obviously,
the spectroscopic data cannot provide any constraint on the degree of
contact in the system, i.e.\
whether it is a contact system with poor
thermal contact or a semi-detached configuration with one of
the components filling the Roche lobe or perhaps even
a detached binary. There are no spectroscopic
indications of any mass-transfer either,
although -- with the mutual proximity of
components -- one would not expect such obvious signatures of this
process as a stream or an accretion disk; besides, the spectral
region around 5185 \AA\ would not normally show them in any case.
We must seek for constraints on the
system geometry in the light curve and its variations.
\begin{figure}
\centerline{\psfig{figure=SRfig2.ps,height=6cm}}
\vspace{0.25cm}
\caption{\label{fig2} Four seasonal V-filter light curves
of W~Crv as discussed by Odell \shortcite{ode96}
are shown here together, in intensity units, assuming the difference
of 0.37 mag between the comparison and the variable star.
Note the good repetition of the light curves in primary minima and large
variations elsewhere. The codes are: 1966 -- crosses, 1981 -- filled
circles, 1988 -- filled squares, 1993 -- triangles.}
\end{figure}
\section{Attempts at a combined light-curve and radial-velocity solution}
Four light curves discussed by Odell \shortcite{ode96}
are currently available: the first from 1966
was obtained by Dycus \shortcite{dyc68}, the
remaining three in 1981, 1988 and 1993 were by Odell.
The light curves were
obtained with the same comparison star, permitting their direct
comparison. The large seasonal variations
of the light curves were
interpreted by Odell as due to star spots. We do not support the spot
hypothesis, and instead point out a curious property: A comparison
of the seasonal light curves (Figure~\ref{fig2}) indicates
that all changes take place at light maxima and during the
secondary eclipse when the cooler component is behind the hotter one,
but that primary eclipse is surprisingly similar in all four curves.
This constancy of the primary-eclipse shape
holds irrespective of whether one considers the intensity
or magnitude (relative intensity) units.
We feel that we have here a strong indication that
mass-exchange and accretion processes are operating
between the stars. These processes would produce large
areas of hot plasma, most probably on the inner face of the
less-massive secondary component which is invisible during the primary
minima. One can of course contrive a scenario involving dark spots
appearing in certain areas, but never appearing
on the outer side of the less-massive component, but the dark-spot
hypothesis seems to be the most artificial of all possibilities. We note
that the argument of the diminished brightness being
accompanied by a redder
colour is a weak one, as such a correlation is expected when plasma
temperature effects are involved, irrespective of whether the spots
are cool or hot.
With strong mass-transfer effects modifying its light curve,
W~Crv is not a typical contact system. In this situation,
a blind application of light-curve synthesis codes may have
led us to entirely wrong sets of parameters.
For that reason, we did not attempt to obtain a formal light-curve solution
of the system, but instead used the popular light-curve synthesis
program BinMak2 (as described by Bradstreet \shortcite{bra94} and Wilson
\shortcite{wil94}) to explore reasonable
ranges of parameters in different geometrical configurations.
Attempts at conventional light-curve
synthesis solutions of W~Crv encounter several problems.
First of all, the large amplitudes {\it at both minima\/}
totally exclude a detached configuration.
At least one of the components or possibly both contribute to
the strong ellipticity of the light curve, which would not be
surprising in view of the short orbital period and little space
for expansion of components in the system. The system
must be a contact one or must be described by
one of the two possible semi-detached configurations.
Arguably, durations of sub-contact phases of evolution are
very short and the system should quickly reach a semi-detached stage.
Let us call
the three possibilities ``C'' for contact, ``SH'' for the one
with the more massive component filling the Roche lobe and ``SC''
for an Algol configuration with the less massive component filling
its lobe. The shapes of the orbital cross-sections
of the components for these three
possibilities are shown in Figure~\ref{fig3}. We will discuss
them in turn, in reference to Figures~\ref{fig4} and \ref{fig5}
which show the most symmetric 1981 light curve and then
the four seasonal light curves. The 1981 curve was selected
for its relatively symmetric shape, good phase coverage
and absence of what was initially thought to be signatures of dark spots.
The parameters of the best-fitting synthesis models for the V-filter
1981 light curve are given in Table~\ref{tab3}. The values of
equipotentials $\Omega_i$ are defined as in the Wilson-Devinney
program \cite{WD71}
and $r_i$ are the volume radii in units of the orbital centre
separation. The following assumptions on the properties of the
components of W~Crv were made while generating the
synthetic light curves: The limb darkening coefficients
$u_1 = 0.65$ and $u_2 = 0.75$, the gravity exponents
$g_1 = g_2 = 0.32$ and the bolometric albedo $A_1 = A_2 = 0.5$.
The inner and outer equipotentials for $q = 0.682$ were
$\Omega_{in} = 3.215$ and $\Omega_{out} = 2.821$.
The radii given in Table~\ref{tab3} are the volume radii.
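The quoted critical equipotentials can be checked against the standard (Kopal/Wilson--Devinney) Roche potential evaluated on the line of centres: $\Omega_{in}$ and $\Omega_{out}$ are the values of the potential at the $L_1$ and $L_2$ points, found by bisection on $d\Omega/dx$. The sketch below (pure Python; the conventions and names are our assumptions) recovers the quoted values for $q=0.682$ to within $\sim 0.005$:

```python
def omega(x, q):
    # Roche potential on the line of centres (star 1 at x = 0, star 2 at x = 1),
    # in the usual dimensionless Kopal / Wilson-Devinney form
    return 1.0 / abs(x) + q * (1.0 / abs(1.0 - x) - x) + 0.5 * (1.0 + q) * x * x

def domega(x, q):
    # derivative of omega along the line of centres, with the branch for x > 1
    d = -1.0 / x**2 - q + (1.0 + q) * x
    if x < 1.0:
        d += q / (1.0 - x)**2
    else:
        d -= q / (x - 1.0)**2
    return d

def bisect(f, a, b, steps=200):
    fa = f(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

q = 0.682
x_l1 = bisect(lambda x: domega(x, q), 0.3, 0.99)   # inner Lagrangian point L1
x_l2 = bisect(lambda x: domega(x, q), 1.01, 3.0)   # outer Lagrangian point L2
omega_in, omega_out = omega(x_l1, q), omega(x_l2, q)
print(f"Omega_in  = {omega_in:.3f}")   # cf. 3.215 in the text
print(f"Omega_out = {omega_out:.3f}")  # cf. 2.821 in the text
```

The small residual offsets ($\sim 0.003$) presumably reflect rounding of $q$ and convention details in the original computation.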
\begin{table}
\caption{\label{tab3} Three light-curve synthesis solutions of W~Crv}
\begin{tabular}{@{}lccc}
Parameter & C & SH & SC \\
$\Omega_1$ & 3.156 & 3.215 & 3.4 \\
$\Omega_2$ & 3.156 & 3.4 & 3.215 \\
$i$ (deg) & 88 & 90 & 90 \\
$r_1$ & 0.424 & 0.412 & 0.380 \\
$r_2$ & 0.357 & 0.313 & 0.345 \\
$(R_1/R_\odot) \sin i$ & 1.13 & 1.10 & 1.01 \\
$(R_2/R_\odot) \sin i$ & 0.95 & 0.83 & 0.92 \\
Comment & $f=0.15$ & primary & secondary \\
& & fills R.\ lobe & fills R.\ lobe \\
\end{tabular}
\end{table}
\begin{figure}
\centerline{\psfig{figure=SRfig3.ps,height=7cm}}
\vspace{0.25cm}
\caption{\label{fig3} The three configurations of W~Crv
considered in the text, with parameters as listed
in Table~\ref{tab3}, are shown here as sections in the orbital
plane. The Roche critical equipotentials (dotted
lines) and the position of the mass center
(cross) are shown to scale. Note how little
space separates the components; this leads to our hypothesis that strong
mass-transfer phenomena between the components are the source of
additional light which produces the seasonal variations of the light
curve.
}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=SRfig4.ps,height=7cm}}
\vspace{0.25cm}
\caption{\label{fig4} The 1981 light curve is shown
here with the three fits: the
contact model (C) with a mild degree-of-contact
($f=0.15$, continuous line) and two semi-detached
configurations discussed in the text (SH, dotted line and
SC, broken line).
}
\end{figure}
{\bf Contact configuration (C):}
Conventional contact solutions make it
abundantly clear that the strong curvature of
light maxima and large amplitude of light
variations require two properties: a large orbital inclination and a
moderately strong contact, at least $f \simeq 0.15 - 0.25$.
However, the inclination cannot
be exactly 90 degrees as then we would see a total eclipse
in the secondary minimum. The contact-model fit is far from
perfect because of the large seasonal changes, but also indicates the need
for a ``super-reflection'' effect, with
increased albedo not only above the currently most popular value of 0.5
for convective envelopes \cite{ruc69},
but even above its physically allowed upper
limit of unity. This is clearly visible in Figures~\ref{fig4} and
\ref{fig5} in the branches of the secondary minimum.
Cases of such abnormal reflection have already been discussed by
Lucy \& Wilson \shortcite{LW79} -- including the case of W~Crv --
and by Ka\l u\.zny \shortcite{kal86}, as indicating some
abnormal brightness distribution between the stars
(most probably, on the inner side of the
secondary component) which could be linked to a mass-exchange phenomenon.
Obvious presence of such effects would make the standard, light-curve
synthesis model -- which
hides all energy and mass transfers deep inside the common
contact envelope -- entirely invalid.
{\bf Semi-detached configuration (SH):} This is the preferred
configuration for B-type systems, either in terms of
a system before forming contact or in the broken-contact phase
of the TRO oscillations. Photometrically, the model does not provide
enough of the light-curve amplitude and curvature at maxima,
even with $i=90^\circ$. The dotted line in Figures~\ref{fig4} and
\ref{fig5} shows this deficiency. However, in this
configuration, it would be natural to expect
departures from the simple geometric model due to the mass exchange
phenomena. The increased reflection effect
could be then explained through an area
on the secondary component which is directly struck by the in-falling
matter from the primary component, while the strong curvature of maxima
could be explained by a light contribution from the accretion region
which is visible only at the quadratures, as is most likely the case
for SV~Cen \cite{ruc92}. Although such a configuration cannot
be modeled with the existing light-curve synthesis codes,
it offers a prediction of the shortening of the orbital period;
in Section~\ref{period} we present
indications that the period is in fact getting longer.
It is also consistent with the
light curve variations almost entirely limited to the light
maxima, with very small seasonal differences between portions
at light minima. If the mass-transfer phenomena between the stars
increase the light-curve amplitude, then the inclination
could take basically any value. For $i < 90^\circ$, the inner side
of the secondary component would be partly visible at secondary
minima explaining large light-curve variations at these phases.
{\bf Semi-detached Algol configuration (SC):} Of the three
geometrical models considered here,
this one best fits the 1981 light curve in all parts except
in the upper branches of the
primary minimum which are wider than predicted. The large
amplitudes of the light variations find a better explanation
in this model than in the SH case. Also, most of the
reflection effect
can be explained with the conventional value of the albedo
by the relatively larger area of the illuminated secondary component.
The mass-transfer in this model should lead to
a period lengthening, as in other Algols. This is what we apparently
see in the times of minima of W~Crv
(see Section~\ref{period}). If the light-curve maxima contain
a light contribution of mass-transfer and/or accretion
effects, then the second maximum
(after the secondary minimum) would be expected -- on the average --
to be more perturbed by the Coriolis-force deflected stream,
and this seems to be the case for W~Crv
(see Figure~\ref{fig2}). Within the SC hypothesis, only
one of the two components, the secondary, would be abnormal (oversized
relative to the main-sequence relation; see Tables~\ref{tab2}
and \ref{tab3}), whereas
the C and SH models predict mass-radius inconsistencies
for both components. Thus, we feel
that all the current data suggest that the short-period Algol
configuration is the correct explanation for W~Crv. The major problem,
however, is with the theoretical explanation for such a configuration:
There is simply no place for Algols with periods as short as
0.388 days within the present theories. We return to this problem in
Section~\ref{disc}.
\begin{figure*}
\centerline{\psfig{figure=SRfig5.ps,height=8cm,angle=90}}
\vspace{0.25cm}
\caption{\label{fig5} The four V-filter light curves
of W~Crv (in magnitudes) are shown here together with three
different fits for a contact model (C) with a mild degree-of-contact
($f=0.15$, continuous line), for two semi-detached
configurations discussed in the text (SH, dotted line and
SC, broken line). The fits have been based on
the 1981 light curve (see Figure~4).
Note the small differences between the theoretical
curves when compared with the large seasonal variations in the
observed light curves.
}
\end{figure*}
\section{Period changes}
\label{period}
Although known for almost 65 years, W~Crv has not
been extensively observed for moments of minima. Practically all
extant data have been presented by Odell \shortcite{ode96}.
Dr.\ Odell has kindly sent us new, unpublished data and corrections to
a few data points listed in Table~1 of his paper. These are given
in Table~\ref{tab4}. We have added to
these the moment of minimum inferred from
our new spectroscopic determination of $T_0$ (see Table~\ref{tab2}).
In what follows, we will use the ephemeris of Odell:
$JD({\rm min}) = 2427861.3635 + 0.388080834 \times E$.
The {\it observed minus calculated\/} $(O-C)$ deviations
from Odell's ephemeris are shown in Figure~\ref{fig6}. The moments
of secondary minima, which are based on shallower eclipses with
stronger light-curve perturbations, are marked in the figure
by open circles. Our spectroscopic result
gives a significant, positive deviation of
$(O-C) = +0.0093 \pm 0.0015$ days, in agreement with the newest data
of Odell.
\begin{table}
\caption{\label{tab4} New and corrected moments of
minima for W~Crv}
\begin{tabular}{@{}cccc}
$E$ & $T_0$ & $(O-C)$ & Comment \\
& 2400000+ & days & \\
54750.0 & 49108.7920 & +0.0028 & correction \\
54752.5 & 49109.7626 & +0.0032 & correction \\
58309.0 & 50489.9781 & +0.0093 & spectroscopy \\
60364.5 & 51287.6757 & +0.0067 & new \\
60411.0 & 51305.7230 & +0.0082 & new \\
60413.5 & 51306.6938 & +0.0088 & new \\
60416.0 & 51307.6639 & +0.0087 & new \\
\end{tabular}
\end{table}
\begin{figure}
\centerline{\psfig{figure=SRfig6.ps,height=7cm}}
\vspace{0.25cm}
\caption{\label{fig6} The $(O-C)$ deviations in the
observed moments of eclipses (in days) from the ephemeris of Odell
(1996). The secondary minima are marked by open
circles. The new spectroscopic determination is marked by
the large filled square. Its error has been obtained by
a bootstrap experiment and is well determined, but --
obviously -- systematic effects in photometric and
spectroscopic determinations may be different. The quadratic fit
discussed in the text is shown by a continuous line. The histogram
of the bootstrap results for the quadratic coefficient $a_2$
(in units of $10^{-12}$ days) is shown by the small insert.
}
\end{figure}
\begin{table}
\caption{\label{tab5} Quadratic fits to the time-of-minima
$(O-C)$ deviations and the evolutionary time scales $\tau$}
\begin{tabular}{@{}ccccc}
Value & $a_0$ & $a_1$ & $a_2$ & $\tau$ \\
& days & $10^{-7}$ days & $10^{-12}$ days & $10^7$ years \\
$-95$\% ($-2\,\sigma$) & $-0.0025$ & $-8.28$ & $+3.87$ & 5.33 \\
$-68$\% ($-1\,\sigma$) & $-0.0002$ & $-6.44$ & $+5.93$ & 3.48 \\
median & $+0.0023$ & $-3.79$ & $+7.98$ & 2.58 \\
$+68$\% ($+1\,\sigma$) & $+0.0072$ & $-2.36$ & $+11.06$ & 1.86 \\
$+95$\% ($+2\,\sigma$) & $+0.0097$ & $-1.04$ & $+13.44$ & 1.53 \\
\end{tabular}
\end{table}
The available times-of-minima contain information about
orbital period changes that have taken place over
the last 65 years. Disregarding the presumably random and much smaller
shifts in the eclipse centres
caused by stellar-surface perturbations (whether we call them spots
or mass-transfer affected areas), the observed deviations from the linear
elements of Odell \shortcite{ode96} in Figure~\ref{fig6}
can be interpreted as consisting of at least two straight segments
or as forming a parabola. We do not consider a possibility that
the discoverer of W~Crv, Tsesevich \shortcite{tse54},
committed a gross error in the timing of the minima
because he was one of the most experienced
observers of variable stars ever. In W~UMa-type systems,
the abrupt changes of the type leading to the straight-segmented
$(O-C)$ diagrams take place at intervals of typically years; these changes
may have some relation to the magnetic-activity
cycles \cite{ruc85}. They are very difficult to handle as they require
very dense eclipse-timing coverage; such a coverage is not
available for W~Crv. It is easier to
analyze the $(O-C)$ deviations for a global quadratic trend
using an expression: $(O-C) = a_0 + a_1 \times E + a_2 \times E^2$.
Because of the poor distribution of data points over time, a linear
least-squares solution would give unreliable error estimates for the
coefficients $a_i$. In view of this difficulty,
the uncertainties have been evaluated using
the bootstrap-sampling technique and are listed
in Table~\ref{tab5} in terms of the median values at the
68 percent (for gaussian distributions, $\pm 1$-sigma) and
95 percent ($\pm 2$-sigma) confidence levels.
The bootstrap technique reveals a strongly non-gaussian
distribution of the uncertainties, as shown for the coefficient $a_2$
in the insert to Figure~\ref{fig6}.
The quadratic coefficient $a_2$ is proportional to the
second derivative of the times of minima, and hence to the period
change through $dP/dt = 2 a_2/P$. For comparison with the
theory of stellar evolution, it is convenient to consider the
{\it time-scale of the period change\/} given by
$\tau = P/(dP/dt) = P^2/2a_2$. The values of $\tau$ are
given in the last column of Table~\ref{tab5}.
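The $\tau$ column of Table~\ref{tab5} follows directly from the quadratic coefficients through $\tau = P^2/2a_2$; a minimal Python check (variable names ours):

```python
P = 0.388081             # orbital period [days]
P2 = P * P

# Quadratic coefficients a2 from Table 5, in units of 1e-12 days
a2_values = [3.87, 5.93, 7.98, 11.06, 13.44]

taus = []
for a2 in a2_values:
    tau_days = P2 / (2.0 * a2 * 1e-12)   # tau = P / (dP/dt) = P^2 / (2 a2)
    taus.append(tau_days / 365.25)       # convert days to years

dPdt = 2.0 * 7.98e-12 / P                # median dP/dt (dimensionless)
print([f"{t/1e7:.2f}" for t in taus])    # Table 5 column: 5.33, 3.48, 2.58, 1.86, 1.53
print(f"median dP/dt = {dPdt:.2e}")
```

The computed values reproduce the $\tau$ column of Table~\ref{tab5}, confirming the internal consistency of the quadratic fits.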
The data given in Table~\ref{tab5} indicate that the
orbital period is becoming longer with the characteristic
time scale of $(1.5 - 5.3) \times 10^7$ years, with the range
based on the highly secure 95 percent confidence level. The sense
of the period change is somewhat unexpected as it indicates -- for the
relative masses that we determined -- that the mass transfer
is from the less-massive component to the more-massive component,
i.e.\ as in Algols (the configuration designated as SC).
One would normally expect
the other semi-detached configuration (SH) for
the pre-contact or broken-contact phases of the TRO cycles.
The period-lengthening argument for the Algol (SC)
configuration is a stronger one than any based on the light
curve analysis which seems to be hopelessly difficult for
W~Crv. The time-scale is exactly in the range expected for
the Kelvin-Helmholtz or thermal time-scale evolution of solar-mass stars,
$\tau_{K-H} = 3.1 \times 10^7 (M/M_\odot)^2
(R/R_\odot)^{-1} (L/L_\odot)^{-1}$,
which is characteristic for systems in the rapid stage of mass exchange
such as $\beta$~Lyrae or SV~Cen.
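Although not used explicitly in the text, the standard conservative mass-transfer relation $\dot{P}/P = 3\,\dot{M}\,(M_1-M_2)/(M_1 M_2)$ (for transfer from the less- to the more-massive star) translates the observed time-scale into an implied transfer rate. The sketch below is our illustration, with $\sin i \simeq 1$ and a solar luminosity adopted for the primary in the Kelvin--Helmholtz estimate (the luminosity is not given in the text):

```python
M1, M2 = 1.00, 0.68          # masses in M_sun, taking sin i close to 1
tau = 2.58e7                 # observed time-scale of the period change [yr]

# Conservative transfer from the less- to the more-massive component:
# Pdot/P = 3 Mdot (M1 - M2) / (M1 M2)  =>  Mdot = M1 M2 / (3 (M1 - M2) tau)
mdot = M1 * M2 / (3.0 * (M1 - M2)) / tau
print(f"implied Mdot ~ {mdot:.1e} M_sun/yr")

# Kelvin-Helmholtz time-scale quoted in the text, for the solar-type primary
# (L = 1 L_sun is an assumption)
M, R, L = 1.00, 1.01, 1.0
tau_kh = 3.1e7 * M**2 / R / L
print(f"tau_KH ~ {tau_kh:.1e} yr")
```

The implied rate, $\dot{M} \sim 3 \times 10^{-8}\,M_\odot$~yr$^{-1}$, is indeed of the order expected for thermal time-scale mass exchange.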
\section{Discussion and conclusions}
\label{disc}
The present paper contains results of spectroscopic observations
confirming the assumption of Odell \shortcite{ode96} that the
more massive, hotter star is eclipsed in the primary minimum. However,
this information and the value of the mass-ratio
are not sufficient to understand the exact nature
of the system mostly because of the strong light curve variability
which may be interpreted as an indication of mass-exchange and
accretion phenomena producing strong deviations from
the standard binary-star model.
We suggest -- on the basis of the absence of light-curve
perturbations within the primary minima -- that the system is not a
contact binary with components which mysteriously have
different temperatures, but rather a semi-detached system. Furthermore,
we suggest that W~Crv, similarly to
systems like V361~Lyr or SV~Cen, has a light-producing
volume between the stars or -- more likely -- on the inner
face of the secondary component. In the case of V361~Lyr,
there is apparently enough space for the stream of matter to be deflected
by the Coriolis force and strike
the less-massive component on the side; in SV~Cen, the photometric effects of
a strong contact are probably entirely due to the additional
light visible only in the orbital quadratures. In
contrast to V361~Lyr and SV~Cen, the
mass-transfer phenomena in W~Crv are visible at all orbital phases
except at primary minima, that is when the inner side of the
cooler component is directed away from the observer.
The general considerations of the light-curve fits in the presence
of large brightness perturbations make both semi-detached
configurations almost equally likely,
but the semi-detached configuration of the Algol type for W~Crv,
i.e.\ the one with the less-massive,
cooler component filling the Roche lobe (SC)
is preferable for two reasons: (1)~it is simpler, as it
leads to only one component deviating from the main-sequence relation
(since the inclination must be close to 90 degrees,
the secondary would have $0.92 R_\odot$ and $0.68 M_\odot$, whereas
the primary would be a solar-type star with
$1.01 R_\odot$ and $1.00 M_\odot$), and
(2)~it can explain the observed {\it lengthening\/} of the orbital period
in the thermal time-scale. This way, W~Crv joins a group of well-known
stars -- such as SV~Cen, V361~Lyr or the famous $\beta$~Lyrae -- where
large, systematic period changes are actually the final proof
of our hypothesis of the Algol configuration. W~Crv would be then the
shortest-period (0.388 days) known Algol consisting of normal
(non-degenerate) components. With such a short period, the system presents
a difficulty to the current theories
describing formation of low-mass Algols, as reviewed by
Yungelson et al.\ \shortcite{yun89}, and of binaries related to
contact systems, as reviewed by Eggleton \shortcite{egg96}.
One can only note that Sarna \& Fedorova \shortcite{SF89}, who considered
formation of solar-type contact binaries
through the Case~A mass-exchange mechanism,
pointed out the importance of the initial mass-ratio: For a mass-ratio
sufficiently close to unity, the rapid (hydrodynamical) mass exchange
can be avoided and the system may evolve in
the thermal time-scale. Although
the mass-reversal has not been modeled,
it is likely that W~Crv is the product of such a process.
\section*{Acknowledgments}
We thank Dr.\ Andy Odell for providing the light curve and
time-of-minima data and for extensive correspondence and numerous
suggestions, and Drs.\ Bohdan Paczy\'nski and
Janusz Ka\l u\.zny
for a critical reading of the original version of the
paper and several suggestions that improved its
presentation.
\section{Introduction}
\label{intro}
The original contributions to the study of rapidly rotating neutron stars in the framework of the ``post-Newtonian approximation'' (PNA) are due to Chandrasekhar \citep{C65a}, Krefetz \citep{K67}, and Fahlman \& Anand \citep{FA71}. The problem of fast rigid rotation of neutron stars in hydrostatic equilibrium is treated in \citep{FA71} by considering the relativistic and rotational effects acting on a nonrotating Newtonian configuration obeying the polytropic ``equation of state'' (EOS, EOSs). However, there are certain reasons for which the PNA of first order in the gravitation parameter $\sigma$ fails when $\sigma \geq 0.01$. A discussion on this matter can be found in \citep{T65} (Appendix). A further discussion (\citep{FA71}, Sec.~5) verifies the negative conclusions of \citep{T65} and focuses on the limitations imposed when applying this PNA scheme to several astrophysical objects, since, unfortunately, values of interest lie in the vicinity of $\sigma \simeq 0.1$.
In a recent study \citep{GK14}, we revisit the problem by assuming the relativistic and rotational effects as decoupled perturbations, and by applying to PNA the so-called ``complex plane strategy'' (CPS). This method consists in solving all differential equations involved in the PNA's computational scheme in the complex plane. Numerical integrations are resolved by the Fortran code \texttt{DCRKF54} \citep{GV12}, which is a Runge--Kutta--Fehlberg code of fourth and fifth order, modified so as to integrate ``initial value problems'' (IVP, IVPs) established on systems of first-order ``ordinary differential equations'' (ODE, ODEs) of complex-valued functions in one complex variable along prescribed complex paths.
As discussed in \citep{GK14} (Sec.~5.2), CPS could proceed independently of the particular perturbation approach used. For instance, CPS could be applied to a PNA scheme of up to second order in $\sigma$, as developed in \citep{CN69}. But, most interestingly, CPS could cooperate with a ``hybrid approximative scheme'' (HAS) of PNA (\citep{GK14}, Sec.~5.2), in which the complete solution of the relativistic distortion, as developed in \citep{T65}, is involved.
In this study, we extend the numerical experiments started in \citep{GK14} (Sec.~5.2), by applying HAS to general-relativistic polytropic models of critically rotating neutron stars with $\sigma$ up to $\simeq 0.3$.
We do not intend to repeat here extended parts of \citep{GK14}, except for certain significant issues. For clarity and convenience, we use the same conventions, definitions, and symbols with those in \citep{GK14}.
\section{The Hybrid Approximative Scheme}
\subsection{Preliminaries}
\label{preliminaries}
In this study, we assume that the pressure $p$ and the rest-mass density $\rho$ obey the polytropic ``equation of state'' (EOS)
\begin{equation}
p = K \, \rho^\Gamma =
K \, \rho^{1+\left( 1/n \right)},
\label{2.5}
\end{equation}
where $K$ is the polytropic constant, $\Gamma$ the adiabatic index defined by $\Gamma = 1 + (1/n)$, $n$ the polytropic index, and the normalization equations for the rest-mass density $\rho$ and the coordinate $r$ are defined by
\begin{equation}
\rho = \rho_{\mathrm{c}} \, \Theta^n, \qquad
r = \left[ \frac{(n+1) \, p_{\mathrm{c}}}{4 \, \pi \,
G \, \rho_{\mathrm{c}}^2} \right]^{1/2} \xi
= \left[ \frac{(n+1) \, K \, \rho_\mathrm{c}^{\left(1/n\right)}}{4 \, \pi \,
G \, \rho_\mathrm{c}} \right]^{1/2} \xi
= \alpha \, \xi,
\label{2.5bc}
\end{equation}
where $\rho_\mathrm{c}$ is the central density, $\Theta(\xi, \, \mu)$ (with $\mu = \cos(\vartheta)$) the Lane--Emden function, $p_\mathrm{c}$ the central pressure, and $G$ the gravitation constant. The central density $\rho_\mathrm{c}$ is chosen to be the density unit in the so-called ``classical polytropic units'' (cpu), and the model parameter $\alpha$ is chosen to be the length unit in cpu; accordingly, $\Theta^n$ is the cpu measure of the rest-mass density $\rho$ and $\xi$ the cpu measure of the coordinate $r$.
The ``rotation parameter'' $\upsilon$, representing the effects of rotation, and the ``gravitation parameter'' $\sigma$ (also called ``relativity parameter''), representing the post-Newtonian effects of gravitation, are then defined by (\citep{GK14}, Eqs. (7a) and (7b), respectively)
\begin{equation}
\upsilon = \frac{\Omega^2}{2 \, \pi \, G \, \rho_{\mathrm{c}}}, \qquad
\sigma = \frac{1}{c^2} \, \, \frac{p_{\mathrm{c}}}{\rho_{\mathrm{c}}}.
\label{2.7}
\end{equation}
In the framework of PNA, the function $\Theta(\xi,\mu)$ can be expressed as (\citep{GK14}, Eq.~(9))
\begin{equation}
\begin{aligned}
\Theta(\xi,\mu) &= \sum_{i=0,\,2}^4 P_i(\mu) \, \Theta_i(\xi) \\
&= \alpha_0 \, \theta_{00}(\xi) \, P_0(\mu) \\
&+ \, \alpha_1 \left[ \theta_{10}(\xi) \, P_0(\mu) +
A_{12} \theta_{12}(\xi) P_2(\mu) \right] \\
&+ \, \alpha_2 \left\{ \theta_{20}(\xi) \, P_0(\mu) + \left[ \theta_{22}(\xi) + A_{22} \theta_{12}(\xi) \right] P_2(\mu) \right. \\
&+ \left. \qquad \qquad \qquad \qquad \, \left[ \theta_{24}(\xi) + A_{24} \theta_{14}(\xi) \right] P_4(\mu) \right\} \\
&+ \, \alpha_3 \, \theta_{30}(\xi) \, P_0(\mu),
\label{2.9a}
\end{aligned}
\end{equation}
where $\alpha_i$ are the perturbation parameters (\citep{FA71}, Eq.~(24)): $\alpha_0=1$, $\alpha_1=\upsilon$, $\alpha_2=\upsilon^2$, and $\alpha_3=\sigma$.
The functions $\theta_{ij}$ are involved in the differential equations (\citep{GK14}, Eq.~(12))
\begin{equation}
\frac{d^2\theta_{ij}}{d\xi^2} + \frac{2}{\xi} \, \frac{d\theta_{ij}}{d\xi} - \frac{j(j+1)}{\xi^2} \, \theta_{ij} = S_{ij}
\label{2.10}
\end{equation}
with $i = 0, \, 1, \, 2, \, 3,$ and $j = 0, \, 2, \, 4$, solved in view of the initial conditions~(26) of \citep{GK14}. The parameters $A_{ij}$ (\citep{GK14}, Eqs.~(24)--(25)) properly multiply the homogeneous solutions of $\theta_{ij}$ (\citep{FA71}, Eqs.~(42) and (43)), so that the boundary conditions~(16) of \citep{GK14} are satisfied.
The functions $S_{ij}$ are given by Eq.~(13) of \citep{GK14}.
\subsection{The numerical method}
We now consider HAS as a computational scheme applied on PNA of \citep{GK14}, in which the relativistic distortion participates with its complete solution, as it has been developed and computed in \citep{T65}. By substituting the complete solution $\Theta_\sigma$ for the relativistic effects in the place of the sum $\alpha_0 \, \theta_{00}(\xi) + \alpha_3 \, \theta_{30}(\xi)$ (\citep{GK14}, Eq.~(57)), we obtain the form
\begin{equation}
\begin{aligned}
\Theta(\xi,\mu) &=
\Theta_\sigma \, P_0(\mu) \\
&+ \,\alpha_1 \left[ \theta_{10}(\xi) \, P_0(\mu) +
A_{12} \theta_{12}(\xi) P_2(\mu) \right] \\
&+ \, \alpha_2 \left\{ \theta_{20}(\xi) \, P_0(\mu) + \left[ \theta_{22}(\xi) + A_{22} \theta_{12}(\xi) \right] P_2(\mu) \right. \\
&+ \left. \qquad \qquad \qquad \qquad \, \left[ \theta_{24}(\xi) + A_{24} \theta_{14}(\xi) \right] P_4(\mu) \right\}.
\label{2.9b}
\end{aligned}
\end{equation}
To compute the function $\Theta_\sigma$, we use the Oppenheimer--Volkoff equations of hydrostatic equilibrium (cf.~\citep{T65}, Eqs.~(19) and (20)),
\begin{equation}
\frac{d\Theta_\sigma}{d\xi} =
- \, \frac{1}{\xi^2} \left( \Upsilon_\sigma + \sigma \, \xi^3 \, \Theta_\sigma^{n+1} \right) \, \frac{\left[ 1 + (n+1) \, \sigma \, \Theta_\sigma \right]}{1 - 2 \, \sigma \, (n+1) \, \Upsilon_\sigma/\xi},
\label{T6501}
\end{equation}
\begin{equation}
\frac{d\Upsilon_\sigma}{d\xi} = \xi^2 \, \Theta_\sigma^n \left( 1 + \sigma \, n \, \Theta_\sigma \right),
\label{T6502}
\end{equation}
where the function $\Upsilon_\sigma$ is defined by (cf. \citep{T65}, Eq.~(18))
\begin{equation}
m(r) = 4 \, \pi \, \alpha^3 \, \rho_\mathrm{c} \Upsilon_\sigma(\xi);
\end{equation}
$m(r)$ is the total mass interior to a sphere of radius $r$ (cf. \citep{T65}, Eq.~(12)). In the Newtonian limit $\sigma=0$, Eqs.~(\ref{T6501}) and (\ref{T6502}) reduce to the classical Lane--Emden equation (Eq.~(\ref{2.10}) with $i=j=0$). In the relativistic case $\sigma > 0$, $\Theta_\sigma$ is the total distortion owing to relativistic effects and can be written as (\citep{GK14}, Eq.~(57))
\begin{equation}
\Theta_\sigma = \theta_{00} + \sum_{i=1}^{\infty} \sigma^i \, \theta_{3(i-1)}.
\label{theta3}
\end{equation}
The PNA scheme in \citep{GK14} includes terms of first order in $\sigma$; in this case, the sum in Eq.~(\ref{theta3}) contains the single term $\sigma \, \theta_{30}$. When all terms are retained, the sum in Eq.~(\ref{theta3}) becomes equal to $\Theta_\sigma - \theta_{00}$. The computational basis of HAS consists in using the complete solution for the relativistic distortion, together with perturbation terms of up to second order in $\upsilon$ for the rotational distortion.
The initial conditions for solving the differential equations~\eqref{2.10}, \eqref{T6501}, and \eqref{T6502} are written as (cf.~\citep{GK14}, Eqs.~(26))
\begin{equation}
\begin{aligned}
&\theta_{00}=1, \qquad \, \,
\frac{d\theta_{00}}{d\xi}=0, \qquad &\mathrm{at} \, \, \, \xi=0, \\
&\theta_{ij}=0, \qquad \, \, \,
\frac{d\theta_{ij}}{d\xi}=0, \, \, i = 1,\,2, \, \, j = 0,
\qquad &\mathrm{at} \, \, \, \xi=0, \\
&\theta_{ij}=\xi^j, \qquad \frac{d\theta_{ij}}{d\xi}=j \, \xi^{j-1}, \, \, i = 1,\,2, \, \, j \geq 2, \qquad &\xi \in \delta(0),
\label{2.18}
\end{aligned}
\end{equation}
where the interval $\delta(0)$ lies in the vicinity of zero, and
\begin{equation}
\Theta_\sigma = 1, \qquad \Upsilon_\sigma = 0.
\label{2.18TY}
\end{equation}
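For orientation, the system~(\ref{T6501})--(\ref{T6502}) can be integrated with a standard fourth-order Runge--Kutta method. The following Python sketch is ours and purely illustrative (the production computations use the Fortran code \texttt{DCRKF54}); it starts at a small $\xi_0$ with the initial conditions~(\ref{2.18TY}) and stops at the first sign change of $\Theta_\sigma$. In the Newtonian limit $\sigma = 0$ with $n = 1$, it recovers the classical first root $\Xi_1 = \pi$ of $\theta_{00} = \sin\xi/\xi$.

```python
# Illustrative RK4 integration of Eqs. (T6501)-(T6502); function and variable
# names are ours, not the paper's.
def tov_rhs(xi, y, n, sigma):
    theta, upsilon = y
    th = max(theta, 0.0)  # guard the real power near the surface
    dtheta = -(upsilon + sigma * xi**3 * th**(n + 1)) \
             * (1.0 + (n + 1) * sigma * th) \
             / (xi**2 * (1.0 - 2.0 * sigma * (n + 1) * upsilon / xi))
    dupsilon = xi**2 * th**n * (1.0 + sigma * n * th)
    return (dtheta, dupsilon)

def first_root_tov(n, sigma, xi0=1.0e-4, xi_end=5.0, h=1.0e-4):
    """Integrate from xi0 with Theta=1, Upsilon=0 until Theta_sigma crosses zero."""
    xi, y = xi0, (1.0, 0.0)
    while xi < xi_end and y[0] > 0.0:
        k1 = tov_rhs(xi, y, n, sigma)
        k2 = tov_rhs(xi + h/2, tuple(y[m] + h/2 * k1[m] for m in range(2)), n, sigma)
        k3 = tov_rhs(xi + h/2, tuple(y[m] + h/2 * k2[m] for m in range(2)), n, sigma)
        k4 = tov_rhs(xi + h, tuple(y[m] + h * k3[m] for m in range(2)), n, sigma)
        y = tuple(y[m] + h/6 * (k1[m] + 2*k2[m] + 2*k3[m] + k4[m]) for m in range(2))
        xi += h
    return xi  # approximate first root of Theta_sigma
```

Note that, unlike Eq.~(\ref{2.10}), this first-order formulation has no $\theta'/\xi$ stiffness at the origin, so a fixed step suffices for the sketch.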
\subsection{The complex-plane strategy}
\label{theCPS}
Equation~(\ref{2.10}) yields for $i=j=0$ the classical Lane--Emden equation, which, integrated along a prescribed interval
$\mathbb{I}_\xi = [\xi_\mathrm{start} = 0, \,\, \xi_\mathrm{end}] \subset \mathbb{R}$
with initial conditions~(\ref{2.18}a,\,b)
gives the Lane--Emden function $\theta_{00}[\mathbb{I}_\xi \subset \mathbb{R}] \subset \mathbb{R}$. To avoid the indeterminate form $\theta_{00}'/\xi$ at the origin, we start integration at a point $\xi_\mathrm{start} = \xi_0$ close to the origin. Since $\xi_0$ is small, the initial conditions~(\ref{2.18}a,\,b) are valid at the starting point $\xi_0$ as well. So, the integration interval becomes
$\mathbb{I}_{\xi0} = [\xi_0,\,\xi_\mathrm{end}] \subset \mathbb{R}$.
The Lane--Emden function $\theta_{00}$ becomes zero at its first root $\Xi_1$, $\theta_{00}(\Xi_1) = 0$. Beyond the first root $\Xi_1$, $\xi > \Xi_1$, $\theta_{00}$ changes sign, $\theta_{00}(\xi) < 0$. Accordingly, $\theta_{00}^n$ is undefined beyond $\Xi_1$, since raising a negative real number to a real power is not defined in $\mathbb{R}$. To remove this syndrome, we can define $\theta_{00}$ as a complex-valued function of one real variable $\xi$ with $\xi \in \mathbb{I}_{\xi0}$,
$\theta_{00}[\mathbb{I}_{\xi0} \subset \mathbb{R}] \subset \mathbb{C}.$
Since $n \in \mathbb{R}$, the term $\theta_{00}^n$ suffers from a ``non-monodromy syndrome'' due to the fact that multiple-valued logarithmic functions are involved in the representation of $\theta_{00}^n$ (see e.g. \citep{Chu60}, Secs.~26--28). To remove this syndrome, we proceed by defining an ``auxiliary Lane--Emden function'' $\chi$ such that $\theta_{00} = \chi^N$ (\citep{GK14}, Eq.~(35)),
where the involved integer $N$ is chosen so that the term $\theta_{00}^n = \chi^{Nn}$ be transformed into a ``raised-to-integer-power'' term.
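The non-monodromy syndrome can be seen in a toy Python example (ours, not from the paper): along a path that crosses the negative real axis, the principal-branch power $\theta^{2.9}$ jumps, whereas the integer power $\chi^{29}$ of the continuously tracked tenth root varies smoothly.

```python
import cmath

# Toy illustration of the "non-monodromy syndrome": theta(t) traverses the unit
# circle; the principal branch of theta**2.9 jumps where the path crosses the
# negative real axis, while chi = theta**(1/10), tracked continuously, yields a
# smooth chi**29.
ts = [k / 1000.0 for k in range(2001)]
theta = [cmath.exp(1j * cmath.pi * t) for t in ts]
chi = [cmath.exp(1j * cmath.pi * t / 10.0) for t in ts]  # continuous 10th root
direct = [z**2.9 for z in theta]       # principal branch: discontinuous
via_chi = [c**29 for c in chi]         # single-valued along the path

jump_direct = max(abs(a - b) for a, b in zip(direct, direct[1:]))
jump_via_chi = max(abs(a - b) for a, b in zip(via_chi, via_chi[1:]))
```

Here \texttt{jump\_direct} is of order unity (the branch-cut jump), while \texttt{jump\_via\_chi} is of the order of the step size, which is exactly the motivation for working with $\chi$ instead of $\theta_{00}$.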
The ``modified Lane--Emden equation'' for $\chi$ with its initial conditions (\citep{GK14}, Eqs.~(36) and (37), respectively) can be transformed into an equivalent IVP in two first-order ODEs (\citep{GK14}, Eqs.~(38) and (39))
\begin{equation}
\frac{d\chi}{d\xi} = \phi,
\label{MLEEe1}
\end{equation}
\begin{equation}
\frac{d\phi}{d\xi} = - \, \frac{2}{\xi} \, \phi - \frac{N-1}{\chi} \, \phi^2 -
\frac{1}{N} \,\, \chi^{N \left(n-1\right)+1},
\label{MLEEe2}
\end{equation}
where $\chi[\mathbb{I}_{\xi0} \subset \mathbb{R}] \subset \mathbb{C}$ and $\phi[\mathbb{I}_{\xi0} \subset \mathbb{R}] \subset \mathbb{C}$, which are solved with initial conditions
\begin{equation}
\chi(\xi_0) = \theta_{00}(\xi_0)^{1/N}, \qquad \phi(\xi_0) = 0.
\label{mivs2}
\end{equation}
To avoid a further singularity at $\Xi_1$, owing to the term $\phi = \chi'$, we assume that
the independent variable $\xi$ is a ``complex distance'', $\xi \in \mathbb{C}$, and that the integration proceeds along a prescribed complex path parallel to the real axis and at a relatively small imaginary distance from it, playing the role of a complex detour. This alternative consists in performing numerical integration along a contour $\mathfrak{C} \subset \mathbb{C}$ parallel to the real axis $\mathbb{R}$ and at a distance $i\,\breve{\xi}_0$ from it, i.e. along the straight line segment
\begin{equation}
\mathfrak{C} =
\bigl\{
\xi_0 = \bar{\xi}_0 + i \, \breve{\xi}_0 \,\, \longrightarrow \,\,
\xi_\mathrm{end} = \bar{\xi}_\mathrm{end} + i \, \breve{\xi}_0 \bigr\},
\label{Pcontour}
\end{equation}
joining the points $\xi_0$ and $\xi_\mathrm{end}$ in $\mathbb{C}$. The constant imaginary part $\breve{\xi}_0$ of the complex distance $\xi \in \mathfrak{C}$ is usually taken to lie in the interval $\left[10^{-9},\, 10^{-3}\right]$. The real part $\bar{\xi}_\mathrm{end}$ of the complex terminal point $\xi_\mathrm{end}$ is taken here equal to $\bar{\xi}_\mathrm{end} \simeq 2 \, \bar{\Xi}_1$.
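A minimal Python sketch of this complex detour (ours, for illustration only; the production runs use \texttt{DCRKF54}) integrates the system~(\ref{MLEEe1})--(\ref{MLEEe2}) with RK4 along a contour of constant imaginary part $\breve{\xi}_0$. For $n = 1$, $N = 1$, the real part of $\theta_{00} = \chi^N$ crosses zero near $\bar{\xi} = \pi$, and the integration continues smoothly beyond the root.

```python
# Complex-contour RK4 for Eqs. (MLEEe1)-(MLEEe2); names and step sizes are
# illustrative choices of ours.
def mlee_rhs(xi, y, n, N):
    chi, phi = y
    dphi = (-2.0/xi)*phi - ((N - 1)/chi)*phi*phi - (1.0/N)*chi**(N*(n - 1) + 1)
    return (phi, dphi)

def integrate_contour(n, N, breve0=1.0e-6, xi_end=6.0, h=1.0e-3):
    xi = 1.0e-2 + 1j*breve0          # start of the contour (cf. Eq. (Pcontour))
    y = (1.0 + 0j, 0.0 + 0j)         # initial conditions (mivs2)
    path = [(xi, y[0])]
    while xi.real < xi_end:
        k1 = mlee_rhs(xi, y, n, N)
        k2 = mlee_rhs(xi + h/2, tuple(y[m] + h/2*k1[m] for m in range(2)), n, N)
        k3 = mlee_rhs(xi + h/2, tuple(y[m] + h/2*k2[m] for m in range(2)), n, N)
        k4 = mlee_rhs(xi + h, tuple(y[m] + h*k3[m] for m in range(2)), n, N)
        y = tuple(y[m] + h/6*(k1[m] + 2*k2[m] + 2*k3[m] + k4[m]) for m in range(2))
        xi += h
        path.append((xi, y[0]))
    return path

def first_root(path, N):
    """Locate the first sign change of Re(theta_00) = Re(chi**N) on the contour."""
    for (xa, ca), (xb, cb) in zip(path, path[1:]):
        ta, tb = (ca**N).real, (cb**N).real
        if ta > 0.0 >= tb:
            return xa.real + (xb.real - xa.real) * ta / (ta - tb)
    return None
```

Because the contour never touches the real root, the term $\phi/\chi$ stays finite and the integration passes $\bar{\Xi}_1$ without incident.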
Thus the Lane--Emden function $\theta_{00}$ becomes a complex-valued function of one complex variable, $\theta_{00}[\mathbb{I}_{\xi0} \subset \mathbb{C}] \subset \mathbb{C}$. Likewise, for the functions $\Theta_\sigma$ and $\Upsilon_\sigma$ (Eqs.~(\ref{T6501})--(\ref{T6502})) we write $\Theta_\sigma[\mathbb{I}_{\xi0} \subset \mathbb{C}] \subset \mathbb{C}$ and $\Upsilon_\sigma[\mathbb{I}_{\xi0} \subset \mathbb{C}] \subset \mathbb{C}$.
The initial conditions~(\ref{2.18}a,\,b) and (\ref{2.18TY}) become
\begin{equation}
\begin{aligned}
&\bar{\theta}_{00}(\xi_0) &=1, \qquad
&\bar{\theta}_{00}'(\xi_0) =0, \qquad
&\breve{\theta}_{00}(\xi_0) = \breve{\theta}_{00}'(\xi_0) = 0, \\
&\bar{\Theta}_{\sigma}(\xi_0) &=1, \qquad
&\breve{\Theta}_{\sigma}(\xi_0) =0, \qquad
&\bar{\Upsilon}_{\sigma}(\xi_0) = \breve{\Upsilon}_{\sigma}(\xi_0) =0.
\label{2.20e}
\end{aligned}
\end{equation}
Furthermore, the initial conditions~(\ref{2.18}c,\,d) for the functions $\theta_{ij}$ with $i>0$ become
\begin{equation}
\theta_{ij}(\xi_0) =
\left( \bar{\theta}_{ij} \right)_0 + i \,
\left( \breve{\theta}_{ij} \right)_0.
\label{ivscomplex}
\end{equation}
In detail, the real parts (\citep{GK14}, Eq.~(46)) are written as
\begin{equation}
\begin{aligned}
\left( \bar{\theta}_{ij} \right)_0 &=0, \qquad \, \,
\left( \frac{d\bar{\theta}_{ij}}{d\xi} \right)_0 =0, \qquad i = 1, \,2, \, \, \, j = 0, \\
\left( \bar{\theta}_{ij} \right)_0 &=\xi^j, \qquad
\left( \frac{d\bar{\theta}_{ij}}{d\xi} \right)_0 = j\,\xi^{j-1}, \qquad i = 1, \, 2, \, \, \, j \geq 2, \\
\left( \bar{\theta}_{30} \right)_0 &=0, \qquad \, \,
\left( \frac{d\bar{\theta}_{30}}{d\xi} \right)_0 = 0,
\label{2.20g}
\end{aligned}
\end{equation}
and the imaginary parts as (\citep{GK14}, Eq.~(47))
\begin{equation}
\left( \breve{\theta}_{ij} \right)_0 = 0, \qquad
\left( \frac{d\breve{\theta}_{ij}}{d\xi} \right)_0 = 0.
\label{2.20f}
\end{equation}
The raised-to-real-power terms involved in the definitions of the functions $S_{ij}$ (\citep{GK14}, Eq.~(13)) and in Eqs.~(\ref{T6501})--(\ref{T6502}) are written in terms of the auxiliary functions $\chi$ and $X = \Theta_\sigma^{1/N}$ as
\begin{equation}
\begin{aligned}
&\theta_{00}^{n+1} = \chi^{N(n+1)}, \qquad \theta_{00}^n = \chi^{Nn}, \qquad \theta_{00}^{n-1} = \chi^{N(n-1)}, \qquad \theta_{00}^{n-2} = \chi^{N(n-2)}, \\
&\Theta_\sigma^{n+1} = X^{N(n+1)}, \quad \ \,
\Theta_\sigma^n = X^{Nn}, \quad \ \
\Theta_\sigma' = NX^{N-1}X'.
\label{rtrpt}
\end{aligned}
\end{equation}
For $n = 1.0$ and $2.0$, we choose $N = 1$; thus $\theta_{00} = \chi$ and $\Theta_\sigma = X$. For $n = 1.5$ and $2.5$, we take $N = 2$, which yields $\theta_{00}^{1.5} = \chi^3$, $\theta_{00}^{2.5} = \chi^5$, $\Theta_\sigma^{1.5} = X^3$, and $\Theta_\sigma^{2.5} = X^5$. Finally, for $n = 2.9$, we choose $N = 10$, which gives $\theta_{00}^{2.9} = \chi^{29}$ and $\Theta_\sigma^{2.9} = X^{29}$.
\section{Units}
In this study, the abbreviations ``cgs'', ``gu'', ``pu'', and ``cpu'' denote ``cgs units'', ``gravitational units'', ``polytropic units related to the gravitational units'', and ``classical polytropic units'', respectively (for a discussion on the gravitational units and their related polytropic units, see e.g. \citep{GS11}, Sec.~1.2). The units of several physical quantities in the system of gravitational units are given in Table \ref{tab:ugu} and play the role of ``conversion coefficients'', which convert a physical measure in gu to the respective measure in cgs. For instance, if the measure of a density in gu is $\rho_\mathrm{gu}$, then $\rho_\mathrm{cgs} = [D]_\mathrm{gu} \, \rho_\mathrm{gu}$ is its measure in cgs. In gu, any physical quantity has a dimension of the form $\mathrm{cm}^\gamma$ (\citep{GS11}, Eq.~(1); there is only one base unit in gu, the length, measured in cm), that is, explicitly, it has a dimension $[\gamma]$. If $\gamma=0$ for a particular physical quantity, then this quantity is dimensionless in gu.
The units of several physical quantities in the system of the polytropic units related to the gravitational units (see e.g. \citep{CST94}, Eqs.~(4)--(13)) are given in Table \ref{tab:upu} and convert a physical measure in pu to the respective measure in gu. For example, if the measure of a density in pu is $\rho_\mathrm{pu}$, then $\rho_\mathrm{gu} = [D]_\mathrm{pu} \, \rho_\mathrm{pu}$ is its measure in gu; accordingly, its measure in cgs is $\rho_\mathrm{cgs} = [D]_\mathrm{gu} [D]_\mathrm{pu} \, \rho_\mathrm{pu}$. All physical quantities are dimensionless in pu, since their physical dimensions are assigned to their respective units.
The polytropic units related to the gravitational units should not be confused with the classical polytropic units (see e.g. \citep{GTV79}, Sec.~8), defined on the basis of the normalization equations~(\ref{2.5bc}a,\,b). The units of several physical quantities in the system of classical polytropic units are given in Table \ref{tab:cpu} and play the role of conversion coefficients, which convert a physical measure in cpu to the respective measure in cgs. For example, if the measure of a density in cpu is $\rho_\mathrm{cpu}$, then $\rho_\mathrm{cgs} = [D]_\mathrm{cpu} \, \rho_\mathrm{cpu}$ is its measure in cgs; accordingly, its measure in gu is $\rho_\mathrm{gu} = (1/[D]_\mathrm{gu}) \, [D]_\mathrm{cpu} \, \rho_\mathrm{cpu}$ and its measure in pu is $\rho_\mathrm{pu} = (1/[D]_\mathrm{pu}) \, (1/[D]_\mathrm{gu}) \, [D]_\mathrm{cpu} \, \rho_\mathrm{cpu}$. All physical quantities are dimensionless in cpu, since their physical dimensions are assigned to their respective units.
In almost all the computations of this study, we use cpu measures of physical quantities and characteristics, since PNA is inherently oriented to cpu. However, since pu is the system mostly used in the bibliography, all results and comparisons are quoted in pu.
\begin{table}
\begin{center}
\caption{Units of several physical quantities in the system of ``gravitational units'' (gu), converting physical measures in gu to respective measures in cgs.\label{tab:ugu}}
\begin{tabular}{rllr}
\hline\hline
physical quantity & dimension of the & value of & numeric value \\
and its unit in gu & quantity in gu & the unit & of the unit \\
\hline
Length, $[L]_\mathrm{gu}$ & $\mathrm{cm}^1$ & $1$ & $1.000(+00)$ \\
Mass, $[M]_\mathrm{gu}$ & $\mathrm{cm}^1$ & $c^2/G$ & $1.347(+28)$ \\
Density, $[D]_\mathrm{gu}$ & $\mathrm{cm}^{-2}$ & $c^2/G$ & $1.347(+28)$ \\
Pressure, $[P]_\mathrm{gu}$ & $\mathrm{cm}^{-2}$ & $c^4/G$ & $1.210(+49)$ \\
Energy, $[T]_\mathrm{gu}=[W]_\mathrm{gu}$ & $\mathrm{cm}^1$ & $c^4/G$ & $1.210(+49)$ \\
Angular velocity, $[\Omega]_\mathrm{gu}$ & $\mathrm{cm}^{-1}$ & $c$ & $2.998(+10)$ \\
Angular momentum, $[J]_\mathrm{gu}$ & $\mathrm{cm}^{2}$ & $c^3/G$ & $4.038(+38)$ \\
Moment of inertia, $[I]_\mathrm{gu}$ & $\mathrm{cm}^{3}$ & $c^2/G$ & $1.347(+28)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Units of several physical quantities in the system of ``polytropic units related to the gravitational units'' (pu), used to convert physical measures in pu to respective measures in gu. The symbol $K$ denotes here the measure of the polytropic constant in gu, $K = K_\mathrm{gu}$.\label{tab:upu}}
\begin{tabular}{rl}
\hline\hline
physical quantity & value of \\
and its unit in pu & the unit \\
\hline
Length, $[L]_\mathrm{pu}$ & $K^{n/2}$ \\
Mass, $[M]_\mathrm{pu}$ & $K^{n/2}$ \\
Density, $[D]_\mathrm{pu}$ & $K^{-n}$ \\
Pressure, $[P]_\mathrm{pu}$ & $K^{-n}$ \\
Energy, $[T]_\mathrm{pu}=[W]_\mathrm{pu}$ & $K^{-n}$ \\
Angular velocity, $[\Omega]_\mathrm{pu}$ & $K^{-n/2}$ \\
Angular momentum, $[J]_\mathrm{pu}$ & $K^n$ \\
Moment of inertia, $[I]_\mathrm{pu}$ & $K^{3n/2}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Units of several physical quantities in the system of ``classical polytropic units'' (cpu), used to convert physical measures in cpu to respective measures in cgs.
\label{tab:cpu}}
\begin{tabular}{rl}
\hline\hline
physical quantity & value of \\
and its unit in cpu & the unit \\
\hline
Length, $[L]_\mathrm{cpu}$ & $\alpha$~~(see Eq.~(\ref{2.5bc}b)) \\
Density, $[D]_\mathrm{cpu}$ & $\rho_\mathrm{c}$~(see Eq.~(\ref{2.5bc}a)) \\
Pressure, $[P]_\mathrm{cpu}$ & $K_\mathrm{cgs} \, \rho_\mathrm{c}^\Gamma$~(see Eq.~(\ref{2.5})) \\
Mass, $[M]_\mathrm{cpu}$ & $4 \, \pi \, \alpha^3 \, \rho_\mathrm{c}$ \\
Energy, $[T]_\mathrm{cpu}=[W]_\mathrm{cpu}$ & $16 \, \pi^2 \, G \, \alpha^5 \, \rho_\mathrm{c}^2$ \\
Angular velocity, $[\Omega]_\mathrm{cpu}$ & $\left( 4 \, \pi \, G \, \rho_\mathrm{c} \right)^{1/2}$ \\
Angular momentum, $[J]_\mathrm{cpu}$ & $8 \, \pi^{1.5} \, G^{0.5} \, \alpha^5 \, \rho_\mathrm{c}^{1.5}$ \\
Moment of inertia, $[I]_\mathrm{cpu}$ & $4 \, \pi \, \alpha^5 \, \rho_\mathrm{c}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{The Computations, I}
\label{comp}
Preliminaries regarding the computational environment used in this work and the Fortran code \texttt{DCRKF54} \citep{GV12} for solving complex IVPs can be found in \citep{GK14}.
Our code runs in four steps. Step 1 (S1) solves Eqs.~(\ref{2.10}) for the functions $\theta_{00}, \, \theta_{10}, \, \theta_{12}, \, \theta_{14}$ and Eqs.~(\ref{T6501})--(\ref{T6502}) for $\Theta_\sigma, \, \Upsilon_\sigma$, and stores the solutions into proper arrays. All these arrays are interpolated by cubic splines in both their real and imaginary parts. All interpolations have as independent variable the real part $\bar{\xi}$ of the complex distance $\xi$. The radius $\bar{\Xi}_1$ of the undistorted configuration is computed as the first root of the algebraic equation
\begin{equation}
F_\mathrm{I} \left[ \bar{\theta}_{00} \right] \left( \bar{\xi} \, \right) = 0,
\end{equation}
where $F_\mathrm{I} \left[ \bar{\theta}_{00} \right]$ is the interpolating function for the real part $\bar{\theta}_{00}$ of the function $\theta_{00}$.
Then S1 calculates the surface values of the real and imaginary parts of all functions computed and the parameters $k_{00}$, $k_{10}$, $c_{00}$, $c_{10}$, $A_{12}$.
Next, Step 2 (S2) solves Eqs.~(\ref{2.10}) for the functions $\theta_{20}$, $\theta_{22}$, $\theta_{24}$, storing their values into proper arrays. All arrays are interpolated by cubic splines in both their real and imaginary parts.
Step 3 (S3) proceeds with a scheme able to compute the function $\Theta(\bar{\xi},\,\mu)$
at any point $(\bar{\xi},\,\mu)$, with $\bar{\xi} \leq 2 \bar{\Xi}_1$, lying either inside or outside the nonrotating Newtonian configuration of radius $\bar{\Xi}_1$. On the basis of this scheme, S3 can compute the surface of the configuration, that is, the root $\bar{\Xi}_\mu$ of the equation
\begin{equation}
F_\mathrm{I}\left[\bar{\Theta}\right](\bar{\xi},\,\mu) =
\sum_{i=0,\,2}^4 F_\mathrm{I}\left[\bar{\Theta}_i\right](\bar{\xi}\,) \, P_i(\mu) = 0,
\label{boundR}
\end{equation}
where $F_\mathrm{I} \left[ \bar{\Theta}_i \right]$ are the interpolating functions for the real parts $\bar{\Theta}_i$ of the functions $\Theta_i$ defined by Eq.~(\ref{2.9a}), at any $\mu$ with a given accuracy $\tau$.
In the framework of HAS, the boundary of the configuration is assumed to coincide with the equidensity surface
\begin{equation}
\left| F_\mathrm{I}[\bar{\Theta}](\bar{\Xi}_\mu,\,\mu) \right| =
\tau_\mathrm{s} > 0,
\label{equidensity}
\end{equation}
where $\tau_\mathrm{s}$ is a given ``surface parameter'' (for a similar issue arising in the framework of the well-known Hartle's perturbation method, see \citep{GK08}, Sec.~5.1). In addition, it is assumed that the function $F_\mathrm{I}[\bar{\Theta}]$ approaches the boundary condition~(\ref{equidensity}) from positive values, $F_\mathrm{I}[\bar{\Theta}] > 0$, in the case of the highly stiff EOS $n=1.0$,
\begin{equation}
0 \leq F_\mathrm{I}[\bar{\Theta}](\bar{\Xi}_\mu,\,\mu) - \tau_\mathrm{s} < \tau,
\label{Fgtt}
\end{equation}
and from negative values, $F_\mathrm{I}[\bar{\Theta}] < 0$, in the case of the moderately stiff and soft EOSs with $n \geq 1.5$,
\begin{equation}
0 \leq -F_\mathrm{I}[\bar{\Theta}](\bar{\Xi}_\mu,\,\mu) - \tau_\mathrm{s} < \tau.
\label{Fltt}
\end{equation}
It is worth mentioning here that, among the members of a collection of EOSs, the EOS yielding the largest $p$ for a given $\rho$ is the stiffest EOS in the collection, while the EOS yielding the smallest $p$ for the same $\rho$ is the softest one. For increasing $n$, the polytropic EOSs become softer; thus, in the collection $n = 1.0, \, 1.5, \, 2.0, \, 2.5, \, 2.9$, the stiffest EOS is that with $n = 1.0$, while the softest is that with $n = 2.9$.
To compute the critical rotation parameter $\upsilon_\mathrm{c}$, S3 treats Eq.~(\ref{boundR}) in the full form of its dependencies,
\begin{equation}
\left|
\sum_{i=0,\,2}^4 F_\mathrm{I}\left[\bar{\Theta}_i\right](\sigma, \, \upsilon, \,
\bar{\Xi}_\mathrm{e}\,) \, P_i(\mu = 0)
\right| = \tau_\mathrm{s},
\label{boundRSU}
\end{equation}
and solves this equation for the ``root'' $\upsilon$ when the ``variables'' $\sigma$ and $\bar{\Xi}_\mathrm{e}$ are given. Accordingly, the root $\upsilon(\sigma, \, \bar{\Xi}_\mathrm{e})$ is the rotation parameter for which the distorted configuration attains equatorial radius $\bar{\Xi}_\mathrm{e}$ under gravitation parameter $\sigma$. Solving Eq.~(\ref{boundRSU}) with a given accuracy $\tau$ for a mesh of values $\left\{ \left( \bar{\Xi}_\mathrm{e} \right)_m \right\}$ lying in an appropriate interval $\mathbb{I}(\bar{\Xi}_\mathrm{e})$ --- say $\mathbb{I}\left(\bar{\Xi}_\mathrm{e}\right) = [1.2 \, \bar{\Xi}_1, 1.8 \, \bar{\Xi}_1]$ --- and constructing the interpolating function $F_\mathrm{I}\left[\upsilon\right]\left(\bar{\Xi}_\mathrm{e}\right)$, S3 localizes the maximum value $\upsilon_\mathrm{c}$ of this function. This maximum represents the respective $\upsilon_\mathrm{c}$ for the particular $\sigma$; and the value of $\bar{\Xi}_\mathrm{e}$ yielding $\upsilon_\mathrm{c}$ is the equatorial radius under gravitation parameter $\sigma$ and rotation parameter $\upsilon_\mathrm{c}$.
By studying the variation of $\upsilon_\mathrm{c}$ with the surface parameter $\tau_\mathrm{s}$, the latter written as $\tau_\mathrm{s}=\sigma/\nu$ with $\nu=1,\,2,\,\dots$, we can determine an optimum value for $\tau_\mathrm{s}$. In particular, our numerical experiments show that there is a value of $\nu$ about which this variation changes from nearly quadratic to nearly linear with small slope. Such a change occurs when $\nu \sim 10$; hence, the value $\tau_\mathrm{s} \sim \sigma/10$ is adopted in the present study as an optimum $\tau_\mathrm{s}$. Note that in \citep{GK14} all models resolved have gravitation parameters $\sigma \leq 0.008$; owing to such small values of $\sigma$, the surface parameter $\tau_\mathrm{s}$ is taken to be zero in \citep{GK14}.
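The localization of the maximum of $F_\mathrm{I}\left[\upsilon\right]\left(\bar{\Xi}_\mathrm{e}\right)$ over the mesh can be sketched as follows. The Python function below is our simplification: instead of the cubic-spline interpolation used in the actual code, it fits a quadratic through the three mesh points around the largest sample and returns the abscissa and value of its vertex.

```python
# Illustrative localization of the maximum of a sampled function, in the spirit
# of maximizing F_I[upsilon](Xi_e) over the mesh { (Xi_e)_m }; a quadratic
# through the three points around the largest sample replaces the spline.
def parabola_peak(xs, ys):
    k = max(range(1, len(ys) - 1), key=lambda i: ys[i])  # largest interior sample
    x0, x1, x2 = xs[k-1], xs[k], xs[k+1]
    y0, y1, y2 = ys[k-1], ys[k], ys[k+1]
    d = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2*(y1 - y0) + x1*(y0 - y2) + x0*(y2 - y1)) / d
    b = (x2*x2*(y0 - y1) + x1*x1*(y2 - y0) + x0*x0*(y1 - y2)) / d
    c = y1 - a*x1*x1 - b*x1
    x_peak = -b / (2.0 * a)
    return x_peak, a*x_peak*x_peak + b*x_peak + c
```

For exactly quadratic data the vertex is recovered to machine precision, which is sufficient to illustrate the logic of locating $\upsilon_\mathrm{c}$ and the corresponding $\bar{\Xi}_\mathrm{e}$.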
\section{Physical Characteristics}
\label{physchar}
First, since physical interest focuses on the real parts of functions and parameters, we will hereafter quote only such values and, for simplicity, drop the overbars denoting real parts of complex quantities. Second, for brevity, we will denote the interpolating functions by the symbols used so far for the respective mathematical functions; for example, we will write $\Theta$ in the place of the interpolating function $F_\mathrm{I}\left[\Theta\right]$ defined by Eq.~(\ref{boundR}). Third, any symbol not explicitly connected to a system of units will denote the cgs measure of the respective physical characteristic; for example, the symbol $M$ will denote the cgs measure of the gravitational mass.
In the framework of PNA, the Newtonian relations for the physical characteristics of interest are modified as follows. First, the gravitational mass $M$ is given by (cf. \citep{GTV79}, Eq.~(8.2))
\begin{equation}
M = \int_V E \, dV =
[M]_\mathrm{cpu}
\int^{1}_{0} \int^{\xi_\mathrm{t}}_{0} \Psi^n \, \xi^2 \, d\xi \, d\mu,
\label{eqmass}
\end{equation}
where $dV$ is the coordinate volume element, $\xi_\mathrm{t}$ the upper limit of the integration in $\xi$ chosen so that $\Xi_\mathrm{e} < \xi_\mathrm{t} \leq \xi_\mathrm{end}$, $E$ the mass-energy density, and $\Psi^n$ the cpu measure of $E$, i.e. $E_\mathrm{cpu} = \Psi^n$. The functions $\Theta$ and $\Psi$ are connected via a sequence of equations, which are based on the relation (cf.~\citep{GS11}, Eq.~(6))
\begin{equation}
E_\mathrm{gu} = \rho_\mathrm{gu} + n \, p_\mathrm{gu}
\label{Erho}
\end{equation}
holding in gu. To find $\Psi(\Theta)$, we first calculate $E_\mathrm{gu}$,
\begin{equation}
\rho = \rho_\mathrm{c} \, \Theta^n, \ \
\rho_\mathrm{gu} = \rho / [D]_\mathrm{gu}, \ \
p_\mathrm{gu} = K_\mathrm{gu} \, \rho_\mathrm{gu}^\Gamma, \ \
E_\mathrm{gu} = \rho_\mathrm{gu} + n \, p_\mathrm{gu},
\label{firstseq}
\end{equation}
where $K_\mathrm{gu}$ is the measure in gu of the polytropic constant $K$ (\citep{GS11}, Sec.~1.2),
\begin{equation}
K_\mathrm{gu} = \left([M]_\mathrm{gu}^\Gamma / [P]_\mathrm{gu}\right) K.
\label{Kappa}
\end{equation}
Next, we convert measures back to cpu,
\begin{equation}
E = [D]_\mathrm{gu} \, E_\mathrm{gu}, \ \
E_\mathrm{cpu} = E / \rho_\mathrm{c}, \ \
\Psi^n = E_\mathrm{cpu}, \ \
\Psi = \left(E_\mathrm{cpu}\right)^{1/n}.
\label{secondseq}
\end{equation}
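The sequence~(\ref{firstseq})--(\ref{secondseq}) is easily checked in a short Python sketch (ours; the numerical values of $\rho_\mathrm{c}$ and $K$ used in the test are placeholders, not taken from the paper): for $K \to 0$ the pressure correction vanishes and $\Psi \to \Theta$, while for $K > 0$ we have $\Psi > \Theta$.

```python
# Conversion chain Theta -> Psi of Eqs. (firstseq)-(secondseq); rho_c and K_cgs
# are placeholder inputs for illustration.
C, G = 2.998e10, 6.674e-8            # cgs values of c and G
D_GU = C**2 / G                      # [D]_gu (Table of gu units)
M_GU = D_GU                          # [M]_gu has the same numeric value
P_GU = C**4 / G                      # [P]_gu

def psi_from_theta(theta, n, rho_c, K_cgs):
    Gamma = 1.0 + 1.0 / n
    K_gu = (M_GU**Gamma / P_GU) * K_cgs          # Eq. (Kappa)
    rho_gu = rho_c * theta**n / D_GU             # Eq. (firstseq)
    p_gu = K_gu * rho_gu**Gamma
    E_gu = rho_gu + n * p_gu                     # Eq. (Erho)
    E_cpu = D_GU * E_gu / rho_c                  # Eq. (secondseq)
    return E_cpu**(1.0 / n)
```

The sketch makes explicit that the relativistic correction to $\Psi$ enters solely through the pressure term $n \, p_\mathrm{gu}$ of Eq.~(\ref{Erho}).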
The baryonic mass $M_0$, also called rest mass, is given by (cf.~\citep{GS11}, Eq.~(108))
\begin{equation}
M_0 = \int_\mathcal{V} \rho \, d\mathcal{V} =
[M]_\mathrm{cpu} \,
\int^{1}_{0} \int^{\xi_\mathrm{t}}_{0} \Theta^n \, \xi^2 \, \Lambda \, d\xi \, d\mu.
\label{eqMass0}
\end{equation}
In this relation, we use the ansatz
\begin{equation}
dV \rightarrow d\mathcal{V},
\label{dVansatz}
\end{equation}
with the meaning that the coordinate volume element $dV$ is substituted by the proper volume element $d\mathcal{V}$ (a discussion on this matter for nonrotating relativistic objects can be found in \citep{B11}, Sec.~2). The ansatz~(\ref{dVansatz}) is equivalent to the substitution
\begin{equation}
d\xi \rightarrow \Lambda \, d\xi
\label{dxiansatz}
\end{equation}
of the coordinate differential $d\xi$, where the function $\Lambda$ plays the role of the metric function $e^{\lambda/2}$ (see e.g. \citep{GK08}, Eqs.~(1) and (5)) in the case that a configuration suffers both relativistic and rotational distortions,
\begin{equation}
\Lambda(\sigma, \, \upsilon, \, \xi) = \left[ 1 - \frac{2 \, G \,
m(\sigma, \, \upsilon, \, \xi)}
{c^2 \, \alpha \, \xi_\mathrm{avr}(\xi)} \right]^{-1/2};
\label{Lambda}
\end{equation}
the term $4 \, \pi \, \alpha^2$, which is also involved in the expression for $\Lambda$ (see e.g. \citep{B11}, Sec.~2), is incorporated into the respective cpu units. The meaning of $\Lambda(\sigma, \, \upsilon, \, \xi)$ is that, for a HAS solution of given $\sigma$ and $\upsilon$, the metric function $\Lambda$ can be considered as a function of the coordinate $\xi$, playing here the role of the semimajor axis $\xi_\mathrm{e}$ of a spheroidal equidensity surface of density $\Psi^n(\xi_\mathrm{e} = \xi,\,\mu=0)$. Accordingly, the function $m(\sigma, \, \upsilon, \, \xi)$ is the gravitational mass inside this spheroid, given by
\begin{equation}
m(\sigma, \, \upsilon, \, \xi) =
[M]_\mathrm{cpu}
\int^{1}_{0} \int^{\xi_\mathrm{t}, \, \Psi^n \geq \Psi^n(\xi_\mathrm{e}=\xi, \, \mu=0)}_{0}
\Psi^n \, \xi^2 \, d\xi \, d\mu,
\label{eqpartmass}
\end{equation}
where only the mass elements with densities $\Psi^n \geq \Psi^n(\xi_\mathrm{e}=\xi, \, \mu=0)$ participate in the integration. The function $\xi_\mathrm{avr}(\xi)$ in Eq.~(\ref{Lambda}) denotes the average radius of the particular spheroid; note that, if $\xi_\mathrm{p}$ is its semiminor axis,
\begin{equation}
\Psi^n(\xi_\mathrm{p}, \, \mu=1) = \Psi^n(\xi_\mathrm{e}=\xi, \, \mu=0)
\end{equation}
then a rough approximation of $\xi_\mathrm{avr}(\xi)$ is
\begin{equation}
\xi_\mathrm{avr}(\xi) \, \approx \, (\xi_\mathrm{e} + \xi_\mathrm{p})/2.
\end{equation}
Next, the proper mass $M_\mathrm{P}$ (cf.~\citep{GS11}, Eq.~(109)) is written as
\begin{equation}
M_\mathrm{P} = \int_\mathcal{V} E \, d\mathcal{V} =
[M]_\mathrm{cpu}
\int^{1}_{0} \int^{\xi_\mathrm{t}}_{0} \Psi^n \, \xi^2 \, \Lambda \, d\xi \, d\mu.
\label{eqMassP}
\end{equation}
It is worth remarking here that the only difference between Eqs.~(\ref{eqMass0}) and (\ref{eqMassP}) is the appearance of the mass-energy density $\Psi^n$ in the place of the rest-mass density $\Theta^n$.
The rotational kinetic energy $T$ is given by (cf.~\citep{GTV79}, Eq.~(8.5))
\begin{equation}
T = \frac{1}{2}
\int_\mathcal{V} E \, \mathbf{v} \cdot \mathbf{v} \, d\mathcal{V} =
[T]_\mathrm{cpu} \, \left[ \, \frac{1}{2} \, \, \Omega_{*}^2 \, \,
\int^{1}_{0} \int^{\xi_\mathrm{t}}_{0} (1 - \mu^2) \, \Psi^n \,
\xi^4 \, \Lambda \, d\xi \, d\mu \right],
\label{eqkin}
\end{equation}
where $\Omega_*$ is the measure in cpu of the angular velocity $\Omega$;
thus, in cgs, $\Omega = [\Omega]_\mathrm{cpu} \, \Omega_*$. Combining Eq.~(\ref{2.7}a) with the definition of $[\Omega]_\mathrm{cpu}$ (Table \ref{tab:cpu}, sixth entry), we can verify that (\citep{GTV79}, Eq.~(2.20))
\begin{equation}
\Omega_* = \sqrt{\frac{\upsilon}{2}}.
\label{Omega*}
\end{equation}
The gravitational potential energy $W$ is written as (cf.~\citep{GTV79}, Eq.~(8.6))
\begin{equation}
W =
\int_\mathcal{V} E \, \Phi \, d\mathcal{V} =
[W]_\mathrm{cpu} \, \int^{1}_{0} \int^{\xi_\mathrm{t}}_{0}
\Psi^n \, \Phi \, \xi^2 \, \Lambda \, d\xi \, d\mu,
\label{eqpot}
\end{equation}
where $[W]_\mathrm{cpu} = [T]_\mathrm{cpu}$ (Table \ref{tab:cpu}, fifth entry), and the involved gravitational potential $\Phi$ is defined by (cf.~\citep{H86}, Eq.~(2))
\begin{equation}
\Phi(\mathbf{r}) = \, - \, G \,
\int_{\mathcal{V}'} \frac{E(\mathbf{r'})}
{\left| \mathbf{r} - \mathbf{r}' \right|} \, \, d\mathcal{V}'.
\label{eqgp}
\end{equation}
The angular momentum $J$ is given by (cf.~\citep{GTV79}, Eq.~(8.7))
\begin{equation}
J =
\int_\mathcal{V} E \, \mathbf{r} \times \mathbf{v} \, d\mathcal{V} =
[J]_\mathrm{cpu} \, \left[ \Omega_{*} \, \int^{1}_{0} \int^{\xi_\mathrm{t}}_{0}
(1 - \mu^2) \, \Psi^n \, \xi^4 \, \Lambda \, d\xi \, d\mu \right].
\label{eqagmom}
\end{equation}
Finally, it is worth mentioning that the moment of inertia $I$ is given by (see e.g. \citep{GK08}, Eq.~(22))
\begin{equation}
I = \frac{J}{\Omega} = \, [I]_\mathrm{cpu} \, \left( \frac{J_\mathrm{cpu}}
{\Omega_*}
\right).
\label{eqmoi}
\end{equation}
\section{The Computations, II}
\label{comp2}
The physical characteristics discussed in Sec.~\ref{physchar} are computed by passing certain quantities found by S3 to the next Step 4 (S4). This step evaluates all double integrals involved in the definitions of the physical characteristics by using Simpson's formula, as proposed and described by Hachisu (\citep{H86}, Sec.~IV).
In detail, we first define two coordinate arrays; namely, the array $\{\mu_i\}$ in the $\mu$-direction (cf.~\citep{H86}, Eq.~(51)),
\begin{equation}
\mu_i = (i-1)/(\mathtt{KAP} -1 ), \qquad i = 1, \, 2, \, \dots, \, \mathtt{KAP};
\label{KAP}
\end{equation}
and the array $\{\xi_j\}$ in the $\xi$-direction (cf.~\citep{H86}, Eq.~(50)),
\begin{equation}
\xi_j = \left[ (j-1)/(\mathtt{KRP} -1) \right]\,\xi_\mathrm{t}, \qquad
j = 1, \, 2, \, \dots, \, \mathtt{KRP}.
\label{KRP}
\end{equation}
As explained in Sec.~\ref{physchar}, $\xi_\mathrm{t}$ is the upper limit of the integrations with respect to the coordinate $\xi$, lying in the interval $\mathbb{I}(\xi_\mathrm{t}) = (\Xi_\mathrm{e}, \, \xi_\mathrm{end}]$. In the present study, the ``number of the elements $\mu_i$'' \texttt{KAP} and the ``number of the elements $\xi_j$'' \texttt{KRP} are taken equal to $\mathtt{KAP} = \mathtt{KRP} = 201$; and the upper limit of the integrations in the coordinate $\xi$ is taken equal to $\xi_\mathrm{t} = 1.125 \, \, \Xi_\mathrm{e}$.
Having defined the coordinate arrays, we proceed with the computation of the array $\{\Theta^n_{i,j}\}$, which has as elements the rest-mass densities $\Theta^n_{i,j}=\Theta(\xi_j,\,\mu_i)^n$,
\begin{equation}
\Theta^n_{i,j} = \Theta(\xi_j,\,\mu_i)^n \ \ \ \mathrm{if} \ \ \Theta(\xi_j,\,\mu_i) > 0;
\ \mathrm{else} \ \ \Theta^n_{i,j} = 0.
\label{arrayTheta}
\end{equation}
Likewise, the array $\{\Psi^n_{i,j}\}$ with elements the mass-energy densities $\Psi^n_{i,j}=\Psi(\xi_j,\,\mu_i)^n$ is given by
\begin{equation}
\Psi^n_{i,j} = \Psi(\xi_j,\,\mu_i)^n \ \ \ \mathrm{if} \ \ \Psi(\xi_j,\,\mu_i) > 0;
\ \mathrm{else} \ \ \Psi^n_{i,j} = 0.
\label{arrayPsi}
\end{equation}
To calculate the values $\Psi(\xi_j,\,\mu_i)$ from the values $\Theta(\xi_j,\,\mu_i)$, we use the relations~(\ref{firstseq})--(\ref{secondseq}).
Now, to compute the gravitational mass $M$, we first construct an auxiliary array $\{Q_j\}$ with elements
\begin{equation}
Q_j = \, \sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \,
\left( \mu_{i + 2} - \mu_i \right) \,
\left[ \Psi^n_{i,j} + 4 \, \Psi^n_{i + 1,j} + \Psi^n_{i + 2, j}
\right].
\end{equation}
Then $M$ results from the relation
\begin{equation}
M = [M]_\mathrm{cpu} \,
\sum_{j=1(2)}^\mathtt{KRP-2} \, \frac{1}{6} \,
(\xi_{j + 2} - \xi_{j}) \, \left[
\xi_{j}^2 \, Q_{j} + 4 \, \xi_{j + 1}^2 \, Q_{j + 1} + \xi_{j+2}^2 \, Q_{j+2} \right].
\label{gravmass}
\end{equation}
Next, to compute the array $\{m_k\}$ with elements $m_k = m(\sigma,\,\upsilon,\,\xi_k)$ (Eq.~(\ref{eqpartmass})), we first construct an auxiliary array $\{q_{j,k}\}$ with elements
\begin{equation}
q_{j,k} = \, \sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \,
\Delta_{i+2,j,k} \left( \mu_{i + 2} - \mu_i \right) \,
\left[ \Psi^n_{i,j} + 4 \, \Psi^n_{i + 1,j} + \Psi^n_{i + 2, j}
\right],
\end{equation}
where
\begin{equation}
\Delta_{i,j,k} = 1 \ \ \ \mathrm{if} \ \ \Psi^n_{i,j} \geq \Psi^n_{1,k};
\ \mathrm{else} \ \ \Delta_{i,j,k} = 0.
\end{equation}
Then the elements $m_k$ are computed by
\begin{equation}
m_k = [M]_\mathrm{cpu} \,
\sum_{j=1(2)}^\mathtt{KRP-2} \, \frac{1}{6} \,
(\xi_{j + 2} - \xi_{j}) \, \left[
\xi_{j}^2 \, q_{j,k} + 4 \, \xi_{j + 1}^2 \, q_{j + 1,k} + \xi_{j+2}^2 \, q_{j+2,k}
\right].
\label{partialmasses}
\end{equation}
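The partial-mass quadrature differs from the one for $M$ only through the indicator $\Delta$, which gates each Simpson triple by comparing the local density with the threshold $\Psi^n_{1,k}$. A hedged Python sketch (hypothetical names, not the authors' code):

```python
# Gated composite Simpson sum: a triple contributes only where the density
# at its last node reaches the threshold, as the Delta_{i+2,j,k} factor does.
# Sketch with assumed names, not the authors' code.
def gated_simpson_triples(x, f, threshold):
    total = 0.0
    for i in range(0, len(x) - 2, 2):
        if f[i + 2] >= threshold:  # Delta = 1; otherwise the triple is dropped
            total += (x[i + 2] - x[i]) / 6.0 * (f[i] + 4.0 * f[i + 1] + f[i + 2])
    return total
```

A threshold below the minimum of $f$ reproduces the ungated sum, while a threshold above its maximum gives zero, as expected of the indicator.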
To proceed with the computation of the array $\{\Lambda_k\}$ with elements $\Lambda_k = \Lambda(\sigma,\,\upsilon,\, \xi_k)$ (Eq.~(\ref{Lambda})), we first construct the array $\{\xi_{\mathrm{(avr)}k}\}$ with elements $\xi_{\mathrm{(avr)}k} = \xi_\mathrm{avr}(\xi_k)$. Using the names \texttt{XI(K)} for $\xi_k$, \texttt{XI\_AVR(K)} for $\xi_{\mathrm{(avr)}k}$, and \texttt{PSI\_N(I,J)} for $\Psi^n_{i,j}$, we compute the elements $\xi_{\mathrm{(avr)}k}$ by the code
\begin{footnotesize}
\begin{verbatim}
XI_AVR(K)=XI(K)
LOOP_I: DO I=2,KAP
! default surface radius, used if the threshold is never crossed
XI_SURFACE_I=XI(KRP)
LOOP_J: DO J=1,KRP
IF (PSI_N(I,J) < PSI_N(1,K)) THEN
XI_SURFACE_I=XI(J-1)
EXIT LOOP_J
END IF
END DO LOOP_J
XI_AVR(K)=XI_AVR(K)+XI_SURFACE_I
END DO LOOP_I
XI_AVR(K)=XI_AVR(K)/KAP
\end{verbatim}
\end{footnotesize}
Then the elements $\Lambda_k$ are computed by (cf.~Eq.~(\ref{Lambda}))
\begin{equation}
\Lambda_k = \left[ 1 - \frac{2 \, m_k / [M]_\mathrm{gu}}{\alpha \, \xi_{\mathrm{(avr)}k}}
\right]^{-1/2}.
\end{equation}
Next, to compute the baryonic mass $M_0$, we reconstruct the auxiliary array $\{Q_j\}$ with new elements
\begin{equation}
Q_j = \, \Lambda_j \, \sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \,
\left( \mu_{i + 2} - \mu_i \right) \,
\left[ \Theta^n_{i,j} + 4 \, \Theta^n_{i + 1,j} + \Theta^n_{i + 2, j}
\right].
\label{QM0}
\end{equation}
Then $M_0$ is computed by the relation~(\ref{gravmass}) with the elements $Q_j$ of Eq.~(\ref{QM0}).
Likewise, to compute the proper mass $M_\mathrm{P}$, we reconstruct the auxiliary array $\{Q_j\}$ with new elements
\begin{equation}
Q_j = \, \Lambda_j \, \sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \,
\left( \mu_{i + 2} - \mu_i \right) \,
\left[ \Psi^n_{i,j} + 4 \, \Psi^n_{i + 1,j} + \Psi^n_{i + 2, j}
\right],
\label{QMP}
\end{equation}
and we compute $M_\mathrm{P}$ by the relation~(\ref{gravmass}) with the elements $Q_j$ of Eq.~(\ref{QMP}).
We proceed now with the computation of the rotational kinetic energy $T$. First, we reconstruct the auxiliary array $\{Q_j\}$ with new elements
\begin{equation}
\begin{aligned}
Q_j = & \, \, \Lambda_j \,
\sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \, (\mu_{i + 2} - \mu_{i}) \\
& \times \, \left[
\Psi^n_{i,j} \, (1 - \mu_{i}^2) + 4 \, \Psi^n_{i + 1, j} \, (1-\mu_{i + 1}^2) +
\Psi^n_{i + 2, j} \, (1 - \mu_{i + 2}^2)
\right].
\end{aligned}
\label{QforTJ}
\end{equation}
Next, we compute $T$ by the relation
\begin{equation}
T= \, [T]_\mathrm{cpu} \, \frac{1}{2} \, \Omega_*^2 \,
\sum_{j=1(2)}^\mathtt{KRP-2} \, \frac{1}{6} \,
(\xi_{j + 2} - \xi_{j}) \left[
\xi_{j}^4 \, Q_j + 4 \, \xi_{j + 1}^4 \, Q_{j + 1} + \xi_{j+2}^4 \, Q_{j+2}
\right],
\label{Trot}
\end{equation}
using the elements $Q_j$ of Eq.~(\ref{QforTJ}).
In order to compute the gravitational potential energy $W$, we first need to construct the auxiliary array $\{Q_{k,\ell}\}$ with elements (cf.~\citep{H86}, Eq.~(54))
\begin{equation}
\begin{aligned}
Q_{k,\ell} = & \, \, \Lambda_k \,
\sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \, (\mu_{i + 2} - \mu_{i}) \\
& \times \, \left[
\Psi^n_{i,k} \, P_{2\ell}(\mu_{i}) + 4 \, \Psi^n_{i + 1, k} \, P_{2\ell}(\mu_{i + 1}) +
\Psi^n_{i + 2, k} \, P_{2\ell}(\mu_{i + 2})
\right],
\end{aligned}
\end{equation}
and $\{R_{\ell,j}\}$ with elements (cf.~\citep{H86}, Eq.~(55))
\begin{equation}
\begin{aligned}
R_{\ell,j} = & \sum_{k=1(2)}^\mathtt{KRP-2} \, \frac{1}{6} \, (\xi_{k + 2} - \xi_{k}) \\
& \times \, \left[
Q_{k,\ell} \, f_{2\ell}(\xi_{k},\xi_j) +
4 \, Q_{k + 1,\ell} \, f_{2\ell}(\xi_{k + 1},\xi_j) +
Q_{k + 2,\ell} \, f_{2\ell}(\xi_{k + 2},\xi_j)
\right],
\end{aligned}
\end{equation}
where the functions $f_{2\ell}(\xi_k,\xi_j)$ are defined by Eq.~(3) of \citep{H86}. Then the elements $\Phi_{i,j}$ of the array $\{\Phi_{i,j}\}$ are given by (cf.~\citep{H86}, Eqs.~(2) and (56); the coefficient $4 \pi G$ has been incorporated into the respective units)
\begin{equation}
\Phi_{i,j} = - \sum_{\ell=0}^{\mathtt{KPL}} R_{\ell,j} \, P_{2\ell}(\mu_{i}).
\end{equation}
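The even-order Legendre polynomials entering this expansion can be generated with the Bonnet recurrence $(n+1)\,P_{n+1}(\mu) = (2n+1)\,\mu\,P_n(\mu) - n\,P_{n-1}(\mu)$. A Python sketch of the truncated sum at a single $\mu$ (assumed names, not the authors' code; with $\mathtt{KPL}=8$ this would use $P_0$ through $P_{16}$):

```python
# Evaluate Phi(mu) = -sum_{l=0}^{KPL} R[l] * P_{2l}(mu) via the Bonnet
# recurrence. Sketch with hypothetical names, not the authors' code.
def legendre_all(mu, nmax):
    """Return [P_0(mu), ..., P_nmax(mu)]."""
    p = [1.0, mu]
    for n in range(1, nmax):
        p.append(((2 * n + 1) * mu * p[n] - n * p[n - 1]) / (n + 1))
    return p[: nmax + 1]

def phi_at(R, mu):
    """Truncated expansion -sum_l R[l] P_{2l}(mu) for coefficients R[0..KPL]."""
    p = legendre_all(mu, 2 * (len(R) - 1))
    return -sum(R[l] * p[2 * l] for l in range(len(R)))
```

For example, $P_2(0.5) = (3 \cdot 0.25 - 1)/2 = -0.125$, so with $R = (0, 1)$ the sketch returns $-P_2(0.5) = 0.125$.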
In the present study, the ``cutoff number of the Legendre polynomials'' \texttt{KPL} is taken equal to $\mathtt{KPL} = 8$; so, we use Legendre polynomials up to $P_{16}(\mu)$. It remains to construct the auxiliary array $\{S_j\}$ with elements (cf.~\citep{H86}, Eq.~(59))
\begin{equation}
S_j = \, \Lambda_j \, \sum_{i=1(2)}^\mathtt{KAP-2} \, \frac{1}{6} \,
\left( \mu_{i + 2} - \mu_i \right) \,
\left[ \Psi^n_{i,j} \, \Phi_{i,j} +
4 \, \Psi^n_{i + 1,j} \, \Phi_{i + 1,j} + \Psi^n_{i + 2, j} \, \Phi_{i + 2, j}
\right].
\label{SMP}
\end{equation}
Then $|W|$ results from the relation (cf.~\citep{H86}, Eq.~(60); the coefficient $2\pi$ has been incorporated into the respective units)
\begin{equation}
|W| = [W]_\mathrm{cpu} \, \left| \,
-\sum_{j=1(2)}^\mathtt{KRP-2} \, \frac{1}{6} \,
(\xi_{j + 2} - \xi_{j}) \, \left[
\xi_{j}^2 \, S_{j} + 4 \, \xi_{j + 1}^2 \, S_{j + 1} + \xi_{j+2}^2 \, S_{j+2} \right]
\right|.
\label{Wgravpoten}
\end{equation}
Finally, the angular momentum $J$ is computed by the relation
\begin{equation}
J= \, [J]_\mathrm{cpu} \, \Omega_* \,
\sum_{j=1(2)}^\mathtt{KRP-2} \, \frac{1}{6} \,
(\xi_{j + 2} - \xi_{j}) \left[
\xi_{j}^4 \, Q_j + 4 \, \xi_{j + 1}^4 \, Q_{j + 1} + \xi_{j+2}^4 \, Q_{j+2}
\right],
\label{Jang}
\end{equation}
where the auxiliary array $\{Q_j\}$ is that computed by Eq.~(\ref{QforTJ}).
\section{Numerical Results and Discussion}
\label{rd}
We first compute general-relativistic polytropic models of maximum mass, $M_\mathrm{max}$, in critical rotation with $n=2.9, \, 2.5, \, 2.0, \, 1.5, \, \mathrm{and} \, 1.0$. The case $n=2.9$ represents the softest EOS among those considered, while the case $n=1.0$ represents the stiffest one.
A discussion on models of maximum mass can be found in \citep{GS11} (Sec.~4 and references therein); in the present study, we apply the procedure described there for computing the central rest-mass density $\rho_\mathrm{c}^\mathrm{max} = \rho_\mathrm{c}(M_\mathrm{max})$ of a model of maximum mass. Next, we find the central pressure $p_\mathrm{c}^\mathrm{max}$ by Eq.~(\ref{2.5}), and the mass-energy density $E_\mathrm{c}^\mathrm{max}$ by using the relations~(\ref{Erho})--(\ref{secondseq}). For the polytropic constant $K$, we choose the same values as those in \citep{GS11} (Tables 2--6). The gravitation parameter $\sigma_\mathrm{max}$ is then calculated by Eq.~(\ref{2.7}b).
As $n$ decreases, the values of $\sigma_\mathrm{max}$ increase; namely, the softest case $n=2.9$ has $\sigma_\mathrm{max} \simeq 0.004$, while the stiffest one $n=1.0$ has $\sigma_\mathrm{max} \simeq 0.3$. Since $\sigma_\mathrm{max}$ is large for $n = 1.0$, we find it interesting to study two further models for this case with $\sigma = \sigma_\mathrm{max}/2$ and $\sigma_\mathrm{max}/3$, respectively. The corresponding values of $\rho_\mathrm{c}$ are found by writing Eq.~(\ref{2.7}b) in the form
\begin{equation}
\sigma = \frac{1}{c^2} \, \, \frac{K \, \rho_\mathrm{c}^\Gamma}{\rho_{\mathrm{c}}} =
\frac{1}{c^2} \, K \, \rho_\mathrm{c}^{1/n},
\label{2.7bb}
\end{equation}
and by solving it for $\rho_\mathrm{c}$,
\begin{equation}
\rho_\mathrm{c} = \left( \frac{c^2}{K} \, \, \sigma \right)^n.
\label{2.7bbb}
\end{equation}
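Since Eqs.~(\ref{2.7bb}) and (\ref{2.7bbb}) are exact inverses of each other, the inversion can be checked by a round trip. A minimal Python sketch (arbitrary placeholder values; units with $c=1$ assumed for simplicity):

```python
# Round trip sigma(rho_c) -> rho_c(sigma) for Eqs. (2.7bb)-(2.7bbb).
# Placeholder values; units with c = 1 assumed, not the paper's units.
def sigma_of_rho(rho_c, K, n, c=1.0):
    return K * rho_c ** (1.0 / n) / c ** 2

def rho_of_sigma(sigma, K, n, c=1.0):
    return (c ** 2 * sigma / K) ** n

rho_c = 0.32                       # arbitrary central rest-mass density
sigma = sigma_of_rho(rho_c, K=1.0, n=1.0)
assert abs(rho_of_sigma(sigma, K=1.0, n=1.0) - rho_c) < 1e-12
```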
Regarding rotation, we study models of maximum mass in critical rotation, i.e., having angular velocities equal to their Keplerian angular velocities $\Omega_\mathrm{K}$. Newtonian configurations are characterized by an angular velocity $\Omega_\mathrm{max}$ given by
\begin{equation}
\Omega_\mathrm{max} = \sqrt{\frac{G\,M}{R^3}},
\label{Wmax}
\end{equation}
which is the maximum angular velocity for which mass shedding does not yet occur at the equator. Evidently, $\Omega_\mathrm{max}$ describes the Newtonian balance of centrifugal and gravitational forces. However, it is an overestimated limit for relativistic objects, for which the upper bound is instead the Keplerian angular velocity $\Omega_\mathrm{K}$. If the angular velocity of the configuration is slightly greater than $\Omega_\mathrm{K}$, then mass shedding occurs at the equator. Thus $\Omega_\mathrm{K}$ is the relativistic analog of $\Omega_\mathrm{max}$. Several methods have been developed for the computation of $\Omega_\mathrm{K}$. A discussion of appropriate methods is given in \citep{PG03} (Sec.~3.7). A detailed description of such a method can be found in \citep{BFGM05} (Sec.~IIA). This method, slightly modified, is used in \citep{SG12} for computing $\Omega_\mathrm{K}$ by applying the ``complex-plane strategy in the framework of Hartle's perturbation method'' (HCPS), keeping terms of up to third order in $\Omega$.
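For orientation, evaluating Eq.~(\ref{Wmax}) with solar values (SI units; the numbers below are standard constants used only for illustration, not part of the models studied here) gives $\Omega_\mathrm{max} \approx 6.3 \times 10^{-4}\,\mathrm{rad\,s^{-1}}$, i.e. a Newtonian break-up period of roughly three hours:

```python
import math

# Newtonian mass-shedding limit Omega_max = sqrt(G M / R^3), Eq. (Wmax).
# Solar values in SI units, purely for illustration.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m

omega_max = math.sqrt(G * M_sun / R_sun ** 3)   # rad/s
period_h = 2.0 * math.pi / omega_max / 3600.0   # break-up period in hours
```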
In the framework of HAS, $\Omega_\mathrm{K}$ is computed by the procedure described in Sec.~\ref{comp}. In particular, after having computed the critical rotation parameter $\upsilon_\mathrm{c}$, we find $\Omega_\mathrm{K}$ by Eq.~(\ref{Omega*}),
\begin{equation}
\left( \Omega_\mathrm{K} \right)_\mathrm{cpu} = \Omega_*(\upsilon_\mathrm{c}) =
\sqrt{\frac{\upsilon_\mathrm{c}}{2}} \, .
\label{OmegaK*}
\end{equation}
An interesting issue related to $\Omega_\mathrm{K}$ has to do with a remark made by Fahlman \& Anand in \citep{FA71} (Sec.~5; that particular PNA scheme is of first order in $\sigma$, and of second order in $\sigma \upsilon$ and $\upsilon$). According to this remark, terms of order $\sigma \upsilon$ are generally opposite in sign to the corresponding terms in $\upsilon^2$ and, hence, these second-order terms tend to cancel each other. In the framework of HAS, however, relativistic and rotational effects are assumed decoupled (Sec.~\ref{intro}); hence, terms in $\upsilon^2$ remain without their counterbalancing terms in $\sigma \upsilon$. Therefore, it is of interest to find which values of $\Omega_\mathrm{K}$ are closer to the respective values computed by an alternative numerical method: the ones derived by keeping only terms in $\upsilon$, or those derived by including terms in $\upsilon^2$.
To compute ``reference values'' for $\Omega_\mathrm{K}$, we use in this study the well-known RNS package \citep{S92} with grid size $\mathrm{MDIV}\times\mathrm{SDIV}=129 \times 257$, accuracy $a=10^{-6}$ and tolerance $b=10^{-5}$. RNS is an accurate, nonperturbative, iterative method; on the other hand, HAS is a perturbative, noniterative method; so, comparing HAS results with respective RNS results seems to be a decisive test for HAS.
In Table~\ref{tab:OMGAK} we quote percent differences
\begin{equation}
\%D(\Omega_\mathrm{K}) = 100 \, [(\Omega_\mathrm{K})_\mathrm{HAS} - (\Omega_\mathrm{K})_\mathrm{RNS}]/(\Omega_\mathrm{K})_\mathrm{HAS}
\end{equation}
of HAS values relative to RNS values. We find that the first-order HAS values are closer to those of RNS, except for the softest case $n=2.9$, for which the second-order HAS value is closer to that of RNS. The small $\sigma_\mathrm{max}$ of this case permits us to work with surface parameter $\tau_\mathrm{s}=0$; all other cases are computed with $\tau_\mathrm{s} = \sigma/10$ (see, however, the remarks regarding the stiffest case $n = 1.0$ in the next paragraph). Accordingly, we quote numerical results of first order in $\upsilon$ for the models with $n \leq 2.5$, and of second order in $\upsilon$ for the model $n=2.9$.
It is worth clarifying here that the boundary condition~(\ref{Fgtt}) holding for $n = 1.0$ induces a shrinking of the configuration, since the derived boundary lies inside the physical boundary, tending to cooperate with the relativistic effects; so, the configuration can sustain a larger $\upsilon_\mathrm{c}$, i.e. a larger $\Omega_\mathrm{K}$, in comparison with that of the case $\tau_\mathrm{s} = 0$. On the other hand, the boundary condition~(\ref{Fltt}) holding for $n = 1.5, \, 2.0, \, \mathrm{and} \, \, 2.5$ induces an expansion of the configuration, since the derived boundary lies outside the physical boundary, tending to cooperate with the rotational effects; so, the configuration can sustain a smaller $\upsilon_\mathrm{c}$, i.e. a smaller $\Omega_\mathrm{K}$, in comparison with that of the case $\tau_\mathrm{s} = 0$. Both boundary conditions lead to values of $\Omega_\mathrm{K}$ closer to those of RNS. Accordingly, physical characteristics related strongly to rotation (i.e. $\Omega_\mathrm{K}$, $R_\mathrm{e}$, $T$, and $J$) obtain values closer to the ones of RNS. However, regarding the case $n = 1.0$, physical characteristics related strongly to gravitation (i.e. all kinds of mass defined in Sec.~\ref{physchar}, and $W$) obtain values appreciably overestimated with respect to those of RNS due to the intensified shrinking effects discussed above. Therefore, particularly for the cases $n = 1.0, \, \sigma = \sigma_\mathrm{max}$ (Table~\ref{tab:295}) and $n = 1.0, \, \sigma = \sigma_\mathrm{max}/2$ (Table~\ref{tab:296}), the values of $M$, $M_0$, $M_\mathrm{P}$, and $W$ quoted in the tables are those computed by counterbalancing the additional shrinking effects owing to $\tau_\mathrm{s} > 0$, that is, by putting $\tau_\mathrm{s} = 0$ in the relevant computations.
For brevity, we will drop hereafter the superscript ``max'' from the maximum-mass central densities $\rho_\mathrm{c}$ and $E_\mathrm{c}$.
Tables~\ref{tab:290}--\ref{tab:295} show numerical results for the physical characteristics discussed in Secs.~\ref{physchar} and \ref{comp2}. As compared to RNS, the HAS values exhibit very satisfactory accuracy for the softest case $n=2.9$ and also for the soft case $n = 2.5$. In particular, in the case $n=2.9$ (Table~\ref{tab:290}) the largest value $|\%D|_\mathrm{max} \simeq 0.9$ occurs for $R_\mathrm{e}$ and the average percent difference is $|\%D|_\mathrm{avr} \sim 0.5$. Likewise, in the case $n=2.5$ (Table~\ref{tab:291}) the largest value $|\%D|_\mathrm{max} \simeq 1.5$ occurs for $J$ and the average percent difference is $|\%D|_\mathrm{avr} \sim 0.5$. In addition, Table~\ref{tab:291} shows results computed by HCPS (\citep{SG12}, Table~4; the value of $\Omega_\mathrm{K}$ quoted there has been computed by RNS), and their percent differences relative to respective RNS results. Note that the case $n = 2.5$ is the softest one studied in \citep{SG12}.
Next, we verify satisfactory accuracy for the moderately stiff case $n=2.0$ (Table~\ref{tab:292}), where the largest value $|\%D|_\mathrm{max} \simeq 2.5$ occurs for $J$ and the average percent difference is $|\%D|_\mathrm{avr} \sim 1$. Likewise, for the moderately stiff case $n=1.5$ (Table~\ref{tab:293}) we find that the largest value $|\%D|_\mathrm{max} \simeq 5.3$ appears for $|W|$ and the average percent difference is $|\%D|_\mathrm{avr} \sim 2.5$.
On the other hand, the accuracy is tolerable, at least concerning $|\%D|_\mathrm{avr}$, for the stiffest case $n=1.0$. In particular, Table~\ref{tab:295} shows that the largest value $|\%D|_\mathrm{max} \simeq 12$ arises for $T$, while the average percent difference is $|\%D|_\mathrm{avr} \sim 4.5$. In addition, Table~\ref{tab:295} shows results computed by HCPS (\citep{SG12}, Table~1; the value of $\Omega_\mathrm{K}$ quoted there has been computed by RNS), and their percent differences relative to respective RNS results. The case $n = 1.0$ is the stiffest one studied in \citep{SG12}.
Second, since for the case $n = 1.0$ the value of $\sigma_\mathrm{max}$ is large, we find it interesting to study two further models having instead $\sigma = \sigma_\mathrm{max}/2$ (Table~\ref{tab:296}) and $\sigma_\mathrm{max}/3$ (Table~\ref{tab:297}). The first model exhibits its largest value $|\%D|_\mathrm{max} \simeq 8.1$ for $T$ and an average percent difference $|\%D|_\mathrm{avr} \sim 3.5$. The second model has $|\%D|_\mathrm{max} \simeq 6.2$, occurring for $R_\mathrm{e}$, and $|\%D|_\mathrm{avr} \sim 2.5$. Our results show that both the largest and the average percent differences decrease as $\sigma$ decreases; in fact, the model with $\sigma = \sigma_\mathrm{max}/3$ exhibits an accuracy compatible with that of the maximum-mass, critically rotating model $n = 1.5$.
Third, we also find it interesting to study a maximum-mass model with $n=1.0$ in very rapid rotation, having angular velocity
\begin{equation}
\Omega_* = \Omega_* \left(\upsilon_\mathrm{c}/2 \right) =
\sqrt{\frac{1}{2} \, \frac{\upsilon_\mathrm{c}}{2}} =
\sqrt{\frac{1}{2}} \ \left( \Omega_\mathrm{K} \right)_\mathrm{cpu}
\simeq 0.7 \, \left( \Omega_\mathrm{K} \right)_\mathrm{cpu}.
\end{equation}
Table~\ref{tab:299} gives the physical characteristics of this model. We find that the largest value $|\%D|_\mathrm{max} \simeq 6.5$ occurs for both $T$ and $J$, while the average percent difference is $|\%D|_\mathrm{avr} \simeq 2.5$. Hence, for this highly relativistic, rapidly rotating model, the accuracy achieved by HAS is again compatible with that of the maximum-mass, critically rotating model $n = 1.5$.
\section{Concluding Remarks}
Focusing on physical characteristics related strongly to rotation, we remark that HAS computes results that are close to those of RNS. It is well-known that most perturbative, noniterative methods have great difficulties in computing with satisfactory accuracy quantities like $\Omega_\mathrm{K}$ and $R_\mathrm{e}$. A detailed discussion on this matter can be found in \citep{BFGM05} (Sec.~III). Regarding $R_\mathrm{e}$ in critical rotation (in fact, at the mass-shedding limit), Tables~II and III in \citep{BFGM05} quote discrepancies relative to results of nonperturbative methods used in \citep{CST94424} and \citep{BS04} from $\sim 15\%$ to $\sim 25\%$, depending on the particular models studied (namely, constant-mass and maximum-mass sequences, respectively). In addition, Table~IV in \citep{BFGM05} quotes values of $\Omega_\mathrm{K}$ (in fact, mass-shedding frequencies $\nu_\mathrm{ms}$) with discrepancies, relative to results of \citep{CST94424} and \citep{BS04}, from $\sim 20\%$ to $\sim 25\%$. Furthermore, Tables~\ref{tab:291} and \ref{tab:295} incorporate relevant results of two of the models studied in \citep{SG12} (Sec.~7, Tables~4 and 1, respectively) by using HCPS: those with $n = 2.5$ and $n = 1.0$, respectively, which represent the softest and stiffest cases studied in \citep{SG12}. The discrepancies regarding $R_\mathrm{e}$ values, relative to RNS, are $\sim 35\%$ and $\sim 30\%$, respectively. In addition, Tables~6,\,7, and 8 in \citep{SG12} show $\Omega_\mathrm{K}$ values (in fact, mass-shedding angular velocities $\Omega_\mathrm{MS}$) for models of constant baryonic mass with $n = 1.0, \, 1.5, \, \mathrm{and} \, \, 2.5$, respectively, computed by HCPS. The discrepancies relative to RNS are from $\sim 17\%$ to $\sim 23\%$.
On the other hand, the results computed by HAS are much closer to those of RNS. In particular, the largest discrepancy concerning $\Omega_\mathrm{K}$ is $\sim 6.5\%$ (Table~\ref{tab:OMGAK}, fifth entry: case $n = 1.0$); while the largest discrepancy concerning $R_\mathrm{e}$ for maximum-mass models is $\sim 5.5\%$ (Table~\ref{tab:293}, fourth entry: case $n = 1.5$). In conclusion, as compared to RNS, HAS proves to be accurate and reliable for computing models in the extreme regime of maximum mass and critical rotation, from the softest case $n = 2.9$ to the stiffest one $n = 1.0$.
Finally, it should be stressed that HAS is a fast numerical method. In particular, by comparing execution times of HAS and RNS on the same computer, we have verified that HAS is $\sim \! 25$ times faster than RNS for the model $n = 1.0$, $\sim \! 5$ times faster for the model $n = 2.0$, and $\sim \! 3$ times faster for the models $n = 1.5, \, 2.5, \, \mathrm{and} \, \, 2.9$.
\begin{table}
\caption{Percent differences $\%D(\Omega_\mathrm{K})$ of the Keplerian angular velocities $\Omega_\mathrm{K}$ computed by HAS relative to respective values computed by RNS. Labels ``R1'' and ``R2'' denote first-order and second-order results in the rotation parameter $\upsilon$, respectively. The parenthesized signed integers, following numeric values, denote powers of ten.\label{tab:OMGAK}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
$n$ & $\sigma~~~~~~~~~~~~~~~$ & R1~~~~~ & R2~~~~~ \\
\hline
$2.9$ & $\sigma_\mathrm{max}=4.41591(-03)$ & $8.114(-01)$ & $5.198(-02)$ \\
$2.5$ & $\sigma_\mathrm{max}=2.68066(-02)$ & $1.224(+00)$ & $1.678(+00)$ \\
$2.0$ & $\sigma_\mathrm{max}=7.10464(-02)$ & $2.235(+00)$ & $5.473(+00)$ \\
$1.5$ & $\sigma_\mathrm{max}=1.50569(-01)$ & $2.988(+00)$ & $3.659(+00)$ \\
$1.0$ & $\sigma_\mathrm{max}=3.19773(-01)$ & $-6.403(+00)$ & $-1.114(+01)$ \\
$1.0$ & $\sigma_\mathrm{max}/2=1.59887(-01)$ & $-6.400(+00)$ & $-1.104(+01)$ \\
$1.0$ & $\sigma_\mathrm{max}/3=1.06591(-01)$ & $-6.035(+00)$ & $-1.046(+01)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, maximum-mass, critically rotating polytropic model with $n=2.9$, $\rho_\mathrm{c} = 1.481012(-07)$, $E_\mathrm{c} = 1.499978(-07)$, $\sigma_\mathrm{max}=4.41591(-03)$. In all required conversions, we use the value $K = 2.6(+13) \, \mathrm{cgs}$. Columns ``HAS'' and ``RNS'' show results computed by HAS and RNS, respectively. Column ``$\%D$'' shows percent differences $\%D(X) = 100 \, (X_\mathrm{HAS} - X_\mathrm{RNS})/X_\mathrm{HAS}$. All quantities (except for $K$, which is given in cgs, and for the dimensionless ratio in the last entry) are given in polytropic units related to the gravitational units (pu). The parenthesized signed integers, following numeric values, denote powers of ten.\label{tab:290}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ \\
\hline
$M$ & $3.323(+00)$ & $3.323(+00)$ & $1.815(-03)$ \\
$M_0$ & $3.324(+00)$ & $3.324(+00)$ & $-1.038(-02)$ \\
$M_\mathrm{P}$ & $3.348(+00)$ & $3.348(+00)$ & $-1.034(-02)$ \\
$R_\mathrm{e}$ & $9.031(+02)$ & $9.114(+02)$ & $-9.232(-01)$ \\
$\Omega_\mathrm{K}$ & $6.639(-05)$ & $6.635(-05)$ & $5.198(-02)$ \\
$T$ & $2.545(-04)$ & $2.556(-04)$ & $-4.499(-01)$ \\
$|W|$ & $2.540(-02)$ & $2.536(-02)$ & $1.767(-01)$ \\
$J$ & $7.666(+00)$ & $7.705(+00)$ & $-5.021(-01)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $6.688(-01)$ & $6.614(-01)$ & $1.100(+00)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, maximum-mass, critically rotating polytropic model with $n=2.5$, $\rho_\mathrm{c} = 1.176534(-04)$, $E_\mathrm{c} = 1.255381(-04)$, $\sigma_\mathrm{max}=2.68066(-02)$. In all required conversions, we use the value $K = 1.5(+13) \, \mathrm{cgs}$. Column ``HCPS'' shows results computed by HCPS. Column ``$\%D_\mathrm{HCPS}$'' shows percent differences $\%D_\mathrm{HCPS}(X) = 100 \, (X_\mathrm{HCPS} - X_\mathrm{RNS})/X_\mathrm{HCPS}$. Details as in Table~\ref{tab:290}.\label{tab:291}}
\begin{center}
\begin{tabular}{lrrrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ & HCPS~~~ & $\%D_\mathrm{HCPS}$~~ \\
\hline
$M$ & $1.298(+00)$ & $1.298(+00)$ & $-2.278(-02)$ & $1.293(+00)$ & $-3.867(-01)$ \\
$M_0$ & $1.302(+00)$ & $1.303(+00)$ & $-6.539(-02)$ & $1.300(+00)$ & $-2.308(-01)$ \\
$M_\mathrm{P}$ & $1.351(+00)$ & $1.351(+00)$ & $-6.490(-02)$ & $1.348(+00)$ & $-2.226(-01)$ \\
$R_\mathrm{e}$ & $5.956(+01)$ & $5.910(+01)$ & $7.795(-01)$ & $4.394(+01)$ & $-3.450(+01)$ \\
$\Omega_\mathrm{K}$ & $2.543(-03)$ & $2.512(-03)$ & $1.224(+00)$ \\
$T$ & $8.448(-04)$ & $8.472(-04)$ & $-2.742(-01)$ & $8.235(-04)$ & $-2.878(+00)$ \\
$|W|$ & $5.466(-02)$ & $5.419(-02)$ & $8.658(-01)$ & $5.620(-02)$ & $3.577(+00)$ \\
$J$ & $6.644(-01)$ & $6.745(-01)$ & $-1.517(+00)$ & $6.557(-01)$ & $-2.867(+00)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $6.602(-01)$ & $6.533(-01)$ & $1.043(+00)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, maximum-mass, critically rotating polytropic model with $n=2.0$, $\rho_\mathrm{c} = 5.047591(-03)$, $E_\mathrm{c} = 5.764817(-03)$, $\sigma_\mathrm{max}=7.10464(-02)$. In all required conversions, we use the value $K = 1.0(+12) \, \mathrm{cgs}$. Details as in Table~\ref{tab:290}.\label{tab:292}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ \\
\hline
$M$ & $5.502(-01)$ & $5.494(-01)$ & $1.428(-01)$ \\
$M_0$ & $5.598(-01)$ & $5.592(-01)$ & $1.080(-01)$ \\
$M_\mathrm{P}$ & $6.008(-01)$ & $6.002(-01)$ & $1.087(-01)$ \\
$R_\mathrm{e}$ & $1.008(+01)$ & $9.903(+00)$ & $1.791(+00)$ \\
$\Omega_\mathrm{K}$ & $2.436(-02)$ & $2.381(-02)$ & $2.235(+00)$ \\
$T$ & $1.401(-03)$ & $1.404(-03)$ & $-2.461(-01)$ \\
$|W|$ & $5.332(-02)$ & $5.216(-02)$ & $2.179(+00)$ \\
$J$ & $1.150(-01)$ & $1.179(-01)$ & $-2.537(+00)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $6.565(-01)$ & $6.383(-01)$ & $2.769(+00)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, maximum-mass, critically rotating polytropic model with $n=1.5$, $\rho_\mathrm{c} = 5.842562(-02)$, $E_\mathrm{c} = 7.162125(-02)$, $\sigma_\mathrm{max}=1.50569(-01)$. In all required conversions, we use the value $K = 5.3802(+09) \, \mathrm{cgs}$. Details as in Table~\ref{tab:290}.\label{tab:293}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ \\
\hline
$M$ & $2.950(-01)$ & $2.905(-01)$ & $1.545(+00)$ \\
$M_0$ & $3.094(-01)$ & $3.040(-01)$ & $1.742(+00)$ \\
$M_\mathrm{P}$ & $3.420(-01)$ & $3.363(-01)$ & $1.659(+00)$ \\
$R_\mathrm{e}$ & $2.902(+00)$ & $2.748(+00)$ & $5.307(+00)$ \\
$\Omega_\mathrm{K}$ & $1.217(-01)$ & $1.180(-01)$ & $2.988(+00)$ \\
$T$ & $2.329(-03)$ & $2.235(-03)$ & $4.044(+00)$ \\
$|W|$ & $5.078(-02)$ & $4.807(-02)$ & $5.333(+00)$ \\
$J$ & $3.828(-02)$ & $3.786(-02)$ & $1.089(+00)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $6.118(-01)$ & $6.161(-01)$ & $-7.037(-01)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, maximum-mass, critically rotating polytropic model with $n=1.0$, $\rho_\mathrm{c} = 3.197730(-01)$, $E_\mathrm{c} = 4.220278(-01)$, $\sigma_\mathrm{max}=3.19773(-01)$. In all required conversions, we use the value $K = 1.0(+05) \, \mathrm{cgs}$. Details as in Table~\ref{tab:291}.\label{tab:295}}
\begin{center}
\begin{tabular}{lrrrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ & HCPS~~~ & $\%D_\mathrm{HCPS}$~~ \\
\hline
$M$ & $1.844(-01)$ & $1.876(-01)$ & $-1.698(+00)$ & $1.789(-01)$ & $-4.863(+00)$ \\
$M_0$ & $2.040(-01)$ & $2.061(-01)$ & $-1.014(+00)$ & $1.965(-01)$ & $-4.886(+00)$ \\
$M_\mathrm{P}$ & $2.306(-01)$ & $2.332(-01)$ & $-1.111(+00)$ & $2.230(-01)$ & $-4.574(+00)$ \\
$R_\mathrm{e}$ & $1.073(+00)$ & $1.032(+00)$ & $3.839(+00)$ & $7.928(-01)$ & $-3.017(+01)$ \\
$\Omega_\mathrm{K}$ & $3.835(-01)$ & $4.080(-01)$ & $-6.403(+00)$ \\
$T$ & $3.582(-03)$ & $4.011(-03)$ & $-1.196(+01)$ & $3.471(-03)$ & $-1.556(+01)$ \\
$|W|$ & $5.086(-02)$ & $4.960(-02)$ & $2.466(+00)$ & $4.753(-02)$ & $-4.355(+00)$ \\
$J$ & $1.868(-02)$ & $1.966(-02)$ & $-5.222(+00)$ & $1.702(-02)$ & $-1.551(+01)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $6.082(-01)$ & $5.863(-01)$ & $3.597(+00)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, critically rotating polytropic model with $n=1.0$, $\rho_\mathrm{c} = 1.598870(-01)$, $E_\mathrm{c} = 1.854509(-01)$, $\sigma = \frac{1}{2}\sigma_\mathrm{max}(n=1.0) = 1.59887(-01)$. In all required conversions, we use the value $K = 1.0(+05) \, \mathrm{cgs}$. Details as in Table~\ref{tab:290}.\label{tab:296}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ \\
\hline
$M$ & $1.780(-01)$ & $1.790(-01)$ & $-5.491(-01)$ \\
$M_0$ & $1.940(-01)$ & $1.947(-01)$ & $-3.477(-01)$ \\
$M_\mathrm{P}$ & $2.074(-01)$ & $2.083(-01)$ & $-4.400(-01)$ \\
$R_\mathrm{e}$ & $1.372(+00)$ & $1.274(+00)$ & $7.114(+00)$ \\
$\Omega_\mathrm{K}$ & $2.772(-01)$ & $2.949(-01)$ & $-6.400(+00)$ \\
$T$ & $2.681(-03)$ & $2.898(-03)$ & $-8.115(+00)$ \\
$|W|$ & $3.278(-02)$ & $3.223(-02)$ & $1.685(+00)$ \\
$J$ & $1.934(-02)$ & $1.965(-02)$ & $-1.612(+00)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $5.751(-01)$ & $5.784(-01)$ & $-5.652(-01)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, critically rotating polytropic model with $n=1.0$, $\rho_\mathrm{c} = 1.065910(-01)$, $E_\mathrm{c} = 1.179526(-01)$, $\sigma = \frac{1}{3} \sigma_\mathrm{max}(n=1.0) = 1.06591(-01)$. In all required conversions, we use the value $K = 1.0(+05) \, \mathrm{cgs}$. Details as in Table~\ref{tab:290}.\label{tab:297}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ \\
\hline
$M$ & $1.608(-01)$ & $1.599(-01)$ & $5.632(-01)$ \\
$M_0$ & $1.725(-01)$ & $1.715(-01)$ & $5.953(-01)$ \\
$M_\mathrm{P}$ & $1.806(-01)$ & $1.797(-01)$ & $5.137(-01)$ \\
$R_\mathrm{e}$ & $1.506(+00)$ & $1.413(+00)$ & $6.175(+00)$ \\
$\Omega_\mathrm{K}$ & $2.274(-01)$ & $2.411(-01)$ & $-6.035(+00)$ \\
$T$ & $2.046(-03)$ & $2.042(-03)$ & $1.705(-01)$ \\
$|W|$ & $2.236(-02)$ & $2.185(-02)$ & $-1.514(+00)$ \\
$J$ & $1.800(-02)$ & $1.694(-02)$ & $5.852(+00)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $5.635(-01)$ & $5.739(-01)$ & $-1.843(+00)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Physical characteristics of a general-relativistic, maximum-mass polytropic model rotating with rotation parameter $\upsilon = \upsilon_\mathrm{c}/2$, i.e. $\Omega = \sqrt{1/2} \, \, \Omega_\mathrm{K}$; $n=1.0$, $\rho_\mathrm{c} = 3.197730(-01)$, $E_\mathrm{c} = 4.220278(-01)$, $\sigma_\mathrm{max}=3.19773(-01)$. Details as in Table~\ref{tab:295}.\label{tab:299}}
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
quantity & HAS~~~~ & RNS~~~~ & $\%D$~~~~ \\
\hline
$M$ & $1.707(-01)$ & $1.694(-01)$ & $7.168(-01)$ \\
$M_0$ & $1.875(-01)$ & $1.861(-01)$ & $7.506(-01)$ \\
$M_\mathrm{P}$ & $2.130(-01)$ & $2.116(-01)$ & $6.625(-01)$ \\
$R_\mathrm{e}$ & $7.712(-01)$ & $7.968(-01)$ & $-3.318(+00)$ \\
$\Omega$ & $2.342(-01)$ & $2.342(-01)$ & $0.000(+00)$ \\
$T$ & $9.099(-04)$ & $9.691(-04)$ & $-6.504(+00)$ \\
$|W|$ & $4.545(-02)$ & $4.315(-02)$ & $5.057(+00)$ \\
$J$ & $7.771(-03)$ & $8.276(-03)$ & $-6.503(+00)$ \\
$R_\mathrm{p}/R_\mathrm{e}$ & $8.983(-01)$ & $9.043(-01)$ & $-6.707(-01)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\clearpage
\section{Introduction}
Astrophysical bow shock is a classical subject, and is observed around objects interacting with supersonic gas flows in a wide range of scales from planets to cosmic jets (Dyson et al. 1975; van Buren et al. 1990; Ogura et al. 1995; Wilkin 1996; Arce and Goodman 2002; Reipurth et al. 2002). Galactic-disk scale bow structure is observed in spiral arms, where a supersonic flow in galactic rotation encounters a stagnated gaseous arm in the density-wave potential (Martos and Cox 1998; G{\'o}mez and Cox 2004a, b).
In scales of star-forming (SF) regions, a bow shock was observed at G30.5+00 associated with the SF region W43 in the tangential direction of the 4-kpc molecular arm (Scutum arm) in thermal radio continuum emission associated with a CO line molecular arc (Sofue 1985). The molecular bow at G30.5+00 has recently been studied in detail based on the Nobeyama 45-m CO line survey (Sofue et al. 2018), which we call the molecular bow shock (MBS) (figure \ref{G30}).
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{G30.ps}
\end{center}
\caption{\co intensity map (gray) of the molecular bow shock (MBS) G30.5 in the Galaxy overlaid on a 10-GHz continuum map (contours) (Sofue et al. 2018), which compose a concave arc with respect to W43. Thick lines indicate calculated bows for $\Rbow=25$, 50 and 75 pc.}
\label{G30}
\end{figure}
An MBS is a concave arc of molecular gas around an HII region (SF region) formed on the up-stream side of galactic rotation with respect to the SF region. An MBS is formed in such a way that the interstellar gas in the galactic supersonic flow encounters a pre-existing HII region on the downstream side in the galactic shock wave (figure \ref{illust}).
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{illust.ps}
\end{center}
\caption{Illustration of the molecular bow and GCH concave to the central OB star cluster proposed for the W43 SF complex in the 4-kpc arm of the Galaxy (Sofue et al. 2018). The small inset illustrates the wavy sequential star formation discussed later. }
\label{illust}
\end{figure}
A similar phenomenon is observed in star-forming regions as a cometary HII region tailing downstream, when a compact HII region is embedded in a flow of ambient interstellar gas (Arthur \& Hoare 2006; Reid \& Ho 1985; Steggles et al. 2017; van Buren et al. 1990; Fukuda and Hanawa 2000; Campbell-White et al. 2018; Deharveng et al. 2015). Current studies of cometary HII regions have dealt with sub-parsec to parsec scale objects inside individual SF regions. However, our observation of the association of the spiral-arm scale MBS G30.5 with the SF complex around W43 suggests the existence of larger, spiral-arm scale cometary HII regions associated with MBS. Namely, an MBS developed on the up-stream side of a cometary HII region may be a common phenomenon in spiral arms.
On this premise, we have searched for bow-shock plus cometary structures in spiral arms of nearby galaxies. In the present paper, we report the identification of a number of bow structures in the barred spiral galaxy M83 (NGC 5236). Assuming that dark clouds in optical images represent molecular clouds, we name them molecular bow shocks (MBS). We also show that MBS are generally associated with giant cometary HII regions (GCH) on their down-stream sides, which may alternatively be called giant HII cones (GHC). An MBS and a GCH therefore make a single set of objects, and may be referred to as either MBS or GCH.
Morphology and energetics (luminosity) of individual HII regions and OB clusters have been studied by optical imaging of M83 using the Hubble Space Telescope (HST) (Chandar et al. 2010, 2014; Liu et al. 2013; Blair et al. 2014; Whitmore et al. 2011). The high-resolution molecular gas distribution in M83 has been extensively observed in the CO line emissions, and detailed comparative studies with HII regions have been obtained using ALMA high-resolution maps (Hirota et al. 2018; Egusa et al. 2018).
The structural relation of HII regions and molecular clouds has been one of the major subjects of star formation studies in the Galaxy, such as cloud-cloud collisions (McKee and Ostriker 2007). However, the spatially resolved relation between individual HII regions and dark clouds in external galaxies seems not to have been studied yet. We here focus on individual HII regions in M83 and their morphological relation with the associated dark clouds. We will show that their morphology is similar to the cometary cone structure modeled for G30.5 in the Galaxy. In order to explain the morphology, we propose qualitative models based on theories of bow shocks and expanding HII regions.
\section{Extragalactic Giant Cometary \Hii Regions (GCH) and Molecular Bow Shocks (MBS)}
We examine optical images of M83 in $\lambda$438, 502 and 657 nm bands observed with the Hubble Space Telescope (HST) taken from the STScI and NASA web sites.
We adopt the nucleus position at RA=13h 37m 00s.8 and Dec=$-29\deg 51' 56''.0$ (Sofue and Wakamatsu 1994) and a distance of 4.5 Mpc (Thim et al. 2003).
Figure \ref{zoo} shows an HST image of M83 at 657 nm, where giant cometary HII regions (GCH) associated with dark bow-shaped features (MBS) are marked by white arcs. Figure \ref{zoo-II} shows the same, but as a supplement for the outer regions.
\begin{figure*}
\begin{center}
\includegraphics[width=16cm]{zoo.ps}
\end{center}
\caption{Giant cometary HII regions (GCH) and molecular (dark) bow shocks (MBS) in M83 marked by white arcs on a $\lambda 657$-nm band image taken with the HST (http://www.stsci.edu/hst/wfc3/phot-zp-lbn). Spiral arm and rotation directions are indicated by a dashed line and arrows, respectively. Inserted is an enlargement of region B in RGB (657, 502, 438 nm) color composite. }
\label{zoo}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{zoo-II.ps}
\end{center}
\caption{Same as figure \ref{zoo}, but for outer regions of M83. The dashed large ellipse and long dashed lines indicate corotation circle (Hirota et al. 2014). Thin dashed arcs and small ellipses are less clear or irregular MBS+GCH. Color photo was taken from the web page of NASA at https://apod.nasa.gov/apod/ap140128.html.}
\label{zoo-II}
\end{figure*}
The MBS+GCH were identified by eye as follows; automated identification would be desirable, but is beyond the scope of the present work. First, it was easy to find HII regions and open clusters using each of the three color photographs (e.g., red for HII regions and blue for OB clusters), assisted by the color-coded image inserted in figure \ref{zoo}. Then, a search for associated dark lanes and clouds was carried out; young, and therefore bright, SF/HII regions are almost always associated with dark clouds. We excluded HII regions that are too faint or diffuse, which are either associated with diffuse clouds or not associated with any dark cloud.
By pairing an HII region with a dark cloud, we examined their morphological relation individually. In most cases, an HII region is surrounded by an arc of dark lane in such a way that the HII region is lopsided and open toward the inter-arm direction, while the other, brighter side faces a concave bow-shaped dark lane, as illustrated in figure \ref{illust}. Such a lopsided HII region and its molecular arc are identified here as a GCH (giant cometary HII region) and an MBS (molecular bow shock), assuming that a dark cloud is a molecular cloud.
The GCH and MBS are generally located on the down-stream sides of the dark lanes of spiral arms. Each GCH is sheathed inside an MBS, and the inner wall of the MBS coincides with the outer front of the HII region, composing a bright rim of \Ha emission. Figure \ref{illust} illustrates the GCH/MBS structure.
HII regions in M83 have been classified into several categories according to their sizes and luminosities (Whitmore et al. 2011). We here classify HII regions in M83 into three morphological types, summarized in table \ref{tabclass}, and focus on Type III.
\begin{itemize}
\item Type I: HII bubbles around low luminosity OB clusters with sizes smaller than the disk thickness and galactic shock thickness with full extent less than $\sim 30$ pc. HII regions of categories 1 to 3 by Whitmore et al. (2011) are of this type.
\item Type II: Bipolar cylindrical HII region open to the halo with length comparable to the disk scale height. The wall looks like a hole in the disk with diameter comparable to the disk thickness. Categories 3 to 4 are of this type.
\item Type III: Giant cometary HII regions (GCH), alternatively giant HII cones (GHC), open to the halo as well as to the inter-arm region, which develop around luminous OB associations. The extent is comparable to or greater than the disk thickness and the width of the galactic shock wave, with full extent as large as $\sim 100-200$ pc. HII regions of categories 4 to 6 belong to this type.
GCH is characterized by the curvature $\Rbow\sim \Rzero$ of the bow/cometary head, axis length $\Lcone$, and opening angle of the cone.
\end{itemize}
\begin{table*}
\caption{Morphological classification of HII regions.\\}
\begin{center}
\begin{tabular}{llllll}
\hline
\hline
Type & property&$\Rbow$ (pc) & $\Lcone$ (pc) &$\L(\Lsun)$\\
\hline
I & HII bubble& $\sim 10$ &$ \sim 20$ &$\sim 10-10^2$\\
II& Bipolar HII cylinder &$\sim 20$ &$\sim 20-50$ &$\sim 10^2$\\
III & Giant cometary HII cone &$\sim 20-150$ &$\sim 50-200$ & $\sim 10^2-10^5$ \\
\hline
\end{tabular}
\end{center}
\label{tabclass}
\end{table*}
Using figure \ref{zoo}, where the coordinates are indicated, we measured the positions, bow-head curvatures $\Rbow$, and position angles (PA) of the GCH/MBS by fitting the arcs with parabolas. The measured values are listed in table \ref{tabbow}.
Figure \ref{histo} shows the frequency distributions of $\Rbow$, the estimated $\L$, and the offset of PA ($\delta$PA) from the direction perpendicular to the local spiral arm (dashed line in figure \ref{zoo}). In the statistics, we removed GCH/MBS with $\Rbow>150$ pc, which are mostly weak dark lanes without luminous \Ha nebulae (dashed arcs in figure \ref{zoo}).
The PA offset is concentrated around $\delta PA \sim 0\deg$, indicating that the bows (cones) develop perpendicularly to the arms toward the outer inter-arm region. The cones open toward the down-stream side of the gas flow, which rotates with the galaxy faster than the spiral density wave rotating rigidly at the pattern speed.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{hisRbow.ps}\\
\includegraphics[width=6cm]{hisPA.ps}\\
\includegraphics[width=6cm]{hisLum.ps}
\end{center}
\caption{Frequencies of $\Rbow$, PA offset from vector perpendicular to local arm (dashed line in figure \ref{zoo}), and estimated $\L$. }
\label{histo}
\end{figure}
\begin{table}
\caption{Bow head position (offset from nucleus), $\Rbow$, PA of cone axis, and $\L$.}
\begin{tabular}{llllll}
\hline
\hline
& $\delta$RA(")&$\delta$Dec(")&$\Rzero$(pc)& PA(deg)&$\L(\Lsun)$\\
\hline A & 147.7& -54.0& 160.4& -60.1& 0.47E+05\\
& 139.7& -47.3& 71.3& 45.2& 0.41E+04\\
& 137.7& -51.0& 177.1& 82.8& 0.63E+05\\
& 143.5& -38.4& 169.6& 67.6& 0.55E+05\\
& 135.9& -29.8& 46.8& 0.0& 0.12E+04\\
& 133.5& -26.6& 52.5& 72.7& 0.16E+04\\
& 133.3& -17.4& 47.2& -5.5& 0.12E+04\\
& 143.1& -13.3& 83.9& -0.8& 0.67E+04\\
\hline B & 126.4& 20.7& 39.4& 3.5& 0.69E+03\\
& 134.2& 24.6& 91.6& 54.9& 0.87E+04\\
& 140.0& 24.4& 34.9& 43.4& 0.48E+03\\
& 149.9& 31.1& 31.4& 73.1& 0.35E+03\\
& 164.9& 28.2& 61.2& -72.9& 0.26E+04\\
& 152.7& 35.0& 27.4& 43.7& 0.23E+03\\
& 157.0& 40.5& 39.0& 61.2& 0.67E+03\\
& 159.6& 44.9& 25.9& 32.2& 0.20E+03\\
& 160.7& 48.8& 35.5& 52.8& 0.50E+03\\
& 163.8& 52.0& 23.8& 38.3& 0.15E+03\\
& 131.7& 44.5& 25.5& -21.0& 0.19E+03\\
& 126.6& 59.5& 16.6& 43.6& 0.51E+02\\
& 131.3& 30.6& 51.4& 71.5& 0.15E+04\\
& 139.2& 32.9& 104.4& 48.2& 0.13E+05\\
& 142.1& 34.4& 79.2& 33.2& 0.56E+04\\
& 141.0& 40.4& 142.5& 55.2& 0.33E+05\\
& 148.4& 38.7& 15.6& 22.7& 0.43E+02\\
& 153.7& 42.1& 20.4& 52.2& 0.95E+02\\
& 155.0& 44.2& 19.6& 51.9& 0.85E+02\\
& 149.8& 48.7& 134.2& 45.2& 0.27E+05\\
& 153.9& 54.3& 83.5& 48.6& 0.66E+04\\
& 156.6& 54.1& 191.0& 24.2& 0.78E+05\\
& 144.0& 58.5& 118.6& -5.3& 0.19E+05\\
\hline C & 70.2& 55.4& 37.7& -4.4& 0.61E+03\\
& 73.7& 53.9& 28.0& -37.1& 0.25E+03\\
& 82.7& 54.1& 84.7& -31.6& 0.68E+04\\
& 98.1& 50.5& 54.2& -34.0& 0.18E+04\\
& 90.0& 64.3& 94.8& -35.8& 0.96E+04\\
& 109.6& 58.9& 45.6& -47.0& 0.11E+04\\
& 106.2& 63.5& 91.7& -22.4& 0.87E+04\\
\hline D & 20.9& 25.8& 37.1& -40.5& 0.57E+03\\
& 26.8& 23.5& 17.3& -47.1& 0.58E+02\\
& 34.7& 9.8& 19.9& -43.3& 0.88E+02\\
& 36.3& 5.8& 33.4& -46.1& 0.42E+03\\
\hline E & 29.6& -58.9& 59.2& 64.3& 0.23E+04\\
& 33.2& -49.9& 90.3& 58.8& 0.83E+04\\
& 28.4& -47.8& 32.8& 40.4& 0.40E+03\\
& 34.9& -40.3& 38.3& 62.2& 0.63E+03\\
& 32.8& -34.8& 24.3& 76.0& 0.16E+03\\
& 36.8& -29.2& 79.6& -82.0& 0.57E+04\\
& 43.4& -29.8& 52.5& -35.4& 0.16E+04\\
& 42.8& -22.1& 76.0& -21.4& 0.49E+04\\
\hline
\end{tabular}
\label{tabbow}
\end{table}
\section{Qualitative Models}
\subsection{Bow shock}
In our recent paper (Sofue et al. 2018) we modeled the MBS G30.5 by applying Wilkin's (1996) analytical model for a stellar-wind bow shock (figure \ref{G30}). The distance $Q$ of the bow front from the wind source is related to the elevation angle $\phi$ through
\be
Q(\phi)=\Rbow\ {\rm cosec}\ \phi \sqrt{3(1-\phi\ {\rm cot}\ \phi)}.
\ee
Here, $\Rbow$ is the stand-off radius defined as the distance of the front on the galactic plane from the wind source, which is measured as the smallest curvature radius of the bow head facing the gas flow. It is related to the momentum injection rate $\dot{m}_{\rm w}$ of the wind of velocity $V_{\rm w}$ from the central star and the ram pressure of the in-flowing gas from outside by
\be
\Rbow=\sqrt{\dot{m}_{\rm w} V_{\rm w} \over 4 \pi \rho V^2}.
\label{eq_bshock}
\ee
In the case of G30.5 we obtained $\Rbow\sim 54$ pc (Sofue et al. 2018), as shown in figure \ref{G30}, where bow shapes calculated for $\Rbow=25$, 50 and 75 pc are shown. Figure \ref{bowshock} shows the same calculated result compared with a GCH observed in region C of M83.
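As a numerical illustration (ours, not part of the original analysis), the Wilkin front shape of equation (1) and the stand-off radius of equation (\ref{eq_bshock}) can be evaluated with a short Python sketch; the function names are ours.

```python
import math

def bow_front_distance(phi, r_bow):
    """Wilkin (1996) bow-shock front: distance Q(phi) of the front
    from the wind source at elevation angle phi (radians)."""
    return (r_bow / math.sin(phi)) * math.sqrt(3.0 * (1.0 - phi / math.tan(phi)))

def standoff_radius(mdot_w, v_w, rho, v_flow):
    """Stand-off radius from momentum balance:
    R_bow = sqrt(mdot_w V_w / (4 pi rho V^2)), consistent (e.g. CGS) units."""
    return math.sqrt(mdot_w * v_w / (4.0 * math.pi * rho * v_flow**2))

# On the symmetry axis (phi -> 0) the front distance tends to R_bow,
# and at phi = pi/2 it equals sqrt(3) R_bow:
print(bow_front_distance(1e-4, 50.0))         # ~50 pc
print(bow_front_distance(math.pi / 2, 50.0))  # ~86.6 pc
```

Note that quadrupling the wind momentum rate only doubles $\Rbow$, reflecting the square-root dependence of the momentum balance.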
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{front-xy-bow.ps} \\
\includegraphics[width=5cm]{front-xy-bow-m83c.ps}
\end{center}
\caption{Calculated bow shock front (top), and the same overlaid on a Type III GCH in M83 from figure \ref{zoo} with arbitrary scaling (bottom). }
\label{bowshock}
\begin{center}
\includegraphics[width=6cm]{front-xy-HII.ps} \\
\includegraphics[width=5cm]{front-xy-HII-m83c.ps}
\end{center}
\caption{Cometary front of an HII region for different central luminosities, and the same overlaid on a GCH in M83 by arbitrary scaling.}
\label{comHII}
\end{figure}
\subsection{HII cone}
In order to model the front shape of an HII region in an inhomogeneous ISM, we first consider the Str\"omgren radius in a uniform ISM given by
\be
\Rhii \simeq \left({3 \nuv \over 4 \pi \ni\ne\ar }\right)^{1/3}
\label{strsph}
\ee
where $\nuv$ is the UV photon number rate radiated by the OB stars, $\ar \sim 4\times10^{-13} {\rm cm^{3}s^{-1}}$ is the recombination rate coefficient, and $\ni$ and $\ne$ are the ion and electron densities, respectively.
We then assume that this equation holds in each small solid angle in any direction, within which the density is assumed to be constant. We neglect the dynamical motion of the gas inside the HII region. This approximation gives a qualitative shape of the front.
The ISM density variation is represented by an exponentially decreasing function in the $x$ direction perpendicular to the spiral arm with scale width $x_0$.
The $z$-directional (vertical) density profile is assumed to be represented by an inverse hyperbolic cosine function with scale thickness $z_0$.
Thus, the density in the galactic shock wave is expressed by
\be
n=n_0 \left[\exp \left(-{x\over x_0}\right)+\epsilon_1\right] \left[{\rm cosh}^{-1}\left( {z\over z_0}\right)+\epsilon_2\right] .
\label{nxz}
\ee
Here, $n_0$ is a constant, and $\epsilon_1$ and $\epsilon_2$ are constants representing the relative background densities in the galactic plane and halo, respectively. The density increases exponentially at negative $x$, so that the front cannot reach beyond $x\sim -2 x_0$ in the present cases, while at positive $x$ it decreases to the relative inter-arm value $\epsilon_1$.
The ion and electron densities are assumed to be related to the neutral gas density $n$ through
\be
\ne\sim \ni \sim n {\Tn \over \Te},
\label{ne}
\ee
where $T_{\rm n}$ and $T_{\rm e}$ are the temperatures of neutral and HII gas.
We now define the representative radius $\Rhii$ as the equilibrium radius of a spherical HII region in uniform gas with density $n$ by
\be
\Rhii \simeq \left[{3 \nuv \over 4 \pi \ar n^2 } \left({\Te \over \Tn} \right)^2\right]^{1/3}.
\label{Rhiineu}
\ee
Rewriting $\nuv \sim \L/h\nu$, with $\L$ and $h\nu$ being the luminosity of the central OB stars and the UV photon energy at the Lyman limit (912 \AA), respectively, we have
\be
\Rhii \sim
96.1 \left({\L\over 10^4} \right)^{1/3} \left({n\over 10^2}\right)^{-2/3} \left({\Tn\over 20}\right)^{-2/3} \left({\Te\over 10^4}\right)^{2/3} ~[{\rm pc}],
\label{Rhiipc}
\ee
where $\L$ is measured in $\Lsun$, $n$ in H cm$^{-3}$, and $\Te$ in K.
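For reference, the scaled relation of equation (\ref{Rhiipc}) is easy to evaluate numerically; the following Python sketch is ours, with the fiducial values as defaults.

```python
def r_hii_pc(lum=1e4, n=1e2, t_n=20.0, t_e=1e4):
    """Equilibrium HII-region radius in pc from the scaled Stromgren
    relation: lum in Lsun, n in H cm^-3, temperatures in K."""
    return (96.1 * (lum / 1e4) ** (1.0 / 3.0)
                 * (n / 1e2) ** (-2.0 / 3.0)
                 * (t_n / 20.0) ** (-2.0 / 3.0)
                 * (t_e / 1e4) ** (2.0 / 3.0))

print(r_hii_pc())          # 96.1 pc at the fiducial values
print(r_hii_pc(lum=8e4))   # twice as large: the L^(1/3) scaling
```

The weak $L^{1/3}$ dependence and the stronger $n^{-2/3}$ dependence are what make the front shape sensitive to the ambient density structure.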
Writing $\Rhii =\sqrt{x^2+y^2+z^2}$, we can express $y$ as a function of $x$ and $z$, which represents the front shape of a GCH as
\be
y=\sqrt{r_0^2 \left(e^{-x/x_0}+\epsilon_1\right)^{-4/3} \left({\rm cosh}^{-1}\left({z\over z_0}\right)+\epsilon_2\right)^{-4/3}-x^2-z^2}.
\label{yxz}
\ee
Here, $r_0$ is a parameter depending on the luminosity of the central OB association, measured in terms of the head curvature radius $\Rzero$ of the GCH; it represents the radius for the neutral gas density $n=n_0$ at $(x,y,z)=(0,0,0)$.
If we measure $\Rzero$, which is assumed to be equal to $\Rbow$, we can estimate the UV luminosity $\L$ of the exciting OB stars for a fixed gas density $n_0$.
In figure \ref{comHII} we show HII fronts for $r_0=0.5$, 0.75, and 1 in the $(x,y)$ plane calculated for $\epsilon_1=0.1$. The scales are normalized by the scale length $x_0$. The bottom panel is an overlay of the calculated front on a GCH in M83, where the scale is adjusted arbitrarily to fit the image.
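The front shape of equation (\ref{yxz}) with the density model of equation (\ref{nxz}) can be sketched numerically as follows (our Python sketch; we read the ${\rm cosh}^{-1}$ factor as $1/\cosh$, i.e., a density declining with height).

```python
import math

def density_factor(x, z, x0=1.0, z0=1.0, eps1=0.1, eps2=0.01):
    """Dimensionless density of the galactic-shock model: exponential
    decline across the arm (x) times a 1/cosh vertical profile (z)."""
    return (math.exp(-x / x0) + eps1) * (1.0 / math.cosh(z / z0) + eps2)

def front_y(x, z, r0=1.0, **kwargs):
    """Front shape y(x, z): since R_HII scales as n^(-2/3),
    y^2 = r0^2 * n^(-4/3) - x^2 - z^2.  Returns None where the
    front does not reach (y^2 < 0)."""
    y2 = r0**2 * density_factor(x, z, **kwargs) ** (-4.0 / 3.0) - x * x - z * z
    return math.sqrt(y2) if y2 >= 0.0 else None

# The front extends farther where the density drops (down-stream, +x)
# and terminates in the dense up-stream gas (-x), producing the cone.
print(front_y(0.0, 0.0), front_y(2.0, 0.0), front_y(-1.0, 0.0))
```

The asymmetry between the $+x$ and $-x$ directions is what turns a Str\"omgren sphere into the cometary cone of Type III.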
In order to confirm that the qualitative model adopted here can reasonably reproduce results of hydrodynamical simulations, we compare the calculated shape of an HII front expanding in a sheet with density profile $\propto (1+x/x_0)^{-2}$ with a hydrodynamical simulation by Fukuda and Hanawa (2000). Figure \ref{hanawa} shows the comparison, and we may consider the model reasonable insofar as we are interested in a qualitative analysis of the front shapes.
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{hanawa1.ps}
\includegraphics[width=5cm]{hanawa2.ps}
\end{center}
\caption{Comparison of hydrodynamical simulation by Fukuda and Hanawa (2000) of an off-center expansion of HII region (left) and front shape given by the present qualitative model (right) in a gas layer with density profile $\propto (1+x/x_0)^{-2}$.}
\label{hanawa}
\end{figure}
Figure \ref{model_3d} shows 3D front shapes for $r_0=0.5$, 0.7, 1 and 1.5 for $x_0=1$, $z_0=1$, $\epsilon_1=0.1$ and $\epsilon_2=0.01$, where the scales are normalized by $\Rhii$.
The front shapes vary according to the parameter $r_0\propto \Rzero \propto L^{1/3}n^{-2/3}$.
For low-luminosity OB clusters, the HII front shows a spherical shape of Type I. As the luminosity increases, the front is elongated in the $z$ direction, resulting in a Type II bipolar cylinder open to the halo. As $r_0$ increases further, the shape opens more into the inter-arm region and the halo, resulting in a Type III GCH.
\begin{figure}
\begin{center}
Type I\includegraphics[width=5.8cm]{3d_05_1_1_01_001.ps}\\
Type II\includegraphics[width=5.8cm]{3d_07_1_1_01_001.ps}\\
Type III\includegraphics[width=5.8cm]{3d_1_1_1_01_001.ps}
\vskip 0.7cm
\end{center}
\caption{HII region shapes of Types I (bubble), II (cylinder) and III (GCH cone) according to $r_0=0.5$, 0.7, and 1 for fixed parameters of $x_0=1$, $z_0=1$, $\epsilon_1=0.1$ and $\epsilon_2=0.01$. }
\label{model_3d}
\end{figure}
\begin{table}
\caption{Representative GCH parameters}
\begin{tabular}{ll}
\hline
\hline
Parameter & Value \\
\hline
UV luminosity &$\L \sim 10^4\Lsun$ \\
Neutral gas density &$n \sim 10^2$ \Hcc \\
Temp. of neutral gas & $\Tn \sim 20$ K \\
Temp. of HII gas &$\Te$ $\sim 10^4$ K \\
Scaling HII radius & $\Rzero \sim 96.1$ pc \\
\hline
Cone-head radius &$r_0/\Rzero= 0.5,\ 0.75,\ 1$ \\
Shock scale length & $x_0/\Rzero=1$ \\
Disk scale height &$z_0/\Rzero=1$ \\
Backgr. density in disk &$\epsilon_1=0.1$ \\
--- in halo &$\epsilon_2= 0.01$ \\
\hline
\end{tabular}
\label{tabpara}
\end{table}
\subsection{UV luminosity}
The \co integrated intensity toward dark lanes in M83 has been observed to be $\Ico\sim 150$ \Kkms (Egusa et al. 2018), which yields a molecular gas density of $n\sim 200$ \Hcc for an assumed disk thickness of $\sim 50$ pc and a conversion factor of $2\times 10^{20}$ \Hsqcm (\Kkms)$^{-1}$ (Bolatto et al. 2013). Given the gas density, the UV luminosities $\L$ of the exciting OB stars are estimated for the measured $\Rzero\sim \Rbow$ using equation (\ref{Rhiipc}) as
$\L \sim 10^4 (\Rzero/96.1~{\rm pc})^3 \Lsun$.
The estimated luminosities range from $\L\sim 10^3$ to $\sim 10^5\Lsun$ (table \ref{tabbow}), and may be compared with the visual luminosities $L_{\rm v}\sim 10^4 - 10^6\Lsun$ (absolute visual magnitudes $M_{\rm v}\sim -6$ to $-10$) of OB clusters in M83 (Chandar et al. 2010).
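The chain of estimates in this subsection can be reproduced with a short Python sketch (ours; the coefficient in the luminosity relation is chosen so that the values in table \ref{tabbow} are recovered).

```python
def h2_density(ico=150.0, thickness_pc=50.0, xco=2e20):
    """Mean H2 number density (cm^-3) from the CO integrated intensity
    ico (K km/s) through N(H2) = Xco * Ico and the disk thickness."""
    pc_cm = 3.086e18  # cm per pc
    return xco * ico / (thickness_pc * pc_cm)

def uv_luminosity(r0_pc, coeff=1e4):
    """Exciting-cluster UV luminosity (Lsun) from the measured bow-head
    radius: L ~ coeff * (R0 / 96.1 pc)^3, coeff tuned to the table."""
    return coeff * (r0_pc / 96.1) ** 3

print(h2_density())          # ~190 cm^-3, close to the quoted 200
print(uv_luminosity(160.4))  # ~4.7e4 Lsun, first entry of region A
```

The cubic dependence on $\Rzero$ means that the factor $\sim 10$ spread in measured bow radii maps onto the three-decade spread in estimated luminosities.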
\subsection{GCH sheathed inside MBS}
An MBS is composed of low-temperature, high-density molecular gas. On the other hand, a GCH is high-temperature, low-density HII gas expanded around OB clusters. However, the MBS and GCH show a similar morphology (figures \ref{bowshock} and \ref{comHII}), and occur simultaneously. An HII region encountering a supersonic gas flow in galactic rotation forms a bow shock on its up-stream side, and is blown off toward the down-stream side, making an open cone shape.
The interaction of the MBS and GCH may cause mutual deformation. The expansion of the GCH is suppressed by the MBS on the up-stream side, while it becomes freer in the tail. The side wall of the MBS also suppresses the HII expansion, guiding the HII gas into a more collimated cone.
\subsection{Dual-side compression and wavy sequential star formation}
The GCH and MBS act as a mutual triggering mechanism of star formation in such a way that the GCH compresses the stacked molecular gas at the bow head from inside, while the MBS compresses the molecular clouds from outside. The supersonic flow driven by the spiral arm's gravitational potential conveys and supplies molecular clouds to the bow head.
Thus, the bow head becomes an efficient star-forming site through the ``dual dynamical compression'' sandwiched between the MBS and GCH. This enhances the galactic-scale compression of molecular clouds along the galactic shock, and makes the star formation more efficient than usually considered, e.g., due to cloud-cloud collisions (McKee and Ostriker 2007).
MBS and GCH appear in chains along the arms (figures \ref{zoo} and \ref{zoo-II}), and compose a wavy array of SF regions along the galactic shock wave. Furthermore, the bow waves from the MBS encounter the neighboring waves, triggering SF in dense regions between the MBS in a sequential way (figure \ref{illust}).
\subsection{High SF rate}
It has been argued that cloud-cloud collision is an efficient mechanism of star formation (McKee and Ostriker 2007), and the star formation rate (SFR) is given by $1/\tc$, where $\tc$ is the cloud-cloud collision time given by
\be
\tc \sim {\rhoc \Rc \over \rhom \sigma}\sim 10^7\ {\rm y}.
\ee
Here, $\rhoc\sim 10^3$ \Hcc is the gas density of a GMC, $\rhom\sim 200$ \Hcc is the averaged density of molecular gas, $\Rc\sim 30$ pc is the mean cloud radius, and $\sigma\sim 7$ \kms is the ISM turbulent velocity.
On the other hand the cloud-bow head collision time $\tb$ is given by
\be
\tb \sim {\rhoc \Rc \over \rhom (\V-\Vp)\sin\ p} \sim 10^6\ {\rm y},
\ee
where $\V\sim 200$ \kms is the rotation velocity of the galaxy, $\Vp = R \Op \sim 40$ \kms is the pattern velocity at $R\sim 2$ kpc with the pattern speed $\Op\sim 20$ \kms kpc$^{-1}$, and $p\sim 30\deg-80\deg$ is the pitch angle of the spiral arm or the gaseous bar in M83.
We may therefore conclude that the SF rate $1/\tb\sim 10^{-6}$ y$^{-1}$ due to the galactic shock wave assisted by the MBS+GCH dual compression is an order of magnitude higher than the SF rate $1/\tc \sim 10^{-7}$ y$^{-1}$ due to cloud-cloud collisions.
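As a check of the order-of-magnitude claim, the two timescales can be evaluated with the fiducial values quoted above (our Python sketch; the pitch angle of $55\deg$ is taken as a mid-range value).

```python
import math

PC_KM = 3.086e13   # km per pc
YR_S = 3.156e7     # seconds per year

def cloud_cloud_time_yr(rho_c=1e3, r_c_pc=30.0, rho_m=200.0, sigma_kms=7.0):
    """Cloud-cloud collision time t_c ~ rho_c R_c / (rho_m sigma)."""
    return rho_c * r_c_pc * PC_KM / (rho_m * sigma_kms) / YR_S

def cloud_bow_time_yr(v_kms=200.0, vp_kms=40.0, pitch_deg=55.0,
                      rho_c=1e3, r_c_pc=30.0, rho_m=200.0):
    """Cloud-bow-head collision time with relative speed (V - Vp) sin p."""
    v_rel = (v_kms - vp_kms) * math.sin(math.radians(pitch_deg))
    return rho_c * r_c_pc * PC_KM / (rho_m * v_rel) / YR_S

print(cloud_cloud_time_yr())  # ~2e7 yr
print(cloud_bow_time_yr())    # ~1e6 yr: an order of magnitude shorter
```

The ratio $\tc/\tb \sim (\V-\Vp)\sin p/\sigma \sim 20$ is set by the large streaming speed across the shock relative to the turbulent velocity.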
\subsection{Cometary vs cylindrical types}
Our model calculation showed that the cometary structure is more efficiently produced when the density gradient is sharper toward the down-stream side of the arm and the flow velocity against the bow head is faster. Such conditions may be present in grand-design arms and bars with galactic rotation significantly faster than the pattern speed. In fact, cometary HII regions are generally observed in two-armed interacting spiral galaxies like M51 or in barred spirals like M83 and NGC 1300.
On the other hand, HII regions in flocculent arms such as those of M33 and NGC 2403 tend to show cylindrical or spherical morphology. This may be explained if the arms corotate with the galactic rotation, so that an HII region expands more symmetrically in the arm's center. Similarly, in distant arms in corotation with the galactic rotation, HII regions also tend to have spherical or cylindrical morphology. These facts may be used to measure the corotation radius by observing the directions of HII cones with respect to the arms.
In order to check this idea, we traced round or irregular MBS and GCH outside the corotation radius in regions G to K in figure \ref{zoo-II}, where the big ellipse and long dashed lines indicate the corotation circle of radius $140''$ at position angle $225\deg$ and inclination $24\deg$ (Hirota et al. 2014). Although the tracing was rather inaccurate because of the lack of bright and clear-edged HII regions, we found a number of irregular cometary HII regions and less clear MBS, as indicated by dashed arcs and small ellipses. The fraction of irregular cases is larger than in the inner arms shown in figure \ref{zoo}. This may imply that corotation indeed takes place near the indicated corotation circle.
\section{Summary}
Using optical images of nearby spiral galaxies taken with the HST (NASA APOD), we identified many sets of GCH and MBS in the barred spiral galaxy M83.
We classified HII regions into Types I to III according to their degree of openness into the halo and disk. We argued that the GCH and MBS are general phenomena in galactic shock waves.
The cone-shaped morphology of the GCH is qualitatively explained by a model of an evolved HII sphere expanding in an inhomogeneous ISM with a steep density gradient, and the MBS is understood by bow shock theory. Since in actual galactic conditions the GCH and MBS are coupled with each other, dual-side compression of the gas at the MBS/GCH heads makes the SFR higher by a factor of $\sim 10$ than the SFR by cloud-cloud collisions.
We have further examined high-resolution images of other galaxies from the HST and the Subaru Telescope, and found that GCH and MBS are general phenomena in grand-design spiral arms and/or bars. A full atlas of GCH/MBS in nearby galaxies will be presented in a separate paper.
\vskip 5mm
{\bf Acknowledgements}
The optical images of M83 were reproduced from the web sites of STScI at http://www.stsci.edu/hst/wfc3/ and NASA at https://apod.nasa.gov/apod/.
\section{Introduction}\label{sec:introduction}
So far the Standard Model (SM) has been remarkably successful in explaining the
data from the modern hadron colliders like the Tevatron at Fermilab or the Large Hadron
Collider (LHC) at CERN. We have now very strong indications that the only missing piece
of the SM, the Higgs boson, has been discovered \cite{HiggsAtlas,HiggsCMS}. On the other hand,
there does not seem to be any stand-out signal of any of the beyond the
Standard Model (BSM) scenarios. There exists a wide variety of scenarios with
specific signatures to validate them. Some of these scenarios have
overlapping signatures. Therefore, even if one finds a new signal, it may
require a lot of work to establish the connection of the signal with a
specific model. This suggests that, apart from the model-specific analysis of the data, it will
also be useful to look for BSM scenarios in model independent ways. One
method to do so is by constructing suitable effective Lagrangians. These effective
Lagrangians have terms that are consistent
with some of the aspects of the SM, in particular symmetries, but contain
higher dimensional (non-renormalizable) operators. Because of the non-renormalizable
nature of the extra terms, these effective Lagrangians can only be
used in a restricted domain of the energy scale. The particle content of
these effective Lagrangian models is the same as that of the SM.
The extra terms in the Lagrangian can introduce new interactions, or they
can modify the existing interactions of some of the particles. In particular,
we note that we can have modifications of the $Wtb$, $tth$ and $WWh$ interactions
that can be parametrized as anomalous couplings.
After the discovery of the Higgs boson at the LHC, it is
important to study its various properties. In particular,
one would like to study the production of the Higgs boson via all possible
channels. One such category of channels is the production of a Higgs
boson in association with single top quark. In these processes, there
can be additional particles, apart from a top quark and a Higgs boson.
Some of these processes have been studied within the context of the SM \cite{Maltoni},
and also considering scaled up $tth$ and $WWh$ couplings \cite{Barger:2009ky}.
These processes are similar to the single top-quark production processes.
In this case, a Higgs boson is emitted either from the top quark or the $W$ boson.
Due to the similarity with the single top-quark production processes, one would
expect these processes to contribute significantly to the Higgs boson production
at the LHC. However, as pointed out in Ref. \cite{Maltoni}, for the Higgs boson
mass, $m_h <$ 200 GeV, the cross sections of such processes turn out to be rather
small compared to what is expected from the single top-quark production at the LHC.
At the LHC, for $m_h \sim 100-150$ GeV, the dominant contributions come from the
$t$-channel $W$ exchange process, $ p p \to thj$ and associated production with
a $W$ boson, $p p \to tWh$. The authors of Ref. \cite{Maltoni} demonstrated that
for both of these channels, there is a destructive interference between the
diagrams where the Higgs boson is emitted from the top quark and ones with
the Higgs boson emitted from the $W$ boson. Because of the small cross sections,
these channels are generally not considered as significant to measure the
properties of the Higgs boson. However, inclusion of the anomalous couplings changes
the picture. The cross sections can be significantly enhanced to make these
processes phenomenologically useful. In this paper, we study the effect of
anomalous $Wtb, tth$ and $WWh$ interactions on the cross sections and distributions
of the processes that involve the production of a single top quark in association
with a Higgs boson at the LHC. We find that the enhancement in the cross sections
can be more than a factor of ten for some values of the $Wtb$ and $tth$ anomalous
couplings, and as a result
the associated production of a single top quark with the Higgs boson
can become significant at the LHC. Since the associated production of a
Higgs boson with a top quark is quite suppressed in the SM and, at the same
time, very sensitive to some anomalous couplings, it can provide us with a
new opportunity to probe any new physics model that can generate these
anomalous couplings. Therefore, once observed, these channels can not only
give us useful information about the couplings but also help us to identify
or constrain some new physics models. However, in this paper we shall not
pursue the details of the possible new physics models, rather restrict
ourselves to the study of the effect of the anomalous couplings that can
appear in the $Wtb, tth$ and $WWh$ vertices on the $pp\to thX$ process in
the effective theory framework. Recently, there have also been a few studies
that consider the change in the sign of the $tth$ Yukawa coupling on the
associated production of a single top quark and a Higgs boson
\cite{Biswas:2012bd,Farina:2012xp,Biswas:2013bd}. This change of sign
leads to a constructive interference among the diagrams and thus a
significant increase in the $thj$ and $thbj$ cross sections. It is argued
that this enhancement can be detected at the LHC using various
decay modes of the Higgs boson \cite{Biswas:2012bd,Farina:2012xp,Biswas:2013bd}.
In this paper, we are not only
considering this situation, but general anomalous $tth$ coupling. In
addition, we consider the effect of anomalous $tbW$ and $WWh$ couplings
also. We also consider a few signatures of the single top
quark and a Higgs boson production and show that these signatures could
be visible at the LHC.
The organization of the paper is as follows. In section 2, we describe the
processes under consideration. In section 3, we discuss the anomalous $Wtb, tth$
and $WWh$ couplings. In section 4, we present the numerical results. In section 5,
we discuss the possibility of observing these processes at the LHC. In the last
section, we present our conclusions.
\section{Processes}
\begin{figure}[!h]
\bc
\subfigure[]{\includegraphics [angle=0,width=.15\linewidth] {thq.eps}}
\subfigure[]{\includegraphics [angle=0,width=.24\linewidth]{thb.eps}}
\subfigure[]{\includegraphics [angle=0,width=.24\linewidth]{thW.eps}}\\
\subfigure[]{\includegraphics [angle=0,width=.15\linewidth]{thqj.eps}}\\
\subfigure[]{\includegraphics [angle=0,width=.24\linewidth]{thbj.eps}}
\subfigure[]{\includegraphics [angle=0,width=.24\linewidth]{thWj.eps}}
\subfigure[]{\includegraphics [angle=0,width=.24\linewidth]{thWb.eps}}
\ec
\caption{Representative Feynman diagrams for the processes listed in Eqs.~\ref{eq:thj}--\ref{eq:thwb}.
\label{fig:total}}
\end{figure}
In this section, we describe those processes for the production of a Higgs boson where it is
produced in association with a single top quark. In our analysis we include the tree-level
leading order and the subleading order processes ({\it i.e.,} processes with an extra jet)
that have significant cross sections. The leading-order processes are the following:
\begin{eqnarray}
p ~p & \to & t ~h ~j ~X, \label{eq:thj}\\
p ~p & \to & t ~h ~b ~X, \label{eq:thb}\\
p ~p & \to & t ~h ~W ~X \label{eq:thw}\end{eqnarray}
and the processes with an extra jet are,
\ba
p ~p & \to & t ~h ~j ~j ~X, \label{eq:thjj}\\
p ~p & \to & t ~h ~j ~b ~X, \label{eq:thjb}\\
p ~p & \to & t ~h ~W ~j ~X, \label{eq:thwj}\\
p ~p & \to & t ~h ~W ~b ~X. \label{eq:thwb}
\ea
Here `$j$' represents a jet from a light quark (excluding bottom quark) or a gluon.
Representative parton level diagrams are displayed in Fig. \ref{fig:total}.
The leading order processes can be classified into three categories:
\begin{enumerate}
\item process with $W$ boson in $t$-channel, $p p \to t h j$,
\item process with $W$ boson in $s$-channel, $p p \to t h b$ and
\item process with $W$ boson in the final state, $p p \to t h W$.
\end{enumerate}
As we shall see, the $t$-channel process has the largest cross section, while the $s$-channel
process has the smallest cross section. The subleading diagrams can be obtained by
adding an extra jet (either light or $b$-jet) to these three processes. Some of these
subleading processes can have cross sections larger than the leading-order
processes, especially the $s$-channel leading-order process. All the above processes
contain one $tbW$ vertex and one $tth$ or $WWh$ vertex. That is why we study the
effect of anomalous couplings in these vertices on the cross sections.
Although subleading processes can have relatively significant
cross sections, one has to be careful while computing their contribution
at the matrix-element level. The extra jets can be soft and thus lead to infrared divergences.
To avoid the soft-jet contribution, one has to set a reasonably large $p_T$ cut for them.
Apart from this, there is also the possibility of over-counting. For example,
in the case of the process $ p p \to t h j j $, the jet pair can come from an on-shell $W$ decay,
making it a $ p p \to t h W $ process. Hence, to estimate the cross section of this process, we
do not allow any on-shell $W$. Similarly, for the process $ p p \to t h W b $, the $bW$ pair
can come from the decay of an on-shell top quark. However, in that case the actual process
would be $pp \to tth$, which has a much larger cross section than $th$ production.
To avoid such a situation, in our calculation we allow only one of the top quarks to go on-shell.
\section{Anomalous Interactions}
As we discussed above, the processes under consideration have three
electroweak vertices: $tbW$, $tth$, and $WWh$. (Since the $Wqq^\prime$ vertex,
with $q$ and $q^\prime$ being light quarks, is severely constrained,
we do not include the possibility of this vertex being anomalous.)
We consider the general modification of these vertices due to BSM interactions.
The possible general structure of these vertices has been extensively
discussed in the literature \cite{AguilarSaavedra:2008zc,Saavedra1,Hagiwara:1993ck,
Barger:2003rs,Whisnant:1994fh}. One parametrizes the effect of heavy BSM physics
by introducing the most general independent set of higher dimensional operators
that satisfies the gauge symmetries of the SM. However, some of these terms generally
reduce to simpler and more familiar forms when relations such as the equations
of motion of the fields are used. We will use these simpler forms for our
calculations.
\subsection*{Anomalous Couplings in the $tbW$ Vertex}
In the SM, the $tbW$ coupling is of the {\em V-A} type; only left-handed
fermion fields couple to the $W$ boson, so only a
left-handed top quark can decay into a bottom quark and a $W$ boson. However,
BSM physics can generate several other possible $tbW$ couplings.
One can write down the most general $tbW$ interaction
that includes corrections from dimension-six operators \cite{AguilarSaavedra:2008zc},
\ba
\mc L_{tbW} = \frac{g}{\sqrt2}\bar b\left[\g^\m\left(f_{1L}P_{L} + f_{1R}P_{R}\right) W^-_\m
+\frac{\s^{\m\n}}{m_W}(f_{2L}P_{L} + f_{2R}P_{R}) \left(\pr_{\n}W^-_\m\right)\right]t + H.c.,\label{eq:tbw_lag}
\ea
where, in general, $f_{iL/R}$'s are complex dimensionless parameters. Also
$P_{L,R} = {1 \over 2} (1 \mp \gamma_{5})$. In the SM, $f_{1L} = V_{tb} \approx 1$
while $f_{1R} = f_{2L} = f_{2R} = 0$. In our analysis, we assume the $f_{iL/R}$'s
to be real for simplicity.
Both recent LHC and Tevatron data put bounds on these parameters. So far,
the Tevatron bounds are more stringent than those from the LHC \cite{Saavedra:LHC_Tevatron}.
The Tevatron bounds are roughly
\ba
0.8 \lesssim &f_{1L} & \lesssim 1.2\,,\nn\\
-0.5 \lesssim &f_{1R}& \lesssim 0.5\,,\\
-0.2 \lesssim &f_{2L/R} & \lesssim 0.2\,.\nn
\ea
Notice that these bounds are quite loose; the SM results can therefore receive significant corrections.
We note that there are also bounds on these parameters from top-quark decays \cite{Drobnak:2010ej},
which are, however, not more stringent.
\subsection*{Anomalous Couplings in the $tth$ Vertex}
In the SM, the top quark couples with the Higgs boson via the Yukawa coupling. In the effective theory,
the most general vertex for $tth$ interaction can be parametrized as \cite{Saavedra1},
\ba
\mc L_{\bar tth} = -\frac{m_t}{v}\bar t\left[\left(1+ y_t^V\right) + i y_t^A \g_5\right]t h.\label{eq:tth_lag}
\ea
In the SM, $y_t^V = y_t^A = 0$ and the first non-zero contributions to $y_t^V$ and $y_t^A$ come from
dimension-six operators.
So far there is no direct experimental measurement of the top-quark Yukawa couplings.
However, from the production of the Higgs boson at the LHC through the $ g g \to h$
process, one can obtain information about the $tth$ vertex. The recent analyses of
the Higgs boson production and decays generally assume a generic scaling behavior of
the top-quark Yukawa coupling (see, {\it e.g.,} \cite{Falkowski:2013dza}),
\ba
\mc L_{\bar tth} = -C_t\frac{m_t}{v}\bar tt h.
\ea
The coupling $C_t$ can be written in our notation as,
\ba
C_t = y_t^V + 1.
\ea
These analyses indicate that the value of $C_t$ is close to 1.
However, the uncertainty in these estimates
still leaves some freedom for the anomalous coupling in the $tth$ vertex.
From the theoretical side, unitarity constraints allow order one values
for $y_t^V$ and $y_t^A$ \cite{Whisnant:1994fh}. We note that bounds on these
Yukawa couplings have recently been obtained from the production of a
Higgs boson \cite{Nishiwaki:2013cma}. To estimate the observability, we restrict our analysis to the bounds of that study.
\subsection*{Anomalous Couplings in the $WWh$ Vertex}
The new higher-dimensional operators that can contribute to the $WWh$ vertex can be
written as \cite{Hagiwara:1993ck,Barger:2003rs}
\ba
\mc L_{WWh} &=& g^1_{Wh} \left(G_{\m\n}^+W^{-\m} + G_{\m\n}^-W^{+\m} \right)\pr^\n h + g^2_{Wh}\left(G_{\m\n}^-G^{+\m\n}\right) h \nn\\
&&- g^3_{Wh}\frac{m_W^2}{v}\left(W_\m^+W^{-\m}\right)h,\label{eq:WWh_lag}
\ea
where
\ba
G_{\m\n}^\pm = \pr_\m W_\n^\pm - \pr_\n W_\m^\pm \pm i g \left(W^3_\m W^\pm_\n-W^3_\n W^\pm_\m\right).
\ea
The third term in Eq. \ref{eq:WWh_lag} comes from the normalization of the Higgs boson kinetic
term, which gets modified due to higher-dimensional operators. The constraints coming from
the electroweak precision data are \cite{Zhang:2003it},
\ba
-0.16{\rm ~TeV}^{-1} \lesssim & g^1_{Wh} & \lesssim 0.13{\rm ~TeV}^{-1}\label{eq:lmtwwh1}\,,\\
-0.26{\rm ~TeV}^{-1} \lesssim & g^2_{Wh} & \lesssim 0.29{\rm ~TeV}^{-1}\,.\label{eq:lmtwwh2}
\ea
Like the $tth$ couplings, the present Higgs boson data from the LHC favor the
SM values for the $WWh$ couplings. In Ref. \cite{Falkowski:2013dza}, the authors
indicate that the couplings of the Higgs boson to the $W$ boson lie within 20\% of
the SM values.
\section{Results}
\begin{figure}[!t]
\bc
\includegraphics [angle=-90,width=0.50\linewidth]{top_width.ps}
\ec
\caption{Dependence of the top-quark width on the anomalous couplings present in the $tbW$ vertex (defined in Eq. \ref{eq:tbw_lag}) --
$\Dl f_{1L}= f_{1L} - 1$, $f_{1R}$, $f_{2L}$ and $f_{2R}$.}\label{fig:top_width}
\end{figure}
The main decay mode of the top quark is $t\to bW$ with a branching ratio of almost 99\%.
Therefore, the presence of anomalous couplings in the $tbW$ vertex can modify the top quark
width significantly. With anomalous couplings, the top quark width is
\begin{eqnarray}
\Gamma(t \to b\;W) &=& \frac{G_F}{8\pi\sqrt{2}}\; m_t^3\;(1-x^2)
\Big[ (1+x^2-2x^4)(f_{1L}^2+f_{1R}^2) \nonumber \\
&& +\; (2-x^2-x^4)(f_{2L}^2+f_{2R}^2)
+ 6x(1-x^2)(f_{1L}f_{2R}+f_{2L}f_{1R})\Big],
\end{eqnarray}
where $x= M_W/m_t$.
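As a quick numerical cross-check of this formula, the sketch below evaluates it in Python. The input values of $G_F$, $m_t$ and $M_W$ are illustrative choices (not necessarily those used for our plots); at the SM point the formula reproduces the familiar leading-order width of roughly 1.5 GeV, and it shows the linear sensitivity to $f_{2R}$ versus the quadratic sensitivity to $f_{1R}$ discussed below.

```python
import math

# LO top width with anomalous tbW couplings (formula above); GeV units.
# G_F, m_t, M_W below are illustrative inputs, not the paper's exact choices.
G_F, m_t, M_W = 1.166e-5, 173.0, 80.4

def top_width(f1L=1.0, f1R=0.0, f2L=0.0, f2R=0.0):
    x = M_W / m_t
    bracket = ((1 + x**2 - 2*x**4) * (f1L**2 + f1R**2)
               + (2 - x**2 - x**4) * (f2L**2 + f2R**2)
               + 6 * x * (1 - x**2) * (f1L*f2R + f2L*f1R))
    return G_F * m_t**3 / (8 * math.pi * math.sqrt(2)) * (1 - x**2) * bracket

print(top_width())                        # SM point: roughly 1.5 GeV at LO
print(top_width(f2R=0.2) / top_width())   # large shift: linear f1L*f2R term
print(top_width(f1R=0.2) / top_width())   # small shift: f1R enters only squared
```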
In Fig. \ref{fig:top_width}, we show the dependence of the decay width of the top quark
on $\Dl f_{1L}= f_{1L} - 1$, $f_{1R}$, $f_{2L}$ and $f_{2R}$. We see that the top quark width
can change by about $\pm 50 \%$ on varying the values of $f_{1L}$ or $f_{2R}$. However, the width
is relatively immune to the change in the values of $f_{2L}$ or $f_{1R}$. We can understand this
as follows. Since $f_{1L} = 1 + \Delta$ while the other couplings are of order $\Delta$, we have
\begin{equation}
f_{1L}^2 \simeq 1 + 2 \Delta; \quad f_{1R}^2 \sim f_{2L}^2 \sim f_{2R}^2 \sim f_{2L}f_{1R} \sim \Delta^2 ; \quad
{\rm and} \quad f_{1L}f_{2R} \simeq \Delta.
\end{equation}
This explains the strong dependence of the decay width on $f_{1L}$ and $f_{2R}$. The weak
dependence of the width on the couplings $f_{2L}$ and $f_{1R}$ is essentially
due to the absence of the terms proportional to $f_{1L}f_{1R}$ and $f_{1L}f_{2L}$.
One needs to include the modified widths when considering the decays of the top quark.
\begin{figure}[!h]
\bc
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {3BFS-f1L.ps}\label{fig:3BFS-f1L}}
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {3BFS-f1R.ps}\label{fig:3BFS-f1R}}\\
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {3BFS-f2L.ps}\label{fig:3BFS-f2L}}
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {3BFS-f2R.ps}\label{fig:3BFS-f2R}}\\
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {3BFS-ytR.ps}\label{fig:3BFS-ytR}}
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {3BFS-ytI.ps}\label{fig:3BFS-ytI}}
\ec
\caption{Dependence of the leading-order partonic cross sections on $f_{1L}$, $f_{1R}$, $f_{2L}$, $f_{2R}$, $y_t^V$ and $y_t^A$.
Here the individual contributions of the three separate subprocesses are labeled by the final-state particles. Eq. \ref{eq:cuts}
shows the cuts used on the final-state partons. }
\label{fig:3BFS}
\end{figure}
\begin{figure}[!h]
\bc
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {4BFS-f1L.ps}\label{fig:4BFS-f1L}}
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {4BFS-f1R.ps}\label{fig:4BFS-f1R}}\\
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {4BFS-f2L.ps}\label{fig:4BFS-f2L}}
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {4BFS-f2R.ps}\label{fig:4BFS-f2R}}\\
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {4BFS-ytR.ps}\label{fig:4BFS-ytR}}
\subfigure[]{\includegraphics [angle=-90,width=.49\linewidth] {4BFS-ytI.ps}\label{fig:4BFS-ytI}}
\ec
\caption{Dependence of the partonic cross sections for processes with 4 particles in the final state on
$f_{1L}$, $f_{1R}$, $f_{2L}$, $f_{2R}$, $y_t^V$ and $y_t^A$.
The individual contributions of the separate subprocesses are labeled by the final-state particles.
Here $j$ stands for a light jet. Eq. \ref{eq:cuts}
shows the cuts used on the final-state partons. }
\label{fig:4BFS}
\end{figure}
To compute the cross sections for the processes involved, we first implement the new couplings in \textsc{FeynRules} \cite{Christensen:2008py} and then use \textsc{MadGraph5} \cite{Alwall:2011uj}
with LO CTEQ6L1 parton distribution functions \cite{Pumplin:2002vw}. We have used the
following set of kinematic cuts on the final state partons,
\ba
p_T^J > 30 \; \rm{GeV},\: |\eta_J| < 5.0,\; \Dl R(J_1,J_2) = \sqrt{\left(\Dl \et_{J_1,J_2}\right)^2 +\left(\Dl \ph_{J_1,J_2}\right)^2} > 0.4 \label{eq:cuts}
\ea
where $J$ denotes either a light jet or a $b$-jet.
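The separation $\Dl R$ in Eq. \ref{eq:cuts} is the usual cone distance in the $(\eta,\phi)$ plane. A minimal sketch of how these generation-level cuts can be applied to parton four-momenta (the helper names are ours, purely illustrative):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Cone separation sqrt(d_eta^2 + d_phi^2) between two objects."""
    deta = eta1 - eta2
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                  # wrap azimuthal difference into [0, pi]
        dphi = 2 * math.pi - dphi
    return math.hypot(deta, dphi)

def passes_cuts(jet, accepted, pt_min=30.0, eta_max=5.0, dr_min=0.4):
    """Check one parton jet (pt, eta, phi) against the cuts of the text,
    requiring isolation from every already-accepted (pt, eta, phi) object."""
    pt, eta, phi = jet
    if pt <= pt_min or abs(eta) >= eta_max:
        return False
    return all(delta_r(eta, phi, e2, p2) > dr_min for (_, e2, p2) in accepted)
```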
We find that, unlike for the $tbW$ and $tth$ anomalous couplings, the associated production of a
single top quark with a Higgs boson is rather insensitive to variations of the $WWh$ anomalous
couplings. If one varies $g^i_{Wh}\ (i=1,2)$ within the ranges shown in Eqs. \ref{eq:lmtwwh1}
and \ref{eq:lmtwwh2}, the cross sections for the different processes vary only marginally,
by about 10--20\%; the production of $thb$ is an exception and can increase by about 60\%. The variation
of $g^3_{Wh}$ has very little impact on the cross sections.
In Fig. \ref{fig:3BFS}, we show the dependence of the cross sections of the processes $thj,thb$ and $thW$
on $f_{1L}$, $f_{1R}$, $f_{2L}$, $f_{2R}$, $y_t^V$, and $y_t^A$. The SM value of the
cross section for the $thj$ process is about 60 fb. The variation of $f_{1L}$ and $f_{2L}$ does not
increase the cross section much. However, at the edge of the allowed range of $f_{2R}$, the cross section can
double. There is almost no change in the cross section on varying $f_{1R}$. This overall behavior
closely mirrors that of the top-quark width and can be understood similarly. However,
there is a strong dependence on the Yukawa couplings. As we shall see below, there exist allowed regions
of the parameter space where the cross section increases more than tenfold and approaches 600--800 fb. The
cross sections of the other two processes, $thb$ and $thW$, do not depend significantly on the
anomalous $tbW$ couplings. However, the cross section of the $thb$ process can almost double within the
allowed range of the Yukawa couplings. In Figs. \ref{fig:3BFS-ytR} and \ref{fig:3BFS-ytI}, we can see
the destructive interference between the $WWh$ and $tth$ couplings in the $thj$ production process \cite{Maltoni}.
In Fig. \ref{fig:4BFS}, we show the dependence of the cross sections of the processes $thjj,thbj,thWb$ and $thWj$
on $f_{1L}$, $f_{1R}$, $f_{2L}$, $f_{2R}$, $y_t^V$, and $y_t^A$. The behavior of the $thjj$ and $thbj$ processes
is similar to what we found above. The variation of $f_{1L}$ and $f_{2L}$ changes the cross sections only marginally;
the variation of $f_{1R}$ has almost no impact on the cross sections. However, at the edge of the allowed
range of $f_{2R}$, the cross sections can double. The cross sections of the processes $thWb$ and $thWj$
have very weak dependence on the anomalous $tbW$ coupling parameters.
However, as earlier, the cross sections have strong dependence on the Yukawa couplings.
The plots in Fig. \ref{fig:3BFS} and Fig. \ref{fig:4BFS} show the variation with respect to
one parameter at a time, with the other parameters kept at their SM values. Of course,
one can choose values of all the parameters away from their SM values, which can give larger
cross sections. We have chosen a set of such values that favor larger cross sections.
This set of values and the corresponding cross sections are given in Table \ref{tab:extremepars}.
(Some recent analyses indicate that the data actually disfavors some of these parameter
points \cite{Nishiwaki:2013cma}. We display these points
in the table for illustration only.) The set of
parameters $\mc P_0$ corresponds to the SM values, for which the cross sections of the
processes add up to about 140 fb. However, there exist parameter sets where the cross sections
can add up to more than $1$ pb. For most of the listed processes, the cross sections can
increase as much as fifteen times or more. With these values of the cross sections, it may
be possible to isolate the production of the Higgs boson in association with a top
quark from the background and observe it at the LHC. We note that anomalous couplings
will also change the angular distributions of the jet and the Higgs boson. In particular,
we find that anomalous $tbW$ coupling enhances the cross section more in the central-rapidity
region of the jet and the Higgs boson for the $thj$ production.
\begin{table}[!h]
\bc
\begin{tabular}{|c|rrrrrrr|}\hline
Param.& $\s_{pp\to thj}$ &$\s_{pp\to thb}$&$\s_{pp\to thW}$&$\s_{pp\to thjj}$&$\s_{pp\to thbj}$&$\s_{pp\to thWj}$&$\s_{pp\to thWb}$\\
Set&(fb)&(fb)&(fb)&(fb)&(fb)&(fb)&(fb)\\\hline\hline
$\mc P_0$& 59.6& 2.1& 17.1& 9.6& 20.1& 12.7& 18.4\\
$\mc P_1$&65.1 & 2.5& 16.9& 10.7 & 22.4 & 12.4 & 18.4\\
$\mc P_2$&69.2 & 3.5& 19.3& 13.1& 24.2& 14.0& 19.1\\
$\mc P_3$&57.3&2.0&17.1& 9.5& 19.9& 12.7& 18.4\\
$\mc P_4$& 180.1& 2.7& 51.6& 35.1& 72.4& 35.8& 18.3\\
$\mc P_5$& 382.9& 3.2& 105.4& 69.6& 144.3& 73.0& 30.3\\
$\mc P_6$& 472.0& 3.4& 116.7& 86.7& 153.3& 79.9& 32.9\\
$\mc P_7$& 567.0& 53.0& 129.9& 169.0& 246.1& 95.3& 93.5\\
$\mc P_8$& 602.3& 29.4& 250.7& 163.8& 263.3& 184.2& 117.1\\
$\mc P_9$& 875.2& 64.4& 229.8& 241.5& 363.5& 167.0& 107.4\\
\hline
\end{tabular}
\vskip 0.2in
\begin{tabular}{|c|rrrrrrrrr|}\hline
Param. Set&$f_{1L}$ &$f_{1R}$&$f_{2L}$&$f_{2R}$&$y_{t}^V$&$y_t^A$
&$g_{Wh}^1$&$g_{Wh}^2$&$g_{Wh}^3$\\
\hline\hline
$\mc P_0$& 1.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0\\
$\mc P_1$& 1.0&0.0&0.0&0.0&0.0&0.0&0.1&0.0&0.0\\
$\mc P_2$& 1.0&0.0&0.0&0.0&0.0&0.0&0.0&0.3&0.0\\
$\mc P_3$& 1.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0&0.2\\
$\mc P_4$& 0.8&0.2&0.2&0.2&-1.0&-1.0&0.0&0.0&0.0\\
$\mc P_5$& 1.2&0.2&0.2&0.2&-1.0&-1.0&0.0&0.0&0.0\\
$\mc P_6$& 1.2&0.2&0.2&-0.2&-1.0&-1.0&0.0&0.0&0.0\\
$\mc P_7$& 0.8&0.2&0.2&0.2&1.0&1.0&0.0&0.0&0.0\\
$\mc P_8$& 1.2&0.2&0.2&-0.2&1.0&1.0&0.0&0.0&0.0\\
$\mc P_9$& 1.2&0.2&0.2&0.2&1.0&1.0&0.0&0.0&0.0\\\hline
\end{tabular}
\caption{\label{tab:extremepars}Cross sections for different single top quark and Higgs boson associated production processes for nine different
choices of anomalous coupling parameters, denoted by $\mc P_{i=1,\ldots,9}$ (specified in the lower table). The set $\mc P_0$ corresponds to
the SM couplings, while in the sets $\mc P_{1,2,3}$ only one of the $g_{Wh}^{1,2,3}$ is varied.}
\ec
\end{table}
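The combined rates and enhancement factors quoted in the text can be read off directly from the first and last rows of Table \ref{tab:extremepars}; a short sketch with the values copied from the table:

```python
# Cross sections (fb) copied from Table 1 for the seven processes,
# ordered as: thj, thb, thW, thjj, thbj, thWj, thWb.
sigma = {
    "P0": [59.6, 2.1, 17.1, 9.6, 20.1, 12.7, 18.4],        # SM point
    "P9": [875.2, 64.4, 229.8, 241.5, 363.5, 167.0, 107.4],
}

total_sm = sum(sigma["P0"])        # combined SM cross section, ~140 fb
total_p9 = sum(sigma["P9"])        # combined P9 cross section, above 2 pb
enhancement = [a / s for a, s in zip(sigma["P9"], sigma["P0"])]

print(total_sm, total_p9)
print([round(r, 1) for r in enhancement])   # per-process enhancement factors
```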
\section{Observability}
We now consider the possible signatures of these processes and their dominant
backgrounds, and show that for some of the processes the backgrounds can be manageable.
For $m_h \approx 125$ GeV, the primary decay mode of the Higgs boson
is $h \to b {\bar b}$. To observe any signature of the processes, the
accompanying top quark needs to decay semi-leptonically.
If it decays into jets, the QCD backgrounds from various multijet events would overwhelm the signal.
A very simple signature for all the processes would be ``an isolated $e/\mu$ + jets'', where the top quark decays semi-leptonically and the other particles are either jets or decay into
jets. Such a signature would not be viable due to the very large background
from processes such as ``$W$ + jets'' and ``$t$ + jets''.
However, since most of the jets in the signal
processes are $b$-jets, we can use the tagging of the $b$-jets
to reduce the backgrounds. In particular, for the signature --
``an isolated $e/\mu$ + 3 $b$-jets + light jets'' \cite{Farina:2012xp}, all of the processes under consideration can contribute. To isolate different signal processes, one has to look for other signatures. For example, a signature specific to $thb$ and $thbj$ is ``an isolated $e/\mu$ + 4 $b$-jets''.
Similarly, ``2 isolated $e/\mu$ + 3 $b$-jets'' can come from the $W$ boson associated
productions {\it i.e.}, $thW$, $thWj$ and $thWb$ when the $W$ boson also decays into leptons.
Since there is an extra $b$-quark in the $thWb$ production, one can also consider
``2 isolated $e/\mu$ + 4 $b$-jets'' to isolate this process. In this paper, we investigate some of these signatures and the corresponding backgrounds in detail and estimate the statistical significance of the signal over background for each of these signatures.
For the signal we consider three cases with three different sets of anomalous couplings consistent with the currently available bounds.
\begin{itemize}
\item Case 1: we consider the maximally allowed anomalous $tth$ couplings only \cite{Nishiwaki:2013cma}:
$f_{1L} = 1.0$, $f_{1R} = f_{2L} = f_{2R} = 0$, $y_{t}^V = -1.5$, $y_t^A = 0.5$.
\item
Case 2: we consider almost maximally allowed anomalous $tbW$ couplings only:
$f_{1L} = 1.2$, $f_{1R} = f_{2L} = 0$, $f_{2R} = 0.2$, $y_{t}^V = 0$, $y_t^A = 0$.
\item
Case 3: we consider the combination of the above two cases:
$f_{1L} = 1.2$, $f_{1R} = f_{2L} = 0$, $f_{2R} = 0.2$, $y_{t}^V = -1.5$, $y_t^A = 0.5$.
\end{itemize}
Like the signal, we generate events for the potentially significant background processes
(both irreducible and reducible) at the parton level with \textsc{MadGraph5}. When a background process also includes $tbW$ and/or $tth$ vertices, we compute it separately for the three cases mentioned above. Since this is a
parton-level study, it is important to include appropriate
smearing of the parton energies to simulate the energy resolution of a jet.
We use the following resolution function,
$$
{\Delta E \over E} = {a \over E} + {b \over \sqrt{E}} + c.
$$
For a parton jet, we take $a = 4.0, b = 0.5, c = 0.03$. We also smear
the energy of an electron/muon with $a = 0.25, b = 0.1, c = 0.007$. Here, $E$ is in
units of GeV. We then
construct the smeared four-momenta of the particles using this smeared energy.
We have taken the efficiency
of identifying a $b$-jet as $60\%$. For the reducible backgrounds, we consider the possibility of a light jet being mistagged as a $b$-jet. For this, the mistagging efficiency for a charm quark is taken as $10\%$, and for any other quark/gluon it is $1\%$.
The choice of smearing parameters and the tagging and mistagging efficiencies
are more or less consistent with the ATLAS experiment.
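A minimal sketch of this smearing and tagging bookkeeping is given below. The text specifies the resolution function and its parameters but not the smearing distribution; the Gaussian form here is our assumption.

```python
import math, random

def resolution(E, a, b, c):
    """Fractional energy resolution Delta E / E = a/E + b/sqrt(E) + c."""
    return a / E + b / math.sqrt(E) + c

def smear_energy(E, a=4.0, b=0.5, c=0.03, rng=random):
    """Smear a parton-level energy (E in GeV). Defaults are the jet
    parameters of the text; for an electron/muon use a=0.25, b=0.1,
    c=0.007. Gaussian smearing is an assumption -- the text does not
    specify the distribution."""
    sigma = resolution(E, a, b, c) * E
    return max(rng.gauss(E, sigma), 0.0)

# b-tagging bookkeeping: with a flat 60% tagging efficiency, the chance
# that all three b-jets of a 3b signature are tagged is
eps_b = 0.60
p_three_tags = eps_b ** 3   # = 0.216
```

For example, a 100 GeV jet has a fractional resolution of $4/100 + 0.5/10 + 0.03 = 0.12$, i.e. a 12 GeV spread.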
In Table \ref{tab:l3bj}, we display the results for ``an isolated $e/\mu$
+ 3 $b$-jets + a light jet'' which is a signature for the $pp \to thj$ process.
For the backgrounds (here and below), we consider only the significant ones.
For all the cases, we apply the following generic cuts:
\ba
p_T^{b,\ell} > 20 \; {\rm GeV},\: |\eta_{b, \ell}| < 2.5,\; p_T^j > 25 \; {\rm GeV},\: |\eta_j| < 4.5,\; \Dl R(J/\ell,J/\ell) > 0.4 \label{eq:cuts1}.
\ea
In addition, we require $|M(bb) - M_h| < 15\;$ GeV for at least one $b$-jet pair.
In Cases 1 and 3, we also require the
light jet to be forward, i.e., $ |\eta_j| > 2.5$. We also require a minimum
$M(jb)$ for all pairs; its value for Cases 1, 2 and 3 is 100 GeV, 50 GeV and 90 GeV,
respectively. For Case 2 in particular, where the background is relatively larger and the signal smaller compared to the other two cases, we also require $M(jbb) > 220$
GeV for all combinations and $M(\ell jb) > 290$ GeV for the highest-$p_T$ $b$-jet only.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|r r|rrrrrrr|rr|} \hline
& \multicolumn{2}{c|}{Signal} & \multicolumn{7}{c|}{Backgrounds} & \multicolumn{2}{c|}{$S/\sqrt{B}$} \\ \cline{2-12}
&SM & Ano. &$tZj$ &$tbbj$ &$Wbbbj$ &$tt$ &$ttj$ &$tbjj$ &$Wbbjj$ &SM &Ano. \\ \hline\hline
Case 1 &46.45 & 536.68 &23.59 &65.39 &11.10 &0.00 &6129.60 &191.81 &92.74 &0.58 &6.65 \\
Case 2 &74.04 & 187.98 &158.87 &139.27 &42.07 &0.00 &16524.10 &748.22 &262.90 &0.55 &1.41 \\
Case 3 &48.91 & 702.35 &107.51 &106.18 &12.28 &15.01 &6436.08 &340.34 &99.89 &0.58 &8.33 \\ \hline
\end{tabular}
\end{center}
\caption{\label{tab:l3bj}Number of events for the signature ``an isolated $e/\mu$
+ 3 $b$-jets + a light jet''
at the 14 TeV LHC with the integrated luminosity of 100 fb$^{-1}$. The cuts and efficiencies are specified in the text.}
\end{table}
We see that with 100 fb$^{-1}$ integrated luminosity, the signal significance for the pure SM is too low to be observed with such a simple kinematical cut-based analysis.\footnote{For low statistics, especially when $S > B$, the ratio $S/\sqrt{B}$ overestimates the signal significance. In that case, one may switch to the quantity $\sqrt{2(S+B)\ln(1+S/B) -2S}$ for the significance estimation \cite{Cowan:2010js}.} Here, a multivariate analysis may improve the statistics. Also, for Case 2, even after the specialized cuts the signal significance is still not as good as in the other two cases.
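The two significance estimators mentioned in the footnote are easy to compare numerically; the sketch below uses generic illustrative numbers, not the table entries.

```python
import math

def naive_significance(S, B):
    """The simple S/sqrt(B) estimator used in the tables."""
    return S / math.sqrt(B)

def asimov_significance(S, B):
    """The footnote's expression sqrt(2(S+B)ln(1+S/B) - 2S),
    cf. Cowan et al."""
    return math.sqrt(2.0 * (S + B) * math.log(1.0 + S / B) - 2.0 * S)

# For S << B the two estimators agree ...
print(naive_significance(10.0, 1000.0), asimov_significance(10.0, 1000.0))
# ... while for S > B (low statistics) S/sqrt(B) overshoots:
print(naive_significance(30.0, 4.0), asimov_significance(30.0, 4.0))
```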
The results indicate that, with this signature, even the maximally allowed anomalous $tbW$ couplings can be probed only after the end of the second LHC run, provided the integrated luminosity is large enough; however, one can put some bounds on the anomalous $tth$ couplings within a year of the LHC restart. As we will see below, there are better signatures to probe these couplings.
In Table \ref{tab:l4bj}, we display the results for ``an isolated $e/\m$ + 4 $b$-jets + a light (forward) jet'' -- a signature for the $thbj$ signal. If we do not include the light jet in the signature, the signal also gets a contribution from the $thb$ process. However, for the values of the anomalous couplings that we consider, the $thb$ process has
a very small cross section, even with the maximal anomalous couplings. Therefore, we do not include its contribution and instead demand the forward light jet in the signature, which helps to reduce the background. For all the cases, we apply the same generic cuts as in the previous case (Eq. \ref{eq:cuts1}).
In addition, we require $|M(bb) - M_h| < 15\;$ GeV, $ |\eta_j| > 2.0$, $M(bb) > 100$ GeV for all
pairs of $b$-jets, and $M(bj) > 150$ GeV for all pairs. For Case 2, we additionally apply the $M(bj)$ cut to all $bj$ pairs except the one involving the smallest-$p_T$ $b$-jet, and require $M(bb) > 120$ GeV.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|rr|rrrrrr|rr|} \hline
& \multicolumn{2}{c|}{Signal} & \multicolumn{6}{c|}{Backgrounds} & \multicolumn{2}{c|}{$S/\sqrt{B}$} \\ \cline{2-11}
& SM & Ano. & $tZbj$ & $tbbbj$ & $ttbb$ & $tth$ & $ttj$ &$tbbjj$ & SM & Ano. \\ \hline\hline
Case 1 &3.26 & 33.53 & 0.21 & 2.32 & 0.23 & 0.03 & 0.03 & 0.07 &1.92 & 19.72 \\
Case 2 &2.60 & 6.86 & 0.69 & 2.41 & 0.71 & 0.46 & 0.00 & 0.02 &1.26 & 3.31 \\
Case 3 &3.26 & 49.52 & 3.41 & 4.88 & 0.00 & 0.08 & 0.03 & 0.05 &1.12 & 17.03 \\ \hline
\end{tabular}
\end{center}
\caption{\label{tab:l4bj}Number of events for the signature ``isolated $e/\m$ + 4$b$-jets + a light (forward) jet''
at the 14 TeV LHC with the integrated luminosity of 100 fb$^{-1}$.
The cuts and efficiencies are specified in the text.
The reducible background $tbbjj$ includes $ttb$.}
\end{table}
Our choice of cuts is not necessarily optimal; rather, it illustrates that
anomalous couplings can show up in the associated production of a single top
quark and a Higgs boson. We see that, to probe the anomalous couplings, this signature is better than the earlier one, as the signal significances are better in all three cases. Because of the larger enhancement of the
cross sections due to the anomalous $tth$ couplings, the signal for the maximal couplings
would be visible within a few months of the restart of the LHC. Even a much smaller enhancement
of the cross section, say smaller by a factor of 5--6, would also show up in the
second run of the LHC. It will, however, take more than a year to see the signal if only the $tbW$
couplings are anomalous. One can also look for other strategies to enhance the significance
in this case.
For example, we find that if we drop the requirement of the light jet being
a forward jet and require a minimum $M(bj)$ for all pairs, then it is possible
to increase the significance to almost 4.
In Table \ref{tab:2l3bj}, we display the results for the signature ``2 isolated $e/\mu$ + 3 $b$-jets'' for the $thW$ process.
If we allowed an extra light jet in the signature, then both $thW$ and $thWj$ would contribute to the signal.
Here, however, for simplicity, we do not demand the extra light jet in the signature and display the results for the $thW$ signal process only.
Like before, we apply the following generic cuts:
\ba
p_T^{b,\ell} > 20 \; {\rm GeV},\: |\eta_{b, \ell}| < 2.5,\; \Dl R(J/\ell,J/\ell) > 0.4 \label{eq:cuts2}.
\ea
In addition, we require $|M(bb) - M_h| < 15\;$ GeV and $M(\ell b) > 180$ GeV for all
pairs of a lepton and a $b$-jet. Since we are now demanding 2 leptons in the final state, a potentially large background can come from ``$Z/\g^*$ + jets'' processes.
However, the requirement of three $b$-tagged jets and the invariant mass
cuts described above make this background small. Moreover, it is possible to almost eliminate the ``$Z$ + jets'' background with suitable cuts on the invariant mass
of the lepton pair. Hence, we do not include this background in our estimation.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|rr|rr|rr|} \hline
& \multicolumn{2}{c|}{Signal} & \multicolumn{2}{c|}{Backgrounds} & \multicolumn{2}{c|}{$S/\sqrt{B}$} \\ \cline{2-7}
& SM & Ano. & $ttb$ & $ttj$ & SM & Ano. \\ \hline\hline
Case 1 & 0.65 & 8.01 & 0.09 & 0.14 & 1.36 & 16.40 \\
Case 2 & 0.65 & 1.06 & 0.00 & 0.14 & 1.74 & 2.80 \\
Case 3 & 0.65 & 11.60 & 0.00 & 0.14 & 1.74 & 30.58 \\ \hline
\end{tabular}
\end{center}
\caption{\label{tab:2l3bj}Number of events for the signature ``2 isolated $e/\mu$ + 3 $b$-jets''
at the 14 TeV LHC with the integrated luminosity of 100 fb$^{-1}$. The cuts and efficiencies are specified in the text.}
\end{table}
We again clearly see that if the $tth$ coupling is anomalous, then within a few months,
and if the $tbW$ coupling is anomalous, then in 2--3 years, the production of a single top quark
with a Higgs and a $W$ boson would be visible. Alternatively, if the signal is not visible, one can put quite strong bounds on the anomalous couplings (especially the $tth$ ones).
Finally, to complete our analysis, we display the results for the signature ``2 isolated $e/\m$ + 4$b$-jets'' for the $thWb$ process
in Table \ref{tab:2l4bj}. The event selection cuts are similar to those of the previous case, except that the minimum cut on $M(\ell b)$
for all lepton--$b$-jet pairs is $M(\ell b)> 160$ GeV in all the cases. We can further reduce the backgrounds
without losing many signal events by making this cut stronger.
Due to the very small cross section of the signal, a very large luminosity will be required
to observe it at the LHC.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|rr|rrrrr|rr|} \hline
& \multicolumn{2}{c|}{Signal} & \multicolumn{5}{c|}{Backgrounds} & \multicolumn{2}{c|}{$S/\sqrt{B}$} \\ \cline{2-10}
& SM & Ano. & $ttbb$ & $tth$ & $ttZ$ & $ttbj$ & $ttjj$ & SM & Ano. \\ \hline \hline
Case 1 & 1.64 & 9.30 & 1.57 & 0.14 & 0.10 & 0.03 & 0.08 & 1.18 & 6.72 \\
Case 2 & 1.64 & 2.90 & 3.74 & 0.72 & 0.11 & 0.06 & 0.13 & 0.75 & 1.33 \\
Case 3 & 1.64 & 13.55 & 3.74 & 0.34 & 0.14 & 0.12 & 0.26 & 0.76 & 6.33 \\ \hline
\end{tabular}
\end{center}
\caption{\label{tab:2l4bj}Number of events for the signature ``2 isolated $e/\m$ + 4$b$-jets''
at the 14 TeV LHC with the integrated luminosity of 1000 fb$^{-1}$. The cuts and efficiencies are specified in the text.}
\end{table}
Two other important decay modes of the Higgs boson, for a mass around
$125$ GeV, are $h \to \tau \tau$ and $h \to W W^{*}$. Both have branching ratios
of a few percent. Here, the decay mode $h \to \tau \tau$ can be useful
with the detection of tau-jets. Then a signature of the type
``isolated lepton + 2 tau-jets + $1/2$ bottom jets'' can be useful. The
backgrounds would be the same as those for the $h \to b {\bar b}$ case, except that we would
have to include the probability of a jet faking a tau-jet instead of a
bottom jet. On a longer time scale, even $h \to W W^{*}$ can be useful if one looks at
``one/two isolated leptons + 2 tau-jets + $1/2$ bottom jets''. A more
detailed study is required for analyzing these signatures.
Before we present our conclusions we would like to note that it may also be possible
to obtain good signal significance by considering a signature that is common to all the signals,
{\it e.g.}, ``$e/\mu$ + 3 $b$-jets + any number of light jets''. As mentioned earlier, in this case all the $pp\to thX$ processes will contribute. However,
due to the higher jet multiplicity, a parton-level estimation of the backgrounds, such as the one performed in this paper, may not be appropriate.
\section{Conclusions}
In this paper, we have investigated the effect of anomalous couplings in the
$tbW$, $tth$ and $WWh$ vertices on the associated production of a single top
quark with a Higgs boson. We have considered the production of
$t h j$, $t h b$, $t h W$, $t h j j$, $t h j b$, $t h W j$, $t h W b$.
Within the SM, these processes have small cross
sections. However, we find that anomalous $Wtb$ and $tth$ couplings can
enhance the cross sections of some of these processes significantly.
The cross sections of these processes are mainly sensitive to the top Yukawa
coupling and to $f_{1L}$, $f_{2R}$. For some combinations of these couplings,
the cross sections of some of the processes can be enhanced
by more than a factor of 10. The combined cross section of the processes
under consideration can be more than $500$ fb. Anomalous $WWh$ couplings
play a less significant role; they mostly enhance the cross sections
by $10 - 20\%$. As a result of this sensitivity, these processes have the
potential to act as probes of the anomalous top Yukawa coupling and of $f_{1L}$, $f_{2R}$.
To verify that these processes can indeed be useful to probe the anomalous couplings at
the LHC, we have also performed a signal versus background
study for three different choices of the couplings, along with the SM case. We have analyzed the following signatures:
a)``an isolated $e/\mu$ + 3 $b$-jets + a light jet'' for the $pp\to thj$ process,
b)``an isolated $e/\mu$ + 4 $b$-jets + a forward light jet'' for the $pp\to thbj$ process,
c) ``2 isolated $e/\mu$ + 3 $b$-jets" for the $pp\to thW$ process
and d) ``2 isolated $e/\mu$ + 4 $b$-jets'' for the $pp\to thWb$ process. Our computation clearly shows that, except for the last
one, it is possible to observe these signatures in the next run of the LHC. The last signature suffers from a small signal cross section and will therefore require a very large luminosity to be observed. In general, we find that for large anomalous top Yukawa
couplings these signatures will be visible within a year, but for purely anomalous $tbW$
couplings it may take longer unless other search strategies are used.
If the signal is not observed, quite strong bounds
on the anomalous couplings can be placed.
Finally, we note that if cross sections larger than the
SM predictions are indeed observed in the future, further analysis would be
required to identify the couplings responsible for
the enhancement, as well as a realistic model that can
account for it.
However, as we saw, there are several different viable signatures, and looking at them together might help in this situation.
\section*{Acknowledgement}
{ AS would like to acknowledge the
partial support available from the Department of Atomic Energy, Government
of India for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-
Chandra Research Institute.}
\begin{spacing}{1}
\section{Introduction}
Pulsations with acoustic modes ($p$-modes) and/or gravity modes ($g$-modes) are observed in a certain number of B-type stars. Some show only $p$-modes ($\beta$ Cep pulsators), some others only $g$-modes (Slowly Pulsating B stars, SPB), whereas some exhibit both types of modes (hybrid pulsators). The driving is due to the $\kappa$-mechanism involving the iron peak elements where they are main contributors to the opacity (the so-called "$Z$-bump"), around log~T=5.3. Stellar models computed with homogeneous abundances and currently available opacity datasets fail to reproduce the observed frequencies. Suitable matches require an increase of opacity in the driving region with ad hoc profiles \citep[e.g., ][]{2004MNRAS.350.1022P, 2017MNRAS.466.2284D}.\\
Efforts have been made by several authors to search for any missing opacity from the atomic physics point of view \citep[e.g., ][]{2016ApJ...823...78T}. Indeed, computed opacities may be underestimated compared to experimental measurements, as shown by \cite{2015Natur.517...56B} for iron at the bottom of the solar convective zone.\\
On the other hand, an increase of opacity in the $Z$-bump can be due to a greater amount of the iron-group elements at this location \citep{2004MNRAS.350.1022P}. Local element accumulations can be created by atomic diffusion, which builds abundance stratifications of the chemicals when macroscopic mixing processes are weak enough. Each chemical species then migrates in the stellar medium, mainly under the combined effect of gravity, which drags the elements towards the stellar center, and radiative acceleration, which pushes them outwards. The latter depends on the ability of each species to absorb photons. As a consequence, individual elements may accumulate or be depleted in specific layers according to the variation of the radiative acceleration acting on them. Radiative accelerations and opacities are closely linked (e.g., their maxima will be at the same location) since both depend on the photon absorption properties of a given ion.
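As a purely illustrative aside (the numbers below are invented, not TGEC output), the competition just described can be caricatured in a few lines: an element is pushed outwards wherever its radiative acceleration exceeds the local gravity, and it tends to accumulate near the layer where $g_{\rm rad}$ drops back below $g$.

```python
# Toy sketch (invented numbers, not TGEC output): the sign of g_rad - g sets
# the local migration direction of a trace element; it accumulates near the
# layer where the net push switches from outward to inward.
def migration(g_rad, g):
    """Return the migration direction implied by the net acceleration."""
    return "outward" if g_rad > g else "inward"

# (log T, local gravity, radiative acceleration on the element), all invented
layers = [(5.0, 3.0e3, 6.0e3), (5.3, 3.2e3, 8.0e3), (5.6, 3.5e3, 1.0e3)]
directions = [migration(g_rad, g) for _, g, g_rad in layers]
print(directions)  # accumulation expected between the last "outward" layer
                   # and the first "inward" one
```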
For B stars, a first study of the impact of diffusion was performed by \cite{2006ASPC..349..201B}, with some simplifying assumptions in the computation of the radiative accelerations. Using a stellar evolution code in which the radiative accelerations are calculated consistently with the time variation of the abundances, \cite{2018A&A...610L..15H} (hereafter HV18) investigated how the opacities could be locally enhanced by an accumulation of Fe and Ni created by atomic diffusion, taking into account fingering mixing and mass-loss. Here, we extend their study with updated values of the Ni radiative accelerations, which are now adjusted to the Opacity Project (OP) data \citep{2005MNRAS.362L...1S}. In addition, since C, N, and O have significant mass fractions, they were included in the diffusion computation to test their effect on the triggering of fingering mixing.\\
We first describe the main features of the code used in the calculations. Then we present our results and discuss them in the light of recent observations of the hybrid pulsating B star $\nu$ Eri, and conclude with ongoing and future work.
\section{Method}
\subsection{Stellar models with atomic diffusion}
The stellar models are computed using an optimised version of the Toulouse-Geneva Evolution Code (TGEC, see HV18 for details), which includes atomic diffusion with radiative accelerations. The following isotopes are treated in detail: H, $^3$He, $^4$He, $^6$Li, $^7$Li, $^9$Be, $^{10}$B, $^{12}$C, $^{13}$C, $^{14}$N, $^{15}$N, $^{16}$O, $^{17}$O, $^{18}$O, $^{20}$Ne, $^{22}$Ne, $^{24}$Mg, $^{25}$Mg, $^{26}$Mg, $^{40}$Ca, and $^{56}$Fe, to which we add nickel, since its contribution to the opacity is important in the oscillation driving layers.
Atomic diffusion is computed using the \cite{1970mtnu.book.....C} formalism, in which the chemicals move with respect to the dominant species, namely hydrogen. We use the diffusion coefficients derived by \cite{1986ApJS...61..177P} and the OPAL2001 equation of state \citep{2002ApJ...576.1064R}. The nuclear reaction rates are from the NACRE compilation \citep{1999AIPC..495..365A}. At each time step of the evolution, the stellar structure is converged with Rosseland mean opacities computed consistently with the current abundance of each element at each layer of the star. We use the Opacity Project (OP) data and codes \citep{2005MNRAS.360..458B}, which are modified to enhance performance.
Dynamical convection zones are computed using the mixing length formalism with a mixing length parameter of 1.8. We assume that they are instantaneously homogenised. Fingering mixing is triggered when inversions of mean molecular weight occur (e.g., through the stratification of the elements) and is treated as a turbulent mixing for which we use the diffusion coefficient derived by \cite{2013ApJ...768...34B}.
\subsection{Computation of radiative accelerations}
Radiative accelerations are computed with a parametrised prescription, the Single-Valued Parameter (SVP) approximation \citep{2002MNRAS.332..891A,2004MNRAS.352.1329L}. Through the use of analytical functions, this approach allows faster computations than interpolation in precomputed tables of radiative accelerations or detailed calculations using monochromatic opacities. The agreement with those methods is very good since the coefficients of the SVP functions are tuned to reproduce the detailed computations.
Still, the current set of SVP parameters is limited to stars with masses between 1 and 5~M$_{\odot}$. For Fe, HV18 noted trends of these parameters with the stellar mass, which allowed them to extrapolate the values to higher masses using a power-law function. We tried to apply the same method to C, N, and O, but the extrapolations were not satisfactory. Instead, we fitted the total radiative accelerations to the OP server values.
At present, nickel is not included in the SVP tables, so we had to estimate all its SVP parameters from scratch. As in HV18, we assume that the dependence of the radiative accelerations on concentration for each Ni ion is the same as for the Fe ion with the same number of electrons. We then fit the total Ni radiative accelerations to those of the OP server. Figure~\ref{g_rad} shows the behaviour of the radiative accelerations of Fe and Ni computed with the SVP method with respect to the local temperature (blue and green solid lines, respectively) for homogeneous solar abundances. The OP radiative acceleration profile for nickel (dashed line) is plotted to illustrate the good agreement between the two methods.\\
In HV18, the radiative accelerations for Ni were obtained through a fit with respect to the Montpellier-Montr\'eal code \citep{1998ApJ...504..539T}, which implements the OPAL opacities \citep[and references therein]{1996ApJ...464..943I}. In comparison, the present radiative acceleration profile for Ni shows a slightly stronger and shallower maximum in the $Z$-bump for our B star model.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{FeNiTGEC_OP.pdf}
\caption{SVP radiative accelerations for Fe (blue solid line) and Ni (green solid line). For comparison, the dashed line shows the radiative accelerations for Ni from the OP server. The dash-dotted line denotes the local gravity.}
\label{g_rad}
\end{figure}
\section{Results}
We computed a 9.5~M$_{\odot}$ model (the approximate mass of $\nu$ Eri) in which only He, C, N, O, Fe, and Ni were allowed to diffuse, the other elements keeping their initial (solar) abundances. The first four elements are considered owing to their significant contribution to the mean molecular weight and therefore their potential influence on the fingering mixing. The evolution is computed from the pre-Main Sequence (which lasts 0.22~Myr), and diffusion is switched on at the beginning of the Main Sequence.\\
The first model was evolved assuming no fingering mixing and no mass-loss. Only atomic diffusion and convection are taken into account in the abundance evolution of the chemicals. Figure~\ref{diffNoTh} shows the abundance and opacity profiles at 0.75~Myr. In the upper panel, we can see the accumulation of Fe (light blue curve) around log~T=5.05 and that of Ni (purple curve) at log~T=5.3, and the corresponding increase of opacity (lower panel, blue solid line). The enhancement is significant in the $Z$-bump because the accumulation zones of Fe and Ni are broadened by convection. Compared to the opacity profile constrained by asteroseismology, ours is quite different in shape and strength. Here the computations were stopped long before the end of the Main Sequence (which lasts $\sim$24~Myr) to avoid too strong a surface abundance of iron.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{diffNoTh.pdf}
\caption{Model of a 9.5 M$_{\odot}$ star including only the effects of atomic diffusion.
Upper panel: mass fractions ratios with respect to initial mass abundance for C, N, O, Fe, Ni (blue, green, red, light blue, and purple solid lines, respectively), He (red dashed line), and H (black dashed line).
Lower panel: Rosseland opacity with homogeneous abundances (green solid line), and profile with stratified concentrations (blue solid line). The grey areas show the convective zones. The total opacity is increased where Fe and Ni accumulate. The dashed line shows the opacity modified according to asteroseismic observations, with the method and values of \cite{2017MNRAS.466.2284D}.}
\label{diffNoTh}
\end{figure}
In Fig.~\ref{diffTh}, the computations include fingering mixing and mass-loss with a rate of $\mathrm{10^{-12}~M_{\odot}/yr}$, typical for this kind of star \citep{1989MNRAS.241..721P}. As a result, enough material is expelled by the mass-loss to avoid too strong surface abundances of iron and nickel, which allows us to evolve the model up to the age of $\nu$ Eri \citep[around 17~Myr,][]{2004MNRAS.350.1022P}. Since the abundances of Fe and Ni grow much faster than He, C, N, and O are depleted, inverse mean-molecular-weight gradients appear, leading to fingering mixing. The Fe and Ni accumulation zones are then broadened, leading to rather flat abundance profiles. The overabundance of Fe being lower than that of Ni, the opacity maximum is shifted towards the location of the maximum contribution of Ni (around log~T=5.4). Again, our opacity curve has a different shape compared to that computed by \cite{2017MNRAS.466.2284D}, with a single broad peak extending from log~T=5.2 to log~T=5.6. In $\nu$~Eri, \cite{2012A&A...539A.143N} derived quasi-solar surface abundances for C, N, O, and Fe. Our results are globally consistent with these observations, although we obtain a slightly overabundant iron compared to the solar value. Ni is strongly overabundant but was not addressed by \cite{2012A&A...539A.143N}.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{diffThVent12.pdf}
\caption{Same as Fig.~\ref{diffNoTh} but with fingering mixing and mass-loss. The abundance profiles of Fe and Ni are nearly flat due to fingering mixing.}
\label{diffTh}
\end{figure}
We now have to check whether such an opacity profile can provide oscillation frequencies consistent with the observations. As \cite{2012A&A...539A.143N} derived solar values for the surface abundances of most of the oscillating B-type stars in their sample, we plan to extend our study to check whether this constraint can be satisfied with our models.
\section{Conclusion}
About fifty years ago, atomic diffusion was first invoked to account for abundance anomalies at the surface of Chemically Peculiar stars. With the advent of asteroseismic data of increasing sensitivity and precision, considering its effects on the inner structure is now mandatory. Here, we show that atomic diffusion, along with the fingering mixing that it naturally triggers, can lead to a strong opacity enhancement in the layers where the oscillations of B-type stars are driven. Further computations are needed to check whether the observed pulsation frequencies can be excited using our models. The surface abundances put additional constraints that our models have to satisfy.
\bibliographystyle{phostproc}
\section{Introduction} \label{intro} This paper is part of a program
aimed at increasing the number of galaxies with accurate distance
determinations via ground-based multicolor CCD photometry of Cepheid
variables.
In the first paper of the series, Capaccioli, Piotto and Bresolin (1991,
CPB) presented new $BV$ photometry of the Cepheids belonging to the
sample of variables already identified by Sandage and Carlson (1988, SC)
in NGC 3109. In that work, CPB derived a new zero point for the
photometric scale. The new data gave a distance modulus
$\mu_0=25.5$~mag, corresponding to a distance $\sim25$\% shorter than that previously measured by SC. The
availability of only two photometric bands ($B$ and $V$) did not allow
CPB to apply the multicolor method discussed by Freedman (1985, F85) to
directly estimate the internal absorption of the Cepheids in NGC~3109.
Actually, Bresolin, Capaccioli, and Piotto (1993, BCP) pointed out that
the internal absorption in some of the fields studied by CPB could be
greater than the adopted average value $E(B-V)=0.04$. Besides improving
the photometry of the Cepheids previously discovered on photographic
plates, CCD data also allow us to obtain more accurate and deeper
photometry in crowded fields, enabling the discovery of new, fainter
Cepheids. The consequent extension of the period--luminosity relation
($PL$) to shorter periods permits a more accurate determination of its
zero point. For instance, in the second paper of this series, Piotto,
Capaccioli and Pellegrini (1994, PCP) found five new Cepheids in Sextans
A and four in Sextans B, in addition to the five variables in Sextans A
and the three in Sextans B already discovered by Sandage and Carlson
(1982, 1985). The new variables extended the faint end of the $PL$
relation of these two galaxies by one magnitude. PCP observed these two
galaxies in three passbands: $B$, $V$ and $I$, again obtaining a new
zero point for the photometric calibration. Using the multiwavelength
photometry method (F85) they derived a true distance modulus, corrected
for interstellar extinction, of $\mu_0=25.71$ for Sextans A and of
$\mu_0=25.63$ for Sextans~B, assuming a distance modulus $\mu_0=18.50$
for the Large Magellanic Cloud (LMC).
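The multiwavelength method just mentioned can be sketched as follows: since the apparent distance modulus in band $\lambda$ is $\mu_\lambda = \mu_0 + R_\lambda\,E(B-V)$, a straight-line fit of the apparent moduli against the extinction coefficients $R_\lambda$ yields $E(B-V)$ as the slope and the true modulus $\mu_0$ as the intercept. The toy fit below uses illustrative extinction coefficients and invented moduli, not the values derived in this paper or in PCP.

```python
# Sketch of the multiwavelength (F85) method:
#   mu_lambda = mu_0 + R_lambda * E(B-V)
# so a linear fit of mu_lambda vs. R_lambda gives E(B-V) (slope) and the
# true, extinction-corrected modulus mu_0 (intercept).
import numpy as np

R = np.array([4.1, 3.1, 2.3, 1.5])   # assumed A_lambda/E(B-V) for B, V, R, I
mu0_true, ebv_true = 25.60, 0.10     # invented "truth" for the demonstration
mu_app = mu0_true + R * ebv_true     # apparent moduli in each band

# degree-1 polyfit returns (slope, intercept)
ebv_fit, mu0_fit = np.polyfit(R, mu_app, 1)
print(f"E(B-V) = {ebv_fit:.3f}, mu_0 = {mu0_fit:.3f}")
```

In practice each $\mu_\lambda$ carries its own uncertainty from the $PL$ fit in that band, so a weighted fit is used; the toy above only illustrates the geometry of the method.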
In the framework of this program, here we present a new determination of
the distance to the galaxy NGC~3109 based on a new $BVRI$ photometry of
the Cepheids located in the field F1 of BCP. The data set and the
reduction procedures are described in Section 2. In Section 3 we
present the already known variables and our 16 new Cepheid candidates on
which the present distance determination of NGC 3109 is based. The
determination of the Cepheid parameters is discussed in Section 4, while
Section 5 is devoted to the determination of the distance to NGC 3109. A
comparison of the PL and period-color relations for the Cepheids in a
few nearby galaxies is presented in Section 6. A brief discussion and a
comparison with previous determinations of the distance to NGC 3109
are given in Section 7.
\section{Observations and Reductions}
This study is a follow up of the work by CPB. In that paper it was
impossible to identify new Cepheids due to the limited time coverage and
small number of data points (six at most). We then collected, in December
1991, March 1992 and February 1993, a series of $B$, $V$, $R$, and
$I$--band images of a $1'.9\times3'.0$ field (Fig. 1) of NGC~3109 coded
as F1 ({\it cf.\/}\ Table~\ref{table1} for the log of the observations).
\begin{figure} \figurenum{1} \epsscale{0.60} \plotone{fig1.ps} \caption{
$B$-band CCD image (2.2m telescope, 40 min exposure) of the field F1 in
NGC~3109. North is up, East to the right. The Cepheids are marked by a
circle. The stars coded as V belong to the original SC sample, while P
refers to the new candidate Cepheids.} \label{fig1} \end{figure}
\singlespace
\begin{deluxetable}{rcccc} \scriptsize \tablenum{1} \tablecolumns{5}
\tablewidth{0pc} \tablecaption{Log book of the observations.
\label{table1}} \tablehead{ \colhead{Date} & \colhead{Band} &
\colhead{Exp. time} & \colhead{Seeing} & \colhead{calib.}\\ \colhead{}
& \colhead{} & \colhead{[min]} & \colhead{FWHM} &
\colhead{err.\tablenotemark{A}}} \startdata 1-12-1991
&I\tablenotemark{B} &24 &$1''.14$&0.05\nl 1-12-1991 &R &12
&$1''.02$&0.05\nl 1-03-1992 &B &40 &$1''.17$&0.03\nl
1-03-1992 &V &20 &$1''.16$&0.03\nl 2-03-1992 &B &40
&$1''.32$&0.03\nl 3-03-1992 &B &40 &$1''.12$&0.03\nl 7-03-1992
&B &40 &$0''.87$&0.03\nl 8-03-1992 &B &40
&$1''.30$&0.03\nl 9-03-1992 &B &40 &$0''.80$&0.03\nl 9-03-1992
&R &18 &$0''.94$&0.07\nl 17-02-1993 &B &30
&$1''.30$&0.03\nl 18-02-1993 &B &30 &$1''.00$&0.03\nl 20-02-1993
&B &25 &$1''.60$&0.03\nl 20-02-1993 &I &13
&$1''.20$&0.05\nl \enddata \tablenotetext{A}{This is the total error on
the calibration (see text for details).} \tablenotetext{B}{This image is
the sum of two images, each with an exposure time of 12 min.}
\end{deluxetable}
The $I$ and $R$ band images of December 1991 were taken with EFOSC2 +
CCD \#17 at the ESO/MPI 2.2m telescope at La Silla in Chile. The CCD
format is $1024\times1024$ pixels of $0''.332$. The set of the $B$, $V$,
and $R$ band images of March 1992 was obtained with the RCA \#8 CCD
camera at the Cassegrain focus of the ESO/MPI 2.2m telescope with a CCD
format of 640$\times$1024 pixels and a pixel size of $0''.175$.
Furthermore, three $B$--band frames were taken with the blue arm of EMMI
equipped with the Tektronix \#31 CCD, and one $I$--band frame with the
red arm of EMMI and the Tektronix \#18 CCD, at the ESO-NTT telescope in
February 1993. The CCD dimensions are $1024\times1024$ pixels, and the
pixel sizes in the blue and red arms are $0''.370$ and $0''.290$,
respectively.
During these observing runs, at least thirteen Landolt's (1983a,b)
standard stars were observed in each color in order to transform the
instrumental photometry into the Landolt standard system.
A large set of bias, dark, and flat-field frames was collected,
particularly during the observing runs at the 2.2m telescope. Indeed,
the columns of the high-resolution ESO-RCA chip \#8 do not have a
constant bias, as it depends on the total intensity of the light falling
on the column. The procedure used for the image cleaning and calibration
is as in PCP and Pellegrini (1993).
The CCD data have been reduced with DAOPHOT and ALLSTAR. Particular
attention has been devoted to tying the stellar photometry to Landolt's
(1983a,b) standard system. For the calibration of the $B$ and $V$ colors
we used the previous CPB calibration, while for the calibration of the
$R$ and $I$ bands we observed 16 Landolt (1983a,b) standard stars in
each color in December 1991 and 13 Landolt standard stars in the $I$
band in February 1993. As in Piotto et al. (1990), in order to obtain
the $R_{1991}$, $I_{1991}$ and $I_{1993}$ calibrated magnitudes we
adopted linear color terms. For the calibration of the $R_{1992}$
magnitudes we applied the same calibration parameters of $R_{1991}$,
after having obtained the linear relation between $R_{1991}$ and
$R_{1992}$. The uncertainty on the calibration of the $B$ and $V$
photometry is $0.03$~mag. This value represents the combination of the
error in the CPB calibration and of the error on the zero point
difference between our photometry and that of CPB. For the $R$ and $I$
bands we obtained a zero point error in the calibration equation of 0.01
mag. However, the largest error source in the calibration procedure
comes from the transformation of the zero point of the
point-spread-function (PSF) fitting photometry of NGC~3109 stars into
the standard star aperture photometry zero point (CPB). In our frames
all of the stars are distributed over a very inhomogeneous background,
making the aperture photometry quite uncertain. For the $I$ band we have
two independent calibrations (one from the 2.2m telescope and one from
the NTT telescope). The average zero point differences from night to
night were of 0.05 mag. The same (average) zero point calibration has
been adopted for the two $I$ frames. In $R$ we could not use the same
method, since the calibration equation was available for one night only.
For $R_{1991}$ we obtained an error of $0.05$~mag on the difference
between the aperture photometry and the PSF fitting photometry. Finally,
we obtained an uncertainty of $0.07$~mag on the calibration of the
$R_{1992}$ magnitude determinations by summing the error on the linear
relation between the $R_{1991}$ and $R_{1992}$ magnitudes and the
calibration error of $R_{1991}$. Throughout the paper, whenever
needed, we took into proper account the fact that the zero point
calibration of $R_{1991}$ is more accurate than the $R_{1992}$ one. The
calibration zero point errors for each image are reported in
Table~\ref{table1}, Col.~5.
The internal error on the photometry can be easily evaluated by comparing
the independent magnitude determinations of the non-variable stars in
each color. For each color, Table~\ref{table2} gives the mean internal
errors (identified with {\it error}, Cols.~2,~4,~6~and~8) and the number
of stars (identified with {\it stars}, Cols.~3,~5,~7~and~9) at different
magnitude intervals (Col.~1) in the four photometric bands. The accuracy
of the CCD light curves of the Cepheids can be obtained from the
uncertainties quoted in this Table.
\begin{deluxetable}{ccrccrccrccr} \scriptsize \tablenum{2}
\tablecolumns{12} \tablewidth{0pc} \tablecaption{Photometric internal
errors in the B, V, R and I bands. \label{table2}} \tablehead{
\colhead{} & \multicolumn{2}{c}{B} &\colhead{} &\multicolumn{2}{c}{V}
&\colhead{} &\multicolumn{2}{c}{R} &\colhead{} &\multicolumn{2}{c}{I}\\
\cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12}\\ \colhead{Mag} &
\colhead{Error} & \colhead{Stars} &\colhead{} & \colhead{Error} &
\colhead{Stars} &\colhead{} & \colhead{Error} & \colhead{Stars}
&\colhead{} & \colhead{Error} & \colhead{Stars}} \startdata 19.75 &
0.04 & 12 && 0.02 & 23&&0.02&42&&0.02&55\nl 20.25 & 0.04 & 28 &&
0.04 & 38&&0.03&66&&0.03&84\nl 20.75 & 0.04 & 83 && 0.04
&108&&0.04&140&&0.04&121\nl 21.25 & 0.04 & 118 && 0.05
&172&&0.05&150&&0.06&121\nl 21.75 & 0.05 & 157 && 0.06
&208&&0.06&163&&0.08&95\nl 22.25 & 0.06 & 189 && 0.07
&227&&0.10&76&&0.12&17\nl 22.75 & 0.07 & 192 && 0.09
&136&&0.20&18&&0.29&8\nl 23.25 & 0.08 & 134 && 0.14 &
50&&--&--&&--&--\nl 23.75 & 0.13 & 33 && 0.30 & 6&&--&--&&--&--\nl
24.25 & 0.17 & 8 && 0.32 & 1&&--&--&&--&--\nl \enddata
\end{deluxetable}
\section{Cepheids in NGC~3109}
\subsection{Cepheids in common with previous studies} \label{oldcefs} SC
identified 29 Cepheids in NGC~3109. CPB selected only 21 of them,
excluding candidate Cepheids with high photometric uncertainties or
those that did not show significant light variations during the period
covered by their observations ({\it cf.\/}\ CPB for more details). This work is
based on the same selection of 21 Cepheids. Six of them (marked as V in
Fig.~\ref{fig1}) fall in field F1 monitored in the present study. As in
CPB, we applied a zero point shift of $-0.29$ mag to the SC data (SC
data are fainter). The new CCD data in the $B$--band for the six
Cepheids located in the field F1 were plotted together with CPB and SC
photometry, and the periods were interactively adjusted in order to
obtain the best phase match among these three data sets. The new,
complete light curves of these six Cepheids are reproduced in
Fig.~\ref{fig2}. Differences between our periods and those of CPB are very small,
$\sim 10^{-2}$ days for V72 and $\sim 10^{-4}$ days for the other
variables ({\it cf.\/}\ Table~\ref{table5}).
\begin{figure} \figurenum{2} \plotone{fig2.ps} \caption {Light curves
for the six Cepheid variables identified by SC in field F1. {\it
Crosses} represent the original SC photometry, shifted by $\Delta
B=0.29$~mag (CPB). The {\it Open circles} represent the CPB data and the
{\it filled circles} the new data presented in this paper.} \label{fig2}
\end{figure}
For each of these variables, we could also add another point to the
$V$--band light curve, and one or two magnitude determinations in the
$I$ and $R$ bands. In addition, the SC variables V36 and V45 also fall
within our larger $R$ and $I$ frames, allowing us to measure their $R$
and $I$ magnitudes. The new magnitude determinations for the SC Cepheids
are reported in Table~\ref{table3}.
\begin{deluxetable}{rccccccccc} \scriptsize \tablenum{3}
\tablecolumns{10} \tablewidth{0pc} \tablecaption{SC Cepheids in NGC
3109\tablenotemark{A}. \label{table3}} \tablehead{ \colhead{Julian Date}
& \colhead{B} & \colhead{V} & \colhead{R} & \colhead{I} & \colhead{}
&\colhead{B} & \colhead{V} & \colhead{R} & \colhead{I} } \startdata
&\multicolumn{4}{c}{V6} & \colhead{} & \multicolumn{4}{c}{V7} \nl
\cline{1-10}\nl 2448591.825 & -- & -- & -- & -- & & -- &
-- &21.13 &20.94\nl \phantom{244}8683.579 & -- &22.37 & -- & -- &
&22.65 &21.92 & -- &--\nl \phantom{244}8684.779 &21.85 & -- & -- &
-- & &23.08 & -- & -- & --\nl \phantom{244}8685.561 &22.20 & -- & --
& -- & &23.11 & -- & -- & --\nl \phantom{244}8688.789 &23.10 & --
& -- & -- & & -- & -- & -- & --\nl \phantom{244}8689.794 &22.84 &
-- & -- & -- & & -- & -- & -- & --\nl \phantom{244}8690.786 &21.91
& -- &21.11 & -- & & -- & -- &21.72 &--\nl \phantom{244}9036.804
&22.20 & -- & -- & -- & &22.83 & -- & -- & --\nl
\phantom{244}9037.688 &22.46 & -- & -- & -- & &23.00 & -- & -- &
--\nl \phantom{244}9039.734 & -- & -- & -- &20.90 & & -- & -- & --
& -- \nl \cline{1-10}\nl & \multicolumn{4}{c}{V34} &&
\multicolumn{4}{c}{V35}\nl \cline{1-10}\nl 2448591.825 & -- & --
&20.54 &20.27 & & -- & -- &21.40 &21.20\nl \phantom{244}8683.579
&22.39 & -- & -- & -- & &22.90 &22.07 & -- & -- \nl
\phantom{244}8684.779 &22.55 & -- & -- & -- & & -- & -- & -- &
--\nl \phantom{244}8685.561 &22.53 & -- & -- & -- & &22.84 & -- & --
& --\nl \phantom{244}8688.789 &22.42 & -- & -- & -- & & -- & -- &
-- & --\nl \phantom{244}8689.794 & -- & -- & -- & -- & &22.76 & --
& -- & --\nl \phantom{244}8690.786 &22.31 & -- &20.42 & -- & &22.93
& -- &22.66 &--\nl \phantom{244}9036.804 &22.06 & -- & -- & -- &
&23.21 & -- & -- &--\nl \phantom{244}9037.688 &21.52 & -- & -- & --
& &22.91 & -- & -- &--\nl \phantom{244}9039.734 &21.91 & -- & --
&20.00 & &22.28 & -- & -- &21.08\nl \cline{1-10}\nl &
\multicolumn{4}{c}{V64}& & \multicolumn{4}{c}{V72}\nl \cline{1-10}\nl
2448591.825 & -- & -- &20.22 & 19.90 & & -- & -- &22.41
&21.54\nl \phantom{244}8683.579 &22.10 &21.18 & -- & -- & &22.90 & --
& -- & --\nl \phantom{244}8684.779 &22.22 & -- & -- & -- &&22.83 &
-- & -- & --\nl \phantom{244}8685.561 &22.02 & -- & -- & --
&&22.77 & -- & -- & --\nl \phantom{244}8688.789 &21.43 & -- & -- &
-- &&22.31 & -- & -- & --\nl \phantom{244}8689.794 &21.60 & -- & --
& -- && -- & -- & -- & --\nl \phantom{244}8690.786 &21.69 & --
&20.23 & -- &&22.58 & -- & -- & --\nl \phantom{244}9036.804 &22.14 &
-- & -- & -- &&22.37 & -- & -- & --\nl \phantom{244}9037.688
&22.16 & -- & -- & -- &&22.82 & -- & -- & --\nl
\phantom{244}9039.734 &21.29 & -- & -- &19.96 &&22.72 & -- & -- &
-- \nl \cline{1-10}\nl & \multicolumn{4}{c}{V36}& &
\multicolumn{4}{c}{V45}\nl \cline{1-10}\nl 2448591.825 & -- & --
&22.41 &21.54 && -- & -- &22.50 & -- \nl \phantom{244}8683.579 & --
& -- & -- & -- && -- & -- & -- & -- \nl \phantom{244}8684.779
& -- & -- & -- & -- && -- & -- & -- & --\nl
\phantom{244}8685.561 & -- & -- & -- & -- && -- & -- & -- &
--\nl \phantom{244}8688.789 & -- & -- & -- & -- && -- & -- & --
& --\nl \phantom{244}8689.794 & -- & -- & -- & -- && -- & -- & --
& --\nl \phantom{244}8690.786 & -- & -- & -- & -- && -- & -- &
-- & --\nl \phantom{244}9036.804 & -- & -- & -- & -- && -- & --
& -- & --\nl \phantom{244}9037.688 & -- & -- & -- & -- && -- & --
& -- & --\nl \phantom{244}9039.734 & -- & -- & -- & -- && -- &
-- & -- &--\nl \enddata \tablenotetext{A}{For CPB data see their
Table 3(a).} \end{deluxetable}
\subsection{New candidate Cepheids} \label{newcefs} In order to search
for new candidate Cepheid variables, we used both the CPB raw data and
the photometry presented in this paper. For each star we computed, for
both the $B$ and $V$ magnitude determinations, the {\it normalized}
standard deviation ($\sigma_N$), i.e. the standard deviation divided by
the mean photometric internal error in each 0.5 mag magnitude interval.
We considered as possible variables only those stars satisfying the
relation: $$\sqrt{\sigma_N^2(B)+\sigma_N^2(V)} \ge 2$$
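The selection criterion above can be sketched in a few lines; the bin errors and the toy light curves below are invented for illustration and are not taken from our photometry.

```python
# Sketch of the variability criterion: the sample standard deviation of each
# star's B and V measurements is normalized by the mean internal error of its
# magnitude bin (cf. Table 2), and stars with
# sqrt(sigma_N(B)^2 + sigma_N(V)^2) >= 2 are kept as candidates.
import numpy as np

def normalized_sigma(mags, bin_error):
    """Sample standard deviation over the mean internal error of the bin."""
    return np.std(mags, ddof=1) / bin_error

def is_candidate(b_mags, b_err, v_mags, v_err, threshold=2.0):
    return np.hypot(normalized_sigma(b_mags, b_err),
                    normalized_sigma(v_mags, v_err)) >= threshold

# invented examples, assuming a 0.06 mag internal error in both bands
variable = np.array([22.10, 22.35, 22.60, 22.45, 22.15, 22.55])
constant = np.array([22.30, 22.24, 22.33, 22.28, 22.36, 22.31])
print(is_candidate(variable, 0.06, variable, 0.06))  # flagged as variable
print(is_candidate(constant, 0.06, constant, 0.06))  # rejected
```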
By this procedure, we isolated 129 stars. The possible Cepheids were
selected from these candidates on the basis of their light curve,
excluding spurious objects in which only one or two points contributed
to the large deviation. We ended with a list of 29 stars. These
include all the Cepheids previously discovered by SC. Of the
remaining variable candidates, only 16 had colors and light curves
(after rephasing) typical of a Cepheid\footnote{$(B-V)$ colors have
proven to be a valuable aid in discriminating Cepheids from other
variables, allowing the rejection of many objects with colors differing
significantly from those typical of Cepheids ({\it cf.\/}\ also
Fig.~\ref{fig11}).}. The 16 new Cepheid candidates located in field F1
are marked with a circle and coded as P in Fig.~\ref{fig1}; their
photometry is reported in Table~\ref{table4}. The most probable periods
of the new Cepheids have been determined by the usual Fourier analysis.
More data are needed to confirm the periods, though the light curves
seem well enough covered to allow a fair determination of the parameters
of the PL relation. The light curves of these new candidates are
plotted with their tentative periods in Fig.~\ref{fig3}.
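The selection criterion above can be sketched as follows. This is an illustrative sketch, not the authors' code; the array shapes and the way the mean internal error is estimated per 0.5 mag bin are assumptions.

```python
import numpy as np

def normalized_sigma(mags, errs, bin_width=0.5):
    """Standard deviation of each star's magnitudes divided by the mean
    internal photometric error of the star's 0.5 mag bin.
    mags, errs: arrays of shape (n_stars, n_epochs)."""
    mean_mag = np.nanmean(mags, axis=1)
    sigma = np.nanstd(mags, axis=1)
    bins = np.floor(mean_mag / bin_width)
    sigma_n = np.empty_like(sigma)
    for b in np.unique(bins):
        in_bin = bins == b
        mean_err = np.nanmean(errs[in_bin])      # mean internal error in the bin
        sigma_n[in_bin] = sigma[in_bin] / mean_err
    return sigma_n

def variability_candidates(b_mags, b_errs, v_mags, v_errs):
    """Apply the criterion sqrt(sigma_N(B)^2 + sigma_N(V)^2) >= 2."""
    sn_b = normalized_sigma(b_mags, b_errs)
    sn_v = normalized_sigma(v_mags, v_errs)
    return np.sqrt(sn_b**2 + sn_v**2) >= 2.0
```

A constant star has $\sigma_N$ close to one by construction, while a genuine variable stands well above the threshold.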
\begin{figure} \figurenum{3} \plotone{fig3a.ps} \begin{center} {\bf a)}
\end{center} \plotone{fig3b.ps} \begin{center} {\bf b)} \end{center}
\caption {a), b): The light curves for the 16 new candidate Cepheid
variables with a tentative period identification ({\it cf.} Table 5).
The {\it open circles} represent the CPB data, while the {\it filled
circles} are for the data presented in this paper.} \label{fig3}
\end{figure}
\begin{deluxetable}{rccccccccc} \scriptsize \tablenum{4}
\tablecolumns{10} \tablewidth{0pc} \tablecaption{New candidate Cepheids
in NGC 3109: CPB and our data. \label{table4}} \tablehead{
\colhead{Julian Date} & \colhead{B} & \colhead{V} & \colhead{R} &
\colhead{I} & \colhead{} & \colhead{B} & \colhead{V} & \colhead{R} &
\colhead{I} } \startdata & \multicolumn{4}{c}{P1} & \colhead{} &
\multicolumn{4}{c}{P2} \nl \cline{1-10}\nl 2447967.588 & 22.56 &
22.02 & -- & -- && 22.81 & 22.40 & -- & --\nl
\phantom{244}7968.559 & 22.97 & 22.14 & -- & -- && 22.08 & 21.88
& -- & --\nl \phantom{244}7969.581 & 23.23 & 22.44 & -- & -- &&
22.16 & 21.98 & -- & --\nl \phantom{244}7970.551 & 23.38 & 22.49 &
-- & -- && 22.38 & 22.02 & -- & --\nl \phantom{244}7971.570 & 23.57
& 22.63 & -- & -- && 22.78 & 22.36 & -- & --\nl
\phantom{244}8591.825 & -- & -- &22.30 &22.07 && -- & -- & 22.01
& --\nl \phantom{244}8683.579 & -- & -- & -- & -- && -- & -- & --
& --\nl \phantom{244}8684.779 & -- & -- & -- & -- && -- & -- &
-- & --\nl \phantom{244}8685.561 & -- & -- & -- & -- && -- & --
& -- & --\nl \phantom{244}8688.789 &23.68 & -- & -- & -- &&22.48 &
-- & -- & --\nl \phantom{244}8689.794 &22.57 & -- & -- & --
&&22.87 & -- & -- & --\nl \phantom{244}8690.786 & -- & -- & -- &
-- &&22.80 & -- & -- & --\nl \phantom{244}9036.804 &22.76 & -- & --
& -- &&22.53 & -- & -- & --\nl \phantom{244}9037.688 &23.03 & --
& -- & -- &&22.68 & -- & -- & --\nl \phantom{244}9039.734 & -- &
-- & -- &21.90 && -- & -- & -- & 21.83 \nl \cline{1-10}\nl &
\multicolumn{4}{c}{P3} && \multicolumn{4}{c}{P4}\nl \cline{1-10}\nl
2447967.588 & 22.04 & 21.27 & -- & -- && 22.94 & 23.16 & --
& --\nl \phantom{244}7968.559 & 22.12 & 21.28 & -- & -- && 23.12 &
-- & -- & --\nl \phantom{244}7969.581 & 22.13 & -- & -- & -- &&
23.15 & -- & -- & --\nl \phantom{244}7970.551 & 22.40 & 21.40 & --
& -- && 23.20 & 22.83 & -- & --\nl \phantom{244}7971.570 & 22.37 &
21.55 & -- & -- && 23.61 & 23.03 & -- & --\nl
\phantom{244}8591.825 & -- & -- &20.65 &20.42 && -- & -- & -- &
--\nl \phantom{244}8683.579 &21.66 &20.97 & -- & -- &&23.02 & -- &
-- & -- \nl \phantom{244}8684.779 &21.59 & -- & -- & -- && -- & --
& -- & --\nl \phantom{244}8685.561 &21.68 & -- & -- & -- &&22.89
& -- & -- & -- \nl \phantom{244}8688.789 &21.97 & -- & -- & --
&&23.21 & -- & -- & --\nl \phantom{244}8689.794 &22.22 & -- & -- &
-- &&22.87 & -- & -- & --\nl \phantom{244}8690.786 & -- & --
&20.94 & -- &&23.27 & -- & -- & --\nl \phantom{244}9036.804 &21.90 &
-- & -- & -- &&22.85 & -- & -- & --\nl \phantom{244}9037.688
&21.63 & -- & -- & -- &&23.16 & -- & -- & --\nl
\phantom{244}9039.734 &21.81 & -- & -- &20.36 &&22.67 & -- & -- &
-- \nl \cline{1-10}\nl & \multicolumn{4}{c}{P5} &&
\multicolumn{4}{c}{P6}\nl \cline{1-10}\nl 2447967.588 & 22.00 &
21.29 & -- & -- && 22.36 & 21.91 & -- & --\nl
\phantom{244}7968.559 & 22.34 & 21.53 & -- & -- && 22.81 & 21.95
& -- & --\nl \phantom{244}7969.581 & 22.29 & 21.51 & -- & -- &&
22.00 & 21.51 & -- & --\nl \phantom{244}7970.551 & 22.41 & -- & --
& -- && 21.90 & 21.70 & -- & --\nl \phantom{244}7971.570 & 22.76 &
21.96 & -- & -- && 22.24 & 21.69 & -- & --\nl
\phantom{244}8591.825 & -- & -- & -- &20.74 && -- & -- &21.36
&21.00\nl \phantom{244}8683.579 &22.36 &21.55 & -- & -- &&22.48 & --
& -- & --\nl \phantom{244}8684.779 & -- & -- & -- & -- &&21.76 &
-- & -- & --\nl \phantom{244}8685.561 &22.68 & -- & -- & --
&&21.84 & -- & -- & --\nl \phantom{244}8688.789 &21.77 & -- & -- &
-- &&21.93 & -- & -- & --\nl \phantom{244}8689.794 &21.83 & -- & --
& -- &&21.80 & -- & -- & --\nl \phantom{244}8690.786 &22.27 & --
&21.07 & -- &&22.00 & -- &21.37 & --\nl \phantom{244}9036.804 &22.94
& -- & -- & -- &&22.36 & -- & -- & --\nl \phantom{244}9037.688
&22.93 & -- & -- & -- &&21.68 & -- & -- & --\nl
\phantom{244}9039.734 &21.62 & -- & -- &20.64 &&22.15 & -- & --
&21.04 \nl \cline{1-10}\nl \tablebreak & \multicolumn{4}{c}{P7} &&
\multicolumn{4}{c}{P8}\nl \cline{1-10}\nl 2447967.588 & 23.07 &
21.71 & -- & -- && 23.05 & 22.64 & -- & --\nl
\phantom{244}7968.559 & -- & -- & -- & -- && 23.44 & -- &
-- & --\nl \phantom{244}7969.581 & -- & 21.62 & -- & -- && 23.06
& 22.84 & -- & --\nl \phantom{244}7970.551 & 22.85 & 21.64 & -- &
-- && 23.17 & -- & -- & --\nl \phantom{244}7971.570 & 23.65 & 21.59
& -- & -- && 23.01 & 23.02 & -- & --\nl \phantom{244}8683.579
&22.64 & 21.48 & -- & -- &&22.74 & -- & -- & --\nl
\phantom{244}8684.779 &22.55 & -- & -- & -- && -- & -- & -- &
--\nl \phantom{244}8685.561 &22.69 & -- & -- & -- && -- & -- & --
& --\nl \phantom{244}8688.789 &22.60 & -- & -- & -- &&22.97 & -- &
-- & --\nl \phantom{244}8689.794 &22.43 & -- & -- & -- &&23.31 & --
& -- & --\nl \phantom{244}8690.786 &22.57 & -- & -- & -- &&23.29 &
-- &--& --\nl \phantom{244}9036.804 &22.36 & -- & -- & -- &&22.82 &
-- & -- & --\nl \phantom{244}9037.688 &22.70 & -- & -- & --
&&22.90 & -- & -- & --\nl \phantom{244}9039.734 & -- & -- & -- &
-- &&22.81 & -- & -- & -- \nl \cline{1-10}\nl \colhead{} &
\multicolumn{4}{c}{P9} & \colhead{} & \multicolumn{4}{c}{P10} \nl
\cline{1-10}\nl 2447967.588 & 22.15 & 21.69 & -- & -- &&
23.09 & -- & -- & --\nl \phantom{244}7968.559 & 21.93 & -- & -- &
-- && 22.74 & -- & -- & --\nl \phantom{244}7969.581 & 22.34 & --
& -- & -- && 22.64 & 21.69 & -- & --\nl \phantom{244}7970.551
& 22.44 & 21.76 & -- & -- && 23.27 & 21.91 & -- & --\nl
\phantom{244}7971.570 & 22.78 & 21.80 & -- & -- && 22.64 & 21.66
& -- & --\nl \phantom{244}8591.825 & -- & -- &21.00 & -- && -- & --
& -- &20.48\\ \phantom{244}8683.579 &22.05 &21.50 & -- & -- &&22.54
&21.61 & -- & --\\ \phantom{244}8684.779 &21.99 & -- & -- & --
&&22.62 & -- & -- & --\\ \phantom{244}8685.561 &22.21 & -- & -- &
-- &&22.96 & -- & -- & --\\ \phantom{244}8688.789 &22.05 & -- & --
& -- && -- & -- & -- & -- \\ \phantom{244}8689.794 & -- & -- &
-- & -- &&22.93 & -- & -- & --\\ \phantom{244}8690.786 & -- & --
&20.99 & -- &&22.39 & -- &21.29 & --\\ \phantom{244}9036.804 &21.79 &
-- & -- & -- &&22.22 & -- & -- & --\\ \phantom{244}9037.688 &22.05
& -- & -- & -- &&22.41 & -- & -- & --\\ \phantom{244}9039.734 &
-- & -- & -- &20.77 &&21.86 & -- & -- &20.35\\
\cline{1-10}\nl & \multicolumn{4}{c}{P11}& & \multicolumn{4}{c}{P12}\nl
\cline{1-10}\nl 2447967.588 & 22.32 & 21.59 & -- & -- &&
22.77 & -- & -- & --\nl \phantom{244}7968.559 & 21.54 & 21.01 & --
& -- && -- & -- & -- & --\nl \phantom{244}7969.581 & 22.06 &
22.12 & -- & -- && 22.85 & 22.74 & -- & --\nl
\phantom{244}7970.551 & 22.31 & 21.78 & -- & -- && 22.97 & 22.98
& -- & --\nl \phantom{244}7971.570 & -- & 21.59 & -- & -- &&
23.37 & 22.92 & -- & --\nl \phantom{244}8591.825 & -- & -- & -- &
-- & & -- & -- &22.49 & --\nl \phantom{244}8683.579 &21.94 &21.73 &
-- & -- & &22.70 &22.32 & -- & --\nl \phantom{244}8684.779 &21.51 &
-- & -- & -- & &23.02 & -- & -- & --\nl \phantom{244}8685.561
&21.74 & -- & -- & -- & &22.72 & -- & -- & --\nl
\phantom{244}8688.789 &21.92 & -- & -- & -- & & -- & -- & -- &
--\nl \phantom{244}8689.794 &22.87 & -- & -- & -- & &23.05 & -- & --
& --\nl \phantom{244}8690.786 &21.53 & -- &21.38 & -- & &22.69 & --
&22.24 & -- \nl \phantom{244}9036.804 &20.97 & -- & -- & -- & & --
& -- & -- & --\nl \phantom{244}9037.688 &22.05 & -- & -- & -- &
&22.70 & -- & -- & --\nl \phantom{244}9039.734 &20.91 & -- & -- &
-- & &22.11 & -- & -- & --\nl \cline{1-10}\nl \tablebreak &
\multicolumn{4}{c}{P13}& & \multicolumn{4}{c}{P14}\nl \cline{1-10}\nl
2447967.588 & 22.26 & 21.80 & -- & -- && 22.64 & 22.37 &
-- & --\nl \phantom{244}7968.559 & 22.29 & 21.57 & -- & -- && 22.41
& 22.24 & -- & --\nl \phantom{244}7969.581 & 22.67 & 21.78 & -- &
-- && 22.43 & 22.08 & -- & --\nl \phantom{244}7970.551 & 22.74 &
22.22 & -- & -- && 22.40 & 22.04 & -- & --\nl
\phantom{244}7971.570 & 21.98 & 21.63 & -- & -- && 22.42 & 21.87
& -- & --\nl \phantom{244}8591.825 & -- & -- &21.35 &21.11 & & -- &
-- &22.20 & --\nl \phantom{244}8683.579 &22.65 & -- & -- & -- &
&22.71 & -- & -- & --\nl \phantom{244}8684.779 & -- & -- & -- & --
& & -- & -- & -- & --\nl \phantom{244}8685.561 &22.51 & -- & -- &
-- & &22.76 & -- & -- & --\nl \phantom{244}8688.789 &22.64 & -- & --
& -- & &23.06 & -- & -- & --\nl \phantom{244}8689.794 &22.62 & --
& -- & -- & & -- & -- & -- & --\nl \phantom{244}8690.786 &22.76 &
-- &21.65 & -- & &23.08 & -- &22.41 & --\nl \phantom{244}9036.804
&22.36 & -- & -- & -- & &22.93 & -- & -- & --\nl
\phantom{244}9037.688 &22.65 & -- & -- & -- & &23.01 & -- & -- &
--\nl \phantom{244}9039.734 &21.99 & -- & -- & -- & &22.71 & -- & --
&20.55\nl \cline{1-10}\nl & \multicolumn{4}{c}{P15}& &
\multicolumn{4}{c}{P16}\nl \cline{1-10}\nl 2447967.588 & 21.88 &
21.15 & -- & -- && 23.97 & 23.28 & -- & --\nl
\phantom{244}7968.559 & 22.02 & 21.44 & -- & -- && 23.06 & 22.69
& -- & --\nl \phantom{244}7969.581 & 22.39 & -- & -- & -- &&
23.45 & 22.85 & -- & --\nl \phantom{244}7970.551 & 22.02 & 21.42 &
-- & -- && -- & 23.20 & -- & --\nl \phantom{244}7971.570 & 21.47
& 21.09 & -- & -- && 22.72 & 22.33 & -- & --\nl
\phantom{244}8591.825 & -- & -- &21.16 & -- & & -- & -- &22.51 &
--\nl \phantom{244}8683.579 & -- & -- & -- & -- & &23.72 &23.36 & --
& --\nl \phantom{244}8684.779 &21.75 & -- & -- & -- & & -- & -- &
-- & --\nl \phantom{244}8685.561 &22.19 & -- & -- & -- & & -- & --
& -- & --\nl \phantom{244}8688.789 &21.73 & -- & -- & -- & &23.44 &
-- & -- & --\nl \phantom{244}8689.794 &21.12 & -- & -- & -- & & --
& -- & -- & --\nl \phantom{244}8690.786 &21.76 & -- & -- & -- &
&22.73 & -- & -- & --\nl \phantom{244}9036.804 &21.85 & -- & -- &
-- & &24.10 & -- & -- & --\nl \phantom{244}9037.688 &21.23 & -- & --
& -- & &24.01 & -- & -- & --\nl \phantom{244}9039.734 &21.73 & --
& -- & -- & &23.75 & -- & -- &22.18 \nl \enddata
\end{deluxetable}
\section{Cepheid parameters} \label{parcef} The old photographic and the
new CCD $B$--band data provide a good coverage of the blue light curves.
It is therefore straightforward to derive the magnitudes at maximum
($B_{max}$) and the mean magnitudes ($<B>$) listed in Cols.~4 and 5 of
Table~\ref{table5}. The parameters for the SC Cepheids have been
estimated giving more weight to the CCD than to the photographic
photometry, particularly at minimum light. We also attempted to
obtain the phase-weighted mean magnitude using relation (4) in Saha
et al. (1994) after transforming the magnitudes into intensities. As
discussed by Saha \& Hoessel (1990), this method takes care of the bias
introduced by the loss of faint measurements. Unfortunately, the phase
sampling of the light curves is not sufficiently even; thus, we
preferred to compute the mean directly from the light curve drawn by
hand through the available points and use the phase-weighted mean only
for a comparison. In order to obtain the mean $V$--band magnitude we
used the same method as for the $B$--band. These values are reported in
Col.~7 of Table~\ref{table5}.
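A phase-weighted intensity mean of the kind attempted above can be sketched as follows. We assume the standard form (weights equal to half the phase gap between neighbouring points, computed cyclically); this is an illustration, not the exact relation (4) of Saha et al. (1994).

```python
import numpy as np

def phase_weighted_mean(phases, mags):
    """Phase-weighted intensity mean magnitude: convert magnitudes to
    intensities, average with weights 0.5*(phi[i+1] - phi[i-1]) taken
    cyclically over the sorted phases in [0, 1), convert back."""
    order = np.argsort(phases)
    phi = phases[order]
    inten = 10.0 ** (-0.4 * mags[order])
    nxt = np.roll(phi, -1); nxt[-1] += 1.0   # next phase, wrapped
    prv = np.roll(phi, 1);  prv[0] -= 1.0    # previous phase, wrapped
    w = 0.5 * (nxt - prv)                    # weights sum to 1
    return -2.5 * np.log10(np.sum(w * inten))
```

For evenly sampled light curves this reduces to the plain intensity mean; its advantage appears only when the phase coverage is uneven, which is exactly the regime where, as noted above, the method becomes unreliable for this data set.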
\begin{deluxetable}{rccccccccccc} \scriptsize \tablenum{5}
\tablecolumns{12} \tablewidth{0pc} \tablecaption{NGC~3109 Cepheid
parameters. \label{table5}} \tablehead{ \colhead{Cepheid} &
\colhead{$P_{CPB}$} & \colhead{$P_{our}$}& \colhead{$B_{max}$}&
\colhead{$<B>$}&\colhead{$\sigma_{<B>}$}&
\colhead{$<V>$}&\colhead{$\sigma_{<V>}$}& \colhead{$<R>$}&
\colhead{$\sigma_{<R>}$\tablenotemark{A}} &
\colhead{$<I>$}&\colhead{$\sigma_{<I>}$\tablenotemark{A}}\\
\colhead{Ident} & \colhead{[days]} & \colhead{[days]}
&\colhead{[mag]}&
\colhead{[mag]}&\colhead{[mag]}&\colhead{[mag]}&\colhead{[mag]}&\colhead{[mag]}
&\colhead{[mag]}&\colhead{[mag]}&\colhead{[mag]} } \startdata V6 &
7.0223& 7.02120 &21.78 &22.46& 0.04&21.80& 0.04 &20.92& 0.29 &20.77&
0.23\nl V7 & 5.9879& 5.98810 &21.71 &22.43& 0.05&21.98& 0.05 &21.22&
0.32 &20.91& 0.24\nl V8 &14.4500&-- &20.85 &21.45& 0.10&--&--&--&--
\nl V9 & 7.7670&-- &22.10 &22.90& 0.10&21.90& 0.10 &--&--&-- \nl
V11 & 7.9298&-- &21.85 &22.40& 0.10&--&--&--&-- \nl V12 & 8.1183&--
&21.70 &22.43& 0.10&21.86& 0.10 &--&--&-- \nl V18 & 8.3707&--
&21.60 &22.38& 0.10&21.68& 0.10 &--&--&-- \nl V20 & 8.2718&-- &21.90
&22.40& 0.10&22.20& 0.10 &--&--&-- \nl V34 &17.2320&17.22550 &21.43
&22.12& 0.04&21.28& 0.05 &20.62& 0.30 &20.12& 0.23\nl V35 & 8.1928&
8.19190 &21.65 &22.45& 0.05&21.80& 0.05 &21.53& 0.35 &21.22& 0.27\nl
V36 & 7.1290&-- &21.90 &22.25& 0.10&21.82& 0.10 &21.62& 0.15 &21.40&
0.12\nl V44 & 9.0970&-- &21.10 &21.80& 0.10&21.27& 0.10 &--&--&-- \nl
V45 & 8.7620&-- &21.50 &22.05& 0.10&21.36& 0.10 &21.04& 0.19 &--&--
\nl V57 & 7.2595&-- &21.35 &21.70& 0.10&--&--&--&-- \nl V64
&19.5745&19.57070 &21.05 &21.79& 0.04&20.93& 0.04 &20.44& 0.32 &20.02&
0.25\nl V72 & 8.6805& 9.09400 &21.72 &22.32& 0.04&21.76& 0.04 &21.34&
0.26 &21.02& 0.20\nl V77 & 5.8170&-- &22.10 &22.60& 0.10&22.00& 0.10
&--&--&-- \nl V79 & 8.2480&-- &21.95 &22.55& 0.10&21.70& 0.10
&--&--&-- \nl V81 &19.9570&-- &21.15 &21.88& 0.10&21.12& 0.10
&--&--&-- \nl V92 &13.3165&-- &21.45 &21.93& 0.10&21.15& 0.10
&--&--&-- \nl P1 &--& 8.61700 &22.30 &22.99& 0.05&22.28& 0.05
&22.07& 0.30 &21.76& 0.23\nl P2 &--& 5.27800 &22.00 &22.50&
0.04&22.18& 0.04 &22.10& 0.22 &21.87& 0.17\nl P3 &--&13.60000 &21.50
&22.00& 0.03&21.24& 0.03 &20.77& 0.22 &20.36& 0.17\nl P4 &--&
2.32980 &22.55 &23.13& 0.04&23.06& 0.06 &--&--&-- \nl P5
&--&10.62600 &21.60 &22.30& 0.04&21.64& 0.04 &20.91& 0.30 &20.30& 0.24\nl
P6 &--& 4.70380 &21.65 &22.23& 0.03&21.68& 0.04 &21.40& 0.25 &21.00&
0.20\nl P7 &--& 5.34060 &22.31 &23.00& 0.04&21.65& 0.04 &--&--&-- \nl
P8 &--&2.54200 &22.71 &23.07& 0.03&22.79& 0.04 &--&--&-- \nl P9
&--& 5.34250 &21.74 &22.42& 0.04&21.73& 0.04 &20.93& 0.30 &20.61& 0.23\nl
P10 &--& 3.20620 &21.85 &22.63& 0.04&21.68& 0.05 &21.13& 0.34 &20.53&
0.26\nl P11 &--& 2.78930 &21.10 &22.00& 0.05&21.58& 0.06 &21.12& 0.40
&--&-- \nl P12 &--&10.85540 &21.80 &22.57& 0.04&22.27& 0.05 &21.86&
0.34 &--&-- \nl P13 &--& 6.00240 &21.89 &22.33& 0.03&21.81& 0.04
&21.26& 0.19 &21.00& 0.15\nl P14 &--&17.29510 &22.20 &22.65&
0.03&--&-- &22.09& 0.20 &--&-- \nl P15 &--& 7.10590 &21.10 &21.80&
0.04&21.30& 0.04 &21.19& 0.31 &--&-- \nl P16 &--& 3.54310 &22.70
&23.41& 0.06&22.89& 0.06 &22.32& 0.31 &21.85& 0.24\nl \enddata
\tablenotetext{A}{This error is an upper limit; in fact, it is the
half-amplitude of the light curve in this band, obtained using the
information
from the blue light curves (see Sect. 3 for details).}
\end{deluxetable}
Finally, even though there are only random phase data available in $R$
and $I$ bands, it is always possible to use the information from the blue
light curves to correct these observations to the mean $<R>$, $<I>$ light
(Freedman 1988, F88). The Cepheid light curves in blue differ from those
at longer wavelength in two respects: {\it (i)} the amplitude decreases
with increasing wavelength, and {\it (ii)} there is a phase shift moving
from the $B$ to the $I$--band. After transforming the blue photographic
light curves into the CCD photometric scale, a correction for both the
amplitude scale and the phase shift\footnote{We have assumed that the
amplitude ratios as a function of the color and the phase shift are the
same as in Freedman (1988).} was applied as in F88 in order to obtain
the mean magnitudes for the $R$ and $I$ data (Cols. 9 and 11 of
Table~\ref{table5}). Here, we took into proper account that the
$R_{1991}$ magnitude determinations are more accurate than the
$R_{1992}$ ones.
In order to estimate the uncertainty associated with the mean
magnitudes, we considered all the independent sources of errors acting
on this quantity: the amplitude of the light curve that was converted
into its equivalent variance (see Freedman et al. 1991, 1992), the
internal photometric errors, and the zero-point errors in the calibrated
magnitudes (see Table~\ref{table1} and Table~\ref{table2}). For the
bands $B$ and $V$, the total error was obtained by adding these errors
in quadrature and dividing by the reduced number of degrees of freedom.
For the bands $R$ and $I$, we have so few points (no more than two) that
we consider it more reasonable to adopt as the error on the mean
magnitudes the half-amplitude of the light curve, as predicted from the
$B$ light curve (F88). Obviously, this uncertainty must be considered an
upper limit of the error associated with the mean magnitude. The
resulting errors are shown in Cols.~6, 8, 10, and 12 of
Table~\ref{table5}.
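One plausible reading of this error recipe is sketched below. The `mean_mag_error` helper and the amplitude-to-scatter conversion `amp/sqrt(12)` (the equivalent scatter of a uniform distribution, in the spirit of Freedman et al. 1991) are our assumptions for illustration, not the authors' exact prescription.

```python
import math

def quadrature(*errors):
    """Combine independent 1-sigma errors in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

def mean_mag_error(amp, sigma_phot, sigma_zp, n_points):
    """Hypothetical implementation of the B/V error budget: light-curve
    amplitude converted to an equivalent scatter, added in quadrature to
    the internal photometric and zero-point errors, then reduced by the
    number of degrees of freedom."""
    sigma_amp = amp / math.sqrt(12.0)     # assumed conversion, see lead-in
    return quadrature(sigma_amp, sigma_phot, sigma_zp) / math.sqrt(n_points - 1)
```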
\section{The distance modulus}
\subsection{The relative apparent distance moduli} \label{relmod}
Recently, it has been repeatedly pointed out that the principal source
of systematic errors in the calibration of the primary distance
indicators lies in the uncertainty in the determination of the zero points of
the Cepheid Period-Luminosity and Period-Luminosity-Color (PLC)
relations (Freedman \& Madore 1993, Fukugita et al. 1993). For this
reason we prefer to derive the distance modulus of NGC~3109 by
comparison with the Large Magellanic Cloud Cepheids, as we did in the
other papers of this series (CPB, PCP), as originally
suggested by F88. Indeed, the distance modulus of the LMC can be derived
using several independent methods (and owing to the large number of
known Cepheids in the Magellanic Clouds, these galaxies represent almost
ideal laboratories for the calibrations of PL and PLC relations). The
method has the advantage of leaving the relative distance unaffected by
any future improvement in the distance to the LMC. According
to Hunter \& Gallagher (1985), the metal content of NGC~3109 should be
just $\sim 1.5$ times higher than in the LMC. Therefore, we do not
expect significant effects on the PL relations for the two galaxies. In
fact, Freedman \& Madore (1990) have shown that the distance moduli
obtained via Cepheid PL relation in a study of three fields of M31 with
metal content from 0.5 to 2.5 times the solar metallicity range over
less than 0.27 mag.
Figures \ref{fig4}, \ref{fig5}, \ref{fig6} and \ref{fig7} present the PL
relations from the mean $B$, $V$, $R$ and $I$ magnitudes of the Cepheids
listed in Table~\ref{table5}. {\it Filled circles} represent both SC and
our Cepheids in the field F1 of NGC~3109, while {\it open circles} refer
to the SC Cepheids located in the other fields. {\it Crosses} reproduce
the photoelectric data for the LMC Cepheids (Sandage 1988, Madore \&
Freedman, 1991), shifted by the zero point difference between the
NGC~3109 PL relation and the LMC one. The slopes of these two relations
appear similar in all four colors, in agreement with the postulate on
which the use of the Cepheids as distance indicators rests, i.e. that
their properties are independent of the host galaxy ({\it cf.\/}\ also
Section~7). For each color we built a PL relation with all the Cepheids
(upper plot) and one in which we used only the Cepheids with a
well-defined light curve and uniform phase coverage (referred to as
``selected'' from here on). This last selection allows us to include
only those Cepheids with the most reliable light curve parameters, such
as period, minimum, and mean magnitude.
\begin{figure} \figurenum{4} \plotone{fig4.ps} \caption {The
Period-Luminosity relation in the $B$-band for the NGC~3109 Cepheids is
compared with the corresponding relation for the LMC (from Sandage 1988
and Madore \& Freedman, 1991). {\it Filled circles} represent both SC
and our Cepheids located in the field F1 of NGC~3109, {\it open circles}
the SC Cepheids located in the other fields. The magnitude error bars
have dimensions of the order of the symbol size. {\it Crosses} reproduce
the photoelectric data for the LMC Cepheids shifted by the zero point
difference between the NGC~3109 and the LMC PL relations. In the {\it
upper panel} all the variables are used, while in the {\it lower panel}
only the Cepheids with the most reliable light curve parameters are
plotted. A best fit (see text) of the selected sample to the LMC
Cepheids with $\log P < 1.8$ gives us an apparent relative distance
modulus $\Delta\mu_B=6.81\pm 0.11$.} \label{fig4} \end{figure}
\begin{figure} \figurenum{5} \plotone{fig5.ps} \caption{As in Fig. 4,
but for the $V$-band. The resulting apparent relative distance modulus
is $\Delta\mu_V=6.91 \pm 0.09$.} \label{fig5} \end{figure}
\begin{figure} \figurenum{6} \plotone{fig6.ps} \caption{As in Fig. 4,
but for the $R$-band (the plotted errors are an upper limit, see Sect. 4
for details). The resulting apparent relative distance modulus is
$\Delta\mu_R=6.95 \pm 0.13$.} \label{fig6} \end{figure}
\begin{figure} \figurenum{7} \plotone{fig7.ps} \caption{As in Fig. 4,
but for the $I$-band (the plotted errors are an upper limit, see Sect. 4
for details). The resulting apparent relative distance modulus is
$\Delta\mu_I=7.01 \pm 0.12$.} \label{fig7} \end{figure}
In this paper, for consistency with CPB, we adopt a true distance
modulus to the LMC of 18.50 mag (van den Bergh, 1996)\footnote{We note
that the current best estimate of the distance modulus of the LMC
is based on the light echo from the SN1987a ring. Panagia et al. (1997)
give $\mu_0=18.58\pm 0.03$ mag.} and a mean total extinction to the LMC
Cepheids $E(B-V)=0.08$~mag ({\it cf.\/}\ discussion in CPB). Note that Bessell
(1991) obtained a foreground reddening of the LMC which ranges from 0.04
to 0.09 mag, while the mean internal reddening is 0.06 mag, though in
some regions it reaches values as high as 0.3 mag. In this picture, it
is very difficult to estimate the mean total reddening of the sample of
the LMC Cepheids. As will be discussed below, the adopted reddening
for the LMC has no effect on the determination of the relative (to the
LMC) and absolute distance modulus of NGC 3109: it only affects the
estimate of the total average reddening of its Cepheids.
In order to calculate the apparent distance moduli relative to the LMC in
all four bands, we obtained the best fit (least squares method) for the
PL relation of the LMC and then we calculated the best match of the PL
relation of NGC~3109 imposing the same slope of the PL relation of the
LMC. For an additional internal check of our results, we repeated these
calculations using different approaches. First, we considered the two
different samples of Cepheids in NGC~3109 defined above. Moreover, while
the PL relation for the LMC was fitted with a straight line over the
range of periods $0< \log P<1.8$\footnote{Cepheids with $\log P>1.8$ are
excluded from the least-squares fit since both the evolutionary status
and the reddening of these longer period Cepheids are controversial.},
the two samples of NGC~3109 Cepheids were fitted both in the entire
range of periods and in the range $0.8<\log P<1.8$ in order to minimize
the effects of the faint-end selection bias in the small data set
of the more distant galaxy NGC~3109. The resulting relative distance
moduli are all consistent within the errors. Thus we adopted the
relative distance moduli obtained with the Cepheids of the selected
sample of NGC~3109 without any constraint on $\log P$. In this way we
make use of the same range of periods for both the LMC and NGC~3109, but
we exclude those Cepheids with inaccurate parameters which might affect
the reliability of the fit (almost all of them are in the fainter part
of the PL relation). The resulting apparent distance moduli relative to
the LMC ($\Delta\mu$) are reported in the Table~\ref{table6}. The total
error on these apparent distance moduli is obtained adding in quadrature
the error of the fit and the mean internal photometric error (obtained
for each band as a mean of the errors in Table 2 up to magnitude 23.75).
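The fixed-slope matching procedure described above can be sketched as follows. The function name and the use of `numpy.polyfit` are illustrative choices, not the authors' code.

```python
import numpy as np

def relative_modulus(logp_lmc, m_lmc, logp_n3109, m_n3109):
    """Fit m = a + b*logP to the LMC Cepheids by least squares, then
    match the NGC 3109 Cepheids with the slope b held fixed; the
    zero-point offset is the apparent distance modulus relative to
    the LMC."""
    b, a = np.polyfit(logp_lmc, m_lmc, 1)        # LMC slope and intercept
    a_n3109 = np.mean(m_n3109 - b * logp_n3109)  # fixed-slope intercept
    return a_n3109 - a
```

Restricting either sample to a sub-range of $\log P$ is then just a matter of masking the input arrays before the call, which is how the consistency checks above can be reproduced.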
\subsection{The absolute distance modulus} \label{absmod} In order to
determine the true distance modulus (TDM=$\mu_0$) to NGC~3109, the total
(foreground plus internal) extinction for the Cepheids must be evaluated.
\begin{deluxetable}{ccc} \tablenum{6} \tablecolumns{10} \tablewidth{0pc}
\tablecaption{Distance moduli relative to the LMC. \label{table6}}
\tablehead{ \colhead{Bandpass} & \colhead{$\Delta\mu$} & \colhead{Error}}
\startdata B & 6.81 & 0.12\nl V & 6.91 & 0.11\nl R & 6.95 & 0.15\nl
I & 7.01 & 0.15\nl \enddata \end{deluxetable}
Taking advantage of the multicolor apparent distance moduli (see
Table~\ref{table6}) and assuming that all of the wavelength dependence of
these moduli is due to the extinction, it is possible to fit all the data
simultaneously with an interstellar extinction law. In particular, we
assume that the reddening law for the LMC and NGC~3109 is the same as in
the Galaxy, and use the law derived by Cardelli et al. (1989, C89).
\begin{figure} \figurenum{8} \plotone{fig8.ps} \caption{The NGC~3109
apparent distance moduli obtained in 4 bands are plotted as a function
of the corresponding inverse wavelength (in $\mu m^{-1}$). A weighted
least-squares fit of the Cardelli et al. (1989) extinction law ({\it
solid curve}) leads to a true relative distance modulus
$\Delta\mu_0=7.17\pm 0.01$.} \label{fig8} \end{figure}
Figure~\ref{fig8} shows the relative distance moduli plotted as a
function of the inverse wavelength characteristic of each band. As in
F88, the estimated errors were scaled as a function of the increasing
wavelength to reflect the decreasing strip width. The solid line is the
weighted least-squares best fit of the adopted extinction law to the
moduli. Following C89, we have adopted $R_V = A_V/E(B-V)=3.1$, where
$A_V$ is the extinction in magnitudes in the $V$ band. With this
assumption, we obtained a reddening-free relative distance modulus (RDM)
$\Delta\mu_0=7.17 \pm 0.01$ and a relative reddening $\Delta
E(B-V)=-0.09 \pm 0.02$ between the LMC and NGC 3109. This RDM
corresponds to a TDM $\mu_0=25.67$, adopting $\mu_0=18.50$ for the LMC.
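The multiwavelength solution just described amounts to a weighted linear fit of the apparent moduli against $A_\lambda/E(B-V)$. A minimal sketch follows; the $R_\lambda$ values for $B$, $V$, $R$, $I$ are approximate C89 ratios assumed here for illustration, not numbers quoted in the text.

```python
import numpy as np

def true_modulus(delta_mu, sigma, r_lambda):
    """Weighted linear fit of the apparent relative moduli against
    R_lambda = A_lambda/E(B-V) from the adopted extinction law:
        delta_mu(lambda) = delta_mu_0 + dE(B-V) * R_lambda.
    Returns (delta_mu_0, dE(B-V))."""
    w = 1.0 / np.asarray(sigma)            # polyfit weights ~ 1/sigma
    debv, dmu0 = np.polyfit(r_lambda, delta_mu, 1, w=w)
    return dmu0, debv

# Assumed approximate C89 values of A_lambda/E(B-V) for B, V, R, I:
R_LAMBDA = np.array([4.1, 3.1, 2.3, 1.5])
```

The intercept is the extinction-free relative modulus (the extrapolation to $1/\lambda \to 0$), and the slope is the relative color excess $\Delta E(B-V)$.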
A relative color excess $\Delta E(B-V)=-0.09$ would imply a negative
total mean reddening (TR) $E(B-V)=-0.01$ for NGC 3109, if we assume, as
discussed above and in CPB, $E(B-V)=0.08$ for the total reddening of the
LMC Cepheids. A negative reddening $E(B-V)=-0.07$ has been obtained by
Freedman {\it et al.\/}\ (1992) for NGC 300. Of course, a negative reddening has
no physical meaning, in terms of what we know about the interstellar
medium. First, we must note that in our case the uncertainty on the
$\Delta E(B-V)$ is of the order of 0.04 magnitudes (see below), and
therefore we can assume that the TR is compatible with zero. Still, we
would remain with the unlikely situation of a {\it zero} total
extinction for the Cepheids in NGC~3109. There might be many
explanations for such a result. Systematic calibration errors can play a
role in the determination of the relative extinction, though we can
exclude systematic errors larger than $\sim0.03$~mag. Another
interesting possibility is that the intrinsic color of the Cepheids in
NGC~3109 is systematically bluer than those in the fiducial sample of
the LMC, therefore leading to an underestimation of their reddening.
However, we believe that the largest uncertainty is still on the adopted
total reddening for the LMC. We have already discussed (previous
Section) the difficulties in estimating the mean internal and external
extinction toward the LMC Cepheids. According to Bessell's (1991)
results, the E(B-V)=0.08~mag for the LMC that we have adopted in this
series of papers might be too low. The comparison between the LMC and
NGC 3109 Cepheids seems to confirm that an $E(B-V)_{LMC}=0.1$~mag (see
also Freedman et al., 1991) or higher might be more appropriate.
The total error on the TDM of NGC~3109 is the combination of 1) the
error on the best fit of the extinction law of C89, 2) the uncertainty
on the absolute distance to the LMC, 3) the error on the photometric
zero point, and 4) the error due to the assumption of $R_V=3.1$.
The best LMC distance determinations are based on the Cepheids ({\it cf.}
the compilation of Feast \& Walker, 1987), and the associated
uncertainty is $\pm 0.15$~mag; smaller errors seem too optimistic (see
van den Bergh 1996). This is still the dominant source of error. The
uncertainty on the zero point calibration is $\pm 0.03$ mag (a mean of
the errors estimated in Sect. 2). Finally, C89 have shown that $R_V$ can
range from 2.75 to 5.3 within the Galaxy. Adopting this range in $R_V$
for NGC~3109, the reddening value and the relative distance modulus
would be in the interval $-0.01<E(B-V)<0.01$ and
$7.16<\Delta\mu_0<7.25$, respectively. In other words, a plausible error
estimate for the distance modulus as a consequence of the extinction-law
uncertainties is $\pm 0.05$~mag.
In summary, the true distance modulus of NGC~3109 is
$\mu_0=25.67 \pm 0.16$~mag and the total reddening is $E(B-V)=-0.01 \pm
0.04$~mag.
\subsection{Reddening--free PL Relation} In the previous Sections we
have used the information from several photometric bands and the fit of
an interstellar extinction law to the data in order to make an
extrapolation to infinitely long wavelengths and estimate the total
extinction and the true distance modulus. However, even without solving
explicitly for the reddening, it is still possible to obtain an estimate
of the distance modulus which is independent of reddening. This can be
done by defining the reddening-free magnitude $W=V-R_V\times(B-V)$
(Freedman et al., 1992).
Figure~\ref{fig9} displays the $W-\log P$ relation for NGC~3109 (selected
sample) and for the LMC. From this relation we obtain a true relative
distance modulus $\Delta\mu_0=7.23\pm0.20$ which is consistent with the
true relative modulus obtained in the previous Section.
\begin{figure} \figurenum{9} \plotone{fig9.ps} \caption{The
reddening-free $W-\log P$ relation for the NGC~3109 Cepheids ({\it open
circles} for the original SC variables and {\it filled circles} for the
new candidates), compared to the LMC Cepheids ({\it crosses}). The
corresponding true relative distance modulus is $\Delta \mu=7.23\pm
0.20$.} \label{fig9} \end{figure}
The disadvantage of this reddening--free method is that the
uncertainties on the $W$ magnitudes are larger than the single errors on
the original $B$ and $V$ magnitudes, as $W$ results from a combination
of the two. For this reason we obtain an error on the true relative
modulus which is one order of magnitude larger than that on the modulus
obtained by the multiwavelength method. Moreover, $W$ depends on only
two of the four available bands, making the distance based on it
statistically less accurate. For this reason we have constructed a new
parameter $W'$, analogous to $W$ but depending on $R$ and $I$. The two
reddening free parameters ($W$ from $B$ and $V$ and $W'$ from $R$ and
$I$) are statistically uncorrelated and could be used as a further,
independent estimate of the distance modulus. $W'$ was derived
generalizing the definition of $W$, and applying it to the $R$ and $I$
bands. In fact, we can substitute the parameter $R_V$ by ${\cal
R}(R,I)=A_I/E(R-I)$, and then define the new parameter $W'=I-{\cal
R}(I,R)*(R-I)$. ${\cal R}(I,R)$ can be easily derived by the C89 law.
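With approximate Cardelli-type band ratios for $R_V=3.1$ (assumed for illustration, not taken from C89 directly), the coefficient $A_I/E(R-I)$ evaluates to about 4, and the resulting $W'$ is insensitive to reddening:

```python
# Hedged sketch of deriving R(R,I) = A_I/E(R-I) from an extinction law.
# The band ratios are approximate Cardelli-type values for R_V = 3.1
# (assumed here, not quoted in the text): A_R/A_V ~ 0.75, A_I/A_V ~ 0.60.
A_R_over_A_V = 0.75
A_I_over_A_V = 0.60

# E(R-I) = A_R - A_I, so A_V cancels in the ratio:
R_RI = A_I_over_A_V / (A_R_over_A_V - A_I_over_A_V)   # ~ 4

def w_prime(I, R):
    """Reddening-free magnitude W' = I - R(R,I)*(R - I)."""
    return I - R_RI * (R - I)
```

Any common scaling of the band absorptions drops out of the ratio, which is why only the shape of the extinction law enters.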
Figure~\ref{fig10} displays the $W'-\log P$ relation for NGC~3109
(selected sample) and for the LMC. From this relation we obtain a true
relative distance modulus $\Delta\mu_0=7.19\pm0.20$, in very good
agreement with that obtained from the $W-\log P$ relation. This result
demonstrates the utility and reliability of the method. The combination of
$W$ and $W'$ thus provides a test of self-consistency and a way
of assessing external errors.
\begin{figure} \figurenum{10} \plotone{fig10.ps} \caption {As in Fig.~9,
but for the new reddening-free $W'-\log P$ relation (see Sect. 5.3
for details). The corresponding true relative distance modulus is $\Delta
\mu=7.19\pm 0.20$.} \label{fig10} \end{figure}
\section{The Period--Color relation} In Figs.~\ref{fig11},
\ref{fig12}, and \ref{fig13}, we compare the Period-Color relations
($PC$) for the LMC and NGC~3109 for the three colors $(B-V)$, $(V-R)$
and $(V-I)$; for NGC~3109 the selected sample has been used (the errors
for the colors $(V-R)$ and $(V-I)$ are larger because we use the maximum
expected value and represent an upper limit, see Sect. 4 for details).
In each figure, the data in the {\it upper panel} are shown without
reddening correction and those in the {\it lower panel} are corrected
for reddening. Figure~\ref{fig14} shows the same three
absorption-corrected PC relations for the LMC, NGC 3109, Sextans A
(PCP), Sextans B (PCP), and IC 1613 (F88). The slopes of these PC
relations are the same, within the uncertainties. The data of NGC~3109,
Sextans A, Sextans B, and IC 1613 have a larger scatter than for the
LMC, consistent with the lower accuracy in the determination of the mean
magnitudes.
\begin{figure} \figurenum{11} \plotone{fig11.ps} \caption{The $(B-V)$
colors for the Cepheids in the LMC ({\it crosses}) and in NGC~3109 ({\it
filled circles}) are plotted against the logarithm of the period in
days. For NGC~3109 we also plot the errors on the determination of the
colors. The {\it upper panel} shows the data without any correction for
the reddening, while the {\it lower panel} displays the absolute colors
[adopting E(B-V)=0.08 for the LMC]. For NGC~3109 we use E(B-V)=0.0 (see
text for details). The trend of the two populations is remarkably
similar. The larger scatter in NGC~3109 is due to the larger errors in
the determination of the mean magnitudes.} \label{fig11} \end{figure}
\begin{figure} \figurenum{12} \plotone{fig12.ps} \caption{As in fig. 11,
but for the $(V-R)$ colors (the plotted errors are an upper limit, see
Sect. 4 for details).} \label{fig12} \end{figure}
\begin{figure} \figurenum{13} \plotone{fig13.ps} \caption{As in fig. 11,
but for the $(V-I)$ colors (the plotted errors are an upper limit, see
Sect. 4 for details).} \label{fig13} \end{figure}
\begin{figure} \figurenum{14} \plotone{fig14.ps} \caption{Period-Color
relation after the correction for reddening for the LMC, NGC~3109,
Sextans~A, Sextans~B and IC~1613. As for NGC~3109, the larger scatter for
Sextans~A and Sextans~B is due to the larger errors in the determination
of the mean magnitudes.} \label{fig14} \end{figure}
In order to obtain the zero point difference between the PC relations
for the LMC and the other galaxies, the LMC PC relations were fitted
with a straight line over the range of periods $0<\log P<1.8$, and a
linear fit of the PC relations was performed for the other galaxies
imposing the same slope as for the LMC. In this way we could calculate the
mean color difference between the Cepheids in the LMC and in the other
four galaxies. Figure~\ref{fig15} shows that the mean color difference
after the reddening corrections is consistent with zero. This result
shows that, if we suppose that the colors of the Cepheids are independent
of the parent galaxy, the C89 law allows relative
reddenings to be calculated in a consistent way.
\begin{figure} \figurenum{15} \plotone{fig15.ps} \caption{The zero point
difference (after reddening correction) between the $PC$ relations in
(B-V), (V-R), and (V-I) for the LMC and for NGC~3109, Sextans A, Sextans
B, and IC~1613 is plotted against the reddening. The {\it cross} is the
LMC and the two lines represent the range of errors for the LMC zero
point.} \label{fig15} \end{figure}
\begin{figure} \figurenum{16} \plotone{fig16.ps} \caption{The
reddening--free parameter $Q=(B-V)-[E(V-I)/E(B-V)](V-I)$ is plotted as a
function of $\log P$ for both the selected sample of Cepheids in NGC~3109
({\it filled circles}) and in the LMC ({\it crosses}). The widths and
the slopes of the two distributions are the same as well as the zero
point. The larger scatter for the NGC~3109 Cepheids is consistent with
the larger photometric errors (for the magnitude $I$ the error
considered is an upper limit, see Sect. 4 for details).} \label{fig16}
\end{figure}
With the data set presented in this paper, we can check whether the
Cepheids in NGC~3109 have the same average colors as in the LMC. Figure
\ref{fig16} displays the reddening--free parameter
$Q=(B-V)-[E(V-I)/E(B-V)](V-I)$ as a function of $\log P$ for both the
selected sample of Cepheids in NGC~3109 ({\it filled circles}) and in the
LMC ({\it crosses}) (see Freedman {\it et al.\/}, 1992). The slopes of the two
distributions are the same as well as the zero point. The larger scatter
for the NGC~3109 Cepheids is consistent with the larger photometric
errors. The similarity of these two distributions suggests that the colors
of the Cepheids in these galaxies are the same.
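The cancellation at work in such a two-color index is easy to verify numerically; with the extinction-law ratio $r=E(V-I)/E(B-V)$ (an approximate Cardelli-type value is assumed below, together with hypothetical colors), the combination $(V-I)-r\,(B-V)$ is strictly invariant because the two color excesses scale together:

```python
# Numerical check of the reddening cancellation behind a two-color index.
# r = E(V-I)/E(B-V) ~ 1.25 is an assumed, approximate Cardelli-type value;
# the intrinsic colors and the color excess are hypothetical.
r = 1.25

def q_index(VI, BV):
    """Reddening-free combination (V-I) - r*(B-V)."""
    return VI - r * BV

BV0, VI0 = 0.55, 0.70    # hypothetical intrinsic colors
EBV = 0.12               # hypothetical color excess E(B-V)
Q_intrinsic = q_index(VI0, BV0)
Q_reddened = q_index(VI0 + r * EBV, BV0 + EBV)
```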
Finally, in Fig.~\ref{fig17} we compare the amplitude of the $B$ light
curves of the Cepheids in NGC~3109 and in the LMC. In this case too, the
trend and the dispersion of the points for the two
populations of Cepheids are similar, suggesting similar properties.
\begin{figure} \figurenum{17} \plotone{fig17.ps} \caption{The light
curve amplitudes $(B_{min}-B_{max})$ for the LMC ({\it crosses}) and
NGC~3109 ({\it filled circles}) Cepheids are plotted against the
logarithm of the period in days. The trend of the two populations of
Cepheids is remarkably similar, suggesting similar properties.}
\label{fig17} \end{figure}
\section{Conclusion}
In conclusion, an extended PL relation and the multicolor photometry
provide a new, more accurate, de-reddened distance of NGC 3109 relative
to the LMC. Adopting $\mu_0=18.50$ for the LMC, we obtain a true
distance modulus $\mu_0=25.67\pm 0.16$ for NGC 3109; if we assume a
reddening E(B-V)=0.08 for the LMC, the total reddening of the Cepheids
in NGC~3109 is $E(B-V)=-0.01\pm0.04$. The new reddening-free distance
modulus corresponds to $1.36\pm0.10$ Mpc.
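The metric distance follows from the standard relation $\mu_0 = 5\log_{10}(d/10\,\mathrm{pc})$; a one-line check of the quoted numbers:

```python
# Converting the distance modulus to a metric distance:
# mu_0 = 5*log10(d / 10 pc)  =>  d = 10**(mu_0/5 + 1) pc.
mu0, dmu = 25.67, 0.16                     # mag, the values quoted above

d_mpc = 10 ** (mu0 / 5 + 1) / 1e6          # ~ 1.36 Mpc
err_mpc = d_mpc * (10 ** (dmu / 5) - 1)    # ~ 0.10 Mpc (symmetrized)
```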
This new distance is consistent, within the errors, with the value
$\mu_0=25.5\pm 0.2$ given by CPB, though, formally, now NGC 3109 is
placed $\sim7\%$ further away. It is instructive to inspect the origin
of this difference. In CPB we had only B and V-band data. It was
difficult to apply the multiwavelength method to this data set, in view
of the closeness of the two bands and of the poor coverage in the V
band. For this reason, CPB preferred to adopt a mean reddening
E(B-V)=0.04 for NGC 3109 and estimate the true distance modulus with
this assumption in mind. The reddening estimate (from the Burstein \&
Heiles, 1992, contour maps) was admittedly poor (as noted also by CPB).
The difference between the reddening adopted by CPB and the present
direct estimate of the mean reddening toward NGC 3109 completely
explains the difference in the final absolute modulus. As further
evidence, we can refer to the exercise presented by Piotto et al.~(1995),
where we used the two B and V apparent relative distance moduli of NGC
3109 for a rough estimate of the true relative distance modulus with the
multiwavelength method applied also in the present paper. Even with a
poorer data set Piotto et al. (1995, cf. their Fig.~2) obtained a true
relative distance modulus $\Delta\mu_0=7.23\pm 0.10$, much closer to the
present estimate. This discussion shows by itself the importance of
extending the Cepheid light curves to the largest possible wavelength
interval. For this reason, we are observing these same Cepheids of
NGC~3109 in the JHK bands. We expect to reduce the error in the distance
and reddening estimate. For the importance of the JHK observations of
Cepheids, see Madore and Freedman (1991).
Finally, we want to briefly comment on the similarity of {\it (i)} the
slopes of the Cepheid PL and PC relations, {\it (ii)} the average color,
and {\it (iii)} the properties of the amplitude-period relations, which
result from a direct comparison of the data from five galaxies (cf.
discussion in Section 5.1, and 6). This is consistent with the
fundamental assumption on which the entire extragalactic distance scale
stands. However, we must also point out the large uncertainties still
present in the determination of all the above parameters ({\it cf.\/}\ also the
discussion in Section 5.2), in part as a consequence of the nature of
this kind of research, which requires a huge amount of telescope time.
It is clear that the dispersion (intrinsic and due to the
observational uncertainties) of the relations presented in Figs. 4-7
and 11-17 is too large for a firm conclusion on the
universality of the Cepheid properties. A lot of work on the theoretical
and observational side is still needed. In particular, we must take
advantage of the recent improvement in the near-IR detectors, where the
PL and PLC relations are much narrower.
\acknowledgements We wish to thank Wendy Freedman for providing us with
her LMC Cepheid data set in a computer readable form and Barry Madore
for the useful comments.\\ The authors acknowledge the support by the
Agenzia Spaziale Italiana. IM acknowledges the partial support of the
Istituto Italiano per gli Studi Filosofici, Napoli.
\newpage
\section{Introduction}
For improving the energy resolution
of cryogenic bolometers and calorimeters,
the thermal properties of
mesoscopic structures play a key role.
They can deviate significantly from the
properties of bulk materials due to the
surface effects that manifest in structures
with a large surface-to-volume ratio.
Breaking the periodicity of a bulk
material induces surface states
and modifies the bonding in the material, which
can dramatically change its electrical
and magnetic properties by
transforming conductors into insulators or
semiconductors, and by
inducing magnetism in materials that are
non-magnetic in bulk.
Anomalous behavior can also
result from impurities, dislocations in the lattice,
or from a natural oxide layer growing
on the surface.
Due to its good thermal,
electrical and mechanical
properties, copper is widely used
in low-temperature experiments, for instance in
thermometry, calorimetry and
bolometry, development of electric
current standards,
refrigeration by nuclear
demagnetization and
electronic cooling \cite{Giazotto:2006_thermometry,Enss:2015_noise_thermometry,Goldie:2011_MoCu_TES,Viisanen:2015_relax,Viisanen:2015_calorim,Pekola:2013_current_standard,Todoshenko:2014_nuclear_demagnetization_cooling,
ullomNIS:2013,Nguyen:cooler_2015}.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.48 \textwidth]{Fig1.pdf}
\caption{The set-up of the measurements. (a) A schematic presentation. Four coaxial direct current (dc) wires are connected to the sample for calibrating the heating power. A dc bias is applied to the NIS-temperature probe through a bias-tee and a cold bias resistor at the printed circuit board (pcb) of the sample box. The thermometer is operated with an rf-drive, directed to the sample through a wide band coaxial wire. The readout is performed through a lumped element Al resonator, which is operated in a transmission mode. The coupling to the $Z_0 = \SI{50}{\Omega}$ impedance
transmission lines is realized with
finger capacitors of capacitance
$C_1 = $\SI{0.025}{pF} and $C_2 = $\SI{0.22}{pF}.
The inductance of the spiral coil is \SI{94}{nH}
and the parasitic capacitance to ground from the resonator and the
sample is \SI{0.61}{pF}.
Another rf-wire is connected to the sample to allow for the application of fast heating pulses. (b) Coloured micrograph of the sample for the heat capacity measurement. The resistance, $R$, is measured for the \SI{12}{\mu m} long section of the actual wire, painted in yellow.}
\label{figure_setup}
\end{figure}
For applications in calorimetry, the main
characteristics of the metals
of interest are, in addition to
the electrical properties,
the specific heat of the conduction electrons
and their thermal coupling
to the environment.
Magnetic impurities at per mil concentration levels
are observed to enhance the specific heat of Cu
by a factor of a few at sub-\SI{1}{K}
temperatures \cite{cryo:book}, and
the surface of Cu has been proposed to host Kondo
impurities \cite{Pierre:2003}
on the basis of dephasing-time measurements,
which are sensitive to even a dilute magnetic
impurity concentration and have been
intensively studied in mesoscopic Cu structures
during the last decades
\cite{Vranken:1988_dephasing,Pierre:2003}.
The specific heat of an electron gas interacting with
a low concentration of randomly distributed magnetic
impurities is determined by the Kondo temperature,
$T_\mathrm{K}$, in the material and can exceed the
free electron specific heat by orders of
magnitude\cite{magnetic_impurities:2012}.
The surface of Cu structures
is usually covered by
its natural oxides. $\mathrm{CuO}$ is paramagnetic
even in bulk and $\mathrm{CuO_2}$
has been observed to exhibit various magnetic properties
such as antiferromagnetism and
ferromagnetism in
nanoparticles,
even though the bulk structure is
diamagnetic \cite{Ahmad:2005_CuOx_nanoparticles,Chen:2009_CuOx_nanoparticles}.
Theoretical analysis of
Cu oxide surfaces traces the origin
of ferromagnetism in the pure
$\mathrm{CuO_2}$ nanoparticles to the
increased 2p-3d hybridization in the
nanomaterial,
and the modelling of cation vacancies in
an ideal $\mathrm{CuO_2}$ crystal as well as on its
surfaces indicates the
vacancies to be a possible
source of the observed magnetic
moments \cite{Yu:2015_dft_CuO_surface,Chen:2009_CuOx_nanoparticles}.
Cupric oxide layers play an important
role also in the observed high transition temperature
superconductivity of several heavy fermion compounds,
where the effective mass can exceed that of
bare electrons by two orders of magnitude
resulting in significant increase
in the specific heat of the
compound \cite{Coleman:2007_heavy_fermions_book}.
Lattice dislocations
induce electric field gradients (EFG)
in a material.
Additional heat capacity arising from
the nuclear spin coupling to the
EFG created by
crystal field distortions and magnetic impurities
in metals has been discussed
in Ref.~\cite{Siemensmeyer:1992}.
Previous measurements in magnetic
metallic calorimeters suggest that
additional specific heat in Au arises
from the nuclear quadrupole coupling
to the EFG introduced by Er ions\cite{Enss:2000_calorimeter}.
The accurate estimation of strain-induced EFGs
is challenging in practice, since each individual
dislocation in the lattice should be
included in the model.
\section{Description of the experiment}
In this paper we present
specific heat measurements on Ag and Cu thin-film wires
at temperatures of $\SI{120}{}-\SI{250}{mK}$.
The normal metal specific heat
at low temperatures is given by
$c = \gamma T + \beta T^3$,
where the first term
is due to the conduction
electrons and the second one comes
from the lattice phonons.
At sub-kelvin temperatures, the
phonons are frozen out and the
electrons are left as the main source of
heat capacity, whereas
at higher temperatures, the electronic
contribution is negligible.
The specific heat measurements on bulk Cu and Ag
address the sum of the two
components
\cite{Martin:specific_heat:1968,Corak:atomic_heat:1955,kittel, pobell}.
The literature values of the low temperature
specific heat
for bulk Cu are usually
\SI{30}{\percent} higher than
the free-electron estimate,
while Ag seems to follow
the free-electron
theory.
In those measurements,
the electronic contribution is
extracted based on the temperature
dependence, whereas, in our setup,
the electronic heat capacity
is measured directly
as we Joule heat the electrons by electric current.
We observe that Cu wires exhibit an
anomalously large specific heat,
exceeding the free-electron estimate
by up to an order of magnitude, whereas
the Ag wire follows the free-electron estimate.
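For reference, the free-electron coefficients behind these comparisons can be sketched directly; the carrier densities and Fermi energies below are standard textbook values, not measurements from this work, and the literature bulk value for Cu (roughly $98~\mathrm{J\,m^{-3}K^{-2}}$) is indeed about \SI{30}{\percent} above the estimate:

```python
import math

# Free-electron (Sommerfeld) estimate of the volumetric specific-heat
# coefficient, gamma = (pi^2/2) n k_B^2 / E_F.  Carrier densities and
# Fermi energies are textbook values (assumed, not from this work):
# Cu: n = 8.47e28 m^-3, E_F = 7.00 eV;  Ag: n = 5.86e28 m^-3, E_F = 5.49 eV.
k_B = 1.380649e-23     # J/K
eV = 1.602176634e-19   # J

def sommerfeld_gamma(n, E_F_eV):
    """Volumetric Sommerfeld coefficient in J m^-3 K^-2."""
    return math.pi**2 / 2 * n * k_B**2 / (E_F_eV * eV)

gamma_cu = sommerfeld_gamma(8.47e28, 7.00)   # ~ 71 J m^-3 K^-2
gamma_ag = sommerfeld_gamma(5.86e28, 5.49)   # ~ 63 J m^-3 K^-2
```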
\begin{table}[h]
\centering
\caption{Parameters of the samples and the resonators. The resistivities, $\rho$, of the metals are calculated from the measured $R$ and the dimensions of the wires, given in Table~\ref{table_sample_volume}. }
\begin{tabular}{ | c | c | c | c | c | c | c |c |c |c |}
\hline
Sample & Mate- & $R$ & $\rho$& $R_\mathrm{T}$ & $\Delta$ & $R_0$ & $\omega_0/2\pi$ \\
&rial & $(\Omega)$ & $(\Omega m)$ & ($\mathrm{k\Omega}$) & $(\mathrm{meV})$ & ($\mathrm{k\Omega}$) & $(\mathrm{MHz})$ \\ \hline
A & Cu & 96.9 & \SI{3.3e-8}{} & 29 & 2.16 & 34 & 550 \\
B & Cu & 18.6 & \SI{3.0e-8}{} & 21 & 2.38 & 33 & 563 \\
C & Ag & 80.5 & \SI{3.2e-8}{} & $11$ & 2.12 & 45 & 552 \\
\hline
\end{tabular}
\label{table_NIS}
\end{table}
\begin{figure}[b]
\centering
\includegraphics[width = 8cm]{Fig2.pdf}
\caption{Measured $|s_{21}|^2$ as a function of $V_\mathrm{b}$ in sample A. The theoretical estimates (black lines) are obtained with Eqs.~(\ref{s21}) and (\ref{nis_conductance}), assuming $T_{\mathrm{e}}$ to be independent of $V_\mathrm{b}$. This is a reasonable assumption for samples with large normal metal volume at biases $eV_\mathrm{b} < \Delta$. In the top panel, the current-voltage characteristics of the normal metal wires in samples A, B and C are shown. The solid lines are linear fits to the measured data.}
\label{resistances_rfiv}
\end{figure}
The thin film samples were patterned
by electron beam lithography and the
metals were deposited using standard
electron beam evaporation. We measure
the temperature of the wires using
a normal metal-insulator-superconductor
(NIS) probe embedded in a lumped
element tank circuit
\cite{Rowell:1976_nis,Schmidt:2003_thermometry,Viisanen:2015_relax,Viisanen:2015_calorim}. A schematic illustration
of the measurement setup and an image of the actual structure are shown in
Fig.~\ref{figure_setup}.
The thermometer can detect temporal evolution
of temperature down to
$\mathrm{\mu s}$ time resolution in mesoscopic metal structures,
allowing the measurements of thermal relaxation rates
and electronic heat capacities. In most cases the probe measures the quasiparticle current of the junction \cite{Viisanen:2015_relax}, but when placed near a superconductor, it can also operate as a Josephson thermometer \cite{Saira:2016_supercurrent} with lower dissipation.
Upon optimization, the device is a promising candidate
for microwave calorimetry,
in which case the finite volume of the
electron gas in the normal wire works as an absorber
for incoming radiation.
For a sensitive calorimeter, it is necessary
to minimize the size of the wire,
whereas for the present heat capacity measurement,
this is not essential.
The NIS-junction is connected to a \SI{10}{MHz}
bandwidth lumped element
resonator with $\omega/2\pi = $\SI{0.5}{GHz}
resonance frequency, and it is operated here
in the temperature range of $\SI{80}{}-\SI{300}{mK}$.
The heat capacity,
$\mathcal{C}_{\mathrm{e}}$, of the conduction electrons
in the normal metal island is measured
by heating the wire with
a voltage pulse and probing the electron
temperature, $T_{\mathrm{e}}$,
in the normal metal. The thermal response of the wire
depends on the magnitude of the heating pulse,
$P_{\mathrm{H}}$, the heat conduction from the normal metal
electrons to the environment, $G_\mathrm{th}$, and $\mathcal{C}_{\mathrm{e}}$.
All these quantities are determined separately
in our measurement setup.
\begin{figure}[t]
\centering
\includegraphics[width = 0.48 \textwidth]{Fig3.pdf}
\caption{Measurements of thermal coupling of the electrons in the wire to the bath. (a) Measured steady-state temperature $T_{\mathrm{e}}$ of sample A (28 nm thick Cu) against the amplitude of the continuously applied ac heating voltage at different bath temperatures. The solid lines are the theoretical estimates corresponding to $P_{\mathrm{H}} + \dot{Q}_\mathrm{ep}+ \dot{Q}_\mathrm{nis} + \dot{Q}_\mathrm{S} = 0 $, $\Sigma$ and $\rho_{\mathrm{N}}$ ($\approx \SI{2E-8}{\Omega m}$) are used as fitting parameters. The dashed lines are calculated by neglecting $\dot{Q}_\mathrm{S}$, which is justified at temperatures below \SI{250}{mK}. (b) The electron-phonon coupling constant, extracted for samples A-D, shown against bath temperature $T$.}
\label{figure_samples}
\end{figure}
The heating power applied to the
electron gas
is determined in a four probe measurement, see Fig.~\ref{figure_setup}.
Two pairs of dc lines
are connected to the island through
direct NS-contacts.
The resistance
of the section of the wire
between these contacts, $R$,
is measured by applying dc current,
$I$, through it and
measuring the voltage,
$U$, across it.
The measurement data are shown in
the top panel of
Fig.~\ref{resistances_rfiv}. The linear slope of
the $IU$-curve gives the resistance of the wire,
see Table~\ref{table_NIS}.
In the heat capacity measurement,
the dc lines are left floating
and the heating voltage, $V_{\mathrm{H}}$, is applied
through a low-pass filtered and attenuated rf-line.
The magnitude of the rf heating
is given by $P_{\mathrm{H}} = A V_{\mathrm{H}}^2$, where
the constant $A$ is determined by
the attenuation in the heater line and the
resistance of the normal wire.
In practice, $A$ is determined
by comparing the response
of $T_{\mathrm{e}}$ to the rf power and dc heating $UI$.
At resonance, the transmittance of voltage in the circuit
can be written as
\begin{equation}
|s_{21}| = 2\kappa G_0/(G+G_0),
\label{s21}
\end{equation}where $\kappa \approx C_1/C_2$,
$G_0 = Z_0(C_1^2+C_2^2)\omega_0^2$ and
$G=R_{\mathrm{nis}}^{-1}$ is the
differential conductance of the junction,
given by
\begin{equation}
G = \frac{1}{\kBT_{\mathrm{e}}R_\mathrm{T}}\int^{\infty}_{-\infty}\mathrm{d}E\, n_{\mathrm{S}}(E)f(E-eV_\mathrm{b})(1-f(E-eV_\mathrm{b})).
\label{nis_conductance}
\end{equation}Here $T_{\mathrm{e}}$ is the electronic temperature in the normal metal, $f$ is the Fermi function, $f(E)~=~[1+\exp{(E/{\kBT_{\mathrm{e}}}})]^{-1}$ and
$n_{\mathrm{S}}~=~n_0 |\mathrm{Re} ( E/
\sqrt{E^2-\Delta^2})|$ is the Bardeen-Cooper-Schrieffer density of states (DOS) of the superconductor, with
$n_0$ the normal metal DOS at the Fermi level.
The tunnelling resistances, $R_\mathrm{T}$,
of the NIS-junctions
are measured at room temperature.
The parameters of the resonators and the
junctions are shown in
Table~\ref{table_NIS}.
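These circuit parameters can be cross-checked directly: inserting the quoted capacitances and resonance frequency of sample A into $G_0=Z_0(C_1^2+C_2^2)\omega_0^2$ reproduces the $R_0$ listed in Table~\ref{table_NIS}. A short sketch:

```python
import math

# Cross-check of the readout-circuit parameters of sample A: the effective
# environment conductance G0 = Z0*(C1^2 + C2^2)*omega0^2 should reproduce
# R0 = 1/G0 ~ 34 kOhm listed in the table (values from the text/caption).
Z0 = 50.0                      # Ohm, transmission-line impedance
C1 = 0.025e-12                 # F, input coupling capacitor
C2 = 0.22e-12                  # F, output coupling capacitor
omega0 = 2 * math.pi * 550e6   # rad/s, sample A resonance

G0 = Z0 * (C1**2 + C2**2) * omega0**2   # effective environment conductance
R0 = 1.0 / G0                           # ~ 34 kOhm
kappa = C1 / C2                         # approximate coupling factor
```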
Typically, $R_\mathrm{T}$ increases
by about \SI{10}{\percent}\cite{Gloos:2003}, when cooled
down to sub-Kelvin temperatures.
By measuring the $|s_{21}|^2$ vs
$V_\mathrm{b}$ characteristics of the samples,
one obtains an estimate for $R_0/R_\mathrm{T}$.
To determine $R_0$, the values of $R_\mathrm{T}$
measured at room temperature,
increased by \SI{10}{\percent}, are used.
Fig.~\ref{resistances_rfiv}(b) shows
the $|s_{21}|^2$ vs $V_\mathrm{b}$ characteristics
measured in sample
A at a few different bath temperatures.
The superconducting gap is
obtained by fitting to the data at base temperature.
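The shape of these curves can be reproduced by evaluating the conductance integral of Eq.~(\ref{nis_conductance}) numerically. The sketch below uses illustrative parameters (an Al-like gap of \SI{200}{\micro eV} and a small Dynes-type smearing added purely for numerical stability, neither taken from the fits):

```python
import math

# Numerical sketch of the thermally smeared NIS conductance integral.
# Parameters are illustrative: Delta = 200 ueV (typical for Al) and a tiny
# Dynes smearing gamma_D to regularize the BCS singularity at |E| = Delta
# (a standard numerical device, not part of the model in the text).
k_B = 1.380649e-23          # J/K
e_charge = 1.602176634e-19  # C
Delta = 200e-6 * e_charge   # assumed superconducting gap
R_T = 29e3                  # Ohm, sample-A-like tunnelling resistance
gamma_D = 1e-4 * Delta      # numerical smearing only

def n_s(E):
    """BCS density of states with a tiny imaginary smearing."""
    z = complex(E, gamma_D)
    return abs((z / (z * z - Delta * Delta) ** 0.5).real)

def conductance(V_b, T_e):
    """G(V_b, T_e) = (k_B T_e R_T)^-1 * integral n_S(E) f (1 - f) dE."""
    kT = k_B * T_e
    n_pts, E_max = 40001, 8 * Delta
    dE = 2 * E_max / (n_pts - 1)
    total = 0.0
    for i in range(n_pts):
        E = -E_max + i * dE
        x = (E - e_charge * V_b) / kT
        if abs(x) < 40.0:               # f*(1 - f) is ~0 outside this window
            f = 1.0 / (1.0 + math.exp(x))
            total += n_s(E) * f * (1.0 - f) * dE
    return total / (kT * R_T)
```

In the limits $eV_\mathrm{b}\gg\Delta$ and $eV_\mathrm{b}=0$ the integral reproduces, respectively, the expected $G\to 1/R_\mathrm{T}$ and a strongly suppressed sub-gap conductance.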
\section{Thermal conductance measurements}
The steady-state temperature
on the normal metal
island is governed by the heat balance
$\dot{Q} + P_{\mathrm{H}} =0$.
In the present configuration, $\dot{Q}$ is given by $\dot{Q}=\dot{Q}_\mathrm{ep}+\dot{Q}_\mathrm{nis}+\dot{Q}_\mathrm{S}+\dot{Q}_{0}$,
where $\dot{Q}_\mathrm{ep}~=~\Sigma\mathcal{V}(T^5-T_e^5)$ is
the power to the electron gas of the metal wire via electron-phonon scattering,
$\dot{Q}_\mathrm{nis}$ is the heat carried by the tunnelling quasiparticles
across the NIS-temperature probe
and $\dot{Q}_\mathrm{S}$ is the heat leak across the
superconducting aluminium leads. Here, $\Sigma$ is a material-specific parameter to be discussed below,
$\mathcal{V}$ is the volume of the normal metal island
and $T$ is the temperature of the phonon bath.
The background power from the environment to the island, $\dot{Q}_{0}$,
is assumed to be constant (at all bath temperatures), and we assume that $T$
does not depend on the heating power.
Due to $\dot{Q}_{0}$,
$T_{\mathrm{e}}$ of the carefully thermally isolated nanowire saturates to a value close to \SI{100}{mK}
even without external power.
Since all the leads directly connected to the normal
metal wire are superconducting, the heat
flow along them is suppressed at
low temperatures.
For a superconducting wire
of length $l$, thickness $t$ and width $w$, $\dot{Q}_\mathrm{S}$
can be estimated at temperatures $\kBT_{\mathrm{S}} \ll \Delta$ with
$\dot{Q}_\mathrm{S} \approx -\frac{wt}{l}\int \limits_{T_{\mathrm{S}}}^{T_{\mathrm{e}}} dT' \kappa_{\mathrm{S}} (T')\mathrm{e}^{-\Delta/k_{\mathrm{B}} T'}
$, where $\kappa_{\mathrm{S}} (T')= \frac{6}{\pi^2}\frac{\mathcal{L}\Delta}{k_{\mathrm{B}}\rho_{\mathrm{N}}} \frac{\Delta}{k_{\mathrm{B}} T'}$, $\mathcal{L}$ is the
Lorenz number and $\rho_{\mathrm{N}}$ is the normal state resistivity
of the superconductor.
In the present sample, there are four superconducting
leads directly connected to the normal metal wire,
each of them thermalized to $T_{\mathrm{e}}$
at the NS interface on the wire and to $T$
at the intersection between the
superconducting lead and the normal metal shadow.
$\dot{Q}_\mathrm{nis}$ is given by $\dot{Q}_\mathrm{nis}=\frac{1}{e^2R_\mathrm{T}}\int n_{\mathrm{S}}(E)(E-eV_\mathrm{b})[f_{\mathrm{N}}(E-eV_\mathrm{b})-f_{\mathrm{S}}(E)]\mathrm{d}E$. $R_\mathrm{T}$ and $\mathcal{V}$ of the measured samples
are sufficiently large, such that $\dot{Q}_\mathrm{nis} \ll \dot{Q}_\mathrm{ep}$, which allows us to extract
$\Sigma$ in the normal wire by measuring $T_{\mathrm{e}}$ under
different heating powers, as
shown in Fig.~\ref{figure_samples}(a).
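For orientation, the steady-state solution of this heat balance in the electron-phonon dominated regime can be sketched directly; the wire volume below is an assumed, representative number rather than the exact sample geometry:

```python
# Steady-state solution of P_H + Sigma*V*(T^5 - T_e^5) = 0, valid when the
# electron-phonon channel dominates (below ~250 mK here).  Sigma is the
# value measured for Cu in this work; the wire volume is an assumed,
# representative figure, not the exact sample dimensions.
Sigma = 2e9        # W m^-3 K^-5 (Cu, this work)
V_wire = 3e-20     # m^3, assumed ~ 12 um x 100 nm x 25 nm

def steady_state_Te(T_bath, P_heat):
    """Electron temperature (K) at bath temperature T_bath under power P_heat."""
    return (T_bath**5 + P_heat / (Sigma * V_wire)) ** 0.2

Te_weak = steady_state_Te(0.100, 1e-18)    # 1 aW barely heats the wire
Te_strong = steady_state_Te(0.100, 1e-15)  # 1 fW lifts T_e well above bath
```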
The electron-phonon coupling constant $\Sigma$ is
measured in samples A-D. The results are shown
as a function of $T$ in
Fig.~\ref{figure_samples}(b), and the
average values are gathered in
Table~\ref{table_sample_volume}.
The parameters are determined by measuring
the steady-state temperature under heating
(samples A-C) and as a reference in a standard
SINIS cooling experiment \cite{Giazotto:2006_thermometry}
using a DC measurement with four NIS tunnel junctions (sample D).
The values measured in this work,
$\Sigma \approx \SI{2E9}{W/m^3K^5}$ for Cu
and $\Sigma \approx \SI{3E9}{W/m^3K^5}$
for Ag,
are in agreement with
previously measured values for Cu \cite{Meschke_Cu_sigma:2004}.
For Ag, there are fewer data available
in the literature, but an estimate of \SI{0.5E9}{W/m^3K^5}
was inferred by using
$\Sigma$ as a fitting parameter
in the experiment of
Ref.~\cite{Steinbach_Ag_sigma:1996}.
For reference, rough theoretical
estimates of $\Sigma$ can be written
with the density, $\rho$, the speed
of sound, $c$, and the Fermi
wave vector,
$k_{\mathrm{F}}$, in the metal as
$\Sigma = \frac{\zeta(5)}{3\pi ^2}\frac{k_{\mathrm{F}}^4k_{\mathrm{B}}^5}{\hbar^3c^4\rho}$
\cite{Wellstood:1994_eprel},
where $\zeta(5)\approx \SI{1.0369}{}$.
For Ag and Cu, this would give values $\Sigma=\SI{4E8}{W/m^3K^5}$ and $\Sigma=\SI{2E8}{W/m^3K^5}$, respectively.
\begin{figure}[b]
\centering
\includegraphics[width=8.5cm]{Fig4.pdf}
\caption{Thermal relaxation measurements. (a) Time evolution of $T_{\mathrm{e}}$ in a thin Cu wire (sample A) after switching on the heating pulse. The black solid line is calculated by solving the heat balance equation~(\ref{eq_heat_balance}). (b) Thermal relaxation of thin Cu (sample A) wire at $T=107$ mK (blue) and Ag (sample C) wire at $T=102$ mK (red) after switching off the heating. The quantity on the vertical axis, obtained by subtracting the background and normalizing the signal, is proportional to $\Delta T_{\rm e}=T_{\rm e}-T$. The solid lines are exponential fits to the data. The timetraces are obtained by averaging over $10^5$ repetitions.}
\label{figure_traces}
\end{figure}
\begin{figure}[t]
\includegraphics[width=7cm]{Fig5.pdf}
\caption{Thermal relaxation time constants observed
in a silver wire and three copper wires of different lengths. The dashed line
shows the $\tau = \gamma/(5\Sigma T^{3})$ dependence, using the free-electron Sommerfeld coefficient $\gamma$ and the measured $\Sigma$ of Ag
($\gamma/(5\Sigma) = \SI{4.6E-9}{sK^3}$).
The decrease of the error bars in the Ag data at \SI{150}{mK} is due to the increased averaging and larger heating amplitudes used at higher temperatures. The data measured at zero bias indicate slower thermal relaxation both in Cu and Ag. This can be partly due to the enhanced heat transport via quasiparticle tunnelling compared with the electron-phonon heat current in the smaller samples. A more probable explanation is, however, the saturation of $T_{\mathrm{e}}$ at low temperatures due to the noise induced by biasing the junction. }
\label{figure_relaxation_times}
\end{figure}
\section{Heat capacity measurements}
We measure $\mathcal{C}_{\mathrm{e}}$ of a normal metal wire
by heating it with a voltage pulse of finite length
and observing the time-dependent temperature $T_{\mathrm{e}}$
of the electron system.
Its time evolution
is determined by the heat balance equation
\begin{equation}
\mathcal{C}_{\mathrm{e}} \frac{\mathrm{d} T_{\mathrm{e}}}{\mathrm{d} t}= \dot{Q} + P_{\mathrm{H}}.
\label{eq_heat_balance}
\end{equation}
The measured $T_{\mathrm{e}}$
after switching on a sinusoidal
\SI{6}{MHz} heating pulse is shown in
Fig.~\ref{figure_traces}(a) as a function of time.
Below $T = \SI{250}{mK}$,
due to the suppressed heat flow
across the superconducting leads,
the heat exchange between the normal metal electrons
and the environment
occurs mainly through the electron-phonon scattering
as was demonstrated in Fig. \ref{figure_samples}.
Since $\Sigma$ was measured independently in a DC
measurement, $\mathcal{C}_{\mathrm{e}}$ can then be determined accurately
from the measured time traces.
In particular, for small enough power, $T_{\mathrm{e}}$ decays
exponentially back to equilibrium
with the time constant
$\tau = \mathcal{C}_{\mathrm{e}}/G_{\rm th}$ after switching off the heating pulse,
see Fig.~\ref{figure_traces}(b).
The thermal conductance is given
by $G_{\rm th} = -\frac{d\dot Q_{\rm ep}}{dT_{\rm e}} = 5\Sigma\mathcal{V}T_{\mathrm{e}}^4$.
The heat capacity of the conduction electrons in
a conventional metal can be estimated with
the free-electron model as
$\mathcal{C}_{\mathrm{e}} = \gamma\mathcal{V}T_{\mathrm{e}}$, where $\gamma= \frac{\pi^2}{3} n_0 k_B^2$
is the Sommerfeld coefficient for the metal. For this ideal system,
we thus expect the relaxation time to
obey $\tau = \gamma/(5\Sigma T_{\mathrm{e}}^3)$.
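With the free-electron $\gamma$ of Ag and the measured $\Sigma$ (both approximate), this prediction gives microsecond-scale relaxation in the relevant temperature range, close to the $\gamma/(5\Sigma)=\SI{4.6E-9}{sK^3}$ prefactor of the dashed line in Fig.~\ref{figure_relaxation_times}:

```python
# Expected electron-phonon relaxation time tau = gamma/(5*Sigma*T^3) for an
# ideal free-electron metal.  gamma ~ 63 J m^-3 K^-2 is the free-electron
# value for Ag (assumed); Sigma ~ 3e9 W m^-3 K^-5 is the value measured
# here for Ag.
gamma_ag = 63.0    # J m^-3 K^-2
Sigma_ag = 3e9     # W m^-3 K^-5

def tau_ep(T):
    """Electron-phonon relaxation time (s) at temperature T (K)."""
    return gamma_ag / (5 * Sigma_ag * T**3)

prefactor = gamma_ag / (5 * Sigma_ag)   # ~ 4e-9 s K^3
tau_150mK = tau_ep(0.150)               # ~ 1 us
```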
We have measured the thermal relaxation times
of Cu and Ag wires at different bath temperatures,
$T$. The thermal relaxation
times after switching off the heating are
obtained by fitting an exponential function
to the measured data, from which the background
is subtracted and the trace is normalized.
There is a $\sim$\SI{2}{\micro s} electrical
transient when switching the heating
pulse on and off: data over this time
is not included in the fits. In the
Cu samples, the relaxation seems to
consist of two time constants, whereas
in the Ag samples,
the relaxation can be fitted with a
single exponential with time constant $\tau$, which is clearly
faster than the time constants in Cu.
This is the most direct evidence of the
largely different specific heats of Ag and Cu wires,
see Fig.~\ref{figure_traces}(b).
In Fig.~\ref{figure_relaxation_times},
the thermal relaxation times in Cu and Ag
wires are shown as a function of $T$.
The time constants shown for Cu are
obtained with a two-exponential fit,
and only the faster time constants
are included in the plot.
The slower relaxation observed
in the Cu samples depends on the
amplitude and the length of the
heating pulse, increasing up to
around \SI{150}{\micro s}
at large heating voltages.
This relaxation is possibly
related to an additional heat capacity
that relaxes slowly after switching off
the heating pulse. When the heating pulse
is short and the temperature rise only a
few mK, this effect is negligible.
To rule out thermal gradients along the
wire, we have measured the temperature
relaxation in two additional Cu wires of
lengths \SI{8}{\mu m} and \SI{2}{\mu m},
see Fig.~\ref{figure_relaxation_times}.
By describing the thermal relaxation of the
free electrons in the normal metal wire
with the Wiedemann-Franz relation between
the thermal and electrical conductivities,
$\kappa/\sigma = LT_{\mathrm{e}}$, where $L$ is the Lorenz number,
the response time of the electron temperature in the wire
can be estimated as
$\tau_{\mathrm{e-e}}\simeq \mathcal{C}_{\mathrm{e}}l^{2}/(\mathcal{V}\kappa_{\mathrm{th}})=\gamma\rho l^2/L$.
For the \SI{24}{\mu m} wires this
results in about \SI{50}{ns},
which is about three decades smaller than the
timescales observed in the Cu wires.
Due to the quadratic dependence on length,
the relaxation in the \SI{2}{\mu m} wire should be over
two decades faster compared with the
\SI{24}{\mu m} wires, which is not
observed in our measurements.
From these perspectives we can rule out
the possibility of significant thermal gradients
along the wires.
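The estimate and its quadratic length dependence can be checked with a short calculation. The resistivity used here is an assumed thin-film figure chosen for illustration, not a measured value from this work; the Lorenz number and Sommerfeld coefficient are standard.

```python
# Wiedemann-Franz estimate of the internal thermal response time,
# tau = gamma * rho * l^2 / L.  rho is an ASSUMED thin-film Cu resistivity.
L_lorenz = 2.44e-8   # W Ohm K^-2, Lorenz number
gamma_Cu = 70.7      # J m^-3 K^-2, Sommerfeld coefficient of Cu
rho = 3.0e-8         # Ohm m, assumed thin-film Cu resistivity

taus = {l: gamma_Cu * rho * l**2 / L_lorenz for l in (24e-6, 2e-6)}
for l, tau in taus.items():
    print(f"l = {l*1e6:4.1f} um -> tau = {tau*1e9:7.2f} ns")
```

With this resistivity, the \SI{24}{\mu m} wire gives a few tens of nanoseconds, and the \SI{2}{\mu m} wire is over two decades faster, consistent with the argument in the text.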
The specific heat of samples A-C, measured at different
bath temperatures,
is shown in Fig.~\ref{figure_specific_heat}.
The data are obtained by fitting the heat balance
equation~(\ref{eq_heat_balance}) to the measured
time evolution of $T_{\mathrm{e}}$ after switching on a
\SI{6}{MHz} sinusoidal heating voltage,
as illustrated in Fig.~\ref{figure_traces}.
Compared to the Cu wires, the thermal relaxation time of the
Ag wire is clearly shorter, and hence its specific heat
is lower.
The results are similar to those obtained from the decay traces
after switching off the pulse.
In practice the fast relaxation means that the measurement
in the Ag wire is limited to temperatures well below 0.2 K
because of the finite bandwidth of the thermometer.
The largest specific heat is observed in the thin Cu wire.
Especially with large amplitude
heating pulses, an extra, very slow relaxation
is observed in copper samples, which is evident in the
non-exponential decay in Fig.~\ref{figure_traces}(b)
as well. Therefore, for the analysis of the copper data,
only the beginning of the trace, up to $\sim 1.5\tau$ after
switching on the heating pulse, was included in the fits.
Within the free-electron model, the Sommerfeld coefficient $\gamma$
takes the values $\gamma= \SI{70.7}{J/m^3K^2}$
and $\gamma = \SI{62.4}{J/m^3K^2}$ for Cu and Ag, respectively.
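These values can be reproduced from the free-electron expressions. The conduction-electron density of Cu below is a textbook value (one electron per atom), not a number from this work; the calculation uses $E_F = \hbar^2(3\pi^2 n)^{2/3}/2m_e$ and $\gamma = (\pi^2/3)k_B^2\, g(E_F)$ with $g(E_F)=3n/2E_F$.

```python
import numpy as np

# Free-electron check of the Sommerfeld coefficient of Cu.
hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # kg
k_B = 1.381e-23      # J/K
n_Cu = 8.45e28       # m^-3, conduction-electron density (textbook value)

E_F = hbar**2 * (3 * np.pi**2 * n_Cu)**(2/3) / (2 * m_e)
gamma = np.pi**2 / 3 * k_B**2 * 3 * n_Cu / (2 * E_F)
print(f"E_F = {E_F/1.602e-19:.1f} eV, gamma = {gamma:.1f} J m^-3 K^-2")
```

The result lands close to the \SI{70.7}{J/m^3K^2} quoted for Cu, confirming internal consistency of the free-electron numbers.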
According to the data in Fig.~\ref{figure_specific_heat},
the Ag wire follows the free-electron theory,
while an anomalously large
specific heat is observed in the Cu wires.
The specific heat of the thinner Cu
wire exceeds the expectation
value by an order of magnitude, while
that of the thicker wire is somewhat lower.
The temperature dependence in the two Cu
samples is nearly quadratic, rather than the linear
dependence predicted by the free-electron theory.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{Fig6.pdf}
\caption{Specific heat of the normal metal wires measured at different bath temperatures. The solid lines correspond to the free-electron estimate for Cu (red) and Ag (blue). The data for the Ag wire falls close to the free-electron estimate, while the specific heat of the Cu wires is clearly higher with rather a quadratic dependence on $T$ (dashed lines). The normalized specific heat, $\mathcal{C}_{\mathrm{e}}/(\gamma\mathcal{V} T)$, is shown as a function of $T$ in the inset of the figure. Here $\gamma$ is the Sommerfeld coefficient for each metal.}
\label{figure_specific_heat}
\end{figure}
\begin{table}[b]
\centering
\caption{Physical dimensions and the average values of $\Sigma$ of the normal metal wires measured in this work. The resistances of the metals are measured for the \SI{12}{\mu m} long section of the $l=\SI{24}{\mu m}$ long wires. }
\begin{tabular}{ | c | c | c | c | c | c | c | }
\hline
\parbox[t]{3mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{Sample}}} &\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{Material}}} & & & & & \\
& &$l$ & $w$ & $t$ & $\mathcal{V}$ & $\Sigma$\\
& & ($\mathrm{\mu m}$) & $(\mathrm{nm})$ & $(\mathrm{nm})$ & $(\mathrm{\mu m^3})$ & $\Big(\mathrm{\frac{GW}{m^3\,K^5}}\Big)$ \\ \hline
A & Cu & 24$\pm$0.1 & 145$\pm$5 & 28$\pm$1 & 0.10$\pm$0.01 & 2.2$\pm$0.2 \\
B & Cu & 24$\pm$0.1 & 150$\pm$10 & 128$\pm$1 & 0.43$\pm$0.03 & 2.0$\pm$0.1\\
C & Ag & 24$\pm$0.1 & 125$\pm$5 & 37$\pm$2 & 0.11$\pm$0.01 & 2.7$\pm$0.3 \\
D & Cu & 7.2$\pm$0.1 & 180$\pm$10 & 34$\pm$1 & 0.044$\pm$0.004 & 2.1$\pm$0.3 \\
\hline
\end{tabular}
\label{table_sample_volume}
\end{table}
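The tabulated volumes follow from $\mathcal{V}=l\,w\,t$; a quick arithmetic cross-check on the dimensions transcribed from the table:

```python
# Cross-check of the wire volumes in the table, V = l * w * t.
dims = {  # sample: (l, w, t) in metres, transcribed from the table
    "A": (24e-6, 145e-9, 28e-9),
    "B": (24e-6, 150e-9, 128e-9),
    "C": (24e-6, 125e-9, 37e-9),
    "D": (7.2e-6, 180e-9, 34e-9),
}
vols = {s: l * w * t * 1e18 for s, (l, w, t) in dims.items()}  # um^3
for s, V in vols.items():
    print(f"sample {s}: V = {V:.3f} um^3")
```

Samples A, C and D reproduce the tabulated values; sample B comes out slightly above the tabulated \SI{0.43}{\mu m^3}, at the edge of the quoted uncertainty.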
\section{Detailed characterization of the samples}
The measured samples are fabricated by
electron-beam lithography and multi-angle
metal deposition in an electron-beam evaporator.
In the evaporator,
the samples are mounted on a
room temperature platform in a
vacuum chamber.
During evaporation, the pressure in
the chamber is between \SI{1E-7}{mbar}
and \SI{1E-6}{mbar} and the
growth rate of the film is
in the range \SIrange{0.1}{0.3}{nm/s}.
Typically the
samples are stored in ambient air
for a period ranging from
hours to days before the
cooldown to mK temperatures.
We have observed that the junction resistance of an
$\mathrm{Ag-AlO_2-Al}$ tunnel contact
increases when the sample is stored at room temperature;
within about a day, the electrodes
detach completely.
We have found this to be a
typical problem with the Ag samples,
while the $\mathrm{Cu-AlO_2-Al}$
tunnel junctions are more
robust.
\begin{figure}[t]
\includegraphics[width=8cm]{Fig7.pdf}
\caption{STEM images of samples of (a) Ag and (b) Cu thin film nanowires. Several wires are imaged and all the Ag samples show a uniform lattice structure throughout the wire. The copper lattice, however, is formed of different growth directions, typical for evaporated Cu. }\label{figure_STEM}
\end{figure}
The geometry and the atomic composition of
the samples are analyzed with scanning
electron microscopy (SEM),
atomic force microscopy (AFM),
secondary ion mass spectroscopy (SIMS)
and scanning transmission electron microscopy (STEM).
The length and width of
the wires are determined by SEM imaging, while
the thickness is measured by AFM.
These values are shown in Table~\ref{table_sample_volume}.
The STEM imaging of the lattice
structures is performed
for \SI{100}{nm} thick cross-sections
of the wires, cut out by milling with focused
ion beam (FIB).
Before exposing to FIB, the samples are
covered with a protective layer of Au.
Several wires are measured and the
lattice structure is
found to be similar to that of the bulk metal;
no gas pockets or other anomalies are
observed. The images
are shown in Fig.~\ref{figure_STEM}.
The lattice structure of the
evaporated Ag wires is
remarkably uniform; the lattice
vectors point to the same
direction throughout the wire.
In the Cu wires, different growth directions
are observed, which is typical for
evaporated Cu films.
The amount of magnetic impurities in the
evaporated metals is determined by
SIMS: a level of \SI{0.6}{ppm} is measured in Cu and
\SI{40}{ppm} in Ag \cite{eag}.
The analysis is carried out for
\SI{50}{nm} thick films, fabricated on a similar
substrate and deposited in the same
evaporator as the measured samples.
The Cu film fabricated from 99.9999\,\% pure source
material was observed to be fairly clean, with
magnetic impurity levels below one ppm
(Fe \SI{0.2}{ppm},
Cr \SI{0.02}{ppm},
Ni \SI{0.2}{ppm},
V \SI{0.004}{ppm}, Mn \SI{0.01}{ppm}
and Co \SI{0.2}{ppm}).
The Ag film deposited from 99.99\,\% pure
starting material was observed to contain
higher impurity levels
(Fe \SI{15}{ppm}, Cr \SI{5}{ppm}, Ni \SI{2}{ppm},
V \SI{0.01}{ppm}, Mn \SI{20}{ppm} and
Co \SI{0.08}{ppm}) \cite{eag}.
The impurities can also originate
from the surface of the source material.
Materials formed in a solution can have
lower surface contamination; this type of
material is termed ``shot''.
By using
99.99\,\% shot Ag, 99.999\,\% shot Ag and 99.999\,\% Ag,
we were able to reduce the level of Fe impurities below
\SI{3}{ppm}, but not lower. There was no significant
difference in the observed
impurity concentration of the
metal films evaporated of these
three source materials.
\section{Summary and outlook}
In conclusion, we have measured thermal
properties of thin Ag and Cu wires at
sub-kelvin temperatures and observed
an anomalously large specific
heat in the Cu wires.
Slow thermal relaxation in thin film Cu wires
was previously reported in
\cite{Viisanen:2015_relax,Viisanen:2015_calorim},
whereas here we trace the long time constant to a
large heat capacity, rather than small
thermal conductance.
The Ag samples are observed to follow the free-electron
estimate, consistent with literature values measured in bulk.
The band structures of Ag and Cu are similar,
but the energies of the
d-electrons are clearly lower in Ag \cite{Ag_Cu_band:1964},
which can explain why the free-electron model
applies better for Ag.
However, the observed anomaly in the electronic specific
heat of Cu is much larger than that previously measured in bulk,
exceeding the free-electron estimate by even an order of
magnitude.
The indication that the thicker
wire has a lower specific heat suggests that
the anomalously high specific heat could be
related to the surface of the metal.
However, more experiments are needed
to confirm this conclusion.
Magnetic impurities in Cu might increase the
specific heat of the metal
at low temperatures.
At temperatures below $T_\mathrm{K}$,
the specific heat contribution of an electron gas
interacting with magnetic impurities
is of order $c = k_{\mathrm{B}} T/T_{\mathrm{K}}$ per impurity,
which can be significantly larger than the
free electron estimate.
Around $T_\mathrm{K}$, $c$ develops a
Kondo peak even at zero magnetic field,
changing the power
law of the specific heat~\cite{magnetic_impurities:2012}.
Yet the concentration of
known magnetic impurities was measured to be
as low as \SI{0.6}{ppm} in the evaporated Cu,
compared with \SI{40}{ppm} in Ag.
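To give a rough sense of scale for this comparison, one can estimate the impurity concentration a Kondo contribution of the simple per-impurity form $c \sim k_{\mathrm{B}}T/T_{\mathrm{K}}$ would need in order to exceed the free-electron term tenfold. This is a strongly hedged order-of-magnitude sketch: the Kondo temperature of \SI{1}{K} and the tenfold factor are purely hypothetical choices, and the Cu atomic density is a textbook value.

```python
# Order-of-magnitude sketch: density of Kondo impurities needed for
# n_imp * k_B * T / T_K to reach ten times the free-electron term gamma * T.
k_B = 1.381e-23      # J/K
gamma_Cu = 70.7      # J m^-3 K^-2, free-electron value for Cu
n_Cu = 8.45e28       # m^-3, Cu atomic density (textbook value, assumed)
T_K = 1.0            # K, HYPOTHETICAL Kondo temperature

n_imp = 10 * gamma_Cu * T_K / k_B       # m^-3
ppm = n_imp / n_Cu * 1e6
print(f"required impurity concentration ~ {ppm:.0f} ppm (for T_K = {T_K} K)")
```

Under these assumptions the required concentration is hundreds of ppm, orders of magnitude above the \SI{0.6}{ppm} measured in Cu, though the estimate depends strongly on the assumed $T_{\mathrm{K}}$.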
However, the natural Cu oxides, $\mathrm{CuO_{2}}$
and CuO, are both magnetic,
making them a possible source of Kondo
impurities in the measured samples.
The enhanced specific heat
can also arise from the quadrupole splitting of the
nuclei, due to the electric field gradient (EFG)
generated by the distortion of
the lattice~\cite{Enss:2000_calorimeter}.
The lattice of the evaporated Cu is observed
to be irregular,
whereas the Ag lattice is completely
uniform, suggesting that the
quadrupole splitting
is more likely to occur in the Cu samples.
Fabricating much thicker Cu wires as well as wires
with a passivated surface would provide
valuable information for distinguishing between
a surface effect and other possible sources of
heat capacity.
Further interesting measurements
for understanding the
phenomena now observed would be
to explore a wider range of
temperatures and looking at the
dependence of the heat capacity
on magnetic field.
Based on our results, Ag is
the more promising candidate as
an absorber material for a nanocalorimeter. \\
We acknowledge O.-P. Saira for the collaboration
and discussions in the development of the
RF-NIS thermometer, and A. Peltonen and H. Jiang
for the chemical and
structural analysis of the normal metal wires.
J. Peltonen is acknowledged for contributing to the
fabrication and design of the samples and M. Meschke
for technical support with the measurement devices. We thank L. Wang, C. Enss, F. Hekking, H. Courtois, H. Pothier, J. Ankerhold, J. Ullom, Y. Galperin and M. Krusius for discussions.
We acknowledge Jenny and Antti Wihuri foundation and
Academy of Finland projects 273827 and 285494
for financial support, and facilities and technical support
provided by Otaniemi research infrastructure
for Micro and Nanotechnologies (OtaNano).
\section{Introduction}
Probing the faint end of the luminosity function (LF) in galaxy clusters and groups has in many cases exposed a discrepancy between the number of observed dwarf galaxies and the number of dark matter (DM) sub-haloes predicted by current hierarchical cold dark matter models -- the so-called missing satellites problem \citep{1999ApJ...522...82K}. Its origin is still a matter of debate. Either there are many faint satellites not yet discovered, the predictions of the hierarchical models are not reliable, or the large majority of low-mass DM haloes have not formed any stars. To quantify this discrepancy, the LF can be parametrised by the Schechter function, whose logarithmic faint-end slope $\alpha$ can be contrasted with the predicted slope of about $-2$ for the mass spectrum of cosmological DM haloes \citep[e.g.][]{1999ApJ...524L..19M, 2001MNRAS.321..372J}. The observed value of $\alpha$ is generally much lower than expected. This has been shown in many studies, for both low density environments like the Local Group (LG) and galaxy clusters \citep[e.g.][]{1999AJ....118..883P, 2002MNRAS.335..712T, 2005MNRAS.357..783T}.
For the LG and the galaxy clusters Fornax, Perseus and Virgo, the faint-end slope of the LF can be determined by direct cluster membership assignment via spectroscopic redshift measurements \citep[e.g.][]{1999A&AS..134...75H, 2001ApJ...548L.139D,2008MNRAS.383..247P, 2008AJ....135.1837R}. For other galaxy clusters, however, only photometric data are available at magnitudes where $\alpha$ dominates the shape of the LF ($M_V\gtrsim -14$~mag). In this case, cluster galaxies have to be separated from background galaxies either by means of statistical background subtraction or by their morphology and correlations between global photometric and structural parameters. For the latter case, the colour--magnitude relation (CMR) can be used, which is observed not only for giant elliptical galaxies \citep[e.g.][]{1977ApJ...216..214V, 1997A&A...320...41K, 2006MNRAS.370.1106G}, but also for early-type \textit{dwarf} galaxies \citep[e.g.][]{1997PASP..109.1377S, 2003A&A...397L...9H, 2006A&A...459..679A, 2007A&A...463..503M, 2008AJ....135..380L, 2008A&A...486..697M}.
Although they form a common relation in a colour--magnitude diagram, the question of whether giant elliptical galaxies, on the one hand, and early-type dwarf galaxies (dwarf ellipticals (dEs) and dwarf spheroidals (dSphs)), on the other, have the same origin, has been a controversial issue over the past decades. Two major perceptions exist. First, dwarf elliptical galaxies are not the low luminosity counterparts of giant elliptical galaxies, but rather an independent class of objects. This point of view is mainly based on studies of relations between galaxy surface brightness and magnitude, or surface brightness and size -- the Kormendy relation \citep[e.g.][]{1977ApJ...218..333K, 1985ApJ...295...73K, 1991A&A...252...27B, 1992ApJ...399..462B, 1993ApJ...411..153B}. These studies showed an apparent dichotomy of dwarf and giant elliptical galaxies, in the sense that for dwarfs the surface brightness increases with luminosity, whereas the opposite trend is seen for giants. Moreover, a weaker dependence of size on luminosity was observed for dwarfs than for giants. Recently, \citet{2008arXiv0810.1681K} reaffirmed those results in their study of a large sample of Virgo cluster early-type galaxies. They concluded that dwarf galaxies are structurally distinct from giant early-type galaxies and that different mechanisms are responsible for their formation (see also \citealt{2008A&A...489.1015B} and \citealt{2008ApJ...689L..25J}).
The alternative point of view is that the apparent dichotomy is the result of a gradual variation of the galaxy light profile shape with luminosity. If the light profile is described by the \citet{1968adga.book.....S} law, the different behaviour of dwarf and giant early-type galaxies in the surface brightness vs. magnitude relation compared to the Kormendy relation is a natural consequence of the linear relation between S\'ersic index $n$ and galaxy magnitude \citep[e.g.][]{1997ASPC..116..239J, 2003AJ....125.2936G, 2005A&A...430..411G, 2006ApJS..164..334F, 2007ApJ...671.1456C, 2008IAUS..246..377C}. This implies that dwarf ellipticals represent the low luminosity extension of massive elliptical galaxies.
Continuing a series of similar investigations in Fornax and Hydra\,I \citep{2003A&A...397L...9H, 2007A&A...463..503M, 2008A&A...486..697M}, we study in this paper the early-type dwarf galaxy population of the Centaurus cluster, aiming at the investigation of the galaxy LF and photometric scaling relations. It is based on deep VLT/FORS1 imaging data of the central part of the cluster. In Sect.~\ref{sec:sample} we describe the observations, the sample selection and the photometric analysis of the dwarf galaxy candidates. We present our results in Sect.~\ref{sec:results}. The findings are summarised and discussed in Sect.~\ref{sec:discussion}.
\subsection{The Centaurus cluster (Abell 3526)}
The Centaurus cluster, which is categorised as a cluster of richness class zero and Bautz-Morgan type I/II \citep{1958ApJS....3..211A, 1970ApJ...162L.149B}, is the dominant part of the Centaurus-Hydra supercluster \citep{1986AJ.....91....6D, 1987AJ.....93.1338D}. It is characterised by an ongoing merger with an in-falling sub-group and irregular X-ray isophotes \citep{1999ApJ...520..105C, 2001PASJ...53..421F}. The main cluster component is Cen30 with NGC 4696 at its dynamical centre, whereas Cen45 is the in-falling sub-group with NGC 4709 at its centre \citep{1986MNRAS.221..453L, 1997A&A...327..952S}.
\citet{2005A&A...438..103M} derived the distance to Centaurus by means of surface brightness fluctuation (SBF) measurements. They found the SBF-distance to be $45.3\pm2.0$~Mpc ($(m-M)=33.28\pm 0.09$~mag). \citet{2001ApJ...546..681T}, however, measured a significantly shorter distance of 33.8~Mpc, which may partially be attributed to selection effects \citep[see discussion in][]{2003A&A...410..445M}. For 78 cluster galaxies, a mean redshift of $v_{\mathrm{rad}}=3656$~km~s$^{-1}$~is determined in \citet{2006AJ....132..347C}. This corresponds to a distance of $50.8\pm 5.6$~Mpc, assuming $H_0 = 72\pm 8$~km~s$^{-1}$~Mpc$^{-1}$ \citep{2001ApJ...553...47F}, and agrees with the SBF-distance within the errors. Throughout this paper, we adopt a distance modulus of $(m-M)=33.28$~mag \citep{2005A&A...438..103M}, which corresponds to a scale of 220 pc/arcsec.
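The quoted numbers can be cross-checked with the standard conversions (a small numerical sketch, not part of the original analysis): the distance follows from the distance modulus via $d = 10^{(m-M+5)/5}$~pc, the physical scale from $d$ times one arcsecond in radians, and the redshift distance from $v/H_0$.

```python
import numpy as np

# Cross-check of the adopted distance, scale, and redshift distance.
mu = 33.28                             # adopted distance modulus (mag)
d_pc = 10 ** ((mu + 5) / 5)            # distance in pc
arcsec = np.pi / (180 * 3600)          # 1 arcsec in radians
print(f"d = {d_pc/1e6:.1f} Mpc, scale = {d_pc*arcsec:.0f} pc/arcsec")
print(f"v/H0 = {3656/72:.1f} Mpc")     # mean redshift over H0
```

All three values reproduce the figures given in the text (45.3 Mpc, 220 pc/arcsec, 50.8 Mpc).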
\section{Observations and sample selection}
\label{sec:sample}
The observations were executed in a service mode run at the Very Large Telescope (VLT) of the European Southern Observatory (ESO, programme 67.A-0358). Seven fields in the central part of the Centaurus cluster of size $7'\times7'$ were observed in Johnson $V$ and $I$ filters, using the instrument FORS1 in imaging mode. The fields cover the central part of the Centaurus cluster with its sub-components Cen30 and Cen45, which are centred on NGC 4696 and NGC 4709, respectively (see Fig.~\ref{fig:fields}). The exposure time was $4\times373$ s in $V$ and $9\times325$ s in $I$. The seeing was excellent, ranging between $0.4''$ and $0.6''$. Additional short images (30 s in both filters) were taken to be able to analyse the brightest cluster galaxies, which are saturated on the long exposures. Furthermore, an eighth (background) field located about $2.5\degr$ west of NGC 4696 was observed.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fields_final.eps}}
\caption{Map of the seven VLT/FORS1 cluster fields (large open squares) with the selected dwarf galaxy candidates, the spectroscopically confirmed cluster members and background galaxies. The major cluster galaxies NGC 4696, NGC 4709 and NGC 4706 are marked by open triangles. Green open hexagons are compact elliptical galaxy (cE) candidates (see Sect.~\ref{sec:cEs}).}
\label{fig:fields}
\end{figure}
\begin{figure*}
\centering
\subfigure{\includegraphics[width=4.5cm]{dw_cen4_1555_final.eps}}
\subfigure{\includegraphics[width=4.5cm]{dw_cen2_3028_final.eps}}
\subfigure{\includegraphics[width=4.5cm]{dw_cen2_1867_final.eps}}
\subfigure{\includegraphics[width=4.5cm]{dw_cen4_2009_final.eps}}
\caption{Thumbnail images of four cluster dwarf galaxy candidates that fulfil our selection criteria (two dEs, one dE,N and one dSph). The objects' absolute magnitudes from the left to the right are: $M_V=-14.4, -13.3, -13.2, -12.0$ mag, assuming a distance modulus of $(m-M)=33.28$ mag \citep{2005A&A...438..103M}. The thumbnail sizes are $40''\times40''$ ($8.8\times 8.8$ kpc at the cluster distance).}
\label{fig:candidates}
\end{figure*}
In this study, we are interested in early-type galaxies. They were selected based on morphology and spectroscopic redshifts. Our images contain 21 spectroscopically confirmed early-type cluster galaxies and one late-type (Sc) cluster galaxy \citep{1997A&AS..124....1J, 1997A&A...327..952S, 2007ApJS..170...95C, 2007A&A...472..111M}. The cluster membership criterion was adopted to be $1700<v_{\mathrm{rad}}<5500$~km~s$^{-1}$.
In order to identify new early-type dwarf galaxy candidates on the images, we followed the same strategy as in our investigations of the dwarf galaxy populations in Fornax and Hydra\,I \citep{2003A&A...397L...9H, 2007A&A...463..503M, 2008A&A...486..697M}. It is a combination of visual inspection and the use of SExtractor \citep{1996A&AS..117..393B} detection routines. We first added several simulated Local Group (LG) dEs and dSphs (projected to the Centaurus cluster distance) to the images. Their magnitudes and central surface brightnesses were adopted according to the relations found by \citet{2003AJ....125.1926G} and \citet{2006MNRAS.365.1263M}. Afterwards, the images were inspected by eye, and candidate cluster dwarf galaxies were selected by means of their morphological resemblance to the simulated galaxies. The main selection criterion was a smooth surface brightness distribution and the lack of substructure or spiral arms. This first search resulted in the identification of 89 previously uncatalogued dE/dSph candidates, from which four are shown in Fig.~\ref{fig:candidates}.
In a second step, we used the SExtractor detection routines to quantify the detection completeness in our data (see also Sect. \ref{sec:lumfunction}) and to find more dwarf galaxy candidates, in particular at the faint magnitude and surface brightness limits. The detection sensitive SExtractor parameters were optimised such that a maximum number of objects from the by-eye catalogue was recovered by the programme. Only 12 of the 89 obvious by-eye detections were not recovered, mostly due to their position close to another bright object or close to the image border. In our search for new dwarf galaxy candidates we focused on those sources in the SExtractor output catalogue whose photometric parameters matched the parameter range of the simulated dwarf galaxies. For this, we applied cuts in the SExtractor output-parameters \texttt{mupeak}, \texttt{area} and \texttt{fwhm} to constrain the output parameter space to the one found for the simulated LG dwarf galaxies \citep[see also][]{2008A&A...486..697M}. We thus rejected barely resolved and apparently small objects with high central surface brightnesses, both being likely background galaxies. The applied cuts are described in detail in Sect. \ref{sec:lumfunction}. In this way, 8 additional objects in the magnitude range $-11.0<M_V<-9.4$ mag were found and added to the by-eye catalogue. On the background field, neither the visual inspection nor the SExtractor analysis resulted in the selection of an object.
To our photometric sample we also added five spectroscopically confirmed background early-type galaxies that are located in the observed fields, in order to be able to compare their photometric properties with the ones of the objects in the by-eye catalogue. In total, our sample contains 123 objects, for which Fig.~\ref{fig:fields} shows a coordinate map.
\subsection{Photometric analysis}
For each selected object we created thumbnail images with sizes extending well into the sky region (see Fig. \ref{fig:candidates}). On these thumbnails we performed the sky subtraction and fitted elliptical isophotes to the galaxy images, using the IRAF-task \texttt{ellipse} in the \texttt{stsdas}\footnote{Space Telescope Science Data Analysis System, STSDAS is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.} package. During the fitting procedure the centre coordinates, the position angle and the ellipticity were fixed, except for some of the brightest cluster galaxies ($V_0\lesssim 15.5$~mag) where the ellipticity or both the ellipticity and the position angle considerably changed from the inner to the outer isophotes. In those cases one or both parameters were allowed to vary.
The total apparent magnitude of each object was derived from a curve of growth analysis. The central surface brightness was determined by fitting an exponential as well as a \citet{1968adga.book.....S} law to the surface brightness profile. From the fit we excluded the inner $1''$ (about 1.5 seeing disks) and the outermost part of the profile, where the measured surface brightness was below the estimated error of the sky background. Corrections for interstellar absorption and reddening were taken from \citet{1998ApJ...500..525S}, who give $A_V=0.378$~mag and $E(V-I)=0.157$~mag for the coordinates of NGC 4696. We adopt these values for all of our observed fields. Zero points, extinction coefficients and colour terms for the individual fields and filters are listed in Table \ref{tab:photcal} in the appendix (only available on-line).
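The two profile laws used in these fits can be written, in mag/arcsec$^2$ units, as below. This is a generic sketch with illustrative parameter values, not the actual fitting code; the S\'ersic form with $n=1$ reduces to the exponential.

```python
import numpy as np

# Surface brightness profiles in mag/arcsec^2 (2.5/ln10 ~ 1.0857):
# exponential:  mu(r) = mu_0 + 1.0857 * (r / h)
# Sersic:       mu(r) = mu_0 + 1.0857 * (r / r_0)**(1/n)
def mu_exponential(r, mu0, h):
    return mu0 + 2.5 / np.log(10) * (r / h)

def mu_sersic(r, mu0, r0, n):
    return mu0 + 2.5 / np.log(10) * (r / r0) ** (1.0 / n)

r = np.array([1.0, 2.0, 5.0])          # radii in arcsec (illustrative)
print(mu_exponential(r, 24.0, 3.0))    # n = 1 reproduces the exponential
print(mu_sersic(r, 24.0, 3.0, 1.0))
```

In an actual fit, $\mu_0$ and the scale length (and $n$ for the S\'ersic case) would be adjusted by least squares over the radial range described in the text.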
\section{Global photometric and structural parameters}
\label{sec:results}
In this section the results of the photometric analysis are presented. In Sect.~\ref{sec:scalings} we address the colour--magnitude and the magnitude-surface brightness relation of the Centaurus early-type dwarf galaxies and use these relations to facilitate the distinction of cluster and background galaxies. The galaxy luminosity function of probable cluster members is studied in Sect.~\ref{sec:lumfunction}. The structural parameters of the cluster galaxies, as obtained from S\'ersic fits to the surface brightness profiles, are presented in Sect.~\ref{sec:sersic}. Table~\ref{tab:Centaurussample} in the appendix summarises the obtained photometric parameters of the 92 probable Centaurus cluster early-type galaxies in our sample (only available on-line).
\subsection{Fundamental scaling relations}
\label{sec:scalings}
Figure \ref{fig:cmd} shows a colour--magnitude diagram of our sample of early-type galaxies, as defined in Sect. \ref{sec:sample}. Spectroscopically confirmed cluster galaxies ($V_0\lesssim18$ mag) form a colour--magnitude relation (CMR) in the sense that brighter galaxies are on average redder. This sequence continues down to the faint magnitude limit of our survey ($M_V\sim -10$ mag), which is comparable to the absolute magnitudes of the LG dwarf galaxies Sculptor and Andromeda III \citep{2003AJ....125.1926G, 2006MNRAS.365.1263M}. The larger scatter at faint magnitudes is consistent with the larger errors in $(V-I)_0$. The mean measured error in $(V-I)_0$ is 0.03, 0.08 and 0.13~mag for the three magnitude intervals indicated in Fig.~\ref{fig:cmd}. The intrinsic scatter of the datapoints in the same intervals is 0.06, 0.09 and 0.14~mag, respectively, only marginally larger than the measurement errors. Our data do therefore not require an increase of metallicity or age spread among individual galaxies at faint luminosities, compared to brighter luminosities. For the linear fit, we weighted each data point by its colour error, resulting in:
\begin{equation}
\label{eq:cmr}
(V-I)_0 = -0.042(\pm0.001) \cdot M_V + 0.33(\pm0.02)
\end{equation}
with a rms of 0.10. This is in good agreement with the CMRs observed in Fornax and Hydra\,I \citep{2007A&A...463..503M, 2008A&A...486..697M}. Table~\ref{tab:relations} lists the CMR coefficients for each of those clusters. For consistency, we re-fitted the Hydra\,I data using the error weighted values, which slightly changes the coefficients given in \citet{2008A&A...486..697M}.
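An error-weighted linear fit of the kind used for the CMR above can be sketched on synthetic data. The slope, intercept, and error model below are taken from the text; the data themselves are randomly generated for illustration.

```python
import numpy as np

# Error-weighted linear fit: each point weighted by 1/sigma of its colour
# error (np.polyfit applies w to the residuals, so w = 1/sigma is standard).
rng = np.random.default_rng(1)
M_V = rng.uniform(-20, -10, 80)                    # synthetic magnitudes
sigma = 0.03 + 0.01 * (M_V + 20)                   # larger errors when fainter
VI = -0.042 * M_V + 0.33 + rng.normal(0, sigma)    # CMR slope/intercept + scatter
slope, intercept = np.polyfit(M_V, VI, 1, w=1.0 / sigma)
print(f"(V-I) = {slope:.3f} M_V + {intercept:.2f}")
```

Weighting by the colour errors prevents the noisier faint points from dominating the fit, which matters here because the scatter grows toward faint magnitudes.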
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{cmdcentaurus_final.eps}}
\caption{Colour--magnitude diagram of early-type galaxies in the Centaurus cluster. Black dots are probable cluster galaxies, selected by their morphology. Red open circles (blue open squares) mark spectroscopically confirmed cluster members (background galaxies). Blue filled squares and grey filled triangles are probable background objects (see text for details). Grey open triangles are objects which do not follow the magnitude-surface brightness relation (cf. Fig.~\ref{fig:magmu}). Green open hexagons mark the candidates for compact elliptical galaxies (see Sect.~\ref{sec:cEs}). Typical errorbars are indicated on the left. The solid line is a linear fit to the cluster member candidates (Eq.~(\ref{eq:cmr})) with its $2\sigma$ deviations (dotted lines).}
\label{fig:cmd}
\end{figure}
The CMR can be used as a tool to distinguish cluster from background galaxies. This is important, since the selection of cluster galaxy candidates solely based on morphological criteria can lead to the contamination of the sample with background objects that only resemble cluster dwarf ellipticals. In the bright magnitude range the cluster galaxies are identified by their redshift. In the intermediate magnitude range ($17.8<V_0<21.0$ mag), however, seven objects turn out to be likely background galaxies, although they passed our morphological selection criteria (filled squares in Fig.~\ref{fig:cmd}). All seven arguable objects have \citet{1948AnAp...11..247D} surface brightness profiles (also known as $R^{1/4}$ profiles), typical of giant elliptical galaxies. Five of those objects are too red to be a galaxy at $z\sim 0$, the other two share their position in the CMD with spectroscopically confirmed background galaxies (open squares in Fig.~\ref{fig:cmd}). Moreover, Fig.~\ref{fig:sersic} shows that the confirmed background galaxies as well as the seven likely background objects clearly differ from the cluster galaxies, because of their high central surface brightness and their large S\'ersic index. We consider 10 more morphologically selected objects with $V_0>21$ mag likely background objects, as their colours are significantly redder than those of other objects in the same magnitude range (see the filled triangles in Fig.~\ref{fig:cmd}).
We were not able to measure a colour for two objects in our sample (C-3-30 and C-1-47, see Table~\ref{tab:Centaurussample}), since they were located close to the image borders, only fully visible on the $V$-band images. However, they have a typical dE morphology and they fall onto the magnitude-surface brightness relation (Fig.~\ref{fig:magmu}). We thus treat them as probable cluster dwarf galaxies in the following analyses.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{magsurface_final.eps}}
\caption{Plot of the central surface brightness $\mu_{V,0}$, as derived from fitting an exponential law to the surface brightness profile, vs. the apparent magnitude $V_0$ for identified cluster dwarf galaxy candidates. Symbols are as in Fig.~\ref{fig:cmd}. Errors are comparable to the symbol sizes. The solid line is a linear fit to the black dots (Eq.~(\ref{eq:magmu})). Dotted lines are the $2\sigma$ deviations from the fit. Local Group dEs and dSphs, projected to the Centaurus distance, are given by the blue crosses (data from \citet{2003AJ....125.1926G} and \citet{2006MNRAS.365.1263M}). A scale length of $0.6''$ for an exponential profile, representing the resolution limit of our images, is indicated by the dash-dotted line.}
\label{fig:magmu}
\end{figure}
In Fig.~\ref{fig:magmu}, the central surface brightness $\mu_{V,0}$ is plotted against the apparent magnitude $V_0$ for all objects in our sample whose surface brightness profiles are well represented by an exponential law. These are all objects with $V_0>16.1$ mag (see Table~\ref{tab:Centaurussample}), except for two compact elliptical galaxy candidates, whose properties we will discuss in Sect.~\ref{sec:cEs}. A linear fit to the probable cluster galaxies (black dots) leads to:
\begin{equation}
\label{eq:magmu}
\mu_{V,0} = 0.57(\pm0.07) \cdot M_V + 30.90(\pm0.87)
\end{equation}
with an rms of 0.48. Given that the scatter in the data is much larger than the measured errors in both $M_V$ and $\mu_{V,0}$, we do not error-weight the data points. The fit errors were derived from random re-sampling of the data points within their measured scatter. The same method was used in \citet{2007A&A...463..503M} for the Fornax dwarfs, and we re-analyse the Hydra\,I data \citep{2008A&A...486..697M} in the same way. The magnitude-surface brightness relations of the three clusters agree within the errors (see Table~\ref{tab:relations}).
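The re-sampling estimate of the fit errors can be sketched as follows. This is a minimal illustration, not the actual code used: the synthetic point distribution, sample size and random seed are assumptions, while the slope, intercept and rms mirror the fit above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: magnitudes and central surface brightnesses
# scattered around the fitted relation (illustrative values only)
m_v = rng.uniform(-17.0, -10.0, 80)
mu = 0.57 * m_v + 30.90 + rng.normal(0.0, 0.48, m_v.size)

# Unweighted linear fit, since the scatter dominates the measurement errors
slope, intercept = np.polyfit(m_v, mu, 1)

# Fit errors from re-sampling the points within their measured scatter
rms = np.std(mu - (slope * m_v + intercept))
slopes, intercepts = [], []
for _ in range(1000):
    mu_resampled = mu + rng.normal(0.0, rms, mu.size)
    a, b = np.polyfit(m_v, mu_resampled, 1)
    slopes.append(a)
    intercepts.append(b)

slope_err, intercept_err = np.std(slopes), np.std(intercepts)
print(f"mu_V0 = {slope:.2f}(+/-{slope_err:.2f}) * M_V "
      f"+ {intercept:.2f}(+/-{intercept_err:.2f})")
```

The quoted uncertainties are the standard deviations of the re-sampled fit coefficients, which is why they track the intrinsic scatter rather than the (much smaller) individual measurement errors.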
When projected to the Centaurus distance, LG dwarf galaxies are mostly consistent with the same relation, with a few slightly more compact objects \citep{2003AJ....125.1926G, 2006MNRAS.365.1263M}. The likely background objects from Fig.~\ref{fig:cmd} (grey filled triangles) do not follow the relation, but instead have central surface brightnesses about 1~mag/arcsec$^2$ higher than those of other objects of the same magnitude.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{artgalanalysis_final_small.eps}}
\caption{SExtractor output-parameters of recovered simulated galaxies with an exponential scale length $<1''$ (red dots). \textit{The upper left panel} shows the input parameters absolute magnitude $M_V$ and central surface brightness $\mu_V$ of the artificial galaxies (grey dots), together with the probable cluster dwarf galaxies (green solid squares) that were recovered by SExtractor. Equation (\ref{eq:magmu}) with its $2\sigma$ deviations is plotted as in Fig.~\ref{fig:magmu}. The blue dashed line indicates a scale length of $1''$ for an exponential profile. Blue open triangles are the questionable objects discussed in Sect.~\ref{sec:scalings}. The SExtractor output-parameter \texttt{magbest} is plotted against \texttt{mupeak} (\textit{upper right}), \texttt{area} (\textit{lower left}) and \texttt{fwhm} (\textit{lower right}). Dash-dotted lines indicate the global cuts on \texttt{mupeak} and \texttt{fwhm} (see text for details).}
\label{fig:artgal}
\end{figure}
An interesting sub-group of morphologically selected objects is marked by the open triangles. These nine objects are rather compact, having exponential scale lengths of $\lesssim 1''$, close to the resolution limit of our images. Although they lie on the cluster CMR (see Fig.~\ref{fig:cmd}), they are located more than $2\sigma$ away from the magnitude-surface brightness relation, just like the likely background galaxies that were identified by their position away from the CMR. This suggests that these nine questionable objects are in fact background galaxies with colours similar to those of the cluster galaxies. However, the LG dwarf galaxy Leo\,I \citep{2003AJ....125.1926G} falls into the same parameter range (Fig.~\ref{fig:magmu}). If these objects were actual cluster galaxies, they would account for about 10\% of the whole dwarf galaxy population in our sample. Given their uncertain nature, we will analyse in Sect.~\ref{sec:lumfunction} how they affect the shape of the galaxy luminosity function. Ultimately, it remains to be clarified by spectroscopic measurements whether they represent a family of rather compact early-type cluster members or background galaxies.
\subsection{The dwarf galaxy luminosity function}
\label{sec:lumfunction}
In order to study the faint end of the galaxy luminosity function, the detection completeness in our data has to be quantified and the galaxy number counts have to be corrected accordingly. For this, 10\,000 simulated dwarf galaxies were randomly distributed in 500 runs in each of the seven cluster fields, using a C++ code. The background field was left out of this analysis, since we did not identify any potential dwarf galaxy in it. The upper left panel of Fig.~\ref{fig:artgal} illustrates the input-parameter range of the simulated galaxies, which extends well beyond the observed parameter space. The artificial galaxies were then recovered by SExtractor, and the SExtractor output-parameters \texttt{magbest}, \texttt{mupeak}, \texttt{area} and \texttt{fwhm} were compared with the parameters of the sample of probable cluster dwarf galaxies, as defined in Sect.~\ref{sec:scalings}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{completeness_final.eps}}
\caption{Completeness as a function of magnitude (in 0.5 mag bins) for each of the seven cluster fields (cf. Fig.~\ref{fig:fields}).}
\label{fig:completeness}
\end{figure}
In a first step, we made use of the SExtractor star/galaxy separator \citep{1996A&AS..117..393B} to sort out wrongly recovered foreground stars, requiring \texttt{class\_star} $<0.05$. Aiming at the rejection of high surface brightness and barely resolved background objects, we then applied several cuts to other SExtractor output-parameters. The objects to be rejected were required to have an exponential scale length shorter than $1''$. This is close to the seeing limit of our images and it is the maximum scale length of the questionable objects from Sect.~\ref{sec:scalings}. The artificial galaxies with scale lengths $<1''$ define well localised areas in plots of \texttt{magbest} versus \texttt{mupeak}, \texttt{area} and \texttt{fwhm} (see Fig.~\ref{fig:artgal}). Since some of the previously selected dwarf galaxy candidates also scatter into the same areas, we finally rejected only those objects that \textit{simultaneously} occupied the locus of barely resolved galaxies in all three parameters \texttt{mupeak}, \texttt{area} and \texttt{fwhm}. In this way, we miss only one of the previously selected probable cluster dwarf galaxies but reject more than 50\% of the objects with a scale length shorter than $1''$. In order to further optimise the rejection of obvious background objects, we additionally applied global cuts at the upper limit of \texttt{mupeak} and the lower limit of \texttt{fwhm} (Fig.~\ref{fig:artgal}).
Without the application of the cuts, SExtractor recovers 75--85\% of the simulated galaxies at $M_V\leq-12$ mag, which reflects the geometrical incompleteness caused by blending. Applying the cuts in \texttt{mupeak}, \texttt{area} and \texttt{fwhm} rejects $\sim25$\% more artificial galaxies at $M_V\leq-12$~mag. This fraction is consistent with the fraction of visually classified actual galaxies with $M_V>-12$~mag that are excluded by applying the same cuts (9 out of 36). Given that we include all visually classified galaxies in the LF, we scale the completeness values for $M_V>-12$~mag up by 25\%, so that they are consistent with the geometrical completeness at $M_V=-12$~mag (see Fig.~\ref{fig:completeness}).
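The completeness correction amounts to simple per-bin bookkeeping. The sketch below illustrates the procedure with invented numbers: the 25\% upscaling fainter than $M_V=-12$~mag mirrors the text, while the bin values and counts are purely hypothetical.

```python
import numpy as np

# Hypothetical per-bin bookkeeping (bin centres in M_V; all counts invented)
bins      = np.array([-12.5, -11.5, -10.5, -9.5])
injected  = np.array([500, 500, 500, 500])   # artificial galaxies per bin
recovered = np.array([400, 350, 300, 200])   # recovered after all cuts
observed  = np.array([10, 14, 16, 12])       # real dwarf candidates per bin

completeness = recovered / injected

# Visually classified galaxies fainter than M_V = -12 are all kept, so the
# completeness there is scaled up by the ~25% rejected by the parameter cuts
faint = bins > -12.0
completeness[faint] = np.minimum(completeness[faint] * 1.25, 1.0)

# Completeness-corrected galaxy number counts
corrected_counts = observed / completeness
```

Dividing the observed counts by the per-bin completeness recovers the number counts that enter the luminosity function.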
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{lumfunction_final.eps}}
\caption{Luminosity function of the Centaurus dwarf galaxies. The shaded histogram in the \textit{upper panel} shows the uncorrected galaxy number counts. The open histogram gives the completeness corrected number counts. The thin grey and thick black curves are binning independent representations of the counts (Epanechnikov kernel with 0.5 mag width). Dashed curves are the $1\sigma$ uncertainties. \textit{The lower panel} shows the completeness corrected galaxy number counts in logarithmic representation (filled circles). The best fitting Schechter function (red solid line) is overlaid. Open circles give the galaxy number counts including the questionable objects discussed in Sect.~\ref{sec:scalings}. Three different slopes $\alpha$ are indicated. The 50\% completeness limit (averaged over all fields) is given by the vertical line.}
\label{fig:lumfunction}
\end{figure}
\begin{table}
\caption{Fitting coefficients of the CMR, the magnitude-surface brightness relation, and the power-law slope $\alpha$ of the LF, with errors given in parentheses.}
\label{tab:relations}
\centering
\begin{tabular}{lrrrrr}
\hline\hline
~ & \multicolumn{2}{c}{$(V-I)_0 = A\cdot M_V + B$} & \multicolumn{2}{c}{$\mu_{V,0} = C\cdot M_V +D$} & ~ \\ \cmidrule(r){2-3} \cmidrule(r){4-5}
~ & $A$ & $B$ & $C$ & $D$ & $\alpha$ \\
\hline
Centaurus & $-0.042$ & $0.36$ & $0.57$ & $30.85$ & $-1.14$ \\
~ & $(0.001)$ & $(0.02)$ & $(0.07)$ & $(0.87)$ & $(0.12)$ \\
Hydra\,I & $-0.044$ & $0.36$ & $0.67$ & $31.57$ & $-1.40$ \\
~ & $(0.001)$ & $(0.01)$ & $(0.07)$ & $(0.99)$ & $(0.18)$ \\
Fornax & $-0.033$ & $0.52$ & $0.68$ & $32.32$ & $-1.33$ \\
~ & $(0.004)$ & $(0.07)$ & $(0.04)$ & $(1.12)$ & $(0.08)$ \\
\hline
\end{tabular}
\end{table}
The completeness corrected galaxy luminosity function for $-17.5<M_V<-9.0$ mag is shown in Fig.~\ref{fig:lumfunction}. Due to the relatively low galaxy number counts (81 in this magnitude range), the LF is only moderately well represented by a \citet{1976ApJ...203..297S} function. From the best fitting Schechter function we derive a faint-end slope of $\alpha=-1.08\pm0.03$ (excluding galaxies fainter than $M_V=-10$ mag). As the slope $\alpha$ dominates the shape of the LF for magnitudes $M_V>-14$ mag, we alternatively fit a power-law to this interval, resulting in $\alpha=-1.14 \pm 0.12$. This best characterises the faint-end slope of the LF. Our result is consistent with the results of \citet{2006AJ....132..347C}, who found $\alpha \sim -1.4\pm0.2$ for the Centaurus cluster. They used statistical corrections as well as spectroscopic redshifts and surface brightness--magnitude criteria for the construction of the LF.
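In magnitudes, a power law $\phi(L)\propto L^{\alpha}$ translates into $\log_{10}\phi = -0.4(\alpha+1)\,M_V + \mathrm{const}$, so the faint-end slope follows from a straight-line fit to the logarithmic counts. The sketch below illustrates this with synthetic counts; the Schechter function is included only for reference and the function names are ours.

```python
import numpy as np

def schechter_mag(m, phi_star, m_star, alpha):
    """Schechter (1976) luminosity function expressed in absolute magnitudes."""
    x = 10.0 ** (-0.4 * (m - m_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Faint-end slope from a power-law fit: log10(N) = -0.4*(alpha+1)*M_V + const
m_v = np.arange(-14.0, -9.5, 0.5)                     # bin centres (illustrative)
counts = 10.0 ** (-0.4 * (-1.14 + 1.0) * m_v + 2.0)   # synthetic, alpha = -1.14

slope, _ = np.polyfit(m_v, np.log10(counts), 1)
alpha_fit = slope / -0.4 - 1.0
```

Fitting a straight line to $\log_{10}N$ versus $M_V$ and inverting the slope recovers the input $\alpha=-1.14$.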
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{sersic_final.eps}}
\caption{Parameters of the S\'ersic fits to the galaxy surface brightness profiles. The top (bottom) panel shows the central surface brightness $\mu_0$ (profile shape index $n$) plotted vs. the galaxy magnitude. Black dots are all galaxies that were considered cluster members. Spectroscopically confirmed cluster galaxies are marked by red open circles. Blue open (filled) squares are confirmed (likely) background galaxies (cf. Fig.~\ref{fig:cmd}). The green open hexagons mark the three compact elliptical galaxy candidates (see Sect.~\ref{sec:cEs}).}
\label{fig:sersic}
\end{figure}
The nine questionable objects discussed in Sect.~\ref{sec:scalings} have absolute magnitudes of $-12.4<M_V<-11.2$ mag. Including them into the LF does not significantly change the slope $\alpha$ in the interval $-14<M_V<-10$ mag (see bottom panel of Fig.~\ref{fig:lumfunction}). Fitting a power-law leads to $\alpha=-1.17 \pm 0.12$.
In Table~\ref{tab:relations} the slope $\alpha$ is compared to the ones derived for the Fornax and Hydra\,I clusters \citep{2007A&A...463..503M, 2008A&A...486..697M}. For those clusters, too, $\alpha$ was obtained by fitting a power-law to the faint end of the galaxy LF ($M_V>-14$ mag). With $-1.1\gtrsim \alpha \gtrsim -1.4$, all slopes are significantly shallower than the predicted slope of $\sim -2$ for the mass spectrum of cosmological dark-matter haloes \citep[e.g.][]{1974ApJ...187..425P, 1999ApJ...524L..19M, 2001MNRAS.321..372J}.
\subsection{Structural parameters from S\'ersic fits}
\label{sec:sersic}
\begin{figure*}
\centering
\includegraphics[width=17cm]{sizelumidiag_final.eps}
\caption{Plot of the effective radius $R_{\mathrm{eff}}$ against the absolute magnitude $M_V$ of the Centaurus early-type galaxies in comparison to Hydra\,I early-type galaxies from \citet{2008A&A...486..697M}, galaxies from the ACS Virgo Cluster Survey \citep{2006ApJS..164..334F}, Local Group dwarf galaxies and compact elliptical galaxies. The established cEs are M32 \citep{1992ApJ...399..462B, 1993ApJ...411..153B, 2003AJ....125.1926G}, A496cE \citep{2007A&A...466L..21C}, NGC 4486B \citep{2008arXiv0810.1681K}, NGC 5846A \citep{2005AJ....130.1502M, 2008MNRAS.tmp.1243S} and the two cEs from \citet{2005A&A...430L..25M}. The two Antlia cE candidates are taken from \citet{2008MNRAS.tmp.1243S}. Sources for the LG dwarf galaxies are: \citet{2003AJ....125.1926G} and \citet[][and references therein]{2007ApJ...663..948G} for Fornax, Leo I/II, Sculptor, Sextans, Carina and Ursa Minor; \citet{2008ApJ...684.1075M} for Draco, Canes Venatici I/II, Hercules, Leo IV, Coma Berenices, Segue I, Bootes I/II, Ursa Major~I/II and Willman~I; \citet{2006MNRAS.365.1263M} for And I/II/III/V/VI/VII and Cetus; \citet{2007ApJ...659L..21Z} for And IX/X; \citet{2006MNRAS.371.1983M} for And~XI/XII/XIII and \citet{2008ApJ...688.1009M} for And XVIII/XIX/XX. The solid line indicates the size--luminosity relation given by Eq.~(\ref{eq:size1}), the dashed line traces Eq.~(\ref{eq:size2}).}
\label{fig:sizelumidiag}
\end{figure*}
In addition to the exponential law, we also fitted S\'ersic models to the galaxy surface brightness profiles. The fitted central surface brightness $\mu_0$ and profile shape index $n$ are plotted versus the galaxy magnitude in Fig.~\ref{fig:sersic}. $\mu_0$ is given by $\mu_0=\mu_{\mathrm{eff}} - 2.5b_n/\ln(10)$, where $\mu_{\mathrm{eff}}$ is the effective surface brightness and $b_n$ is approximated by $b_n = 1.9992n - 0.3271$ for $0.5<n<10$ \citep[][and references therein]{2005PASA...22..118G}. Three bright cluster galaxies (C-4-03/NGC 4706, C-3-04 and C-7-07, see Table \ref{tab:Centaurussample}), morphologically classified as SAB(s)0, SB(s)0 and S0, showed two-component surface brightness profiles (bulge + disk), which could not be fitted by a single S\'ersic profile. They were excluded from the analysis.
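The conversion from the fitted effective surface brightness to $\mu_0$ follows directly from the formulae above; a small sketch using the quoted $b_n$ approximation (the function name is ours):

```python
import math

def sersic_mu0(mu_eff, n):
    """Central surface brightness mu_0 = mu_eff - 2.5*b_n/ln(10) of a Sersic
    profile, with b_n = 1.9992*n - 0.3271 (valid for 0.5 < n < 10)."""
    if not 0.5 < n < 10.0:
        raise ValueError("b_n approximation only valid for 0.5 < n < 10")
    b_n = 1.9992 * n - 0.3271
    return mu_eff - 2.5 * b_n / math.log(10.0)

# For an exponential profile (n = 1) the offset is about 1.82 mag
mu0_exp = sersic_mu0(24.0, 1.0)
```

For $n=1$ this gives $\mu_0 \approx \mu_{\mathrm{eff}} - 1.82$~mag; larger $n$ concentrates more light in the centre and makes $\mu_0$ correspondingly brighter.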
The vast majority of cluster galaxies defines a continuous relation in the $\mu_0$ vs. $M_V$ diagram (top panel of Fig.~\ref{fig:sersic}). This relation runs from the faintest dwarf galaxies in our sample to bright cluster elliptical galaxies ($M_V\sim -20$ mag). Our results are consistent with other studies that report on a continuous relation for both dwarf galaxies and massive E/S0 galaxies \citep[e.g.][]{2003AJ....125.2936G, 2005A&A...430..411G, 2006ApJS..164..334F, 2008IAUS..246..377C, 2008A&A...486..697M}. Only the two brightest galaxies in our sample (NGC 4696 and NGC 4709) deviate from this relation.
The bottom panel of Fig.~\ref{fig:sersic} shows that the profile shape index $n$ also rises continuously with the galaxy magnitude for $M_V\lesssim-14$ mag. Only the brightest cluster galaxy, NGC 4696, has an exceptionally low S\'ersic index ($n=2.5$). For $M_V\gtrsim-14$ mag, $n$ basically stays constant at a mean value of 0.85. The spectroscopically confirmed background galaxies as well as the likely background objects in our sample can clearly be identified by their large S\'ersic indices and their high central surface brightnesses in comparison to the cluster galaxies. This again motivates the rejection of those objects from the cluster galaxy sample. Our results agree with previous observations of a correlation of the S\'ersic index with the galaxy luminosity \citep[e.g.][]{1994MNRAS.268L..11Y, 2003Ap&SS.285...87I, 2006ApJS..164..334F, 2008A&A...486..697M}.
\begin{table*}
\caption{Photometric and structural parameters of the cE galaxy candidates. References for radial velocities: $^{a}$\citep{1997A&A...327..952S}, $^{b}$\citep{2007ApJS..170...95C}.}
\label{tab:cEs}
\centering
\begin{tabular}{l c c c c c c c c c c c }
\hline\hline
ID & R.A. & Decl. & P.A. &$\varepsilon$ & $M_V$ & $(V-I)_0$ & $\mu_0$ & $R_{\mathrm{eff}}$ & $n$ & $v_{\mathrm{rad}}$ & $D_{\mathrm{NGC4696}}$ \\
~ & (J2000.0) & (J2000.0) & [$\deg$] & & [mag] & [mag] & [mag/arcsec$^2$] & [pc] & ~ & [km~s$^{-1}$] & [kpc] \\
\hline
C-1-10 & 12:48:53.9 & -41:19:05.3 & 69 & 0.1 & -17.76 & 1.18 & 17.42 & 418 & 1.23 & 2317$^{a}$ & 13 \\
C-1-21 & 12:48:48.6 & -41:20:52.8 & 39 & $0.0\dots0.2$ & -15.57 & 1.18 & 18.56 & 279 & 1.27 & 3053$^{b}$ & 30 \\
C-2-20 & 12:49:33.0 & -41:19:24.0 & -68 & 0.1 & -15.69 & 1.15 & 18.51 & 363 & 1.55 & -- & 109 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\subfigure{\includegraphics[width=6cm]{sbp_cen1_s338_final.eps}}
\subfigure{\includegraphics[width=6cm]{sbp_cen1_1125_final.eps}}
\subfigure{\includegraphics[width=6cm]{sbp_cen2_0974_final.eps}}
\caption{Surface brightness profiles of the cE galaxy candidates. The extinction corrected surface brightness $\mu_V$ is plotted as a function of semi major axis radius $R$. The solid curve is the best fitting S\'ersic law. The residuals $\Delta\mu=\mu_{V,\mathrm{obs}} - \mu_{V,\mathrm{fit}}$ are shown in the lower panels. Vertical dotted lines mark the seeing affected region $R<1''$, which was excluded from the fit. The dashed lines indicate the effective surface brightness $\mu_{\mathrm{eff}}$ and the effective radius $R_{\mathrm{eff}}$.}
\label{fig:profiles}
\end{figure*}
\subsubsection{Galaxy sizes}
In Fig.~\ref{fig:sizelumidiag} we show the effective radii and absolute magnitudes of the Centaurus early-type galaxies together with the Hydra\,I early-type galaxies from \citet{2008A&A...486..697M}, galaxies from the ACS Virgo Cluster Survey \citep{2006ApJS..164..334F}, Local Group dwarf galaxies and known compact elliptical galaxies. References for the LG dwarfs and the cEs are given in the caption. The sizes of the Centaurus galaxies agree very well with the sizes of the Hydra\,I galaxies, and the sizes of bright galaxies ($-20<M_V<-16$~mag) in both samples are fully consistent with the ones obtained in \citet{2006ApJS..164..334F} for the ACS Virgo Cluster Survey galaxies. The apparent $g$-band magnitudes of the ACS Virgo Cluster Survey galaxies were transformed into absolute $V$-band magnitudes, using the transformation $V=g+0.026-0.307(g-z)$~mag given in \citet{2006ApJ...639..838P}. The transformation is derived from a study of diffuse star clusters around the ACS Virgo galaxies. Since the cluster colours are very similar to those of the host galaxies ($1.1<(g-z)<1.6$~mag), we consider the transformation a good approximation for the purposes of our study. We adopt a Virgo distance modulus of $(m-M)=31.09$~mag, corresponding to a scale of 80.1~pc/arcsec \citep{2007ApJ...655..144M}. For the calculation of the $V$-band magnitude of the cE galaxy NGC 5846A we applied $V-R=0.61$~mag \citep{1995PASP..107..945F} and the distance modulus $(m-M)=32.08$~mag, corresponding to a scale of 126~pc/arcsec \citep[][and references therein]{2008MNRAS.tmp.1243S}.
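The transformations involved are simple arithmetic; the sketch below collects them in one place (the function names and the example input values are ours, the numerical constants are those quoted above):

```python
def g_to_abs_v(g_app, g_minus_z, dist_mod=31.09):
    """Absolute V magnitude from an apparent g magnitude and (g-z) colour,
    using V = g + 0.026 - 0.307*(g-z) and the Virgo distance modulus 31.09."""
    return g_app + 0.026 - 0.307 * g_minus_z - dist_mod

def arcsec_to_pc(theta_arcsec, scale=80.1):
    """Angular size in arcsec to physical size in pc
    (default scale: 80.1 pc/arcsec at the Virgo distance)."""
    return theta_arcsec * scale

# Example: a hypothetical Virgo galaxy with g = 15.0, (g-z) = 1.3
# and an effective radius of 2.0 arcsec
m_v = g_to_abs_v(15.0, 1.3)
r_eff_pc = arcsec_to_pc(2.0)
```

The same `arcsec_to_pc` call with `scale=126.0` handles the NGC 5846A conversion.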
The most striking feature in Fig.~\ref{fig:sizelumidiag} is the continuous size--luminosity relation over a wide magnitude range. The effective radius slowly increases as a function of galaxy magnitude for $-21<M_V<-10$~mag. The relation between $\log(R_{\mathrm{eff}})$ and $M_V$ is indicated by the solid line in Fig.~\ref{fig:sizelumidiag} and can be quantified as
\begin{equation}
\label{eq:size1}
\log(R_{\mathrm{eff}})= -0.041(\pm 0.004) \cdot M_V + 2.29(\pm 0.06)
\end{equation}
with an rms of 0.17. At magnitudes fainter than $M_V\sim -13$~mag, the slope of the relation becomes slightly steeper. A fit to the data yields
\begin{equation}
\label{eq:size2}
\log(R_{\mathrm{eff}})= -0.107(\pm 0.007) \cdot M_V + 1.51(\pm 0.07)
\end{equation}
with an rms of 0.17 (dashed line in Fig.~\ref{fig:sizelumidiag}). In their study of photometric scaling relations of early-type galaxies in Fornax, Coma, Antlia, Perseus and the LG, \citet{2008arXiv0811.3198D} reported a very similar behaviour. However, comparatively few data points are available for $M_V>-10$~mag, i.e. the regime of faint LG dwarf spheroidals, and there might be a bias towards the selection of more compact objects at fainter magnitudes, in the sense that at a given magnitude very extended low surface brightness galaxies are more likely to be missed than more compact ones. Moreover, the two smallest LG dwarf galaxies, Segue\,I and Willman\,I \citep{2008ApJ...684.1075M}, are suspected to be globular star clusters or dSphs out of dynamical equilibrium, close to disruption, rather than ordinary dwarf galaxies \citep{2007ApJ...663..948G}.
Two groups of objects clearly deviate from the size--luminosity relations defined by the other objects. These are the brightest core galaxies ($M_V\lesssim -21$~mag) which show a very strong dependence of effective radius on absolute magnitude, and a few rather compact galaxies which fall below the main body of normal elliptical galaxies. The latter are discussed in more detail in the following subsection.
\subsubsection{Compact elliptical galaxy candidates}
\label{sec:cEs}
Three unusual objects, having rather small effective radii compared to other cluster galaxies with similar magnitudes, stand out in Fig.~\ref{fig:sizelumidiag}. Do they belong to the class of the so-called compact elliptical galaxies (cEs)? For the three candidates, Table \ref{tab:cEs} lists the coordinates, the absolute magnitude $M_V$, the extinction corrected colour $(V-I)_0$, the central surface brightness $\mu_0$, the effective radius $R_{\mathrm{eff}}$, the S\'ersic index $n$, the available radial velocity $v_{\mathrm{rad}}$ and the projected distance $D_{\mathrm{NGC4696}}$ to the central cluster galaxy NGC~4696. Also given are the position angle (P.A.) and the ellipticity $\varepsilon$ used for the fit of elliptical isophotes to the galaxy image. In Fig.~\ref{fig:profiles} we show for each of the cE galaxy candidates the S\'ersic fits and the according residuals to their surface brightness profiles. In the following three paragraphs we describe in detail how the photometric parameters were obtained and try to judge whether the objects belong to the class of cE galaxies.
\paragraph{C-1-10}
is a spectroscopically confirmed member of the Centaurus cluster. It is listed as CCC 70 in the Centaurus Cluster Catalogue and it is morphologically classified as an E0(M32) galaxy \citep{1997A&AS..124....1J}. The isophote fitting was performed on the 30s exposure, since the long-exposure image was saturated at the object centre. Due to the projected proximity of C-1-10 to the giant galaxy NGC 4696, we created and subtracted a model of the latter before modelling the dwarf galaxy.
With an effective radius of $1.90''$ (418 pc), C-1-10 is the most compact object among the galaxies of similar magnitude in our sample. However, it is larger than most of the cE galaxies mentioned in the literature (see Fig.~\ref{fig:sizelumidiag}). Only for NGC 5846A is an even larger effective radius of $\sim500$ pc reported \citep{2005AJ....130.1502M}. Moreover, C-1-10 does not have a particularly high central surface brightness, but falls exactly on the sequence of regular cluster dwarf galaxies (see upper panel of Fig.~\ref{fig:sersic}). Also its colour is consistent with the cluster CMR (Fig.~\ref{fig:cmd}). Given these properties, C-1-10 is more likely a small elliptical galaxy than an example of a cE galaxy.
\paragraph{C-1-21}
is a confirmed Centaurus member \citep{2007ApJS..170...95C}. The best model for C-1-21 was obtained with fixed centre coordinates and position angle, while the ellipticity $\varepsilon$ was allowed to vary ($0.0<\varepsilon<0.2$). Its effective radius of $1.27''$, or 279 pc, is at least three times smaller than the ones of other cluster galaxies of the same luminosity (Fig.~\ref{fig:sizelumidiag}). This is comparable to the size of the two M32 twins in Abell 1689 \citep{2005A&A...430L..25M}, the cE galaxy A496cE \citep{2007A&A...466L..21C} and the cE candidate FS90~192 in the Antlia cluster \citep{2008MNRAS.tmp.1243S}. The central surface brightness of C-1-21 is about 2~mag/arcsec$^2$ higher than that of equally bright cluster galaxies (see Fig.~\ref{fig:sersic}), and its colour is about 0.15~mag redder than expected from the cluster CMR (Eq.~(\ref{eq:cmr})). Interestingly, both colour and central surface brightness would be consistent with other cluster galaxies if the object were about 2~mag brighter. This suggests that C-1-21 might originate from a higher mass elliptical or spiral galaxy, which was stripped by the strong tidal field of NGC 4696 \citep[e.g.][]{1973ApJ...179..423F, 2001ApJ...557L..39B}.
Since C-1-21 exhibits three common characteristics of cE galaxies, namely a small effective radius, a high central surface brightness and a projected location close to a brighter galaxy, we consider it a true cE galaxy.
\paragraph{C-2-20} has no available spectroscopic redshift. Whether it is a cluster galaxy or a background galaxy can at this point only be determined by an educated guess on the basis of its morphological and photometric properties. Absolute magnitude, colour and central surface brightness are very similar to those of C-1-21 (see Table \ref{tab:cEs}). Its very compact morphology ($R_{\mathrm{eff}}=363$ pc) suggests that it is indeed a cluster cE. However, its relatively isolated position (see Fig.~\ref{fig:fields}), far away from the giant galaxies, is unusual for a cE galaxy. We conclude that C-2-20 is definitely a good candidate for a cE galaxy, but whether it really belongs to the Centaurus cluster has to be confirmed by spectroscopic redshift measurements.
\section{Summary and discussion}
\label{sec:discussion}
Based on deep VLT/FORS1 imaging data in Johnson $V$ and $I$ we studied the early-type dwarf galaxy population of the Centaurus cluster. We combined visual classification and SExtractor based detection routines in order to select candidate objects on the images (Sect.~\ref{sec:sample}).
We investigated fundamental scaling relations, such as the colour--magnitude relation and the magnitude-surface brightness relation (Sect.~\ref{sec:scalings}). Both relations were found to be consistent with the ones in the Fornax and Hydra\,I galaxy clusters (see Table~\ref{tab:relations}). Moreover, LG dwarf galaxies projected to the Centaurus distance follow the same magnitude-surface brightness relation. Both scaling relations enabled us to define a sample of probable cluster galaxies, which was used to construct the galaxy luminosity function down to a limiting magnitude of $M_V=-10$ mag (Sect.~\ref{sec:lumfunction}).
\subsection{The faint end of the galaxy LF}
From the completeness corrected galaxy number counts we derive a very flat faint-end slope of the Centaurus galaxy LF. A power law best describes the shape of the faint end of the LF. We measure a slope of $\alpha = -1.14 \pm 0.12$ (see Fig.~\ref{fig:lumfunction} and Table~\ref{tab:relations}). A similar value is obtained when fitting a Schechter function to the data ($\alpha \sim -1.1$). A flat LF for the Centaurus cluster was also derived by \citet{2006AJ....132..347C}. Moreover, our result is consistent with the flat LFs observed in other nearby galaxy clusters, for which the LF was similarly constructed using morphological selection criteria \citep[e.g.][]{2002MNRAS.335..712T, 2003A&A...397L...9H, 2007A&A...463..503M, 2008A&A...486..697M}.
The cluster membership assignment by means of morphology and surface brightness is, of course, the key step for the entire analysis. Misclassifications have to be prevented as far as possible, but they can hardly be avoided entirely. In particular, it is often difficult to distinguish cluster dwarf galaxies with rather high surface brightnesses from background galaxies which only resemble the cluster galaxies. We indeed identified nine questionable objects in our sample, having colours similar to those of the cluster dwarfs but with significantly higher surface brightnesses along with a rather compact morphology (cf. Sect.~\ref{sec:scalings}). Since they deviate by more than $2\sigma$ from the magnitude-surface brightness relation (Eq.~(\ref{eq:magmu})), which is defined by the probable cluster dwarf galaxies (see Fig.~\ref{fig:magmu}), we conclude that most of these objects do not belong to the cluster. In any case, they do not significantly influence the galaxy LF, raising the faint-end slope only marginally from $\alpha=-1.14$ to $\alpha=-1.17$ (cf. Sect.~\ref{sec:lumfunction}).
Another caveat of a morphological selection is that one could potentially misclassify compact, M32-like cluster members as background objects, or vice versa \citep{2002MNRAS.335..712T}. We found three cE galaxy candidates in our sample, of which two are confirmed cluster members and the third one has photometric properties very similar to confirmed cluster galaxies (see Sect.~\ref{sec:cEs}). However, cE galaxies are rare and have (like our candidates) rather bright magnitudes (Fig.~\ref{fig:sizelumidiag}), so that they do not affect the shape of the faint end of the galaxy LF.
Altogether, we are confident that we have not misclassified a large number of objects, since we were sensitive to very low surface brightnesses, the seeing of our images was excellent, and we made use of photometric scaling relations to substantiate the morphological classifications.
Besides Virgo, Fornax and Hydra\,I, Centaurus is now the fourth galaxy cluster in the local Universe whose LF has been investigated down to the regime of dwarf spheroidal galaxies ($M_V\sim -10$ mag). Flat luminosity functions, which contradict the predicted mass spectrum of cosmological dark-matter haloes \citep[e.g.][]{1974ApJ...187..425P, 1999ApJ...524L..19M, 2001MNRAS.321..372J}, have been derived for all of these environments. It thus appears that this discrepancy with hierarchical cold dark matter models of galaxy formation is a common feature of various galaxy clusters and groups. However, one has to note that we primarily investigate early-type galaxies in a rather limited region in the central part of the cluster. The slope of the LF might considerably change when including the outer parts of the cluster in the analysis. Moreover, although not found in our study, late-type dwarf irregular galaxies might affect the shape of the LF as well.
In the end it is essential to have \textit{direct} cluster membership determination via deep spectroscopic surveys for magnitudes $M_V\gtrsim -14$ mag, where the faint-end slope $\alpha$ starts to dominate the shape of the LF (cf. Fig.~\ref{fig:lumfunction}). Beyond the Local Group, this has up to now only been achieved in studies of the galaxy clusters Fornax, Perseus and Virgo \citep[e.g.][]{1999A&AS..134...75H, 2001ApJ...548L.139D, 2008MNRAS.383..247P, 2008AJ....135.1837R}. The next step should therefore be the extension of those surveys to other galaxy clusters like Centaurus or Hydra\,I in order to thoroughly verify the results of the photometric studies. With a reasonable amount of observing time ($\sim 2$ hours integration time) the spectroscopic confirmation of low surface brightness objects is technically feasible for objects with $\mu_V \lesssim 25$~mag/arcsec$^2$, using low-resolution spectrographs like FORS or VIMOS at the VLT (see \texttt{http://www.eso.org/observing/etc/} for exposure time calculators). This surface brightness limit corresponds to an absolute magnitude limit of $M_V\sim -11$ mag at the distance of Centaurus (cf. Fig.~\ref{fig:magmu}). Naturally, at a given magnitude the surface brightness limit introduces a bias towards more successfully measuring the redshifts of smaller objects with higher surface brightnesses. However, the cluster membership assignment via morphological classification turns out to be more difficult for exactly those rather compact objects than for extended low surface brightness galaxies (see Sect.~\ref{sec:scalings}). This means that the observational bias primarily excludes objects for which the morphological membership assignment is more accurate and for which the contamination by background objects is thus smaller.
\subsection{The dependency of effective radius on luminosity}
We derived structural parameters, such as central surface brightness $\mu_0$, effective radius $R_{\mathrm{eff}}$ and profile shape index $n$, of the probable cluster galaxies by fitting \citet{1968adga.book.....S} models to the galaxy surface brightness profiles (Sect.~\ref{sec:sersic}).
In plots of $\mu_0$ and $n$ versus the galaxy magnitude we observe continuous relations, extending over some 10 magnitudes from $M_V=-20$ mag to the magnitude limit of our survey (Fig.~\ref{fig:sersic}). This confirms observations of continuous relations in the LG and other galaxy clusters, such as Fornax, Virgo and Hydra\,I \citep[e.g.][]{1994MNRAS.268L..11Y, 2003AJ....125.2936G, 2006ApJS..164..334F, 2008IAUS..246..377C, 2008A&A...486..697M}. Only the brightest cluster galaxies have central surface brightnesses which are lower than expected from the extrapolation of the relation defined by galaxies of lower luminosity. The deviation of these core galaxies from the $M_V$--$\mu_0$ relation can be explained by mass-depleted central regions due to the dynamical interaction of a supermassive black hole binary \citep[][and references therein]{2006ApJS..164..334F}. A different point of view is, however, that these galaxies belong to a different sequence, almost perpendicular to the dE sequence, populated with bright early-type galaxies ($M_V\lesssim-20$ mag), for which the surface brightness decreases with increasing magnitude \citep[e.g.][]{1985ApJ...295...73K, 1992ApJ...399..462B, 2008arXiv0810.1681K}.
The size--luminosity diagram is another tool to visualise a dis-/continuity between dwarf and giant elliptical galaxies. Combining our data with studies of early-type galaxies in Virgo \citep{2006ApJS..164..334F}, Hydra\,I \citep{2008A&A...486..697M} and the LG, we find a well defined sequence in such a diagram (see Fig.~\ref{fig:sizelumidiag}). For a wide magnitude range ($-21\lesssim M_V \lesssim -13$~mag) the effective radius changes little with luminosity. For fainter magnitudes the slope of the size--luminosity relation steepens and the sequence continues all the way down to the ultra-faint LG dwarf galaxies ($M_V\sim -4$~mag), which have been identified in the SDSS \citep[e.g.][]{2006MNRAS.371.1983M, 2008ApJ...684.1075M, 2006MNRAS.365.1263M, 2007ApJ...663..948G, 2007ApJ...659L..21Z, 2008ApJ...688.1009M}. Only the brightest core galaxies and compact elliptical galaxies deviate from the relation of ordinary elliptical galaxies. Both the continuous surface brightness vs. absolute magnitude relation and the continuous sequence in the size--luminosity diagram are consistent with the interpretation that dwarf galaxies and more massive elliptical galaxies are one family of objects. In this scenario the scaling relations are caused by the gradual change of the S\'ersic index $n$ with the galaxy magnitude \citep[e.g.][]{1997ASPC..116..239J, 2003AJ....125.2936G, 2005A&A...430..411G}.
In contrast to this interpretation, \citet{2008arXiv0810.1681K} and \citet{2008ApJ...689L..25J} most recently reported on a pronounced dichotomy of elliptical and spheroidal galaxies in the size--luminosity diagram, which is \textit{not} caused by the gradual change of the galaxy light profile with luminosity. \citet{2008arXiv0810.1681K} reaffirm results of older studies \citep[e.g.][]{1985ApJ...295...73K,1991A&A...252...27B, 1992ApJ...399..462B, 1993ApJ...411..153B} and claim that the dwarf galaxy sequence intersects at $M_V\sim-18$~mag a second (much steeper) sequence, which consists of giant elliptical and S0 galaxies and extends to the regime of cE galaxies \citep[see also][]{2008MNRAS.386..864D}. They conclude that massive elliptical and spheroidal galaxies are physically different and have undergone different formation processes. The latter were created by the transformation of late-type galaxies into spheroidals, whereas the giant ellipticals formed by mergers. By comparing the observations to models of ram-pressure stripping and galaxy harassment, \citet{2008A&A...489.1015B} indeed find evidence for different formation mechanisms. Although the bulk of galaxies investigated in \citet{2006ApJS..164..334F} falls into the magnitude range where the dichotomy should become apparent, these authors did not report on two distinct sequences \citep[but see appendix~B in][]{2008arXiv0810.1681K}.
Based on our data we cannot confirm the existence of an E -- Sph dichotomy. Our data rather show a continuous sequence of structural properties across a wide range of galaxy luminosities (masses), supporting the interpretation that dSphs as well as Es are one family of objects. Only the most massive core galaxies and the cE galaxies are clearly separated from normal ellipticals. In this context, however, one has to keep in mind that cE galaxies are extremely rare, whereas normal elliptical galaxies are much more frequent. This raises the question of which process is responsible for the peculiar properties of cEs. More than one formation channel is being discussed for cEs like M32. If they are the result of galaxy threshing \citep{2001ApJ...557L..39B}, their ``original'' location in Fig.~\ref{fig:sizelumidiag} was at higher luminosity and larger radius, probably consistent with the main body of ordinary elliptical galaxies. They would rightfully deserve to be termed low-mass counterparts of giant ellipticals \citep[][and references therein]{2008arXiv0810.1681K} only if they were intrinsically compact at the time of their formation \citep{2002AJ....124..310C}.
\bibliographystyle{aa}
\section{Introduction}
The Patlak-Keller-Segel (PKS) model can be used to describe the collective dynamics of a large
number of individual agents interacting through a diffusive signal. For instance, it appears for the chemotaxis phenomena of various types of cells, aggregation dynamics of crowds or to describe the gravitational collapse, see~\cite{rr26, MRSV11,CCH}, and references therein. With a source term, it is used as a mechanical description of tumor growth \cite{r34, rBC}. Including nonlinear diffusivity and Newtonian interactions, the PKS model is written
\begin{equation}\label{d1}
\begin{cases}
\partial _t\rho_m=\Delta \rho_m^m+\nabla\cdot(\rho_m\nabla\mathcal{N}\ast\rho_m)\quad\text{for} \quad (x,t)\in\mathbb{R}^n\times\mathbb{R}_+, \quad n\geq 3,
\\[5pt]
\rho_{m}(x,0)=\rho_{m,0}(x)\geq0\quad\text{for }x\in\mathbb{R}^n, \qquad \qquad m>2-2/n, \qquad \text{(subcritical case)}.
\end{cases}
\end{equation}
For chemotaxis, $\rho_m(x,t) \geq 0$ represents the cell density and $\mathcal{N}\ast\rho_m$ represents the chemical substance concentration obtained by convolution with the Newtonian potential
\begin{equation*}
\mathcal{N}(x)=\frac{-1}{n(n-2)\alpha_n|x|^{n-2}}\qquad\text{for } \quad
x\in\mathbb{R}^n\backslash \{0\}, \qquad \Delta \mathcal{N} = \delta,
\end{equation*}
with $\alpha_n>0$ being the volume of the $n$-dimensional unit ball and $\delta$ the Dirac measure. The conservation of mass for the Cauchy problem Eq.~\eqref{d1} holds
\begin{equation*}
\int_{\mathbb{R}^n}\rho_m(x,t)dx=\int_{\mathbb{R}^n}\rho_{m,0}dx:=M, \qquad \forall\ t \geq 0.
\end{equation*}
For solutions of Eq.~\eqref{d1}, the pressure is given by a power of the density (Darcy's law) as
\begin{equation}\label{d6}
P_m:=\frac{m}{m-1}\rho_m^{m-1}, \qquad \quad P_{m,0}:=\frac{m}{m-1}\rho_{m,0}^{m-1}.
\end{equation}
We can rewrite Eq.~\eqref{d1} for the density $\rho_m$ and pressure $P_m$ in terms of the transport
equation with the effective velocity $u_m$ as
\begin{equation}\label{d2}
\partial _t\rho_m=\nabla\cdot(\rho_mu_m), \qquad u_m:=\nabla
P_m+\nabla\mathcal{N}\ast\rho_m.
\end{equation}
By a direct computation, the pressure satisfies the equation
\begin{equation}\label{d3}
\partial_t P_m=(m-1)P_m(\Delta P_m+\rho_m)+\nabla P_m\cdot(\nabla
P_m+\nabla\mathcal{N}\ast\rho_m).
\end{equation}
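The direct computation behind Eq.~\eqref{d3} can be sketched as follows (assuming enough smoothness to differentiate); it only uses Eq.~\eqref{d2} together with the relations $\nabla P_m=m\rho_m^{m-2}\nabla\rho_m$, $\nabla\rho_m^m=\rho_m\nabla P_m$ and $\Delta\mathcal{N}\ast\rho_m=\rho_m$:
\begin{align*}
\partial_t P_m&=m\rho_m^{m-2}\,\partial_t\rho_m
=m\rho_m^{m-2}\,\nabla\cdot\big(\rho_m(\nabla P_m+\nabla\mathcal{N}\ast\rho_m)\big)\\
&=m\rho_m^{m-2}\Big[\rho_m\Delta P_m+\rho_m^2+\nabla\rho_m\cdot\big(\nabla P_m+\nabla\mathcal{N}\ast\rho_m\big)\Big]\\
&=(m-1)P_m\big(\Delta P_m+\rho_m\big)+\nabla P_m\cdot\big(\nabla P_m+\nabla\mathcal{N}\ast\rho_m\big),
\end{align*}
where we used $m\rho_m^{m-1}=(m-1)P_m$ in the last line.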
The competition between the degenerate diffusion and the nonlocal aggregation is
the main characteristic of Eq.~(\ref{d1}) or Eq.~(\ref{d2}). This is well
represented by the free energy functional
\begin{equation}\label{a58}
F_m(\rho_m)= \frac{1}{m-1}\int_{\mathbb{R}^n}\rho_m^mdx-
\frac{1}{2}\int_{\mathbb{R}^n}|\nabla\mathcal{N}\ast\rho_m|^2dx.
\end{equation}
It satisfies the energy identity, which shows that $F_m(\rho_m)$ is non-increasing with time,
\begin{equation}\label{f1}
\frac{dF_m(\rho_m)}{dt}+\int_{\mathbb{R}^n}\rho_m|\nabla(P_m+\mathcal{N}\ast\rho_m)|^2dx=0.
\end{equation}
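For smooth, fast-decaying solutions, the identity~\eqref{f1} follows from the symmetry of the kernel $\mathcal{N}$ and two integrations by parts:
\begin{align*}
\frac{dF_m(\rho_m)}{dt}
&=\int_{\mathbb{R}^n}\Big(\frac{m}{m-1}\rho_m^{m-1}+\mathcal{N}\ast\rho_m\Big)\partial_t\rho_m\,dx
=\int_{\mathbb{R}^n}\big(P_m+\mathcal{N}\ast\rho_m\big)\,\nabla\cdot\big(\rho_m\nabla(P_m+\mathcal{N}\ast\rho_m)\big)dx\\
&=-\int_{\mathbb{R}^n}\rho_m\big|\nabla(P_m+\mathcal{N}\ast\rho_m)\big|^2dx,
\end{align*}
where the interaction term was handled through $-\frac12\frac{d}{dt}\int_{\mathbb{R}^n}|\nabla\mathcal{N}\ast\rho_m|^2dx=\int_{\mathbb{R}^n}(\mathcal{N}\ast\rho_m)\,\partial_t\rho_m\,dx$.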
Since $\frac{\delta F_m(\rho_m)}{\delta \rho_m}=P_m+\mathcal{N}\ast\rho_m$ represents the
chemical potential, there exists a gradient flow structure for the PKS model,
\begin{equation}\label{gf}
\frac{dF_m(\rho_m)}{dt}+\int_{\mathbb{R}^n}\rho_m|\nabla\frac{\delta
F_m(\rho_m)}{\delta\rho_m}|^2dx=0.
\end{equation}
Solutions $\rho_{m,s}$ of the stationary PKS system (SPKS) are those along which the free energy $F_m(\rho_m(t))$ is constant in time; they are thus determined by
\begin{equation}\label{SPKS}
\nabla\rho_{m,s}^m+\rho_{m,s}\nabla\mathcal{N}\ast\rho_{m,s}=0\quad\text{for
}x\in\mathbb{R}^n, \qquad P_{m,s}=\frac{m}{m-1}\rho_{m,s}^{m-1}.
\end{equation}
\\
For over a decade, starting with the paper~\cite{5} motivated by tumor growth, a large literature has been devoted to the incompressible (Hele-Shaw) limit, that is, the limit as $m\to\infty$, for several variants of the porous medium equation (see below). In particular, establishing this limit when Newtonian interactions are included, as in Eq.~\eqref{d1} and Eq.~\eqref{SPKS}, has been a long-standing question, solved in~\cite{CKY_2018} based on both optimal transportation methods and viscosity solutions.
\\
\paragraph{Incompressible limit.} Our purpose is to complete the understanding, coming from~\cite{CKY_2018}, of the incompressible (Hele-Shaw) limit for the PKS model Eq.~\eqref{d1} in various directions. Firstly, we introduce a third approach based on weak solutions as described below. In particular, our assumptions on the initial data are more general (not necessarily patch data), and the method can easily be extended to source terms when the mass varies. Secondly, we prove new regularity results: an $L^3$ estimate on $\nabla P_m$ and regularity \`a la Aronson-B\'enilan providing bounds on the second derivatives of the pressure $P_m$. Thirdly, we prove an estimate on the time derivative of the pressure; this is based on a new idea, since the direct approach does not work. Finally, we prove a new uniqueness theorem for the limiting Hele-Shaw problem.
\\
Following~\cite{CKY_2018}, the Hele-Shaw limit system writes
\begin{align}\label{hs}
\begin{cases}
\partial_{t}\rho_{\infty}=\Delta
P_{\infty}+\nabla\cdot(\rho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty}), \qquad
&\text{in } D'\big(\mathbb{R}^n\times \mathbb{R}_+\big),
\\[5pt]
(1-\rho_{\infty})P_{\infty}=0, \qquad \quad 0\leq\rho_{\infty}\leq1, &\text{a.e. }(x,t)\in \mathbb{R}^n\times
\mathbb{R}_+.
\end{cases}
\end{align}
This is a weak version of the geometric Hele-Shaw problem including chemotaxis. We also prove the
complementarity relation (in distributional sense)
\begin{equation}\label{d5}
P_{\infty}(\Delta P_{\infty}+\rho_{\infty})=0.
\end{equation}
It describes the limit pressure by a degenerate elliptic equation once we know the regularity of the
set $\{ P_\infty > 0 \}$, which is a major challenge for
the Hele-Shaw problem, see \cite{rCS,r20,r60, GKM} and references therein. Furthermore, with Eqs.~\eqref{hs}--\eqref{d5} at hand, the limiting free energy functional easily follows,
\begin{equation}\label{f3}
\begin{cases}
F_{\infty}(\rho_\infty)=\frac{1}{2}\int_{\mathbb{R}^n}\rho_{\infty}\mathcal{N}\ast\rho_{\infty}dx, \qquad 0\leq\rho_\infty\leq 1,
\\[10pt]
\frac{dF_{\infty}(\rho_\infty(t))}{dt}+\int_{\mathbb{R}^n}
\rho_{\infty}(t)|\nabla\big(P_\infty(t)+\mathcal{N}\ast\rho_{\infty}(t)\big)|^2dx=0.
\end{cases}
\end{equation}
Compared with
the free energy~\eqref{a58}, the diffusive effect is replaced by the height
constraint $\rho_\infty\leq 1$. Finally, we extend the uniqueness results \cite{4,r46} for the PKS model Eq.~\eqref{d1} to the uniqueness of the solution to the Hele-Shaw limit system Eq.~\eqref{hs}.
In the stationary case, the incompressible (Hele-Shaw) limit from the SPKS model Eq.~\eqref{SPKS} as $m\to \infty$, is represented as
\begin{equation}\label{HSSPKS}
\begin{cases}
\nabla P_{\infty,s}+\rho_{\infty,s}\nabla\mathcal{N}\ast\rho_{\infty,s}=0,&\text{in }\mathcal{D}'(\mathbb{R}^n),
\\[5pt]
(1-\rho_{\infty,s})P_{\infty,s}=0, \qquad \quad 0\leq \rho_{\infty,s}\leq 1,\quad &\text{a.e. }x\in\mathbb{R}^n.
\end{cases}
\end{equation}
As before, this corresponds to vanishing dissipation for the free energy $F_\infty(\rho_{\infty}(t))$.
\\
The limits~\eqref{hs}--\eqref{f3} can be formally derived from the PKS model Eq.~\eqref{d1}. Indeed, taking the limit as $m\to\infty$ in Eq.~\eqref{d1}, we formally obtain the
first equation in~\eqref{hs}. Since we can prove that the limit pressure $P_{\infty}$ is bounded, from~\eqref{d6} we recover $\eqref{hs}_2$. Also, we can formally obtain the complementarity relation Eq.~\eqref{d5} by a direct computation on Eq.~\eqref{d3} as $m\to\infty$.
In addition, from~\eqref{f1}, we can formally obtain the limit energy functional~\eqref{f3}. It should be emphasized that the structure of
gradient flow as in~\eqref{gf} is still present in a weak form as in the optimal
transportation approach, cf.~\cite{r18,MRS14}. Similarly, the incompressible limit Eq.~$\eqref{HSSPKS}_1$ is formally derived from the SPKS model Eq.~\eqref{SPKS} as $m\to\infty$. As is well known, rigorously establishing these limits faces deep difficulties due to the nonlinearities and the weak regularity; the limit $\rho_\infty$ is discontinuous in space and $P_{\infty}$ can undergo discontinuities in time.
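At the level of the pressure equation, the formal derivation of the complementarity relation can be made explicit. Dividing Eq.~\eqref{d3} by $m-1$ gives
\begin{equation*}
\frac{1}{m-1}\,\partial_t P_m=P_m\big(\Delta P_m+\rho_m\big)+\frac{1}{m-1}\,\nabla P_m\cdot\big(\nabla P_m+\nabla\mathcal{N}\ast\rho_m\big),
\end{equation*}
so that, assuming $\partial_t P_m$ and $\nabla P_m$ remain bounded in suitable norms, letting $m\to\infty$ formally yields Eq.~\eqref{d5}. Similarly, inverting Darcy's law~\eqref{d6}, $\rho_m=\big(\frac{m-1}{m}P_m\big)^{1/(m-1)}$, formally gives $\rho_\infty=1$ wherever $P_\infty>0$, while the pressure would blow up on any region where $\rho_\infty>1$; this is the formal content of $\eqref{hs}_2$.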
\paragraph{Review of literature.} As mentioned earlier, several approaches are possible to overcome the above mentioned difficulties. Optimal transportation methods are used in conservative cases, where the incompressible limit leads to so-called {\em congested flows}. This method was initiated in \cite{MRS10}, and is well adapted to the transitions from discrete to continuous models~\cite{MRSV11}. It was extended to the two-species case in~\cite{Laborde}. The case of a Newtonian drift, and the limit $m\to \infty$, was treated in~\cite{CKY_2018}.
\\
Another approach is by viscosity solutions, see for instance~\cite{IY,r20}, for an external drift see~\cite{LeiKim,r108} and again~\cite{CKY_2018} for Newtonian drifts. In particular, this approach can handle source terms as initiated in~\cite{KimPS}. It has the advantage of handling specifically the free boundary in the limit with minimal assumptions for this purpose.
\\
Our approach is by weak solutions as defined below (see Def.~\ref{def:WS}) and is motivated by tumor growth models of the form \begin{equation}\label{tg}
\partial_t\rho_m=\Delta \rho_m^m+\rho_m G(P_m)\quad\text{for }m>1,
\end{equation}
where $G(P)$ is a given decreasing function satisfying $G(P_M)=0$ for some threshold $P_M>0$. This problem was first solved in \cite{5} using regularity as introduced by Aronson and B\'enilan~\cite{r32} and $BV$ estimates. The method was extended to include a drift, see~\cite{r50}, to replace Darcy's law by Brinkman's law~\cite{r51,r52}, and to a system with nutrients in~\cite{r35} using a new estimate in $L^4_{t,x}$ for $\nabla P_m$. Recently, multispecies problems were handled in~\cite{GPS2019,r53}, and a major improvement concerning compactness followed in~\cite{rPrX,r62}, see also~\cite{IgN} and the most advanced version in~\cite{david:hal-03636939}.
Furthermore, let us recall that for the porous medium equation (PME), i.e., when $G\equiv 0$, the problem leads to the so-called {\em mesa problem} and was also treated in a large literature, see for instance \cite{r4,r5,r10,r12} and references therein. The weak formulation and the variational formulation (using the so-called Baiocchi variable) of Hele-Shaw type were first introduced in~\cite{r107} and~\cite{r17}, respectively.
Concerning the Keller-Segel model with $m$ fixed, much is known and the methods are nowadays well established. Important progress has been made recently on global existence, large time behavior, critical mass and finite time blow-up for the multi-dimensional PKS model. In particular, solutions with different diffusion exponents exhibit different behaviors. For diffusion exponent $1\leq m<2-2/n$ (supercritical case), the diffusion dominates in regions of low density and the aggregation dominates in regions of high density; the solution to Eq.~\eqref{d1} then blows up in finite time for large mass and exists globally in time for small mass, cf.~\cite{BL13,CC,CW,Sugi06,S2}. For $m=2-2/n$ (critical case), there exists a critical mass $M_c>0$ such that the solution blows up in finite time for initial mass $M>M_c$, \cite{Sugi06, CC, CW}, and exists globally in time for initial mass $M<M_c$, see~\cite{BCL,CCH} and references therein. For diffusion exponent $m>2-2/n$ (subcritical case), the diffusion dominates in regions of high density, and the solution to this model is uniformly bounded and exists globally in time without any restriction on the size of the initial data, cf.~\cite{CCJ,S2,sb2014,BL13}. In addition, the large time behavior has been investigated extensively; one can refer to~\cite{S7,YY,IY,LBW,r18} and references therein.
The SPKS model Eq.~\eqref{SPKS} has also been widely studied. For existence of solutions, see \cite{rr10,MF,LPL,rr2}, for uniqueness see \cite{rr12,r2,rr30}, and for radial symmetry see \cite{r18,r12,rr30}. Critical points of the free energy $F_m(\rho_m)$ in~\eqref{a58} have been studied, see \cite{MF,rr10,LPL,rr2} and references therein. For the multi-dimensional SPKS model with a more general attractive potential, the authors of~\cite{r18} proved, by the method of continuous Steiner symmetrization, that the solution is radially symmetric and decreasing up to a translation; uniqueness $(m\geq2)$ and non-uniqueness $(1<m<2)$ of the solution to the SPKS model with a general attractive potential were then proved in~\cite{rr12}. Before that, the authors of~\cite{rr2} proved that all compactly supported solutions to the 3-dimensional SPKS model Eq.~\eqref{SPKS} with $m>4/3$ must be radially symmetric up to a translation, hence obtaining uniqueness of the solution among compactly supported functions. Furthermore, for the same case, the authors of~\cite{rr29} proved, in 3 dimensions, uniqueness of the solution among radial functions for a given mass, and their method can handle general potentials when $m>2-2/n$. Similar results were obtained in~\cite{rr30} for the 2-dimensional case with $m>1$ by an adapted moving plane technique. Carrillo et al.~\cite{rr10} showed the existence and compact support property of the radially symmetric solutions using dynamical systems arguments.
\paragraph{Difficulties and novelties}
However, it should be emphasized that the arguments for passing to the incompressible limit in \cite{5,r35,r50} cannot be applied directly to Eq.~\eqref{d1}. This is due to the Newtonian drift in the PKS model, even though it is of lower order than the diffusion term. Its singularity gives rise to new and essential challenges for rigorously establishing the incompressible limits~\eqref{hs}--\eqref{f3}. Indeed, for models of tumor growth such as Eq.~\eqref{tg}, the source term $\rho_mG(P_m)$ helps to obtain a uniform $L^1$ estimate for the time derivative of both the density and the pressure by Kato's inequality. But, for Eq.~\eqref{d1}, on the one hand, the nonlocal Newtonian interaction leads to the absence of a comparison principle, so that a uniform bound for the pressure cannot be obtained this way. On the other hand, one of the main challenges is to obtain a uniform $L^1$ estimate for the time derivative of the pressure without the help of the source term, despite the effect of the nonlocal interaction. Thus, it is difficult to gain the desired compactness, not only on the density but also on the pressure, for the PKS model. Besides, establishing the incompressible limit of the SPKS model Eq.~\eqref{SPKS} by the method of weak solutions is a new and interesting topic for diffusion-aggregation equations; see~\cite{CKY_2018} for viscosity solution methods.
\\
Therefore, to achieve our goals, we develop new estimates and strategies as follows:
\\[2pt]
$\bullet$ We obtain the complementarity relation Eq.~\eqref{d5} for the PKS model Eq.~\eqref{d1}. We first derive a uniform $L^3$ estimate on the pressure gradient in the spirit of \cite{r35}. Then, we establish the uniform Aronson-B\'enilan (AB) estimates in $L^3\cap L^1$ as initiated in~\cite{GPS2019}. In particular, we show a decay rate for the AB estimate in $L^3$ under the form
\begin{equation*}
\|\min\big\{\Delta P_m+\rho_m,0\big\}\|_{L^3(Q_T)}^3\leq \frac{C(T)}{m}.
\end{equation*}
\\
$\bullet$ In addition, we establish a new uniform $L^1$ estimate for the time derivative of the pressure. To our knowledge, this is the first time such an estimate is obtained for the high-dimensional porous medium equation (Darcy's law) with a nonlocal attractive interaction, since working directly on the pressure is not sufficient.
\\
$\bullet$ To prove the uniqueness of the solution to the Hele-Shaw limit system Eq.~\eqref{hs}, the key is to show that the limit pressure is monotone with respect to the limit density. Assuming that $P_i\rho_i=P_i$ and $0\leq \rho_i\leq 1$ hold for $i=1,2$, we find $ (P_1-P_2)(\rho_1-\rho_2)\geq0$.
\\
$\bullet$ To establish the incompressible limit of the SPKS model with a given mass, we obtain a uniform bound on the pressure and a uniform bound on the support of the density.
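The monotonicity used for the uniqueness of the Hele-Shaw limit system is an elementary pointwise consequence of the constraints: since $P_i\rho_i=P_i$ and $0\leq\rho_i\leq1$ for $i=1,2$, expanding the product gives
\begin{equation*}
(P_1-P_2)(\rho_1-\rho_2)=P_1(1-\rho_2)+P_2(1-\rho_1)\geq0 .
\end{equation*}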
\paragraph{Notations} We use the following notations and definitions.
\begin{notation}\label{d11} We set
\\
$\bullet$ $Q_{T}=\mathbb{R}^n\times(0,T)$, \quad
$Q=\mathbb{R}^n\times(0,\infty)$.
\\ $\bullet$
$B_{R}:=\{x:|x|\leq R\},\quad R>0$.\\
$\bullet$
$|f(x)|_{+}=\max\{f(x),0\}$, \quad $|f(x)|_-=-\min\{f(x),0\}$.
\\ $\bullet$
$\nabla^2f:\nabla^2g:=\sum\limits_{i,j=1}^{n}\partial_{ij}^2f\partial_{ij}^2g,\quad
(\nabla^2f)^2:=\nabla^2f:\nabla^2f=\sum\limits_{i,j=1}^{n}(\partial_{ij}^2f)^2$.
\end{notation}
Also, we use $C$ as a generic constant independent of time $t$ and diffusion
exponent $m$, $C(T)$ or $C(T,R)$ denote generic constants only depending on the time $T$ or on $T$ and $R>0$.
\begin{definition}[Weak solution]\label{def:WS}The weak solutions of the PKS model Eq.~\eqref{d1} and the SPKS model Eq.~\eqref{SPKS} are defined as follows:
\\
$\bullet$ We recall that a weak solution to Eq.~\eqref{d1} means that for all $T>0$ and all test functions $\varphi \in C_0^{\infty}(Q_T)$ such that $\varphi(T)=0$, it holds
\[
\int_{Q_T} \Big[ \rho_m \partial_t \varphi +\rho_m^m \Delta \varphi -\rho_m \nabla \varphi .\nabla {\mathcal N}\ast
\rho_m \Big] dx dt = \int_{\mathbb{R}^n} \rho_{m,0} \varphi(0) dx .
\]
For this, $\rho_m$, $\rho_m^m$ and $\rho_m \nabla {\mathcal N}\ast \rho_m$ are assumed to be integrable.\\
$\bullet$ A weak solution to Eq.~\eqref{SPKS}
is defined for all test function $\varphi \in C_0^{\infty}(\mathbb{R}^n)$ as
\[
\int_{\mathbb{R}^n} \Big[ \nabla\rho_{m,s}^m\cdot\nabla\varphi + \rho_{m,s}\nabla {\mathcal N}\ast
\rho_{m,s}\cdot\nabla\varphi \Big]dx=0,
\]
where $\nabla\rho_{m,s}^m$ and $\rho_{m,s} \nabla {\mathcal N}\ast \rho_{m,s}$ are assumed to be integrable.
\end{definition}
\section{Main results}
To state our main results on the incompressible limit of PKS model, we need assumptions on the initial data $\rho_{m,0}$. Firstly, for $\rho_{m,0}$, we assume
\begin{equation}\label{c1}
\begin{cases}
\int_{\mathbb{R}^n} \rho_{m,0} (x) dx =: M <\infty,\quad
\|\rho_{m,0}^{m+1}\|_{L^1(\mathbb{R}^n)}\leq C,\quad\|\rho_{m,0}\|_{L^\infty(\mathbb{R}^n)}<\infty,&\\[2pt]
\|\rho_{m,0}-\rho_{\infty,0}\|_{L^1(\mathbb{R}^n)}\to0,\quad\text{as }m\to\infty,&\\
{\rm supp}(\rho_{m,0})\subset B_{R_m}\text{ for some constant } R_m> 1.&
\end{cases}
\end{equation}
Secondly, for some results, in particular the Aronson-B\'enilan estimate, we also need additional regularity assumptions on the initial data,
\begin{align}
&\|P_{m,0}\|_{L^2(\mathbb{R}^n)}+\| \nabla P_{m,0}\|_{L^2(\mathbb{R}^n)}\leq C,\label{c2}\\[2pt]
&\||\Delta P_{m,0}+\rho_{m,0}|_-\|_{L^1(\mathbb{R}^n)\cap L^2(\mathbb{R}^n)}\leq C.\label{c3}
\end{align}
Furthermore, a compatibility condition is also needed for obtaining the $L^1$ estimate of the time derivative for the pressure,
\begin{equation}\label{c4}
\||(m+1)\rho_{m,0}^{m+1}(\Delta
P_{m,0}+\rho_{m,0})+\nabla\rho_{m,0}^{m+1}\cdot(\nabla
P_{m,0}+\nabla\mathcal{N}\ast\rho_{m,0})|_-\|_{L^1(\mathbb{R}^n)}\leq C.
\end{equation}
Finally, to show the compact support of the solution of the Hele-Shaw limit system Eq.~\eqref{hs}, we need an additional uniform support assumption
\begin{equation}\label{c5}
{\rm supp}(\rho_{m,0})\subset B_{R_0} \quad \text{ for a fixed constant }R_0>0.
\end{equation}
\begin{remark} Let $\varphi\in C_{0}^{\infty}(\mathbb{R}^n)$ with $0\leq \varphi\leq \frac 1 2$; then one immediately verifies that the initial data $\rho_{m,0}=\varphi$ satisfies the assumptions \eqref{c1}--\eqref{c4}.
\end{remark}
Assumption \eqref{c1} guarantees global existence of solutions to the Cauchy problem \eqref{d1} because $m>2-2/n$, as mentioned earlier.
We also recall in Appendices \ref{AAA} and \ref{sec:comsupp} that solutions satisfy, for some $\mathcal{R}_{m}(T)$,
\[
\|\rho_m\|_{L^\infty(Q_T)}\leq C(m,T), \qquad {\rm supp}(\rho_m(T))\subset B_{\mathcal{R}_{m}(T)}, \forall \ T>0.
\]
We now gather several uniform regularity estimates, and then establish the stiff limit of the PKS model as $m \to \infty$.
\begin{theorem}[Uniform bounds and compactness] \label{tAE}
Assume~\eqref{c1}; then the global solution $\rho_m$ to the Cauchy problem Eq.~\eqref{d1} in the sense of Def.~\ref{def:WS} satisfies, for any $T>0$,
\[
\sup_{0\leq t\leq T}\|\rho_m(t)\|_{L^{q}(\mathbb{R}^n)}+
\|\rho_m^m\|_{L^2(Q_T)}+\|\nabla\rho_m^m\|_{L^2(Q_T)}\leq C(T),\quad \forall\ q\in[1,m+1],
\]
\[
\int_{Q_T}\nabla\rho_m^m\cdot\nabla\rho_m^{p-1}dxdt\leq C(T,p), \quad 1<p\leq2
,\]
\[
\|\rho_{m}\|_{L^{(2+\frac 2n)m+ \frac 2 n} (Q_T)}+\|\nabla P_m\|_{L^2(Q_T)}\leq C(T),
\]
\[\sup_{0\leq t \leq T} \||\rho_m(t)-1|_+\|_{L^2(\mathbb{R}^n)}\leq\frac{C(T)}{\sqrt m},\]
\[
\|\partial_{t}\nabla\mathcal{N}\ast\rho_{m}\|_{L^2(Q_T)}+\|\nabla\mathcal{N}\ast\rho_m\|_{L^{\infty}(Q_T)}
+ \sup\limits_{0\leq t\leq T}\|\nabla\mathcal{N}\ast\rho_{m}(t)\|_{L^2(\mathbb{R}^n)}\leq C(T),
\]
\[
\sup\limits_{0\leq t\leq T}\|\nabla^2\mathcal{N}\ast\rho_m(t)\|_{L^q(\mathbb{R}^n)}\leq C(T,q),
\]
where $C(T,q)\sim \frac{1}{q-1}$ for $0<q-1\ll 1$ and $C(T,q)\sim q$ for $q\gg
1$ and $ m>n-1$.
\end{theorem}
Thanks to these estimates we may extract subsequences, still denoted by $\rho_m$ such that, as $m\to\infty$, $\rho_m$ converges weakly in $L^2(Q_T)$ to
$\rho_{\infty}\in L^\infty\big(0,T;L_+^1(\mathbb{R}^n)\big)$, $P_{m}$ and $\rho_m P_m$ converge weakly in $L^2(Q_T)$ to the same limit $P_{\infty}\in L^2\big(0,T;H^1(\mathbb{R}^n)\big)$, and $\nabla\mathcal{N}\ast\rho_m$ converges strongly in $L^2_{loc}(Q_{T})$ to
$\nabla\mathcal{N}\ast\rho_{\infty}\in\mathcal{C}\big(0,T;L^2(\mathbb{R}^n)\big)\cap L^\infty\big(0,T; L^\infty(\mathbb{R}^n)\cap \dot{W}^{1,q}(\mathbb{R}^n)\big)\text{ for }1<q<\infty$. We have the following.
\begin{theorem}[Stiff limit] \label{tAEbis}
With assumption \eqref{c1}, the limit $(\rho_\infty, P_\infty)$ satisfies the Hele-Shaw limit system in the sense of Def.~\ref{def:WS}, namely
\begin{align}
&\partial_{t}\rho_{\infty}-\Delta
P_{\infty}=\nabla\cdot(\rho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty}),
&\text{in }\mathcal{D}'(Q_{T}),\label{z6}
\\
&(1-\rho_{\infty})P_{\infty}=0, \qquad 0\leq\rho_{\infty}\leq1, &\text{a.e. in } Q_{T}. \label{z8}
\end{align}
\end{theorem}
Then, using the additional assumptions~\eqref{c2}--\eqref{c3} on the initial data, we obtain the higher regularity estimates on the pressure. We can establish the
\begin{theorem}[Complementarity relation and semi-harmonicity] \label{t15}
Assume $m>\max\{n-1,\frac{5n-2}{n+2}\}$ and that the initial data satisfy \eqref{c1}--\eqref{c4}; then the global weak solution $\rho_m$ to \eqref{d1} satisfies the additional regularity estimates
\[
\|\sqrt{P_m}\nabla P_m\|_{L^2(Q_T)}+\sup\limits_{0\leq t\leq
T}\|\nabla P_m(t)\|_{L^2(\mathbb{R}^n)}\leq C(T),
\]
\[
\|\nabla P_m\|_{L^3(Q_T)}+\|\sqrt{P_m}\nabla^2P_m\|_{L^2(Q_T)}\leq C(T),
\]
\[
\|\sqrt{P_m}\omega_m\|_{L^2(Q_T)}^2+\||\omega_m|_{-}\|_{L^3(Q_T)}^{3}\leq \frac{C(T)}{m}, \qquad \omega_m:=\Delta P_m+\rho_m,
\]
\[
\sup\limits_{0\leq t\leq T}\||\omega_m(t)|_-\|_{L^2\cap L^1(\mathbb{R}^n)}+\sup\limits_{0\leq t\leq T}\|\Delta P_m(t)\|_{L^1(\mathbb{R}^n)}+\|\partial_tP_m\|_{L^1(Q_T)}\leq C(T).
\]
Furthermore, after the extraction of subsequences, as $m\to\infty$, $\nabla P_m$ converges strongly in
$L^2_{loc}(Q_T)$ to $\nabla P_\infty\in L^3(Q_T)\cap L^\infty(0,T;L^2(\mathbb{R}^n))$, and the complementarity relation and semi-harmonicity hold
\begin{equation}\label{z15}
P_{\infty}(\Delta P_{\infty}+\rho_{\infty})=0, \qquad \Delta P_{\infty}+\rho_{\infty}\geq 0, \quad \text{ in }
\mathcal{D}'(Q_{T}).
\end{equation}
It follows that
\begin{equation*}
(1-\rho_\infty)\nabla P_\infty=0,\quad\text{a.e. in } Q_T.
\end{equation*}
\end{theorem}
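The last assertion can be seen as follows: $\rho_\infty=1$ a.e. on the set $\{P_\infty>0\}$ by~\eqref{z8}, while $\nabla P_\infty=0$ a.e. on the level set $\{P_\infty=0\}$, a standard property of Sobolev functions. Hence
\begin{equation*}
(1-\rho_\infty)\nabla P_\infty=(1-\rho_\infty)\nabla P_\infty\,\chi_{\{P_\infty>0\}}+(1-\rho_\infty)\nabla P_\infty\,\chi_{\{P_\infty=0\}}=0,\quad\text{a.e. in }Q_T.
\end{equation*}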
For the Hele-Shaw system Eqs.~\eqref{z6}--\eqref{z8}, the weak solution to the Cauchy problem is unique.
\begin{theorem}[Uniqueness]\label{t14}
Given two global weak solutions $\rho_{i}$, $i=1,2$, to the Hele-Shaw system~\eqref{z6}--\eqref{z8} with the same initial data
$\rho_{1}(x,0)=\rho_{2}(x,0)\in \dot{H}^{-1}(\mathbb{R}^n)$, we have
\begin{equation*}
\rho_1=\rho_2,\quad P_1=P_2,\quad \text{a.e. in }Q .
\end{equation*}
\end{theorem}
Next, we establish that the limit free energy functional $F_{\infty}\big(\rho_\infty(t)\big)$,
with $0\leq \rho_\infty\leq1$, is non-increasing as time increases.
\begin{theorem}[Compact support and limit energy functional]
Under the initial assumptions~\eqref{c1}--\eqref{c5}, the limit $(\rho_{\infty},P_{\infty})$, as in
Theorems~\ref{tAEbis}--\ref{t15}, is compactly supported for any finite time. For some $\mathcal{R}_0\geq \max\big(R_0, \sqrt{4n + \frac{4n^2}{n-2}}\,\big)$, we have
\begin{equation*}
{\rm supp} (P_\infty(t))\subset {\rm supp} (\rho_\infty(t))\subset B_{\mathcal{R}(t)},
\end{equation*}
\[
\mathcal{R}(t):=\big(\mathcal{R}_{0}+n\|\nabla\mathcal{N}\ast\rho_\infty\|_{L^\infty(Q)}\big)e^{\frac{t}{n}}-n\|\nabla\mathcal{N}\ast\rho_\infty\|_{L^\infty(Q)}.
\]
Furthermore, the limit energy dissipation holds for a.e. $t\in[0,\infty)$,
\begin{equation*}
\frac{dF_{\infty}(\rho_\infty(t))}{dt}+\int_{\mathbb{R}^n}
\rho_{\infty}(t) \big|\nabla(P_\infty(t)+\mathcal{N}\ast\rho_{\infty}(t)) \big|^2dx=0\quad\text{ with }\ 0\leq \rho_\infty\leq 1.
\end{equation*}
\end{theorem}
From \cite{r18,rr10,rr12}, we know that solutions to the SPKS model Eq.~\eqref{SPKS} are radially symmetric and decreasing up to a translation, and compactly supported. This allows us to gather some useful a priori estimates in order to prove compactness for $\rho_{m,s}$, $P_{m,s}$, and $\mathcal{N}\ast\rho_{m,s}$. Then, we can derive the incompressible limit of the SPKS model Eq.~\eqref{SPKS}.
\begin{theorem}[Incompressible limit for stationary state]\label{ILSPKS} Let $m\geq 3$, and let $\rho_{m,s}$ be a weak solution to the SPKS model Eq.~\eqref{SPKS} in the sense of Def.~\ref{def:WS} with a given mass $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M>0$ and $\int_{\mathbb{R}^n}x\rho_{m,s}(x)dx=0$. We define $R_m(M)>0$ by $B_{R_m(M)}:={\rm supp}(\rho_{m,s})$ and set $\alpha_m:=\rho_{m,s} (0)=\|\rho_{m,s}\|_{L^\infty(\mathbb{R}^n)}$; then the following regularity estimates hold,
\begin{equation*}
\alpha_{m}^{m-1}\leq\alpha_{m}+\frac{2M}{n(n-2)\omega_n}\leq\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n},
\end{equation*}
\begin{equation*}
R_m(M)\leq R_*(M),\quad\|\rho_{m,s}\|_{L^1\cap L^\infty(\mathbb{R}^n)}\leq C,
\end{equation*}
\begin{equation*}
\quad\|\nabla \mathcal{N}\ast\rho_{m,s}\|_{L^2\cap L^\infty(\mathbb{R}^n)}\leq C,\quad\|\nabla^2 \mathcal{N}\ast\rho_{m,s}\|_{L^p(\mathbb{R}^n)}\leq C(M,p),
\end{equation*}
\begin{equation*}
\||\omega_{m,s}|_-\|_{L^3\cap L^1(\mathbb{R}^n)}^3\leq \frac C m, \qquad \quad \omega_{m,s}=\Delta P_{m,s}+\rho_{m,s},
\end{equation*}
\begin{equation*}
\|P_{m,s}\|_{L^1\cap L^\infty(\mathbb{R}^n)}+\|\nabla P_{m,s}\|_{L^1\cap L^\infty(\mathbb{R}^n)}+\|\Delta P_{m,s}\|_{L^1(\mathbb{R}^n)}\leq C,
\end{equation*}
where $R_*(M)=\log\Big(1+\exp\Big[2n(n-1)\Big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)\Big]^{1/2}\Big)$, $C(M,p)\sim\frac{1}{p-1}$ for $0<p-1\ll1$ and $C(M,p)\sim p$ for $p\gg1$.
Furthermore, after extracting subsequences, as $m\to\infty$, $P_{m,s}$ converges strongly in $L^\infty(\mathbb{R}^n)$ to $P_{\infty,s}\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, $\rho_{m,s}$ converges weakly in $L^p(\mathbb{R}^n)$ for $1< p<\infty$ to $\rho_{\infty,s}\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, $\nabla P_{m,s}$ converges strongly in $L^p(\mathbb{R}^n)$ for $1\leq p<\infty$ to $\nabla P_{\infty,s}\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, and $\nabla \mathcal{N}\ast\rho_{m,s}$ locally converges strongly in $L^p(\mathbb{R}^n)$ for $1\leq p<\infty$ to $\nabla \mathcal{N}\ast\rho_{\infty,s}\in L^2(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)\cap\dot{W}^{1,q}(\mathbb{R}^n)$ for $1<q<\infty$.
Therefore, the incompressible (Hele-Shaw) limit of the SPKS model Eq.~\eqref{SPKS} satisfies
\begin{align*}
&\|\rho_{\infty,s}\|_{L^1(\mathbb{R}^n)}=M,&&\int_{\mathbb{R}^n}x\rho_{\infty,s}dx=0,\\
&0\leq \rho_{\infty,s}\leq 1, \quad (1-\rho_{\infty,s})P_{\infty,s}=0, &&\text{a.e. in }\mathbb{R}^n,\\
&\nabla P_{\infty,s}+\rho_{\infty,s}\nabla\mathcal{N}\ast\rho_{\infty,s}=0, &&\text{a.e. in }\mathbb{R}^n,\\
&\Delta P_{\infty,s}+\rho_{\infty,s}\geq 0,&&\text{in }\mathcal{D}'(\mathbb{R}^n).
\end{align*}
Moreover, it holds for $R(M)>0$ satisfying $|B_{R(M)} |_n=M$ that
\begin{equation*}
\rho_{\infty,s}=\chi_{\{P_{\infty,s}>0\}}=\chi_{B_{R(M)} },\quad \text{a.e. in }\mathbb{R}^n.
\end{equation*}
\end{theorem}
\begin{remark}
The results in Theorem~\ref{ILSPKS} show that the incompressible limit of the SPKS model Eq.~\eqref{SPKS} is the stationary state of the Hele-Shaw problem Eqs.~\eqref{z6}--\eqref{z15}.
\end{remark}
\section{Bounds, compactness and stiff limit}
We establish the estimates in Theorem~\ref{tAE} and then the stiff limit in Theorem~\ref{tAEbis}.
\\
We begin with the a priori regularity results on the density $\rho_m$, and then treat the nonlocal term.
\begin{lemma}[Regularity estimate on density and pressure] \label{l10}
Assume that the initial data satisfies \eqref{c1}. Let $\rho_m$ be the weak solution to Eq.~\eqref{d1} in the sense of Def.~\ref{d1}, then it follows
\begin{align}
&\sup_{0\leq t\leq T}\|\rho_m(t)\|_{L^{m+1}(\mathbb{R}^n)}+
\|\nabla\rho_m^m\|_{L^2(Q_T)}\leq C(T),
\label{est:rho1}
\\[2pt]
&\sup_{0\leq t\leq T}\|\rho_m(t)\|_{L^{p}(\mathbb{R}^n)}+\int
\hskip-4pt \int_{Q_T}\nabla\rho_m^m\cdot\nabla\rho_m^{p-1}dxdt\leq C(T), \;
1<p\leq2,
\label{est:rho2}
\\[2pt]
&\|\rho_{m} \|_{L^{(2+2/n)m+2/n} (Q_T)}+\|\rho_m\nabla
P_m\|_{L^2(Q_T)}\leq C(T),
\label{est:rho3}
\\[2pt]
&\|P_m\|_{L^2(Q_T)}+\|\rho_m^m\|_{L^2(Q_T)}+\|\nabla P_m\|_{L^2(Q_T)}\leq C(T).
\label{est:rho4}
\end{align}
\end{lemma}
\begin{proof}
For \eqref{est:rho1}, we multiply Eq.~\eqref{d1} by $\rho_m^m$ and integrate by parts on
$\mathbb{R}^n$ to find
\begin{equation*}
\begin{aligned}
\frac{1}{m+1} \frac{d} {dt} \int_{\mathbb{R}^n}\rho_m^{m+1}dx+\int_{\mathbb{R}^n}|\nabla\rho_m^m|^2dx&\leq \frac{m}{m+1}\int_{\mathbb{R}^n}\rho_m^{m+2}dx\\
& \leq \left[ \| \rho_m\|_{L^{1}(\mathbb{R}^n)}^{1-\theta} \|\rho_m\|_{L^{\frac{2mn}{n-2}}(\mathbb{R}^n)}^{\theta} \right]^{m+2} \\
& \leq \| \rho_m\|_{L^{1}(\mathbb{R}^n)}^{(1-\theta)(m+2)} \|\rho_m^m\|_{L^{\frac{2n}{n-2}}(\mathbb{R}^n)}^{\theta \frac{m+2}{m}},
\end{aligned}
\end{equation*}
where we have used interpolation inequality with $\frac 1{m+2}=1-\theta+ \theta \frac{n-2}{2mn}$.
We notice that $(1-\theta) (m+2) \leq 1$. Using this and Sobolev's inequality (Theorem~\ref{t8}), we get
\begin{align}
\frac{1}{m+1} \frac d {dt} \int_{\mathbb{R}^n}\rho_m^{m+1}dx+
\int_{\mathbb{R}^n}|\nabla\rho_m^m|^2dx
\leq& \max (1,\| \rho_m\|_{L^{1}(\mathbb{R}^n)}) \| \nabla
\rho_m^m\|_{L^{2}(\mathbb{R}^n)}^{\theta \frac{m+2}{m}} \notag
\\[2pt]
\leq& C + \frac 12 \| \nabla \rho_m^m\|_{L^{2}(\mathbb{R}^n)}^2, \notag
\label{m1}
\end{align}
where we have used ${\theta \frac{m+2}{m}}<2$ for $m>2- \frac 2 n$.
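As a sanity check (not part of the proof), the exponent bound above can be verified with exact rational arithmetic: solving the interpolation identity $\frac{1}{m+2}=1-\theta+\theta\frac{n-2}{2mn}$ for $\theta$, the exponent $\theta\frac{m+2}{m}$ stays strictly below $2$ for $m>2-\frac{2}{n}$ and equals $2$ exactly at the borderline $m=2-\frac{2}{n}$.

```python
from fractions import Fraction

def exponent(m, n):
    # theta solves 1/(m+2) = (1 - theta) + theta*(n-2)/(2*m*n)
    m, n = Fraction(m), Fraction(n)
    theta = (1 - 1 / (m + 2)) / (1 - (n - 2) / (2 * m * n))
    return theta * (m + 2) / m

for n in (3, 4, 5):
    for m in (3, 5, 10, 100):
        assert exponent(m, n) < 2                              # strict for m > 2 - 2/n
    assert exponent(Fraction(2) - Fraction(2, n), n) == 2      # borderline case
```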
After time integration, we get the inequality
\begin{equation}
\int_{\mathbb{R}^n}\rho_m(t)^{m+1}dx + \frac {m+1}2 \int_0^t\ \| \nabla
\rho_m^m\|_{L^{2}(\mathbb{R}^n)}^2
\leq \int_{\mathbb{R}^n}\rho_{m,0}^{m+1}dx + Ct(m+1),
\label{est:m+1}
\end{equation}
which implies \eqref{est:rho1} in Lemma~\ref{l10};
\[
\|\rho_m(t)\|_{L^{m+1}(\mathbb{R}^n)}\leq (C+ Ct(m+1))^{\frac 1{m+1}} \leq C(T).
\]
A similar calculation gives \eqref{est:rho2} and we have
\[
\frac{1}{p}\int_{\mathbb{R}^n}\rho_m(t)^pdx+\frac{4m(p-1)}{(m+p-1)^2}
\int_{Q_t} |\nabla\rho_m^{\frac{m+p-1}{2}}|^2dxds=
\frac{1}{p}\int_{\mathbb{R}^n}\rho_{m,0}^pdx +\frac{p-1}{p} \int_{Q_t} \rho_m^{p+1}dxds.
\]
Interpolating between $L^{m+1}(\mathbb{R}^n)$ and $L^{1}(\mathbb{R}^n)$, we know that the terms on the right hand
side are controlled and thus the gradient term is under control. It remains to
notice that
\[
\frac{4m(p-1)}{(m+p-1)^2}\int_{Q_T}
|\nabla\rho_m^{\frac{m+p-1}{2}}|^2dxdt
=m(p-1)\int_{Q_T} \rho_m^{m+p-3} |\nabla\rho_m|^2dxdt
=\int_{Q_T} \nabla\rho_m^m\cdot\nabla\rho_m^{p-1}dxdt,
\]
and \eqref{est:rho2} is proved.
\\
We turn to \eqref{est:rho3}. Thanks to the interpolation inequality, we have, for
$\alpha \geq 0$ and $t\leq T$,
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}\rho_{m}(t)^{2m+\alpha}dx&\leq
\| \rho_{m}(t)\|_{L^{m+1}(\mathbb{R}^n)}^{(1-\theta) (2m+\alpha)}
\| \rho_m(t)\|_{L^{\frac{2mn}{n-2}}(\mathbb{R}^n)}^{\theta (2m+\alpha)}
\\
&= \| \rho_{m}(t) \|_{L^{m+1}(\mathbb{R}^n)}^{(1-\theta) (2m+\alpha)}
\| \rho_m^m(t) \|_{L^{\frac{2n}{n-2}}(\mathbb{R}^n)}^{\theta \frac{2m+\alpha}{m}}
\end{aligned}
\end{equation*}
with
\[
\frac{1}{2m+\alpha}= \frac{1-\theta}{m+1} + \frac{\theta (n-2)}{2mn}, \qquad
0\leq \theta \leq 1.
\]
By Sobolev's inequality and the estimate in $L^{m+1}$, we obtain
\[
\int_{\mathbb{R}^n}\rho_{m}(t)^{2m+\alpha}dx \leq C(T) ^{(1-\theta) (2m+\alpha)}
\| \nabla \rho_m^m\|_{L^{2}(\mathbb{R}^n)}^{\theta \frac{2m+\alpha}{m}}.
\]
It remains to choose $\alpha$ such that $\theta \frac{2m+\alpha}{m}=2$ and we
find, integrating in time,
\[
\int_{Q_T} \rho_{m}^{2m+\alpha}dxdt \leq C(T)^{(1-\theta)
(2m+\alpha)}
\int_{Q_T} | \nabla \rho_m^m|^2dxdt.
\]
To compute the value of $\alpha$, we write the condition successively as
\[
2m= (2m+\alpha)\theta = (2m+\alpha) \big[\frac 1 {m+1}-\frac{1}{2m+\alpha} \big]
\big[ \frac{1}{m+1} - \frac{n-2}{2nm} \big]^{-1},
\]
\[
2m= \big[ 2m+\alpha -(m+1) \big] \big[ 1- \frac{(n-2)(m+1)}{2mn} \big]^{-1},
\]
\[
2m - \frac{(n-2)(m+1)}{n}= 2m+\alpha - (m+1) , \qquad \alpha= \frac 2 n (m+1).
\]
This gives the first statement of \eqref{est:rho3}. Then, since $\nabla\rho_m^m=\rho_m\nabla P_m$, we have from \eqref{est:rho1}
$\|\rho_m\nabla P_m\|_{L^2(Q_T)}\leq C(T)$ and \eqref{est:rho3} is proved.
\\
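The value $\alpha=\frac{2}{n}(m+1)$ computed above can be double-checked with exact rational arithmetic (a sanity check, not part of the argument): for the claimed $\alpha$, the interpolation parameter $\theta$ indeed satisfies $\theta\frac{2m+\alpha}{m}=2$.

```python
from fractions import Fraction

def check_alpha(m, n):
    m, n = Fraction(m), Fraction(n)
    alpha = 2 * (m + 1) / n                   # claimed value of alpha
    p = 2 * m + alpha                         # the integrability exponent 2m + alpha
    # theta from the interpolation identity 1/p = (1-theta)/(m+1) + theta*(n-2)/(2mn)
    theta = (1 / (m + 1) - 1 / p) / (1 / (m + 1) - (n - 2) / (2 * m * n))
    return theta * p / m                      # should be exactly 2

for n in (3, 4, 7):
    for m in (3, 5, 12):
        assert check_alpha(m, n) == 2
```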
The first estimate of \eqref{est:rho4} is obtained by interpolation and Sobolev's inequality for gradient (Theorem~\ref{t8}) between two estimates in~\eqref{est:rho1}, for $\gamma=0,1$.
\begin{equation*}
\begin{aligned}
\int_{Q_T} \rho_m^{2m-2\gamma}dxdt &\leq \big(\int_{Q_T} \rho_m^{2m+1}dxdt\big)^{\frac{2m-2\gamma-1}{2m}} \big(\int_{Q_T} \rho_mdxdt \big)^{\frac{2\gamma+1}{2m}}\\
&\leq C(T)\int_{0}^{T}\hskip-6pt \big(\int_{\mathbb{R}^n}(\rho_m^m)^{\frac{2n}{n-2}}dx\big)^{\frac{n-2}{n}} \big(\int_{\mathbb{R}^n}\rho_m^\frac{n}{2}dx \big)^{\frac{2}{n}}dt\\
&\leq C(T)\int_{Q_T} |\nabla\rho_m^{m}|^2dxdt\leq C(T).
\end{aligned}
\end{equation*}
We prove the second estimate of \eqref{est:rho4} by means of the second bounds in \eqref{est:rho1}--\eqref{est:rho2},
\begin{equation*}
\begin{aligned}
\int_{Q_T} |\nabla P_m|^2dxdt=&m^2\int_{Q_T} \rho_m^{2(m-2)}|\nabla \rho_m|^2dxdt\\
\leq&\int_{Q_T} \Big(\frac{2m^2}{m-1}\rho_m^{m-1}+\frac{m^2(m-3)}{m-1}\rho_m^{2m-2}\Big)|\nabla\rho_m|^2dxdt\\
=&\int_{Q_T} \Big(\frac{2m}{m-1}\nabla\rho_m^{m}\cdot\nabla\rho_m+\frac{m-3}{m-1}|\nabla\rho_m^m|^2\Big)dxdt \leq C(T).
\end{aligned}
\end{equation*}
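The pointwise bound used in the first inequality above, $\rho^{2m-4}\leq\frac{2}{m-1}\rho^{m-1}+\frac{m-3}{m-1}\rho^{2m-2}$, is a weighted Young inequality (weights $\frac{2}{m-1}+\frac{m-3}{m-1}=1$); a numerical sanity check (not part of the proof):

```python
# r^(2m-4) <= (2 r^(m-1) + (m-3) r^(2m-2)) / (m-1) for r >= 0 and m >= 3,
# by Young's inequality with weights 2/(m-1) and (m-3)/(m-1)
def young_gap(r, m):
    return (2 * r**(m - 1) + (m - 3) * r**(2 * m - 2)) / (m - 1) - r**(2 * m - 4)

for m in (3, 4, 7, 10):
    for k in range(201):
        assert young_gap(k / 50.0, m) >= -1e-12    # grid on [0, 4]
```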
\end{proof}
Now, we turn to the nonlocal term.
\begin{lemma}[Regularity of the nonlocal term] \label{l18}
Assume $m>n-1$ and \eqref{c1}, let $\rho_m$ be a weak solution to Eq.~\eqref{d1} in the sense of Def.~\ref{d1}, then it
holds for any $T>0$ that
\begin{align}
&\|\nabla\mathcal{N}\ast\rho_m\|_{L^{\infty}(Q_T)}\leq
C(T),&&
\|\partial_{t}\nabla\mathcal{N}\ast\rho_{m}(t)\|_{L^2(Q_T)}\leq C(T),\label{est:con1}\\
&\sup\limits_{0\leq t\leq
T}\|\nabla^2\mathcal{N}\ast\rho_m(t)\|_{L^q(\mathbb{R}^n)}\leq C(T,q),
&&\sup\limits_{0\leq t\leq
T}\|\nabla\mathcal{N}\ast\rho_{m}(t)\|_{L^2(\mathbb{R}^n)}\leq C(T),\label{est:con2}
\end{align}
where $C(T,q)\sim \frac{1}{q-1}$ for $0<q-1\ll 1$ and $C(T,q)\sim q$ for $q\gg
1$. Furthermore, after extraction, it holds that
\begin{equation*}
\nabla\mathcal{N}\ast\rho_{m}\to\nabla\mathcal{N}\ast\rho_{\infty},\quad\text{strongly in }L^2_{loc}(Q_{T}),\text{ as }m\to\infty.
\end{equation*}
\end{lemma}
\begin{proof}
For the first estimate of \eqref{est:con1}, by means of Lemma~\ref{l10}, we obtain the $L^{\infty}$ estimate for
$\nabla\mathcal{N}\ast\rho_m(t)$ with $m>n-1$ because
\begin{equation}\label{m22}
\begin{aligned}
|\nabla\mathcal{N}\ast\rho_{m}(t)| &\leq
C\int_{\mathbb{R}^n}\frac{\rho_{m}(y,t)}{|x-y|^{n-1}}dy
\\
&\leq C\int_{|x-y|\leq
1}\frac{\rho_{m}(y,t)}{|x-y|^{n-1}}dy+C\int_{|x-y|>1}\frac{\rho_{m}(y,t)}{|x-y|^{n-1}}dy
\\
&\leq
C\Big(\int_{|x-y|\leq1}\frac{1}{|x-y|^{\frac{(n-1)(n+\varepsilon)}{n-1+\varepsilon}}}dy\Big)^{\frac{n-1+\varepsilon}{n+\varepsilon}}\Big(\int_{|x-y|\leq1}\rho_{m}^{n+\varepsilon}(y,t)dy\Big)^{\frac{1}{n+\varepsilon}}+
C
\\
&\leq C(T)\qquad \forall t\in[0,T]\text{ and for some }0<\varepsilon\ll 1.
\end{aligned}
\end{equation}
Letting the inverse Laplace operator $\Delta^{-1}=\mathcal{N}\ast$ act on
Eq.~\eqref{d1}, we get a new equation
\begin{equation}\label{m10}
\partial _t\mathcal{N}\ast\rho_m=
\rho_m^m+\nabla\cdot\mathcal{N}\ast(\rho_m\nabla\mathcal{N}\ast\rho_m).
\end{equation}
Then, using the singular integral theory for Newtonian potential
(Lemma~\ref{l12}), \eqref{m22}, and Lemma~\ref{l10}, we obtain
\begin{equation}\label{m23}
\begin{aligned}
\int_{\mathbb{R}^n}|\nabla\nabla\cdot\mathcal{N}\ast(\rho_m\nabla\mathcal{N}\ast\rho_m)|^{2}dx\leq& C\int_{\mathbb{R}^n}|\rho_m\nabla\mathcal{N}\ast\rho_m|^{2}dx\\
\leq&C\|\nabla\mathcal{N}\ast\rho_m\|^2_{L^{\infty}(Q_T)}\int_{\mathbb{R}^n}\rho_m^{2}dx\\
\leq& C(T).
\end{aligned}
\end{equation}
Using Eq.~\eqref{m10} together with \eqref{m23} and Lemma~\ref{l10}, it follows that
\begin{equation*}
\|\partial_t\nabla\mathcal{N}\ast\rho_m\|_{L^2(Q_T)}\leq \|\nabla
\rho_m^m\|_{L^2(Q_T)}+\|\nabla\nabla\cdot\mathcal{N}\ast(\rho_m\nabla\mathcal{N}\ast\rho_m)\|_{L^2(Q_T)}\leq
C(T)
\end{equation*}
and the second bound of \eqref{est:con1} is proved.
For the first estimate of \eqref{est:con2}, we again use the singular integral theory for Newtonian
potential (Lemma~\ref{l12}), and we have for all $t\in[0,T]$,
\begin{equation}\label{m25}
\|\nabla^2\mathcal{N}\ast\rho_m(t)\|_{L^q(\mathbb{R}^n)}\leq
C(q)\|\rho_m\|_{L^q(\mathbb{R}^n)}\leq C(T,q),
\end{equation}
where $C(T,q)\sim \frac{1}{q-1}$ for $0<q-1\ll 1$ and $C(T,q)\sim q$ for $q\gg
1$.
\\
For the second bound of \eqref{est:con2}, thanks to the Hardy-Littlewood-Sobolev inequality
(Theorem~\ref{t7}) and Lemma~\ref{l10}, we get for all $t\in[0,T]$,
\begin{equation}\label{m43}
\begin{aligned}
\int_{\mathbb{R}^n}|\nabla\mathcal{N}\ast\rho_{m}(t)|^2dx=&-\int_{\mathbb{R}^n}(\Delta\mathcal{N}\ast\rho_{m})\mathcal{N}\ast\rho_{m}dx
=-\int_{\mathbb{R}^n}\rho_{m}\mathcal{N}\ast\rho_{m}dx\\
\leq& C(n)\|\rho_{m}\|_{L^{\frac{2n}{n+2}}(\mathbb{R}^n)}^2
\leq C(T).
\end{aligned}
\end{equation}
The last statement of Lemma~\ref{l18} follows from the compact Sobolev embeddings.
\end{proof}
In order to obtain the convergence rate on $|\rho_{m}-1|_+$ in Theorem~\ref{tAE}, it remains to establish the following lemma.
\begin{lemma}[Convergence rate on $|\rho_{m}-1|_+$]\label{l6}
Under the initial assumptions \eqref{c1}, let $\rho_{m}$ be the weak solution
to the Cauchy problem Eq.~\eqref{d1} in the sense of Def.~\ref{d1} with $m>n-1$, then
\begin{equation*}
\sup_{0\leq t \leq T} \||\rho_{m}(t)-1|_+\|_{L^2(\mathbb{R}^n)}\leq
\frac{C(T)}{\sqrt m}.
\end{equation*}
\end{lemma}
\begin{proof}
Since, for $m>n-1\geq 2$ and $\rho_m \geq 1$, we have
\begin{equation*}
\rho_m^{m+1} \geq\frac{m(m+1)}{2}(\rho_m-1)^{2},
\end{equation*}
we conclude
\begin{equation*}
sgn(|\rho_m-1|_+)\rho_m^{m+1}\geq \frac{m(m+1)}{2}|\rho_m-1|_+^{2}.
\end{equation*}
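The elementary inequality above follows from the binomial expansion, $(1+s)^{m+1}\geq 1+(m+1)s+\frac{m(m+1)}{2}s^2$ for $s=\rho_m-1\geq0$; a quick numerical sanity check (not part of the proof):

```python
def gap(rho, m):
    # rho^(m+1) - m(m+1)/2 * (rho-1)^2, nonnegative for rho >= 1
    return rho**(m + 1) - 0.5 * m * (m + 1) * (rho - 1)**2

for m in (3, 5, 10, 20):
    for k in range(301):
        assert gap(1.0 + k / 50.0, m) >= 0.0    # grid on [1, 7]
```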
From \eqref{est:m+1}, we obtain
\begin{align*}
\int_{\mathbb{R}^n}|\rho_m(t)-1|_+^2dx\leq \frac{2}{m(m+1)}
\int_{\mathbb{R}^n}\rho_{m}^{m+1}(t)dx\leq\frac{C(T)}{m}\quad\text{for all }0\leq t\leq T.
\end{align*}
\end{proof}
\begin{remark}
The result of Lemma~\ref{l6} implies that a larger diffusion exponent means a
stronger diffusive effect on the zone of high density.
\end{remark}
In the following, with the regularity estimates in Lemmas~\ref{l10}--\ref{l6} in hand, we prove the stiff limit statements in
Theorem~\ref{tAEbis}.
We recall that, thanks to the a priori regularity estimates in
Lemmas~\ref{l10}--\ref{l6}, $\rho_m$ has a weak limit $\rho_\infty$ with $
\rho_\infty \leq 1$ in $L^p(Q_T)$ for $1<p<\infty$, $P_m$ has a weak limit $P_\infty$ in $L^2(Q_T)$, and we
have locally strong convergence of $\nabla \mathcal{N}\ast\rho_{m}$ to $\nabla
\mathcal{N} \ast \rho_{\infty}$ in $L^2(Q_T)$.
\\
\noindent\emph{Proof of Eq.~\eqref{z6}}. The stiff limit equation \eqref{z6} in Theorem~\ref{tAEbis} follows immediately from these weak limits and the definition of weak solutions in Def.~\ref{def:WS},
where the nonlinear term $\rho_m \nabla \mathcal{N}\ast\rho_{m}$ passes to the limit $\rho_\infty\nabla\mathcal{N}\ast\rho_\infty$ by weak-strong convergence, and the other nonlinear term $\rho_m^m=\frac{m-1}{m}\rho_m P_m$ converges weakly to $P_\infty$ in $L^2(Q_T)$ by \eqref{m37}.
\\
\noindent\emph{Proof of Eq.~\eqref{z8}, $\rho_\infty P_\infty=P_\infty,\text{ a.e. } (x,t)\in Q_T$}. For the tumor growth model in~\cite{5}, the proof relies on the strong convergence of $\rho_m$, which is not available here. Therefore, we argue in two steps. We first prove that, after extraction,
\begin{equation}\label{m37}
\rho_m P_m\rightharpoonup P_\infty, \text{ weakly in }L^2(Q_T), \text{ as
}m\to\infty.
\end{equation}
For that, thanks to the relation $\rho_m P_m=\frac{m}{m-1}\rho_m^{m}\leq 2\rho_m^{m}$, it
follows from Lemma~\ref{l10} that $\rho_mP_m$ is bounded in $L^2(Q_T)$ and
thus, after extraction, has a weak limit in $L^2(Q_T)$, which we call
$Q_\infty$.
Due to Young's inequality, we have
\begin{equation*}
P_{m}=\frac{m}{m-1}\rho_m^{m-1}\leq
\frac{m}{m-1}(\frac{m-1}{m}\rho_m^m+\frac{1}{m})=\frac{m-1}{m}\rho_mP_m+\frac{1}{m-1}.
\end{equation*}
In the weak limit, we obtain
\[
P_\infty \leq Q_\infty.
\]
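After substituting $P_m=\frac{m}{m-1}\rho_m^{m-1}$ and $\frac{m-1}{m}\rho_mP_m=\rho_m^m$, the Young inequality used above reduces to the scalar inequality $\frac{m}{m-1}r^{m-1}\leq r^m+\frac{1}{m-1}$ for $r\geq0$, with equality at $r=1$; a numerical sanity check (not part of the proof):

```python
def pressure_young_gap(r, m):
    # r^m + 1/(m-1) - (m/(m-1)) r^(m-1); vanishes at r = 1, positive elsewhere
    return r**m + 1.0 / (m - 1) - m / (m - 1) * r**(m - 1)

for m in (3, 5, 10, 50):
    for k in range(401):
        assert pressure_young_gap(k / 100.0, m) >= -1e-12    # grid on [0, 4]
```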
For the reverse inequality, we consider $A>1$ and $m$
sufficiently large. Then, we have
\begin{equation}
\rho_mP_m=\rho_m\min\{A,P_m\}+\rho_m|P_m-A|_+ \leq
(\frac{m-1}{m}A)^\frac{1}{m-1}P_m+\rho_m|P_m-A|_+.
\label{eqAPm}
\end{equation}
We can estimate the last term by
\begin{equation*}
\rho_m|P_m-A|_+\leq
\chi_{\{P_m>A\}}\frac{m}{m-1}\rho_m^m=\chi_{\{P_m>A\}}\frac{m}{m-1}\frac{\rho_m^{2m}}{\rho_m^{m}}\leq
\frac{\rho_m^{2m}}{(\frac{m-1}{m}A)^{m/(m-1)}},
\end{equation*}
and thus, for any non-negative smooth test function $\varphi\in
C_0^{\infty}(Q_T)$, we conclude
\begin{equation*}
\limsup\limits_{m\to\infty} \int_{Q_T}\rho_m|P_m-A|_+\varphi
dxdt\leq \limsup\limits_{m\to\infty}
\int_{Q_T}\frac{\rho_m^{2m}}{(\frac{m-1}{m}A)^{m/(m-1)}} dxdt\leq \frac C A.
\end{equation*}
On the other hand, $(\frac{m-1}{m}A)^\frac{1}{m-1}$ converges strongly to $1$.
Therefore by weak-strong convergence $(\frac{m-1}{m}A)^\frac{1}{m-1} P_m$ weakly
converges to $P_\infty$. Passing to the weak limit in \eqref{eqAPm}, we conclude that, for all $A>1$,
\[
Q_\infty \leq P_\infty +\frac C A .
\]
We may take $A \to \infty$ and find the desired result, namely \eqref{m37}.
\\
Secondly, we prove that $ \rho_m P_m\rightharpoonup \rho_\infty P_\infty$. For any
smooth test function $\varphi\in C_{0}^{\infty}(Q_T)$, we have, recalling the
strong convergence proved in Lemma~\ref{l18},
\begin{equation}\label{m38}
\begin{aligned}
\lim\limits_{m\to\infty}&\int_{Q_T}\rho_{m}P_{m}\varphi
dxdt= \lim\limits_{m\to\infty} \int_{Q_T}\Delta\mathcal{N}\ast\rho_{m}P_{m}\, \varphi dxdt\\
=&-\lim\limits_{m\to\infty}
\int_{Q_T}\nabla\mathcal{N}\ast\rho_{m}\cdot\nabla P_{m} \, \varphi
dxdt-\lim\limits_{m\to\infty} \int_{Q_T}\nabla\mathcal{N}\ast\rho_{m}\cdot\nabla\varphi P_{m} dxdt\\
=&-\int_{Q_T}\nabla\mathcal{N}\ast\rho_{\infty}\cdot\nabla
P_{\infty}\varphi dxdt- \int_{Q_T}\nabla\mathcal{N}\ast\rho_{\infty}\cdot\nabla\varphi P_{\infty} dxdt\\
=&\int_{Q_T}\rho_{\infty} P_{\infty} \varphi dxdt.
\end{aligned}
\end{equation}
This means that $\rho_m P_m\rightharpoonup \rho_\infty P_\infty$ and we have
obtained the result.
\\
\noindent\emph{Proof of Eq.~\eqref{z8}, $0\leq \rho_\infty\leq1$, a.e. in $Q_T$.} This follows directly from Lemma~\ref{l6} and the inequality $\|\rho_\infty\|_{L^2(Q_T)}\leq \liminf\limits_{m\to\infty}\|\rho_m\|_{L^2(Q_T)} \leq C(T)$.
\section{Additional regularity estimates for pressure}
The classical Aronson-B\'enilan (AB) estimate \cite{r32,5} provides regularity for the pressure $P_m$. However, the
nonlocal interaction results in the absence of a comparison principle, and the
$L^{\infty}$ bound from below is missing.
Therefore, we prove uniform AB-type estimates in $L^3$ and $L^1$ versions, adapting the method in \cite{r35,r34}.
This regularity is interesting by itself and is used to establish the complementarity relation Eq.~\eqref{z15}, which is equivalent to proving
the strong compactness of the sequence $\{\nabla P_m\}_{m>1}$ in $L^2_{loc}(Q_T)$.
\\
In this section, we need to further assume $m>\max \{n-1,\frac{5n-2}{n+2}\}$ because of inequality~\eqref{a53}.
\begin{lemma}\label{l11}
Under the initial assumptions
\eqref{c1}--\eqref{c2}, let $\rho_m$ be a weak solution to the Cauchy problem Eq.~\eqref{d1} in the sense of Def.~\ref{d1}, then it holds
\begin{equation}\label{a16}
\|\sqrt{P_m}\nabla P_m\|_{L^2(Q_T)}\leq C(T) \qquad \forall T>0.
\end{equation}
\end{lemma}
\begin{proof}
Multiplying Eq.~\eqref{d3} by $P_m$ and integrating on $\mathbb{R}^n$, we
have
\begin{equation}\label{a14}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^n}P_m^2dx+(2m-3)\int_{\mathbb{R}^n}P_m|\nabla
P_m|^2dx=(m-\frac{3}{2})\int_{\mathbb{R}^n}P_m^2\rho_mdx.
\end{equation}
Due to Sobolev's inequality for gradient (Theorem~\ref{t8}), H\"older's inequality,
and Lemma~\ref{l10}, we obtain
\begin{equation}\label{a15}
\begin{aligned}
(m-\frac{3}{2})\int_{\mathbb{R}^n}P_m^2\rho_mdx&\leq
(m-\frac{3}{2})(\int_{\mathbb{R}^n}P_m^{\frac{2n}{n-2}}dx)^{\frac{n-2}{n}}(\int_{\mathbb{R}^n}\rho_m^\frac{n}{2}dx)^{\frac{2}{n}}\\
&\leq (m-\frac{3}{2})C(n)C(T)\int_{\mathbb{R}^n}|\nabla P_m|^2dx.
\end{aligned}
\end{equation}
Taking \eqref{a15} into \eqref{a14} and integrating \eqref{a14} on $[0,T]$ gives
\eqref{a16}.
\end{proof}
We are going to establish the uniform $L^3$ estimate for the pressure gradient. Recently, David and Perthame \cite[Theorem 3.2]{r35} proved a uniform sharp
$L^4$ estimate for the pressure gradient. In contrast, we obtain here a uniform
$L^3$ estimate for the pressure gradient by adapting their proof to account for the fact that the nonlocal
interaction term results in the absence of a uniform bound for the pressure.
\begin{theorem}[$L^3$ estimate for pressure gradient]\label{t10}
Under the initial assumptions \eqref{c1}--\eqref{c2}, let $\rho_m$ be a weak solution to the Cauchy problem Eq.~\eqref{d1} in the sense of Def.~\ref{d1}, then it holds for any
given $T>0$ and $m>n-1$ that
\begin{align}
&\sup_{0\leq t\leq T}\|\nabla P_m(t)\|_{L^2(\mathbb{R}^n)}\leq
C(T),&&\|\sqrt{P_m}(\Delta
P_m+\rho_m)\|_{L^2(Q_T)}\leq\frac{C(T)}{\sqrt{m}},\label{est:pg1}\\
&\|\sqrt{P_m}\nabla^2P_m\|_{L^2(Q_T)}\leq C(T),&&
\|\nabla P_m\|_{L^3(Q_T)}\leq C(T)\label{est:pg2}.
\end{align}
\end{theorem}
\begin{proof}
We multiply the pressure Eq.~\eqref{d3} by $-(\Delta P_m+\rho_m)$
and integrate over $\mathbb{R}^n$; it follows that
\begin{equation}\label{a2}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}& \int_{\mathbb{R}^n}|\nabla
P_m|^2dx-\partial_t\int_{\mathbb{R}^n}\rho_m^mdx+(m-1)\int_{\mathbb{R}^n}P_m(\Delta
P_m+\rho_m)^2dx\\
&+\int_{\mathbb{R}^n}|\nabla P_m|^2\Delta P_mdx+\int_{\mathbb{R}^n}|\nabla
P_m|^2\rho_mdx
+\int_{\mathbb{R}^n}\nabla P_m\cdot\nabla\mathcal{N}\ast\rho_m\Delta P_mdx\\
&+\int_{\mathbb{R}^n}\rho_m\nabla P_m\cdot\nabla\mathcal{N}\ast\rho_mdx=0.
\end{aligned}
\end{equation}
Integrating by parts, we have
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n} |\nabla P_m|^2 \Delta P_mdx &=\int_{\mathbb{R}^n}P_m\Delta(|\nabla P_m|^2)dx\\
=&2\int_{\mathbb{R}^n}P_m\nabla P_m\cdot\nabla(\Delta
P_m)dx+2\int_{\mathbb{R}^n}P_m(\nabla^2P_{m})^2dx\\
=&-2\int_{\mathbb{R}^n}P_m|\Delta P_m|^2dx-2\int_{\mathbb{R}^n}|\nabla
P_m|^2\Delta P_mdx
+2\int_{\mathbb{R}^n}P_m(\nabla^2P_m)^2dx.
\end{aligned}
\end{equation*}
Hence, it holds
\begin{equation}\label{a3}
\int_{\mathbb{R}^n}|\nabla P_m|^2\Delta
P_mdx=-\frac{2}{3}\int_{\mathbb{R}^n}P_m|\Delta
P_m|^2dx+\frac{2}{3}\int_{\mathbb{R}^n}P_m(\nabla^2P_m)^2dx.
\end{equation}
Similarly, integrating by parts, we obtain
\begin{equation}\label{a4}
\begin{aligned}
\int_{\mathbb{R}^n} & \nabla P_m\cdot\nabla\mathcal{N}\ast\rho_m\Delta P_mdx\\
=&-\sum\limits_{i,j}\int_{\mathbb{R}^n}\partial_{ij}^2
P_m\partial_{j}\mathcal{N}\ast\rho_m\partial_{i}P_mdx-\sum\limits_{i,j}\int_{\mathbb{R}^n}\partial_i
P_m\partial_{ij}^2\mathcal{N}\ast\rho_m\partial_{j}P_mdx\\
=&\frac{1}{2}\int_{\mathbb{R}^n}|\nabla
P_m|^2\rho_mdx+\int_{\mathbb{R}^n}P_m\nabla\rho_m\cdot\nabla
P_mdx+\int_{\mathbb{R}^n}P_m\nabla^2\mathcal{N}\ast\rho_m:\nabla^2 P_mdx\\
=&\frac{3m-1}{2m-2}\int_{\mathbb{R}^n}\nabla
P_m\cdot\nabla\rho_m^mdx+\int_{\mathbb{R}^n}P_m\nabla^2\mathcal{N}\ast\rho_m:\nabla^2
P_mdx.
\end{aligned}
\end{equation}
Thus, inserting both \eqref{a3} and \eqref{a4} into \eqref{a2}, we have
\begin{equation}\label{a7}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}& \int_{\mathbb{R}^n}|\nabla P_m|^2dx -\frac{d}{dt} \int_{\mathbb{R}^n}\rho_m^mdx+(m-1)\int_{\mathbb{R}^n}P_m(\Delta
P_m+\rho_m)^2dx\\
&\qquad \qquad +\frac{2}{3}\int_{\mathbb{R}^n}P_m(\nabla^2P_m)^2dx+\frac{3m-1}{2m-2}\int_{\mathbb{R}^n}\nabla
P_m\cdot\nabla\rho_m^mdx\\
\leq&\frac{2}{3}\int_{\mathbb{R}^n}P_m|\Delta
P_m|^2dx-\int_{\mathbb{R}^n}P_m\nabla^2P_m:\nabla^2\mathcal{N}\ast\rho_mdx\\
\leq& \frac{2}{3}\int_{\mathbb{R}^n}P_m|\Delta
P_m|^2dx+\frac{1}{3}\int_{\mathbb{R}^n}P_m(\nabla^2P_m)^2dx+\frac{3}{4}\int_{\mathbb{R}^n}P_m(\nabla^2\mathcal{N}\ast\rho_m)^2dx,
\end{aligned}
\end{equation}
where the last inequality follows from
\[
\big| -\int_{\mathbb{R}^n}\nabla^2P_m:\nabla^2\mathcal{N}\ast\rho_m
P_m dx \big| \leq\frac{1}{3}\int_{\mathbb{R}^n}P_m(\nabla^2P_m)^2dx+\frac{3}{4}\int_{\mathbb{R}^n}P_m(\nabla^2\mathcal{N}\ast\rho_m)^2dx.
\]
It easily follows from Lemma~\ref{l10} and Sobolev's inequality that
\begin{equation}\label{a8}
\begin{aligned}
\int_{\mathbb{R}^n}\rho_m^mdx\leq \int_{\mathbb{R}^n}P_{m}\rho_{m}dx
\leq \frac{1}{4}\int_{\mathbb{R}^n}|\nabla P_m|^2dx+C(T).
\end{aligned}
\end{equation}
Similarly, thanks to Lemma~\ref{l18}, the singular integral theory for Newtonian
potential (Lemma~\ref{l12}), H\"older's inequality, and Young's inequality, we
have
\begin{equation}\label{a9}
\begin{aligned}
\frac{3}{4}\int_{\mathbb{R}^n}P_m(\nabla^2\mathcal{N}\ast\rho_m)^2dx
\leq&\frac{3}{4}\sum_{ij}(\int_{\mathbb{R}^n}P_m^{\frac{2n}{n-2}}dx)^{\frac{n-2}{2n}}(\int_{\mathbb{R}^n}|\partial_{ij}^2\mathcal{N}\ast\rho_m|^{\frac{4n}{n+2}}dx)^{\frac{n+2}{2n}}
\\
\leq&\frac{3}{4}\sum_{ij}(C(n)\int_{\mathbb{R}^n}|\nabla
P_m|^2dx)^{\frac{1}{2}}(C(n)\int_{\mathbb{R}^n}\rho_m^{\frac{4n}{n+2}}dx)^{\frac{n+2}{2n}}
\\
\leq&\frac{1}{8}\int_{\mathbb{R}^n}|\nabla
P_m|^2dx+\frac{9}{8}C(n)n^2(C(n)\int_{\mathbb{R}^n}\rho_m^{\frac{4n}{n+2}}dx)^{\frac{n+2}{n}}\\
\leq&\frac{1}{8}\int_{\mathbb{R}^n}|\nabla P_m|^2dx+C(T).
\end{aligned}
\end{equation}
Integrating \eqref{a7} on $[0,t]$ for any $t\in(0,T]$ and using both \eqref{a8}
and \eqref{a9}, we obtain
\begin{equation}\label{a10}
\begin{aligned}
\frac{1}{4}\int_{\mathbb{R}^n} & |\nabla
P_m(t)|^2dx+\int_{\mathbb{R}^n}\rho_{m,0}^mdx+(m-\frac{7}{3})\int_{0}^{t}\int_{\mathbb{R}^n}P_m(\Delta
P_m+\rho_m)^2dxds\\
&\quad +\frac{1}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m(\nabla^2
P_m)^2dxds+\frac{3m-1}{2m-2}\int_{0}^{t}\int_{\mathbb{R}^n}\nabla
P_m\cdot\nabla\rho_m^mdxds\\
\leq&\frac{4}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m\rho_m^2dxds+\frac{1}{8}\int_{0}^{t}\int_{\mathbb{R}^n}|\nabla
P_m|^2dxds+C(T)t+\frac{1}{2}\int_{\mathbb{R}^n}|\nabla P_{m,0}|^2dx,
\end{aligned}
\end{equation}
where $\frac{2}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m(\Delta
P_m)^2dxds\leq\frac{4}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m\rho_m^2dxds
+\frac{4}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m(\Delta P_m+\rho_m)^2dxds$ is used.
It easily follows from Lemma~\ref{l10} and Sobolev's inequality that
\begin{equation*}
\begin{aligned}
\frac{4}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m\rho_m^2dxds\leq
\frac{1}{3}\int_{0}^{t}\int_{\mathbb{R}^n}|\nabla P_m|^2dxds+C(T)\leq C(T).
\end{aligned}
\end{equation*}
Inserting this into \eqref{a10} and by virtue of Lemma~\ref{l10}, we have,
for all $t\in[0,T]$,
\begin{equation}\label{a12}
\begin{aligned}
\frac{1}{4}\int_{\mathbb{R}^n} & |\nabla
P_m(t)|^2dx+(m-\frac{7}{3})\int_{0}^{t}\int_{\mathbb{R}^n}P_m(\Delta
P_m+\rho_m)^2dxds\\
&+\frac{1}{3}\int_{0}^{t}\int_{\mathbb{R}^n}P_m(\nabla^2 P_m)^2dxds\leq C(T) .
\end{aligned}
\end{equation}
Therefore, it follows from \eqref{a12} that
\[
\sup_{0\leq t\leq T} \|\nabla P_m(t)\|_{L^2(\mathbb{R}^n)}^2+m
\int_{Q_T}P_m(\Delta P_m+\rho_m)^2dxdt+
\int_{Q_T}P_m(\nabla^2P_m)^2dxdt\leq C(T),
\]
and thus \eqref{est:pg1} and the first estimate of \eqref{est:pg2} are obtained.
For the second bound of \eqref{est:pg2}, the above inequality and Lemma~\ref{l11} lead to
\begin{equation*}
\begin{aligned}
\int_{Q_T}|\partial_{i}P_m|^3dxdt
&=\int_{Q_T}\partial_{i}P_m\partial_{i}P_m |\partial_{i}P_m|dxdt \leq2\int_{Q_T}P_m|\partial_{ii}P_m||\partial_{i}P_m|dxdt\\
&\leq \int_{Q_T}P_m(\partial_{ii}P_m)^2dxdt+
\int_{Q_T}P_m(\partial_{i}P_m)^2dxdt \leq C(T).
\end{aligned}
\end{equation*}
Therefore, we have obtained the second estimate of \eqref{est:pg2}.
\end{proof}
Next, our goal is to establish the uniform Aronson-B\'enilan (AB) estimate which uses the new variable
\begin{equation}\label{a21b}
\omega_m:=\Delta P_m+\rho_m.
\end{equation}
\begin{theorem}[Aronson-B\'enilan estimate]\label{t11} Assume that the initial data
$\rho_{m,0}$ satisfies \eqref{c1}--\eqref{c3}, and let $\rho_m$ be a weak solution to the Cauchy problem Eq.~\eqref{d1} in the sense of Def.~\ref{d1}, then
\begin{align}
&\sup_{0\leq t\leq T}\||\omega_m(t)|_-\|_{L^2(\mathbb{R}^n)}\leq
C(T),&&
\||\omega_m|_{-}\|_{L^3(Q_T)}^3\leq \frac{C(T)}{m},\label{est:ab1}\\
&\sup_{0\leq t\leq T}\||\omega_m(t)|_{-}\|_{L^1(\mathbb{R}^n)}\leq
C(T),&&
\sup_{0\leq t\leq T}\|\Delta P_m(t)\|_{L^1(\mathbb{R}^n)}\leq C(T)\label{est:ab2}.
\end{align}
\end{theorem}
\begin{proof}
We rewrite the equation of the density
\begin{equation}\label{aa17}
\begin{aligned}
\partial_{t}\rho_m&=\Delta
\rho_m^m+\nabla\cdot(\rho_m\nabla\mathcal{N}\ast\rho_m)\\
&=\rho_m(\Delta P_m+\rho_m)+\nabla\rho_m\cdot(\nabla
P_m+\nabla\mathcal{N}\ast\rho_m)\\
&=\rho_m\omega_m+\nabla\rho_m\cdot(\nabla
P_m+\nabla\mathcal{N}\ast\rho_m),
\end{aligned}
\end{equation}
and the pressure equation is
\begin{equation*}
\partial_{t}P_m=(m-1)P_m\omega_m+\nabla P_m\cdot\nabla P_m+\nabla
P_m\cdot\nabla\mathcal{N}\ast\rho_m.
\end{equation*}
Then, we compute
\begin{equation}\label{aa18}
\begin{aligned}
\partial_t\Delta P_m=&(m-1)\Delta(P_m\omega_m)+\nabla(\Delta
P_m)\cdot(\nabla\mathcal{N}\ast\rho_m+\nabla P_m)\\
&+\nabla P_m\cdot\nabla
\omega_m+2\nabla^2P_m:(\nabla^2P_m+\nabla^2\mathcal{N}\ast\rho_m).
\end{aligned}
\end{equation}
Combining \eqref{aa17} and \eqref{aa18}, the equation of $\omega_m$ is
\begin{equation*}
\begin{aligned}
\partial_{t}\omega_m=&(m-1)\Delta(P_m\omega_m)+\nabla\omega_m\cdot\nabla\mathcal{N}\ast\rho_m+2\nabla^2P_m:(\nabla^2P_m+\nabla^2\mathcal{N}\ast\rho_m)\\
&+\rho_m\omega_m+2\nabla P_m\cdot\nabla\omega_m.
\end{aligned}
\end{equation*}
Thus, we have
\begin{equation}\label{a20}
\begin{aligned}
\partial_t\omega_m\geq&(m-1)\Delta(P_m\omega_m)+\nabla\omega_m\cdot\nabla\mathcal{N}\ast\rho_m-\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_m)^2\\
&+2\nabla P_m\cdot\nabla\omega_m+\rho_m\omega_m,
\end{aligned}
\end{equation}
where we use that
\begin{equation*}
\begin{aligned}
2\nabla^2P_m:(\nabla^2P_m+\nabla^2\mathcal{N}\ast\rho_m)&\geq
2\nabla^2P_m:\nabla^2P_m-2\nabla^2P_m:\nabla^2P_m-\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_m)^2\\
&= -\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_m)^2.
\end{aligned}
\end{equation*}
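The matrix inequality used above, $2A:(A+B)\geq-\frac{1}{2}|B|^2$ in the Frobenius product, follows from $2|A||B|\leq 2|A|^2+\frac{1}{2}|B|^2$; a randomized numerical sanity check (not part of the proof):

```python
import random

def frob(A, B):
    # Frobenius inner product A : B = sum_ij A_ij B_ij
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

random.seed(0)
for _ in range(1000):
    A = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(3)]
    B = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(3)]
    ApB = [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]
    assert 2 * frob(A, ApB) >= -0.5 * frob(B, B) - 1e-9
```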
Multiplying \eqref{a20} by $-2|\omega_m|_-$, due to Kato's inequality, we obtain
\begin{equation}\label{aa21}
\begin{aligned}
\partial_{t}|\omega_m|_-^2
\leq&
2(m-1)\Delta(P_m|\omega_m|_-)|\omega_m|_-+\nabla|\omega_m|_-^2\cdot\nabla\mathcal{N}\ast\rho_m+2\nabla|\omega_m|_-^2\cdot\nabla
P_m\\
&+(\nabla^2\mathcal{N}\ast\rho_m)^2|\omega_m|_-+2\rho_m|\omega_m|_-^2,
\end{aligned}
\end{equation}
where we used that $\omega_m\,sgn(|\omega_m|_-)=-|\omega_m|_-$. Integrating \eqref{aa21} on
$\mathbb{R}^n$ and integrating by parts, we find
\begin{equation}\label{a99}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n}|\omega_m|_-^2dx\leq&
2(m-1)\int_{\mathbb{R}^n}\Delta(P_m|\omega_m|_-)|\omega_m|_-dx-2\int_{\mathbb{R}^n}|\omega_m|_-^2(\Delta
P_m+\rho_m)dx\\
&+3\int_{\mathbb{R}^n}|\omega_m|_-^2\rho_mdx+\int_{\mathbb{R}^n}(\nabla^2\mathcal{N}\ast\rho_m)^2|\omega_m|_-dx\\
=&
2(m-1)\int_{\mathbb{R}^n}\Delta(P_m|\omega_m|_-)|\omega_m|_-dx+2\int_{\mathbb{R}^n}|\omega_m|_-^3dx\\
&+3\int_{\mathbb{R}^n}|\omega_m|_-^2\rho_mdx+\int_{\mathbb{R}^n}(\nabla^2\mathcal{N}\ast\rho_m)^2|\omega_m|_-dx.
\end{aligned}
\end{equation}
Recalling the definition of $\omega_m$, we compute
\begin{equation}\label{OMEGA}
\begin{aligned}
2(m-1)\int_{\mathbb{R}^n} & \Delta(P_m|\omega_m|_-)|\omega_m|_-dx
=-2(m-1)\int_{\mathbb{R}^n}\nabla(P_m|\omega_m|_-)\nabla|\omega_m|_-dx\\
=&-(m-1)\int_{\mathbb{R}^n}\nabla
P_m\cdot\nabla|\omega_m|_-^2dx-2(m-1)\int_{\mathbb{R}^n}P_m|\nabla|\omega_m|_-|^2dx\\
=&(m-1)\int_{\mathbb{R}^n}\Delta
P_m|\omega_m|_-^2dx-2(m-1)\int_{\mathbb{R}^n}P_m|\nabla|\omega_m|_-|^2dx\\
=&(m-1)\int_{\mathbb{R}^n}(\omega_m-\rho_m)|\omega_m|_-^2dx-2(m-1)\int_{\mathbb{R}^n}P_m|\nabla|\omega_m|_-|^2dx\\
=&-(m-1)\int_{\mathbb{R}^n}
|\omega_m|_-^3dx-(m-1)\int_{\mathbb{R}^n}\rho_m|\omega_m|_-^2dx-2(m-1)\int_{\mathbb{R}^n}P_m|\nabla|\omega_m|_-|^2dx.
\end{aligned}
\end{equation}
And, inserting this in \eqref{a99}, we get
\begin{equation}\label{a22}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n}& |\omega_m|_-^2dx+2(m-1)\int_{\mathbb{R}^n}P_m|\nabla|\omega_m|_-|^2dx\\
&+(m-4)\int_{\mathbb{R}^n}\rho_m|\omega_m|_-^2dx
+(m-3)\int_{\mathbb{R}^n}|\omega_m|_-^3dx\\
\leq&\int_{\mathbb{R}^n}(\nabla^2\mathcal{N}\ast\rho_m)^2|\omega_m|_-dx.
\end{aligned}
\end{equation}
Thanks to Young's inequality, Lemma~\ref{l10}, and the singular integral theory
for Newtonian potential (Lemma~\ref{l12}), we have
\[
\begin{aligned}
\int_{\mathbb{R}^n}(\nabla^2\mathcal{N}\ast\rho_m)^2|\omega_m|_-dx&\leq\sum_{ij}\frac{2n}{3^{3/2}}\int_{\mathbb{R}^n}|\partial_{ij}^2\mathcal{N}\ast\rho_m|^3dx+\sum_{ij}\frac{1}{n^2}\int_{\mathbb{R}^n}|\omega_m|_-^3dx\\
&\leq
\sum_{ij}\frac{2n}{3^{3/2}}C(n)\int_{\mathbb{R}^n}\rho_m^3dx+\int_{\mathbb{R}^n}|\omega_m|_-^3dx\\
&\leq \int_{\mathbb{R}^n}|\omega_m|_-^3dx+C(T).
\end{aligned}
\]
Inserting this into \eqref{a22}, we arrive at
\[\begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n} |\omega_m|_-^2dx&+2(m-1)\int_{\mathbb{R}^n}P_m|\nabla|\omega_m|_-|^2dx
+(m-4)\int_{\mathbb{R}^n} [\rho_m|\omega_m|_-^2 +|\omega_m|_-^3]dx\\
\leq& C(T).
\end{aligned}
\]
After time integration, we obtain
\begin{equation*}
\begin{aligned}
\sup_{0\leq t\leq T}\int_{\mathbb{R}^n} |\omega_m(t)|_-^2dx& +2m
\int_{Q_T}P_m|\nabla|\omega_m|_-|^2dxdt+m \int_{Q_T} [\rho_m|\omega_m|_-^2
+|\omega_m|_-^3 ]dxdt\\
\leq& C(T).
\end{aligned}
\end{equation*}
This proves \eqref{est:ab1} of Theorem \ref{t11}.
\\
Next, we multiply \eqref{a20} by $-sgn(|\omega_m|_-)$, then we get
\[
\begin{aligned}
\partial_t|\omega_m|_-\leq&(m-1)\Delta(P_m|\omega_m|_-)+\nabla|\omega_m|_-\cdot\nabla\mathcal{N}\ast\rho_m+sgn(|\omega_m|_-)\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_m)^2\\
&+2\nabla P_m\cdot\nabla|\omega_m|_-+\rho_m|\omega_m|_-.
\end{aligned}
\]
After integration on $\mathbb{R}^n$, due to the singular integral theory
for the Newtonian potential (Lemma~\ref{l12}) and H\"older's inequality, we
obtain
\[
\begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n}|\omega_m|_-dx
\leq&2\int_{\mathbb{R}^n}|\omega_m|_-\rho_mdx+2\int_{\mathbb{R}^n}|\omega_m|_-^2dx+\int_{\mathbb{R}^n}\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_m)^2dx\\
\leq&
\int_{\mathbb{R}^n}\rho_m^2dx+3\int_{\mathbb{R}^n}|\omega_m|_-^2dx+C(n)\int_{\mathbb{R}^n}\rho_m^2dx
\leq C(T).
\end{aligned}
\]
Integrating in $t$ as before, we conclude the first estimate of \eqref{est:ab2} of Theorem~\ref{t11}, namely
\begin{equation}\label{a28}
\sup_{0\leq t\leq T}\int_{\mathbb{R}^n}|\omega_m(t)|_-dx\leq C(T).
\end{equation}
Finally, since $|\Delta P_m| \leq |\Delta P_m+\rho_m| + \rho_m =\Delta P_m+\rho_m + 2 |\omega_m|_- + \rho_m$, we find
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}|\Delta P_m|dx&\leq \int_{\mathbb{R}^n}\Delta
P_m+\rho_mdx+2\int_{\mathbb{R}^n}|\omega_m|_-dx+\int_{\mathbb{R}^n}\rho_mdx\\
&\leq 2C+2\int_{\mathbb{R}^n}|\omega_m|_-dx,
\end{aligned}
\end{equation*}
therefore, the second bound of \eqref{est:ab2} follows from \eqref{a28}:
\begin{equation*}
\sup_{0\leq t\leq T}\int_{\mathbb{R}^n}|\Delta P_m(t)|dx\leq C(T).
\end{equation*}
The proof of Theorem~\ref{t11} is completed.
\end{proof}
Finally, we justify the $L^1$ estimate on the time derivative of the pressure. This estimate is not easy to obtain, but it is
useful to get local strong compactness of the pressure gradient sequence $\{\nabla P_m\}_{m>1}$. We make full use of Kato's inequality and the specific form of the Newtonian potential to achieve this goal.
We first give two useful preliminary lemmas.
\begin{lemma}\label{l15}
Assume that the initial data $\rho_{m,0}$ satisfies the assumptions
\eqref{c1}--\eqref{c2} and let $\rho_m$ be a weak solution to the Cauchy problem Eq.~\eqref{d1} in the sense of Def.~\ref{d1}. Then it follows that
\begin{align*}
\int_{Q_T}\big(\nabla\rho_m^{m}\cdot\nabla\rho_m^{m+1}+|\nabla\rho_m^{m+1}|^2+|\nabla\rho_m^{m+2}|^2)dxdt\leq
C(T).
\end{align*}
\end{lemma}
\begin{proof}
These estimates can be written as $L^2(Q_T)$ bounds on $\rho_m^{\frac 32 } \nabla P_m$, $\rho_m^2 \nabla P_m$, $\rho_m^{3} \nabla P_m$. They
are direct consequences of~\eqref{est:rho4} and \eqref{a16}.
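For the reader's convenience, we record the algebra behind this reformulation; here we use the pressure law $P_m=\frac{m}{m-1}\rho_m^{m-1}$ (the same convention as for $\mathrm{P}_m$ in \eqref{w2}), so that $\nabla P_m=m\rho_m^{m-2}\nabla\rho_m$ and
\[
\nabla\rho_m^{m}=\rho_m\nabla P_m,\qquad
\nabla\rho_m^{m+1}=\frac{m+1}{m}\,\rho_m^{2}\nabla P_m,\qquad
\nabla\rho_m^{m+2}=\frac{m+2}{m}\,\rho_m^{3}\nabla P_m.
\]
Hence each integrand in the lemma is, up to a constant bounded uniformly in $m$, the square of a power of $\rho_m$ times $\nabla P_m$.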
\end{proof}
\begin{lemma}\label{l16}
Under the initial assumptions \eqref{c1}--\eqref{c3}, let the pair $(P_m,\rho_m)$ be a weak solution to the PKS model Eq.~\eqref{d1} in the sense of Def.~\ref{d1}. Then it holds
\begin{equation*}
\|\partial_{t}\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m\|_{L^1(Q_T)}\leq
C(T).
\end{equation*}
\end{lemma}
\begin{proof}
Since
\begin{equation*}
\partial_t\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m\geq\rho_m\omega_m,
\end{equation*}
we have
\begin{equation*}
|\partial_t\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m|_- \leq \rho_m|\omega_m|_- \leq \frac 12 [\rho_m^2+ |\omega_m|_-^2] .
\end{equation*}
Consequently, using mass conservation, Theorem~\ref{t11} and Lemma~\ref{l10},
\begin{equation*}
\begin{aligned}
\int_{Q_T} |\partial_t &\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m|dxdt
\\
=& \int_{Q_T}(\partial_t\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m)dxdt+2
\int_{Q_T}|\partial_t\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m|_-dxdt
\\
\leq & 2\int_{Q_T}\rho_m^{2}dxdt+\int_{Q_T}|\omega_m|_-^2dxdt
\leq C(T).
\end{aligned}
\end{equation*}
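The first equality above uses the elementary decomposition of the absolute value, with the convention $|f|_-=\max\{-f,0\}\geq 0$:
\[
|f|=f+2|f|_-,
\]
which reduces an $L^1$ bound on $f$ to a bound on its signed integral together with an $L^1$ bound on its negative part; the same device is used again in \eqref{a80}.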
\end{proof}
In the following, we give the $L^1$ estimate of the time derivative of the pressure.
\begin{theorem}[$L^1$ time derivative estimate of pressure]$\label{t12}$
Under the initial assumptions \eqref{c1}--\eqref{c4}, let $\rho_m$ be a weak solution to the Cauchy problem \eqref{d1} for the PKS in the sense of Def.~\ref{d1}. Then
\begin{equation*}
\|\partial_t P_m\|_{L^1(Q_T)}\leq C(T).
\end{equation*}
\end{theorem}
\begin{proof}
We cannot work directly with $\partial_t P_m$ because of the power arising in a remainder term, and thus we use $\partial_t \rho_m^{m+1}$.
For this reason, we rewrite the cell density equation \eqref{d1} in two forms
\begin{align}
&\partial_t\rho_m=\rho_m(\Delta P_m+\rho_m)+\nabla \rho_m\cdot(\nabla P_m+\nabla
\mathcal{N}\ast\rho_m),\label{a29}\\
&\partial_{t}\rho_m=\Delta\rho_m^m+\rho_m^2+\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m,\label{a30}
\end{align}
and we record two useful identities
\begin{align}
&\partial_t \rho_m^{m+1}=(m+1)\rho_m^{m+1}(\Delta
P_m+\rho_m)+\nabla\rho_m^{m+1}\cdot(\nabla
P_m+\nabla\mathcal{N}\ast\rho_m),\label{a31}\\
&\Delta\rho_m^{m+1}=\frac{m+1}{m}(\rho_m\Delta\rho_m^m+\nabla\rho_m\cdot\nabla\rho_m^m).\label{a32}
\end{align}
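Identity \eqref{a32} is a direct consequence of the chain rule: since $\nabla\rho_m^{m+1}=(m+1)\rho_m^{m}\nabla\rho_m=\frac{m+1}{m}\,\rho_m\nabla\rho_m^{m}$, taking the divergence gives
\[
\Delta\rho_m^{m+1}=\nabla\cdot\Big(\frac{m+1}{m}\,\rho_m\nabla\rho_m^m\Big)
=\frac{m+1}{m}\big(\rho_m\Delta\rho_m^m+\nabla\rho_m\cdot\nabla\rho_m^m\big).
\]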
With the help of Kato's inequality, we differentiate Eq.~\eqref{a31}
with respect to time and multiply by $-sgn(|\partial_t\rho_m|_-)$; then
it holds
\[
\begin{aligned}
\partial_t &|\partial_t\rho_m^{m+1}|_-\leq
(m+1)\Big[|\partial_t\rho_m^{m+1}|_- [\Delta P_m+\rho_m]+ \rho_m^{m+1}|\partial_t\rho_m|_- +\rho_m^{m+1}\Delta |\partial_tP_m|_- \Big]
\\
&+\nabla|\partial_t\rho_m^{m+1}|_-\cdot \nabla [P_m+\mathcal{N}\ast\rho_m]+\nabla\rho_m^{m+1}\cdot\nabla|\partial_tP_m|_-
-sgn_-(\partial_t\rho_m )\nabla\rho_m^{m+1}\cdot\nabla\mathcal{N}\ast\partial_t\rho_m
\end{aligned}
\]
and after integration by parts on $\mathbb{R}^n$ and insertion of Eq.~\eqref{a32}, we find
\[ \begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n}|\partial_t\rho_m^{m+1}|_-dx
\leq & \int_{\mathbb{R}^n} |\partial_t\rho_m^{m+1}|_- [m \Delta P_m +(m+1) \rho_m ]dx +m \int_{\mathbb{R}^n} |\partial_tP_m|_- \Delta \rho_m^{m+1}dx
\\
&+ \int_{\mathbb{R}^n} | \nabla\rho_m^{m+1}\cdot\nabla\mathcal{N}\ast\partial_t\rho_m|dx\\
=&(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^{m+1}|_- \rho_m dx+(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-\rho_m\Delta P_mdx\\
&+(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-\Delta\rho_m^mdx+(m+1)\int_{\mathbb{R}^n}|\partial_tP_m|_-\nabla\rho_m\cdot\nabla\rho_m^m dx\\
&+ \int_{\mathbb{R}^n} | \nabla\rho_m^{m+1}\cdot\nabla\mathcal{N}\ast\partial_t\rho_m|dx.\\
\end{aligned}
\]
We insert Eqs.~\eqref{a29}--\eqref{a30} into this inequality, and use Eq.~\eqref{m10} for the last term:
\begin{equation*}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n} &|\partial_t\rho_m^{m+1}|_-dx \leq
(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^{m+1}|_-\rho_mdx+(m+1) \int_{\mathbb{R}^n}|\partial_t P_m|_-\nabla\rho_m\cdot\nabla\rho_m^mdx
\\
&+(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-(\partial_t\rho_m-\rho_m^2-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m-\nabla\rho_m\cdot\nabla
P_m)dx\\
&+ (m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-(\partial_t\rho_m-\rho_m^2-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m)dx\\
&+\int_{\mathbb{R}^n}\nabla\rho_m^m\cdot\nabla\rho_m^{m+1}dx+\int_{\mathbb{R}^n}|\nabla\rho_m^{m+1}\cdot\nabla\mathcal{N}\ast(\nabla\cdot(\rho_m\nabla\mathcal{N}\ast\rho_m))|dx.
\end{aligned}
\end{equation*}
The two terms with $(m+1)|\partial_t P_m|_-\nabla\rho_m\cdot\nabla\rho_m^m$ and $-(m+1)|\partial_t\rho_m^m|_- \nabla\rho_m\cdot\nabla
P_m$ cancel, due to $|\partial_t P_m|_-\nabla\rho_m\cdot\nabla\rho_m^m=|\partial_t\rho_m^m|_- \nabla\rho_m\cdot\nabla
P_m$, and it follows that
\begin{equation}\label{a38}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb{R}^n} &|\partial_t\rho_m^{m+1}|_-dx+(m-1)\int_{\mathbb{R}^n}|\partial_t\rho_m^{m+1}|_-\rho_mdx
+2(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-|\partial_t\rho_m|_-dx\\
\leq&\underbrace{2(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_mdx}_{\mathcal{A}}+\int_{\mathbb{R}^n}\nabla\rho_m^m\cdot\nabla\rho_m^{m+1}dx\\
&+\underbrace{\int_{\mathbb{R}^n}|\nabla\rho_m^{m+1}\cdot\nabla\mathcal{N}\ast(\nabla\cdot(\rho_m\nabla\mathcal{N}\ast\rho_m))|dx}_{\mathcal{B}}.
\end{aligned}
\end{equation}
For $\mathcal{A}$ and $\mathcal{B}$, we have
\[
\begin{aligned}
\mathcal{A}=&2(m+1)m\int_{\mathbb{R}^n}\rho_m^{m-1}|\partial_t\rho_m|_-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_mdx\\
\leq&(m+1)m\int_{\mathbb{R}^n}\rho_m^{m-1}|\partial_t\rho_m|_-^2dx+(m+1)m\int_{\mathbb{R}^n}\rho_m^{m-1}|\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m|^2dx\\
\leq&(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^{m}|_-|\partial_t\rho_m|_-dx+(m+1)\|\nabla\mathcal{N}\ast\rho_m\|_{L^{\infty}(Q_T)}^2\int_{\mathbb{R}^n}\nabla\rho_m^m\cdot\nabla\rho_mdx,
\end{aligned} \]
\[
\begin{aligned}
\mathcal{B}\leq&
\frac{1}{2}\int_{\mathbb{R}^n}|\nabla\rho_m^{m+1}|^2dx+\frac{1}{2}\int_{\mathbb{R}^n}|\nabla\mathcal{N}\ast(\nabla\cdot(\rho_m\nabla\mathcal{N}\ast\rho_m))|^2dx\\
\leq&
\frac{1}{2}\int_{\mathbb{R}^n}|\nabla\rho_m^{m+1}|^2dx+\frac{1}{2}C(n)\|\nabla\mathcal{N}\ast\rho_m\|_{L^{\infty}(Q_T)}^2\int_{\mathbb{R}^n}\rho_m^2dx.
\end{aligned}
\]
From Lemma~\ref{l10}, in which we let $p=2$, we control the term $\nabla \rho_m^m \cdot \nabla\rho_m$ in $L^1(Q_T)$. The terms in $\mathcal{B}$ are also controlled thanks to the previous bounds (in particular Lemma~\ref{l15}), as is the second term in the final expression of $\mathcal A$. Altogether, the known bounds reduce~\eqref{a38} to
\begin{equation*}
\frac{d}{dt}\int_{\mathbb{R}^n}|\partial_t\rho_m^{m+1}|_-dx+\frac{m^2-1}{m+2} \int_{\mathbb{R}^n}|\partial_t\rho_m^{m+2}|_- dx+(m+1)\int_{\mathbb{R}^n}|\partial_t\rho_m^m|_-|\partial_t\rho_m|_-dx
\leq (m+1) C(T).
\end{equation*}
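The coefficient $\frac{m^2-1}{m+2}$ above comes from the pointwise chain-rule identity, valid since $\rho_m\geq 0$,
\[
\rho_m\,|\partial_t\rho_m^{m+1}|_-=\frac{m+1}{m+2}\,|\partial_t\rho_m^{m+2}|_-,
\]
so that the second term on the left of \eqref{a38} satisfies $(m-1)\int_{\mathbb{R}^n}\rho_m|\partial_t\rho_m^{m+1}|_-dx=\frac{m^2-1}{m+2}\int_{\mathbb{R}^n}|\partial_t\rho_m^{m+2}|_-dx$.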
Therefore, it holds
\begin{equation}\label{a53}
\int_{Q_T}|\partial_t\rho_m^{m+2}|_-dxdt+\int_{Q_T} |\partial_t\rho_m^m|_-|\partial_t\rho_m|_-dxdt\leq C(T).
\end{equation}
Taking into account Lemma~\ref{l10} and Theorem~\ref{t10}, we use Sobolev's
inequality and obtain
\begin{equation*}
\begin{aligned}
\sup_{0\leq t\leq T}\int_{\mathbb{R}^n}\rho_m^{m+2}dx
\leq &C\sup_{0\leq t\leq T}\int_{\mathbb{R}^n}\rho_m^{3}P_mdx
\leq C\sup_{0\leq t\leq T}\|\nabla P_m(t)\|_{L^2(\mathbb{R}^n)}+C(T)\\
\leq& C(T).
\end{aligned}
\end{equation*}
Thus, combining the above inequality and \eqref{a53}, we get
\begin{equation}\label{a80}
\begin{aligned}
\int_{Q_T}|\partial_t\rho_m^{m+2}|dxdt&\leq
\int_{Q_T}\partial_t\rho_m^{m+2}dxdt+2
\int_{Q_T}|\partial_t\rho_m^{m+2}|_-dxdt\\
&\leq
\int_{\mathbb{R}^n}\rho_m^{m+2}(T)dx-\int_{\mathbb{R}^n}\rho_{m,0}^{m+2}dx+2\int_{Q_T}|\partial_t\rho_m^{m+2}|_-dxdt\\
&\leq C(T).
\end{aligned}
\end{equation}
Furthermore, combining \eqref{a80}, Lemma~\ref{l18}, and Lemma~\ref{l15}, we have
\begin{equation}\label{a42}
\begin{aligned}
\int_{Q_T} |\partial_t & \rho_m^{m+2}-\nabla\rho_m^{m+2}\cdot\nabla\mathcal{N}\ast\rho_m|dxdt\\
\leq&\int_{Q_T} \Big[|\partial_t\rho_m^{m+2}|+\frac{1}{2}|\nabla\rho_m^{m+2}|^2+\frac{1}{2}|\nabla\mathcal{N}\ast\rho_m|^2\Big]dxdt
\leq C(T).
\end{aligned}
\end{equation}
By Lemma~\ref{l16} and \eqref{a42}, we obtain
\[
\begin{aligned}
\int_{Q_T}|\partial_t\rho_m^{m-1}&-\nabla\rho_m^{m-1}\cdot\nabla\mathcal{N}\ast\rho_m|dxdt \\
=&
\int_{Q_T}|\partial_t\rho_m^{m-1}-\nabla\rho_m^{m-1}\cdot\nabla\mathcal{N}\ast\rho_m| \big[ \chi_{\{\rho_m\leq\frac{1}{2}\}} +
\chi_{\{\rho_m>\frac{1}{2}\}} \big]
dxdt\\
\leq&(m-1)\frac{1}{2^{m-2}}
\int_{Q_T}|\partial_t\rho_m-\nabla\rho_m\cdot\nabla\mathcal{N}\ast\rho_m|dxdt\\
&+\frac{8(m-1)}{m+2}
\int_{Q_T}|\partial_t\rho_m^{m+2}-\nabla\rho_m^{m+2}\cdot\nabla\mathcal{N}\ast\rho_m|dxdt
\leq C(T).
\end{aligned}
\]
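The splitting above exploits the common factor $(m-1)\rho_m^{m-2}$ shared by the two terms: on $\{\rho_m\leq\frac{1}{2}\}$ this factor is at most $\frac{m-1}{2^{m-2}}$, while on $\{\rho_m>\frac{1}{2}\}$, since $\rho_m^{-3}\leq 8$,
\[
|\partial_t\rho_m^{m-1}-\nabla\rho_m^{m-1}\cdot\nabla\mathcal{N}\ast\rho_m|
=\frac{m-1}{m+2}\,\rho_m^{-3}\,|\partial_t\rho_m^{m+2}-\nabla\rho_m^{m+2}\cdot\nabla\mathcal{N}\ast\rho_m|
\leq\frac{8(m-1)}{m+2}\,|\partial_t\rho_m^{m+2}-\nabla\rho_m^{m+2}\cdot\nabla\mathcal{N}\ast\rho_m|.
\]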
Combining this with Lemma~\ref{l10} and Lemma~\ref{l18}, we end up with
\begin{equation*}
\begin{aligned}
\int_{Q_T} & |\partial_t P_m|dxdt\leq \int_{Q_T}|\partial_t P_m-\nabla
P_m\cdot\nabla\mathcal{N}\ast\rho_m|dxdt+\int_{Q_T}|\nabla
P_m\cdot\nabla\mathcal{N}\ast\rho_m|dxdt\\
\leq&\frac{m}{m-1}\int_{Q_T}|\partial_t \rho_m^{m-1}-\nabla
\rho_m^{m-1}\cdot\nabla\mathcal{N}\ast\rho_m|dxdt+\frac{1}{2}
\int_{Q_T} \big[|\nabla P_m|^2 + |\nabla\mathcal{N}\ast\rho_m|^2 \big] dxdt\\
\leq& C(T),
\end{aligned}
\end{equation*}
where the first inequality is an application of the triangle inequality and the
second is due to the Cauchy-Schwarz inequality. The proof is completed.
\end{proof}
\begin{remark}
It should be emphasized that the first step in the proof of Theorem~\ref{t12} is to compute the time derivative of $\rho_m^{m+1}$. We can also begin with $\rho_m^{m}$ (not $P_m$), which requires using, from the density dissipation formula \eqref{est:rho2} of Lemma~\ref{l10},
\[
\int_{Q_T} |\nabla \rho_m^m \cdot \nabla \rho_m| \rho_m^{-1} dx dt \leq C(T).
\]
This bound requires the entropy $\rho_m \log \rho_m$, which we do not use here because of our initial assumptions.
\end{remark}
\section{Complementarity relation and semi-harmonicity}
Thanks to the a priori regularity estimates provided by Lemmas~\ref{l10}--\ref{l6} and
Theorems~\ref{t10}--\ref{t12}, we can obtain the strong compactness on the
pressure gradient, which allows us to obtain the complementarity relation. Moreover, the semi-harmonicity follows from the AB estimate (Theorem~\ref{t11}).
\begin{theorem}[Complementarity relation and semi-harmonicity]$\label{t13}$
Under the initial assumptions \eqref{c1}--\eqref{c4}, the complementarity
relation and the semi-harmonicity property, see Eq.~\eqref{z15}, hold.
\end{theorem}
\begin{proof}
From Lemma~\ref{l10} and Theorem~\ref{t12}, we have
\begin{align*}
\|\nabla P_m\|_{L^2(Q_T)}\leq C(T),\quad \|\partial_t P_m\|_{L^1(Q_T)}\leq C(T).
\end{align*}
Then, after the extraction of subsequences, we obtain
\begin{equation*}
P_m\rightarrow P_\infty,\text{ strongly in } L^1_{loc}(Q_T),\text{ as }
m\to\infty,
\end{equation*}
with the help of Sobolev's compactness embedding. From Theorem~\ref{t10}, we
obtain the weak compactness of the pressure gradient; up to a subsequence, we
have
\begin{equation*}
\nabla P_m\rightharpoonup \nabla P_\infty,\text{ weakly in } L^3(Q_T),\text{ as }
m\to\infty.
\end{equation*}
We define a smooth cutoff function $\varphi$ with $0\leq \varphi \leq 1$, $\varphi(x)=1$ for $|x|\leq 1$, and $\varphi(x)=0$ for $|x|\geq 2$. Then,
for any $R>0$, we let $\varphi_R(x)=\varphi(\frac{x}{R})$ and $P_{m,R}=\varphi_{R}(x)P_m$. By direct computations, we obtain
\begin{equation}\label{a57}
\|\partial_{t}P_{m,R}\|_{L^1(Q_T)}\leq C(T,R),\quad
\|\nabla P_{m,R}\|_{L^3(Q_T)}\leq C(T,R),\quad
\|\Delta P_{m,R}\|_{L^1(Q_T)}\leq C(T,R).
\end{equation}
Thanks to the above three estimates \eqref{a57}, and inspired by
\cite[Theorem~6.1]{r35}, we can establish
\begin{equation*}
\nabla P_{m,R}\to \nabla P_{\infty,R},\text{ strongly in } L^1(Q_T),\text{ as }
m\to\infty.
\end{equation*}
In other words, we can extract a subsequence such that
\begin{equation*}
\nabla P_m\rightarrow \nabla P_\infty,\text{ strongly in } L^1_{loc}(Q_T),\text{
as } m\to\infty.
\end{equation*}
Then, using the uniform $L^3$ bound for the pressure gradient in
Theorem~\ref{t10}, we have
\begin{equation*}
\nabla P_m\rightarrow \nabla P_\infty,\text{ strongly in } L^q_{loc}(Q_T), \text{
for } 1\leq q<3.
\end{equation*}
In particular, we use the case $q=2$ below.
Let $\zeta\in C_0^{\infty}(Q_T)$ be a test function. Multiplying the pressure equation \eqref{d3} by $\zeta$ and integrating on $Q_T$,
we find
\begin{equation*}
\begin{aligned}
-\frac{1}{m-1}\int_{Q_T} & P_m\partial_t\zeta+|\nabla
P_m|^2\zeta+\nabla P_m\cdot\nabla\mathcal{N}\ast\rho_m\zeta dxdt\\
=&\int_{Q_T}(-|\nabla P_m|^2\zeta-P_m\nabla
P_m\cdot\nabla\zeta+P_m\rho_m\zeta)dxdt.
\end{aligned}
\end{equation*}
Hence, passing to the limit as $m\to\infty$, we obtain the complementarity relation
\begin{equation*}
\int_{Q_T}(-|\nabla P_\infty|^2\zeta-P_\infty\nabla
P_\infty\cdot\nabla\zeta+P_\infty\rho_\infty\zeta)dxdt=0,
\end{equation*}
where $\rho_m P_m\rightharpoonup \rho_\infty P_{\infty},\text{ weakly in }
L^2(Q_T),\text{ as } m\to\infty$ results from \eqref{m38}.
This is equivalent to
\begin{equation*}
\int_{Q_T}P_\infty(\Delta P_\infty+\rho_\infty)\zeta dxdt=0,
\end{equation*}
which means that the complementarity relation of Eq.~\eqref{z15} holds.
From Theorem~\ref{t11}, we have $\int_{Q_T}|\Delta
P_m+\rho_m|_-^3dxdt\leq \frac{C(T)}{m}$. Let $\phi\in C_{0}^{\infty}(Q_T)$ be a
nonnegative test function. Then we have
\begin{equation}
\begin{aligned}
\int_{0}^{T}\int_{\mathbb{R}^n}(\Delta P_\infty+\rho_\infty)\phi
dxdt=&\lim\limits_{m\to\infty}\int_{0}^{T}\int_{\mathbb{R}^n}(\Delta
P_m+\rho_m)\phi dxdt\\
\geq&-\lim\limits_{m\to\infty}\int_{0}^{T}\int_{\mathbb{R}^n}|\Delta
P_m+\rho_m|_-\phi dxdt\\
\geq&-\lim\limits_{m\to\infty}(\int_{0}^{T}\int_{\mathbb{R}^n}|\Delta
P_m+\rho_m|_-^3dxdt)^{\frac{1}{3}}(\int_{0}^{T}\int_{\mathbb{R}^n}\phi^{\frac{3}{2}}
dxdt)^{\frac{2}{3}}\\
=&0.
\end{aligned}
\end{equation}
Hence, we establish the second result (semi-harmonicity property) of Eq.~\eqref{z15}.
The proof is completed.
\end{proof}
\begin{remark}
This result tells us that the limit solution satisfies
\begin{equation*}
\begin{cases}
-\Delta P_\infty=1, &\text{ in }\ \Omega(t):=\{x:P_\infty(x,t)>0\},\\
P_\infty=0& \text{ on }\ \partial\Omega(t),
\end{cases}
\end{equation*}
when enough regularity is available. This is related to the geometric form of the Hele-Shaw free boundary problem
while Eq.~\eqref{z6} is the weak form which determines the motion of the free boundary.
\end{remark}
\begin{theorem}
Under the initial assumptions \eqref{c1}--\eqref{c4}, we have
\begin{equation*}
\rho_\infty \nabla P_\infty=\nabla P_\infty,\quad\text{a.e. in }Q_T.
\end{equation*}
\end{theorem}
\begin{proof}
On the one hand, we have already proved in Theorem \ref{tAEbis} that $\rho_m \nabla P_m \to \nabla P_\infty$ weakly. On the other hand,
$\rho_m \to \rho_\infty$ weakly in $L^2(Q_T)$ and $\nabla P_m \to \nabla P_\infty$ strongly in $L^2_{loc}(Q_T)$. By weak-strong convergence, we obtain that $\rho_m \nabla P_m \to \rho_\infty \nabla P_\infty$ weakly, and the result is proved.
\end{proof}
\section{Uniqueness, compact support and energy functional}
In order to prove the uniqueness of the solution to the Hele-Shaw limit system
\eqref{z6}--\eqref{z8}, we use the lifting method in $\dot{H}^{-1}$ as in~\cite{4,r46,
rDDP} rather than the duality method~\cite{5,r50} or the entropy method~\cite{IgN}. The main new difficulty comes
from the nonlocal interaction. The uniform upper bound for the limit
density, and the property that the limit pressure is monotone with respect to the
limit density, allow us to use the energy method to prove the uniqueness as
in either \cite[Theorem~2.4]{4} or \cite[Theorem~3]{r46} for an aggregation
equation with degenerate diffusion.
\begin{proposition}[Uniqueness]$\label{p1}$
Let $(\rho_1,P_1)$ and $(\rho_2,P_2)$ be two solutions to the Cauchy problem
Eq.~\eqref{z6}--\eqref{z8} with initial data satisfying
$\rho_{1}(x,0)=\rho_{2}(x,0)\in \dot{H}^{-1}(\mathbb{R}^n)$. Then it follows that
\begin{equation*}
\rho_1=\rho_2,\quad P_1=P_2, \quad\text{a.e. in }Q.
\end{equation*}
\end{proposition}
\begin{proof}
First of all, we observe that the pressure is monotone with respect to the density.
Since $\rho_1P_1=P_1$ and $\rho_2P_2=P_2$ hold, we have
\begin{equation}\label{z9}
\begin{aligned}
(\rho_1-\rho_2)(P_1-P_2)=&\rho_1 P_1+\rho_2 P_2-\rho_1 P_2-\rho_2 P_1
\\[2pt]
=&(1-\rho_2)P_1+(1-\rho_1)P_2\\
\geq&0.
\end{aligned}
\end{equation}
We estimate the difference of weak solutions in $\dot{H}^{-1}(\mathbb{R}^n)$,
motivated by the monotonicity property \eqref{z9} of the pressure with respect to the density.
Let $\psi=\mathcal{N}\ast(\rho_1-\rho_2)$, by the integrability and bound of
$\rho_1$ and $\rho_2$, we have $\psi\in L^{\infty}(Q_T)\cap
C(0,T;\dot{H}^1(\mathbb{R}^n))$ and $\nabla\psi\in L^{\infty}(0,T;L^{p}(\mathbb{R}^n))\cap
L^\infty(Q_T)$ for $2\leq p< \infty$, and $\partial_t \psi$ solves
\begin{equation*}
\Delta \partial_t\psi=\partial_{t}\rho_1-\partial_{t}\rho_2.
\end{equation*}
Since
$\|\rho_1(t)-\rho_2(t)\|_{\dot{H}^{-1}(\mathbb{R}^n)}=\|\nabla\psi(t)\|_{L^2(\mathbb{R}^n)}$,
we are going to show $\|\nabla\psi(t)\|_{L^2(\mathbb{R}^n)}=0$ for all $t\geq 0$.
Let $\varphi\in C_0^{\infty}(\mathbb{R}^n)$ be a smooth test function satisfying
$\varphi=1$ in $B_1 $ and $0\leq \varphi \leq 1$ in $\mathbb{R}^n$.
Set $\varphi_R(x)=\varphi(\frac{x}{R})$ for $x\in\mathbb{R}^n$ and $R>1$, thanks to the regularity of $\psi$, possibly up to a set of measure zero, it holds
\begin{equation*}
\begin{aligned}
-\langle\partial_{t}\rho_1-\partial_{t}\rho_2,\psi\rangle
=&\lim\limits_{R\to\infty}-\langle\partial_{t}\rho_1-\partial_{t}\rho_2,\mathcal{N}\ast[(\rho_1-\rho_2)\varphi_R]\rangle\\
=&\lim\limits_{R\to\infty}\langle\nabla\partial_{t}\psi,\nabla\mathcal{N}\ast[(\rho_1-\rho_2)\varphi_R]\rangle\\
=&\langle\nabla\psi,\nabla\partial_{t}\psi\rangle
=\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^n}|\nabla\psi|^2dx.
\end{aligned}
\end{equation*}
Therefore, using the definition of weak solution in Theorem~\ref{tAEbis}, we have
\begin{equation*}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^n}|\nabla\psi|^2dx=&\lim\limits_{R\to\infty}-\int_{\mathbb{R}^n}(P_1-P_2)\Delta(\psi\varphi_R)
dx+\int_{\mathbb{R}^n}(\rho_1\nabla\mathcal{N}\ast\rho_1-\rho_2\nabla\mathcal{N}\ast\rho_2)\cdot\nabla(\varphi_R\psi)
dx\\
=&-\int_{\mathbb{R}^n}(P_1-P_2)\Delta\psi
dx+\int_{\mathbb{R}^n}(\rho_1\nabla\mathcal{N}\ast\rho_1-\rho_2\nabla\mathcal{N}\ast\rho_2)\cdot\nabla\psi
dx\\
=&-\int_{\mathbb{R}^n}(P_1-P_2)(\rho_1-\rho_2)dx+\int_{\mathbb{R}^n}(\rho_1-\rho_2)\nabla\mathcal{N}\ast\rho_1\cdot\nabla\psi
dx\\
&+\int_{\mathbb{R}^n}\rho_2\nabla\mathcal{N}\ast(\rho_1-\rho_2)\cdot\nabla\psi
dx\\
:=&I_1+I_2+I_3.
\end{aligned}
\end{equation*}
From \eqref{z9}, we obtain
\begin{equation*}
I_1=-\int_{\mathbb{R}^n}(P_1-P_2)(\rho_1-\rho_2)dx\leq 0.
\end{equation*}
By integrating by parts, we have
\begin{equation}\label{z10}
\begin{aligned}
I_2&=\lim\limits_{R\to\infty}\int_{\mathbb{R}^n}\Delta \psi\nabla \mathcal{N}\ast\rho_1\cdot\nabla(\psi\varphi_R)
dx\\
&=-\lim\limits_{R\to\infty}\sum\limits_{ij}\int_{\mathbb{R}^n}\partial_i\psi\partial_j(\psi\varphi_R)\partial_{ij}\mathcal{N}\ast\rho_1dx-\lim\limits_{R\to\infty}\sum\limits_{ij}\int_{\mathbb{R}^n}\partial_i\psi\partial_{ij}(\psi\varphi_R)\partial_j\mathcal{N}\ast\rho_1dx\\
&=-\sum\limits_{ij}\int_{\mathbb{R}^n}\partial_i\psi\partial_j\psi\partial_{ij}\mathcal{N}\ast\rho_1dx-\sum\limits_{ij}\int_{\mathbb{R}^n}\partial_i\psi\partial_{ij}\psi\partial_j\mathcal{N}\ast\rho_1dx.
\end{aligned}
\end{equation}
Similarly, integrating by parts again, we get
\begin{equation*}
\begin{aligned}
-\sum\limits_{ij}\int_{\mathbb{R}^n}\partial_i\psi\partial_{ij}\psi\partial_j\mathcal{N}\ast\rho_1dx
=&\lim\limits_{R\to\infty} -\sum\limits_{ij}\int_{\mathbb{R}^n}\partial_i\psi\partial_{ij}\psi\partial_j\mathcal{N}\ast\rho_1\varphi_Rdx\\
=& \lim\limits_{R\to\infty}\sum\limits_{ij}\frac{1}{2}\int_{\mathbb{R}^n}(\partial_i\psi)^2\partial_j(\partial_j\mathcal{N}\ast\rho_1\varphi_R)dx\\
=&\frac{1}{2}\int_{\mathbb{R}^n}|\nabla\psi|^2\rho_1
dx,
\end{aligned}
\end{equation*}
which together with \eqref{z10} implies
\begin{equation*}
I_2\leq\int_{\mathbb{R}^n}|\nabla\psi|^2|\nabla^2\mathcal{N}\ast\rho_1|dx+\frac{1}{2}\|\nabla\psi\|_{L^2(\mathbb{R}^n)}^2.
\end{equation*}
By H\"older's inequality and $\nabla\psi\in{L^{\infty}(Q_T)}$, for $p\geq 2$, we
have
\begin{equation}\label{z11}
\begin{aligned}
\int_{\mathbb{R}^n}|\nabla\psi|^2|\nabla^2\mathcal{N}\ast\rho_1|dx&\leq
\|\nabla^2\mathcal{N}\ast\rho_1\|_{L^p(\mathbb{R}^n)}(\int_{\mathbb{R}^n}|\nabla\psi|^{\frac{2p}{p-1}}dx)^{\frac{p-1}{p}}\\
&\lesssim
p\|\rho_1\|_{L^p(\mathbb{R}^n)}\|\nabla\psi\|_{L^\infty(\mathbb{R}^n)}^{\frac{2}{p}}(\int_{\mathbb{R}^n}|\nabla\psi|^2dx)^{\frac{p-1}{p}}\\
&\lesssim p(\int_{\mathbb{R}^n}|\nabla\psi|^2dx)^{\frac{p-1}{p}},
\end{aligned}
\end{equation}
where the implicit constant depends only on the uniformly controlled $L^p$ norms
of $\rho_1$ and $\rho_2$ and the second step holds because of the singular
integral theory (Lemma~\ref{l12}).\\
\indent As for $I_3$, we may directly justify
\begin{equation}\label{z12}
I_3=\int_{\mathbb{R}^n}\rho_2\nabla\mathcal{N}\ast(\rho_1-\rho_2)\cdot\nabla\psi
dx=\int_{\mathbb{R}^n}\rho_2|\nabla\psi|^2dx\lesssim
\|\nabla\psi\|_{L^2(\mathbb{R}^n)}^2.
\end{equation}
Let $\gamma(t)=\int_{\mathbb{R}^n}|\nabla\psi(t)|^2dx$, both \eqref{z11} and
\eqref{z12} imply the differential inequality
\begin{equation*}
\frac{d}{dt}\gamma(t)\leq \hat{C}p
\max\{\gamma(t)^{1-\frac{1}{p}},\gamma(t)\},
\end{equation*}
where $\hat{C}$ depends only on the uniformly controlled $L^p$ norm of $\rho_1$,
$\rho_2$.
All the solutions of this differential inequality are bounded from above by
the maximal solution. Since $\gamma(0)=0$ and $\gamma(t)$ is continuous, there
exists $t^*>0$ such that $0\leq \gamma(t)<1$ for $t\in[0,t^*]$, therefore
\begin{equation*}
\frac{d}{dt}\gamma(t)\leq \hat{C}p \gamma(t)^{1-\frac{1}{p}}, \qquad \gamma(0)=0,
\end{equation*}
and $\gamma(t)$ is a subsolution of the ordinary differential
equation
\begin{equation}\label{z13}
\frac{d}{dt}\bar{\gamma}(t)= \hat{C}p \bar{\gamma}(t)^{1-\frac{1}{p}}, \qquad \bar{\gamma}(0)=0,
\end{equation}
and $\bar{\gamma}(t)=(\hat{C}t)^{p}$ is the maximal solution to \eqref{z13}.
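One checks directly that $\bar{\gamma}(t)=(\hat{C}t)^{p}$ solves \eqref{z13}:
\[
\frac{d}{dt}(\hat{C}t)^{p}=\hat{C}p\,(\hat{C}t)^{p-1}=\hat{C}p\,\big((\hat{C}t)^{p}\big)^{1-\frac{1}{p}},
\qquad \bar{\gamma}(0)=0.
\]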
Consequently, we obtain
\begin{equation}\label{z14}
\gamma(t)\leq\bar{\gamma}(t)\leq 2^{-p}<1,
\end{equation}
for $0<t\leq\frac{1}{2\hat{C}}$. Therefore, we can extend $t^*$ so that $t^*\geq \frac{1}{4\hat{C}}$.
\\
In Eq.~\eqref{z14}, we may take $p\to \infty$ to deduce that $\gamma(t)=0$ for $t\in[0,\frac{1}{4\hat{C}}]$ and the proof of uniqueness is complete.
\end{proof}
In fact, we are able to prove time continuity and the initial trace for the Hele-Shaw limit system \eqref{z6}--\eqref{z8}. So far the initial data is obtained in the weak sense of Def.~\ref{def:WS}. This means that the Hele-Shaw equation holds with the initial data
$\rho_{\infty,0}= \operatorname{w-lim} \rho_{m,0}$. Notice that we know that $0\leq \rho_{\infty,0}\leq 1$ because the argument of Lemma~\ref{l6} still holds true.
We now prove an additional result, namely that the initial density is also obtained by time continuity.
\begin{proposition}[Almost everywhere time continuity]
Assume that initial data $\rho_{m,0}$ and $\rho_{\infty,0}$ satisfy the assumption~\eqref{c1}. Then it holds
\begin{equation*}
\lim_{t\to 0} \rho_{\infty}(t) = \rho_{\infty,0}\text{ a.e. in }\mathbb{R}^n.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\varphi\in C_{0}^{\infty}(\mathbb{R}^n)$. By a standard variant of the test function in Def.~\ref{def:WS}, we have for a.e. $t>0$,
\begin{equation*}
\int_{\mathbb{R}^n}(\rho_{m}(t)-\rho_{m,0})\varphi(x)dx=-\frac{m-1}{m}\int_{0}^{t}\int_{\mathbb{R}^n}\rho_{m}\nabla
P_m\cdot\nabla\varphi
dxdt-\int_{0}^{t}\int_{\mathbb{R}^n}\rho_{m}\nabla\mathcal{N}\ast\rho_m\cdot\nabla
\varphi dxdt.
\end{equation*}
Passing to the limit, we obtain
\begin{equation*}
\int_{\mathbb{R}^n}(\rho_{\infty}(t)-\rho_{\infty,0})\varphi(x)dx=-\int_{0}^{t}\int_{\mathbb{R}^n}\nabla
P_{\infty}\cdot\nabla\varphi
dxdt-\int_{0}^{t}\int_{\mathbb{R}^n}\rho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty}\cdot\nabla
\varphi dxdt.
\end{equation*}
Multiplying (\ref{z6}) by $\varphi$ and integrating on $[0,t]$, we get
\begin{equation*}
\int_{\mathbb{R}^n}(\rho_{\infty}(t)-\rho_{\infty}^0)\varphi(x)dx=-\int_{0}^{t}\int_{\mathbb{R}^n}\nabla
P_{\infty}\cdot\nabla\varphi
dxdt-\int_{0}^{t}\int_{\mathbb{R}^n}\rho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty}\cdot\nabla
\varphi dxdt,
\end{equation*}
therefore, we obtain that $\lim_{t\to 0} \rho_{\infty}(t) =: \rho_{\infty}^0$ exists in weak-$L^2(\mathbb{R}^n)$ and
\begin{equation*}
\int_{\mathbb{R}^n}\rho_{\infty,0}\varphi dx=\int_{\mathbb{R}^n}\rho_{\infty}^0\varphi dx,
\end{equation*}
which proves the statement.
\end{proof}
Furthermore, we show that the solution of the Hele-Shaw limit system \eqref{z6}--\eqref{z8} has compact support. To study the support of the limit density or the limit pressure, the main
difficulty comes from the nonlocal interaction, which prevents direct bounds on $\rho_m$. However, we may follow~\cite[Lemma~3.8]{CKY_2018} and obtain uniform control of the pressure; then we deduce that the speed of
propagation for the limit density is finite.
First, we introduce the approximate equation
\begin{equation}\label{w1}
\partial_{t}\varrho_m=\Delta\varrho_m^m+\nabla\cdot(\varrho_m\nabla\Phi_{1/m}),
\end{equation}
where $\Phi_{1/m}(x,t)=\zeta_{1/m}\ast(\mathcal{N}\ast\rho_{\infty})$ and
$\rho_{\infty}$ is the unique limit density in Theorem~\ref{tAEbis}. Define the
corresponding pressure $\mathrm{P}_m=\frac{m}{m-1}\varrho_m^{m-1}$ that satisfies
the following equation
\begin{equation}\label{w2}
\partial_{t}\mathrm{P}_m=(m-1)\mathrm{P}_m\Delta\mathrm{P}_m+|\nabla
\mathrm{P}_m|^2+\nabla \mathrm{P}_m\cdot\nabla \Phi_{1/m}+(m-1)\mathrm{P}_m\Delta
\Phi_{1/m} .
\end{equation}
Similar to Theorems~\ref{tAE}--\ref{tAEbis}, we can get the following theorem with the same
initial data.
\begin{theorem}\label{t6}
Let $\rho_{m,0}$ and $P_{m,0}$ be the initial data of the density $\varrho_{m}$
and the pressure $\mathrm{P}_{m}$ respectively satisfying \eqref{c1} and $\rho_{\infty}$ be the unique limit density in Theorem~\ref{tAEbis}. Then, after the extraction of
subsequences, $\nabla\Phi_{1/m}$ converges for all $T>0$ strongly
in $L^2(Q_{T})$ as $m\to\infty$ to the limit
$\nabla\mathcal{N}\ast\rho_{\infty}$, while $\varrho_m$ and $\varrho_{m}
\mathrm{P}_{m}$ converge weakly for all $T>0$ in $L^2(Q_T)$ as $m\to\infty$ to
limits $\varrho_{\infty}\in L^\infty(0,T;L^1(\mathbb{R}^n))\cap L^\infty(Q_T)$ and $\mathrm{P}_{\infty}\in L^2(0,T;H^1(\mathbb{R}^n))$ respectively.
Therefore, the following Hele-Shaw limit system for $(\mathrm{P}_\infty,\varrho_\infty)$ holds:
\begin{align}
&\partial_{t}\varrho_{\infty}=\Delta
\mathrm{P}_{\infty}+\nabla\cdot(\varrho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty}),
&&\text{in } \mathcal{D}'(Q_{T}),\label{w3}\\
&(1-\varrho_{\infty})\mathrm{P}_{\infty}=0,&&\text{a.e.
in } Q_{T},\label{w4}\\
&0\leq\varrho_{\infty}\leq1,&&\text{a.e. in }Q_{T}.\label{w6}
\end{align}
\end{theorem}
\begin{proof}We omit the detailed proof of this theorem, because its proof is
similar to, but easier than, that of Theorems~\ref{tAE}--\ref{tAEbis}.
\end{proof}
It is easy to prove that the Hele-Shaw limit systems \eqref{z6}--\eqref{z8} and
\eqref{w3}--\eqref{w6} have the same solutions under the same initial assumptions. In
other words, once we obtain a uniform support of $\varrho_{m}$ and $\mathrm{P}_{m}$,
we naturally obtain the supports of $\rho_{\infty}$ and $P_{\infty}$.
\begin{lemma}
Let $(\rho_{\infty},P_{\infty})$ be as in Theorem \ref{tAEbis} and
$(\varrho_{\infty},\mathrm{P}_{\infty})$ as in Theorem \ref{t6} with the initial
assumption $\rho_{\infty}(x,0)=\varrho_{\infty}(x,0)\in \dot{H}^{-1}(\mathbb{R}^n)$. Then it follows that
\begin{equation*}
\rho_{\infty}=\varrho_{\infty},\quad P_\infty=\mathrm{P}_\infty,\quad \text{a.e. }(x,t)\in Q.
\end{equation*}
\end{lemma}
\begin{proof}
The proof of this lemma is similar to, but easier than, the proof of
Proposition~\ref{p1}; hence, we omit the details.
\end{proof}
\begin{lemma}
Let $\varrho_m$ be a nonnegative weak solution to Eq.~\eqref{w1} for any
continuous, compactly supported initial data $\varrho_{m,0}$. Then the pressure
variable $\mathrm{P}_m$ is a viscosity solution to \eqref{w2}.
\end{lemma}
\begin{proof}
The result follows from \cite[Corollary 3.11]{LeiKim}.
\end{proof}
We now turn to the $L^\infty$ estimate and the support of the solutions to
\eqref{w1}--\eqref{w2}, which are uniform in $m$. The first ensures that if the initial data
is bounded uniformly in $m$, it remains uniformly bounded within any finite time. The second
ensures that if the support of the initial data is bounded uniformly in $m$, it likewise
remains uniformly bounded within any finite time.
\begin{lemma}
[$L^\infty$ estimate and support of $\mathrm{P}_m$] $\label{l17}$Let
$\mathrm{P}_{m}$ be a viscosity
solution to Eq.~\eqref{w2} with continuous, compactly supported initial data
$\mathrm{P}_{m}(\cdot,0)$. Suppose that there exists $\mathcal{R}_{0}\geq 1$
sufficiently large so that $ {\rm supp}(\mathrm{P}_{m}(\cdot,0))\subseteq
B_{\mathcal{R}_0/2}$ and $\mathrm{P}_{m}(\cdot,0)\leq \frac{\mathcal{R}_{0}^2}{4n}$.
Then, with $\mathcal{R}(t):=\big(\mathcal{R}_{0}+n\|\nabla\mathcal{N}\ast\rho_\infty\|_{L^\infty(Q)}\big)e^{\frac{t}{n}}-n\|\nabla\mathcal{N}\ast\rho_\infty\|_{L^\infty(Q)}$, the following hold:
\begin{itemize}
\item $\{\varrho_m(\cdot,t)>0\}\subseteq B_{\mathcal{R}(t)}$, for all
$t\in[0,\infty)$,
\item $\mathrm{P}_{m}(x,t)\leq \frac{\mathcal{R}^2(t)}{2n}$, for all
$(x,t)\in Q$.
\end{itemize}
\end{lemma}
\begin{proof}
The result follows from \cite[Lemma 3.8]{CKY_2018}.
\end{proof}
\medskip
When the initial density $\rho_{\infty,0}$ satisfies $0\leq \rho_{\infty,0}\leq 1$ and is compactly supported,
we show that the solution $(\rho_\infty, P_{\infty})$ to \eqref{z6}--\eqref{z8} is bounded and compactly supported for all
times.
\begin{theorem}[Compact support]\label{t5}
Suppose that there exists $\mathcal{R}_0$ sufficiently large such that both
$ {\rm supp}(\rho_{\infty}(\cdot,0))\subseteq B_{\mathcal{R}_0/2}$ and $1+\frac{n}{n-2}\leq \frac{\mathcal{R}_0^2}{4n}$, and also both
$\rho_{\infty}(\cdot,0)\in L^1_+(\mathbb{R}^n)$ and $0\leq
\rho_{\infty}(\cdot,0)\leq 1$ hold. Then, for all $t>0$, the solution $(\rho_\infty ,P_\infty)$ to the Cauchy problem for the
Hele-Shaw limit system \eqref{z6}--\eqref{z8} satisfies
\begin{itemize}
\item $\{\rho_{\infty}(\cdot,t)>0\}\subseteq B_{\mathcal{R}(t)} $,
\item $P_{\infty}(\cdot ,t)\leq \frac{\mathcal{R}^2(t)}{2n}$,
\end{itemize}
where $\mathcal{R}(t)$ is given in Lemma~\ref{l17}.
\end{theorem}
\begin{proof}
Set $\rho_{m,0}=\rho_{\infty,0}$ for $m\geq 2n-1$. Then $P_{m,0}=\frac{m}{m-1}\rho_{m,0}^{m-1}\leq 1+\frac{n}{n-2}\leq \frac{\mathcal{R}_0^2}{4n}$, and Lemma~\ref{l17} completes the proof of Theorem~\ref{t5}.
\end{proof}
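For the reader's convenience, the initial pressure bound used in the proof can be verified directly: since $0\leq\rho_{m,0}\leq1$, $n\geq3$, and $m\geq 2n-1$,
\begin{equation*}
P_{m,0}=\frac{m}{m-1}\rho_{m,0}^{m-1}\leq \frac{m}{m-1}\leq \frac{2n-1}{2n-2}=1+\frac{1}{2n-2}\leq 1+\frac{n}{n-2}.
\end{equation*}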
Finally, we establish the limit energy functional for the Cauchy problem of the Hele-Shaw problem \eqref{z6}--\eqref{z15}.
For the PKS model Eq.~\eqref{d1} with the diffusion exponent $1<m<\infty$, the
free energy $F_m(\rho_m)$ satisfies the dissipation identity
\begin{equation*}
\frac{dF_m(\rho_m)}{dt}+\int_{\mathbb{R}^n}\rho_{m}|\nabla
P_m+\nabla\mathcal{N}\ast\rho_{m}|^2dx=0.
\end{equation*}
The above identity shows that the free energy decreases as time increases.
Formally, as $m\to\infty$, the limit free energy is given by
\begin{equation*}
F_{\infty}(\rho_{\infty})=\frac{1}{2}\int_{\mathbb{R}^n}\rho_{\infty}\mathcal{N}\ast\rho_{\infty}dx,\
\quad 0\leq \rho_\infty\leq 1,
\end{equation*}
in which the diffusive effect is replaced by the height constraint of the limit
density,
and the limit energy functional is expressed as
\begin{equation}\label{w7}
\frac{dF_{\infty}(\rho_{\infty}(t))}{dt}+\int_{\mathbb{R}^n}\rho_{\infty}(t)|\nabla
P_\infty(t)+\nabla\mathcal{N}\ast\rho_{\infty}(t)|^2dx=0,\quad 0\leq \rho_\infty\leq 1.
\end{equation}
In the following theorem, we show that the limit energy functional \eqref{w7}
holds.
\begin{theorem}[Limit energy functional]$\label{t4}$
Under the initial assumptions \eqref{c1}--\eqref{c4} and the uniform support assumption for the initial density~\eqref{c5}, let $\rho_{\infty}$ and
$P_{\infty}$ be the limit density and the limit pressure respectively as in
Theorems~\ref{tAE}--\ref{t15}. Then, for a.e. $t\in[0,\infty)$, the
limit energy functional \eqref{w7} holds.
\end{theorem}
\begin{proof}
Under the initial assumptions \eqref{c1}--\eqref{c4} and the additional initial uniform support assumption~\eqref{c5}, due to Theorem~\ref{t5}, it holds
\begin{equation*}
{\rm supp}(P_\infty(t))\subset {\rm supp}(\rho_\infty(t))\subset B_{\mathcal{R}(t)} ,
\end{equation*}
for $\mathcal{R}(t):=\big(\mathcal{R}_{0}+n\|\nabla\mathcal{N}\ast\rho_\infty\|_{L^\infty(Q)}\big)e^{\frac{t}{n}}-n\|\nabla\mathcal{N}\ast\rho_\infty\|_{L^\infty(Q)}$ as in Lemma~\ref{l17}, with some $\mathcal{R}_0\geq R_0>0$.
From Theorems~\ref{tAE}--\ref{t15}, for any $T>0$, we have
\begin{equation}\label{fe1}
\rho_\infty\in L^\infty\big(0,T;L^1_+(\mathbb{R}^n)\big)\cap L^\infty(Q_T),\quad P_\infty\in L^2\big(0,T;H^1(\mathbb{R}^n)\big),\quad \nabla P_\infty\in L^3(Q_T),
\end{equation}
\begin{equation}\label{fe2}
\mathcal{N}\ast\rho_{\infty}\in\mathcal{C}(0,T;\dot{W}^{1,r}(\mathbb{R}^n))\cap L^\infty(0,T;\dot{W}^{2,s}(\mathbb{R}^n)) \text{ for }2\leq r\leq\infty \text{ and } 1<s<\infty.
\end{equation}Furthermore, we obtain
\begin{equation}\label{fe3}
\Delta P_\infty\in L^2(0,T;\dot{H}^{-1}(\mathbb{R}^n)\cap \dot{H}^{-2}(\mathbb{R}^n)),\quad \partial_t\rho_\infty \in L^2(0,T;\dot{H}^{-1}(\mathbb{R}^n)).
\end{equation}
Thanks to the complementarity relation \eqref{z15} and Eq.~\eqref{z8}, integrating \eqref{z15} over $\mathbb{R}^n$ and integrating by
parts, it follows from the regularities \eqref{fe1}--\eqref{fe3} that
\begin{equation}\label{w9b}
\int_{\mathbb{R}^n}P_{\infty}(\Delta
P_{\infty}+\rho_{\infty})dx=\int_{\mathbb{R}^n}\big(P_{\infty}\rho_{\infty}-|\nabla
P_{\infty}|^2\big)dx=0.
\end{equation}
Therefore, it follows
\begin{equation}\label{w8}
\begin{aligned}
\int_{\mathbb{R}^n}\rho_{\infty}|\nabla P_\infty+\nabla\mathcal{N}\ast\rho_{\infty}|^2dx
=&\int_{\mathbb{R}^n}\rho_{\infty}(|\nabla P_{\infty}|^2+2\nabla
P_{\infty}\cdot\nabla\mathcal{N}\ast\rho_{\infty}+|\nabla\mathcal{N}\ast\rho_{\infty}|^2)dx\\
=&\int_{\mathbb{R}^n}(|\nabla
P_{\infty}|^2-2P_{\infty}\rho_{\infty}+\rho_{\infty}|\nabla\mathcal{N}\ast\rho_{\infty}|^2)dx\\
=&\int_{\mathbb{R}^n}(-\rho_{\infty}P_{\infty}+\rho_{\infty}|\nabla\mathcal{N}\ast\rho_{\infty}|^2)dx,
\end{aligned}
\end{equation}
where \eqref{w9b} is used in the last equality. Multiplying \eqref{z6} by
$\mathcal{N}\ast\rho_{\infty}$, integrating over $\mathbb{R}^n$, and using
the symmetry of the convolution operator, we have
\begin{equation}\label{fi}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^n}\rho_{\infty}\mathcal{N}\ast\rho_{\infty}dx-\int_{\mathbb{R}^n}\big(\Delta
P_\infty+\nabla\cdot(\rho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty})\big)\mathcal{N}\ast\rho_{\infty}dx=0.
\end{equation}
Integrating by parts and using \eqref{w8}, it follows from \eqref{fe1}--\eqref{fe3} that
\begin{equation}\label{f2}
\begin{aligned}
-\int_{\mathbb{R}^n}\big(\Delta
P_\infty+\nabla\cdot(\rho_{\infty}\nabla\mathcal{N}\ast\rho_{\infty})\big)\mathcal{N}\ast\rho_{\infty}dx
=&\int_{\mathbb{R}^n}(-P_\infty\rho_{\infty}+\rho_{\infty}|\nabla\mathcal{N}\ast\rho_{\infty}|^2)dx\\
=&\int_{\mathbb{R}^n}\rho_{\infty}|\nabla
P_\infty+\nabla\mathcal{N}\ast\rho_{\infty}|^2dx.
\end{aligned}
\end{equation}
Combining \eqref{fi} and \eqref{f2}, for almost every time $t$, we obtain
the limit energy functional
\begin{equation*}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^n}\rho_{\infty}\mathcal{N}\ast\rho_{\infty}dx+\int_{\mathbb{R}^n}\rho_{\infty}|\nabla
P_\infty +\nabla\mathcal{N}\ast\rho_{\infty}|^2dx=0,\quad 0\leq \rho_\infty\leq
1.
\end{equation*}
\end{proof}
\begin{remark}
The result of Theorem~\ref{t4} implies that the limit free energy
$F_{\infty}(\rho_{\infty}(t))$ is non-increasing as time $t$ increases.
\end{remark}
\section{Incompressible limit of stationary state}
This section is devoted to showing that the incompressible (Hele-Shaw) limit of the stationary state of the Patlak-Keller-Segel (SPKS) model Eq.~\eqref{SPKS} is the stationary state Eq.~\eqref{HSSPKS} of the Hele-Shaw problem Eqs.~\eqref{hs}--\eqref{d5}. By direct computation, the equation for the corresponding pressure $P_{m,s}=\frac{m}{m-1}\rho_{m,s}^{m-1}$ reads
\begin{equation}\label{e14}
(m-1)P_{m,s}(\Delta P_{m,s}+\rho_{m,s})+\nabla P_{m,s}\cdot(\nabla P_{m,s}+\nabla\mathcal{N}\ast\rho_{m,s})=0\quad\text{for } x\in\mathbb{R}^n.
\end{equation}
The following preliminary lemma collects results from \cite{r18,rr10,rr12}.
\begin{lemma}[Preliminary lemma]\label{ls1}
Assume that $\rho_{m,s}\in L^1(\mathbb{R}^n)\cap L^{\infty}(\mathbb{R}^n)$. Then a solution to the SPKS model Eq.~\eqref{SPKS} exists. Moreover, it is radially symmetric and decreasing, unique up to translation, and compactly supported.
\end{lemma}
In the rest of this section, we carry out the incompressible limit of the stationary state of the PKS (SPKS) model Eq.~\eqref{SPKS} in the framework of radial symmetry, and $C$ denotes a positive constant independent of the exponent $m$.
For any given mass $M>0$, we show that the solution to the SPKS model Eq.~\eqref{SPKS} is bounded uniformly in $m$.
\begin{lemma}[Uniform bound of pressure]$\label{ls2}$
Let $\rho_{m,s}$ be a weak solution to the SPKS model Eq.~\eqref{SPKS} in the sense of Def.~\ref{d1} with $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M>0$, $m\geq 3$, and $\int_{\mathbb{R}^n}x\rho_{m,s}(x)dx=0$. Set $\alpha_m:=\rho_{m,s}(0)=\|\rho_{m,s}\|_{L^\infty(\mathbb{R}^n)}$. Then it holds that
\begin{equation*}
\begin{aligned}
&\alpha_{m}\leq \frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2},\quad\alpha_{m}^{m-1}\leq\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
From the SPKS model Eq.~\eqref{SPKS}, we have
\begin{equation*}
\rho_{m,s}(\nabla P_{m,s}+\nabla\mathcal{N}\ast\rho_{m,s})=0.
\end{equation*}
Since $\rho_{m,s}$ is radially symmetric and decreasing, there exists a constant $C>0$ such that
\begin{equation}\label{ff1}
P_{m,s}=(-\mathcal{N}\ast\rho_{m,s}-C)_+\quad\text{for } x\in\mathbb{R}^n
\end{equation}
and \begin{equation}\label{ff2}
C\leq \|-\mathcal{N}\ast\rho_{m,s}\|_{L^{\infty}(\mathbb{R}^n)}.
\end{equation}
Since $\alpha_{m}=\|\rho_{m,s}\|_{L^{\infty}(\mathbb{R}^n)}=\rho_{m,s}(0)$, we have
\begin{equation}\label{ff3}
\begin{aligned}
-\mathcal{N}\ast\rho_{m,s}&=\frac{1}{n(n-2)\omega_n}\int_{\mathbb{R}^n}\frac{\rho_{m,s}(x-y)}{|y|^{n-2}}dy\\
&\leq \frac{1}{n(n-2)\omega_n}\int_{|y|>1}\frac{\rho_{m,s}(x-y)}{|y|^{n-2}}dy+\alpha_m\frac{1}{n(n-2)\omega_n}\int_{|y|\leq 1}\frac{1}{|y|^{n-2}}dy\\
&\leq \frac{1}{n(n-2)\omega_n}M+\frac{\alpha_m}{2(n-2)}\\
&\leq \frac{\alpha_m}{2}+\frac{1}{n(n-2)\omega_n}M.
\end{aligned}
\end{equation}
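Here the integral over $\{|y|\leq1\}$ in \eqref{ff3} is evaluated in polar coordinates:
\begin{equation*}
\frac{1}{n(n-2)\omega_n}\int_{|y|\leq 1}\frac{dy}{|y|^{n-2}}=\frac{n\omega_n}{n(n-2)\omega_n}\int_{0}^{1}r^{n-1}\cdot r^{2-n}dr=\frac{1}{n-2}\int_{0}^{1}r\,dr=\frac{1}{2(n-2)},
\end{equation*}
and $\frac{1}{2(n-2)}\leq\frac{1}{2}$ for $n\geq3$ gives the last line of \eqref{ff3}.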
Combining \eqref{ff1}--\eqref{ff3}, we obtain
\begin{equation}\label{ff4}
\alpha_{m}^{m-1}\leq \frac{m}{m-1}\alpha_{m}^{m-1}\leq \alpha_m+\frac{2M}{n(n-2)\omega_n},
\end{equation}
where $1\leq\frac{m}{m-1}\leq2$ for $m\geq3$ is used.
The positive root $y_2$ of the quadratic equation $y^2=y+\frac{2M}{n(n-2)\omega_n}$ is
\begin{equation*}
y_2=\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}>1.
\end{equation*}
Since $m\geq3$ and $y_2>1$, any $\alpha_m>y_2$ would give $\alpha_m^{m-1}\geq\alpha_m^{2}>\alpha_m+\frac{2M}{n(n-2)\omega_n}$, contradicting \eqref{ff4}. Hence
\begin{equation}\label{ff5}
0\leq \alpha_m\leq y_2\quad\text{for }m\geq 3.
\end{equation}
We combine \eqref{ff4}--\eqref{ff5} and obtain
\begin{equation*}
\alpha_{m}^{m-1}\leq \alpha_m+\frac{2M}{n(n-2)\omega_n}\leq \frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}.
\end{equation*}
\end{proof}
\begin{remark}
It should be pointed out that the conclusion of Lemma~\ref{ls2} still holds under the assumption $\rho_{m,s}\in L^\infty(\mathbb{R}^n)\cap L^1(\mathbb{R}^n)$ alone; the radial symmetry of the solution is not necessary.
\end{remark}
Next, we show a uniform bound on the support of the density, which prevents the mass from escaping to infinity as $m\to\infty$. Let $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M$ and $ {\rm supp}(\rho_{m,s})= B_{R_m(M)} $. We show that there exists a constant $R_{*}(M)$ (depending only on $M$) such that
\begin{equation*}
R_m(M)\leq R_*(M)\text{ for all } m\geq3.
\end{equation*}
Define $\Psi_{m}(r)=\Psi_m(|x|)=\rho_{m,s}^{m-1}(x)$ with $r=|x|$. Then, as introduced in \cite[Lemma 2.1]{rr10}, the SPKS model Eq.~\eqref{SPKS} can be transformed into the
dynamical system
\begin{equation}\label{ff15}
\begin{cases}
\Psi_m''(r)+\frac{n-1}{r}\Psi_m'(r)=-\frac{m-1}{m}\Psi_m^{1/(m-1)}(r)\quad\text{for all }0<r<R_m(M),&\\
\Psi_m(0)=\alpha_m^{m-1}, \quad\Psi_m'(0)=0,&
\end{cases}
\end{equation}
and the following conditions hold:
\begin{equation*}
\begin{cases}
\Psi_m(r)>0,\quad\Psi_m'(r)<0,&\text{on }\big(0,R_m(M)\big),\\
\frac{\Psi_m'(r)}{r}\to -\frac{1}{n}\frac{m-1}{m}\alpha_m,&\text{as }r\to0^+,\\
\Psi_m(r)\to0,&\text{as }r\to R_m(M)_-.
\end{cases}
\end{equation*}
To show the uniform bound of $R_m(M)$, we introduce a planar autonomous system.
Let
\begin{equation*}
u_{m}(r)=-\frac{m}{m-1}\frac{r\Psi_{m}^{1/(m-1)}(r)}{\Psi_m'(r)} \text{ and }v_m(r)=-\frac{r\Psi_m'(r)}{\Psi_m(r)};
\end{equation*}
then, by direct computation, $(u_m,v_m)$ satisfies the planar autonomous system
\begin{equation}\label{ff6}
\begin{cases}
r\frac{du_m}{dr}=u_m(n-u_m-\frac{v_m}{m-1}),&\\
r\frac{dv_m}{dr}=v_m\big(-(n-2)+u_m+v_m\big),&
\end{cases}
\end{equation}
for $r>0$ with the initial data
\begin{equation}\label{ff7}
\lim\limits_{r\to0^+}u_m(r)=n,\quad \lim\limits_{r\to0^+}v_m(r)=0,\text{ and }\lim\limits_{r\to0^+}\frac{v_m(r)}{r^2}=\frac{m-1}{mn}\alpha_{m}^{2-m}.
\end{equation}
The strategy is to find $R_*(M)<\infty$ satisfying
\begin{equation*}
\lim\limits_{r\to R_*(M)_-}v_m(r)=+\infty,
\end{equation*}
which implies that $R_{m}(M)<R_*(M)$ for any $m\geq3$.
\begin{lemma}\label{ls3}
Suppose that $(u_m,v_m)$ is a solution to the initial value problem Eqs.~\eqref{ff6}--\eqref{ff7}, then it holds for $r>0$ that
\begin{equation*}
\begin{aligned}
n< u_m+\frac{v_m}{m-1},\quad 0<u_m<n,\quad\text{and}\quad 0<v_m.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
Lemma~\ref{ls3} is a direct result of \cite[Lemma 2.2]{rr10}.
\end{proof}
\begin{lemma}[Uniform support of density]$\label{ls4}$
Suppose that $(u_m,v_m)$ is a solution to the initial value problem Eqs.~\eqref{ff6}--\eqref{ff7} with a given mass $M>0$. Then, there exists $R_*(M):=\log\Big(1+\exp\Big\{\Big[2n(n-1)\Big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)\Big]^{1/2}\Big\}\Big)$ such that
\begin{equation*}
\lim\limits_{r\to R_*(M)_-}v_m(r)=\infty\quad\text{for all } m\geq3.
\end{equation*}
Furthermore, it holds
\begin{equation*}
{\rm supp}(\rho_{m,s})\subset B_{R_*(M)} \quad\text{for all } m\geq3.
\end{equation*}
\end{lemma}
\begin{proof}
From Lemma~\ref{ls3}, we have
\begin{equation*}
u_m+\frac{v_m}{m-1}>n\quad\text{and}\quad u_m,\ v_m>0,
\end{equation*}
then it holds
\begin{equation*}
u_m+v_m>u_m+\frac{v_m}{m-1}>n.
\end{equation*}
Combining the above inequality with $\eqref{ff6}_2$ and \eqref{ff7}, we obtain
\begin{equation}\label{ff8}
\begin{cases}
r\frac{dv_m}{dr}>2v_m&\text{for } r>0,\\
\lim\limits_{r\to0^+}v_m(r)=0,& \lim\limits_{r\to0^+}\frac{v_m(r)}{r^2}=\frac{m-1}{nm}\alpha_{m}^{2-m}.
\end{cases}
\end{equation}
We consider the auxiliary ordinary differential equation
\begin{equation}\label{ff9}
\begin{cases}
r\frac{dv}{dr}=2v&\text{for }r>0,\\
\lim\limits_{r\to0^+}v(r)=0,& \lim\limits_{r\to0^+}\frac{v(r)}{r^2}=\frac{m-1}{mn}\alpha_{m}^{2-m}.
\end{cases}
\end{equation}
It is easy to obtain the unique solution to Eq.~\eqref{ff9} as
\begin{equation}\label{ff10}
v(r)=\frac{m-1}{mn}\alpha_{m}^{2-m}r^2.
\end{equation}
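Indeed, \eqref{ff10} can be checked directly: writing $c_m:=\frac{m-1}{mn}\alpha_{m}^{2-m}$, the function $v(r)=c_m r^2$ satisfies
\begin{equation*}
r\frac{dv}{dr}=2c_m r^{2}=2v(r)\quad\text{for }r>0,\qquad \lim\limits_{r\to0^+}v(r)=0,\qquad \lim\limits_{r\to0^+}\frac{v(r)}{r^{2}}=c_m.
\end{equation*}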
Since the solution $v(r)$ in \eqref{ff10} to Eq.~\eqref{ff9} is a sub-solution to \eqref{ff8}, we have
\begin{equation}\label{ff11}
v_m(r)\geq v(r)=\frac{m-1}{mn}\alpha_{m}^{2-m}r^2\quad\text{for all }r>0.
\end{equation}
With the help of Lemma~\ref{ls2} for a given mass $M>0$, a uniform lower bound independent of $m$ is obtained as
\begin{equation}\label{fff12}
\begin{aligned}
v_m(r)&\geq \frac{m-1}{mn}\alpha_{m}^{2-m}r^2=\frac{m-1}{mn}(\alpha_{m}^{m-1})^{(2-m)/(m-1)}r^2\\
&\geq \frac{m-1}{mn}\big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\big)^{(2-m)/(m-1)}r^2\\
&\geq \frac{1}{2n}\big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\big)^{-1}r^2\quad\text{for all }m\geq3,
\end{aligned}
\end{equation}
where $\frac{m-1}{m}>\frac{1}{2}$ and $\frac{2-m}{m-1}>-1$ are used.
There exists a positive constant $R'(M)$ (only depending on $M$ and $n$) such that
\begin{equation}\label{fff13}
\frac{1}{2n}\Big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)^{-1}R'^2(M)=n-1,
\end{equation}
and $R'(M)$ can be precisely written like
\begin{equation*}
R'(M)=\Big[2n(n-1)\Big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)\Big]^{1/2}>1.
\end{equation*}
Then it follows from \eqref{ff11}--\eqref{fff13} that
\begin{equation}\label{ff12}
v_{m}(r)\geq n-1\quad\text{for all } m\geq3\text{ and }r\geq R'(M).
\end{equation}
Combining $\eqref{ff6}_2$ and \eqref{ff12}, we have
\begin{equation}\label{fff13b}
r\frac{dv_m}{dr}\geq v_m\Big(v_m-(n-2)\Big)\geq \Big(v_m-(n-2)\Big)^2\quad\text{for all } m\geq3\text{ and }r\geq R'(M).
\end{equation}
Setting $s:=e^{r}$ (so that $r=\log s$) and $S'(M):=e^{R'(M)}$, and writing, with a slight abuse of notation, $v_m(s):=v_m(\log s)$, it follows from \eqref{fff13b} that
\begin{equation*}
\frac{dv_m}{ds}\geq \Big(v_m-(n-2)\Big)^2\quad\text{for all } m\geq3\text{ and }s\geq S'(M).
\end{equation*}
On the other hand, we consider the following ordinary differential equation:
\begin{equation*}\label{fff14}
\begin{cases}
\frac{d\omega}{ds}=\Big(\omega-(n-2)\Big)^2\quad\text{for all }s\geq S'(M),&\\
\omega\big(S'(M)\big)=n-1,&
\end{cases}
\end{equation*}
Hence, it holds
\begin{equation*}
\omega(s)=\frac{\omega\big(S'(M)\big)-(n-2)}{1-(s-S'(M))\Big(\omega\big(S'(M)\big)-(n-2)\Big)}+n-2.
\end{equation*}
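Indeed, separating variables and integrating from $S'(M)$ to $s$ gives
\begin{equation*}
\int_{\omega(S'(M))}^{\omega(s)}\frac{d\omega}{\big(\omega-(n-2)\big)^{2}}=s-S'(M),\quad\text{i.e.,}\quad \frac{1}{\omega\big(S'(M)\big)-(n-2)}-\frac{1}{\omega(s)-(n-2)}=s-S'(M),
\end{equation*}
and solving for $\omega(s)$ yields the expression above.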
We find
\begin{equation*}
\omega(s)\to\infty\quad\text{as }s\to\frac{1+S'(M)\Big(\omega(S'(M))-(n-2) \Big)}{\omega(S'(M))-(n-2)}=1+S'(M):=S_*(M).
\end{equation*}
Since $v_m(s)\geq\omega(s)$ for all $s\geq S'(M)$ and $m\geq 3$ by comparison, we have
\begin{equation*}
v_m(s)\to\infty,\quad\text{as }s\to S_*(M).
\end{equation*}
Set \begin{equation*}
\begin{aligned}
R_*(M)=&\log S_*(M)=\log(1+e^{R'(M)})\\
=&\log\Big(1+\exp\Big\{\Big[2n(n-1)\Big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)\Big]^{1/2}\Big\}\Big),
\end{aligned}
\end{equation*}
then it follows
\begin{equation}\label{ff16}
v_m(r)\to\infty\quad\text{as }r\to R_*(M)_-.
\end{equation}
We need to show $\Psi_m(R_*(M))=0$. Suppose not; then, by the monotonicity of $\Psi_m$, we may assume that
\begin{equation*}
\Psi_m(r)\geq \Psi_m(R_*(M))>0\quad\text{for all }r\in[0,R_*(M)].
\end{equation*}
We multiply Eq.~$\eqref{ff15}_1$ by $r^{n-1}$ and obtain
\begin{equation*}
[r^{n-1}\Psi_m'(r)]'=-\frac{m-1}{m}r^{n-1}\Psi_m^{1/(m-1)}(r).
\end{equation*}
Integrating the above equation on $[0,r]$ for any $r\in(0,R_*(M)]$, it holds
\begin{equation}\label{ff17}
r^{n-1}\Psi_m'(r)=-\frac{m-1}{m}\int_{0}^{r}s^{n-1}\Psi_m^{1/(m-1)}(s)ds.
\end{equation}
It follows from the definition of $\Psi_m$ that
\begin{equation*}
\begin{aligned}
\frac{m-1}{m}\int_{0}^{r}s^{n-1}\Psi_m^{1/(m-1)}(s)ds=&\frac{m-1}{m}\int_{0}^{r}s^{n-1}\rho_{m,s}(s)ds\\
=&\frac{m-1}{nm\omega_n}\int_{B_r}\rho_{m,s}dx\\
\leq& \frac{m-1}{nm\omega_n}M,
\end{aligned}
\end{equation*}
which together with \eqref{ff17} implies that there exists a small $\delta>0$ (possibly depending on $m$) such that
\begin{equation*}
|\Psi_m'(r)|< \infty\quad \text{for all }r\in[\delta,R_*(M)].
\end{equation*}
Therefore, we have
\begin{equation*}
|v_m(r)|=\Big|{-}\frac{r\Psi_m'(r)}{\Psi_m(r)}\Big|<\infty\quad \text{for all }r\in[\delta,R_*(M)],
\end{equation*}
which contradicts \eqref{ff16}. In this way, one can show that
\begin{equation*}
\Psi_m\big(R_*(M)\big)=0,
\end{equation*}
and it holds
\begin{equation*}
{\rm supp}(\rho_{m,s})\subset B_{R_*(M)} \quad\text{for all }m\geq3.
\end{equation*}
\end{proof}
\begin{remark}
It should be emphasized that $R_*(M)=\log\Big(1+\exp\Big\{\Big[2n(n-1)\Big(\frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)\Big]^{1/2}\Big\}\Big)$ is strictly increasing in the mass $M>0$, which is consistent with the geometric intuition that a larger mass means a larger support.
\end{remark}
We now derive regularity estimates for the convolution term. Indeed, to obtain the weak convergence of the nonlinear term $\{\rho_{m,s}\nabla\mathcal{N}\ast\rho_{m,s}\}_{m>1}$, one way is to prove the strong convergence of $\{\nabla\mathcal{N}\ast\rho_{m,s}\}_{m>1}$ and use weak-strong convergence.
\begin{lemma}[Regularity estimate on convolution term]$\label{ls5}$
Let $\rho_{m,s}$ be a weak solution to the SPKS model Eq.~\eqref{SPKS} in the sense of Def.~\ref{d1} with $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M$ and $m\geq3$, then
\begin{align*}
&\|\nabla \mathcal{N}\ast\rho_{m,s}\|_{L^2\cap L^\infty(\mathbb{R}^n)}\leq C,\quad \|\nabla^2 \mathcal{N}\ast\rho_{m,s}\|_{L^p(\mathbb{R}^n)}\leq C(M,p),
\end{align*}
where $C(M,p)\sim\frac{1}{p-1}$ for $0<p-1\ll 1$ and $C(M,p)\sim p$ for $p\gg1$.
Furthermore, thanks to Sobolev's embedding theorem, there exists $\nabla\mathcal{N}\ast\rho_{\infty,s}\in L^2(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$ such that
\begin{equation*}
\nabla \mathcal{N}\ast\rho_{m,s}\to \nabla \mathcal{N}\ast\rho_{\infty,s},\quad\text{strongly in }L^p_{loc}(\mathbb{R}^n)\text{ for }1\leq p<\infty,\quad\text{as }m\to\infty.
\end{equation*}
\end{lemma}
\begin{proof}
With the help of Lemma~\ref{ls2}, we have
\begin{equation}\label{rho}
\|\rho_{m,s}\|_{L^p(\mathbb{R}^n)}\leq \|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}^{1/p}\|\rho_{m,s}\|_{L^{\infty}(\mathbb{R}^n)}^{(p-1)/p}\leq M^{1/p}\alpha_m^{(p-1)/p}\leq C,
\end{equation}
where $\alpha_m\leq \frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}$ from Lemma~\ref{ls2} and $C=\max\{M, \frac{1+\sqrt{1+\frac{8M}{n(n-2)\omega_n}}}{2}\}$.
It similarly follows from \eqref{m43} that
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}\nabla\mathcal{N}\ast\rho_{m,s}\cdot\nabla\mathcal{N}\ast\rho_{m,s}dx\leq C.
\end{aligned}
\end{equation*}
The $L^\infty$ estimate of $\nabla\mathcal{N}\ast\rho_{m,s}$ easily holds:
\begin{equation}\label{bn}
\begin{aligned}
|\nabla\mathcal{N}\ast\rho_{m,s}|
\leq& C\int_{|x-y| \leq1}\frac{\rho_{m,s}(y)}{|x-y|^{n-1}}dy+C\int_{|x-y|>1}\frac{\rho_{m,s}(y)}{|x-y|^{n-1}}dy\\
\leq& C\alpha_m \int_{|x-y|\leq 1}\frac{1}{|x-y|^{n-1}}dy+C\int_{|x-y|>1}\rho_{m,s}dy\\
\leq& C.
\end{aligned}
\end{equation}
Similar to \eqref{m25}, it follows from \eqref{rho} that
\begin{equation*}
\|\nabla^2\mathcal{N}\ast\rho_{m,s}\|_{L^p(\mathbb{R}^n)}\leq C(p)\|\rho_{m,s}\|_{L^p(\mathbb{R}^n)}\leq C(M,p),
\end{equation*}
where $C(p)\sim\frac{1}{p-1}$ for $0<p-1\ll 1$ and $C(p)\sim p$ for $p\gg1$.
Then, thanks to Sobolev's embedding theorem, there exists $\nabla\mathcal{N}\ast\rho_{\infty,s}\in L^2\cap L^\infty(\mathbb{R}^n)$ such that
\begin{equation*}
\nabla \mathcal{N}\ast\rho_{m,s}\to \nabla \mathcal{N}\ast\rho_{\infty,s},\quad\text{strongly in }L^p_{loc}(\mathbb{R}^n)\text{ for }1\leq p<\infty,\quad\text{as }m\to\infty.
\end{equation*}
\end{proof}
In the following, we establish the Aronson-B\'enilan (AB) estimate corresponding to the stationary state, and thus a second-order spatial derivative estimate of the pressure. Similar to the case of the PKS model, we use the notation
\begin{equation}\label{a21}
\omega_{m,s}:=\Delta P_{m,s}+\rho_{m,s}.
\end{equation}
\begin{lemma}[Aronson-B\'enilan estimate]$\label{ls6}$
Let $\rho_{m,s}$ be a weak solution to the SPKS model Eq.~\eqref{SPKS} in the sense of Def.~\ref{d1} with $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M$, then, for all $m\geq 3$,
\begin{equation}\label{mainreult}
\||\omega_{m,s}|_-\|_{L^3(\mathbb{R}^n)}^3\leq C/m,\quad\||\omega_{m,s}|_-\|_{L^1(\mathbb{R}^n)}\leq C/m^{1/3},\quad\|\Delta P_{m,s}\|_{L^1(\mathbb{R}^n)}\leq C.
\end{equation}
\end{lemma}
\begin{proof}
To begin with, we rewrite the SPKS model Eq.~\eqref{SPKS} as
\begin{equation*}\label{a17}
0=\Delta \rho_{m,s}^m+\nabla\cdot(\rho_{m,s}\nabla\mathcal{N}\ast\rho_{m,s})
=\rho_{m,s}\omega_{m,s}+\nabla\rho_{m,s}\cdot(\nabla P_{m,s}+\nabla\mathcal{N}\ast\rho_{m,s}),
\end{equation*}
and the pressure equation Eq.~\eqref{e14} is
\begin{equation*}
(m-1)P_{m,s}\omega_{m,s}+\nabla P_{m,s}\cdot\nabla P_{m,s}+\nabla P_{m,s}\cdot\nabla\mathcal{N}\ast\rho_{m,s}=0.
\end{equation*}
Applying the Laplace operator $\Delta$ to the above equation, we have
\begin{equation*}\label{a18}
\begin{aligned}
(m-1)\Delta(P_{m,s}& \omega_{m,s})+\nabla(\Delta P_{m,s})\cdot(\nabla\mathcal{N}\ast\rho_{m,s}+\nabla P_{m,s})\\
&+\nabla P_{m,s}\cdot\nabla \omega_{m,s}+2\nabla^2P_{m,s}:(\nabla^2P_{m,s}+\nabla^2\mathcal{N}\ast\rho_{m,s})=0.
\end{aligned}
\end{equation*}
Hence, the equation of $\omega_{m,s}$ is as follows
\begin{equation*}
\begin{aligned}
(m-1)\Delta(P_{m,s} &\omega_{m,s})+\nabla\omega_{m,s}\cdot\nabla\mathcal{N}\ast\rho_{m,s}+2\nabla^2P_{m,s}:(\nabla^2P_{m,s}+\nabla^2\mathcal{N}\ast\rho_{m,s})\\
&+\rho_{m,s}\omega_{m,s}+2\nabla P_{m,s}\cdot\nabla\omega_{m,s}=0.
\end{aligned}
\end{equation*}
Then, it follows from Kato's inequality that
\begin{equation}\label{f18}
\begin{aligned}
0\leq& -(m-1)\Delta(P_{m,s}\omega_{m,s})-\nabla\omega_{m,s}\cdot\nabla\mathcal{N}\ast\rho_{m,s}+\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_{m,s})^2\\
&-\rho_{m,s}\omega_{m,s}-2\nabla P_{m,s}\cdot\nabla\omega_{m,s},
\end{aligned}
\end{equation}
where we use the fact
\begin{equation*}
\begin{aligned}
2\nabla^2P_{m,s}:(\nabla^2P_{m,s}+\nabla^2\mathcal{N}\ast\rho_{m,s})
\geq -\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_{m,s})^2.
\end{aligned}
\end{equation*}
Multiplying \eqref{f18} by $|\omega_{m,s}|_-$ and thanks to Kato's inequality, we have
\begin{equation}\label{ff20}
\begin{aligned}
0\leq &(m-1)\Delta(P_{m,s}|\omega_{m,s}|_-)|\omega_{m,s}|_-+\frac{1}{2}\nabla|\omega_{m,s}|_-^2\cdot\nabla\mathcal{N}\ast\rho_{m,s}+\nabla|\omega_{m,s}|_-^2\cdot\nabla P_{m,s}\\
&+\frac{1}{2}(\nabla^2\mathcal{N}\ast\rho_{m,s})^2|\omega_{m,s}|_-+\rho_{m,s}|\omega_{m,s}|_-^2.
\end{aligned}
\end{equation}
Similar to \eqref{OMEGA}, it easily holds
\begin{equation*}
\begin{aligned}
(m-1)\int_{\mathbb{R}^n} &\Delta(P_{m,s}|\omega_{m,s}|_-) |\omega_{m,s}|_-dx= -\frac{1}{2}(m-1)\int_{\mathbb{R}^n} |\omega_{m,s}|_-^3dx
\\
&-\frac{1}{2}(m-1)\int_{\mathbb{R}^n}\rho_{m,s}|\omega_{m,s}|_-^2dx -(m-1)\int_{\mathbb{R}^n}P_{m,s}|\nabla|\omega_{m,s}|_-|^2dx.
\end{aligned}
\end{equation*}
Then, integrating \eqref{ff20} on $\mathbb{R}^n$, we use the above inequality and obtain
\begin{equation}\label{ff}
\begin{aligned}
2(m-1)\int_{\mathbb{R}^n} &P_{m,s}|\nabla|\omega_{m,s}|_-|^2dx+(m-4)\int_{\mathbb{R}^n}\rho_{m,s}|\omega_{m,s}|_-^2dx\\
&+(m-3)\int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx
-\int_{\mathbb{R}^n}(\nabla^2\mathcal{N}\ast\rho_{m,s})^2|\omega_{m,s}|_-dx\leq0.
\end{aligned}
\end{equation}
By Young's inequality, Lemma~\ref{ls2}, and the singular integral theory for Newtonian potential (Lemma~\ref{l12}), we have
\begin{equation}\label{f21}
\begin{aligned}
\int_{\mathbb{R}^n}(\nabla^2\mathcal{N}\ast\rho_{m,s})^2|\omega_{m,s}|_-dx&\leq\sum_{ij}\frac{2n}{3^{3/2}}\int_{\mathbb{R}^n}|\partial_{ij}^2\mathcal{N}\ast\rho_{m,s}|^3dx+\sum_{ij}\frac{1}{n^2}\int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx\\
&\leq \sum_{ij}\frac{2n}{3^{3/2}}C\int_{\mathbb{R}^n}\rho_{m,s}^3dx+\int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx\\
&\leq \int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx+CM\alpha_m^{2} \leq \int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx+C.
\end{aligned}
\end{equation}
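The constants in the first line of \eqref{f21} come from Young's inequality with the exponent pair $(3/2,3)$: for $a,b\geq0$ and $\epsilon>0$,
\begin{equation*}
a^{2}b\leq\frac{2\epsilon}{3}a^{3}+\frac{1}{3\epsilon^{2}}b^{3},
\end{equation*}
applied with $a=|\partial_{ij}^2\mathcal{N}\ast\rho_{m,s}|$, $b=|\omega_{m,s}|_-$, and $\epsilon=n/\sqrt{3}$, so that $\frac{2\epsilon}{3}=\frac{2n}{3^{3/2}}$ and $\frac{1}{3\epsilon^{2}}=\frac{1}{n^2}$.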
Substituting \eqref{f21} into \eqref{ff}, we obtain
\begin{equation*}
2(m-1)\int_{\mathbb{R}^n}P_{m,s}|\nabla|\omega_{m,s}|_-|^2dx+(m-4)\int_{\mathbb{R}^n}\rho_{m,s}|\omega_{m,s}|_-^2dx+(m-4)\int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx
\leq C.
\end{equation*}
Thus, the first estimate of \eqref{mainreult} holds
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}|\omega_{m,s}|_-^3dx\leq C/m.
\end{aligned}
\end{equation*}
From Lemma~\ref{ls4} with a given mass $M>0$, there exists a positive constant $R_*(M)$ (only depending on $M$) such that
\begin{equation*}
{\rm supp}(|\omega_{m,s}|_-)\subset B_{R_*(M)} ,
\end{equation*}
then we have
\begin{equation*}
\int_{\mathbb{R}^n}|\omega_{m,s}|_-dx=\int_{B_{R_*(M)} }|\omega_{m,s}|_-dx\leq |B_{R_*(M)} |^{2/3}\big(\int_{B_{R_*(M)} }|\omega_{m,s}|_-^3dx\big)^{1/3}\leq C/m^{1/3},
\end{equation*}
so the second estimate of \eqref{mainreult} holds. For the last estimate of \eqref{mainreult}, by the triangle inequality and direct computation, we obtain
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}|\Delta P_{m,s}|dx\leq& \int_{\mathbb{R}^n}|\Delta P_{m,s}+\rho_{m,s}|dx+M\\
=&\int_{\mathbb{R}^n}(\Delta P_{m,s}+\rho_{m,s})dx+2\int_{\mathbb{R}^n}|\omega_{m,s}|_-dx+M\\
=&2\int_{\mathbb{R}^n}|\omega_{m,s}|_-dx+2M\\
\leq& C.
\end{aligned}
\end{equation*}
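Here the second equality uses the pointwise identity $|a|=a+2|a|_-$ with $a=\omega_{m,s}=\Delta P_{m,s}+\rho_{m,s}$, together with
\begin{equation*}
\int_{\mathbb{R}^n}(\Delta P_{m,s}+\rho_{m,s})dx=\int_{\mathbb{R}^n}\rho_{m,s}dx=M,
\end{equation*}
since $P_{m,s}$ is compactly supported and hence $\int_{\mathbb{R}^n}\Delta P_{m,s}\,dx=0$.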
The proof is completed.
\end{proof}
Next, we turn to show the $L^\infty$ estimate of the pressure gradient.
\begin{lemma}[$L^\infty$ estimate on pressure gradient]$\label{ls7}$ Let $\rho_{m,s}$ be a weak solution to the SPKS model Eq.~\eqref{SPKS} in the sense of Def.~\ref{d1} with a given mass $M>0$. Then it holds for all $m\geq 3$ that
\begin{equation*}
\|\nabla P_{m,s}\|_{L^\infty\cap L^{1}(\mathbb{R}^n)}\leq C.
\end{equation*}
\end{lemma}
\begin{proof}
Since
\begin{equation*}
\rho_{m,s}\nabla P_{m,s}=-\rho_{m,s}\nabla\mathcal{N}\ast\rho_{m,s},
\end{equation*}
we have
\begin{equation*}
\nabla P_{m,s}=-\nabla\mathcal{N}\ast\rho_{m,s}\quad\text{for }x\in {\rm supp}(P_{m,s}).
\end{equation*}
It follows from Lemma~\ref{ls5} that
\begin{equation*}
|\nabla P_{m,s}|\leq C\quad\text{for }x\in {\rm supp}(P_{m,s}).
\end{equation*}
From Lemmas~\ref{ls1} and \ref{ls4}, there exists $R_{m}(M)\leq R_{*}(M)$ for all $m\geq3$ such that
\begin{equation*}
{\rm supp}(P_{m,s})=B_{R_{m}(M)} \subset B_{R_{*}(M)} ,
\end{equation*}
which together with the radially symmetric property of $P_{m,s}$ (Lemma~\ref{ls1}) means
\begin{equation*}
\|\nabla P_{m,s}\|_{L^{\infty}\cap L^{1}(\mathbb{R}^n)}\leq C.
\end{equation*}
\end{proof}
In the end, with the regularity estimates on $\rho_{m,s},P_{m,s}$, and $\mathcal{N}\ast\rho_{m,s}$, we are going to prove the incompressible (Hele-Shaw) limit of the SPKS model Eq.~\eqref{SPKS}.
\begin{lemma}[Incompressible limit]$\label{ls8}$
Let $\rho_{m,s}$ be a weak solution to the SPKS
Eq.~\eqref{SPKS} in the sense of Def.~\ref{d1} with $\int_{\mathbb{R}^n}x\rho_{m,s}(x)dx=0$, $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M$, and $m\geq3$. Then, after extracting the subsequence, there exist $P_{\infty,s},\nabla P_{\infty,s}\in L^1(\mathbb{R}^n)\cap L^{\infty}(\mathbb{R}^n)$ such that
\begin{align}
&\nabla P_{m,s}\to\nabla P_{\infty,s},\quad\text{strongly in }L^r(\mathbb{R}^n)\text{ for }1\leq r<\infty,&&\text{ as }m\to\infty,\label{k1}\\
&P_{m,s}\to P_{\infty,s},\quad\quad\ \text{ strongly in }L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n),&&\text{ as }m\to\infty.\label{k2}
\end{align}
Furthermore, there exists $\rho_{\infty,s}\in L^1(\mathbb{R}^n)\cap L^{\infty}(\mathbb{R}^n)$ such that
\begin{align}
&\|\rho_{\infty,s}\|_{L^1(\mathbb{R}^n)}=M,&&\int_{\mathbb{R}^n}x\rho_{\infty,s}dx=0,\label{k3}\\
&0\leq \rho_{\infty,s}\leq 1, &&\text{a.e. in }\mathbb{R}^n,\label{k4}\\
&(1-\rho_{\infty,s})P_{\infty,s}=0, &&\text{a.e. in }\mathbb{R}^n,\label{k5}\\
&(1-\rho_{\infty,s})\nabla P_{\infty,s}=0, &&\text{a.e. in }\mathbb{R}^n,\label{k6}\\
&\Delta P_{\infty,s}+\rho_{\infty,s}\geq 0,&&\text{in }\mathcal{D}'(\mathbb{R}^n),\label{k7}\\
&\nabla P_{\infty,s}+\rho_{\infty,s}\nabla \mathcal{N}\ast\rho_{\infty,s}=0,&&\text{in }\mathcal{D}'(\mathbb{R}^n).\label{k8}
\end{align}
Moreover, it holds for $R(M)>0$ satisfying $|B_{R(M)} |_n=M$ that
\begin{equation}\label{sss}
\rho_{\infty,s}=\chi_{\{P_{\infty,s}>0\}}=\chi_{B_{R(M)} }\quad\text{a.e. in }\mathbb{R}^n.
\end{equation}
\end{lemma}
\begin{proof}
Since $\|\nabla P_{m,s}\|_{L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)}\leq C$ (Lemma~\ref{ls7}) and $ {\rm supp}(P_{m,s})\subset B_{R_*(M)} $ (Lemma~\ref{ls4}), then there exist $\nabla P_{\infty,s}\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$ and $ {\rm supp}(P_{\infty,s})\subset B_{R_*(M)} $ such that
\begin{equation}\label{ff33}
\begin{aligned}
\nabla P_{m,s}\rightharpoonup \nabla P_{\infty,s},\quad\text{weakly in }L^r(\mathbb{R}^n)\text{ for }1<r<\infty,\quad\text{as }m\to\infty.
\end{aligned}
\end{equation}
Thanks to $\|\Delta P_{m,s}\|_{L^1(\mathbb{R}^n)}\leq C$ (Lemma~\ref{ls6}) and $ {\rm supp}(P_{m,s})\subset B_{R_*(M)} $ (Lemma~\ref{ls4}), it follows from the compactness criterion in \cite[(21)]{boga} and $P_{m,s}\in W^{1,\infty}_{0}(B_{R_*(M)} )$ that
\begin{equation}\label{ff34}
\nabla P_{m,s}\to \nabla P_{\infty,s},\quad\text{strongly in }L^r(\mathbb{R}^n)\text{ for } 1\leq r<\infty,\quad\text{as }m\to\infty.
\end{equation}
By the Sobolev inequality for the gradient (Theorem~\ref{t8}), we obtain
\begin{equation*}
\|P_{m,s}-P_{\infty,s}\|_{L^\infty(\mathbb{R}^n)}\leq \|\nabla P_{m,s}-\nabla P_{\infty,s}\|_{L^n(\mathbb{R}^n)}\to0,\quad\text{as }m\to\infty.
\end{equation*}
Hence Eqs.~\eqref{k1}--\eqref{k2} hold.
In addition, Eq.~\eqref{k3} follows from $\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}=M$, $\int_{\mathbb{R}^n}x\rho_{m,s}dx=0$, and $ {\rm supp}(\rho_{m,s})\subset B_{R_*(M)} $, which show that as $m\to \infty$
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}^n}\rho_{m,s}dx\to\int_{\mathbb{R}^n}\rho_{\infty,s}dx=M, \qquad
\int_{\mathbb{R}^n}x\rho_{m,s}dx\to\int_{\mathbb{R}^n}x\rho_{\infty,s}dx=0.
\end{aligned}
\end{equation*}
Since
\begin{equation*}
\|\rho_{m,s}\|_{L^r(\mathbb{R}^n)}\leq \|\rho_{m,s}\|_{L^{\infty}(\mathbb{R}^n)}^{(r-1)/r}\|\rho_{m,s}\|_{L^1(\mathbb{R}^n)}^{1/r}\leq \alpha_m^{(r-1)/(r(m-1))} M^{1/r}\leq C^{(r-1)/(r(m-1))} M^{1/r} ,
\end{equation*}
there exists $\rho_{\infty,s}\in L^{r}(\mathbb{R}^n)$ for $1<r<\infty$ such that
\begin{equation*}
\rho_{m,s}\rightharpoonup \rho_{\infty,s},\quad \text{weakly in } L^r(\mathbb{R}^n)\text{ for }1<r<\infty,\quad\text{as }m\to\infty.
\end{equation*}
According to the weak lower semi-continuity of the $L^r$ norm for $1<r<\infty$, we have
\begin{equation*}
\begin{aligned}
\|\rho_{\infty,s}\|_{L^r(\mathbb{R}^n)}
\leq & \liminf\limits_{m\to\infty}\|\rho_{m,s}\|_{L^r(\mathbb{R}^n)}
\leq \liminf\limits_{m\to\infty}(\alpha_m^{m-1})^{(r-1)/(r(m-1))}M^{1/r}\\
\leq &\liminf\limits_{m\to\infty}\Big(\frac{1+\sqrt{1+\frac{4M^2}{n^2(n-2)^2}}}{2}+\frac{2M}{n(n-2)\omega_n}\Big)^{(r-1)/(r(m-1))}M^{1/r}\\
\leq &M^{1/r}.
\end{aligned}
\end{equation*}
Letting $r\to\infty$, we obtain $\|\rho_{\infty,s}\|_{L^\infty(\mathbb{R}^n)}\leq 1$;
since $\rho_{m,s}\geq0$, the weak limit also satisfies $\rho_{\infty,s}\geq0$, which shows Eq.~\eqref{k4}.
\\
Let $\varphi\in C_{0}^{\infty}(\mathbb{R}^n)$ be a test function. By the definition of weak solution~\eqref{def:WS}, we have
\begin{equation}\label{f36}
\int_{\mathbb{R}^n}\rho_{m,s}\nabla P_{m,s}\cdot\nabla\varphi dx+\int_{\mathbb{R}^n}\rho_{m,s}\nabla\mathcal{N}\ast\rho_{m,s}\cdot\nabla\varphi dx=0.
\end{equation}
Passing to the limit in \eqref{f36}, we obtain
\begin{equation}\label{f37}
\int_{\mathbb{R}^n}\rho_{\infty,s}\nabla P_{\infty,s}\cdot\nabla\varphi dx+\int_{\mathbb{R}^n}\rho_{\infty,s}\nabla\mathcal{N}\ast\rho_{\infty,s}\cdot\nabla\varphi dx=0.
\end{equation}
Since $\rho_{m,s}=(\frac{m-1}{m}P_{m,s})^{1/(m-1)}$, we have
\begin{equation}\label{f38}
\begin{aligned}
&\int_{\mathbb{R}^n}\rho_{\infty,s}P_{\infty,s}\varphi dx=\lim\limits_{m\to\infty}\int_{\mathbb{R}^n}\rho_{m,s}P_{m,s}\varphi dx\\
=&\lim\limits_{m\to\infty}\int_{\mathbb{R}^n}(\frac{m-1}{m})^{1/(m-1)}(P_{m,s})^{m/(m-1)}\varphi dx\\
=&\int_{\mathbb{R}^n}P_{\infty,s}\varphi dx.
\end{aligned}
\end{equation}
Similarly, it holds for $i=1,...,n$ that
\begin{equation}\label{f39}
\begin{aligned}
\int_{\mathbb{R}^n}\rho_{\infty,s}\partial_iP_{\infty,s}\varphi dx=&\lim\limits_{m\to\infty}\int_{\mathbb{R}^n}\rho_{m,s}\partial_iP_{m,s}\varphi dx\\
=&-\lim\limits_{m\to\infty}\int_{\mathbb{R}^n}(\frac{m-1}{m})^{m/(m-1)}P_{m,s}^{m/(m-1)}\partial_i\varphi dx\\
=&-\int_{\mathbb{R}^n}P_{\infty,s}\partial_i\varphi dx
=\int_{\mathbb{R}^n}\partial_i P_{\infty,s}\varphi dx.
\end{aligned}
\end{equation}
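The second step in \eqref{f39} is an integration by parts; for the reader's convenience, we record the chain rule identity behind it, which follows directly from $\rho_{m,s}=(\frac{m-1}{m}P_{m,s})^{1/(m-1)}$:
\begin{equation*}
\rho_{m,s}\partial_i P_{m,s}=\Big(\frac{m-1}{m}\Big)^{1/(m-1)}P_{m,s}^{1/(m-1)}\partial_i P_{m,s}
=\partial_i\Big[\Big(\frac{m-1}{m}\Big)^{m/(m-1)}P_{m,s}^{m/(m-1)}\Big],
\end{equation*}
since $\partial_i\big(P_{m,s}^{m/(m-1)}\big)=\frac{m}{m-1}P_{m,s}^{1/(m-1)}\partial_i P_{m,s}$ and $\frac{1}{m-1}+1=\frac{m}{m-1}$.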
Using $\rho_{\infty,s}, P_{\infty,s},\nabla P_{\infty,s}\in L^{r}(\mathbb{R}^n)$ for $1<r<\infty$, Eqs.~\eqref{k5}--\eqref{k6} follow from \eqref{f38}--\eqref{f39}.
From Lemma~\ref{ls6}, we obtain
\begin{equation*}
\int_{\mathbb{R}^n}(\Delta P_{m,s}+\rho_{m,s})\varphi dx\geq -\int_{\mathbb{R}^n}|\omega_{m,s}|_-\varphi dx\geq -C\||\omega_{m,s}|_-\|_{L^1(\mathbb{R}^n)}\geq -C/m^{1/3},
\end{equation*}
which shows, after taking the limit $m\to\infty$, that Eq.~\eqref{k7} holds for every nonnegative smooth test function:
\begin{equation*}
\int_{\mathbb{R}^n}(\Delta P_{\infty,s}+\rho_{\infty,s})\varphi dx\geq 0.
\end{equation*}
Combining \eqref{f37} and \eqref{f40}, we obtain Eq.~\eqref{k8}:
\begin{equation*}
\nabla P_{\infty,s}+\rho_{\infty,s}\nabla\mathcal{N}\ast\rho_{\infty,s}=0,\quad\text{a.e. in }\mathbb{R}^n.
\end{equation*}
Since $\rho_{m,s}$ is a solution to the SPKS model Eq.~\eqref{SPKS} with $\int_{\mathbb{R}^n}x\rho_{m,s}(x)dx=0$, $\rho_{m,s}$ and $P_{m,s}$ are radially symmetric and decreasing. Therefore, $\rho_{\infty,s}$ and $P_{\infty,s}$ are radially symmetric and decreasing, and $\mathcal{N}\ast\rho_{\infty,s}$ is radially symmetric (Lemma~\ref{ls1}). Since $\rho_{\infty,s}P_{\infty,s}=P_{\infty,s}$, there exist $R_1\leq R_2$ such that
\begin{equation*}
{\rm supp}(P_{\infty,s})=B_{R_1} ,\quad {\rm supp}(\rho_{\infty,s})=B_{R_2} .
\end{equation*}
If $R_1<R_2$, it follows from Lemma~\ref{ls8} that
\begin{equation}\label{pp}
\rho_{\infty,s}(r)\frac{\partial}{\partial r}\mathcal{N}\ast\rho_{\infty,s}(r)=0\quad\text{for }r\in(R_1,R_2).
\end{equation}
However, it follows from \cite[(2.2)]{IY} that for $r>0$
\begin{equation*}\label{nm}
\begin{aligned}
\frac{\partial}{\partial r}\mathcal{N}\ast\rho_{\infty,s}(r)=&\frac{1}{|\partial B_r|_{n-1}}\int_{\partial B_r}|\nabla\mathcal{N}\ast\rho_{\infty,s}|dS_x
=\frac{1}{|\partial B_r |_{n-1}}\int_{\partial B_r}\nabla\mathcal{N}\ast\rho_{\infty,s}\cdot\nu dS_x
\\
=&\frac{1}{|\partial B_r |_{n-1}}\int_{B_r}\Delta \mathcal{N}\ast\rho_{\infty,s} dx
=\frac{1}{|\partial B_r |_{n-1}}\int_{B_r }\rho_{\infty,s} dx>0,
\end{aligned}
\end{equation*}
which contradicts \eqref{pp}. Thus, we have $R_1=R_2$, and the last statement Eq.~\eqref{sss} is proved.
\end{proof}
\section{Conclusion, extensions and perspectives}
In order to prove the incompressible limit of the Patlak-Keller-Segel system, and
establish the corresponding Hele-Shaw free boundary equation, we have followed
the same lines of proof as initiated in~\cite{5}, with the gradient estimate as
in~\cite{r35}. This has the advantage of also proving optimal second order
estimates. A fundamental new ingredient is a uniform $L^1$ estimate on the time
derivative of pressure. With this new estimate, another possible route to
establish the complementarity condition, the hard part of the problem, would be
to use the pure compactness method in \cite{r62,rPrX}. Still another possible
route is through the obstacle problem, see~\cite{GKM} and the references therein.
We have also established uniqueness, finite propagation speed, and limit energy
functional of solutions to this Hele-Shaw type free boundary problem. In addition, we have studied the incompressible limit for the stationary state of the PKS model, which is new for the diffusion-aggregation equations.
We would like to point out that our analysis for the PKS model is compatible with growth terms, as
they appear naturally when dealing with mechanical models of tumor growth, even
though the technical details have to be checked. Also we treated dimension
$n\geq 3$ to avoid technical issues with the Sobolev inequalities but we do not
expect difficulties in two dimensions.
Several papers have treated linear drift terms, see~\cite{r20, r50}. But it is
difficult to extend these cases, or the Newtonian potential, to more general attractive
potentials, because our proof of the time derivative estimate of the pressure strongly
depends on the structure of the Newtonian potential. Among open problems, let us also
mention the convergence rate as $m\to \infty$, which has been obtained in a few
papers, \cite{r108, rDDP}. Finally, the case of systems is only treated without drift, see \cite{r53, r62}. The large time asymptotics of solutions to the Hele-Shaw system \eqref{z6}--\eqref{z8} is an interesting topic.
The 2-dimensional Hele-Shaw model with a patch function as initial density was treated in \cite{CKY_2018},
but the $n$-dimensional ($n\geq3$) case is still largely open. The regularity of the free boundary for a
Hele-Shaw problem of tumor growth was obtained in~\cite{r60}, and for the porous medium equation with an external drift, cf.~\cite{r20}.
\paragraph{Acknowledgements}
The authors would like to thank Noemi David and Markus Schmidtchen for helpful
discussions and comments.
\\
The research of Q. H. and H-L. L. was supported partially by National Natural Science Foundation of China (No.11931010, 11671384 and 11871047), and by the key research project of Academy for Multidisciplinary Studies, Capital Normal
University, and by the Capacity Building for Sci-Tech Innovation-Fundamental
Scientific Research Funds (No.007/20530290068).
\\
B.P. has received funding from the European Research Council (ERC) under the
European Union's Horizon 2020 research and innovation programme (grant agreement
No.740623).
\section{Introduction}
\subsection{Background}
Multiplication operators on Hilbert spaces of analytic functions appear in several
contexts in analysis. In operator theory, they frequently can be used to model
fairly general classes of operators; see for instance \cite{Agler82,SFB+10}. In complex analysis and harmonic analysis,
multiplication operators provide a functional analytic perspective on concrete function theoretic questions; see
for example \cite{Sarason67,SS61}.
With few exceptions, it is often not an easy task to decide which functions are multipliers
of a given Hilbert space of analytic functions.
Function theoretic descriptions of multipliers exist for spaces such as the Dirichlet space \cite{Stegenga80} and
the Drury--Arveson space \cite{ARS08}, but they tend to be difficult to check in practice.
Nevertheless, there is a very general and well known criterion, which we now describe.
Let $X$ be a non-empty set and let $\cH$ be a reproducing kernel Hilbert space on $X$
with reproducing kernel $K$. We will always assume that reproducing
kernels do not vanish on the diagonal, i.e.\ $K(z,z) \neq 0$ for all $z \in X$.
A function $\varphi: X \to \bC$ is said to be a \emph{multiplier of $\cH$} if $\varphi f \in \cH$
for all $f \in \cH$. In this case, the multiplication operator
\begin{equation*}
M_\varphi: \cH \to \cH, \quad f \mapsto \varphi \cdot f,
\end{equation*}
is bounded by the closed graph theorem, and the \emph{multiplier norm} $||\varphi||_{\Mult(\cH)}$ of $\varphi$ is
defined to be the operator norm of $M_\varphi$.
The characterization of multipliers alluded to above says that a function $\varphi: X \to \bC$
is a multiplier of $\cH$ of multiplier norm at most $1$ if and only if the function
\begin{equation*}
X \times X \to \bC, \quad (z,w) \mapsto K(z,w) (1 - \varphi(z) \ol{\varphi(w)}),
\end{equation*}
is positive semi-definite; see \cite[Theorem 5.21]{PR16}. More explicitly,
$||\varphi||_{\Mult(\cH)} \le 1$ if and only if for every $n \in \bN$ and every finite subset $\{z_1,\ldots,z_n\} \subset X$ of $n$ points in $X$, the $n \times n$ matrix
\begin{equation}
\label{eqn:contractive_multiplier}
\big[ K(z_i,z_j) (1 - \varphi(z_i) \ol{\varphi(z_j)}) \big]_{i,j = 1}^n
\end{equation}
is positive semi-definite. In some special cases, it is not necessary to check this condition for sets
of arbitrary size $n$.
For instance, if $\cH$ is the Hardy space
on the unit disc, then the multipliers of $\cH$ of norm at most $1$ are precisely the analytic
functions on $\bD$ which are bounded in modulus by $1$.
Hence, if we assume a priori that $\varphi$ is analytic, then
it suffices to test positivity of the matrices
in Equation \eqref{eqn:contractive_multiplier}
for all singleton sets, i.e.\ $n=1$ suffices. Without the analyticity
assumption, a theorem of Hindmarsh \cite{Hindmarsh68} shows that sets of size $n=3$ suffice.
\subsection{\texorpdfstring{$n$-point multiplier norms}{n-point multiplier norms}}
We ask if similar phenomena occur in other classical reproducing kernel Hilbert spaces.
To study this question, we introduce some terminology. We will also
consider operator valued multipliers. If $\cE$ is an auxiliary Hilbert space, we say
that a function $L: X \times X \to \cB(\cE)$ is \emph{$n$-point positive}
if for every collection $x_1,\ldots,x_n$ of $n$ (not necessarily distinct) points in $X$, the
$n \times n$ matrix $[L(x_i,x_j)]_{i,j=1}^n$ is a positive operator in $\cB(\cE^n)$.
Thus, $L$ is positive if and only if it is $n$-point positive
for all $n \in \bN$.
We furthermore define
the \emph{$n$-point multiplier norm}
of a function $\Phi: X \to \cB(\cE)$ to be
\begin{equation*}
||\Phi||_{\Mult(\cH),n} = \inf \{ C \ge 0 : K(z,w)(C^2 - \Phi(z) \Phi(w)^*) \text{ is $n$-point positive} \},
\end{equation*}
which is understood as $+ \infty$ if no such $C$ exists.
We also adopt the convention that $\|\Phi\|_{\Mult(\cH)} = + \infty$ if $\Phi$ is not a multiplier.
The assumption $K(z,z) \neq 0$ for all $z \in X$ implies that
\begin{equation*}
\|\Phi\|_{\Mult(\cH),1} = \sup_{z \in X} \|\Phi(z)\|_{\cB(\cE)}.
\end{equation*}
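Indeed, since $K(z,z)=\|K(\cdot,z)\|^2>0$, the $1$-point positivity condition at a fixed $z\in X$ reads
\begin{equation*}
K(z,z)\big(C^2-\Phi(z)\Phi(z)^*\big)\geq 0
\iff \Phi(z)\Phi(z)^*\leq C^2
\iff \|\Phi(z)\|_{\cB(\cE)}\leq C,
\end{equation*}
and taking the infimum over admissible $C$ yields the supremum formula.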
Moreover, it is clear that
\begin{equation*}
\|\Phi\|_{\Mult(\cH),1} \le
\|\Phi\|_{\Mult(\cH),2} \le \ldots
\le \|\Phi\|_{\Mult(\cH)},
\end{equation*}
where the quantities are allowed to be infinity.
We now ask:
\begin{quest}
\label{quest:many_points}
Does there exist $n \in \bN$ such that
\begin{enumerate}
\item
$\|\Phi\|_{\Mult(\cH)} = \|\Phi\|_{\Mult(\cH),n}$ for all functions $\Phi: X \to \cB(\cE)$
and all (finite dimensional) Hilbert spaces $\cE$, or
\item $\|\varphi\|_{\Mult(\cH)} = \|\varphi\|_{\Mult(\cH),n}$ for all functions $\varphi: X \to \bC$, or
\item $\|\varphi\|_{\Mult(\cH)} \le C \|\varphi\|_{\Mult(\cH),n}$ for some constant $C > 0$
and all functions $\varphi: X \to \bC$?
\end{enumerate}
\end{quest}
Clearly, a positive answer to (1) implies a positive answer to (2), which in turn implies
a positive answer to (3).
One might also ask for the seemingly weaker property that there exists $n \in \bN$ so that
$\|\varphi\|_{\Mult(\cH),n} < \infty$ implies $\varphi \in \Mult(\cH)$. However,
one easily checks that
the space of all functions $\varphi: X \to \bC$ with $\| \varphi\|_{\Mult(\cH),n} < \infty$
is a Banach space in the norm $\| \cdot\|_{\Mult(\cH),n}$
(for instance using Equation \eqref{eqn:n-point-alt} below). Hence the closed graph theorem shows
that the seemingly weaker property is in fact equivalent to (3) above.
In Section \ref{sec:123}, we will show that finiteness of $\|\varphi\|_{\Mult(\cH),2}$
implies that $\varphi$ is continuous in an appropriate sense. Moreover,
we generalize Hindmarsh's theorem and prove that if $\cH$ is a space of holomorphic
functions, then finiteness of $\|\varphi\|_{\Mult(\cH),3}$ implies that $\varphi$ is holomorphic.
It is a recurring phenomenon in functional analysis that properties that hold at every matrix level
have strong consequences. This is also the case in Question \ref{quest:many_points}.
We will show that in a fairly general setting of spaces of holomorphic functions on the Euclidean unit
ball $\bB_d \subset \bC^d$, a positive answer to part (1) of Question \ref{quest:many_points}
only happens in the trivial case when the multiplier norm is the supremum norm.
We remark, however, that it is in general not true that $n$-point multiplier norms
are comparable to the supremum norm (see Corollary \ref{cor:dirichlet_two_point} for
the classical Dirichlet space). Instead, we use Arveson's theory of boundary representations
to prove our result addressing part (1) of Question \ref{quest:many_points}.
More precisely, a regular unitarily invariant space is a reproducing kernel Hilbert space on $\bB_d$ with reproducing
kernel of the form
\begin{equation*}
K(z,w) = \sum_{n=0}^\infty a_n \langle z,w \rangle^n,
\end{equation*}
where $a_0 = 1$, $a_n > 0$ for all $n \in \bN$ and $\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$.
Regular unitarily invariant spaces are a frequently studied class of reproducing kernel
Hilbert spaces; see for instance \cite{GHX04,GRS02}. A discussion of this condition
can be found in \cite{BHM17}. Here, we simply mention
that the Hardy space $H^2(\bD)$, the Drury--Arveson space $H^2_d$, the Dirichlet space $\cD$,
the standard weighted Dirichlet spaces $\cD_a$ as well as standard weighted Dirichlet and Bergman
spaces on $\bB_d$ are regular unitarily invariant spaces. Even the somewhat pathological Salas space \cite{AHM+17a}
belongs to this class.
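As an illustration of the last condition, for the kernels $(1-\langle z,w\rangle)^{-a}$ with $a>0$ considered below, the binomial series yields
\begin{equation*}
a_n=\frac{\Gamma(n+a)}{n!\,\Gamma(a)},\qquad \frac{a_n}{a_{n+1}}=\frac{n+1}{n+a}\to 1\quad\text{as }n\to\infty,
\end{equation*}
and clearly $a_0=1$ and $a_n>0$ for all $n$, so these kernels define regular unitarily invariant spaces.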
\begin{thm}
\label{thm:ci_n_point_intro}
Let $d \in \bN$ and let $\cH$ be a regular unitarily invariant space on $\bB_d$.
Then the following are equivalent:
\begin{enumerate}[label=\normalfont{(\roman*)}]
\item There exists $n \in \bN$ so that
$\|\Phi\|_{\Mult(\cH)} = \|\Phi\|_{\Mult(\cH),n}$ for all $\Phi \in \Mult(\cH \otimes \cE)$
and all finite dimensional Hilbert spaces $\cE$.
\item $\Mult(\cH) = H^\infty(\bB_d)$ and $\|\Phi\|_{\Mult(\cH \otimes \cE)} = \sup_{z \in \bB_d} \|\Phi(z)\|$
for all $\Phi \in \Mult(\cH \otimes \cE)$ and all finite dimensional Hilbert spaces $\cE$.
\end{enumerate}
\end{thm}
Since the $1$-point multiplier norm is the supremum norm, (ii) clearly implies that (i) holds with $n=1$.
A stronger version of this result will be proved in Section \ref{sec:ci_subhom}.
There, we also record applications to concrete spaces of functions on the ball.
Regarding the remaining parts of Question \ref{quest:many_points},
we show that for many concrete spaces on the unit disc or on the unit ball, even Part (3)
of Question \ref{quest:many_points} has a negative answer.
For $a \in (0,\infty)$, let $\cD_a(\bB_d)$ be the reproducing kernel Hilbert space on $\bB_d$
with kernel
\begin{equation*}
\frac{1}{(1 - \langle z,w \rangle)^{a}}.
\end{equation*}
We also let $\cD_0(\bB_d)$ be the space corresponding
to the kernel
\begin{equation*}
\frac{1}{\langle z,w \rangle} \log \Big( \frac{1}{1 - \langle z,w \rangle} \Big).
\end{equation*}
Equivalently, for $a > 0$, we have
\begin{equation*}
\mathcal{D}_a(\mathbb{B}_d) = \Big\{ f = \sum_{\alpha \in \mathbb{N}_0^d} \widehat{f}(\alpha) z^\alpha \in \mathcal{O}(\mathbb{B}_d): \|f\|^2
= \sum_{\alpha \in \mathbb{N}_0^d} \frac{\alpha! \Gamma(a)}{\Gamma(a+|\alpha|)} |\widehat{f}(\alpha)|^2 < \infty \Big\}
\end{equation*}
and
\begin{equation*}
\mathcal{D}_0(\mathbb{B}_d) = \Big\{ f = \sum_{\alpha \in \mathbb{N}_0^d} \widehat{f}(\alpha) z^\alpha \in \mathcal{O}(\mathbb{B}_d): \|f\|^2
= \sum_{\alpha \in \mathbb{N}_0^d} (|\alpha| + 1) \frac{\alpha!}{|\alpha|!}|\widehat{f}(\alpha)|^2 < \infty \Big\};
\end{equation*}
the equivalence can be seen by expanding the reproducing kernel in a power series;
see for instance the proof of Theorem 41 in \cite{ZZ08}.
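For instance, for $\cD_0(\bB_d)$, expanding the logarithm gives
\begin{equation*}
\frac{1}{\langle z,w \rangle} \log \Big( \frac{1}{1 - \langle z,w \rangle} \Big)=\sum_{n=0}^\infty \frac{\langle z,w\rangle^n}{n+1},
\end{equation*}
and since $\langle z,w\rangle^n=\sum_{|\alpha|=n}\frac{n!}{\alpha!}z^\alpha\ol{w}^\alpha$, the monomials are orthogonal with $\|z^\alpha\|^2=(|\alpha|+1)\frac{\alpha!}{|\alpha|!}$, in accordance with the norm formula above.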
Then $\cD_1(\bD) = H^2$ is the Hardy space, $\cD_0(\bD)= \cD$ is the classical Dirichlet space and the spaces $\cD_a(\bD) = \cD_a$ are standard weighted
Dirichlet spaces on the unit disc for $a \in (0,1)$. Moreover, $\cD_1(\bB_d) = H^2_d$ is the Drury--Arveson space.
\begin{thm}
\label{thm:top_n_point_intro}
Let $d \in \bN$ and let $0 \le a < \frac{d+1}{2}$. Then there do not
exist a constant $C > 0$ and $n \in \bN$ so that
\begin{equation*}
\|\varphi\|_{\Mult(\cD_a(\bB_d))} \le C \| \varphi\|_{\Mult(\cD_a(\bB_d)),n}
\end{equation*}
for all $\varphi \in \Mult(\cD_a(\bB_d))$.
\end{thm}
Notice that this result applies in particular to the standard weighted Dirichlet spaces $\cD_a(\bD)$
for $0 \le a < 1$ and to the Drury--Arveson space $H^2_d$ for $d \ge 2$.
We will obtain Theorem \ref{thm:top_n_point_intro} as a special case of Theorem \ref{thm:top_subhom_intro} below.
In fact, for the spaces $\cD_a(\bD)$, we study the $n$-point multiplier norm in more detail.
We show that the $n$-point multiplier norm on $\cD_a(\mathbb{D})$ is comparable to the supremum norm for $a > 0$;
see Corollary \ref{cor:many_points}.
This argument also provides a direct proof of Theorem \ref{thm:top_n_point_intro}
for these spaces.
For the classical Dirichlet space, the $n$-point multiplier norm turns out to be neither comparable
to the supremum norm nor to the full multiplier norm; see Corollary \ref{cor:dirichlet_two_point} and Proposition
\ref{prop:dirichlet_subhom}.
\subsection{Subhomogeneity}
Question \ref{quest:many_points}, at least when asked for a function $\varphi$ or $\Phi$ which is
a priori assumed to be a multiplier,
can be reformulated in representation theoretic terms.
This reformulation connects Question \ref{quest:many_points} to the property
of subhomogeneity of operator algebras, but it is also useful for the sole purpose of understanding
Question \ref{quest:many_points}, as for instance subhomogeneity passes to subalgebras
and is preserved by isomorphisms.
To explain this connection, recall that $\Mult(\cH)$ is a unital (non-selfadjoint) operator algebra,
via identifying a multiplier with its associated multiplication operator.
If $\cA \subset \cB(\cH)$ is an operator algebra, elements of $M_r(\cA)$
can be regarded as operators on $\cH^r$. This identification makes it possible to endow $M_r(\cA)$
with a norm. In the case of multiplier algebras, $M_r(\Mult(\cH))$ is identified with $\Mult(\cH \otimes \bC^r)$.
If $\cA$ and $\cB$ are operator algebras, a linear map $\pi: \cA \to \cB$ induces
linear maps $\pi^{(r)} : M_r(\cA) \to M_r(\cB)$, defined by applying $\pi$ entrywise.
The linear map $\pi$ is said to be completely contractive if each $\pi^{(r)}$ is contractive,
and completely isometric if each $\pi^{(r)}$ is isometric.
Suppose now that $F = \{z_1,\ldots,z_n\}$ is a finite subset of $X$ and let $\cH \big|_F$
be the restriction of $\cH$ to $F$, i.e.\ the reproducing kernel Hilbert space
on $F$ with reproducing kernel $K \big|_{F \times F}$ (see \cite[Part I, Section 6]{Aronszajn50}).
Then
\begin{equation}
\label{eqn:n-point-alt}
\|\Phi\|_{\Mult(\mathcal{H}),n} = \sup_{|F| \le n} \| \Phi \big|_F \|_{\Mult( (\mathcal{H} |_F) \otimes \mathcal{E})}
\end{equation}
for every function $\Phi: X \to \mathcal{B}(\mathcal{E})$.
Thus, if we consider the unital completely contractive (u.c.c.) homomorphism
\begin{equation*}
\pi_F: \Mult(\cH) \to \Mult(\cH \big|_F), \quad \varphi \mapsto \varphi \big|_F,
\end{equation*}
then for $\Phi \in \Mult(\mathcal{H} \otimes \mathbb{C}^r)$, we have
\begin{equation}
\label{eqn:subhom}
\|\Phi\|_{\Mult(\cH),n} = \sup_{|F| \le n} \| \pi_F^{(r)} (\Phi) \|.
\end{equation}
On the other hand,
\begin{equation*}
\|\Phi\|_{\Mult(\cH)} = \sup_{|F| < \infty} \| \pi_F^{(r)} (\Phi) \|.
\end{equation*}
Note that $\dim(\cH \big|_F) \le |F|$. In particular, we see that $\Mult(\cH)$
is a \emph{residually finite-dimensional} operator algebra, which means that
for every $r \in \bN$ and every $\Phi \in M_r(\Mult(\cH))$,
\begin{equation*}
\|\Phi\| = \sup \{ \|\pi^{(r)} (\Phi) \|: \pi: \Mult(\cH) \to M_n \text{ is a u.c.c.\ homomorphism}, n \in \bN \}.
\end{equation*}
Question \ref{quest:many_points} is then closely related to the question whether in the above supremum,
it suffices to consider representations into $M_n$ for uniformly bounded values of $n$.
The notion of residual finite-dimensionality
was originally defined for $C^*$-algebras and plays a crucial role in this theory;
see for instance \cite{Archbold95} and the references given in the introduction.
Residually finite-dimensional operator algebras of functions were studied in \cite{MP10},
where it was also observed that multiplier algebras are residually finite-dimensional.
For residual finite-dimensionality of more general non-selfadjoint operator algebras,
see \cite{CR18}.
In the theory of $C^*$-algebras, a significant strengthening of the property of residual finite-dimensionality
is the notion of \emph{subhomogeneity}; see for instance \cite[Section IV.1.4]{Blackadar06}.
A unital $C^*$-algebra $\cA$ is said to be $n$-subhomogeneous if every irreducible $*$-representation of $\cA$
acts on a Hilbert space of dimension at most $n$. For $C^*$-algebras,
this is equivalent to saying that for all $a \in \cA$,
\begin{equation*}
\|a\| = \sup \{ \|\pi(a) \| : \pi: \cA \to M_k \text{ is a unital $*$-homomorphism, } k \le n \};
\end{equation*}
see the discussion at the beginning of Section \ref{sec:ci_subhom}.
We therefore make the following definitions.
\begin{defn}
Let $\cA$ be a unital operator algebra and let $n \in \bN$.
\begin{enumerate}
\item $\cA$ is \emph{completely isometrically} $n$-subhomogeneous if for all $r \in \bN$ and all $A \in M_r(\cA)$,
\begin{equation*}
\|A\| = \sup\{ \|\pi^{(r)}(A) \| \},
\end{equation*}
where the supremum is taken over all u.c.c.\ homomorphisms $\pi: \cA \to M_k$ and all $k \le n$.
\item $\cA$ is \emph{isometrically} $n$-subhomogeneous if for all $a \in \cA$,
\begin{equation*}
\|a\| = \sup\{ \|\pi(a) \| \},
\end{equation*}
where the supremum is taken over all unital contractive homomorphisms $\pi: \cA \to M_k$ and all $k \le n$.
\item $\cA$ is \emph{topologically} $n$-subhomogeneous if there exist constants $C_1,C_2 > 0$ so that
for all $a \in \cA$,
\begin{equation*}
\|a\| \le C_1 \sup\{ \|\pi(a) \| \},
\end{equation*}
where the supremum is taken over all unital homomorphisms $\pi: \cA \to M_k$ with $\|\pi\| \le C_2$
and all $k \le n$.
\end{enumerate}
We say that $\cA$ is completely isometrically / isometrically / topologically subhomogeneous
if it is completely isometrically / isometrically / topologically $n$-subhomogeneous
for some $n \in \bN$.
\end{defn}
Thus, Question \ref{quest:many_points} leads us to ask the following
more general question for a reproducing kernel Hilbert space $\cH$.
\begin{quest}
\label{quest:subhomogeneous}
Does there exist $n \in \bN$ so that
\begin{enumerate}
\item $\Mult(\cH)$ is completely isometrically $n$-subhomogeneous, or
\item $\Mult(\cH)$ is isometrically $n$-subhomogeneous, or
\item $\Mult(\cH)$ is topologically $n$-subhomogeneous?
\end{enumerate}
\end{quest}
Equation \eqref{eqn:subhom} shows that for $j=1,2,3$, a positive answer
to part (j) of Question \ref{quest:many_points} implies a positive answer
to part (j) of Question \ref{quest:subhomogeneous} (for the same value of $n$).
We then show the following stronger version of Theorem \ref{thm:ci_n_point_intro}
in Theorem \ref{thm:ci_subhom}.
\begin{thm}
\label{thm:ci_subhom_intro}
Let $\cH$ be a regular unitarily invariant space on $\bB_d$.
Then $\Mult(\cH)$ is completely isometrically subhomogeneous if and only if $\Mult(\cH) = H^\infty(\bB_d)$
completely isometrically.
\end{thm}
The discussion above shows that this result will establish the non-trivial implication (i) $\Rightarrow$ (ii) in Theorem \ref{thm:ci_n_point_intro},
as (i) in Theorem \ref{thm:ci_n_point_intro} in particular implies that $\Mult(\cH)$ is completely isometrically subhomogeneous.
Theorem \ref{thm:top_n_point_intro} also holds in the following stronger sense.
\begin{thm}
\label{thm:top_subhom_intro}
Let $d \in \bN$ and let $0 \le a < \frac{d+1}{2}$. Then $\Mult(\cD_a(\bB_d))$ is not topologically
subhomogeneous.
\end{thm}
Just as Theorem \ref{thm:top_n_point_intro}, this result
applies in particular to the standard weighted Dirichlet spaces $\cD_a(\bD)$
for $0 \le a < 1$ and to the Drury--Arveson space $H^2_d$ for $d \ge 2$.
Theorem \ref{thm:top_subhom_intro} will be proved in Corollary \ref{cor:between_spaces_subhom}.
The cases of $\cD_a(\bD)$ for $0 < a < 1$, $\cD_0(\bD)$ and $H^2_d$ are already contained in
Corollary \ref{cor:not_subhom}, Proposition \ref{prop:dirichlet_subhom}
and Corollary \ref{cor:da_not_subhom},
respectively.
To show Theorem \ref{thm:top_subhom_intro} for the Drury--Arveson space, we establish an embedding result
for multiplier algebras of certain weighted Dirichlet spaces on the unit disc, which
may be of independent interest; see Proposition \ref{prop:mult_embed} for the precise statement. We exhibit two more consequences of the embedding result.
In particular, we show that a certain sufficient condition for membership in the multiplier
algebra of the Drury--Arveson space of \cite{AHM+17c} is not necessary.
This was also proved, using different techniques, in a recent paper of Fang and Xia \cite{FX20}.
The remainder of this paper is organized as follows. In Section \ref{sec:123}, we study the $n$-point multiplier norm
of functions for small values of $n$. In particular, we establish a generalization of Hindmarsh's theorem
for spaces of holomorphic functions. Section \ref{sec:n-point} contains the result that the $n$-point multiplier
norm on $\cD_a$ is comparable to the supremum norm for $a \in (0,1)$, but not for the classical Dirichlet space.
In Section \ref{sec:ci_subhom}, we establish our results on completely isometric subhomogeneity of multiplier algebras.
Section \ref{sec:top_sub_disc} addresses topological subhomogeneity of multiplier algebras on the unit disc.
In Section \ref{sec:subhom_ball}, we deduce from our results on the unit disc results
about topological subhomogeneity
of multiplier algebras on the unit ball. Section \ref{sec:embedding} contains the embedding of multiplier
algebras of weighted Dirichlet spaces into the multiplier algebra of the Drury--Arveson space, from which
we deduce that the multiplier algebra of the Drury--Arveson space is not topologically subhomogeneous.
In Section \ref{sec:embedding_applications}, we give two more applications of our embedding result established in Section \ref{sec:embedding}. Section \ref{sec:spaces_between} deals with topological subhomogeneity
of multiplier algebras of spaces between the Drury--Arveson space and the Hardy space.
Finally, we close the article in Section \ref{sec:questions} with some questions.
We make one final remark regarding notation. If $f,g$ are two functions taking non-negative
real values, we write $f \lesssim g$ if there exists a constant $C > 0$ so that $f \le C g$.
If $f \lesssim g$ and $g \lesssim f$, we write $f \approx g$. Finally, the notation
$f \sim g$ as $x \to x_0$ means that $\lim_{x \to x_0} \frac{f(x)}{g(x)} = 1$.
\subsection*{Acknowledgements} The second named author is grateful for valuable
discussions with Ken Davidson regarding Proposition \ref{prop:mult_embed}. He also
thanks J\"org Eschmeier for asking two questions that led to the results in Section \ref{sec:embedding_applications}.
\section{Boundedness, continuity and analyticity}
\label{sec:123}
In this section, we study the $n$-point multiplier norm of a function
for small values of $n$. Throughout, $\cH$
denotes a reproducing kernel Hilbert space on a set $X$ with kernel $K$
satisfying $K(z,z) \neq 0$ for all $z \in X$.
As mentioned in the introduction,
\begin{equation*}
||\varphi||_{\Mult(\cH),1} = \sup_{z \in X} |\varphi(z)|
\end{equation*}
for all functions $\varphi: X \to \bC$.
The $2$-point norm can be understood in terms of a pseudo-metric induced
by $\cH$, which is for example studied in \cite{ARS+11}.
This pseudo-metric is defined by
\begin{equation*}
\delta_{\cH}(z,w) = \Big( 1- \frac{|K(z,w)|^2}{K(z,z)K(w,w)} \Big)^{1/2}
\end{equation*}
for $z,w \in X$; see \cite[Lemma 9.9]{AM02} for a proof of the triangle inequality.
Recall that the Hardy space $H^2$ is the reproducing kernel Hilbert space on $\mathbb{D}$ whose
reproducing kernel is the Szeg\H{o} kernel $S(z,w) = \frac{1}{1 - z \overline{w}}$.
In this case, $\delta_{H^2}$ is the classical pseudohyperbolic metric $\delta$
on $\bD$, that is,
\begin{equation*}
\delta_{H^2}(z,w) = \delta(z,w) = \Big| \frac{z - w}{1 - \ol{w} z} \Big|;
\end{equation*}
see \cite[Equation 9.8]{AM02}.
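The identity $\delta_{H^2}=\delta$ can also be checked by a direct computation: using $|1-z\ol{w}|^2-(1-|z|^2)(1-|w|^2)=|z-w|^2$, we find
\begin{equation*}
\delta_{H^2}(z,w)^2=1-\frac{|S(z,w)|^2}{S(z,z)S(w,w)}=1-\frac{(1-|z|^2)(1-|w|^2)}{|1-z\ol{w}|^2}=\Big|\frac{z-w}{1-\ol{w}z}\Big|^2.
\end{equation*}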
We now show that the $2$-point multiplier norm is closely related to the metric $\delta_{\cH}$.
We say that $K$ is non-vanishing if $K(z,w) \neq 0$ for all choices of $z,w$.
\begin{prop}
Let $\cH$ be a reproducing kernel Hilbert space on $X$ with non-vanishing kernel $K$.
Let $\varphi: X \to \bC$ be a function. Then
\begin{equation*}
||\varphi||_{\Mult(\cH),2} \le 1
\end{equation*}
if and only if $\varphi$ is constant of modulus at most $1$ or $\varphi$ maps $X$ into $\bD$
and satisfies
\begin{equation*}
\delta(\varphi(z),\varphi(w)) \le \delta_{\cH}(z,w)
\end{equation*}
for all $z, w \in X$.
\end{prop}
\begin{proof}
Let $z,w \in X$. By Sylvester's criterion, the matrix
\begin{equation*}
\begin{bmatrix}
K(z,z) ( 1 - |\varphi(z)|^2) & K(z,w) (1 - \varphi(z) \ol{\varphi(w)}) \\
K(w,z) ( 1 - \varphi(w) \ol{\varphi(z)}) & K(w,w) (1 - |\varphi(w)|^2)
\end{bmatrix}
\end{equation*}
is positive if and only if $|\varphi(z)| \le 1$ and $|\varphi(w)| \le 1$
and its determinant is non-negative. If $\varphi(z),\varphi(w) \in \bD$,
then the determinant of this matrix
is non-negative if and only if
\begin{equation*}
\frac{|K(z,w)|^2}{K(z,z)K(w,w)} \le \frac{(1- |\varphi(z)|^2)(1 - |\varphi(w)|^2)}{
|1 - \varphi(z) \ol{\varphi(w)}|^2},
\end{equation*}
and since $\delta_{H^2} = \delta$, this last inequality is equivalent to
\begin{equation*}
\delta(\varphi(z),\varphi(w)) \le \delta_{\cH}(z,w).
\end{equation*}
This shows that if $\varphi$ maps $X$ into $\bD$ and is a $\delta_{\cH}-\delta$-contraction,
then $||\varphi||_{\Mult(\cH),2} \le 1$. Clearly, constant functions $\varphi$
of modulus at most $1$ also have $2$-point norm at most $1$.
Conversely, if $\varphi$ has $2$-point norm at most $1$ and is not constant,
then since $K$ does not vanish, \cite[Lemma 2.2]{AHM+17a}
implies that $\varphi$ maps $X$ into $\bD$. By the first paragraph,
$\varphi$ is a $\delta_{\cH}-\delta$-contraction.
\end{proof}
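Both equivalent conditions of the proposition can be illustrated numerically for the Hardy space and the contractive multiplier $\varphi(z) = z^2$ (an arbitrary sample choice): the $2 \times 2$ Pick matrix is positive semidefinite, and the pseudohyperbolic contraction holds by the Schwarz--Pick lemma. The sketch below assumes NumPy.

```python
import numpy as np

# Illustrative check: for the Szego kernel and phi(z) = z^2 (a contractive
# multiplier of H^2), the 2x2 Pick matrix is positive semidefinite and
# delta(phi(z), phi(w)) <= delta(z, w) holds.
K = lambda z, w: 1 / (1 - z * np.conjugate(w))
phi = lambda z: z ** 2
delta = lambda z, w: abs((z - w) / (1 - np.conjugate(w) * z))

pts = [0.4 + 0.1j, -0.2 + 0.3j]
P = np.array([[K(z, w) * (1 - phi(z) * np.conjugate(phi(w))) for w in pts]
              for z in pts])
print(np.min(np.linalg.eigvalsh(P)) >= -1e-12)            # Pick matrix is PSD
print(delta(phi(pts[0]), phi(pts[1])) <= delta(pts[0], pts[1]))
```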
Next, we study the $3$-point multiplier norm.
If $\cH = H^2$, the Hardy space on the disc,
then every function $\varphi: \bD \to \bC$ with $||\varphi||_{\Mult(\cH),3} < \infty$
is analytic by a theorem of Hindmarsh \cite{Hindmarsh68}.
This result can be generalized. We begin with a lemma.
\begin{lem}
\label{lem:analytic_szego}
Let $\cH$ be a reproducing kernel Hilbert space of analytic functions on $\bD$ with kernel $K$.
Let $S$ denote the Szeg\H{o} kernel.
Then there exists $r > 0$ such that $K(rz, rw) \neq 0$ for all $z,w \in \mathbb{D}$ and
such that
\begin{equation*}
\bD \times \bD \to \bC, \quad
(z,w) \mapsto \frac{S(z,w)}{K(r z,r w)},
\end{equation*}
is positive semi-definite.
\end{lem}
\begin{proof}
In the first step, we show that if $(b_n)$ is a sequence
of real numbers with $b_0 > 0$ such that the power series
$\sum_{n=0}^\infty b_n t^n$ has a positive radius of convergence and if we define
\begin{equation*}
L(z,w) = \sum_{n=0}^\infty b_n (z \ol{w})^n,
\end{equation*}
then there exists $r_0 > 0$ such that
\begin{equation*}
(z,w) \mapsto S(z,w) L(r z, rw)
\end{equation*}
is positive semi-definite on $\bD \times \bD$ for all $r \in (0,r_0)$. (This will already
prove the lemma in the case when $K$ has circular symmetry, by taking $L = 1/K$.)
To establish this claim, observe that for sufficiently small $r > 0$, the identity
\begin{equation*}
S(z,w) L(r z , r w) = \sum_{n=0}^\infty (z \ol{w})^n \sum_{m=0}^\infty b_m r^m (z \ol{w})^m
= \sum_{n=0}^\infty (z \ol{w})^n \sum_{k=0}^n b_k r^k
\end{equation*}
holds,
so it suffices to find $r_0 > 0$ such that $\sum_{k=0}^n b_k r^k \ge 0$ for all $n \in \bN$
and all $r \in (0,r_0)$.
Since $\sum_{n=0}^\infty b_n t^n$ has a positive radius of convergence and since $b_0 > 0$, there exists
$r_0 > 0$ such that $\sum_{k=1}^\infty |b_k| r_0^k \le b_0$. Thus,
\begin{equation*}
\sum_{k=0}^n b_k r^k \ge b_0 - \sum_{k=1}^n |b_k| r^k \ge b_0 - \sum_{k=1}^\infty |b_k| r_0^k \ge 0
\end{equation*}
for all $r \in (0,r_0)$, as desired.
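The partial-sum positivity in this step can be checked concretely for the sequence $b_0 = \frac{1}{2}$, $b_n = -\frac{3}{2} - n 2^{n+1}$ constructed at the end of this proof. The Python sketch below is only an illustration; the value $r = 0.01$ is an ad hoc choice of a sufficiently small radius.

```python
# Illustrative check: for b_0 = 1/2 and b_n = -3/2 - n*2^(n+1) (the sequence
# constructed at the end of the proof) and small r > 0, all partial sums
# sum_{k=0}^n b_k r^k are non-negative.
b = [0.5] + [-1.5 - n * 2 ** (n + 1) for n in range(1, 40)]
r = 0.01  # ad hoc choice of a small radius
partial_sums = []
s = 0.0
for k, bk in enumerate(b):
    s += bk * r ** k
    partial_sums.append(s)
print(all(x >= 0 for x in partial_sums))
```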
Having established the claim, suppose
now that $K$ is the reproducing kernel of a Hilbert space of analytic functions
on $\bD$.
We will show that there exist $s > 0$ and a sequence of real numbers $(b_n)$ with $b_0 > 0$
such that $\sum_{n=0}^\infty b_n t^n$ has a positive radius of convergence, $K(sz ,s w) \neq 0$ for all $z,w \in \mathbb{D}$, and
\begin{equation*}
(z,w) \mapsto K^{-1}( s z, s w) - \sum_{n=0}^\infty b_n (z \ol{w})^n
\end{equation*}
is positive semi-definite in a neighborhood of $(0,0)$. Once this is accomplished,
the first part shows that there exists $r_0 > 0$ such that $L(r z,rw) S(z,w) \ge 0$
for all $r \in (0,r_0)$,
where $L(z,w) = \sum_{n=0}^\infty b_n (z \ol{w})^n$. Hence, in the order
induced by positivity, we obtain by the Schur product theorem for sufficiently small $r > 0$
the inequality
\begin{equation*}
K^{-1}(s r z, s r w) S(z,w) \ge L(r z, rw) S(z,w) \ge 0,
\end{equation*}
which finishes the proof.
To construct the sequence $(b_n)$, observe that since $\cH$ consists
of holomorphic functions, $K$ is holomorphic in the first variable
and conjugate holomorphic in the second variable. Moreover, $K(0,0) > 0$ by our standing assumption that reproducing kernels
do not vanish on the diagonal.
By Hartogs' theorem, $K$ is jointly continuous, so $K$ is non-vanishing in a neighborhood of $0$, and there exist complex numbers $(c_{n m})$ such that
\begin{equation*}
K^{-1} (z,w) = \sum_{n,m=0}^\infty c_{n m} z^n \ol{w}^m
\end{equation*}
for $z,w$ in a neighborhood of $0$.
Since $K(z,w) = \ol{K(w,z)}$, it follows that $c_{n m} = \ol{c_{m n}}$ for all $m,n \in \bN_0$.
In particular, each $c_{n n}$ is real. Moreover, $c_{0 0} > 0$ as $K(0,0) > 0$.
By replacing $K$ with a positive multiple of $K$, we may
assume that $c_{0 0} = 1$.
Moreover, by replacing $K(z,w)$ with $K(sz , sw)$ for suitable $s > 0$, we may assume that $|c_{n m}| \le 1$
whenever $(n,m) \neq (0,0)$.
Then
\begin{equation}
\label{eqn:K_inverse}
K^{-1}(z,w)
= 1 + \sum_{n = 1}^\infty c_{n n} (z \ol{w})^n
+ \sum_{n < m} (c_{n m} z^n \ol{w}^m + \ol{c_{n m}} z^m \ol{w}^n).
\end{equation}
To treat the last sum, observe that if $f$ is a function on $\bD$, then for all $\varepsilon > 0$,
\begin{equation*}
f(z) + \ol{f(w)} = \left( \varepsilon + \frac{f(z)}{\varepsilon} \right)
\left( \varepsilon + \frac{\ol{f(w)}}{\varepsilon} \right) - \varepsilon^2 - \frac{f(z) \ol{f(w)}}{\varepsilon^2},
\end{equation*}
hence
\begin{equation*}
f(z) + \ol{f(w)} \ge - \varepsilon^2 - \frac{f(z) \ol{f(w)}}{\varepsilon^2}.
\end{equation*}
Using this observation with $f(z) = \ol{c_{n m}} z^{m -n}$ and $\varepsilon = \varepsilon_m$, to be determined later, as well as the Schur product theorem, we see that if $n < m$, then
\begin{align*}
c_{n m} z^n \ol{w}^m + \ol{c_{n m}} z^m \ol{w}^n
&= z^n \ol{w}^n ( c_{n m} \ol{w}^{m -n} + \ol{c_{n m}} z^{m - n}) \\
&\ge z^n \ol{w}^n (- \varepsilon_m^2 - \varepsilon_{m}^{-2} |c_{n m}|^2 z^{m -n} \ol{w}^{m - n}) \\
&\ge - \varepsilon_m^2 z^n \ol{w}^n - \varepsilon_m^{-2} z^{m} \ol{w}^m.
\end{align*}
Consequently, if $\varepsilon_m^2 = 2^{-m-1}$, then
\begin{align*}
\sum_{n < m} (c_{n m} z^n \ol{w}^m + \ol{c_{n m}} z^m \ol{w}^n)
&\ge - \sum_{n < m} (\varepsilon_m^2 z^n \ol{w}^n + \varepsilon_m^{-2} z^m \ol{w}^m) \\
&= - \sum_{n =0}^{\infty} \sum_{m=n+1}^\infty \varepsilon_m^2 z^n \ol{w}^n
- \sum_{m=0}^\infty \sum_{n=0}^{m-1} \varepsilon_{m}^{-2} z^m \ol{w}^m \\
&\ge - \frac{1}{2} \sum_{n=0}^\infty (z \ol{w})^n - \sum_{m=0}^\infty m 2^{m+1} z^m \ol{w}^m,
\end{align*}
where all sums converge absolutely for $(z,w)$ in a neighborhood of $0$.
Combining this estimate with Equation \eqref{eqn:K_inverse}, we see that
\begin{equation*}
K^{-1}(z,w) \ge \frac{1}{2} - \sum_{n=1}^\infty \Big(\frac{3}{2} + n 2^{n+1}\Big) (z \ol{w})^n,
\end{equation*}
so if we set $b_0 = \frac{1}{2}$ and $b_n = -\tfrac{3}{2} - n 2^{n+1}$ for $n \ge 1$, then
$(b_n)$ satisfies all desired properties.
\end{proof}
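The Schur product theorem, used twice in the proof above and again below, states that the entrywise product of positive semidefinite matrices is positive semidefinite. A quick randomized illustration (assuming NumPy):

```python
import numpy as np

# Illustrative check of the Schur product theorem: the entrywise (Hadamard)
# product of two positive semidefinite matrices is positive semidefinite.
rng = np.random.default_rng(0)
ok = True
for _ in range(20):
    X = rng.standard_normal((4, 4))
    Y = rng.standard_normal((4, 4))
    P, Q = X @ X.T, Y @ Y.T                  # random PSD matrices
    ok &= np.min(np.linalg.eigvalsh(P * Q)) >= -1e-10
print(ok)
```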
We can now generalize Hindmarsh's theorem \cite{Hindmarsh68}.
The original theorem is obtained by taking $\cH = H^2$.
\begin{prop}
\label{prop:Hindmarsh_general}
Let $\cH$ be a reproducing kernel Hilbert space of analytic functions
on an open domain $\Omega \subset \bC^d$. Then every function
$\varphi: \Omega \to \bC$ with $||\varphi||_{\Mult(\cH),3} < \infty$
is analytic on $\Omega$.
\end{prop}
\begin{proof}
Suppose that $||\varphi||_{\Mult(\cH),3} \le 1$. We will show that $\varphi$
is analytic in each variable separately.
To this end, let $w \in \Omega$ and $j \in \{1,\ldots,d\}$,
and choose $s > 0$ such that $w + s \bD e_j \subset \Omega$,
where $e_1,\ldots,e_d$ is the standard basis of $\mathbb{C}^d$.
Let $D = w + s \bD e_j$, define
\begin{equation*}
\iota: \bD \to D, \quad t \mapsto w + s t e_j,
\end{equation*}
and let $k$ be the kernel on $\bD$ given by $k(z,w) = K(\iota(z),\iota(w))$.
Then the reproducing kernel Hilbert space $\cH(k)$ on $\bD$ with kernel $k$
consists of analytic functions. Let $\psi = \varphi \circ \iota$. Then
\begin{equation*}
||\psi||_{\Mult(\cH(k)),3} \le ||\varphi||_{\Mult(\cH),3} \le 1.
\end{equation*}
Lemma \ref{lem:analytic_szego} shows that there exists $r > 0$ such that
\begin{equation*}
S(z,w) k^{-1}(r z,rw) \ge 0,
\end{equation*}
where $S$ denotes the Szeg\H{o} kernel.
Since $||\psi||_{\Mult(\cH(k)),3} \le 1$, the function
\begin{equation*}
(z,w) \mapsto k(rz,rw) (1 - \psi(rz) \ol{\psi(rw)})
\end{equation*}
is $3$-point positive, hence an application of the Schur product theorem yields that
\begin{equation*}
(z,w) \mapsto S(z,w) (1 - \psi(r z) \ol{\psi(r w)})
\end{equation*}
is $3$-point positive as well. In this setting, Hindmarsh's theorem \cite{Hindmarsh68}, see also
\cite[Theorem III.2]{Donoghue74}, implies that $z \mapsto \psi(r z)$ is analytic in $\bD$.
(Hindmarsh's theorem concerns functions in the upper half plane, but a routine application
of the Cayley transform yields the corresponding statement in the unit disc.)
Therefore, $\psi$ is analytic in a neighborhood of the origin, so that $\varphi$ is analytic
in the $j$-th variable in a neighborhood of $w$.
\end{proof}
\section{\texorpdfstring{The $n$-point norm for spaces on $\bD$}{The n-point norm for spaces on D}}
\label{sec:n-point}
In this section, we study the $n$-point multiplier norm for spaces of holomorphic
functions on the unit disc.
Our first goal is to show that Question \ref{quest:many_points} has a negative answer for the weighted
Dirichlet spaces $\cD_a$, i.e.\ no $n$-point multiplier norm is comparable
to the full multiplier norm for these spaces. While we will establish a more general result concerning subhomogeneity of $\Mult(\cD_a)$ in Section \ref{sec:top_sub_disc},
we consider this easier question first as it illustrates the ideas that will be used later.
The following lemma shows that for
certain spaces on $\bD$,
the $n$-point multiplier norm of an analytic
function is comparable to the supremum norm.
In the sequel, we let $\Aut(\bD)$ denote the group
of conformal automorphisms of $\bD$.
\begin{lem}
\label{lem:n_point}
Let $\cH$ be a reproducing kernel Hilbert
space on $\bD$.
Suppose that
every conformal automorphism of $\bD$ is a multiplier
on $\cH$ and that
there exists a constant $C > 0$ such that
\begin{equation*}
||\theta||_{\Mult(\cH)} \le C
\end{equation*}
for all $\theta \in \Aut(\bD)$.
Then
\begin{equation*}
||f||_{\infty} \le ||f||_{\Mult(\cH),n} \le C^{n-1} ||f||_{\infty}
\end{equation*}
for all $f \in H^\infty$ and all $n \ge 1$.
\end{lem}
\begin{proof}
The first inequality always holds, so it suffices to show the second one.
To this end, suppose that $f \in H^\infty$ with $||f||_\infty \le 1$.
Let $z_1,\ldots,z_n \in \bD$. We wish to show that
\begin{equation*}
\big[K(z_i,z_j) (C^{2 n-2} - f(z_i) \ol{f(z_j)}) \big]_{i,j=1}^n
\end{equation*}
is positive.
Since
$f$ belongs to the unit ball of $H^\infty$, there exists by classical Nevanlinna-Pick
interpolation a finite Blaschke product $B$ of degree at most $n-1$ and a complex
number $\lambda$ with $|\lambda| \le 1$ such that
$\lambda B(z_i) = f(z_i)$ for $1 \le i \le n$; see \cite[Section I.2]{Garnett07} or \cite[Theorem 6.15]{AM02}.
Since $B$ is a product
of at most $n-1$ conformal automorphisms of $\bD$, the assumption on $\cH$
implies that $ \lambda B$ is a multiplier of $\cH$ of norm at most $C^{n-1}$. Therefore,
\begin{equation*}
\big[ K(z_i,z_j) ( C^{2n-2} - f(z_i) \ol{f(z_j)}) \big] =
\big[ K(z_i,z_j) ( C^{2n-2} - \lambda B(z_i) \ol{\lambda B(z_j)}) \big]
\end{equation*}
is positive, as desired.
\end{proof}
Using basic results from operator space theory, it is possible
to extend the preceding lemma to operator-valued multipliers.
If $\cE$ is an auxiliary Hilbert space,
let $H^\infty(\cB(\cE))$ denote the space of bounded $\cB(\cE)$-valued
holomorphic functions on $\bD$, equipped with the supremum norm
\begin{equation*}
\|\Phi\|_{\infty} = \sup_{z \in \bD} \|\Phi(z)\|_{\cB(\cE)}.
\end{equation*}
Thus, $H^\infty(\cB(\cE)) = \Mult(H^2 \otimes \cE)$, with equality of norms.
\begin{lem}
\label{lem:n_point_vector}
Assume the setting of Lemma \ref{lem:n_point} and let $\cE$ be an auxiliary Hilbert space.
Then
\begin{equation*}
\|\Phi\|_\infty \le \|\Phi\|_{\Mult(\cH),n} \le n C^{n-1} \|\Phi\|_{\infty}
\end{equation*}
for all $\Phi \in H^\infty(\cB(\cE))$ and all $n \ge 1$.
\end{lem}
\begin{proof}
Once again, the first inequality always holds.
To prove the second inequality,
a straightforward approximation argument shows that it suffices to consider
finite-dimensional spaces $\cE$.
Let $n \in \bN$, let $F \subset \bD$ with $|F| \le n$ and consider the restriction mapping
\begin{equation*}
R: H^\infty \to \Mult(\cH|_F), \quad \varphi \mapsto \varphi \big|_F.
\end{equation*}
Lemma \ref{lem:n_point} implies that $R$ is bounded with norm at most $C^{n-1}$.
Since
\begin{equation*}
\dim( \Mult(\cH |_F)) \le n,
\end{equation*}
a basic result from operator space theory implies that $R$
is completely bounded with completely bounded norm at most $n C^{n-1}$;
see for instance \cite[Corollary 2.2.4]{ER00}.
In other words,
if $k \ge 1$ and $\Phi \in H^\infty(M_k)$, then $\|\Phi\|_{\Mult( (\cH|_F) \otimes \bC^k)} \le n C^{n-1} \|\Phi\|_\infty$.
Taking the supremum over all finite subsets $F$ of $\bD$ of cardinality at most $n$
therefore yields the second inequality.
\end{proof}
Lemma \ref{lem:n_point} implies a negative answer to Question \ref{quest:many_points} for the weighted
Dirichlet spaces $\cD_a$, where $a \in (0,1)$. In fact, we obtain the following more precise statement.
\begin{cor}
\label{cor:many_points}
Let $a \in (0,1)$. Then there exists a constant $C > 0$ so that
for all $n \in \bN$ and all $f \in H^\infty$,
\begin{equation*}
\|f\|_\infty \le \|f\|_{\Mult(\cD_a),n} \le C^{n-1} \|f\|_\infty.
\end{equation*}
More generally, for all $n \in \bN$, all auxiliary Hilbert spaces $\cE$
and all $\Phi \in H^\infty(\cB(\cE))$,
\begin{equation*}
\|\Phi\|_\infty \le \|\Phi\|_{\Mult(\mathcal{D}_a),n} \le n C^{n-1} \|\Phi\|_{\infty}.
\end{equation*}
In particular,
there do not exist a constant $C> 0$ and $n \in \bN$ so that
\begin{equation*}
\|\varphi\|_{\Mult(\cD_a)} \le C \|\varphi\|_{\Mult(\cD_a),n}
\end{equation*}
for all $\varphi \in \Mult(\cD_a)$.
\end{cor}
\begin{proof}
We show that $\cD_{a}$ satisfies the assumptions of Lemma \ref{lem:n_point},
which will prove the first two statements (by Lemma \ref{lem:n_point} and Lemma \ref{lem:n_point_vector}). The final statement then
follows from the well-known fact that the multiplier norm of $\cD_{a}$ is not equivalent to the supremum norm.
Indeed, using the explicit formula for the norm in $\mathcal{D}_a$ from the introduction, we see that
\begin{equation*}
\|z^n\|_{\Mult(\cD_a)}^2 \ge \|z^n\|_{\cD_a}^2 = \frac{n! \Gamma(a)}{\Gamma(n+a)} \approx (n+1)^{1-a},
\end{equation*}
which tends to infinity as $n \to \infty$; see for instance \cite{Wendel48} for the asymptotic relation.
Using the special form of the reproducing kernel of $\mathcal{D}_a$
and a familiar identity for disc automorphisms \cite[Theorem 2.2.5 (2)]{Rudin08}, it is not hard to see that
$\Mult(\cD_a)$ is isometrically invariant under compositions with conformal
automorphisms, that is, if $\varphi \in \Mult(\cD_{a})$ and if $\theta \in \Aut(\bD)$,
then $\varphi \circ \theta \in \Mult(\cD_{a})$ and $\| \varphi \circ \theta\|_{\Mult(\cD_{a})}
= \|\varphi\|_{\Mult(\cD_{a})}$; see for instance the easy implication of \cite[Corollary 4.4]{Hartz17a}.
In particular, $\|\theta\|_{\Mult(\cD_a)} = \|z\|_{\Mult(\cD_{a})}$ for all $\theta \in \Aut(\bD)$,
so that $\cD_{a}$ satisfies the assumptions of Lemma \ref{lem:n_point}.
\end{proof}
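The growth rate of the monomial norms quoted in the proof can be confirmed numerically; the sketch below (using log-gamma to avoid overflow, with the ad hoc sample value $a = \tfrac{1}{2}$) checks that $n!\,\Gamma(a)/\Gamma(n+a)$ stays within a bounded ratio of $(n+1)^{1-a}$.

```python
from math import lgamma, log, exp

# Illustrative check: ||z^n||_{D_a}^2 = n! Gamma(a) / Gamma(n + a) is
# comparable to (n + 1)^(1 - a); here a = 0.5 is an ad hoc sample value.
a = 0.5
ratios = []
for n in [10, 100, 1000, 10000]:
    log_norm_sq = lgamma(n + 1) + lgamma(a) - lgamma(n + a)
    ratios.append(exp(log_norm_sq - (1 - a) * log(n + 1)))
print(all(0.5 < r < 2 for r in ratios))
```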
The classical Dirichlet space $\cD$ requires a more careful analysis, as $\Aut(\bD)$
is not a bounded subset of $\Mult(\cD)$. In fact, we will shortly see that the $2$-point
multiplier norm on the Dirichlet space is not equivalent to the supremum norm on $\bD$,
and hence provides more information than the $1$-point norm.
This result will be a consequence of the following estimate of the derivative in terms of the $2$-point norm
on the Dirichlet space, which is better than the classical estimate
in the Schwarz--Pick lemma by a factor of $\log( \frac{1}{1 - |z|})^{1/2}$.
\begin{lem}
\label{lem:der_dirichlet}
There exists a constant $C > 0$ so that
for all $f \in \cO(\bD)$ and all $z \in \bD$,
\begin{equation}
\label{eqn:der_dirichlet}
|f'(z)| \le C \frac{\|f\|_{\Mult(\cD),2}}{(1 - |z|) \log(\frac{1}{1- |z|})^{1/2}}.
\end{equation}
\end{lem}
\begin{proof}
By Proposition 18 (a) of \cite{BS84}, there exists a constant $C > 0$ so that
for all $f \in \Mult(\cD)$, inequality \eqref{eqn:der_dirichlet} holds with $\|f\|_{\Mult(\cD)}$
in place of $\|f\|_{\Mult(\cD),2}$. Indeed, this follows from the fact that the map
$f \mapsto f'$ is a bounded linear map from $\Mult(\cD)$ into $\Mult(\cD,L^2_a)$, combined with
the standard estimate
for multipliers between reproducing kernel Hilbert spaces.
Let now $f \in \cO(\bD)$ with $\|f\|_{\Mult(\cD),2} \le 1$ and let $z \in \bD$.
By the complete Nevanlinna--Pick property of $\cD$ (see, for instance, \cite[Corollary 7.41]{AM02}), there exists for every $w \in \bD$
a multiplier $\varphi_w \in \Mult(\cD)$ with $\|\varphi_w\|_{\Mult(\cD)} \le 1$
and $\varphi_w(z) = f(z), \varphi_w(w) = f(w)$. Thus,
\begin{equation*}
|f(z) - f(w)| = |\varphi_w(z) - \varphi_w(w)|
\le \sup_{t \in [0,1]} |\varphi_w'(z + t (w - z))| \, |z - w|.
\end{equation*}
Applying the estimate for $\varphi_w'$ explained in the preceding paragraph, we find that
\begin{equation*}
\frac{ |f(z) - f(w)|}{|z - w|} \le \sup_{t \in [0,1]} \frac{C}{ ( 1 - |z + t (w - z)|) \log(\frac { 1}{1 - | z + t (w - z)|})^{1/2}}.
\end{equation*}
Letting $w \to z$, we obtain the lemma.
\end{proof}
The result about the $2$-point norm that was alluded to above is an immediate consequence.
\begin{cor}
\label{cor:dirichlet_two_point}
The norm $\|\cdot\|_{\Mult(\cD),2}$ and the supremum norm on $\bD$ are not equivalent
on the space of functions that are holomorphic in a neighborhood of $\ol{\bD}$.
\end{cor}
\begin{proof}
Let $r \in (0,1)$ and let $f(z) = \frac{r-z}{1 - rz}$. Then $f \in \Aut(\bD)$ and in particular
$\|f\|_\infty \le 1$. On the other hand,
\begin{equation*}
|f'(r)| = \frac{1}{1 - r^2} \ge \frac{1}{2(1 - r)},
\end{equation*}
so Lemma \ref{lem:der_dirichlet} implies that
\begin{equation*}
\|f\|_{\Mult(\cD),2} \ge \frac{1}{C} (1 - r) \log\Big( \frac{1}{1 - r}\Big)^{1/2} |f'(r)|
\ge \frac{1}{2 C} \log\Big( \frac{1}{1 - r}\Big)^{1/2},
\end{equation*}
which tends to infinity as $r \to 1$.
\end{proof}
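The quantities in this proof are easy to check numerically: $|f'(r)| = \frac{1}{1-r^2}$ for the automorphism $f(z) = \frac{r-z}{1-rz}$, and the resulting lower bound $\frac{1}{2}\log(\frac{1}{1-r})^{1/2}$ grows as $r \to 1$. The sketch below (illustrative only) approximates $f'$ by a central difference.

```python
from math import log

# Illustrative check: for f(z) = (r - z)/(1 - r z) one has |f'(r)| = 1/(1 - r^2),
# and the lower bound (1/2) * log(1/(1 - r))^(1/2) tends to infinity as r -> 1.
def abs_fprime_at_r(r, h=1e-7):
    f = lambda z: (r - z) / (1 - r * z)
    return abs((f(r + h) - f(r - h)) / (2 * h))  # central difference

checks = []
for r in [0.9, 0.99, 0.999]:
    exact = 1 / (1 - r ** 2)
    checks.append(abs(abs_fprime_at_r(r) - exact) < 1e-3 * exact)

bounds = [0.5 * log(1 / (1 - r)) ** 0.5 for r in [0.9, 0.99, 0.999]]
print(all(checks) and bounds == sorted(bounds))
```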
We will show that in spite of the last corollary, there is no $n \in \bN$ so
that the multiplier norm on the Dirichlet space is equivalent to the $n$-point multiplier
norm. To avoid repetition, we postpone the proof to Section \ref{sec:top_sub_disc},
where we establish a more general result about subhomogeneity of the multiplier algebra
of the Dirichlet space.
\section{Completely isometric subhomogeneity}
\label{sec:ci_subhom}
The goal of this section is to prove Theorems \ref{thm:ci_n_point_intro} and \ref{thm:ci_subhom_intro}. To
this end, we will make use of Arveson's theory of boundary representations \cite{Arveson69}
(see also \cite[Chapter 4]{BL04} and \cite{DK15}),
which generalizes the classical notion of the Choquet boundary of a uniform algebra.
Let $\cA \subset \cB(\cH)$ be a unital subalgebra. It follows from Arveson's extension
theorem that every unital completely contractive map $\varphi: \cA \to \cB(\cK)$ extends to a unital
completely positive map $\pi: C^*(\cA) \to \cB(\cK)$.
The map $\varphi$ is said to have the \emph{unique extension property} if there is a unique such
extension, and this extension is a $*$-homomorphism. If, in addition, the extension $\pi$ is
an irreducible representation of $C^*(\cA)$, then it is called a \emph{boundary representation}
of $\cA$. This notion is in fact independent of the concrete representation of $\cA$
as an operator algebra \cite[Theorem 2.1.2]{Arveson69}.
In general, completely isometric copies of an operator algebra $\cA$ can generate many different $C^*$-algebras.
The \emph{$C^*$-envelope} is the smallest one in the following sense.
A \emph{$C^*$-cover}
of $\cA$ is a $C^*$-algebra $\kA$ together with a unital complete isometry $j: \cA \to \kA$
so that $\kA = C^*( j(\cA))$. The $C^*$-envelope is a $C^*$-cover $\iota: \cA \to C^*_{env}(\cA)$
so that for every other $C^*$-cover $j: \cA \to \kA$, there is a $*$-homomorphism $\pi: \kA \to C^*_{env}(\cA)$ so that $\iota = \pi \circ j$.
As an example, completely isometric copies of the disc algebra $A(\bD)$ generate
the $C^*$-algebras $C(\bT), C(\ol{\bD})$ and the Toeplitz algebra. The $C^*$-envelope
of the disc algebra is $C(\bT)$.
It is a theorem of Davidson and Kennedy \cite{DK15}, proved
by Arveson in the separable case \cite{Arveson08}, that every operator algebra
has sufficiently many boundary representations in the sense that their direct sum $\pi$ is completely
isometric. As a consequence, the $C^*$-envelope of $\cA$ is the $C^*$-algebra
generated by $\pi(\cA)$. Boundary representations and $C^*$-envelopes have a long and rich
history, which we will not review here. Instead, we refer to \cite{Arveson08,DK15} and the references therein.
Recall from the introduction that a unital $C^*$-algebra $\kA$ is called $n$-subhomogeneous
if every irreducible representation of $\kA$ has dimension at most $n$. Equivalently, for all
$a \in \kA$,
\begin{equation}
\label{eqn:subhom_c_star}
\|a\| = \sup \{ \|\pi(a) \| : \pi: \kA \to M_k \text{ is a unital $*$-homomorphism, } k \le n \}.
\end{equation}
(For each $a\in \kA$, there is an irreducible GNS-representation $\pi$ of $\kA$ with $\|\pi(a)\| = \|a\|$, so $n$-subhomogeneity
implies \eqref{eqn:subhom_c_star}. Conversely, if \eqref{eqn:subhom_c_star} holds for all $a \in \kA$,
then $\kA$ embeds into a product $\prod_{i \in I} M_{n_i}$ with $n_i \le n$ for all $i$, which implies
that $\kA$ is $n$-subhomogeneous; see \cite[Proposition IV.1.4.6]{Blackadar06}.)
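As a toy illustration of \eqref{eqn:subhom_c_star} (assuming NumPy): in a finite product of matrix algebras $\prod_i M_2$, realized as block-diagonal operators, the norm of an element is the maximum of the norms of its $2 \times 2$ coordinates, and the coordinate projections play the role of the homomorphisms $\pi$.

```python
import numpy as np

# Toy example: in prod_i M_2 (a 2-subhomogeneous C*-algebra, realized here as
# block-diagonal 6x6 matrices), the norm of an element equals the supremum of
# the norms of its images under the coordinate *-homomorphisms.
blocks = [np.array([[1.0, 2.0], [0.0, 1.0]]),
          np.array([[0.0, 3.0], [0.0, 0.0]]),
          np.eye(2)]
A = np.zeros((6, 6))
for i, b in enumerate(blocks):
    A[2 * i:2 * i + 2, 2 * i:2 * i + 2] = b
full_norm = np.linalg.norm(A, 2)                       # operator norm of a
coord_sup = max(np.linalg.norm(b, 2) for b in blocks)  # sup over ||pi(a)||
print(abs(full_norm - coord_sup) < 1e-12)
```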
Recall further that we defined a unital operator algebra $\mathcal{A}$ to be
completely isometrically $n$-subhomogeneous if for all $r \in \mathbb{N}$
and all $A \in M_r(\mathcal{A})$,
\begin{equation*}
\|A\| = \sup \{ \|\pi^{(r)}(A) \|: \pi: \mathcal{A} \to M_k \text{ is a u.c.c.\ homomorphism, } k \le n \}.
\end{equation*}
The following proposition connects completely isometric subhomogeneity of operator algebras
to subhomogeneity of $C^*$-algebras.
\begin{prop}
\label{prop:boundary_subhom}
Let $\cA \subset \cB(\cH)$ be a unital operator algebra and let $n \in \bN$.
The following assertions are equivalent:
\begin{enumerate}[label=\normalfont{(\roman*)}]
\item $\cA$ is completely isometrically $n$-subhomogeneous,
\item the $C^*$-envelope of $\cA$ is an $n$-subhomogeneous $C^*$-algebra,
\item every boundary representation of $\cA$ acts on a Hilbert space of dimension at most $n$.
\end{enumerate}
\end{prop}
\begin{proof}
(i) $\Rightarrow$ (ii)
Suppose that $\cA$ is completely isometrically $n$-subhomogeneous. Then
there exists an index set $I$ and natural numbers $(n_i)_{i \in I}$ with $\sup_{i \in I} n_i \le n$
and a unital complete isometry
\begin{equation*}
\Phi: \cA \to \prod_{i \in I} M_{n_i}.
\end{equation*}
The second characterization of $n$-subhomogeneity mentioned in the discussion
before the proposition shows that the product on the right and hence also the subalgebra $C^*(\Phi(\cA))$ are $n$-subhomogeneous.
Since the $C^*$-envelope of $\cA$ is a quotient of $C^*(\Phi(\cA))$,
the first characterization of $n$-subhomogeneity shows that the $C^*$-envelope is $n$-subhomogeneous as well.
(These permanence properties for $n$-subhomogeneous $C^*$-algebras also follow directly from \cite[Proposition IV.1.4.6]{Blackadar06}.)
(ii) $\Rightarrow$ (iii)
The invariance principle
for boundary representations \cite[Theorem 2.1.2]{Arveson69} (see \cite[Proposition 3.1]{Arveson} for a
modern proof) shows that every boundary representation of $\cA$ is an irreducible
representation of the $C^*$-envelope, hence it acts on a Hilbert space of dimension at most $n$.
(iii) $\Rightarrow$ (i) The main result of \cite{DK15} shows that the direct
sum of all boundary representations of $\cA$ is completely isometric on $\cA$,
hence $\cA$ is completely isometrically $n$-subhomogeneous.
\end{proof}
Recall that a regular unitarily invariant space is a reproducing kernel Hilbert space on $\bB_d$ with reproducing
kernel of the form
\begin{equation*}
K(z,w) = \sum_{n=0}^\infty a_n \langle z,w \rangle^n,
\end{equation*}
where $a_0 = 1$, $a_n > 0$ for all $n \in \bN$ and $\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$.
If $\cH$ is a regular unitarily invariant space, then the polynomials are multipliers of $\cH$.
We can now prove Theorem \ref{thm:ci_subhom_intro},
which we restate for the convenience of the reader. As explained in the introduction, this will establish
Theorem \ref{thm:ci_n_point_intro} as well.
\begin{thm}
\label{thm:ci_subhom}
Let $\cH$ be a regular unitarily invariant space on $\bB_d$.
Then $\Mult(\cH)$ is completely isometrically subhomogeneous if and only if $\Mult(\cH) = H^\infty(\bB_d)$
completely isometrically.
\end{thm}
\begin{proof}
It is clear that $H^\infty(\bB_d)$ is completely isometrically $1$-subhomogeneous.
Conversely, let $A(\cH)$ denote the norm closure of the polynomials inside of $\Mult(\cH)$
and let $A(\bB_d)$ denote the ball algebra.
Then the natural inclusion $A(\cH) \hookrightarrow A(\bB_d)$ is completely contractive.
We first show that if $A(\cH)$ is completely isometrically subhomogeneous, then this
inclusion is a complete isometry.
To this end, suppose that the inclusion $A(\cH) \hookrightarrow A(\bB_d)$ is not a complete isometry.
The maximum modulus principle then shows
that the map
\begin{equation*}
R: A(\cH) \to C(\partial \bB_d), \quad f \mapsto f \big|_{\partial \bB_d},
\end{equation*}
is not a complete isometry.
Since $\cH$ is regular, \cite[Theorem 4.6]{GHX04} yields a short exact sequence of $C^*$-algebras
\begin{equation*}
0 \to \cK(\cH) \to C^*(A(\cH)) \to C( \partial \bB_d) \to 0,
\end{equation*}
where the first map is the inclusion map and the second map agrees
with $R$ on $A(\cH)$. Thus, the quotient map of $C^*(A(\cH))$ by the compact operators
is not completely isometric on $A(\cH)$. In this setting, Arveson's boundary
theorem \cite[Theorem 2.1.1]{Arveson72} implies that the identity representation of $C^*(A(\cH))$ is a boundary
representation for $A(\cH)$. In particular, $A(\cH)$ has an infinite dimensional boundary
representation and is therefore not completely isometrically subhomogeneous by Proposition \ref{prop:boundary_subhom}.
Finally, we turn from $A(\cH)$ to $\Mult(\cH)$.
The natural inclusion $\Mult(\cH) \hookrightarrow H^\infty(\bB_d)$ is completely contractive.
If $\Mult(\cH)$ is completely isometrically subhomogeneous,
then so is the subalgebra $A(\cH)$. By the preceding paragraph, $A(\cH) = A(\bB_d)$ completely isometrically.
If $F$ belongs to the unit ball of $M_n(H^\infty(\bB_d))$, then for each $r \in (0,1)$,
the function $F_r(z) = F(r z)$ belongs to the unit ball of $M_n(A(\bB_d))$,
and hence to the unit ball of $M_n(\Mult(\cH))$.
Since $F_r$ converges to $F$ pointwise as $r \to 1$, we conclude that $F$ belongs to the unit ball
of $M_n(\Mult(\cH))$. Therefore, $\Mult(\cH) = H^\infty(\bB_d)$ completely isometrically.
\end{proof}
The question of when the identity representation is a boundary representation of $A(\cH)$
was already studied in \cite{GHX04}.
We record two concrete applications to spaces on the unit ball.
Recall that $\cD_a(\bB_d)$ is the reproducing kernel Hilbert space on $\bB_d$ with kernel $K(z,w)
= \frac{1}{(1 - \langle z,w \rangle)^a}$, and $\cD_0(\bB_d)$ has kernel
$K(z,w) = \frac{1}{\langle z,w \rangle} \log( \frac{1}{1 - \langle z,w \rangle})$.
\begin{cor}
\label{cor:ci_subhom_ball_dirichlet}
Let $a \in [0,\infty)$. Then $\Mult(\cD_a(\bB_d))$ is completely isometrically subhomogeneous if and only if $a \ge d$.
\end{cor}
\begin{proof}
Let $\cH = \cD_a(\bB_d)$.
If $a = d$, then $\cH$ is the Hardy space on the ball; if $a > d$, then $\cH$ is a weighted Bergman space;
see, for instance, \cite[Theorem 2.7 and Proposition 4.28]{Zhu05}.
In either case, $\Mult(\cH) = H^\infty(\bB_d)$,
which is completely isometrically $1$-subhomogeneous.
Conversely, one can deduce from Examples 1 and 2 in \cite{GHX04} that the identity
representation of $A(\cH)$, the norm closure of the polynomials in $\Mult(\mathcal{H})$, is a boundary representation if $a < d$, so that $\Mult(\cH)$ is not completely
isometrically subhomogeneous by Proposition \ref{prop:boundary_subhom}.
Alternatively,
we can argue with Theorem \ref{thm:ci_subhom} as follows.
If $a > 0$, then
\begin{equation*}
\|z_i\|_{\mathcal{H}}^2 = \frac{\Gamma(a)}{\Gamma(a+1)} = \frac{1}{a}.
\end{equation*}
If $\Mult(\cH)$ is completely isometrically subhomogeneous, then
Theorem \ref{thm:ci_subhom} implies that $\Mult(\cH) = H^\infty(\bB_d)$ completely isometrically.
In particular,
\begin{equation*}
1 =
\left\|
\begin{bmatrix}
z_1 \\ \vdots \\ z_d
\end{bmatrix} \right\|_{\Mult(\cH,\cH \otimes \bC^d)}
\ge \sum_{i=1}^d \|z_i\|^2_{\cH} = \frac{d}{a},
\end{equation*}
so that $a \ge d$. Similarly, if $a=0$, then $\|z_i\|^2_{\cH} = 2$, so that $\Mult(\cH)$ is not completely isometrically subhomogeneous
by the same reasoning.
\end{proof}
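The norm computation for $z_i$ used above can be double-checked from the kernel expansion: the coefficient of $\langle z,w\rangle$ in $(1-\langle z,w\rangle)^{-a}$ is $a_1 = \Gamma(1+a)/\Gamma(a) = a$, so $\|z_i\|^2_{\cH} = 1/a_1 = 1/a$. A quick numeric sketch (sample values of $a$ chosen arbitrarily):

```python
from math import gamma

# Illustrative check: the first Taylor coefficient of (1 - t)^(-a) is
# a_1 = Gamma(1 + a) / (1! * Gamma(a)) = a, so ||z_i||^2 = 1/a_1 = 1/a.
for a in [0.5, 1.0, 2.0, 3.5]:
    a1 = gamma(1 + a) / gamma(a)   # general coefficient: Gamma(n+a)/(n! Gamma(a))
    print(abs(1 / a1 - 1 / a) < 1e-12)
```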
Our second application concerns spaces with the complete
Nevanlinna--Pick property; see \cite{AM02} for background on this topic.
\begin{cor}
Let $\cH$ be a regular unitarily invariant space on $\bB_d$ with the complete
Nevanlinna--Pick property. Then $\Mult(\cH)$ is completely isometrically subhomogeneous if and only if $d=1$
and $\cH = H^2(\bD)$.
\end{cor}
\begin{proof}
It is clear that $\Mult(H^2(\bD)) = H^\infty(\bD)$ is completely isometrically $1$-sub\-homogeneous.
Conversely, it was proved in \cite[Theorem 6.2]{CH18} that the identity representation of $A(\cH)$ is a boundary
representation unless $\cH = H^2(\bD)$, so the result follows from Proposition \ref{prop:boundary_subhom}.
Alternatively, we can again argue with Theorem \ref{thm:ci_subhom} directly.
Observe that if $K(z,w) = \sum_{n=0}^\infty a_n \langle z,w \rangle^n$, then $\|z_i\|^2_{\cH} = \frac{1}{a_1}$.
If $\Mult(\cH)$ is completely
isometrically subhomogeneous, then $\Mult(\cH) = H^\infty(\bB_d)$ completely isometrically by Theorem \ref{thm:ci_subhom}, so
\begin{equation*}
1 = \left\|
\begin{bmatrix}
z_1 \\ \vdots \\ z_d
\end{bmatrix} \right\|_{\Mult(\cH,\cH \otimes \bC^d)}
\ge \sum_{i=1}^d \|z_i\|^2_{\cH} = \frac{d}{a_1},
\end{equation*}
hence $a_1 \ge d$.
Since $\cH$ has the complete Nevanlinna--Pick property,
there exists a sequence $(b_n)$ of non-negative numbers with $\sum_{n=1}^\infty b_n \le 1$ and
$\sum_{n=0}^\infty a_n t^n = \frac{1}{1 - \sum_{n=1}^\infty b_n t^n}$ for $t \in \bD$. (This is a variant of \cite[Theorem 7.33]{AM02}, see for instance \cite[Lemma 2.3]{Hartz17a} for the precise statement.)
Since $a_1 = b_1$,
it follows that $b_1 = d =1$ and $b_n = 0$ for $n \ge 2$, hence $\cH = H^2(\bD)$.
\end{proof}
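The positivity of the coefficients $b_n$ in this factorization can be illustrated for the weighted Dirichlet spaces on the disc: for $\cD_a$ with $a \in (0,1)$, the reciprocal kernel corresponds to $1 - (1-t)^a$, whose Taylor coefficients for $n \ge 1$ are non-negative and sum to at most $1$. A numeric sketch (with the ad hoc sample $a = \tfrac{1}{2}$, truncating the series):

```python
# Illustrative check of the CNP criterion for D_a, a in (0, 1): the Taylor
# coefficients b_n of 1 - (1 - t)^a satisfy b_n >= 0 and sum b_n <= 1.
a = 0.5
c = 1.0          # c_n = coefficient of t^n in (1 - t)^a, starting with c_0 = 1
bs = []
for n in range(60):
    c = c * (a - n) / (n + 1) * (-1.0)  # recurrence c_{n+1} = -c_n (a - n)/(n + 1)
    bs.append(-c)                        # b_{n+1} = -c_{n+1}
print(all(bn >= 0 for bn in bs) and sum(bs) <= 1)
```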
\section{Topological subhomogeneity for spaces on the disc}
\label{sec:top_sub_disc}
In this section, we study topological subhomogeneity of multiplier algebras
of weighted Dirichlet spaces on the unit disc. As in Section \ref{sec:n-point},
the basic idea is to interpolate holomorphic functions in $\bD$ by finite Blaschke products.
If $A$ is a diagonalizable $n \times n$ matrix with $\sigma(A) \subset \bD$, then
classical Nevanlinna--Pick interpolation shows that for any $f \in H^\infty(\bD)$ with $\|f\|_\infty \le 1$,
there exists a finite Blaschke product $B$ of degree at most $n-1$ and $\lambda \in \ol{\bD}$
such that $f$ and $\lambda B$ agree on $\sigma(A)$, hence $f(A) = \lambda B(A)$.
We require a generalization of this fact to not necessarily diagonalizable matrices,
which corresponds to interpolating suitable derivatives as well.
This generalization readily follows from Sarason's approach to Nevanlinna--Pick interpolation \cite{Sarason67}.
\begin{lem}
\label{lem:Blaschke_product}
Let $A \in M_n$ with $\sigma(A) \subset \bD$
and let $f \in H^\infty$ with $||f||_\infty \le 1$.
Then there exists a finite Blaschke product $B$ of degree at most $n-1$ and a complex
number $\lambda$ with $|\lambda| \le 1$ such that $f(A) = \lambda B(A)$.
\end{lem}
\begin{proof}
Clearly, we may assume that $f \neq 0$ and that $A \neq 0$.
Let $\psi$ be the finite Blaschke product of degree at most $n$
whose zeros are those of the minimal polynomial
of $A$, counted with multiplicity.
Let $K = H^2 \ominus \psi H^2$.
The corollary to Proposition 5.1 of \cite{Sarason67} shows that
there exists a unique function $\varphi \in H^\infty$ with
\begin{equation}
\label{eqn:sarason}
P_K M_\varphi \big|_K =
P_K M_f \big|_{K}
\quad \text{ and } \quad
\|\varphi\|_\infty = \|P_K M_f \big|_{K} \|;
\end{equation}
moreover, this $\varphi$ is a rational function of constant modulus
on the unit circle with strictly fewer zeros than $\psi$. In other words,
$\varphi = \lambda B$ for a finite Blaschke product $B$ of degree at most $n-1$
and a number $\lambda \in \bC$ with $|\lambda| \le 1$.
Equation \eqref{eqn:sarason} and co-invariance of $K$ under multiplication
operators imply that $f - \lambda B \in \psi H^2$.
Consequently, $f(A) = \lambda B(A)$, as desired.
\end{proof}
The next step is to establish a version of Lemma \ref{lem:n_point}
for representations of multiplier algebras. In order to be able to treat
the classical Dirichlet space as well, we formulate and prove a more flexible version
in which $\Aut(\bD)$ is not assumed to be a bounded subset of the multiplier algebra.
\begin{lem}
\label{lem:subhomogeneous}
Let $\cH$ be a reproducing kernel Hilbert space on $\bD$
and suppose that $\Aut(\bD) \subset \Mult(\cH)$. For $r \in [0,1)$, let
\begin{equation*}
h(r) = \sup_{\theta \in \Aut(\bD)} \| \theta(r z) \|_{\Mult(\cH)}.
\end{equation*}
Let $\pi: \Mult(\cH) \to M_n$ be a unital bounded homomorphism.
Then for all $r \in (0,1)$, the inclusion $H^\infty( r^{-1} \bD) \subset \Mult(\cH)$
holds, and
\begin{equation*}
\|\pi(f)\| \le \|\pi\| h(r)^{n-1} \sup_{z \in r^{-1} \bD} |f(z)|
\end{equation*}
for all $f \in H^\infty(r^{-1} \bD)$.
\end{lem}
\begin{proof}
We first observe that the condition $\Aut(\bD) \subset \Mult(\cH)$ implies
that $z \in \Mult(\cH)$ and that $\sigma(z) \subset \ol{\bD}$.
Indeed, if $\lambda \notin \ol{\bD}$, then
\begin{equation*}
\frac{\ol{\lambda}^{-1} - z}{1 - \lambda^{-1} z} \in \Mult(\cH),
\end{equation*}
hence $\ol{\lambda}^{-1} - z$ belongs to the ideal of $\Mult(\cH)$
generated by $\lambda - z$, and hence $1$ belongs to this ideal, so that $\lambda -z$
is invertible. The analytic functional calculus therefore implies that every function
analytic in a neighborhood of $\ol{\bD}$ is a multiplier of $\cH$.
Next, let $\pi: \Mult(\cH) \to M_n$ be a unital bounded homomorphism, let $r \in (0,1)$
and let $f$ be an element of the unit ball of $H^\infty(r^{-1} \bD)$.
Define $A = \pi(z) \in M_n$, so that $\sigma(A) \subset \ol{\bD}$ by the first paragraph.
Applying Lemma \ref{lem:Blaschke_product} to the matrix $r A$ and the function $z \mapsto f(r^{-1} z)$,
we find $\lambda \in \ol{\bD}$ and a finite Blaschke product $B$ of degree at most $n-1$ so that
$\lambda B(r A) = f(A)$. Using that $\pi$ is a unital bounded homomorphism, we conclude that
\begin{equation*}
\pi(f) = f(A) = \lambda B(r A) = \lambda \pi( B(r z)).
\end{equation*}
Since $B$ is a product of at most $n-1$ disc automorphisms, $\|B (r z)\|_{\Mult(\cH)} \le h(r)^{n-1}$,
so that
\begin{equation*}
\|\pi(f)\| \le \|\pi\| \|B( r z) \|_{\Mult(\cH)} \le \|\pi\| h(r)^{n-1},
\end{equation*}
which is the desired estimate.
\end{proof}
The following result is a generalization of Corollary \ref{cor:many_points}.
\begin{cor}
\label{cor:not_subhom}
Let $a \in (0,1)$. Then $\Mult(\cD_a)$ is not topologically subhomogeneous.
\end{cor}
\begin{proof}
As explained in the proof of Corollary \ref{cor:many_points}, $\Aut(\bD) \subset \Mult(\cD_a)$
and
\begin{equation*}
\|\theta\|_{\Mult(\cD_a)} = \|z\|_{\Mult(\cD_a)}
\end{equation*}
for all $\theta \in \Aut(\bD)$. Moreover, it is well known that if $\varphi \in \Mult(\cD_a)$
and $r \in (0,1)$, then $\varphi(r z) \in \Mult(\cD_a)$ and $\|\varphi(r z) \|_{\Mult(\cD_a)}
\le \|\varphi\|_{\Mult(\cD_a)}$; this follows, for instance, from rotational invariance of $\Mult(\cD_a)$
and a routine application of the Poisson kernel. Thus, the function $h$ in Lemma \ref{lem:subhomogeneous}
is bounded above by $\|z\|_{\Mult(\cD_a)}$. An application of Lemma \ref{lem:subhomogeneous} therefore
shows that for every unital bounded homomorphism $\pi: \Mult(\cD_a) \to M_n$
and every polynomial $f \in \bC[z]$, the estimate
\begin{equation*}
\|\pi(f)\| \le \|\pi\| \|z\|^{n-1}_{\Mult(\cD_a)} \sup_{z \in \ol{\bD}} |f(z)|
\end{equation*}
holds. Since the multiplier norm on $\cD_a$ is not dominated by a constant times the supremum norm
on $\ol{\bD}$ (see the proof of Corollary \ref{cor:many_points}),
it follows that $\Mult(\cD_a)$ is not topologically subhomogeneous.
\end{proof}
To apply Lemma \ref{lem:subhomogeneous} in the case of the Dirichlet space,
we require the following estimate.
\begin{lem}
\label{lem:dirichlet_auto_multiplier}
There exists a constant $C > 0$ so that for all $\theta \in \Aut(\bD)$
and all $r \in (0,1)$,
\begin{equation*}
\|\theta(r z) \|_{\Mult(\cD)} \le C \log \Big( \frac{2}{1-r} \Big)^{1/2}.
\end{equation*}
\end{lem}
\begin{proof}
It is a result of Brown and Shields \cite[Proposition 18]{BS84} that if $f \in H^\infty$ and
$\sum_{n=1}^\infty (n \log(n)) | \widehat f(n)|^2 < \infty$, then $f \in \Mult(\cD)$; see also \cite[Exercise 5.1.3]{EKM+14}. Here $f(z) = \sum_{n=0}^\infty \widehat f(n) z^n$.
The closed graph theorem (or an inspection of the proof of this result) then shows that
there exists a constant $C > 0$ such that
\begin{equation*}
\|f\|_{\Mult(\cD)} \le C \Big( \|f\|_\infty + \Big( \sum_{n=1}^\infty n \log(n) | \widehat f(n)|^2 \Big)^{1/2} \Big)
\end{equation*}
for all functions $f$ that are holomorphic in a neighborhood of $\ol{\bD}$.
We will apply this inequality to a disc automorphism $\theta(z) = \frac{a - z}{1 - \ol{a} z}$, where $a \in \bD$,
and the function $f(z) = \theta(r z)$. Note that
\begin{equation*}
f(z) = a - \sum_{n=1}^\infty r (\ol{a} r)^{n-1} (1 - |a|^2) z^n.
\end{equation*}
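The power series expansion of $f$ stated above can be checked directly with a geometric series; the computation below is a routine verification.

```latex
\theta(r z) = \frac{a - r z}{1 - \ol{a} r z}
= (a - r z) \sum_{n=0}^\infty (\ol{a} r)^n z^n
= a + \sum_{n=1}^\infty \big( a (\ol{a} r)^n - r (\ol{a} r)^{n-1} \big) z^n,
```

and the coefficients simplify because $a (\ol{a} r)^n - r (\ol{a} r)^{n-1} = (\ol{a} r)^{n-1} r (|a|^2 - 1) = - r (\ol{a} r)^{n-1}(1 - |a|^2)$.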
It therefore suffices to show that there exists a (possibly different) constant $C > 0$
so that for all $a \in \bD$ and all $r \in [0,1)$,
\begin{equation}
\label{eqn:auto_mult}
\sum_{n=1}^\infty n \log(n) r^2 |a r|^{2 n -2} (1 - |a|^2)^2 \le
C \log \Big( \frac{2}{1 - r} \Big).
\end{equation}
Clearly, we may assume that $|a| \ge \frac{1}{2}$ and $r \ge \frac{1}{2}$.
We use the known asymptotic
\begin{equation*}
\sum_{n=1}^\infty n \log(n) x^n \sim \frac{1}{(1 - x)^2} \log \Big( \frac{1}{1- x} \Big) \quad
\text{ as } x \nearrow 1,
\end{equation*}
which for instance can be deduced from \cite[Chapter VII, Example 5, p. 242]{Titchmarsh52}.
With this asymptotic identity, we estimate the left hand side of \eqref{eqn:auto_mult}
as
\begin{align*}
r^2 (1 - |a|^2)^2 \sum_{n=1}^\infty n \log(n) |a r|^{2 n -2}
&\lesssim \frac{ (1 - |a|^2)^2}{( 1- | a r|^2)^2} \log \Big( \frac{1}{1 - |a r|^2} \Big) \\
&\le \log \Big( \frac{1}{1 - r} \Big),
\end{align*}
where the implied constants do not depend on $a$ or on $r$. This estimate finishes the proof.
\end{proof}
\begin{rem}
Lemma \ref{lem:der_dirichlet} shows that the estimate in Lemma \ref{lem:dirichlet_auto_multiplier}
is best possible up to constants. Indeed, if $\theta(z) = - \frac{r - z}{1 - r z}$
and $f(z) = \theta(rz)$, then $f'(r) = \frac{r (1-r^2)}{(1-r^3)^2} \sim \frac{2}{9(1-r)}$ as $r \nearrow 1$,
so
\begin{equation*}
\|\theta(r z) \|_{\Mult(\cD)} \gtrsim \log \Big( \frac{1}{1 - r} \Big)^{1/2}
\quad \text{ as } r \nearrow 1.
\end{equation*}
\end{rem}
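The value of $f'(r)$ quoted in the remark is a direct quotient-rule computation for $f(z) = \theta(r z) = \frac{r z - r}{1 - r^2 z}$:

```latex
f'(z) = \frac{r (1 - r^2 z) + r^2 (r z - r)}{(1 - r^2 z)^2}
= \frac{r (1 - r^2)}{(1 - r^2 z)^2},
\qquad \text{so} \qquad
f'(r) = \frac{r (1 - r^2)}{(1 - r^3)^2}
\sim \frac{2}{9 (1-r)} \quad \text{as } r \nearrow 1,
```

where the asymptotic uses $1 - r^2 = (1-r)(1+r)$ and $1 - r^3 = (1-r)(1 + r + r^2)$.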
We are now ready to show that $\Mult(\cD)$ is not subhomogeneous.
\begin{prop}
\label{prop:dirichlet_subhom}
The algebra $\Mult(\cD)$ is not topologically subhomogeneous.
In particular, there do not exist a constant $C > 0$ and $n \in \bN$ so that
\begin{equation*}
\|\varphi\|_{\Mult(\cD)} \le C \|\varphi\|_{\Mult(\cD),n}
\end{equation*}
for all $\varphi \in \Mult(\cD)$.
\end{prop}
\begin{proof}
Assume towards a contradiction that there exist $n \in \bN$ and constants $C_1,C_2 > 0$
so that for all $f \in \bC[z]$, the estimate
\begin{equation}
\label{eqn:subhom_contra}
\|f\|_{\Mult(\cD)} \le C_1 \sup\{ \|\pi(f) \| \}
\end{equation}
holds,
where the supremum is taken over all unital homomorphisms $\pi: \Mult(\cD) \to M_k$
with $\|\pi\| \le C_2$ and $k \le n$.
Lemma \ref{lem:subhomogeneous} and
Lemma \ref{lem:dirichlet_auto_multiplier} show that there
exists a constant $C > 0$ so that for all $r \in (0,1)$, all $f \in \bC[z]$
and all representations $\pi: \Mult(\cD) \to M_k$ as above,
\begin{equation*}
\|\pi(f)\|
\le C_2 C^{n-1}
\log \Big( \frac{2}{1 -r} \Big)^{(n-1)/2}
\sup_{|z| \le r^{-1}} |f(z)|.
\end{equation*}
We apply this inequality to $f(z) = z^m$ for $m$ large and $r = 1 - \frac{1}{m}$.
Since $(1 - \frac{1}{m})^{-m}$ tends to $e$, we find that
\begin{equation*}
\|\pi(z^m)\| \lesssim \log(2 m)^{(n-1)/2},
\end{equation*}
where the implied constant is independent of $m$ and of the particular
representation $\pi$. On the other hand,
\begin{equation*}
\|z^m\|_{\Mult(\cD)} \ge \|z^m\|_{\cD} = \sqrt{m+1}.
\end{equation*}
Since
\begin{equation*}
\frac{\sqrt{m+1}}{\log(2 m)^{(n-1)/2}} \to \infty
\end{equation*}
as $m \to \infty$, this contradicts
the assumption \eqref{eqn:subhom_contra}.
Finally, the additional statement follows from the fact that the $n$-point
norm is dominated by the supremum on the right-hand side of \eqref{eqn:subhom_contra};
see Equation \eqref{eqn:subhom} in the introduction.
\end{proof}
\section{Topological subhomogeneity for spaces on the ball}
\label{sec:subhom_ball}
In this section, we generalize the results about topological subhomogeneity in the preceding section from the unit disc
to the unit ball. To deduce the higher dimensional statements from the ones in dimension one, we establish
an embedding result. To this end, we require a variant of a result of Kacnelson \cite{Kacnelson72}.
Let $I$ be a totally ordered set and let $\cH$ be a Hilbert space with orthogonal basis $(e_i)_{i \in I}$.
We say that a bounded linear operator $T$ on $\cH$ is lower triangular if
\begin{equation*}
\langle T e_i, e_j \rangle = 0 \quad \text{ whenever } j < i,
\end{equation*}
and we write $\cT(\cH)$ for the algebra of all bounded lower triangular operators on $\cH$.
The statement below can be found
in \cite[Corollary 3.2]{AHM+18a} in the case when $I = \bN_0$, equipped with
the usual ordering.
\begin{lem}
\label{lem:kacnelson_triangular}
Let $\cH$ and $\cK$ be Hilbert spaces with $\cH \subset \cK$ as vector spaces.
Let $I$ be a totally ordered set and let $(e_i)_{i \in I}$ be a family
of vectors in $\cH$ that is an orthogonal basis both for $\cH$ and for $\cK$.
If the family $( \|e_i\|_{\cH} / \|e_i\|_{\cK})$
is non-decreasing with respect to the order on $I$,
then $\cT(\cH) \subset \cT(\cK)$, and the inclusion is a complete contraction.
\end{lem}
\begin{proof}
If $I$ is a finite set, the result was established in \cite[Corollary 3.2]{AHM+18a}.
We will reduce the general case to this particular case. To this end,
let $\cF(I)$ denote the set of finite subsets of $I$, which is a directed
set under inclusion.
For $J \in \cF(I)$,
let $\cH_J \subset \cH$ and $\cK_J \subset \cK$ denote the linear spans of $\{e_i: i \in J\}$,
respectively, so that $\cH_J = \cK_J$ as vector spaces, but with possibly different norms.
Moreover, let $P_J \in B(\cK)$ denote the orthogonal projection onto
$\cK_J$, and notice that on $\cH$, the operator $P_J$
also acts as the orthogonal projection onto $\cH_J$.
Since for any $T \in \cT(\cH)$, the operator
$
P_J T P_J
$
is lower triangular on $\cH_J$, the case of finite $I$ shows that
$\|P_J T P_J\|_{B(\mathcal{K})} \le \|T\|_{B(\mathcal{H})}$, and
the net $(P_J T P_J)_{J \in \cF(I)}$ converges to $T$ in the strong operator topology of $\cB(\cK)$.
The general result follows from this observation.
\end{proof}
The main tool of this section is the following lemma. It allows us to
embed multiplier algebras on the unit disc into multiplier
algebras on the unit ball by extending a function $f$ on $\bD$
to the function $z \mapsto f(z_1)$ on the unit ball.
\begin{lem}
\label{lem:first_proj}
Let $\cH$ be a reproducing kernel Hilbert space on $\bB_d$ with reproducing
kernel
\begin{equation*}
K(z,w) = \sum_{n=0}^\infty a_n \langle z,w \rangle^n,
\end{equation*}
where $a_n > 0$ for all $n \in \bN_0$.
Let $d' \le d$ and let $\cH'$ be the reproducing kernel Hilbert space on $\bB_{d'}$ with
reproducing kernel $K \big|_{ \bB_{d'} \times \bB_{d'}}$.
Then the map
\begin{equation*}
\Phi: \Mult(\cH') \to \Mult(\cH), \quad \varphi \mapsto \varphi \circ P,
\end{equation*}
where $P$ denotes the projection from $\bC^d$ onto $\bC^{d'}$ given
by $P(z_1,\ldots,z_d) = (z_1,\ldots,z_{d'})$,
is a complete isometry.
\end{lem}
\begin{proof}
We will repeatedly use the basic fact that the monomials $(z^\alpha)_{\alpha \in \mathbb{N}_0^d}$
form an orthogonal basis of $\mathcal{H}$ with
\begin{equation*}
\|z^\alpha\|^2_{\mathcal{H}} = \frac{\alpha!}{a_{|\alpha|} |\alpha|!}.
\end{equation*}
In particular, the map
\begin{equation*}
V: \cH' \to \cH, \quad f \mapsto f \circ P,
\end{equation*}
is an isometry, so it suffices to show that $\Phi$ is a complete contraction.
Clearly, we may assume that $d' < d$.
Throughout the proof, we write $z= (u,v)$, where $u = (z_1,\ldots,z_{d'})$
and $v = (z_{d'+1} ,\ldots, z_d)$.
For $\beta \in \bN_0^{d - d'}$, let
\begin{equation*}
H_{\beta} = \bigvee \{u^\alpha v^\beta : \alpha \in \bN_0^{d'}\} \subset \cH,
\end{equation*}
so that $\cH$ is the orthogonal direct sum of the subspaces $H_{\beta}$. Moreover,
$H_0 = V \cH'$.
Observe that if $\varphi \in \Mult(\cH')$, then the (a priori unbounded) operator
$M_{\varphi \circ P}$ preserves this direct sum decomposition. Thus, it suffices
to show that for all $\beta \in \bN_0^{d - d'}$, all $n \in \bN$ and all $[\varphi_{i j}] \in M_n(\Mult(\cH'))$,
\begin{equation*}
\| [M_{\varphi_{i j} \circ P} \big|_{H_\beta}] \|_{M_n(B(H_\beta))}
\le \| [M_{\varphi_{i j}}] \|_{M_n(B(\cH'))}.
\end{equation*}
To this end, let $\beta \in \bN_0^{d - d'}$, let $k = |\beta|$
and let $\widetilde H_{\beta}$ be the Hilbert space of formal power series
in the variable $u = (z_1,\ldots,z_{d'})$ with orthogonal basis $(u^{\alpha} z_1^k)_{\alpha \in \bN_0^{d'}}$ and norm
defined by
\begin{equation*}
\| u^\alpha z_1^k \|_{\widetilde H_{\beta}} = \|u^\alpha v^\beta \|_{\cH}.
\end{equation*}
If $\varphi \in \Mult(\cH')$, then $M_{\varphi \circ P} \big|_{H_{\beta}}$ is unitarily equivalent
to $M_{\varphi}$ on $\widetilde H_{\beta}$.
To compare the norm of $M_\varphi$ on $\widetilde H_\beta$ and on $\cH'$, we will use
Lemma \ref{lem:kacnelson_triangular}.
Let $\cH'_k = \bigvee \{u^\alpha z_1^k : \alpha \in \bN_0^{d'} \} \subset \cH'$. Then
\begin{equation}
\label{eqn:beta}
\frac{ \| u^\alpha z_1^k \|^2_{\cH_k'} }{ \|u^\alpha z_1^k \|_{\widetilde H_{\beta}}^2} =
\frac{ \|u^\alpha z_1^k \|^2_{\cH}}{\|u^\alpha v^\beta \|^2_{\cH}}
= \frac{ (\alpha + k e_1 )!}{\alpha! \beta!}
= \frac{ \prod_{j=1}^k (\alpha_1 + j)}{\beta!},
\end{equation}
which is non-decreasing in $\alpha_1$. Thus, $\cH_k' \subset \widetilde H_\beta$ as vector spaces, and if we order $\bN_0^{d'}$ lexicographically,
then the quantity in \eqref{eqn:beta} is non-decreasing in $\alpha \in \bN_0^{d'}$. Moreover,
if $\varphi \in \Mult(\cH')$, then the operator $M_{\varphi}$ is lower triangular with respect
to the orthogonal basis $(u^\alpha z_1^k)_{\alpha}$ of $\cH_k'$ in this ordering,
so that by Lemma \ref{lem:kacnelson_triangular},
\begin{align*}
\| [M_{\varphi_{i j} \circ P} \big|_{H_\beta}] \|_{M_n(B(H_\beta))}
= \| [M_{\varphi_{i j}} ] \|_{M_n(B(\widetilde H_\beta))}
&\le \| [M_{\varphi_{i j}} \big|_{\cH'_k}] \|_{M_n(B(\cH'_k))} \\
&\le \| [M_{\varphi_{i j}}] \|_{M_n(B(\cH'))}
\end{align*}
for all $n \in \bN$ and all $[\varphi_{i j}] \in M_n(\Mult(\cH'))$.
\end{proof}
\begin{rem}
\label{rem:first_proj_cnp}
If $\cH$ is a complete Nevanlinna--Pick space, one can argue more easily as follows.
Let $\Phi \in \Mult(\cH' \otimes \bC^n)$ be a multiplier of norm at most $1$.
By the complete Nevanlinna--Pick property, there exists a multiplier
$\Psi \in \Mult(\cH \otimes \bC^n)$ of norm at most $1$ with $\Psi \big|_{\bB_{d'}} = \Phi$.
Writing $z=(u,v)$ as in the preceding proof, we see that for $(u,v) \in \bB_d$,
\begin{equation*}
(\Phi \circ P)(u,v) = \Psi(u,0)
= \frac{1}{2 \pi} \int_{0}^{2 \pi} \Psi(u, e^{ i t} v) \, dt.
\end{equation*}
The symmetry of $K$ implies that for each $t \in \bR$, the function $(u,v) \mapsto \Phi(u, e^{i t} v)$
is a multiplier of norm at most $1$, hence $\Phi \circ P$ is a multiplier of norm
at most $1$ as well. Since the reverse inequality is clear, this argument
proves the result in the case of a complete Nevanlinna--Pick space.
\end{rem}
We now obtain a negative answer to Question \ref{quest:subhomogeneous} for the spaces $\cD_a(\bB_d)$, where $a \in [0,1)$.
In particular, Question \ref{quest:many_points} has
a negative answer for these spaces as well.
\begin{cor}
\label{cor:ball_not_subhom}
Let $a \in [0,1)$ and let $d \in \bN$. Then $\Mult(\cD_a(\bB_d))$ is not topologically subhomogeneous.
In particular, there do not exist a constant $C > 0$ and $n \in \bN$ so that
\begin{equation*}
\|f\|_{\Mult(\cD_a(\bB_d))} \le C
\|f\|_{\Mult(\cD_a(\bB_d)),n}
\end{equation*}
for all $f \in \Mult(\cD_a(\bB_d))$.
\end{cor}
\begin{proof}
By Proposition \ref{prop:dirichlet_subhom} and Corollary \ref{cor:not_subhom}, the algebra
$\Mult(\cD_a(\bD))$ is not topologically subhomogeneous. Lemma \ref{lem:first_proj} shows
that $\Mult(\cD_a(\bD))$ is (completely isometrically) isomorphic to a subalgebra of $\Mult(\cD_a(\bB_d))$,
hence $\Mult(\cD_a(\bB_d))$ is not topologically subhomogeneous either. As before, the additional
statement follows from the fact that the $n$-point multiplier norm can be computed using
representations of dimension at most $n$.
\end{proof}
\section{Embedding weighted Dirichlet spaces into the Drury--Arveson space}
\label{sec:embedding}
In this section, we show that the multiplier algebra of the Drury--Arveson
space $H^2_d$ is not subhomogeneous for $d \ge 2$. This does not follow
from Lemma \ref{lem:first_proj}, as the multiplier algebra of $H^2_1$ is $H^\infty(\bD)$, which is $1$-subhomogeneous.
Instead, we show that multiplier algebras of certain weighted Dirichlet spaces embed completely
isometrically into $\Mult(H^2_d)$.
We begin with a known construction that produces an embedding for the Hilbert function spaces.
Let $d \in \bN$ and let
\begin{equation*}
\tau: \ol{\bB_d} \to \ol{\bD}, \quad z \mapsto d^{d/2} z_1 z_2 \ldots z_d.
\end{equation*}
It follows from the inequality of arithmetic and geometric means that $\tau$ maps $\ol{\bB_d}$
onto $\ol{\bD}$ and $\bB_d$ onto $\bD$. Let
\begin{equation*}
a_n = \| \tau^n\|^{-2}_{H^2_d}
\end{equation*}
and let $\cH_d$ be the reproducing kernel Hilbert space on $\bD$ with reproducing kernel
\begin{equation*}
K(z,w) = \sum_{n=0}^\infty a_n (z \ol{w})^n.
\end{equation*}
The spaces $\cH_d$ are weighted Dirichlet spaces on $\bD$ that embed into $H^2_d$, cf.\ Lemmas 3.1 and 4.1
in \cite{Hartz17}.
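The appeal to the inequality of arithmetic and geometric means can be made explicit: for $z \in \ol{\bB_d}$,

```latex
|\tau(z)|^2 = d^d \prod_{j=1}^d |z_j|^2
\le d^d \Big( \frac{1}{d} \sum_{j=1}^d |z_j|^2 \Big)^d
= \|z\|^{2 d} \le 1,
```

with equality in the first inequality when $|z_1| = \cdots = |z_d|$; since $\tau\big( t d^{-1/2} (1,\ldots,1) \big) = t^d$ for $t \in [0,1]$, surjectivity onto $\ol{\bD}$ follows after multiplying one coordinate by a unimodular constant.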
\begin{lem}
\label{lem:H_d_embed}
Let $d \in \bN$.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item The map
\begin{equation*}
V: \cH_d \to H^2_d, \quad f \mapsto f \circ \tau,
\end{equation*}
is an isometry whose range consists of all functions in $H^2_d$ that are power series in $z_1 z_2 \ldots z_d$.
\item The asymptotic identity $a_n \approx (n+1)^{(1-d)/2}$ holds.
\item The space $\cH_2$ is equal to the weighted Dirichlet space $\cD_{1/2}$, with equality of norms.
\item The space $\cH_3$ is equal to the classical Dirichlet space $\cD$, with equivalence of norms.
\item For $d \ge 4$, $\cH_d \subset A(\bD)$ and $\Mult(\cH_d) = \cH_d$ with equivalent norms.
\end{enumerate}
\end{lem}
\begin{proof}
(a)
Since $(z^n)$ and $(\tau^n)$ are orthogonal sequences in $\cH_d$ and $H^2_d$, respectively, $V$
is an isometry by definition of the sequence $(a_n)$. Moreover, if $M$ denotes the closed
subspace of $H^2_d$ generated by the monomials $(z_1 z_2 \ldots z_d)^n$, then the range of $V$ is clearly
contained in $M$ and contains all polynomials in $z_1 z_2 \cdots z_d$, so it is equal to $M$.
(b) By Stirling's formula, $n! \sim \sqrt{2 \pi n} (\frac{n}{e})^n$, so
\begin{equation*}
a_n^{-1} = \|\tau^n\|^2_{H^2_d} = d^{n d} \frac{ (n!)^d}{(d n)!} \sim
(2 \pi)^{(d-1)/2} d^{-1/2} n^{(d-1)/2},
\end{equation*}
from which (b) follows.
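In more detail, inserting Stirling's formula $n! \sim \sqrt{2 \pi n}\, (n/e)^n$ into both the numerator and the denominator gives

```latex
d^{n d} \frac{(n!)^d}{(d n)!}
\sim d^{n d} \, \frac{(2 \pi n)^{d/2} (n/e)^{n d}}{\sqrt{2 \pi d n} \, (d n / e)^{d n}}
= \frac{(2 \pi n)^{d/2}}{\sqrt{2 \pi d n}}
= (2 \pi)^{(d-1)/2} d^{-1/2} n^{(d-1)/2},
```

since $(d n / e)^{d n} = d^{d n} (n/e)^{d n}$ cancels the factor $d^{n d}$.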
(c) follows from a computation involving the binomial series, see \cite[Lemma 4.1]{Hartz17}.
(d) is a consequence of (b).
(e) Part (b) shows that the sequence $(a_n)$ belongs to $\ell^1$ for $d \ge 4$, so that $\cH_d$ consists of
continuous functions on $\ol{\bD}$.
Finally, the equality $\cH_d = \Mult(\cH_d)$ follows from part (b) combined with
Proposition 31 and Example 1 in Section 9
of \cite{Shields74}.
\end{proof}
We will show that the embedding $V$ of Lemma \ref{lem:H_d_embed} is also a complete isometry on the level
of multiplier algebras.
\begin{prop}
\label{prop:mult_embed}
For $d \in \bN$, the map
\begin{equation*}
\Phi: \Mult(\cH_d) \to \Mult(H^2_d), \quad \varphi \mapsto \varphi \circ \tau,
\end{equation*}
is a unital complete isometry.
\end{prop}
We require some preparation for the proof of this result.
\begin{rem}
It follows from part (c) of Lemma \ref{lem:H_d_embed} that the space $\cH_2$ has the
complete Nevanlinna--Pick property, and by part (b) of the same lemma, the spaces $\cH_d$ have the complete Nevanlinna--Pick property at least up to equivalence of norms.
Indeed, $\mathcal{D}_{1/2}$ is well known to have the complete Nevanlinna--Pick property,
which can be seen by expanding the reciprocal of the reproducing kernel of $\mathcal{D}_{1/2}$ into a binomial series, see also \cite[Theorem 7.33]{AM02}.
Moreover, if $\widetilde{\mathcal{H}_d}$ is the reproducing kernel Hilbert space on $\mathbb{D}$ with reproducing kernel
\begin{equation*}
\widetilde{K}(z,w) = \sum_{n=0}^\infty (n+1)^{(1-d)/2} (z \overline{w})^n,
\end{equation*}
then $\widetilde{\mathcal{H}_d}$ has the complete Nevanlinna--Pick property for all $d \ge 1$ (see, for instance, \cite[Corollary 7.41]{AM02}),
and Lemma \ref{lem:H_d_embed} (b) implies that $\mathcal{H}_d = \widetilde{\mathcal{H}_d}$ with equivalent norms.
We emphasize that the construction in Proposition \ref{prop:mult_embed} differs from the universality
theorem for complete Nevanlinna--Pick spaces of Agler and M\textsuperscript{c}Carthy\ \cite{AM00}.
Indeed, the main result of \cite{AM00}
yields an injection $j: \bD \to \bB_d$ for some $d \in \bN \cup \{ \infty \}$
such that
\begin{equation*}
H^2_d \to \cH_2, \quad f \mapsto f \circ j,
\end{equation*}
is a co-isometry and such that
\begin{equation*}
\Mult(H^2_d) \to \Mult(\cH_2), \quad \varphi \mapsto \varphi \circ j,
\end{equation*}
is a complete quotient map. In fact, $d= \infty$ is necessary,
see \cite[Corollary 11.9]{Hartz17a}. Thus, \cite{AM00} allows us to realize $\Mult(\cH_2)$ as a quotient
of $\Mult(H^2_\infty)$. In Proposition \ref{prop:mult_embed}, the arrows are reversed.
We obtain a surjection $\tau: \bB_2 \to \bD$ such that
\begin{equation*}
\cH_2 \to H^2_2, \quad f \mapsto f \circ \tau,
\end{equation*}
is an isometry and such that
\begin{equation*}
\Mult(\cH_2) \to \Mult(H^2_2), \quad \varphi \mapsto \varphi \circ \tau,
\end{equation*}
is a complete isometry. Thus, Proposition \ref{prop:mult_embed} allows us to realize $\Mult(\cH_2)$ as a subalgebra
of $\Mult(H^2_2)$.
\end{rem}
Our proof of Proposition \ref{prop:mult_embed} is related to a proof suggested by Ken Davidson in the case $d=2$.
Davidson's proof uses Schur multipliers and is in turn based on a computation of Jingbo Xia (see \cite[Example 6.6]{CD16}).
We begin by analyzing the action of the operator $M_\tau$ on the Drury--Arveson space.
In our computations with monomials, we will use familiar multi-index notation. In addition,
we set $\mathbf{1} = (1,1,\ldots,1) \in \bN_0^d$
and define
\begin{equation*}
\partial \bN_0^d = \{ \alpha \in \bN_0^d: \alpha_j = 0 \text{ for some } j \in \{1,\ldots, d \} \}.
\end{equation*}
Finally, if $\alpha \in \partial \bN_0^d$, let
\begin{equation*}
X_\alpha = \{ \alpha + k \mathbf{1}: k \in \bN_0 \}
\end{equation*}
and correspondingly, define
\begin{equation*}
H_\alpha = \ol{ \spa} \{ z^\beta: \beta \in X_\alpha \} \subset H^2_d.
\end{equation*}
Observe that the space $H_0$ is precisely the range of the isometry $V$ of Lemma \ref{lem:H_d_embed}.
Recall that $\tau(z) = d^{d/2} z_1 z_2 \ldots z_d$.
\begin{lem}
\label{lem:DA_decomp}
For $d \in \bN$, the space $H^2_d$ admits an orthogonal decomposition
\begin{equation*}
H^2_d = \bigoplus_{\alpha \in \partial \bN_0^d} H_\alpha.
\end{equation*}
For each $\alpha \in \partial \bN_0^d$, the space $H_\alpha$ is reducing for the multiplication operator $M_\tau$,
and the operator $M_\tau \big|_{H_\alpha}$ is unitarily equivalent to a unilateral weighted shift with positive weight sequence
$(w_{k,\alpha})_{k=0}^\infty$, where
\begin{equation*}
w_{k, \alpha}^2 = d^d \frac{ \prod_{j=1}^d (\alpha_j+k+1)}{\prod_{j=1}^d ( |\alpha| + k d + j)}.
\end{equation*}
\end{lem}
\begin{proof}
To prove the first assertion, it suffices to show that $\bN_0^d$ is the disjoint union
\begin{equation*}
\bN_0^d = \bigcup_{ \alpha \in \partial \bN_0^d} X_\alpha.
\end{equation*}
To see this, let $\beta \in \bN_0^d$ and let $k = \min (\beta_1,\ldots,\beta_d)$. Then $\beta - k \mathbf{1} \in \partial \bN_0^d$, so $\beta \in X_\alpha$, where $\alpha = \beta - k \mathbf{1}$. To see that the union is disjoint,
observe that if $\beta = \alpha + k \mathbf{1}$ with $\alpha \in \partial \bN_0^d$,
then $k = \min (\beta_1,\ldots,\beta_d)$ since $\alpha \in \partial \bN_0^d$,
hence $\alpha$ is uniquely determined by $\beta$.
It is clear that each $H_\alpha$ is invariant under $M_\tau$; since the spaces $H_\alpha$ are pairwise orthogonal and their sum is $H^2_d$, each $H_\alpha$ is in fact reducing for $M_\tau$. Moreover,
with respect to the orthonormal basis $(z^{\alpha + k \mathbf{1}} / ||z^{\alpha + k \mathbf{1}}||)_{k=0}^\infty$ of $H_\alpha$,
the operator $M_\tau$ is a weighted shift with positive weights $w_{k,\alpha}$ given by
\begin{align*}
w_{k,\alpha}^2 = d^d \frac{ ||z^{\alpha + (k+1) \mathbf{1}}||^2}{ ||z^{\alpha + k \mathbf{1}}||^2}
&= d^d \frac{ (\alpha + (k+1) \mathbf{1})! \, | \alpha + k \mathbf{1}|!}{(\alpha + k \mathbf{1})! \, |(\alpha + (k+1) \mathbf{1})|!} \\
&= d^d \frac{ \prod_{j=1}^d (\alpha_j+k+1)}{\prod_{j=1}^d ( |\alpha| + k d + j)},
\end{align*}
as asserted.
\end{proof}
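For illustration, in the case $d = 2$ and $\alpha = 0$ the weight formula specializes to

```latex
w_{k,0}^2 = 2^2 \, \frac{(k+1)^2}{(2k+1)(2k+2)} = \frac{2(k+1)}{2k+1},
```

a sequence decreasing to $1$; this is consistent with the identification $\cH_2 = \cD_{1/2}$ in Lemma \ref{lem:H_d_embed}, since $M_\tau \big|_{H_0}$ is unitarily equivalent to $M_z$ on $\cH_2$.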
To prove Proposition \ref{prop:mult_embed}, we need to control the norm of $M_\tau$, and more generally
of operators of the form $M_{\varphi \circ \tau}$, on the spaces $H_\alpha$.
For this purpose, we will use Kacnelson's result \cite{Kacnelson72} in a similar way as we did
in the proof of Lemma \ref{lem:first_proj}. More precisely, we will apply the following
variant of Kacnelson's result.
\begin{cor}
\label{cor:weighted_shift_domination}
Let $\cH$ be a Hilbert space with an orthonormal basis $(e_n)_{n=0}^\infty$
and let $S_1$ and $S_2$ be two weighted shifts given by
\begin{equation*}
S_1 e_n = \alpha_n e_{n+1} \quad \text{ and } \quad S_2 e_n = \beta_n e_{n+1},
\end{equation*}
where $\alpha_n,\beta_n > 0$ for $n \in \bN_0$.
If $\beta_n \le \alpha_n$ for all $n \in \bN_0$, then the map
\begin{equation*}
p(S_1) \mapsto p(S_2) \quad (p \in \bC[z])
\end{equation*}
is a unital complete contraction.
\end{cor}
\begin{proof}
Let $\cH_1$ be the space of formal power series in the variable $z$ with orthogonal
basis $(z^n)_{n=0}^\infty$ and norm defined by
\begin{equation*}
\|z^n\|_{\cH_1} = \prod_{j=0}^{n-1} \alpha_j,
\end{equation*}
where the empty product is understood as $1$.
Similarly, $\cH_2$ has a norm defined by
\begin{equation*}
\|z^n\|_{\cH_2} = \prod_{j=0}^{n-1} \beta_j.
\end{equation*}
Then $S_i$ is unitarily equivalent to the operator of multiplication by $z$ on $\cH_i$ for $i=1,2$,
and these operators are lower triangular with respect to the orthogonal bases given by the monomials.
Since $\beta_n \le \alpha_n$ for all $n \in \bN_0$, the sequence $\|z^n\|_{\cH_1} / \|z^n\|_{\cH_2}$
is non-decreasing, so the result follows from Lemma \ref{lem:kacnelson_triangular}.
\end{proof}
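For orientation, taking $\alpha_n = 1$ for all $n$, so that $S_1$ is unitarily equivalent to the shift $M_z$ on $H^2$, the corollary recovers the von Neumann inequality for weighted shifts whose weights are at most $1$:

```latex
\beta_n \le 1 \ (n \in \bN_0) \quad \Longrightarrow \quad
\|p(S_2)\| \le \|p(S_1)\| = \sup_{z \in \bD} |p(z)| \qquad (p \in \bC[z]).
```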
To apply Corollary \ref{cor:weighted_shift_domination}, we require the following combinatorial inequality.
\begin{lem}
\label{lem:weight_inequality}
Let $w_{k,\alpha}$ denote the positive weights of Lemma \ref{lem:DA_decomp}, that is,
\begin{equation*}
w_{k, \alpha}^2 = d^d \frac{ \prod_{j=1}^d (\alpha_j+k+1)}{\prod_{j=1}^d ( |\alpha| + k d + j)}.
\end{equation*}
Then the weights $w_{k,\alpha}$ satisfy the inequalities
\begin{equation*}
w_{k, \alpha} \le w_{k,0}
\end{equation*}
for all $k \in \bN_0$ and all $\alpha \in \bN_0^d$.
\end{lem}
\begin{proof}
By the inequality of arithmetic and geometric means,
\begin{equation*}
w_{k,\alpha}^2 \le d^d
\frac{ (\tfrac{|\alpha|}{d} + k+ 1)^{d} }{\prod_{j=1}^d ( |\alpha| + k d + j)},
\end{equation*}
so it suffices to show that for all real numbers $r \ge 0$ and all $k \in \bN_0$, the inequality
\begin{equation*}
\frac{ (\tfrac{r}{d} + k+ 1)^{d} }{\prod_{j=1}^d ( r + k d + j)}
\le \frac{(k+1)^{d}}{\prod_{j=1}^d (k d + j)}
\end{equation*}
holds. By taking logarithms, we see that this last statement is equivalent to the assertion that for
any $k \in \bN_0$, the function $f:[0,\infty) \to \bR$ defined by
\begin{equation*}
f(r) = d \log( \tfrac{r}{d} + k + 1) - \sum_{j=1}^d \log(r + k d + j),
\end{equation*}
has a global maximum at $r=0$.
To see this, we compute the derivative
\begin{equation*}
f'(r) = \frac{1}{\tfrac{r}{d} + k + 1} - \sum_{j=1}^d \frac{1}{r + d k + j}
= \frac{1}{\tfrac{r}{d} + k + 1} - \frac{1}{d} \sum_{j=1}^d \frac{1}{\tfrac{r}{d} + k + \tfrac{j}{d}}.
\end{equation*}
Since
\begin{equation*}
\frac{r}{d} + k + \frac{j}{d} \le \frac{r}{d}+k+1
\end{equation*}
for all $j \in \{1,\ldots,d \}$, we deduce from the formula for the derivative of $f$ that $f'(r) \le 0$
for all $r \in [0,\infty)$. In particular, $f(r) \le f(0)$ for all $r \in [0,\infty)$, which finishes the proof.
\end{proof}
We are now ready to prove Proposition \ref{prop:mult_embed}.
\begin{proof}[Proof of Proposition \ref{prop:mult_embed}]
We wish to show that the map
\begin{equation*}
\Phi: \Mult(\cH_d) \to \Mult(H^2_d), \quad \varphi \mapsto \varphi \circ \tau,
\end{equation*}
is a complete isometry.
Since $f \mapsto f \circ \tau$ is an isometry from $\cH_d$ into $H^2_d$ by part (a) of Lemma \ref{lem:H_d_embed},
it suffices to show that $\Phi$ is a complete contraction (and in particular maps into $\Mult(H^2_d)$).
To this end, let $p = [p_{i j}]$ be an $n \times n$-matrix of polynomials.
Lemma \ref{lem:DA_decomp} shows that $[p_{i j} (M_\tau)] \in \cB((H^2_d)^n)$ is the direct sum
of the operators
\begin{equation*}
[p_{i j} (M_\tau \big|_{H_\alpha})] \in \cB(H_\alpha^n)
\end{equation*}
for $\alpha \in \partial \bN_0^d$,
and that each $M_\tau \big|_{H_\alpha}$ is unitarily equivalent to a weighted shift with
weight sequence $(w_{k, \alpha})$. In this setting, Lemma \ref{lem:weight_inequality} and Corollary \ref{cor:weighted_shift_domination}
show that
\begin{equation*}
|| [p_{i j} (M_\tau \big|_{H_\alpha})] || \le || [p_{i j} (M_\tau \big|_{H_0}) ] ||
= \| [p_{i j} (M_z)] \|_{\cB(\cH_d^n)},
\end{equation*}
where the last equality follows from part (a) of Lemma \ref{lem:H_d_embed}. Thus,
\begin{equation*}
|| [{p_{i j} \circ \tau}] ||_{\Mult(H^2_d)^n} \le || [{p_{i j}}] ||_{\Mult(\cH_d)^n}.
\end{equation*}
The preceding paragraph shows that $\Phi$ is a unital complete contraction on the subspace
of all polynomials in $\Mult(\cH_d)$. It is well known that circular invariance of $\cH_d$
implies that the unit ball of $\Mult(\cH_d)$ contains a WOT-dense subset consisting of polynomials, so
a straightforward limiting argument finishes the proof.
\end{proof}
Our first application of Proposition \ref{prop:mult_embed} concerns subhomogeneity of $\Mult(H^2_d)$.
\begin{cor}
\label{cor:da_not_subhom}
If $d \ge 2$, then $\Mult(H^2_d)$ is not topologically subhomogeneous.
In particular, there do not exist a constant $C > 0$ and $n \in \bN$ such that
\begin{equation*}
||\varphi||_{\Mult(H^2_d)} \le C ||\varphi||_{\Mult(H^2_d),n}
\end{equation*}
for all $\varphi \in \Mult(H^2_d)$.
\end{cor}
\begin{proof}
By Corollary \ref{cor:not_subhom}, the multiplier algebra of the weighted Dirichlet
space $\cD_{1/2}$ is not topologically subhomogeneous. By Proposition \ref{prop:mult_embed} and part (c)
of Lemma \ref{lem:H_d_embed}, $\Mult(\cD_{1/2})$ is (completely isometrically) isomorphic
to a subalgebra of $\Mult(H^2_2)$, hence $\Mult(H^2_2)$ is not topologically subhomogeneous. The case of
general $d$ now follows from Lemma \ref{lem:first_proj} (or from Remark \ref{rem:first_proj_cnp}).
The additional statement is once again a consequence of the fact that the $n$-point multiplier norm
can be computed using representations of dimension at most $n$.
\end{proof}
\section{\texorpdfstring{Applications to multipliers of $H^2_d$}{Applications to multipliers of H2d}}
\label{sec:embedding_applications}
We will use Proposition \ref{prop:mult_embed} to establish two more results regarding multipliers
of the Drury--Arveson space.
Our first application concerns the Sarason function, which was studied in \cite{AHM+17c}. If $\cH$
is a complete Nevanlinna--Pick space whose kernel $K$ is normalized at a point and if $f \in \cH$, then the Sarason function
of $f$ is defined by
\begin{equation*}
V_f(z) = 2 \langle f, K(\cdot,z) f \rangle - \|f\|^2.
\end{equation*}
If $\cH = H^2$, then $\Re V_f$ is the Poisson integral of $|f|^2$. In particular, $f \in H^\infty$ if and only
if $\Re V_f$ is bounded.
In Theorem 4.5 of \cite{AHM+17c}, it is shown that
for a class of complete Pick spaces
including standard weighted Dirichlet spaces on the disc and the Drury--Arveson space,
boundedness of $\Re V_f$ implies that $f \in \Mult(\cH)$.
Proposition 4.8 of \cite{AHM+17c} shows that the converse
fails for the standard weighted Dirichlet spaces $\cD_{a}$ for $0 < a < 1$, that is,
there are multipliers of $\cD_a$ whose Sarason function has unbounded real part.
With the help of Proposition \ref{prop:mult_embed}, we can establish such a result
for the Drury--Arveson space as well. This answers a question of J\"org Eschmeier \cite{EschmeierPC}.
This result was also obtained with a different proof in the recent paper \cite{FX20} by Fang and Xia.
\begin{prop}
\label{prop:da_sarason}
For $d \ge 2$, there exist $\varphi \in \Mult(H^2_d)$ such that $\Re V_\varphi$ is unbounded.
\end{prop}
\begin{proof}
It suffices to construct such a multiplier $\varphi$ for $d = 2$, as the trivial extension
of $\varphi$ to $\bB_d$ for $d \ge 2$ (i.e.\ the function $\varphi \circ P$, where
$P$ is the projection onto the first two coordinates) will be a multiplier of $H^2_d$ (by Lemma \ref{lem:first_proj})
whose Sarason function agrees with that of $\varphi$ on $\bB_2$.
By Proposition 4.8 of \cite{AHM+17c}, there exists $u \in \Mult(\cD_{1/2})$ such that $\Re V_u$ is
unbounded in $\bD$. Let $\tau: \bB_2 \to \bD$ be defined as in Section \ref{sec:embedding} and let
$\varphi = u \circ \tau$. Then $\varphi \in \Mult(H^2_2)$ by Proposition \ref{prop:mult_embed}. It remains
to show that $\Re V_\varphi$ is unbounded in $\bB_2$.
To this end,
let $M = \ol{\spa}\{ (z_1 z_2)^n: n \in \bN_0 \} \subset H^2_2$, let $L$ denote
the reproducing kernel of $M$ and
let $K$ be the reproducing kernel of $\cD_{1/2}$.
It follows from part (c) of Lemma \ref{lem:H_d_embed} (or by direct computation) that
\begin{equation*}
K(\tau(z),\tau(w)) = L(z,w) \quad (z,w \in \bB_2).
\end{equation*}
Indeed, since $V$ is given by composition with $\tau$, we have $V^* L(\cdot,w) = K(\cdot, \tau(w))$,
and since $V: \cD_{1/2} \to M$ is a unitary, $L(\cdot,w) = V V^* L(\cdot,w) = K(\cdot, \tau(w)) \circ \tau$.
Since $\varphi \in M$, it is elementary to check that if $h \in M^\bot$, then $\varphi \cdot h \in M^\bot$.
Using the fact that $L(\cdot,w) = P_M S(\cdot,w)$, where $S$ denotes the reproducing
kernel of $H^2_2$, we therefore find that for $w \in \bB_2$,
\begin{equation*}
\langle \varphi, S(\cdot,w) \varphi \rangle = \langle \varphi, L(\cdot,w) \varphi \rangle
= \langle \varphi, (K(\cdot,\tau(w)) \circ \tau) \varphi \rangle = \langle u, K(\cdot,\tau(w)) u \rangle_{\cD_{1/2}},
\end{equation*}
where the last equality follows from part (a) of Lemma \ref{lem:H_d_embed} and the definition of $\varphi = u \circ \tau$.
Therefore, $V_\varphi = V_u \circ \tau$.
Since $\tau$ is surjective and $\Re V_u$ is unbounded, $\Re V_\varphi$ is unbounded as well.
\end{proof}
Our second application concerns subspaces of $H^2_d$ that entirely consist of multipliers.
It is a special case of a theorem of Grothendieck \cite{Grothendieck54}
that every closed infinite dimensional subspace of
$L^2(\bT)$ contains functions that are not essentially bounded; see \cite[Theorem 5.2]{Rudin91} for a fairly
elementary proof.
Thus, no infinite dimensional closed subspace of $H^2$ consists
entirely of multipliers of $H^2$. J\"org Eschmeier \cite{EschmeierPC} asked if there exist closed infinite
dimensional subspaces of $H^2_d$ that entirely consist of multipliers and that are graded in the sense that they are spanned by homogeneous polynomials. We use Proposition \ref{prop:mult_embed}
to provide a positive answer if $d \ge 4$.
\begin{cor}
If $d \ge 4$, then the space
\begin{equation*}
H_0 = \ol{\spa} \{ (z_1 z_2 \cdots z_d)^n: n \in \bN_0 \} \subset H^2_d
\end{equation*}
is contained in $\Mult(H^2_d)$.
\end{cor}
\begin{proof}
Part (a) of Lemma \ref{lem:H_d_embed} shows that $H_0 = \{f \circ \tau: f \in \cH_d \}$.
But $\cH_d = \Mult(\cH_d)$ by part (e) of Lemma \ref{lem:H_d_embed}, so that the result
follows from Proposition \ref{prop:mult_embed}.
\end{proof}
\section{Spaces between the Drury--Arveson space and the Hardy space}
\label{sec:spaces_between}
Recall that in the scale of spaces $\cD_a(\bB_d)$, the values
$a=1$ and $a=d$ correspond to the Drury--Arveson space and to the Hardy space, respectively.
We have already seen that $\Mult(\cD_a(\bB_d))$ is not topologically subhomogeneous
in the case $0 \le a < 1$ and also in the case $a=1$ and $d \ge 2$; see Corollary \ref{cor:ball_not_subhom}
and Corollary \ref{cor:da_not_subhom}.
On the other
hand, if $a \ge d$, then $\Mult(\cD_a(\bB_d)) = H^\infty(\bB_d)$, which is (even completely isometrically) $1$-subhomogeneous.
In this section, we study topological subhomogeneity of $\Mult(\cD_a(\bB_d))$ for $1 < a < d$.
In fact, it will occasionally be more convenient to work with an equivalent norm on $\cD_a(\bB_d)$. To this end,
let $s \in \bR$ and let $\cH_s(\bB_d)$ be the reproducing kernel Hilbert space on $\bB_d$ with
kernel
\begin{equation*}
\sum_{n=0}^\infty (n+1)^{s} \langle z,w \rangle^n.
\end{equation*}
Equivalently,
\begin{equation*}
\cH_s(\mathbb{B}_d) =
\Big\{ f = \sum_{\alpha \in \mathbb{N}_0^d} \widehat{f}(\alpha) z^\alpha \in \mathcal{O}(\mathbb{B}_d): \|f\|^2
= \sum_{\alpha \in \mathbb{N}_0^d} (|\alpha|+1)^{-s} \frac{\alpha!}{|\alpha|!} |\widehat{f}(\alpha)|^2 < \infty \Big\}.
\end{equation*}
We simply write $\cH_s = \cH_s(\bD)$.
It is well known that if $s = a - 1 \ge -1$, then $\cD_a(\bB_d) = \cH_s(\bB_d)$ with equivalence of norms.
Indeed, the weights in the description of $\mathcal{D}_a(\mathbb{B}_d)$ are comparable
to those in the description of $\mathcal{H}_s(\mathbb{B}_d)$, because for $a > 0$, we have
\begin{equation*}
\frac{1}{\Gamma(a+n)} \approx \frac{(n+1)^{1-a}}{n!};
\end{equation*}
see for instance \cite{Wendel48}.
As in the case of the Drury--Arveson space, it is possible to embed weighted Dirichlet spaces on $\bD$
into $\cD_a(\bB_d)$ for certain values of $a$.
As in Section \ref{sec:embedding}, we let
\begin{equation*}
\tau: \ol{\bB_d} \to \ol{\bD}, \quad z \mapsto d^{d/2} z_1 z_2 \ldots z_d.
\end{equation*}
\begin{lem}
\label{lem:Besov_embed_space}
Let $s \in \bR$.
The map
\begin{equation*}
\cH_{s-(d-1)/2} \to \cH_s(\bB_d), \quad f \mapsto f \circ \tau,
\end{equation*}
is bounded and bounded below.
\end{lem}
\begin{proof}
The monomials form an orthogonal basis of $\cH_{s-(d-1)/2}$, and powers of $\tau$ are orthogonal
in $\cH_s(\bB_d)$. Moreover, by Stirling's formula,
we have
\begin{align*}
\|\tau^n\|_{\cH_s(\bB_d)}^2 &= d^{n d} \frac{(n!)^d}{(n d)!} (nd + 1)^{-s} \approx
(n+1)^{(d-1)/2} ( n d + 1)^{-s} \approx (n+1)^{ (d-1)/2 - s} \\
&= \|z^n\|^2_{\cH_{s - (d-1)/2}},
\end{align*}
so the result follows.
\end{proof}
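The Stirling estimate above can likewise be checked numerically. The following sketch (illustrative only; it works in log scale via \texttt{math.lgamma} to avoid overflow) verifies that $\log \|\tau^n\|^2_{\cH_s(\bB_d)} - \big(\tfrac{d-1}{2} - s\big)\log(n+1)$ stays bounded:

```python
import math

def log_norm_sq(n, d, s):
    # log of ||tau^n||^2 = d^{nd} * (n!)^d / (nd)! * (nd + 1)^{-s}
    return (n * d * math.log(d) + d * math.lgamma(n + 1)
            - math.lgamma(n * d + 1) - s * math.log(n * d + 1))

# Stirling: log ||tau^n||^2 - ((d-1)/2 - s)*log(n+1) stays bounded,
# so the weights are comparable to those of H_{s-(d-1)/2} on the disc.
d, s = 3, 0.7
vals = [log_norm_sq(n, d, s) - ((d - 1) / 2 - s) * math.log(n + 1)
        for n in (10, 100, 1000, 10000)]
assert max(vals) - min(vals) < 0.5
```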
Our goal is to show that in analogy with Proposition \ref{prop:mult_embed},
the map $\varphi \mapsto \varphi \circ \tau$ also yields an embedding on the level of multiplier algebras.
Using notation as in Section \ref{sec:embedding}, we define for $\alpha \in \partial \bN_0^d$
a space
\begin{equation*}
H_\alpha^{(s)} =
\ol{ \spa} \{ z^\beta: \beta \in X_\alpha \} \subset \cH_s(\bB_d).
\end{equation*}
Then the following generalization of Lemma \ref{lem:DA_decomp} is proved in the same way.
\begin{lem}
\label{lem:H_s_decomp}
Let $d \in \bN$ and $s \in \bR$. Then $\cH_s(\bB_d)$ admits an orthogonal decomposition
\begin{equation*}
\cH_s(\bB_d) = \bigoplus_{\alpha \in \partial \bN_0^d} H_\alpha^{(s)}.
\end{equation*}
For each $\alpha \in \partial \bN_0^d$, the space $H_\alpha^{(s)}$ is reducing for the multiplication operator $M_\tau$,
and the operator $M_\tau \big|_{H_\alpha^{(s)}}$ is unitarily equivalent to a unilateral weighted shift with positive weight sequence
$(w_{k,\alpha})_{k=0}^\infty$, where
\begin{equation*}
\pushQED{\qed}
w_{k, \alpha}^2 = d^d \frac{ \prod_{j=1}^d (\alpha_j+k+1)}{\prod_{j=1}^d ( |\alpha| + k d + j)}
\Big( 1 + \frac{d}{|\alpha| + k d + 1} \Big)^{-s}. \qedhere
\popQED
\end{equation*}
\end{lem}
For $r \in \bN_0$ and $k \in \bN_0$, we define positive weights by
\begin{equation*}
v_{k,r}^2 =
d^d \frac{(\frac{r}{d} + k + 1)^d}{\prod_{j=1}^d (r + k d + j)}
\Big( 1 + \frac{d}{r + k d + 1} \Big)^{-s}
\end{equation*}
and
\begin{equation*}
u_k^2 =
d^d \frac{(k + 1)^d}{\prod_{j=1}^d (k d + j)}
\Big( 1 + \frac{d}{d + k d + 1} \Big)^{-s}.
\end{equation*}
Then the following inequalities hold.
\begin{lem}
\label{lem:weights_inequalities}
Let $w_{k,\alpha}$ denote the weights of Lemma \ref{lem:H_s_decomp}.
Let $r,k \in \bN_0$ and $\alpha \in \bN_0^d$.
\begin{enumerate}[label=\normalfont{(\alph*)}]
\item $w_{k,\alpha} \le v_{k, |\alpha|}$.
\item If $r = l d + t$ with $l,t \in \bN_0$, then $v_{k,r} = v_{k+l,t}$.
\item Assume $s \le 0$. Then $v_{k,r} \le v_{k,0}$
and thus $w_{k,\alpha} \le w_{k,0}$.
\item Assume $s \ge 0$. If $0 \le r \le d$, then $v_{k,r} \le u_k$.
\end{enumerate}
\end{lem}
\begin{proof}
(a) follows from the inequality of arithmetic and geometric means.
(b) is obvious.
(c) and (d) We saw in the proof of Lemma \ref{lem:weight_inequality} that
\begin{equation*}
\frac{(\frac{r}{d} + k + 1)^d}{\prod_{j=1}^d (r + k d + j)} \le
\frac{(k+1)^d}{ \prod_{j=1}^d (kd + j)}.
\end{equation*}
From this inequality and from the fact that the quantity
\begin{equation*}
\Big( 1 + \frac{d}{r+k d + 1} \Big)^{-s}
\end{equation*}
is non-increasing in $r$ if $s \le 0$ and non-decreasing in $r$ if $s \ge 0$,
the inequalities involving $v_{k,r}$ in (c) and (d) follow. The inequality about $w_{k,\alpha}$
in (c) then follows from the inequality about $v_{k,r}$ and part (a).
\end{proof}
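The inequalities of Lemma \ref{lem:weights_inequalities} can also be verified numerically for sample parameters; the following sketch (an illustration, not a substitute for the proof) checks parts (a), (c), and (d):

```python
import math

def w_sq(k, alpha, d, s):
    # squared weights w_{k,alpha}^2 from the decomposition lemma
    num = math.prod(alpha[j] + k + 1 for j in range(d))
    den = math.prod(sum(alpha) + k * d + j for j in range(1, d + 1))
    return d**d * num / den * (1 + d / (sum(alpha) + k * d + 1)) ** (-s)

def v_sq(k, r, d, s):
    num = (r / d + k + 1) ** d
    den = math.prod(r + k * d + j for j in range(1, d + 1))
    return d**d * num / den * (1 + d / (r + k * d + 1)) ** (-s)

def u_sq(k, d, s):
    num = (k + 1) ** d
    den = math.prod(k * d + j for j in range(1, d + 1))
    return d**d * num / den * (1 + d / (d + k * d + 1)) ** (-s)

d = 3
for s in (-1.0, 0.0, 1.5):
    for k in (0, 2, 7):
        for alpha in ((0, 1, 2), (4, 0, 1), (2, 2, 2)):
            r = sum(alpha)
            assert w_sq(k, alpha, d, s) <= v_sq(k, r, d, s) + 1e-12   # part (a)
            if s <= 0:
                assert v_sq(k, r, d, s) <= v_sq(k, 0, d, s) + 1e-12  # part (c)
        if s >= 0:
            for r in range(d + 1):
                assert v_sq(k, r, d, s) <= u_sq(k, d, s) + 1e-12     # part (d)
```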
We are now ready to prove the desired embedding result for multiplier algebras, which
extends Proposition \ref{prop:mult_embed}.
\begin{prop}
\label{prop:besov_mult_embed}
Let $s \in \bR$. Then the map
\begin{equation*}
\Mult(\cH_{s-(d-1)/2}) \to \Mult(\cH_s(\bB_d)), \quad \varphi \mapsto \varphi \circ \tau,
\end{equation*}
is a completely bounded isomorphism onto its image.
In particular, if $a \ge \frac{d-1}{2}$, then
\begin{equation*}
\Mult(\cD_{a-(d-1)/2}) \to \Mult(\cD_a(\bB_d)), \quad \varphi \mapsto \varphi \circ \tau,
\end{equation*}
is a completely bounded isomorphism onto its image.
\end{prop}
\begin{proof}
The second statement follows from the first statement and the equality
$\cD_a(\bB_d) = \cH_{a-1}(\bB_d)$ with equivalence of norms.
To prove the first statement,
in light of Lemma \ref{lem:Besov_embed_space}, it suffices to show that the map is completely bounded.
Moreover, as in the proof of Proposition \ref{prop:mult_embed},
it suffices to show that the map is completely bounded on the space of all polynomials by an approximation argument.
Let $M_z$ denote the operator of multiplication by $z$ on $\cH_{s-(d-1)/2}$.
Lemma \ref{lem:H_s_decomp} further implies that it suffices to prove that
there exists a constant $C > 0$ so that for all $\alpha \in \partial \bN_0^d$, the mapping
\begin{equation}
\label{eqn:besov_embed_proof}
p(M_z) \mapsto p(M_\tau \big|_{H_\alpha^{(s)}}) \quad (p \in \bC[z])
\end{equation}
has completely bounded norm at most $C$.
Recall from Lemma \ref{lem:H_s_decomp} that the operator $M_{\tau} \big|_{H_\alpha^{(s)}}$
is a weighted shift with weight sequence $(w_{k,\alpha})_{k=0}^\infty$.
We introduce the following notation. Given
two weighted shifts $S$ and $T$ with positive weights, we write
$S \prec T$ if the map
\begin{equation*}
p(T) \mapsto p(S), \quad (p \in \bC[z])
\end{equation*}
is completely contractive.
Suppose first that $s \le 0$. Then part (c) of Lemma \ref{lem:weights_inequalities} and Corollary
\ref{cor:weighted_shift_domination} imply that
\begin{equation*}
M_{\tau} \big|_{H_\alpha^{(s)}} \prec
M_{\tau} \big|_{H_0^{(s)}}
\end{equation*}
for all $\alpha \in \partial \bN_0^d$. Moreover, Lemma \ref{lem:Besov_embed_space} implies that
the map
\begin{equation*}
p(M_z) \mapsto p(M_{\tau} \big|_{H_0^{(s)}})
\end{equation*}
is completely bounded, hence the mapping in Equation \eqref{eqn:besov_embed_proof} is completely bounded
with completely bounded norm independent of $\alpha \in \partial \bN_0^d$, which finishes the proof in the case $s \le 0$.
Suppose now that $s \ge 0$.
Let $S_r$ denote the weighted shift with weights $(v_{k,r})_{k=0}^\infty$
and let $S$ denote the weighted shift with weights $(u_k)_{k=0}^\infty$,
defined before Lemma \ref{lem:weights_inequalities}.
Let $\alpha \in \bN_0^d$.
Then by part (a) of Lemma \ref{lem:weights_inequalities} and by Corollary \ref{cor:weighted_shift_domination},
\begin{equation*}
M_{\tau} \big|_{H_\alpha^{(s)}} \prec S_{|\alpha|}.
\end{equation*}
Write $|\alpha| = l d + t$ with $l,t \in \bN_0$ and $0 \le t \le d$. Part (b) of Lemma \ref{lem:weights_inequalities}
shows that $v_{k,|\alpha|} = v_{k+l,t}$, so that $S_{|\alpha|}$
is unitarily equivalent to a restriction of $S_t$ to an invariant subspace and hence
\begin{equation*}
S_{|\alpha|} \prec S_{t}.
\end{equation*}
Finally, part (d) of Lemma \ref{lem:weights_inequalities} and Corollary \ref{cor:weighted_shift_domination}
imply that
\begin{equation*}
S_t \prec S,
\end{equation*}
so combining the last three relations, we see that
\begin{equation*}
M_{\tau} \big|_{H_\alpha^{(s)}} \prec S.
\end{equation*}
We finish the proof by showing that the map
\begin{equation*}
p(M_z) \mapsto p(S)
\end{equation*}
is completely bounded, where $M_z$ continues to denote the operator of multiplication by $z$
on $\cH_{s-(d-1)/2}$.
To this end, let $\cK$ be the space of power series with orthogonal basis $(z^n)_{n=0}^\infty$
and norm
\begin{equation*}
\|z^n\|^2_{\cK} = d^{n d} \frac{(n!)^d}{(n d)!} \Big(1 + \frac{n d}{d+1} \Big)^{-s}.
\end{equation*}
Since
\begin{equation*}
\frac{\|z^{k+1}\|^2_{\cK}}{\|z^k\|^2_{\cK}} = u_k^2,
\end{equation*}
we see that $S$ is unitarily equivalent to the operator of multiplication by $z$ on $\cK$.
Moreover, by Stirling's formula,
\begin{equation*}
\|z^n\|_{\cK}^2 \approx (n+1)^{(d-1)/2} (n+1)^{-s} = \|z^n\|^2_{\cH_{s-(d-1)/2}},
\end{equation*}
so that $\cK = \cH_{s-(d-1)/2}$ with equivalence of norms,
hence $S$ and $M_z$ are similar.
\end{proof}
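The identity $\|z^{k+1}\|^2_{\cK}/\|z^k\|^2_{\cK} = u_k^2$ used in the proof can be confirmed numerically in log scale; the following sketch is illustrative only:

```python
import math

def log_K_norm_sq(n, d, s):
    # log ||z^n||_K^2 = log( d^{nd} (n!)^d / (nd)! ) - s*log(1 + nd/(d+1))
    return (n * d * math.log(d) + d * math.lgamma(n + 1)
            - math.lgamma(n * d + 1) - s * math.log(1 + n * d / (d + 1)))

def log_u_sq(k, d, s):
    # log u_k^2 for the weighted shift S
    return (d * math.log(d) + d * math.log(k + 1)
            - sum(math.log(k * d + j) for j in range(1, d + 1))
            - s * math.log(1 + d / (d + k * d + 1)))

# Consecutive norm ratios of K coincide with the weights u_k (up to
# floating-point error), so S is the multiplication operator M_z on K.
d, s = 3, 0.5
for k in range(6):
    lhs = log_K_norm_sq(k + 1, d, s) - log_K_norm_sq(k, d, s)
    assert abs(lhs - log_u_sq(k, d, s)) < 1e-9
```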
We can now show that multiplier algebras of some spaces
between the Drury--Arveson space and the Hardy space are not subhomogeneous.
\begin{cor}
\label{cor:between_spaces_subhom}
Let $d \in \bN$ and let $0 \le a < \frac{d+1}{2}$. Then $\Mult(\cD_a(\bB_d))$
is not topologically subhomogeneous.
\end{cor}
\begin{proof}
We combine two arguments. Firstly,
if $\frac{d-1}{2} \le a < \frac{d+1}{2}$, then Proposition \ref{prop:besov_mult_embed} shows
that $\Mult(\cD_{a}(\bB_d))$ contains a subalgebra isomorphic to $\Mult(\cD_{a-(d-1)/2})$.
Since $0 \le a- \frac{d-1}{2} < 1$, the algebra $\Mult(\cD_{a - (d-1)/2})$ is not
topologically subhomogeneous by Corollary \ref{cor:not_subhom} and Proposition \ref{prop:dirichlet_subhom}.
Thus, $\Mult(\cD_a(\bB_d))$ is not topologically subhomogeneous for $\frac{d-1}{2} \le a < \frac{d+1}{2}$.
Secondly, if $\Mult(\cD_a(\bB_{d}))$ is not topologically subhomogeneous, then Lemma \ref{lem:first_proj}
implies that $\Mult(\cD_a(\bB_{d+1}))$ is not topologically subhomogeneous either. Combining
these two statements, the result follows by an obvious induction on $d$.
\end{proof}
Corollary \ref{cor:between_spaces_subhom} leaves open the question of subhomogeneity
of $\Mult(\cD_a(\bB_d))$
in the range $\frac{d+1}{2} \le a < d$. We now show that in this range,
the multiplier norm is not comparable to the supremum norm, from which we deduce that $\Mult(\cD_a(\bB_d))$
is at least not topologically $1$-subhomogeneous. To this end, we will use inner functions on the unit ball.
Recall that a bounded holomorphic function $f: \bB_d \to \bC$ is said to be inner
if $\lim_{r \to 1} |f(r z)| = 1$ for almost every $z \in \partial \bB_d$.
\begin{lem}
\label{lem:inner}
Let $0 \le a < d$ and let $f$ be a non-constant inner function on $\bB_d$.
Then $\lim_{n \to \infty} \|f^n\|_{\cD_a(\bB_d)} = \infty$.
\end{lem}
\begin{proof}
If $0 \le b < a$, then $\cD_b(\mathbb{B}_d)$ is continuously contained in $\mathcal{D}_a(\mathbb{B}_d)$,
which is easily seen by passing to the equivalent norm in $\mathcal{H}_s(\mathbb{B}_d)$ mentioned at the beginning of this section.
Therefore, it suffices to prove the result for $a$ close to $d$, say $a = d - \delta$, where $0 < \delta < 1$.
It is well known that another equivalent norm on $\cD_a(\bB_d)$ is given by
\begin{equation*}
\|h\|^2 = |h(0)|^2 + \int_{\bB_d} |R h(z)|^2 (1 - |z|^2)^{1 -\delta} \, dV(z),
\end{equation*}
where
$R$ denotes the radial derivative and $V$ is the Lebesgue measure on $\bB_d$; see
\cite[Theorem 41]{ZZ08}. Moreover, \cite[Theorem 5.3]{Stoll13} (choosing $q=2$ and $p= \frac{d}{d+2-\delta}$) implies that for inner functions $g$,
\begin{equation*}
\int_{\bB_d} |R g(z)|^2 (1 - |z|^2)^{1 -\delta} \, dV(z)
\approx \int_{\bB_d} (1 - |g(z)|^2)^2 (1 - |z|^2)^{-\delta - 1} \, dV(z),
\end{equation*}
where the implied constants do not depend
on the inner function $g$. Choosing $g=f^n$ above, we find that
\begin{equation*}
\|f^n\|_{\cD_a(\bB_d)}^2
\gtrsim \int_{\bB_d} (1 - |f^n(z)|^2)^2 (1 - |z|^2)^{-\delta -1} \, dV(z).
\end{equation*}
Since $f$ is not constant, $f^n$ converges to $0$ pointwise on $\bB_d$, so the monotone
convergence theorem implies that the last integral tends to
\begin{equation*}
\int_{\bB_d} (1 - |z|^2)^{-\delta - 1} \, dV(z) = \infty,
\end{equation*}
which finishes the proof.
\end{proof}
We are now ready to prove the announced result about $1$-subhomogeneity.
\begin{prop}
\label{prop:not-1-subhom}
Let $0 \le a < d$. Then the multiplier norm on $\cD_a(\bB_d)$ is not
comparable to the supremum norm on $\bB_d$. In fact, $\Mult(\cD_a(\bB_d))$
is not topologically $1$-subhomogeneous.
\end{prop}
\begin{proof}
If $0 \le a < 1$, then the result is a special case of Corollary \ref{cor:ball_not_subhom},
so we may assume that $1 \le a < d$.
We first show that there does not exist a constant $C > 0$ so that
\begin{equation*}
\|p\|_{\Mult(\cD_a(\bB_d))} \le C \|p\|_\infty
\end{equation*}
for all polynomials $p$. Suppose otherwise. Since every function $f \in H^\infty(\bB_d)$
is a pointwise limit of polynomials $p_n$ with $\|p_n\|_\infty \le \|f\|_\infty$
(for instance the Fej\'er means of $f$), it follows that every function $f \in H^\infty(\bB_d)$
is a multiplier of $\cD_a(\bB_d)$ with
\begin{align*}\|f\|_{\Mult(\cD_a(\bB_d))} \le C \|f\|_\infty.\end{align*}
On the other hand, if $f$ is a non-constant inner function, which exists by a theorem
of Aleksandrov \cite{Aleksandrov82}, then
$\|f^n\|_\infty \le 1$ for all $n \in \bN$, but Lemma \ref{lem:inner} implies that
\begin{equation*}
\|f^n\|_{\Mult(\cD_a(\bB_d))} \ge
\|f^n\|_{\cD_a(\bB_d)} \xrightarrow{n \to \infty} \infty.
\end{equation*}
This contradiction proves the claim and in particular the first part of the proposition.
Next, let $A$ denote the norm closure of the polynomials inside of $\Mult(\cD_a(\bB_d))$.
We will show that $A$ is not topologically $1$-subhomogeneous, which will
finish the proof.
To this end, we determine the unital
homomorphisms $\chi: A \to \bC$, i.e. the characters of $A$.
We claim that if $\chi$ is a character,
then there exists $\lambda \in \ol{\bB_d}$ so that $\chi(f) = f(\lambda)$
for all $f \in A$. Indeed, given a character $\chi$, define
$\lambda = (\chi(z_1),\ldots,\chi(z_d))$.
If $K_a$ denotes the reproducing kernel of $\cD_a(\bB_d)$, then
\begin{equation*}
(1 - \langle z, w\rangle) K_a(z,w) = \frac{1}{(1 - \langle z,w \rangle)^{a-1}}
\end{equation*}
is positive, hence the tuple $(M_{z_1},\ldots,M_{z_d})$ forms a row contraction
on $\cD_a(\bB_d)$.
Since characters are automatically contractive, and contractive functionals are completely contractive,
it follows that $\lambda \in \ol{\bB_d}$. (This can also be seen by computing
the spectrum of the tuple $M_z$.) Clearly, $\chi(f) = f(\lambda)$ for all
polynomials $f$, and hence for all $f \in A$ by definition of $A$.
In particular, the description of the characters of $A$ implies that
for all $f \in A$,
\begin{equation*}
\sup \{ |\chi(f)| : \chi \text{ is a character of } A \} = \|f\|_{\infty}.
\end{equation*}
Since the multiplier norm of polynomials is not bounded by a constant times
the supremum norm by the first part,
it therefore follows that there does not exist
a constant $C > 0$ so that
\begin{equation*}
\|f\|_{\Mult(\cD_a(\bB_d))} \le C \sup \{ |\chi(f)| : \chi \text{ is a character of } A \}
\end{equation*}
for all $f \in A$. In other words, $A$ is not topologically $1$-subhomogeneous.
\end{proof}
\section{Questions}
\label{sec:questions}
We close this article with a few questions.
\subsection{Small Dirichlet spaces}
It is well known that the scale of spaces $\cD_a$, defined for $a \in [0,\infty)$, can be extended as
follows. For $s \in \bR$, let $\cH_s$ be the reproducing kernel Hilbert space on $\bD$ with
kernel
\begin{equation*}
\sum_{n=0}^\infty (n+1)^{s} (z \overline{w})^n.
\end{equation*}
If $s = a - 1 \ge -1$, then $\cD_a = \cH_s$ with equivalence of norms,
see the discussion at the beginning of Section \ref{sec:spaces_between}.
Moreover, if $s \ge 0$, then $\|z\|_{\Mult(\mathcal{H}_s)} \le 1$,
so von Neumann's inequality implies that
$\Mult(\cH_s) = H^\infty$ completely isometrically for $s \ge 0$.
On the other hand, it follows from Theorem \ref{thm:ci_subhom_intro}
that $\Mult(\cH_s)$ is not completely isometrically subhomogeneous for all $s < 0$,
since the multiplier norm of $\cH_s$ is not equivalent to the supremum norm for $s < 0$.
In fact, by Theorem \ref{thm:top_subhom_intro}, the algebra $\Mult(\cH_s)$ is not topologically
subhomogeneous for $-1 \le s < 0$, but the proof does not seem to extend to the range $s < -1$.
\begin{quest}
Is $\Mult(\cH_s)$ topologically subhomogeneous for $s < -1$?
\end{quest}
It is known that for $s < -1$, we have $\Mult(\cH_s) = \cH_s$ with equivalence of norms
(see Proposition 31 and Example 1 on page 99 in \cite{Shields74}). In particular,
$\Mult(\cH_s)$ is reflexive as a Banach space for $s < -1$. Thus, we ask more generally.
\begin{quest}
Does there exist a unital, infinite-dimensional operator algebra that is subhomogeneous
and reflexive as a Banach space?
\end{quest}
Notice that the answer to this question is independent of what kind of subhomogeneity
is specified, as reflexivity is stable under topological isomorphisms.
\subsection{Spaces between the Drury--Arveson space and the Hardy space}
Consider the spaces $\cD_a(\bB_d)$ for $d \ge 2$,
and recall that $\cD_1(\bB_d)$ is the Drury--Arveson space $H^2_d$ and that
$\cD_d(\bB_d)$ is the Hardy space $H^2(\bB_d)$.
We saw in Corollary \ref{cor:ci_subhom_ball_dirichlet}
that $\Mult(\cD_a(\bB_d))$ is completely isometrically subhomogeneous if and only if $a \ge d$.
On the other hand, Theorem \ref{thm:top_subhom_intro} implies that $\Mult(\cD_a(\bB_d))$
is not even topologically subhomogeneous if $0 \le a < \frac{d+1}{2}$.
Moreover,
$\Mult(\cD_a(\bB_d))$ is not topologically $1$-subhomogeneous for
$0 \le a < d$ by Proposition \ref{prop:not-1-subhom}.
\begin{quest}
\label{quest:subhom}
Is $\Mult(\cD_a(\bB_d))$ topologically subhomogeneous if $\frac{d+1}{2} \le a < d$?
\end{quest}
The embedding of Proposition \ref{prop:besov_mult_embed} does not appear to be useful
for the study of this question. For real numbers $a$ in the range $\frac{d+1}{2} \le a < d$,
Proposition \ref{prop:besov_mult_embed} yields embeddings of multiplier algebras
that coincide with $H^\infty(\bD)$, a $1$-subhomogeneous algebra. Thus, a different argument
seems to be needed to cover the range
$\frac{d+1}{2} \le a < d$.
\subsection{Similarity of multipliers}
Suppose that $\cH$ and $\cK$ are reproducing kernel Hilbert spaces on the same set $X$.
If there exists a multiplier $\theta \in \Mult(\cH,\cK)$ so that
\begin{equation*}
S: \cH \to \cK, \quad f \mapsto \theta f,
\end{equation*}
is bounded and invertible, then $\Mult(\cH) = \Mult(\cK)$ with equivalence of norms.
More precisely, if $C = \|S\| \|S^{-1}\|$, then
\begin{equation*}
C^{-1} \|\varphi\|_{\Mult(\cH)} \le \|\varphi\|_{\Mult(\cK)} \le C \|\varphi\|_{\Mult(\cH)}
\end{equation*}
for all $\varphi \in \Mult(\cH)$ (and these inequalities continue to hold for matrix-valued multipliers).
Thus, a positive answer to the following question would provide another proof of Corollary \ref{cor:many_points}
(with potentially different constants).
\begin{quest}
Let $a \in (0,1)$ and let $n \in \bN$. Does there exist a constant $A > 0$ so that
for all finite sets $F \subset \bD$ with $|F| = n$, there exists a multiplier
$\theta \in \Mult( H^2 |_F, \cD_a |_F)$ with
\begin{equation*}
\|\theta\|_{\Mult(H^2 |_F, \cD_a |_F)} \|\theta^{-1}\|_{\Mult(\cD_a |_F, H^2 |_F)} \le A?
\end{equation*}
\end{quest}
\subsection*{Note added in proof}
The recent preprint \cite{HRS21} shows that $\Mult(\mathcal{D}_a(\mathbb{B}_d))$ is not topologically subhomogeneous for $0 \le a < d$, thus answering Question \ref{quest:subhom}.
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:introduction}
\newcommand{\teaserimg}[1]{\includegraphics[width=0.115\linewidth,clip]{#1}}
\begin{figure}[t]
\centering
\begin{tabu}{cccccccc}
\rowfont{\tiny}
\multicolumn{8}{c}{Single Channel}\\
\teaserimg{IMAGES/dataset_imgs/003_0/full_frame.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/avg1.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/avg2.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/avg4.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/avg8.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/avg16.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/avg400.png}&
\teaserimg{IMAGES/dataset_imgs/003_0/sim.png}\\
\teaserimg{IMAGES/dataset_imgs/008_1/full_frame.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/avg1.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/avg2.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/avg4.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/avg8.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/avg16.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/avg400.png}&
\teaserimg{IMAGES/dataset_imgs/008_1/sim.png}\\
\rowfont{\tiny}
\multicolumn{8}{c}{Multi Channel}\\
\teaserimg{IMAGES/dataset_imgs/010/full_frame.png}&
\teaserimg{IMAGES/dataset_imgs/010/avg1.png}&
\teaserimg{IMAGES/dataset_imgs/010/avg2.png}&
\teaserimg{IMAGES/dataset_imgs/010/avg4.png}&
\teaserimg{IMAGES/dataset_imgs/010/avg8.png}&
\teaserimg{IMAGES/dataset_imgs/010/avg16.png}&
\teaserimg{IMAGES/dataset_imgs/010/avg400.png}&
\teaserimg{IMAGES/dataset_imgs/010/sim.png}\\
\teaserimg{IMAGES/dataset_imgs/013/full_frame.png}&
\teaserimg{IMAGES/dataset_imgs/013/avg1.png}&
\teaserimg{IMAGES/dataset_imgs/013/avg2.png}&
\teaserimg{IMAGES/dataset_imgs/013/avg4.png}&
\teaserimg{IMAGES/dataset_imgs/013/avg8.png}&
\teaserimg{IMAGES/dataset_imgs/013/avg16.png}&
\teaserimg{IMAGES/dataset_imgs/013/avg400.png}&
\teaserimg{IMAGES/dataset_imgs/013/sim.png}\\
\rowfont{\tiny}
Full frame & Raw crop & 2$\times$ Average & 4$\times$ Average &8$\times$ Average &16$\times$ Average& Noise-free LR & Target HR
\end{tabu}
\caption{Example of image sets in the proposed W2S. We obtain LR images with 5 different noise levels by either taking a single raw image or averaging different numbers of raw images of the same field of view. The more images we average, the lower the noise level, as shown in the different columns of the figure. The noise-free LR images are the average of 400 raw images, and the HR images are obtained using structured-illumination microscopy (SIM)~\cite{gustafsson2000surpassing}. The multi-channel images are formed by mapping the three single-channel images of different wavelengths to RGB. A gamma correction is applied for better visualization. Best viewed on screen.}
\label{fig:teaser}
\end{figure}
Fluorescence microscopy makes it possible to visualize sub-cellular structures and protein-protein interactions at the molecular scale. However, due to weak signals and the diffraction limit, fluorescence microscopy images suffer from high noise and limited resolution. One way to obtain high-quality, high-resolution (HR) microscopy images is to leverage super-resolution fluorescence microscopy, such as structured illumination microscopy (SIM)~\cite{gustafsson2000surpassing}. This technique requires multiple captures and several parameters that need expert tuning to obtain high-quality images. Multiple or high-intensity-light acquisitions can cause photobleaching and even damage the samples. The imaged cells could be affected and, if imaged in sequence for live tracking, possibly killed. This is because a single SIM acquisition already requires a set of captures with varying structured illumination. Hence, a large set of SIM captures would add up to high illumination and an overhead in capture time that is detrimental to the imaging and tracking of live cells. Therefore, developing an algorithm to effectively denoise and super-resolve a fluorescence microscopy image is of great importance to biomedical research. However, a high-quality dataset is needed to benchmark and evaluate joint denoising and super-resolution (JDSR) on microscopy data.
Deep-learning-based methods in denoising~\cite{anwar2019real,tai2017memnet,zhang2017beyond,el2020blind} and SR~\cite{wang2018esrgan,zhang2018image,zhang2018residual} today outperform classical signal processing approaches. A major limitation in the literature is, however, that these two restoration tasks are addressed separately. This is in great part due to a missing dataset that would allow both to train and to evaluate JDSR. Such a dataset must contain aligned pairs of LR and HR images, with noisy and noise-free LR images, to allow retraining prior denoising and SR methods for benchmarking the consecutive application of a denoiser and an SR network, as well as candidate one-shot JDSR methods.
In this paper, we present such a dataset, which, to the best of our knowledge, is the first JDSR dataset. This dataset allows us to evaluate the existing denoising and SR algorithms on microscopy data. We leverage widefield microscopy and SIM techniques to acquire data fulfilling the requirements described above. Our noisy LR images are captured using widefield imaging of human cells. We capture a total of 400 replica raw images per field of view.
We average several of the LR images to obtain images with different noise levels, and all of the 400 replicas to obtain the noise-free LR image. Using SIM imaging~\cite{gustafsson2000surpassing}, we obtain the corresponding high-quality HR images. Our resulting \textbf{W}idefield\textbf{2S}IM{} (W2S{}) dataset consists of 360 sets of LR and HR image pairs, with different fields of view and acquisition wavelengths. Visual examples of the images in W2S{} are shown in Fig.~\ref{fig:teaser}.
We leverage our JDSR dataset to benchmark different approaches for denoising and SR restoration on microscopy images. We compare the sequential use of different denoisers and SR methods, the direct use of an SR method on a noisy LR image, and, for reference, the use of SR methods on the noise-free LR images of our dataset. We additionally evaluate the performance of retraining SR networks on our JDSR dataset. Results show a significant drop in the performance of SR networks when the low-resolution (LR) input is noisy compared to it being noise-free. We also find that the consecutive application of denoising and SR achieves better results than direct SR on noisy inputs. It does not, however, perform as well, in terms of RMSE and perceptual texture reconstruction, as training a single model on the JDSR task, due to the accumulation of errors. The best results are thus obtained by training a single network for the joint optimization of denoising and SR.
In summary, we create a microscopy JDSR dataset, W2S{}, containing noisy images with 5 noise levels, noise-free LR images, and the corresponding high-quality HR images. We analyze our dataset by comparing the noise magnitude and the blur kernel of our images to those of existing denoising and SR datasets. We benchmark state-of-the-art denoising and SR algorithms on W2S{}, by evaluating different settings and on different noise levels. Results show the networks can benefit from joint optimization.
\section{Related Work}
\subsection{Biomedical Imaging Techniques for Denoising and Super-resolution}
Image averaging of multiple shots is one of the most commonly employed methods to obtain a clean microscopy image. This is due to its reliability and to the fact that it avoids the potential blurring or over-smoothing effects of denoisers. For microscopy experiments requiring long observation and minimal degradation of specimens, low-light conditions and short exposure times are, however, preferred, as multiple shots might damage the samples. To reduce the noise influence and increase the resolution, denoising methods and SR imaging techniques are leveraged.
To recover a clean image from a single shot, different denoising methods have been designed, including PURE-LET~\cite{luisier2011image}, EPLL~\cite{zoran2011learning}, and BM3D~\cite{BM3D}. Although these methods provide promising results, recent deep learning methods outperform them by a big margin~\cite{zhang2019poisson}. To achieve resolution higher than that imposed by the diffraction limit, a variety of SR microscopy techniques exist, which achieve SR either by spatially modulating the fluorescence emission using patterned illumination (\textit{e.g.}, STED~\cite{hein2008stimulated} and SIM~\cite{gustafsson2000surpassing}), or by stochastically switching on and off individual molecules using photo-switchable probes (\textit{e.g.}, STORM~\cite{rust2006sub}), or photo-convertible fluorescent proteins (\textit{e.g.}, PALM~\cite{shroff2008live}). However, all of these methods require multiple shots over a period of time, which is not suitable for live cells because of the motion and potential damage to the cell. Thus, in this work, we aim to develop a deep learning method to reconstruct HR images from a single microscopy capture.
\subsection{Datasets for Denoising and Super-resolution}
\label{sec:work}
Several datasets have commonly been used in benchmarking SR and denoising, including Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2010single}, BSD300~\cite{martin2001database}, Urban100~\cite{huang2015single}, Manga109~\cite{matsui2017sketch}, and DIV2K~\cite{timofte2018ntire}. None of these datasets are optimized for microscopy and they only allow for synthetic evaluation. Specifically, the noisy inputs are generated by adding Gaussian noise for testing denoising algorithms, and the LR images are generated by downsampling the blurred HR images for testing SR methods. These degradation models deviate from the degradations encountered in real image capture~\cite{chen2019camera}. To better take into account realistic imaging characteristics and thus evaluate denoising and SR methods in real scenarios, real-world denoising and SR datasets have recently been proposed. Here we discuss these real datasets and compare them to our proposed W2S{}.
\noindent\textbf{Real Denoising Dataset }
Only a few datasets allow quantitative evaluation of denoising algorithms on real images, such as DND~\cite{plotz2017benchmarking} and SSID~\cite{abdelhamed2018high}. These datasets capture images with different noise levels, for instance by changing the ISO setting at capture. More related to our work, Zhang~\textit{et al.}{}~\cite{zhang2019poisson} collect a dataset of microscopy images. All three datasets are designed only for denoising, and no HR images are provided that would allow them to be used for SR evaluation. According to our benchmark results, the best denoising algorithm does not necessarily provide the best input for the downstream SR task, and JDSR learning is the best overall approach. This suggests a dataset on joint denoising and SR can provide a more comprehensive benchmark for image restoration.
\noindent\textbf{Real Super-resolution Dataset }
Recently, capturing LR and HR image pairs by changing camera parameters has been proposed. Chen~\textit{et al.}{}~\cite{chen2019camera} collect 100 pairs of images of printed postcards placed at different distances. SR-RAW~\cite{zhang2019zoom} consists of 500 real scenes captured with multiple focal lengths. Although this dataset provides real LR-HR pairs, it suffers from misalignment due to the inevitable perspective changes or lens distortion. Cai~\textit{et al.}{} thus introduce an iterative image-registration scheme in building another dataset, RealSR~\cite{cai2019toward}. However, to obtain high-quality images, all these datasets are captured with a low ISO setting, and the images thus contain very little noise, as shown in our analysis. Qian~\textit{et al.}{} propose a dataset for joint demosaicing, denoising and SR~\cite{qian2019trinity}, but the noise in their dataset is simulated by adding white Gaussian noise. Contrary to these datasets, our proposed W2S{} is constructed using SR microscopy techniques~\cite{gustafsson2000surpassing}, all pairs of images are well aligned, and it contains raw LR images with different noise levels as well as the noise-free LR images,
thus enabling the benchmarking of both denoising and SR under real settings.
\subsection{Deep Learning based Image Restoration}
Deep learning based methods have shown promising results on various image restoration tasks, including denoising and SR. We briefly present prior work and the existing problems that motivate joint optimization.
\noindent\textbf{Deep Learning for Denoising }
Recent deep learning approaches for image denoising achieve state-of-the-art results on recovering noise-free images from images with additive noise. Whether based on residual learning~\cite{zhang2017beyond}, memory blocks~\cite{tai2017memnet}, a bottleneck architecture~\cite{weigert2018content}, attention mechanisms~\cite{anwar2019real}, or internally modeling Gaussian noise parameters~\cite{el2020blind}, these deep learning methods all require training data. For real-world raw-image denoising, the training data should include noisy images with a Poisson noise component, and a corresponding aligned noise-free image, which is not easy to acquire.
Some recent self-supervised methods can learn without training targets~\cite{batson2019noise2self,krull2019noise2void,lehtinen2018noise2noise}; however, their performance does not match that of supervised methods. We hence focus on the better-performing supervised methods in our benchmark, since targets are available.
All these networks are typically evaluated only on the denoising task, often only on the one they are trained on. They optimize for minimal squared pixel error, leading to potentially smoothed-out results that favour low reconstruction error at the expense of detail preservation. When a subsequent task such as SR is then applied on the denoised outputs of these networks, the quality of the final results does not, as we see in our benchmark, necessarily correspond to the denoising performance of the different approaches. This highlights the need for a more comprehensive perspective that jointly considers both restoration tasks.
\noindent\textbf{Deep Learning for Super-resolution }
Since the first convolutional neural network for SR~\cite{dong2014learning} outperformed conventional methods on synthetic datasets, many new architectures~\cite{kim2016accurate,lim2017enhanced,shi2016real,vasu2018analyzing,wang2018esrgan,zhang2018image,zhang2018residual} and loss functions~\cite{johnson2016perceptual,ledig2017photo,sajjadi2017enhancenet,zhang2019ranksrgan,zhang2019image} have been proposed to improve the effectiveness and the efficiency of the networks. To enable SR networks to generalize better to real-world LR images where the degradation is unknown, work has been done on kernel prediction~\cite{cai2019toward,gu2019blind} and kernel modeling~\cite{zhang2019deep,zhou2019kernel}. However, most SR networks assume that the LR images are noise-free or contain additive Gaussian noise with very small variance. Their predictions are easily affected by noise if the distribution of the noise differs from their assumptions~\cite{choi2019evaluating}. This again motivates a joint approach developed for the denoising and SR tasks.
\noindent\textbf{Joint Optimization in Deep Image Restoration }
Although a connection can be drawn between the denoising and super-resolution tasks in the frequency domain~\cite{elhelou2020stochastic}, their joint optimization was not studied before due to the lack of a real benchmark.
Recent studies have shown the benefits of joint optimization in image restoration, for example, joint demosaicing and denoising~\cite{gharbi2016deep,klatzer2016learning} and joint demosaicing and super-resolution~\cite{zhang2019zoom,zhou2018deep}. All these methods show that the joint solution outperforms the sequential application of the two stages. More relevant to JDSR,
Xie~\textit{et al.}{}~\cite{xie2015joint} present a dictionary learning approach with constraints tailored for depth maps, and
Miao~\textit{et al.}{}~\cite{miao2020handling} propose a cascade of two networks for joint denoising and deblurring, evaluated on synthetic data only. Similarly, our results show that a joint solution for denoising and SR also obtains better results than any sequential application. Note that our W2S dataset allows us to draw such conclusions on \textit{real} data, rather than degraded data obtained through simulation.
\section{Joint Denoising and Super-Resolution Dataset for Widefield to SIM Mapping}
In this section, we describe the experimental setup that we use to acquire the sets of LR and HR images and present an analysis of the noise levels and blur kernels of our dataset.
\subsection{Structured-Illumination Microscopy}
\label{sec:sim}
Structured-illumination microscopy (SIM) is a technique used in microscopy imaging that allows samples to be captured with a higher resolution than the one imposed by the physical limits of the imaging system~\cite{gustafsson2000surpassing}. Its operation is based on the interference principle of the Moir{\'e} effect. We present how SIM works in more detail in our supplementary material. We use SIM to extend the resolution of standard widefield microscopy images. This allows us to obtain aligned LR and HR image pairs to create our dataset. The acquisition details are described in the next section.
\subsection{Data Acquisition} \label{sec:acquisition}
We capture the LR images of the W2S{} dataset using widefield microscopy~\cite{verveer1999comparison}. Images are acquired with a high-quality commercial fluorescence microscope and with real biological samples, namely, human cells.
\noindent\textbf{Widefield Images }
A time-lapse widefield sequence of 400 images is acquired using a Nikon SIM setup (Eclipse T1) microscope. The details of the setup are given in the supplementary material. In total, we capture 120 different fields of view (FOVs), each with 400 captures at 3 different wavelengths. All images are \textit{raw}, \textit{i.e.}, linear with respect to focal-plane illuminance, and are made up of $512 \times 512$ pixels.
We generate different noise-level images by averaging 2, 4, 8, and 16 raw images of the same FOV. The larger the number of averaged raw images is, the lower the noise level. The noise-free LR image is estimated as the average of all 400 captures of a single FOV. Examples of images with different noise levels and the corresponding noise-free LR images are presented in Fig.~\ref{fig:teaser}.
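This averaging can be sketched as follows; a minimal NumPy illustration with synthetic data standing in for the raw captures (the function name and parameter values are ours, not from the acquisition code):

```python
import numpy as np

def build_noise_levels(raw_stack, counts=(1, 2, 4, 8, 16)):
    """Average the first k raw captures of one FOV for each noise level.

    raw_stack: array of shape (400, H, W) holding the raw widefield
    captures of a single FOV. Averaging k i.i.d. captures reduces the
    noise standard deviation by a factor of sqrt(k); the average of all
    400 captures serves as the noise-free LR estimate.
    """
    levels = {k: raw_stack[:k].mean(axis=0) for k in counts}
    noise_free = raw_stack.mean(axis=0)
    return levels, noise_free

# Synthetic stand-in: constant signal 100 plus Gaussian noise of std 10.
rng = np.random.default_rng(0)
stack = 100.0 + rng.normal(0.0, 10.0, size=(400, 16, 16))
levels, noise_free = build_noise_levels(stack)
```

The dictionary keys 1, 2, 4, 8, and 16 correspond to the five noise levels provided in W2S{}.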
\noindent\textbf{SIM Imaging }
The HR images are captured using SIM imaging. We acquire the SIM images using the same Nikon SIM setup (Eclipse T1) microscope as above. We present the details of the setup in the supplementary material. The HR images have a resolution that is higher by a factor of 2, resulting in $1024 \times 1024$ pixel images.
\subsection{Data Analysis}
\label{sec:ana}
W2S{} includes 120 different FOVs, each FOV is captured in 3 channels, corresponding to the wavelengths 488nm, 561nm and 640nm. As the texture of the cells is different and independent across different channels, the different channels can be considered as different images, thus resulting in 360 views. For each view, 1 HR image and 400 LR images are captured. We obtain LR images with different noise levels by averaging different numbers of images of the same FOV and the same channel. In summary, W2S{} provides 360 different sets of images, each image set includes LR images with 5 different noise levels (corresponding to 1, 2, 4, 8, and 16 averaged LR images), the corresponding noise-free LR image (averaged over 400 LR images) and the corresponding HR image acquired with SIM. The LR images have dimensions $512 \times 512$, and the HR images $1024 \times 1024$.
To quantitatively evaluate the difficulty of recovering the HR image from the noisy LR observation in W2S{}, we analyze the degradation model relating the LR observations to their corresponding HR images. We adopt a commonly used degradation model~\cite{chen2019camera,dong2014learning,gu2019blind,zhou2019kernel}, with an additional noise component,
\begin{equation}\label{eq:LRdegradation}
I_{LR}^{noisy} = (I_{HR} \circledast k) \downarrow_m + n,
\end{equation}
where $I_{LR}^{noisy}$ and $I_{HR}$ correspond, respectively, to the noisy LR observation and the HR image, $\circledast$ is the convolution operation, $k$ is a blur kernel, $\downarrow_m$ is a downsampling operation with a factor of $m$, and $n$ is the additive noise. Note that $n$ is usually assumed to be zero in most of the SR networks' degradation models, while it is not the case for our dataset. As the downsampling factor $m$ is equal to the targeted super-resolution factor, it is well defined for each dataset. We thus analyze in what follows the two unknown variables of the degradation model for W2S{}; namely the noise $n$ and the blur kernel $k$.
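For illustration, the degradation of Eq.~\eqref{eq:LRdegradation} can be simulated as below; a NumPy sketch with illustrative parameter values, where the blur is applied as a circular convolution for simplicity and $n$ follows one common Poisson-Gaussian parameterization (mean $x$, variance $ax+\sigma^2$):

```python
import numpy as np

def degrade(hr, kernel, m=2, a=0.1, sigma=0.01, rng=None):
    """Simulate I_LR^noisy = (I_HR conv k) downsampled by m, plus n.

    kernel must have the same shape as hr (a small blur padded with
    zeros); the convolution is circular via the FFT for simplicity.
    The noise follows y = a * Poisson(x / a) + N(0, sigma^2), which has
    mean x and variance a*x + sigma^2. Parameter values are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(hr) * np.fft.fft2(kernel)))  # I_HR conv k
    lr = blurred[::m, ::m]                                                  # downsample by m
    poisson = a * rng.poisson(np.clip(lr, 0.0, None) / a)                   # signal-dependent part
    return poisson + rng.normal(0.0, sigma, lr.shape)                       # + Gaussian read noise
```

Setting `a` and `sigma` to zero recovers the noise-free degradation model assumed by most SR networks.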
Compared to other denoising datasets, W2S{} contains 400 noisy images for each view, while DND~\cite{plotz2017benchmarking} contains only 1, SSID~\cite{abdelhamed2018high} contains 150, and FMD~\cite{zhang2019poisson}, which also uses widefield imaging, contains 50. W2S{} can thus provide a wide range of noise levels by averaging a varying number of images out of the 400. In addition, W2S{} provides LR and HR image pairs that do not suffer from the misalignment problems often encountered in SR datasets.
\noindent\textbf{Noise Estimation }
We use the noise modeling method in~\cite{foi2008practical} to estimate the noise magnitude in raw images taken from W2S{}, from the denoising dataset FMD~\cite{zhang2019poisson}, and from the SR datasets RealSR~\cite{cai2019toward} and City100~\cite{chen2019camera}. The approach of~\cite{foi2008practical} models the noise as Poisson-Gaussian. The measured noisy pixel intensity is given by $y=x+n_P(x)+n_G$, where $x$ is the noise-free pixel intensity, $n_G$ is zero-mean Gaussian noise, and $x+n_P(x)$ follows a Poisson distribution of mean $ax$ for some $a>0$. This approach yields an estimate for the parameter $a$ of the Poisson distribution.
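With many replicated captures of the same FOV available, as in W2S{}, a simple replica-based alternative to the single-image estimator of~\cite{foi2008practical} is to fit the per-pixel variance against the per-pixel mean, which are affinely related under the Poisson-Gaussian model. A NumPy sketch, under the standard parameterization $\mathrm{var}(y) = a\,\mathbb{E}[y] + \sigma_G^2$ (this simplified estimator is ours, not the method used in the benchmark):

```python
import numpy as np

def estimate_poisson_gain(stack):
    """Fit var(y) ~= a * mean(y) + sigma_G^2 across replicated captures.

    stack: array of shape (R, H, W) with R noisy replicas of one FOV.
    Returns the estimated Poisson parameter a (slope) and the Gaussian
    noise variance sigma_G^2 (intercept, clipped at zero).
    """
    mu = stack.mean(axis=0).ravel()
    var = stack.var(axis=0, ddof=1).ravel()
    a, sigma2 = np.polyfit(mu, var, 1)  # straight-line fit: slope, intercept
    return a, max(sigma2, 0.0)
```

A larger estimated `a` indicates a stronger signal-dependent noise component, which is what Fig.~\ref{fig:noise_stats} compares across datasets.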
We evaluate the Poisson parameter of the noisy images from the three noise levels (obtained by averaging 1, 4 and 8 images) of W2S{}, the raw noisy images of FMD, and the LR images of the SR datasets for comparison. We show the mean of the estimated noise magnitude for the different datasets in Fig.~\ref{fig:noise_stats}. We see that the raw noisy images of W2S{} have a high noise level, comparable to that of FMD. On the other hand, the estimated noise parameters of the SR datasets are almost zero, up to small imprecision, and are thus significantly lower than even the estimated noise magnitude of the LR images from the lowest noise level in W2S{}. Our evaluation highlights the fact that the additive noise component is not taken into consideration in current state-of-the-art SR datasets. The learning-based SR methods using these datasets are consequently not tailored to deal with noisy inputs that are common in many practical applications, leading to potentially poor performance. In contrast, W2S{} contains images with high (and low) noise magnitude comparable to the noise magnitude of a recent denoising dataset~\cite{zhang2019poisson}.
\begin{figure}[t]
\centering
\subfigure[Estimated noise (log)]{
\includegraphics[width=0.45\linewidth,height=0.31\linewidth]{IMAGES/dataset_imgs/noise.png}
\label{fig:noise_stats}
}
\subfigure[Estimated kernels]{
\includegraphics[width=0.45\linewidth,trim={0 0 0 7},clip,height=0.31\linewidth]{IMAGES/dataset_imgs/kernel.png}
\label{fig:kernel_stats}
}
\caption{Noise and kernel estimation on images from different datasets. A comparably-high noise level and a wide kernel indicate that the HR images of W2S{} are challenging to recover from the noisy LR observation.}
\label{fig:dataset_stats}
\end{figure}
\noindent\textbf{Blur Kernel Estimation }
We estimate the blur kernel $k$ shown in Eq.~\eqref{eq:LRdegradation} as
\begin{equation}
k = \underset{k}{\operatorname{arg\,min}} \left\lVert I_{LR}^{noise-free}\uparrow^{bic} - \, k \circledast I_{HR} \right\rVert^2_2,
\end{equation}
where $I_{LR}^{noise-free}\uparrow^{bic}$ is the noise-free LR image upscaled using bicubic interpolation. We solve for $k$ directly in the frequency domain using the Fast Fourier Transform~\cite{helou2018fourier}.
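This least-squares problem has a closed form per frequency; a NumPy sketch (the small regularizer `eps` guarding near-zero frequencies is our addition, and the benchmark's exact implementation may differ):

```python
import numpy as np

def estimate_kernel(lr_up, hr, eps=1e-6):
    """Closed-form least-squares blur kernel in the frequency domain.

    Minimizing ||lr_up - k conv hr||^2 over k gives, per frequency,
    K = LR * conj(HR) / (|HR|^2 + eps), assuming circular convolution.
    lr_up is the bicubically upscaled noise-free LR image, so both
    inputs live on the HR grid.
    """
    LR, HR = np.fft.fft2(lr_up), np.fft.fft2(hr)
    K = LR * np.conj(HR) / (np.abs(HR) ** 2 + eps)
    return np.fft.fftshift(np.real(np.fft.ifft2(K)))  # centered kernel
```

The `fftshift` at the end only recenters the kernel for visualization, as in Fig.~\ref{fig:kernel_stats}.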
The estimated blur kernel is visualized in Fig.~\ref{fig:kernel_stats}. For the purpose of comparison, we show the estimated blur kernel from two SR datasets: RealSR~\cite{cai2019toward} and City100~\cite{chen2019camera}. We also visualize two other blur kernels: the MATLAB bicubic kernel that is commonly used in synthetic SR datasets, and the Gaussian blur kernel with a sigma of 2.0, which is the largest kernel used by the state-of-the-art blind SR network~\cite{gu2019blind} for the upscaling factor of 2. From the visualization, we clearly see that the bicubic and Gaussian blur kernels commonly used in synthetic datasets are very different from the blur kernels of real captures. The blur kernel of W2S{} has a long tail compared to the blur kernels estimated from the other SR datasets, illustrating that more high-frequency information is removed from the LR images in W2S{}. This is because a wider space-domain filter corresponds to a narrower frequency-domain low pass, and vice versa. Hence, the recovery of HR images from such LR images is significantly more challenging.
Compared to the SR datasets, the LR and HR pairs in W2S{} are well-aligned during the capture process, and no further registration is needed. Furthermore, to obtain high-quality images, the SR datasets are captured under high ISO and contain almost zero noise, whereas W2S{} contains LR images with different noise levels. This makes it a more comprehensive benchmark for testing under different imaging conditions. Moreover, as shown in Sec.~\ref{sec:ana}, the estimated blur kernel of W2S{} is wider than that of other datasets, and hence it averages pixels over a larger window, filtering out more frequency components and making W2S{} a more challenging dataset for SR.
\section{Benchmark}
\label{sec:benchmark}
We benchmark the sequential application of state-of-the-art denoising and SR algorithms on W2S{}, using RMSE and SSIM. Note that we do not consider the inverse order, \textit{i.e.}, first applying SR methods on noisy images, as this amplifies the noise and causes a large increase in RMSE, as shown in the last row of Table~\ref{table:PSNR_dsr}. With current methods, it would be extremely hard for a subsequent denoiser to recover the original clean signal.
\subsection{Setup}
We split W2S{} into two disjoint training and test sets. The training set consists of 240 LR and HR image sets, and the test set consists of 120 sets of images, with no overlap between the two sets. We retrain the learning-based methods on the training set, and the evaluation of all methods is carried out on the test set.
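All benchmark scores are RMSE and SSIM; the two metrics can be sketched as follows in NumPy (we use a simplified single-window SSIM purely to illustrate the formula; the benchmark uses the standard locally windowed SSIM):

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over a single global window (illustrative only;
    standard SSIM averages the same expression over local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Lower RMSE and higher SSIM indicate reconstructions closer to the reference image.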
For denoising, we evaluate different approaches from both classical methods and deep-learning methods. We use a method tailored to address Poisson denoising, PURE-LET~\cite{luisier2011image}, and the classical Gaussian denoising methods EPLL~\cite{zoran2011learning} and BM3D~\cite{BM3D}. The Gaussian denoisers are combined with the Anscombe variance-stabilization transform (VST)~\cite{makitalo2012optimal} to first modify the distribution of the image noise into a Gaussian distribution, denoise, and then invert the result back with the inverse VST. We estimate the noise magnitude using the method in~\cite{foi2008practical}, to be used as input for both the denoiser and for the VST when the latter is needed. We also use the state-of-the-art deep-learning methods MemNet~\cite{tai2017memnet}, DnCNN~\cite{zhang2017beyond}, and RIDNet~\cite{anwar2019real}. For a fair comparison with the traditional non-blind methods that are given a noise estimate, we separately train each of these denoising methods for every noise level, and test with the appropriate model per noise level. The training details are presented in the supplementary material.
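The VST pipeline wraps a Gaussian denoiser between the Anscombe transform and an inverse transform; a minimal sketch using the simple algebraic inverse (the benchmark uses the optimal unbiased inverse of~\cite{makitalo2012optimal}, which additionally corrects the low-count bias of this closed form):

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: maps Poisson-distributed data to approximately
    unit-variance Gaussian data, so a Gaussian denoiser applies."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform (biased for
    low counts; an unbiased inverse corrects this)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def vst_denoise(noisy, gaussian_denoiser):
    """VST + Gaussian denoising + inverse VST, the scheme behind the
    VST+BM3D and VST+EPLL baselines (any Gaussian denoiser plugs in)."""
    return inverse_anscombe(gaussian_denoiser(anscombe(noisy)))
```

Here `gaussian_denoiser` is any function mapping a noisy array to a denoised one; passing the identity recovers the input exactly.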
We use six state-of-the-art SR networks for the benchmark: four pixel-wise distortion based SR networks, RCAN~\cite{zhang2018image}, RDN~\cite{zhang2018residual}, SAN~\cite{dai2019second}, SRFBN~\cite{li2019feedback}, and two perceptually-optimized SR networks, EPSR~\cite{vasu2018analyzing} and ESRGAN~\cite{wang2018esrgan}. The networks are trained for SR and the inputs are assumed to be noise-free, \textit{i.e.}, they are trained to map from the noise-free LR images to the high-quality HR images. All these networks are trained using the same settings, the details of which are presented in the supplementary material.
\begin{table}[t]
\centering
\begin{tabular}{ccccccc}
\toprule
& & \multicolumn{5}{c}{Number of raw images averaged before denoising} \\ \cline{3-7}
& Method & {1} & {2} & {4} & {8} & {16} \\ \cline{1-7}
\parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Denoisers}}}&
PURE-LET~\cite{luisier2011image} & \cellcolor{gray!20}0.089/0.864&0.076/0.899&0.062/0.928&0.052/0.944&0.044/0.958 \\
&VST+EPLL~\cite{zoran2011learning} & \cellcolor{gray!20}0.083/0.887&0.074/0.916&0.061/0.937&0.051/0.951&0.044/0.962 \\
&VST+BM3D~\cite{BM3D} & \cellcolor{gray!20}0.080/0.897&0.072/0.921&0.059/0.939&0.050/0.953&0.043/0.962 \\
&MemNet$^\dagger$~\cite{tai2017memnet} &\cellcolor{gray!20}0.090/0.901&0.072/0.909&0.063/0.925&0.059/0.944&0.059/0.944 \\
&DnCNN$^\dagger$~\cite{zhang2017beyond} &\cellcolor{gray!20}0.078/0.907&0.061/0.926&\textcolor{red}{0.049}/0.944&\textcolor{red}{0.041}/0.954&\textcolor{red}{0.033}/\textcolor{red}{0.964} \\
&RIDNet$^\dagger$~\cite{anwar2019real} & \cellcolor{gray!20}\textcolor{red}{0.076}/\textcolor{red}{0.910}&\textcolor{red}{0.060}/\textcolor{red}{0.928}&\textcolor{red}{0.049}/\textcolor{red}{0.943}&\textcolor{red}{0.041}/\textcolor{red}{0.955}&0.034/\textcolor{red}{0.964} \\
\cline{1-7}
\bottomrule
\end{tabular}
\caption{RMSE/SSIM results on denoising the W2S{} test images. We benchmark three classical methods and three deep learning based methods. The larger the number of averaged raw images is, the lower the noise level. $^\dagger$The learning based methods are trained for each noise level separately. An interesting observation is that the best RMSE results (in red) do not necessarily give the best result after the downstream SR method, as shown in Table~\ref{table:PSNR_dsr}. We highlight the results under the highest noise level with gray background for easier comparison with Table~\ref{table:PSNR_dsr}.}
\label{table:PSNR_den}
\end{table}
\subsection{Results and Discussion}
\newcommand{\benchmarkA}[1]{\includegraphics[width=0.135\linewidth]{#1}}
We apply the denoising algorithms on the noisy LR images, and calculate the RMSE and SSIM values between the denoised image and the corresponding noise-free LR image in the test set of W2S{}. The results of the 6 benchmarked denoising algorithms are shown in Table~\ref{table:PSNR_den}.
DnCNN and RIDNet outperform the classical denoising methods for all noise levels. Although MemNet achieves worse results than the classical denoising methods in terms of RMSE and SSIM, the results of MemNet contain fewer artifacts as shown in Fig.~\ref{fig:result:denoising}.
One interesting observation is that a better denoising with a lower RMSE or a higher SSIM, in some cases, results in unwanted smoothing in the form of a local filtering that incurs a loss of detail. Although the RMSE results of DnCNN are not the best (Table~\ref{table:PSNR_den}), when they are used downstream by the SR networks in Table~\ref{table:PSNR_dsr}, the DnCNN denoised images achieve the best final performance.
Qualitative denoising results are shown in the first row of Fig.~\ref{fig:result:denoising}. We note that the artifacts created by denoising algorithms are amplified when SR methods are applied on the denoised results (\textit{e.g.}, (a) and (b) of Fig.~\ref{fig:result:denoising}). Although the denoised images are close to the clean LR image according to the evaluation metrics, the SR network is unable to recover faithful texture from these denoised images as the denoising algorithms remove part of the high-frequency information.
\begin{figure}[t]
\centering
\begin{tabu}{ccccccc}
\rowfont{\tiny}
\multicolumn{7}{c}{Denoising Results}\\
\benchmarkA{IMAGES/jdsr/100_0/PURELET.png} &
\benchmarkA{IMAGES/jdsr/100_0/EPLL.png} &
\benchmarkA{IMAGES/jdsr/100_0/BM3D.png} &
\benchmarkA{IMAGES/jdsr/100_0/M_1.png} &
\benchmarkA{IMAGES/jdsr/100_0/D_1.png} &
\benchmarkA{IMAGES/jdsr/100_0/R_1.png} &
\benchmarkA{IMAGES/jdsr/100_0/avg400.png} \\
\rowfont{\tiny}
(a) PURE-LET & (b) EPLL & (c) BM3D & (d) MemNet & (e) DnCNN & (f) RIDNet & (g) clean LR \\
\rowfont{\tiny}
\multicolumn{7}{c}{RDN~\cite{zhang2018residual} applied on denoised results}\\
\benchmarkA{IMAGES/jdsr/100_0/RDN_PURELET.png} &
\benchmarkA{IMAGES/jdsr/100_0/RDN_EPLL.png} &
\benchmarkA{IMAGES/jdsr/100_0/RDN_BM3D.png} &
\benchmarkA{IMAGES/jdsr/100_0/RDN_M_1.png} &
\benchmarkA{IMAGES/jdsr/100_0/RDN_D_1.png} &
\benchmarkA{IMAGES/jdsr/100_0/RDN_R_1.png} &
\benchmarkA{IMAGES/jdsr/100_0/RDN.png} \\
\rowfont{\tiny}
(a) RDN+ & (b) RDN+ & (c) RDN+ & (d) RDN+ & (e) RDN+ & (f) RDN+ & (g) RDN+\\
\rowfont{\tiny}
PURE-LET & EPLL & BM3D & MemNet & DnCNN & RIDNet & clean LR\\
\end{tabu}
\caption{The first row shows qualitative results of the denoising algorithms on a test LR image with the highest noise level. The second row shows qualitative results of the SR network RDN~\cite{zhang2018residual} applied on top of the denoised results. RDN amplifies the artifacts created by PURE-LET and EPLL, and is unable to recover faithful texture when the input image is over-smoothed by denoising algorithms. A gamma correction is applied for better visualization. Best viewed on screen.}
\label{fig:result:denoising}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{lcccccccc}
\toprule
& & \multicolumn{6}{c}{Super-resolution networks} \\
\cline{3-8}
& \textbf{} & RCAN & RDN & SAN & SRFBN & EPSR & ESRGAN \\ \cline{3-8}
\parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Denoisers}}} & PURE-LET & .432/.697&.458/.695&.452/.693&.444/.694&.658/.594&.508/.646\\
& VST+EPLL & .425/.716&.434/.711&.438/.707&.442/.710&.503/.682&.485/.703\\
& VST+BM3D & .399/.753&.398/.748&.418/.745&.387/.746&.476/.698&.405/.716\\
& MemNet & .374/.755&.392/\textcolor{red}{.749}&.387/.746&.377/.752&.411/.713&.392/.719\\
& DnCNN & \textcolor{red}{.357}/\textcolor{red}{.756}&\textcolor{red}{.365}/\textcolor{red}{.749}&\textcolor{red}{.363}/\textcolor{red}{.753}&\textcolor{red}{.358}/\textcolor{red}{.754}&\textcolor{red}{.402}/\textcolor{red}{.719}&\textcolor{red}{.373}/\textcolor{red}{.726}\\
& RIDNet & .358/\textcolor{red}{.756}&.371/.747&.364/.752&.362/.753&.411/.710&.379/.725\\
\cline{1-8}
& Noise-free LR & .255/.836&.251/.837&.258/.834&.257/.833&.302/.812&.289/.813\\
\hline
& Noisy LR & .608/.382&.589/.387&.582/.388&.587/.380&.627/.318&.815/.279\\
\hline
\bottomrule
\end{tabular}
\caption{RMSE/SSIM results on the sequential application of denoising and SR methods on the W2S{} test images with the highest noise level, corresponding to the first column of Table~\ref{table:PSNR_den}. We omit the leading `0' in the results for better readability. For each SR method, we highlight the best RMSE value in red. The SR networks applied on the denoised results are trained to map the noise-free LR images to the high-quality HR images. }
\label{table:PSNR_dsr}
\end{table}
The SR networks are applied on the denoised results of the denoising algorithms, and are evaluated using RMSE and SSIM. We also include the results of applying the SR networks on the noise-free LR images.
As mentioned above, we notice that there is a significant drop in performance when the SR networks are given the denoised LR images instead of the noise-free LR images, as shown in Table~\ref{table:PSNR_dsr}.
For example, applying RDN on noise-free LR images results in the SSIM value of 0.836, while the SSIM value of the same network applied to the denoised results of RIDNet on the lowest noise level is 0.756 (shown in the first row, last column in Table~\ref{table:PSNR_jdsr}). This illustrates that the SR networks are strongly affected by noise or over-smoothing in the inputs. We also notice that a better SR network according to the evaluation on a single SR task does not necessarily provide better final results when applied on the denoised images. Although RDN outperforms RCAN in both RMSE and SSIM when applied on noise-free LR images, RCAN is more robust when the input is a denoised image. Among all the distortion-based SR networks, RCAN shows the most robustness, as it outperforms all other networks in terms of RMSE and SSIM when applied on denoised LR images. As mentioned above, another interesting observation is that although DnCNN does not achieve the lowest RMSE and highest SSIM among denoisers at the highest noise level, it still provides a better input for the SR networks. We note generally that better denoisers according to the denoising benchmark do not necessarily provide better denoised images for the downstream SR task. Although the denoised results from MemNet have larger RMSE than the conventional methods, as shown in Table~\ref{table:PSNR_den}, the SR results on MemNet's denoised images achieve higher quality based on RMSE and SSIM.
Qualitative results are given in Fig.~\ref{fig:result:benchmark}, where for each SR network we show the results for the denoising algorithm that achieves the best RMSE value for the joint task (\textit{i.e.}, using the denoised results of DnCNN). We note that none of the networks is able to produce results with detailed texture. As denoising algorithms remove some high-frequency signal along with the noise, the SR results from the distortion-based networks are blurry and many texture details are lost. Although the perception-based methods (EPSR and ESRGAN) are able to produce sharp results, they fail to reproduce faithful texture and suffer a drop in SSIM.
\begin{figure}[!ht]
\centering
\begin{tabu}{ccccccc}
\benchmarkA{IMAGES/benchmark/113_1/RCAN_D_1.png}&
\benchmarkA{IMAGES/benchmark/113_1/RDN_D_1.png}&
\benchmarkA{IMAGES/benchmark/113_1/SAN_D_1.png}&
\benchmarkA{IMAGES/benchmark/113_1/SRFBN_D_1.png}&
\benchmarkA{IMAGES/benchmark/113_1/EPSR_D_1.png}&
\benchmarkA{IMAGES/benchmark/113_1/ESRGAN_D_1.png}&\benchmarkA{IMAGES/benchmark/113_1/sim.png}\\
\rowfont{\tiny}
(a) 0.313 &(b) 0.322 &(c) 0.322 &(d) 0.344 &(e) 0.405 &(f) 0.400 & Ground-truth\\
\end{tabu}
\caption{Qualitative results with the corresponding RMSE values on the sequential application of denoising and SR algorithms on the W2S{} test images with the highest noise level. (a) DnCNN+RCAN, (b) DnCNN+RDN, (c) DnCNN+SAN, (d) DnCNN+SRFBN (e) DnCNN+EPSR, (f) DnCNN+ESRGAN. A gamma correction is applied for better visualization. Best viewed on screen.}
\label{fig:result:benchmark}
\end{figure}
\subsection{Joint Denoising and Super-Resolution (JDSR)}
Our benchmark results in Sec.~\ref{sec:benchmark} show that the successive application of denoising and SR algorithms does not produce the highest-quality HR outputs. In this section, we demonstrate that it is more effective to train a JDSR model that directly transforms the noisy LR image into an HR image.
\subsection{Training Setup}
For JDSR, we adopt a 16-layer RRDB network~\cite{wang2018esrgan}. To enable the network to better recover texture, we replace the GAN loss in the training with a novel texture loss. The GAN loss often leads SR networks to produce realistic but fake textures that differ from the ground-truth, which may result in a significant drop in SSIM~\cite{wang2018esrgan}. Instead, we introduce a texture loss that exploits the second-order statistics of deep features to help the network produce high-quality, faithful textures. This choice is motivated by the fact that second-order descriptors have proven effective for tasks such as texture recognition~\cite{harandi2014bregman}. We leverage the difference in second-order statistics of VGG features to measure the similarity of the texture between the reconstructed HR image and the ground-truth HR image. The texture loss is defined as
\begin{equation}
\mathcal{L}_{texture} = || Cov(\phi(I_{SR})) - Cov(\phi(I_{HR})) ||_2^2,
\end{equation}
where $I_{SR}$ is the estimated result from the network for JDSR and $I_{HR}$ is the ground-truth HR image, $\phi(\cdot)$ is a neural network feature space, and $Cov(\cdot)$ computes the covariance. We follow the implementation of MPN-CONV~\cite{li2017is} for the forward and backward feature covariance calculation. To improve visual quality, we further incorporate a perceptual loss into the training objective
\begin{equation}
\mathcal{L}_{perceptual} = || \phi(I_{SR}) - \phi(I_{HR}) ||_2^2.
\end{equation}
Our final loss function is then given by
\begin{equation}
\mathcal{L} = \mathcal{L}_1 + \alpha \cdot \mathcal{L}_{perceptual} + \beta \cdot \mathcal{L}_{texture},
\end{equation}
where $\mathcal{L}_1$ represents the $\ell_1$ loss between the estimated image and the ground-truth. We empirically set $\alpha = 0.05$ and $\beta = 0.05$.
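For concreteness, the loss terms above can be sketched in a few lines of NumPy. This is an illustration only: it treats the feature map $\phi(I)$ as a precomputed $C\times H\times W$ array, whereas our implementation uses VGG features and the MPN-CONV machinery for the covariance gradients.

```python
import numpy as np

def feature_covariance(feat):
    """Second-order statistics of a feature map.
    feat: (C, H, W) array of deep features phi(I).
    Returns the (C, C) covariance of the C channels over the H*W positions."""
    C = feat.shape[0]
    X = feat.reshape(C, -1)
    X = X - X.mean(axis=1, keepdims=True)
    return X @ X.T / X.shape[1]

def texture_loss(feat_sr, feat_hr):
    """|| Cov(phi(I_SR)) - Cov(phi(I_HR)) ||_2^2."""
    d = feature_covariance(feat_sr) - feature_covariance(feat_hr)
    return float(np.sum(d * d))

def perceptual_loss(feat_sr, feat_hr):
    """|| phi(I_SR) - phi(I_HR) ||_2^2."""
    return float(np.sum((feat_sr - feat_hr) ** 2))

def total_loss(l1, feat_sr, feat_hr, alpha=0.05, beta=0.05):
    """L = L_1 + alpha * L_perceptual + beta * L_texture."""
    return (l1 + alpha * perceptual_loss(feat_sr, feat_hr)
               + beta * texture_loss(feat_sr, feat_hr))
```

Because the covariance is invariant to per-channel mean shifts, the texture term constrains the correlation structure of the features rather than their absolute response, which is what makes it act as a texture descriptor.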
We follow the same training setup as the experiments in Sec.~\ref{sec:benchmark}. For comparison, we also train RCAN~\cite{zhang2018residual} and ESRGAN~\cite{wang2018esrgan} on JDSR.
\begin{table}[t]
\centering
\begin{tabular}{cccccc}
\toprule
& \multicolumn{4}{c}{Number of raw images averaged before JDSR} & \multirow{2}{*}{\#Parameters} \\ \cline{2-5}
Method & {1} & {2} & {4} & {8} & \\ \cline{1-6}
DnCNN$^\dagger$+RCAN$^\ddagger$&0.357/0.756&0.348/0.779&0.332/0.797&0.320/0.813&0.5M+15M\\
DnCNN$^\dagger$+ESRGAN$^\ddagger$&0.373/0.726&0.364/0.770&0.349/0.787&0.340/0.797&0.5M+18M\\
\cline{1-6}
JDSR-RCAN$^*$&0.343/0.767&0.330/0.780&0.314/0.799&0.308/0.814&15M\\
JDSR-ESRGAN$^*$&0.351/0.758&0.339/0.771&0.336/0.788&0.322/0.798&18M\\
Ours$^*$&0.340/0.760&0.326/0.779&0.318/0.797&0.310/0.801&11M \\
\cline{1-6}
\end{tabular}
\caption{JDSR RMSE/SSIM results on the W2S{} test set. $^\dagger$The denoising networks are retrained per noise level. $^\ddagger$The SR networks are trained to map noise-free LR images to HR images. $^*$The networks trained for JDSR are also retrained per noise level. }
\label{table:PSNR_jdsr}
\end{table}
\subsection{Results and Discussion}
The quantitative results of different methods are reported in Table~\ref{table:PSNR_jdsr}. The results indicate that, compared with the sequential application of denoising and SR, a single network trained on JDSR is more effective even though it has fewer parameters. GAN-based methods generate fake textures and lead to low SSIM scores. Our model, trained with texture loss, is able to effectively recover high-fidelity texture information even when high noise levels are present in the LR inputs. We show the qualitative results of JDSR on the highest noise level (which corresponds to the first column of Table~\ref{table:PSNR_den}) in Fig.~\ref{fig:jdsr}. We see that the other networks have difficulty recovering the shape of the cells in the presence of noise, whereas our method trained with texture loss is able to generate a higher-quality HR image with faithful texture.
\newcommand{\jdsrimg}[1]{\includegraphics[width=0.16\linewidth]{#1}}
\begin{figure}[t]
\centering
\begin{tabu}{cccccc}
\jdsrimg{IMAGES/jdsr/090_0/RCAN_D_1_avg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/RCAN_jdsravg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/ESRGAN_D_1_avg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/ESRGAN_jdsravg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/ours_jdsravg1.png}&
\jdsrimg{IMAGES/jdsr/090_0/sim.png}
\\
\rowfont{\tiny}
(a) 0.101 &(b) 0.065 &(c) 0.160 &(d) 0.124 &(e) 0.084 & Ground-truth\\
\end{tabu}
\caption{Qualitative results with the corresponding RMSE values of denoising and SR on the W2S{} test images with the highest noise level. (a) DnCNN+RCAN, (b) RCAN, (c) DnCNN+ESRGAN, (d) ESRGAN, (e) a 16-layer RRDB network~\cite{wang2018esrgan} trained with texture loss. A gamma correction is applied for better visualization. Best viewed on screen.}
\label{fig:jdsr}
\vspace{-0.2cm}
\end{figure}
\section{Conclusion}
We propose the first joint denoising and SR microscopy dataset, \textbf{W}idefield\textbf{2S}IM{}. We use image averaging to obtain LR images with different noise levels and the noise-free LR images. The HR images are obtained with SIM imaging. With W2S{}, we benchmark the combination of various denoising and SR methods. Our results indicate that SR networks are very sensitive to noise, and that the consecutive application of the two approaches is sub-optimal and suffers from the accumulation of errors from both stages. We also observe from the experimental results that the networks benefit from joint optimization for denoising and SR. W2S{} is publicly available, and we believe it will be useful in advancing image restoration in medical imaging. Although the data is limited to the domain of microscopy data, it can be a useful dataset for benchmarking deep denoising and SR algorithms.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
It is widely expected that gravitational waves of sufficiently strong
amplitude will
be detected by a new generation of gravitational wave interferometers.
Binary systems composed of
compact objects, such as black holes and/or neutron stars, are among
the strongest expected sources of these waves.
Advanced gravitational wave detectors should be sensitive enough to
detect the merging phase of such binaries. A detailed analysis
of the expected waveforms from these events
will provide valuable information not only in the analysis of the
received signals, but also in the design and tuning of future advanced
gravitational wave detectors~\cite{advligo,Mandel:2007hi}.
In relation to these efforts, and beyond the intrinsic importance of
the two-body problem in general relativity (GR), it is significant that recent studies of
the binary black hole problem have made substantial progress in providing waveforms for
these mergers (see for instance~\cite{Pret05,CLMZ06,BCCKM06,DHPS06,GSBHH07,SPRTW07}).
Furthermore, these numerical results for vacuum spacetimes show a remarkable
agreement with those obtained with approximation techniques \cite{BCP07,PBBC07}.
This provides considerable support for the use of
waveforms obtained via approximation techniques, suitably enhanced by further information
from numerical simulations, since these can be more
easily encoded in a template bank~\cite{Baumgarte:2006en}. This requires knowing the waveforms during the
pre-merger, merger and post-merger stages and matching them appropriately to obtain
the continuous wavetrain through the most violent and strongly radiative stage of
the dynamics.
For non-vacuum spacetimes, differences in the waveforms
may arise from the state of matter describing the compact stars, the
influence of magnetic fields and related phenomena.
To fully understand these systems and their waveforms,
detailed simulations will be required to map out the possible phenomenology.
For the particular case of binary neutron
stars in full GR, several efforts studying the system in three dimensional
settings have been presented in recent
years~\cite{Shibata:2002jb,Duez:2002bn,Shibata:2003ga,
Miller:2003vc,Marronetti:2003hx,Jin:2006gm,Shibata:2005ss}.
However,
the complexity and computational cost of these simulations have
permitted investigators to consider only a portion of the interesting parameter
space and several of them have been restricted by symmetry
considerations. Nevertheless, a number of interesting problems are
beginning to be addressed, including the
influence of stiff versus soft equations of state~\cite{Shibata:2005ss},
a possible way to determine the innermost stable circular
orbit~\cite{Marronetti:2003hx}, the dynamics of unequal mass
binaries~\cite{Shibata:2005ss} and even the possible existence of
critical phenomena in the merging system~\cite{Jin:2006gm}.
Further exploration of these systems will require relaxing symmetry
considerations, such as axisymmetry or equatorial symmetry, and expanding
the space of initial configurations that can be successfully evolved.
Moreover, the inclusion of additional physics such as
magnetic fields will be important as these effects may play a major role in
the resulting dynamics. For instance, the magnetorotational instability,
which redistributes angular momentum in the system, can have a strong
influence on the multipole structure of the central source and
hence on the gravitational wave output of the system.
To date, work on black hole-neutron star binaries has been limited to a few
cases~\cite{Shibata:2006,Shibata:2006a,Rezzolla:2006}. As a result, our
understanding of this type of system is still in its infancy.
Needless to say, we have even more to understand about
both types of compact binaries when their environments, which may include
magnetic fields and radiation transport, are included.
Indeed, both magnetic fields and radiation transport are expected to be
key ingredients in modeling short, hard gamma-ray burst phenomena with compact
binaries.
Understanding such spectacular events requires the addition of
these ingredients to the computational infrastructure. The
resulting numerical simulations should allow for new astrophysical
insights.
The present work is intended as the first in a series of studies
that examine the evolution of compact binary systems in full three
dimensional general relativity.
To this end, we have developed a general computational infrastructure
with solvers for the Einstein and relativistic MHD equations that
incorporates several novel
features, which we discuss in the following sections. In Section II
we describe our formulation of the equations for these systems.
This includes expressing
the Einstein equations in a form with the desirable property of symmetric hyperbolicity~\cite{Palenzuela:2006wp,PLLinprep}
and coupling them with the equations of relativistic magnetohydrodynamics (MHD)~\cite{Neilsen2005,Anderson:2006ay}. Section III presents our
numerical implementation, such as integration techniques,
distributed adaptive mesh refinement (AMR), and
a tapered grid algorithm that ensures
stability and considerably reduces spurious reflections off artificial
internal
boundaries~\cite{Lehner:2005vc}.
These ingredients let
us simulate binary evolutions in which the stars begin with wider
separations than has been done in earlier studies. We can
extract gravitational radiation in the wave zone and
place outer boundaries an order of magnitude beyond what has been done
previously.
As a result, contamination by boundary effects
is negligible.
Section IV presents a fairly stringent code test by considering the
dynamics of a single
Tolman-Oppenheimer-Volkoff (TOV) solution and extracting the
radial oscillation modes of the star.
Section V describes our main application, namely a study of a binary
neutron star system without any assumed symmetries. We follow
the dynamics of the system from an early non-quasicircular stage
to the merger and subsequent formation of a neutron star or a black
hole. We present gravitational wave signals as measured by observers
placed in the wave-zone and calculated via Weyl scalars.
Section VI concludes and offers some considerations for future
work.
\section{Formulation and equations of motion}
The binary neutron star systems considered here are governed by both
the Einstein equations for the geometry, and the relativistic fluid
equations for the matter. We write both systems as first order
hyperbolic equations.
In this section we present a brief summary of our formulation and equations
for both the geometry and the fluid. More details on our
approach to the Einstein equations~\cite{Palenzuela:2006wp} and the
relativistic fluid equations~\cite{Neilsen2005,Anderson:2006ay} can be
found elsewhere.
By way of notation, we use letters from the beginning of the
alphabet
($a$, $b$, $c$) for spacetime indices, while letters from the middle
of the alphabet ($i$, $j$, $k$) range over spatial components.
We adopt geometric units where $c=G=1$.
\subsection{Einstein equations}
We write the Einstein equations in a first order reduction of the
generalized harmonic (GH) formalism. Our approach is closely related to
the one in~\cite{Lindblom:2005qh}, and it was used
previously in binary boson star evolutions~\cite{Palenzuela:2006wp},
where additional information can be found.
We define spacelike hypersurfaces at
$x^0\equiv t ={\rm const.}$, and define the 3-metric $h_{ij}$ on the
hypersurfaces. A vector normal to the hypersurfaces is given by
$n_a = - \nabla_a t / ||\nabla_a t ||$, and coordinates defined on
neighboring hypersurfaces can be related through the lapse, $\alpha$,
and shift, $\beta^i$. With these definitions, the spacetime metric
$g_{ab}$ can then be written as
\begin{eqnarray}
{\rm d} s^2 &=& g_{ab}\, {\rm d} x^a {\rm d} x^b\\
&=&-\alpha^2 \, {\rm d} t^2
+ h_{ij}\left({\rm d} x^i + \beta^i\, {\rm d} t\right)
\left({\rm d} x^j + \beta^j\, {\rm d} t\right).
\end{eqnarray}
Indices on spacetime quantities are raised and lowered with the 4-metric,
$g_{ab}$, and its inverse, while the 3-metric $h_{ij}$ and its inverse
are used to raise and lower indices on spatial quantities.
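As an illustration of this decomposition, the 4-metric can be assembled pointwise from the lapse, shift, and 3-metric. The sketch below (plain NumPy at a single spacetime point, with index 0 denoting time) is not part of our evolution code, but it shows the algebra.

```python
import numpy as np

def four_metric(alpha, beta, h):
    """Assemble g_ab from the lapse alpha, shift beta^i, and 3-metric h_ij,
    following ds^2 = -alpha^2 dt^2 + h_ij (dx^i + beta^i dt)(dx^j + beta^j dt)."""
    beta_l = h @ beta                      # beta_i = h_ij beta^j
    g = np.empty((4, 4))
    g[0, 0] = -alpha**2 + beta @ beta_l    # g_00
    g[0, 1:] = beta_l                      # g_0i
    g[1:, 0] = beta_l
    g[1:, 1:] = h                          # g_ij
    return g
```

A quick consistency check is that the inverse metric then satisfies the standard identities $g^{00}=-1/\alpha^2$ and $g^{0i}=\beta^i/\alpha^2$.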
In the generalized harmonic formulation, the evolved variables are
\begin{equation}
g_{ab} \, , \quad Q_{ab} \equiv - n^c \, \partial_c g_{ab} \, ,
\quad D_{iab} \equiv \partial_i g_{ab} \, ,
\end{equation}
namely the spacetime metric and its temporal and spatial derivatives,
respectively. Coordinates are specified via the generalized harmonic
condition
\begin{equation}
\square x^a = H^a(t,x^i),
\end{equation}
where the arbitrary source
functions $H^a(t,x^i)$ determine the coordinate freedom.
Although our code allows for a general coordinate choice, we
choose harmonic coordinates for the work presented here and set
$H^a(t,x^i)=0$.
The evolution equations in our GH formalism are
\begin{eqnarray}
\label{EE_geq}
\partial_t g_{ab} &=& \beta^k~D_{kab} - \alpha~Q_{ab}, \\
\partial_t Q_{ab} &=& \beta^k~\partial_k Q_{ab}
- \alpha h^{ij} \partial_i D_{jab} \nonumber \\
&-& \alpha~ \partial_a H_b - \alpha~ \partial_b H_a +
2~\alpha~ \Gamma_{cab}~ H^c \nonumber \\
&+& 2\, \alpha\, g^{cd}~(h^{ij} D_{ica} D_{jdb} - Q_{ca} Q_{db}
- g^{ef} \Gamma_{ace} \Gamma_{bdf}) \nonumber \\
&-& \frac{\alpha}{2} n^c n^d Q_{cd} Q_{ab}
- \alpha~h^{ij} D_{iab} Q_{jc} n^c \nonumber \\
&-& 8 \pi \, \alpha(2T_{ab} - g_{ab} T) \nonumber \\
&-& 2 \sigma_0 \, \alpha \, [n_a Z_b + n_b Z_a - g_{a b} n^c Z_c ] \nonumber \\
&+& \sigma_1 \, \beta^i ( D_{iab} - \partial_i g_{ab} ), \\
\label{EE_Deq}
\partial_t D_{iab} &=& \beta^k \partial_k D_{iab}
- \alpha~\partial_i Q_{ab} \nonumber \\
&+& \frac{\alpha}{2} n^c n^d D_{icd} Q_{ab}
+ \alpha~h^{jk} n^c D_{ijc} D_{kab} \nonumber \\
&-& \sigma_1 \, \alpha \, ( D_{iab} - \partial_i g_{ab} ) .
\end{eqnarray}
Here $T_{ab}$ is the stress-energy tensor and $T$ is its trace, $T=T^a{}_a$.
$Z^a$ is a vector related to the constraints defined below in
Eq.~(\ref{eq:define_Z}). These variables are not evolved,
rather they measure the constraint violation and are included
in the evolution equations for constraint
damping purposes~\cite{gundlach}.
We also define $\Gamma_{abc} = g_{ad}\Gamma^{d}{}_{bc}$, where
${\Gamma^a}{}_{bc}$ are the Christoffel symbols obtained from $g_{ab}$,
given by
\begin{equation}
\Gamma^a{}_{bc} = \frac{1}{2} \, {g^{ad}} (D_{bdc} + D_{cdb} - D_{dbc}) ~.
\end{equation}
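Because the Christoffel symbols involve only the inverse metric and the first-order variables, they can be evaluated pointwise from array data. A minimal illustrative routine (with the storage convention D[d, a, b] $= \partial_d g_{ab}$, an assumption of this sketch rather than our code's layout) is:

```python
import numpy as np

def christoffel(g_inv, D):
    """Gamma^a_{bc} = 1/2 g^{ad} (D_{bdc} + D_{cdb} - D_{dbc}),
    where D[d, a, b] = partial_d g_{ab} and g_inv holds g^{ab}."""
    return 0.5 * (np.einsum('ad,bdc->abc', g_inv, D)
                  + np.einsum('ad,cdb->abc', g_inv, D)
                  - np.einsum('ad,dbc->abc', g_inv, D))
```

For example, for the flat 2-metric $\mathrm{diag}(1, r^2)$ in polar coordinates this reproduces $\Gamma^r{}_{\theta\theta}=-r$ and $\Gamma^\theta{}_{r\theta}=1/r$.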
Note, $D_{iab}$ are evolved variables in our system, and the
quantities $D_{0ab}$ are computed from $Q_{ab}$ and $D_{iab}$ via
\begin{equation}
D_{0ab} = -\alpha Q_{ab} + \beta^k D_{kab} \, .
\end{equation}
While the Arnowitt-Deser-Misner (ADM) extrinsic
curvature is not part of the GH system,
the fluid equations below are written in terms of $K_{ij}$, which can be
calculated as
\begin{equation}
K_{ij} = \frac{1}{2} Q_{ij}+\frac{1}{\alpha}(D_{(ij)0}-\beta^k D_{(ij)k}).
\end{equation}
This GH formulation includes a number of constraints that must be satisfied
for consistency, including the Hamiltonian and momentum constraints as well
as additional constraints that arise in the first order reduction.
In particular, if we define the four-vector
\begin{equation}
\label{harmonicZ}
2 Z^a \equiv - \Gamma^{a}{}_{bc} \, g^{bc} - H^a(t,x^i) \, ,
\label{eq:define_Z}
\end{equation}
it can be shown that the energy and momentum constraints are satisfied if
$Z^a=0=\partial_t Z^a$.
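Evaluating this constraint monitor is a pointwise algebraic operation; a minimal sketch (with the Christoffel symbols supplied as an array, an assumption of this illustration) is:

```python
import numpy as np

def harmonic_constraint(Gamma, g_inv, H):
    """Z^a = -1/2 (Gamma^a_{bc} g^{bc} + H^a).
    Gamma[a, b, c] = Gamma^a_{bc}; Z^a vanishes on constraint-satisfying data."""
    return -0.5 * (np.einsum('abc,bc->a', Gamma, g_inv) + H)
```

In flat space with $H^a=0$ every component vanishes identically; during an evolution the norm of $Z^a$ serves as an error indicator.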
The free parameters $\sigma_0$ and $\sigma_1$ are chosen to
control the damping of the four vector $Z_a$ (the
energy and momentum constraints) and the first order constraints,
respectively~\cite{Lindblom:2005qh,Palenzuela:2006wp}.
We monitor the
$Z^a$ during the evolution as an indication of the magnitude of the numerical
error in the solution.
\subsection{Perfect fluid equations}
We now briefly introduce the perfect fluid equations.
Additional information can be found in our previous
work~\cite{Neilsen2005,Anderson:2006ay}
as well as in general review articles~\cite{Marti:1999wi,Font:2000pp}.
The stress-energy tensor for the perfect fluid is
\begin{equation}
T_{ab} = h_e u_a u_b + P g_{ab},
\end{equation}
where $u^a$ is the four velocity of the fluid,
$h_e$ is the enthalpy, and $P$ is the isotropic pressure. The enthalpy
can be written
\begin{equation}
h_e = \rho_o + \rho_o\epsilon + P,
\end{equation}
where $\rho_o$ is the rest energy density, and $\epsilon$ is
the specific internal energy density.
We introduce the quantities
\begin{equation}
W\equiv -n^a u_a, \qquad v^i \equiv \frac{1}{W}\,h^i{}_j u^j,
\end{equation}
where $W$ is the Lorentz factor between the fluid frame and the fiducial ADM
observers and $v^i$ is the spatial coordinate velocity of the fluid.
The set of fluid variables introduced here are known as the {\em primitive}
variables, ${{\hbox{\bfgreek\char'167}}} = (\rho_o , v^i , P)^{\rm T}$.
High resolution shock capturing schemes (HRSC) are robust numerical methods for
compressible fluid dynamics. These methods, based on Godunov's seminal
work~\cite{Godunov}, are fundamentally based on writing the fluid equations as
integral conservation laws. To this end, we introduce {\em conservative}
variables ${{\hbox{\bfgreek\char'161}}} = (D, S_i, \tau)^{\rm T}$, where
\begin{eqnarray}
D &=& W \rho_o,\\
S_i &=& h_e W^2 v_i,\\
\tau &=& h_e W^2 - P - D.
\end{eqnarray}
In an asymptotically flat spacetime these quantities are conserved,
and are related to the
baryon number, momentum, and, in the non-relativistic limit, the kinetic
energy, respectively. Anticipating the form of the evolution equations,
we also introduce the densitized conserved variables
\begin{equation}
\tilde D = \sqrt{h}\, D, \quad
\tilde S_i = \sqrt{h}\, S_i, \quad
\tilde \tau = \sqrt{h}\, \tau,
\end{equation}
where $h=\det(h_{ij})$.
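The map from primitive to conserved variables is purely algebraic. The following sketch assumes a flat spatial metric at the point in question (so $v_i = v^i$ and $\sqrt{h}=1$) and uses the ideal-gas EOS $P=(\Gamma-1)\rho_o\epsilon$ to obtain the enthalpy; it illustrates the definitions rather than reproducing our code.

```python
import numpy as np

def prim_to_cons(rho0, vx, vy, vz, P, Gamma=2.0):
    """Primitive (rho0, v^i, P) -> conserved (D, S_i, tau).
    Assumes a flat 3-metric and the ideal-gas EOS P = (Gamma-1) rho0 eps."""
    v2 = vx * vx + vy * vy + vz * vz
    W = 1.0 / np.sqrt(1.0 - v2)           # Lorentz factor W = -n^a u_a
    eps = P / ((Gamma - 1.0) * rho0)      # specific internal energy
    h_e = rho0 + rho0 * eps + P           # enthalpy
    D = W * rho0
    S = h_e * W * W * np.array([vx, vy, vz])
    tau = h_e * W * W - P - D
    return D, S, tau
```

The inverse map, recovering the primitive variables after each update of the conserved ones, requires a root find and is the numerically delicate direction.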
The fluid equations can now be written in balance law form
\begin{equation}
\partial_t\tilde{\hbox{\bfgreek\char'161}} + \partial_k{\hbox{\bfgreek\char'146}}\,^k(\tilde{\hbox{\bfgreek\char'161}}) = {\hbox{\bfgreek\char'163}}(\tilde{\hbox{\bfgreek\char'161}}),
\label{eq:balance}
\end{equation}
where ${\hbox{\bfgreek\char'146}}\,^k$ are flux functions, and ${\hbox{\bfgreek\char'163}}$
are source terms. The fluid equations in this form are specifically
\begin{eqnarray}
&&\partial_t \tilde D + \partial_i \left[ \alpha\,\tilde D
\left( v^i - {\beta^i \over \alpha} \right) \right] = 0,\label{eq:ev_D} \\
&& \partial_t \tilde S_j + \partial_i \left[ \alpha \left(
\tilde S_j \left( v^i - {\beta^i \over \alpha} \right) + \sqrt{h}\,P \, h^i{}_j
\right)\right]\nonumber\\
&&\qquad = \alpha \, {^{3}{\Gamma}}^i{}_{jk} \, \left( \tilde S_i v^k
+ \sqrt{h}\,P h_i{}^k \right)
+ \tilde S_a\partial_j\beta^a\Bigr.\nonumber\\
&&\qquad\qquad\qquad
- \partial_j \alpha \, (\tilde\tau + \tilde D), \\
&&\partial_t \tilde\tau
+ \partial_i \left[ \alpha\left(\tilde S^i - \frac{\beta^i}{\alpha} \,
\tilde\tau - v^i \tilde D \right) \right] \nonumber\\
&& \qquad= \alpha \,
\left[ K_{ij} \tilde S^i v^j + \sqrt{h}\, K P - \frac{1}{\alpha}
\, \tilde S^a \partial_a \alpha \right].
\end{eqnarray}
Here ${^{3}{\Gamma}}^i{}_{jk}$ is the Christoffel symbol associated with the
3-metric $h_{ij}$, and $K$ is the trace of the extrinsic curvature,
$K = K^i{}_i$.
Finally, we close the system of fluid equations with an equation of state (EOS).
We choose the ideal gas EOS
\begin{equation}
P = (\Gamma-1)\, \rho_o\epsilon,
\end{equation}
where $\Gamma$ is the constant adiabatic exponent. Nuclear matter in
neutron stars
is relatively stiff, and we set $\Gamma=2$ in this work.
When the fluid flow is adiabatic, this EOS reduces to the well known
polytropic EOS
\begin{equation}
P=\kappa\rho_o{}^\Gamma,
\label{eq:polyEOS}
\end{equation}
where $\kappa$ is a dimensional constant. We use the polytropic EOS
only for setting initial data.
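The two equations of state are consistent provided the specific internal energy of the initial data is chosen accordingly; a small sketch (the value $\kappa=100$ here is purely illustrative):

```python
def polytrope_primitives(rho0, kappa=100.0, Gamma=2.0):
    """P and eps for the polytropic EOS P = kappa rho0^Gamma, with eps set
    so the ideal-gas EOS P = (Gamma-1) rho0 eps holds for the same state."""
    P = kappa * rho0**Gamma
    eps = P / ((Gamma - 1.0) * rho0)      # = kappa rho0^(Gamma-1) / (Gamma-1)
    return P, eps
```

Initial data built this way evolve consistently under the ideal-gas EOS as long as the flow remains adiabatic; shock heating breaks the polytropic relation but not the ideal-gas one.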
\section{Numerical approach}
Our numerical approach to solving the combined equations of general
relativistic hydrodynamics (GRHD) is built upon two extensively tested
codes: one written to solve the Einstein
equations~\cite{Palenzuela:2006wp,PLLinprep}, and one for the relativistic
magnetohydrodynamics (MHD) equations~\cite{Neilsen2005,Anderson:2006ay}.
It should be mentioned that while we do solve the full GRMHD equations, in
our current work the magnetic field is set to zero.
Results with non-trivial magnetic fields will
be presented elsewhere~\cite{everybodyPRL}.
While both sets of evolution
equations are hyperbolic, the solutions from each set of equations are quite
different. The Einstein equations are linearly degenerate, and
therefore we expect smooth solutions to evolve from smooth initial data.
The fluid equations, on the other hand, are genuinely nonlinear, and
discontinuous weak solutions (shocks) generically evolve from smooth initial
data~\cite{Reula:1998ty}. We choose numerical methods adapted to the
features of each set of equations.
The fluid equations are evolved with a modified convex essentially
non-oscillatory (CENO) method, while the Einstein equations are evolved
using fourth-order accurate difference operators that satisfy
summation by parts (SBP).
These very different methods are easily combined by discretizing the
equations in time using the method of lines.
We base our code on the {\sc had}\ computational infrastructure for
distributed AMR. The Einstein and fluid solvers are written in separate
modules, which can be used individually or combined.
The following sections
review our methods.
\subsection{Adaptive mesh refinement using {\sc had}}
The neutron star problem has several important physical scales, and
each must be adequately resolved to capture the relevant dynamics.
These scales include (1) the individual stars, preferably incorporating
some of their internal dynamics, (2) the orbital
length scale, (3) the gravitational wave zone, and (4) the
location of outer boundaries. In this work, the initial orbital scale
is on the order of several stellar radii, the gravitational waves are
extracted at 30, 40 and 50 stellar radii, and the outer boundaries
of the computational domain are placed about 100 stellar radii from
the orbital pair to reduce boundary contamination of the orbital
dynamics and gravitational wave signals. The computational demands
required to resolve these different physical scales are best met
using adaptive mesh refinement.
We use the publicly available computational infrastructure
{\sc had}\ to provide parallel distributed
AMR for our codes~\cite{had_webpage,Liebling}.
{\sc had}\ can solve both hyperbolic and elliptic equations, and, unlike
several other publicly available AMR
toolkits~\cite{SAMRAI,Chombo,Paramesh,AMROC,Boxlib}, it
accommodates both vertex and cell centered algorithms. {\sc had}\
has a modular design, allowing one to solve
different sets of equations with the same computational infrastructure.
Furthermore, solvers for different equations can be coupled together,
as we have done here with separate solvers for the GR and MHD equations.
{\sc had} provides Berger-Oliger~\cite{Berger} style AMR with subcycling
in both space and time. The {\sc had} clustering algorithm is
Berger-Rigoutsos~\cite{Rigoutsos}, and the load balancing algorithm is
the least loaded scheme~\cite{Rendleman}. Refinement in {\sc had}\ can be
triggered by user-specified criteria, e.g., refining on solution
features such as gradients or extrema, or refining on truncation error
estimation using a shadow hierarchy.
The runs
presented here use the shadow hierarchy for refinement, and
all dynamic fields are used to estimate the truncation error.
Some additional fixed refinement regions are used for gravitational wave
extraction in the wave zone.
As an example, Figure~\ref{fig:amr_mesh} illustrates the resulting
mesh structure at a pre-merge stage in our simulations.
\begin{figure}
\begin{center}
\epsfig{file=grid0.0.ps,height=4.0cm}
\epsfig{file=grid84.8.ps,height=4.0cm}
\epsfig{file=grid500.8.ps,height=4.0cm}
\caption{The AMR mesh structure at times 0, 84, and 500 for the
pre-merge stage of the simulation with
resolution of 32 points across each star.
The simulation had seven levels of refinement, five of which are visible
here. Simulations were performed on 128 processors.} \label{fig:amr_mesh}
\end{center}
\end{figure}
{\sc had} supports arbitrary orders of accuracy~\cite{Lehner:2005vc},
and the overall accuracy for the implementation employed here is third order for smooth
solutions. {\sc had} implements the tapered-grid boundary method
for internal boundaries~\cite{Lehner:2005vc}. This method is
advantageous for two reasons. It guarantees stability of the AMR
algorithm if the unigrid counterpart is stable as well as significantly
reducing spurious reflections at interface boundaries.
Finally, when a fine grid is created during an evolution,
the geometric variables are interpolated onto the fine grid using
Lagrangian interpolation. The fluid variables are interpolated using
weighted essentially non-oscillatory (WENO)
interpolation~\cite{SebastianShu}.
This interpolation scheme is designed for discontinuous
functions, and reduces to Lagrangian interpolation for smooth functions.
\subsection{Method of lines}
The numerical methods for the Einstein equations (SBP) and the
fluid equations (CENO) both specify the discretization of the spatial
difference operators, giving the semi-discrete equations
\begin{equation}
\frac{d {\hbox{\bfgreek\char'165}}}{dt} = L({\hbox{\bfgreek\char'165}}).
\end{equation}
Here ${\hbox{\bfgreek\char'165}}$ represents the set of all variables evolved in both
the Einstein and fluid equations, and $L$ represents a discrete spatial
difference operator. These ordinary differential equations are now
discretized in time using the method of lines.
We choose a third order Runge-Kutta scheme that preserves the
TVD (Total Variation
Diminishing) condition~\cite{ShuOsherI} to integrate the semi-discrete
equations
\begin{eqnarray}
{\hbox{\bfgreek\char'165}}\,^{(1)} &=& {\hbox{\bfgreek\char'165}}\,^{n} + \triangle t L({\hbox{\bfgreek\char'165}}\,^n),\nonumber\\
{\hbox{\bfgreek\char'165}}\,^{(2)} &=& \frac{3}{4}{\hbox{\bfgreek\char'165}}\,^{n} + \frac{1}{4}{\hbox{\bfgreek\char'165}}\,^{(1)}
+ \frac{1}{4}\triangle t L({\hbox{\bfgreek\char'165}}\,^{(1)}),\\
{\hbox{\bfgreek\char'165}}\,^{n+1} &=& \frac{1}{3}{\hbox{\bfgreek\char'165}}\,^{n} + \frac{2}{3}{\hbox{\bfgreek\char'165}}\,^{(2)}
+ \frac{2}{3}\triangle t L({\hbox{\bfgreek\char'165}}\,^{(2)}).\nonumber
\end{eqnarray}
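In code, one step of this integrator is a direct transcription of the three stages above; a sketch for an arbitrary right-hand-side operator $L$:

```python
def ssp_rk3_step(u, dt, L):
    """One step of the third-order TVD (SSP) Runge-Kutta scheme of
    Shu and Osher, written exactly as the three stages above."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * L(u1)
    return u / 3.0 + (2.0 / 3.0) * u2 + (2.0 / 3.0) * dt * L(u2)
```

Each stage is a convex combination of forward-Euler steps, which is what preserves the TVD property and allows coupling to the shock-capturing fluid discretization.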
Using the method of lines for the temporal discretization gives us
considerable freedom in choosing numerical methods for the spatial
derivatives, as well as the ability to choose methods of
arbitrary orders of accuracy. This freedom allows us to naturally
and consistently combine
both the CENO and SBP methods in the GRHD code.
\subsection{Einstein equations}
As described in~\cite{Palenzuela:2006wp} our implementation of the
Einstein equations takes advantage of several techniques tailored
to the symmetric hyperbolic properties of the generalized
harmonic formulation we use. At the linear level, these techniques
guarantee that the full AMR implementation is stable.
We use second and fourth order spatial derivative operators which satisfy
summation by parts. These operators allow one to obtain a semi-discrete
energy estimate which, together with suitable boundary conditions and
time integration, ensure the stability of the implementation of linear
systems (see~\cite{GKO}, also~\cite{Calabrese:2003vx} and references cited
therein). Relatedly, we employ a Kreiss-Oliger dissipation operator which is
consistent with the summation by parts property.
For the outer boundaries, we implement Sommerfeld boundary conditions and
follow the prescription given in~\cite{RLS07}.
We have also used maximally dissipative boundary conditions,
but found that they led to larger reflections at the boundaries which,
in turn, corrupt the waveform extraction at late times.
We set the constraint damping parameters to
$\sigma_0=\sigma_1=1$. These values were previously used in both
binary black hole and boson star evolutions, and work similarly in
the binary neutron star evolutions presented here. For the cases
discussed here, constraint violations remain under control during
the evolutions.
Finally, while our GH formalism allows for general coordinate
choices through the source functions $H^a(t,x^i)$,
we adopt $H^a(t,x^i)=0$ in all the
simulations described here. Thus, the coordinates adopted are strict
harmonic coordinates.
\subsection{Perfect fluid equations}
The perfect fluid equations are integrated using an HRSC solver based
on the CENO method~\cite{LiuOsher}, incorporating some modifications by
Del Zanna and Bucciantini~\cite{DelZanna:2002qr}.
Detailed discussions of our method
have been presented previously~\cite{Neilsen2005,Anderson:2006ay}.
We choose the CENO method to solve the fluid equations primarily for
two reasons.
First, it is a finite difference or vertex centered scheme.
This means that the discrete fluid solution corresponds to point values
of the solution and not cell averages.
As the Einstein equations are discretized with finite differences,
coupling these equations to the fluid equations with AMR is simplified if
both sets of variables are defined at the same grid locations.
Secondly, CENO uses
a component-wise decomposition (central schemes) of the equations rather
than a spectral decomposition (upwind schemes). Central schemes are more
efficient than spectral decomposition schemes. Although they
are more diffusive at discontinuities, their solutions often differ
only slightly
from those obtained using upwind methods. With AMR we can sharply resolve
all interesting features of the solution. Outflow boundary conditions are
applied at the physical outer boundary.
The HLLE flux is used for the numerical flux~\cite{Harten}.
This is a central-upwind
method that uses the largest eigenvalues of the Jacobian matrix
in each direction.
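For a single component, the HLLE flux has a simple closed form built from the left/right states, their fluxes, and the extremal characteristic speeds. A minimal sketch (scalar version; the function name is ours):

```python
def hlle_flux(uL, uR, fL, fR, lam_minus, lam_plus):
    """HLLE numerical flux at a cell interface.

    uL, uR             : reconstructed left/right conserved states
    fL, fR             : physical fluxes evaluated at those states
    lam_minus, lam_plus: smallest/largest characteristic speeds
                         (eigenvalues of the flux Jacobian)
    """
    bp = max(lam_plus, 0.0)
    bm = min(lam_minus, 0.0)
    if bp == bm:                 # no waves reach the interface
        return 0.5 * (fL + fR)
    return (bp * fL - bm * fR + bp * bm * (uR - uL)) / (bp - bm)
```

Note that when all characteristic speeds are positive the formula reduces to the pure upwind flux $f_L$, and for $u_L=u_R$ it is consistent with the physical flux.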
To calculate the numerical fluxes, we choose to use piecewise parabolic
method (PPM) reconstruction for the
fluid variables~\cite{Colella:1982ee}, and reconstruct the primitive
variables. No dissipation or discontinuity detection is used in the
reconstruction.
This is a bit of a departure from the CENO scheme.
In general, ENO methods use a hierarchical reconstruction, where,
for example,
a second-order reconstruction depends on an underlying first order
reconstruction.
We have found, at least for the resolutions considered here,
that
CENO often favors a first order reconstruction at the center
of stars, because of the manner in which candidate second order
stencils are compared for their similarity to the first order reconstruction.
This loss of
accuracy at the center of the star damps the physical quasi-normal
oscillations of the star, and can lead to a long-term growth of the central
density. PPM gives a superior reconstruction for stellar interiors, and
therefore we adopt this reconstruction here.
When the fluid flow is highly relativistic, the reconstruction procedure
can produce unphysical states. When this occurs, we attempt reconstruction
using a lower order. For example, if PPM fails, then a linear
minmod reconstruction is attempted, and if this fails, then no reconstruction
is used.
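The fallback chain just described can be sketched as follows. This is an illustrative Python fragment with hypothetical names: the PPM step itself is not shown (its candidate face values are passed in precomputed), and \texttt{physical} stands for whatever physicality check is applied (e.g. positive density and pressure):

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope; zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct_linear(um, u0, up):
    """Minmod-limited linear reconstruction: left/right face values of
    the cell holding u0, given its neighbors um and up."""
    s = 0.5 * minmod(u0 - um, up - u0)
    return (u0 - s, u0 + s)

def reconstruct_with_fallback(um, u0, up, ppm_faces, physical):
    """Try the PPM face values first; if unphysical, fall back to a
    minmod-limited linear reconstruction, and finally to
    piecewise-constant (no reconstruction)."""
    for faces in (ppm_faces, reconstruct_linear(um, u0, up)):
        if all(physical(f) for f in faces):
            return faces
    return (u0, u0)
```

The piecewise-constant last resort is always physical whenever the cell value itself is, which guarantees the chain terminates with a usable state.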
A consequence of using HRSC methods is the need to go back and forth
between primitive, ${\hbox{\bfgreek\char'167}}$, and conservative, ${\hbox{\bfgreek\char'161}}$, variables.
While the relation of the conservative variables in terms of the primitive
variables is algebraic, the transformation that gives the primitive variables
in terms of the conservative variables
is transcendental. We use a Newton-Raphson solver designed for the
MHD equations to find the primitive variables~\cite{Neilsen2005}.
At grid points where this solver may fail,
the primitive variables are obtained from neighboring points by linear
interpolation.
The conservative variables are then recalculated at these points from the
interpolated primitive variables.
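The flavor of such a transcendental inversion can be seen in a stripped-down setting. The sketch below recovers $(\rho, v, p)$ from $(D, S, \tau)$ for a one-dimensional $\Gamma$-law gas, with $D=\rho W$, $S=\rho h W^2 v$, $\tau=\rho h W^2 - p - D$, by Newton iteration on the pressure with a numerical derivative. It is an illustration only, far simpler than the MHD solver of~\cite{Neilsen2005}:

```python
import math

def prim_from_cons(D, S, tau, gamma, p_guess=1.0, tol=1e-12, max_iter=50):
    """Recover (rho, v, p) from conservatives (D, S, tau) for a 1D
    Gamma-law gas by Newton iteration on the pressure."""
    def residual(p):
        # tau + D + p = rho*h*W^2, so v and W follow algebraically.
        v = S / (tau + D + p)
        W = 1.0 / math.sqrt(1.0 - v * v)
        rho = D / W
        h = (tau + D + p) / (D * W)      # specific enthalpy
        eps = h - 1.0 - p / rho          # specific internal energy
        return (gamma - 1.0) * rho * eps - p, rho, v

    p = p_guess
    for _ in range(max_iter):
        f, _, _ = residual(p)
        dp = 1e-8 * max(abs(p), 1.0)     # finite-difference derivative
        fp, _, _ = residual(p + dp)
        p_new = p - f * dp / (fp - f)
        if abs(p_new - p) < tol * max(abs(p), 1.0):
            p = p_new
            break
        p = p_new
    _, rho, v = residual(p)
    return rho, v, p
```

A point worth noting is that the iteration can leave the physical domain ($|v|\ge 1$) for poor guesses, which is one reason such solvers can fail in extreme regions and need the interpolation fallback described above.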
Unphysical states can arise during the evolution of the fluid
equations. This often occurs in evacuated regions of the grid, where
truncation errors or effects from finite precision arithmetic are
significant compared to the fluid densities.
To compensate for some of these errors, a floor is applied to
the energy variables $\tilde D$ and $\tilde \tau$ as
\begin{eqnarray}
\tilde D &\leftarrow& \max(\tilde D,\, {\rm floor}),\\
\tilde \tau &\leftarrow& \max(\tilde \tau, \,{\rm floor}).
\end{eqnarray}
The floor in these runs is set
between $1\times 10^{-8}$ and $5\times 10^{-9}$, which is seven
orders of magnitude smaller than
the central rest mass densities ($\rho_c$) of the individual stars.
The floor value must be small compared to the densities in the problem
so that the floor does not significantly affect the dynamics of interest.
Often the effect of the floor can only be ascertained by varying
it in a series of runs. For example, we found that floor values
of $10^{-7}$ are too large, producing a noticeable increase in $\rho_c$
during the evolutions, and changes in the stellar trajectories and
the emitted waveforms. These errors essentially disappear when the floor is
$10^{-8}$, and reducing it further to $5\times 10^{-9}$ does not change
the solutions. Thus, we adopt here a floor of $1\times 10^{-8}$.
\section{Oscillating modes of single TOV stars}
As a first test of our combined GRHD code we consider a single TOV star. Our
goal is not only to represent the analytic TOV solution, but to
accurately reproduce the known radial oscillation modes of the star.
While the TOV solution is spherically symmetric and static,
discretization effects act as small perturbations that excite the
normal modes of the star.
The initial data for this test consist of a $\Gamma=2$ polytrope with
$\kappa=1$. (The solution is calculated using a modified version of the
RNS code of
Stergioulas~\cite{RNS}.)
The star, in the geometrized units with
$\kappa = 1$, has a mass of $M = 0.14$, a circumferential radius $R=0.958$,
and central rest mass density $\rho_c = 1.28 \times 10^{-1}$.
We evolve the data in a dynamic spacetime at different resolutions
and using different reconstruction methods for the fluid variables.
Figure~\ref{fig:single_tov} shows $\rho_c$ plotted as a function of time
for three resolutions of
32, 64, 128 points across the star. As expected, the oscillations and overall
drift in $\rho_c$ converge with resolution. This is important both as
a code test and an indication of the resolution necessary to capture some
dynamics of stellar interiors. The data in Figure~\ref{fig:single_tov}
were generated using PPM reconstruction. We found that first- and
second-order CENO reconstructions were more diffusive,
resulting in larger drifts in $\rho_c$.
Consequently, we had difficulty in reproducing the radial pulsation
modes of the star using these reconstructions.
To confirm that the code reproduces the expected physical behavior, we
examine the radial pulsations of the star. The modes are calculated from
the oscillations in $\rho_c$, and the extracted frequencies are
shown in Table~\ref{table:frequencies}.
(Though we present data for the central density only here,
we have verified that these are global modes by examining the
time variation of density and velocity in the star.)
These oscillation modes can
be compared to the known radial perturbation modes~\cite{Font2002},
and these frequencies are in excellent agreement.
Note that, to make these comparisons, we rescale the perturbation
results, which were calculated for $\kappa=100$,
as described in the Appendix.
These validations are
a stringent test of our computational methods and give us considerable
confidence that our code accurately reproduces the physics of these
systems.
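The frequency extraction itself amounts to locating the strongest peaks in the Fourier transform of the $\rho_c$ time series. A minimal pure-Python sketch (a direct DFT on synthetic data; in practice one would use an FFT library and window the data):

```python
import cmath
import math

def dominant_frequencies(samples, dt, nmodes=2):
    """Return the nmodes strongest positive frequencies (in cycles per
    unit time) of a uniformly sampled real time series, via a direct DFT."""
    N = len(samples)
    mean = sum(samples) / N               # remove the DC offset
    mags = []
    for k in range(1, N // 2):
        c = sum((s - mean) * cmath.exp(-2j * math.pi * k * n / N)
                for n, s in enumerate(samples))
        mags.append((abs(c), k / (N * dt)))
    mags.sort(reverse=True)               # strongest bins first
    return [freq for _, freq in mags[:nmodes]]
```

The frequency resolution is $1/(N\,\Delta t)$, so long time series are needed to separate closely spaced overtones, which is why the run length matters for the mode comparison in Table~\ref{table:frequencies}.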
\begin{figure}
\begin{center}
\epsfig{file=oscillate.ps,height=9.5cm,angle=270}
\caption{This figure shows oscillations in the central rest energy
density, $\rho_c$, for a dynamic spacetime evolution
of a single TOV star at three different resolutions:
32, 64, and 128 points across the star.
The initial data are for a star of mass
$M = 0.14$, circumferential radius $0.958$, central rest mass
density $\rho_c = 1.28 \times 10^{-1}$, $\Gamma=2$, and $\kappa = 1$.
The outer boundary of the simulations is 12 stellar radii away from the
center of the star, and PPM is used to reconstruct the fluid variables.
While $\rho_c$ increases noticeably for the coarsest resolution run,
it eventually stabilizes at a higher value, giving a stable configuration.
Results from the Fourier transform of this data are given in
Table~\ref{table:frequencies}.
Owing to the computational costs of these simulations, the higher resolution
runs were not evolved to the same end time.
In particular, the highest resolution run was evolved only until
$t\simeq 65$.
}
\label{fig:single_tov}
\end{center}
\end{figure}
\begin{table}
\begin{tabular}{c|c|c|c}
\hline
Mode & 3D GRHD code & Perturbation results & Relative Difference \\
& (kHz) & (kHz) & (\%) \\
\hline
F & 14.01 & 14.42 & 2.88 \\
H1 & 39.59 & 39.55 & 0.1 \\
H2 & 59.89 & 59.16 & 1.2 \\
H3 & 76.94 & 77.76 & 1.1 \\
\end{tabular}
\caption{Comparison of small radial pulsation frequencies for an
evolved star using the 3D GRHD code to the linear perturbation
modes~\cite{Font2002}. The polytrope is constructed for $\Gamma=2$ and
$\kappa =1$. The perturbation
results have been appropriately rescaled for $\kappa = 1$~\cite{Noblephd}.
The Fourier transform of the central density time series is plotted in
Figure~\ref{fig:rho_fft}.
}
\label{table:frequencies}
\end{table}
\begin{figure}
\begin{center}
\epsfig{file=freq.ps,height=8.5cm,angle=270}
\caption{This figure shows the Fourier transform of the oscillations of $\rho_c$ seen
in the highest resolution simulation of Figure \ref{fig:single_tov}.
Five distinct peaks are observed; the first four peaks
are compared with results found via linear perturbation
(See Table \ref{table:frequencies}). The scale of the vertical axis is arbitrary.}
\label{fig:rho_fft}
\end{center}
\end{figure}
\section{Binary neutron stars}
In this paper, we consider two
different binary neutron star mergers,
one resulting in a prompt collapse to a black hole and one that
results in a differentially rotating neutron star which persists for
a long time (as compared to an orbital time close to merger).
We evolve the systems through several orbits and extract
gravitational radiation from the orbiting phase and the merger.
In the course of performing these
evolutions, we carefully examine some numerical questions to ensure the
accuracy of our results.
Initial data for both binaries are set by superposing the initial data for
single, boosted TOV stars~\cite{Matzner:1998pt}.
Provided that the initial separation
between stars is sufficiently large, violations in the momentum and
Hamiltonian constraints are at or below the truncation error threshold.
We monitor that this is indeed the case for our chosen separations by
evaluating the constraints and checking that any violations are of the
same order as those obtained for the single stars considered in the previous
section. Thus, these data are numerically consistent.
The boost velocities are smaller than the
corresponding Keplerian velocities for Newtonian circular orbits.
Thus, our data are not quasi-circular
(as used in~\cite{Miller:2003vc,Shibata:2002jb}), and they do not correspond
to a system resulting from a long, slow inspiral.
However, these data allow us to both test our implementation, as well
as to examine how radiative effects circularize the orbits.
Forthcoming work will consider initial data taken from post-Newtonian and
quasi-equilibrium approaches.
We extract the gravitational wave information by
computing the Weyl scalar $\Psi_4$, and for convenience we further
decompose $r\Psi_4$ as an expansion in terms of
(spin-weighted) spherical harmonics
\begin{eqnarray}
r \Psi_4 = \sum_{l,m} C_{l,m}\, {}^{-2} Y_{lm}.
\end{eqnarray}
This extraction is done at three different
locations from the center of mass, and we shift the obtained
quantities in time to
account for the travel time between the observers along null rays.
These observers are placed within the wave zone, and the shift in time
is given simply by the distance in Minkowski spacetime between the observers.
As we consider here only equal mass binaries, corrections for gauge effects
should be small~\cite{lehnermoreschi}. An analysis
of these effects for different binaries will be presented
elsewhere~\cite{gaugeradiation_us}.
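The coefficients $C_{l,m}$ are obtained by projecting $r\Psi_4$ against the conjugate spin-weighted harmonics over an extraction sphere. An illustrative Python sketch for the dominant mode, using the closed form ${}^{-2}Y_{2,2} = \sqrt{5/(64\pi)}\,(1+\cos\theta)^2 e^{2i\phi}$ and simple midpoint quadrature (function names are ours):

```python
import cmath
import math

def sY2m2(theta, phi):
    """Spin-weight -2 spherical harmonic -2Y_{2,2} in closed form."""
    return (math.sqrt(5.0 / (64.0 * math.pi))
            * (1.0 + math.cos(theta)) ** 2 * cmath.exp(2j * phi))

def project_22(f, ntheta=200, nphi=200):
    """C_{2,2} = integral over the sphere of f * conj(-2Y_{2,2}),
    evaluated by midpoint quadrature."""
    dth = math.pi / ntheta
    dph = 2.0 * math.pi / nphi
    total = 0.0
    for i in range(ntheta):
        th = (i + 0.5) * dth
        w = math.sin(th) * dth * dph      # area element sin(th) dth dph
        for j in range(nphi):
            ph = (j + 0.5) * dph
            total += f(th, ph) * sY2m2(th, ph).conjugate() * w
    return total
```

Orthonormality of the spin-weighted harmonics means projecting ${}^{-2}Y_{2,2}$ against itself returns unity, which makes a convenient self-test for the quadrature.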
\begin{figure}
\begin{center}
\epsfig{file=rhomax.eps,height=4.5cm}
\caption{This figure shows the maximum value of $\rho_o$ in binary
simulations at four resolutions: 8, 16, 24 and 32 points across each star.
With fewer than 16 points across the star, the stars disperse.
For increasing resolutions, the solutions converge.}
\label{fig:rho_convergence}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=radius.eps,height=4.5cm}
\caption{The coordinate separation between the stars in a merging binary is
shown here as a function of time for three resolutions.
Notice that the merger time, about $t=800$, is almost the same
for the two finer resolutions.}
\label{fig:rad_convergence}
\end{center}
\end{figure}
As discussed in~\cite{Miller:2003vc}, boundary and resolution effects
can strongly influence the dynamics of these systems.
To explore the effects of outer boundaries on the simulation results,
we perform two otherwise identical evolutions with outer
boundaries at different locations. While a more detailed discussion
of these tests follows below in Section~\ref{sec:binary_black_hole},
we find that outer boundaries at 80 stellar radii have negligible influence
on the solution.
To examine resolution effects, we adopt
a threshold error tolerance for the shadow hierarchy such that
the resulting mesh covers each star with a minimum of 16 points.
While in the previous section we used much higher resolutions to capture
the interior dynamics of single stars, binary evolutions
at similar resolutions here
are prohibitively expensive. Figure~\ref{fig:rho_convergence}
gives an indication of the minimum resolution required to
evolve the binary without resolving the internal dynamics of individual
stars. Figure~\ref{fig:rad_convergence} shows the (coordinate) radial
distance between the center of the stars versus time for the three different
resolutions. The trajectories converge as the resolution is
increased.
\begin{figure}
\begin{center}
\epsfig{file=boundary.ps,height=8cm,angle=270}
\epsfig{file=waveform-midres-120.ps,height=8.0cm,angle=270}
\caption{{\it Top Panel}. The $\ell=2$, $m=2$ mode of $r\Psi_4$
extracted at 50 stellar radii for
binary simulations with different domain sizes. The smaller
domain is of size $\pm 80$ stellar radii while the larger is
$\pm 124$ stellar radii. The two results differ only by a small
phase and amplitude error which appears late in the evolution.
For both simulations, the floor value is $1\times 10^{-8}$.
{\it Bottom Panel}: The
$\ell=2$, $m=2$ mode of $r\,\Psi_4$ extracted at 30, 40, and 50 stellar radii.
The initial data are described in Section~\ref{sec:binary_black_hole}.
The domain of the calculation is 248 stellar radii
across. The signals from different extraction surfaces are shifted in time
by the appropriate (flat-space) differences between the extraction radii.
}
\label{fig:boundary}
\end{center}
\end{figure}
\subsection{Black hole final state}
\label{sec:binary_black_hole}
\begin{figure}
\begin{center}
\epsfig{file=trajectory.eps,height=5.0cm,width=5.0cm}
\caption{The coordinate trajectory of the center of one of the neutron stars
as it spirals into a black hole end state.
The points (filled circles) that have been included along the trajectory
are the coordinate locations of the maximum density. These points are shown
at intervals of
$\Delta t=20$ in order to give an idea of the star's speed.}
\label{fig:coord_trajectory}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=omega.eps,height=4.5cm}
\caption{Orbital frequency of the binary as calculated
from the numerical evolution in
two different ways. $\omega_c$ is obtained by following the coordinate
position of the centers of the stars while $\omega_D$ is obtained from the
dominant mode of $r\Psi_4$.}
\label{fig:exc_omega}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=exc_omega.eps,height=5.0cm,angle=0}
\caption{The eccentricity obtained from Eq.~(\ref{eq:eccentricity}). After a
transient behavior due to the initial configuration, an overall monotonically
decreasing behavior is seen in the eccentricity as the binary
orbit becomes tighter.}
\label{fig:eccentricity}
\end{center}
\end{figure}
The first set of initial data gives a binary neutron star merger that
results in a prompt collapse to a black hole. As mentioned previously, the initial
data are constructed from superposing two equal mass
neutron stars with zero
spin angular momentum. In particular, each star has a mass of
$M=0.89~M_{\odot}$, a radius of $R=16.26$~km, and a central density of
$3.24 \times 10^{14} \; {\rm g}/{\rm cm}^3$.
The stars are placed initially at the coordinate locations
$(x,y,z) = (0, \pm 3,0)$ with the boost $v^i = (\mp 0.08,0,0)$.
We first investigate possible effects from the outer boundaries on
the simulation results by performing two otherwise identical
evolutions with the outer boundaries at different locations. In one,
the outer boundary is at $80R$, and in the other it is at $124R$.
These simulations use the shadow hierarchy, and the AMR
grid-structure is determined by the threshold error parameter.
An additional set of fixed
fine grids is placed at larger distances to ensure sufficient
resolution for computing waveforms. As a consequence,
the grid-structure in the central region is determined dynamically
while at far distances it is kept fixed. We compare the $C_{2,2}$
component of the gravitational wave signal measured by an observer
at a fixed coordinate distance, $50R$, for the two computational
domains. These waveforms are shown in Figure~\ref{fig:boundary},
which reveals only small differences at late times.
Additional tests indicate that these differences
arise from the location of the exterior, fixed refinement
boxes. This observation is indicated by
the coincidence of results obtained with outer boundaries at $100 R$ and
$80 R$ with exactly the same coordinate locations of the exterior
grids. The overall excellent
agreement between the wave signals suggests that the influence of the
boundary location is negligible.
The dynamics of the subsequent evolution shows a clear eccentricity which is
reflected both in the gravitational waveforms
(bottom panel of Figure~\ref{fig:boundary}) and the
coordinate trajectories (Figure~\ref{fig:coord_trajectory}). It is worth noting that, following the
suggestion of~\cite{BCP07}, a waveform similar to
Figure~\ref{fig:boundary} can be obtained by using the Newtonian
quadrupole approximation with the coordinate trajectories from
Figure~\ref{fig:coord_trajectory}. In addition, these trajectories
are similar to those obtained by integrating the 2.5 post-Newtonian
equations.
Finally, as with the black hole case reported in~\cite{BCP07},
the orbital coordinate frequency $\omega_c$ (computed from the coordinate
trajectories) is in good agreement with the orbital waveform frequency
$\omega_D$ (computed from the dominant mode $\ell=2$, $m=2$ of $r\Psi_4$), as
shown in Figure~\ref{fig:exc_omega}.
The eccentricity can be computed using the Newtonian definition given in~\cite{MorWill02}
\begin{equation}
e = \frac{\sqrt{\omega_p} - \sqrt{\omega_a}}%
{\sqrt{\omega_p} + \sqrt{\omega_a}} ,
\label{eq:eccentricity}
\end{equation}
where $\omega_p$ is the orbital frequency at a local maximum and $\omega_a$
the subsequent local minimum. The eccentricity of this
simulation is shown
in Figure~\ref{fig:eccentricity}. To compute this, we take each half-cycle
and evaluate expression (\ref{eq:eccentricity}), thus obtaining a discrete
set of values. The first point is clearly affected by the
initial data adopted, but the subsequent points show an overall
decrease towards zero. This is expected as the gravitational
radiation carries away angular momentum, and its loss
circularizes the orbit.
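The discrete set of eccentricity estimates is built by pairing each local maximum $\omega_p$ of the orbital frequency with the following local minimum $\omega_a$ and applying Eq.~(\ref{eq:eccentricity}). A minimal Python sketch of that bookkeeping (the function name is ours; real data would first be smoothed):

```python
import math

def eccentricity_series(omega):
    """Discrete eccentricity estimates from an orbital-frequency series:
    pair each local maximum omega_p with the next local minimum omega_a
    and apply e = (sqrt(w_p) - sqrt(w_a)) / (sqrt(w_p) + sqrt(w_a))."""
    ecc = []
    for i in range(1, len(omega) - 1):
        if omega[i] > omega[i - 1] and omega[i] > omega[i + 1]:  # local max
            for j in range(i + 1, len(omega) - 1):
                if omega[j] < omega[j - 1] and omega[j] < omega[j + 1]:
                    sp, sa = math.sqrt(omega[i]), math.sqrt(omega[j])
                    ecc.append((sp - sa) / (sp + sa))
                    break
    return ecc
```

For a circular orbit the frequency has no extrema and the list is empty, consistent with $e \to 0$ as the orbit circularizes.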
\begin{figure}
\begin{center}
\epsfig{file=bh1620.eps,height=4.5cm}
\epsfig{file=bh1760.eps,height=4.5cm}
\epsfig{file=bh1820.eps,height=4.5cm}
\caption{Snapshots at select times viewed down the $z$ axis
of the orbiting stars
and their subsequent collapse
to a black hole. These snapshots zoom in on the central region of the
grid and show
only a twentieth of the $z=0$ slice of the computational domain.
The stars orbit counterclockwise
seven times before merging and collapsing to a black hole.
The arrows indicate the fluid velocity.
The reference vector in the upper right hand corner of each panel
has a magnitude of 0.5.
The color scheme indicates the rest mass density.
The plots show the simulation at times 620, 760, and 820 as indicated
in the upper left corner of each image.
See Figure~\ref{fig:minlapse} for a plot of the
lapse at the origin as a function of time for this system.}
\label{fig:fluidsnapshots}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=400_bh.ps,height=6.0cm}
\epsfig{file=600_bh.ps,height=6.0cm}
\epsfig{file=820_bh.ps,height=6.0cm}
\caption{Snapshots of the lapse on the $z=0$ plane at times 400, 600, and 820 for the system presented in
Figure~\ref{fig:fluidsnapshots}. The contours shown correspond to
$\alpha=0.8,0.7,0.6,0.5,0.4$, from the outermost to the innermost one.
At times prior to merger, only the first contour value exists.
After merger,
the lapse collapses, indicating the formation of a black hole.
Notice the essentially circular shape of all the contours
except for the innermost one at the latest time.
The lapse at the origin as a function of time is shown in Figure~\ref{fig:minlapse}.}
\label{fig:lapsesnapshots}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=nobh1043.eps,height=4.5cm}
\epsfig{file=nobh1116.eps,height=4.5cm}
\epsfig{file=nobh1210.eps,height=4.5cm}
\caption{Snapshots at select times viewed down the $z$ axis
of the orbiting stars
and their subsequent merger into a differentially rotating star.
These snapshots zoom in on the central region of the grid and show
only a twentieth of the $z=0$ slice of the computational domain.
The stars orbit counterclockwise
a couple of times before merging.
The arrows indicate the fluid velocity.
The reference vector in the upper right hand corner of each panel
has a magnitude of 0.5.
The color scheme indicates the rest mass density.
The plots show the simulation at times 43, 116, and 210 as indicated
in the upper left corner of each image.
See Figure~\ref{fig:minlapse} for a plot of
the lapse at the origin as a function of time for this system.}
\label{fig:nobhfluidsnapshots}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=nobh_0.ps,height=6cm}
\epsfig{file=nobh_100.ps,height=6cm}
\epsfig{file=nobh_200.ps,height=6cm}
\caption{Snapshots of the lapse on the $z=0$ plane at times 0, 100, and 200 for the
system presented in
Figure~\ref{fig:nobhfluidsnapshots}. The contours shown correspond to
$\alpha=0.9,0.85$ from the outermost to the innermost one.
At early times, the contour for the lowest value is not present. After merger,
though the lapse evolves to a slightly lower value, it remains bounded
above $\simeq 0.75$. Notice the essentially circular shape of all the contours
at the latest time.
The lapse at the origin as a function of time is shown in Figure~\ref{fig:minlapse}.}
\label{fig:NOBHlapsesnapshots}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=alpha.ps,height=8.0cm,angle=270}
\caption{The lapse at the origin as a function of time for the orbiting
polytropes and their merger to either a black hole or neutron star. For
the sake of comparison, we have defined $t_{\rm merger}$ to be
the instant at which the stars come into contact.
See Figures~\ref{fig:lapsesnapshots} and \ref{fig:NOBHlapsesnapshots} for
contour plots of the lapse in the two cases.
}
\label{fig:minlapse}
\end{center}
\end{figure}
Upon merger, the object's pressure and rotation cannot support the star, and it
quickly collapses to a black hole.
As described earlier, our simulations are carried
out with harmonic slicing which is not singularity
avoiding~\cite{Alcubierre:2002iq}.
Although the lapse collapses to zero as illustrated in
Figs.~\ref{fig:minlapse} and \ref{fig:lapsesnapshots},
it does not collapse sufficiently fast to avoid numerical problems.
As a result, the size of the merged object decreases rapidly
and the code crashes when it can no longer resolve the
physical length-scales within the allowed maximum refinement
levels as shown in the final frame of Figure~\ref{fig:fluidsnapshots}.
Ongoing work
excises a region within a trapped surface to avoid
this problem. We defer to future work a full analysis of the post-merger case
and the transition to a quasinormal ringing pattern in the
radiation~\cite{bnstobh_us}.
\subsection{Differentially rotating neutron star}
In the case where the individual stars are initially
separated (in coordinate space) by $4R$
and boosted with a speed of 0.0825, the merger
does not give rise to a prompt
collapse to a black hole; rather, it produces a single
differentially rotating star (See Figs.~\ref{fig:nobhfluidsnapshots}--\ref{fig:NOBHlapsesnapshots}). As in the previous case, the
initial orbital dynamics correspond to an eccentric inspiral trajectory.
But upon merger, the object's pressure and rotation are sufficient to support
a newly formed star. The merged object has a bar-like structure that
is spinning with a characteristic pattern frequency. The real part of
the coefficient $C_{2,2}$ of $r \Psi_4$ for this evolution (shown
here in Figure~\ref{fig:finalstar}) carries a signature of the merger
($t$ approximately
from 100 to 200) and of the resulting spinning bar ($t$ greater than 250).
Qualitatively, the outcome of this
evolution agrees with the results presented for the fully relativistic
simulation labeled ``E-1'' in~\cite{Shibata:2002jb}, and even with the results
from the post-Newtonian SPH simulation labeled ``F1''
in~\cite{faberrasio}; compare,
for example, our Figure~\ref{fig:finalstar}
with Figure 11 in~\cite{Shibata:2002jb}
and Figure 3 in~\cite{faberrasio}. These two earlier
simulations also followed the merger of equal-mass, initially irrotational
neutron stars having a $\Gamma=2$ equation of state. However,
the bar-like structure survives noticeably longer in our
simulation than in the evolution presented in~\cite{faberrasio},
and in our simulation
the radiation signature appears to carry more detail about the
post-merger dynamics than in either of these earlier evolutions.
Specifically, the structure discernible in Figure~\ref{fig:finalstar}
between the times
180 and 240 reflects the fact that, as viewed from the co-rotating
frame of the bar, the bar itself is experiencing nontrivial oscillations.
The neutron star that forms from this merger is strongly differentially
rotating. In an effort to quantify this, in the latter stages of
the evolution we fit the internal motions of the star to a rotation law
of the form,
\begin{equation}
\Omega(r) = \frac{\Omega_c}{1+A r^2 \sin^2\theta}
\end{equation}
which has proven to be useful in numerous other investigations (see for instance, \cite{hachisu,Shibata:2002jb,Lyford:2002ip}).
Figure~\ref{fig:fits} shows the time-dependent behavior of
the fitted parameters $\Omega_c$ and $A$. We note in particular
that the ratio of the central and near-surface value of $\Omega$
at the equator is $\Omega_c/\Omega_{\rm eq} \approx 0.34$.
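One convenient way to perform such a fit, used in the sketch below (our choice, not necessarily the procedure of the code), is to note that the rotation law is linear in the variable $x = r^2\sin^2\theta$ after inversion: $1/\Omega = 1/\Omega_c + (A/\Omega_c)\,x$, so ordinary linear least squares suffices:

```python
def fit_rotation_law(x, omega):
    """Fit Omega(x) = Omega_c / (1 + A*x), x = r^2 sin^2(theta), by
    linear least squares on the inverted form
    1/Omega = 1/Omega_c + (A/Omega_c) * x."""
    y = [1.0 / w for w in omega]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    omega_c = 1.0 / intercept
    return omega_c, slope * omega_c      # (Omega_c, A)
```

The linearization weights the data differently from a direct nonlinear fit of $\Omega$, which matters when the velocity data are noisy, but it is exact when the data follow the law.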
\begin{figure}
\begin{center}
\epsfig{file=finalstar.ps,height=8.0cm,angle=270}
\caption{The merger waveform for the collision resulting in a single
compact star extracted at three different stellar radii: 30, 40, and 50.
The domain of the simulation is $\pm 152$ stellar radii. After
the merger a transient behavior is observed. In particular, the features
at $t\simeq 180, 240$ result from marked oscillations in the produced
bar-like configuration (as seen in the co-rotating frame). Afterwards
the gravitational waves due to the spinning bar exhibit a clear frequency at
$\simeq 12.8$~kHz.}
\label{fig:finalstar}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=omegaA.eps,height=5.0cm,angle=0}
\caption{The fitted values of $\Omega_c$ and $A$ as determined from
the fluid's tangential velocity. The merger takes place at about
$t\simeq 140$ after which the angular velocity rises during a transient
stage and then slowly decreases.}
\label{fig:fits}
\end{center}
\end{figure}
\section{Conclusion}
Neutron stars will be important sources of gravitational waves for
the next generation of gravitational wave detectors. While waveforms
from neutron star binaries are weaker than those produced by
binary black holes due to the allowed neutron star masses, their signals
are expected to be richer, as the gravitational waves will also
carry information about the matter. Indeed, gravitational waves are
expected to become an important probe of neutron star physics,
addressing questions such as the equation of state for nuclear
matter and the nature of progenitors for short, hard gamma-ray
bursts.
We have constructed a code that solves the combined Einstein and fluid equations in
three spatial dimensions, with no symmetry assumptions, and we use
{\sc had}\ for distributed AMR. AMR is an essential element
of our method, as it allows us to place the outer boundaries far
from the binary, while the shadow hierarchy allows us to refine each
star individually without a priori assumptions about their motion.
We have carefully verified our numerical results by performing runs
at different resolutions, using grids with different physical outer
boundaries, extracting $\Psi_4$ at different radii, and varying the
floor applied to the fluid densities. Moreover, we studied the
radial pulsation frequencies for a $\Gamma=2$ polytropic TOV star,
finding excellent agreement between our results and the expected
perturbative values. The successful conclusion of these tests gives
us confidence in the physical results obtained from our code.
As a first application of this code in a demanding scenario, we present
a detailed study of two binary neutron
star mergers, one resulting in a final black hole and the other a
final neutron star. In both cases we examine the gravitational
wave emission by extracting the $\ell=2$, $m=2$ mode of $r\Psi_4$.
$\Psi_4$ is extracted sufficiently far from the binary within the wave
zone, and extraction is done at three different radii.
In the first case, $\Psi_4$ is extracted up until the lapse collapses,
and in the second case the wave signal is extracted until a
final differentially rotating star is reached.
A comparison
to a post-Newtonian analysis allows us to understand better the
gravitational wave signals and the orbital kinematics, such as
orbital trajectories, frequencies, and eccentricities. For example,
the initial data describe an eccentric orbit. The effect of
the eccentricity can be observed in the alternating pattern of
larger and smaller extrema in $\Psi_4$ as well as a modulation in
the observed wavelength. Both features are expected from a post-Newtonian
analysis of an eccentric orbit. The orbits circularize
through gravitational wave emission, and the solution around the
time of collapse is largely spherically symmetric. In the second case,
the neutron star merger results in a large strongly differentially rotating
star. The observed maximum density after
the merger does not lie at the origin but oscillates, in the co-rotating frame,
in a bar-like fashion between $\simeq 0.2 R_{\rm final}$
and $\simeq 0.4 R_{\rm final}$ (with $R_{\rm final}$ the equatorial radius of the
merged object).
The work presented here raises additional questions that we will pursue
in a continuing research program.
For example, we will continue to study the ringdown of the
final black hole formed in the first merger. Studies of
the differentially rotating star formed in the second
case are continuing to determine whether this star eventually
collapses to form a black hole.
We will also examine a broader class
of initial data, including quasicircular and unequal mass binaries.
As mentioned
previously, we are also investigating the effect of magnetic fields on
the massive compact object formed in a merger and its possible subsequent
collapse. These results will be published in subsequent papers.
\renewcommand{\theequation}{A-\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix}\label{appendix}
It is customary in general relativity to adopt geometrized units $G=c=1$,
such that all quantities, including mass ($M$) and time ($T$),
have units of length ($L$). Vacuum
solutions are invariant under changes in this fundamental length scale $L$.
A quantity $X$ that scales as $L^lM^mT^t$
can be converted into geometrized units by multiplying with the
factor $c^t~(G/c^2)^m$. After the conversion to geometrized units,
$X$ scales as $L^{l+m+t}$.
Most equations of state
break this intrinsic scale-invariance, and the fundamental length-scale must
be fixed by additional choices.
Once the new scale is chosen, transformations between
geometrized and physical units can be easily made.
In the following, we summarize the basic procedure
detailed in~\cite{Noblephd} to account for the proper scaling of
quantities.
The polytropic EOS~(\ref{eq:polyEOS}) is specified by
the constants $\{ \kappa, \Gamma \}$, and the
quantities obtained when using a particular set $\{ \kappa_1, \Gamma_1 \}$
can be scaled to those obtained using a second set
$\{ {\kappa_2}, {\Gamma}_2 \}$ by the factor
\begin{eqnarray}\label{rescaling}
\frac{L_1}{L_2} = \left(\frac{\kappa_1}{\kappa_2}\right)^{1/\left[2(\Gamma_2-1)\right]} \, .
\end{eqnarray}
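The rescaling~(\ref{rescaling}) can be checked with a short numerical sketch; it assumes $\Gamma_1=\Gamma_2=\Gamma$, as required for the two solutions to be related by a pure rescaling:

```python
# Length rescaling between two polytropic unit systems, Eq. (rescaling),
# assuming a common adiabatic index Gamma for both sets of constants.
def length_ratio(kappa1, kappa2, gamma):
    """Ratio L1/L2 of the fundamental length scales set by kappa1 and kappa2."""
    return (kappa1 / kappa2) ** (1.0 / (2.0 * (gamma - 1.0)))

# For Gamma = 2 the exponent is 1/2, so a kappa = 100 solution is exactly
# ten times larger in length scale than the same solution with kappa = 1:
ratio = length_ratio(100.0, 1.0, 2.0)   # -> 10.0
```

In particular, the two common choices $\kappa=100$ and $\kappa=1$ used below differ, for $\Gamma=2$, by exactly a factor of ten in the fundamental length scale.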
There are two common approaches in the literature to set this additional
length scale. The first consists of adopting a physical quantity as the
unit, e.g., setting the solar mass $M_\odot=1$, and deducing from it the
appropriate conversion factors. That is, if a quantity
$\hat{X}$ has dimensions of $L^lM^mT^t$, its
dimensionless counterpart, $X$, is obtained from the following equation:
\begin{equation}\label{unit1}
\hat{X} = \left(\frac{G~M_\odot}{c^2}\right)^{l+t} \frac{M^m_{\odot}}{c^t}
X\, .
\end{equation}
There remains the freedom to choose $\kappa$, and all dimensions
are scaled with this parameter. The choice $\kappa=100$ is usually preferred
because it leads to physical values close to those currently observed.
For instance, TOV stars constructed with these parameters have a maximum
stable mass of $\hat{M}_{\rm max}=1.64 M_\odot$ with
a radius of $\hat{R}_{\rm max}=14.11$~km.
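The conversion factors implied by Eq.~(\ref{unit1}) can be sketched as follows; the CGS values of $G$, $c$ and $M_\odot$ are standard constants, not taken from the text:

```python
# Unit conversion of Eq. (unit1): with the solar mass as the unit, a
# dimensionless quantity X of dimensions L^l M^m T^t has physical value
#   X_hat = (G M_sun / c^2)^(l+t) * M_sun^m / c^t * X   (CGS units).
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.99792458e10   # cm s^-1
M_SUN = 1.989e33    # g

def to_physical(X, l, m, t):
    return (G * M_SUN / c**2) ** (l + t) * M_SUN**m / c**t * X

# The implied length unit is G*M_sun/c^2 ~ 1.477 km, and the time unit
# is G*M_sun/c^3 ~ 4.93 microseconds:
length_unit_km = to_physical(1.0, 1, 0, 0) / 1e5
time_unit_s = to_physical(1.0, 0, 0, 1)
```

With these factors, the dimensionless maximum mass $1.64$ of the $\kappa=100$, $\Gamma=2$ polytrope maps directly to $\hat{M}_{\rm max}=1.64\,M_\odot$.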
The second method for choosing the length scale is explained in detail
in~\cite{Noblephd},
and is more involved. It is based on fixing
the maximum stable mass for a family of solutions (with given
$\{ \kappa=1, \Gamma \}$) to a physically motivated value.
Thus, a quantity $\hat{X}$ with
dimensions $L^lM^mT^t$ is obtained by using the relation:
\begin{equation}\label{unit2}
\hat{X} = \hat{\kappa}^x c^y G^z X,
\end{equation}
where
\begin{eqnarray}
x &=& \frac{l + m + t}{2 (\Gamma-1)} \, , \quad
y = \frac{(\Gamma-2)l + (3\Gamma-4)m - t}{\Gamma-1}\, ,
\nonumber \\
z &=& -\frac{l + 3m + t}{2} \, .
\end{eqnarray}
In this method $\hat{\kappa}$ has dimensions. We now identify
the maximum stable mass of the given polytrope with some physical maximum mass.
For a neutron star, a commonly adopted value for this maximum mass is
$\hat{M}_{\rm max} = 1.4 M_\odot$.
Although this second method for fixing the fundamental length scale generally
leads to different results from
the first, it can be checked that for $\Gamma=2$
both methods (the first one with $\kappa=100$, while the second one always has
$\kappa=1$) provide the same scaling factors when the physical maximum stable
mass is set
to $\hat{M}=1.64 M_\odot$. Since the dimensionless maximum stable
mass is $M=0.164$, Eq.~(\ref{unit2}) can be solved for $\hat{\kappa}$ with
$\{l=0,m=1,t=0\}$, giving
$\hat{\kappa}=1.456 \times 10^5 {\rm cm}^5/\left({\rm g}\, {\rm s}^2\right)$. With this value,
(\ref{unit2}) can again be used to recover the dimensions of any quantity.
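The quoted value of $\hat{\kappa}$ can be reproduced by inverting Eq.~(\ref{unit2}) for a mass; a sketch, assuming standard CGS constants:

```python
# Exponents of Eq. (unit2) and the resulting kappa_hat for Gamma = 2,
# fixing the physical maximum mass to 1.64 M_sun (dimensionless M = 0.164).
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.99792458e10   # cm s^-1
M_SUN = 1.989e33    # g

def exponents(l, m, t, gamma):
    x = (l + m + t) / (2.0 * (gamma - 1.0))
    y = ((gamma - 2.0) * l + (3.0 * gamma - 4.0) * m - t) / (gamma - 1.0)
    z = -(l + 3.0 * m + t) / 2.0
    return x, y, z

x, y, z = exponents(0, 1, 0, gamma=2.0)   # for a mass: (0.5, 2.0, -1.5)
M_hat = 1.64 * M_SUN                      # physical maximum mass (g)
M = 0.164                                 # dimensionless maximum mass
# Invert M_hat = kappa_hat^x * c^y * G^z * M for kappa_hat:
kappa_hat = (M_hat / (c**y * G**z * M)) ** (1.0 / x)
# kappa_hat ~ 1.456e5 cm^5 g^-1 s^-2, as quoted in the text
```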
\begin{acknowledgments}
We would like to thank J. Frank, J. Pullin, I. Olabarrieta and O. Reula for
stimulating discussions.
This work was supported by the National Science Foundation under grants
PHY-0326311, PHY-0554793, AST-0407070 and AST-0708551 to Louisiana State
University, PHY-0326378 and PHY-0502218
to Brigham Young University, and PHY-0325224 to Long Island University.
This research was also supported in part by the National Science Foundation
through TeraGrid resources provided by SDSC under allocation award PHY-040027.
In addition to TeraGrid resources, we have employed clusters belonging
to the Louisiana Optical Network Initiative (LONI), and clusters
at LSU (mike) and BYU (marylou4).
\end{acknowledgments}
\section{Introduction}
When matter outflowing from a companion star (such as the Be decretion
disc of a Be star, or the Roche-lobe overflowing matter from a low-mass
star) is captured by the gravitational field of a neutron star, its
interaction with the magnetic field of the neutron star can
lead to several states. A pulsar (ejector), a propeller, or an
accretion state can be realised depending on the balance between the
pressure exerted by the inflowing matter and by the rotating magnetic
field of the neutron star \citep[see, e.g.][for a
review]{lipunov1992}. When the mass in-flow is able to confine the
magnetosphere into a closed configuration, whether accretion down to the
neutron star surface is possible (accretor state) or mass is
propellered away by the neutron star magnetosphere (propeller state)
is mainly determined by the ratio between the rotation rates of the
magnetosphere and of the incoming matter at the magnetospheric
boundary \citep{illarionov1975}. At this interface, in some cases the
magnetosphere yields energy and angular momentum to the matter inflow,
and the plasma is expected to be very turbulent and magnetised.
\citet{bednarek2009b,bednarek2009} argued that in such conditions
electrons can be accelerated to high energies by a Fermi process,
yielding detectable emission at GeV and TeV energies. In a similar
context, \cite{torres2012} explained the seemingly anti-correlated
orbital variability of the GeV and TeV emission of LS~I~+61$^{\circ}$
303 in terms of the alternation between the propeller and the ejector
state of a magnetised neutron star, as the neutron star experiences
different mass in-flow rates along its orbit.
Here, we propose a propeller scenario to explain the properties of the
high-energy (HE) emission of {XSS J12270--4859}, so far the only low-mass X-ray binary
(LMXB) proposed to have a persistent gamma-ray counterpart, {2FGL 1227.7--4853},
actually emitting a comparable power in X-rays and at HE
\citep{demartino2010,hill2011,demartino2013}. Unlike gamma-ray
binaries, in this case both the wind and the radiative output of the
low-mass companion star are unimportant in determining the HE emission
properties. Instead, we propose a model in which electrons are
accelerated at the interface between an accretion disc and a
propellering neutron star, and whose losses are mainly driven by
synchrotron emission. This electron population interacts with
the magnetic field permeating such layer and with the radiation thus
produced to yield the X-ray and GeV emission observed from the source.
The paper is organised as follows. In Sec.~\ref{sec:xss} we review the
properties of {XSS J12270--4859} and of its proposed $\gamma$-ray counterpart,
2FGL 1227.7--4853. In Sec.~\ref{sec:igr} we present the results of the analysis of
recent INTEGRAL observations, aimed at assessing the long-term
variability of the hard X-ray emission, and thus the stability
of the accretion state. In Sec.~\ref{sec:propeller} we derive
expressions relating the observed luminosity to the relevant physical
parameters of the system, spin period, magnetic field, and mass inflow
rate, under the assumption that it hosts a propellering neutron star
with typical parameters of LMXBs. In Sec.~\ref{sec:sed} we reproduce
semi-quantitatively the high energy spectral energy distribution
produced by a relativistic population of electrons located at the
boundary between an accretion disc and a propellering magnetosphere,
under simple assumptions on the shape of the emitting region. We
discuss these results in Sec.~\ref{sec:disc}, comparing our scenario
with other possible models proposed to explain the system, involving
either a rotation powered pulsar or an accreting compact object.
\section{XSS J12270--4859}
\label{sec:xss}
{XSS J12270--4859} is a faint hard X-ray source, first identified as a Cataclysmic
Variable on the basis of its optical spectrum
\citep{masetti2006}. However, the absence of a clear modulation of the
emission in the X-ray \citep{saitou2009,demartino2010} and in the
optical bands \citep{pretorius2009}, as well as the absence of Fe
K-$\alpha$ features in X-rays, argued against this
hypothesis.
{XSS J12270--4859} was observed on several occasions in the X-ray band: by RXTE in
November 2007 and during 2011 \citep[3--60
keV;][]{butters2008,demartino2013}, by XMM-Newton in January 2009
and January 2011 \citep[0.5--10 keV;][]{demartino2010,demartino2013},
by Suzaku in August 2009 \citep[0.2--12 and 10--600
keV;][]{saitou2009}, by Swift/XRT between March and September 2011
\citep[0.5--10 keV][]{demartino2013}, and by INTEGRAL since March 2003
\citep[20--100 keV; see][who reported the analysis up to October
2007]{demartino2010}. Its average 0.2--100 keV luminosity was
evaluated by \citet{demartino2010} as
$L_{X}=(2.2\pm0.4)\times10^{34}\,d_2^2\,{\rm erg\,s^{-1}}$, where $d_2$
is the distance in units of 2 kpc. Its spectrum is described by a
featureless power law, $F_{E}\propto E^{-(\Gamma_X-1)}$, with an index
$\Gamma_X=1.70\pm0.02$, without any detected cut-off up to 100 keV
\citep{saitou2009, demartino2010,demartino2013}. The light curve below
10 keV shows peculiar dips and flares on time scales of a few hundred
seconds, suggesting an accretion origin of the X-ray emission
\citep{saitou2009, demartino2010,demartino2013}. Flares are followed
by dips in which a spectral hardening is observed, suggestive of
additional absorption by a flow of cool matter. Dips with little to
no spectral evolution also occur randomly during the quiescent
emission, possibly interpreted in terms of occultation by
discrete blobs of material.
The emission of the IR/optical/UV counterpart is compatible with the
sum of the thermal emission of a K2-K5 V star at a distance of
2.3--3.6 kpc, and of a hotter thermal emission coming from a surface
of larger size, compatible with an accretion disc
\citep{demartino2013}. Dips and flares take place almost
simultaneously in the UV and X-ray bands; together with the observed
relative amplitudes of the flares in these two bands, this strongly
indicates that the UV emission originates from reprocessing of the
X-ray emission in a region larger than that where the higher-energy
emission is generated \citep{demartino2013}. The presence of material around the
compact object is further indicated by the detection of several
emission lines typical of accreting systems, such as H$_{\alpha}$,
H$_{\beta}$ and HeII \citep{masetti2006,pretorius2009}, as well as by
an optical spectrum recently obtained by NTT during March 2012 (De
Martino 2013, priv. comm.).
{XSS J12270--4859} is positionally coincident with a moderately bright gamma-ray
source detected by {\it Fermi}-LAT, {2FGL 1227.7--4853} \citep{demartino2010,
hill2011,demartino2013}. {\it Fermi}-LAT observations performed
between August 2008 and September 2010 revealed a source with a spectrum
described by a power law with index $\Gamma_{\gamma}=2.21\pm0.09$, cut off
at $4.1\pm1.3$ GeV, and with a luminosity of
$L_{\gamma}=(2.3\pm0.3)\times10^{34}\,d_2^2$ erg s$^{-1}\simeq L_X$
above 100 MeV \citep{hill2011}\footnote{Here, only statistical errors
are quoted. Systematic errors can be larger by a factor $\sim$
three. See \citealt{hill2011} for a discussion of this
issue.}. These authors also discussed the possible association of
{2FGL 1227.7--4853} with two radio sources detected by ATCA at 5.5 and 9 GHz,
falling in its error circle. They identified the radio counterpart of
{XSS J12270--4859} (a faint source with a $F_{\nu}\propto\nu^{-\alpha}$ power-law
spectrum, with $\alpha=0.5\pm0.6$) as the least improbable
association, given the extremely low radio-to-gamma ray luminosity
ratio shown by the other one, most probably an AGN. No significant
variation of the gamma-ray emission of the source was found by
\citet{demartino2013}, who extended the analysis including data taken
until April 2012. Their analysis also proved that the gamma-ray
emission is concurrent with observations performed at soft
(XMM-Newton, Suzaku, Swift/XRT) and hard (RXTE) X-rays.
\section{INTEGRAL observations of {XSS J12270--4859}}
\label{sec:igr}
\citet{demartino2010} reported the analysis of {\it INTEGRAL}/ISGRI
observations of {XSS J12270--4859} performed between March 2003 and October 2007,
and used them together with RXTE and XMM-Newton observations to build
a 0.2--100 keV spectrum which was successfully modelled by a power law
with index $\Gamma_X=1.70\pm0.02$. In order to study the long-term
variability of the hard X-ray emission of {XSS J12270--4859}, and to analyse
INTEGRAL observations simultaneous with the {\it Fermi}-LAT observations
reported by \citet[][]{hill2011} and performed between August 2008 and
September 2010, we analysed all {\it INTEGRAL} \citep{winkler2003}
observations performed from March 2003 to July 2012.
Observations performed by {\it INTEGRAL} are carried out in individual
Science Windows (ScWs), which have a typical duration of about 2
ks. Here, we consider all public IBIS/ISGRI and JEM--X observations during which
{XSS J12270--4859} had an offset angle less than 14$^{\circ}$ and 5$^{\circ}$, respectively,
adding up to a total effective exposure time of 553.7 ks for IBIS/ISGRI and
39.1 ks for JEM--X (22.9 ks from JEM-X 1 and 16.2 ks from JEM-X 2). Data
reduction was performed using the standard ISDC offline scientific
analysis software version 10.0.
\begin{figure}
\includegraphics[angle=0,width=8.6cm]{lc_img_binned2.eps}
\caption{Long--term light curve (upper panel) and significance (lower
panel) of {XSS J12270--4859} on ScW timescales as seen by IBIS/ISGRI in the 18--60
  keV band. The interval covered by the {\it Fermi}-LAT data
  reported by \citet{hill2011} is highlighted in blue.\label{fig:intlc}}
\end{figure}
While {XSS J12270--4859} was not detected by JEM--X at a significance larger than
3$\sigma$, we confirm the previous {\it INTEGRAL}/ISGRI detection
reported by \citet{demartino2010}. Combining all the ISGRI data,
{XSS J12270--4859} is detected at a significance level of 10$\sigma$ in the
18--60 keV band, with an average count rate of 0.365 $\pm$ 0.036
s$^{-1}$. Since {XSS J12270--4859} itself is relatively faint in hard X-rays
compared to other sources in this region, the energy spectrum was
obtained from the mosaic images. Its average spectrum can be described
by a power law with an index of $1.67\pm 0.27$, for a luminosity of
$(8.8 \pm 0.1) \times 10^{33}$ d$_2^2$ erg s$^{-1}$ in the 18--60 keV
band, compatible with the value derived by \citet{demartino2010} on a
smaller data set. The reduced $\chi^2$ of the fit is 0.6 for 3 d.o.f.
To study the long-term variability of the emission observed by ISGRI,
we divided the whole exposure into seven time intervals, each with an
exposure of roughly 80 ks. This is also the exposure concurrent
with the {\it Fermi}-LAT observations, during which the source is found
at a significance of 4.5$\sigma$. The light curve, significance and
effective exposure for each time span are shown in
Figure~\ref{fig:intlc} and Table~\ref{table1}. The source is detected
at a significance $\ga 3\sigma$ in all but the third interval,
which is the one covering the shortest time period (15 days), and
during which the significance falls to $1.8\sigma$. However,
modelling the overall light curve with a constant gives a $\chi^2=9$ over six
degrees of freedom, indicating that the observed emission is
compatible with being constant.
\begin{table}
\centering
\caption{Flux, detection significance and effective exposure time of the seven time spans (of roughly equal exposure) of the ISGRI observations of {XSS J12270--4859}.}
\label{table1}
\begin{tabular}{cccc}
\hline\hline
Time covered (MJD) & Intensity (s$^{-1}$) & Signif. ($\sigma$) & Expos. (ks)\\
\hline
52650.7 -- 53010.2& 0.42& 4.79& 79.2\\
53010.2 -- 53528.3& 0.42& 4.58& 78.9\\
53528.4 -- 53543.5& 0.17& 1.81& 80.1\\
53543.6 -- 53746.9& 0.29& 3.14& 79.3\\
53746.9 -- 54110.9& 0.24& 2.50& 79.2\\
54111.1 -- 54692.7& 0.39& 3.67& 79.6\\
54693.7 -- 56131.0& 0.50& 4.80& 79.1\\
\hline
\hline
\end{tabular}
\end{table}
Finally, in order to search for any periodic signal in the long--term
light curve, we use the Lomb--Scargle periodogram method
\citep{lomb1976,scargle1982}. Power spectra are generated for the
light curve using the PERIOD subroutine \citep{press1989}. The 99\%
white noise significance levels are estimated using Monte Carlo
simulations \citep[see e.g.][]{kong1998}. No signal was significantly
detected above this noise level.
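As an illustration of the method, a minimal pure-Python Lomb--Scargle periodogram \citep{lomb1976,scargle1982}; this is a generic sketch, not the PERIOD subroutine actually used for the analysis:

```python
import math

def lomb_scargle(t, y, freqs):
    """Normalized Lomb-Scargle periodogram of the unevenly sampled series
    y(t), evaluated at the angular frequencies in `freqs`."""
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    power = []
    for w in freqs:
        # the offset tau makes the result invariant under global time shifts
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        yc = sum((v - mean) * cv for v, cv in zip(y, cs))
        ys = sum((v - mean) * sv for v, sv in zip(y, sn))
        power.append((yc**2 / sum(cv**2 for cv in cs) +
                      ys**2 / sum(sv**2 for sv in sn)) / (2 * var))
    return power

# Demo: recover the frequency of a noiseless, unevenly sampled sinusoid.
t = [0.0, 1.3, 2.1, 3.7, 5.2, 6.6, 8.0, 9.1, 10.9, 12.4,
     13.8, 15.5, 17.0, 18.2, 19.9, 21.3, 23.0, 24.6, 26.1, 27.7]
y = [math.sin(0.6 * ti) for ti in t]
freqs = [0.2, 0.4, 0.6, 0.8, 1.0]
power = lomb_scargle(t, y, freqs)
best = freqs[power.index(max(power))]   # -> 0.6
```

In practice the significance of a peak is assessed against the white-noise distribution, e.g. with the Monte Carlo approach cited above.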
Our analysis indicates that {XSS J12270--4859} has kept behaving as a steady hard
X-ray emitter over a 9-year time interval. It also shows that the
source was active in hard X-rays simultaneously with the {\it Fermi}-LAT
observations performed between August 2008 and September 2010 and
analysed by \citet{hill2011}, confirming the simultaneous RXTE/{\it Fermi}
detection achieved during 2011 \citep{demartino2013}.
\section{Propeller state}
\label{sec:propeller}
The fate of matter in-falling towards a magnetised rotating neutron
star depends essentially on the ratio between the rotation rates of the
in-flowing matter and of the magnetosphere, evaluated at the radius
where the dynamics of the flow becomes dominated by the stress
exerted by the magnetic field, the so-called disc truncation radius
$R_{\rm in}$ \citep[see, e.g.][and references therein]{ghosh2007}. In
a Keplerian disc, matter rotates at the rate:
\begin{equation}
\Omega_{\rm K}(r)=(GM_*/r^3)^{1/2},
\end{equation}
where $M_*$ is the mass of the compact object, and it is useful to
define the fastness parameter as the ratio between the neutron star
rotation rate, $\Omega_*=2\pi/P$, and the Keplerian rate at the truncation
radius:
\begin{equation}
\label{eq:fastness}
\omega_*=\frac{\Omega_*}{\Omega_{\rm K}(R_{\rm in})}=\left(\frac{R_{\rm in}}{R_{\rm c}}\right)^{3/2}.
\end{equation}
Here
\begin{equation}
\label{eq:rcor}
R_{\rm c}=(GM_*/\Omega_*^2)^{1/3}
\end{equation} is the co-rotation radius.
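For orientation, the fiducial radii used below follow directly from Eqs.~(\ref{eq:fastness}) and (\ref{eq:rcor}); a short sketch, assuming standard CGS constants and the fiducial 1.4 M$_\odot$, 2.5 ms neutron star:

```python
# Corotation radius R_c = (G M / Omega_*^2)^(1/3) and truncation radius
# R_in = R_c * omega_*^(2/3), for a 1.4 M_sun NS spinning at P = 2.5 ms.
import math

G = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33  # g

def corotation_radius(P, M=1.4 * M_SUN):
    Omega_star = 2.0 * math.pi / P
    return (G * M / Omega_star**2) ** (1.0 / 3.0)

R_c = corotation_radius(2.5e-3)     # ~3.1e6 cm, i.e. ~31 km
R_in = R_c * 2.0 ** (2.0 / 3.0)     # ~4.9e6 cm for fastness omega_* = 2
```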
While for $\omega_*<1$ ({\it slow} rotator case), mass in the disc is
allowed to accrete, yielding its specific angular momentum to the
neutron star, for $\omega_*\geq1$ ({\it fast} rotator case), the
inflowing mass finds a centrifugal barrier which partly or completely
inhibits further in-fall onto the surface of the neutron star.
Such a bi-modality of the outcome of accretion follows from the nature
of the coupling between the field lines and the disc matter. In order
to flow towards the compact object in a Keplerian disk at a steady
rate $\dot{M}$, matter has to get rid of its angular momentum at a
rate:
\begin{equation}
\label{eq:amloss}
(d/dt)L_{\dot{M}}(r)=\dot{M}\Omega_{\rm K}(r) r^2.
\end{equation}
Far from the central object, it is disk viscosity which redistributes
this angular momentum towards the outer rings of the disk. As matter
approaches a magnetised neutron star, the stress exerted by its
rotating magnetic field becomes dominant. Differential rotation
between the field lines, assumed to be initially poloidal, and disk
matter yields a stress:
\begin{equation}
S^{\rm m}=\pm B_pB_\phi/4\pi,
\end{equation}
where $B_\phi$ is the toroidal component of the field originated from
the twisting of the poloidal component
\begin{equation}
B_p(r)=\mu r^{-3},
\end{equation} and $\mu$
is the NS magnetic dipole moment. Reconnection and opening of the
field lines limit the magnitude of the toroidal component to a
fraction $\eta\la1$ of the poloidal component, and reduce the
interaction layer to a width $(\Delta r/R_{\rm in})<1$
\citep[e.g.][]{wang1996,ghosh2007,lovelace1995,romanova2009,dangelo2010}. The
magnetic torque integrated on such layer can be thus written as
\citep[see, e.g.,][]{dangelo2010}:
\begin{eqnarray}
\label{eq:magtorque}
T_{\rm m}&=&2\int [r S^{\rm m}(r)] r dr d\phi=4\pi
S^{\rm m}(R_{\rm in}) R_{\rm in}^2 \Delta r=\nonumber\\ &=&\pm \eta
\left(\frac{\Delta r}{R_{\rm in}}\right) \frac{\mu^2}{R_{\rm in}^3},
\end{eqnarray}
where $[r S^{\rm m}(r)]$ is the torque acting per unit area, and the
factor 2 accounts for the two faces of the disc over which the torque is
applied.
The sign of the magnetic torque exerted by the NS on the disk matter
(Eq.~\ref{eq:magtorque}) depends on the direction of the twist, and is
positive when $\omega_*>1$. In such conditions the neutron star
deposits angular momentum into the disc, which makes feasible the
ejection of matter along the field lines \citep[i.e., a propeller
state;][]{illarionov1975}.
However, for values of $\omega_*$ between $1$ and a certain critical
value $\omega_*^{\rm cr} \ga 1$, the energy released by the
propellering magnetosphere to the disc plasma is not sufficient to
unbind it from the system \citep{spruit1993}. A large fraction of the
propellered matter returns to the disc and builds up there, and may
eventually resume accretion as it increases the inward accretion rate
\citep[see the recent works by][who examined in depth the cycles
between accretion and angular momentum deposition at the inner rim,
which take place for values of $1<\omega_*\leq\omega_*^{\rm
cr}$]{dangelo2010,dangelo2012}. For values of the fastness
$\omega_* > \omega_*^{\rm cr}$ the ejection of matter at the inner rim of the disc
is instead clearly favoured, as the angular momentum and the energy
which may be released by the NS to the disc matter increases. This
tendency is also confirmed by the magneto-hydrodynamical simulations
performed by \citet{romanova2003,ustyugova2006,romanova2009}. In the
following, we consider a similar situation to model the phenomenology
shown by {XSS J12270--4859}, and assume for simplicity that all the incoming
matter is ejected by the system, $\dot{M}_{\rm ej}=\dot{M}$.
Under the above assumptions, the conservation of angular momentum at
the inner rim of the disc reads as:
\begin{eqnarray}
\label{eq:ejection}
\dot{M} R_{\rm in} v_{\rm out}&=&(d/dt){L}_{\dot{M}}+T_{\rm m} = \nonumber \\
&=&\dot{M}\Omega_{\rm K}R_{\rm in}^2+\eta\left(\frac{\Delta r}{R_{\rm in}}\right)\frac{\mu^2}{R_{\rm in}^3},
\end{eqnarray}
where the term on the left-hand side is the rate of angular momentum
lost in the outflow, while the rate of angular momentum carried by
disc matter (Eq.~\ref{eq:amloss}) and the torque applied by the
magnetic field lines (Eq.~\ref{eq:magtorque}) appear on the right-hand
side. \citet{eksi2005} proposed a useful parametrisation of the
propeller process in terms of the elasticity of the scattering between
the field lines and the disc plasma through a parameter $\beta$, which
varies between $\beta=1$ in the perfectly elastic case, and $\beta=0$
in the purely an-elastic case. According to this parametrisation, the
velocity of the outflow is:
\begin{equation}
\label{eq:veloutflow}
v_{\rm out}= \Omega_{\rm K}(R_{\rm in}) R_{\rm in} [1-(1+\beta)(1-\omega_*)].
\end{equation}
Substituting this relation into Eq.~\ref{eq:ejection} yields:
\begin{equation}
\label{eq:fullpropomg}
\omega_*^{7/3}(\omega_*-1)(1+\beta)=\frac{\eta(\Delta r /
R_{\rm in})\mu^2}{\dot{M}\sqrt{GM}R_{\rm c}^{7/2}}.
\end{equation}
An ejecting propeller solution may hold only if the fastness exceeds
the critical threshold, which was estimated by \citet{perna2006} as:
\begin{equation}
\label{eq:omgcr}
\omega_*^{\rm cr}(\beta)=\frac{\beta+\sqrt{2}}{1+\beta}.
\end{equation}
The critical threshold takes a value of $1.21$ and $\sqrt{2}$ for the
perfectly elastic ($\beta=1$) and an-elastic ($\beta=0$) case,
respectively. We plot in Fig.~\ref{fig:mumdot} the values of the NS
dipole moment and rate of mass lost leading to such critical values,
which delimit the region where fully ejecting propeller solutions are
possible.
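The limiting values of Eq.~(\ref{eq:omgcr}) quoted above can be checked directly:

```python
# Critical fastness of Eq. (omgcr): omega_cr = (beta + sqrt(2)) / (1 + beta).
import math

def omega_crit(beta):
    return (beta + math.sqrt(2.0)) / (1.0 + beta)

w_anelastic = omega_crit(0.0)   # purely an-elastic: sqrt(2) ~ 1.414
w_elastic = omega_crit(1.0)     # perfectly elastic: ~1.21
```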
\begin{figure}
\includegraphics[angle=0,width=\columnwidth]{mumdot_oct7.eps}
\caption{Values of the NS dipole moment and of the rate of mass lost
by the disc giving the critical values of the fastness parameters,
    for a 1.4 M$_{\odot}$ NS spinning at a period of 2.5 ms, with
$\eta=1$ and $(\Delta r/R_{\rm in })=0.1$. The red solid and
dashed lines are evaluated for the critical fastness of the purely
an-elastic and elastic case, respectively. These lines delimit the
blue shaded region, where fully ejecting propeller solutions
hold. The red circle marks the mass ejection rate evaluated for a
system with a spin period of 2.5 ms, a luminosity of $10^{35}$ erg
s$^{-1}$, and a fastness equal to the critical value.
\label{fig:mumdot}}
\end{figure}
The energy available to power the observed emission follows from the
conservation of energy \citep{eksi2005}:
\begin{eqnarray}
\label{eq:mdot}
L_{\rm rad}&=&\frac{GM\dot{M}}{R_{\rm in}}+\Omega_* T_{\rm
m}-\frac{1}{2}\dot{M}v_{\rm out}^2=\\
&=&
\frac{GM\dot{M}}{2R_{\rm
in}}[1+(1-\beta^2)(\omega_*-1)^2],
\end{eqnarray}
and is related to the rate of mass in-flow (which equals the mass
ejection rate under the assumptions made). Considering the critical
fastness for the an-elastic case ($\omega_*^{\rm cr}=\sqrt{2}$,
$\beta=0$), a spin period of 2.5 ms, and a luminosity of
$10^{35}$ erg s$^{-1}$ (see Sec.~\ref{sec:model}) gives a mass
ejection rate of $3.6\times10^{15}$ g s$^{-1}$, which crosses the
corresponding propeller solution at $\mu\ga\mbox{few}\times10^{26}$ G
cm$^3$ (see the red circle in Fig.~\ref{fig:mumdot}), of the order of
the fields usually estimated for NSs in LMXBs.
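The quoted ejection rate follows from inverting Eq.~(\ref{eq:mdot}) for $\dot{M}$; a sketch, assuming standard CGS constants:

```python
# Mass-ejection rate from Eq. (mdot), solved for Mdot:
#   Mdot = 2 L R_in / (G M [1 + (1 - beta^2)(omega_* - 1)^2]),
# with R_in = R_c * omega_*^(2/3) and R_c = (G M / Omega_*^2)^(1/3).
import math

G = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33  # g

def mdot_ejected(L, P, omega, beta, M=1.4 * M_SUN):
    Omega_star = 2.0 * math.pi / P
    R_c = (G * M / Omega_star**2) ** (1.0 / 3.0)
    R_in = R_c * omega ** (2.0 / 3.0)
    return 2.0 * L * R_in / (G * M * (1.0 + (1.0 - beta**2) * (omega - 1.0)**2))

# An-elastic critical case: omega_* = sqrt(2), beta = 0, P = 2.5 ms, L = 1e35:
mdot = mdot_ejected(1e35, 2.5e-3, math.sqrt(2.0), 0.0)   # ~3.6e15 g/s
```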
In Sec.~\ref{sec:sed}, we interpret the X-ray and $\gamma$-ray
emission of {XSS J12270--4859} in terms of the synchrotron and synchrotron
self-Compton emission by a population of relativistic electrons,
accelerated by shocks at the magnetospheric interface of a
propellering neutron star, and interacting with the NS magnetic
field at the interface, $\bar{B}$. To estimate the strength of such
field we consider a value of the order of the dipolar component of
the NS field, evaluated at the inner disc radius:
\begin{eqnarray}
\label{eq:dipole}
\bar{B}& =& B_p(R_{\rm in})=\frac{\mu}{R_{\rm in}^{3}} = \frac{\mu}{R_{\rm c}^{3}\omega_*^{2}} = \frac{\mu\Omega_*^2}{GM_*\omega_*^2}=\frac{4\pi^2\mu}{GM_*P^2\omega_*^2}=\nonumber \\
&=&0.85\times10^6\;\mu_{26}\;P_{2.5}^{-2}\;(\omega_*/2)^{-2}\;G,
\end{eqnarray}
where $\mu_{26}$ is the NS magnetic dipole moment in units of
$10^{26}\, \mbox{G cm}^{3}$, and $P_{2.5}$ the spin period in units of 2.5
ms. The expression in the right hand side is obtained by using the
definition of the NS fastness (Eq.~\ref{eq:fastness}), and the
definition of the corotation radius (Eq.~\ref{eq:rcor}), considering a
NS mass of 1.4 M$_{\odot}$ (as implicitly assumed in the rest of the
paper), taking a screening coefficient $\eta=1$, and ignoring the
tangential component introduced by shearing. Eq.~\ref{eq:dipole}
implicitly expresses the NS magnetic dipole moment in terms of the
field strength at the interface, the NS spin period, and the fastness,
and can be plugged in the expression of the angular momentum
conservation, Eq.~\ref{eq:fullpropomg}. Further, $\dot{M}$ can be
expressed as a function of the radiated luminosity, spin period,
fastness and elasticity parameter thanks to the relation expressing
energy conservation, Eq.~\ref{eq:mdot}. Substituting in
Eq.~\ref{eq:fullpropomg}, and setting $\eta=1$ and $(\Delta r/R_{\rm
in})=0.1$ (as in Fig.~\ref{fig:mumdot}), finally yields:
\begin{equation}
\label{eq:solution}
\bar{B}=5.2\times10^6\;L_{35}^{1/2}\;P_{2.5}^{-1/2}\left[\frac{1}{\omega_*}\frac{(\omega_*-1)(1+\beta)}{1+(1-\beta^2)(\omega_*-1)^2}\right]^{1/2}\;{\rm G},
\end{equation}
where $L_{35}$ is the propeller luminosity $L_{\rm rad}$ in units of $10^{35}$ erg s$^{-1}$.
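Eliminating $\mu$ and $\dot{M}$ between Eqs.~(\ref{eq:fullpropomg}) and (\ref{eq:mdot}) gives $\bar{B}^2=2L_{\rm rad}f(\omega_*,\beta)/[\eta(\Delta r/R_{\rm in})\sqrt{GM}R_{\rm c}^{3/2}]$, with $f$ the bracket factor above; a numerical sketch, assuming $\eta=1$, $(\Delta r/R_{\rm in})=0.1$ (the value used in Fig.~\ref{fig:mumdot}), and standard CGS constants:

```python
# Interface field from Eqs. (fullpropomg) + (mdot), with bracket factor
#   f(omega, beta) = (omega-1)(1+beta) / (omega [1 + (1-beta^2)(omega-1)^2]).
import math

G = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33  # g

def bracket(omega, beta):
    return (omega - 1.0) * (1.0 + beta) / (
        omega * (1.0 + (1.0 - beta**2) * (omega - 1.0)**2))

def b_interface(L, P, omega, beta, eta=1.0, dr_over_rin=0.1, M=1.4 * M_SUN):
    Omega_star = 2.0 * math.pi / P
    R_c = (G * M / Omega_star**2) ** (1.0 / 3.0)
    return math.sqrt(2.0 * L * bracket(omega, beta) /
                     (eta * dr_over_rin * math.sqrt(G * M) * R_c**1.5))

# Since bracket(2, 1) = 1, this choice directly returns the prefactor:
# ~5.2e6 G for L = 1e35 erg/s and P = 2.5 ms.  Note the B ~ L^(1/2) scaling.
prefactor = b_interface(1e35, 2.5e-3, 2.0, 1.0)
```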
Considering a spin period in a range typical of millisecond pulsars
(1.5--5 ms), and a value of the fastness exceeding the critical
threshold for mass ejection, $\omega_*^{\rm cr}$
(Eq.~\ref{eq:omgcr}), we conclude that the magnetic field at the
interface $\bar{B}$ must lie in a range between $2.2$ and
$11\times10^6$ G, to produce a total propeller luminosity of
$1.5\times10^{35}$ erg s$^{-1}$ (see Sec.~\ref{sec:results} and
Table~\ref{table}).
It has to be noted that mass ejection is not a necessary outcome of a
system in a propeller state. In fact, a steady solution for a thin
disc with an angular momentum source at the inner rim exists for all
values of $\omega_*>1$ \citep{syunyaev1977}. In such case, the angular
momentum is retained in the disc, which readjusts by increasing its
density with respect to the standard accreting solution, in order to
match the increased demand of viscosity set by the source of angular
momentum at the inner boundary. No mass flows inwards in this case and the
disc is considered {\it dead}. The angular momentum injected by the NS
at the inner rim is released at the outer edge of the disc, most
probably to the orbit of the binary through tidal interactions. Such
state has been recently re-examined by
\citet{dangelo2011,dangelo2012}. However, while a dead disc solution
exists even if no matter is ejected by the system, it hardly holds on
year-long time-scales such as those observed in {XSS J12270--4859}. As a matter of
fact, if the disc is continuously replenished by a source of mass at a
rate $\geq 10^{-12}$ M$_{\odot}$ yr$^{-1}$, like those commonly
observed from LMXBs \citep[e.g.][]{coriat2012}, it takes the inward
pressure of a {\it dead} disc only a few months to bring the inner
disc radius back to the co-rotation surface. We then consider only the
full-ejecting case discussed above as a possibility to explain the
properties of {XSS J12270--4859} in terms of a propeller state.
\section{Spectral energy distribution}
\label{sec:sed}
The plasma of the layer where the accretion disc is truncated in a
propeller state is expected to be very turbulent and magnetised, as a
result of the deposition of a copious amount of energy by the
magnetosphere (see, e.g., magnetohydrodynamics numerical simulations
studied by \citealt{romanova2009} and references therein). Such
region was identified by \citet{bednarek2009,bednarek2009b} as a
suitable site to accelerate charged particles to relativistic energies
through a Fermi process. Here, we apply a similar hypothesis to the case
of a relatively weakly magnetised ($B_{\rm NS}\approx10^{8-9}$ G, $\mu
\approx 10^{26}$--$10^{27}$ G cm$^{3}$), rapidly spinning ($P \approx
\mbox{few}$ ms) NS in a LMXB, and study the spectral energy
distribution expected to arise from the population of relativistic
electrons produced at the layer between the
magnetosphere and the disc. For simplicity, we consider in the
following that the electron distribution occupies a torus-like volume,
with radial size equal to the inner disc radius $R_{\rm in}=R_{\rm
c}\omega_*^{2/3}$ (see Eq.~\ref{eq:fastness}), and a transverse
section of size $R_{\rm t}$. Only the acceleration of electrons is
considered in this model, while the possible contribution of hadrons
is discussed in Sec.~\ref{sec:hadrons}.
\subsection{Electron acceleration}
In a Fermi acceleration process, energy is transferred to each electron at a
rate {\setlength\arraycolsep{0.1em}
\begin{eqnarray}
\label{eq:engain}
\ell_{\rm acc}&=&\xi c E / R_L=\xi e c B(R_{\rm in})=1.4\times10^5\;\xi_{0.01}\;\bar{B}_6\;\mbox{erg s}^{-1},
\end{eqnarray}
where $R_L=E/e B $ is the Larmor radius, $\xi_{0.01}$ is the
acceleration parameter in units of 0.01, $e$ is the electron charge,
and $\bar{B}_6$ is the strength of the magnetic field at the interface
$\bar{B}$, in units of $10^6$ G, which is of the order of the
values determined in Sec.~\ref{sec:propeller}.
The time scale of electron acceleration is:
\begin{equation}
\label{eq:tauacc}
\tau_{\rm acc}=\frac{\gamma m_e c^2}{\ell_{\rm acc}}=5.7\times10^{-8}\;\xi_{0.01}^{-1}\;\bar{B}_6^{-1}\;(\gamma/10^4)\; \mbox{s},
\end{equation}
where $m_e$ is the electron mass. This value is much shorter than
the time needed to cross the typical size of the region, $R_{\rm in}$,
\begin{eqnarray}
\label{eq:tautravel}
\tau_{\rm tr}&=&
\frac{R_{\rm in}}{c}=\frac{R_{\rm c}\omega_*^{2/3}}{c} = \nonumber \\
& \approx & 1.6\times10^{-4}\;P_{2.5}^{2/3}\;(\omega_*/2)^{2/3}\;\mbox{s},
\end{eqnarray}
ensuring that electrons are effectively accelerated before they escape the system.
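The timescale hierarchy can be checked directly; a sketch with the fiducial values $\xi=0.01$, $\bar{B}=10^6$ G, $\gamma=10^4$, and $R_{\rm in}\approx49$ km (the $P_{2.5}=1$, $\omega_*=2$ case):

```python
e, c, m_e = 4.803e-10, 2.998e10, 9.109e-28   # CGS constants
xi, B_bar, gamma = 0.01, 1e6, 1e4            # fiducial values from the text

# Eq. (engain): Fermi energy gain rate
ell_acc = xi * e * c * B_bar                 # erg/s, ~1.4e5
# Eq. (tauacc): acceleration timescale
tau_acc = gamma * m_e * c**2 / ell_acc       # s, ~5.7e-8
# Eq. (tautravel): light-crossing time of R_in ~ 49 km
R_in = 4.9e6                                 # cm
tau_tr = R_in / c                            # s, ~1.6e-4

print(f"tau_acc/tau_tr = {tau_acc/tau_tr:.1e}")  # ~3.5e-4, << 1
```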
\subsection{Emission processes and expected dominant components}
The electrons accelerated by a Fermi process lose energy through the
emission of radiation produced by their interaction with the magnetic
and the radiation field permeating the transition layer, and with the
ions of the plasma.
\subsubsection{Synchrotron emission}
\label{sec:synchro}
Synchrotron losses proceed in the transition layer at the rate set by the Larmor formula:
\begin{eqnarray}
\label{eq:synchro}
\ell_{syn}=\frac{4}{9}\frac{e^4}{m_e^2 c^3} \bar{B}^2 \gamma^2
=1.1\times10^5\;\bar{B}_6^2\;(\gamma/10^4)^2\;\mbox{erg
s}^{-1},
\end{eqnarray}
where $\gamma$ is the electron Lorentz factor. If synchrotron losses are dominant over other
channels of energy losses (see below), the parameters describing the
electron energy distribution are set by the equilibrium between the
energy injection through Fermi acceleration and synchrotron
emission. In particular, the cut-off energy of the electron
distribution is set by equating Eq.~\ref{eq:engain} and
\ref{eq:synchro}:
\begin{equation}
\label{eq:gammamaxsyn}
\gamma_{\rm max}^{syn}=\frac{3}{2}\frac{m_e
c^2}{e^{3/2}}\left(\frac{\xi}{\bar{B}}\right)^{1/2}=1.2\times10^4\;\xi_{0.01}^{1/2}\;\bar{B}_6^{-1/2}.
\end{equation}
Assuming an exponentially cut-off power-law distribution for the
electron energies:
\begin{equation}
\label{eq:elpopul}
\frac{dN_e}{d\gamma}=K \gamma^{-\alpha}\exp{\left(-\frac{\gamma}{\gamma_{\rm max}}\right)},
\end{equation}
the synchrotron spectral energy distribution is described by a power
law:
\begin{equation}
(E F_E)^{syn}\propto E^{-(\alpha-3)/2}\exp{\left[-\frac{3}{2}\left(\frac{E}{E_{\rm max}^{syn}}\right)^{1/3}\right]},
\end{equation} with cut-off energy:
\begin{equation}
\label{eq:synen}
E_{\rm max}^{syn}=\frac{3}{4}\frac{\hbar e}{m_e c} \bar{B} (\gamma_{\rm max}^{syn})^2=\frac{27}{16} \frac{\hbar m_e c^3}{e^2}\xi=1.2 \;\xi_{0.01}\;\mbox{MeV}.
\end{equation}
\citep[see, e.g., the relations given by][evaluated for an electron
distribution like the one given by Eq.~\ref{eq:elpopul}, and setting
the electron Lorentz factor to the value given by
Eq.~\ref{eq:gammamaxsyn}]{lefa2012}. It follows that, when
synchrotron emission is the dominant cooling process, the cut-off
energy of the emitted spectrum depends only on the acceleration
parameter, $\xi$. This parameter may take values $\ll1$ in the case of
relativistic shocks, but it is largely undetermined on theoretical
grounds \citep[see][and references therein]{khangulyan2007}. In our
model we assume that the main contribution to the 0.2--100 keV
spectrum of XSS J12270--4859\ (a power law $E F_{E}\propto E^{-(\Gamma_X-2)}$,
with an index $\Gamma_X=1.70\pm0.02$ and no cut-off detected up to 100
keV, see Sec.~\ref{sec:xss} and \ref{sec:igr}) is given by synchrotron
emission. Imposing that the cut-off energy of this component lies
between 100 keV and 100 MeV, Eq.~\ref{eq:synen} can be used to
constrain $\xi$ to a broad range, $8.5\times10^{-4}$--0.85. Similarly
the observed spectral slope indicates an electron energy distribution
with $\alpha\simeq2\Gamma_X-1=2.4$. On the other hand, the high energy
part of the spectrum observed by {\it Fermi}-LAT, with a cut-off at
$4.1\pm1.3$ GeV \citep{hill2011}, cannot be explained by synchrotron
emission alone, as it would require $\xi\sim30$, and is instead
discussed in terms of (self-synchrotron) inverse Compton emission in
Sec.~\ref{sec:iccem}.
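The numbers in Eqs.~\ref{eq:gammamaxsyn} and \ref{eq:synen}, and the quoted range of $\xi$, can be reproduced as follows (standard CGS constants; the inversion for $\xi$ assumes, as in the text, that the synchrotron cut-off lies between 100 keV and 100 MeV):

```python
import math

e, c, m_e, hbar = 4.803e-10, 2.998e10, 9.109e-28, 1.055e-27  # CGS
MeV = 1.602e-6                                               # erg

xi, B_bar = 0.01, 1e6

# Eq. (gammamaxsyn): Fermi gain balanced against synchrotron losses
gamma_max = 1.5 * m_e * c**2 / e**1.5 * math.sqrt(xi / B_bar)
# Eq. (synen): synchrotron cut-off energy, independent of B
E_max = 27 / 16 * hbar * m_e * c**3 / e**2 * xi              # erg

print(f"gamma_max = {gamma_max:.1e}")        # ~1.2e4
print(f"E_max     = {E_max/MeV:.1f} MeV")    # ~1.2 MeV

# Invert E_max for xi at the 100 keV and 100 MeV bounds
E1 = 27 / 16 * hbar * m_e * c**3 / e**2      # E_max for xi = 1
for E in (0.1 * MeV, 100 * MeV):
    print(f"xi({E/MeV:g} MeV) = {E/E1:.2e}") # ~8.5e-4 and ~0.85
```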
At low energies the emitting region becomes optically thick to the
synchrotron radiation. We evaluate the absorption coefficient for the
relevant parameters of the system and an electron distribution with
index $\alpha=2.4$, following \citet{rybicki1979}:
\begin{equation}
\alpha_{syn}(E)=3\times10^{-4}\;\left(\frac{n_e}{10^{17}\mbox{cm}^{-3}}\right)\;\bar{B}_6^{2.2}\;\left(\frac{E}{\mbox{eV}}\right)^{-3.2}\;\mbox{cm}^{-1},
\end{equation}
where $n_e$ is the density of electrons of the considered medium,
scaled to a value of the order of those obtained through modelling of
the observed spectrum (see below). Imposing $\tau=\alpha_{syn}(E_{\rm br})R_{\rm
t}=1$, we estimate the energy below which the medium becomes
optically thick to synchrotron radiation as:
\begin{equation}
\label{eq:lowenbreak}
E_{\rm br}=2.9\;\left(\frac{n_e}{10^{17}\mbox{cm}^{-3}}\right)^{0.31}\;\left(\frac{R_t}{\mbox{km}}\right)^{0.31}\;\bar{B}_6^{0.69}\;\;\mbox{eV}.
\end{equation}
This value is between the optical and the UV band for typical
parameters of the system, compatible with the absence of a low-energy
cut-off in the observed X-ray data.
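A minimal sketch of the $\tau=1$ inversion behind Eq.~\ref{eq:lowenbreak}, for the fiducial values $n_e=10^{17}$ cm$^{-3}$, $\bar{B}=10^6$ G and $R_{\rm t}=1$ km:

```python
# Invert tau = alpha_syn(E_br) * R_t = 1 for the synchrotron
# self-absorption break (electron index alpha = 2.4 assumed, as in
# the text; alpha_syn = 3e-4 * n_17 * B_6**2.2 * (E/eV)**-3.2 cm^-1).
n_17, B_6, R_t = 1.0, 1.0, 1e5            # R_t in cm (1 km)

E_br = (3e-4 * n_17 * B_6**2.2 * R_t)**(1 / 3.2)  # eV

print(f"E_br = {E_br:.1f} eV")            # ~2.9 eV, optical/UV range
```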
\subsubsection{Inverse Compton emission}
\label{sec:iccem}
The weight of inverse Compton losses in the Thomson domain with respect to synchrotron
losses can be evaluated as:
\begin{equation}
\label{eq:ratio}
\frac{\ell_{IC}}{\ell_{syn}}=\frac{\epsilon_{ph}}{\epsilon_{\rm mag}},
\end{equation}
where $\epsilon_{\rm ph}$ and $\epsilon_{\rm mag}=\bar{B}^2/8\pi$ are the
energy density in the radiation and in the magnetic field,
respectively.
The inner rings of a viscous disc emit thermal photons with a typical
temperature set by the amount of energy that has to be
dissipated by the disc. This is related to the rate of mass inflow
(here set equal to the rate of mass lost) and to the size of the disc
\citep{frankkingraine2002}, yielding $kT(R_{\rm in})\approx 100$ eV
for $\dot{m}_{15}=P_{2.5}=1$ and $\omega_*=2$. Even ignoring the
reduction of the cross section for inverse Compton scattering of
photons with initial energy higher than the Klein-Nishina threshold
$\approx m_e c^2 / 4 \gamma$ ($\approx 10$ eV for $\gamma=10^4$), the
energy density implied by a similar thermal spectrum,
$\epsilon_{disc}=aT^4(R_{\rm in})$, is lower by more than a factor
$1000$ than the density associated to a $10^6$ G magnetic
field. Inverse Compton scattering of disc photons is then
energetically unimportant with respect to synchrotron emission, for
typical parameters of the systems considered here.
Photons emitted by the low mass companion have an even lower
density at the interface between the disc and the magnetosphere,
$\epsilon_{star}/\epsilon_{disc}\approx[T_2/T(R_{\rm
in})]^4(R_2/a)^2\approx10^{-10}$, where $T_2$ and $R_2$ are the
companion star temperature and radius, respectively, and $a$ is the
size of the orbit. To evaluate this ratio we considered
$T_2\simeq4600$ K, $R_2\approx 0.6 R_{\odot}$, a total mass of the
system of 2 M$_{\odot}$, and an orbital period of 8 hr, values typical
of a late type K star as proposed by \citet{demartino2013}.
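The dilution estimate can be sketched numerically; the orbital separation follows from Kepler's third law for the quoted $M_{\rm tot}=2\,M_\odot$ and 8 hr period (the precise order of magnitude of the ratio depends on the adopted $T(R_{\rm in})$ and $R_2$, so only its smallness is robust):

```python
import math

G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.957e10   # CGS constants
k_B, eV = 1.381e-16, 1.602e-12

# Orbital separation from Kepler's third law (M_tot = 2 M_sun, P = 8 hr)
P_orb = 8 * 3600.0
a = (G * 2 * M_sun * P_orb**2 / (4 * math.pi**2))**(1/3)

T2 = 4600.0                        # companion temperature [K]
T_in = 100 * eV / k_B              # disc temperature at R_in, kT ~ 100 eV
R2 = 0.6 * R_sun

# Geometric dilution of the stellar photon field at the inner disc edge
ratio = (T2 / T_in)**4 * (R2 / a)**2
print(f"a = {a:.1e} cm, eps_star/eps_disc ~ {ratio:.0e}")  # tiny, <~1e-10
```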
On the other hand, inverse Compton scattering of the synchrotron
photons off the same electron population which produced them
(synchrotron self Compton process; SSC in the following) may play an
important role if the electron distribution is concentrated in a
relatively small region, such as the transition region that we are
considering here ($R_{\rm in}\approx 50$ km, $R_{\rm t}\approx
\mbox{few km}$). In the Thomson regime, the ratio between the
luminosity emitted through synchrotron and SSC process is
\citep[e.g.][]{sari2001}:
\begin{equation}
\label{eq:SSCratio}
\frac{L_{\rm SSC}}{L_{\rm syn}}\sim \frac{1}{3}\frac{E_{\rm
max}^{SSC}}{E_{\rm max}^{syn}}\sigma_T n_e R_{\rm in},
\end{equation}
where $E_{\rm max}^{SSC}$ and $E_{\rm max}^{syn}$ are the peak
energies of the SSC and synchrotron photons. While the latter energy
is set by Eq.~\ref{eq:synen}, the cut-off of the inverse Compton
distribution reproduces that of the electron energy distribution,
$E_{\rm max}^{SSC}=\gamma_{\rm max}m_ec^2$. Taking into account also the
SSC losses, and defining $f\equiv 1+ L_{\rm SSC}/L_{\rm syn}$, the
maximum energy that can be achieved by the electron distribution is
then obtained by balancing electron energy losses and gains,
\begin{equation}
\label{eq:balance}
\ell_{syn}+\ell_{SSC}=f\ell_{syn}=\ell_{acc},
\end{equation} yielding:
\begin{equation}
\label{eq:gammamax}
\gamma_{\rm max}^{SSC}=\frac{\gamma_{\rm max}^{syn}}{\sqrt{f}}=8.2\times10^3\;f_{2}^{-1/2}\:\xi_{0.01}^{1/2}\;\bar{B}_6^{-1/2}.
\end{equation}
Here, we defined $f_2=f/2$, to scale the energy ratio to the case of
an equal luminosity released by synchrotron and SSC processes. The SSC
spectrum is then cut-off at an energy of:
\begin{equation}
\label{eq:enmax}
E_{\rm max}^{SSC}=4.2\;f_{2}^{-1/2}\:\xi_{0.01}^{1/2}\;\bar{B}_6^{-1/2}\; \mbox{GeV},
\end{equation}
of the order of that observed from {XSS J12270--4859} by {\it Fermi}-LAT
($4.1\pm1.3$ GeV; \citealt{hill2011}). SSC emission can thus be
responsible for the $\gamma$-ray flux observed from {XSS J12270--4859} and perhaps
from other binaries. By setting the high-energy cut-off of the
spectrum to the observed value, and varying the magnetic field in the
range determined in Sec.~\ref{sec:propeller} ($\bar{B}_6$=2.2--11),
Eq.~\ref{eq:enmax} shows that the ratio of the luminosity emitted by
SSC and synchrotron process depends linearly on the poorly constrained
acceleration parameter, $\xi$. If the latter is varied in the range
determined in Sec.~\ref{sec:synchro} by imposing that the cut-off of
the synchrotron spectrum lies between 100 keV and 100 MeV
($\xi=8.5\times10^{-4}$--0.85), values of $f$ ranging from
$\sim10^{-2}$ to 40 are obtained. It is then clear that a sensible
estimate of the expected flux ratio cannot be given without an
accurate knowledge of the acceleration parameter. On the other hand,
as {XSS J12270--4859} emits a comparable $\gamma$-ray and X-ray flux, we expect
the two components to emit energy at a comparable rate. By setting
$f=2$ in Eq.~\ref{eq:enmax}, we can therefore constrain the
acceleration parameter to lie in the range $\xi=0.02$--$0.10$, in
order to reproduce the observational features of {XSS J12270--4859}, in the
framework set by our model.
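Both the normalisation of Eq.~\ref{eq:enmax} and the quoted range $\xi=0.02$--$0.10$ can be recovered numerically (a sketch; the inversion fixes $f=2$ and the observed 4.1 GeV cut-off, and spans the field range $\bar{B}_6=2.2$--11):

```python
import math

e, c, m_e = 4.803e-10, 2.998e10, 9.109e-28   # CGS constants
GeV = 1.602e-3                               # erg
mec2 = m_e * c**2

def E_ssc_max(xi, B, f):
    """Eqs. (gammamax)/(enmax): SSC cut-off E = gamma_max^SSC * m_e c^2."""
    gamma_syn = 1.5 * mec2 / e**1.5 * math.sqrt(xi / B)
    return gamma_syn / math.sqrt(f) * mec2   # erg

# Fiducial values reproduce the ~4.2 GeV normalisation of Eq. (enmax)
print(f"{E_ssc_max(0.01, 1e6, 2)/GeV:.1f} GeV")  # ~4.2 GeV

# Invert for xi at the observed 4.1 GeV cut-off, f = 2, over the
# field range of the propeller solution
E_obs = 4.1 * GeV
xis = [(E_obs * math.sqrt(2) * e**1.5 / (1.5 * mec2**2))**2 * B
       for B in (2.2e6, 1.1e7)]
print([f"{xi:.3f}" for xi in xis])               # ~[0.021, 0.104]
```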
To zeroth order, the electron density required to produce a
similar contribution from SSC and synchrotron emission ($L_{\rm
SSC}/L_{\rm syn}=1$, $f=2$) can be estimated from Eq.~\ref{eq:SSCratio}:
\begin{equation}
\label{eq:eldens}
n_e\approx2.7\times10^{14}\;f_{2}^{-1/2}\:\xi_{0.01}\;P_{2.5}^{-2/3}\;(\omega_*/2)^{-2/3}\;\mbox{cm}^{-3}.
\end{equation}
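The Thomson-limit estimate of Eq.~\ref{eq:eldens} for $\xi_{0.01}=f_2=1$ amounts to:

```python
sigma_T = 6.652e-25                  # Thomson cross-section [cm^2]
E_syn = 1.2e6                        # synchrotron cut-off [eV], xi = 0.01
E_ssc = 4.2e9                        # SSC cut-off [eV], f = 2
R_in = 4.9e6                         # inner radius [cm]

# Eq. (SSCratio) solved for n_e with L_SSC/L_syn = 1:
n_e = 3 * E_syn / (E_ssc * sigma_T * R_in)
print(f"n_e ~ {n_e:.1e} cm^-3")      # ~2.6e14 (Thomson limit)
```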
However, this value is a significant underestimate, as the reduction of the
cross section due to Klein-Nishina effects largely decreases the
efficiency of the SSC emission. We show in Sec.~\ref{sec:model} how
densities of the order of $10^{17}\,\mbox{cm}^{-3}$ are needed to
reproduce the observed spectral energy distribution.
This value of the electron density has to be compared with the
expected density in the outflow. By imposing mass continuity at the
base of the outflow, we estimate a scale unit of the mass density as:
\begin{eqnarray}
\rho_0&=&\frac{\dot{M}}{2\pi R_{\rm in} R_{\rm t} v_{\rm
out}}\nonumber\\
&=&10^{-7}\;\dot{m}_{15}\;P_{2.5}^{-1/3}\;(\omega_*/2)^{-4/3}\;(R_{\rm
t}/10^5\mbox{cm})\; \mbox{g cm}^{-3},
\end{eqnarray}
where we used Eq.~\ref{eq:veloutflow} to express the outflow velocity
in the purely ejecting case ($\beta=1$), and considered a typical size
for the transverse section of the acceleration region $R_{\rm
t}\approx 10^5$ cm (see below). For a fully ionised plasma, the
electron density is therefore:
\begin{equation}
n_{e,0}\simeq\frac{\rho_0}{m_{H}}(X+Y/2)\approx0.5\times10^{17}\;\mbox{cm}^{-3},
\end{equation}
where we omitted the dependencies on the scale units used so far, and
we considered solar abundances for hydrogen and helium, $X=0.7$ and
$Y=0.28$, respectively. Such a value of the scale unit of electron density is not far from that needed to produce a comparable SSC and synchrotron
emission ($\approx\mbox{few}\times10^{17}\,\mbox{cm}^{-3}$; see Sec.~\ref{sec:model}).
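The ionisation arithmetic can be checked directly; here $\rho_0=10^{-7}$ g cm$^{-3}$ is taken from the mass-continuity estimate above (the outflow velocity of Eq.~\ref{eq:veloutflow} is not re-derived):

```python
m_H = 1.673e-24                      # hydrogen atom mass [g]
rho_0 = 1e-7                         # outflow mass-density scale [g cm^-3]
X, Y = 0.7, 0.28                     # solar H and He mass fractions

# Fully ionised plasma: each H gives 1 electron per m_H of mass,
# each He gives 2 electrons per 4 m_H, hence n_e = (X + Y/2) rho / m_H
n_e0 = rho_0 / m_H * (X + Y / 2)
print(f"n_e,0 ~ {n_e0:.1e} cm^-3")   # ~5e16
```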
\subsubsection{Bremsstrahlung emission}
To evaluate the energy lost by electrons in bremsstrahlung
interactions with the ions of the plasma, we considered the relation given
by \citet{blumenthal1970}, in the approximation of a fully ionised plasma:
\begin{eqnarray}
\ell_{brems}&=&\frac{4e^6}{\hbar m_e^2 c^4}
(2n_{H}+20n_{He})[\log(2\gamma)-1/3]\gamma m_e c^2\simeq\nonumber\\
&&0.66\;\left(\frac{\rho}{\rho_0}\right)\;\left(\frac{\gamma}{10^4}\right)\;
\mbox{erg s}^{-1}.
\end{eqnarray}
Here $n_H=X \rho/m_H$ and $n_{He}=Y \rho/m_H$ are the hydrogen and
helium number densities, respectively, and a solar composition is
considered. By comparing this relation to energy gains and other
radiative losses (see Eq.~\ref{eq:engain}, \ref{eq:synchro} and
\ref{eq:balance}), we deduce that bremsstrahlung radiation is not
dominant for the typical densities of a propeller outflow.
\subsubsection{Coulomb losses}
Energy losses through Coulomb interactions can be safely ignored for
the relevant parameters. Considering the rate of energy lost by a
relativistic population of electrons through Coulomb interactions with
other electrons and with much less energetic ions
\citep[e.g.][]{frankel1979}, a slowing down time scale:
\begin{equation}
\tau_{\rm coll}\simeq 130\;(\gamma/10^4)\;(\rho/\rho_0)^{-1}\;\mbox{s}
\end{equation} is obtained, much longer than other time scales of the system.
\begin{figure}
\includegraphics[angle=0,width=\columnwidth]{obs_spectra.eps}
\caption{Spectral energy distribution observed from {XSS J12270--4859} in the
$\gamma$-rays, X-rays, IR/optical/UV and radio bands (magenta,
blue, red and black thick lines, respectively). The model
obtained for $\alpha=2.4$, $\gamma_{\rm max}=10^4$,
$\bar{B}=5\times10^6$ G and $n_e=12\times10^{17}$ cm$^{-3}$ is
over-plotted as a black line. The components due to the
synchrotron, SSC, inverse Compton of disc photons, and
bremsstrahlung in the relativistic plasma are shown as blue,
magenta, red and green dotted lines, respectively. The cyan dashed
line represents the possible contribution of the inner parts of an
accretion disk truncated at a radius of 50 km, with an inner
temperature of 0.1 keV, an inclination of $75^{\circ}$, and
absorbed by the interstellar medium with an absorption column
$N_{\rm H}=10^{21}$ cm$^{-2}$ \citep{demartino2010}. This
component has not been taken into account in evaluating the model,
considering the large uncertainties on the actual emission of a
disk in a propeller state.\label{fig:spectrum}}
\end{figure}
\subsection{Simulated spectral energy distribution}
\label{sec:model}
To estimate quantitatively if the proposed model can reproduce the
broad-band spectral energy distribution observed from {XSS J12270--4859}, we
modelled the radiation processes described in the previous section
(synchrotron, SSC, inverse Compton scattering of the disc photons, and
bremsstrahlung) using the codes described by
\citet{torres2004}, \citet{decea2009} and \citet{martin2012}, and assuming a distance to
the system of 2 kpc.
The electron distribution for $\gamma>1$ is described by an
exponentially cut-off power law (Eq.~\ref{eq:elpopul}). The energy
distribution and luminosity produced by the synchrotron and SSC
processes also depend on the strength of the magnetic field
interacting with the electrons, $\bar{B}$, and on the electron
density, $n_e$, respectively.
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Model parameters reproducing the X-ray to $\gamma$-ray
spectrum observed from {XSS J12270--4859}. The cut-off of the electron
distribution is fixed at $\gamma=10^4$ and $\alpha=2.4$. A
distance of 2 kpc is considered.\label{table}}
\begin{tabular}{@{}cccc||ccccccc@{}}
\hline
\hline
$\bar{B}$ (MG) & $n_{e,17}$ (cm$^{-3}$) & $L_{\rm syn;35}$ & $L_{\rm SSC;35}$ & $\xi$ & $\omega_*$ & $\beta$ & $\mu_{26}$ (G cm$^{3}$) & $\dot{m}_{15}$ (g s$^{-1}$)& $ R_{\rm in}$ (km) & $ R_{\rm t}$ (km)\\
\hline
$1$ & $2.6$ & $0.48$ & $0.77$ & $0.019$ & $1.03$ &{...} & $0.3$ & {...} & $31$ & $1.3$ \\
$3$ & $4.9$ & $0.62$ & $0.67$ & $0.046$ & $1.40$-$1.44$ & $0-0.1$ & $1.7-1.8$ & $4.5-4.6$ & $38-39$ & $0.3-0.4$ \\
$4$ & $7.0$ & $0.73$ & $0.71$ & $0.059$ & $1.3-1.70$ & $0.40-1$ & $1.9-3.4$ & $4.8-5.6$ & $36-44$ & $0.20-0.22$\\
$5$ & $12$ & $0.77$ & $0.72$ & $0.072$ & $1.4-1.9$ & $0.76-1$ & $3.1-5.4$ & $5.6-6.2$ & $39-48$ & $0.12-0.13$ \\
$6$ & $12$ & $0.78$ & $0.69$ & $0.083$ & $1.8-2.4$ & $0.93-1$ & $5.8-10$ & $7.0-7.3$ & $46-56$ & $0.09-0.10$ \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
We set the model parameters to reproduce the $\Gamma_1=1.70\pm0.02$ power
law observed in the 0.2--100 keV band by {\it XMM-Newton}, {\it RXTE}
and {\it INTEGRAL} (with an unabsorbed flux of
$(4.5\pm0.9)\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$;
\citealt{demartino2010,demartino2013}), and the $\Gamma_2=2.21\pm0.09$ power
law, cut off at $E_{\rm cut}=4.1\pm1.3$ GeV, observed above 100 MeV by {\it
Fermi}-LAT (with an unabsorbed flux of $(4.1\pm0.3)\times10^{-11}$
erg cm$^{-2}$ s$^{-1}$, \citealt{hill2011}). The spectral energy
distribution of these two components is plotted as a blue and
magenta thick line in Fig.~\ref{fig:spectrum}, respectively, together
with the IR/optical/UV (red thick line; \citealt{demartino2013}) and
the radio spectrum (black thick line; \citealt{hill2011}).
The cut-off in the $\gamma$-ray energy band is well reproduced by
$\gamma_{max}=10^4$. On the other hand, the shape of the X-ray power
law results from the sum of the synchrotron and SSC contribution in
that energy band (with the former contributing for more than two
thirds of the emission), and is reasonably modelled by using an index
$\alpha=2.4$ for the electron energy distribution. Keeping fixed
these two parameters, and choosing $\bar{B}=5\times 10^6$ G, we found a
good modelling of the observed X-ray and $\gamma$-ray spectra for an
electron density of $n_e=12\times10^{17}$ cm$^{-3}$. The spectral
energy distribution so obtained is plotted in Fig.~\ref{fig:spectrum}
as a black solid line, where the synchrotron and SSC components are
also drawn as a blue and magenta dotted line, respectively. The low
energy cut-off of the synchrotron spectrum is set at the energy
predicted by Eq.~\ref{eq:lowenbreak}.
In Fig.~\ref{fig:spectrum}, we also plot the inverse Compton spectrum
yielded by seed photons coming from the inner disk (assuming an inner
temperature of 100 eV and a truncation radius of 50 km) and the
bremsstrahlung spectrum (evaluated for a fully ionised plasma with an
electron density equal to the value determined above) as a red and
green dotted line, respectively. Their contribution to the total
spectrum can then be safely neglected, for the parameters considered
in this case, according to expectations.
Similarly, the thermal X-ray output of the inner parts of the
truncated accretion disk is expected to be at most of the same order
as the synchrotron emission, at energies of a few tenths of a keV. To show
this, we used the model developed by \citet{gierlinski1999} to model
the spectrum of a disk in the accreting high state, truncated at 50
km, with an inner disk temperature of 100 eV (see
Sec.~\ref{sec:iccem}), a high inclination of 75$^{\circ}$ (as possibly
indicated by the dips and flares observed from the source), a
hardening factor of 2, and interstellar absorption by a column with
density $N_{\rm H}=10^{21}$ cm$^{-2}$ (see cyan dashed line in
Fig.~\ref{fig:spectrum}). This component would have a 0.5--10 keV flux
exceeding by at most a factor of a few the upper limit which can
be set on the presence of a disk thermal component during the
January 2009 observation ($2\times10^{-13}$ erg cm$^{-2}$ s$^{-1}$).
Considering the large uncertainties on the thermal emission of a disc
truncated by a propellering magnetosphere, and the scattering of most
of the disk soft X-ray photons by the electron cloud (see red dotted
line in Fig.~\ref{fig:spectrum}), we then consider that the non
detection of a thermal component at soft X-rays is still compatible
with our description.
The IR/optical/UV spectrum was modelled by \citet{demartino2013} as
the sum of the contributions of the companion star ($T_2=4600\pm250$
K) and of the outer part of the accretion disc ($T_h=12800\pm600$ K;
$R\approx 10^5$ km). Considering also the low energy cut-off of the
synchrotron spectrum at a few eV, the emission coming from the
transition layer is not expected to contribute directly more than
ten per cent of the IR/optical/UV output. However, UV flares occur
simultaneously with X-ray flares, albeit with an amplitude lower by a
factor of two, indicating that the emission in the two bands is
closely related \citep{demartino2010,demartino2013}. This is still
compatible with our model if the UV emission is assumed to be due to
reprocessing in the outer rings of the disc of the X-ray emission
generated at the inner disc boundary, something already suggested by
\citet{demartino2013}.
The emission model that we have developed for the radiation coming
from the transition layer cannot explain the optically thin/flat radio
spectrum ($F_{\nu}\sim\nu^{p}$, with $p=-0.5\pm0.6$) observed by ATCA at
frequencies of 5.5 and 9 GHz \citep{hill2011}. It is evident from
Eq.~\ref{eq:lowenbreak} that a similar emission has to come from a
region of a larger size, with a correspondingly lower electron density
and magnetic field strength. For instance, in order to obtain a break
of the synchrotron spectrum below 5.5 GHz, and assuming a dipolar
decay of the field as $B(r)\propto r^{-3}$, the emitting region should
be 100 times larger and have a density lower by four orders of
magnitude. These properties would nevertheless be compatible with an
emitting region belonging to the binary system.
\subsection{Implications for XSS J12270--4859 }
\label{sec:results}
The model parameters that we obtained can be used together with the
relations derived in Sec.~\ref{sec:propeller} and \ref{sec:sed}, to get
model-dependent constraints on the system parameters. The cut-off
energy of the electron distribution ($\gamma=10^4$) and the magnetic
field strength at the interface ($\bar{B}=5\times10^6$ G), are related
by Eq.~\ref{eq:gammamax} to the acceleration parameter $\xi$, and the
ratio between the synchrotron and the SSC luminosity, $f$. For the
considered parameters, we have $f=1.98$ and an acceleration parameter
$\xi\simeq0.07$ (see Table~\ref{table}).
Eq.~\ref{eq:solution} relates the strength of the field at the
interface to the radiated luminosity, the spin period, the fastness
and the elasticity parameter. Setting $P_{2.5}=1$, and the luminosity
of $1.49\times10^{35}$ erg s$^{-1}$ as obtained from our modelling,
gives a fastness parameter of $1.45$ for the elastic propeller case
(larger than the critical value of $1.21$ needed for mass ejection,
\citealt{perna2006}), a magnetic dipole moment of $3.1\times10^{26}$ G
cm$^3$ (Eq.~\ref{eq:dipole}) and a mass ejection rate of
$6.3\times10^{15}$ g s$^{-1}$ (Eq.~\ref{eq:ejection}), compatible with
the typical mass accretion rates observed from NS in LMXB. For these
parameters, the inner disc radius would be placed at $R_{\rm
in}\simeq40$ km, with an acceleration region of transverse section
$R_{\rm t}\simeq0.1$ km.
We also varied the strength of the magnetic field at the interface,
$\bar{B}$, to study the range of parameters which leads to a good fit of
the observed spectral energy distribution and is compatible with the
propeller model developed in Sec.~\ref{sec:propeller}. Decreasing
$\bar{B}$ while keeping the spin period fixed increases the range of
values of the elasticity parameter which can provide a solution above
the critical fastness. For instance, a value of
$\bar{B}_6=4$ gives a propeller solution for $\beta>0.4$, while
$\bar{B}_6=3$ is allowed for $0<\beta<0.1$. A similar effect is
obtained by increasing the value of the spin period. Too low a value
of the field at the interface (e.g. $\bar{B}_6=1$) is not formally
compatible with the propeller model we developed; in this case, in
fact, a solution of Eq.~\ref{eq:solution} is found for $\omega_*<1.03$,
which is below the critical fastness for any elasticity parameter. For
such values the field would then more likely produce an accretion
state than a propeller. On the other hand, when $\bar{B}$ is increased
above $6\times10^6$ G, the volume of the acceleration zone needed to
keep the contribution of the SSC photons comparable to that yielded by
synchrotron photons becomes uncomfortably small, while the acceleration
parameter exceeds 0.1. At the same time the magnetic
dipole moment increases above $10^{27}$ G cm$^3$, which is also
unlikely for a NS in a LMXB. We therefore conclude that an interface
magnetic field strength in a relatively narrow range of
$3$--$6\times10^6$ G best reproduces the observed spectra and is
compatible with the theoretical expectations for a NS in a LMXB,
ejecting mass in a propeller state. This range of field strengths implies
an acceleration parameter in the range 0.04--0.08, in order to give
comparable emission in the two components (see Eq.~\ref{eq:gammamax}). The
parameter values of a sample set of models are given in
Table~\ref{table}.
\subsection{Hadron acceleration}
\label{sec:hadrons}
So far, we have not analysed possible acceleration of hadrons,
followed by subsequent interactions with material in the disc, pion
decay, and gamma-ray production. This is an alternative that may in
principle require consideration.
In our scenario, where particles are accelerated in the transition
zone, applying Eq.~\ref{eq:tauacc} and \ref{eq:tautravel}, which give
the timescales of acceleration and escape, to the case of protons
implies that the maximum proton acceleration would occur at a
Lorentz factor of 10$^4$, i.e., an energy of 10 TeV. We do not see
the outcome of these accelerated protons, which would produce photons
up to a few hundred GeV, since the gamma-ray spectrum is severely cut
off at a few GeV. (For electrons, the maximum energy of the population is
limited not by escape losses, but by synchrotron and SSC, which can be
instead neglected in the case of protons, as they are a factor
$(m_p/m_e)^2\approx4\times10^6$ less efficient, see
Eq.~\ref{eq:synchro}.) One could in principle entertain that protons of
the highest energies will penetrate the inner structure of the
accretion disc, and by interacting there would produce photons of a few
hundred GeV, which would very likely interact as well, being absorbed
\citep[see, e.g.,][]{bednarek1993}. We cannot discard this scenario a
priori, without detailed computations, as a possible contributor to
the total SED yield.
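The escape-limited proton energy quoted above can be sketched by equating the proton analogues of Eqs.~\ref{eq:tauacc} and \ref{eq:tautravel}; the fiducial values $\xi=0.01$, $\bar{B}=10^6$ G are assumed here, so the result is an order-of-magnitude estimate only:

```python
e, c = 4.803e-10, 2.998e10               # CGS constants
m_p = 1.673e-24                          # proton mass [g]
xi, B_bar = 0.01, 1e6                    # fiducial values
R_in = 4.9e6                             # cm

# Escape-limited maximum: tau_acc(proton) = tau_tr
ell_acc = xi * e * c * B_bar             # gain rate, same as for electrons
gamma_p = ell_acc * (R_in / c) / (m_p * c**2)
E_p_TeV = gamma_p * m_p * c**2 / 1.602   # erg -> TeV (1 TeV = 1.602 erg)

print(f"gamma_p ~ {gamma_p:.1e}, E_p ~ {E_p_TeV:.0f} TeV")  # order 1e4, 10 TeV
```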
What seems less likely is the picture in which protons are
accelerated in an electrostatic gap outside of the transition zone, and
then impact the accretion disk in a sort of beam-meets-target
phenomenon \citep{cheng1989}. This scenario has been explored for some
high-mass X-ray binaries, like A0535+26 (earlier claimed as a possible
EGRET source) by \citet{romero2001}, and \citet{anchordoqui2003};
although the model has not been confirmed by Fermi-LAT or at higher
energies \citep{acciari2011}. In our case, assuming first that the
system is in an accretion phase, not in a propeller state, and using the
formulae in \citet{romero2001}, the acceleration potential would produce one
order of magnitude less voltage than for A0535+26, and the current
flowing in the disc would also be one order of magnitude less.
\section{Discussion}
\label{sec:disc}
Being the only LMXB discovered so far with a proposed bright persistent
$\gamma$-ray counterpart, {XSS J12270--4859} is an extremely intriguing
source. However, the nature of the compact object hosted by this system
is still uncertain. In this paper, we proposed that the system hosts a
neutron star in a propeller state, developing a model to reproduce the
observed X-ray and $\gamma$-ray properties of the source. Before
discussing the implication of the model we presented in this paper, we
briefly summarise other possibilities suggested to explain at least
partly the rich phenomenology observed from {XSS J12270--4859}.
\subsection{Can it be a rotational-powered pulsar?}
\label{sec:discpulsar}
Coherent pulsations were not detected from {XSS J12270--4859} in the X-ray and
radio band, nor from its proposed $\gamma$-ray counterpart. The upper
limits on the X-ray pulse amplitude set by \citet{demartino2013}
(between 15 and 25 per cent in the 0.5--10 keV band) are larger than
those observed from many accreting millisecond pulsars \citep[see,
e.g.][and references therein]{patruno2012c}, and should not be
considered particularly constraining as to whether an accreting pulsar is
present in the system.
On the other hand, the non detection of radio pulsations reported by
\citet{hill2011} on the basis of Parkes 1.4 GHz observations indicates
that if {XSS J12270--4859} harbours a rotation powered pulsar, either its pulsed
emission is not beamed towards the Earth, or it is scattered and
absorbed by matter engulfing the system.
Indeed, the similarity of the $\gamma$-ray spectrum of the proposed
counterpart of {XSS J12270--4859} to those observed by Fermi from several
$\gamma$-ray pulsars led \citet{hill2011} to suggest that the system
may host a similar radio quiet/faint $\gamma$-ray pulsar. However, we
note that the X-ray emission of {XSS J12270--4859} ($L_X=6.5(1)\times10^{33}$ erg
s$^{-1}$ in the 2--10 keV band) is larger by orders of magnitude
than that expected and usually observed from rotation powered pulsars
with a low mass companion. In fact, similar systems usually harbour a
weakly magnetised ($B\simeq10^8$--$10^9$ G) pulsar, spun up to a
millisecond spin period by a previous phase of mass accretion
\citep[see, e.g.][]{bhattacharya1991}. Expressing the spin-down power
of a pulsar as
\begin{equation}
\dot{E}\approx\frac{\mu^2}{c^3}\left(\frac{2\pi}{P}\right)^4,
\end{equation}
where $\mu$ is the magnetic dipole moment and $P$ the spin period of the
neutron star, and considering a $\eta=L_X/\dot{E}\leq 10^{-3}$
efficiency of the conversion of the spin down power in X-ray
luminosity \citep[e.g.][]{becker2009}, typical parameters observed in
millisecond pulsars ($P\approx\mbox{few}$ ms; $\mu\approx10^{26}$ G
cm$^{3}$) yield:
\begin{equation}
L_X^{psr}\simeq 1.5\times10^{31}\,\eta_{-3}\,\mu_{26}^2\,P_{2.5}^{-1}\,\mbox{erg s}^{-1}.
\end{equation}
Here, $\eta_{-3}$ is the X-ray conversion efficiency in units of
$10^{-3}$. Indeed, the brightest rotation powered millisecond pulsars
in X-rays have luminosities of $\simeq10^{33}$ erg s$^{-1}$
\citep{cusumano2003,webb2004,bog2011}. Similarly, pulsars which
showed a transition between rotation and accretion powered states, PSR
J1023+0038 \citep{archibald2009}, and IGR J18245--2452
\citep{papitto2013}, were observed at an X-ray luminosity of
$\approx10^{32}$ erg s$^{-1}$ during their rotation powered activity
\citep{archibald2010,bog2011b}.
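The spin-down estimate can be verified directly for the typical parameters quoted above ($\mu=10^{26}$ G cm$^3$, $P=2.5$ ms, $\eta=10^{-3}$):

```python
import math

c = 2.998e10                             # speed of light [cm/s]
mu, P, eta = 1e26, 2.5e-3, 1e-3          # G cm^3, s, X-ray efficiency

# Dipole spin-down power and implied pulsar X-ray luminosity
E_dot = mu**2 / c**3 * (2 * math.pi / P)**4   # erg/s
L_X = eta * E_dot

print(f"E_dot = {E_dot:.1e} erg/s")      # ~1.5e34
print(f"L_X   = {L_X:.1e} erg/s")        # ~1.5e31, << 6.5e33 observed
```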
To match a similar value, {XSS J12270--4859} should then be closer than 0.6 kpc,
spin rather rapidly and/or be particularly young. We note that
a distance $1.4$--$3.6$ kpc is suggested by the spectral shape of the
colder thermal component detected in the optical band by
\citet{demartino2013}.
Also, the observation of optical emission lines would not seem to
easily fit a scenario with a rotation powered pulsar, as in a similar
state the pulsar wind would be expected to sweep the entire Roche lobe
of the neutron star clear of the matter transferred by the companion star
\citep{ruderman1989}. For similar reasons we consider the rotation
powered pulsar scenario at least improbable, as {XSS J12270--4859} would be by
far the brightest rotation powered pulsar, without pulsations being
detected, and with a disc surviving the intense radiation pressure
which would be implied by such a high spin down power.
\subsection{Can it be an accreting black hole?}
The non-detection of coherent pulsations from {XSS J12270--4859} and its short
term X-ray variability leave open the possibility that an accreting
black hole in a low hard state is present in the system
\citep{saitou2009,demartino2013}. In such a case, the radio emission
would originate in a compact jet. Indeed, the large uncertainty
of the radio spectrum (a $F_{\nu}\propto\nu^{-\alpha}$ power-law
spectrum, with $\alpha=0.5\pm0.6$) makes it compatible with the
typical flat spectrum observed from compact jets ($\alpha\approx
0$). As noted by \citet{demartino2013}, the observed ratio between the
radio luminosity at 9 GHz and the X-ray luminosity in the 3--9 keV
X-ray band places it slightly under-luminous in the radio band with
respect to the correlation observed for black hole binaries
\citep{gallo2003}, while it seems over-luminous with respect
to accreting neutron stars in the hard state
\citep{migliari2006}. However, the relatively bright gamma-ray
emission seems difficult to reconcile with a black-hole scenario. So
far, GeV emission has been detected only from two systems hosting a
black hole: a weak emission recently detected from Cyg X-1
\citep{malyshev2013} --albeit this has not been confirmed in
subsequent analysis by the {\it Fermi}-LAT collaboration--, and a
brighter but transient emission observed from Cyg X-3
\citep{tavani2009,fermi2009}. In both cases, however, according to
leptonic models the high energy emission is related to up-scattering
of the dense photon field emitted by the massive companion star and/or
the disc, which are hardly important contributors in the case of
{XSS J12270--4859}. Indeed, no gamma-ray emission has been reported so far from
any of the many known black holes with a low-mass companion star. Thus we
also consider this scenario unlikely.
\subsection{A neutron star in propeller}
In this paper we have proposed a propellering neutron star scenario
for {XSS J12270--4859}. Our model is based on the assumption that in such a state a
population of electrons can be accelerated to relativistic energies at
the interface between the disc and the magnetosphere, following the
suggestion put forward by \citet{bednarek2009,bednarek2009b}.
Bednarek applied such a model to the case of slowly rotating ($P\ga 10$
s) accreting NS in HMXB ($B_{\rm NS}\sim10^{12}$ G, $\mu\sim 10^{30}$
G cm$^{3}$; \citealt{bednarek2009b}), as well as to NSs with
super-critical magnetic field at their surface ($B_{\rm NS} \sim
10^{14}$ G, $\mu\sim 10^{32}$ G cm$^{3}$), harboured in binary systems
with a massive companion star \citep{bednarek2009}. This concept was
later developed by \citet{torres2012,papitto2012} to explain the
multi-wavelength phenomenology of LS I 61 303; and has found support in
the recent discovery of the super orbital variability of the gamma-ray
emission from the system \citep{ackermann2013}.
While the
surface magnetic field of a typical NS in a LMXB is lower by more
than four orders of magnitude than the much more intense fields of NS
in HMXB or in magnetars, the radius at which the matter in-flow is
truncated in a NS-LMXB system is much smaller. The field at the
magnetospheric interface of a NS in a LMXB, like that hypothesised for
{XSS J12270--4859}, is then up to three orders of magnitude larger
(Eq.~\ref{eq:dipole}), and as a consequence also the power available
to accelerate electrons (Eq.~\ref{eq:engain}).
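A back-of-the-envelope sketch of this comparison, assuming a pure dipole scaling $B(r) = \mu/r^{3}$; the HMXB truncation radius is an illustrative assumption, not a value quoted in the text:

```python
# Dipole field strength (G) at radius r (cm) for a dipole moment mu (G cm^3)
def B_dipole(mu, r):
    return mu / r**3

# NS-LMXB case hypothesised for XSS J12270-4859
B_lmxb = B_dipole(1e26, 5e6)   # mu = 1e26 G cm^3, R_in = 50 km
# Slowly rotating HMXB pulsar case of Bednarek (2009b), illustrative radius
B_hmxb = B_dipole(1e30, 5e8)   # mu = 1e30 G cm^3, R_in ~ 5000 km (assumed)

# The LMXB interface field comes out ~1e2-1e3 times larger,
# despite the four-orders-of-magnitude weaker surface field.
print(f"B at LMXB interface: {B_lmxb:.1e} G")
print(f"B at HMXB interface: {B_hmxb:.1e} G")
```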
For typical parameters of a system like the one considered here, the
cooling of this electron population takes place mainly through
synchrotron interaction with the magnetic field permeating the
interface, and through inverse Compton losses due to the interaction
between the electrons and the synchrotron photons. The dominance of
synchrotron self-Compton emission is not usually encountered in
accreting systems; we found that LMXB in a propeller state could be
prone to this situation.
Inverse Compton
losses given by the interaction with the radiation field emitted by
the disc, and by the companion star, represent in fact a contribution
which is orders of magnitude lower. The same holds for
bremsstrahlung losses.
As both the dominant cooling channels are strongly dependent on the
strength of the magnetic field, this quantity has a crucial influence
on the value of the maximum energy yielded to electrons. We showed
that for typical parameters of a propellering NS in LMXB
($\mu\approx\mbox{few}\times10^{26}$ G cm$^{3}$, $P\approx\mbox{few
ms}$, $R_{\rm in}\ga 50$ km), and an acceleration parameter in
the range 0.01--0.1, a maximum energy of few GeV is naturally
obtained, compatible with the high energy cutoff observed from the
$\gamma$-ray counterpart of {XSS J12270--4859}. At the same time, if the emission
region is compatible with the size of the magnetosphere-disc interface
($R_{\rm in}\ga 50$ km; $R_{t}\approx\mbox{km} $), the synchrotron
self-Compton emission will give rise to an overall
output comparable to that yielded in X-rays by the synchrotron
emission. Such a model is therefore able to explain
semi-quantitatively the peculiar spectral energy distribution of
{XSS J12270--4859} at high energies. On the other hand, synchrotron absorption in
the emission region predicts a low energy cut-off at few eV.
According to our model, the emission arising at the magnetospheric
interface cannot account for the radio emission observed from
the source, which should therefore come from a larger region with a
lower electron density.
The model we presented may also explain the observational features
recently observed from the transitional pulsar PSR
J1023+0038. Following the disappearance of radio pulsations, the onset
of an accretion state has been recently reported for this otherwise
rotational-powered system \citep{stappers2013}. Evidence supporting
the formation of an accretion disc has been obtained from optical
observations \citep{halpern2013}, similar to that which led
\citet{archibald2009} to conclude that the source was in an accretion
state between 2000 and 2001 \citep[see
also][]{wang2009}. Simultaneously, the source brightened by more
than an order of magnitude in X-rays to an average level of
$L_X(0.5$--$10\,\mbox{keV})\simeq2.5\times10^{33}$ erg s$^{-1}$ (with
variations by a factor 10 on timescales of few tens of seconds
\citealt{patruno2013b,kong2013,papitto2013b}), and by at least a
factor of five in gamma-rays ($L_{\gamma}(>100\,\mbox{MeV})\ga
5\times10^{33}$ erg s$^{-1}$, \citealt{stappers2013}), with respect to
the rotation-powered phase characterised by an active radio-pulsar
\citep{archibald2010,tam2010,bog2011b}. The observed value of the
X-ray luminosity strongly suggests that the system has entered a
propeller state, possibly alternating with a rotational-powered state
on short timescales of tens of seconds \citep{patruno2013b}. The model
we presented can be taken as a plausible interpretation of the
comparable emission observed from this system in X-rays and
gamma-rays, alternative to a scenario in which the gamma-ray emission
is due exclusively to residual periods of activity as a
rotational-powered pulsar.
In the model presented here we considered a neutron star in a purely
ejecting propeller state, even if \citet{bednarek2009,bednarek2009b}
argued that the electron acceleration at the magnetospheric interface
can take place both if the neutron star is effectively accreting the
in-flowing mass down to its surface, and if it is instead
propellering mass away. We made this choice because in such conditions
the interface between the field and the disc is expected to be highly
turbulent and magnetised, thus favouring the acceleration of
electrons through a Fermi process. Even if the accretion of a
fraction of the in-flowing matter (like in the trapped state studied
by \citealt{dangelo2010,dangelo2012} and applied by
\citealt{patruno2009b,patruno2013} to interpret properties of a few
accreting pulsars) is in principle possible, the low X-ray flux
observed from the source and the absence of a detected thermal
component in the soft X-ray band set a limit on the accretion rate.
Even assuming that the total X-ray output of the source is powered by
accretion, this would not take place at a rate larger than $\simeq
2\times10^{-12}$ M$_{\odot}$ yr$^{-1}$, i.e. $10^{-4}$ times the
Eddington rate.
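This bound can be reproduced with the standard accretion luminosity $L_{\rm acc} = GM\dot{M}/R$; the neutron star mass and radius ($1.4\,M_{\odot}$, 10 km) are conventional assumptions:

```python
G = 6.674e-8        # gravitational constant (cgs)
M_sun = 1.989e33    # solar mass (g)
yr = 3.156e7        # year (s)
M_ns, R_ns = 1.4 * M_sun, 1e6   # assumed NS mass and radius (10 km)

Mdot = 2e-12 * M_sun / yr        # the limiting rate quoted above (g/s)
L_acc = G * M_ns * Mdot / R_ns   # corresponding accretion luminosity (erg/s)

# Eddington luminosity and accretion rate, for comparison
L_edd = 1.26e38 * (M_ns / M_sun)       # erg/s
Mdot_edd = L_edd * R_ns / (G * M_ns)   # g/s

print(f"L_acc = {L_acc:.1e} erg/s")
print(f"Mdot/Mdot_Edd = {Mdot / Mdot_edd:.1e}")  # ~1e-4
```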
According to our model, the observed X-ray and gamma-ray emission (as
well as the matter outflow) are ultimately powered by the in-fall of
matter down to the disc truncation radius, and by the energy deposited
by the rotating magnetosphere (see Eq.~\ref{eq:mdot}). The observed
X-ray flare/dip pairs could then be produced by a sudden increase of
the rate of mass in-fall, caused by inhomogeneities of the disc
accretion flow, and a subsequent re-fill of the starved parts of the
disc, similar to the interpretation given by
\citet{demartino2010,demartino2013}. On the other hand, the observed
UV emission is larger by an order of magnitude than the contribution
of the synchrotron component of our model at those energies. To
explain the simultaneity of the flares and dips observed at X-ray and
UV energies, we should then conclude that the latter emission is
mainly due to reprocessing, in the outer parts of the disc, of the
emission at higher energies produced close to the magnetospheric
boundary.
As a concluding remark, we note that positional correspondences
between persistent and transient X-ray binaries and $\gamma$-ray
sources detected by Fermi and Agile seem intrinsically rare
\citep[e.g.][and references
therein]{ubertini2009,sguera2011,li2012b}. \citet{maselli2011}
searched for positional correspondence between sources of the second
Palermo BAT catalogue \citep{cusumano2010} and the first Fermi
catalogue \citep{abdo2010} and found only 15 galactic sources, among
which only two are LMXB, {XSS J12270--4859} and SLX 1735--269. Intriguingly,
the latter is also a faint persistent X-ray source, and on this basis was
classified by \citet{intzand2007} as a candidate ultra-compact X-ray
binary. The increase in sensitivity obtained by Fermi as its mission
progresses will help shed light on the possibility that more
$\gamma$-ray LMXB candidates exist, at least in those cases where the
accretion state is such that a steady gamma-ray emission exists.
\section*{Acknowledgments}
Work done in the framework of the grants AYA2012-39303, as well as
SGR2009-811, and iLINK2011-0303. AP is supported by a Juan de la
Cierva Research Fellowship. DFT was additionally supported by a
Friedrich Wilhelm Bessel Award of the Alexander von Humboldt
Foundation. AP thanks D. De Martino for illuminating discussions.
\bibliographystyle{mn2e}
This paper was motivated by a desire to study indecomposable objects in certain module categories for a vertex operator algebra $V$ via modules for the higher level Zhu algebras for $V$, denoted $A_n(V)$, for $n \in \mathbb{N}$. In \cite{Z}, Zhu introduced an associative algebra, which we denote by $A_0(V)$, for $V$ a vertex operator algebra. This Zhu algebra has proven to be very useful in understanding the module structure of $V$ for certain classes of modules and certain types of vertex operator algebras. In particular, in the case that the vertex operator algebra is rational, Frenkel and Zhu, in \cite{FZ}, showed that there is a bijection between the module category for a vertex operator algebra and the module category for the Zhu algebra associated with this vertex operator algebra. Subsequently, in \cite{DLM}, Dong, Li, and Mason introduced higher level Zhu algebras, $A_n(V)$ for $n \in \mathbb{Z}_+$, and proved many important fundamental results about these algebras. Dong, Li, and Mason presented several statements that generalize the results of Frenkel and Zhu from the level zero algebras to these higher level algebras, results that mainly focused on the semi-simple setting, e.g., the case of rational vertex operator algebras.
For an irrational vertex operator algebra, instead of the irreducible modules, indecomposable modules are the fundamental objects in the module category. To date, it has proven difficult to find examples of vertex operator algebras that have certain nice finiteness properties but non semi-simple representation theory, i.e., so called $C_2$-cofinite irrational vertex operator algebras, which is an important setting for logarithmic conformal field theory. And in general, in the indecomposable nonsimple setting, the correspondence between the category of such $V$-modules and the category of $A_n(V)$-modules is not well understood. In this paper we are able to obtain correspondences for certain module subcategories and begin a more systematic study of the settings for which the higher level Zhu algebras become effective tools for understanding and constructing $V$-modules.
In theory, the higher level Zhu algebras should prove to be important tools for studying indecomposable $\mathbb{N}$-gradable $V$-modules, in particular those that have an increase in the Jordan block size at degree greater than zero with respect to the $\mathbb{N}$-grading. Whereas the zero level Zhu algebra has been used, for instance in \cite{am1}, \cite{AM}, to determine that important examples, such as the $\mathcal{W}_p$ triplet vertex operator algebras, do have indecomposable modules, more information about the indecomposables for such vertex operator algebras is given by higher level Zhu algebras, including information necessary to compute the fusion rules for the module category in these non semi-simple settings, e.g. \cite{TW} (see also \cite{NT}). For $C_2$-cofinite rational vertex operator algebras, Zhu showed in \cite{Z} that the characters (reflective of the $L(0)$ eigenspaces structure)
of the irreducible modules are closed under the action of modular transformations. However, as Miyamoto showed in \cite{Miyamoto2004}, for $C_2$-cofinite irrational vertex operator algebras, pseudo-characters for indecomposable non-simple modules must be introduced and considered along with the characters in order to preserve modular invariance, where these so called pseudo-characters detect the generalized eigenspaces of the module. It is precisely the higher level Zhu algebras which give information about these pseudo-characters. However despite these important applications of higher level Zhu algebras, so far in practice, higher level Zhu algebras have not been well understood, and techniques for calculating them have not been developed. This paper is a necessary step toward these goals.
In this paper, we study two functors defined in \cite{DLM}: the functor $\Omega_n/\Omega_{n-1}$ from $\mathbb{N}$-gradable $V$-modules (also called admissible $V$-modules as in \cite{DLM}) to $A_n(V)$-modules; and the functor $L_n$ from $A_n(V)$-modules to $\mathbb{N}$-gradable $V$-modules. We investigate when the composition of these two functors is isomorphic to the identity morphism in various module categories. We show that modifications and clarifications are needed for some of the statements in \cite{DLM} to hold, and we give examples to show the necessity of these modifications and clarifications. We investigate the relationship between indecomposable modules for $A_n(V)$ and certain indecomposable modules for $V$. We present some sufficient conditions on both $V$-modules and $A_n(V)$-modules for the functors between the restricted module categories to be mutual inverses. We investigate the question of what types of $V$-modules are constructed from the induction functor $L_n$ and how the structure of $A_n(V)$, in particular as regards $A_{n-1}(V)$, affects the structure of the types of indecomposable modules that can be induced by $L_n$, including for instance when a module can be induced from an $A_n(V)$-module that was not already induced from an $A_{n-1}(V)$-module, and what size Jordan blocks for the $L(0)$ grading operator arise at different levels. We also give bounds on what degrees singular vectors can reside in for a $V$-module induced by $L_n$ from an $A_n(V)$-module.
We present $A_1(V)$ for two vertex operator algebras: the generalized Verma module vertex operator algebras for the Heisenberg and Virasoro algebras, constructed in \cite{BVY-Heisenberg} and \cite{BVY-Virasoro}, respectively. We describe the nature of the modules for these vertex operator algebras as regards the ring structure of $A_1(V)$, in particular in relationship to $A_0(V)$. We construct a family of indecomposable nonsimple modules for the Virasoro vertex operator algebra that are logarithmic modules and are not highest weight modules. We give concrete examples arising from the Virasoro vertex operator algebra motivating the need for the extra conditions we introduce as necessary conditions for the statements of some of the main theorems we prove in this paper to hold.
This paper is organized as follows. In Section 2, we give basic definitions, including the definition of the Zhu algebras $A_n(V)$, for $n \in \mathbb{N}$, the functor $\Omega_n$ from the category of $\mathbb{N}$-gradable $V$-modules to the category of $A_n(V)$-modules, and the functor $L_n$ from the category of $A_n(V)$-modules to the category of $\mathbb{N}$-gradable $V$-modules, as defined in \cite{DLM}.
In Section 3, in the case when $U$ is an $A_n(V)$-module such that {\it no nonzero submodule of} $U$ factors through $A_{n-1}(V)$, we prove in Theorem \ref{mainthm} that $\Omega_n/\Omega_{n-1}(L_n(U)) \cong U$ as $A_n(V)$-modules. The italics in the previous statement give the extra condition necessary for this statement to hold in contrast to the statement in \cite{DLM}. We prove Corollary \ref{mainthm-cor} to Theorem \ref{mainthm} that shows this extra condition is not necessary in the case when $U$ is indecomposable and $A_n(V)$ decomposes into a direct sum of $A_{n-1}(V)$ and a direct sum complement. In Section 4, we give examples to illustrate both the need for the extra condition in Theorem \ref{mainthm} for the case of $n=1$ and the Virasoro vertex operator algebra, and how this extra condition is not needed in, for instance, the case of $n=1$ for the Heisenberg vertex operator algebra, illustrating how this is predicated on the relationship between $A_1(V)$ and $A_0(V)$ in these two examples.
In Section 3, we show that the functors $L_n$ and $\Omega_n/\Omega_{n-1}$ are inverse functors when restricted to the categories of simple $A_n(V)$-modules that do not factor through $A_{n-1}(V)$ and simple $\mathbb{N}$-gradable $V$-modules {\it that are generated by their degree $n$ subspace} (or equivalently, {\it that have a nonzero degree $n$ subspace}). This is a small clarification of Theorem 4.9 of \cite{DLM} where we add the extra condition in italics in the previous statement, for the result to hold. In Section \ref{Virasoro-n-generated-example}, we give an example involving a simple module for the Virasoro vertex operator algebra to show that this extra condition is necessary for the statement to hold.
In Section 3, we show that $L_n$ sends indecomposable $A_n(V)$-modules to indecomposable $\mathbb{N}$-gradable $V$-modules. We go on to identify a certain subcategory of $\mathbb{N}$-gradable $V$-modules on which $L_n \circ \Omega_n/\Omega_{n-1}$ is the identity functor, and we give further restricted categories of $\mathbb{N}$-gradable $V$-modules and $A_n(V)$-modules, respectively, on which the functors restricted to these subcategories are mutual inverses. More importantly, we identify some characteristics of the types of $V$-modules that can be induced through $L_n$ from an $A_n(V)$-module, for instance properties of the lowest $n$ and first $2n+1$ subspaces and where singular vectors must reside for an $\mathbb{N}$-gradable $V$-module to be induced via $L_n$ from its degree $n$ subspace.
In Section \ref{examples-section}, we present the examples that illustrate the relationship between the level one Zhu algebra and the level zero Zhu algebra and the impact that their ring structures have on their module categories, as well as the necessity of certain additional assumptions for results in Section 3 to hold. The two examples we consider are the cases for the generalized Verma module vertex operator algebras associated to the Heisenberg and the Virasoro algebras, respectively. We construct indecomposable $\mathbb{N}$-gradable $V$-modules for each of these vertex operator algebras, $V$, by
presenting the level one Zhu algebra associated with $V$, denoted $A_1(V)$, as constructed in \cite{BVY-Heisenberg} and \cite{BVY-Virasoro}, respectively. For the Heisenberg vertex operator algebra $A_1(V)$ is isomorphic to $\mathbb{C}[x] \oplus \mathbb{C}[x]$. Thus the irreducible (indecomposable) modules for $A_1(V)$ are irreducible (indecomposable) modules for either the zero level Zhu algebra $A_0(V)$, which is $\mathbb{C}[\alpha(-1) \mathbf{1}] \cong \mathbb{C}[x]$, or its direct sum component (see Section \ref{Heisenberg-section} for details). In particular, in the case that $V$ is the Heisenberg vertex operator algebra,
any indecomposable $A_1(V)$-module $U$ either itself factors through $A_0(V)$ or only its trivial submodule factors through $A_0(V)$. As a result, there are no new modules induced by $L_1$ from modules for $A_1(V)$ that were not already induced by $L_0$ from $A_0(V)$-modules.
However, the level one Zhu algebra for the Virasoro vertex operator algebra is isomorphic to $\mathbb{C}[x, y]/( xy)$ as an associative algebra. Since there is a node at $(0, 0)$ on the curve $xy = 0$, the level one Zhu algebra provides indecomposable modules that do not factor through the level zero Zhu algebra which is isomorphic to $\mathbb{C}[\omega] \cong \mathbb{C}[x]$, but a nontrivial submodule does factor through $A_0(V)$. For this reason, in this case we can construct explicit indecomposable modules for the Virasoro vertex operator algebra from modules for $A_1(V)$, that are not highest weight modules, i.e. are not induced from an $A_0(V)$-module. We show that in this case the structure of $A_1(V)$ versus $A_0(V)$ gives rise to a family of modules for each $k \in \mathbb{Z}_+$ that have the property that they are decomposed by the $L(0)$ operator into generalized eigenspaces comprised of Jordan blocks of size $k$ at degree zero and size $k+1$ at degree one and which can not be induced from the level zero Zhu algebra.
The indecomposable modules for the associative algebra $\mathbb{C}[x, y]/( xy)$ were first studied and classified by Gelfand and Ponomarev \cite{GP} in order to study the representations of Lorentz groups. A study of indecomposable modules for $\mathbb{C}[x, y]/(xy )$ can be found in \cite{LS}, (cf. \cite{L}, \cite{NR}, \cite{AL}). In this paper, we present a particular class of modules for $A_1(V)$ for $V$ the Virasoro vertex operator algebra, and in \cite{BVY-Virasoro} we further study the modules of $A_1(V)$ and the resulting modules for the Virasoro vertex operator algebra.
{\bf Acknowledgements:} The first author is the recipient of Simons Foundation Collaboration Grant 282095 and greatly appreciates this support. The authors would like to thank Kiyokazu Nagatomo and the referee for helpful comments on the initial draft of this manuscript.
\section{The notion of level $n$ Zhu algebra $A_n(V)$, and the functors $\Omega_n$, $\Pi_n$, and $L_n$}
Let $V = (V, Y, {\bf 1}, \omega)$ be a vertex operator algebra. In this section, following \cite{DLM}, we recall the notion of the level $n$ Zhu algebra for $V$, denoted $A_n(V)$, for $n \in \mathbb{N}$, the functor $\Omega_n$ from the category of $\mathbb{N}$-gradable $V$-modules to the category of $A_n(V)$-modules, and the functor $L_n$ from the category of $A_n(V)$-modules to the category of $\mathbb{N}$-gradable $V$-modules. We recall several results of \cite{DLM}, introduce some notation, and prove a lemma that will be useful in proving the main results of this paper.
First, we recall the definition of the level $n$ Zhu algebra $A_n(V)$, for $n \in \mathbb{N}$, introduced in \cite{Z} for $n = 0$, and then generalized to $n >0$ in \cite{DLM}.
\begin{defn}[\cite{Z}, \cite{DLM}]{\rm
For $n \in \mathbb{N}$, let $O_n(V)$ be the subspace of $V$ spanned by elements of the form
\[ u \circ_n v =
\mbox{\rm Res}_x \frac{(1 + x)^{\mathrm{wt}\, u + n}Y(u, x)v}{x^{2n+2}}
\]
for homogeneous $u \in V$ and for $v \in V$, and by elements of the form $(L(-1) + L(0))v$ for $v \in V$. The vector space $A_n(V)$ is defined to be the quotient space $V/O_n(V)$.
We define the following multiplication on $V$
\[
u *_n v = \sum_{m=0}^n(-1)^m\binom{m+n}{n}\mbox{\rm Res}_x \frac{(1 + x)^{\mathrm{wt} \, u + n}Y(u, x)v}{x^{n+m+1}},
\]
for $v \in V$ and homogeneous $u \in V$, and for general $u \in V$, $ *_n $ is defined by linearity. It is shown in \cite{DLM} that with this multiplication, the subspace $O_n(V)$ of $V$ is a two-sided ideal of $V$, and $A_n(V)$ is an associative algebra, called the {\it level $n$ Zhu algebra}.
}
\end{defn}
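For $n = 0$, these formulas recover Zhu's original definitions from \cite{Z}: the sum defining $*_0$ collapses to the single term $m = 0$, giving

```latex
\[
u \circ_0 v = \mbox{\rm Res}_x \frac{(1 + x)^{\mathrm{wt}\, u}Y(u, x)v}{x^{2}},
\qquad
u *_0 v = \mbox{\rm Res}_x \frac{(1 + x)^{\mathrm{wt}\, u}Y(u, x)v}{x},
\]
```

for homogeneous $u \in V$ and $v \in V$.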
\begin{rema}\label{epi-remark}
{\em As noted in \cite{DLM}, we have $O_n(V) \subset O_{n-1}(V)$ for $n \in \mathbb{Z}_+$, and thus there is a natural surjective algebra homomorphism from $A_n(V)$ onto $A_{n-1}(V)$ given by $v + O_n(V) \mapsto v+ O_{n-1}(V)$. If $U$ is a module for $A_n(V)$, then $U$ is said to {\em factor through $A_{n-1}(V)$} if the kernel of the epimorphism from $A_n(V)$ onto $A_{n-1}(V)$, i.e., $O_{n-1}(V)$, acts trivially on $U$, giving $U$ a well-defined $A_{n-1}(V)$-module structure. }
\end{rema}
Next we recall the definitions of various $V$-module structures. We assume the reader is familiar with the notion of weak $V$-module for a vertex operator algebra $V$ (cf. \cite{LL}).
\begin{defn}\label{N-gradable-definition}
{\em An {\it $\mathbb{N}$-gradable weak $V$-module} (also often called an {\it admissible $V$-module} as in \cite{DLM}) $W$ for a vertex operator algebra $V$ is a weak $V$-module that is $\mathbb{N}$-gradable, $W = \coprod_{k \in \mathbb{N}} W(k)$, with $v_m W(k) \subset W(k + \mathrm{wt} v - m -1)$ for homogeneous $v \in V$, $m \in \mathbb{Z}$ and $k \in \mathbb{N}$, and without loss of generality, we can and do assume $W(0) \neq 0$, unless otherwise specified. We say elements of $W(k)$ have {\it degree} $k \in \mathbb{N}$.
An {\it $\mathbb{N}$-gradable generalized weak $V$-module} $W$ is an $\mathbb{N}$-gradable weak $V$-module that admits a decomposition into generalized eigenspaces via the spectrum of $L(0) = \omega_1$ as follows: $W=\coprod_{\lambda \in{\mathbb{C}}}W_\lambda$ where $W_{\lambda}=\{w\in W \, | \, (L(0) - \lambda \, id_W)^j w= 0 \ \mbox{for some $j \in \mathbb{Z}_+$}\}$, and in addition, $W_{n +\lambda}=0$ for fixed $\lambda$ and for all sufficiently small integers $n$. We say elements of $W_\lambda$ have {\it weight} $\lambda \in \mathbb{C}$.
A {\it generalized $V$-module} $W$ is an $\mathbb{N}$-gradable generalized weak $V$-module where $\dim W_{\lambda}$ is finite for each $\lambda \in \mathbb{C}$.
An {\it (ordinary) $V$-module} is a generalized $V$-module such that the generalized eigenspaces $W_{\lambda}$ are in fact eigenspaces, i.e. $W_{\lambda}=\{w\in W \, | \, L(0) w=\lambda w\}$.}
\end{defn}
We will often omit the term ``weak" when referring to $\mathbb{N}$-gradable weak and $\mathbb{N}$-gradable generalized weak $V$-modules.
The term {\it logarithmic} is also often used in the literature to refer to $\mathbb{N}$-gradable weak generalized modules or generalized modules.
\begin{rema}{\em An $\mathbb{N}$-gradable $V$-module with $W(k)$ of finite dimension for each $k \in \mathbb{N}$ is not necessarily a generalized $V$-module since the generalized eigenspaces might not be finite dimensional. }
\end{rema}
We define the {\it generalized graded dimension} of a generalized $V$-module $W = \coprod_{\lambda \in{\mathbb{C}}}W_\lambda$ to be
\begin{equation}
\mathrm{gdim}_q \, W = q^{-c/24} \, \sum_{\lambda \in \mathbb{C}} \, ( \mathrm{dim} \ W_\lambda) \, q^\lambda.
\end{equation}
We recall the functors $\Omega_n$ and $L_n$ for $n \in \mathbb{N}$ defined and studied in \cite{DLM}. Let $W$ be an $\mathbb{N}$-gradable $V$-module, and let
\begin{equation}
\Omega_n(W) = \{w \in W \; | \; v_iw = 0\;\mbox{if}\; \mbox{\rm wt} \, v_i < -n \;
\mbox{for $v\in V$ of homogeneous weight}\}.
\end{equation}
Here $\mathrm{wt} \, v_i$ denotes $\mathrm{wt} \, v - i - 1$, the weight of the mode $v_i$ regarded as an operator. Then $\Omega_n(W)$ is an $A_n(V)$-module, via the action $[a] \mapsto o(a) = a_{\mathrm{wt} \, a -1}$ for homogeneous $a \in V$ and $[a] = a + O_n(V)$, extended linearly.
\begin{rema} {\em The functor $\Omega_n$ from $\mathbb{N}$-gradable $V$-modules to $A_n(V)$-modules we have defined here is the same functor defined in \cite{DLM}, but is not the functor called $\Omega_n$ in \cite{V}; rather in \cite{V} the functor denoted $\Omega_n$ is just the projection functor onto the $n$th graded subspace of $W$. We will denote this projection functor by $\Pi_n$; that is for $W$ an $\mathbb{N}$-gradable $V$-module, we have $\Pi_n(W) = W(n)$. }
\end{rema}
In order to define the functor $L_n$ from the category of $A_n(V)$-modules to the category of $\mathbb{N}$-gradable $V$-modules, we need several notions, including the notion of the universal enveloping algebra of $V$, which we now define.
Let
\begin{equation}
\hat{V} = \left(\mathbb{C}[t, t^{-1}]\otimes V\right)/D\left(\mathbb{C}[t, t^{-1}]\otimes V\right),
\end{equation}
where $D = \frac{d}{dt}\otimes 1 + 1 \otimes L(-1)$. For $v \in V$, let $v(m) = v \otimes t^m + D\mathbb{C}[t, t^{-1}]\otimes V \in \hat{V}$. Then $\hat{V}$ can be given the structure of a $\mathbb{Z}$-graded Lie algebra as follows: Define the degree of $v(m)$ to be $\mbox{\rm wt} \, v - m - 1$ for homogeneous $v \in V$, and define the Lie bracket on $\hat{V}$ by
\begin{equation}\label{bracket}
[u(j), v(k)] = \sum_{i = 0}^{\infty}\binom{j}{i}(u_iv)(j+k-i),
\end{equation}
for $u, v \in V$, $j,k \in \mathbb{Z}$.
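As a standard consistency check of (\ref{bracket}), take $u = v = \omega$ and set $L(m) = \omega(m+1)$. Using $\omega_0\omega = L(-1)\omega$, $\omega_1\omega = L(0)\omega = 2\omega$, $\omega_2\omega = 0$, and $\omega_3\omega = \frac{c}{2}{\bf 1}$, together with the relations $(L(-1)v)(m) = -m\, v(m-1)$ and ${\bf 1}(m) = 0$ for $m \neq -1$, which hold in $\hat{V}$ by the definition of $D$, one finds

```latex
\begin{align*}
[L(m), L(p)] &= (L(-1)\omega)(m+p+2) + 2(m+1)\,\omega(m+p+1)
   + \binom{m+1}{3}\frac{c}{2}\,{\bf 1}(m+p-1)\\
&= (m - p)\, L(m+p) + \frac{m^3 - m}{12}\, c\, \delta_{m+p,\, 0}\, {\bf 1}(-1),
\end{align*}
```

so that $\hat{V}$ contains a copy of the Virasoro algebra, with ${\bf 1}(-1)$ playing the role of the central element.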
Denote the homogeneous subspace of degree $m$ by $\hat{V}(m)$. In particular, the degree $0$ space of $\hat{V}$, denoted by $\hat{V}(0)$, is a Lie subalgebra.
Denote by $\mathcal{U}(\hat{V})$ the universal enveloping algebra of the Lie algebra $\hat{V}$. Then $\mathcal{U}(\hat{V})$ has a natural $\mathbb{Z}$-grading induced from $\hat{V}$, and we denote by $\mathcal{U}(\hat{V})_k$ the degree $k$ space with respect to this grading, for $k \in \mathbb{Z}$.
Given a weak $V$-module $W$, consider the following linear map
\begin{eqnarray}\label{defining-phi}
\varphi_W : \ \ \ \ \mathcal{U}(\hat{V}) &\longrightarrow & \mathrm{End} (W)\\
v_1(m_1)v_2(m_2) \cdots v_k(m_k) & \mapsto & (v_1^W)_{m_1}(v_2^W)_{m_2} \cdots (v_k^W)_{m_k} ,\nonumber
\end{eqnarray}
where $v^W_m$ is the coefficient of $x^{-m-1}$ in the vertex operator action of $Y_W(v,x)$ on the module $W$. We will often denote $\varphi_W$ by $\varphi$ and $v^W_m$ by $v_m$ if the module is $V$ itself or if $W$ is clearly implied.
We shall need the following lemma in the proof of our main theorem:
\begin{lemma}\label{l1}
We have
\begin{equation}\label{split}
o(O_n(V)) \subseteq \coprod_{i > n} \varphi(\mathcal{U}(\hat{V})_i) \varphi (\mathcal{U}(\hat{V})_{-i}).
\end{equation}
\end{lemma}
{\it Proof.}\hspace{2ex} By the $L(-1)$-derivative property in $V$, we have that $o(L(-1)v) = -o(L(0)v)$, and thus $o((L(-1) + L(0))v) = 0$, showing (\ref{split}) holds trivially for the elements of the form $(L(-1)+L(0))v$ in $O_n(V)$.
Next, for homogeneous $u, v \in V$, we have
\begin{eqnarray}
o \left(\mbox{\rm Res}_x \frac{(1 + x)^{\mathrm{wt} \, u+n}Y(u, x)v}{x^{2n+2}} \right) \nonumber
&=& \sum_{j \in \mathbb{N}} \binom{\mathrm{wt} \, u + n}{j} o(u_{j-2n-2}v)\nonumber \\
&= & \sum_{j \in \mathbb{N}} \binom{\mathrm{wt} \, u + n}{j}\mbox{\rm Res}_{x_2}\mbox{\rm Res}_{x_1-x_2} Y(Y(u, x_1-x_2)v, x_2)\nonumber \\
&& \quad (x_1-x_2)^{j-2n-2}x_2^{\mathrm{wt} \, u + \mathrm{wt} \, v -j+2n}\nonumber \\
&= & \mbox{\rm Res}_{x_2}\mbox{\rm Res}_{x_1-x_2} Y(Y(u, x_1-x_2)v, x_2)\frac{x_1^{\mathrm{wt} \, u + n} x_2^{\mathrm{wt} \, v+n}}{(x_1-x_2)^{2n+2}}\nonumber \\
&= & \mbox{\rm Res}_{x_1}\mbox{\rm Res}_{x_2} Y(u, x_1)Y(v, x_2)\frac{x_1^{\mathrm{wt} \, u + n}x_2^{\mathrm{wt} \, v+n}}{(x_1-x_2)^{2n+2}}\nonumber \\
&& - \mbox{\rm Res}_{x_1}\mbox{\rm Res}_{x_2} Y(v, x_2)Y(u, x_1)\frac{x_1^{\mathrm{wt} \, u + n}x_2^{\mathrm{wt}\, v+n}}{(-x_2+x_1)^{2n+2}}, \label{lemma-proof}
\end{eqnarray}
where the last equality follows from the Jacobi identity on $V$.
The first term on the right hand side of (\ref{lemma-proof}) satisfies
\begin{eqnarray*}
\lefteqn{\mbox{\rm Res}_{x_1}\mbox{\rm Res}_{x_2} Y(u, x_1)Y(v, x_2)\frac{x_1^{\mathrm{wt} \, u + n}x_2^{\mathrm{wt} \, v+n}}{(x_1-x_2)^{2n+2}}}\nonumber \\
&= &\sum_{j \in \mathbb{N}}(-1)^{j}\binom{-2n-2}{j}\mbox{\rm Res}_{x_1}\mbox{\rm Res}_{x_2} Y(u, x_1)Y(v, x_2) x_1^{\mathrm{wt} \, u -n-2-j}x_2^{\mathrm{wt} \, v+n+j}\nonumber \\
&= & \sum_{j \in \mathbb{N}}(-1)^{j}\binom{-2n-2}{j} u_{\mathrm{wt} \, u -n-2-j}v_{\mathrm{wt} \, v+n+j}\nonumber \\
&=& \sum_{k\leq -n-1} (-1)^{n+k+1} \binom{-2n-2}{-n-k-1} u_{\mathrm{wt} \, u +k -1}v_{\mathrm{wt} \, v -k -1}\nonumber \\
&\in & \coprod_{i > n}\varphi(\mathcal{U}(\hat{V})_i) \varphi(\mathcal{U}(\hat{V})_{-i}).
\end{eqnarray*}
Similarly, the second term of the right hand side of (\ref{lemma-proof}) satisfies
\begin{eqnarray*}
\lefteqn{\mbox{\rm Res}_{x_1}\mbox{\rm Res}_{x_2} Y(v, x_2)Y(u, x_1)\frac{x_1^{\mathrm{wt} \, u + n}x_2^{\mathrm{wt} \, v+n}}{(-x_2 + x_1)^{2n+2}}}\nonumber \\
&= &\sum_{j \in \mathbb{N}}(-1)^{j}\binom{-2n-2}{j}\mbox{\rm Res}_{x_1}\mbox{\rm Res}_{x_2} Y(v, x_2)Y(u, x_1) x_1^{\mathrm{wt} \, u +n + j}x_2^{\mathrm{wt} \, v-n-j-2}\nonumber \\
&= & \sum_{j \in \mathbb{N}}(-1)^{j}\binom{-2n-2}{j} v_{\mathrm{wt} \, v -n-2-j}u_{\mathrm{wt} \, u+n+j}\nonumber \\
&=& \sum_{k\leq -n-1} (-1)^{n+k+1} \binom{-2n-2}{-n-k-1} v_{\mathrm{wt}\, v +k -1}u_{\mathrm{wt}\, u -k -1}\nonumber \\
&\in & \coprod_{i > n}\varphi(\mathcal{U}(\hat{V})_i) \varphi(\mathcal{U}(\hat{V})_{-i}).
\end{eqnarray*}
\hspace*{\fill}\mbox{$\halmos$}\\
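As a simple illustration of Lemma \ref{l1} (included only for orientation), consider the case $n = 0$: then (\ref{split}) reads
\[
o(O_0(V)) \subseteq \coprod_{i > 0} \varphi(\mathcal{U}(\hat{V})_i)\varphi(\mathcal{U}(\hat{V})_{-i}),
\]
so that for a weak $V$-module $W$, the operators $o(v)$ for $v \in O_0(V)$ annihilate any vector of $W$ that is annihilated by $\varphi(\mathcal{U}(\hat{V})_{-i})$ for all $i > 0$; this is the mechanism underlying the classical fact that $\Omega_0(W)$ is a module for the Zhu algebra $A_0(V)$.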
We can regard $A_n(V)$ as a Lie algebra via the bracket $[u,v] = u *_n v - v *_n u$, and then the map $v( \mathrm{wt} \, v -1) \mapsto v + O_n(V)$ is a well-defined Lie algebra epimorphism from $\hat{V}(0)$ onto $A_n(V)$.
Let $U$ be an $A_n(V)$-module. Since $A_n(V)$ is naturally a Lie algebra homomorphic image of $\hat{V}(0)$, we can lift $U$ to a module for the Lie algebra $\hat{V}(0)$, and then to a module for $P_n = \bigoplus_{p > n}\hat{V}(-p) \oplus \hat{V}(0) = \bigoplus_{p < -n} \hat{V}(p) \oplus \hat{V}(0)$ by letting $\hat{V}(-p)$ act trivially for $p > n$. Define
\[
M_n(U) = \mbox{Ind}_{P_n}^{\hat{V}}(U) = \mathcal{U}(\hat{V})\otimes_{\mathcal{U}(P_n)}U.
\]
We impose a grading on $M_n(U)$ with respect to $U$, $n$, and the $\mathbb{Z}$-grading on $\mathcal{U}(\hat{V})$, by letting $U$ have degree $n$ and letting $M_n(U)(k)$, for $k \in \mathbb{Z}$, be the subspace of $M_n(U)$ induced from the grading on $\mathcal{U}(\hat{V})$, i.e., $M_n(U)(k) = \mathcal{U}(\hat{V})_{k-n}U$.
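Concretely, under the convention that $v(m)$ has degree $\mathrm{wt} \, v - m - 1$ in $\hat{V}$ for homogeneous $v \in V$, this grading satisfies
\[
\mathrm{deg} \left( v_1(m_1) \cdots v_k(m_k) \, u \right) = n + \sum_{i=1}^{k} (\mathrm{wt} \, v_i - m_i - 1)
\]
for $u \in U$; in particular, $U$ itself sits in degree $n$, and each operator $o_p(a) = a(\mathrm{wt} \, a - 1 - p)$ shifts degree by $p$.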
For $v \in V$, define $Y_{M_n(U)}(v,x) \in (\mathrm{End} (M_n(U)))[[x, x^{-1}]]$ by
\begin{equation}\label{define-Y_M}
Y_{M_n(U)}(v,x) = \sum_{m\in\mathbb{Z}} v(m) x^{-m-1}.
\end{equation}
Let $W_{A}$ be the subspace of $M_n(U)$ spanned by the coefficients of
\begin{multline}\label{relations-for-M}
(x_0 + x_2)^{\mathrm{wt} \, v + n} Y_{M_n(U)}(v, x_0 + x_2) Y_{M_n(U)}(w, x_2) u \\
- (x_2 + x_0)^{\mathrm{wt} \, v + n} Y_{M_n(U)}(Y(v, x_0)w, x_2) u
\end{multline}
for $v,w \in V$, with $v$ homogeneous, and $u \in U$. Set
\[ \overline{M}_n(U) = M_n(U)/\mathcal{U} (\hat{V})W_A .\]
It is shown in \cite{DLM} that if $U$ is an $A_n(V)$-module that does not factor through $A_{n-1}(V)$, then $\overline{M}_n(U) = \bigoplus_{k \in \mathbb{N}} \overline{M}_n(U) (k)$ is an $\mathbb{N}$-gradable $V$-module with $\overline{M}_n(U) (0)\neq 0$ and $\overline{M}_n(U) (n) \cong U$ as an $A_n(V)$-module. Note that the condition that $U$ itself does not factor through $A_{n-1}(V)$ is indeed a necessary and sufficient condition for $\overline{M}_n(U) (0)\neq 0$ to hold.
It is also observed in \cite{DLM} that $\overline{M}_n(U)$ satisfies the following universal property: For any weak $V$-module $M$ and any $A_n(V)$-module homomorphism $\phi: U \longrightarrow \Omega_n(M)$, there exists a unique weak $V$-module homomorphism $\Phi: \overline{M}_n(U) \longrightarrow M$ such that $\Phi \circ \iota = \phi$, where $\iota$ is the natural injection of $U$ into $\overline{M}_n(U)$. This follows from the fact that $\overline{M}_n(U)$ is generated by $U$ as a weak $V$-module, again possibly after a shift of grading.
Let $U^* = \mbox{Hom}(U, \mathbb{C})$. As in the construction in \cite{DLM}, we can extend $U^*$ to $M_n(U)$ by first defining its action on $M_n(U)(n)$ inductively, and then letting $U^*$ annihilate $\bigoplus_{k \neq n} M_n(U)(k)$. In particular, $M_n(U)(n) = \mathcal{U}(\hat{V})_0U$ is spanned by elements of the form
\[o_{p_1}(a_1) \cdots o_{p_s}(a_s)u\]
where $s \in \mathbb{N}$, $u \in U$, $p_1 \geq \cdots \geq p_s$, $p_1 + \cdots + p_s =0$, $p_i \neq 0$, $p_s \geq -n$, $a_i \in V$, and $o_{p_i}(a_i) = (a_i)(\mathrm{wt} \, a_i - 1 - p_i)$. Then by induction on $s$, using Remark 3.3 in \cite{DLM} to reduce vectors of length $s$ to vectors of length $s-1$, we obtain a well-defined action of $U^*$ on $M_n(U)(n)$.
Set
\[
J = \{v \in M_n(U) \, | \, \langle u', xv\rangle = 0 \;\mbox{for all}\; u' \in U^{*}, x \in \mathcal{U}(\hat{V})\}
\]
and
\[
L_n(U) = M_n(U)/J.
\]
\begin{rema}\label{L-a-V-module-remark} {\em It is shown in \cite{DLM}, Propositions 4.3, 4.6 and 4.7, that if $U$ does not factor through $A_{n-1}(V)$, then $L_n(U)$ is a well-defined $\mathbb{N}$-gradable $V$-module with $L_n(U)(0) \neq 0$; in particular, it is shown that $\mathcal{U}(\hat{V})W_A \subset J$, for $W_A$ the subspace of $M_n(U)$ spanned by the coefficients of (\ref{relations-for-M}), i.e., giving the associativity relations for the weak vertex operators on $M_n(U)$.}
\end{rema}
\section{Main results}
We have the following theorem, which is a necessary modification of what was presented as Theorem 4.2 in \cite{DLM}:
\begin{thm}\label{mainthm}
For $n \in \mathbb{N}$, let $U$ be a nonzero $A_n(V)$-module such that if $n>0$, then $U$ does not factor through $A_{n-1}(V)$. Then $L_n(U)$ is an $\mathbb{N}$-gradable $V$-module with $L_n(U)(0) \neq 0$. If we assume further that there is no nonzero submodule of $U$ that factors through $A_{n-1}(V)$, then $\Omega_n/\Omega_{n-1}(L_n(U)) \cong U$.
\end{thm}
{\it Proof.}\hspace{2ex} For $n \in \mathbb{N}$, let $U$ be a nonzero $A_n(V)$-module such that if $n>0$, then $U$ does not factor through $A_{n-1}(V)$. By Remark \ref{L-a-V-module-remark}, $L_n(U)$ is a well-defined $\mathbb{N}$-gradable $V$-module with $L_n(U)(0) \neq 0$, and
\begin{equation}\label{in-J}
\mathcal{U}(\hat{V})W_A \subset J.
\end{equation}
We have $U \subset M_n(U)(n) \subset M_n(U)$, and for $u \in U$, we have $\langle U^*, u \rangle = 0$ if and only if $u = 0$. Therefore $J \cap U = 0$, implying that $U$ is naturally isomorphic to a subspace of $L_n(U)$, namely $U + J$. We denote $u + J$ by $\bar{u}$ for $u \in U$.
Since $U$ is an $A_n(V)$-module, we have that $o(v) \cdot U = 0$ for all $v \in O_n(V)$, and thus $\bar{u} \in \Omega_n(L_n(U))$ for all $u \in U$. But
\[(U + J)\cap \Omega_{n-1}(L_n(U)) = \{u + J \; | \; u \in U \ \mathrm{and} \ v_i (u +J) = 0 \ \mathrm{if} \ \mathrm{wt} \, v_i < -n + 1 \}\]
is annihilated by $\mathcal{U}(\hat{V})_{-i}$ for $i > n-1$ and therefore, by Lemma \ref{l1}, is annihilated by the action of $o(O_{n-1}(V))$, implying that $(U + J) \cap \Omega_{n-1}(L_n(U))$ is an $A_n(V)$-submodule of $U +J$ that factors through $A_{n-1}(V)$.
Now assume that $U$ has no nonzero submodule that factors through $A_{n-1}(V)$. This, together with the fact that $U\cap J = 0$, implies that $(U + J)\cap \Omega_{n-1}(L_n(U)) = 0$. Therefore there is an injection of $A_n(V)$-modules $\iota: U \hookrightarrow \Omega_n/\Omega_{n-1} (L_n(U))$ given by $u \mapsto \bar{u} + \Omega_{n-1}(L_n(U))$. In particular, this implies that $L_n(U) (0) \neq 0$ since $U$ is assumed to be nonzero.
Next we show the injection $\iota$ is surjective, i.e., if $\bar{w} + \Omega_{n-1}(L_n(U)) \in \Omega_n/\Omega_{n-1} (L_n(U))$, for $\bar{w} \in \Omega_n(L_n(U))$, then there exists $u \in U$ such that $\bar{u} = \bar{w} + v$ for some $v \in \Omega_{n-1} (L_n(U))$, i.e., $\iota(u) = \bar{w} + \Omega_{n-1} (L_n(U))$. For convenience, we denote the coset $\bar{w} + \Omega_{n-1} (L_n(U))$ by $\hat{w}$.
Let $\bar{w} \in \Omega_n(L_n(U))$, and let $w \in M_n(U)$ be a preimage of $\bar{w}$ under the canonical projection.
If $\mathrm{deg} \, w < n$, then for $v \in V$ with $\mbox{\rm wt} \, v_i < -n + 1$, we have $\mathrm{deg} \, v_i w < 0$ and thus $v_i w = 0$, so that $\bar{w} \in \Omega_{n-1}(L_n(U))$ and $\iota(0) = \hat{0} = \hat{w}$.
For the case when $\mathrm{deg} \, w = n$, by (\ref{in-J}) there is a natural surjection
\begin{eqnarray}
\pi : \overline{M}_n(U) = M_n(U)/ \mathcal{U}(\hat{V})W_A &\longrightarrow & L_n(U) = M_n(U)/J \\
w + \mathcal{U}(\hat{V})W_A & \mapsto & w +J. \nonumber
\end{eqnarray}
This coupled with the fact that $\overline{M}_n(U)(n) \cong U$ implies that if $\mathrm{deg} \, w = n$, then there exists a $u \in U$ such that $u + \mathcal{U}(\hat{V})W_A = w + \mathcal{U}(\hat{V})W_A \stackrel{\pi}{\mapsto} w + J = \bar{w} \in \Omega_n(L_n(U))$, proving that $\iota(u) = \bar{w} + \Omega_{n-1}(L_n(U))$.
Finally, if $\mathrm{deg} \, w >n$, we will show that $\hat{w} = \hat{0}$ and thus $\iota(0) = \hat{w}$. Suppose not, i.e., $\hat{w} \neq \hat{0}$. Then in particular $w \notin J$, and thus there exists $x \in \mathcal{U}(\hat{V})_{n - \mathrm{deg}\, w}$ such that $\langle u', xw \rangle \neq 0$ for some $u' \in U^*$. Consider the $A_n(V)$-module generated by $xw \in M_n(U)$, defined as follows:
For $v + O_n(V) \in A_n(V)$ and $u \in U$, let $(v + O_n(V)) \cdot u = v(\mathrm{wt} \, v - 1) u$. Under this action, let $\overline{U} = A_n(V) \cdot xw$. By Lemma \ref{l1}, $o(O_n(V)) \cdot \overline{U} \subseteq \coprod_{i >n} \varphi(\mathcal{U}(\hat{V})_i )\varphi(\mathcal{U}(\hat{V})_{-i}) \cdot\overline{U} = \coprod_{i >n} \mathcal{U}(\hat{V})_i \mathcal{U}(\hat{V})_{-i} \overline{U} = 0$. Similarly, Lemma \ref{l1} implies that $o(O_{n-1}(V)) \cdot \overline{U} \subseteq \coprod_{i >n-1} \mathcal{U}(\hat{V})_i \mathcal{U}(\hat{V})_{-i} \overline{U} = \mathcal{U}(\hat{V})_n \mathcal{U}(\hat{V})_{-n} \overline{U}$. But $\hat{w} \neq \hat{0}$ in $\Omega_n/\Omega_{n-1} (L_n(U))$ also implies, in particular, that $v_i (w + J) = 0$ if $\mbox{\rm wt} \, v_i < -n$. Noting that $\mathcal{U}(\hat{V})_{-n} \cdot \overline{U}$ is spanned by elements of the form
\[v_i o(v) x w, \qquad \mbox{for } v_i \in \mathcal{U}(\hat{V})_{-n}, \ v \in V \ \mbox{homogeneous, and} \ x \in \mathcal{U}(\hat{V})_{n - \mathrm{deg} \, w},\]
so that $\mbox{\rm wt} \, v_i o(v) x = - \mathrm{deg} \, w < -n$, we must have that $\mathcal{U}(\hat{V})_{-n} \cdot \overline{U} = 0$. Therefore
$o(O_{n-1}(V)) \cdot \overline{U} \subseteq \mathcal{U}(\hat{V})_n \mathcal{U}(\hat{V})_{-n} \cdot \overline{U} = 0$, implying that $\overline{U} = A_n(V) \cdot xw$ is a submodule of $U$ that factors through $A_{n-1}(V)$. By assumption then, we must have that $xw = 0$, and thus $w \in J$, implying that $\iota(0) = \hat{0} = \hat{w}$.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
Note that it is trivially true that $\Pi_n (L_n(U)) \cong U$ as was observed in \cite{V} (where $\Pi_n$ in \cite{V} is denoted by $\Omega_n$).
Theorem 4.2 in \cite{DLM} only imposes the condition on $U$ that it be an $A_n(V)$-module that itself does not factor through $A_{n-1}(V)$ in order for $\Omega_n/\Omega_{n-1}(L_n(U)) \cong U$ to hold,
rather than the condition that no nonzero submodule of $U$ factor through $A_{n-1}(V)$. In Section \ref{Virasoro-factor-through-example}, we give an example showing that this stronger condition is necessary; the example is based on a module for the Virasoro vertex operator algebra and its level one Zhu algebra. In Section \ref{Heisenberg-section}, we show that this extra condition is not needed at level $n=1$ for the Heisenberg vertex operator algebra. Furthermore, in these examples we observe why the two cases differ, in terms of how the structure of $A_1(V)$ relates to that of $A_0(V)$ for the Heisenberg versus the Virasoro vertex operator algebra.
One of the main reasons we are interested in Theorem \ref{mainthm} is what it implies for the question of when modules for the higher level Zhu algebras give rise to indecomposable nonsimple modules for $V$. In \cite{V} it is claimed that if $A_n(V)$ is a finite dimensional semisimple algebra for all $n \in \mathbb{N}$, then $V$ is rational. But we are interested in the irrational case and in particular when $A_n(V)$ can be used to construct certain indecomposable nonsimple modules for $V$ of interest in the irrational setting, particularly in the $C_2$-cofinite case.
To this end, we have the following two corollaries to Theorem \ref{mainthm}:
\begin{cor}\label{mainthm-first-cor} Suppose that for some fixed $n \in \mathbb{Z}_+$, $A_n(V)$ has a direct sum decomposition $A_n(V) \cong A_{n-1}(V) \oplus A'_n(V)$, for $A'_n(V)$ a direct sum complement to $A_{n-1}(V)$, and let $U$ be an $A_n(V)$-module. If $U$ is trivial as an $A_{n-1}(V)$-module, then $\Omega_n/\Omega_{n-1} (L_n(U) ) \cong U$.
\end{cor}
{\it Proof.}\hspace{2ex} If $A_n(V) \cong A_{n-1}(V) \oplus A'_n(V)$, then any $A_n(V)$-module $U$ decomposes as $U = U_{n-1} \oplus U'$, where $U_{n-1}$ is an $A_{n-1}(V)$-module and $U'$ is an $A'_n(V)$-module. The module $U$ has no nonzero submodule that factors through $A_{n-1}(V)$ if and only if $U_{n-1} = 0$, i.e., if and only if $U$ is trivial as an $A_{n-1}(V)$-module, in which case Theorem \ref{mainthm} implies $\Omega_n/\Omega_{n-1} (L_n(U) ) \cong U$.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
An example of this setting is given in Section 4, below, namely that of the Heisenberg vertex operator algebra and the level one Zhu algebra.
\begin{cor}\label{mainthm-cor}
Let $n \in \mathbb{Z}_+$ be fixed, and let $U$ be a nonzero indecomposable $A_n(V)$-module such that no nonzero submodule of $U$ factors through $A_{n-1}(V)$. Then $L_n(U)$ is a nonzero indecomposable $\mathbb{N}$-gradable $V$-module generated by its degree $n$ subspace, with $L_n(U) (n) \cong U$, and satisfying
\[\Omega_n/\Omega_{n-1} (L_n(U)) \cong L_n(U)(n) \cong U\]
as $A_n(V)$-modules.
Furthermore if $U$ is a simple $A_n(V)$-module, then $L_n(U)$ is a simple $V$-module as well.
\end{cor}
{\it Proof.}\hspace{2ex} Suppose
\[
L_n(U) = W_1 \oplus W_2,
\]
where $W_1$ and $W_2$ are nonzero $\mathbb{N}$-gradable $V$-modules. Then by Theorem \ref{mainthm}, and linearity, we have that
\begin{eqnarray*}
U & \cong & \Omega_n/\Omega_{n-1}(L_n(U)) \\
&=& \Omega_n/\Omega_{n-1}(W_1 \oplus W_2)\\
&=& \Omega_n/\Omega_{n-1}(W_1) \oplus \Omega_n/\Omega_{n-1}(W_2).
\end{eqnarray*}
Since $U$ is indecomposable, $\Omega_n/\Omega_{n-1}(W_i) = 0$ for $i = 1$ or $2$. Without loss of generality assume that $\Omega_n/\Omega_{n-1}(W_1) = 0$. Then in
$L_n(U)$, we have that $0 \neq (U + J) \cap W_1(n) \subset (U +J) \cap \Omega_n(W_1) = (U + J) \cap \Omega_{n-1}(W_1)$, which is an $A_{n-1}(V)$-module. Thus $\{ u \in U \; | \; u + J \in W_1(n)\} \subset U$ is a nonzero submodule of $U$ that factors through $A_{n-1}(V)$, contradicting our assumption. Thus it follows that either $W_1$ or $W_2$ is zero, and $L_n(U)$ is indecomposable.
It is obvious that $L_n(U)$ is generated by $L_n(U) (n) \cong U$ and the fact that this is isomorphic as an $A_n(V)$-module to $\Omega_n/\Omega_{n-1}(L_n(U))$ follows directly from Theorem \ref{mainthm}.
Finally we observe that the proof of Lemma 4.8 in \cite{DLM} holds, proving the last statement.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
\begin{defn}\label{catasso}
For $n \in \mathbb{N}$, denote by $\mathcal{A}_{n,n-1}$ the category of $A_n(V)$-modules that have no nonzero submodule that factors through $A_{n-1}(V)$.
\end{defn}
\begin{defn}\label{V-category-definition}
For $n \in \mathbb{N}$, denote by $\mathcal{V}_n$ the category of weak $V$-modules whose objects $W$ satisfy: $W$ is $\mathbb{N}$-gradable with $W(0) \neq 0$; and $W$ is generated by $W(n)$.
\end{defn}
With these definitions, Theorem \ref{mainthm} shows that $\Omega_n/\Omega_{n-1} \circ L_n$ is the identity functor on the category $\mathcal{A}_{n,n-1}$, and Corollary \ref{mainthm-cor} shows that $L_n$ sends indecomposable objects in $\mathcal{A}_{n,n-1}$ to indecomposable objects in $\mathcal{V}_n$. Natural questions that arise are: What subcategory of $V$-modules does one need to restrict to so that these functors are inverses of each other on this restricted category? And, more importantly, what can be said about the correspondences between the subcategories of simple versus indecomposable objects? Below we further investigate these questions.
\begin{rema}{\em
Theorem 4.9 of \cite{DLM} states that the functors $L_n$ and $\Omega_n/\Omega_{n-1}$ induce mutually inverse bijections between the isomorphism classes of simple objects in the category $\mathcal{A}_{n,n-1}$ and the isomorphism classes of simple objects in the category of $\mathbb{N}$-gradable $V$-modules. However it is necessary to add the condition that the degree $n$ subspace be nonzero for this to hold, as our example in Section \ref{Virasoro-n-generated-example} shows. That is, even in the case of simple objects, if the $\mathbb{N}$-gradable $V$-module is not generated by its degree $n$ subspace (e.g., if the degree $n$ subspace is zero), then the bijection can fail. With this small and obvious addition, we have the theorem below---a clarification of Theorem 4.9 in \cite{DLM}; see also \cite{V}. }
\end{rema}
\begin{thm}\label{simple-theorem}
$L_n$ and $\Omega_n/\Omega_{n-1}$ are equivalences when restricted to the full subcategories of completely reducible $A_n(V)$-modules whose irreducible components do not factor through $A_{n-1}(V)$ and completely reducible $\mathbb{N}$-gradable $V$-modules that are generated by their degree $n$ subspace (or equivalently, that have a nonzero degree $n$ subspace), respectively. In particular, $L_n$ and $\Omega_n/\Omega_{n-1}$ induce mutually inverse bijections on the isomorphism classes of simple objects in the category $\mathcal{A}_{n, n-1}$ and the isomorphism classes of simple objects in $\mathcal{V}_n$.
\end{thm}
\begin{rema}
{\em We note that Theorem 4.10 of \cite{DLM} in the case of $V$ rational (i.e., in the semisimple setting) also must be modified accordingly, with the extra condition that the category of $\mathbb{N}$-gradable $V$-modules must be restricted to the subcategory of modules generated by their degree $n$ subspace in order for the statement of the theorem to hold; see also \cite{V}, where in the rational setting $\Omega_n/\Omega_{n-1}$ can be replaced by $\Pi_n$ in many of the statements of \cite{DLM}. }
\end{rema}
We have the following result which gives a stronger statement than Corollary \ref{mainthm-cor} and a way of constructing indecomposable $\mathbb{N}$-gradable $V$-modules, which we employ in Section \ref{Virasoro-factor-through-example}:
\begin{prop} Let $U$ be an indecomposable $A_n(V)$-module that does not factor through $A_{n-1}(V)$. Then $L_n(U)$ is an indecomposable $\mathbb{N}$-gradable $V$-module generated by its degree $n$ subspace.
Furthermore, if $U$ is finite dimensional, then $L_n(U)$ is an indecomposable $\mathbb{N}$-gradable generalized $V$-module.
\end{prop}
{\it Proof.}\hspace{2ex} Suppose $L_n(U) = W_1 \oplus W_2$, where $W_1$ and $W_2$ are nonzero $\mathbb{N}$-gradable submodules of $L_n(U)$. Then $U +J = L_n(U)(n) = W_1(n) \oplus W_2(n)$.
Since $U$ is indecomposable, we have that either $U +J = W_1(n)$ or $U +J = W_2(n)$. Without loss of generality, assume that $U +J = W_2(n)$. Then $ W_1 \cap (U +J) = 0$. Let $\widetilde{W_1}$ be the preimage of $W_1$ in $M_n(U)$. Then $\widetilde{W_1} \cap U = 0$, and hence $\widetilde{W_1}(n)\subset J$. Then since $L_n(U)$ is generated by $L_n(U)(n)$, we have that $W_1$ is generated by $W_1(n)$, and thus we have $\widetilde{W}_1 \subset J$. Therefore $W_1 = \widetilde{W_1}/J = 0$, contradicting the assumption that $W_1$ was nonzero, and proving that $L_n(U)$ is indecomposable.
Now assume that $U$ is finite dimensional. Since $L(0) = o(\omega)$ preserves $U$, and $U$ is finite dimensional, we have that $U +J = L_n(U)(n)$ can be decomposed into a direct sum of generalized eigenspaces for $L(0)$. However, since $\omega$ is in the center of $A_n(V)$, the distinct generalized eigenspaces of $U$ with respect to $L(0)$ are distinct $A_n(V)$-submodules of $U$. Therefore, since $U$ is indecomposable, there exists $\lambda \in \mathbb{C}$ and $j \in \mathbb{Z}_+$ such that in $L_n(U)$, we have $(L(0) - \lambda \, id_{U+J})^j (U+J)= 0$.
Then $L_n(U)$ has a $\mathbb{C}$-grading with respect to the eigenvalues of $L(0)$ induced by $M_n(U)$ and the eigenvalue $\lambda$ of $L(0)$ on $U$, given by $M_n(U)(k) = M_n(U)_{\lambda -n + k}$, proving that $L_n(U)$ is an $\mathbb{N}$-gradable generalized $V$-module.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
In general $\Omega_n/\Omega_{n-1}$ will not send indecomposable objects in $\mathcal{V}_{n}$ to indecomposable $A_n(V)$-modules, nor, for that matter, will $\Pi_n$. And so we have the following questions: If $W$ is an indecomposable object in $\mathcal{V}_{n}$, when are $W(n) = \Pi_n W$ and $\Omega_n/\Omega_{n-1}(W)$ indecomposable $A_n(V)$-modules? Furthermore, and more importantly for our purposes, what types of indecomposable modules can be constructed from the functor $L_n$? We begin to answer some of these questions below.
It is easy to see the following:
\begin{prop}
Let $W$ be an indecomposable object in $\mathcal{V}_n$. Then the $A_n(V)$-module $W(n)$ cannot be decomposed into a direct sum of nonzero subspaces $U_1$ and $U_2$ such that $\langle U_1\rangle \cap \langle U_2\rangle = 0$, where $\langle U_i\rangle$ denotes the $V$-submodule of $W$ generated by $U_i$, for $i = 1, 2$.
\end{prop}
{\it Proof.}\hspace{2ex} Suppose instead that $W(n) = U_1 \oplus U_2$ for nonzero subspaces $U_1$ and $U_2$ such that $\langle U_1\rangle \cap \langle U_2\rangle = 0$. Since $W$ is generated by $W(n)$, we have $W = \langle U_1\rangle \oplus \langle U_2\rangle$, contradicting the assumption that $W$ is indecomposable. \hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
We now consider the question of when $L_n( \Omega_n/\Omega_{n-1} (W)) \cong W$ for $W$ an $\mathbb{N}$-gradable $V$-module. It is clear that we must at least have that $W$ is an object in the category $\mathcal{V}_n$, i.e., $W$ must be generated by its degree $n$ subspace $W(n)$. However, in general we have that
\[ \bigoplus_{k = 0}^n W(k) \subset \Omega_n(W) \]
with equality holding if, for instance, $W$ is simple. In Section \ref{Virasoro-factor-through-example}, we give an example, that of the Virasoro vertex operator algebra, to show that in the indecomposable case equality will not necessarily hold. But we do have the following sufficient criterion, where below we denote the cyclic submodule of $W$ generated by $w\in W$ by $Vw$:
\begin{thm}\label{last-thm}
Let $W$ be an $\mathbb{N}$-gradable $V$-module that is generated by $W(n)$ such that $\Omega_j(W) = \bigoplus_{k=0}^j W(k)$, for $j = n$ and $n-1$. Then $L_n(\Omega_n/\Omega_{n-1}(W))$ is naturally isomorphic to a quotient of $W$.
Furthermore, suppose $W$ also satisfies the property that for any $w \in W$, $w =0$ if and only if $V w \cap W(n) = 0$. Then
\[L_n(\Omega_n/\Omega_{n-1}(W) ) \cong W.\]
\end{thm}
{\it Proof.}\hspace{2ex}
Consider the $A_n(V)$-module injection from $W(n)$ into $\Omega_n(W) = \bigoplus_{k = 0}^n W(k)$. Then by the universal property of $\overline{M}_n(W(n))$, and since $W$ is generated by $W(n)$, there exists a unique $V$-module surjection
\begin{eqnarray*}
\Phi : \overline{M}_n(W(n)) &\longrightarrow& W\\
xw + \mathcal{U}(\hat{V})W_A &\mapsto& \varphi(x) w
\end{eqnarray*}
for $x \in \mathcal{U}(\hat{V})$ and $w \in W(n)$, where $\varphi$ is defined in (\ref{defining-phi}).
Letting $\overline{J} = J/\mathcal{U}(\hat{V})W_A$, where
\[J = \{ w \in M_n(W(n)) \, | \, \langle u', xw \rangle = 0 \mbox{ for all $u' \in W(n)^*$, and $x \in \mathcal{U}(\hat{V})$} \}, \]
we have that $\overline{J}$ is naturally an $\mathbb{N}$-gradable $V$-submodule of $\overline{M}_n(W(n))$. Then $\mathrm{ker} \, \Phi \subset \overline{J}$, and thus
\[L_n(\Omega_n/\Omega_{n-1} (W)) \cong L_n(W(n)) = M_n(W(n))/J \cong \overline{M}_n(W(n))/\overline{J}\]
is an $\mathbb{N}$-gradable $V$-module quotient of $W \cong \overline{M}_n(W(n)) /(\mathrm{ker} \, \Phi)$ by $\overline{J}/(\mathrm{ker} \, \Phi)$, proving the first statement of the theorem.
Now suppose that $V w \cap W(n) = 0$ implies that $w = 0$ for $w \in W$. Let $\bar{w} \in \overline{J}$, and write $\bar{w} = w + \mathcal{U}(\hat{V}) W_A$. Then $\langle u', x w \rangle = 0$ for all $x \in \mathcal{U}(\hat{V})$ and $u' \in W(n)^*$. Thus $V\cdot \Phi(\bar{w}) \cap W(n) = 0$, which implies that $\Phi (\bar{w}) = 0$. Therefore $\overline{J} \subset \mathrm{ker} \, \Phi$, implying $\overline{J} = \mathrm{ker} \, \Phi$, proving the second statement of the theorem.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
\begin{rema}\label{J-remark} {\em Theorem \ref{last-thm} above gives some motivation and intuition about the subspace $J$ of $M_n(U)$ used to define $L_n(U)$. In fact, from Theorem \ref{last-thm}, we see that $\overline{J} = J/\mathcal{U}(\hat{V})W_A$ is the maximal submodule of $\overline{M}_n(U)$ of the form $N/\mathcal{U}(\hat{V})W_A$ such that $N \cap U = 0$. }
\end{rema}
We have the following corollary:
\begin{cor}\label{iso-cor} Let $\mathcal{A}_{n,n-1}^{Res}$ denote the subcategory of objects $U$ in $\mathcal{A}_{n,n-1}$ that satisfy $\Omega_j(L_n(U)) = \bigoplus_{k=0}^j L_n(U)(k)$ for $j = n$ and $n-1$, and let $\mathcal{V}_n^{Res}$ denote the subcategory of objects $W$ in $\mathcal{V}_n$ that satisfy: $\Omega_j(W) = \bigoplus_{k=0}^j W(k)$ for $j = n$ and $n-1$;
For any $w \in W$, $w = 0$ if and only if $Vw\cap W(n) = 0$;
and $W(n)$ has no nonzero $A_n(V)$-submodule that is an $A_{n-1}(V)$-module.
Then the functors $\Omega_n/\Omega_{n-1}$ and $L_n$ are mutual inverses on the categories $\mathcal{V}_n^{Res}$ and $\mathcal{A}_{n,n-1}^{Res}$, respectively. In particular, the categories $\mathcal{V}_n^{Res}$ and $\mathcal{A}^{Res}_{n,n-1}$ are equivalent.
Furthermore, the subcategory of simple objects in $\mathcal{V}_n^{Res}$ is equivalent to the subcategory of simple objects in $\mathcal{A}_{n,n-1}^{Res}$.
\end{cor}
{\it Proof.}\hspace{2ex} By Theorem \ref{mainthm}, the functor $\Omega_n/\Omega_{n-1}$ takes objects in $\mathcal{V}_n^{Res}$ to objects in $\mathcal{A}_{n,n-1}^{Res}$, and $\Omega_n/\Omega_{n-1} \circ L_n$ is the identity on $\mathcal{A}_{n,n-1}^{Res}$.
By Theorem \ref{last-thm} and Remark \ref{J-remark}, the functor $L_n$ takes objects in $\mathcal{A}_{n,n-1}^{Res}$ to objects in $\mathcal{V}_n^{Res}$, and $L_n \circ \Omega_n/\Omega_{n-1}$ is the identity on $\mathcal{V}_n^{Res}$.
The correspondence on the subcategories of simple objects follows from Theorem \ref{simple-theorem}.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
In particular, this illustrates that the categorical correspondence determined by the functors $\Omega_n/\Omega_{n-1}$ and $L_n$ restricts to subcategories of $\mathcal{V}_n$ and $\mathcal{A}_{n, n-1}$ that are too narrow to give a significant understanding of the relationship between indecomposable modules and higher level Zhu algebras, at least as regards those modules that can be constructed via the functor $L_n$, which is the setting that motivated this paper in the first place. That is, we are more interested in understanding the nature of the types of indecomposable $V$-modules that can be constructed from various classes of $A_n(V)$-modules through $L_n$; in fact, the functor $\Omega_n/\Omega_{n-1}$ is often more useful in the indecomposable nonsimple setting, in that it gives more information precisely when it is {\it not} an inverse to $L_n$. We study this issue further in \cite{BVY-Virasoro}, and in Section \ref{Virasoro-factor-through-example} below, we give an example to illustrate the types of indecomposable modules one can construct through the functor $L_n$ from indecomposable $A_n(V)$-modules.
We also observe the following:
\begin{prop}\label{2n-prop} Let $U$ be an $A_n(V)$-module that does not factor through $A_{n-1}(V)$, and let $W = L_n(U)$. Then
$\bigoplus_{k=0}^n W(k) \subset \Omega_n(W) \subset \bigoplus _{k=0}^{2n} W(k)$, and all singular vectors, $\Omega_0(W)$, must be contained in $\bigoplus _{k=0}^{n} W(k)$.
\end{prop}
{\it Proof.}\hspace{2ex} The first inclusion $\bigoplus_{k=0}^n W(k) \subset \Omega_n(W)$ is obvious. Now suppose $w \in \Omega_n(W)$, and write $w = w' + w''$ with $w' \in \bigoplus _{k=0}^{2n} W(k)$ and $w'' \in \bigoplus _{k=2n+1}^{\infty} W(k)$. Then the $\mathbb{N}$-grading of $W$ implies that $w', w'' \in \Omega_n(W)$. This, and the fact that $U = W(n)$, implies that $\mathcal{U}(\hat{V}) w'' \cap U = 0$, since $ \Omega_n(W) \cap \bigoplus _{k=2n+1}^{\infty} W(k) = 0$. Thus $w'' \in J$, and $w = w'$ in $W = L_n(U) = M_n(U)/J$. Therefore $\Omega_n(W) \subset \bigoplus _{k=0}^{2n} W(k)$. Furthermore, any singular vector $w$, i.e., any $w \in \Omega_0(W)$, must in fact be contained in $\bigoplus _{k=0}^{n} W(k)$: otherwise, if $w = w' + w''$ with $w' \in \bigoplus _{k=0}^{n} W(k)$ and $w'' \in \bigoplus_{k = n+1}^{\infty} W(k)$, then again $\mathcal{U}(\hat{V}) w'' \cap U = 0$, implying $w'' \in J$. Therefore $\Omega_0(W) \subset \bigoplus _{k=0}^{n} W(k)$.
\hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
\begin{rema} {\em Proposition \ref{2n-prop}, together with the defining property of the subspace $J$ (see Remark \ref{J-remark}), helps to characterize the modules in $\mathcal{V}_n$ that are in the image of the functor $L_n$. That is, if $W = L_n(W(n))$, then $W$ must satisfy the following:
(i) $W$ is generated by $W(n)$;
(ii) For any $w \in W$, $w = 0$ if and only if $Vw \cap W(n) = 0$ (or more generally, $w = 0$ if and only if $\langle W(n)^*, \mathcal{U}(\hat{V})w \rangle = 0$);
(iii) $\Omega_n(W) \subset \bigoplus _{k=0}^{2n} W(k)$ and $\Omega_0(W) \subset \bigoplus _{k=0}^{n} W(k)$.}
\end{rema}
In particular, we see that the higher level Zhu algebras, i.e., $A_n(V)$ for $n \geq 1$, can be used to construct indecomposable nonsimple modules for $V$ with Jordan blocks for the $L(0)$ operator of sizes $k$ through $k+n$, if $A_n(V)$ does not decompose into a direct sum with $A_{n-1}(V)$; see Corollary \ref{mainthm-first-cor}. We illustrate this below for $n = 1$ in Section \ref{Virasoro-factor-through-example}.
\section{Examples: Heisenberg and Virasoro vertex operator algebras}\label{examples-section}
In \cite{BVY-Heisenberg} and \cite{BVY-Virasoro}, we determine the level one Zhu algebra of $V$ in the two cases of when $V$ is the vertex operator algebra associated to the rank one Heisenberg algebra and when $V$ is the Virasoro vertex operator algebra, respectively. In \cite{BVY-Heisenberg} and \cite{BVY-Virasoro}, we then give some results on the classification of modules for these vertex operator algebras using the structure of their level one Zhu algebras.
Here we recall from \cite{FZ} and \cite{W} the level zero Zhu algebras and from \cite{BVY-Heisenberg} and \cite{BVY-Virasoro} the level one Zhu algebras for these vertex operator algebras. We point out some distinctive features of these algebras which give interesting examples to illustrate aspects of Theorems \ref{mainthm} and \ref{simple-theorem}, as well as the other results of Section 3. For instance, these examples illustrate how the nature of the surjection of the level one Zhu algebra onto the level zero Zhu algebra affects the structure of the $V$-modules that arise, in particular, how the nature of this surjection determines whether the higher level Zhu's algebras are necessary in order to detect indecomposable modules that have different size Jordan blocks with respect to $L(0)$ at higher degrees.
\subsection{The Heisenberg vertex operator algebra, and its level zero and level one Zhu algebras}\label{Heisenberg-section}
Following, for example \cite{LL}, we denote by $\mathfrak{h}$ a one-dimensional abelian Lie algebra spanned by $\alpha$ with a bilinear form $\langle \cdot, \cdot \rangle$ such that $\langle \alpha, \alpha \rangle = 1$, and by
\[
\hat{\mathfrak{h}} = \mathfrak{h}\otimes \mathbb{C}[t, t^{-1}] \oplus \mathbb{C} \mathbf{k}
\]
the affinization of $\mathfrak{h}$ with bracket relations
\[
[a(m), b(n)] = m\langle a, b\rangle\delta_{m+n, 0}\mathbf{k}, \;\;\; a, b \in \mathfrak{h},
\]
\[
[\mathbf{k}, a(m)] = 0,
\]
where we define $a(m) = a \otimes t^m$ for $m \in \mathbb{Z}$ and $a \in \mathfrak{h}$.
Set
\[
\hat{\mathfrak{h}}^{+} =\mathfrak{h} \otimes t\mathbb{C}[t] \qquad \mbox{and} \qquad \hat{\mathfrak{h}}^{-} = \mathfrak{h} \otimes t^{-1}\mathbb{C}[t^{-1}].
\]
Then $\hat{\mathfrak{h}}^{+}$ and $\hat{\mathfrak{h}}^{-}$ are abelian subalgebras of $\hat{\mathfrak{h}}$. Consider the induced $\hat{\mathfrak{h}}$-module given by
\[
M(1) = \mathcal{U}(\hat{\mathfrak{h}})\otimes_{\mathcal{U}(\mathbb{C}[t]\otimes \mathfrak{h} \oplus \mathbb{C} \mathbf{k})} \mathbb{C}{\bf 1} \simeq S(\hat{\mathfrak{h}}^{-}) \qquad \mbox{(linearly)},
\]
where $\mathcal{U}(\cdot)$ and $S(\cdot)$ denote the universal enveloping algebra and symmetric algebra, respectively, $\mathfrak{h} \otimes \mathbb{C}[t]$ acts trivially on $\mathbb{C}\mathbf{1}$ and $\mathbf{k}$ acts as multiplication by $1$. Then $M(1)$ is a vertex operator algebra, often called the {\it vertex operator algebra associated to the rank one Heisenberg algebra}, or simply the {\it rank one Heisenberg vertex operator algebra}, or the {\it one free boson vertex operator algebra} --- the Heisenberg Lie algebra in question being precisely $\hat{\mathfrak{h}} \diagdown \mathbb{C} \alpha(0)$.
There is in fact a one-parameter family of possible conformal elements for $M(1)$ that give the vertex operator algebra structure, namely $\omega_a = \frac{1}{2} \alpha(-1)^2{\bf 1} + a \alpha(-2){\bf 1}$ for $a \in \mathbb{C}$. We distinguish these different vertex operator algebra structures on $M(1)$ by writing $M_a(1)$.
Any element of $M_a(1)$ can be expressed as a linear combination of elements of the form
\begin{equation}\label{generators-for-V}
\alpha(-k_1)\cdots \alpha(-k_j){\bf 1}, \quad \mbox{with} \quad k_1 \geq \cdots \geq k_j \geq 1, \ \mbox{for $j \in \mathbb{N}$}.
\end{equation}
It is known that $M_a(1)$ is simple and has infinitely many nonisomorphic irreducible modules, which can be easily classified (see \cite{LL}). Furthermore, the indecomposable generalized modules have been completely determined, e.g., see \cite{M}. In particular, we have
\begin{prop}[\cite{M}] Let $W$ be an indecomposable generalized $M_a(1)$-module. Then as an $\hat{\mathfrak{h}}$-module
\begin{equation}
W \cong M_a(1) \otimes \Omega(W)
\end{equation}
where $\Omega(W) = \{w \in W \; | \; \alpha(n)w = 0 \mbox{ for all $n>0$} \}$ is the vacuum space.
\end{prop}
\begin{rema} {\em Note that in terms of the functors $\Omega_n$ for $n \in \mathbb{N}$, if $W$ is an indecomposable $M_a(1)$-module, then
\[\mathbf{1} \otimes \Omega(W) = \Omega_0(W) .\]}
\end{rema}
We have the following level zero and level one Zhu algebras for $M_a(1)$ (cf. \cite{FZ}, \cite{BVY-Heisenberg}): \begin{equation}
A_0(M_a(1)) \cong \mathbb{C}[x,y]/(y-x^2) \cong \mathbb{C}[x]
\end{equation}
under the identification
\begin{eqnarray}
\alpha(-1)\mathbf{1} + O_0(M_a(1)) &\longleftrightarrow& x + (p_0(x,y)), \label{x-for-A_0}\\
\alpha(-1)^2\mathbf{1} + O_0(M_a(1)) &\longleftrightarrow& y + (p_0(x,y)), \label{y-for-A_0}
\end{eqnarray}
where $p_0(x,y) = y - x^2$.
For the level one Zhu algebra, we have
\begin{eqnarray}
A_1(M_a(1)) &\cong& \mathbb{C}[x,y]/((y-x^2)(y-x^2-2)) \\
&\cong& \mathbb{C}[x,y]/(y-x^2)\oplus \mathbb{C}[x,y]/(y-x^2-2) \\
&\cong& \mathbb{C}[x] \oplus \mathbb{C}[x] \ \cong \ A_0(M_a(1)) \oplus \mathbb{C}[x]
\end{eqnarray}
under the identification
\begin{eqnarray}
\alpha(-1)\mathbf{1} + O_1(M_a(1)) &\longleftrightarrow& x + (p_0(x,y)p_1(x,y)), \\ \alpha(-1)^2\mathbf{1} + O_1(M_a(1)) &\longleftrightarrow& y + (p_0(x,y)p_1(x,y)),
\end{eqnarray}
where again $p_0(x,y) = y - x^2$, and in addition $p_1(x,y) = y - x^2 - 2$.
\begin{rema}\label{Heisenberg-remark} {\em In this case, since the ideals $I_0 = (p_0(x,y))$ and $I_1 = (p_1(x,y))$ are relatively prime, i.e. $I_0 + I_1 = \mathbb{C}[x,y]$, we have that the level one Zhu algebra is naturally isomorphic to a direct sum of $A_0(M_a(1)) \cong \mathbb{C}[x,y]/I_0$ and its direct sum complement which is isomorphic to $\mathbb{C}[x,y]/I_1$.
Thus any indecomposable module $U$ for $A_1(M_a(1))$ will either be an indecomposable module for $A_0(M_a(1))$ or an indecomposable module for its direct sum complement, $\mathbb{C}[x,y]/I_1$. That is, either $U$ itself factors through $A_0(M_a(1))$, or only the zero submodule of $U$ factors through $A_0(M_a(1))$.
Therefore, by Theorem \ref{mainthm}, any indecomposable module $U$ for $A_1(M_a(1))$ which does not factor through $A_0(M_a(1))$ will satisfy
\[U \cong \Omega_1/\Omega_0 (L_1(U)) ,\]
and in particular, any extra requirement that no nonzero submodule of $U$ factor through $A_0(M_a(1))$ is superfluous. }
\end{rema}
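As an illustrative check of the comaximality used in Remark \ref{Heisenberg-remark} (this sketch is not part of the mathematical argument, and the sampling is merely a numerical heuristic for the polynomial identity $p_0 - p_1 = 2$), note that $1 = \frac{1}{2}(p_0 - p_1) \in I_0 + I_1$:

```python
import random

# Defining polynomials of the ideals I_0 and I_1
p0 = lambda x, y: y - x**2
p1 = lambda x, y: y - x**2 - 2

# p0 - p1 is identically the constant 2, so 1 = (1/2)(p0 - p1)
# lies in I_0 + I_1, i.e., the ideals are comaximal.
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
assert all(abs(p0(x, y) - p1(x, y) - 2) < 1e-12 for x, y in samples)
print("p0 - p1 == 2 at all sampled points, so I_0 + I_1 = C[x,y]")
```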
We illustrate the points made in Remark \ref{Heisenberg-remark} explicitly below.
\subsubsection{Heisenberg Example:}\label{Heisenberg-example}
The indecomposable modules for $A_0(M_a(1)) \cong \mathbb{C}[x,y]/(y-x^2) \cong \mathbb{C}[x]$ are given by
\begin{equation}\label{U-for-A_0}
U_0(\lambda, k) = \mathbb{C}[x,y]/((y-x^2), (x - \lambda)^k) \cong \mathbb{C}[x]/(x - \lambda)^k
\end{equation}
for $\lambda \in \mathbb{C}$ and $k \in \mathbb{Z}_+$, and
\[
L_0(U_0(\lambda, k)) \cong M_a(1) \otimes_{\mathbb{C}} \Omega(\lambda, k),
\]
where $\Omega(\lambda,k)$ is a $k$-dimensional vacuum space such that $\alpha(0)$
acts with Jordan form given by
\begin{equation}
\left[ \begin{array}{cccccc}
\lambda & 1 & 0 & \cdots & 0 & 0\\
0 & \lambda & 1 & \cdots & 0 & 0 \\
0 & 0 & \lambda & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \lambda & 1 \\
0 & 0 & 0 & \cdots & 0 & \lambda
\end{array}
\right] .
\end{equation}
Note then that the zero mode of $\omega_a$ which is given by
\begin{equation}
L(0) = \sum_{m \in \mathbb{Z}_+} \alpha(-m) \alpha(m) + \frac{1}{2} \alpha(0)^2 - a \alpha(0)
\end{equation}
acts on $\Omega(\lambda, k)$ such that its only eigenvalue is $\frac{1}{2} \lambda^2 - a \lambda$ (which is the lowest conformal weight of $M_a(1) \otimes \Omega(\lambda, k)$). Writing $\alpha(0) = \lambda \, Id_k + N$ with respect to a Jordan basis for $\alpha(0)$ acting on $\Omega(\lambda, k)$, where $N$ is the nilpotent part of the Jordan block, we have $L(0) = \frac{1}{2}\alpha(0)^2 - a \alpha(0)$ on $\Omega(\lambda, k)$, so that $L(0) - ( \frac{1}{2} \lambda^2 - a \lambda) Id_k = (\lambda - a)N + \frac{1}{2}N^2$ is given by
\begin{equation}
\left[ \begin{array}{ccccccc}
0 & \lambda-a & \frac{1}{2} & 0 & \cdots & 0 & 0 \\
0 & 0 & \lambda - a & \frac{1}{2} & \cdots & 0 & 0 \\
0 & 0 & 0 & \lambda - a & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & \lambda - a & \frac{1}{2} \\
0 & 0 & 0 & 0 & \cdots & 0 & \lambda - a \\
0 & 0 & 0 & 0 & \cdots & 0 & 0
\end{array}
\right] .
\end{equation}
Also note that $L(0)$ is diagonalizable if and only if its nilpotent part $(\lambda - a)N + \frac{1}{2}N^2$ vanishes, that is, if and only if: (i) $k = 1$, which corresponds to the case when $M_a(1) \otimes \Omega(\lambda, k)$ is irreducible; or (ii) $k = 2$ and $\lambda = a$. For $k > 2$ the term $\frac{1}{2}N^2$ is nonzero, so $L(0)$ is never diagonalizable.
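For concreteness, the action of $L(0)$ on the vacuum space can be checked numerically. The following Python sketch (illustrative only; the sample parameter values and the helper \texttt{matmul} are ours) forms a size-$3$ Jordan block $J$ for $\alpha(0)$ and computes $\frac{1}{2}J^2 - aJ$, confirming that the diagonal entries are the lowest conformal weight $\frac{1}{2}\lambda^2 - a\lambda$ and the superdiagonal entries are $\lambda - a$:

```python
lam, a, k = 1.5, 0.25, 3  # sample values of lambda, a, and the block size k

# J = lam*Id + N, a size-k Jordan block for alpha(0) on Omega(lam, k)
J = [[lam if i == j else (1.0 if j == i + 1 else 0.0) for j in range(k)]
     for i in range(k)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

# On the vacuum space, L(0) = (1/2) alpha(0)^2 - a alpha(0)
J2 = matmul(J, J)
L0 = [[0.5 * J2[i][j] - a * J[i][j] for j in range(k)] for i in range(k)]

lowest_weight = 0.5 * lam**2 - a * lam
assert all(abs(L0[i][i] - lowest_weight) < 1e-12 for i in range(k))
assert all(abs(L0[i][i + 1] - (lam - a)) < 1e-12 for i in range(k - 1))
print("eigenvalue of L(0) on Omega:", lowest_weight)
```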
These $M_a(1) \otimes \Omega(\lambda, k)$ exhaust all the indecomposable generalized $M_a(1)$-modules and the $\mathbb{N}$-grading of $M_a(1) \otimes \Omega(\lambda, k)$ is explicitly given by
\[ M_a(1) \otimes \Omega(\lambda, k) = \coprod_{m \in \mathbb{N}} M_a(1)_m \otimes \Omega( \lambda, k) \]
where $M_a(1)_m$ is the weight $m$ space of the vertex operator algebra $M_a(1)$ and thus $M_a(1)_m \otimes \Omega( \lambda, k)$ is the space of generalized eigenvectors of
weight $m + \frac{1}{2}\lambda^2 - a \lambda$ with respect to $L(0)$. Therefore, the generalized graded dimension of $M_a(1) \otimes \Omega(\lambda, k)$ is given by
\begin{eqnarray}
\mathrm{gdim}_q \, M_a(1) \otimes \Omega(\lambda, k) &=& q^{-1/24} \sum_{m \in \mathbb{N}} (k \,\mathrm{dim} \, M_a(1)_m )\, q^{m + \frac{1}{2} \lambda^2 - a \lambda}\\
&=& q^{\frac{1}{2} \lambda^2 - a \lambda - 1/24} k \sum_{m \in \mathbb{N}} (\mathrm{dim} \, M_a(1)_m )\, q^{m} \nonumber \\
&=& q^{\frac{1}{2} \lambda^2 - a \lambda} k \, \eta(q)^{-1} \nonumber
\end{eqnarray}
where $\eta(q)$ is the Dedekind $\eta$-function.
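Since, by the spanning set (\ref{generators-for-V}), $\dim M_a(1)_m$ is the number of partitions $p(m)$ of $m$, the $q$-expansion coefficients of the generalized graded dimension above are $k\,p(m)$. The following Python sketch (the helper \texttt{partition\_numbers} is ours) computes these coefficients by the standard dynamic-programming recursion for the partition function:

```python
def partition_numbers(N):
    # p(m) = number of partitions of m = dim M_a(1)_m, computed by the
    # standard dynamic-programming recursion over allowed part sizes
    p = [1] + [0] * N
    for part in range(1, N + 1):
        for m in range(part, N + 1):
            p[m] += p[m - part]
    return p

p = partition_numbers(10)
assert p[:6] == [1, 1, 2, 3, 5, 7]   # dim M_a(1)_m for m = 0, ..., 5

k = 2  # sample dimension of the vacuum space Omega(lambda, k)
coeffs = [k * pm for pm in p]        # coefficient of q^(m + 1/2 lam^2 - a lam)
print(coeffs)
```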
The indecomposable modules for $A_1(M_a(1)) \cong \mathbb{C}[x,y]/((y-x^2)(y - x^2 - 2)) \cong A_0(M_a(1)) \oplus \mathbb{C}[x]$ are given by the indecomposable modules $U_0(\lambda, k)$ for $A_0(M_a(1))$ as given in (\ref{U-for-A_0}) or by
\begin{equation}
U_1(\lambda, k) = \mathbb{C}[x,y]/((y-x^2 - 2), (x - \lambda)^k) \cong \mathbb{C}[x]/(x - \lambda)^k
\end{equation}
for $\lambda \in \mathbb{C}$ and $k \in \mathbb{Z}_+$, in which case,
\[ U_1(\lambda, k) \cong \alpha(-1)\mathbf{1} \otimes \Omega(\lambda, k) \quad \mbox{and}
\quad L_1(U_1(\lambda, k)) \cong M_a(1) \otimes \Omega(\lambda,k).\]
Therefore, there are no new modules obtained via inducing from a module for $A_1(M_a(1))$ versus from $A_0(M_a(1))$.
If we allow for inducing by $L_1$ for any indecomposable $A_1(M_a(1))$ module (including those that factor through $A_0(M_a(1))$ so that $L_1(U)(0)$ might be zero), then the possible cases for $\Omega_1/\Omega_0(L_1(U))$ for $U$ an indecomposable $A_1(M_a(1))$-module are
\[ U = U_0(\lambda, k), \quad L_1(U) (0) = 0, \quad \mbox{and} \quad \Omega_1/\Omega_0(L_1(U)) \cong U_1(\lambda, k) \ncong U\]
or
\[ U = U_1(\lambda, k), \quad L_1(U) (0) = \Omega(\lambda, k) \neq 0, \quad \mbox{and} \quad \Omega_1/\Omega_0(L_1(U)) \cong U.\]
Note however that in the case of $U = U_0(\lambda,k)$, the $M_a(1)$-module $L_1(U)$ is in fact $M_a(1) \otimes \Omega(\lambda, k)$, but the grading as an $\mathbb{N}$-gradable module is shifted up by one. Thus by regrading to obtain an $\mathbb{N}$-gradable $M_a(1)$-module in the sense of Definition \ref{N-gradable-definition}, this module is again just $M_a(1) \otimes \Omega(\lambda, k)$, and the level one Zhu algebra gives no new information about the indecomposable $M_a(1)$-modules not already given by the level zero Zhu algebra.
\subsection{Virasoro vertex operator algebras, and their level zero and level one Zhu algebras}
Let $\mathcal{L}$ be the Virasoro algebra with central charge $\mathbf{c}$, that is, $\mathcal{L}$ is the vector space with basis $\{\bar{L}_n \,|\, n\in\mathbb{Z}\}\cup \{\mathbf{c}\}$ with bracket relations
\begin{align*}
[\bar{L}_m,\bar{L}_n]=(m-n)\bar{L}_{m+n}+\frac{m^3-m}{12} \delta_{m+n,0} \, \textbf{c},\quad\quad [\textbf{c},\bar{L}_m]=0
\end{align*}
for $m,n\in \mathbb{Z}$. Here we use a bar over the Virasoro generators to distinguish between these Virasoro elements and the functor $L_n$ defined earlier.
Let $\mathcal{L}^{\geq 0}$ be the Lie subalgebra with basis $\{ \bar{L}_n \,|\, n\geq 0 \} \cup \{\mathbf{c}\}$, and let $\mathbb{C}_{c,h}$ be the $1$-dimensional $\mathcal{L}^{\geq 0}$-module where $\mathbf{c}$ acts as $c$ for some $c\in \mathbb{C}$, $\bar{L}_0$ acts as $h$ for some $h\in \mathbb{C}$, and $\bar{L}_n$ acts trivially for $n\geq 1$. Form the induced $\mathcal{L}$-module
\[
M(c,h)= \mathcal{U}(\mathcal{L})\otimes_{\mathcal{L}^{\geq 0}} \mathbb{C}_{c,h} .
\]
We shall write $L(n)$ for the operator on a Virasoro module corresponding to $\bar{L}_n$, and $\mathbf{1}_{c,h} = 1 \in \mathbb{C}_{c,h}$. Then
\[ V_{Vir}(c,0)= M(c,0)/\langle L(-1)\mathbf{1}_{c,0}\rangle
\]
has a natural vertex operator algebra structure with vacuum vector $1=\mathbf{1}_{c,0}$, and conformal element $\omega=L(-2)\mathbf{1}_{c,0}$, satisfying $Y(\omega,x)= \sum_{n\in\mathbb{Z} } L(n)x^{-n-2}$. In addition, for each $h\in\mathbb{C}$, we have that $M(c,h)$ is an ordinary $V_{Vir}(c,0)$-module with $\mathbb{N}$-gradation
\[M(c,h)=\coprod_{k\in \mathbb{N}} M(c,h)_k\]
where $M(c,h)_k$ is the $L(0)$-eigenspace with eigenvalue $h + k$. We say that $M(c,h)_k$ has degree $k$ and weight $h+k$.
We now fix $c \in \mathbb{C}$, and denote by $V_{Vir}$, the vertex operator algebra $V_{Vir}(c,0)$.
It was shown in \cite{W} that
\begin{equation}
A_0(V_{Vir}) \cong \mathbb{C}[x,y]/(y-x^2 - 2x) \cong \mathbb{C}[x]
\end{equation}
under the identification
\begin{eqnarray}
L(-2)\mathbf{1} + O_0(V_{Vir}) &\longleftrightarrow& x + (q_0(x,y)), \\
L(-2)^2\mathbf{1} + O_0(V_{Vir}) &\longleftrightarrow& y + (q_0(x,y)),
\end{eqnarray}
where $q_0(x,y) = y - x^2 - 2x$.
In addition, there is a bijection between isomorphism classes of irreducible $\mathbb{N}$-gradable $V_{Vir}$-modules and irreducible $\mathbb{C}[x]$-modules given by $L(c,\lambda)
\longleftrightarrow \mathbb{C}[x]/(x - \lambda)$ where $T(c,\lambda)$ is the largest proper submodule of $M(c,\lambda)$, and $L(c,\lambda) = M(c,\lambda)/T(c,\lambda)$. It was proved in \cite{W} that if $c$ is not of the form
\begin{equation}\label{cpq}
c_{p,q} = 1 - 6\frac{(p-q)^2}{pq} \quad \mbox{for any $p,q \in \{2,3,4,\dots\}$ with $p$ and $q$ relatively prime,}
\end{equation}
then $T(c, 0) = \langle L(-1) \mathbf{1}_{c,0} \rangle$, implying that $V_{Vir}(c,0) = L(c,0)$ is simple as a vertex operator algebra. Furthermore, it then follows from the fact that $A_0(V_{Vir}) \cong \mathbb{C}[x]$ that in this case of $c \neq c_{p,q}$, the vertex operator algebra $V_{Vir}(c,0)$ is irrational. It was also shown in \cite{W} that for $c = c_{p,q}$, we have $T(c, 0) \neq \langle L(-1) \mathbf{1}_{c,0} \rangle$, and thus in this case $V_{Vir}(c_{p,q}, 0)$ is not simple as a vertex operator algebra.
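To make (\ref{cpq}) concrete, the first few central charges $c_{p,q}$ (the values at which $V_{Vir}(c,0)$ fails to be simple) can be tabulated exactly; the helper \texttt{c\_pq} below is ours, using exact rational arithmetic:

```python
from fractions import Fraction
from math import gcd

def c_pq(p, q):
    # Central charge c_{p,q} = 1 - 6 (p - q)^2 / (p q), for coprime p, q >= 2
    assert p >= 2 and q >= 2 and gcd(p, q) == 1
    return 1 - Fraction(6 * (p - q) ** 2, p * q)

assert c_pq(2, 3) == 0                # c_{2,3} = 0
assert c_pq(3, 4) == Fraction(1, 2)   # c_{3,4} = 1/2
assert c_pq(2, 5) == Fraction(-22, 5)
print(sorted({c_pq(p, q) for p in range(2, 6) for q in range(p + 1, 7)
              if gcd(p, q) == 1}))
```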
As for the level one Zhu algebra, in \cite{BVY-Virasoro}, we show that
\begin{eqnarray}
A_1(V_{Vir}) &\cong& \mathbb{C}[x,y]/((y-x^2-2x)(y-x^2-6x + 4)) \\
&\cong& \mathbb{C}[\tilde{x},\tilde{y}]/(\tilde{x} \tilde{y})
\end{eqnarray}
under the identification
\begin{eqnarray}
L(-2)\mathbf{1} + O_1(V_{Vir}) &\longleftrightarrow& x + (q_0(x,y)q_1(x,y)), \label{x}\\
L(-2)^2\mathbf{1} + O_1(V_{Vir}) &\longleftrightarrow& y + (q_0(x,y)q_1(x,y)), \label{y}
\end{eqnarray}
where $q_0(x,y) = y - x^2 -2x = \tilde{x}$ and in addition $q_1(x,y) = y - x^2 -6x + 4 = \tilde{y}$.
\begin{rema} {\em In this case, the ideals $J_0 = (q_0(x,y))$ and $J_1 = (q_1(x,y))$ are not relatively prime: since $q_0(x,y) - q_1(x,y) = 4x - 4$, the zero sets of $q_0$ and $q_1$ intersect (at the point $(x,y) = (1,3)$), and thus $J_0 + J_1 \neq \mathbb{C}[x,y]$. Therefore the level zero Zhu algebra is not isomorphic to a direct summand of the level one Zhu algebra. As we shall see below, this results in several interesting examples illustrating the subtleties of the relationship between modules for the higher level Zhu algebras for $V = V_{Vir}$ and $\mathbb{N}$-gradable $V_{Vir}$-modules. }
\end{rema}
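The failure of comaximality noted in the remark above can be seen concretely (this sketch is illustrative only): $q_0$ and $q_1$ have the common zero $(x, y) = (1, 3)$, so every element of $J_0 + J_1$ vanishes there and $1 \notin J_0 + J_1$.

```python
q0 = lambda x, y: y - x**2 - 2*x       # generator of J_0
q1 = lambda x, y: y - x**2 - 6*x + 4   # generator of J_1

# q0 - q1 = 4x - 4 vanishes only at x = 1; substituting back gives y = 3.
x, y = 1, 3
assert q0(x, y) == 0 and q1(x, y) == 0
# Every element of J_0 + J_1 vanishes at (1, 3), so 1 is not in J_0 + J_1
# and the ideals are not relatively prime.
print("common zero of q0 and q1 at (1, 3)")
```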
\begin{rema} {\em The result we obtain in \cite{BVY-Virasoro} for $A_1(V_{Vir})$ differs from that presented in \cite{V} as follows: In the notation of \cite{V}, letting $L = L(0)$ and $A = L(-1)L(1)$, then acting on a primary vector (i.e. a vector $w$ for which $L(n) w = 0$ if $n >0$), we have that $(y - x^2 -2x)(y-x^2 -6x + 4)$ acts (using the zero mode action as computed below in (\ref{L(0)-zero-mode}) and (\ref{L(-2)-squared-acting})) as $4(A^2 - 2AL + 2A)$. This implies that for such vectors $A_1(V_{Vir})$ acts as $\mathbb{C}[A, L]/(A^2 - 2LA +2A)$ and this algebra is almost the level one Zhu algebra for $V_{Vir}$ given in \cite{V} but still differs by a minus sign. Thus with the minus sign typo corrected, $A_1(V_{Vir})$ acts equivalently to the algebra given in \cite{V} on $\Omega_1(W)$ for an $\mathbb{N}$-gradable $V$-module $W$, but will not in general act the same on $\Omega_n(W)$ for $n>1$. Since one of the important aspects of the information that the higher level Zhu algebras give is contained in the action of $A_n(V)$ versus the action of $A_{n-1}(V)$ through the natural epimorphism from $A_n(V)$ to $A_{n-1}(V)$, it is essential to have each $A_n(V)$ realized as its full algebra $V/O_n(V)$ rather than as a realization of zero modes acting on $\Omega_n(V)$ so as to be able to compare, e.g. the action of $A_2(V_{Vir})$ on an $A_2(V_{Vir})$-module such as $\Omega_2(W)$ versus the action of $A_1(V_{Vir})$ on the same module. }
\end{rema}
\subsubsection{Virasoro Example 1: $\Omega_n/\Omega_{n-1}(L_n(U)) \ncong U$ since $U$ has a nonzero proper submodule that factors through $A_{n-1}(V)$}\label{Virasoro-factor-through-example}
We now give a family of examples to illustrate that there are nontrivial instances of an indecomposable $A_1(V_{Vir})$-module $U$ that does not factor through $A_0(V_{Vir})$, but a nontrivial submodule does, and that in this case we have
\[\Omega_1/\Omega_0 (L_1(U)) \ncong U.\]
In addition, we give other aspects of the structure of this family of indecomposable $V_{Vir}$-modules, $\{W_k\}_{k \in \mathbb{Z}_+}$. For instance, each $W_k$, for $k \in \mathbb{Z}_+$, has a Jordan block decomposition for $L(0)$ with blocks of size $k$ occurring in the weight zero subspace and in the subspaces of weight greater than one, and blocks of size $k+1$ occurring in the subspaces of weight one and higher.
For $k \in \mathbb{Z}_+$, consider the nonsimple, indecomposable $A_1(V_{Vir})$-module given by
\[
U = \mathbb{C}[x, y]/((y-x^2-2x)^{k+1}, (y-x^2-6x+4)).
\]
Clearly $U$ is not a module for $A_0(V_{Vir}) \cong \mathbb{C}[x,y]/(y - x^2 - 2x)$, and
\[
U \cong \mathbb{C}[x]/(x-1)^{k+1}.
\]
Let $w$ be a singular vector for the Virasoro algebra (i.e., $L(n) w = 0$ if $n>0$) such that
\[
L(0)^kw \neq 0; \;\;\; L(0)^{k+1}w = 0.
\]
Set
\[
U' = \mathrm{span}_\mathbb{C} \{L(-1)L(0)^iw\;|\; i = 0, \dots, k\}.
\]
We have the following lemma:
\begin{lemma}
As $A_1(V_{Vir})$-modules, $U \cong U'$
under the homomorphism
\begin{eqnarray}
f : \ \ \ \ \ \ \ U &\longrightarrow& U'\\
\overline{(x-1)^{i}} &\mapsto& L(-1)L(0)^iw, \nonumber
\end{eqnarray}
for $i = 0, \dots, k$, and
where $\overline{(x-1)^{i}}$ is the image of $(x-1)^i$ in $U$ under the canonical projection.
\end{lemma}
{\it Proof.}\hspace{2ex} Clearly the map $f$ is surjective, and since $U$ and $U'$ have the same dimension, $f$ is also injective. We need to show $f$ is an $A_1(V_{Vir})$-module homomorphism.
Denote by $[x]$ and $[y]$ the images of $x$ and $y$ in $A_1(V_{Vir})$ under the canonical projection and the identification given by (\ref{x}) and (\ref{y}). Then $[x] = [L(-2){\bf 1}]$ and $[y] = [L(-2)^2{\bf 1}]$. Thus $[x]$ acts on the modules via
\begin{equation}\label{L(0)-zero-mode}
o(L(-2){\bf 1}) = L(0).
\end{equation}
To determine the action of $[y]$, recall the normal ordering notation
\[{}^\circ_\circ Y(u,x)Y(v,x) {}^\circ_\circ = \left(\sum_{n<0} u_n x^{-n-1}\right) Y(v,x) + Y(v,x) \sum_{n \geq 0} u_n x^{-n-1}.\]
Then
\begin{eqnarray}\label{L(-2)-squared-acting}
o(L(-2)^2{\bf 1}) &=& (L(-2)^2{\bf 1})_{\mathrm{wt} \, L(-2)^2{\bf 1} - 1} \ = \ (L(-2)^2{\bf 1})_3 \nonumber \\
&=& \mathrm{Res}_x x^3 \, {}^\circ_\circ Y(L(-2) \mathbf{1},x) Y(L(-2) \mathbf{1},x) {}^\circ_\circ \nonumber \\
&=& \sum_{m<0} L(m-1)L(-m+1) + \sum_{m\geq 0}L(-m+1) L(m-1) \nonumber \\
&=& L(1)L(-1) + L(0)^2 + L(-1)L(1) + 2 \sum_{i \geq 2}L(-i)L(i) \nonumber \\
&=& [L(1), L(-1)] + L(0)^2 + 2 \sum_{i \geq 1}L(-i)L(i) \nonumber \\
&=& 2L(0) + L(0)^2 + 2 \sum_{i \geq 1}L(-i)L(i) .
\end{eqnarray}
Thus we have
\begin{eqnarray*}
f([x]\cdot \overline{(x-1)^i}) &=& f(\overline{(x-1)^{i+1}} + \overline{(x-1)^{i}}) \\
&=& f(\overline{(x-1)^{i+1}}) + f(\overline{(x-1)^{i}})\\
&=& L(-1)L(0)^{i+1}w + L(-1)L(0)^iw\\
&=& L(0)(L(-1)L(0)^iw)\\
&=& o(L(-2){\bf 1})(L(-1)L(0)^iw)\\
&=& [x]\cdot f( \overline{(x-1)^i}),
\end{eqnarray*}
and
\begin{eqnarray*}
f([y]\cdot \overline{(x-1)^i}) &=& f([x^2+6x-4]\cdot \overline{(x-1)^i})\\
&=& f([(x-1)^2 + 8(x-1) + 3] \cdot \overline{(x-1)^i})\\
&=& f(\overline{(x-1)^{i+2}}) + 8f(\overline{(x-1)^{i+1}} ) + 3f(\overline{(x-1)^i})\\
&=& L(-1)L(0)^{i+2}w + 8L(-1)L(0)^{i+1}w + 3 L(-1)L(0)^iw,
\end{eqnarray*}
while from (\ref{L(-2)-squared-acting}), we have
\begin{eqnarray*}
\lefteqn{[y]\cdot f( \overline{(x-1)^i}) =}\\
&=& \left( 2L(0) + L(0)^2 + 2 \sum_{j \geq 1}L(-j)L(j) \right)\cdot L(-1)L(0)^iw\\
&=& L(-1)L(0)^{i+2}w + 8L(-1)L(0)^{i+1}w + 3L(-1)L(0)^iw\\
&=& f([y]\cdot \overline{(x-1)^i}).
\end{eqnarray*}
Therefore $f$ is an $A_1(V_{Vir})$-module homomorphism, proving the lemma. \hspace*{\fill}\mbox{$\halmos$}\vspace{1em}
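The key polynomial manipulation in the proof above, rewriting the action of $[y]$, namely $x^2 + 6x - 4$, in powers of $(x-1)$, can be checked mechanically; the coefficients $3, 8, 1$ are exactly those appearing in the computation of $f([y]\cdot \overline{(x-1)^i})$. The following Python sketch (the helper \texttt{shift\_expand} is ours; this is illustrative, not part of the proof) performs the expansion by repeated synthetic division:

```python
def shift_expand(coeffs, c):
    # Expand a polynomial (coefficients listed lowest-degree first) in
    # powers of (x - c), by repeated synthetic division by (x - c).
    out, cs = [], list(coeffs)
    while cs:
        quot, acc = [], 0
        for a in reversed(cs):        # Horner from the leading coefficient
            acc = acc * c + a
            quot.append(acc)
        out.append(quot.pop())        # remainder = value of current poly at c
        cs = list(reversed(quot))     # quotient, lowest-degree first
    return out

# [y] acts as x^2 + 6x - 4 on U; expand around x = 1:
assert shift_expand([-4, 6, 1], 1) == [3, 8, 1]
# i.e., x^2 + 6x - 4 = 3 + 8(x - 1) + (x - 1)^2, giving the coefficients
# 3, 8, 1 of L(-1)L(0)^i w, L(-1)L(0)^{i+1} w, L(-1)L(0)^{i+2} w above.
print("x^2 + 6x - 4 = (x-1)^2 + 8(x-1) + 3")
```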
Now we see that in this case
\[
\Omega_1(L_1(U)) = L_1(U)(0) \oplus L_1(U)(1) \oplus \mbox{higher degree terms}
\]
and
\[
\Omega_0(L_1(U)) = L_1(U)(0) \oplus \left( \mathrm{span}_\mathbb{C} \{L(-1)L(0)^k w\} + J \right) .
\]
Since $L(-1)L(0)^kw \in U$, by definition of $J$, we have that $L(-1)L(0)^kw \notin J$. In particular,
\begin{eqnarray*}
\Omega_1/\Omega_0(L_1(U)) &\cong& U / \mathrm{span}_\mathbb{C} \{L(-1)L(0)^kw\} \oplus \mbox{higher degree terms}\\
&\ncong& U,
\end{eqnarray*}
giving a counterexample to Theorem 4.2 in \cite{DLM} and showing the necessity of the condition in our Theorem \ref{mainthm} that no nontrivial submodule of $U$ factors through $A_{n-1}(V)$. That is, this case illustrates that since the submodule $\mathrm{span}_\mathbb{C}\{L(-1)L(0)^kw\} \subsetneq U$ is an $A_0(V_{Vir})$-module, the added condition in our Theorem \ref{mainthm} in comparison to Theorem 4.2 of \cite{DLM} is indeed necessary for the statement to hold.
Note that in general, $J$ and the nature of the higher degree terms in $\Omega_1(L_1(U))$ will depend on the central charge $c$ of $V_{Vir} = V_{Vir}(c,0)$. Explicitly,
\begin{eqnarray*}
(L_1(U)) (j) &=& \mathrm{span}_\mathbb{C} \{L(-s_1) \cdots L(-s_r)L(-1) L(0)^i w \; | \; i = 0, \dots, k, \ r \in \mathbb{N}, \\
& & \qquad s_1 \geq s_2 \geq\dots \geq s_r \geq 1, \ s_1 + \cdots + s_r + 1 = j \} \\
& & \quad \oplus \mathrm{span}_\mathbb{C} \{L(-s_1) \cdots L(-s_r) L(0)^i w \; | \; i = 1, \dots, k, \ r \in \mathbb{N}, \\
& & \qquad s_1 \geq s_2 \geq\dots \geq s_r > 1, \ s_1 + \cdots + s_r = j \} \ \mathrm{mod} \ J
\end{eqnarray*}
for $j \in \mathbb{N}$, where the first direct summand, which occurs if $j \neq 0$, consists of Jordan blocks for $L(0)$ of size $k+1$ modulo the subspace $J$, and the second direct summand, which occurs if $j \neq 1$, consists of Jordan blocks of size $k$ modulo the subspace $J$. That is, the degree zero space will have a Jordan block of size $k$, the degree one space will have a Jordan block of size $k+1$, and the higher degree subspaces will potentially have Jordan blocks of size $k+1$ and smaller, with the sizes depending on the central charge $c \in \mathbb{C}$.
For instance, if $k = 1$, then $U = \mathrm{span}_\mathbb{C} \{ L(-1)w, L(-1)L(0)w \}$, and we have
\begin{eqnarray}
L_1(U)(0) &=& \mathrm{span}_\mathbb{C} \{ L(0)w \}, \qquad \qquad \qquad \quad \ \ \mbox{ 1 Jordan block of size 1}\\
L_1(U)(1) &=& \mathrm{span}_\mathbb{C} \{ L(-1)w, L(-1)L(0)w \}, \quad \mbox{ 1 Jordan block of size 2} .
\end{eqnarray}
\subsubsection{Virasoro Example 2: $L_n(\Omega_n/\Omega_{n-1}(W)) \ncong W$ since $W \neq \langle W(n) \rangle$ even if $W$ is simple}\label{Virasoro-n-generated-example}
Now let $c \neq 0$, and let $L(c,0)$ be the unique, up to isomorphism, simple vertex operator algebra quotient of $V_{Vir} = V_{Vir}(c,0)$ with central charge $c$; i.e., $L(c,0)$ is the quotient of $V_{Vir}(c,0)$ by its largest proper ideal $T(c,0)$.
Let $V = W = L(c,0)$. Then $W(0) = \mathbb{C} \mathbf{1}$, with $\mathbf{1} = \mathbf{1}_{(c,0)}$, and $W(1) = 0$. Thus in this case
\[\Omega_0(W) = \mathbb{C} \mathbf{1}= W(0) = W(0) \oplus W(1) = \Omega_1(W).\] Therefore
\[L_1(\Omega_1/\Omega_0 (W)) = 0 \ncong W.\]
\subsubsection{Virasoro Example 3: $L_n(\Omega_n/\Omega_{n-1}(W)) \ncong W$ since $\Omega_n/\Omega_{n-1}(W) \neq W(n)$, for $W$ indecomposable but nonsimple, and $V$ simple}
Now let $V = L(c,0)$ and $c \neq c_{p,q}$ for $c_{p,q}$ defined in (\ref{cpq}). In this case $V = L(c,0) = V_{Vir} (c,0)$ and $V$ is a simple vertex operator algebra. Let $W = M(c,0)$, which is not a simple $V$-module but is indecomposable. In this case, since the quotient of $M(c,0)$ by $\langle L(-1)\mathbf{1}_{c,0} \rangle$ is simple, we have
\[\Omega_0(W) = \mathrm{span}_\mathbb{C} \{ \mathbf{1}, L(-1)\mathbf{1}\} = W(0) \oplus W(1) \]
\[\Omega_1(W) = \mathrm{span}_\mathbb{C} \{ \mathbf{1}, L(-1)\mathbf{1}, L(-1)^2\mathbf{1}\} = W(0) \oplus W(1) \oplus \mathbb{C} L(-1)^2 \mathbf{1}.\]
Thus
\[\Omega_1/\Omega_0(W) \cong \mathbb{C} L(-1)^2\mathbf{1}, \]
which does not factor through $A_0(V)$ since $y - x^2 - 2x$ acts as $2L(-1)L(1)$ on $\mathbb{C} L(-1)^2\mathbf{1}$, which is nontrivial. Then $L_1(\mathbb{C} L(-1)^2\mathbf{1}) \cong \langle L(-1)\mathbf{1} \rangle$ since $L(-1) \mathbf{1} = \frac{1}{2} L(1) L(-1)^2 \mathbf{1}$ and this spans the zero degree space of $L_1(\mathbb{C} L(-1)^2\mathbf{1})$.
Therefore we have
\[L_1(\Omega_1/\Omega_0(W)) = L_1(\mathbb{C} L(-1)^2\mathbf{1}) \cong \langle L(-1)\mathbf{1} \rangle \ncong M(c,0) = W.\]
Also note that $\Pi_1(W) = \mathbb{C} L(-1) \mathbf{1}$ which as an $A_1(V)$-module does factor through $A_0(V)$, and $L_0( \Pi_1(W)) \cong \langle L(-1)\mathbf{1} \rangle$.
\label{sec:introduction}
\IEEEPARstart{V}{ehicle} telematics is steadily evolving as a solution to improve mobility safety, vehicle efficiency, and maintenance~\cite{Markets:2020}. Sensors which monitor the mechanical, electrical, and electronic systems of the vehicle produce information collected by Electronic Control Units (ECUs), which optimize vehicle performance and enhance safety by producing preventive maintenance reports. Naturally, correct functioning of the machine alone is not enough to prevent accidents. The World Health Organization reports that road traffic accidents represent one of the leading causes of global deaths~\cite{WHO:2018}, while the European Road Safety Observatory quantifies the socioeconomic consequences of traffic injuries in 2018 as \euro\,120 billion~\cite{EU:2019}.
As a logical consequence, industry and researchers pursue innovative methods to support drivers, through Advanced Driver Assistance Systems (ADAS) based on external and internal vehicle sensing. External sensing allows gathering information on the environment around the vehicle through specific sensors, with the possibility of sharing this information with other vehicles in proximity~\cite{Alsultan:2014}. As a result, vehicle telematics has become more relevant, as it directly impacts drivers, passengers and the environment around the vehicle.
Figure\,\ref{fig:automotive_market} shows a projection of the electronics market growth between 2018 and 2023~\cite{Gartner_Forecast:2018}. Automotive electronics forecast is only comparable to industrial electronics, pushed by the fourth industrial revolution.
\afterpage{
\begin{figure}[t]
\centering
\hspace{-0.5cm}
\resizebox{0.42\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xmin=0, xmax=10,
axis x line*=bottom,
axis y line*=none,
xlabel={Percentage \%},
symbolic y coords={%
{Industrial},
{Automotive},
{Military/Civil Aerospace},
{Total IC},
{Consumer},
{Communication},
{Data Processing}},
y tick label style={text width=2.5cm,align=right},
ytick=data,
ytick=data,
bar width=6mm,
]
\addplot[left color=black!20!blue, right color=black!40!red]coordinates {
(8.60,{Industrial})
(8.80,{Automotive})
(2.15,{Military/Civil Aerospace})
(3.10,{Total IC})
(2.20,{Consumer})
(1.50,{Communication})
(0.98,{Data Processing})};
\end{axis}
\begin{scope}[shift={(5.4,4.3)}]
\path[3d pie chart/.cd,radius=1.5cm,h=0.5cm,colors={"blue!25","black!10!yellow","purple!60","red!90","black!30!green!50"}]
pic{3d pie chart={18/Others,36/LiDAR,22/Imaging,15/Radar,9/Ultrasonic}};
\end{scope}
\begin{scope}
\filldraw[black] (5.8,1.25) circle (0.1cm) node[] (auto) {};
\draw (5.8,1.255) -- (5.8,1.8) node[line width=0.5mm] {};
\node at (5.8,2.5) {CAGR for Automotive Sensing};
\node at (5.8,2.1) {Market (2016-2022)~\cite{Yole_Sensor_Forecast:2017}};
\node [draw, thick, shape=rectangle, minimum width=7cm, minimum height=4cm, anchor=center] at (5.8,3.8) {};
\end{scope}
\end{tikzpicture}
}
\caption{Electronics market growth projection (2018-2023 CAGR)~\cite{Gartner_Forecast:2018}. The pie chart shows the prediction of automotive sensors market growth (2016-2022 CAGR)~\cite{Yole_Sensor_Forecast:2017}, focused on exteroceptive sensors.}
\label{fig:automotive_market}
\end{figure}
}
Vehicles are today equipped with a myriad of sensors, which integrate different systems and help to improve, adapt, or automate vehicle safety and the driving experience. These systems assist drivers by offering precautions to reduce risk exposure or by cooperatively automating driving tasks, with the aim of minimizing human errors~\cite{Guo:2019}. Typically, relying only on measurements from internal (``proprioceptive'') sensors is not sufficient to provide safety and warning applications associated with the external environment. With exteroceptive sensors instead, vehicles have the ability to acquire information on the surrounding environment, recognizing other actors and objects that coexist in the same space. Vehicle external sensing is gaining importance especially with the proliferation of cameras which, combined with improved image processing and analysis, enables a wide range of applications~\cite{IHS:2018}. Consequently, imaging is among the areas with the highest projected growth in automotive electronics, as shown in Figure~\ref{fig:automotive_market}. LiDARs lead the forecast as the exteroceptive sensors expected to be in highest demand in automotive. Unlike cameras, they provide omnidirectional sensing and do not suffer from poor light conditions. Radar and ultrasonic sensors share another quarter of the market, while 18\% of the market will be taken by other exteroceptive sensors such as microphones.
Data generated by exteroceptive sensors is of primary interest for drivers and passengers. In fact, environmental sensing is fundamental for ADAS, collision avoidance, and safety applications. Nonetheless, the same data is also valuable for smart cities, e.g., for sensing road conditions, and for insurance companies to establish driving profiles and calculate insurance premiums.
\smallskip\textit{Contribution and Outline.}
In this paper we present an overview of exteroceptive sensors and their use in vehicle telematics, along with the relative services and applications. Our main focus is on the safety application area, but we also cover other important fields related to mobility, such as navigation, road monitoring, and driving behavior analysis. Such applications are of interest for car manufacturers, insurance companies, and smart cities, as well as drivers and passengers. The purpose is to provide a clear taxonomy of works per telematics application and per exteroceptive sensor. Studies are selected mainly according to their relevance, novelty, and publication date. In this process, we provide background information on sensors in telematics, detailing and comparing the exteroceptive sensors in particular. When applicable, we describe Original Equipment Manufacturer (OEM) and aftermarket devices which include such sensors.
\begin{figure}
\hspace{-0.5cm}
\centering
\resizebox{0.5\textwidth}{!}{
\tikzstyle{section}=[align=center, text width=1.9cm, minimum height=0.8cm, rectangle, draw = black, rounded corners = 3mm]
\tikzstyle{subsec}=[align=left, text width=3.75cm, minimum height=0.5cm, rectangle, draw = black, rounded corners = 3mm]
\tikzstyle{subsubsec}=[align=left, text width=4.55cm, minimum height=0.5cm, rectangle, draw = black, rounded corners = 3mm]
\begin{tikzpicture}[thick,scale=1, every node/.style={scale=1.1}]
\draw (0,10) node[section] (sec_1) {\ref{sec:introduction} Introduction};
\draw (0,8.2) node[section] (sec_2) {\ref{sec:background} Background};
\draw (0,5.05) node[section] (sec_3) {\ref{sec:_ext_sensors_telematics}\\Exteroceptive Sensors for Telematics};
\draw (0,1.45) node[section] (sec_4) {\ref{sec:services_and_applications}\\Services and Applications};
\draw (0,-1.3) node[section] (sec_5) {\ref{sec:challenges}\\Open Research/ Challenges};
\draw (0,-3.15) node[section] (sec_6) {\ref{sec:conclusion} Conclusion};
\draw (3.7,10) node[subsec] (subsec_intro) {$-$ Overview \\$-$ Contribution};
\draw (3.7,8.8) node[subsec] (subsec_classification) {\ref{subsec:classification} Proprioceptive \textit{vs.} \hspace*{0.6cm} exteroceptive sensors};
\draw (3.7,7.6) node[subsec] (subsec_OTS) {\ref{subsec:OTS_devices} OTS telematics \hspace*{0.6cm} devices};
\draw (3.7,5.05) node[subsec] (subsec_exteroceptive) {\ref{subsec:gnss} GNSS\\
\ref{subsec:magnetometer} Magnetometer\\
\ref{subsec:microphone} Microphone\\
\ref{subsec:biometric} Biometric sensors\\
\ref{subsec:ultrasonic} Ultrasonic sensor\\
\ref{subsec:radar} Radar\\
\ref{subsec:lidar} LiDAR\\
\ref{subsec:camera} Camera};
\draw (3.7,2.7) node[subsec] (subsec_safety) {\ref{subsec:safety} Safety};
\draw (3.7,1.95) node[subsec] (subsec_driv) {\ref{subsec:driving_behavior} Driving behavior};
\draw (3.7,1.15) node[subsec] (subsec_road) {\ref{subsec:road_monitoring} Road monitoring};
\draw (3.7,0.35) node[subsec] (subsec_nav) {\ref{subsec:navigation} Navigation};
\draw (3.7,-1.3) node[subsec] (subsec_challenges) {$-$ Which data is important?\\
$-$ Data processing\\
$-$ Security\\
$-$ Risk assessment};
\draw (3.7,-3.15) node[subsec] (subsec_conclusion) {$-$ Summary \\$-$ Perspectives};
\draw (9,10.2) node[subsubsec] (subsubsec_pass_act) {\ref{subsubsec:classification_active_passive} Active/passive sensors};
\draw (9,7.85) node[subsubsec] (subsubsec_OTS) {\ref{subsubsec:OBD_CAN} OBD-II dongle and \hspace*{0.8cm} CAN bus readers\\
\ref{subsubsec:digital_tachograph} Digital tachograph\\
\ref{subssubsec:bbox_wind} Black-box and \hspace*{0.8cm} windshield devices\\
\ref{subsubsec:dashcam} Dashcam\\
\ref{subsubsec:smartphones} Smartphones\\
\ref{subsubsec:wearable} Wearable devices};
\draw (9,4.8) node[subsubsec] (subsubsec_safety) {\ref{subsubsec:tire_wear} Tire wear\\
\ref{subsubsec:crash_detection} Collision detection\\
\ref{subsubsec:collision_avoidance} Collision avoidance\\
\ref{subsubsec:Lane_departure} Lane departure warning};
\draw (9,2.4) node[subsubsec] (subsubsec_driv) {\ref{subsubsec:driving_profiling} Driving profiling\\
\ref{subsubsec:driver_passenger_identification} Driver detection\\
\ref{subsubsec:driver_identification} Driver identification\\
\ref{subsubsec:Health_monitoring} Driver health monitoring\\
\ref{subsubsec:Driving_distraction} Driving distractions};
\draw (9,-0.25) node[subsubsec] (subsubsec_road) {\ref{subsubsec:road_porosity} Road porosity\\
\ref{subsubsec:road_wetness} Road wetness\\
\ref{subsubsec:pothole_detection} Pothole detection\\
\ref{subsubsec:road_type_classification} Road type classification\\
\ref{subsubsec:parking_space_detection} Parking space detection};
\draw (9,-2.7) node[subsubsec] (subsubsec_nav) {\ref{subsubsec:GNSS-based} GNSS/INS-based \hspace*{0.85cm} navigation \\
\ref{subsubsec:SLAM-based} SLAM-based navigation\\
\ref{subsubsec:map_tracking} Map tracking datasets};
\foreach \f/\t in
{sec_1/subsec_intro, sec_2/subsec_OTS, sec_2/subsec_classification, sec_3/subsec_exteroceptive, sec_4/subsec_nav, sec_4/subsec_road, sec_4/subsec_driv, sec_4/subsec_safety, sec_5/subsec_challenges, sec_6/subsec_conclusion, subsec_classification/subsubsec_pass_act, subsec_OTS/subsubsec_OTS, subsec_safety/subsubsec_safety, subsec_driv/subsubsec_driv, subsec_road/subsubsec_road, subsec_nav/subsubsec_nav}
\draw[black, very thick] (\f.east) -- (\t.west);
\end{tikzpicture}}
\caption{Paper organization outline.}
\label{fig:outline}
\end{figure}
For the sake of readability, the organization of this paper is shown in Figure\,\ref{fig:outline}.
Section\,\ref{sec:background} reviews sensor aspects in telematics, their classification, advantages and disadvantages. It also provides an overview of Off-the-Shelf (OTS) telematics devices.
Section\,\ref{sec:_ext_sensors_telematics} details exteroceptive sensors used in vehicle telematics. Section\,\ref{sec:services_and_applications} presents a taxonomy of high-level telematics applications, namely navigation, road monitoring, driving behavior, and safety. For each application, a list of works by sensor is provided.
Open research directions and challenges in the usage of exteroceptive sensors for vehicular telematics are presented in Section\,\ref{sec:challenges}, while Section\,\ref{sec:conclusion} concludes the survey with a summary and perspectives.
\section{Background}
\label{sec:background}
Sensors in telematics enable monitoring a broad range of functions inherent to the management of diverse driving activities. Electronic sensing systems and data processing capacity reduce the driver's workload and enable innovative services. This section presents a classification of sensors for telematics purposes, according to the environment in which they operate. Figure~\ref{fig:built-in_sensors} illustrates the proposed classification, with sensors placed in the central column. On the left-hand side, each sensor is connected to the OTS telematics devices in which it is embedded, while on the right-hand side possible fields of application are identified.
\subsection{Proprioceptive vs. exteroceptive sensors}
\label{subsec:classification}
A wide variety of sensors is used in regular vehicles, most of them gathering information on internal mechanisms. Self-driving vehicles, on the other hand, incorporate external sensors whose function is critical to analyze the surrounding environment. Therefore, vehicular telematics is no longer merely mechanical, leading to the analysis of both internal and external variables. As such, a basic classification of sensors is according to the sensed variables, as \textit{proprioceptive} or \textit{exteroceptive}~\cite{Siegwart:2004}.
\textit{Proprioceptive sensors} measure variations in signals generated by the vehicle's internal systems (motor speed, battery level, etc.). Those measurements allow estimating different metrics specific to the vehicle, such as speed, fluid levels, and acceleration, among other quantities of interest for vehicle telematics. An accelerometer is an example of a proprioceptive sensor.
\textit{Exteroceptive sensors} put the vehicle in contact with stimuli coming from its surrounding environment. As a result, it is possible to acquire information such as measurements of distance, light intensity, and sound amplitude, or the detection of pedestrians and surrounding vehicles. Measurements from exteroceptive sensors are therefore interpreted by the vehicle to produce meaningful environmental features.
Proprioceptive sensors, inseparable from the vehicle powertrain and chassis, are widely used in vehicle production. In contrast, exteroceptive sensors are mostly used in luxury vehicles, vehicles with some level of autonomy, or experimental vehicles. Conventionally, proprioceptive sensors are designed to measure single-process systems and are therefore limited in capacity. They are unexposed, protected from the external environment. In contrast, exteroceptive sensors are designed to analyze and monitor both internal (vehicle cabin) and external environments. Thus, they must be able to operate in different, including challenging, conditions~\cite{Sjafrie:2020} (e.g., rain, humidity, snow, night time).
\subsubsection{Active and passive sensors}
\label{subsubsec:classification_active_passive}
Proprioceptive and exteroceptive sensors are designed either to simply capture and read a specific metric, or to interact with the environment by observing and recording changes in it, or reactions from it. This leads to classifying sensors as active or passive. \textit{Passive} sensors perform measurements without interacting with the environment; in other words, the sensor receives energy stimuli from the environment. \textit{Active} sensors interact with the environment to acquire data. For example, they may emit waves outside the vehicle and measure the environment's reaction to those waves. Wave emitters can be lasers or radars, among others.
\begin{figure}[!t]
\centering
\resizebox{0.452\textwidth}{!}{
\begin{tikzpicture}
\tikzstyle{Perception}=[align=center, text width=2cm, minimum height=1.5cm, rectangle, draw = black, rounded corners = 2mm]
\tikzstyle{Sensor}=[align=center, text width=2.3cm, minimum height=0.5cm, rectangle, draw = black, rounded corners = 1mm]
\tikzstyle{Standalone}=[align=center, text width=1.7cm, minimum height=1cm, rectangle, draw = black, rounded corners = 2mm]
\tikzstyle{Services}=[align=center, text width=2.3cm, minimum height=1cm, rectangle, draw = black, rounded corners = 2mm]
\draw (0,-.1) node[align=center, text width=1.95cm, minimum height=7.3cm, rectangle, draw = black, rounded corners = 3mm, dashed] (g_sensor) {};
\draw (0,4.0) node[align=center, text width=2.5cm] (t_g_sensor) {\textit{OTS Telematics devices}};
\draw (0,2.9) node[Standalone, fill=orange!40] (bbox) {Black-box / Windshield};
\draw (0,1.7) node[Standalone, fill=orange!40] (dash) {Dashcam};
\draw (0,0.5) node[Standalone, fill=orange!40] (smart) {Smartphone};
\draw (0,-0.7) node[Standalone, fill=orange!40] (wear) {Wearable};
\draw (0,-1.9) node[Standalone, fill=orange!40] (obd) {OBD-II dongle};
\draw (0,-3.1) node[Standalone, fill=orange!40] (tach) {Tachograph};
\draw (4,1.68) node[fill=cyan!30, opacity=0.8, align=center, text width=2.5cm, minimum height=5.35cm, rectangle, draw = black, rounded corners = 2mm, dashed] (g_sensor) {};
\draw (4,4.55) node[align=center] (t_g_sensor) {\textit{Exteroceptive}};
\draw (4,4) node[Sensor, fill=gray!20] (ult) {Ultrasonic};
\draw (4,3.35) node[Sensor, fill=gray!20] (rad) {Radar};
\draw (4,2.7) node[Sensor, fill=gray!20] (lid) {LiDAR};
\draw (4,2.05) node[Sensor, fill=white] (cam) {Camera};
\draw (4,1.4) node[Sensor, fill=white] (mic) {Microphone};
\draw (4,0.725) node[Sensor, fill=gray!20] (gnss) {GNSS};
\draw (4,0.05) node[Sensor, fill=white] (mag) {Magnetometer};
\draw (4,-0.62) node[Sensor, fill=white] (plet) {Biometric};
\draw (4,-3.15) node[fill=yellow!30, opacity=0.8, align=center, text width=2.5cm, minimum height=3.1cm, rectangle, draw = black, rounded corners = 2mm, dashed] (g_sensor) {};
\draw (4,-1.4) node[align=center] (t_g_sensor) {\textit{Proprioceptive}};
\draw (4,-2) node[Sensor, fill=white] (gyr) {Gyroscope};
\draw (4,-2.67) node[Sensor, fill=white] (acc) {Accelerometer};
\draw (4,-3.5) node[Sensor, fill=white] (can) {CAN bus sensors};
\draw (4,-4.35) node[Sensor, fill=white] (hall) {Hall effect};
\draw (8,0.6) node[align=center, text width=2.5cm, minimum height=6.15cm, rectangle, draw = black, rounded corners = 3mm, dashed] (g_sensor) {};
\draw (8,4.1) node[align=center, text width=3cm] (t_g_sensor) {\textit{Services and applications}};
\draw (8,3) node[Services, fill=green!40] (safe) {Safety (Sec.~\ref{subsec:safety})};
\draw (8,1.45) node[Services, fill=green!40] (driv) {Driving behavior (Sec.~\ref{subsec:driving_behavior})};
\draw (8,-0.25) node[Services, fill=green!40] (road) {Road monitoring (Sec.~\ref{subsec:road_monitoring})};
\draw (8,-1.8) node[Services, fill=green!40] (nav) {Navigation (Sec.~\ref{subsec:navigation})};
\foreach \f/\t in
{dash/cam, smart/cam, smart/mic, bbox/gnss, smart/gnss, wear/plet, smart/mag, wear/gyr, smart/gyr, bbox/acc, wear/acc, smart/acc, tach/hall, ult/road, ult/safe, rad/road, rad/nav, rad/safe, lid/nav, lid/safe, cam/nav, cam/road, cam/driv, cam/safe, mic/driv, mic/road, gnss/nav, gnss/driv, mag/nav, plet/driv, obd/can, obd/acc, obd/gnss, dash/mic, dash/acc, obd/mag}
\draw[black, very thick] (\f.east) -- (\t.west);
\end{tikzpicture}
}
\caption{{Bipartite graphs showing the relationships between the most widely used sensors in vehicular telematics (middle column) and the OTS telematics devices embedding them (left), and between sensors and telematics services and applications (right).
In the middle column, exteroceptive sensors are the nodes on the top, proprioceptive sensors those on the bottom.
Active sensors are represented as gray nodes, passive sensors as white nodes.}}
\label{fig:built-in_sensors}
\end{figure}
\subsection{OTS telematics devices}
\label{subsec:OTS_devices}
While smartphones include a large number of sensors (e.g., GNSS, camera, microphone, accelerometer) which make them particularly suitable for insurance telematics~\cite{Wahlstrom:2017}, other sensors require dedicated hardware and a dedicated installation process. Next, we present a background on OTS telematics devices that carry exteroceptive sensors.
\smallskip\subsubsection{OBD-II dongles and CAN bus readers} \label{subsubsec:OBD_CAN}
A modern vehicle can contain more than one hundred sensors, generally associated with the mechanics and
operation of the engine and vehicle systems~\cite{Fleming:2008}. Automotive systems concentrate in three areas of the vehicle: powertrain, chassis, and body. In each area, a set of sensors measures physical quantities associated with specific functions. Measurements are sent to the Electronic Control Unit (ECU) of each system, where they are interpreted via a look-up table~\cite{Wong:2012}. Data is stored in profiles used to control the vehicle actuators and their performance, e.g., battery level, fuel injection duration, speed control, vehicle stability, and the anti-lock brake system, among others. The use of specific sensors may also be driven by other factors such as legislation and safety~\cite{Orazio:2011}. Data profiles from the ECUs can be queried for vehicle status information through the On-Board Diagnostics II (OBD-II) interface, which provides access to the vehicle sub-systems controlled by the ECUs via the CAN bus. OBD-II is widely used by automotive manufacturers for the analysis of data collected by the ECUs, and for their subsequent general diagnosis. Nevertheless, the acquisition of data through the OBD-II connector is limited to a single port and is specific to each manufacturer, which defines proprietary message codes. Commercial OBD-II dongles and, more broadly, CAN bus readers are connected to the power source of the vehicle itself and may embed extra sensors, like a GNSS receiver or an accelerometer.
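To make the standardized (non-proprietary) part of this interface concrete, the sketch below decodes the well-known mode 01, PID 0C engine-RPM response defined by SAE J1979. The helper name and the string framing are illustrative assumptions; real dongles handle CAN framing and manufacturer-specific codes on top of this.

```python
def parse_engine_rpm(response_hex):
    """Decode a mode 01, PID 0C (engine RPM) OBD-II response such as
    "41 0C 1A F8". Per SAE J1979, rpm = (256*A + B) / 4, where A and B
    are the two data bytes following the mode/PID echo (0x41 0x0C)."""
    parts = [int(b, 16) for b in response_hex.split()]
    assert parts[0] == 0x41 and parts[1] == 0x0C, "not a PID 0C response"
    a, b = parts[2], parts[3]
    return (256 * a + b) / 4.0
```

The same pattern (mode/PID echo followed by a small scaling formula) applies to the other standardized PIDs, e.g., vehicle speed or coolant temperature.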
\smallskip\subsubsection{Digital tachograph}\label{subsubsec:digital_tachograph} Commercial and utility vehicles often use equipment to log trajectory data, such as speed and distance traveled. A digital tachograph displays, records, and internally stores these measurements for the driver's work periods, which are defined by the regulatory authority. The tachograph usually relies on a Hall effect sensor located in the gearbox of the vehicle, or another mechanical interface whose movement is representative of the speed~\cite{Furgel:2006}. Data is stored on the driver's smart card and then mainly analyzed by fleet management tools. In fact, although tachographs provide precise and secure information, they are relegated to fleet management due to their acquisition and installation cost.
\smallskip\subsubsection{Black-box and windshield devices}\label{subssubsec:bbox_wind}
Usually, black-box and windshield devices are installed within the vehicle and are either equipped with self-contained sensor systems or acquire information in a piggy-back fashion via the CAN bus. These devices embed a GNSS receiver and an accelerometer to build driving profiles capturing harsh acceleration, braking, or impacts.
In addition, a windshield device may contain a SIM card and a microphone to establish voice communication with remote assistance.
\smallskip\subsubsection{Dashcams}
\label{subsubsec:dashcam}
A dashcam is an on-board camera, usually mounted over the dashboard, that records the vehicle front view. Common uses include registering collisions and road hazards, in addition to offering video surveillance services~\cite{Kim:2017}. Since the amount of information generated by the video frames is considerable, images are pre-selected by the processing system. Additional dashcam functionalities include gesture recognition and voice biometrics~\cite{Tymoszek:2020}.
It is worth noting that the utilization of dashcams is limited in some countries due to privacy concerns~\cite{Kim:2020:dashcam}.
\smallskip\subsubsection{Smartphones}
\label{subsubsec:smartphones}
Smartphones combine a variety of technologies that make them sophisticated computers, with the ability to process data and graphics, not to mention communication and sensing capabilities~\cite{Xu:2014}. Smartphones possess a large number of built-in sensors that allow continuous data collection. Combined with mobility, this empowers various types of applications with specific requirements in terms of complexity, granularity, and response time.
As for vehicular telematics, smartphones play an important role: they can acquire CAN bus data from an OBD-II dongle through a Wi-Fi or Bluetooth connection and, as such, monitor and record data from both proprioceptive and exteroceptive sensors~\cite{Wahlstrom:2017}.%
\smallskip\subsubsection{Wearable devices}
\label{subsubsec:wearable}
Complementary to smartphones, wearable devices are used to monitor human physiological and biometric signals~\cite{Seneviratne:2017}. In the vehicular telematics context, they are used for safety and driving behavior applications~\cite{Sun:2017}. Wearable devices include smartwatches, smart glasses, smart helmets~\cite{Rajathi:2019} and electrocardiogram (ECG) sensors.
\section{Exteroceptive sensors for telematics}
\label{sec:_ext_sensors_telematics}
This section describes in detail the exteroceptive sensors used for telematics purposes.
Table~\ref{tab:exteroceptive-sensors-comparative} summarizes the main features of each sensor.
\begin{table*}[ht]
\caption{Multi-dimensional comparison of the exteroceptive sensors used in vehicle telematics.}
\label{tab:exteroceptive-sensors-comparative}
\resizebox{\textwidth}{!}{%
\begin{tabularx}{1.0\textwidth}{>{\centering\arraybackslash}p{1.4cm}>{\centering\arraybackslash}p{0.8cm}>{\centering\arraybackslash}p{2.7cm}>{\centering\arraybackslash}p{0.9cm}>{\centering\arraybackslash}p{1.5cm}>{\centering\arraybackslash}p{4.5cm}>{\centering\arraybackslash}p{3.6cm}}
\toprule
\textbf{Sensor} &
\textbf{Price} &
\textbf{Main usage} &
\textbf{Precision} &
\textbf{Range} &
\textbf{Advantage} &
\textbf{Limitation}
\\ \toprule
\multirow{2}{*}{GNSS} &
\multirow{2}{*}{Low} &
\multirow{2}{*}{Navigation, positioning} &
Medium/ High &
\multirow{2}{*}{n/a} &
\multirow{2}{*}{High coverage, small form factor} &
Signal blocking in urban canyons \\ \midrule
\multirow{2}{*}{Magnetometer} &
\multirow{2}{*}{Low} &
Navigation, positioning, orientation &
\multirow{2}{*}{Medium} &
\multirow{2}{*}{n/a} &
Small form factor, low energy consumption &
\multirow{2}{*}{Magnetic interference} \\ \midrule
\multirow{2}{*}{Microphone} &
\multirow{2}{*}{Low} &
Surveillance, assistant, environmental sensing &
\multirow{2}{*}{n/a} &
\SI{150}{m}, omnidirectional &
Small form factor, low energy consumption, direction of arrival &
\multirow{2}{*}{Environmental noise} \\ \midrule
Biometric &
Low &
Health monitoring &
High &
n/a &
Simple data processing &
Uncomfortable \\ \midrule
Ultrasonic &
Low &
Environmental sensing &
Low\,(cm) &
\SI{150}{cm} &
Small form factor &
Low resolution \\ \midrule
\multirow{2}{*}{Radar} &
Low/ Medium &
\multirow{2}{*}{Environmental sensing} &
\multirow{2}{*}{High} &
\multirow{2}{*}{\SI{250}{m}} &
Robust in adverse climatic conditions and with scarce or absent illumination &
Energy consumption, data processing for classification \\ \midrule
\multirow{2}{*}{LiDAR} &
\multirow{2}{*}{High} &
\multirow{2}{*}{Environmental sensing} &
\multirow{2}{*}{High} &
\SI{200}{m}, omnidirectional &
Low sensitivity to light and weather conditions, 3D representation &
\multirow{2}{*}{Data processing latency} \\ \midrule
\multirow{2}{*}{Camera} &
Medium/ High &
\multirow{2}{*}{Environmental sensing} &
Medium/ High &
\multirow{2}{*}{Line-of-Sight} &
Multiple techniques for data processing &
Sensitive to light and weather conditions \\ \bottomrule
\end{tabularx}%
}
\end{table*}
\subsection{Global Navigation Satellite System (GNSS)}
\label{subsec:gnss}
Some OTS devices implement Location-Based Services (LBS) using an embedded GNSS receiver. GNSS allows quite accurate localization on Earth (on the meter scale) through trilateration of signals from dedicated artificial satellites. Depending on the platform on which OEM devices operate, different LBS are offered. In smartphones, some location services merge short- and long-range wireless networks such as Wi-Fi, Bluetooth, and cellular networks~\cite{Zandbergen:2009}, in addition to GNSS data~\cite{Dabove:2019}.
Nowadays, Android-based devices use messages based on the NMEA 0183 standard~\cite{NMEA:0183}. The latest updates to this standard include measurements of the pseudo-range and Doppler shift; this adds simplicity and robustness to the processing of raw GNSS measurements~\cite{GNSS:2018, Android_Dev:2019}. Nevertheless, GNSS reception suffers from outages due to interference and signal propagation issues, and from degraded measurement accuracy in urban canyons due to multipath effects and Non-Line-of-Sight (NLoS) conditions~\cite{Zhang:2011}.
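As a rough intuition for the trilateration step, the sketch below (NumPy) estimates a 2-D position from anchor positions and exact ranges, by linearizing the range equations against the first anchor and solving the result in a least-squares sense. This is a deliberately simplified model: a real GNSS receiver works in 3-D with pseudo-ranges and additionally solves for its own clock bias.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from >= 3 anchor positions and measured
    distances. Squaring and subtracting the first range equation from the
    others yields the linear system
        2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 - r_i^2 + r_0^2,
    which is solved in a least-squares sense."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - ranges[1:] ** 2 + r0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noisy ranges the same least-squares solve still applies; the multipath and NLoS effects mentioned above enter precisely as biased range measurements.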
\subsection{Magnetometer}
\label{subsec:magnetometer}
The main function of a magnetometer is measuring the strength of the Earth's magnetic field to determine orientation. Magnetometers embedded in commodity devices like smartphones are microelectromechanical systems (MEMS) which report the magnetic field on three axes with \SI{}{\micro\tesla} sensitivity~\cite{Jones:2010}. Moreover, their miniaturized form factor and low energy consumption favor their availability in a large number of devices. Thus, the magnetometer is an important component for providing navigation and LBS.
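A minimal sketch of how a level three-axis magnetometer yields a compass heading from its horizontal field components. Sign and axis conventions vary between devices (an assumption here is x pointing north, y east), and tilt compensation via an accelerometer is omitted:

```python
import math

def heading_deg(mx, my):
    """Planar compass heading from horizontal magnetic-field components,
    assuming the device is level, mx points toward magnetic north and my
    toward east; 0 deg = magnetic north, increasing clockwise."""
    return math.degrees(math.atan2(my, mx)) % 360.0
```

In practice the raw readings are first corrected for hard- and soft-iron distortions, which is the "magnetic interference" limitation listed in Table~\ref{tab:exteroceptive-sensors-comparative}.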
\subsection{Microphone}
\label{subsec:microphone}
A microphone transforms sound waves into electrical signals. These sensors are embedded as MEMS devices or condenser microphones connected to OTS devices. Microphones are an affordable solution for real time signal processing. According to the ISO\,9613-2 standard, their sensing range reaches up to \SI{200}{m} for high intensity sounds in an urban scenario~\cite{ISO_9613-2}. Moreover, microphones consume little energy, have a very small size, and offer omni-directional sensing. Devices with an array of microphones can estimate the Direction of Arrival (DoA) and localize the sound source by calculating the time difference of arrival between each microphone pair.
On the other hand, their efficiency largely depends on their sensitivity, on the amplitude of the sound waves, and on environmental noise.
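For a single microphone pair under the usual far-field assumption, the DoA follows directly from the time difference of arrival; a minimal sketch (the speed-of-sound constant assumes air at roughly room temperature):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at ~20 degrees Celsius

def doa_from_tdoa(delta_t, mic_spacing):
    """Far-field direction of arrival (radians from broadside) for a
    two-microphone pair: sin(theta) = c * delta_t / d, where delta_t is
    the time difference of arrival (s) and d the spacing (m)."""
    x = SPEED_OF_SOUND * delta_t / mic_spacing
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.asin(x)
```

Arrays with more than two microphones combine such pairwise estimates (or cross-correlations) to localize the source, as described above.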
\subsection{Biometric sensors}
\label{subsec:biometric}
Biometric sensors collect measurable biological characteristics (biometric signals) from a human being, which can then be used in conjunction with biometric recognition algorithms to perform automated person identification. ECG devices are installed in the steering wheel and in the driver's seat to measure heart activity through touch or photoelectric sensors. In telematics, they are used as a proxy for the driver's stress condition, drowsiness, and fatigue~\cite{murugan2020detection}.
\subsection{Ultrasonic sensor}
\label{subsec:ultrasonic}
Ultrasonic sensing relies on acoustic waves: a transmitter sends sound waves, and a receiver captures the waves that bounce off nearby objects. The distance of such objects is determined through the Time-of-Flight (ToF). These waves propagate at the speed of sound (which depends on the density of the propagation medium) and use frequencies higher than those audible by the human ear, between 20 and \SI{180}{kHz}~\cite{Siegwart:2004}. Sound propagation occurs conically, with opening angles between \ang{20} and \ang{40}. The ultrasonic sensor is suitable for low-speed, short- or medium-range applications (tens or hundreds of cm) like parking assistance, blind spot detection, and lateral movement monitoring. With a low power consumption (up to \SI{6}{W}) and a price under \SI{100}{\$}, it is a relatively affordable object detection sensor.
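The ToF ranging principle reduces to one line, since the measured time covers the round trip to the object and back; a minimal sketch, assuming sound speed in air at room temperature:

```python
def ultrasonic_distance(tof_s, speed_of_sound=343.0):
    """One-way distance (m) from a round-trip time of flight (s):
    the pulse travels to the object and back, hence the division by 2."""
    return speed_of_sound * tof_s / 2.0
```

Temperature changes the speed of sound by roughly 0.6 m/s per degree Celsius, which is one reason these sensors are best suited to the short ranges quoted above.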
\subsection{Radar}
\label{subsec:radar}
Radar (Radio Detection and Ranging) detectors use reflected electromagnetic waves. The device transmits radio wave pulses that bounce off objects outside the vehicle. The reflected pulses, which arrive at the sensor some time later, allow inferring different information: it is possible to determine the direction and distance of an object and to estimate its size~\cite{Sjafrie:2020}. The relative speed of moving targets can be calculated through the frequency change caused by the Doppler shift. Radar systems transmit waves in Ultra High Frequency (UHF) bands, at 24, 77, and \SI{79}{GHz}, with opening angles between \SI{9}{\degree} and \SI{150}{\degree}, and elevation up to \SI{30}{\degree}. Radar can operate at distances up to \SI{250}{m} with a power consumption starting from \SI{12}{W}; it is used for short-, mid-, and long-range object detection and adaptive cruise control at high speeds. Radars are robust in adverse climatic conditions (e.g., fog or rain) and with scarce or no lighting. Nevertheless, signal processing is harder for classification problems if radar readings are not combined with other sensors. Radar prices range from \SI{50}{\$} to \SI{200}{\$}.
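The Doppler relation mentioned above can be sketched as follows for a monostatic radar measuring radial speed only; the \SI{77}{GHz} carrier is just a representative default from the bands listed in the text:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_relative_speed(freq_shift_hz, carrier_hz=77e9):
    """Relative radial speed (m/s) of a target from the measured Doppler
    shift of a monostatic radar: v = delta_f * c / (2 * f0). The factor 2
    accounts for the shift being applied on both the outbound and the
    reflected path."""
    return freq_shift_hz * C / (2.0 * carrier_hz)
```

At \SI{77}{GHz}, a relative speed of \SI{30}{m/s} corresponds to a shift of roughly \SI{15.4}{kHz}, comfortably within reach of the mixers used in automotive radar front ends.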
\subsection{LiDAR}
\label{subsec:lidar}
LiDAR (Light Detection And Ranging) uses laser reflection instead of radio waves.
The LiDAR sensor transmits light pulses to identify objects around the vehicle. Typically, a LiDAR emits a \SI{905}{nm} wavelength laser beam to illuminate the objects, with pulses generally emitted every \SI{30}{ns}. The returning light component is coaxial with the light beam emitted by the sensor~\cite{Siegwart:2004}. The LiDAR sweeps in a circular and vertical fashion; the direction and distance of each reflected pulse are recorded as a data point. A set of points constitutes a point cloud, a spatial representation of coordinates that enables 3D model processing with high accuracy. LiDAR sensors can cover a \ang{360} horizontal field of view around the vehicle, and up to \ang{42} of vertical field of view. LiDARs are relatively insensitive to light and weather conditions. Nevertheless, processing the whole LiDAR point cloud is time consuming, so it is not suitable for real time applications. There are LiDAR sensors with low cost (from \SI{100}{\$}) and low power consumption (from \SI{8}{W}); nonetheless, these are limited to one laser beam. More advanced LiDAR models contain laser arrays (up to 128 beams), improving the point cloud resolution, at the cost of higher energy consumption (up to \SI{60}{W}) and price (up to 75,000\,\$).
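Each recorded data point is essentially a spherical-to-Cartesian conversion of one return, combining the measured range with the beam's azimuth and elevation at firing time; a minimal sketch:

```python
import math

def lidar_point(distance, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range in m, beam azimuth and elevation in
    degrees) to Cartesian x, y, z in the sensor frame: x forward at zero
    azimuth, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return x, y, z
```

Applying this conversion to every return of every beam over one full sweep yields the point cloud discussed above.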
\subsection{Camera}
\label{subsec:camera}
A camera is a vision sensor used to take images both inside and outside the vehicle, to detect objects on the road as well as to analyze the behavior of the driver and the environment inside the cabin. CMOS-based cameras are widely used in vehicular applications~\cite{Miller:2004}. They can operate in the Visible (VIS) and Near-Infrared (NIR) spectral regions~\cite{Siegwart:2004}. VIS cameras are largely used because they reproduce instantaneous images like those perceived by the human eye. Differently, NIR cameras detect objects based on heat radiation. The quality of the images depends on the resolution and field of view of the device. Vehicular applications use monocular and stereo cameras, in addition to so-called fish-eye lenses, which produce wide-angle optical effects. Optical cameras are less expensive than LiDAR sensors and very effective, with a power consumption below \SI{2.5}{W}. Although the camera generates the highest amount of data per second, accurate methods for object detection and recognition through image processing exist nowadays, like Convolutional Neural Networks (CNN) and deep learning, which handle real images better than LiDAR data. Some drawbacks exist though: image quality depends on lighting and weather conditions, and scene representation is limited to the pointing direction and line-of-sight.
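As a small illustration of how the field of view mentioned above follows from the pinhole camera model, given sensor width and focal length (the values in the example are illustrative, not tied to any specific device):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole-model camera:
    fov = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm
                                        / (2.0 * focal_length_mm)))
```

The short focal lengths of fish-eye lenses push this angle well past \ang{90}, which is why they trade geometric distortion for coverage.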
\section{Services and Applications}
\label{sec:services_and_applications}
Next, we cover practical telematics applications and services that use exteroceptive sensors exclusively.
We organize them into four macro-areas: safety, driving behavior, road monitoring, and navigation. Given the rich literature in vehicle telematics, for each macro-area we have selected works according to their practicability, novelty, relevance, and release date. In addition, we report on available datasets when applicable.
\subsection{Safety}
\label{subsec:safety}
As summarized in Table~\ref{tab:safety}, the safety area includes four categories of applications, related to vehicle maintenance, driving and external events: tire wear, collision detection, collision avoidance and lane departure.
\begin{table}[t]
\caption{Safety applications with exteroceptive sensors.}
\label{tab:safety}
\begin{tabularx}{\columnwidth}{CCC}
\toprule
\textbf{Application} & \textbf{Sensor/\textcolor{gray}{Dataset}} & \textbf{References} \\ \toprule
Tire wear & Radar & \cite{Matsuzaki:2012, Prabhakara:2020:osprey} \\ \midrule
\multirow{2}{*}{Collision detection} & Microphone & \cite{foggia_crowded_roads, Foggia2015, Foggia2016, saggese_sorenet, morfi2018deep, 10.1145/3378184.3378186, 7472921, huang2020urban, crashzam, crashzam_springer}\\
& \textcolor{gray}{Dataset} & \cite{crashzam, dataset:gemmeke, dataset:mesaros, dataset:Piczak, dataset:Salamon:2014, dataset:stowell} \\ \midrule
\multirow{5}{*}{Collision avoidance} & Camera & \cite{gloger2005camera, Kilicarslan:2019, Kim:2012, Wu:2018:CA} \\
& Radar & \cite{joshi:2019:CA, Blanc:2004, Sun:2012:CA, Park:2003} \\
& LiDAR & \cite{kumar:2019:CA, Kampker:2018, Natale:2010, ogawa:2011:CA, Nashashibi:2008} \\
& LiDAR+Camera & \cite{wei2018lidar, cho2019study, Nobis:2019} \\
& Microphone & \cite{Mizumachi:2014} \\ \midrule
\multirow{3}{*}{Lane departure} & Camera & \cite{Popken:2007, Freyer:2010, Ono:2016, Grimm:2013, Andrade:2018, Gaikwad:2015, Boutteau:2013, Baili:2017}\\
& LiDAR & \cite{Ghallabi:2018, kammel2008lidar, hata2014road, zhang2010lidar, wu2020automatic} \\
& \textcolor{gray}{Dataset} & \cite{Aly:2008, Wu:2012, Fritsch:2013, VPGNet:2017, TuSimple:2017, CuLane:2017, Berriel:2017, ApolloScape:2019, BDD100K:2018} \\ \bottomrule
\end{tabularx}%
\end{table}
\smallskip\subsubsection{Tire wear}
\label{subsubsec:tire_wear}
Often underestimated, tires play a crucial role in vehicle stability and control, especially on slippery road surfaces. Nevertheless, tire wear is challenging to measure continuously due to the position and dynamics of the tires. Matsuzaki \textit{et al.}~\cite{Matsuzaki:2012} analyze the tire surface deformation. The system uses a wireless CCD camera attached to the wheel rim to obtain 3D images. The Digital Image Correlation Method (DICM) is used to estimate the strain distribution and friction load in the tire. Results show a 10\% error range in the tire load. Osprey is a debris-resilient system designed to measure tread depth without embedding any electronics within the tire itself~\cite{Prabhakara:2020:osprey}. Instead, Osprey uses a mmWave radar, measuring the tire wear as the difference between tire tread and groove ranges.
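Osprey's range-difference principle can be caricatured numerically as below. The ranges are illustrative (not taken from the paper); the \SI{1.6}{mm} threshold is the EU legal minimum tread depth:

```python
EU_MIN_TREAD_MM = 1.6  # EU legal minimum tread depth

def tread_depth_mm(range_to_groove_mm, range_to_tread_mm):
    """Range-difference tread-depth estimate: the radar sees the groove
    bottom slightly farther away than the tread surface, and the
    difference between the two ranges is the remaining depth."""
    return range_to_groove_mm - range_to_tread_mm

def tire_worn(range_to_groove_mm, range_to_tread_mm):
    """Flag a tire whose estimated tread depth is below the legal minimum."""
    return tread_depth_mm(range_to_groove_mm, range_to_tread_mm) < EU_MIN_TREAD_MM
```

The hard part solved by Osprey is obtaining those two ranges reliably on a rotating, debris-covered tire, not the subtraction itself.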
\smallskip\subsubsection{Collision detection}
\label{subsubsec:crash_detection}
The correlation between first aid delay and death probability in severe car accidents has been statistically proved~\cite{SnchezMangas2010ThePO}. For this reason, an automatic collision detector and rescue caller, like the eCall system~\cite{eu:ecall2}, is compulsory for all new cars sold in the European Community. For older car models devoid of the eCall system, some retrofit devices are available in the form of a black-box connected to a \SI{12}{V} socket plug and the OBD interface~\cite{8000985, bosch:ecallretrofit, bosch:ecallconnectivity, splitsecnd:ecall}. Such devices rely on a 3-axis accelerometer to detect the collision impact. Nevertheless, when the accelerometer is not firmly attached to the vehicle chassis, the acceleration measurement is not reliable. The accelerometer is also prone to false positives, e.g., after a street bump or when a pothole is hit, and OBD dongles tend to fold out during impacts.
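The accelerometer-based trigger in such retrofit devices can be caricatured as a simple g-force threshold on the 3-axis magnitude. The threshold below is illustrative, not a calibrated value; real devices add filtering and duration checks precisely to suppress the bump and pothole false positives mentioned above:

```python
import math

GRAVITY = 9.81          # m/s^2
CRASH_THRESHOLD_G = 4.0  # illustrative trigger level, not a calibrated value

def detect_collision(samples):
    """Flag a collision when any 3-axis accelerometer sample exceeds the
    g-force threshold. samples: iterable of (ax, ay, az) in m/s^2."""
    for ax, ay, az in samples:
        g_force = math.sqrt(ax * ax + ay * ay + az * az) / GRAVITY
        if g_force > CRASH_THRESHOLD_G:
            return True
    return False
```

Note that a device resting in a cup holder sees the same thresholded magnitude as one bolted to the chassis, which is why loose mounting degrades reliability.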
Recent sound event recognition breakthroughs make it possible to detect a car accident through sound analysis, without specific requirements on the microphone position or orientation. Foggia \textit{et al.} were among the first to design a model for urban sound recognition, including car crashes~\cite{foggia_crowded_roads, Foggia2015, Foggia2016}. Initially, their audio classification was based on a combination of bag of words and a Support Vector Machine (SVM), with features extracted from raw audio signals. Subsequently, Deep Neural Network (DNN) models, in the form of CNNs, have proved their effectiveness in classifying audio signals from their spectrogram representation~\cite{saggese_sorenet}. Their solutions focus on the creation of a larger, inclusive roadside surveillance system or microphone-based Intelligent Transportation Systems (ITS), like other works~\cite{morfi2018deep, 10.1145/3378184.3378186, 7472921, huang2020urban}.
Sammarco and Detyniecki~\cite{crashzam} shift the focus to driver and passenger safety, training an SVM model directly on crash sounds recorded inside the car cabin and running it in a mobile application. All other sounds typically reproduced within vehicles, like people talking, radio music, and engine noise, are treated as negative samples instead. The proposed Crashzam solution does not require a roadside surveillance infrastructure, favoring scalability. Moreover, the same authors provide a method for impact localization~\cite{crashzam_springer}. The aim is to provide a quick damage assessment and fast querying for spare parts. It is based on a four-microphone device placed at the center of the car cabin and on the knowledge of the vehicle dimensions. Besides the particular dataset of events recorded within the car cabin~\cite{crashzam}, more generic urban-event audio datasets exist~\cite{dataset:gemmeke, dataset:mesaros, dataset:Piczak, dataset:Salamon:2014, dataset:stowell}. These datasets contain sound clips, typically classified with Machine Learning (ML) techniques on features extracted through the Mel-Frequency Cepstral Coefficients (MFCC) algorithm.
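As a toy illustration of this feature-plus-classifier pipeline (not the implementation of any cited work), the sketch below stands in crude log band energies for full MFCCs and a nearest-centroid rule for the SVM; `logband_features` and the synthetic signals are our own hypothetical constructions:

```python
import numpy as np

def logband_features(signal, n_fft=256, n_bands=8):
    # Crude stand-in for MFCCs: log energies in linearly spaced
    # bands of the magnitude spectrum (hypothetical helper).
    spec = np.abs(np.fft.rfft(signal, n=n_fft))
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([np.sum(b ** 2) + 1e-12 for b in bands]))

def fit_centroids(X, y):
    # One mean feature vector per class.
    return {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in sorted(set(y))}

def predict(centroids, x):
    # Nearest-centroid decision, standing in for the SVM.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy data: a "crash" is a broadband burst, "music" a pure tone.
rng = np.random.default_rng(0)
t = np.arange(2048) / 8000.0
X = [logband_features(rng.normal(size=2048)) for _ in range(5)] + \
    [logband_features(np.sin(2 * np.pi * 440 * t)) for _ in range(5)]
y = ["crash"] * 5 + ["music"] * 5
model = fit_centroids(X, y)
label = predict(model, logband_features(rng.normal(size=2048)))
```

A broadband test burst lands near the "crash" centroid because its energy is spread across all bands, whereas the tone concentrates energy in one band.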
\smallskip\subsubsection{Collision avoidance}
\label{subsubsec:collision_avoidance}
Collision alerts warn drivers when a collision is imminent or when other vehicles or objects are detected in close proximity. The Collision Avoidance (CA) procedure includes: \textit{(i)} environment sensing for object detection, \textit{(ii)} collision trajectory and impact time estimation, \textit{(iii)} alert launching. A survey of collision avoidance techniques is provided in~\cite{Mukhtar:2015}.
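For step \textit{(ii)}, a minimal constant-velocity time-to-collision (TTC) estimate can be sketched as follows; the 2.5\,s warning threshold is an illustrative choice of ours, not taken from any cited system:

```python
def time_to_collision(gap_m, ego_speed, lead_speed):
    # Constant-velocity TTC: gap divided by closing speed (m, m/s).
    closing = ego_speed - lead_speed
    if closing <= 0:
        return None  # gap is opening: no collision at current speeds
    return gap_m / closing

def should_warn(gap_m, ego_speed, lead_speed, threshold_s=2.5):
    ttc = time_to_collision(gap_m, ego_speed, lead_speed)
    return ttc is not None and ttc < threshold_s
```

For instance, a 20\,m gap closing at 10\,m/s gives a TTC of 2\,s and triggers the warning.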
Object detection for CA involves challenges such as detecting pedestrians, vehicles, and obstacles at the front as well as at the rear or sides of the vehicle.
Object detection makes extensive use of image acquisition via different kinds of cameras, including monochrome, RGB, IR, and NIR cameras~\cite{gloger2005camera, Kilicarslan:2019, Kim:2012, Wu:2018:CA}. A peculiarity of detecting objects with image-based sensors is the use of bounding boxes, whose advantage is to crop the image around the object itself, decreasing the computational time for post-processing. In addition to visual imaging, other active-range sensors such as radar~\cite{joshi:2019:CA, Blanc:2004, Sun:2012:CA, Park:2003} and LiDAR~\cite{kumar:2019:CA, Kampker:2018, Natale:2010, ogawa:2011:CA, Nashashibi:2008} can determine the proximity of objects around the vehicle in a two- or three-dimensional representation. Another strategy consists of sensor fusion, for example combining camera and LiDAR~\cite{wei2018lidar, cho2019study, Nobis:2019}. Since such sensors for object detection are complementary and coordinated, they create a more resilient CA system. Surrounding sound can also be used for object detection. Mizumachi \textit{et al.}~\cite{Mizumachi:2014} propose a sensing method relying on a microphone array to warn drivers about another vehicle approaching from the rear side. They employ a spatial-temporal gradient method in conjunction with a particle filter for fine Direction of Arrival (DoA) estimation.
Vehicles equipped with multiple exteroceptive sensors are used to conduct experiments on various research areas and, in particular, on object detection for CA.
Table\,\ref{tab:datasets_object_detection} lists available datasets with external perception data, provided both as raw images and as semantic annotations. In general, detected objects are marked with 2D or 3D bounding boxes. Based on these markings, it is possible to categorize the detected objects, most of the time using neural networks. Some datasets are limited in terms of time or distance traveled (Table\,\ref{tab:datasets_object_detection}).
\begin{table}[t]
\caption{Datasets for object detection with exteroceptive sensors.}
\label{tab:datasets_object_detection}
\begin{tabularx}{\columnwidth}{>{\centering\arraybackslash}p{2.1cm}>{\centering\arraybackslash}p{0.5cm}>{\centering\arraybackslash}p{0.5cm}>{\centering\arraybackslash}p{0.6cm}>{\centering\arraybackslash}p{0.65cm}>{\centering\arraybackslash}p{0.65cm}>{\centering\arraybackslash}p{0.8cm}}
\toprule
\multirow{2}{*}{\textbf{Dataset}} & \textbf{Fra\-mes} & \textbf{Sce\-nes} & \multirow{2}{*}{\textbf{Label}} & \textbf{Annot. types} & \textbf{Annot. frames} & \multirow{2}{*}{\textbf{Size}} \\ \toprule
KITTI~\cite{KITTI:2012} & 43\,k & 22 & 3D & 8 & 15\,k & 1.5\,h \\ \midrule
KAIST~\cite{KAIST:2018} & 95\,k & -- & 2D/3D & 3 & 8.9\,k & -- \\ \midrule
Nuscenes~\cite{Nuscenes:2019} & 40\,k & 1\,k & 2D/3D & 23 & 40\,k & 5.5\,h \\ \midrule
DDAD~\cite{DDAD:2020} & 21\,k & 435 & 2D/3D & -- & 99\,k & -- \\ \midrule
A2D2~\cite{A2D2:2019} & 41\,k & -- & 3D & 38 & 12\,k & -- \\ \midrule
ApolloScape\cite{ApolloScape:2019} & 144\,k & 103 & 2D/3D & 28 & 144\,k & 100\,h \\ \midrule
BDD100K~\cite{BDD100K:2018} & 100\,k & -- & 2D & 10 & 100\,k & 1,000\,h \\ \midrule
Waymo~\cite{Waymo:2019} & 12\,M & 1.1\,k & 2D/3D & 4 & 230\,k & 6.5\,h \\ \midrule
Vistas~\cite{Vistas:2017} & 25\,k & -- & 2D & 66 & 25\,k & 6.5\,h \\ \midrule
Cityscapes~\cite{Cityscapes:2016} & 25\,k & -- & 2D & 30 & 25\,k & -- \\ \midrule
\multirow{2}{*}{Argoverse~\cite{Argoverse:2019}} & \multirow{2}{*}{--} & 113 & 3D & \multirow{2}{*}{15} & \multirow{2}{*}{22\,k} & 1\,h \\
& & 324\,k & 2D & & & 320\,h \\ \midrule
H3D~\cite{H3D:2019} & 27\,k & 160 & 3D & 8 & 27\,k & 0.77\,h \\ \midrule
Oxford~\cite{Oxford:2019} & -- & 100\,+ & 2D/3D & -- & -- & 1,000\,km \\ \midrule
Eurocity~\cite{Eurocity:2019} & 47\,k & -- & 2D & 8 & -- & 53\,h \\ \midrule
Canadian~\cite{Canadian:2020} & 7\,k & -- & 2D/3D & 16 & -- & -- \\ \midrule
Lyft5~\cite{Lyft_Prediction:2020} & -- & 170\,k & 3D & 9 & 46\,k & 1,118\,h \\ \midrule
D$^2$-City~\cite{D2City:2019} & 700\,k & -- & 2D & 12 & -- & 55\,h \\ \midrule
BLVD~\cite{BLVD:2019} & 120\,k & -- & 2D/3D & 28 & 250\,k & -- \\ \midrule
Honda~\cite{Honda:2018} & -- & -- & 2D & 30 & -- & 104\,h \\ \midrule
Ford AV~\cite{Ford:2020} & -- & -- & 3D & -- & -- & 66\,km \\ \midrule
Astyx~\cite{Astyx:2019} & 546 & -- & 2D/3D & -- & -- & -- \\ \bottomrule
\end{tabularx}
\end{table}
\smallskip\subsubsection{Lane departure}
\label{subsubsec:Lane_departure}
Lane detection (LD) and tracking (LT) is a hot topic in the driving safety area due to the complexity of achieving reliable results. Most applications using LD and LT aim to warn the driver about an odd trajectory before lane crossing, to prevent accidents. The first challenge is to correctly extract the lane from the acquired image in a single-frame context. This process must be both quick and precise.
Automotive manufacturers deploy inexpensive cameras, usually on the vehicle windshield. Audi implements a monochrome camera with a CMOS image sensor. When the driver performs any maneuver that is considered dangerous by the system, a vibration of the steering wheel is produced~\cite{Popken:2007, Freyer:2010}. Toyota uses monocular cameras in the vehicles to detect LD and send alerts for lane-keeping. These cameras are equipped with a single lens for detecting white lane markings and headlights~\cite{Ono:2016}. Mercedes-Benz uses a stereo camera for its lane-keeping system. Starting from the detection of lane markings, a steering assistant interacts with the driver to facilitate vehicle driving. The system also uses vibration of the steering wheel to alert the driver~\cite{Grimm:2013}.
Andrade \textit{et al.}~\cite{Andrade:2018} propose a three-level image processing strategy for LD. In the low-level, the system essentially performs image compression and delimits the Region of Interest (ROI). In the mid-level, it uses filters to extract features. Finally, high-level processing uses the Hough transform algorithm to extract possible line segments in the ROI. A similar approach is employed by Baili \textit{et al.}~\cite{Baili:2017}. With the same LD technique, Gaikwad and Lokhande~\cite{Gaikwad:2015} use a Piecewise Linear Stretching Function (PLSF) in combination with the Euclidean distance transform to keep false alarms under 3\% and the lane detection rate above 97\%.
The PLSF converts images to grayscale in binary mode and improves contrast in the ROI. Boutteau \textit{et al.}~\cite{Boutteau:2013} employ fish-eye cameras to detect lane lines: by projecting the lines onto a unit virtual sphere and triangulating their perspective projections, they reconstruct the road lines in 3D. Omnidirectional line estimation uses the RANSAC (RANdom SAmple Consensus) method. The system has a true positive rate of 86.9\%. As road markings are reflective, they can be detected using laser intensity data coming from a LiDAR~\cite{Ghallabi:2018, kammel2008lidar, hata2014road, zhang2010lidar, wu2020automatic}. Following this intuition, LD includes road segmentation, detecting curbs with elevation information and selecting the most reflective points from the road plane.
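The Hough-transform stage used by several of the works above can be sketched under simplifying assumptions (binary edge points already extracted, a single dominant line, coarse one-degree and one-pixel bins):

```python
import numpy as np

def hough_peak(points, shape, n_theta=180):
    # Vote each edge point (row, col) into a (rho, theta) accumulator
    # for the line model rho = x*cos(theta) + y*sin(theta), and return
    # the strongest cell.
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for r, c in points:
        rhos = np.round(c * np.cos(thetas) + r * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
    return rho_i - diag, float(np.rad2deg(thetas[theta_i]))

# A vertical lane marking: edge points along image column 20.
points = [(r, 20) for r in range(50)]
rho, theta_deg = hough_peak(points, (50, 40))
```

The vertical marking is recovered as the line with $\rho = 20$ and $\theta = 0^{\circ}$; real pipelines then keep several accumulator peaks and filter them by slope.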
To study lane detection, there are some datasets available with vision-based real data. These have been collected under different climatic and light exposure conditions. The datasets consist of video sequences, images and frames of real scenes on the road~\cite{Aly:2008, Wu:2012, Fritsch:2013, VPGNet:2017, TuSimple:2017, CuLane:2017, Berriel:2017, ApolloScape:2019, BDD100K:2018}.
\subsection{Driving behavior}
\label{subsec:driving_behavior}
One of the main risk factors on the roads is human driving~\cite{Singh:2015}. Driver behavior is associated with different events that generate dangerous or aggressive actions. As shown in Table\,\ref{tab:driving_behavior}, driving behavior can be evaluated in macro-areas that study different events in driving practice. The literature on the classification of driving behaviors is rich. In the insurance market, commercial products like Pay As You Drive (PAYD) or Pay How You Drive (PHYD) determine their price taking driving behavior metrics into account~\cite{bordoff:2010}.
\begin{table}[!t]
\caption{Driving behavior applications with exteroceptive sensors.}
\label{tab:driving_behavior}
\begin{tabularx}{\columnwidth}{CCC}
\toprule \textbf{Application} & \textbf{Sensor/\textcolor{gray}{Dataset}} & \textbf{References}\\ \toprule
\multirow{5}{*}{Driving profiling} & GNSS & \cite{Abdelrahman2019, zheng2015trajectory, Dong:2017,Dong:2016, andrieu2012comparing, chen2018driver} \\
& Microphone & \cite{Goksu:2018, Kubera:2019, Ma:2017} \\
& Camera & \cite{Tran:2012} \\
& Tachograph & \cite{Rygula:2009, Kim:2016, Zhou:2019} \\
& \textcolor{gray}{Dataset} & \cite{Hankey:2016, dataset:campbell2012shrp, dataset:VTT1/LYUBJP_2020, dataset:VTT1/KVCO0B_2017, LeBlanc:2010, Bender:2015, site:nhtsa} \\ \midrule
Driver detection & Microphone & \cite{chu:2014, Yang:2011} \\ \midrule
\multirow{2}{*}{Driver identification} & GNSS & \cite{Jafarnejad:2019} \\
& \textcolor{gray}{Dataset} & \cite{Abut:2007} \\
\midrule
\multirow{3}{*}{Driver health monitoring} & Plethysmograph & \cite{Shin:2010} \\
& ECG & \cite{Cassani:2019, Jung:2014, Wartzek:2011, Sakai:2013} \\
& Biometric & \cite{Sinnapolu:2018, Audi:2016, MB:2019, Nismo:2013} \\ \midrule
\multirow{5}{*}{Driver distraction} & Camera & \cite{Fridman:2016, Tran:2018, Zhang:2019, Walger:2014, Wijnands:2019, Hossain:2018, Xu:2014:soberDrive, Chuang:2014, Qiao:2016, You:2013, abouelnaga2017realtime} \\
& Microphone & \cite{Xie:2019, Xu:2017} \\
& ECG & \cite{BenDkhil:2015, Yeo:2009} \\
& Infrared & \cite{Bhaskar:2017, Lee:2008} \\
& \textcolor{gray}{Dataset} & \cite{Taamneh:2017} \\ \bottomrule
\end{tabularx}%
\end{table}
\smallskip\subsubsection{Driving profiling}
\label{subsubsec:driving_profiling}
Risk predictions are based on driving behavior and profiling, always supported by historical GNSS data and sometimes enriched with weather and traffic conditions~\cite{Abdelrahman2019, zheng2015trajectory}. Recently, Dong \textit{et al.}~\cite{Dong:2017} propose an Autoencoder Regularized deep neural Network (ARNet) and a trip encoding framework called trip2vec to learn drivers' driving styles directly from GPS records. This method achieves a higher identification accuracy than their own previous work on characterizing driving styles, based on different DNN architectures~\cite{Dong:2016}. Authors in~\cite{andrieu2012comparing, chen2018driver}, instead, consider and evaluate fuel consumption and eco-driving as proxies of driving behavior.
The identification of driving characteristics is relevant to describe different profiles of driving behavior.
Instead of relying on low-frequency GNSS points, G\"oksu~\cite{Goksu:2018} proposes to monitor the vehicle speed through acoustic signals. He employs Wavelet Packet Analysis (WPA) to process the sound of engine speed variations, which provides arbitrary time-frequency resolution. Given that the WPA output is a set of sub-signals, the author uses norm entropy, log energy, and energy as features, which feed a Multi-Layer Perceptron (MLP). Experiments were conducted with data collected from four different vehicles, using a digital recorder attached to a microphone located \SI{1}{m} away from the engine. Best results are obtained with norm entropy as the feature. On the other hand, Kubera \textit{et al.}~\cite{Kubera:2019} study drivers' behavior when approaching speed check points by analyzing audio signals recorded by a roadside microphone. They test multiple ML models (SVM, random forest, and ANN), as well as a time series-based approach, to classify cars as accelerating, decelerating, or maintaining constant speed. Results show 95\% classification accuracy in speed estimation. A microphone is also used for detecting turn signals in a larger auto-calibrating, smartphone-based dangerous driving behavior identification system~\cite{Ma:2017}.
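The three per-sub-band features named above are straightforward to compute; a sketch follows, where the norm-entropy exponent $p=1.5$ is an illustrative choice of ours rather than the value used in the cited study:

```python
import numpy as np

def subband_features(subsignals, p=1.5):
    # Per sub-band: energy sum(x^2), log energy sum(log x^2),
    # and norm entropy sum(|x|^p), as in wavelet-packet feature sets.
    feats = []
    for x in subsignals:
        x = np.asarray(x, dtype=float)
        energy = float(np.sum(x ** 2))
        log_energy = float(np.sum(np.log(x ** 2 + 1e-12)))
        norm_entropy = float(np.sum(np.abs(x) ** p))
        feats.append((energy, log_energy, norm_entropy))
    return feats

feats = subband_features([[1.0, -1.0, 2.0], [0.5, 0.5, 0.5]])
```

Each WPA sub-signal thus yields a three-value descriptor; stacking the descriptors of all sub-signals gives the input vector of the MLP.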
Assessments of data collected in commercial vehicles through tachographs are performed to analyze the behavior of drivers and detect dangerous events in shared driving. The collected data is of particular interest to the vehicle owner and manufacturer, telematics insurers, and regulatory entities. Data collected through tachographs loaded on commercial vehicles is studied in~\cite{Rygula:2009, Kim:2016, Zhou:2019}. A camera is also used for modeling and predicting driver behavior~\cite{Tran:2012}. The camera is actually pointed at the drivers' feet to track and analyze their movements through a Hidden Markov Model (HMM). The model is able to correctly predict brake and acceleration pedal presses 74\% of the time, \SI{133}{ms} before the actual press. Instrumenting vehicles with data recorders and transmitters to collect data for studying and assessing driving behavior is expensive and often faces privacy issues.
Nevertheless, many academic institutions and public authorities have built, on a voluntary basis, databases of real (also called ``naturalistic'') rides~\cite{Hankey:2016, dataset:campbell2012shrp, dataset:VTT1/LYUBJP_2020, dataset:VTT1/KVCO0B_2017, LeBlanc:2010, Bender:2015} or test rides in a controlled environment~\cite{site:nhtsa}.
\smallskip\subsubsection{Driver detection}
\label{subsubsec:driver_passenger_identification}
One of the basic problems of driving behavior monitoring systems is to differentiate the driver from the passengers. Systems proposed to resolve this issue are named Driver Detection Systems (DDS). The DDS is a building block for mobility services, especially common for PHYD or PAYD smartphone-based applications or fleet management, as the same person can sometimes drive their own vehicle and be just a passenger on other occasions (e.g., in taxis, buses, and friends' cars).
One common approach is to split the vehicle seats into four quadrants (front/rear and left/right), where the driver occupies either the front/left or the front/right seat, and to analyze signals during maneuvers~\cite{chu:2014}.
Following this approach, a microphone-based solution has been proposed by Yang \textit{et al.}~\cite{Yang:2011}: supposing that the vehicle has four loudspeakers at the four corners of the car cabin, and that the driver/passenger's smartphone can establish a Bluetooth connection to the car's stereo system, high-frequency beeps are played at predefined time intervals. On the other side, the beeps are recorded and analyzed by a mobile app to deduce the reception timing differences between left/right and front/rear. Despite being an elegant solution, locations equidistant from the loudspeakers present high uncertainty. Moreover, Bluetooth association is often guaranteed only to the driver's smartphone.
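The timing cue this approach exploits can be illustrated with a cross-correlation sketch; all sample indices, window boundaries, and amplitudes below are synthetic choices of ours, not parameters of the cited system:

```python
import numpy as np

def beep_arrival(beep, recording):
    # Index at which a known beep best aligns with the recording,
    # found by sliding cross-correlation.
    corr = np.correlate(recording, beep, mode="valid")
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
beep = np.sin(2 * np.pi * np.arange(64) / 8)
rec = 0.01 * rng.normal(size=400)    # cabin background noise
rec[50:114] += beep                  # left-speaker beep in window 1
rec[200:264] += 0.8 * beep           # right-speaker beep in window 2
left_delay = beep_arrival(beep, rec[:160])    # delay inside window 1
right_delay = beep_arrival(beep, rec[160:])   # delay inside window 2
# The speaker whose beep arrives with the smaller delay is closer.
side = "left" if left_delay < right_delay else "right"
```

Here the right-speaker beep arrives with the smaller in-window delay, so the phone is inferred to sit on the right side of the cabin.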
\smallskip\subsubsection{Driver identification}
\label{subsubsec:driver_identification}
A slightly different problem is Driver Identification (DI): given a set of drivers, recognizing who is currently driving. Traditional authentication methods, which use smart cards, RFID tags, or code dialing, require the installation of specific hardware. Jafarnejad \textit{et al.}~\cite{Jafarnejad:2019} propose a DI approach based on noisy, low-rate location data provided by GNSS receivers embedded in smartphones, car-navigation systems, or external devices. The authors extract features through a semantic categorization of the data and of metrics detected in the dataset, and a DNN architecture analyzes these features. Results show that the approach achieves an accuracy of 81\% for 5 drivers. Nonetheless, the limited amount of training data can generate errors; hence, it is necessary to complement the algorithm with an additional authentication method. Moreira-Matias and Farah~\cite{Moreira:2017} propose a methodology for DI using historical trip-based data. The system uses data acquired through a data recorder and ML techniques for feature analysis. Results show a high accuracy in predicting the driver category ($\approx$88\%).
\smallskip\subsubsection{Driver health monitoring}
\label{subsubsec:Health_monitoring}
Drivers' health conditions influence driving actions, including maneuvers leading to accidents. For this reason, researchers and industry are interested in the use of sensors for drivers' physiological electrocardiogram (ECG) signals. The correlation between ECG signals and both heart and breathing rhythms describes alterations in the body due to stress, fatigue, drowsiness, sleepiness, inattention, drunkenness, decision errors, and health issues~\cite{Choi:2016}. One strategy for obtaining ECG signals is to use resistive sensors on the steering wheel.
Osaka~\cite{Osaka:2012} and Shin \textit{et al.}~\cite{Shin:2010} have developed heart rate verification systems using electrodes and a photo-plethysmograph, a heart-rate sensor based on a photoelectric pulse wave.
Cassani \textit{et al.}~\cite{Cassani:2019} evaluate ECG signals through electrodes installed on the steering wheel to study three factors: ECG signal quality, estimated heart and breathing rate. Jung \textit{et al.}~\cite{Jung:2014} propose a real-time driver health monitoring system with sleepiness alerts. Another technique is the use of sensors on the back of the driver's seat. Wartzek \textit{et al.}~\cite{Wartzek:2011} propose a reliable analysis of sensors distributed according to the morphology of the driver in the back of the seat. The authors show that 86\% of the samples are reliable on a testbed of 59 people. Sakai \textit{et al.}~\cite{Sakai:2013} use resistive sensors mounted on the back of the driver's seat to analyze the heart-rate in different driving conditions, with speed variations.
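To give a flavor of the signal processing behind such heart-rate sensors, a deliberately simple peak-counting estimate over a synthetic photo-plethysmograph trace could look like the sketch below; real PPG pipelines filter and validate far more carefully:

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    # Count local maxima above the mean and convert to beats per minute.
    x = np.asarray(ppg, dtype=float)
    mean = x.mean()
    peaks = sum(
        1 for i in range(1, len(x) - 1)
        if x[i] > mean and x[i] > x[i - 1] and x[i] >= x[i + 1]
    )
    return 60.0 * peaks / (len(x) / fs)

# Synthetic 75 bpm (1.25 Hz) pulse wave, 8 s sampled at 50 Hz.
fs = 50
t = np.arange(8 * fs) / fs
bpm = heart_rate_bpm(np.sin(2 * np.pi * 1.25 * t), fs)
```

Ten pulse peaks in eight seconds map back to the 75 bpm rate of the synthetic wave.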
Recently, smartwatches have become more and more pervasive for health and wellness monitoring~\cite{Reeder:2016}. Sinnapolu \textit{et al.}~\cite{Sinnapolu:2018} propose to monitor the driver's heart rate via smartwatch; in case of critical conditions (the driver not responding to an in-vehicle button press or driver-related activity), a micro-controller sends CAN messages to activate the autopilot, pull over for assistance, and route the vehicle to the closest health center.
Automakers are also careful about their customers' well-being while driving. Nissan NISMO uses wearable technology to capture biometric data from the driver via a heart rate monitor. This device can analyze heart rate, brain activity, and skin temperature. An application can determine early fatigue, concentration, emotions, and hydration level~\cite{Nismo:2013}. Audi uses smartwatches or fitness wristbands that monitor heart rate and skin temperature. The goal is to reduce stress levels while driving, besides improving drivers' concentration and fitness. Audi plans a driver assistance service that can perform autonomous driving functions and assisted emergency stops, and implement emergency services via eCall~\cite{Audi:2016}. Mercedes-Benz monitors health and fitness levels. The smartwatch can transmit the data from its sensors through the smartphone and display them on the on-board computer. The system analyzes vital data such as stress level and sleeping quality, activating different programs depending on the individual profile of the driver~\cite{MB:2019}.
\smallskip\subsubsection{Driver distraction}
\label{subsubsec:Driving_distraction}
A recurring problem in driving behavior is distraction. In fact, just in 2017, distracted driving claimed 3,166 lives in the United States~\cite{NHTSADistractedDriving:2019}. The following are common types of distraction~\cite{NHTSADistractedDriving:2010}:
\begin{itemize}
\item {\it visual}: taking the eyes off the road;
\item {\it manual}: taking the hands off the steering wheel;
\item {\it cognitive}: taking the mind off of driving.
\end{itemize}
Most of the existing solutions use image analysis to track drivers' eyes and gestures, detecting whether the gaze is on the street ahead or looking elsewhere due to distraction (e.g., looking at the smartphone) or drowsiness~\cite{Fridman:2016, Tran:2018, Zhang:2019, Walger:2014, Wijnands:2019, Hossain:2018, Xu:2014:soberDrive, Chuang:2014, Qiao:2016, You:2013, abouelnaga2017realtime}. Usually, images from the driver's smartphone front camera are analyzed through quite complex CNN architectures for face, eye, nodding, and yawning detection, and a warning sound is reproduced in case of danger. Special precautions must be adopted at night, when ambient lighting is scarce.
Besides image or video analysis, Xie \textit{et al.}~\cite{Xie:2019} propose a driver's distraction model based on the audio signal acquired by the smartphone microphone. They design a Long Short-Term Memory (LSTM) network accepting audio features extracted with an under-sampling technique and the Fast Fourier Transform (FFT). With the goal of early detection of drowsy driving, they achieve an average total accuracy of 93.31\%. Xu \textit{et al.}~\cite{Xu:2017} also rely on sound acquired by drivers' smartphones to assess inattentive driving. Through an experimental campaign, they claim a 94.8\% model accuracy in recognizing events like fetching forward, picking up drops, turning back, and eating or drinking. Such events exhibit unique patterns in the Doppler profiles of audio signals.
As drowsiness lowers the attention level, it is also considered a form of distraction. Relying on different sources of information, the authors in~\cite{BenDkhil:2015, Yeo:2009} evaluate drowsiness by analyzing electroencephalography (EEG) signal records. Bhaskar~\cite{Bhaskar:2017}, instead, proposes EyeAwake, which monitors eye blinking rate, unnatural head nodding/swaying, breathing rate, and heart rate to detect drowsy driving, leveraging infrared sensors consisting of an infrared Light Emitting Diode (LED) and an infrared photo-transistor. Its 70\% accuracy, though, does not make it attractive despite its low cost. An infrared sensor is also used by Lee \textit{et al.}~\cite{Lee:2008} to monitor the driver's head movement, detecting drowsiness with a 78\% accuracy rate. Car manufacturers pay close attention to drivers' fatigue and drowsiness, proposing specific systems in high-range products. A list of current research and market solutions is provided in~\cite{Doudou:2020}. Taamneh \textit{et al.}~\cite{Taamneh:2017} provide the research community with a multi-modal dataset for various forms of distracted driving, including images, Electro-Dermal Activity (EDA), and adrenergic sensor (heart and breathing rate) data.
\subsection{Road Monitoring}
\label{subsec:road_monitoring}
Poor road conditions produce mechanical damage, increase vehicle maintenance expenses, and drain water poorly, not to mention the higher accident risk. Different approaches have been developed to monitor the road surface, share this information, and alert drivers, as shown in Table\,\ref{tab:road_monitoring}. As smartphones possess a three-axis accelerometer, it is possible to process the vertical acceleration signal within a mobile application to find pavement asperities or to identify dangerous zones. Nericell and TrafficSense are two example applications~\cite{Nericell:2008, TrafficSense:2008}. Nevertheless, to provide reliable measures, a proprioceptive sensor like the accelerometer must be well fixed to the vehicle chassis. On the other hand, exteroceptive sensors can easily overcome this limitation and go beyond mere pothole detection.
\smallskip\subsubsection{Road porosity}
\label{subsubsec:road_porosity}
Another way to infer road conditions is through acoustic analysis of the tire and road surface. Crocker \textit{et al.}~\cite{Crocker:2005} study the impact between tire tread, road, and air pumping with the ISO 11819-2:2017 close-proximity (CPX) method~\cite{ISO_11819}. Through their experiments, they are able to identify surfaces with greater sound absorption, like porous road pavements. Such surfaces have the advantage that they drain water well and reduce the splash-up behind vehicles during heavy rainfall. In a similar context, Bezemer-Krijnen \textit{et al.}~\cite{Krijnen:2016} study tire-road rolling noise with the CPX approach to figure out the pavement roughness and porosity, as well as the influence of tire tread and road characteristics on the noise radiation.
\smallskip\subsubsection{Road wetness}
\label{subsubsec:road_wetness}
Abdi\'c \textit{et al.}~\cite{Abdic:2016} suggest the use of a DNN model, namely a Bi-directional LSTM (BLSTM) Recurrent Neural Network (RNN), to detect road surface wetness. Data is collected with a shotgun microphone placed very close to the rear tire. The authors conduct experiments at different speeds, on different types of road, and for different road International Roughness Indexes (IRI). Their model achieves an Unweighted Average Recall (UAR) of 93.2\% for all vehicle speeds. Alonso \textit{et al.}~\cite{Alonso:2014} propose an asphalt status classification system based on real-time acoustic analysis of tire-road interaction noise. Similar to~\cite{Abdic:2016}, their goal is to detect road weather conditions with an on-board system. The authors use an SVM model in combination with feature engineering extraction methods. Results show that wet asphalt is detected 100\% of the time, even using just one feature, while dry asphalt detection achieves 88\% accuracy. Yamada \textit{et al.}~\cite{Yamada:2003} study road surface wetness based on images taken by a TV camera inside the vehicle. They employ light polarization techniques to distinguish between a dry surface and a surface wet with rain or snow. Jokela \textit{et al.}~\cite{Jokela:2009} also present IcOR, a method to monitor road conditions based on the polarization of light reflected from the road surface. To estimate the contrast of the images, the system evaluates their graininess and blurriness. IcOR uses a monochrome stereo camera pair.
\begin{table}[t]
\centering
\caption{Road monitoring applications with exteroceptive sensors.}
\label{tab:road_monitoring}
\begin{tabularx}{\columnwidth}{>{\centering\arraybackslash}p{3cm}CC}
\toprule
\textbf{Application} & \textbf{Sensor/\textcolor{gray}{Dataset}} & \textbf{References} \\ \toprule
Road porosity & Microphone & \cite{Crocker:2005, Krijnen:2016} \\ \midrule
\multirow{2}{*}{Road wetness} & Microphone & \cite{Abdic:2016, Alonso:2014} \\
& Camera & \cite{Yamada:2003, Jokela:2009} \\ \midrule
\multirow{6}{*}{Pothole detection} & Microphone & \cite{Mednis:2010} \\
& Camera & \cite{Hou:2007, Chun:2019, ye2019convolutional, Maeda_2018, shim2019road, anand2018crack, Huidrom:2013} \\
& Camera+Laser & \cite{7084929} \\
& Radar & \cite{Huston:2000} \\
& Ultrasonic & \cite{Madli:2015} \\
& \textcolor{gray}{Dataset} & \cite{road_damage_dataset_2018, kaggle_pothole_image_dataset, shi2016automatic, yang2019feature, mei2020densely}\\ \midrule
Road slipperiness & Tachograph & \cite{Jang:2019} \\ \midrule
Road type classification & Ultrasonic & \cite{Bystrov:2016} \\ \midrule
\multirow{4}{*}{Parking lots detection} & Ultrasonic & \cite{Park:2008:parking} \\
& Radar & \cite{Loeffler:2015} \\
& Camera & \cite{grassi:2015:parking, grassi2017parkmaster, Ng:2017:parking} \\
& LiDAR & \cite{Park:2019:parking} \\ \bottomrule
\end{tabularx}%
\end{table}
\smallskip\subsubsection{Pothole detection}
\label{subsubsec:pothole_detection}
Another approach to analyzing road conditions is to detect potholes and gaps. Mednis \textit{et al.}~\cite{Mednis:2010} introduce a method for pothole detection and localization called RoadMic. Their dataset includes a combination of timestamped sound fragments and GPS positions. The sound signal is low-pass filtered to discard the noise (associated with high frequencies) and to reduce transmission latency. The proposal is tested in an urban scenario, considering 10 test drives, and data is analyzed offline. Although pothole detection is based on a simple amplitude threshold on the sound signal, and localization on the triangulation of several GPS points, the authors conclude that RoadMic detects potholes with more than 80\% reliability, depending on the GPS capabilities and driving speed. Hou \textit{et al.}~\cite{Hou:2007} perform pothole recognition from 2D images taken by multiple cameras and, with a stereovision technique, interpolate each pair of images to generate a 3D image. Their initial goal is a 3D reconstruction of the pavement with an accuracy of \SI{5}{mm} in the vertical direction. Chun \textit{et al.}~\cite{Chun:2019} analyze road surface damage through cameras installed on the vehicle, taking photos at up to \SI{100}{km/h}. The authors use a CNN to classify the images and detect surface damage. Other studies also use different CNN architectures to classify road potholes and cracks~\cite{Maeda_2018, shim2019road, anand2018crack, ye2019convolutional}. Huidrom \textit{et al.}~\cite{Huidrom:2013} quantify potholes, cracks, and patches using image processing techniques supported by heuristically derived decision logic. The testbed uses a portable digital camera and a monochromatic camera.
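The amplitude-threshold idea behind RoadMic can be sketched as follows; the moving-average low-pass, the median-based threshold, and all constants are our illustrative choices rather than the published parameters:

```python
import numpy as np

def detect_impacts(signal, fs, win_s=0.05, k=4.0):
    # Smooth the rectified signal with a moving average (a crude
    # low-pass), then flag excursions above k times the median envelope.
    n = max(1, int(win_s * fs))
    env = np.convolve(np.abs(signal), np.ones(n) / n, mode="same")
    hits = env > k * np.median(env)
    # Keep only the start index of each flagged run.
    return [i for i in range(len(hits))
            if hits[i] and (i == 0 or not hits[i - 1])]

fs = 1000
rng = np.random.default_rng(2)
audio = 0.05 * rng.normal(size=2 * fs)   # two seconds of rolling noise
audio[700:750] += 2.0                    # one impact-like burst
events = detect_impacts(audio, fs)
```

Each detected event start would then be paired with the nearest GPS fix to geotag the candidate pothole.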
Other sensors are also used instead of cameras for road condition assessment. Vupparaboina \textit{et al.}~\cite{7084929} couple camera and laser scanning within a physics-based geometric framework to identify dry and wet potholes; the system analyzes the deformations of the laser light through the camera. Huston \textit{et al.}~\cite{Huston:2000} use a Ground Penetrating Radar (GPR) in the frequency band from \SI{0.05}{GHz} to \SI{6}{GHz} to analyze concrete roadways subjected to mechanical stress, especially detecting delamination conditions with signal processing. In laboratory tests, the proposed solution is able to detect defects as small as \SI{1}{mm}. Madli \textit{et al.}~\cite{Madli:2015} use an ultrasonic sensor to identify potholes and humps as well as their depth and height, respectively. As each pothole is geotagged, the system uses a smartphone application to alert drivers approaching dangerous zones.
Other methods to detect potholes and cracks include 2D image and 3D surface analysis.
Recent progress in image processing brought by CNNs and the presence of high-resolution cameras on smartphones make such methods very convenient and accurate. The availability of open-source image datasets contributes even more to the popularity of image processing for pothole detection. Moreover, datasets are necessary to train large DNNs~\cite{road_damage_dataset_2018, kaggle_pothole_image_dataset, shi2016automatic, yang2019feature, mei2020densely}. Unfortunately, the effectiveness of smartphone-camera image processing for road surface monitoring drops drastically under poor lighting conditions and in dense traffic situations.
\smallskip\subsubsection{Road slipperiness}
\label{subsubsec:road_slipperiness}
Slippery road conditions are a crucial issue for drivers. Jang~\cite{Jang:2019} identifies slippery road spots using data from digital tachographs on board commercial vehicles. The system measures the differences between the angular and rotational speeds of the wheels, calculates a linear regression of the data, and estimates the road slipperiness within the calculated confidence interval. Experiments are conducted on different surfaces and surface states. Results show $\pm$20\% of wheel slip with a 99.7\% confidence interval. Nonetheless, the system has some issues concerning GPS interference, and other strategies may rely on readings unrelated to the tachograph. Therefore, the authors suggest merging the proposed method with other techniques to improve it.
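The core of such an approach, turning wheel readings into a slip estimate and fitting a regression line over speed, can be sketched as follows. This is an illustrative toy computation, not the implementation of~\cite{Jang:2019}; the wheel radius and all readings are invented.

```python
# Illustrative toy computation (not the implementation of Jang et al.):
# estimate wheel slip as the relative difference between wheel-derived speed
# and vehicle speed, then fit a least-squares line over several speeds.

def slip_ratio(wheel_omega, wheel_radius, vehicle_speed):
    """Slip ratio: (wheel speed - vehicle speed) / vehicle speed."""
    wheel_speed = wheel_omega * wheel_radius
    return (wheel_speed - vehicle_speed) / vehicle_speed

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Invented data: vehicle speeds (m/s) and wheel angular speeds (rad/s) for a
# 0.3 m wheel radius; slight excess rotation indicates slip.
speeds = [5.0, 10.0, 15.0, 20.0]
omegas = [17.0, 34.5, 52.5, 70.0]
slips = [slip_ratio(w, 0.3, v) for w, v in zip(omegas, speeds)]
slope, intercept = linear_fit(speeds, slips)
print(round(slips[0], 3), slope > 0)  # slip at 5 m/s; slip grows with speed
```

A confidence band around the fitted line (as in the cited work) would additionally require the residual variance, omitted here for brevity.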
\smallskip\subsubsection{Road type classification}
\label{subsubsec:road_type_classification}
Bystrov \textit{et al.}~\cite{Bystrov:2016} investigate the use of a short-range ultrasonic sensing system to classify road surfaces: asphalt, mastic asphalt, grass, gravel, and dirt road. Among the classification methods tested, MLP shows the best performance. Mukherjee and Pandey~\cite{Mukherjee:2017} classify road surfaces through texture characterization. The authors use the Gray-Level Co-occurrence Matrix (GLCM) for texture analysis and evaluate the GLCM approach with a linear scanning method. To test the approach, marks are introduced using a vision dataset~\cite{KITTI:2012}. The authors conclude that this tool can be integrated into road segmentation processes.
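A minimal pure-Python sketch of the GLCM computation underlying such texture features follows. It is illustrative only; the offset, number of gray levels, and image patch are invented, and real pipelines use optimized libraries.

```python
# Minimal Gray-Level Co-occurrence Matrix (GLCM) sketch for texture
# characterization. Offset (0, 1) means "pixel and its right neighbor".

def glcm(image, levels, offset=(0, 1)):
    """Count co-occurrences of gray levels at the given pixel offset."""
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    mat = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                mat[image[r][c]][image[rr][cc]] += 1
    return mat

def contrast(mat):
    """GLCM contrast feature: sum of count(i,j) * (i-j)^2 (unnormalized)."""
    return sum(mat[i][j] * (i - j) ** 2
               for i in range(len(mat)) for j in range(len(mat)))

# A tiny 4-gray-level patch: smooth blocks yield low contrast.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
m = glcm(patch, levels=4)
print(contrast(m))  # 4
```

Features such as contrast, energy, or homogeneity computed from the GLCM then feed a classifier (e.g., the MLP mentioned above).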
\smallskip\subsubsection{Parking space detection}
\label{subsubsec:parking_space_detection}
Road monitoring services also include the detection of free parking lots. In large cities, the quest for a free parking space is a time-consuming and stressful task which impacts driving behavior and fuel consumption. The problem of detecting free parking lots while driving, without instrumenting or changing the road infrastructure, was initially tackled with ultrasonic~\cite{Park:2008:parking} and radar sensors~\cite{Loeffler:2015}. Subsequently, due to the recent advances in image object detection with computer vision and CNNs, camera-based~\cite{grassi:2015:parking, grassi2017parkmaster, Ng:2017:parking} and LiDAR-based systems have been proposed~\cite{Park:2019:parking}. The common strategy is to sense the roadside while driving and compare its occupancy with a pre-defined map of parking lots; lots sensed as free are reported as such, and vice-versa.
\subsection{Navigation}
\label{subsec:navigation}
LBS are widely used in vehicle telematics to track vehicle navigation and to guide drivers from origin to destination. Currently, the automotive sector represents 55\% of the LBS market, or 93.3\% when combined with consumer solutions~\cite{GNSSReport:2019}. Most LBS for vehicle telematics are based on GNSS receivers. Nonetheless, GNSS outages reduce the accuracy of vehicle positioning. To cope with these issues, standalone devices with embedded MEMS sensors like accelerometers and gyroscopes~\cite{Tiliakos:2013} are employed, which make up a 6-degree-of-freedom system known as an Inertial Measurement Unit (IMU)~\cite{Seel:2014}. Through the processing of these signals, it is possible to track the position and orientation of the vehicle, making it an Inertial Navigation System (INS). In contrast to GNSS outages, INS suffers from a cumulative growth of the sensor bias error~\cite{Woodman:2007, Ramanandan:2012, Prikhodko:2018}.
Likewise, with the implementation of exteroceptive sensors in vehicles, vehicular telematics can now interact with the vehicle surroundings. In addition to determining the location of the vehicle on a map, it is possible to recognize its position relative to static or moving objects, which makes in-vehicle navigation systems more intuitive. Moreover, having information on the trajectories of the vehicle and surrounding objects, we can gather safety-related information and insurance-relevant data, as well as build driving analysis tools to describe driver behaviors and their relevance with respect to partial or autonomous driving assistance systems. Leveraging the functionality of exteroceptive sensors, various works study and analyze LBS through the integration of GNSS and INS or through Simultaneous Localization and Mapping (SLAM)-based systems. As shown in Table\,\ref{tab:navigation}, we consider GNSS/INS-based and SLAM-based applications, since these are widely used in vehicular navigation systems.
\begin{table}[t]
\centering
\caption{Navigation applications with exteroceptive sensors.}
\label{tab:navigation}
\begin{tabularx}{\columnwidth}{CCC}
\toprule
\textbf{Application} & \textbf{Sensor/\textcolor{gray}{Dataset}} & \textbf{References}\\ \toprule
\multirow{3}{*}{GNSS/INS-based} & Camera & \cite{Schreiber:2016, Ramezani:2018, Wen:2019, Shunsuke:2015} \\
& LiDAR & \cite{Hata:2016, Meng:2017, Wan:2018, Demir:2019} \\
& Radar & \cite{Abosekeen:2019} \\ \midrule
\multirow{3}{*}{SLAM-based} & Camera & \cite{Lemaire:2007, Magnabosco:2013, Chiang:2020} \\
& LiDAR & \cite{Ghallabi:2018, Javanmardi:2019, Choi:2014, Moras:2010}\\
& Radar & \cite{Jose:2005, Cornick:2016, Ort:2020} \\ \bottomrule
\end{tabularx}%
\end{table}
\smallskip\subsubsection{GNSS/INS-based navigation}\label{subsubsec:GNSS-based} To mitigate problems with inaccuracies and sensor biases, GNSS and INS systems are used simultaneously as real-time calibration systems along with exteroceptive sensors to reduce cumulative error and the effects of GNSS outages~\cite{Zhang:2011, Ramanandan:2012, Sasani:2015}.
Schreiber \textit{et al.}~\cite{Schreiber:2016} propose a localization method using real-time camera images coupled to a GNSS/INS when GNSS measurements have low precision. The system analyzes the changes in camera orientation between pairs of frames to estimate the position and speed change. The authors implement an Extended Kalman Filter (EKF) to combine the movement estimated through cameras with the GNSS/INS prediction. Nonetheless, the system depends on the quality of the GNSS signal. Ramezani \textit{et al.}~\cite{Ramezani:2018} support stereo cameras with inertial sensors in a Visual-Inertial Odometry (VIO) system. The idea is to keep navigation operational in the absence of a GNSS signal. The system implements a Multi-State Constraint Kalman Filter (MSCKF) to integrate INS data and images from a single camera; a second camera imposes additional constraints to improve the estimation. Results show that the stereo MSCKF achieves a lower average positioning error than both the monocular approach and the one integrated with INS.
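Several of the fusion schemes above rely on (extended) Kalman filtering. A linear 1D toy version of the predict/update cycle, not the cited authors' implementation and with invented noise parameters, looks as follows.

```python
# Minimal 1D Kalman filter illustrating the predict/update cycle behind
# EKF-based GNSS/INS fusion (linear toy model; all constants invented).

def predict(x, p, u, q):
    """Propagate state x with control u (e.g., INS odometry); inflate variance."""
    return x + u, p + q

def update(x, p, z, r):
    """Fuse a noisy position measurement z (e.g., GNSS) with variance r."""
    k = p / (p + r)          # Kalman gain: trust measurement vs. prediction
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

x, p = 0.0, 1.0                       # initial position estimate and variance
x, p = predict(x, p, u=1.0, q=0.5)    # INS says we moved 1 m
x, p = update(x, p, z=1.2, r=0.5)     # GNSS reads 1.2 m
print(round(x, 3), round(p, 3))       # 1.15 0.375
```

The fused estimate lands between prediction and measurement, and the variance shrinks; an EKF replaces the linear predict/update maps with linearizations of nonlinear motion and sensor models.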
Wen \textit{et al.}~\cite{Wen:2019} use a fish-eye camera pointing at the sky to classify measurements into LoS or NLoS environments before integrating them into GNSS/INS. The authors formulate an integration problem using the measurements of each system independently. The variable analysis results in a non-linear optimization problem, where the sensor measurements are interpreted as edges and the different states as nodes; these are defined in a factor graph. The experiments are carried out in an urban environment. Compared with an EKF and a factor graph for the GNSS/INS system, the fish-eye camera with the factor-graph technique reduces the mean positioning error from \SI{8.31}{m} to \SI{3.21}{m} and from \SI{7.28}{m} to \SI{4.73}{m} in the selected scenarios. Although the remaining positioning error is lower, it is still too large for autonomous driving vehicles, and the approach has not been verified in a very large experimental campaign. Shunsuke \textit{et al.}~\cite{Shunsuke:2015} present a system for locating the vehicle through positioning in the lane of a road. The system integrates a monocular camera with GNSS/INS. Analysis of the lane-detection image is carried out by implementing the Inverse Perspective Mapping (IPM) algorithm, which projects the ROI onto a ground plane, processed through the Hough transform. GNSS, INS, and lane-detection images are fused with a particle filter. Results show that the average positioning error is lower than 0.8\%, and the correct lane rate is higher than 93\%.
A drawback of image sensors is their sensitivity to both very intense and scarce lighting~\cite{Hata:2016}. Yet, an important asset for navigation is dynamic object detection, and LiDARs can also serve this purpose. Hata \textit{et al.}~\cite{Hata:2016} present a vehicle localization method that includes the detection of curbs and road markings through LiDAR readings. Curb detection is based on the adjacent distance between rings formed by the sensor readings; a gradient filter analyzes false curb classifications, and a regression filter adjusts a function to remove outliers and retain candidate points. For detecting road markings, the authors use the Otsu threshold method, an algorithm that returns a single intensity level per pixel, and a reflective-intensity sensor calibration method. The binary grid map for both curbs and road markings is integrated with GNSS/INS using the Monte Carlo Localization (MCL) algorithm, a method that estimates the position by matching the sensor measurements with the area where the vehicle moves. Results show that the longitudinal and lateral errors are less than \SI{0.3}{m}.
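Otsu's method, used above to binarize road-marking intensities, picks the histogram threshold maximizing the between-class variance. A pure-Python sketch on an invented bimodal histogram:

```python
# Sketch of Otsu's threshold on a gray-level histogram (illustrative only).
# Pixels with level <= threshold form class 0, the rest class 1.

def otsu_threshold(hist):
    """Return the threshold maximizing the between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]            # weight of class 0 up to level t
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2              # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Invented bimodal 8-level histogram: dark road surface vs. bright markings.
hist = [10, 30, 25, 5, 0, 4, 20, 6]
print(otsu_threshold(hist))  # 3
```

The returned level separates the two intensity populations, which is exactly what a binary road-marking map needs.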
Meng \textit{et al.}~\cite{Meng:2017} propose a vehicle localization system based on GNSS, IMU, a Distance-Measuring Instrument (DMI), and LiDAR. The GNSS/IMU/DMI subsystems are combined by means of a fault-detection method based on an Unscented Kalman Filter (UKF) and curb detection. The system calculates the lateral location of the vehicle and estimates the lateral error. Wan \textit{et al.}~\cite{Wan:2018} design a vehicle localization system based on the fusion of GNSS, INS, and LiDAR sensors. The system estimates the location through position, speed, and attitude together. The LiDAR-based localization provides the position and heading angle of the vehicle. Results show that the location uncertainty with LiDAR decreases to between \SI{5}{cm} and \SI{10}{cm} for both longitudinal and lateral location. Demir \textit{et al.}~\cite{Demir:2019} develop a framework for vehicle localization that uses four LiDARs working simultaneously. Sensor readings are accumulated and merged in a scan-accumulator module, which performs a normal distribution transform to analyze ambient variations, using statistics on the point-cloud distribution rather than point-to-point correspondences on the data accumulated from each sensor. Results show that the maximum lateral and longitudinal errors are \SI{10}{cm} and \SI{30}{cm}, respectively.
To mitigate adverse effects on vision-based and laser-based sensors, some studies employ radar readings for localization applications. Abosekeen \textit{et al.}~\cite{Abosekeen:2019} estimate the vehicle location through a radar sensor designed for adaptive cruise control (ACC), in a navigation scheme that integrates GNSS/INS. The system processes raw radar measurements to determine the vehicle's estimated position and reduce the ground-reflection effect, and uses an EKF to combine the radar with a Reduced Inertial Sensor System (RISS).
\smallskip\subsubsection{SLAM-based navigation}\label{subsubsec:SLAM-based} One mapping concept used in robotic mobility is the well-known Simultaneous Localization and Mapping (SLAM). It denotes the computational problem of building or updating a map of an unknown environment while the vehicle moves, constantly monitoring the route~\cite{Martin:2014}. SLAM can use different exteroceptive sensors, each with an associated algorithm, depending on the scope and assumptions of the implementation.
Vision-based approaches with stereo, monocular, and thermal cameras are studied in~\cite{Lemaire:2007, Magnabosco:2013}. Basically, the authors combine stereo and monocular cameras with thermal cameras to improve the detection of landmarks and to analyze the average relative error with respect to the position of landmarks. However, visual localization suffers from climatic changes, lighting conditions, and other factors. Chiang \textit{et al.}~\cite{Chiang:2020} implement a navigation system using smartphone sensors. The system integrates GNSS/INS sensors and cameras. The authors implement the ORB-SLAM (Oriented FAST and Rotated BRIEF)~\cite{Rublee:2011} technique to process images. Data from the sensors that make up the system are merged through an EKF algorithm. The results show that the GNSS/INS system with integrated SLAM improves the accuracy of position and velocity between 43\% and 51.3\%.
LiDAR-based approaches are not sensitive to ambient lighting or surface texture, and support long range and a wide field of view (FOV). Ghallabi \textit{et al.}~\cite{Ghallabi:2018} use lane markings detected by a multilayer LiDAR to calculate the vehicle location within a map. Line detection employs the Hough transform, while a map-matching algorithm validates landmarks within the localization system. Javanmardi \textit{et al.}~\cite{Javanmardi:2019} propose a localization system based on multilayer LiDAR and 2D vector-map and planar-surface-map formats, which represent buildings, building footprints, and the ground. The idea is to reduce the size of the map while maintaining localization accuracy. A hybrid map-based SLAM system built on a Rao-Blackwellized particle filter is proposed by Choi~\cite{Choi:2014}. Basically, LiDAR readings are filtered to classify and compare landmarks and to establish the location and mapping of the vehicle. Moras \textit{et al.}~\cite{Moras:2010} present a scheme to monitor moving objects around the vehicle and map the environment statically with LiDAR echo readings. The scheme implements a framework that merges a dual space representation (polar and Cartesian), which defines the local occupancy grid and the accumulation grid. Accurate localization is a prerequisite for such a scheme.
Jose and Adams~\cite{Jose:2005} implement a mmWave radar and formulate a SLAM problem to estimate target radar positions and cross sections. Cornick \textit{et al.}~\cite{Cornick:2016} use a Localizing Ground Penetrating Radar (LGPR) that works through mapping and logging components. Ort \textit{et al.}~\cite{Ort:2020} use an EKF to combine LGPR readings with wheel encoders and IMU sensor readings, including magnetometer measurements.
\smallskip\subsubsection{Map tracking datasets}\label{subsubsec:map_tracking} To characterize maps in real time, sensor readings are used to compose updated maps of the vehicle's surroundings. The mapping process relies on object detection, where artificial intelligence techniques identify various macro-areas, from lane markings to traffic sign recognition. As a result, 2D and 3D maps are generated through geometric and semantic layers. In addition to navigation, it is possible to analyze the dynamics and behavior of objects in the surroundings. Table\,\ref{tab:tracking_datasets} lists available datasets from experimental vehicles that collect data through exteroceptive sensors. All datasets include RGB cameras and LiDAR sensors. Rasterized maps in~\cite{Nuscenes:2019, Argoverse:2019, Lyft_Perception:2019} include roads, ground height, and sidewalks, and vectorized maps include semantic layers like lane geometry, among others. The aerial map in~\cite{Lyft_Prediction:2020} is rendered from the data encoded in the semantic map. Meanwhile,~\cite{Ford:2020} uses a ground plane and a 3D point cloud of non-road data.
\begin{table}[!t]
\centering
\caption{Map tracking datasets available with exteroceptive sensors.}
\label{tab:tracking_datasets}
\begin{tabularx}{\columnwidth}{>{\centering\arraybackslash}p{1.8cm}>{\centering\arraybackslash}p{1.4cm}>{\centering\arraybackslash}p{0.6cm}>{\centering\arraybackslash}p{2.2cm}>{\centering\arraybackslash}p{0.6cm}}
\toprule
\textbf{Dataset} & \textbf{Map type} & \textbf{Layers} & \textbf{Sensors} & \textbf{Size} \\ \toprule
\multirow{2}{*}{Nuscenes~\cite{Nuscenes:2019}} & \multirow{2}{*}{Raster} & \multirow{2}{*}{11} &LiDAR/Radar /Camera/GPS/IMU & \multirow{2}{*}{6\,h} \\ \midrule
Argoverse~\cite{Argoverse:2019} & Vector+Raster & 2 & LiDAR/Camera & 290\,km \\ \midrule
Lyft5~\cite{Lyft_Perception:2019} & Raster & 7 & LiDAR/Camera & 2.5\,h \\ \midrule
\multirow{2}{*}{Lyft5~\cite{Lyft_Prediction:2020}} & HD\,Semantic & \multirow{2}{*}{7} & \multirow{2}{*}{LiDAR/Camera} & 1,118\,h \\
& Aerial & & & 74\,km$^2$ \\ \midrule
\multirow{2}{*}{Ford~\cite{Ford:2020}} & \multirow{2}{*}{3D} & \multirow{2}{*}{2} & LiDAR/Camera & \multirow{2}{*}{66\,km} \\
& & & /GPS/IMU & \\ \bottomrule
\end{tabularx}
\end{table}
\section{Open Research Challenges}
\label{sec:challenges}
The utilization of exteroceptive sensors and their versatility in various telematics services and applications demonstrate their importance. Nonetheless, there are still open challenges, some of them inherent to the data acquired. In this section, we describe some of the areas which require further investigation.
\smallskip\textit{Which data is important?}
Exteroceptive sensors onboard a Waymo vehicle generate up to \SI{19}{TB} of data per day~\cite{Waymo1,Tuxera_Waymo}. Clearly, determining which data is relevant becomes crucial for fast processing. Three strategies are useful to reduce the amount of data upstream:
\begin{enumerate}
\item select and restrict the number of exteroceptive sensors;
\item activate sensors only when necessary;
\item degrade sensor precision (e.g., sampling frequency or image resolution).
\end{enumerate}
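The three strategies above can be sketched as simple transformations on a per-frame sensor payload. The sensor names, thresholds, and rates below are hypothetical, chosen only to make the sketch runnable.

```python
# Toy sketch of the three upstream data-reduction strategies
# (all sensor names, thresholds, and rates are invented).

def drop_sensors(frame, keep):
    """(1) Select and restrict the set of exteroceptive sensors."""
    return {k: v for k, v in frame.items() if k in keep}

def gate(frame, speed, min_speed=1.0):
    """(2) Activate sensors only when necessary, e.g., when moving."""
    return frame if speed >= min_speed else {}

def downsample(samples, factor):
    """(3) Degrade precision by keeping every `factor`-th sample."""
    return samples[::factor]

frame = {"lidar": [0.1] * 8, "radar": [0.2] * 8, "camera": [0.3] * 8}
frame = drop_sensors(frame, keep={"camera", "radar"})
frame = gate(frame, speed=12.0)
frame = {k: downsample(v, 4) for k, v in frame.items()}
print(sorted(frame), [len(v) for v in frame.values()])
```

Each step trades information for bandwidth; which combination is acceptable depends on the downstream service (safety functions tolerate far less degradation than fleet statistics).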
Some car manufacturers go further: Tesla renounces the use of LiDAR sensors, claiming they are unreliable~\cite{Forbes_Tesla}. As shown in Section~\ref{sec:services_and_applications}, different options and sensors are used to achieve the same function. On the other hand, sensor data fusion is undeniably an effective approach to achieve better precision and reliability.
\smallskip\textit{Data processing.}
The analysis of sensor readings can have a high computational cost, entailing response delays.
In areas like vehicular safety and insurance telematics, where data analysis is used to detect critical events, response time is crucial.
With the commercialization of vehicles with different levels of autonomy, data analysis becomes more significant for the design of risk models.
This is compensated by Moore's law: hardware manufacturers design increasingly performant computing systems that can be embedded in vehicles for data processing. Fig.~\ref{fig:challenges} shows different challenges involving data analysis.
\begin{figure}[t]
\centering
\resizebox{0.47\textwidth}{!}{
\newcommand*{\Large\bfseries\color{black!85}}{\Large\bfseries\color{black!85}}
\newcommand{\arcarrow}[3]{%
\pgfmathsetmacro{\rin}{2.5}
\pgfmathsetmacro{\rmid}{3}
\pgfmathsetmacro{\rout}{3.5}
\pgfmathsetmacro{\astart}{#1}
\pgfmathsetmacro{\aend}{#2}
\pgfmathsetmacro{\atip}{5}
\fill[color=green!60, very thick] (\astart+\atip:\rin) arc (\astart+\atip:\aend:\rin) -- (\aend-\atip:\rmid) -- (\aend:\rout) arc (\aend:\astart+\atip:\rout) -- (\astart:\rmid) -- cycle;
\path[decoration = {text along path, text = {|\Large\bfseries\color{black!85}|#3}, text align = {align = center}, raise = -1.0ex},
decorate
](\astart+\atip:\rmid) arc (\astart+\atip:\aend+\atip:\rmid);
}
\tikzstyle{data}=[align=left, rectangle, draw = black]
\begin{tikzpicture}
\fill[even odd rule,red!50] circle (3.3) circle (2.7);
\node at (0,0) [color = green, align = center]{{\includegraphics[width=0.25\textwidth]{Figures/blue_car.png}}};
\node at (-5.6,-1.4) [color = green, align = center]{{\includegraphics[width=0.075\textwidth]{Figures/dev.png}}};
\node at (6,-1.5) [color = green, align = center]{{\includegraphics[width=0.15\textwidth]{Figures/data.png}}};
\node at (4.5,2.2) [color = green, align = center]{{\includegraphics[width=0.12\textwidth]{Figures/camera.png}}};
\node at (7,2.2) [color = green, align = center]{{\includegraphics[width=0.12\textwidth]{Figures/lidar.png}}};
\node at (-5.1,1.3) [color = green, align = center]{{\includegraphics[width=0.20\textwidth]{Figures/labeled_data.png}}};
\arcarrow{100}{ 20}{Perception}
\arcarrow{340}{260}{Raw data}
\arcarrow{140}{220}{Analysis}
\draw (6,0) node[data, text width=3.9cm] (b_perc){$\bullet$ What data is important? \\$-$ Data volume \\$-$ Heterogeneity};
\draw (-5,3) node[data, text width=3.9cm] (b_anal){$\bullet$ Data analysis \\$-$ Features extraction \\$-$ Privacy concerns \\$\bullet$ Risk prediction \\$-$ Risk assessment models};
\draw (-5,-3) node[data, text width=3.9cm] (b_raw){$\bullet$ Data processing \\$-$ Data collection \\$-$ Data preparation \\$-$ Data processing};
\filldraw[black] (3,0) circle (0.1cm) node[] (l_perc) {};
\filldraw[black] (-1.5,2.598) circle (0.1cm) node[] (l_anal) {};
\filldraw[black] (-1.5,-2.598) circle (0.1cm) node[] (l_raw) {};
\foreach \f/\t in
{b_perc/l_perc}
\draw[black, very thick] (\f.west) -- (\t.east);
\foreach \f/\t in
{b_raw/l_raw, b_anal/l_anal}
\draw[black, very thick] (\f.east) -- (\t.west);
\draw (4.5,1.05) node[] {Camera~\cite{Nuscenes:2019}};
\draw (7,1.05) node[] {Lidar~\cite{Canadian:2020}};
\draw (-5.2,0.56) node[] {Labeled data~\cite{Nuscenes:2019}};
\draw (-4.3,-1.8) node[] {Data analyst};
\end{tikzpicture}
}
\caption{Challenges involving vehicular telematics.}
\label{fig:challenges}
\end{figure}
\smallskip\textit{Security and privacy.}
A widely reported problem is data privacy and security~\cite{Tene:2012,Derikx:2016}. In the insurance market, for instance, the growth in the volume of telematics data and in claims for coverage and compensation is essential to tailor services and insurance premiums for each customer. On the other hand, such sensitive data attracts cyber-attacks, forcing companies to adopt extreme caution~\cite{Dambra:2020}. Currently, various works investigate methods to ensure and preserve data integrity in insurance telematics~\cite{Pese:2017, Li_Privacy:2017, Zhou_Privacy:2019}.
An emerging technology that tackles privacy and security issues is blockchain. In a nutshell, a blockchain is a distributed database that stores indexes of transactions in a list of blocks chained to each other in a private and immutable manner~\cite{nakamoto2019bitcoin, Dorri:2017}. In addition to privacy, it may help prevent cyber fraud, as well as data manipulation by third parties.
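A minimal hash chain conveys why such storage is tamper-evident: every block commits to its predecessor's hash, so altering a stored record invalidates all subsequent links. The sketch below omits consensus, signatures, and distribution, and the trip records are invented.

```python
# Minimal hash chain illustrating tamper-evident storage of telematics
# records (illustrative only; not a full blockchain).
import hashlib
import json

def make_block(prev_hash, payload):
    """Build a block whose hash covers the payload and the previous hash."""
    block = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    block["hash"] = digest
    return block

def verify(chain):
    """Recompute every hash and check each block links to its predecessor."""
    prev = "0" * 64  # genesis pointer
    for block in chain:
        body = {"prev": block["prev"], "payload": block["payload"]}
        good = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != good:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for trip in [{"km": 12.4}, {"km": 3.1}]:   # invented trip records
    block = make_block(prev, trip)
    chain.append(block)
    prev = block["hash"]
print(verify(chain))                  # True
chain[0]["payload"]["km"] = 0.0       # tamper with a stored trip
print(verify(chain))                  # False
```

In an insurance-telematics setting, the payloads would be signed trip summaries, so neither the customer nor the insurer can silently rewrite history.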
\smallskip\textit{Risk assessment.}
The evolution of autonomous vehicles creates a challenging scenario in terms of risk assessment modeling for policymakers~\cite{SAE:2018}. The application of artificial intelligence to the data collected by the sensors raises a series of questions about the complexity of decisions, for example, fairness and explainability. As a matter of fact, the transition to autonomous driving raises ethical and moral questions~\cite{Mordue:2020}. The insurance market requires delineating common ground between responsibilities and ethics to establish policies associated with vehicle functionalities and legislation~\cite{Bellet:2019}.
\section{Conclusion}
\label{sec:conclusion}
This paper focuses on exteroceptive sensors, embedded or placed inside vehicles, and their possible utilization for telematics services and applications like mobility safety, navigation, driving behavior analysis, and road monitoring. Such applications are of great interest for the automotive and insurance industries as well as for research, smart cities, drivers, passengers, and pedestrians. Showing that exteroceptive sensors provide alternative and smart solutions when proprioceptive sensors are not available or just inconvenient, we provide the reader with a taxonomy of references for specific application areas and device types. Given the extensive literature, which grows with the development of autonomous vehicles, we have selected the most relevant works based on their release date, innovation, and feasibility. First, we introduced the sensor classification and detailed specifications, advantages, and limitations of exteroceptive sensors considering their availability in OTS telematics devices. Moreover, we provided a report on existing available datasets for specific applications. These are of paramount importance to the design of applications, especially in areas such as CNN-based image processing, which demand large amounts of training data to perform well. We concluded the paper by identifying open challenges and research directions: while sensors are becoming more precise and sensor fusion more popular, the amount of data to process also increases at a fast pace.
Other environmental sensing information can come from different channels, such as communication with RSU sensors (V2I) or with other vehicles (V2V), or listening to specific streams on social networks. These are out of the scope of this paper, though, as the works we refer to do not rely on a road infrastructure or on immeasurable data. Finally, the utilization of commodity devices in telematics grows steadily, smartphones \textit{in primis}, as they embed a large array of sensors.
\section*{Acknowledgment}
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, CNPq, FAPERJ, and FAPESP Grant 15/24494-8.
\bibliographystyle{IEEEtran}
Maxwell's equations describe electromagnetic waves and consequently the propagation of light. We refer to the physics literature for further background (cf. \cite{LandauLifschitz1990,FeynmanLeightonSands1964}).
Time-dependent Maxwell's equations in media in three spatial dimensions relate
\emph{electric and magnetic field} $ (\mathcal{E},\mathcal{B}):\mathbb{R} \times \mathbb{R}^3 \to \mathbb{C}^3 \times \mathbb{C}^3$ with
\emph{displacement and magnetizing fields} $(\mathcal{D},\mathcal{H}):\mathbb{R} \times \mathbb{R}^3 \to \mathbb{C}^3 \times \mathbb{C}^3$, the \emph{electric and magnetic current} $(\mathcal{J}_e,\mathcal{J}_m): \mathbb{R} \times \mathbb{R}^3 \to \mathbb{C}^3 \times \mathbb{C}^3 $, and \emph{electric and magnetic charges} $(\rho_e,\rho_m): \mathbb{R} \times \mathbb{R}^3 \to \mathbb{C} \times \mathbb{C}$:
\begin{equation}
\label{eq:Maxwell3D}
\left\{ \begin{array}{cl}
\partial_t \mathcal{D} &= \nabla \times \mathcal{H} + \mathcal{J}_e, \qquad \nabla \cdot \mathcal{D} = \rho_e, \quad \nabla \cdot \mathcal{B} = \rho_m, \\
\partial_t \mathcal{B} &= - \nabla \times \mathcal{E} + \mathcal{J}_m.
\end{array} \right.
\end{equation}
In physical contexts, fields, currents and charges are real-valued, and the magnetic charge and current vanish. We consider possibly non-vanishing magnetic charge and current to highlight symmetry between the electric and magnetic field. Moreover, $\mathcal{J}_e$ and $\mathcal{J}_m$ are typically taken with opposite signs.
In the following we consider the time-harmonic, monochromatic ansatz
\begin{equation}
\label{eq:TimeHarmonicAnsatz}
\begin{split}
\mathcal{D}(t,x) &= e^{i \omega t} D(x), \quad \mathcal{H}(t,x) = e^{i \omega t} H(x), \\
\mathcal{J}_e(t,x) &= e^{i \omega t} J_{e}(x), \quad \mathcal{J}_m(t,x) = e^{i \omega t} J_{m}(x)
\end{split}
\end{equation}
with $\omega \in \mathbb{R}$. We supplement \eqref{eq:Maxwell3D} with the material laws
\begin{equation}
\label{eq:MaterialLaws}
\mathcal{D}(t,x) = \varepsilon \mathcal{E}(t,x), \quad \mathcal{B}(t,x) = \mu \mathcal{H}(t,x),
\end{equation}
where $\varepsilon = \text{diag}(\varepsilon_1,\varepsilon_2,\varepsilon_3) \in \mathbb{R}^{3 \times 3}, \; \varepsilon_i, \; \mu \in \mathbb{R}_{> 0}$. Requiring $\varepsilon$ and $\mu$ to be symmetric and positive definite is a physically natural assumption.
The fully anisotropic case
\begin{equation*}
\varepsilon = \text{diag}(\varepsilon_1,\varepsilon_2,\varepsilon_3), \quad \mu = \text{diag}(\mu_1,\mu_2,\mu_3) \text{ with } \frac{\varepsilon_1}{\mu_1} \neq \frac{\varepsilon_2}{\mu_2} \neq \frac{\varepsilon_3}{\mu_3} \neq \frac{\varepsilon_1}{\mu_1}
\end{equation*}
is analyzed in joint work with R. Mandel \cite{MandelSchippa2021}, where we argue in detail how the analysis reduces in the general case to scalar $\mu$ (see also \cite[p.~63]{Liess1991}).
Material laws with scalar $\mu$ are frequently used in optics (cf. \cite[Section~2]{MoloneyNewell1990}). Under \eqref{eq:TimeHarmonicAnsatz} and \eqref{eq:MaterialLaws}, which relate $E$ with $D$ and $H$ with $B$, \eqref{eq:Maxwell3D} becomes:
\begin{equation}
\label{eq:Maxwell3DConcise}
P(\omega,D)
\begin{pmatrix}
D \\ B
\end{pmatrix}
= \begin{pmatrix}
J_e \\ J_m
\end{pmatrix} , \quad P(\omega,D) =
\begin{pmatrix}
i \omega & - \mu^{-1} \nabla \times \\
\nabla \times (\varepsilon^{-1} \cdot) & i \omega
\end{pmatrix}
.
\end{equation}
\eqref{eq:TimeHarmonicAnsatz} can be explained by considering \eqref{eq:Maxwell3D} under the Fourier transform in time: Letting
\begin{equation*}
\mathcal{D}(t,x) = \frac{1}{2 \pi} \int_{\mathbb{R}} e^{i \omega t} D(\omega,x) d\omega, \quad \mathcal{H}(t,x) = \frac{1}{2 \pi} \int_{\mathbb{R}} e^{i \omega t} H(\omega,x) d\omega, \ldots,
\end{equation*}
we find a solution to \eqref{eq:Maxwell3D} provided that $D(\omega,\cdot)$,... solve \eqref{eq:Maxwell3DConcise}. We focus on solenoidal currents, but shall also consider the effect of non-vanishing divergence. We deduce from the continuity equation for electric charges $\partial_t \rho_e(t,x) - \nabla \cdot \mathcal{J}_e(t,x) = 0$ the following relation between $J_e(\omega,\cdot)$ and the time-dependent charges:
\begin{equation*}
\nabla \cdot J_e(\omega,x) = i \omega \int_{\mathbb{R}} e^{-i \omega t} \rho_e(t,x) dt.
\end{equation*}
Since $\omega$ will be fixed in the following analysis of the time-harmonic equation, we let
\begin{equation}
\label{eq:3DCharges}
\rho_e(x) = \nabla \cdot J_e(x) \text{ and } \rho_m(x) = \nabla \cdot J_m(x).
\end{equation}
\medskip
We consider Maxwell's equations in two spatial dimensions and the partially anisotropic case in three dimensions. The time-dependent form of Maxwell's equations in two dimensions corresponds to electric and magnetic fields and currents of the form
\begin{align*}
\mathcal{E}_i(t,x) &= \mathcal{E}_i(t,x_1,x_2), \quad i=1,2; \quad \mathcal{E}_3 = 0; \\
\mathcal{B}_i &= 0, \quad i=1,2; \quad \mathcal{B}_3(t,x) = \mathcal{B}_3(t,x_1,x_2); \\
\mathcal{J}_{ei}(t,x) &= \mathcal{J}_{ei}(t,x_1,x_2), \quad i=1,2; \quad \mathcal{J}_{e3} = 0; \\
\mathcal{J}_{mi}(t,x) &= 0, \quad i=1,2; \quad \mathcal{J}_{m3}(t,x) = \mathcal{J}_{m3}(t,x_1,x_2).
\end{align*}
\eqref{eq:Maxwell3D} simplifies to (cf. \cite{2dMaxwellProposal}):
\begin{equation}
\label{eq:Maxwell2D}
\left\{ \begin{array}{cl}
\partial_t \mathcal{D} &= \nabla_{\perp} \mathcal{H} + \mathcal{J}_e, \quad \nabla \cdot \mathcal{D}= \rho_e, \\
\partial_t \mathcal{B} &= - \nabla \times \mathcal{E} + \mathcal{J}_m,
\end{array} \right.
\end{equation}
where $\mathcal{D},\mathcal{E},\mathcal{J}_e:\mathbb{R} \times \mathbb{R}^2 \to \mathbb{C}^2$, $\mathcal{B},\mathcal{H},\mathcal{J}_m: \mathbb{R} \times \mathbb{R}^2 \to \mathbb{C}$, $\nabla_\perp = (\partial_2,-\partial_1)^t$, and we assume \eqref{eq:MaterialLaws} with $\mu > 0$, and $(\varepsilon^{ij})_{i,j} \in \mathbb{R}^{2 \times 2}$ denoting a symmetric, positive definite matrix. We can rewrite \eqref{eq:Maxwell2D} under \eqref{eq:TimeHarmonicAnsatz} and \eqref{eq:MaterialLaws} as
\begin{equation}
\label{eq:Maxwell2dConcise}
P(\omega,D) \begin{pmatrix}
D \\ B
\end{pmatrix}
= \begin{pmatrix}
J_e \\ J_m
\end{pmatrix}, \quad
P(\omega,D) =
\begin{pmatrix}
i \omega & 0 & - \mu^{-1} \partial_2 \\
0 & i \omega & \mu^{-1} \partial_1 \\
\partial_1 \varepsilon_{21} - \partial_2 \varepsilon_{11} & \partial_1 \varepsilon_{22} - \partial_2 \varepsilon_{12} & i \omega
\end{pmatrix},
\end{equation}
where $\varepsilon_{ij}$ denote the components of the inverse of $\varepsilon$. In two dimensions, we let
\begin{equation}
\label{eq:Charges2D}
\rho_e = \partial_1 J_{e1} + \partial_2 J_{e2} \text{ and } \rho_m = 0.
\end{equation}
\vspace*{0.5cm}
In the following let $d \in \{2,3\}$, $m(2) = 3$, $m(3) = 6$, and
\begin{align*}
L_0^p(\mathbb{R}^2) &= \{ (f_1,f_2,f_3) \in L^p(\mathbb{R}^2)^3 \, : \, \partial_1 f_1 + \partial_2 f_2 = 0 \text{ in } \mathcal{S}'(\mathbb{R}^2) \}, \\
L_0^p(\mathbb{R}^3) &= \{ (f_1,\ldots,f_6) \in L^p(\mathbb{R}^3)^6 \, : \, \nabla \cdot (f_1,f_2,f_3) = \nabla \cdot( f_4,f_5,f_6) = 0 \text{ in } \mathcal{S}'(\mathbb{R}^3) \}.
\end{align*}
In this paper we are concerned with the resolvent estimates
\begin{equation}
\label{eq:ResolventEstimates}
\| (D,B) \|_{L_0^q(\mathbb{R}^d)} = \| P(\omega, D)^{-1} (J_{e},J_{m}) \|_{L_0^q(\mathbb{R}^d)} \lesssim \kappa_{p,q}(\omega) \| (J_{e},J_{m}) \|_{L_0^p(\mathbb{R}^d)}.
\end{equation}
However, as will be clear from viewing $P(\omega,D)$ as a Fourier multiplier, $P(\omega,D)^{-1}$ cannot even be understood in the distributional sense for $\omega \in \mathbb{R}$. The remedy is to consider $\omega \in \mathbb{C} \backslash \mathbb{R}$ and prove estimates independent of the distance to the real axis. Then we can consider the limits $\Im (\omega) \downarrow 0$ and $\Im (\omega) \uparrow 0$. This is referred to as a Limiting Absorption Principle (LAP) in the $L^p$-$L^q$-topology. Moreover, the analysis yields explicit formulae for the resulting limits. This appears to be the first contribution to resolvent estimates for the Maxwell operator in anisotropic media in the $L^p$-$L^q$-topology.
\medskip
Recently, Cossetti--Mandel analyzed the isotropic\footnote{In the isotropic case we identify $\varepsilon = \lambda 1_{3 \times 3}$ with $\lambda \in \mathbb{R}_{>0}$ and do likewise for $\mu$.}, possibly spatially inhomogeneous case $\varepsilon, \mu \in W^{1,\infty}(\mathbb{R}^3; \mathbb{R}_{>0})$ in \cite{CossettiMandel2021}. In the isotropic case, iterating \eqref{eq:Maxwell3D} and using the divergence conditions yields Helmholtz-like equations for $D$ and $H$. This approach was carried out in \cite{CossettiMandel2021}. In the anisotropic case this strategy becomes less straightforward. Instead, we diagonalize the Fourier multiplier, which puts us in a position to use resolvent estimates for the fractional Laplacian. Kwon--Lee--Seo \cite{KwonLeeSeo2021} previously used a diagonalization to prove resolvent estimates for the Lam\'e operator. However, there are degenerate components in the diagonalization of time-harmonic Maxwell's operators, which do not occur for the Lam\'e operator. We use the divergence condition to ameliorate the contribution of the degeneracies. If the currents have non-vanishing divergence, we can quantify this contribution with the charges.
\medskip
We digress for a moment to elaborate on $L^p$-$L^q$-estimates for the fractional Laplacian and applications. Let $s \in (0,d)$. For $\omega \in \mathbb{C} \backslash [0,\infty)$ we consider the resolvents as Fourier multipliers:
\begin{equation}
\label{eq:FourierMultiplier}
((-\Delta)^{s/2} - \omega)^{-1}f = \frac{1}{(2 \pi)^d} \int_{\mathbb{R}^d} \frac{\hat{f}(\xi)}{ \|\xi \|^s - \omega } e^{i x \cdot \xi} d\xi
\end{equation}
for $f: \mathbb{R}^d \to \mathbb{C}$ in some suitable a priori class, e.g., $f \in \mathcal{S}(\mathbb{R}^d)$. In the present context, resolvent estimates for the Half-Laplacian $\|((-\Delta)^{\frac{1}{2}}-\omega)^{-1} \|_{p \to q}$ are most important. There is a huge body of literature on resolvent estimates for the Laplacian $(-\Delta - \omega)^{-1}:L^p(\mathbb{R}^d) \to L^q(\mathbb{R}^d)$. This is due to versatile applications to uniform Sobolev estimates and unique continuation (cf. \cite{KenigRuizSogge1987}), the localization of eigenvalues for Schr\"odinger operators with complex potential (cf. \cite{Cuenin2017,Frank2011,Frank2018}), or LAPs in $L^p$-spaces (cf. \cite{Gutierrez2004}). Kenig--Ruiz--Sogge \cite{KenigRuizSogge1987} showed that uniform resolvent estimates in $\omega \in \mathbb{C} \backslash [0,\infty)$ for $d \geq 3$ hold if and only if
\begin{equation}
\label{eq:UniformBoundedness}
\frac{1}{p} - \frac{1}{q} = \frac{2}{d} \text{ and } \frac{2d}{d+3} < p < \frac{2d}{d+1}.
\end{equation}
By homogeneity and scaling, we find
\begin{equation}
\label{eq:ScalingResolvent}
\| (-\Delta - \omega)^{-1} \|_{p \to q} = |\omega|^{-1+\frac{d}{2} \big( \frac{1}{p} - \frac{1}{q} \big)} \| \big( - \Delta - \frac{\omega}{|\omega|} \big)^{-1} \|_{p \to q} \quad \forall \omega \in \mathbb{C} \backslash [0, \infty).
\end{equation}
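For completeness, \eqref{eq:ScalingResolvent} follows from conjugation with dilations: with $(\delta_\lambda f)(x) = f(\lambda x)$ for $\lambda > 0$, we have
\begin{equation*}
(-\Delta - \lambda^2 z)^{-1} = \lambda^{-2} \, \delta_\lambda \, (-\Delta - z)^{-1} \, \delta_\lambda^{-1}, \qquad \| \delta_\lambda f \|_{L^p} = \lambda^{-\frac{d}{p}} \| f \|_{L^p},
\end{equation*}
and choosing $\lambda = |\omega|^{\frac{1}{2}}$ and $z = \omega/|\omega|$ yields the factor $\lambda^{-2 + d \big( \frac{1}{p} - \frac{1}{q} \big)} = |\omega|^{-1+\frac{d}{2} \big( \frac{1}{p} - \frac{1}{q} \big)}$.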
Thus, it suffices to consider $|\omega| = 1$ to discuss boundedness. Kwon--Lee \cite{KwonLee2020} established the currently widest range of resolvent estimates for the fractional Laplacian outside the uniform boundedness range (see \cite{HuangYaoZheng2018} for a previous contribution). To state the range of admissible $L^p$-$L^q$-estimates, we shall use notations from \cite{KwonLee2020}.
Let $I^2 = \{(x,y) \in \mathbb{R}^2 \, | \, 0 \leq x,y \leq 1 \}$, and let $(x,y)^\prime = (1-x,1-y)$ for $(x,y) \in I^2$. For $\mathcal{R} \subseteq I^2$ we set $\mathcal{R}^\prime = \{ (x,y)^\prime \, | \, (x,y) \in \mathcal{R} \}$.\\
The resolvent of the fractional Laplacian $((-\Delta)^{\frac{s}{2}} - z)^{-1}$ is bounded for fixed $z \in \mathbb{C} \backslash [0,\infty)$ if and only if $(1/p,1/q) \in \mathcal{R}_0^{\frac{s}{2}}$ with
\begin{equation*}
\mathcal{R}_0^{\frac{s}{2}} = \mathcal{R}_0^{\frac{s}{2}}(d) = \{(x,y) \in I^2 \, | \, 0 \leq x-y \leq \frac{s}{d} \} \backslash \{(1,\frac{d-s}{d}), (\frac{s}{d},0) \};
\end{equation*}
see, e.g., \cite[Proposition~6.1]{KwonLee2020}. Guti\'{e}rrez showed in \cite{Gutierrez2004} that uniform estimates for $\omega \in \{ z \in \mathbb{C} \, : \, |z| = 1, \, z \neq 1 \}$ hold if and only if $(1/p,1/q)$ lies in the set
\begin{equation}
\label{eq:UniformBoundednessII}
\mathcal{R}_1 = \mathcal{R}_1(d) = \{ (x,y) \in \mathcal{R}^1_0(d) \, : \, \frac{2}{d+1} \leq x-y \leq \frac{2}{d}, \, x > \frac{d+1}{2d}, \, y < \frac{d-1}{2d} \}.
\end{equation}
Failure outside this range was known before (cf. \cite{KenigRuizSogge1987,Boerjeson1986}) due to the connection to Bochner-Riesz operators with negative index. Clearly, there are more estimates available outside $\mathcal{R}_1$ if one allows for dependence on $\omega$, e.g.,
\begin{equation*}
\| (-\Delta - \omega)^{-1} \|_{L^2 \to L^2} \sim \text{dist}(\omega,[0,\infty))^{-1}.
\end{equation*}
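This follows from Plancherel's theorem, since the multiplier $( \| \xi \|^2 - \omega)^{-1}$ satisfies
\begin{equation*}
\sup_{\xi \in \mathbb{R}^d} \frac{1}{\big| \| \xi \|^2 - \omega \big|} = \frac{1}{\text{dist}(\omega,[0,\infty))},
\end{equation*}
as $\| \xi \|^2$ ranges over $[0,\infty)$.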
Kwon--Lee \cite{KwonLee2020} analyzed estimates outside the uniform boundedness range in detail and covered a wide range. Estimates with dependence on $\omega$ can be used to localize eigenvalues for Schr\"o\-din\-ger operators with complex potentials (cf. \cite{Cuenin2017}), which is done for Maxwell operators in Section \ref{section:Localization}.
\medskip
Diagonalizing the symbol of \eqref{eq:Maxwell3DConcise} to operators involving the Half-Laplacian works in the \textit{partially anisotropic case}, i.e.,
\begin{equation}
\label{eq:PartiallyAnisotropicCondition}
\# \{\varepsilon_1,\varepsilon_2,\varepsilon_3\} \leq 2.
\end{equation}
This includes the isotropic case $\varepsilon_1 = \varepsilon_2 = \varepsilon_3$, for which the results of Cossetti--Mandel \cite{CossettiMandel2021} are recovered for constant coefficients, albeit via a different approach.
It turns out that in the fully anisotropic case
\begin{equation*}
\varepsilon_1 \neq \varepsilon_2 \neq \varepsilon_3 \neq \varepsilon_1,
\end{equation*}
diagonalizing the multiplier introduces singularities, and this case has to be treated differently (cf. \cite{MandelSchippa2021}). The estimates proved in \cite{MandelSchippa2021} for the fully anisotropic case are strictly weaker than in the partially anisotropic case. We connect resolvent bounds for the Maxwell operator with resolvent estimates for the Half-Laplacian:
\begin{theorem}
\label{thm:ResolventEstimateMaxwell}
Let $1 < p,q < \infty$, $d \in \{2,3\}$, and $\omega \in \mathbb{C} \backslash \mathbb{R}$. Let $\varepsilon \in \mathbb{R}^{d\times d}$ denote a symmetric positive definite matrix, and let $P(\omega,D)$ be as in \eqref{eq:Maxwell2dConcise} for $d=2$, and as in \eqref{eq:Maxwell3DConcise} for $d=3$. For $d=3$, we assume that $\varepsilon=\text{diag}(\varepsilon_1,\varepsilon_2,\varepsilon_3)$ satisfies \eqref{eq:PartiallyAnisotropicCondition}.\\
Then, $P(\omega,D)^{-1}:L^p_0(\mathbb{R}^d) \to L^q_0(\mathbb{R}^d)$ is bounded if and only if $(1/p,1/q) \in \mathcal{R}_0^{\frac{1}{2}}(d)$, and we find the estimate
\begin{equation}
\label{eq:ResolventEquivalence}
\| P(\omega,D)^{-1} \|_{L^p_0 \to L^q_0} \sim \| ((-\Delta)^{\frac{1}{2}} - \omega)^{-1} \|_{L^p \to L^q} + \| ((-\Delta)^{\frac{1}{2}} + \omega)^{-1} \|_{L^p \to L^q}
\end{equation}
to hold. \\
If $1 \leq p \leq \infty $ and $1 < q < \infty$, then we find the estimate
\begin{equation}
\label{eq:ResolventEstimateAbove}
\begin{split}
&\quad \| P(\omega,D)^{-1} (J_e,J_m) \|_{L^q} \\
&\lesssim (\| ((-\Delta)^{\frac{1}{2}} - \omega)^{-1} \|_{L^p \to L^q} + \| ((-\Delta)^{\frac{1}{2}} + \omega)^{-1} \|_{L^p \to L^q}) \|(J_e,J_m)\|_{L^p} \\
&\quad + \| (-\Delta)^{-\frac{1}{2}} \rho_e \|_{L^q} + \| (-\Delta)^{-\frac{1}{2}} \rho_m \|_{L^q}
\end{split}
\end{equation}
to hold with $\rho_e$ and $\rho_m$ defined as in \eqref{eq:Charges2D} for $d=2$ and \eqref{eq:3DCharges} for $d=3$. If $\rho_e = \rho_m = 0$, $1<p<\infty$, and $q \in \{1,\infty\}$, then \eqref{eq:ResolventEstimateAbove} also holds.
\end{theorem}
We cannot allow for $p\in \{1, \infty\}$ or $q \in \{1,\infty\}$ in the proof of
\begin{equation*}
\| P(\omega,D)^{-1} \|_{L^p_0 \to L^q_0} \gtrsim \| ((-\Delta)^{\frac{1}{2}} - \omega)^{-1} \|_{L^p \to L^q} + \| ((-\Delta)^{\frac{1}{2}} + \omega)^{-1} \|_{L^p \to L^q}
\end{equation*}
as multiplier bounds for Riesz transforms are involved. It is well-known that the Riesz transforms are bounded on $L^p(\mathbb{R}^d)$, $1<p<\infty$, but neither on $L^1$ nor on $L^\infty$. In the proof of \eqref{eq:ResolventEstimateAbove} for $\rho_e = \rho_m = 0$, which covers the reverse estimate of the above display, we can overcome this possibly technical issue by arranging for the Riesz transforms to act on a reflexive $L^p$-space. Hence, we can allow for either $p \in \{1,\infty\}$ or $q \in \{ 1, \infty\}$. For the sake of simplicity, in Corollary \ref{cor:ResolventBounds} we only consider $1<p,q<\infty$, although \eqref{eq:ResolventEstimateAbove} partially extends to $p \in \{1, \infty\}$ or $q \in \{1, \infty\}$.
\medskip
Coming back to resolvent estimates for the Half-Laplacian, for $d \in \{2,3\}$ and $(1/p,1/q) \in I^2$, define
\begin{equation*}
\gamma_{p,q} = \gamma_{p,q}(d) = \max \{ 0, 1 - \frac{d+1}{2} \big( \frac{1}{p} - \frac{1}{q} \big), \frac{d+1}{2} - \frac{d}{p}, \frac{d}{q} - \frac{d-1}{2} \}.
\end{equation*}
Set
\begin{align*}
\kappa_{p,q}^{(\frac{1}{2})}(\omega) &= |\omega|^{-1 + d \big( \frac{1}{p} - \frac{1}{q} \big) + \gamma_{p,q}} \text{dist}(\omega,[0,\infty))^{-\gamma_{p,q}}, \\
\kappa_{p,q}(\omega) &= |\omega|^{-1 + d \big( \frac{1}{p} - \frac{1}{q} \big) + \gamma_{p,q}} \text{dist}(\omega,\mathbb{R})^{-\gamma_{p,q}}.
\end{align*}
Kwon--Lee \cite[Conjecture~3,~p.~1462]{KwonLee2020} conjectured for $(1/p,1/q) \in \mathcal{R}_0^{1/2}(d)$
\begin{equation}
\label{eq:SharpResolventEstimate}
\kappa_{p,q}^{(\frac{1}{2})}(\omega) \sim_{p,q,d} \| ((-\Delta)^{1/2}-\omega)^{-1} \|_{p \to q}.
\end{equation}
They verified the conjecture for $d=2$ and for $d = 3$ in the restricted range $\tilde{\mathcal{R}}_0^{1/2}(3)$ \cite[Theorem~6.2,~p.~1462]{KwonLee2020}. We refer to \cite{KwonLee2020} for the precise description. For notational convenience, let $\tilde{\mathcal{R}}_0^{1/2}(2) = \mathcal{R}_0^{1/2}(2)$. By invoking the results from \cite{KwonLee2020}, we find the following:
\begin{corollary}
\label{cor:ResolventBounds}
Let $1 < p,q < \infty$, $d \in \{2,3\}$, and $\omega \in \mathbb{C} \backslash \mathbb{R}$. Let $\varepsilon \in \mathbb{R}^{d\times d}$ and $P(\omega,D)$ be as in Theorem \ref{thm:ResolventEstimateMaxwell}. Then we find the following:
\noindent 1. If $d=2$, then
\begin{equation}
\label{eq:ResolventEstimateMaxwell}
\| P(\omega,D)^{-1} \|_{L_0^p(\mathbb{R}^d) \to L^q_0(\mathbb{R}^d)} \sim \kappa_{p,q}(\omega)
\end{equation}
is true for $(1/p,1/q) \in \mathcal{R}_0^{\frac{1}{2}}(2)$.
\medskip
\noindent 2. If $d=3$ with $\varepsilon$ satisfying \eqref{eq:PartiallyAnisotropicCondition}, then \eqref{eq:ResolventEstimateMaxwell} is true for $(1/p,1/q) \in \tilde{\mathcal{R}}_0^{\frac{1}{2}}(3)$.
\end{corollary}
\medskip
Turning to LAPs, we work with the following notions:
\begin{definition}
Let $d \in \{2,3\}$, $1 \leq p,q \leq \infty$, $\omega \in \mathbb{R} \backslash 0$, and $0 < \delta < 1/2$. We say that a global $L^p_0$-$L^q_0$-LAP holds if $P(\omega \pm i \delta,D)^{-1}: L^p_0(\mathbb{R}^d) \to L^q_0(\mathbb{R}^d)$ are bounded uniformly in $\delta > 0$, and there are operators
$P_{\pm}(\omega) : L_0^p(\mathbb{R}^d) \to L_0^q(\mathbb{R}^d)$ such that
\begin{equation}
\label{eq:GlobalLAP}
P(\omega \pm i \delta,D)^{-1} f \to P_{\pm}(\omega) f \text{ as } \delta \to 0 \text{ in } (\mathcal{S}'(\mathbb{R}^d))^{m(d)}.
\end{equation}
We say that a local $L^p_0$-$L^q_0$-LAP holds if for any $\beta \in C^\infty_c(\mathbb{R}^d )$, $P(\omega \pm i \delta,D)^{-1} \beta(D): L^p_0(\mathbb{R}^d) \to L^q_0(\mathbb{R}^d)$ are bounded uniformly in $\delta > 0$, and there are operators $P^{loc}_{\pm}(\omega): L_0^p(\mathbb{R}^d) \to L_0^q(\mathbb{R}^d)$ such that
\begin{equation}
\label{eq:LocalLAP}
P(\omega \pm i \delta,D)^{-1} \beta(D) f \to P_{\pm}^{loc}(\omega) f \text{ in } (\mathcal{S}'(\mathbb{R}^d))^{m(d)} \text{ as } \delta \to 0.
\end{equation}
\end{definition}
\begin{remark}
By the explicit formulae for $P(\omega,D)^{-1}$ for $\omega \in \mathbb{C} \backslash \mathbb{R}$ we can also handle currents with non-vanishing divergence as in Theorem \ref{thm:ResolventEstimateMaxwell}. We omitted this discussion for the sake of brevity.
\end{remark}
We observe that $\gamma_{p,q} > 0$ for $p$ and $q$ as in Corollary \ref{cor:ResolventBounds}:
\begin{corollary}
\label{cor:GlobalLAP}
Let $d \in \{2,3\}$. For $1<p,q<\infty$, $(1/p,1/q) \in \tilde{\mathcal{R}}_0^{\frac{1}{2}}(d)$, there is no global $L^p_0$-$L^q_0$-LAP for \eqref{eq:Maxwell2dConcise} or \eqref{eq:Maxwell3DConcise}.
\end{corollary}
We show a local $L^p_0$-$L^q_0$-LAP for the Maxwell operator in Proposition \ref{prop:LocalLAP}. Roughly speaking, for low frequencies the resolvent estimates are equivalent to resolvent estimates for the Laplacian, and uniform estimates $L^{p_1} \to L^q$ are possible for $(1/p_1,1/q) \in \mathcal{P}(d)$ (see Section \ref{section:LAP}). For the high frequencies, away from the singular set, the multiplier is smooth, but provides merely the smoothing of the Half-Laplacian. We use different $L^{p_2} \to L^q$-estimates for this region. This gives $L^{p_1} \cap L^{p_2} \to L^q$-estimates, which are uniform in $\omega$ in a compact set away from the origin, and an LAP in the same spaces. The necessity of considering currents in intersections of $L^p$-spaces is shown in Corollary \ref{cor:GlobalLAP}. Below for $s \geq 0$ and $1 < q < \infty$, $W^{s,q}(\mathbb{R}^d)$ denotes the $L^q$-based Sobolev space:
\begin{equation*}
W^{s,q}(\mathbb{R}^d) = \{ f \in L^q(\mathbb{R}^d) : (1-\Delta)^{s/2} f \in L^q \} \text{ and } \| f \|_{W^{s,q}} := \| (1-\Delta)^{s/2} f \|_{L^q}.
\end{equation*}
\begin{theorem}[LAP for Time-Harmonic Maxwell's equations]
\label{thm:LocalLAP}
Let $1 \leq p_1,p_2,q \leq \infty$, and let $d \in \{2,3\}$. If $(1/p_1,1/q) \in \mathcal{P}(d)$, $(1/p_2,1/q) \in \mathcal{R}_0^{\frac{1}{2}}(d)$, then $P(\omega,D)^{-1}: L_0^{p_1}(\mathbb{R}^d) \cap L_0^{p_2}(\mathbb{R}^d) \to L_0^q(\mathbb{R}^d)$ is bounded uniformly for $\omega \in \mathbb{C} \backslash \mathbb{R}$ in a compact set away from the origin. Furthermore, for $\omega \in \mathbb{R} \backslash 0$ there are limiting operators $P_{\pm}(\omega): L_0^{p_1}(\mathbb{R}^d) \cap L_0^{p_2}(\mathbb{R}^d) \to L_0^{q}(\mathbb{R}^d)$ with
\begin{equation*}
P(\omega \pm i \delta, D)^{-1} (J_e,J_m) \to P_{\pm}(\omega) (J_e,J_m) \text{ in } (\mathcal{S}'(\mathbb{R}^d))^{m(d)} \text{ as } \delta \downarrow 0
\end{equation*}
such that $(D,B) = P_{\pm}(\omega) (J_e,J_m)$ satisfy
\begin{equation}
\label{eq:LimitingOperators}
P(\omega,D) (D,B) = (J_e,J_m) \text{ in } (\mathcal{S}'(\mathbb{R}^d))^{m(d)}.
\end{equation}
Additionally, if $q<\infty$, and $s \in [1,\infty)$, then
\begin{equation}
\label{eq:WeakSolutions}
\| (D,B) \|_{(W^{s,q}(\mathbb{R}^d))^{m(d)}} \lesssim \| (J_e,J_m) \|_{(W^{s-1,q}(\mathbb{R}^d))^{m(d)} \cap L_0^{p_1}(\mathbb{R}^d)}.
\end{equation}
\end{theorem}
Previously, Picard--Weck--Witsch \cite{PicardWeckWitsch2001} showed an LAP in weighted $L^2$-spaces (cf. \cite{Agmon1975}). Since the results in \cite{PicardWeckWitsch2001} are proved via Fredholm's Alternative, the frequencies $\omega \in \mathbb{R} \backslash 0$ are assumed not to belong to a discrete set of eigenvalues. In \cite{PicardWeckWitsch2001} $\varepsilon$ and $\mu$ are assumed to be positive-definite and isotropic, but allowed to depend on $x$ as in \cite{CossettiMandel2021}. Pauly \cite{Pauly2006} proved results similar to those of Picard--Weck--Witsch \cite{PicardWeckWitsch2001} in weighted $L^2$-spaces in the anisotropic case; see also \cite{PaulyThesis,BenArtziNemirovsky1998}. Much earlier, Eidus \cite{Eidus1985} proved non-existence of eigenvalues of the Maxwell operator provided that $\varepsilon$ and $\mu$ are sufficiently smooth short-range perturbations of the identity and satisfy a repulsivity condition. Recently, D'Ancona--Schnaubelt \cite{DAnconaSchnaubelt2021} proved global-in-time Strichartz estimates from resolvent estimates in weighted $L^2$-spaces.
\smallskip
It appears that in the present work the role of the Half-Laplacian in the analysis of the Maxwell operator is explicitly identified for the first time. We note that in \cite{2dQuasilinearMaxwell,3dQuasilinearMaxwell}, in joint work with R. Schnaubelt, we apply a similar diagonalization to show Strichartz estimates for time-dependent Maxwell's equations with rough coefficients. In these works, due to variable permittivity and permeability, the diagonalization is carried out with pseudo-differential operators, and the present role of the Half-Laplacian is played by the Half-Wave operator. Provided that suitable estimates for the Half-Laplacian with variable coefficients were available, of which the author is not aware, it seems possible that the present approach extends to variable permittivity and permeability as well.
\vspace*{0.5cm}
\textit{Outline of the paper.} In Section \ref{section:Reduction} we diagonalize time-harmonic Maxwell's equations in Fourier space to reduce the resolvent estimates to estimates for the Half-Laplacian. We also give examples for lower resolvent bounds in terms of the Half-Laplacian. In Section \ref{section:LAP} we argue how an LAP fails in $L^p$-spaces, but can be salvaged in intersections of $L^p$-spaces. In Section \ref{section:Localization} we show how the $\omega$-dependent resolvent estimates lead to localization of eigenvalues in the presence of potentials. We postpone technical computations to the Appendix, where we also give explicit solution formulae.
\section{Reduction to resolvent estimates for the Half-Laplacian}
\label{section:Reduction}
Let $\omega \in \mathbb{C} \backslash \mathbb{R}$. We diagonalize $P(\omega,D)$ defined in \eqref{eq:Maxwell2dConcise} or in \eqref{eq:Maxwell3DConcise} in the partially anisotropic case. We shall see that the transformation matrices are essentially Riesz transforms. This allows us to bound the resolvents with estimates for the Half-Laplacian. We will make repeated use of the Mikhlin--H\"ormander multiplier theorem (cf. {\cite[Theorem~6.2.7,~p.~446]{Grafakos2014}}):
\begin{theorem}[Mikhlin--H\"ormander]
\label{thm:MultiplierTheorem}
Let $1<p<\infty$ and $m: \mathbb{R}^n \backslash 0 \to \mathbb{C}$ be a bounded function that satisfies
\begin{equation}
\label{eq:DerivativeBounds}
|\partial^\alpha m(\xi)| \leq D_\alpha |\xi|^{-|\alpha|} \qquad (\xi \in \mathbb{R}^n \backslash 0)
\end{equation}
for $|\alpha| \leq \lfloor \frac{n}{2} \rfloor + 1$. Then, $\mathfrak{m}_p: L^p(\mathbb{R}^n) \to L^p(\mathbb{R}^n)$ given by $f \mapsto (m \hat{f}) \check{\;}$ defines a bounded mapping with
\begin{equation}
\label{eq:LpBoundMultiplier}
\| \mathfrak{m}_p \|_{L^p \to L^p} \leq C_n \max(p,(p-1)^{-1}) (A + \| m \|_{L^\infty}),
\end{equation}
where
\begin{equation*}
A= \max \{ D_\alpha \, : \, |\alpha| \leq \lfloor \frac{n}{2} \rfloor + 1 \}.
\end{equation*}
\end{theorem}
As pointed out in \cite{Grafakos2014}, any $m \in C^k(\mathbb{R}^n \backslash 0)$ with $k \geq \lfloor \frac{n}{2} \rfloor + 1$ is an $L^p$-multiplier for $1<p<\infty$ if it is zero-homogeneous, i.e., there is $\tau \in \mathbb{R}$ such that for any $\lambda > 0$ and $\xi \neq 0$, we have
\begin{equation}
\label{eq:ZeroHomogeneous}
m(\lambda \xi) = \lambda^{i \tau} m(\xi).
\end{equation}
Differentiating the above display with respect to $\xi$, we obtain for $\lambda > 0$
\begin{equation*}
\lambda^{|\alpha|} (\partial_\xi^\alpha m)(\lambda \xi) = \lambda^{i \tau} \partial_\xi^\alpha m(\xi)
\end{equation*}
and \eqref{eq:DerivativeBounds} is satisfied with $D_\alpha = \sup_{|\theta| = 1} |\partial^\alpha m(\theta)|$.
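As a basic example, the Riesz symbols $m_j(\xi) = \xi_j / \| \xi \|$ are smooth away from the origin and satisfy \eqref{eq:ZeroHomogeneous} with $\tau = 0$ since
\begin{equation*}
m_j(\lambda \xi) = \frac{\lambda \xi_j}{\| \lambda \xi \|} = m_j(\xi) \quad (\lambda > 0),
\end{equation*}
so they are $L^p$-bounded for $1<p<\infty$; the same applies to the symbols $\xi_j / \| \xi \|_{\varepsilon^\prime}$ appearing below.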
\subsection{Proof of Theorem \ref{thm:ResolventEstimateMaxwell} for $d=2$}
\label{subsection:2d}
Let $u = (D_1,D_2,B)$. We write $\varepsilon_{ij}$ for the components of the inverse matrix, $(\varepsilon^{-1})_{ij} = \varepsilon_{ij}$. To reduce to estimates for the Half-Laplacian, we diagonalize the symbol associated with the operator defined in \eqref{eq:Maxwell2dConcise}. We write $\xi = (\xi_1,\xi_2) \in \mathbb{R}^2$:
\begin{equation}
\label{eq:MaxwellMultiplier}
(P(\omega,D) u) \widehat (\xi) = p(\omega,\xi) \hat{u}(\xi) = i
\begin{pmatrix}
\omega & 0 & -\xi_2 \mu^{-1} \\
0 & \omega & \xi_1 \mu^{-1} \\
\xi_1 \varepsilon_{12} - \xi_2 \varepsilon_{11} & \xi_1 \varepsilon_{22} - \xi_2 \varepsilon_{12} & \omega
\end{pmatrix}
\hat{u}(\xi)
.
\end{equation}
Let $\| \xi \|^2_{\varepsilon^\prime} = \langle \xi, \mu^{-1} \det(\varepsilon)^{-1} \varepsilon \xi \rangle $, $\xi^\prime = \xi / \| \xi \|_{\varepsilon^\prime}$, and define
\begin{equation}
\label{eq:FractionalLaplacianResolvent}
e_{\pm}(\omega,D): L^p(\mathbb{R}^2) \to L^q(\mathbb{R}^2), \quad (e_{\pm} f) \widehat (\xi) = \frac{1}{\omega \pm \| \xi \|_{\varepsilon^\prime}} \hat{f}(\xi).
\end{equation}
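Note that $e_{\pm}(\omega,D)$ are, up to a sign, resolvents of a Half-Laplacian associated with the norm $\| \cdot \|_{\varepsilon^\prime}$. After a linear change of coordinates in Fourier space turning $\| \xi \|_{\varepsilon^\prime}$ into $\| \xi \|$, which changes $L^p$-$L^q$-operator norms only by constants depending on $\varepsilon$ and $\mu$, we find
\begin{equation*}
\| e_{\pm}(\omega,D) \|_{L^p \to L^q} \sim_{\varepsilon,\mu,p,q} \| ((-\Delta)^{\frac{1}{2}} \pm \omega)^{-1} \|_{L^p \to L^q},
\end{equation*}
which identifies the quantities appearing in \eqref{eq:ResolventEquivalence} up to constants.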
We have the following lemma on diagonalization:
\begin{lemma}
\label{lem:Diagonalization2d}
For almost all $\xi \in \mathbb{R}^2$ there is a matrix $m(\xi) \in \mathbb{C}^{3 \times 3}$ such that
\begin{equation*}
p(\omega,\xi) = m(\xi) d(\omega,\xi) m^{-1}(\xi)
\end{equation*}
with
\begin{equation}
\label{eq:Eigenvalues}
d(\omega,\xi) = i \text{diag}(\omega,\omega - \| \xi \|_{\varepsilon^\prime}, \omega + \| \xi \|_{\varepsilon^\prime}).
\end{equation}
Furthermore, the operators $m_{ij}(D)$ and $m^{-1}_{ij}(D)$ are $L^p$-bounded for $1<p<\infty$.
\end{lemma}
\begin{proof}
It is straightforward to check that the eigenvalues are as in \eqref{eq:Eigenvalues} once the eigenvectors are at hand. We align the corresponding eigenvectors as columns to
\begin{equation}
\label{eq:Eigenvectors}
m(\xi) =
\begin{pmatrix}
\varepsilon_{22} \xi_1^\prime -\varepsilon_{12} \xi_2^\prime & -\xi_2^\prime \mu^{-1} & \xi_2^\prime \mu^{-1} \\
\varepsilon_{11} \xi_2^\prime -\varepsilon_{12} \xi_1^\prime & \xi_1^\prime \mu^{-1} & -\xi_1^\prime \mu^{-1} \\
0 & -1 & -1
\end{pmatrix}
\end{equation}
and note that $\det m(\xi) = - 1$ for $\xi \neq 0$. For the inverse matrix we compute
\begin{equation}
\label{eq:InverseEigenvectors}
m^{-1}(\xi) =
\begin{pmatrix}
\mu^{-1} \xi_1^\prime & \mu^{-1} \xi_2^\prime & 0 \\
\frac{ \xi_1^\prime \varepsilon_{21} - \xi_2^\prime \varepsilon_{11} }{2} & \frac{\varepsilon_{22} \xi_1^\prime - \varepsilon_{21} \xi_2^\prime }{2} & -\frac{1}{2} \\
\frac{\xi_2^\prime \varepsilon_{11} - \xi_1^\prime \varepsilon_{12}}{2} & \frac{ \xi_2^\prime \varepsilon_{12} - \xi_1^\prime \varepsilon_{22}}{2} & - \frac{1}{2}
\end{pmatrix}
.
\end{equation}
$L^p$-boundedness is immediate from Theorem \ref{thm:MultiplierTheorem} because the components of $m$ and $m^{-1}$ are zero-homogeneous and smooth away from the origin.
\end{proof}
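To illustrate the computation, consider the second column $v = (-\xi_2^\prime \mu^{-1}, \xi_1^\prime \mu^{-1}, -1)^t$ of $m(\xi)$. The first two rows of $p(\omega,\xi) v = i(\omega - \| \xi \|_{\varepsilon^\prime}) v$ follow from $\xi_j = \| \xi \|_{\varepsilon^\prime} \xi_j^\prime$, while the third row reduces to the identity
\begin{equation*}
\varepsilon_{22} \xi_1^2 - 2 \varepsilon_{12} \xi_1 \xi_2 + \varepsilon_{11} \xi_2^2 = \mu \| \xi \|^2_{\varepsilon^\prime},
\end{equation*}
which is a consequence of $\det(\varepsilon)^{-1} \varepsilon = \text{adj}(\varepsilon^{-1})$ for symmetric $2 \times 2$ matrices.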
In Proposition \ref{prop:Explicit2d} we compute $p^{-1}(\omega,\xi)$ via this diagonalization. The diagonalization allows us to separate
\begin{equation}
\label{eq:Decomposition2d}
p^{-1}(\omega,\xi) = M^2(A,B) + M^2_c
\end{equation}
with $M^2_c v = 0$ whenever $\xi_1 v_1 + \xi_2 v_2 = 0$ and
\begin{equation}
\label{eq:Constants2d}
A = \frac{1}{i(\omega - \| \xi \|_{\varepsilon'})}, \quad B = \frac{1}{i(\omega + \| \xi \|_{\varepsilon'})}.
\end{equation}
We can finish the proof of Theorem \ref{thm:ResolventEstimateMaxwell} for $d=2$:
\begin{proof}[Proof~of~Theorem~\ref{thm:ResolventEstimateMaxwell},~$d=2$]
We begin with the lower bound in \eqref{eq:ResolventEquivalence}. For $u$ with $\partial_1 u_1 +\partial_2 u_2 = 0$, we have
\begin{equation*}
p^{-1}(\omega,\xi) \hat{u}(\xi) = M^2(A,B) \hat{u}(\xi).
\end{equation*}
The entries of $M^2(A,B)$ are linear combinations of $e_{\pm}(\omega,\xi)$ and $\xi_i'$. The operators
\begin{equation*}
(\mathcal{R}^{\varepsilon'}_i f) \widehat (\xi) = \xi_i' \hat{f}(\xi)
\end{equation*}
are $L^p$-bounded for $1<p<\infty$ with a constant only depending on $p,\varepsilon,\mu$ as the symbols are linear combinations of Riesz symbols after changes of variables.
We find (see \eqref{eq:FractionalLaplacianResolvent} for notations)
\begin{equation}
\label{eq:ResolventEstimateUpperBound}
\| P(\omega,D)^{-1} \|_{L_0^p \to L_0^q} \lesssim \| e_+(\omega,D) \|_{L^p \to L^q} + \| e_-(\omega,D) \|_{L^p \to L^q}
\end{equation}
for $1 \leq p,q \leq \infty$ with $1 < p < \infty$ or $1<q<\infty$. The reason we are not required to take $1<p<\infty$ and $1<q<\infty$ is that, if there is one reflexive $L^p$-space, then we can commute the Fourier multipliers after multiplying out the matrices such that the Riesz transforms act on a reflexive $L^p$-space.\footnote{I thank the referee for pointing this out.} This shows the lower bound in \eqref{eq:ResolventEquivalence} for $d=2$.
\medskip
We turn to show the upper bound in \eqref{eq:ResolventEquivalence}, which is
\begin{equation}
\label{eq:EquivalenceHalfLaplacians}
\| P(\omega,D)^{-1} \|_{L_0^p \to L_0^q} \gtrsim \| e_+(\omega,D) \|_{L^p \to L^q} + \| e_-(\omega,D) \|_{L^p \to L^q}
\end{equation}
for $1<p,q<\infty$.
The operators $\mathcal{R}_j^{\varepsilon'}$ satisfy for $1<p<\infty$
\begin{equation}
\label{eq:EquivalenceRieszTransforms}
\| f \|_{L^p(\mathbb{R}^2)} \sim_{p,\varepsilon,\mu} \| \mathcal{R}_1^{\varepsilon^\prime} f \|_{L^p(\mathbb{R}^2)} + \| \mathcal{R}_2^{\varepsilon^\prime} f \|_{L^p(\mathbb{R}^2)}.
\end{equation}
In fact, as already used above, $\| \mathcal{R}_j^{\varepsilon^\prime} f \|_{L^p} \lesssim_{p,\varepsilon,\mu} \| f \|_{L^p}$ for $1<p<\infty$ as a consequence of Theorem \ref{thm:MultiplierTheorem}. Let $\chi_1$, $\chi_2: \mathbb{R} / (2 \pi \mathbb{Z}) \to [0,1]$ be a smooth partition of unity of the unit circle such that
\begin{equation*}
\left\{ \begin{array}{cl}
\chi_1(\theta) &= 1 \text{ for } \theta \in [-\frac{\pi}{8},\frac{\pi}{8}] \cup [\frac{7 \pi}{8}, \frac{9 \pi}{8}], \\
\chi_2(\theta) &= 1 \text{ for } \theta \in [\frac{3 \pi}{8}, \frac{5 \pi}{8}] \cup [\frac{11 \pi}{8}, \frac{13 \pi}{8}].
\end{array} \right.
\end{equation*}
We extend $\chi_i$ to $\mathbb{R}^2 \backslash 0$ by zero-homogeneity.
\medskip
For the reverse bound in \eqref{eq:EquivalenceRieszTransforms}, we decompose $f=f_1+f_2$ as $f_i = \chi_i(D) f$. Set $((\mathcal{R}_i^{\varepsilon^\prime})^{-1} f) \widehat (\xi) = \frac{\| \xi \|_{\varepsilon^\prime}}{\xi_i} \hat{f}(\xi)$. Note that $|\xi_i| \gtrsim \| \xi \|_{\varepsilon'}$ for $\xi \in \text{supp}(\hat{f}_i)$. By Theorem \ref{thm:MultiplierTheorem}, we find the estimate
\begin{equation*}
\| \big( \mathcal{R}^{\varepsilon^\prime}_i \big)^{-1} f_i \|_{L^p} \lesssim_{p,\varepsilon,\mu} \| f_i \|_{L^p}.
\end{equation*}
Consequently,
\begin{equation*}
\| f \|_{L^p} \leq \| f_1 \|_{L^p} + \| f_2 \|_{L^p} \leq \sum_{i=1}^2 \| \big( \mathcal{R}^{\varepsilon'}_i \big)^{-1} \mathcal{R}^{\varepsilon'}_i f_i \|_{L^p} \lesssim_{p,\varepsilon,\mu} \sum_{i=1}^2 \| \mathcal{R}^{\varepsilon'}_i f_i \|_{L^p}.
\end{equation*}
With \eqref{eq:EquivalenceRieszTransforms} in mind, we show \eqref{eq:EquivalenceHalfLaplacians} by considering the data
\begin{equation}
v =
\begin{pmatrix}
-2 \mathcal{R}_2^{\varepsilon^\prime} f & 2 \mathcal{R}_1^{\varepsilon^\prime} f & 0
\end{pmatrix}^t
.
\end{equation}
Clearly, $\partial_1 v_1 + \partial_2 v_2 = 0$. We compute
\begin{equation*}
m^{-1}(D) v = \mu
\begin{pmatrix}
0 & 1 & -1
\end{pmatrix}^t
f.
\end{equation*}
We further compute
\begin{equation*}
P(\omega,D)^{-1} v =
\begin{pmatrix}
- \mathcal{R}_2^{\varepsilon^\prime} (e_- + e_+) & \mathcal{R}_1^{\varepsilon'} (e_- + e_+ ) & \mu (- e_- + e_+)
\end{pmatrix}^t f,
\end{equation*}
and it follows by \eqref{eq:EquivalenceRieszTransforms}
\begin{equation*}
\begin{split}
\| P(\omega,D)^{-1} v \|_{L^q} &\sim \| (e_-(\omega,D) + e_+(\omega,D)) f \|_{L^q} + \mu \| (e_-(\omega,D) - e_+(\omega,D)) f \|_{L^q} \\
&\sim \| e_-(\omega,D) f \|_{L^q} + \| e_+(\omega,D) f \|_{L^q}
\end{split}
\end{equation*}
as claimed. Since $\| v \|_{L^p} \sim \| f \|_{L^p}$, by choosing $f$ suitably, we find
\begin{equation*}
\| P(\omega,D)^{-1} \|_{L_0^p \to L_0^q} \gtrsim \max( \| e_- \|_{L^p \to L^q}, \| e_+ \|_{L^p \to L^q} ) \sim \| e_- \|_{L^p \to L^q} + \| e_+ \|_{L^p \to L^q}.
\end{equation*}
Finally, we turn to \eqref{eq:ResolventEstimateAbove}, which reads for $d=2$
\begin{equation}
\label{eq:ResolventEstimateAbove2d}
\begin{split}
&\quad \| P(\omega,D)^{-1} (J_e,J_m) \|_{L^q} \lesssim (\| e_-(\omega,D) \|_{L^p \to L^q} + \| e_+(\omega,D) \|_{L^p \to L^q}) \|(J_e,J_m)\|_{L^p} \\
&\quad + \| (-\Delta)^{-\frac{1}{2}} \rho_e \|_{L^q}.
\end{split}
\end{equation}
Writing $J = (J_e,J_m)$, we decompose
\begin{equation*}
P(\omega,D)^{-1} J = (M^2(A,B) \hat{J})^{\vee} + (M_c \hat{J})^{\vee}
\end{equation*}
as in \eqref{eq:Decomposition2d}. The arguments from above estimate the contribution of $(M^2(A,B) \hat{J})^{\vee}$. A computation yields
\begin{equation*}
(M_c \hat{J})(\xi) =
\begin{pmatrix}
\varepsilon_{12} \xi_2' -\varepsilon_{22} \xi_1' \\
\varepsilon_{12} \xi_1' -\varepsilon_{11} \xi_2'\\
0
\end{pmatrix}
\frac{\hat{\rho}_e(\xi)}{\mu \omega \| \xi \|_{\varepsilon'}}
\end{equation*}
with $\rho_e = \partial_1 J_{e1} + \partial_2 J_{e2}$. From this follows
\begin{equation*}
\| (M_c \hat{J})^{\vee} \|_{L^q} \lesssim \| (-\Delta)^{-1/2} \rho_e \|_{L^q}
\end{equation*}
by $L^q$-boundedness of $\mathcal{R}_i^{\varepsilon'}$ for $1<q<\infty$ and since $\| \xi \|/ \| \xi \|_{\varepsilon'}$ is zero-homogeneous and smooth away from the origin. The proof is complete.
\end{proof}
\subsection{Proof of Theorem \ref{thm:ResolventEstimateMaxwell} for $d=3$}
\label{subsection:3d}
We consider $P(\omega,D)$ as in \eqref{eq:Maxwell3DConcise} with $\varepsilon = \text{diag}(\varepsilon_1,\varepsilon_2,\varepsilon_3)$ and $\mu > 0$. Here we consider the partially anisotropic case $\varepsilon_1 = a^{-1}$, $\varepsilon_2 = \varepsilon_3 = b^{-1}$, and suppose without loss of generality that $\mu = 1$, to which we can reduce by a linear substitution. The computation also covers the isotropic case $a=b$, which was considered in \cite{CossettiMandel2021}. For $\xi \in \mathbb{R}^3$ we denote
\begin{align*}
\| \xi \|^2 &= \xi_1^2 + \xi_2^2 + \xi_3^2, \quad \| \xi \|^2_\varepsilon = b \xi_1^2 + a \xi_2^2 + a \xi_3^2, \\
\xi' &= \xi / \| \xi \|, \qquad \qquad \quad \; \tilde{\xi} = \xi / \| \xi \|_\varepsilon.
\end{align*}
We write further
\begin{align*}
(\nabla \times u) \widehat{\,} (\xi) = - i \mathcal{B}(\xi) \hat{u}(\xi), \quad \mathcal{B}(\xi) =
\begin{pmatrix}
0 & \xi_3 & - \xi_2 \\
- \xi_3 & 0 & \xi_1 \\
\xi_2 & -\xi_1 & 0
\end{pmatrix}
.
\end{align*}
We have the following lemma on diagonalization:
\begin{lemma}
\label{lem:DiagonalizationPartiallyAnisotropic}
For almost all $\xi \in \mathbb{R}^3$ there is a matrix $\tilde{m}(\xi) \in \mathbb{C}^{6 \times 6}$ such that
\begin{equation*}
p(\omega,\xi) = \tilde{m}(\xi) d(\omega,\xi) \tilde{m}^{-1}(\xi)
\end{equation*}
with
\begin{equation*}
d(\omega,\xi) = i \, \text{diag}(\omega, \omega, \omega - \sqrt{b} \| \xi \|, \omega + \sqrt{b} \| \xi \|, \omega - \| \xi \|_\varepsilon, \omega + \| \xi \|_\varepsilon).
\end{equation*}
Furthermore, the components of $\tilde{m}$ and $\tilde{m}^{-1}$ are $L^p$-bounded Fourier multipliers for $1<p<\infty$.
\end{lemma}
\begin{proof}
To verify that the diagonal entries of $d$ are indeed the eigenvalues of $p$, we record eigenvectors, normalized to have zero-homogeneous entries. Eigenvectors for the eigenvalue $i \omega$ are
\begin{align*}
v_1^t &= \big(0,0,0, \xi_1^\prime , \xi_2^\prime, \xi_3^\prime \big), \\
v_2^t &= \big(\frac{\tilde{\xi}_1}{a }, \frac{\tilde{\xi}_2}{b }, \frac{\tilde{\xi}_3}{b }, 0, 0, 0 \big).
\end{align*}
Eigenvectors for the eigenvalues $i\omega \mp i \sqrt{b} \| \xi \|$ are given by
\begin{align*}
v_3^t &= \big(0,- \frac{\xi_3^\prime }{\sqrt{b}}, \frac{\xi_2^\prime }{\sqrt{b}}, - ((\xi_2^\prime)^2 + (\xi_3^\prime)^2), \xi_1^\prime \xi_2^\prime, \xi_1^\prime \xi_3^\prime \big), \\
v_4^t &= \big(0, \frac{\xi_3^\prime }{\sqrt{b} }, - \frac{\xi_2^\prime }{\sqrt{b}}, - ((\xi_2^\prime)^2 + (\xi_3^\prime)^2), \xi_1^\prime \xi_2^\prime, \xi_1^\prime \xi_3^\prime \big).
\end{align*}
Eigenvectors for the eigenvalues $i \omega \mp i \| \xi \|_{\varepsilon}$ are given by
\begin{align*}
v_5^t &= \big( \tilde{\xi}_2^2 + \tilde{\xi}_3^2, - \tilde{\xi}_1 \tilde{\xi}_2, - \tilde{\xi}_1 \tilde{\xi}_3, 0 , - \tilde{\xi}_3, \tilde{\xi}_2 \big),\\
v_6^t &= \big(- (\tilde{\xi}_2^2+ \tilde{\xi}_3^2), \tilde{\xi}_1 \tilde{\xi}_2, \tilde{\xi}_1 \tilde{\xi}_3, 0, -\tilde{\xi}_3, \tilde{\xi}_2 \big).
\end{align*}
Set
\begin{equation}
\label{eq:AuxiliaryMatrix}
m(\xi) = (v_1, \ldots, v_6)
\end{equation}
and
\begin{equation}
\label{eq:RenormalizationQuantities}
\alpha(\xi)= \frac{(\xi_2^2 + \xi_3^2)^{1/2}}{(\| \xi \| \| \xi \|_\varepsilon)^{\frac{1}{2}}} \text{ and } \delta = \frac{\| \xi \|}{\| \xi \|_\varepsilon}.
\end{equation}
The determinant of $m(\xi)$ is computed in Lemma \ref{lem:ComputationDeterminant} in the Appendix. We have
\begin{equation*}
| \det m(\xi) | \sim \alpha^4(\xi).
\end{equation*}
Furthermore, we find for $\alpha \neq 0$:
\begin{align*}
&m^{-1}(\xi) = \\
&\begin{pmatrix}
0 & 0 & 0 & \xi_1^\prime & \xi_2^\prime & \xi_3^\prime \\
ab \tilde{\xi}_1 & ab \tilde{\xi}_2 & ab \tilde{\xi}_3 & 0 & 0 & 0 \\
0 & - \frac{\sqrt{b} \| \xi \|}{2 \| \xi \|_\varepsilon} \frac{\tilde{\xi}_3}{\tilde{\xi}_2^2 + \tilde{\xi}_3^2} &
\frac{\sqrt{b} \| \xi \|}{2 \| \xi \|_\varepsilon} \frac{\tilde{\xi}_2}{\tilde{\xi}_2^2 + \tilde{\xi}_3^2} & - 1/2 & \frac{\xi_1' \xi_2'}{2(\xi_2'^2 + \xi_3'^2)} & \frac{\xi_1' \xi_3'}{2(\xi_2'^2 + \xi_3'^2)} \\
0 & \frac{\sqrt{b}\| \xi \| }{2 \| \xi \|_\varepsilon} \frac{\tilde{\xi}_3}{\tilde{\xi}_2^2 + \tilde{\xi}_3^2} & - \frac{\sqrt{b} \| \xi \| }{2 \| \xi \|_\varepsilon} \frac{\tilde{\xi}_2}{\tilde{\xi}_2^2 + \tilde{\xi}_3^2} & - 1/2 & \frac{\xi_1^\prime \xi_2^\prime}{2(\xi_2'^2 + \xi_3'^2)} & \frac{\xi_1^\prime \xi_3^\prime}{2(\xi_2'^2 + \xi_3'^2)} \\
a/2 & - \frac{b \tilde{\xi}_1 \tilde{\xi}_2}{2(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)} & - \frac{b \tilde{\xi}_1 \tilde{\xi}_3}{2(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)} & 0 & - \frac{\xi_3^\prime \| \xi \|_\varepsilon}{2 \| \xi \| (\xi_2'^2 + \xi_3'^2)} & \frac{\| \xi \|_\varepsilon \xi_2^\prime }{2 \| \xi \| (\xi_2'^2 + \xi_3'^2)} \\
-a/2 & \frac{b \tilde{\xi}_1 \tilde{\xi}_2}{2(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)} & \frac{b \tilde{\xi}_1 \tilde{\xi}_3}{2(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)} & 0 & - \frac{\xi_3^\prime \| \xi \|_\varepsilon}{2 \| \xi \| (\xi_2'^2 + \xi_3'^2)} & \frac{\| \xi \|_\varepsilon \xi_2^\prime}{2 \| \xi \| (\xi_2'^2 + \xi_3'^2)}
\end{pmatrix}
.
\end{align*}
Since $\alpha(\xi) \to 0$ as $|\xi_2| + |\xi_3| \to 0$, $m$ becomes singular along the $\xi_1$-axis, and the entries of $m^{-1}(\xi)$ are no longer $L^p$-bounded Fourier multipliers. This suggests renormalizing $v_3,\ldots,v_6$ by $1/\alpha(\xi)$. We let
\begin{equation*}
\begin{split}
&\tilde{m}(\xi) = \\
&\begin{pmatrix}
0 & \frac{\tilde{\xi}_1}{a} & 0 & 0 & (\delta (\tilde{\xi}_2^2 + \tilde{\xi}_3^2))^{\frac{1}{2}} & - (\delta(\tilde{\xi}_2^2 + \tilde{\xi}_3^2))^{\frac{1}{2}} \\
0 & \frac{\tilde{\xi}_2}{b} & - \frac{\xi_3'}{\sqrt{b} (\delta( \xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & \frac{\xi_3'}{\sqrt{b} (\delta(\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & - \frac{\delta^{\frac{1}{2}} \tilde{\xi}_1 \tilde{\xi}_2}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{1/2}} & \frac{\delta^{\frac{1}{2}} \tilde{\xi}_1 \tilde{\xi}_2}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} \\
0 & \frac{\tilde{\xi}_3}{b} & \frac{\xi_2'}{\sqrt{b} (\delta (\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & - \frac{\xi_2'}{\sqrt{b} (\delta(\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & - \frac{\delta^{\frac{1}{2}} \tilde{\xi}_1 \tilde{\xi}_3}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} & \frac{\delta^{\frac{1}{2}} \tilde{\xi}_1 \tilde{\xi}_3}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} \\
\xi_1' & 0 &- \frac{(\xi_2'^2 + \xi_3'^2)^{\frac{1}{2}}}{\delta^{\frac{1}{2}}} & - \frac{(\xi_2'^2 + \xi_3'^2)^{\frac{1}{2}}}{\delta^{\frac{1}{2}}} & 0 & 0 \\
\xi_2' & 0 & \frac{\xi_1' \xi_2' }{(\delta(\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & \frac{\xi_1' \xi_2' }{(\delta (\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & - \frac{\delta^{\frac{1}{2}} \tilde{\xi}_3}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} & - \frac{\delta^{\frac{1}{2}} \tilde{\xi}_3}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} \\
\xi_3' & 0 & \frac{\xi_1' \xi_3'}{(\delta( \xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & \frac{\xi_1' \xi_3' }{ (\delta(\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & \frac{\tilde{\xi}_2 \delta^{\frac{1}{2}}}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} & \frac{\tilde{\xi}_2 \delta^{\frac{1}{2}}}{(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}}
\end{pmatrix}.
\end{split}
\end{equation*}
By Lemma \ref{lem:ComputationDeterminant}, we have $|\det (\tilde{m}(\xi))| \sim 1$ for all $\xi \neq (\nu,0,0)$, $\nu \in \mathbb{R}$. Hence, $\tilde{m}$ and $\tilde{m}^{-1}$ are well-defined away from the $\xi_1$-axis. By Cramer's rule, we obtain $\tilde{m}^{-1}(\xi)$ from $m^{-1}(\xi)$ by modifying rows 3--6:
\begin{equation*}
\begin{split}
&\tilde{m}^{-1}(\xi) = \\
&\begin{pmatrix}
0 & 0 & 0 & \xi_1' & \xi_2' & \xi_3' \\
ab \tilde{\xi}_1 & ab \tilde{\xi}_2 & ab \tilde{\xi}_3 & 0 & 0 & 0 \\
0 & - \frac{\sqrt{b} \delta^{\frac{1}{2}} \tilde{\xi}_3}{2 (\tilde{\xi}_2^2+ \tilde{\xi}_3^2)^{\frac{1}{2}}} & \frac{\sqrt{b} \delta^{\frac{1}{2}} \tilde{\xi}_2}{2 (\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}} & - \frac{ (\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}}{2 \delta^{\frac{1}{2}}} & \frac{\xi_1' \xi_2' \delta^{\frac{1}{2}}}{2(\xi_2'^2 + \xi_3'^2)^{\frac{1}{2}}} & \frac{\xi_1' \xi_3' \delta^{\frac{1}{2}}}{2(\xi_2'^2 + \xi_3'^2)^{\frac{1}{2}}} \\
0 & \frac{\sqrt{b} \delta^{\frac{1}{2}} \tilde{\xi}_3}{2 (\tilde{\xi}_2^2+\tilde{\xi}_3^2)^{\frac{1}{2}}} & - \frac{\sqrt{b} \delta^{\frac{1}{2}} \tilde{\xi}_2}{2 (\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{1/2}} & - \frac{ (\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}}{2 \delta^{\frac{1}{2}}} & \frac{\delta^{\frac{1}{2}} \xi_1' \xi_2'}{2(\xi_2'^2 + \xi_3'^2)^{\frac{1}{2}}} & \frac{\delta^{\frac{1}{2}} \xi_1' \xi_3'}{2(\xi_2'^2 + \xi_3'^2)^{\frac{1}{2}}} \\
\frac{a (\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}}{2 \delta^{\frac{1}{2}}} & - \frac{ b \tilde{\xi}_1 \tilde{\xi}_2}{2 (\delta(\tilde{\xi}_2^2 + \tilde{\xi}_3^2))^{\frac{1}{2}}} & - \frac{b \tilde{\xi}_1 \tilde{\xi}_3}{ 2 (\delta(\tilde{\xi}_2^2 + \tilde{\xi}_3^2))^{\frac{1}{2}}} & 0 & - \frac{\xi_3'}{2(\delta(\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & \frac{\xi_2'}{2 (\delta(\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} \\
- \frac{a (\tilde{\xi}_2^2 + \tilde{\xi}_3^2)^{\frac{1}{2}}}{2 \delta^{\frac{1}{2}}} & \frac{b \tilde{\xi}_1 \tilde{\xi}_2}{2 (\delta(\tilde{\xi}_2^2 + \tilde{\xi}_3^2))^{\frac{1}{2}}} & \frac{b \tilde{\xi}_1 \tilde{\xi}_3}{2 (\delta(\tilde{\xi}_2^2 + \tilde{\xi}_3^2))^{\frac{1}{2}}} & 0 & - \frac{\xi_3'}{2(\delta (\xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}} & \frac{\xi_2'}{2(\delta( \xi_2'^2 + \xi_3'^2))^{\frac{1}{2}}}
\end{pmatrix}
.
\end{split}
\end{equation*}
Also by Cramer's rule, it is enough to check that the Fourier multipliers associated with the entries in $\tilde{m}$ are $L^p$-bounded, for which we use Theorem \ref{thm:MultiplierTheorem}.
For the first and second column this is evident since these are Riesz transforms up to a change of variables. We turn to the proof that the entries of $v_i/\alpha(\xi)$, $i=3,\ldots,6$, are multipliers bounded in $L^p$ for $1<p<\infty$. This follows by writing them as products of zero-homogeneous functions, which are smooth away from the origin, and Riesz transforms in two variables. We give the details for the entries of $v_3/\alpha(\xi)$:
\begin{itemize}
\item $(v_3)_2/\alpha(\xi)$: We have to show that
\begin{equation*}
\frac{\xi_3 (\| \xi \| \| \xi \|_{\varepsilon})^{1/2} }{\| \xi \| (\xi_2^2 + \xi_3^2)^{1/2}} = \frac{\xi_3}{(\xi_2^2+ \xi_3^2)^{1/2}} \big( \frac{\| \xi \|_{\varepsilon}}{\| \xi \|} \big)^{1/2}
\end{equation*}
is a multiplier. This is the case because $\frac{\xi_3}{(\xi_2^2+ \xi_3^2)^{1/2}}$ is the symbol of a Riesz transform in $(x_2,x_3)$ and the second factor $\big( \frac{\| \xi \|_{\varepsilon}}{\| \xi \|} \big)^{1/2}$ is zero-homogeneous and smooth away from the origin, hence within the scope of Theorem \ref{thm:MultiplierTheorem}.
\item $(v_3)_3/\alpha(\xi)$ is a multiplier by symmetry in $\xi_2$ and $\xi_3$ and the previous considerations.
\item $(v_3)_4/\alpha(\xi)$: We find
\begin{equation*}
\frac{(\xi_2^2 + \xi_3^2)}{\| \xi \|^2 (\xi_2^2+ \xi_3^2)^{1/2}} \cdot (\| \xi \| \| \xi \|_{\varepsilon})^{1/2} = \frac{(\xi_2^2 + \xi_3^2)^{1/2}}{\| \xi \|} \cdot \big( \frac{\| \xi \|_{\varepsilon}}{\| \xi \|} \big)^{1/2}
\end{equation*}
to be a Fourier multiplier as it is zero-homogeneous and smooth away from the origin.
\item $(v_3)_5/\alpha(\xi)$: Consider
\begin{equation*}
\frac{\xi_1 \xi_2}{\| \xi \|^2 (\xi_2^2 + \xi_3^2)^{1/2}} (\| \xi \| \| \xi \|_{\varepsilon})^{1/2} = \frac{\xi_1}{\| \xi \|} \cdot \frac{\xi_2}{(\xi_2^2 + \xi_3^2)^{1/2}} \cdot \big( \frac{\| \xi \|_{\varepsilon}}{\| \xi \|} \big)^{1/2},
\end{equation*}
which is again a Fourier multiplier because the first and third factors are zero-homogeneous and smooth in $\mathbb{R}^3 \backslash \{0\}$, while the second is again a Riesz transform in two variables.
\item $(v_3)_6/\alpha(\xi)$ can be handled like the previous case.
\end{itemize}
The remaining entries of $\tilde{m}$ are treated similarly, which completes the proof.
\end{proof}
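The factorizations in the itemized list are elementary algebraic identities; the following sketch checks the one for $(v_3)_2/\alpha(\xi)$ numerically, including the zero-homogeneity of the full symbol (parameter values are illustrative):

```python
# Check the factorization for (v_3)_2 / alpha(xi): the symbol equals a Riesz
# symbol in (xi_2, xi_3) times a zero-homogeneous smooth factor; here
# ||xi||_eps^2 = b xi_1^2 + a xi_2^2 + a xi_3^2 as in the text.
import math, random

def norm(xi):
    return math.sqrt(sum(x * x for x in xi))

def norm_eps(xi, a, b):
    return math.sqrt(b * xi[0] ** 2 + a * xi[1] ** 2 + a * xi[2] ** 2)

def symbol(xi, a, b):
    # xi_3 (||xi|| ||xi||_eps)^{1/2} / (||xi|| (xi_2^2 + xi_3^2)^{1/2})
    return xi[2] * math.sqrt(norm(xi) * norm_eps(xi, a, b)) / (
        norm(xi) * math.sqrt(xi[1] ** 2 + xi[2] ** 2))

def factored(xi, a, b):
    riesz = xi[2] / math.sqrt(xi[1] ** 2 + xi[2] ** 2)
    return riesz * math.sqrt(norm_eps(xi, a, b) / norm(xi))

random.seed(1)
a, b = 2.0, 0.5
for _ in range(100):
    xi = [random.uniform(-1, 1) for _ in range(3)]
    assert abs(symbol(xi, a, b) - factored(xi, a, b)) < 1e-9
    for t in (0.1, 3.0, 100.0):   # zero-homogeneity of the full symbol
        assert abs(symbol([t * x for x in xi], a, b) - symbol(xi, a, b)) < 1e-9
print("factorization and homogeneity verified")
```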
\begin{remark}
To compute the eigenvalues from scratch, it is perhaps easiest to use the block structure of $p(\omega,\xi)$ to find
\begin{equation*}
\det (p(\omega,\xi)) = \det (-\omega^2 1_{3 \times 3} - \mathcal{B}^2(\xi) \varepsilon^{-1}).
\end{equation*}
Next, we can use the identity $\mathcal{B}^2(\xi) = - \| \xi \|^2 1_{3 \times 3} + \xi \otimes \xi$, after which there seems to be no further simplification but to compute the determinant by brute force. Note that $\det(i \lambda 1_{6 \times 6} - p(\omega,\xi)) = \det (p(\lambda - \omega,\xi))$, which allows one to find the eigenvalues from the zero locus of $\det(p(\omega,\xi))$.
\end{remark}
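The route sketched in the remark is easy to check numerically: the sketch below verifies the identity for $\mathcal{B}^2(\xi)$ and that $\det(-\omega^2 1_{3\times 3} - \mathcal{B}^2(\xi)\varepsilon^{-1})$ vanishes at $\omega^2 \in \{0, b\|\xi\|^2, \|\xi\|_\varepsilon^2\}$, consistent with the diagonal of $d(\omega,\xi)$ (the values of $a$, $b$ are illustrative):

```python
# The Remark's route: B^2(xi) = -||xi||^2 I + xi (x) xi, and the zero locus of
# det(-omega^2 I - B^2(xi) eps^{-1}), eps^{-1} = diag(a, b, b), is attained at
# omega^2 in {0, b ||xi||^2, ||xi||_eps^2}.
import random

def cross_matrix(xi):
    x1, x2, x3 = xi
    return [[0, x3, -x2], [-x3, 0, x1], [x2, -x1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

random.seed(2)
a, b = 2.0, 0.5
for _ in range(50):
    xi = [random.uniform(-1, 1) for _ in range(3)]
    n2 = sum(x * x for x in xi)
    ne2 = b * xi[0] ** 2 + a * xi[1] ** 2 + a * xi[2] ** 2
    B2 = matmul(cross_matrix(xi), cross_matrix(xi))
    for i in range(3):
        for j in range(3):  # B^2 = -||xi||^2 I + xi (x) xi
            assert abs(B2[i][j] - (-n2 * (i == j) + xi[i] * xi[j])) < 1e-12
    epsinv = [a, b, b]
    M = [[B2[i][j] * epsinv[j] for j in range(3)] for i in range(3)]
    for om2 in (0.0, b * n2, ne2):
        D = [[-om2 * (i == j) - M[i][j] for j in range(3)] for i in range(3)]
        assert abs(det3(D)) < 1e-9
print("determinant zero locus confirmed")
```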
We prove Theorem \ref{thm:ResolventEstimateMaxwell} for $d=3$ following along the argument for $d=2$. Proposition \ref{prop:Explicit3d} in the Appendix provides a decomposition
\begin{equation}
\label{eq:Decomposition3d}
p^{-1}(\omega,\xi)= M^3(A,B,C,D) + M^3_c
\end{equation}
with $M^3_c v = 0$ for $\xi_1 v_1 + \xi_2 v_2 + \xi_3 v_3 = \xi_1 v_4 + \xi_2 v_5 + \xi_3 v_6 = 0$ and
\begin{equation*}
A = \frac{1}{i(\omega - \| \xi \|_\varepsilon)}, \; B = \frac{1}{i(\omega + \| \xi \|_\varepsilon)}, \; C= \frac{1}{i(\omega - \| \xi \|)}, \; D = \frac{1}{i(\omega + \| \xi \|)}.
\end{equation*}
\begin{proof}[Proof of Theorem \ref{thm:ResolventEstimateMaxwell}, $d=3$]
The estimate
\begin{equation*}
\| P(\omega,D)^{-1} \|_{L^p_0(\mathbb{R}^3) \to L^q_0(\mathbb{R}^3)} \lesssim \| ((-\Delta)^{\frac{1}{2}}-\omega)^{-1} \|_{L^p \to L^q} + \| ((-\Delta)^{\frac{1}{2}}+\omega)^{-1} \|_{L^p \to L^q}
\end{equation*}
for $1 \leq p,q \leq \infty$ with ($1 <p < \infty$ or $1<q<\infty$) follows from the same argument as in the two-dimensional case: The entries of $M^3(A,B,C,D)$ are linear combinations of $A$, $B$, $C$, $D$ multiplied by components of $\tilde{m}$ and $\tilde{m}^{-1}$, which yield Fourier multipliers by Lemma \ref{lem:DiagonalizationPartiallyAnisotropic}.
\medskip
Below let $(\mathcal{R}_i f) \widehat (\xi) = \frac{\xi_i}{\| \xi \|} \hat{f}(\xi) $.
To show the lower bound for $1<p,q<\infty$, we consider the following initial data:
\begin{equation*}
J_{e} = \begin{pmatrix}
0 \\ - \mathcal{R}_3 f \\ \mathcal{R}_2 f
\end{pmatrix}, \quad
J_{m} = \underline{0}.
\end{equation*}
Note that $\nabla \cdot J_{e} = 0$; again, the data is physically meaningful as the magnetic current vanishes.
Let $(e_{\pm} f) \widehat (\xi) = (\omega \pm \sqrt{b} \| \xi \|)^{-1} \hat{f}(\xi)$. We compute with $m$ as in \eqref{eq:AuxiliaryMatrix}:
\begin{equation}
\label{eq:Example}
(d m^{-1})(\xi) \begin{pmatrix}
\hat{J}_{e} \\ \hat{J}_{m}
\end{pmatrix}
= \frac{\sqrt{b}}{2}
\begin{pmatrix}
0 \\ 0 \\ \widehat{e_- f} \\
- \widehat{e_+ f} \\ 0 \\ 0
\end{pmatrix}, \quad
\begin{pmatrix}
D \\ B
\end{pmatrix}
= i
\begin{pmatrix}
0 \\ - \mathcal{R}_3 ( e_- f + e_+ f) \\ \mathcal{R}_2 ( e_- f + e_+ f) \\
- (\mathcal{R}_2^2 + \mathcal{R}_3^2) (e_- f - e_+ f) \\ \mathcal{R}_1 \mathcal{R}_2 (e_- f - e_+ f) \\ \mathcal{R}_1 \mathcal{R}_3 (e_- f - e_+ f)
\end{pmatrix}
\end{equation}
We shall see that
\begin{equation}
\label{eq:LowerBound}
\| (D,B) \|_{L^q_0} \gtrsim \| e_- f + e_+ f \|_{L^q} + \| e_- f - e_+ f \|_{L^q} \gtrsim \| e_- f \|_{L^q} + \| e_+ f \|_{L^q}
\end{equation}
provided either that $f$ has frequency support in a conic neighbourhood of the $\xi_3$-axis or that $f$ is spherically symmetric.
\medskip
\noindent Assume that $g \in \mathcal{S}(\mathbb{R}^3)$ and
\begin{equation*}
\text{supp} (\hat{g}) \subseteq \{ \xi \in \mathbb{R}^3 : | \xi / |\xi| - e_3 | \leq c \ll 1 \text{ and } \frac{1}{2} \leq | \xi | \leq 2 \} =: E.
\end{equation*}
By Theorem \ref{thm:MultiplierTheorem}, we have for $1<p<\infty$
\begin{equation}
\label{eq:RieszTransformConicNeighbourhood}
\| g \|_{L^p} \lesssim \| \mathcal{R}_3 g \|_{L^p} \text{ and } \| \mathcal{R}_2 g \|_{L^p} \leq C(c) \| g \|_{L^p}
\end{equation}
with $C(c) \to 0$ as $c \to 0$. If $\text{supp}( \hat{f}) \subseteq E$, then also the Fourier support of $e_- f \pm e_+ f$ is contained in $E$, and an application of \eqref{eq:RieszTransformConicNeighbourhood} to $D_2$ and $B_1$ yields
\begin{equation}
\label{eq:EstimateBelow}
\begin{split}
\| (D,B) \|_{L^q_0} &\gtrsim \| \mathcal{R}_3 ( e_- f + e_+ f) \|_{L^q} + \| (\mathcal{R}_2^2 + \mathcal{R}_3^2) ( e_- f - e_+ f) \|_{L^q} \\
&\gtrsim \| e_- f + e_+ f \|_{L^q} + \| e_- f - e_+ f \|_{L^q} \\
&\gtrsim \| e_- f \|_{L^q} + \| e_+ f \|_{L^q},
\end{split}
\end{equation}
which is \eqref{eq:LowerBound}.\\
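At the symbol level, the two bounds in \eqref{eq:RieszTransformConicNeighbourhood} reflect that on $E$ the multiplier of $\mathcal{R}_3$ stays close to $1$ while that of $\mathcal{R}_2$ is of size $c$; a numeric sketch with an ad-hoc sampling of $E$:

```python
# On E = { |xi/|xi| - e_3| <= c, 1/2 <= |xi| <= 2 } the symbol of R_3 is
# bounded below by 1 - c while the symbol of R_2 is at most c, which is the
# symbol-level content of eq:RieszTransformConicNeighbourhood.
import math, random

random.seed(3)
c = 0.05
for _ in range(1000):
    # ad-hoc sample: a direction near e_3 and a radius in [1/2, 2]
    w = [random.uniform(-c / 2, c / 2), random.uniform(-c / 2, c / 2), 1.0]
    nw = math.sqrt(sum(x * x for x in w))
    r = random.uniform(0.5, 2.0)
    xi = [r * x / nw for x in w]
    n = math.sqrt(sum(x * x for x in xi))
    dist = math.sqrt((xi[0] / n) ** 2 + (xi[1] / n) ** 2 + (xi[2] / n - 1) ** 2)
    assert dist <= c                  # the sample lies in the cone of E
    assert xi[2] / n >= 1 - c         # R_3 symbol bounded below on E
    assert abs(xi[1]) / n <= c        # R_2 symbol small on E, C(c) -> 0
print("symbol bounds on E verified")
```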
Next, suppose that $f \in L^p(\mathbb{R}^3)$, $1<p<\infty$ is spherically symmetric. Since $\mathcal{R}_1^2 + \mathcal{R}_2^2 + \mathcal{R}_3^2 = Id$ and $\| \mathcal{R}_i^2 f \|_{L^p} = \| \mathcal{R}_j^2 f \|_{L^p}$ for $i,j \in \{1,2,3\}$ by change of variables and rotation symmetry, we find $\| \mathcal{R}_i^2 f \|_{L^p} \gtrsim \| f \|_{L^p}$. By $L^p$-boundedness, we have
\begin{equation}
\label{eq:RieszTransformsEstimateRadialSymmetry}
\| f \|_{L^p} \lesssim \| \mathcal{R}_i^2 f \|_{L^p} \lesssim \| \mathcal{R}_i f \|_{L^p} \lesssim \| f \|_{L^p}.
\end{equation}
Similarly,
\begin{equation*}
(\mathcal{R}_1^2 + \mathcal{R}_2^2) + (\mathcal{R}_2^2 + \mathcal{R}_3^2) + (\mathcal{R}_1^2 + \mathcal{R}_3^2) = 2 Id,
\end{equation*}
and $\| (\mathcal{R}_i^2 + \mathcal{R}_j^2) f \|_{L^p} = \| (\mathcal{R}_k^2 + \mathcal{R}_l^2) f \|_{L^p}$ again by change of variables and rotation symmetry. Hence, we also find
\begin{equation}
\label{eq:RieszTransformBoundBelowSquared}
\| (\mathcal{R}_i^2 + \mathcal{R}_j^2) f \|_{L^p} \gtrsim \| f \|_{L^p}.
\end{equation}
\eqref{eq:RieszTransformsEstimateRadialSymmetry} and \eqref{eq:RieszTransformBoundBelowSquared} together allow us to argue in the spherically symmetric case as in \eqref{eq:EstimateBelow}. If we choose $f$ such that the operator norms of $e_{\pm}$ are nearly attained, we find
\begin{equation*}
\| (D,B) \|_{L^q_0} \gtrsim (\| e_- \|_{L^p \to L^q} + \| e_+ \|_{L^p \to L^q} ) \| f \|_{L^p}.
\end{equation*}
Lastly, if $\text{supp} (\hat{f}) \subseteq E$, i.e., the frequency support lies in a conic neighbourhood of the $\xi_3$-axis, or if $f$ is spherically symmetric, we find $\| (J_{e},J_{m}) \|_{L^p_0} \sim \| f \|_{L^p}$. To see that it suffices to consider $f$ of this kind, we recall the examples from \cite[Section~5.2]{KwonLee2020}, which give the claimed lower bound for the operator norm of the resolvent of the fractional Laplacian: a Knapp-type example, which can be realized with frequency support in a conic neighbourhood of the $\xi_3$-axis \cite[p.~1458]{KwonLee2020}, and a spherically symmetric example related to the surface measure on the sphere \cite[p.~1459]{KwonLee2020}.
\medskip
We turn to the proof of \eqref{eq:ResolventEstimateAbove} for $d=3$:
\begin{equation}
\label{eq:ResolventEstimateAbove3d}
\begin{split}
&\quad \| P(\omega,D)^{-1} (J_e,J_m) \|_{L^q} \\
&\lesssim (\| ((-\Delta)^{\frac{1}{2}} - \omega)^{-1} \|_{L^p \to L^q} + \| ((-\Delta)^{\frac{1}{2}} + \omega)^{-1} \|_{L^p \to L^q}) \|(J_e,J_m)\|_{L^p} \\
&\quad + \| (-\Delta)^{-\frac{1}{2}} \rho_e \|_{L^q} + \| (-\Delta)^{-\frac{1}{2}} \rho_m \|_{L^q}.
\end{split}
\end{equation}
This hinges again on the decomposition
\begin{equation*}
(P^{-1}(\omega,D) (J_e,J_m))^{\wedge}(\xi) = M^3(A,B,C,D) (\hat{J}_e,\hat{J}_m)(\xi) + M^3_c (\hat{J}_e,\hat{J}_m)(\xi).
\end{equation*}
The contribution of $M^3(A,B,C,D)$ is estimated like in the first part of the proof. We compute
\begin{equation*}
\begin{split}
&M^3_c (\hat{J}_e,\hat{J}_m)(\xi) \\
&= -
(\frac{b \tilde{\xi}_1 \hat{\rho}_e(\xi)}{ \omega \| \xi \|_{\varepsilon}} , \frac{a \tilde{\xi}_2 \hat{\rho}_e(\xi)}{ \omega \| \xi \|_{\varepsilon}} , \frac{a \tilde{\xi}_3 \hat{\rho}_e(\xi)}{ \omega \| \xi \|_{\varepsilon}} , \frac{\xi_1' \hat{\rho}_m(\xi)}{ \omega \| \xi \|} , \frac{ \xi_2' \hat{\rho}_m(\xi)}{ \omega \| \xi \|}, \frac{\xi_3' \hat{\rho}_m(\xi)}{ \omega \| \xi \|} )^t
.
\end{split}
\end{equation*}
The claim follows by Theorem \ref{thm:MultiplierTheorem} because $\| \xi \| / \| \xi \|_\varepsilon$, $\xi_i'$, and $\tilde{\xi}_i$ are zero-homogeneous and smooth away from the origin. The proof of Theorem \ref{thm:ResolventEstimateMaxwell} is complete.
\end{proof}
\section{Local and global LAP}
\label{section:LAP}
Let $P(\omega,D)$ be as in the previous section. In the following we want to investigate the limit of
\begin{equation*}
P(\omega \pm i \delta, D)^{-1} f \text{ as } \delta \to 0, \quad \omega \in \mathbb{R} \backslash 0,
\end{equation*}
by which we construct solutions to the time-harmonic Maxwell equations. By scaling, the following estimates are uniform in $\omega$, provided that $\omega$ varies in a compact set away from the origin. We further suppose that $\omega > 0$; the case $\omega < 0$ can be treated with the obvious modifications.
In the following let $0<|\delta|<1/2$. By the above diagonalization, it is equivalent to consider uniform boundedness of
\begin{equation*}
e^{\varepsilon^\prime}_{\pm}(\omega +i \delta): L^p(\mathbb{R}^d) \to L^q(\mathbb{R}^d), \quad (e^{\varepsilon'}_{\pm}(\omega + i \delta) f) \widehat (\xi) = \frac{\hat{f}(\xi)}{\|\xi\|_{\varepsilon^\prime} \pm ( \omega+ i \delta)}.
\end{equation*}
Hence, by the results of the previous section, the uniform $L^p_0$-$L^q_0$-LAP fails due to the lack of uniform resolvent estimates for the Half-Laplacian in $L^p$-spaces. This is recorded in Corollary \ref{cor:GlobalLAP}.
Regarding the local $L^p_0$-$L^q_0$-LAP, we observe that the operator
\begin{equation*}
(e^{\varepsilon'}_{+}(\omega \pm i \delta) f) \widehat (\xi) = \frac{\beta(\xi) \hat{f}(\xi)}{\|\xi\|_{\varepsilon'} + (\omega \pm i \delta)}
\end{equation*}
for $\beta \in C^\infty_c$ and $0<\delta<1/2$ is bounded from $L^p$ to $L^q$ for $1 \leq p \leq q \leq \infty$ by Young's inequality, with the obvious limit as $\delta \to 0$. Thus, we focus on
\begin{equation}
\label{eq:ReducedOperator}
(e_\delta f) \widehat (\xi) := (e_-(\omega \pm i\delta) f) \widehat (\xi) = \frac{\beta(\xi) \hat{f}(\xi)}{\|\xi\|_{\varepsilon'} - (\omega \pm i \delta)}
\end{equation}
with $0< \delta < \delta_0 \ll 1$, where $\beta \in C^\infty_c(\mathbb{R}^d)$.
We can be more precise about the limiting operators: for $t \in \mathbb{R}$ recall Sokhotsky's formula, which holds in the sense of distributions:
\begin{equation*}
\lim_{\varepsilon \downarrow 0} \frac{1}{t \pm i \varepsilon} = v.p. \frac{1}{t} \mp i \pi \delta_0(t),
\end{equation*}
where $\delta_0$ denotes the delta distribution at the origin.
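A numeric illustration with $f(t) = e^{-t^2}$ (grid parameters chosen ad hoc): the principal value vanishes by symmetry, and the regularized integral approaches $i\pi f(0)$ as $\varepsilon \downarrow 0$:

```python
# Numeric illustration of Sokhotsky's formula for f(t) = exp(-t^2):
# lim_{eps -> 0} int f(t)/(t - i*eps) dt = v.p. int f(t)/t dt + i*pi*f(0),
# where the principal value vanishes since f(t)/t is odd.
import math

def regularized_integral(eps, h=1e-4, R=8.0):
    # midpoint rule on a symmetric grid over [-R, R]
    total = 0.0 + 0.0j
    for k in range(int(2 * R / h)):
        t = -R + (k + 0.5) * h
        total += math.exp(-t * t) / (t - 1j * eps) * h
    return total

val = regularized_integral(1e-2)
assert abs(val.real) < 1e-6            # principal value part: 0 by symmetry
assert abs(val.imag - math.pi) < 0.05  # approaches pi * f(0) = pi
```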
Let
\begin{equation*}
\mathcal{R}_{\pm}^{loc} f = \lim_{\delta \to \pm 0} e_{\delta} f.
\end{equation*}
We find
\begin{equation*}
\mathcal{R}_{\pm}^{loc} f = v.p. \int \frac{\beta(\xi) e^{ix\xi}}{\|\xi\|_{\varepsilon'} - \omega} \hat{f}(\xi) d\xi \pm i \pi \int e^{i x \xi} \beta(\xi) \delta(\|\xi\|_{\varepsilon'} - \omega) \hat{f}(\xi) d\xi,
\end{equation*}
and by the diagonalization formulae, the limiting operators can be expressed as linear combinations involving generalized Riesz transforms, $\mathcal{R}^{loc}_{\pm}$, and $e_+$. We recall the $L^p$-$L^q$-mapping properties of $\mathcal{R}^{loc}_{\pm}$.
We observe that
\begin{equation*}
(\mathcal{R}^{loc}_+ - \mathcal{R}^{loc}_-) f = 2 \pi i \int_{\{ \| \xi \|_{\varepsilon'} = 1 \}} \beta(\xi) e^{ix \xi} \hat{f}(\xi) d\sigma(\xi).
\end{equation*}
This operator, modulo the bounded operator given by convolution with $\mathcal{F}^{-1} \beta$ and a linear change of variables $\xi \to \zeta$ such that $\| \xi \|_{\varepsilon'} = \| \zeta \|$, is known as the \emph{restriction--extension operator} (cf. \cite{JeongKwonLee2016,KwonLee2020}) and is a special case of the Bochner--Riesz operator of negative index:
\begin{equation*}
(\mathcal{B}^{\alpha} f) \widehat (\xi) = \frac{1}{\Gamma(1-\alpha)} \frac{\hat{f}(\xi)}{(1-\| \xi \|^2)_+^\alpha}, \quad 0 < \alpha \leq \frac{d+2}{2},
\end{equation*}
where $\mathcal{B}^\alpha$ is defined by analytic continuation for $\alpha \geq 1$. Hence, for $\alpha = 1$, it matches the restriction--extension operator. This operator is well-understood due to the works of B\"orjeson \cite{Boerjeson1986}, Sogge \cite{Sogge1986}, and Guti\'errez \cite{Gutierrez2000,Gutierrez2004}. The most recent results for Bochner--Riesz operators of negative index are due to Kwon--Lee \cite{KwonLee2020}. Guti\'errez showed that $\mathcal{B}^1: L^p \to L^q$ is bounded if and only if $(1/p,1/q) \in \mathcal{P}(d)$ with
\begin{equation*}
\mathcal{P}(d) = \{ (x,y) \in [0,1]^2 \, : \, x-y \geq \frac{2}{d+1}, \; x > \frac{d+1}{2d}, \; y < \frac{d-1}{2d} \}.
\end{equation*}
She used this to show uniform resolvent estimates for
\begin{equation*}
(-\Delta - z)^{-1}: L^p \to L^q, \quad z \in \mathbb{S}^1 \backslash \{1 \} \text{ for } (1/p,1/q) \in \mathcal{R}_1(d).
\end{equation*}
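Membership in $\mathcal{P}(d)$ reduces to checking three inequalities; a small sketch (the sample pairs are illustrative, not taken from the text):

```python
# Membership in Gutierrez' region P(d): (x, y) = (1/p, 1/q) must satisfy
# x - y >= 2/(d+1), x > (d+1)/(2d), y < (d-1)/(2d).
def in_P(x, y, d):
    return x - y >= 2 / (d + 1) and x > (d + 1) / (2 * d) and y < (d - 1) / (2 * d)

# d = 3: the conditions read x - y >= 1/2, x > 2/3, y < 1/3
assert in_P(3 / 4, 1 / 4, 3)      # p = 4/3, q = 4 is admissible
assert not in_P(2 / 3, 1 / 6, 3)  # fails the open condition x > 2/3
assert not in_P(3 / 4, 1 / 3, 3)  # fails y < 1/3 and the gap condition
print("P(d) membership checks passed")
```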
We summarize the operator bounds for $e_\delta$ and $\mathcal{R}_{\pm}^{\text{loc}}$.
\begin{proposition}[{\cite[Proposition~4.1]{KwonLee2020}}]
\label{prop:OperatorBounds}
Let $\omega > 0$, $0<\delta<1/2$, $\beta \in C^\infty_c(\mathbb{R}^d)$ and $e_\delta$ as in \eqref{eq:ReducedOperator}. Then the following estimates hold for $(1/p,1/q) \in \mathcal{P}(d)$:
\begin{equation}
\label{eq:OperatorBounds}
\begin{split}
\| e_\delta \|_{L^p \to L^q} &\leq C(\omega,p,q), \\
\big\| \int_{\mathbb{R}^d} e^{ix.\xi} \delta(\| \xi \|_{\varepsilon'} - \omega) \hat{f}(\xi) d\xi \big\|_{L^q} &\leq C(\omega,p,q) \| f \|_{L^p}, \\
\big\| v.p. \int_{\mathbb{R}^d} e^{ix.\xi} \frac{\beta(\xi)}{\| \xi \|_{\varepsilon'} - \omega} \hat{f}(\xi) d\xi \big\|_{L^q} &\leq C(\omega,p,q) \|f \|_{L^p}.
\end{split}
\end{equation}
\end{proposition}
We are ready for the proof of the local LAP:
\begin{proposition}[Local LAP]
\label{prop:LocalLAP}
A local $L^p_0$-$L^q_0$-LAP holds provided that $(1/p,1/q) \in \mathcal{P}(d)$. This means that for $\omega \in \mathbb{R} \backslash 0$ and $\beta \in C^\infty_c(\mathbb{R}^d)$, we have resolvent bounds, uniform in $0<\delta<1/2$,
\begin{equation}
\label{eq:UniformResolventBounds}
\| P(\omega \pm i \delta,D)^{-1} \beta(D) f \|_{L^q_0(\mathbb{R}^d)} \lesssim_{p,q,d,\omega} \| f \|_{L^p_0(\mathbb{R}^d)}
\end{equation}
and there are limiting operators $P_{\pm}^{loc}: L^p_0 \to L^q_0$ such that
\begin{equation*}
P(\omega \pm i \delta, D)^{-1} \beta(D) f \to P_{\pm}^{loc}(\omega) f \text{ in } (\mathcal{S}'(\mathbb{R}^d))^{m(d)}.
\end{equation*}
\end{proposition}
\begin{proof}
We assume that $\omega > 0$ because $\omega < 0$ can be treated \emph{mutatis mutandis}. Recall the bounds for $e_\delta$ recorded in Proposition \ref{prop:OperatorBounds}, the easier bounds for $e_+^{\varepsilon'}$, and the diagonalization from Section \ref{section:Reduction}, which decomposes (cf. Lemmas \ref{lem:Diagonalization2d}, \ref{lem:DiagonalizationPartiallyAnisotropic})
\begin{equation*}
p(\omega,\xi) = m(\xi) d(\omega,\xi) m^{-1}(\xi).
\end{equation*}
By these, \eqref{eq:UniformResolventBounds} follows for $(1/p,1/q) \in \mathcal{P}(d)$ with $1<p,q<\infty$, which is required to bound the generalized Riesz transforms. We extend this to all $(1/p,1/q) \in \mathcal{P}(d)$ by Young's inequality: For $(1/p,0) \in \mathcal{P}(d)$ we choose $1<\tilde{q}<\infty$ such that $(1/p,1/\tilde{q}) \in \mathcal{P}(d)$. By Young's inequality and the previously established bounds for $(1/p,1/\tilde{q}) \in \mathcal{P}(d)$, it follows that
\begin{equation*}
\| P(\omega \pm i \delta,D)^{-1} \beta(D) f \|_{L^\infty_0} \lesssim \| P(\omega \pm i \delta,D)^{-1} \beta(D) f \|_{L^{\tilde{q}}_0} \lesssim \| f \|_{L^p_0}.
\end{equation*}
The case $(1,1/q) \in \mathcal{P}(d)$ is treated by duality.
\medskip
By Sokhotsky's formula and the diagonalization, we can consider the limiting operators
\begin{equation*}
P_{\pm}^{loc}(\omega) = \lim_{\delta \to 0} P(\omega\pm i \delta,D)^{-1} \beta(D): L^p_0 \to L^q_0,
\end{equation*}
whose mapping properties follow again from Proposition \ref{prop:OperatorBounds} and the diagonalization as argued above. We give explicit formulae in Propositions \ref{prop:Explicit2d} and \ref{prop:Explicit3d}; since these are bulky, they are recorded in the appendix.
\end{proof}
We are ready for the proof of Theorem \ref{thm:LocalLAP}:
\begin{proof}[Proof of Theorem \ref{thm:LocalLAP}]
Let $1 \leq p_1, p_2, q \leq \infty$, and $\omega \in \mathbb{R} \backslash 0$. Choose $C=C(\varepsilon,\omega)$ such that $p(\omega,\xi)^{-1}$ is regular for $\| \xi \| \geq C$. Write $J = (J_e,J_m)$ for the sake of brevity. Let $\beta \in C^\infty_c$ with $\beta \equiv 1$ on $\{ \| \xi \| \leq C \}$ and decompose
\begin{equation*}
J
= \beta(D) J + (1-\beta)(D) J
=: J_{low} + J_{high}.
\end{equation*}
By Proposition \ref{prop:LocalLAP}, we find uniform bounds for $0 < \delta < 1/2$
\begin{equation*}
\| P(\omega \pm i \delta, D)^{-1} J_{low} \|_{L_0^q} \lesssim \| J_{low} \|_{L_0^{p_1}}
\end{equation*}
provided that $(\frac{1}{p_1},\frac{1}{q}) \in \mathcal{P}(d)$. The estimate
\begin{equation*}
\| P(\omega \pm i \delta,D)^{-1} J_{high} \|_{L_0^{q}} \lesssim \| J_{high} \|_{L_0^{p_2}}
\end{equation*}
follows for $0 \leq \frac{1}{p_2} - \frac{1}{q} \leq \frac{1}{d}$ and $(\frac{1}{p_2},\frac{1}{q}) \notin \{ (\frac{1}{d},0), (1,\frac{d-1}{d}) \}$ by properties of the Bessel kernel. The limiting operators $P^{loc}_{\pm}(\omega)$ were described in Proposition \ref{prop:LocalLAP}: We have
\begin{equation*}
P(\omega \pm i \delta, D)^{-1} (J_e,J_m) \to P^{loc}_{\pm}(\omega) (J_e,J_m) \text{ in } \mathcal{S}'(\mathbb{R}^d)^{m(d)}.
\end{equation*}
The high-frequency limit is easier to analyze because the multiplier remains regular by construction. Let $M^d \in \mathbb{C}^{m(d) \times m(d)}$ be as in Propositions \ref{prop:Explicit2d} and \ref{prop:Explicit3d}. For $d=2$, let
\begin{equation*}
A = \frac{1}{i(\omega - \| \xi \|_{\varepsilon'})}, \quad B= \frac{1}{i(\omega + \| \xi \|_{\varepsilon'})},
\end{equation*}
and we have
\begin{equation}
\label{eq:2dHighFrequencyLimit}
\begin{split}
P(\omega \pm i \delta,D)^{-1} J_{high} &\to \frac{1}{(2 \pi)^2} \int_{\mathbb{R}^2} e^{ix.\xi} M^2(A,B) (1-\beta(\xi)) \hat{J}(\xi) d\xi \text{ in } (\mathcal{S}'(\mathbb{R}^2))^3 \\
&=: P^{high}(\omega) J.
\end{split}
\end{equation}
For $d=3$, let
\begin{equation*}
A = \frac{1}{i(\omega - \sqrt{b} \| \xi \|)}, \; B = \frac{1}{i(\omega + \sqrt{b} \| \xi \|)}, \; C = \frac{1}{i(\omega - \| \xi \|_\varepsilon)}, \; D = \frac{1}{i(\omega + \| \xi \|_\varepsilon)},
\end{equation*}
and we have with convergence in $ (\mathcal{S}'(\mathbb{R}^3))^6$
\begin{equation}
\label{eq:3dHighFrequencyLimit}
\begin{split}
P(\omega \pm i \delta,D)^{-1} (1-\beta(D)) J &\to \frac{1}{(2 \pi)^3} \int_{\mathbb{R}^3} e^{ix.\xi} M^3(A,B,C,D) (1-\beta(\xi)) \hat{J}(\xi) d\xi \\
&=: P^{high}(\omega) J.
\end{split}
\end{equation}
Let $P_{\pm}(\omega) = P_{\pm}^{loc}(\omega) + P^{high}(\omega)$. By Proposition \ref{prop:LocalLAP}, and \eqref{eq:2dHighFrequencyLimit}, \eqref{eq:3dHighFrequencyLimit}, we have
\begin{equation*}
P(\omega \pm i \delta,D)^{-1} J \to P_{\pm}^{loc}(\omega) J + P^{high}(\omega) J \text{ in } (\mathcal{S}'(\mathbb{R}^d))^{m(d)}.
\end{equation*}
Let $(D,B)^{\pm}_\delta = P(\omega \pm i \delta, D)^{-1} J$ and $(D,B)^{\pm} = P_{\pm}(\omega) J$. At last, we show that
\begin{equation}
\label{eq:LimitingSolution}
P(\omega,D)(D,B)^{\pm} = J.
\end{equation}
For this purpose, we show that for $\delta \to 0$ we have
\begin{equation}
\label{eq:Limit}
P(\omega,D) (D,B)^{\pm}_\delta \to J \text{ in } \mathcal{S}'(\mathbb{R}^d)^{m(d)}.
\end{equation}
As $(D,B)^{\pm}_\delta \to (D,B)^{\pm}$ in $\mathcal{S}'(\mathbb{R}^d)^{m(d)}$, \eqref{eq:Limit} concludes the proof.\\
To show \eqref{eq:Limit}, we return to the diagonalizations (cf. Lemmas \ref{lem:Diagonalization2d}, \ref{lem:DiagonalizationPartiallyAnisotropic}):
\begin{equation*}
p(\tilde{\omega},\xi) = m(\xi) d(\tilde{\omega},\xi) m^{-1}(\xi) \text{ for } \tilde{\omega} \in \mathbb{C}.
\end{equation*}
We find for $\omega \in \mathbb{R}$:
\begin{equation*}
\begin{split}
p(\omega, \xi) p^{-1}(\omega \pm i \delta, \xi) &= m(\xi) d(\omega,\xi) d(\omega \pm i \delta, \xi)^{-1} m^{-1}(\xi) \\
&= m(\xi) ( 1_{m(d) \times m(d)} \pm \delta d(\omega \pm i \delta,\xi)^{-1} ) m^{-1}(\xi) \\
&= 1_{m(d) \times m(d)} \pm \delta p(\omega \pm i \delta, \xi)^{-1}.
\end{split}
\end{equation*}
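The matrix identity above reduces to a scalar identity for each diagonal entry $i(\omega - \lambda)$ of $d$; a quick check:

```python
# Entrywise check of d(omega) d(omega +/- i delta)^{-1}
#   = 1 +/- delta d(omega +/- i delta)^{-1},
# where each diagonal entry of d(omega, xi) is i(omega - lam(xi)).
def check(omega, delta, lam, sign):
    z = omega + sign * 1j * delta
    lhs = (1j * (omega - lam)) / (1j * (z - lam))
    rhs = 1 + sign * delta / (1j * (z - lam))
    return abs(lhs - rhs)

for sign in (+1, -1):
    for lam in (0.0, 0.7, -1.9, 2.4):
        assert check(1.3, 1e-3, lam, sign) < 1e-12
print("resolvent identity verified entrywise")
```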
Hence,
\begin{equation*}
\begin{split}
P(\omega,D) (D,B)^{\pm}_\delta &= J \pm \delta P(\omega \pm i \delta, D)^{-1} J, \\
\| P(\omega, D) (D,B)^{\pm}_\delta - J \|_{L^q_0(\mathbb{R}^d)} &\lesssim \delta \| J \|_{L_0^{p_1} \cap L_0^{p_2}} \to 0.
\end{split}
\end{equation*}
In particular, \eqref{eq:Limit} holds true in $\mathcal{S}'(\mathbb{R}^d)^{m(d)}$.
\medskip
Next, we suppose additionally that $J \in (W^{s-1,q}(\mathbb{R}^d))^{m(d)}$ for $s \geq 1$. By Young's inequality, we have
\begin{equation*}
\| P(\omega \pm i \delta,D)^{-1} \beta(D) J \|_{(W^{s,q}(\mathbb{R}^d))^{m(d)}} \lesssim \| P(\omega \pm i \delta, D)^{-1} \beta(D) J \|_{L^{q}_0(\mathbb{R}^d)}.
\end{equation*}
Hence, the low frequencies can be estimated as before. For the high frequencies, we recall that the multipliers $M^2$ and $M^3$ yield smoothing of one derivative; by Theorem \ref{thm:MultiplierTheorem}, we find
\begin{equation*}
\begin{split}
&\qquad \| P(\omega \pm i \delta, D)^{-1} (1-\beta(D)) J \|_{(W^{s,q}(\mathbb{R}^d))^{m(d)}} \\
&\lesssim \| \frac{(1-\Delta)^{s/2}}{(1-\Delta)^{1/2}} (1-\beta(D)) J \|_{(L^q(\mathbb{R}^d))^{m(d)}} \\
&\lesssim \| (1-\Delta)^{(s-1)/2} (1-\beta(D)) J \|_{(L^q(\mathbb{R}^d))^{m(d)}} \\
&= \| (1-\beta(D)) J \|_{(W^{s-1,q}(\mathbb{R}^d))^{m(d)}}.
\end{split}
\end{equation*}
The proof of Theorem \ref{thm:LocalLAP} is complete.
\end{proof}
\section{Localization of Eigenvalues}
\label{section:Localization}
At last, we use the $\omega$-dependent resolvent estimates to localize eigenvalues of operators $P(\omega,D) + V$ acting in $L^q$. For this purpose, we consider, for $\ell > 0$ and $(1/p,1/q) \in \tilde{\mathcal{R}}_0^{\frac{1}{2}}$, the region where uniform resolvent estimates are possible:
\begin{equation}
\label{eq:UniformEstimate}
\begin{split}
\mathcal{Z}_{p,q}(\ell) &= \{ \omega \in \mathbb{C} \backslash \mathbb{R} \; : \; \kappa_{p,q}(\omega) \leq \ell \} \\
&= \{ \omega \in \mathbb{C} \backslash \mathbb{R} \; : \; |\omega|^{-\alpha_{p,q}} |\omega|^{\gamma_{p,q}} | \Im \omega |^{-\gamma_{p,q}} \leq \ell \}, \quad \alpha_{p,q} = 1 - d \big( \frac{1}{p} - \frac{1}{q} \big).
\end{split}
\end{equation}
To describe these regions, we begin by observing their symmetry with respect to both the real and the imaginary axis. For $\alpha_{p,q} = 0$ and $\ell < 1$, we find $\mathcal{Z}_{p,q}(\ell) = \emptyset$. For $\ell \geq 1$, $\mathcal{Z}_{p,q}(\ell)$ describes a cone around the imaginary axis whose aperture increases with $\ell$. For $\alpha_{p,q} > 0$ the boundaries become slightly curved.
Pictorial representations for $\Re \omega > 0$ were provided in \cite[Figures~9~(a)-(c)]{KwonLee2020}. The region in the left half plane is obtained by reflection along the imaginary axis. We shall see that eigenvalues of $P(\omega,D) + V$ must lie in $\mathbb{C} \backslash \mathcal{Z}_{p,q}(\ell)$. Previously, analogous arguments were used in \cite{Frank2018} for non-self-adjoint Schr\"odinger operators to show that, in a range of $(p,q)$, a sequence of eigenvalues $\lambda_j$ with $\Re \lambda_j \to \infty$ has to satisfy $\Im \lambda_j \to 0$ as a consequence of the shape of $\mathcal{Z}_{p,q}(\ell)$. This is not the case presently: the shape of $\mathcal{Z}_{p,q}(\ell)$ only yields a bound on the asymptotic growth of $|\Im \lambda_j|$ as $|\Re \lambda_j| \to \infty$. This also raises the question of counterexamples for which the behavior $\Re \lambda_j \to \infty$ and $\Im \lambda_j \to 0$ fails. We also refer to Cuenin \cite{Cuenin2017} for resolvent estimates for the fractional Laplacian in this context.
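The membership condition defining $\mathcal{Z}_{p,q}(\ell)$ is elementary to evaluate numerically; the following minimal sketch (function names ours) illustrates the symmetry and the dichotomy for $\alpha_{p,q} = 0$ described above.

```python
def kappa(omega, alpha, gamma):
    # kappa_{p,q}(omega) = |omega|^{-alpha} * |omega|^{gamma} * |Im omega|^{-gamma};
    # omega must be non-real.
    return abs(omega) ** (-alpha) * (abs(omega) / abs(omega.imag)) ** gamma

def in_Z(omega, alpha, gamma, ell):
    # Membership in Z_{p,q}(ell) = {omega non-real : kappa_{p,q}(omega) <= ell}.
    return kappa(omega, alpha, gamma) <= ell

# For alpha = 0: since |omega| >= |Im omega|, we always have kappa >= 1,
# so ell < 1 yields the empty set, while ell >= 1 yields a cone around
# the imaginary axis. For alpha > 0 the boundary curves.
```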
Let $C$ be the constant such that
\begin{equation}
\label{eq:ConstantResolventEstimate}
\| P(\omega,D)^{-1} \|_{L_0^p(\mathbb{R}^d) \to L_0^q(\mathbb{R}^d)} \leq C \kappa_{p,q}(\omega).
\end{equation}
\begin{corollary}
Let $d \in \{2,3\}$, $\ell > 0$, and $1<p,q<\infty$ such that $(1/p,1/q) \in \tilde{\mathcal{R}}_0^{1/2}$. Suppose that there is $t \in (0,1)$ such that
\begin{equation*}
\| V \|_{\frac{pq}{q-p}} \leq t (C \ell)^{-1}.
\end{equation*}
If $E \in \mathbb{C} \backslash \mathbb{R}$ is an eigenvalue of $P+V$ acting in $L^q_0$, then $E$ must lie in $\mathbb{C} \backslash \mathcal{Z}_{p,q}(\ell)$.
\end{corollary}
\begin{proof}
The short argument is standard by now (cf. \cite{KwonLee2020,KwonLeeSeo2021}), but we include it for the sake of completeness. Let $u \in L_0^q(\mathbb{R}^d)$ be an eigenfunction of $P+V$ with eigenvalue $E \in \mathbb{C} \backslash \mathbb{R}$ and suppose that $E \in \mathcal{Z}_{p,q}(\ell)$. By H\"older's inequality, we find $-(P-E)u = (V-(P-E+V))u = Vu \in L^p$. By definition of $\mathcal{Z}_{p,q}(\ell)$, we find
\begin{equation*}
\| (P-E)^{-1} \|_{p \to q} \leq C \kappa_{p,q}(E) \leq C \ell.
\end{equation*}
By the triangle and H\"older's inequality, we find
\begin{equation*}
\| u \|_q = \|(P-E)^{-1}(P-E) u \|_q \leq C \ell ( \| (P-E+V) u \|_p + \| V u \|_p ) \leq C \ell \| V \|_{\frac{pq}{q-p}} \| u \|_q \leq t \| u \|_q,
\end{equation*}
which implies $u=0$ as $t<1$. Hence, $E \notin \mathcal{Z}_{p,q}(\ell)$.
\end{proof}
\section{Appendix}
\begin{lemma}
\label{lem:ComputationDeterminant}
With the notations from Section \ref{subsection:3d}, let $m(\xi)$ be as in \eqref{eq:AuxiliaryMatrix} and $\alpha(\xi)$ as in \eqref{eq:RenormalizationQuantities}. Then, we have
\begin{equation*}
|\det m(\xi)| \sim \alpha^4(\xi).
\end{equation*}
\end{lemma}
\begin{proof}
We compute the determinant by taking linear combinations of the third and fourth columns and of the fifth and sixth columns, and by arranging the columns as block matrices:
\begin{equation*}
\det m(\xi) =
\begin{vmatrix}
0 & \tilde{\xi}_1/a & 0 & 0 & \tilde{\xi}_2^2 + \tilde{\xi}_3^2 & -(\tilde{\xi}_2^2 + \tilde{\xi}_3^2) \\
0 & \tilde{\xi}_2/b & - \xi_3'/\sqrt{b} & \xi_3'/\sqrt{b} & - \tilde{\xi}_1 \tilde{\xi}_2 & \tilde{\xi}_1 \tilde{\xi}_2 \\
0 & \tilde{\xi}_3/b & \xi_2'/\sqrt{b} & - \xi_2'/\sqrt{b} & - \tilde{\xi}_1 \tilde{\xi}_3 & \tilde{\xi}_1 \tilde{\xi}_3 \\
\xi_1' & 0 & -((\xi_2')^2 + (\xi_3')^2) & - ((\xi_2')^2 + (\xi_3')^2) & 0 & 0 \\
\xi_2' & 0 & \xi_1' \xi_2' & \xi_1' \xi_2' & - \tilde{\xi}_3 & - \tilde{\xi}_3 \\
\xi_3' & 0 & \xi_1' \xi_3' & \xi_1' \xi_3' & \tilde{\xi}_2 & \tilde{\xi}_2
\end{vmatrix}
\end{equation*}
\begin{equation*}
\begin{split}
&\sim
\begin{vmatrix}
0 & \tilde{\xi}_1/a & 0 & 0 & \tilde{\xi}_2^2 + \tilde{\xi}_3^2 & 0 \\
0 & \tilde{\xi}_2/b & 0 & - \xi_3'/\sqrt{b} & - \tilde{\xi}_1 \tilde{\xi}_2 & 0 \\
0 & \tilde{\xi}_3/b & 0 & - \xi_2'/\sqrt{b} & - \tilde{\xi}_1 \tilde{\xi}_3 & 0 \\
\xi_1' & 0 & (\xi_2')^2 + (\xi_3')^2 & 0 & 0 & 0 \\
\xi_2' & 0 & -\xi_1' \xi_2' & 0 & 0 & - \tilde{\xi}_3 \\
\xi_3' & 0 & -\xi_1' \xi_3' & 0 & 0 & \tilde{\xi}_2
\end{vmatrix}
\\
&\sim
\begin{vmatrix}
\tilde{\xi}_1/a & 0 & \tilde{\xi}_2^2+ \tilde{\xi}_3^2 & 0 & 0 & 0 \\
\tilde{\xi}_2/b & - \xi_3'/\sqrt{b} & - \tilde{\xi}_1 \tilde{\xi}_2 & 0 & 0 & 0 \\
\tilde{\xi}_3/b & \xi_2'/\sqrt{b} & - \tilde{\xi}_1 \tilde{\xi}_3 & 0 & 0 & 0 \\
0 & 0 & 0 & \xi_1' & (\xi_2')^2 + (\xi_3')^2 & 0 \\
0 & 0 & 0 & \xi_2' & - \xi_1' \xi_2' & - \tilde{\xi}_3 \\
0 & 0 & 0 & \xi_3' & - \xi_1' \xi_3' & \tilde{\xi}_2
\end{vmatrix}
=: A_2 \cdot A_1.
\end{split}
\end{equation*}
Noting that $(\xi_1')^2 + (\xi_2')^2 + (\xi_3')^2 = 1$, we find
\begin{equation*}
\begin{split}
A_1 &\sim
\begin{vmatrix}
(\xi_2')^2 + (\xi_3')^2 & - \xi_1' \xi_2' & - \xi_1' \xi_3' \\
\xi_1' & \xi_2' & \xi_3' \\
0 & - \tilde{\xi}_3 & \tilde{\xi}_2
\end{vmatrix}
=
\begin{vmatrix}
1-(\xi_1')^2 & - \xi_1' \xi_2' & - \xi_1' \xi_3' \\
\xi_1' & \xi_2' & \xi_3' \\
0 & - \tilde{\xi}_3 & \tilde{\xi}_2
\end{vmatrix}
\\
&=
\begin{vmatrix}
1 & 0 & 0 \\
\xi_1' & \xi_2' & \xi_3' \\
0 & -\tilde{\xi}_3 & \tilde{\xi}_2
\end{vmatrix}
-\xi_1'
\begin{vmatrix}
\xi_1' & \xi_2' & \xi_3' \\
\xi_1' & \xi_2' & \xi_3' \\
0 & - \tilde{\xi}_3 & \tilde{\xi}_2
\end{vmatrix}
= \xi_2' \tilde{\xi}_2 + \xi_3' \tilde{\xi}_3.
\end{split}
\end{equation*}
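The computation of $A_1$ can be cross-checked numerically: by multilinearity in the first row, the identity $A_1 = \xi_2' \tilde{\xi}_2 + \xi_3' \tilde{\xi}_3$ holds for any unit vector $(\xi_1',\xi_2',\xi_3')$. A minimal sketch (helper names ours):

```python
import math
import random

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)
for _ in range(100):
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    x1, x2, x3 = (c / n for c in v)            # random unit vector (xi'_1, xi'_2, xi'_3)
    t2, t3 = random.random(), random.random()  # stand-ins for tilde xi_2, tilde xi_3
    A1 = det3([[x2 * x2 + x3 * x3, -x1 * x2, -x1 * x3],
               [x1, x2, x3],
               [0.0, -t3, t2]])
    assert abs(A1 - (x2 * t2 + x3 * t3)) < 1e-12
```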
Next, by a similar argument,
\begin{equation*}
\begin{split}
A_2 &\sim
\begin{vmatrix}
\tilde{\xi}_1/a & \tilde{\xi}_2/b & \tilde{\xi}_3/b \\
0 & - \xi_3'/\sqrt{b} & \xi_2'/\sqrt{b} \\
\tilde{\xi}_2^2 + \tilde{\xi}_3^2 & - \tilde{\xi}_1 \tilde{\xi}_2 & - \tilde{\xi}_1 \tilde{\xi}_3
\end{vmatrix}
= \frac{1}{a}
\begin{vmatrix}
\tilde{\xi}_1/a & \tilde{\xi}_2/b & \tilde{\xi}_3/b \\
0 & - \xi_3'/\sqrt{b} & \xi_2'/\sqrt{b} \\
a(\tilde{\xi}_2^2 + \tilde{\xi}_3^2) & - a \tilde{\xi}_1 \tilde{\xi}_2 & - a \tilde{\xi}_1 \tilde{\xi}_3
\end{vmatrix}
\\
&= \frac{1}{a}
\begin{vmatrix}
\tilde{\xi}_1/a & \tilde{\xi}_2/b & \tilde{\xi}_3/b \\
0 & - \xi_3'/\sqrt{b} & \xi_2'/\sqrt{b} \\
b \tilde{\xi}_1^2 + a(\tilde{\xi}_2^2 + \tilde{\xi}_3^2) - b \tilde{\xi}_1^2 & - a \tilde{\xi}_1 \tilde{\xi}_2 & - a \tilde{\xi}_1 \tilde{\xi}_3
\end{vmatrix}.
\end{split}
\end{equation*}
We use multilinearity to write
\begin{equation*}
\begin{split}
A_2 &\sim \frac{1}{a}
\left(
\begin{vmatrix}
\tilde{\xi}_1/a & \tilde{\xi}_2/b & \tilde{\xi}_3/ b \\
0 & - \xi_3'/\sqrt{b} & \xi'_2/\sqrt{b} \\
1 & 0 & 0
\end{vmatrix}
-
\begin{vmatrix}
\tilde{\xi}_1/a & \tilde{\xi}_2/b & \tilde{\xi}_3/b \\
0 & - \xi_3'/\sqrt{b} & \xi_2'/\sqrt{b} \\
-b \tilde{\xi}_1^2 & - a \tilde{\xi}_1 \tilde{\xi}_2 & - a \tilde{\xi}_1 \tilde{\xi}_3
\end{vmatrix}
\right)
\\
&\sim
\begin{vmatrix}
1 & 0 & 0 \\
0 & -\xi_3'/\sqrt{b} & \xi_2'/\sqrt{b} \\
\tilde{\xi}_1/a & \tilde{\xi}_2/b & \tilde{\xi}_3/b
\end{vmatrix}
\sim (\xi_3' \tilde{\xi}_3 + \xi_2' \tilde{\xi}_2).
\end{split}
\end{equation*}
\end{proof}
In the following we give explicit formulae for the resolvents and for the limiting operators in two dimensions:
\begin{proposition}
\label{prop:Explicit2d}
Let $d=2$ and
\begin{equation*}
M^2(A,B) =
\begin{pmatrix}
\frac{A+B}{2 \mu} ((\xi_2')^2 \varepsilon_{11} - (\xi_1' \xi_2') \varepsilon_{12}) & \frac{A+B}{2 \mu}((\xi_2')^2 \varepsilon_{21} - \xi_1' \xi_2' \varepsilon_{22}) & \frac{\xi_2'}{2 \mu}(A-B) \\
\frac{A+B}{2 \mu}((\xi_1')^2 \varepsilon_{21} - \xi_1' \xi_2' \varepsilon_{11}) & \frac{A+B}{2 \mu}((\xi_1')^2 \varepsilon_{22} - \varepsilon_{12} (\xi_1') (\xi_2')) & \frac{\xi_1'}{2 \mu}(B-A) \\
\frac{A-B}{2}(\xi_2' \varepsilon_{11} - \xi_1' \varepsilon_{21}) & \frac{B-A}{2} (\xi_1' \varepsilon_{22} - \xi_2' \varepsilon_{21}) & \frac{A+B}{2}
\end{pmatrix},
\end{equation*}
furthermore,
\begin{equation*}
M^2_{c} = \frac{1}{i \omega \mu}
\begin{pmatrix}
\varepsilon_{22} (\xi_1')^2 - \varepsilon_{12} \xi_1' \xi_2' & \varepsilon_{22} \xi_1' \xi_2' - \varepsilon_{12} (\xi_2')^2 & 0 \\
\varepsilon_{11} \xi_1' \xi_2' - \varepsilon_{12} (\xi_1')^2 & \varepsilon_{11} (\xi_2')^2 - \varepsilon_{12} \xi_1' \xi_2' & 0 \\
0 & 0 & 0
\end{pmatrix}
.
\end{equation*}
Then, we have for $\omega \in \mathbb{C} \backslash \mathbb{R}$ and almost all $\xi \in \mathbb{R}^2$:
\begin{equation*}
(P(\omega,D)^{-1} u)^{\wedge}(\xi) = (M^2(A,B) + M^2_c) \hat{u}(\xi)
\end{equation*}
with
\begin{equation*}
A= \frac{1}{i(\omega - \| \xi \|_{\varepsilon'})}, \quad B = \frac{1}{i(\omega + \| \xi \|_{\varepsilon'})}.
\end{equation*}
\medskip
For $\omega > 0$, $\beta \in C^\infty_c(\mathbb{R}^2)$, and $u \in \mathcal{S}(\mathbb{R}^2)^3$, we find
\begin{equation*}
P(\omega \pm i \delta,D)^{-1} \beta(D) u \to P^{loc}_{\pm}(\omega) \beta(D) u \text{ in } (\mathcal{S}'(\mathbb{R}^2))^3
\end{equation*}
with
\begin{equation*}
P_{\pm}^{loc}(\omega) \beta(D) u (x) = \frac{1}{(2 \pi)^2} \int_{\mathbb{R}^2} e^{ix.\xi} (M^2(A,B)+M^2_c)
\beta(\xi) \hat{u}(\xi) \, d\xi,
\end{equation*}
where
\begin{equation*}
A= \frac{1}{i} \{ v.p. \frac{1}{\omega - \| \xi \|_{\varepsilon'}} \mp i \pi \delta(\omega - \| \xi \|_{\varepsilon'}) \}, \quad B = \frac{1}{i(\omega + \| \xi \|_{\varepsilon'})}.
\end{equation*}
\end{proposition}
\begin{proof}
The first claim follows from computing $p^{-1}(\omega,\xi)$ (cf. Lemma \ref{lem:Diagonalization2d}). We decompose
\begin{equation*}
\begin{split}
m^{-1}(\xi) &= m_1(\xi) + m_2(\xi) \\
&=
\begin{pmatrix}
0 & 0 & 0 \\
\frac{\xi_1' \varepsilon_{21} - \xi_2' \varepsilon_{11}}{2} & \frac{\varepsilon_{22} \xi_1' - \varepsilon_{21} \xi_2'}{2} & - \frac{1}{2} \\
\frac{\xi_2' \varepsilon_{11} - \xi_1' \varepsilon_{12}}{2} & \frac{\xi_2' \varepsilon_{12} - \xi_1' \varepsilon_{22}}{2} & - \frac{1}{2}
\end{pmatrix}
+
\begin{pmatrix}
\mu^{-1} \xi_1' & \mu^{-1} \xi_2' & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}
\end{split}
\end{equation*}
based on the observation that $m_2(\xi) v(\xi) = 0$ for $\xi_1 v_1(\xi) + \xi_2 v_2(\xi) = 0$. We compute for $A$ and $B$ as in the first claim:
\begin{equation*}
M^2(A,B) = m(\xi) d(\omega,\xi)^{-1} m_1(\xi).
\end{equation*}
The computation is simplified by noting that:
\begin{equation*}
\begin{pmatrix}
0 & 0 & 0 \\
0 & A & 0 \\
0 & 0 & B
\end{pmatrix}
\begin{pmatrix}
0 & 0 & 0 \\
\frac{\xi_1' \varepsilon_{21} - \xi_2' \varepsilon_{11}}{2} & \frac{\varepsilon_{22} \xi_1' - \varepsilon_{21} \xi_2'}{2} & - \frac{1}{2} \\
\frac{\xi_2' \varepsilon_{11} - \varepsilon_{12} \xi_1'}{2} & \frac{\xi_2' \varepsilon_{12} - \xi_1' \varepsilon_{22}}{2} & - \frac{1}{2}
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 \\
\frac{ A( \xi_1' \varepsilon_{21} - \xi_2' \varepsilon_{11})}{2} & \frac{A (\varepsilon_{22} \xi_1' - \varepsilon_{21} \xi_2')}{2} & - \frac{A}{2} \\
\frac{B(\xi_2' \varepsilon_{11} - \varepsilon_{12} \xi_1')}{2} & \frac{B(\xi_2' \varepsilon_{12} - \xi_1' \varepsilon_{22})}{2} & - \frac{B}{2}
\end{pmatrix}
.
\end{equation*}
We find for $m(\xi) d(\omega,\xi)^{-1} m_1(\xi)$:
\begin{equation*}
\begin{pmatrix}
\frac{A+B}{2 \mu} ((\xi_2')^2 \varepsilon_{11} - (\xi_1' \xi_2') \varepsilon_{12}) & \frac{A+B}{2 \mu}((\xi_2')^2 \varepsilon_{21} - \xi_1' \xi_2' \varepsilon_{22}) & \frac{\xi_2'}{2 \mu}(A-B) \\
\frac{A+B}{2 \mu}((\xi_1')^2 \varepsilon_{21} - \xi_1' \xi_2' \varepsilon_{11}) & \frac{A+B}{2 \mu}((\xi_1')^2 \varepsilon_{22} - \varepsilon_{12} (\xi_1') (\xi_2')) & \frac{\xi_1'}{2 \mu}(B-A) \\
\frac{A-B}{2}(\xi_2' \varepsilon_{11} - \xi_1' \varepsilon_{21}) & \frac{B-A}{2} (\xi_1' \varepsilon_{22} - \xi_2' \varepsilon_{21}) & \frac{A+B}{2}
\end{pmatrix}
\end{equation*}
In $M^2_c = m(\xi) d(\omega,\xi)^{-1} m_2(\xi)$ we have separated the contribution of non-trivial charges. The second claim follows from the same computation together with the Sokhotski--Plemelj formula.
\end{proof}
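The Sokhotski--Plemelj limit invoked above can be illustrated numerically: for a Gaussian test function, $\int_{\mathbb{R}} e^{-x^2} (x - i\delta)^{-1}\, dx \to i\pi$ as $\delta \to 0^+$, since the principal value part vanishes by symmetry. A rough quadrature sketch (step size, cutoff, and tolerances chosen ad hoc):

```python
import math

def smoothed_integral(delta, h=1e-4, L=8.0):
    # Trapezoidal approximation of the regularized integral
    #   int_{-L}^{L} exp(-x^2) / (x - i*delta) dx.
    n = int(round(2 * L / h))
    total = 0j
    for k in range(n + 1):
        x = -L + k * h
        w = h if 0 < k < n else h / 2  # trapezoidal weights
        total += w * math.exp(-x * x) / complex(x, -delta)
    return total

val = smoothed_integral(1e-2)
# Real part ~ 0 (odd integrand), imaginary part ~ pi for small delta.
```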
For $d=3$, we define $M^3(A,B,C,D) \in \mathbb{C}^{6 \times 6}$:
\begin{align*}
M^3_{11} &= \frac{a(C+D)(\tilde{\xi}_2^2 + \tilde{\xi}_3^2)}{2}, \quad M^3_{12} = -\frac{b(C+D)\tilde{\xi}_1 \tilde{\xi}_2}{2}, \quad M^3_{13} = - \frac{b(C+D)\tilde{\xi}_1 \tilde{\xi}_3 }{2}, \\
M^3_{14} &= 0, \quad M^3_{15} = \frac{(D-C)\tilde{\xi}_3}{2}, \quad M^3_{16} = \frac{(C-D)\tilde{\xi}_2}{2}.
\end{align*}
Furthermore,
\begin{align*}
M^3_{21} &= - \frac{a(C+D)\tilde{\xi}_1 \tilde{\xi}_2}{2} , \quad M^3_{22} = \frac{(A+B)\xi_3^2}{2(\xi_2^2 + \xi_3^2)} + \frac{b(C + D)\tilde{\xi}_1^2 \xi_2^2}{2(\xi_2^2+ \xi_3^2)} , \\
M^3_{23} &= - \frac{(A+B)\xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} + \frac{b(C+D)\tilde{\xi}^2_1 \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)}, \quad M^3_{24} = \frac{\xi_3' (A+B)}{2 \sqrt{b}}, \\
M^3_{25} &= \frac{(B-A)\xi_1' \xi_2 \xi_3}{2 \sqrt{b} (\xi_2^2 + \xi_3^2)} + \frac{(C-D)\tilde{\xi}_1 \xi_2 \xi_3 }{2(\xi_2^2 + \xi_3^2)}, \\
M^3_{26} &= \frac{(B-A) \xi_1' \xi_3^2}{2 \sqrt{b} (\xi_2^2 + \xi_3^2)} + \frac{(D-C) \tilde{\xi}_1 \xi_2^2}{2(\xi_2^2+ \xi_3^2)}.
\end{align*}
Next,
\begin{align*}
M^3_{31} &= -\frac{a(C+D) \tilde{\xi}_1 \tilde{\xi}_3}{2}, \quad M^3_{32} = -\frac{(A+B) \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} + \frac{b(C+D)\tilde{\xi}_1^2 \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)}, \\
M^3_{33} &= \frac{(A+B)\xi_2^2 }{2(\xi_2^2 + \xi_3^2)} + \frac{b(C+D)\tilde{\xi}_1^2 \xi_3^2}{2(\xi_2^2 + \xi_3^2)}, \quad M^3_{34} = \frac{(B-A) \xi_2'}{2 \sqrt{b}} , \\
M^3_{35} &= \frac{(A-B)\xi_1' \xi_2^2}{2 \sqrt{b}(\xi_2^2 + \xi_3^2) } + \frac{(C-D)\tilde{\xi}_1 \xi_3^2}{2(\xi_2^2 + \xi_3^2)} , \quad
M^3_{36} = \frac{(A-B)\xi_1' \xi_2 \xi_3}{2 \sqrt{b} (\xi_2^2 + \xi_3^2)} + \frac{(D-C)\tilde{\xi}_1 \xi_2 \xi_3}{2(\xi_2^2+ \xi_3^2)}.
\end{align*}
\begin{align*}
M^3_{41} &= 0, \quad M^3_{42} = \frac{\sqrt{b} (A-B) \xi_3'}{2}, \\
M^3_{43} &= \frac{\sqrt{b} (B-A)\xi_2'}{2}, \quad M^3_{44} = \frac{(A+B) (\xi_2'^2 + \xi_3'^2)}{2} , \\
M^3_{45} &= - \frac{(A+B) \xi_1' \xi_2'}{2} , \quad M^3_{46} = - \frac{(A+B) \xi_1' \xi_3'}{2}.
\end{align*}
\begin{align*}
M^3_{51} &= \frac{a(D-C)}{2} \tilde{\xi}_3, \quad M^3_{52} = \frac{\sqrt{b} (B-A) \xi_1' \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} + \frac{b(C-D) \tilde{\xi}_1 \xi_2 \xi_3}{2 (\xi_2^2 + \xi_3^2)} , \\
M^3_{53} &= \frac{\sqrt{b} (A-B) \xi_1' \xi_2^2}{2 (\xi_2^2 + \xi_3^2)} + \frac{b(C-D) \tilde{\xi}_1 \xi_3^2}{2(\xi_2^2 + \xi_3^2)}, \quad M^3_{54} = -\frac{(A+B) \xi_1' \xi_2'}{2} , \\
M^3_{55} &= \frac{(A+B) \xi_1'^2 \xi_2^2}{2 (\xi_2^2+ \xi_3^2)} + \frac{(C+D) \xi_3^2}{2( \xi_2^2 + \xi_3^2)}, \quad M^3_{56} = \frac{(A+B) \xi_1'^2 \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} - \frac{(C+D) \xi_2 \xi_3}{2(\xi_2^2+ \xi_3^2)} .
\end{align*}
Lastly,
\begin{align*}
M^3_{61} &= \frac{a(C-D) \tilde{\xi}_2 }{2}, \quad M^3_{62} = \frac{\sqrt{b} ( B-A)\xi_1' \xi_3^2 }{2(\xi_2^2 + \xi_3^2)} + \frac{b(D-C) \tilde{\xi}_1 \xi_2^2}{2(\xi_2^2 + \xi_3^2)}, \\
M^3_{63} &= \frac{\sqrt{b}( A-B) \xi_1' \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} + \frac{b(D-C) \tilde{\xi}_1 \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} , \quad M^3_{64} = - \frac{(A+B) \xi_1' \xi_3' }{2}, \\
M^3_{65} &= \frac{(A+B) \xi_1'^2 \xi_2 \xi_3}{2 (\xi_2^2 + \xi_3^2)} - \frac{(C+D) \xi_2 \xi_3}{2(\xi_2^2 + \xi_3^2)} , \quad M^3_{66} = \frac{(A+B) \xi_1'^2 \xi_3^2}{2 ( \xi_2^2 + \xi_3^2)} + \frac{(C+D) \xi_2^2}{2( \xi_2^2 + \xi_3^2)}.
\end{align*}
We let moreover
\begin{equation*}
M^3_c = \frac{1}{i \omega}
\begin{pmatrix}
b \tilde{\xi}_1^2 & b \tilde{\xi}_1 \tilde{\xi}_2 & b \tilde{\xi}_1 \tilde{\xi}_3 & 0 & 0 & 0 \\
a \tilde{\xi}_1 \tilde{\xi}_2 & a \tilde{\xi}_2^2 & a \tilde{\xi}_2 \tilde{\xi}_3 & 0 & 0 & 0 \\
a \tilde{\xi}_1 \tilde{\xi}_3 & a \tilde{\xi}_2 \tilde{\xi}_3 & a \tilde{\xi}_3^2 & 0 & 0 & 0 \\
0 & 0 & 0 & \xi_1'^2 & \xi_1' \xi'_2 & \xi_1' \xi'_3 \\
0 & 0 & 0 & \xi'_1 \xi_2' & \xi_2'^2 & \xi_2' \xi'_3 \\
0 & 0 & 0 & \xi'_1 \xi_3' & \xi'_2 \xi_3' & \xi_3'^2
\end{pmatrix}
.
\end{equation*}
We have the following analog of Proposition \ref{prop:Explicit2d}:
\begin{proposition}
\label{prop:Explicit3d}
Let $d=3$. We find for $\omega \in \mathbb{C} \backslash \mathbb{R}$ and almost all $\xi \in \mathbb{R}^3$
\begin{equation*}
(P(\omega,D)^{-1} u)^{\wedge}(\xi) = (M^3(A,B,C,D) + M^3_c) \hat{u}(\xi)
\end{equation*}
with
\begin{equation*}
A= \frac{1}{i(\omega - \sqrt{b} \| \xi \|)}, \; B = \frac{1}{i(\omega + \sqrt{b} \| \xi \|)}, \; C= \frac{1}{i(\omega - \| \xi \|_\varepsilon)}, \; D= \frac{1}{i(\omega + \| \xi \|_\varepsilon)}.
\end{equation*}
\medskip
For $\omega > 0$, $\beta \in C^\infty_c(\mathbb{R}^3)$, and $u \in \mathcal{S}(\mathbb{R}^3)^6$, we find
\begin{equation*}
P(\omega \pm i \delta,D)^{-1} \beta(D) u \to P^{loc}_{\pm}(\omega) \beta(D) u \text{ in } (\mathcal{S}'(\mathbb{R}^3))^6
\end{equation*}
with
\begin{equation*}
P_{\pm}^{loc}(\omega) \beta(D) u (x) = \frac{1}{(2 \pi)^3} \int_{\mathbb{R}^3} e^{ix.\xi} ( M^3(A,B,C,D) + M^3_c)
\beta(\xi) \hat{u}(\xi) \, d\xi,
\end{equation*}
where
\begin{equation*}
\begin{split}
A &= \frac{1}{i} \{ v.p. \frac{1}{\omega - \sqrt{b} \| \xi \|} \mp i \pi \delta(\omega - \sqrt{b} \| \xi \|) \}, \; B = \frac{1}{i(\omega + \sqrt{b} \| \xi \|)}, \\
C &= \frac{1}{i} \{ v.p. \frac{1}{\omega - \| \xi \|_{\varepsilon} } \mp i \pi \delta(\omega - \| \xi \|_{\varepsilon}) \}, \quad \quad D = \frac{1}{i(\omega + \| \xi \|_{\varepsilon})}.
\end{split}
\end{equation*}
\end{proposition}
\section*{Acknowledgements}
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 258734477 – SFB 1173. I would like to thank Lucrezia Cossetti and Rainer Mandel for helpful discussions about the results and context. Moreover, I am much obliged to the anonymous referees whose insightful comments clearly improved the presentation.
\section{Introduction}
\defcitealias{2018berton1}{B18}
Among active galactic nuclei (AGN), the class of narrow-line Seyfert 1 (NLS1) galaxies has been in the spotlight for the last decade. In the optical spectra of NLS1s the emission lines originating from the high-density broad-line region (BLR) and the low-density narrow-line region (NLR) are of comparable width, unlike in broad-line AGN. By definition, the full-width at half maximum of the broad H$\beta$, FWHM(H$\beta$) $<$ 2000~km s$^{-1}$ \citep{1985osterbrock1}, whereas the emission lines arising from the NLR usually have widths of a few hundred km s$^{-1}$. However, we remark that this limit is used mostly for historical reasons, since no real threshold is present in the FWHM(H$\beta$) distribution at least up to 4000 km s$^{-1}$\ \citep[e.g., see][]{2018marziani1}. To ensure that they are Type 1 AGN and that we have a direct view of the BLR, an additional criterion for an NLS1 classification requires the flux ratio of [O~III] and total (broad $+$ narrow) H$\beta$ to be $<$ 3. This threshold has been found to be determinative in separating Type 1 and 2 AGN \citep[e.g., ][]{1981shuder1}. NLS1s also often exhibit strong Fe~II multiplets \citep{1989goodrich1}, confirming the unobstructed view of the central engine.
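The two defining criteria above can be condensed into a simple test; the function below is purely illustrative (its name and argument conventions are ours).

```python
def is_nls1(fwhm_hbeta_kms, flux_oiii, flux_hbeta_total):
    """Illustrative NLS1 test: FWHM(Hbeta) < 2000 km/s and
    F([O III]) / F(Hbeta, broad + narrow) < 3."""
    return fwhm_hbeta_kms < 2000.0 and flux_oiii / flux_hbeta_total < 3.0
```

Both conditions must hold: a broad H$\beta$ or a strong [O~III] line rules out the classification.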
In the quasar main sequence \citep[MS, Fig. 2 in][]{2018marziani1}, originally derived by means of principal component analysis \citep[e.g.,][]{1992boroson1}, and believed to be mainly driven by the Eddington ratio and orientation, NLS1s are population A sources (FWHM(H$\beta$) $<$ 4000~km s$^{-1}$). The horizontal branch in the MS is driven by the Eddington ratio, and R4570\footnote{R4570 is the flux ratio of Fe~II and H$\beta$.}, which also anticorrelates with the [O~III] line strength, is used as its proxy. The spread along the vertical branch (FWHM(H$\beta$)) is thought to be mainly dominated by orientation if R4570 is kept fixed (\citealp{2014shen1}, but see \citealp{2018panda1}). NLS1s show very diverse values of R4570, and since it correlates with the Eddington ratio, this implies that a considerable fraction of NLS1s are accreting close to or even above the Eddington limit \citep{1992boroson1}. Naturally, due to their classification, NLS1s do not show considerable spread in the FWHM(H$\beta$) values, but interestingly they, and other population A sources, do show different emission line profiles (Lorentzian) compared to the population B sources (FWHM(H$\beta$) $>$ 4000~km s$^{-1}$ and Gaussian) \citep{2000sulentic2, 2003marziani1, 2020berton1}. Lorentzian profiles are believed to be dominated by turbulent motion in the BLR \citep{2011kollatschny1,2012goad1}, rather than rotation, and thus orientation might not play a crucial role in most of the population A sources. If this is the case, the narrowness\footnote{If the BLR is virialised, the mass of the central object scales quadratically with the velocity dispersion of the gas clouds, i.e. with the FWHM of the permitted lines.
Narrow permitted lines, then, may originate because of an undermassive black hole.} of the broad emission lines in NLS1s would be caused by a relatively low-mass black hole, typically $<10^8$ M$_\odot$ \citep{2011peterson1}, compared to, for example, broad-line Seyfert 1 galaxies, that usually have M$_\textrm{BH}$ $>10^8$ M$_\odot$. This seems to be confirmed by reverberation mapping campaigns focusing on NLS1s \citep{2016wang1,2018du1,2019du1}. Interestingly, it has been found that the H$\beta$ lags with respect to changes in the continuum are shorter than expected in high Eddington ratio sources, suggesting that the Eddington ratio affects the distance of the BLR clouds from the continuum source \citep{2018du1, 2020dallabonta1}.
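The quadratic scaling described in the footnote can be illustrated with a toy virial estimate, $M_\textrm{BH} = f\, R_\textrm{BLR}\, \textrm{FWHM}^2 / G$; the virial factor $f = 1$ and the input values below are arbitrary placeholders rather than measurements.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
LIGHT_DAY = 2.59e13  # one light-day, m

def virial_mass_msun(fwhm_kms, r_blr_lightdays, f=1.0):
    # Toy virial estimate M = f * R * v^2 / G, in solar masses.
    v = fwhm_kms * 1e3
    return f * r_blr_lightdays * LIGHT_DAY * v ** 2 / (G * M_SUN)

m_narrow = virial_mass_msun(1500.0, 10.0)  # NLS1-like placeholder input
```

Halving the FWHM at fixed BLR radius quarters the inferred mass, which is the sense in which narrow permitted lines point at an undermassive black hole.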
Assuming that the black hole masses in NLS1s are $<10^8$ M$_\odot$, and since the mass can only increase with time, NLS1s may constitute an early stage of the AGN life cycle, and they will eventually grow into fully developed BLS1s \citep{2000mathur1, 2000sulentic1, 2017fraixburnet1}. However, an alternative hypothesis is that the narrowness of the permitted lines originates from the BLR geometry. If the BLR is flattened, when observed pole-on we only observe the component of the velocity vector directed towards us, and since the rotation happens on a plane misaligned with our line of sight, we do not observe considerable Doppler broadening, and the emission lines appear narrow. In this scenario, the black hole mass of NLS1s can be significantly higher than $10^8$ M$_\odot$, and NLS1s would be no different from BLS1s and other broad-line AGN \citep{2008decarli1}. However, the aforementioned reverberation mapping studies, and other observational properties of NLS1s, such as their host galaxy morphologies (e.g., \citealp{2001krongold1, 2006deo1, 2008anton1, 2016kotilainen1, 2018jarvela1, 2019berton1, 2020olguiniglesias1, 2021hamilton1}; but see \citealp{2017dammando1, 2018dammando1}), seem to indicate that the black hole mass is genuinely low when compared to BLS1s and other broad-line AGN.
The central engine of an AGN can launch a variety of outflows, ranging from powerful, highly collimated relativistic jets, through lower-power non-relativistic jets, to wide-angle outflows generated by nuclear winds. Traditionally, the most powerful relativistic jets have been associated with the most massive supermassive black holes residing in old elliptical galaxies \citep{2000laor1}, whereas low-power jets and outflows can be seen in a wider variety of AGN. Interestingly, some NLS1s ($\sim$7\%, \citealp{2006komossa1}) show prominent radio emission and several blazar-like properties, such as high brightness temperature, prominent variability, and a double-humped spectral energy distribution \citep{2008yuan1}. The discovery of gamma-ray emission from a handful of them ($\sim$20, \citealp{2018romano1, 2020jarvela1, 2021rakshit1}) proved that, just like blazars and radio galaxies, NLS1s can harbour powerful relativistic jets. As the NLS1s that do not host relativistic jets may be the progenitors of BLS1s, it has been suggested that NLS1s with relativistic jets are an early stage of the life cycle of flat-spectrum radio quasars (FSRQs) \citep{2015foschini1, 2017foschini1, 2020foschini1}. Furthermore, several authors hypothesise that, when seen at a larger angle, relativistic jetted NLS1s may appear as kinematically young radio galaxies, such as compact steep-spectrum sources (CSS, \citealp{2001oshlack1, 2006gallo1, 2006komossa1, 2014caccianiga1, 2016berton1, 2017berton1, 2017foschini1, 2017caccianiga1, 2021yao1}). At radio frequencies, when their relativistic jet axis is close to the line of sight, these NLS1s typically show a flat spectrum and a compact morphology on kpc-scale \citepalias[\citealp{2018berton1}, from now on ][]{2018berton1}.
When instead the relativistic jet has a larger inclination, these sources tend to become fainter, due to the decreasing impact of boosting effects, and more extended, and to show, at least in a few cases, radio lobes \citepalias[][Vietri et al. in prep.]{2018berton1}.
However, the vast majority of NLS1s are radio-quiet or radio-silent\footnote{Radio-quiet AGN by definition have a ratio $S_{\rm{radio}}/S_{\rm{optical}} < 10$, where $S$ are the flux densities at 5~GHz and in B-band, respectively \citep{1989kellermann1}. Radio-silent sources have no known detection in radio.}. The origin of the radio emission in these NLS1s is still debated. The presence of jets, even relativistic ones, cannot be ruled out. Because of the non-linear scaling between jet power and black hole mass \citep{2003heinz1}, and the still widely unexplored impact of the magnetic flux density \citep{2021chamani1}, jets in NLS1s can be weak and barely dominate the total radio emission produced by the galaxy. In some extreme cases, they may even be completely invisible at low radio frequencies due to absorption by a screen of ionised gas, making the AGN appear radio-quiet or -silent \citep{2018lahteenmaki1,2020berton2,2021jarvela1}. Alternative sources of radio emission on scales smaller than $\sim$0.1~kpc can be the accretion disk corona or weakly collimated disk winds. Wide-angle outflows due to nuclear winds \citep{2007proga1} are also common in these sources. At radio frequencies these outflows are characterised by non-collimated morphologies and steep spectral indices \citep{2012fauchergiguere1,2019panessa1} that can show steepening toward high frequencies \citep{2010jiang1}. At the host galaxy scale, instead, the radio emission from star formation activity typically dominates, in the form of synchrotron emission from supernova remnants and free-free emission from H~II regions \citep{2018lister1, 2019panessa1}. All of these components are often present at the same time, and distinguishing them is not an easy task.
\citetalias{2018berton1} found that, on average, radio-quiet objects often show an extended morphology with a spectral index steeper than what is found in NLS1s with known relativistic jets, but they could not draw many conclusions on the sample as a whole. When the radio morphology is extended, a detailed analysis of each individual source is the preferred way to understand exactly which mechanisms are at play.
The goal of this paper is to accurately analyse the sources in \citetalias{2018berton1} with an extended morphology, to understand the nature and the origins of their radio emission. A possible way to do this, and to tell apart the different sources of radio emission, is by studying spatially resolved spectral index maps. Such a technique can provide some indications on the nature of the extended emission in sources without jets, while in jetted NLS1s it can reveal the presence of interaction between the relativistic jets and the interstellar medium. Furthermore, we carried out a detailed search of the literature for each source, to obtain as much useful information as possible to paint a complete picture of all of our targets. The paper is organised as follows: in Sect.~\ref{sec:sample} we briefly overview the sample, in Sect.~\ref{sec:data-analysis} we describe the data reduction and the production of the radio and spectral index maps, and in Sect.~\ref{sec:radiofromsf} we overview star formation mechanisms that produce radio emission. We also introduce some diagnostic tools we can use to estimate its contribution in our sources, and discuss the possible issues with these tools. Then, in Sect.~\ref{sec:results} we summarise the star formation results for the whole sample, and discuss each source individually. Finally, in Sect.~\ref{sec:discussion} we discuss our results and their implications, and we conclude with a brief summary in Sect.~\ref{sec:summary}. Throughout the paper, we use the standard $\Lambda$CDM cosmology, with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, and $\Omega_{\Lambda}$ = 0.73 \citep{2011komatsu1}. For spectral indices we adopt the convention of $S_{\nu} \propto \nu ^{\alpha}$ at frequency $\nu$.
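With the convention $S_{\nu} \propto \nu^{\alpha}$, a two-point spectral index follows as $\alpha = \ln(S_1/S_2)/\ln(\nu_1/\nu_2)$; a minimal sketch (function name ours):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    # alpha such that S_nu is proportional to nu^alpha, from two flux density
    # measurements; units of the frequencies and of the flux densities must
    # match pairwise, since only their ratios enter.
    return math.log(s1 / s2) / math.log(nu1 / nu2)
```

A steep index (roughly $\alpha \lesssim -0.5$) is the typical signature of optically thin synchrotron emission, whereas flat indices point at compact, self-absorbed components.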
\section{Sample}
\label{sec:sample}
This paper is a continuation of \citetalias{2018berton1}, where the original sample selection is explained. Our sources were selected from their sample based on the radio morphology. \citetalias{2018berton1} presents Karl G. Jansky Very Large Array (JVLA) A-configuration observations of 74 NLS1s obtained in project 15A-283 (P.I. J. Richards). The observations are centred at 5.2~GHz, with a bandwidth of 2~GHz. From the original sample we selected for further analysis all sources whose radio morphology was classified either as extended (E) or intermediate (I) (see Table A.1. in \citetalias{2018berton1}). In extended sources the ratio between the peak flux density and the integrated flux density is $< 0.75$, and in intermediate sources the ratio is between 0.75 and 0.95. Sources with a ratio $> 0.95$ were classified as compact (C). We decided to leave out the compact sources in this study, since most of them show a flat radio spectrum and they lack the extended radio emission we are especially interested in. This selection criterion resulted in a sample of 20 extended and 26 intermediate sources. Two extended sources were left out due to bad data quality, so our final sample size is 44 sources. The compact sources will be the subject of a future study.
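The morphological classification by the ratio of peak to integrated flux density can be written as a small helper (function name ours; we assign the boundary value of 0.95 to the intermediate class, which the text leaves implicit):

```python
def radio_morphology(peak_flux, integrated_flux):
    # Classification used in B18: E (extended), I (intermediate), C (compact),
    # based on the ratio of peak to integrated flux density.
    r = peak_flux / integrated_flux
    if r < 0.75:
        return "E"
    elif r <= 0.95:
        return "I"
    return "C"
```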
Basic information on the sample, including name, coordinates, redshift, scale, and radio morphological type, is listed in Table~\ref{tab:basicdata}. The redshift, black hole mass, Eddington ratio, and radio luminosity distributions of our sample, taken from \citetalias{2018berton1} and \citet{2020berton2}, divided into the intermediate and extended source samples, are shown in Figs.~\ref{fig:zdist}-\ref{fig:lradiodist}.
\begin{figure}
\centering
\includegraphics[width=9cm]{z-dist.png}
\caption{Redshift distribution of our sample, divided based on the radio morphology classification. Sources with intermediate radio morphology are marked in blue with a dashed edge line, and sources with extended radio morphology in red with a dotted edge line. }
\label{fig:zdist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm]{bh-dist.png}
\caption{Logarithmic black hole mass distribution of our sample. Division into subsamples, and colours and edge styles as in Fig.~\ref{fig:zdist}.}
\label{fig:bhdist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm]{ledd-dist-v2.png}
\caption{Eddington ratio distribution of our sample. Division into subsamples, and colours and edge styles as in Fig.~\ref{fig:zdist}.}
\label{fig:ledddist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm]{lradio-dist.png}
\caption{Distribution of the 5.2~GHz integrated radio luminosities of our sources. Division into subsamples, and colours and edge styles as in Fig.~\ref{fig:zdist}.}
\label{fig:lradiodist}
\end{figure}
\section{Data analysis}
\label{sec:data-analysis}
\subsection{Radio data calibration}
We re-reduced the radio data, using the standard Extended VLA (EVLA) pipeline 5.0.0 for the calibration. Each data set had its own flux calibrator. After running the pipeline we split the individual measurement sets of each source from the main data sets, averaging the data over the 64 channels of each of the 16 spectral windows and over ten seconds of exposure time. For further data reduction and analysis after splitting we used CASA version 5.6.2-3 due to its enhanced data reduction capabilities, for example, in producing spectral index maps.
\subsection{Spectral index maps}
We constructed the spectral index maps and their respective error maps following the procedure described in \citet{2015wiegert1}, which we also summarise here. The procedure includes the actual cleaning of the data as well as several post-imaging corrections.
\subsubsection{Cleaning}
The clean algorithm in CASA (\texttt{tclean}) enables multi-term (multi-scale) multi-frequency synthesis, \texttt{mt-mfs} \citep{2011rau1}. The multi-term feature allows a simultaneous fitting of a spectral index over the whole bandwidth and as a function of the position within the image using a simple power law. The multi-scale algorithm enables an enhanced modelling of flux components at different scales by using both delta-functions and circular Gaussians in the deconvolution, instead of just delta-functions as in the other clean algorithms. We experimented with the multi-scale feature using some of the most extended sources in our sample, but concluded that using different scales does not significantly alter the resulting spectral index or error maps in comparison to `traditional clean' with scales = 0, and thus we decided to use only the multi-term option. We used a natural weighting scheme for the majority of our sources, with a few exceptions: for J0347+0105, J1209+3217, and J1317+6010 we used Briggs weighting with robust = 0.0, and for J1047+4725 we used uniform weighting. These four sources suffered from strong sidelobes, whose effects we suppressed by using an alternative weighting scheme.
The \texttt{mt-mfs} algorithm performs the fitting by modelling the spectrum of each flux component (pixel) by a Taylor series expansion about the reference frequency, $\nu_0$. A specific intensity at frequency $\nu$, $I_{\nu}$, can be fit with:
\begin{equation}
\label{eq:taylor}
I_{\nu} = I_{\nu_0} \left( \frac{\nu}{\nu_0} \right) ^{\alpha + \beta \log (\nu / \nu_0)}
\end{equation}
where $\alpha$ is the spectral index and $\beta$ is the curvature of the power law. Expanding Eq.~\ref{eq:taylor} in a Taylor series about $\nu_0$ results in a number of Taylor terms, or maps, equal to the number of terms in the Taylor polynomial used. The first map, TT0, corresponds to the specific intensities at $\nu_0$ and is thus equal to the normal radio map. The second map, TT1, is defined so that $\alpha = \textrm{TT1} / \textrm{TT0}$. The third term, TT2, describes the spectral curvature and is defined so that $\beta = \textrm{TT2} / \textrm{TT0} - \alpha(\alpha-1)/2$.
The number of Taylor terms is not limited, but in practice using more than two or three terms is usually not needed. Furthermore, the data quality does not usually allow it, since the more terms are used, the higher the required S/N of the data. Most of our sources are quite faint, and the extended parts, where some curvature could be seen, are usually regions of low S/N ($\sim$3-6), and thus not of adequate quality to fit the $\beta$ term. Therefore, we chose to use two Taylor terms to maximise the quality of the $\alpha$ maps. The end products of the cleaning procedure are a TT0 map, centred in the middle of the observing bandwidth at $\sim$5.2~GHz, a corresponding TT1 map, and the $\alpha$ and $\Delta\alpha$ maps. The $\Delta\alpha$ map is an error map describing the empirical error estimate based on the errors of the TT0 and TT1 residual images.
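The Taylor-term relations above can be verified with a small numerical sketch (the array values below are illustrative, not taken from our maps): for a pure power law, the recovered $\alpha$ equals the input index and the curvature $\beta$ vanishes.

```python
import numpy as np

def spectral_maps(tt0, tt1, tt2=None):
    """Recover alpha (and beta) maps from mt-mfs Taylor-term images,
    following alpha = TT1/TT0 and beta = TT2/TT0 - alpha*(alpha-1)/2."""
    alpha = tt1 / tt0
    beta = None
    if tt2 is not None:
        beta = tt2 / tt0 - alpha * (alpha - 1.0) / 2.0
    return alpha, beta

# Toy check: a pure power law I = I0*(nu/nu0)**a has Taylor terms
# TT0 = I0, TT1 = a*I0, TT2 = I0*a*(a-1)/2, so beta should be zero.
I0, a = 2.0, -0.7
tt0 = np.full((2, 2), I0)
tt1 = np.full((2, 2), a * I0)
tt2 = np.full((2, 2), I0 * a * (a - 1) / 2.0)
alpha, beta = spectral_maps(tt0, tt1, tt2)
```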
In the \texttt{mt-mfs} method the fit is performed on the flux components, resulting in a uniform resolution and S/N over the whole band. This is clearly superior to the traditional way of forming a spectral index map by splitting the observed bandwidth in two and then estimating the spectral index based on these two maps, which intrinsically have different resolutions and S/N. However, \texttt{mt-mfs} is not perfect either, and the $\alpha$ and $\Delta\alpha$ maps require some post-processing steps, which are described in the next sections.
\subsubsection{Wide-band primary beam correction and 5$\sigma$ cut-off}
\label{sec:wbpbcorr}
The primary beam varies with frequency and thus imposes its own spectral index onto the Taylor-coefficient images, TT0 and TT1, and the $\alpha$ map. This can be corrected with the CASA task \texttt{widebandpbcor}, which computes a set of primary beams at given frequencies, calculates the Taylor-coefficient images representing the primary beam spectrum, performs the primary beam correction of the Taylor-coefficient images, and finally computes the primary beam corrected $\alpha$ map using the corrected Taylor-coefficient images.
However, this correction cannot account for any variations during a specific observation, including, for example, the slightly changing shape of each telescope, and the rotation of the primary beam on the sky when tracking a source. The primary beam errors also increase with distance from the pointing centre, and it is necessary to understand the impact of these effects on the final $\alpha$ maps to determine their accuracy. \citet{2013bhatnagar1} showed that the effects are insignificant up to the half-power beam width (HPBW), after which they cause considerable errors in total intensity as well as spectral index maps. The HPBW of the JVLA at 5.2~GHz is $\sim$8~arcmin, giving a radius of $\sim$4~arcmin. All of our sources are clearly less extended than this -- the most extended ones being $<$ 6~arcsec -- and the total intensity, as well as the spectral index maps, should be accurate.
Unfortunately, we cannot perform any further tests on the correctness of the $\alpha$ maps since our data are limited; we do not have any observations with multiple pointings, or observations carried out in more than one observing session. What we can do, however, is compare the $\alpha$ maps to the spectral index values obtained in \citetalias{2018berton1}, where the traditional way of estimating the spectral indices was used. This comparison is made in Sect.~\ref{sec:spindcomparison}.
In addition to the wide-band primary beam correction, we masked pixels below 5$\sigma$, where $\sigma$ is the rms of the corresponding TT0 image of the source. We chose 5$\sigma$ instead of the more traditional 3$\sigma$ threshold since the $\alpha$ maps consistently show extreme or clearly erroneous values when the S/N is low, and thus these peripheral regions in general do not hold valuable information.
\subsubsection{$\Delta\alpha$ cut-off}
\label{sec:deltaalphacutoff}
The primary beam corrected $\alpha$ maps can still show considerable variations, especially near the edges. The $\Delta\alpha$ map correspondingly shows very high errors in these regions, implying that the data quality might not have been adequate to accurately estimate the spectral index. We removed the most extreme values by creating an additional mask based on the values of the $\Delta\alpha$ map, and then applying it to the $\alpha$ map. Following \citet{2015wiegert1}, we decided to cut off all the data with $\Delta\alpha >$ 1. We also experimented with lower thresholds, but the loss of data would have been too drastic, so we settled on $\Delta\alpha >$ 1, even if some less accurate values might remain.
\subsubsection{Smoothing of the $\alpha$ and $\Delta\alpha$ maps}
\label{sec:alphasmoothing}
The Taylor coefficient maps are convolved with the clean beam, but since the $\alpha$ and $\Delta\alpha$ maps are derived from them using mathematical operations, their final resolution is not the same as that of the Taylor coefficient maps. We thus smoothed the $\alpha$ and $\Delta\alpha$ maps using the parameters of the clean beam of each source. This considerably decreases the small-scale variance of the $\alpha$ maps and the errors of the $\Delta\alpha$ maps.
\subsubsection{Remaining issues and additional tests}
In summary, the post-imaging correction steps performed to achieve the final $\alpha$ and $\Delta\alpha$ maps were:
\begin{enumerate}
\item wide-band primary beam correction;
\item 5$\sigma$ cut-off;
\item $\Delta\alpha >$ 1 cut-off;
\item convolution with the clean beam.
\end{enumerate}
The final $\alpha$ and $\Delta\alpha$ maps, overlaid with the normal radio contours, are shown in Appendix~\ref{app:maps}, from Figure~\ref{fig:J0347} to \ref{fig:J2314}, in panels a) and b). Individual sources are discussed in Sect.~\ref{sec:results}. The rms of the maps and the peak and integrated flux densities are listed in Table~\ref{tab:measurements}. We obtained the peak flux density by fitting a 2D Gaussian to the data, and the integrated flux density by summing up all the emission within the 3$\sigma$ contour. The peak flux density error is the one given by CASA when fitting the 2D Gaussian; the errors of the integrated flux densities were estimated as the rms per beam $\times$ the square root of the area of the emission expressed in beams. In addition, for each source we calculated an average spectral index, weighted with the surface brightness, over the whole region that the $\alpha$ map covers, as well as in the core in a region with a radius of 2~px. For some interesting sources we also calculated a spectral index in a region of interest outside the core. These regions also had a radius of 2~px. The total, core, and region of interest spectral indices, as well as the coordinates of the regions of interest, are listed in Table~\ref{tab:spinds}.
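The integrated flux density error described above is a simple scaling of the map rms; as a brief illustration (the rms and area values below are hypothetical, not measurements from our maps):

```python
import math

def integrated_flux_error(rms_per_beam, area_beams):
    """Error of an integrated flux density: rms per beam times the
    square root of the emission area expressed in beams."""
    return rms_per_beam * math.sqrt(area_beams)

# Hypothetical example: rms of 0.02 mJy/beam, emission covering 9 beams.
err = integrated_flux_error(0.02, 9.0)  # -> 0.06 mJy
```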
Even after these steps, some issues remain with the maps. The peripheral regions of some $\alpha$ maps exhibit extreme values that are usually accompanied by higher-than-average errors in the $\Delta\alpha$ maps. Thus any seemingly drastic changes in the spectral index in the edge regions of the maps should be taken with a grain of salt. In the case of some of the faintest and smallest sources these effects seem to dominate the whole $\alpha$ map.
Even though the resolution over the whole band is uniform when using the \texttt{mt-mfs} method, it is in principle possible that a source might have some structures that are resolved out at the higher frequencies, thus artificially steepening the spectral index. In practice these structures would have to be on a scale of several kpc, and generally such extended emission is diffuse and faint, so we considered their contribution negligible. Furthermore, it is also worth noting that in NLS1s extended emission at scales much larger than a few kpc is very rarely observed \citep{2020chen1}.
Finally, the frequency-dependent changes in the uv-range might have an impact on the $\alpha$ maps, especially in very extended sources. We examined this effect by selecting a few of the most extended sources and producing comparison $\alpha$ maps where the uv-range was selected to span only the range where all spectral windows had data. Examples of an $\alpha$ and a $\Delta\alpha$ map produced with a common uv-range for J1302+1624 are shown in Figures~\ref{fig:J1302spindcommon} and \ref{fig:J1302spinderrcommon}. The $\alpha$ and $\Delta\alpha$ maps of the same source without limiting the uv-range are shown in Figures~\ref{fig:J1302spindnoncommon} and \ref{fig:J1302spinderrnoncommon}. It is clear that the differences between the $\alpha$ maps are marginal, and it thus seems safe to assume that the effect of the uv-range on the $\alpha$ maps is insignificant.
\subsection{Comparison to classical spectral indices}
\label{sec:spindcomparison}
Since our sources are a subsample of the NLS1 sample studied in \citetalias{2018berton1}, where their spectral indices were also estimated, we can compare our $\alpha$ maps and weighted average spectral indices with their results. They used the traditional way of estimating the spectral index by splitting the 2~GHz band into two 1~GHz wide windows, cleaning them using a common uv-range, and then measuring their emission properties separately. Effectively this gives an in-band two-point spectral index with central frequencies of 4.7 and 5.7~GHz. In most cases the spectral indices in \citetalias{2018berton1} agree with our results, but there are a few curious cases where the in-band spectral index is very close to zero, yet the sources look steep, with $\alpha$ closer to -1, in our new maps. The in-band spectral indices of these sources also do not correlate with the 1.4-5~GHz spectral index derived in \citetalias{2018berton1} (recalculated and shown also here in Table~\ref{tab:spinds}), whereas the majority of their sources show a correlation between these two spectral indices. This raised our suspicion, and we decided to re-estimate the traditional spectral indices of some of these rogue sources using a newer CASA version (5.6.2-3), since an older version (4.7.2) was used in \citetalias{2018berton1}. We performed the cleaning and the estimation of the spectral indices exactly as in \citetalias{2018berton1}, and the new results are in agreement with the $\alpha$ maps, i.e. these sources show steep indices also when derived the classical way. We thus believe that our $\alpha$ maps are reliable, and that the spectral index estimation of the few peculiar sources in \citetalias{2018berton1} was influenced by some undetermined issue.
\subsection{Tapered maps}
In addition to the normal maps, we produced tapered maps of our sources to enhance the sensitivity to extended structures. In this procedure, in addition to the selected weighting scheme, a Gaussian taper, which decreases the weights of the outermost baselines in the uv-plane, is applied to the data. We produced the tapered maps using two different tapers, 60k$\lambda$ and 90k$\lambda$, to examine the extended structures in a bit more detail (90k$\lambda$), and to try to bring out even the faintest extended emission (60k$\lambda$). The tapered maps are shown in Appendix~\ref{app:maps}, from Figure~\ref{fig:J0347} to \ref{fig:J2314}, in panels c) and d). The rms and the integrated flux densities of the tapered maps are listed in Table~\ref{tab:measurements}. The errors of the integrated flux densities were estimated as the rms per beam $\times$ the square root of the area of the emission expressed in beams.
\section{Radio emission from star formation}
\label{sec:radiofromsf}
In addition to the processes connected to the nuclear activity, star formation can also significantly contribute to the radio emission of galaxies, via free-free emission and synchrotron emission from supernova remnants \citep{1992condon1, 2019panessa1}. In some cases, the radio emission produced this way is even strong enough to make a source appear radio-loud\footnote{Radio loudness is defined as the ratio between the 5~GHz flux density and the $B$ band flux density \citep{1989kellermann1}. Sources with radio loudness $>$ 10 are classified as radio-loud, and sources with radio loudness $<$ 10 as radio-quiet.} \citep{2019ganci1}, which is usually considered a sign of strong nuclear activity, or even of the presence of jets. NLS1s in particular are known to often show enhanced star formation \citep{2010sani1}. Indeed, \citet{2015caccianiga1} claim that the mid-infrared colours of some sources in a sample of flat-spectrum NLS1s suggest that star formation is actually the predominant source of their radio emission. The combination of the almost flat spectral index of the free-free emission and the steep spectral index of the supernova remnant synchrotron emission results in a total spectral index of $\sim$-0.7, close to what is observed in several of our sources. We thus wanted to estimate the star formation related radio emission in our sources, to better understand their nature and to aid the interpretation of our results. The best way to do this is to use mid- or far-infrared observations, since star formation manifests strongly in these bands. Only a few NLS1s have far-infrared data, therefore we are limited to the mid-infrared data from the Wide-Field Infrared Survey Explorer \citep[WISE, ][]{2010wright1} AllWISE data release. WISE performed observations in four bands: W1, W2, W3, and W4, with respective wavelengths of 3.4, 4.6, 12, and 22~$\mu$m. In particular, the longer wavelength bands, W3 and W4, are relevant to star formation studies in AGN.
To achieve a comprehensive picture we decided to use several different methods, described below.
1) Investigating a sample of flat-spectrum NLS1s, \citet{2015caccianiga1} concluded that WISE colours can be used as a proxy of their star formation activity, because they are sensitive to the relative strengths of the different mid-infrared emitting components in active galaxies. The W3-W4 colour in particular is sensitive to the strength of star formation, because the star formation contribution increases toward longer wavelengths and thus especially affects the W4 band emission, making the colour redder. They determine that colours redder than W3-W4 $>$ 2.5 cannot be explained by AGN spectral energy distribution templates, but require a strong star formation component (see Figs. 2 and 3 in their paper). We calculated the W3-W4 colour of our sources; it is shown in Table~\ref{tab:sf}, as well as the W3 and W4 magnitudes, and the W3 flux density.
2) Another parameter \citet{2015caccianiga1} use is a variant of the widely used $q24$ parameter, which reflects the strength of the 24~$\mu$m emission relative to the 1.4~GHz emission. \citet{2015caccianiga1} use a $q22$ parameter instead, defined as
\begin{equation}
q22 = \log \left( S_{22 \mu \textrm{m}} / S_{1.4 \textrm{GHz}} \right)
\end{equation}
where $S_{22 \mu \textrm{m}}$ is the W4 band flux density and $S_{1.4 \textrm{GHz}}$ is the 1.4~GHz flux density. We only have 5.2~GHz flux densities from our observations, so we extrapolate the 1.4~GHz flux densities from the 5.2~GHz values using the weighted total spectral index from the $\alpha$ maps of the sources. \citet{2015caccianiga1} define that a major star formation contribution to the radio emission can be expected in sources with $q22 >$ 1, especially when combined with a red W3-W4 colour.
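As a minimal sketch of this diagnostic (the flux densities below are hypothetical, and the extrapolation assumes a simple power law $S_{\nu} \propto \nu^{\alpha}$, as in the text):

```python
import math

def q22(s_w4_mjy, s_52ghz_mjy, alpha):
    """q22 = log10(S_22um / S_1.4GHz), with S_1.4GHz extrapolated from
    the observed 5.2 GHz flux density using S_nu ~ nu**alpha."""
    s_14ghz = s_52ghz_mjy * (1.4 / 5.2) ** alpha
    return math.log10(s_w4_mjy / s_14ghz)

# Hypothetical source: W4 flux density 30 mJy, 5.2 GHz flux density
# 1 mJy, weighted total spectral index -0.7. A value > 1 would point
# to a possible major star formation contribution.
val = q22(30.0, 1.0, -0.7)
```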
3) The third method is the estimation of the star formation related radio emission using mid-infrared observations. \citet{2007boyle1} derive relations between the 20~cm (1.5~GHz) and the 24~$\mu$m flux densities using two different data sets. The WISE W4 band observations are at 22~$\mu$m, but the 2~$\mu$m difference is most probably negligible in this case, since the variation in the rest-frame wavelengths due to redshift is considerably larger, and the observed bands largely overlap. Moreover, \citet{2007boyle1} did not take redshift into account either, and their redshift range is comparable to ours, so their data are influenced by the same redshift bias. Thus we decided not to correct the 22~$\mu$m flux densities for redshift. The relations used are
\begin{equation}
\label{eq:CDFS}
S_{20\textrm{cm}} = (0.041 \pm 0.002) S_{24 \mu \textrm{m}} + (1.35 \pm 0.8)~\mu \textrm{Jy}
\end{equation}
and
\begin{equation}
\label{eq:ELAIS}
S_{20\textrm{cm}} = (0.039 \pm 0.004) S_{24\mu \textrm{m}} + (7.1 \pm 3.3)~\mu \textrm{Jy}
\end{equation}
where $S_{20\textrm{cm}}$ is the estimated flux density at 20~cm, or 1.5~GHz, and $S_{24\mu \textrm{m}}$ is the observed 24~$\mu$m flux density.
The star formation related radio emission estimates calculated using the above equations are shown in Table~\ref{tab:sf}. Equation~\ref{eq:CDFS} was derived using the observations of the $Chandra$ Deep Field South (CDFS) region, and we thus call it the $S_{20\textrm{cm}}$ CDFS estimate, whereas the observations of the European Large Area $ISO$ Survey (ELAIS) field were used to derive Eq.~\ref{eq:ELAIS}, and we correspondingly call it the $S_{20\textrm{cm}}$ ELAIS estimate. It should be noted that the aforementioned equations give the flux density estimate at 1.5~GHz, whereas our observations are centred at 5.2~GHz. If we assume for the star formation related emission the characteristic spectral index of -0.7, the flux density at 5.2~GHz, $S_{5.2\textrm{GHz}}$, is 0.42 $\times$ $S_{20\textrm{cm}}$.
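The chain from a WISE W4 flux density to a 5.2~GHz star formation estimate can be sketched as follows (the input flux density is hypothetical; the 0.42 scaling follows from $(5.2/1.5)^{-0.7}$):

```python
def s52_cdfs_estimate(s_24um_ujy, alpha_sf=-0.7):
    """Star formation radio estimate at 5.2 GHz from the 24/22 um flux
    density, via the CDFS relation S_20cm = 0.041*S_24um + 1.35 uJy,
    scaled from 1.5 to 5.2 GHz with S_nu ~ nu**alpha_sf."""
    s_20cm = 0.041 * s_24um_ujy + 1.35   # uJy at 1.5 GHz
    scale = (5.2 / 1.5) ** alpha_sf      # ~0.42 for alpha_sf = -0.7
    return s_20cm * scale

# Hypothetical source with a 22 um flux density of 20 mJy (2e4 uJy):
est = s52_cdfs_estimate(2.0e4)  # uJy at 5.2 GHz
```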
4) The fourth method we implemented is the direct comparison of the radio and mid-infrared flux densities. \citet{2021kozielwierzbowska1} studied the radio - mid-infrared connection in a large sample of radio AGN and pure star forming galaxies, and concluded that a simple separation based on $S_{\textrm{W3}} = S_{1.4\textrm{GHz}}$ correctly classifies 98-99\% of galaxies, with the star forming galaxies lying above the line, and the radio AGN below it. Choosing W3 instead of W4 is based on the fact that the polycyclic aromatic hydrocarbon (PAH) features, which manifest most strongly around $\sim$11~$\mu$m, fall in the W3 band, which covers 7-17~$\mu$m, for sources with $z <$ 0.6 \citep{2011jarrett1}. The PAH emission is often associated with the presence of massive young stars, and it can thus be used as a proxy for the star formation in the galaxy \citep{2008dacunha1}. In our case we do not expect the division to be so straightforward, since in many of our sources the radio emission is expected to be produced by both the AGN and star formation. Furthermore, according to some studies \citep[e.g.,][]{2012lamassa1} the PAH features are considerably suppressed in AGN-dominated systems, which might affect the usefulness of this proxy in some of our sources. On the other hand, \citet{2014esquej1} did not find proof of PAH suppression in the vicinity of low luminosity AGN, which most of the sources in our sample are. The feasibility of this method is further supported by \citet{2010sani1}, who successfully use the PAH features to investigate star formation in NLS1s. Since \citet{2021kozielwierzbowska1} use the radio flux density at 1.4~GHz, we extrapolated it for our sources from our 5.2~GHz observations using the weighted total spectral index measured from the $\alpha$ maps.
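A minimal sketch of this separation criterion, again assuming a simple power-law extrapolation to 1.4~GHz (the example flux densities and indices are hypothetical):

```python
def sf_side(s_w3_mjy, s_52ghz_mjy, alpha):
    """True if a source falls on the star-forming side of the
    S_W3 = S_1.4GHz dividing line, with S_1.4GHz extrapolated from
    5.2 GHz using the weighted total spectral index."""
    s_14ghz = s_52ghz_mjy * (1.4 / 5.2) ** alpha
    return s_w3_mjy > s_14ghz

# Hypothetical sources: a faint radio source with bright W3 emission,
# and a radio-bright, mid-infrared-faint one.
a = sf_side(15.0, 1.0, -0.7)   # star-forming side
b = sf_side(1.0, 40.0, -0.3)   # radio AGN side
```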
\subsection{Remarks on mid-infrared emission}
However, some caveats should be kept in mind when using the aforementioned methods. They have been formulated using large samples of AGN, and reflect their general properties, but are not necessarily as accurate for smaller samples with peculiar properties, such as NLS1s. Naturally, the most important question to ask is: how certain are we of the origin of the mid-infrared emission in AGN, and especially in NLS1s? The above-mentioned diagnostics assume that enhanced mid-infrared emission and certain colours are associated with star formation, which is not necessarily always the case. Indeed, in some cases features similar to those caused by star formation can be caused by the AGN itself heating the surrounding dust.
\subsubsection{Polar dust}
One significant phenomenon causing possible deviations from these relations is the presence of polar dust in AGN \citep[e.g.,][]{2009burtscher1,2016asmus1,2019leftley1}. Polar dust consists of warm ($\sim$a few hundred K) dust situated along the polar direction of an AGN, i.e. perpendicular to the conventional dusty torus, and usually spans some tens of pc. It is believed to be maintained by a nuclear wind driven by radiation pressure \citep{2019leftley1}. Its spectral energy distribution ($\nu F_{\nu}$ vs. $\lambda$) peaks in the mid-infrared, around 10-30~$\mu$m \citep[][]{2018lyu1}, depending on the temperature, and in some cases its contribution to the mid- to far-infrared emission is significantly stronger than the contribution of the AGN or star formation, reaching even a dominance of 90\% \citep{2013honig1, 2019asmus1}. Its spectral shape is somewhat similar to the shape caused by strong star formation at mid-infrared wavelengths \citep[see Fig. 9 in][]{2018lyu1}. This is relevant in our case since polar dust emission can significantly enhance the emission in both the W3 and W4 bands, and due to its properties it enhances the W4 band emission relatively more than the W3, mimicking mid-infrared properties conventionally associated with star formation activity. An example is NGC 3782, a Seyfert 1 galaxy that does not show significant star formation, but has a W3 flux density 600~mJy higher than its 1.4~GHz flux density, and W3-W4 $>$ 2.5. Indeed, its mid-infrared emission was found to be dominated by dust in the polar region \citep{2013honig1}. Such sources are plentiful \citep[see e.g.][]{2013zhang1,2018lyu1}, and unfortunately we have no way of estimating the polar dust contribution without careful spectral energy distribution (SED) modelling or direct high-resolution mid-infrared observations of the emitting region, since such data are not available.
Moreover, there are indications that high Eddington ratio sources, such as NLS1s, have a tendency to show more polar dust \citep{2019leftley1}.
However, \citet{2013zhang1} found that the ratio of the [O~III] wing luminosity to the bolometric luminosity is correlated with the mid-infrared covering factor. The wing is believed to arise from turbulent polar outflows in the inner narrow-line region \citep{2014peng1}, and thus they propose that a considerable fraction of the warm dust producing mid-infrared emission in AGN is likely embedded in polar outflows. The [O~III] lines of all of our objects were analysed in \citet{2021berton1}, which reports their core and wing properties, including the velocity and width of both components. The [O~III]$\lambda$5007 line was fitted with two Gaussians, one for the core component, and one to reproduce the blue wing. The same Gaussians, rescaled, were used to fit the [O~III]$\lambda$4959 line simultaneously. The errors were calculated via a Monte Carlo method, by randomly varying the noise on the line and repeating the fit 1000 times. Unfortunately we do not have the information about the luminosity of the wing, but the presence of a wing alone already indicates the presence of nuclear outflows, and increases the probability that those sources host a prominent polar dust component.
\subsubsection{Dusty torus}
Another emission source whose contribution to the infrared emission is not totally clear is the more conventional AGN structure, the dusty torus. It is known that the hot (1000-1500~K) dust in the inner parts of the torus manifests as a bump in the near-infrared bands, around 2-4~$\mu$m (in $\nu F_{\nu}$ vs. $\lambda$), and in general seems to dominate the infrared spectrum between 1 and 10~$\mu$m \citep[e.g.][]{1986edelson1, 2006rodriguezardila1}. The overall torus emission instead peaks at mid- to far-infrared wavelengths \citep{2006fritz1,2018zhuang1}, and, as demonstrated by \citet{2018zhuang1}, the contribution from the different AGN- and host-related elements changes significantly when modelling the same SED with different models \citep[see also][]{2019gonzalezmartin1}, leaving a lot of space for interpretation. The implications of this are two-fold: the torus emission can result in enhanced mid-infrared emission without any contribution from star formation, and, if the mid-infrared emission is dominated by the AGN, W3 and W4 might not reflect the properties of the host galaxy, but of the AGN itself, which would render some of the diagnostic tools we are using unreliable.
In the literature, the results regarding the AGN and host galaxy contributions differ considerably, from 90\% AGN domination to only $\sim$15\% AGN contribution \citep{2009dicken1,2013rosario1,2018zhuang1,2019gonzalezmartin1}. However, a trend of decreasing AGN contribution from brighter quasars to more moderate Seyfert galaxies can be observed. \citet{2013rosario1} studied a sample of 13000 Type 2 AGN using the W3 and W4 bands, and 1.4~GHz radio luminosities. They found that Seyfert galaxies almost exclusively lie in the mid-infrared-bright region, and conclude that their mid-infrared emission is mainly related to star formation activity, and that only $\sim$15\% of the W4 band emission originates in the AGN-heated dust. Interestingly, they find that the W3 band emission in Seyfert galaxies is suppressed compared to pure star-forming galaxies, possibly due to PAH destruction by the AGN emission \citep{2012lamassa1}.
\subsubsection{Summary of mid-infrared issues}
All these components -- the torus, polar dust, and star formation -- can, and often do, co-exist \citep[see Fig. 13 in][]{2018lyu1}, complicating the situation even more. Before moving forward, we summarise how these different mid-infrared emission production scenarios can affect our diagnostics, and what we can do to mitigate their effects.
\begin{enumerate}
\item W3-W4 mid-infrared colour:
\begin{itemize}
\item Polar dust emission can show properties similar to star formation \textrightarrow\ the presence of an [O~III] wing can be used to estimate whether polar winds are present and might contribute to the mid-infrared emission and colours.
\item PAH features can be suppressed, which will redden the W3-W4 colour \textrightarrow\ no effect on the diagnostic value.
\end{itemize}
\item $q22$ parameter:
\begin{itemize}
\item Cannot account for additional mechanisms producing 22~$\mu$m emission, for example, polar dust \textrightarrow\ other diagnostics need to be used to estimate the probability of contamination.
\item Cannot account for strong AGN radio emission \textrightarrow\ $q22$ values below 1 should not be used for diagnostic purposes.
\end{itemize}
\item $S_{\textrm{W3}}$--$S_{1.4\textrm{GHz}}$ relation:
\begin{itemize}
\item The W3 emission can be enhanced by AGN-heated dust emission, which can make the source look star-forming \textrightarrow\ the presence of an [O~III] wing can be used to estimate if polar dust is present.
\item The AGN can suppress the 11.3~$\mu$m PAH feature \textrightarrow\ other diagnostics must be used.
\end{itemize}
\item $S_{\textrm{CDFS}}$--$S_{\textrm{int}}$ relation:
\begin{itemize}
\item Has been calibrated using star forming galaxies, and does not take into account the contribution of the AGN in the W4 band \textrightarrow\ the AGN contribution in Seyferts seems to be small.
\end{itemize}
\end{enumerate}
Since the contribution of the AGN to the mid-infrared emission of Seyferts, and thus also NLS1s, seems to be small \citep{2013rosario1}, and NLS1s are known to be strongly star-forming galaxies, we can assume that in most cases the diagnostics we are using are pointing in the right direction. However, all the sources will be analysed individually, and all diagnostic tools, supplemented by any other information we have of these sources, will be used to draw a more complete picture of the situation.
\section{Results}
\label{sec:results}
\subsection{Significance of star formation}
We implemented several diagnostics to study the contribution of the radio emission produced by star formation related processes in our sources, as described in Sect.~\ref{sec:radiofromsf}. For the radio data we used the total intensity measured from the 60k$\lambda$ map to also include possible faint extended emission not visible in the normal or the 90k$\lambda$ maps. Keeping in mind the remarks in the previous section, we summarise the results here.
1) 25 out of 44 sources in our sample have W3-W4 $>$ 2.5, indicating mid-infrared colours so red that they might be hard to explain only by AGN activity. If the red colour is indeed caused by star formation, it is probable that in these sources its contribution to the radio emission is significant as well. The result that more than half of our sources seem to have considerable star formation is well in agreement with the previous study by \citet{2010sani1}. This result on its own does not mean that there is no strong AGN activity in these sources.
2) The second method was to use the $q22$ parameter to estimate whether star formation can significantly contribute to the observed radio emission. We found that 26 out of 44 sources have $q22 >$ 1. Most of the sources with W3-W4 $>$ 2.5 also have $q22 >$ 1, but there are some curious sources where this is not the case (see Table~\ref{tab:sf}). These sources on average have high radio flux densities, and whereas the W3-W4 colour indicates star formation in these sources, the strong radio emission distorts the $q22$ value, which remains low, or even negative. Indeed, the more widely used $q24$ is expected to be very low for radio-loud AGN \citep{2008ibar1}, and it cannot account for the effects of both AGN and star formation activity in a source. Thus, especially for NLS1s, where a star forming host galaxy and jets can often co-exist, no conclusions should be drawn from $q22$ values $<$ 1.
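As a concrete illustration, the $q22$ diagnostic can be sketched as below. This assumes that $q22$ follows the same form as the classic $q24$ parameter, that is, the base-10 logarithm of the 22~$\mu$m to 1.4~GHz flux density ratio; the exact definition adopted in this paper is the one given in Sect.~\ref{sec:radiofromsf}, and the input values here are purely illustrative.

```python
import math

def q22(s_22um_mjy, s_14ghz_mjy):
    """Infrared-radio q22 parameter, assumed here to take the same
    form as the classic q24: log10(S_22um / S_1.4GHz).
    Both flux densities in the same units (e.g. mJy)."""
    return math.log10(s_22um_mjy / s_14ghz_mjy)

# Illustrative values only: q22 > 1 suggests star formation can
# account for the radio emission; q22 < 1 is unreliable for
# sources with strong AGN radio emission.
print(q22(100.0, 10.0))  # exactly 1.0 for a flux ratio of 10
```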
3) The third method we used was to estimate the radio emission produced by star formation using mid-infrared data \citep{2007boyle1}. We used the WISE W3 flux density to estimate the radio emission from star formation at 1.5~GHz, and a spectral index characteristic of star formation ($\alpha$=-0.7) to extrapolate the 1.5~GHz estimate to 5.2~GHz. The comparison of the measured and the estimated 5.2~GHz flux densities is shown in Fig.~\ref{fig:jvlacdfs}. We decided to use only the CDFS estimate, since the CDFS and ELAIS estimates are very close to each other. It can be seen that a majority of our sources are clustered around the $S_{\textrm{5.2GHz, CDFS}}$ = $S_{\textrm{5.2GHz, JVLA}}$ line. This suggests that, assuming that the predominant source of the W3 band emission is star formation, in most of our sources these processes are enough to explain the bulk of the observed radio emission. Only for the sources clearly below the black line does the CDFS estimate underestimate the radio emission; these sources should have a major contribution from the AGN to explain the observed flux density. Interestingly, the mid-infrared colour W3-W4 does not seem to correlate with the position of a source in Fig.~\ref{fig:jvlacdfs}, indicating that there are some discrepancies between the diagnostics.
\begin{figure}
\centering
\includegraphics[width=9cm]{CDFS-JVLA.png}
\caption{5.2~GHz flux densities estimated from the 22~$\mu$m flux densities and extrapolated from 1.5~GHz using $\alpha$=-0.7 vs. measured JVLA 5.2~GHz flux densities. The error bars of the flux densities are smaller than the size of the markers and are not shown. Sources with the WISE W3-W4 colour $>$ 2.5, that is, sources that are unusually red and most likely to have enhanced star formation, are shown as filled red squares, and sources with the WISE W3-W4 colour $<$ 2.5, that is, sources that are not unusually red, as filled blue circles. The black line denotes equal flux densities. $S_{\textrm{5.2GHz}}$ CDFS and $S_{\textrm{5.2GHz}}$ JVLA values close to each other mean that the bulk of the radio emission can be explained by star formation processes, whereas the sources lying below the black line have excess radio emission.}
\label{fig:jvlacdfs}
\end{figure}
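The power-law extrapolation used in this diagnostic, $S_\nu \propto \nu^{\alpha}$, can be sketched as follows; the 10~mJy input value is only an illustration, not a measurement from the sample.

```python
def extrapolate_flux(s_ref_mjy, nu_ref_ghz, nu_ghz, alpha):
    """Extrapolate a flux density along a power law S_nu ∝ nu^alpha:
    S(nu) = S(nu_ref) * (nu / nu_ref)**alpha."""
    return s_ref_mjy * (nu_ghz / nu_ref_ghz) ** alpha

# A star-formation estimate at 1.5 GHz carried to 5.2 GHz with the
# star-formation-like spectral index alpha = -0.7 used in the text.
s_52 = extrapolate_flux(10.0, 1.5, 5.2, -0.7)  # roughly 4.2 mJy
```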
4) The last diagnostic is based on the direct comparison of the W3 and 1.4~GHz flux densities \citep{2021kozielwierzbowska1}. Since our observations are at 5.2~GHz, we extrapolated them to 1.4~GHz using the weighted total spectral index from the $\alpha$ map of each source. The comparison of the 1.4~GHz and the W3 flux densities is shown in Figure~\ref{fig:jvlaw3}. The sources above the black line ought to have a strong star formation contribution, whereas only the sources below the line are expected to be dominated by AGN radio emission. Also in this case the mid-infrared colour does not seem to play a determinative role. This result is in agreement with the other diagnostics, implying that a majority of NLS1s show enhanced star formation, even at levels where it can be the predominant source of the radio emission in the galaxy. However, it must be taken into account that the W3 flux density can also be enhanced by excess mid-infrared emission from the torus or the polar dust.
\begin{figure}
\centering
\includegraphics[width=9cm]{W3-JVLA.png}
\caption{WISE W3 flux densities vs. JVLA 1.4~GHz flux densities extrapolated from the 5.2~GHz flux densities using the total spectral indices from the $\alpha$ maps. The error bars of the flux densities are smaller than the size of the markers and are not shown. Sources with the WISE W3-W4 colour $>$ 2.5, that is, sources that are unusually red and most likely to have enhanced star formation, are shown as filled red squares, and sources with the WISE W3-W4 colour $<$ 2.5, that is, sources that are not unusually red, as filled blue circles. The black line denotes equal flux densities. Sources above the black line should have a larger contribution from star formation, whereas the sources below the black line should be dominated by AGN activity.}
\label{fig:jvlaw3}
\end{figure}
Furthermore, methods 3) and 4) are in agreement with each other since all sources that are below the threshold shown by the black line in Figure~\ref{fig:jvlaw3} are below the threshold also in Figure~\ref{fig:jvlacdfs}. So, whereas it might be hard to tell the exact origin of the mid-infrared and the radio emission in sources with high W3 and W4 flux densities, the two diagnostics agree on the sources that are AGN-dominated. The properties of individual sources will be discussed in the next sections.
\subsection{Notes on individual sources}
The results for all sources in this study will be individually discussed in this section. Their general radio emission and radio morphological properties were extensively discussed in \citetalias{2018berton1}, thus in this paper we concentrate on their spectral properties and on investigating the origin of their radio emission. We note that throughout the rest of the paper we use the term ``jet'' to refer to collimated outflows in general, since in most cases we have no way of estimating whether the jets are relativistic or not. We use ``outflow'' to refer to non-collimated efflux. The luminosities of the sources are taken from \citetalias{2018berton1} (their Table A.5). In addition, we obtained the PanSTARRS-1 $i$ band optical images \citep{2020flewelling1} of the host galaxies of our sources. We overlaid them with the 90k$\lambda$ tapered radio map, and in some cases also with the normal radio map, to give a better sense of the extent of the radio morphologies, and to investigate whether there are optical structures that match the radio structures. For the sources with the highest redshifts this does not add significant new information, because the scales we are probing at high redshift are several kpc and the host galaxies are usually not resolved, but for consistency we decided to include them.
We use the radio loudness parameter to make our results comparable with the literature. However, in general we do not encourage its usage, since it is an arbitrarily chosen threshold and is ambiguous for most of the sources \citep{2017padovani1,2017jarvela1}. Whereas very radio-loud sources are more likely to host a relativistic jet, there are examples of sources with no previous low-frequency radio detection that were nevertheless found to have relativistic jets \citep{2018lahteenmaki1,2020berton2,2021jarvela1}, and also of sources whose star formation activity is so enhanced that they appear radio-loud \citep{2015caccianiga1}. Thus, as convenient as it might be, we cannot use the radio loudness to deduce the properties of sources, but have to treat and study all of them individually.
Before going into the detailed discussion a few remarks should be made. First, it turned out that the star formation diagnostics are not very reliable when it comes to NLS1s, unless the source is clearly dominated by the AGN. Thus we do not use these tools alone to draw any conclusions. This is further discussed in Sect.~\ref{sec:discussion}. In general, star forming galaxies, even starbursts, do not exceed a radio luminosity of log $\nu L_{\nu, \textrm{int}}$ = 40.0 erg s$^{-1}$ \citep{2009sargsyan1}, which can thus be used as an additional proxy for the AGN dominance. Second, we use log $\nu L_{\nu, \textrm{int}}$ = 40.60 erg s$^{-1}$ at 5.2~GHz as a threshold for `traditional' CSS sources \citep{1998odea1}. Low-luminosity CSS sources do not have a defined lower luminosity limit but we remark that, for example, the low-luminosity CSS sources in \citet{2002kunert1} were chosen to have a 5~GHz flux density $>$ 150~mJy. Only one of our sources exceeds this, which makes the comparison between the low-luminosity CSS sources and NLS1s in this paper uncertain.
We list all the flux density and luminosity variables we use in Table~\ref{tab:acronyms}. We give a summary of the radio morphology properties and the mid-infrared diagnostics in Table~\ref{tab:summary}. We remark that the data in this table are based only on this paper; for example, if a jet has been detected in previous observations, we do not list it in the table unless we detect it also at 5.2~GHz. We tentatively classify sources based on $R_\textrm{CDFS}$ = ($S_{\textrm{5.2GHz, CDFS}}$ - $S_{\textrm{int}}$) / $S_{\textrm{5.2GHz, CDFS}}$, and $R_\textrm{W3}$ = ($S_{\textrm{W3}}$ - $S_{\textrm{1.4GHz, JVLA}}$) / $S_{\textrm{W3}}$. Large negative values of $R_\textrm{CDFS}$ indicate that there is more radio emission in the source than what can be explained by star formation, while positive values suggest that the radio emission could be explained by star formation alone. Large positive values of $R_\textrm{W3}$ possibly indicate star formation as the predominant source of radio emission, while large negative values suggest that the AGN is the main radio emission source. The thresholds are given in Table~\ref{tab:rclass}.
\begin{table*}[!h]
\caption[]{Variables used in this paper.}
\centering
\begin{tabular}{l l p{12cm}}
\hline\hline
Variable & Units & Description \\ \hline
$S_{\textrm{peak}}$ & mJy beam$^{-1}$ & Peak flux density of the normal radio map\\
$S_{\textrm{int}}$ & mJy & Integrated flux density of the normal radio map within the 3$\sigma$ contour \\
$S_{\textrm{90k}\lambda, \textrm{int}}$ & mJy & Integrated flux density of the 90k$\lambda$ radio map within the 3$\sigma$ contour \\
$S_{\textrm{60k}\lambda, \textrm{int}}$ & mJy & Integrated flux density of the 60k$\lambda$ radio map within the 3$\sigma$ contour \\
$S_{\textrm{1.4GHz, JVLA}}$ & mJy & Flux density at 1.4~GHz, extrapolated from the 5.2~GHz using the total spectral index \\
$S_{\textrm{5.2GHz, CDFS}}$ & mJy & Flux density at 5.2~GHz, extrapolated (with $\alpha$ = -0.7) from the 1.5~GHz flux density estimated from the 22~$\mu$m flux density \\
$S_{\textrm{W3}}$ & mJy & WISE W3 flux density \\
log $\nu L_{\nu, \textrm{int}}$ & erg s$^{-1}$ & Logarithm of the luminosity at 5.2~GHz \\ \hline
\end{tabular}
\tablefoot{(1) Variable name, (2) variable units, (3) description of the variable.}
\label{tab:acronyms}
\end{table*}
\begin{table}
\caption[]{Threshold for classifying the predominant source of the radio emission.}
\centering
\begin{tabular}{l l}
\hline\hline
Criterion & Classification \\ \hline
0.0 $ < R_\textrm{CDFS}$ & star formation dominated \\
-0.5 $ < R_\textrm{CDFS} <$ 0.0 & inconclusive \\
-1.0 $ < R_\textrm{CDFS} <$ -0.5 & possibly AGN dominated \\
$R_\textrm{CDFS} <$ -1.0 & AGN dominated \\
1.0 $ < R_\textrm{W3}$ & star formation dominated \\
0.5 $ < R_\textrm{W3} <$ 1.0 & possibly star formation dominated \\
-0.5 $ < R_\textrm{W3} <$ 0.5 & inconclusive \\
-1.0 $ < R_\textrm{W3} <$ -0.5 & possibly AGN dominated \\
$R_\textrm{W3} <$ -1.0 & AGN dominated \\ \hline
\end{tabular} \label{tab:rclass}
\end{table}
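The tentative classification of Table~\ref{tab:rclass} can be expressed as a pair of small helper functions. Treating values exactly on a threshold as the less conclusive class is an assumption made here, since the table does not specify how boundary values are handled.

```python
def classify_r_cdfs(r):
    """Classify R_CDFS = (S_5.2GHz,CDFS - S_int) / S_5.2GHz,CDFS
    using the thresholds of Table tab:rclass."""
    if r > 0.0:
        return "star formation dominated"
    if r > -0.5:
        return "inconclusive"
    if r > -1.0:
        return "possibly AGN dominated"
    return "AGN dominated"

def classify_r_w3(r):
    """Classify R_W3 = (S_W3 - S_1.4GHz,JVLA) / S_W3
    using the thresholds of Table tab:rclass."""
    if r > 1.0:
        return "star formation dominated"
    if r > 0.5:
        return "possibly star formation dominated"
    if r > -0.5:
        return "inconclusive"
    if r > -1.0:
        return "possibly AGN dominated"
    return "AGN dominated"
```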
\subsubsection{J0347+0105}
\label{sec:J0347}
J0347+0105 is a radio-quiet NLS1 at $z$ = 0.031, previously studied with the JVLA in A configuration at 8.4~GHz by \citet{2000thean1}. They found only a marginally resolved core with a flux density of 6.8~mJy. It was not detected at 22~GHz with VLBI \citep{2016doi1}, but the sensitivity of those observations was poor with a detection limit around 7~mJy. J0347+0105 has also been found to exhibit water maser emission \citep{2011tarchi1}, though its origin remains unclear.
Our radio map in Figure~\ref{fig:J0347spind} does not show any significant structure. The integrated flux density is 11.65~mJy. Using the 8.4~GHz flux density by \citet{2000thean1}, we derive a spectral index of -1.12. Our $\alpha$ map shows a consistently steep, but not quite that extreme, total spectral index of -0.73. The observations are almost 20 years apart and the beam sizes are different, which probably causes the difference in the spectral indices. The tapered maps (Figures~\ref{fig:J0347-90k} and \ref{fig:J0347-60k}) show a slightly elongated north-east/south-west structure, but the integrated flux density is only $\sim$0.5~mJy higher than in the normal map, so any extended emission is very faint. The host galaxy does not show any clear features either, and the radio emission is confined inside the host (Fig.~\ref{fig:J0347-host}).
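The archival comparisons in this and the following subsections use the standard two-point spectral index, $\alpha = \log(S_2/S_1)/\log(\nu_2/\nu_1)$ with the $S_\nu \propto \nu^{\alpha}$ convention. A short sketch reproducing the J0347+0105 value quoted above:

```python
import math

def two_point_alpha(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Two-point spectral index with the S_nu ∝ nu^alpha convention."""
    return math.log10(s2_mjy / s1_mjy) / math.log10(nu2_ghz / nu1_ghz)

# J0347+0105: 11.65 mJy at 5.2 GHz (this work) vs. 6.8 mJy at
# 8.4 GHz (Thean et al. 2000)
alpha = two_point_alpha(11.65, 5.2, 6.8, 8.4)  # ≈ -1.12
```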
The W3 flux density ($S_{\textrm{W3}}$) of this source is 203.5~mJy higher than the extrapolated 1.4~GHz flux density ($S_{\textrm{1.4GHz, JVLA}}$ = 31.3~mJy), while the extrapolated CDFS 5.2~GHz flux density ($S_{\textrm{5.2GHz, CDFS}}$ = 8.8~mJy) is slightly lower than the JVLA 5.2~GHz flux density ($S_{\textrm{int}}$). The $q22$ parameter is 1.22, and the W3-W4 colour is 2.29, not indicating extreme colours, although W3-W4 might be underestimated since the W3 emission is unusually high. The $\alpha$ map, with a spectral index characteristic of star formation, the high W3 flux density, and $q22 >$ 1 imply that the contribution of star formation is very strong in this source. Since the star formation rate in this source has not been studied, an alternative explanation for the high mid-infrared flux densities can be equatorial or polar dust, since this source shows a prominent and turbulent [O~III] wing with a velocity of -371~km s$^{-1}$ and a FWHM of 1074~km s$^{-1}$. A CSS-like nature cannot be ruled out either without higher resolution observations, but considering its luminosity, log $\nu L_{\nu, \textrm{int}}$ = 39.17 erg s$^{-1}$, which is clearly below the usual CSS threshold, star formation seems the more plausible explanation.
\subsubsection{J0629-0545}
\label{sec:J0629}
J0629-0545 (IRAS 06269-0543) is a radio-quiet NLS1 at $z$ = 0.117 that has been identified as an ultra-luminous infrared galaxy (ULIRG, \citealp{2002zheng1}). Its integrated flux density is 14.58~mJy, with approximately the same flux density in the tapered maps. It exhibits starburst-level star formation at a rate of 183 $M_{\odot}$ yr$^{-1}$, derived from the CO(1-0) luminosity \citep{2019tan1}. This is evident also in our star formation diagnostics, as the W3-W4 colour is 2.88, $q22$ = 1.23, and $S_{\textrm{W3}}$ is 153.7~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$ (45.3~mJy). $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are quite similar, 13.2 and 14.6~mJy, respectively, indicating that the radio emission might be dominated by star formation activity. It also shows a strong [O~III] wing, with a velocity of -464~km s$^{-1}$ and a FWHM of 1623~km s$^{-1}$, so a contribution of AGN-heated dust to the mid-infrared brightness cannot be ruled out either. However, its radio luminosity, log $\nu L_{\nu, \textrm{int}}$ = 40.43 erg s$^{-1}$, would be high even for a starburst galaxy, and thus it is plausible that the radio emission is a combination of nuclear and star formation related processes. In ALMA observations, \citet{2019tan1} found a rotating gas disk with an ordered velocity gradient, indicating that the host (Figs.~\ref{fig:J0629-host-zoom} and \ref{fig:J0629-host}) is a disk galaxy which has not undergone any recent merger. They estimate the inclination of the disk to be 51$\pm$10\degree.
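The quoted radio luminosities can be approximately reproduced from the observed flux densities. The sketch below assumes a flat $\Lambda$CDM cosmology with $H_0$ = 70~km s$^{-1}$ Mpc$^{-1}$, $\Omega_\textrm{m}$ = 0.3, and $\Omega_\Lambda$ = 0.7 (not necessarily the parameters used in \citetalias{2018berton1}) and neglects the k-correction.

```python
import math

C_KM_S = 299792.458   # speed of light [km/s]
MPC_CM = 3.0857e24    # 1 Mpc in cm

def lum_dist_mpc(z, h0=70.0, om=0.3, ol=0.7, steps=10000):
    """Luminosity distance [Mpc] in a flat LCDM cosmology, via
    trapezoidal integration of the comoving distance (assumed
    parameters, not taken from the paper)."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(om * (1.0 + zi) ** 3 + ol)
        w = 0.5 if i in (0, steps) else 1.0
        integral += w / e
    d_c = (C_KM_S / h0) * integral * dz
    return (1.0 + z) * d_c

def log_nu_l_nu(s_mjy, nu_ghz, z):
    """log10 of nu*L_nu [erg/s] from an observed flux density,
    with no k-correction (1 mJy = 1e-26 erg/s/cm^2/Hz)."""
    d_cm = lum_dist_mpc(z) * MPC_CM
    l_nu = 4.0 * math.pi * d_cm ** 2 * s_mjy * 1e-26
    return math.log10(l_nu * nu_ghz * 1e9)

# J0629-0545: 14.58 mJy at 5.2 GHz, z = 0.117 -> close to the
# quoted log nuL_nu = 40.43 erg/s
logl = log_nu_l_nu(14.58, 5.2, 0.117)
```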
The radio map of J0629-0545 (Fig.~\ref{fig:J0629spind}) clearly shows two distinct components with a separation of 2.7~kpc. We assume that the brighter component is the core, since the north-eastern one is close to the edge of the host galaxy, as seen in Fig.~\ref{fig:J0629-host}. The same features were also detected at 8.4~GHz by \citet{2000moran1}, but interestingly the north-eastern component is brighter in their observations. Unfortunately, they do not give the flux densities of the individual components, so we cannot make a comparison. J0629-0545 was also detected in NVSS with a flux density of 32.7~mJy, which gives a 1.4-5.2~GHz spectral index of -0.62. The $\alpha$ map in Fig.~\ref{fig:J0629spind} indicates steeper spectral indices: both the total and the core spectral indices are -0.92. We cannot say anything definitive about the spectral index of the north-eastern component, because the S/N of the data was not high enough to model it reliably. \citet{2000moran1} reported an extremely steep spectral index of -2.2 between 1.4 and 8.4~GHz. They also found the source to exhibit variability, which might partly account for the discrepancy between their and our results. The nature of the north-eastern component is unclear, but a strong blueshift of -550~km s$^{-1}$ of the [O~III] lines found by \citet{2002zheng1} suggests that the source has a jet, and, indeed, the asymmetry of the radio morphology might indicate an AGN origin for the north-eastern region rather than a star formation related one.
\subsubsection{J0632+6340}
\label{sec:0632}
J0632+6340 (UGC 3478) is a radio-quiet, nearby ($z$ = 0.013) NLS1 with a modest integrated flux density of 2.83~mJy and a low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 37.76 erg s$^{-1}$. The peak flux density is only 1.69~mJy beam$^{-1}$, so a considerable fraction of the radio emission comes from the regions of extended emission. The radio map (Fig.~\ref{fig:J0632spind}) shows a two-sided extended morphology, approximately at a PA of 40\degree. The extent of the whole emitting region is $\sim$1~kpc end-to-end; it is well confined within the bulge of its spiral host galaxy (Figs.~\ref{fig:J0632-host} and \ref{fig:J0632-host-zoom}), and seems to be aligned with its major axis. The tapered maps do not reveal any additional structures. The total spectral index of the source is -0.51 and the core spectral index is -0.63 (Fig.~\ref{fig:J0632spind}).
J0632+6340 has a few archival detections: 22~mJy at 325~MHz in the Westerbork Northern Sky Survey (WENSS), 12.8~mJy at 1.4~GHz in NVSS, and 1.4~mJy at 8.4~GHz in VLA observations \citep{2000kinney1}, based on which \citet{2001schmitt3} report seeing only an unresolved core, with an extent $<$ 25~pc. The spectral index seems to be steepening toward higher frequencies: based on these archival observations the spectral indices are -0.37 between 325~MHz and 1.4~GHz, -1.15 between 1.4 and 5.2~GHz, and -1.47 between 5.2 and 8.4~GHz. The 8.4~GHz observations were performed in the same configuration as ours, but their rms was higher (10 vs. 32~$\mu$Jy beam$^{-1}$), so they might have lost some of the extended emission seen in our map, which might steepen the spectral index. In addition, the beam difference might affect the measurements, making the higher frequency flux densities appear lower. None of these observations are simultaneous either, so variability might affect the indices.
\citet{2003schmitt1} studied the extent of the [O~III] emission in this source, and found a morphology resembling ionisation cones (their Fig.~8). The PA of the [O~III] emission is 55\degree, roughly in agreement with the extended radio emission in our map. No blueshift is detected in the [O~III] line profiles. The star formation rate of the host galaxy was found to be $\sim$1.9 $\pm$ 1~$M_{\odot}$ yr$^{-1}$ \citep{2020jackson1}. Our diagnostics support that there is ongoing star formation in the galaxy: the W3-W4 colour is 2.52, $q22$ = 1.65, $S_{\textrm{W3}}$ is $\sim$91.5~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 1.49~mJy higher than $S_{\textrm{int}}$. J0632+6340 exhibits an [O~III] wing with a shift of -290~km s$^{-1}$ and a FWHM of 1000~km s$^{-1}$, and thus polar, or otherwise AGN-heated, dust can be responsible for a fraction of the observed mid-infrared emission. Due to the low redshift of the source, we are able to examine only the centremost region in our observations. Because of the array configuration used for our observations, any radio emission originating further out in the galaxy may be resolved out, and thus the flux density might be underestimated. Moreover, the peculiar radio morphology is not compatible with the circumnuclear star formation usually seen in NLS1s. Since it does not seem probable that this source hosts relativistic jets, the radio morphology can be explained either by non-relativistic jets or by outflows produced by nuclear winds. The non-collimated morphology and the spectral index steepening toward higher frequencies would point to the presence of an outflow in J0632+6340. The presence of an [O~III] wing also supports the existence of nuclear outflows in this source.
\subsubsection{J0706+3901}
J0706+3901 (FBQS J0706+3901) is a radio-quiet NLS1 at $z$ = 0.086 with a modest $S_{\textrm{int}}$ of 2.14~mJy. The flux densities of the tapered maps are approximately the same, and the luminosity is moderate with log $\nu L_{\nu, \textrm{int}}$ = 39.33~erg s$^{-1}$. It does not show any resolved structure in any of the maps (Figures~\ref{fig:J0706spind}, \ref{fig:J0706-90k}, and \ref{fig:J0706-60k}). Its total and core spectral indices are around -0.8, consistent with either optically thin synchrotron emission or star formation. The W3-W4 colour is 2.55, suggesting star formation activity. $S_{\textrm{W3}}$ is $\sim$5.1~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, which is a considerable difference taking into account the modest flux density values of the source, but it does not necessarily imply very strong star formation. Also, $q22$ is only 0.70. $S_{\textrm{5.2GHz, CDFS}}$ is only 0.55~mJy and is thus somewhat lower than $S_{\textrm{int}}$. The [O~III] lines of J0706+3901 do not have a wing component at all, so polar dust might not be present, but equatorial dust still can be. The radio emission is confined within a seemingly featureless host galaxy, as seen in Fig.~\ref{fig:J0706-host}, consistent with the star formation explanation. However, with these data it is impossible to distinguish the AGN-related and star formation related components from each other; higher resolution observations will be needed to achieve that.
\subsubsection{J0713+3820}
J0713+3820 (FBQS J0713+3820) is a radio-quiet NLS1 at $z$ = 0.123 with a peak flux density of 2.32~mJy beam$^{-1}$, an integrated flux density of 3.39~mJy, and a 60k$\lambda$ tapered map flux density of 3.80~mJy; thus a considerable amount of the emission originates from extended regions (Fig.~\ref{fig:J0713}). The radio luminosity is moderate with log $\nu L_{\nu, \textrm{int}}$ = 39.82 erg s$^{-1}$. The $\alpha$ map (Fig.~\ref{fig:J0713spind}) shows a steep spectral index of $\sim$-1.0 throughout the emitting region. This source has been detected in WENSS (25~mJy) and FIRST (10.78~mJy), yielding spectral indices of -0.57 between 325~MHz and 1.4~GHz, and -0.88 between 1.4 and 5.2~GHz. Therefore, based on these spectral indices, it seems that the spectrum steepens toward higher frequencies, but the effect of different beam sizes is unknown. J0713+3820 was not detected at 22~GHz in VLBI observations \citep{2016doi1}.
Mid-infrared data indicate that there is star formation going on in the host, as the W3 flux density is 85.2~mJy higher than the extrapolated 1.4~GHz flux density, and the $q22$ parameter is 1.19. $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are about the same (3.7 vs. 3.8~mJy), so based on this diagnostic there is no need for sources of radio emission beyond star formation. The W3-W4 colour, though, is only 2.28. The extended, somewhat patchy morphology of the radio emission, covering the whole host galaxy (Fig.~\ref{fig:J0713-host}), supports the presence of star formation in the host. The source shows an [O~III] wing component shifted by -742~km s$^{-1}$ with respect to the line core, thus the contribution of polar dust to the mid-infrared flux densities is a possibility. It is worth noting that the FWHM of the wing component is 1495~km s$^{-1}$, indicating strong turbulence in the gas where the line originates.
Interestingly, this source exhibits a considerable \textit{redshift} of 300~km s$^{-1}$\ in its [O~III] lines \citep{2021berton1}. As mentioned before, the [O~III] wing is blueshifted relative to the [O~III] line core. It is known that [O~III] wings often arise due to nuclear winds, but they are usually accompanied by blueshifted [O~III] lines, which is not the case for J0713+3820. It might be that the redshifted [O~III] core and the blueshifted [O~III] wing arise from kinematically separate regions. Unfortunately, our data are not enough to pinpoint and resolve the [O~III] emitting regions, and further observations, especially by means of integral field spectroscopy, will be necessary to unravel the nature of this source.
\subsubsection{J0804+3853}
J0804+3853 (FBQS J0804+3853) is a radio-quiet NLS1 at $z$ = 0.212 with a low peak flux density of 0.37~mJy beam$^{-1}$ and an integrated flux density of 0.81~mJy, so more than half of the radio emission comes from extended regions. This is evident also in the radio map in Fig.~\ref{fig:J0804spind}, where an asymmetric emitting region with an extent of 4.1~kpc toward the east--south-east is seen. The host galaxy does not have any distinct features, and the radio emission is confined within it (Fig.~\ref{fig:J0804-host}). J0804+3853 was detected in FIRST with a flux density of 2.68~mJy, giving a 1.4-5.2~GHz spectral index of -0.91, which is consistent with the $\alpha$ map of the source in Fig.~\ref{fig:J0804spind}: the total spectral index of the source is -0.98. J0804+3853 has a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.69 erg s$^{-1}$.
\citet{2015caccianiga1} estimate the star formation rate in this galaxy to be as high as 89 $M_{\odot}$ yr$^{-1}$, and our star formation proxies are in good agreement with this. The W3-W4 colour is 2.70, $q22$ is 1.36, $S_{\textrm{W3}}$ is 16.0~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are roughly the same. It seems that the star formation activity can account for most of the radio emission in this source, but the asymmetric radio morphology seems unusual for a star forming galaxy, and instead resembles the morphology seen in sources with nuclear winds. Indeed, this source shows an [O~III] wing with a velocity of -388~km s$^{-1}$\ and a FWHM of 760~km s$^{-1}$, which supports the presence of nuclear winds. If the mid-infrared emission is enhanced by AGN-heated dust, the star formation might not be that strong after all. More observations will be needed to determine the components that contribute to the radio emission in this source.
\subsubsection{J0806+7248}
J0806+7248 (RGB J0806+728) is a moderately radio-loud NLS1 at $z$ = 0.098 with a rather steep total spectral index of -0.96, as seen in its $\alpha$ map in Figure~\ref{fig:J0806spind}. Its integrated flux density of 11.46~mJy is similar to the flux densities in the tapered maps (Figures~\ref{fig:J0806-90k} and \ref{fig:J0806-60k}). J0806+7248 does not show any resolved structure in any of the maps. Its host galaxy does not show any resolved features either, and the radio emission is confined within the host (Fig.~\ref{fig:J0806-host}).
This source has been extensively studied in the past. \citet{1991gregory1} performed observations with the 91~m Green Bank radio telescope at 4.85~GHz in 1987 and reported that J0806+7248 had a flux density of 31~mJy. A few years later, \citet{1991becker1} reported a flux density of 29~mJy at 4.85~GHz with the same telescope. In the early and mid-1990s J0806+7248 was observed in the NVSS at 1.4~GHz with a flux density of 50~mJy \citep{1998condon1}, and at 5~GHz with the VLA, showing a flux density of 20~mJy \citep{1997laurentmuehleisen1}. We can estimate that the spectral index of J0806+7248 in the early 1990s was $\sim$-0.7.
\citet{2007doi1} observed J0806+7248 with the Japanese VLBI Network (JVN) at 8.4~GHz. They obtained a flux density of 6.9~mJy and a brightness temperature, $T_{\textrm{B}}$, of $> 10^7$~K, indicating a non-thermal origin of the radio emission. The source shows a slightly extended structure at the mas scale of the observations, but no clear morphology is seen. J0806+7248 was later observed with the VLBA at 1.7~GHz \citep{2011doi1}, yielding a flux density of 23~mJy and a $T_{\textrm{B}}$ of $> 10^{8.7}$~K, unquestionably confirming the non-thermal origin of the radio emission. It exhibits a remarkable elongated structure (100~pc) with regions having $T_{\textrm{B}} > 10^{8}$~K, identifying this structure as a jet. Some signs of a possible counter-jet are also seen. Interestingly, \citet{2011doi1} estimate its spectral index using the JVN and VLBA observations to be -0.94$\pm$0.42, which is well in agreement with our core spectral index of -0.99, and with what is seen in its $\alpha$ map. Furthermore, J0806+7248 was found to be a blue outlier by \citet{2021berton1}, with the [O~III] lines blueshifted by -300~km s$^{-1}$, which is consistent with the presence of a relativistic jet. A wing is also present, with a velocity of -426~km s$^{-1}$ and a FWHM of 733~km s$^{-1}$.
Strikingly, the flux density of J0806+7248 appears to have been decreasing since the first observations, whereas its spectral index seems to have consistently remained very steep. Combined with the non-thermal origin of its radio emission and a rather high luminosity (log $\nu L_{\nu, \textrm{int}}$ = 40.17 erg s$^{-1}$), it seems safe to assume that it really is a CSS-like source. The star formation indicators also support this conclusion. The W3-W4 colour is at the threshold with 2.57, but $q22$ is very low (0.11) and $S_{\textrm{int}}$ is some tens of mJy higher than $S_{\textrm{W3}}$ or $S_{\textrm{5.2GHz, CDFS}}$, confirming that the radio emission cannot be explained by star formation only.
\subsubsection{J0814+5609}
\label{sec:J0814}
J0814+5609 (SDSS J081432.11+560956.6) is a radio-loud NLS1 galaxy ($z$ = 0.510) that shows an impressive core-jet morphology, as seen in Fig.~\ref{fig:J0814spind}. The peak flux density is 25.85~mJy beam$^{-1}$, the integrated flux density is 29.27~mJy, and the flux densities of the tapered maps (Figs.~\ref{fig:J0814-90k} and \ref{fig:J0814-60k}) are less than a mJy higher. It is one of the most luminous sources in our sample with log $\nu L_{\nu, \textrm{int}}$ = 42.19~erg s$^{-1}$. The core shows a flat spectral index, very close to zero (Fig.~\ref{fig:J0814spind}), and the jet throughout shows spectral indices in agreement with optically thin synchrotron emission, or slightly steeper. The spectral index of the brightening in the middle of the jet (RA 08:14:32.48, Dec 56:09:55.07) is -0.45. This region seems to be slightly flatter than the jet in general, possibly due to interaction or shocks that are also responsible for the increased radio emission. Interestingly, there seems to be a hotspot with an inverted spectral index near the unresolved core, but taking into account the edge effects, and that it lies within only a 6$\sigma$ contour, its presence needs to be verified with further observations. Most of the extended emission is clearly outside the seemingly featureless host galaxy, as seen in Figs.~\ref{fig:J0814-host-zoom} and \ref{fig:J0814-host}. The SDSS $r$ band 25.0~mag arcsec$^{-2}$ isophotal semimajor axis of the host is 8.6~kpc, whereas the maximum extent of the jet toward the south-east is $\sim$34~kpc. The jet exhibits a considerably bent structure outside the host, but without a proper characterisation of the morphology of both the jet and the counter-jet, it is impossible to say whether the bend is due to jet precession, or to dissipation of the jet when it fails to be fed enough to maintain its direction. The [O~III] lines of J0814+5609 show a significant blueshift of -590~km s$^{-1}$, but no wing component is present \citep{2021berton1}.
In the tapered maps in Figs.~\ref{fig:J0814-90k} and \ref{fig:J0814-60k} a counter-jet is also seen, but due to the decreased resolution caused by tapering, nothing can be said about its morphology. However, since both jets are detected, we can estimate the inclination of the system using their flux density ratio \citep{1979scheuer1}:
\begin{equation}
\label{eq:jetcounterjet}
\frac{S_{\textrm{j}}}{S_{\textrm{cj}}} = \left( \frac{1 + \beta \cos \theta}{1 - \beta \cos \theta} \right)^{2-\alpha_{\textrm{jet}}}
\end{equation}
where $S_{\textrm{j}}$ and $S_{\textrm{cj}}$ are the flux densities of the jet and the counter-jet, respectively, $\beta$ = $v/c$ is the speed of the jet relative to the speed of light, $\theta$ is the viewing angle, and $\alpha_{\textrm{jet}}$ is the spectral index of the jet, which we assume to be -1. We fitted the core of the 60k$\lambda$ tapered map with a Gaussian component using the \texttt{imfit} task in CASA, and measured the flux densities of the jet and the counter-jet from the residual map. The flux density ratio we obtain is $\sim$5.34. Even if a jet were relativistic when launched, it usually decelerates to non-relativistic velocities at kpc scales. Unfortunately, we do not have a way to estimate its velocity, but based on its morphology, which does not stay properly collimated for long, we can assume that the jet is not relativistic at kpc scales. Estimating $\theta$ using reasonable velocities can still give us information about the inclination of the source and the range of possible deprojected extents of the jets. Assuming a very moderate $\beta$ = 0.3, the viewing angle is 24.9\degree, and assuming $\beta$ = 0.5 gives $\theta$ = 57.0\degree. Without more exact measurements of the jet speed, for example, with higher resolution monitoring closer to the nucleus, we cannot determine the inclination accurately. However, these approximate results show that J0814+5609 cannot be a very low inclination source, whose permitted emission lines would be narrow due to orientation effects, but is a real NLS1. The projected length of the south-east jet in the 60k$\lambda$ tapered map is 41~kpc, and of the western counter-jet 61~kpc. It is interesting that the counter-jet seems more extended than the approaching jet. This might happen due to the bend that shortens the maximum extent of the approaching jet, or it might be that the counter-jet is simply straighter.
Assuming a viewing angle of 24.9\degree, the deprojected sizes of the jet and the counter-jet would be 97.4 and 144.9~kpc, respectively, and with $\theta$ = 57.0\degree the deprojected sizes would be 48.9 and 72.7~kpc. With a somewhat unrealistically high $\beta$ of 0.8, $\theta$ would be 70.1\degree, and the deprojected sizes still 43.6 and 64.9~kpc, making J0814+5609 the most extended NLS1 known to date \citep{2018rakshit1}.
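The jet/counter-jet calculation above can be sketched as follows. Only the flux ratio ($\sim$5.34), $\alpha_{\textrm{jet}}$ = -1, and the projected lengths (41 and 61~kpc) come from the text; the helper functions are our own.

```python
import math

def viewing_angle_deg(ratio, beta, alpha_jet=-1.0):
    """Solve S_j/S_cj = ((1 + b cos t)/(1 - b cos t))^(2 - alpha) for t."""
    r = ratio ** (1.0 / (2.0 - alpha_jet))      # r = (1 + x)/(1 - x)
    x = (r - 1.0) / (r + 1.0)                   # x = beta * cos(theta)
    return math.degrees(math.acos(x / beta))

def deprojected_kpc(projected_kpc, theta_deg):
    """Deprojected jet length for a jet at viewing angle theta."""
    return projected_kpc / math.sin(math.radians(theta_deg))

for beta in (0.3, 0.5, 0.8):
    theta = viewing_angle_deg(5.34, beta)
    print(f"beta={beta}: theta={theta:.1f} deg, "
          f"jet={deprojected_kpc(41.0, theta):.1f} kpc, "
          f"counter-jet={deprojected_kpc(61.0, theta):.1f} kpc")
```

For $\beta$ = 0.3, 0.5, and 0.8 this recovers the quoted viewing angles (24.9\degree, 57.0\degree, 70.1\degree) and deprojected lengths to within rounding.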
As expected for a jetted source, the mid-infrared emission is swamped by the radio emission in this source: the W3-W4 colour is 1.98, $q22$ is -0.99, and $S_{\textrm{int}}$ and $S_{\textrm{1.4GHz, JVLA}}$ are both considerably higher than the respective $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{W3}}$.
In previous studies J0814+5609 has been found to show a flat spectral index all the way from 325~MHz to at least 8.4~GHz. \citet{2015gu1} observed J0814+5609 with the VLBA at 5~GHz, and also analysed archival data at 2.3, 5.0, and 8.4~GHz. The source exhibits an elongated structure toward the east at all frequencies, consistent with our map, but in their observations any larger-scale extended emission was probably resolved out. Furthermore, they detect considerable variability at 5~GHz, which, assuming a flat spectral index above 8.4~GHz, seems to be present also at higher frequencies, since \citet{2016doi1} detected the source with the JVN at 22~GHz with a flux density of 117~mJy.
\subsubsection{J0913+3658}
J0913+3658 (RX J0913.2+3658) is a radio-quiet NLS1 at $z$ = 0.107 with sub-mJy radio emission: $S_{\textrm{int}}$ is only 0.30~mJy, and $S_{\textrm{90k}\lambda, \textrm{int}}$ is 0.41~mJy. The total spectral index of the source is -0.77 and that of the core -0.99 (Fig.~\ref{fig:J0913spind}). The source was earlier observed at 1.4~GHz in FIRST with a flux density of 1.08~mJy \citep{1995becker1}. This gives a spectral index of -0.78 between 1.4 and 5.2~GHz, in agreement with our $\alpha$ map. The tapered maps in Figs.~\ref{fig:J0913-90k} and \ref{fig:J0913-60k} show diffuse emission, more extended than what is seen in the normal map (Fig.~\ref{fig:J0913spind}). The source has a low luminosity (log $\nu L_{\nu, \textrm{int}}$ = 38.66 erg s$^{-1}$), a reddish W3-W4 colour (2.55), and quite a high $q22$ (1.67). Its $S_{\textrm{5.2GHz, CDFS}}$ is about double its $S_{\textrm{int}}$, whereas its W3 flux density is 15.7~mJy higher than its $S_{\textrm{1.4GHz, JVLA}}$. This is indicative of strong star formation, which is probably the predominant source of radio emission in this source. Nonetheless, an [O~III] wing is present with a velocity of -307~km s$^{-1}$\ and a FWHM of 800~km s$^{-1}$, possibly also indicating the presence of polar dust in this source. Its host galaxy, seen in Fig.~\ref{fig:J0913-host}, appears to be a barred spiral galaxy. The radio emission is concentrated around the centre, possibly the bulge, and partly along the bar to the south of the bulge. The overlapping optical and radio features are in agreement with the star formation scenario. The host galaxy might be accompanied by a small companion on its south side, but this should be confirmed by means of spectroscopic observations.
\subsubsection{J0925+5217}
J0925+5217 (Mrk 110) is a radio-quiet NLS1 at $z$ = 0.035 with quite a low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 38.72~erg s$^{-1}$. At 5.2~GHz we detected a peak flux density of 1.70~mJy beam$^{-1}$, an integrated flux density of 2.25~mJy for the central emitting region, and of 0.17~mJy for the northern emitting region (Fig.~\ref{fig:J0925spind}). The flux density of the tapered maps is 3.10~mJy. J0925+5217 shows considerable extended emission in the radio maps. In the normal map (Fig.~\ref{fig:J0925spind}) a separate emitting region is seen toward the north, and in the tapered maps (Figs.~\ref{fig:J0925-90k} and \ref{fig:J0925-60k}) the radio emission is especially extended toward the south. The projected separation of the northern region is 1.7~kpc, and it corresponds to the structures seen already by \citet{1993miller1} at 4.86~GHz with the VLA. They detect a total flux density of 3.8~mJy, slightly higher than ours, and propose that the radio morphology is consistent with circumnuclear star formation. \citet{1994kellermann1} also detect the same structure at 5~GHz with the VLA and discuss its nature as ``a highly curved jet or ringlike structure''. The extended northern emission is also seen at 1.4~GHz with the VLA, but only the central source is seen at 8.4~GHz \citep{1998kukula1}, indicating it might have a steep spectral index. At 1.7~GHz with the VLBA only an unresolved core with a flux density of 1.2~mJy was detected \citep{2013doi1}. They derived the brightness temperature to be at least 10$^{7.8}$~K, indicating a non-thermal origin.
In the 60k$\lambda$ tapered map the extent of the radio emission toward the north is 2.8~kpc, and toward the south 4.7~kpc. Possible emission toward the west at a separation of 3.3~kpc is also seen. It is hard to say whether it is real, but it is also seen in the normal map, at a 5$\sigma$ level. The SDSS $r$ band 25~mag arcsec$^{-2}$ isophotal semimajor and semiminor axes of the host galaxy are 8.9~kpc and 4.9~kpc, and the major axis of the galaxy is almost perpendicular to the radio emission. Toward the south the radio emission reaches the very outskirts of the host, even though it looks like the emission is more extended in radio than in optical. This is because the outskirts of the galaxy are faint, and a higher contrast image would be needed to see them clearly. The host itself does not seem to exhibit any clear features (Fig.~\ref{fig:J0925-host}), although in some higher contrast images a disturbed morphology with apparent tidal features can be seen, possibly indicating a recent merger. The other bright source seen in the optical image is a foreground star \citep{1988hutchings1}.
In our $\alpha$ map, the northern region has an inverted spectral index, which may not be real but may instead be caused by the low S/N ratio in such a small emitting region. The core is flat with a spectral index of -0.33, consistent with the non-thermal origin of the radio emission. J0925+5217 was detected in FIRST with a flux density of 8.19~mJy, yielding a very steep spectral index of -0.93 between 1.4 and 5.2~GHz. Thus, whereas the core is flat, the overall spectral index of the source seems to be rather steep. However, if the radio emission is dominated by the central AGN, variability can explain the discrepancy, since the observations are not simultaneous.
\citet{1993miller1} identified the extended radio emission toward the north as possible circumnuclear star formation, and there is indeed an excess of mid-infrared emission compared to radio emission: the W3 flux density is 59.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, $q22$ is 1.35, and $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are roughly the same. However, the W3-W4 colour is not very red, with a value of 2.01. The radio emission probably has contributions from both star formation and the AGN, since a non-thermal core was also detected and the spectral index of the central region is flat, which is uncharacteristic of a star-forming region. However, based on these data it is impossible to determine the origin of the large-scale radio emission seen in the tapered maps. Its radio morphology is not common for star-forming regions, unless it is somehow related to the possible recent merger of the host galaxy. Furthermore, an accretion disk wind causing an outflow of the BLR has been detected \citep{2003kollatschny1}, though the [O~III] lines are not shifted nor show a wing, suggesting that the wind has not affected the NLR, at least yet, or that it is not energetic enough to cause bulk motion of the NLR. Further observations of J0925+5217 are needed to determine the origins of the radio emission at different scales.
\subsubsection{J0926+1244}
J0926+1244 (Mrk 705) is a radio-quiet NLS1 ($z$ = 0.029) with a low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 38.52 erg s$^{-1}$. The peak flux density is 2.81~mJy beam$^{-1}$, the integrated flux density 3.36~mJy, and the flux densities of the tapered maps are minimally higher. The radio morphology seen in Fig.~\ref{fig:J0926spind} looks peculiar: the central component seems to be surrounded by patches of radio emission on all sides. The spectral index of the core is -0.75, and the total spectral index is -0.59. The FIRST flux density of J0926+1244 is 8.52~mJy, giving a 1.4-5.2~GHz spectral index of -0.71, well in agreement with our result. The spectral indices of the surrounding patchy emitting regions seem flat, but are most likely not reliable due to edge effects and the low S/N ratio. The extended emission seen in the map lies approximately within a kpc of the centre, except for the south-west emission, better seen in the tapered maps in Figs.~\ref{fig:J0926-90k} and \ref{fig:J0926-60k}, which is more extended.
VLBA observations at 1.7~GHz show a core-jet structure, with the jet extending 26~pc east of the core \citep{2013doi1}. The total flux density detected was 3.3~mJy, so a significant fraction of the flux was resolved out or is seen only at larger scales. They derived a brightness temperature of at least $10^{7.9}$~K, confirming the non-thermal nature of the emission. In archival Multi-Element Radio Linked Interferometer Network (MERLIN) observations at 1.7~GHz at 150~mas resolution, only a single component with a flux density of 5.7~mJy beam$^{-1}$ was seen. VLA observations at 8.46~GHz also yield only an unresolved core, with a flux density of 2~mJy \citep{2001schmitt1}. This gives a 5.2-8.46~GHz spectral index of -1.07, steeper than what we observe. It should be noted, however, that the observations are more than a decade apart, and since the source hosts a jet, variability is likely to affect the flux densities. Interestingly, the source does not show a shift of the [O~III] lines \citep{2021berton1}, possibly due to the small scale or low power of the jet.
The host galaxy of J0926+1244 (Fig.~\ref{fig:J0926-host}) was indeed classified as (R)SA(r)0+ in the CVRHS classification scheme by \citet{2017buta1}. This classification indicates that it is an SA-type spiral galaxy with both an outer and an inner ring. The 5.2~GHz radio emission is concentrated within the innermost part of the host, probably the bulge. The star formation rate of the host galaxy based on 11.3~$\mu$m PAH features was estimated to be as low as 0.6 $\pm$ 0.2~$M_{\odot}$ yr$^{-1}$ \citep{2019martinezparedes1}. On the other hand, based on the relation between the star formation rate and the luminosity of the [C~II] line \citep{2012sargsyan1,2014sargsyan1}, it can be estimated to be much higher, 6.6~$M_{\odot}$ yr$^{-1}$. Our star formation diagnostics support the presence of star formation, as the W3 flux density is $\sim$101.6~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, $q22$ is 1.54, and $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are almost the same. However, the mid-infrared colour, W3-W4 = 2.36, does not necessarily indicate strong star formation. The [O~III] lines show a low-velocity (-79~km s$^{-1}$) but turbulent (FWHM = 846~km s$^{-1}$) wing component, and thus the contribution of polar, or equatorial, dust cannot be ruled out either.
In conclusion, despite the presence of a small-scale jet, the star formation seems to be the predominant source of radio emission in this source. However, further observations are needed to disentangle the contribution of different components.
\subsubsection{J0937+3615}
J0937+3615 (SDSS J093703.03+361537.2) is a radio-quiet NLS1 at $z$ = 0.180 with a moderate integrated flux density of 1.03~mJy, with the flux densities of the tapered maps (Figs.~\ref{fig:J0937-90k} and \ref{fig:J0937-60k}) being slightly lower. It is a moderate luminosity source with log $\nu L_{\nu, \textrm{int}}$ = 39.71 erg s$^{-1}$ and does not show any distinct morphology in any of the maps. The radio emission is constrained within its host galaxy, which does not show distinguishable features either. J0937+3615 was detected in FIRST with a flux density of 3.21~mJy, yielding a 1.4-5.2~GHz spectral index of -0.87. This is in agreement with our $\alpha$ map in Fig.~\ref{fig:J0937spind}: the total spectral index is -0.77, and the core spectral index is -0.95. \citet{2015caccianiga1} estimated that the star formation rate in J0937+3615 is extremely high (83~$M_{\odot}$ yr$^{-1}$), and possibly the predominant source of its radio emission. Other star formation diagnostics support this conclusion: its W3-W4 colour is 2.66, $q22$ is 1.34, $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are roughly the same, and $S_{\textrm{W3}}$ is 15.2~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$. However, it might also exhibit nuclear winds and polar dust, since it has an [O~III] wing with a velocity of -250~km s$^{-1}$\ and a FWHM of 1000~km s$^{-1}$.
\subsubsection{J0952-0136}
J0952-0136 (Mrk 1239) is a radio-quiet NLS1 at $z$ = 0.020 that has been widely studied in the past. It has one of the highest flux densities in our sample: the peak flux density is 17.49~mJy beam$^{-1}$, the integrated flux density is 18.54~mJy, and the flux densities of the tapered maps are about 0.5~mJy higher. Due to its low redshift this translates to a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.04 erg s$^{-1}$. Our radio map (Fig.~\ref{fig:J0952spind}) shows a compact component and almost symmetrical extended emission toward the north-east and south-west at a position angle of 32\degree. The extended emission is particularly pronounced in the tapered maps in Figs.~\ref{fig:J0952-90k} and \ref{fig:J0952-60k}. The whole extent of the emission in the 60k$\lambda$ tapered map is 4.2~kpc. The spectral index of the source is very steep: the total spectral index is -0.98, and the core spectral index is -1.07.
J0952-0136 was detected at 2.25~GHz with the Parkes-Tidbinbilla Interferometer (PTI) with a flux density of 32~mJy \citep{1990norris1}, and with the VLA in A configuration at frequencies of 1.5, 5, and 8.4~GHz with flux densities of 56.5, 19.5, and 7.9~mJy, respectively \citep{1995ulvestad1}. The source remained unresolved in all these observations. These observations point to a spectral index steepening toward higher frequencies, a feature possibly due to the combination of different resolutions at each frequency and the dominance of diffuse components in this source \citep{2013doi1}. Our 5.2~GHz flux density is in agreement with the 5~GHz observation of \citet{1995ulvestad1}, and the FIRST detection of the source (59.84~mJy) at 1.4~GHz is roughly in agreement with their 1.5~GHz result.
\citet{2010orienti1} reduced archival VLA data of J0952-0136 at 8.4 and 14.9~GHz, and found an unresolved morphology at both frequencies. Furthermore, they produced a VLBA map at 1.6~GHz and were able to see two separate components with a separation of $\sim$30~pc at a position angle of 40\degree. The total flux density they obtained is only 10.2~mJy, meaning that most of the flux is missing at the VLBA scale.
Later, \citet{2013doi1} also observed the source with the VLBA at 1.7~GHz, obtaining a core flux density of 4.8~mJy and a total flux density of 20~mJy. Their map shows an elongated, 40~pc long structure consisting of several components at a position angle of 47\degree. They estimated the brightness temperature to be 10$^{7.9}$~K, confirming the non-thermal origin of the emission. They also claimed to have found a corresponding structure at 8.5~GHz using archival VLA observations. They further studied the source in \citet{2015doi1}, where they reported that it shows a two-sided jet-like morphology, extending $\sim$34~pc in both directions. They also re-analysed archival 1.6 and 8.5~GHz VLA data and found an elongation on one side of 1.4~kpc and 85~pc, respectively, with a position angle matching the pc-scale jet. Since most of the radiated power is concentrated within the innermost 100~pc, they conclude it is similar to Fanaroff-Riley I radio galaxies, i.e. edge-darkened\footnote{Assuming a spectral index of -1, the diffuse emission luminosity threshold between Fanaroff-Riley I and II radio galaxies at 5~GHz is $\sim$log $\nu L_{\nu}$ = 39.5~erg s$^{-1}$.}. They suggest that the edge-darkened morphology may be caused by the subsonic speed of the advancing radio lobes, and that the origin of the diffuse radio emission might be a dissipated jet, not powerful enough to penetrate the dense circumnuclear matter. J0952-0136 was not detected at 22~GHz with the JVN \citep{2016doi1}, which is to be expected given its steep spectral index.
It seems plausible that we are seeing the same structure that was detected in previous observations, in which case the main origin of the radio emission would be a low-power two-sided jet. However, the presence of star formation in this galaxy has also been studied quite extensively. The host galaxy (Fig.~\ref{fig:J0952-host}) was originally classified as compact \citep{1992whittle1}, and later as S0/E \citep{1994heisler1}. Its SDSS $r$ band 25~mag arcsec$^{-2}$ isophotal semimajor axis is 2.9~kpc, and it is almost perpendicular to the radio axis. \citet{2003rodriguezardila1} studied the near-infrared spectrum of J0952-0136 and detected a 3.43~$\mu$m feature whose origin remained unclear. They re-observed the 0.8-4.5~$\mu$m near-infrared spectrum and did not detect the 3.43~$\mu$m line, or any PAH features \citep{2006rodriguezardila1}. Based on the 11.3~$\mu$m PAH feature, \citet{2017ruscheldutra1} estimated the star formation rate to be less than 7.5~$M_{\odot}$ yr$^{-1}$. Consistent with this, its star formation rate was estimated to be 3.47~$M_{\odot}$ yr$^{-1}$ based on SED modelling \citep{2016gruppioni1}. \citet{2020buhariwalla1} modelled the X-ray spectrum of J0952-0136 with alternative models and found that the $<$ 3~keV emission cannot be modelled without a starburst component, whose extent they estimate to be less than 400~pc from the nucleus. This is interesting since the other star formation proxies do not predict starburst-level star formation. However, the PAH emission can be suppressed in the presence of an AGN \citep{2012lamassa1}. The presence of star formation seems to be supported by the W3 flux density, which is 520.2~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and the $q22$ of 1.16, while $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are roughly the same.
On the other hand, the W3-W4 colour is not very red (2.02), and the presence of an [O~III] wing with a velocity of -686~km s$^{-1}$\ and a FWHM of 1586~km s$^{-1}$\ indicates that AGN-heated dust could be partly responsible for the very high mid-infrared flux densities. In \citet{1996mulchaey1} the [O~III] emission was found to be clearly elongated in the northeast-southwest direction, matching the position angle of the radio emission.
\subsubsection{J1034+3938}
J1034+3938 (KUG 1031+398) is a radio-quiet NLS1 at $z$ = 0.042 with a peak flux density of 7.05~mJy beam$^{-1}$, an integrated flux density of 7.37~mJy, and tapered map flux densities only slightly higher. It has a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.24 erg s$^{-1}$. Its FIRST flux density is 25.94~mJy, yielding quite a steep 1.4-5.2~GHz spectral index of -0.96. This is consistent with the $\alpha$ map (Fig.~\ref{fig:J1034spind}) of the source: the total spectral index of the source is -1.17, and the core spectral index is -1.13. The VLBA flux density at 1.4~GHz was found to be 8.34~mJy \citep{2014deller1}, indicating that most of the emission was resolved out and thus comes from larger scales. The normal map (Fig.~\ref{fig:J1034spind}) shows a morphology only slightly extended toward the south-east. In the tapered maps (Figs.~\ref{fig:J1034-90k} and \ref{fig:J1034-60k}) an asymmetric east-west structure is seen. Especially the eastern extended emission seems to trace the morphology of the spiral host galaxy, seen in Fig.~\ref{fig:J1034-host}, as it seems to overlap with a spiral arm. The most obvious explanation for the extended emission seems to be star formation, even though, using simple stellar population synthesis, the star formation rate of J1034+3938 was found to be as low as 0.23~$M_{\odot}$ yr$^{-1}$ \citep{2010bian1}. The W3 flux density is indeed 33.7~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, but this could also be explained by dust heated by the AGN. \citet{2019smethurst1} detected an [O~III] outflow in this source, and estimate the outflow rate to be $>$ 0.07~$M_{\odot}$ yr$^{-1}$. \citet{2021berton1} did not detect a blueshift of the [O~III] lines, but an [O~III] wing is present with a velocity of -253~km s$^{-1}$, and a FWHM of 901~km s$^{-1}$. Other star formation proxies do not indicate strong activity: W3-W4 is 2.14, $q22$ is 0.57, and $S_{\textrm{5.2GHz, CDFS}}$ is 5.4~mJy lower than $S_{\textrm{int}}$.
Thus the radio emission seems to be dominated by the AGN, with star formation, based on the morphology, making a small contribution.
J1034+3938 is one of the NLS1s showing the most prominent soft excess in X-rays \citep{1995pounds1}, and interestingly, it is one of the rare AGN where a quasi-periodic oscillation (QPO) has been reported in X-rays, and, in fact, the only source showing the QPO with high significance and repeatability \citep[][ and references therein]{2021jin1}.
\subsubsection{J1038+4227}
J1038+4227 (SDSS J103859.58+422742.3) is a radio-loud source at $z$ = 0.220 with very pronounced diffuse extended emission (Fig.~\ref{fig:J1038}). Its integrated flux density is 3.75~mJy, and $S_{\textrm{90k}\lambda, \textrm{int}}$ is 5.36~mJy, showing that a large fraction of the emission has very low surface brightness. Its central region exhibits a steep spectral index with $\alpha$ = -0.77, and the total spectral index is -0.89. Unfortunately, the extended emission does not have a high enough S/N ratio to estimate the spectral index outside the centre. J1038+4227 was detected at 1.4~GHz in FIRST and in the NVSS with flux densities of 2.44~mJy and 7~mJy, respectively. The FIRST measurement likely underestimates the flux density because FIRST might not have had high enough sensitivity to detect the faint extended emission.
The star formation diagnostics do not indicate strong activity: the W3-W4 colour is 2.29, $q22$ is 0.57, and both $S_{\textrm{W3}}$ (9.0~mJy) and $S_{\textrm{5.2GHz, CDFS}}$ (0.4~mJy) are lower than the respective JVLA values. However, this might be partly due to the sensitivity limit of WISE, whose 5$\sigma$ point source sensitivity is estimated to be 0.86 and 5.4~mJy in the W3 and W4 bands, respectively \citep{2010wright1}. Considering the diffuse nature of the emission seen in J1038+4227, a significant fraction of the mid-infrared emission could have gone undetected, leading to underestimated mid-infrared flux densities.
The spatial extent of the emission is huge: its most extended diameter in the normal map (Fig.~\ref{fig:J1038spind}) is 9.4~arcsec, which corresponds to 33.4~kpc, and in the 60k$\lambda$ tapered map 12.5~arcsec, corresponding to 44.4~kpc. An SDSS $r$ band 25~mag arcsec$^{-2}$ isophotal major axis estimate for its host galaxy is only $\sim$4~arcsec, or 14.2~kpc, clearly smaller than the extent of the radio emission. In the image of the host galaxy, overlaid with the 90k$\lambda$ tapered radio map (Fig.~\ref{fig:J1038-host}), it can be seen that the radio emission reaches well outside the host, forming a halo-like structure.
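The arcsec-to-kpc conversions above can be checked with the projected linear scale at the source redshift. As with the luminosities, a flat $\Lambda$CDM cosmology with $H_0$ = 70~km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\textrm{m}}$ = 0.3 is assumed here, which may differ slightly from the paper's adopted values; the function name is our own.

```python
import math

def kpc_per_arcsec(z, h0=70.0, om=0.3, n=1000):
    """Projected linear scale at redshift z (Simpson integration of 1/E)."""
    c = 299792.458                            # km/s
    e = lambda zz: math.sqrt(om * (1.0 + zz)**3 + (1.0 - om))
    h = z / n
    s = 1.0 / e(0.0) + 1.0 / e(z)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) / e(i * h)
    d_c = (c / h0) * s * h / 3.0              # comoving distance, Mpc
    d_a = d_c / (1.0 + z)                     # angular diameter distance, Mpc
    return d_a * 1.0e3 * math.pi / (180.0 * 3600.0)   # kpc per arcsec

# J1038+4227 at z = 0.220: 9.4 and 12.5 arcsec extents
scale = kpc_per_arcsec(0.220)
print(round(9.4 * scale, 1), round(12.5 * scale, 1))   # ~33.4 and ~44.4 kpc
```

Under these assumptions the scale is $\sim$3.55~kpc arcsec$^{-1}$ at $z$ = 0.220, reproducing the quoted 33.4 and 44.4~kpc.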
The source is bright, with log $\nu L_{\nu, \textrm{int}}$ = 40.67 erg s$^{-1}$. In the normal map most of the emission comes from the core, and thus its luminosity is at the CSS threshold. The steep spectral index of the core would also agree with this classification. The origin of the extended emission remains unknown, and this source certainly needs to be studied further to reveal its nature.
\subsubsection{J1047+4725}
J1047+4725 (SDSS J104732.68+472532.0) was cleaned using uniform weighting to suppress the very strong sidelobes, but the normal radio map (Fig.~\ref{fig:J1047spind}) still suffers from some artefacts around its peripheries. J1047+4725, at $z$ = 0.799, is very radio-loud and the most luminous source in our sample with log $\nu L_{\nu, \textrm{int}}$ = 43.77~erg s$^{-1}$. Its peak flux density is 190.90~mJy beam$^{-1}$, and the integrated flux density is 381.75~mJy, with the integrated flux densities of the tapered maps being approximately the same; thus about half of its radio emission comes from the unresolved core. Its W3-W4 colour is red (2.69), but clearly the AGN is the main origin of the radio emission, since $q22$ is -2.18, and $S_{\textrm{int}}$ and $S_{\textrm{1.4GHz, JVLA}}$ are both several hundred mJy higher than the corresponding $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{W3}}$ values. Its host galaxy does not exhibit any clear structures, and the radio emission seems to be confined within it (Fig.~\ref{fig:J1047-host}).
Several radio measurements of J1047+4725 exist in the literature. \citet{2015gu1} observed it at 5~GHz with the VLBA, but were not able to resolve it. However, they also analysed archival VLBA data in which a clear core-jet structure can be seen (see their Fig. 6). Furthermore, at 8.4~GHz, observed with the JVLA, the source exhibits a two-sided jet structure, implying a large viewing angle. However, the [O~III] lines are not shifted, so the jets have not been powerful enough to cause bulk motion in the NLR. At 22.4~GHz only an unresolved core is seen. \citet{2015gu1} also note that, based on archival data extending all the way down to 74~MHz \citep{2007cohen1}, the break frequency of the radio spectrum seems to lie in the MHz regime, as often seen in CSS sources \citep[see Fig. 17 in][]{2015gu1}. Above 1.4~GHz the spectrum steepens to -0.5/-0.6, which is consistent with our $\alpha$ map in Fig.~\ref{fig:J1047spind}, which has a core spectral index of -0.63 and a total spectral index of -0.75, and also with the CSS characteristics \citep{1998odea1}. It thus seems evident that J1047+4725 is a jetted NLS1 seen at quite a large angle, and it could also be classified as a CSS source.
\subsubsection{J1048+2222}
J1048+2222 (SDSS J104816.57+222238.9) is a radio-quiet NLS1 at $z$ = 0.330 with a very low integrated flux density of 0.24~mJy. The flux density of the 60k$\lambda$ tapered map is 0.47~mJy, almost double that of the normal map. Due to its redshift, the low flux density corresponds to a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.75~erg s$^{-1}$. It was detected in FIRST with a flux density of 1.49~mJy. This gives a very steep spectral index of -1.39 between 1.4 and 5.2~GHz. Our $\alpha$ map (Fig.~\ref{fig:J1048spind}) does not suggest so steep an index, as the total spectral index is -0.56 and the core spectral index is -0.71. The map shows flattening toward the north, but in such a faint source this can be artificial. The normal map does not show considerable extended structure, but the tapered maps (Figs.~\ref{fig:J1048-90k} and \ref{fig:J1048-60k}) show an extensive, though very faint, region of emission toward the south-west. The maximum extent of the emission is 42.8~kpc, which places it clearly outside the host galaxy (see Fig.~\ref{fig:J1048-host}), whose SDSS $r$ band 25~mag arcsec$^{-2}$ isophotal semimajor axis is 7.5~kpc. The bulk of the radio emission is confined within the host. The origin of the extended emission remains unclear based on the currently available data.
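The two-point spectral indices quoted throughout this section follow the convention $S_{\nu} \propto \nu^{\alpha}$. A minimal sketch, using the J1048+2222 measurements above (the FIRST 1.4~GHz and our integrated 5.2~GHz flux densities); the function name is our own:

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Two-point spectral index alpha = log(S2/S1) / log(nu2/nu1)."""
    return math.log10(s2_mjy / s1_mjy) / math.log10(nu2_ghz / nu1_ghz)

# J1048+2222: 1.49 mJy at 1.4 GHz (FIRST) vs 0.24 mJy at 5.2 GHz
print(round(spectral_index(1.49, 1.4, 0.24, 5.2), 2))   # -1.39
```

Note that such inter-survey indices mix epochs and resolutions, so variability and resolved-out flux can bias them, as discussed for several sources in this section.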
The star formation diagnostics of this source are not conclusive either. The $q22$ parameter is 1.09, the W3 flux density is 5.1~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are about the same, but the W3-W4 colour is only 2.2. J1048+2222 is a blue outlier with the [O~III] lines shifted by -209~km s$^{-1}$, indicating that it might host a jet or a powerful nuclear outflow. The existence of an [O~III] wing with a velocity of -531~km s$^{-1}$\ and a FWHM of 1239~km s$^{-1}$\ indicates that a nuclear wind is present in this source. The blueshifted [O~III] lines, inconclusive star formation diagnostics, and the morphology of the radio emission indicate that the AGN is responsible for most of the radio emission seen in this source.
\subsubsection{J1102+2239}
J1102+2239 (SDSS J110223.38+223920.7) is a radio-quiet NLS1 with a low integrated flux density of 0.70~mJy, and a slightly higher 60k$\lambda$ tapered map flux density of 1.22~mJy. Owing to its redshift of 0.453, its luminosity is quite high, with log $\nu L_{\nu, \textrm{int}}$ = 40.47~erg s$^{-1}$. It was considered a possible gamma-ray emitting NLS1 in \citet{2011foschini1}, but the detection has not been confirmed. Its only archival radio detection is 1.80~mJy in FIRST. This gives a 1.4-5.2~GHz spectral index of -0.72, in agreement with the $\alpha$ map of the source (Fig.~\ref{fig:J1102spind}), which has a total spectral index of -0.76. The $\alpha$ map actually indicates even steeper spectral indices, flattening toward the north, but these might be edge effects combined with a low S/N ratio. However, the source does exhibit blueshifted [O~III] lines with a velocity of -565~km s$^{-1}$\ and a FWHM of 880~km s$^{-1}$, as well as an [O~III] wing with a velocity and FWHM of -750 and 1320~km s$^{-1}$, respectively. The NLR thus experiences bulk motions, probably induced by a jet or a wind.
The normal map in Fig.~\ref{fig:J1102spind} is mostly featureless. The tapered maps (Figs.~\ref{fig:J1102-90k} and \ref{fig:J1102-60k}) show extended but faint radio emission toward the west--south-west, but the emission is so weak that we cannot be certain it is real. The host is a late-type galaxy, significantly perturbed by merging with another disk galaxy \citep{2020olguiniglesias1}. \citet{2020olguiniglesias1} also detect an H~II region that, based on its spectrum, belongs to the interacting system. Indeed, \citet{2015caccianiga1} found the star formation rate of this source to be at a starburst level, 302~$M_{\odot}$ yr$^{-1}$. This is somewhat supported by the star formation diagnostics: the W3-W4 colour is 2.66, $S_{\textrm{W3}}$ is 3.6~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are comparable, but $q22$ is only 0.81.
Based on these data, with the exception of the high luminosity and the outflow, it seems like star formation has a major contribution to the radio emission of J1102+2239. In principle, the [O~III] outflow could have been triggered by enhanced star formation that causes stellar winds resulting in outflows, or be a feature resulting from the interaction. However, the velocities and FWHMs of outflows induced by these processes are usually more moderate, from a few hundred to $\sim$500~km s$^{-1}$, than those seen in AGN-induced outflows \citep{2012harrison1}. It thus seems likely that the outflow is of AGN origin. This does not mean that a jet needs to be present, but taking into account the luminosity, which is higher than usually seen in pure starburst galaxies, it seems that the AGN is also required to explain the observed radio emission.
\subsubsection{J1110+3653}
J1110+3653 (SDSS J111005.03+365336.3) is a radio-loud NLS1 at $z$ = 0.629 with a peak flux density of 8.12~mJy beam$^{-1}$, an integrated flux density of 9.34~mJy, and the flux densities of the tapered maps about 0.5~mJy higher. It is one of the most luminous sources in our sample with log $\nu L_{\nu, \textrm{int}}$ = 41.89~erg s$^{-1}$. Its radio map exhibits considerably extended emission toward south - south-east (Fig.~\ref{fig:J1110spind}), and the tapered maps (Figs.~\ref{fig:J1110-90k} and \ref{fig:J1110-60k}) show that the source is also extended toward north, but the emission does not seem to be exactly axisymmetric, but rather lopsided to the east. The extent of the southern emission in the normal map is 21.2~kpc, and the north-south extent of the whole emitting region in the 60k$\lambda$ tapered map is 61.5~kpc. The SDSS $r$ band 25.0~mag arcsec$^{-2}$ isophotal semimajor axis of the host is 7.0~kpc, so the radio emission reaches well beyond the host (Fig.~\ref{fig:J1110-host-zoom}).
J1110+3653 has been previously observed in the Westerbork Northern Sky Survey (WENSS) at 325~MHz and in FIRST at 1.4~GHz with flux densities of 45 and 20.83~mJy, respectively. This gives a spectral index of -0.53 between 325~MHz and 1.4~GHz, and -0.61 between 1.4 and 5.2~GHz, so the source seems to have an overall steep radio spectral index. In our $\alpha$ map (Fig.~\ref{fig:J1110spind}) the core of the source is flat, with a spectral index of -0.10. The extended emission is steeper, with spectral indices around -0.5, or steeper.
In 5~GHz observations with the VLBA the extended emission is resolved out and the source shows only an unresolved core with a flux density of 8.8~mJy \citep{2015gu1}. The brightness temperature was found to be at least 10$^{10}$~K, suggesting that non-thermal processes are responsible for the emission, but no extreme Doppler boosting. The source was not detected at 22~GHz with the JVN with a detection limit of 7~mJy \citep{2016doi1}.
J1110+3653 was not detected in the W3 or W4 bands so we cannot say anything about the star formation activity in the host galaxy, as it has not been studied in the past either. However, all things considered, it seems rather evident that the AGN is responsible for the bulk of the observed radio emission. The extended emission resembles a jet that has propagated through the host and is possibly dissipating as it reaches the edge of the host. However, the possibility that the morphology could instead be caused by an outflow cannot be ruled out with these data. Interestingly, J1110+3653 does not show a blueshift in its [O~III] lines, nor an [O~III] wing component.
\subsubsection{J1121+5351}
J1121+5351 (SBS 1118+541) is a radio-quiet NLS1 at $z$ = 0.103 with a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.25~erg s$^{-1}$. Its peak flux density is 1.02~mJy beam$^{-1}$, and the integrated flux density is 1.22~mJy, with the values for the tapered maps being only marginally higher. It was detected in the Low-Frequency Array (LOFAR) survey at 144~MHz with a flux density of 11.49~mJy \citep{2019gurkan1}, and in FIRST with a flux density of 2.14~mJy. These yield a spectral index of -0.74 between 144~MHz and 1.4~GHz, and of -0.43 between 1.4 and 5.2~GHz. The $\alpha$ map of the source in Fig.~\ref{fig:J1121spind} shows a generally steep spectral index: the total spectral index is -0.61, and the core spectral index is -0.76. The normal radio map shows a slightly extended structure, but due to its triangular shape it is hard to say whether it could be artificial and caused by sidelobes. There is no hint of it in the tapered maps (Figs.~\ref{fig:J1121-90k} and \ref{fig:J1121-60k}), which instead show a faint extended structure toward east - south-east. The bulk of the radio emission is confined within the featureless host galaxy (Fig.~\ref{fig:J1121-host}).
The mid-infrared emission of J1121+5351 is enhanced: the W3 band flux density is 34.6~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $q22$ is 1.43. Also $S_{\textrm{5.2GHz, CDFS}}$ and $S_{\textrm{int}}$ are comparable, but the W3-W4 colour is only 2.33. According to the conventional Baldwin-Phillips-Terlevich (BPT) diagnostic diagram \citep{1981baldwin1} this source would be classified as a star-forming galaxy \citep{2013stern1}, but in a revised classification scheme by \citet{2012shirazi1} it is classified as an AGN-dominated galaxy. However, even in the latter scheme it is very close to the threshold of composite galaxies, and taking into account the earlier classification it seems safe to assume that both the AGN and star formation contribute to the ionising continuum. The mid-infrared emission can be further enhanced by AGN-heated dust emission, since an [O~III] wing, with a velocity of -208~km s$^{-1}$\ and a FWHM of 735~km s$^{-1}$, can be seen in its optical spectrum. It seems like star formation could also be the predominant source of the radio emission, but without further studies the possible AGN contribution cannot be ruled out.
\subsubsection{J1138+3653}
J1138+3653 (SDSS J113824.54+365327.1) is a radio-loud NLS1 at $z$ = 0.356 with a peak flux density of 4.37~mJy beam$^{-1}$, an integrated flux density of 4.64~mJy, and similar integrated flux densities in the tapered maps. Its luminosity is quite high with log $\nu L_{\nu, \textrm{int}}$ = 41.02 erg s$^{-1}$. The radio map in Fig.~\ref{fig:J1138spind} does not show any resolved structures, and the radio emission is contained within the host galaxy (Fig.~\ref{fig:J1138-host}). The total spectral index is -0.59 and the core spectral index is -0.68. J1138+3653 was detected in FIRST with a flux density of 12.64~mJy, which gives a spectral index of -0.76 between 1.4 and 5.2~GHz, consistent with what we find.
J1138+3653 shows considerable variability, since its flux density at 5~GHz in VLBA observations, a few years prior to our observations, was found to be 9.5~mJy \citep{2015gu1}. The source was not resolved in these observations. The brightness temperature was found to be at least 10$^{9.5}$~K, indicative of non-thermal processes being responsible for the radio emission, but with only mild beaming, possibly due to a slow jet speed or a large viewing angle. Were the latter true, it would mean that the spatial extent of J1138+3653 must have been very small at the time of the VLBA observations, since the resolution of their observations was $\sim$20~pc. Thus the former explanation seems more plausible. J1138+3653 does not show a shift in its [O~III] lines either, indicating that powerful, large-scale jet action is not present.
Interestingly, J1138+3653 was found to exhibit a significant amount of star formation (14~$M_{\odot}$ yr$^{-1}$, \citealp{2015caccianiga1}), though the star formation diagnostics employed in this paper do not reflect this. The W3-W4 colour is red (2.52), but the $q22$ is -0.28, and $S_{\textrm{int}}$ is $\sim$4.5~mJy higher than $S_{\textrm{5.2GHz, CDFS}}$, and $S_{\textrm{1.4GHz, JVLA}}$ $\sim$8.0~mJy higher than $S_{\textrm{W3}}$, indicating that the radio emission is dominated by the AGN. According to its properties this source seems to be CSS-like.
\subsubsection{J1159+2838}
J1159+2838 (SDSS J115917.32+283814.5) is a radio-quiet NLS1 at $z$ = 0.210 with a peak flux density of 0.79~mJy beam$^{-1}$, and an integrated flux density of 0.84~mJy, with the tapered maps showing approximately the same flux densities. Its radio maps (Fig.~\ref{fig:J1159}) show a slightly elongated structure toward south-west, but only at a 3$\sigma$ significance level. No structures in the featureless host galaxy overlap with these radio contours (Fig.~\ref{fig:J1159-host}). According to the $\alpha$ map (Fig.~\ref{fig:J1159spind}) the spectral index is steep: the total spectral index is -0.71, and the core spectral index is -0.93. Calculating the spectral index between 1.4 and 5.2~GHz using the archival FIRST detection (2.01~mJy) gives -0.66, roughly in agreement with our $\alpha$ map.
\citet{2015caccianiga1} found that the star formation rate in J1159+2838 is 128~$M_{\odot}$ yr$^{-1}$, suggestive of starburst activity. The very red W3-W4 colour (3.03) supports this, as well as the $q22$ of 1.22, and the W3 flux density, which is $\sim$5.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$. $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are roughly the same, indicating that a significant contribution from the AGN is not needed to explain the radio emission. The luminosity of J1159+2838 is moderate with log $\nu L_{\nu, \textrm{int}}$ = 39.76 erg s$^{-1}$. Starburst-level star formation activity seems to be the predominant source of radio emission in this galaxy.
\subsubsection{J1203+4431}
J1203+4431 (NGC 4051) is one of the most studied sources in our sample due to its proximity ($z$ = 0.002), and also one of the six original Seyfert galaxies studied in \citet{1943seyfert1}. It is a radio-quiet NLS1 hosted by a spiral galaxy (Fig.~\ref{fig:J1203-host}). In our observations, its peak flux density is 0.78~mJy beam$^{-1}$, the integrated flux density is 5.47~mJy, and the flux densities of the tapered maps are slightly higher. Due to its low redshift this translates to a very low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 36.46 erg s$^{-1}$. Most of the emission originates from the extended regions, seen in Fig.~\ref{fig:J1203spind}. The emission is elongated in the north-east - south-west direction, and exhibits a curved S-shape. Also the core seems to be elongated with the same position angle. The morphologies seen in the tapered maps in Figs.~\ref{fig:J1203-90k} and \ref{fig:J1203-60k} are similar to the normal map. The maximum extent of the emission in the normal map is 0.42~kpc, and in the 60k$\lambda$ tapered map 0.44~kpc, thus the radio emission we see is confined within the very centre of the galaxy, as can be seen in Figs.~\ref{fig:J1203-host} and \ref{fig:J1203-host-zoom}. In the $\alpha$ map (Fig.~\ref{fig:J1203spind}) the spectral index of the core is -0.85, steepening to $\sim$-1 toward south-west. Interestingly, the spectral index of the south-west extended emission appears flatter on average, with values around -0.5.
J1203+4431 was observed with the VLA at 1.5 and 5~GHz already in \citet{1984ulvestad1}. At 5~GHz they observed an east-west oriented structure that resembles the central region seen in our radio map. At 1.5~GHz they also see the south-west extended region, with a morphology similar to what is seen in our map. \citet{1993baum1} observed J1203+4431 with the Westerbork Synthesis Radio Telescope (WSRT) at 5~GHz and with the VLA in D configuration at 1.5 and 5~GHz. In the WSRT observations they see a north-east - south-west structure that is about twice the size of what is seen in our maps. In their 5~GHz VLA map they clearly see the radio emission originating in the spiral arms of the galaxy, and in the 1.5~GHz map they also see the radio emission of the disk.
At higher resolution, at least from $\sim$1.5 to 15~GHz, the core component splits into three separate components with sub-mJy flux densities that correspond to the structures seen at lower resolution \citep{2009giroletti1,2011king1,2018baldi1,2018saikia1}. Very high resolution observations with the VLBA at 1.7~GHz \citep{2013doi1} and with the JVN at 22~GHz \citep{2015doi1} do not detect the source. Several studies have suggested that the origin of the inner structure is the AGN activity \citep[e.g., ][]{2006gallimore1,2011maitra1,2017jones1}. Based on broadband SED modelling \citet{2011maitra1} conclude that the source hosts a mildly relativistic jet/outflow. Studying a sample of Seyfert galaxies with kpc-scale radio structures, including J1203+4431, \citet{2006gallimore1} argue that in general these structures originate when jet plasma interacts with the interstellar medium and gets decelerated. They argue that in most cases the jet loses most of its power within the innermost kpc. \citet{2017jones1} studied the radio variability of J1203+4431 but found no significant signs of it. They hypothesise that the jet-like structure results from an earlier period of higher activity, and that at the moment the source is in a low-activity state. This could be supported by a study by \citet{2020esparzaarredondo1} where they classify J1203+4431 as a possible early fading candidate, meaning that it looks like its nuclear activity is currently decreasing. These properties match well with what we observe: very close to the nucleus the morphology toward south-west appears somewhat collimated, but dissipates at a radius of $<$ 0.1~kpc. We assume that the north-east part is receding, and a similar nuclear jet cannot be seen due to Doppler deboosting. Interestingly, the source was also detected by the \textit{Planck} satellite at 353, 545, and 857~GHz with flux densities of 0.74, 3.04, and 10.50~Jy, respectively \citep{2013planck1}.
This emission most probably originates from the dust in the host galaxy \citep{2011popescu1}.
A water maser was detected in this source by \citet{2018hagiwara1}. Also ultrafast outflows have been detected in J1203+4431 in X-rays in several studies \citep{2010tombesi1,2013pounds1,2020igo1} with velocities up to 0.12~c. \citet{2013pounds1} suggest that the outflow gets terminated at a rather small radius, losing much of its energy when interacting with the interstellar medium. However, the outflow is momentum-conserving, and thus enables momentum-driven AGN feedback in the galaxy. Outflows have been detected also in the optical spectrum. \citet{1997christopoulou} detected an [O~III] outflow with two blueshifted components, originating from opposite sides of the outflow cone. They model the outflow and determine its velocity to be 245~km s$^{-1}$. The outflow was later confirmed by \citet{2013fischer1}, who found a maximum velocity of 550~km s$^{-1}$. These models were further improved in \citet{2021meena1}, where they claim that the outflow is biconical, launched within 0.5~pc from the nucleus, and is driven by radiation. At a distance of 350~pc the outflow has a velocity of 680~km s$^{-1}$, and they argue that based on the gas kinematics and the host galaxy properties it can travel up to a distance of $\sim$1~kpc from the nucleus. The [O~III] outflow somewhat traces the north-east morphology seen in the radio maps, which is natural since it can be expected that the jet and the outflow are launched in the same direction. However, in the outflow model the north-east part is approaching and the south-west receding, which does not seem to be the case based on the radio morphology.
It is clear that the radio emission is of nuclear origin, and in any case the star formation diagnostics for J1203+4431 are unreliable since we see only a fraction of the whole radio emission of the source. However, the star formation rate of the host has been estimated to be a few $M_{\odot}$ yr$^{-1}$ \citep{2019lianou1,2020lamperti1}, and in the 5~GHz VLA radio map of \citet{1993baum1} the spiral arms are clearly seen, showing that some of the radio emission across the host does have a star formation related origin.
\subsubsection{J1209+3217}
J1209+3217 (RX J1209.7+3217) is another radio-quiet NLS1 galaxy ($z$ = 0.144) whose radio emission seems to be dominated by star formation. The radio map does not show significant structure, and the radio emission is confined inside the host galaxy (Fig.~\ref{fig:J1209-host}). The 5.2~GHz peak flux density is 0.60~mJy beam$^{-1}$, and the integrated flux density is 0.72~mJy. The total spectral index is -0.55 and the core spectral index is -0.62 (Fig.~\ref{fig:J1209spind}). This is consistent with the 1.4-5.2~GHz spectral index of -0.76, estimated using archival 1.4~GHz FIRST data. The source has a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.31 erg s$^{-1}$. The W3-W4 colour is 2.88, the $q22$ value is 1.75, and the W3 flux density is significantly, $\sim$15.6~mJy, higher than $S_{\textrm{1.4GHz, JVLA}}$, indicative of strong star formation. $S_{\textrm{5.2GHz, CDFS}}$ is $\sim$0.5~mJy higher than $S_{\textrm{int}}$, suggesting that the radio emission can be explained by star formation. The [O~III] lines are not shifted, but a turbulent wing component with a velocity of -222~km s$^{-1}$\ and a FWHM of 1405~km s$^{-1}$\ is present.
\subsubsection{J1215+5442}
J1215+5442 (SBS 1213+549A) is a radio-quiet NLS1 at $z$ = 0.150 with a peak flux density of 0.56~mJy beam$^{-1}$, an integrated flux density of 0.64~mJy, and a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.30 erg s$^{-1}$. Its radio maps (Fig.~\ref{fig:J1215}) do not exhibit any remarkable features, and according to the $\alpha$ map (Fig.~\ref{fig:J1215spind}) its total spectral index is -1.0. J1215+5442 was detected in FIRST with a flux density of 2.35~mJy, giving a 1.4-5.2~GHz spectral index of -0.99, confirming the steep spectral index of our map.
Based on the star formation proxies, the radio emission of J1215+5442 can be explained by star formation activity and does not require a significant contribution from the AGN. Its mid-infrared colour, W3-W4, is 2.47, $q22$ is 1.52, and the W3 flux density is $\sim$29.6~mJy higher than the extrapolated JVLA 1.4~GHz flux density, suggesting strong star formation activity. Also $S_{\textrm{5.2GHz, CDFS}}$ is $\sim$0.8~mJy higher than $S_{\textrm{int}}$, supporting star formation as the main producer of radio emission in this galaxy. However, the contribution of AGN-heated dust to the mid-infrared emission cannot be ruled out, especially since an [O~III] wing is present, with a velocity of -500~km s$^{-1}$\ and a FWHM of 661~km s$^{-1}$.
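The comparison above between the 5.2~GHz measurement and the archival FIRST detection implicitly assumes a power law $S(\nu) = S_0 (\nu/\nu_0)^{\alpha}$; the same assumption underlies the extrapolated JVLA 1.4~GHz flux densities used in the star formation diagnostics. A minimal sketch (the function name is ours, not from the paper):

```python
def extrapolate_flux(s0_mjy, nu0_ghz, nu_ghz, alpha):
    """Power-law extrapolation S(nu) = S0 * (nu / nu0)**alpha."""
    return s0_mjy * (nu_ghz / nu0_ghz) ** alpha

# J1215+5442: 0.64 mJy at 5.2 GHz with alpha = -0.99 extrapolates
# back to roughly the 2.35 mJy FIRST detection at 1.4 GHz
s_14 = extrapolate_flux(0.64, 5.2, 1.4, -0.99)
```

Extrapolations of this kind inherit the uncertainty of the assumed spectral index, so they should be read as order-of-magnitude checks rather than measurements.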
\subsubsection{J1218+2948}
J1218+2948 (Mrk 766) is a well-studied nearby ($z$ = 0.013) radio-loud NLS1 galaxy. Its peak flux density is 11.17~mJy beam$^{-1}$, the integrated flux density is 15.46~mJy, and the flux densities of the tapered maps are almost the same. Its luminosity is quite low with log $\nu L_{\nu, \textrm{int}}$ = 38.47~erg s$^{-1}$. The radio morphology (Fig.~\ref{fig:J1218spind}) is mostly featureless, but exhibits a slight elongation in the south-east - north-west direction. The tapered maps (Figs.~\ref{fig:J1218-90k} and \ref{fig:J1218-60k}) reveal additional extended emission toward east. The radio emission is confined within the central parts of a barred spiral host (Figs.~\ref{fig:J1218-host} and \ref{fig:J1218-host-zoom}). The $\alpha$ map shows a steep core, with a spectral index of -0.84. The total spectral index is -0.83, so the source is thoroughly steep.
In archival radio observations obtained with the VLA the flux densities were found to be 35.9~mJy at 1.46~GHz and 15.4~mJy at 4.89~GHz, in OVRO observations at 20~GHz the flux density was 5.3~mJy \citep{1987edelson1}, and observations with Arecibo yielded 29~mJy at 2.38~GHz \citep{1978dressel1}. \citet{1995bicay1} obtained 17 $\pm$ 4~mJy at 4.755~GHz with the GBT, and \citet{2010parra1} 14.5~mJy at 4.8~GHz with the VLA. All the archival values are consistent with what we obtain, and indicate that the emission is not variable, at least on these time-scales. Using the archival and our data the spectral index between 1.46 and 5.2~GHz is -0.66, and between 5.2 and 20~GHz -0.79, consistent with our $\alpha$ map. J1218+2948 has been unresolved in most radio imaging observations, but at 8.4~GHz with the VLA in A configuration a slight extension toward south-east was seen \citep{1995kukula1}, and in 1.5~GHz VLA observations extended emission on the north-west side of the nucleus, in addition to the south-east emission, was also detected \citep{1999nagar1}. These structures correspond to the slightly elongated morphology we see in J1218+2948. The source was not detected at 22~GHz with the JVN \citep{2016doi1}, but was detected at 95~GHz with the Combined Array for Research in Millimeter-wave Astronomy (CARMA) with a flux density of 1.98~mJy \citep{2015behar1}. \citet{2015behar1} note that the high frequency emission follows the $L_{\textrm{R}}$ - $L_{\textrm{X}}$ relation found in stellar coronae, and argue that the high frequency radio emission is similarly produced in radio-quiet and moderately radio-loud AGN in the accretion disk corona, closer to the black hole and within a smaller region than the large-scale radio emission. \citet{2020esparzaarredondo1} classify this source as a fading nucleus.
Structures resembling the radio morphology have also been observed in optical and IR emission lines. The [O~III] emission is found to be rather circular \citep{1996mulchaey1,2003schmitt1} and limited to the innermost $\sim$0.5~kpc of the galaxy, but an excitation map shows a biconical south-east -- north-west structure, with the south-east emission being brighter. \citet{1996gonzalesdelgado1} find that the H$\alpha$ emission is extended 1.5~kpc toward north-east, and conclude that the spectrum of this region can be fitted with H~II region models. They also find both a blue-shifted and a red-shifted component of [O~III] on opposite sides of the nucleus, and argue that the kinematics of this gas is a result of a nuclear outflow. \citet{2014schonell1} mapped the innermost 450~pc of the galaxy with the Gemini Near Infrared Integral Field Spectrograph in the $J$ and $Ks$ bands, and found that also the [Fe~II] emission is extended toward south-east. Based on line ratios they deduce that the gas has been ionised by a mixture of nuclear and starburst emission. Toward south-east they detect signs of shocked gas and also an increase in the [Fe~II] velocity distribution, as well as red- and blue-shifted components they think are caused by a jet/outflow. Also a water maser has been detected in J1218+2948, but whether it is associated with the nucleus is unclear \citep{2011tarchi1}.
Star formation diagnostics for this source can be unreliable due to the JVLA observation not mapping the whole galaxy, so the flux density can be somewhat underestimated. Furthermore, \citet{2005rodriguesardila1} found in the near-infrared spectrum an excess they attribute to hot dust with a blackbody temperature of 1200~K, which they think resides very close to the nucleus, at the dust sublimation radius. If the cooler toroidal/polar dust is also bright, it can contribute to the observed mid-infrared flux densities. Keeping these in mind, the star formation diagnostics point to this source being star-forming. The W3-W4 colour is 2.87, $q22$ is 1.40, the W3 flux density is 269.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 4.6~mJy higher than $S_{\textrm{int}}$. The star formation of the host is not extraordinarily high: based on far-infrared observations it is 5.42~$M_{\odot}$ yr$^{-1}$ \citep{2017terrazas1}, and based on the infrared luminosity it is 2.24~$M_{\odot}$ yr$^{-1}$ \citep{2020lamperti1}. However, \citet{2003rodriguezardila1} detect a 3.3~$\mu$m PAH feature within 150~pc from the nucleus at a level seen in circumnuclear starbursts in other sources. Also ultraviolet (UV) observations reveal several bright star-forming regions and star clusters, and the most pronounced star-forming region seems to be associated with the east part of the galaxy bar \citep{2007munozmarin1}. This could explain the eastward extended emission seen in the tapered maps.
Interestingly, two quasi-periodic oscillations (QPOs) with a frequency ratio of 3:2 have been reported in J1218+2948 in X-rays \citep{2001boller1,2017zhang1}. The periods of the QPOs are 4200 and 6450~s, and seem to be transient. Also ultrafast outflows (UFOs) have been detected in this source, with velocities around $\sim$0.1~c \citep[][and references therein]{2020igo1}.
All in all, it seems like the radio emission of J1218+2948 is a mixture of star formation and AGN related emission. In the radio maps there are signs of possible jets or outflows in the south-east - north-west direction, but due to the lack of variability in the radio flux densities or radio morphology the jets/outflows appear lethargic. Furthermore, according to several studies the host galaxy is forming stars, but the level of this activity, and thus its contribution to the radio emission, is uncertain. Further observations would be needed to clarify the role of each component in this galaxy.
\subsubsection{J1227+3214}
J1227+3214 (SDSS J122749.14+321458.9) is a radio-loud NLS1 ($z$ = 0.136) with a peak flux density of 2.96~mJy beam$^{-1}$, an integrated flux density of 3.34~mJy, and the flux densities of the tapered maps only slightly higher. This results in a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.94~erg s$^{-1}$. The source was detected in FIRST with a flux density of 6.42~mJy, yielding a spectral index of -0.50 between 1.4 and 5.2~GHz. Our $\alpha$ map in Fig.~\ref{fig:J1227spind} shows a steeper spectrum: the total spectral index is -0.67, and the core spectral index is -0.74. The difference might be due to variability, since the mean epoch of the FIRST observations is 1994; however, due to the lack of further radio observations we cannot investigate the possible variability. The radio morphology seems somewhat resolved, but without any clear structures. In the normal map the morphology seems slightly extended toward north-east and west, whereas the tapered maps reveal faint extended emission toward north and south-east. While the bulk of the emission is confined within the host galaxy, this extended emission seems to reach outside it (Fig.~\ref{fig:J1227-host}).
The star formation diagnostics indicate some activity in the host: the W3-W4 colour is 2.62, $q22$ is 1.06, and the W3 flux density is $\sim$25.4~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$. However, $S_{\textrm{int}}$ is 1.9~mJy higher than $S_{\textrm{5.2GHz, CDFS}}$. \citet{2015caccianiga1} find the star formation rate of this source to be high, 54~$M_{\odot}$ yr$^{-1}$, so a star formation contribution to the radio emission can be expected. Interestingly, this source was labelled as a red quasar in \citet{2016lamassa1}. They argue that it is in a dusty, possibly post-merger state, when powerful nuclear winds start unveiling a heavily obscured nucleus. However, no shift was detected in the [O~III] lines, so a large-scale outflow does not seem to be present, and only a moderate wing component with a velocity of -209~km s$^{-1}$\ and a FWHM of 592~km s$^{-1}$\ was seen in \citet{2021berton1}. Also \citet{2017zhang2} studied its dust properties, and conclude that only the innermost parts of the AGN -- the accretion disk, the broad-line region, and the inner edge of the torus -- seem to be obscured, and not the narrow-line region. Based on Cloudy analysis they claim that the obscuring medium can actually be the torus itself. The host galaxy of J1227+3214 has been observed a few times, but despite its moderate redshift, the attempts to model the morphology of the host galaxy have been inconclusive, thus nothing can be said about the state of the host \citep{2018jarvela1,2020olguiniglesias1}.
J1227+3214 would be very luminous for a galaxy dominated purely by star formation, so it is probable that also the AGN contributes to the overall radio emission. Based on its luminosity and the steep spectral index, it could be a low-luminosity CSS-like source. With these data it is impossible to say whether the extended emission that reaches outside the host is of nuclear or star formation origin, and further observations will be needed to reveal the nature of this source.
\subsubsection{J1242+3317}
J1242+3317 (WAS 61) is a radio-quiet source at $z$ = 0.044 with a peak flux density of 1.32~mJy beam$^{-1}$, an integrated flux density of 2.23~mJy, and similar flux densities in the tapered maps. Its luminosity is quite low with log $\nu L_{\nu, \textrm{int}}$ = 38.74~erg s$^{-1}$. Its FIRST flux density is 6.32~mJy, giving a spectral index of -0.79 between 1.4 and 5.2~GHz. This is consistent with what we see in the $\alpha$ map in Fig.~\ref{fig:J1242spind}: the total spectral index is -0.74, and the core spectral index is -0.88. \citet{1995brinkmann1} report a value of 45~mJy at 4.85~GHz observed with the GBT in 1989, but this record is not found anywhere else, for example, in the NASA/IPAC Extragalactic Database (NED), the High Energy Astrophysics Science Archive Research Center's (HEASARC) Xamin, or CDS's VizieR, so the value is highly dubious. The morphology of the source is clearly elongated toward east, and it seems like the spectral index flattens toward east as well, but this might be an edge effect, especially since the errors also increase in this direction. The tapered maps (Figs.~\ref{fig:J1242-90k} and \ref{fig:J1242-60k}) show the same extended emission toward east, and are additionally slightly extended toward south-west. The maximum extent of the region of continuous emission is 1.3~kpc in the normal map and 2.9~kpc in the 90k$\lambda$ tapered map. The extended emission can be caused by either a low-power jet or a nuclear outflow. Higher-resolution observations would be needed to distinguish between these two options.
The host galaxy of J1242+3317 was classified as a possible S0 galaxy with a strong bar (SB) \citep{2007ohta1}, but modelling the host with a bulge and a disk yielded a S\'ersic index of 3.45 for the bulge, suggesting it is a classical bulge \citep{2012mathur1}, in which case the galaxy would not be a pristine late-type galaxy. However, the presence of a bar was not taken into account in the modelling, even though it is clearly visible in the residuals, which might affect the final results.
J1242+3317 is also a bright mid-infrared emitter, even though it is quite clear that the morphology is caused by an AGN. Its W3-W4 colour is 2.72, $q22$ is 1.59, the W3 flux density is 60.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 1.6~mJy higher than $S_{\textrm{int}}$. \citet{2020yang1} estimated its star formation rate based on the 5.2~GHz radio luminosity and concluded that the radio emission cannot be of star formation origin. However, they do not consider the possibility that it can be a mixture of the two. Using BPT diagrams \citet{2013stern1} classified J1242+3317 as an AGN, a star-forming galaxy, and a Seyfert galaxy in diagrams based on [N~II], [S~II], and [O~I], respectively. This result suggests that star formation is indeed probably also present in this source. Also an [O~III] wing is detectable in the spectrum, with a velocity of -213~km s$^{-1}$\ and a FWHM of 888~km s$^{-1}$, so an additional contribution of AGN-heated dust to the mid-infrared emission cannot be ruled out either.
\subsubsection{J1246+0222}
J1246+0222 (PG 1244+026) is a radio-quiet, faint NLS1 at $z$ = 0.048, with a peak flux density of 0.43~mJy beam$^{-1}$, an integrated flux density of 0.71~mJy, and tapered map flux densities of 0.85 and 0.82~mJy for the 90k$\lambda$ and the 60k$\lambda$ taper, respectively. It is also a low-luminosity source with log $\nu L_{\nu, \textrm{int}}$ = 38.31~erg s$^{-1}$. In \citet{1994kellermann1} its 5~GHz flux density was found to be 0.83~mJy, so it seems like it has not undergone major changes during the last decades. Its FIRST flux density is 2.23~mJy, giving a spectral index of -0.87 between 1.4 and 5.2~GHz. Upgraded Giant Metrewave Radio Telescope (uGMRT) observations at 685~MHz yielded a flux density of 3.1 $\pm$ 0.3~mJy and a steep spectral index of -0.7 \citep{2020silpa1}. Furthermore, they found the source to be extended toward west - south-west on a kpc-scale. In the radio map in Fig.~\ref{fig:J1246spind} the source seems to be extended toward north and west - south-west. The spectral index varies over the emitting region, and the extreme values close to the edges are clearly a product of edge effects and low S/N ratio data. The core spectral index is flat with $\alpha$ = 0.05, consistent with what was found in \citetalias{2018berton1}, although new observations are certainly required to confirm our result. The tapered maps show extended emission toward south-east, with an extent of 5.2~kpc in the 90k$\lambda$ tapered map. The host galaxy of J1246+0222 is featureless (Fig.~\ref{fig:J1246-host}), and the bulk of the radio emission is confined within it. Its SDSS $r$ band 25~mag arcsec$^{-2}$ isophotal semimajor axis is 3.42~kpc, implying that the extended emission reaches outside the host.
\citet{2013teng1} studied the H~I absorption properties of J1246+0222 and concluded that the blueshifted and broad H~I absorption features match an outflow caused by the nuclear activity, probably driven by a jet. Moreover, they found that the minima of the absorption troughs exceed the nuclear continuum flux densities, meaning that there is also off-nuclear radio emission contributing to the continuum in J1246+0222. Based on the resolution of their observations they argue that the emitting region needs to be at least 3~kpc from the core. The south-eastward extended emission visible in our tapered maps would fit the criterion of the required off-nuclear emission. The fitting of the X-ray timing data of J1246+0222 also supports the presence of a jet or at least a jet base in this source. \citet{2017chainakun1} found that the data are best fitted with two X-ray sources, with the source further away from the black hole producing only small amounts of reflection X-rays. They interpret this component as a jet whose emission is beamed away from the accretion disk and thus does not get reflected. In any case, the radio morphology would be highly unusual for a star-forming galaxy, and thus it is probable that at least the extended emission is produced by nuclear activity, which implies that a jet or a nuclear outflow is or has been present. Interestingly, no [O~III] shift is seen in the optical spectrum, but a wing is present with a velocity of -503~km s$^{-1}$\ and a FWHM of 1245~km s$^{-1}$. A possible ultrafast outflow with a velocity of 0.08~c has also been detected \citep{2020igo1}.
According to the star formation diagnostics the bulk of the radio emission could be from star formation as the W3-W4 colour is 2.56, $q22$ is 2.26, the W3 flux density is 40.3~mJy higher than the $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is about the same as $S_{\textrm{int}}$. The star formation rates derived using different methods are roughly in agreement, ranging from 1 to 11~$M_{\odot}$ yr$^{-1}$ \citep{2014young1,2012sargsyan1,2020shangguan1}. However, \citet{2020yang1} claim that producing the amount of radio emission seen in J1246+0222 would require very improbable amounts of star formation. \citet{2007shi1} claim to have detected PAH features at 7.7 and 11.3~$\mu$m in the infrared spectrum obtained with the Infrared Spectrograph (IRS) onboard the Spitzer Space Telescope, whereas \citet{2018xie1} see only a flat, featureless spectrum in their data obtained with the same instrument. They model the spectrum and conclude that it can be reproduced by three continuum components with temperatures of about 394, 165, and 77~K. In the absence of strong star formation, the hot, as well as partly the warm, dust emission could originate from the dust heated by the AGN, which could explain the excess mid-infrared emission.
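As a rough cross-check, the quoted log $\nu L_{\nu, \textrm{int}}$ = 38.31~erg s$^{-1}$ for J1246+0222 can be approximately reproduced from $S_{\textrm{int}}$ = 0.71~mJy and $z$ = 0.048. This sketch assumes $H_0$ = 70~km s$^{-1}$ Mpc$^{-1}$ and a simple low-redshift luminosity distance $d_L \approx (cz/H_0)(1 + z/2)$, not necessarily the exact cosmology used in the paper:

```python
import math

MJY_TO_CGS = 1e-26   # 1 mJy in erg s^-1 cm^-2 Hz^-1
MPC_TO_CM = 3.086e24
C_KM_S = 2.998e5

def log_nu_l_nu(s_mjy, nu_ghz, z, h0=70.0):
    """Approximate log10(nu L_nu) [erg/s] with a low-z luminosity distance."""
    d_l_cm = (C_KM_S * z / h0) * (1.0 + z / 2.0) * MPC_TO_CM
    lum = 4.0 * math.pi * d_l_cm**2 * (nu_ghz * 1e9) * (s_mjy * MJY_TO_CGS)
    return math.log10(lum)

# J1246+0222: S_int = 0.71 mJy at 5.2 GHz, z = 0.048
print(round(log_nu_l_nu(0.71, 5.2, 0.048), 1))  # ~38.3, close to the quoted 38.31
```

At higher redshifts ($z \gtrsim 0.1$) this linear-Hubble approximation drifts from the proper cosmological luminosity distance, so the quoted values for the more distant sources should not be expected to match it exactly.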
\subsubsection{J1302+1624}
\label{sec:J1302}
J1302+1624 (Mrk 783) is a radio-loud NLS1 at $z$ = 0.067 with a peak flux density of 3.28~mJy beam$^{-1}$, an integrated flux density of 11.47~mJy and the flux densities of the tapered maps about half a mJy higher. Its luminosity is log $\nu L_{\nu, \textrm{int}}$ = 39.91~erg s$^{-1}$. Its radio map in Fig.~\ref{fig:J1302spindnoncommon} reveals a core and considerable extended emission toward east and south-east, in a curved morphology. The tapered maps do not show any additional emission (Figs.~\ref{fig:J1302-90k} and \ref{fig:J1302-60k}). The extent of the emission from the core in the normal map toward south-east is 7~kpc, toward north 3.9~kpc, and toward west 2~kpc. In the $\alpha$ map (Fig.~\ref{fig:J1302spindnoncommon}) the spectral index of the core is $\sim$-0.83, and the total spectral index is -1.50. The extended emission is mostly very steep, with spectral indices $<$ -1.5. An exception is a region very south, at the tip of the extended emission: the spectral index at the centre of the highest contour (RA 13:02:58.04, Dec +16:24:24.49) is -0.67, and thus considerably flatter than elsewhere. It is probable that these flatter spectral indices are real, since they are not seen only at the edges, and this region is clearly brighter than the rest of the jet, resembling a hotspot and indicating interaction with the interstellar medium.
J1302+1624 was detected with FIRST with a flux density of 28.72~mJy, but this value includes both the core and the extended emission, and thus deriving a spectral index using it does not tell us much. \citet{2013doi1} observed the source with the VLBA at 1.7~GHz finding it unresolved with a flux density of 1.3~mJy. It was not detected at 22~GHz using the JVN \citep{2016doi1}. J1302+1624 was extensively studied in \citet{2017congiu1}, \citet{2017congiu2}, and \citet{2020congiu1}. \citet{2017congiu2} found an extended narrow-line region in this source. This region has the same position angle as the radio emission extended south-east, but is much further away from the nucleus, reaching as far as 35~kpc. They confirmed with a BPT diagram that this region has been ionised by the AGN, and not star formation. Hints of gas ionised by star formation are seen closer to the nucleus, and between these there is a region where no line emission was detected at all. If the gas 35~kpc away from the nucleus was ionised by the AGN, we can use a simple light travel time argument to estimate a lower limit for the age of the source: it has taken the nuclear emission at least 35~kpc / $c$ = 10$^{5.05}$~yr to reach this region.
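The light travel time argument above is simple arithmetic; assuming standard conversion factors, 35~kpc at the speed of light corresponds to:

```python
import math

KPC_TO_CM = 3.086e21   # 1 kpc in cm
C_CM_S = 2.998e10      # speed of light in cm/s
YR_TO_S = 3.156e7      # 1 yr in s

# Minimum age: time for the nuclear ionising radiation to travel 35 kpc
t_yr = 35 * KPC_TO_CM / C_CM_S / YR_TO_S
print(round(math.log10(t_yr), 2))  # 5.06, consistent with the quoted 10^5.05 yr
```

This is, of course, only a lower limit: the ionising photons left the nucleus at least this long ago, and any travel time of the ionised gas itself is not included.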
\citet{2017congiu1} thoroughly analysed the same JVLA data we are using. Using tapering they estimated the extent of the emission to be 14~kpc south-eastward and 12~kpc north-westward. They measured the spectral indices using the same method as in \citetalias{2018berton1} and estimated the core spectral index to be -0.67 $\pm$ 0.13, and the spectral index of the extended emission to be -2.02 $\pm$ 0.74. We confirm these results with the $\alpha$ map. It should be noted that their extended spectral index includes both the very steep and the flatter region seen in our $\alpha$ map, thus being an average between them. Due to the very steep spectral index and the lack of clear collimated structures they hypothesise that the extended emission might be old and no longer replenished. They also propose that the S-shaped extended emission in the tapered map might suggest that the jets in J1302+1624 are precessing. They tested this scenario further in \citet{2020congiu1} by observing the source with the VLBA and the enhanced Multi Element Remotely Linked Interferometer Network (e-MERLIN). The e-MERLIN 1.6~GHz observations confirmed the diffuse, non-collimated nature of the extended emission, as well as its steep spectral index. Interestingly, the source is partially resolved in VLBA observations at 5~GHz, but the pc-scale jet seems to be pointing north-east. By fitting a precession model to the JVLA 5.2~GHz data they confirmed that precession is a plausible scenario for the origin of the detected emission. Another possibility they bring up is intermittent activity, as J1302+1624 seems to be undergoing a merger, and mergers are known to cause this kind of behaviour. Furthermore, a merger can also be responsible for the change in the direction of the jet. Current data are not enough to distinguish between these scenarios. Interestingly, the flattened emission region we see in the $\alpha$ map coincides with the southern tidal structure seen in Fig.~6 of \citet{2020congiu1}, possibly indicating that the jet is interacting with the interstellar medium in the tidal tail. J1302+1624 is a blue outlier as its [O~III] lines are shifted by -191~km s$^{-1}$, but it is unclear whether the shift is due to the jet or possibly the merger.
The jet is clearly the main source of radio emission in this source, but as indicated by the BPT diagram, there probably is also some star formation going on in the host. The star formation diagnostics are dominated by the jet emission, and only the W3-W4 colour of 2.62 reveals possible star formation. $q22$ is -0.11, the W3 flux density is 63.0~mJy lower than the $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ is 10.8~mJy higher than $S_{\textrm{5.2GHz, CDFS}}$.
\subsubsection{J1305+5116}
J1305+5116 (SDSS J130522.74+511640.2) is a radio-loud, bright NLS1 at $z$ = 0.788 with a peak flux density of 32.10~mJy beam$^{-1}$, an integrated flux density of 53.41~mJy and the flux densities of the tapered maps approximately the same. It is one of the most luminous sources in our sample with log $\nu L_{\nu, \textrm{int}}$ = 42.91~erg s$^{-1}$. J1305+5116 has also been detected in gamma-rays, which - when taking into account the quite high redshift of this source - strongly suggests it hosts a relativistic, beamed jet \citep{2015liao1}. Its radio map (Fig.~\ref{fig:J1305spind}) shows two components with a separation of 8.1~kpc, and at a position angle of 168\degree. The northern component is responsible for about 62\% of the emission (33.1~mJy). The $\alpha$ map reveals that this is the core, since it shows a flat spectral index ($\alpha$ = -0.07), whereas the spectral index of the southern component is -0.73 (RA 13:05:22.76, Dec 51:16:39.10), consistent with optically thin synchrotron emission. The northern component is centred at the centre of the featureless host galaxy (Fig.~\ref{fig:J1305-host-zoom}). The SDSS $r$ band 25.0~mag arcsec$^{-2}$ isophotal semimajor axis of the host is 15.4~kpc, so the jet, too, is still confined within the host galaxy.
Based on archival data J1305+5116 was previously classified as a steep spectrum source: its flux densities at 150~MHz (6C/7C), 325~MHz (WENNS), and 1.4~GHz (FIRST) are 320, 211, and 86.9~mJy, respectively \citep{2008yuan1}. It was later observed with the Allen Telescope Array (ATA) at 1.4~GHz with a flux density of 92.7~mJy \citep{2010croft1}, indicating some degree of variability. Higher-resolution observations revealed a north-south oriented core-jet structure \citep{2013petrov1}. In VLBA observations at 5~GHz the core flux density was found to be 15~mJy, and the jet flux density 8.9~mJy. These values are less than 50\% of what we observe at 5.2~GHz, indicating that a substantial amount of the emission is resolved out on VLBA scales. The source was not detected at 22~GHz with the JVN with a detection limit of 9~mJy \citep{2016doi1}.
\citet{2018komossa1} investigated the optical spectrum of J1305+5116 and found extreme emission line shifts, with the largest velocities derived from them exceeding the escape velocity of the galaxy. They also found ionisation stratification -- the shifts of higher ionisation potential lines are larger -- and argue that this implies that the shifts are not caused by local interaction, but concern the whole narrow-line region. They conclude that the properties of the optical spectrum are most probably indicative of an early stage of jet evolution, where a young jet is still advancing through the dense interstellar medium and interacting with it. This scenario is in agreement with the radio morphology of the source, since the jet still seems to reside within the host. This source would be ideal for future observations of the jet - interstellar medium interaction.
The radio emission in J1305+5116 seems to be produced by a powerful nucleus and a relativistic, beamed jet. Nevertheless, for consistency the star formation diagnostics of this source are: the W3-W4 colour is 2.34, $q22$ is -0.61, $S_{\textrm{1.4GHz, JVLA}}$ is 72.3~mJy higher than $S_{\textrm{W3}}$, and $S_{\textrm{int}}$ is 5.03~mJy higher than $S_{\textrm{5.2GHz, CDFS}}$.
\subsubsection{J1317+6010}
J1317+6010 (SBS 1315+604) is a radio-quiet NLS1 ($z$ = 0.137) with a peak flux density of 0.51~mJy beam$^{-1}$, an integrated flux density of 0.64~mJy, and the tapered flux densities slightly higher, 0.80 and 0.79~mJy for the 90 and 60k$\lambda$ taper, respectively. Due to its redshift the low flux density translates to a moderate luminosity of log $\nu L_{\nu, \textrm{int}}$ = 39.42~erg s$^{-1}$. The only archival radio detection is 1.82~mJy from FIRST, yielding a 1.4-5.2~GHz spectral index of -0.80. Interestingly, according to the $\alpha$ map in Fig.~\ref{fig:J1317spind} the core spectral index is quite flat with $\alpha$ = -0.34. This is consistent with what was found in \citetalias{2018berton1}, but since they used the same data set, this result should be checked. It could be an artefact due to the faintness of the source. The morphology seems to be somewhat extended, and the central region seems to be surrounded by patchy, diffuse emission. In the tapered maps (Figs.~\ref{fig:J1317-90k} and \ref{fig:J1317-60k}) these blobs form one, quite featureless, emission region. The radio emission is confined within the host galaxy, which does not show any distinguishable morphology either (Fig.~\ref{fig:J1317-host}).
The star formation proxies indicate star formation in the host, W3-W4 is 2.6, $q22$ is 1.60, $S_{\textrm{W3}}$ is 12.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are about the same. However, the BPT diagrams using different emission line ratios all indicate that the gas has been ionised by the AGN \citep{2013stern1}. The [O~III] lines are not shifted, but the source does show an [O~III] wing with a velocity of -377~km s$^{-1}$\ and a FWHM of 1436~km s$^{-1}$, so the dust heated by the AGN could explain the mid-infrared emission.
\subsubsection{J1337+2423}
J1337+2423 (IRAS 13349+2438) is a radio-quiet source at $z$ = 0.108, originally classified as an NLS1, but later found to be a broad-line Seyfert 1 (BLS1) with the FWHM(H$\beta$) $\sim$ 2500~km s$^{-1}$\ \citep{2013lee1}. However, its optical spectrum exhibits very strong Fe~II emission, unlike BLS1s in general, so it might be an ``overgrown'' NLS1. We decided to include it in the sample because it still shares a lot of properties with NLS1s, and might be interesting from an evolutionary point of view. It is defined as radio-quiet, mostly due to its strong optical/infrared emission. Its peak flux density is 9.64~mJy beam$^{-1}$, the integrated flux density is 9.96~mJy, and the tapered flux densities are about the same. It has a high luminosity of log $\nu L_{\nu, \textrm{int}}$ = 40.26~erg s$^{-1}$. Its FIRST flux density is 19.9~mJy, giving a 1.4-5.2~GHz spectral index of -0.53. The $\alpha$ map in Fig.~\ref{fig:J1337spind} shows generally steep spectral indices: the total spectral index is -1.03, and the core spectral index is -1.01. The source is slightly elongated toward east. The tapered maps do not reveal significant new emission. The radio emission is confined within the host galaxy, which shows hints of a bar and possible spiral arms or disturbed morphology (Fig.~\ref{fig:J1337-host}).
The star formation diagnostics, except for the W3-W4 colour, indicate strong star formation. The W3-W4 colour is 1.95, $q22$ is 1.29, the W3 flux density is 424.1~mJy higher than the $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 2.8~mJy higher than $S_{\textrm{int}}$. However, this source has been studied in infrared in detail, and it has been classified as a hot dust-obscured galaxy, where almost all the mid-infrared emission originates from the AGN-heated dust \citep{2018lyu1}. The AGN-fraction of the mid-infrared emission was studied also in \citet{2016alonsoherrero1} and \citet{2016gruppioni1} and was found to be 0.97 and 0.98, respectively, so it can be concluded that the very bright mid-infrared emission is of AGN origin. On the other hand, this source also shows considerable star formation, with the estimates ranging from $\sim$15 to 45~$M_{\odot}$ yr$^{-1}$ \citep{2012sargsyan1,2019mahajan1}, so the contribution of the star formation to the radio emission cannot be totally ruled out.
J1337+2423 has also been found to host strong multiphase ultrafast outflows with velocities of $\sim$0.14~c and $\sim$0.27~c \citep{2020parker1}, as well as two warm absorbers with velocities around -600~km s$^{-1}$ \citep{2018parker1}. The optical spectrum does not show [O~III] line shifts, but reveals a strong [O~III] wing component with a velocity of -598~km s$^{-1}$\ and a FWHM of 2241~km s$^{-1}$. Taking into account the evidence of strong outflows in this source, it is possible that the eastward extended emission is caused by an outflow rather than a jet. The steep spectral indices also support this hypothesis.
\subsubsection{J1348+2622}
J1348+2622 (SDSS J134834.28+262205.9) is a radio-quiet NLS1 at $z$ = 0.917 with a very low integrated flux density of 0.38~mJy. However, this source has the highest redshift among our sample, so the low $S_{\textrm{int}}$ translates to quite a high luminosity of log $\nu L_{\nu, \textrm{int}}$ = 40.84 erg s$^{-1}$. The source does not exhibit any distinct features in the radio maps (Fig.~\ref{fig:J1348}), and, unsurprisingly, its host galaxy is also unresolved due to the redshift (Fig.~\ref{fig:J1348-host}). The $\alpha$ map (Fig.~\ref{fig:J1348spind}) shows that the spectral index of the source is very steep: the total spectral index is -1.06, and the core spectral index is -1.30.
The star formation diagnostics do not imply strong star formation in this source, W3-W4 is 1.83, $q22$ is 0.32, and both $S_{\textrm{W3}}$ and $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are roughly the same. However, due to the high redshift of this source, the PAH features around 11.3~$\mu$m are not within the W3 band anymore, but actually fall into the W4 band, so the star formation proxies used are not necessarily reliable for this source. Since the PAH features should be within the W4 band we decided to use it instead of W3 in the $S_{\textrm{W3}}$ - $S_{\textrm{1.4GHz, JVLA}}$ relation. According to this the W4 flux density is $\sim$1.6~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, still not a very robust result in favour of star formation.
Based on these data it is not possible to determine the predominant source of the radio emission in J1348+2622. However, star-forming, or even starburst, galaxies rarely exceed the radio luminosity of log $\nu L_{\nu}$ = 40~erg s$^{-1}$ \citep[e.g.,][]{2009sargsyan1}, which might suggest that the radio emission is dominated by the AGN, and that J1348+2622 might be a CSS-like NLS1 galaxy.
\subsubsection{J1358+2658}
J1358+2658 (SDSS J135845.38+265808.4) is a radio-quiet source at $z$ = 0.331 with a peak flux density of 0.52~mJy beam$^{-1}$, an integrated flux density of 0.61~mJy, with the flux densities of the tapered maps similar. Due to the redshift of this source the low flux density results in a high luminosity of log $\nu L_{\nu, \textrm{int}}$ = 40.05 erg s$^{-1}$. The radio map in Fig.~\ref{fig:J1358spind} shows a core, and faint extended emission north-eastward. A similar structure is seen also in the tapered maps (Figs.~\ref{fig:J1358-90k} and \ref{fig:J1358-60k}). However, with these data we cannot be certain the emission is real. The $\alpha$ map probably suffers from the faintness of the source, and shows variable, possibly not reliable, values. The general trend seems to be on the verge between flat and steep, but based on these data nothing conclusive can be said. The FIRST flux density is 1.17~mJy, and thus the 1.4-5.2~GHz spectral index is -0.50, somewhat in agreement with the $\alpha$ map, which has a total spectral index of -0.42. The bulk of the radio emission comes from within the host galaxy, of whose morphology nothing can be said, but the north-eastward extension seems to be reaching outside it. The SDSS $r$ band 25~mag arcsec$^{-2}$ isophotal semimajor axis of the galaxy is 7.7~kpc, whereas the maximum extent of the radio emission is 17~kpc.
\citet{2015caccianiga1} estimate the star formation activity in the host to be at a starburst level with 129~$M_{\odot}$ yr$^{-1}$. Strong star formation is supported by the W3-W4 colour of 2.69, $q22$ of 1.30, the W3 flux density that is 5.4~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ that are about equal. Its optical spectrum also shows an [O~III] wing with a velocity of -253~km s$^{-1}$\ and a FWHM of 1366~km s$^{-1}$, so the contribution of the AGN-heated dust cannot be ruled out either.
The extended emission - if real - cannot easily be explained by star formation, and it is most likely of nuclear origin, but its nature cannot be determined with these data. Furthermore, the broad-band spectral index close to the flat/steep border is not in agreement with a pure star formation scenario, which should result in a steeper spectral index. This indicates that whereas a fraction of the radio emission originates from the starburst, there also needs to be a flatter component present to account for the observed spectral index. Moreover, the luminosity of this source would be very high for a pure starburst galaxy, and thus the radio emission within the galaxy seems to be a combination of starburst and nuclear activity.
\subsubsection{J1402+2159}
J1402+2159 (RX J1402.5+2159) is a radio-quiet NLS1 at $z$ = 0.066 with very faint radio emission, the peak flux density is 0.28~mJy beam$^{-1}$, the integrated flux density is 0.33~mJy, and the 90k$\lambda$ tapered flux density is 0.46~mJy. J1402+2159 is a quite nearby source, so the resulting luminosity is also low with log $\nu L_{\nu, \textrm{int}}$ = 38.34~erg s$^{-1}$. The source was detected in FIRST with a flux density of 0.87~mJy, giving a 1.4-5.2~GHz spectral index of -0.74. In addition to these there are no other radio observations of this source. The spectral index seen in the $\alpha$ map in Fig.~\ref{fig:J1402spind} is even steeper, the total spectral index is -0.95, and the core spectral index is -1.54. However, the spectral index values might be affected by the faintness and the low S/N of the data. The source morphology shows an extension toward north-east. This feature is more pronounced in the tapered maps (Figs.~\ref{fig:J1402-90k} and \ref{fig:J1402-60k}). Overlaid with the host galaxy (Fig.~\ref{fig:J1402-host}), it looks like the radio emission reaches slightly outside of it, or at least to the edges, even when considering that the elongated beam somewhat exaggerates the extent of the emission.
The host galaxy is probably of E/S0 type \citep{2007ohta1}, and all the star formation diagnostics indicate activity in this source, the W3-W4 colour is 2.75, $q22$ is 2.16, $S_{\textrm{W3}}$ is 41.2~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 2.1~mJy higher than $S_{\textrm{int}}$. These diagnostics are supported by BPT diagrams based on various line ratios where this source is consistently classified as a star-forming galaxy, indicating that stars are the main source of ionising emission in this galaxy. The data are consistent with the star formation activity being the main source of radio emission in this source. However, the nature of the extended radio emission cannot be determined with these data, and more observations, preferably with a higher S/N, would be needed for that.
\subsubsection{J1536+5433}
J1536+5433 (Mrk 486) is a radio-quiet NLS1 at $z$ = 0.039 with very faint radio emission: the peak flux density is 0.44~mJy beam$^{-1}$ and the integrated flux density is 0.50~mJy. It also has a low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 37.98~erg s$^{-1}$. It was detected at 5~GHz in the 1980s with a flux density of 0.47~mJy \citep{1989kellermann1}, indicating that it does not show significant variability. The FIRST detection at 1.4~GHz is 1.23~mJy, giving a 1.4-5.2~GHz spectral index of -0.69. Our $\alpha$ map in Fig.~\ref{fig:J1536spind} shows slightly flatter spectral indices, the total spectral index is -0.35, and the core spectral index is -0.50. Due to the faintness of this source we cannot be sure how reliable the spectral index range is, but it seems that the general spectral index is rather flat, which is in agreement with the spectral index estimated in \citetalias{2018berton1}. The radio emission is confined within the inner regions of the host galaxy (Fig.~\ref{fig:J1536-host}), which appears to be late-type.
The PAH features of J1536+5433 at 11.3~$\mu$m were studied by \citet{2019martinezparedes1}, and whereas they were able to detect them, they found no evidence of enhanced star formation; in fact, the estimated star formation rate was only 0.24 $\pm$ 0.08~$M_{\odot}$ yr$^{-1}$, although it should be kept in mind that an AGN can suppress the PAH features. Nevertheless, the mid-infrared colour agrees with this (W3-W4 = 1.92), but the other diagnostics do not. The $q22$ is 2.12, the W3 flux density is substantially higher, about 52.1~mJy, than $S_{\textrm{1.4GHz, JVLA}}$, and also $S_{\textrm{5.2GHz, CDFS}}$ is $\sim$1.0~mJy higher than $S_{\textrm{int}}$. Whereas $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are comparable and indicate that the radio emission could be explained by modest star formation, the W3 flux density is surprisingly high, and hard to explain by the low star formation seen in this source. The [O~III] lines of J1536+5433 are not shifted, and do not show a wing either. This, of course, does not rule out the possibility that the excess mid-infrared emission might originate from the equatorial dust heated by the AGN. This would be in line with the flat spectral index -- uncommon for star formation emission -- of the source.
\subsubsection{J1555+1911}
J1555+1911 (Mrk 291) is a faint, radio-quiet NLS1 ($z$ = 0.035) with a peak flux density of 0.12~mJy beam$^{-1}$ and an integrated flux density of 0.36~mJy. The flux densities of the tapered maps are 0.42 and 0.44~mJy, for the 90 and the 60k$\lambda$ taper, respectively. The normal map (Fig.~\ref{fig:J1555spind}) exhibits diffuse emission, and it is hard to say whether a proper core is even present. The FIRST detection of this source is 1.85~mJy, giving a 1.4-5.2~GHz spectral index of -1.25. Also our $\alpha$ map shows a very steep spectral index: the core spectral index is -1.13, and the total spectral index is -0.82, although the south-western part of the emission exhibits a flat spectral index, which might be due to the edge effects and thus not reliable. The tapered maps (Fig.~\ref{fig:J1555-90k} and \ref{fig:J1555-60k}) show a morphology slightly elongated toward north-west. J1555+1911 is hosted by a spiral galaxy (Fig.~\ref{fig:J1555-host}), and most of the radio emission is confined within the bulge of the galaxy.
Based on its mid-infrared properties, the diffuse radio morphology, and the low luminosity (log $\nu L_{\nu, \textrm{int}}$ = 37.85~erg s$^{-1}$) it seems rather evident that the predominant source of the radio emission is star formation. The W3-W4 colour is 2.62, $q22$ is 1.67, the W3 flux density is $\sim$19.2~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are roughly in agreement.
\subsubsection{J1559+3501}
J1559+3501 (Mrk 493) is a radio-quiet NLS1 at $z$ = 0.031 with faint radio emission, the peak flux density is 0.46~mJy beam$^{-1}$, the integrated flux density is 1.41~mJy, and the flux densities of the tapered maps are 1.47 and 1.48~mJy for the 90 and 60k$\lambda$, respectively. It is a nearby source so the flux density results in a low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 38.19~erg s$^{-1}$. The radio map in Fig.~\ref{fig:J1559spind} shows a core surrounded by diffuse emission, slightly more elongated toward north-west and south-east. The tapered maps in Figs.~\ref{fig:J1559-90k} and \ref{fig:J1559-60k} do not show any additional structures. The $\alpha$ map suffers from low S/N, as can be seen especially close to the edges, but in general the map shows very steep spectral indices, and the total spectral index is as steep as -1.16. The FIRST detection of this source is 3.39~mJy, giving a 1.4-5.2~GHz spectral index of -0.67. \citet{2020yang1} report a JVLA A-configuration 8.84~GHz flux density of 0.56~mJy, resulting in a very steep 5.2-8.84~GHz spectral index of -1.74. They claim that the 5.2-8.84~GHz spectral index is flat, but their reported value for the integrated 5.2~GHz flux density is only 0.62~mJy, whereas their and our peak flux densities are about the same. They obtained the value using the same data set we are using, but their rms is higher than ours (15.64 vs. 11~$\mu$Jy beam$^{-1}$). However, the 1.4-5.2~GHz spectral index is steep, and it does not seem probable that it would flatten above 5.2~GHz.
It is worth noting that the detected radio emission is confined within the bulge of the host (Fig.~\ref{fig:J1559-host}) and probably does not represent the whole radio emission of the source. The host galaxy is a barred spiral that also exhibits a nuclear ring and a nuclear dust spiral \citep{2003crenshaw1,2006deo1,2007ohta1}. \citet{2009popovic1} studied the nuclear region in detail and concluded that the nuclear ring can indeed be a place of enhanced star formation. They estimated an unusually high star formation rate of 2~$M_{\odot}$ yr$^{-1}$ for the innermost $<$ 1~kpc. In the BPT diagrams J1559+3501 is classified as a star-forming galaxy, indicating that the narrow-line region lines are excited by star formation rather than the AGN. \citet{2010sani1} detected strong 6.2~$\mu$m PAH features in this source, confirming the presence of star formation. Our star formation diagnostics are somewhat in agreement with this, but it should be noted that the WISE flux densities cover the whole galaxy, whereas we are likely missing some faint, extended radio emission from the host. The W3-W4 colour is 2.34, $q22$ is 1.38, the W3 flux density is $\sim$64.3~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 1.3~mJy higher than $S_{\textrm{int}}$. The [O~III] lines of J1559+3501 show a prominent wing with a velocity of -349~km s$^{-1}$\ and a FWHM of 807~km s$^{-1}$, so also nuclear winds are present.
The evidence seems to indicate that the star formation is a significant, if not the major, contributor to the radio emission. The nuclear, possibly star-forming ring found in previous studies falls within the radio emitting region in our map, and also the radio morphology seems to be more consistent with star formation than nuclear activity. This does not rule out the possibility that the AGN contributes too, but higher resolution observations will be needed to disentangle these two mechanisms.
\subsubsection{J1633+4718}
\label{sec:J1633}
J1633+4718 (SDSS J163323.58+471858.9) is a radio-loud NLS1 ($z$ = 0.116) whose radio map in Fig.~\ref{fig:J1633spind} shows two distinct components. The integrated flux density of the core component is 24.48~mJy, and the integrated flux density of the north component is 0.79~mJy. The flux densities of the tapered maps are very similar to these. J1633+4718 is a high-luminosity source with log $\nu L_{\nu, \textrm{int}}$ = 40.67~erg s$^{-1}$. J1633+4718 resides in an interacting system \citep{2020olguiniglesias1}: the core component is associated with the disk-like host galaxy of J1633+4718, whereas the north component is associated with another disk galaxy (Fig.~\ref{fig:J1633-host}). The nuclei of the two galaxies are $\sim$8~kpc apart. The core spectral index of the component that hosts the NLS1 is -0.54, and thus on the verge between flat and steep. The $\alpha$ map of the companion galaxy suffers from low S/N and edge effects, but its ``core'' spectral index is -0.69.
Past radio observations of J1633+4718 indicate variability in the core. \citet{1994neumann1} found an inverted radio spectrum above 5~GHz with simultaneous observations, whereas \citet{2011doi1} found the 1.7 to 8.5~GHz spectral index to be steep with quasi-simultaneous observations. Archival data at $\sim$5~GHz also suggest considerable variability: \citet{1991gregory1} reported a flux density of 34~mJy at 4.85~GHz obtained with the GBT, \citet{1997laurentmuehleisen1} obtained 30~mJy with the VLA at 5~GHz, and \citet{2010gu1} reported 55.2~mJy with the VLBA at 5~GHz. This is indicative of jet activity, which has been confirmed by a high brightness temperature of at least 10$^{11.3}$~K \citep{2007doi1}, and a pc-scale core-jet radio morphology in high resolution observations at 1.7 and 5~GHz \citep{2010gu1,2011doi1}. At 8.4~GHz with the JVN the source remains unresolved with a flux density of 21.2~mJy.
The archival FIRST 1.4~GHz flux density for J1633+4718 is 65.02~mJy, whereas \citet{2011doi1} detected 55.5~mJy at 1.7~GHz with the VLBA. The discrepancy might be due to emission resolved out in the VLBA observations, but also to variability. Several previous studies suggest that this might be a CSS-like source due to its small size and steep spectral indices, but taking into account that the source also shows flat or even inverted radio spectra, it seems more probable that the occasional steep indices have been due to temporal variability. Moreover, the 22~GHz flux density was found to be as high as 163~mJy with the JVN \citep{2016doi1}, also inconsistent with the CSS scenario.
The W3-W4 colour of 2.67 indicates that there might also be star formation present in the galaxy, and \citet{2015caccianiga1} estimated the star formation rate to be as high as 68~$M_{\odot}$ yr$^{-1}$. The other indicators do not favour star formation, since the radio emission is clearly dominated by the nucleus: $q22$ is 0.22, the W3 flux density is $\sim$24.4~mJy lower than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{5.2GHz, CDFS}}$ is 24.1~mJy lower than $S_{\textrm{int}}$. However, a strong nuclear wind, seen as an [O~III] wing with a velocity of -279~km s$^{-1}$\ and a FWHM of 1186~km s$^{-1}$, is also present, so AGN-heated dust can contribute to the observed mid-infrared emission. The main source of the radio emission in J1633+4718 is the AGN, but the weak radio emission of the companion galaxy might be produced by star formation.
J1633+4718 is also an interesting X-ray source, since an ultrasoft X-ray component with a low blackbody temperature of $\sim$30~eV was detected in it \citep{2010yuan1,2016mallick1}. This was interpreted as a direct detection of the accretion disk, and so far similar components have not been detected in other NLS1s. \citet{2016mallick1} also detect excess UV emission, which requires the presence of a jet to be explained.
\subsubsection{J1703+4540}
J1703+4540 (SDSS J170330.38+454047.1) is a radio-loud NLS1 at $z$ = 0.060 that was previously classified as a CSS with a turnover frequency below 150~MHz \citep{2004snellen1}. Our $\alpha$ map, shown in Fig.~\ref{fig:J1703spind}, agrees with this classification since the spectral index we find is steep: the total spectral index is -0.87, and the core spectral index is -0.91. The peak flux density is 32.26~mJy beam$^{-1}$, the integrated flux density of the normal map (Fig.~\ref{fig:J1703spind}) is 34.68~mJy, with log $\nu L_{\nu, \textrm{int}}$ = 40.20 erg s$^{-1}$, and it exhibits very mild elongation toward south-east. The integrated flux densities of the tapered maps (Figs.~\ref{fig:J1703-90k} and \ref{fig:J1703-60k}) are about a mJy higher, and both maps reveal extended, but faint radio emission toward south-east. The host of J1703+4540 is clearly a spiral galaxy, as seen in Fig.~\ref{fig:J1703-host}, and the radio emission is concentrated within the bulge of the galaxy, which is of a pseudo-bulge type \citep{2020olguiniglesias1}. The extended radio emission seems to somewhat trace the southern bar- or arm-like structure, possibly indicating the presence of star formation in the galaxy. The star formation diagnostics partly support this, as the mid-infrared colour W3-W4 is 2.59, and the W3 flux density is 15.9~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$. $S_{\textrm{int}}$, on the other hand, is 28.8~mJy higher than $S_{\textrm{5.2GHz, CDFS}}$, and $q22$ is only 0.52, suggesting that star formation is not sufficient to explain all of the radio emission.
Indeed, analysing archival VLBA observations of J1703+4540 at 5~GHz, \citet{2010gu1} found a one-sided core-jet structure directed toward south-west. They also reported considerable flux density variability, which might also affect our star formation diagnostics, since we do not know in which state the source was during our observations. However, our integrated flux density is $\sim$20~mJy lower than the 5~GHz VLBA flux density (56.8~mJy), suggesting that the source was in a rather low state during our observations. The presence of a jet was confirmed in VLBA observations at 1.7~GHz by \citet{2011doi1}, who found a similar one-sided structure, consistent with the jet of \citet{2010gu1}, and with a projected length of 35~pc. The jet emission is quite diffuse and does not appear Doppler boosted, and they conclude that the jet is either sub-relativistic or seen at a large angle. They also detect hints of a counter-jet but cannot confirm whether it is real. \citet{2017giroletti1} claim to have confirmed the presence of emission on the counter-side of the core at 5~GHz, and they classify J1703+4540 as an asymmetric double, and a low-power compact \citep[LPC,][]{2010kunertbajraszewska1} source, since it does not show considerable radio morphology evolution over time, as would be expected from a true CSS.
A powerful molecular outflow with a bulk velocity of -660~km s$^{-1}$ \citep{2018longinotti1}, and a multi-component ultra-fast X-ray outflow showing velocities up to $\sim$0.1~c \citep{2018sanfrutos1}, consistent with each other, have also been found in this source. \citet{2018longinotti1} state that the outflow seen in J1703+4540 is consistent with an energy-conserving outflow propagating through the host galaxy, an instance of probably the most important and effective mode of AGN feedback. Consistent with the outflows, an [O~III] wing component is also present, with a velocity of -336~km s$^{-1}$, and a FWHM of 646~km s$^{-1}$. Interestingly, the [O~III] emission line shows a \textit{redshift} of 210~km s$^{-1}$ \citep{2021berton1}, the origin of which remains unclear. Since the wing velocity is defined relative to the [O~III] core, its real velocity, relative to the rest frame, is smaller than the previously reported value. Spatially resolved spectra would be needed to study the morphology and kinematics of [O~III] in more detail.
\subsubsection{J1713+3523}
J1713+3523 (FBQS J1713+3523, $z$ = 0.083) is an NLS1 that would be classified as radio-quiet based on the FIRST detection ($S_{\textrm{int, 1.4GHz}}$ = 11.24~mJy) or our data ($S_{\textrm{int}}$ = 3.81~mJy), but it was detected at 22~GHz with the JVN with a whopping flux density of 138~mJy \citep{2016doi1}, implying extreme variability and jet activity. Interestingly, the 1.4-5.2~GHz spectral index is steep with $\alpha$ = -0.66, in agreement with our $\alpha$ map in Fig.~\ref{fig:J1713spind}, whose total spectral index is -0.97. Assuming the less extreme spectral index of -0.66, the extrapolated 22~GHz flux density would be 1.47~mJy, requiring an almost hundred-fold increase in the flux density to reach the observed 138~mJy. The spectral index between 5.2 and 22~GHz is 2.49, very close to the theoretical synchrotron self-absorption (SSA) limit. It should be noted that the observations are not simultaneous: the 22~GHz observations were performed in April 2014, and the JVLA observations in September 2015. However, radio flares usually propagate from higher to lower frequencies on time-scales of several months to a few years; had the 22~GHz flare propagated to such low frequencies, it should if anything have increased the 5.2~GHz flux density at the time of our observation, so the estimated spectral index should be indicative of the real one.
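The numbers above follow from the usual power-law convention $S_\nu \propto \nu^{\alpha}$. As a minimal numeric sketch (in Python; not part of the original analysis, and using only the flux densities and indices quoted above), the extrapolated 22~GHz flux density and the 5.2-22~GHz index can be reproduced as:

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index alpha, using the convention S_nu ~ nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

def extrapolate(s, nu_from, nu_to, alpha):
    """Extrapolate a flux density along the power law S_nu = s * (nu/nu_from)**alpha."""
    return s * (nu_to / nu_from) ** alpha

# J1713+3523: JVLA 5.2 GHz flux density and the quoted 1.4-5.2 GHz spectral index
s_52 = 3.81          # mJy
alpha_14_52 = -0.66

# Extrapolated 22 GHz flux density
print(f"{extrapolate(s_52, 5.2, 22.0, alpha_14_52):.2f} mJy")  # -> 1.47 mJy

# The observed 138 mJy at 22 GHz instead implies a strongly inverted spectrum
print(f"{spectral_index(s_52, 138.0, 5.2, 22.0):.2f}")  # -> 2.49, near the SSA limit of 2.5
```

Both printed values agree with the figures quoted in the text.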
The 22~GHz observation was done with the JVN, so it probes mas-scale emission in the source. J1713+3523 lies at $z$ = 0.083, giving a scale of 1.562~pc mas$^{-1}$, thus the observed flux density must have come from a region some tens of pc across. It is interesting that we do not see any signs of flattening of the spectral index in our $\alpha$ map, which would have been expected had the 22~GHz flare propagated to lower frequencies. An alternative to a conventional AGN radio flare is that the 22~GHz flare was a sign of recently started jet activity in J1713+3523, as seen in high-frequency peakers (HFP). In this case there could exist a quasi-stationary feature at higher radio frequencies, with the inverted radio spectrum produced by SSA \citep{2021odea1} or by free-free absorption by the ionised gas produced in the bow shock of the jet \citep{1997bicknell1}. The [O~III] lines of J1713+3523 are extremely blueshifted with $v$ = -674~km s$^{-1}$, providing further support for the influence of a jet still confined within its host galaxy and interacting with the interstellar medium. Interestingly, no [O~III] wing is present in this source.
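The quoted angular-to-physical scale follows from the angular diameter distance at the source redshift. A short numeric check (the adopted cosmology is not stated in this section, so a flat $\Lambda$CDM cosmology with $H_0$ = 70~km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m$ = 0.3 is an assumption here):

```python
import math

# Assumed cosmology (not stated in this excerpt): flat LambdaCDM, H0 = 70, Om = 0.3
H0, OM, OL = 70.0, 0.3, 0.7
C_KMS = 299792.458                        # speed of light, km/s
MAS_RAD = math.pi / (180 * 3600 * 1000)   # 1 mas in radians

def comoving_distance(z, steps=10000):
    """Comoving distance in Mpc, trapezoidal integration of 1/E(z)."""
    e = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    h = z / steps
    integral = 0.5 * (1 / e(0) + 1 / e(z)) + sum(1 / e(i * h) for i in range(1, steps))
    return (C_KMS / H0) * integral * h

def scale_pc_per_mas(z):
    """Projected physical scale in pc per milliarcsecond."""
    d_a_pc = comoving_distance(z) / (1 + z) * 1e6  # angular diameter distance, pc
    return d_a_pc * MAS_RAD

print(f"{scale_pc_per_mas(0.083):.3f} pc/mas")  # close to the 1.562 pc/mas quoted above
```

With these assumed parameters the result agrees with the quoted 1.562~pc mas$^{-1}$ to better than one per cent.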
All of the radio emission seen in the JVLA observation of J1713+3523 could be explained by star formation, as the W3 flux density is $\sim$17.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are roughly the same. On the other hand, neither the mid-infrared colour, W3-W4 = 2.11, nor the $q22$ of 0.62 necessarily indicates strong star formation. However, the source is not extraordinarily bright, with log $\nu L_{\nu, \textrm{int}}$ = 39.54 erg s$^{-1}$, lower than for jetted sources in \citetalias{2018berton1} in general. The higher frequency observations confirm the presence of a jet, but it does not seem to dominate the 5.2~GHz radio emission.
\subsubsection{J2242+2943}
J2242+2943 (Ark 564, $z$ = 0.025) is a radio-quiet NLS1 with a peak flux density of 5.61~mJy beam$^{-1}$, and an integrated flux density of 10.33~mJy that gives quite a low luminosity of log $\nu L_{\nu, \textrm{int}}$ = 38.88 erg s$^{-1}$. It was detected in NVSS with a flux density of 29.1~mJy, yielding a spectral index of -0.79 between 1.4 and 5.2~GHz, and at 95~GHz with a flux density of 1.14~mJy \citep{2015behar1}, giving a spectral index of -0.76 between 5.2 and 95~GHz. These results are consistent with our $\alpha$ map (Fig.~\ref{fig:J2242spind}), which has a total spectral index of -0.81, and a core spectral index of -0.85. The radio morphology of the source is clearly elongated toward the north, as also found in previous studies \citep[e.g.][]{2000moran1}, and the spectral index seems to flatten slightly northward. The tapered maps (Figs.~\ref{fig:J2242-90k} and \ref{fig:J2242-60k}) do not show any additional structures, and the radio emission is confined within the bulge of a galaxy with a barred spiral morphology (Fig.~\ref{fig:J2242-host}). This source exhibits complex X-ray properties, as both relativistic and sub-relativistic outflows have been detected \citep[e.g.,][]{2013gupta1, 2013gupta2, 2016khanna1}.
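The quoted luminosity and the NVSS-to-JVLA spectral index for Ark 564 can be verified with a short numeric sketch (again assuming a flat $\Lambda$CDM cosmology with $H_0$ = 70~km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m$ = 0.3, which is an assumption since the adopted cosmology is not stated in this section):

```python
import math

# Assumed cosmology (not stated in this excerpt): flat LambdaCDM, H0 = 70, Om = 0.3
H0, OM, OL = 70.0, 0.3, 0.7
C_KMS = 299792.458
MPC_CM = 3.0857e24  # cm per Mpc

def luminosity_distance(z, steps=10000):
    """Luminosity distance in cm (trapezoidal integration of the Hubble integral)."""
    e = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    h = z / steps
    integral = 0.5 * (1 / e(0) + 1 / e(z)) + sum(1 / e(i * h) for i in range(1, steps))
    d_c = (C_KMS / H0) * integral * h          # comoving distance, Mpc
    return d_c * (1 + z) * MPC_CM

def log_nu_l_nu(s_mjy, nu_ghz, z):
    """log10 of nu*L_nu in erg/s, from a flux density in mJy at nu (GHz)."""
    s_cgs = s_mjy * 1e-26                      # erg s^-1 cm^-2 Hz^-1
    l_nu = 4 * math.pi * luminosity_distance(z) ** 2 * s_cgs
    return math.log10(l_nu * nu_ghz * 1e9)

# Ark 564: S_int = 10.33 mJy at 5.2 GHz, z = 0.025
print(f"log nuL_nu = {log_nu_l_nu(10.33, 5.2, 0.025):.2f}")  # -> 38.88

# NVSS-to-JVLA two-point spectral index
alpha = math.log(10.33 / 29.1) / math.log(5.2 / 1.4)
print(f"alpha(1.4-5.2 GHz) = {alpha:.2f}")  # -> -0.79
```

Both values match the figures quoted in the text under the assumed cosmology.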
Whereas the asymmetric radio morphology indicates that J2242+2943 possesses a jet or a wind that produces radio emission, the star formation proxies suggest that star formation is also present. The W3-W4 colour is 2.57, $q22$ is 1.16, the W3 flux density is as much as $\sim$123.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$, and $S_{\textrm{int}}$ and $S_{\textrm{5.2GHz, CDFS}}$ are comparable. The low radio luminosity of the source also suggests that a possible jet cannot be very powerful, supported by the fact that no shift is seen in the [O~III] lines. However, an [O~III] wing is detected, with a velocity and a FWHM of -564 and 1090~km s$^{-1}$, respectively, so the mid-infrared emission might be enhanced by AGN-heated dust emission. It seems plausible that the radio emission is a combination of AGN and star formation related processes, but nothing definite can be said before more detailed observations.
\subsubsection{J2314+2243}
\label{sec:J2314}
J2314+2243 (RX J2314.9+2243, $z$ = 0.169) is a radio-quiet source with a peak flux density of 6.17~mJy beam$^{-1}$, and an integrated flux density of 7.02~mJy. It was also detected in NVSS with a flux density of 18.7~mJy, giving a spectral index of -0.75 between 1.4 and 5.2~GHz. This value is in agreement with our $\alpha$ map in Fig.~\ref{fig:J2314spind}: the total spectral index is -0.81, and the core spectral index is -0.85. The radio maps (Fig.~\ref{fig:J2314}) do not show any distinctive features, nor does the host galaxy in Fig.~\ref{fig:J2314-host}, within which the radio emission is confined.
The W3-W4 colour (2.33) and $q22$ (0.85) do not indicate strong star formation, and also $S_{\textrm{int}}$ is $\sim$4.5~mJy higher than $S_{\textrm{5.2GHz, CDFS}}$. However, $S_{\textrm{W3}}$ is 43.8~mJy higher than $S_{\textrm{1.4GHz, JVLA}}$. The origin of this excess emission can be either star formation in the host galaxy, or AGN-heated dust. The existence of a strong [O~III] wing with a velocity of -1031~km s$^{-1}$\ and a FWHM of 1579~km s$^{-1}$\ is in favour of the AGN-heated, possibly polar, dust being a major contributor to the mid-infrared emission. J2314+2243 would be unusually bright for a purely star forming galaxy, with log $\nu L_{\nu, \textrm{int}}$ = 40.48 erg s$^{-1}$, so it seems possible that the AGN is the main source of the radio emission. J2314+2243 could be another candidate for a CSS-like source.
\section{Extraordinary diversity of NLS1s}
\label{sec:discussion}
The most notable result is the remarkable diversity of NLS1s: they show a host of different components contributing to the radio, as well as the mid-infrared, emission, and no two sources are alike. The spatially resolved spectral index maps utilised in this paper offer a clear advantage in disentangling these components compared to conventional ways of determining spectral indices, which usually give one value for the whole emitting region. The $\alpha$ maps allow us to examine the spectral index and its changes over the whole emitting region, and to determine the most plausible origin of the emission in different parts of the source. In addition, the associated errors of the $\alpha$ maps derived using the mt-mfs method are considerably smaller than the errors of conventional estimates. Using this method, complemented by data at other wavelengths, some NLS1 groups with similar properties can be identified; they are discussed below.
\subsection{AGN-dominated sources}
In about one third of the sources in our sample the radio emission is clearly dominated by the AGN. These sources are characterised by moderate to high radio luminosities (all but one have log $\nu L_{\nu, \textrm{int}} >$ 39.00 erg s$^{-1}$), asymmetric radio morphologies, and/or variable radio emission, and flat spectral indices in some cases. We can see considerable diversity even within this group, probably due to different ages, inclination angles, and environments of the sources.
\subsubsection{Large-scale jets}
A few of our sources exhibit radio jets with extents of the order of tens of kpc, with the jets reaching well outside the host galaxy. In some sources we are also able to detect counter-jet emission, indicating that these sources are seen at rather large inclination angles, and that projection effects cannot be responsible for the narrowness of their permitted lines.
Based on simple travel-time arguments these sources cannot be kinematically very young, and must have maintained the jets for quite a long period of time for them to propagate through the host interstellar medium. \citet{2009czerny1} propose that intermittent activity due to radiation pressure instabilities of the accretion disk could be common in young radio sources accreting at high rates; the active, jetted phases would last for 10$^3$-10$^4$ years, separated by inactive, non-jetted periods of 10$^4$-10$^6$ years. It is unclear whether such short activity periods can be responsible for the kpc-scale emission seen in these impressive sources since the final extent of the radio emission after the jets have turned off depends on several factors, for example, the jet power, the length of the activity period, and the properties of the medium it is expanding into.
Interestingly, one source in our sample, J1302+1624, shows possible relic emission with a very steep spectral index. This seems to be a consequence of its jets changing direction \citep{2020congiu1}, and with the current data we cannot say whether the jets turned off at some point or not. On the other hand, the most extended NLS1 discovered so far, J0814+5609, does not show signs of intermittent activity, and based on its flat core and observations at other radio frequencies, the jets are currently active. However, this source might represent a transition object between NLS1s and broad-line radio galaxies, since its black hole mass is large for an NLS1 (10$^{8.44}$~M$_\odot$, \citealp{2016berton2}), and its FWHM(H$\beta$) is on the threshold between NLS1s and BLS1s (2164~km s$^{-1}$, \citealp{2015foschini1}). The other sources with large-scale jet emission in our sample do not show signs of intermittency either, and their black hole masses are well below 10$^8$~M$_\odot$, implying that large-scale jets are not exclusively a property of high black hole mass sources. Morphologically, these sources resemble small broad-line radio galaxies, and are further proof that eventually some NLS1s might evolve into fully developed radio galaxies \citep{2016berton1}.
\subsubsection{Compact sources and small-scale jets}
Another group whose radio emission is dominated by the AGN are the sources with compact morphologies, showing no or only moderate extended radio emission, or small-scale jets. In these sources the high AGN activity - usually in the form of jets - was confirmed by, for example, a flat spectrum, high luminosity, variability, or previous high resolution radio observations. Interestingly, many of these sources are nominally radio-quiet, casting yet another shadow over the use of the radio-loudness parameter as a proxy of AGN activity.
The compactness of these sources indicates that they are either less extended, and thus possibly younger, sources, or seen at smaller inclination angles. The latter might be true especially in the case of the most luminous sources, where the emission is probably enhanced by considerable beaming. The lower luminosity sources are probably either seen at large enough angles for the beaming not to have considerable effects and/or host slower, sub-relativistic jets.
An interesting individual is J1633+4718, since it resides in a late-type galaxy, most likely hosts relativistic jets, and is interacting with another late-type galaxy \citep{2020olguiniglesias1}. It has been hypothesised that interaction might play a role in triggering the nuclear activity \citep{1999taniguchi1, 2008barth1, 2008urrutia1, 2011ellison1}, and it seems that jetted NLS1s might be a prime example of this, since previous studies have shown that interaction and mergers are more common among jetted than non-jetted NLS1s \citep[e.g.,][]{2017olguiniglesias1, 2018jarvela1, 2019berton1}. However, it should be remembered that in general the jetted sources studied reside at higher redshifts than the non-jetted ones, and the impact of this difference has not been properly addressed yet.
\subsubsection{CSS-like sources}
We identify five new NLS1s that exhibit properties consistent with the CSS classification: J0806+7238, J1047+4725, J1138+3653, J1348+2622, and J2314+2243. These sources show compact morphology, steep spectral index, and luminosity that is too high to be explained by star formation or even a starburst. Three out of five sources exceed the traditional luminosity threshold for CSS sources at 5~GHz (log $\nu L_{\nu, \textrm{int}}$ = 40.60 erg s$^{-1}$), and the luminosities of the other two are log $\nu L_{\nu, \textrm{int}} >$ 40.00 erg s$^{-1}$.
A link between NLS1s and CSS sources has already been suggested several times, and some NLS1s have been identified as CSS sources in the literature \citep[e.g., ][]{2001oshlack1, 2006gallo1, 2014caccianiga1, 2016berton1}. These results lead to a tentative model where some CSS sources, especially the low-luminosity compact objects (LLCs) could be a part of the parent population of jetted NLS1s \citep{2016berton1}. According to \citet{2010kunertbajraszewska2} some LLCs can be classified as high-excitation sources, similarly to NLS1s, and their intrinsic properties, such as black hole mass and Eddington ratio distributions, show significant overlap with beamed NLS1s. Also their radio luminosity functions are consistent with each other, when beaming is taken into account, and they show similar, simple radio morphologies, without considerable diffuse emission.
What is not yet known is whether their host galaxies, and larger, group- and cluster-scale environments, are similar, and if so, to what extent \citep{2021odea1}. The environments of high-excitation LLCs and most CSS-like NLS1s have not been studied systematically, and also the knowledge of the environments of beamed NLS1s is far from comprehensive. An additional issue that must be taken into account in future studies is the selection criteria of LLCs: in \citet{2010kunertbajraszewska1} the sources have 5~GHz flux densities $>$ 150~mJy, and in \citet{2010kunertbajraszewska1} they have 1.4~GHz flux densities $>$ 70~mJy. Most CSS-like NLS1s are fainter than this, thus in order to have comparable luminosities they must be at higher redshifts, which might induce some selection biases in the samples. However, the unification scenario of sub-populations of NLS1s and CSS sources should definitely be studied further.
\subsubsection{Large-scale outflows}
A third group of AGN-dominated sources are the ones with large-scale outflows. These sources are rather scarce in our sample, but this might partly be because, without higher-resolution observations, they are hard to distinguish from sources with slow, poorly collimated jets. Their luminosities are also quite low, so unless they can be identified based on morphology, or on actual observations of the outflow, they can easily go undetected. Outflows and jets can also co-exist (for example, J1703+4540, \citealp{2010gu1,2018longinotti1}), and since the radio emission of a jet is generally brighter than that of an outflow, the jet dominates the radio band.
J0804+7248 is an example of a source that could host an outflow instead of a jet: it is only moderately luminous, its radio morphology is clearly asymmetric but does not look collimated, and its [O~III] lines are blueshifted, implying that the NLR is experiencing bulk motion. Another similar source, J0632+6340, shows a two-sided non-collimated radio morphology at the same PA as its [O~III] ionisation cones, and has a very low luminosity, implying that the low-speed jets or outflows have very low power.
\subsection{Host-dominated sources}
About a dozen of our sources seem to be dominated by radio emission that originates from the host galaxy, that is, from star formation related processes. They do not show any clear signs of AGN activity in radio, and the star formation activity is sufficient to explain their radio emission. Their radio morphologies are generally featureless, and the emission is confined within the host, or even the host bulge, hinting at circumnuclear star formation. In some cases the radio morphology is scattered in patches around the core, and in some cases it seems to trace the host galaxy features. The sources for which BPT diagram data exists are consistently classified as star-forming galaxies, suggesting that the stars are the main source of the ionising continuum. Interestingly, more than half of these sources show a prominent [O~III] wing, indicating that nuclear winds reaching the inner NLR are present.
\subsection{Composite sources}
Approximately one third of the sources can be classified as composite: in these sources both the AGN and the host galaxy have a significant contribution to the total radio emission. These sources show a mixed variety of properties characteristic of AGN-dominated as well as of star formation dominated sources. They can show, for example, a flat-spectrum core, a radio morphology resembling a jet, or very high luminosity, and, on the other hand, patchy radio emission reminiscent of circumnuclear star formation, PAH features, or other positive star formation proxies, indicating star formation even at a starburst level. The different BPT diagrams also often give mixed results for these sources, indicating that both the AGN and the starlight continuum are significant. In addition, different kinds of outflows have been detected in these sources, including UFOs, BLR and NLR outflows, and molecular outflows. As a cherry on top, one of these sources was identified as a water maser source.
The ensemble of properties seen in these sources beautifully demonstrates why NLS1s should not be treated as a class consisting of similar sources, but as individuals. Caution should also be exercised when using simple diagnostic tools, since they cannot properly take into account the versatility seen in these sources, and might give misleading results when used on their own. This matter is further discussed in Sect.~\ref{sec:issues}.
\subsubsection{Peculiar sources}
There are some unique sources that do not fit any of the categories described earlier. An obvious one is J1337+2423, which on closer inspection turned out not to be an NLS1, but a BLS1 galaxy \citep{2013lee1}. This source highlights a possibly bigger issue in the identification of different types of Seyferts: most of them have been classified based on quite low signal-to-noise (S/N) ratio spectra, and often using automated algorithms. Subtle, but crucial, details of the emission lines, such as the wings, cannot be properly modelled using low S/N data, which can easily lead to misclassification of NLS1s and BLS1s, and confusion with intermediate Seyferts \citep{2020jarvela1, 2021peruzzi1}. Mixed samples will add undesired noise to statistical population studies, and hinder our efforts to understand the nature of the different classes of AGN.
J1038+4227 is another extremely interesting object, since its radio morphology does not resemble anything we see in other NLS1s. However, its optical spectrum clearly identifies it as one: its FWHM(H$\beta$) = 1979~km s$^{-1}$, and it shows very strong Fe~II multiplets. Were it a more evolved source, for example, an FSRQ with a flattened BLR seen face-on, we would not expect to see iron in the spectrum \citep{2018marziani2,2020berton1}. J1038+4227 is a luminous source, and since the core is responsible for most of its radio emission and shows a steep spectral index, it would be consistent with a CSS classification. The symmetrical, almost circular, faint radio emission reaching well outside the host galaxy is what sets J1038+4227 apart from other NLS1s. Given its huge size, its origin is probably the AGN, and it seems plausible that it is a radio lobe. It resembles the kpc-scale morphologies seen in some blazars \citep{2010kharb1}, but is considerably fainter. Unfortunately, this structure is too faint for us to estimate its spectral index, and to derive any information about its age, or whether it is a relic or still replenished. However, taking into account its circular shape, it seems that the jet must have pointed close to our line of sight, which does not seem to be the case right now; otherwise we would expect to see a flatter spectral index in the core. The nature of this source remains unclear for now, but it definitely deserves to be studied in detail in the future.
The third source that stands out is J1713+3523. It has a steep spectral index at least up to 5.2~GHz, and does not show strong signs of AGN activity at that frequency. In fact, its 5.2~GHz radio emission could be explained with star formation alone. However, the 22~GHz detection of 138~mJy is undeniable proof that this source hosts at least moderately powerful jets, and it furthermore yields a 5.2-22~GHz spectral index close to the theoretical SSA maximum. It is interesting that no signs of the jets are seen at lower frequencies. This source somewhat resembles an extraordinary group of previously radio-quiet or totally radio-silent NLS1s, first discovered at 37~GHz due to their bright (hundreds of mJy) and variable radio emission, characteristic of relativistic jets \citep{2018lahteenmaki1}. These sources were later observed with the JVLA, and all of them showed a steep 1.6-9.0~GHz radio spectrum, with no signs of possible jets \citep{2020berton2}. The nature of these sources is still somewhat unclear, but absorption, either SSA or free-free, has been suggested as the culprit behind their unusual radio spectra. More observations of J1713+3523 are certainly needed to explain the observed properties of this source.
\subsection{Implications and issues}
\label{sec:issues}
The results of this paper strengthen the view of NLS1s as a wonderfully diverse class of sources. However, this versatility might also cause additional trouble, especially when studying larger samples of sources, rather than individuals, and thus caution is required when dealing with large samples and/or statistical studies. Below we discuss the biggest issues resulting from this diversity.
\subsubsection{NLS1s are \emph{not} a homogeneous class}
It cannot be stressed enough that the variety of sources under the NLS1 umbrella is vast. After all, the classification is based solely on the narrowness of the permitted lines, which indicates an undermassive black hole, and the oxygen-to-hydrogen ratio, which ensures they are Type 1 sources. This does not tell us anything about the activity state of the AGN, which ranges from radio-silent sources to rare individuals that are able to maintain fully developed relativistic jets. It also does not reflect the properties of the host galaxy, and in NLS1s the AGN rarely dominates the whole electromagnetic spectrum over the host, unlike in quasars or blazars. Even if this might be true for some NLS1s, the majority do not fit one mould, as demonstrated in this paper. It should also be kept in mind that the radio view of NLS1s is very different from the optical view: whereas their optical spectra are similar, their radio properties absolutely are not.
\subsubsection{Advice against using simple proxies}
The amazing diversity of NLS1s, and the varying contributions of the AGN and the host along the electromagnetic spectrum, can have unexpected effects when using relations and proxies calibrated with more homogeneous or simpler samples of sources. This has been continuously demonstrated regarding the use of the radio-loudness parameter, which might have originally served as an acceptable first-order estimate of the activity of the AGN, but fails to perform when more complicated sources are included in the scenario \citep{2017jarvela1,2017padovani1}.
In this paper we used several star formation diagnostics based on mid-infrared data, or on relations between radio and mid-infrared emission. It soon became evident that these proxies are not ideal tools to estimate the strength of star formation in our sources. They often show results contradicting each other, as well as results in the literature obtained with more reliable methods, for example, PAH features. It is possibly the complexity of NLS1s that fools these proxies, and especially in the mid-infrared the presence of an emphasised dust component that can mimic the properties of emission produced by star formation. Nevertheless, we decided to include these diagnostics in our results to highlight their behaviour in the case of NLS1s, and as a general caution for future, especially statistical, studies using similar diagnostics.
\section{Summary and conclusions}
\label{sec:summary}
In this paper we analysed 44 NLS1 galaxies by means of spatially resolved radio spectral index maps. In addition, we employed mid-infrared diagnostics to estimate the star formation activity in their hosts. With these tools, complemented by archival data, we estimated the predominant source or sources of radio emission in them. Based on these analyses our main conclusions can be summarised as:
\begin{enumerate}
\item NLS1s are an exceptionally heterogeneous class of AGN, and in the radio band they range from individuals with blazar- and radio galaxy-like characteristics to sources whose radio emission is dominated by the star formation activity in the host galaxy. Furthermore, some of them could actually be classified as CSS sources in addition to the NLS1 classification, thus providing further confirmation of the scenario in which NLS1s are AGN in an early evolutionary stage.
\item Due to this diversity and the varying contributions of different sources of radio as well as mid-infrared emission, NLS1s are not ideal targets for simple proxies, such as the radio-loudness parameter, or for star formation diagnostics.
\item As the NLS1 classification is based solely on the optical spectrum, and includes all Type 1 sources with narrow permitted lines, it does not reflect their other physical properties, for example, in the radio band. As a result, the NLS1 classification by itself does not tell us much, as these sources do not necessarily share similar physical properties. Furthermore, the threshold of FWHM(H$\beta$) = 2000~km s$^{-1}$ is somewhat arbitrary and should not be regarded as a real physical limit. These remarks should be kept in mind, especially when studying NLS1s as a class or when using the general NLS1 population in wider AGN studies.
\end{enumerate}
NLS1s are complicated sources, and any conclusions regarding the whole class cannot and should not be drawn lightly. Instead, these sources should first and foremost be studied as individuals, at least until we understand their nature, their distinctive subclasses, and their characteristics better. As demonstrated by this study, the majority of them require very high-quality data and detailed data analysis to disentangle the contributions of the different elements. Even then, many of the sources in our sample will require further observations, for example, in the radio at higher resolution, or with integral field spectroscopy, to unquestionably determine their nature and physical properties. Future facilities, such as the Cherenkov Telescope Array (CTA), the Advanced Telescope for High Energy Astrophysics (ATHENA), and Euclid, will also be essential to improve our understanding of this peculiar class of objects. Despite the complexity, NLS1s are definitely a class worth investigating. The jetted NLS1s are an ideal laboratory for studying the launching of jets, and the first stages of the evolution of more powerful FSRQs and high-excitation radio galaxies. In general, in NLS1s the interplay of the host and several AGN-related phenomena, for example, jets and outflows, offers a great opportunity to examine AGN feedback in action.
\begin{acknowledgements}
E.J. is a current ESA research fellow. E.C. acknowledges support from ANID project Basal AFB-170002. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/. 
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\end{acknowledgements}
\bibliographystyle{aa}
\section*{Appendix: String ansatz for broad excitations}
\newcommand{\rb}[1]{|#1)}
\newcommand{\lb}[1]{(#1|}
\newcommand{\ew}[1]{[\textcolor{red}{\textbf{EW:}} #1]}
In this appendix we provide the details of the extended quasiparticle ansatz for describing broad low-energy excitations (see Sec.~\ref{sec:bound} in main text).
\subsection{Implementation}
We first recall that the quasiparticle ansatz is a momentum superposition of the uniform MPS ground state in which one tensor is distorted
\begin{equation} \label{eq:qp}
\ket{\Phi_p(B)}= \sum_n e^{ipn} \diagram{appendix}{1} .
\end{equation}
For further details on uniform MPS, and in particular on the gauge fixing, we refer to the tangent-space review in Ref.~\cite{Vanderstraeten2019}. Here we just mention that we always work in the mixed gauge, and that the distortion tensor $B$ obeys the left gauge-fixing condition $\sum_{s=1}^d A_L^{s \dag} B^s = \sum_{s=1}^d B^{s \dag} A_L^{s} = 0$
in which we assumed the ground-state tensor $A$ to be in the left-canonical form $\sum_s A_L^{s\dag} A_L^{s} = \mathds{1}$.
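As an illustration, the left-canonical form and the left gauge-fixing condition can be sketched with plain numpy. The dimensions, random tensors, and variable names below are our own illustrative choices, not part of the actual implementation:

```python
import numpy as np

D, d = 4, 2                                   # bond and physical dimension (toy values)
rng = np.random.default_rng(0)

# Group the (left-virtual, physical) legs of a random tensor A into rows and
# take a complete QR decomposition.
A = rng.standard_normal((D * d, D))
Q_full, _ = np.linalg.qr(A, mode='complete')
A_L = Q_full[:, :D]                           # left-canonical: sum_s A_L^{s dag} A_L^s = 1
V_L = Q_full[:, D:]                           # orthonormal complement, shape (Dd, D(d-1))

# Any B of the form V_L X automatically obeys the left gauge-fixing condition,
# and the D^2 (d-1) entries of X are the truly variational parameters.
X = rng.standard_normal((D * (d - 1), D))
B = V_L @ X

assert np.allclose(A_L.T @ A_L, np.eye(D))    # left-canonical condition
assert np.allclose(A_L.T @ B, 0.0)            # sum_s A_L^{s dag} B^s = 0
```

Parametrizing $B$ through the orthonormal complement in this way is also what makes the norm of the excitation wavefunction trivial.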
\par The quasiparticle ansatz~\eqref{eq:qp} describes local, low-energy excitations with extreme precision \cite{Haegeman2011}. However, as the variational subspace on top of the MPS ground state is very localized, it is not expected to accurately capture the effect of large physical operators acting on the ground state~\cite{Haegeman2013}. Therefore, broad excitations such as the soliton bound states in the $J_1-J_2-J_3$ model (see main text) are not well described by this ansatz.
\par In Ref.~\cite{Haegeman2013} it is suggested to increase the variational support by spreading the distortion over several sites
\begin{multline}\label{eq:block_qp}
\ket{\Phi_p(B)}=\\ \sum_n e^{ipn} \diagram{appendix}{2}.
\end{multline}
The $B$-block in this ansatz contains $D^2 d^N$ elements, where $D$ is the bond dimension and $d$ the physical dimension. Taking into account the gauge fixing, $D^2 (d-1)d^{N-1}$ elements are truly variational. The exponential scaling in the number of distorted sites makes this ansatz hard to use, except for the paradigmatic AKLT model~\cite{Haegeman2013, Haegeman2013b}, for which the ground state can be exactly represented by an MPS with bond dimension 2.
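The difference in scaling can be made concrete with a short counting script. The formula for the block ansatz is the one quoted above; the count for the string decomposition of the next paragraph is only a rough illustrative estimate under assumed tensor shapes:

```python
def block_params(D, d, N):
    """Variational parameters of the N-site block ansatz after gauge fixing:
    D^2 (d-1) d^(N-1), i.e. exponential in N."""
    return D**2 * (d - 1) * d**(N - 1)

def string_params(D, d, N, D_max):
    """Rough count for a string of one-site tensors with internal bond
    dimension D_max (assumed shapes; linear in N)."""
    return D * (d - 1) * D_max + (N - 1) * d * D_max**2

# AKLT values: bond dimension 2, spin-1 physical dimension 3
D, d = 2, 3
for N in (4, 8, 12):
    print(N, block_params(D, d, N), string_params(D, d, N, D_max=54))
```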
\par A more efficient representation is given by a tensor decomposition of the $B$-block
\begin{multline} \label{eq:string_qp}
\ket{\Phi_p(B_1 \cdots B_N )} = \\ \sum_n e^{ipn} \diagram{appendix}{3}.
\end{multline}
Here the gauge-fixing condition only applies to the first tensor ($B_1$), such that the other tensors in the decomposition ($B_2,\dots,B_N$) are purely variational.
\par With this decomposition the number of variational parameters can be chosen to scale linearly with $N$, depending on the choice of the limiting bond dimension $D_{\max}$ inside the excitation string. The excitation string can now be seen as a finite-size subsystem on top of the ground state, and can be optimized with standard finite-size DMRG methods \cite{Schollwoeck2011} that are computationally efficient.
\par In order to construct such an efficient DMRG scheme to optimize over the excitation tensors $B^{s_{n}}_1 \cdots B^{s_{n+N-1}}_N$ in~\eqref{eq:string_qp}, we need to construct the effective Hamiltonian for a one-site update
\begin{multline} \label{eq:Heff_mat}
2 \pi \delta(0) \bm{B}_i^{\dag} H_\mathrm{eff}^i(p)\bm{B}_i = \\ \braket{\Phi_p(B_1 \dots B_N ) | \hat{H} | \Phi_p(B_1 \dots B_N )},
\end{multline}
where the bold symbols denote the vectorized version of the corresponding tensor, and $\hat{H} = \sum_n \hat{h}_{n,\dots,n+M-1}$ is the many-body Hamiltonian consisting only of local $M$-body interactions. Two-site update schemes may equally well be considered, but they make the construction of the effective Hamiltonians more cumbersome. The construction of $H_\mathrm{eff}^i(p)$ boils down to the knowledge of the matrix element on the right-hand side of Eq.~\eqref{eq:Heff_mat}. Translation invariance implies that the terms in the matrix element contain at most a double infinite sum over transfer matrices $ \sum_{s=1}^d A^s \otimes \bar{A}^{s}$. Still, we need to take into account all different relative positions of the local Hamiltonian $\hat{h}$ with respect to all excitation tensors that appear in the bra and the ket layer. Here the left gauge fixing of the first excitation tensor significantly reduces the number of terms.
\par Once we have constructed $H_\mathrm{eff}^i(p)$, we can update the $i$-th site by solving the generalized eigenvalue problem
\begin{equation} \label{eq:string_effeig}
H_\mathrm{eff}^i(p)\bm{X}_i = \omega(p) N_\mathrm{eff}^i(p) \bm{X}_i .
\end{equation}
If the MPS ground state and the $B$-tensors are in the mixed canonical form at each update, the eigenvalue problem reduces to a standard one, i.e. $N_\mathrm{eff}^i(p)$ is the unit matrix. By sweeping through the excitation string, the excitation energy is gradually lowered until convergence. With the obvious initialization $B_2,\dots,B_N = A$, the starting energy equals the lowest energy obtained by~\eqref{eq:qp}. Higher-energy excitations may be found by projecting away the lower-lying ones.
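A single-site update then amounts to a small dense (generalized) eigenvalue problem. The toy sketch below uses random Hermitian stand-ins for $H_\mathrm{eff}^i(p)$ and $N_\mathrm{eff}^i(p)$, since building the true effective matrices from the transfer-matrix sums is beyond a few lines:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 6                                    # size of the one-site update (toy value)

# Random Hermitian H_eff and positive-definite N_eff as stand-ins.
M = rng.standard_normal((n, n))
H_eff = (M + M.T) / 2
C = rng.standard_normal((n, n))
N_eff = C @ C.T + n * np.eye(n)

# Generalized eigenvalue problem H_eff X = omega N_eff X; eigh returns the
# eigenvalues in ascending order, so omega[0] is the lowest excitation energy.
omega, X = eigh(H_eff, N_eff)

# In the mixed canonical form N_eff is the identity and the problem is standard.
omega_std, _ = eigh(H_eff)
```

Sweeping through the string then repeats this update site by site until the lowest eigenvalue stops decreasing.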
\subsection{Benchmarks}
We demonstrate the accuracy of the string ansatz by comparing its energy solution with the solution of the full problem given by Eq.~\eqref{eq:block_qp} for the fundamental magnon at momentum $p=\pi$ in the AKLT model. This comparison is shown in Tab.~\ref{tab:aklt}. We raise the bond dimension inside the excitation string up to the limiting values $D_\mathrm{lim}=54$ and $D_\mathrm{lim}=108$; this corresponds to an exact decomposition of the full $B$-tensor in terms of separate blocks up to $N=7$ and $N=8$ sites, respectively. For $N>\left\lbrace 7,8\right\rbrace$ the number of variational parameters scales linearly in the number of added sites, instead of exponentially as in~\eqref{eq:block_qp}. For $N>\left\lbrace 7,8\right\rbrace$ we can thus never recover the same precision as the original results in the first column, though the difference appears to be negligible in practice. However, because of the computational efficiency, we can go to a higher number of sites. Consequently, we can recover and even slightly improve the precision obtained by optimizing Eq.~\eqref{eq:block_qp}.
\begin{table}
\begin{center}
\begin{tabular}{cccc}
\toprule
$N$ & tensor & string $D_\mathrm{lim}=54$ & string $D_\mathrm{lim}=108$\\
\midrule
1 & .3703703703703 & .370370370370370 & .370370370370370\\
2 & .3506345810861 & .350634581086136 & .350634581086135\\
3 & .3501652022172 & .350165202217295 & .350165202217298\\
4 & .3501291730768 & .350129173076821 & .350129173076823\\
5 & .3501247689418 & .350124768941853 & .350124768941852\\
6 & .3501242254394 & .350124225439428 & .350124225439427\\
7 & .3501241645674 & .350124164567495 & .350124164567493\\
8 & .3501241580969 & .350124158096968 & .350124158096949\\
9 & .3501241574175 & .350124157417571 & .350124157417519\\
10 & .3501241573460 & .350124157346082 & .350124157346044\\
11 & .3501241573384 & .350124157338518 & .350124157338485\\
12 & .3501241573376 & .350124157337713 & .350124157337683\\
13 & - & .350124157337627 & .350124157337597\\
14 & - & .350124157337619 & .350124157337586\\
\bottomrule
\end{tabular}
\caption{Excitation energies of the magnon branch at momentum $\pi$ of the AKLT model. The ansatz substitutes $N$ sites in the MPS. The first column is copied from Ref.~\cite{Haegeman2013} (see the table on p.~34) and is obtained with the ansatz~\eqref{eq:block_qp}. The second and third columns are obtained with the ansatz~\eqref{eq:string_qp}, in which the internal bond dimension of the string is limited to $D_\mathrm{lim}=54$ and $D_\mathrm{lim}=108$, respectively. The eigensolvers used to obtain these energies are reliable up to 14 digits (15 digits are shown in the second and third columns, and 13 in the first).} \label{tab:aklt}
\end{center}
\end{table}
We now consider the $S=1/2$ Ising model in a tilted magnetic field, whose Hamiltonian is given by
\begin{equation} \label{eq:IsSkew}
\hat{H} = - \sum_{i} \left( \hat{\sigma}^x_{i}\hat{\sigma}^x_{i+1} + h_{\perp} \hat{\sigma}^z_i + h_{\parallel} \hat{\sigma}^x_i \right) ,
\end{equation}
where $h_{\perp}$ describes a transverse field and $h_{\parallel}$ an additional longitudinal field.
For $h_{\parallel}=0$, in the ordered regime far enough from the critical point, topological excitations in the form of domain walls occur. By applying the Jordan-Wigner transformation to the Hamiltonian, these domain walls can be represented as free fermions~\cite{Rutkevich2008}.
When a longitudinal field $h_{\parallel}>0$ is applied, the $\mathbb{Z}_2$ symmetry is broken. This energetically favours one of the two previously degenerate ground states and induces an attractive force between pairs of domain walls -- they form a state of bound spinons. When the applied field is not too large, the force can be modeled by the cost of adding one site that is in the `wrong' ground state: $\mu = 2 h_{\parallel} \bar{m}$, where $\bar{m} = (1-h_{\perp}^2)^{1/8}$~\cite{Rutkevich2008}. Hence, the semi-classical Hamiltonian in the relative variables that describe the weakly confined spinons (or the slightly interacting fermions) just describes a particle moving in a linear potential. The time-independent Schr\"{o}dinger equation for this Hamiltonian is the Airy equation, and the low-lying energy spectrum is then determined by the zeros of the Airy function~\cite{Rutkevich2008}. At momentum $p=0$ the energy can be approximated as
\begin{equation} \label{eq:airyspec0}
E_n(0) \approx 4 (1-h_{\perp}) + \mu^{2/3} \left[ \frac{2 h_{\perp}}{1-h_{\perp}} \right]^{1/3}\xi_n ,
\end{equation}
with $\xi_n$ determined by $\mathrm{Ai}(-\xi_n)=0$.
We applied the ansatz~\eqref{eq:qp} and the extended ansatz~\eqref{eq:string_qp} at $p=0$ to calculate the excitations in this model for $h_{\perp}=0.7$, in the weak confining regime with $h_{\parallel}=0.0075$. The results are shown in Fig.~\ref{fig:airyspec}, together with the energies predicted by Eq.~\eqref{eq:airyspec0}. The quasiparticle ansatz ($N=1$) does not yet reveal the full Airy behavior of the spectrum. By increasing the spatial support of the excitation ansatz, however, we observe a fast decrease of the excitation energies; the higher the excitation energy, the more significant the decrease. The highest excitation under study remains stuck in the continuum for the smallest $N$. The observation that the energies are always lower than those predicted by~\eqref{eq:airyspec0} probably has to do with the effect of higher-order terms in the expansion of the kinetic energy of the spinons.
For a stronger longitudinal field, we expect faster convergence as a function of $N$, but stronger deviations from the Airy spectrum.
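For reference, the Airy prediction of Eq.~\eqref{eq:airyspec0} for the parameters above can be evaluated directly; this short script only reproduces the analytic estimate, not the MPS results:

```python
import numpy as np
from scipy.special import ai_zeros

h_perp, h_par = 0.7, 0.0075              # parameters used in the text
m_bar = (1 - h_perp**2)**(1 / 8)         # spontaneous magnetization m-bar
mu = 2 * h_par * m_bar                   # strength of the linear confining potential

# The xi_n solve Ai(-xi_n) = 0; ai_zeros returns the (negative) zeros of Ai.
xi = -ai_zeros(5)[0]

E = 4 * (1 - h_perp) + mu**(2 / 3) * (2 * h_perp / (1 - h_perp))**(1 / 3) * xi
print(E)                                 # low-lying p = 0 energies E_1 ... E_5
```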
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{./airy}
\caption{Excitation energies at $p=0$ in the Ising model~\eqref{eq:IsSkew} for $h_{\perp}=0.7$ and $h_{\parallel}=0.0075$, as a function of the spatial support of the ansatz with $D_\mathrm{lim}=40$. By increasing $N$ it becomes clear that the energies follow an Airy-like spectrum.}\label{fig:airyspec}
\end{figure}
\section{Spinons and matrix product states}
\label{sec:mps}
The formalism of translation-invariant matrix product states (MPS) in the thermodynamic limit---the so-called uniform MPS---has been developed for simulating static and dynamic properties of quantum spin chains \cite{Vanderstraeten2019}. In particular, it yields a natural description of elementary excitations as localized particles against a strongly-correlated background \cite{Vanderstraeten2015}. When implementing physical symmetries into the MPS parametrization, definite quantum numbers can be assigned to these particles \cite{ZaunerStauber2018b}. In this section, we explain how this formalism is applied to $\ensuremath{\mathrm{SU}}(2)$ spin chains and how particles with both integer and fractional quantum numbers naturally emerge from the MPS formalism.
\subsection{Ground states}
It is, by now, well-known that matrix product states (MPS) provide an efficient parametrization of ground states of (gapped) quantum spin chains \cite{Hastings2006, Verstraete2006}. Although most state-of-the-art MPS algorithms are formulated on finite chains \cite{Schollwoeck2011}, the MPS formalism is laid out most elegantly when working in the thermodynamic limit directly \cite{Vidal2007, Haegeman2011}. Indeed, a translation-invariant ground state can be represented as an MPS where we just repeat the same tensor $A$ on each site in the chain. This can be generalized to states with larger unit cells, where we repeat the same sequence of tensors $\{A_1, A_2, \dots\}$. The state is represented diagrammatically as
\begin{multline*}
\ket{\Psi(A_1,\dots,A_n)} \\= \dots \diagram{main}{1} \dots,
\end{multline*}
and is translation invariant over $n$ sites by construction. In recent years, it was shown that it is possible to variationally optimize over this set of states directly in the thermodynamic limit to find accurate ground-state approximations for a given hamiltonian \cite{ZaunerStauber2018a, Vanderstraeten2019}.
\par The real power of MPS is laid bare when imposing symmetry constraints on the tensors that reflect the physical symmetries in the system. Indeed, it has been realized that an MPS can only be invariant under certain global symmetry operations on the physical degrees of freedom if the virtual legs transform under the same symmetry \cite{PerezGarcia2006}. Put differently, if an MPS is invariant under the global symmetry operation $U(g)=\bigotimes_iu_i(g)$,
\begin{equation*}
U(g) \ket{\Psi(A)} = \ket{\Psi(A)}, \qquad \forall g,
\end{equation*}
it follows that the MPS tensor itself transforms as
\begin{equation*}
\diagram{main}{2} = \diagram{main}{3}.
\end{equation*}
In general, the representation $V_g$ can be decomposed in a direct sum of (projective) irreps of the physical symmetry group, so that the MPS tensor decomposes into a number of blocks that are labeled by the irreps on each leg. In order for the total MPS wavefunction to transform trivially under the global symmetry operation, it is required that the tensor itself only contains non-zero blocks for which the three irreps fuse to the trivial representation---i.e., the tensor itself globally transforms trivially.
\par In the case of a quantum spin chain with $\ensuremath{\mathrm{SU}}(2)$ invariance, where the physical degrees of freedom transform under a specific spin-$s$ representation, the virtual degrees of freedom transform as a direct sum of representations of $\ensuremath{\mathrm{SU}}(2)$ labeled by $j_1$ and $j_2$. Each block of the tensor is, therefore, labeled by three spins,
\begin{equation*}
\diagram{main}{4},
\end{equation*}
and is only non-zero when $j_1$ and $s$ can fuse to $j_2$.
\par Let us first investigate what this implies for a spin-1/2 chain. Suppose we want to write down an $\ensuremath{\mathrm{SU}}(2)$ invariant MPS with a one-site unit cell. Because the MPS tensor $A$ has to transform as a singlet, we have the following condition on the allowed irreps on the bonds
\begin{equation*}
\diagram{main}{5} \rightarrow \left| j_1 - j_2 \right| =\frac{1}{2},
\end{equation*}
which implies that a half-integer $j_1$ only couples to an integer $j_2$, and vice versa. If we build up an MPS using this tensor,
\begin{equation*}
\diagram{main}{6},
\end{equation*}
then this state falls apart into a sum of two states where the first state has $j_1$ half-integer, $j_2$ integer and so forth, and the second state contains the other representations. Therefore, for describing a singlet ground state for an $s=1/2$ spin chain, we need at least a two-site unit cell, where we alternate between half-integer and integer representations:
\begin{equation*}
\diagram{main}{7}
\end{equation*}
where $j_1$ ($j_2$) only has half-integer (integer) representations, or vice versa.
\par This result implies that an MPS cannot describe a unique ground state of a spin-1/2 chain, a result that is closely connected to the Lieb-Schultz-Mattis theorem \cite{Lieb1961, Oshikawa2000}, stating that a spin-1/2 chain cannot host a unique gapped ground state. Here we find that an MPS representation of a ground state necessarily breaks translation invariance, and, therefore, that the translated state is an equally good ground-state approximation. The simplest example of an $\ensuremath{\mathrm{SU}}(2)$-invariant MPS on a spin-1/2 chain is the Majumdar-Ghosh state \cite{Majumdar1969a, Majumdar1969b}, which is obtained by alternating $j=0$ and $j=\frac{1}{2}$ representations on the virtual bonds.
\par The situation for integer spin chains is very different. Haldane famously showed that spin-1 chains typically have a unique ground state with a finite excitation gap \cite{Haldane1983a, Haldane1983b}. Using the MPS framework, it was shown that spin-1 chains host symmetry-protected topological (SPT) phases \cite{Pollmann2010, Chen2011, Schuch2011} that are characterized by a string-order parameter, spin-1/2 edge states and even degeneracies in the ground-state entanglement spectrum. The transition to a trivial phase can only occur through a phase transition. Here, the simplest example is the Affleck-Kennedy-Lieb-Tasaki state \cite{Affleck1987, Affleck1988}, obtained by taking only $j=\frac{1}{2}$ representations on the bonds.
\par The characteristic difference between an SPT phase and a trivial phase is again clearly seen in the MPS description of the $\ensuremath{\mathrm{SU}}(2)$-invariant ground state. For a spin-1 chain, the MPS tensors necessarily have virtual representations $j_1$ and $j_2$ with the property
\begin{equation*}
\diagram{main}{8} \rightarrow \left| j_1 - j_2 \right| = 0,1.
\end{equation*}
This implies that $j_1$ and $j_2$ are either both half-integer or both integer. Consequently, MPS representations of ground states are possible using a one-site unit cell
\begin{equation*}
\diagram{main}{9},
\end{equation*}
where all $j$'s are either integer or half-integer. These two cases differentiate a trivial from an SPT phase, respectively: The degeneracies in the entanglement spectrum are determined by the multiplicities of the $\ensuremath{\mathrm{SU}}(2)$ representations, such that integer ones correspond to odd degeneracies and the half-integer ones to even degeneracies.
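The selection rules above follow from elementary $\ensuremath{\mathrm{SU}}(2)$ fusion and can be checked mechanically. The helper below is our own illustrative code, using exact fractions for the spin labels:

```python
from fractions import Fraction

def fuses_to_singlet(j1, s, j2):
    """True if the irrep s appears in the fusion j1 x j2, i.e. the MPS block
    (j1, s, j2) can transform trivially: |j1-j2| <= s <= j1+j2 in integer steps."""
    lo, hi = abs(j1 - j2), j1 + j2
    return lo <= s <= hi and (s - lo) % 1 == 0

spins = [Fraction(k, 2) for k in range(5)]          # 0, 1/2, 1, 3/2, 2
half, one = Fraction(1, 2), Fraction(1)

# spin-1/2 chain: every allowed block mixes the integer and half-integer sectors
allowed_half = [(j1, j2) for j1 in spins for j2 in spins
                if fuses_to_singlet(j1, half, j2)]

# spin-1 chain: every allowed block stays within one sector
allowed_one = [(j1, j2) for j1 in spins for j2 in spins
               if fuses_to_singlet(j1, one, j2)]
```

For the spin-1/2 chain the allowed blocks all have $|j_1-j_2|=\frac{1}{2}$, while for the spin-1 chain $j_1$ and $j_2$ always lie in the same (integer or half-integer) sector, as stated in the text.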
\subsection{Elementary excitations}
Besides ground states, the uniform MPS framework can be extended to the description of elementary excitations. Indeed, it was rigorously shown that an excitation that lives on an isolated branch in the spectrum can be described by acting with a momentum superposition of a local operator onto the ground state \cite{Haegeman2013}. In the MPS language, this translates to a quasiparticle ansatz for elementary excitations on top of an MPS ground state \cite{Haegeman2012}. When applied to an $\ensuremath{\mathrm{SU}}(2)$ invariant spin system with a unique translation-invariant ground state, we have the following form for an elementary excitation
\begin{equation*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \diagram{main}{10}.
\end{equation*}
Here we have introduced a new tensor $B$ that perturbs the ground state in a local region around site $n$, and performed a plane-wave superposition with momentum $p$. We have added an extra leg to this tensor that transforms according to a certain $\ensuremath{\mathrm{SU}}(2)$ irrep, labeled by $k$. Therefore, the irrep that lives on this non-contracted leg determines the global quantum number of the excited state.
\par The ansatz wavefunction is linear in the tensor $B$, and therefore the variational parameters in $B$ can be optimized by solving a generalized eigenvalue problem. Using a specific parametrization for the tensor $B$, the norm of the wavefunction can be made trivial, which reduces the generalized eigenvalue problem to an ordinary one. When the quantum numbers of the excitation---the momentum $p$ and the $\ensuremath{\mathrm{SU}}(2)$ label $k$---are non-trivial, the excitation is orthogonal to the ground state by construction; for trivial quantum numbers the ansatz can be made orthogonal by the same parametrization.
\par It is easily seen that when considering integer-spin chains, regardless of whether the $j$'s are integer or half-integer, the label $k$ has to be an integer. This corresponds to the well-known property that spin-1 chains generically have magnon excitations.
\par In the half-integer spin case, where the MPS breaks translation invariance and has a two-site unit cell, we can make elementary excitations by considering defects in the ground-state pattern. Indeed, an excitation would look like
\begin{equation*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \diagram{main}{11}.
\end{equation*}
One now observes that the label $k$ has to be half-integer, which indicates that elementary excitations in spin-1/2 chains generically have half-integer quantum numbers. These spinons cannot be created out of the ground state by a local operator, but are always created in pairs; this phenomenon is known as fractionalization.
\par From the MPS perspective, therefore, it is natural that half-integer spin chains host spinon excitations, whereas magnons appear in integer-spin chains. There is, however, a conceivable scenario in which spinon excitations can emerge in a spin-1 chain. Imagine that we tune the system such that two MPS ground states $\ket{\Psi(A_1)}$ and $\ket{\Psi(A_2)}$ coexist, where one MPS carries only integer representations on the legs and the other only half-integer ones. Put differently, we require that the system is at a first-order transition between an SPT phase and a trivial phase. In that case, we can consider solitonic excitations between the two ground states,
\begin{equation*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \diagram{main}{12}.
\end{equation*}
It is easily seen now that the irrep label $k$ has to be half-integer, so it carries fractional quantum numbers.
\subsection{Spinon/anti-spinon bound states}
\label{sec:bound}
Spin chains that host spinon excitations can often be perturbed such that the spinons are confined. In the above case of spin-1/2 chains, the easiest option is to favour one of the two ground-state patterns through an explicit dimerization in the spin-chain hamiltonian. In that case, the spinons no longer exist as elementary excitations, but if the perturbation is weak, one can still understand the low-lying excitations as spinon/anti-spinon bound states. These can be pictured as consisting of two local kinks in the ground state pattern, and can be described by a two-particle wavefunction of the form
\begin{multline*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \sum_{n'>0, \mathrm{even}} c(n') \\ \diagram{main}{13},
\end{multline*}
where $c(n')$ is the part of the two-particle wavefunction for the relative position between the two spinons.
\par In the case of spinons in a spin-1 chain at a first-order transition line, a spinon/anti-spinon wavefunction would look like
\begin{multline*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \sum_{n'>0} c(n') \\ \diagram{main}{14}.
\end{multline*}
Here spinon confinement can be easily introduced by tuning slightly away from the first-order point such that one of the two ground states is favoured over the other energetically.
\par This type of ansatz wavefunction was introduced for describing two-particle scattering states \cite{Vanderstraeten2014, Vanderstraeten2015}, for which the relative wavefunction $c(n')$ has an oscillating form. It was shown in Ref.~\cite{Vanderstraeten2016} that the transition of a scattering state into a bound state corresponds to the relative wavefunction $c(n)$ changing from an oscillating function into an exponentially decaying one. This process of bound-state formation is signalled by a divergence of the scattering length, which can be read off from $c(n)$ \cite{Vanderstraeten2016}.
\par In principle, however, the description of stable bound states falls within the above one-particle framework: their wavefunctions are constructed as local deformations of the ground state in a momentum superposition. Indeed, for strongly bound states, the one-particle ansatz has proven sufficient to capture the wavefunction accurately \cite{Vanderstraeten2016, Bera2017}. However, when very broad bound states are considered---when the two $B$ tensors are well separated---the above quasiparticle ansatz can be insufficient, in the sense that a single local tensor cannot capture the full extension of the ground-state perturbation. In that case, an extended ansatz of the form \cite{Haegeman2013}
\begin{equation*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \diagram{main}{15}
\end{equation*}
can be introduced. The number of parameters in the $B$ tensor scales exponentially with the number of sites $N$, such that a variational optimization becomes unfeasible rather quickly. For that reason, we can decompose the $B$ tensor in a string of $N$ one-site tensors giving rise to the ansatz
\begin{equation*}
\ket{\Phi_p^k(B)} = \sum_{n} \ensuremath{\mathrm{e}}^{ipn} \diagram{main}{16}.
\end{equation*}
The variational optimization of the string of tensors can be performed using a sweeping algorithm, much in the spirit of standard DMRG \cite{Schollwoeck2011} algorithms---we refer to the appendix for more details on the implementation.
\section{Spinons and their confinement in the spin-1 chain}
\label{sec:spin1}
The MPS formalism allows us to capture generic cases of spinons and their confinement in $\ensuremath{\mathrm{SU}}(2)$-invariant spin chains. In addition, we have proposed a scenario in which spin-1/2 spinons can emerge in a spin-1 chain on a first-order transition line. This phenomenon was observed in a frustrated and dimerized spin-1 chain \cite{Chepiga2016}---the spin-1 chain with next-nearest-neighbour and biquadratic interactions shows a similar phenomenology \cite{Pixley2014, Chepiga2016c}. In this section we apply our formalism to the former model. In addition, we study the confinement of these spinons away from the transition line. In Refs.~\onlinecite{Vanderstraeten2016, Bera2017} spinon confinement in spin-1/2 chains was already simulated using the framework of uniform MPS without symmetries. In the following, we perform the simulations using tangent-space methods for uniform MPS \cite{Vanderstraeten2019} with full $\ensuremath{\mathrm{SU}}(2)$-symmetric tensor-network operations \cite{symmTN}.
\subsection{Spinons on a first-order transition line}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{./phase_diagram_empty.pdf}
\caption{The phase diagram of the frustrated and dimerized spin-1 chain, taken from Ref.~\onlinecite{Chepiga2016}. The three phases are pictorially represented by a picture of virtual spin-1/2 particles: the SPT phase is a Haldane phase, the trivial phase can be interpreted as a next-nearest-neighbour Haldane phase, and in the dimerized phase the pairing of the virtual particles leads to a spontaneous breaking of translation symmetry. The full (dashed) lines represent second-order (first-order) transitions.}
\label{fig:diagram}
\end{figure}
We investigate the frustrated and dimerized spin-1 chain, defined by the hamiltonian
\begin{multline*}
H = J_1 \sum_i \vec{S}_i \cdot \vec{S}_{i+1} + J_2 \sum_i \vec{S}_{i-1} \cdot \vec{S}_{i+1} \\ + J_3 \sum_i \left((\vec{S}_{i-1} \cdot \vec{S}_{i})(\vec{S}_{i} \cdot \vec{S}_{i+1} ) + \mathrm{h.c.} \right).
\end{multline*}
For $J_2=J_3=0$ this model reduces to the spin-1 Heisenberg model, which is known to be in the Haldane phase \cite{Haldane1983a, Haldane1983b}. The next-nearest-neighbour term ($J_2$) adds frustration to the system and drives it through a first-order phase transition into a trivial phase \cite{Kolezhuk1996, Kolezhuk1997}, whereas the three-site interaction ($J_3$) induces a spontaneous dimerization via a second-order transition \cite{Michaud2012}. The full phase diagram (see Fig.~\ref{fig:diagram}) shows that the first-order transition extends over a finite region, and only for small $J_2$ does the transition into the dimerized phase become second order.
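To make the three couplings concrete, the following sketch (our own exact-diagonalization construction on a short open chain, independent of the symmetric-MPS simulations used in this paper) builds the hamiltonian explicitly from spin-1 operators.

```python
import numpy as np

# Spin-1 operators and the frustrated, dimerized chain on N sites with
# open boundary conditions. This only makes the terms of H explicit;
# it is not the symmetric-MPS code used for the actual simulations.
SP = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)        # S^+
SX = 0.5 * (SP + SP.T)
SY = 0.5j * (SP.T - SP)                             # (S^- - S^+) / (-2i)
SZ = np.diag([1.0, 0.0, -1.0])

def site_op(op, i, N):
    # embed a single-site operator at site i of an N-site chain
    out = np.array([[1.0 + 0.0j]])
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(3))
    return out

def ss(i, j, N):                                    # S_i . S_j
    return sum(site_op(o, i, N) @ site_op(o, j, N) for o in (SX, SY, SZ))

def hamiltonian(N, J1, J2, J3):
    H = np.zeros((3**N, 3**N), dtype=complex)
    for i in range(N - 1):
        H += J1 * ss(i, i + 1, N)                   # nearest neighbour
    for i in range(N - 2):
        H += J2 * ss(i, i + 2, N)                   # next-nearest neighbour
    for i in range(1, N - 1):                       # three-site term + h.c.
        A, B = ss(i - 1, i, N), ss(i, i + 1, N)
        H += J3 * (A @ B + B @ A)
    return H
```

The resulting matrix is Hermitian and commutes with the total $S^z$, as it must for an $\ensuremath{\mathrm{SU}}(2)$-invariant model.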
\begin{figure}
\includegraphics[width=0.99\columnwidth]{./transition.pdf}
\caption{The variational energy obtained by MPS with half-integer (blue) and integer (red) representations on the links for $J_1=1$, $J_2=0.56$, and as a function of $J_3$. We observe a crossing indicating a first-order transition between an SPT phase and a trivial phase. The blue (red) data points were obtained by taking the previous MPS as the starting point for the next point, such that the VUMPS algorithm stays in the local minimum corresponding to the higher-energy state. Farther away from the phase transition the variational optimization eventually drops out of this local minimum, and the MPS with the wrong representations develops a non-injective structure to approximate the true ground state.}
\label{fig:transition}
\end{figure}
\par Let us first investigate the first-order transition between the SPT phase and the trivial phase. On both sides of the transition, we can represent the ground state by an MPS with a one-site unit cell with an explicit encoding of the $\ensuremath{\mathrm{SU}}(2)$ symmetry: In the SPT phase, we choose half-integer representations on the virtual degrees of freedom, whereas in the trivial phase we choose only the integer ones. Because these two choices determine different classes of MPS, we can compare the variational energies within the two distinct classes and determine which phase the ground state is in for a given choice of parameters. In Fig.~\ref{fig:transition} we plot the variational energies on a line in the phase diagram that crosses the transition, showing nicely that this is, indeed, a first-order transition.
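The fact that the half-integer and integer choices define two disjoint classes of MPS is a direct consequence of the $\ensuremath{\mathrm{SU}}(2)$ fusion rules: fusing any virtual spin with the physical spin-1 leg preserves its integer or half-integer character. A few lines (our own sketch) make this explicit.

```python
# SU(2) fusion rule: j1 x j2 = |j1-j2|, |j1-j2|+1, ..., j1+j2. Fusing
# any virtual spin with the physical spin-1 leg preserves whether the
# virtual representations are integer or half-integer, so the two
# symmetry choices parametrize disjoint classes of MPS.
def fuse(j1, j2):
    lo, hi = abs(j1 - j2), j1 + j2
    return [lo + k for k in range(int(round(hi - lo)) + 1)]

def is_half_integer(j):
    return abs(2 * j - round(2 * j)) < 1e-12 and round(2 * j) % 2 == 1
```

Starting from half-integer virtual spins, repeated fusion with the spin-1 physical leg can therefore never produce an integer virtual sector, and vice versa.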
\begin{figure}
\subfigure[]{\includegraphics[width=0.99\columnwidth]{./disp_J3_00318.pdf}} \\
\subfigure[]{\includegraphics[width=0.49\columnwidth]{./zoom_J3_00318.pdf}}
\subfigure[]{\includegraphics[width=0.49\columnwidth]{./conv_J3_00318.pdf}}
\caption{In (a) we plot the dispersion relation of the spinon excitation at the first-order phase transition between Haldane and trivial phase at $J_2=0.56$, $J_3\approx0.0318$. The full dispersion relation has been computed using the spinon quasiparticle ansatz with $\ensuremath{\mathrm{SU}}(2)$ symmetry. To get an idea of the convergence as a function of bond dimension, in (b) and (c) we plot the dispersion around the minimum and the convergence of the gap with higher bond dimensions (up to $D=200$ for the largest subblock; for comparison, this corresponds to a non-symmetric MPS with total bond dimension $D_\mathrm{total}\approx2000$). Note that the excitation energy is not variational, because we subtract the MPS ground-state energy.}
\label{fig:spinon00318}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{./disp2.pdf}
\caption{The same plot as before, now for parameters $J_2\approx0.7606$ and $J_3=0$. The inset shows that the spinon gap has decreased significantly with respect to the previous figure.}
\label{fig:spinon0}
\end{figure}
\par Exactly at the transition, the two ground states have the same energy density. Therefore, we can consider domain walls that interpolate between them, where, as we have shown in the previous section, the excitations necessarily carry a half-integer quantum number. In Fig.~\ref{fig:spinon00318} we plot the dispersion relation of the spinons with spin $s=1/2$ for $J_2=0.56$ and $J_3\approx0.0318$. The spinon's dispersion relation exhibits a very strong minimum at an incommensurate value of the momentum. In the inset, we provide a close-up around the minimum showing that the gap converges to a non-zero value. In addition, in Fig.~\ref{fig:spinon0} we plot the dispersion relation further along the transition line ($J_2\approx0.7606$, $J_3=0$) showing that the spinon gap decreases. It is expected that the gap ultimately closes when going further along this line---this closing of the gap can be described by a marginal operator changing sign in the $\ensuremath{\mathrm{SU}}(2)_1$ Wess-Zumino-Witten field theory with central charge $c=1$ \cite{Chepiga2016b, Tsui2017}. A continuous transition with $c=1$ between an SPT chain and a trivial phase was recently demonstrated in Ref.~\onlinecite{Gozel2019}. Unfortunately, we have found no immediate evidence for a critical point further along the transition line, and we leave an elaborate study of this question for further work.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{./NNNH_300_all.pdf}
\caption{(a) Local magnetization, (b) nearest-neighbour correlations, (c) three-site correlations, and (d) bipartite entanglement profile for a chain with $N=300$ sites and $S^z_\mathrm{tot}=1$ at the first-order transition between the SPT and trivial phase at $J_2\approx0.75$ and $J_3=0$.}
\label{fig:dmrg1}
\end{figure}
\par The existence of the spinons as low-energy excitations at the phase transitions can be further confirmed from simulations on a finite chain \cite{Chepiga2016}. The SPT phase is known to have symmetry-protected spin-1/2 edge states localized at the ends of the corresponding domain. Therefore a domain wall between an SPT phase and a topologically trivial phase, either NNN-Haldane or dimerized, necessarily carries a spin-1/2. At the first-order transition the energy levels of the corresponding states cross and one can observe the coexistence of different domains. In Fig.~\ref{fig:dmrg1} we show the results at the first-order transition between the topologically trivial and SPT phases at $J_2=0.75$, $J_3\approx0$. Four quantities are the most relevant: the local magnetization $\langle S_j^z\rangle$, which reveals the spin-1/2 domain wall; the nearest-neighbour correlations, which reflect the presence of dimerization in the domain; the three-site correlations, which signal the SPT phase when they are large and positive; and the bipartite entanglement entropy $EE_N$, which takes its maximal value at the domain wall. According to Fig.~\ref{fig:dmrg1}(c) open edges favor topologically trivial domains, while the central domain is in the Haldane phase. Although the local magnetization profile shown in Fig.~\ref{fig:dmrg1}(a) is significantly perturbed by incommensurate correlations, one can clearly see that the maximum of the amplitude is shifted away from the boundary. According to the entanglement entropy profile shown in Fig.~\ref{fig:dmrg1}(d), the spin-1/2 domain walls are approximately 30 sites away from the edges, which agrees with the local magnetization profile.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{./disp3.pdf}
\caption{The spinon dispersion relation on the first-order transition line between the SPT phase and the dimerized phase, for parameters $(J_2,J_3)$ given by $(0.3265,0.0558)$ (blue) and $(0.2915,0.0603)$. The dimerized ground state breaks one-site translation invariance spontaneously, so the momentum $q$ is defined with respect to translations over two sites.}
\label{fig:disp3}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{./Dim_301_all.pdf}
\caption{(a) Local magnetization, (b) nearest-neighbour correlations, (c) three-site correlations, and (d) bipartite entanglement profile for a chain of $N=301$ sites and $S^z_\mathrm{tot}=1$ at the first-order transition between the SPT and dimerized phase at $J_2=0.327$ and $J_3=0.0558$.}
\label{fig:dmrg2}
\end{figure}
\par We have also studied the first-order transition between the SPT phase and the dimerized phase for smaller $J_2$. The situation is slightly more complicated, because the dimerized phase itself hosts spinon excitations as well. Indeed, the dimerized phase has an MPS ground-state description with a two-tensor unit cell, and the low-energy particles are spin-1 defects in the ground-state pattern. In order to focus on the spin-1/2 spinons around the phase transition, we perform a blocking transformation such that a dimerized ground state maps to a translation-invariant MPS described by a single tensor with integer $\ensuremath{\mathrm{SU}}(2)$ representations on the virtual bonds. In this setting, the description of the spinons on the first-order transition line proceeds as before.
\par For this case, it is known \cite{Chepiga2016b} that the first-order line ends and becomes second order, where the transition is described by an $\ensuremath{\mathrm{SU}}(2)_2$ Wess-Zumino-Witten field theory with central charge $c=3/2$. The transition between first and second order---i.e., the closing of the spinon gap as one travels on the phase-transition line---is described by a marginal operator in the field theory changing sign.
\par In Fig.~\ref{fig:disp3} we plot the spinon dispersion relation for two points on the first-order transition line between SPT and dimerized phase. First we observe that the minimum of the dispersion relation is at momentum $q=0$, so we do not have any commensurate correlations in the system. Moreover, we observe that the spinon gap becomes smaller quickly as we travel on the transition line towards the critical point. This rapid decrease of the gap is expected from the field-theory description, which predicts an exponential suppression as the critical point is approached.
\par In the present case we also confirm the existence of the spinons by looking at the four finite-size profiles listed above. The calculations have been done for the lowest-energy state in the sector with $S^z_\mathrm{tot}=1$ and $N=301$. Based on Figs.~\ref{fig:dmrg2}(a), (b) and (c) one can deduce that open edges favor dimerized domains, while the central region remains in the SPT phase. Note, however, that for the selected coupling constants the SPT domain is still commensurate, which implies that the ground state of the domain is a singlet if it contains an even number of sites, and a (Kennedy) triplet \cite{Kennedy1990} if it contains an odd number of sites. Moreover, the dimerized domains necessarily contain an even number of sites. So, by keeping the total number of sites odd, we ensure a Kennedy triplet on the central SPT domain. According to Fig.~\ref{fig:dmrg2}(a) the domain walls are located at a distance of about 25 spins from each edge, and the entanglement entropy also takes its maximal values at these locations.
\subsection{Confinement of spinons around the transition line}
\begin{figure}[!htbp] \centering
\includegraphics[width=0.99\columnwidth]{./spectra_hal_nnn.pdf}
\caption{Excitation spectra for different points in the phase diagram. The blue shaded areas are multi-particle continua, the lines are the first low-lying spin-0 and spin-1 excitations. We observe an accumulation of bound-state modes in the spectrum as the first-order transition is approached. The third spectrum also shows the formation of an incommensurate minimum. The extra tick in grey shows twice the momentum for which the spinon dispersion relation is minimal, see Fig.~\ref{fig:spinon00318}. The simulations were performed at bond dimension $D_\mathrm{total}=120$.}
\label{fig:spectra}
\end{figure}
\begin{figure} \centering
\includegraphics[width=0.99\columnwidth]{./2ps_minima.pdf}
\caption{The momentum for which the dispersion relation reaches its minimum at $J_3 = 0.0318$ as a function of $J_2$. At $J_2 \approx 0.56$ the system undergoes a first-order phase transition. The value of $p(\omega_{\min})$ is then compatible with twice the momentum at which the free spinon dispersion relation has its minimum (see Fig.~\ref{fig:spinon00318}, $p_s \approx 1.0226$), as indicated by the grey square. The simulations were performed at bond dimension $D_\mathrm{total}=120$.}
\label{fig:2ps_minima}
\end{figure}
The spinons that we have identified in the previous section exist as freely propagating particles only exactly at the first-order phase transition, but their existence has a noticeable effect away from the transition as well. Indeed, we imagine that both ground states still exist independently away from the transition point, where one of the two has a slightly lower energy density (see Fig.~\ref{fig:transition}). As we have explained in Sec.~\ref{sec:bound}, we can still consider spinon/anti-spinon pairs against the background of the energetically favoured ground state. The excess energy of the higher-energy background state between the spinon and the anti-spinon subjects the pair to a linear potential, so that they form bound states. As discussed in the introduction, this phenomenon has been studied extensively in spin-1/2 chains \cite{Sorensen1998, Affleck1998, Augier1999, Shiba1980, Vanderstraeten2016, Bera2017}.
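The resulting bound-state ladder can be illustrated with a quantum-mechanical cartoon (our own toy model, not the lattice computation): the relative coordinate of the spinon pair moves in a linear potential $V(x)=\lambda|x|$, where the string tension $\lambda$ stands in for the energy-density difference between the two ground states.

```python
import numpy as np

# Finite-difference Schroedinger problem -psi'' + lam*|x| psi = E psi
# on a large interval. The discrete spectrum is the textbook Airy
# ladder of bound states, with level spacings that shrink as one
# climbs up -- the "stack of bound states" seen in the spectra.
def linear_well_levels(lam=1.0, L=15.0, n=1201, k=6):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    main = 2.0 / h**2 + lam * np.abs(x)          # diagonal of -d^2/dx^2 + V
    off = -np.ones(n - 1) / h**2                 # off-diagonal hopping
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:k]
```

For $\lambda=1$ the levels are set by the zeros of the Airy function and its derivative, and the spacing between successive bound states decreases up the ladder.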
\par The formation of spinon/anti-spinon bound states away from the first-order phase transition is observed when plotting the excitation spectrum for a few values of the coupling inside the Haldane phase, see Fig.~\ref{fig:spectra}. Indeed, for $J_2=J_3=0$ we find the usual spectrum of the Heisenberg chain with a minimum in the dispersion at momentum $\pi$. When we come closer to the phase transition, we find that the minimum starts shifting to an incommensurate value, an observation that was also made from the real-space correlation functions \cite{Chepiga2016}. More interestingly, we find several isolated lines emerging below the continuum when approaching the phase transition. The minima of these isolated lines are situated at momentum $p=0$ and at an incommensurate value $p=p_{\mathrm{inc}}$. Above we have seen that the spinon dispersion relation has a strong minimum at momentum $p_s$, so we expect, indeed, to see bound states around momenta $p=p_s\pm p_s$, i.e., around $p=0$ and $p=2p_s$.
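The location of these minima follows from kinematics alone. As a toy check (the model dispersion below is invented; only the location $p_s$ of its minimum matters), one can minimize the two-spinon energy $\omega(p)+\omega(P-p)$ over the relative momentum:

```python
import numpy as np

# Kinematic toy check: if a single spinon has an even, 2*pi-periodic
# dispersion w(p) with minima at +/- p_s, the two-spinon energy
# E2(P) = min_p [w(p) + w(P - p)] has degenerate minima at total
# momentum P = 0 and P = 2*p_s, which is where bound states appear.
p_s = 1.0226                     # incommensurate minimum, cf. Fig. 8

def w(p, gap=0.1):
    # invented model dispersion: even, periodic, minima exactly at +/- p_s
    return gap + (np.cos(p) - np.cos(p_s))**2

def two_spinon_min(P, n=4001):
    p = np.linspace(-np.pi, np.pi, n)
    return float(np.min(w(p) + w(P - p)))
```

Any dispersion with minima at $\pm p_s$ gives the same conclusion; the quadratic form above is merely the simplest choice.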
\par In Fig.~\ref{fig:2ps_minima} we have explicitly tracked the behavior of these incommensurate values. We varied $J_2$ at constant $J_3$ towards the first-order transition point. From both the Haldane phase and the NNN-trivial phase, we indeed observe convergence towards $p=2p_s$, which confirms the confinement of the spinons away from the transition line.
\begin{figure} \centering
\includegraphics[width=0.99\columnwidth]{./conv.pdf}
\caption{The convergence of the variational excitation energies at momentum $k=0$ as a function of the spatial support $N$ of the string ansatz at various points near the phase transition (bottom). These points are located on the line through the origin and the transition point $J_2 = 0.56, J_3\approx 0.0318$ (top). The convergence becomes better further away from the phase transition, which is consistent with the bound state picture. For comparison we show the convergence of the magnon (with $k=\pi$) at the Heisenberg point $J_2=0,J_3 = 0$ (black cross).} \label{fig:conv}
\end{figure}
\par In addition, we have applied the extended quasiparticle ansatz for broad bound states, containing a string of tensors [Sec.~\ref{sec:bound}]. In Fig.~\ref{fig:conv} we show the performance of this extended ansatz for the lowest-lying excitation in the system upon approaching the phase transition. Far away from the transition, the variational energy converges very quickly, which shows that the bound state has a limited spatial extent. As the transition is approached, the convergence becomes slower, which points to a widening of the spinon/anti-spinon bound state. The fact that the excitations become broad, extended perturbations of the ground state as the first-order transition is approached confirms our underlying spinon picture for the low-lying excitations in the vicinity of the first-order phase transition.
\section{Conclusions}
In the first part of this paper we laid out the formalism of MPS for describing ground states and elementary excitations of $\ensuremath{\mathrm{SU}}(2)$-invariant spin chains, showing how fractionalized spinons emerge naturally from the symmetry pattern of the ground-state MPS tensors. The same analysis can be extended to chains with other global symmetries, where the case of $\ensuremath{\mathrm{SU}}(N)$ is arguably the most interesting. For example, a three-tensor ground state naturally appears for an $\ensuremath{\mathrm{SU}}(3)$ chain in the fundamental representation, and one can consider two types of defects or spinons against this background.
\par In two dimensions, the framework of projected entangled-pair states (PEPS) allows one to simulate even more exotic quasiparticles \cite{Vanderstraeten2015b, Vanderstraeten2019b}. Indeed, whereas the one-dimensional case only allows for defects in the ground-state pattern, in two dimensions we can consider quasiparticles with non-trivial strings of symmetry operations as well \cite{Schuch2011, Haegeman2015}. An $\ensuremath{\mathrm{SU}}(2)$-symmetric PEPS hosts both spinons and visons as elementary excitations \cite{Poilblanc2012}, and it would be interesting to study these quasiparticles and their confinement for spin-liquid hamiltonians.
\par Spinon excitations have been observed in neutron-scattering experiments on quasi-one-dimensional compounds \cite{Nagler1991, Tennant1993, Arai1996, Mourigal2013, Lake2013}. The fact that the spinons necessarily come in pairs leads to a broad continuum in the dynamical structure factor, in contrast to the more conventional magnon mode. In more recent neutron-scattering experiments, the confinement of these particles has been observed by the splitting of the multi-spinon continuum into a stack of bound states \cite{Grenier2015, Bera2017}. In this work, we have shown that spinon confinement is generic at first-order transitions between SPT and trivial phases in spin-1 chains, so it would be very interesting if this effect can be observed experimentally as well.
\par We acknowledge insightful discussions with Ian Affleck about solitons in spin chains. This research is supported by the Research Foundation Flanders (LV, FV), the Swiss National Science Foundation (NC, FM), the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) through TRR80 Project F8 (EW), and ERC grant QUTE (FV).
\section{Introduction} Let $D$ be a domain in the complex plane
${\Bbb C}.$ Throughout this paper we use the notations $z=x+iy,$ $B(z_0,r)\ \colon
=\{ z\in{\Bbb C}: |z-z_0|<r\}$ for $z_0\in{\Bbb C}$ and $r>0$, ${\Bbb B}(r)\ \colon
=B(0,r)$, ${\Bbb B}\ \colon ={\Bbb B}(1)$, and $\overline{{\Bbb C}}\ \colon
={\Bbb C}\cup\{\infty\}.$
\medskip
\par
\noindent
The purpose of this paper is to study the Dirichlet problem
\begin{equation}\label{Dirichlet}
\left\{\begin{array}{ccc}
f_{\overline{z}}\, =\, \mu
(z)\cdot{f_z} + \nu (z)\cdot \overline {f_z},\,\,\, &z\in D, \\
\lim\limits_{z\to\zeta}{\rm Re}\,f(z)=\varphi(\zeta), &\forall\
\zeta\in\partial D,
\end{array}\right.
\end{equation}
in a Jordan domain $D$ of the complex plane ${\Bbb C}$ with
continuous boundary data $\varphi(\zeta)\not\equiv{\rm const}.$
Here $\mu(z)$ and $\nu(z)$ stand for measurable coefficients
satisfying the inequality $|\mu (z)|+ |\nu (z)| < 1$ a.e. in $D.$
The degeneracy of the ellipticity for the Beltrami equations
\begin{equation}\label{Beltrami}
f_{\overline{z}}\, =\, \mu
(z)\cdot{f_z} + \nu (z)\cdot \overline {f_z}
\end{equation}
is controlled by the dilatation
coefficient
\begin{equation} \label{eq1.4}
K_{\mu , \nu}(z)\ \colon =\ \frac{1+|\mu (z)|+|\nu (z)|}{1-|\mu
(z)|-|\nu (z)|}\ \in L^1_{\rm loc}.
\end{equation}
We will look for a solution as a continuous, discrete and open mapping $f:D\to{\Bbb
C}$ of the Sobolev class $W_{\rm loc}^{1,1}$ and such that the Jacobian $J_f(z)\neq0$ a.e. in $D.$
Such a solution we will call a {\bf regular
solution} of the Dirichlet problem (\ref{Dirichlet})
in a domain $D.$
\par
Recall that a
mapping $f:D\to{\Bbb C}$ is called {\bf discrete} if the preimage
$f^{-1}(y)$ consists of isolated points for every $y\in{\Bbb C}$,
and {\bf open} if $f$ maps every open set $U\subseteq D$ onto an
open set in ${\Bbb C}$.
For the uniformly elliptic case, i.e. when $K_{\mu,\nu}(z)\leq K<\infty$ a.e. in $D$
the
Dirichlet problem was studied in \cite{Bo} and \cite{Vekua}. The solvability of the Dirichlet problem in the partial case, when $\nu(z)=0$ and
the degeneracy of the ellipticity for the Beltrami equations
\begin{equation}\label{Beltrami1}
f_{\overline{z}}\, =\, \mu
(z)\cdot{f_z}
\end{equation}
is controlled by the dilatation coefficient
\begin{equation}
K_\mu(z)=K_{\mu,0}(z)={1+|\mu(z)|\over 1-|\mu(z)|}\notin\
L^{\infty},
\end{equation}
is given in \cite{Dy}, \cite{GRSY2} and \cite{KPR1}.
\par
Recall that the problem on existence of homeomorphic solutions for
the equation (\ref{Beltrami1}) was resolved for the uniformly
elliptic case when $\Vert\mu\Vert_{\infty}<1$ long ago, see e.g.
\cite{Ah$_1$}, \cite{Bo}, \cite{LV}. The existence problem for the
degenerate Beltrami equations (\ref{Beltrami1}) when $K_{\mu}
\notin\ L^{\infty}$
is currently an active area of research, see e.g. the monographs \cite{GRSY2} and
\cite{MRSY} and the surveys \cite{GRSY1} and \cite{SY} and further
references therein. A series of criteria on the existence of
regular solutions for the Beltrami equation (\ref{Beltrami}) were
given in our recent papers \cite{BGR1}--\cite{BGR3}. There we
called a homeomorphism $f\in W^{1,1}_{\rm loc}(D)$ by a {\bf
regular solution} of (\ref{Beltrami}) if $f$ satisfies
(\ref{Beltrami}) a.e. in $D$ and $J_f(z)=|f_z|^2-|f_{\bar z}|^2\ne
0$ a.e. in $D.$
\medskip
\medskip
\setcounter{equation}{0}
\section{Preliminaries}
To derive criteria for existence of regular solutions for the
Dirichlet problem (\ref{Dirichlet}) in a Jordan domain $D\in {\Bbb C}$
we make use of the approximate procedure based on
the existence theorems for the case $K_{\mu , \nu}\in L^{\infty}$
given in \cite{Bo} and convergence theorems for the Beltrami
equations (\ref{Beltrami}) when $K_{\mu , \nu}\in L^{1}_{\rm loc}$
established in \cite{BGR2}. The
Schwarz formula
\begin{equation}\label{eqKPRS1.3}f(z)=i\,{\rm
Im}\,f(0)+\frac{1}{2\pi i}\int\limits_{|\zeta|=1}{\rm
Re}\,f(\zeta)\cdot\frac{\zeta+z}{\zeta-z}\frac{d\zeta}{\zeta}\,,\end{equation}
that allows one to recover an analytic function $f$ in the unit disk
${\Bbb B}$ by its real part $\varphi(\zeta)={\rm Re}\,f(\zeta)$ on the
boundary of ${{\Bbb B}}$ up to a purely imaginary additive constant
$c=i{\rm Im}\,f(0),$ see, e.g., Section 8, Chapter III, Part 3 in
\cite{HuCo}, as well as the Arzela--Askoli theorem combined with
moduli techniques are also used.
\medskip
The following statement, that is a consequence of Theorems 5.1 and
6.1 and the point 8.1 in \cite{Bo}, is basic for our further
considerations. See also Theorem VI.2.2 and the point VI.2.3 in
\cite{LV}, on the regularity of a $W^{1,1}_{\rm loc}$ solution to
the Beltrami equation (\ref{Beltrami1}) with the bounded
dilatation coefficient $K_\mu .$
\begin{propo} \label{pr2} Let $D,$ $0\in D,$ be a Jordan domain in the complex plane ${\Bbb C}$ and $\varphi:\partial D\to{\Bbb R}$ be a
nonconstant continuous function. If $K_{\mu ,
\nu}\in L^{\infty},$ then the Dirichlet problem
(\ref{Dirichlet}) has
a unique regular solution $f$ normalized by ${\rm Im}\,
f(0)=0.$
This solution has the representation \begin{equation}
\label{eq4} f={\cal A}\circ g\circ {\cal R} \end{equation} where ${\cal R}:
D\to{\Bbb B},$ ${\cal R}(0)=0,$ is a conformal mapping and $g: \overline{{\Bbb B}}\to\overline{{\Bbb B}}$ stands for a
homeomorphic regular solution of the quasilinear equation
\begin{equation} \label{Beltrami3} g_{\overline{\zeta}}\, =\, \mu^*
(\zeta)\cdot{g_{\zeta}} +\nu^* ( \zeta )\cdot \frac{\overline{{\cal
A}^{\prime}(g(\zeta))}}{{\cal A}^{\prime}(g(\zeta))}\cdot \overline
{g_{ \zeta }} \end{equation} in ${\Bbb B}$ normalized by $g(0)=0,$ $g(1)=1.$ Here $\mu^*=\frac{{{\cal R}^{\prime}}}{\overline{{\cal R}^{\prime}}} \cdot\mu\circ {\cal R}^{-1},$ $\nu^*=\nu\circ {\cal R}^{-1}$ and
\begin{equation}\label{eqKPRS1.33}{\cal A}(w)\colon =\frac{1}{2\pi i}\int\limits_{|\omega|=1}
\varphi({\cal
R}^{-1}(g^{-1}(\omega)))\cdot\frac{\omega+w}{\omega-w}\frac{d\omega}{\omega}\end{equation}
is an analytic function in the unit disk ${\Bbb B}$. \end{propo}
\begin{rem}\label{rmk11} Let $\tilde\mu:{\Bbb C}\to{\Bbb C}$ coincide a.e. in the
domain $D$ with \begin{equation} \label{Beltrami33} \frac{(g\circ
{\cal R})_{\overline{z}}}{(g\circ{\cal R})_z}\, =\,
\frac{g_{\overline\zeta}\circ{\cal R}\cdot \overline{{\cal
R}^{\prime}}}{g_{\zeta}\circ{\cal R}\cdot{{\cal R}^{\prime}}} =\,
\mu + \nu\cdot\frac{\overline{{\cal R}^{\prime}}}{{\cal
R}^{\prime}} \cdot \frac{\overline{g_{{\zeta}}}}{g_{\zeta}}\circ
{\cal R} \cdot \frac{\overline{{\cal A}^{\prime}}}{{\cal
A}^{\prime}}\circ g\circ {\cal R} \end{equation} and equal to $0$
outside of $D$, see e.g. the formulas I.C(1) in \cite{Ah$_1$}.
Note that $K_{\tilde\mu}\le K_{\mu , \nu}$ a.e. in $D$ and there
is a regular solution $G:\overline{\Bbb C}\to\overline{\Bbb C}$ of the
equation $G_{\overline{z}}=\tilde\mu G_z$ such that $G(0)=0$,
$|G({\cal R}^{-1}(1))|=1,$ $G(\infty)=\infty$ and $G={\cal H}\circ
g\circ {\cal R}$ in $\overline{D}.$ Here ${\cal H}:{\Bbb B}\to G(D)$ is
a conformal mapping normalized by ${\cal H}(0)=0,$ ${\cal
H}^{\prime}(0)>0$. Thus, \begin{equation} \label{eq44} f={\cal
A}\circ h,
\end{equation}
\begin{equation}\label{eqKPRS33}{\cal A}(w)=\frac{1}{2\pi i}\int\limits_{|\omega|=1}
\varphi(h^{-1}(\omega))\cdot\frac{\omega+w}{\omega-w}\frac{d\omega}{\omega}\end{equation}
where \begin{equation} \label{eq444}
h=g\circ{\cal R}={\cal H}^{-1}\circ G\end{equation} stands for a
homeomorphism $ h:\overline{D}\to\overline{{\Bbb B}}$, $h(0)=0$, which is
a regular solution in $D$ of the quasilinear equation \begin{equation}
\label{eq111.33} h_{\overline{z}}\, =\, \mu (z)\cdot h_z +\nu
(z)\cdot\frac{\overline{{\cal A}^{\prime}(h(z))}}{{\cal
A}^{\prime}(h(z))}\cdot \overline {h_z}\,. \end{equation}
Denote such $f$, $g$, ${\cal A}$, $G$, ${\cal H}$ and $h$ by $f_{\mu ,
\nu , \varphi}$, $g_{\mu , \nu , \varphi}$, ${\cal A}_{\mu , \nu ,
\varphi}$, $G_{\mu , \nu , \varphi}$, ${\cal H}_{\mu , \nu ,
\varphi}$ and $h_{\mu , \nu , \varphi}$, respectively.\end{rem}
Recall also that, given a family of paths $\Gamma $ in ${\Bbb C} ,$ a
Borel function $\rho:{\Bbb C} \to [0,\infty]$ is called {\bf admissible}
for $\Gamma ,$ abbr. $\rho \in adm\, \Gamma ,$ if \begin{equation}
\label{eq1.2v} \int\limits_{\gamma} \rho(z)\, |dz|\ \geq\ 1 \end{equation} for
each $\gamma\in\Gamma .$ The {\bf modulus} of $\Gamma$ is defined by
\begin{equation} \label{Beltramiv} M(\Gamma) =\inf\limits_{ \rho \in adm\, \Gamma}
\int\limits_{{\Bbb C}} \rho^2(z)\ dxdy\ . \end{equation}
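For the annulus $A=\{z\in{\Bbb C}: r<|z|<R\}$ and the family of curves joining its two boundary circles, the modulus has the classical value $2\pi/\log(R/r)$, attained by the extremal density $\rho(z)=1/(|z|\log(R/r))$. The following sanity check (our own computation, for illustration) verifies both the admissibility of this $\rho$ and the value of its Dirichlet integral:

```python
import numpy as np

# The extremal density rho(z) = 1/(|z| log(R/r)) for the annulus
# {r < |z| < R}: its integral along every radius equals 1 (so rho is
# admissible for the family of curves joining the boundary circles),
# and its Dirichlet integral equals 2*pi / log(R/r), the modulus.
def trap(y, x):
    # composite trapezoidal rule (avoids version-dependent np.trapz)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def radial_length(r, R, n=4000):
    t = np.linspace(r, R, n)
    return trap(1.0 / (t * np.log(R / r)), t)

def rho_energy(r, R, n=4000):
    t = np.linspace(r, R, n)
    rho = 1.0 / (t * np.log(R / r))
    return 2.0 * np.pi * trap(rho**2 * t, t)   # angular integral = 2*pi
```

Radial segments are the shortest curves of the family, so checking admissibility along a radius suffices for this particular $\rho$.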
\begin{rem}\label{rmk2.1} Note the following useful fact for a
quasiconformal mapping $f: D\to{\Bbb C}$, see e.g. V(6.6) in \cite{LV},
that \begin{equation} \label{eq2.14} M(f({\Gamma }))\ \le\ \int\limits_{{\Bbb C}} K(z)\cdot
{\rho }^2(z)\ dxdy \end{equation} for every path family ${\Gamma }$ in $D$ and for all
$\rho \in adm\, \Gamma$ where \begin{equation} \label{eq2.15} K(z)\ =\
\frac{|f_z|+|f_{\overline {z}}|}{|f_z|-|f_{\overline {z}}|} \end{equation} is
the (local) maximal dilatation of the mapping $f$ at a point $z\in
D.$
\end{rem}
Given a domain $D$ and two sets $E$ and $F$ in ${{\overline {{\Bbb C}}}}$,
$\Delta(E,F,D)$ denotes the family of all paths ${\gamma }:[a,b] \to {{\overline {{\Bbb C}}}}$
which join $E$ and $F$ in $D$, i.e., ${\gamma }(a) \in E, \ {\gamma }(b) \in F$
and ${\gamma }(t) \in D$ for $a<t<b$. Recall that a {\bf ring domain}, or
shortly a {\bf ring} in ${\overline {{\Bbb C}}}$ is a domain $R$ whose complement
${\overline {{\Bbb C}}}\setminus R$ consists of two connected components.
\medskip
Recall that, for points $z,{\zeta }\in{\overline {{\Bbb C}}} ,$ the {\bf spherical (chordal)
distance} $s(z,{\zeta })$ between $z$ and ${\zeta }$ is given by \begin{equation}
\label{eq1.5a}
s(z,{\zeta } )\ =\
\frac{|z-{\zeta }|}{(1+|z|^2)^{\frac{1}{2}}(1+|{\zeta }|^2)^{\frac{1}{2}}}\ \ \
{\mbox{if}}\ \ \ z\ \ne\ \infty\ne{\zeta }\ ,\end{equation}
$$ s(z,\infty )\ =\ \frac{1}{(1+|z|^2)^{\frac{1}{2}}}\ \ \
{\mbox{if}}\ \ \ z\ \ne\ \infty\ .$$ By ${\delta }(A)$ we denote the
spherical diameter of a set $A\subset{\Bbb C}$, i.e. $\sup\limits_{z,{\zeta }\in
A}s(z,{\zeta })$.
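The chordal distance is simple to implement directly (our own sketch, with \texttt{None} standing for the point at infinity); it is a metric bounded by $1$ and dominated by the Euclidean distance on ${\Bbb C}$:

```python
import numpy as np

# Chordal (spherical) distance on the Riemann sphere; None plays the
# role of the point at infinity.
def chordal(z, w):
    if z is None and w is None:
        return 0.0
    if z is None:
        z, w = w, z
    if w is None:
        return 1.0 / np.sqrt(1.0 + abs(z)**2)
    return abs(z - w) / np.sqrt((1.0 + abs(z)**2) * (1.0 + abs(w)**2))
```

The value $1$ is attained exactly for antipodal points of the sphere, e.g. $s(0,\infty)=s(1,-1)=1$.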
The following statement is a direct consequence of the known
estimate of the capacity of a ring formulated in terms of moduli,
see e.g. Lemma 2.16 in \cite{BGR2}.
\begin{lemma}{} \label{lem4.1B} Let $f:D\to{\Bbb C}$ be a homeomorphism with ${\delta }
({\overline {{\Bbb C}}}\setminus f(D)) \ge {\Delta } > 0$ and let $z_0$ be a point in $D,$
${\zeta }\in B(z_0,r_0),$ $r_0 < dist\, (z_0,\partial D).$ Then \begin{equation}
\label{eq4.33C} s(f({\zeta }), f(z_0))\ \le\ \frac{32}{{\Delta }}\cdot
\hbox{exp}\left(-\frac{2\pi}{M({\Delta }(fC,fC_0, fA))} \right) \end{equation} where
$C_0=\{z\in{\Bbb C}: |z-z_0|=r_0\}$, $C=\{z\in{\Bbb C} : |z-z_0|=|{\zeta } -z_0|\}$
and $A=\{z\in{\Bbb C} : |{\zeta } -z_0|<|z-z_0|< r_0\} .$\end{lemma}
\setcounter{equation}{0}
\section{BMO, VMO and FMO functions}
Recall that a real-valued function $u$ in a domain $D$ in ${\Bbb C}$
is said to be of {\bf bounded mean oscillation} in $D$, abbr.
$u\in{\rm BMO}(D)$, if $u\in L_{\rm loc}^1(D)$ and
\begin{equation}\label{lasibm_2.2_1}\Vert u\Vert_{*}:=
\sup\limits_{B}{\frac{1}{|B|}}\int\limits_{B}|u(z)-u_{B}|\,dxdy<\infty\,,
\end{equation}
where the supremum is taken over all discs $B$ in $D$ and
$$u_{B}={\frac{1}{|B|}}\int\limits_{B}u(z)\,dxdy\,.$$ We write $u\in{\rm BMO}_{\rm loc}(D)$ if
$u\in{\rm BMO}(U)$ for every relatively compact subdomain $U$ of $D$
(we also write BMO or ${\rm BMO}_{\rm loc }$ if it is clear from the
context what $D$ is).
\medskip
The class BMO was introduced by John and Nirenberg (1961) in the
paper \cite{JN} and soon became an important concept in harmonic
analysis, partial differential equations and related areas, see e.g.
\cite{HKM} and \cite{RR}.
\medskip
A function $u$ in BMO is said to have {\bf vanishing mean
oscillation}, abbr. $u\in{\rm VMO}$, if the supremum in
(\ref{lasibm_2.2_1}) taken over all discs $B$ in $D$ with
$|B|<\varepsilon$ converges to $0$ as $\varepsilon\to0$. VMO was
introduced by Sarason in \cite{Sarason}. A number of papers have
been devoted to the study of partial differential equations
with coefficients of the class VMO.
\begin{rem}\label{rmk1} Note that $W^{\,1,2}\left({{D}}\right) \subset VMO
\left({{D}}\right),$ see e.g. \cite{BN}. \end{rem}
Following \cite{IR}, we say that a function $u: D\to {\Bbb R} $ has {\bf
finite mean oscillation} at a point $z_0 \in {D} $ if
\begin{equation} \label{eq2.4} \overline{\lim\limits_{\varepsilon\to 0}}\
\ \
\Xint-_{B( z_0 ,\varepsilon)}
|u(z)-\tilde{u}_{\varepsilon}(z_0)|\ dxdy\ <\ \infty \end{equation} where
$$ \tilde{u}_{\varepsilon}(z_0)=\Xint-_{B( z_0 ,\varepsilon)}
u(z)\ dxdy $$ is the mean value of the function $u(z) $ over the
disk $B( z_0 ,\varepsilon)$ with small ${\varepsilon}>0.$ We also say that a
function $u : D\to {\Bbb R} $ is of {\bf finite mean oscillation } in
$D$, abbr. $u\in$ FMO(D) or simply $u\in$ {\bf FMO}, if
(\ref{eq2.4}) holds at every point $z_0 \in {D}.$
\begin{rem}\label{rmk2.33} Clearly BMO $\subset$ FMO. There exist examples
showing that FMO is not contained in BMO$_{\rm loc},$ see e.g. \cite{GRSY2}. By
definition FMO$\ \subset L^1_{\rm loc}$, but FMO is not a subset of
$L^p_{\rm loc}$ for any $p>1$, in contrast with BMO$_{\rm loc}\subset
L^p_{\rm loc}$ for all $p\in [1,\infty)$. \end{rem}
\begin{propo} \label{pr2.1} If, for some collection of numbers
$u_{\varepsilon}\in {{\Bbb R}},\ \ \varepsilon \in (0,\varepsilon_0] $,
\begin{equation} \label{eq2.7} \overline{\lim\limits_{\varepsilon\to 0}}\ \ \
\Xint-_{B( z_0 ,\varepsilon)} |u(z)-u_{\varepsilon}|\ dxdy\ <
\infty\, , \end{equation} then $u $ is of finite mean oscillation at
$z_0$. \end{propo}
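For the reader's convenience, we sketch the elementary argument, which reduces to the triangle inequality. Since
$$ |u_{\varepsilon}-\tilde{u}_{\varepsilon}(z_0)|\ =\ \left|\ \Xint-_{B( z_0 ,\varepsilon)} \left(u_{\varepsilon}-u(z)\right)\ dxdy\ \right|\ \le\
\Xint-_{B( z_0 ,\varepsilon)} |u(z)-u_{\varepsilon}|\ dxdy\ , $$
we have
$$ \Xint-_{B( z_0 ,\varepsilon)} |u(z)-\tilde{u}_{\varepsilon}(z_0)|\ dxdy\ \le\
2\ \Xint-_{B( z_0 ,\varepsilon)} |u(z)-u_{\varepsilon}|\ dxdy\ , $$
and hence (\ref{eq2.7}) implies (\ref{eq2.4}).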
\begin{corol} \label{cor2.1} If, for a point $z_0\in{D} ,$ \begin{equation}
\label{eq2.8} \overline{\lim\limits_{\varepsilon\to 0}}\ \
\Xint-_{B( z_0 ,\varepsilon)} |u(z)|\ dxdy\ <\ \infty \ , \end{equation}
then $u $ has finite mean oscillation at $z_0.$ \end{corol}
\begin{rem}\label{rmk2.13a} Note that the function
$u(z)=\log\frac{1}{|z|}$ belongs to BMO in the unit disk ${\Bbb B}$, see
e.g. \cite{RR}, p. 5, and hence also to FMO. However,
$\tilde{u}_{{\varepsilon}}(0)\to\infty$ as ${\varepsilon}\to 0,$ showing that the
condition (\ref{eq2.8}) is only sufficient but not necessary for a
function $u$ to be of finite mean oscillation at $z_0.$ \end{rem}
Below we use the notation $
A(\varepsilon,\varepsilon_0)=\{z\in{{\Bbb C}}:\varepsilon<|z|<\varepsilon_0\}
\, .$
\begin{lemma}{} \label{lem2.1}Let $u: D\rightarrow {{\Bbb R}}$ be a nonnegative
function with finite mean oscillation at $0\in {D}$ and let $u$
be integrable in $B(0,e^{-1})\subset D.$ Then \begin{equation} \label{eq2.9}
\int\limits_{A(\varepsilon, e^{-1})}\frac{u (z)\, dxdy}
{\left(|z| \log \frac{1}{|z|}\right)^2} \le\ C \cdot \log\log
\frac{1}{\varepsilon}\ \ \ \ \ \ \ \ \ \ \ \ \forall\ {\varepsilon}\in
(0,e^{-e}) \end{equation} for some constant $C>0$ depending only on $u$. \end{lemma}
For the proof of this lemma, see \cite{IR}.
\setcounter{equation}{0}
\section{The main lemma}
The following lemma is the main tool for deriving criteria on the
existence of regular solutions for the Dirichlet problem to the
Beltrami equations with two characteristics in a Jordan
domain in ${\Bbb C}$.
\begin{lemma}{} \label{lem3.3A} Let $D$ be a Jordan domain in ${\Bbb C}$ with
$0\in D$ and let $\mu$ and $\nu : D\to{\Bbb C}$ be measurable functions
with $K_{\mu , \nu}\in L^1(D).$ Suppose that for every $z_0\in
\overline D $ there exist ${\varepsilon}_0={\varepsilon}(z_0)>0$ and a family of
measurable functions ${\psi }_{z_0,{\varepsilon}}:(0,\infty)\to(0,\infty),$
${\varepsilon}\in(0,{\varepsilon}_0),$ such that \begin{equation} \label{eq3.5A} 0\ <\ I_{z_0}({\varepsilon})\
\colon =\ \int\limits_{{\varepsilon}}^{{\varepsilon}_0}{\psi }_{z_0,{\varepsilon}}(t)\ dt\ <\ \infty\ ,
\end{equation} and such that \begin{equation}\label{eq3.4o}
\int\limits_{{\varepsilon}<|z-z_0|<{\varepsilon}_0}\ K_{\mu ,
\nu}(z)\cdot{\psi }^2_{z_0,{\varepsilon}}(|z-z_0|)\ dxdy\ =\ o(I^2_{z_0}({\varepsilon})) \end{equation}
as ${\varepsilon}\to 0.$ Then the Dirichlet problem
(\ref{Dirichlet}) has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}$. \end{lemma}
Here we assume that $\mu$ and $\nu$ are extended by zero outside of
the domain $D$.
\medskip
{\it Proof.} Setting \begin{equation} \label{eq3.21p}{\mu }_n(z)\ =\ \left
\{\begin{array}{rr} {\mu }(z)\ , & \ \mbox{if}\ K_{\mu , \nu}(z)\le n,
\\ 0\ , & \ \mbox{otherwise in}\ {\Bbb C},
\end{array} \right. \end{equation} and \begin{equation} \label{eq3.21q}{\nu }_n(z)\
=\ \left \{\begin{array}{rr} {\nu }(z)\ , & \ \mbox{if}\ K_{\mu ,
\nu}(z)\le n,
\\ 0\ , & \ \mbox{otherwise in}\ {\Bbb C},
\end{array} \right. \end{equation} we have that $K_{\mu_n , \nu_n}(z)\le n$
in ${\Bbb C}$. Denote by $f_n$, ${\cal A}_n$, $G_n$, ${\cal H}_n$ and
$h_n$ the functions $f_{\mu_n , \nu_n , {\varphi}}$, ${\cal
A}_{{\mu }_n , {\nu }_n , {\varphi}}$, $G_{\mu_n , \nu_n , {\varphi}}$,
${\cal H}_{\mu_n , \nu_n , {\varphi}}$ and $h_{\mu_n , \nu_n ,
{\varphi}}$, respectively, from Proposition \ref{pr2} and Remark
\ref{rmk11}.
\medskip
Let $\Gamma_{\varepsilon}$ be the family of all paths joining the
circles $C_{\varepsilon}=\{ z\in{\Bbb C}:|z-z_0|=\varepsilon\}$ and
$C_0=\{ z\in{\Bbb C}:|z-z_0|=\varepsilon_0\}$ in the ring
$A_{\varepsilon}=\{ z\in{\Bbb C}:\varepsilon< |z-z_0|<\varepsilon_0\}$.
Let also ${\psi }^*$ be a Borel function such that ${\psi }^*(t)={\psi }_{z_0,{\varepsilon}} (t)$ for
a.e. $t\in (0,\infty )$. Such a function ${\psi }^*$ exists by the Lusin
theorem, see e.g. \cite{Sa}, p. 69. Then the function
$$\rho_{\varepsilon}(z)=\left\{\begin{array}{rr}
{\psi }^*(|z-z_0|)/I_{z_0}(\varepsilon), & {\rm if } \ z\in A_{\varepsilon}, \\
0, & {\rm if} \ z\in {{\Bbb C}}\backslash A_{\varepsilon},
\end{array}\right.$$ is admissible for $\Gamma_{\varepsilon}$.
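The admissibility of $\rho_{\varepsilon}$ is verified by the standard length estimate in annuli: every locally rectifiable path $\gamma\in\Gamma_{\varepsilon}$ satisfies
$$ \int\limits_{\gamma}\rho_{\varepsilon}\ ds\ \ge\
\frac{1}{I_{z_0}(\varepsilon)}\ \int\limits_{\varepsilon}^{\varepsilon_0}{\psi }^{*}(t)\ dt\ =\ 1\ , $$
because $|z-z_0|$ runs over the whole interval $(\varepsilon,\varepsilon_0)$ along $\gamma$ and $ds\ge d\,|z-z_0|$.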
Hence by Remark \ref{rmk2.1} applied to $G_n$
$$M(G_n\Gamma_{\varepsilon})\leq\int\limits_{\varepsilon<|z-z_0|<\varepsilon_0} K_{{\mu } ,{\nu }}(z)\cdot
\rho_{\varepsilon}^2 (z)\ dx dy\,,$$ and, by the condition
(\ref{eq3.4o}), $M(G_n\Gamma_{\varepsilon})\to 0$ as $\varepsilon\to
0$ uniformly with respect to the parameter $n=1,2,\dots$.
\medskip
Thus, in view of the normalization $G_n(0)=0,$ $|G_n({\cal
R}^{-1}(1))|=1$, $G_n(\infty)=\infty$, the sequence $G_n$ is
equicontinuous in $\overline{\Bbb C}$ with respect to the spherical
distance by Lemma \ref{lem4.1B} with ${\Delta } = 1/{\sqrt{2}}$.
Consequently, by the Arzela--Ascoli theorem, see e.g. \cite{Du}, p.
267, and \cite{DS}, p. 382, it has a subsequence $G_{n_l}$ which
converges uniformly in $\overline{\Bbb C}$ with respect to the spherical
metric to a continuous mapping $G$ in $\overline{\Bbb C}$ with the
normalization $G(0)=0,$ $|G({\cal R}^{-1}(1))|=1$,
$G(\infty)=\infty$. Note that $G:\overline{\Bbb C}\to\overline{\Bbb C}$ is a
homeomorphism of the class $W^{1,1}_{\rm loc}({\Bbb C})$ by Corollary 3.8 in
\cite{BGR2}.
\medskip
Hence by the Rado theorem, see e.g. Theorem II.5.2 in \cite{Go},
${\cal H}_{n_l}\to {\cal H}$ as $l\to\infty$ uniformly in
$\overline{\Bbb B}$ where ${\cal H}:\overline{\Bbb B}\to G(\overline D)$ is the
conformal mapping of ${\Bbb B}$ onto $G(D)$ with the normalization ${\cal
H}(0)=0$ and ${\cal H}^{\prime}(0)>0$. Moreover, since the locally
uniform convergence $G_{n_l} \to G$ and ${\cal H}_{n_l} \to {\cal
H}$ of the sequences $G_{n_l}$ and ${\cal H}_{n_l}$ is equivalent to
their continuous convergence, i.e., $G_{n_l}(z_l) \to G(z_*)$ if
$z_l \to z_*$ and ${\cal H}_{n_l}(\zeta_l) \to {\cal H}(\zeta_*)$ if
\zeta_l \to \zeta_*$, see \cite{Du}, p. 268, and since $G$ and ${\cal
H}$ are injective, it follows that $G_{n_l}^{-1} \to G^{-1}$ and
${\cal H}_{n_l}^{-1} \to {\cal H}^{-1}$ continuously, and hence
locally uniformly.
\medskip
Then we have that ${\cal A}_{n_l}\to{\cal A}$ locally uniformly in
${\Bbb B}$, where
\begin{equation}\label{eqKPRS333}{\cal A}(w)=\frac{1}{2\pi i}\int\limits_{|\omega|=1}
{\varphi}(h^{-1}(\omega))\cdot\frac{\omega+w}{\omega-w}\frac{d\omega}{\omega}\
\end{equation} and $h={\cal H}^{-1}\circ G:\overline D\to\overline{\Bbb B}$,
$h(0)=0$, is a homeomorphism. Note that ${\cal A}_{n_l}$ and $\cal A$
are not constant and hence ${\cal A}^{\prime}_{n_l}$ and ${\cal
A}^{\prime}$ have only isolated zeros. The collection of all such
zeros is countable. Thus, by Theorem 3.1 and Corollary 3.8 in
\cite{BGR2} $h_{n_l}\to h$ locally uniformly in $D$ and $h$ is a
homeomorphic $W^{1,1}_{\rm loc}$ solution in $D$ of the quasilinear
equation
\begin{equation} \label{eq111.333} h_{\overline{z}}\, =\, \mu
(z)\cdot{h_z} +\nu (z)\cdot\frac{\overline{{\cal
A}^{\prime}(h(z))}}{{\cal A}^{\prime}(h(z))}\cdot \overline {h_z}
\end{equation} Hence $f_{n_l}\to f$ locally uniformly in $D$, where $f={\cal A}\circ h$ is a continuous
discrete open $W^{1,1}_{\rm loc}$ solution in $D$ of (\ref{Beltrami}).
\medskip
Next, note that ${\rm Re}\, {\cal A}_{n_l}\to{\rm Re}\, {\cal A}$
uniformly in $\overline{\Bbb B}$ by the maximum principle for harmonic
functions and ${\rm Re}\, {\cal A}={\varphi}\circ h^{-1}$ on
$\partial{\Bbb B}$ and, consequently, ${\rm Re}\, f_{n_l}\to{\rm Re}\, f$
uniformly in $\overline{D}$ and ${\rm Re}\, f={\varphi}$ on $\partial
D$, i.e., $f$ is a continuous discrete open $W^{1,1}_{\rm loc}$ solution
of the Dirichlet problem (\ref{Dirichlet}) in $D$ to the equation
(\ref{Beltrami}). It remains to show that $J_f(z)\ne 0$ a.e. in $D$.
\medskip
By a change of variables which is permitted because $h_{n_l}$ and
$\tilde h_{n_l}=h_{n_l}^{-1}$ belong to the class $W^{1,2}_{\rm loc}$,
see e.g. Lemmas III.2.1 and III.3.2 and Theorems III.3.1 and III.6.1
in \cite{LV}, we obtain that for large enough $l$ \begin{equation}\label{eq4.2A}
\int\limits_{B} |\partial \tilde h_{n_l}|^2\ dudv\ \le
\int\limits_{\tilde h_{n_l}(B)} \frac{dxdy}{1-k_l(z)^2}\ \leq
\int\limits_{B^*} K_{\mu ,\nu }(z)\ dxdy\ <\ \infty\end{equation} where
$k_l(z)=|\mu_{n_l}(z)|+|\nu_{n_l}(z)|$ and $B^*$ and $B$ are
relatively compact domains in $D$ and ${\Bbb B}$, respectively,
such that $\tilde h(\bar{B}) \subset B^*$. The relation
(\ref{eq4.2A}) implies that the sequence $\tilde h_{n_l}$ is bounded
in W$^{1,2}(B)$, and hence $h^{-1} \in $ W$^{1,2}_{\rm loc},$ see e.g.
Lemma III.3.5 in \cite{Re} or Theorem 4.6.1 in \cite{EG}. The latter
condition implies in turn that $h$ has the $(N^{-1})-$property, see e.g.
Theorem III.6.1 in \cite{LV}, and hence $J_h(z)\ne 0$ a.e., see
Theorem 1 in \cite{Po}. Thus, $f={\cal A}\circ h$ is a regular
solution of the Dirichlet problem (\ref{Dirichlet}) to the equation
(\ref{Beltrami}).
\begin{corol} \label{cor4.3A} Let $D$ be a Jordan domain in ${\Bbb C}$ with $0\in
D$ and let $\mu$, $\nu : D\to{\Bbb C}$ be measurable functions with
$K_{\mu , \nu}\in L^1(D).$ Suppose that for every
$z_0\in\overline{D}$ and some ${\varepsilon}_0>0$ \begin{equation} \label{eq3.4B}
\int\limits_{{\varepsilon}<|z-z_0|<{\varepsilon}_0} K_{\mu ,\nu}(z)\cdot{\psi }^2(|z-z_0|)\
dxdy\ =\ O\left( \int\limits_{{\varepsilon}}^{{\varepsilon}_0}\ {\psi }(t)\ dt\right)\end{equation} as
${\varepsilon}\to 0$, where ${\psi }:(0,\infty)\to(0,\infty)$ is a measurable
function such that
\begin{equation} \label{eq3.5B}
\int\limits_{0}^{{\varepsilon}_0}{\psi }(t)\
dt\ =\ \infty\ , \ \ \ 0\ <\ \int\limits_{{\varepsilon}}^{{\varepsilon}_0}{\psi }(t)\
dt
<\ \infty\ \ \ \ \forall\ {\varepsilon}\in(0,{\varepsilon}_0) \ .
\end{equation}
Then the Dirichlet problem
(\ref{Dirichlet}) has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}$.
\end{corol}
\setcounter{equation}{0}
\section{Existence theorems}
In what follows we assume that the functions $\mu$ and $\nu :
D\to{\Bbb C}$ are extended by zero outside of the domain $D$.
\begin{theo}{} \label{th4.111aFMO} Let $D$ be a Jordan domain in ${\Bbb C}$ with
$0\in D$ and let $\mu$ and $\nu :D\to{\Bbb C}$ be measurable functions
such that $K_{\mu , \nu}(z)\ \le\ Q(z)\ \in\ \hbox{FMO}.$
Then the Dirichlet problem
(\ref{Dirichlet}) has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}$. \end{theo}
{\it Proof}. Lemma \ref{lem3.3A} yields this conclusion by
choosing \begin{equation} \label{eq3.4E} {\psi }_{z_0,{\varepsilon}}(t)\ =\ \frac{1}{t\log
\frac{1}{t}}\ \ , \end{equation} see also Lemma \ref{lem2.1}.
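For clarity, let us verify that this choice satisfies the hypotheses of Lemma \ref{lem3.3A} for small enough ${\varepsilon}_0={\varepsilon}_0(z_0)>0$. A direct computation gives
$$ I_{z_0}({\varepsilon})\ =\ \int\limits_{{\varepsilon}}^{{\varepsilon}_0}\frac{dt}{t\log\frac{1}{t}}\ =\
\log\log\frac{1}{{\varepsilon}}\ -\ \log\log\frac{1}{{\varepsilon}_0}\ , $$
and, by Lemma \ref{lem2.1} applied to the majorant $Q$ at the point $z_0$,
$$ \int\limits_{{\varepsilon}<|z-z_0|<{\varepsilon}_0} K_{\mu ,\nu}(z)\cdot{\psi }^2_{z_0,{\varepsilon}}(|z-z_0|)\ dxdy\ \le\
C\cdot \log\log\frac{1}{{\varepsilon}}\ =\ o(I^2_{z_0}({\varepsilon})) $$
as ${\varepsilon}\to 0$, i.e., the condition (\ref{eq3.4o}) holds.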
\begin{corol} \label{cor4.333b} In particular, if \begin{equation} \label{eq2.8a}
\overline{\lim\limits_{\varepsilon\to 0}}\ \ \ \Xint-_{B( z_0
,\varepsilon)} \frac{1+|\nu (z)|}{1-|\nu (z)|}\ dxdy\ <\ \infty\ \ \
\ \ \ \ \ \ \ \ \forall\ z_0\in \overline D\ , \end{equation}
then the Dirichlet problem
\begin{equation}\label{Dirichlet1}
\left\{\begin{array}{ccc}
f_{\overline{z}}\, =\, \nu (z)\cdot \overline {f_z},\,\,\, &z\in D, \\
\lim\limits_{z\to\zeta}{\rm Re}\,f(z)={\varphi}(\zeta), &\forall\
\zeta\in\partial D,
\end{array}\right.
\end{equation}
in a Jordan domain $D$, $0\in D,$ has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}$.
\end{corol}
Similarly, choosing in Lemma \ref{lem3.3A} the function
$\psi(t)=1/t$, we come to the following statement.
\medskip
\begin{theo}{} \label{thKPRS12b*} Let $D$ be a Jordan domain in ${\Bbb C}$ with
$0\in D$ and let $\mu$ and $\nu :D\to{\Bbb C}$ be measurable functions
such that $K_{\mu , \nu}\in\ L^1_{\rm loc}(D).$ Suppose that
\begin{equation}\label{eqKPRS12c*}
\int\limits_{\varepsilon<|z-z_0|<\varepsilon_0}K_{\mu ,\nu}(z)\
\frac{dm(z)}{|z-z_0|^2}\ =\
o\left(\left[\log\frac{1}{\varepsilon}\right]^2\right)\qquad\forall\
z_0\in\overline{D}\end{equation} as $\varepsilon\to 0$ for some
$\varepsilon_0=\delta(z_0)$. Then the Dirichlet problem
(\ref{Dirichlet}) has a regular solution $f$ with ${\rm Im} f(0)=0$
for each nonconstant continuous function ${\varphi}:\partial D\to{\Bbb R}$.
\end{theo}
\begin{rem}\label{rmKRRSa*} Choosing in Lemma \ref{lem3.3A} the function
$\psi(t)=1/(t\log{1/t})$ instead of $\psi(t)=1/t$, we are able to
replace (\ref{eqKPRS12c*}) by
\begin{equation}\label{eqKPRS12f*}
\int\limits_{\varepsilon<|z-z_0|<\varepsilon_0}\frac{K_{\mu
,\nu}(z)\ dm(z)} {\left(|z-z_0|\log{\frac{1}{|z-z_0|}}\right)^2}
=o\left(\left[\log\log\frac{1}{\varepsilon}\right]^2\right)\end{equation} as $\varepsilon\to 0$.
In general, we are able to give here the whole scale of the
corresponding conditions in $\log$ using functions $\psi(t)$ of the
form
$1/(t\log{1}/{t}\cdot\log\log{1}/{t}\cdot\ldots\cdot\log\ldots\log{1}/{t})$.
\end{rem}
\begin{theo}{} \label{th3.2C} Let $D$ be a Jordan domain in ${\Bbb C}$ with $0\in
D$ and let $\mu$, $\nu : D\to{\Bbb B}$ be measurable functions, $K_{\mu ,
\nu}\in L^1(D)$ and $k_{z_0}(r)$ be the mean value of $K_{\mu ,
\nu}(z)$ over the circle $|z-z_0|=r.$ Suppose that
\begin{equation} \label{eq1}
\int\limits_{0}^{{\delta }(z_0)}\frac{dr}{rk_{z_0}( r)}\ =\ \infty
\ \ \ \ \ \ \ \ \ \forall\ z_0\in \overline D\ \ \ \mbox{and some}\ \ {\delta }(z_0)>0\ . \end{equation}
Then the Dirichlet problem
(\ref{Dirichlet}) has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}$. \end{theo}
{\it Proof}. Theorem \ref{th3.2C} follows from Lemma \ref{lem3.3A}
by the special choice of the functional parameter \begin{equation}
\label{eq3.21psi}{\psi }_{z_0,{\varepsilon}}(t)\ \equiv\ {\psi }_{z_0}(t)\ \colon =\
\left \{\begin{array}{rr} 1/[tk_{z_0}(t)]\ , & \ t\in (0,{\varepsilon}_0)\ ,
\\ 0\ , & \ \mbox{otherwise}
\end{array} \right. \end{equation} where ${\varepsilon}_0={\delta }(z_0).$
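The verification of the condition (\ref{eq3.4o}) for this choice is a direct computation. Passing to polar coordinates centered at $z_0$ and using the definition of the mean value $k_{z_0}(t)$, we obtain
$$ \int\limits_{{\varepsilon}<|z-z_0|<{\varepsilon}_0} K_{\mu ,\nu}(z)\cdot{\psi }^2_{z_0}(|z-z_0|)\ dxdy\ =\
2\pi \int\limits_{{\varepsilon}}^{{\varepsilon}_0}\frac{dt}{t\, k_{z_0}(t)}\ =\ 2\pi\, I_{z_0}({\varepsilon})\ =\ o(I^2_{z_0}({\varepsilon})) $$
as ${\varepsilon}\to 0$, because $I_{z_0}({\varepsilon})\to\infty$ by the condition (\ref{eq1}).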
\begin{corol} \label{cor3.2V} In particular, the conclusion of Theorem
\ref{th3.2C} holds if \begin{equation} \label{eq3.21W} k_{z_0}(r)\ =\ O\left(
\log \frac{1}{r}\right)\ \ \ \mbox{as}\ \ \ r\to 0\ \ \ \ \ \ \ \ \
\forall\ z_0\in \overline D\ . \end{equation} \end{corol}
In fact, it is clear that the condition (\ref{eq1}) yields the
whole scale of conditions of the type (\ref{eq3.21W}) in terms of $\log$,
using on the right-hand side of (\ref{eq3.21W}) functions of the form
$\log{1}/{r}\cdot\log\log{1}/{r}\cdot\ldots\cdot\log\ldots\log{1}/{r}$.
\bigskip
In the theory of mappings called quasiconformal in the mean,
conditions of the type
\begin{equation}\label{eq2} \int\limits_{{D}} \Phi
(Q(z))\ dxdy\ <\ \infty\end{equation} are standard for various
characteristics of these mappings.
In this connection, the equivalence of various integral conditions on
the function $\Phi$ was established in the paper \cite{RSY1}, see also
the monograph \cite{GRSY2}. We
give here the conditions on $\Phi$ under which
(\ref{eq2}) implies (\ref{eq1}).\medskip
Further we use the following notion of the inverse function for
mo\-no\-to\-ne functions. Namely, for every non-decreasing function
$\Phi:[0,\infty ]\to [0,\infty ] ,$ the {\bf inverse function}
$\Phi^{-1}:[0,\infty ]\to [0,\infty ]$ can be well defined by
setting
\begin{equation}\label{eq5.5CC} \Phi^{-1}(\tau)\ =\
\inf\limits_{\Phi(t)\ge \tau}\ t\ .
\end{equation} As usual, here $\inf$ is equal to $\infty$ if the set of
$t\in[0,\infty ]$ such that $\Phi(t)\ge \tau$ is empty. Note that
the function $\Phi^{-1}$ is non-decreasing, too.
\begin{rem}\label{rmk3.333} It follows immediately from the definition
that \begin{equation}\label{eq5.5CCC} {\Phi}^{-1}({\Phi}(t))\ \le\ t\ \ \ \ \ \ \ \
\forall\ t\in[ 0,\infty ] \end{equation} with equality in (\ref{eq5.5CCC})
except on the intervals of constancy of the function $\Phi$. \end{rem}
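To illustrate the definition (\ref{eq5.5CC}), consider ${\Phi}(t)=e^t$. Then ${\Phi}(t)\ge{\tau }$ if and only if $t\ge\log{\tau }$ for ${\tau }\ge 1$, so that
$$ {\Phi}^{-1}({\tau })\ =\ \log{\tau }\ \ \ \mbox{for}\ \ {\tau }\ge 1
\ \ \ \ \mbox{and}\ \ \ \ {\Phi}^{-1}({\tau })\ =\ 0\ \ \ \mbox{for}\ \ {\tau }\in[0,1]\ , $$
and here ${\Phi}^{-1}({\Phi}(t))=t$ for all $t$ because ${\Phi}$ is strictly increasing.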
Further, in (\ref{eq333Y}) and (\ref{eq333F}), we set the
integrals equal to $\infty$ if ${\Phi}(t)=\infty$, respectively
$H(t)=\infty$, for all $t\ge T\in[0,\infty) .$ The
integral in (\ref{eq333F}) is understood as the Lebesgue--Stieltjes
integral and the integrals (\ref{eq333Y}) and
(\ref{eq333B})--(\ref{eq333A}) as the ordinary Lebesgue integrals.
\begin{propo} \label{pr4.1aB} Let ${\Phi}:[0,\infty ]\to [0,\infty ]$ be a
non-decreasing function and set \begin{equation}\label{eq333E} H(t)\ =\ \log
{\Phi}(t)\ .\end{equation}
Then the equality \begin{equation}\label{eq333Y} \int\limits_{{\Delta }}^{\infty}
H'(t)\ \frac{dt}{t}\ =\ \infty \end{equation} implies the equality
\begin{equation}\label{eq333F} \int\limits_{{\Delta }}^{\infty} \frac{dH(t)}{t}\ =\
\infty \end{equation} and (\ref{eq333F}) is equivalent to \begin{equation}\label{eq333B}
\int\limits_{{\Delta }}^{\infty}H(t)\ \frac{dt}{t^2}\ =\ \infty \end{equation} for
some ${\Delta }>0,$ and (\ref{eq333B}) is equivalent to each of the
equalities: \begin{equation}\label{eq333C}
\int\limits_{0}^{{\delta }}H\left(\frac{1}{t}\right)\ {dt}\ =\ \infty \end{equation}
for some ${\delta }>0,$ \begin{equation}\label{eq333D} \int\limits_{{\Delta }_*}^{\infty}
\frac{d\eta}{H^{-1}(\eta)}\ =\ \infty \end{equation} for some ${\Delta }_*>H(+0),$
\begin{equation}\label{eq333A} \int\limits_{{\delta }_*}^{\infty}\ \frac{d{\tau }}{{\tau }
{\Phi}^{-1}({\tau } )}\ =\ \infty \end{equation} for some ${\delta }_*>{\Phi}(+0).$
\medskip
Moreover, (\ref{eq333Y}) is equivalent to (\ref{eq333F}) and hence
(\ref{eq333Y})--(\ref{eq333A})
are equivalent to each other if ${\Phi}$ is in addition absolutely continuous.
In particular, all the conditions (\ref{eq333Y})--(\ref{eq333A}) are
equivalent if ${\Phi}$ is convex and non-decreasing. \end{propo}
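To illustrate Proposition \ref{pr4.1aB}, consider the convex non-decreasing function ${\Phi}(t)=e^{\alpha t}$, $\alpha>0$. Then $H(t)=\log{\Phi}(t)=\alpha t$ and
$$ \int\limits_{{\Delta }}^{\infty} H'(t)\ \frac{dt}{t}\ =\
\alpha \int\limits_{{\Delta }}^{\infty}\frac{dt}{t}\ =\ \infty\ , $$
so that all the conditions (\ref{eq333Y})--(\ref{eq333A}) hold for this function; cf. Corollary \ref{cor333} below.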
Finally, we give the connection of the above conditions with the
condition of the type (\ref{eq1}).
\medskip
Recall that a function $\psi :[0,\infty ]\to [0,\infty ]$ is called
{\bf convex} if $\psi (\lambda t_1 + (1-\lambda) t_2)\le\lambda\psi
(t_1)+ (1-\lambda)\psi (t_2)$ for all $t_1$ and $t_2\in[0,\infty ]$
and $\lambda\in [0,1]$.\medskip
\begin{propo} \label{th5.555} Let $Q:{\Bbb B}\to [0,\infty ]$ be a measurable
function such that \begin{equation}\label{eq5.555} \int\limits_{{\Bbb B}} {\Phi} (Q(z))\
dxdy\ <\ \infty\end{equation} where ${\Phi}:[0,\infty ]\to [0,\infty ]$ is a
non-decreasing convex function such that \begin{equation}\label{eq3.333a}
\int\limits_{{\delta }}^{\infty}\ \frac{d{\tau }}{{\tau } {\Phi}^{-1}({\tau } )}\ =\ \infty
\end{equation} for some ${\delta }\ > {\Phi}(0).$ Then \begin{equation}\label{eq3.333A}
\int\limits_{0}^{1}\ \frac{dr}{rq(r)}\ =\ \infty \end{equation} where $q(r)$
is the average of the function $Q(z)$ over the circle $|z|=r$. \end{propo}
Finally, combining Propositions \ref{pr4.1aB} and \ref{th5.555} we
obtain the following conclusion.
\begin{corol} \label{cor555} If ${\Phi}:[0,\infty ]\to [0,\infty ]$ is a
non-decreasing convex function and $Q$ satisfies the condition
(\ref{eq5.555}), then each of the conditions
(\ref{eq333Y})--(\ref{eq333A}) implies (\ref{eq3.333A}). \end{corol}
Immediately on the basis of Theorem \ref{th3.2C} and Corollary
\ref{cor555}, we obtain the next significant result.
\begin{theo}{} \label{th4.111a} Let $D$ be a Jordan domain in ${\Bbb C}$ with
$0\in D$ and let $\mu$ and $\nu : D\to{\Bbb C}$ be measurable functions
such that \begin{equation}\label{eq4.2} \int\limits_{{D}} \Phi (K_{{\mu },{\nu }}(z))\
dxdy\ <\ \infty \end{equation} where ${\Phi}:[0,\infty ]\to [0,\infty ]$ is a
non-decreasing convex function satisfying at least one of the
conditions (\ref{eq333Y})--(\ref{eq333A}). Then the Dirichlet problem
(\ref{Dirichlet}) has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}$. \end{theo}
On the same basis, we obtain the following consequence.
\begin{corol} \label{cor333} In particular, the conclusion of Theorem
\ref{th4.111a} holds if
\begin{equation}\label{eqp.KR4.1c}\int\limits_{D\cap
U_{z_0}}e^{\alpha(z_0) K_{\mu , \nu}(z)}\,dxdy\ <\infty
\qquad\forall\ z_0\in \overline{D}\end{equation} for some
$\alpha(z_0)>0$
and a neighborhood $U_{z_0}$ of the point $z_0$.
\end{corol}
\begin{rem}\label{rmk5.1} By the Stoilow theorem, see
e.g. \cite{Sto}, every regular solution $f$ to the Dirichlet problem
\begin{equation}\label{Dirichlet3}
\left\{\begin{array}{ccc}
f_{\overline{z}}\, =\, \mu (z)\cdot {f_z},\,\,\, &z\in D, \\
\lim\limits_{z\to\zeta}{\rm Re}\,f(z)={\varphi}(\zeta), &\forall\
\zeta\in\partial D,
\end{array}\right.
\end{equation}
has the representation $f=h\circ g$ where $g:D\to{\Bbb B}$ stands for a homeomorphic $W^{1,1}_{\rm loc}$
solution to the Beltrami equation $g_{\overline{z}}\, =\, \mu (z)\cdot {g_z},$ and $h:{\Bbb B}\to{\Bbb C}$
is analytic. By Theorem 5.50 from \cite{RSY1}
the conditions (\ref{eq333Y})--(\ref{eq333A}) are not only
sufficient but also necessary to have a homeomorphic $W^{1,1}_{\rm loc}$ solution
for all such Beltrami equations with the integral constraint
\begin{equation} \int\limits_{{D}} \Phi (K_{{\mu }}(z))\
dxdy\ <\ \infty. \end{equation}
Note also that in the above theorem we may assume that the functions
${\Phi}_{z_0}(t)$ and ${\Phi}(t)$ are convex and non-decreasing not on the
whole segment $[0,\infty]$ but only on a segment $[T,\infty]$ for
some $T\in(1,\infty)$. Indeed, every function
${\Phi}:[0,\infty]\to[0,\infty]$ which is convex and non-decreasing on a
segment $[T,\infty]$, $T\in(0,\infty)$, can be replaced by a
non-decreasing convex function ${\Phi}_T:[0,\infty]\to[0,\infty]$ in the
following way. We set ${\Phi}_T(t)\equiv 0$ for all $t\in [0,T]$,
${\Phi}_T(t)={\varphi}(t)$ for $t\in[T,T_*]$, and ${\Phi}_T(t)\equiv {\Phi}(t)$ for
$t\in[T_*,\infty]$, where ${\tau }={\varphi}(t)$ is the line passing through the
point $(T,0)$ and tangent to the graph of the function ${\tau }={\Phi}(t)$ at
a point $(T_*,{\Phi}(T_*))$, $T_*\ge T$. For such a function we have by
the construction that ${\Phi}_T(t)\le {\Phi}(t)$ for all $t\in[0,\infty]$
and ${\Phi}_T(t)={\Phi}(t)$ for all $t\ge T_*$. \end{rem}
An equation of the form \begin{equation} \label{eq6.1} f_{\overline{z}}\, =\,
{l}\, (z)\ {\rm Re}\, f_z \end{equation} with $|{l}\, (z)|<1$ a.e. is called a {\bf
reduced Beltrami equation}, considered e.g. in \cite{Bo$_3$} and
\cite{Vo}, though the term is not introduced there. The equation
(\ref{eq6.1}) can be written as the equation (\ref{Beltrami}) with \begin{equation}
\label{eq6.2} {\mu }(z)\ =\ {\nu } (z)\ =\ \frac{{l}\, (z)}{2} \end{equation} and then
\begin{equation} \label{eq6.3} K_{\mu , \nu}(z)\ =\ K_{{l}\, }(z)\ \colon =\
\frac{1+|{l}\, (z)|}{1-|{l}\, (z)|}\ .\end{equation} Thus, we obtain from Theorem
\ref{th4.111a} the following consequence for the reduced Beltrami
equations (\ref{eq6.1}).
\begin{theo}{} \label{th4.111aR} Let $D$ be a Jordan domain in ${\Bbb C}$ with
$0\in D$ and let ${l}\, : D\to{\Bbb C}$ be a measurable function
such that \begin{equation}\label{eq2.8aR}
\int\limits_{{D}} \Phi (K_{{l}\, }(z))\ dxdy\ <\ \infty \end{equation} where
${\Phi}:[0,\infty ]\to [0,\infty ]$ is a non-decreasing convex function
satisfying at least one of the conditions
(\ref{eq333Y})--(\ref{eq333A}).
Then the Dirichlet problem
\begin{equation}\label{Dirichlet2}
\left\{\begin{array}{ccc}
f_{\overline{z}}\, =\,
{l}\, (z)\ {\rm Re}\, f_z ,\,\,\, &z\in D, \\
\lim\limits_{z\to\zeta}{\rm Re}\,f(z)={\varphi}(\zeta), &\forall\
\zeta\in\partial D,
\end{array}\right.
\end{equation}
in a Jordan domain $D$, $0\in D,$ has
a regular solution $f$
with ${\rm Im}
f(0)=0$ for each nonconstant continuous function ${\varphi}:\partial
D\to{\Bbb R}.$
\end{theo}
Finally, on the basis of Corollary \ref{cor333}, we obtain the
following consequence.
\begin{corol} \label{cor111} In particular, the conclusion of Theorem
\ref{th4.111aR} holds if
\begin{equation}\label{eqp.KR4.1d}\int\limits_{D\cap
U_{z_0}}e^{\alpha(z_0) K_{{l}\, }(z)}\,dxdy\ <\infty \qquad\forall\
z_0\in \overline{D}\end{equation} for some $\alpha(z_0)>0$
and a neighborhood $U_{z_0}$ of the point $z_0$.
\end{corol}
\begin{rem}\label{rmk6.1} Remark \ref{rmk5.1} is also valid for reduced
Beltrami equations. Moreover, the above results remain true for the
case in (\ref{Beltrami}) when \begin{equation} \label{eq6.14} {\nu }(z)\ =\ {\mu } (z)\
e^{i\theta (z)} \end{equation} with an arbitrary measurable function
$\theta(z): D\to{\Bbb R}$ and, in particular, for the equations of the
form \begin{equation} \label{eq6.1A} f_{\overline{z}}\, =\, {l}\, (z)\ {\rm Im}\,
f_z \end{equation} with a measurable coefficient ${l}\, : D\to{\Bbb C}$, $|{l}\, (z)|<1$
a.e., see e.g. \cite{Bo$_3$}. \end{rem}
Our approach makes it possible, under certain modifications, to
obtain criteria on the existence of pseudoregular and multi-valued
solutions in finitely connected domains; these results will be published
elsewhere.
\medskip
\section{Introduction} \label{sec:1}
Populations of self-sustained oscillators can exhibit various synchronization phenomena
\cite{ref:winfree80,ref:kuramoto84,ref:pikovsky01,ref:strogatz03,ref:manrubia04}.
For example,
it is well known that a limit-cycle oscillator can exhibit phase locking to a periodic external forcing;
this phenomenon is called the forced synchronization~\cite{ref:winfree80,ref:kuramoto84,ref:pikovsky01}.
Recently,
it was also found that uncoupled identical limit-cycle oscillators subject to weak common noise
can exhibit in-phase synchronization;
this remarkable phenomenon is called the common-noise-induced synchronization
\cite{ref:teramae04,ref:goldobin05,ref:nakao07,ref:kurebayashi12}.
In general,
each oscillatory dynamics is described by a stable limit-cycle solution to an ordinary differential equation,
and the phase description method for ordinary limit-cycle oscillators
has played an essential role in the theoretical analysis of the synchronization phenomena
\cite{ref:winfree80,ref:kuramoto84,ref:pikovsky01,
ref:hoppensteadt97,ref:izhikevich07,ref:ermentrout10,ref:ermentrout96,ref:brown04}.
On the basis of the phase description,
optimization methods for the dynamical properties of limit-cycle oscillators have also been developed for
forced synchronization~\cite{ref:moehlis06,ref:harada10,ref:dasanayake11,ref:zlotnik12,ref:zlotnik13} and
common-noise-induced synchronization~\cite{ref:marella08,ref:abouzeid09,ref:hata11}.
Synchronization phenomena of spatiotemporal rhythms
described by partial differential equations,
such as reaction-diffusion equations and fluid equations,
have also attracted considerable attention
\cite{ref:pikovsky01,ref:manrubia04,ref:mikhailov06,ref:mikhailov13}
(see also Refs.~\cite{ref:manneville90,ref:cross93,ref:cross09} for the spatiotemporal pattern formation).
Examples of earlier studies include the following.
In reaction-diffusion systems,
synchronization between two locally coupled domains
of excitable media exhibiting spiral waves has been experimentally investigated
using the photosensitive Belousov-Zhabotinsky reaction~\cite{ref:hildebrand03}.
In fluid systems,
synchronization in both periodic and chaotic regimes has been experimentally investigated
using a periodically forced rotating fluid annulus~\cite{ref:read09}
and a pair of thermally coupled rotating fluid annuli~\cite{ref:read10}.
Of particular interest in this paper is
the experimental study on generalized synchronization of spatiotemporal chaos
in a liquid crystal spatial light modulator~\cite{ref:rogers04};
this experimental synchronization can be considered as
common-noise-induced synchronization of spatiotemporal chaos.
However, detailed theoretical analysis of these synchronization phenomena has not been performed
even for the case in which
the spatiotemporal rhythms are described by stable limit-cycle solutions to partial differential equations,
because a phase description method for partial differential equations has not been fully developed yet.
In this paper,
we theoretically analyze common-noise-induced phase synchronization
between uncoupled identical Hele-Shaw cells exhibiting oscillatory convection;
the oscillatory convection is described by a stable limit-cycle solution to a partial differential equation.
A Hele-Shaw cell is a rectangular cavity in which the gap between two vertical walls
is much smaller than the other two spatial dimensions,
and the fluid in the cavity exhibits oscillatory convection under appropriate parameter conditions
(see Refs.~\cite{ref:bernardini04,ref:nield06} and also references therein).
In Ref.~\cite{ref:kawamura13},
we recently formulated a theory for the phase description of oscillatory convection in the Hele-Shaw cell
and analyzed the mutual synchronization between a pair of coupled systems of oscillatory Hele-Shaw convection;
the theory can be considered as an extension of our phase description method
for stable limit-cycle solutions to nonlinear Fokker-Planck equations~\cite{ref:kawamura11}
(see also Ref.~\cite{ref:nakao12} for the phase description of spatiotemporal rhythms in reaction-diffusion equations).
Using the phase description method for oscillatory convection,
we here demonstrate that uncoupled systems of oscillatory Hele-Shaw convection
can be in-phase synchronized by applying weak common noise.
Furthermore, we develop a method for obtaining the optimal spatial pattern of the common noise
to achieve synchronization.
The theoretical results are validated by direct numerical simulations of the oscillatory Hele-Shaw convection.
This paper is organized as follows.
In Sec.~\ref{sec:2},
we briefly review our phase description method for oscillatory convection in the Hele-Shaw cell.
In Sec.~\ref{sec:3},
we theoretically analyze common-noise-induced phase synchronization of the oscillatory convection.
In Sec.~\ref{sec:4},
we confirm our theoretical results by numerical analysis of the oscillatory convection.
Concluding remarks are given in Sec.~\ref{sec:5}.
\section{Phase description method for oscillatory convection} \label{sec:2}
In this section, in order to keep the presentation readable and self-contained,
we review the governing equations for oscillatory convection in the Hele-Shaw cell
and our phase description method for the oscillatory convection
with consideration of its application to common-noise-induced synchronization.
More details and other applications of the phase description method are given in Ref.~\cite{ref:kawamura13}.
\subsection{Dimensionless form of the governing equations}
The dynamics of the temperature field $T(x, y, t)$ in the Hele-Shaw cell
is described by the following dimensionless form
(see Ref.~\cite{ref:bernardini04} and also references therein):
\begin{align}
\frac{\partial}{\partial t} T(x, y, t)
= \nabla^2 T + J(\psi, T).
\label{eq:T}
\end{align}
The Laplacian and Jacobian are respectively given by
\begin{align}
\nabla^2 T
&= \left( \frac{\partial^2}{\partial x^2}
+ \frac{\partial^2}{\partial y^2} \right) T,
\\
J(\psi, T)
&= \frac{\partial \psi}{\partial x} \frac{\partial T}{\partial y}
- \frac{\partial \psi}{\partial y} \frac{\partial T}{\partial x}.
\end{align}
The stream function $\psi(x, y, t)$ is determined from the temperature field $T(x, y, t)$ as
\begin{align}
\nabla^2 \psi(x, y, t) = -{\rm Ra} \frac{\partial T}{\partial x},
\label{eq:P_T}
\end{align}
where the Rayleigh number is denoted by ${\rm Ra}$.
The system is defined in the unit square: $x \in [0, 1]$ and $y \in [0, 1]$.
The boundary conditions for the temperature field $T(x, y, t)$ are given by
\begin{align}
\left. \frac{\partial T(x, y, t)}{\partial x} \right|_{x = 0} =
\left. \frac{\partial T(x, y, t)}{\partial x} \right|_{x = 1} &= 0,
\label{eq:bcTx} \\
\Bigl. T(x, y, t) \Bigr|_{y = 0} = 1, \qquad
\Bigl. T(x, y, t) \Bigr|_{y = 1} &= 0,
\label{eq:bcTy}
\end{align}
where the temperature at the bottom ($y = 0$) is higher than that at the top ($y = 1$).
The stream function $\psi(x, y, t)$ satisfies
the Dirichlet zero boundary condition on both $x$ and $y$, i.e.,
\begin{align}
\Bigl. \psi(x, y, t) \Bigr|_{x = 0} =
\Bigl. \psi(x, y, t) \Bigr|_{x = 1} &= 0,
\label{eq:bcPx} \\
\Bigl. \psi(x, y, t) \Bigr|_{y = 0} =
\Bigl. \psi(x, y, t) \Bigr|_{y = 1} &= 0.
\label{eq:bcPy}
\end{align}
To simplify the boundary conditions in Eq.~(\ref{eq:bcTy}),
we consider the convective component $X(x, y, t)$ of the temperature field $T(x, y, t)$ as follows:
\begin{align}
T(x, y, t) = (1 - y) + X(x, y, t).
\label{eq:T_X}
\end{align}
Inserting Eq.~(\ref{eq:T_X}) into Eqs.~(\ref{eq:T})(\ref{eq:P_T}),
we derive the following equation for the convective component $X(x, y, t)$:
\begin{align}
\frac{\partial}{\partial t} X(x, y, t)
= \nabla^2 X + J(\psi, X) - \frac{\partial \psi}{\partial x},
\label{eq:X}
\end{align}
where the stream function $\psi(x, y, t)$ is determined by
\begin{align}
\nabla^2 \psi(x, y, t) = -{\rm Ra} \frac{\partial X}{\partial x}.
\label{eq:P_X}
\end{align}
Applying Eq.~(\ref{eq:T_X}) to Eqs.~(\ref{eq:bcTx})(\ref{eq:bcTy}),
we obtain the following boundary conditions for the convective component $X(x, y, t)$:
\begin{align}
\left. \frac{\partial X(x, y, t)}{\partial x} \right|_{x = 0} =
\left. \frac{\partial X(x, y, t)}{\partial x} \right|_{x = 1} &= 0,
\label{eq:bcXx} \\
\Bigl. X(x, y, t) \Bigr|_{y = 0} =
\Bigl. X(x, y, t) \Bigr|_{y = 1} &= 0.
\label{eq:bcXy}
\end{align}
That is, the convective component $X(x, y, t)$ satisfies
the Neumann zero boundary condition on $x$
and the Dirichlet zero boundary condition on $y$.
It should be noted that
this system does not possess translational or rotational symmetry owing to the boundary conditions
given by Eqs.~(\ref{eq:bcPx})(\ref{eq:bcPy})(\ref{eq:bcXx})(\ref{eq:bcXy}).
\subsection{Limit-cycle solution and its Floquet zero eigenfunctions}
The dependence of the Hele-Shaw convection on the Rayleigh number ${\rm Ra}$ is well known,
and the existence of stable limit-cycle solutions to Eq.~(\ref{eq:X}) is also well established
(see Ref.~\cite{ref:bernardini04} and also references therein).
In general, a stable limit-cycle solution to Eq.~(\ref{eq:X}),
which represents oscillatory convection in the Hele-Shaw cell,
can be described by
\begin{align}
X(x, y, t) = X_0\bigl( x, y, \Theta(t) \bigr), \qquad
\dot{\Theta}(t) = \Omega.
\label{eq:X_X0}
\end{align}
The phase and natural frequency are denoted by $\Theta$ and $\Omega$, respectively.
The limit-cycle solution $X_0(x, y, \Theta)$
possesses the following $2\pi$-periodicity in $\Theta$:
$X_0(x, y, \Theta + 2\pi) = X_0(x, y, \Theta)$.
Inserting Eq.~(\ref{eq:X_X0}) into Eqs.~(\ref{eq:X})(\ref{eq:P_X}),
we find that the limit-cycle solution $X_0(x, y, \Theta)$ satisfies
\begin{align}
\Omega \frac{\partial}{\partial \Theta} X_0(x, y, \Theta)
= \nabla^2 X_0 + J(\psi_0, X_0) - \frac{\partial \psi_0}{\partial x},
\label{eq:X0}
\end{align}
where the stream function $\psi_0(x, y, \Theta)$ is determined by
\begin{align}
\nabla^2 \psi_0(x, y, \Theta) = -{\rm Ra} \frac{\partial X_0}{\partial x}.
\label{eq:P0}
\end{align}
From Eq.~(\ref{eq:T_X}), the corresponding temperature field $T_0(x, y, \Theta)$ is given by
(e.g., see Fig.~\ref{fig:2} in Sec.~\ref{sec:4})
\begin{align}
T_0(x, y, \Theta) = (1 - y) + X_0(x, y, \Theta).
\label{eq:T0}
\end{align}
Let $u(x, y, \Theta, t)$ represent a small disturbance added to the limit-cycle solution $X_0(x, y, \Theta)$,
and consider a slightly perturbed solution
\begin{align}
X(x, y, t) = X_0\bigl( x, y, \Theta(t) \bigr) + u\bigl( x, y, \Theta(t), t \bigr).
\end{align}
Equation~(\ref{eq:X}) is then linearized with respect to $u(x, y, \Theta, t)$ as follows:
\begin{align}
\frac{\partial}{\partial t} u(x, y, \Theta, t)
= {\cal L}(x, y, \Theta) u(x, y, \Theta, t).
\label{eq:linear}
\end{align}
Like the limit-cycle solution $X_0(x, y, \Theta)$,
the disturbance $u(x, y, \Theta, t)$ satisfies
the Neumann zero boundary condition on $x$
and the Dirichlet zero boundary condition on $y$.
Note that ${\cal L}(x, y, \Theta)$ is time-periodic through $\Theta$.
Therefore, Eq.~(\ref{eq:linear}) is a Floquet-type system with a periodic linear operator.
Defining the inner product of two functions as
\begin{align}
\Pd{ u^\ast(x, y, \Theta), \, u(x, y, \Theta) }
= \frac{1}{2\pi} \int_0^{2\pi} d\Theta \int_0^1 dx \int_0^1 dy \,
u^\ast(x, y, \Theta) u(x, y, \Theta),
\label{eq:inner}
\end{align}
we introduce the adjoint operator of the linear operator ${\cal L}(x, y, \Theta)$ by
\begin{align}
\Pd{ u^\ast(x, y, \Theta), \, {\cal L}(x, y, \Theta) u(x, y, \Theta) }
= \Pd{ {\cal L}^\ast(x, y, \Theta) u^\ast(x, y, \Theta), \, u(x, y, \Theta) }.
\label{eq:operator}
\end{align}
As in $u(x, y, \Theta)$,
the function $u^\ast(x, y, \Theta)$ also satisfies
the Neumann zero boundary condition on $x$
and the Dirichlet zero boundary condition on $y$.
Details of the derivation of the adjoint operator ${\cal L}^\ast(x, y, \Theta)$
are given in Ref.~\cite{ref:kawamura13}.
In the following subsection,
we utilize the Floquet eigenfunctions associated with the zero eigenvalue, i.e.,
\begin{align}
{\cal L}(x, y, \Theta) U_0(x, y, \Theta)
&= 0,
\\
{\cal L}^\ast(x, y, \Theta) U_0^\ast(x, y, \Theta)
&= 0.
\end{align}
We note that the right zero eigenfunction $U_0(x, y, \Theta)$ can be chosen as
\begin{align}
U_0(x, y, \Theta) = \frac{\partial}{\partial \Theta} X_0(x, y, \Theta),
\label{eq:U0}
\end{align}
which is confirmed by differentiating Eq.~(\ref{eq:X0}) with respect to $\Theta$.
Using the inner product of Eq.~(\ref{eq:inner})
with the right zero eigenfunction of Eq.~(\ref{eq:U0}),
the left zero eigenfunction $U_0^\ast(x, y, \Theta)$ is normalized as
\begin{align}
\Pd{ U_0^\ast(x, y, \Theta), \, U_0(x, y, \Theta) }
= \frac{1}{2\pi} \int_0^{2\pi} d\Theta \int_0^1 dx \int_0^1 dy \,
U_0^\ast(x, y, \Theta) U_0(x, y, \Theta)
= 1.
\end{align}
Here, we can show that the following equation holds
(see also Refs.~\cite{ref:hoppensteadt97,ref:kawamura13,ref:kawamura11}):
\begin{align}
\frac{\partial}{\partial \Theta}
\left[ \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta) U_0(x, y, \Theta) \right] = 0.
\end{align}
Therefore, the normalization condition
is satisfied independently for each $\Theta$:
\begin{equation}
\int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta) U_0(x, y, \Theta) = 1.
\end{equation}
\subsection{Oscillatory convection under weak perturbations}
We now consider oscillatory Hele-Shaw convection
with a weak perturbation applied to the temperature field $T(x, y, t)$
described by the following equation:
\begin{align}
\frac{\partial}{\partial t} T(x, y, t)
= \nabla^2 T + J(\psi, T) + \epsilon p(x, y, t).
\label{eq:T_p}
\end{align}
The weak perturbation is denoted by $\epsilon p(x, y, t)$.
Inserting Eq.~(\ref{eq:T_X}) into Eq.~(\ref{eq:T_p}),
we obtain the following equation for the convective component $X(x, y, t)$:
\begin{align}
\frac{\partial}{\partial t} X(x, y, t)
= \nabla^2 X + J(\psi, X) - \frac{\partial \psi}{\partial x} + \epsilon p(x, y, t).
\label{eq:X_p}
\end{align}
Using the idea of the phase reduction~\cite{ref:kuramoto84},
we can derive a phase equation from the perturbed equation~(\ref{eq:X_p}).
Namely, we project the dynamics of the perturbed equation~(\ref{eq:X_p})
onto the unperturbed solution as
\begin{align}
\dot{\Theta}(t)
&= \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta)
\left[ \frac{\partial}{\partial t} X(x, y, t) \right]
\nonumber \\
&= \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta)
\left[ \nabla^2 X + J(\psi, X) - \frac{\partial \psi}{\partial x} + \epsilon p(x, y, t) \right]
\nonumber \\
&\simeq \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta)
\left[ \nabla^2 X_0 + J(\psi_0, X_0) - \frac{\partial \psi_0}{\partial x} + \epsilon p(x, y, t) \right]
\nonumber \\
&= \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta)
\left[ \Omega \frac{\partial}{\partial \Theta} X_0(x, y, \Theta) + \epsilon p(x, y, t) \right]
\nonumber \\
&= \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta)
\, \biggl[ \Omega \, U_0(x, y, \Theta) + \epsilon p(x, y, t) \biggr]
\nonumber \\
&= \Omega + \epsilon \int_0^1 dx \int_0^1 dy \, U_0^\ast(x, y, \Theta) p(x, y, t),
\end{align}
where we approximated $X(x, y, t)$ by the unperturbed limit-cycle solution $X_0(x, y, \Theta)$.
Therefore, the phase equation describing the oscillatory Hele-Shaw convection with a weak perturbation
is approximately obtained in the following form:
\begin{align}
\dot{\Theta}(t) = \Omega + \epsilon \int_0^1 dx \int_0^1 dy \, Z(x, y, \Theta) p(x, y, t),
\label{eq:Theta_p}
\end{align}
where the {\it phase sensitivity function} is defined as
(e.g., see Fig.~\ref{fig:2} in Sec.~\ref{sec:4})
\begin{equation}
Z(x, y, \Theta) = U_0^\ast(x, y, \Theta).
\end{equation}
Here, we note that
the phase sensitivity function $Z(x, y, \Theta)$ satisfies
the Neumann zero boundary condition on $x$
and the Dirichlet zero boundary condition on $y$, i.e.,
\begin{align}
\left. \frac{\partial Z(x, y, \Theta)}{\partial x} \right|_{x = 0} =
\left. \frac{\partial Z(x, y, \Theta)}{\partial x} \right|_{x = 1} &= 0,
\label{eq:bcZx} \\
\Bigl. Z(x, y, \Theta) \Bigr|_{y = 0} =
\Bigl. Z(x, y, \Theta) \Bigr|_{y = 1} &= 0.
\label{eq:bcZy}
\end{align}
As mentioned in Ref.~\cite{ref:kawamura13},
Eq.~(\ref{eq:Theta_p}) is a generalization of the phase equation for a perturbed limit-cycle oscillator
described by a finite-dimensional dynamical system
(see Refs.~\cite{ref:winfree80,ref:kuramoto84,ref:pikovsky01,
ref:hoppensteadt97,ref:izhikevich07,ref:ermentrout10,ref:ermentrout96,ref:brown04}).
However, reflecting the aspects of an infinite-dimensional dynamical system,
the phase sensitivity function $Z(x, y, \Theta)$ of the oscillatory Hele-Shaw convection
possesses infinitely many components that are continuously parameterized by the two variables, $x$ and $y$.
In this paper,
we further consider the case in which the perturbation is separable into a product of two functions:
\begin{align}
p(x, y, t) = a(x, y) q(t).
\label{eq:p_aq}
\end{align}
That is, the space-dependence and time-dependence of the perturbation are separated.
In this case, the phase equation~(\ref{eq:Theta_p}) can be written in the following form:
\begin{align}
\dot{\Theta}(t) = \Omega + \epsilon \zeta(\Theta) q(t),
\label{eq:Theta_q}
\end{align}
where the {\it effective phase sensitivity function} is given by
(e.g., see Fig.~\ref{fig:5} in Sec.~\ref{sec:4})
\begin{align}
\zeta(\Theta) = \int_0^1 dx \int_0^1 dy \, Z(x, y, \Theta) a(x, y).
\label{eq:zeta}
\end{align}
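For numerically tabulated $Z$ and $a$, the double integral of Eq.~(\ref{eq:zeta}) reduces to a 2D quadrature. A minimal trapezoid-rule sketch (the array shapes are our own convention, not from the paper):

```python
import numpy as np

def effective_sensitivity(Z, a, hx, hy):
    """zeta(Theta) = int_0^1 dx int_0^1 dy Z(x, y, Theta) a(x, y),
    evaluated with the 2D trapezoid rule on a uniform grid.
    Z has shape (nx, ny, n_theta); a has shape (nx, ny)."""
    wx = np.full(Z.shape[0], hx); wx[[0, -1]] = hx / 2.0   # trapezoid weights
    wy = np.full(Z.shape[1], hy); wy[[0, -1]] = hy / 2.0
    return np.einsum('xy,xyt->t', (wx[:, None] * wy[None, :]) * a, Z)
```

For instance, with $Z = \sin\Theta\,\cos(\pi x)\sin(\pi y)$ and $a = \cos(\pi x)\sin(\pi y)$, the quadrature recovers $\zeta(\Theta) = \sin\Theta/4$.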
We note that the form of Eq.~(\ref{eq:Theta_q}) is essentially the same as
that of the phase equation for a perturbed limit-cycle oscillator
described by a finite-dimensional dynamical system
(see Refs.~\cite{ref:winfree80,ref:kuramoto84,ref:pikovsky01,
ref:hoppensteadt97,ref:izhikevich07,ref:ermentrout10,ref:ermentrout96,ref:brown04}).
We also note that the effective phase sensitivity function $\zeta(\Theta)$
can also be considered as the collective phase sensitivity function
in the context of the collective phase description
of coupled individual dynamical elements exhibiting macroscopic rhythms~\cite{ref:kawamura11,ref:kawamura08,ref:kawamura10}.
\section{Theoretical analysis of the common-noise-induced synchronization} \label{sec:3}
In this section,
using the phase description method in Sec.~\ref{sec:2},
we analytically investigate common-noise-induced synchronization
between uncoupled systems of oscillatory Hele-Shaw convection.
In particular,
we theoretically determine the optimal spatial pattern of the common noise for achieving the noise-induced synchronization.
\subsection{Phase reduction and Lyapunov exponent}
We consider $N$ uncoupled systems of oscillatory Hele-Shaw convection subject to weak common noise
described by the following equation for $\sigma = 1, \cdots, N$:
\begin{align}
\frac{\partial}{\partial t} T_\sigma(x, y, t)
= \nabla^2 T_\sigma + J(\psi_\sigma, T_\sigma)
+ \epsilon a(x, y) \xi(t),
\label{eq:T_xi}
\end{align}
where the weak common noise is denoted by $\epsilon a(x, y) \xi(t)$.
Inserting Eq.~(\ref{eq:T_X}) into Eq.~(\ref{eq:T_xi}) for each $\sigma$,
we obtain the following equation for the convective component $X_\sigma(x, y, t)$:
\begin{align}
\frac{\partial}{\partial t} X_\sigma(x, y, t)
= \nabla^2 X_\sigma + J(\psi_\sigma, X_\sigma) - \frac{\partial \psi_\sigma}{\partial x}
+ \epsilon a(x, y) \xi(t).
\label{eq:X_xi}
\end{align}
As in Eq.~(\ref{eq:P_X}), the stream function of each system is determined by
\begin{align}
\nabla^2 \psi_\sigma(x, y, t)
= -{\rm Ra} \frac{\partial X_\sigma}{\partial x}.
\end{align}
The common noise $\xi(t)$
is assumed to be white Gaussian noise~\cite{ref:risken89,ref:gardiner97},
the statistics of which are given by
\begin{align}
\langle \xi(t) \rangle = 0, \qquad
\langle \xi(t) \xi(s) \rangle = 2\delta(t - s).
\label{eq:xi}
\end{align}
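In a discrete-time simulation, white Gaussian noise with the statistics of Eq.~(\ref{eq:xi}) is realized by independent samples of variance $2/\varDelta t$ per time step. A minimal sketch:

```python
import numpy as np

def white_gaussian_noise(n_steps, dt, rng):
    """Discrete-time samples of white Gaussian noise with <xi> = 0 and
    <xi(t) xi(s)> = 2 delta(t - s): each sample is held constant over one
    step of size dt and has variance 2/dt, so that the correlation
    xi_i xi_j approaches 2 delta(t_i - t_j) as dt -> 0."""
    return np.sqrt(2.0 / dt) * rng.standard_normal(n_steps)
```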
Here, we assume that the unperturbed oscillatory Hele-Shaw convection is a stable limit cycle
and that the noise intensity $\epsilon^2$ is sufficiently weak.
Then, as in Eq.~(\ref{eq:Theta_q}),
we can derive a phase equation from Eq.~(\ref{eq:X_xi}) as follows~\footnote{
Precisely speaking, owing to the noise,
the frequency of the oscillatory convection given in Eq.~(\ref{eq:Theta_xi})
can be slightly different from the natural frequency given in Eq.~(\ref{eq:X_X0});
however, this point is not essential in this paper
because Eq.~(\ref{eq:Lambda}) is independent of the value of the frequency.
The theory of stochastic phase reduction for ordinary limit-cycle oscillators has been intensively investigated
in Refs.~\cite{ref:yoshimura08,ref:teramae09,ref:nakao10,ref:goldobin10},
but extensions to partial differential equations have not been developed yet.
}:
\begin{align}
\dot{\Theta}_\sigma(t) = \Omega + \epsilon \zeta(\Theta_\sigma) \xi(t),
\label{eq:Theta_xi}
\end{align}
where the effective phase sensitivity function $\zeta(\Theta)$ is given by Eq.~(\ref{eq:zeta}).
Once the phase equation~(\ref{eq:Theta_xi}) is obtained,
the Lyapunov exponent characterizing the common-noise-induced synchronization
can be derived using the argument by Teramae and Tanaka~\cite{ref:teramae04}.
From Eqs.~(\ref{eq:xi})(\ref{eq:Theta_xi}),
the Lyapunov exponent,
which quantifies the exponential growth rate of small phase differences between the two systems,
can be written in the following form:
\begin{align}
\Lambda = -\frac{\epsilon^2}{2\pi} \int_0^{2\pi} d\Theta \, \Bigl[ \zeta'(\Theta) \Bigr]^2 \leq 0.
\label{eq:Lambda}
\end{align}
Here, we used the following abbreviation: $\zeta'(\Theta) = d\zeta(\Theta)/d\Theta$.
Equation~(\ref{eq:Lambda}) shows that
uncoupled systems of oscillatory Hele-Shaw convection
can be in-phase synchronized when driven by the weak common noise,
as long as the phase reduction approximation is valid.
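The prediction of Eq.~(\ref{eq:Lambda}) can be checked directly on the reduced phase equation~(\ref{eq:Theta_xi}). The sketch below integrates the phase together with the linearized (log) phase difference using the Heun scheme, which is consistent with the Stratonovich interpretation; the illustrative sensitivity $\zeta(\Theta) = \sin\Theta$, for which Eq.~(\ref{eq:Lambda}) gives $\Lambda = -\epsilon^2/2$, and all parameter values are our own, not from the paper.

```python
import numpy as np

def lyapunov_estimate(Omega, eps, zeta, dzeta, T, dt, rng):
    """Integrate dTheta/dt = Omega + eps zeta(Theta) xi(t) (Stratonovich)
    together with d(log dTheta)/dt = eps zeta'(Theta) xi(t), the linearized
    growth of a small phase difference, using the Heun scheme.
    Returns the finite-time Lyapunov exponent log|dTheta(T)/dTheta(0)| / T."""
    theta, ell = 0.0, 0.0
    for _ in range(int(T / dt)):
        xi = np.sqrt(2.0 / dt) * rng.standard_normal()   # <xi xi> = 2 delta
        f1 = Omega + eps * zeta(theta) * xi              # predictor slopes
        g1 = eps * dzeta(theta) * xi
        theta_p = theta + f1 * dt
        f2 = Omega + eps * zeta(theta_p) * xi            # corrector slopes
        g2 = eps * dzeta(theta_p) * xi
        theta += 0.5 * (f1 + f2) * dt
        ell += 0.5 * (g1 + g2) * dt
    return ell / T
```

Averaging a few realizations for $\Omega = 1$ and $\epsilon = 0.3$ reproduces $\Lambda \simeq -\epsilon^2/2 = -0.045$ within statistical error.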
In the following two subsections,
we develop a method for obtaining the optimal spatial pattern of the common noise
to achieve the noise-induced synchronization of the oscillatory convection.
\subsection{Spectral decomposition of the phase sensitivity function}
Considering the boundary conditions of $Z(x, y, \Theta)$, Eqs.~(\ref{eq:bcZx})(\ref{eq:bcZy}),
we introduce the following spectral transformation~\footnote{
Practically speaking, e.g., in numerical simulations,
infinite series are truncated at some sufficiently large finite number.
From a theoretical point of view,
such a truncation approximation is valid
because this system includes dissipation due to the Laplacian.
}:
\begin{align}
Z_{jk}(\Theta) = \int_0^1 dx \int_0^1 dy \, Z(x, y, \Theta) \cos(\pi j x) \sin(\pi k y),
\end{align}
for $j = 0, 1, 2, \cdots$ and $k = 1, 2, \cdots$.
The corresponding spectral decomposition of $Z(x, y, \Theta)$ is given by
\begin{align}
Z(x, y, \Theta) = 4 \sum_{j=0}^\infty \sum_{k=1}^\infty Z_{jk}(\Theta) \cos(\pi j x) \sin(\pi k y).
\label{eq:Z_CosSin}
\end{align}
By inserting Eq.~(\ref{eq:Z_CosSin}) into Eq.~(\ref{eq:zeta}),
the effective phase sensitivity function $\zeta(\Theta)$ can be written in the following form:
\begin{align}
\zeta(\Theta)
= \int_0^1 dx \int_0^1 dy \, Z(x, y, \Theta) a(x, y)
= \sum_{j=0}^\infty \sum_{k=1}^\infty b_{jk} Z_{jk}(\Theta),
\label{eq:zeta_double}
\end{align}
where the spectral transformation of $a(x, y)$ is defined as
\begin{align}
b_{jk} = 4 \int_0^1 dx \int_0^1 dy \, a(x, y) \cos(\pi j x) \sin(\pi k y).
\end{align}
The corresponding spectral decomposition of $a(x, y)$ is given by
\begin{align}
a(x, y) = \sum_{j=0}^\infty \sum_{k=1}^\infty b_{jk} \cos(\pi j x) \sin(\pi k y).
\end{align}
For convenience in the calculation below,
we rewrite the double sum in Eq.~(\ref{eq:zeta_double})
as the following single series:
\begin{align}
\zeta(\Theta)
= \sum_{j=0}^\infty \sum_{k=1}^\infty b_{jk} Z_{jk}(\Theta)
\equiv \sum_{n=0}^\infty s_n Q_n(\Theta).
\label{eq:zeta_single}
\end{align}
In Eq.~(\ref{eq:zeta_single}),
we introduced one-dimensional representations,
$s_n = b_{jk}$ and $Q_n(\Theta) = Z_{jk}(\Theta)$,
where the mapping between $n$ and $(j, k)$ is bijective.
Accordingly, we obtain the following quantity:
\begin{align}
\Bigl[ \zeta'(\Theta) \Bigr]^2
= \sum_{n=0}^\infty \sum_{m=0}^\infty s_n s_m Q_n'(\Theta) Q_m'(\Theta),
\label{eq:dzeta_squared}
\end{align}
where $Q_n'(\Theta) = dQ_n(\Theta)/d\Theta$.
From Eqs.~(\ref{eq:Lambda})(\ref{eq:dzeta_squared}),
the Lyapunov exponent normalized by the noise intensity, $-\Lambda / \epsilon^2$,
can be written in the following form:
\begin{align}
-\frac{\Lambda}{\epsilon^2}
= \frac{1}{2\pi} \int_0^{2\pi} d\Theta \, \Bigl[ \zeta'(\Theta) \Bigr]^2
= \sum_{n=0}^\infty \sum_{m=0}^\infty K_{nm} s_n s_m,
\label{eq:Lambda_K}
\end{align}
where each element of the symmetric matrix $\hat{K}$ is given by
\begin{align}
K_{nm} = \frac{1}{2\pi} \int_0^{2\pi} d\Theta \, Q_n'(\Theta) Q_m'(\Theta) = K_{mn}.
\label{eq:K}
\end{align}
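Given the mode functions $Q_n(\Theta)$ sampled on a uniform phase grid, the matrix elements of Eq.~(\ref{eq:K}) can be assembled with a spectral derivative in $\Theta$. A sketch under our own array conventions:

```python
import numpy as np

def build_K(Q):
    """Q has shape (n_modes, n_theta): Q[n, i] = Q_n(Theta_i) on a uniform
    grid over [0, 2 pi).  Each Q_n is 2 pi-periodic, so dQ_n/dTheta is
    computed spectrally; the phase average (1/2 pi) int dTheta then reduces
    to a plain mean over the grid, giving K_nm = <Q_n' Q_m'>."""
    n_theta = Q.shape[1]
    k = np.fft.fftfreq(n_theta, d=1.0 / n_theta)        # integer wavenumbers
    dQ = np.fft.ifft(1j * k * np.fft.fft(Q, axis=1), axis=1).real
    return dQ @ dQ.T / n_theta
```

For example, $Q_0 = \sin\Theta$ and $Q_1 = \cos 2\Theta$ give $K_{00} = 1/2$, $K_{11} = 2$, and $K_{01} = 0$.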
\subsection{Spectral components of the optimal spatial pattern} \label{subsec:3C}
By defining an infinite-dimensional column vector $\bd{s} \equiv ( s_0, s_1, s_2, \cdots )^{\rm T}$,
Eq.~(\ref{eq:Lambda_K}) can also be written as
\begin{align}
-\frac{\Lambda}{\epsilon^2}
= \sum_{n=0}^\infty \sum_{m=0}^\infty K_{nm} s_n s_m
= \bd{s} \cdot \hat{K} \bd{s},
\label{eq:Lyapunov}
\end{align}
which is a quadratic form.
Using the spectral representation of the normalized Lyapunov exponent, Eq.~(\ref{eq:Lyapunov}),
we seek the optimal spatial pattern of the common noise for the synchronization.
As a constraint, we introduce the following condition:
\begin{align}
\bd{s} \cdot \bd{s}
= \sum_{n=0}^\infty s_n^2
= \sum_{j=0}^\infty \sum_{k=1}^\infty b_{jk}^2
= 1.
\label{eq:unity}
\end{align}
That is, the total power of the spatial pattern is fixed at unity.
Under this constraint condition, we consider the maximization of Eq.~(\ref{eq:Lyapunov}).
For this purpose, we define the Lagrangian $F(\bd{s}, \lambda)$ as
\begin{align}
F(\bd{s}, \lambda)
= \sum_{n=0}^\infty \sum_{m=0}^\infty K_{nm} s_n s_m - \lambda \left( \sum_{n=0}^\infty s_n^2 - 1 \right),
\end{align}
where the Lagrange multiplier is denoted by $\lambda$.
Setting the derivatives of the Lagrangian $F(\bd{s}, \lambda)$ to zero,
we obtain the following equations:
\begin{align}
\frac{\partial F}{\partial s_l}
&= 2 \left( \sum_{m=0}^\infty K_{l m} s_m - \lambda s_l \right)
= 0, \qquad (\, l = 0, 1, 2, \cdots \,),
\\[1mm]
\frac{\partial F}{\partial \lambda}
&= - \left( \sum_{n=0}^\infty s_n^2 - 1 \right)
= 0,
\end{align}
which are equivalent to the eigenvalue problem described by
\begin{align}
\hat{K} \bd{s}_{\alpha} = \lambda_\alpha \bd{s}_\alpha, \qquad
\bd{s}_\alpha \cdot \bd{s}_\alpha = 1, \qquad
(\, \alpha = 0, 1, 2, \cdots \,).
\end{align}
These eigenvectors $\bd{s}_\alpha$ and the corresponding eigenvalues $\lambda_\alpha$ satisfy
\begin{align}
F(\bd{s}_\alpha, \lambda_\alpha) = \lambda_\alpha.
\end{align}
Because the matrix $\hat{K}$, which is defined in Eq.~(\ref{eq:K}), is symmetric,
the eigenvalues $\lambda_\alpha$ are real numbers.
Consequently, under the constraint condition given by Eq.~(\ref{eq:unity}),
the optimal vector that maximizes the quadratic form of Eq.~(\ref{eq:Lyapunov})
coincides with the eigenvector associated with the largest eigenvalue, i.e.,
\begin{align}
\lambda_{\rm opt} = \max_{\alpha} \, \lambda_\alpha.
\end{align}
Therefore, the optimal spatial pattern $a_{\rm opt}(x, y)$ can be written in the following form:
\begin{align}
a_{\rm opt}(x, y) = \sum_{j=0}^\infty \sum_{k=1}^\infty b_{\rm opt}(j, k) \cos(\pi j x) \sin(\pi k y),
\label{eq:aopt}
\end{align}
where the coefficients $b_{\rm opt}(j, k)$ in the double series
correspond to the elements of the optimal vector $\bd{s}_{\rm opt}$ associated with $\lambda_{\rm opt}$.
From Eq.~(\ref{eq:Lyapunov}), the Lyapunov exponent is then given by
\begin{align}
\Lambda_{\rm opt} = - \epsilon^2 \lambda_{\rm opt}.
\end{align}
Finally, we note that this optimization method can also be considered as
the principal component analysis~\cite{ref:jolliffe02}
of the phase-derivative of the phase sensitivity function, $\partial_\Theta Z(x, y, \Theta)$.
\section{Numerical analysis of the common-noise-induced synchronization} \label{sec:4}
In this section,
to illustrate the theory developed in Sec.~\ref{sec:3},
we numerically investigate common-noise-induced synchronization
between uncoupled Hele-Shaw cells exhibiting oscillatory convection.
The numerical simulation method is summarized in Ref.~\footnote{
We applied the pseudospectral method,
which is composed of
a sine expansion with $128$ modes for the Dirichlet zero boundary condition
and a cosine expansion with $128$ modes for the Neumann zero boundary condition.
The fourth-order Runge-Kutta method with integrating factor
using a time step $\varDelta t = 10^{-4} \,\sim\, 10^{-6}$ (mainly, $\varDelta t = 10^{-4}$)
and the Heun method with integrating factor
using a time step $\varDelta t = 10^{-5}$
were applied for the deterministic and stochastic (Langevin-type) equations, respectively.
}.
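The integrating-factor idea mentioned in the footnote can be illustrated on a single scalar mode: the stiff linear part (e.g., $-k^2$ from the Laplacian in spectral space) is removed exactly by an exponential substitution, and classical RK4 acts only on the nonlinearity. The sketch below follows the standard IFRK4 construction, not necessarily the authors' exact implementation.

```python
import numpy as np

def ifrk4_step(v, dt, L, N):
    """One integrating-factor RK4 step for dv/dt = L v + N(v) with diagonal
    (or scalar) linear part L.  The substitution w(t) = exp(-L t) v(t)
    eliminates the stiff linear term, and classical RK4 is applied to w."""
    E, E2 = np.exp(L * dt), np.exp(L * dt / 2.0)   # full- and half-step factors
    k1 = dt * N(v)
    k2 = dt * N(E2 * (v + k1 / 2.0))
    k3 = dt * N(E2 * v + k2 / 2.0)
    k4 = dt * N(E * v + E2 * k3)
    return E * v + (E * k1 + 2.0 * E2 * (k2 + k3) + k4) / 6.0
```

As a check, for the scalar test problem $\dot{v} = -v + v^2$ with $v(0) = 1/2$, the exact solution is $v(t) = 1/(1 + e^t)$, which the scheme reproduces to fourth-order accuracy.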
\subsection{Spectral decomposition of the convective component}
Considering the boundary conditions of the convective component $X(x, y, t)$, Eqs.~(\ref{eq:bcXx})(\ref{eq:bcXy}),
we introduce the following spectral transformation:
\begin{align}
H_{jk}(t) = \int_0^1 dx \int_0^1 dy \, X(x, y, t) \cos(\pi j x) \sin(\pi k y),
\end{align}
for $j = 0, 1, 2, \cdots$ and $k = 1, 2, \cdots$.
The corresponding spectral decomposition of the convective component $X(x, y, t)$ is given by
\begin{align}
X(x, y, t) = 4 \sum_{j=0}^\infty \sum_{k=1}^\infty H_{jk}(t) \cos(\pi j x) \sin(\pi k y).
\end{align}
To visualize the limit-cycle orbit in the infinite-dimensional state space,
we project the limit-cycle solution $X_0(x, y, \Theta)$ onto the $H_{11}$-$H_{22}$ plane as
\begin{align}
H_{11}(\Theta)
&= \int_0^1 dx \int_0^1 dy \, X_0(x, y, \Theta) \cos(\pi x) \sin(\pi y),
\\
H_{22}(\Theta)
&= \int_0^1 dx \int_0^1 dy \, X_0(x, y, \Theta) \cos(2 \pi x) \sin(2 \pi y).
\end{align}
\subsection{Limit-cycle solution and phase sensitivity function}
The initial values were prepared
so that the system exhibits single cellular oscillatory convection.
The Rayleigh number was fixed at ${\rm Ra} = 480$,
which gives the natural frequency $\Omega \simeq 622$,
i.e., the oscillation period $2\pi/\Omega \simeq 0.010$.
Figure~\ref{fig:1} shows the limit-cycle orbit of the oscillatory convection
projected onto the $H_{11}$-$H_{22}$ plane,
obtained from direct numerical simulations of the dynamical equation~(\ref{eq:X}).
Snapshots of the limit-cycle solution $X_0(x, y, \Theta)$ and other associated functions,
$T_0(x, y, \Theta)$ and $Z(x, y, \Theta)$, are shown in Fig.~\ref{fig:2},
where the phase variable $\Theta$ is discretized using $512$ grid points.
We note that Fig.~\ref{fig:1} and Fig.~\ref{fig:2} are essentially
reproductions of our previous results given in Ref.~\cite{ref:kawamura13}.
Details of the numerical method for obtaining the phase sensitivity function $Z(x, y, \Theta)$
are given in Refs.~\cite{ref:kawamura13,ref:kawamura11}
(see also Refs.~\cite{ref:hoppensteadt97,ref:izhikevich07,ref:ermentrout10,ref:ermentrout96,ref:brown04}).
As seen in Fig.~\ref{fig:2},
the phase sensitivity function $Z(x, y, \Theta)$ is spatially localized.
Namely, the absolute values of the phase sensitivity function $Z(x, y, \Theta)$
in the top-right and bottom-left corner regions of the system
are much larger than those in the other regions;
this fact reflects the dynamics of the spatial pattern of the convective component $X_0(x, y, \Theta)$.
As mentioned in Ref.~\cite{ref:kawamura13},
the phase sensitivity function $Z(x, y, \Theta)$ in this case possesses the following symmetry.
For each $\Theta$,
the limit-cycle solution $X_0(x, y, \Theta)$ and the phase sensitivity function $Z(x, y, \Theta)$,
shown in Fig.~\ref{fig:2},
are anti-symmetric with respect to the center of the system, i.e.,
\begin{align}
X_0(-x_\delta, -y_\delta, \Theta)
&= -X_0(x_\delta, y_\delta, \Theta),
\\
Z(-x_\delta, -y_\delta, \Theta)
&= -Z(x_\delta, y_\delta, \Theta),
\label{eq:anti-symmetric}
\end{align}
where $x_\delta = x - 1/2$ and $y_\delta = y - 1/2$.
Therefore, for a spatial pattern $a_{\rm s}(x, y)$
that is symmetric with respect to the center of the system,
\begin{align}
a_{\rm s}(-x_\delta, -y_\delta)
= a_{\rm s}(x_\delta, y_\delta),
\end{align}
the corresponding effective phase sensitivity function $\zeta(\Theta)$ becomes zero, i.e.,
\begin{align}
\zeta(\Theta)
= \int_0^1 dx \int_0^1 dy \, Z(x, y, \Theta) a_{\rm s}(x, y)
= 0.
\label{eq:zeta_s}
\end{align}
That is, such symmetric perturbations do not affect the phase of the oscillatory convection.
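This symmetry argument can be checked numerically with the cosine-sine basis: the mode $\cos(\pi j x)\sin(\pi k y)$ is anti-symmetric about the center for even $j + k$ and symmetric for odd $j + k$. A small sketch with synthetic stand-ins for $Z$ and $a_{\rm s}$ (illustrative basis modes, not the actual Hele-Shaw fields):

```python
import numpy as np

# Midpoint grid on the unit square; reversing an axis maps x -> 1 - x exactly.
n = 200
x = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(x, x, indexing='ij')

# Synthetic stand-ins from the cos-sin basis (not the actual fields):
Z_like = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # j + k even: anti-symmetric
a_sym = np.cos(np.pi * X) * np.sin(2 * np.pi * Y)       # j + k odd:  symmetric

assert np.allclose(Z_like[::-1, ::-1], -Z_like)   # Z(-xd, -yd) = -Z(xd, yd)
assert np.allclose(a_sym[::-1, ::-1], a_sym)      # a_s(-xd, -yd) = a_s(xd, yd)

overlap = (Z_like * a_sym).mean()   # midpoint rule for the overlap integral
```

The overlap integral, as in Eq.~(\ref{eq:zeta_s}), vanishes to machine precision.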
\subsection{Optimal spatial pattern of the common noise}
The optimal spatial pattern is obtained as the best combination of single-mode spatial patterns,
i.e., Eq.~(\ref{eq:aopt}).
Thus, we first consider the following single-mode spatial pattern:
\begin{align}
a(x, y) = a_{(j, k)}(x, y) \equiv \cos(\pi j x) \sin(\pi k y).
\end{align}
Then, the effective phase sensitivity function is given by the following single spectral component:
\begin{align}
\zeta(\Theta)
= \int_0^1 dx \int_0^1 dy \, Z(x, y, \Theta) \cos(\pi j x) \sin(\pi k y)
= Z_{jk}(\Theta).
\label{eq:Zjk}
\end{align}
From Eq.~(\ref{eq:Lambda}),
the Lyapunov exponent for the single-mode spatial pattern can be written in the following form:
\begin{align}
\Lambda(j, k) = -\frac{\epsilon^2}{2\pi} \int_0^{2\pi} d\Theta \, \Bigl[ Z_{jk}'(\Theta) \Bigr]^2,
\end{align}
where $Z_{jk}'(\Theta) = dZ_{jk}(\Theta)/d\Theta$.
Figure~\ref{fig:3}(a) shows the normalized Lyapunov exponent for single-mode spatial patterns,
i.e., $-\Lambda(j, k) / \epsilon^2$.
Owing to the anti-symmetry of the phase sensitivity function, given in Eq.~(\ref{eq:anti-symmetric}),
the normalized Lyapunov exponent $-\Lambda(j, k) / \epsilon^2$ exhibits a checkerboard pattern,
namely, $-\Lambda(j, k)/ \epsilon^2 = 0$ whenever $j + k$ is odd.
The maximum of $-\Lambda(j, k) / \epsilon^2$ is located at $(j, k) = (10, 4)$;
under the condition of $j = k$,
the maximum of $-\Lambda(j, k) / \epsilon^2$ is located at $(j, k) = ( 4, 4)$.
The single-mode spatial patterns,
$a_{(10, 4)}(x, y)$,
$a_{( 4, 4)}(x, y)$, and
$a_{( 9, 4)}(x, y)$,
are shown in Figs.~\ref{fig:4}(b)(c)(d), respectively.
We note that
$a_{(10, 4)}(x, y)$ and $a_{( 4, 4)}(x, y)$ are anti-symmetric with respect to the center of the system,
whereas $a_{( 9, 4)}(x, y)$ is symmetric.
These spatial patterns are used in the numerical simulations performed below.
We now consider the optimal spatial pattern.
Figure~\ref{fig:3}(b) shows the spectral components of the optimal spatial pattern, i.e., $b_{\rm opt}(j, k)$,
obtained by the optimization method developed in Sec.~\ref{subsec:3C};
Figure~\ref{fig:4}(a) shows the corresponding optimal spatial pattern, i.e., $a_{\rm opt}(x, y)$,
given by Eq.~(\ref{eq:aopt}).
As seen in Fig.~\ref{fig:3},
when the normalized Lyapunov exponent for a single-mode spatial pattern, $-\Lambda(j, k) / \epsilon^2$, is large,
the absolute value of the optimal spectral components, $|b_{\rm opt}(j, k)|$, is also large.
As seen in Fig.~\ref{fig:4}(a),
the optimal spatial pattern $a_{\rm opt}(x, y)$ is similar to
the snapshots of the phase sensitivity function $Z(x, y, \Theta)$ shown in Fig.~\ref{fig:2}.
In fact, as mentioned in Sec.~\ref{subsec:3C},
the optimal spatial pattern $a_{\rm opt}(x, y)$ corresponds to
the first principal component of $\partial_\Theta Z(x, y, \Theta)$.
Reflecting the anti-symmetry of the phase sensitivity function, Eq.~(\ref{eq:anti-symmetric}),
the optimal spatial pattern $a_{\rm opt}(x, y)$ is also anti-symmetric with respect to the center of the system.
\subsection{Effective phase sensitivity function}
Figure~\ref{fig:5} shows the effective phase sensitivity functions $\zeta(\Theta)$
for the spatial patterns shown in Fig.~\ref{fig:4}.
When the normalized Lyapunov exponent $-\Lambda(j, k) / \epsilon^2$ is large,
the amplitude of the corresponding effective phase sensitivity function $\zeta(\Theta)$ is also large.
For the spatial pattern $a_{( 9, 4)}(x, y)$,
which is symmetric with respect to the center of the system,
the effective phase sensitivity function becomes zero, $\zeta(\Theta) = 0$,
as shown in Eq.~(\ref{eq:zeta_s}).
To confirm the theoretical results shown in Fig.~\ref{fig:5},
we obtain the effective phase sensitivity function $\zeta(\Theta)$
by direct numerical simulations of Eq.~(\ref{eq:X_p}) with Eq.~(\ref{eq:p_aq}) as follows:
we measure the phase response of the oscillatory convection
by applying a weak impulsive perturbation with the spatial pattern $a(x, y)$
to the limit-cycle solution $X_0(x, y, \Theta)$ with the phase $\Theta$;
then, normalizing the phase response curve by the weak impulse intensity $\epsilon$,
we obtain the effective phase sensitivity function $\zeta(\Theta)$.
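The impulse-response protocol itself can be sketched on a toy ODE oscillator. Below, a radially isochronous Stuart-Landau-type model stands in for the convection system (all names and parameters are illustrative, not from the paper); its exact sensitivity to kicks along $x$ is $-\sin\Theta$, which the measurement recovers:

```python
import numpy as np

def measure_prc(theta, eps, omega=1.0, T=10.0, dt=1e-3):
    """Direct phase-response measurement on a toy planar oscillator with a
    circular limit cycle and radial isochrons (r' = r(1 - r^2), phi' = omega):
    kick the x-coordinate by eps at phase theta, let the orbit relax back to
    the cycle by plain Euler stepping, then read off the asymptotic phase
    shift relative to the unperturbed phase, normalized by eps."""
    n_steps = int(round(T / dt))
    x, y = np.cos(theta) + eps, np.sin(theta)
    for _ in range(n_steps):
        r2 = x * x + y * y
        x, y = (x + dt * (x * (1.0 - r2) - omega * y),
                y + dt * (y * (1.0 - r2) + omega * x))
    unperturbed = theta + omega * n_steps * dt
    return np.angle((x + 1j * y) * np.exp(-1j * unperturbed)) / eps
```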
The effective phase sensitivity function $\zeta(\Theta)$
obtained by direct numerical simulations with impulse intensity $\epsilon$
is compared with the theoretical curves in Fig.~\ref{fig:6}.
The simulation results agree quantitatively with the theory~\footnote{
When the impulsive perturbation is sufficiently weak,
the phase response curve depends linearly on the impulse intensity $\epsilon$.
Therefore, the phase response curve normalized by the impulse intensity $\epsilon$
converges to the effective phase sensitivity function $\zeta(\Theta)$ as $\epsilon$ decreases.
As shown in Fig.~\ref{fig:6}(d), when the impulsive perturbation is not weak,
the dependence of the phase response curve on the impulse intensity $\epsilon$ becomes nonlinear.
In general, when the impulsive perturbation is not weak, the phase response curve is not equal to zero,
even though the effective phase sensitivity function is equal to zero, $\zeta(\Theta) = 0$.
We also note that the linear dependence region of the phase response curve on the impulse
is generally dependent on the spatial pattern $a(x, y)$ of the impulse.
}.
\subsection{Common-noise-induced synchronization}
In this subsection,
we demonstrate the common-noise-induced synchronization
between uncoupled Hele-Shaw cells exhibiting oscillatory convection
by direct numerical simulations of the stochastic (Langevin-type) partial differential equation~(\ref{eq:X_xi}).
Theoretical values of
both the Lyapunov exponents $\Lambda$ for several spatial patterns $a(x, y)$
with the common noise intensity $\epsilon^2 = 10^{-6}$
and the corresponding relaxation time $1 / |\Lambda|$ toward the synchronized state
are summarized in Table~\ref{table:1}.
Figure~\ref{fig:7} shows
the time evolution of the phase differences $| \Theta_1 - \Theta_\sigma |$
when the common noise intensity is $\epsilon^2 = 10^{-6}$.
The initial phase values are $\Theta_\sigma(t = 0) = 2 \pi (\sigma - 1) / 128$
for $\sigma = 1, \cdots, 12$.
Figure~\ref{fig:8} shows
the time evolution of $H_{22}^{(\sigma)}(t)$,
which corresponds to Fig.~\ref{fig:7}.
The relaxation times estimated from the simulation results agree reasonably well with the theory~\footnote{
Theoretically speaking,
the phase differences shown in Fig.~\ref{fig:7}(d) should be constant
because the effective phase sensitivity function is equal to zero, $\zeta(\Theta) = 0$, for this case.
As shown in Fig.~\ref{fig:6}(d),
when the perturbation is not sufficiently weak, the phase response curve is not equal to zero;
this higher order effect causes the slight variations shown in Fig.~\ref{fig:7}(d).
}.
As seen in Fig.~\ref{fig:7} and Fig.~\ref{fig:8},
the relaxation time for the optimal spatial pattern $a_{\rm opt}(x, y)$
is actually much smaller than those for the single-mode spatial patterns.
Among the single-mode patterns,
the relaxation time for the single-mode spatial pattern $a_{(10, 4)}(x, y)$
is also smaller than those for the other single-mode spatial patterns,
$a_{( 4, 4)}(x, y)$ and $a_{( 9, 4)}(x, y)$.
We also note that
the time evolution of both $| \Theta_1 - \Theta_\sigma |$ and $H_{22}^{(\sigma)}(t)$ for $a_{(10, 4)}(x, y)$
is significantly different from that for $a_{( 9, 4)}(x, y)$
in spite of the similarity between the two spatial patterns of the neighboring modes;
this difference results from the difference in symmetry with respect to the center,
as shown in Eq.~(\ref{eq:zeta_s}).
Figure~\ref{fig:9} shows a quantitative comparison of the Lyapunov exponents
between direct numerical simulations and the theory
for the case of the optimal spatial pattern $a_{\rm opt}(x, y)$.
The initial phase values are $\Theta_\sigma(t = 0) = 2 \pi (\sigma - 1) / 64$ for $\sigma = 1, 2$,
i.e., the initial phase difference is $| \Theta_1(t = 0) - \Theta_2(t = 0) | \simeq 10^{-1}$.
The results of direct numerical simulations are averaged over $100$ samples for different noise realizations.
The simulation results quantitatively agree with the theory.
Figure~\ref{fig:10} shows
the global stability of the common-noise-induced synchronization of oscillatory convection
for the case of the optimal spatial pattern $a_{\rm opt}(x, y)$;
namely, it shows that the synchronization is eventually achieved from arbitrary initial phase differences,
i.e., $| \Theta_1(t = 0) - \Theta_\sigma(t = 0) | \in [0, \pi]$.
Although the Lyapunov exponent $\Lambda$ based on the linearization of Eq.~(\ref{eq:Theta_xi})
quantifies only the local stability of a small phase difference,
this global stability holds true, as long as the phase reduction approximation is valid,
for any spatial pattern $a(x, y)$ with a non-zero Lyapunov exponent;
such an exponent is necessarily negative, $\Lambda < 0$, as found from Eq.~(\ref{eq:Lambda}).
The global stability can be proved by the theory developed in Ref.~\cite{ref:nakao07},
i.e., by analyzing the Fokker-Planck equation equivalent to the Langevin-type phase equation~(\ref{eq:Theta_xi});
in addition, the effect of the independent noise can also be included.
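At the level of the reduced phase equation, the mechanism can be illustrated with a minimal simulation: two uncoupled phase oscillators driven by the same white noise through an effective sensitivity $\zeta(\Theta)$. A single sine mode is assumed for $\zeta$ purely for illustration; the Lyapunov exponent then has the closed form $\Lambda = -(\epsilon^2/2)\langle \zeta'(\Theta)^2 \rangle = -\epsilon^2/4$, which the simulation should reproduce.

```python
import numpy as np

# Two uncoupled phase oscillators subject to common white noise:
#   dTheta = omega dt + eps * zeta(Theta) dW,   zeta(Theta) = sin(Theta).
# For this zeta, Lambda = -(eps^2/2) * <zeta'(Theta)^2> = -eps^2/4.
rng = np.random.default_rng(0)
omega, eps, dt, T, n_pairs = 1.0, 0.5, 0.01, 100.0, 100
n_steps = int(T/dt)
zeta = np.sin

th1 = np.zeros(n_pairs)
th2 = np.full(n_pairs, 1e-3)          # small initial phase differences
for _ in range(n_steps):
    dW = np.sqrt(dt)*rng.standard_normal(n_pairs)   # common noise per pair
    th1 = th1 + omega*dt + eps*zeta(th1)*dW
    th2 = th2 + omega*dt + eps*zeta(th2)*dW

# Estimate Lambda from the decay of the phase differences.
delta = np.abs(np.angle(np.exp(1j*(th1 - th2))))
Lambda_sim = np.mean((np.log(delta) - np.log(1e-3))/T)
Lambda_theory = -eps**2/4
print(Lambda_sim, Lambda_theory)      # Lambda_sim should be close to -0.0625
```

The negative exponent confirms that identical noise realizations contract the phase difference, which is the elementary version of the synchronization observed in Figs.~\ref{fig:7}--\ref{fig:9}.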
\section{Concluding remarks} \label{sec:5}
Our investigations in this paper are summarized as follows.
In Sec.~\ref{sec:2},
we briefly reviewed our phase description method for oscillatory convection in the Hele-Shaw cell
with consideration of its application to common-noise-induced synchronization.
In Sec.~\ref{sec:3},
we analytically investigated common-noise-induced synchronization of oscillatory convection
using the phase description method.
In particular, we theoretically determined the optimal spatial pattern of the common noise
for the oscillatory Hele-Shaw convection.
In Sec.~\ref{sec:4},
we numerically investigated common-noise-induced synchronization of oscillatory convection;
the direct numerical simulation successfully confirmed the theoretical predictions.
The key quantity of the theory developed in this paper is the phase sensitivity function $Z(x, y, \Theta)$.
We therefore describe an experimental procedure to obtain the phase sensitivity function $Z(x, y, \Theta)$.
As in Eq.~(\ref{eq:Z_CosSin}),
the phase sensitivity function $Z(x, y, \Theta)$
can be decomposed into the spectral components $Z_{jk}(\Theta)$,
which are the effective phase sensitivity functions
for the single-mode spatial patterns $a_{(j,k)}(x, y)$ as shown in Eq.~(\ref{eq:Zjk}).
In a manner similar to the direct numerical simulations yielding Fig.~\ref{fig:6},
the effective phase sensitivity function $Z_{jk}(\Theta)$
for each single-mode spatial pattern $a_{(j,k)}(x, y)$ can also be experimentally measured.
Therefore, in general, the phase sensitivity function $Z(x, y, \Theta)$
can be constructed from a sufficiently large set of such $Z_{jk}(\Theta)$.
Once the phase sensitivity function $Z(x, y, \Theta)$ is obtained,
the optimization method for common-noise-induced synchronization can also be applied in experiments.
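The reconstruction step can be sketched as follows at a fixed phase $\Theta$: each measured component $Z_{jk}(\Theta)$ is the inner product of $Z(x, y, \Theta)$ with an orthonormal single-mode pattern, and summing the components against the same patterns recovers the field. An orthonormal cosine basis on the unit square is assumed here purely for illustration, standing in for the actual modes fixed by the Hele-Shaw boundary conditions.

```python
import numpy as np

# Midpoint grid on the unit square.
nx = ny = 32
x = (np.arange(nx) + 0.5)/nx
y = (np.arange(ny) + 0.5)/ny
X, Y = np.meshgrid(x, y, indexing="ij")

def mode(j, k):
    # Single-mode spatial pattern, normalized to unit L2 norm on the grid.
    a = np.cos(j*np.pi*X)*np.cos(k*np.pi*Y)
    return a/np.sqrt(np.mean(a*a))

# A field built from a few modes plays the role of Z(x, y, Theta) at fixed Theta.
Z_true = 0.7*mode(1, 2) - 0.3*mode(3, 1) + 0.1*mode(2, 2)

# "Measure" the components Z_jk as projections, then sum them back up.
Z_rec = np.zeros_like(Z_true)
for j in range(4):
    for k in range(4):
        a = mode(j, k)
        Zjk = np.mean(Z_true*a)       # projection = effective sensitivity
        Z_rec += Zjk*a

print(np.max(np.abs(Z_rec - Z_true)))  # ~0, since the basis is orthonormal
```

In practice the sum is truncated at a finite set of modes, and the truncation error is controlled by how quickly the spectral components decay.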
Finally, we remark that both the phase description method for spatiotemporal rhythms
and the optimization method for common-noise-induced synchronization have broad applicability;
these methods are not restricted to the oscillatory Hele-Shaw convection analyzed in this paper.
For example, the combination of these methods can be applied to
common-noise-induced phase synchronization of spatiotemporal rhythms
in reaction-diffusion systems of excitable and/or heterogeneous media.
Furthermore, as mentioned above, the optimization method for common-noise-induced synchronization
could also be applied in experimental systems, such as
the photosensitive Belousov-Zhabotinsky reaction~\cite{ref:hildebrand03} and
the liquid crystal spatial light modulator~\cite{ref:rogers04}.
\begin{acknowledgments}
Y.K. is grateful to members of both
the Earth Evolution Modeling Research Team and
the Nonlinear Dynamics and Its Application Research Team
at IFREE/JAMSTEC for fruitful comments.
Y.K. is also grateful for financial support by
JSPS KAKENHI Grant Number 25800222.
H.N. is grateful for financial support by
JSPS KAKENHI Grant Numbers 25540108 and 22684020,
CREST Kokubu project of JST, and
FIRST Aihara project of JSPS.
\end{acknowledgments}
\section{Introduction} \label{sect1}
Next generation wireless communication networks are required to provide ubiquitous and high data rate communication with guaranteed quality of service (QoS). These requirements have led to a tremendous need for energy in both transmitter(s) and receiver(s). In practice, portable mobile devices are
typically powered by capacity-limited batteries which require frequent recharging. Moreover, battery technology has advanced slowly over the past decades, and the battery capacities expected in the near future are unlikely to remedy this situation. Consequently, energy-harvesting-based mobile communication system design has become
a prominent approach for addressing this issue. In particular, it enables self-sustainability for energy
limited communication networks. In addition to conventional energy harvesting sources such as solar, wind, and biomass, wireless power transfer has been proposed as an emerging alternative energy source, where the receivers scavenge energy from the ambient radio frequency (RF) signals \cite{CN:Shannon_meets_tesla}--\nocite{JR:MIMO_WIPT,JR:WIPT_fullpaper}\cite{JR:Kwan_secure_imperfect}. In fact, wireless power transfer technology not only eliminates the need of power cords and chargers, but also facilitates one-to-many charging due to the broadcast nature of wireless channels. More importantly, it enables the possibility of simultaneous wireless information and power transfer (SWIPT) leading to many interesting and challenging new research problems which have to be solved to bridge the gap between theory and practice. In \cite{CN:Shannon_meets_tesla}, the authors investigated the fundamental trade-off between harvested energy and wireless channel capacity across a pair of coupled inductor circuit in the presence of additive
white Gaussian noise. Then, in \cite{JR:MIMO_WIPT}, the study was extended to multiple antenna wireless broadcast systems.
In \cite{JR:WIPT_fullpaper}, the energy efficiency of multi-carrier systems with SWIPT was revealed. Specifically, it was shown in \cite{JR:WIPT_fullpaper} that integrating an energy harvester into a conventional information receiver improves the energy efficiency of a communication network. In \cite{JR:Kwan_secure_imperfect}, robust beamforming design for SWIPT systems with physical layer security was investigated. The results in \cite{CN:Shannon_meets_tesla}--\cite{JR:Kwan_secure_imperfect} indicate that both the information rate and the amount of harvested energy at the receivers can be significantly increased at the expense of an increase in the transmit power. However, despite the promising results in the literature, the performance of wireless power/energy transfer systems is still limited by the distance between the transmitter and the receiver due to the high signal attenuation associated with path loss.
Coordinated multipoint (CoMP) transmission is an important technique for extending service coverage, improving spectral efficiency, and mitigating interference \cite{JR:comp}\nocite{JR:comp2,JR:limited_backhaul,JR:Quek}--\cite{CN:Wei_yu_sparse_BF}. A possible deployment scenario for CoMP networks is to split the functionalities of the base stations between a central processor (CP) and a set of remote
radio heads (RRHs). In particular, the CP performs the power-hungry and computationally intensive baseband signal processing while the RRHs are responsible for all RF operations such as analog filtering and power amplification. Besides, the RRHs are distributed across the network and connected to the CP via backhaul links. This system architecture is known as a cloud computing network. As a result, the CoMP system architecture inherently
provides spatial diversity for combating path loss and shadowing. It has been shown that a significant system performance gain can be achieved when full cooperation is enabled in CoMP systems \cite{JR:comp,JR:comp2}. However, in practice, the enormous signalling overhead incurred by the information exchange between the CP and the RRHs may be prohibitive when the capacity of the backhaul links is limited. Hence, resource allocation for CoMP networks with finite backhaul capacity has attracted much
attention in the research community \cite{JR:limited_backhaul}--\cite{CN:Wei_yu_sparse_BF}. In \cite{JR:limited_backhaul}, the authors studied the energy efficiency of CoMP multi-cell networks with capacity constrained backhaul links. In \cite{JR:Quek} and \cite{CN:Wei_yu_sparse_BF}, iterative sparse beamforming algorithms were proposed to reduce the load of the backhaul links while providing reliable communication to the users. However, the energy sources of the receivers
in \cite{JR:comp}\nocite{JR:comp2,JR:limited_backhaul,JR:Quek}--\cite{CN:Wei_yu_sparse_BF} were assumed to be perpetual and this assumption may not be valid for power-constrained portable
devices. On the
other hand, the signals transmitted by the RRHs could be exploited for energy harvesting by
the power-constrained receivers for extending
their lifetimes. However, the resource allocation algorithm design for CoMP SWIPT systems has not been solved sofar, and will be tackled in this paper.
Motivated by the aforementioned observations, we formulate the resource allocation algorithm design for multiuser CoMP communication networks with SWIPT as a non-convex optimization problem. We jointly minimize the total network transmit power and the maximum capacity consumption per backhaul link while ensuring quality of service (QoS) for reliable communication and efficient wireless power transfer. In particular, we propose an iterative algorithm which provides a local optimal solution for the considered optimization problem.
\section{System Model}
\label{sect:system model}
\subsection{Notation}
We use boldface capital and lower case letters to denote matrices and vectors, respectively. $\mathbf{A}^H$, $\Tr(\mathbf{A})$, and $\Rank(\mathbf{A})$ represent the Hermitian transpose, trace, and rank of matrix $\mathbf{A}$; $\mathbf{A}\succ \mathbf{0}$ and $\mathbf{A}\succeq \mathbf{0}$ indicate that $\mathbf{A}$ is a positive definite and a positive semidefinite matrix, respectively; $\vect(\mathbf{A})$ denotes the vectorization of matrix $\mathbf{A}$ by stacking its columns from left to right to form a column vector; $\mathbf{I}_N$ is the $N\times N$ identity matrix; $\mathbb{C}^{N\times M}$ denotes the set of all $N\times M$ matrices with complex entries; $\mathbb{H}^N$ denotes the set of all $N\times N$ Hermitian matrices; $\diag(x_1, \cdots, x_K)$ denotes a diagonal matrix with the diagonal elements given by $\{x_1, \cdots, x_K\}$; $\abs{\cdot}$ and $\norm{\cdot}_p$ denote the absolute value of a complex scalar and the
$l_p$-norm of a vector, respectively. In particular, $\norm{\cdot}_0$ is known as the $l_0$-norm of a vector and denotes the number of non-zero entries in the vector; the circularly symmetric complex Gaussian (CSCG) distribution is denoted by ${\cal CN}(\mu,\sigma^2)$ with mean $\mu$ and variance $\sigma^2$; $\sim$ stands for ``distributed as"; $\ceil[\big]{x}$ is the ceiling function denoting the smallest integer not smaller than $x$.
\subsection{CoMP Network Model and Central Processor}
\label{sect:multicell-central-unit}
We consider a CoMP multiuser downlink communication network. The system consists of a CP, $L$ RRHs, $K$ information receivers (IRs), and $M$ energy harvesting receivers (ERs), cf. Figure \ref{fig:system_model}. Each RRH is equipped with $N_\mathrm{T}>1$ transmit antennas. The IRs and ERs are single antenna devices which exploit the received signal powers in the RF for information decoding and energy harvesting, respectively. In practice, the ERs may be idle IRs which are scavenging energy from the RF for extending their lifetimes. On the other hand, the CP is the core unit in the network. In particular, it has the data of all information receivers. Besides, we assume that the global channel state information (CSI) is perfectly known at
the CP and all computations are performed in this
unit. Based on the available CSI, the CP computes the
resource allocation policy and broadcasts it to all RRHs. Specifically, each RRH receives the control signals for resource allocation and the data of the $K$ IRs from the CP via a backhaul\footnote{In practice, the backhaul links can be implemented by different technologies such as digital subscriber line (DSL) or out-of-band microwave links. } link. Furthermore, we assume that the CP supplies energy to the RRHs in the network via dedicated power lines to support the RRHs' power consumption.
\subsection{Channel Model}
\begin{figure}
\centering
\includegraphics[width=3.5in]{system_model.eps}\vspace*{-2mm}
\caption{Coordinated multipoint (CoMP) multiuser downlink communication system model with a central processor (CP), $L=4$ remote radio heads (RRHs), $K=2$ information receivers (IRs), and $M=2$ energy harvesting receivers (ERs).}
\label{fig:system_model}\vspace*{-4mm}
\end{figure}
We focus on a frequency flat fading channel and a time division duplexing (TDD) system. Each RRH obtains the local CSI of all receivers by exploiting channel reciprocity and handshaking signals. Subsequently, the RRHs feed their local CSI to the CP for computation of the resource allocation policy. The received signals at IR $k\in\{1,\ldots,K\}$ and ER $m\in\{1,\ldots,M\}$ are given by
\begin{eqnarray}
y_{k}^{\mathrm{IR}}\hspace*{-2mm}&=&\hspace*{-2mm}\underbrace{\mathbf{h}_k^H\mathbf{x}_k}_{\mbox{desired signal}}\hspace*{-1mm}+\hspace*{-1mm}\underbrace{\sum_{j\ne k}^K\mathbf{h}_k^H\mathbf{x}_j}_{\mbox{multiple-access interference}}+ n^{\mathrm{IR}}_{k},\,\,\\
y_{m}^{\mathrm{ER}}\hspace*{-2mm}&=&\hspace*{-2mm}\mathbf{g}_m^H\sum_{k=1}^K\mathbf{x}_k+n^{\mathrm{ER}}_{m},\,\,
\end{eqnarray}
where $\mathbf{x}_k\in\mathbb{C}^{N_{\mathrm{T}}L\times1}$ denotes the joint transmit signal vector of the $L$ RRHs to IR $k$. The channel between the $L$ RRHs and IR $k$ is denoted by $\mathbf{h}_k\in\mathbb{C}^{N_{\mathrm{T}}L\times1}$, and we use $\mathbf{g}_m\in\mathbb{C}^{N_{\mathrm{T}}L\times1}$ to represent the channel between the $L$ RRHs and ER $m$. We note that the channel vector captures the joint effects of multipath fading and path loss. $n^{\mathrm{IR}}_{k}\sim{\cal CN}(0,\sigma_{\mathrm{s}}^2)$ and $n^{\mathrm{ER}}_{m}\sim{\cal CN}(0,\sigma_{\mathrm{s}}^2)$ are additive white Gaussian noises (AWGN). We assume that the noise variances, $\sigma_{\mathrm{s}}^2$, are identical at all receivers.
\subsection{Signal Model and Backhaul Model}
In each scheduling time slot, $K$ independent signal streams are transmitted simultaneously to the $K$ IRs. Specifically, a dedicated beamforming vector, $\mathbf{w}_k^l\in\mathbb{C}^{N_{\mathrm{T}}\times1}$, is allocated to IR $k$ at RRH $l\in\{1,\ldots,L\}$ to facilitate information transmission. For the sake of presentation, we define a super-vector $\mathbf{w}_k\in\mathbb{C}^{N_{\mathrm{T}}L\times 1}$ for IR $k$ as
\begin{eqnarray}
\mathbf{w}_k=\vect\big([\mathbf{w}_k^1 \,\mathbf{w}_k^2\,\ldots\,\mathbf{w}_k^L]\big).
\end{eqnarray}
$\mathbf{w}_k$ represents a joint beamformer used by the $L$ RRHs for serving IR $k$. Then, $\mathbf{x}_k$ can be expressed as
\begin{eqnarray}
\mathbf{x}_k=\mathbf{w}_ks_k,
\end{eqnarray}
where $s_k\in\mathbb{C}$ is the data symbol for IR $k$ and ${\cal E}\{\abs{s_k}^2\}=1,\forall k\in\{1,\ldots,K\}$, is assumed without loss of generality.
On the other hand, the data of each IR is delivered from the CP to the RRHs via backhaul links. The backhaul capacity consumption for backhaul link $l\in\{1,\ldots,L\}$ is given by
\begin{eqnarray}
C^{\mathrm{Backhaul}}_l=\sum_{k=1}^K\bnorm{\Big[\norm{\mathbf{w}_k^l}_2\Big]}_0\,\, R_k,
\end{eqnarray}
where $R_k$ is the required backhaul data rate for conveying the data of IR $k$ to a RRH. We note that the backhaul links may be capacity-constrained and the CP may not be able to send the data to all RRHs as required for full cooperation. Thus, to reduce the load on the backhaul links, the CP can enable partial cooperation by sending the data of IR $k$ only to a subset of the RRHs. In particular, by setting $\mathbf{w}_k^l=\mathbf{0}$, RRH $l$ does not participate in the joint data transmission to IR $k$. Thus, the CP is not required to send the data for IR $k$ to RRH $l$ via the backhaul link, which leads to a lower backhaul capacity consumption.
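The backhaul bookkeeping above can be sketched as follows: RRH $l$ consumes $R_k$ on its backhaul link if and only if its sub-beamformer $\mathbf{w}_k^l$ is non-zero. All beamformers and rates in this sketch are made-up numbers for illustration.

```python
import numpy as np

L, K, NT = 3, 2, 4
rng = np.random.default_rng(1)

# w[k][l] is the N_T x 1 sub-beamformer of RRH l serving IR k.
w = [[rng.standard_normal(NT) + 1j*rng.standard_normal(NT) for _ in range(L)]
     for _ in range(K)]
w[0][2][:] = 0          # RRH 3 does not serve IR 1 (partial cooperation)
w[1][0][:] = 0          # RRH 1 does not serve IR 2

R = np.array([2.0, 3.0])   # per-IR backhaul rates R_k = log2(1 + SINR_req)

def backhaul_load(l, tol=1e-9):
    # The l0-"norm" of the scalar ||w_k^l||_2 is 1 iff the beam is active,
    # so the link load is the sum of R_k over the IRs that RRH l serves.
    return sum(R[k] for k in range(K) if np.linalg.norm(w[k][l]) > tol)

loads = [backhaul_load(l) for l in range(L)]
print(loads)   # [2.0, 5.0, 3.0]: RRH 1 serves only IR 1, RRH 3 only IR 2
```

The objective in the next section penalizes the maximum of these per-link loads (here $5.0$), which is what drives the optimization toward sparse, partially cooperative beamformers.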
\subsection{RRH Power Supply Model}
In the considered CoMP network, we assume that the CP transfers energy to the RRHs for supporting the power consumption at the RRHs and
facilitating a more efficient network operation. In particular,
$ E^{\mathrm{s}}_l- (E^{\mathrm{s}}_l)^2\beta_l$ units of energy are transferred to RRH $l$ via a dedicated power line, where
$E^{\mathrm{s}}_l,\forall l\in\{1,\ldots,L\}$, is the power supplied by the CP to RRH $l$. $(E^{\mathrm{s}}_l)^2\beta_l$ is the power loss incurred in delivering the power from the CP to RRH $l$. $\beta_l>0$ is a constant proportional to the ratio between the resistance of the adopted power line and the voltage of power transmission. We note that $E^{\mathrm{s}}_l- (E^{\mathrm{s}}_l)^2\beta_l\ge 0$ always holds by the law of conservation of energy.
\section{Problem Formulation}
\label{sect:forumlation}
\subsection{Achievable Rate and Energy Harvesting}
\label{subsect:Instaneous_Mutual_information}
The achievable rate (bit/s/Hz) between the $L$ RRHs and IR $k$ is given by
\begin{eqnarray}
C_{k}=\log_2(1+\Gamma_{k}),\,\,
\mbox{where}\quad
\Gamma_{k}=\frac{\abs{\mathbf{h}_k^H\mathbf{w}_k}^2}{\sum\limits_
{\substack{j\neq k}}^K\abs{\mathbf{h}_k^H\mathbf{w}_j}^2+\sigma_{\mathrm{s}}^2}
\end{eqnarray}
is the receive signal-to-interference-plus-noise ratio (SINR) at IR $k$.
On the other hand, the information signal, $\mathbf{w}_k s_k,\forall k\in\{1,\ldots,K\}$, serves as a dual-purpose carrier for conveying both information and energy concurrently in the considered system. The total amount of energy\footnote{We adopt the normalized energy unit Joule-per-second in this paper. Therefore,
the terms ``power" and ``energy" are used interchangeably.} harvested by ER $m\in\{1,\ldots,M\}$ is given by
\begin{eqnarray}\label{eqn:ER_power}
E_{m}^{\mathrm{ER}}=\mu\Big(\sum_{k=1}^K\abs{\mathbf{g}_m^H\mathbf{w}_k}^2\Big),
\end{eqnarray}
where $0<\mu\leq1$ denotes the efficiency of the conversion of the received RF energy to electrical energy for storage. We assume that $\mu$ is a constant and is identical for all ERs. Besides, the contribution of the antenna noise power to the harvested energy is negligibly small compared to the harvested energy from the information signal, $\abs{\mathbf{g}_m^H\mathbf{w}_k}^2$, and thus is neglected in (\ref{eqn:ER_power}).
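For concreteness, the receive SINR and the harvested energy can be evaluated directly from the channel vectors and beamformers. The sketch below uses randomly drawn, purely illustrative channels, and also checks the identity $\Tr(\mathbf{H}_k\mathbf{W}_k)=|\mathbf{h}_k^H\mathbf{w}_k|^2$ that is exploited later in the SDP reformulation.

```python
import numpy as np

rng = np.random.default_rng(2)
NTL, K, M = 8, 2, 2            # joint dimension N_T * L (illustrative sizes)
sigma_s2, mu = 1e-3, 0.5

h = [rng.standard_normal(NTL) + 1j*rng.standard_normal(NTL) for _ in range(K)]
g = [rng.standard_normal(NTL) + 1j*rng.standard_normal(NTL) for _ in range(M)]
w = [rng.standard_normal(NTL) + 1j*rng.standard_normal(NTL) for _ in range(K)]

def sinr(k):
    # Gamma_k = |h_k^H w_k|^2 / (sum_{j != k} |h_k^H w_j|^2 + sigma_s^2)
    sig = abs(np.vdot(h[k], w[k]))**2
    interf = sum(abs(np.vdot(h[k], w[j]))**2 for j in range(K) if j != k)
    return sig/(interf + sigma_s2)

def harvested(m):
    # All K information beams contribute to the energy harvested by ER m.
    return mu*sum(abs(np.vdot(g[m], w[k]))**2 for k in range(K))

print([sinr(k) for k in range(K)], [harvested(m) for m in range(M)])

# Consistency with the SDP reformulation: Tr(H_k W_k) = |h_k^H w_k|^2.
W0 = np.outer(w[0], np.conj(w[0]))
H0 = np.outer(h[0], np.conj(h[0]))
print(abs(np.trace(H0 @ W0).real - abs(np.vdot(h[0], w[0]))**2))   # ~0
```

The trace identity is what allows the quadratic SINR and harvesting constraints to be written linearly in $\mathbf{W}_k$, at the cost of the rank-one constraint handled in Section IV.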
\subsection{Optimization Problem Formulation}
\label{sect:cross-Layer_formulation}
The system objective is to jointly minimize the weighted sum of the total network transmit power and the maximum capacity consumption per backhaul link while providing QoS for reliable communication and power transfer. The resource allocation algorithm design is formulated as the following optimization problem:\vspace*{-2mm}
\begin{eqnarray} \label{eqn:cross-layer}\notag
&&\hspace*{-10mm} \underset{ E^{\mathrm{s}}_l,\mathbf{w}_k}{\mino}\,\, \delta\max_{l\in\{1,\ldots,L\}}\Big\{C^{\mathrm{Backhaul}}_l \Big\}+\eta \sum_{k=1}^K\sum_{l=1}^L\norm{\mathbf{w}^l_k}_2^2\\
\notag \mbox{s.t.}\hspace*{-2mm} &&\hspace*{2mm}\mbox{C1: }\Gamma_{k}\ge\Gamma_{\mathrm{req}_k},\,\, \forall k, \notag\\
&&\hspace*{2mm}\mbox{C2: }P_{\mathrm{C}}^\mathrm{CP}+\sum_{l=1}^L E^{\mathrm{s}}_l\le P_{\max}^{\mathrm{CP}},\notag\\
&&\hspace*{2mm}\mbox{C3: }P_{\mathrm{C}_l}+ \varepsilon\sum_{k=1}^K\norm{\mathbf{w}^l_k}^2_2\le E^{\mathrm{s}}_l- (E^{\mathrm{s}}_l)^2\beta_l,\,\, \forall l,\notag\\
&&\hspace*{2mm}\mbox{C4: }\sum_{k=1}^K\norm{\mathbf{w}^l_k}^2_2\le P^{\mathrm{T}_{\max}}_l,\,\, \forall l,\notag\\
&&\hspace*{-2mm}\mbox{C5:}\,\, E_{m}^{\mathrm{ER}}\ge P^{\min}_{m},\,\, \forall m,\,\, \,\, \,\,
\mbox{C6:}\,\, E^{\mathrm{s}}_l\ge 0,\,\, \forall l,
\end{eqnarray}
where $\delta\ge0$ and $\eta\ge0$ in the objective function are constants which reflect the preference of the system operator for the capacity consumption of individual backhaul links and the total network transmit power consumption, respectively. Besides, $\delta$ can also be interpreted as the energy/power cost in conveying information to the RRHs via backhaul. $\Gamma_{\mathrm{req}_k}>0$ in constraint C1 indicates the required minimum receive SINR at IR $k$ for information decoding. The corresponding data rate per backhaul link use for IR $k$ is given by $R_k=\log_2(1+\Gamma_{\mathrm{req}_k})$. In C2, $P_{\mathrm{C}}^\mathrm{CP}$ and $P_{\max}^{\mathrm{CP}}$ are the hardware circuit power consumption and the maximum power available at the CP, respectively. In C3, $P_{\mathrm{C}_l}$ and $ E^{\mathrm{s}}_l- (E^{\mathrm{s}}_l)^2\beta_l\ge0$ are the hardware circuit power consumption and the maximum available power at RRH $l$, respectively. $\varepsilon\ge 1$ is a constant which accounts for the power inefficiency of the power amplifier.
$P^{\mathrm{T}_{\max}}_l$ in C4 is the maximum transmit power allowance for RRH $l$, which can be used to limit out-of-cell
interference. Constant $P^{\min}_{m}$ in constraint C5 specifies the required minimum harvested energy at ER $m$. C6 is the non-negativity constraint on the power optimization variables.
\begin{Remark} We note that the objective function considered in this paper is different from that in \cite{JR:Quek} and \cite{CN:Wei_yu_sparse_BF}. In particular, we focus on the capacity consumption of individual backhaul links while \cite{JR:Quek} and \cite{CN:Wei_yu_sparse_BF} studied the total network backhaul capacity consumption. Although the considered problem formulation does not constrain the capacity consumption of the individual backhaul links, it provides a first-order measure of the backhaul loading in the considered CoMP network when enabling partial cooperation. This information provides system design insight for the required backhaul deployment.
\end{Remark}
\section{Resource Allocation Algorithm Design}
The optimization problem in (\ref{eqn:cross-layer}) is a non-convex problem due to the non-convexity of the objective function,
constraint C1, and constraint C5. In particular, the combinatorial nature of the objective function results in an NP-hard optimization problem \cite{JR:Quek}. To strike a balance between system performance and computational complexity, we develop an iterative algorithm for obtaining a suboptimal solution. To this end, we first reformulate the optimization problem by approximating the original non-convex objective function as a weighted sum of convex functions with different weight factors. Then, we recast the reformulated problem as a semidefinite programming (SDP) problem via SDP relaxation and solve it optimally. Subsequently, a suboptimal solution to the original optimization problem is obtained by updating the weight factors and solving the reformulated problem iteratively.
\subsection{Convex Relaxation}
The non-convex weighted capacity consumption of backhaul link $l$, $\delta C^{\mathrm{Backhaul}}_l$, can be approximated as follows:
\begin{eqnarray}\label{eqn:objective_approx}
\delta C^{\mathrm{Backhaul}}_l \hspace*{-2mm}&\stackrel{(a)}{=}&\hspace*{-2mm}\delta \sum_{k=1}^K\bnorm{\Big[\norm{\mathbf{w}_k^l}_2^2\Big]}_0\,\, R_k\\
\hspace*{-2mm}&\stackrel{(b)}{\approx}&\hspace*{-2mm}\delta\sum_{k=1}^K\bnorm{\Big[\rho_k^l\norm{\mathbf{w}_k^l}_2^2\Big]}_1 R_k=\delta\sum_{k=1}^K \rho_k^l\norm{\mathbf{w}_k^l}_2^2 R_k\notag
\end{eqnarray}
where $\rho_k^l\ge 0,\forall k,l,$ in $(b)$ are given constant weight factors which can be used to achieve solution sparsity. $(a)$ indicates that the value of the $l_0$-norm is invariant when the input arguments are squared. $(b)$ is due to the fact that the $l_0$-norm can be approximated by its convex hull which is the $l_1$-norm. This approximation is known as convex relaxation and is commonly used in the field of compressed sensing for handling $l_0$-norm optimization problems \cite{JR:Quek}--\nocite{CN:Wei_yu_sparse_BF,JR:compressed_sensing}\cite{JR:compressive_sensing_boyd}.
\subsection{SDP Relaxation}
We substitute (\ref{eqn:objective_approx}) into (\ref{eqn:cross-layer}) and define $\mathbf{W}_k=\mathbf{w}_k\mathbf{w}_k^H$, $\mathbf{H}_k=\mathbf{h}_k\mathbf{h}_k^H$, and $\mathbf{G}_m=\mathbf{g}_m\mathbf{g}_m^H$. Then, we recast the reformulated problem in its epigraph form \cite{book:convex} which is given as follows:
\begin{eqnarray}\label{eqn:rank_one}
&&\hspace*{-10mm} \underset{\mathbf{W}_k\in \mathbb{H}^{N_{\mathrm{T}}L},E^{\mathrm{s}}_l,\phi}{\mino}\,\, \phi+ \eta\sum_{k=1}^K\Tr(\mathbf{W}_k)\notag\\
\mbox{s.t.} &&\hspace*{-5mm}\mbox{C1: }\frac{\Tr(\mathbf{H}_k\mathbf{W}_k)}{\Gamma_{\mathrm{req}_k}}\ge\sum\limits_
{\substack{j\neq k}}^K\Tr(\mathbf{H}_k\mathbf{W}_j)+\sigma_{\mathrm{s}}^2,\forall k,\notag \\
&&\hspace*{10mm}{\mbox{C2}},\quad{\mbox{C6}},\notag\\
&&\hspace*{-5mm}\mbox{C3: } P_{\mathrm{C}_l}+\varepsilon\sum_{k=1}^K\Tr\big(\mathbf{B}_l\mathbf{W}_k\big)\le E^{\mathrm{s}}_l- (E^{\mathrm{s}}_l)^2\beta_l,\,\, \forall l,\notag\\
&&\hspace*{-5mm}\mbox{C4: } \sum_{k=1}^K\Tr\big(\mathbf{B}_l\mathbf{W}_k\big)\le P^{\mathrm{T}_{\max}}_l,\,\, \forall l,\notag\\
&&\hspace*{-5mm}\mbox{C5: }\mu\Big(\sum_{k=1}^K\Tr\big(\mathbf{W}_k\mathbf{G}_m\big)\Big)\ge P^{\min}_{m},\,\, \forall m, \notag\\
&&\hspace*{-5mm}\mbox{C7:} \delta \Big(\sum_{k=1}^K\Tr(\mathbf{W}_k\mathbf{B}_l)\rho_k^lR_k \Big)\le \phi,\forall l,\notag\\
&&\hspace*{-15mm}\mbox{C8:}\,\, \mathbf{W}_k\succeq \mathbf{0},\,\, \forall k, \quad\,\,\,
\mbox{C9:}\,\, \Rank(\mathbf{W}_k)\le 1,\,\, \forall k,
\end{eqnarray}
where
\begin{eqnarray}
&&\hspace*{-5mm}\mathbf{B}_l\triangleq\diag\Big(\underbrace{0,\cdots,0}_{(l-1)N_\mathrm{T}},\underbrace{1,\cdots,1}_{N_\mathrm{T}},\underbrace{0,\cdots,0}_{(L-l)N_\mathrm{T}}\Big),\forall l\in\{1,\ldots,L\}\notag,
\end{eqnarray}
is a block diagonal matrix with $\mathbf{B}_l\succeq \mathbf{0}$. $\phi$ in the objective function and constraint C7 is an auxiliary optimization variable. Constraints C8, C9, and $\mathbf{W}_k\in\mathbb{H}^{N_{\mathrm{T}}L},\forall k$, are imposed to guarantee that $\mathbf{W}_k=\mathbf{w}_k\mathbf{w}_k^H$ holds after optimization.
\begin{table}[t]\caption{Iterative Resource Allocation Algorithm}\label{table:algorithm}
\vspace*{-0.6cm}
\renewcommand\thealgorithm{}
\begin{algorithm} [H]
\caption{Reweighted $l_1$-norm Method}
\label{alg1}
\begin{algorithmic} [1]
\normalsize
\STATE Initialize the maximum number of iterations $L_{\max}$ and a small constant $\kappa\rightarrow 0$
\STATE Set iteration index $n=0$ and $\rho_{k}^l(n)=1,\forall k,l$
\REPEAT [Loop]
\STATE Solve (\ref{eqn:sdp_relaxation}) for a given set of $\rho_{k}^l(n)$ and obtain an intermediate beamforming vector $\mathbf{w}_k^l$
\STATE Update the weight factor as follows:
\begin{eqnarray}
\rho_{k}^l (n+1)&=&\frac{1}{\norm{\mathbf{w}_k^l}_2^2+\kappa}, \forall l,k,\notag\\
n&=&n+1\notag
\end{eqnarray}
\UNTIL{ $n=L_{\max}$}
\end{algorithmic}
\end{algorithm}
\vspace*{-0.9cm}\normalsize
\end{table}
Then, we relax constraint $\mbox{C9: }\Rank(\mathbf{W}_k)\le1$ by removing it from the problem formulation, such that the considered problem becomes a convex SDP given by
\begin{eqnarray}
\label{eqn:sdp_relaxation}&&\hspace*{-25mm} \underset{\mathbf{W}_k\in \mathbb{H}^{N_{\mathrm{T}}L},E^{\mathrm{s}}_l,\phi}{\mino}\,\, \phi+ \eta\sum_{k=1}^K\Tr(\mathbf{W}_k)\notag\\
\hspace*{2mm}\mbox{s.t.} &&\hspace*{-3mm}\mbox{C1 -- C8}.
\end{eqnarray}
We note that the relaxed problem in (\ref{eqn:sdp_relaxation}) can be solved efficiently by numerical solvers such as CVX \cite{website:CVX}. If the solution $\mathbf{W}_k$ of (\ref{eqn:sdp_relaxation}) is a rank-one matrix, then the problems in (\ref{eqn:rank_one}) and (\ref{eqn:sdp_relaxation}) share the same optimal solution and the same optimal objective value. Otherwise, the optimal objective value of (\ref{eqn:sdp_relaxation}) serves as a lower bound for the objective value of (\ref{eqn:rank_one}).
Next, we reveal the tightness of the SDP relaxation adopted in (\ref{eqn:sdp_relaxation}) in the following theorem.
\begin{Thm}\label{thm:rankone_condition} Assuming that the channel vectors of the IRs, $\mathbf{h}_k,k\in\{1,\ldots,K\},$ and the ERs, $\mathbf{g}_m,m\in\{1,\ldots,M\},$ can be modeled as statistically independent random variables, the solution of (\ref{eqn:sdp_relaxation}) is rank-one, i.e., $\Rank(\mathbf{W}_k)=1,\forall k$, with probability one.
\end{Thm}
\emph{\quad Proof: } Please refer to the Appendix. \qed
In other words, whenever the channels satisfy the condition stated in Theorem 1, the optimal beamformer $\mathbf{w}^*_k$ of (\ref{eqn:rank_one}) can be obtained with probability one by performing an eigenvalue decomposition of the solution $\mathbf{W}_k$ of (\ref{eqn:sdp_relaxation}) and selecting the principal eigenvector as the beamformer.
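This extraction step can be sketched as follows; a rank-one $\mathbf{W}$ is synthesized directly for illustration, standing in for the SDP solution.

```python
import numpy as np

# Recovering the beamformer from a rank-one SDP solution W = w w^H:
# take the principal eigenvector scaled by the square root of its eigenvalue.
rng = np.random.default_rng(3)
n = 6
w_true = rng.standard_normal(n) + 1j*rng.standard_normal(n)
W = np.outer(w_true, np.conj(w_true))        # rank-one, Hermitian, PSD

vals, vecs = np.linalg.eigh(W)               # eigenvalues in ascending order
w_rec = np.sqrt(vals[-1])*vecs[:, -1]        # principal component

# w_rec equals w_true only up to a global phase, which does not affect any
# |h^H w|^2-type quantity, so check that W itself is reproduced.
print(np.max(np.abs(np.outer(w_rec, np.conj(w_rec)) - W)))   # ~0
```

The global phase ambiguity is harmless here because every constraint in (\ref{eqn:rank_one}) depends on $\mathbf{w}_k$ only through quadratic forms.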
\subsection{Iterative Resource Allocation Algorithm}
In general, for a fixed weight factor, $\rho_k^l$, the solution of (\ref{eqn:rank_one}) does not necessarily provide sparsity and the approximation adopted in (\ref{eqn:objective_approx}) may not be tight. For improving the obtained solution, we adopt the \emph{Reweighted $l_1$-norm Method} which was originally designed to enhance the data acquisition in compressive sensing \cite{JR:compressive_sensing_boyd}. The overall resource allocation algorithm is summarized in Table \ref{table:algorithm}. In particular, the weight factor $\rho_k^l$ is updated as in line 5 of the iterative algorithm such that the magnitude of beamforming vectors $\norm{\mathbf{w}_k^l}_2^2$ with small values are further reduced in the next iteration. As a result, by iteratively updating $\rho_k^l$ and solving (\ref{eqn:sdp_relaxation}), a suboptimal beamforming solution with sparsity can be constructed. We note that the iterative algorithm in Table \ref{table:algorithm} converges to a local optimal solution of the original problem formulation in (\ref{eqn:cross-layer}) for $\kappa\rightarrow 0$ and a sufficient number of iterations \cite{JR:Quek,JR:compressive_sensing_boyd}. Furthermore, when the primal-dual path-following
method \cite{JR:SDP_relaxation1} is used by the numerical solver for solving (\ref{eqn:sdp_relaxation}), the computational complexity of the proposed algorithm is $\bigo(L_{\max}\max\{N_{\mathrm{T}}L,K+3L+M\}^4(N_{\mathrm{T}}L)^{1/2}\log(1/\epsilon))$ for a given solution accuracy $\epsilon>0$. The computational complexity is significantly reduced compared to the computational complexity of an exhaustive search with respect to $K$ and $L$, i.e., $\bigo((2^L-1)^K\max\{N_{\mathrm{T}}L,K+3L+M\}^4(N_{\mathrm{T}}L)^{1/2}\log(1/\epsilon))$.
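The reweighting mechanism itself can be sketched independently of the SDP machinery. The toy example below is a minimal sketch, not the paper's algorithm: it assumes the standard update $\rho \leftarrow 1/(|x|+\kappa)$ of the reweighted $l_1$-norm method \cite{JR:compressive_sensing_boyd} and uses plain proximal-gradient (soft-thresholding) steps on a small sparse-recovery instance in place of solving (\ref{eqn:sdp_relaxation}); all dimensions and parameter values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse-recovery instance standing in for the (much larger) SDP:
# recover a 3-sparse x0 from y = A @ x0.
m, n, k_sparse = 20, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k_sparse, replace=False)] = 2.0 + rng.standard_normal(k_sparse)
y = A @ x0

kappa, lam = 1e-4, 0.05                    # kappa plays the paper's role
step = 1.0 / np.linalg.norm(A, 2) ** 2     # valid proximal-gradient step size
x, rho = np.zeros(n), np.ones(n)
for _ in range(20):                        # reweighting (outer) iterations
    for _ in range(200):                   # weighted-l1 proximal descent
        g = x - step * (A.T @ (A @ x - y))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam * rho, 0.0)
    rho = 1.0 / (np.abs(x) + kappa)        # small entries -> large weights

print(np.count_nonzero(np.abs(x) > 1e-3)) # only a few entries survive
```

The qualitative behavior mirrors the weight update in Table \ref{table:algorithm}: entries that come out small in one round receive large weights and are driven towards exactly zero in the next, yielding a sparse solution after a few reweighting iterations.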
\begin{figure}[t]
\centering
\includegraphics[width=2.0in]{system_model_simulation.eps}\vspace*{-2mm}
\caption{CoMP network simulation topology with $L=3$ RRHs, $K=5$ IRs, and $M=2$ ERs.}
\label{fig:simulation_model}
\end{figure}
\begin{table}[t]\caption{System parameters}\label{tab:feedback} \centering
\begin{tabular}{|L|l|}\hline
\hspace*{-1mm}Carrier center frequency & 1.9 GHz \\
\hline
\hspace*{-1mm}Path loss exponent & 3.6 \\
\hline
\hspace*{-1mm}Multipath fading distribution & \mbox{Rayleigh fading} \\
\hline
\hspace*{-1mm}Total noise variance, $\sigma_{\mathrm{s}}^2$ & \mbox{$-23$ dBm} \\
\hline
\hspace*{-1mm}Minimum required SINR, \hspace*{-1mm}$\Gamma_{\mathrm{req}_k}=\Gamma_{\mathrm{req}},\forall k\in\{1,\ldots,K\}$ & \mbox{$15$ dB} \\
\hline
\hspace*{-1mm}Circuit power consumption at CP, $P_{\mathrm{CP}}^{\mathrm{C}}$ & $40$ \mbox{dBm} \\
\hline
\hspace*{-1mm}Circuit power consumption at the $l$-th RRH, $P_{\mathrm{C}_l}$ & $30$ \mbox{dBm} \\
\hline
\hspace*{-1mm}Max. power supply at the CP, $P^{\mathrm{CP}}_{\max}$ & $50$ \mbox{dBm} \\
\hline
\hspace*{-1mm}Power amplifier power efficiency & $1/{\varepsilon}=0.38$ \\
\hline
\hspace*{-1mm}Max. transmit power allowance, $P_l^{T_{\max}}$ & $46$ dBm \\
\hline
\hspace*{-1mm}Min. required power transfer, $P_{m}^{\min}$ & $0$ dBm \\
\hline
\hspace*{-1mm}RF to electrical energy conversion efficiency, $\mu$ & $0.5$ \\
\hline
\hspace*{-1mm}Power loss in transferring power from the CP to \hspace*{-1mm}RRH, $1-\beta_l$ & $0.2$ \\
\hline
\end{tabular}\vspace*{-4mm}
\end{table}
\section{Results}
In this section, we evaluate the network performance of the proposed resource allocation design via simulations.
There are $L=3$ RRHs, $K=5$ IRs, and $M=2$ ERs in the system. We focus on the network topology shown in Figure \ref{fig:simulation_model}. The distance between any two RRHs is $500$ meters. The three RRHs construct an equilateral triangle while the IRs and ERs are uniformly distributed inside a disc with radius $1000$ meters centered at the centroid of the triangle. The simulation parameters can be found in Table \ref{tab:feedback}. In the iterative algorithm, we set $\kappa$ and $L_{\max}$ to $0.0001$ and $20$, respectively. The numerical results in this section were
averaged over 1000 independent channel realizations for both
path loss and multipath fading. The performance of the proposed scheme is compared with the performances of a full cooperation scheme, an optimal exhaustive search scheme, and a traditional system with co-located transmit antennas. For the full cooperation scheme, the solution is obtained by setting $\delta=0$, $\eta=1$, and solving (\ref{eqn:sdp_relaxation}) by SDP relaxation. For the exhaustive search, multiple optimal solutions of (\ref{eqn:cross-layer}) may exist. Thus, from the set of optimal solutions, we first select those having the minimal total system backhaul capacity consumption. If there are multiple optimal solutions with the same total system backhaul capacity consumption, we then select the one requiring the minimal total network transmit power. As for the co-located transmit antenna system, we assume that there is only one RRH located at the center of the system, equipped with the same number of antennas as all RRHs combined in the distributed setting. However, the CP is not at the same location as the RRH for the co-located transmit antenna system, i.e., a backhaul link is still needed. Furthermore, we set $P_l^{T_{\max}}=\infty$ for the co-located transmit antenna system to study its power consumption.
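The two-stage tie-break among multiple exhaustive-search optima described above amounts to a lexicographic minimum over the optimal set; a minimal sketch with hypothetical candidate values:

```python
# Hypothetical candidate optima from the exhaustive search: each tuple is
# (total backhaul capacity consumption, total network transmit power).
candidates = [(12.0, 3.1), (10.5, 2.9), (10.5, 2.4), (11.0, 2.0)]

# Tie-break: minimal total backhaul first, then minimal transmit power.
best = min(candidates)   # tuple comparison in Python is lexicographic
print(best)              # -> (10.5, 2.4)
```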
\begin{figure}[t]
\subfigure[Average maximum capacity consumption per backhaul link.]{
\includegraphics[width=3.5 in]{min_max_per_link.eps}
\label{fig:backhaul_nt_subfig1} }\vspace*{-1mm}
\subfigure[Average total system backhaul capacity consumption.]{
\includegraphics[width=3.5 in]{min_max_total_links.eps}
\label{fig:backhaul_nt_subfig2} }\caption[Optional caption for list of figures]
{Average backhaul capacity consumption versus total number of transmit antennas in the network for different resource allocation schemes.}
\label{fig:backhaul_nt}
\end{figure}
\subsection{Average Backhaul Capacity Consumption}
In Figures \ref{fig:backhaul_nt_subfig1} and \ref{fig:backhaul_nt_subfig2}, we study the average maximum backhaul capacity consumption per backhaul link and the average total system backhaul capacity consumption, respectively, versus the total number of transmit antennas in the network, for different resource allocation schemes. We set $\delta=1$ and $\eta=0$ in (\ref{eqn:sdp_relaxation}) for the proposed scheme
to fully minimize the maximum capacity consumption per backhaul link. The performance of the proposed iterative algorithm is shown for $10$ and $20$ iterations. It can be seen from Figure \ref{fig:backhaul_nt_subfig1} that the proposed iterative algorithm achieves a close-to-optimal backhaul capacity consumption in all considered scenarios even for the case of $10$ iterations. We note that the gap between the proposed algorithm and the exhaustive search in Figure \ref{fig:backhaul_nt_subfig1} is caused by the sub-optimality of
the objective function approximation in (\ref{eqn:objective_approx}) and insufficient numbers of
iterations. In fact, the superior average maximum backhaul capacity consumption per link of the optimal exhaustive search scheme in Figure \ref{fig:backhaul_nt_subfig1} compared to the proposed scheme comes at the expense of a computational complexity that grows exponentially with the number of IRs and RRHs. On the other hand, the performance gap between the proposed iterative resource allocation algorithm and the full cooperation scheme (and the co-located antenna system) increases with the total number of transmit antennas. In Figures \ref{fig:backhaul_nt_subfig1} and \ref{fig:backhaul_nt_subfig2}, we observe that the average backhaul capacity consumption of the proposed algorithm decreases monotonically with an increasing number of antennas and converges to constant values close to the respective lower bounds. The lower bounds in Figures \ref{fig:backhaul_nt_subfig1} and \ref{fig:backhaul_nt_subfig2} are given by $\ceil[\Big]{\frac{K}{L}}\log_2(1+\Gamma_{\mathrm{req}})$ and $K\log_2(1+\Gamma_{\mathrm{req}})$, respectively. Indeed, when both the power budget and the number of antennas at the RRHs are sufficiently large, full cooperation may not be beneficial. In this case, conveying the data of each IR to one RRH may be sufficient for satisfying the QoS requirements for reliable communication and efficient power transfer. Hence, backhaul system resources can be saved. Besides, it can be seen from Figure \ref{fig:backhaul_nt_subfig2} that the system with co-located antennas requires the smallest amount of total system backhaul capacity since the data of each IR is conveyed only to a single RRH. However, the superior performance of the co-located antenna system in terms of total network backhaul capacity consumption comes at the cost of the highest capacity consumption per backhaul link among all schemes, cf. Figure \ref{fig:backhaul_nt_subfig1}.
\subsection{Average Total Transmit Power and Harvested Power}
In Figure \ref{fig:PT_NT}, we study the average total transmit power versus the total number of transmit antennas for different resource allocation schemes. It can be observed that the total transmit power decreases monotonically with an increasing number of transmit antennas. This is due to the fact that the degrees of freedom for resource allocation increase with the number of transmit antennas, which enables a more power efficient resource allocation. Besides, the proposed algorithm consumes a lower transmit power compared to the optimal exhaustive search scheme. This is because the exhaustive search scheme achieves a smaller backhaul capacity consumption at the expense of a higher transmit power. Furthermore, the system with co-located antennas consumes a higher transmit power than the proposed scheme and the full cooperation scheme in all considered scenarios, which reveals the power saving potential of CoMP due to its inherent spatial diversity. On the other hand, as expected, the full cooperation scheme achieves the lowest average total transmit power at the expense of an exceedingly large backhaul capacity consumption, cf. Figure \ref{fig:backhaul_nt}.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{pt_nt.eps}\vspace*{-2mm}
\caption{Average total transmit power (dBm) versus total number of transmit antennas in the network for different resource allocation schemes. }
\label{fig:PT_NT}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{hp_nt.eps}\vspace*{-2mm}
\caption{Average total harvested power (dBm) versus total number of transmit antennas in the network for different resource allocation schemes. }
\label{fig:hp_NT}
\end{figure}
In Figure \ref{fig:hp_NT}, we study
the average total harvested power versus the total number of transmit antennas in the network for different resource allocation schemes. We compare the average total harvested power of all resource allocation schemes with a lower bound which is computed by assuming that constraint C5 is satisfied with equality for all ERs. As can be observed, the total
average harvested powers in all considered scenarios are monotonically non-increasing with respect to the number of transmit antennas.
This is because the extra degrees of freedom offered by the increasing number of antennas improve the efficiency of resource allocation. In particular, the beam direction associated with beamforming matrix $\mathbf{W}_k$ can be steered more accurately towards the IRs, which reduces both the power allocated to $\mathbf{W}_k$ and the power leakage to the ERs. This also explains the lower harvested power for the full cooperation scheme and the system with co-located antennas, since they both exploit all transmit antennas in the network for joint transmission. On the other hand, the highest amount of radiated power is harvested for the exhaustive search scheme, at the expense of a higher total transmit power.
\section{Conclusions}\label{sect:conclusion}
In this paper, we studied the resource allocation algorithm design for CoMP multiuser communication systems with SWIPT. The algorithm design was formulated as a non-convex combinatorial optimization problem with the objective to jointly minimize the total network transmit power and the maximum capacity consumption of the backhaul links. The proposed problem formulation took into account QoS requirements for communication reliability and power transfer. A suboptimal iterative resource allocation algorithm was proposed for obtaining a locally optimal solution of the considered problem. Simulation results
showed that the proposed suboptimal iterative resource allocation scheme performs close to the optimal exhaustive search scheme and provides a substantial reduction in backhaul capacity consumption compared to full cooperation. Besides, our results unveiled the potential power savings enabled by CoMP networks compared to centralized systems with multiple antennas co-located for SWIPT.
\section*{Appendix-Proof of Theorem \ref{thm:rankone_condition}}
It can be verified that (\ref{eqn:sdp_relaxation}) satisfies Slater's constraint qualification and is jointly convex with respect to the optimization variables. Thus, strong duality holds and solving the dual problem is equivalent to solving the primal problem \cite{book:convex}. For the dual problem, we need the Lagrangian function of the primal problem in (\ref{eqn:sdp_relaxation}) which is given by
\begin{eqnarray}
&&\hspace*{-6mm}{\cal L}\Big(\mathbf{W}_k,E^{\mathrm{s}}_l,\phi,\mathbf{Y}_k,\psi_l,\xi_k,\tau_m,\lambda,\omega_l,\theta_l,\chi_l\Big)\\
\hspace*{-2mm}=&&\hspace*{-6mm}\sum_{k=1}^K\Tr(\mathbf{A}_k\mathbf{W}_k)-\sum_{k=1}^K\Tr\Big(\mathbf{W}_k
\big(\mathbf{Y}_k+\frac{\xi_k\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big)\Big)+\Delta\notag
\end{eqnarray}
where\begin{eqnarray}\label{eqn:A_k}
\hspace*{-2mm}\mathbf{A}_k\hspace*{-3mm}&=&\hspace*{-3mm}\mathbf{D}_k+\sum_{j\neq k}^K\xi_j\mathbf{H}_j-\mu\sum_{m=1}^M\tau_m\mathbf{G}_m, \\
\hspace*{-2mm}\mathbf{D}_k\hspace*{-3mm}&=&\hspace*{-3mm} R_k\delta\sum_{l=1}^{L}\mathbf{B}_l\rho_k^l\chi_l +\eta\mathbf{I}_{N_{\mathrm{T}}}+
\sum_{l=1}^L(\psi_l+\theta_l)\varepsilon\mathbf{B}_l,\, \mbox{and}\\
\hspace*{-2mm}\Delta\hspace*{-3mm}&=&\hspace*{-3mm}\phi+\lambda(P_{\mathrm{C}}^\mathrm{CP}\hspace*{-0.5mm}+\hspace*{-0.5mm}\sum_{l=1}^L E^{\mathrm{s}}_l\hspace*{-0.5mm}- \hspace*{-0.5mm}P_{\max}^{\mathrm{CP}})\hspace*{-0.5mm}+\hspace*{-0.5mm}\sum_{m=1}^M \tau_m P^{\min}_m\hspace*{-0.5mm}-\hspace*{-0.5mm} \sum_{l=1}^L \omega_lE^{\mathrm{s}}_l\notag\\
&&\hspace*{-12mm}+\sum_{k=1}^K\xi_k\sigma_{\mathrm{s}}^2+\sum_{l=1}^L \Big[\psi_l(P_{\mathrm{C}_l}\hspace*{-0.5mm}-\hspace*{-0.5mm} ( E^{\mathrm{s}}_l- ( E^{\mathrm{s}}_l)^2\beta_l))\hspace*{-0.5mm}-\hspace*{-0.5mm}\theta_lP^{\mathrm{T}_{\max}}_l\hspace*{-0.5mm}-\hspace*{-0.5mm}\chi_l\phi\Big].\notag
\end{eqnarray}
Here, $\Delta$ denotes the collection of terms that only involve variables that are independent of $\mathbf{W}_k$. $\mathbf{Y}_{k}$ is the dual variable matrix for constraint C8. $\xi_k$, $\lambda$, $\psi_l$, $\theta_l$, $\tau_m$, $\omega_l$, and $\chi_l$ are the scalar dual variables for constraints C1--C7, respectively.
Then, the dual problem of (\ref{eqn:sdp_relaxation}) is given by\small
\begin{equation}\label{eqn:dual}
\underset{\underset{\theta_l,\lambda,\omega_l\ge 0,\mathbf{Y}_k\succeq \mathbf{0}}{\psi_l,\xi_k,\tau_m,\chi_l\ge 0}}{\maxo} \,\underset{\underset{E^{\mathrm{s}}_l,\phi}{\mathbf{W}_k\in\mathbb{H}^{N_{\mathrm{T}}}}}{\mino} \,\,{\cal L} \Big(\hspace*{-0.5mm}\mathbf{W}_k,E^{\mathrm{s}}_l,\phi,\hspace*{-0.5mm}\mathbf{Y}_k,\psi_l,
\xi_k,\tau_m,\lambda,\omega_l,\theta_l,\chi_l\hspace*{-0.5mm}\Big)
\end{equation}\normalsize
subject to $\sum_{l=1}^L \chi_l=1$. For the sake of notational simplicity, we define $\mathbf{\Upsilon}^*\triangleq\{\mathbf{W}_k^*,E^{\mathrm{s}*}_l,\phi^*\}$ and $\mathbf{\Xi}^*\triangleq\{\mathbf{Y}_k^*,\psi_l^*,\xi_k^*,\tau_m^*,\lambda^*,\omega_l^*,\theta_l^*,\chi_l^*\}$ as the sets of optimal primal and dual variables of (\ref{eqn:sdp_relaxation}), respectively. Now, we consider the following Karush-Kuhn-Tucker (KKT) conditions which are useful for the proof:
\begin{eqnarray}
\hspace*{-3mm}\mathbf{Y}_k^*\hspace*{-3mm}&\succeq&\hspace*{-3mm}\mathbf{0},\,\,\tau_m^*,\,\psi_l^*,\xi_k^*\ge 0,\,\forall k,\,\forall m,\,\forall l, \label{eqn:dual_variables}\\
\hspace*{-3mm}\mathbf{Y}_k^*\mathbf{W}_k^*\hspace*{-3mm}&=&\hspace*{-3mm}\mathbf{0},\label{eqn:KKT-complementarity}\\
\hspace*{-3mm}\mathbf{Y}_k^*\hspace*{-3mm}&=&\hspace*{-3mm}\mathbf{A}_{k}^*-\xi_k^*\frac{\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}},
\label{eqn:lagrangian_gradient}
\end{eqnarray}
where $\mathbf{A}_{k}^*$ is obtained by substituting the optimal dual variables $\mathbf{\Xi}^*$ into (\ref{eqn:A_k}). $\mathbf{Y}_k^*\mathbf{W}_k^*=\mathbf{0}$ in (\ref{eqn:KKT-complementarity}) indicates that for $\mathbf{W}^*_k\ne\mathbf{0}$, the columns of $\mathbf{W}^*_k$ are in the null space of $\mathbf{Y}^*_k$. Therefore, if $\Rank(\mathbf{Y}^*_k)=N_{\mathrm{T}}L-1$, then the optimal beamforming matrix $\mathbf{W}^*_k\ne \mathbf{0}$ must be a rank-one matrix. We now show by contradiction that $\mathbf{A}_k^*$ is a positive definite matrix with probability one in order to reveal the structure of $\mathbf{Y}^*_k$. Let us focus on the dual problem in (\ref{eqn:dual}). For a given set of optimal dual variables, $\mathbf{\Xi}^*$, power supply variables, $E^{\mathrm{s}*}_l$, and auxiliary variable, $\phi^*$, the dual problem in (\ref{eqn:dual}) can be written as
\begin{eqnarray}\hspace*{-2mm}\label{eqn:dual2}
\,\,\underset{\mathbf{W}_k\in\mathbb{H}^{N_{\mathrm{T}}}}{\mino} \,\, {\cal L}\Big(\hspace*{-0.5mm}\mathbf{W}_k,\phi^*,E^{\mathrm{s}*}_l,\mathbf{Y}_k^*,\psi_l^*,\xi_k^*,\tau_m^*,\lambda^*,\omega_l^*,\theta_l^*,\chi_l^*\hspace*{-0.5mm}\Big).
\end{eqnarray}
Suppose $\mathbf{A}_k^*$ is not positive definite, then we can choose $\mathbf{W}_k=r\mathbf{w}_k\mathbf{w}_k^H$ as one of the optimal solutions of (\ref{eqn:dual2}), where $r>0$ is a scaling parameter and $\mathbf{w}_k$ is the eigenvector corresponding to one of the non-positive eigenvalues of $\mathbf{A}_k^*$. We substitute $\mathbf{W}_k=r\mathbf{w}_k\mathbf{w}_k^H$ into (\ref{eqn:dual2}) which leads to
\begin{equation}
\underbrace{\sum_{k=1}^K\Tr(r\mathbf{A}_k^*\mathbf{w}_k\mathbf{w}_k^H)}_{\le 0}-r\sum_{k=1}^K\Tr\Big(\mathbf{w}_k\mathbf{w}_k^H
\big(\mathbf{Y}_k^*+\frac{\xi_k^*\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big)\Big)+\Delta.
\end{equation}
On the other hand, since the channel vectors $\mathbf{g}_m$ and $\mathbf{h}_k$ are assumed to be statistically independent, $\Tr\Big(\mathbf{w}_k\mathbf{w}_k^H\big(\mathbf{Y}_k^*+\frac{\xi_k^*\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big)\Big)>0$ holds with probability one. Hence, by letting $r\rightarrow \infty$, the term $-r\sum_{k=1}^K\Tr\Big(\mathbf{w}_k\mathbf{w}_k^H
\big(\mathbf{Y}_k^*+\frac{\xi_k^*\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big)\Big)\rightarrow -\infty$ and the dual optimal value becomes unbounded from below. However, the optimal value of the primal problem is non-negative for $\Gamma_{\mathrm{req}_k}>0$. Thus, strong duality does not hold, which leads to a contradiction. Therefore, $\mathbf{A}_k^*$ is a positive definite matrix with probability one, i.e., $\Rank(\mathbf{A}_k^*)=N_{\mathrm{T}}L$.
By exploiting (\ref{eqn:lagrangian_gradient}) and a basic inequality for the rank of matrices, we have
\begin{eqnarray}
\hspace*{-3mm}&&\hspace*{-2mm}\Rank(\mathbf{Y}^*_k)+\Rank\big(\xi_k^*\frac{\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big)\\
\hspace*{-3mm} \notag&\ge &\hspace*{-2mm}\Rank\big(\mathbf{Y}^*_k+\xi_k^*\frac{\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big)=\Rank(\mathbf{A}_k^*)=N_\mathrm{T}L\\
\hspace*{-3mm}&\Rightarrow &\hspace*{-2mm}
\Rank(\mathbf{Y}^*_k)\ge N_{\mathrm{T}}L-\Rank\big(\xi_k^*\frac{\mathbf{H}_k}{\Gamma_{\mathrm{req}_k}}\big).\notag
\end{eqnarray}
Thus, $\Rank(\mathbf{Y}^*_k)$ is either $N_\mathrm{T}L-1$ or $N_\mathrm{T}L$. Furthermore, $\mathbf{W}_k^*\ne\mathbf{0}$ is required to satisfy the minimum SINR requirement of IR $k$ in C1 for $\Gamma_{\mathrm{req}_k}>0$. Hence, $\Rank(\mathbf{Y}^*_k)=N_{\mathrm{T}}L-1$ and $\Rank(\mathbf{W}^*_k)=1$ hold with probability one. In other words,
the optimal joint beamformer $\mathbf{w}^*_k$ can be obtained by performing eigenvalue decomposition of $\mathbf{W}^*_k$ and selecting the principal eigenvector as the beamformer.
The construction of a quantitative, predictive, force-level theory of activated glassy structural relaxation at the level of atoms or molecules remains a grand challenge in statistical mechanics [1,2,3]. Recently, Mirigian and Schweizer formulated and applied a force-based dynamical theory that relates thermodynamics, structure and activated relaxation for colloidal suspensions [4,5], supercooled molecular liquids [4,6] and polymer melts [7] -- the “Elastically Collective Nonlinear Langevin Equation” (ECNLE) theory. Quantitative tractability for real materials is achieved based on an a priori mapping of chemical complexity [4] to a thermodynamic-state-dependent effective hard sphere fluid using experimental equation-of-state data. The basic relaxation event involves coupled large amplitude cage-scale hopping and a long range but low amplitude collective elastic distortion of the surrounding liquid, resulting in two inter-related, but distinct, barriers. The elastic barrier becomes very important in the deeply supercooled regime and grows much faster with cooling than the local cage barrier.
The initial formulation of ECNLE theory for rigid molecules is based on a quasi-universal mapping, is devoid of fit parameters, has no divergences at finite temperature or below random close packing, and accurately captures the alpha relaxation time over 14 decades [4,5]. Extension to polymer liquids is based on a disconnected Kuhn segment model [7]. To capture the wide variation of fragility in polymer melts, non-universality was introduced motivated by the system-specific nature of the nm-scale conformational dynamics required for segmental hopping [8]. Good results have been demonstrated for $T_g$, fragility and the temperature dependent segmental relaxation time.
ECNLE theory has also been extended and applied to other problems: spatially heterogeneous relaxation in free standing thin films [9,10,11], segmental relaxation in polymer nanocomposites [12], attractive glass and gel formation in dense sticky colloidal suspensions [13], the effect of random pinning in dense liquids [14], penetrant diffusion in supercooled liquids and glasses [15,16], and activated relaxation in dynamically-asymmetric 2-component mixtures [17].
In this article, we revisit the basics of ECNLE theory of 1-component liquids to further establish its physical picture and address new questions. After a brief review of key technical aspects in Section II, new numerical studies are presented in Section III that explore a possible universality of the dynamic transient localization length; alternative perspectives on the temperature-dependent barrier and effective volume fraction are also studied. Section IV analyzes the magnitude and temperature dependence of the particle-level cooperative displacement of the alpha process and an alternative measure of a growing cooperativity length scale. The latter is shown to be strongly correlated with the alpha time. An alternative continuum mechanics analysis of the elastic barrier and its consequences for the alpha time and cooperativity length scale is presented in Section V. The article concludes in Section VI with a discussion.
\section{ECNLE Theory and Chemical Mapping}
As relevant background, the present state of bulk liquid ECNLE theory is briefly reviewed. All aspects have been discussed in great detail in prior papers [4-8].
\begin{figure*}[htp]
\centering
{
\includegraphics[width=8cm]{Fig1a_bulk.pdf}
\label{fig:first_sub}
}
\bigskip
{
\includegraphics[width=8cm]{Fig1b.pdf}
\label{fig:second_sub}
}
\caption{Left panel: schematic of the fundamental relaxation event for spheres, which involves a local, large amplitude cage-scale hopping motion on the $\sim 3$ particle diameter length scale and a nonlocal, spatially long-range collective elastic motion to accommodate the local rearrangement. Cage scale hopping is described by the dynamic free energy as a function of particle displacement, which sets the amplitude of the long range elastic displacement field outside the cage. Various key length and energy scales are indicated. Right panel: numerical calculations of the mean alpha time (secs) for OTP liquid as a function of temperature (main frame) and inverse temperature (inset).}
\label{fig:1}
\end{figure*}
\subsection{Quasi-Universal ECNLE Theory of Spherical Particle Liquids}
ECNLE theory describes the activated relaxation of a tagged particle as a mixed local-nonlocal rare hopping event [6]. Figure 1 shows a cartoon of the key physical elements. The foundational quantity for a tagged spherical particle (diameter, $d$) liquid of packing or volume fraction $\Phi$ is the angularly-averaged instantaneous displacement (denoted, $r$) dependent dynamic free energy, $F_{dyn}(r)=F_{ideal}(r)+F_{cage}(r)$, the derivative of which determines the effective force on a moving particle due to its surroundings. The “ideal” term $\beta F_{ideal}(r)=-3\ln(r/d)$ favors unbounded diffusion or delocalization. The localizing “caging” contribution $F_{cage}(r)$ is constructed from knowledge of the equilibrium pair correlation function $g(r)$ or structure factor $S(k)$. It captures kinetic constraints on the nearest neighbor cage length scale defined from the location of the first minimum of $g(r)$ ($r_{cage}\approx 1.5d$). Large amplitude local hopping drives irreversible rearrangement, but is strongly coupled to (or "facilitated by") a spatially long-range collective elastic adjustment of all particles outside the cage needed to create the extra space required to accommodate a hop.
Key local lengths (see Fig.\ref{fig:1}) are the minimum and maximum of the dynamic free energy ($r_L$ and $r_B$, respectively), and the jump distance $\Delta r=r_B-r_L$; key energies are the local cage barrier height, $F_B$, and the harmonic curvature at the dynamic free energy minimum, $K_0$. The precise nature of the elastic fluctuation field ($u(r)$ in Fig.1) required to facilitate a cage scale hop is a priori unknown. As a technical approximation, the liquid outside the cage is treated as a continuum linear elastic material (following Dyre [18]), which allows calculation of the displacement field using continuum mechanics supplemented by a microscopic boundary condition [6]. The so-computed radially-symmetric single particle displacement field decays as an inverse square power law of distance [6]:
\begin{eqnarray}
u(r)=\Delta r_{eff}\frac{r_{cage}^2}{r^2}, \qquad r\geq r_{cage}
\label{eq:1}
\end{eqnarray}
The amplitude is set by the microscopically-determined mean cage expansion length, $\Delta r_{eff}$ [6]:
\begin{eqnarray}
\Delta r_{eff}\approx \frac{3\Delta r^2}{32\, r_{cage}} \leq r_L
\label{eq:2}
\end{eqnarray}
where $\Delta r \approx 0.2-0.4d$ and grows with density or cooling. The prefactor of 3/32 in Eq.(\ref{eq:2}) follows from assuming each spherical particle in the cage independently hops in a random direction by $\Delta r$.
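Plugging the representative values quoted above ($\Delta r \approx 0.3d$, $r_{cage}\approx 1.5d$) into Eq.(\ref{eq:2}) makes concrete how small the mean cage expansion is compared to the jump distance:

```python
# Numerical orientation for Eq.(2), in units of the particle diameter d,
# using representative values quoted in the text (not fitted quantities).
delta_r, r_cage = 0.3, 1.5
delta_r_eff = 3.0 * delta_r**2 / (32.0 * r_cage)
print(round(delta_r_eff, 4))   # -> 0.0056
```

so $\Delta r_{eff}$ is nearly two orders of magnitude smaller than $\Delta r$, consistent with the low-amplitude nature of the elastic displacement field.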
There are two ways to then compute the elastic barrier. One could invoke literal continuum mechanics, as done by Dyre in his seminal phenomenological approach [18]. However, in ECNLE theory the local cage and long range collective elastic aspects are intimately related. Given the former is described microscopically, for consistency prior work has invoked a particle-level calculation of the elastic barrier we refer to as "molecular Einstein-like". It corresponds to computing the elastic barrier by summing over all harmonic particle displacements outside the cage region which yields: [6]
\begin{eqnarray}
F_{elastic} &=& \rho\frac{K_0}{2}\int_{r_{cage}}^\infty dr 4\pi r^2 u^2(r)g(r)\nonumber\\
&\approx& 12K_0\Phi\Delta r_{eff}^2\left(\frac{r_{cage}}{d}\right)^3
\label{eq:3}
\end{eqnarray}
where $r$ is measured relative to the cage center and $K_0=3k_BT/r_L^2$. Note the long range nature of the integrand in Eq.(\ref{eq:3}), which decays as $\sim r^{-2}$; hence the total elastic barrier converges slowly to its full value, with the leading correction scaling as $\sim r^{-1}$.
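Because the integrand decays as $\sim r^{-2}$, the fraction of the elastic barrier captured within a cutoff radius $R$ is $1-r_{cage}/R$ (taking $g(r)\approx 1$ outside the cage). A quick numerical check of this slow convergence:

```python
import numpy as np

# Fraction of the full elastic barrier captured when the integral in Eq.(3)
# is truncated at radius R, taking g(r) ~ 1 outside the cage.  Closed form:
# 1 - r_cage/R; the trapezoidal sum below confirms it.
r_cage = 1.5                                # in units of d
for R in (5.0, 15.0, 150.0):
    r = np.linspace(r_cage, R, 200001)
    y = r**2 * (r_cage**2 / r**2) ** 2      # integrand ~ r^-2
    frac = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)) / r_cage**3
    print(R, round(frac, 3))                # -> 0.7, 0.9, 0.99
```

Even a cutoff of one hundred particle diameters misses about one percent of the barrier, illustrating the $\sim r^{-1}$ approach to the full value noted above.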
The sum of the coupled (and in general temperature and density dependent) local and elastic collective barriers determine the mean total barrier for the alpha relaxation process:
\begin{eqnarray}
F_{total} = F_B + F_{elastic}
\label{eq:4}
\end{eqnarray}
The elastic barrier increases much more strongly with increasing density or cooling than its cage analog, and dominates the growth of the alpha time as the laboratory glass transition is approached [6]. A generic measure of the average structural relaxation time follows from a Kramers calculation of the mean first passage time for hopping [6]. For barriers in excess of a few $k_BT$ one has: [5,6]
\begin{eqnarray}
\frac{\tau_\alpha}{\tau_s} = 1+ \frac{2\pi(k_BT/d^2)}{\sqrt{K_0K_B}}\exp{\frac{F_B+F_{elastic}}{k_BT}}
\label{eq:5}
\end{eqnarray}
where $K_B$ is the absolute magnitude of the barrier curvature in units of $k_BT/d^2$. The alpha time is expressed in units of a "short time/length scale" relaxation process (cage-renormalized Enskog theory), the explicit formula for which is given elsewhere [6,8]. Physically, it is meant to capture the alpha process in the absence of strong caging, defined as the parameter regime where no barrier is predicted (e.g., $\Phi < 0.43$ for hard spheres [19]). The latter condition corresponds to being below the naive mode coupling theory "transition", which in ECNLE theory is manifested as a smooth dynamic crossover [6,19,20].
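A quick numerical evaluation of Eq.(\ref{eq:5}) illustrates the exponential dominance of the total barrier; the barrier and curvature values below are illustrative placeholders, not ECNLE outputs for any specific material:

```python
import math

# Illustrative evaluation of Eq.(5): barriers in units of k_B*T, curvatures
# in units of k_B*T/d^2.  All numbers are placeholders chosen to show
# orders of magnitude only.
def alpha_time_ratio(F_B, F_elastic, K0=30.0, K_B=20.0):
    prefactor = 2.0 * math.pi / math.sqrt(K0 * K_B)
    return 1.0 + prefactor * math.exp(F_B + F_elastic)

print(f"{alpha_time_ratio(5.0, 1.0):.3g}")    # modestly supercooled
print(f"{alpha_time_ratio(10.0, 20.0):.3g}")  # deeply supercooled
```

The second case is roughly ten orders of magnitude slower than the first, driven almost entirely by the growth of the exponent $F_B+F_{elastic}$, in line with the statement above that the elastic barrier dominates the growth of the alpha time on cooling.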
\subsection{Mappings for Molecular and Polymeric Liquids}
The theory is rendered quantitatively predictive for rigid molecular liquids via a mapping [4,5] to an effective hard sphere fluid guided by the requirement that it exactly reproduces the equilibrium dimensionless density fluctuation amplitude (compressibility) of the liquid [21], $S_0(T) = \rho k_BT\kappa_T$. This long wavelength thermodynamic quantity sets the amplitude of nm-scale density fluctuations, and follows from the experimental equation-of-state (EOS). The mapping relation is: [4]
\begin{eqnarray}
S_0^{HS}&=&\frac{(1-\Phi)^4}{(1+2\Phi)^2}\equiv S_{0,exp}=\rho k_BT\kappa_T\nonumber\\
&\approx &\frac{1}{N_s}\left(-A+\frac{B}{T} \right)^{-2}
\label{eq:6}
\end{eqnarray}
The first equality employs Percus-Yevick (PY) integral equation theory [21] for hard sphere fluids. The final equality is an accurate analytic description of experimental data derived previously [4]. Temperature enters all 3 factors in $S_0(T)$. This mapping determines a material-specific, temperature-dependent effective hard sphere packing fraction, $\Phi_{eff}(T)$. From Eq.(\ref{eq:6}) one has the explicit expression:
\begin{eqnarray}
\Phi_{eff}(T;A,B,N_s)&=& 1 + \sqrt{S_0^{expt}(T)} \nonumber\\
&-& \sqrt{S_0^{expt}(T) + 3\sqrt{S_0^{expt}(T)}}
\label{eq:7}
\end{eqnarray}
Thus, in practice, 4 known chemically-specific parameters enter in the minimalist mapping [4,5,7,8]: $A$ and $B$ (interaction site level entropic and cohesive energy EOS parameters, respectively), the number of elementary sites that define a rigid molecule, $N_s$ (e.g., $N_s=6$ for benzene), and hard sphere diameter, $d$. Knowledge of $\Phi_{eff}(T)$ allows $g(r)$ and $S(k)$ to be computed, which determines $F_{dyn}(r)$, from which all dynamical results follow. With this mapping, ECNLE theory can make alpha time predictions with no adjustable parameters. The theory has accurately predicted the alpha time over 14 decades for nonpolar organic molecules, and with less quantitative accuracy for hydrogen-bonding molecules (e.g., glycerol).[4,5]
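Eq.(\ref{eq:7}) is the physical root obtained by inverting the PY relation in Eq.(\ref{eq:6}) for $\Phi$; a short self-consistency check (pure Python; the values of $S_0$ are chosen only for illustration):

```python
import math

def phi_eff(S0):
    """Eq.(7): effective hard-sphere packing fraction from the
    dimensionless compressibility S0."""
    s = math.sqrt(S0)
    return 1.0 + s - math.sqrt(S0 + 3.0 * s)

def S0_PY(phi):
    """Percus-Yevick hard-sphere S(k=0), first equality in Eq.(6)."""
    return (1.0 - phi) ** 4 / (1.0 + 2.0 * phi) ** 2

for S0 in (0.01, 0.02, 0.05):
    phi = phi_eff(S0)
    assert abs(S0_PY(phi) - S0) < 1e-12   # round trip: inversion is exact
    print(round(phi, 4))                  # smaller S0 -> larger Phi_eff
```

The printed values confirm that a smaller compressibility maps to a larger effective packing fraction, so cooling (which lowers $S_0$) increases $\Phi_{eff}(T)$, as the mapping requires.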
Figure 1 shows mean alpha relaxation time calculations for orthoterphenyl (OTP) in two temperature representations; for this system, agreement with experiment with no adjustable parameters has been documented [4,5]. Detailed analytic and numerical analyses of the theoretical form of the temperature dependence of the alpha time have been performed [4]. Over various restricted temperature or time scale regimes, the theory is consistent with essentially all of the diverse forms in the literature, including the entropy crisis VFT form [3], the dynamic facilitation parabolic model [22], the empirical two-barrier form [23,24], and the MCT critical power law [25]; see ref 4 for a detailed discussion.
Polymers have additional complexities associated with conformational isomerism and chain connectivity. As a minimalist model, the polymer liquid is replaced by a fluid of disconnected Kuhn-sized segments modeled as non-interpenetrating hard spheres composed of a known number of interaction sites, $N_s$, and an effective hard core diameter [7]. Polymer-specific errors must be incurred by such a mapping. To address this, a one-parameter non-universal version of ECNLE theory has been developed based on the hypothesis that the amount of cage expansion depends on sub-nm chemical (conformational) details that are coarse-grained over in the effective hard sphere description [8]. Nonuniversality enters via a modified jump distance, $\Delta r \rightarrow \lambda\Delta r$, where the constant $\lambda$ is adjusted to \emph{simultaneously} provide the best theoretical description of $T_g$ and fragility on a polymer-specific basis [8]. From Eqs.(\ref{eq:2}) and (\ref{eq:3}), this results in $F_{elastic}\rightarrow \lambda^4 F_{elastic}$. Hence, the relative importance of the local versus collective elastic barrier acquires a polymer-specificity. Very high (very low) fragility polymers correspond to $\lambda$ values greater (smaller) than the universal model value of unity. Hence, within ECNLE theory, increasing $\lambda$ and dynamic fragility corresponds to a more cooperative alpha process, as defined by the relative importance of the collective elastic contribution to relaxation [8].
In this article, we present representative calculations for a subset of organic molecules and polymer melts previously studied [4,7,8]. Specifically [8]: polystyrene (PS; fragility $m \sim 110$) and orthoterphenyl (OTP; $m \sim 82$), for which $\lambda_{PS}=\lambda_{OTP}=1$; very high fragility ($m \sim 142$) polycarbonate (PC), for which $\lambda_{PC}=\sqrt{2}$; and low fragility ($m\sim 46$) polyisobutylene (PIB), for which $\lambda_{PIB}=0.47$.
\section{Temperature Dependence of Short and Long Time Dynamics and Effective Volume Fraction}
\subsection{Apparent Plateau Mean Square Displacement}
The single particle mean square displacement (MSD) at intermediate time scales, where particles are approximately "transiently localized", is a quantity of interest in simulation [3,26] and experiment (e.g., quasi-elastic neutron scattering [27]). In a log-log plot, the displacement corresponding to the minimum non-Fickian slope of the MSD serves as an objective and practical measure of a "localization length" [3,28]. Consistent with intuition, it has been numerically shown based on stochastic trajectory solution of NLE theory [29] that this condition corresponds to the mean displacement, $R^*$, where the cage restoring force of the dynamic free energy is a maximum. Analysis of NLE theory yields the analytic result [30]:
\begin{eqnarray}
R^* \varpropto \sqrt{d\, r_L}
\label{eq:8}
\end{eqnarray}
This practical measure of a dynamic localization length is not the same as the literal minimum of the dynamic free energy at $r_L$.
Calculations of $R^*$ as a function of temperature for several systems are shown in Fig.2. Possible universality based on a high temperature crossover temperature, $T_A$, is explored, where the latter is defined via when the total barrier is either 1 or 3 $k_BT$. The doubly normalized plot in Fig.2 reveals that over a very wide range of reduced temperatures (corresponding to the alpha time changing by more than 10 decades), a good collapse is found which depends little on chemistry or which criterion is adopted for $T_A$. Using a prior analytic result [19,30] of NLE theory, $r_L\approx 30d\exp(-12.5\Phi)$, together with Eq.(\ref{eq:8}), one has:
\begin{eqnarray}
\left(\frac{R^*}{R_A^*} \right)^2 \approx \frac{r_L}{r_{L,A}}\approx \exp(-12.5(\Phi(T) - \Phi(T_A)) )
\label{eq:9}
\end{eqnarray}
These results can be potentially tested against simulation and experiment. Note that while the numerical data in Figure 2 can be reasonably described as linear in temperature over the narrow range probed in simulation, the functional form is nonlinear at the lower temperatures of primary experimental interest. This cautions against linear extrapolation of high temperature simulation data.
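As a numerical sketch of how Eq.(\ref{eq:9}) can be evaluated (the function names and the illustrative volume fractions below are our own choices, not fitted values):

```python
import math

def r_loc(phi, d=1.0):
    """Analytic NLE estimate of the localization length, r_L ~ 30*d*exp(-12.5*phi)."""
    return 30.0 * d * math.exp(-12.5 * phi)

def rstar_sq_ratio(phi, phi_A):
    """Eq (9): (R*/R*_A)^2 ~ r_L/r_{L,A} = exp(-12.5*(phi - phi_A))."""
    return r_loc(phi) / r_loc(phi_A)

# Illustrative: cooling from phi_A = 0.53 to phi = 0.60 shrinks the
# squared plateau displacement by exp(-12.5*0.07) ~ 0.42 -- a strongly
# nonlinear (in phi, and hence in T) reduction.
ratio = rstar_sq_ratio(0.60, 0.53)
```

The exponential dependence on $\Phi(T)$ is what makes linear extrapolation of high temperature data unreliable.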
\begin{figure}[htp]
\includegraphics[width=8cm]{Fig2.pdf}
\caption{\label{fig:2}(Color online) Square of the displacement of maximum cage restoring force normalized by its value at the high temperature reference state as a function of reduced temperature for PS, OTP, PIB, PC and two choices of $T_A$ corresponding to the low barrier states $\Phi_A=0.50$ and $\Phi_A=0.53$. The curves through the points are guides to the eye. }
\end{figure}
\subsection{Dynamic Barriers}
Fundamental connections of the alpha relaxation time and measures of short time dynamics have been predicted by ECNLE theory in prior studies [4,6]. Recently, Simmons [31,32] suggested based on simulations performed at relatively high temperatures that a roughly exponential, but non-universal, connection exists between the effective barrier deduced from the logarithm of the alpha time and the MSD in the pseudo-plateau regime if both quantities are non-dimensionalized by a high temperature crossover value. In the simulations, the latter is defined as the temperature $T_A$ where there is $\sim 10\%$ deviation from Arrhenius relaxation.
Motivated by the above, Figure 3 plots our calculations for the temperature-dependent total barrier divided by its value at $T_A$ against the normalized square of $R^*$. For PS, PC, and OTP (not plotted; identical to PS), the results are well fit (including all of the deeply supercooled regime) by:
\begin{eqnarray}
\frac{F_{total}(T)}{F_{total}(T_A)} = -a + b\exp\left[c\left(\frac{R_A^*}{R^*} \right)^2 \right]
\label{eq:10}
\end{eqnarray}
where $a$, $b$, $c$ are positive system-specific constants, and $c$ increases monotonically with fragility. Thus, ECNLE theory does predict a specific exponential connection between the barrier and $R^*$ if expressed in dimensionless form. Note that for the very low fragility PIB ($m\sim 46$), the plot is nearly linear up to a barrier of $\sim 10k_BT$.
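A minimal numerical sketch of Eq.(\ref{eq:10}) (the constants below are illustrative placeholders, not the fitted values of Fig.3):

```python
import math

def barrier_ratio(x, a, b, c):
    """Eq (10): F_total(T)/F_total(T_A) = -a + b*exp(c*x),
    where x = (R*_A/R*)^2 and a, b, c are positive constants."""
    return -a + b * math.exp(c * x)

# The normalized barrier must grow monotonically as the plateau
# displacement R* shrinks on cooling (i.e., as x increases).
vals = [barrier_ratio(x, a=1.0, b=2.0, c=0.54) for x in (1.0, 2.0, 3.0)]
```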
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig3.pdf}
\caption{\label{fig:3}(Color online) Total dynamic barrier divided by its value at the high temperature reference state versus $(R_A^*/R^*)^2$ for PS, PC, PIB at two values of the high temperature reference state volume fraction. The dotted curve is a fit to the red data points: $3.85+2.77\exp(0.54(R_A^*/R^*)^2)$.}
\end{figure}
Various theoretical models based on different physics generally correspond to different forms of the temperature dependence of the effective barrier [37]. Mauro et al [33] have proposed a phenomenological model, which they claim can fit experimental data over many decades, based on a configurational entropy perspective significantly modified in a manner motivated by constraint theory ideas typically employed for network glass-formers. This model corresponds to an effective barrier in thermal energy units (the logarithm of the non-dimensionalized shear viscosity [33]) that grows exponentially with a material-specific energy scale divided by the thermal energy; hence, the structural relaxation time is roughly a double exponential of inverse temperature.
With the above motivation, the main frame of Figure 4a plots ECNLE theory barrier calculations for OTP in a log-linear inverse temperature Angell representation. The individual contributions to the total barrier are not very exponential in inverse temperature. However, surprisingly, the total barrier over a wide range of barrier heights, including the deeply supercooled regime (total barrier $\sim 6-32\,k_BT$, corresponding to alpha times $\sim 10$ ns to $100$ s), is not far from an apparent Arrhenius form. This seems to us at least partially accidental, given the different physical processes underlying the local and collective elastic barriers in ECNLE theory. The inset of Fig.4a buttresses this view since PS, PC and PIB do not show as good apparent Arrhenius growth of the total barrier as does OTP.
\begin{figure}[h]
\center
\includegraphics[width=8cm]{Fig4a.pdf}\\
\includegraphics[width=8cm]{Fig4b.pdf}
\caption{\label{fig:4}(Color online) (a) Local, elastic and total barrier as a function of inverse temperature normalized by the bulk $T_g$ for the OTP liquid. Inset shows the total barriers for all 4 systems of present interest. (b) OTP results of panel (a) plotted versus temperature in Kelvin; inset shows the corresponding location of the minimum of the dynamic free energy.}
\end{figure}
Figure 4b plots the same OTP results in the less common linear in temperature format. Curiously, the elastic and total barriers are reasonably exponential in this representation. The inset of Fig.4b shows the corresponding value of $r_L$ for OTP, which is also roughly exponential. This behavior has physical meaning given the connection of $r_L$ with the effective volume fraction in Eq. (\ref{eq:9}), and the near linear growth of the latter with cooling as established in the next sub-section.
\subsection{Mapped Volume Fraction and Dynamic Crossovers}
The key quantity needed to treat thermal liquids in ECNLE theory is the temperature-dependent effective hard sphere volume fraction of Eq.(7). Figure 5a shows calculations of this quantity for the four systems of present interest in the standard inverse temperature representation. The effective volume fraction grows sub-linearly with inverse temperature, which is perhaps not unexpected given Eqs. (6) and (7). Figure 5b plots the same results versus temperature. Rather surprisingly, the behavior is remarkably simple, following an almost linear growth over a huge temperature range corresponding to a total barrier growth from $\sim 1$ to $32\,k_BT$:
\begin{eqnarray}
\Phi_{eff}(T)\approx 0.5 + K(T_{ref}-T)
\label{eq:11}
\end{eqnarray}
where $K$ and $T_{ref}$ depend on material. This implies that if the quantities that enter Eq. (\ref{eq:7}) are expanded through linear order in $T$, then the content of the mapping is almost fully captured. We note that the slope for OTP in Fig.5b is $\sim 6\times 10^{-4}$ $K^{-1}$, nearly identical to its linear expansion coefficient of $\sim 7\times 10^{-4}$ $K^{-1}$. Precise agreement should not be expected since the mapping is based on the dimensionless compressibility, which involves three temperature-dependent quantities. On the other hand, the naive idea that under the isobaric (1 atm) conditions of interest the temperature dependence of $\Phi_{eff}(T)\equiv \rho(T)d_{eff}^3(T)$ is mainly due to thermal expansion (an EOS property) seems reasonable.
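Eq.(\ref{eq:11}) is trivial to evaluate; a sketch with an OTP-like slope (the reference temperature below is a placeholder, not a fitted value):

```python
def phi_eff(T, K, T_ref):
    """Eq (11): Phi_eff(T) ~ 0.5 + K*(T_ref - T); K and T_ref are material-specific."""
    return 0.5 + K * (T_ref - T)

# With the OTP-like slope K ~ 6e-4 per K, cooling by 100 K raises the
# effective volume fraction by ~0.06 -- enough, per Eq (9), to change
# the plateau displacement and barriers dramatically.
K = 6.0e-4
dphi = phi_eff(300.0, K, T_ref=400.0) - phi_eff(400.0, K, T_ref=400.0)
```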
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig5a.pdf}\\%\vspace{-40pt}
\includegraphics[width=8cm]{Fig5b.pdf}
\caption{\label{fig:5}(Color online) Effective hard sphere volume fraction versus (a) $1000/T$ and (b) $T$ for the 4 systems of interest based on the thermal mapping.}
\end{figure}
The implications of Eq.(\ref{eq:11}) for dynamics are interesting. First note that in the standard representation of Fig.5a the smooth curve could be crudely viewed as consisting of high and low temperature linear branches. With such a construction (not shown), for PS we find the lines intersect at $T^*\sim 540$ $K$, corresponding to $T^*/T_g\sim 1.25$. A similar exercise for OTP and PC yields $T^*/T_g\sim 1.17$ and 1.3, respectively. The absolute value of $T^*/T_g$, and its reduction with increasing fragility, agrees well with trends of experimentally-deduced dynamical crossover temperatures [3,34,35,36]. The latter are based on empirically fitting the ideal MCT critical power law or other functions to alpha time data plotted as a function of inverse temperature. Thus, empirically, one is tempted to associate the smooth thermodynamic $T^*$ crossover as underpinning the dynamical crossover. This perspective is reinforced by examining dynamical properties. A representative example for PS is shown in Fig.6. In either the $T$ or $T^{-1}$ plotting formats, a crossover in the local barrier is found at $\sim 520-540$ $K$, nearly identical to the $T^*$ value found from Fig.5a. A caveat is that although dynamic properties plotted versus $T$ show the crossover, the effective volume fraction of Fig.5b does not.
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig6a.pdf}\\%\vspace{-40pt}
\includegraphics[width=8cm]{Fig6b.pdf}
\caption{\label{fig:6}(Color online) Local cage barrier versus (a) $T$ and (b) $1000/T$ for PS melt. The two blue straight lines show two roughly linear regimes, and their extrapolated intersection occurs at $\sim 515$ $K$.}
\end{figure}
\section{Measures of Cooperativity}
At present, ECNLE theory focuses on average dynamic properties. Explicit space-time dynamic heterogeneity (DH) is not addressed. However, the concept of "cooperativity" is not the same as DH. It can be analyzed in the ECNLE theory framework.
\subsection{Cooperative Displacement}
Real space analyses of simulations [37,38] have attempted to identify the number of particles involved in a relaxation event and, even more objectively, the total particle mean square displacement associated with a re-arrangement, defined here as $MSD^*$. Schall, Spaepen and Weitz [39] experimentally extracted the full displacement field associated with activated re-arrangements in glassy colloidal suspensions. They found a picture akin to that of ECNLE theory, where $\sim 12$ particles in a compact "cage" region of space move by a large amount and are surrounded by a long range collective displacement field. From the observed displacements, the total particle MSD can potentially be measured.
Based on the coupled local-nonlocal physical picture of alpha relaxation in ECNLE theory, the "number of re-arranging particles" is ill-defined (in contrast to other models such as Adam-Gibbs [40] and RFOT [41], which involve compact clusters). However, we can compute $MSD^*$. The cage consists of a central particle plus $\sim 12$ nearest neighbors. In NLE theory, each particle is envisioned to move a distance $\Delta r$ during the alpha relaxation event. The jump distance increases from $\sim 0.25$ to $0.35$ particle diameters upon cooling from the lightly supercooled regime to $T_g$ [6]. Hence, the local component is:
\begin{eqnarray}
MSD^*_{cage}\approx 13\Delta r^2 \approx (0.8-1.6)d^2
\label{eq:12}
\end{eqnarray}
The collective elastic fluctuation contribution corresponds to a total displacement of:
\begin{eqnarray}
MSD^*_{elastic} &=&4\pi\rho\int_d^{\infty}dr r^2\left[\Delta r_{eff}\frac{r_{cage}^2}{r^2} \right]^2 \nonumber\\
&=& 24\Phi\left(\frac{r_{cage}}{d} \right)^4\left(\frac{3\Delta r^2}{32r_{cage}d} \right)^2d^2 \nonumber\\
&\approx& 0.71\left(\frac{\Delta r}{d} \right)^4\Phi d^2 \approx (0.003-0.011)\Phi d^2
\label{eq:13}
\end{eqnarray}
This is far smaller than the local hopping contribution. Hence, the total linear displacement $\sqrt{MSD^*_{total}}$ is $\sim 1-2$ particle diameters, grows weakly with cooling, and is dominated by local physics even though the long range elastic effects make a large contribution to the activation barrier. The obtained modest value of $\sqrt{MSD^*_{total}}$ does seem reasonable compared to simulation studies [37,38,42].
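The relative magnitudes in Eqs.(\ref{eq:12})-(\ref{eq:13}) can be checked directly; a sketch in units of $d^2$ (the volume fraction used is illustrative):

```python
# Local cage and collective elastic contributions to the total particle
# MSD of an alpha event, Eqs (12)-(13); dr is the jump distance in units
# of d (~0.25 lightly supercooled, ~0.35 near Tg).
def msd_cage(dr):
    return 13.0 * dr ** 2            # central particle + ~12 neighbors

def msd_elastic(dr, phi):
    return 0.71 * dr ** 4 * phi      # long range elastic field, Eq (13)

local = msd_cage(0.35)               # ~1.6 d^2 near Tg
elastic = msd_elastic(0.35, 0.6)     # ~0.006 d^2: negligible by comparison
```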
\subsection{Cooperativity Length Scale}
Since collective elastic effects involve a scale-free displacement field, there is no intrinsic length scale in the usual sense. However, a cooperativity length can be defined by asking a question recently explored in studies of thin film heterogeneous dynamics [43,44,45,46]. There, one can define a length scale as the distance from the surface where some pre-determined fraction of the bulk alpha relaxation time is recovered. The analog of this idea in the context of bulk ECNLE theory corresponds to adjusting the upper limit in Eq.(3) to define a length-scale-dependent elastic barrier within a spherical region of radius varying from $r_{cage}$ to $\xi_{bulk}$:
\begin{eqnarray}
F_{elastic}(\xi_{bulk}) &=& 4\pi\rho\int_{r_{cage}}^{\xi_{bulk}}dr r^2 g(r)\left[\frac{K_0u^2(r)}{2} \right]\nonumber\\
&=& F_{elastic}^{bulk}\left[1-\frac{r_{cage}}{\xi_{bulk}} \right]
\label{eq:14}
\end{eqnarray}
Note the slow inverse in distance decay to its asymptotic value. From this, a cooperativity length scale is defined as when a fixed percentage ($C$) of the bulk alpha time is recovered:
\begin{eqnarray}
\ln\left(\frac{\tau_\alpha(\xi_{bulk})}{\tau_\alpha^{bulk}} \right) &\equiv& \ln C\approx\frac{F_{elastic}(\xi_{bulk})-F_{elastic}^{bulk}}{k_BT} \nonumber\\
&=& -\frac{r_{cage}}{\xi_{bulk}}\frac{F_{elastic}^{bulk}}{k_BT},
\label{eq:15}
\end{eqnarray}
where $\xi_{bulk}$ is proportional to the bulk elastic barrier. Prior work argued that fragility is dominated by collective elasticity, which is the origin of "cooperativity" [4,6,8] in ECNLE theory. From Eq.(15) one can write:
\begin{eqnarray}
\xi_{bulk} \approx -\frac{r_{cage}}{\ln C}\frac{F_{elastic}^{bulk}}{k_BT}=-\frac{r_{cage}}{\ln C}\frac{12\Phi K_0\Delta r_{eff}^2}{k_BT}\left(\frac{r_{cage}}{d} \right)^3.
\label{eq:16}
\end{eqnarray}
Figure 7 shows sample calculations of $\xi_{bulk}$ for PS and OTP (they are almost identical) based on the criteria $C=0.5$ and $0.8$. This cooperativity length grows strongly with cooling, and is well described by a cubic polynomial. For the 50\% criterion, $\xi_{bulk} \sim 30d$ at the laboratory $T_g$. The inset of Figure 7 shows the analogous results for the hard sphere fluid. Concerning the large cooperativity lengths in Figure 7, recall that the emergence of the collective elastic barrier as an important effect begins around a crossover volume fraction of $\sim 0.57-0.58$ [6], where $\xi_{bulk}$ is relatively small. For example, at $\Phi\sim 0.58$, the inset of Figure 7 shows that $\xi_{bulk} \sim 6d$ for $C=0.5$. To place this value in context, we note that ECNLE theory predicts for PS parameters that at $\Phi \sim 0.58$ the alpha time is $\sim 200$ ns. This time scale lies in the practical dynamical crossover regime deduced experimentally for fragile liquids ($\sim 10^{-7}$ s) [35,36]. Importantly, it is essentially the longest time scale that has been probed in molecular dynamics (MD) simulation. Hence, since existing MD simulations cannot access the deeply supercooled regime where the collective elastic effects become dominant, the molecular cooperativity lengths they can probe are modest, perhaps no more than $\sim 4-6$ particle diameters.
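Eq.(\ref{eq:16}) is simple enough to evaluate directly; a sketch (the $14\,k_BT$ elastic barrier is an illustrative value chosen to reproduce the Fig.7 magnitude, not a computed one):

```python
import math

def xi_bulk(F_elastic_over_kT, r_cage_over_d, C):
    """Eq (16): length (in units of d) at which a fraction C of the bulk
    alpha time is recovered, for the molecular-Einstein (1/r^2) field."""
    return -r_cage_over_d / math.log(C) * F_elastic_over_kT

# An elastic barrier of ~14 kT with r_cage = 1.5 d and the C = 0.5
# criterion gives xi_bulk ~ 30 d, the Fig. 7 magnitude near Tg; the
# stricter C = 0.8 criterion gives an even larger length.
xi = xi_bulk(14.0, 1.5, 0.5)
```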
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig7.pdf}
\caption{\label{fig:7}(Color online) Main frame: Cooperativity length scales defined by the criteria $C= 0.5$ and $0.8$ as a function of inverse temperature normalized by the bulk $T_g$ for OTP and PS. Inset shows the underlying hard sphere fluid results. The dash-dot curves in the main frame are fits to a cubic polynomial.}
\end{figure}
\subsection{Time-Length Scale Connection}
One can ask if a simple connection exists between the cooperativity length and alpha time, or its natural logarithm which defines an effective barrier. This question is of prime interest in diverse glass physics theories [3,26,47]. Each theory typically has a (growing) length scale, with a distinct physical meaning or origin, and often posits a specific power law connection between the effective barrier and this length scale via expressions such as:
\begin{eqnarray}
\ln\left(\frac{\tau_\alpha}{\tau_0}\right) \varpropto \frac{Barrier}{k_BT} \varpropto \left(\xi/d\right)^\nu \varpropto \frac{\left(\xi/d\right)^\nu}{k_BT} \varpropto \left(\frac{\xi/d}{k_BT}\right)^\nu
\label{eq:17}
\end{eqnarray}
Whether ECNLE theory obeys any of the above three relations is not a priori obvious given the many different microscopic quantities that enter the alpha time calculation and the presence of two barriers with distinct density and temperature dependences.
We have explored the above question for several thermal liquids and choices of the criterion parameter $C$. Remarkably, we generically find that all forms in Eq.(17) can represent our results for the alpha time extremely well, typically over 12-15 orders of magnitude in time. Figure 8 shows representative results for the plotting format associated with the final proportionality in Eq.(17). The apparent exponent is $3/4$, and works equally well for two different $C$ values and different chemical species. The first form in Eq.(17) works just as well (not shown), with a slightly larger exponent of 0.8. Thus, we robustly find that a single activated time-length scale relation holds over essentially the entire temperature regime (lightly to deeply supercooled), with an effective barrier scaling in a weakly sub-linear manner with the cooperativity length. Given Eq.(16), one might think this is not surprising. But recall that the alpha time and total barrier involve both local and long range elastic contributions, which have very different temperature dependences and relative importances that can vary widely for polymers of diverse fragilities. We note the recent interesting finding from simulations [45,46] of free standing thin films that the bulk alpha time varies exponentially with an effective barrier that grows with roughly one power of a length scale that defines the characteristic width of the mobility gradient near the vapor interface.
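The fitting exercise behind Fig.8 can be sketched as a log-log regression; the data below are synthetic, constructed only to show that the procedure recovers a known exponent:

```python
import math

def apparent_exponent(xs, taus):
    """Slope of log(ln tau) versus log(x), with x = xi/(d*T): the
    apparent exponent nu in ln(tau) ~ x^nu (final form of Eq (17))."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(math.log(t)) for t in taus]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den   # least-squares slope

# Synthetic data obeying ln(tau) = x^0.75 exactly returns nu = 0.75.
xs = [1.0 + 0.2 * i for i in range(1, 11)]
taus = [math.exp(x ** 0.75) for x in xs]
nu = apparent_exponent(xs, taus)
```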
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig8.pdf}
\caption{\label{fig:8}(Color online) Logarithm of bulk relaxation time as a function of $(\xi_{bulk}/d/T)^{0.75}$ for two systems and two criteria based on the molecular Einstein approach to computing the elastic barrier. Straight dashed lines are fits to the numerical calculations.}
\end{figure}
\section{Alternative Continuum Mechanics Calculation of the Elastic Barrier}
\subsection{Bulk Analysis and Comparison to Einstein Model Analog}
The original motivation for extending the local NLE theory of hopping [6] to include collective elastic effects was the phenomenological "shoving model" of Dyre [18]. He derived the displacement field of Eq.(1), albeit with an empirically adjustable amplitude. The elastic energy was computed assuming a literal continuum elastic picture, not the molecular Einstein perspective of ECNLE theory. In our notation, the former corresponds to strain and stress fields in spherical coordinates given by:
\begin{eqnarray}
\varepsilon_{rr}(r)&=&\frac{\partial u(r)}{\partial r}=-\frac{2r_{cage}^2\Delta r_{eff}}{r^3}, \nonumber\\
\varepsilon_{\theta\theta}(r) = \varepsilon_{\phi\phi}(r) &=& \frac{u(r)}{r} = \frac{r_{cage}^2\Delta r_{eff}}{r^3} \nonumber\\
\sigma_{rr}(r)=2G\varepsilon_{rr}(r)&,& \sigma_{\theta\theta}(r)=\sigma_{\phi\phi}(r) = 2G\varepsilon_{\theta\theta}(r) = -G\varepsilon_{rr}(r)
\label{eq:18}
\end{eqnarray}
where $G$ is the high frequency dynamic shear modulus. The strain energy, identified as the elastic barrier, is then:
\begin{eqnarray}
U_e &=&\frac{4\pi}{2}\int_{r_{cage}}^\infty dr r^2\left(\sigma_{rr}(r)\varepsilon_{rr}(r) + 2\sigma_{\theta\theta}(r)\varepsilon_{\theta\theta}(r) \right) \nonumber\\
&=& 8\pi G\Delta r_{eff}^2 r_{cage}
\label{eq:19}
\end{eqnarray}
This basic form is similar to Eq. (\ref{eq:3}) with three differences: (i) the numerical prefactor, (ii) the macroscopic shear modulus replaces the single particle spring constant $K_0$, and (iii) the integrand decays not as $r^{-2}$, as in the molecular Einstein model, but much more quickly, as $r^{-4}$.
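The radial integral behind Eq.(\ref{eq:19}) can be verified numerically; a sketch (the values of $G$, $r_{cage}$ and $\Delta r_{eff}$ are arbitrary):

```python
import math

# Integrate the continuum strain energy density of the displacement
# field u(r) = r_cage^2 * dr_eff / r^2 from r_cage outward; the result
# should approach U_e = 8*pi*G*dr_eff^2*r_cage of Eq (19).
G, r_cage, dr_eff = 1.0, 1.5, 0.1
A = r_cage ** 2 * dr_eff

def integrand(r):
    e_rr, e_tt = -2.0 * A / r ** 3, A / r ** 3     # strains, Eq (18)
    dens = G * (e_rr ** 2 + 2.0 * e_tt ** 2)       # shear energy density
    return 4.0 * math.pi * r ** 2 * dens

n, R = 20000, 2000.0                               # log grid, large cutoff
rs = [r_cage * (R / r_cage) ** (i / n) for i in range(n + 1)]
U = sum(0.5 * (integrand(a) + integrand(b)) * (b - a)
        for a, b in zip(rs, rs[1:]))               # trapezoidal rule
exact = 8.0 * math.pi * G * dr_eff ** 2 * r_cage
```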
To establish the consequences of the above differences for bulk relaxation, we adopt an accurate analytic formula for $G$ derived in prior NLE theory studies [5,6,30]:
\begin{eqnarray}
G = \frac{9\Phi k_BT}{5\pi r_L^2d}=\frac{3}{5\pi}\Phi\frac{K_0}{d}
\label{eq:20}
\end{eqnarray}
Substituting Eq.(\ref{eq:20}) into Eq.(\ref{eq:19}) gives
\begin{eqnarray}
U_e &=& F_{elastic}^{bulk}\frac{2d^2}{5r_{cage}^2}\approx\frac{8}{45}F_{elastic}^{bulk}\nonumber\\
&\approx& \frac{F_{elastic}^{bulk}}{6}, \quad \mbox{for} \quad r_{cage}\sim 1.5d
\label{eq:21}
\end{eqnarray}
Hence, results almost identical to those of the molecular Einstein approach are obtained, to within a nearly constant numerical prefactor. For the bulk relaxation time and $T_g$ there are no conceptual differences between using continuum mechanics versus molecular Einstein ideas to compute the elastic barrier.
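The constant prefactor of Eqs.(\ref{eq:19})-(\ref{eq:21}) is a one-line check:

```python
# Ratio of continuum to molecular-Einstein elastic barriers implied by
# Eqs (19)-(21): a constant 2d^2/(5*r_cage^2), independent of temperature.
def continuum_over_einstein(r_cage_over_d):
    return 2.0 / (5.0 * r_cage_over_d ** 2)

ratio = continuum_over_einstein(1.5)   # = 8/45, roughly 1/6
```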
\subsection{Cooperativity Length Scale}
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig9.pdf}
\caption{\label{fig:9}(Color online) Main frame: cooperativity length scale based on using $C= 0.5$ and $0.8$ as a function of inverse temperature normalized by the bulk $T_g$ for the PS liquid based on the continuum mechanics approach to computing the elastic barrier in ECNLE theory. Inset shows the corresponding hard sphere fluid results.}
\end{figure}
Given that the elastic energy decays much faster ($\sim r^{-4}$) in the continuum mechanics approach compared to the molecular Einstein analog ($\sim r^{-2}$), there must be significant differences for $\xi_{bulk}$. Figure 9 presents representative calculations analogous to those in Fig.7. One sees a massive reduction in the length scale, and a temperature dependence that is now roughly linear in inverse temperature. The numerical results are easily understood by repeating the analysis in section IVB to obtain:
\begin{eqnarray}
\ln C&\approx& \ln\left(\frac{\tau_\alpha(\xi_{bulk})}{\tau_\alpha^{bulk}} \right) \approx\frac{F_{elastic}(\xi_{bulk})-F_{elastic}^{bulk}}{k_BT} \nonumber\\
&=& -\frac{r_{cage}^3}{\xi_{bulk}^3}\frac{F_{elastic}^{bulk}}{k_BT} \nonumber\\
\ln\left(\tau_\alpha(\xi_{bulk}) \right) &=& \frac{F_B}{k_BT}-\ln C\frac{\xi_{bulk}^3}{r_{cage}^3}.
\label{eq:22}
\end{eqnarray}
Simple algebra yields the relation between the cooperativity lengths based on the two calculations labeled with subscripts $C$ and $E$ for continuum and Einstein, respectively:
\begin{eqnarray}
\frac{\xi_{bulk,C}}{r_{cage}}\approx \left(\frac{\xi_{bulk,E}}{r_{cage}} \right)^{1/3}.
\label{eq:23}
\end{eqnarray}
The cube root relation explains the huge length scale reduction. Given the cubic polynomial fit in Fig. 7, it also explains to zeroth order the nearly inverse temperature dependence in Fig. 9. Note that, based on the continuum mechanics calculation, the local barrier, which varies weakly with temperature, now also affects the cooperativity length scale far more than in the molecular Einstein approach.
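Eq.(\ref{eq:23}) makes the magnitude of the reduction explicit; a sketch (the $\xi_{bulk,E}\sim 30d$ input is the Fig.7 value near $T_g$, and $r_{cage}=1.5d$ is assumed):

```python
# Eq (23): the 1/r^4 continuum energy decay converts the molecular-
# Einstein cooperativity length into its cube root (in units of r_cage).
def xi_continuum(xi_einstein, r_cage=1.5):
    return r_cage * (xi_einstein / r_cage) ** (1.0 / 3.0)

# xi_E ~ 30 d near Tg collapses to xi_C ~ 4 d -- the order-of-magnitude
# reduction seen in Fig. 9 relative to Fig. 7.
xi_C = xi_continuum(30.0)
```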
\subsection{Time-Length Scale Connection}
We have carried out the same numerical exercise as in section IVC to explore the validity of the three forms of the barrier-alpha time relationships of Eq.(\ref{eq:17}). Results analogous to Fig.8 are shown in Fig.10. Remarkably good straight lines are again obtained, with a much larger apparent exponent, now $\sim 2$. This is roughly three times (as expected given Eq.(\ref{eq:23})) the value of 0.75 found in Fig.8. We also find (not shown) essentially equally good representations of our alpha time calculations by the relations $\ln\left(\tau_\alpha^{bulk} \right) \sim \left(\xi_{bulk}/d \right)^{5/2}$ and $\ln\left(\tau_\alpha^{bulk} \right) \sim \left(\xi_{bulk}/d \right)^{2}/k_BT$.
\begin{figure}[htp]
\center
\includegraphics[width=8cm]{Fig10.pdf}
\caption{\label{fig:10}(Color online) Logarithm of bulk relaxation time as a function of $(\xi_{bulk}/d/T)^2$ for two criteria and two systems based on the continuum mechanics approach to computing the elastic barrier. Straight dashed lines are fits to the numerical data points.}
\end{figure}
We conclude that the existence of a tight connection between the alpha time and a growing cooperativity length scale in ECNLE theory is present regardless of the approach used to compute the elastic barrier. However, the absolute magnitude and temperature dependence of the cooperativity length scale, and the apparent exponent that relates it to the barrier, differ substantially.
\section{Discussion}
We have analyzed new aspects of ECNLE theory to provide deeper insight and address new questions. Calculations have been performed for the hard sphere fluid and thermal molecular and polymeric liquids of diverse fragilities. We find a near universality of the temperature-dependence of the apparent dynamic localization length if one adopts a high crossover temperature as a reference state. In contrast, strong nonuniversalities remain for the total activation barrier. Surprising simplicities emerge for the temperature-dependent effective volume fraction and various dynamical properties if results are plotted against temperature and not its inverse.
The particle-level total displacement associated with the alpha event is found to be weakly temperature-dependent (it grows with cooling) and only $\sim 1-2$ particle diameters. An alternative amplitude-based criterion for determining a cooperativity length scale was also analyzed. It grows strongly with cooling, reaches very large values at the laboratory $T_g$, and is correlated in an exponential manner with the alpha time over an enormous number of decades in relaxation time, with a barrier-length scale apparent scaling exponent modestly smaller than unity. An alternative calculation of the elastic barrier based on continuum mechanics results in little change of the predictions of ECNLE theory for bulk average properties, but leads to a much smaller cooperativity length scale that grows more weakly with cooling, due to the stronger spatial decay of the elastic field with distance.
The issue of the molecular Einstein versus literal continuum mechanics approach to computing the collective elastic barrier might be more incisively probed by performing new simulations and/or confocal imaging experiments in colloidal materials. This question is especially germane to how solid or vapor boundaries can "cut off" or modify the elastic barrier in thin films [9-12]. Work in this latter direction is underway and will be reported in a future article.
\begin{acknowledgments}
This work was performed at the University of Illinois and supported by DOE-BES under Grant No. DE-FG02-07ER46471 administered through the Frederick Seitz Materials Research Laboratory. We thank Professor David Simmons for many stimulating and informative discussions.
\end{acknowledgments}
\section{Representations of Data in Machine Learning}
\subsection{Introduction}
We start with the definition of a data representation. Suppose we are given a set of $P$ data samples ${\bf x^{(1)}},{\bf x^{(2)}},\ldots,{\bf x^{(P)}}$ of an $N$-dimensional random variable ${\bf X}$ with joint density $P({\bf X})$. A data transformation is a deterministic map from the multidimensional vector space of the data into another one:
\begin{equation}
F: \bf{x} \in \mathbb{R}^N \rightarrow {\bf x'} = F(\bf{x}) \in \mathbb{R}^M \ ,
\end{equation}
where $M$ can be larger or smaller than $N$. In general, $F$ is assumed to be differentiable, but not necessarily invertible. We say that the random vector ${\bf X}'$ is a representation of the original random vector ${\bf X}$. Changing the representation of a random variable can often be extremely helpful in data science because: i) it allows for better visualization and understanding of the process that generated the data; ii) the performance of machine-learning algorithms, such as classification or clustering methods, depends heavily on the choice of representation.
Although it is not obvious what makes a given representation good, it is clear that many representations are useless: if $F({\bf X}) = 0, \forall {\bf X}$, then ${\bf X'}$ is a trivial random variable and carries no information about ${\bf X}$. More generally, any transformation $F$ that does not vary strongly across the support of ${\bf X}$ is of little use. At the opposite extreme, $F=\mathrm{Id}$ is not of much use either, since the properties of the data distribution are unchanged. Typically, a good data representation ${\bf X}'$ must have helpful properties that ${\bf X}$ does not have, such as low dimensionality, independence between components or sparse values, while carrying information about the original random vector ${\bf X}$. Thus, the transformation $F$ must depend on $P({\bf X})$ and should be learnt. Once learnt, a data representation can often shed light on how the data was generated: one can find so-called 'features', i.e. frequent collective modes of variation in the data, find a partition into classes, or discover outliers.
Moreover, a good data representation can significantly improve the performance of subsequent machine learning tasks, by retaining only useful information about the data sample. For instance, in so-called deep neural networks, one learns a sequence of data transformations, e.g. to predict a label from an image. By using non-linearities and so-called pooling architectures, the learnt intermediate representations of the data can become invariant, e.g. w.r.t. noise, shifts, or rotations, and hence learning is quicker \cite{mallat2016understanding}. Deep neural networks have brought remarkable breakthroughs in many areas, such as visual and speech recognition and natural language processing \cite{Bengio2013,LeCun2015}.
We now illustrate these concepts with two examples of great relevance in applications.
\subsection{Example 1: Dimensionality reduction} \label{secpca0}
One important subclass of data transformations is dimensionality reduction. One aims at compressing a random vector ${\bf X}$ of typically high dimension $N$ into a smaller random vector ${\bf X'}$ of dimension $M<N$, e.g. $M=2$ or $3$, while keeping as much information as possible about ${\bf X}$. Such compression is motivated by the fact that data very often lie in, or close to, a subspace of much lower dimension than $N$. This is the so-called 'manifold hypothesis'. Indeed, consider for instance a data set constituted by pictures of a person's face, taken in many different positions; each picture is made of, say, $1000\times 1000$ pixels. It is clear that this data set is a very small subset of all possible $1000\times1000$ colored pictures, each of which defines a $3\times 10^6$-dimensional vector. The reason is that, for a given face, there are only $\sim 50$ varying degrees of freedom (the positions of all muscles), a very small number compared to $10^6$ \cite{LecunCDF}. Hence, all data points lie on a (non-linear) manifold of very low dimension $M$ compared to $N$. More generally, the variability in the data often comes from a small number of explanatory latent factors that affect all components, and we would like to recover them.
In all generality, we do not have good and general methods to learn functions that turn an image into this kind of 'muscle positions' representation. Some simpler dimensionality reductions can nonetheless be learnt, and are extremely useful. For instance, dimensionality reduction can be obtained through a simple linear transformation:
\begin{equation}
{\bf X'} = W \, {\bf X}\ ,
\end{equation}
where the weight $W$ is an $M \times N$ rectangular matrix that must be trained on the data in order to retain as much information as possible about ${\bf X}$. An interesting choice of matrix $W$ is obtained by the Principal Components Analysis (PCA) algorithm: the row $W_{i,.}$ is the eigenvector corresponding to the $i$-th largest eigenvalue of the empirical data covariance matrix $C_{ij} =\langle X_i X_j \rangle-\langle X_i \rangle\langle X_j \rangle$, where the average is computed over the data; this choice will be justified in Section \ref{secpca}. Such a transformation mainly serves two purposes. The first one is to provide a better understanding of the data by visualizing it: one computes a 2- or 3-dimensional representation of the data, and each data point is then represented in a 2D or 3D space. For example, one can compute the 2D PCA representation of the $28 \times 28$ images of digits from the MNIST handwritten digits data set, vectorized as $784$-dimensional vectors, see Fig.~\ref{PCA_MNIST}; the scatter plot shows two distinct clusters, corresponding to the two digit types (0s and 1s). A more interesting illustration is the interpretation of molecular dynamics simulations of complex systems, made of many strongly interacting and heterogeneous microscopic components. Observing the dynamics of such systems, e.g. a protein described at the atomic level, amounts in practice to looking at thousands of correlated time series. Principal component analysis offers low-dimensional projections of these time traces, and allows one to visualize the collective motions underlying the evolution of the system; see \cite{muellerstein06} for a recent review of applications to biomolecules, including nucleic acids and proteins.
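As a concrete illustration of this recipe, here is a minimal sketch (in Python/NumPy; the function name and the synthetic data are ours, not part of any library) of the PCA transformation: the rows of $W$ are the top eigenvectors of the empirical covariance matrix, and the reduced representation is ${\bf X'} = W\,{\bf X}$.

```python
import numpy as np

def pca_transform(X, M):
    """Project N-dimensional samples (rows of X) onto the top-M
    principal components of their empirical covariance matrix."""
    Xc = X - X.mean(axis=0)              # center each coordinate
    C = (Xc.T @ Xc) / len(X)             # empirical covariance, N x N
    eigval, eigvec = np.linalg.eigh(C)   # eigenvalues in ascending order
    W = eigvec[:, ::-1][:, :M].T         # rows = top-M eigenvectors
    return W, Xc @ W.T                   # weight matrix and M-dim representation

# synthetic data with one dominant direction of variation
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
X[:, 0] *= 5.0                           # coordinate 0 has much larger variance
W, Xp = pca_transform(X, M=2)
```

Because the first coordinate carries most of the variance, the first row of $W$ essentially aligns with it, and the 2D representation retains the dominant mode of variation.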
\begin{figure}
\begin{subfigure}{.33 \textwidth}
\label{MNIST_samples}
\includegraphics[scale=0.38]{MNIST_samples.png}
\end{subfigure}
\begin{subfigure}{.33 \textwidth}
\includegraphics[scale=0.38]{2d_viz_MNIST.png}
\end{subfigure}
\begin{subfigure}{.33 \textwidth}
\label{PCA_MNIST2}
\includegraphics[scale=0.38]{eigen_digits.png}
\end{subfigure}
\caption{(a) Some MNIST data samples. (b) A 2-dimensional PCA representation of the MNIST handwritten digits data set. Each point is a different image, with $x$ and $y$ coordinates being the values of the first and second components of the representation. Here, only the digits 0 (blue) and 1 (red) are represented. (c) Visualization of the weight matrix $W$. Each image is a principal component vector $W_{i,.}$; blue (resp. red) pixels denote large positive (resp. negative) values. The PCA representation is obtained by computing the set of overlaps between an image and each principal component vector.}
\label{PCA_MNIST}
\end{figure}
The second purpose of dimensionality reduction is to overcome the so-called curse of dimensionality. In very high dimensional spaces, most data sets sample the vector space $\mathbb{R}^N$ only very sparsely. Consider for instance the following supervised learning problem. We are given a training data set of $10,000$ $100 \times 100$ grayscale images (normalized between 0 and 1) of cats and dogs, with binary labels attached, and we want to train a parametric model to classify whether images are cats or dogs. At this point, it is useful to think of this classification task as essentially an interpolation problem: there exists a mathematical function $\theta: {\bf X} \rightarrow y \in \{0,1\}$ that assigns $0$ to cats and $1$ to dogs. We observe pairs of values $ \left( {\bf X}^i, y^i =\theta({\bf X}^i) \right)$, with $i=1,\ldots,10,000$, and want to interpolate the values of $\theta$ for new test images. This interpolation problem would be trivial if the input space were densely sampled, \textit{e.g.} if for any point in $\mathbb{R}^N$ there were a training data point at distance $\leq \epsilon$. In practice, this is impossible, because the latter condition requires about $\epsilon^{-N}$ data points, which is out of reach when $N$ is large.
One possible way out is first to learn a new data representation of lower dimension, ${\bf X'} = F({\bf X})$, e.g. using PCA, and then train a classification model of the form $y = \theta({\bf X'})$. If the low-dimensional representation keeps relevant information about the nature of the image, then learning can be performed. One popular application of PCA to supervised learning is the 'eigenface' face recognition algorithm: a PCA representation is trained on a data set of faces, before applying supervised learning \cite{turk1991face}. The eigenface algorithm is considered among the first successful face recognition algorithms.
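To make the two-stage strategy concrete, the following sketch first reduces the dimension with PCA and then classifies in the reduced space. The data are synthetic (two Gaussian clusters separated along a random direction), and the nearest-centroid rule stands in for a generic classifier; this is not the actual eigenface pipeline, only an illustration of the 'reduce, then classify' principle.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 400
# two classes separated along a random direction u, plus isotropic noise
u = rng.normal(size=N); u /= np.linalg.norm(u)
y = rng.integers(0, 2, size=P)
X = rng.normal(size=(P, N)) + 3.0 * np.outer(2 * y - 1, u)

# PCA down to M = 2 dimensions (top eigenvectors of the covariance matrix)
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / P
W = np.linalg.eigh(C)[1][:, ::-1][:, :2].T
Xp = Xc @ W.T

# nearest-centroid classifier in the reduced 2D space
mu0, mu1 = Xp[y == 0].mean(axis=0), Xp[y == 1].mean(axis=0)
pred = (np.linalg.norm(Xp - mu1, axis=1)
        < np.linalg.norm(Xp - mu0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

The top principal component captures the class-separating direction, so a trivial classifier in 2D succeeds even though the raw space has $N=100$ dimensions.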
\subsection{Example 2: Extracting latent features from data}
The variability in real-world data, such as images, can often be decomposed into a set of largely independent modes of variation. For instance, two faces are different because some of their parts are different: nose, ears, lips... At a lower level of description, an image may or may not contain an edge at a given location, angle or scale, and two different images have different sets of activated edges. Extracting these so-called 'features' is of particular interest for machine learning, in particular for classification, because the decision function $y = \theta(X)$ that must be learnt may be expressed more easily as a function of these features $X'$ than of the raw pixels $X$. For instance, one could achieve better results by expressing $\theta(X')$ as a linear function of $X'$, instead of a higher-order polynomial of $X$. Moreover, the learnt representations have interesting statistical properties, such as low statistical dependence between modes, or invariance with respect to irrelevant perturbations of the data such as corruption by noise, that can be used for denoising. Some notable algorithms for unsupervised feature extraction are Independent Component Analysis (ICA) \cite{hyvarinen2004independent}, sparse autoencoders \cite{ng2011sparse}, and sparse dictionary learning \cite{olshausen1996emergence}. We display in Fig.~\ref{ICA_MNIST} the features learnt by ICA applied to the MNIST digits data set. The features learnt correspond to individual handwritten strokes, unlike for PCA, where the principal components do not have a simple interpretation. Interestingly, the features found by sparse dictionary learning applied to natural image data sets qualitatively match very well the receptive fields of neurons in the visual cortex of mammals, such as monkeys \cite{ringach2002orientation,zylberberg2011sparse}. Feature extraction carried out in the brain bears strong analogies with machine-learning procedures \cite{poggio16}.
\begin{figure}
\centering
\includegraphics[scale=0.25]{sparse_features_MNIST.png}
\vskip -1cm
\caption{Features learnt by Independent Component Analysis on MNIST.}
\label{ICA_MNIST}
\end{figure}
\section{Low-Dimensional Representations: Principal Component Analysis (PCA)}
\label{secpca}
In this section, we focus on the PCA transformation introduced in Section \ref{secpca0}. To cast PCA in a Bayesian framework, we start with a basic reminder about Bayes's approach to inference.
\subsection{Mathematical reminder: Bayesian inference}
We observe a data sample $\sigma$, and would like to fit these data with a model parametrized by some variable $\tau$. We assume that both $\sigma$ and $\tau$ are random variables, with a joint distribution $p(\sigma,\tau)$. According to the definition of conditional probabilities, we may write
\begin{equation}\label{bayes1}
p(\sigma,\tau) = p(\sigma| \tau)\times p(\tau) =p( \tau | \sigma) \times p(\sigma) \ .
\end{equation}
In the first equality, $p(\sigma| \tau)$ is the probability of the data given the model parameters, also called the \textit{likelihood} of the model parameters given the data.
The second term, $p(\tau)$, is the \textit{prior distribution} over the model parameters. The expression can be rewritten using the posterior distribution of the parameters given the observations, $p(\tau | \sigma)$ (which can be maximized, sampled from...), and the overall probability $p(\sigma)$ of the data being generated by the class of models under consideration. This posterior distribution is given by Bayes' formula:
\begin{equation}
p(\tau| \sigma) = \frac{p(\sigma | \tau) p(\tau)}{p(\sigma)}\ ,
\end{equation}
which is simply derived from eqn (\ref{bayes1}).
One historical application of the Bayesian inference formula is Laplace's statistical proof that boys and girls have different birth rates. Laplace had access to the numbers of boys and girls born in Paris between 1745 and 1770: $\sigma =241,945$ girls out of $P= 241,945 + 251,527 = 493,472$ babies born during this time period. Although the numbers of male and female births are different, it is not possible to know a priori whether the discrepancy comes from a statistical fluctuation or from a systematic difference in birth rates. Laplace assumed that each birth is a realization of an independent and identically distributed random variable, giving a girl with probability $\tau$ and a boy with probability $1-\tau$. Under these basic assumptions, $\sigma$ follows a binomial distribution $\mathcal{B}(P,\tau)$, with likelihood:
\begin{equation}\label{lapla1}
p(\sigma | \tau) = \binom{P}{\sigma}\, \tau^\sigma (1-\tau)^{P-\sigma} \ .
\end{equation}
Assuming a uniform prior density over $\tau\in[0;1]$, $p(\tau) = 1$, the posterior distribution reads
\begin{equation}\label{lapla2}
p(\tau |\sigma) = C\; \tau^\sigma (1-\tau)^{P-\sigma} \ ,
\end{equation}
where $C$ is a normalization constant; the posterior is shown in Fig.~\ref{laplace_inference}. It is then easy to calculate the mean value and the standard deviation of $\tau$ under the posterior distribution, with the results Mean$( \tau )= 0.490291$ and Std$(\tau)=0.000712$. The probability that $\tau$ is actually larger than or equal to $\frac 12$ is given by the integral of $p(\tau|\sigma)$ over the interval $\tau\in[\frac 12;1]$, and is approximately equal to $ 10^{-42}$. This extremely small value makes it very unlikely that the discrepancy between the large numbers of female and male births is due to a pure statistical fluctuation.
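The numbers above can be checked with a few lines of Python (standard library only): with a uniform prior, the posterior (\ref{lapla2}) is a Beta distribution, whose mean and standard deviation are known in closed form; the tiny tail probability $p(\tau \geq \frac12)$ is estimated here with a Gaussian approximation, computed in log space to avoid underflow.

```python
import math

# Laplace's counts: girls (sigma) out of total births (P), Paris 1745-1770
sigma, P = 241_945, 493_472

# with a uniform prior, the posterior is Beta(a, b) with a = sigma+1, b = P-sigma+1
a, b = sigma + 1, P - sigma + 1
mean = a / (a + b)
std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Gaussian approximation of the tail P(tau >= 1/2), in base-10 logarithm:
# log Phi(-z) ~ -z^2/2 - log(z sqrt(2 pi)) for large z
z = (0.5 - mean) / std
log10_tail = (-z * z / 2 - math.log(z * math.sqrt(2 * math.pi))) / math.log(10)
```

One recovers Mean$(\tau)\simeq 0.4903$, Std$(\tau)\simeq 7\times 10^{-4}$, and a tail probability of order $10^{-42}$, in agreement with the values quoted in the text.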
\begin{figure}
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[width=\textwidth]{laplace_inference.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.45\textwidth}
\caption{The posterior probability density of the female birth rate $\tau$ for Laplace's birth rate problem, according to eqn (\ref{lapla2}). Notice that $p(\tau = 0.5)$ is very small, but non-zero. }
\label{laplace_inference}
\end{minipage}
\end{figure}
\subsection{Multivariate Gaussian variables}\label{secprob}
A popular example of a parametric model is the multivariate Gaussian distribution, used to model sets of continuous random variables exhibiting correlations. Hereafter, we will assume for the sake of simplicity that all random variables have zero mean. Given a vector of $N$ variables $\boldsymbol\sigma = (\sigma_1, \sigma_2, ... , \sigma_N )$, we write:
\begin{equation}
\rho(\boldsymbol\sigma | \boldsymbol\tau) = \frac{\sqrt{\det \boldsymbol\tau}}{ (2\pi)^{\frac{N}{2}}} \exp \left( - \frac{1}{2} \boldsymbol\sigma^T \cdot \boldsymbol\tau \cdot\boldsymbol \sigma \right) \,
\end{equation}
where $\boldsymbol\tau$, called the \textit{precision matrix}, is a symmetric, positive definite matrix that encodes the inter-dependencies between the variables. Its off-diagonal entries can be interpreted as (minus) the couplings between the variables. For instance, with $N=2$, $\tau_{12} >0$ means that the configurations $(a,b)$ and $(-a,-b)$ are more likely than the configurations $(-a,b)$ and $(a,-b)$ (when $a,b>0$), leading to a positive correlation between $\sigma_1$ and $\sigma_2$. Given a data set of $P$ samples $\sigma^{(1)},\sigma^{(2)},...,\sigma^{(P)} $, one can compute analytically the maximum likelihood estimator of the precision matrix:
\begin{equation}
\tau^{MLE} = \argmaxB \sum_{s=1}^P \log \rho (\sigma^{(s)}|\tau ) \ .
\end{equation}
To solve for $\tau^{MLE}$, we compute the gradient of the right-hand side of the above equation, which reads
\begin{equation}
\frac{\partial}{\partial \tau_{ij}} \sum_{s=1}^P \log \rho (\sigma^{(s)}|\tau ) = - \frac{1}{2} \sum_{s=1}^P \sigma_i^{(s)} \sigma_j^{(s)} + \frac{P}{2} \, (\tau^{-1})_{ji} \ .
\end{equation}
We recognize in the first term the empirical data covariance matrix, $C_{ij} = \frac{1}{P} \sum_{s} \sigma_i^{(s)} \sigma_j^{(s)}$ (up to a factor $-\frac P2$; recall the variables have zero mean). The gradient vanishes (and it is easy to show that this corresponds to a global maximum) when
\begin{equation}
\tau^{MLE} = C^{-1}\ .
\end{equation}
The inversion can be performed numerically as long as $P\ge N$; for $P < N$, the data covariance matrix is not full rank. However, finite sampling effects of order $\frac{1}{\sqrt{P}}$ in $C$ result in errors on $\tau = C^{-1}$ of the order of $\sqrt{\frac{N}{P}}$. Thus, for high-dimensional data sets, i.e. when the ratio $N/P$ is of the order of unity, we expect inference to be plagued with errors.
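A quick numerical check of the result $\tau^{MLE}=C^{-1}$, in the good-sampling regime $P\gg N$ (a sketch on synthetic data; the chain-like true precision matrix is our choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 5, 50_000

# true precision matrix: nearest-neighbour couplings on a small chain
tau = np.eye(N) + 0.3 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
cov = np.linalg.inv(tau)

# draw P samples and invert the empirical covariance matrix
X = rng.multivariate_normal(np.zeros(N), cov, size=P)
C = X.T @ X / P
tau_mle = np.linalg.inv(C)

err = np.abs(tau_mle - tau).max()      # shrinks like sqrt(1/P) here
```

For $P=50{,}000$ and $N=5$ the inferred precision matrix is close to the true one; repeating the experiment with $P$ comparable to $N$ shows the large errors announced above.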
\subsection{Principal components as minimal models of interacting variables}
The simplest model distribution over a vector of random variables, each component being normalized to have zero mean and unit variance, is the independent one, which corresponds to $C =\tau= Id$. In this case, we have $p(\sigma | \tau) \propto \exp \left( - \frac{1}{2} \sum_i \sigma_i^2 \right)$, and the resulting distribution is isotropic, see Fig.~\ref{null_model}(left). A minimal non-trivial model is obtained by breaking this isotropy. We assume there exists a specific direction, denoted by $|e\rangle$, in the $N$-dimensional space with a larger variance:
\begin{equation} \label{pcmodel1}
\begin{split}
\tau = Id - \frac{s}{1+s}\; |e\rangle\langle e|
\Longleftrightarrow C = \tau^{-1} = Id + s \; |e\rangle\langle e| \ ,
\end{split}
\end{equation}
where $s>0$. In this expression, $|e\rangle$, the \textit{principal component} can be interpreted as a \textit{collective mode} of variation of the data. Indeed, the random variable $\sigma_e = \sum_{i=1}^N e_i \sigma_i $ has variance $V_e = \langle e| C |e\rangle = 1+s$ larger than 1, whereas it would be 1 if the $\sigma_i$ were independent. The $\sigma_i$ variables correlate in a way that makes $\sigma_e$ have large variance, see Fig.~\ref{null_model}(right).
\begin{figure}
\begin{subfigure}{.45 \textwidth}
\includegraphics[scale=0.5]{null_model.png}
\end{subfigure}
\begin{subfigure}{.45 \textwidth}
\includegraphics[scale=0.5]{pca_model.png}
\end{subfigure}
\caption{Probability density contours for (left) the null model ($\tau = Id$) and (right) the principal-component model of eqn (\ref{pcmodel1}).}
\label{null_model}
\end{figure}
Given a data set, maximum likelihood estimation can be performed analytically to infer the principal component $|e\rangle$. The likelihood of a data sample reads
\begin{equation}
\rho\big(\sigma^{(s)} \big| \; |e\rangle \big) = \frac{\sqrt{\det \boldsymbol\tau}}{(2 \pi)^{\frac{N}{2}}} \exp \left( -\frac{1}{2} \sum_{i,j} \sigma_i^{(s)} \tau_{ij}\, \sigma_j^{(s)} \right) \ .
\end{equation}
The $|e\rangle$-dependent part of the log-likelihood is simply
\begin{equation} \label{maxL}
L = \frac{s}{2(1+s)} \sum_{i,j} e_i\, e_j \,\left( \sum_s \sigma_i^{(s)} \sigma_j^{(s)} \right) \ .
\end{equation}
Hence, the MLE for the direction $|e\rangle$ (assumed to be normalized) is the top eigenvector (with largest eigenvalue) of the empirical covariance matrix $C_{ij} = \frac{1}{P} \sum_s \sigma_i^{(s)} \sigma_j^{(s)}$. Although the inference can be performed easily for any covariance matrix, we do not expect the inferred vector to always be statistically significant, according to the discussion at the end of Section \ref{secprob}. For instance, even if the data are generated according to the null model $\tau=Id$, the empirical covariance matrix has a largest eigenvalue ($>1$) due to finite sampling (Fig.~\ref{eigen_pca_finite}). Similarly, if the data are generated according to the principal-component model but $s$ is 'small' and $P$ is finite, the top eigenvector of the empirical correlation matrix may be far away from $|e\rangle$ (Fig.~\ref{eigen_pca_finite}). In the next section, we report analytical results, derived using random matrix theory and statistical mechanics tools, telling us when inference is possible.
\begin{figure}
\begin{subfigure}{.30 \textwidth}
\label{eigen_null_infinite}
\includegraphics[scale=0.5]{eigen_null_infinite.png}
\end{subfigure}
\vskip -.8cm \hskip 5cm
\begin{subfigure}{.30 \textwidth}
\label{eigen_null_finite}
\includegraphics[scale=0.5]{eigen_null_finite.png}
\end{subfigure}
\vskip -1.55cm \hskip 10cm
\begin{subfigure}{.30 \textwidth}
\includegraphics[scale=0.5]{eigen_pca_finite.png}
\end{subfigure}
\caption{Distribution of the eigenvalues of the empirical covariance matrix for: (a) the null model with infinite sampling; (b) the null model with finite sampling; (c) the principal-component model with finite sampling.}
\label{eigen_pca_finite}
\end{figure}
\subsection{The retarded-learning phase transition}
\label{secretard}
We first study the empirical covariance matrix $C_{ij}= \frac{1}{P} \sum_s \sigma_i^{(s)} \sigma_j^{(s)}$, and its spectrum, when the data are generated according to the null model $\tau = Id$. We are interested in particular in the empirical probability density of eigenvalues:
\begin{equation}
\rho(\lambda) = \frac 1N \sum_{\mu=1}^N \overline{\delta(\lambda_\mu - \lambda) }\ ,
\end{equation}
where $\{\lambda_1,...,\lambda_N\}$ is the set of eigenvalues of $C$. The overbar denotes the average over the realizations of the $P$ data samples $\sigma^{(s)}$. Note that, in the large $P,N$ limits with a fixed ratio $r \equiv \frac{N}{P}$, we expect that the spectrum attached to a random realization will coincide with the average spectrum $\rho$ with high probability.
The probability density of eigenvalues can be computed analytically using random matrix theory tools, in the limit case where the dimension $N$ and number of data points $P$ both go to infinity, and at fixed noise level $r$ \cite{marvcenko1967distribution}. The result is the so-called Marcenko-Pastur distribution:
\begin{equation}\label{mp}
\rho(\lambda)=\frac{\sqrt{(\lambda_+ - \lambda)(\lambda-\lambda_-)}}{2 \pi r \lambda} \ , \quad \text{with}\quad
\lambda_{\pm} = \left( 1 \pm \sqrt{r} \right)^2 \ .
\end{equation}
The expression above is valid for $r<1$; for larger $r$, the covariance matrix is not full rank, and there is also a Dirac peak of mass $1-\frac{1}{r}$ at $\lambda=0$. The distribution of eigenvalues is plotted in Fig.~\ref{marcenko_pastur} for various values of the noise level $r$. For very good sampling, $r\to 0$, Wigner's semi-circle law is recovered around $\lambda=1$, as the different entries of the correlation matrix become essentially uncorrelated. Interestingly, the spectral density can be quite wide when $r$ is of order unity: for instance, for $r=1$, eigenvalues can be as large as $\lambda=4$. As a consequence, these 'sampling noise' eigenvectors can screen away true principal components if $s$ is not large enough.
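The Marcenko-Pastur edges $\lambda_\pm$ of eqn (\ref{mp}) are easy to check numerically (a sketch; at finite $N$ the extreme eigenvalues fluctuate around the edges by an amount of order $N^{-2/3}$, hence the small margins):

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 500, 1000                     # noise level r = N/P = 0.5
r = N / P

# null model: P samples of N independent, unit-variance variables
X = rng.normal(size=(P, N))
eigvals = np.linalg.eigvalsh(X.T @ X / P)

# theoretical edges of the Marcenko-Pastur bulk
lam_minus, lam_plus = (1 - np.sqrt(r)) ** 2, (1 + np.sqrt(r)) ** 2
```

A histogram of `eigvals` reproduces the $r=0.5$ curve of Fig.~\ref{marcenko_pastur}: all eigenvalues fall between $\lambda_-\simeq 0.086$ and $\lambda_+\simeq 2.91$, even though the true covariance is the identity.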
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{marcenko_pastur.png}
\end{center}
\caption{The Marcenko-Pastur distribution of eigenvalues, $\rho(\lambda)$, for various values of the noise level $r$, reported in the Mathematica script on the top. Left: $r<1$. Right: $r>1$; the Dirac peak at $\lambda=0$ is omitted.}
\label{marcenko_pastur}
\end{figure}
The same computation as above can be carried out for the principal-component model with $s>0$, and shows the existence of a phase transition, see Fig.~\ref{retarded_learning} \cite{baik2005phase,rattray}:
\begin{itemize}
\item If $r<s^2$ (weak noise regime), the largest eigenvalue is well above the 'bulk' of eigenvalues due to finite sampling, and the largest eigenvector $|v_1\rangle$ has a finite overlap $\langle e| v_1\rangle$ with the principal component $|e\rangle$.
\item If $r>s^2$ (strong noise regime), the principal eigenvalue is inside the Marcenko-Pastur 'bulk' of eigenvalues, and $|v_1\rangle$ is merely noise, i.e. the overlap $\langle e|v_1\rangle$ vanishes in the large size limit.
\end{itemize}
In summary, recovering the principal component is impossible unless at least $P^\star \sim \frac{N}{s^2}$ examples are presented, after which the error decays monotonically; hence the name of retarded learning \cite{watkin1994optimal}, coined in a slightly different context. This computation can be generalized to any finite number $K>1$ of principal components, associated with the amplitudes $s_1 > s_2 >... >s_K$; each time the noise level $r$ crosses $s_k^2$, one more eigenvalue pops out of the noisy bulk of eigenvalues, and the corresponding eigenvector is informative about the $k$-th principal component to be inferred. A practical application of this computation is to serve as a guideline for how many principal components one should keep when PCA is used for dimensionality reduction. For instance, one can choose to keep only the eigenvalues that are larger than $\lambda_+$, the top edge of the bulk in eqn (\ref{mp}).
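The transition can be observed directly by planting a component and measuring the squared overlap between the top empirical eigenvector and it (a sketch; the planted direction and the parameter values are ours, chosen to sit well inside each phase):

```python
import numpy as np

def top_overlap(N, P, s, rng):
    """Squared overlap between a planted component e and the top
    eigenvector of the empirical covariance, for C = Id + s |e><e|."""
    e = np.zeros(N); e[0] = 1.0                     # planted direction
    # samples with covariance Id + s |e><e|
    X = rng.normal(size=(P, N)) + np.sqrt(s) * rng.normal(size=(P, 1)) * e
    C = X.T @ X / P
    v = np.linalg.eigh(C)[1][:, -1]                 # top eigenvector
    return float((e @ v) ** 2)

rng = np.random.default_rng(4)
N = 200
strong = top_overlap(N, P=2000, s=3.0, rng=rng)     # r = 0.1 << s^2: recovery
weak = top_overlap(N, P=2000, s=0.1, rng=rng)       # r = 0.1 > s^2: no signal
```

In the weak-noise regime the squared overlap is close to 1, while in the strong-noise regime it is small (and vanishes in the large-$N$ limit), as in the right panel of Fig.~\ref{retarded_learning}.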
\begin{figure}
\begin{subfigure}{.30 \textwidth}
\includegraphics[scale=0.3]{retarded.png}
\end{subfigure}
\hskip 5cm
\begin{subfigure}{.3 \textwidth}
\includegraphics[scale=0.4]{retarded_learning_overlap.png}
\end{subfigure}
\caption{The retarded-learning phase transition. Left panels: spectrum of eigenvalues of the empirical correlation matrix in the principal-component model in the cases of weak noise (left, $r< s^2$) and of strong noise (right, $r>s^2$). Right panel: average squared overlap between the top components of the true and empirical correlation matrices as a function of the noise level $r$, for $s=0.2$.}
\label{retarded_learning}
\end{figure}
\subsection{Incorporating prior information}
We have seen in the previous section that, when $r<s^2$, it is possible to extract a vector with a finite scalar product with the top component $|e\rangle$. One natural question is whether exploiting prior information about the structure of the top component can help us improve this threshold, i.e. find the top component with fewer data. We assume a prior distribution for the entries of the top component:
\begin{equation}
P(|e\rangle) =\prod_i P(e_i)\ , \quad \text{with}\quad P(e_i) \propto \exp \big[ -V(e_i) \big] \ .
\end{equation}
Several expressions of interest can be considered for the potential $V$; representative curves are shown in Fig.~\ref{priorV}. We may for instance know that the top component has all its entries $e_i$ positive or zero:
\begin{equation} \label{nonnegative}
V(e) = \left\lbrace
\begin{array}{rr}
+\infty \text{ if }e < 0 \\
0 \text{ if }e \geq 0 \\
\end{array}
\right.
\end{equation}
This can be useful in practice if we look for a collective excitatory mode, e.g. in gene expression data \cite{badea2005sparse,zass2007nonnegative}.
At first sight, it seems trivial to find a vector with a positive dot product with $|e\rangle$, as a good candidate is $|v\rangle = \frac 1{\sqrt N} (1,1,...,1)$. However, since $|e\rangle$ may be arbitrarily sparse (have all its components equal to zero but a finite number), there is no guarantee that $\langle e|v\rangle$ remains finite in the large-$N$ limit. It was recently shown that maximizing (\ref{maxL}) under the constraints $e_i\ge 0\ \forall i$ leads to an estimate of the top component with a positive dot product as long as $r< 2\, s^2$ \cite{richard2014statistical}, see Fig.~\ref{priorV}. Hence, incorporating prior information about the non-negativity of the entries of the top component allows one to double the noise level that can be tolerated.
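A simple heuristic for exploiting the non-negativity constraint is a projected power iteration, in which each multiplication by the empirical covariance matrix is followed by a projection onto the non-negative orthant (a sketch of the idea only, not the algorithm analyzed in \cite{richard2014statistical}; the planted sparse component and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(6)
N, P, s = 100, 1000, 2.0                 # r = N/P = 0.1 < s^2

# planted sparse, non-negative top component (10 sites out of 100)
e = np.zeros(N); e[:10] = 1.0 / np.sqrt(10)
X = rng.normal(size=(P, N)) + np.sqrt(s) * rng.normal(size=(P, 1)) * e
C = X.T @ X / P

# projected power iteration: multiply by C, clip to e_i >= 0, normalize
v = np.ones(N) / np.sqrt(N)
for _ in range(200):
    v = np.maximum(C @ v, 0.0)
    v /= np.linalg.norm(v)

overlap = float(e @ v)
```

The iteration converges to a non-negative estimate with a large dot product with the planted component; the analytical results quoted above show that such constrained estimators remain informative up to $r = 2s^2$, beyond the unconstrained threshold.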
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{prior_for_PCA.png}
\end{center}
\caption{Prior potentials $V(e)$ used for learning top components. In the non-negative case (left), the top component can be inferred when $r<2s^2$ in the presence of the prior, i.e. up to twice the maximum-likelihood threshold $r=s^2$. Other prior potentials include the $L_1$ regularization (middle), and a potential favoring large components (right), see \cite{monasson2015estimating}. }
\label{priorV}
\end{figure}
Another case of interest is the $L_1$ regularization,
\begin{equation} \label{sparseV}
V(e) = V_0 \, |e|
\end{equation}
which favors sparsity. Recently, motivated by the study of covariation in protein families and the search for eigenvectors of the residue-residue correlation matrix with strong components on sites in contact in the 3D structure \cite{cocco2013principal}, we have considered the following potential \cite{monasson2015estimating,monasson2016inference}:
\begin{equation} \label{large_entries}
V(e) = - V_0\, e^4 \ ,
\end{equation}
which favors large components. As the total norm of $|e\rangle$ is still fixed to unity, only a finite number of components can be large (and finite). Note that the cost of weak components is very small, hence this potential does not enforce any sparsity constraint, contrary to eqn (\ref{sparseV}). Instead of maximizing the likelihood, we now maximize the full a posteriori probability $P(|e\rangle | C) \propto P(C|\, |e\rangle) \times P(|e\rangle)$. The optimization cannot be performed analytically anymore, but the typical properties of the solution can be analyzed with the replica method \cite{mezard1987spin}. For the prior in eqn (\ref{large_entries}), it is shown in particular that small values of $V_0$ can reduce the learning lag, whereas too large values prevent learning, see Fig.~\ref{PCA_prior}.
\begin{figure}
\begin{minipage}[c]{0.55\textwidth}
\includegraphics[width=\textwidth]{map.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.4\textwidth}
\caption{The PCA retarded-learning phase diagram using the prior for large entries in eqn (\ref{large_entries}), for $s=0.5$. At fixed $r$, inference of the top component is possible if the strength of the large-component prior, $V_0$, lies between $V_-$ and $V_+$. For $V_0<V_-$, the prior is too weak, and the situation is similar to maximum-likelihood estimation. For $V_0>V_+$, the prior is too strong: the inferred vector will have a few large entries $e_i$, as required, but the sites $i$ carrying these large entries will not match their counterparts in the true top component.}
\label{PCA_prior}
\end{minipage}
\end{figure}
\subsection{Inverse Hopfield problem}
\label{sechopfieldinv}
PCA also has a strong connection with the inverse Ising problem of Section \ref{secising}, when the coupling matrix $J$ is constrained to have low rank, typically $\ll N$. This assumption may help avoid overfitting the data \cite{cocco2011high,cocco2013principal}. We therefore write the interaction matrix as follows,
\begin{equation}\label{pattern}
J_{ij} = \frac 1N
\sum_{\mu =1}^k \xi_i^\mu \, \xi_j^\mu -\frac 1N
\sum_{\mu =1}^{\hat k} \hat \xi_i^\mu \, \hat \xi_j^\mu \ .
\end{equation}
Here, $k$ and $\hat k$ are, respectively, the numbers of positive and negative eigenvalues of $J$, and the total rank is $k+\hat k$. The interaction matrix (\ref{pattern}), together with the Gibbs measure in eqn (\ref{pisi}), defines a generalized Hopfield model, made of the standard {\em attractive} patterns $\boldsymbol\xi^\mu$ and of the {\em repulsive} patterns ${\hat {\boldsymbol \xi}}^\mu$. To make the meaning of these patterns more explicit, we rewrite the probability distribution (\ref{pisi}) of this generalized Hopfield model:
\begin{equation}\label{hopfieldp}
P\big({\bf s}|h,\boldsymbol\xi, \hat{\boldsymbol \xi}\big) = \frac 1{Z(h,\boldsymbol\xi, \hat{\boldsymbol \xi})} \; \exp \left({\bf h}\cdot {\bf s} +
\frac 1{2N} \sum _{\mu=1}^k \big( {\boldsymbol\xi} ^\mu \cdot {\bf s}\big) ^2 -\frac 1{2N} \sum _{\mu=1}^{\hat k} \big({\hat {\boldsymbol\xi}} ^\mu \cdot {\bf s}\big)^2 \right) \ ,
\end{equation}
where $\cdot$ denotes the scalar product (summation over the components $i$). The meaning of the patterns is transparent: they define favored (for attractive patterns) or disfavored (for repulsive patterns) directions in the space of configurations $\bf s$, along which the probability increases or decreases quadratically.
Attractive and repulsive patterns may be inferred through minimization of the cross-entropy defined in eqn (\ref{entropy}). To the lowest order in $\xi_i^\mu/\sqrt N$ and $\hat \xi_i^\mu/\sqrt N$, one finds that the fields and patterns minimizing $S$ are given by
\begin{eqnarray}\label{Ninfini}
h_i&=& \log p_i \nonumber \\
\xi _i ^\mu &=& \sqrt{1 - \frac 1{\lambda ^\mu}}\
\frac{v_i^\mu} {\sqrt{p_i (1-p_i)}} \qquad (\mu=1,\ldots , k)\nonumber \\
\hat \xi _i ^\mu &=& \sqrt{\frac 1{\lambda ^{N-\mu}}-1}\
\frac{v_i^{N -\mu}} {\sqrt{p_i (1-p_i)}}\qquad (\mu=1,\ldots , \hat k)
\end{eqnarray}
where $\lambda ^1\ge \lambda^2\ge ... \ge 1\ge ...\ge \lambda ^{N-1}\ge\lambda ^{N}$ are the eigenvalues of the Pearson correlation matrix,
\begin{equation}\label{matrixc}
C_{ij}= \frac{p_{ij} -p_i\,p_j}{\sqrt{p_i(1-p_i)\, p_j(1-p_j)}} \ ,
\end{equation}
and the ${\bf v}^\mu$ are the associated eigenvectors (with squared norms equal to $N$)\footnote{Note that the Hopfield model is, by construction, invariant under global rotations in the pattern index space, e.g. any rotation ${\cal O}$ of all the attractive patterns in the $k-$dimensional space:
\begin{equation}
\boldsymbol \xi ^\mu \to \sum _{\nu}{\cal O}^{\mu, \nu} \; \boldsymbol \xi ^\nu \ .
\end{equation}
In other terms, the patterns are defined up to a rotation and are not unique; the gauge chosen in eqn (\ref{Ninfini}) corresponds to orthogonal patterns in site space. Obviously, the couplings $J_{ij}$ are gauge-invariant.}.
The above procedure is strongly reminiscent of PCA. Formula (\ref{Ninfini}) shows, however, that the patterns do not coincide with the eigenvectors of $C$ due to the presence of $p_i$-dependent terms. Furthermore, the presence of the $\lambda^\mu$-dependent factor discounts the patterns corresponding to eigenvalues close to unity. This effect is easy to understand in the case of independent spins: in the limit of perfect sampling ($B\to\infty$), $C$ coincides with the identity matrix, hence $\lambda^\mu=1, \forall \mu$, and the patterns and the couplings vanish as they should. In the general case of coupled spins, the sum of the eigenvalues of $C$ is equal to $N$ (since $C_{ii}=1, \forall i$). Therefore, the largest and smallest eigenvalues are guaranteed to be, respectively, above and below unity, and the corresponding attractive and repulsive patterns are real valued.
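The inference formulas (\ref{Ninfini}) are straightforward to implement. Here is a sketch on synthetic binary data with one planted collective mode (the variable names and the data-generation scheme are ours; for simplicity, patterns are split between attractive and repulsive according to whether $\lambda^\mu$ is above or below 1):

```python
import numpy as np

rng = np.random.default_rng(5)
N, B = 20, 5000
# binary samples with a planted collective mode driven by a common factor z
z = rng.normal(size=(B, 1))
S = (rng.normal(size=(B, N)) + 0.8 * z > 0).astype(float)

p = S.mean(axis=0)                       # single-site frequencies p_i
C = np.corrcoef(S.T)                     # Pearson correlation matrix
lam, v = np.linalg.eigh(C)               # eigenvalues in ascending order
v = v * np.sqrt(N)                       # eigenvectors with squared norm N

denom = np.sqrt(p * (1 - p))
k = lam > 1                              # attractive patterns: lambda > 1
xi = np.sqrt(1 - 1 / lam[k])[:, None] * v[:, k].T / denom
xi_hat = np.sqrt(1 / lam[~k] - 1)[:, None] * v[:, ~k].T / denom
h = np.log(p)                            # lowest-order fields
```

The planted mode produces a top eigenvalue well above 1, hence a strong attractive pattern; since the eigenvalues of the Pearson matrix sum to $N$, eigenvalues below 1 (and hence repulsive patterns) are guaranteed to exist as well.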
\begin{figure}
\begin{minipage}[c]{0.58\textwidth}
\includegraphics[width=\textwidth]{deltalpf0014.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.4\textwidth}
\caption{Top: Eigenvalue spectrum of the Pearson correlation matrix for the
sequences of the trypsin inhibitor family (PF00014); the noise ratio is $r\simeq 0.5$, hence the edges of the Marcenko-Pastur spectrum are approximately $\lambda_-=0.09$ and $\lambda_+=2.9$, see eqn (\ref{mp}). Bottom: pattern
contributions to the log-likelihood of the inferred Hopfield model for the
patterns corresponding to the eigenvalues along the $x$-axis.
The most-contributing patterns are attractive patterns corresponding to
the largest eigenvalues and repulsive patterns corresponding to the
smallest eigenvalues. See \cite{cocco2013principal} for more details. }
\label{PCA14}
\end{minipage}
\vskip 0cm
\end{figure}
\begin{figure}
\begin{center}
{\includegraphics[width=12cm]{xi2}}
\caption{ Two of the smallest-eigenvalue repulsive patterns obtained for the trypsin inhibitor family (PF00014). $x$ index: $i+s/20$, where $i$ is the site index and $s=1\ldots 20$ is the amino acid (or gap) index. Each pattern is localized on essentially two components, corresponding to two sites $i_1,i_2$ in contact through a cysteine-cysteine bridge. Importantly, the two sites are close on the 3D fold but distant along the protein backbone. See \cite{cocco2013principal} for more details.}
\label{fig-schema-2}
\end{center}
\end{figure}
Inserting expression (\ref{Ninfini}) in the cross-entropy (\ref{entropy}), we obtain the contribution (per data configuration) of pattern $\mu$ to the log-likelihood,
\begin{equation}\label{gainphi}
\delta {\cal L} ^\mu = \frac 12 \big( \lambda^\mu -1 -\log \lambda^\mu \big) \ ,
\end{equation}
a quantity which is strictly positive for $\lambda^\mu \ne 1$, see Fig.~\ref{PCA14}. This expression helps select the most relevant patterns, in decreasing order of their contributions $\delta {\cal L}^\mu$. This is analogous to PCA, where one selects the signal eigenvectors as the ones detaching most from the Marcenko-Pastur spectrum, see Section \ref{secretard}. However, in contrast with PCA, where only top eigenvalues (large $\lambda^\mu$) and attractive patterns are taken into account, the selected patterns in the inverse Hopfield model lie at both ends of the spectrum. Small eigenvalues, much below $\lambda=1$, can give large contributions to the log-likelihood (Fig.~\ref{PCA14}). In applications to the study of covariations in protein families, repulsive patterns can be shown to be localized on a small number of sites; they are much more informative about structural constraints in the protein, e.g. about the pairs of amino acids in contact \cite{cocco2013principal}.
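The pattern-selection rule above is easy to put into practice: diagonalize the empirical correlation matrix and rank the eigenmodes by their gain $\frac12(\lambda-1-\log\lambda)$. A minimal sketch (function names and toy data are ours, not from the cited work):

```python
import numpy as np

def pattern_contributions(C):
    """Per-pattern log-likelihood gain  1/2 (lambda - 1 - log lambda)."""
    lam, vecs = np.linalg.eigh(C)           # spectrum of the correlation matrix
    gain = 0.5 * (lam - 1.0 - np.log(lam))  # strictly positive unless lambda = 1
    order = np.argsort(gain)[::-1]          # most relevant patterns first
    return lam[order], gain[order], vecs[:, order]

# Toy data: B samples of N Gaussian 'spins', with one strongly correlated pair
rng = np.random.default_rng(0)
B, N = 5000, 20
z = rng.standard_normal((B, N))
z[:, 1] = 0.8 * z[:, 0] + 0.6 * z[:, 1]     # correlate sites 0 and 1
C = np.corrcoef(z, rowvar=False)
lam, gain, patterns = pattern_contributions(C)
```

On this toy example the top-gain pattern is the repulsive one (smallest eigenvalue, $\lambda\simeq 0.2$), ahead of the attractive one ($\lambda\simeq 1.8$), illustrating that both ends of the spectrum matter.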
\section{Compositional Representations: Restricted Boltzmann Machines (RBM)}
\subsection{Definition and motivation}
A Restricted Boltzmann Machine (RBM) is a graphical model, i.e., a probability distribution over a multidimensional data set, similar to the multivariate Gaussian distribution or the Boltzmann Machine distribution. It consists of two sets of random variables, a visible layer (v), the data layer, and a hidden layer (h), which are coupled together, see Fig.~\ref{fig:archi}. The joint probability distribution of the visible and hidden unit configurations, ${\bf v} = (v_1,v_2,...,v_N)$ and ${\bf h} = (h_1,h_2,...,h_M)$, is the Gibbs distribution
\begin{equation}
P(\boldsymbol v,\boldsymbol h) = \frac{1}{Z} e^{-E(\boldsymbol v,\boldsymbol h)} \ ,
\end{equation}
defined by the energy
\begin{equation}
E(\boldsymbol v,\boldsymbol h) = \sum_{i=1}^N \mathcal{U}_i(v_i) + \sum_{\mu =1}^M \mathcal{U}_\mu(h_\mu) - \sum_{i,\mu} w_{i,\mu} v_i h_\mu \ ,
\end{equation}
where the $\mathcal{U}_i, \mathcal{U}_\mu$ are unary potentials that control the marginal distributions of the variables $v_i, h_\mu$, and the weight matrix $w_{i,\mu}$ couples the visible and hidden layers. Depending on the choice of the potentials, the visible and hidden variables can be binary or continuous. The visible potentials $\mathcal{U}_i$ are in general chosen based on the data we want to model; for example, if $v_i \in \{0,1\}$, then $\mathcal{U}_i(v_i) = -g_i v_i$, where the field $g_i$ is a parameter of the model. The hidden potentials $\mathcal{U}_\mu$ can be chosen arbitrarily as long as sampling is feasible. Some useful examples are:
\begin{itemize}
\item The Bernoulli potential: $\mathcal{U}_\mu(h_\mu) = -g_\mu h_\mu \quad \text{with} \quad h_\mu \in \{0,1\}$\ ;
\item The Quadratic potential: $\mathcal{U}_\mu(h_\mu) = \frac 12 \, h_\mu^2, \; \; h_\mu \in \mathbb{R}$ \ ;
\item The ReLU potential:
$\mathcal{U}_\mu(h_\mu) = \left\lbrace \begin{array}{r r r}
\frac 12\, { h_\mu^2}+ \theta_\mu \, h_\mu & \text{if } & h_\mu \geq 0 \\
+\infty & \text{if } & h_\mu < 0
\end{array} \right.$ \ .
\end{itemize}
\begin{figure}[b]
\begin{minipage}[c]{0.3\textwidth}
\includegraphics[width=\textwidth]{RBM_architecture.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.55\textwidth}
\caption{Architecture of Restricted Boltzmann Machines. A RBM is defined on a bidirectional bipartite graph, with a visible (v) layer that represents the data, connected to a hidden (h) layer supposed to extract statistically meaningful features from the data and, in turn, to condition their distribution. There are $N$ visible units indexed by $i$, and $M$ hidden units indexed by $\mu$. The connections between visible and hidden units are denoted by $w_{i\mu}$.}
\label{fig:archi}
\end{minipage}
\end{figure}
By marginalizing over the hidden units, one can compute the probability distribution over the visible layer:
\begin{equation} \label{marginal}
P(\boldsymbol v) = \int \prod_\mu \, dh_\mu\; P(\boldsymbol v,\boldsymbol h) \equiv \frac{1}{Z_{eff}} \exp \left[ - E_{eff}(\boldsymbol v) \right]
\end{equation}
The marginal distribution (\ref{marginal}) can be expressed analytically in terms of the weight matrix and the potentials. Training an RBM consists in fitting this marginal distribution to the data by maximum likelihood \cite{smolensky1986information}. Unlike the multivariate Gaussian distributions studied in the previous Section, the optimal RBM must be found numerically, e.g. using approximate stochastic gradient ascent on the likelihood \cite{fischer2014training}.
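In practice, the stochastic gradient ascent is often approximated by contrastive divergence. The following is a minimal CD-1 sketch for the all-Bernoulli case, assuming $\{0,1\}$ units; function names, parameter values and the learning rate are ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v_data, w, g, g_h, rng, lr=0.05):
    """One contrastive-divergence (CD-1) step for a Bernoulli-Bernoulli RBM:
    a standard stochastic approximation to the likelihood gradient."""
    # positive phase: hidden probabilities given the data
    ph_data = sigmoid(g_h + v_data @ w)
    h = (rng.random(ph_data.shape) < ph_data).astype(float)
    # negative phase: one Gibbs step back to the visible layer and up again
    pv = sigmoid(g + h @ w.T)
    v_model = (rng.random(pv.shape) < pv).astype(float)
    ph_model = sigmoid(g_h + v_model @ w)
    # ascent along  <v h>_data - <v h>_model
    B = v_data.shape[0]
    w += lr * (v_data.T @ ph_data - v_model.T @ ph_model) / B
    g += lr * (v_data - v_model).mean(axis=0)
    g_h += lr * (ph_data - ph_model).mean(axis=0)
    return w, g, g_h
```

The update is repeated over mini-batches of data; CD-1 is only one of several approximation schemes discussed in \cite{fischer2014training}.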
We stress that, in contradistinction with Boltzmann Machines (Ising models) or multivariate Gaussian distributions, there are no direct couplings between pairs of units in the same layer (hence the name, restricted). RBM can nonetheless model correlations between visible variables, as the latter can be indirectly correlated through the hidden layer. Informally speaking, instead of explaining the correlations between several visible units through a set of adequate couplings, we interpret them as collective variations driven by the common inputs (the hidden units) shared by these visible units, see Fig.~\ref{RBM_intuition}. The hidden units thus represent collective modes of variation of the data.
\begin{figure}
\includegraphics[width=.42\textwidth]{rbm-struct-1.png} \hskip.5cm
\includegraphics[width=.55\textwidth]{rbm-struct-2.png}
\caption{How to model correlations among a set of variables. {\bf A.} Boltzmann Machine approach: The matrix of pairwise correlations between variables is computed from data, and a network of couplings is inferred to reproduce those correlations. {\bf B.} Restricted Boltzmann Machine approach: observed correlations are due to one or more common input(s), whose values drive the configurations of the variables. A network of connections between the visible layer (support of data configurations) and a layer of hidden units (support of common inputs) is found to maximize the probability of the data items. The rightmost column indicates the magnitude $h$ of the hidden unit as a function of the visible configuration.}
\label{RBM_intuition}
\end{figure}
Another way to state this is to observe that, as one marginalizes over the hidden layer, effective couplings between visible layer units arise. For instance, it is easy to show that for Gaussian hidden units, i.e. for the Quadratic potential ${\cal U}_\mu$ in the list above, the effective energy over the visible layer is
\begin{equation}
E_{eff}(\boldsymbol v) = -\sum_i g_i \,v_i - \frac{1}{2} \sum_\mu \left( \sum_i w_{i\mu} v_i \right)^2 \ .
\end{equation}
In that case, we recognize a pairwise effective Hamiltonian, the Hopfield model with $M$ patterns \cite{Hopfield82,barra12}. In general, non-quadratic hidden-unit potentials generate effective Hamiltonians for the visible units with higher-order interactions. The presence of couplings to all orders produced from a unique set of $N\times M$ connections $w_{i\mu}$ has deep effects on the sampling dynamics of RBM.
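The Gaussian marginalization can be checked numerically on a tiny system: integrating each hidden unit out on a grid and comparing with the closed form $-\sum_i g_i v_i - \frac12\sum_\mu \big(\sum_i w_{i\mu} v_i\big)^2$, which holds up to an additive ($\bf v$-independent) constant. A sanity-check sketch, with arbitrary toy parameters of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 2
w = rng.standard_normal((N, M)) / np.sqrt(N)
g = rng.standard_normal(N)

grid = np.linspace(-12.0, 12.0, 4001)   # integration grid for each hidden unit

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def E_eff_numeric(v):
    """-log of the marginal weight of v, hidden units integrated numerically."""
    I = v @ w
    log_int = sum(np.log(trapezoid(np.exp(-0.5 * grid**2 + grid * Imu), grid))
                  for Imu in I)
    return -float(g @ v) - log_int

def E_eff_closed(v):
    """Closed form: -sum_i g_i v_i - 1/2 sum_mu (sum_i w_{i mu} v_i)^2."""
    I = v @ w
    return -float(g @ v) - 0.5 * float(I @ I)

v1 = np.array([1.0, 0.0, 1.0, 1.0])
v2 = np.array([0.0, 1.0, 0.0, 1.0])
# energy differences are constant-free, so the two expressions must agree
delta = (E_eff_numeric(v1) - E_eff_numeric(v2)) - (E_eff_closed(v1) - E_eff_closed(v2))
```

Comparing energy differences between two visible configurations removes the constant $\frac{M}{2}\log 2\pi$ coming from the Gaussian integrals.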
\subsection{Sampling}
\label{secsam}
The connection with data representation algorithms is best seen when considering the sampling scheme. Since there are no connections within a layer, the hidden layer units are conditionally independent given the configuration of the visible layer, and conversely; hence the following Gibbs sampling procedure, schematized in Fig.~\ref{sampling_RBM}:
\begin{itemize}
\item Compute hidden units inputs $I_\mu^H = \sum_i w_{i\mu} v_i$
\item Sample each hidden unit independently $P(h_\mu | I_\mu^H) \propto \exp \left[ - U_\mu(h_\mu) + h_\mu I_\mu^H \right]$
\item Compute the visible layer inputs $I_i^V = \sum_\mu w_{i\mu} h_\mu$
\item Sample each visible unit independently $P(v_i | I_i^V) \propto \exp \left[ (g_i + I_i^V) v_i \right]$
\end{itemize}
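The four steps above can be sketched as follows, for binary visible units and ReLU hidden units. The truncated-Gaussian sampler is a naive rejection scheme (adequate when inputs are not far below threshold); function names and toy sizes are ours:

```python
import numpy as np

def sample_relu(mean, rng):
    """Rejection sampler for a unit-variance Gaussian truncated to h >= 0."""
    h = np.empty_like(mean)
    for k, m in enumerate(mean):
        x = rng.normal(m, 1.0)
        while x < 0.0:
            x = rng.normal(m, 1.0)
        h[k] = x
    return h

def gibbs_step(v, w, g, theta, rng):
    """One back-and-forth Gibbs step: binary visible units, ReLU hidden units."""
    I_h = v @ w                          # hidden-layer inputs I^H_mu
    # P(h|I) propto exp(-h^2/2 - theta*h + h*I) on h >= 0: truncated Gaussian
    h = sample_relu(I_h - theta, rng)
    I_v = h @ w.T                        # visible-layer inputs I^V_i
    p = 1.0 / (1.0 + np.exp(-(g + I_v)))
    v_new = (rng.random(p.shape) < p).astype(float)
    return v_new, h
```

Iterating `gibbs_step` produces a Markov chain whose stationary distribution is the joint Gibbs distribution of the RBM.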
The first two steps can be seen as a stochastic feature extraction from configuration ${\bf v}$, whereas the last two steps are a stochastic reconstruction of ${\bf v}$ from the features ${\bf h}$. One can define in particular a data representation as the most likely hidden layer configuration given a visible layer configuration, that is, through the set of
\begin{equation}
h^*_\mu({\bf v}) = \arg \max_{h_\mu} P(h_\mu |{\bf v}) = \Phi_\mu(I_\mu^H({\bf v})) \ ,
\end{equation}
where $\Phi_\mu = (\mathcal{U}_\mu')^{-1}$ is the transfer function, see Fig.~\ref{transfer_functions_RBM}.
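For the hidden-unit potentials listed earlier, the transfer function can be written down explicitly; for binary (Bernoulli) hidden units the most likely value is simply a step function of the input. A minimal sketch (function names and the Bernoulli field argument are ours):

```python
import numpy as np

def phi_quadratic(I):
    # U(h) = h^2/2, so Phi = (U')^{-1} is the identity: h* = I
    return I

def phi_relu(I, theta):
    # U(h) = h^2/2 + theta*h on h >= 0, so h* = max(I - theta, 0)
    return np.maximum(I - theta, 0.0)

def phi_bernoulli(I, g=0.0):
    # h in {0,1}: the most likely value is a step function of g + I
    return (g + I > 0).astype(float)
```

The ReLU threshold $\theta$ silences weakly driven hidden units, a property that plays a central role in the compositional phase discussed below.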
\begin{figure}
\begin{minipage}[c]{0.7\textwidth}
\includegraphics[width=\textwidth]{sampling_RBM.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.25\textwidth}
\caption{Back-and-forth sampling procedure in RBM. Hidden configurations $\bf h$ are sampled from visible configurations $\bf v$, and, in turn, define the distribution of visible configurations at the next sampling step.}
\label{sampling_RBM}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}[c]{0.4\textwidth}
\includegraphics[width=\textwidth]{transfer_function.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.5\textwidth}
\caption{Transfer functions $\Phi$ for various hidden units potentials ${\cal U}_\mu$, see main text. The transfer function gives the most likely value of the hidden unit, $h^*$, as a function of the input $I^H$ received from the visible units.}
\label{transfer_functions_RBM}
\end{minipage}
\end{figure}
\subsection{Phenomenology of RBM trained on data}
\label{phenomenology}
Once maximum-likelihood training is completed, RBM can be very good generative models for complex, multimodal distributions. In the following, we describe the phenomenology of RBM, with various kinds of hidden-unit potentials, trained on MNIST, a dataset of 60,000 $28\times28$ grayscale images of handwritten digits. Each image can be flattened and binarized by thresholding the grayscale level, yielding a 784-dimensional binary vector. The following observations can be made:
\begin{itemize}
\item After training, samples drawn from the equilibrium distribution of Bernoulli or ReLU RBM look like real digits, suggesting that it is a good fit for the data distribution, see Fig.~\ref{phenomenology_RBMs}(b). On the contrary, Gaussian RBM, \textit{i.e.} pairwise Hamiltonians, do not fit the data distribution as well.
\item Each hidden unit is activated selectively by the presence of a specific feature of the data: this is seen by visualizing the columns of the weight matrix $w_{i\mu}$, see Fig.~\ref{phenomenology_RBMs}(a). The features are strokes, that is, small parts of digits. The weight matrix is therefore essentially sparse, with a fraction of nonzero weights $p \sim 0.1$, see Fig.~\ref{phenomenology_RBMs}(d).
\item For ReLU hidden units, each data image strongly activates $\sim 20$ hidden units, whereas most hidden units are silent or weakly activated, see Fig.~\ref{phenomenology_RBMs}(c). For a precise definition of how the number of strongly activated hidden units is estimated, see eqn (\ref{pr}) and \cite{tubiana2017emergence}.
\item The learnt probability distribution is very rough, with a number of local maxima of probability much larger than $N$ or $M$, as seen in Fig.~\ref{phenomenology_RBMs}(e). Remarkably, after training, each data sample is within a few pixels of a local maximum of probability. This shows the combinatorial nature of RBM, capable of generating a very large number of configurations after training.
\end{itemize}
This phenomenology raises several questions. First, how can such simple networks generate a complex distribution with a large variety of local minima, matching the original data points? Secondly, why do some hidden unit potentials give good results, whereas others do not? Lastly, can we connect this behavior to the one of the Hopfield model, corresponding to the case of quadratic potential?
\begin{figure}
\begin{minipage}[c]{0.6\textwidth}
\includegraphics[scale = 0.52]{phenomenology_RBMs2.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.35\textwidth}
\caption{Training of RBM on MNIST. Data are composed of $N=28\times 28$ binarized images, and the RBM includes $M=400$ hidden ReLU. {\bf (a)} Set of weights ${\bf w}_{\mu}$ attached to four representative hidden units $\mu$. {\bf (b)} Averages of $\bf v$ conditioned to five hidden-unit configurations $\bf h$ sampled from the RBM at equilibrium. Black and white pixels correspond respectively to averages equal to $0$ and 1; few intermediary values, indicated by grey levels, can be seen on the edges of digits. {\bf (c)} Distributions of the numbers of very strongly activated hidden units, $\hat L $ (left), and of silent hidden units, $\hat S$ (right), at equilibrium. {\bf (d)} Evolution of the weight sparsity $\hat p$ (red) and the squared weight value $W_2$ (blue). The training time is measured in epochs (number of passes over the data set), and represented on a square-root scale. {\bf (e)} Evolution of the number of distinct local maxima of $P({\bf v})$ in eqn (\ref{marginal}) (left scale) and distance to the original sample (right scale, for training and test sets). For each sample, the local maximum is obtained through $T=0$--sampling of the RBM, see Section \ref{secsam}.}
\label{phenomenology_RBMs}
\end{minipage}
\end{figure}
\subsection{Statistical mechanics of RBM}
It is hopeless to provide answers to these questions in full generality for a given RBM with parameters fitted from real data. However, statistical physics methods and concepts allow us to study the typical energy landscape and properties of RBM drawn from appropriate random ensembles. We follow this approach hereafter, using the replica method \cite{tubiana2017emergence}. We define the Random-RBM ensemble model for ReLU hidden units as follows, see also drawing in Fig.~\ref{RBM_drawing},
\begin{itemize}
\item $N$ binary visible units, $M$ ReLU hidden units, with $N,M \rightarrow \infty$ at fixed ratio $\alpha = \frac{M}{N}$.
\item uniform visible layer fields, i.e. $g_i = g,\ \forall i$.
\item uniform hidden layer thresholds, i.e. $\theta_\mu = \theta, \ \forall \mu$.
\item a random weight matrix $w_{i\mu} = \frac{\xi_{i\mu}}{\sqrt{N}} $, where each 'pattern' entry $\xi_{i\mu}$ is drawn independently, taking values $+1$ and $-1$ with probability $\frac{p}{2}$ each, and $0$ with probability $1-p$. The \textit{degree of sparsity} $p$ is the fraction of non-zero weights.
\end{itemize}
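The weight ensemble above can be sampled directly; a small sketch (function name and sizes are ours), with an empirical check that the fraction of non-zero weights matches $p$:

```python
import numpy as np

def random_rbm_weights(N, M, p, rng):
    """Sparse patterns: xi = +1 or -1 with probability p/2 each, 0 otherwise;
    weights are the patterns rescaled by 1/sqrt(N)."""
    xi = rng.choice([-1.0, 0.0, 1.0], size=(N, M), p=[p / 2, 1 - p, p / 2])
    return xi / np.sqrt(N)

rng = np.random.default_rng(0)
w = random_rbm_weights(N=2000, M=100, p=0.1, rng=rng)
sparsity = (w != 0).mean()   # empirical fraction of non-zero weights, ~ p
```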
Hence, $\alpha$, $p$, $g$ and $\theta$ are the control parameters of our model. Several variants or special cases have already been addressed in the literature. Choosing Gaussian hidden units and $\pm 1$ visible units leads back to the original Hopfield model, studied in \cite{AGS87}. The sparse weight distribution was previously introduced to study parallel storage of multiple sparse items in the Hopfield model \cite{agliari2012multitasking,agliari2013immune}.
\begin{figure}
\begin{minipage}[c]{0.6\textwidth}
\includegraphics[width=\textwidth]{random_model.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.3\textwidth}
\caption{The Random-RBM ensemble, with its control parameters: threshold $\theta$ of hidden ReLU, ratio $\alpha$ of the sizes of the hidden and visible layers, field $g$ on visible units, sparsity $p$ of the weights (rescaled by $W=1/\sqrt N$). }
\label{RBM_drawing}
\end{minipage}
\end{figure}
It is important to understand the magnitude of hidden-unit activations for a given visible layer configuration $\bf v$. Two cases are encountered:
\begin{itemize}
\item Let us call $L$ the number of hidden units $\mu$ coding for features ${\bf w}_\mu$ present in $\bf v$. These hidden units will be strongly activated, as their inputs $I^H_\mu= {\bf w}_\mu\cdot {\bf v}$ will be strong and positive, comparable to the product of the norms of ${\bf w}_{\mu}$, of the order of $\sqrt p$ for large $N$, and of $\bf v$, of the order of $\sqrt {p\,N}$. Therefore, we expect $I^H_\mu$ to scale as $m\sqrt N$, where the prefactor $m$, called magnetization, is finite in the thermodynamical limit.
\item The remaining $M-L$ hidden units $\mu'$ have, however, features ${\bf w}_{\mu'}$ essentially orthogonal to $\bf v$. Hence, the vast majority of hidden units receive random inputs $I^H_{\mu'}$ fluctuating around zero, with finite variances.
\end{itemize}
To answer the questions raised in Section \ref{phenomenology}, we are interested in computing the averages of $m$ and $L$ over the Gibbs distribution and over the random weights. They can be obtained through a replica computation of the average free energy,
\begin{equation}
f(\alpha,p,g,\theta) \equiv \lim_{N \rightarrow \infty} - \frac{1}{\beta N}\; \overline{ \log Z \left(\alpha,\beta,p,g,\theta, \{\xi_{i\mu} \} \right) } \ ,
\end{equation}
where the overbar denotes the average over the $ \{\xi_{i\mu} \} $ and the partition function reads
\begin{equation}
Z \left(\alpha,\beta, p,g,\theta,\{\xi_{i\mu}\}\right) = \sum_{{\bf v} \in \{0,1\}^N} \int \prod_{\mu=1}^{M} dh_\mu\; e^{-\beta E( {\bf v},{\bf h})} \ .
\end{equation}
After some algebra, see \cite{tubiana2017emergence}, we find that $f(\alpha,p,g,\theta)$ is obtained through optimizing a free-energy functional over the order parameters $L,m,q,r,B,C$:
\begin{itemize}
\item $m$ and $L$ are, respectively, the magnetization and the number of feature-encoding hidden units,
\item $r$ is the mean squared activity of the other hidden units,
\item $q= \frac{1}{N} \sum_i \overline{\langle v_i\rangle}$ is the mean activity of the visible layer in the ground state,
\item $B,C$ are response functions, {\em i.e.} derivatives of the mean activity of, respectively, hidden and visible units with respect to their inputs.
\end{itemize}
For non-sparse weights ($p=1$) and depending on the values of the other control parameters, the system can show one of the two following qualitatively different behaviors, as found for the Hopfield model \cite{AGS87}, see Fig.~\ref{all_phases}:
\begin{itemize}
\item A \underline{ferromagnetic} phase, in which hidden configurations with $L=1$ and $m>0$ dominate. Visible configurations have strong overlap with one feature, say, $\mu=1$. It is likely that $v_i=1$ if $\xi_{i,1}=1$ and $v_i=0$ if $\xi_{i,1}=-1$. As the choice of $\mu$ is arbitrary, there are $\alpha N$ such 'basins' of visible configurations. Phases with $L>1$, i.e. having strong overlap with several features, exist and may be thermodynamically stable, but are unfavorable: their free energies increase with $L$.
\item A \underline{spin-glass} phase, in which configurations with $m=0$ dominate. Most configurations have weak overlap $\sim \frac{1}{\sqrt{N}}$ with all hidden units.
\end{itemize}
The phase transition arises from the frustration in the system. Assume for instance that the system is in the ferromagnetic phase. The input received by a visible unit, say, $i$ has a strong contribution (of the order of $1$ as $N\to\infty$) from the strongly magnetized unit, say, $\mu=1$, and many (of the order of $\alpha N$) weak inputs (of the order of $\pm 1/\sqrt N$) from the other hidden units. As the ratio $\alpha$ increases, these numerous, weak noisy contributions win over the unique, strong signal contribution, and the system enters the glassy phase. The transition takes place at a well defined value of $\alpha$, which depends on $\theta$ and $g$ \cite{tubiana2017emergence}.
For small $p$, a new intermediate qualitative behavior emerges:
\begin{itemize}
\item The \underline{compositional} phase, in which visible configurations have strong overlap with $L$ features, where $1\ll L\ll M$, see Fig.~\ref{all_phases}. As observed for RBM trained on real data in Fig.~\ref{phenomenology_RBMs}(e), random RBM may generate a combinatorial diversity of low-energy visible configurations, corresponding to different choices of the subset $\{ \mu_1,... ,\mu_L\}$ of strongly activated hidden units. This new phase is found in the low $p$ limit, and for appropriate values of the threshold $\theta$ (large enough to silence a large number of hidden units and suppress interference, see Fig.~\ref{phenomenology_RBMs}(c)), and of the field $g$ (to reproduce the average activity of the data in the visible layer).
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{all_phases.png}
\end{center}
\caption{The three regimes of operation of Random RBM, see text. Black, grey and white hidden units symbolize, respectively, strong ($h\sim \sqrt N$), weak ($h\sim \pm 1$) and null ($h=0$) activations.}
\label{all_phases}
\end{figure}
\subsection{Validation on data}
One of the outcomes of our statistical physics analysis is that, in the compositional phase, the number $L$ of strongly activated hidden units scales as the inverse of the degree of sparsity, $p$. More precisely, $L\sim \frac{\ell}p$, when $p\to 0$, where $\ell$ is determined by minimizing the free energy of the Random-RBM model. The minimum $\ell^*$ of the free energy is found at $\ell^*>0$ in the compositional phase, contrary to the ferromagnetic phase, where $\ell^*=0$.
This prediction can be tested on RBM trained on real data, e.g. MNIST, see Fig.~\ref{phenomenology_RBMs}(b). The value of $p$ at the end of training without any regularization was found to be $\sim 0.1$, see Fig.~\ref{phenomenology_RBMs}(d). However, higher sparsities, i.e. lower values of $p$, can be imposed through regularization of the weights. To do so, we add to the log-likelihood the penalty term
\begin{equation}\label{regu}
C(\{w_{i\mu}\}) = - \sum _\mu \big(\sum_i |w_{i\mu}|\big)^x\ ,
\end{equation}
where $x\ge 0$. The case $x=1$ gives standard $L_1$ regularization, while, for $x > 1$, the effective penalty strength, $\propto \big(\sum_i |w_{i\mu}|\big)^{x-1}$, increases with the weights, hence promoting homogeneity among hidden units. After training we generate Monte Carlo samples of each RBM at equilibrium, and monitor the average number of active hidden units, $L$, estimated through the participation ratio
\begin{equation}\label{pr}
L = \frac{(\sum_\mu h_\mu^2)^2} { \sum_\mu h_\mu^4}\ .
\end{equation}
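The participation ratio (\ref{pr}) is straightforward to evaluate on sampled hidden configurations; as a sanity check, it returns $K$ for $K$ equally active units and $1$ for a single active unit. A short sketch (function name is ours):

```python
import numpy as np

def participation_ratio(h):
    """L = (sum_mu h_mu^2)^2 / sum_mu h_mu^4."""
    h2 = np.asarray(h, dtype=float) ** 2
    return float(h2.sum() ** 2 / (h2 ** 2).sum())
```

In practice, $L$ is averaged over many equilibrium samples of the hidden layer.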
By changing the value of $x$, we obtain, at the end of training, RBM with higher sparsities. Figure~\ref{scalingL} shows that the theoretical scaling law $L\sim \ell^*/p$ is well reproduced over one decade of variation of $p$. In addition, the product $L\times p$ is in good agreement with the theoretical prediction $\ell^*$ \cite{tubiana2017emergence}.
\begin{figure}
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[width=\textwidth]{validation_1.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.4\textwidth}
\caption{ Average number $L$ of active hidden units vs. degree $p$ of sparsity of the weights, for RBM trained on MNIST data. Values of the ratio $\alpha$ and of the exponent $x$ in the regularization term in eqn (\ref{regu}) are reported in the figure. The figure also shows the theoretical curve obtained for Random RBM ($CV=0$, as in Fig.~\ref{RBM_drawing}), and for RBM under another more realistic statistical ensemble of random weights, in which the degree of sparsity $p$ fluctuates with the visible sites $i$. Dashed lines show one standard deviation away from the mean value of $L$ due to finite-size fluctuations, see \cite{tubiana2017emergence} for details.
}
\label{scalingL}
\end{minipage}
\end{figure}
\section{Introduction}
In the early 80's, statistical physicists proved that ideas originating from their field could lead to substantial advances in other disciplines. Simulated Annealing, a versatile optimization procedure in which a fictitious sampling temperature is decreased until the minimum (ground state) of a cost function is reached, had major impact in applied computer science and engineering \cite{Kirkpatrick83}. Attractor neural network models for memories \cite{Hopfield82}, soon analytically solved with spin-glass techniques \cite{AGS87}, emerged as one major conceptual tool in computational neuroscience. From a theoretical point of view, it became rapidly clear that statistical physics offered a powerful framework to deal with problems outside physics, in particular in computer science and theoretical neuroscience, involving many random, heterogeneous, strongly interacting components, which had until then remained very hard to tackle.
The purpose of the present document is to present some applications of statistical physics ideas and tools to the understanding of high-dimensional representations in neural networks. How the brain represents and processes information coming from the outside world is a central issue of computational neuroscience \cite{Rieke97}. Experimental progress in electrophysiological and optical recordings now makes it possible to record the activity of populations of tens to thousands of neural cells in behaving animals, opening the way to study this question with unprecedented access to data and to ask new questions about brain operation on large scales \cite{Sompolinsky14}. Concomitantly, machine learning algorithms, largely based on artificial neural network architectures, have recently achieved spectacular performance in a variety of fields, such as image processing, or speech recognition/production \cite{LeCun2015}. How these machines produce efficient representations of the data and of their underlying distributions is a crucial question \cite{Bengio2013}, far from being understood \cite{Coveney16}. Profound similarities seem to emerge between the representations encountered in real and artificial neural networks \cite{poggio16} and between the questions raised in both contexts \cite{Ganguli12}.
It is utterly hard to cover recent advances in such a diverse and vivid field, and the task is impossible in two lectures of two hours each. The material gathered here merely reflects the interests and, presumably, the ignorance of the authors more than anything else. The present notes focus on two applications of statistical physics to the study of neural representations in the contexts of computational neuroscience and machine learning. The first part is motivated by the representation of spaces, i.e. multiple environments, in hippocampal place-cell networks. An extension of Hopfield's attractor neural network to the case of finite-dimensional attractors is introduced and its phase diagram and dynamical properties, such as diffusion within one attractor or transitions between distinct attractors, are analyzed. We also show that effective, functional Ising models fitted from hippocampal multi-electrode recordings (limited to date to few tens of neurons) or from 'neural' data generated by spatially subsampling our model, share common features with our abstract model, and can be used to decode and to track the evolution of spatial representations over time. In a second part, we move to representations of data by machine learning algorithms. Special emphasis is put on two aspects: low-dimensional representations achieved by principal component analysis, and compositional representations, produced by restricted Boltzmann machines combining multiple features inferred from data. In both cases, we show how statistical physics helps unveil the different properties of these representations, and the role of essential control parameters.
\section{Representation of space(s) in the hippocampus: model}
\subsection{Context and background}
\subsubsection{Zero-dimensional attractors: Hopfield model of associative memory}
Statistical Mechanics and Neuroscience are not so far apart as they may seem at first sight. Indeed, brains are made of billions of neurons that are connected together. In many cases, brain functions are thought to be the outcome of collective states. This makes it a good playground for Statistical Mechanics. Here, we will focus on one particular brain function: memory.
In 1949, D. Hebb had the visionary intuition that memory could correspond to the retrieval of certain activity patterns in a network of interconnected neurons \cite{Hebb49}. This \emph{attractor hypothesis} goes as follows: (1) what is memorized are attractors of the network, \emph{i.e.} activity states stable under the dynamical evolution rule; hence, recalling a memory corresponds to retrieving its activity pattern; (2) attractors are stored in the network couplings $J_{ij}$ that govern the network dynamics and stable states; (3) a possible way to make an arbitrary pattern an attractor is to 'wire together neurons that fire together' in that pattern (the so-called 'Hebb rule').
In 1982, J.J. Hopfield \cite{Hopfield82} proposed a model based on Hebb's ideas in the case of zero-dimensional, or, equivalently, point attractors. This model, known as the Hopfield model, is strongly inspired by statistical physics models used in the context of magnetic systems, such as the Ising model. It consists of a number $N$ of binary neurons ${\{s_i\}_{i=1\dots N}=\pm 1}$ and stores a number $P$ of configurations ${\{\xi_i^\mu\}_{i=1\dots N,\mu=1\dots P}=\pm1 }$ (point attractors), e.g. independently and uniformly drawn at random. The synaptic couplings that allow these configurations to be attractors are given by the Hebb rule:
\begin{equation}
\label{eq:hebb}
J_{ij}=\frac1N\sum\limits_{\mu=1}^P\xi_i^\mu\xi_j^\mu\ \ \forall i,j\ .
\end{equation}
The last thing to define is the dynamics of the network. In the original paper \cite{Hopfield82}, time was discretized and, at each time step $t$, neurons responded deterministically to their local fields, through the updating rule:
\begin{equation}\label{dyna}
s_i^{t+1} = \text{sign} \big( \sum_j J_{ij} s_j ^t \big) \ .
\end{equation}
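One can check numerically that, at low memory load $\alpha=P/N$, patterns stored through the Hebb rule (\ref{eq:hebb}) are (almost exactly) fixed points of the dynamics (\ref{dyna}). A minimal sketch, with toy sizes of our choosing and self-couplings set to zero as is customary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 5                        # low memory load alpha = P/N = 0.01
xi = rng.choice([-1.0, 1.0], size=(P, N))
J = (xi.T @ xi) / N                  # Hebb rule: J_ij = (1/N) sum_mu xi_i xi_j
np.fill_diagonal(J, 0.0)             # no self-couplings

s = xi[0].copy()                     # start exactly on pattern mu = 1
s_next = np.sign(J @ s)              # one deterministic update step
overlap = float((s_next * xi[0]).mean())   # stays ~1: the pattern is stable
```

At low load the crosstalk from the other patterns is of order $\sqrt{(P-1)/N}\ll 1$ per site, so the signal term dominates and (almost) no spin flips.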
Later studies, e.g. \cite{AGS85}, incorporated the possibility of stochasticity in the response, through a noise parameter $T$, so that the system obeyed detailed balance for the Gibbs distribution associated to the Hamiltonian
\begin{equation} \label{ej}
E_J[{\bf s}]=-\sum\limits_{i<j}J_{ij}s_i s_j \
\end{equation}
at 'temperature' $T$.
In terms of biological relevance, the Hopfield model is of course extremely schematic. Yet, it captures many fundamental and robust aspects of neurons (in particular their linear summation of inputs, combined with a thresholding effect) and networks (synaptic coefficients with values affected by the activity through the Hebb rule), while remaining, to a large extent, analytically tractable. The Hopfield model aroused great excitement in the Statistical Mechanics community during the 80's, since it shared many common points with frustrated and disordered magnetic systems. Tools and methods from the statistical physics of disordered systems that had just been developed in the field of spin glasses \cite{mezard87} could therefore be used to derive analytically the properties of the Hopfield model \cite{AGS85}. The first question was to check whether the patterns ${\{\xi_i^\mu\}_{i=1\dots N,\mu=1\dots P}}$ were indeed attractive fixed points of the dynamics in eqn (\ref{dyna}). The answer turned out to be positive (up to a small fraction of the $N$ neurons) for small enough values of $T$ and of the ratio $\alpha=P/N$ (in the double limit $N,P\to\infty$), i.e. for not too strong noise and memory load. Many aspects of this model were studied and refined, in particular to make it more biologically realistic. The reader is kindly referred to \cite{Amit89} for a detailed presentation of the literature. Rather, we will focus on an extension of this model to a different kind of attractors, namely finite-dimensional attractors.
\subsubsection{Place cells in the rodent hippocampus}
Let us now turn to real brains, more specifically, how space is represented in the brain \cite{Moser08review}. Experimentalists use small electrodes that, implanted in the brain of awake animals, are able to record the simultaneous activity of a population of \emph{single neurons}.
In particular, in a brain area called hippocampus, O'Keefe \& Dostrovsky discovered the existence of 'place cells' when recording in rodents freely moving in an enclosure \cite{OKeefeDostrovsky71}. These neurons have the surprising property that they fire only when the animal is physically located in a precise region of space, hence their name. The region of activity corresponding to a place cell in the environment defines its 'place field'. In the CA3 area of the hippocampus, a region with strong recurrent connections between pyramidal cells, the different place fields attached to a given place cell across different environments visited by the rodent seem to be totally uncorrelated --- a property called global 'remapping'. In another hippocampal area, called CA1, remapping of place fields from one environment to another is generally weaker; the change in the activity of a place cell is characterized mainly by a modulation of its firing rate, a phenomenon called rate remapping, though global changes of the place fields as in CA3 may also be observed for some cells, see Fig.~\ref{fig:remappingca1}.
\begin{figure}
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[width=\textwidth]{remapping-CA1.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.45\textwidth}
\caption{
Remapping of the place field of one recorded place cell in CA1, for a rat exploring two square environments A and B with identical shapes. The size of the environments is $60\times 60$ cm. The figure reports the average firing rate of the recorded cell when the rat is in each of the $3\times 3$-cm spatial bins; values in Hz, see color bar. Data from the experiment by K. Jezek et al \cite{Jezek11}. }
\label{fig:remappingca1}
\end{minipage}
\end{figure}
For many reasons, the hippocampus --- more precisely its subregion CA3 --- is often supposed to work as a \emph{continuous} attractor neural network \cite{TrevesRolls94,Tsodyks99}. This means that the attractors are not point configurations (of zero dimension, as in the Hopfield model), but manifolds of one \footnote{The one-dimensional case would correspond to linear corridors.} or two dimensions\footnote{Though there is experimental evidence that place cells also code for rich contextual information \cite{SmithMizumori06}, we consider only the spatial correlate of place-cell activity in the present document.}: each manifold corresponds to an environment, \emph{i.e.} the collection of activity configurations of the hippocampal neural population characterizing the set of all positions in that environment. Apart from the dimensionality of the attractors, place cells share common points with the Hopfield model, such as the absence of correlations between attractors due to random remapping, and the Hebbian learning rule, which has biological counterparts. Hence, it is appealing to extend the Hopfield model to continuous attractors.
\subsection{A model for memorizing D-dimensional attractors (spatial maps)}
\label{secmodel}
We thus introduce a model for place cells in one- or two-dimensional spaces (the extension to higher dimensions is straightforward). As an extension of the Hopfield model, our model is based on binary neurons; other models with real-valued neural variables, e.g. firing rates, can be found in the literature \cite{BattagliaTreves98,Samsonovich97}.
The $N$ place cells are modeled by binary units $s_i$, equal to 0 (silent state) or 1 (active state)\footnote{We will hereafter use indifferently the terms ``neuron'', ``place cell'' and ``spin'', by analogy with magnetic systems.}. These neurons interact through excitatory couplings $J_{ij}$. Moreover, they interact with inhibitory interneurons, whose effect is to maintain the total activity of the place cells at a fraction $f$ of active cells (global inhibition). We also assume that there is some stochasticity in the response of the neurons, controlled by a noise parameter $T$. All these assumptions come down to considering that the network states are distributed according to the Gibbs distribution associated with the Hamiltonian (\ref{ej}), restricted to the configurations of spins $\bf s$ such that
\begin{equation}\label{constraint1}
\sum_i s_i=f N \ .
\end{equation}
We want to store $L+1$ environments in the coupling matrix. We call 'place field' the position in space where a place cell preferentially fires. An environment $\ell$ is defined by a random permutation $\pi^\ell$ of the $N$ neurons' place fields (the place-field centers being regularly arranged on a grid). This models the experimentally observed remapping of place fields from one map to another\footnote{In this basic version of the model, every place cell has a place field in every environment. The possibility of silent cells has been taken into account in \cite{Monasson13}.}. With this definition, an environment is said to be stored when activity patterns localized in this environment are stable states of the dynamics; in other words, when the configurations in which the active neurons have neighbouring place fields in this environment are equilibrium states. To make this possible, we assume a Hebbian prescription for the couplings $J_{ij}$, which is a straightforward extension of the Hopfield synaptic matrix to the case of quasi-continuous attractors. This rule is illustrated in Figure~\ref{fig:remappingmodel}, and is mathematically described as follows:
\begin{itemize}
\item additivity: $J_{ij}=\sum\limits_{\ell=0}^LJ_{ij}^\ell$ where the sum runs over all the environments.
\item potentiation of excitatory couplings between units that may become active together when the animal explores the environment:
\begin{equation}\label{rule1}
J^\ell_{ij} =
\frac 1N \ \hbox{\rm if} \ d^\ell_{ij} \le d_c\ , \quad
0 \ \hbox{\rm if} \ d^\ell_{ij} > d_c \ ,
\end{equation}
where $d^\ell_{ij}$ is the distance between the place-field centers of $i$ and $j$ in the environment $\ell$; for instance, in dimension $D=1$, $d^\ell_{ij}=\frac 1N|\pi^\ell(i)-\pi^\ell(j)|$. $d_c$ represents the distance over which place fields overlap. In practice, it is chosen so that, in each environment, each neural cell is coupled to a fraction $w$ of the other cells (its neighbours); in dimension $D=1$ again, we may choose $d_c=\frac w2$. The $\frac 1N$ factor in eqn (\ref{rule1}) ensures that the total input received by a cell remains finite as $N$ goes to infinity, a limit case in which exact calculations become possible \cite{lebowitz66}.
\end{itemize}
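The remapping and coupling rule of eqn (\ref{rule1}) can be sketched as follows in dimension $D=1$ (a minimal illustration; the sizes are arbitrary, environments are random permutations of place-field centers on a ring, and a tiny tolerance is added to the distance test to guard against floating-point rounding at the boundary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, w = 200, 3, 0.05          # neurons, number of extra environments, connectivity
d_c = w / 2                     # place-field interaction range in dimension D = 1

def env_couplings(perm):
    """J^ell_ij = 1/N if the (periodic) place-field distance is <= d_c, else 0."""
    pos = perm / N                              # place-field center of each neuron
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, 1.0 - d)                  # distance on the ring [0, 1)
    J = (d <= d_c + 1e-12).astype(float) / N    # tolerance for boundary rounding
    np.fill_diagonal(J, 0.0)
    return J

# environment 0 = reference (identity), the others are random remappings
perms = [np.arange(N)] + [rng.permutation(N) for _ in range(L)]
J = sum(env_couplings(p) for p in perms)        # additivity over environments
```

With these choices each cell is coupled, within each environment, to exactly $wN$ neighbours, as stated in the text.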
\begin{figure}
\begin{minipage}[c]{0.4\textwidth}
\includegraphics[width=\textwidth]{remappingmodel.png}
\end{minipage}\hfill
\begin{minipage}[c]{0.5\textwidth}
\caption{
Remapping and connectivity rule in the model, illustrated with three units and ${L+1}$ two-dimensional environments. The place field centers of the units are displayed respectively in red, blue and green. Thick yellow lines indicate the excitatory couplings between cells with neighbouring place fields in each environment. These place fields overlap; here, for the sake of clarity, only the centers of the place fields are represented.
}
\label{fig:remappingmodel}
\end{minipage}
\end{figure}
\subsection{Replica theory and phase diagram}
\label{sec:phasediag}
The aim of this calculation is to study the stable states of the network, and to find under which conditions these stable states correspond to a set of active neurons whose corresponding place fields are nearby in one of the environments. In other words, we want to know for which parameter values the Hebbian synaptic matrix (\ref{rule1}) ensures the retrieval of the stored maps. The system under study features interactions that are both disordered (due to the random allocation of place fields in each map) and frustrated (owing to the competition between the excitatory synapses and the global inhibition).
We start by computing the free energy of the system,
\begin{equation}
F=-T\log Z_J(T)\ , \quad \text{where}\quad Z_J(T)=\sum_{\boldsymbol s\ \text{with constraint}\ (\ref{constraint1})}\exp(-E_J(\boldsymbol s)/T)\ .
\end{equation}
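For very small systems, the constrained partition function above can be evaluated by brute-force enumeration, which is a useful check for approximate methods (a toy sketch, assuming the pairwise energy $E_J({\bf s})=-\sum_{i<j}J_{ij}s_is_j$; the nearest-neighbour ring couplings below are a hypothetical choice, not the model's full $J_{ij}$):

```python
import numpy as np
from itertools import combinations

def partition_function(J, f, T):
    """Z_J(T) by enumerating the configurations satisfying sum_i s_i = f N
    (feasible for small N only); energy E_J(s) = -sum_{i<j} J_ij s_i s_j."""
    N = J.shape[0]
    Z = 0.0
    for active in combinations(range(N), int(f * N)):
        s = np.zeros(N)
        s[list(active)] = 1.0
        E = -0.5 * s @ J @ s          # factor 1/2 since J is symmetric, zero diagonal
        Z += np.exp(-E / T)
    return Z

# toy check: ring of N = 10 cells with nearest-neighbour couplings 1/N
N = 10
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = 1.0 / N
Z = partition_function(J, f=0.2, T=0.1)
```

On this toy ring the sum can also be done by hand: of the $\binom{10}{2}=45$ two-cell configurations, the 10 adjacent pairs have energy $-0.1$ and the 35 others energy 0, so $Z = 35 + 10\,e$.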
This quantity depends a priori on the realization of the random permutations in each map. We assume that, in the large-$N$ limit, the free energy is self-averaging: its value for a given realization of the disorder is typically close to its average over all possible realizations of the disorder, which is thus a good approximation of $F$. The randomness of the remapping process is thus a key hypothesis for the model to be tractable. To compute the average of the logarithm of $Z_J(T)$ we use the replica method \cite{mezard87}: we first compute the $n^{th}$ moment of $Z_J(T)$ for integer $n$, and then take the $n\to 0$ limit through the identity $\overline{\log Z_J(T)}=\lim_{n\to 0}\big(\overline{Z_J(T)^n}-1\big)/n$.
Since we are interested in configurations where the place fields of the active neurons are spatially concentrated in one of the environments, we arbitrarily select one of them (the ``reference environment'') and average over the $L$ remaining permutations; details about the calculation can be found in \cite{Monasson13}. This choice is immaterial, as the differences between environments are eventually averaged out. In the reference environment, neurons are indexed in the same order as their place fields, which allows us to move from a microscopic activity configuration $\boldsymbol s$ to a macroscopic activity density over continuous space
\begin{equation}\label{density_ave}
\rho(x) \equiv\lim _{\epsilon \to 0}\lim _{N \to \infty} \; \frac 1{\epsilon N} \sum_{(x-\frac \epsilon 2)N\le i < (x+\frac \epsilon 2)N}\overline{ \langle s _i \rangle_J }\ ,
\end{equation}
where the overbar denotes the average over the random remappings, while the brackets correspond to the average over the fast noise. For simplicity, we have assumed here that the environment is one-dimensional, but the above formula can easily be extended to higher dimensions. Note that our model is analytically tractable in the large-$N$ (thermodynamic) limit, as each unit weakly interacts with an infinite number of neighbours. The mean-field approximation therefore becomes exact, in the sense that order parameters such as the density in (\ref{density_ave}) exhibit no fluctuations when $N\to\infty$; however, contrary to standard mean-field approaches, these order parameters depend on space \cite{lebowitz66}.
The case of a single (reference) environment, i.e. $L=0$, is strongly reminiscent of the Lebowitz-Penrose theory of the liquid-vapor transition \cite{lebowitz66}: the continuous translational symmetry is spontaneously broken at low enough temperature, i.e. $\rho(x) \ne f$, and a liquid drop (bump of high-density fluid) is surrounded by a low-density vapor, see Fig.~\ref{fig:lp}. This bump can then freely diffuse, and describes a finite-dimensional continuum of ground states.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{lebopenr.png}
\caption{Phase diagram of Lebowitz \& Penrose's theory of the liquid/vapor transition, in dimension $D=1$ (periodic boundary conditions). Insets show the density of particles $\rho(x)$ as a function of the position $x\in[0;1]$. Parameters: $f=0.1$, $w=0.05$. Note the coexistence between the homogeneous and bump states at intermediate temperatures. The location of the bump is arbitrary.
}
\label{fig:lp}
\end{figure}
In the $L>0$ case, the $n^{th}$ moment of $Z_J(T)$ is given by equation (20) in \cite{Monasson13}. The averaged term depends on the configurations $\boldsymbol s^1,\cdots,\boldsymbol s^n$ only through the overlap matrix with entries $q^{ab}=\frac 1N \boldsymbol s^a \cdot\boldsymbol s^b$ \cite{mezard87,Monasson13}.
Then, to perform the $n\to 0$ limit, we consider the replica symmetric Ansatz, which assumes that the overlaps $q^{ab}$ take a single value (over all replica indices $a\ne b$),
\begin{equation}\label{defqab}
q \equiv \frac 1N \sum_{i} \overline{ \langle s _i \rangle_J ^2}\ ,
\end{equation}
which is the Edwards-Anderson parameter of spin glasses, characterizing the site-to-site fluctuations of the spin magnetizations \cite{EdwardsAnderson75}. This Ansatz is generally valid at high enough temperature, when the Gibbs measure defined on the energy landscape is not too rough, see below. It allows us to compute the free energy as a function of the order parameters $\rho(x)$, $\mu(x)$ (the chemical potential conjugate to $\rho(x)$), $q$ and $r$ (conjugate to $q$). $\mu(x)$ and $r$ have simple interpretations. The effective field acting on neuron $i$, whose place field is located at $x=i/N$ in the reference environment, is the sum of two terms: a 'signal' contribution $\mu(x)$ coming from the neighboring neurons $j$ in the retrieved map (through the couplings $J^0_{ij}$), and a Gaussian noise, of zero mean and variance $\alpha\, r$, coming from the other maps $\ell\ge 1$ (see Fig. 14 in \cite{Monasson13}). Here, $\alpha\equiv L/N$ denotes the memory load.
The four order parameters (two scalars, two functions) fulfill the following saddle-point equations, obtained by extremizing the free-energy functional:
\begin{eqnarray}\label{eq:saddlepoint1}
r&=&2(q-f^2)\sum\limits_{k\geq 1}\left[\frac{k \pi}{\sin(k\pi w)}-\beta(f-q)\right]^{-2}\ , \quad
q=\int\mathrm{d}x\int\mathrm{D}z\; \big[1+e^{-\beta z\sqrt{\alpha r}-\beta\mu(x)}\big]^{-2}\ ,\nonumber\\
\rho (x)&=&\int\mathrm{D}z\; \big[1+e^{-\beta z\sqrt{\alpha r}-\beta\mu(x)}\big]^{-1}\ , \quad
\mu(x)=\int\mathrm{d}y\, J_w(x-y)\, \rho(y)+\lambda\ ,
\end{eqnarray}
where $\beta\equiv1/T$, $\mathrm{D}z=\mathrm{d}z\,\exp(-z^2/2)/\sqrt{2\pi}$ is the Gaussian measure, and $\lambda$ is determined so as to enforce the fixed-activity constraint ${\int\mathrm{d}x\,\rho(x)=f}$, see (\ref{constraint1}). The precise expression of $r$ depends on the eigenvalue spectrum of the coupling matrix $J^0_{ij}$. Changing the hard cut-off $d_c$ to a smooth, e.g. exponential, decay of the coupling with the distance $d_{ij}^\ell$ between the place-field centers would change the expression for $r$, but would not affect the overall behaviour of the model.
We find three distinct solutions to these coupled equations:
\begin{itemize}
\item a paramagnetic phase (PM), corresponding to high levels of noise $T$, in which the average local activity is uniform over space, $\rho(x)=f$, and neurons are essentially uncorrelated, $q=f^2$.
\item a 'clump' phase (CL), where the activity depends on space, {\em i.e.} $\rho(x)$ varies with $x$ and is localized in the reference environment. This phase corresponds to the 'retrieval phase' where the environment is actually memorized. In fact, all the $L+1$ environments are memorized since any of them could be chosen as the reference environment. Note that the value of $x$ (center of the bump of activity) is totally arbitrary, as all positions are equivalent after averaging over the permutations.
\item a glassy phase (SG), corresponding to large loads $\alpha$, in which the local activity $\langle s_i\rangle$ varies from neuron to neuron ($q>f^2$), but does not cluster around any specific location in space in any of the environments ($\rho(x)=f$ after averaging over remappings). In this SG phase the crosstalk between environments is so large that none of them is actually stored in the network activity.
In the SG phase, contrary to the CL phase, no environment is memorized: this is the 'blackout catastrophe' \cite{Amit89} already described in the Hopfield model, in which retrieval is also an all-or-nothing phenomenon.
\end{itemize}
We now need to determine which solution is selected as a function of $\alpha$ and $T$, that is, the phase of lowest free energy, which is thermodynamically favored, as well as the domains of existence (stability) of these three phases against longitudinal and replicon modes \cite{deAlmeidaThouless78}, and the transition lines between them. To study stability, we write the Hessian of the free energy and study its eigenvalues in the longitudinal and replicon sectors; the transition between two phases is the line where their free energies are equal. We have carried out these calculations in the one-dimensional case, as detailed in ref. \cite{Monasson13}. The outcome is the phase diagram shown in Fig.~\ref{fig:phasediag}, displaying the domains of the three phases in the $(\alpha,T)$ plane:
\begin{itemize}
\item the paramagnetic solution exists for all $\alpha$, $T$ and is stable for ${T>T_\text{PM}(\alpha)}$ displayed with the dot-dashed line in Fig.~\ref{fig:phasediag}.
\item the glassy phase exists for ${T<T_\text{PM}(\alpha)}$, and always exhibits replica symmetry breaking; we expect the breaking to be continuous in this region, as in the celebrated Sherrington-Kirkpatrick model \cite{mezard87}.
\item the longitudinal stability of the clump phase is computed numerically and shown with the thin dashed line in Fig.~\ref{fig:phasediag}. The clump is stable against replicon modes except in a small region at low $T$ and high $\alpha$ (dotted line). An interesting feature of the stability domain of the clump phase is the reentrance of its high-$\alpha$ boundary.
\end{itemize}
We have checked this analytically-derived phase diagram by Monte Carlo simulations. For a detailed comparison of this phase diagram with the one of the Hopfield model, and how the dimensionality of the attractors plays a role, see \cite{theseSophie}.
\begin{figure}
\begin{minipage}[c]{0.6\textwidth}
\includegraphics[width=\textwidth]{figure9.jpg}
\end{minipage}\hfill
\begin{minipage}[c]{0.33\textwidth}
\caption{
Phase diagram in the $(\alpha,T)$ plane in $D=1$ with $f=0.1$ and $w=0.05$. Thick lines: transition between phases. Dashed-dotted line: ${T_\text{PM}(\alpha)}$. Thin dashed line: CL phase's longitudinal stability regions. Dotted line: CL phase's RSB line. ${\alpha_\text{CL}}$: storage capacity at ${T=0}$ of the replica-symmetric clump phase. ${\alpha_g}$: CL-SG transition load at ${T=0}$. ${T_\text{CL}}$: temperature of loss of stability of the clump at ${\alpha=0}$. ${T_c}$: CL-PM transition temperature at ${\alpha=0}$. ${T_\text{PM}=T_\text{PM}(\alpha=0)}$ (see text).
}
\label{fig:phasediag}
\end{minipage}
\end{figure}
\subsection{Dynamics within one map and transitions between maps}
The phase diagram above informs us about the stable states of the model. For moderate temperature and memory load, i.e. in the CL phase, thermodynamically stable states have their activity spatially localized somewhere in one of the maps. But this does not determine \emph{which} map is retrieved, nor \emph{where} in this map, i.e. at which position the place fields of the active cells overlap. Indeed, under the influence of noise, the bump of activity can move around within a given map, and also jump to another map. We have studied both these dynamics, within one map and between maps, in \cite{Monasson14} and \cite{Monasson15}, respectively.
Within one map, we have shown formally that, in the case of a single continuous attractor (one map, \emph{i.e.} $\alpha=0$), the bump of activity behaves like a quasi-particle with little deformation. This quasi-particle undergoes pure diffusion, with a diffusion coefficient that can be computed exactly from first principles, i.e. from the knowledge of the microscopic flipping rates of the spins in the Monte Carlo simulations. The diffusion coefficient scales as $1/N$, see Eq.~(31) in \cite{Monasson14}. When a force is imposed on the spins, see Section \ref{secforce}, the activity changes so as to move the bump; an illustration is shown in Fig.~\ref{montecarlo}. It can be shown analytically that the mobility of the bump and its diffusion coefficient obey the Stokes-Einstein relation.
\begin{figure}
\hspace*{\fill}%
\subcaptionbox{Session A\label{reference-a}}{\includegraphics[width=1.5in]{refA.jpg}}\hfill%
\subcaptionbox{Session B\label{reference-b}}{\includegraphics[width=1.5in]{refB.jpg}}\hfill%
\subcaptionbox{Test Session\label{test}}{\includegraphics[width=1.5in]{test.jpg}}%
\hspace*{\fill}%
\caption{Monte Carlo simulation sessions of our memory model in the case of two 1D environments (random permutations), denoted by A and B. X-axis: states of the system ${\bf s}(t)$ (black dots correspond to active neurons, $s_i=1$, and white dots to silent cells, $s_i=0$), with neurons ordered according to their place-field centers in the A (left part of columns) or B (right part of columns) permutations. Y-axis: time in MC rounds, increasing from top to bottom. The bump is forced to move rightwards by an external force, see \cite{Monasson14}.
In columns {\bf (a)} and {\bf (b)}, the system is initialized with a localized bump of activity in environments, respectively, A and B.
Column {\bf (c):} Test simulations composed of the second halves of simulations reported in {\bf (a)} and {\bf (b)} used for decoding purposes, see text. Parameter values: $T=0.006$, $N=1000$, $w = 0.05$, $f= 0.1$.}
\label{montecarlo}
\end{figure}
When several maps are stored in the couplings, the resulting disorder creates an effective free-energy landscape for the bump of activity in the reference environment. The free-energy barriers typically scale as $\sqrt N$, and are correlated over a length of the order of the bump size, see \cite{Monasson14}. In one dimension, the bump therefore effectively undergoes Brownian motion in a Sinai potential, with strongly activated diffusion. In higher dimensions, diffusion is facilitated with respect to the 1D case, as can be observed in simulations.
In addition to moving within the reference environment, the bump can also spontaneously jump between maps. Fast transitions between maps, evoked by light inputs, have been observed by K. Jezek and colleagues in the so-called 'teleportation experiment' \cite{Jezek11}. Understanding these transitions in the framework of a simple model provides insight into the mechanisms involved in the biological system. Map-to-map transitions can again be studied with replica theory, but in a more subtle framework, in which solutions with non-uniform activities in two maps (and not only one, as in eqn (\ref{eq:saddlepoint1}) above) are sought. There are two scenarios for spontaneous transitions between spatial representations, see Fig. 4 in \cite{Monasson15}:
\begin{itemize}
\item through a mixed state, with bumps of activity in both maps; these bumps are weaker than the single-map bump of the CL phase. This scenario is preferred (has lower free-energy cost) at low $T$. Transitions take place at special 'confusing' positions, where the two maps locally resemble each other most.
\item through a non-localized state, i.e. through the PM phase. In the liquid-vapor analogy, the bump of activity in map A evaporates, and then condenses in map B. This scenario is preferred
(has lower free-energy cost) at high $T$ (but sufficiently low to make the CL phase thermodynamically favorable with respect to PM, see phase diagram in Fig.~\ref{fig:phasediag}).
\end{itemize}
We show in Fig.~\ref{figbarriere}A the rate of transitions between maps, computed from Monte Carlo simulations; see the Supplemental Material of \cite{Monasson15} for details. We observe that the rate increases with temperature, and decreases with the load and the system size. According to Langer's nucleation theory \cite{Langer69}, we expect the rate to be related to the free-energy barrier $\Delta F$ between the CL phase in environment A and the CL phase in environment B through (see formula 3.37 in \cite{Langer69}),
\begin{equation}\label{lang6}
R = \kappa \; \sqrt{\frac{T}{2\pi\,N\,|\lambda_-|}}\; {\cal V}\; \exp\big( - N\, \Delta F/T\big) \ ,
\end{equation}
where $\kappa$ is the growth rate of the unstable mode at the transition state, $\lambda_-$ is the unique negative eigenvalue of the Hessian of the free energy at the transition state, and ${\cal V}$ is the volume of the saddle-point subspace (resulting from the integral over the continuous degrees of freedom that leave the saddle-point configuration globally unchanged). Hence, we expect the rates measured for different system sizes to collapse onto each other upon the following rescaling:
\begin{equation}\label{sca}
R \to - \frac{\log\big( R\, \sqrt N \big) }N \ .
\end{equation}
This scaling is nicely confirmed by the numerics, see Fig.~\ref{figbarriere}B: the data collapse onto a limiting curve, which gives access to the ratio $\Delta F/T$ of the barrier height to the temperature. Note that $\Delta F$ is itself a function of the temperature $T$, calculated in \cite{Monasson15}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figbarriere}
\end{center}
\caption{Map-to-map spontaneous transitions. {\bf A.} rate $R$ of transitions computed from Monte Carlo simulations, as a function of the temperature $T$, see \cite{Monasson15}. Parameters: $f = 0.1$, $w = 0.05$. {\bf B.} Replotting the same data as in panel {\bf A} for $\alpha= 0.006$ after transformation in eqn (\ref{sca}) allows us to estimate the ratio of the free-energy barrier over temperature, see eqn (\ref{lang6}).}
\label{figbarriere}
\end{figure}
\section{Representation of space(s) in the hippocampus: decoding and data analysis}
\subsection{Decoding neural representations with effective Ising networks}
The problem of decoding which brain state is internally represented, from the observation of neural activity, has a natural application in experiments involving the simultaneous recording of a neural population \cite{cocco2017functional}. The \textit{decoding} problem can be tackled by learning statistical properties of a known set of brain states and classifying new observations accordingly, a problem deeply connected to high-dimensional classification in machine learning. An example is provided by the memory of environments in the hippocampus. Activity is recorded while the animal explores a set of environments, each associated with a memorized cognitive map; these \textit{reference sessions} are then used to learn models of the activity associated with each map. In turn, these models can be used to decode activity recorded during a controlled \textit{test session}, i.e. to decide which map this activity is associated with. During the test session, environmental cues are manipulated by the experimentalist, and decoding the internal response to the change of experimental conditions allows us to investigate the induced dynamics over internal representations \cite{Jezek11, posani2017functional}.
We can tackle the decoding problem by inferring, for each brain state $M$, a probability distribution $P({\bf s} | M)$ over the neural patterns ${\bf s}$\footnote{For definiteness, we will hereafter consider the neural configuration at a given time, ${\bf s} = \{s_1, s_2, \ldots , s_N\}$, to be the set of binarized activities of the $N$ neurons under consideration, with $s_i=0$ if neuron $i$ is silent and $s_i=1$ if it is active, see Section \ref{secising} for more details.}. These distributions can then be used to decode the internal state $M$ given an observation (a neural pattern) ${\bf s}$ in the test session. More precisely, $M$ is decoded by maximizing the log-likelihood
\begin{equation}
\mathcal{L}(M |{\bf s}) = \log P({\bf s} |M)\ .
\end{equation}
This inference framework relies on the definition of a parametric probability distribution, whose parameters are inferred by solving the corresponding \textit{inverse problem} from reference data. Following the maximum-entropy principle, our choice is to use the family of graphical models \cite{jaynes1957information, wainwright2008graphical, MacKayBook} as parametric distributions. Depending on the reference sample size and/or the complexity of the representations, we can invert the Independent model, which accounts for the different average activations of the neurons in different brain states, or go a step further and include correlations between neural activities, defining an Ising model for each state $M$:
\begin{equation}\label{pisi}
P({\bf s} |M) = \frac{\exp{\left(\sum_{i} h^{M}_{i} s_{i} + \sum_{i<j} J_{ij}^{M} s_{i}s_{j} \right)}}{\mathcal{Z}^{M}(h,J)}
\end{equation}
where ${\cal Z}^M$ is a normalization constant
\begin{equation}\label{z}
{\cal Z}^M(h,J)= \sum_{\bf s} \exp{\left(\sum_{i} h^{M}_{i} s_{i} + \sum_{i<j} J_{ij}^{M} s_{i}s_{j} \right)}\ .
\end{equation}
The core steps of the Ising decoding procedure are:
\begin{enumerate}
\item {\em Reference session.} For each brain state $M$, (a) collect samples of neural patterns recorded in the known brain state $M$ (reference session), and compute the frequencies $p_{i}^{M}$ and pairwise joint frequencies $p_{ij}^{M}$ of the recorded neurons; (b) find the Ising model that reproduces these quantities on average, i.e. such that $\left< s_{i} \right> = p_{i}^{M}$ and $\left< s_{i}s_{j} \right> = p_{ij}^{M}$, where $\left< \cdot \right>$ denotes the average over the probability distribution $P({\bf s} |M)$. This is a highly non-trivial computational problem, reviewed in Section \ref{secising}.
\item{\em Test session.} Given a neural pattern from the test session ${\bf s}^{t}$, compute the log-likelihood of each brain state, and decode the internal state as the most likely one
\begin{equation}
M^{t} = \argmaxB_{M} \ \mathcal{L}(M |{\bf s}^{t})\ .
\end{equation}
\end{enumerate}
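The two steps above can be sketched for the Independent model mentioned earlier (couplings $J^M_{ij}$ set to zero), for which the log-likelihood is explicit; the Ising case differs only in the computation of $\mathcal{L}$, which then involves the partition function. The data below are synthetic, with hypothetical activation levels:

```python
import numpy as np

def fit_independent(S, eps=1e-3):
    """Independent model for one brain state: match the mean activations p_i only
    (the couplings J_ij are set to zero); eps regularizes empty frequencies."""
    return np.clip(S.mean(axis=0), eps, 1 - eps)

def log_likelihood(s, p):
    """L(M | s) = sum_i [ s_i log p_i + (1 - s_i) log(1 - p_i) ]."""
    return float(s @ np.log(p) + (1 - s) @ np.log(1 - p))

def decode(s, models):
    """Most likely brain state given the observed pattern s."""
    return max(models, key=lambda M: log_likelihood(s, models[M]))

# toy data: two 'maps' with different mean activations of N = 50 cells
rng = np.random.default_rng(3)
pA, pB = rng.uniform(0.05, 0.3, 50), rng.uniform(0.05, 0.3, 50)
SA = (rng.random((500, 50)) < pA).astype(float)    # reference session, map A
SB = (rng.random((500, 50)) < pB).astype(float)    # reference session, map B
models = {"A": fit_independent(SA), "B": fit_independent(SB)}

# test session: patterns drawn from map A, decoded one by one
tests = (rng.random((200, 50)) < pA).astype(float)
accuracy = np.mean([decode(t, models) == "A" for t in tests])
```

Even this coupling-free decoder recovers the correct map for most single patterns; including the couplings, as in eqn (\ref{pisi}), improves discrimination when the representations are correlated.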
Within this framework we can therefore decode the neural representation from the observed neural pattern. This procedure has been applied to experimental data from the hippocampus, showing good performance in retrieving the explored environment from neural activity \cite{posani2017functional}. Similar procedures have been successfully applied to other brain regions, see for instance \cite{tavoni2017functional,Tavoni2016b, schneidman2006weak, Stevenson08}.
Before applying the decoding procedure in the context of the representations of space, let us review how the effective Ising model in eqn (\ref{pisi}) can be fitted from data.
\subsection{Inference of effective Ising models from data}
\label{secising}
We discuss in this Section the problem of the inference of a graphical model from data \cite{ackley1985learning,schneidman2006weak,wainwright2008graphical,aurell2012inverse,cocco2011adaptive}.
Data are defined here as a set of $B$ recorded configurations of $N$ variables, ${\bf s}^b = \{s_1^b, s_2^b, \ldots , s_N^b\}$, with $b=1,\ldots ,B$; $s_i^b$ denotes the value of the variable at site $i$ in configuration $b$. The variables $s_i^b$ can be real-valued, binary, or multi-categorical (Potts states). Two cases of great practical interest are:
\begin{itemize}
\item Neurons can be described by binary variables $s_i=0,1$, expressing whether they are silent or active, i.e. emit a spike. Spiking times obtained from multi-electrode recordings \cite{mcnaughton83b,meister1994,schneidman2006weak,Peyrache09} can be processed into a series of $B=T/\Delta t$ recorded neural configurations ${\bf s}^b$ by dividing the recording time $T$ into small time windows $b=1,\ldots,B$ of duration $\Delta t$; then, $s_i^b$ is equal to 1 if neuron $i$ emits one or more spikes in time bin $b$, and to 0 if it remains silent.
\item The amino acid $s_i$ at site $i$ of a protein sequence can take 20 different values. Data configurations are sequences from a family of homologous proteins, assumed to share a common 3D fold and biological function, collected in protein databases \cite{durbin1998biological,finn2016pfam,de2013emerging,morcos2011direct,balakrishnan2011learning}.
\end{itemize}
For the sake of simplicity, we assume hereafter that variables take binary values, $s=0,1$. We further assume that the distribution over configurations $\bf s$ of $N$ such variables is given by the Ising model defined in eqn (\ref{pisi}); to lighten notations, we will drop the $M$ subscript hereafter. The model is parametrized by $ N$ fields $h_i$ and $\frac 12 N( N -1)$ couplings $J_{ij}$.
We assume that the different data configurations are independently drawn from $P({\bf s}|h,J)$ in eqn (\ref{pisi}). Hence, the probability of the data reads
\begin{equation}
\prod _{b=1}^B P\big({\bf s}^b |h,J \big) = \exp \left[- B \ S\big(h ,J \big)\right]
\end{equation}
where the cross-entropy $S$ is
\begin{equation}
\label{entropy}
S(h,J )= \log Z(h,J) - \sum _{i} h_i\; p_i - \sum _{i<j} J_{ij}\; p_{ij}
\end{equation}
depends on the data through the single-site and pairwise frequencies
\begin{equation}
p_i =\frac 1B \sum_b s_i^b \quad \text{and}\quad p_{ij} =\frac 1B \sum_b s_i^b \, s_j^b \ .
\end{equation}
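In code, these empirical moments are one line each (toy binary data; in practice the configurations ${\bf s}^b$ would come from the binned recordings described above):

```python
import numpy as np

rng = np.random.default_rng(5)
B, N = 1000, 8
S = (rng.random((B, N)) < 0.2).astype(float)   # toy B x N binary data matrix

p_i = S.mean(axis=0)            # single-site frequencies p_i
p_ij = (S.T @ S) / B            # pairwise joint frequencies (diagonal equals p_i)
```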
The best values for the fields and couplings are the ones minimizing $S$. While $S$ is a convex function of its arguments, its minimum is not guaranteed to be finite. For instance, the minimum of $S$ is reached at $J_{12}=-\infty$ when neurons 1 and 2 never spike together ($p_{12}=0$). This problem can be avoided by including a prior, also called a regularization, over the fields and couplings. Usual regularization schemes include adding the $L_1$ and/or $L_2$ norms of the couplings to $S$, which respectively remove small nonzero couplings and prevent couplings from taking very large, unrealistic values. Another regularization scheme consists of imposing a small rank for the coupling matrix, see Section \ref{sechopfieldinv}.
The computational bottleneck in the minimization of eqn (\ref{entropy}) is the calculation of the partition function $Z(h,J)$, which is generally intractable as it involves a summation over the $2^N$ configurations of the system. Some methods to solve the inverse Ising problem bypass the calculation of $Z$, such as the Boltzmann machine algorithm \cite{ackley1985learning}, pseudo-likelihood approximations \cite{wainwright2008graphical,aurell2012inverse}, and minimum probability flow \cite{sohl2011new}; others resort to approximate expressions for $Z$, e.g. mean field \cite{Opper2001}, high-temperature expansions \cite{sessak2009small}, and adaptive cluster expansions \cite{cocco2011adaptive,cocco2012}.
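As an illustration of one of these approaches, here is a minimal pseudo-likelihood sketch: each site $i$ is fitted by a logistic regression of $s_i$ on the other sites, which bypasses $Z$ entirely (toy data generated by exact enumeration of a small model; sizes, learning rate and $L_2$ penalty are arbitrary choices):

```python
import numpy as np
from itertools import product
from scipy.special import expit          # expit(x) = 1/(1 + exp(-x))

rng = np.random.default_rng(4)
N, B = 4, 5000
h_true = np.full(N, -0.5)
J_true = np.zeros((N, N))
J_true[0, 1] = J_true[1, 0] = 2.0        # a single strong coupling to recover

# toy data: exact sampling by enumerating the 2^N states (small N only)
states = np.array(list(product([0, 1], repeat=N)), dtype=float)
logw = states @ h_true + 0.5 * np.einsum('bi,ij,bj->b', states, J_true, states)
prob = np.exp(logw)
prob /= prob.sum()
S = states[rng.choice(len(states), size=B, p=prob)]

# pseudo-likelihood: for each site i, maximize the conditional likelihood of s_i
# given the other sites -- a logistic regression, solved by gradient ascent
h_hat, J_hat = np.zeros(N), np.zeros((N, N))
for i in range(N):
    y, X = S[:, i], np.delete(S, i, axis=1)
    theta = np.zeros(N)                  # theta[0] = h_i, theta[1:] = J_{i,j}, j != i
    for _ in range(2000):
        q = expit(theta[0] + X @ theta[1:])
        grad = np.concatenate(([np.sum(y - q)], X.T @ (y - q))) / B - 0.01 * theta
        theta += 0.5 * grad              # small L2 penalty regularizes the estimates
    h_hat[i] = theta[0]
    J_hat[i, np.delete(np.arange(N), i)] = theta[1:]
J_hat = 0.5 * (J_hat + J_hat.T)          # symmetrize the two estimates of each J_ij
```

The strong coupling $J_{01}$ stands out clearly above the noise level of the null couplings, which is the property exploited, e.g., in protein contact prediction.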
Once the Ising model has been inferred, it can be used for various tasks:
\begin{itemize}
\item Extract structural information on the connectivity/coupling matrix between the variables. In the case of neurons, this {\em functional connectivity} is not physiological (synaptic), but is an effective set of couplings depending on the brain state \cite{friston2011,tavoni2017functional,cocco2017functional}. In protein covariation analysis, it has been shown that large couplings often coincide with amino acids in contact on the three-dimensional structure of the protein \cite{morcos2011direct,de2013emerging}.
\item Use the inferred model to score new configurations and decide if they are compatible with the data in the training data set. We will see a direct application in Sections~\ref{secmod2} and \ref{secdata2}.
\item Generate new configurations through Monte Carlo simulations. This can be very useful to obtain in silico data with the same features as the ones in the training set, for instance, new proteins with the same structure or function as natural proteins.
\end{itemize}
\subsection{Back to model: the subsampling problem} \label{secmod2}
The theoretical model for spatial memory in place-cell populations from Section \ref{secmodel} shows remarkable features compatible with the recall of brain states at the level of population activity \cite{monasson2013crosstalk, monasson2014crosstalk}. The existence of a \textit{clump phase}, in which the system is maintained in a local minimum of self-sustained localized activity, is compatible with the attractor-neural-network (ANN) general paradigm of cognitive functions being represented by collective states of the neural network. The presence of spontaneous transitions from one representation to another is consistent with the flickering phenomena triggered by weak inputs observed in the rat hippocampus \cite{Jezek11}, where spatial memory is thought to be stored and retrieved \cite{tsodyks1995associative, o1971hippocampus, o1978hippocampal}. However, theoretical results were obtained in the limit of very large systems, a limit relevant to real brain regions ($\sim 10^{9}$ neurons), while electrophysiological setups allow the simultaneous recording of a much smaller ($<10^{2}$) number of neurons. It is natural to wonder to what extent such a small number of neurons could provide information about the collective state of the whole population.
Hereafter, we describe an attempt to draw a parallel between experimental conditions of multi-array recordings and the theoretical model for environment memory in the attractor neural network framework. We first design a Monte Carlo simulation that mimics an experiment with two memorized environments, referred to as A and B. We simulate single-environment {\em reference sessions} by forcing the activity to explore local minima corresponding to the memorized environments in a system with a relatively large number ($N=1,000$) of neurons. We then address the question of whether a small, randomly selected set of neurons (here, $N_{sam}=33$ out of 1,000) could provide enough information to perform the decoding procedure and infer the time course of the spatial representations from neural activity. As place cells are non-topographic, i.e. cells that are physically nearby in the hippocampus can have distant place fields, recording a spatially localized population of cells can be thought of as equivalent to a random subsample in the abstract place-field space.
The decoding task is finally performed using Ising and independent models learned from the reference sessions, and their decoding capability is tested in a classification problem on a test session composed of samples from both states. The relationship between true and inferred couplings is then analyzed.
\subsubsection{Simulations: constructing reference and test sessions}
\label{secforce}
Monte Carlo simulations are conducted as follows:
\begin{itemize}
\item First we define two 1D environments, hereafter referred to as A and B, through two random place-field permutations, denoted by $\pi^{A}$ and $\pi^B$.
\item From these two environments, two coupling matrices $J^{M}$, $M \in \left\{ A, B \right\}$, are created using the learning prescription described in eqn (\ref{rule1}):
\begin{equation}
\label{neuron}
J^{M}_{ij} \defeq \left\{
\begin{array}{l l l}
\frac 1N & \quad \text{if } & \frac1N \left | \pi^{M}(i) - \pi^{M}(j) \right | \leq \frac{w}{2} \ , \\
0 & \ & \text{otherwise}\ .
\end{array} \right.
\end{equation}
\item A unique coupling matrix $J$ is then constructed as point-sum of the two single-environment matrices: $J_{ij}=J^{A}_{ij} + J^{B}_{ij}$.
\item Simulations are performed, with $n = 10^{4}$ Monte Carlo steps, each one starting from an initial neuronal configuration localized in one of the two reference environments $M$. To maintain the total activity constant, we select, at each algorithm step, one active spin $s_{i} = 1$ and one silent spin $s_{j} = 0$. The flip trial is then defined as the joint flip of these two spins.
\item An additional small force, implemented as an asymmetric term in the energy, is added to make the bump exhaustively explore the one-dimensional map. This results in a left-right asymmetry in the Monte Carlo acceptance rule:
\begin{equation}
\Delta E = \sum_{k \neq i,j} (J_{ik} - J_{jk}) s_{k} + A^{M}(i,j)
\end{equation}
with $A^{M}(i,j)$ being a right-pulling force in the environment M, namely
\begin{equation}
A^{M}(i,j) \defeq \frac{1}{f N^{2}} \times \left( \pi^{M}(i) - \pi^{M}(j) + N \epsilon_{M}(i,j) \right)
\end{equation}
where $ \pi^{M}(i)$ is the position occupied by the place field of neuron $i$ in environment $\pi^{M}$, and $\epsilon_{M} \in \left\{ -1,0,1 \right\}$ ensures periodic boundary conditions.
\end{itemize}
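The simulation loop above can be sketched in code as follows (our own minimal version: variable names, the initial bump placement, and the periodic wrap in the couplings are choices of ours rather than details of the original simulations):

```python
import numpy as np

rng = np.random.default_rng(1)

def couplings(perm, w):
    """Single-environment couplings, eqn (\ref{neuron}), with periodic distance."""
    N = len(perm)
    d = np.abs(perm[:, None] - perm[None, :]) / N
    d = np.minimum(d, 1.0 - d)              # wrap around the 1D ring (assumption)
    J = np.where(d <= w / 2.0, 1.0 / N, 0.0)
    np.fill_diagonal(J, 0.0)
    return J

def simulate(J, perm, T=0.006, f=0.1, n_steps=100):
    """Fixed-activity Metropolis dynamics with the right-pulling force A^M."""
    N = len(perm)
    s = np.zeros(N, dtype=int)
    s[perm < int(f * N)] = 1                # initial bump at one end of the map
    for _ in range(n_steps * N):            # N pair-flip trials per MC step
        i = rng.choice(np.flatnonzero(s == 1))   # active spin to switch off
        j = rng.choice(np.flatnonzero(s == 0))   # silent spin to switch on
        sk = s.copy(); sk[i] = sk[j] = 0    # exclude k = i, j from the sum
        dE = (J[i] - J[j]) @ sk
        d = int(perm[i]) - int(perm[j])
        eps = -np.sign(d) if abs(d) > N / 2 else 0   # periodic boundary term
        dE += (d + N * eps) / (f * N ** 2)  # right-pulling force A^M(i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i], s[j] = 0, 1
    return s
```

For the two reference sessions one would call `simulate` once with $J = J^A + J^B$ and the bump initialized in map A, and once with it initialized in map B.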
Two simulations, one for A and one for B, are conducted. Parameters are carefully chosen such that the \textit{clump} phase is maintained, the bump thoroughly explores the environment, and no spontaneous transitions occur. In other words, the system is sampled in one of the two maps during the whole simulation; this mimics the fact that the rodent explores a single environment in \cite{Jezek11}. Two \textit{reference sessions} are defined using the first half (5,000 steps) of each simulation, and a \textit{test session} is constructed by concatenating the second halves, for a total of 10,000 time steps. Parameters used in the following analysis are: $T=0.006$, $N=1000$, $w = 0.05$, $f=0.1$.
\subsubsection{Decoding Results}
As a measure of decoding precision we use the true positive rate (TPR), i.e. the overall fraction of correctly classified neural patterns:
\begin{equation}
\text{TPR} \defeq \frac{\text{\# correctly classified time steps}}{ \text{\# total time steps}}
\end{equation}
We obtain
\begin{align}
\text{Ising model : } \ \text{TPR} &= 0.928 \\ \notag
\text{Independent model : } \ \text{TPR} &= 0.491
\end{align}
The difference between the independent and Ising models, shown in Fig.~\ref{subsampling-plot}, is remarkable. The independent model, in which all couplings are set to zero, accounts only for the average firing rates of the cells. It shows no decoding capability at all, with a TPR equal to 0.49 (compatible with random guessing). This could be expected from the fact that the localized bump of activity, which represents the position of the rat within the retrieved map, moves along the entire environment during the reference sessions. Hence the average activity of all cells is close to $f$ in both maps. The independent model, which only uses information on averages to decode the activity, is therefore unable to achieve useful discrimination.
Conversely, the Ising model exhibits an impressive performance in the decoding task. As shown in Fig.~\ref{sub-plot-is}, the time course of the likelihood difference $\Delta {\mathcal L}$ allows us to unambiguously decode the spatial representation as a function of time. This difference is also clear from the scatter plot of the likelihoods in the test session, which shows a well-separated pattern in the plane, contrary to the independent model (Fig. \ref{subsampling-scatter}). The true positive rates reported above confirm the remarkable visual difference between the two models.
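The two decoders compared here can be sketched as follows for a handful of neurons, where the partition function of each map-specific Ising model is still computable by enumeration (function names are ours; for realistic $N_{sam}$ one would estimate $\log Z$ approximately):

```python
import itertools
import numpy as np

def ising_loglik(samples, h, J):
    """Exact log-likelihood L_M of configurations under an Ising model (small N)."""
    N = len(h)
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    logw = states @ h + np.einsum('bi,ij,bj->b', states, np.triu(J, 1), states)
    logZ = logw.max() + np.log(np.exp(logw - logw.max()).sum())
    s = np.asarray(samples, dtype=float)
    return s @ h + np.einsum('ti,ij,tj->t', s, np.triu(J, 1), s) - logZ

def indep_loglik(samples, p):
    """Independent-model log-likelihood, built from the mean activities p_i alone."""
    s = np.asarray(samples, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), 1e-9, 1 - 1e-9)
    return s @ np.log(p) + (1 - s) @ np.log(1 - p)

# Decode each time bin by the sign of Delta L(t) = L_A(s^t) - L_B(s^t).
```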
\begin{figure}[h]
\hspace*{\fill}%
\centering
\subcaptionbox{Independent model\label{sub-plot-ind}}{\includegraphics[width=\linewidth]{independent-MC-subsample-eps-converted-to.pdf}}\hfill%
\centering
\subcaptionbox{Ising model\label{sub-plot-is}}{\includegraphics[width=\linewidth]{Ising-MC-subsample-eps-converted-to.pdf}}%
\hspace*{\fill}%
\caption{Log-likelihood difference $\mathcal{L}_{A}(t) - \mathcal{L}_{B}(t)$ along the Monte Carlo test session, for the independent model and the Ising model. The first half of the test session is sampled from environment \textbf{A}, the second half from environment \textbf{B}.}
\label{subsampling-plot}
\end{figure}
\begin{figure}[h]
\hspace*{\fill}%
\subcaptionbox{Independent model\label{sub-scat-ind}}{\includegraphics[width=0.5\linewidth]{independent-scatter.png}}\hfill%
\subcaptionbox{Ising model\label{sub-scat-is}}{\includegraphics[width=0.5\linewidth]{true-ising-scatter.png}}%
\hspace*{\fill}%
\caption{Scatter plots of the likelihoods computed from the independent (a) and Ising (b) models. Each dot represents the values of $-\mathcal{L}_{A}$ and $-\mathcal{L}_{B}$ for a neural configuration ${\bf s}^t$ in the Monte Carlo test session.}
\label{subsampling-scatter}
\end{figure}
\subsubsection{Inferred vs. true couplings}
The application of inference routines to a simulated neural network allows us to investigate the relationship between functional couplings, i.e. the inferred $J_{ij}$ in the inverse Ising model, and the real coupling strengths defined in eqn (\ref{neuron}). We show in Fig.~\ref{scatterJ} the couplings inferred between the neurons as functions of the distances between their place-field centers in each map. We observe that:
\begin{itemize}
\item Couplings decay very rapidly with the distance, on a typical scale compatible with both $w\,N$ and $f\,N$, the width of the bump; note that $w$ and $f$ have similar values in the simulations. At long distances, couplings are independent of the distance and equal to a negative value. The presence of many long-range inhibitory couplings, clearly visible in the histograms of Fig.~\ref{histoJ}, is a natural consequence of constraint (\ref{constraint1}) on the level of activity.
\item The magnitude of the couplings at small distances, $\sim 2-3$ in Fig.~\ref{scatterJ}, is much larger than that of the `true' couplings in the model, equal to $J^0=\frac 1{T\,N}=0.167$. This suggests that the inferred couplings are {\em effective}, and would coincide with the true couplings only in the limit of perfect spatial sampling ($N_{sam}=N$).
\end{itemize}
To better understand the value of the inferred couplings, let us compute the statistical moments of the neurons. As explained above in the description of the independent-cell models, all neurons have the same average activity in both environments:
\begin{equation}
p_i^A=p_i^B= f\ , \quad \forall i\ .
\end{equation}
We can also easily estimate the joint probability that two neurons are active. In the large-$N$ limit, the true couplings vanish as they scale as $1/N$. Hence, spins become pairwise independent in a ground state of the Hamiltonian, that is, when the bump is centered around a given position $x$. Conditioned on $x$, and defining the position of the place-field center of cell $i$ in map $M$ through
\begin{equation}
x_i =\frac{\pi^M(i)}N \ ,
\end{equation}
we have
\begin{equation}
\langle s_i\rangle_x = \rho\big(x_i-x\big), \ \langle s_j\rangle_x = \rho\big(x_j-x\big),\ \langle s_i s_j\rangle_x -\langle s_i\rangle_x \langle s_j\rangle_x \sim \frac 1N\ ,
\end{equation}
where the bump profile $\rho$ is implicitly centered at 0 in the above expression.
\begin{figure}[t]
\hspace*{\fill}%
\subcaptionbox{Environment A\label{sub-scat-A}}{\includegraphics[width=0.4\linewidth]{J-distance-envA-eps-converted-to.pdf}}\hfill%
\subcaptionbox{Environment B\label{sub-scat-B}}{\includegraphics[width=0.4\linewidth]{J-distance-envB-eps-converted-to.pdf}}%
\hspace*{\fill}%
\caption{Inferred coupling $J_{ij}$ vs. distance $\left | \pi^M(i) - \pi^M(j) \right |$ between the place-field centers of the corresponding neurons in environment $M=A$ (left) and $M=B$ (right).}
\label{scatterJ}
\end{figure}
However, we have to average over the position $x$ of the bump that moves across the environment (Fig.~\ref{montecarlo}). Doing so, we obtain the pairwise activity, see eqn (36) and Fig.~12 in \cite{Monasson13}:
\begin{equation}
p_{ij}^M= \int dx \, \rho\big(x_i-x\big)\, \rho\big(x_j-x\big)\quad \forall i,j\ .
\end{equation}
This effective matrix of pairwise activities therefore depends on the map, which explains why the Ising model, contrary to the independent model, is map-specific and can efficiently decode the representation. However, the effective correlation between neurons, $p_{ij}^M-f^2$, does not scale as $\frac 1N$: the Ising couplings are thus effective interactions, not simply related to the true couplings of the model. We expect this statement to hold also for the functional couplings inferred from real recordings and their physiological, synaptic counterparts.
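The integral above is straightforward to evaluate numerically. As an illustration, the following sketch assumes a square bump profile $\rho$ of width $f$ on the ring (a simplification of the actual clump shape):

```python
import numpy as np

def box_rho(u, f):
    """Square activity profile of width f on the ring [0, 1) (assumption)."""
    u = np.mod(u, 1.0)
    u = np.minimum(u, 1.0 - u)          # circular distance to the bump center
    return (u <= f / 2.0).astype(float)

def pairwise_activity(xi, xj, f, n_grid=20_000):
    """p_ij = int dx rho(x_i - x) rho(x_j - x), the bump position x being uniform."""
    x = (np.arange(n_grid) + 0.5) / n_grid
    return np.mean(box_rho(xi - x, f) * box_rho(xj - x, f))
```

For this profile $p_{ij}^M=\max(0, f-d_{ij}^M)$, with $d_{ij}^M$ the circular distance between the two place-field centers in map $M$: nearby cells acquire a positive correlation, while distant ones carry the constant correlation $-f^2$, in line with the long-range inhibitory couplings discussed above.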
\begin{figure}
\hspace*{\fill}%
\subcaptionbox{Environment A\label{sub-scat-A}}{\includegraphics[width=0.4\linewidth]{histA}}\hfill%
\subcaptionbox{Environment B\label{sub-scat-B}}{\includegraphics[width=0.4\linewidth]{histB}}%
\hspace*{\fill}%
\caption{Relationship between true and inferred couplings. In purple, histogram of the inferred couplings. In red, inferred couplings corresponding to truly connected neurons in the environment.}
\label{histoJ}
\end{figure}
\subsection{Analysis of multi-electrode recordings in CA1}\label{secdata2}
The inference routines and the test-session validation described above are directly applicable to multi-electrode array recordings of neural activity in vivo. As previously introduced, one can record the activity of a brain area in a collection of stable memory states, build models from these activities, and use them to decode representations (neural states) in a subsequent test session.
A good testing ground for this analysis is spatial memory in the rat hippocampus. Once environments have been memorized by the animal, it is relatively easy to collect samples of one memory state by letting the rat explore the corresponding environment. In a recent experiment, conducted by Jezek et al. \cite{Jezek11}, environmental conditions (light cues) are abruptly changed to trigger instabilities in the evoked spatial maps in the test session. By decoding which representation is being expressed during the test session as a function of time, we can investigate the response and fast dynamics of the memory state in the hippocampal network.
\begin{figure}
\hspace*{\fill}%
\begin{center}
\subcaptionbox{Independent model\label{ca1-plot-ind}}{\includegraphics[width=.95\linewidth]{CA1-both-ind}}\hfill%
\subcaptionbox{Ising model\label{ca1-plot-is}}{\includegraphics[width=.95\linewidth]{CA1-both-ising}}%
\hspace*{\fill}%
\end{center}
\caption{Log-likelihood difference $\mathcal{L}_{A}(t) - \mathcal{L}_{B}(t)$ along the test session using Ising and independent decoder on CA1 hippocampal data. The environmental conditions are abruptly changed from A to B in correspondence to the red line.}
\label{ca1-teleportation}
\end{figure}
The Ising inference method has been used in this context to decode which map, denominated A or B, is retrieved during test sessions with unknown environmental conditions. A test session recorded from hippocampal CA1 containing an environment-switch (teleportation) event is shown in Fig.~\ref{ca1-teleportation}. Contrary to the decoding analysis performed on the subsampled theoretical model, the independent-cell model shows good performance in the decoding task on real recordings:
\begin{align}
\text{Ising model: } \ \text{TPR} &= 0.85 \\ \notag
\text{Independent model: } \ \text{TPR} &= 0.81 \notag
\end{align}
One possible reason is that our theoretical model, designed to describe the memory storage and retrieval functions of the hippocampal network, does not account for the anatomical context of the region. In the mammalian brain, the hippocampus plays a role in a complex neural circuitry that involves inputs and outputs to other brain areas, such as the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). The sensory input, conveyed by LEC and MEC, can dramatically change the firing properties of hippocampal place cells with respect to external environmental conditions, even to the point of silencing neurons in all but one environment. The mean neural activity could therefore carry enough information to achieve useful discrimination between the explored environments and the corresponding recalled memory states, while it is identical across maps and therefore useless for decoding in our model. The remapping properties of place-cell activities in different representations have been extensively studied in the neuroscience literature, and different features have been reported for different hippocampal sub-regions \cite{fyhn2007hippocampal}. For a more detailed analysis and comparison of inference methods applied to hippocampal data, see \cite{posani2017functional, posani2017position}.
\section{Introduction}
Stellar variability is a subject of great interest as there is a wide
variety of object classes producing a range of lightcurve forms. Interest
in variable stars has increased in recent years, due in no small part to
large-scale surveys and the capabilities to reduce vast datasets. There are
a number of all-sky surveys (e.g., \citet{woz04}) whose data have
identified many previously unknown variables. However, these surveys are
not always suitable for detecting short-period low-amplitude variability
due to the infrequent sampling time. A large number of variable star
detections have also resulted from intensive monitoring programs. Perhaps
the largest datasets to emerge from photometric monitoring surveys are from
microlensing experiments such as MACHO, OGLE, and EROS-2, whose
observations generally target the Galactic bulge and satellite galaxies.
Many variable star discoveries from these groups have already been
published (e.g., \citet{alc03,woz02,der02}) with much data still to be
mined.
With the recent activity surrounding detection of transiting extra-solar
planets, additional monitoring of selected fields has been undertaken.
These surveys are generally divided into cluster transit searches (e.g.,
\citet{moc02,str03a}) and wide-field transit searches
(e.g., \citet{bor01,kan04}).
As well as producing candidate extra-solar planet detections, these
surveys have yielded a wealth of new additions to variable star
catalogues. In particular, transit surveys of stellar clusters have
yielded the discovery of new variable stars as a result of their
observations (e.g., \citet{moc02,str02}). Wide-field observations of
field stars have also produced a number of new variable stars (e.g.,
\citet{bak02,eve02,har04}).
The Wide Angle Search for Planets prototype (hereafter WASP0) is a
wide-field (9-degree) instrument mounted piggy-back on a commercial
telescope and is primarily used to search for planetary transits.
WASP0 has been used to monitor three fields at two separate sites in 2000
and 2002. The monitoring programs undertaken using WASP0 make the data an
excellent source for both long and short period variability detection.
We present results from a monitoring program which targeted a field in
Pegasus. The field was monitored in order to test the capabilities of
WASP0 by detecting the known transiting planet around HD 209458. As a
by-product of this test, lightcurves of thousands of additional stars in
the field were also produced and a number of new variable stars have been
found. Here we report on 75 variable stars detected, of which 73 are new
discoveries. We estimate the fraction of stars in the field exhibiting
variable behaviour and make a comparison with other such studies of
stellar variability amongst field stars. We conclude with a discussion of
implications for extra-solar planetary transit surveys.
\section{Observations}
The WASP0 instrument is an inexpensive prototype whose primary aim is to
detect transiting extra-solar planets. The instrument consists of a
6.3cm aperture F/2.8 Nikon camera lens, Apogee 10 CCD detector (2K
$\times$ 2K chip, 16-arcsec pixels) which was built by Don Pollacco at
Queen's University, Belfast. Calibration frames were used to measure the
gain and readout noise of the chip and were found to be 15.44 e$^-$/ADU
and 1.38 ADU respectively. Images from the camera are digitized with
14-bit precision giving a data range of 0--16383 ADUs. The instrument
uses a clear filter which has a slightly higher red transmission than
blue.
The first observing run of WASP0 took place on La Palma, Canary Islands
during 2000 June 20 -- 2000 August 20. During this run, WASP0 was mounted
piggy-back on a commercial 8-inch Celestron telescope with a German
equatorial mount. Observations on La
Palma concentrated on a field in Draco which was regularly monitored for
two months. These Draco field observations were interrupted on four
occasions when a planetary transit of HD 209458 was predicted. On those
nights, a large percentage of time was devoted to observing the HD 209458
field in Pegasus. Exposure times for the four nights were 5, 30, 50, and
50 seconds respectively. The data for each night were rebinned into 60
second frames for ease of comparison. This campaign resulted in the
successful detection of the planet transiting HD 209458, described in
more detail in \citet{kan04}.
\section{Data Reduction}
The reduction of the WASP0 data proved to be a challenging task as
wide-field images contain many spatially dependent aspects, such as the
airmass and the heliocentric time correction. The most serious issues
arise from vignetting and barrel distortion produced by the camera optics
which alter the position and shape of stellar profiles. The data reduction
pipeline which has been developed and tested on these data is able to
solve many of these problems to produce high-precision photometry.
The pipeline first automatically classifies frames by statistical
measurements and the frames are classified as one of bias, flat, dark,
image, or unknown. A flux-weighted astrometric fit is then performed on
each image frame through cross-identification of stars with objects in the
Tycho-2 \citep{hog00} and USNO-B \citep{mon03} catalogues. This produces an
output catalogue which is ready for the photometry stage. Rather than fit
the variable point-spread function (PSF) shape of the stellar images,
weighted aperture photometry is used to compute the flux centred on
catalogue positions. The resulting photometry still contains time and
position dependent trends which we removed by post-photometry calibration.
Post-photometry calibration of the data is achieved through the use of
code which constructs a theoretical model. This model is subtracted from
the data leaving residual lightcurves which are then fitted via an
iterative process to find systematic correlations in the data. This
process also allows the separation of variable stars from the bulk of the
data since the $\mathrm{rms} / \sigma$ for variables will normally be
significantly higher than that for relatively constant stars, depending
upon the amplitude and period of the variability. The reduction of the
WASP0 data is described in more detail in \citet{kan04}.
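The iterative search for systematic correlations can be pictured as an alternating least-squares, rank-one factorization of the residual lightcurves, in the spirit of the SysRem algorithm. The sketch below is our own simplification (unweighted, single trend), not the actual WASP0 code:

```python
import numpy as np

def remove_trend(resid, n_iter=50):
    """Strip one systematic trend from a (stars x epochs) matrix of residuals.

    Alternating least squares for the rank-one model r_ij ~ c_i * a_j:
    c_i plays the role of a per-star sensitivity, a_j of a per-epoch
    systematic (e.g. airmass-like) term.
    """
    a = np.ones(resid.shape[1])
    for _ in range(n_iter):
        c = resid @ a / (a @ a)      # best per-star coefficients given the trend
        a = c @ resid / (c @ c)      # best trend values given the coefficients
    return resid - np.outer(c, a), c, a
```

A weighted version (dividing each term by the photometric error of the corresponding measurement) would be used in practice, and the procedure can be repeated to remove several trends.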
\section{Variable Star Detection}
In this section we describe the methods used for sifting the variable stars
from the data. We then discuss the classification of the variable stars and
the methods applied for period determination.
\subsection{Photometric Accuracy}
\begin{figure}
\includegraphics[width=8.2cm]{figure01.ps}
\caption{Photometric accuracy versus magnitude diagram including 22000
stars from one night of WASP0 observations. The upper panel shows the rms
accuracy in magnitudes in comparison with the theoretical accuracy
predicted based on the CCD noise model. The lower panel is the ratio of
the observed rms divided by the predicted accuracy. Around 4\% of stars
have an rms better than 1\%.}
\end{figure}
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[angle=270,width=8.2cm]{figure02a.ps} &
\hspace{0.5cm}
\includegraphics[angle=270,width=8.2cm]{figure02b.ps} \\
\includegraphics[angle=270,width=8.2cm]{figure02c.ps} &
\hspace{0.5cm}
\includegraphics[angle=270,width=8.2cm]{figure02d.ps} \\
\end{tabular}
\end{center}
\caption{Folded lightcurve (left) and periodogram (right) for two of the
variable stars detected in the Pegasus field.}
\end{figure*}
Obtaining adequate photometric accuracy is one of the many challenges
facing wide-field survey projects such as WASP0. This has been overcome
using the previously described pipeline, the results of which are
represented in Figure 1. The upper panel shows the rms versus magnitude
diagram and the lower panel shows the same rms accuracy divided by the
predicted accuracy for the CCD. The data shown include around 22000 stars
at 311 epochs from a single night of WASP0 observations and only includes
unblended stars for which a measurement was obtained at $> 20$\% of epochs.
The upper curve in each diagram
indicates the theoretical noise limit for aperture photometry with the
1-$\sigma$ errors being shown by the dashed lines either side. The lower
curve indicates the theoretical noise limit for optimal extraction using
PSF fitting.
The first stage of mining variable stars from the data was performed by a
combination of visual inspection and selecting those stars with the
highest rms/$\sigma$. Visual inspection of the stars was made far easier by
the dense sampling of the field which would otherwise have obscured many
short-period variables in the data. The selected stars were then
extracted and analysed using a spectral analysis technique which will now
be described in greater detail.
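A minimal version of such an rms-based pre-selection might look as follows (the function name, the noise model, and the threshold are our own choices, not the pipeline's):

```python
import numpy as np

def flag_variables(mags, mag_err, threshold=3.0):
    """Flag stars whose lightcurve rms exceeds `threshold` times the predicted noise.

    mags: (n_stars, n_epochs) magnitudes; mag_err: per-point predicted errors
    from the CCD noise model.
    """
    rms = np.std(mags, axis=1, ddof=1)
    expected = np.sqrt(np.mean(mag_err ** 2, axis=1))   # mean predicted scatter
    return rms / expected > threshold
```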
\subsection{Model Fitting and Period Calculation}
In analysing variable stars, there are various methods that can be used
to extract period information from the lightcurves. For this analysis, we
make use of the ``Lomb method'' \citep{pre92} which is especially suited
to unevenly sampled data. This method uses the Nyquist frequency to
perform spectral analysis of the data resulting in a Lomb-Scargle
statistic for a range of frequencies. The Lomb-Scargle statistic indicates
the significance level of the fit at that particular frequency and hence
yields the likely value for the period. This method generally works quite
well for data which are a combination of sines and cosines, but care must
be taken with non-sinusoidal data as the fitted period may be half or
twice the value of the true period.
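To illustrate the technique, the snippet below applies a pure-NumPy implementation of the normalized Lomb periodogram of \citet{pre92} to a simulated, unevenly sampled sinusoidal lightcurve (the synthetic data and the frequency grid are our own choices):

```python
import numpy as np

def lomb_periodogram(t, y, freqs):
    """Normalized Lomb-Scargle periodogram at trial frequencies (cycles/day)."""
    y = y - y.mean()
    var = y.var()
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f                  # angular frequency
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[k] = ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / (2.0 * var)
    return power

rng = np.random.default_rng(5)
t = np.sort(10.0 * rng.random(300))          # 300 irregular epochs over 10 days
y = 0.1 * np.sin(2.0 * np.pi * t / 0.35) + 0.01 * rng.normal(size=300)

freqs = np.linspace(0.1, 20.0, 20_000)       # trial frequencies in cycles/day
power = lomb_periodogram(t, y, freqs)
best_period = 1.0 / freqs[np.argmax(power)]  # close to the true 0.35 d
phase = np.mod(t / best_period, 1.0)         # for the phase-folded lightcurve
```

With nightly-gapped sampling such as that of the Pegasus run, alias peaks separated by the inverse of the night spacing appear alongside the true peak, which is the ambiguity discussed below.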
Fortran code was written to automatically apply the Lomb method to each
of the suspected variable star lightcurves and attempt to determine the
period, which could then be used to produce phase-folded lightcurves. The
stars are then sorted according to the fitted period and examined
individually. Figure 2 shows the periodogram and the folded lightcurve for
two of the variable stars. The upper lightcurve has good phase coverage
and a sinusoidal shape whilst the lower lightcurve has poor phase coverage
and only a single eclipse was observed.
Data were acquired for $\sim 6$ hours per night for a total of 4 nights.
Each of these nights were spaced 7 nights apart. The implications of this
are that the data are far more sensitive to short ($< 0.5$ day) period
variables, but also that we can expect a bias towards shorter periods due
to the large gaps between observations which produces significant multiple
peaks in the associated periodograms. In the case of stars for which only
one photometric variation is observed over the entire observing run, the
bias is towards longer periods since the spectral analysis is not
constrained by the data from the other nights. The lower lightcurve shown
in Figure 2 is an example of this effect as demonstrated by the strong
aliasing visible in the periodogram.
\section{Results}
\begin{figure*}
\includegraphics[angle=270,width=16.0cm]{figure03.ps}
\caption{Histogram of variable star magnitudes (top-left), rms
(top-right), periods (bottom-left), and colour (bottom-right).}
\end{figure*}
In total, 75 variable stars were identified in the WASP0 Pegasus data,
as listed in Table 1 in order of increasing period.
Through the use of the databases SIMBAD \citep{wen00} and VIZIER
\citep*{och00}, it was found that 73 are previously unknown variables.
To assist in the classification of the variables, colour information
provided by the Two Micron All Sky Survey (2MASS) project was utilised.
This provided accurate colours using $J$, $H$, and $K$ filters down to the
magnitude limits of the WASP0 data.
Shown in Figure 3 are histograms for the instrumental magnitude, rms,
period, and colour for the detected variable stars. Of particular interest
are the significant number of small rms (and hence small amplitude)
variables detected. The period distribution peaks at slightly less than
0.5 days as expected.
\begin{table*}
\caption{List of detected variable stars sorted in order of increasing
period. Known variable stars are marked with a *. A question mark indicates
some uncertainty in the classification. Variables for which a suitable classification could not be
assigned are designated as unknown (U).}
\begin{tabular}{@{}lccccccc}
star \# & catalogue \# & period (days) & mag & rms (\%) & $J - H$ &
$H - K$ & class\\
1 & Tycho 1689-00007-1 & 0.049 & 10.854 & 1.788 & 0.06 & 0.03 & $\delta$ Scuti \\
2 & Tycho 1682-00151-1 & 0.070 & 8.344 & 0.938 & 0.09 & 0.01 & $\delta$ Scuti \\
3 & Tycho 1691-01562-1 & 0.081 & 12.935 & 7.051 & 0.20 & 0.04 & $\delta$ Scuti \\
4 & Tycho 1682-00125-1 & 0.087 & 11.770 & 3.879 & 0.05 & 0.06 & $\delta$ Scuti \\
5 & Tycho 2202-01669-1 & 0.094 & 12.787 & 9.129 & 0.11 & 0.05 & $\delta$ Scuti \\
6 & Tycho 1683-00394-1 & 0.102 & 10.039 & 0.660 & 0.10 & 0.07 & $\delta$ Scuti \\
7 & USNO 1077-0716280 & 0.113 & 13.374 & 9.357 & 0.15 & 0.00 & $\delta$ Scuti \\
8 & Tycho 1674-00459-1 & 0.120 & 10.547 & 1.629 & 0.14 & 0.04 & $\delta$ Scuti \\
9 & USNO 1080-0687717 & 0.122 & 12.388 & 2.870 & 0.62 & 0.16 & $\delta$ Scuti \\
10 & Tycho 1693-02012-1 & 0.135 & 9.386 & 0.630 & 0.17 &-0.25 & $\delta$ Scuti \\
11 & USNO 1083-0635832 & 0.135 & 12.621 & 6.928 & 0.33 & 0.08 & $\delta$ Scuti \\
12 & Tycho 1683-01839-1 & 0.153 & 9.935 & 1.690 & 0.12 & 0.05 & $\delta$ Scuti \\
13 & USNO 1063-0610342 & 0.168 & 13.537 & 19.764 & 0.16 & 0.05 & $\delta$ Scuti \\
14 & Tycho 1681-01275-1 & 0.200 & 12.278 & 5.764 & 0.26 & 0.05 & $\delta$ Scuti \\
15 & USNO 1098-0571100 & 0.203 & 11.054 & 1.133 & 0.52 & 0.06 & U \\
16 & USNO 1082-0661701 & 0.203 & 12.640 & 4.997 & 0.51 & 0.04 & U \\
17 & USNO 1079-0712902 & 0.207 & 12.556 & 8.334 & 0.49 & 0.08 & U \\
18 & Tycho 1685-01214-1 & 0.217 & 11.361 & 2.566 & 0.48 & 0.12 & U \\
19 & USNO 1062-0603571 & 0.230 & 13.287 & 13.675 & 0.25 & 0.00 & U \\
20 & USNO 1062-0604608 & 0.259 & 12.707 & 8.872 & 0.34 & 0.08 & U \\
21 & USNO 1118-0583493 & 0.261 & 13.330 & 12.253 & 0.40 & 0.04 & U \\
22 & USNO 1109-0567069 & 0.271 & 13.559 & 9.350 & 0.44 & 0.08 & EW \\
23 & USNO 1084-0602522 & 0.276 & 12.736 & 8.405 & 0.50 & 0.09 & EW \\
24 & USNO 1088-0594050 & 0.279 & 12.904 & 10.692 & 0.23 & 0.08 & U \\
25 & USNO 1051-0620188 & 0.291 & 12.287 & 6.450 & 0.26 & 0.08 & U \\
26 & Tycho 1690-01643-1 & 0.291 & 9.916 & 1.536 & 0.24 &-0.11 & EW \\
27 & USNO 1130-0634392 & 0.292 & 12.419 & 8.573 & 0.45 & 0.17 & EW \\
28 & USNO 1083-0636244 & 0.297 & 14.033 & 24.659 & 0.36 & 0.20 & EW \\
29 & USNO 1077-0715703 & 0.309 & 12.264 & 8.284 & 0.60 & 0.20 & EW? \\
30 & USNO 1099-0576195 & 0.310 & 12.127 & 7.187 & 0.33 & 0.04 & EW \\
31 & USNO 1067-0619020 & 0.318 & 12.721 & 9.156 & 0.42 & 0.08 & U \\
32 & Tycho 1687-01479-1 & 0.342 & 12.051 & 4.822 & 0.21 & 0.08 & EW \\
33 & USNO 1124-0632100 & 0.346 & 13.264 & 24.698 & 0.46 & 0.14 & EW \\
34 & Tycho 1670-00251-1 & 0.347 & 11.851 & 14.219 & 0.35 & 0.09 & EW \\
35 & USNO 1077-0715670 & 0.349 & 13.368 & 12.316 & 0.52 & 0.12 & EW? \\
36 & USNO 1085-0593094 & 0.349 & 11.462 & 1.610 & 0.25 & 0.07 & EW? \\
37 & USNO 1123-0641431 & 0.370 & 13.006 & 9.257 & 0.28 & 0.07 & EW? \\
38 & Tycho 1666-00301-1 & 0.380 & 11.595 & 3.978 & 0.27 & 0.07 & EW? \\
39 & Tycho 1147-01237-1 & 0.389 & 11.848 & 4.550 & 0.18 & 0.09 & EW \\
40 & USNO 1111-0568829 & 0.399 & 12.524 & 14.847 & 0.15 & 0.06 & RR Lyrae \\
41 & USNO 1120-0611140 & 0.406 & 12.335 & 6.101 & 0.27 & 0.04 & EW? \\
42 & USNO 1077-0719427 & 0.407 & 12.906 & 5.343 & 0.34 & 0.04 & EW \\
43 & USNO 1074-0692109 & 0.407 & 12.539 & 5.418 & 0.32 & 0.05 & U \\
44 & USNO 1048-0613132 & 0.414 & 12.421 & 8.323 & 0.23 & 0.08 & U \\
45 & Tycho 1684-01512-1 & 0.422 & 10.225 & 0.659 & 0.17 & 0.11 & U \\
46 & USNO 1096-0579000 & 0.433 & 13.763 & 13.391 & 0.56 & 0.17 & EW? \\
47 & Tycho 2203-01663-1 & 0.435 & 10.080 & 12.856 & 0.21 & 0.01 & EW \\
48 & USNO 1127-0652941 & 0.440 & 13.071 & 18.065 & 0.25 & 0.08 & EW \\
49 & USNO 1067-0619189 & 0.443 & 12.500 & 5.927 & 0.34 &-0.01 & U \\
50 & Tycho 1666-00208-1 & 0.453 & 8.952 & 0.740 & 0.80 & 0.26 & BY Draconis \\
51 & Tycho 1684-00522-1 & 0.456 & 12.119 & 9.499 & 0.28 & 0.06 & EW \\
52 & Tycho 1687-00659-1 & 0.487 & 10.515 & 20.782 & 0.23 & 0.05 & EB \\
53 & Tycho 1685-01784-1 & 0.488 & 12.373 & 32.408 & 0.14 & 0.09 & RR Lyrae \\
54$^*$ & Tycho 2202-01379-1 & 0.502 & 11.057 & 21.161 & 0.24 & 0.06 & RR Lyrae \\
55 & USNO 1063-0601782 & 0.506 & 12.902 & 6.996 & 0.29 & 0.01 & U \\
56 & Tycho 1688-01026-1 & 0.515 & 11.856 & 21.775 & 0.24 & 0.09 & EB \\
57 & USNO 1126-0625388 & 0.524 & 11.173 & 5.835 & 0.80 & 0.30 & EA \\
58 & Tycho 1685-00588-1 & 0.556 & 12.869 & 13.129 & 0.16 & 0.05 & RR Lyrae \\
59 & USNO 1061-0601686 & 0.561 & 12.929 & 7.108 & 0.27 & 0.10 & EW? \\
60 & USNO 1090-0579466 & 0.566 & 12.878 & 11.671 & 0.12 & 0.05 & EW? \\
61 & Tycho 1683-00877-1 & 0.587 & 11.404 & 4.362 & 0.53 & 0.24 & EW \\
62 & Tycho 1684-00288-1 & 0.596 & 11.711 & 4.571 & 0.17 & 0.10 & EW \\
63 & USNO 1047-0628425 & 0.603 & 13.076 & 13.307 & 0.17 & 0.08 & EW \\
\end{tabular}
\end{table*}
\begin{table*}
\contcaption{}
\begin{tabular}{@{}lccccccc}
star \# & catalogue \# & period (days) & mag & rms (\%) & $J - H$ &
$H - K$ & class\\
64 & Tycho 1686-00904-1 & 0.647 & 13.072 & 12.663 & 0.12 & 0.03 & EW \\
65 & Tycho 1679-01714-1 & 0.668 & 8.195 & 0.884 & 0.13 & 0.01 & U \\
66 & Tycho 1684-00561-1 & 0.717 & 11.085 & 2.281 & 0.03 & 0.09 & $\delta$ Scuti? \\
67 & Tycho 1686-00469-1 & 0.737 & 10.504 & 12.776 & 0.51 & 0.07 & BY Draconis? \\
68 & Tycho 1684-00023-1 & 0.830 & 12.605 & 6.648 & 0.39 & 0.07 & E? \\
69$^*$ & Tycho 1674-00732-1 & 1.041 & 7.693 & 1.614 & 0.10 & 0.05 & $\delta$ Scuti? \\
70 & Tycho 1682-00761-1 & 1.141 & 10.931 & 1.754 & 0.55 & 0.12 & U \\
71 & USNO 1123-0632849 & 1.187 & 12.638 & 13.051 & 0.35 & 0.06 & EA \\
72 & USNO 1125-0628249 & 1.212 & 12.459 & 9.359 & 0.16 & 0.08 & E? \\
73 & Tycho 1691-01257-1 & 1.741 & 8.935 & 1.489 & 0.18 & 0.05 & E? \\
74 & Tycho 1666-00644-1 & 2.158 & 8.858 & 1.008 & 0.80 & 0.24 & BY Draconis? \\
75 & Tycho 1149-00326-1 & 2.262 & 9.480 & 1.444 & 0.18 & 0.05 & U \\
\end{tabular}
\end{table*}
\subsection{Colour-Magnitude Diagram}
The WASP0 instrument normally uses a clear filter as described in
\citet{kan04} and so no colour information is available in the WASP0
data. However, we made use of the astrometric catalogues in this study by
transforming the USNO-B colours to a more standard system.
Specifically, the USNO-B colours used were second epoch IIIa-J, which
approximates as B, and second epoch IIIa-F, which approximates as R.
\citet{kid04} describes a suitable linear transformation from USNO-B
filters to the more standard Landolt system. This colour transformation
is given by:
\begin{eqnarray}
B_{\mathrm{Landolt}} & = & 1.097\,B_{\mathrm{USNO}} - 1.216 \nonumber \\
R_{\mathrm{Landolt}} & = & 1.031\,R_{\mathrm{USNO}} - 0.417 \nonumber
\end{eqnarray}
A linear least-squares fit to the colours computed in \citet{bes90} was
used to convert from $B - R$ to $B - V$. Using this transformation, we
are able to construct an approximate colour-magnitude diagram as shown
in Figure 4. The variable stars cover a broad range of magnitudes and
spectral types, with almost all of them falling on the main sequence.
Several of the variables are late-type giant stars and are therefore
separate from the bulk of variables shown in Figure 4 and the colour
histogram of Figure 3. The apparent colour cutoff at $B - V \approx 1$
is an artifact from the insertion of Tycho-2 objects into the USNO-B
catalogue.
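The linear transformation above can be sketched as a short routine (a sketch only; the function name is ours, and the coefficients are those of the \citet{kid04} transformation quoted above):

```python
def usno_to_landolt(b_usno, r_usno):
    """Transform USNO-B photographic magnitudes (second epoch
    IIIa-J ~ B, IIIa-F ~ R) to approximate Landolt B and R."""
    b_landolt = 1.097 * b_usno - 1.216
    r_landolt = 1.031 * r_usno - 0.417
    return b_landolt, r_landolt
```

A further linear fit (from \citet{bes90}) then takes the resulting $B - R$ to $B - V$.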
Using the 2MASS colour information, the periods of the variables are
plotted as a function of $J - H$ colour in Figure 5. The well-known
period-colour relationship for contact binaries \citep{rub01} is clearly
evident in the figure. The various variable groups represented in the
figure are now discussed in more detail, and the phase-folded
lightcurves of the variables are shown in Figures 6--8.
\begin{figure}
\includegraphics[width=8.2cm]{figure04.ps}
\caption{Colour-magnitude diagram for the Pegasus field with the
location of variable stars shown as triangles.}
\end{figure}
\begin{figure}
\includegraphics[angle=270,width=8.2cm]{figure05.ps}
\caption{Log period as a function of $J - H$ colour for each of the
variable stars.}
\end{figure}
\subsection{$\delta$ Scuti Stars}
Due to the high time resolution of the WASP0 observations, the WASP0 data
are particularly sensitive to very short period variations in stellar
lightcurves. The best examples are the pulsating $\delta$ Scuti stars, of
which 15 were identified from our observations. A number of the $\delta$
Scuti stars exhibit multi-periodic behaviour, such as stars 1 and 5. Stars
66 and 69 are also of the $\delta$ Scuti type but are multi-periodic and
have been folded on the longer period. The bright star HD 207651 (star 69)
is one of the two previously known variables detected in this survey. Observations by
\citet*{hen04} revealed this to be a triple system with the lower
frequency variations resulting from ellipsoidal variations.
\subsection{Eclipsing Binaries}
Eclipsing binaries comprise at least 45\% of the variables detected in
this survey, and most of the unclassified variables are also strongly
suspected to be eclipsing binaries. Most of the binaries are of the
W Ursae Majoris (EW) type, the remainder being possible candidates for
Algol (EA) or Beta Lyrae (EB) type binaries. The best example is star 47,
for which exceptional data quality and clear phase coverage reveal
slightly differing minima values, indicative of a close binary pair.
Of particular interest is star 57, an apparent eclipsing binary whose
colour suggests a very late-type star. Although the lack of phase information
adds considerable uncertainty to the period, the small estimated period
suggests that this may be an eclipsing system with M-dwarf components.
There are only a few known eclipsing binaries of this type
\citep{mac04,rib03} and they are very interesting as they allow rare
opportunities to investigate the physical properties of late-type stars.
\subsection{BY Draconis Stars}
BY Draconis stars are generally spotted late-type stars whose variability
arises from their rotation. Late-type stars comprise a minority of our
variables and so only 3 were identified as being of this type.
Star 74 has been classified as a BY Draconis due to the late spectral type
and variable minima of the lightcurve. However, the lack of phase
information and therefore uncertain period means that there are
undoubtedly other equally viable classifications.
\subsection{RR Lyrae Stars}
RR Lyrae stars have a mean period of around 0.5 days and are therefore
highly likely to be detected by the WASP0 observations. A total of 4
RR Lyrae stars were detected in this survey, one of which is a known
RR Lyrae (star 54), the F0 star AV Peg. The remaining three consist of
star 40 (type b), star 53 (type a), and star 58 (type c).
\begin{figure*}
\includegraphics[angle=270,width=17.3cm]{figure06.ps}
\caption{Variable stars 1--32 detected in the Pegasus field.}
\end{figure*}
\begin{figure*}
\includegraphics[angle=270,width=17.3cm]{figure07.ps}
\caption{Variable stars 33--64 detected in the Pegasus field.}
\end{figure*}
\begin{figure*}
\includegraphics[angle=270,width=17.3cm]{figure08.ps}
\caption{Variable stars 65--75 detected in the Pegasus field.}
\end{figure*}
\subsection{Unclassified and Suspected Variables}
Around 24\% of the detected variables could not be classified due
to a lack of S/N and/or a lack of phase information. Examples of low
S/N stars are stars 17--21. Stars 43--44 are examples of lightcurves
missing valuable phase information. For some of these stars, the stellar
image fell close to the edge of the chip and so was not observed
consistently throughout the four nights. Star 65 has the shape, period,
and colour (spectral type $\sim$ F0) of a typical RR Lyrae. However, the
amplitude of the variability is too small and so it remained
unclassified. It is likely that many of the unclassified variables
are eclipsing binaries for which one of the eclipses was not observed,
as evidenced by their close proximity to the period-colour relationship
visible in Figure 5.
\section{Discussion}
In total, $\sim 20000$ stars were searched for variability amongst the
Pegasus field stars down to a magnitude of $\sim 13.5$. From these stars,
75 variable stars were positively identified. The techniques used to
extract the variable stars from the data detected variability to an rms
of around 0.6\%. Hence, it is estimated that $\sim 0.4$\% of Pegasus
field stars brighter than $V \sim 13.5$ are variables with an amplitude
greater than $\sim 0.6$\%. For comparison, \citet{har04} used
a similar instrument to observe a field in Cygnus and found that
around 1.5\% of stars exhibit variable behaviour with amplitudes greater
than $\sim 3$\%. \citet{eve02} detected variability in field stars for a
much fainter magnitude range ($13.8 < V < 19.5$) and found the rate to
be as high as 17\%.
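The quoted variability rate follows directly from the survey numbers above (a back-of-the-envelope check):

```python
# Variability rate implied by the survey: 75 variables among ~20000 stars
n_variables = 75
n_stars = 20000
rate_percent = 100.0 * n_variables / n_stars
print(rate_percent)  # 0.375, i.e. ~0.4 per cent
```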
However, as discussed earlier, the results presented here are likely
to be biased towards shorter periods due to the spacing of the
observations.
This also means that many long-period variables may have remained
undetected. Therefore the percentage of Pegasus field stars exhibiting
variable behaviour calculated above should be considered a
lower limit, as the actual value may well be slightly higher.
The Pegasus field stars surveyed in this study are predominantly
solar-type stars. However, 60\% of the variables detected are bluer than
solar ($J - H < 0.32$) with $\sim 30$\% of these blue variables
classified as $\delta$ Scuti stars. Those variables classified as
``unknown'' are fairly evenly distributed in colour and are most likely
to be eclipsing binaries.
It has been noted by \citet{eve02} that low levels of stellar variability
are likely to interfere with surveys hunting for transits due to
extra-solar planets. Indeed it has developed into a major challenge for
transit detection algorithms to distinguish between real planetary
transits and false alarms due to variables, especially grazing eclipsing
binaries \citep{bro03}. If many of the unclassified variable stars from
this survey are eclipsing binaries then these will account for well over
half ($> 40$) of the total variables found. This will have a significant
impact upon transit surveys, especially wide-field surveys for which the
stellar images are prone to under-sampling and therefore vulnerable to
blending.
The radial velocity surveys have found that 0.5\%--1\% of Sun-like stars
harbour a Jupiter-mass companion in a 0.05 AU (3--5 day) orbit
\citep{lin03}. Since approximately 10\% of these planets with randomly
oriented orbits will transit the face of their parent star, a suitable
monitoring program of 20000 stars can be expected to yield $\sim 10$
transiting extra-solar planets. If we assume that the typical depth of
a transit signature is similar to that of the OGLE transiting planets
\citep{bou04,tor04}, then the photometric deviation for each transit
from a constant lightcurve will be $< 3$\%. Figure 5 shows that many of
the variables are of very low amplitude, with 20 having an rms of
$< 3$\%. Hence, for the Pegasus field, the low-amplitude
variables detected outnumber the expected transits by a factor
of 2:1. As previously mentioned, the number of variables detected by the
WASP0 observations is relatively small compared to other similar
studies of field stars. Thus, it can be expected that transit detection
algorithms will suffer from large contamination effects due to variables,
unless additional stellar attributes, such as colour, can be incorporated
to reduce the false-alarm rate.
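The expected planet yield quoted above can be checked with the same kind of arithmetic (the 0.5--1 per cent hot-Jupiter frequency and $\sim 10$ per cent transit probability are the figures quoted in the text; the helper function is ours):

```python
def expected_transits(n_stars, planet_fraction, transit_probability=0.10):
    """Expected number of transiting hot Jupiters among n_stars,
    given the fraction of stars hosting one and the geometric
    probability that a randomly oriented orbit transits."""
    return n_stars * planet_fraction * transit_probability

# 20000 monitored stars, 0.5--1 per cent planet frequency
low = expected_transits(20000, 0.005)   # ~10 expected transits
high = expected_transits(20000, 0.010)  # ~20 expected transits
```

With $\sim 20$ low-amplitude variables detected, this reproduces the 2:1 contamination ratio discussed above.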
\section{Conclusions}
This paper has described observations of a field in Pegasus using the
Wide Angle Search for Planets prototype. These observations were
conducted from La Palma as part of a campaign to hunt for transiting
extra-solar planets. Careful monitoring of stars in the Pegasus field
detected 75 variable stars, 73 of which were previously unknown to be
variable. It is estimated from these observations that $\sim 0.4$\% of
Pegasus field stars brighter than $V \sim 13.5$ are variables with an
amplitude greater than $\sim 0.6$\%. This is relatively low compared to
other similar studies of field stars.
A concern for transiting extra-solar planet surveys is reducing the
number of false alarms, which often arise from grazing eclipsing binaries
and blended stars. It is estimated that the number of detectable
transiting extra-solar planets one could expect from the stars monitored
is a factor of two smaller than the number of variables with similar
photometric amplitudes. Since the number of variables detected is
relatively low, this has the potential to produce a strong false-alarm
rate from transit detection algorithms.
Surveys such as this one are significantly improving our knowledge of
stellar variability and the distribution of variable stars. It is
remarkable how much of the sky remains to be explored in this way, even
at relatively shallow magnitude depths. It is expected that our knowledge of variable
stars will continue to improve, particularly with instruments such as
SuperWASP \citep{str03b} intensively monitoring even larger areas of sky.
\section*{Acknowledgements}
The authors would like to thank Ron Hilditch for several useful
discussions. The authors would also like to thank PPARC for supporting
this research and the Nichol Trust for funding the WASP0 hardware.
This publication makes use of data products from the Two Micron All Sky
Survey, which is a joint project of the University of Massachusetts and
the Infrared Processing and Analysis Center/California Institute of
Technology, funded by the National Aeronautics and Space Administration
and the National Science Foundation.
\section{Introduction}
Let $X$ be a projective $K3$ surface. In \cite{beauvoisin}, Beauville and the author
proved that $X$ carries a canonical $0$-cycle $c_X$ of degree $1$,
which is the class in $CH_0(X)$ of any point of $X$ lying on a (possibly singular)
rational curve on $X$.
This cycle has the property that
for any divisors $D,\,D'$ on $X$,
we have
$$D\cdot D'={\rm deg} (D\cdot D')\,c_X\,\,{\rm in}\,\,CH_0(X).$$
In recent works of Huybrechts \cite{huybrechts} and O'Grady \cite{ogrady}, this
$0$-cycle appeared to have other characterizations. Huybrechts proves for example the following result (which is proved in \cite{huybrechts}
to have much more general consequences on spherical objects and autoequivalences of the derived category of $X$):
\begin{theorem}\label{theohuybrechts}(Huybrechts \cite{huybrechts})
Let $X$ be a projective complex
$K3$ surface. Let ${F}$ be a simple vector bundle on $X$ such that $H^1( {\rm End}\,F)=0$ (such an $F$ is called spherical in \cite{huybrechts}).
Then $c_2({F})$ is
proportional to $c_X$ in $CH_0(X)$ if one of the following conditions holds.
\begin{enumerate}
\item The Picard number of $X$ is at least $2$.
\item The Picard group of $X$ is $\mathbf{Z}H$ and the determinant of $F$ is
equal to $k H$ with $k\equiv\pm1 \,\,({\rm mod}\,\,r)$, where $r={\rm rank}\,F$.
\end{enumerate}
\end{theorem}
This result is extended in the following way by O'Grady: in \cite{ogrady}, he
introduces the following increasing filtration of $CH_0(X)$
$$S_0(X)\subset S_1(X)\subset \ldots \subset S_d(X)\subset\ldots\subset CH_0(X),$$
where $S_d(X)$ is defined as the set of classes of cycles of the form
$z+z'$, with $z$ effective of degree $d$ and $z'$ a multiple of
$c_X$. It is also convenient to
introduce $S_d^k(X)$ which will be by definition the set of degree $k$ $0$-cycles
on $X$ which lie in $S_d(X)$. Thus by definition
$$S_d^k(X)=\{z\in CH_0(X),\,z=z'+(k-d)c_X\},$$
where $z'$ is effective of degree $d$.
Consider a torsion-free, or more generally a pure, sheaf $\mathcal{F}$ on $X$ which is stable with respect to a polarization
$H$. Let $2d(v_{\mathcal{F}})$ be the dimension of the space of
deformations of $\mathcal{F}$, where $v_\mathcal{F}$ is the Mukai
vector of $\mathcal{F}$ (cf. \cite{huybrechtslehn}). We recall that
$v_\mathcal{F}\in H^*(X,\mathbf{Z})$ is the triple
$$(r,l,s)\in H^0(X,\mathbf{Z})\oplus H^2(X,\mathbf{Z})\oplus H^4(X,\mathbf{Z}),$$ with $r={\rm rank}\,\mathcal{F}$, $l=c_1^{top}({\rm det}\,\mathcal{F})$
and $s\in H^4(X,\mathbf{Z})$ is defined
as
\begin{eqnarray}v_\mathcal{F}=ch(\mathcal{F})\sqrt{td(X)}.\label{vraimukai}
\end{eqnarray} In particular
we get by the Riemann-Roch formula that
$$\sum_i(-1)^i{\rm dim}\,{\rm Ext}^i(\mathcal{F},\mathcal{F})=\langle v_\mathcal{F},v_\mathcal{F}^*\rangle=2rs-{l^2}=2-2d(v_\mathcal{F}),$$
where $\langle\,,\,\rangle$ is the intersection pairing on $H^*(X,\mathbf{Z})$,
and $v^*=(r,-l,s)$ is the Mukai vector of $\mathcal{F}^*$ (if $\mathcal{F}$ is locally free).
In particular $d(v_\mathcal{F})=0$ if $\mathcal{F}$ satisfies
${\rm End}\,\mathcal{F}=\mathbb{C}$ and ${\rm Ext}^1(\mathcal{F},\mathcal{F})=0$, so that $\mathcal{F}$ is spherical as in Huybrechts' theorem.
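For concreteness, on a $K3$ surface one has ${\rm td}(X)=(1,0,2)$, hence $\sqrt{{\rm td}(X)}=(1,0,1)$, and formula (\ref{vraimukai}) unwinds to the explicit expression (a standard computation, recorded here only for the reader's convenience):

```latex
$$v_\mathcal{F}={\rm ch}(\mathcal{F})\sqrt{{\rm td}(X)}
=\Big(r,\,l,\,\frac{l^2}{2}-c_2(\mathcal{F})\Big)\cdot(1,0,1)
=\Big(r,\,l,\,r+\frac{l^2}{2}-c_2(\mathcal{F})\Big),$$
```

so that $s=r+\frac{l^2}{2}-c_2(\mathcal{F})$, where $c_2(\mathcal{F})$ is identified with its degree.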
Noticing that $S_0(X)=\mathbf{Z}c_X$, one can then rephrase Huybrechts' statement by saying
that if $\mathcal{F}$ satisfies ${\rm End}\,(\mathcal{F})=\mathbb{C},\,d(v_\mathcal{F})=0$, then
$c_2(\mathcal{F})\in S_0(X)$, assuming the Picard number of $X$ is at least $2$.
O'Grady then extends Huybrechts' results as follows:
\begin{theorem} \label{theogrady} (O'Grady \cite{ogrady}) Assuming $\mathcal{F}$ is $H$-stable, one has $c_2(\mathcal{F})\in S_{d(v_\mathcal{F})}(X)$, $v_\mathcal{F}=(r,l,s)$,
if furthermore one of the following conditions holds:
\begin{enumerate}
\item
$l=H$, $l$ is primitive and $s\geq 0$.
\item
The Picard number of $X$ is at least $2$, $r$ is coprime to the divisibility of $l$ and $H$ is $v$-generic.
\item
$r\leq 2$ and moreover $H$ is $v$-generic if $r=2$.
\end{enumerate}
\end{theorem}
In fact, O'Grady's result is stronger, as he also shows that $S_{d(v)}^{k}(X),\,k={\rm deg}\,c_2(v)$,
is equal to the set of classes $c_2(\mathcal{G})$ with $\mathcal{G}$ a deformation of
$\mathcal{F}$. O'Grady indeed proves, by a nice argument involving
the rank of the Mukai holomorphic $2$-form on the moduli space of deformations of
$\mathcal{F}$, the following result:
\begin{proposition} \label{propogrady}(O'Grady \cite[Prop. 1.3]{ogrady}) If there is a
$H$-stable torsion free sheaf $\mathcal{F}$ with $ v=v(\mathcal{F})$, and the conclusion of Theorem
\ref{theogrady} holds for the deformations of $\mathcal{F}$, then
$$\{c_2(\mathcal{G}),\,\mathcal{G}\in \overline{\mathcal{M}^{st}}(X,H,v) \,\,\}= S_{d(v)}^k(X),\,k={\rm deg}\,c_2(\mathcal{F}).$$
\end{proposition}
In this statement, $\overline{\mathcal{M}^{st}}(X,H,v)$ is any smooth completion of the
moduli space of $H$-stable sheaves with Mukai vector $v$.
Our results in this paper are of two kinds: First of all we provide another description of $S_d^k(X)$ for any $d\geq 0$, $k\geq d$.
In order to state this result, let us introduce the following
notation: Given an integer $k\geq 0$, and a cycle $z\in CH_0(X)$ of degree $k$,
the subset $O_z$ of $X^{(k)}$ consisting
of effective cycles $z'\in X^{(k)}$ which are rationally equivalent to
$z$ is a countable union of closed algebraic subsets of $X^{(k)}$
(see \cite[Lemma 10.7]{voisinbook}). This is the ``effective orbit''
of $z$ under rational equivalence, and the analogue of $|D|$ for a
divisor $D\in CH^1(W)$ on any variety $W$. We define ${\rm
dim}\,O_z$ as the supremum of the dimensions of the components of
$O_z$. This is the analogue of $r(D)={\rm dim}\,|D|$ for a divisor
$D\in CH^1(W)$ on any variety $W$. We will prove the following:
\begin{theorem}\label{char} Let $X$ be a projective
$K3$ surface. Let $k\geq d\geq 0$. We have the following characterization of $S_d^k(X)$:
$${\rm if}\,\,k>d,\,\,S_d^k(X)=\{z\in CH_0(X),\,O_z\,\,{\rm nonempty},\,\,{\rm dim}\,O_z\geq k-d \}.$$
\end{theorem}
\begin{remark}{\rm The inclusion
$S_d^k(X)\subset\{z\in CH_0(X),\,\,O_z\,\,{\rm nonempty},\,\,{\rm dim}\,O_z\geq k-d \}$ is easy since
the cycle $(k-d)c_X$ has its orbit of dimension $\geq k-d$ (for example
$C^{(k-d)}\subset X^{(k-d)}$, for any rational curve $C\subset X$, is contained in the orbit of $(k-d)c_X$). Hence any cycle of the form $z+(k-d)c_X$ with $z$ effective
of degree $d$ has an orbit of dimension $\geq k-d$.
}
\end{remark}
A particular case of the theorem above is the case where $d(v)=0$.
By definition $S_0(X)$ is the subgroup $\mathbf{Z}c_X\subset
CH_0(X)$. We thus have:
\begin{corollary} \label{corochar} For $k>0$, the cycle $kc_X$ is the unique $0$-cycle $z$ of degree
$k$ on $X$ such that ${\rm dim}\,O_z\geq k$.
\end{corollary}
\begin{remark}{\rm We have in fact ${\rm dim}\,O_z=k$ for $z=kc_X$, since
by Mumford's theorem
\cite{mumford},
any component $L$ of $O_z$ is Lagrangian for the holomorphic
symplectic form on
$X^{(k)}_{reg}$, hence of dimension $\leq k$ if $L$ intersects $X^{(k)}_{reg}$.
If $L$ is contained in the singular locus of $X^{(k)}$, we can consider the
minimal multiplicity-stratum of $X^{(k)}$ containing $L$ (which is determined by
the multiplicities $n_i$ of the general cycle
$\sum_in_ix_i$, $x_i$ distinct, parametrized by $L$) and apply the same argument.}
\end{remark}
\begin{remark}{\rm We will give in Section \ref{sec1} an alternative proof of
Corollary \ref{corochar}, using the remark above, and the fact that
any Lagrangian subvariety of $X^k$ intersects a product $D_1\times\ldots\times D_k$ of
ample divisors on $X$.}
\end{remark}
Our main application of Theorem \ref{char} is the following result which
generalizes O'Grady's and Huybrechts' theorems \ref{theogrady}, \ref{theohuybrechts}
in the case of simple vector bundles
(instead of semistable torsion free sheaves). We do not need any of the assumptions
appearing in Theorems \ref{theogrady}, \ref{theohuybrechts}, but our results, unlike those of O'Grady,
are restricted to the locally free case.
\begin{theorem}\label{ogrady} Let $X$ be a projective
$K3$ surface. Let $F$ be a simple vector bundle on $X$ with Mukai vector
$v=v(F)$. Then
$$c_2(F)\in S_{d(v)}(X).$$
\end{theorem}
A particular case of this statement is the case where $d=0$: the corollary below proves Huybrechts' Theorem \ref{theohuybrechts} without any assumption on the Picard group of the $K3$ surface or on the determinant of $F$. This statement is conjectured in \cite{huybrechts}.
\begin{corollary} Let $F$ be a simple rigid vector bundle on a $K3$ surface. Then
the element $c_2(F)$ of $ CH_0(X)$ is a multiple of $c_X$.
\end{corollary}
We also deduce the following corollary, in the same spirit (and with essentially the same proof)
as Proposition \ref{propogrady}:
\begin{corollary}\label{corogrady}
Let $v\in H^*(X,\mathbf{Z})$ be a Mukai vector, with $k=c_2(v)$. Assume there exists a simple vector bundle
$F$
on $X$ with Mukai vector $v$. Then
$$S_{d(v)}^k(X)= \{c_2({G}), \,\,G\,\,{\rm a\,\, simple\,\, vector\,\, bundle\,\,on}\,\,X,\,\,v_G=v\},$$
where $k=c_2(v):=c_2(F)$.
\end{corollary}
These results answer for
simple vector bundles on $K3$ surfaces questions asked by O'Grady (see \cite[section 5]{ogrady}) for simple sheaves.
The paper is organized as follows: in Section \ref{sec1}, we prove Theorem \ref{char}. We also show a variant concerning families of subschemes (rather than $0$-cycles) of given length in a constant rational equivalence class.
In Section \ref{sec2}, Theorem \ref{ogrady} and Corollary \ref{corogrady}
are proved.
\vspace{0.5cm}
{\bf Thanks.} I thank Daniel Huybrechts and Kieran
O'Grady for useful and interesting comments on a preliminary version of this paper.
\vspace{0.5cm}
{\it It is a great pleasure to dedicate this article to Rob Lazarsfeld, with
all my esteem and sympathy.
His wonderful
article \cite{laz}, giving a new proof of a great classical theorem on
linear series on generic curves,
has also played a decisive role in the study of vector bundles and $0$-cycles
on $K3$ surfaces.}
\section{An alternative description of O'Grady's filtration\label{sec1}}
This section is devoted to the proof of Theorem \ref{char} which we state in the following form:
\begin{theorem}\label{autreforme}
Let $k\geq d$ and let $Z\subset X^{(k)}$ be a Zariski closed
irreducible algebraic subset of dimension $k-d$. Assume that all
cycles of $X$ parametrized by $Z$ are rationally equivalent in $X$.
Then the class of these cycles belongs to $S_d^k(X)$.
\end{theorem}
We will need for the proof the following simple lemma, which already appears in
\cite{voisinGandT}.
\begin{lemma}\label{lemma} Let $X$ be a projective $K3$ surface and let $C\subset X$ be a
(possibly singular) curve such that all points of $C$ are rationally equivalent
in $X$. Then any point of $C$ is rationally equivalent to $c_X$.
\end{lemma}
\begin{proof} Let $L$ be an ample line bundle on $X$.
Then $c_1(L)_{\mid C}$ is a $0$-cycle on $C$ and our assumptions imply
that $j_*(c_1(L)_{\mid C})={\rm deg}\,(c_1(L)_{\mid C})\,c$ for any point $c$ of $C$,
where $j:C\hookrightarrow X$ is the inclusion.
On the other hand, we have
$$j_*(c_1(L)_{\mid C})=c_1(L)\cdot C\,\,{\rm in}\,\,CH_0(X)$$
and thus, by \cite{beauvoisin}, $j_*(c_1(L)_{\mid C})={\rm deg}\,(c_1(L)_{\mid C})\,c_X$ in $CH_0(X)$.
Hence we have
$${\rm deg}\,(c_1(L)_{\mid C})\,c={\rm deg}\,(c_1(L)_{\mid C})\,c_X\,\,{\rm in}
\,\,CH_0(X).$$
This concludes the proof, since $c$ is arbitrary, ${\rm deg}\,(c_1(L)_{\mid C})\not=0$ and $CH_0(X)$ has no torsion.
\end{proof}
\begin{lemma} \label{corodense} The union of
curves $C$ satisfying the property stated in Lemma
\ref{lemma} is Zariski dense in $X$.
\end{lemma}
\begin{proof} The $0$-cycle $c_X$ is represented by any point lying on a (singular)
rational curve
$C\subset X$ (see \cite{beauvoisin}), so the result is clear if one knows that there
are infinitely many distinct rational curves contained in $X$. This result
is to our knowledge known only for general $K3$ surfaces but not for all
$K3$ surfaces (see however \cite{hassettetal}
for results in this direction). In any case, we can use the following argument
which already appears in \cite{maclean}:
By \cite{morimukai}, there is a $1$-parameter family of (singular) elliptic curves
$E_t$ on $X$. Let $C$ be a rational curve on $X$ which meets the fibers $E_t$.
For any integer $N$, and any point $t$, consider the points
$y\in \widetilde{E}_t$ (the desingularization of $E_t$), which
are rationally equivalent in $\widetilde{E}_t$ to the sum of a point
$x_t\in E_t\cap C$ (hence rationally equivalent to $c_X$)
and a $N$-torsion $0$-cycle on $\widetilde{E}_t$.
As $CH_0(X)$ has no torsion, the images $y_t$ of these points in
$X$ are all rationally equivalent to $c_X$ in $X$. The points $y_t$ are
clearly parametrized for $N$ large enough by a (possibly reducible)
curve $C_N\subset X$. Finally, the union over all $N$ of the points
$y_t$ above is Zariski dense in each $\widetilde{E}_t$, hence the
union of the curves $C_N$ is Zariski dense in $X$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{autreforme}] The proof is by induction on $k$, the case $k=1$, $d=0$ being Lemma \ref{lemma} (the case $k=1$, $d=1$ is trivial).
Let $Z'$ be an irreducible component of the inverse image of
$Z$ in $X^k$. Let $p:Z'\rightarrow X$ be the first projection.
We distinguish two cases and note that they exhaust all possibilities, up to
replacing $Z'$ by another component $Z''$ deduced from $Z'$ by letting the symmetric group $\mathfrak{S}_k$ act.
\vspace{0.5cm}
{\it Case 1.} {\it The morphism $p:Z'\rightarrow X$ is dominant.}
For a curve $C\subset X$ parametrizing points rationally equivalent
to $c_X$, consider the hypersurface
$$Z'_C:=p^{-1}(C)\subset Z'.$$
Let $q:Z'\rightarrow X^{k-1}$ be the projection on the product of
the $k-1$ last factors. Assume first that ${\rm dim}\,q(Z'_C)={\rm
dim}\,Z'_C=k-d-1$. Note that all cycles of $X$ parametrized by
$q(Z'_C)$ are rationally equivalent in $X$. Indeed, an element $z$
of $Z'_C$ is of the form $(c,z')$ with $c\in C$ so that $c=c_X$ in
$CH_0(X)$. So the rational equivalence class of $z'$ is equal to
$z-c_X$ and is independent of $z'\in Z'_C$. Thus the induction
assumption applies and the cycles of degree $k-1$ parametrized by
${\rm Im}\,q$ belong to $S^{k-1}_d(X)$. It follows in turn that the
classes of the cycles parametrized by $Z'$ (or $Z$) belong to
$S_d^k(X)$. Indeed, as just mentioned above, a $0$-cycle $z$
parametrized by $Z'$ is rationally equivalent to $z=c_X+z'$ where
$z'\in S^{k-1}_d(X)$, so $z'$ is rationally equivalent to
$(k-d-1)c_X+z''$, $z''\in X^{(d)}$. Hence $z$ is rationally
equivalent in $X$ to $(k-d)c_X+z''$, for some $z''\in X^{(d)}$. Thus
$z\in S^k_d(X)$.
Assume to the contrary that ${\rm dim}\,q(Z'_C)<{\rm dim}\,Z'_C=k-d-1$ for any curve
$C$ as above. We use now the fact (see Lemma \ref{corodense})
that these curves $C$ are Zariski dense in $X$.
We can thus assume that there is a point
$x\in Z'_C$ which is generic in $Z'$, so that both $Z'$ and $Z'_C$ are smooth at $x$,
of respective dimensions $k-d$ and $k-d-1$.
The fact that ${\rm dim}\, q(Z'_C)<k-d-1$ implies that $q$ is
not of maximal rank $k-d$ at $x$ and as $x$ is generic in $Z'$, we conclude that
$q$ is of rank $<k-d$ everywhere on $Z'_{reg}$, so that
${\rm dim}\,{\rm Im}\,q\leq k-d-1$.
Now recall that all $0$-cycles parametrized by $Z'$ are rationally equivalent.
It follows that for any fiber $F$ of $q$, all points in $p(F)$ are rationally equivalent in $X$. This implies that all these points are rationally equivalent
to $c_X$ by Lemma \ref{lemma}.
This contradicts the fact that $p$ is surjective.
\vspace{0.5cm}
{\it Case 2.} {\it None of the projections $pr_i,\,i=1,\ldots, k$,
from $X^k$ to its factors is dominant.} Let $C_i:={\rm
Im}\,pr_i\subset X$ if ${\rm Im}\,pr_i$ is a curve, and any curve
containing ${\rm Im}\,pr_i$ if ${\rm Im}\,pr_i$ is a point. Thus $Z'$
is contained in $C_1\times\ldots\times C_k$.
Let $C$ be a not necessarily irreducible ample curve such that all points
in $C$ are rationally equivalent to $c_X$.
Observe that the line bundle
$pr_1^*\mathcal{O}_X(C)\otimes \ldots\otimes pr_k^*\mathcal{O}_X(C)$ on $X^k$
has its restriction to
$C_1\times\ldots\times C_k$ ample and that its $(k-d)$-th self-intersection
on $C_1\times\ldots\times C_k$ is a complete intersection of ample
divisors and is equal to
\begin{eqnarray}
\label{cyclepositif}
W:=(k-d)!\sum_{i_1<\ldots<i_{k-d}}p_{i_1}^*\mathcal{O}_{C_{i_1}}(C)\cdot \ldots\cdot p_{i_{k-d}}^*\mathcal{O}_{C_{i_{k-d}}}(C)
\end{eqnarray}
in
$CH^{k-d}(C_1\times \ldots\times C_k)$,
where the $p_i$ are the projections from $\prod_iC_i$ to its factors.
The cycle $W$ of (\ref{cyclepositif}) is as well
the restriction to $C_1\times\ldots \times C_k$
of the effective cycle
\begin{eqnarray}
\label{cycleeffectif}
W':=(k-d)!\sum_{i_1<\ldots<i_{k-d}}pr_{i_1}^*C\cdot\ldots\cdot pr_{i_{k-d}}^*C.
\end{eqnarray}
As
the $k-d$ dimensional subvariety $Z'$ of $C_1\times\ldots\times C_k$
has a nonzero intersection with $W$, it follows that
the intersection number of
$Z'$ with $W'$ is nonzero in $X^k$, hence
that
$$Z'\cap pr_{i_1}^*C\cdot\ldots\cdot pr_{i_{k-d}}^*C\not=\emptyset$$
for some choice of indices $i_1<\ldots<i_{k-d}$.
This means that there exists a cycle in $Z$ which is of the form
$$z=z'+z''$$
with $z'\in C^{(k-d)}$ and $z''\in X^{(d)}$. As $z'$ is supported on $C$,
it is equal to $(k-d)c_X$ in $CH_0(X)$ and we conclude that
$z\in S_d^k(X)$.
\end{proof}
Let us now prove the following variant of Theorem
\ref{autreforme}. Instead of a family of $0$-cycles (that is, elements of $X^{(k)}$),
we now consider families of $0$-dimensional {\it subschemes} (that is, elements of
$X^{[k]}$):
\begin{variant}\label{variant} Let $k\geq d$ and let $Z\subset X^{[k]}$ be a Zariski closed irreducible
algebraic subset of dimension $k-d$. Assume
that all cycles of $X$ parametrized by $Z$ are rationally equivalent
in $X$.
Then the class of these cycles belongs to $S_d^k(X)$.
\end{variant}
\begin{proof} Let $z\in Z$ be a general point. The cycle $c(z)$ of $z$,
where $c:X^{[k]}\rightarrow X^{(k)}$ is the Hilbert-Chow morphism, is of the
form $\sum_ik_i x_i$, with $\sum_ik_i=k$, where $x_i$ are $k'$ distinct points of $X$.
We have of course
\begin{eqnarray}\label{etdun}k'=k-\sum_i(k_i-1).
\end{eqnarray}
The fiber of $c$ over a cycle of the form $\sum_ik_i x_i$ as above is
of dimension $\sum_i(k_i-1)$ (see for example \cite{ellingsrud}). It follows that
the image $Z_1$ of $Z$ in $X^{(k)}$ is of dimension $\geq k-d-\sum_i(k_i-1)$.
By definition, $Z_1$ is contained in a multiplicity-stratum of $X^{(k)}$ where
the support of the considered cycles has cardinality $\leq k'$.
Let $Z'_1\subset X^{k'}$ be the set of $(x_1,\ldots,x_{k'})$ such that
$\sum_ik_ix_i\in c(Z)$. Then the morphism
$$Z'_1\rightarrow Z_1,\,(x_1,\ldots,x_{k'})\mapsto\sum_ik_ix_i$$
is finite and surjective, so that
\begin{eqnarray}\label{etdeux}{\rm dim}\,Z'_1= {\rm dim}\,Z_1\geq k-d-\sum_i(k_i-1),
\end{eqnarray}
which by (\ref{etdun}) can be rewritten as
$${\rm dim}\,Z'_1= {\rm dim}\,Z_1\geq k'-d.$$
Note that by construction, $Z'_1$ parameterizes $k'$-tuples $(x_1,\ldots,x_{k'})$
with the property that
$\sum_ik_ix_i$ is rationally equivalent to a constant cycle.
The proof of Variant \ref{variant} is then concluded by the following statement:
\begin{proposition} \label{prop4aout}Let $l$ be a positive integer and let
$k_1,\ldots,k_{l}$ be positive multiplicities.
Let $Z$ be a closed algebraic subset of
$X^{l}$. Assume that ${\rm dim} \,Z\geq l-d$ and
the cycles $\sum_ik_ix_i$, $(x_1,\ldots,x_{l})\in Z$, are all rationally equivalent in $X$. Then the class
of the cycles $\sum_ik_ix_i$, $(x_1,\ldots,x_{l})\in Z$, belongs to
$S_d^k(X)$, where $k=\sum_ik_i$.
\end{proposition}
\end{proof}
For the proof of Proposition \ref{prop4aout}, we have to start with the following
Lemma:
\begin{lemma} \label{lemma4aout} Let $x_1,\ldots,x_d\in X$ and let $k_i\in \mathbf{Z}$.
Then $\sum_ik_ix_i\in S_d^k(X)$, $k=\sum_ik_i$.
\end{lemma}
\begin{proof} We use the following characterization of $S_d(X)$ given
by O'Grady: \begin{proposition}\label{procharogrady} (O'Grady
\cite{ogrady}) A cycle $z\in CH_0(X)$ belongs to $S_d(X)$ if and
only if there exists a (possibly singular, possibly reducible) curve
$j:C\subset X$, such that the genus of the desingularization of $C$
(or the sum of the genera of its components if $C$ is reducible) is
not greater than $d$ and $z$ belongs to ${\rm
Im}\,(j_*:CH_0(C)\rightarrow CH_0(X))$.
\end{proposition}
Let now $x_1,\ldots, x_d$ be as above. There exists by \cite{morimukai} a curve $C\subset X$, whose desingularization has
genus $\leq d$ and containing $x_1,\ldots, x_d$. Thus for any $k_i$, the cycle
$\sum_ik_ix_i$ is supported on $C$, which proves the Lemma by Proposition
\ref{procharogrady}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop4aout}]
Proposition \ref{prop4aout}
is proved exactly
as Theorem \ref{autreforme}, by induction on $l$. In case 1 considered in the induction step, we apply the same argument as in that proof. In case 2
considered in the induction step, using the same notations as in that proof, we conclude
that there is in $Z$ an $l$-tuple $(x_1,\ldots, x_l)$ satisfying (up to permutation of the indices)
$$x_{d+1},\ldots,x_{l}\in C,$$
and as any point of $C$ is rationally equivalent to
$c_X$, we find that
$$\sum_ik_ix_i=(\sum_{i>d}k_i)c_X+\sum_{1\leq i\leq d}k_ix_i.$$
By Lemma \ref{lemma4aout}, $\sum_{1\leq i\leq d}k_ix_i\in S_d(X)$, so that
$\sum_ik_ix_i\in S_d(X)$.
\end{proof}
As mentioned in the introduction, Theorem \ref{autreforme} in the case
$d=0$ provides the following characterization of the cycle $kc_X$, $k>0$: It is the only degree $k$
$0$-cycle $z$ of $X$, whose orbit $O_z\subset X^{(k)}$ is $k$-dimensional
(cf. Corollary
\ref{corochar}).
Let us give a slightly more direct proof in this case.
We will use Lemma \ref{lag} below.
Let $V$ be a $2$-dimensional complex vector space. Let $\eta\in
\bigwedge^2V^*$ be a nonzero generator, and let
$\omega\in \bigwedge^{1,1}_\mathbf{R}(V^*)$ be a positive real $(1,1)$-form on $V$.
\begin{lemma}\label{lag} Let $W\subset V^k$ be a $k$-dimensional complex vector subspace which is Lagrangian for the nondegenerate $2$-form
$\eta_k:=\sum_ipr_i^*\eta$ on $V^k$, where the $pr_i$'s are the projections from $V^k$
to $V$. Then $\prod_ipr_i^*\omega$ restricts to a volume form on
$W$.
\end{lemma}
\begin{proof} The proof is by induction on $k$. Let $\pi:W\rightarrow V^{k-1}$ be the
projection onto the product of the last $k-1$ summands.
Up to changing the order of the factors, we can clearly assume that ${\rm dim}\,{\rm Ker}\,\pi<2$. As
${\rm dim}\,{\rm Ker}\,\pi\leq 1$, we can choose a linear form $\mu$ on $V$ such that
the $k-1$-dimensional
vector space $W_\mu:={\rm ker}\,pr_1^*\mu_{\mid W}$ is sent injectively by
$\pi$ to a $k-1$-dimensional subspace
$W'$ of $V^{k-1}$. Furthermore, since
$W$ is Lagrangian for $\eta_k$, $W'$ is Lagrangian for
$\eta_{k-1}$ because $W_\mu\subset {\rm Ker}\,\mu\times V^{k-1}$, and
on ${\rm Ker}\,\mu\times V^{k-1}$, $\eta_k=\pi^*\eta_{k-1}$.
By the induction hypothesis, the form
$\prod_{i>1}pr_i^*\omega$ restricts to a volume form on
$W'$, where the projections here are considered as restricted to $0\times V^{k-1}$,
and it follows that
$$ pr_1^*(\sqrt{-1}\mu\wedge\overline{\mu})\wedge \prod_{i>1}pr_i^*\omega$$ restricts to a volume form on
$W$. It immediately follows that
$\prod_{i\geq 1}pr_i^*\omega$ restricts to a volume form on
$W$ since for a positive number $\alpha$, we have
$$ \omega\geq \alpha\sqrt{-1}\mu\wedge \overline{\mu}$$
as real $(1,1)$-forms on $V$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{corochar}] Let $z\in CH_0(X)$ be a cycle of degree $k$
such that ${\rm dim}\,O_z\geq k$.
Let $\Gamma\subset X^k$ be an irreducible component of the inverse
image of a $k$-dimensional component of $O_z\subset X^{(k)}$ via the
map $X^k\rightarrow X^{(k)}$.
By Mumford's theorem
\cite{mumford}, using the fact that all the $0$-cycles parameterized by $\Gamma$ are rationally equivalent in
$X$, $\Gamma$ is Lagrangian for the symplectic form
$\sum_ipr_i^*\eta_X$ on $X^k$, where $\eta_X\in H^{2,0}(X)$ is a generator.
Let $L$ be an ample line bundle on $X$ such that there is a
curve $D\subset X$ in the linear system $|L|$, all of whose components
are rational.
We claim that
$$\Gamma\cap D^k\not=\emptyset.$$
Indeed, it suffices to prove that the intersection number
\begin{eqnarray}\label{intnumber}
[\Gamma]\cdot [D^k]
\end{eqnarray}
is positive.
Let $\omega_L\in H^{1,1}(X)$ be a positive representative
of $c_1(L)$. Then
(\ref{intnumber}) is equal to
\begin{eqnarray}\label{integral}
\int_{\Gamma_{reg}}\prod_ipr_i^*\omega_L.
\end{eqnarray}
By Lemma \ref{lag}, the form $\prod_ipr_i^*\omega_L$ restricts to a volume form
on $\Gamma$ at any smooth point of $\Gamma$ and the integral (\ref{integral})
is thus positive.
\end{proof}
\section{Second Chern class of simple vector bundles\label{sec2}}
This section is devoted to the proof of Theorem \ref{ogrady}. Recall first from
\cite{ogrady}, that in order to prove the result for a vector bundle $F$ on
$X$, it suffices to prove it for $F\otimes L$, where $L$ is a line bundle on
$X$. Choosing $L$ sufficiently ample, we can thus assume that
$F$ is generated by global sections, and furthermore that
\begin{eqnarray}\label{van}H^1(X,F^*)=0.
\end{eqnarray}
Let $r={\rm rank}\,F$. Choose a general $r-1$-dimensional subspace
$W$
of $H^0(X,F)$, and consider the evaluation morphism:
$$e_W:W\otimes\mathcal{O}_X\rightarrow F.$$
The following result is well-known
(cf. \cite[5.1]{huybrechtslehn}).
\begin{lemma} \label{necessary} The morphism $e_W$ is generically injective, and the locus $Z\subset X$
where its rank is $<r-1$ consists of $k$ distinct reduced points, where $k=c_2^{top}(F)$.
\end{lemma}
\begin{proof} Let $G=Grass(r-1,H^0(X,F))$ be the Grassmannian of
$r-1$-dimensional subspaces of $H^0(X,F)$. Consider the following
universal subvariety of $G\times X$:
$$G_{deg}:=\{(W,x)\in G\times X, {\rm rank}\,e_{W,x}<r-1\}.$$
Since $F$ is generated by sections, $G_{deg}$ is a fibration over $X$,
with fibers smooth away from the singular locus
$$G_{deg}^{sing}:=\{(W,x)\in G\times X, {\rm rank}\,e_{W,x}<r-2\}.$$
Furthermore, we have
$${\rm dim}\,G_{deg}={\rm dim}\,(G\times X)-2={\rm dim}\,G$$
and ${\rm dim}\,G_{deg}^{sing}<{\rm dim}\,G$.
Consider the first projection: $p_1:G_{deg}\rightarrow G$.
It follows from the observations above and from Sard's theorem that for general $W\in G$, $p_1^{-1}(W)$ avoids $G_{deg}^{sing}$
and consists of finitely many reduced points
in $X$.
The statement concerning the number $k$ of points follows
from \cite[14.3]{fu}, or from the following argument
that we will need later on:
Given a $W$ such that the morphism $e_W$ is generically injective, and the locus
$Z_W$
where its rank is $<r-1$ consists of $k$ distinct reduced points,
we have an exact sequence
\begin{eqnarray}\label{exact}
0\rightarrow W\otimes \mathcal{O}_X\rightarrow F\rightarrow \mathcal{I}_{Z_W}\otimes \mathcal{L}\rightarrow0,\end{eqnarray}
where $\mathcal{L}={\rm det}\,F$.
Hence $c_2(F)=c_2(\mathcal{I}_{Z_W}\otimes \mathcal{L})=c_2(\mathcal{I}_{Z_W})=Z_W$, and in particular
$c_2^{top}(F)={\rm deg}\,Z_W$.
This proves the lemma.
\end{proof}
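For completeness, the Chern class computation at the end of the proof can be spelled out; this is a routine application of the Whitney formula to (\ref{exact}), included for the reader's convenience:

```latex
c(F) \,=\, c(W\otimes\mathcal{O}_X)\,c(\mathcal{I}_{Z_W}\otimes \mathcal{L})
     \,=\, c(\mathcal{I}_{Z_W}\otimes \mathcal{L})
     \,=\, 1 + c_1(\mathcal{L}) + [Z_W],
```

since $c_1(\mathcal{I}_{Z_W})=0$ and $c_2(\mathcal{I}_{Z_W})=[Z_W]$, so that $c_2(F)=[Z_W]$ and $c_2^{top}(F)={\rm deg}\,Z_W=k$.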
By Lemma \ref{necessary}, we have a rational map
$$\phi:G\dashrightarrow X^{(k)},\,W\mapsto c(Z_W),$$
where $c:X^{[k]}\rightarrow X^{(k)}$ is the Hilbert-Chow morphism.
\begin{proposition} \label{propinj}If $F$ is simple and satisfies the assumption (\ref{van}), the rational map $\phi$ is generically
one to one on its image.
\end{proposition}
\begin{proof} Let $G^0\subset G$
be the Zariski open set parameterizing the
subspaces $W\subset H^0(X,F)$ of dimension $r-1$ satisfying
the conclusions of Lemma \ref{necessary}. Note that $c$ is a local isomorphism at a point $Z_W$ of $X^{[k]}$ consisting
of $k$ distinct points,
so that the dimension of the image of $\phi$ is equal to the
dimension of the image of the rational map $G\dashrightarrow X^{[k]},\,W\mapsto Z_W$,
which we will also denote by $\phi$.
This $\phi$ is a morphism on $G^0$ and
it suffices to show
that the map $ \phi^0:=\phi_{\mid G^0}$ is injective.
Let $W\in G^0,\,Z:=\phi(W)$. For any $W'\in {\phi^0}^{-1}(Z)$, we have an exact
sequence as in (\ref{exact}):
\begin{eqnarray}\label{exact'}
0\rightarrow W'\otimes \mathcal{O}_X\rightarrow F\rightarrow \mathcal{I}_Z\otimes \mathcal{L}\rightarrow0,\end{eqnarray}
so that $W'$ determines a morphism
$$t_{W'}:F\rightarrow \mathcal{I}_{Z}\otimes\mathcal{L},$$
and conversely, we recover $W'$ from the data of $t_{W'}$ up to a scalar as the space of sections of
${\rm Ker}\,t_{W'}\subset F$.
We thus have an injection of the fiber ${\phi^0}^{-1}(Z)$ into
$\mathbf{P}({\rm Hom}\,(F,\mathcal{I}_{Z}\otimes\mathcal{L}))$.
In order to compute ${\rm Hom}\,(F,\mathcal{I}_{Z}\otimes\mathcal{L})$, we
tensor by $F^*$ the exact sequence (\ref{exact}).
We then get the long exact sequence:
\begin{eqnarray}\ldots\rightarrow {\rm Hom}\,(F,F)\rightarrow {\rm Hom}\,(F,\mathcal{I}_{Z}\otimes\mathcal{L})\rightarrow H^1(X,F^*\otimes W).
\end{eqnarray}
By the vanishing (\ref{van}) we conclude that
the map $${\rm Hom}\,(F,F)\rightarrow {\rm Hom}\,(F,\mathcal{I}_{Z}\otimes\mathcal{L})$$
is surjective. As $F$ is simple, the left hand side is generated by $Id_F$, so the
right hand side is generated by $t_W$. The fiber ${\phi^0}^{-1}(Z)$ thus consists of one point.
\end{proof}
\begin{proof}[Proof of Theorem \ref{ogrady}] Let $F$ be a simple nontrivial
globally generated vector bundle of rank $r$, with $h^1(F)=0$ and
with Mukai vector
$$v=v_F=(r,l,s)\in H^*(X,\mathbf{Z}).$$
This means that $r={\rm rank}\,F$, $l=c_1^{top}(F)\in H^2(X,\mathbf{Z})$ and
\begin{eqnarray}\label{derder}\chi(X,{\rm End}\,F)=\langle v,v^*\rangle=2rs-l^2.
\end{eqnarray}
The Riemann-Roch formula applied to ${\rm End} \,F$
gives
\begin{eqnarray}\label{der}\chi(X,{\rm End}\,F)=2r^2+(r-1)l^2-2rc_2^{top}(F),
\end{eqnarray}
hence we get the formula (which can also be derived from the definition
(\ref{vraimukai})):
\begin{eqnarray}\label{for0}s=r+\frac{l^2}{2}-c_2^{top}(F).
\end{eqnarray}
We also have by definition of $d(v)$
$$\chi(X,{\rm End}\,F)=2-2d(v)$$
and thus by (\ref{derder}),
\begin{eqnarray}\label{for1}d(v)=1-rs+\frac{l^2}{2}.
\end{eqnarray}
The Riemann-Roch formula applied to $F$ gives on the other hand:
\begin{eqnarray}\label{for2}\chi(X,F)=2r+\frac{l^2}{2}-c_2^{top}(F)
\end{eqnarray}
which by (\ref{for0}) gives
\begin{eqnarray}\label{for3}\chi(X,F)=r+s.
\end{eqnarray}
As we assume $h^1(F)=0$ and we have $h^2(F)=0$ since $F$ is
nontrivial, generated by sections and simple, we thus get
\begin{eqnarray}\label{for3bis}h^0(X,F)=r+s.
\end{eqnarray}
With the notations introduced above, we conclude that
$${\rm dim}\,G=(r-1)(s+1).$$
By Proposition \ref{propinj}, as all cycles parameterized by ${\rm Im}\,\phi$
are rationally equivalent in $X$, the orbit under rational equivalence of
$c_2(F)$ in $X^{(k)}$, $k=c_2^{top}(F)$, has dimension greater than or equal to
$$ (r-1)(s+1)=rs-s+r-1.$$
But by (\ref{for0}) and (\ref{for1}) we have
$$k-d(v)=r-s+rs-1=(r-1)(s+1),$$
so this lower bound is exactly $k-d(v)$.
By Theorem \ref{autreforme}, we conclude that
$c_2(F)\in S^k_{d(v)}(X)$.
\end{proof}
\begin{remark}{\rm Instead of proving that the general $Z_W$ is reduced and applying
Theorem \ref{autreforme}, we could as well directly apply Variant \ref{variant}
to the family of subschemes $Z_W$.}
\end{remark}
For completeness, we conclude this section with the proof of
Corollary \ref{corogrady}, although a large part of it mimics the
proof of Proposition \ref{propogrady} in \cite{ogrady}.
We recall for convenience the statement:
\begin{corollary} \label{corointrobis}Let $v\in H^*(X,\mathbf{Z})$ be a Mukai vector. Assume there exists a simple vector bundle
$F$
on $X$ with Mukai vector $v$. Then
$$S_d^k(X)= \{c_2({G}), \,\,G\,\,{\rm a\,\, simple\,\, vector\,\, bundle\,\,on}\,\,X,\,\,v_G=v\},$$
where $d=d(v),\,k=c_2(v):=c_2^{top}(F),\,v_F=v$.
\end{corollary}
\begin{proof} The inclusion
\begin{eqnarray}\label{*}\{c_2({G}), \,\,G\,\,{\rm a\,\, simple\,\, vector\,\,
bundle\,\,on}\,\,X,\,\,v_G=v\}\subset S_d^k(X)
\end{eqnarray}
is the content of Theorem
\ref{ogrady}.
For the reverse inclusion, we first prove that there exists a Zariski open set
$U\subset X^{(d)}$ such that
\begin{eqnarray}\label{inclusionU}
cl(U)+(k-d(v))c_X\subset \{c_2({G}), \,\,G\,\,{\rm a\,\, simple}\\
\nonumber\, {\rm vector\,\, bundle\,\,on}\,\,X,\,\,v_G=v\}
\end{eqnarray}
where $cl: X^{(d)}\rightarrow CH_0(X)$ is the cycle map.
As $F$ is simple, the local deformations of $F$ are unobstructed.
Hence there exist a smooth connected quasi-projective variety
$Y$, a locally free sheaf $\mathcal{F}$ on $Y\times X$ and a point
$y_0\in Y$ such that $\mathcal{F}_{y_0}\cong F$ and the Kodaira-Spencer map
$$\rho: T_{Y,y_0}\rightarrow H^1(X,{\rm End}\,F)$$
is an isomorphism.
As $\mathcal{F}_{y_0}$ is simple, so is $\mathcal{F}_y$ for $y$ in a
dense Zariski open set of $Y$. Shrinking $Y$ if necessary,
$\mathcal{F}_y$ is simple for all $y\in Y$. By Theorem \ref{ogrady},
we have $c_2(\mathcal{F}_y)\in S_{d(v)}(X)$ for all $y\in Y$.
Let $\Gamma:=c_2(\mathcal{F})\in CH^2(Y\times X)$.
Consider the following set $R\subset Y\times X^{(d(v))}$
$$R=\{(y,z),\,\Gamma_*(y)=c_2(\mathcal{F}_y)=cl(z)+(k-d(v))c_X\,\,{\rm in}\,\,CH_0(X)\},$$
where $cl: X^{(d(v))}\rightarrow CH_0(X)$ is the cycle map and $k=c_2(v)$.
$R$ is a countable union of closed algebraic subsets of
$Y\times X^{(d)}$ and by the above inclusion (\ref{*}), the first projection
$$R\rightarrow Y$$
is surjective. By a Baire category argument, it follows that for
some component $R_0\subset R$, the first projection is dominant.
We claim that the second projection $R_0\rightarrow X^{(d(v))}$ is
also dominant. This follows from the fact that, by Mumford's theorem,
the pull-backs to $R_0$ of the holomorphic $2$-forms on $Y$ and
$X^{(d(v))}$ coincide. As the first projection is dominant and the
Mukai form on $Y$ has rank $2d(v)$, the same is true for its
pull-back to $R_0$ (or rather its smooth locus). Hence the pull-back
to $R_0$ of the symplectic form on $X^{(d(v))}$ by the second
projection also has rank $2d(v)$. This implies that the second
projection is dominant hence that its image contains a Zariski open
set. Thus (\ref{inclusionU}) is proved. The proof of Corollary
\ref{corointrobis} is then concluded with Lemma \ref{lemmafinal}
below.
\end{proof}
\begin{lemma}\label{lemmafinal} Let $X$ be a $K3$ surface and $d>0$ be an integer.
Then for any open set (in the analytic or Zariski topology) $U\subset
X^{(d)}$, we have
$$cl(U)=cl(X^{(d)})\subset CH_0(X).$$
\end{lemma}
\begin{proof} It clearly suffices to prove the result for $d=1$. It is proved in
\cite{maclean} that for any point $x\in X$, the set of points $y\in X$ rationally equivalent to $x$ in $X$ is dense in $X$ for the usual topology. This set thus
meets $U$, so that $x\in cl(U)$.
\end{proof}
\section{Introduction} \label{int}
Dynamic problems involving contact between deformable bodies or between a body and a foundation still remain today an important subject of study for mathematical and numerical analysis. In the literature, many references offer different approaches for the usual contact conditions with friction \cite{Ac,CDR98,CFHMPR,kpr,lebon2003,pietrzak1999large,laur2002,RCL88,SM,wr}. As a matter of fact, the equations resulting from frictional contact problems are difficult to solve both mathematically and numerically due to their inherent non-linearity and non-smoothness. In order to handle these issues, several methods have already been successfully tested. They rely on reformulating the original conditions by normal compliance methods \cite{KO,OK,MartOden} or by other relevant methods such as the quasi-augmented Lagrangian method \cite{Ac}, the bi-potential method \cite{dSF91, dumont13}, the conjugate gradient method \cite{laur2002,wr}, the Uzawa method \cite{RCL88,uzawa} and the Nitsche finite element method \cite{Chouly14,Chouly,CFHMPR} and references therein. Moreover, the semi-smooth Newton method with a Primal-Dual Active Set (PDAS) procedure appears as one of the most relevant methods to solve contact problems with friction because of its efficiency and its simplicity of implementation \cite{hint2,wohlcoulomb,wohl}. Some works have been dedicated to studying the efficiency of PDAS methods, as well as to solving linear elasticity problems with unilateral boundary conditions \cite{hint2,hint,ABD1}, or even non-linear multi-body contact problems in elastodynamics \cite{wohlcoulomb,wohl,ABCD2}.
However, when considering impact problems, whether in small or large deformations, even the standard implicit schemes ($\theta$-method, Newmark schemes, midpoint rules or Hilber-Hughes-Taylor type methods \cite{hht}) lose their unconditional stability, which leads either to an energy blow-up or to a numerical dissipation which is neither realistic nor mechanically acceptable, as explained in \cite{hht,simo,gonz,arro}. Therefore, it is necessary to use appropriate implicit schemes providing energy conservation properties.
To this purpose, many references \cite{hauret2006energy,laur2002,simo,gonz,arro, Lach,AP,LL,ayyad2009formulation,Acary13,Acary15,barboteu2015hyperelastic} propose relevant approaches to solve these impact problems with balance energy properties.
This work proposes two main novelties. The first one arises from the improved frictional contact model we consider, which provides intrinsic energy-controlling properties. Contact is modeled with a general Moreau--Yosida regularization \cite{conj,MoYosi,regular} of the unilateral condition. The Moreau--Yosida regularization appears to be an appropriate tool to obtain a regular contact model (normal compliance) which respects the kinetic energy of the system while preserving the non-interpenetration of contact. The discrete Improved Normal Compliance (INC) condition is well suited to respect energy conservation, in agreement with the continuous framework. The second novelty consists in the analysis and the implementation of an energy-controlling method, based on a semi-smooth Newton method combined with an active set strategy via complementarity functions for normal compliance with friction models. Based on representative examples from the literature \cite{laur2002,ayyad2009formulation,khenous2008mass}, we study and analyze numerically this Improved Normal Compliance method for dynamic hyperelastic problems, with the main objective of respecting energy conservation and the non-interpenetration condition during impacts.
The remainder of the article is organized as follows. In Section \ref{s2}, we present and explain the physical framework and the mathematical model studied, then we recall the formulation of hyperelastic problems with frictional contact. We present in detail various contact models such as unilateral contact, persistent contact, and normal compliance conditions. After formulating the strong and variational problems, we detail the energy conservation and dissipation properties in the continuous case using the specific properties of the normal compliance conditions. Section \ref{s3} is devoted to the discretization of the hyperelastodynamic problem with contact and to the (INC) approach adapted to respect energy conservation in agreement with the continuous framework. In Sections \ref{s4} and \ref{s5}, we propose an innovative, fast and efficient Primal-Dual Active Set (PDAS) method to solve a hyperelastodynamic problem with Improved Normal Compliance and Coulomb friction. The contact and friction conditions are enforced by applying an active set strategy via a non-linear complementarity function based on a semi-smooth Newton iterative scheme.
In the last section, we provide numerical experiments on hyperelastodynamic contact problems, with and without friction, carried out on a dynamic elastic ball and then on a hyperelastic ring, both launched towards a rigid foundation. We present comparative studies between different numerical energy-conservation methods during the impacts of these systems [1, 2, 31, 32].
\section{Hyperelastic problems for low velocity impact with friction}\label{s2}
\subsection{Hyperelastic framework}
A hyperelastic body occupies a bounded domain $\Omega\subset \mathbb{R}^d$ $(d=1,2,3)$ with a Lipschitz continuous boundary $\Gamma$, divided into three disjoint measurable parts $\Gamma_1 $, $\Gamma_2$, and $\Gamma_3$. We denote by $\mbox{\boldmath{$x$}}=(x_i)$ a generic point of $\Omega\cup\Gamma$ and by ${\mbox{\boldmath{$n$}}}=(n_i)$ the outward unit normal on $\Gamma$. The indices $i$, $j$, $k$, $l$ vary between $1$ and $d$ ($d$ is the space dimension), and unless otherwise specified, the summation convention on repeated indices is employed. We denote by $\mathbb{M}^d$ the space of second-order tensors on $\mathbb{R}^d$ or, equivalently, the space of square matrices of order $d$. The scalar product and the norm on $\mathbb{R}^d$ and $\mathbb{M}^d$ are defined by
\begin{eqnarray*}
&\mbox{\boldmath{$u$}}\cdot \mbox{\boldmath{$v$}}=u_i v_i\ ,\qquad
\displaystyle{\|\mbox{\boldmath{$u$}}\|=(\mbox{\boldmath{$u$}}\cdot \mbox{\boldmath{$u$}})^{\frac{1}{2}}}\qquad
\forall \,\mbox{\boldmath{$u$}}, \mbox{\boldmath{$v$}}\in \mathbb{R}^d, \\
&\mbox{\boldmath{$\sigma$}}:\mbox{\boldmath{$\gamma$}}=\sigma_{ij}\gamma_{ij}\ ,\qquad
\|\mbox{\boldmath{$\sigma$}}\|=(\mbox{\boldmath{$\sigma$}}:\mbox{\boldmath{$\sigma$}})^{\frac{1}{2}} \qquad\,\forall\,
\mbox{\boldmath{$\sigma$}},\mbox{\boldmath{$\gamma$}}\in\mathbb{M}^d.
\end{eqnarray*}
\noindent Let $\mbox{\boldmath{$u$}}$ and $\boldsymbol{\Pi}$ denote the displacement field and the first {Piola--Kirchhoff} stress tensor, respectively.
The normal and tangential components of $\mbox{\boldmath{$u$}}$ on $\Gamma$ are given by $u_{n} = \mbox{\boldmath{$u$}}\cdot{\mbox{\boldmath{$n$}}}$ and $\mbox{\boldmath{$u$}}_{\tau}=\mbox{\boldmath{$u$}} - u_{n}\mbox{\boldmath{$n$}}$, where $\mbox{\boldmath{$n$}}$ is the outward unit normal to $\Gamma$. We consider that an index following a comma represents the partial derivative with respect to the corresponding spatial variable of $\mbox{\boldmath{$x$}}$,\ $\displaystyle u_{i,j}=\frac{\partial u_i}{ \partial x_j}$. Dots above a function represent partial derivatives with respect to time, i.e.~$\dot\mbox{\boldmath{$u$}}=\displaystyle\frac{\partial\mbox{\boldmath{$u$}}}{\partial t}$ and
$\ddot\mbox{\boldmath{$u$}}=\displaystyle\frac{\partial^2\mbox{\boldmath{$u$}}}{\partial t^2}$.
Moreover, we recall that the divergence operator acting on a tensor field $\boldsymbol\tau$ is ${\rm Div}\,\boldsymbol{\tau}=(\tau_{ij,j})$.\\
In the problems studied later, the behavior of the material is described by a hyperelastic constitutive law. We recall that hyperelastic constitutive laws are characterized by the first Piola--Kirchhoff tensor $\boldsymbol{\Pi}$, which derives from a deformation energy density $W: \Omega\times\mathbb{ M}^d_+\to\mathbb{R}$, $\boldsymbol{\Pi}(\mbox{\boldmath{$x$}},\mathbf F)=
\frac{\partial}{\partial {\bf F}} W(\mbox{\boldmath{$x$}},{\bf F})=\partial_{\bf F} W(\mbox{\boldmath{$x$}},{\bf F})$, for all $\mbox{\boldmath{$x$}} \in\Omega$ and ${\bf F}\in\mathbb M_+^d$, where $\mathbb M_+^d = \{{\bf F}\in \mathbb M^d: \det {\bf F} > 0\}$. Here ${\bf F}$ is the deformation gradient defined by ${\bf F} = {\bf I} + \nabla {\mbox{\boldmath{$u$}}}$
and $\partial_{\bf F}$ represents the differential with respect to the variable ${\bf F}$, for more details on hyperelasticity see \cite{ciarlet1988mathematical,le1994numerical,laur2002}. In what follows, we consider a dynamic problem with contact and friction in which the hyperelastic body comes into contact with a perfectly rigid obstacle, the foundation (see Figure \ref{body}).
\begin{figure}[h!]
\centering
\subfloat[]{\includegraphics[scale=.425]{disegno1.pdf}} \hspace{2cm}
\subfloat[]{\raisebox{.05cm}{\includegraphics[scale=.425]{disegno2.pdf}}}
\caption{Reference (a) and deformed (b) configurations of a body.}
\label{body}
\end{figure}
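For concreteness, here is a minimal sketch of such a stored-energy density and its first Piola--Kirchhoff stress for a compressible neo-Hookean model. This is a standard example given for illustration only; the moduli and the specific density $W$ are assumptions of ours, not necessarily the model used in the experiments below.

```python
import numpy as np

# Illustrative Lame-type moduli (assumed values, not taken from the paper).
MU, LAM = 1.0, 1.5

def energy(F):
    """Compressible neo-Hookean stored-energy density W(F).

    W(F) = mu/2 (tr(F^T F) - 3) - mu ln J + lam/2 (ln J)^2,  with J = det F.
    """
    J = np.linalg.det(F)
    return (0.5 * MU * (np.trace(F.T @ F) - 3.0)
            - MU * np.log(J) + 0.5 * LAM * np.log(J) ** 2)

def piola(F):
    """First Piola-Kirchhoff stress Pi(F) = dW/dF.

    Differentiating W term by term gives Pi = mu (F - F^{-T}) + lam ln J F^{-T}.
    """
    Finv_T = np.linalg.inv(F).T
    return MU * (F - Finv_T) + LAM * np.log(np.linalg.det(F)) * Finv_T
```

The stress vanishes in the reference configuration, $\boldsymbol{\Pi}(\mathbf{I})=\mathbf{0}$, and agrees with a finite-difference derivative of $W$, which is a convenient sanity check when implementing such constitutive laws.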
The hyperelastic body is subjected to the action of volumetric forces with density $\mbox{\boldmath{$f$}}_0$ and surface tractions with density $\mbox{\boldmath{$f$}}_2$
which act on $\Gamma_2$. In the rest of the paper, we consider the time interval of interest $[0,T]$ with $T>0$. We denote by $t\in [0,T]$ the time variable.
\subsection{Frictional contact conditions}\label{reg_cd}
Suppose the body is fixed on $\Gamma_1$ and can come into contact on $\Gamma_3$ with the foundation. In the following, the frictional contact conditions are based on the combination of normal compliance conditions with a Coulomb law of dry friction on $\Gamma_3$. We denote by $\mbox{\boldmath{$\varphi$}}:\overline{\Omega}\times[0,T]\to\mathbb{R}^d$ the deformation field, with $\mbox{\boldmath{$\varphi$}}(\mbox{\boldmath{$x$}},t)=\mbox{\boldmath{$x$}}+\mbox{\boldmath{$u$}}( \mbox{\boldmath{$x$}},t)$ the position at time $t\in[0,T]$ of the point $\mbox{\boldmath{$x$}}\in\overline{\Omega}$. For any point $\mbox{\boldmath{$x$}}\in\Gamma_3$, we define the point $\overline{\mbox{\boldmath{$y$}}}(\mbox{\boldmath{$x$}},t)$ of the foundation closest to $\mbox{\boldmath{$\varphi$}}(\mbox{\boldmath{$x$}},t)$:
\begin{eqnarray*}
\overline{\mbox{\boldmath{$y$}}}(\mbox{\boldmath{$x$}},t)=\arg\min_{{\mbox{\boldmath{$y$}} \in\text{foundation}}}{\|\mbox{\boldmath{$\varphi$}} (\mbox{\boldmath{$x$}},t)-\mbox{\boldmath{$y$}} \|_2}.
\end{eqnarray*}
In this way, one can define the signed gap between a point of $\Gamma_3$ and its orthogonal projection on the rigid foundation as follows:
\begin{eqnarray*}
g_{\nu}=(\mbox{\boldmath{$\varphi$}}(\mbox{\boldmath{$x$}},t)-\overline{\mbox{\boldmath{$y$}}}(\mbox{\boldmath{$x$}},t))\cdot\mbox{\boldmath{$\nu$}}, \quad \forall \mbox{\boldmath{$x$}} \in \Gamma_3,
\end{eqnarray*}
\noindent where $\mbox{\boldmath{$\nu$}}$ is the inner unit normal vector to the rigid foundation. The normal force of contact $\Pi_\nu$, assumed to be negative, can be written in the direction $\mbox{\boldmath{$\nu$}}$:
\begin{eqnarray*}
\Pi_\nu=\mbox{\boldmath{$\nu$}} \cdot \mbox{\boldmath{$\Pi$}} \mbox{\boldmath{$n$}}.
\end{eqnarray*}
In the same way, the tangential contact force can also be expressed in terms of the first Piola--Kirchhoff tensor:
\begin{eqnarray*}
\mbox{\boldmath{$\Pi$}}_{\tau}=\mbox{\boldmath{$\Pi$}}\mbox{\boldmath{$n$}}-\Pi_{\nu} \mbox{\boldmath{$\nu$}}.
\end{eqnarray*}
With these definitions in place, the tangential contact velocity $\dot{\mbox{\boldmath{$g$}}}_\tau$ of a point $\mbox{\boldmath{$x$}}\in \Gamma_3$, relative to the opposite surface of the foundation, is given by
\begin{eqnarray*}
\dot{\mbox{\boldmath{$g$}}}_\tau=[\mbox{\boldmath{$I$}}_d-\mbox{\boldmath{$\nu$}} \otimes \mbox{\boldmath{$\nu$}}]\dot{\mbox{\boldmath{$u$}}} (\mbox{\boldmath{$x$}},t).
\end{eqnarray*}
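As an illustration of the kinematic quantities just introduced, the closest point $\overline{\mbox{\boldmath{$y$}}}$ and the gap $g_\nu$ have a closed form when the foundation is planar. The following sketch assumes this simple geometry; the function and variable names are ours.

```python
import numpy as np

def gap_to_plane(phi_x, y0, nu):
    """Closest point and signed gap to a planar rigid foundation.

    phi_x : deformed position of a contact-boundary point,
    y0    : any point of the foundation plane,
    nu    : inner unit normal to the rigid foundation.
    Returns (g_nu, ybar) with g_nu = (phi_x - ybar) . nu.
    """
    # Orthogonal projection of phi_x onto the plane through y0 with normal nu.
    ybar = phi_x - np.dot(phi_x - y0, nu) * nu
    g_nu = float(np.dot(phi_x - ybar, nu))
    return g_nu, ybar
```

With this sign convention, a point lying outside the foundation yields $g_\nu\leq 0$, in agreement with the non-penetration condition stated below.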
The conditions of contact and friction are posed on the boundary $\Gamma_3$ or $\Gamma_3\times[0,T]$ in a time-dependent problem and pertain to the normal and tangential components $\Pi_\nu$ and $\mbox{\boldmath{$\Pi$}}_\tau $ of the surface contact force, respectively, and the displacements $u_\nu$, $\mbox{\boldmath{$u$}}_\tau$. With these considerations, the unilateral conditions on the boundary of contact $\Gamma_3$ are given in the following section.
\subsubsection{Law of unilateral contact with friction}
The law of unilateral contact of a solid on a rigid obstacle was proposed in 1933 by Signorini \cite{Signorini} and is written in the form of three conditions: a condition of non-penetration, a condition of compression and a condition of complementarity. Thereafter and for convenience, we will change notation for the contact variables:
$\delta_{\nu}=g_{\nu}$ and $\lambda_{\nu}=-\Pi_{\nu} $. Therefore, the unilateral contact conditions read
\begin{equation}\label{cont_uni}
\begin{aligned}
&\textrm{Non-penetration condition:}\quad&\delta_{\nu}\leq 0, \\
&\textrm{Compression condition:} \quad&\lambda_\nu\geq0, \\
& \textrm{Complementarity condition:} \quad&\lambda_\nu\delta_\nu=0. \\
\end{aligned}
\end{equation}
From a mechanical viewpoint, this amounts to considering a perfectly rigid foundation; no matter how much compressive force is applied, no penetration occurs.\\
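For later reference, conditions (\ref{cont_uni}) admit an equivalent max-type reformulation, $\lambda_\nu=\max(0,\lambda_\nu+c\,\delta_\nu)$ for any $c>0$, which is the kind of non-linear complementarity function underlying the PDAS and semi-smooth Newton strategies mentioned in the introduction. A minimal sketch in Python (the function name and the parameter $c$ are ours):

```python
def ncp_residual(lam_nu, delta_nu, c=1.0):
    """Complementarity residual for the Signorini conditions.

    For any fixed c > 0, the three conditions
        delta_nu <= 0,  lam_nu >= 0,  lam_nu * delta_nu = 0
    hold if and only if  lam_nu - max(0, lam_nu + c * delta_nu) = 0.
    """
    return lam_nu - max(0.0, lam_nu + c * delta_nu)
```

The sign of $\lambda_\nu+c\,\delta_\nu$ separates the active (contact) from the inactive (separation) nodes, which is precisely how an active set is updated at each semi-smooth Newton iteration.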
Coulomb's law of friction involves the tangential friction stress $\mbox{\boldmath{$\Pi$}}_\tau $, the normal contact pressure $\Pi_\nu$, and the tangential contact velocity $ \dot \mbox{\boldmath{$u$}}_{\tau}$ that we now denote $\dot\mbox{\boldmath{$\delta$}}_{\tau}=\dot\mbox{\boldmath{$u$}}_{\tau}$, as well as the tangential stress $\boldsymbol\lambda_{\tau}=-\mbox{\boldmath{$\Pi$}}_{\tau}$, as follows:
\begin{equation}
\begin{aligned}
&\|\boldsymbol\lambda_\tau\|\le\mu\,|\lambda_\nu|&\quad \rm stick\ status, &\\
&\displaystyle\boldsymbol\lambda_\tau=\mu\,|\lambda_\nu|\,\frac{{\dot\mbox{\boldmath{$\delta$}}}_\tau}{\|{\dot\mbox{\boldmath{$\delta$}}}_\tau\| } \ \ {\rm if}\ \ \dot{\mbox{\boldmath{$\delta$}}}_\tau\ne \mbox{\boldmath{$0$}}
&\quad \rm slip\ status.
\end{aligned}
\end{equation}
\noindent where $\mu \ge 0$ is the coefficient of friction.\\
If the norm of the tangential stress $\boldsymbol\lambda_\tau$ is less than the friction threshold $\mu\,|\lambda_\nu|$, then there is sticking between the body and the foundation. If, on the other hand, this threshold is reached, then the body slides on the foundation while the tangential stress is constant and depends on the unit tangent $\mbox{\boldmath{$\tau$}}={\dot\mbox{\boldmath{$\delta$}}}_{\tau}/\|{\dot\mbox{\boldmath{$\delta$}}}_{\tau}\|$.
\noindent Note that it is possible to write conditions (\ref{cont_uni}) as the following subdifferential inclusion:
\begin{equation}
\lambda_{\nu} \in \partial \Psi_{\mathbb{R}^{-}}(\delta_{\nu})\quad {\rm on}\quad \Gamma_3\times(0,T) ,
\end{equation}
where $\partial$ represents the sub-differential operator in the sense of convex analysis and $\Psi_A$ denotes the indicator function of the set $A \subset \mathbb{R}$.
A similar consideration for frictional stress leads to
\begin{equation}
\boldsymbol\lambda_{\tau}\in \mu\lambda_{\nu}\partial\|{\dot\mbox{\boldmath{$\delta$}}}_{\tau}\|
\quad{\rm on}\quad\Gamma_3\times(0,T).
\end{equation}
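As a concrete illustration of the stick/slip alternative above, the following sketch evaluates the Coulomb tangential stress from a trial (stick) stress; the regularized stick stress $c_\tau\dot{\boldsymbol\delta}_\tau$ anticipates the compliance form used later in the text, and the function name and parameter values are illustrative assumptions, not part of the model.

```python
import numpy as np

def coulomb_friction(lambda_nu, ddelta_tau, mu, c_tau):
    """Tangential stress lambda_tau under Coulomb-type friction.

    Stick: ||lambda_tau|| < mu*|lambda_nu|, with the regularized trial
           stress lambda_tau = c_tau * ddelta_tau (an assumption here,
           matching the compliance form used later in the text).
    Slip:  lambda_tau = mu*|lambda_nu| * ddelta_tau / ||ddelta_tau||.
    A zero slip velocity always yields the stick (zero-stress) branch.
    """
    threshold = mu * abs(lambda_nu)
    trial = c_tau * np.asarray(ddelta_tau, dtype=float)  # trial (stick) stress
    norm_trial = np.linalg.norm(trial)
    if norm_trial < threshold:           # stick: strictly inside the cone
        return trial
    # slip: stress sits on the cone boundary, aligned with the slip velocity
    return threshold * trial / norm_trial
```

The returned stress never exceeds the friction threshold $\mu|\lambda_\nu|$ in norm, which is the content of the first condition above.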
\subsubsection{Law of persistent contact with friction}
This law departs slightly from the previous one insofar as its main interest lies in its natural energy-conservation properties. The persistency condition is expressed as a complementarity condition between the normal stress $\lambda_\nu$ and the normal velocity $\dot\delta_\nu$, namely:
\begin{eqnarray}
\lambda_\nu \dot{\delta}_\nu={0}. \label{pers}
\end{eqnarray}
This condition alone is sufficient to guarantee that the work of the normal contact force vanishes: $\displaystyle{\int_{\Gamma_3}\lambda_\nu \dot{\delta}_\nu\,da}=0$. Combined with the unilateral contact law, it takes the following form:
\begin{equation}
\begin{cases}
\textrm{if} \quad \delta_\nu<0,\quad\lambda_\nu=0,&\\
\textrm{if} \quad \delta_\nu=0, \quad\lambda_\nu\in-\partial\Psi_{\mathbb{R}^+}(\dot{\delta}_\nu).
\end{cases}
\end{equation}
We refer the reader to~\cite{laur2002,hauret2006energy,ayyad2009formulation} for more details.
\subsubsection{Normal compliance law via an $\alpha$-Moreau--Yosida regularization}
We briefly present the $\alpha$-Moreau--Yosida regularization \cite{conj, MoYosi} of the unilateral condition of Signorini and begin with a reminder of the concepts of variational analysis. Let $S$ be a subset of a Hilbert space $\mathbf{H}$ endowed with the norm $\|{\cdot}\|$.
\begin{definition}
Let $f: \mathbf{H} \to \mathbb{R}\cup {\{+\infty\}}$ be a lower semicontinuous function bounded from below.
For all $r>0$, the $\alpha$-Moreau--Yosida envelope \cite{conj, MoYosi, reguEmi, regular} of $f$, with $\alpha\geq 2$, is defined by:
\begin{equation}
f_r^{\alpha}(z)=\inf_{y\in \mathbf{H}}\displaystyle{\Big(f(y)+\frac{1}{r}\|y-z\|^\alpha\Big),\quad\forall z\in\mathbf H. }
\end{equation}
\end{definition}
\begin{theorem}
Let $f: \mathbf{H} \to \mathbb{R}\cup {\{+\infty\}}$ be as above. For all $r>0$, the regularization of $\partial f$ is the gradient $\nabla{f_r^{\alpha}}$ associated with the envelope $f_r^{\alpha}$.
\end{theorem}
If we take $f=\Psi_{\mathbb{R}^-}$, for $r>0$ and $\alpha\geq 2$, we have
\begin{equation*}
(\Psi_{\mathbb{R}^-})_r^{\alpha}(z)=\inf_{y\in \mathbf{H}}\displaystyle{\Big(\Psi_{\mathbb{R}^-}(y )+\frac{1}{r}\|y-z\|^\alpha\Big)}
=\inf_{y\in \mathbb{R}^-}\Big(\displaystyle{\frac{1}{r}\|y-z\|^\alpha\Big)} \\
\eqqcolon\displaystyle{\frac{1}{r} {\rm dist}_{\mathbb{R}^-}^\alpha}(z).
\end{equation*}
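The closed-form envelope above can be checked numerically: a crude grid minimization of $\Psi_{\mathbb{R}^-}(y)+\frac{1}{r}\|y-z\|^\alpha$ over $y\le 0$ should agree with $\frac{1}{r}\,{\rm dist}_{\mathbb{R}^-}^\alpha(z)=\frac{1}{r}[z]_+^\alpha$. A minimal sketch (the grid bounds and resolution are arbitrary choices):

```python
import numpy as np

def my_envelope_indicator_Rminus(z, r=0.1, alpha=3.0):
    """Numerically evaluate (Psi_{R^-})_r^alpha(z) = inf_{y<=0} (1/r)|y-z|^alpha
    by sampling y on a fine grid (a crude minimization, for illustration only)."""
    y = np.linspace(min(z, 0.0) - 5.0, 0.0, 200001)
    return np.min((1.0 / r) * np.abs(y - z) ** alpha)

def closed_form(z, r=0.1, alpha=3.0):
    # dist_{R^-}(z) = [z]_+ = max(z, 0), so the envelope is (1/r)[z]_+^alpha
    return (1.0 / r) * max(z, 0.0) ** alpha
```

For $z\le 0$ both expressions vanish; for $z>0$ the minimizer is $y=0$ and the envelope equals $z^\alpha/r$.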
\begin{proposition}
For $\alpha\geq2$, we have
\begin{equation}
\nabla(\Psi_{\mathbb{R}^-})_r^{\alpha}(z)=\nabla \displaystyle{\Big(\frac{1}{r} {\rm dist}_{\mathbb{R} ^-}^\alpha}(z)\Big) \\
=\frac{1}{r}{\rm proj}_{\mathbb{R}^+}(z)\|{\rm proj}_{\mathbb{R}^+}(z)\|^{\alpha-2}.
\end{equation}
\end{proposition}
The contact with a deformable foundation is modeled by the normal compliance condition. It attributes a reactive normal pressure depending on the interpenetration of the foundation; this means that the normal stress $\lambda_\nu$ is a function of the normal displacement $\delta_\nu$. A general expression for the normal compliance condition is then given by
\begin{equation}
\lambda_\nu=p(\delta_\nu) \quad\mbox{on} \quad \Gamma_3\times(0,T),
\end{equation}
where $p(\cdot)$ vanishes for negative arguments.
The normal compliance condition was first introduced in \cite{MartOden,OM}. This standard condition can be considered as a Moreau--Yosida regularization of the unilateral contact condition of Signorini with $\alpha = 2$ and $z=\delta_\nu$: \\
{\it Standard Normal Compliance (SNC) with friction}
\begin{equation}\label{compl}
\lambda_\nu=c_\nu\frac{\alpha}{2}([\delta_\nu]_+)^{\alpha-1}\quad \mbox{on} \quad \Gamma_3\times(0, T),
\end{equation}
where $[x]_+$ is the positive part of $x\in \mathbb R$, and $c_\nu = \frac{1}{r}$ can be interpreted as the stiffness coefficient of the foundation. This normal compliance law (\ref{compl}) has two particularities: first, it reduces the interpenetration; second, it enjoys natural quasi-energy-conservation properties (see the energy balance in section \ref{prop_ener}), which is critical from a physical point of view. Consistent with this regularization process, it is assumed that the friction law can be expressed as
\begin{eqnarray}\label{compl1}
\left\{\begin{array}{ll}
\textrm{if}\quad ||\boldsymbol\lambda_\tau||<\mu |\lambda_\nu|\quad \boldsymbol\lambda_\tau=c_\tau\dot\mbox{\boldmath{$\delta$}}_\tau, \ \rm stick &\\
\textrm{if}\quad ||\boldsymbol\lambda_\tau||=\mu |\lambda_\nu|\quad \boldsymbol\lambda_\tau=\mu \lambda_\nu\displaystyle{\frac{\dot\mbox{\boldmath{$\delta$}}_\tau} {\|\dot\mbox{\boldmath{$\delta$}}_\tau\|}}, \ \rm slip
\end{array}\right.
\mbox{on}&\ \Gamma_3\times(0,T),
\end{eqnarray}
where $c_\tau > 0$ is the tangential compliance parameter.
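A one-line evaluation of the SNC pressure (\ref{compl}) may help fix ideas; the function name and default parameter values are illustrative assumptions only:

```python
def normal_compliance(delta_nu, c_nu=1.0e3, alpha=2.0):
    """Standard Normal Compliance (SNC): lambda_nu = c_nu*(alpha/2)*[delta_nu]_+^(alpha-1).
    For alpha = 2 this reduces to the classical linear penalty lambda_nu = c_nu*[delta_nu]_+."""
    pen = max(delta_nu, 0.0)            # positive part: penetration depth
    return c_nu * (alpha / 2.0) * pen ** (alpha - 1.0)
```

Negative arguments (no penetration) yield zero pressure, and the pressure grows monotonically with the penetration, as the stiffness interpretation of $c_\nu$ suggests.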
\subsection{Strong formulation of the mechanical problem}\label{strong}
With the preceding notation, the strong formulation of the problem is the following one.\\
\medskip\noindent
{\bf Problem} ${\cal P}$. {\it Find the displacement field
$\mbox{\boldmath{$u$}}:\Omega\times [0,T]\to\mathbb{R}^d$ and the stress field
$\boldsymbol{\Pi}:\Omega\times [0,T]\to\mathbb{M}^d$ such that}
\begin{eqnarray}
\label{1} \boldsymbol{\Pi}=\partial_{\bf F} W({\bf F})\quad&{\rm in}\
&\Omega\times(0,T),\\[3mm]
\label{2} {\rm Div}\,\boldsymbol{\Pi}+\mbox{\boldmath{$f$}}_0= \rho\ddot{\mbox{\boldmath{$u$}}}\quad&{\rm
in }\ &\Omega\times(0,T),\\[3mm]
\label{3} \mbox{\boldmath{$u$}}=\mbox{\boldmath{$0$}}\quad &{\rm on}\ &\Gamma_1\times(0,T),\\[3mm]
\label{4} \boldsymbol{\Pi}\mbox{\boldmath{$\nu$}}=\mbox{\boldmath{$f$}}_2\quad&{\rm on}\
&\Gamma_2\times(0,T),\\[3mm]
\label{5} \lambda_\nu=c_\nu\frac{\alpha}{2}[\delta_\nu]_+^{\alpha-1} \quad \quad&{\rm on}\
&\Gamma_3\times(0,T),\\[3mm]\label{6} \left\{\begin{array}{ll} ||\boldsymbol\lambda_\tau||<\mu|\lambda_\nu| \quad \boldsymbol\lambda_\tau=c_\tau\dot\mbox{\boldmath{$\delta$}}_\tau & \\
||\boldsymbol\lambda_\tau||=\mu|\lambda_\nu|\quad \boldsymbol\lambda_\tau=\mu \lambda_\nu\displaystyle{\frac{\dot\mbox{\boldmath{$\delta$}}_\tau}{\|\dot\mbox{\boldmath{$\delta$}}_ \tau\|}}\end{array}\right. \quad
&{\rm on}&\ \Gamma_3\times(0,T), \\[3mm]
\label{7}\mbox{\boldmath{$u$}}(0)=\mbox{\boldmath{$u$}}_0,\ \dot{\mbox{\boldmath{$u$}}}(0)=\mbox{\boldmath{$u$}}_1 \quad &{\rm in}&\ \Omega.
\end{eqnarray}
with $\alpha\ge 2$, $\mu$ the coefficient of friction depending on the sliding rate and $c_\nu$, $c_\tau$ the compliance parameters.\\
Equation (\ref{1}) represents the hyperelastic constitutive law of the material, and (\ref{2}) the equation of motion, in which the material density $\rho> 0$ is assumed constant for simplicity. Conditions (\ref{3}) and (\ref{4}) represent the displacement and traction boundary conditions, respectively. Conditions (\ref{5}) and (\ref{6}) represent the normal compliance contact and friction conditions described in the preceding section. Finally, (\ref{7}) gives the initial conditions, in which $\mbox{\boldmath{$u$}}_0$ and $\mbox{\boldmath{$u$}}_1$ are the initial displacement and velocity, respectively.
\subsection{Variational formulation of the problem}\label{variat}
In order to derive the variational formulation of {\bf Problem} ${\cal P}$, additional notation and some preliminary elements are necessary. The classical notation for the Sobolev and Lebesgue spaces associated with $\Omega$ and $\Gamma$ is used.
We consider a closed subspace of $H^{1}$ as follows:
\begin{center}
$ V=\{\mbox{\boldmath{$v$}}\in H^{1}(\Omega;\mathbb{R}^d): \mbox{\boldmath{$v$}}=\mathbf 0\,\,\,\text{on} \,\,\,\Gamma_1\}\quad $ with $\quad H=L^{2}(\Omega;\mathbb{R}^d)$;
\end{center}
they are Hilbert spaces endowed with the scalar products $(\mbox{\boldmath{$u$}},\mbox{\boldmath{$v$}})_V$ and $(\boldsymbol{\Pi},\mbox{\boldmath{$\tau$}})_H$ and their associated norms $\|{\cdot}\|_ {V}$ and $\|{\cdot}\|_{H}$ respectively.\\
Note that the Lagrange multipliers $\lambda_{\nu}$ and $\boldsymbol\lambda_{\tau}$ are taken equal to $-\Pi_{\nu}$ and $-\mbox{\boldmath{$\Pi$}}_{\tau}$, respectively.\ \
In order to introduce a variational formulation of the mechanical problem, we consider the following spaces:
$X_\nu=\left\{\,v_\nu|_{\Gamma_3}: \ \mbox{\boldmath{$v$}} \in V\, \right\}$ and
$X_\tau=\left\{\,\mbox{\boldmath{$v$}}_\tau|_{\Gamma_3}: \ \mbox{\boldmath{$v$}} \in V\, \right\}$ the trace spaces endowed with their usual norms. We denote by $X_{\nu}^{'}$ and $X_{\tau}^{'}$ the duals of the spaces $X_{\nu}$ and $X_{\tau}$, respectively. Moreover, we denote by
$\langle\cdot,\cdot\rangle_{X_{\nu}^{'},X_{\nu}}$ and $\langle\cdot,\cdot\rangle_{X_{\tau}^{'}, X_{\tau}}$ the corresponding duality products.
We move on to the variational formulation (or weak formulation) for {\bf Problem} ${\cal P}$.
Multiplying $(\ref{2})$ by any virtual velocity $\mbox{\boldmath{$v$}}$ and applying Green's formula, we get
\begin{equation}\label{G_step2}
- \int_{\Omega} {\bf \Pi}(t):\nabla{\mbox{\boldmath{$v$}}}\, dx + \int_{\partial \Omega} {\bf \Pi}(t){\mbox{\boldmath{$n$}}}\cdot{\mbox{\boldmath{$v$}}}\ da + \int_{\Omega} {\boldsymbol f}_0(t)\cdot{\boldsymbol v} \,dx = \int_{ \Omega} \rho \ddot{\boldsymbol{u}}(t)\cdot{\mbox{\boldmath{$v$}}}\, dx.
\end{equation}
To establish the variational formulation of {\bf Problem} ${\cal P}$ (\ref{1})--(\ref{7}), we need some additional notation.
Thus, we consider the function $\mbox{\boldmath{$f$}}:(0,T)\rightarrow V^*$ where the exterior volume and surface force densities are assumed to be such that
\begin{eqnarray}
\label{f}
&& \mbox{\boldmath{$f$}}_0 \in L^2(0,T;L^{2}(\Omega)), \qquad \mbox{\boldmath{$f$}}_2\in L^2(0,T;L^{2}(\Gamma_2)),
\end{eqnarray}
so that
\begin{eqnarray}
&&\label{ff}(\mbox{\boldmath{$f$}}(t),\mbox{\boldmath{$v$}})_V=(\mbox{\boldmath{$f$}}_0(t),\mbox{\boldmath{$v$}})_H+(\mbox{\boldmath{$f$}}_2(t),\mbox{\boldmath{$v$}})_{L^2(\Gamma_2)},\qquad\forall\mbox{\boldmath{$v$}}\in V .
\end{eqnarray}
Using the duality between $V^*$ and $V$, Green's formula and the contact conditions with friction on the boundary $\Gamma_3$, the equation (\ref{G_step2}) becomes:
\begin{eqnarray}
&& \langle{\rho\ddot{\mbox{\boldmath{$u$}}}(t),\mbox{\boldmath{$v$}}}\rangle_{{V^*}\times{V}} + \label{x1}\langle{\boldsymbol{\Pi}(t),\nabla\mbox{\boldmath{$v$}}}\rangle_{{V^*}\times{V}}=(\mbox{\boldmath{$f$}}(t),\mbox{\boldmath{$v$}})_V
+\int_{\Gamma_3}\,\Pi_\nu(t)
v_\nu\,da+\int_{\Gamma_3}\,\boldsymbol{\Pi}_\tau(t)\cdot\mbox{\boldmath{$v$}}_\tau\,da.\nonumber
\end{eqnarray}
Finally, we obtain the variational formulation of the frictional contact problem ${\cal P}$ using the Lagrange multipliers $\lambda_{\nu}$ and $\boldsymbol\lambda_{\tau}$, related to the normal contact stress $\Pi_{\nu}$ and the tangential contact stress $\mbox{\boldmath{$\Pi$}}_{\tau}$, respectively (here and in the following, we drop explicit mention of time dependence for ease of presentation).\\
{\textbf{Problem} ${\cal P}_V$}:
{\it Find the displacement field $\mbox{\boldmath{$u$}} \in L^\infty(0,T;V)$, with $\dot\mbox{\boldmath{$u$}} \in L^2(0,T;V)$ and \mbox{$\ddot \mbox{\boldmath{$u$}} \in L^2(0,T;V^*)$}, the normal stress field $\lambda_{\nu}: (0,T)\rightarrow X_{\nu}^{'}$ and the tangential stress field $\boldsymbol\lambda_{\tau}: (0,T)\rightarrow X_{\tau}^{'}$ such that, $\forall\,\mbox{\boldmath{$v$}}\in V$},
\begin{eqnarray}
\label{10}
\langle\rho\ddot{\mbox{\boldmath{$u$}}},\mbox{\boldmath{$v$}}\rangle_{{V^*}\times{V}}+\langle{\boldsymbol{\Pi},\nabla\mbox{\boldmath{$v$}}}\rangle_{{V^* }\times{V}} + \langle \lambda_{\nu},v_{\nu}\rangle_{X_{\nu}^{'},X_{\nu}}+\langle\boldsymbol\lambda_{\tau}, \mbox{\boldmath{$v$}}_{\tau}\rangle_{X_{\tau}^{'},X_{\tau}}=
(\mbox{\boldmath{$f$}},\mbox{\boldmath{$v$}})_V, \quad \text{in }\ \ (0,T),\\[3mm]
\label{11}\lambda_\nu=c_\nu\frac{\alpha}{2}[\delta_\nu]_+^{\alpha-1} \quad {\rm on}\quad\Gamma_3\times (0,T),\\[3mm]
\label{12}
\left\{\begin{array}{ll} ||\boldsymbol\lambda_\tau||<\mu|\lambda_\nu|\quad \boldsymbol\lambda_\tau=c_\tau\dot\mbox{\boldmath{$\delta$}}_\tau &\\
||\boldsymbol\lambda_\tau||=\mu|\lambda_\nu|\quad \boldsymbol\lambda_\tau=\mu \lambda_\nu\displaystyle{\frac{\dot\mbox{\boldmath{$\delta$}}_\tau}{\|\dot\mbox{\boldmath{$\delta$}}_ \tau\|}}\end{array}\right.\quad {\rm on}\quad\Gamma_3\times(0,T),\\[3mm]\nonumber
\end{eqnarray}
{\it and, moreover,}
\begin{equation}\label{BDSxx}
\mbox{\boldmath{$u$}}(0)=\mbox{\boldmath{$u$}}_0,\qquad\dot\mbox{\boldmath{$u$}}(0)=\mbox{\boldmath{$u$}}_1.
\end{equation}
\subsection{Energy conservation properties in the continuous case}\label{prop_ener}
From a physical point of view, the solution of a hyperelastic problem should satisfy some conservation properties, such as conservation of energy, of linear momentum, and of angular momentum.
We are now particularly interested in the energy conservation properties.
In the absence of contact and friction, the conservation of energy can be written as follows:
\begin{eqnarray}
\int_{0}^{t} \dot{E}(s)ds = E(t)
-E(0)
= \int_{0}^{t}\int_{\Omega} {\bf f}_0\cdot \dot{\mbox{\boldmath{$u$}}}\,\,dx \, ds \ + \ \int_{0}^{t}
\int_{\Gamma_2} {\bf f}_2\cdot \dot{\bf u}\,\, da\, ds,
\end{eqnarray}
where $(s,t) \in (0,T)\times(0,T)$ and ${E}(t)$ denotes the internal energy of the system at time $t$, defined as
\begin{equation}
{E}(t) = \frac{1}{2} \int_{\Omega} \rho |\dot{\bf u}|^{2} dx +
\int_{\Omega} \widetilde{W}({\bf C})dx \label{ene_elas}
\end{equation}
where ${\bf C} = {\bf F}^T {\bf F}$ is the right Cauchy--Green tensor. \\
Moreover, the conservation of energy for hyperelastic phenomena with frictional contact is written in the following way:
\begin{equation}
\begin{aligned}
E(t)-E(0)
= \int_{0}^{t}\int_{\Omega} {\bf f}_0\cdot {\dot\mbox{\boldmath{$u$}}}\,\, dx ds \ + \ \int_{0}^{t}
\int_{\Gamma_2} {\bf f}_2\cdot {\dot\mbox{\boldmath{$u$}}}\,\, da ds \ \\ -\int_{0}^{t}\int_{\Gamma_3}\,\lambda_\nu
\dot\delta_\nu\,da ds-\int_{0}^{t}\int_{\Gamma_3}\,\boldsymbol\lambda_\tau \cdot \dot\mbox{\boldmath{$\delta$}}_\tau\,da ds
\end{aligned}
\label{bil_ini_1}
\end{equation}
Using the variational formulation of the problem by taking ${\mbox{\boldmath{$v$}}} = \dot{\mbox{\boldmath{$u$}}}(t,{\mbox{\boldmath{$x$}}})$, we obtain the frictional contact reaction work given by
\begin{equation}
\mathcal{W}_{c+f} =
\int_{\Gamma_3} ({\lambda_\nu}
{\dot\delta_\nu}+{\boldsymbol\lambda_\tau \cdot \dot\mbox{\boldmath{$\delta$}}_\tau}
)\ da.
\end{equation}
where $ \dot\delta_\nu$ and $\dot\mbox{\boldmath{$\delta$}}_\tau$ represent the time derivatives of $\delta_\nu$ and ${\mbox{\boldmath{$\delta$}}_\tau}$ respectively. \\
When the volume and surface forces are neglected, and in the presence of the normal compliance contact law without friction, the energy balance reduces to
\begin{equation}\label{cons0}
\displaystyle{E(t)-E(0)
=-\int_{0}^{t}\int_{\Gamma_3} {\lambda_\nu}
{\dot\delta_\nu}}\ da\ ds.
\end{equation}
Using the general normal compliance condition (\ref{11}), we get:
\begin{equation}\label{cons}
\displaystyle{E(t)-E(0)=-
\int_{0}^{t}\int_{\Gamma_3}c_\nu\frac{\alpha}{2} \big[\delta_\nu\big]_{+}^{\alpha-1}{\dot\delta_\nu}\, da \ ds}
=-\frac{c_\nu}{2}\int_{\Gamma_3}\big(\big[\delta_\nu\big]_{+}^{\alpha}(t)-\big[\delta_\nu\big]_{+}^{\alpha}(0)\big)\, da
\end{equation}
The difference $\big[\delta_\nu\big]_{+}^{\alpha}(t)-\big[\delta_\nu\big]_{+}^{\alpha}(0)$ in (\ref{cons}) is very small since the penetrations $\big[\delta_\nu\big]_{+}(t)$ at any time $t$ are also small as long as the compliance parameter $c_\nu$ is sufficiently large. Therefore, the energy of the system is ``almost" conserved: $E(t)\approx E(0)$.\\
Considering now the friction, we obtain
\begin{equation}\label{diss}
{\boldsymbol\lambda_\tau}\cdot{{\dot
\mbox{\boldmath{$\delta$}}_\tau}} \geq 0 \Rightarrow
- \mathcal{W}_{c+f} \leq 0 \quad \Rightarrow \ E(0) \geq E(t).
\end{equation}
Concerning the expression (\ref{diss}), we note a dissipation of energy between the instants $0$ and $t$ because of the friction, which is physically acceptable due to the dissipative nature of this phenomenon.\\
When we take these energy expressions with the ``persistent'' conditions (\ref{pers}), we get the following results:
\begin{eqnarray}\label{pe}
&{\rm{Case\ without\ friction:}}\quad {\lambda_\nu}{{\dot
\delta_\nu}}=0,\ \boldsymbol\lambda_\tau\cdot\dot{\mbox{\boldmath{$\delta$}}_\tau} = 0 \Rightarrow
\mathcal{W}_{c+f} = 0 \Rightarrow \ E(0) = E(t), \\
& {\rm{Case\ with\ friction:}}\ \label{di} \ {\lambda_\nu}{{\dot
\delta_\nu}}=0,\ {\boldsymbol\lambda_\tau}\cdot{{\dot\mbox{\boldmath{$\delta$}}_\tau}}\geq 0 \Rightarrow
\mathcal{W}_{c+f}\geq 0 \Rightarrow \ E(0) \geq E(t).
\end{eqnarray}
Here, (\ref{pe}) expresses the conservation of total energy when the persistence condition is applied, whereas by (\ref{di}), because of friction, we observe a dissipation of energy between the instants $0$ and~$t$.
\section{ Discrete formulation of the frictional contact problems}\label{s3}
\subsection{Variational approximation}\label{variat_approx}
In this section, we introduce a discrete approximation in time and space of the problem ${{\mathcal P}}_V$, based on arguments similar to those used in \cite{ayyad2009formulation, ayyad2009frictionless, barboteu2015hyperelastic, barboteu2015dynamic, barboteu2018analysis, khenous2006discretization, khenous2006hybrid}. First of all, we recall some preliminary elements concerning the time discretization step.\\
Let $ N $ be an integer, and $\Delta t=\frac{T}{N}$ a time step.
For a continuous function $f$ with respect to time, we will use the notation $f_j=f(t_j)$ for $0 \leq j \leq N$. In what follows, we consider a collection of discrete times $\{t_n\}_{n=0}^{N}$ which defines a uniform partition of the time interval $[0,T]= \bigcup_{ \scriptstyle n=1}^{N}[t_{n-1},t_{n}]$, with $t_0=0$, $t_{n}=t_{n-1} + \Delta t$, and $t_N=T$. Finally, for a sequence $\{u_n\}_{n=1}^N$, we denote the divided differences of the midpoints by:
\begin{equation}\label{midp}
\dot \mbox{\boldmath{$u$}}_{n-\frac{1}{2}}=\frac{\mbox{\boldmath{$u$}}_{n}-\mbox{\boldmath{$u$}}_{n-1}}{\Delta t} = \frac{1}{2}(\dot \mbox{\boldmath{$u$}}_{n} + \dot \mbox{\boldmath{$u$}}_{n-1}),
\end{equation}
and, equivalently, we have $\dot \mbox{\boldmath{$u$}}_{n}=-\dot
\mbox{\boldmath{$u$}}_{n-1}+\frac{2}{\Delta t}(\mbox{\boldmath{$u$}}_{n}-\mbox{\boldmath{$u$}}_{n-1})$. Then we use the notation $\Box_{n-\frac{1}{2}} = \frac{1}{2}(\Box_n +
\Box_{n-1})$, where $\Box_n$ represents the approximation of
$\Box(t_n)$. Let us note that the time integration scheme employed is based on an implicit scheme of order 2 which one finds in (\ref{midp}).\\
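The midpoint relations (\ref{midp}) amount to a velocity update: given $\mbox{\boldmath{$u$}}_{n-1}$, $\dot{\mbox{\boldmath{$u$}}}_{n-1}$ and the new position $\mbox{\boldmath{$u$}}_n$, the identity $(u_n-u_{n-1})/\Delta t=(\dot u_n+\dot u_{n-1})/2$ then holds by construction. A scalar sketch:

```python
# Midpoint divided differences: the velocity update
#   udot_n = -udot_{n-1} + (2/dt)*(u_n - u_{n-1})
# makes the divided difference (u_n - u_{n-1})/dt equal to the
# velocity average (udot_n + udot_{n-1})/2, as in the text.
def velocity_update(u_prev, udot_prev, u_new, dt):
    return -udot_prev + (2.0 / dt) * (u_new - u_prev)
```

This is the second-order implicit relation on which the time integration scheme of the section is based.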
We now present some elements concerning the spatial discretization step. Let $\Omega$ be a polyhedral domain. Consider a regular partition $\{{\cal T}^h\}$ of triangular finite elements of $\overline {\Omega}$ which are compatible with the boundary decomposition
$\Gamma=\overline{\Gamma_1}\cup\overline{\Gamma_2}\cup\overline{\Gamma_3}$, i.e., if one side of an element $\mathrm T\in{\cal T}^h$ has more than one point on $ \Gamma $, then the side lies entirely on $\overline{\Gamma_1}$, $\overline{\Gamma_2}$ or
$\overline{\Gamma_3}$. The space $V$ is approximated by the finite dimensional space $V^h \subset V$ of continuous and piecewise affine functions, that is:
\begin{eqnarray}
&& \hspace*{-0.7cm}V^h=\{\,\mbox{\boldmath{$v$}}^h\in [C(\overline{\Omega})]^d \;:
\; \mbox{\boldmath{$v$}}^h|_{\mathrm T}\in [P_1(\mathrm T)]^d\,
\,\,\, \forall\,\mathrm T\in {\cal T}^h, \nonumber\\
&& \qquad \qquad \quad \mbox{\boldmath{$v$}}^h=\mbox{\boldmath{$0$}} \,\,\, \hbox{at nodes on}\,\,\, \Gamma_1\},\nonumber
\end{eqnarray}
where $P_1(\mathrm T)$ represents the space of polynomials of degree less than or equal to 1 in $\mathrm T$ and $h>0$ is the spatial discretization parameter. For the discretization of the Lagrange multiplier spaces $X_{\nu}^{'}$ and $X_{\tau}^{'}$, we use piecewise constant functions as done in \cite{Ac,pietrzak1999large,khenous2006discretization,barboteu2015hyperelastic, BDMP:21, convergence-contact}.
The discrete Lagrange multiplier spaces, denoted by $X_{\nu}^{'^h}$ and $X_{\tau}^{'^h}$, are related to the discretization of the
normal stress $\lambda_\nu$ and the discretization of the friction stress $\boldsymbol\lambda_\tau$, respectively.\\
With the previous notation and the midpoint scheme (\ref{midp}), we have the following discrete approximation of problem ${{\mathcal P}}_V$ at time $t_{n-\frac {1}{2}}$: \\
\medskip \noindent{\bf Problem} ${\mathcal P}_V^{h}$. {\it
Find a discrete displacement field
$\mbox{\boldmath{$u$}}^{h}=\{\mbox{\boldmath{$u$}}_n^{h}\}_{n=0}^N\subset V^h$, a discrete normal stress field $\lambda_{\nu}^{h} =\{{\lambda_{\nu}}_{n}^{h}\}_{n=0}^N
\subset X_{\nu}^{' h}$, and a discrete tangential stress field
$\boldsymbol\lambda_{\tau}^{h} =\{{\boldsymbol\lambda_{\tau}}_{n}^{h}\}_{n=0}^N \subset
X_{\tau}^{' h}$ such that, for all $n=1,\ldots,N$,}
\begin{eqnarray}
&& \rho \ddot{\mbox{\boldmath{$u$}}}^{h}_{n-\frac{1}{2}} + \mbox{\boldmath{$B$}}(\mbox{\boldmath{$u$}}^{h}_{n-\frac{1}{2}}) + {\lambda_{\nu}}^{h}_{n-\frac{1}{2}}\mbox{\boldmath{$\nu$}}_{n-\frac{1}{2}}+ {\boldsymbol\lambda_{\tau}}^{h}_{n-\frac{1}{2}}- \mbox{\boldmath{$f$}} = \mbox{\boldmath{$0$}}, \\[2mm]
&&\label{LMch} {\lambda_{\nu}}^{h}_{n-\frac{1}{2}}=
c_\nu\frac{\alpha}{2}[\delta^{h}_{\nu_{n-\frac{1}{2}}}]_+^{\alpha-1},\\ [2mm]
&&\label{LMfh}
\left\{\begin{array}{ll}
\|{\boldsymbol\lambda_{\tau}}^{h}_{n-\frac{1}{2}}\|<\mu\,|{\lambda_{\nu}}^{h}_{n-\frac{1}{2}}|\quad {\boldsymbol\lambda_{\tau}}^{h}_{n-\frac{1}{2}}=c_{\tau}\, \dot{\mbox{\boldmath{$\delta$}}}_{\tau_{n-\frac{1}{2}}}^{h} &\\
\|{\boldsymbol\lambda_{\tau}}^{h}_{n-\frac{1}{2}}\|=\mu\,|{\lambda_{\nu}}^{h}_{n-\frac{1}{2}}|\quad {\boldsymbol\lambda_{\tau}}^{h}_{n-\frac{1}{2}}=\mu\, {\lambda_{\nu}}^{h}_{n-\frac{1}{2}}\displaystyle{\frac{\dot{\mbox{\boldmath{$\delta$}}}_{\tau_{n-\frac{1}{2}}}^{h}}{\|\dot{\mbox{\boldmath{$\delta$}}}_{\tau_{n-\frac{1}{2}}}^{h}\|}} \end{array}\right.\quad\\[2mm]
&&\label{cih12} \mbox{\boldmath{$u$}}^{h}_{0}=\bar\mbox{\boldmath{$u$}}^h_{0}, \quad\quad
\dot\mbox{\boldmath{$u$}}^{h}_{0}=\bar\mbox{\boldmath{$u$}}^h_{1}.\nonumber
\end{eqnarray}
where $\mbox{\boldmath{$B$}}(\cdot)$ is the internal stress operator representing the first Piola--Kirchhoff tensor ${\mbox{\boldmath{$\Pi$}}}$. The initial values $\bar\mbox{\boldmath{$u$}}^h_0 \in V^h$ and $\bar\mbox{\boldmath{$u$}}^h_1 \in V^h$ are discrete values resulting from the finite element approximation of $\mbox{\boldmath{$u$}}_0$ and $\mbox{\boldmath{$u$}}_1$, respectively.
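To give some intuition for Problem ${\mathcal P}_V^{h}$, the following toy computation integrates a single point mass falling onto a compliant foundation with the midpoint rule (\ref{midp}) and the SNC force (\ref{LMch}). It is a scalar caricature, not the finite element scheme, and all numerical values (mass, stiffness, gravity, time step) are illustrative assumptions.

```python
def simulate(rho=1.0, c_nu=1e4, alpha=2.0, dt=1e-3, T=1.0, u0=0.5, v0=0.0):
    """Drop a point mass from height u0 onto a compliant foundation at u=0.
    Penetration is [-u]_+; the SNC force c_nu*(alpha/2)*[-u]_+^(alpha-1)
    pushes the mass back up. Returns the deepest position reached, so the
    residual penetration stays small for large c_nu."""
    u, v = u0, v0
    g = 9.81                                   # gravity plays the role of f_0
    min_u = u
    for _ in range(int(round(T / dt))):
        u_new = u
        for _ in range(50):                    # fixed-point solve of the
            u_mid = 0.5 * (u + u_new)          # implicit midpoint step
            lam = c_nu * (alpha / 2.0) * max(-u_mid, 0.0) ** (alpha - 1.0)
            v_new = v + dt * (lam - g) / rho
            u_new = u + dt * 0.5 * (v + v_new)
        u, v = u_new, v_new
        min_u = min(min_u, u)
    return min_u
```

With a stiffness $c_\nu=10^4$ the mass bounces with a maximum penetration of a few centimetres, consistent with the near-impenetrability that the normal compliance regularization is meant to approximate.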
\subsection{Usual discrete framework of energy conservation}\label{stan_cd_dis}
In the rest of the section, to simplify notation and readability, we do not indicate the dependence of the different variables on the discretization parameter $h$, i.e., we write $\mbox{\boldmath{$u$}}$ instead of $\mbox{\boldmath{$u$}}^{h}$. We begin by recalling some preliminaries regarding discrete energy conservation in the hyperelastodynamic contactless framework.
In order to solve a hyperelastic dynamic problem, we have to use adapted time integration schemes. When considering nonlinear dynamical problems, standard implicit schemes ($\theta$-method, Newmark schemes, midpoint methods or HHT methods) lose their unconditional stability, as explained in \cite{hht, gonz, laur2002}.
Therefore, it is necessary to use implicit energy-conserving schemes like those used in \cite{arro,gonz,hauret2006energy,laur2002,simo,ayyad2009formulation,barboteu2015hyperelastic,Acary15} which are appropriate due to their long-term time integration accuracy and stability. In all these methods, the corresponding discrete mechanical conservation properties are satisfied.
By taking into account the implicit second-order midpoint time integration scheme (\ref{midp}), we obtain the weak form of a nonlinear hyperelastodynamic problem integrated between times $t_{n-1}$ and $t_{n}$:
\begin{eqnarray}\label{f_dyna_int}
\begin{cases}
\mbox{Find} \ {\mbox{\boldmath{$u$}}}_{n} \in V \quad \mbox{such that}\\
\displaystyle{\frac{1}{\Delta t} \int_{\Omega} \rho( \dot{\mbox{\boldmath{$u$}}}_{n}-\dot{\mbox{\boldmath{$u$}}}_{n-1})\cdot{\mbox{\boldmath{$v$}} }\ dx
+ \int_{\Omega} {\bf \Pi}^{\text{algo}}_{n-\frac{1}{2}}:\nabla{\mbox{\boldmath{$v$}}}\ dx -
\int_{\Gamma_2} { {\mathbf f}_2}_{n-\frac{1}{2}}\cdot{\mbox{\boldmath{$v$}}}\ da -
\int_{\Omega}
{{\mathbf f}_0}_{n-\frac{1}{2}}\cdot{\mbox{\boldmath{$v$}}}\ dx=0.}
\end{cases}
\end{eqnarray}
Here, the discrete tensor ${\bf \Pi }^{\text{algo}}$ is introduced in order to satisfy exact discrete energy properties. This tensor, defined by Gonzalez in \cite{gonz}, takes the form:
\begin{equation}\label{syst}
\begin{cases}
{\bf \Pi }^{\text{algo}}_{n-\frac{1}{2}}=\textbf{F}_{n-\frac{1}{2}}{\mbox{\boldmath{$\Sigma$}}}^{\text{algo}} \\
\displaystyle
{\mbox{\boldmath{$\Sigma$}}}^{\text{algo}}=2\frac{\partial \widetilde W}{\partial \mbox{\boldmath{$C$}}}(\mbox{\boldmath{$C$}}_{n-\frac{1}{2}}) + 2[\widetilde{W}( {\mbox{\boldmath{$C$}}}_{n})- \widetilde{W}({\mbox{\boldmath{$C$}}}_{n-1})-\frac{\partial \widetilde W }{\partial \mbox{\boldmath{$C$}}}(\mbox{\boldmath{$C$}}_{n- \frac{1}{2}}):\Delta {\mbox{\boldmath{$C$}}}_{n-1}]\frac{\Delta {\mbox{\boldmath{$C$}}}_{n-1}}{\Delta {\mbox{\boldmath{$C$}}}_{n -1}:\Delta {\mbox{\boldmath{$C$}}}_{n-1}}
\end{cases}
\end{equation}
with $\Delta {\mbox{\boldmath{$C$}}}_{n-1}={\mbox{\boldmath{$C$}}}_{n}- {\mbox{\boldmath{$C$}}}_{n-1}$ and ${\mbox{\boldmath{$C$}}}_{n-1}={\mbox{\boldmath{$F$}}}^T_{n-1} {\mbox{\boldmath{$F$}}}_{n-1}$. Using the arguments of \cite{gonz} and the axiom of material frame indifference, which implies that $W(\mbox{\boldmath{$F$}})=\widetilde{W}(\mbox{\boldmath{$C$}})$, it follows that (\ref{syst}) satisfies the exact conservation of energy characterized by the following condition:
\begin{equation}\label{algo}
\displaystyle{\mbox{\boldmath{$\Pi$}}}^{\text{algo}}_{n-\frac{1}{2}}:({\mbox{\boldmath{$F$}}}_{n}- {\mbox{\boldmath{$F$}}}_{n-1})={\mbox{\boldmath{$\Pi$}}} ^{\text{algo}}_{n-\frac{1}{2}}:(\nabla{\mbox{\boldmath{$u$}}}_{n}-\nabla{\mbox{\boldmath{$u$}}}_{n-1} )=\widetilde{W}( {\mbox{\boldmath{$C$}}}_{n})- \widetilde{W}({\mbox{\boldmath{$C$}}}_{n-1}).
\end{equation}
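In the scalar case ($C=F^2$), the conservation identity (\ref{algo}) can be verified directly: since $\Delta C_{n-1}=2F_{n-\frac12}(F_n-F_{n-1})$, the algorithmic stress reproduces the increment of the stored energy exactly, whatever $\widetilde{W}$ is. A minimal sketch with an illustrative (assumed) energy:

```python
# Scalar (1D) analogue of Gonzalez's algorithmic stress: with C = F^2 and
# any stored energy W(C), the discrete work Pi_algo*(F_n - F_{n-1})
# reproduces W(C_n) - W(C_{n-1}) exactly.
def W(C):                       # illustrative quadratic energy, an assumption
    return 0.5 * (C - 1.0) ** 2

def dW(C):                      # derivative of W with respect to C
    return C - 1.0

def pi_algo(F_prev, F_new):
    C_prev, C_new = F_prev ** 2, F_new ** 2
    C_mid = 0.5 * (C_prev + C_new)
    dC = C_new - C_prev
    # Gonzalez correction term makes the discrete work exact
    sigma = 2.0 * dW(C_mid) + 2.0 * (W(C_new) - W(C_prev)
                                     - dW(C_mid) * dC) * dC / (dC * dC)
    F_mid = 0.5 * (F_prev + F_new)
    return F_mid * sigma
```

The check below confirms the telescoping property $\Pi^{\text{algo}}(F_n-F_{n-1})=\widetilde{W}(C_n)-\widetilde{W}(C_{n-1})$ to machine precision.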
For more details on the standard energy conservation framework, we refer the reader to \cite{arro,gonz,hauret2006energy,laur2002,simo}.
Much work has been devoted to extending these conservation properties to frictionless contact; more precisely, Laursen and Chawla \cite{Lach} and Armero and Petocz \cite{AP} showed the advantage of the persistency condition for conserving energy in the discrete setting. Nevertheless, in all these works, the numerical method shows that the interpenetration only disappears when the time step tends towards zero. In order to overcome this
drawback, Laursen and Love \cite{LL} have developed an efficient
method, by introducing a discrete jump in velocity; however, this
method requires the solution of an auxiliary system in order to
compute the velocity update results. Furthermore, Hauret and Le
Tallec \cite{hauret2006energy} have considered a specific penalized
enforcement of the contact conditions which allows to provide
energy conservation properties. Then, Khenous, Laborde and Renard
\cite{KLR} have introduced the Equivalent Mass Matrix method
(EMM), based on a procedure of redistribution of the mass matrix.
Interpretations and extensions of this method can be found in
\cite{H}. The resulting problem exhibits Lipschitz regularity in
time and achieves good energy evolution properties, due to the
fact that the persistency condition is automatically satisfied.
The EMM approach was studied and used in many
works; for instance, theoretical and computational aspects related
to this model can be found in \cite{HSWW,KLR}.
\subsection{Improved approach to ``almost'' conserve the energy}\label{imp_cd_dis}
In what follows, based on the papers \cite{hauret2006energy,barboteu2015hyperelastic}, we present an improved energy-conservation method for hyperelastodynamic contact problems, together with its extension to dissipative frictional phenomena. This method enforces the general normal compliance law during each time step with ``minimal'' contact penetrations and with properties that ``almost'' conserve the energy.\\
In order to take contact into account at time $t_{n -\frac{1}{2}}$, we choose to treat the frictional contact term implicitly; thus the weak formulation integrated between times $t_{n-1}$ and $t_{n}$ is
\begin{eqnarray}\label{f_dyna_int_cont}
\begin{cases}
\mbox{Find} \ {\mbox{\boldmath{$u$}}}_{n} \in V \ \mbox{such that}\\
\displaystyle{\frac{1}{\Delta t} \int_{\Omega} \rho( \dot{\mbox{\boldmath{$u$}}}_{n}-\dot{\mbox{\boldmath{$u$}}}_{n-1})\cdot{\mbox{\boldmath{$v$}} }\ dx
+ \int_{\Omega} {\bf \Pi}^{\text{algo}}_{n-\frac{1}{2}}:\nabla{\mbox{\boldmath{$v$}}}\ dx -
\int_{\Gamma_2} {\mathbf f_2}_{n-\frac{1}{2}}\cdot{\mbox{\boldmath{$v$}}}\ da -
\int_{\Omega}
{\mathbf f_0}_{n-\frac{1}{2}}\cdot{\mbox{\boldmath{$v$}}}\ dx}\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad +\displaystyle{\int_{\Gamma_3} [ \lambda_{\nu_{n-\frac{1}{2}}}
\dot{\delta}_{\nu_{n-\frac{1}{2}}}+\boldsymbol\lambda_{\tau_{n-\frac{1}{2}}}
\cdot\dot{\mbox{\boldmath{$\delta$}}}_{\tau_{n-\frac{1}{2}}}]\ da=0}
\end{cases}
\end{eqnarray}
In order to obtain energy conservation properties, we propose to replace the normal stress $\lambda_\nu{_{n-\frac{1}{2}}}$ with an improved discrete value (\ref{improvement}) which respects the energy balance of the continuous case (\ref{cons}).\\
\textbf{Discrete form of Improved Normal Compliance (INC) conditions}\\% $\lambda_{\nu_{n-\frac{1}{2}}}$}.\\
In order to ``almost'' conserve the energy and to respect the energy balance of the continuous case, we replace the normal contact term $\big[\delta_{\nu}^n\big]_{+}^{\alpha-1}$ by
\begin{eqnarray}\label{xx}
\widetilde{\delta}_{\nu}^n\coloneqq\displaystyle{\frac{\big[\big(\delta_{\nu}^n\big)_{+}\big]^{\alpha}- \big[\big(\delta_{\nu}^{n-1}\big)_{+}\big]^{\alpha}}{\alpha\big(\delta_{\nu}^n-\delta_{\nu}^{n-1}\big)}}.
\end{eqnarray}
We then obtain the normal stress value $\lambda_{\nu_{n-\frac{1}{2}}}$ of the improved normal compliance condition in the discrete case:
\begin{equation}\label{improvement}
\lambda_{\nu_{n-\frac{1}{2}}}=c_{\nu}\frac{\alpha}{2}\widetilde{\delta}_{\nu}^n.
\end{equation}
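As an illustration, the improved normal stress (\ref{improvement}) can be evaluated nodewise as in the following minimal Python sketch. The guard for the degenerate difference quotient $\delta_{\nu}^n\approx\delta_{\nu}^{n-1}$, where we fall back to the limit value $\big[(\delta_{\nu}^n)_{+}\big]^{\alpha-1}$, is an implementation choice and is not prescribed by the definition above.

```python
def improved_normal_stress(delta_n, delta_nm1, c_nu, alpha, tol=1e-12):
    """INC normal stress lambda_{nu,n-1/2} = c_nu*(alpha/2)*tilde_delta_nu^n."""
    dp_n = max(delta_n, 0.0)        # (delta_nu^n)_+
    dp_nm1 = max(delta_nm1, 0.0)    # (delta_nu^{n-1})_+
    if abs(delta_n - delta_nm1) < tol:
        # degenerate quotient: use its limit [(delta^n)_+]^(alpha-1)
        tilde = dp_n ** (alpha - 1.0)
    else:
        tilde = (dp_n**alpha - dp_nm1**alpha) / (alpha * (delta_n - delta_nm1))
    return c_nu * 0.5 * alpha * tilde
```

For $\alpha=2$ and two positive penetrations, the returned stress is $c_\nu(\delta_\nu^n+\delta_\nu^{n-1})/2$, the midpoint of the two standard normal compliance stresses.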
\textbf{Discrete Energy Evolution Analysis}\\
This part is devoted to establishing the energy conservation properties induced by the improved normal compliance described in the previous paragraph.
In what follows, we use the notation $E_n$ and $E_{n-1}$ for the energy $E$ of the hyperelastic frictional contact system evaluated at times $t_n$ and $t_{n-1}$, respectively.
For example, the discrete energy at time $t_n$ can be written as follows:
\begin{equation}\label{energy_dis}
E_{n} = \frac{1}{2} \int_{\Omega} \rho |\dot{\bf u}_{n}|^{2}\ dx + \int_{\Omega} \widetilde{W} ({\mbox{\boldmath{$C$}}_{n}})\ dx.
\end{equation}
The general evaluation of the discrete energy of the contact problem with friction between times $t_n$ and $t_{n-1}$ is based on the following proposition.
\begin{proposition}
\rm{The following discrete energy balance holds between times $t_n$ and $t_{n-1}$}:
\begin{eqnarray}
\displaystyle{E_n-E_{n-1}=\Delta t\langle \mbox{\boldmath{$f$}}_{n-\frac{1}{2}},\dot{\mbox{\boldmath{$u$}}}_{n-\frac{1}{2}}\rangle_{V^* \times V} -\Delta t\int_{\Gamma_3} \big[ \lambda_{ \nu_{n-\frac{1}{2}}}\dot{\delta}_{\nu_{n- \frac{1}{2}}}+
\boldsymbol\lambda_{ \tau_{n-\frac{1}{2}}}\cdot
\dot{\mbox{\boldmath{$\delta$}}}_{ \tau_{n-\frac{1}{2}}}\big]}\ da
\end{eqnarray}
\end{proposition}
\begin{proof}
Using the variational formulation (\ref{f_dyna_int_cont}) with
$$\displaystyle{ \mbox{\boldmath{$v$}}=\dot{\mbox{\boldmath{$u$}}}_{n-\frac{1}{2}}=\frac{\mbox{\boldmath{$u$}}_{n}-{\mbox{\boldmath{$u$}}}_{n-1}}{\Delta t}=\frac{\dot{\mbox{\boldmath{$u$}}}_{n}+\dot{\mbox{\boldmath{$u$}}}_{n-1}}{2}},$$ we get the following equality:
\begin{equation}
\frac{1}{2\Delta t} \int_{\Omega} \rho( \dot{\mbox{\boldmath{$u$}} }_{n}-\dot{\mbox{\boldmath{$u$}} }_{n-1})\cdot(\dot{\mbox{\boldmath{$u$}} }_{n}+\dot{\mbox{\boldmath{$u$}} }_{n-1})\ {\rm{dx}}
+ \frac{1}{\Delta t}\int_{\Omega} {\bf \Pi}^{\text{algo}}_{n-\frac{1}{2}}:\nabla({\mbox{\boldmath{$u$}}_{n}-\mbox{\boldmath{$u$}}_{n-1}})\ \rm {dx} \nonumber
\end{equation}
\begin{equation}
\quad\quad\quad=\langle \mbox{\boldmath{$f$}}_{n-\frac{1}{2}},\dot{\mbox{\boldmath{$u$}}}_{n-\frac{1}{2}}\rangle_{V^*\times V} -\int_{\Gamma_3} \big[ \lambda_{ \nu_{n-\frac{1}{2}}} \dot{\delta}_{ \nu_{n-\frac{1}{2}}} +\boldsymbol\lambda_{ \tau_{n-\frac{1}{2}}}\cdot\dot{\mbox{\boldmath{$\delta$}}}_{\tau_{n-\frac{1}{2}}}\big] \ da.
\end{equation}
Also, using the identity
$(\dot{\mbox{\boldmath{$u$}}}_{n}-\dot{\mbox{\boldmath{$u$}} }_{n-1})\cdot(\dot{\mbox{\boldmath{$u$}} }_{n}+\dot{\mbox{\boldmath{$u$}} }_{n-1 })=[\dot{\mbox{\boldmath{$u$}} }_{n}]^2-[\dot{\mbox{\boldmath{$u$}} }_{n-1}]^2$
and the conservation property of the Gonzalez scheme given in equality (\ref{algo}), we get that
\begin{eqnarray}
\frac{1}{2 \Delta t} \int_{\Omega} \rho\big( [\dot{\mbox{\boldmath{$u$}} }_{n}]^2-[\dot{\mbox{\boldmath{$u$}}}_{n-1}]^2\big)\ dx
+ \frac{1}{\Delta t}\int_{\Omega} \big(\widetilde{W}({\mbox{\boldmath{$C$}}}_{n})- \widetilde{W}({\mbox{\boldmath{$C$}}}_{n-1})\big) \ dx \nonumber \\
=\langle \mbox{\boldmath{$f$}}_{n-\frac{1}{2}},\dot{\mbox{\boldmath{$u$}}}_{n-\frac{1}{2}}\rangle_{V^* \times V} -\int_{\Gamma_3} \big[ \lambda_{ \nu_{n-\frac{1}{2}}}
\dot{\delta}_{ \nu_{n-\frac{1}{2}}}+
\boldsymbol\lambda_{ \tau_{n-\frac{1}{2}}}\cdot
\dot{\mbox{\boldmath{$\delta$}}}_{ \tau_{n-\frac{1}{2}}} \big] \ da
\end{eqnarray}
By using the definition (\ref{energy_dis}) of the discrete energy at times $t_{n-1}$ and $t_n$, we obtain the assertion.
\end{proof}
Using the previous proposition, we can give an estimate of the discrete energy balance for the contact law with Improved Normal Compliance (\ref{improvement}).
When the external forces are assumed to be zero, using $\frac{\delta_{\nu}^{n}-{\delta}_{\nu}^{n-1}}{\Delta t}=\dot{\delta}_{{\nu}_{n-\frac{1}{2}}}$ together with the formula (\ref{xx}), the energy balance becomes
\begin{eqnarray}
E_n-E_{n-1}=-\int_{\Gamma_3} \frac{c_\nu}{2}\big([(\delta_\nu^n)_{+}]^{\alpha}- [(\delta_\nu^{n-1})_{+}]^{\alpha}\big)\ da-\Delta t\int_{\Gamma_3} \boldsymbol\lambda_{ \tau_{n-\frac{ 1}{2}}}\cdot\dot{\mbox{\boldmath{$\delta$}}}_{ \tau_{n-\frac{1}{2}}}\ da.
\end{eqnarray}
We notice that, in the frictionless case, the Improved Normal Compliance condition yields an evaluation of the discrete energy in agreement with the continuous case (\ref{cons}).\\
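The absence of the time step $\Delta t$ in the first term of the balance above reflects the exact telescoping identity $\Delta t\,\lambda_{\nu_{n-\frac{1}{2}}}\dot{\delta}_{\nu_{n-\frac{1}{2}}}=\frac{c_\nu}{2}\big([(\delta_\nu^n)_{+}]^{\alpha}-[(\delta_\nu^{n-1})_{+}]^{\alpha}\big)$, which holds by construction of (\ref{xx}). A quick numerical check on hypothetical penetration histories (Python sketch; the values are illustrative only):

```python
def inc_stress(d_n, d_nm1, c_nu, alpha):
    # lambda_{nu,n-1/2} = c_nu*(alpha/2)*tilde_delta, cf. the INC definition
    dp_n, dp_nm1 = max(d_n, 0.0), max(d_nm1, 0.0)
    tilde = (dp_n**alpha - dp_nm1**alpha) / (alpha * (d_n - d_nm1))
    return 0.5 * c_nu * alpha * tilde

c_nu, alpha = 1.0e3, 2.0
# loading, further loading, unloading, separation
for d_nm1, d_n in [(0.0, 0.02), (0.02, 0.05), (0.05, 0.01), (0.01, -0.03)]:
    work = inc_stress(d_n, d_nm1, c_nu, alpha) * (d_n - d_nm1)  # lambda*delta_dot*dt
    stored = 0.5 * c_nu * (max(d_n, 0.0)**alpha - max(d_nm1, 0.0)**alpha)
    assert abs(work - stored) < 1e-12   # contact work telescopes exactly
```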
\textbf{Case without friction}\\
The difference $[(\delta_\nu^n)_{+}]^{\alpha}-[(\delta_\nu^{n-1})_{+}]^{\alpha}$ is very small since the penetrations $(\delta_\nu^n)_{+}$ and $(\delta_\nu^{n-1})_{+}$ are themselves small. Hence the energy of the system is ``almost'' conserved, i.e. $E_n\approx E_{n-1}$.\\
\textbf{Case with friction}\\
In this case the product $\boldsymbol\lambda_{ \tau_{n-\frac{1}{2}}}\cdot
\dot{\mbox{\boldmath{$\delta$}}}_{ \tau_{n-\frac{1}{2}}}$ is always positive by the friction law, so
we observe a dissipation of energy: $E_n\leq E_{n-1}$. In other words, this strategy limits the dissipation of energy between times $t_{n-1}$ and $t_n$. \\
In summary, the INC strategy respects the dissipation in the case of friction and ``almost'' conserves the energy in the case of contact without friction, and this is achieved by limiting the penetration.\\
In the following sections, we present both the semi-smooth Newton method and the Primal-Dual Active Set (PDAS) algorithm for the Normal Compliance conditions.
\section{Semi-Smooth Newton approach for solving normal compliance conditions}\label{s4}
The semi-smooth Newton/PDAS methods appear to be among the most relevant methods for solving frictional contact problems (cf.\ \cite{hint2,wohlcoulomb, wohl}). These methods are based on the following principle: the contact and friction conditions are reformulated in terms of non-linear complementarity equations whose solution is provided by the semi-smooth Newton iteration \cite{hint2, hint}. To this end, we need the generalized derivatives of the complementary functions for contact and friction. In practice, the conditions of contact with Coulomb friction can be formulated in terms of a fixed point problem related to a quasi-optimization one. From a purely algorithmic point of view, the main goal of these methods is to separate the nodes potentially in contact into two subsets and to find the correct subset of the nodes actually in active contact (subset $ \cal A $), as opposed to those that are inactive (subset $ \cal I $). In practice, the semi-smooth Newton/PDAS methods do not require the use of Lagrange multipliers. In fact, the boundary conditions on the subsets $ \cal A $ and $ \cal I $ are directly enforced thanks to the fixed points found by the semi-smooth Newton method, and consequently, their implementation can be achieved without much effort.
\subsection{Standard Normal Compliance (SNC) conditions}\label{semi_snc}
\subsubsection{Complementary function}\label{comp_sn}
The standard normal compliance contact conditions (\ref{compl}) with $\alpha=2$ can be formulated from the following non-linear complementary function:
\begin{align}
{\cal C}_{\nu}^{\lambda}(\delta_\nu,\lambda_\nu)=\lambda_\nu - c_\nu [\delta_\nu]_+,\label{complementary_sn}
\end{align}
where we have dropped, for the sake of readability, the time index $n+1$.
%
\subsubsection{Generalized derivative of complementary functions}\label{gen_deriv_sn}
\noindent We provide the generalized derivative of the complementary functions in the gap and contact cases.
$\bullet$ {Gap case: $\delta_\nu\le 0$}.
\noindent According to the complementary function ${\cal C}_{\nu}^{\lambda}(\delta_\nu,\lambda_\nu)=\lambda_\nu$ we have
\begin{align}
&d_{\delta_\nu} {\cal C}_{\nu}^{\lambda}=0\label{phi11_sn},\\
&d_{\lambda_\nu} {\cal C}_{\nu}^{\lambda}=d{\lambda_\nu}.\label{phi12_sn}
\end{align}
$\bullet$ {Contact case: $\delta_\nu > 0$}.
\noindent Given the complementary function ${\cal C}_{\nu}^{\lambda}(\delta_\nu,\lambda_\nu)=\lambda_\nu - c_\nu \delta_\nu$, we have
\begin{align}
&d_{\delta_\nu} {\cal C}_{\nu}^{\lambda}=-c_\nu d{\delta_\nu}\label{phi11st_sn},\\
&d_{\lambda_\nu} {\cal C}_{\nu}^{\lambda}=d{\lambda_\nu}\label{phi12st_sn}.
\end{align}
\noindent By combining (\ref{phi11_sn})--(\ref{phi12st_sn}), we obtain the generalized derivative ${\cal D}_{{\cal C}_{\nu}^{\lambda}}$ of ${\cal C}_{\nu}^{\lambda}$:
\begin{align}
&{\cal D}_{{\cal C}_{\nu}^{\lambda}}(\delta_\nu,\lambda_\nu)(d\delta_\nu,d \lambda_\nu)= - c_\nu({\mathbf 1}_{\text{Contact}})d \delta_\nu + d \lambda_\nu,
\end{align}
where
\begin{align*}
& {\mathbf 1}_{\text{Contact}} = 0 \ {\rm if}\ \delta_\nu\le 0,\\
& {\mathbf 1}_{\text{Contact}} = 1 \ {\rm if}\ \delta_\nu > 0.
\end{align*}
\subsubsection{Fixed point conditions from Newton's Semi-Smooth approach}\label{fix_pt_sn}
Using now the semi-smooth Newton formalism (indexed by the superscript $k$) at the current fixed point iterate $(\delta^{k}_\nu,\lambda^{k}_\nu)$ of the complementary functions ${\cal C}_{\nu}^\lambda$, one can derive the new iterate $(\delta^{k+1}_\nu,\lambda^{k+1}_\nu)$ as follows:
\begin{align}
&{\cal D}_{{\cal C}_{\nu}^{\lambda}}(\delta^{k}_\nu,\lambda^{k}_\nu)(\Delta \delta^{k+1}_\nu,\Delta \lambda^{k+1}_\nu)= - {\cal C}_{\nu}^{\lambda} (\delta^{k}_\nu,\lambda^{k}_\nu),\label{G_R_np_sn}\\[2mm]
& (\delta^{k+1}_\nu,\lambda^{k+1}_\nu) =(\delta^{k}_\nu,\lambda^{k}_\nu) +(\Delta \delta^{k+1}_\nu,\Delta \lambda^{k+1}_\nu)\nonumber.
\end{align}
$\bullet$ {Gap case: $ {\mathbf 1}_{\text{Contact}} = 0$}.
\noindent From the equations (\ref{G_R_np_sn}) we have
\begin{align}
&\lambda^{k+1}_\nu-\lambda^{k}_\nu=-\lambda^{k}_\nu.
\end{align}
Next, the gap conditions of the semi-smooth Newton formalism are as follows
\begin{align}
&\lambda^{k+1}_\nu=0.
\end{align}
$\bullet$ {Contact case: $ {\mathbf 1}_{\text{Contact}} = 1$}.
\noindent From the equations (\ref{G_R_np_sn}) we have
\begin{align}
&- c_\nu(\delta^{k+1}_\nu-\delta^{k}_\nu) + (\lambda^{k+1}_\nu-\lambda^{k}_\nu) = -\lambda^{k}_\nu + c_\nu \delta^{k}_\nu.
\end{align}
Next,
\begin{align}
& \lambda^{k+1}_\nu = c_\nu \delta^{k+1}_\nu.
\end{align}
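On a single-node toy model, where a hypothetical scalar equilibrium $a\,\delta_\nu + \lambda_\nu = f$ stands in for the finite element system (an assumption for illustration, not the formulation of this paper), the two cases above produce the familiar finite termination of the semi-smooth Newton iteration:

```python
def semismooth_newton_snc(a, f, c_nu, max_it=20, tol=1e-10):
    """Semi-smooth Newton on the coupled toy system
         a*delta + lam - f = 0          (assumed scalar equilibrium)
         lam - c_nu*[delta]_+ = 0       (SNC complementary function, alpha = 2)."""
    delta, lam = 0.0, 0.0
    for _ in range(max_it):
        contact = 1.0 if delta > 0.0 else 0.0   # generalized-derivative switch
        r1 = a * delta + lam - f
        r2 = lam - c_nu * max(delta, 0.0)
        if abs(r1) + abs(r2) < tol:
            break
        # Newton system: [[a, 1], [-c_nu*contact, 1]] @ (dd, dl) = (-r1, -r2)
        dd = (r2 - r1) / (a + c_nu * contact)
        dl = -r2 + c_nu * contact * dd
        delta, lam = delta + dd, lam + dl
    return delta, lam
```

With a loading that produces contact ($f>0$ here) the iterate switches to the active branch and returns $\lambda_\nu=c_\nu\delta_\nu$; with separation it returns $\lambda_\nu=0$, exactly the two fixed point conditions derived above.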
\subsection{Improved Normal Compliance (INC) conditions}\label{semi_inc}
\subsubsection{Complementary function}\label{comp_in}
The improved normal compliance contact condition (\ref{improvement}) can be formulated from the following non-linear complementary function:
\begin{align}
{\cal C}_{\nu}^{\lambda}(\delta_\nu^{n+1},\lambda_\nu^{n+1})=\lambda_\nu^{n+1} - [ c_\nu \frac{\alpha}{2} \widetilde \delta^{n+1}_\nu]_+.\label{complementary_in}
\end{align}
For the sake of readability, from now on we write $ {\cal C}_{\nu}^{\lambda}(\delta_\nu,\lambda_\nu)=\lambda_\nu - [ c_\nu \frac{\alpha}{2} \widetilde \delta^{n+1}_\nu]_+$.
\subsubsection{Generalized derivative of complementary functions}\label{gen_deriv_in}
\noindent Now, we provide the generalized derivative of the complementary functions in the gap and contact cases.
$\bullet$ {Gap case: $\widetilde \delta^{n+1}_\nu\le 0$}.
\noindent According to the complementary function ${\cal C}_{\nu}^{\lambda}(\delta_\nu,\lambda_\nu)=\lambda_\nu$ we have the following derivative
\begin{align}
&d_{\delta_\nu} {\cal C}_{\nu}^{\lambda}=0\label{phi11_in},\\
&d_{\lambda_\nu} {\cal C}_{\nu}^{\lambda}=d{\lambda_\nu}. \label{phi12_in}
\end{align}
$\bullet$ {Contact case: $\widetilde \delta^{n+1}_\nu > 0$}.
\noindent Given the complementary function ${\cal C}_{\nu}^{\lambda}(\delta_\nu,\lambda_\nu)=\lambda_\nu - c_\nu \frac{\alpha}{2} \widetilde \delta^{n+1}_\nu$, we have
\begin{align}
&d_{\delta_\nu} {\cal C}_{\nu}^{\lambda}=-c_\nu \frac{\alpha (([\delta^{n+1}_\nu]_+)^{\alpha-1} - \widetilde \delta^{n+1}_\nu)}{2(\delta^{n+1}_\nu - \delta^{n}_\nu)} d{\delta_\nu}\label{phi11st_in},\\
&d_{\lambda_\nu} {\cal C}_{\nu}^{\lambda}=d{\lambda_\nu}\label{phi12st_in}.
\end{align}
\noindent By combining (\ref{phi11_in})--(\ref{phi12st_in}), we obtain the generalized derivative ${\cal D}_{{\cal C}_{\nu}^{\lambda}}$ of ${\cal C}_{\nu}^{\lambda}$:
\begin{align}
&{\cal D}_{{\cal C}_{\nu}^{\lambda}}(\delta_\nu,\lambda_\nu)(d \delta_\nu,d \lambda_\nu)= -c_\nu \frac{\alpha (([\delta^{n+1}_\nu]_+)^{\alpha-1} - \widetilde \delta^{n+1}_\nu)}{2(\delta^{n+1}_\nu - \delta^{n}_\nu)} ({\mathbf 1}_{\text{Contact}})d \delta_\nu + d \lambda_\nu,
\end{align}
where
\begin{align*}
& {\mathbf 1}_{\text{Contact}} = 0 \ {\rm if}\ \widetilde \delta^{n+1}_\nu \le 0,\\
& {\mathbf 1}_{\text{Contact}} = 1 \ {\rm if}\ \widetilde \delta^{n+1}_\nu > 0.
\end{align*}
\subsubsection{Fixed point conditions from Newton's Semi-Smooth approach}\label{fix_pt_in}
Using now the semi-smooth Newton formalism (indexed by the superscript $k$) at the current fixed point iterate $(\delta^{k}_\nu,\lambda^{k}_\nu)$ of the complementary functions ${\cal C}_{\nu}^\lambda$, one can derive the new iterate $(\delta^{k+1}_\nu,\lambda^{k+1}_\nu)$ as follows:
\begin{align}
&{\cal D}_{{\cal C}_{\nu}^{\lambda}}(\delta^{k}_\nu,\lambda^{k}_\nu)(\Delta \delta^{k+1}_\nu,\Delta \lambda^{k+1}_\nu)= - {\cal C}_{\nu}^{\lambda} (\delta^{k}_\nu,\lambda^{k}_\nu),\label{G_R_np_in}\\[2mm]
& (\delta^{k+1}_\nu,\lambda^{k+1}_\nu) =(\delta^{k}_\nu,\lambda^{k}_\nu) +(\Delta \delta^{k+1}_\nu,\Delta \lambda^{k+1}_\nu)\nonumber.
\end{align}
$\bullet$ {Gap case: $ {\mathbf 1}_{\text{Contact}} = 0$}
\noindent From the equations (\ref{G_R_np_in}) we have
\begin{align}
&\lambda^{k+1}_\nu-\lambda^{k}_\nu=-\lambda^{k}_\nu.
\end{align}
Next, the gap conditions of the semi-smooth Newton formalism are as follows
\begin{align}
&\lambda^{k+1}_\nu=0.
\end{align}
$\bullet$ {Contact case: $ {\mathbf 1}_{\text{Contact}} = 1$}
\noindent From the equations (\ref{G_R_np_in}) we have
\begin{align}
& -c_\nu \frac{\alpha (([\delta^{k,n+1}_\nu]_+)^{\alpha-1} - \widetilde \delta^{k,n+1}_\nu)}{2(\delta^{k,n+1}_\nu - \delta^{n}_\nu)} (\delta^{k+1,n+1}_\nu-\delta^{k,n+1}_\nu) + (\lambda^{k+1}_\nu-\lambda^{k}_\nu) = -\lambda^{k}_\nu + c_\nu \frac{\alpha}{2} \widetilde \delta^{k,n+1}_\nu.
\end{align}
Next,
\begin{align}
& \lambda^{k+1}_\nu = c_\nu \frac{\alpha}{2} \widetilde \delta^{k,n+1}_\nu + c_\nu \frac{\alpha (([\delta^{k,n+1}_\nu]_+)^{\alpha-1} - \widetilde \delta^{k,n+1}_\nu)}{2(\delta^{k,n+1}_\nu - \delta^{n}_\nu)} (\delta^{k+1,n+1}_\nu-\delta^{k,n+1}_\nu).
\end{align}
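The contact-case update above is the first-order linearization of the INC stress $c_\nu\frac{\alpha}{2}\widetilde\delta_\nu^{n+1}$ around the iterate $\delta_\nu^{k,n+1}$. A nodewise Python sketch; note that for $\alpha=2$ with positive penetrations at both time levels, $\widetilde\delta_\nu^{n+1}$ is affine in $\delta_\nu^{n+1}$, so a single update already reproduces the INC stress exactly:

```python
def inc_contact_update(delta_k, delta_n, delta_kp1, c_nu, alpha):
    """One Newton update of lambda_nu in the INC contact case.
    delta_k   = delta_nu^{k,n+1}   (current Newton iterate),
    delta_n   = delta_nu^{n}       (previous time step),
    delta_kp1 = delta_nu^{k+1,n+1} (new Newton iterate)."""
    dp = max(delta_k, 0.0)
    tilde = (dp**alpha - max(delta_n, 0.0)**alpha) / (alpha * (delta_k - delta_n))
    slope = c_nu * alpha * (dp**(alpha - 1.0) - tilde) / (2.0 * (delta_k - delta_n))
    return c_nu * 0.5 * alpha * tilde + slope * (delta_kp1 - delta_k)
```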
\subsection{Compliance for friction conditions}\label{semi_fcc}
\subsubsection{Complementary function}\label{comp_fcf}
The compliance for friction conditions (\ref{compl1}) can be formulated from the following non-linear complementary function ${\cal C}_{\tau}^{\lambda}(\delta_\nu^{n+1},\dot\mbox{\boldmath{$\delta$}}_\tau^{n+1},\lambda_\nu^{n+1},\boldsymbol\lambda_\tau^{n+1})$
\begin{align}\label{ccoulomb}
&\hspace{-2mm}{\cal C}_{\tau}^{\lambda}(\delta_\nu^{n+1},\dot\mbox{\boldmath{$\delta$}}_\tau^{n+1},\lambda_\nu^{n+1},\boldsymbol\lambda_\tau^{n+1})=\max ( \mu \lambda_{\nu}^{n+1}, \| c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{n+1}\|)\boldsymbol\lambda_{\tau}^{n+1}- \mu \lambda_{\nu}^{n+1}(c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{n+1}).
\end{align}
For the sake of readability, in the following we write $${\cal C}_{\tau}^{\boldsymbol\lambda}(\delta_{\nu},\dot\mbox{\boldmath{$\delta$}}_{\tau},\lambda_{\nu},\boldsymbol\lambda_{\tau})=\max ( \mu \lambda_{\nu}, \|c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}\|)\boldsymbol\lambda_{\tau}- \mu \lambda_{\nu}(c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}).$$
\subsubsection{Generalized derivative of complementary functions}\label{gen_deriv_fcc}
\noindent Now, we provide the generalized derivative of the complementary function in the gap and friction cases.\\
$\bullet$ {Gap case}: $ \boldsymbol\lambda_{\tau} = \mathbf 0$, ${\cal C}_{\tau}^{\boldsymbol\lambda}(\delta_{\nu},\dot\mbox{\boldmath{$\delta$}}_{\tau},\lambda_{\nu},\boldsymbol\lambda_{\tau})= \mathbf 0$. \\[0.2cm]
$\bullet$ {Stick case}: $\|\boldsymbol\lambda_{\tau}\| < \mu \lambda_{\nu}$.
\begin{align*}
&{\cal C}_{\tau}^{\boldsymbol\lambda}(\delta_{\nu},\dot\mbox{\boldmath{$\delta$}}_{\tau},\lambda_{\nu},\boldsymbol\lambda_{\tau})=\mu \lambda_{\nu} \boldsymbol\lambda_{\tau} -\mu c_\tau \lambda_{\nu}\dot\mbox{\boldmath{$\delta$}}_{\tau}.
\end{align*}
Then
\begin{align}
&d_{\delta_{\nu}} {\cal C}_{\tau}^{\boldsymbol\lambda}=0, \label{phi21st}\\
&d_{\dot\mbox{\boldmath{$\delta$}}_{\tau}} {\cal C}_{\tau}^{\boldsymbol\lambda}=-\mu c_\tau \lambda_{\nu}d\dot\mbox{\boldmath{$\delta$}}_{\tau},\label{phi22st}\\
&d_{\lambda_{\nu}} {\cal C}_{\tau}^{\boldsymbol\lambda}=(\mu \boldsymbol\lambda_{\tau}-\mu c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau})d{\lambda_{\nu}},\label{phi23st}\\
&d_{\boldsymbol\lambda_{\tau}} {\cal C}_{\tau}^{\boldsymbol\lambda}= \mu \lambda_{\nu} d{\boldsymbol\lambda_{\tau}}\label{phi24st}.
\end{align}
$\bullet$ {Slip case}: $\|\boldsymbol\lambda_{\tau}\| \ge \mu \lambda_{\nu}$
\begin{align*}
&{\cal C}_{\tau}^{\boldsymbol\lambda}(\delta_{\nu},\dot\mbox{\boldmath{$\delta$}}_{\tau},\lambda_{\nu},\boldsymbol\lambda_{\tau})=\|c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}\|\boldsymbol\lambda_{\tau}- \mu\lambda_{\nu} (c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}).
\end{align*}
Then
\begin{align}
&d_{\delta_{\nu}} {\cal C}_{\tau}^{\boldsymbol\lambda}=0,\label{phi21sl}\\
&d_{\dot\mbox{\boldmath{$\delta$}}_{\tau}} {\cal C}_{\tau}^{\boldsymbol\lambda}=\Big(c_\tau \boldsymbol\lambda_{\tau}\frac{( \dot\mbox{\boldmath{$\delta$}}_{\tau})^T}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}\|}-\mu c_\tau \lambda_{\nu}\mbox{\boldmath{$I$}}_2\Big)d{\dot\mbox{\boldmath{$\delta$}}_{\tau}},\label{phi22sl}\\
&d_{\lambda_{\nu}} {\cal C}_{\tau}^{\boldsymbol\lambda}=- \mu c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}d\lambda_{\nu},\label{phi23sl}\\
&d_{\boldsymbol\lambda_{\tau}} {\cal C}_{\tau}^{\boldsymbol\lambda}=\|c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}\|d{\boldsymbol\lambda_{\tau}}\label{phi24sl}.
\end{align}
\subsubsection{Fixed point conditions from Newton's Semi-Smooth approach}\label{fix_pt_fcc}
\noindent By combining (\ref{phi21st})--(\ref{phi24sl}), we obtain the generalized derivative ${\cal G}_{{\cal C}_{\tau}^{\boldsymbol\lambda}}$ of ${\cal C}_{\tau}^{\boldsymbol\lambda}$:
\begin{align}
&{\cal G}_{{\cal C}_{\tau}^{\boldsymbol\lambda}}(\delta_{\nu},\dot\mbox{\boldmath{$\delta$}}_{\tau},\lambda_{\nu},\boldsymbol\lambda_{\tau})(\Delta \delta_{\nu},\Delta\dot\mbox{\boldmath{$\delta$}}_{\tau},\Delta\lambda_{\nu},\Delta\boldsymbol\lambda_{\tau})=
{\mathbf 1}_{\text{Stick}} \Big( \mu \boldsymbol\lambda_{\tau} \Big)\Delta{\lambda_{\nu}} \nonumber\\
&+ {\mathbf 1}_{\text{Slip}} \Big(c_\tau \boldsymbol\lambda_{\tau}\frac{( \dot\mbox{\boldmath{$\delta$}}_{\tau})^T}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}\|}\Big)\Delta{\dot\mbox{\boldmath{$\delta$}}_{\tau}} - \mu c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}\Delta\lambda_{\nu} -\mu c_\tau \lambda_{\nu}\Delta{\dot\mbox{\boldmath{$\delta$}}_{\tau}} \nonumber\\
& + {\mathbf 1}_{\text{Stick}}\Big(\mu\lambda_{\nu}\Big)\Delta{\boldsymbol\lambda_{\tau}} + {\mathbf 1}_{\text{Slip}}\Big(\|c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}\| \Big)\Delta{\boldsymbol\lambda_{\tau}} \nonumber
\end{align}
where
\begin{align*}
& {\mathbf 1}_{\text{Stick}} = 1, {\mathbf 1}_{\text{Slip}} = 0\ {\rm if}\ \|\boldsymbol\lambda_{\tau}\| < \mu \lambda_{\nu},\\
& {\mathbf 1}_{\text{Stick}} = 0, {\mathbf 1}_{\text{Slip}} = 1\ {\rm if}\ \|\boldsymbol\lambda_{\tau}\| \ge \mu \lambda_{\nu}.
\end{align*}
Using now the semi-smooth Newton formalism at the current iterate $(\delta_{\nu}^{(k)},\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)},\lambda_{\nu}^{(k)},\boldsymbol\lambda_{\tau}^{(k)})$, one can derive the new iterate $(\delta_{\nu}^{(k+1)},\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)},\lambda_{\nu}^{(k+1)},\boldsymbol\lambda_{\tau}^{(k+1)})$ as follows:
\begin{align}
&{\cal G}_{{\cal C}_{\tau}^{\boldsymbol\lambda}}(\delta_{\nu}^{(k)},\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)},\lambda_{\nu}^{(k)},\boldsymbol\lambda_{\tau}^{(k)})(\Delta \delta_{\nu}^{(k+1)},\Delta\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)},\Delta\lambda_{\nu}^{(k+1)},\Delta\boldsymbol\lambda_{\tau}^{(k+1)})
= - {\cal C}_{\tau}^{\boldsymbol\lambda} (\delta_{\nu}^{(k)},\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)},\lambda_{\nu}^{(k)},\boldsymbol\lambda_{\tau}^{(k)})\nonumber,\\[2mm]
& (\delta_{\nu}^{(k+1)},\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)},\lambda_{\nu}^{(k+1)},\boldsymbol\lambda_{\tau}^{(k+1)})
=(\delta_{\nu}^{(k)},\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)},\lambda_{\nu}^{(k)},\boldsymbol\lambda_{\tau}^{(k)}) +(\Delta \delta_{\nu}^{(k+1)},\Delta\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)},\Delta\lambda_{\nu}^{(k+1)},\Delta\boldsymbol\lambda_{\tau}^{(k+1)})\nonumber.
\end{align}
\noindent$\bullet$ {Stick case}: $ {\mathbf 1}_{\text{Stick}} = 1, {\mathbf 1}_{\text{Slip}} = 0$.
We have
\begin{align*}
& -\mu c_\tau \lambda_{\nu}^{(k)}(\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)}-\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}) + \mu\lambda_{\nu}^{(k)} (\boldsymbol\lambda_{\tau}^{(k+1)}-\boldsymbol\lambda_{\tau}^{(k)}) + (\mu \boldsymbol\lambda_{\tau}^{(k)}-\mu c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}) (\lambda_{\nu}^{(k+1)}-\lambda_{\nu}^{(k)}) \\
& = - \mu\lambda_{\nu}^{(k)} \boldsymbol\lambda_{\tau}^{(k)} +\mu c_\tau \lambda_{\nu}^{(k)}\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}.
\end{align*}
Next, with $\boldsymbol\lambda_{\tau}^{(k)} = c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}$, we obtain
\begin{align}
& \boldsymbol\lambda_{\tau}^{(k+1)} =c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)}.
\end{align}
\noindent$\bullet$ {Slip case}: $ {\mathbf 1}_{\text{Stick}} = 0, {\mathbf 1}_{\text{Slip}} = 1$.
We obtain
\begin{align}
&\Big(c_\tau \boldsymbol\lambda_{\tau}^{(k)}\frac{( \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)})^T}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|}-\mu c_\tau \lambda_{\nu}^{(k)}\mbox{\boldmath{$I$}}_2\Big)(\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)}-\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}) \label{phi2c}\\
& - \mu (c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)})(\lambda_{\nu}^{(k+1)}-\lambda_{\nu}^{(k)}) +\Big( \|c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\| \Big)(\boldsymbol\lambda_{\tau}^{(k+1)}-\boldsymbol\lambda_{\tau}^{(k)})\nonumber\\
&= -\|c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|\boldsymbol\lambda_{\tau}^{(k)}+ \mu\lambda_{\nu}^{(k)} (c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)})\nonumber.
\end{align}
Therefore, after an elementary computation, with $\boldsymbol\lambda_{\tau}^{(k)} =\mu \lambda_{\nu}^{(k)} \frac{ \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|}$ we have
\begin{align*}
&\boldsymbol\lambda_{\tau}^{(k+1)} = \mu\lambda_{\nu}^{(k+1)} \frac{ \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|} - \Big( \boldsymbol\lambda_{\tau}^{(k)}\frac{( \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)})^T}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|^2}-\mu \lambda_{\nu}^{(k)} \frac{\mbox{\boldmath{$I$}}_2}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|} \Big) (\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k+1)}-\dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)})\nonumber.
\end{align*}
In the two-dimensional case, one obtains a simplified version of the previous condition:
\begin{align*}
&\boldsymbol\lambda_{\tau}^{(k+1)} = \mu\lambda_{\nu}^{(k+1)} \frac{ \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{(k)}\|}\nonumber.
\end{align*}
For more details on the derivation of this simplified version, we refer to the proof provided in \cite{ABCD2}.
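For reference, the slip-case update can be written compactly in code; the tangential quantities are vectors (numpy arrays below) and $\lambda_\nu$ is a scalar. When the tangential space is one-dimensional and the previous iterate sits on the friction cone, $\boldsymbol\lambda_{\tau}^{(k)}=\mu\lambda_{\nu}^{(k)}\dot{\mbox{\boldmath{$\delta$}}}_{\tau}^{(k)}/\|\dot{\mbox{\boldmath{$\delta$}}}_{\tau}^{(k)}\|$, the correction term vanishes and one recovers the simplified formula, as the test below illustrates:

```python
import numpy as np

def slip_update(lam_nu_k, lam_nu_kp1, lam_tau_k, ddelta_k, ddelta_kp1, mu):
    """Slip-case tangential update lam_tau^{k+1} (vector form above);
    lam_tau*ddelta^T is realized as an outer product."""
    n = np.linalg.norm(ddelta_k)
    d_hat = ddelta_k / n
    correction = (np.outer(lam_tau_k, ddelta_k) / n**2
                  - mu * lam_nu_k * np.eye(ddelta_k.size) / n) @ (ddelta_kp1 - ddelta_k)
    return mu * lam_nu_kp1 * d_hat - correction
```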
\section{Primal-Dual Active Set methods}\label{s5}
This section is devoted to the numerical treatment of the contact conditions using a Primal-Dual Active Set method within the framework of dynamic contact problems. After defining the active and inactive subsets of all nodes that are potentially in contact, we compute the contact conditions on each subset only in terms of contact reaction, using the local general equations of motion.
\subsection{Primal-Dual Active Set method for standard normal compliance}\label{activ_snc}
Let us denote by ${\cal S}$ the set of potential contact nodes and by $\gamma$ a potential contact node belonging to ${\cal S}$. The standard normal contact condition (\ref{compl}) with $\alpha=2$ is enforced by applying an active set strategy which derives directly from the fixed point computation on the non-linear complementary functions ${\cal C}_{\nu}^{\lambda}$ and ${\cal C}_{\tau}^{\boldsymbol\lambda}$ based on the semi-smooth Newton scheme. The active and inactive sets are defined as follows:
\begin{align*}
&{\cal A}_{\nu}^{k+1}=\{\gamma\in {\cal S}: \delta^{\gamma,k}_\nu \geq 0\},\\
&{\cal I}_{\nu}^{k+1}=\{\gamma\in {\cal S}:\delta^{\gamma,k}_\nu < 0\},\\
&{\cal A}_{\tau}^{k+1}=\{\gamma\in {\cal S}:\|\boldsymbol\lambda_{\tau}^{\gamma,k}\| < \mu \lambda_{\nu}^{\gamma,k}\},\\
&{\cal I}_{\tau}^{k+1}=\{\gamma\in {\cal S}:\|\boldsymbol\lambda_{\tau}^{\gamma,k}\| \ge \mu \lambda_{\nu}^{\gamma,k}\}.
\end{align*}
\noindent The status of a given potential contact node $\gamma$ at the non-linear iteration $k$ depends on the set it belongs to: it is either in the non-contact status or in the frictional contact status (stick or slip). This yields the following Algorithm \ref{PDAS_SNC}.\\
\begin{algorithm}
\caption{PDAS for standard normal compliance}\label{PDAS_SNC}
\qquad(i) Choose $(\mbox{\boldmath{$\delta$}}^{(0)},\boldsymbol\lambda^{(0)})$, $c_{\nu}>0$, $c_{\tau}>0$ and set $k=0$.
\qquad(ii) Compute: $\tau^{\gamma}_\nu = \delta^{\gamma,k}_\nu$ and $\tau^{\gamma}_\tau = - \|\boldsymbol\lambda_{\tau}^{\gamma,k}\| + \mu \lambda_{\nu}^{\gamma,k}$ \ \ for each $\gamma \in {\cal S}$.
\qquad(iii) Set the active and inactive sets:
\begin{align*}
&{\cal A}_{\nu}^{k+1}=\{\gamma\in {\cal S}:\tau^{\gamma}_\nu \geq 0\},\\
&{\cal I}_{\nu}^{k+1}={\cal S}\setminus {\cal A}_{\nu}^{k+1}, \\
&{\cal A}_{\tau}^{k+1}=\{\gamma\in {\cal S}:\tau^{\gamma}_\tau > 0 \},\\
&{\cal I}_{\tau}^{k+1}={\cal S}\setminus {\cal A}_{\tau}^{k+1}.
\end{align*}
\qquad(iv) Find $(\mbox{\boldmath{$\delta$}}^{\gamma,k+1},\boldsymbol\lambda^{\gamma,k+1 })$ such that
\begin{eqnarray}
&\label{Inu_exact_1}\lambda^{\gamma,k+1}_{\nu}=0, \qquad \boldsymbol\lambda^{\gamma,k+1}_{\tau}=\mbox{\boldmath{$0$}} \quad \forall \gamma \in {\cal I}_{\nu}^{k+1},\\[1mm]
&\label{Anu_exact_1}\lambda^{\gamma,k+1}_{\nu}=c_\nu \delta^{\gamma,k+1}_\nu \qquad \forall \gamma \in {\cal A}_{\nu}^{k+1},\\[1mm]
&\label{Atau_1} \boldsymbol\lambda_{\tau}^{\gamma,k+1} = c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k+1} \quad\forall \gamma \in {\cal A}_{\tau}^{k+1}\cap {\cal A}_{\nu}^{k+1},\\[1mm]
&\label{Itau_1} \boldsymbol\lambda_{\tau}^{\gamma,k+1} = \mu\lambda_{\nu}^{\gamma,k+1} \frac{ \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}\|} - \Big( \boldsymbol\lambda_{\tau}^{\gamma,k}\frac{( \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k})^T}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}\|^2}-\mu \lambda_{\nu}^{\gamma,k} \frac{\mbox{\boldmath{$I$}}_2}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}\|} \Big) (\dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k+1}-\dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}) \ \forall \gamma \in {\cal I}_{\tau}^{k+1}\cap {\cal A}_{\nu}^{k+1}. \label{robin}
\end{eqnarray}
\qquad(v) If $\|(\mbox{\boldmath{$\delta$}}^{\gamma,k+1},\boldsymbol\lambda^{\gamma,k+1})-(\mbox{\boldmath{$\delta$}}^{\gamma, k},\boldsymbol\lambda^{\gamma,k})\|\leq\epsilon$, ${\cal A}_{\nu}^{k+1}={\cal A}_{\nu}^{k}$ and ${\cal A}_{\tau}^{k+1}={\cal A}_{\tau}^{k}$ stop, else go to (ii).
\end{algorithm}
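Algorithm \ref{PDAS_SNC} can be sketched on a toy frictionless problem in which each potential contact node $\gamma$ obeys a hypothetical scalar equilibrium $a_\gamma\delta_\gamma+\lambda_\gamma=f_\gamma$ (an assumed stand-in for the condensed finite element system, not the paper's discretization); the loop classifies the nodes and stops when the active set is stable:

```python
def pdas_snc(a, f, c_nu, max_it=30):
    """Toy frictionless PDAS: tau_nu = delta, A = {delta >= 0}, I otherwise."""
    m = len(a)
    delta, lam = [0.0] * m, [0.0] * m
    active_prev = None
    for _ in range(max_it):
        # steps (ii)-(iii): classify the nodes from the current iterate
        active = [delta[g] >= 0.0 for g in range(m)]
        # step (iv): enforce lam = c_nu*delta on A, lam = 0 on I, then solve
        for g in range(m):
            if active[g]:
                delta[g] = f[g] / (a[g] + c_nu)   # a*delta + c_nu*delta = f
                lam[g] = c_nu * delta[g]
            else:
                lam[g] = 0.0
                delta[g] = f[g] / a[g]
        # step (v): stop when the active set no longer changes
        if active == active_prev:
            break
        active_prev = active
    return delta, lam, active
```

On two nodes, one loaded towards the obstacle and one pulled away, the loop terminates with the first node active and the second released.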
\subsection{Primal-Dual Active Set method for improved normal compliance}\label{activ_inc}
Likewise, using similar notation, the improved contact condition (\ref{improvement}) is enforced by applying another active set strategy, yielding Algorithm \ref{PDAS_INC} described below.
\begin{algorithm}
\caption{PDAS for improved normal compliance}\label{PDAS_INC}
\qquad(i) Choose $(\mbox{\boldmath{$\delta$}}^{(0)},\boldsymbol\lambda^{(0)})$, $c_{\nu}>0$, $c_{\tau}>0$ and set $k=0$.
\qquad(ii) Compute: $\tau^{\gamma}_\nu = \widetilde \delta^{\gamma,k,n+1}_\nu$ and $\tau^{\gamma}_\tau = - \|\boldsymbol\lambda_{\tau}^{\gamma,k}\| + \mu \lambda_{\nu}^{\gamma,k}$ for each $\gamma \in {\cal S}$.
\qquad(iii) Set the active and inactive sets:
\begin{align*}
&{\cal A}_{\nu}^{k+1}=\{\gamma\in {\cal S}:\tau^{\gamma}_\nu \geq 0\},\\
&{\cal I}_{\nu}^{k+1}={\cal S}\setminus {\cal A}_{\nu}^{k+1}, \\
&{\cal A}_{\tau}^{k+1}=\{\gamma\in {\cal S}:\tau^{\gamma}_\tau > 0 \},\\
&{\cal I}_{\tau}^{k+1}={\cal S}\setminus {\cal A}_{\tau}^{k+1}.
\end{align*}
\qquad(iv) Find $(\mbox{\boldmath{$\delta$}}^{\gamma,k+1},\boldsymbol\lambda^{\gamma,k+1 })$ such that
\begin{eqnarray}
&\label{Inu_exact_2}\lambda^{\gamma,k+1}_{\nu}=0, \qquad \boldsymbol\lambda^{\gamma,k+1}_{\tau}=\mbox{\boldmath{$0$}} \qquad\forall \gamma\in {\cal I}_{\nu}^{k+1},\\[1mm]
&\label{Anu_exact_2}\lambda^{\gamma,k+1}_{\nu}= c_\nu \frac{\alpha}{2} \widetilde \delta^{\gamma,k,n+1}_\nu + c_\nu \frac{\alpha (([\delta^{\gamma,k,n+1}_\nu]_+)^{\alpha-1} - \widetilde \delta^{\gamma,k,n+1}_\nu)}{2(\delta^{\gamma,k,n+1}_\nu - \delta^{\gamma,n}_\nu)} (\delta^{\gamma,k+1,n+1}_\nu-\delta^{\gamma,k,n+1}_\nu) \ \ \forall \gamma \in {\cal A}_{\nu}^{k+1}, \\[1mm]
&\label{Atau_2} \boldsymbol\lambda_{\tau}^{\gamma,k+1} = c_\tau \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k+1} \quad \forall\gamma \in {\cal A}_{\tau}^{k+1}\cap {\cal A}_{\nu}^{k+1},\\[1mm]
&\label{Itau_2} \boldsymbol\lambda_{\tau}^{\gamma,k+1} = \mu\lambda_{\nu}^{\gamma,k+1} \frac{ \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}\|} - \Big( \boldsymbol\lambda_{\tau}^{\gamma,k}\frac{( \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k})^T}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}\|^2}-\mu \lambda_{\nu}^{\gamma,k} \frac{\mbox{\boldmath{$I$}}_2}{\| \dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}\|} \Big) (\dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k+1}\!\!-\dot\mbox{\boldmath{$\delta$}}_{\tau}^{\gamma,k}) \ \ \forall \gamma \in {\cal I}_{\tau}^{k+1}\cap {\cal A}_{\nu}^{k+1}.
\end{eqnarray}
\qquad(v) If $\|(\mbox{\boldmath{$\delta$}}^{\gamma,k+1},\boldsymbol\lambda^{\gamma,k+1})-(\mbox{\boldmath{$\delta$}}^{\gamma, k},\boldsymbol\lambda^{\gamma,k})\|\leq\epsilon$, ${\cal A}_{\nu}^{k+1}={\cal A}_{\nu}^{k}$ and ${\cal A}_{\tau}^{k+1}={\cal A}_{\tau}^{k}$ stop, else go to (ii).
\end{algorithm}
Note that the main difference between Algorithms \ref{PDAS_SNC} and \ref{PDAS_INC} lies in step (iv), regarding the handling of the normal compliance (equations (\ref{Anu_exact_1}) and (\ref{Anu_exact_2}), respectively), as one would expect.
\section{Numerical experiments}\label{s6}
In what follows, we carry out a comparative study of energy-conserving methods, focusing in particular on the behavior
of the discrete energy of the system during and after the impact.
To do this, we consider two numerical examples: the first concerns
the impact of an elastic ball and the second the impact of a
hyperelastic ring. The goal is to demonstrate that the Active Set--Improved Normal Compliance method respects the conservation of energy after impact and practically guarantees non-penetration.
\subsection{Impact of a linear elastic ball against a foundation}
This representative benchmark problem describes the frictionless impact of a linear elastic ball against a foundation. The elastic ball is launched with an initial velocity ($\mbox{\boldmath{$u$}}_1=(0,-10)\,m/s$) towards the foundation
$\left\{(x_1,x_2)\in \mathbb{R}^2:\ x_2 \leq 0 \right\}$.
The domain $\Omega$ represents the cross-section of the ball, under the assumption of plane stress.
The behavior of the material is described by a linear elastic constitutive law defined by the energy density
\begin{equation}\label{elastic_density}
W^e(\mbox{\boldmath{$\varepsilon$}})=\frac{\lambda^*}{2}({\mathrm{tr}}\,
\mbox{\boldmath{$\varepsilon$}})^2 +\mu \,{\mathrm{tr}}(\mbox{\boldmath{$\varepsilon$}}^2), \quad \forall
\mbox{\boldmath{$\varepsilon$}} \in \mathbb{M}^d,
\end{equation}
with $$\displaystyle\lambda^*=\frac{2\lambda\mu}{\lambda+2\mu},\quad \mu=\frac{E}{2(1+\kappa)} \quad \text{and} \quad \lambda=\frac{E\kappa}{(1+\kappa)(1-2\kappa)}.$$
Here, $E$ and $\kappa$ are respectively the Young modulus and the Poisson ratio of the material and $\mathrm{tr}(\cdot)$ is the trace operator. Note that $\mbox{\boldmath{$\varepsilon$}} = \frac{1}{2} ({\nabla \mbox{\boldmath{$u$}}}^T + \nabla \mbox{\boldmath{$u$}}) $ represents the linearized strain tensor within the framework of the theory of small strains ($\|\mbox{\boldmath{$u$}}\| \ll 1$ and
$\|\nabla\mbox{\boldmath{$u$}}\| \ll 1$ in $\Omega$).
The physical setting is shown in Figure \ref{fig_num1}. Here:
\begin{align*}
& \Omega\, =\left\{(x_1,x_2)\in \mathbb{R}^2: \ (x_1-100)^2
+ (x_2-100)^2 \leq 100 \right\},\\
&\Gamma_1 = \varnothing,\quad \Gamma_2= \varnothing,\\
&\Gamma_3=\left\{(x_1,x_2)\in \mathbb{R}^2: \ (x_1-100)^2
+ (x_2-100)^2 = 100 \right\}.
\end{align*}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=5cm]{ex_num.eps}
\end{center}
\caption{Discretization of the elastic ball in contact with a foundation.}
\label{fig_num1}
\end{figure}
We assume that the volume forces do not act on the body during the process. For the discretization of the problem of contact represented in Figure \ref{fig_num1}, we use 7820 elastic nodes. For numerical experiments, we use the following data:
\begin{eqnarray}
\begin{array}{l}
\rho = 1000\, kg/m^3, \quad T=2\,s, \quad \Delta t=0.001\,s, \nonumber \\[2mm]
\mbox{\boldmath{$u$}}_0=(0,0)\,m, \quad \mbox{\boldmath{$u$}}_1=(0,-10)\,m/s, \nonumber \\[2mm]
E= 100\,GPa,\quad \kappa=0.35, \quad {\mbox{\boldmath{$f$}}}_0=(0,0)\,Pa, \nonumber \\[2mm]
g=50\,m, \quad r =1000, \quad \mu = 0. \nonumber
\end{array}
\end{eqnarray}
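For reference, the material coefficients entering (\ref{elastic_density}) can be evaluated from the data above with a few lines (a sketch; here $\mu$ denotes the Lam\'e coefficient defined earlier, not the friction coefficient $\mu=0$ of the data list):

```python
# Evaluate the coefficients exactly as defined after (elastic_density),
# with the data of this benchmark: E = 100 GPa, kappa = 0.35.
E, kappa = 100e9, 0.35
mu = E * kappa / (2 * (1 + kappa))                     # Lame coefficient
lam = E * kappa / (2 * (1 + kappa) * (1 - 2 * kappa))  # Lame coefficient
lam_star = 2 * lam * mu / (lam + 2 * mu)               # plane-stress coefficient
print(mu / 1e9, lam / 1e9, lam_star / 1e9)  # ~12.96, ~43.21, ~16.20 GPa
```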
In Figure \ref{defcont}, the successive positions of the deformed ball as well as the contact forces are represented before, during, and after the impact.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=11.5cm]{def.eps}
\end{center}
\caption{Sequence of deformed ball and contact forces before, during, and after impact.}
\label{defcont}
\end{figure}
The interest of this representative example lies in the comparison of the numerical results of the Improved Normal Compliance--PDAS methods ($\alpha=2$) with other classical numerical methods. For this, we consider five existing methods: \\
\hspace*{0.5cm}- The classical quasi-Lagrangian method with the Signorini contact condition.\\
\hspace*{0.5cm}- The penalty method with a standard normal compliance condition of the form ${\lambda_{{\boldsymbol \nu}}} = c_\nu ({\delta}_{\boldsymbol \nu})_+$. \\
\hspace*{0.5cm}- The Equivalent Mass Matrix (EMM) method proposed by Khenous [32], which represents a specific distribution of the mass matrix without any inertia of the contact nodes.
This method is characterized by relevant stability properties of the contact stress.\\
\hspace*{0.5cm}- Newton's adapted continuity method, developed by Ayyad and Barboteu [2], which is characterized by enforcing, after two steps, the unilateral contact law and the persistence condition during each time increment.\\
\hspace*{0.5cm}- The Active Set method with the persistent contact law.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=18cm]{ene_ball_meths_2022.jpg}
\end{center}
\caption{Discrete energy behavior of selected time integration schemes during impact ($\Delta t=0.001$, $c_\nu=\frac{1}{1000}$).}
\label{ene_ball_1}
\end{figure}
In what follows, we analyze the methods in terms of discrete energy evolution. For this we introduce the total discrete energy at time $t_{n}$ which is given by the following formula:
$$
E_{n} = \frac{1}{2} \int_{\Omega} \rho |\dot{\mbox{\boldmath{$u$}}}_{n}|^{2} d\mbox{\boldmath{$x$}} +
\int_{\Omega} {\boldsymbol \sigma}^e_{n}:
\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}_{n}) d\mbox{\boldmath{$x$}},
$$
where ${\boldsymbol \sigma}^e= \frac{\partial W^e}{\partial \mbox{\boldmath{$\varepsilon$}}}$ denotes the stress tensor for infinitesimal strains.\\
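To make the monitored quantity concrete, here is a minimal, self-contained illustration (not the paper's finite element code): for a 1D mass--spring chain integrated by a leapfrog/velocity-Verlet scheme without contact, the discrete energy computed as the sum of the kinetic and elastic terms remains constant up to $O(\Delta t^2)$ oscillations:

```python
import numpy as np

# Minimal illustration: monitor E_n = kinetic + elastic energy for a
# 1D mass-spring chain integrated with velocity Verlet; without contact,
# E_n should stay (nearly) constant over the whole simulation.
n, m, k, dt = 8, 1.0, 50.0, 1e-3
u = np.sin(np.linspace(0, np.pi, n))    # initial displacement profile
v = np.zeros(n)                         # initially at rest

def force(u):
    f = np.zeros_like(u)
    du = np.diff(u)                     # spring elongations
    f[:-1] += k * du
    f[1:]  -= k * du
    return f

def energy(u, v):
    return 0.5 * m * np.sum(v**2) + 0.5 * k * np.sum(np.diff(u)**2)

E0 = energy(u, v)
f = force(u)
for step in range(2000):                # T = 2 s, as in the benchmark
    v += 0.5 * dt * f / m
    u += dt * v
    f = force(u)
    v += 0.5 * dt * f / m
assert abs(energy(u, v) - E0) < 1e-3 * E0   # energy conserved to O(dt^2)
```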
Figure \ref{ene_ball_1} represents the evolution of the total discrete energy of the dynamical system. We note that after the impact (i.e. for $t \ge 1.52\ s$) and for the considered time step $\Delta t = 1.0\ e^{-3}\ s$, the classical quasi-Lagrangian method with Signorini's law (curve $\circleddash$) as well as the method with the standard normal compliance condition (curve $\boxminus$) are characterized by non-conservation of energy, which is not realistic from a physical point of view. We also notice that the EMM method (curve $\blacktriangledown$) strongly reduces the energy dissipation, without achieving exact conservation. It turns out that the schemes developed by Ayyad and Barboteu [1]
(curve {\Large\textbullet}) and the Improved Normal Compliance--PDAS method (curve~$\blacksquare$) do conserve energy after impact. However, for the penalized method, we find energy fluctuations which disappear after the impact. Besides, for the penalized method and the method used in [1], the unilateral contact is not exactly satisfied, see Table \ref{tab}. Indeed, the penalized method (standard normal compliance) generates a maximum error on the normal contact displacement of $1.4 e^{-4}\ m$, and the method of Ayyad and Barboteu [1] of $5.1 e^{-3}\ m$. The Improved Normal Compliance--PDAS method yields better energy conservation and, in addition, limits the penetration: $5.7 e^{-4}\ m$. The Active Set method for persistent contact (shown by the curve~$\blacklozenge$) enforces exact energy conservation without any fluctuation. Due to the ``leapfrog'' time step predictor, this Active Set method generates a maximum error on the normal contact displacement of $1.54 e^{-2}\ m$, larger than all the other methods.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lc}
\hline
Methods & Maximum error on the $\delta_\nu\ (m)$ \\
\hline
Quasi-Lagrangian with Signorini law & 0. \\
Scheme with Signorini and persistent (Ayyad--Barboteu) & $5.1 e^{-3}$ \\
Active set with persistent conditions & $1.54 e^{-2}$ \\
Active set with classical normal compliance & $1.4 e^{-4}$ \\
Active set with improved normal compliance ($\alpha = 2$) & $5.7 e^{-4}$ \\
Active set with improved normal compliance ($\alpha = 3$) & $8.05 e^{-3}$ \\
\hline
\end{tabular}
\end{center}
\caption{Maximum error on normal contact displacement ($\Delta t=0.001$, $c_\nu=1 e^{3}$).}
\label{tab}
\end{table}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{ene_ball_nc_alpha_cnu.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal Compliance ($\alpha =3$) as a function of $c_\nu$ ($\Delta t=0.01$).}
\label{ene_ball_2}
\end{figure}
In order to overcome these difficulties, we consider the Active~Set method with improved normal compliance, by analyzing the behavior of discrete energy with respect to several parameters ($c_\nu$, $\alpha$, $\Delta t$). \\
Figure~\ref{ene_ball_2} allows us to evaluate the influence of the normal compliance parameter $c_\nu$. We see that, for $c_\nu>1e^{4}$, this method respects the conservation of energy after impact and practically guarantees non-penetration for $\alpha=2$. Nevertheless, it generates slight fluctuations of the discrete energy during the impact. For $c_\nu<1e^{4}$, the Active Set method with improved normal compliance produces more fluctuations, which can be explained by energy dissipation during the impact, but the energy of the system is recovered and conserved after the impact.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=16cm]{ene_ball_nc_alpha_dt.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal compliance ($\alpha =3$) as a function of $\Delta t$ ($c_\nu={1 e^{3}}$).}
\label{ene_ball_3}
\end{figure}
In Figure \ref{ene_ball_3}, we analyze the discrete energy behavior of the Active Set scheme with the improved normal compliance condition for different time steps. For $\Delta t$ varying from $1e^{-2}\ s$ to $1e^{-4}\ s$, we notice that the numerical results obtained are similar: the method is characterized by a dissipation of energy during the impact, but after the impact the energy of the system is conserved. For $\Delta t=1e^{-2}\ s$, the method generates even more fluctuations during impact. Hence, taking $\Delta t$ below $1e^{-3}\ s$ does not guarantee a further reduction of the fluctuations.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=18cm]{ene_ball_nc_alpha_dt100_nbc.jpg}
\end{center}
\caption{Discrete energy behavior of Active Set schemes with classic Normal Compliance and with Improved Normal compliance ($\alpha =3$) as a function of the number of contact points ($\Delta t=0.01$, $c_\nu=1 e^{3}$).}
\label{ene_ball_4}
\end{figure}
In Figure \ref{ene_ball_4}, we observe the behavior of the discrete energy given by the Active Set scheme with classical and improved normal compliance with respect to the parameter $\alpha$ and the mesh refinement of the contact boundary, that is to say the number of contact points $nbc$. We notice that the numerical results obtained by using the classical normal compliance with different numbers of contact points show a strong energy dissipation (between $24\%$ and $28\%$). Such a phenomenon is expected in this configuration and can be explained by the discrete energy balance; we refer to \cite{ayyad2009formulation,ayyad2009frictionless} for more details. However, using the Improved Normal Compliance method ($\alpha=3$) with the same contact boundary discretization, we see a remarkable improvement regarding the conservation of energy in each case after the impact, with less fluctuation during the impact. It is interesting, albeit expected, to observe to what extent the mesh refinement of the contact boundary impacts the energy conservation properties in both cases.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{ene_ball_nc_alpha_dt100.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal compliance ($\Delta t =0.01\ s$) as a function of $\alpha$ ($c_\nu=1 e^{3}$).}
\label{ene_ball_5}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{ene_ball_nc_alpha_dt1000.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal compliance ($\Delta t =0.001$) as a function of $\alpha$ ($c_\nu=1 e^{3}$).}
\label{ene_ball_6}
\end{figure}
\newpage
In Figure \ref{ene_ball_5}, with a time step of $1e^{-2}\ s$, we notice that the Active Set scheme produces a significant energy dissipation ($28\%$) with the standard normal compliance method. For the improved normal compliance, we notice some fluctuations, but the system recovers exactly its energy after the impact. By comparing Figure \ref{ene_ball_6} ($\Delta t=1e^{-3}\ s$) with Figure \ref{ene_ball_5}, we notice large fluctuations for values of $\alpha$ greater than $4$ for the improved normal compliance method. Note that, while it seems at first sight that the classical scheme with normal compliance conserves the energy, that is not actually the case, as there is a slight energy dissipation ($1\%$). Such a phenomenon is expected and can be explained by the smallness of the $\Delta t$ considered.
Regarding the fluctuations, the results observed seem consistent with the theory. As a matter of fact, let us consider the following configurations:
\begin{itemize}
\item Let $\alpha_1$ and $\alpha_2$ be two values of the $\alpha$ coefficient used in INC, such that $\alpha_2>\alpha_1$;
\item Let the associated normal stress values $\lambda_{{\boldsymbol \nu-\frac{1}{2}}}^1$, $\lambda_{{\boldsymbol \nu-\frac{1}{2}}}^2$ be given by (\ref{improvement}).
\end{itemize}
As the penetrations are very small in this physical setting, for the same penetration we have $\lambda_{{\boldsymbol \nu-\frac{1}{2}}}^1 > \lambda_{{\boldsymbol \nu-\frac{1}{2}}}^2$. This means that the penetrations observed with $\alpha=\alpha_2$ are expected to be larger than with $\alpha=\alpha_1$ (see Table \ref{tab}). Therefore, the energy dissipated is expected to be larger with $\alpha=\alpha_2$, and this turns out to be the case in Figure \ref{ene_ball_6}, as the fluctuation amplitude is strictly increasing with respect to the value of $\alpha$.
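This monotonicity can be checked numerically. For illustration only, assume a power-law compliance $\lambda_\nu=c_\nu(\delta_\nu)_+^{\alpha}$ (the exact expression is the one of (\ref{improvement}), not reproduced here; the power-law form of this sketch is an assumption):

```python
# Illustration of the argument above, assuming a power-law compliance
# lambda_nu = c_nu * max(delta_nu, 0)**alpha (assumed form, for the sketch).
c_nu = 1e3
delta = 1e-3                      # a small penetration, cf. Table above
stress = lambda d, alpha: c_nu * max(d, 0.0) ** alpha
s2, s3 = stress(delta, 2), stress(delta, 3)
assert s2 > s3        # same small penetration => smaller stress for larger alpha
# conversely, to carry the same contact stress s2, the alpha = 3 law
# needs a larger penetration:
delta3 = (s2 / c_nu) ** (1.0 / 3.0)
assert delta3 > delta
```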
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lccccc}
\hline
nbc&16&32&64&128&256\\
\hline
dof&50&192&784&3160&12520\\
\hline
CPU time for Quasi-Lagrangian with Signorini law & 6.2 & 25.6 & 248.9 & 2950.1 & 47452.8 \\
CPU time for Active set with persistent conditions & 7.2 & 33.7 & 159.2 & 1089.9 & 10492.8 \\
CPU time for Active set with improved normal compliance & 7.2 & 34.2 & 158.6 & 1101.1 & 10527.1 \\
CPU time for Active set with classical normal compliance & 7.9 & 33.8 & 159.1 & 1074.5 & 10384.2 \\
\hline
\end{tabular}
\end{center}
\caption{Results of the Active Set method and the quasi-Lagrangian method in comparison with the number of degrees of freedom (dof), the number of contact nodes (nbc) and the total CPU time (CPU) in seconds ($\Delta t=0.01\,s$, $\alpha=3$, $c_\nu=1 e^{3}$).} \label{tabb}
\end{table}
In Table \ref{tabb}, we study the convergence in CPU time of the Active Set method compared to different methods (persistent contact, classic normal compliance and the quasi-Lagrangian approach for unilateral contact), depending on the number of degrees of freedom (dof) and the number of contact points (nbc) on the boundary $\Gamma_3$.
When $\text{nbc}<64$, the quasi-Lagrangian method is slightly more efficient than the Active Set method. When we increase the number of degrees of freedom (dof), we notice that the Active Set method is much faster in terms of CPU time than the quasi-Lagrangian method; this can be explained by the fact that the Active Set method does not require the use of Lagrange multipliers and requires fewer Newton iterations; for more details we refer for instance to \cite{ABD1,ABCD2}.
\subsection{Impact of a hyperelastic ring on a foundation}
The interest of this example is to validate the Active Set method with the improved normal compliance conditions on a hyperelastodynamic problem. This non-trivial example, introduced by Laursen \cite{laur2002}, concerns an academic problem of frictional impact of a hyperelastic ring against a foundation.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=10cm]{ring_sequence.eps}
\end{center}
\caption{Sequence of the deformed hyperelastic ring before, during, and after impact.}
\label{ring_test}
\end{figure}
The details on the physical setting of the problem are given below:
\begin{align*}
&\Omega =\left\{(x_1,x_2)\in \mathbb{R}^2:\ 81\leq(x_1-100)^2
+ (x_2-100)^2 \leq 100 \right\},\\
& \partial_0 \Omega= \varnothing, \qquad \partial_g \Omega= \left\{(x_1,x_2)\in \mathbb{R}^2: \ (x_1-100)^2
+ (x_2-100)^2 = 81 \right\}, \\
& \partial_c \Omega=\left\{(x_1,x_2)\in \mathbb{R}^2: \ (x_1-100)^2
+ (x_2-100)^2 = 100 \right\}.
\end{align*}
The domain $\Omega$ represents the cross-section of a three-dimensional deformable body under the assumption of plane stress. The ring is launched with an initial velocity towards a foundation, as shown in Figure \ref{ring_test}. The foundation is given by $\left\{(x_1,x_2)\in \mathbb{R}^2:\ x_2 \leq 0 \right\}$. For the discretization, we use 1664 elastic nodes. For numerical experiments, the data are:
\begin{eqnarray}
\begin{array}{l}
\rho = 1000 \,kg/m^3, \quad T=10\,s, \quad \Delta t=0.01\,s, \nonumber \\[2mm]
{\bf u}_0=(0,0)\,m, \quad {\bf u}_1=(10,-10)\,m/s, \quad {\bf f}_0=(0,0)\,N/m ^2, \quad {\bf f}_1= (0,0)\, N/m, \nonumber \\[2mm]
c_1 = 0.5\,M\!Pa, \quad c_2 = 5.0 e^{-3}\,M\!Pa, \quad c_3 = 5.0 e^{-5}\,M\!Pa, \quad D = 100\,M\!Pa, \nonumber \\[2mm]
c_{\nu}=1000, \quad a=1.5 ,\quad b=0.5, \quad \alpha =100. \nonumber
\end{array}
\end{eqnarray}
The response of the compressible material, considered for Ogden's constitutive law \cite{ciarlet1982lois} is characterized by the following energy density:
\begin{equation}\nonumber
W({\bf F}) = c_1 (I_1 - 3) + c_2 (I_2 - 3) + d (I_3 - 1) - (c_1 + 2 c_2 + d) \ln
I_3,
\end{equation}
with $c_1 = 0.5 \,M\!Pa$, $c_2 = 0.5 \cdot 10^{-2}\, M\!Pa$ and $d = 0.35 \,M\!Pa$ and, using the right Cauchy--Green tensor defined by $\mathbf C = {\mathbf F}^T \mathbf F$, the invariants $I_1$, $I_2$ and $I_3$ are defined by
$$I_1({\bf C}) ={\rm tr}({\bf C}), \qquad I_2({\bf C}) = \frac{({\rm tr}({\bf C}))^2 - {\rm tr}({\bf C}^2)}{2}, \qquad I_3({\bf C}) = \det({\bf C}). $$
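As a small sketch (not the production code), the invariants and the energy density above can be evaluated for a given deformation gradient $\mathbf F$ as follows; at the reference state $\mathbf F=\mathbf I$ the energy vanishes, as expected:

```python
import numpy as np

# Sketch: evaluate the invariants and the Ogden-type energy density W(F)
# given above, for a deformation gradient F (coefficients in MPa).
def ogden_W(F, c1=0.5, c2=0.5e-2, d=0.35):
    C = F.T @ F                              # right Cauchy-Green tensor
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
    I3 = np.linalg.det(C)
    return (c1 * (I1 - 3) + c2 * (I2 - 3) + d * (I3 - 1)
            - (c1 + 2 * c2 + d) * np.log(I3))

# reference state: C = I gives I1 = I2 = 3, I3 = 1, hence W = 0
assert abs(ogden_W(np.eye(3))) < 1e-12
```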
\begin{figure}[!h]
\begin{center}
\includegraphics[width=11cm]{ene_ring_meths2022.jpg}
\end{center}
\caption{Discrete energy behavior of selected time integration schemes during frictionless impact ($\Delta t=0.01$, $c_\nu=1 e^{3}$).}
\label{ene_ring_1}
\end{figure}
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lc}
\hline
Methods & Maximum error on the $\delta_\nu$ \\
\hline
Quasi-Lagrangian with Signorini law & 0. \\
Scheme with Signorini and persistent (Ayyad--Barboteu) & $7.1 e^{-2}$ \\
Active set with persistent conditions &$8.9 e^{-2}$\\
Active set with classical normal compliance & $1 e^{-5}$ \\
Active set with improved normal compliance ($\alpha = 2$) & $3.4 e^{-4}$ \\
Active set with improved normal compliance ($\alpha = 3$)& $4.9 e^{-3}$ \\
\hline
\end{tabular}
\end{center}
\caption{Maximum error on normal contact displacement ($\Delta t=1 e^{-2}$, $c_\nu=1 e^{3}$).} \label{tabbb}
\end{table}
\begin{table}[!ht]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{lccccc}
\hline
nbc&32&64&128&256&512\\
\hline
dof&192&384&1792&4608&15360\\
\hline
CPU time for Quasi-Lagrangian with Signorini law & 8.59 & 18.03 & 139.69 & 428.80 & 2693.86 \\
CPU time for Active set with persistent conditions & 7.3 & 16.2 & 99.5 & 301.8 & 1252.7 \\
CPU time for Active set with improved normal compliance $\alpha=3$ & 6.9 & 17.3 & 107.0 & 274.7 & 1251.8 \\
CPU time for Active set with classical normal compliance & 6.3 & 13.4 & 87.9 & 221.9 & 1056.8 \\
\hline
\end{tabular}
}
\end{center}
\caption{Results of the Active Set method and the quasi-Lagrangian method in comparison with the number of degrees of freedom (dof), the number of contact nodes (nbc) and the total CPU time (CPU) in seconds ($\Delta t=0.01$,~$c_\nu=1 e^{3}$).} \label{tabbbb}
\end{table}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=10cm]{CPU_ring_meths2022.jpg}
\end{center}
\caption{CPU time of the Active Set method and the quasi-Lagrangian method in comparison with the number of degrees of freedom ($\Delta t=0.01$, $c_\nu=1 e^{3}$).}
\label{CPU_ring}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=18cm]{ene_ring_nc_alpha_dt100.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal compliance ($\Delta t=0.01$) as a function of $\alpha$ without friction ($c_\nu=1 e^{3}$).}
\label{ene_ring_2}
\end{figure}
In Figure \ref{ene_ring_1}, we observe the evolution of the total discrete energy of the frictionless dynamic system depending on selected time-integration schemes, i.e., the leapfrog scheme with persistent conditions, classical normal compliance, the quasi-Lagrangian method, the two-step scheme \cite{ayyad2009formulation} and improved normal compliance. We find that the leapfrog scheme and the two-step scheme (Signorini and persistent condition) conserve the energy of the system, unlike the quasi-Lagrangian method with unilateral contact ($9\%$ of energy dissipation) and the classical normal compliance ($9\%$ of energy dissipation), which generates a maximum error on the normal contact displacement of $1e^{-5} m$ ($c_\nu = 1e^4$). Due to the increment step of the leapfrog scheme, this method generates a maximum error on the normal contact displacement of $8.9e^{-2}m$ (see Table \ref{tabbb}), therefore higher than the other methods. Moreover, the improved normal compliance method is characterized by an exact conservation of energy after the impact. This method also allows one to minimize interpenetration with $\alpha=2$ ($3.4e^{-4} m$, see Table \ref{tabbb}).
In Figure \ref{ene_ring_2}, with a time step of $1e^{-2}\,s$, we present the evolution of the total discrete energy of the frictionless dynamic system with the Active Set scheme with improved normal compliance conditions according to the parameter $\alpha$. We observe an exact conservation of energy after the impact. However, the method generates some fluctuations during impact. Based on these observations, the choice $\alpha=3$ seems to perform best.
In Table \ref{tabbbb}, we study the convergence in CPU time of the Active Set--INC method compared with different methods (persistent contact, standard normal compliance and quasi-Lagrangian method for unilateral contact) depending on the number of degrees of freedom (dof) and the number of contact points (nbc) on the boundary $\Gamma_3$. For different values of nbc, the Active Set--INC method is better than the quasi-Lagrangian method in terms of CPU time; as in the elastic ball numerical example, this can be explained by the fact that the Active Set--INC method does not involve Lagrange multipliers and that it generates fewer nonlinear Newton iterations.\\
\begin{figure}[!h]
\begin{center}
\includegraphics[width=18cm]{ene_ring_nc_alpha_dt.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal compliance ($\alpha =3$) as a function of $\Delta t$ ($c_\nu=1 e^{3}$).}
\label{ene_ring_4}
\end{figure}
In Figure \ref{ene_ring_4}, we are interested in the Active Set--INC method ($\alpha=3$) as a function of the time step. For $\Delta t=1e^{-3}\ s$, we notice that the method is characterized by a better conservation of energy during the impact. On the other hand, for a time step $\Delta t$ ranging from $1e^{-1}\, s$ to $1e^{-2}\,s$, we notice that the method generates some energy fluctuations during the impact, but it conserves energy after the impact.
In Figure \ref{ene_ring_3}, we observe the behavior of the discrete energy using an Active Set scheme with normal compliance condition with friction ($\mu=0.2$). This method is characterized by a slightly dissipative behavior due to friction; this is consistent with the physically dissipative nature of the friction phenomenon. The difference in dissipative behavior comes from the model of frictional contact according to the parameter $\alpha$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=18cm]{ene_ring_nc_alpha_dt100_frot.jpg}
\end{center}
\caption{Discrete energy behavior of the Active Set scheme with Improved Normal compliance ($\Delta t =1e^{-2}\,s$) as a function of $\alpha$ with friction ($\mu = 0.2$, $c_\nu=1 e^{3}$, $c_\tau=1 e^{3}$).}
\label{ene_ring_3}
\end{figure}
\section{Conclusion and perspectives} \label{s7}
In this work, we investigated a new energy conservation method for hyperelastodynamic contact problems, from a theoretical and numerical point of view. We first introduced the mathematical framework for general frictional contact problems in large deformations, and demonstrated how the Improved Normal Compliance (INC) condition can be applied in order to obtain energy conservation properties during impact. Next, we assessed these conservation properties in the continuous case and in its space-time discrete counterpart. Then, we derived two semi-smooth Newton primal-dual algorithms to handle respectively the Standard Normal Compliance (SNC) and INC conditions. The last section presented numerical results on two classical representative academic test cases, namely the impact of an elastic ball and the impact of a hyperelastic ring on a foundation. The aim was to compare the performance of the INC with respect to the SNC and the classical methods used in the literature. It turned out that the methods implemented are at least as relevant as several other methods regarding energy conservation properties after impact, as they display physically realistic behaviors. In the second test case, even more challenging from a numerical point of view, the energy is even almost perfectly conserved during and after the impact. Also, one of the interesting and well-known advantages of the Active Set approach is its numerical efficiency. Indeed, in both test cases, this method is much more efficient (CPU time-wise, between twice and five times as fast for large problems) than the classical quasi-Lagrangian approach.
At this stage, we outline some future perspectives. As the results in 2D cases seem quite promising, an extension to 3D ought to be considered in order to assess the behavior of this approach on more complex cases. We can reasonably expect the gains with respect to the quasi-Lagrangian approach in 3D to be of an even greater order of magnitude than in 2D. Regarding an underlying practical use case, it could be interesting to investigate a potential application to granular media. Finally, this method has only been applied to elastic and hyperelastic materials so far, and provided quite successful results, which suggests considering alternative constitutive laws. Amongst them, in particular, an extension to viscosity and plasticity would make sense, as such materials involve energy dissipation phenomena.
\bibliographystyle{unsrt}
\section{Introduction}
In the present paper we discuss the stochastic limit for a quantum particle interacting with a quantum Bose field without dipole approximation. We show that in this limit some collective excitations of the field and the particle (the entangled operators, defined as the free evolution of the contributions of the field and the particle to the interaction Hamiltonian) obtain free (or quantum Boltzmann) statistics. The algebra of free creation--annihilation operators is given by the relations
$$
b_ib_j^{\dag}=\delta_{ij},
$$
annihilation operators $b_i$ do not commute and constitute a free algebra. We show that in the model under consideration some deformation of this algebra emerges. Free creation and annihilation operators were studied in free probability theory, see \cite{Voiculescu}; these operators are related to the theory of random matrices and the limiting Wigner semicircle law.
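These relations can be realized concretely on the full (free) Fock space, where $b_i^{\dag}$ prepends a letter to a word and $b_i$ strips a matching leading letter; the following minimal sketch (names are ours) verifies $b_ib_j^{\dag}=\delta_{ij}$ and the non-commutativity of the annihilators:

```python
# Minimal sketch of the full (free) Fock space: states are linear
# combinations of "words" (tuples of mode indices); the vacuum is the
# empty word.  b_dag(i) prepends the letter i; b(i) strips a leading i.
def b_dag(i, state):
    return {(i,) + w: a for w, a in state.items()}

def b(i, state):
    out = {}
    for w, a in state.items():
        if w and w[0] == i:                 # annihilate the first letter
            out[w[1:]] = out.get(w[1:], 0) + a
    return out

vac = {(): 1.0}
# b_i b_j^dag |0> = delta_ij |0>  --  the free relation quoted above
assert b(1, b_dag(1, vac)) == {(): 1.0}
assert b(1, b_dag(2, vac)) == {}
# annihilators do not commute: apply b_1 b_2 and b_2 b_1 to the word (2,1)
w = b_dag(2, b_dag(1, vac))
assert b(1, b(2, w)) == {(): 1.0}
assert b(2, b(1, w)) == {}
```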
The stochastic limit studies the dynamics of a quantum system interacting with an environment with coupling constant $\lambda$ in the van Hove--Bogoliubov rescaling $t\mapsto t/\lambda^2$ in the limit $\lambda\to0$, i.e. the effect of weak interactions in the regime of large times \cite{theBook}.
Before the stochastic limit the entangled operators constitute an algebra with ``dynamically'' $q$-deformed relations, see formulae (\ref{qdef1}), (\ref{qdef2}), (\ref{qdef3}) below; free creation and annihilation operators arise as the stochastic limit of the $q$-deformed creation and annihilation operators. The quantum field (the environment) is in some Gaussian (for example, temperature) state.
For the case when the environment field is in the Fock state (i.e. the state is given by the vacuum average) the corresponding model was considered in \cite{qdef}, see also \cite{theBook}, \cite{hotfree}. Other models with non-linear interaction and free statistics in the stochastic limit were studied in \cite{120}, \cite{129}. In the present paper we generalize the approach of \cite{qdef} to the case of a general Gaussian state. For the consideration of the temperature state we apply the free analogue of the temperature double construction, i.e. a variant of the Bogoliubov transformation which allows one to represent an arbitrary Gaussian state of the field using Fock states for a pair of auxiliary fields (see Section 4 of this paper).
The exposition of the present paper is as follows. In Section 2 the model of interaction of a particle and a Bose field without dipole approximation is described, entangled operators describing collective excitations of the field and the particle are introduced, dynamically $q$-deformed relations for these operators are described, and the example of a 4-point correlation function demonstrates the emergence of free statistics in the stochastic limit. In Section 3 correlation functions for the algebra of $q$-deformed relations for entangled operators are computed in an arbitrary (non-squeezed, mean zero) Gaussian state and the stochastic limit for these correlation functions is obtained. In Section 4 it is shown that the stochastic limit of the correlation functions of Section 3 coincides with the correlation functions for a certain algebra of free creation and annihilation operators constructed in analogy with the construction of the temperature double.
\section{Dynamical $q$-deformation}
\noindent{\bf Formulation of the model.}\quad
Let us consider the Hamiltonian of a quantum particle interacting with a quantum Bose field without dipole approximation
$$
H=H_0+\lambda H_I=\int\omega(k)a^{\dag}(k)a(k)dk+{\frac{1}{2}}\,p^2+\lambda H_I,
$$
$$
H_I=p\cdot{\cal A}(q)+{\cal A}(q)\cdot p=\int d^3k\left(g(k) p\cdot e^{ikq} a^{\dag}(k)+ \overline {g(k)} p\cdot e^{-ikq} a(k)\right) + h.c.,
$$
$$
q=(q_{1}, q_{2}, q_{3}),\quad p=(p_{1}, p_{2}, p_{3}),\quad [q_{h}, p_{k} ] = i \delta_{hk},
$$
$$
a(k)=(a_{1}(k), a_{2}(k), a_{3}(k)),\quad a^{\dag}(k)=
(a^{\dag}_{1}(k), \ldots ,a^{\dag}_3(k)),\quad [a_{j}(k),a_{h}^{\dag}(k')]=\delta_{jh}\delta(k-k').
$$
Here $q$ and $p$ are coordinate and momentum of the particle, the field $a(k)$ possesses dispersion $\omega(k)$, function $g(k)$ is a formfactor of interaction of the field and the particle.
In the procedure of the stochastic limit one computes an approximation for the perturbation series of the evolution operator in the interaction picture. We study the free evolution of the interaction Hamiltonian $H_I(t)=e^{it H_0}H_Ie^{-itH_0}$ under the van Hove--Bogoliubov rescaling by the square of the coupling constant $\lambda$, and then take the weak coupling limit $\lambda\to 0$:
$$
{\frac{1}{\lambda}}\,H_I\left({\frac{t}{\lambda^2}}\right)=
{\frac{1}{\lambda}}A(t/\lambda^2) + h.c. =
\int d^3k p(\overline g(k) a_\lambda(t,k)+ g(k)a^{\dag}_\lambda(t,k)) + h.c.
$$
We obtain quantities which describe the simultaneous free evolution of the field and the system (the entangled operators)
\begin{equation}\label{entangled1}
a_\lambda(t,k)={\frac{1}{\lambda}}\,e^{i{\frac{t}{\lambda^2}}\,H_0}e^{-ikq}a(k)e^{-i{\frac{t}{\lambda^2}}\,H_0}
=\frac{1}{\lambda}e^{-i{\frac{t}{\lambda^2}}\left[\omega(k)+{\frac{1}{2}}\,k^2+kp\right]}e^{-ikq}a(k),
\end{equation}
\begin{equation}\label{entangled2}
a^{\dag}_\lambda(t,k)=\frac{1}{\lambda}e^{i{\frac{t}{\lambda^2}}\left[\omega(k)-{\frac{1}{2}}\,k^2+kp\right]}e^{ikq}a^{\dag}(k).
\end{equation}
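For the reader's convenience, let us sketch the computation behind (\ref{entangled1}); only the canonical commutation relations are used. Since $e^{itp^2/2}\,q\,e^{-itp^2/2}=q+tp$, one has
$$
e^{i\frac{t}{\lambda^2}\frac{p^2}{2}}\,e^{-ikq}\,e^{-i\frac{t}{\lambda^2}\frac{p^2}{2}}
=e^{-ik\left(q+\frac{t}{\lambda^2}\,p\right)}
=e^{-i\frac{t}{\lambda^2}\left(kp+\frac{1}{2}k^2\right)}\,e^{-ikq},
$$
where the last step is the Baker--Campbell--Hausdorff formula with the central commutator $\left[-i\frac{t}{\lambda^2}\,kp,\,-ikq\right]=i\frac{t}{\lambda^2}\,k^2$. Combining this with the free evolution of the field part, $e^{it\omega(k)a^{\dag}(k)a(k)}a(k)e^{-it\omega(k)a^{\dag}(k)a(k)}=e^{-it\omega(k)}a(k)$, gives (\ref{entangled1}); taking the adjoint and commuting $e^{ikq}$ through functions of $p$ gives (\ref{entangled2}).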
\medskip
\noindent{\bf Dynamical $q$-deformation.}\quad
There is an important observation: entangled operators satisfy the algebra of dynamically $q$-deformed relations
\begin{equation}\label{qdef1}
a_\lambda(t,k)a^{\dag}_\lambda(t',k')=a^{\dag}_\lambda(t',k')a_\lambda(t,k)
q_\lambda(t-t',kk')+
{\frac{1}{\lambda^2}}\,q_\lambda\left(t-t',\omega(k)+{\frac{1}{2}}\,k^2+kp\right)\delta(k-k'),
\end{equation}
\begin{equation}\label{qdef2}
a_\lambda(t,k)p=(p+k)a_\lambda(t,k),
\end{equation}
\begin{equation}\label{qdef3}
a_\lambda(t,k)a_\lambda(t',k')=a_\lambda(t',k')a_\lambda(t,k)q_\lambda^{-1}(t-t',kk'),
\end{equation}
where the function $q$ is the oscillating exponent
$$
q_\lambda(t-t',x)=e^{-i{\frac{t-t'}{\lambda^2}}\,x}.
$$
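As a quick check, relation (\ref{qdef2}) is immediate from (\ref{entangled1}): $a(k)$ commutes with $p$, while $e^{-ikq}f(p)=f(p+k)\,e^{-ikq}$ for any function $f$, so that
$$
a_\lambda(t,k)\,p=\frac{1}{\lambda}\,e^{-i\frac{t}{\lambda^2}\left[\omega(k)+\frac{1}{2}k^2+kp\right]}e^{-ikq}\,p\,a(k)
=(p+k)\,a_\lambda(t,k),
$$
since the oscillating prefactor, being a function of $p$, commutes with $p+k$.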
Let us call $t-t'$ the time argument of the oscillating exponent and $x$ the energy argument. Let us consider the problem of constructing the master--field, i.e. the stochastic limit of the entangled operator
$$
b(t,k)=\lim_{\lambda\to0}a_\lambda(t,k)
$$
and the corresponding commutation relations.
We would like to use the limit for the oscillation exponent (in the space of tempered distributions)
\begin{equation}\label{oscillation}
\lim_{\lambda\to0}q_\lambda(t,x)=0,
\qquad\lim_{\lambda\to0}{\frac{1}{\lambda^2}}\,
q_\lambda(t,x)=2\pi\delta(t)\delta(x).
\end{equation}
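The distributional limits (\ref{oscillation}) can be illustrated numerically. Pairing $\frac{1}{\lambda^2}\,q_\lambda(t,x)$ with a Gaussian test function $f(t,x)=e^{-(t^2+x^2)/2}$ and substituting $u=t/\lambda^2$ turns the smeared quantity into $\int\!\!\int e^{-iux}f(\lambda^2 u,x)\,du\,dx$, which should approach $2\pi f(0,0)=2\pi$ as $\lambda\to0$. A rough quadrature sketch (cut-offs and grid sizes are arbitrary choices of ours):

```python
import numpy as np

def smeared(lam, cutoff=20.0, n=1201):
    """(1/lam^2) * integral of q_lam(t,x) f(t,x) dt dx for f = exp(-(t^2+x^2)/2),
    written in the rescaled variable u = t/lam^2."""
    u = np.linspace(-cutoff, cutoff, n)
    x = np.linspace(-cutoff, cutoff, n)
    du, dx = u[1] - u[0], x[1] - x[0]
    U, X = np.meshgrid(u, x, indexing="ij")
    f = np.exp(-((lam ** 2 * U) ** 2 + X ** 2) / 2.0)  # test function
    return float((np.exp(-1j * U * X) * f).sum().real * du * dx)

for lam in (0.5, 0.3, 0.1):
    print(lam, smeared(lam))   # approaches 2*pi ~ 6.2832 as lam -> 0
```

For this Gaussian test function the exact value is $2\pi/\sqrt{1+\lambda^4}$, so the convergence is easy to monitor.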
It is natural to expect that the commutation relations for the master--field are the limits of the dynamically $q$-deformed relations (\ref{qdef1}), (\ref{qdef2})
\begin{equation}\label{free1}
b(t,k)b^{\dag}(t',k')=2\pi\delta(t-t')\delta\left(\omega(k)+{\frac{1}{2}}\,k^2+kp\right)\delta(k-k'),
\end{equation}
\begin{equation}\label{free2}
b(t,k)p=(p+k)b(t,k),
\end{equation}
while the third relation (\ref{qdef3}) generates the free statistics, i.e. it makes the annihilation operators non-commuting in the limit (contrary to the Bose case), and the creation operators form a free algebra. We will check that the master--field indeed has free statistics and satisfies (\ref{free1}), (\ref{free2}) when the field $a(k)$ is in the Fock state. For more general Gaussian states of the field (in particular the temperature state) we will obtain a generalization of (\ref{free1}), (\ref{free2}): a quantum algebra with free statistics and relations which depend on the temperature. The master--field is delta--correlated with respect to time, i.e. it is a quantum white noise (with free statistics).
\medskip
\noindent{\bf Example: Free statistics for 4-point correlator.}\quad
Let us discuss the emergence of free statistics in the limit $\lambda\to0$. The operators $a_\lambda(t,k)$ satisfy relations (\ref{qdef3}). One could suggest that in the limit $\lambda\to0$ these relations tend to $b(t_1,k_1)b(t_2,k_2)=0$, but this is not the case: the relation actually leads to free statistics (i.e. the annihilation operators do not commute in the limit). To prove this, let us consider the 4-point correlator (in the Fock state)
\begin{equation}\label{4-point}
\langle
a_\lambda(t_1,k_1)a_\lambda(t_2,k_2)
a^{\dag}_\lambda(t'_2,k'_2)
a^{\dag}_\lambda(t'_1,k'_1)\rangle=
\end{equation}
$$
=\frac{1}{\lambda^2}\,q_{\lambda}\left(t_2-t'_2,
\omega(k_2)+\frac{1}{2}k_2^2+ k_2(p+k_1)\right)\delta(k_2-k'_2) $$ $$
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_1-t'_1,
\omega(k_1)+\frac{1}{2}k_1^2+ k_1p\right)\delta(k_1-k'_1) q_{\lambda}(t_2-t'_2,k_2 k'_2)
+
$$
$$
+
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_1-t'_2,
\omega(k_1)+\frac{1}{2}k_1^2+ k_1 p\right)\delta(k_1-k'_2) $$ $$
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_2-t'_1,
\omega(k_2)+\frac{1}{2}k_2^2+ k_2 p\right)\delta(k_2-k'_1) q_{\lambda}(t_2-t'_2,k_2 k'_2).
$$
The first contribution contains the product of the terms $\frac{1}{\lambda^2}q_{\lambda}(t_1-t'_1,\cdot)$, $\frac{1}{\lambda^2}\,q_{\lambda}(t_2-t'_2,\cdot)$ (which tend to $\delta$-functions) and $q_{\lambda}(t_2-t'_2,\cdot)$ (which tends to zero); however, the vanishing term has the same time argument as one of the oscillating exponents that is non-zero in the limit $\lambda\to 0$. Hence the product of these two terms survives in the limit (the limit of a product is not equal to the product of the limits).
In the second contribution all oscillating exponents have different time arguments, so the oscillating exponent tending to zero cannot be canceled, and the limit $\lambda\to 0$ of the second contribution equals zero. The first contribution corresponds to a non-crossing, or rainbow, diagram; the second contribution corresponds to a crossing diagram. One can see that in the stochastic limit only non-crossing diagrams survive.
Let us consider the 4-point correlator with the first two operators permuted (using the commutation relation (\ref{qdef3}))
\begin{equation}\label{4-point'}
\langle
a_\lambda(t_2,k_2)a_\lambda(t_1,k_1)
a^{\dag}_\lambda(t'_2,k'_2)a^{\dag}_\lambda(t'_1,k'_1)\rangle
q_\lambda^{-1}(t_1-t_2,k_1k_2)=
\end{equation}
$$
=q_\lambda^{-1}(t_1-t_2,k_1k_2)\biggl(
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_1-t'_2,
\omega(k_1)+\frac{1}{2}k_1^2+ k_1(p+k_2)\right)\delta(k_1-k'_2)
$$
$$
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_2-t'_1,
\omega(k_2)+\frac{1}{2}k_2^2+ k_2 p\right)\delta(k_2-k'_1) q_{\lambda}(t_1-t'_2,k_1 k'_2)
+
$$
$$
+
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_2-t'_2,
\omega(k_2)+\frac{1}{2}k_2^2+ k_2 p\right)\delta(k_2-k'_2)
$$
$$
\frac{1}{\lambda^2}\,q_{\lambda}\left(t_1-t'_1,
\omega(k_1)+\frac{1}{2}k_1^2+ k_1 p\right)\delta(k_1-k'_1) q_{\lambda}(t_1-t'_2,k_1 k'_2)
\biggr).
$$
In this case, due to the presence of the oscillating exponent $q_\lambda^{-1}(t_1-t_2,k_1k_2)$, the first contribution tends to zero. In the second contribution two oscillating exponents (which tend to zero) cancel and give $q_\lambda(t_2-t'_2,k_1k_2)$; the time argument in this term coincides with the argument of one of the oscillating exponents that does not tend to zero. Hence application of relation (\ref{qdef3}) implies that in the limit of the 4-point correlator the crossing diagram survives and the non-crossing one vanishes. But these diagrams correspond to the diagrams above in (\ref{4-point}): for (\ref{4-point}) and (\ref{4-point'}) the crossing and non-crossing diagrams exchange roles, so if one permutes the annihilation operators using relation (\ref{qdef3}), then in the limit the operators permute back, restoring the initial order. Therefore the commutation relation (\ref{qdef3}) becomes trivial in the limit (the statistics becomes free).
\section{Computation of $N$-point correlator}
In the present section we compute the $N$-point correlator for the algebra of entangled operators (\ref{qdef1}), (\ref{qdef2}), (\ref{qdef3})
$$
\langle a^{\varepsilon_1}_\lambda(t_1,k_1)\dots a^{\varepsilon_N}_\lambda(t_N,k_N)\rangle.
$$
Here $a^{\varepsilon}$ denotes $a$ or $a^{\dag}$ ($\varepsilon=-1$ for $a$ and $\varepsilon=1$ for $a^{\dag}$), and the brackets $\langle \cdot \rangle$ denote a Gaussian (in particular temperature) state.
\medskip
\noindent{\bf Gaussian state} (non-squeezed and mean-zero) for the algebra of canonical commutation relations is described by the Wick theorem (or its variation by Bloch and de Dominicis). In this state the average of a monomial in creations and annihilations can be non-zero only if the number $n$ of creations equals the number of annihilations. The non-zero pairings of creations and annihilations (averages of products of two operators) are equal to
$$
\langle a^{\dag}(k)a(k')\rangle=N(k)\delta(k-k'),\qquad \langle a(k)a^{\dag}(k')\rangle=(N(k)+1)\delta(k-k').
$$
In particular for the temperature state
$$
N(k)=\frac{1}{e^{\beta\omega(k)}-1},
$$
where $\beta$ is the inverse temperature and $\omega(k)$ is the dispersion of the field with wave vector $k$ (the Planck constant is taken equal to one).
For a monomial of length $N=2n$ the correlation function
$$
\langle a^{\varepsilon_1}(k_1)\dots a^{\varepsilon_N}(k_N) \rangle,\quad N=2n
$$
is equal to the sum, over partitions of the monomial into pairs of creations and annihilations, of the products of pairings, and has the following form.
Let $\varepsilon=(\varepsilon_1,\dots,\varepsilon_{2n})$ be the sequence of creations and annihilations in the monomial, and let $\sigma(\varepsilon)=\{(m_j,m'_j)\}$, $j=1,\dots,n$, be a partition of the monomial into pairs of creations and annihilations ($m_{j}$ is the position of the creation and $m'_{j}$ is the position of the annihilation in the pair; all positions in the partition are different). Let us also introduce the ordering symbol of a pair, $\delta_j={\rm sign}\left(m_{j}-m'_{j}\right)$ (i.e. $\delta_j=1$ if $a^\dag$ stands to the right of $a$ in the $j$-th pair and $\delta_j=-1$ in the opposite case). To any partition of a monomial one can put in correspondence a diagram in which the operators are vertices on a segment, ordered according to their positions in the monomial (to the left in the monomial means to the left on the segment), and the pairs are shown by edges connecting vertices; let the segment belong to the abscissa axis of the coordinate plane and the edges lie in the upper half-plane. A diagram is called non-crossing if its edges can be depicted without crossings. Any vertex belongs to exactly one edge $(m_j,m'_j)$. Let us enumerate the edges of the diagram according to the order of their left vertices (i.e. an edge whose left vertex lies further to the left gets a smaller number).
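The combinatorics of these diagrams is easy to make concrete. The following sketch (function names are ours) enumerates all partitions of $2n$ vertices into pairs and tests the non-crossing condition; for $2n$ vertices there are $(2n-1)!!$ pairings in total, of which the Catalan number $C_n$ are non-crossing:

```python
from itertools import combinations

def pair_partitions(vertices):
    """Yield all partitions of an even-length tuple of vertices into pairs."""
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for i in range(len(rest)):
        for tail in pair_partitions(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def is_noncrossing(pairs):
    """Edges (a,b), (c,d) with a<b, c<d cross iff their intervals interleave."""
    edges = [tuple(sorted(p)) for p in pairs]
    for (a, b), (c, d) in combinations(edges, 2):
        if a < c < b < d or c < a < d < b:
            return False
    return True

n = 3
diagrams = list(pair_partitions(tuple(range(2 * n))))
noncrossing = [d for d in diagrams if is_noncrossing(d)]
print(len(diagrams), len(noncrossing))  # 15 5, i.e. (2n-1)!! and C_3
```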
The correlation function is equal to a sum over partitions $\sigma$ (i.e. over diagrams)
$$
\langle a^{\varepsilon_1}(k_1)\dots a^{\varepsilon_N}(k_N) \rangle=\sum_{\sigma(\varepsilon)}\prod_{j=1}^n M(k_{m_j})\delta(k_{m_j}-k_{m'_j}),
$$
\begin{equation}\label{M(k)}
M(k_{m_j})=N(k_{m_j})+\frac{1}{2}(\delta_j+1),\quad \delta_j={\rm sign}\left(m_{j}-m'_{j}\right).
\end{equation}
In particular, for the Fock state $N(k)=0$, and $m_j>m'_{j}$ for all non-zero contributions to the correlation function. Due to the presence of delta-functions, the wave vectors $k_{m_j}$, $k_{m'_j}$ of the creation and annihilation operators in a pair coincide.
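For discrete mode labels this Wick rule can be written as a short recursion: contract the leftmost operator with each admissible partner; an $a$ standing to the left of its partner $a^{\dag}$ contributes $N(k)+1$ (the case $\delta_j=1$), the opposite order contributes $N(k)$. A sketch under these conventions (function names are ours):

```python
import math

def wick(eps, ks, N):
    """Gaussian-state correlator <a^{eps_1}(k_1) ... a^{eps_N}(k_N)> for
    discrete modes; eps[i] = -1 for a, +1 for a^dag; N(k) is the occupation."""
    if not eps:
        return 1.0
    total = 0.0
    for j in range(1, len(eps)):
        # a pair must contain one creation and one annihilation of equal mode
        if eps[0] == -eps[j] and ks[0] == ks[j]:
            # eps[0] == -1: annihilation left of creation, delta_j = +1 -> N+1
            factor = N(ks[0]) + 1.0 if eps[0] == -1 else N(ks[0])
            total += factor * wick(eps[1:j] + eps[j + 1:], ks[1:j] + ks[j + 1:], N)
    return total

beta, omega = 1.0, 2.0
N = lambda k: 1.0 / (math.exp(beta * omega) - 1.0)  # Planck occupation, one mode
print(wick((-1, 1), ("k", "k"), N))         # N + 1
print(wick((1, -1), ("k", "k"), N))         # N
print(wick((-1, -1, 1, 1), ("k",) * 4, N))  # 2 (N+1)^2
```

The last value agrees with the thermal average of $a a a^{\dag}a^{\dag}=(\hat N+1)(\hat N+2)$, which equals $2(N+1)^2$ for the Bose distribution.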
\medskip
\noindent{\bf $N$-point correlator for entangled operators} (\ref{entangled1}), (\ref{entangled2}) has the form
$$
\langle a^{\varepsilon_1}_\lambda(t_1,k_1)\dots a^{\varepsilon_N}_\lambda(t_N,k_N)\rangle=
\frac{1}{\lambda^{2n}}\prod_{s=1}^{2n}\left[e^{-i{\frac{t_s}{\lambda^2}}\left[\omega(k_s)+{\frac{1}{2}}\,k_s^2+k_sp\right]}e^{-ik_sq}\right]^{-\varepsilon_s} \sum_{\sigma(\varepsilon)}\prod_{j=1}^n M(k_{m_j})\delta(k_{m_j}-k_{m'_j}),
$$
which contains the product of delta-functions corresponding to the edges of the diagram (pairs of creations and annihilations from the partition of the monomial) and the product of exponents corresponding to the vertices of the diagram. This product contains non-commuting operators and is ordered from left to right.
Let us also enumerate the time arguments of the entangled operators as $t_{m_j}$, $t_{m'_j}$, and analogously the indices $\varepsilon_{m_j}$, $\varepsilon_{m'_j}$. Both quadratic correlation functions (pairings of vertices of the diagram) $\langle a_\lambda(t_{m'_j},k_{m'_j})a^{\dag}_\lambda(t_{m_j},k_{m_j})\rangle$, $\langle a^{\dag}_\lambda(t_{m_j},k_{m_j})a_\lambda(t_{m'_j},k_{m'_j})\rangle$ corresponding to the edges of the diagram can be put in the form
\begin{equation}\label{pairing}
\frac{1}{\lambda^{2}}e^{i{\frac{\left(t_{m_{j}}-t_{m'_{j}}\right)}{\lambda^2}}\left[\omega(k_{m_{j}})+\delta_j\frac{1}{2}k_{m_{j}}^2+pk_{m_{j}}\right]}
M(k_{m_{j}})\delta(k_{m_j}-k_{m'_j}).
\end{equation}
Let us express the $N$-point correlator using the above pairings.
The correlation function is a sum over diagrams (partitions of the monomial). We would like to express the contribution to the correlation function corresponding to a diagram as a product over edges of pairings of vertices; this product should also contain multipliers arising from permutations of the terms in the products of non-commuting exponents in the above formula. To put the contribution in this form one has to move the multipliers $e^{i \delta_l k_l q}$ in the product (corresponding to vertices connected by an edge) from right to left (i.e. move the right end of the edge to the left end). Since an edge carries a delta-function of the wave vectors, the multipliers $e^{\pm i kq}$ for the two ends of the edge cancel. We obtain a product of commuting operators (which contain functions of the momentum $p$).
Let us introduce the notations $a(l)=\min(m_{l},m'_{l})$, $b(l)=\max(m_{l},m'_{l})$ and perform the procedure of moving multipliers described above for the first pair $(m_{1},m'_{1})$. We obtain the pairing (\ref{pairing}) for the first pair, and the commutation of multipliers gives (for $l=1$)
$$
\prod_{s=a(l)+1}^{b(l)-1}\left[e^{-i{\frac{t_s}{\lambda^2}}\left[\delta_l k_s k_{m_l}\right]}\right]^{-\varepsilon_s} = \prod_{s=a(l)+1}^{b(l)-1}e^{i{\frac{t_s}{\lambda^2}}\varepsilon_s\delta_l k_s k_{m_l}}.
$$
Repeating this procedure for all edges of the diagram, we obtain the following expression for the contribution of a diagram to the correlation function:
$$
\prod_{j=1}^{n}\frac{1}{\lambda^{2}}e^{i{\frac{\left(t_{m_{j}}-t_{m'_{j}}\right)}{\lambda^2}}\left[\omega(k_{m_{j}})+\delta_j\frac{1}{2}k_{m_{j}}^2+pk_{m_{j}}\right]}
M(k_{m_{j}})\delta(k_{m_j}-k_{m'_j})
\prod_{s=a(j)+1}^{b(j)-1}e^{i{\frac{t_s}{\lambda^2}}\varepsilon_s\delta_j k_s k_{m_j}}.
$$
Note that if some $l$-th edge lies inside the $j$-th edge, then the product over $s$ contains two contributions from the $l$-th edge (corresponding to the two ends of this edge); moreover, the wave vectors for the ends of the edge coincide. Hence the product of these terms is equal to
$$
e^{i{\frac{t_{a(l)}-t_{b(l)}}{\lambda^2}}\varepsilon_{a(l)}\delta_j k_{m_l} k_{m_j}} = e^{i{\frac{t_{m_l}-t_{m'_l}}{\lambda^2}}\delta_j k_{m_l} k_{m_j}},
$$
therefore this contribution depends on a time argument which coincides with the time argument of some edge of the diagram. If some edges cross, we obtain time arguments which do not correspond to any edge of the diagram; in the stochastic limit $\lambda\to0$ the contributions of such diagrams tend to zero. The product of the contributions of the edges that cross the $j$-th edge takes the form
$$
\left[\prod_{l:a(l)<a(j)<b(l)<b(j)} e^{i{\frac{t_{b(l)}}{\lambda^2}}\varepsilon_{b(l)}\delta_j k_{b(l)} k_{m_j}}\right]
\left[\prod_{l:a(j)<a(l)<b(j)<b(l)} e^{i{\frac{t_{a(l)}}{\lambda^2}}\varepsilon_{a(l)}\delta_j k_{a(l)} k_{m_j}}\right].
$$
Summing up the above discussion and using (\ref{oscillation}), we get:
\medskip
\noindent{\bf Theorem 1}\quad {\sl
$N$-point correlation function of entangled operators has the form
\begin{equation}\label{N-point}
\langle a^{\varepsilon_1}_\lambda(t_1,k_1)\dots a^{\varepsilon_N}_\lambda(t_N,k_N)\rangle= $$ $$=
\sum_{\sigma(\varepsilon)}\prod_{j=1}^n
\frac{1}{\lambda^{2}}e^{i{\frac{\left(t_{m_{j}}-t_{m'_{j}}\right)}{\lambda^2}}\left[\omega(k_{m_{j}})+\delta_j\frac{1}{2}k_{m_{j}}^2+pk_{m_{j}}
+\sum_{l:a(l)<a(j)<b(j)<b(l)} \delta_l k_{m_l}k_{m_{j}}\right]}
$$
$$
\left[\prod_{l:a(l)<a(j)<b(l)<b(j)} e^{i{\frac{t_{b(l)}}{\lambda^2}}\varepsilon_{b(l)}\delta_j k_{b(l)} k_{m_j}}\right]
\left[\prod_{l:a(j)<a(l)<b(j)<b(l)} e^{i{\frac{t_{a(l)}}{\lambda^2}}\varepsilon_{a(l)}\delta_j k_{a(l)} k_{m_j}}\right]
$$
$$
M(k_{m_j})\delta(k_{m_j}-k_{m'_j}).
\end{equation}
Here $a(l)=\min(m_{l},m'_{l})$, $b(l)=\max(m_{l},m'_{l})$, $M(k_{m_j})$ is given by (\ref{M(k)}).
The stochastic limit of this correlator takes the form
\begin{equation}\label{N'-point}
\lim_{\lambda\to 0}\langle a^{\varepsilon_1}_\lambda(t_1,k_1)\dots a^{\varepsilon_N}_\lambda(t_N,k_N)\rangle= $$ $$=
\sum_{\sigma'(\varepsilon)}\prod_{j=1}^n 2\pi
\delta \left(t_{m_{j}}-t_{m'_{j}}\right)\delta\left[\omega(k_{m_{j}})+\delta_j\frac{1}{2}k_{m_{j}}^2+pk_{m_{j}}
+\sum_{l:a(l)<a(j)<b(j)<b(l)} \delta_l k_{m_l}k_{m_{j}}\right]
$$ $$
M(k_{m_j}) \delta(k_{m_j}-k_{m'_j}),
\end{equation}
where notation $\sigma'(\varepsilon)$ means that the summation is restricted to non-crossing diagrams.
}
\section{Algebra of the master--field}
\noindent
{\bf Construction of the temperature double} (a variant of the Bogoliubov transformation) allows one to express an arbitrary Gaussian state using the Fock state. Consider, for the bosonic algebra, the transformation which expresses the annihilation operator $a(k)$ as a linear combination of the annihilation and creation operators $a_1(k)$ and $a_2^{\dag}(k)$ of two independent Bose fields (here $u$, $v$ are numerical parameters)
$$
a(k)\mapsto u a_{1}(k)+ v a_2^{\dag}(k),
$$
$$
a^{\dag}(k)\mapsto u^* a^{\dag}_{1}(k)+ v^* a_2(k).
$$
The commutation relation is preserved if
$$
[u a_{1}(k)+ v a_2^{\dag}(k),u^* a^{\dag}_{1}(k)+ v^* a_2(k)]=|u|^2-|v|^2=1.
$$
Let the fields $a_1(k)$, $a_2(k)$ be in the Fock state. Then
$$
\langle a^{\dag}(k)a(k')\rangle=|v|^2\delta(k-k'),
$$
$$
\langle a(k)a(k')\rangle=\langle a^{\dag}(k)a^{\dag}(k')\rangle=0.
$$
Hence under this transformation the Fock state becomes a Gaussian non-Fock state (non-squeezed, mean-zero). To prove this statement it is sufficient to show that the correlation functions of the field $a(k)$ can be computed using the Wick theorem, which is guaranteed by the Wick theorem for the fields $a_1(k)$, $a_2(k)$.
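These pairings can be checked directly on a truncated Fock space: build $a_1=A\otimes 1$, $a_2=1\otimes A$ with $A$ the truncated one-mode annihilation matrix, set $a = u\,a_1 + v\,a_2^{\dag}$ with $|u|^2-|v|^2=1$, and evaluate expectations in the tensor-product vacuum. A small numpy sketch (the truncation dimension is an arbitrary choice of ours; for these vacuum pairings the truncation introduces no error):

```python
import numpy as np

d = 5                                       # Fock-space truncation per mode
A = np.diag(np.sqrt(np.arange(1, d)), 1)    # one-mode annihilation matrix
I = np.eye(d)
a1 = np.kron(A, I)                          # a_1 = A (x) 1
a2 = np.kron(I, A)                          # a_2 = 1 (x) A

theta = 0.4
u, v = np.cosh(theta), np.sinh(theta)       # |u|^2 - |v|^2 = 1
a = u * a1 + v * a2.conj().T                # a = u a_1 + v a_2^dag

vac = np.zeros(d * d); vac[0] = 1.0         # tensor-product vacuum

adag_a = vac @ (a.conj().T @ a) @ vac       # <a^dag a>  = |v|^2
a_adag = vac @ (a @ a.conj().T) @ vac       # <a a^dag>  = |u|^2 = 1 + |v|^2
aa     = vac @ (a @ a) @ vac                # <a a>      = 0
print(adag_a, a_adag - adag_a, aa)          # |v|^2, then u^2 - v^2 = 1, then 0
```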
The next theorem allows one to represent the stochastic limits of the correlation functions computed in Theorem 1 of the previous section using an analogue of the temperature double construction for the algebra of the master--field (quantum noise) with free statistics. Contrary to the bosonic case, for the free temperature double not only the state changes but also the commutation relations for the master--field.
\medskip
\noindent{\bf Theorem 2}\quad {\sl
The stochastic limit of the $N$-point correlation function for entangled operators is equal to the correlator for the master--field algebra
$$
\lim_{\lambda\to 0}\langle a^{\varepsilon_1}_\lambda(t_1,k_1)\dots a^{\varepsilon_N}_\lambda(t_N,k_N)\rangle=\langle b^{\varepsilon_1}(t_1,k_1)\dots b^{\varepsilon_N}(t_N,k_N)\rangle,
$$
where the master--field possesses relations with free statistics and is given by the sum of two contributions
\begin{equation}\label{limit0}
b(t,k)=b_1(t,k)+b_2^{\dag}(t,k),
\end{equation}
which are independent in the free sense, $b_1b_2^{\dag}=b_2b_1^{\dag}=0$, and satisfy the relations
\begin{equation}\label{limit1}
b_1(t,k)b_1^{\dag}(t',k')=2\pi\delta(t-t')\delta\left(\omega(k)+{\frac{1}{2}}\,k^2+kp\right)(N(k)+1)\delta(k-k'),
\end{equation}
\begin{equation}\label{limit2}
b_2(t,k)b_2^{\dag}(t',k')=2\pi\delta(t-t')\delta\left(\omega(k)-{\frac{1}{2}}\,k^2+kp\right)N(k)\delta(k-k'),
\end{equation}
\begin{equation}\label{limit3}
b_1(t,k)p=(p+k)b_1(t,k),
\end{equation}
\begin{equation}\label{limit4}
b_2(t,k)p=(p-k)b_2(t,k).
\end{equation}
Correlation functions for the master--field are given by the Fock state for the algebra of free creations and annihilations with the generators $b_1$, $b_1^{\dag}$, $b_2$, $b_2^{\dag}$.
}
\medskip
We have to prove the deformed variation of the Wick theorem for the free statistics. The correlation function $\langle b^{\varepsilon_1}(t_1,k_1)\dots b^{\varepsilon_N}(t_N,k_N)\rangle$, $N=2n$, takes the form of a sum over all non-crossing diagrams (which follows from the free statistics). Edges in the diagrams correspond to the pairings $b_1b_1^{\dag}$, $b_2b_2^{\dag}$ (any edge $bb^{\dag}$ corresponds to the pairing $b_1b_1^{\dag}$ and any edge $b^{\dag}b$ corresponds to the pairing $b_2b_2^{\dag}$). Formulae (\ref{limit1}), (\ref{limit2}) imply that any pairing of this form generates a term in the correlation function (\ref{N'-point}) (not taking into account the summation over $l$). The contribution inside the delta-function which contains the summation over $l$ arises from moving the pairing through the edges in the process of ordering the operator delta-functions, taking into account relations (\ref{limit3}), (\ref{limit4}) (by the same arguments as in the proof of Theorem 1, which generated similar contributions in the correlation function (\ref{N'-point})).
\medskip
\noindent{\bf Acknowledgments.} This work is supported by the Russian Science Foundation under grant 19--11--00320.